Overview
- The Bulk Loader is a process used to add many new addresses to the EAS at one time.
- The Bulk Loader process is made up of several stages, outlined below in Summary and Details.
  - Stages 1 - 3 are run only once per Bulk Loader process.
  - Stages 4 - 7 are run one or more times, in batches.
Summary of Bulk Loader Stages
Stage Number | Stage | Category | Summary | Environment | Iterations | Estimated Person Time | Estimated Computer Time |
---|---|---|---|---|---|---|---|
1 | Import and parse reference dataset (Optional) | Parsing | Cross check each source address against a reference dataset. Addresses found in the reference move on to the next step; addresses not found are set aside in an exclusion set for later review. | | Once per Bulk Loader process | 1 hour | 10 minutes |
2 | Import, parse and filter source dataset | Parsing | Import the dataset destined for the EAS. Parse and filter the set. | | Once per Bulk Loader process | 90 minutes | 15 minutes |
3 | Geocode and filter | Geocoding | Geocode the set and filter further based on the geocoder score and status. | ArcMap | Once per Bulk Loader process | 1 hour | 5 minutes |
4 | Export full set (single batch) or subset (multiple batches) | Geocoding | For large datasets, create one of many subsets that will be run through the Bulk Loader in multiple batches. | ArcMap | One or more batches for each Bulk Loader process | 30 minutes per batch | 5 minutes per batch |
5 | Bulk Load batch (full set or subset) | Bulk Loading | Run the entire set or each subset batch through the Bulk Loader. | | One or more batches for each Bulk Loader process | 1 hour per batch | 5 minutes per batch |
6 | Extract results | Bulk Loading | Extract and archive the list of addresses added to the EAS, the unique EAS 'change request id' associated with this batch, and the addresses rejected by the Bulk Loader in this batch. | PostgreSQL / pgAdmin | One or more batches for each Bulk Loader process | 1 hour per batch | 5 minutes per batch |
7 | Cleanup and Restoration | Bulk Loading | Clean up the database, restore services and, in the event of a failure, restore from backup. | PostgreSQL / pgAdmin | One or more batches for each Bulk Loader process | 1 hour per batch | 5 minutes per batch |
Important considerations when running the Bulk Loader
Downstream implications
Be aware of the downstream implications of running the Bulk Loader. The Bulk Loader adds new addresses to the EAS. This has a ripple effect of populating multiple databases.
Some of the downstream databases affected by running the Bulk Loader:
- EAS Database (master/production) - targeted directly by the Bulk Loader
- EAS Database (slave/replication)
- Other internal and external business system databases that are updated in near-real-time
Backup recommended
It is highly recommended that a backup of the database be made prior to running the Bulk Load step in Stage 5.
If a problem is noticed immediately after running the Bulk Loader, it may be possible to restore the database from backup.
To facilitate an immediate backout, it is recommended that the Bulk Loader process be run after hours, with access to the EAS temporarily suspended. Otherwise, backtracking may become difficult if not impossible.
Treat like a release
When Bulk Loading in a production environment, treat it like a software release by following protocols to stop and start relevant services as outlined in the steps below.
Pick the right time
When releasing in the production environment pick a time outside core work hours that does not conflict with any cron jobs running on the production server.
Details for each Bulk Loader Stage
Stage 1 - Import and parse reference dataset (Optional)
This optional stage is run once per Bulk Loader process. This stage can be skipped if the reference dataset is already available or if the optional 'filter by reference' step (Step 2.5) is skipped.
Step 1.1 - Import reference dataset
- Script Name (*) csv2pg.py
- URL https://bitbucket.org/sfgovdt/sfgis-general-address-parser/src/master/csv2pg.py
- Important arguments
- input_file - The relative path to the raw CSV file.
- output_table - The name of the table for the imported records.
- Example usage
Import CSV file into PostgreSQL table (++<odbc>)
python csv2pg.py \
  --odbc_server=<odbc-server> \
  --odbc_port=<odbc-port> \
  --odbc_database=<odbc-database> \
  --odbc_uid=<odbc-uid> \
  --odbc_pwd=<odbc-pwd> \
  --input_file=./path/to/raw_reference_file.csv \
  --output_table=reference_raw
- Output table
- This step generates a new table. The name of the new table is passed as a required command line argument.
- Output fields
- All columns in the input CSV file are imported as fields in the new table.
- sfgisgapid: a new serial, not null, primary key
- Artifacts (**)
- reference_raw.csv - The input CSV table serves as the artifact for this step.
Step 1.2 - Parse reference dataset
- Script Name (*) parse_address_table.py
- URL https://bitbucket.org/sfgovdt/sfgis-general-address-parser/src/master/parse_address_table.py
- Important arguments
- address_column: Name of address field. Required.
- city_column: Name of city field. Optional.
- state_column: Name of state field. Optional.
- zip_column: Name of ZIP Code field. Optional.
- output_table: Name of the output table. Required.
- Example usage
Parse address table (++<odbc>)
python parse_address_table.py \
  --odbc_server=<odbc-server> \
  --odbc_port=<odbc-port> \
  --odbc_database=<odbc-database> \
  --odbc_uid=<odbc-uid> \
  --odbc_pwd=<odbc-pwd> \
  --address_table=reference_raw \
  --primary_key=sfgisgapid \
  --address_column=address \
  --city_column=city \
  --state_column=state \
  --zip_column=zip \
  --output_table=reference_parsed \
  --output_report_file=./output/reference.xlsx \
  --limit=-1
- Output table
- This step generates a new table. The new table contains all of the fields and records of the input table with additional fields noted below.
- Output fields
- The output consists of a field for every kind of parsed address part that is generated by the parsing tool, Parserator. It also contains additional concatenated fields as defined in the FGDC's United States Thoroughfare, Landmark, and Postal Address Data Standard as well as metadata fields.
- address_to_parse character varying(255)
- address_number bigint
- address_number_suffix character varying(255)
- street_name_pre_directional character varying(255)
- street_name_pre_type character varying(255)
- street_name character varying(255)
- street_name_post_type character varying(255)
- street_name_post_directional character varying(255)
- subaddress_type character varying(255)
- subaddress_identifier character varying(255)
- state_name character varying(255)
- zip_code character varying(255)
- complete_address_number character varying(255)
- complete_street_name character varying(255)
- complete_subaddress character varying(255)
- complete_place_name character varying(255)
- delivery_address_with_unit character varying(255)
- delivery_address_without_unit character varying(255)
- parsed_address character varying(1000)
- parser_had_issues boolean
- parser_message character varying(2000)
- parserator_tag character varying(255)
- address_number_parity character varying(255)
- counter integer
- Artifacts (**)
- reference_parsed.csv - Export the new table, e.g. reference_parsed, as a CSV, e.g. reference_parsed.csv, and archive in the network folder dedicated to artifacts for the Bulk Loader iteration.
Stage 2 - Import, parse and filter source dataset
This stage is run once per Bulk Loader process. This stage involves importing, parsing and filtering the dataset. The result will be a dataset that will be geocoded in Stage 3.
Step 2.1 - Import source dataset
- Script Name (*) csv2pg.py
- URL https://bitbucket.org/sfgovdt/sfgis-general-address-parser/src/master/csv2pg.py
- Important arguments
- input_file: The relative path to the raw CSV file.
- output_table: The name of the table for the imported records.
- Example usage
Import CSV file into PostgreSQL table (++<odbc>)
python csv2pg.py \
  --odbc_server=<odbc-server> \
  --odbc_port=<odbc-port> \
  --odbc_database=<odbc-database> \
  --odbc_uid=<odbc-uid> \
  --odbc_pwd=<odbc-pwd> \
  --input_file=./path/to/raw_source_file.csv \
  --output_table=addresses_raw
- Output table
- This step generates a new table. The name of the new table is passed as a required command line argument.
- Output fields
- All columns in the input CSV file are imported as fields in the new table.
- sfgisgapid: a new serial, not null, primary key
- Artifacts (**)
- addresses_raw.csv - The input CSV table serves as the artifact for this step.
Step 2.2 - Exclude from source dataset where address_number has range
- Create Output Tables
addresses_no_range
CREATE TABLE addresses_no_range AS SELECT * FROM addresses_raw WHERE add_number NOT LIKE '%-%';
addresses_with_range
CREATE TABLE addresses_with_range AS SELECT * FROM addresses_raw WHERE add_number LIKE '%-%';
- Artifacts (**)
- addresses_no_range.csv - This is an archive of the records that made it past the filter to the next step.
- addresses_with_range.csv - This is an archive of the records that did not make it past the filter. This artifact is part of the exclusion set of addresses that are set aside for later review.
Step 2.3 - Parse source dataset
- Script Name (*) parse_address_table.py
- URL https://bitbucket.org/sfgovdt/sfgis-general-address-parser/src/master/parse_address_table.py
- Important arguments
- address_column: Name of address field. Required.
- city_column: Name of city field. Optional.
- state_column: Name of state field. Optional.
- zip_column: Name of ZIP Code field. Optional.
- output_table: Name of the output table. Required.
- Example usage
Parse address table (++<odbc>)
python parse_address_table.py \
  --odbc_server=<odbc-server> \
  --odbc_port=<odbc-port> \
  --odbc_database=<odbc-database> \
  --odbc_uid=<odbc-uid> \
  --odbc_pwd=<odbc-pwd> \
  --address_table=addresses_no_range \
  --primary_key=sfgisgapid \
  --address_column=address \
  --city_column=city \
  --state_column=state \
  --zip_column=zip \
  --output_table=addresses_parsed \
  --output_report_file=./output/source_report.xlsx \
  --limit=-1
- Output table
- This step generates a new table. The new table contains all of the fields and records of the input table with additional fields noted below.
- Output fields
- The output consists of a field for every kind of parsed address part that is generated by the parsing tool, Parserator. It also contains additional concatenated fields as defined in the FGDC's United States Thoroughfare, Landmark, and Postal Address Data Standard as well as metadata fields. See also Step 1.2 Output Fields.
- Artifacts (**)
- addresses_parsed.csv: Export the new table, e.g. addresses_parsed, as a CSV, e.g. addresses_parsed.csv, and archive in the network folder dedicated to artifacts for the Bulk Loader iteration.
Step 2.4 - Exclude addresses where street_name_post_direction is not blank and corresponding streets
- The Bulk Loader is unable to correctly import addresses that contain a post directional value. As such these addresses as well as all corresponding addresses without a post directional value must be filtered out of the dataset.
- See also Issue 168, Provide for street names that have a post directional.
- Script Name (*) remove_postdirectionals.py
- URL https://bitbucket.org/sfgovdt/sfgis-general-address-parser/src/master/remove_postdirectionals.py
- Important arguments
- output_table: The name of the table for the filtered records.
- Example usage
Remove records with post directional values (++<odbc>)
python remove_postdirectionals.py \
  --odbc_server=<odbc-server> \
  --odbc_port=<odbc-port> \
  --odbc_database=<odbc-database> \
  --odbc_uid=<odbc-uid> \
  --odbc_pwd=<odbc-pwd> \
  --address_table=addresses_parsed \
  --output_table=addresses_no_post_dir \
  --output_report_file=./output/source_report.xlsx \
  --limit=-1
Create table addresses_with_post_dir
CREATE TABLE addresses_with_post_dir AS SELECT * FROM addresses_parsed WHERE id NOT IN (SELECT id FROM addresses_no_post_dir);
- Output table
- This step generates a new table. The new table contains all of the fields and records of the input table, excluding any addresses that have a post directional value as well as the corresponding addresses on the same streets that lack the post directional.
- Artifacts (**)
- addresses_no_post_dir.csv - This is an archive of the records that made it past the filter to the next step.
- addresses_with_post_dir.csv - This is an archive of the records that did not make it past the filter. This artifact is part of the exclusion set of addresses that are set aside for later review.
Step 2.5 - Exclude addresses not in reference (Optional)
This optional step excludes from the source dataset any addresses that are not also found in the parsed reference dataset.
For an understanding of how this step handles duplicates in either the source or the reference, see the comments in EAS Issue 281, Refine the filtering regime for address data to be bulk-loaded.
- Script Name (*) is_address_in_reference.py
- URL https://bitbucket.org/sfgovdt/sfgis-general-address-parser/src/master/is_address_in_reference.py
- Important arguments
- source_output_field - The name of a new integer field that will store the results of the step.
- Example usage
Add field to flag if address is found in reference (++<odbc>)
python is_address_in_reference.py \
  --odbc_server=<odbc-server> \
  --odbc_port=<odbc-port> \
  --odbc_database=<odbc-database> \
  --odbc_uid=<odbc-uid> \
  --odbc_pwd=<odbc-pwd> \
  --source_table=addresses_no_post_dir \
  --reference_table=reference_parsed \
  --source_pk=id \
  --reference_pk=id \
  --source_address_number=address_number \
  --source_address_number_suffix=address_number_suffix \
  --source_street_name_pre_directional=street_name_pre_directional \
  --source_street_name_pre_type=street_name_pre_type \
  --source_street_name=street_name \
  --source_street_name_post_type=street_name_post_type \
  --source_street_name_post_directional=street_name_post_directional \
  --source_subaddress_type=subaddress_type \
  --source_subaddress_identifier=subaddress_identifier \
  --source_place_name=place_name \
  --reference_address_number=address_number \
  --reference_address_number_suffix=address_number_suffix \
  --reference_street_name_pre_directional=street_name_pre_directional \
  --reference_street_name_pre_type=street_name_pre_type \
  --reference_street_name=street_name \
  --reference_street_name_post_type=street_name_post_type \
  --reference_street_name_post_directional=street_name_post_directional \
  --reference_subaddress_type=subaddress_type \
  --reference_subaddress_identifier=subaddress_identifier \
  --reference_place_name=place_name \
  --source_output_field=in_reference \
  --limit=-1
Create and archive table addresses_in_reference
CREATE TABLE addresses_in_reference AS SELECT * FROM addresses_no_post_dir WHERE in_reference = 1;
Create and archive table addresses_not_in_reference
CREATE TABLE addresses_not_in_reference AS SELECT * FROM addresses_no_post_dir WHERE in_reference = 0;
- Output table
- This step generates a new table. The name of the new table is passed as a required command line argument.
- Output fields
- A new field that tells if the source address was found in the reference set (1), not found (0), or not yet checked (-1). The name of the new field is passed as a required command line argument.
- Artifacts (**)
- addresses_in_reference.csv - This is an archive of the records that made it past the filter to the next step.
- addresses_not_in_reference.csv - This is an archive of the records that did not make it past the filter. This artifact is part of the exclusion set of addresses that are set aside for later review.
Step 2.6 - Export source dataset for geocoding
- Script Name (*) pg2bulkloader.py
- URL https://bitbucket.org/sfgovdt/sfgis-general-address-parser/src/master/pg2bulkloader.py
- Important arguments
- input_table - The name of the table containing addresses to be geocoded.
- output_file_name - The name of the CSV created from the input table.
- input_source - The text associated with the data source of this batch of addresses. The value entered here appears in the EAS user interface (under the source of a given base address or unit). This field has a 32 character limit. Exceeding 32 characters will cause the Bulk Loader to crash.
- Example usage
Create table addresses_to_geocode
-- Select, order and save as 'addresses_to_geocode'
CREATE TABLE addresses_to_geocode AS
SELECT * FROM addresses_in_reference
ORDER BY
  street_name, street_name_pre_directional, street_name_pre_type,
  street_name_post_type, street_name_post_directional,
  address_number, address_number_suffix,
  subaddress_type, subaddress_identifier, place_name;
Add counter field
-- Add and index a counter field
ALTER TABLE addresses_to_geocode DROP COLUMN IF EXISTS counter;
ALTER TABLE addresses_to_geocode ADD COLUMN counter SERIAL;
CREATE INDEX ON addresses_to_geocode (counter);
Export PostgreSQL table to CSV file for Bulk Loading (++<odbc>)
python pg2bulkloader.py \
  --odbc_server=<odbc-server> \
  --odbc_port=<odbc-port> \
  --odbc_database=<odbc-database> \
  --odbc_uid=<odbc-uid> \
  --odbc_pwd=<odbc-pwd> \
  --input_table=addresses_to_geocode \
  --output_file_name=./output/addresses_to_geocode \
  --input_source='Imported Addresses (20XX-XX-XX)' \
  --limit=-1
- Archive a copy of table addresses_to_geocode to artifact addresses_to_geocode.csv
- Output table
- addresses_to_geocode
- Output file
- This step generates a new CSV file. The name of the new file is passed as a required command line argument.
- Artifacts (**)
- addresses_to_geocode.csv - A CSV file of addresses ready for geocoding.
Stage 3 - Geocode and filter
Step 3.1 - Geocode source dataset
- Input - addresses_to_geocode.csv
- Output - addresses_geocoded.shp
Detailed Substeps
- Create a folder for storing all input and output artifacts for this iteration of the Bulk Loader process.
  For example, R:\Tec\..\Eas\_Task\2018_2019\path\to\archive\bulkloader_process_YYYYMMDD
- Create a new ArcMap map document (ArcMap 10.6.1).
  For example, bulkloader_YYYYMMDD.mxd
- Add streets (optional).
  See StClines_20190129.shp in R:\Tec\...\Eas\_Task\2018_2019\20181128_248_DocumentBulkLoader\Data
- Create a personal geodatabase.
  In the Catalog window, right-click Home under Folder Connections and select New → Personal Geodatabase.
- Import the CSV into the personal geodatabase.
  Right-click the new personal geodatabase and select Import → Table (single).
  Browse to addresses_to_geocode.csv and specify the output table addresses_to_geocode.
  Click OK and start a stopwatch. Wait up to 5 minutes for the Table of Contents to update.
- Geocode with the Virtual Address Geocoder
  - Right-click the table addresses_to_geocode in the ArcMap Table of Contents and select Geocode Addresses.
  - In the dialog 'Choose an address geocoder to use', select 'Add'.
  - Browse to and select R:\311\...\StClines_20150729_VirtualAddressLocator (TODO - Replace with new path)
  - Click 'OK'.
  - Under 'Address Input Fields' select 'Multiple Fields'.
    - Street or intersection: address
    - ZIP Code: zip
  - Under Output, click the folder icon. In the popup, change 'Save as type' to 'Shapefile' and save the shapefile in the path dedicated to artifacts for this Bulk Loader process.
    For example, R:\Tec\..\Eas\_Task\2018_2019\path\to\archive\bulkloader_YYYYMMDD\geocoder\addresses_geocoded.shp
  - Time on stopwatch and take note of the execution time.
  (A scripted arcpy alternative for this step is sketched after the artifacts list below.)
- Artifacts (**)
  - bulkloader_process_YYYYMMDD.mxd
  - addresses_geocoded.shp
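The GUI substeps above can also be scripted. Below is a minimal sketch using arcpy (run from the ArcMap 10.6.1 Python environment), offered as an assumption rather than part of the documented toolset: the archive path, geodatabase name, locator path and locator field map are illustrative placeholders and must be adjusted to match the locator and artifacts folder actually used.

```python
# Hypothetical arcpy sketch of Step 3.1: import the CSV and geocode it.
# All paths and the locator field map are placeholders (assumptions).
import arcpy

arcpy.env.overwriteOutput = True

archive = r"R:\path\to\archive\bulkloader_process_YYYYMMDD"      # artifacts folder
gdb = archive + r"\bulkloader_YYYYMMDD.mdb"                      # personal geodatabase
locator = r"R:\path\to\StClines_20150729_VirtualAddressLocator"  # virtual address locator

# Create the personal geodatabase if it does not already exist.
if not arcpy.Exists(gdb):
    arcpy.CreatePersonalGDB_management(archive, "bulkloader_YYYYMMDD.mdb")

# Import the CSV (equivalent of Import -> Table (single) in the GUI).
arcpy.TableToTable_conversion(archive + r"\addresses_to_geocode.csv",
                              gdb, "addresses_to_geocode")

# Geocode the table. The field map pairs locator input fields with table fields
# and depends on how the locator is defined; adjust it to match the GUI dialog.
field_map = "'Street or Intersection' address;'ZIP Code' zip"
arcpy.GeocodeAddresses_geocoding(gdb + r"\addresses_to_geocode", locator, field_map,
                                 archive + r"\geocoder\addresses_geocoded.shp")
print(arcpy.GetMessages())
```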
Step 3.2 - Filter geocoded dataset
The purpose of this step is to filter geocoded addresses based on their geocoding score and status.
- The geocoding score has a maximum of 100 points. This step keeps addresses with a score of 100.
- The geocoding process also assigns a status value: 'M' for matched, 'T' for tied with another address, or 'U' for unmatched.
- This step filters out any addresses that are tied with another address (status of 'T'), even if the score is 100.
Input - addresses_geocoded.shp
Detailed Substeps
- Open the ArcMap document created in the previous step.
- Filter matched addresses (status = 'M') with a score of 100.
  - In the Table of Contents, right-click the shapefile from the previous step, e.g. addresses_geocoded.shp, and select Open Attributes Table.
  - Click the first icon in the menu bar (top-left) and select Select By Attributes.
  - Enter the following WHERE clause to select matched addresses with a score of 100:
    ("Status" = 'M') AND ("Score" = 100)
  - Click Apply, wait for the operation to complete and then close the Attributes window.
- Save the results to geocoder_matched_and_score100.shp.
  - In the Table of Contents, right-click the shapefile from the previous step and select Data → Export Data.
  - Click the browse icon. In the file browser, set the Save as type dropdown to Shapefile.
  - Save the shapefile to the artifacts folder dedicated to this iteration of the Bulk Loader process.
- Filter the opposite set: status not matched ('M') or score not 100.
  - Right-click addresses_geocoded.shp and open the attributes table.
  - Enter the following WHERE clause:
    NOT ("Status" = 'M') OR NOT ("Score" = 100)
  - Save the results to geocoder_notmatched_or_under100.shp. (See the substeps above.)
(A scripted arcpy alternative for this step is sketched after the output list below.)
Output / Artifacts
- geocoder_score_100.shp - Shapefile of matched addresses (status = 'M') with a geocoding score of 100.
- geocoder_notmatched_or_under100.shp - Shapefile of addresses that were not matched (unmatched or tied) or that scored less than 100.
- The total number of records in the two output shapefiles should equal the number of records in the input shapefile.
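As with the previous step, this filtering can be scripted. The following is a minimal arcpy sketch under the same assumptions (placeholder paths, not part of the documented toolset); it applies the two WHERE clauses above and checks that the two outputs account for every input record.

```python
# Hypothetical arcpy sketch of Step 3.2: split the geocoded shapefile into the
# matched/score-100 set and its complement. Paths are illustrative placeholders.
import arcpy

arcpy.env.overwriteOutput = True

geocoded = r"R:\path\to\archive\geocoder\addresses_geocoded.shp"
matched  = r"R:\path\to\archive\geocoder\geocoder_score_100.shp"
rejected = r"R:\path\to\archive\geocoder\geocoder_notmatched_or_under100.shp"

# Same WHERE clauses as the GUI steps above.
arcpy.Select_analysis(geocoded, matched, "\"Status\" = 'M' AND \"Score\" = 100")
arcpy.Select_analysis(geocoded, rejected, "NOT (\"Status\" = 'M') OR NOT (\"Score\" = 100)")

# Sanity check: the two outputs should account for every input record.
n_in = int(arcpy.GetCount_management(geocoded).getOutput(0))
n_out = (int(arcpy.GetCount_management(matched).getOutput(0)) +
         int(arcpy.GetCount_management(rejected).getOutput(0)))
assert n_in == n_out, "Output record counts do not add up to the input count."
```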
Stage 4 - Export full set (single batch) or subset (multiple batches)
A note about batches
Stages 4, 5 and 6 can be run one time with the results from Stage 3, or they can be run in multiple batches of subsets.
A major consideration in deciding whether to run the full set at once or in batches is the number of records being Bulk Loaded.
The size of each Bulk Loader operation affects the following aspects of the EAS:
- The disk space consumed by the database server
- The EAS user interface section that lists addresses loaded in a given Bulk Loader operation
- The weekly email attachment listing new addresses added to the EAS
For medium-to-large datasets (input sets with over 1,000 records) it is recommended that the process first be run on a development server to assess the implications of the operation. Where appropriate, perform the Bulk Loading process in batches over several days or weeks.
The remaining steps document one example batch of a multi-batch process.
Step 4.1 - Export shapefile for Bulk Loading (entire set or subset batch)
- Substeps
  - Create batch from subset (optional)
    This example shows filtering for the second batch of 50,000 records using the counter_ field.
    - Right-click the layer geocoder_score_100 in the ArcMap Table of Contents and select Open Attributes Table.
    - Click the first icon in the menu bar (top-left) and select Select By Attributes.
    - Enter the following WHERE clause to select the current batch of 50,000 records:
      "counter_" >= 50000 AND "counter_" < 100000
    - Click Apply, wait for the operation to complete and then close the Attributes window.
    - The batch may contain fewer than 50,000 records due to filtering by geocoding results in the previous step.
  - Export for the Bulk Loader
    - In the Table of Contents, right-click the layer geocoder_score_100 and select Data → Export Data.
    - Click the browse icon. In the file browser, set the Save as type dropdown to Shapefile.
    - Save the shapefile to the artifacts folder dedicated to this iteration of the Bulk Loader process.
      e.g. R:\Tec\..\Eas\_Task\2018_2019\path\to\archive\bulkloader_process_YYYYMMDD\bulkloader\batch_002\bulkload.shp
  (A scripted arcpy alternative for this step is sketched after the artifacts list below.)
- Artifacts
- bulkload.shp - Shapefile for loading into the Bulk Loader in the next stage.
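A minimal arcpy sketch of this step is shown below, again with placeholder paths and under the assumption that batches are selected on the counter_ field as in the GUI substeps; adjust the bounds for each batch.

```python
# Hypothetical arcpy sketch of Step 4.1: export the second batch of 50,000
# records using the counter_ field. Paths are illustrative placeholders.
import arcpy

arcpy.env.overwriteOutput = True

source = r"R:\path\to\archive\geocoder\geocoder_score_100.shp"
batch  = r"R:\path\to\archive\bulkloader\batch_002\bulkload.shp"

# Same WHERE clause as the GUI step above; adjust the bounds for each batch.
arcpy.Select_analysis(source, batch, "\"counter_\" >= 50000 AND \"counter_\" < 100000")
print(arcpy.GetCount_management(batch).getOutput(0) + " records exported for this batch")
```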
Stage 5 - Run the Bulk Loader
For a complete set of steps and background about the Bulk Loader, see also Running the Bulk Loader, a page dedicated to its input, operation and results.
Step 5.1 - Production-specific preparation
- Backup Database
  - Make a backup of the EAS database.
  - See also Backup the EAS Databases.
Halt Services
Reason for halting services
These steps are being performed to facilitate immediate roll-back of the EAS database if the Bulk Load Process ends in failure.
Place EAS web application into maintenance mode
Place the Web servers into maintenance mode (SF PROD WEB, DR PROD WEB).
cd /var/www/html
sudo ./set_eas_mode.sh MAINT
Turn off the replication server
Disable database replication by shutting down the database service on the replication server (DR PROD DB).
Stop PostgreSQL
service postgresql-9.0 stop
# Alternative syntax: sudo /sbin/service postgresql-9.0 stop
# Alternative syntax: /usr/pgsql/9.0/bin/pg_ctl -D /data/9.0/data stop
Turn off downstream database propagation service(s)
Suspend downstream replication to internal business system database (SF PROD WEB).
stop xmit
sudo /var/www/html/eas/bin/xmit_change_notifications.bsh stop
Step 5.2 - Non-production-specific preparation
- Restore Database
- Restore database from latest daily production backup
Step 5.3 - Database Preparation
Connect to the database, <environment>_DB, and clear any leftover records from previous Bulk Loader batches.
TRUNCATE bulkloader.address_extract, bulkloader.blocks_nearest;
VACUUM FULL ANALYZE bulkloader.address_extract;
VACUUM FULL ANALYZE bulkloader.blocks_nearest;
Make note of EAS record counts before the Bulk Loading operation.
Record Counts
SELECT schemaname, relname, n_live_tup FROM pg_stat_user_tables ORDER BY schemaname, relname, n_live_tup;
Make note of the database partition size on the file system.
disk usage
df /data
Step 5.4 - Transfer Shapefiles
- Transfer the bulkload.shp shapefile from Stage 4 to an EAS automation machine, <environment>_AUTO.
- Substitute <environment> with one of the relevant environments: SF_DEV, SF_QA, SF_PROD, SD_PROD.
- Copy the shapefile to the folder C:\apps\eas_automation\app_data\data\bulkload_shapefile.
Step 5.5 - Run Bulk Loader
- Open a command prompt and change folders:
  cd C:\apps\eas_automation\automation\src
- Run the step to stage the address records:
  python job.py --job stage_bulkload_shapefile --env SF_DEV --action EXECUTE --v
- Run the step to bulk load the address records:
  python job.py --job bulkload --env SF_DEV --action EXECUTE --v
To calculate the time it took to run the Bulk Loader look at the timestamps in the output or use a stopwatch or clock to time the operation.
Step 5.6 - Analysis
Make note of EAS record counts after the Bulk Load operation.
Record Counts
SELECT schemaname, relname, n_live_tup FROM pg_stat_user_tables ORDER BY schemaname, relname, n_live_tup;
A comparison of 'before' and 'after' record counts will indicate the number of new base addresses added to the table `public.address_base` and the number of new addresses and units added to the table `public.addresses`.
See dedicated Bulk Loader page, Running the Bulk Loader, for more analysis options.
Make note of the database partition size on the file system. Compare with the size of the partition prior to loading to get the total disk space used as a result of running the Bulk Loader.
disk usage
df /data
- Query and make note of totals in the bulkloader.address_extract table. The results here will be used to cross check the results in the next stage.
  Count/view new base addresses added to the EAS.
  SELECT COUNT(*) FROM bulkloader.address_extract WHERE street_segment_id IS NOT NULL;
  SELECT * FROM bulkloader.address_extract WHERE street_segment_id IS NOT NULL;
  Count/view unit addresses (some were already there, some are new).
  SELECT COUNT(*) FROM bulkloader.address_extract WHERE address_id IS NOT NULL;
  SELECT * FROM bulkloader.address_extract WHERE address_id IS NOT NULL;
Stage 6 - Extract results
Step 6.1 - Archive exceptions
Info about the 'address_extract' table
The Bulk Loader operation in Stage 5 populated an EAS table named 'address_extract' in the 'bulkloader' schema.
It populated the 'address_extract' table with every address it attempted to load.
If any errors occurred on a given address during the load, the Bulk Loader populated the 'exception_text' field with a description of the error.
- Archive the entire bulkloader.address_extract table.
  - Use a query tool such as pgAdmin to query the table and save the results as a CSV file. (A scripted alternative is sketched after the artifacts list below.)
    SELECT * FROM bulkloader.address_extract;
  - Save the file in the network folder dedicated to artifacts for the Bulk Loader iteration.
    - For example, R:\Tec\..\Eas\_Task\path\to\archive\bulkloader_YYYYMMDD\bulkloader\batch_002\address_extract.csv
- Archive the addresses that raised exceptions during the Bulk Loader run.
  - Query the bulkloader.address_extract table for any value in the exception_text field.
    SELECT * FROM bulkloader.address_extract WHERE exception_text IS NOT NULL ORDER BY exception_text;
  - Save the file in the network folder dedicated to artifacts for the Bulk Loader iteration.
    - For example, R:\Tec\..\Eas\_Task\path\to\archive\bulkloader_YYYYMMDD\bulkloader\batch_002\exceptions.csv
- Artifacts
  - address_extract.csv - Results of every address submitted to the Bulk Loader.
  - exception_text.csv - Subset of just the records that were not loaded, with the reason indicated in the exception_text field.
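pgAdmin is the documented tool for these exports. As an alternative, the following minimal psycopg2 sketch (an assumption, not part of the documented toolset) streams the same two queries to CSV files; the connection values are the (++<odbc>) placeholders and the output file names follow the artifact suggestions above.

```python
# Hypothetical psycopg2 sketch: export the address_extract table and the
# exception subset to CSV. Connection values and file names are placeholders;
# save the files in the network folder dedicated to artifacts for this batch.
import psycopg2

conn = psycopg2.connect(host="<odbc-server>", port=5432,
                        dbname="<odbc-database>",
                        user="<odbc-uid>", password="<odbc-pwd>")

exports = {
    "address_extract.csv": "SELECT * FROM bulkloader.address_extract",
    "exceptions.csv": ("SELECT * FROM bulkloader.address_extract "
                       "WHERE exception_text IS NOT NULL ORDER BY exception_text"),
}

with conn, conn.cursor() as cur:
    for filename, query in exports.items():
        with open(filename, "w", newline="") as f:
            # COPY ... TO STDOUT streams the query result as CSV with a header row.
            cur.copy_expert("COPY ({0}) TO STDOUT WITH CSV HEADER".format(query), f)
```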
Step 6.2 - Archive unique EAS change_request_id associated with the Bulk Load
- Get the unique EAS change_request_id created by the Bulk Load operation. The value of <change_request_id> will be used in the next steps to count addresses added to the EAS.
  - Query the public.change_requests table for the new change_request_id value.
    SELECT change_request_id FROM public.change_requests WHERE requestor_comment LIKE 'bulk load change request' ORDER BY change_request_id DESC LIMIT 1;
  - Save the file in the network folder dedicated to artifacts for the Bulk Loader iteration.
    - For example, R:\Tec\..\Eas\_Task\path\to\archive\bulkloader_YYYYMMDD\bulkloader\batch_002\change_request_id.csv
- Artifacts
  - change_request_id.csv - The unique EAS change_request_id created by the Bulk Load operation.
Step 6.3 - Archive new EAS address_base records
- Get all the base records added to the EAS during the Bulk Loader operation.
  - Query the public.address_base table on the new change_request_id value.
    SELECT activate_change_request_id, address_id, public.address_base.*
    FROM public.address_base, public.addresses
    WHERE public.address_base.address_base_id = public.addresses.address_base_id
    AND public.addresses.address_base_flg = TRUE
    AND public.addresses.activate_change_request_id = <change_request_id>;
  - Save the file in the network folder dedicated to artifacts for the Bulk Loader iteration.
    - For example, R:\Tec\..\Eas\_Task\path\to\archive\bulkloader_YYYYMMDD\bulkloader\batch_002\address_base.csv
- Artifacts
  - address_base.csv - All the base records added to the EAS during the Bulk Loader operation.
Step 6.4 - Archive new EAS addresses records
- Get all the address records (including units) added to the EAS during the Bulk Loader operation.
  - Query the public.addresses table on the new change_request_id value.
    SELECT * FROM public.addresses WHERE activate_change_request_id = <change_request_id>;
  - Save the file in the network folder dedicated to artifacts for the Bulk Loader iteration.
    - For example, R:\Tec\..\Eas\_Task\path\to\archive\bulkloader_YYYYMMDD\bulkloader\batch_002\addresses.csv
- Artifacts
  - addresses.csv - All the address records (including units) added to the EAS during the Bulk Loader operation.
Step 6.5 - Cross check results
Compare the results of Stage 5 with the results from Stage 6.
- The number of base addresses found in the Stage 5 Analysis should be identical to the number of base addresses found in Step 6.3.
- The number of addresses found in the Stage 5 Analysis should be less than or equal to the number of addresses listed in Step 6.4. (The Bulk Loader does not provide enough information in the bulkloader.address_extract table to determine the exact number of new addresses added, but there is enough information to determine an upper limit.)
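A minimal psycopg2 sketch of this cross check is shown below (an assumption, not part of the documented toolset). It reruns the Stage 5 base-address count and the Step 6.3 count for the batch's change_request_id and reports whether they match; the connection values and change_request_id are placeholders.

```python
# Hypothetical psycopg2 sketch of the Step 6.5 cross check. Connection values
# and CHANGE_REQUEST_ID are placeholders; use the value found in Step 6.2.
import psycopg2

CHANGE_REQUEST_ID = 12345  # placeholder: substitute the value from Step 6.2

conn = psycopg2.connect(host="<odbc-server>", port=5432,
                        dbname="<odbc-database>",
                        user="<odbc-uid>", password="<odbc-pwd>")

with conn, conn.cursor() as cur:
    # Base addresses counted during the Stage 5 analysis.
    cur.execute("SELECT COUNT(*) FROM bulkloader.address_extract "
                "WHERE street_segment_id IS NOT NULL;")
    stage5_base = cur.fetchone()[0]

    # Base addresses archived in Step 6.3 for this change request.
    cur.execute("SELECT COUNT(*) FROM public.address_base ab "
                "JOIN public.addresses a ON a.address_base_id = ab.address_base_id "
                "WHERE a.address_base_flg = TRUE "
                "AND a.activate_change_request_id = %s;", (CHANGE_REQUEST_ID,))
    step63_base = cur.fetchone()[0]

print("Stage 5 base addresses:", stage5_base)
print("Step 6.3 base addresses:", step63_base)
print("Counts match." if stage5_base == step63_base else "Counts differ - investigate before cleanup.")
```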
Stage 7 - Cleanup and Restoration
Step 7.1 - Database Cleanup
Connect to the database, <environment>_DB, and clear temporary records from the latest Bulk Loader batch.
TRUNCATE bulkloader.address_extract, bulkloader.blocks_nearest;
VACUUM FULL ANALYZE bulkloader.address_extract;
VACUUM FULL ANALYZE bulkloader.blocks_nearest;
Step 7.2 - On Failure Restore Database
- If the Bulk Loader failed and corrupted any data then restore from the database backup.
- Follow these steps to restore from backup.
Step 7.3 - Restore Services (Production Only)
- If the Bulk Loader Process was run on the production server then restore services
Turn on production-to-replication service
Re-enable database replication by restarting the database service on the replication server (DR PROD DB).
Start PostgreSQL
service postgresql-9.0 start
# Alternative syntax: sudo /sbin/service postgresql-9.0 start
Turn on downstream database propagation service(s)
Resume downstream replication to internal business system database (SF PROD WEB).
start xmit
sudo /var/www/html/eas/bin/xmit_change_notifications.bsh start
Enable front-end access to EAS
Place the Web servers into live mode (SF PROD WEB, DR PROD WEB).
cd /var/www/html
sudo ./set_eas_mode.sh LIVE
Notes
(+) Substitute EAS <environment> with one of the relevant environments: SF_DEV, SF_QA, SF_PROD, SD_PROD.
(++<odbc>) Substitute <odbc> arguments with values for an available PostgreSQL database.
<odbc-server> - Name or IP address of the database server, e.g. localhost
<odbc-port> - Port of the database server, e.g. 5432
<odbc-database> - Name of the database, e.g. awgdb
<odbc-uid> - User name
<odbc-pwd> - User password
(*) Scripts are written in and require Python 3. See the source code repository for more details.
(**) Artifacts should be saved in a network folder dedicated to the entire instance of a given Bulk Loader process. Artifact names shown are suggestions. Note: For large datasets the entire process could be spread over many days or weeks. Take this into consideration when naming any artifacts and subfolders.
Development stack
This is the stack used for the development and testing of the steps. For best results run the steps with the same or compatible stack.
- Stages 1 and 2 (Parsing)
- Python 3 (3.6.4)
- PostgreSQL (10.3)
- pgAdmin (for executing raw SQL)
- Stages 3 and 4 (Geocoding)
- ArcMap (10.6.1)
- Stages 5 and 6 (Bulk Loading)