Table of Contents
...
- The Bulk Loader is a process used to add multiple new addresses to the EAS at one time.
- There are several stages that make up the Bulk Loader process, outlined below in Summary and Details.
...
Stage Number | Stage | Category | Summary | Environment | Iterations | Estimated Person Time | Estimated Computer Time
---|---|---|---|---|---|---|---
1 | Import and parse reference dataset (Optional) | Parsing | This optional step cross-checks each source address against a reference dataset. Addresses found in the reference dataset move on to the next step; addresses not found are set aside in an exclusion set for later review. | | Once per Bulk Loader process | 1 hour | 10 minutes
2 | Import, parse and filter source dataset | Parsing | Import the dataset destined for the EAS. Parse and filter the set. | | Once per Bulk Loader process | 90 minutes | 15 minutes
3 | Geocode and filter | Geocoding | Geocode the set and filter further based on the geocoder score and status. | ArcMap | Once per Bulk Loader process | 1 hour | 5 minutes
4 | Export full set (single batch) or subset (multiple batches) | Geocoding | For large datasets, create one of many subsets that will be run through the Bulk Loader in multiple batches. | ArcMap | One or more batches for each Bulk Loader process | 30 minutes per batch | 5 minutes per batch
5 | Bulk Load batch (full set or subset) | Bulk Loading | Run the entire batch or each subset batch through the Bulk Loader. | | One or more batches for each Bulk Loader process | 1 hour per batch | 5 minutes per batch
6 | Extract results | Bulk Loading | Extract and archive the list of addresses added to the EAS, the unique EAS 'change request id' associated with this batch, and the addresses rejected by the Bulk Loader in this batch. | PostgreSQL / pgAdmin | One or more batches for each Bulk Loader process | 1 hour per batch | 5 minutes per batch
7 | Cleanup and Restoration | Bulk Loading | Clean up database, restore services and, in the event of a failure, restore from backup. | PostgreSQL / pgAdmin | One or more batches for each Bulk Loader process | 1 hour per batch | 5 minutes per batch
...
This optional stage is run once per Bulk Loader process. This stage can be skipped if the reference dataset is already available or if the optional 'filter by reference' step (Step 2.5) is skipped.
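The cross-check itself can be implemented with whatever tooling is convenient. Purely as an illustration, assuming the source and reference sets have been staged in hypothetical tables named source_addresses and reference_addresses with a normalized full_address column, the split into a matched set and an exclusion set looks like this:

```sql
-- Illustration only: source_addresses and reference_addresses are hypothetical staging tables.
-- Addresses with a match in the reference set continue to the next step.
SELECT s.*
FROM source_addresses s
JOIN reference_addresses r
  ON lower(s.full_address) = lower(r.full_address);

-- Addresses with no match are set aside in an exclusion set for later review.
SELECT s.*
FROM source_addresses s
LEFT JOIN reference_addresses r
  ON lower(s.full_address) = lower(r.full_address)
WHERE r.full_address IS NULL;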
...
Stage 3 - Geocode and filter
Step 3.1 - Geocode source dataset
...
Stage 4 - Export shapefile - full set (single batch) or subset (multiple batches)
Note:
Stages 4, 5 and 6 can be run one time with the results from Stage 3, or they can be run in multiple batches of subsets. A major consideration in deciding whether to run the full set at once or in batches is the number of records being Bulk Loaded, since the size of each Bulk Loader operation affects several aspects of the EAS.
For medium-to-large datasets (input sets with over 1,000 records) it is recommended that the Bulk Loading process be run in batches over several days or weeks (one way to assign batch numbers is sketched after this note). Reminder: it is required that the process first be run on a development server to assess the implications of the operation. The remaining steps document a single batch iteration; repeat these steps for a multi-batch process.
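Batching is normally done when exporting subsets in Stage 4, but if the filtered records are also staged in a database table, one way to assign batch numbers is a simple row-number bucket. This is only a sketch; source_addresses is a hypothetical staging table and 500 is an arbitrary batch size:

```sql
-- Sketch only: assign each record to a batch of at most 500 rows for a multi-batch run.
SELECT s.*,
       ((row_number() OVER (ORDER BY s.id) - 1) / 500) + 1 AS batch_number
FROM source_addresses s;
```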
...
e.g. R:\Tec\..\Eas\_Task\2018_2019\path\to\archive\bulkloader_process_YYYYMMDD\bulkloader\batch_002\bulkload.shp
- Artifacts
- bulkload.shp - Shapefile for loading into the Bulk Loader in the next stage.
Stage 5 - Run the Bulk Loader
For a complete set of steps and background about the Bulk Loader, see also Running the Bulk Loader, a page dedicated to its input, operation and results.
...
Step 5.1 - Disable front-end access to EAS
- Notify relevant recipients that the Bulk Loader Process is starting
- Disable web service on <environment>_WEB (SF DEV WEB, SF QA WEB, SF PROD WEB)
```bash
cd /var/www/html
sudo ./set_eas_mode.sh MAINT
```
- Browse to the web site, http://eas.sfgov.org/, to confirm the service has stopped. (Expect to see a message that EAS is currently out of service.)
...
Step 5.2 - Non-production-specific preparation
Restore Database
- Restore database from latest daily production backup
...
Step 5.3 - Production-specific preparation
Halt Services
Warning: Reason for halting services. These steps are being performed to facilitate immediate roll-back of the EAS database if the Bulk Load process ends in failure.
SKIP - Turn off the replication server
Disable database replication by shutting down the database service on the replication server (DR PROD DB).
```bash
# Stop PostgreSQL
#sudo -u postgres -i
#/usr/pgsql-9.0/bin/pg_ctl -D /data/9.0/data stop
```
Turn off downstream database propagation service(s)
Suspend downstream replication to internal business system database (SF PROD WEB).
```bash
# stop xmit
sudo /var/www/html/eas/bin/xmit_change_notifications.bsh stop
```
Backup Database
Make a backup of the EAS database. See also Backup the EAS Databases.
```bash
sudo -u postgres -i
/home/dba/scripts/dbbackup.sh > /var/tmp/dbbackup.log   # this step takes about 2 minutes
ls -l /var/tmp                                           # ensure the log file is 0 bytes
ls -la /mnt/backup/pg/daily/easproddb.sfgov.org-*        # the timestamp on the last file listed should match the timestamp of the backup
exit                                                     # log out of user postgres when done
```
...
Connect to the database, <environment>_DB, and clear any leftover records from previous Bulk Loader batches.
```sql
-- TRUNCATE
TRUNCATE bulkloader.address_extract, bulkloader.blocks_nearest;
```
```sql
-- VACUUM
VACUUM FULL ANALYZE bulkloader.address_extract;
```
```sql
-- VACUUM
VACUUM FULL ANALYZE bulkloader.blocks_nearest;
```
Make note of EAS record counts before the Bulk Loading operation.
```sql
-- Record Counts
SELECT schemaname, relname, n_live_tup FROM pg_stat_user_tables ORDER BY schemaname, relname, n_live_tup;
```
- Save results as artifact record_counts_before.csv
- Also save the results in the Excel spreadsheet artifact BulkLoader_Process_YYYYMMDD.xlsx (TODO provide path to a template)
Make note of the database partition size on the file system at the current point in time.
```bash
# disk usage
date; df /data  # 1st of 3
```
...
Open a command prompt and change folders:
```bash
cd C:\apps\eas_automation\automation\src
```
Run the step to stage the address records:
```bash
python job.py --job stage_bulkload_shapefile --env <environment> --action EXECUTE --v

# Environment-specific examples:
# python job.py --job stage_bulkload_shapefile --env SF_DEV --action EXECUTE --v
# python job.py --job stage_bulkload_shapefile --env SF_QA --action EXECUTE --v
# python job.py --job stage_bulkload_shapefile --env SF_PROD --action EXECUTE --v
```
Run the step to bulk load the address records
```bash
python job.py --job bulkload --env <environment> --action EXECUTE --v

# Environment-specific examples:
# python job.py --job bulkload --env SF_DEV --action EXECUTE --v
# python job.py --job bulkload --env SF_QA --action EXECUTE --v
# python job.py --job bulkload --env SF_PROD --action EXECUTE --v
```
To calculate the time it took to run the Bulk Loader, look at the timestamps in the output, or time the operation with a stopwatch or clock.
- Save the Bulk Loader command line output as artifact bulk_loader_CLI_output.txt.
Step 5.6 - Analysis
Make note of the database partition size on the file system at this point. Compare with the size of the partition prior to loading to get the total disk space used as a result of running the Bulk Loader.
```bash
# disk usage
date; df /data  # 2nd of 3
```
Make note of EAS record counts after the Bulk Load operation.
```sql
-- Record Counts
SELECT schemaname, relname, n_live_tup FROM pg_stat_user_tables ORDER BY schemaname, relname, n_live_tup;
```
- Save results as artifact record_counts_after.csv
- Also save the results in the Excel spreadsheet artifact BulkLoader_Process_YYYYMMDD.xlsx
In the spreadsheet, calculate the difference between the 'before' and 'after' record counts. The results will indicate the number of new base addresses added to the table `public.address_base` and the number of new addresses and units added to the table `public.addresses`.
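The same difference can also be computed directly in the database rather than in the spreadsheet. A minimal sketch, assuming both snapshots are taken in the same pgAdmin session (the temporary tables counts_before and counts_after below are hypothetical and do not survive a reconnect):

```sql
-- Snapshot taken before the Bulk Load (Step 5.4).
CREATE TEMP TABLE counts_before AS
SELECT schemaname, relname, n_live_tup FROM pg_stat_user_tables;

-- ... run the Bulk Loader (Step 5.5) ...

-- Snapshot taken after the Bulk Load.
CREATE TEMP TABLE counts_after AS
SELECT schemaname, relname, n_live_tup FROM pg_stat_user_tables;

-- Tables whose live-row counts changed, with the difference.
SELECT b.schemaname,
       b.relname,
       b.n_live_tup                AS before_count,
       a.n_live_tup                AS after_count,
       a.n_live_tup - b.n_live_tup AS diff
FROM counts_before b
JOIN counts_after a USING (schemaname, relname)
WHERE a.n_live_tup <> b.n_live_tup
ORDER BY b.schemaname, b.relname;
```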
See dedicated Bulk Loader page, Running the Bulk Loader, for more analysis options.
Stage 6 - Extract results
Step 6.1 - Archive exceptions
...
- Archive the entire address_extract table. Use a query tool such as pgAdmin to query and save the table as a CSV file.
```sql
-- address_extract
SELECT * FROM bulkloader.address_extract;
```
- Save the file in the network folder dedicated to artifacts for the Bulk Loader iteration.
- Save as artifact address_extract.csv
- Archive the addresses that raised exceptions during the Bulk Loader process
Query subtotals and save as artifact exception_text_counts.csv
```sql
-- exception_text_counts
SELECT exception_text, Count(*) FROM bulkloader.address_extract GROUP BY exception_text ORDER BY exception_text;
```
Save artifact as exception_text_counts.csv
Query all exception text records and save as artifact exception_text.csv
```sql
-- exception_text
SELECT * FROM bulkloader.address_extract WHERE NOT(exception_text IS NULL) ORDER BY exception_text, id;
```
- Save artifact as exception_text.csv
- Artifacts
- address_extract.csv - Results of every address submitted to the Bulk Loader.
- exception_text_counts.csv - Counts of the records that were not loaded due to the error indicated in the 'exception_text' field.
- exception_text.csv - Subset of just the records that were not loaded due to the error indicated in the 'exception_text' field.
...
- Get the unique EAS change_request_id created by the Bulk Load operation. The value of <change_request_id> will be used in the next steps to count addresses added to the EAS.
Query the public.change_requests table for the new change_request_id value.
```sql
-- change_request_id
SELECT change_request_id
FROM public.change_requests
WHERE requestor_comment LIKE 'bulk load change request'
ORDER BY change_request_id DESC
LIMIT 1;
```
- Save the file in the network folder dedicated to artifacts for the Bulk Loader iteration. For example, save as artifact change_request_id.csv in R:\Tec\..\Eas\_Task\path\to\archive\bulkloader_YYYYMMDD\bulkloader\batch_002\
- Artifacts
- change_request_id.csv - The unique EAS change_request_id created by the Bulk Load operation.
...
- Get all the address records (including units) added to the EAS during the Bulk Loader operation.
Query the public.addresses table on the new change_request_id value.
Save the file in the network folder dedicated to artifacts for the Bulk Loader iteration. For example, R:\Tec\..\Eas\_Task\path\to\archive\bulkloader_YYYYMMDD\bulkloader\batch_002\
```sql
-- addresses
SELECT * FROM public.addresses WHERE activate_change_request_id = <change_request_id>;
```
- Save artifact as addresses.csv
- Extract sample unit address from the output (see the SQL sketch after this list)
- Pick a random record from the results where unit_num is not NULL. Gather the value in the address_base_id field.
- Construct a URL from this value like this: http://eas.sfgov.org/?address=NNNNNN
- Where NNNNNN is the value from the address_base_id field.
- Make note of this URL for use in Step 7 when testing EAS after services are restored.
- Artifacts
- addresses.csv - All the address records (including units) added to the EAS during the Bulk Loader operation.
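The random unit sample described above can also be selected, and its review URL built, in a single query. A minimal sketch, reusing the <change_request_id> value from Step 6.2:

```sql
-- Sketch only: pick one random unit record from this batch and build the EAS review URL.
SELECT 'http://eas.sfgov.org/?address=' || address_base_id::text AS sample_unit_url
FROM public.addresses
WHERE activate_change_request_id = <change_request_id>
  AND unit_num IS NOT NULL
ORDER BY random()
LIMIT 1;
```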
...
- Get all the base records added to the EAS during the Bulk Loader operation.
Query the public.address_base table on the new change_request_id value.
```sql
-- address_base
SELECT activate_change_request_id, address_id, public.address_base.*
FROM public.address_base, public.addresses
WHERE public.address_base.address_base_id = public.addresses.address_base_id
  AND public.addresses.address_base_flg = TRUE
  AND public.addresses.activate_change_request_id = <change_request_id>;
```
- Save the file in the network folder dedicated to artifacts for the Bulk Loader iteration. For example, R:\Tec\..\Eas\_Task\path\to\archive\bulkloader_YYYYMMDD\bulkloader\batch_002\address_base.csv
- Extract sample base address from the output
- Pick a random record from the results. Gather the value in the address_base_id field.
- Construct a URL from this value like this: http://eas.sfgov.org/?address=NNNNNN
- Where NNNNNN is the value from the address_base_id field.
- Make note of this URL for use in Step 7 when testing EAS after services are restored.
- Artifacts
- address_base.csv - All the base records added to the EAS during the Bulk Loader operation.
Step 6.5 - Cross check results
Compare the results of Stage 5 with the results from Stage 6 (a SQL sketch of these cross-check queries follows this list).
- The number of addresses found in Step 5.6 (2) should be identical to the number of addresses found in Step 6.3.
- The number of base addresses found in Step 5.6 (2) should be identical to the number of base addresses found in Step 6.4.
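A minimal sketch of the cross-check queries, reusing the <change_request_id> value from Step 6.2 and the address_base_flg flag from the Step 6.4 query:

```sql
-- New address records: compare with the public.addresses diff from Step 5.6 and the row count of addresses.csv.
SELECT count(*) AS new_addresses
FROM public.addresses
WHERE activate_change_request_id = <change_request_id>;

-- New base address records: compare with the public.address_base diff from Step 5.6 and the row count of address_base.csv.
SELECT count(*) AS new_base_addresses
FROM public.addresses
WHERE activate_change_request_id = <change_request_id>
  AND address_base_flg = TRUE;
```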
...
Stage 7 - Cleanup and Restoration
Step 7.1 - Database Cleanup
- Connect to the database, <environment>_DB, and clear address_extract records from the latest Bulk Loader batch. Make note of the final disk usage tally.
```sql
TRUNCATE bulkloader.address_extract, bulkloader.blocks_nearest;
```
```sql
VACUUM FULL ANALYZE bulkloader.address_extract;
```
```sql
VACUUM FULL ANALYZE bulkloader.blocks_nearest;
```
```bash
date; df /data  # 3rd of 3
# Optional step: archive output to 'df.txt' artifact
exit
```
Step 7.2 - Clean automation machine
- Return to automation machine and remove shapefile from 'bulkload_shapefile' folder.
- Logout of automation machine.
Step 7.3 - On Failure Restore Database
- If the Bulk Loader failed and corrupted any data, then restore from the database backup.
- Follow these steps to restore from backup.
Step 7.4 - Restore Services (Production Only)
SKIP - Turn on production-to-replication service
Re-enable database replication by restarting the database service on the replication server (DR PROD DB).
```bash
# Start PostgreSQL
#sudo -u postgres -i
#/usr/pgsql-9.0/bin/pg_ctl -D /data/9.0/data start
```
SKIP - Turn on downstream database propagation service(s)
Resume downstream replication to internal business system database (SF PROD WEB).
```bash
# start xmit
#sudo /var/www/html/eas/bin/xmit_change_notifications.bsh start
```
Step 7.5 - Enable front-end access to EAS
Enable web service on <environment>_WEB (SF DEV WEB, SF QA WEB, SF PROD WEB)
```bash
cd /var/www/html
sudo ./set_eas_mode.sh LIVE
exit
```
Browse to the web site, http://eas.sfgov.org/, and review the sample addresses gathered in Step 6.3 and Step 6.4.
Notify relevant recipients that the Bulk Loader Process is complete
Step 7.6 - Archive artifacts
- List of artifacts
- address_base.csv
- address_extract.csv
- addresses.csv
- bulk_loader_CLI_output.txt
- change_request_id.csv
- df.txt
- exception_text.csv
- exception_text_counts.csv
Contents of the progress and summary artifact, BulkLoader_Process_YYYYMMDD.xlsx:
Progress - This sheet contains a table of relevant totals for each batch:
- Batch number
- Batch date
- Input record counts
- New base record counts
- New unit record counts
- Sample addresses
Email Jobs - This sheet contains a table of details related to the weekly 'Address Notification Report' automated email job:
- Batch range
- Record count in batch range
- Email timestamp
- Total record counts in email
- Subtotal of records generated as a result of the Bulk Loader
- Size of email attachment
Batch N - This sheet tracks the before and after record counts for all tables in the EAS database. There is a sheet for each batch loaded. Within each sheet is a section for the 'before' records, a section for the 'after' record counts, and a 'diff' column showing the change in record counts.
END OF STEPS
Notes
...