Bulk Loader Process

Overview

  • The Bulk Loader is a process used to add many new addresses to the EAS at one time.
  • The Bulk Loader process is made up of several stages, outlined below in Summary and Details.

(info) Stages 1 - 3 are run only once per Bulk Loader process.

(info) Stages 4 - 7 are run one or more times in batches.

Summary of Bulk Loader Stages

| Stage | Name | Category | Summary | Environment | Iterations | Est. Person Time | Est. Computer Time |
|---|---|---|---|---|---|---|---|
| 1 | Import and parse reference dataset (Optional) | Parsing | Cross-check each source address for a match in a reference dataset. Addresses found in the reference move to the next step; addresses not found are set aside in an exclusion set for later review. | Python 3, PostgreSQL / pgAdmin | Once per Bulk Loader process | 1 hour | 10 minutes |
| 2 | Import, parse and filter source dataset | Parsing | Import the dataset destined for the EAS. Parse and filter the set. | Python 3, PostgreSQL / pgAdmin | Once per Bulk Loader process | 90 minutes | 15 minutes |
| 3 | Geocode and filter | Geocoding | Geocode the set and filter further based on the geocoder score and status. | ArcMap | Once per Bulk Loader process | 1 hour | 5 minutes |
| 4 | Export full set (single batch) or subset (multiple batches) | Geocoding | For large datasets, create one of many subsets that will be run through the Bulk Loader in multiple batches. | ArcMap | One or more batches per Bulk Loader process | 30 minutes per batch | 5 minutes per batch |
| 5 | Bulk Load batch (full set or subset) | Bulk Loading | Run the entire batch or each subset batch through the Bulk Loader. | EAS <environment>, PostgreSQL / pgAdmin | One or more batches per Bulk Loader process | 1 hour per batch | 5 minutes per batch |
| 6 | Extract results | Bulk Loading | Extract and archive the list of addresses added to the EAS, the unique EAS 'change request id' associated with the batch, and the addresses rejected by the Bulk Loader in the batch. | PostgreSQL / pgAdmin | One or more batches per Bulk Loader process | 1 hour per batch | 5 minutes per batch |
| 7 | Cleanup and Restoration | Bulk Loading | Clean up the database, restore services and, in the event of a failure, restore from backup. | PostgreSQL / pgAdmin | One or more batches per Bulk Loader process | 1 hour per batch | 5 minutes per batch |

(star) Running on DEV or QA first is a requirement

Required! Run addresses through DEV or QA first

Never load any new addresses into production until a successful trial run is performed on the same addresses in a non-production environment, such as development or QA.


(warning) Important considerations when running the Bulk Loader

Downstream Implications

Be aware of the downstream implications of running the Bulk Loader. The Bulk Loader adds new addresses to the EAS. This has a ripple effect of populating multiple databases.

Some of the downstream databases affected by running the Bulk Loader:

  • EAS Database (master/production) - targeted directly by the Bulk Loader
  • EAS Database (slave/replication)
  • Other internal and external business system databases that are updated in near-real-time

Backup recommended

It is highly recommended that a backup of the database be made prior to running the Bulk Load step in Stage 5.

If a problem is noticed immediately after running the Bulk Loader it may be possible to restore the database from backup.

In order to facilitate an immediate backout it is recommended that the Bulk Loader process be run after hours and with access to the EAS temporarily suspended. Otherwise, backtracking may become difficult if not impossible.

Treat like a release in production environment

When Bulk Loading in a production environment, treat it like a software release by following protocols to stop and start relevant services as outlined in the steps below.

Pick the right time

When releasing in the production environment pick a time outside core work hours that does not conflict with any cron jobs running on the production server.


Suggested folder layout

Artifacts are generated throughout the stages of the Bulk Loader Process. Below is a suggested folder and file layout for the various artifacts.

Folder and file layout for artifacts collected

###########
# Folders #
###########

./ # root (YYYYMMDD-tag e.g. 20190510-3x10k)
./gap # artifacts from the General Address Parser stage
./geocoder # input CSV and output shapefiles
./bulkload_001 # artifacts from the first batch
./bulkload_00N # artifacts from the Nth batch, if any
./bulkload_00N/shapefile/ # input for the Bulk Loader step
./excluded_records # records from all stages set aside for later review
./excluded_records/gap
./excluded_records/geocoder
./excluded_records/bulkload_001
./excluded_records/bulkload_00N

#########
# Files #
#########

./gap/gap_YYYYMMDD.csv # GAP output; geocoder input

./geocoder/bulkloader.mxd # ArcMap file

./geocoder/addresses_geocoded.shp # All geocoded records
./geocoder/geocode_score_100.shp # Records with non-tied, perfect geocoder score

./bulkload_00N/shapefile/bulkload.shp # Input for the Bulk Loader stage

./bulkload_00N/address_base.csv # New base records
./bulkload_00N/addresses.csv # New base+unit records
./bulkload_00N/change_request.csv # One record containing the unique change request id associated with this batch
./bulkload_00N/record-counts-after.csv # EAS record-count after Bulk Load
./bulkload_00N/record-counts-before.csv # EAS record-count before Bulk Load

./excluded_records/geocoder/geocode_under_100.shp # Tied and/or not perfect geocoder score

./excluded_records/bulkload_00N/address_extract_exceptions.csv
./excluded_records/bulkload_00N/address_extract_exception_totals.csv
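The folder skeleton above can be created in one step rather than by hand. A minimal sketch in Python (the root folder name and the batch count are placeholders; adjust them for each Bulk Loader iteration):

```python
import os

def make_artifact_tree(root, num_batches=1):
    """Create the suggested artifact folder skeleton under `root`."""
    batches = ["bulkload_%03d" % n for n in range(1, num_batches + 1)]
    subdirs = ["gap", "geocoder"]
    subdirs += batches
    subdirs += [os.path.join(b, "shapefile") for b in batches]
    subdirs += [os.path.join("excluded_records", d)
                for d in ["gap", "geocoder"] + batches]
    for d in subdirs:
        os.makedirs(os.path.join(root, d), exist_ok=True)

# Example: make_artifact_tree("20190510-3x10k", num_batches=3)
```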


Details for each Bulk Loader Stage

Stage 1 Import and parse reference dataset (Optional)

This optional stage is run once per Bulk Loader process. This stage can be skipped if the reference dataset is already available or if the optional 'filter by reference' step (Step 2.5) is skipped.

  • Step 1.1 - Import reference dataset

  1. Script Name (*) csv2pg.py
  2. URL https://bitbucket.org/sfgovdt/sfgis-general-address-parser/src/master/csv2pg.py
  3. Important arguments
    1. input_file - The relative path to the raw CSV file.
    2. output_table - The name of the table for the imported records.
  4. Example usage
    1. Import CSV file into PostgreSQL table (++<odbc>)

      csv2pg
      python csv2pg.py \
        --odbc_server=<odbc-server> \
        --odbc_port=<odbc-port> \
        --odbc_database=<odbc-database> \
        --odbc_uid=<odbc-uid> \
        --odbc_pwd=<odbc-pwd> \
        --input_file=./path/to/raw_reference_file.csv \
        --output_table=reference_raw
  5. Output table
    1. This step generates a new table. The name of the new table is passed as a required command line argument.
  6. Output fields
    1. All columns in the input CSV file are imported as fields in the new table.
    2. sfgisgapid: a new serial, not null, primary key
  7. Artifacts (**)
    1. reference_raw.csv - The input CSV table serves as the artifact for this step.


  • Step 1.2 - Parse reference dataset

  1. Script Name (*) parse_address_table.py
  2. URL https://bitbucket.org/sfgovdt/sfgis-general-address-parser/src/master/parse_address_table.py
  3. Important arguments
    1. address_column: Name of address field. Required.
    2. city_column: Name of city field. Optional.
    3. state_column: Name of state field. Optional.
    4. zip_column: Name of ZIP Code field. Optional.
    5. output_table: Name of the output table. Required.
  4. Example usage
    1. Parse address table (++<odbc>)

      parse_address_table.py
      python parse_address_table.py \
        --odbc_server=<odbc-server> \
        --odbc_port=<odbc-port> \
        --odbc_database=<odbc-database> \
        --odbc_uid=<odbc-uid> \
        --odbc_pwd=<odbc-pwd> \
        --address_table=reference_raw \
        --primary_key=sfgisgapid \
        --address_column=address \
        --city_column=city \
        --state_column=state \
        --zip_column=zip \
        --output_table=reference_parsed \
        --output_report_file=./output/reference.xlsx \
        --limit=-1
  5. Output table
    1. This step generates a new table. The new table contains all of the fields and records of the input table with additional fields noted below.
  6. Output fields
    1. The output consists of a field for every kind of parsed address part that is generated by the parsing tool, Parserator. It also contains additional concatenated fields as defined in the FGDC's United States Thoroughfare, Landmark, and Postal Address Data Standard as well as metadata fields.
      1. address_to_parse character varying(255)
      2. address_number bigint
      3. address_number_suffix character varying(255)
      4. street_name_pre_directional character varying(255)
      5. street_name_pre_type character varying(255)
      6. street_name character varying(255)
      7. street_name_post_type character varying(255)
      8. street_name_post_directional character varying(255)
      9. subaddress_type character varying(255)
      10. subaddress_identifier character varying(255)
      11. state_name character varying(255)
      12. zip_code character varying(255)
      13. complete_address_number character varying(255)
      14. complete_street_name character varying(255)
      15. complete_subaddress character varying(255)
      16. complete_place_name character varying(255)
      17. delivery_address_with_unit character varying(255)
      18. delivery_address_without_unit character varying(255)
      19. parsed_address character varying(1000)
      20. parser_had_issues boolean
      21. parser_message character varying(2000)
      22. parserator_tag character varying(255)
      23. address_number_parity character varying(255)
      24. counter integer
  7. Artifacts (**)
    1. reference_parsed.csv - Export the new table, e.g. reference_parsed, as a CSV, e.g. reference_parsed.csv, and archive in the network folder dedicated to artifacts for the Bulk Loader iteration.

Stage 2 - Import, parse and filter source dataset

This stage is run once per Bulk Loader process. It involves importing, parsing and filtering the source dataset. The result is a dataset ready to be geocoded in Stage 3.

  • Step 2.1 - Import source dataset

  1. Script Name (*) csv2pg.py
  2. URL https://bitbucket.org/sfgovdt/sfgis-general-address-parser/src/master/csv2pg.py
  3. Important arguments
    1. input_file: The relative path to the raw CSV file.
    2. output_table: The name of the table for the imported records.
  4. Example usage
    1. Import CSV file into PostgreSQL table (++<odbc>)

      csv2pg
      python csv2pg.py \
        --odbc_server=<odbc-server> \
        --odbc_port=<odbc-port> \
        --odbc_database=<odbc-database> \
        --odbc_uid=<odbc-uid> \
        --odbc_pwd=<odbc-pwd> \
        --input_file=./path/to/raw_source_file.csv \
        --output_table=addresses_raw
  5. Output table
    1. This step generates a new table. The name of the new table is passed as a required command line argument.
  6. Output fields
    1. All columns in the input CSV file are imported as fields in the new table.
    2. sfgisgapid: a new serial, not null, primary key
  7. Artifacts (**)
    1. addresses_raw.csv - The input CSV table serves as the artifact for this step.


  • Step 2.2 - Exclude from source dataset where address_number has range

  1. Create Output Tables
    1. addresses_no_range 

      addresses_no_range
      CREATE TABLE addresses_no_range AS SELECT * FROM addresses_raw WHERE NOT add_number LIKE '%-%';
    2. addresses_with_range 

      addresses_with_range
      CREATE TABLE addresses_with_range AS SELECT * FROM addresses_raw WHERE add_number LIKE '%-%';
  2. Artifacts (**)
    1. addresses_no_range.csv - This is an archive of the records that made it past the filter to the next step.
    2. addresses_with_range.csv - This is an archive of the records that did not make it past the filter. This artifact is part of the exclusion set of addresses that are set aside for later review.
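The SQL partition above amounts to a single rule: an add_number containing a hyphen is treated as a range and excluded. A sketch of the same rule in Python, useful for spot-checking the split (the record shape is an assumption for illustration):

```python
def has_address_range(add_number):
    """True if the address number looks like a range, e.g. '123-125'."""
    return "-" in str(add_number)

def partition_by_range(records):
    """Split records into (no_range, with_range), mirroring the two tables."""
    no_range = [r for r in records if not has_address_range(r["add_number"])]
    with_range = [r for r in records if has_address_range(r["add_number"])]
    return no_range, with_range
```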


  • Step 2.3 - Parse source dataset

  1. Script Name (*) parse_address_table.py
  2. URL https://bitbucket.org/sfgovdt/sfgis-general-address-parser/src/master/parse_address_table.py
  3. Important arguments
    1. address_column: Name of address field. Required.
    2. city_column: Name of city field. Optional.
    3. state_column: Name of state field. Optional.
    4. zip_column: Name of ZIP Code field. Optional.
    5. output_table: Name of the output table. Required.
  4. Example usage
    1. Parse address table (++<odbc>)

      parse_address_table
      python parse_address_table.py \
        --odbc_server=<odbc-server> \
        --odbc_port=<odbc-port> \
        --odbc_database=<odbc-database> \
        --odbc_uid=<odbc-uid> \
        --odbc_pwd=<odbc-pwd> \
        --address_table=addresses_no_range \
        --primary_key=sfgisgapid \
        --address_column=address \
        --city_column=city \
        --state_column=state \
        --zip_column=zip \
        --output_table=addresses_parsed \
        --output_report_file=./output/source_report.xlsx \
        --limit=-1
  5. Output table
    1. This step generates a new table. The new table contains all of the fields and records of the input table with additional fields noted below.
  6. Output fields
    1. The output consists of a field for every kind of parsed address part that is generated by the parsing tool, Parserator. It also contains additional concatenated fields as defined in the FGDC's United States Thoroughfare, Landmark, and Postal Address Data Standard as well as metadata fields. See also Step 1.2 Output Fields.
  7. Artifacts (**)
    1. addresses_parsed.csv: Export the new table, e.g. addresses_parsed, as a CSV, e.g. addresses_parsed.csv, and archive in the network folder dedicated to artifacts for the Bulk Loader iteration.


  • Step 2.4 - Exclude addresses where street_name_post_direction is not blank and corresponding streets

  1. Script Name (*) remove_postdirectionals.py
  2. URL https://bitbucket.org/sfgovdt/sfgis-general-address-parser/src/master/remove_postdirectionals.py
  3. Important arguments
    1. output_table: The name of the table for the filtered records.
  4. Example usage
    1. Remove records with post directional values  (++<odbc>)

      remove_postdirectionals
      python remove_postdirectionals.py \
        --odbc_server=<odbc-server> \
        --odbc_port=<odbc-port> \
        --odbc_database=<odbc-database> \
        --odbc_uid=<odbc-uid> \
        --odbc_pwd=<odbc-pwd> \
        --address_table=addresses_parsed \
        --output_table=addresses_no_post_dir \
        --output_report_file=./output/source_report.xlsx \
        --limit=-1
    2. Create table addresses_with_post_dir

      addresses_with_post_dir
      CREATE TABLE addresses_with_post_dir AS SELECT * FROM addresses_parsed WHERE id NOT IN (SELECT id FROM addresses_no_post_dir);
  5. Output table
    1. This step generates a new table. The new table contains all of the fields and records of the input table excluding any that have a post direction value and corresponding addresses with streets without the post direction.
  6. Artifacts (**)
    1. addresses_no_post_dir.csv - This is an archive of the records that made it past the filter to the next step.
    2. addresses_with_post_dir.csv - This is an archive of the records that did not make it past the filter. This artifact is part of the exclusion set of addresses that are set aside for later review.


  • Step 2.5 - Exclude addresses not in reference (Optional)

(info) This optional step excludes from the source dataset any addresses that are not also found in the parsed reference dataset.

(info) For an understanding of how this step handles duplicates in either the source or reference dataset, see the comments in EAS Issue 281, Refine the filtering regime for address data to be bulk-loaded.

  1. Script Name (*) is_address_in_reference.py
  2. URL https://bitbucket.org/sfgovdt/sfgis-general-address-parser/src/master/is_address_in_reference.py
  3. Important arguments
    1. source_output_field - The name of a new integer field that will store the results of the step.
  4. Example usage
    1. Add field to flag if address is found in reference  (++<odbc>)

      is_address_in_reference
      python is_address_in_reference.py \
        --odbc_server=<odbc-server> \
        --odbc_port=<odbc-port> \
        --odbc_database=<odbc-database> \
        --odbc_uid=<odbc-uid> \
        --odbc_pwd=<odbc-pwd> \
        --source_table=addresses_no_post_dir \
        --reference_table=reference_parsed  \
        --source_pk=id  \
        --reference_pk=id  \
        --source_address_number=address_number \
        --source_address_number_suffix=address_number_suffix \
        --source_street_name_pre_directional=street_name_pre_directional \
        --source_street_name_pre_type=street_name_pre_type \
        --source_street_name=street_name \
        --source_street_name_post_type=street_name_post_type \
        --source_street_name_post_directional=street_name_post_directional \
        --source_subaddress_type=subaddress_type \
        --source_subaddress_identifier=subaddress_identifier \
        --source_place_name=place_name \
        --reference_address_number=address_number \
        --reference_address_number_suffix=address_number_suffix \
        --reference_street_name_pre_directional=street_name_pre_directional \
        --reference_street_name_pre_type=street_name_pre_type \
        --reference_street_name=street_name \
        --reference_street_name_post_type=street_name_post_type \
        --reference_street_name_post_directional=street_name_post_directional \
        --reference_subaddress_type=subaddress_type \
        --reference_subaddress_identifier=subaddress_identifier \
        --reference_place_name=place_name \
        --source_output_field=in_reference \
        --limit=-1
    2. Create and archive table addresses_in_reference

      addresses_in_reference
      CREATE TABLE addresses_in_reference AS SELECT * FROM addresses_no_post_dir WHERE in_reference = 1;
    3. Create and archive table addresses_not_in_reference

      addresses_not_in_reference
      CREATE TABLE addresses_not_in_reference AS SELECT * FROM addresses_no_post_dir WHERE in_reference = 0;
  5. Output table
    1. This step generates a new table. The name of the new table is passed as a required command line argument.
  6. Output fields
    1. A new field that tells if the source address was found in the reference set (1), not found (0), or not yet checked (-1). The name of the new field is passed as a required command line argument.
  7. Artifacts (**)
    1. addresses_in_reference.csv - This is an archive of the records that made it past the filter to the next step.
    2. addresses_not_in_reference.csv - This is an archive of the records that did not make it past the filter. This artifact is part of the exclusion set of addresses that are set aside for later review.


  • Step 2.6 - Export source dataset for geocoding

  1. Script Name (*) pg2bulkloader.py
  2. URL https://bitbucket.org/sfgovdt/sfgis-general-address-parser/src/master/pg2bulkloader.py
  3. Important arguments
    1. input_table - The name of the table containing addresses to be geocoded.
    2. output_file_name - The name of the CSV created from the input table.
    3. input_source - The text associated with the data source of this batch of addresses. The value entered here appears in the EAS user interface (under the source of a given base address or unit). (warning) This field has a 32 character limit. Exceeding 32 characters will cause the Bulk Loader to crash.
  4. Example usage
    1. Create table addresses_to_geocode

      addresses_to_geocode
      -- Select, order and save as 'addresses_to_geocode'
      CREATE TABLE addresses_to_geocode AS
      SELECT * FROM addresses_in_reference
      ORDER BY
        street_name,
        street_name_pre_directional,
        street_name_pre_type,
        street_name_post_type,
        street_name_post_directional,
        address_number,
        address_number_suffix,
        subaddress_type,
        subaddress_identifier,
        place_name
    2. Add counter field

      counter
      -- Add and index a counter field
      ALTER TABLE addresses_to_geocode DROP COLUMN IF EXISTS counter;
      ALTER TABLE addresses_to_geocode ADD COLUMN counter SERIAL;
      CREATE INDEX ON addresses_to_geocode (counter);
    3. Export PostgreSQL table to CSV file for Bulk Loading (++<odbc>)

      pg2bulkloader
      python pg2bulkloader.py \
        --odbc_server=<odbc-server> \
        --odbc_port=<odbc-port> \
        --odbc_database=<odbc-database> \
        --odbc_uid=<odbc-uid> \
        --odbc_pwd=<odbc-pwd> \
        --input_table=addresses_to_geocode \
        --output_file_name=./output/addresses_to_geocode \
        --input_source='Imported Addresses (20XX-XX-XX)' \
        --limit=-1
    4. Archive a copy of table addresses_to_geocode to artifact addresses_to_geocode.csv
  5. Output table
    1. addresses_to_geocode
  6. Output file
    1. This step generates a new CSV file. The name of the new file is passed as a required command line argument.
  7. Artifacts (**)
    1. addresses_to_geocode.csv - A CSV file of addresses ready for geocoding.
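Because an input_source value longer than 32 characters will crash the Bulk Loader (see the warning above), it is worth validating the value before exporting. A minimal sketch of such a check (the limit is from this document; the function name is illustrative):

```python
MAX_SOURCE_LEN = 32  # EAS limit; exceeding it crashes the Bulk Loader

def check_input_source(source):
    """Raise if the --input_source value exceeds the EAS 32-character limit."""
    if len(source) > MAX_SOURCE_LEN:
        raise ValueError("input_source is %d characters; the limit is %d"
                         % (len(source), MAX_SOURCE_LEN))
    return source

check_input_source('Imported Addresses (2019-05-10)')  # 31 characters: OK
```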

Stage 3 Geocode and filter

  • Step 3.1 - Geocode source dataset

  1. Input - addresses_to_geocode.csv
  2. Output - addresses_geocoded.shp
  3. Detailed Substeps

    1. Create folder for storing all input and output artifacts for this iteration of the Bulk Loader process.

      • For example, R:\Tec\..\Eas\_Task\2018_2019\path\to\archive\bulkloader_process_YYYYMMDD

    2. Create new ArcMap map document (ArcMap 10.6.1) 

      • For example, bulkloader_YYYYMMDD.mxd

    3. Add streets (optional)

      • See StClines_20190129.shp in R:\Tec\...\Eas\_Task\2018_2019\20181128_248_DocumentBulkLoader\Data

    4. Create Personal Geodatabase

      1. Right click Home in Folder Connections in Catalog

      2. Select New → Personal Geodatabase

    5. Import CSV into File Geodatabase

      1. Right-click new personal geodatabase: Import → Table (single)

      2. Browse to addresses_to_geocode.csv

      3. Specify output table addresses_to_geocode

      4. Click OK and time the operation with a stopwatch. Wait up to 5 minutes for the Table of Contents to update.

    6. Geocode with Virtual Address Geocoder
      1. Right-click the table addresses_to_geocode in the ArcMap table of contents and select Geocode Addresses
      2. In the dialog 'Choose an address geocoder to use' select 'Add'
        1. Browse to and select R:\311\...\StClines_20150729_VirtualAddressLocator (TODO - Replace with new path)
        2. Click 'OK'
      3. Under 'Address Input Fields' select 'Multiple Fields'
        1. Street or intersection: address
        2. ZIP Code: zip
      4. Under Output
        1. Click folder icon. In popup change 'Save as type' to 'Shapefile'.
        2. Save shapefile in path dedicated to artifacts for this Bulk Loader Progress

          1. For example, R:\Tec\..\Eas\_Task\2018_2019\path\to\archive\bulkloader_YYYYMMDD\geocoder\

          2. addresses_geocoded.shp

      5. Time the operation with a stopwatch and note the execution time.

  4. Artifacts (**)
    1. bulkloader_process_YYYYMMDD.mxd

    2. addresses_geocoded.shp 

  • Step 3.2 - Filter geocoded dataset

The purpose of this step is to filter geocoded addresses based on their geocoding score and status.

  • The geocoding score has a maximum of 100 points. This step keeps addresses with a score of 100.
  • The geocoding process also applies a status value: 'M' for matched, 'T' for tied with another address, or 'U' for unmatched.
  • This step filters out any addresses that are tied with another address (status of 'T'), even if the score is 100. 
  1. Input - addresses_geocoded.shp

  2. Detailed Substeps

    1. Open the ArcMap document created in the previous step.

    2. Filter matched addresses (status = 'M') with score of 100.

      1. In the Table of Contents right-click the shapefile from previous step, e.g. addresses_geocoded.shp, and select Open Attributes Table.

      2. Click the first icon in the menu bar (top-left) and select Select By Attributes.

      3. Enter the following WHERE clause to select matched addresses with a score of 100:

        • ("Status" = 'M') AND ("Score" = 100)
      4. Click Apply, wait for the operation to complete and then close the Attributes window

    3. Save results to geocoder_score_100.shp

      1. In the Table of Contents right-click the shapefile from the previous step and select Data → Export Data

      2. Click the browse icon.

      3. In the file browser select the Save as type dropdown and select Shapefile.

      4. Save the shapefile to the artifacts folder dedicated to this iteration of the Bulk Loader Process.

    4. Filter the opposite set: status not matched ('M') or score not 100.

      1. Right-click addresses_geocoded.shp and open the attributes table.

      2. Enter the following WHERE clause:

        • NOT ("Status" = 'M') OR NOT ("Score" = 100)

    5. Save results to geocoder_notmatched_or_under100.shp. (See substeps above.)

  3. Output / Artifacts

    1. geocoder_score_100.shp - Shapefile of matched addresses (status = 'M') with geocoding score of 100.

    2. geocoder_notmatched_or_under100.shp - Shapefile of non-matched addresses (unmatched or tied) or with geocoding score less than 100.

(warning) The total number of records of the two output shapefiles should be the same as the number of records in the input shapefile.
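The reconciliation check above can be automated: every input record must land in exactly one of the two outputs. A sketch of the split and check, assuming records carry Status and Score attributes as in the shapefiles:

```python
def split_geocoded(records):
    """Partition geocoded records the same way the two WHERE clauses do."""
    kept = [r for r in records if r["Status"] == "M" and r["Score"] == 100]
    excluded = [r for r in records
                if not (r["Status"] == "M" and r["Score"] == 100)]
    # Totals must reconcile: kept + excluded == input
    assert len(kept) + len(excluded) == len(records)
    return kept, excluded
```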


Stage 4 Export shapefile - full set (single batch) or subset (multiple batches)

A note about batches

Stages 4, 5 and 6 can be run one time with the results from Stage 3, or they can be run in multiple batches of subsets.

A major consideration of when to run the full set at once versus in batches is the number of records being Bulk Loaded.

The size of each Bulk Loader operation affects the following aspects of the EAS:

  • The disk space consumed by the database server
  • The EAS user interface section that lists addresses loaded in a given Bulk Loader operation
  • The weekly email attachment listing new addresses added to the EAS

For medium-to-large datasets (input sets with over 1,000 records) it is recommended that the Bulk Loading process be run in batches over several days or weeks.

Reminder! It is required that the process first be run on a development server to assess the implications of the operation.

The remaining steps will document a single batch iteration. Repeat these steps in a multi-batch process.
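Each batch is selected with a counter range in the WHERE clause (see Step 4.1, e.g. "counter_" > 50000 AND "counter_" <= 100000). A sketch for computing those bounds, assuming the counter runs from 1 to the total record count:

```python
def batch_bounds(total_records, batch_size):
    """Return (low, high) pairs for WHERE counter > low AND counter <= high."""
    bounds = []
    low = 0
    while low < total_records:
        high = min(low + batch_size, total_records)
        bounds.append((low, high))
        low = high
    return bounds

# batch_bounds(120000, 50000) -> [(0, 50000), (50000, 100000), (100000, 120000)]
```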

  • Step 4.1 - Export shapefile for Bulk Loading (entire set or subset batch)

  1. Substeps
    1. Create Batch from Subset (optional)

(info) This example shows filtering for the second batch of 50,000 records using the counter_ field.

      1. Right-click the layer geocoder_score_100 in the ArcMap Table of Contents and select Open Attributes Table.
      2. Click the first icon in the menu bar (top-left) and select Select By Attributes.
      3. Enter the following WHERE clause to select the current batch of 50,000 records:

        "counter_" > 50000 AND "counter_" <= 100000
      4. Click Apply, wait for operation to complete and then close the Attributes window
        • (warning) The batch may contain fewer than 50,000 records due to filtering by geocoding results in the previous step.
    2. Export for Bulk Loader
      1. In the Table of Contents right-click the layer geocoder_score_100 and select Data → Export Data.

      2. Click the browse icon.

      3. In the file browser select the Save as type dropdown and select Shapefile.

      4. Save the shapefile to the artifacts folder dedicated to this iteration of the Bulk Loader Process.

 e.g. R:\Tec\..\Eas\_Task\2018_2019\path\to\archive\bulkloader_process_YYYYMMDD\bulkloader\batch_NNN\bulkload.shp

  2. Artifacts
    1. bulkload.shp - Shapefile for loading into the Bulk Loader in the next stage.

Stage 5 Run the Bulk Loader

(info) For a complete set of steps and background about the Bulk Loader, see also Running the Bulk Loader, a page dedicated to its input, operation and results.

Required! Run addresses through DEV or QA first

Never load any new addresses into production until a successful trial run is performed on the same addresses in a non-production environment, such as development or QA.


  • Step 5.1 - Disable front-end access to EAS

  1. Notify relevant recipients that the Bulk Loader Process is starting
  2. Disable web service on <environment>_WEB (SF DEV WEB, SF QA WEB, SF PROD WEB)

    cd /var/www/html
    sudo ./set_eas_mode.sh MAINT
    1. Browse to web site, http://eas.sfgov.org/, to confirm the service has stopped. (Expect to see message that EAS is currently out of service.)


  • Step 5.2 - Non-production-specific preparation

  1. Restore Database

    1. Restore database from latest daily production backup


  • Step 5.3 - Production-specific preparation

  1. Halt Services

    Reason for halting services

    These steps facilitate an immediate rollback of the EAS database if the Bulk Load process ends in failure.

    1. SKIP Turn off the replication server

      1. Disable database replication by shutting down the database service on the replication server (DR PROD DB).

        Stop PostgreSQL
        #sudo -u postgres -i
        #/usr/pgsql-9.0/bin/pg_ctl -D /data/9.0/data stop
    2. Turn off downstream database propagation service(s)

      1. Suspend downstream replication to internal business system database (SF PROD WEB).

        stop xmit
        sudo /var/www/html/eas/bin/xmit_change_notifications.bsh stop
  2. Backup Database

Backup Database
sudo -u postgres -i
/home/dba/scripts/dbbackup.sh > /var/tmp/dbbackup.log # this step takes about 2 minutes
ls -l /var/tmp # ensure the log file is 0 bytes (no errors)
ls -la /mnt/backup/pg/daily/easproddb.sfgov.org-* # the timestamp on the last file listed should match the time of the backup
exit # log out of user postgres when done
  • Step 5.4 - Database Preparation

  1. Connect to the database, <environment>_DB, and clear any leftover records from previous Bulk Loader batches.

    TRUNCATE
    TRUNCATE bulkloader.address_extract, bulkloader.blocks_nearest;
    VACUUM
    VACUUM FULL ANALYZE bulkloader.address_extract;
    VACUUM
    VACUUM FULL ANALYZE bulkloader.blocks_nearest;
  2. Make note of EAS record counts before the Bulk Loading operation.

    Record Counts
    SELECT schemaname, relname, n_live_tup FROM pg_stat_user_tables ORDER BY schemaname, relname, n_live_tup;
    1. Save results in artifact as record_counts_before.csv 
    2. Also save results in Excel spreadsheet artifact as BulkLoader_Process_YYYYMMDD.xlsx
  3. Make note of the database partition size on the file system at the current point in time.

    disk usage
    date; df /data # 1st of 3
  • Step 5.5 - Transfer Shapefiles

  1. Transfer the bulkload.shp shapefile from Stage 4 to an EAS automation machine, <environment>_AUTO.
  2. Substitute <environment> with one of the relevant environments: SF_DEV, SF_QA, SF_PROD, SD_PROD.
  3. Copy the shapefile to the folder C:\apps\eas_automation\app_data\data\bulkload_shapefile.


  • Step 5.5 - Run Bulk Loader

  1. Open a command prompt and change folders:

    cd C:\apps\eas_automation\automation\src
  2. Run the step to stage the address records:

    python job.py --job stage_bulkload_shapefile --env <environment> --action EXECUTE --v
    python job.py --job stage_bulkload_shapefile --env SF_DEV --action EXECUTE --v
    python job.py --job stage_bulkload_shapefile --env SF_QA --action EXECUTE --v
    python job.py --job stage_bulkload_shapefile --env SF_PROD --action EXECUTE --v
  3. Run the step to bulk load the address records:

    python job.py --job bulkload --env <environment> --action EXECUTE --v
    python job.py --job bulkload --env SF_DEV --action EXECUTE --v
    python job.py --job bulkload --env SF_QA --action EXECUTE --v
    python job.py --job bulkload --env SF_PROD --action EXECUTE --v
    
    
  4. (info) To calculate the Bulk Loader run time, compare the first and last timestamps in the command line output, or simply time the operation with a stopwatch or clock.


  5. Save Bulk Loader command line output artifact as bulk_loader_CLI_output.txt
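As a sketch of the timestamp approach from the info note above, the snippet below pulls the first and last timestamps out of the saved CLI output and reports the elapsed seconds. The timestamp format is an assumption; adjust the pattern to match the actual job.py log lines.

```python
# Sketch: compute elapsed time from the first and last timestamps found in
# bulk_loader_CLI_output.txt. The "YYYY-MM-DD HH:MM:SS" format is assumed.
import re
from datetime import datetime

TS_PATTERN = re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}")

def elapsed_seconds(log_text):
    """Return seconds between the first and last timestamp, or None."""
    stamps = [datetime.strptime(m, "%Y-%m-%d %H:%M:%S")
              for m in TS_PATTERN.findall(log_text)]
    if len(stamps) < 2:
        return None
    return (stamps[-1] - stamps[0]).total_seconds()
```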


  • Step 5.6 - Analysis

  1. Make note of the database partition size on the file system at this point. Compare with size of partition prior to loading to get the total disk space used as a result of running the Bulk Loader.

    disk usage
    date; df /data # 2nd of 3
  2. Make note of EAS record counts after the Bulk Load operation.

    Record Counts
    SELECT schemaname, relname, n_live_tup FROM pg_stat_user_tables ORDER BY schemaname, relname, n_live_tup;
    • Save results as artifact record_counts_after.csv
    • Also save results in Excel spreadsheet artifact as BulkLoader_Process_YYYYMMDD.xlsx
    • In the spreadsheet, calculate the difference between the 'before' and 'after' record counts. The results will indicate the number of new base addresses added to the table `public.address_base` and the number of new addresses and units added to the table `public.addresses`.

    • (info) See dedicated Bulk Loader page, Running the Bulk Loader, for more analysis options.
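The before/after comparison described above can also be done in a short script rather than by hand in the spreadsheet. This sketch assumes the two CSV artifacts were saved with the column headers produced by the pg_stat_user_tables query (schemaname, relname, n_live_tup); the function name is illustrative.

```python
# Sketch: diff the 'before' and 'after' record-count exports to find how many
# rows each table gained during the Bulk Load.
import csv

def count_diffs(before_csv, after_csv):
    """Return {(schema, table): added_rows} for tables whose counts changed."""
    def load(path):
        with open(path, newline="") as f:
            return {(r["schemaname"], r["relname"]): int(r["n_live_tup"])
                    for r in csv.DictReader(f)}
    before, after = load(before_csv), load(after_csv)
    return {key: after[key] - before.get(key, 0)
            for key in after if after[key] != before.get(key, 0)}
```

The entries for public.address_base and public.addresses give the new base address and new address/unit counts described above.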

Stage 6 - Extract Results

  • Step 6.1 - Archive exceptions

Info about the 'address_extract' table

The Bulk Loader operation in Stage 5 populated an EAS table named 'bulkloader.address_extract' with every address it attempted to load.

If any errors occurred on a given address during the load, the Bulk Loader populated the 'exception_text' field with a description of the error.

  1. Archive the entire address_extract table.
    1. Use a query tool such as pgAdmin to query and save the table as a CSV file.

      address_extract
      SELECT * FROM bulkloader.address_extract;
    2. Save the file in the network folder dedicated to artifacts for the Bulk Loader iteration.
      1. Save as artifact address_extract.csv
  2. Archive the addresses that raised exceptions during the Bulk Loader process
    1. Query subtotals

      exception_text_counts
      SELECT exception_text, Count(*) FROM bulkloader.address_extract GROUP BY exception_text ORDER BY exception_text;
      1. Save artifact as exception_text_counts.csv

    2. Query all exception text records

      exception_text
      SELECT * FROM bulkloader.address_extract WHERE exception_text IS NOT NULL ORDER BY exception_text, id;
      1. Save artifact as exception_text.csv
  3. Artifacts
    1. address_extract.csv - Results of every address submitted to the Bulk Loader.
    2. exception_text_counts.csv - Counts of the records that were not loaded due to the error indicated in the 'exception_text' field.
    3. exception_text.csv - Subset of just the records that were not loaded due to the error indicated in the 'exception_text' field.
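As an optional cross-check on the exception_text_counts.csv artifact, the subtotals can be recomputed directly from address_extract.csv. This sketch assumes the CSV retains the 'exception_text' column name; the function name is illustrative.

```python
# Sketch: recount exception subtotals from the archived address_extract.csv,
# mirroring the GROUP BY query above.
import csv
from collections import Counter

def exception_counts(path):
    """Count non-empty exception_text values in the extract CSV."""
    with open(path, newline="") as f:
        return Counter(r["exception_text"] for r in csv.DictReader(f)
                       if r["exception_text"])
```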


  • Step 6.2 - Archive unique EAS change_request_id associated with the Bulk Load

  1. Get the unique EAS change_request_id created by the Bulk Load operation. The value of <change_request_id> will be used in the next steps to count addresses added to the EAS.
    1. Query the 'public.change_requests' table for the new 'change_request_id' value.

      change_request_id
      SELECT change_request_id FROM public.change_requests 
      WHERE requestor_comment LIKE 'bulk load change request' 
      ORDER BY change_request_id DESC 
      LIMIT 1;
    2. Save artifact as change_request_id.csv
  2. Artifacts
    1. change_request_id.csv - The unique EAS change_request_id created by the Bulk Load operation.


  • Step 6.3 - Archive new EAS addresses records

  1. Get all the address records (including units) added to the EAS during the Bulk Loader operation.
    1. Query the public.addresses table on the new change_request_id value.

      addresses
      SELECT * FROM public.addresses
      WHERE activate_change_request_id = <change_request_id>;
    2. Save artifact as addresses.csv
  2. Extract sample unit address from the output
    1. Pick a random record from the results where unit_num is not NULL. Gather the value in the address_base_id field.
    2. Construct a URL from this value like this: http://eas.sfgov.org/?address=NNNNNN
      • Where NNNNNN is the value from the address_base_id field.
    3. Make note of this URL for use in Step 7 when testing EAS after services are restored.
  3.  Artifacts
    1. addresses.csv - All the address records (including units) added to the EAS during the Bulk Loader operation.
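The URL construction in step 2 above is a simple substitution of the address_base_id value. A minimal sketch:

```python
# Sketch: build the EAS review URL from an address_base_id value, as described
# in the sample-address step above.
def eas_url(address_base_id):
    """Return the EAS lookup URL for a given address_base_id."""
    return f"http://eas.sfgov.org/?address={int(address_base_id)}"
```

For example, `eas_url(123456)` returns `http://eas.sfgov.org/?address=123456`.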


  • Step 6.4 - Archive new EAS address_base records

  1. Get all the base records added to the EAS during the Bulk Loader operation.
    1. Query the public.address_base table on the new change_request_id value.

      address_base
      SELECT activate_change_request_id, address_id, public.address_base.*
      FROM public.address_base
      JOIN public.addresses
        ON public.address_base.address_base_id = public.addresses.address_base_id
      WHERE public.addresses.address_base_flg = TRUE
      AND public.addresses.activate_change_request_id = <change_request_id>;
    2. Save artifact as address_base.csv
  2. Extract sample base address from the output
    1. Pick a random record from the results. Gather the value in the address_base_id field.
    2. Construct a URL from this value like this: http://eas.sfgov.org/?address=NNNNNN
      • Where NNNNNN is the value from the address_base_id field.
    3. Make note of this URL for use in Step 7 when testing EAS after services are restored.
  3. Artifacts
    1. address_base.csv - All the base records added to the EAS during the Bulk Loader operation.



  • Step 6.5 - Cross check results

Compare the results of Stage 5 with the results from Stage 6.

  1. The number of addresses found in Step 5.6 (2) should be identical to the number of addresses found in Step 6.3.

  2. The number of base addresses found in Step 5.6 (2) should be identical to the number of base addresses found in Step 6.4.
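The cross-check above can be done by counting rows in the Stage 6 CSV artifacts and comparing them to the Stage 5.6 spreadsheet deltas. This sketch assumes each artifact is a CSV with one header row; the function names are illustrative.

```python
# Sketch: cross-check a Stage 5.6 record-count delta against the row count of
# a Stage 6 CSV artifact (addresses.csv or address_base.csv).
import csv

def csv_row_count(path):
    """Count data rows in a CSV artifact (excluding the header)."""
    with open(path, newline="") as f:
        return sum(1 for _ in csv.DictReader(f))

def cross_check(stage5_delta, artifact_csv):
    """True when the Stage 5 delta matches the Stage 6 artifact row count."""
    return stage5_delta == csv_row_count(artifact_csv)
```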

Stage 7 - Cleanup and Restoration

  • Step 7.1 - Database Cleanup

  1. Connect to the database, <environment>_DB, and clear address_extract records from the latest Bulk Loader batch. Make note of final disk usage tally.


TRUNCATE and VACUUM
TRUNCATE bulkloader.address_extract, bulkloader.blocks_nearest;
VACUUM FULL ANALYZE bulkloader.address_extract;
VACUUM FULL ANALYZE bulkloader.blocks_nearest;
disk usage
date; df /data # 3rd of 3
# Optional step: archive output to 'df.txt' artifact
exit
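The three `df /data` snapshots taken across the process (1st, 2nd, and 3rd of 3) can be compared to get the disk space consumed between any two points. This sketch assumes standard df output with the 'Used' column in 1K blocks; the function names are illustrative.

```python
# Sketch: compute disk space consumed between two 'df /data' snapshots.
def used_kb(df_output):
    """Extract the Used column (in KB) from a df snapshot."""
    lines = df_output.strip().splitlines()
    return int(lines[-1].split()[2])  # Filesystem, 1K-blocks, Used, ...

def space_consumed_kb(before_output, after_output):
    """KB of disk space used between two snapshots of the same partition."""
    return used_kb(after_output) - used_kb(before_output)
```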
  • Step 7.2 - Clean automation machine

  1. Return to automation machine and remove shapefile from 'bulkload_shapefile' folder.
  2. Logout of automation machine. 
  • Step 7.3 - On Failure Restore Database

  1. If the Bulk Loader failed and corrupted any data then restore from the database backup.
    1. Follow these steps to restore from backup.


  • Step 7.4 - Restore Services (Production Only)

  1. SKIP Turn on production-to-replication service
    • Re-enable database replication by restarting the database service on the replication server (DR PROD DB).

      Start PostgreSQL
      #sudo -u postgres -i
      #/usr/pgsql-9.0/bin/pg_ctl -D /data/9.0/data start
  2. SKIP Turn on downstream database propagation service(s)
    • Resume downstream replication to internal business system database (SF PROD WEB).

      start xmit
      #sudo /var/www/html/eas/bin/xmit_change_notifications.bsh start
  • Step 7.5 - Enable front-end access to EAS

  1. Enable web service on <environment>_WEB (SF DEV WEB, SF QA WEB, SF PROD WEB)

    cd /var/www/html
    sudo ./set_eas_mode.sh LIVE
    exit
  2. Browse to the website, http://eas.sfgov.org/, and review the sample addresses gathered in Step 6.3 and Step 6.4.

  3. Notify the relevant recipients that the Bulk Loader process is complete.


  • Step 7.6 - Archive artifacts

  1. List of artifacts
    1. address_base.csv
    2. address_extract.csv
    3. addresses.csv
    4. bulk_loader_CLI_output.txt
    5. change_request_id.csv
    6. df.txt
    7. exception_text.csv
    8. exception_text_counts.csv
  2. Contents of progress and summary artifact, BulkLoader_Process_YYYYMMDD.xlsx


    1. Progress - This sheet contains a table of relevant totals for each batch

      1. Batch number

      2. Batch date

      3. Input record counts

      4. New base record counts

      5. New unit record counts

      6. Sample addresses


    2. Email Jobs - This sheet contains a table of details related to the weekly 'Address Notification Report' automated email job

      1. Batch range

      2. Record count in batch range

      3. Email Timestamp

      4. Total record counts in email

      5. Subtotal of records generated as a result of the Bulk Loader

      6. Size of email attachment


    3. Batch N - This sheet tracks the before and after record counts for all tables in the EAS database. There is a sheet for each batch loaded. Within each sheet is a section for the 'before' records, a section for the 'after' record counts, and a 'diff' column showing the change in record counts.


END OF STEPS

Notes

(+)  Substitute EAS <environment> with one of the relevant environments: SF_DEV, SF_QA, SF_PROD, SD_PROD

(++<odbc>)  Substitute <odbc> arguments with values for an available PostgreSQL database.

<odbc-server> - Name or IP address of the database server, e.g. localhost
<odbc-port> - Port of the database server, e.g. 5432
<odbc-database> - Name of the database, e.g. awgdb
<odbc-uid> - User name
<odbc-pwd> - User password
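The <odbc> values above can be assembled into a libpq-style key/value connection string. A minimal sketch, using the placeholder values from the list (not real credentials):

```python
# Sketch: assemble a PostgreSQL connection string from the <odbc> values.
# The function name is illustrative; credentials here are placeholders.
def make_dsn(server, port, database, uid, pwd):
    """Return a PostgreSQL key/value connection string."""
    return (f"host={server} port={port} dbname={database} "
            f"user={uid} password={pwd}")
```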

(*) Scripts are written in and require Python 3. See the source code repository for more details.

(**) Artifacts should be saved in a network folder dedicated to the entire instance of a given Bulk Loader process. Artifact names shown are suggestions. Note: For large datasets the entire process could be spread over many days or weeks. Take this into consideration when naming any artifacts and subfolders.

Development stack

(info) This is the stack used for the development and testing of the steps. For best results run the steps with the same or compatible stack.

  1. Stages 1 and 2 (Parsing)
    1. Python 3 (3.6.4)
    2. PostgreSQL (10.3)
    3. pgAdmin (for executing raw SQL)
  2. Stages 3 and 4 (Geocoding)
    1. ArcMap (10.6.1)
  3. Stages 5 and 6 (Bulk Loading)
    1. EAS <environment>(+)
    2. PostgreSQL (10.3)
