
Table of Contents
Overview

...

Stage 1 - Import and parse reference dataset (Optional)
  • Category: Parsing
  • Summary: This optional step in the Bulk Loader process cross-checks each source address for a match in a reference dataset. If a source address is found in the reference dataset, it advances to the next step. If not, it is set aside in an exclusion set for later review.
  • Environment: Python 3; PostgreSQL / pgAdmin
  • Iterations: Once per Bulk Loader process
  • Estimated Person Time: 1 hour
  • Estimated Computer Time: 10 minutes

Stage 2 - Import, parse and filter source dataset
  • Category: Parsing
  • Summary: Import the dataset destined for the EAS. Parse and filter the set.
  • Environment: Python 3; PostgreSQL / pgAdmin
  • Iterations: Once per Bulk Loader process
  • Estimated Person Time: 90 minutes
  • Estimated Computer Time: 15 minutes

Stage 3 - Geocode and filter
  • Category: Geocoding
  • Summary: Geocode the set and filter further based on geocoder score and status.
  • Environment: ArcMap
  • Iterations: Once per Bulk Loader process
  • Estimated Person Time: 1 hour
  • Estimated Computer Time: 5 minutes

Stage 4 - Export full set (single batch) or subset (multiple batches)
  • Category: Geocoding
  • Summary: For large datasets, create one of many subsets that will be run through the Bulk Loader in multiple batches.
  • Environment: ArcMap
  • Iterations: One or more batches for each Bulk Loader process
  • Estimated Person Time: 30 minutes per batch
  • Estimated Computer Time: 5 minutes per batch

Stage 5 - Bulk Load batch (full set or subset)
  • Category: Bulk Loading
  • Summary: Run each subset batch through the Bulk Loader.
  • Environment: EAS <environment>(+); PostgreSQL / pgAdmin
  • Iterations: One or more batches for each Bulk Loader process
  • Estimated Person Time: 1 hour per batch
  • Estimated Computer Time: 5 minutes per batch

Stage 6 - Extract results
  • Category: Bulk Loading
  • Summary: Extract and archive the list of addresses that were added to the EAS, the unique EAS 'change request id' associated with the batch, and the addresses that were rejected by the Bulk Loader in the batch.
  • Environment: PostgreSQL / pgAdmin
  • Iterations: One or more batches for each Bulk Loader process
  • Estimated Person Time: 1 hour per batch
  • Estimated Computer Time: 5 minutes per batch

Stage 7 - Cleanup and Restoration
  • Category: Bulk Loading
  • Summary: Clean up the database, restore services and, in the event of a failure, restore from backup.
  • Iterations: One or more batches for each Bulk Loader process

...



(warning) Important considerations when running the Bulk Loader

...

...

Downstream implications

...

Note


titleDownstream Implications

Be aware of the downstream implications of running the Bulk Loader. The Bulk Loader adds new addresses to the EAS, which has a ripple effect of populating multiple downstream databases.

Some of the downstream databases affected by running the Bulk Loader:

  • EAS Database (master/production) - targeted directly by the Bulk Loader
  • EAS Database (slave/replication)
  • Other internal and external business system databases that are updated in near-real-time

Backup recommended

Note
titleBackup recommended

It is highly recommended that a backup of the database be made prior to running the Bulk Load step in Stage 5.

If a problem is noticed immediately after running the Bulk Loader it may be possible to restore the database from backup.

To facilitate an immediate backout, it is recommended that the Bulk Loader process be run after hours and with access to the EAS temporarily suspended. Otherwise, backtracking may become difficult if not impossible.
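One way to take such a backup is with pg_dump; the following is a minimal sketch, assuming the <odbc> placeholders described in the Notes section and a custom-format archive (the file name is only a suggestion, not a prescribed procedure):

Code Block
languagebash
titlebackup before Bulk Load (sketch)
# Custom-format dump (-Fc) so pg_restore can reload it selectively later.
# Substitute the <odbc> placeholders as described in the Notes section.
pg_dump -h <odbc-server> -p <odbc-port> -U <odbc-uid> -Fc \
  -f eas_pre_bulkload_$(date +%Y%m%d).dump <odbc-database>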

Treat like a release in a production environment

Warning
titleTreat like a release

When Bulk Loading in a production environment, treat it like a software release by following protocols to stop and start relevant services as outlined in the steps below.


Anchor
details
details
Details for each Bulk Loader Stage

Anchor
stage1
stage1
Stage 1 - Import and parse reference dataset (Optional)

This optional stage is run once per Bulk Loader process. It can be skipped if the reference dataset is already available or if the optional 'filter by reference' step is skipped (a sketch of that cross-check appears after Step 1.1 below).

  • Step 1.1 - Import reference dataset

  1. Script name (*): csv2pg.py
  2. URL: https://bitbucket.org/sfgovdt/sfgis-general-address-parser/src/master/csv2pg.py
  3. Important arguments
    1. input_file - The relative path to the raw CSV file.
    2. output_table - The name of the table for the imported records.
  4. Example usage
    1. Import CSV file into PostgreSQL table (++<odbc>)

      Code Block
      languagebash
      firstline1
      titlecsv2pg
      linenumberstrue
      # Substitute the <odbc> placeholders as described in the Notes section below.
      python csv2pg.py \
        --odbc_server=<odbc-server> \
        --odbc_port=<odbc-port> \
        --odbc_database=<odbc-database> \
        --odbc_uid=<odbc-uid> \
        --odbc_pwd=<odbc-pwd> \
        --input_file=./path/to/raw_reference_file.csv \
        --output_table=reference_raw


  5. Output table
    1. This step generates a new table. The name of the new table is passed as a required command line argument.
  6. Output fields
    1. All columns in the input CSV file are imported as fields in the new table.
    2. sfgisgapid: a new serial, not null, primary key
  7. Artifacts (**)
    1. reference_raw.csv - The input CSV file serves as the artifact for this step.
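The optional 'filter by reference' cross-check described in the overview could then be expressed as two queries against the imported table. This is a minimal sketch, assuming a source table named source_addresses and a comparable address column in both tables (the table and column names other than reference_raw are assumptions):

Code Block
languagebash
titlefilter by reference (sketch)
psql -h <odbc-server> -p <odbc-port> -U <odbc-uid> -d <odbc-database> <<'SQL'
-- Addresses found in the reference set advance to the next stage.
CREATE TABLE source_matched AS
  SELECT s.* FROM source_addresses s
  WHERE EXISTS (SELECT 1 FROM reference_raw r WHERE r.address = s.address);
-- Addresses not found are set aside in an exclusion set for later review.
CREATE TABLE source_excluded AS
  SELECT s.* FROM source_addresses s
  WHERE NOT EXISTS (SELECT 1 FROM reference_raw r WHERE r.address = s.address);
SQL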

...

Anchor
stage3
stage3
Stage 3 - Geocode and filter

  • Step 3.1 - Geocode source dataset

...

Anchor
stage4
stage4
Stage 4 - Export full set (single batch) or subset (multiple batches)

Note
titleA Note about Batches

Stages 4, 5 and 6 can be run one time with the results from Stage 3, or they can be run in multiple batches of subsets.

A major consideration in deciding whether to run the full set at once or in batches is the number of records being Bulk Loaded.

The size of each Bulk Loader operation affects the following aspects of the EAS:

  • The disk space consumed by the database server
  • The EAS user interface section that lists addresses loaded in a given Bulk Loader operation
  • The weekly email attachment listing new addresses added to the EAS

For medium-to-large datasets (input sets with over 1000 records), it is recommended to run the process on a development server and assess the implications of the operation. Where appropriate, perform the Bulk Loading process in batches over several days or weeks.

The remaining steps will document one example iteration of a multi-batch process.
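Where a batch is exported outside ArcMap, a subset could also be pulled directly from PostgreSQL. This is a minimal sketch, assuming a geocoded results table named geocoded_filtered keyed by sfgisgapid (the table name and batch size are assumptions):

Code Block
languagebash
titleexport one batch (sketch)
# Choose the batch size and number per the considerations above.
BATCH_SIZE=500
BATCH_NUM=1
OFFSET=$(( (BATCH_NUM - 1) * BATCH_SIZE ))
psql -h <odbc-server> -p <odbc-port> -U <odbc-uid> -d <odbc-database> \
  -c "\copy (SELECT * FROM geocoded_filtered ORDER BY sfgisgapid LIMIT ${BATCH_SIZE} OFFSET ${OFFSET}) TO 'bulkloader_batch_${BATCH_NUM}.csv' CSV HEADER"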

...

Anchor
stage5
stage5
Stage 5 - Run the Bulk Loader

(info) For a complete set of steps and background about the Bulk Loader, see also Running the Bulk Loader, a page dedicated to its input, operation and results.

...

Anchor
stage6
stage6
Stage 6 - Extract results

  • Step 6.1 - Archive exceptions

...
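The added and rejected addresses for a batch could be archived with \copy exports. This is a minimal sketch only; the EAS table and column names shown (address_base, change_request_id, bulkloader_exceptions) are assumptions, not the actual schema:

Code Block
languagebash
titlearchive batch results (sketch)
# Assumption: <change-request-id> is the unique EAS change request id recorded for this batch.
psql -h <odbc-server> -p <odbc-port> -U <odbc-uid> -d <odbc-database> \
  -c "\copy (SELECT * FROM address_base WHERE change_request_id = <change-request-id>) TO 'batch_added_addresses.csv' CSV HEADER"
# Rejected addresses would be exported the same way, from whatever table the
# Bulk Loader writes its exceptions to (the table name here is a placeholder).
psql -h <odbc-server> -p <odbc-port> -U <odbc-uid> -d <odbc-database> \
  -c "\copy (SELECT * FROM bulkloader_exceptions) TO 'batch_rejected_addresses.csv' CSV HEADER"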

  1. If the Bulk Loader process was run on the production server then restore services
    1. Turn on production-to-replication service
      • TODO: add steps
    2. Turn on downstream database propagation service(s)
      • Resume downstream replication to internal business system database (SF PROD WEB).

        Code Block
        languagebash
        linenumberstrue
        titlestart xmit
        sudo /var/www/html/eas/bin/xmit_change_notifications.bsh start

    3. Enable front-end access to EAS
      • Place the Web servers into live mode (SF PROD WEB, DR PROD WEB).

        Code Block
        languagebash
        linenumberstrue
        cd /var/www/html
        sudo ./set_eas_mode.sh LIVE
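If a failure requires backing out, the backup taken before Stage 5 could be restored. This is a minimal sketch, assuming the custom-format pg_dump archive suggested in the backup note above:

Code Block
languagebash
titlerestore from backup (sketch)
# Assumption: services remain stopped and EAS access is still suspended while restoring.
pg_restore -h <odbc-server> -p <odbc-port> -U <odbc-uid> \
  -d <odbc-database> --clean --if-exists eas_pre_bulkload_YYYYMMDD.dump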





Notes

Anchor
env
env
(+)  Substitute EAS <environment> with one of the relevant environments: SF_DEV, SF_QA, SF_PROD, SD_PROD

Anchor
odbc
odbc
(++<odbc>)  Substitute <odbc> arguments with values for an available PostgreSQL database.

<odbc-server> - Name or IP address of the database server, e.g. localhost
<odbc-port> - Port of the database server, e.g. 5432
<odbc-database> - Name of the database, e.g. awgdb
<odbc-uid> - User name
<odbc-pwd> - User password
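For example, with the sample values above (user name and password remain placeholders), the csv2pg invocation from Stage 1 would read:

Code Block
languagebash
titlecsv2pg with sample values
python csv2pg.py \
  --odbc_server=localhost \
  --odbc_port=5432 \
  --odbc_database=awgdb \
  --odbc_uid=<odbc-uid> \
  --odbc_pwd=<odbc-pwd> \
  --input_file=./path/to/raw_reference_file.csv \
  --output_table=reference_raw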

Anchor
scripts
scripts
(*) Scripts are written in and require Python 3. See the source code repository for more details.

Anchor
artifacts
artifacts
(**) Artifacts should be saved in a network folder dedicated to the entire instance of a given Bulk Loader process. Artifact names shown are suggestions. Note: For large datasets the entire process could be spread over many days or weeks. Take this into consideration when naming any artifacts and subfolders.

Development stack

(info) This is the stack used for the development and testing of the steps. For best results run the steps with a similar or compatible stack.

...