
Table of Contents
Overview

...

Stage 1 - Import and parse reference dataset (Optional)
  Category: Parsing
  Summary: This optional step in the Bulk Loader process is to cross check an address for a match in a reference data set. If a source address is found in the reference dataset the address makes it to the next step. If not found the address is put aside in an exclusion set for later review.
  Environment: Python 3, PostgreSQL / pgAdmin
  Iterations: Once per Bulk Loader process
  Estimated Person Time: 1 hour
  Estimated Computer Time: 10 minutes

Stage 2 - Import, parse and filter source dataset
  Category: Parsing
  Summary: Import the dataset destined for the EAS. Parse and filter the set.
  Environment: Python 3, PostgreSQL / pgAdmin
  Iterations: Once per Bulk Loader process
  Estimated Person Time: 90 minutes
  Estimated Computer Time: 15 minutes

Stage 3 - Geocode and filter
  Category: Geocoding
  Summary: Geocode the set and filter further based on geocoder score and status.
  Environment: ArcMap
  Iterations: Once per Bulk Loader process
  Estimated Person Time: 1 hour
  Estimated Computer Time: 5 minutes

Stage 4 - Export full set (single batch) or subset (multiple batches)
  Category: Geocoding
  Summary: For large datasets, create one of many subsets that will be run through the Bulk Loader in many batches.
  Environment: ArcMap
  Iterations: One or more batches for each Bulk Loader process
  Estimated Person Time: 30 minutes per batch
  Estimated Computer Time: 5 minutes per batch

Stage 5 - Bulk Load batch (full set or subset)
  Category: Bulk Loading
  Summary: Run each subset batch through the Bulk Loader.
  Environment: EAS <environment>(+), PostgreSQL / pgAdmin
  Iterations: One or more batches for each Bulk Loader process
  Estimated Person Time: 1 hour per batch
  Estimated Computer Time: 5 minutes per batch

Stage 6 - Extract results
  Category: Bulk Loading
  Summary: Extract and archive the list of addresses that were added to the EAS. Also archive the unique EAS 'change request id' associated with this batch. Also archive the addresses that were rejected by the Bulk Loader in this batch.
  Environment: PostgreSQL / pgAdmin
  Iterations: One or more batches for each Bulk Loader process
  Estimated Person Time: 1 hour per batch
  Estimated Computer Time: 5 minutes per batch

Stage 7 - Cleanup and Restoration
  Category: Bulk Loading
  Summary: Clean up database, restore services and in the event of a failure, restore from backup.
  Iterations: One or more batches for each Bulk Loader process

...

Anchor
stage1
stage1
Stage 1 Import and parse reference dataset (Optional)

This optional stage is run once per Bulk Loader process. This stage can be skipped if the reference dataset is already available or if the optional 'filter by reference' step is skipped.

...

  1. Script Name (*) csv2pg.py
  2. URL https://bitbucket.org/sfgovdt/sfgis-general-address-parser/src/master/csv2pg.py
  3. Important arguments
    1. input_file - The relative path to the raw CSV file.
    2. output_table - The name of the table for the imported records.
  4. Example usage
    1. Import CSV file into PostgreSQL table (++<odbc>)

      Code Block
      languagetext
      firstline1
      titlecsv2pg
      linenumberstrue
      # odbc_server   - Name or IP address of the database server, e.g. localhost
      # odbc_port     - Port of the database server, e.g. 5432
      # odbc_database - Name of the database, e.g. awgdb
      # odbc_uid      - Database user name
      # odbc_pwd      - Database user password
      python csv2pg.py \
        --odbc_server=<odbc-server> \
        --odbc_port=<odbc-port> \
        --odbc_database=<odbc-database> \
        --odbc_uid=<odbc-uid> \
        --odbc_pwd=<odbc-pwd> \
        --input_file=./path/to/raw_reference_file.csv \
        --output_table=reference_raw


  5. Output table
    1. This step generates a new table. The name of the new table is passed as a required command line argument.
  6. Output fields
    1. All columns in the input CSV file are imported as fields in the new table.
    2. sfgisgapid: a new serial, not null, primary key
  7. Artifacts (**)
    1. reference_raw.csv - The input CSV file serves as the artifact for this step. A quick sanity check of the imported table is sketched below.
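
The following is an optional sanity check, not part of the scripts above: a minimal SQL sketch that confirms the import by checking the row count and the serial primary key added by csv2pg.py. It assumes the output table was named reference_raw as in the example.

Code Block
languagesql
firstline1
titleverify reference_raw import
linenumberstrue
-- Row count should match the number of records in the input CSV file
SELECT COUNT(*) FROM reference_raw;

-- The serial primary key added by csv2pg.py should be present and populated
SELECT sfgisgapid FROM reference_raw ORDER BY sfgisgapid LIMIT 5;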

...

  1. Script Name (*) parse_address_table.py
  2. URL https://bitbucket.org/sfgovdt/sfgis-general-address-parser/src/master/parse_address_table.py
  3. Important arguments
    1. address_column: Name of address field. Required.
    2. city_column: Name of city field. Optional.
    3. state_column: Name of state field. Optional.
    4. zip_column: Name of ZIP Code field. Optional.
    5. output_table: Name of the output table. Required.
  4. Example usage
    1. Parse address table (++<odbc>)

      Code Block
      languagetext
      firstline1
      titleparse_address_table.py
      linenumberstrue
      python parse_address_table.py \
        --odbc_server=<odbc-server> \
        --odbc_port=<odbc-port> \
        --odbc_database=<odbc-database> \
        --odbc_uid=<odbc-uid> \
        --odbc_pwd=<odbc-pwd> \
        --address_table=reference_raw \
        --primary_key=sfgisgapid \
        --address_column=address \
        --city_column=city \
        --state_column=state \
        --zip_column=zip \
        --output_table=reference_parsed \
        --output_report_file=./output/reference.xlsx \
        --limit=-1


  5. Output table
    1. This step generates a new table. The new table contains all of the fields and records of the input table with additional fields noted below.
  6. Anchor
    parserator_fields
    parserator_fields
    Output fields
    1. The output consists of a field for every kind of parsed address part that is generated by the parsing tool, Parserator. It also contains additional concatenated fields as defined in the FGDC's United States Thoroughfare, Landmark, and Postal Address Data Standard as well as metadata fields.
      1. address_to_parse character varying(255)
      2. address_number bigint
      3. address_number_suffix character varying(255)
      4. street_name_pre_directional character varying(255)
      5. street_name_pre_type character varying(255)
      6. street_name character varying(255)
      7. street_name_post_type character varying(255)
      8. street_name_post_directional character varying(255)
      9. subaddress_type character varying(255)
      10. subaddress_identifier character varying(255)
      11. state_name character varying(255)
      12. zip_code character varying(255)
      13. complete_address_number character varying(255)
      14. complete_street_name character varying(255)
      15. complete_subaddress character varying(255)
      16. complete_place_name character varying(255)
      17. delivery_address_with_unit character varying(255)
      18. delivery_address_without_unit character varying(255)
      19. parsed_address character varying(1000)
      20. parser_had_issues boolean
      21. parser_message character varying(2000)
      22. parserator_tag character varying(255)
      23. address_number_parity character varying(255)
      24. counter integer
  7. Artifacts (**)
    1. reference_parsed.csv - Export the new table (reference_parsed) as a CSV file (reference_parsed.csv) and archive it in the network folder dedicated to artifacts for the Bulk Loader iteration. One way to export the table is sketched below.
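
One way to produce the reference_parsed.csv artifact is sketched below. This is a minimal example, not the only method: COPY ... TO writes a file on the database server and requires a role with file-write privileges, so exporting from pgAdmin's interface (or psql's \copy) is an equivalent client-side alternative. The path shown is a placeholder.

Code Block
languagesql
firstline1
titleexport reference_parsed
linenumberstrue
-- Server-side export of the parsed reference table; adjust the path to the artifacts folder
COPY reference_parsed TO '/path/to/artifacts/reference_parsed.csv' WITH (FORMAT csv, HEADER true);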

...

  1. Script Name (*) parse_address_table.py
  2. URL https://bitbucket.org/sfgovdt/sfgis-general-address-parser/src/master/parse_address_table.py
  3. Important arguments
    1. address_column: Name of address field. Required.
    2. city_column: Name of city field. Optional.
    3. state_column: Name of state field. Optional.
    4. zip_column: Name of ZIP Code field. Optional.
    5. output_table: Name of the output table. Required.
  4. Example usage
    1. Parse address table (++<odbc>)

      Code Block
      languagetext
      firstline1
      titleparse_address_table
      linenumberstrue
      python parse_address_table.py \
        --odbc_server=<odbc-server> \
        --odbc_port=<odbc-port> \
        --odbc_database=<odbc-database> \
        --odbc_uid=<odbc-uid> \
        --odbc_pwd=<odbc-pwd> \
        --address_table=addresses_no_range \
        --primary_key=sfgisgapid \
        --address_column=address \
        --city_column=city \
        --state_column=state \
        --zip_column=zip \
        --output_table=addresses_parsed \
        --output_report_file=./output/source_report.xlsx \
        --limit=-1


  5. Output table
    1. This step generates a new table. The new table contains all of the fields and records of the input table with additional fields noted below.
  6. Output fields
    1. The output consists of a field for every kind of parsed address part that is generated by the parsing tool, Parserator. It also contains additional concatenated fields as defined in the FGDC's United States Thoroughfare, Landmark, and Postal Address Data Standard as well as metadata fields. See also Step 1.2 Output Fields.
  7. Artifacts (**)
    1. addresses_parsed.csv - Export the new table (addresses_parsed) as a CSV file (addresses_parsed.csv) and archive it in the network folder dedicated to artifacts for the Bulk Loader iteration. A quick check of the parser's results is sketched below.
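
Before filtering further, it can be worth checking how many source records the parser flagged. The following is a minimal sketch against the addresses_parsed table, using the parser_had_issues and parser_message fields listed in Step 1.2.

Code Block
languagesql
firstline1
titleparse quality check
linenumberstrue
-- Count parsed records, split by whether Parserator reported an issue
SELECT parser_had_issues, COUNT(*)
FROM addresses_parsed
GROUP BY parser_had_issues;

-- Inspect a sample of the reported issues
SELECT address_to_parse, parser_message
FROM addresses_parsed
WHERE parser_had_issues
LIMIT 20;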

...

  1. Script Name (*) remove_postdirectionals.py
  2. URL https://bitbucket.org/sfgovdt/sfgis-general-address-parser/src/master/remove_postdirectionals.py
  3. Important arguments
    1. output_table: The name of the table for the filtered records.
  4. Example usage
    1. Remove records with post-directional values (++<odbc>)

      Code Block
      languagetext
      firstline1
      titleremove_postdirectionals
      linenumberstrue
      python remove_postdirectionals.py \
        --odbc_server=<odbc-server> \
        --odbc_port=<odbc-port> \
        --odbc_database=<odbc-database> \
        --odbc_uid=<odbc-uid> \
        --odbc_pwd=<odbc-pwd> \
        --address_table=addresses_parsed \
        --output_table=addresses_no_post_dir \
        --output_report_file=./output/source_report.xlsx \
        --limit=-1


    2. Create table addresses_with_post_dir

      Code Block
      languagesql
      firstline1
      titleaddresses_with_post_dir
      linenumberstrue
      CREATE TABLE addresses_with_post_dir AS SELECT * FROM addresses_parsed WHERE id NOT IN (SELECT id FROM addresses_no_post_dir);


  5. Output table
    1. This step generates a new table. The new table contains all of the fields and records of the input table, excluding any record that has a post-directional value and a corresponding address on the same street without the post-directional.
  6. Artifacts (**)
    1. addresses_no_post_dir.csv - This is an archive of the records that made it past the filter to the next step.
    2. addresses_with_post_dir.csv - This is an archive of the records that did not make it past the filter. This artifact is part of the exclusion set of addresses that are set aside for later review. A count reconciliation for the split is sketched below.
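
As a sanity check, the two tables should account for every record in the input table. The following minimal sketch assumes the table names used in the examples above.

Code Block
languagesql
firstline1
titlepost-directional split check
linenumberstrue
-- kept_count + excluded_count should equal input_count
SELECT
  (SELECT COUNT(*) FROM addresses_parsed)        AS input_count,
  (SELECT COUNT(*) FROM addresses_no_post_dir)   AS kept_count,
  (SELECT COUNT(*) FROM addresses_with_post_dir) AS excluded_count;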

...

  1. Script Name (*) is_address_in_reference.py
  2. URL https://bitbucket.org/sfgovdt/sfgis-general-address-parser/src/master/is_address_in_reference.py
  3. Important arguments
    1. source_output_field - The name of a new integer field that will store the results of the step.
  4. Example usage
    1. Add field to flag if address is found in reference  (++<odbc>)

      Code Block
      languagetext
      firstline1
      titleis_address_in_reference
      linenumberstrue
      python is_address_in_reference.py \
        --odbc_server=<odbc-server> \
        --odbc_port=<odbc-port> \
        --odbc_database=<odbc-database> \
        --odbc_uid=<odbc-uid> \
        --odbc_pwd=<odbc-pwd> \
        --source_table=addresses_no_post_dir \
        --reference_table=reference_parsed \
        --source_pk=id \
        --reference_pk=id \
        --source_address_number=address_number \
        --source_address_number_suffix=address_number_suffix \
        --source_street_name_pre_directional=street_name_pre_directional \
        --source_street_name_pre_type=street_name_pre_type \
        --source_street_name=street_name \
        --source_street_name_post_type=street_name_post_type \
        --source_street_name_post_directional=street_name_post_directional \
        --source_subaddress_type=subaddress_type \
        --source_subaddress_identifier=subaddress_identifier \
        --source_place_name=place_name \
        --reference_address_number=address_number \
        --reference_address_number_suffix=address_number_suffix \
        --reference_street_name_pre_directional=street_name_pre_directional \
        --reference_street_name_pre_type=street_name_pre_type \
        --reference_street_name=street_name \
        --reference_street_name_post_type=street_name_post_type \
        --reference_street_name_post_directional=street_name_post_directional \
        --reference_subaddress_type=subaddress_type \
        --reference_subaddress_identifier=subaddress_identifier \
        --reference_place_name=place_name \
        --source_output_field=in_reference \
        --limit=-1


    2. Create and archive table addresses_in_reference

      Code Block
      languagesql
      firstline1
      titleaddresses_in_reference
      linenumberstrue
      CREATE TABLE addresses_in_reference AS SELECT * FROM addresses_no_post_dir WHERE in_reference = 1;


    3. Create and archive table addresses_not_in_reference

      Code Block
      languagesql
      firstline1
      titleaddresses_not_in_reference
      linenumberstrue
      CREATE TABLE addresses_not_in_reference AS SELECT * FROM addresses_no_post_dir WHERE in_reference = 0;


  5. Output table
    1. This step generates a new table. The name of the new table is passed as a required command line argument.
  6. Output fields
    1. A new field that indicates whether the source address was found in the reference set (1), not found (0), or not yet checked (-1). The name of the new field is passed as a required command line argument. The distribution of these values can be checked with the sketch after this list.
  7. Artifacts (**)
    1. addresses_in_reference.csv - This is an archive of the records that made it past the filter to the next step.
    2. addresses_not_in_reference.csv - This is an archive of the records that did not make it past the filter. This artifact is part of the exclusion set of addresses that are set aside for later review.
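
A quick way to confirm the matching step ran to completion is to look at the distribution of the flag values. The following minimal sketch assumes the field and table names used in the example above.

Code Block
languagesql
firstline1
titlein_reference distribution
linenumberstrue
-- 1 = found in reference, 0 = not found, -1 = not yet checked
SELECT in_reference, COUNT(*)
FROM addresses_no_post_dir
GROUP BY in_reference
ORDER BY in_reference;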

...

  1. Script Name (*) pg2bulkloader.py
  2. URL https://bitbucket.org/sfgovdt/sfgis-general-address-parser/src/master/pg2bulkloader.py
  3. Important arguments
    1. input_table - The name of the table containing addresses to be geocoded.
    2. output_file_name - The name of the CSV created from the input table.
    3. input_source - The text associated with the data source of this batch of addresses. The value entered here appears in the EAS user interface (under the source of a given base address or unit). (warning) This field has a 32 character limit. Exceeding 32 characters will crash the Bulk Loader!
  4. Example usage
    1. Create table addresses_to_geocode

      Code Block
      languagesql
      firstline1
      titleaddresses_to_geocode
      linenumberstrue
      -- Select, order and save as 'addresses_to_geocode'
      CREATE TABLE addresses_to_geocode AS
      SELECT * FROM addresses_in_reference
      ORDER BY
        street_name,
        street_name_pre_directional,
        street_name_pre_type,
        street_name_post_type,
        street_name_post_directional,
        address_number,
        address_number_suffix,
        subaddress_type,
        subaddress_identifier,
        place_name;


    2. Add counter field

      Code Block
      languagesql
      firstline1
      titlecounter
      linenumberstrue
      -- Add and index a counter field
      ALTER TABLE addresses_to_geocode DROP COLUMN IF EXISTS counter;
      ALTER TABLE addresses_to_geocode ADD COLUMN counter SERIAL;
      CREATE INDEX ON addresses_to_geocode (counter);


    3. Export PostgreSQL table to CSV file for Bulk Loading (++<odbc>)

      Code Block
      languagetext
      firstline1
      titlepg2bulkloader
      linenumberstrue
      python pg2bulkloader.py \
        --odbc_server=<odbc-server> \
        --odbc_port=<odbc-port> \
        --odbc_database=<odbc-database> \
        --odbc_uid=<odbc-uid> \
        --odbc_pwd=<odbc-pwd> \
        --input_table=addresses_to_geocode \
        --output_file_name=./output/addresses_to_geocode \
        --input_source='Imported Addresses (20XX-XX-XX)' \
        --limit=-1


    4. Archive a copy of table addresses_to_geocode to artifact addresses_to_geocode.csv
  5. Output table
    1. addresses_to_geocode
  6. Output file
    1. This step generates a new CSV file. The name of the new file is passed as a required command line argument.
  7. Artifacts (**)
    1. addresses_to_geocode.csv - A CSV file of addresses ready for geocoding. A quick check of the record count and counter range is sketched after this list.
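
Before moving to geocoding, a quick check of the record count and the counter range can help when deciding in Stage 4 whether to export the full set or subsets. This is a minimal sketch against the table created above.

Code Block
languagesql
firstline1
titleaddresses_to_geocode counts
linenumberstrue
-- Total record count and the range of the counter field
SELECT COUNT(*)     AS record_count,
       MIN(counter) AS first_counter,
       MAX(counter) AS last_counter
FROM addresses_to_geocode;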

Anchor
stage3
stage3
Stage 3 Geocode and filter

  • Step 3.1 - Geocode source dataset

...

Anchor
stage4
stage4
Stage 4 Export full set (single batch) or subset (multiple batches)

Note
titleA Note about Batches

Stages 4, 5 and 6 can be run one time with the results from Stage 3, or they can be run in multiple batches of subsets.

A major consideration when deciding whether to run the full set at once or in batches is the number of records being Bulk Loaded.

The size of each Bulk Loader operation affects the following aspects of the EAS:

  • The disk space consumed by the database server
  • The EAS user interface section that lists addresses loaded in a given Bulk Loader operation
  • The weekly email attachment listing new addresses added to the EAS

For medium-to-large datasets (input sets with over 1000 records), it is recommended to run the process on a development server and assess the implications of the operation. Where appropriate, perform the Bulk Loading process in batches over several days or weeks.

The remaining steps will document one example iteration of a multi-batch process.
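
Stage 4 itself is performed in ArcMap, but the batching idea can be illustrated with the counter field added in Stage 2: each batch covers a contiguous range of records. The following SQL sketch is purely illustrative; the batch size of 50000 is an arbitrary example, not a recommendation.

Code Block
languagesql
firstline1
titleexample batch selection
linenumberstrue
-- Illustrative only: select the first batch of records by counter range
SELECT *
FROM addresses_to_geocode
WHERE counter BETWEEN 1 AND 50000
ORDER BY counter;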

...

Anchor
stage5
stage5
Stage 5 Run the Bulk Loader

(info) For a complete set of steps and background about the Bulk Loader, see also Running the Bulk Loader, a page dedicated to its input, operation and results.

...

Anchor
stage6
stage6
Stage 6 Extract results

  • Step 6.1 - Archive exceptions

...

  1. If the Bulk Loader Process was run on the production server, then restore services
    1. Turn on production-to-replication service
      • TODO: add steps
    2. Turn on downstream database propagation service(s)
      • Resume downstream replication to internal business system database (SF PROD WEB).

        Code Block
        languagetext
        firstline1
        titlestart xmit
        sudo /var/www/html/eas/bin/xmit_change_notifications.bsh start


    3. Enable front-end access to EAS
      • Place the Web servers into live mode (SF PROD WEB, DR PROD WEB).

        Code Block
        languagebash
        linenumberstrue
        cd /var/www/html
        sudo ./set_eas_mode.sh LIVE


...