Table of Contents
...
- The Bulk Loader is a process used to add multiple new addresses to the EAS at one time.
- The Bulk Loader process consists of several stages, outlined below in the Summary and Details sections.
...
Stage Number | Stage | Category | Summary | Environment | Iterations | Estimated Person Time | Estimated Computer Time |
---|---|---|---|---|---|---|---|
1 | Import and parse reference dataset (Optional) | Parsing | This optional step cross-checks each address for a match in a reference dataset. If a source address is found in the reference dataset, the address moves on to the next step. If not found, the address is set aside in an exclusion set for later review. | | Once per Bulk Loader process | 1 hour | 10 minutes |
2 | Import, parse and filter source dataset | Parsing | Import the dataset destined for the EAS. Parse and filter the set. | | Once per Bulk Loader process | 90 minutes | 15 minutes |
3 | Geocode and filter | Geocoding | Geocode the set and filter further based on the geocoder score and status. | ArcMap | Once per Bulk Loader process | 1 hour | 5 minutes |
4 | Export full set (single batch) or subset (multiple batches) | Geocoding | For large datasets, create one of many subsets that will be run through the Bulk Loader in multiple batches. | ArcMap | One or more batches for each Bulk Loader process | 30 minutes per batch | 5 minutes per batch |
5 | Bulk Load batch (full set or subset) | Bulk Loading | Run the entire batch or each subset batch through the Bulk Loader. | | One or more batches for each Bulk Loader process | 1 hour per batch | 5 minutes per batch |
6 | Extract results | Bulk Loading | Extract and archive the list of addresses that were added to the EAS. Also archive the unique EAS 'change request id' associated with this batch, as well as the addresses that were rejected by the Bulk Loader in this batch. | PostgreSQL / pgAdmin | One or more batches for each Bulk Loader process | 1 hour per batch | 5 minutes per batch |
7 | Cleanup and Restoration | Bulk Loading | Clean up the database, restore services and, in the event of a failure, restore from backup. | PostgreSQL / pgAdmin | One or more batches for each Bulk Loader process | 1 hour per batch | 5 minutes per batch |
...
Running on DEV or QA first is a requirement
Warning - Required! Run addresses through DEV or QA first: Never load any new addresses into production until a successful trial run is performed on the same addresses in a non-production environment, such as development or QA.
Important considerations when running the Bulk Loader
...
Warning: When releasing in the production environment, pick a time outside core work hours that does not conflict with any cron jobs running on the production server.
...
This optional stage is run once per Bulk Loader process. This stage can be skipped if the reference dataset is already available or if the optional 'filter by reference' step (Step 2.5) is skipped.
Step 1.1 - Import reference dataset
...
- input_file - The relative path to the raw CSV file.
- output_table - The name of the table for the imported records.
...
Import CSV file into PostgreSQL table (++<odbc>)
Code Block (csv2pg):
python csv2pg.py \
    --odbc_server=<odbc-server> \
    --odbc_port=<odbc-port> \
    --odbc_database=<odbc-database> \
    --odbc_uid=<odbc-uid> \
    --odbc_pwd=<odbc-pwd> \
    --input_file=./path/to/raw_reference_file.csv \
    --output_table=reference_raw
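A quick, optional spot check of the import before parsing can confirm the row count and the imported columns. This is not part of the scripted workflow; the table name below follows the example above.
Code Block:
-- Optional sanity check of the imported table (example name: reference_raw)
SELECT COUNT(*) FROM reference_raw;   -- row count should match the input CSV (minus the header row)
SELECT * FROM reference_raw LIMIT 5;  -- confirm the expected columns and the new sfgisgapid primary key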
...
- This step generates a new table. The name of the new table is passed as a required command line argument.
...
- All columns in the input CSV file are imported as fields in the new table.
- sfgisgapid: a new serial, not null, primary key
...
- reference_raw.csv - The input CSV table serves as the artifact for this step.
Step 1.2 - Parse reference dataset
...
- address_column: Name of address field. Required.
- city_column: Name of city field. Optional.
- state_column: Name of state field. Optional.
- zip_column: Name of ZIP Code field. Optional.
- output_table: Name of the output table. Required.
...
Parse address table (++<odbc>)
Code Block (parse_address_table):
python parse_address_table.py \
    --odbc_server=<odbc-server> \
    --odbc_port=<odbc-port> \
    --odbc_database=<odbc-database> \
    --odbc_uid=<odbc-uid> \
    --odbc_pwd=<odbc-pwd> \
    --address_table=reference_raw \
    --primary_key=sfgisgapid \
    --address_column=address \
    --city_column=city \
    --state_column=state \
    --zip_column=zip \
    --output_table=reference_parsed \
    --output_report_file=./output/reference.xlsx \
    --limit=-1
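Before relying on the parsed output, it can help to see how many records the parser flagged. This optional check assumes the example output table name reference_parsed and uses the parser_had_issues and parser_message fields listed under Output fields below.
Code Block:
-- How many records were flagged by the parser?
SELECT parser_had_issues, COUNT(*)
FROM reference_parsed
GROUP BY parser_had_issues;

-- Review a sample of flagged records and the parser's messages
SELECT parsed_address, parser_message
FROM reference_parsed
WHERE parser_had_issues
LIMIT 20;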
...
- This step generates a new table. The new table contains all of the fields and records of the input table with additional fields noted below.
...
- The output consists of a field for every kind of parsed address part that is generated by the parsing tool, Parserator. It also contains additional concatenated fields as defined in the FGDC's United States Thoroughfare, Landmark, and Postal Address Data Standard as well as metadata fields.
- address_to_parse character varying(255)
- address_number bigint
- address_number_suffix character varying(255)
- street_name_pre_directional character varying(255)
- street_name_pre_type character varying(255)
- street_name character varying(255)
- street_name_post_type character varying(255)
- street_name_post_directional character varying(255)
- subaddress_type character varying(255)
- subaddress_identifier character varying(255)
- state_name character varying(255)
- zip_code character varying(255)
- complete_address_number character varying(255)
- complete_street_name character varying(255)
- complete_subaddress character varying(255)
- complete_place_name character varying(255)
- delivery_address_with_unit character varying(255)
- delivery_address_without_unit character varying(255)
- parsed_address character varying(1000)
- parser_had_issues boolean
- parser_message character varying(2000)
- parserator_tag character varying(255)
- address_number_parity character varying(255)
- counter integer
...
- reference_parsed.csv - Export the new table, e.g. reference_parsed, as a CSV, e.g. reference_parsed.csv, and archive in the network folder dedicated to artifacts for the Bulk Loader iteration.
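If you are working in psql rather than pgAdmin's export dialog, one way to produce that CSV is the client-side \copy command; the table and file names below follow the example above.
Code Block:
-- Run from psql; writes the CSV on the client machine
\copy reference_parsed TO 'reference_parsed.csv' WITH (FORMAT csv, HEADER true)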
...
This stage is run once per Bulk Loader process. This stage involves importing, parsing and filtering the dataset. The result will be a dataset that will be geocoded in Stage 3.
Step 2.1 - Import source dataset
...
Suggested folder layout
Artifacts are generated throughout the stages of the Bulk Loader Process. Below is a suggested folder and file layout for the various artifacts.
Info:
./ # root (YYYYMMDD-tag e.g. 20190510-3x10k)
Code Block:
###########
# Folders #
###########
./ # root (YYYYMMDD-tag e.g. 20190510-SourceNameHere)
./gap # Artifacts from the General Address Parser stage
./geocoder # input csv and output shapefiles
./bulkload_001 # artifacts from the first batch
./bulkload_00N # artifacts from the Nth batch, if any
./bulkload_00N/shapefile/ # this is the input for the Bulk Loader step
./excluded_records # records from all stages set aside for later review
./excluded_records/gap
./excluded_records/geocoder
./excluded_records/bulkload_001
./excluded_records/bulkload_00N
#########
# Files #
#########
./gap/gap_YYYYMMDD.csv # GAP output; geocoder input
./geocoder/bulkloader.mxd # ArcMap file
./geocoder/addresses_geocoded.shp # All geocoded records
./geocoder/geocode_score_100.shp # Records with non-tied, perfect geocoder score
./bulkload_00N/shapefile/bulkload.shp # Input for the Bulk Loader stage
./bulkload_00N/address_base.csv # New base records
./bulkload_00N/addresses.csv # New base+unit records
./bulkload_00N/change_request.csv # One record containing the unique change request id associated with this batch
./bulkload_00N/record-counts-after.csv # EAS record-count after Bulk Load
./bulkload_00N/record-counts-before.csv # EAS record-count before Bulk Load
./excluded_records/geocoder/geocode_under_100.shp # Tied and/or not perfect geocoder score
./excluded_records/bulkload_00N/address_extract_exceptions.csv
./excluded_records/bulkload_00N/address_extract_exception_totals.csv
This optional stage is run once per Bulk Loader process. This stage can be skipped if the reference dataset is already available or if the optional 'filter by reference' step (Step 2.5) is skipped.
Step 1.1 - Import reference dataset
- Script Name (*) csv2pg.py
- URL https://bitbucket.org/sfgovdt/sfgis-general-address-parser/src/master/csv2pg.py
- Important arguments
- input_file - The relative path to the raw CSV file.
- output_table - The name of the table for the imported records.
- Example usage
Import CSV file into PostgreSQL table (++<odbc>)
Code Block (csv2pg):
python csv2pg.py \
    --odbc_server=<odbc-server> \
    --odbc_port=<odbc-port> \
    --odbc_database=<odbc-database> \
    --odbc_uid=<odbc-uid> \
    --odbc_pwd=<odbc-pwd> \
    --input_file=./path/to/raw_reference_file.csv \
    --output_table=reference_raw
- Output table
- This step generates a new table. The name of the new table is passed as a required command line argument.
- Output fields
- All columns in the input CSV file are imported as fields in the new table.
- sfgisgapid: a new serial, not null, primary key
- Artifacts (**)
- reference_raw.csv - The input CSV table serves as the artifact for this step.
Step 1.2 - Parse reference dataset
- Script Name (*) parse_address_table.py
- URL https://bitbucket.org/sfgovdt/sfgis-general-address-parser/src/master/parse_address_table.py
- Important arguments
- address_column: Name of address field. Required.
- city_column: Name of city field. Optional.
- state_column: Name of state field. Optional.
- zip_column: Name of ZIP Code field. Optional.
- output_table: Name of the output table. Required.
- Example usage
Parse address table (++<odbc>)
Code Block (parse_address_table):
python parse_address_table.py \
    --odbc_server=<odbc-server> \
    --odbc_port=<odbc-port> \
    --odbc_database=<odbc-database> \
    --odbc_uid=<odbc-uid> \
    --odbc_pwd=<odbc-pwd> \
    --address_table=reference_raw \
    --primary_key=sfgisgapid \
    --address_column=address \
    --city_column=city \
    --state_column=state \
    --zip_column=zip \
    --output_table=reference_parsed \
    --output_report_file=./output/reference.xlsx \
    --limit=-1
- Output table
- This step generates a new table. The name of the new table is passed as a required command line argument.
- Output fields
- The new table contains all of the fields and records of the input table, plus a field for every kind of parsed address part generated by the parsing tool, Parserator, as well as concatenated address fields and metadata fields.
- Artifacts (**)
- reference_parsed.csv - Export the new table, e.g. reference_parsed, as a CSV and archive it in the network folder dedicated to artifacts for the Bulk Loader iteration.
Step 2.2 - Exclude from source dataset where address_number has range
...
addresses_no_range
Code Block:
CREATE TABLE addresses_no_range AS SELECT * FROM addresses_raw WHERE NOT add_number LIKE '%-%';
addresses_with_range
Code Block:
CREATE TABLE addresses_with_range AS SELECT * FROM addresses_raw WHERE add_number LIKE '%-%';
...
- addresses_no_range.csv - This is an archive of the records that made it past the filter to the next step.
- addresses_with_range.csv - This is an archive of the records that did not make it past the filter. This artifact is part of the exclusion set of addresses that are set aside for later review.
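As an optional sanity check (not part of the documented workflow), the two output tables should together account for every record in addresses_raw:
Code Block:
-- Counts of records kept vs. excluded by the range filter
SELECT
  (SELECT COUNT(*) FROM addresses_no_range)   AS no_range,
  (SELECT COUNT(*) FROM addresses_with_range) AS with_range,
  (SELECT COUNT(*) FROM addresses_raw)        AS total_raw;
-- no_range + with_range should equal total_raw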
...
- The new table contains all of the fields and records of the input table with additional fields noted below.
- Output fields
- The output consists of a field for every kind of parsed address part that is generated by the parsing tool, Parserator. It also contains additional concatenated fields as defined in the FGDC's United States Thoroughfare, Landmark, and Postal Address Data Standard as well as metadata fields.
- address_to_parse character varying(255)
- address_number bigint
- address_number_suffix character varying(255)
- street_name_pre_directional character varying(255)
- street_name_pre_type character varying(255)
- street_name character varying(255)
- street_name_post_type character varying(255)
- street_name_post_directional character varying(255)
- subaddress_type character varying(255)
- subaddress_identifier character varying(255)
- state_name character varying(255)
- zip_code character varying(255)
- complete_address_number character varying(255)
- complete_street_name character varying(255)
- complete_subaddress character varying(255)
- complete_place_name character varying(255)
- delivery_address_with_unit character varying(255)
- delivery_address_without_unit character varying(255)
- parsed_address character varying(1000)
- parser_had_issues boolean
- parser_message character varying(2000)
- parserator_tag character varying(255)
- address_number_parity character varying(255)
- counter integer
- Artifacts (**)
- reference_parsed.csv - Export the new table, e.g. reference_parsed, as a CSV, e.g. reference_parsed.csv, and archive in the network folder dedicated to artifacts for the Bulk Loader iteration.
Stage 2 - Import, parse and filter source dataset
This stage is run once per Bulk Loader process. This stage involves importing, parsing and filtering the dataset. The result will be a dataset that will be geocoded in Stage 3.
Step 2.1 - Import source dataset
- Script Name (*) csv2pg.py
- URL https://bitbucket.org/sfgovdt/sfgis-general-address-parser/src/master/csv2pg.py
- Important arguments
- input_file: The relative path to the raw CSV file.
- output_table: The name of the table for the imported records.
- Example usage
Import CSV file into PostgreSQL table (++<odbc>)
Code Block (csv2pg):
python csv2pg.py \
    --odbc_server=<odbc-server> \
    --odbc_port=<odbc-port> \
    --odbc_database=<odbc-database> \
    --odbc_uid=<odbc-uid> \
    --odbc_pwd=<odbc-pwd> \
    --input_file=./path/to/raw_source_file.csv \
    --output_table=addresses_raw
- Output table
- This step generates a new table. The name of the new table is passed as a required command line argument.
- Output fields
- All columns in the input CSV file are imported as fields in the new table.
- sfgisgapid: a new serial, not null, primary key
- Artifacts (**)
- addresses_raw.csv - The input CSV table serves as the artifact for this step.
Step 2.4 - Exclude addresses where street_name_post_direction is not blank and corresponding streets
- The Bulk Loader is unable to correctly import addresses that contain a post directional value. As such these addresses as well as all corresponding addresses without a post directional value must be filtered out of the dataset.
- See also Issue 168, Provide for street names that have a post directional.
...
- output_table: The name of the table for the filtered records.
...
Remove records with post directional values (++<odbc>)
Code Block (remove_postdirectionals):
python remove_postdirectionals.py \
    --odbc_server=<odbc-server> \
    --odbc_port=<odbc-port> \
    --odbc_database=<odbc-database> \
    --odbc_uid=<odbc-uid> \
    --odbc_pwd=<odbc-pwd> \
    --address_table=addresses_parsed \
    --output_table=addresses_no_post_dir \
    --output_report_file=./output/source_report.xlsx \
    --limit=-1
Create table addresses_with_post_dir
Code Block:
CREATE TABLE addresses_with_post_dir AS SELECT * FROM addresses_parsed WHERE NOT id IN (SELECT id FROM addresses_no_post_dir);
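Here too a quick count comparison (optional) can confirm that the filtered table and the exclusion table together cover all of addresses_parsed:
Code Block:
-- Records kept vs. excluded by the post directional filter
SELECT
  (SELECT COUNT(*) FROM addresses_no_post_dir)   AS no_post_dir,
  (SELECT COUNT(*) FROM addresses_with_post_dir) AS with_post_dir,
  (SELECT COUNT(*) FROM addresses_parsed)        AS total_parsed;
-- no_post_dir + with_post_dir should equal total_parsed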
...
Step 2.3 - Parse source dataset
- Script Name (*) parse_address_table.py
- URL https://bitbucket.org/sfgovdt/sfgis-general-address-parser/src/master/parse_address_table.py
- Important arguments
- address_column: Name of address field. Required.
- city_column: Name of city field. Optional.
- state_column: Name of state field. Optional.
- zip_column: Name of ZIP Code field. Optional.
- output_table: Name of the output table. Required.
- Example usage
Parse address table (++<odbc>)
Code Block (parse_address_table):
python parse_address_table.py \
    --odbc_server=<odbc-server> \
    --odbc_port=<odbc-port> \
    --odbc_database=<odbc-database> \
    --odbc_uid=<odbc-uid> \
    --odbc_pwd=<odbc-pwd> \
    --address_table=addresses_no_range \
    --primary_key=sfgisgapid \
    --address_column=address \
    --city_column=city \
    --state_column=state \
    --zip_column=zip \
    --output_table=addresses_parsed \
    --output_report_file=./output/source_report.xlsx \
    --limit=-1
- Output table
- This step generates a new table. The new table contains all of the fields and records of the input table with additional fields noted below.
- Output fields
- The output consists of a field for every kind of parsed address part that is generated by the parsing tool, Parserator. It also contains additional concatenated fields as defined in the FGDC's United States Thoroughfare, Landmark, and Postal Address Data Standard as well as metadata fields. See also Step 1.2 Output Fields.
- Artifacts (**)
- addresses_parsed.csv: Export the new table, e.g. addresses_parsed, as a CSV, e.g. addresses_parsed.csv, and archive in the network folder dedicated to artifacts for the Bulk Loader iteration.
Step 2.4 - Exclude addresses where street_name_post_direction is not blank and corresponding streets
...
- The Bulk Loader is unable to correctly import addresses that contain a post directional value. As such, these addresses, as well as all corresponding addresses without a post directional value, must be filtered out of the dataset.
- See also Issue 168, Provide for street names that have a post directional.
- Script Name (*) remove_postdirectionals.py
- URL https://bitbucket.org/sfgovdt/sfgis-general-address-parser/src/master/remove_postdirectionals.py
- Important arguments
- output_table: The name of the table for the filtered records.
- Example usage
Remove records with post directional values (++<odbc>)
Code Block (remove_postdirectionals):
python remove_postdirectionals.py \
    --odbc_server=<odbc-server> \
    --odbc_port=<odbc-port> \
    --odbc_database=<odbc-database> \
    --odbc_uid=<odbc-uid> \
    --odbc_pwd=<odbc-pwd> \
    --address_table=addresses_parsed \
    --output_table=addresses_no_post_dir \
    --output_report_file=./output/source_report.xlsx \
    --limit=-1
Create table addresses_with_post_dir
Code Block (addresses_with_post_dir):
CREATE TABLE addresses_with_post_dir AS SELECT * FROM addresses_parsed WHERE NOT id IN (SELECT id FROM addresses_no_post_dir);
- Output table
- This step generates a new table. The new table contains all of the fields and records of the input table excluding any that have a post direction value and corresponding addresses with streets without the post direction.
- Artifacts (**)
- addresses_no_post_dir.csv - This is an archive of the records that made it past the filter to the next step.
- addresses_with_post_dir.csv - This is an archive of the records that did not make it past the filter. This artifact is part of the exclusion set of addresses that are set aside for later review.
Step 2.5 - Exclude addresses not in reference (Optional)
This optional step excludes from the source dataset any addresses that are not also found in the parsed reference dataset.
For an understanding of how this step handles duplicates in either the source or reference, see the comments in EAS Issue 281, Refine the filtering regime for address data to be bulk-loaded.
- Script Name (*) is_address_in_reference.py
- URL https://bitbucket.org/sfgovdt/sfgis-general-address-parser/src/master/is_address_in_reference.py
- Important arguments
- source_output_field - The name of a new integer field that will store the results of the step.
- Example usage
Add field to flag if address is found in reference (++<odbc>)
Code Block (is_address_in_reference):
python is_address_in_reference.py \
    --odbc_server=<odbc-server> \
    --odbc_port=<odbc-port> \
    --odbc_database=<odbc-database> \
    --odbc_uid=<odbc-uid> \
    --odbc_pwd=<odbc-pwd> \
    --source_table=addresses_no_post_dir \
    --reference_table=reference_parsed \
    --source_pk=id \
    --reference_pk=id \
    --source_address_number=address_number \
    --source_address_number_suffix=address_number_suffix \
    --source_street_name_pre_directional=street_name_pre_directional \
    --source_street_name_pre_type=street_name_pre_type \
    --source_street_name=street_name \
    --source_street_name_post_type=street_name_post_type \
    --source_street_name_post_directional=street_name_post_directional \
    --source_subaddress_type=subaddress_type \
    --source_subaddress_identifier=subaddress_identifier \
    --source_place_name=place_name \
    --reference_address_number=address_number \
    --reference_address_number_suffix=address_number_suffix \
    --reference_street_name_pre_directional=street_name_pre_directional \
    --reference_street_name_pre_type=street_name_pre_type \
    --reference_street_name=street_name \
    --reference_street_name_post_type=street_name_post_type \
    --reference_street_name_post_directional=street_name_post_directional \
    --reference_subaddress_type=subaddress_type \
    --reference_subaddress_identifier=subaddress_identifier \
    --reference_place_name=place_name \
    --source_output_field=in_reference \
    --limit=-1
Create and archive table addresses_in_reference
Code Block (addresses_in_reference):
CREATE TABLE addresses_in_reference AS SELECT * FROM addresses_no_post_dir WHERE in_reference = 1;
Create and archive table addresses_not_in_reference
Code Block (addresses_not_in_reference):
CREATE TABLE addresses_not_in_reference AS SELECT * FROM addresses_no_post_dir WHERE in_reference = 0;
- Output table
- This step generates a new table. The name of the new table is passed as a required command line argument.
- Output fields
- A new field that tells if the source address was found in the reference set (1), not found (0), or not yet checked (-1). The name of the new field is passed as a required command line argument.
- Artifacts (**)
- addresses_in_reference.csv - This is an archive of the records that made it past the filter to the next step.
- addresses_not_in_reference.csv - This is an archive of the records that did not make it past the filter. This artifact is part of the exclusion set of addresses that are set aside for later review.
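An optional summary of how the flag described under Output fields was assigned (1 = found in reference, 0 = not found, -1 = not yet checked) can help gauge how much of the source set will be excluded:
Code Block:
-- Distribution of the in_reference flag on the filtered source table
SELECT in_reference, COUNT(*)
FROM addresses_no_post_dir
GROUP BY in_reference
ORDER BY in_reference;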
...
Stage 3 - Geocode and filter
Step 3.1 - Geocode source dataset
...
Click Apply, wait for operation to complete and then close the Attributes window.
Save results to geocoder_matched_and_score100.shp
In the Table of Contents right-click the shapefile from the previous step and select Data → Export Data.
Click the browse icon.
In the file browser select the Save as type dropdown and select Shapefile.
Save the shapefile to the artifacts folder dedicated to this iteration of the Bulk Loader Process.
Filter the opposite set: status not matched ('M') or score not 100.
Right-click addresses_geocoded.shp and open the attributes table.
Enter the following WHERE clause:
NOT ("Status" = 'M') OR NOT ("Score" = 100)
Save results to geocoder_notmatched_or_under100.shp. (See substeps above.)
Output / Artifacts
geocoder_score_100.shp - Shapefile of matched addresses (status = 'M') with geocoding score of 100.
geocoder_notmatched_or_under100.shp - Shapefile of non-matched addresses (unmatched or tied) or with geocoding score less than 100.
The total number of records of the two output shapefiles should be the same as the number of records in the input shapefile.
...
Note: Stages 4, 5 and 6 can be run one time with the results from Stage 3, or they can be run in multiple batches of subsets. A major consideration of when to run the full set at once versus in batches is the number of records being Bulk Loaded. The size of each Bulk Loader operation affects the following aspects of the EAS:
For medium-to-large datasets (input sets with over 1,000 records) it is recommended that the process first be run on a development server to assess the implications of the operation. Where appropriate, perform the Bulk Loading process in batches over several days or weeks. The remaining steps will document a single batch example iteration of a multi-batch process.
Step 4.1 - Export shapefile for Bulk Loading (entire set or subset batch)
- Substeps
- Create Batch from Subset (optional)
This example shows filtering for the second batch of 50,000 records using the counter_ field.
- Right-click the layer geocoder_score_100 in the ArcMap Table of Contents and select Open Attributes Table.
- Click the first icon in the menu bar (top-left) and select Select By Attributes.
- Enter the following WHERE clause to select the current batch of 50,000 records:
"counter_" >= 50000 AND "counter_" < 100000
- Click Apply, wait for operation to complete and then close the Attributes window.
- The batch may contain less than 50,000 records due to filtering by geocoding results in the previous step.
- Export for Bulk Loader
In the Table of Contents right-click the layer geocoder_score_100 and select Data → Export Data.
Click the browse icon.
In the file browser select the Save as type dropdown and select Shapefile.
Save the shapefile to the artifacts folder dedicated to this iteration of the Bulk Loader Process.
e.g. R:\Tec\..\Eas\_Task\2018_2019\path\to\archive\bulkloader_process_YYYYMMDD\bulkloader\batch_002\bulkload.shp
- Artifacts
- bulkload.shp - Shapefile for loading into the Bulk Loader in the next stage.
...
Stage 4 - Export shapefile - full set (single batch) or subset (multiple batches)
Note: Stages 4, 5 and 6 can be run one time with the results from Stage 3, or they can be run in multiple batches of subsets. A major consideration of when to run the full set at once versus in batches is the number of records being Bulk Loaded. The size of each Bulk Loader operation affects the following aspects of the EAS:
For medium-to-large datasets (input sets with over 1,000 records) it is recommended that the Bulk Loading process be run in batches over several days or weeks. Reminder! It is required that the process first be run on a development server to assess the implications of the operation. The remaining steps will document a single batch iteration. Repeat these steps in a multi-batch process.
Step 4.1 - Export shapefile for Bulk Loading (entire set or subset batch)
- Substeps
- Create Batch from Subset (optional)
This example shows filtering for the second batch of 50,000 records using the counter_ field.
- Right-click the layer geocoder_score_100 in the ArcMap Table of Contents and select Open Attributes Table.
- Click the first icon in the menu bar (top-left) and select Select By Attributes.
- Enter the following WHERE clause to select the current batch of 50,000 records:
Code Block:
"counter_" > 50000 AND "counter_" <= 100000
- Click Apply, wait for operation to complete and then close the Attributes window.
- The batch may contain less than 50,000 records due to filtering by geocoding results in the previous step.
- Export for Bulk Loader
In the Table of Contents right-click the layer geocoder_score_100 and select Data → Export Data.
Click the browse icon.
In the file browser select the Save as type dropdown and select Shapefile.
Save the shapefile to the artifacts folder dedicated to this iteration of the Bulk Loader Process.
e.g. R:\Tec\..\Eas\_Task\2018_2019\path\to\archive\bulkloader_process_YYYYMMDD\bulkloader\batch_NNN\bulkload.shp
- Artifacts
- bulkload.shp - Shapefile for loading into the Bulk Loader in the next stage.
Stage 5 - Run the Bulk Loader
For a complete set of steps and background about the Bulk Loader, see also Running the Bulk Loader, a page dedicated to its input, operation and results.
Warning: Never load any new addresses into production until a successful trial run is performed on the same addresses in a non-production environment, such as development or QA.
Step 5.1 - Disable front-end access to EAS
- Notify relevant recipients that the Bulk Loader Process is starting
Disable web service on <environment>_WEB (SF DEV WEB, SF QA WEB, SF PROD WEB)
Code Block:
cd /var/www/html
sudo ./set_eas_mode.sh MAINT
- Browse to web site, http://eas.sfgov.org/, to confirm the service has stopped. (Expect to see message that EAS is currently out of service.)
Step 5.2 - Non-production-specific preparation
Restore Database
- Restore database from latest daily production backup
Step 5.3 - Production-specific preparation
Halt Services
Warning - Reason for halting services: These steps are being performed to facilitate immediate roll-back of the EAS database if the Bulk Load Process ends in failure.
Place EAS web application into maintenance mode
Place the Web servers into maintenance mode (SF PROD WEB, DR PROD WEB)
Code Block:
cd /var/www/html
sudo ./set_eas_mode.sh MAINT
SKIP
Turn off the replication server
Disable database replication by shutting down the database service on the replication server (DR PROD DB).
Code Block (Stop PostgreSQL):
#sudo -u postgres -i
#/usr/pgsql-9.0/bin/pg_ctl -D /data/9.0/data stop
Turn off downstream database propagation service(s)
Suspend downstream replication to internal business system database (SF PROD WEB).
Code Block (stop xmit):
sudo /var/www/html/eas/bin/xmit_change_notifications.bsh stop
Backup Database
- See also Backup the EAS Databases
Code Block:
sudo -u postgres -i /
...
home/dba/
...
...
scripts/dbbackup.sh > /var/tmp/dbbackup.log # this step takes about 2 minutes
ls -l /var/tmp # ensure the log file is 0 bytes
ls -la /mnt/backup/pg/daily/easproddb.sfgov.org-* # the timestamp on the last file listed should match timestamp of backup
exit # logout of user postgres when done
Step 5.4 - Database Preparation
Connect to the database, <environment>_DB, and clear any leftover records from previous Bulk Loader batches.
Code Block (TRUNCATE):
TRUNCATE bulkloader.address_extract, bulkloader.blocks_nearest;
Code Block (VACUUM):
VACUUM FULL ANALYZE bulkloader.address_extract;
Code Block (VACUUM):
VACUUM FULL ANALYZE bulkloader.blocks_nearest;
Make note of EAS record counts before the Bulk Loading operation.
Code Block (Record Counts):
SELECT schemaname,relname,n_live_tup FROM pg_stat_user_tables ORDER BY schemaname,relname,n_live_tup
- Save results in artifact as record_counts_before.csv
- Also save results in Excel spreadsheet artifact as BulkLoader_Process_YYYYMMDD.xlsx
Make note of the database partition size on the file system at the current point in time.
Code Block (disk usage):
date; df /data # 1st of 3
Step 5.5 - Transfer Shapefiles
- Transfer the bulkload.shp shapefile from Stage 4 to an EAS automation machine, <environment>_AUTO.
- Substitute <environment> with one of the relevant environments: SF_DEV, SF_QA, SF_PROD, SD_PROD.
- Copy the shapefile to the folder C:\apps\eas_automation\app_data\data\bulkload_shapefile.
Step 5.6 - Run Bulk Loader
- Open a command prompt and change folders:
cd C:\apps\eas_automation\automation\src
Run the step to stage the address records:
Code Block:
python job.py --job stage_bulkload_shapefile --env <environment> --action EXECUTE --v
python job.py --job stage_bulkload_shapefile --env SF_DEV --action EXECUTE --v
python job.py --job stage_bulkload_shapefile --env SF_QA --action EXECUTE --v
python job.py --job stage_bulkload_shapefile --env SF_PROD --action EXECUTE --v
Run the step to bulk load the address records
Code Block:
python job.py --job bulkload --env <environment> --action EXECUTE --v
python job.py --job bulkload --env SF_DEV --action EXECUTE --v
python job.py --job bulkload --env SF_QA --action EXECUTE --v
python job.py --job bulkload --env SF_PROD --action EXECUTE --v
To calculate the time it took to run the Bulk Loader look at the timestamps in the output or use a stopwatch or clock to time the operation.
- Save Bulk Loader command line output artifact as bulk_loader_CLI_output.txt
Step 5.7 - Analysis
Make note of the database partition size on the file system at this point. Compare with size of partition prior to loading to get the total disk space used as a result of running the Bulk Loader.
Code Block (disk usage):
date; df /data # 2nd of 3
Make note of EAS record counts after the Bulk Load operation.
Code Block (Record Counts):
SELECT schemaname,relname,n_live_tup FROM pg_stat_user_tables ORDER BY schemaname,relname,n_live_tup
- Save results as artifact record_counts_after.csv
- Also save results in Excel spreadsheet artifact as BulkLoader_Process_YYYYMMDD.xlsx
In the spreadsheet, calculate the difference between the 'before' and 'after' record counts. The results will indicate the number of new base addresses added to the table `public.address_base` and the number of new addresses and units added to the table `public.addresses`.
See dedicated Bulk Loader page, Running the Bulk Loader, for more analysis options.
Query and make note of totals in the bulkloader.address_extract table. The results here will be used to cross check the results in the next stage.
Count/view new base addresses added to the EAS
Code Block (Count/view new base addresses):
SELECT COUNT(*) FROM bulkloader.address_extract WHERE NOT (street_segment_id IS NULL);
SELECT * FROM bulkloader.address_extract WHERE NOT (street_segment_id IS NULL);
Count/view unit addresses (some were already there, some are new)
Code Block (Count/view unit addresses):
SELECT COUNT(*) FROM bulkloader.address_extract WHERE NOT (address_id IS NULL);
Stage 6 - Extract results
Step 6.1 - Archive exceptions
Info: The Bulk Loader operation in Stage 5 populated an EAS table named 'address_extract' in the 'bulkloader' schema with every address it attempted to load. If any errors occurred on a given address during the load, the Bulk Loader populated the 'exception_text' field for that address.
- Archive the entire address_extract table. Use a query tool such as pgAdmin to query and save the table as a CSV file.
Code Block (bulkloader.address_extract):
SELECT * FROM bulkloader.address_extract;
- Save the file in the network folder dedicated to artifacts for the Bulk Loader iteration.
- Save as artifact address_extract.csv
- Archive the addresses that raised exceptions during the Bulk Loader process
Query subtotals
Code Block (exception_text_counts):
SELECT exception_text, COUNT(*) FROM bulkloader.address_extract GROUP BY exception_text ORDER BY exception_text;
Save artifact as exception_text_counts.csv
Query all exception text records
Code Block (exception_text):
SELECT * FROM bulkloader.address_extract WHERE NOT (exception_text IS NULL)
...
...
...
ORDER BY exception_text, id;
- Save artifact as exception_text.csv
- Artifacts
- address_extract.csv - Results of every address submitted to the Bulk Loader.
- exception_text_counts.csv - Counts of the records that were not loaded due to the error indicated in the 'exception_text' field
...
- Archive the entire address_extract table. Use a query tool such as pgAdmin to query and save the table as a CSV file.
Code Block (bulkloader.address_extract):
SELECT * FROM bulkloader.address_extract;
- Save the file in the network folder dedicated to artifacts for the Bulk Loader iteration.
- For example, R:\Tec\..\Eas\_Task\path\to\archive\bulkloader_YYYYMMDD\bulkloader\batch_002\address_extract.csv
- Archive the addresses that raised exceptions during the Bulk Loader run
- Query the 'bulkloader.address_extract' table for any value in the 'exception_text' field
- exception_text.csv - Subset of just the records that were not loaded due to the error indicated in the 'exception_text' field
Step 6.2 - Archive unique EAS change_request_id associated with the Bulk Load
- Get the unique EAS change_request_id created by the Bulk Load operation. The value of <change_request_id> will be used in the next steps to count addresses added to the EAS.
Query the 'public.change_requests' table for the new 'change_request_id' value.
Code Block (change_request_id):
SELECT change_request_id FROM public.change_requests WHERE requestor_comment LIKE 'bulk load change request' ORDER BY change_request_id DESC LIMIT 1;
- Save the file in the network folder dedicated to artifacts for the Bulk Loader iteration.
- Artifacts
- address_extract.csv - Results of every address submitted to the Bulk Loader.
- exception_text.csv - Subset of just the records that were not loaded due to the error indicated in the 'exception_text' field.
- Save artifact as change_request_id.csv
- Artifacts
- change_request_id.csv - The unique EAS change_request_id created by the Bulk Load operation. The value of <change_request_id> will be used in the next steps to count addresses.
Step 6.3 - Archive new EAS addresses records
- Get all the address records (including units) added to the EAS during the Bulk Loader operation.
Query the public.addresses table on the new change_request_id value.
Code Block (addresses):
SELECT * FROM public.addresses WHERE activate_change_request_id = <change_request_id>;
- Save the file in the network folder dedicated to artifacts for the Bulk Loader iteration.
- For example, R:\Tec\..\Eas\_Task\path\to\archive\bulkloader_YYYYMMDD\bulkloader\batch_002\addresses.csv
- Save artifact as addresses.csv
- Extract sample unit address from the output
- Pick a random record from the results where unit_num is not NULL. Gather the value in the address_base_id field.
- Construct a URL from this value like this: http://eas.sfgov.org/?address=NNNNNN
- Where NNNNNN is the value from the address_base_id field.
- Make note of this URL for use in Step 7 when testing EAS after services are restored. (An optional query sketch for pulling such a sample follows the artifact list below.)
- Artifacts
- addresses.csv - All the address records (including units) added to the EAS during the Bulk Loader operation.
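If you prefer to pull the sample unit address with a query instead of scanning the CSV, a sketch along these lines works; the URL format follows the construction described above, and <change_request_id> is the value archived in Step 6.2.
Code Block:
-- Pick one random unit address loaded in this batch and build its EAS URL (optional sketch)
SELECT 'http://eas.sfgov.org/?address=' || address_base_id AS sample_unit_address_url
FROM public.addresses
WHERE activate_change_request_id = <change_request_id>
  AND unit_num IS NOT NULL
ORDER BY random()
LIMIT 1;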
Step 6.4 - Archive new EAS address_base records
- Get all the base records added to the EAS during the Bulk Loader operation.
Query the public.address_base table on the new change_request_id value.
Code Block (public.address_base):
SELECT activate_change_request_id, address_id, public.address_base.*
FROM public.address_base, public.addresses
WHERE public.address_base.address_base_id = public.addresses.address_base_id
AND public.addresses.address_base_flg = TRUE
AND public.addresses.activate_change_request_id = <change_request_id>;
- Save the file in the network folder dedicated to artifacts for the Bulk Loader iteration.
- For example, R:\Tec\..\Eas\_Task\path\to\archive\bulkloader_YYYYMMDD\bulkloader\batch_002\address_base.csv
- Save artifact as address_base.csv
- Extract sample base address from the output
- Pick a random record from the results. Gather the value in the address_base_id field.
- Construct a URL from this value like this: http://eas.sfgov.org/?address=NNNNNN
- Where NNNNNN is the value from the address_base_id field.
- Make note of this URL for use in Step 7 when testing EAS after services are restored.
- Artifacts
- address_base.csv - All the base records added to the EAS during the Bulk Loader operation.
...
Step 6.5 - Cross check results
Compare the results of Stage 5 with the results from Stage 6.
The number of addresses found in the Step 5.6 (2) should be identical to the number of addresses found in Step 6.3.
- The number of base addresses found in the Step 5.6 (2) should be identical to the number of base addresses found in Step 6.4.
Stage 7 - Cleanup and Restoration
Step 7.1 - Database Cleanup
- Connect to the database, <environment>_DB, and clear address_extract records from the latest Bulk Loader batch. Make note of final disk usage tally.
Code Block:
TRUNCATE bulkloader.address_extract, bulkloader.blocks_nearest;
Code Block:
VACUUM FULL ANALYZE bulkloader.address_extract;
...
...
- addresses.csv - All the address records (including units) added to the EAS during the Bulk Loader operation.
Step 6.5 - Cross check results
Compare the results of Stage 5 with the results from Stage 6.
The number of base addresses found in the Stage 5 Analysis should be identical to the number of base addresses found in Step 6.3.
- The number of addresses found in the Stage 5 Analysis should be less than or equal to the number of addresses listed in Step 6.4. (The Bulk Loader does not provide enough information in the bulkloader.address_extract table to determine the exact number of new addresses added. But there is enough information to determine an upper limit.) An optional query sketch for re-deriving these totals follows below.
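For the cross check, the Stage 5 Analysis totals can be re-derived from bulkloader.address_extract (run this before the Stage 7 cleanup truncates the table) and compared with the row counts of the archived CSVs. This is a sketch assuming the same criteria the Stage 5 Analysis queries use (street_segment_id for base records, address_id for unit records).
Code Block:
-- Base addresses the Bulk Loader reported adding (compare with address_base.csv)
SELECT COUNT(*) AS new_base_addresses
FROM bulkloader.address_extract
WHERE NOT (street_segment_id IS NULL);

-- Upper bound on addresses (including units) touched by the load (compare with addresses.csv)
SELECT COUNT(*) AS address_upper_bound
FROM bulkloader.address_extract
WHERE NOT (address_id IS NULL);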
...
Step 7.1 - Database Cleanup
...
Connect to the database, <environment>_DB, and clear temporary records from the latest Bulk Loader batch.
Code Block:
TRUNCATE bulkloader.address_extract, bulkloader.blocks_nearest;
Code Block:
VACUUM FULL ANALYZE bulkloader.address_extract;
...
Step 7.2 - On Failure Restore Database
- If the Bulk Loader failed and corrupted any data then restore from the database backup.
- Follow these steps to restore from backup.
Step 7.3 - Restore Services (Production Only)
...
Turn on production-to-replication service
- TODO: add steps
Turn on downstream database propagation service(s)
Resume downstream replication to internal business system database (SF PROD WEB).
Code Block:
sudo /var/www/html/eas/bin/xmit_change_notifications.bsh start
Enable front-end access to EAS
Place the Web servers into live mode (SF PROD WEB, DR PROD WEB).
...
...
bulkloader.address_extract;
Code Block:
VACUUM FULL ANALYZE bulkloader.blocks_nearest;
Code Block:
date; df /data # 3rd of 3
# Optional step: archive output to 'df.txt' artifact
exit
Step 7.2 - Clean automation machine
- Return to automation machine and remove shapefile from 'bulkload_shapefile' folder.
- Logout of automation machine.
Step 7.3 - On Failure Restore Database
- If the Bulk Loader failed and corrupted any data then restore from the database backup.
- Follow these steps to restore from backup.
Step 7.4 - Restore Services (Production Only)
SKIP
Turn on production-to-replication service
Re-enable database replication by restarting the database service on the replication server (DR PROD DB).
Code Block (Start PostgreSQL):
#sudo -u postgres -i
#/usr/pgsql-9.0/bin/pg_ctl -D /data/9.0/data start
SKIP
Turn on downstream database propagation service(s)
Resume downstream replication to internal business system database (SF PROD WEB).
Code Block (start xmit):
#sudo /var/www/html/eas/bin/xmit_change_notifications.bsh start
Step 7.5 - Enable front-end access to EAS
Enable web service on <environment>_WEB (SF DEV WEB, SF QA WEB, SF PROD WEB)
Code Block:
cd /var/www/html
sudo ./set_eas_mode.sh LIVE
exit
Browse to website, http://eas.sfgov.org/, and review sample addresses gathered in Step 6.3 and Step 6.4
Notify relevant recipients that the Bulk Loader Process is complete
Step 7.6 - Archive artifacts
- List of artifacts
- address_base.csv
- address_extract.csv
- addresses.csv
- bulk_loader_CLI_output.txt
- change_request_id.csv
- df.txt
- exception_text.csv
- exception_text_counts.csv
Contents of progress and summary artifact, BulkLoader_Process_YYYYMMDD.xlsx
Progress - This sheet contains a table of relevant totals for each batch
Batch number
Batch date
Input record counts
New base record counts
New unit record counts
Sample addresses
Email Jobs - This sheet contains a table of details related to the weekly 'Address Notification Report' automated email job
Batch range
Record count in batch range
Email Timestamp
Total record counts in email
Subtotal of records generated as a result of the Bulk Loader
Size of email attachment
Batch N - This sheet tracks the before and after record counts for all tables in the EAS database. There is a sheet for each batch loaded. Within each sheet is a section for the 'before' records, a section for the 'after' record counts, and a 'diff' column showing the change in record counts.
END OF STEPS
Notes
...