
Here we provide the details of the Linux VM on which we deploy PostgreSQL.

Context
  • we have very little linux admin expertise
  • we are linux users (bash, vi, sed, awk, grep, etc)
  • simple is important
  • automation is important
  • the application is not huge, not complex, not demanding of resources
  • the overnight ETL and map tile generation are the most demanding parts of the application
  • we expect to update and insert an average of 10-100 "street address records" per day
  • there will be tens of thousands of reads per day (e.g., generating mailing lists)
  • there will be a modest amount of spatial processing (nearest streets, point in polygon, nearest addresses)
  • there is currently no known performance issue with the application
General
  • SF data center is primary; SD is secondary.
  • We want a small core VM (< 1 GB?) so that VM copy/clone operations are network friendly and reasonably fast.
  • A 10 GB PGDATA partition should be plenty for now; we do not expect much growth since addresses do not change that often.
  • We are expecting to upgrade to PostgreSQL 9.0 for our 1.1 release.
  • Failover will be via log file shipping (see the sketch after this list). If that proves problematic in any way (not expected), it is acceptable to lose 24 hours' worth of data.
  • We have room to build this VM at either data center, and we can give you access to the hypervisor, storage allocation, and everything else you need.
  • If we use the SD data center, you can get started right away; if we use the SF data center, you'll need to wait for the VPN credentials.
  • We (city staff) will want to look over your shoulder in web meetings so we can learn how to fish.
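
For reference, a minimal sketch of what file-based log shipping could look like (assuming the standard WAL archiving available in PostgreSQL 8.3+ and 9.0; the standby host name and archive directory below are placeholders, not settled values):

  # postgresql.conf on the primary
  archive_mode = on
  archive_command = 'rsync -a %p postgres@standby-host:/pg_archive/%f'

  # recovery.conf on the standby; pg_standby (from contrib) keeps waiting for new segments
  restore_command = 'pg_standby /pg_archive %f %p %r'

Exact paths, users, and the copy mechanism (rsync vs. scp vs. NFS) are for PGExperts to recommend.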
OS

For consistency, we would like to use CentOS 5.5.

Disk Partitions

We would like separate disk partitions for:

  • PostgreSQL data (PGDATA)
  • transaction log files (pg_xlog)
  • backups
  • OS

PGExperts shall recommend a size for each.
For reference, when we run pg_dump, e.g.:
pg_dump --host localhost --port 5432 --username postgres --format custom --blobs --verbose --file eas_20110408_1409.dmp eas_qa
the resulting file is about 200 MB.
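
To make the separate partitions concrete, here is one possible layout (the mount points and init script name are placeholders; the CentOS-packaged PostgreSQL typically defaults to /var/lib/pgsql/data, so actual paths depend on how we install):

  # example mount points
  #   /pgdata    - PGDATA (cluster data files)
  #   /pgxlog    - transaction log files
  #   /pgbackup  - pg_dump output and archived WAL
  # relocate pg_xlog onto its own partition while the server is stopped
  service postgresql stop
  mv /pgdata/pg_xlog /pgxlog/pg_xlog
  ln -s /pgxlog/pg_xlog /pgdata/pg_xlog
  service postgresql start

Given the ~200 MB dump size and the modest growth described above, even small partitions would leave plenty of headroom, but we will defer to PGExperts on the actual numbers.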

Monitoring

We would like to implement monitoring that works the same way, using the same technology, in both SD and SF.
SF Ops have a new monitoring tool (name TBD) that they would prefer to use.
Will Ops still support the use of Nagios?
What does carinet have for monitoring?
(Ask Hema) If we need to simplify, can we cut corners on monitoring in SD (i.e., not monitor there)?
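
Whatever tool we end up with (Nagios or otherwise), a basic availability check is easy to wire in. A minimal sketch, assuming the eas_qa database and standard Nagios plugin exit codes (host, port, and user are placeholders):

  #!/bin/bash
  # minimal PostgreSQL availability check: 0 = OK, 2 = CRITICAL
  if psql -h localhost -p 5432 -U postgres -d eas_qa -c 'SELECT 1;' >/dev/null 2>&1; then
      echo "OK - PostgreSQL is accepting connections"
      exit 0
  else
      echo "CRITICAL - PostgreSQL is not responding"
      exit 2
  fi

The same script could run from cron and email on failure if the SD side ends up without a full monitoring stack.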
