Upgrade to Postgres 13 #3451
Conversation
RDS has only recently added support for PostGIS 3.1, as of October 2021: https://aws.amazon.com/about-aws/whats-new/2021/10/amazon-rds-postgresql-postgis-3-1/ Unfortunately, PostGIS 3.1 and 3.0 kept failing in local provisioning for both Postgres 12 and 13, so we're going with PostGIS 3.2 for now, although we'll keep all our features within what PostGIS 3.1 supports.
This is what we'll be using in staging / production.
Had been previously missed
We have been on 9.6.22 since #3149
This prepares us for a full jump to the latest 13.4.
CI is failing with this message:
which I am unable to reproduce locally. Going to take a look on Jenkins.
Fixed CI by logging in to
The hi res streams data is very large, and often fails to import into the default 30GB disk. This plugin allows for setting the disk size in the Vagrantfile, and we add a shell provisioner to increase the logical volume and file system to use it. The README is updated accordingly.
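For reference, a minimal sketch of that setup. The plugin is assumed here to be vagrant-disksize; the actual size and provisioner wiring live in this PR's Vagrantfile changes:

```sh
# Install the disk-size plugin on the host (assumption: vagrant-disksize)
vagrant plugin install vagrant-disksize

# In the Vagrantfile (Ruby), the plugin exposes a setting like:
#   config.disksize.size = '64GB'
# A shell provisioner inside the guest then grows the logical volume
# and filesystem (these two commands are from this PR's diff):
#   sudo lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
#   sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
```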
I am noticing that importing the development data with Postgres 13 takes ~33% more time. Provisioning a new services VM and importing all the development data using this series of commands:

```sh
vagrant destroy -f services &&
    vagrant up services &&
    vagrant ssh app -c 'cd /vagrant && ./scripts/aws/setupdb.sh -bc' &&
    vagrant ssh app -c 'cd /vagrant && ./scripts/aws/setupdb.sh -dmpq' &&
    vagrant ssh app -c 'cd /vagrant && ./scripts/aws/setupdb.sh -sS'
```

takes about 3h4m. We have to do the upgrade as part of AWS requirements, so I'm not going to look further into this here, but making a note of it.
Looks good. As always it is a pleasure to review PRs that are so logically split into incremental, progressive commits.
```sh
sudo lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
```
Do we have a documentation reference for this? This appears to be some advanced Linux file system configuration that I have not seen before.
Good catch. This was cobbled together from various sources including https://stackoverflow.com/a/66117279/6995854, https://medium.com/@kanrangsan/how-to-automatically-resize-virtual-box-disk-with-vagrant-9f0f48aa46b3, and https://marcbrandner.com/blog/increasing-disk-space-of-a-linux-based-vagrant-box-on-provisioning/, with some trial and error to get the minimum set of instructions. I did not know this at the time, but we've done something similar in https://github.com/azavea/cicero/pull/1617

From what I understand these commands are Ubuntu specific, with CentOS and the like using other utilities (e.g. `xfs_growfs`). Essentially, by the time we get to the shell provisioner, the physical disk size is set to 64GB as specified in the Vagrantfile. The logical volume is still stuck at ~30GB, possibly as initialized by the base box we're using. `lvextend` extends the logical volume to take up all the free space in the physical volume. `resize2fs` resizes the file system to span the entire logical volume.
I'll add this explanation to the commit message.
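For anyone verifying the result, a quick sketch of commands to inspect each layer (device names come from the Ubuntu base box, as above):

```sh
sudo pvs     # physical volume: total and free space in the volume group
sudo lvs     # logical volume size, after lvextend
df -h /      # filesystem size as the OS sees it, after resize2fs
```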
These commands are not Ubuntu-specific or terribly bleeding-edge. `lvextend` is a standard part of the lvm2 tools, and `resize2fs` will work with all variants of the ext file system. `xfs_growfs` is for xfs; filesystems generally implement their own tools to expand the filesystem.
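To make the distinction concrete, a small illustrative pair (the device and mount point are examples, not project paths):

```sh
# ext2/3/4: resize2fs takes the filesystem's block device
sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv

# xfs: xfs_growfs takes the mount point instead
sudo xfs_growfs /
```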
Ah, so `xfs_growfs` vs `resize2fs` is a matter of the file system, not the distro. Thanks for pointing that out!
Added a comment explaining this in 4356133.
This was cobbled together from various sources including:
- https://stackoverflow.com/a/66117279/6995854
- https://medium.com/@kanrangsan/how-to-automatically-resize-virtual-box-disk-with-vagrant-9f0f48aa46b3
- https://marcbrandner.com/blog/increasing-disk-space-of-a-linux-based-vagrant-box-on-provisioning/

with some trial and error to get the minimum set of instructions. We've done something similar in https://github.com/azavea/cicero/pull/1617
Since we use Postgres 13, we want to use libpq-dev 13 as well. During provisioning, at some point it gets upgraded to 14, and then reprovisioning fails since downgrades are not allowed by default. This change enables downgrades to ensure we stay on the same version.
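As a rough sketch of one way to do this with apt pinning (not necessarily the exact mechanism in this change, and assuming a 13.x build of libpq-dev is available from the configured repositories):

```sh
# Pin libpq-dev to the 13.x series; a Pin-Priority above 1000
# allows apt to downgrade to the pinned version.
sudo tee /etc/apt/preferences.d/libpq-dev >/dev/null <<'EOF'
Package: libpq-dev
Pin: version 13.*
Pin-Priority: 1001
EOF
sudo apt-get update
sudo apt-get install -y --allow-downgrades libpq-dev
```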
Overview
Upgrades Postgres to 13.4, the latest available on RDS, from 9.6.22. Necessary as AWS is going to hard switch to 12 come January 18. This PR upgrades Postgres in development and staging. It will be upgraded on production in #3444.
Connects #3379
Demo
Notes
Here are the full steps that were taken to perform this upgrade on staging:
General Setup for Working with Deployments
- A virtualenv in `~/deployment`. Activate it. `pip install -r requirements.txt` (sketched below)
- The `mmw-stg` AWS profile configured correctly
- The `mmw-stg.pem` file for SSHing in to Bastion
- The `staging.yml` deployment configuration file available locally, akin to the `default.yml` used for deployments in the fileshare.
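A minimal sketch of the first setup item, assuming a standard Python virtualenv:

```sh
python3 -m venv ~/deployment          # create the virtualenv
source ~/deployment/bin/activate      # activate it
pip install -r requirements.txt       # install the deployment dependencies
```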
Take a Database Snapshot
(this was not done for staging, but should be done for production)
- Log in to the AWS console using the `mmw-stg` credentials in LastPass
Upgrade Bastion
The bastion currently runs on 16.04 Xenial, whereas everything else has been upgraded to 20.04 Focal.
- Find a new Focal AMI to use for `BastionHostAMI` (see the CLI sketch after this list)
- Log in to the AWS console using the `mmw-stg` credentials in LastPass
- Update `BastionHostAMI` with the value from above
- Update `RDSParameterGroupName` to `mmw-postgres96`
- SSH in to the Bastion using the `mmw-stg.pem` file
- SSH from the Bastion to an `app` or `worker` VM to ensure that still works
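The exact AMI lookup command isn't preserved above; a hedged equivalent using the AWS CLI (Canonical's owner ID, Focal amd64 images; region and architecture may need adjusting) might look like:

```sh
# Find the most recent official Ubuntu 20.04 Focal AMI
aws ec2 describe-images \
  --owners 099720109477 \
  --filters 'Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*' \
  --query 'sort_by(Images, &CreationDate)[-1].ImageId' \
  --output text
```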
Upgrade Postgres to 9.6.23
This is the latest version of Postgres 9.6, and will make subsequent upgrades easier.
- Update the engine version to `9.6.23`, DB Instance class to `db.t3.medium`
- Verify with `SELECT version();` (example below)
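For the verification step, a hedged example of checking the server version through the Bastion (endpoint, user, and database are placeholders, not the project's actual values):

```sh
psql -h <rds-endpoint> -U <user> -d <database> -c 'SELECT version();'
```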
Upgrade Postgres to 12.8
This is the highest version we can go to with PostGIS 2.5, which is the highest version of PostGIS supported by Postgres 9.6.
- Create a `mmw-postgres12` parameter group
- Update the engine version to `12.8`, DB parameter group to `mmw-postgres12`
- Verify with `SELECT version();`
Upgrade Postgres to 13.4
Now with PostGIS at 3.1, we can upgrade to Postgres 13.4.
- Create a `mmw-postgres13` parameter group, where `log_min_duration_statement` has a value of `500` (to match the `mmw-postgres96` parameter group; CLI sketch below)
- Update the engine version to `13.4`, DB parameter group to `mmw-postgres13`
- Verify with `SELECT version();`
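The parameter-group steps were done in the console; illustrative AWS CLI equivalents, using only the names given above, would be:

```sh
# Create the Postgres 13 parameter group (description is a placeholder)
aws rds create-db-parameter-group \
  --db-parameter-group-name mmw-postgres13 \
  --db-parameter-group-family postgres13 \
  --description 'MMW Postgres 13 parameters'

# Match log_min_duration_statement to the mmw-postgres96 group
aws rds modify-db-parameter-group \
  --db-parameter-group-name mmw-postgres13 \
  --parameters 'ParameterName=log_min_duration_statement,ParameterValue=500,ApplyMethod=immediate'
```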
Downgrade RDS Instance to t3.micro
Since the instance was set to t2.micro before, we reduce it from t3.medium to t3.micro.
- Update `RDSInstanceType` to `db.t3.micro` (CLI example below)
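Again for illustration only, a CLI equivalent of this change (the instance identifier is a placeholder):

```sh
aws rds modify-db-instance \
  --db-instance-identifier <instance-id> \
  --db-instance-class db.t3.micro \
  --apply-immediately
```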
Resolve CloudFormation Template
Now, with everything updated manually, run the CloudFormation data plane plan to ensure everything matches:
Testing Instructions