Spike data export for restoring prod db to local #2469

Presumably this S3 bucket is just manually provisioned. If we want to roll this out to another project, do we just manually make another bucket in the relevant AWS account?

Also, I'd probably make `bucket_name` `BUCKET_NAME` here.
Yeah, this bucket already existed, so I just used it. I didn't include it in SAM for that reason, but there's no particular reason we can't set up a bucket in SAM.

I guess we can also have a single DC-wide bucket for this: the user doesn't need any permissions on the bucket itself because of the pre-signed URL, so there's no problem with, e.g., limiting a user to only getting backups from one project but not others.
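For reference, a minimal sketch of the pre-signed URL idea with boto3 (the bucket and key names are made up for illustration):

```python
import boto3

s3 = boto3.client("s3")

# The generated URL grants time-limited GET access to exactly this
# object, so the downloading user needs no S3 permissions of their own.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "dc-db-exports", "Key": "myproject/latest.dump"},
    ExpiresIn=3600,  # seconds
)
print(url)
```

Because each URL is scoped to a single object and method, a shared bucket with per-project key prefixes would still keep one project's backups out of reach of another project's users.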
I don't want to get too far into prematurely trying to generalise. To start with, let's just copy and paste the code into the three repos where we need it.

That said...

There are parts of this that could potentially be a shared library, and parts that are application-specific. For example, this block here, the scrubbing rules, etc. will be different for every project.

Let's not get too bogged down in it at this point, but having implemented it in a few places it might be worth having a think about which bits ended up the same or different, and whether it's worth trying to generalise any of it.
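To make the shared-versus-application-specific split concrete, here is a hypothetical sketch (none of these names exist in the PR): the generic export machinery could live in a shared library, with each project supplying only its own scrubbing rules.

```python
from typing import Callable

ScrubRule = Callable[[dict], dict]

# Application-specific: anonymise personal data before export.
def scrub_users(row: dict) -> dict:
    return {**row, "email": "user@example.com", "name": "redacted"}

SCRUB_RULES: dict[str, ScrubRule] = {"users": scrub_users}

# Shared-library candidate: generic, knows nothing about any one project.
def scrub_table(table: str, rows: list[dict]) -> list[dict]:
    rule = SCRUB_RULES.get(table, lambda row: row)
    return [rule(row) for row in rows]

print(scrub_table("users", [{"email": "real@person.com", "name": "Jo"}]))
```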
Actually, I think the parameters could be generic. There's no reason we can't enforce some conventions, especially once we have one application per account; these settings are only at this path because we're still in a shared AWS account.
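Assuming these settings live in SSM Parameter Store (the thread doesn't actually say, and the path and names below are invented), a fixed per-account convention could look like this:

```python
import boto3

ssm = boto3.client("ssm")

def get_export_setting(name: str) -> str:
    # Hypothetical convention: with one application per AWS account,
    # a fixed path like /db-export/<name> needs no project prefix.
    response = ssm.get_parameter(Name=f"/db-export/{name}", WithDecryption=True)
    return response["Parameter"]["Value"]

bucket = get_export_setting("bucket-name")
```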
I'd suggest using `tempfile.NamedTemporaryFile` for this rather than hard-coding the location.
Happy to do so, but to be clear, this is writing to the Lambda temp directory, not the local dev one. That directory is limited to the invocation of the function, so there's no way it can interact with anything outside the scope of the invocation.
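That said, `NamedTemporaryFile` still buys a unique name and automatic cleanup, and Lambda's `/tmp` can survive between warm invocations of the same execution environment. A minimal sketch (the suffix is arbitrary):

```python
import tempfile

# NamedTemporaryFile allocates a unique name under the platform temp
# directory (/tmp on Lambda) and deletes the file when it is closed,
# so nothing lingers into a later warm invocation.
with tempfile.NamedTemporaryFile(suffix=".dump") as dump_file:
    dump_file.write(b"example dump contents")
    dump_file.flush()
    print(dump_file.name)  # e.g. /tmp/tmpab3x9q.dump
```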
Oh. Hi there, world 👋

Can we replace these with sensible descriptions?
This one is still relevant.
I think we should add some error handling to make sure what we got back appears to be a URL. When I ran this, I got something that wasn't a URL back, but the script just tried to plough on regardless.
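The fetch step may well live in the shell script, but the shape of the check is the same anywhere; a minimal Python sketch (the bad response is invented):

```python
from urllib.parse import urlparse

def looks_like_url(value: str) -> bool:
    # A pre-signed S3 URL should parse with an https scheme and a host.
    parsed = urlparse(value.strip())
    return parsed.scheme == "https" and bool(parsed.netloc)

response = "An error occurred (AccessDenied)"  # illustrative failure output
if not looks_like_url(response):
    raise SystemExit(f"Expected a pre-signed URL, got: {response!r}")
```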
I forgot to pull the commit out (I imagine we'll squash them all later anyway), but this is done here: 51b3570#diff-388985bd50a1bb1aefbf516845f5019a2f244d5ba5cc1f10f0f6df6e98dddc1fR53

If I try to run this locally (using `AWS_PROFILE=sym-roe scripts/get-prod-db.sh`), I get an error.

Normally, I would `sudo -u postgres dropdb` or `sudo -u postgres createdb <dbname>` in order to drop or create a DB. Do you have your local user set up as a Postgres admin, or do you run the whole script as `postgres`? I wonder if it would be useful to allow the user to pass an option to use sudo here?
I reckon this should be sorted with 51b3570
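
On the sudo question above: the script itself is shell, but the shape of the idea, an opt-in sudo prefix around the Postgres admin commands, might look like this (all names are illustrative):

```python
import subprocess

def recreate_db(dbname: str, use_sudo: bool = False) -> None:
    # Optionally prefix each command with `sudo -u postgres`, so users
    # whose local account is not a Postgres admin can still run this.
    prefix = ["sudo", "-u", "postgres"] if use_sudo else []
    subprocess.run(prefix + ["dropdb", "--if-exists", dbname], check=True)
    subprocess.run(prefix + ["createdb", dbname], check=True)
```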