feat(kraken): push to kube-manifests repository #147
base: master
Conversation
```js
    AWS_SECRET_ACCESS_KEY: '${{ secrets.GH_AWS_SECRET_ACCESS_KEY }}',
  },
},
name: 'Push to kube-manifests repository',
```
We can push manifests to another repo, but we should still keep & run the deployment steps:

```shell
kubectl apply -f k8s/dist/ -l app.kubernetes.io/component=certificate
kubectl apply -f k8s/dist/ --prune -l app.kubernetes.io/part-of=$RELEASE_NAME
```

Or did you plan for this to be run as part of the GitHub Actions for the kube-manifests repository? Might make more sense to keep deployments specific to products imo and just push to kube-manifests for record-keeping.
Also, can you update the snapshots?
The idea behind this PR is migrating to ArgoCD to manage our deployments rather than manually running kubectl apply against the cluster. We can push YAML files to the kube-manifests repo, which Argo monitors; it can then update our cluster from those changes.
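For context, the Argo side of this flow is typically configured with an `Application` resource pointed at the manifests repo. This is only a hedged sketch, the repo URL, `path`, product name, and namespaces below are assumptions for illustration, not what's actually configured:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: penn-courses              # hypothetical product name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/pennlabs/kube-manifests
    targetRevision: master
    path: penn-courses            # folder this workflow pushes rendered YAML into
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated: {}                 # auto-sync when new commits land in the repo
```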
I've added the EKS deploy step back in for now. It's likely easier to push to kube-manifests while still deploying with kubectl until the Argo deployment is ironed out, and then remove the kubectl deploy step afterwards.
Using both Argo and the direct kubectl apply feels like we may end up with some weird race condition. It may make sense to remove the kubectl apply here, finish the kittyhawk rollout, then migrate over to Argo.
```js
run: dedent`aws eks --region us-east-1 update-kubeconfig --name production --role-arn arn:aws:iam::\${AWS_ACCOUNT_ID}:role/kubectl

# get repo name from synth step
RELEASE_NAME=\${{ steps.synth.outputs.RELEASE_NAME }}
```
`RELEASE_NAME=\${{ steps.synth.outputs.RELEASE_NAME }}` is necessary to use the `$RELEASE_NAME` env variable in the `git commit -m "chore(k8s): deploy $RELEASE_NAME"` line you added.
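A minimal sketch of the interpolation at play, with the Actions expression swapped for a hypothetical literal value:

```shell
# Stand-in for the Actions expression ${{ steps.synth.outputs.RELEASE_NAME }};
# "penn-courses" is a hypothetical value for illustration.
RELEASE_NAME="penn-courses"

# Once assigned, later lines of the same script can use plain shell interpolation:
MSG="chore(k8s): deploy $RELEASE_NAME"
echo "$MSG"
```

Without the assignment, `$RELEASE_NAME` would expand to an empty string in the commit message.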
Wups, thanks
Codecov Report
```
@@            Coverage Diff             @@
##           master      #147     +/-   ##
===========================================
+ Coverage   98.38%   100.00%    +1.61%
===========================================
  Files          31        15       -16
  Lines         433       169      -264
  Branches       86        14       -72
===========================================
+ Hits          426       169      -257
+ Misses          5         0        -5
+ Partials        2         0        -2
```
Flags with carried forward coverage won't be shown. The full report is available on Codecov.
Overall looks good! Left a few comments specific to where we are storing the product manifests.
```js
run: dedent`aws eks --region us-east-1 update-kubeconfig --name production --role-arn arn:aws:iam::\${AWS_ACCOUNT_ID}:role/kubectl

name: 'Push to kube-manifests repository',
run: dedent`cd kube-manifests
mkdir -p \${{ github.repository }}
```
Should we make a directory with `github.repository` (so like a folder `pennlabs` then a subfolder `penn-courses`), or should we just make a directory with the `RELEASE_NAME`? I feel like the latter might make more sense since everything will be prefixed by pennlabs anyways.
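To make the two layouts concrete, a quick sketch using hypothetical stand-in values for the two expressions:

```shell
# Stand-ins; the real workflow uses ${{ github.repository }} and the
# synth step's RELEASE_NAME output.
REPOSITORY="pennlabs/penn-courses"
RELEASE_NAME="penn-courses"

cd "$(mktemp -d)"

# Option 1: nested layout -> nested/pennlabs/penn-courses
mkdir -p "nested/$REPOSITORY"

# Option 2: flat layout -> flat/penn-courses
mkdir -p "flat/$RELEASE_NAME"
```

Since `github.repository` already contains a `/`, `mkdir -p` silently creates the extra `pennlabs` parent directory in the nested case.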
```js
# get repo name from synth step
RELEASE_NAME=\${{ steps.synth.outputs.RELEASE_NAME }}

git add \${{ github.repository }}
```
same question here as above ^^ (since we either change both or keep both)
```js
name: "Deploy",
run: dedent`aws eks --region us-east-1 update-kubeconfig --name production --role-arn arn:aws:iam::\${AWS_ACCOUNT_ID}:role/kubectl
# get repo name from synth step
RELEASE_NAME=\${{ steps.synth.outputs.RELEASE_NAME }}
# Deploy
kubectl apply -f k8s/dist/ -l app.kubernetes.io/component=certificate
```
Since the deploy is product specific, it might make more sense to just run `k apply` on the specific product subfolder (e.g. `k8s/dist/penn-courses`) instead of on the whole directory.
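Scoping the apply to the product's subfolder would be a small change; a sketch that just builds the command string, with a hypothetical release name standing in for the synth step's output:

```shell
# Stand-in for the synth step's RELEASE_NAME output.
RELEASE_NAME="penn-courses"

# Target just the product's folder instead of all of k8s/dist/:
APPLY_CMD="kubectl apply -f k8s/dist/$RELEASE_NAME --prune -l app.kubernetes.io/part-of=$RELEASE_NAME"
echo "$APPLY_CMD"
```

With `--prune`, kubectl would then only read manifests from that folder when deciding which labeled objects to keep.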
```js
# get repo name from synth step
RELEASE_NAME=\${{ steps.synth.outputs.RELEASE_NAME }}

git add \${{ github.repository }}
git commit -m "chore(k8s): deploy $RELEASE_NAME"
```
Just a suggestion, but if we change the commit message to something like "deploy pennlabs/repo-name@git-sha", we'll get a direct link to the commit that's being deployed. Might be a small QoL improvement. Untested, so I don't know if this actually works, but it seems fine.
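A sketch of the suggested message format, using stand-in values for the `${{ github.repository }}` and `${{ github.sha }}` expressions:

```shell
# Stand-ins for ${{ github.repository }} and ${{ github.sha }}.
REPOSITORY="pennlabs/penn-courses"
SHA="0123abcdef"

# GitHub autolinks owner/repo@sha in commit messages, so the kube-manifests
# history would link straight back to the commit being deployed.
MSG="chore(k8s): deploy $REPOSITORY@$SHA"
echo "$MSG"
```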