
feat(kraken): push to kube-manifests repository #147

Open · wants to merge 5 commits into master
Conversation

jonathankao97 (Contributor)
Untested, so I don't know whether this actually works, but it seems fine.

AWS_SECRET_ACCESS_KEY: '${{ secrets.GH_AWS_SECRET_ACCESS_KEY }}',
},
},
name: 'Push to kube-manifests repository',
Contributor

We can push manifests to another repo, but we should still keep & run the deployment steps:

kubectl apply -f k8s/dist/ -l app.kubernetes.io/component=certificate
kubectl apply -f k8s/dist/ --prune -l app.kubernetes.io/part-of=$RELEASE_NAME

Or did you plan for this to be run as part of the GitHub Actions for the kube-manifests repository? It might make more sense to keep deployments specific to products, imo, and just push to kube-manifests for record-keeping.

Also, can you update the snapshots?

Member

The idea behind this PR is to migrate to Argo CD to manage our deployments, rather than manually running kubectl apply against the cluster. We can push YAML files to the kube-manifests repo, which Argo monitors; it can then update our cluster from those changes.
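For context, this is roughly the shape of the Argo CD side: a minimal sketch of an Application resource pointed at kube-manifests, written as a plain TypeScript object for illustration. The repo URL, paths, and namespaces here are assumptions, not values from this PR.

// Hypothetical Argo CD Application watching kube-manifests (all values illustrative)
const application = {
  apiVersion: 'argoproj.io/v1alpha1',
  kind: 'Application',
  metadata: { name: 'penn-courses', namespace: 'argocd' },
  spec: {
    project: 'default',
    source: {
      repoURL: 'https://github.com/pennlabs/kube-manifests',
      path: 'penn-courses',
      targetRevision: 'HEAD',
    },
    destination: { server: 'https://kubernetes.default.svc', namespace: 'default' },
    // with automated sync, Argo applies new manifests as soon as they are pushed
    syncPolicy: { automated: { prune: true } },
  },
};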

Contributor Author

I've added the EKS deploy step back in for now. It's likely easier to push to kube-manifests and keep kubectl deploying while the Argo deployment is getting ironed out, then remove the kubectl deploy step afterwards.

Member

Using both Argo and the direct kubectl apply feels like we may end up with some weird race condition. It may make sense to remove the kubectl apply here, finish the kittyhawk rollout, then migrate over to Argo.

run: dedent`aws eks --region us-east-1 update-kubeconfig --name production --role-arn arn:aws:iam::\${AWS_ACCOUNT_ID}:role/kubectl

# get repo name from synth step
RELEASE_NAME=\${{ steps.synth.outputs.RELEASE_NAME }}
Contributor

RELEASE_NAME=\${{ steps.synth.outputs.RELEASE_NAME }} is necessary to use the $RELEASE_NAME shell variable in the git commit -m "chore(k8s): deploy $RELEASE_NAME" line you added.
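For reference, a sketch of how the two lines relate inside the run script (step layout assumed from this PR); the shell assignment has to appear before the commit line that interpolates it:

import dedent from 'dedent';

const pushStep = {
  name: 'Push to kube-manifests repository',
  run: dedent`cd kube-manifests
    # get repo name from synth step -- must be set before the commit below
    RELEASE_NAME=\${{ steps.synth.outputs.RELEASE_NAME }}
    git add \${{ github.repository }}
    git commit -m "chore(k8s): deploy $RELEASE_NAME"`,
};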

Contributor Author

Whoops, thanks!

codecov bot commented Apr 16, 2022

Codecov Report

Merging #147 (34bd9af) into master (efd8251) will increase coverage by 1.61%.
The diff coverage is n/a.

@@             Coverage Diff             @@
##           master      #147      +/-   ##
===========================================
+ Coverage   98.38%   100.00%   +1.61%     
===========================================
  Files          31        15      -16     
  Lines         433       169     -264     
  Branches       86        14      -72     
===========================================
- Hits          426       169     -257     
+ Misses          5         0       -5     
+ Partials        2         0       -2     
Flag       Coverage Δ
kittyhawk  ?
kraken     100.00% <ø> (ø)

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
cdk/kraken/src/deploy.ts 100.00% <ø> (ø)
cdk/kittyhawk/src/deployment.ts
cdk/kittyhawk/src/application/react.ts
cdk/kittyhawk/src/chart.ts
cdk/kittyhawk/src/certificate.ts
cdk/kittyhawk/src/application/django.ts
cdk/kittyhawk/src/serviceaccount.ts
cdk/kittyhawk/test/utils.ts
cdk/kittyhawk/src/index.ts
cdk/kittyhawk/src/cronjob.ts
... and 7 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data. Last update 63f6cb9...34bd9af.

joyliu-q (Contributor) left a comment

Overall looks good! Left a few comments specific to where we are storing the product manifests.

run: dedent`aws eks --region us-east-1 update-kubeconfig --name production --role-arn arn:aws:iam::\${AWS_ACCOUNT_ID}:role/kubectl
name: 'Push to kube-manifests repository',
run: dedent`cd kube-manifests
mkdir -p \${{ github.repository }}
Contributor

Should we make a directory from github.repository (so a pennlabs folder with a penn-courses subfolder), or should we just make a directory named after RELEASE_NAME?

I feel like the latter might make more sense, since everything will be prefixed by pennlabs anyway.
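To make the two options concrete, a quick sketch (penn-courses is a hypothetical example; github.repository is the standard owner/name value from the Actions context):

import dedent from 'dedent';

// Option 1: nest by owner, e.g. kube-manifests/pennlabs/penn-courses/
const byRepository = dedent`cd kube-manifests
  mkdir -p \${{ github.repository }}`;

// Option 2: flat, e.g. kube-manifests/penn-courses/
const byReleaseName = dedent`cd kube-manifests
  mkdir -p $RELEASE_NAME`;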


# get repo name from synth step
RELEASE_NAME=\${{ steps.synth.outputs.RELEASE_NAME }}

git add \${{ github.repository }}
Contributor

Same question here as above ^^ (since we either change both or keep both).

name: "Deploy",
run: dedent`aws eks --region us-east-1 update-kubeconfig --name production --role-arn arn:aws:iam::\${AWS_ACCOUNT_ID}:role/kubectl
# get repo name from synth step
RELEASE_NAME=\${{ steps.synth.outputs.RELEASE_NAME }}
# Deploy
kubectl apply -f k8s/dist/ -l app.kubernetes.io/component=certificate
Contributor

Since the deploy is product-specific, it might make more sense to run kubectl apply on just the specific product subfolder (e.g. k8s/dist/penn-courses) instead of on the whole directory.
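A sketch of what the scoped apply could look like (the per-product subfolder layout is assumed from the discussion above; penn-courses is just an example):

import dedent from 'dedent';

const deployStep = {
  name: 'Deploy',
  run: dedent`# apply only this product's manifests instead of all of k8s/dist/
    kubectl apply -f k8s/dist/penn-courses/ -l app.kubernetes.io/component=certificate
    kubectl apply -f k8s/dist/penn-courses/ --prune -l app.kubernetes.io/part-of=$RELEASE_NAME`,
};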


# get repo name from synth step
RELEASE_NAME=\${{ steps.synth.outputs.RELEASE_NAME }}

git add \${{ github.repository }}
git commit -m "chore(k8s): deploy $RELEASE_NAME"
Member

Just a suggestion, but if we change the commit message to something like "deploy pennlabs/repo-name@git-sha", we'll get a direct link to the commit that's being deployed. It might be a small QoL improvement.
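A sketch of the suggested message, assuming the standard github.repository and github.sha context values:

import dedent from 'dedent';

const commitStep = dedent`git add \${{ github.repository }}
  # the message now links straight to the deployed commit on GitHub
  git commit -m "chore(k8s): deploy \${{ github.repository }}@\${{ github.sha }}"`;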

Labels: none yet · 3 participants