S3 upload #150
Conversation
After some tweaks and testing, I updated how the transfer is done. Setting …. Transferring the whole …. Transferring just ….

@renchap Let me know if this works. I think this is much more reasonable than it was before, and good enough to merge.
values.yaml (Outdated)
# that are renamed. As the pods are getting redeployed, and old/new pods are
# present simultaneously, there is a chance that old asset files are
# requested from pods that don't have them anymore. Uploading asset files to
# S3 in this manner solves this potential conflict.
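For context, the excerpt above is a comment block in values.yaml; a stanza of roughly the following shape could sit beneath it. This is a hedged sketch only: the key names, Secret name, bucket, and endpoint are illustrative and not necessarily the ones this PR introduces.

```yaml
# Illustrative values.yaml sketch; key names are placeholders, not the PR's actual schema.
s3AssetUpload:
  enabled: true
  bucket: my-mastodon-assets                 # placeholder bucket name
  endpoint: https://sos-ch-gva-2.exo.io      # e.g. an Exoscale SOS endpoint
  existingSecret: mastodon-s3-credentials    # hypothetical Secret holding the access/secret keys
```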
- add that you would need to update your CDN / proxy to send requests for /assets and /packs to this bucket?
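If the cluster fronts Mastodon with ingress-nginx, one hedged way to express that routing inside the cluster (rather than at the CDN) is an ExternalName Service plus an Ingress for the two asset prefixes. All names, the host, and the bucket endpoint below are placeholders, not part of this PR:

```yaml
# Hypothetical sketch: route /assets and /packs to the S3 bucket instead of the
# Mastodon web pods, so old asset filenames keep resolving during a rolling deploy.
apiVersion: v1
kind: Service
metadata:
  name: mastodon-assets-bucket
spec:
  type: ExternalName
  externalName: my-mastodon-assets.sos-ch-gva-2.exo.io   # placeholder bucket endpoint
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mastodon-assets
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/upstream-vhost: "my-mastodon-assets.sos-ch-gva-2.exo.io"
spec:
  rules:
    - host: mastodon.example.com              # placeholder host
      http:
        paths:
          - path: /assets
            pathType: Prefix
            backend:
              service:
                name: mastodon-assets-bucket
                port:
                  number: 443
          - path: /packs
            pathType: Prefix
            backend:
              service:
                name: mastodon-assets-bucket
                port:
                  number: 443
```

Pointing a CDN directly at the bucket for those two prefixes achieves the same effect without the extra in-cluster Ingress.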
Co-authored-by: Renaud Chaput <[email protected]>
Added a job to upload assets to an S3 bucket on Helm chart creation. Tested this in an isolated situation outside of a chart, and the job successfully uploads to Exoscale, so the actual job definition works. Haven't tested it in the actual chart itself yet.

One note: this uploads the entire /public folder. Not sure if this is the "correct" thing to do or not, or if just /public/assets should be uploaded. Let me know if this should be changed.
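For readers following along, a rough sketch of what such an upload Job could look like is below. It is not the manifest from this PR: the image tags, hook choice, Secret name, and the decision to sync only the assets and packs directories are assumptions for illustration.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: mastodon-assets-upload              # hypothetical name
  annotations:
    # Re-run on install and upgrade so freshly compiled assets reach the
    # bucket before old pods are replaced (hook choice is illustrative).
    "helm.sh/hook": post-install,post-upgrade
spec:
  template:
    spec:
      restartPolicy: OnFailure
      initContainers:
        # Copy the precompiled assets out of the Mastodon image into a
        # shared emptyDir so the uploader container can reach them.
        - name: copy-assets
          image: tootsuite/mastodon:v4.2.0      # placeholder image/tag
          command: ["sh", "-c", "cp -r /opt/mastodon/public /srv/public"]
          volumeMounts:
            - name: public
              mountPath: /srv
      containers:
        - name: upload
          image: amazon/aws-cli:2.15.30         # placeholder image/tag
          command:
            - sh
            - -c
            - |
              aws s3 sync /srv/public/assets "s3://${S3_BUCKET}/assets" --endpoint-url "${S3_ENDPOINT}"
              aws s3 sync /srv/public/packs  "s3://${S3_BUCKET}/packs"  --endpoint-url "${S3_ENDPOINT}"
          envFrom:
            # Hypothetical Secret providing AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY,
            # S3_BUCKET and S3_ENDPOINT.
            - secretRef:
                name: mastodon-s3-credentials
          volumeMounts:
            - name: public
              mountPath: /srv
      volumes:
        - name: public
          emptyDir: {}
```

Syncing only /public/assets and /public/packs (rather than all of /public) is one way to address the question raised above, since those are the fingerprinted files that change names between releases.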