When a copy is skipped, it would be nice if there were an option to update the timestamps on the skipped file (https://github.com/antonagestam/collectfast/blob/master/collectfast/management/commands/collectstatic.py#L110).
This is helpful for systems that remove old files automatically, such as S3 or EFS storage with expiration rules, since it prevents old, unused files from continuing to incur storage costs.
It should probably be a strategy method, so that hashing storages that create gzip and other compressed formats can handle the extra files as well.
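As a rough illustration of the strategy-method idea: when a copy is skipped, the strategy could report every path that needs its timestamp refreshed, including compressed variants. The method name, the `.gz` suffix convention, and the storage calls below are assumptions, not Collectfast's actual interface:

```python
class GzipAwareStrategy:
    """Hypothetical strategy mixin for storages that also write .gz variants."""

    extra_suffixes = (".gz",)  # assumed naming convention for compressed copies

    def paths_to_refresh(self, prefixed_path, remote_storage):
        # The skipped file itself, plus any compressed siblings that exist,
        # so age-based expiration rules do not treat them as stale.
        candidates = [prefixed_path]
        candidates += [prefixed_path + suffix for suffix in self.extra_suffixes]
        return [path for path in candidates if remote_storage.exists(path)]
```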
So I read more into this. My current use case is EFS, and I assumed the lifecycle rules would be the same between EFS and S3. After some reading on how to implement this, I think it will need to be specific to the FileSystemStrategy.
Because of HTTP expiration headers, we do not want to modify the Last-Modified stamp on the file, so at this time I think it makes sense to implement this only for filesystem-based strategies.
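For the filesystem case, a minimal sketch of what the timestamp update could look like, assuming the remote side is a Django FileSystemStorage (or anything else exposing `.path()`); the helper name is made up:

```python
import os
import time


def refresh_local_timestamp(storage, path):
    """Bump atime/mtime of an already-present file so that age-based cleanup
    (a cron job, EFS tooling, etc.) does not treat it as stale."""
    full_path = storage.path(path)  # FileSystemStorage exposes the real path
    now = time.time()
    os.utime(full_path, (now, now))
```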
-- edit --
Bummer, support chat just revealed that EFS lifecycle management is basically useless for this task:

> Files smaller than 128 KB aren't eligible for lifecycle management and are always stored in the Standard class.

However, this issue is still useful - the cleanup just has to be manual - so I am going to rethink how I want to accomplish it before proceeding.
Google Cloud Storage lifecycle policies do not seem to place a restriction on minimum file size (per https://cloud.google.com/storage/docs/lifecycle), and the documentation also indicates that the kind of expiration policies we need are available. S3 does not place a minimum-size restriction either.
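For reference, this is roughly what an age-based expiration rule looks like on S3 via boto3 (the bucket name, prefix, and 30-day window are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Delete objects under the static/ prefix 30 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-static-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-stale-static-files",
                "Filter": {"Prefix": "static/"},
                "Status": "Enabled",
                "Expiration": {"Days": 30},
            }
        ]
    },
)
```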
So "touch" isn't the right word here. It should have something to do with lifecycle since that is the term used by the other storage providers
lifecycle_action_hook() seems appropriate
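Assuming that name, a sketch of how the hook could sit on the strategy base class, with a no-op default so strategies serving over HTTP never touch Last-Modified (the class layout and constructor arguments are assumptions):

```python
import os
import time


class Strategy:
    def lifecycle_action_hook(self, path: str) -> None:
        """Called in place of a copy when the remote file is already up to date.
        Default: do nothing, so HTTP-facing strategies keep Last-Modified intact."""


class FileSystemStrategy(Strategy):
    def __init__(self, remote_storage):
        self.remote_storage = remote_storage

    def lifecycle_action_hook(self, path: str) -> None:
        # Safe for local/EFS-backed storage: only the mtime changes, not the content.
        now = time.time()
        os.utime(self.remote_storage.path(path), (now, now))
```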