I actually had to solve this exact problem recently. I set up a lifecycle policy for a known prefix in the bucket, so any object under that prefix is deleted after N days. Then, when a link is requested to a static asset (which is stored at rest with a private ACL), the asset gets copied into that prefix under a random name with a public ACL, and the new link is served to the client.
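A rough sketch of what that looks like with boto3. The prefix name, bucket, and helper names here are just placeholders, and the expiry is set to 1 day for illustration; adjust to taste:

```python
import uuid

# Hypothetical prefix covered by the expiring lifecycle rule.
TRANSIENT_PREFIX = "transient/"


def make_transient_key() -> str:
    # Random, unguessable object name under the expiring prefix.
    return f"{TRANSIENT_PREFIX}{uuid.uuid4().hex}"


def setup_lifecycle_rule(bucket: str, days: int = 1) -> None:
    """One-time setup: expire everything under the transient prefix after N days."""
    import boto3  # imported inside so the pure helper above works without AWS

    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket=bucket,
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "expire-transient",
                    "Filter": {"Prefix": TRANSIENT_PREFIX},
                    "Status": "Enabled",
                    "Expiration": {"Days": days},
                }
            ]
        },
    )


def publish_temporary_copy(bucket: str, source_key: str) -> str:
    """Copy a private object into the transient prefix with a public ACL
    and return the public URL to hand to the client."""
    import boto3

    s3 = boto3.client("s3")
    dest_key = make_transient_key()
    s3.copy_object(
        Bucket=bucket,
        Key=dest_key,
        CopySource={"Bucket": bucket, "Key": source_key},
        ACL="public-read",
    )
    return f"https://{bucket}.s3.amazonaws.com/{dest_key}"
```

Note this assumes the bucket allows public ACLs (newer buckets block them by default); presigned URLs are the alternative if you'd rather not touch ACLs at all.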
So far I haven't seen any big drawbacks. It does mean storing the same objects multiple times in S3. But S3 storage is relatively cheap unless you have a huge amount of data. If bandwidth was ever a problem, it would be simple enough to wrap the transient prefix in a CDN.