Force re-caching of one file? #333
Comments
Hello, sorry for the delay in response. The gateway does not allow this by default, but I have some options we can discuss for something like this. Take a look and see whether these strategies can work for your use case, and then we can talk about how to make them happen with the s3-gateway project.
Thank you! I was looking at proxy_cache_purge, but purchasing NGINX Plus isn't really an option for the project I'm working on, for organizational reasons. I'll try this today.
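For reference, proxy_cache_purge is the NGINX Plus mechanism for evicting a single cached object on demand. A minimal sketch of that approach, assuming a cache zone named s3_cache and a placeholder upstream (neither taken from this thread), might look like:

```nginx
# NGINX Plus only: purge one cached object by issuing a PURGE request for its URL.
proxy_cache_path /var/cache/nginx/s3 keys_zone=s3_cache:10m;

map $request_method $purge_method {
    PURGE   1;
    default 0;
}

server {
    listen 80;

    location / {
        proxy_pass         https://example-bucket.s3.amazonaws.com;  # placeholder upstream
        proxy_cache        s3_cache;
        proxy_cache_purge  $purge_method;   # requests using the PURGE method evict the entry
    }
}
```

A single object would then be purged with something like `curl -X PURGE https://gateway.example.com/test/test.html`, but again, this path requires a Plus subscription.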
I've set up proxy_cache_bypass as described, but am having an issue with caching 404s. I'm running in Docker, and have created SSL certificates per @dekobon's super helpful comment in #138. I added this to my s3_server.conf.template:
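The config snippet from this comment did not survive, but a header-driven bypass along the lines the thread describes might look something like the sketch below. The header name X-Cache-Refresh, the cache zone name, and the upstream are illustrative assumptions, not the commenter's actual values.

```nginx
# Sketch only: skip the cache when the client sends an "X-Cache-Refresh" header.
# Header name, zone name, and upstream are placeholders.
proxy_cache_path /var/cache/nginx/s3 keys_zone=s3_cache:10m inactive=60m;

server {
    listen 80;

    location / {
        proxy_pass          https://example-bucket.s3.amazonaws.com;
        proxy_cache         s3_cache;
        proxy_cache_valid   200 10m;
        # A non-empty header value makes NGINX go to the bucket instead of the cache;
        # the fresh response is then written back into the cache.
        proxy_cache_bypass  $http_x_cache_refresh;
    }
}
```

A forced refresh of a single object would then be something like `curl -H "X-Cache-Refresh: true" https://gateway.example.com/test/test.html`.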
I am now seeing new versions of files when they're updated. However, I'm only seeing a 404 for deleted files when the bypass header is included in the request; without it, the old cached copy is still served.
Hi Chris, can you confirm that the browser is not using its own cache? I doubt that's the problem, but I want to exhaust that possibility before moving forward.
Yep, definitely not a browser caching issue; I'm seeing the behavior with basic requests via curl. I deleted a file "test/test.html" from the bucket and verified in the AWS console that it has been deleted. Running the request against that path still doesn't return a 404 consistently. This is not the behavior I'm experiencing with changes to the file: if the file is modified in the bucket, running the request with the bypass header does pull the new version. For reference, I have these values set in s3_server.conf.template:
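The actual values were lost when the page was captured; a plausible set that would reproduce the behavior described above (fresh 200s replace the cached copy, but 404s never get stored) is sketched below. These are assumptions for illustration, not the commenter's real settings.

```nginx
# Illustrative only: 200 responses are cached, but no proxy_cache_valid entry
# covers 404, so a deleted object never replaces its stale cached entry.
proxy_cache_valid   200 10m;
proxy_cache_bypass  $http_x_cache_refresh;   # hypothetical refresh header, as above
proxy_cache_key     "$request_uri";
```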
I have the corresponding setup in my Dockerfile as well.
For what it's worth, deleted files do show a 404 after a few minutes once the cache expires normally; it just appears that the gateway isn't caching the 404 when the bypass headers are used.
@cmhac So I think what could be happening is that NGINX will only update the cache for responses it has been told are cacheable, and the 404s coming back from S3 presumably don't carry the right headers on their own. You can see in my example that I had to add an explicit setting to get those responses stored (see the sketch below). I have not personally done this before, but some quick googling shows that you could try enabling static website hosting, setting an index document, and then configuring that to return the right headers. There were also mentions of AWS Object Lambda to add headers, but I'm not sure you want the additional layer.
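The inline example referenced here was also stripped from the page. Reading between the lines, the fix amounts to telling NGINX that 404s are cacheable too, so a bypassed request that hits a deleted object can overwrite the stale entry. A hedged sketch, with arbitrary durations and the same hypothetical header as above:

```nginx
# Cache "not found" responses briefly; without the 404 line, a bypassed request
# that gets a 404 from S3 is served to the client but never stored, so later
# requests keep hitting the old cached 200 until it expires on its own.
proxy_cache_valid   200 10m;
proxy_cache_valid   404 1m;
proxy_cache_bypass  $http_x_cache_refresh;
```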
Is your feature request related to a problem? Please describe
My team is using this configuration to publicly serve files from a private S3 bucket. We've noticed that there are some cases where files that have changed or been deleted from the bucket are still being served, which doesn't work for us.
Is there a way to force the gateway to get the latest version of a given file, perhaps via a header? I've googled as much as I can and haven't found any clear solutions.
Describe the solution you'd like
Ideally, it would be possible to send an HTTP request for a file that tells NGINX to go back to the bucket for the latest version, even if the file has already been cached.
Describe alternatives you've considered
Currently we have disabled caching while we work on this project, which is not ideal. We want to utilize caching for obvious performance reasons but also need to have more fine-grained control over updating certain files quickly.