Add some level of caching for speed-up #78
Conversation
    return decorator


def activate_requests_caching():
Why do we want both activate_requests_caching and cached? Doesn't the cached decorator negate the need for the other?
They serve different purposes. One caches what we serve to the user, the other caches what we get from other tools.
I believe there will be cases where what we deliver to the user changes (different views of the same data set) but what we need to fetch from the tools stays the same. It especially depends on how we get data from those tools (e.g. fewer requests that each fetch a large dataset rather than many small requests - I think you were planning to use this kind of approach when fetching branch info from GitHub).
User -(cached)- Ployst -(requests_caching)- 3rd party tools
Eventually we may want to review how we fetch and cache 3rd party data so that we can have some of it pre-fetched in the background, not impacting response time to the user. That lives in a longer-term future.
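The two-layer idea above can be sketched in a few lines. This is a hypothetical illustration only: a simple in-memory TTL memoizer stands in for the view-level cached decorator (the real requests-level layer uses requests-cache, not shown here), and the names ttl_cached and list_features are made up for the example.

```python
import functools
import time


def ttl_cached(ttl_seconds):
    """Hypothetical stand-in for the view-level 'cached' decorator:
    remembers a function's result per arguments for ttl_seconds."""
    def decorator(func):
        store = {}  # maps args -> (timestamp, result)

        @functools.wraps(func)
        def wrapper(*args):
            now = time.time()
            if args in store:
                stamp, result = store[args]
                if now - stamp < ttl_seconds:
                    return result  # serve the cached copy
            result = func(*args)
            store[args] = (now, result)
            return result
        return wrapper
    return decorator


calls = []


@ttl_cached(ttl_seconds=60)
def list_features(view):
    # In ployst this would render a view over data fetched from
    # 3rd-party tools; here we just record that real work happened.
    calls.append(view)
    return "features rendered as %s" % view


list_features("board")
list_features("board")   # cache hit: the function body does not run again
list_features("list")    # a different view gets its own cache entry
print(len(calls))        # -> 2
```

Note how two different views of the same underlying data ("board" and "list") each miss this cache, which is exactly the case where a shared requests-level cache underneath would still save the round-trips to the tools.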
Yeah, I understood the difference in when they were caching - but in the use case of list_features, activate_requests_caching won't ever be used.
I'm a little uneasy with the way activate_requests_caching does the caching - if it means all installed libraries using requests also get that cache, then it might not be the right way to go about it. It feels like caching should be a per-provider thing, as source code info is more important to keep live than planning info, for example.
Happy to merge if you add a config param to decide whether to call this on line 70 of deploystream/__init__.py
:)
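The per-provider concern above could look something like the following sketch. It is an assumption about a possible design, not the actual ployst code: each provider gets its own store and its own time-to-live, so fast-moving data (source code) can expire sooner than slow-moving data (planning). The class and TTL values are invented for illustration; in practice a per-provider requests-cache CachedSession would play a similar role.

```python
import time


class ProviderCache:
    """Hypothetical per-provider response cache with per-provider TTLs."""

    def __init__(self, ttls):
        self.ttls = ttls  # provider name -> ttl in seconds
        self.stores = {name: {} for name in ttls}

    def get(self, provider, key, fetch):
        """Return the cached value for (provider, key), calling
        fetch() to hit the real service only on a miss or expiry."""
        store = self.stores[provider]
        now = time.time()
        if key in store:
            stamp, value = store[key]
            if now - stamp < self.ttls[provider]:
                return value
        value = fetch()  # cache miss: talk to the 3rd-party tool
        store[key] = (now, value)
        return value


fetches = []
# Source code info stays live (30s); planning info can be staler (1h).
cache = ProviderCache({"github": 30, "planning": 3600})


def fake_fetch():
    fetches.append(1)
    return "branches"


cache.get("github", "branches/main", fake_fetch)
cache.get("github", "branches/main", fake_fetch)  # served from cache
print(len(fetches))  # -> 1
```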
In retrospect, this looked like a quick win but it is a shoddy solution. I am concerned about page load time - I want my team to use ployst and I didn't want load time to be a pain point.
But I am not sure what the best place is to implement caching for provider calls. Either 1) we wrap it around calls to providers (preferable, but at that level we may not be aware of what variables make the output change - such as provider configuration), or 2) we leave that responsibility to the providers' implementations (which will be more aware of what caching makes sense for the service they call, and which variables matter).
Leaning towards 1), but changes in provider configuration may require busting the cache for that provider.
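One way option 1) could handle the cache-busting concern is to fold the provider's configuration into the cache key itself, so a config change naturally produces a fresh key rather than requiring explicit invalidation. A hedged sketch (the function name and parameters are invented for illustration):

```python
import hashlib
import json


def provider_cache_key(provider_name, call_name, config, **params):
    """Hypothetical cache key for wrapped provider calls: includes the
    provider's configuration, so changing the config 'busts' that
    provider's cached entries without any explicit invalidation."""
    payload = json.dumps(
        {
            "provider": provider_name,
            "call": call_name,
            "config": config,
            "params": params,
        },
        sort_keys=True,  # stable ordering -> deterministic key
    )
    return hashlib.sha1(payload.encode("utf-8")).hexdigest()


key_a = provider_cache_key("github", "branches", {"token": "abc"}, repo="ployst")
key_b = provider_cache_key("github", "branches", {"token": "xyz"}, repo="ployst")
print(key_a != key_b)  # -> True: new configuration, new cache slot
```

The trade-off is that stale entries for the old configuration linger until they expire, rather than being deleted, which is usually acceptable for a read-through cache.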
At times like these I find myself missing the richness of Django as a framework and its extensive support for caching; e.g. its support for cache headers for browser caching could also be useful. I will have to do more research into the proper ways to deal with that in Flask.
It's not too late for us to change (although it might be better to spend ...). Where possible we're going to have to pre-fetch to achieve a decent user ...

On 21 May 2013 20:44, Carles Barrobés i Meix [email protected]:
Oh no, I didn't mean we should change to Django at all. Just that some things take longer to figure out. Happy with this project's learning curve so far.
I have added some caching to HTTP client calls (those made via requests, by introducing requests-cache: https://requests-cache.readthedocs.org/) and to views.
This is not really the goal of #72 (which is mostly about speeding up user feedback via progressive rendering), but it is going to be quite useful in an active multi-user environment.