
Cache the processed data #59

Open · wants to merge 5 commits into base: stable
Conversation

flolopdel

We are experiencing slow requests when processing data for the API.

There is a huge difference between the JDBC time and the total time spent on the request, and this overhead comes from the two loops performed when processing the data for the API.

To avoid this overhead, I added a check that determines whether the data came from the query cache or from a database query.

If the data came from a database query, the processing is done the old way and a cache key is updated with this info.
If the data came from the query cache, I also get the processed data from the cache, adding more speed to the request.

I hope this helps and can be added to the extension soon. If there is anything I can do, let me know.

@DominicWatson
Contributor

Thanks @flolopdel - have added to our current sprint for review (may be a little while with public holidays for our office right now)



public void function onCreateSelectDataCacheKey( event, interceptData ) {
Contributor

This feels dangerous / misplaced. This interception point could easily be fired off thousands of times in any given request (e.g. some bad uncached admin request with lots of little queries). From my understanding, you want to use a 'processed' cache of the selectData results, only when the selectData result hasn't changed.

I think a more accurate / clean approach here would be:

  1. Create a dedicated API result cache
  2. Do your selectData as normal
  3. Generate a cache key that is API Endpoint + a hash of the db query result

Then just use your dedicated API cache with this cache key. If the data changes in the DB, that cache entry will no longer get looked up.

Does that make sense? Here is an example of an extension registering its own cachebox cache using afterConfigurationLoad coldbox interception point: https://github.com/pixl8/preside-ext-s3-storage-provider/blob/stable/interceptors/S3StorageProviderInterceptors.cfc
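To make the suggestion above concrete, the registration step might look roughly like the following minimal CFML sketch, modelled on that linked extension. The cache name `presideApiResultCache` and the cache properties are illustrative assumptions, not something prescribed in this thread:

```cfc
component extends="coldbox.system.Interceptor" {

	// Sketch only: register a dedicated CacheBox cache for processed
	// API results once ColdBox has finished loading its configuration.
	public void function afterConfigurationLoad( event, interceptData ) {
		var cachebox = getController().getCachebox();

		if ( !cachebox.cacheExists( "presideApiResultCache" ) ) {
			cachebox.createCache(
				  name       = "presideApiResultCache"      // assumed name
				, provider   = "coldbox.system.cache.providers.CacheBoxProvider"
				, properties = { maxObjects=2000, defaultTimeout=1200 } // illustrative defaults
			);
		}
	}
}
```

The point of a dedicated cache is that the API results get their own eviction policy and size limits, independent of the default query cache.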

Author

Hi @DominicWatson !

I understand you.

The answer to point 2 ("Do your selectData as normal") is the reason I added the onCreateSelectDataCacheKey interceptor: to check whether the data from selectData has been cached or not.

  • If it was cached, then get the cached API-specific result
  • If not, process it and save it to the cache

What do you propose to do this without using the onCreateSelectDataCacheKey interceptor?

It is true that onCreateSelectDataCacheKey is fired a lot of times, but it is also true that I do not do any significant work there beyond setting some RequestContext flags to true or false.

Contributor

To check if the data from the selectData has been cached or not.

This does not actually matter. All that matters is that you have a version in cache that matches the result of the query. If the query is cached and the API result is cached, then super-great. If the query is not cached but the result has not changed and we still have an API result cache, then we can still use that API result cache.
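Putting that together, the lookup could be a plain get-or-compute against the dedicated cache, keyed on the endpoint plus a hash of the raw query result. This is a sketch only: `apiResultCache` and `processRecordsForApi()` are assumed names for illustration, not part of Preside's API:

```cfc
// Sketch: run selectData as normal, then key the processed result on
// endpoint + hash of the raw query result. If the underlying data has
// not changed, the hash matches and the expensive loops are skipped,
// regardless of whether the query itself came from the query cache.
var records   = selectData( objectName="news", selectFields=[ "id", "title" ] );
var cacheKey  = "api.news.index." & Hash( SerializeJson( records ) );
var cache     = getController().getCachebox().getCache( "apiResultCache" ); // assumed cache name
var processed = cache.get( cacheKey );

if ( IsNull( local.processed ) ) {
	processed = processRecordsForApi( records ); // hypothetical helper: the two slow loops
	cache.set( cacheKey, processed );
}
```

Because the key embeds a hash of the result, stale entries are simply never looked up again once the data changes; they age out of the cache on their own.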

@DominicWatson
Contributor

@flolopdel Have added some review feedback above. Let me know what you think and/or if you'd like any help with this.
