fix: deleting access token cache for marketo bulk upload destination #3029
Conversation
Test report for this run is available at: https://test-integrations-dev.s3.amazonaws.com/integrations-test-reports/rudder-transformer/3029/test-report.html
Codecov Report

```
@@          Coverage Diff          @@
##   hotfix/20240124   #3029   +/- ##
=====================================
+ Coverage    87.07%   87.09%  +0.02%
=====================================
  Files          530      530
  Lines        28812    28800     -12
  Branches      6860     6856      -4
=====================================
- Hits         25088    25084      -4
+ Misses        3377     3371      -6
+ Partials       347      345      -2
```

View the full report in Codecov by Sentry.
Co-authored-by: Utsab Chowdhury <[email protected]>
Quality Gate passed: no new issues were introduced (0 new issues).
What are the changes introduced in this PR?
We are removing the access token cache to avoid setting faulty TTL for bulk upload.
Write a brief explainer on your code changes.
While load testing with 1M events, we found that, because the transformer repository has no centralised cache, each running transformer pod sets up its own authCache with a 1-hour TTL, starting at a different time.
The problem with Marketo is that every token request (i.e. from every pod) returns the same access token; only the remaining validity time changes.
For example:
This is the chronology of the situation:
1. Pod 1 generates a token at 11:00 AM: token is "dummyABC", expiry time 3000s. (At this point pod 2 still holds the older access token.)
2. Pod 2 generates a token at 11:10 AM: the token is the same "dummyABC", expiry time 2400s.
3. According to our present auth cache system, pod 1 considers the token valid until 12:00 PM and pod 2 until 12:10 PM.
4. Pod 1 refreshes its token at 12:00 PM.
5. Pod 2 is scheduled to refresh at 12:10 PM.
This means some event failures on pod 2 are unavoidable: the access token has already expired, but the pod will not refresh it unless some events fail.
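To make the TTL skew above concrete, here is a minimal runnable sketch mirroring the timeline (all names and numbers are illustrative; this is not the actual rudder-transformer code). Marketo hands every pod the same token with a shrinking `expires_in`, while each pod's cache trusts a full 1-hour TTL from its own fetch time:

```javascript
// Illustrative sketch of the per-pod TTL skew described above.
// Times are in seconds, with 0 = 11:00 AM.

const MINTED_AT = -600; // Marketo minted "dummyABC" at 10:50 AM
const TOKEN_LIFETIME = 3600; // real lifetime of the token, in seconds

// Simulated Marketo identity endpoint: always the same token,
// only the remaining validity (expires_in) changes.
function fetchAccessToken(now) {
  return {
    access_token: 'dummyABC',
    expires_in: TOKEN_LIFETIME - (now - MINTED_AT),
  };
}

// Per-pod cache that (incorrectly) applies a fixed 1-hour TTL
// from the pod's own fetch time instead of honouring expires_in.
function cacheToken(pod, now, token) {
  pod.token = token.access_token;
  pod.cachedUntil = now + 3600;
}

const pod1 = {};
const pod2 = {};

const t1 = fetchAccessToken(0); // pod 1 at 11:00 AM -> expires_in 3000s
cacheToken(pod1, 0, t1);        // pod 1 trusts the token until 12:00 PM

const t2 = fetchAccessToken(600); // pod 2 at 11:10 AM -> same token, 2400s left
cacheToken(pod2, 600, t2);        // pod 2 trusts the token until 12:10 PM

const realExpiry = MINTED_AT + TOKEN_LIFETIME; // token actually dies at 11:50 AM
console.log(pod2.cachedUntil - realExpiry); // pod 2 overshoots real expiry by 1200s
```

Any request pod 2 sends between the real expiry and its cached TTL will carry a dead token, which is exactly the failure window described above.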
For the above scenario, as a quick fix we are dropping the authCache implementation for now. Since this is a bulk-upload destination, we do not expect to exhaust our rate limits (we did not hit any during the 1M-event sync test).
What is the related Linear task?
Resolves INT-1404
Please explain the objectives of your changes below
Put down any required details on the broader aspect of your changes. If there are any dependent changes, mandatorily mention them here
Any changes to existing capabilities/behaviour? Mention the reason and the changes.
Yes.
Presently we cache the access tokens.
After this change we will make an axios call to fetch a fresh token on every upload, poll, and fetch-job-status request.
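The no-cache flow can be sketched as follows. Function and parameter names are illustrative (not the actual rudder-transformer code), and the HTTP client is injected so the example stays self-contained; the endpoint paths follow Marketo's public REST API:

```javascript
// Sketch of the no-cache approach: every bulk-upload step fetches a fresh
// access token, so a stale per-pod TTL can never be trusted.

async function getAccessToken(httpClient, { munchkinId, clientId, clientSecret }) {
  // Marketo identity endpoint; called once per request, never cached.
  const url = `https://${munchkinId}.mktorest.com/identity/oauth/token`;
  const { data } = await httpClient.get(url, {
    params: {
      grant_type: 'client_credentials',
      client_id: clientId,
      client_secret: clientSecret,
    },
  });
  return data.access_token;
}

async function pollJobStatus(httpClient, config, importId) {
  // No cache: a token is fetched for this request alone.
  const token = await getAccessToken(httpClient, config);
  const url = `https://${config.munchkinId}.mktorest.com/bulk/v1/leads/batch/${importId}.json`;
  const { data } = await httpClient.get(url, {
    headers: { Authorization: `Bearer ${token}` },
  });
  return data;
}
```

In production the injected client would be axios. Injecting it also makes the fetch-per-call behaviour easy to verify: a single status poll now always issues two HTTP calls, one for the token and one for the job status, which is the rate-limit trade-off accepted above.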
Any new dependencies introduced with this change?
N/A
Any new generic utility introduced or modified. Please explain the changes.
N/A
Any technical or performance related pointers to consider with the change?
N/A
Developer checklist
My code follows the style guidelines of this project
No breaking changes are being introduced.
All related docs linked with the PR?
All changes manually tested?
Any documentation changes needed with this change?
Is the PR limited to 10 file changes?
Is the PR limited to one linear task?
Are relevant unit and component test-cases added?
Reviewer checklist
Is the type of change in the PR title appropriate as per the changes?
Verified that there are no credentials or confidential data exposed with the changes.