Token ops caching #9

Closed
c0c0n3 opened this issue Dec 18, 2019 · 3 comments

c0c0n3 commented Dec 18, 2019

Caching is an important aspect of the Istio architecture.

From what I've gathered so far through debugging and code inspection, the ValidDuration and ValidUseCount fields returned in adapter responses get used by Mixer/Envoy to cache those responses.

This affects both token validation and generation. In fact, if a positive validation response gets cached for a time period C, then we have to be sure that

now + C < expiry_date(token)

otherwise security flies out of the window.
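
To make the invariant concrete, here's a minimal Go sketch of how the adapter could derive a safe cache duration from the token's expiry. The names `cacheDuration` and `safetyMargin` are hypothetical, not actual code from this repo:

```go
package main

import (
	"fmt"
	"time"
)

// safetyMargin is a hypothetical buffer so that a cached response
// always expires strictly before the token it vouches for.
const safetyMargin = 5 * time.Second

// cacheDuration returns the longest C satisfying now + C < expiry_date(token);
// zero means the token is about to expire, so don't cache at all.
func cacheDuration(tokenExpiry time.Time) time.Duration {
	c := time.Until(tokenExpiry) - safetyMargin
	if c <= 0 {
		return 0
	}
	return c
}

func main() {
	expiry := time.Now().Add(90 * time.Second)
	fmt.Println(cacheDuration(expiry)) // prints roughly 1m25s
}
```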

If you don't set the ValidDuration and ValidUseCount fields explicitly in the adapter response, it looks like the Mixer fills them in for you with default values. I ran a test where the adapter just ignores those fields, and the Mixer client then printed the defaults:

```
$ sh scripts/send-token.sh my.fat.jwt
...
Check RPC completed successfully. Check status was OK
Valid use count: 10000, valid duration: 1m0s
```

So it looks like if we want security, we'll have to deal with caching explicitly to make sure the above invariant holds true at all times. Ditto for the response attribute we use to store the generated server token.


c0c0n3 commented Dec 18, 2019

see also: istio/istio#19289


c0c0n3 commented Mar 23, 2020

We're going to roll our own caching solution instead of piggybacking on the Mixer's.

Rationale

  • Since the Mixer architecture has been deprecated (see Doomsday?! #29), anything we build on its caching mechanism can't be carried forward. Developing our own gives us a chance to reuse the caching component if we switch over to wasm.
  • Having the Mixer cache XACML calls isn't going to be a walk in the park. In fact, cache keys get generated from a set of attributes (see this), which we'd have to extend. Quite a bit of work for very little gain; keep Doomsday?! #29 in mind!
  • The DAPS ID token can't be cached along with the XACML response since it will, in general, have a different TTL than the client connector's token. So we'd have to introduce a new attribute just for the ID token, which means more templates, config, etc. Again, not worth it, given Doomsday?! #29.
  • Not using Mixer/Envoy caching comes at a price though: the Mixer will call the adapter on each request. According to Istio benchmarks, this may add about 5ms of latency per call on average (see also this). That still looks like a small price to pay compared to the effort we'd need to make Mixer caching work for us; the sketch after this list shows how the adapter can opt out of it.
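
For completeness, this is roughly how the adapter can opt out of Mixer caching. A sketch, assuming Mixer's adapter.CheckResult and status packages; reading these fields as "expire immediately, single use" is our interpretation of the Mixer code, not documented behaviour:

```go
package adapterimpl

import (
	"istio.io/istio/mixer/pkg/adapter"
	"istio.io/istio/mixer/pkg/status"
)

// uncachedOK builds a positive check result that Mixer may use at most
// once and for a zero duration, effectively forcing a fresh adapter
// call on every request.
func uncachedOK() adapter.CheckResult {
	return adapter.CheckResult{
		Status:        status.OK,
		ValidDuration: 0, // expire immediately...
		ValidUseCount: 1, // ...and allow at most one use
	}
}
```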

Proposed solution

Develop a component with an interface that hides whether the backing cache is in memory or distributed. For now, use an in-memory backing store with a map interface. The key for the ID token can be any constant string, with TTL = the token's exp. For XACML, key = hash(X) where X = the call's input params, and TTL = the exp of the client connector's token. A sketch of the interface follows.
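
Here's a minimal sketch of what that component could look like (all names are hypothetical, for illustration only):

```go
package cache

import (
	"crypto/sha256"
	"encoding/hex"
	"time"
)

// TokenCache hides whether the backing store is in-memory or distributed.
type TokenCache interface {
	// Put stores value under key, evicting it after ttl.
	Put(key string, value interface{}, ttl time.Duration)
	// Get returns the cached value and whether it was found (and not expired).
	Get(key string) (interface{}, bool)
}

// XacmlKey derives a cache key by hashing the call's input params.
func XacmlKey(params []byte) string {
	h := sha256.Sum256(params)
	return hex.EncodeToString(h[:])
}

// IDTokenKey can be any constant since there's a single server token.
const IDTokenKey = "daps-id-token"
```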

After combing the Go ecosystem, we found Ristretto to be one of the best options out there. Its robust design, based on recent research on caching strategies, caters for memory management, concurrency and lock contention, smart admission/eviction, TTL, etc. So that's the backing cache we're going to use.
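
For reference, this is roughly how a Ristretto-backed store could look. The sketch uses Ristretto's actual NewCache/SetWithTTL/Wait/Get API, but the sizing numbers and the cost of 1 per entry are placeholder assumptions (we only cache a handful of small entries), and it reuses the hypothetical IDTokenKey constant from the sketch above:

```go
package cache

import (
	"time"

	"github.com/dgraph-io/ristretto"
)

// NewRistrettoCache builds a small in-memory cache; the sizing numbers
// are placeholders, since we only store a few tokens and XACML responses.
func NewRistrettoCache() (*ristretto.Cache, error) {
	return ristretto.NewCache(&ristretto.Config{
		NumCounters: 1e4, // keys to track frequency of (~10x max items)
		MaxCost:     1e3, // max total cost; each entry below costs 1
		BufferItems: 64,  // recommended default for the Get buffer
	})
}

func example() error {
	c, err := NewRistrettoCache()
	if err != nil {
		return err
	}
	// TTL = time left until the token expires, per the invariant above.
	c.SetWithTTL(IDTokenKey, "my.fat.jwt", 1, 55*time.Second)
	c.Wait() // Set is async; wait so the value is visible to Get
	if v, ok := c.Get(IDTokenKey); ok {
		_ = v.(string)
	}
	return nil
}
```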

c0c0n3 mentioned this issue Apr 2, 2020

c0c0n3 commented Apr 3, 2020

closed by #35.

c0c0n3 closed this as completed Apr 3, 2020