Provide caching mechanism for introspection #116

Open
livio-a opened this issue Jul 22, 2022 · 3 comments
Labels: category: backend, enhancement (New feature or request)

Comments

@livio-a (Member) commented Jul 22, 2022

We should provide the ability to configure a cache in the introspection interceptor so that not every request triggers a call to the introspection endpoint.

The cache could either be implemented by the interceptor itself or, at minimum, be pluggable so that the application using this library can provide its own implementation.
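For illustration, here is a minimal sketch of what such a pluggable cache could look like. All names below (`Cache`, `IntrospectionResponse`, `doIntrospect`, ...) are hypothetical stand-ins, not the library's actual API: the interceptor consults a `Cache` interface before hitting the introspection endpoint, and a simple TTL-based default could be shipped alongside it.

```go
// Sketch of a pluggable introspection cache (hypothetical names).
package introspection

import (
	"context"
	"sync"
	"time"
)

// IntrospectionResponse stands in for the library's introspection result.
type IntrospectionResponse struct {
	Active bool
	// ... further RFC 7662 claims (sub, exp, scope, ...)
}

// Cache is the interface an application could plug into the interceptor.
type Cache interface {
	Get(token string) (*IntrospectionResponse, bool)
	Set(token string, resp *IntrospectionResponse)
}

// ttlCache is a simple built-in default: fixed TTL, mutex-guarded map.
type ttlCache struct {
	mu      sync.Mutex
	ttl     time.Duration
	entries map[string]ttlEntry
}

type ttlEntry struct {
	resp      *IntrospectionResponse
	expiresAt time.Time
}

func NewTTLCache(ttl time.Duration) Cache {
	return &ttlCache{ttl: ttl, entries: map[string]ttlEntry{}}
}

func (c *ttlCache) Get(token string) (*IntrospectionResponse, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	e, ok := c.entries[token]
	if !ok || time.Now().After(e.expiresAt) {
		delete(c.entries, token)
		return nil, false
	}
	return e.resp, true
}

func (c *ttlCache) Set(token string, resp *IntrospectionResponse) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.entries[token] = ttlEntry{resp: resp, expiresAt: time.Now().Add(c.ttl)}
}

// introspect shows where the interceptor would consult the cache before
// calling the endpoint; doIntrospect stands in for the real HTTP call.
func introspect(ctx context.Context, cache Cache, token string) (*IntrospectionResponse, error) {
	if resp, ok := cache.Get(token); ok {
		return resp, nil
	}
	resp, err := doIntrospect(ctx, token)
	if err != nil {
		return nil, err
	}
	cache.Set(token, resp)
	return resp, nil
}

// doIntrospect stands in for the POST to the issuer's introspection endpoint.
func doIntrospect(ctx context.Context, token string) (*IntrospectionResponse, error) {
	return &IntrospectionResponse{Active: true}, nil
}
```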

livio-a added the enhancement (New feature or request) label on Jul 22, 2022
hifabienne moved this to 📨 Product Backlog in Product Management on Dec 29, 2022
@danielloader (Contributor)

Implemented this myself using https://github.com/erni27/imcache, but would love to have it integrated into the interceptor so I can just pass a TTL to it and forget about it.

@fforootd (Member)

> Implemented this myself using https://github.com/erni27/imcache, but would love to have it integrated into the interceptor so I can just pass a TTL to it and forget about it.

Ah nice!

Would you be open to a PR?

@danielloader (Contributor) commented Oct 21, 2024

I just wrapped the CheckAuthorization call with a GetOrSet function - https://github.com/erni27/imcache/blob/0991f9bd3aa1eaeefe9a01a25fb9a1da9355d4bb/imcache.go#L376-L420.

I'm not sure if it's a good cache, or the best cache for the job - I didn't need GC-light caches relying on sharded pointer slices and other magic. I'm storing hundreds of introspection sessions at most, primarily to absorb the flurry of requests when the same access token is used for frontend async Promise.all-style batch calls.

This cache allowed me to set a default TTL, a background goroutine to clean up expired entries, and a cap on the number of cached entries to prevent OOM - happy enough with it!

It's been working well with a TTL of 60s: the first call incurs a ~100ms latency hit (as my introspection calls mostly do), and subsequent calls within that 60s window incur only a ~1ms penalty to return the introspection response from the cache (over gRPC via istio/envoy).
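For reference, a minimal sketch of that pattern under stated assumptions: the `AuthCtx` type and `checkAuthorization` stub below are stand-ins for the library's actual call, and the imcache option names follow that library's README.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/erni27/imcache"
)

// AuthCtx stands in for the authorization result (hypothetical type;
// the real check comes from the library being wrapped).
type AuthCtx struct {
	UserID string
}

// checkAuthorization stands in for the introspection-backed check,
// i.e. the ~100ms network call worth caching.
func checkAuthorization(ctx context.Context, token string) (AuthCtx, error) {
	return AuthCtx{UserID: "user-1"}, nil
}

func main() {
	// Cache keyed by access token: 60s default TTL, a background cleaner
	// goroutine for expired entries, and an entry cap to bound memory.
	cache := imcache.New[string, AuthCtx](
		imcache.WithDefaultExpirationOption[string, AuthCtx](60*time.Second),
		imcache.WithCleanerOption[string, AuthCtx](5*time.Minute),
		imcache.WithMaxEntriesOption[string, AuthCtx](1000),
	)

	authorize := func(ctx context.Context, token string) (AuthCtx, error) {
		if authCtx, ok := cache.Get(token); ok {
			return authCtx, nil // cache hit: ~1ms instead of ~100ms
		}
		authCtx, err := checkAuthorization(ctx, token)
		if err != nil {
			return AuthCtx{}, err
		}
		// Store the fresh result with the default 60s TTL; GetOrSet returns
		// whatever another goroutine may have stored in the meantime.
		authCtx, _ = cache.GetOrSet(token, authCtx, imcache.WithDefaultExpiration())
		return authCtx, nil
	}

	if authCtx, err := authorize(context.Background(), "token-abc"); err == nil {
		fmt.Println("authorized as", authCtx.UserID)
	}
}
```

Note this sketch does not deduplicate concurrent misses (no single-flight): goroutines that miss simultaneously each pay the introspection cost once, but GetOrSet keeps the stored entry consistent between them.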
