[RFC] Incremental Cache improvements #621
From what I've understood of what they're saying, and from what I've observed myself, the way it works in our case is exactly how we want it to work. Except for PPR cases (which are special), this will get called in 2 situations.
It can become costly when you use the fetch cache a lot (especially in SSR), or with a very low revalidation time and a lot of pages, but that's kind of expected. As far as I know they use an LRU cache only in

The problem with an in-memory LRU cache is on-demand revalidation. There is simply no way to tell all the alive lambdas/functions about it without some custom runtime and a continuously running lambda, and even then, is it worth it?

There is an alternative: create a multi-tiered cache system, probably with 3 tiers:
But this is a lot more work and a lot more potential breaking points (what happens if something goes wrong after you've inserted into DDB, but not yet into S3?), and it is only really useful if you use the fetch cache a lot, or have a very low revalidation time with a lot of PoPs in the CDN requesting at about the same time. We also have to know whether the entry is stale, which I think we can't know on earlier versions of Next. On top of that, this is probably totally useless on Cloudflare if they use something like KV instead of R2/S3.

The way I see it, we could create a new
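The tier names come from this thread (a per-lambda in-memory LRU, DDB for metadata, S3 for the bodies), but the lookup logic below is only a hedged sketch of how such a multi-tiered read could work, not OpenNext's implementation. `TieredCache`, `getRevalidatedAt`, and `getFromStore` are hypothetical names:

```typescript
// Hypothetical sketch of a 3-tier incremental cache lookup.
// Tier 1: per-lambda in-memory map (fast, but not shared between lambdas).
// Tier 2: a small metadata store (e.g. DDB) recording on-demand revalidations.
// Tier 3: the full cache body in S3 (or R2/KV on Cloudflare).
interface CacheEntry {
  value: unknown;
  lastModified: number;
}

class TieredCache {
  private memory = new Map<string, CacheEntry>();

  constructor(
    private maxEntries: number,
    // Tier 2: returns the timestamp of the last on-demand revalidation, if any.
    private getRevalidatedAt: (key: string) => Promise<number | null>,
    // Tier 3: the authoritative (but slow) store.
    private getFromStore: (key: string) => Promise<CacheEntry | null>,
  ) {}

  async get(key: string): Promise<CacheEntry | null> {
    const local = this.memory.get(key);
    if (local) {
      // Trust the local copy only if tier 2 says no on-demand
      // revalidation happened after we cached it.
      const revalidatedAt = await this.getRevalidatedAt(key);
      if (revalidatedAt === null || revalidatedAt <= local.lastModified) {
        return local;
      }
      // Stale: someone revalidated on demand, so drop our copy.
      this.memory.delete(key);
    }
    const fresh = await this.getFromStore(key);
    if (fresh) {
      if (this.memory.size >= this.maxEntries) {
        // Crude eviction: drop the oldest inserted entry.
        const oldest = this.memory.keys().next().value;
        if (oldest !== undefined) this.memory.delete(oldest);
      }
      this.memory.set(key, fresh);
    }
    return fresh;
  }
}
```

Note the consistency hazard the comment above alludes to: tier 2 and tier 3 are written separately, so a failure between the two writes can leave them disagreeing.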
I'm going to experiment with this on Fly machines. Even when I build static pages, doing a reload on a static page still triggers a cache
Is it a miss on the CDN as well? If that's the case, yeah, that's expected as well.
And with the App Router you'll get a ton of CDN misses, because you have a different CDN entry for every different RSC link to a page (and the HTML, of course).
No, this is just testing locally via
Yeah, I expected that as well. I wonder if this cannot even happen in dev mode with the

This prop should probably be avoided without a CDN anyway.
I'm starting to get the hang of the cacheHandler. According to the types:

```typescript
// I still think we should use a local cache here. We invalidate it when the age expires.
try {
  const result = await s3Client.send(
    new GetObjectCommand({
      Bucket: CACHE_BUCKET_NAME,
      Key: buildS3Key(key, isFetch ? "fetch" : "cache"),
    }),
  );
  const cacheData = JSON.parse(
    (await result.Body?.transformToString()) ?? "{}",
  );
  return {
    value: cacheData,
    lastModified: result.LastModified?.getTime(),
  };
} catch (err) {
  // It doesn't really matter what the error is: return null so that the
  // Next server can regenerate the page and call `set()`.
  return null;
}
```
It is doing exactly that, in both: opennextjs-aws/packages/open-next/src/adapters/cache.ts, lines 137 to 159 at 44f3678.
This will cause trouble with the Pages Router's On-Demand Revalidation; it's fine without it. Maybe an option for people not using On-Demand Revalidation, but not something we'd want enabled by default.
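The failure mode described above can be made concrete with a small simulation (all names here are hypothetical, and the in-process maps merely stand in for warm lambdas and for S3): each warm lambda keeps its own in-memory copy, and an on-demand revalidation only rewrites the shared store, so it cannot reach copies that are already cached.

```typescript
// Stand-in for S3: the shared, authoritative store.
const sharedStore = new Map<string, string>();

// Stand-in for a warm lambda instance with a naive in-memory cache.
class WarmLambda {
  private local = new Map<string, string>();

  get(key: string): string | undefined {
    // Once cached locally, this instance never re-checks the shared store.
    if (this.local.has(key)) return this.local.get(key);
    const v = sharedStore.get(key);
    if (v !== undefined) this.local.set(key, v);
    return v;
  }
}

// On-demand revalidation rewrites the shared store, but there is no
// channel to notify every warm lambda that its local copy is now stale.
function revalidateOnDemand(key: string, html: string): void {
  sharedStore.set(key, html);
}
```

Only a cold instance (or one that expires its local entries) picks up the revalidated content, which is exactly why a plain in-memory cache is unsafe here by default.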
Ah ok I was looking at the wrong file. I forgot OpenNext has to support a bunch of old versions so it's doing a lot of work 😬 |
https://github.com/opennextjs/opennextjs-aws/blob/main/packages/open-next/src/overrides/incrementalCache/s3.ts#L43-L58
I had a conversation with the Vercel devs and they said that their cacheHandler makes use of `lru-cache` to prevent extraneous calls to S3 (or any external datastore). When I was using sst `NextjsSite`, I noticed lots of S3 hits too: whenever a page is reloaded, it would make the S3 call due to the `get`. Is this correct behavior? Or should we place `lru-cache` here before hitting S3, e.g. if the cache is null or is stale?

CC: @conico974 @vicb
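The "lru-cache before S3" idea could look roughly like the sketch below. To keep the example self-contained it uses a tiny hand-rolled TTL-bounded LRU as a stand-in for the `lru-cache` package, and `fetchFromS3` is a hypothetical fetcher, not an OpenNext API:

```typescript
// Minimal TTL-bounded LRU: a stand-in for the `lru-cache` package.
class TtlLru<V> {
  private map = new Map<string, { value: V; expiresAt: number }>();

  constructor(private max: number, private ttlMs: number) {}

  get(key: string, now = Date.now()): V | undefined {
    const hit = this.map.get(key);
    if (!hit) return undefined;
    if (hit.expiresAt <= now) {
      this.map.delete(key); // stale: force a fall-through to S3
      return undefined;
    }
    // Refresh recency: Map preserves insertion order, so re-insert.
    this.map.delete(key);
    this.map.set(key, hit);
    return hit.value;
  }

  set(key: string, value: V, now = Date.now()): void {
    if (this.map.size >= this.max && !this.map.has(key)) {
      const oldest = this.map.keys().next().value;
      if (oldest !== undefined) this.map.delete(oldest);
    }
    this.map.set(key, { value, expiresAt: now + this.ttlMs });
  }
}

// Only hit S3 when the local entry is missing or stale.
async function cachedGet<V>(
  lru: TtlLru<V>,
  key: string,
  fetchFromS3: (key: string) => Promise<V | null>,
): Promise<V | null> {
  const local = lru.get(key);
  if (local !== undefined) return local;
  const remote = await fetchFromS3(key);
  if (remote !== null) lru.set(key, remote);
  return remote;
}
```

As noted earlier in the thread, though, any purely local layer like this trades correctness under on-demand revalidation for fewer S3 round trips, so the TTL bounds how stale a lambda can get.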