Replies: 2 comments 8 replies
-
Yes, @joerdav and I have discussed introducing a cache component for partial output caching. The main issue is, as you might guess, cache invalidation - one of the two hard problems of computer science. 😁 Here's a working example of a partial output cache in templ:

```templ
package main

import (
	"bytes"
	"context"
	"io"
	"log/slog"
	"os"
	"time"
)

type cacheContent struct {
	expireAt time.Time
	content  []byte
}

var cacheKeyToContent = make(map[string]cacheContent)

type cacheComponent struct {
	cacheDuration time.Duration
	key           string
}

func (c cacheComponent) Render(ctx context.Context, w io.Writer) error {
	// Get children.
	children := templ.GetChildren(ctx)
	ctx = templ.ClearChildren(ctx)
	if children == nil {
		return nil
	}
	// Serve from the cache if the entry hasn't expired.
	cc, isCached := cacheKeyToContent[c.key]
	if isCached && cc.expireAt.After(time.Now()) {
		_, err := w.Write(cc.content)
		return err
	}
	// Render children to a buffer.
	var buf bytes.Buffer
	err := children.Render(ctx, &buf)
	if err != nil {
		return err
	}
	// Cache the result.
	cacheKeyToContent[c.key] = cacheContent{
		expireAt: time.Now().Add(c.cacheDuration),
		content:  buf.Bytes(),
	}
	// Write the result to the output.
	_, err = w.Write(buf.Bytes())
	return err
}

func Cache(key string, duration time.Duration) templ.Component {
	return cacheComponent{
		cacheDuration: duration,
		key:           key,
	}
}

func slowDependency() (string, error) {
	time.Sleep(5 * time.Second)
	return "I took a while", nil
}

templ SlowComponent() {
	<div>{ slowDependency() }</div>
}

templ DemonstrateCache() {
	@Cache("slowCacheKey", 10*time.Second) {
		@SlowComponent()
	}
}

func main() {
	log := slog.New(slog.NewTextHandler(os.Stderr, nil))
	// Render the template.
	ctx := context.Background()
	log.Info("Rendering template, this should take 5 seconds")
	start := time.Now()
	err := DemonstrateCache().Render(ctx, os.Stdout)
	if err != nil {
		log.Error("failed to render template", slog.Any("error", err))
		os.Exit(1)
	}
	log.Info("Render complete", slog.Duration("time-taken", time.Since(start)))

	// Render again; the cached output is still valid, so this is instant.
	log.Info("Rendering template again, this should be instant")
	start = time.Now()
	err = DemonstrateCache().Render(ctx, os.Stdout)
	if err != nil {
		log.Error("failed to render template", slog.Any("error", err))
		os.Exit(1)
	}
	log.Info("Render complete", slog.Duration("time-taken", time.Since(start)))
}
```

Running this, the first render takes around five seconds, while the second completes almost instantly because the cached output is still within its ten-second expiry.
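One caveat on the example above: the package-level map isn't synchronised, so it isn't safe for handlers serving concurrent requests. A minimal sketch of a guarded store (illustrative only; it reuses the cacheContent type from the example and additionally needs "sync" imported):

```go
// Illustrative concurrency-safe store for cacheContent entries, guarded by a
// sync.RWMutex. The names here are invented for the sketch.
type cacheStore struct {
	mu      sync.RWMutex
	entries map[string]cacheContent
}

func newCacheStore() *cacheStore {
	return &cacheStore{entries: make(map[string]cacheContent)}
}

// get returns the cached bytes if the entry exists and hasn't expired.
func (s *cacheStore) get(key string) ([]byte, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	cc, ok := s.entries[key]
	if !ok || cc.expireAt.Before(time.Now()) {
		return nil, false
	}
	return cc.content, true
}

// set stores freshly rendered bytes with an expiry time.
func (s *cacheStore) set(key string, content []byte, ttl time.Duration) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.entries[key] = cacheContent{expireAt: time.Now().Add(ttl), content: content}
}
```

cacheComponent.Render would then call get and set instead of reading and writing the map directly.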
Use of a cache is a tradeoff between process RAM consumption and the cost of rendering again. Unless you're doing a network fetch in the component, the cost of rendering a templ component is very low. You could argue, therefore, that a better pattern is to cache the data before attempting to render components, as per the https://templ.guide/core-concepts/view-models/ concept, rather than making template rendering more complex.

Re: your idea of computing whether something can be cached, constants in Go aren't really very useful, and AFAIK you can't analyse whether something is a pure function (sadly): https://go.dev/play/p/aiYFPU8URBQ

So the burden of deciding whether to cache the output of a template rendering operation falls to the programmer, hence the cache key parameter in the API above.

Another consideration is out-of-process caching in Redis or something else, where the tradeoff is then the cost of pulling the data back over the network from the cache vs rendering it again.

I think a partial output cache could be useful even if the output of a template render isn't deterministic. For example, if I'm displaying available stock in a shop, the value is unlikely to change on every page load, so I might want to avoid hitting a backend DB by caching the result. For entire page-level output caching, I think most people would/should/could use a CDN, or some more general-purpose HTTP middleware.

As you can see, there's no need per se for this to be in the core of templ until we work out the right API. We've got https://templ.guide/experimental/overview packages, which might be the best place to start such a thing and see whether anyone actually values it.
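To make the data-first suggestion concrete, here's a sketch of the shop stock example done that way: cache the value with a TTL and keep the component itself trivial. Everything in it (fetchStockFromDB, getStock, StockBadge and the one-minute TTL) is invented for illustration and isn't part of templ.

```templ
package main

import (
	"context"
	"strconv"
	"sync"
	"time"
)

var (
	stockMu      sync.Mutex
	stockCount   int
	stockFetched time.Time
)

// fetchStockFromDB stands in for a slow backend query.
func fetchStockFromDB(ctx context.Context) (int, error) {
	time.Sleep(2 * time.Second)
	return 42, nil
}

// getStock returns the cached value, refreshing it from the backend at most
// once per minute.
func getStock(ctx context.Context) (int, error) {
	stockMu.Lock()
	defer stockMu.Unlock()
	if !stockFetched.IsZero() && time.Since(stockFetched) < time.Minute {
		return stockCount, nil
	}
	n, err := fetchStockFromDB(ctx)
	if err != nil {
		return 0, err
	}
	stockCount, stockFetched = n, time.Now()
	return n, nil
}

// The component only formats data that has already been fetched.
templ StockBadge(count int) {
	<span>{ strconv.Itoa(count) } in stock</span>
}
```

The handler fetches via getStock(ctx) and then renders StockBadge(n), so the expensive lookup is cached while the template itself stays simple and deterministic.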
-
I think something experimental is very reasonable, and your proposal is a good start to test interest; I have a couple of comments on the caching itself. This approach also lets you easily bake parameters into the cache key, if you want, through a simple key format.
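For illustration, baking a parameter into the key could look like this (a sketch that assumes the Cache helper from the comment above, plus fmt and time imports; ProductDetails and its id parameter are invented):

```templ
// Illustrative only: each product id gets its own cached fragment because the
// id is formatted into the cache key. ProductDetails is a made-up component.
templ CachedProduct(id string) {
	@Cache(fmt.Sprintf("product:%s", id), time.Minute) {
		@ProductDetails(id)
	}
}
```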
-
Hi. I'm fairly new to this project, but my first use is going smoothly, and I'm really impressed!
As I've gone over the docs and peeked into the code, one topic I've not seen much of is caching. Watch mode involves some caching, but I was thinking about an in-memory cache for templates/components.
If a component is a pure function that only references other pure functions/components, and its inputs are value types (not pointers), then a cache of `{component, args} → HTML` should be possible. This could be beneficial in a highly composed app. For example, a styled Button might be used in many places but only with a few labels, and a form might only have a few different sets of fields.

Knowing which components can be safely cached sounds non-trivial for the general case, since a template can call any function. Instead, this could be the developer's responsibility, e.g. by marking a template as `const` to indicate it is cacheable. The compiler and runtime could, over time, get better at detecting and warning when a `const` template can't be safely cached.

I was going to start playing around with this concept, but I'm already overcommitted and probably won't get to it for a while. I'm curious whether this idea has been discussed before? I'm not even sure it's worth it, but if templates are calling deterministic but slow functions, maybe it is.
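To make the idea a little more concrete, here's a rough sketch of what an opt-in cache could look like today, purely in user code, with the developer rather than the compiler asserting purity by choosing to wrap a component. Memo, the key format and Button are invented names, not templ APIs:

```go
package memo

import (
	"bytes"
	"context"
	"io"
	"sync"

	"github.com/a-h/templ"
)

var (
	mu    sync.RWMutex
	cache = map[string][]byte{}
)

// Memo renders the wrapped component once per key and replays the stored
// bytes on later renders. The key should encode the component name and its
// arguments, and the wrapped component must be pure for this to be safe.
func Memo(key string, c templ.Component) templ.Component {
	return templ.ComponentFunc(func(ctx context.Context, w io.Writer) error {
		mu.RLock()
		b, ok := cache[key]
		mu.RUnlock()
		if ok {
			_, err := w.Write(b)
			return err
		}
		var buf bytes.Buffer
		if err := c.Render(ctx, &buf); err != nil {
			return err
		}
		mu.Lock()
		cache[key] = buf.Bytes()
		mu.Unlock()
		_, err := w.Write(buf.Bytes())
		return err
	})
}
```

A template could then wrap a pure component with something like @memo.Memo("Button:Save", Button("Save")), where the key encodes the arguments.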