[ColdBox 6] Some Thoughts on Cache Invalidation

I’ve been giving a lot of thought to different strategies for manually invalidating event cache entries. There are a few different approaches to clearing cache items, for example:

//Purge all events from the blog handler
getCache( "template" ).clearEvent( 'blog' );

//Purge all permutations of the blog.show event
getCache( "template" ).clearEvent( 'blog.show' );

//Purge the blog.show event with an id of 12345
getCache( "template" ).clearEvent( 'blog.show', 'id=12345' );

//Purge based on just a part of the cache key
getCache( 'template' ).clearByKeySnippet( 'something' );

The downside of these approaches is that you need to know something about the cacheKey. There’s also a limitation when generating an EVENT_CACHE_SUFFIX: the closure only receives a single argument, eventHandlerBean, which doesn’t include anything specific about what is being cached.

Let’s say a user has a blogging application, and they want to expire/delete a cached entry any time they update a blog post. You could accomplish this with a custom interception point that would get the template cache, look for the appropriate key, and then clear it. However, what if the handler uses an SEO friendly slug in the URL and not the ID, and the CMS references the post by id? It gets a bit tricky, doesn’t it?
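To make the trickiness concrete, here’s a rough sketch of what that interceptor might look like today. The "postUpdated" interception point, the post object, and its getter methods are all assumptions for illustration; clearEvent() is the existing CacheBox provider method shown above.

```cfml
// /interceptors/PostCacheInterceptor.cfc -- a sketch, assuming the service
// layer announces a custom "postUpdated" interception point with the
// updated post in interceptData.
component {

	// inject the CacheBox factory via WireBox
	property name="cachebox" inject="cachebox";

	function postUpdated( event, interceptData ) {
		// This clears the cached permutation keyed by id...
		cachebox.getCache( "template" )
			.clearEvent( "blog.show", "id=#interceptData.post.getId()#" );

		// ...but if the handler was reached via an SEO-friendly slug, the
		// cached permutation is keyed by the slug instead, so the CMS (which
		// only knows the id) would also need to look up and pass the slug:
		cachebox.getCache( "template" )
			.clearEvent( "blog.show", "slug=#interceptData.post.getSlug()#" );
	}
}
```

You end up needing to reconstruct every possible cache key permutation from outside the handler, which is exactly the pain point described above.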

A Possible Solution:

What if a ColdBox cache object contained a metadata struct developers could use to describe the cached object? The updated cache object might look something like this:

{
  "contentType": "",
  "encoding": "",
  "isBinary": "",
  "renderData": "",
  "renderedContent": "",
  "responseHeaders": {},
  "statusCode": "",
  "statusText": "",
  // new metadata key
  "metadata": {
     "internalKey": "post-1234"
  }
}

Then, when a user wants to purge an entry from the cache with a post id of 1234, they could run a hypothetical method like this:

getCache( 'template' ).clearByMetadata( 'internalKey', 'post-1234' );

We would then need some way for the handler to pass this dynamic metadata on to the cache. I was thinking a good way to do this would be to use the existing prc-scoped struct, cbox_eventCacheableEntry. A user could append a special metadata key like this:

// handler method that shows a blog post
function show( event, rc, prc ) cache="true" cacheTimeout="30" {
  // get the entity
  prc.post = postService.getById( 1234 );

  // set the cache metadata
  prc.cbox_eventCacheableEntry.metadata = { "internalKey": "post-1234" };

  // ... etc
}

Thinking outside the box:

We could make things even cooler if we could use a closure function inside of our hypothetical clearByMetadata() method. So users could get pretty advanced with their “seek and destroy” functions:

getCache( 'template' ).clearByMetadata( function( metadata ) { 
  // return true if the cache item should be snuffed out
  return ( metadata.keyExists( "internalKey" ) && metadata.internalKey == "post-1234" );
} );
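Internally, a naive version of that hypothetical clearByMetadata( predicate ) on a cache provider might just walk the keys. getKeys(), get(), and clear() are existing CacheBox provider methods; the metadata struct on the cached entry is the proposed addition, so this is only a sketch:

```cfml
// Hypothetical provider method: clear every entry whose proposed
// metadata struct satisfies the caller's predicate closure.
function clearByMetadata( required function predicate ) {
	getKeys().each( function( key ) {
		var entry = get( key );
		// only event entries would carry the proposed metadata struct
		if ( !isNull( entry ) && isStruct( entry ) && entry.keyExists( "metadata" ) ) {
			if ( predicate( entry.metadata ) ) {
				clear( key );
			}
		}
	} );
}
```

The two-argument form shown earlier, clearByMetadata( 'internalKey', 'post-1234' ), could then be sugar that builds this predicate for you.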

What do you think? Would something like this be beneficial for anyone else?

One concern would be that the metadata structure you propose could be larger (in bytes/storage) than the actual cache entry. The idea has merit, but it might be that an “advanced” flag is passed along when the cache item is created, or that this only applies to “event” cache keys (or handlers, etc.). I bring this up because the cache can be used for simple values, too, or small structs.

Secondly, what kind of speed impact would this have? Cache conflicts or locking issues can be a real thing, and we’d want cache items to be cleared very quickly.

Just a couple of thoughts off the top of my head. I’m not a big “cache guy” and don’t have a ton of experience with caching, but these thoughts do surface as I read through your post. I’m sure the Ortus guys have much deeper considerations! 🙂


@CaptainPalapa thank you for such an insightful response. You’re absolutely right that performance and efficiency will be key when implementing something like this. I also agree that this type of caching implementation would work best with events, at least for now.

I don’t foresee any performance issues, since most users would probably only need to store minimal data in the cache object’s metadata. Therefore, “setting” the cache metadata shouldn’t impact performance much (in theory, of course). The biggest performance impact would likely be the process of actually seeking and destroying the cached item(s), since it involves a loop. However, I don’t think that would be much of a negative, considering the benefit of being able to expire specific events at will, and the frequency of expiring cached items would likely be low.

I know the Ortus team is pretty busy right now recovering from HappyBox and the reggaeton dance party, 😉 so I’ll take a stab at creating a PR just to get a proof of concept working.

Yes and no on the loop to seek and destroy. With Lucee and CF having closures, and the team using them a lot, doing this would definitely involve some kind of array.filter() operation, which I’m pretty sure is more efficient than a good ol’ loop. Not sure off the top of my head, but there may even be parallel processing features available. (Might be confusing my languages.) If so, that could considerably speed up the operation.
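For reference, arrayFilter() does accept a parallel flag (with an optional maxThreads) in Lucee, and Adobe ColdFusion added the same in recent versions. A parallel seek-and-destroy pass could look something like this sketch, where getKeys(), get(), and clear() are real provider methods but the metadata struct is still the proposed addition:

```cfml
// Filter the candidate keys in parallel, then clear the matches.
// arrayFilter( array, closure, parallel, maxThreads ) runs the closure
// across multiple threads when parallel=true.
var cache = getCache( "template" );

var doomedKeys = arrayFilter( cache.getKeys(), function( key ) {
	var entry = cache.get( key );
	return !isNull( entry )
		&& isStruct( entry )
		&& entry.keyExists( "metadata" )
		&& ( entry.metadata.internalKey ?: "" ) == "post-1234";
}, true, 4 );

// clearing serially avoids hammering the provider's locks from many threads
doomedKeys.each( function( key ) {
	cache.clear( key );
} );
```

Note the filter only reads; the actual clear() calls happen afterward, which keeps the window for lock contention small.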