# ConcurrentLfu

ConcurrentLfu approximates an LFU cache using the W-TinyLfu policy. W-TinyLfu tracks items in a window LRU list and a main space LRU divided into protected and probation segments. Reads and writes to the cache are stored in buffers and later applied to the policy LRU lists in batches under a lock. Each read and write is also recorded in a compact popularity sketch that probabilistically estimates item frequency.
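To illustrate the popularity sketch, here is a minimal count-min style frequency estimator in C# with saturating 4-bit counters. This is a hypothetical sketch of the technique, not the library's implementation; the type and method names are invented for this example.

```csharp
using System;

// Four hash-derived indexes each bump a saturating 4-bit counter;
// the frequency estimate is the minimum counter across the four rows.
public class PopularitySketch
{
    private readonly byte[][] counters;
    private readonly int mask;

    public PopularitySketch(int size)
    {
        int n = 1;
        while (n < size) n <<= 1; // power of two so a bitmask replaces modulo
        mask = n - 1;
        counters = new byte[4][];
        for (int i = 0; i < 4; i++) counters[i] = new byte[n];
    }

    public void Increment(int hash)
    {
        for (int i = 0; i < 4; i++)
        {
            int index = Rehash(hash, i) & mask;
            if (counters[i][index] < 15) counters[i][index]++; // saturate at 4 bits
        }
    }

    public int EstimateFrequency(int hash)
    {
        int min = int.MaxValue;
        for (int i = 0; i < 4; i++)
        {
            min = Math.Min(min, counters[i][Rehash(hash, i) & mask]);
        }
        return min;
    }

    private static int Rehash(int hash, int seed)
    {
        unchecked
        {
            // Cheap multiplicative mix to derive a distinct index per row.
            uint h = (uint)(hash * (2 * seed + 3) * -1640531527);
            return (int)(h >> 8);
        }
    }
}
```

Hash collisions can cause the estimate to over-count but never under-count, which is acceptable when the sketch is only used to compare the relative popularity of two items. TinyLfu also periodically halves all counters so that stale popularity decays; that aging step is omitted here for brevity.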

ConcurrentLfu closely follows the approach taken in Java's Caffeine library by Ben Manes.

```mermaid
graph LR
    new
    window
    subgraph main
    probation
    protected
    end
    new --> window --> probation --> protected
    protected --> probation
    probation ---> evicted
    window ---> evicted
    style new fill:#0000,stroke:#0000
    style evicted fill:#0000,stroke:#0000
    style main fill:#eee,stroke:#ddd
```

Items proceed through the LRU lists as follows:

- New items are added to the window LRU. When accessed, window items move to the window MRU position.
- When the window is full, candidate items are moved to the probation segment in LRU order.
- When the main space is full, the access frequency of each window candidate is compared to that of the probation victim (the probation LRU item). The item with the lower estimated frequency is evicted, repeating until the cache size is within bounds (see the admission sketch after this list).
- When a probation item is accessed, it is moved to the protected segment. If the protected segment is full, the LRU protected item is demoted to probation.
- When a protected item is accessed, it is moved to the protected MRU position.
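The eviction comparison in the third step above can be sketched as follows, reusing the hypothetical PopularitySketch type from the earlier example. The method name and tie-breaking rule are illustrative, not the library's internals.

```csharp
static class Admission
{
    // When the main space is full, the window candidate competes with the
    // probation LRU victim; the item with the lower estimated frequency loses.
    public static bool AdmitCandidate(PopularitySketch sketch, int candidateHash, int victimHash)
    {
        // Ties favor the incumbent victim, so a burst of new one-hit-wonder
        // items cannot flush useful entries out of the probation segment.
        return sketch.EstimateFrequency(candidateHash) > sketch.EstimateFrequency(victimHash);
    }
}
```

Because admission is gated on estimated frequency, an item referenced only once passes through the window and is rejected at the probation boundary, which is what makes the policy scan resistant.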

The sizes of the admission window and main space are adapted over time to iteratively improve the hit rate using a hill climbing algorithm. A larger window favors workloads with high recency bias, whereas a larger main space favors workloads with high frequency bias.
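A simplified sketch of the hill climbing idea: sample the hit rate periodically, keep moving the window boundary in the same direction while the hit rate improves, and reverse direction when it degrades. The class name, step size, and sampling details below are invented for illustration; the library's actual adaptation logic differs.

```csharp
// Hypothetical hill climber: each sample period, compare the hit rate to the
// previous period and decide which way to move the window/main boundary.
public class WindowSizeClimber
{
    private double previousHitRate;
    private int stepSize = 1; // positive grows the window, negative shrinks it

    // Returns the change to apply to the window size; the main space changes
    // by the opposite amount so total capacity stays constant.
    public int Sample(double hitRate)
    {
        if (hitRate < previousHitRate)
        {
            stepSize = -stepSize; // the last move hurt the hit rate: reverse
        }
        previousHitRate = hitRate;
        return stepSize;
    }
}
```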