public class LRUCache<K,V> extends AbstractCacheMap<K,V>
Items are added to the cache as they are accessed; when the cache is full, the least recently used item is ejected. This type of cache is typically implemented as a linked list, so that when a cached item is accessed again it can be moved back to the head of the queue; items are ejected from the tail of the queue. Cache access overhead is again constant time. This algorithm is simple and fast, and it has a significant advantage over FIFO: it adapts somewhat to the data access pattern, so frequently used items are less likely to be ejected. The main disadvantage is that the cache can still fill up with items that are unlikely to be accessed again soon; in particular, it can become useless in the face of scanning-type accesses. Nonetheless, this is by far the most frequently used caching algorithm.
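To make the mechanics concrete, here is a minimal, self-contained sketch of the classic approach in plain Java, using java.util.LinkedHashMap in access-order mode; the class name and values are illustrative only, not Jodd's API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the classic LRU idea: a LinkedHashMap in access-order mode
// moves an entry to the tail of its internal linked list on every get(),
// and removeEldestEntry() evicts from the head (the LRU end) when full.
public class LruSketch<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    public LruSketch(int maxSize) {
        super(16, 0.75f, true);   // accessOrder = true
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxSize;  // eject the least recently used entry
    }

    public static void main(String[] args) {
        LruSketch<Integer, String> cache = new LruSketch<>(2);
        cache.put(1, "a");
        cache.put(2, "b");
        cache.get(1);             // 1 becomes most recently used
        cache.put(3, "c");        // evicts 2, the LRU entry
        System.out.println(cache.keySet()); // prints [1, 3]
    }
}
```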
Implementation note: unfortunately, it was not possible to provide an onRemove callback, since LinkedHashMap keeps its removal methods private.
Summary for LRU: fast, adaptive, not scan resistant.
Fields inherited from class AbstractCacheMap<K,V>: cacheMap, cacheSize, existCustomTimeout, hitCount, missCount, timeout
Constructor | Description
---|---
LRUCache(int cacheSize) |
LRUCache(int cacheSize, long timeout) | Creates a new LRU cache.
Modifier and Type | Method | Description
---|---|---
protected int | pruneCache() | Prunes only expired objects; LinkedHashMap will take care of LRU if needed.
protected boolean | removeEldestEntry(int currentSize) | Removes the eldest entry if the current cache size exceeds the configured cache size.
Methods inherited from class AbstractCacheMap<K,V>: clear, createCacheObject, get, getHitCount, getMissCount, isEmpty, isFull, isPruneExpiredActive, isReallyFull, limit, onRemove, prune, put, put, remove, size, snapshot, timeout
public LRUCache(int cacheSize)
public LRUCache(int cacheSize, long timeout)
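A brief usage sketch of the two constructors, assuming the jodd.cache package and the put/get/size methods inherited from AbstractCacheMap; the size and timeout are illustrative values, not defaults:

```java
import jodd.cache.LRUCache;

public class LRUCacheDemo {
    public static void main(String[] args) {
        // At most 100 entries; entries expire after 5000 ms (illustrative values).
        LRUCache<String, String> cache = new LRUCache<>(100, 5000);
        cache.put("key", "value");            // put() is inherited from AbstractCacheMap
        System.out.println(cache.get("key")); // "value", or null once expired or evicted
        System.out.println(cache.size());     // 1
    }
}
```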
protected boolean removeEldestEntry(int currentSize)
Removes the eldest entry if the current cache size exceeds the configured cache size.
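This int-based hook presumably feeds LinkedHashMap's own removeEldestEntry(Map.Entry) callback; the following is only a hypothetical sketch of such a body, assuming the cacheSize field inherited from AbstractCacheMap:

```java
// Hypothetical sketch of the hook's body; Jodd's actual code may differ.
// cacheSize is the field inherited from AbstractCacheMap.
protected boolean removeEldestEntry(int currentSize) {
    return currentSize > cacheSize;  // eject the eldest (LRU) entry when over the limit
}
```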
protected int pruneCache()
Prunes only expired objects; LinkedHashMap will take care of LRU if needed.
Specified by: pruneCache in class AbstractCacheMap<K,V>
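A hedged sketch of what expiry-only pruning over the backing map could look like, written as a method body inside the class; the CacheObject type echoes createCacheObject from AbstractCacheMap, but its isExpired() check and the exact iteration are assumptions:

```java
// Hypothetical sketch; walks the backing cacheMap and removes only
// entries whose timeout has elapsed, leaving LRU ordering alone.
protected int pruneCache() {
    int count = 0;
    java.util.Iterator<CacheObject<K, V>> it = cacheMap.values().iterator();
    while (it.hasNext()) {
        if (it.next().isExpired()) {  // assumed expiry check on CacheObject
            it.remove();
            count++;
        }
    }
    return count;                     // number of expired entries removed
}
```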