# 146. LRU Cache

Design a data structure that follows the constraints of a Least Recently Used (LRU) cache.

Implement the LRUCache class:

• LRUCache(int capacity) Initialize the LRU cache with positive size capacity.
• int get(int key) Return the value of the key if the key exists, otherwise return -1.
• void put(int key, int value) Update the value of the key if the key exists. Otherwise, add the key-value pair to the cache. If the number of keys exceeds the capacity from this operation, evict the least recently used key.

The functions get and put must each run in O(1) average time complexity.

Example 1:

Input
["LRUCache", "put", "put", "get", "put", "get", "put", "get", "get", "get"]
[[2], [1, 1], [2, 2], [1], [3, 3], [2], [4, 4], [1], [3], [4]]
Output
[null, null, null, 1, null, -1, null, -1, 3, 4]

Explanation

LRUCache lRUCache = new LRUCache(2);
lRUCache.put(1, 1); // cache is {1=1}
lRUCache.put(2, 2); // cache is {1=1, 2=2}
lRUCache.get(1);    // return 1
lRUCache.put(3, 3); // LRU key was 2, evicts key 2, cache is {1=1, 3=3}
lRUCache.get(2);    // returns -1 (not found)
lRUCache.put(4, 4); // LRU key was 1, evicts key 1, cache is {4=4, 3=3}
lRUCache.get(1);    // return -1 (not found)
lRUCache.get(3);    // return 3
lRUCache.get(4);    // return 4

Constraints:

• 1 <= capacity <= 3000
• 0 <= key <= 10^4
• 0 <= value <= 10^5
• At most 2 * 10^5 calls will be made to get and put.

Solution1:

#### Hashmap + Doubly Linked List

Intuition

This solution is an extended version of an article published on the Discuss forum.

The problem can be solved with a hashmap that maps keys to nodes of a doubly linked list. This gives O(1) time for the put and get operations, and also lets us remove the least recently used node in O(1) time.

One advantage of a doubly linked list is that a node can remove itself using only its own prev and next references. In addition, it takes constant time to add and remove nodes at the head or tail.

One particularity of the doubly linked list implemented here is that it has a pseudo head and a pseudo tail to mark the boundary, so we never need to check for null nodes during updates.
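As a minimal sketch of the two ideas above (the names here are illustrative, not taken from the solution that follows): with head and tail sentinels in place, insertion needs no null checks, and a node can unlink itself in O(1) given only its own references.

```python
class Node:
    """Doubly linked list node (illustrative)."""
    def __init__(self, key=0, value=0):
        self.key, self.value = key, value
        self.prev = None
        self.next = None

# Pseudo head and tail bound the list; real nodes always sit between
# them, so every real node has non-None prev and next neighbours.
head, tail = Node(), Node()
head.next, tail.prev = tail, head

def add_after_head(node):
    # No null checks needed: head.next always exists thanks to the tail sentinel.
    node.prev, node.next = head, head.next
    head.next.prev = node
    head.next = node

def unlink(node):
    # O(1): the node rewires its neighbours using only its own references.
    node.prev.next = node.next
    node.next.prev = node.prev
```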

Implementation

class DLinkedNode():
    def __init__(self):
        self.key = 0
        self.value = 0
        self.prev = None
        self.next = None

class LRUCache():

    def _add_node(self, node):
        """
        Always add the new node right after head.
        """
        node.prev = self.head
        node.next = self.head.next

        self.head.next.prev = node
        self.head.next = node

    def _remove_node(self, node):
        """
        Remove an existing node from the linked list.
        """
        prev = node.prev
        new = node.next

        prev.next = new
        new.prev = prev

    def _move_to_head(self, node):
        """
        Move certain node in between to the head.
        """
        self._remove_node(node)
        self._add_node(node)

    def _pop_tail(self):
        """
        Pop the current tail.
        """
        res = self.tail.prev
        self._remove_node(res)
        return res

    def __init__(self, capacity):
        """
        :type capacity: int
        """
        self.cache = {}
        self.size = 0
        self.capacity = capacity
        # Pseudo head and tail to mark the boundary.
        self.head = DLinkedNode()
        self.tail = DLinkedNode()
        self.head.next = self.tail
        self.tail.prev = self.head

    def get(self, key):
        """
        :type key: int
        :rtype: int
        """
        node = self.cache.get(key, None)
        if not node:
            return -1

        # move the accessed node to the head;
        self._move_to_head(node)

        return node.value

    def put(self, key, value):
        """
        :type key: int
        :type value: int
        :rtype: void
        """
        node = self.cache.get(key)

        if not node:
            newNode = DLinkedNode()
            newNode.key = key
            newNode.value = value

            self.cache[key] = newNode
            self._add_node(newNode)

            self.size += 1

            if self.size > self.capacity:
                # pop the tail
                tail = self._pop_tail()
                del self.cache[tail.key]
                self.size -= 1
        else:
            # update the value.
            node.value = value
            self._move_to_head(node)
Solution2:

Use a doubly linked list to store the (key, val) pairs, and a dictionary mapping each key to its node. Time complexity for both get and put: O(1). Space complexity: O(capacity).

class ListNode(object):

    def __init__(self, key, val):
        self.key = key
        self.val = val
        self.prev = None
        self.next = None

class LRUCache(object):

    def __init__(self, capacity):
        """
        :type capacity: int
        """
        self.head = ListNode(-1, -1)  # dummy node; the list starts empty
        self.tail = self.head
        self.key2node = {}
        self.capacity = capacity
        self.length = 0

    def get(self, key):
        """
        :type key: int
        :rtype: int
        """
        if key not in self.key2node:
            return -1
        node = self.key2node[key]
        val = node.val
        if node.next:
            # unlink the node and re-append it at the tail (most recent)
            node.prev.next = node.next
            node.next.prev = node.prev
            self.tail.next = node
            node.prev = self.tail
            node.next = None
            self.tail = node
        return val

    def put(self, key, value):
        """
        :type key: int
        :type value: int
        :rtype: void
        """
        if key in self.key2node:
            node = self.key2node[key]
            node.val = value
            if node.next:
                # unlink the node and re-append it at the tail (most recent)
                node.prev.next = node.next
                node.next.prev = node.prev
                self.tail.next = node
                node.prev = self.tail
                node.next = None
                self.tail = node
        else:
            node = ListNode(key, value)
            self.key2node[key] = node
            self.tail.next = node
            node.prev = self.tail
            self.tail = node
            self.length += 1
            if self.length > self.capacity:
                # evict the least recently used node, right after the dummy head
                remove = self.head.next
                self.head.next = remove.next
                remove.next.prev = self.head
                del self.key2node[remove.key]
                self.length -= 1

Solution3:

Using OrderedDict:

import collections

class LRUCache:

    def __init__(self, capacity):
        """
        :type capacity: int
        """
        self.capacity = capacity
        self.dic = collections.OrderedDict()

    def get(self, key):
        """
        :type key: int
        :rtype: int
        """
        if key not in self.dic:
            return -1
        val = self.dic[key]
        self.dic.move_to_end(key)
        return val

    def put(self, key, value):
        """
        :type key: int
        :type value: int
        :rtype: void
        """
        self.dic[key] = value
        self.dic.move_to_end(key)
        if len(self.dic) > self.capacity:
            self.dic.popitem(last=False)
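To see this approach in action, the start of Example 1 can be replayed as below (the class is restated in compact form, without the docstrings, so the snippet runs on its own):

```python
import collections

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.dic = collections.OrderedDict()

    def get(self, key):
        if key not in self.dic:
            return -1
        self.dic.move_to_end(key)   # mark as most recently used
        return self.dic[key]

    def put(self, key, value):
        self.dic[key] = value
        self.dic.move_to_end(key)
        if len(self.dic) > self.capacity:
            self.dic.popitem(last=False)  # evict the least recently used key

cache = LRUCache(2)
cache.put(1, 1)
cache.put(2, 2)
print(cache.get(1))   # 1
cache.put(3, 3)       # evicts key 2
print(cache.get(2))   # -1
```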
• move_to_end():

This method moves an existing key of the dictionary either to the end or to the beginning.

Syntax:

move_to_end(key, last=True)

If last is True (the default), the key is moved to the end of the dictionary; otherwise it is moved to the beginning. Since popitem(last=False) pops from the beginning, keeping recently used keys at the end and popping from the front removes the least recently used key, which is exactly the eviction the solution above relies on.
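A quick demonstration of both behaviors (the keys and values here are arbitrary):

```python
from collections import OrderedDict

d = OrderedDict([('a', 1), ('b', 2), ('c', 3)])
d.move_to_end('b')              # 'b' becomes most recent: a, c, b
d.move_to_end('c', last=False)  # 'c' moves to the front: c, a, b
d.popitem(last=False)           # pops ('c', 3) from the front: a, b
print(list(d))                  # ['a', 'b']
```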
