\chapter{Caches II}
\section{Cache Terminology}
\subsection{Basics}
\begin{itemize}
\item Cache line/block: single entry in cache
\item Line/block size: number of bytes in each cache line
\item Tag: identifies the data stored at a given cache line
\item Valid bit: indicator for whether data stored at a given cache line is valid
\item Capacity: total number of data bytes that can be stored in a cache
\end{itemize}
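As a quick worked example (the numbers here are illustrative, not from the lecture): a cache with a capacity of $32\,\mathrm{KiB}$ and a line size of $64$ bytes has
\[
\frac{32768\ \text{bytes}}{64\ \text{bytes/line}} = 512\ \text{lines},
\]
each with its own tag and valid bit.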
\subsection{Temperature}
\begin{itemize}
\item Cold: cache empty
\item Warming: cache filling with values
\item Warm: cache has a fair percentage of hits
\item Hot: cache is doing very well with high percentage of hits
\end{itemize}
\section{Cache Misses}
\begin{itemize}
\item Compulsory Miss: Caused by the first access to a block that has never been in the cache.
\begin{itemize}
\item Valid bit is 0.
\end{itemize}
\item Capacity Miss: Caused when the cache cannot contain all the blocks needed during the execution of a program.
\begin{itemize}
\item Would not occur if the cache were large enough to hold every block the program needs; independent of associativity.
\end{itemize}
\item Conflict Miss: Occurs in set-associative or direct mapped caches when multiple blocks compete for the same set.
\begin{itemize}
\item Tag is mismatched; would not occur in a fully associative cache with LRU replacement.
\end{itemize}
\end{itemize}
\subsection{Categorizing Misses}
Run an address trace against a set of caches:
\begin{enumerate}
\item Consider an infinite-sized, fully associative cache $\implies$ every miss is \emph{compulsory}
\item Consider a finite-sized, fully associative cache $\implies$ every miss that is not compulsory is \emph{capacity}
\item Consider a finite-sized cache with finite associativity $\implies$ the remaining misses are \emph{conflict}
\end{enumerate}
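The procedure above can be sketched as a tiny trace-driven simulator. The following is a minimal sketch, not the course's tool: the block size, trace, and cache capacity are made up, and a fully associative LRU cache is modeled as a simple array ordered from most to least recently used. Running the same trace with unbounded capacity counts the compulsory misses; the extra misses at a finite capacity are capacity misses.
\begin{verbatim}
/* Minimal sketch of miss classification on a fully associative LRU cache.
 * The block size, trace, and cache sizes below are illustrative only.     */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define BLOCK_BITS 6          /* 64-byte blocks */
#define MAX_BLOCKS 256        /* cap used to model an "unbounded" cache */

/* Count misses for a trace of byte addresses on a fully associative LRU
 * cache holding nblocks blocks; nblocks == 0 means unbounded capacity.  */
static int count_misses(const uint32_t *trace, int n, int nblocks)
{
    uint32_t lru[MAX_BLOCKS];              /* lru[0] is most recently used */
    int used = 0, misses = 0;

    for (int i = 0; i < n; i++) {
        uint32_t blk = trace[i] >> BLOCK_BITS;
        int hit = -1;
        for (int j = 0; j < used; j++)
            if (lru[j] == blk) { hit = j; break; }

        if (hit < 0) {                     /* miss */
            misses++;
            int cap = nblocks ? nblocks : MAX_BLOCKS;
            if (used == cap)               /* evict the least recently used */
                used--;
        } else {                           /* hit: remove, then re-insert   */
            memmove(&lru[hit], &lru[hit + 1],
                    (size_t)(used - 1 - hit) * sizeof lru[0]);
            used--;
        }
        /* insert the block at the most-recently-used position */
        memmove(&lru[1], &lru[0], (size_t)used * sizeof lru[0]);
        lru[0] = blk;
        used++;
    }
    return misses;
}

int main(void)
{
    /* Blocks touched: 0 1 2 3 4 0 -- the final access to block 0 misses in
     * a 4-block cache but not in an unbounded one.                          */
    uint32_t trace[] = { 0x00, 0x40, 0x80, 0xC0, 0x100, 0x00 };
    int n = sizeof trace / sizeof trace[0];

    int compulsory = count_misses(trace, n, 0);   /* unbounded cache */
    int finite     = count_misses(trace, n, 4);   /* 4-block cache   */

    printf("compulsory misses: %d\n", compulsory);          /* prints 5 */
    printf("capacity misses:   %d\n", finite - compulsory); /* prints 1 */
    return 0;
}
\end{verbatim}
Comparing against a set associative cache of the same size would, in the same way, expose the conflict misses.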
\section{Direct Mapped Caches}
Each memory address is associated with exactly one \emph{block} within the cache. We only need to look at a single location to determine whether the data exist within the cache.
\subsection{TIO}
Memory address is divided into three fields (unsigned integers):
\begin{itemize}
\item Order for checking: Index $\rightarrow$ Valid bit $\rightarrow$ Tag $\rightarrow$ Offset
\end{itemize}
\begin{enumerate}
\item Tag: the remaining bits once the offset and index are determined; distinguishes among the memory addresses that map to the same cache index
\item Index: cache index, i.e. specifies row/block of the cache
\item Offset: location of the byte in the block
\end{enumerate}
Cache geometry, in the lecture's area analogy:
\[
\underbrace{\text{Area}}_{\text{cache size}}
= \underbrace{\text{Height}}_{\text{number of blocks (index)}}
\times \underbrace{\text{Width}}_{\text{bytes per block (offset)}}
\]
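As a worked example (the sizes are illustrative, not from the lecture): for a direct mapped cache of $16\,\mathrm{KiB}$ with $16$-byte blocks and $32$-bit addresses,
\[
\text{Offset} = \log_2 16 = 4\ \text{bits}, \qquad
\text{Index} = \log_2\!\frac{16384}{16} = 10\ \text{bits}, \qquad
\text{Tag} = 32 - 10 - 4 = 18\ \text{bits}.
\]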
\section{Fully Associative Caches}
Data can be stored anywhere in the cache, so there are no indices (rows) and no conflict misses. However, it is impractical in hardware because we need one comparator for every entry to check all tags in parallel.
\section{Set Associative Caches}
An N-way set associative cache contains N blocks per set/row.
For a cache with M blocks, it is:
\begin{itemize}
\item Direct mapped if 1-way set associative (number of sets = number of blocks)
\item Fully associative if M-way set associative
\end{itemize}
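For instance (illustrative numbers): a $2$-way set associative cache with $M = 8$ blocks has $8/2 = 4$ sets and therefore $\log_2 4 = 2$ index bits; arranged direct mapped, the same $8$ blocks would need $\log_2 8 = 3$ index bits, and fully associative, none.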
\section{Write Policies}
\begin{description}
\item[Write-through] Write to the cache and memory at the same time.
\begin{itemize}
\item Simple to implement
\item Generates redundant memory traffic, since every store also goes to memory
\end{itemize}
\item[Write-back] Write data only to cache and set the dirty bit to 1. When this line gets evicted from the cache, write to memory.
\begin{itemize}
\item More complex control logic
\item Lowers memory traffic, because a block may be written many times before it is evicted and only the final contents go to memory.
\item Less reliable, more possibility for inconsistent state
\end{itemize}
\end{description}
\section{Allocation Policies}
\begin{description}
\item[Write-allocate] On a write miss, you pull the block you missed on into the cache.
\item[No write-allocate] On a write miss, you do not pull the block you missed on into the cache. Only memory is updated.
\begin{itemize}
\item On a read miss, the data is still loaded into the cache.
\end{itemize}
\end{description}
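To make the combinations concrete, here is a minimal sketch in C; \texttt{line\_t} and the \texttt{memory\_*} helpers are illustrative stand-ins, not a real interface. Write-back pairs naturally with write-allocate (stores land in the cache and memory is updated at eviction), while write-through pairs naturally with no-write-allocate (every store goes straight to memory).
\begin{verbatim}
/* Sketch of write and allocation policies; the types and memory_* helpers
 * are made-up stand-ins for illustration only.                            */
#include <stdio.h>

typedef struct {
    int valid, dirty;
    unsigned tag;
    unsigned char data[64];
} line_t;

static void memory_write_byte(unsigned addr, unsigned char b) {
    printf("mem[0x%x] <- %u\n", addr, b);
}
static void memory_write_block(const line_t *line) {
    printf("flush dirty line, tag 0x%x\n", line->tag);
}
static void memory_read_block(line_t *line, unsigned addr) {
    printf("fetch block for 0x%x into the cache\n", addr);
    line->valid = 1;
    line->dirty = 0;
    line->tag = addr >> 6;
}

/* Write hit, write-through: update the line and memory together. */
void write_hit_through(line_t *line, unsigned addr, unsigned char b) {
    line->data[addr % 64] = b;
    memory_write_byte(addr, b);
}

/* Write hit, write-back: update only the line and mark it dirty. */
void write_hit_back(line_t *line, unsigned addr, unsigned char b) {
    line->data[addr % 64] = b;
    line->dirty = 1;
}

/* Write miss, write-allocate: fetch the block, then write into the cache. */
void write_miss_allocate(line_t *line, unsigned addr, unsigned char b) {
    memory_read_block(line, addr);
    write_hit_back(line, addr, b);      /* typically paired with write-back */
}

/* Write miss, no-write-allocate: update memory only; cache is unchanged. */
void write_miss_no_allocate(unsigned addr, unsigned char b) {
    memory_write_byte(addr, b);
}

/* Eviction: a write-back cache must flush a dirty line before reusing it. */
void evict(line_t *line) {
    if (line->valid && line->dirty)
        memory_write_block(line);
    line->valid = line->dirty = 0;
}

int main(void) {
    line_t line = {0};
    write_miss_allocate(&line, 0x1040, 7);  /* write-back + write-allocate */
    write_hit_back(&line, 0x1044, 8);       /* later store hits the line   */
    evict(&line);                           /* the dirty line is flushed   */
    write_miss_no_allocate(0x2080, 9);      /* write-through + no-allocate */
    return 0;
}
\end{verbatim}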
\section{Replacement Policies}
When deciding to evict a cache block to make space:
\begin{description}
\item[LRU (Least Recently Used)] Evict the block whose most recent use is furthest back in time.
\item[Random] Randomly select one of the blocks in the cache to evict.
\item[MRU (Most Recently Used)] Evict the block that was used most recently.
\end{description}
\subsection{Approximate LRU (Clock Algorithm)}
Exact LRU replaces the entry that has not been used for the longest time, but tracking a full ordering of accesses is expensive in hardware; the clock algorithm approximates it.
\begin{itemize}
\item Each cache line has a \emph{reference bit} set to 1 each time the line is accessed
\item When a line needs to be evicted, we start at the first entry of the cache and check its reference bit; if the bit is 1, we clear it to 0 and advance to the next entry, and we choose the first line whose bit is already 0
\item Search from where we left off the next time we need to evict a line from cache
\item Note: We do not start evicting lines until all entries in the cache are valid
\end{itemize}
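A minimal sketch of the clock algorithm described above (the cache size and struct names are illustrative): each line carries a reference bit, and a ``hand'' sweeps from wherever it last stopped, clearing reference bits that are 1 and evicting the first line whose bit is already 0.
\begin{verbatim}
/* Sketch of clock (approximate LRU) replacement; sizes and names are
 * illustrative only.                                                   */
#include <stdio.h>

#define NLINES 8

typedef struct {
    int valid;
    int ref;             /* reference bit, set on every access */
    unsigned tag;
} line_t;

static line_t cache[NLINES];
static int hand = 0;     /* where the previous victim search left off */

/* Record an access to line i: just set its reference bit. */
void touch(int i) { cache[i].ref = 1; }

/* Pick a victim: clear reference bits until we find one already 0. */
int clock_evict(void)
{
    for (;;) {
        if (cache[hand].ref == 0) {
            int victim = hand;
            hand = (hand + 1) % NLINES;   /* resume here next time */
            return victim;
        }
        cache[hand].ref = 0;              /* give the line a second chance */
        hand = (hand + 1) % NLINES;
    }
}

int main(void)
{
    /* Fill the cache; every line starts with its reference bit set. */
    for (int i = 0; i < NLINES; i++)
        cache[i] = (line_t){ .valid = 1, .ref = 1, .tag = (unsigned)i };

    printf("victim: %d\n", clock_evict()); /* clears every bit, evicts line 0 */
    touch(2);                              /* line 2 is referenced again      */
    printf("victim: %d\n", clock_evict()); /* line 1: its bit is already 0    */
    printf("victim: %d\n", clock_evict()); /* skips line 2, evicts line 3     */
    return 0;
}
\end{verbatim}
Lines touched between sweeps keep their reference bits set and survive the next pass, which is how the scheme approximates ``least recently used'' without tracking an exact ordering.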