From 4700e72b429ce74ffb5fa43ac9989b9a082d7b4f Mon Sep 17 00:00:00 2001
From: Dmitriy Ryajov
Date: Mon, 9 May 2022 15:14:21 -0600
Subject: [PATCH 1/7] erasure coding design overview

---
 design/erasure-coding.md |   59 +
 design/figs/blocks.svg   |  348 ++++++
 design/figs/matrix1.svg  |  352 ++++++
 design/figs/matrix2.svg  |  580 ++++++++++
 design/figs/matrix3.svg  | 2329 ++++++++++++++++++++++++++++++++++++++
 5 files changed, 3668 insertions(+)
 create mode 100644 design/erasure-coding.md
 create mode 100644 design/figs/blocks.svg
 create mode 100644 design/figs/matrix1.svg
 create mode 100644 design/figs/matrix2.svg
 create mode 100644 design/figs/matrix3.svg

diff --git a/design/erasure-coding.md b/design/erasure-coding.md
new file mode 100644
index 0000000..330e671
--- /dev/null
+++ b/design/erasure-coding.md
@@ -0,0 +1,59 @@
+# Codex Erasure Coding
+
+We present an interleaved erasure code that is able to encode arbitrary amounts of data despite the relatively small Reed-Solomon codeword size.
+
+## Overview
+
+Erasure coding has several complementary uses in the Codex system. It is superior to replication because it provides greater redundancy at lower storage overheads; it strengthens our remote storage auditing scheme by allowing us to check only a percentage of the blocks rather than checking the entire dataset; and it helps with downloads, as it allows retrieving $K$ arbitrary pieces from any nodes, thus avoiding the so-called "stragglers problem".
+
+We employ a Reed-Solomon code with configurable $K$ and $M$ parameters per dataset and an interleaving structure that allows us to overcome the limitation of a small Galois field (GF) size.
+
+## Limitations of small GF fields
+
+### Terminology
+
+- A $codeword$ ($C$) is the maximum amount of data that can be encoded together; its length cannot surpass the size of the GF field. For example, for $GF(2^8)$, $C <= 2^8 = 256$.
+- A $symbol$ ($s$) is an element of the of the GF field.
A $codeword$ is composed of a number of symbols up to the size of the GF field. For example a field of $GF(2^8) = 256$ contains up to 256 symbols. The size of the symbol in bytes is also limited by the size of the GF field, a symbol in $GF(2^8)$ will have a max size of 8 bits or 1 byte.
+
+The size of the codeword determines the maximum number of symbols that an error correcting code can encode and decode as a unit. For example, a Reed-Solomon code that uses a Galois field of $2^8$ can only encode and decode 256 symbols together. In other words, the size of the field imposes a natural limit on the size of the data that can be coded together.
+
+Why not use a larger field? The limitations are mostly practical, either memory or CPU bound, or both. For example, most implementations rely on one or more arithmetic tables, which have a storage overhead linear in the size of the field. In other words, a field of $2^{32}$ would require generating and storing several 4GB tables, and it would still only allow coding 4GB of data at a time. Clearly this isn't enough when the average high-definition video file is several times larger than that, and especially when the expectation is to handle very large, potentially terabyte-sized datasets routinely employed in science and big data.
+
+## Interleaving
+
+Interleaving is the process of combining or interweaving several symbols from disparate parts of the source data in ways that minimizes the likelihood that any sequence of these symbols is damaged together compromises the rest of the data.
+
+In our case, the primary requirement is to ensure that we're preserving the dataset in its entirety and that any $K$ elements are still enough to recover the original dataset. This is important to emphasize: we cannot simply encode a chunk of data individually, as that would only protect that individual chunk; the code should protect the dataset in its entirety, even if it is terabytes in size.
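As an aside to the field-size discussion above, the table-based arithmetic that drives the memory cost is easy to sketch. The following Python fragment is purely illustrative and not part of the Codex codebase: it builds the usual exp/log lookup tables for $GF(2^8)$ using a common primitive polynomial; the same construction for $GF(2^{32})$ would need on the order of $2^{32}$ entries per table, which is where the practical limit comes from.

```python
# Illustrative only: table-based GF(2^8) arithmetic as used by typical
# Reed-Solomon implementations. Table size grows linearly with the
# field size, which is why very large fields are impractical.

PRIM = 0x11D  # a common primitive polynomial for GF(2^8)

def build_tables(field_bits=8, prim=PRIM):
    size = 1 << field_bits
    exp = [0] * (size * 2)  # doubled so gf_mul can skip a modulo
    log = [0] * size
    x = 1
    for i in range(size - 1):
        exp[i] = x
        log[x] = i
        x <<= 1
        if x & size:        # reduce modulo the primitive polynomial
            x ^= prim
    for i in range(size - 1, size * 2):
        exp[i] = exp[i - (size - 1)]
    return exp, log

def gf_mul(a, b, exp, log):
    """Multiply two field elements via the log/antilog tables."""
    if a == 0 or b == 0:
        return 0
    return exp[log[a] + log[b]]

exp, log = build_tables()
print(len(log))  # 256 entries -- one per field element
```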
+
+A secondary but important requirement is to allow appending data without having to re-encode the entire dataset.
+
+### Codex Interleaving
+
+Codex employs a type of interleaving where every $K$ symbols spaced at $S$ intervals are encoded with $M$ additional parity symbols equally spaced at $S$ intervals. This results in a logical matrix where each row is $S$ symbols long and each column is $N=K+M$ symbols tall.
+
+The algorithm looks like the following:
+
+1) Given a dataset chunked into equally sized blocks, with the block count rounded up to the next multiple of $K$, where $K$ represents the number of symbols to be coded together and $M$ the resulting parity symbols.
+2) Take $K$, $S$ spaced blocks and $K$ symbols $s$ from each of the selected blocks at offset 0 and encode into $M$ parity symbols each placed in new $S$ spaced blocks at the same offset. Repeat this steps at offset $1*sizeof(GF(p))$ and so on.
+3) Repeat the above steps for the length of the entire dataset.
+
+_Original Dataset Blocks_
+
+![](./figs/blocks.svg)
+
+_Matrix form K=3, S=4_
+
+![](./figs/matrix1.svg)
+
+_Matrix form with parity blocks added K=3, M=2, N=5_
+
+![](./figs/matrix2.svg)
+
+_Symbol coding direction_
+
+![](./figs/matrix3.svg)
+
+The resulting structure is a matrix of height $N$ and width $S$, which allows loosing any $M$ "rows" of the matrix and still reconstruct the original dataset. Of course, loosing more than $M$ symbols in a given column would still render the entire dataset invalid, but the likelihood of that happening is mitigated by placing the individual original and parity blocks on independent locations.
+
+Moreover, the resulting dataset is still systematic, which means that the original blocks are unchanged and can be used without prior decoding.
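The selection pattern in step 2 above can be sketched with plain index arithmetic. This is an illustrative fragment, not the Codex codec (the helper names and the block-granularity view are assumptions for the example), using $K=3$, $M=2$, $S=4$ as in the figures:

```python
# Sketch of the interleaving geometry only -- no Reed-Solomon
# arithmetic. With S columns, block i of the dataset sits at
# row i // S, column i % S. Each column groups K blocks spaced S
# apart, and its M parity blocks are appended after the data rows.

K, M, S = 3, 2, 4  # parameters from the figures
N = K + M

def coding_group(col, k=K, s=S):
    """Indices of the original blocks coded together for a column."""
    return [row * s + col for row in range(k)]

def parity_positions(col, k=K, m=M, s=S):
    """Indices of the parity blocks produced for the same column."""
    return [(k + j) * s + col for j in range(m)]

print([coding_group(c) for c in range(S)])
# -> [[0, 4, 8], [1, 5, 9], [2, 6, 10], [3, 7, 11]]
print(parity_positions(0))
# -> [12, 16]
```

Note how each coding group takes blocks spaced $S$ apart, exactly the columns of the matrix in the figures.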
diff --git a/design/figs/blocks.svg b/design/figs/blocks.svg
new file mode 100644
index 0000000..181109e
--- /dev/null
+++ b/design/figs/blocks.svg
@@ -0,0 +1,348 @@
+[SVG markup omitted]

diff --git a/design/figs/matrix1.svg b/design/figs/matrix1.svg
new file mode 100644
index 0000000..f1c9744
--- /dev/null
+++ b/design/figs/matrix1.svg
@@ -0,0 +1,352 @@
+[SVG markup omitted]

diff --git a/design/figs/matrix2.svg b/design/figs/matrix2.svg
new file mode 100644
index 0000000..1e4751b
--- /dev/null
+++ b/design/figs/matrix2.svg
@@ -0,0 +1,580 @@
+[SVG markup omitted]

diff --git a/design/figs/matrix3.svg b/design/figs/matrix3.svg
new file mode 100644
index 0000000..cd4e842
--- /dev/null
+++ b/design/figs/matrix3.svg
@@ -0,0 +1,2329 @@
+[SVG markup omitted]

From 78df544369f68305db3a41801c23ce20a181c1ed Mon Sep 17 00:00:00 2001
From:
Dmitriy Ryajov
Date: Mon, 30 May 2022 18:01:45 -0600
Subject: [PATCH 2/7] adding section on data placement and erasures

---
 design/erasure-coding.md | 35 ++++++++++++++++++++++++-----------
 1 file changed, 24 insertions(+), 11 deletions(-)

diff --git a/design/erasure-coding.md b/design/erasure-coding.md
index 330e671..a943f28 100644
--- a/design/erasure-coding.md
+++ b/design/erasure-coding.md
@@ -1,4 +1,4 @@
-# Codex Erasure Coding
+# Erasure Coding

We present an interleaved erasure code that is able to encode arbitrary amounts of data despite the relatively small Reed-Solomon codeword size.

@@ -13,7 +13,7 @@ We employ a Reed-Solomon code with configurable $K$ and $M$ parameters per data

### Terminology

- A $codeword$ ($C$) is the maximum amount of data that can be encoded together; its length cannot surpass the size of the GF field. For example, for $GF(2^8)$, $C <= 2^8 = 256$.
-- A $symbol$ ($s$) is an element of the of the GF field. A $codeword$ is composed of a number of symbols up to the size of the GF field. For example a field of $GF(2^8) = 256$ contains up to 256 symbols. The size of the symbol in bytes is also limited by the size of the GF field, a symbol in $GF(2^8)$ will have a max size of 8 bits or 1 byte.
+- A $symbol$ ($s$) is an element of the GF field. A $codeword$ is composed of a number of symbols up to the size of the GF field. For example, a field of $GF(2^8)$ contains up to 256 symbols. The size of the symbol in bytes is also limited by the size of the GF field; a symbol in $GF(2^8)$ will have a max size of 8 bits, or 1 byte.

The size of the codeword determines the maximum number of symbols that an error correcting code can encode and decode as a unit. For example, a Reed-Solomon code that uses a Galois field of $2^8$ can only encode and decode 256 symbols together. In other words, the size of the field imposes a natural limit on the size of the data that can be coded together.

@@ -21,7 +21,7 @@ Why not use a larger field?
The limitations are mostly practical, either memory

## Interleaving

-Interleaving is the process of combining or interweaving several symbols from disparate parts of the source data in ways that minimizes the likelihood that any sequence of these symbols is damaged together compromises the rest of the data.
+Interleaving is the process of combining or interweaving several symbols from disparate parts of the source data in ways that minimize the likelihood that any sequence of these symbols damaged together compromises the rest of the data.

In our case, the primary requirement is to ensure that we're preserving the dataset in its entirety and that any $K$ elements are still enough to recover the original dataset. This is important to emphasize: we cannot simply encode a chunk of data individually, as that would only protect that individual chunk; the code should protect the dataset in its entirety, even if it is terabytes in size.

@@ -35,25 +35,38 @@ Codex employs a type of interleaving where every $K$ symbols spaced at $S$ inter

The algorithm looks like the following:

1) Given a dataset chunked into equally sized blocks, with the block count rounded up to the next multiple of $K$, where $K$ represents the number of symbols to be coded together and $M$ the resulting parity symbols.
-2) Take $K$, $S$ spaced blocks and $K$ symbols $s$ from each of the selected blocks at offset 0 and encode into $M$ parity symbols each placed in new $S$ spaced blocks at the same offset. Repeat this steps at offset $1*sizeof(GF(p))$ and so on.
+2) Take $K$ blocks spaced at $S$ intervals and a symbol $s$ at offset 0 from each of the selected blocks, and encode them into $M$ parity symbols, each placed in new $S$ spaced blocks at the same offset and appended to the end of the sequence of original blocks. Repeat these steps at offset $1*sizeof(GF(p))$ and so on.
3) Repeat the above steps for the length of the entire dataset.
-_Original Dataset Blocks_
+Below is a graphical outline of the process:
+
+_The sequence of original blocks_

![](./figs/blocks.svg)

-_Matrix form K=3, S=4_
+_The logical matrix resulting from stacking each $S$ symbols together, where K=3 and S=4_

![](./figs/matrix1.svg)

-_Matrix form with parity blocks added K=3, M=2, N=5_
+_Symbols and Coding direction. Each cell corresponds to a symbol $s$ and each column of symbols are coded together_

-![](./figs/matrix2.svg)
+![](./figs/matrix3.svg)

-_Symbol coding direction_
+_The resulting matrix with parity blocks added, where K=3, M=2, N=5 and S=4_

-![](./figs/matrix3.svg)
+![](./figs/matrix2.svg)

-The resulting structure is a matrix of height $N$ and width $S$, which allows loosing any $M$ "rows" of the matrix and still reconstruct the original dataset. Of course, loosing more than $M$ symbols in a given column would still render the entire dataset invalid, but the likelihood of that happening is mitigated by placing the individual original and parity blocks on independent locations.
+The resulting structure is a matrix of height $N$ and width $S$. It allows losing any $M$ "rows" of the matrix and still reconstructing the original dataset. Of course, losing more than $M$ symbols in a given column would render the entire dataset unrecoverable, but the likelihood of that happening is mitigated by placing the individual original and parity blocks in independent locations.

Moreover, the resulting dataset is still systematic, which means that the original blocks are unchanged and can be used without prior decoding.
+
+## Data placement and erasures
+
+The code introduced in the above section satisfies our original requirement of any $K$ blocks allowing to reconstruct the original dataset. In fact, one can easily see that every row in the resulting matrix is a standalone element and any $K$ rows will allow reconstructing the original dataset.
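The "any $K$ rows suffice" property can be illustrated with the simplest possible instance, $M=1$, where the parity computation degenerates to a bytewise XOR. This toy sketch is only a demonstration of the recovery property, not the Codex codec; real Reed-Solomon generalizes it to arbitrary $M$:

```python
# Toy demonstration of losing a row and rebuilding it. With M = 1 the
# parity row is the bytewise XOR of the K data rows, so any single
# missing row is the XOR of the K surviving rows.

K, S = 3, 4  # three data rows of four one-byte symbols each

rows = [bytes([r * 16 + c for c in range(S)]) for r in range(K)]

def xor_rows(rs, width=S):
    out = bytearray(width)
    for r in rs:
        for i, b in enumerate(r):
            out[i] ^= b
    return bytes(out)

parity = xor_rows(rows)  # the single parity row, N = K + 1

# erase any one row; the K survivors (data rows plus parity)
# reconstruct it exactly
lost = 1
survivors = [r for i, r in enumerate(rows) if i != lost] + [parity]
assert xor_rows(survivors) == rows[lost]
```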
+ +However, the code is $K$ elements strong only if we operate under the assumption that each failure is independent of each other. Thus, it is a requirement that each row is placed independently. Moreover, the overall strength of the code decreases based on the number of dependent failures. If we place $N$ rows on two independent locations, we can only tolerate $M=N/2$ failures, three will allow tolerating $M=N/3$ failures and so on. Hence, the code is only as strong as the number of independent locations each element is stored on. Thus, each row and better yet, each element (block) of the matrix should be stored independently and in a pattern mitigating dependence, meaning that placing elements of the same column together should be avoided. + +### Adversarial vs random erasures + +If the above placement rules are respect, there is still a problem that needs to be addressed - random erasures. Both the code and the placement rules protect against random erasures, meaning erasures that aren't targeted or coordinated in any way, it doesn't protect against adversarial erasures - targeted or coordinated erasures. + From 5dfc0457c43a9ad0ee09eba84e785506b0fe8331 Mon Sep 17 00:00:00 2001 From: Dmitriy Ryajov Date: Mon, 30 May 2022 18:51:05 -0600 Subject: [PATCH 3/7] expanding on data placement and load balancing --- design/erasure-coding.md | 15 +++++++++++++-- 1 file changed, 13 insertions(+), 2 deletions(-) diff --git a/design/erasure-coding.md b/design/erasure-coding.md index a943f28..baf67fe 100644 --- a/design/erasure-coding.md +++ b/design/erasure-coding.md @@ -64,9 +64,20 @@ Moreover, the resulting dataset is still systematic, which means that the origin The code introduced in the above section satisfies our original requirement of any $K$ blocks allowing to reconstruct the original dataset. In fact, one can easily see that every row in the resulting matrix is a standalone element and any $K$ rows will allow reconstructing the original dataset. 
-However, the code is $K$ elements strong only if we operate under the assumption that each failure is independent of each other. Thus, it is a requirement that each row is placed independently. Moreover, the overall strength of the code decreases based on the number of dependent failures. If we place $N$ rows on two independent locations, we can only tolerate $M=N/2$ failures, three will allow tolerating $M=N/3$ failures and so on. Hence, the code is only as strong as the number of independent locations each element is stored on. Thus, each row and better yet, each element (block) of the matrix should be stored independently and in a pattern mitigating dependence, meaning that placing elements of the same column together should be avoided.
+However, the code is $K$ rows strong only if we operate under the assumption that failures are independent of one another. Thus, it is a requirement that each row is placed independently. Moreover, the overall strength of the code decreases with the number of dependent failures: if we place $N$ rows on two independent locations, we can only tolerate $M=N/2$ failures, three locations will allow tolerating $M=N/3$ failures, and so on. Hence, the code is only as strong as the number of independent locations each element is stored on. Thus, each row, or better yet each element (block) of the matrix, should be stored independently and in a pattern mitigating dependence, meaning that placing elements of the same column together should be avoided.
+
+### Load balancing retrieval
+
+There is an issue resulting mostly from the systematic nature of the code and the presence of original and parity data.
+It is safe to assume that the systematic (original) data is going to be retrieved more frequently than the parity data, which should only be retrieved when one of the systematic pieces have gone missing thus, in order to prevent congestion and overloading of the systematic nodes some further placement considerations should be taken in to account. Namely, having nodes dedicated to parity data should be avoided.
+
+This can be accomplished by choosing a placement pattern that avoids favoring some nodes over others when it comes to storing the systematic data, and at the same time it should not break the placement rules around the safety of the code. For example, one can think of a placement strategy where the node gets assigned a block based on the node's position in some queue or its id or a combination thereof.
+
+Some placement strategies will be explored in a further document.

### Adversarial vs random erasures

-If the above placement rules are respect, there is still a problem that needs to be addressed - random erasures. Both the code and the placement rules protect against random erasures, meaning erasures that aren't targeted or coordinated in any way, it doesn't protect against adversarial erasures - targeted or coordinated erasures.
+There is still a problem that needs to be addressed - adversarial erasures. Both the code and the placement rules protect against random erasures, meaning erasures that aren't targeted or coordinated in any way; they do not protect against adversarial erasures.
+
+In Codex, adversarial erasures are partly addressed by the presence of incentives and penalties. Here we make all the standard assumptions around rational behavior, and rely on remote auditing to detect faults, penalize nodes, and rebuild the lost data when faults are detected.
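To make the "likelihood" argument around random erasures concrete, here is a back-of-the-envelope calculation. The 10% loss rate below is an arbitrary illustration, not a measured figure: with rows placed on independent locations, the dataset is lost only when more than $M$ of the $N$ rows fail, a binomial tail.

```python
# Back-of-the-envelope: probability that a dataset with N = K + M
# independently placed rows becomes unrecoverable, i.e. that more
# than M rows are erased, assuming each row is lost independently
# with probability p. The numbers are purely illustrative.

from math import comb

def loss_probability(n, m, p):
    """P[more than m of n rows erased] for independent loss rate p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(m + 1, n + 1))

# running example: K = 3, M = 2, so N = 5; 10% independent row loss
print(loss_probability(5, 2, 0.10))  # ~0.00856, versus 0.10 unprotected
```

Dependent placements sharply worsen this tail, which is the quantitative reason behind the independence requirement above.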
From afcab34ef4b0880d34b2993f7c8d9edc6aefb661 Mon Sep 17 00:00:00 2001
From: Dmitriy Ryajov
Date: Mon, 30 May 2022 18:57:35 -0600
Subject: [PATCH 4/7] formating

---
 design/erasure-coding.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/design/erasure-coding.md b/design/erasure-coding.md
index baf67fe..33db8a1 100644
--- a/design/erasure-coding.md
+++ b/design/erasure-coding.md
@@ -44,7 +44,7 @@ _The sequence of original blocks_

![](./figs/blocks.svg)

-_The logical matrix resulting from stacking each $S$ symbols together, where K=3 and S=4_
+_The logical matrix resulting from stacking each $S$ symbols together, where $K=3$ and $S=4$_

![](./figs/matrix1.svg)

@@ -52,7 +52,7 @@ _Symbols and Coding direction. Each cell corresponds to a symbol $s$ and each c

![](./figs/matrix3.svg)

-_The resulting matrix with parity blocks added, where K=3, M=2, N=5 and S=4_
+_The resulting matrix with parity blocks added, where $K=3$, $M=2$, $N=5$ and $S=4$_

![](./figs/matrix2.svg)

From ae85a3294b6e04551acf475e0f1a2d148b8bfc22 Mon Sep 17 00:00:00 2001
From: Dmitriy Ryajov
Date: Tue, 31 May 2022 09:56:05 -0600
Subject: [PATCH 5/7] fixed figure and minor formating

---
 design/erasure-coding.md |    8 +-
 design/figs/matrix3.svg  | 1055 +++++++++++++------------------------
 2 files changed, 347 insertions(+), 716 deletions(-)

diff --git a/design/erasure-coding.md b/design/erasure-coding.md
index 33db8a1..c5a5215 100644
--- a/design/erasure-coding.md
+++ b/design/erasure-coding.md
@@ -40,19 +40,19 @@ The algorithm looks like the following:

Below is a graphical outline of the process:

-_The sequence of original blocks_
+The sequence of original blocks

![](./figs/blocks.svg)

-_The logical matrix resulting from stacking each $S$ symbols together, where $K=3$ and $S=4$_
+The logical matrix resulting from stacking each $S$ symbols together, where $K=3$ and $S=4$

![](./figs/matrix1.svg)

-_Symbols and Coding direction.
Each cell corresponds to a symbol $s$ and each column of symbols are coded together_
+Each cell corresponds to a symbol $s$ and each column of symbols is coded together

![](./figs/matrix3.svg)

-_The resulting matrix with parity blocks added, where $K=3$, $M=2$, $N=5$ and $S=4$_
+The resulting matrix with parity blocks added, where $K=3$, $M=2$, $N=5$ and $S=4$

![](./figs/matrix2.svg)

diff --git a/design/figs/matrix3.svg b/design/figs/matrix3.svg
index cd4e842..be07816 100644
--- a/design/figs/matrix3.svg
+++ b/design/figs/matrix3.svg
[SVG attribute changes omitted: updated viewBox, element ids, and path data]

From 16694382456536199a110da70a4738f3090b6bb8 Mon Sep 17 00:00:00 2001
From: Dmitriy Ryajov
Date: Tue, 31 May 2022 12:47:22 -0600
Subject: [PATCH 6/7] small typo

---
 design/erasure-coding.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/design/erasure-coding.md b/design/erasure-coding.md
index c5a5215..a8a88c1 100644
--- a/design/erasure-coding.md
+++ b/design/erasure-coding.md
@@ -70,7 +70,7 @@ However, the code is $K$ rows strong only if we operate under the assumption tha

### Load balancing retrieval

There is an issue resulting
mostly from the systematic nature of the code and the presence of original and parity data. -It is safe to assume that the systematic (original) data is going to be retrieved more frequently than the parity data, which should only be retrieved when one of the systematic pieces have gone missing thus, in order to prevent congestion and overloading of the systematic nodes some further placement considerations should be taken in to account. Namely, having nodes dedicated to parity data should be avoided. +It is safe to assume that the systematic (original) data is going to be retrieved more frequently than the parity data, which should only be retrieved when one of the systematic pieces has gone missing thus, in order to prevent congestion and overloading of the systematic nodes some further placement considerations should be taken in to account. Namely, having nodes dedicated to parity data should be avoided. This can be accomplished by choosing a placement pattern that avoids favoring some nodes over others when it comes to storing the systematic data, and at the same time it should not break the placement rules around the safety of the code. For example, one can think of a placement strategy where the node gets assigned a block based on the node's position in some queue or its id or a combination thereof. From 8c45b1fc8a1e04e1f77bc57f0a9ae0d191270018 Mon Sep 17 00:00:00 2001 From: Dmitriy Ryajov Date: Tue, 31 May 2022 13:26:02 -0600 Subject: [PATCH 7/7] more wording --- design/erasure-coding.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/design/erasure-coding.md b/design/erasure-coding.md index a8a88c1..3698a19 100644 --- a/design/erasure-coding.md +++ b/design/erasure-coding.md @@ -70,7 +70,7 @@ However, the code is $K$ rows strong only if we operate under the assumption tha There is an issue resulting mostly from the systematic nature of the code and the presence of original and parity data. 
+It is safe to assume that the systematic (original) data is going to be retrieved more frequently than the parity data, which should only be retrieved when some of the systematic pieces have gone missing. Thus, in order to prevent congestion and overloading of the systematic nodes, some further placement considerations should be taken into account. Namely, having nodes dedicated to parity data should be avoided.

This can be accomplished by choosing a placement pattern that avoids favoring some nodes over others when it comes to storing the systematic data, and at the same time it should not break the placement rules around the safety of the code. For example, one can think of a placement strategy where the node gets assigned a block based on the node's position in some queue or its id or a combination thereof.
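As a strawman for the placement document mentioned above, one such pattern is a simple per-column rotation of the node list. Everything here (node names, the modular rule, the assumption of at least $N$ independent nodes) is a hypothetical illustration rather than the chosen strategy:

```python
# Hypothetical placement sketch: assign each matrix cell (row, col)
# to a node so that (a) no node holds two cells of the same column
# and (b) every node holds a mix of systematic and parity rows.
# A per-column rotation achieves both, given at least N nodes.

def place(n_rows, n_cols, nodes):
    assert len(nodes) >= n_rows, "need at least N independent nodes"
    placement = {}
    for col in range(n_cols):
        for row in range(n_rows):
            placement[(row, col)] = nodes[(row + col) % len(nodes)]
    return placement

# running example: N = 5 rows (K = 3 systematic + M = 2 parity), S = 4
p = place(5, 4, ["n0", "n1", "n2", "n3", "n4"])

# no node stores two cells of the same column
for col in range(4):
    assert len({p[(r, col)] for r in range(5)}) == 5
```

Because the rotation shifts with the column index, no node ends up dedicated to parity rows, which addresses the load-balancing concern without violating the per-column independence rule.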