feat: Fault Testing Support #47

Closed · wants to merge 14 commits into from
39 changes: 37 additions & 2 deletions README.md
@@ -12,6 +12,7 @@ Features:
* Performs KZG verification during dispersal to ensure that DA certificates returned from the EigenDA disperser have correct KZG commitments.
* Performs DA certificate verification during dispersal to ensure that DA certificates have been properly bridged to Ethereum by the disperser.
* Performs DA certificate verification during retrieval to ensure that data represented by bad DA certificates do not become part of the canonical chain.
* Compatibility with Optimism's alt-da commitment type with an EigenDA backend.
* Compatibility with Optimism's keccak-256 commitment type with S3 storage.

In order to disperse to the EigenDA network in production, or at high throughput on testnet, please register your authentication Ethereum address through [this form](https://forms.gle/3QRNTYhSMacVFNcU8). Your EigenDA authentication keypair address should not be associated with any funds anywhere.
@@ -25,7 +26,7 @@ In order to disperse to the EigenDA network in production, or at high throughput
| `--eigenda-disable-point-verification-mode` | `false` | `$EIGENDA_PROXY_DISABLE_POINT_VERIFICATION_MODE` | Disable point verification mode. This mode performs IFFT on data before writing and FFT on data after reading. Disabling requires supplying the entire blob for verification against the KZG commitment. |
| `--eigenda-disable-tls` | `false` | `$EIGENDA_PROXY_GRPC_DISABLE_TLS` | Disable TLS for gRPC communication with the EigenDA disperser. Default is false. |
| `--eigenda-disperser-rpc` | | `$EIGENDA_PROXY_EIGENDA_DISPERSER_RPC` | RPC endpoint of the EigenDA disperser. |
| `--eigenda-eth-confirmation-depth` | `6` | `$EIGENDA_PROXY_ETH_CONFIRMATION_DEPTH` | The number of Ethereum blocks of confirmation that the DA bridging transaction must have before it is assumed by the proxy to be final. If set negative the proxy will always wait for blob finalization. |
| `--eigenda-eth-confirmation-depth` | `-1` | `$EIGENDA_PROXY_ETH_CONFIRMATION_DEPTH` | The number of Ethereum blocks of confirmation that the DA bridging transaction must have before it is assumed by the proxy to be final. If set negative the proxy will always wait for blob finalization. |
| `--eigenda-eth-rpc` | | `$EIGENDA_PROXY_ETH_RPC` | JSON RPC node endpoint for the Ethereum network used for finalizing DA blobs. See available list here: https://docs.eigenlayer.xyz/eigenda/networks/ |
| `--eigenda-g1-path` | `"resources/g1.point"` | `$EIGENDA_PROXY_TARGET_KZG_G1_PATH` | Directory path to g1.point file. |
| `--eigenda-g2-tau-path` | `"resources/g2.point.powerOf2"` | `$EIGENDA_PROXY_TARGET_G2_TAU_PATH` | Directory path to g2.point.powerOf2 file. |
@@ -42,6 +43,7 @@ In order to disperse to the EigenDA network in production, or at high throughput
| `--log.pid` | `false` | `$EIGENDA_PROXY_LOG_PID` | Show pid in the log. |
| `--memstore.enabled` | `false` | `$MEMSTORE_ENABLED` | Whether to use mem-store for DA logic. |
| `--memstore.expiration` | `25m0s` | `$MEMSTORE_EXPIRATION` | Duration that a mem-store blob/commitment pair are allowed to live. |
| `--memstore.fault-config-path` | `""` | `$MEMSTORE_FAULT_CONFIG_PATH` | Path to the fault config JSON file. |
| `--metrics.addr` | `"0.0.0.0"` | `$EIGENDA_PROXY_METRICS_ADDR` | Metrics listening address. |
| `--metrics.enabled` | `false` | `$EIGENDA_PROXY_METRICS_ENABLED` | Enable the metrics server. |
| `--metrics.port` | `7300` | `$EIGENDA_PROXY_METRICS_PORT` | Metrics listening port. |
@@ -62,15 +64,48 @@ In order to disperse to the EigenDA network in production, or at high throughput
In order for the EigenDA Proxy to avoid a trust assumption on the EigenDA disperser, the proxy offers a DA cert verification feature which ensures that:

1. The DA cert's batch hash can be computed locally and matches the one persisted on-chain in the `ServiceManager` contract
2. The DA cert's blob inclusion proof can be merkalized to generate the proper batch root
2. The DA cert's blob inclusion proof can be successfully verified against the blob-batch merkle root
3. The DA cert's quorum params are adequately defined and expressed when compared to their on-chain counterparts
4. The DA cert's quorum ids map to valid quorums
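For intuition on step 2, here is a minimal inclusion-proof check in Go. This is an illustrative sketch only: the leaf encoding, sibling ordering, and hash function are assumptions made for the example, while EigenDA's actual formats are defined by the disperser and the `ServiceManager` contract.

```go
package main

import (
	"bytes"
	"fmt"

	"golang.org/x/crypto/sha3"
)

// verifyInclusion folds a merkle proof from a leaf (e.g. a blob header hash)
// up to an expected batch root, using the index's parity to decide whether
// the sibling hashes on the left or the right at each level.
func verifyInclusion(leaf []byte, proof [][]byte, index uint64, root []byte) bool {
	h := leaf
	for _, sibling := range proof {
		hasher := sha3.NewLegacyKeccak256()
		if index%2 == 0 {
			hasher.Write(h)
			hasher.Write(sibling)
		} else {
			hasher.Write(sibling)
			hasher.Write(h)
		}
		h = hasher.Sum(nil)
		index /= 2
	}
	return bytes.Equal(h, root)
}

func main() {
	keccak := func(parts ...[]byte) []byte {
		h := sha3.NewLegacyKeccak256()
		for _, p := range parts {
			h.Write(p)
		}
		return h.Sum(nil)
	}
	// Two-leaf tree: root = keccak(leafA || leafB).
	leafA, leafB := keccak([]byte("blob-a")), keccak([]byte("blob-b"))
	root := keccak(leafA, leafB)
	fmt.Println(verifyInclusion(leafA, [][]byte{leafB}, 0, root)) // true
}
```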

To enable this feature, use the `--eigenda-svc-manager-addr` and `--eigenda-eth-rpc` CLI flags.


#### Soft Confirmations

An optional `--eigenda-eth-confirmation-depth` flag can be provided to specify the number of Ethereum block confirmations to wait before verifying the blob certificate. This allows blobs to be accepted upon `confirmation` rather than waiting (e.g., 25-30m) for `finalization`. The following integer values are supported:

- `-1`: wait for blob finalization
- `0`: verify the cert immediately upon blob confirmation and return the blob
- `N` where `N > 0`: wait `N` blocks before verifying the cert and returning the blob
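As a quick illustration of these semantics (a sketch only; the names below are made up for the example and are not the proxy's actual code):

```go
package main

import "fmt"

// certReady sketches the --eigenda-eth-confirmation-depth semantics:
//   depth < 0  -> wait for blob finalization
//   depth == 0 -> accept immediately upon confirmation
//   depth > 0  -> wait for `depth` blocks on top of the bridging tx
func certReady(depth int64, confirmations int64, finalized bool) bool {
	switch {
	case depth < 0:
		return finalized
	case depth == 0:
		return true
	default:
		return confirmations >= depth
	}
}

func main() {
	fmt.Println(certReady(-1, 100, false)) // false: not finalized yet
	fmt.Println(certReady(0, 0, false))    // true: confirmation is enough
	fmt.Println(certReady(6, 3, false))    // false: only 3 of 6 confirmations
}
```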

### In-Memory Backend

An ephemeral, in-memory store backend can be used for faster feedback when testing rollup integrations. To enable it, use the `--memstore.enabled` and `--memstore.expiration` CLI flags.


### Fault Mode

Memstore also supports a configurable fault mode that corrupts blob contents on reads. This is key for testing sequencer resiliency against incorrect batches, as well as for testing dispute resolution in which an optimistic rollup commitment poster produces a machine state hash irrespective of the actual intended execution.

The configuration lives in a JSON file whose path is specified via the `--memstore.fault-config-path` CLI flag. It looks like so:
```json
{
  "all": {
    "mode": "honest",
    "interval": 1
  },
  "challenger": {
    "mode": "byzantine"
  }
}
```

Each key refers to an `actor`; the actor context is passed with each HTTP request (via an `actor` query parameter) and processed accordingly by the server. The following modes are currently supported (a loading sketch follows the list):
- `honest`: returns the actual blob contents that were persisted to memory
- `interval_byzantine`: blob contents are corrupted every `interval` reads
- `byzantine`: blob contents are corrupted on every read
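For reference, here is a minimal sketch of loading such a file. The field and type names below are illustrative stand-ins; the PR's actual types (`store.FaultConfig`, `store.Behavior`) live in the proxy's `store` package.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// behavior mirrors one actor entry in the fault config JSON.
type behavior struct {
	Mode     string `json:"mode"`
	Interval uint   `json:"interval,omitempty"`
}

func main() {
	raw, err := os.ReadFile("fault.example.json")
	if err != nil {
		panic(err)
	}
	// Top-level keys are actor names ("all", "sequencer", "challenger", ...).
	actors := map[string]behavior{}
	if err := json.Unmarshal(raw, &actors); err != nil {
		panic(err)
	}
	for name, b := range actors {
		fmt.Printf("actor=%q mode=%q interval=%d\n", name, b.Mode, b.Interval)
	}
}
```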


## Metrics

To see the list of available metrics, run `./bin/eigenda-proxy doc metrics`
7 changes: 6 additions & 1 deletion client/client.go
@@ -10,7 +10,8 @@ import (

// TODO: Add support for custom http client option
type Config struct {
	URL string
	Actor string
	URL   string
}

// ProxyClient is an interface for communicating with the EigenDA proxy server
@@ -60,6 +61,10 @@ func (c *client) Health() error {
func (c *client) GetData(ctx context.Context, comm []byte) ([]byte, error) {
	url := fmt.Sprintf("%s/get/0x%x?commitment_mode=simple", c.cfg.URL, comm)

Collaborator: One high-level comment: the eigenda-proxy should be configured to work correctly most of the time. Is it reasonable to make `Actor` a lower-layer concept, subject to a specific boundary?

Collaborator (Author): wdym?

	if c.cfg.Actor != "" {
		url = fmt.Sprintf("%s&actor=%s", url, c.cfg.Actor)
	}
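With `Actor: "challenger"`, for example, a read becomes `GET <proxy>/get/0x<commitment>?commitment_mode=simple&actor=challenger` (host and commitment shown as placeholders), letting the server apply that actor's configured fault behavior.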

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return nil, fmt.Errorf("failed to construct http request: %w", err)
12 changes: 6 additions & 6 deletions e2e/optimism_test.go
@@ -121,7 +121,7 @@ func TestOptimismKeccak256Commitment(gt *testing.T) {
gt.Skip("Skipping test as INTEGRATION or TESTNET env var not set")
}

proxyTS, close := e2e.CreateTestSuite(gt, useMemory(), true)
proxyTS, close := e2e.CreateTestSuite(gt, useMemory(), true, nil)
defer close()

t := actions.NewDefaultTesting(gt)
Expand Down Expand Up @@ -165,16 +165,16 @@ func TestOptimismKeccak256Commitment(gt *testing.T) {
// assert that EigenDA proxy's was written and read from
stat := proxyTS.Server.GetS3Stats()

require.Equal(t, 1, stat.Entries)
require.Equal(t, 1, stat.Reads)
require.Equal(t, uint(1), stat.Entries)
require.Equal(t, uint(1), stat.Reads)
}

func TestOptimismAltDACommitment(gt *testing.T) {
if !runIntegrationTests && !runTestnetIntegrationTests {
gt.Skip("Skipping test as INTEGRATION or TESTNET env var not set")
}

proxyTS, close := e2e.CreateTestSuite(gt, useMemory(), false)
proxyTS, close := e2e.CreateTestSuite(gt, useMemory(), false, nil)
defer close()

t := actions.NewDefaultTesting(gt)
@@ -219,7 +219,7 @@ func TestOptimismAltDACommitment(gt *testing.T) {

if useMemory() {
stat := proxyTS.Server.GetMemStats()
require.Equal(t, 1, stat.Entries)
require.Equal(t, 1, stat.Reads)
require.Equal(t, uint(1), stat.Entries)
require.Equal(t, uint(1), stat.Reads)
}
}
Binary file removed e2e/resources/kzg/SRSTables/dimE512.coset1
84 changes: 67 additions & 17 deletions e2e/server_test.go
@@ -9,6 +9,7 @@ import (

"github.com/Layr-Labs/eigenda-proxy/client"
"github.com/Layr-Labs/eigenda-proxy/e2e"
"github.com/Layr-Labs/eigenda-proxy/store"
"github.com/Layr-Labs/eigenda-proxy/utils"
op_plasma "github.com/ethereum-optimism/optimism/op-plasma"
"github.com/stretchr/testify/require"
@@ -25,7 +26,7 @@ func TestOptimismClientWithS3Backend(t *testing.T) {

t.Parallel()

ts, kill := e2e.CreateTestSuite(t, useMemory(), true)
ts, kill := e2e.CreateTestSuite(t, useMemory(), true, nil)
defer kill()

daClient := op_plasma.NewDAClient(ts.Address(), false, true)
@@ -49,7 +50,7 @@ func TestOptimismClientWithEigenDABackend(t *testing.T) {

t.Parallel()

ts, kill := e2e.CreateTestSuite(t, useMemory(), true)
ts, kill := e2e.CreateTestSuite(t, useMemory(), true, nil)
defer kill()

daClient := op_plasma.NewDAClient(ts.Address(), false, false)
@@ -73,7 +74,7 @@ func TestProxyClient(t *testing.T) {

t.Parallel()

ts, kill := e2e.CreateTestSuite(t, useMemory(), false)
ts, kill := e2e.CreateTestSuite(t, useMemory(), false, nil)
defer kill()

cfg := &client.Config{
@@ -93,14 +94,65 @@
require.Equal(t, testPreimage, preimage)
}

func TestProxyServerFaultMode(t *testing.T) {
	if !runIntegrationTests {
		t.Skip("Skipping test as INTEGRATION env var not set")
	}

	if runTestnetIntegrationTests {
		t.Skip("Skipping test since fault mode is only supported for memstore implementations")
	}

	fc := &store.FaultConfig{
		Actors: map[string]store.Behavior{
			"sequencer": {
				Mode: store.HonestMode,
			},
			"challenger": {
				Mode: store.ByzantineFaultMode,
			},
		},
	}

	ts, kill := e2e.CreateTestSuite(t, useMemory(), false, fc)
	defer kill()

	cfg := &client.Config{
		Actor: "sequencer",
		URL:   ts.Address(),
	}
	sequencerClient := client.New(cfg)

	cfg2 := &client.Config{
		Actor: "challenger",
		URL:   ts.Address(),
	}
	challengerClient := client.New(cfg2)

	testPreimage := []byte("inter-subjective and not objective!")

	blobInfo, err := sequencerClient.SetData(ts.Ctx, testPreimage)
	require.NoError(t, err)

	preimage, err := sequencerClient.GetData(ts.Ctx, blobInfo)
	require.NoError(t, err)
	require.Equal(t, testPreimage, preimage)

	preimage, err = challengerClient.GetData(ts.Ctx, blobInfo)
	require.NoError(t, err)
	require.NotEqual(t, testPreimage, preimage)
}

func TestProxyClientWithLargeBlob(t *testing.T) {
if !runIntegrationTests && !runTestnetIntegrationTests {
t.Skip("Skipping test as INTEGRATION or TESTNET env var not set")
}

t.Parallel()

ts, kill := e2e.CreateTestSuite(t, useMemory(), false)
ts, kill := e2e.CreateTestSuite(t, useMemory(), false, nil)
defer kill()

cfg := &client.Config{
@@ -127,27 +179,24 @@ func TestProxyClientWithOversizedBlob(t *testing.T) {

t.Parallel()

ts, kill := e2e.CreateTestSuite(t, useMemory(), false)
ts, kill := e2e.CreateTestSuite(t, useMemory(), false, nil)
defer kill()

cfg := &client.Config{
URL: ts.Address(),
}
daClient := client.New(cfg)
// 2MB blob
testPreimage := []byte(e2e.RandString(200000000))
	// 32MB blob
	testPreimage := []byte(e2e.RandString(32_000_000))

t.Log("Setting input data on proxy server...")
blobInfo, err := daClient.SetData(ts.Ctx, testPreimage)
require.Empty(t, blobInfo)
require.Error(t, err)

oversizedError := false
if strings.Contains(err.Error(), "blob is larger than max blob size") {
oversizedError = true
}

if strings.Contains(err.Error(), "blob size cannot exceed 2 MiB") {
if strings.Contains(err.Error(), "blob is larger than max blob size") ||
strings.Contains(err.Error(), "blob size cannot exceed") {
oversizedError = true
}

@@ -156,12 +205,13 @@ }
}

func TestProxyClient_MultiSameContentBlobs_SameBatch(t *testing.T) {
t.Skip("Skipping test until fix is applied to holesky")

if !runIntegrationTests && !runTestnetIntegrationTests {
t.Skip("Skipping test as INTEGRATION or TESTNET env var not set")
}

t.Parallel()

ts, kill := e2e.CreateTestSuite(t, useMemory(), false)
ts, kill := e2e.CreateTestSuite(t, useMemory(), false, nil)
defer kill()

cfg := &client.Config{
Expand All @@ -171,7 +221,7 @@ func TestProxyClient_MultiSameContentBlobs_SameBatch(t *testing.T) {
errChan := make(chan error, 10)
var wg sync.WaitGroup

// disperse 10 blobs with the same content in the same batch
	// disperse 4 blobs with the same content into the same batch
	for i := 0; i < 4; i++ {
wg.Add(1)
go func(){
@@ -229,4 +279,4 @@ func waitTimeout(wg *sync.WaitGroup, timeout time.Duration) bool {
case <-time.After(timeout):
return true
}
}
}
11 changes: 7 additions & 4 deletions e2e/setup.go
@@ -37,7 +37,7 @@ type TestSuite struct {
Server *server.Server
}

func CreateTestSuite(t *testing.T, useMemory bool, useS3 bool) (TestSuite, func()) {
func CreateTestSuite(t *testing.T, useMemory bool, useS3 bool, fc *store.FaultConfig) (TestSuite, func()) {

ctx := context.Background()

@@ -122,6 +122,11 @@ func CreateTestSuite(t *testing.T, useMemory bool, useS3 bool) (TestSuite, func(
ctx,
log,
)

	if fc != nil {
		store.GetMemStore().SetFaultConfig(fc)
	}

require.NoError(t, err)
server := server.NewServer(host, 0, store, log, metrics.NoopMetrics)

@@ -164,10 +169,8 @@ func createS3Bucket(bucketName string) {
panic(err)
}

location := "us-east-1"

ctx := context.Background()
err = minioClient.MakeBucket(ctx, bucketName, minio.MakeBucketOptions{Region: location})
err = minioClient.MakeBucket(ctx, bucketName, minio.MakeBucketOptions{Region: "us-east-1"})
if err != nil {
// Check to see if we already own this bucket (which happens if you run this twice)
exists, errBucketExists := minioClient.BucketExists(ctx, bucketName)
6 changes: 6 additions & 0 deletions fault.example.json
@@ -0,0 +1,6 @@
{
  "all": {
    "mode": "interval_byzantine",
    "interval": 1
  }
}