chore: lagniappe v1.4 updates
epociask committed Aug 13, 2024
1 parent 8e47a1b commit 5048e9d
Showing 4 changed files with 73 additions and 57 deletions.
58 changes: 43 additions & 15 deletions README.md
@@ -1,8 +1,3 @@
![Compiles](https://github.com/Layr-Labs/eigenda-proxy/actions/workflows/build.yml/badge.svg)
![Unit Tests](https://github.com/Layr-Labs/eigenda-proxy/actions/workflows/unit-tests.yml/badge.svg)
![Linter](https://github.com/Layr-Labs/eigenda-proxy/actions/workflows/gosec.yml/badge.svg)
![Integration Tests](https://github.com/Layr-Labs/eigenda-proxy/actions/workflows/holesky-test.yml/badge.svg)

# EigenDA Sidecar Proxy

## Introduction
@@ -17,6 +12,7 @@ Features:
* Performs KZG verification during dispersal to ensure that DA certificates returned from the EigenDA disperser have correct KZG commitments.
* Performs DA certificate verification during dispersal to ensure that DA certificates have been properly bridged to Ethereum by the disperser.
* Performs DA certificate verification during retrieval to ensure that data represented by bad DA certificates do not become part of the canonical chain.
* Compatibility with Optimism's alt-da commitment type with an EigenDA backend.
* Compatibility with Optimism's keccak-256 commitment type with S3 storage.

In order to disperse to the EigenDA network in production, or at high throughput on testnet, please register your authentication Ethereum address through [this form](https://forms.gle/3QRNTYhSMacVFNcU8). Your EigenDA authentication keypair address should not be associated with any funds anywhere.
@@ -30,9 +26,9 @@ In order to disperse to the EigenDA network in production, or at high throughput
| `--eigenda-disable-point-verification-mode` | `false` | `$EIGENDA_PROXY_DISABLE_POINT_VERIFICATION_MODE` | Disable point verification mode. This mode performs IFFT on data before writing and FFT on data after reading. Disabling requires supplying the entire blob for verification against the KZG commitment. |
| `--eigenda-disable-tls` | `false` | `$EIGENDA_PROXY_GRPC_DISABLE_TLS` | Disable TLS for gRPC communication with the EigenDA disperser. Default is false. |
| `--eigenda-disperser-rpc` | | `$EIGENDA_PROXY_EIGENDA_DISPERSER_RPC` | RPC endpoint of the EigenDA disperser. |
| `--eigenda-eth-confirmation-depth` | `6` | `$EIGENDA_PROXY_ETH_CONFIRMATION_DEPTH` | The number of Ethereum blocks of confirmation that the DA bridging transaction must have before it is assumed by the proxy to be final. If set negative the proxy will always wait for blob finalization. |
| `--eigenda-eth-confirmation-depth` | `-1` | `$EIGENDA_PROXY_ETH_CONFIRMATION_DEPTH` | The number of Ethereum blocks of confirmation that the DA bridging transaction must have before it is assumed by the proxy to be final. If set negative the proxy will always wait for blob finalization. |
| `--eigenda-eth-rpc` | | `$EIGENDA_PROXY_ETH_RPC` | JSON RPC node endpoint for the Ethereum network used for finalizing DA blobs. See available list here: https://docs.eigenlayer.xyz/eigenda/networks/ |
| `--eigenda-g1-path` | `"resources/g1.point.1048576"` | `$EIGENDA_PROXY_TARGET_KZG_G1_PATH` | Directory path to g1.point file. |
| `--eigenda-g1-path` | `"resources/g1.point"` | `$EIGENDA_PROXY_TARGET_KZG_G1_PATH` | Directory path to g1.point file. |
| `--eigenda-g2-tau-path` | `"resources/g2.point.powerOf2"` | `$EIGENDA_PROXY_TARGET_G2_TAU_PATH` | Directory path to g2.point.powerOf2 file. |
| `--eigenda-max-blob-length` | `"2MiB"` | `$EIGENDA_PROXY_MAX_BLOB_LENGTH` | Maximum blob length to be written or read from EigenDA. Determines the number of SRS points loaded into memory for KZG commitments. Example units: '30MiB', '4Kb', '30MB'. Maximum size slightly exceeds 1GB. |
| `--eigenda-put-blob-encoding-version` | `0` | `$EIGENDA_PROXY_PUT_BLOB_ENCODING_VERSION` | Blob encoding version to use when writing blobs from the high-level interface. |
@@ -47,6 +43,7 @@ In order to disperse to the EigenDA network in production, or at high throughput
| `--log.pid` | `false` | `$EIGENDA_PROXY_LOG_PID` | Show pid in the log. |
| `--memstore.enabled` | `false` | `$MEMSTORE_ENABLED` | Whether to use mem-store for DA logic. |
| `--memstore.expiration` | `25m0s` | `$MEMSTORE_EXPIRATION` | Duration that a mem-store blob/commitment pair are allowed to live. |
| `--memstore.fault-config-path` | `""` | `$MEMSTORE_FAULT_CONFIG_PATH` | Path to fault config JSON file. |
| `--metrics.addr` | `"0.0.0.0"` | `$EIGENDA_PROXY_METRICS_ADDR` | Metrics listening address. |
| `--metrics.enabled` | `false` | `$EIGENDA_PROXY_METRICS_ENABLED` | Enable the metrics server. |
| `--metrics.port` | `7300` | `$EIGENDA_PROXY_METRICS_PORT` | Metrics listening port. |
@@ -58,8 +55,6 @@ In order to disperse to the EigenDA network in production, or at high throughput
| `--s3.bucket` | | `$EIGENDA_PROXY_S3_BUCKET` | Bucket name for S3 storage. |
| `--s3.path` | | `$EIGENDA_PROXY_S3_PATH` | Bucket path for S3 storage. |
| `--s3.endpoint` | | `$EIGENDA_PROXY_S3_ENDPOINT` | Endpoint for S3 storage. |
| `--s3.backup` | | `$EIGENDA_PROXY_S3_BACKUP` | Enable parallel blob backup to S3 from EigenDA. |
| `--s3.timeout` | | `$EIGENDA_PROXY_S3_TIMEOUT` | Timeout duration for S3 operations. |
| `--help, -h` | `false` | | Show help. |
| `--version, -v` | `false` | | Print the version. |

@@ -69,15 +64,48 @@ In order to disperse to the EigenDA network in production, or at high throughput
In order for the EigenDA Proxy to avoid a trust assumption on the EigenDA disperser, the proxy offers a DA cert verification feature which ensures that:

1. The DA cert's batch hash can be computed locally and matches the one persisted on-chain in the `ServiceManager` contract
2. The DA cert's blob inclusion proof can be merkleized to generate the proper batch root
2. The DA cert's blob inclusion proof can be successfully verified against the blob-batch merkle root
3. The DA cert's quorum parameters are adequately defined and consistent with their on-chain counterparts
4. The DA cert's quorum ids map to valid quorums

To target this feature, use the CLI flags `--eigenda-svc-manager-addr` and `--eigenda-eth-rpc`.


#### Soft Confirmations

An optional `--eigenda-eth-confirmation-depth` flag can be provided to specify the number of Ethereum block confirmations to wait before verifying the blob certificate. This allows blobs to be accepted upon `confirmation` rather than waiting (e.g., 25-30 minutes) for `finalization`. The following integer values are supported:
* `-1`: Wait for blob finalization
* `0`: Verify the cert immediately upon blob confirmation and return the blob
* `N` where `N>0`: Wait `N` blocks before verifying the cert and returning the blob
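The three cases above can be summarized in a small helper; the function below is an illustrative sketch of the flag's semantics, not code from the proxy:

```go
package main

import "fmt"

// describeConfirmationDepth interprets an --eigenda-eth-confirmation-depth
// value: negative means wait for finalization, zero means verify as soon as
// the DA bridging tx is confirmed, and N>0 means wait N blocks first.
func describeConfirmationDepth(depth int64) string {
	switch {
	case depth < 0:
		return "wait for blob finalization"
	case depth == 0:
		return "verify immediately upon confirmation"
	default:
		return fmt.Sprintf("wait %d blocks before verifying", depth)
	}
}

func main() {
	for _, d := range []int64{-1, 0, 6} {
		fmt.Printf("depth %d: %s\n", d, describeConfirmationDepth(d))
	}
}
```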

### In-Memory Backend

An ephemeral memory store backend can be used for faster feedback loops when testing rollup integrations. To target this feature, use the CLI flags `--memstore.enabled` and `--memstore.expiration`.


### Fault Mode

Memstore also supports a configurable fault mode that corrupts blob contents on read. This is key for testing sequencer resiliency against incorrect batches, as well as for testing dispute resolution where an optimistic rollup commitment poster produces a machine state hash irrespective of the actual intended execution.

The configuration lives in a JSON file whose path is specified via the `--memstore.fault-config-path` CLI flag. For example:
```json
{
  "all": {
    "mode": "honest",
    "interval": 1
  },
  "challenger": {
    "mode": "byzantine"
  }
}
```

Each key refers to an `actor`, with actor context shared via the HTTP request and processed accordingly by the server. The following modes are currently supported:
- `honest`: returns the actual blob contents that were persisted to memory
- `interval_byzantine`: blob contents are corrupted once every `n` reads
- `byzantine`: blob contents are corrupted on every read
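Reading the fault config amounts to unmarshalling a map of actor names to mode settings. The struct and field names below are assumptions inferred from the example JSON, not the proxy's actual types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ActorFault describes one actor's fault behavior. Interval is only
// meaningful for interval-based modes and may be omitted otherwise.
type ActorFault struct {
	Mode     string `json:"mode"`
	Interval uint64 `json:"interval,omitempty"`
}

// FaultConfig maps an actor name (e.g. "all", "challenger") to its fault mode.
type FaultConfig map[string]ActorFault

func loadFaultConfig(raw []byte) (FaultConfig, error) {
	var cfg FaultConfig
	if err := json.Unmarshal(raw, &cfg); err != nil {
		return nil, fmt.Errorf("invalid fault config: %w", err)
	}
	return cfg, nil
}

func main() {
	raw := []byte(`{
		"all":        {"mode": "honest", "interval": 1},
		"challenger": {"mode": "byzantine"}
	}`)
	cfg, err := loadFaultConfig(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(cfg["challenger"].Mode) // byzantine
}
```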


## Metrics

To see the list of available metrics, run `./bin/eigenda-proxy doc metrics`
@@ -150,9 +178,9 @@ Commitments returned from the EigenDA Proxy adhere to the following byte encodin

```
0        1        2        3        4                 N
|--------|--------|--------|--------|-----------------|
  commit   da layer  ext da  version   raw commitment
   type      type     type     byte
```

The `raw commitment` is an RLP-encoded [EigenDA certificate](https://github.com/Layr-Labs/eigenda/blob/eb422ff58ac6dcd4e7b30373033507414d33dba1/api/proto/disperser/disperser.proto#L168).
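A minimal parser for the layout above might look like the following; the struct and field names are illustrative, not the proxy's actual types:

```go
package main

import (
	"errors"
	"fmt"
)

// parsedCommitment splits a proxy commitment into its 4-byte header fields
// and the RLP-encoded certificate payload, per the byte layout above.
type parsedCommitment struct {
	CommitType  byte
	DALayerType byte
	ExtDAType   byte
	Version     byte
	RawCert     []byte
}

func parseCommitment(b []byte) (parsedCommitment, error) {
	if len(b) < 5 {
		return parsedCommitment{}, errors.New("commitment shorter than 4-byte header plus payload")
	}
	return parsedCommitment{
		CommitType:  b[0],
		DALayerType: b[1],
		ExtDAType:   b[2],
		Version:     b[3],
		RawCert:     b[4:],
	}, nil
}

func main() {
	c, err := parseCommitment([]byte{0x01, 0x00, 0x00, 0x00, 0xde, 0xad})
	if err != nil {
		panic(err)
	}
	fmt.Printf("version=%d payload=%x\n", c.Version, c.RawCert)
}
```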
@@ -171,10 +199,10 @@ A Holesky integration test can be run using `make holesky-test` to assert proper

### Optimism

An E2E test exists which spins up a local OP sequencer instance using the [op-e2e](https://github.com/ethereum-optimism/optimism/tree/develop/op-e2e) framework for asserting correct interaction behaviors with batch submission and state derivation.
An E2E test exists which spins up a local OP sequencer instance using the [op-e2e](https://github.com/ethereum-optimism/optimism/tree/develop/op-e2e) framework, asserting correct interaction behaviors with batch submission and state derivation. These tests can be run via `make optimism-test`.

## Resources

* [op-stack](https://github.com/ethereum-optimism/optimism)
* [Alt-DA spec](https://specs.optimism.io/experimental/alt-da.html)
* [EigenDA](https://github.com/Layr-Labs/eigenda)
2 changes: 1 addition & 1 deletion cmd/server/entrypoint.go
@@ -39,7 +39,7 @@ func StartProxySvr(cliCtx *cli.Context) error {
if err := server.Start(); err != nil {
return fmt.Errorf("failed to start the DA server")
} else {
log.Info("Started DA Server")
log.Info("Started EigenDA proxy server")
}

defer func() {
53 changes: 20 additions & 33 deletions server/config.go
@@ -2,7 +2,6 @@ package server

import (
"fmt"
"math"
"runtime"
"time"

@@ -30,27 +29,28 @@
PutBlobEncodingVersionFlagName = "eigenda-put-blob-encoding-version"
DisablePointVerificationModeFlagName = "eigenda-disable-point-verification-mode"
// Kzg flags
G1PathFlagName = "eigenda-g1-path"
G2TauFlagName = "eigenda-g2-tau-path"
CachePathFlagName = "eigenda-cache-path"
MaxBlobLengthFlagName = "eigenda-max-blob-length"
G1PathFlagName = "eigenda-g1-path"
G2TauFlagName = "eigenda-g2-tau-path"
CachePathFlagName = "eigenda-cache-path"
MaxBlobLengthFlagName = "eigenda-max-blob-length"

// Memstore flags
MemstoreFlagName = "memstore.enabled"
MemstoreExpirationFlagName = "memstore.expiration"

// S3 flags
S3CredentialTypeFlagName = "s3.credential-type" // #nosec G101
S3BucketFlagName = "s3.bucket" // #nosec G101
S3PathFlagName = "s3.path"
S3EndpointFlagName = "s3.endpoint"
S3AccessKeyIDFlagName = "s3.access-key-id" // #nosec G101
S3AccessKeySecretFlagName = "s3.access-key-secret" // #nosec G101
S3BackupFlagName = "s3.backup"
S3TimeoutFlagName = "s3.timeout"
)

const BytesPerSymbol = 31
const MaxCodingRatio = 8

var MaxSRSPoints = math.Pow(2, 28)
var MaxSRSPoints = 1 << 28 // 2^28

var MaxAllowedBlobSize = uint64(MaxSRSPoints * BytesPerSymbol / MaxCodingRatio)

@@ -69,15 +69,16 @@ type Config struct {

// KZG vars
CacheDir string
G1Path string
G2Path string
G1Path string
G2Path string
G2PowerOfTauPath string

// Size constraints
MaxBlobLength string
maxBlobLengthBytes uint64

G2PowerOfTauPath string

// Memstore Config params
// Memstore
MemstoreEnabled bool
MemstoreBlobExpiration time.Duration
}
@@ -90,7 +91,7 @@ func (c *Config) GetMaxBlobLength() (uint64, error) {
}

if numBytes > MaxAllowedBlobSize {
return 0, fmt.Errorf("excluding disperser constraints on max blob size, SRS points constrain the maxBlobLength configuration parameter to be less than than ~1 GB (%d bytes)", MaxAllowedBlobSize)
return 0, fmt.Errorf("excluding disperser constraints on max blob size, SRS points constrain the maxBlobLength configuration parameter to be less than %d bytes", MaxAllowedBlobSize)
}

c.maxBlobLengthBytes = numBytes
@@ -102,7 +103,7 @@ func (c *Config) GetMaxBlobLength() (uint64, error) {
func (c *Config) VerificationCfg() *verify.Config {
numBytes, err := c.GetMaxBlobLength()
if err != nil {
panic(fmt.Errorf("Check() was not called on config object, err is not nil: %w", err))
panic(fmt.Errorf("failed to read max blob length: %w", err))
}

kzgCfg := &kzg.KzgConfig{
@@ -131,7 +132,7 @@

}

// NewConfig parses the Config from the provided flags or environment variables.
// ReadConfig parses the Config from the provided flags or environment variables.
func ReadConfig(ctx *cli.Context) Config {
cfg := Config{
S3Config: store.S3Config{
@@ -141,8 +142,6 @@ func ReadConfig(ctx *cli.Context) Config {
Endpoint: ctx.String(S3EndpointFlagName),
AccessKeyID: ctx.String(S3AccessKeyIDFlagName),
AccessKeySecret: ctx.String(S3AccessKeySecretFlagName),
Backup: ctx.Bool(S3BackupFlagName),
Timeout: ctx.Duration(S3TimeoutFlagName),
},
ClientConfig: clients.EigenDAClientConfig{
RPC: ctx.String(EigenDADisperserRPCFlagName),
@@ -203,8 +202,8 @@ func (cfg *Config) Check() error {
if cfg.EthConfirmationDepth >= 0 && (cfg.SvcManagerAddr == "" || cfg.EthRPC == "") {
return fmt.Errorf("eth confirmation depth is set for certificate verification, but Eth RPC or SvcManagerAddr is not set")
}
if cfg.S3Config.S3CredentialType == store.S3CredentialUnknown {

if cfg.S3Config.S3CredentialType == store.S3CredentialUnknown && cfg.S3Config.Endpoint != "" {
return fmt.Errorf("s3 credential type must be set")
}
if cfg.S3Config.S3CredentialType == store.S3CredentialStatic {
@@ -287,7 +286,7 @@
Name: G1PathFlagName,
Usage: "Directory path to g1.point file.",
EnvVars: prefixEnvVars("TARGET_KZG_G1_PATH"),
Value: "resources/g1.point.1048576",
Value: "resources/g1.point",
},
&cli.StringFlag{
Name: G2TauFlagName,
@@ -360,17 +359,5 @@
Value: "",
EnvVars: prefixEnvVars("S3_ACCESS_KEY_SECRET"),
},
&cli.BoolFlag{
Name: S3BackupFlagName,
Usage: "Backup to S3 in parallel with Eigenda.",
Value: false,
EnvVars: prefixEnvVars("S3_BACKUP"),
},
&cli.StringFlag{
Name: S3TimeoutFlagName,
Usage: "S3 timeout",
Value: "60s",
EnvVars: prefixEnvVars("S3_TIMEOUT"),
},
}
}
17 changes: 9 additions & 8 deletions store/eigenda.go
@@ -5,6 +5,7 @@
"errors"
"fmt"
"time"

"github.com/Layr-Labs/eigenda-proxy/verify"
"github.com/Layr-Labs/eigenda/api/clients"
"github.com/ethereum/go-ethereum/log"
@@ -67,13 +68,6 @@ func (e EigenDAStore) Get(ctx context.Context, key []byte) ([]byte, error) {

// Put disperses a blob for some pre-image and returns the associated RLP encoded certificate commit.
func (e EigenDAStore) Put(ctx context.Context, value []byte) (comm []byte, err error) {
dispersalStart := time.Now()
blobInfo, err := e.client.PutBlob(ctx, value)
if err != nil {
return nil, err
}
cert := (*verify.Certificate)(blobInfo)

encodedBlob, err := e.client.GetCodec().EncodeBlob(value)
if err != nil {
return nil, fmt.Errorf("EigenDA client failed to re-encode blob: %w", err)
@@ -82,6 +76,13 @@ func (e EigenDAStore) Put(ctx context.Context, value []byte) (comm []byte, err e
return nil, fmt.Errorf("encoded blob is larger than max blob size: blob length %d, max blob size %d", len(value), e.cfg.MaxBlobSizeBytes)
}

dispersalStart := time.Now()
blobInfo, err := e.client.PutBlob(ctx, value)
if err != nil {
return nil, err
}
cert := (*verify.Certificate)(blobInfo)
err = e.verifier.VerifyCommitment(cert.BlobHeader.Commitment, encodedBlob)
if err != nil {
@@ -151,4 +152,4 @@ func (e EigenDAStore) EncodeAndVerify(ctx context.Context, key []byte, value []b
}

return value, nil
}
