The SEDA Data Proxy allows Data Providers to expose their (private) APIs on the SEDA network. Only eligible overlay nodes are allowed to access the proxy.
Install bun:
curl -fsSL https://bun.sh/install | bash
Install all project dependencies:
bun install
Now you are able to run the Data Proxy CLI:
bun start --help
Run the init command to create a keypair and an example config:
bun start init
This will generate two files:
- config.json: Configures where routes point to and what to inject (e.g. headers).
- data-proxy-private-key.json: Private key that signs the HTTP responses. This key is registered on the SEDA chain (see below). If required, you can also use the SEDA_DATA_PROXY_PRIVATE_KEY environment variable to expose the private key to the node (see the example below).
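A minimal sketch of the environment-variable approach, in case you prefer not to keep the key file on disk (the value shown is only a placeholder for your own key material):
# Expose the private key to the node through the environment instead of the key file
export SEDA_DATA_PROXY_PRIVATE_KEY="<your-data-proxy-private-key>"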
Now you can run the node:
# Disables the proofing mechanism so it's easier to debug the proxy
bun start run --disable-proof
# The console will output something similar to:
2024-08-19 13:21:46.624 info: Proxy routes is at http://127.0.0.1:5384/proxy/
Now you can access SWApi through curl, a browser, or any other HTTP client:
curl http://localhost:5384/proxy/planets/1
The node will automatically sign the response and include two headers, x-seda-signature and x-seda-publickey, which will be used for verification on the executor node.
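You can see these headers yourself by including the response headers in the output (assuming the node is still running locally as above):
# -i prints the response headers, including x-seda-signature and x-seda-publickey
curl -i http://localhost:5384/proxy/planets/1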
The proxy forwards requests and responses as follows:
- Request query parameters are forwarded to the upstreamUrl (see the example after this list).
- Request headers, except host, are forwarded to the upstreamUrl.
- The request body is forwarded to the upstreamUrl.
- By default only the upstream content-type header is given back. This can however be configured to include more headers.
- The full upstream body is given back as the response. This can be reduced by using jsonPath.
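For example, in the request below both the query string and the extra header would be passed through to the configured upstream, while the host header would not (the header name is only an illustration):
# ?page=2 and x-example-header are forwarded to the upstream; host is not
curl -H "x-example-header: demo" "http://localhost:5384/proxy/planets/?page=2"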
All proxy routes are grouped under a single path prefix; by default this is "proxy". You can change this by specifying the routeGroup attribute in the config.json:
{
  "routeGroup": "custom"
  // Rest of config
}
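With this configuration the earlier SWApi request would, for example, be served under the new prefix (assuming the node still runs locally on the default port):
curl http://localhost:5384/custom/planets/1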
In case you want to have software in front of the data proxy to handle requests (such as another proxy or an API management solution), the public URL of the data proxy may differ from the URL that the data proxy itself serves. This causes a problem for the tamper proofing mechanism, since the data proxy needs to sign the request URL in order to prove that the overlay node did not change the URL. To prevent this you can specify the baseURL option in the config.json:
{
  "routeGroup": "proxy",
  "baseURL": "https://my-public-data-proxy.com"
}
Important
Just the protocol and host should be enough; do not include a trailing slash. Should you do additional path rewriting in the proxy layer, you can add that to the baseURL option, but this is not recommended.
A single data proxy can expose different data sources through a simple mapping in the config.json. The routes attribute takes an array of proxy route objects, each of which can have its own configuration. The two required attributes are path and upstreamUrl. These specify how the proxy should be called and how the proxy should call the upstream. By default a route is configured as GET, but you can optionally specify which methods the route should support with the methods attribute.
{
  "routeGroup": "proxy",
  "routes": [
    {
      "path": "/eth-usd",
      "upstreamUrl": "https://myapi.com/eth-usd"
      // Default method is GET
    },
    {
      "path": "/btc-usd",
      "upstreamUrl": "https://myapi.com/btc-usd",
      // You can set multiple methods for the same route
      "method": ["GET", "HEAD"]
    }
  ]
}
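As an illustration (assuming this config and the default local setup), the btc-usd route then answers both GET and HEAD requests:
# GET request
curl http://localhost:5384/proxy/btc-usd
# HEAD request (-I makes curl send HEAD)
curl -I http://localhost:5384/proxy/btc-usd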
In addition to specifying the baseURL at the root level, you can also specify it per route. The baseURL at the route level takes precedence over the one at the root level.
{
  "routeGroup": "proxy",
  "baseURL": "https://data-proxy.com",
  "routes": [
    {
      // This route will use the "baseURL" from the root
      "path": "/eth-usd",
      "upstreamUrl": "https://myapi.com/eth-usd"
    },
    {
      // This route will use its own "baseURL"
      "baseURL": "https://btc.data-proxy.com",
      "path": "/btc-usd",
      "upstreamUrl": "https://myapi.com/btc-usd"
    }
  ]
}
Should your upstream require certain request headers, you can configure those in the routes object. All headers specified in the headers attribute will be sent to the upstream in addition to the headers of the original request.
{
  "routeGroup": "proxy",
  "routes": [
    {
      "path": "/eth",
      "upstreamUrl": "https://myapi.com/endpoint/eth",
      "headers": {
        "x-api-key": "MY-API-KEY",
        "accept": "application/json"
      }
    }
  ]
}
Sometimes you don't want to expose your API key in a config file, or you have multiple environments running. The Data Proxy node has support for injecting environment variables through the {$MY_ENV_VARIABLE} syntax:
{
  "routeGroup": "proxy",
  "routes": [
    {
      "path": "/odds",
      "upstreamUrl": "https://swapi.dev/api/my-odds",
      "headers": {
        "x-api-key": "{$SECRET_API_KEY}"
      }
    }
  ]
}
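During startup the placeholder is replaced with the value from the environment. For example (reusing the run command from earlier; the key value is a placeholder):
# Provide the secret to the process environment before starting the node
export SECRET_API_KEY="my-secret-api-key"
bun start run --disable-proof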
Warning
Environment variables are evaluated during startup of the data proxy. If it detects variables in the config which aren't present in the environment, the process will exit with an error message detailing which environment variable it was unable to find.
The routes objects have support for path parameter variables and forwarding those to the upstream. Simply declare a variable in your path with the :varName syntax and reference it in the upstreamUrl with the {:varName} syntax. See below for an example:
{
  "routeGroup": "proxy",
  "routes": [
    {
      "path": "/:coinA/:coinB",
      // Use {} to inject route variables
      "upstreamUrl": "https://myapi.com/{:coinA}-{:coinB}"
    }
  ]
}
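With this route, a request like the following (again assuming the default local setup) would be proxied to https://myapi.com/eth-usdt:
# :coinA = eth, :coinB = usdt
curl http://localhost:5384/proxy/eth/usdt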
By default the data proxy node will only forward the content-type header from the upstream response. If required you can specify which other headers the proxy should forward to the requesting client:
{
  "routeGroup": "proxy",
  "routes": [
    {
      "path": "/planets/:planet",
      "upstreamUrl": "https://swapi.dev/api/planets/{:planet}",
      // Now the API will also return the server header from SWApi
      "forwardResponseHeaders": ["content-type", "server"],
      "headers": {
        "x-api-key": "some-api-key"
      }
    }
  ]
}
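A quick way to verify this (assuming the node runs locally with the route above) is to print the response headers; the server header from SWApi should now appear alongside content-type:
# -i includes the response headers in the output
curl -i http://localhost:5384/proxy/planets/1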
The Data Proxy node has support for wildcard routes, which allows you to quickly expose all your APIs:
{
  "routeGroup": "proxy",
  "routes": [
    {
      // The whole path will be injected in the URL
      "path": "/*",
      "upstreamUrl": "https://swapi.dev/api/{*}",
      "headers": {
        "x-api-key": "some-api-key"
      }
    }
  ]
}
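With this wildcard route (and the default local setup), any path under the route group is injected into the upstream URL, for example:
# Proxied to https://swapi.dev/api/people/1
curl http://localhost:5384/proxy/people/1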
If you don't want to expose all API info you can use jsonPath to return a subset of the response:
{
  "routeGroup": "proxy",
  "routes": [
    {
      "path": "/planets/:planet",
      "upstreamUrl": "https://swapi.dev/api/planets/{:planet}",
      // Calling the API http://localhost:5384/proxy/planets/1 will only return "Tatooine" and omit the rest
      "jsonPath": "$.name",
      "headers": {
        "x-api-key": "some-api-key"
      }
    }
  ]
}
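With the route above, the proxy applies the JSONPath expression to the upstream body and only returns the matched value, as the comment in the config describes:
# Returns only the planet name, e.g. "Tatooine"
curl http://localhost:5384/proxy/planets/1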
The Data Proxy node has support for exposing status information through some endpoints. This can be used to monitor the health of the node and the number of requests it has processed.
The status endpoint has two routes:
- /status/health: Returns a JSON object with the following structure:
{
  "status": "healthy",
  "metrics": {
    "uptime": "P0Y0M1DT2H3M4S", // ISO 8601 duration since the node was started
    "requests": 1024, // Number of requests processed
    "errors": 13 // Number of errors that occurred
  }
}
- /status/pubkey: Returns the public key of the node:
{
  "pubkey": "031b84c5567b126440995d3ed5aaba0565d71e1834604819ff9c17f5e9d5dd078f"
}
The status endpoints can be configured in the config file under the statusEndpoints attribute:
{
  // Other config...
  "statusEndpoints": {
    "root": "status",
    // Optional
    "apiKey": {
      "header": "x-api-key",
      "secret": "some-secret"
    }
  }
}
- root: Root path for the status endpoints. Defaults to status.
- apiKey: Optionally secure the status endpoints with an API key. The header attribute is the header key that needs to be set, and secret is the value that it needs to be set to (see the example request below).
The statusEndpoints.apiKey.secret attribute supports the {$MY_ENV_VARIABLE} syntax for injecting a value from the environment during startup.
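As an illustration (assuming the node runs locally on the default port with the configuration above), the status endpoints can then be queried as follows:
# Health and public key endpoints
curl http://localhost:5384/status/health
curl http://localhost:5384/status/pubkey
# With the apiKey section configured, the header must be included
curl -H "x-api-key: some-secret" http://localhost:5384/status/health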
The SEDA Data Proxy can be deployed in several ways:
- Local Installation (shown above)
- Docker Deployment
- Kubernetes Deployment
Pull the latest image:
docker pull ghcr.io/sedaprotocol/seda-data-proxy:v0.0.3
Initialize configuration and keys (choose one option):
# Option A: Save files to local directory
docker run \
-v $PWD/config:/app/config \
ghcr.io/sedaprotocol/seda-data-proxy:v0.0.3 \
init -c ./config/config.json -pkf config/data-proxy-private-key.json
# Option B: Print to console for manual setup
docker run ghcr.io/sedaprotocol/seda-data-proxy:v0.0.3 init --print
Register your node:
# Option A: Using private key file
docker run \
-v $PWD/config/data-proxy-private-key.json:/app/data-proxy-private-key.json \
ghcr.io/sedaprotocol/seda-data-proxy:v0.0.3 \
register <seda-address> <seda-amount>
# Option B: Using environment variable
docker run \
--env SEDA_DATA_PROXY_PRIVATE_KEY=$SEDA_DATA_PROXY_PRIVATE_KEY \
ghcr.io/sedaprotocol/seda-data-proxy:v0.0.3 \
register <seda-address> <seda-amount>
Run the proxy:
# Option A: Using private key file
docker run -d \
--name seda-data-proxy \
-p 5384:5384 \
-v $PWD/config/config.json:/app/config.json \
-v $PWD/config/data-proxy-private-key.json:/app/data-proxy-private-key.json \
ghcr.io/sedaprotocol/seda-data-proxy:v0.0.3 \
run --disable-proof
# Option B: Using environment variable
docker run -d \
--name seda-data-proxy \
-p 5384:5384 \
-v $PWD/config/config.json:/app/config.json \
--env SEDA_DATA_PROXY_PRIVATE_KEY=$SEDA_DATA_PROXY_PRIVATE_KEY \
ghcr.io/sedaprotocol/seda-data-proxy:v0.0.3 \
run --disable-proof
Note
The config.json file must always be mounted as a volume.
Important
Remove --disable-proof in production environments.
For production deployments on Kubernetes, we provide a Helm chart in the helm/ directory. Here's a basic Helm configuration example:
# values.yaml
# ... other configuration ...
secret:
  sedaDataProxyPrivateKey: "" # Will be set via CLI
# Remove this flag in production - it disables request verification
sedaProxyFlags: "--disable-proof"
sedaProxyConfig:
  routes:
    - path: "/*"
      upstreamUrl: "https://swapi.dev/api/"
      methods:
        - GET
Deploy using Helm from the project root:
helm install my-proxy ./helm --set secret.sedaDataProxyPrivateKey=$SEDA_DATA_PROXY_PRIVATE_KEY
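After the release is installed you can do a quick sanity check; the label selector and service name below are assumptions that depend on how the chart names its resources, so adjust them to your release:
# List the pods created by the release (label selector is an assumption)
kubectl get pods -l app.kubernetes.io/instance=my-proxy
# In a separate terminal: forward the proxy port locally (service name is an assumption)
kubectl port-forward svc/my-proxy-seda-data-proxy 5384:5384
# Query the health endpoint through the forwarded port
curl http://localhost:5384/status/health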
Note
The above is a minimal example. Your specific deployment may require additional configuration for services, ingress, resources, and security settings based on your infrastructure requirements. Please consult with your infrastructure team for production deployments.