Run a cluster using the CLI
Charon is in an early alpha state and is not ready to be run on mainnet
The following instructions aim to assist a group of operators coordinating together to create a distributed validator cluster.
- Ensure you have docker installed.
- Ensure you have git installed.
- Make sure `docker` is running before executing the commands below.
- Decide who the Leader or Creator of your cluster will be. Only they have to perform step 2 and step 5 in this quickstart, and they do not get any special privileges.
- In the Leader case, the operator creating the cluster will also operate a node in the cluster.
- In the Creator case, the cluster is created by an external party to the cluster.
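A quick way to confirm the prerequisites are in place is a small check like the following (a sketch; it only reports what is on your PATH, it does not install anything):

```shell
# Report whether each required tool is available on PATH.
for tool in docker git; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```

If either tool is reported missing, install it before continuing.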
Step 1. Create and back up a private key for charon
In order to prepare for a distributed key generation ceremony, all operators (including the leader but NOT a creator) need to create an ENR for their charon client. This ENR is a public/private key pair, and allows the other charon clients in the DKG to identify and connect to your node.
# Clone this repo
git clone https://github.com/ObolNetwork/charon-distributed-validator-node.git
# Change directory
cd charon-distributed-validator-node
# Create your charon ENR private key, this will create a charon-enr-private-key file in the .charon directory
docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.14.4 create enr
You should expect to see console output like:
Created ENR private key: .charon/charon-enr-private-key
Please make sure to create a backup of the private key at `.charon/charon-enr-private-key`, and be careful not to commit it to git! If you lose this file you won't be able to take part in the DKG ceremony and start the DV cluster successfully.
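One way to take that backup (a sketch; the backup directory here is an example, not part of this repo — use whatever secure, offline storage you prefer):

```shell
# Copy the ENR private key to a dated backup location, if it exists.
BACKUP_DIR="$HOME/charon-backups"   # example path; substitute your own secure storage
if [ -f .charon/charon-enr-private-key ]; then
  mkdir -p "$BACKUP_DIR"
  cp .charon/charon-enr-private-key "$BACKUP_DIR/charon-enr-private-key.$(date +%Y%m%d)"
  echo "backed up to $BACKUP_DIR"
else
  echo "no key found; run 'create enr' first"
fi
```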
Finally, share your ENR with the leader or creator so that they can proceed to Step 2.
Step 2. Leader or Creator creates the DKG configuration file and distributes it to cluster operators
The leader or creator of the cluster will prepare the `cluster-definition.json` file for the Distributed Key Generation ceremony using the `charon create dkg` command.
# Prepare an environment variable file
cp .env.create_dkg.sample .env.create_dkg
Populate the `.env.create_dkg` file with the cluster name, the withdrawal Ethereum addresses, and the ENRs of all the operators participating in the cluster.
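The exact variable names are defined in `.env.create_dkg.sample`; the following is only an illustration of the kind of values you will fill in (the variable names below are hypothetical, so copy the real ones from the sample file):

```shell
# Illustrative only — use the variable names from .env.create_dkg.sample.
CLUSTER_NAME="my-dv-cluster"                                  # hypothetical name
WITHDRAWAL_ADDRESS="0x0000000000000000000000000000000000000000"  # example zero address
OPERATOR_ENRS="enr:-...,enr:-...,enr:-...,enr:-..."           # comma-separated ENRs from step 1
```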
- The file generated is hidden by default. To view it, run `ls -al` in your terminal. Alternatively, if you are on macOS, press `Cmd + Shift + .` to view all hidden files in the Finder application.
Run the `charon create dkg` command to generate the DKG `cluster-definition.json` file.
docker run --rm -v "$(pwd):/opt/charon" --env-file .env.create_dkg obolnetwork/charon:v0.14.4 create dkg
This command should output a file at `.charon/cluster-definition.json`. This file needs to be shared with the other operators in the cluster.
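Because every operator must end up with an identical definition file, it can be worth comparing a checksum out-of-band after sharing it (a sketch; on macOS, where `sha256sum` may be absent, `shasum -a 256` works the same way):

```shell
# Print a fingerprint of the definition file so operators can compare copies.
if [ -f .charon/cluster-definition.json ]; then
  sha256sum .charon/cluster-definition.json
else
  echo "cluster-definition.json not found"
fi
```

If every operator sees the same hash, everyone is working from the same file.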
Step 3. Run the DKG
After receiving the `cluster-definition.json` file created by the leader, cluster operators should ideally save it in the `.charon/` folder that was created during step 1; alternatively, the `--definition-file` flag can override the default expected location for this file.
Every cluster member then participates in the DKG ceremony. For Charon v1, this needs to happen relatively synchronously between participants at an agreed time.
# Participate in DKG ceremony, this will create .charon/cluster-lock.json, .charon/deposit-data.json and .charon/validator_keys
docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.14.4 dkg
This is a helpful video walkthrough.
Assuming the DKG is successful, a number of artefacts will be created in the `.charon` folder. These include:
- `deposit-data.json`. This contains the information needed to activate the validator on the Ethereum network.
- `cluster-lock.json`. This contains the information needed by charon to operate the distributed validator cluster with its peers.
- `validator_keys/`. This folder contains the private key shares and passwords for the created distributed validators.
Please make sure to create a backup of `.charon/validator_keys`. If you lose your keys you won't be able to start the DV cluster successfully. The deposit-data files, however, are identical for each operator and can be copied if lost.
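A minimal backup sketch, assuming you will move the resulting archive somewhere safe afterwards (and ideally encrypt it first — it contains key shares):

```shell
# Archive the key shares; store the archive offline, never in this repo.
if [ -d .charon/validator_keys ]; then
  tar -czf "validator_keys-$(date +%Y%m%d).tar.gz" -C .charon validator_keys
  echo "archive created"
else
  echo "no validator_keys folder found"
fi
```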
Step 4. Start your Distributed Validator Node
With the DKG ceremony over, the last phase before activation is to prepare your node for validating over the long term. This repo is configured to sync an execution layer client (`geth`) and a consensus layer client (`lighthouse`).
Before completing these instructions, you should assign a static local IP address to your device (extending the DHCP reservation indefinitely, or removing the device from the DHCP pool entirely if you prefer), and port forward the TCP protocol on the public port `:3610` on your router to your device's local IP address on the same port. This step is different for every home internet setup, and can be complicated by the presence of dynamic public IP addresses. We are currently working on making this as easy as possible, but for the time being, a distributed validator cluster won't work very resiliently if the charon nodes cannot talk directly to one another and instead need an intermediary node forwarding traffic to them.
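To sanity-check the forwarding, you can probe the port from a machine outside your network with a small bash-only check (a sketch; the host below is a placeholder — substitute your public IP, and note the charon p2p port 3610 from above):

```shell
# Probe a TCP port using bash's /dev/tcp redirection (no netcat required).
HOST="127.0.0.1"   # placeholder; replace with your public IP when testing from outside
PORT=3610
if timeout 3 bash -c "exec 3<>/dev/tcp/$HOST/$PORT" 2>/dev/null; then
  echo "port $PORT on $HOST is reachable"
else
  echo "port $PORT on $HOST is NOT reachable"
fi
```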
Caution: If you manually update `docker-compose` to mount `lighthouse` from your locally synced `~/.lighthouse`, the whole chain database may get deleted. It's best not to do this manually, as `lighthouse` checkpoint-syncs and so syncing doesn't take much time anyway.
Note: If you have a `geth` node already synced, you can simply copy over the directory, for example: `cp -r ~/.ethereum/goerli data/geth`. This makes everything faster since you start from a synced geth node.
# Delete lighthouse data if it exists
rm -r ./data/lighthouse
# Spin up a Distributed Validator Node with a Validator Client
docker compose up
You should use the Grafana dashboard to infer whether your cluster is healthy. In particular you should check:
- That your charon client can connect to the configured beacon client.
- That your charon client can connect to all peers.
Most components in the dashboard have help text to assist you in understanding your cluster's performance.
You might notice that there are logs indicating that a validator cannot be found and that APIs are returning 404. This is to be expected at this point, as the validator public keys listed in the lock file have not been deposited and acknowledged on the consensus layer yet (usually ~16 hours after the deposit is made).
If at any point you need to turn off your node, you can run:
# Shut down the currently running distributed validator node
docker compose down
Step 5. Activate the deposit data
Congrats 🎉 if your cluster has gotten to this step of the quickstart and you have successfully created a distributed validator together.
If you have connected all of your charon clients together such that the monitoring indicates that they are all healthy and ready to operate, ONE operator, usually the leader, may proceed to activate this deposit data with the existing staking launchpad.
This process can take a minimum of 16 hours, with the maximum time to activation being dictated by the length of the activation queue, which can be weeks. You can leave your distributed validator cluster offline until closer to the activation period if you would prefer. You can also use this time to improve and harden your monitoring and alerting for the cluster.
Step 6 - Optional. Add the Monitoring Credentials
This step is optional but will help the Obol Team monitor the health of your cluster. It can only be performed if the Obol Team has given you a credential to use.
You may have been provided with Monitoring Credentials used to push distributed validator metrics to our central prometheus service to monitor, analyze, and improve your cluster's performance. The provided credential needs to be set in `$PROM_REMOTE_WRITE_TOKEN`, and the `prometheus/prometheus.yml` file would look like:
global:
  scrape_interval: 30s # Set the scrape interval to every 30 seconds.
  evaluation_interval: 30s # Evaluate rules every 30 seconds.
remote_write:
  - url: https://vm.monitoring.gcp.obol.tech/write
    authorization:
      credentials: $PROM_REMOTE_WRITE_TOKEN
scrape_configs:
  - job_name: 'charon'
    static_configs:
      - targets: ['charon:3620']
  - job_name: 'teku'
    static_configs:
      - targets: ['teku:8008']
  - job_name: 'node-exporter'
    static_configs:
      - targets: ['node-exporter:9100']
Step 7. Validator Voluntary Exit
Exiting your validator(s) can be useful in situations where you want to stop staking and withdraw your staked ETH.
👉 Follow the exit guide here
If you have gotten this far through the process, and whether you succeeded or failed at running the distributed validator successfully, we would like to hear your feedback on the process and where you encountered difficulties. Please let us know by joining and posting on our Discord. Also, feel free to add issues to our GitHub repos.
The above steps should get you running a distributed validator cluster. The following are some extra steps you may want to take to improve the resilience and performance of your distributed validator cluster.
Docker power users
This section of the readme is intended for "docker power users", i.e., those who are familiar with working with `docker compose` and want more flexibility and power to change the default configuration.
We use the "Multiple Compose File" feature, which provides a very powerful way to override any configuration in `docker-compose.yml` without needing to modify git-checked-in files, since that results in conflicts when upgrading this repo.
See this for more details.
There are two additional files in this repository, `compose-debug.yml` and `docker-compose.override.yml.sample`, along with the default `docker-compose.yml` file, that you can use for this purpose.
- `compose-debug.yml` contains some additional containers that developers can use for debugging, like `jaeger`. To achieve this, you can run:
docker compose -f docker-compose.yml -f compose-debug.yml up
- `docker-compose.override.yml.sample` is intended to override the default configuration provided in `docker-compose.yml`. This is useful when, for example, you wish to add port mappings or want to disable a container.
To use it, just copy the sample file to `docker-compose.override.yml` and customise it to your liking. Please create this file ONLY when you want to tweak something, because the default override file is empty and docker errors if you provide an empty compose file.
cp docker-compose.override.yml.sample docker-compose.override.yml
# Tweak docker-compose.override.yml and then run docker compose up
docker compose up
- You can also run all these compose files together. This is desirable when you want to use both features. For example, you may want to have some debugging containers AND also want to override some defaults. To achieve this, you can run:
docker compose -f docker-compose.yml -f docker-compose.override.yml -f compose-debug.yml up