How to run a Conflux node

Conflux is a fully decentralized network based on PoW (proof of work). If you want to participate in mining on this decentralized network, or run your own RPC service, you need to run a node (also called a client). This article shows you how to run a Conflux node.

Archive Node vs. Full Node

There are three types of Conflux nodes: archive nodes, full nodes, and light nodes. The difference between the three lies in the amount of data they store: the archive node stores the most and the light node the least. Of course, more data consumes more hardware resources. Click here for detailed information about the node types.

In general, if you want to participate in mining, a full node will suffice. You need to run an archive node if you want to use it as an RPC service. The light node is mainly used as a wallet.

Hardware Requirements

The hardware requirements to run an archive node are roughly as follows:

  • CPU: 4 cores
  • Memory: 16 GB
  • Hard Disk: 500 GB

A full node has lower hardware requirements, but a discrete graphics card is required if you want to participate in mining.

In addition, it is advised to set the maximum number of open files to 10000. On Linux, the default value is 1024, which is insufficient.
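
On Linux, for example, you can raise the limit for the current shell before starting the node (a minimal sketch; make the change permanent via /etc/security/limits.conf if needed):

# Check the current open-file limit
ulimit -n
# Raise it to 10000 for this shell session, then start the node from the same shell
ulimit -n 10000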

How to get the Node Program and Configuration

To obtain the Conflux node program, you can download it from the Releases page of the official Conflux-rust GitHub repository. Generally, you can download the latest release directly. Each release not only contains the source code, but also provides compiled node programs for Windows, Mac, and Linux.

Note that there are now two release lines, one for the main network and one for the test network: Conflux-vx.x.x for the main network, and Conflux-vx.x.x-testnet for the test network. Select the correct version according to your needs when you download the program.

The downloaded archive can be decompressed into a run folder, which contains the following contents:

➜  run tree
.
├── conflux  # node program
├── log.yaml # log configuration file
├── start.bat # windows startup script
├── start.sh # unix startup script
├── hydra.toml # main network configuration file
└── throttling.toml # traffic limiting configuration file

0 directories, 6 files

There are two files you should pay attention to: conflux and hydra.toml. If you download the Windows package, the executable node program is conflux.exe.

Another way to obtain the node program is to compile it from the source code; if you are interested, you can click here.
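
As a rough sketch, building from source with a standard Rust toolchain looks like this (the required toolchain version and system dependencies are listed in the repository):

# Clone the repository and build a release binary
git clone https://github.com/Conflux-Chain/conflux-rust.git
cd conflux-rust
cargo build --release
# The compiled node program is placed under target/release/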

Configuration

You need to prepare the node configuration file before running the node program. You can find the configuration file in the downloaded program package. Generally, the main network configuration file is hydra.toml, and the test network file is testnet.toml. The main difference between the two files is the values of the bootnodes and chainId options. Developers can also find the configuration files in the run directory of the conflux-rust repository on GitHub; the file names are likewise hydra.toml and testnet.toml.

Usually the user does not need to change any configuration and can just run the startup script (if you do not care about configuration details, you can skip to the next section on running the node program). However, if you want to enable a certain function or set some user-defined behavior, you need to set some configuration parameters yourself. The following are some of the most common configurations:

node_type

  • node_type: is used to set the type of the node; you can select full (default), archive, or light.

chainId

  • chainId: is used to set the ID of the chain to connect to; the value for the main network is 1029 and the value for the test network is 1 (generally, it does not need to be changed).
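
In the configuration file, these two options look roughly like this (the chain ID key is written here as chain_id; check the bundled hydra.toml or testnet.toml for the exact spelling):

# Node type: full (default), archive, or light
node_type = "full"
# 1029 for the main network, 1 for the test network
chain_id = 1029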

Miner related

  • mining_address: the address to receive mining rewards. You can set a hex40 address or a CIP-37 address (note: the network prefix of the address should match the currently configured chainId).
  • mining_type: the optional values are stratum, cpu, and disable. The default value is stratum.
  • stratum_listen_address: stratum address
  • stratum_port: stratum port number
  • stratum_secret: stratum connection credential
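
A sketch of a stratum mining setup; the address, listen address, port, and secret below are placeholders to replace with your own values:

# Address that receives mining rewards (placeholder: use your own hex40 or
# CIP-37 address whose network prefix matches the configured chain ID)
mining_address = "0x1234567890123456789012345678901234567890"
# stratum (default), cpu, or disable
mining_type = "stratum"
# Endpoint that miners connect to (example values)
stratum_listen_address = "0.0.0.0"
stratum_port = 32525
# Optional shared secret miners must present when connecting
stratum_secret = "your-secret"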

RPC related

  • jsonrpc_cors: is used to control RPC domain validation. The optional values are None, all, or domain names separated by commas (no spaces).
  • jsonrpc_http_keep_alive: true or false, controls whether KeepAlive is set for RPC HTTP connections.
  • jsonrpc_ws_port: websocket RPC port number.
  • jsonrpc_http_port: http RPC port number.
  • public_rpc_apis: the RPC API groups open to public access; the optional values are all, safe, cfx, debug, pubsub, test, trace (safe = cfx + pubsub). The recommended value is safe.
  • persist_tx_index: true or false. If you need to serve transaction-related RPCs, you need to enable this option; otherwise only the most recent transaction information will be accessible.
  • persist_block_number_index: true or false. If you want to look up block information by blockNumber, set it to true.
  • executive_trace: true or false. It indicates whether to enable execution tracing; if enabled, traces will be recorded in the database.
  • get_logs_filter_max_epoch_range: event logs are obtained by calling cfx_getLogs, which has a significant impact on node performance. The maximum epoch range that a single query may search is configured through this option.
  • get_logs_filter_max_limit: the maximum number of logs returned by a single cfx_getLogs query.
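
As an illustration, a node meant to serve RPC with full transaction and trace indexing could be configured roughly as follows (12537 is the HTTP port used elsewhere in this article; the WebSocket port and the cfx_getLogs limits are example values):

jsonrpc_http_port = 12537
jsonrpc_ws_port = 12535
# Allow cross-origin requests from any domain, or list allowed domains instead
jsonrpc_cors = "all"
jsonrpc_http_keep_alive = true
# Expose only the safe API groups (cfx + pubsub)
public_rpc_apis = "safe"
# Keep full transaction and block-number indexes and record execution traces
persist_tx_index = true
persist_block_number_index = true
executive_trace = true
# Example limits for cfx_getLogs queries
get_logs_filter_max_epoch_range = 10000
get_logs_filter_max_limit = 5000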

Snapshot

  • additional_maintained_snapshot_count: is used to set the number of snapshots retained before the stable checkpoint; the default value is 0, and snapshots prior to the stable genesis are deleted. This option is required if you want to query more distant historical state. When it is enabled, disk usage increases considerably.
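
For example, to keep two extra snapshots before the stable checkpoint:

# Retain 2 additional snapshots before the stable checkpoint (default 0);
# this allows querying older state at the cost of more disk usage
additional_maintained_snapshot_count = 2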

Directories

  • conflux_data_dir: the storage directory for node data (block data, state data, node database).
  • block_db_dir: the storage directory for block data. By default, it is stored in the blockchain_db directory under the directory specified by conflux_data_dir.
  • netconf_dir: is used to set the directory for network-related persistent data, including the net_key.
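
A sketch with example paths (the defaults used by the node may differ):

# Root directory for node data (block data, state data, node database)
conflux_data_dir = "./blockchain_data"
# Override where block data is stored (defaults to blockchain_db under conflux_data_dir)
block_db_dir = "./blockchain_data/blockchain_db"
# Directory for network-related persistent data, including net_key
netconf_dir = "./net_config"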

Log related

  • log_conf: is used to specify a log configuration file such as log.yaml; the settings in that file overwrite the log_level setting.
  • log_file: path of the log file. If it is not set, logs are written to stdout.
  • log_level: log printing level; the optional values are error, warn, info, debug, trace, off.

The more verbose the log level, the more logs are generated, which takes up more storage space and affects node performance.
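
For example, to write info-level logs to a file:

# Write logs to a file instead of stdout (example path)
log_file = "./log/conflux.log"
# error, warn, info, debug, trace, or off
log_level = "info"
# Optionally point to a log.yaml file, which overrides log_level
# log_conf = "./log.yaml"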

Developer (dev) Mode

Smart contract developers who want to deploy and test contracts in a local node environment can use this mode:

  • Comment out the bootnodes configuration
  • mode: set the node mode to dev
  • dev_block_interval_ms: the block generation interval, in milliseconds.

In this mode, the node runs a single-node network with all RPC methods enabled.
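
A minimal dev-mode sketch (the block interval value is just an example):

# bootnodes = "..."        # comment out the existing bootnodes line
mode = "dev"
# Generate a block every 250 ms
dev_block_interval_ms = 250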

Configuring Genesis Accounts

You can configure genesis accounts using a genesis_secrets.txt file in dev mode. This file contains one private key per line (without the 0x prefix). Add genesis_secrets to the configuration file and set its value to the path of the file:

genesis_secrets = './genesis_secrets.txt'

After the node starts, each account will hold 10,000,000,000,000,000,000,000 Drip, that is, 10,000 CFX.
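
One way to produce such a file is to generate random 32-byte hex strings, one per line (a sketch using openssl; any other key generator works too):

# Generate 5 random private keys, 64 hex characters each, no 0x prefix
for i in 1 2 3 4 5; do openssl rand -hex 32; done > genesis_secrets.txt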

Other

  • net_key: a 256-bit private key used to generate a unique node ID. It is randomly generated if not set; if you want to set it, fill in 64 hex characters.
  • tx_pool_size: the maximum number of transactions the transaction pool can hold (default 500k).
  • tx_pool_min_tx_gas_price: the minimum gasPrice the transaction pool accepts for a transaction (default 1).
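
For example:

# 256-bit node key as 64 hex characters (leave unset to have it generated randomly)
# net_key = "<64 hex characters>"
# Transaction pool capacity
tx_pool_size = 500000
# Reject transactions whose gasPrice is below this value
tx_pool_min_tx_gas_price = 1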

For complete configuration, you can check the configuration file, which contains all configurable items and detailed comments.

Start the Node

Once the configuration file is ready, you can start the node by running the startup script:

# Run the startup script
$ ./start.sh

If you see content like this in stdout or the log file, the node program has started successfully:

2021-04-14T11:54:23.518634+08:00 INFO main network::thr - throttling.initialize: min = 10M, max = 64M, cap = 256M
2021-04-14T11:54:23.519229+08:00 INFO main conflux -
:'######:::'#######::'##:::##:'########:'##:::::::'##::::'##:'##::::'##:
'##... ##:'##.... ##: ###:: ##: ##.....:: ##::::::: ##:::: ##:. ##::'##::
##:::..:: ##:::: ##: ####: ##: ##::::::: ##::::::: ##:::: ##::. ##'##:::
##::::::: ##:::: ##: ## ## ##: ######::: ##::::::: ##:::: ##:::. ###::::
##::::::: ##:::: ##: ##. ####: ##...:::: ##::::::: ##:::: ##::: ## ##:::
##::: ##: ##:::: ##: ##:. ###: ##::::::: ##::::::: ##:::: ##:: ##:. ##::
. ######::. #######:: ##::. ##: ##::::::: ########:. #######:: ##:::. ##:
:......::::.......:::..::::..::..::::::::........:::.......:::..:::::..::
Current Version: 1.1.3-testnet

2021-04-14T11:54:23.519271+08:00 INFO main conflux - Starting full client...

After the node program starts, two new folders, blockchain_data and log, are created in the run directory. They are used to store node data and logs respectively.

After starting a new main network or test network node, it will synchronize historical block data from the network; while catching up, the node is in catch-up mode. You can see the node's status and the latest epoch number in the log:

2021-04-16T14:49:11.896942+08:00 INFO IO Worker #1 cfxcore::syn - Catch-up mode: true, latest epoch: 102120 missing_bodies: 0
2021-04-16T14:49:12.909607+08:00 INFO IO Worker #3 cfxcore::syn - Catch-up mode: true, latest epoch: 102120 missing_bodies: 0
2021-04-16T14:49:13.922918+08:00 INFO IO Worker #1 cfxcore::syn - Catch-up mode: true, latest epoch: 102120 missing_bodies: 0
2021-04-16T14:49:14.828910+08:00 INFO IO Worker #1 cfxcore::syn - Catch-up mode: true, latest epoch: 102180 missing_bodies: 0

You can also use cfx_getStatus to get the latest epochNumber of the current node and compare it with the latest epoch on Conflux Scan to determine whether the data has been synchronized to the latest state.
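
For example, you can query a local node over HTTP RPC (assuming the default HTTP port 12537):

curl -X POST -H 'Content-Type: application/json' \
  --data '{"jsonrpc":"2.0","method":"cfx_getStatus","params":[],"id":1}' \
  http://localhost:12537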

RPC Server

After the node starts, if the RPC-related ports are opened and the corresponding options are enabled, wallets and DApps can access the node via its RPC URL. For example:

http://node-ip:12537

This URL can be used when adding a network to Conflux Portal, or when configuring the SDKs.

Run Node using Docker

If you are familiar with Docker, you can also run a node using Docker.
You can pull and run the official Docker image yourself.

Since the node data is large, it is recommended to mount a data directory for node data when running the image.

Currently, there are three lines of image tags:

  • x.x.x-mainnet: main network image
  • x.x.x-testnet: test network image
  • x.x.x: developer mode image; in this mode, ten accounts are automatically initialized for local development.
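
A hedged example of pulling and running an image with a mounted data directory; the image name confluxchain/conflux-rust and the in-container data path are assumptions, so check the image documentation for the exact volume and port layout:

# Replace <version> with an actual release tag
docker pull confluxchain/conflux-rust:<version>-mainnet
# Mount a host directory for node data and expose the HTTP RPC port
docker run -d --name conflux-node \
  -v /data/conflux:/root/run/data \
  -p 12537:12537 \
  confluxchain/conflux-rust:<version>-mainnet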

FAQ

Why does synchronization take a long time after restart?

After a node restarts, it synchronizes data from the last checkpoint and replays block data. The time required depends on the distance to the last checkpoint; after that, it starts synchronizing from the latest block.
This is normal and generally takes a few minutes to more than ten minutes.

Why does the block height stop increasing?

If the block height stops increasing, check the log or terminal to determine whether there is any error. If there is no error, it is most likely due to network issues; you can try restarting the node.

After the configuration is modified, do I need to clear the data when restarting the node?

It depends. If the configuration involves data storage or data indexing, the data needs to be cleared when the node is restarted after the change, for example:

  • persist_tx_index
  • executive_trace
  • persist_block_number_index

Other configuration changes generally do not require this.

What is the size of the current archive node data?

As of 2021-11-04, the compressed block data archive is less than 90 GB.

How to get involved in mining?

Mining requires a GPU; see here for details.

How to synchronize data quickly to run an archive node?

You can use fullnode-node to download a data snapshot of the archive node; with a snapshot, node data can be quickly synchronized to the latest state.

How to check the error log?

If you run the node through start.sh, you can check the error log in stderr.txt in the same directory.

How to run a PoS node?

TO BE UPDATED
