
Hive Developer Portal

Get Transaction Node

Setting up a node that supports *.get_transaction.


This tutorial will show how to set up the lowest-resource node possible that can support condenser_api.get_transaction and account_history_api.get_transaction.


Minimum Requirements

This tutorial assumes Ubuntu Server 18.04 LTS with 16 GB of RAM and 500 GB of SSD/HDD storage.

Building hived

sudo apt-get update
sudo apt-get dist-upgrade
sudo apt-get install autoconf automake autotools-dev bsdmainutils \
  build-essential cmake doxygen gdb libboost-all-dev libreadline-dev \
  libssl-dev libtool liblz4-tool ncurses-dev pkg-config python3-dev \
  python3-pip nginx fcgiwrap awscli gdb libgflags-dev libsnappy-dev zlib1g-dev \
  libbz2-dev liblz4-dev libzstd-dev
mkdir -p ~/src
cd ~/src
git clone --branch master https://github.com/openhive-network/hive
cd hive
git submodule update --init --recursive
mkdir -p build
cd build
cmake -DCMAKE_BUILD_TYPE=Release -DENABLE_MIRA=OFF ..
make -j$(nproc)
sudo make install
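The `make -j$(nproc)` step can exhaust RAM on small machines and trigger the cc1plus "internal compiler error: Killed" problem described later in this tutorial. A rough heuristic sketch for picking a safer job count, assuming a Linux /proc filesystem (the 2 GB-per-job ratio is an assumption, not from the original tutorial):

```shell
# Heuristic: allow roughly 2 GB of RAM per parallel compile job to avoid
# out-of-memory kills of cc1plus during the build.
mem_gb=$(awk '/MemTotal/ {printf "%d", $2 / 1048576}' /proc/meminfo)
jobs=$(( mem_gb / 2 ))
if [ "$jobs" -lt 1 ]; then jobs=1; fi
echo "suggested build command: make -j$jobs"
```

On the 16 GB machine assumed by this tutorial, this suggests `make -j8` or less, rather than one job per CPU core.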

Configure Node

mkdir -p ~/hive_data
cd ~/hive_data
hived --data-dir=.

At the startup banner, press ^C (Ctrl+C) to exit hived. As a side effect, a default data directory, including a generated config.ini, is created. Now we can purge the empty blockchain directory and edit config.ini as follows:

rm -Rf blockchain
nano config.ini

Make the following changes to the generated config.ini (only the changed values are shown):

plugin = p2p webserver account_history block_api condenser_api database_api account_history_api
account-history-blacklist-ops = fill_convert_request_operation author_reward_operation curation_reward_operation comment_reward_operation liquidity_reward_operation interest_operation fill_vesting_withdraw_operation fill_order_operation shutdown_witness_operation fill_transfer_from_savings_operation hardfork_operation comment_payout_update_operation return_vesting_delegation_operation comment_benefactor_reward_operation producer_reward_operation clear_null_account_balance_operation proposal_pay_operation sps_fund_operation hardfork_hive_operation hardfork_hive_restore_operation delayed_voting_operation consolidate_treasury_balance_operation effective_comment_vote_operation ineffective_delete_comment_operation sps_convert_operation
shared-file-size = 54G
p2p-endpoint = 0.0.0.0:2001
webserver-http-endpoint = 0.0.0.0:8091
webserver-ws-endpoint = 0.0.0.0:8090

Save config.ini.

Latest Block Log

Download the block log (optional but recommended).

cd ~/hive_data
mkdir -p blockchain
wget -O blockchain/block_log https://gtg.openhive.network/get/blockchain/block_log
hived --data-dir=. --replay-blockchain

Sync Node

If you did not download the latest block log:

cd ~/hive_data
hived --data-dir=. --resync-blockchain

After replay or resync completes, the console will display lines like Got ## transactions from .... You can then stop hived with ^C (Ctrl+C). To start the node again:

cd ~/hive_data
hived --data-dir=.
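With the node running and synced, you can smoke-test the API this tutorial set out to serve. A sketch, assuming the webserver HTTP endpoint listens on localhost port 8091; the `<transaction_id>` value is a placeholder you must replace with a real transaction id:

```shell
# JSON-RPC request body for condenser_api.get_transaction.
# "<transaction_id>" is a placeholder -- substitute a real transaction id.
PAYLOAD='{"jsonrpc":"2.0","method":"condenser_api.get_transaction","params":["<transaction_id>"],"id":1}'

# Sanity-check that the payload is well-formed JSON before sending it.
echo "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload is valid JSON"

# Query the node (assumes webserver-http-endpoint on port 8091):
# curl -s --data "$PAYLOAD" http://localhost:8091
```

A successful response contains the transaction's operations; an error mentioning a missing API indicates the account_history or condenser_api plugin is not loaded in config.ini.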


Troubleshooting

Problem: Got an error while trying to compile hived:

c++: internal compiler error: Killed (program cc1plus)

Solution: Add more memory or enable swap.

To enable swap (do not enable swap on a VPS such as DigitalOcean):

sudo dd if=/dev/zero of=/var/swap.img bs=1024k count=4000
sudo chmod 600 /var/swap.img
sudo mkswap /var/swap.img
sudo swapon /var/swap.img
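To keep the swap file active across reboots, an /etc/fstab entry can be added as well (a sketch, assuming the /var/swap.img path used above):

```
/var/swap.img  none  swap  sw  0  0
```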

Problem: Got an error while replaying:

IO error: While open a file for appending: /root/hive_data/./blockchain/rocksdb_witness_object/012590.sst: Too many open files

Solution: You’re using MIRA, which this tutorial recommends building without (-DENABLE_MIRA=OFF). If you really intend to try MIRA, you will need to set higher limits. Note that if you are also running hived as root (not recommended), you must explicitly set hard/soft nofile/nproc lines for root instead of * in /etc/security/limits.conf.

To set the open file limit …

sudo nano /etc/security/limits.conf

Add the following lines:

*      hard    nofile     94000
*      soft    nofile     94000
*      hard    nproc      64000
*      soft    nproc      64000
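After editing limits.conf and logging back in, you can check whether the session actually picked up the new values; a quick sketch using the shell's built-in ulimit:

```shell
# Report the current session's limits; compare against the values set above
# (nofile 94000, nproc 64000).
echo "open files (soft): $(ulimit -Sn)"
echo "open files (hard): $(ulimit -Hn)"
echo "processes  (soft): $(ulimit -Su)"
```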

To set the fs.file-max limit …

sudo nano /etc/sysctl.conf

Add the following line:

fs.file-max = 2097152

Load the new settings:

sudo sysctl -p
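To confirm the kernel-wide limit took effect, you can read it back from procfs (a Linux-only sketch):

```shell
# fs.file-max as seen by the running kernel; after the sysctl change above
# this should report 2097152.
current=$(cat /proc/sys/fs/file-max)
echo "fs.file-max is currently $current"
```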

Once you save these files, you may need to log out and log back in for the new limits to take effect.