
Hive Developer Portal - Quickstart

Hive Nodes

Applications that interface directly with the Hive blockchain will need to connect to a Hive node. Developers may choose to use one of the public API nodes that are available, or run their own instance of a node.

Public Nodes

All nodes listed use HTTPS (https://). If you require WebSockets for your solutions, please consider setting up your own hived node or proxy WebSockets to HTTPS using lineman.

URL                       Owner
api.hive.blog             @blocktrades
api.openhive.network      @gtg
anyx.io                   @anyx
rpc.ausbit.dev            @ausbitbank
rpc.mahdiyari.info        @mahdiyari
api.hive.blue             @guiltyparties
techcoderx.com            @techcoderx
hive.roelandp.nl          @roelandp
hived.emre.sh             @emrebeyler
api.deathwing.me          @deathwing
api.c0ff33a.uk            @c0ff33a
hive-api.arcange.eu       @arcange
hive-api.3speak.tv        @threespeak
hiveapi.actifit.io        @actifit

Private Nodes

The simplest way to get started is by deploying a pre-built dockerized container.

System Requirements

We assume the base system will be running at least Ubuntu 22.04 (Jammy). Everything will likely work with later versions of Ubuntu. IMPORTANT UPDATE: experiments have shown 20% better API performance when running Ubuntu 23.10, so this later version is recommended over Ubuntu 22.04 as a hosting OS.

For a mainnet API node, we recommend:

Running Hive node with Docker

Install ZFS support

We strongly recommend running your HAF instance on a ZFS filesystem, and this documentation assumes you will be running ZFS. Its compression and snapshot features are particularly useful when running a HAF node. We intend to publish ZFS snapshots of fully-synced HAF nodes that can be downloaded to get a HAF node up & running quickly, avoiding multi-day replay times.

sudo apt install zfsutils-linux

Install Docker

Follow the official guide: https://docs.docker.com/engine/install/
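On Ubuntu, Docker's own convenience script is a quick way to install (it downloads and runs a script from get.docker.com; review it first if that concerns you):

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh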

Create a ZFS pool

Create your ZFS pool if necessary. HAF requires at least 4TB of space, and 2TB NVMe drives are readily available, so we typically construct a pool striping data across several 2TB drives. If you have three or four drives, you will get somewhat better read/write performance, and the extra space can come in handy. To create a pool named “haf-pool” using the first two NVMe drives in your system, use a command like:

sudo zpool create haf-pool /dev/nvme0n1 /dev/nvme1n1

If you name your ZFS pool something else, configure the name in the environment file, as described in the next section. Note: by default, ZFS tries to detect your disk’s actual sector size, but it often gets it wrong for modern NVMe drives, which will degrade performance due to having to write the same sector multiple times. If you don’t know the actual sector size, we recommend forcing the sector size to 8k by setting ashift=13 on the zpool create command, as shown below.
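For example, to create the same two-drive pool with the 8k sector size forced (ashift=13, i.e. 2^13 = 8192 bytes):

sudo zpool create -o ashift=13 haf-pool /dev/nvme0n1 /dev/nvme1n1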

Configure your environment

Clone the HAF API node repository from https://github.com/openhive-network/haf_api_node. Make a copy of the file .env.example and customize it for your system. This file contains configurable parameters for things like directories and the versions of hived, HAF, and associated tools. The docker compose command will automatically read the file named .env. If you want to keep multiple configurations, you can give your environment files different names, like .env.dev and .env.prod, then explicitly specify the filename when running docker compose: docker compose --env-file=.env.dev ...
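A minimal version of this step might look like the following (the exact docker compose invocation depends on your configuration; check the repository README for any required flags or profiles):

git clone https://github.com/openhive-network/haf_api_node
cd haf_api_node
cp .env.example .env
# edit .env: set your ZFS pool name, directories, and component versions
docker compose up -d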

Set up ZFS filesystems

The HAF installation is spread across multiple ZFS datasets, which allows us to set different ZFS options for different portions of the data. We recommend that most nodes keep the default datasets in order to enable easy sharing of snapshots.

Initializing from scratch

If you’re starting from scratch, after you’ve created your zpool and configured its name in the .env file as described above, run:

sudo ./create_zfs_datasets.sh

to create and mount the datasets. By default, the dataset holding most of the database storage uses ZFS compression. The dataset for the blockchain data directory (which holds the block_log for hived and the shared_memory.bin file) is not compressed, because hived directly manages compression of the block_log file. If you have a LOT of NVMe storage (e.g. 6TB+), you can get better API performance at the cost of disk storage by disabling ZFS compression on the database dataset, but for most nodes this isn’t recommended.
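You can check which datasets are compressed with a standard ZFS query (the dataset names depend on how create_zfs_datasets.sh laid them out on your system):

zfs get -r compression haf-pool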

Assisted startup

./assisted_startup.sh

Depending on your environment variables, the assisted startup script will quickly bootstrap the process.

Building Without Docker

Full non-docker steps can be reviewed here:

Build Eclipse by @gtg

Syncing blockchain

Initializing from a snapshot

If you’re starting with one of our snapshots, the process of restoring the snapshot will create the correct datasets with the correct options set. First, download the snapshot file from: https://gtg.openhive.network/get/snapshot/ Since these snapshots are huge, it’s best to download the snapshot file to a different disk (a magnetic HDD will be fine for this) that has enough free space for the snapshot, then restore it to the ZFS pool. This lets you easily resume the download if your transfer is interrupted. If you download directly to the ZFS pool, any interruption would require you to start the download from the beginning.

wget -c https://whatever.net/snapshot_filename

If the transfer gets interrupted, run the same command again to resume. Then, to restore the snapshot, run:

sudo zfs recv -d -v haf-pool < snapshot_filename

Replay with blocklog

Normally, syncing the blockchain starts from the very first block: block 0, the genesis block. It can take a long time to catch up with the live network, because the node connects to various p2p nodes in the Hive network and requests blocks from 0 to the head block.
It stores blocks in the block log file and builds up the current state in the shared memory file. But there is a way to bootstrap syncing by using a trusted block_log file. The block log is an external, append-only log of blocks;
because it is append-only, blocks are only added to the log after they become irreversible.

A trusted block_log file helps to download blocks faster. Various operators provide public block_log files, which can be downloaded from:

These block_log files are updated periodically; as of March 2021, the uncompressed block_log file is ~350 GB. (The Docker container on the stable branch of the Hive source code has an option, USE_PUBLIC_BLOCKLOG=1, to download the latest block log and start the Hive node with replay.)

The block_log file should be placed in the blockchain directory below data_dir, and the node should be started with --replay-blockchain to verify the block log and continue syncing from that point. Replay uses the downloaded block log file to build up the shared memory file up to the highest block stored in the log, and then continues with sync up to the head block.

Replay syncs the blockchain at a much faster rate, but as the blockchain grows in size, the replay itself takes more time to verify blocks.
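A minimal invocation might look like this (the data directory path is illustrative; --data-dir and --replay-blockchain are standard hived options, though a dockerized setup passes them differently):

hived --data-dir=/path/to/datadir --replay-blockchain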

There is another trick that might help with faster sync/replay on more modestly equipped servers:

# Drop block_log from the OS page cache once a minute.
while :
do
   dd if=blockchain/block_log iflag=nocache count=0
   sleep 60
done

The above bash script drops the block_log from the OS cache, leaving more memory free for backing the blockchain database. It might also help while running live, but measurement would be needed to determine this.

A few other tricks that might help:

For Linux users: the virtual memory subsystem writes dirty pages of the shared memory file out to disk more often than is optimal, which results in hived being slowed down by redundant IO operations. The following settings are recommended to optimize reindex time.

echo    75 | sudo tee /proc/sys/vm/dirty_background_ratio
echo  1000 | sudo tee /proc/sys/vm/dirty_expire_centisecs
echo    80 | sudo tee /proc/sys/vm/dirty_ratio
echo 30000 | sudo tee /proc/sys/vm/dirty_writeback_centisecs

Another setting that can be changed in config.ini is flush-state-interval: it specifies a target number of blocks to process before flushing the chain database to disk. This is needed on Linux machines, and a value of 100000 is recommended; it is not needed on OS X, but can be used if desired.
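In config.ini, the setting looks like this (the value comes from the recommendation above):

# flush the chain database to disk roughly every 100000 blocks (Linux)
flush-state-interval = 100000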

Hive Testnet

The Hive blockchain software is written in C++, and in order to modify the source code you need some understanding of the C++ programming language. Each Hive node runs an instance of this software, so to test your changes you will need to know how to install its dependencies, which can be found in the Hive repo; this means some knowledge of system administration is also required. Running a testnet has multiple advantages: you can test your scripts or applications without adding extra spam to the live network, which allows much more flexibility to try new things. Having access to a testnet also helps you work on new features and possibly submit new or improved pull requests to the official Hive GitHub repository.

Public Testnet

The Hive Public Testnet is maintained to aid developers who want to rapidly test their applications. Unless your account was created very recently, you should be able to participate in the testnet using your own mainnet account and keys (but please be careful: if you leak your key while using the testnet, your mainnet account will be compromised).

Also see: hive.blog/hive-139531/@gtg/hf25-public-testnet-reloaded-rc2

Running a Private Testnet Node

Alternatively, if you would like to run a private local testnet, you can get up and running with docker:

docker run -d -p 8090:8090 inertia/tintoy:latest
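Once the container is up, you can sanity-check the node with a JSON-RPC call against the mapped port (condenser_api.get_dynamic_global_properties is a standard Hive API method; adjust host and port to your setup):

curl -s --data '{"jsonrpc":"2.0","method":"condenser_api.get_dynamic_global_properties","params":[],"id":1}' http://localhost:8090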

For details on running a local testnet, see: Setting Up a Testnet

Accounts

On Hive, each account is identifiable by its unique username of at most 16 characters (bytes). Accounts are created by existing users or services, who utilize blockchain resources to assign public keys to the username.

You can find some of the signup/onboarding services at https://signup.hive.io and claim your unique username now.

Authentication

User authentication

In Web3, unlike Web2, authenticating a user has a different meaning. Since only the user holds and knows their private keys, there must be a secure way to sign transactions: there is no traditional concept of login, and applications never get direct access to the user’s private keys. The Web3 way of authentication, or login, is for the user to sign an arbitrary message to verify ownership, with wallet applications facilitating that step. On Hive, there are authentication services maintained and developed by the community. These services reduce the trust users must place in each new dapp or service, and help to minimize hacks, private key theft, and phishing attacks by malicious actors. It is recommended to utilize and integrate these services into your website or apps so users can quickly authenticate and start using your app without fear of losing their private keys.

HiveSigner

HiveSigner implements the OAuth2 standard on top of the Hive blockchain, so Hivesigner integration works much like a Web2 OAuth2 integration.

Application side

  1. Create a Hive account for your application/website: https://signup.hive.io.
  2. Log in to Hivesigner with that account and set the account as an Application at https://hivesigner.com/profile.
  3. Authorize hivesigner with the Application account by clicking this link: https://hivesigner.com/authorize/hivesigner.
  4. Finalize the app integration: https://docs.hivesigner.com/h/guides/get-started/hivesigner-oauth2.

User side

An overview of the steps a user experiences during login/authentication in your website or app:

  1. The website or application forwards the user to Hivesigner.com to authenticate.
  2. After verification/authentication, Hivesigner redirects the user back to the website or application with an access token.
  3. The access token is used by the website or application to sign and broadcast transactions on the blockchain.

For more detailed instructions, please follow the HiveSigner documentation.

HiveSigner SDK: https://www.npmjs.com/package/hivesigner

HiveSigner tutorial: JS/Node.js
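As a sketch of the application side, the hivesigner package can generate the login URL and then act with the returned access token. The account name, callback URL, and scope below are illustrative; check the HiveSigner documentation for the exact API:

const hivesigner = require('hivesigner');

const client = new hivesigner.Client({
  app: 'demo-app',                          // your application's Hive account (illustrative)
  callbackURL: 'https://example.com/auth',  // where HiveSigner redirects with the access token
  scope: ['login', 'vote'],                 // permissions requested from the user
});

// 1. Send the user to this URL to authenticate:
console.log(client.getLoginURL());

// 2. After the redirect, use the access token from the callback URL
//    to sign and broadcast on the user's behalf, e.g.:
// client.setAccessToken(accessToken);
// client.vote(voter, author, permlink, weight, (err, res) => { ... });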


HiveKeychain

Hive Keychain is an extension for accessing Hive-enabled distributed applications, or “dApps” in your Chromium or Firefox browser!

Application side

  1. Send a handshake to make sure the extension is installed in the browser.
  2. Decrypt a message encrypted with a Hive account’s private key (commonly used for “logging in”).
  3. Create and sign a transaction.
  4. Broadcast the transaction.

User side

  1. Install the Keychain browser extension and import your accounts.
  2. On a login/authentication popup from the website/application, verify the message and sign with the selected account.
  3. The signature is then used by the website/application; every subsequent transaction should also be signed by the user.

For more detailed instructions, please follow the HiveKeychain documentation.

HiveKeychain SDK: https://www.npmjs.com/package/keychain-sdk

Keychain tutorial: JS/Node.js
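For the handshake and “login” signature steps, here is a browser-side sketch using the extension’s injected API (window.hive_keychain, which the keychain-sdk package wraps); the account name is illustrative:

if (window.hive_keychain) {
  // 1. Handshake: confirm the extension is present.
  window.hive_keychain.requestHandshake(() => {
    // 2. Ask the user to sign an arbitrary message with their posting key.
    window.hive_keychain.requestSignBuffer(
      'demo-user',              // account name (illustrative)
      `login-${Date.now()}`,    // arbitrary message to sign
      'Posting',                // key type
      (response) => {
        if (response.success) {
          // response.result holds the signature to verify server-side.
          console.log('signed:', response.result);
        }
      }
    );
  });
} else {
  console.log('Hive Keychain extension not installed');
}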


HiveAuth

HiveAuth is a decentralized solution for any application (web, desktop, or mobile) to easily authenticate users without asking them to provide any password or private key.

Application side

  1. Open a WebSocket connection with the HAS server.
  2. Generate a unique auth_key for each user account every time they log in/authenticate.
  3. After the user authenticates, the auth_key is used for broadcasting transactions.

User side

  1. Install a wallet application that supports HiveAuth.
  2. On a login/authentication popup from the website/application, verify the message with the selected account.
  3. The unique auth key generated by the application for the user account is then used for signing; every transaction should be signed by the user.

For more detailed instructions, please follow the HiveAuth documentation.
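The application-side setup can be sketched as follows. The 'ws' package and the HAS endpoint URL are assumptions (verify the current endpoint in the HiveAuth documentation), and the protocol messages exchanged after connecting are described there:

const WebSocket = require('ws');
const { randomBytes } = require('crypto');

// Generate a fresh, unique auth_key for this login attempt.
const authKey = randomBytes(32).toString('hex');

// Open a WebSocket connection to a HAS server (endpoint is an assumption).
const ws = new WebSocket('wss://hive-auth.arcange.eu');

ws.on('open', () => {
  // From here, follow the HAS protocol described in the HiveAuth docs
  // to send an auth request for the user's account, secured with authKey.
});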

SDK Libraries

Software development kits

Accessing and interacting with Hive data is easy, with various options depending on your infrastructure and objectives.

Building a web3 app is a breeze with JavaScript; check the related tutorials. Python tutorials are also available, as well as many open source projects which could be beneficial for your Hive project.


WAX - https://gitlab.syncad.com/hive/wax

Wax is a multi-language, object-oriented library for interacting with the Hive blockchain network. There are currently three language implementations of the library: TypeScript, C++, and Python.
Each implementation of Wax incorporates the same code used by the core Hive protocol library to define Hive objects (operations, transactions, etc.). This ensures that Wax will always maintain compatibility with the core blockchain protocol.

@hiveio/wax


Workerbee - https://gitlab.syncad.com/hive/workerbee

A Hive automation library based on wax and beekeeper. The library helps you observe, fetch, and submit transactions to the blockchain with ease.


Hive-JS - https://github.com/hive/hive-js

Pure JavaScript Hive crypto library for Node.js and browsers. It can be used to construct, sign, and broadcast transactions in JavaScript.

@hiveio/hive-js


DHive - https://gitlab.syncad.com/hive/dhive

A TypeScript Hive crypto library for Node.js and browsers. It can be used to construct, sign, and broadcast transactions in JavaScript.

@hiveio/dhive


Hive-TX - https://github.com/mahdiyari/hive-tx-js

A lightweight JavaScript library for creating and signing transactions. It works with frameworks like NativeScript, serves as a solution in cases where other libraries do not work, and is also an alternative when you only need to create, sign, and broadcast transactions.

hive-tx


Radiator - https://github.com/inertia186/radiator

Radiator is a Ruby API client to interact with the Hive blockchain.


Beem - https://github.com/holgern/beem

A Python library to interact with the Hive blockchain. It includes the CLI tool beempy.


Lighthive - https://github.com/emre/lighthive

A light Python client to interact with the Hive blockchain.


hive-php - https://gitlab.com/mahdiyari/hive-php

A (real) PHP library for the Hive blockchain.


Get and Set

Fetching data

Fetching blockchain data with the help of SDK libraries couldn’t be simpler. Node.js and Python libraries help any developer quickly access blockchain data and use it for analysis or to build apps. By default, the SDKs use JSON-RPC to make requests to Hive nodes. The community has also created REST API alternatives, which can easily be integrated with any app on any framework.
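For example, fetching the current head block number needs nothing more than a JSON-RPC POST to a public node (shown here with the global fetch of Node.js 18+ in an ES module; any of the public nodes listed above should work):

const response = await fetch('https://api.hive.blog', {
  method: 'POST',
  body: JSON.stringify({
    jsonrpc: '2.0',
    method: 'condenser_api.get_dynamic_global_properties',
    params: [],
    id: 1,
  }),
});
const { result } = await response.json();
console.log(result.head_block_number);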

Broadcast data

Broadcasting, or modifying blockchain data (“Set”), can also be done directly with the SDK libraries above. Broadcasting or making any modification to an account requires the user’s private key, so using authentication services is highly recommended in these use cases.

By utilizing authentication services, you give users confidence that their keys are safe, so they can securely interact with your application, website, or service.
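As a sketch of a direct broadcast with @hiveio/dhive (the account names and permlink are illustrative, and the hard-coded key placeholder is exactly what authentication services let you avoid in production):

const { Client, PrivateKey } = require('@hiveio/dhive');

const client = new Client(['https://api.hive.blog']);
const postingKey = PrivateKey.fromString('5J...'); // placeholder; never hard-code real keys

await client.broadcast.vote(
  {
    voter: 'demo-user',      // illustrative account
    author: 'some-author',
    permlink: 'some-post',
    weight: 10000,           // 100% upvote
  },
  postingKey
);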