DevOps in Web3 / Blockchain world

Xyz Zyx
7 min read · May 4, 2022


DevOps in web3 is a set of best practices that let legitimate users interact with your dapp while blocking malicious users and bots from getting in their way.

Most problems come from bots, hackers, impolite aggregators, and scanners. The more popular your project is, the more attention it attracts. Because you’re working with crypto, a hacker knows that: a) you probably hold crypto, so it’s easy for you to pay them in exchange for them stopping whatever they’re doing; b) if they manage to “hack” something, they can get away with crypto that’s hard to trace. The ease of getting paid and staying hidden creates incentives to target crypto projects rather than e-commerce or other businesses.

This article will help you understand DevOps best practices when working in web3.

When developing web3 applications you have to connect to the blockchain. A blockchain is formed by nodes, and hosting and managing one (or many) is not an easy task. A node is where the blockchain data is stored; developers use nodes mostly to query the chain and get fast access to the latest data.

At the time of writing, a Bitcoin node is about 390 GB, an Ethereum archive node is around 10 TB, and some other chains are bigger still.

While a Bitcoin node can run on almost anything (even a Raspberry Pi: https://decrypt.co/55510/how-to-run-a-bitcoin-node-on-a-raspberry-pi-2021), other blockchains have more demanding hardware requirements.

For an ETH node (Hyperledger Besu) you’ll need a minimum of 8 GB of RAM and 3 TB of disk (for a full sync) or 700 GB (with pruning, see https://besu.hyperledger.org/en/1.3.5/Concepts/Pruning/). You’ll also need an SSD, at least 2 cores, and a good internet connection (100+ Mbit/s).

If you plan to use AWS, the minimum is a t3.large and the recommended instance is a t3.xlarge (tip: after the sync is complete, downsize to a t3.large for cost savings).

Unless you have a very big project, it’s usually not necessary to run your own node; instead, use an “infrastructure-as-a-service” provider that gives you a connection endpoint in exchange for a small fee (or for free).
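Whether you run your own node or use a provider, you’ll talk to it over the same Ethereum JSON-RPC protocol. A minimal stdlib-only sketch of what that looks like (the endpoint URL and project ID below are placeholders, not real credentials):

```python
import json
import urllib.request

def rpc_payload(method: str, params: list, request_id: int = 1) -> dict:
    """Build a standard Ethereum JSON-RPC 2.0 request body."""
    return {"jsonrpc": "2.0", "method": method, "params": params, "id": request_id}

def call_node(endpoint: str, method: str, params: list) -> dict:
    """POST a JSON-RPC request to a node or provider endpoint."""
    body = json.dumps(rpc_payload(method, params)).encode()
    req = urllib.request.Request(
        endpoint, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example (placeholder endpoint):
# result = call_node("https://mainnet.infura.io/v3/<YOUR-PROJECT-ID>",
#                    "eth_blockNumber", [])
```

The same shape works against your own node or any of the providers below; only the endpoint URL changes.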

There are a few IaaS providers in the web3 space, each with their own benefits and drawbacks.

Some things to consider are:

  1. Supporting the blockchain you need
  2. Having a free tier (if you’re learning or your project is small)
  3. Price & transparency in pricing policy, along with protection against overcharging
  4. Response time (for time-sensitive projects like DeFi and exchanges)

Top platforms to do so are:

https://moralis.io/

Price: free tier (unlimited but rate-limited, and your server sleeps if it’s not used)

Note: more of a platform that provides its own API than a pure blockchain connection
Nice to know: has ready-to-use REST APIs to query lots of things like NFT ownership, balances, etc.

https://infura.io — Ethereum, ETH2.0, IPFS, Filecoin (MetaMask uses Infura by default)

Price: free for 100,000 requests/day.
Nice to know: you can buy request packs, e.g. $200/month for 1 million requests/day. They also offer archive data for $250/month (very useful for some applications).

https://getblock.io

Price: free tier, 40K requests/day
Nice to know: a large collection of blockchains all under the same API, with excellent documentation

https://www.quicknode.com/ — Bitcoin, Ethereum, BSC, Polygon, Fantom, Terra, Arbitrum, etc.

Price: no free tier, but a 7-day trial; $9/month for 300,000 responses per month.

Nice to know: low-latency nodes; they claim to be faster than competitors. A good choice when working with the mempool, unless you host your own node.

https://www.alchemy.com/ — Ethereum, Arbitrum, Polygon, Optimism (and their testnets)

Price: free for 300,000,000 compute units/month (since not all API calls are equal, the heavier ones are charged more units)
Nice to know: supports a large number of blockchains and their testnets

example dashboard from Infura.io

Small Projects

A small project is one that doesn’t see a lot of traffic: typically legitimate users or polite bots generating up to 50 ‘visits’ per second, with bursts up to 100. Small projects benefit from a web3 API platform, because a free tier (that can later be upgraded) is more than enough for most cases.

Unless you’re doing low-latency interactions or need mempool access, a web3 API platform is the way to go.

You should always have monitoring and backup providers in case you see spikes in traffic.
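The backup-provider idea can be sketched in a few lines: try the primary endpoint, fall back to the next on failure. The endpoints and the `fake_call` function here are hypothetical stand-ins for whatever actually issues your JSON-RPC requests:

```python
def call_with_failover(endpoints, do_call):
    """Try each provider endpoint in order; fall back on failure.
    `do_call` is whatever function issues the actual JSON-RPC request."""
    last_err = None
    for url in endpoints:
        try:
            return do_call(url)
        except Exception as e:   # in production, catch specific error types
            last_err = e
    raise RuntimeError("all providers failed") from last_err

# Hypothetical demo: the primary "fails", the backup answers.
def fake_call(url):
    if "primary" in url:
        raise ConnectionError("primary provider down")
    return {"result": "0x1"}

resp = call_with_failover(
    ["https://primary.example/v3/key", "https://backup.example/key"],
    fake_call,
)
```

In practice you’d also emit a metric on every fallback so your monitoring catches a degraded primary before users do.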

Medium Projects

I would call a project medium at about 100 requests/second with some burst periods, once it has attracted the attention of hackers and impolite bots. For NFT projects there are bots that constantly retrieve token metadata to calculate rarity and snipe the rare ones, or to send alerts when someone lists a rare one. For token projects with a backend API serving public information: if that information can be used for someone else’s gain, especially if they get it before everyone else, they will hammer your API. And if by hammering your API they can take it down to their own advantage, they’ll do just that.

While the web3 platform will handle the traffic without issues, you can expect a pretty big bill at the end of the month if you don’t have a caching layer that stops spam requests.

Those platforms charge per request, so if you let a DDoS through without stopping it, you’ll be billed for every one of those requests.
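The simplest defense on your side is per-client rate limiting before a request ever reaches the provider. A minimal fixed-window sketch (limits and window size are illustrative; real deployments usually do this at the load balancer or in Redis):

```python
import time
from collections import defaultdict
from typing import Optional

class RateLimiter:
    """Fixed-window counter: allow at most `limit` requests per
    `window` seconds from each client (keyed e.g. by IP address)."""
    def __init__(self, limit: int = 100, window: float = 1.0):
        self.limit = limit
        self.window = window
        self.counts = defaultdict(int)         # client -> hits this window
        self.window_start = defaultdict(float) # client -> window start time

    def allow(self, client: str, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        if now - self.window_start[client] >= self.window:
            self.window_start[client] = now    # start a fresh window
            self.counts[client] = 0
        self.counts[client] += 1
        return self.counts[client] <= self.limit

limiter = RateLimiter(limit=3, window=1.0)
results = [limiter.allow("1.2.3.4", now=0.0) for _ in range(5)]
# first 3 requests allowed, the next 2 rejected within the same window
```

Every rejected request here is a billable provider call you didn’t make.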

In addition, check whether the web3 provider offers mechanisms for blocking undesired interactions. Infura.io has project secrets, JWT verification, allowlists for your contracts, custom user agents, origin restrictions (your domain, for example), and the option to allow only certain APIs (e.g. getting the ETH balance of an address, getting transaction receipts, etc.).

Example from infura.io

Example from alchemyapi.io

For example:

An NFT project has rarities served by your API, and your API checks whether the NFT exists before serving the metadata. Every time a bot asks your API for an NFT’s metadata, you make a request to the web3 provider.

We can improve this with caching. Ethereum has a ~15-second block time, so you can simply cache everything for 15 seconds: the existence of a token, the balance of an address, any public variable like totalSupply, and so on. It doesn’t make sense to answer the same question 100 times per second when the value can only change once per block.
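The block-time cache above can be sketched in a few lines. The `fetch_total_supply` function here is a hypothetical stand-in for a billable provider call; the counter shows how many upstream requests actually happen:

```python
import time

class BlockTimeCache:
    """Cache chain reads for one block interval (~15 s on Ethereum at
    the time of writing), so repeated identical queries don't each hit
    the web3 provider."""
    def __init__(self, ttl: float = 15.0):
        self.ttl = ttl
        self.store = {}  # key -> (expires_at, value)

    def get_or_fetch(self, key, fetch, now=None):
        now = time.monotonic() if now is None else now
        hit = self.store.get(key)
        if hit is not None and hit[0] > now:
            return hit[1]               # still fresh: no provider call
        value = fetch()                 # e.g. a JSON-RPC call to your provider
        self.store[key] = (now + self.ttl, value)
        return value

cache = BlockTimeCache(ttl=15.0)
calls = []
def fetch_total_supply():
    calls.append(1)                     # stand-in for a billable request
    return 10_000

cache.get_or_fetch("totalSupply", fetch_total_supply, now=0.0)   # upstream call
cache.get_or_fetch("totalSupply", fetch_total_supply, now=5.0)   # served from cache
cache.get_or_fetch("totalSupply", fetch_total_supply, now=20.0)  # expired: refetch
# three reads, only two upstream calls
```

At 100 spam requests per second, this turns ~1,500 billable calls per block into one.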

In addition you’ll need a firewall/CDN like Cloudflare, AWS Shield, Akamai, Imperva, or Cloud Armor to proxy your traffic. This is mostly needed to block DDoS attacks that try to take down your services for extortion.

Large Projects

In addition to many legitimate users, large projects are full of hackers trying to take them down for crypto rewards or for fun. Protecting against them is no easy task.

As best practices you should:

  • Ask users to bring their own blockchain connection (e.g. MetaMask) instead of you providing one.

This way, instead of you paying for their requests, they pay (in practice they use it for free in most cases).

  • Cache everything up to block time (unless mempool info is involved)
  • Have your own nodes so you don’t have to worry about query limits (your own nodes give you unlimited queries)
  • Have backup solutions that automatically take over if your nodes fail
  • Consider listening to contract events or new blocks and storing the data you care about in a database

An address’s balance changes only when a block contains a transaction involving that address. If you listen to all blocks and parse them for what interests you, you can store just that part in a database/Redis and serve it from there. The same is true for NFT project data like metadata URIs, total supply, and any public variable that exists in your smart contracts.
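A minimal sketch of that indexing pattern: scan each new block’s transactions for addresses you care about and mirror the result into your own store, so reads hit your database instead of the node. The block dict below is a simplified stand-in for an `eth_getBlockByNumber` response; the addresses are placeholders:

```python
# Our "database": address -> last block number that touched it.
balances_touched = {}

WATCHED = {"0xabc", "0xdef"}   # hypothetical addresses we index

def index_block(block: dict) -> None:
    """Record which watched addresses appear in a block's transactions."""
    for tx in block["transactions"]:
        for addr in (tx.get("from"), tx.get("to")):
            if addr in WATCHED:
                # In a real indexer you'd refetch the balance once here
                # and write it to Postgres/Redis for all future reads.
                balances_touched[addr] = block["number"]

index_block({
    "number": 14_700_000,
    "transactions": [
        {"from": "0xabc", "to": "0x999"},   # touches a watched address
        {"from": "0x111", "to": "0x222"},   # ignored
    ],
})
```

After indexing, a balance read is a database lookup, not an RPC call, no matter how many users ask.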


You’ll also need specialized people who know Terraform or AWS CDK and have some SRE experience:

  • Kafka/Redis/Kinesis…
  • Monitoring tools like Grafana, plus log analysis, maybe Loki, etc.
  • Building self-healing systems: auto-restart, auto-downgrade, etc.
  • Knowledge of how the mempool works and how transactions propagate (if you’re into DEXes or exchanges)

For exchanges or DEXes you’ll want your own nodes, because the web3 providers are simply too slow, and you want to get information to the user as fast as possible.

You should also consider that not all calls to your node are equal. There are lightweight operations (e.g. checking whether a transaction succeeded) and heavy operations (e.g. getting the logs of a transaction).
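This is also how compute-unit pricing (like Alchemy’s, mentioned above) works: heavier methods cost more units. A toy cost model with illustrative weights (these are not any provider’s real prices):

```python
# Illustrative per-call weights: a cheap status check vs. a heavy log scan.
CALL_COST = {
    "eth_getTransactionReceipt": 1,
    "eth_getLogs": 75,
}

def monthly_cost(call_counts: dict) -> int:
    """Estimate total compute units for a month of traffic."""
    return sum(CALL_COST[method] * n for method, n in call_counts.items())

units = monthly_cost({
    "eth_getTransactionReceipt": 1_000_000,
    "eth_getLogs": 10_000,
})
# 1,000,000*1 + 10,000*75 = 1,750,000 units: under these weights, the
# heavy calls (~1% of traffic) account for ~43% of the bill
```

Benchmarking your own call mix against your provider’s real weights tells you which calls to cache or move onto your own node first.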

Lastly, cost optimization starts with benchmarking; a good starting point is this excellent article by Johan Hermansson (Senior Systems Engineer, Infura): https://blog.infura.io/optimizing-performance-and-cost-infura-benchmark-analysis-f083ccf8f6ac/

You will need an audit of your web3 code (smart contracts / backend / frontend / DevOps). I recommend https://to.wtf as the best auditors on the market :)

PS: I’m available for smart contract audits & custom smart contract development. Contact me on Telegram @andyxyz1, and please mention you got my ID from this blog post.

I hope you learned something new from this article. Please hit the like button!
