For the Embiggen Airdrop we loaded OpenRelay with 27 million orders ready for people to fill. One of the big challenges was optimizing the order ingest process so that it could ingest all those orders efficiently.

When ingesting an order for the first time, we need to make sure it is fillable. Before optimizing for the Embiggen Airdrop, that took six RPC calls per order. The first two RPC calls check each order to see if it:

- has already been filled, partially or completely
- has already been cancelled, partially or completely

The next four RPC calls check each order to see if the maker:

- has a sufficient balance of the maker token
- has set a sufficient allowance of the maker token
- has a sufficient ZRX balance to cover the order's fee
- has set a sufficient ZRX allowance for the fee

Unoptimized, ingesting 27 million orders would require 162 million RPC calls. That’s doable as a one-time thing, but it would take a long time. So we found two optimizations.

Fill Check Optimization

While the vast majority of orders submitted to OpenRelay won’t have been filled or cancelled, it’s possible someone might create an order, fill it themselves, and then submit it just to mess with us, so we have to check.

To avoid making unnecessary RPC calls, we set up a bloom filter loaded with every 0x order that has ever been filled or cancelled. As the fill monitor finds more filled and cancelled orders, they get added to the filter. When the fill updater sees a new order during the ingest process, it checks the filter. If it finds the order, it makes RPC calls to get the amount filled or cancelled; otherwise it just marks the order as 0% filled and 0% cancelled.
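OpenRelay itself is written in Go, so the following is just an illustrative Python sketch of the filter-then-verify pattern described above; the names `BloomFilter`, `filled_or_cancelled`, and `fetch_fill_state` are hypothetical, not OpenRelay's actual API:

```python
import hashlib

class BloomFilter:
    """Minimal bloom filter: a bit array probed by k hash functions."""
    def __init__(self, size_bits, num_hashes):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8 + 1)

    def _positions(self, item):
        # Derive k independent bit positions by salting a SHA-256 hash.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(b"%d:%s" % (i, item)).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        # May return a false positive, but never a false negative.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

def filled_or_cancelled(order_hash, bloom, fetch_fill_state):
    """Only hit the RPC server when the filter says the order
    might have been filled or cancelled."""
    if order_hash in bloom:
        # Possible false positive: verify with real RPC calls.
        return fetch_fill_state(order_hash)
    # Definitely never filled or cancelled: skip the RPC calls.
    return (0, 0)
```

Because a bloom filter never produces false negatives, a miss lets us safely mark the order 0% filled and 0% cancelled without touching the RPC server.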

Right now there are about 150k orders in the bloom filter. With a bloom filter of 50 MB, the odds of a false positive at this level are infinitesimal. As the order history grows we’ll get a higher false positive rate, leading to more unnecessary RPC calls, but the system will still function correctly. When we get to 100 million filled orders our false positive rate will be about 1 in 7, but we’ll still reduce unnecessary RPC calls by about 85%. Beyond that, we’ll need to consider bumping up the size of our bloom filter, and maybe pruning out expired orders.
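Those numbers follow from the standard bloom filter false-positive estimate, p = (1 - e^(-kn/m))^k, for m bits, n items, and k hash functions. A quick check in Python, assuming three hash functions (the number OpenRelay's filter actually uses isn't stated here):

```python
from math import exp

def bloom_false_positive_rate(m_bits, n_items, k_hashes):
    """Standard estimate: p = (1 - e^(-k*n/m))^k."""
    return (1 - exp(-k_hashes * n_items / m_bits)) ** k_hashes

m = 50 * 8 * 1024 ** 2  # a 50 MB filter is about 419 million bits

# ~150k filled/cancelled orders today: effectively zero (~1e-9)
today = bloom_false_positive_rate(m, 150_000, 3)

# 100 million filled orders: ~0.13, roughly 1 in 7
future = bloom_false_positive_rate(m, 100_000_000, 3)
```

A 1-in-7 false positive rate still means about 85% of never-filled orders skip the RPC round trip entirely, matching the figure above.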

Balance Check Optimizations

We also have to check a user’s balances and allowances to ensure an order can be filled. Those checks can’t be skipped, but for the airdrop use case we can cheat a bit. Since the airdrop orders come from a small handful of users, and we know those users’ balances will never change within a single block, we created a balance / allowance cache that gets invalidated every block. Our balance checker now follows the block monitor, and every time a new block is registered it simply dumps its cache. This means that instead of making two RPC calls for every order we load into OpenRelay, we only make two RPC calls every block, and we can process several thousand orders based on those two calls.
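A minimal Python sketch of such a block-scoped cache (the names `BlockScopedBalanceCache` and `rpc_lookup` are hypothetical, not OpenRelay's actual Go implementation):

```python
class BlockScopedBalanceCache:
    """Caches (maker, token) -> (balance, allowance) lookups,
    dumping the whole cache whenever a new block is registered."""

    def __init__(self, rpc_lookup):
        # rpc_lookup(maker, token) makes the two real RPC calls.
        self.rpc_lookup = rpc_lookup
        self.block_number = None
        self.cache = {}

    def on_new_block(self, block_number):
        # Balances may change in any new block, so invalidate everything.
        self.block_number = block_number
        self.cache.clear()

    def get(self, maker, token):
        key = (maker, token)
        if key not in self.cache:
            # First lookup for this maker/token this block: hit the RPC server.
            self.cache[key] = self.rpc_lookup(maker, token)
        # Every subsequent order from this maker this block is free.
        return self.cache[key]
```

When thousands of orders in a block share a handful of makers, nearly every `get` is a cache hit, which is exactly the airdrop scenario described above.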

This isn’t a completely robust solution; it wouldn’t take much pressure off the RPC servers if millions of different users were sending us orders. But it does mean someone else could execute an airdrop on OpenRelay without any special treatment.

Massive Pipeline

With those optimizations in place, we used Massive to start pumping in orders.

zcat orders.gz | python resumption.txt | parallel --no-notice -j 6 --pipe ./massive 0x upload

The Python script in the middle of the pipeline passes orders from stdin to stdout, counting them and periodically writing the count out to resumption.txt. If the pipeline dies and has to be restarted, the script skips the number of orders indicated in resumption.txt and starts from the next order. When failures happened (and they did), resumption.txt was likely a bit behind what had actually been uploaded. That wasn’t a big deal, as OpenRelay gracefully handles previously uploaded orders.

parallel refers to GNU Parallel. This runs 6 instances of the massive 0x upload process, so we upload orders roughly 6x faster than we would with a single instance. Finally, the massive 0x upload subcommand doesn’t require any arguments; Massive defaults to uploading to OpenRelay, but if you want to upload to a different 0x relayer, simply pass the --target flag to specify its API endpoint.

Final Thoughts

Once we had all of this tuning in place, we were ingesting a quarter million orders every hour.

We probably could have handled more just by bumping up the number of processes in GNU Parallel, but we were on track to meet our timelines for the airdrop launch, and we wanted to avoid congestion in our queues that would cause delays for other users trying to upload orders.

As the name implies, OpenRelay is open source. If you want to see how this works in detail, check us out on GitHub. If you have any questions about how any of this works, visit with us on our Gitter channel.

And lastly, if you haven’t claimed your Embiggen from the Embiggen Airdrop, head over and claim some. You’ll be helping us test OpenRelay, and thanks to the power of compounding interest, the sooner you claim your Embiggen the more you’ll have over time.