This is the end of an 8-part series. If you haven't read from the beginning, we suggest starting with the intro.

Epilogue: What is Ethical Hacking?

One of the claims made by Syracuse University’s Full Stack Security Lab was that they were a team of “ethical hackers.” So what does that mean, and does it actually apply to their activities?

First of all, “ethical hacking” is a real thing. There’s even a Certified Ethical Hacker certification (though I have no idea whether Dr. Tang or his students have obtained it). I mentioned in Friday night’s post that during college I worked as a penetration tester for a local software company; that’s a kind of ethical hacking.

But in order for hacking to truly be ethical, there are several obligations which the hacker must meet:

  1. Get consent. If you are taking actions that you expect to harm someone’s systems without their consent, you are not an authorized user under the Computer Fraud and Abuse Act (CFAA) and may be committing a federal felony.

  2. Agree on the scope. There may be some systems that you are permitted to attack, and other systems that are defined as out of scope. If you attack a system that is not in scope, you are not an authorized user under the CFAA.

  3. Disclose vulnerabilities promptly. Once you have discovered a problem, you must report it to the system owners in a timely fashion.

So did our attackers meet these requirements?

  1. Well, they certainly did not have our consent. However, we learned from Dr. Tang’s bug report that they believed they were attacking one of our customers, and did not seem to know we were the service provider for that customer. Did that customer give them consent?

    Not explicitly, but that customer did have a bug bounty program that grants consent under a specific, predefined scope. However, the terms of the bug bounty program required hackers to

    Make a good faith effort to avoid privacy violations, destruction of data, and interruption or degradation of our service.

    Dr. Tang’s researchers measured the degradation they caused to our service and included it in a graph in their bug report. When we blocked their attacks, they deployed new smart contracts within minutes to resume them. They did not make a good faith effort to avoid interruption or degradation of our service, so even if our customer’s bug bounty policy could have conferred consent to attack our systems, they did not comply with that policy.

  2. Our customer’s bug bounty program explicitly declared Denial of Service vulnerabilities to be out of scope, so they fail on point two as well.

  3. Did they disclose promptly? There’s room for reasonable people to disagree on what “prompt” disclosure looks like. Do they need to stop what they’re doing and send an email immediately? Is a few hours prompt? A few days? A week?

    Rather than just looking at time, let’s look at actions. When the researchers wanted to know if our service was vulnerable to their attack, they could have run the request once, with minimal impact to us, and determined whether or not we were vulnerable. If they had stopped at that point and taken a week to send us a report, I would have been satisfied that they had disclosed the vulnerability promptly. But that’s not what they did.

    How to investigate with minimal impact

    As the researchers were trying to evaluate our susceptibility to their exhaust_memory attack, they could have run the query exactly once. If they got an out of gas error in under a second, they would know we were protected. If they instead got a context deadline exceeded error or some form of gateway error, they would know that we were not protected and should immediately stop testing. A sketch of what such a one-shot probe might look like follows this list.

    The researchers ran their queries not once, but thousands of times while actively measuring the impact their attack was having on our systems. When we blocked their attacks, they actively circumvented our mitigation attempts. This went on for almost a full week until we were able to identify them. Only after they had been caught did they disclose the vulnerabilities they had found. That does not meet any reasonable definition of prompt disclosure.
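
To make that one-shot probe concrete, here is a minimal sketch of what it might look like in Python. Everything in it is illustrative: the endpoint URL, contract address, and calldata are hypothetical placeholders rather than anything from the actual incident, and the response classification simply follows the logic described above.

```python
"""One-shot probe sketch: send a single, potentially expensive eth_call
and classify the response, rather than hammering the endpoint repeatedly."""
import requests

# All of these values are hypothetical placeholders -- fill in your own
# test endpoint, target contract, and calldata.
ENDPOINT = "https://your-test-cluster.example/rpc"
PAYLOAD = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "eth_call",
    "params": [
        {
            "to": "0x0000000000000000000000000000000000000000",  # placeholder contract
            "data": "0x",  # placeholder calldata for the expensive call
        },
        "latest",
    ],
}

try:
    # Bounded wait: a protected gateway should reject the call with a
    # cheap "out of gas" error well inside this window.
    resp = requests.post(ENDPOINT, json=PAYLOAD, timeout=30)
except requests.Timeout:
    # No answer at all: the backend is likely grinding on the request.
    print("Timed out -- possibly vulnerable; stop testing and report it.")
else:
    if resp.status_code >= 500:
        # 502/503/504-style gateway errors suggest the backend choked.
        print(f"Gateway error {resp.status_code} -- possibly vulnerable; stop and report.")
    else:
        message = (resp.json().get("error") or {}).get("message", "").lower()
        if "out of gas" in message:
            # A fast out-of-gas error means resource limits cut the call off.
            print("Fast 'out of gas' error -- endpoint appears protected.")
        elif "context deadline exceeded" in message:
            print("Deadline exceeded -- possibly vulnerable; stop and report.")
        else:
            print(f"Unexpected response; inspect manually: {resp.text}")
```

The point is that a single request with a bounded timeout tells you everything you need to know. Anything beyond that one request belongs in your own cluster or behind an explicit agreement with the operator.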

So Dr. Tang’s claim that his research group was engaged in “ethical hacking” simply doesn’t hold up.

How to Conduct Security Research with Rivet

But if you’re interested in conducting security research with us, what should you do?

For starters, as of this post, OpenRelay is officially adopting the Disclose.io policies. These policies lay out ground rules for security researchers looking for vulnerabilities in our systems, and extend a safe harbor provision to researchers acting in good faith. From here forward it won’t matter whether a researcher thinks they’re investigating one of our customers or knows they’re investigating us; either way, they’ll be covered by these terms. Be sure you read, understand, and play by the rules laid out in our policy, and you should have nothing to worry about.

If you want an environment where you can go to town trying to take down our systems, you’re in luck! Rivet is based on the open source Ether Cattle Initiative, so you can set up your own cluster and attack it without any limitations whatsoever (and, importantly, without impacting us or our customers).

But we understand that administering an Ether Cattle cluster can be quite cumbersome. If you’re just interested in poking around to see what you can do to our systems, drop us a line at security@rivet.cloud. Depending on the nature of your research, we may give you the go-ahead to proceed on our production systems, or we may give you access to a limited test environment that doesn’t have customers with production workloads relying on it.

We don’t want to discourage people from looking for and reporting vulnerabilities in our systems, but we need to make sure you do so in a way that does not interfere with our ability to provide our customers with a high quality of service.

In our opinion, the team at Syracuse fell far short of meeting the bar of ethical hacking. At the time we didn’t have good policies for how they could have gotten involved, so now we’ve adopted some policies to help future security researchers start off on the right foot. We hope that the next time somebody decides to see whether Rivet is vulnerable to some attack they’ve discovered, they’ll start by reaching out to security@rivet.cloud.


All told, things turned out pretty well for the Rivet team. We found some gaps in our system and were able to address them quickly. While we had some limited service degradation, our customers only became aware of the issues because we notified them. We were able to track down the attackers and get them to stop. And now we’ve adopted policy changes to encourage and provide guidelines for future security researchers. We hope that next time security researchers turn their eyes to our service they’ll go about it the right way.