Tornado Cash is a mixer on the Ethereum network. By design, mixers obfuscate the movement of funds, making it more difficult to trace how money flows among different blockchain addresses. In one view, mixers improve privacy for law-abiding ordinary citizens whose transactions are otherwise visible for everyone else in the world to track. A less charitable view contends that mixers help criminals launder the proceeds of their crimes. Not surprisingly, Tornado found a very happy customer in North Korea, a rogue nation-state with a penchant for stealing digital assets in order to evade sanctions. Regulators were not amused. Tornado Cash achieved the dubious distinction of becoming the first autonomous smart contract to be sanctioned by the US Treasury. Its developers were arrested and put on trial for their role in operating the mixer. (One has already been convicted of money laundering by a Dutch court.)
Regardless of how Tornado ends up being used in the real world, lobbying groups have been quick to come to the defense of its developers. Some have gone so far as to cast the prosecution as an infringement on constitutionally protected speech, pointing to US precedents where source code was deemed within the scope of First Amendment protections. Underlying such hand-wringing is a slippery-slope argument. If this handful of developers can be held liable for North Koreans using their software to commit crimes, then what about the thousands of volunteers who publicly release code under open-source licenses? It is almost certain that somebody somewhere in a sanctioned regime is using Linux to further the national interests of those rogue states. Does that mean every volunteer who contributed to Linux is at risk of getting rounded up next?
This is a specious argument. It brushes aside decades of lessons learned from previous controversies around DRM and vulnerability disclosure in information security. To better understand where Tornado crosses the line, we need to look at the distinction between code as speech and code in action.
“It’s alright Ma— I’m only coding”
There is a crucial difference between Tornado Cash and the Linux operating system, or for that matter open-source applications such as the Firefox web browser: Tornado Cash is a hosted service. To illustrate why that makes a difference, let’s move away from blockchains and money transmission, and into a simpler setting involving productivity applications. Imagine a not-entirely-hypothetical world where providing word-processing software to Russia was declared illegal. Note that this restriction is phrased very generically; it makes no assumptions about the distribution or business model.
For example, the software could be a traditional, locally installed application. LibreOffice is an open-source competitor to the better-known MSFT Office. If it turns out that somebody somewhere in Russia downloaded a copy of that code from one of the mirrors, are LibreOffice developers liable? The answer should be a resounding “no” for several reasons. First, the volunteers behind LibreOffice never entered into an agreement with any Russian national or entity to supply them with software intended for a particular purpose. Second, they had no awareness of, much less control over, who could download their work product once it was released into the wild. Of course these points could also be marshaled in defense of Tornado Cash. Presumably its developers did not run a marketing campaign courting rogue regimes. Nor, for that matter, did North Korean APT groups check in with the developers before using the mixer— at least, based on publicly known information about the case.
But there is one objection that only holds true for the hypothetical case of stand-alone, locally installed software: source code downloaded from GitHub is inert content. It does not accomplish anything until it is built and executed on some machine under the control of the sanctioned entity. The same defense would not hold for a software-as-a-service (SaaS) offering such as Google Docs. If the Russian government switches to Google Docs because MSFT is no longer allowed to sell them copies of Word, Google cannot disclaim knowledge of deliberately providing a service to a sanctioned entity. (That would hold true even if use were limited to the “free” version, with no money changing hands and no enterprise contract signed.) Google is not merely providing inert lines of code to customers. That code has been animated into a fully functioning service, running on Google-owned hardware inside Google-owned data centers. There is every expectation that Google can and should take steps to limit access to this service from sanctioned countries.
While the previous cases were cut and dried, gray areas emerge quickly. Suppose someone unaffiliated with the LibreOffice development team takes a copy of that software and runs it on AWS as a service. With a little work, it would be possible for anyone in the world with a web browser to remotely connect and use this hosted offering for authoring documents. If it turns out such a hosted service is frequented by sanctioned entities, is that a problem? Provided one accepts that Google bears responsibility in the previous example, the answer here should be identical. But it is less straightforward who the responsible party ought to be. There is full separation of roles between development, hosting and operations; for Google Docs, they are all one and the same. Here code written by one group (the open-source developers of LibreOffice) runs on physical infrastructure provided by a different entity (Amazon). But ultimately it is the operator who crossed the Rubicon. It was their deliberate decision to execute the code in a manner that made its functionality publicly accessible, including to participants who were not supposed to have access. Any responsibility for misuse then lies squarely with the operator. The original developers are not involved. Neither is AWS: Amazon is merely the underlying platform provider, a neutral infrastructure that can be used for hosting any type of service, legal or not.
Ethereum as the infrastructure provider
Returning to Tornado Cash, it is clear that running a mixer on Ethereum is closer in spirit to hosting a service at AWS than it is to publishing open-source software. Billed as the “world computer,” Ethereum is a neutral platform for hosting applications— specifically, extremely niche types of distributed applications requiring full transparency and decentralization. As with AWS, individuals can pay this platform in ether to host services— even if those services are written in an unusual programming language and have very limited facilities compared to what can be accomplished with Amazon. Just like AWS, those services can be used by other participants with access to the platform: anyone with access to the blockchain can leverage them. (There is in some sense a higher bar. Paying transaction fees in ether is necessary to interact with a service on the Ethereum blockchain, while using a website hosted at AWS requires nothing more than a working internet connection.) Those services can be free or have a commercial angle— as in the case of the Tornado mixer, which had an associated TORN token that its operators planned to profit from.
The implication is clear: the Tornado team is responsible for their role as operators of a mixing service, not for their part as developers writing the code. The latter would have been protected speech had they stopped short of deploying a publicly accessible contract, leaving their work in the realm of research. Instead they opted for “breathing life into the code” by launching a contract, complete with a token model they fully controlled.
One key difference is the immutable nature of blockchains: it may be impossible to stop or modify a service once it has been deployed. It is as if AWS allowed launching services without a kill switch. Once launched, the service becomes an autonomous entity that neither the original deployer nor Amazon itself can shut down. But that does not absolve the operator of responsibility for deploying the service in the first place. There is no engineering rule that prevents a smart contract from having additional safeguards, such as the ability to upgrade its code to address defects or even to temporarily pause it when problems are discovered. Such administrative controls— or backdoors, depending on perspective— are now common practice for critical Ethereum contracts, including stablecoins and decentralized exchanges. For that matter, contracts can incorporate additional rules to blacklist specific addresses or seize funds in response to law-enforcement requests. Stablecoin operators do this all the time; even Tether with its checkered history has demonstrated a surprising appetite for promptly seizing funds in response to court orders. The Tornado team may protest that they have no way to shut down or tweak the service in response to “shocking” revelations that it is being leveraged by North Korea. From an ethical perspective, the only response to that protest is: they should have known better than to launch a service based on code without adequate safeguards in the first place.
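To make the safeguards concrete, here is a minimal sketch (written in Python purely for illustration; real contracts implement this pattern in Solidity) of the administrative controls described above: a pausable service with an address blacklist and the option to permanently renounce admin control. All class, method and account names here are hypothetical, not taken from the Tornado contracts.

```python
class PausableMixer:
    """Toy model of a contract with administrative safeguards."""

    def __init__(self, admin):
        self.admin = admin           # account allowed to invoke safeguards
        self.paused = False
        self.blacklist = set()       # addresses barred from the service

    def _only_admin(self, caller):
        # Once admin is renounced, no safeguard can ever be invoked again.
        if self.admin is None or caller != self.admin:
            raise PermissionError("admin controls disabled or wrong caller")

    def pause(self, caller):
        self._only_admin(caller)
        self.paused = True           # temporarily halt deposits

    def ban(self, caller, address):
        self._only_admin(caller)
        self.blacklist.add(address)  # e.g. in response to a court order

    def renounce_admin(self, caller):
        self._only_admin(caller)
        self.admin = None            # contract becomes autonomous: no kill switch

    def deposit(self, sender, amount):
        if self.paused:
            raise RuntimeError("service is paused")
        if sender in self.blacklist:
            raise RuntimeError("address is blacklisted")
        return amount                # placeholder for the actual mixing logic
```

The design choice at stake is the last method: calling `renounce_admin` is the moral equivalent of what the Tornado operators did when they removed their own keys, deliberately discarding the ability to pause or blacklist later.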
Parallels with vulnerability disclosure
Arguments over when developers cross a line into legal liability are not new. Information security has been on the frontlines of that debate for close to three decades, owing to the question of proper vulnerability disclosure. Unsurprisingly, software vendors have been wildly averse to public dissemination of any knowledge regarding defects in their precious products. Nothing offended those sensibilities more than the school of “full disclosure,” especially when accompanied by working exploits. But try as they might to criminalize such activity with colorful yet misguided metaphors (“handing out free guns to criminals!”), the consensus remains that a security researcher purely releasing research is not liable for the downstream actions of other individuals leveraging their work— even when the research includes fully weaponized exploit code ready-made for breaking into a system. (One exception has been the content industry, which managed to fare better thanks to a draconian anti-circumvention measure in the DMCA. While that legislation certainly had a chilling effect on pure security research into copyright-protection measures, in practice most of the litigation has focused on going after those committing infringement rather than researchers who developed code enabling that infringement.)
Debates still continue around what constitutes “responsible” disclosure and where society can strike the optimal balance between incentivizing vendors to fix security vulnerabilities promptly and not making it easier for threat actors to exploit those same vulnerabilities. Absent any pressure, negligent or incompetent vendors will delay patches, arguing that risk is low because there is no evidence of public exploitation. (Of course absence of evidence is not evidence of absence, and in any case vendors have little control over when malicious actors will discover exploits independently.) But here we can step back from the question of optimal social outcomes and focus on the narrow question of liability. It is not the exploit developer writing code but the person executing that exploit against a vulnerable target who ought to be held legally responsible for the consequences. (To paraphrase a bumper-sticker version of second-amendment advocacy: “Exploits don’t pop machines; people pop machines.”) In the same vein, the Tornado Cash team is fully culpable for their deliberate decision to turn code into a service. Once they launched the contract on chain, they were no longer mere developers. They became operators.
CP
On first read, I missed an important part of this sentence: “From an ethical perspective … should have known better than to launch … without adequate safeguards”
As you mention later, focusing narrowly on liabilities, I feel strongly that there is no liability in this scenario. A developer who deploys a smart contract absent any safeguards, such as the ability to upgrade, pause, seize funds etc., should not be liable for its misuse in perpetuity.
It’s pretty easy to imagine a scenario where one creates some sort of journaling app where users can post append-only text – and that being misused to spread illegal content (perhaps in the same way newsgroups are.) If you want to go after them, I think you would need to prove *intent* – was it an innocent art project, or were they designing a takedown-resistant way to spread illegal material? Intent is hard to prove, it’s a very high bar, and I think that’s appropriate.
In some ways it seems reminiscent of the charge of negligence in software engineering – at what point does the presence of bugs (exploitable or not) or the lack of security features (HTTPS, MFA, upgrades, etc.) cross the line of “you should have known better, and done more, and you are partly responsible for the harm that occurred because of this”? I haven’t really kept up with the state of play here, but in this area I tend to think only of the FTC…
It is certainly a gray area.
Going over the indictment, there are indications that the perpetrators clearly knew Tornado Cash was being used for criminal activity:
https://www.justice.gov/usao-sdny/file/1311391/dl?inline
Instead of taking steps to curb that, they doubled down on promoting the service & associated token, and added relayers to make tracing even harder. Where they did take steps, such as adding a KYC check to the front-end, they admitted in their own communications it was all window dressing and easily bypassed. (Granted, it is reasonable to ask what they could have done given that the contracts are immutable. At a minimum they could have taken down the front-end UI. Their defense that anyone can call the contracts directly is factually accurate but beside the point: as a matter of fact, the vast majority of usage, including from North Korea, came through the official front-end and not from people invoking the contracts directly on-chain.)
There is also plenty of circumstantial evidence of mens rea in the indictment:
– Constant misrepresentations about Tornado being “non-profit” or even that they were not founders
– Communications where they discuss ways to distance themselves, e.g. not using their company account to pay for hosting the front-end
– Dumping the TORN token after Tornado was sanctioned, using a friend’s Binance exchange account to disguise the large sale
Another curious point: initially the contracts had an admin mode accessible to the operators. But in 2020 they deliberately removed their own public keys to make the contract autonomous. Given that North Korea’s interest in targeting cryptocurrency to evade sanctions was very well established by that point, it would defy common sense for the Tornado founders to argue they could not have foreseen that their contract would be misused by crooks.
(My 2¢ only, not legal opinion.)