Section 230 Is a Government License to Build Rage Machines

The law serves as Facebook and Google’s get-out-of-jail-free card for conspiracies and disinformation. It’s time for strong amendments.

Facebook has been called the “largest piece of the QAnon infrastructure.” The app has not only hosted plenty of the conspiracy group’s dark and dangerous content but has also promoted it and expanded its audience. QAnon is hardly the only beneficiary: Facebook promotes and expands the audience of militia organizers, racists, those who seek to spread disinformation to voters, and a host of other serious troublemakers. The platform’s basic business, after all, is deciding which content keeps people most engaged, even when that content undermines civil society. But unlike most other businesses, Facebook’s most profitable operations benefit from a very special get-out-of-jail-free card provided by the US government.

Section 230 of the Communications Decency Act protects “interactive computer services” like Facebook and Google from legal liability for the posts of their users. This is often portrayed as an incentive for good moderation. What is underappreciated is that it also provides special protection for actively bad moderation and for the unsavory business practices that earn the big tech platforms most of their money.

Google might be viewed as a search engine, and Facebook as a virtual community, but these services are not where the profits lie. They make money by deciding what content will keep readers’ eyeballs locked near ads. The platforms are paid for their ability to actively select and amplify whatever material keeps you hooked and online. And all of that content is specifically protected by Section 230, even when they are recommending QAnon or Kenosha Guard.

This unusual state of affairs exists because, while Section 230 was intended to limit the platforms’ responsibility for bad content, the courts have perversely interpreted it as also protecting commercial decisions to elevate and push stories to users. This allows Google and Facebook to focus on user engagement to the exclusion of everything else, including content quality and user well-being. If I threaten or defame someone in an online post (assuming I’m not acting anonymously), I can be sued. If a platform decides to promote that threatening post to millions of other people to drive user interest, and thus increase time on the site, it can do so without any fear of consequences. Section 230 is a government license to build rage machines.

The platforms like to avoid any discussion of their liability-free business model by focusing on the difficulties of blocking bad content. Witness Mark Zuckerberg’s constant defense of “free speech” and his laments about dealing with information “at scale.” While stopping the public from posting bad content is a truly difficult problem, every decision about amplifying that content is the platforms’ own. They should be expected to police themselves.

That license to engage in irresponsible behavior is particularly hard on market participants like news publishers (whom I represent) that invest in creating quality content. They are forced to compete in attention markets that don’t value quality and are subject to ever-changing algorithmic decisions about which content favors the platforms’ interests. Under Section 230, news publishers also retain liability for what they produce. We get the responsibility, and Google and Facebook get most of the money.

You might think the platforms would value quality journalism as a partial antidote to the bad information they host, but, as recently indicated by Facebook’s threat to terminate access to news in Australia, they just don’t believe it’s important to their business. Section 230 means they don’t have to care about the quality of the content they deliver.

The internet is no longer in its infancy, as it was when the Communications Decency Act was passed in 1996. We need new rules for the digital market that limit government distortions and promote genuine competition. Roger McNamee, the venture capitalist and early investor in Facebook, has correctly argued that the protections of Section 230 are inconsistent with algorithmic amplification. We could start by limiting Section 230 and making the platforms responsible, like any other publisher, for content they decide to promote and amplify. This wouldn’t stop the spread of all hateful content. But it would, at the very least, require the platforms to carefully track and filter what they promote, and introduce incentives to support known sources of quality information.

Propelling misinformation and suppressing competitors shouldn’t be government-protected activities. While Google and Facebook like to preach libertarian virtues like open competition and free speech, they are really living off a giant government subsidy. And when you wrap massive companies in special protections, markets and society suffer. It’s time for them to take responsibility for their commercial decisions, just like any other business.

