The Domino Effect: The global implications of changes to Section 230

Section 230 of the Communications Decency Act is a key building block of internet governance and has shaped the growth of e-commerce, social media, search, and the other core services we use online every day. A change to this legal framework, potentially prompted by court decisions, could create major upheaval in how the internet and user-generated content are organized, and could trigger a flood of legal cases against large tech companies.

Section 230 provides broad protection to online platforms against liability for user-generated content. It has faced increasing criticism in the US, however, as calls to hold internet platforms accountable for their content moderation decisions grow louder. While many Democrats and Republicans dislike Section 230, they tend to have different motivations and want different outcomes. Republicans argue that platforms use Section 230 as a shield to implement left-leaning content moderation policies, potentially censoring conservative viewpoints, while Democrats contend that platforms rely heavily on the statute to evade responsibility for combating the spread of disinformation and misinformation.

With the reintroduction of Section 230 reform bills, such as the SAFE TECH Act, and recent congressional hearings on platform accountability, there is mounting pressure on congressional committee chairs to find common ground and reform the statute. However, their chances of success ahead of the 2024 presidential election are slim, making it more likely that Section 230 will be shaped by litigation and court decisions.

Indeed, on May 18, the US Supreme Court ruled on two major platform liability cases, Gonzalez v. Google and Twitter v. Taamneh. In Taamneh, the court held unanimously that Twitter's having hosted terrorist-related speech did not make it legally responsible for terrorist attacks. In light of that ruling, the justices in Gonzalez declined to decide whether Google could invoke Section 230 to avoid liability for its YouTube algorithms having recommended terrorist-related videos. These decisions mean that Section 230 remains unchanged for now and that internet platforms do not face a deluge of lawsuits if they fail to remove content that could be labeled as “terrorist.”

Although the court declined to reexamine Section 230 this time, it could have another opportunity as early as the fall if the justices opt to consider the constitutionality of content moderation laws in Texas and Florida. Whereas the Google and Twitter cases dealt with algorithmic recommendations of content and federal anti-terrorism law, the Texas and Florida cases involve state laws that make it unlawful for tech platforms to suspend or penalize users in many circumstances, so the court could reach a different conclusion.

While American internet users and service providers would feel the immediate consequences of dismantling Section 230, there are two significant global impacts to consider. First, changes to the statute could prompt other countries to introduce policies that restrict online content or make platforms liable for content hosted on their sites. Indeed, such changes would unravel the global limited liability consensus, underpinned by Section 230 and the EU's E-Commerce Directive, raising further questions about how tech companies can comply with legal frameworks that may be completely at odds with one another.

Second, because of the borderless nature of the internet and the fact that American tech companies host and organize a significant portion of online content worldwide, the policies and systems these companies adopt in response to changes in US regulation will have substantial ramifications beyond US borders. For instance, to mitigate liability and litigation risk in the US, American platforms would likely develop new automated content moderation tools tailored to any changes to Section 230. These actions are likely to have extraterritorial implications because internet companies are more likely to censor content en masse than to attempt to draw boundaries across jurisdictions, partly because the US legal system is structured in a way that encourages litigation, and threats of litigation, especially on matters of speech.

Internet companies continue to struggle to train content moderation algorithms capable of accurately capturing the nuances of speech, making mass content moderation a suboptimal solution for many countries. Speech varies widely across countries, groups, and regions, and context can be critical in determining whether content should be removed. As a result, building datasets for such categories of content is difficult, and developing and operationalizing a tool that can be applied consistently across different demographics, geographies, and types of speech is exceedingly hard for companies.

Globally, there is a growing demand for stricter content moderation among lawmakers, placing added pressure on internet companies. In 2022 alone, governments in at least 40 countries blocked online content related to social, political, or religious matters, while in at least 22 countries, officials completely blocked access to social media or communications platforms. If the scope of Section 230 is narrowed, either through court rulings or congressional actions, it will likely act as a catalyst that triggers a chain reaction worldwide, potentially ushering in an internet with greater restrictions on content. 

    The views expressed in this research can be attributed to the named author(s) only.