Unless you have been hiding under a rock lately, you are aware that the CEOs of Twitter, Facebook, and Google were grilled on Capitol Hill last week about their “fact checking” of content and their ostensible fight against disinformation on their platforms. At best, it was an opportunity for the American public to hear about these actions straight from the horse’s mouth, so to speak. At worst, it was a tone-deaf display of arrogance of the highest order, demonstrating an astonishing indifference to the inconsistent application of their own policies and terms of use. In any event, their testimony did not assuage any concerns over Big Tech’s actions. The question is no longer whether Section 230 should be revised, but how it should be revised and to what extent. Unfortunately, the answer is not clear-cut, but the approach to the answer may be simpler than you think.
I first addressed why social media platforms needed to respect Section 230 here, followed by reasons why we need to rethink and re-evaluate those protections here, and arguments for repairing Section 230 for the 21st century here. Unfortunately, the testimony presented this past week in the Senate demonstrated that Twitter, Facebook, and Google either do not understand the depth of concern over their actions or understand it and simply do not care. As a result, it seemed appropriate to offer some closing thoughts on Section 230 from a different angle — that of its good points, its bad points, and the absolutely ugly points stemming from the law’s failure to keep pace with the platforms.
The Good. No matter the criticisms, there is true value in Section 230. Although the largest social media platforms can easily handle a loss of immunity for content moderation, a multitude of other platforms still fighting for market share (like Parler and Rumble) absolutely benefit from (and need) such immunity. Section 230 not only protects online service providers from civil liability for defamatory, tortious, and even illegal content that their users post to the platform (e.g., comments on posts), but also provides immunity from civil liability resulting from the moderation or restriction of content posted on the platform (such as removing obscene content or content that threatens violence against a person or group). This is a commonsense approach to an ongoing problem. This is why calls for the complete repeal of Section 230 without a reasonable replacement are simply wrong — complete repeal would cause not only social media platforms but a vast number of websites to broadly remove information to avoid liability, resulting in even more censored content. Online service providers (including social media platforms) provide valuable conduits for information exchange and should enjoy a certain level of protection from liability so that the free exchange of ideas and speech is ensured.
The Bad. At least with regard to the largest social media platforms, they are not practicing what they preach. Notwithstanding claims that their “purpose is to serve the public conversation” (Twitter) or Community Standards that claim “to create a place for expression and give people a voice” (Facebook), these platforms have continued a march toward expanding the scope of content removal, reaching a fever pitch this election season and culminating in actions taken under the guise of “fact checking” to address “misinformation” (and that is putting it mildly). There are a number of reasons why this has occurred, but one of the biggest is catastrophically broad Section 230 jurisprudence, dating back to Zeran v. America Online, Incorporated in 1997, where the Fourth Circuit Court of Appeals affirmed the trial court’s dismissal of the case based upon Section 230 immunity, holding distributors of content online to be a “subset” of publishers deserving of very broad protection against liability. This interpretation made sense in 1997, but 23 years later the considerations are no longer the same. The internet has matured, and its reach has dramatically increased due to the proliferation of mobile devices and the spread of cellular and wireless communications. This evolution helped give rise to social media platforms, whose reach has grown exponentially, changing the way news and other information is disseminated online. The result, however, has been a slow and steady march away from their own stated policies in support of public discourse and a donning of the “information gatekeeper” mantle.
The Unapologetically Ugly. In an already highly polarized political atmosphere this presidential election cycle, some of the largest social media platforms took it upon themselves to censor content without, it seems, thinking through the consequences of their actions. For example, Twitter suspended the New York Post’s account over tweets referencing the paper’s exposé on questionable dealings by Hunter Biden (the son of presidential candidate Joe Biden), gleaned from a laptop he abandoned at a computer repair shop. The reason for removal? Violations of Twitter’s hacking policy, even though other sources and the reporting itself demonstrated the information was not hacked. In fact, Twitter changed its policy within 24 hours of the removal, claiming that “feedback” had uncovered concerns about “undue censorship of journalists and whistleblowers.” For similar reasons, Facebook limited the reach of the same NY Post article on its platform. These actions resulted in the CEOs of Facebook and Twitter (as well as of Google) being summoned to testify before the Senate to explain actions that, on their face, appeared politically motivated. The CEO of Twitter, Jack Dorsey, attempted to justify his platform’s actions under questioning, even claiming that the content in question could now be shared, although that was not the case while Dorsey was testifying, and it remained blocked for a period thereafter. Although measured, the CEOs’ testimony failed to address the inconsistent enforcement of their own policies or otherwise defuse the resulting concerns over perceived political bias. You can unpack it yourself here (on YouTube, despite the irony), but let’s just say that, given the stakes, the testimony of these CEOs did not exactly redound to their platforms’ benefit.
I want to make sure that this point is abundantly clear: Social media platforms, as private commercial endeavors, can determine what content they wish to permit on their platforms. That is their right. What they need to understand, however, is that Section 230 is a privilege that they must respect. Judging by their own inconsistent actions, they have abused that privilege, due in part to overbroad Section 230 jurisprudence that has not kept up with the times and is long overdue for revisiting. Further, my biggest concern over their actions is their deviation from their purported purpose and abject inconsistency in applying their own policies and procedures, which is a disservice to their users and the internet community at large. Consistent, good-faith enforcement supports the intent of Section 230 while simultaneously eliminating perceived bias. They should also avoid the pitfalls of addressing “misinformation” by ceasing to act as gatekeepers and returning to acting like referees. Simply put, they should allow the conversation consistent with their stated policies and stop being a part of it. Unfortunately for them, their actions do have consequences — in this case, the loss of Section 230 protections in one form or another, whether they like it or not.
Tom Kulik is an Intellectual Property & Information Technology Partner at the Dallas-based law firm of Scheef & Stone, LLP. In private practice for over 20 years, Tom is a sought-after technology lawyer who uses his industry experience as a former computer systems engineer to creatively counsel and help his clients navigate the complexities of law and technology in their business. News outlets reach out to Tom for his insight, and he has been quoted by national media organizations. Get in touch with Tom on Twitter (@LegalIntangibls) or Facebook (www.facebook.com/technologylawyer), or contact him directly at tom.kulik@solidcounsel.com.