A 2020 Pew study found that about one-fifth of American adults get their news from social media. Eighteen percent of adults specifically find their political news on social media, and almost half of those adults prefer Facebook. While 18 percent is nowhere near a majority, it represents roughly 46 million adults who rely on social media for current events. With this much of the population using social media as a news source, it is important that what they read on sites like Facebook is factual. Facebook, a “provider…of an interactive computer service,” is shielded by §230 of the Communications Decency Act, meaning it is not liable for the spread and consumption of misinformation on its platform, nor is it obligated to correct false information, colloquially known as “fake news.” As an international social media company with the largest market share in its industry and the ability to reach 69 percent of U.S. adults, Facebook has a responsibility to mitigate the publishing, sharing, and consumption of misinformation on its platform in the interest of public safety. To promote accuracy online and protect that safety, Congress should create a narrow social media public safety amendment to §230 that increases social media companies’ accountability by requiring them to fact-check potential sources of dangerous misinformation.

False information has long-lasting political impacts. Former President Trump arguably would not have been elected if not for the virulent spread of misinformation on Facebook. In 2016, 62 percent of U.S. adults read their news on social media, and the “most popular fake news” articles were shared more on Facebook than on any other social media site. Those articles were found to favor Donald Trump over Hillary Clinton. These statistics tell a darker story, given that the American population has a history of being easily swayed by false information and conspiracy theories. In 1994, 5 percent of Americans believed the conspiracy theory that the Nazi extermination of millions of Jewish people did not happen. In 2007, 33 percent of Americans believed the conspiracy theory surrounding the 9/11 attacks that suggested the US government either had prior warning but let the attacks occur anyway or actively assisted with them. Finally, in 2010, about 15 percent of Americans believed the conspiracy theory that then-President Obama was born in another country. Gallup polls revealed a sharp drop in Republicans’ “trust and confidence” in accurate and fair reporting by the mass media in 2016, a drop linked to the rise of sharing and believing misinformation on social media.

The results of the Gallup polls further illustrate that the ability to differentiate between accurate and false information is not innate, and the gap might be attributed to two issues. First, the growing partisan divide has soured personal feelings between members of opposing political parties, which has, in turn, shaped the consumption and belief of one-sided media, especially among conservatives. As of 2019, 55 percent of Republicans believed that Democrats were “more immoral” than non-Democrats, while 47 percent of Democrats believed the same of their counterparts. Second, determining whether something is factual is not a critical thinking skill that was instilled in many of today’s older adults, even though the idea of being correct is revered in American culture. It is worth noting that the Baby Boomer generation has a reasonable expectation that the media they consume is true, since they lived most of their lives in a media environment governed by the Fairness Doctrine. It is unsurprising, then, that younger Americans were found to be twice as good as Baby Boomers at distinguishing between fact and opinion in the news. This may correlate with younger generations growing up with the Internet and being told to think critically about the accuracy of anything seen online, or with the rise of primary educators taking it upon themselves to incorporate fact-checking and critical thinking into classroom learning.

It is true that Americans have a Constitutional right to believe and discuss what they want. They can fact-check articles if they choose and select media from whichever sources they prefer. However, when online users read and believe false information, the result can be long-lasting political impacts and the public safety concerns that follow from them.

Shortly after the 2016 election, 29-year-old self-identified conservative Edgar Welch read and believed a conspiracy theory online, dubbed “Pizzagate,” claiming that a Hillary Clinton-led sex slavery ring was housed in a D.C. pizzeria. He traveled from North Carolina to D.C., where he fired his military-style assault rifle inside the pizzeria, believing he was on a child-saving crusade against a “corrupt” political figure. He did not find what he was looking for and quickly realized he had believed a conspiracy theory without any basis in fact. The judge presiding over his case later said that it was “sheer luck” that he did not injure anyone. Welch himself admitted that “the intel on this wasn’t 100 percent,” understanding after the fact that what he read was not factual. Despite this admission, Welch was hailed as a hero among some conservatives, many of whom believed Pizzagate out of sheer animosity toward then-candidate Clinton.

A second recent incident of violence stemming from online misinformation is the 2018 Tree of Life synagogue shooting. Forty-six-year-old conservative Robert Bowers, radicalized by antisemitism on online forums and social media, took eleven lives and wounded six. With online posts expressing that he could not “sit by and watch [his] people get slaughtered [by Jews],” Bowers faces 63 federal charges and 36 separate charges in Pennsylvania state court.

A pattern emerges from these and other similar acts of violence, linking misinformation on social media to armed, dangerous action. In 2018, Arizona conservative Matthew Wright, armed with two military-style rifles, two handguns, and 900 rounds of ammunition, blocked a bridge near the Hoover Dam with an armored vehicle because he was upset by perceived inaction on crimes alleged in QAnon internet conspiracy theories. A right-wing Californian man was arrested the same year after being found with bomb-making materials in his car that he planned to use to “blow up a satanic temple monument…to make Americans aware of Pizzagate” and other QAnon conspiracy theories he had discovered on social media. In 2019, 24-year-old conservative Anthony Comello “became certain that he was enjoying the protection of President Trump…and had the president’s full support” when he murdered the patriarch of the Gambino crime family in New York. Later that year, 41-year-old conservative Timothy Larson damaged an Arizona church and posted about the crime on Facebook after being convinced by another QAnon conspiracy theory that the church supported human trafficking. In December 2019, 50-year-old Republican Cynthia Abcug was arrested in Montana over a kidnapping plot, coordinated with other QAnon supporters via a Facebook group and other social media, that targeted supposed “evil Satan worshippers.” In April 2020, 44-year-old conservative Eduardo Moreno, having seen a QAnon conspiracy theory about COVID-19 on Facebook, ran a freight train off the end of its track, aiming it at a Navy hospital ship that was treating COVID-19 patients. Each of these incidents can be tied to the individual’s consumption of unlabeled misinformation on Facebook and other social media.

Although QAnon theories primarily originate on other forums, it is important to note that Facebook, by its very nature, greatly facilitates the spread of these ideas. Facebook not only fails to stop white supremacists from publicly sharing explicitly racist information, but also promotes connections between like-minded users and advertises groups based on measured interests, which are determined by Facebook’s content-consumption algorithm. This aspect of the algorithm has dangerous consequences. Clicking on just one article with a political bias can lead to a newsfeed crowded with similar content, regardless of its validity, along with Facebook suggestions of groups and potential friends who also engaged with that content. This effect is amplified for frequent Facebook users and is a major driver of the normalization of extremist beliefs and the radicalization of social media users. Even relatively fact-conscious users may encounter misinformation in familiar contexts, such as a link posted by a high school friend in a shared Facebook group, and assume that the content is valid. Those users may then share it themselves, and so on, until an entire network is uncritically consuming content that has little or no basis in fact. This feedback loop of misinformation and outrage has repeatedly led to the violent incidents noted above and peaked with the 2021 Capitol riot. A seemingly simple mitigation of the misinformation-to-violence lifecycle is fact-checking. Section 230, however, shields sites like Facebook from being compelled to implement such solutions, and that shield has led to insufficient regulation of false information on their part.

Section 230 of the Communications Decency Act was enacted to protect new and developing technologies by ensuring that they would not be liable for third-party content on their platforms. The statute states that “interactive computer service [providers]” are not recognized as the publishers of user-generated content and thus cannot be held liable for what users post. While Congress originally intended this rule to support business growth and technological innovation, Section 230 now allows some of the most powerful companies in the world, such as Google, Twitter, and Facebook, to escape accountability for users’ posts. Because Facebook is not legally responsible for anything posted on its platform, misinformation can spread incredibly quickly before the site removes it or takes any other action against it. It also means the leader of the free world can freely share misleading or false information and even unknowingly promote satirical articles as fact. Partially because of this, there has been bipartisan support for amending §230. A proposed amendment by the Justice Department would add a provision allowing companies to moderate user content on their platforms in good faith, which would include fact-checking. It was unclear at the time whether Congress would accept the amendments, as the Senate was Republican-led and the party’s leader had balked at social media platforms’ efforts to fact-check his content throughout his political career. While the current Senate is Democratic-led, moderate party members may not agree to a hands-on approach with businesses that they hope will cooperate with future and ongoing government partnerships.

To its credit, Facebook has joined other social media companies in attempting to diminish the spread of misinformation by fact-checking some of the articles shared on its platform, but Twitter, YouTube, and Google currently lead Facebook in their mitigation strategies. Twitter banned political advertising and advertising from state-backed media in 2019 and has begun labeling the accounts of state-affiliated media entities and government officials. It also changed its algorithm to stop promoting state-affiliated media accounts through its recommendation system and has flagged inaccurate and misleading tweets from the current administration. Following Twitter’s removal of 7,000 QAnon bot and user accounts, YouTube began prohibiting material that targets an individual or a group with conspiracy theories used to justify violence. Google published a COVID-19 medical misinformation policy that disallows false YouTube content about coronavirus treatment, prevention, and transmission, along with any misinformation that contradicts health authorities’ or the World Health Organization’s medical guidance. Less than a month before the 2020 election, Facebook announced that it would follow suit by banning pages, groups, and Instagram accounts representing QAnon to try to prevent the spread of misinformation and the violence that stems from it.

Though belated, Facebook’s action parallels its existing user rules, which disallow, flag, and occasionally remove posts that promote or include hate speech, violence, or harassment. If a user’s content violates Facebook’s policy, the user may appeal Facebook’s decision to flag or remove the post by requesting a review by Facebook’s community operations team. If the content was flagged or removed in error, the policy requires the team to restore it. If a user’s post about political topics, COVID-19, QAnon, or related conspiracy theories is recognized as misinformation by Facebook’s software and moderators, the platform displays a banner under the post informing viewers that some of the information may be inaccurate. Another feature Facebook occasionally provides is displaying articles by Snopes or similar fact-checkers beneath shared articles containing contested or inaccurate information, without removing the original article. This allows the initial poster to exercise their free speech rights while giving viewers of the post the option to confirm the validity of its content.

Implementing steps like these is becoming more crucial every day. It is true that Facebook is a private company that does not currently need to fact-check user-posted content because it is a protected interactive computer service provider under §230. With the current protections, Facebook has no liability for user-published content, and thus no incentive to moderate it. However, while Facebook may be a private company, it effectively plays the role of a public utility by providing users with a platform for content of any quality or origin and actively promoting that content to other users around the world. Facebook’s steps to moderate false content are business decisions made out of concern for its brand image. Those steps should also help delegitimize false and misleading content, which would likely lead to fewer public safety incidents over time. Yet, due to a lack of actual regulatory pressure, Facebook has not taken sufficient action and has been far too slow to take even these first steps in pursuit of public safety. The global stage on which social media giants operate contrasts sharply with the small blogs and forums that §230 was created to protect. As a pseudo-public utility, Facebook has a responsibility to make policy decisions that protect public safety.

Creating a fact-checking mechanism that scans all Facebook posts may sound like a simple software addition, but it would realistically involve more steps than the company may consider worth its time. Despite its explicit content moderation rules, Facebook has already erroneously removed posts from Black Americans recounting lived experiences with racism, mistakenly flagging them as hate speech, while leaving implicitly racist content from other users untouched, illustrating the problems with moderating online content for specific terms instead of for context. Additionally, there is the issue of defining misinformation in a way that does not hinder the right to free speech. After the 2020 presidential election, there was a massive migration of Republican and right-wing individuals from Facebook to Parler, a competing social media platform that promises free speech over “censorship.” While relatively young at two years old, Parler has already attracted 2.8 million users, necessitating content moderation. The “free speech alternative” to Facebook lists several prohibited topics in its terms, such as terrorist activity and pornography, but does not define or address “misinformation.” It also reserves the right to ban users for “any or no reason,” which is the primary justification it has given for a wave of bans since the election.

Given the challenge of balancing free speech rights with public safety interests, it is up to Facebook to update its current definition of “news [that] is harmful to [the] community, makes the world less informed, and erodes trust” to something more concrete that addresses misleading and false information in both headlines and article content. Addressing headlines as well as content is crucial because the majority of users share articles without reading them first. An article filled with dummy text from a satirical news site, titled “Study: 70 percent of Facebook users only read the headline of science stories before commenting,” was shared 46,000 times on Facebook, illustrating its own point and corroborating a Columbia University study finding that 59 percent of links shared on social media are never clicked before being shared. After updating its definition of misinformation, Facebook must then consider whether such posts would best be moderated by its current content moderators across the globe or by an Artificial Intelligence (AI) algorithm. While AI has been shown to have problematic biases when applied in real-world scenarios rather than hypothetical, academic ones, Facebook could potentially avoid unwanted biases by feeding the AI live examples of previously approved and rejected content.

Finally, Congress must consider whether to take on what will look like a more paternalistic role by requiring fact-checking, or to allow users more autonomy in finding credible news and educating themselves with factual articles, an approach that comes with continued public safety concerns. Members of Congress should be aware that Facebook was designed to facilitate the spread of any and all information. Unfortunately, that design has played a large role in two national elections and many more conspiracy theory-based crimes.

In the January 2021 Capitol riot, the nation saw yet another instance of Facebook’s role in misinformation-based violence. Unlike the previous one-off incidents, this was an organized attempted coup. Hundreds of people were convinced, by months of misinformation amplified in online echo chambers, that storming the Capitol, attempting to take the lives of Members of Congress, and plotting to lynch the then-Vice President were actions they needed to take to force the government to declare former President Trump the victor of the 2020 election. Thanks in part to social media footage of the attack and content voluntarily posted by individuals at the riot, the Justice Department has arrested and charged over 300 people and expects to charge 100 more in the coming months. To prevent this kind of incident from occurring again, Congress must amend §230. The amendment should require social media companies to take stronger steps to mitigate the spread of misinformation for the sake of public safety. We have a First Amendment right to free speech. It is up to Congress to make sure we live to enjoy it.


Anokhy Desai is a law student at the University of Pittsburgh and an information security policy master’s student at Carnegie Mellon University. She is interested in privacy, cybersecurity law, and consumer protection.
