The First Amendment

“Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.”

 

The First Amendment is the defining principle of our democracy. It ensures that the people of the United States are free from certain forms of government control and, to an extent, can govern themselves. The First Amendment is upheld as one of the highest guarantees in our law and forms a core of our justice system. Spence v. Washington (1974) set a precedent for the treatment of the United States flag and symbolic speech, creating the “Spence Test”. Spence, a college student in Seattle, WA, hung an altered flag upside down outside his window to protest the Kent State shootings of Vietnam War protesters. The Supreme Court ruled that Spence’s action was protected by the First Amendment because it expressed a particularized message.

Analysis of the First Amendment reveals five concrete aspects that the federal government cannot regulate: the establishment of religion, freedom of speech, freedom of the press, the right to peaceably assemble, and the right to petition for a redress of grievances. Another, informal aspect of the First Amendment is the “Spence Test” mentioned previously, under which courts determine whether forms of conduct are “expressive” enough to warrant protection (MTSU). However, this test has never been formally added to the Constitution, showing that the text of the amendment itself is partial and incomplete. It discusses only five specific types of potential legislation and offers no further specification, leaving it open to interpretation.

According to the University of Chicago Law Review, “the phrase ‘freedom of speech’ is too narrow and too wide to indicate the purpose... many activities of belief and communication are not ‘forms of speech’.” The term ‘speech’ acts as a blanket term and fosters conflict over how to determine what the First Amendment protects, hence the “Spence Test”. This lack of specificity inherently leads to dissent on issues of national security. This is explored in Dennis v. United States (1951), in which eleven Communist Party leaders were convicted of violating the Smith Act. The Smith Act states that “It shall be unlawful for any person... to organize or help to organize any society, group, or assembly of persons who teach, advocate, or encourage the overthrow or destruction of any government in the United States by force or violence” (54 Stat. 671, 1940). In 1957, however, the Supreme Court reversed a number of convictions under this act as unconstitutional, specifically as violations of the First Amendment. The Smith Act had been enacted to protect the United States amid growing conflict in Europe and the rise of the Communist Party. The act prohibited the Communist Party from petitioning or assembling against the U.S. government, violating the fourth and fifth aspects of the First Amendment. The intention of the act, however, was to ensure the national security of U.S. citizens.

The University of Chicago Law Review observes that “there is fundamentally a conflict between the security of the nation and the freedom of the people.” Challenges to national security often arise within cyberspace. Cubby, Inc. v. CompuServe Inc. (1991) brought First Amendment rights into confrontation with cybersecurity. CompuServe Inc., an internet service provider (ISP), was sued for defamation over material posted on its site but created by the publication Rumorville USA. The District Court for the Southern District of New York held that “CompuServe was merely a distributor, rather than a publisher of content on its forums, and hence could only be liable for defamation if it knew, or had reason to know, of the defamatory nature of the content” (Wikipedia). According to the Yale Law Journal, “there is difficulty in applying existing legal metaphors to [the Internet] in which an information provider may logically seem to fall under the ambit of several legal regimes.” Cubby, Inc. v. CompuServe Inc. thus serves as a foundational moment in which harmful media was left unregulated, setting a precedent of leaving modern cyberspaces unchecked and vulnerable to national security threats.

Representative Louie Gohmert introduced a bill, H.R. 492, in the House of Representatives on Jan. 11, 2019. The bill sought to amend the Communications Act of 1934 so that “an owner or operator of a social media service that hinders the display of user-generated content shall be treated as a publisher or speaker of such content, and for other purposes.” This bill would transform the regulation of social media platforms, identifying them as publishers rather than distributors, the opposite of the designation established in Cubby, Inc. v. CompuServe Inc.

Although the term ‘speech’ is broad, online speech can be considered the equivalent of written speech. If applied improperly, regulation of online speech will suppress ideas and hinder First Amendment rights. However, according to the American Bar Association, “Libel is a false statement that is not only unprotected by the First Amendment but also punishable by law.” Libel is a published false statement meant to damage a person’s reputation, and while libel is illegal, false ideas or claims in themselves are not. With this, the American Bar Association points out that “relying on the government to distinguish truth from falsehood is at odds with principles of popular government and robust deliberation.”

According to the Stanford Law Review, in the modern age there is a “failure to facilitate constructive judicial engagement with significant contemporary social issues connected with freedom of speech.” Disinformation dominates the modern age and has proven to be a threat to the national security of the United States. There is an inherent need for legal action against the threats disinformation poses, and that action could be taken through social media companies. If defined as publishers rather than distributors of information, social media companies would incur liability for the information posted on their platforms, specifically false information spread with the intent to deceive. As of today, the First Amendment shields online disinformation from the liability associated with libel. Formally defining disinformation as false information spread with malicious intent would remove the First Amendment protection currently extended to such online speech.

 

Global Strategies of Attack and Defense

 

ATTACK

While domestic forms of disinformation exist, this report will focus on two major foreign producers of disinformation: Russia and China.

Russia is the pioneer of disinformation tactics, which it originally dubbed “active measures”. “Active measures” is a Soviet term referring to the use of intelligence operations to influence international events in pursuit of geopolitical goals, much like the disinformation tactics seen today. One of the first major influence operations in the U.S. political arena took place during the Camp David Accords, signed in 1978, when Russia intervened in U.S.-Egyptian negotiations. Russia forged a document from the U.S. Secretary of State to the U.S. President containing offensive language about the capabilities of the Egyptian president. The document ended up being published in Syrian newspapers before being publicly debunked, sowing political discord in the Camp David peace process. Russia’s goal was to undermine U.S. international standing. Today, in the era of social media, attacks against the U.S. have multiplied, using tactics such as networks of fabricated accounts and news sites to attract users and, more recently, American-based journalists. In Sept. of 2020, the Kremlin-backed Internet Research Agency was found to have hired American journalists, paid electronically, to write for a fake news platform called “PeaceData”. The site developed a following of about 14,000, and 13 fake accounts promoted it via Twitter and Facebook. Finally, one of the most infamous examples of a successful Russian disinformation campaign was the Internet Research Agency’s impact on political perspectives surrounding the 2016 election. The IRA produced “more than 57,000 Twitter posts, 2,400 Facebook posts, and 2,600 Instagram posts—and the numbers increased significantly in 2017” (PNAS).

China is a relatively new player in the realm of disinformation. The Chinese Communist Party (CCP) is one of the main actors in this space, launching pro-Beijing media campaigns. In 2017, it launched a series of Chinese-language campaigns on Twitter to undermine democratic processes. However, China’s disinformation campaigns take more than just Chinese-language forms: pro-Beijing actors have been discovered carrying out malicious activities across a range of countries, languages, and platforms. Freedom House’s annual survey of internet freedom, “Freedom on the Net”, uncovered evidence of electoral interference by the CCP in 26 different countries. According to Sarah Cook of Freedom House, “the CCP presides over one of the world’s most repressive regimes and an economy second only to that of the United States. If it invests heavily in this new approach to international influence, it will pose enormous challenges to democratic governments, technology firms, and internet users”; such an approach would mean large investments in AI to generate massive amounts of content. These disinformation efforts accelerated during the COVID-19 pandemic, converging with those of other malicious actors such as Russia to create disinformation narratives about the United States and the pandemic. The goal of these narratives was to highlight the ‘failures’ of democracy, for example by casting doubt on the virus’s origin in China and claiming it had originated with the U.S. military. Bots on social media and messaging platforms amplified rumors of a mandated nationwide lockdown and of the U.S. government deploying troops in response to the racial justice protests across the country. While these campaigns were mildly successful, China, as a newer actor, produces more simplistic disinformation that is therefore easier to expose. However, joint cooperation with an experienced disinformation actor such as Russia is beginning to transform China’s tactics to mimic Russia’s.

Russia and China share a goal in disinformation: undermining liberal democratic institutions. Both countries have experience in electoral interference, Russia with the 2016 U.S. presidential election and China with the 2018 and 2020 Taiwanese local and presidential elections. These countries work in concert, and their cooperation has grown significantly in the wake of COVID-19. Their intention is to exploit the openness of democracies’ cyberspace in their respective regions, Europe and Asia. This alliance does not come as a surprise. The narratives of Russia and China continue to overlap on a global scale, reinforcing the symbiotic relationship between their authoritarian systems. The two countries share technologies and continue to learn from each other, rapidly becoming a significant threat to democracies internationally.

DEFENSE

This report will focus on the defense tactics of two democracies: the United Kingdom and Sweden.

The United Kingdom has been subject to the disinformation tactics of Russia. In 2017, the U.K. Electoral Commission investigated Russian influence in the 2016 Brexit campaign and the 2017 elections. In 2018, the National Security Capability Review captured more malign Russian behavior on U.K. platforms, kick-starting the 2018 Russian Doctrine. The goal of this doctrine was to widen diplomatic efforts with other countries also victimized by disinformation, and it fed into the U.K. Defend Democracy Program of 2019. The focus was on building resilience and on limiting the supply of disinformation by exposing, countering, and punishing malicious actors. The U.K. analyzed previous Russian tactics to apply lessons learned to combating future disinformation activities. The Cabinet Office developed a Rapid Response Unit (R.R.U.) to further prevent the spread of disinformation, and allowed the R.R.U. to take all actions necessary to protect the U.K. immediately before or after a major political decision or election. Outside of the Cabinet Office, the U.K. government as a whole seeks to strengthen its relationship with the private sector in an effort to dismantle disinformation collaboratively. The Online Harms White Paper of 2019 specifically outlined enhanced measures such as a code of ethics, the ability to launch legal action, and requirements for social media companies to prove their sources of information. The U.K. believes that strong cooperation between the government and private-sector social media companies is necessary to assess disinformation tactics. Finally, the U.K. has focused on the resilience of its citizens, developing awareness campaigns such as “Don’t Feed the Beast”. It strives to enhance media literacy nationally, running digital literacy campaigns aimed not only at the general public but at politicians as well.

In 1940, during the Second World War, Sweden established an information agency to combat un-Swedish propaganda, a mission that continued through the Cold War. Today, prompted by Russian influence surrounding the 2016 elections, Sweden has developed various disinformation defense tactics. The Swedish Civil Contingencies Agency conducts threat assessments of foreign actors and campaigns and publishes reports for public exposure and consumption. Sweden makes it a priority to prepare for, and defend against, disinformation campaigns and foreign influence operations preceding elections. The Swedish Civil Contingencies Agency also develops handbooks to educate police and local election authorities about the reality of disinformation and the threat it poses. The Swedish government works with electoral and police authorities on efforts to counter disinformation such as forged news stories. As of 2018, Sweden has taken major actions to secure its political environment. Its plan to protect elections involved allotting approximately 6.7 million USD (60 million Swedish kronor) to improve resistance to disinformation. This included raising awareness among Swedish political parties and citizens, and bringing representatives from Swedish media and social media platforms together with the public sector to discuss these issues. In August of 2018, Swedish Prime Minister Stefan Löfven created a committee to investigate a potential psychological defense unit against disinformation. This unit is still active today, with the former police director appointed to lead its operations in May of 2020. The Swedish government also invested 1.5 million USD in a new online platform designed to prevent the spread of false information using an algorithm that performs automatic fact-checking. The State Media Council has also focused on public education through new teaching materials implemented in Swedish classrooms to improve visual media literacy regarding false news and propaganda. These defense efforts have been successful, in that the Institute for Strategic Dialogue and the Institute of Global Affairs at the London School of Economics found no clear evidence of coordinated disinformation influence, or bot tactics, during the last two Swedish general elections.

The United States has been subject to attempts to undermine the democratic process, as mentioned previously regarding the 2016 Presidential election (as outlined in the Mueller Report) and, more recently, COVID-19. However, it is arguable that not enough is being done to prevent the spread of disinformation, specifically in the proceedings of the 2020 Presidential election. As seen in the defense tactics of the U.K. and Sweden, it is critical that governments, social media companies, and citizens work together to invest in long-term resilience. While democracies should not turn to censorship of platforms, effective regulatory solutions that other democratic governments have implemented should be assessed for possible implementation in the United States. The U.S. government should begin to enforce transparency standards for all media platforms and develop rapid response teams like the U.K.’s. As of today, the U.S. focus on disinformation has revolved around election periods, since election interference from foreign entities is illegal; while those periods are important, this focus does not cover the full scope of the threat. Within the Executive Branch, the Global Engagement Center (G.E.C.), part of the State Department, focuses on countering disinformation through the Public Diplomacy Bureau. Its tactics include funding counter-disinformation research and the development of new technologies useful for counter-disinformation efforts. The Department of Defense funds the G.E.C. through the National Defense Authorization Act. A Foreign Interference Task Force was established in Oct. 2017 by the Federal Bureau of Investigation to address threats of disinformation; however, little has been noted to come of it. Within the Legislative Branch, Congress added provisions to the 2019 National Defense Authorization Act to counter disinformation efforts, including establishing a new position in the National Security Council responsible for countering foreign malicious interference. The Honest Ads Act, introduced in Oct. of 2017, would require platforms to identify political advertisements as such; however, the act could be more effective in combating Russian-placed advertisements containing misleading information. Additionally, political ads make up a small portion of the advertising industry and therefore represent a minuscule portion of disinformation. The Senate introduced the Data Protection Act of 2018, which outlined the responsibilities of information providers; however, the act was not re-introduced in the new Congress, although a counterpart has been passed in the U.K. Congress has also held hearings with social media company executives, but it has not provided these executives with specific steps or areas of focus for assessing the problem.

 

Social Media

Social media companies are a hotbed of disinformation in the 21st century. Disinformation campaigns are conducted on these platforms through misleading advertising, forged news stories, fabricated account networks, or private messaging. These channels of disinformation continue to evolve and become more dangerous as the platforms themselves evolve technologically. Social media companies have little to no regulation of disinformation on their platforms, and content catered through user-specific algorithms fosters an environment of ignorance. The social media companies this report focuses on are Twitter, along with Facebook and Instagram (which operate under the same corporate entity). These companies are all American-based and have a history of serving as key platforms for disinformation campaigns.

To an extent, these companies are willing to address the national threat of disinformation by implementing teams of security personnel and fact-checkers to monitor posts and advertisements in particular. More emphasis has been put on A.I. technology to track fabricated information. Recently, both Facebook and Twitter committed to removing all false information regarding voting and Holocaust denial, a significant step toward monitoring extreme falsehoods in the content on their platforms.
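In broad terms, the A.I. tracking mentioned above typically builds on supervised text classification. The sketch below is purely illustrative: the toy posts, labels, and review threshold are invented for demonstration and do not represent any platform’s actual system, only the general shape of such a classifier.

```python
# Purely illustrative sketch of the kind of supervised text classifier
# A.I. content tracking builds on. The toy posts, labels, and the 0.5
# threshold are invented for demonstration; real platform systems are
# far larger and proprietary.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-labeled corpus: 1 = known fabricated claim, 0 = benign post.
posts = [
    "BREAKING: secret nationwide lockdown begins at midnight, share now",
    "Officials confirm troops deploying to every major city tonight",
    "The city council meets Tuesday to discuss the new bike lanes",
    "Great turnout at the local farmers market this weekend",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

new_post = "URGENT: troops enforcing a secret lockdown, share before it is deleted"
score = model.predict_proba([new_post])[0][1]  # probability of "fabricated"
if score > 0.5:  # arbitrary review threshold for this sketch
    print(f"flag for human review (score={score:.2f})")
```

In practice such a score would only route content to human fact-checkers rather than trigger automatic removal, since false positives against protected speech carry exactly the First Amendment risks discussed earlier.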

FACEBOOK & INSTAGRAM

Facebook is a social media platform where ‘friends’ accepted by the user share posts to a main timeline for their other friends to see. These posts may be a user’s own or re-shares of others’ posts. Facebook currently has 2.4 billion active users, the majority of them outside of the United States. Its vaunted advertising algorithm and sponsored posts make it an accessible tool for disinformation campaigns. Facebook disclosed that in 2016, over 150 million users interacted with misleading Russian propaganda. These disinformation efforts continued in the wake of the COVID-19 pandemic. Pre-pandemic, Facebook’s Community Standards outlined that the platform would not remove false information but would instead place a warning label on it. However, 95% of users did not click on the warning labels. Since the pandemic began, the platform has started to remove COVID-19-related disinformation, a step in the right direction for future defense. Facebook’s CEO Mark Zuckerberg outlined another COVID-19 effort: directing 2 billion users to authoritative, fact-checked health research. However, only 350 million users, or 17.5%, utilized that option. Although Facebook is taking steps to address disinformation through fact-checking teams, removing the “targeting” category for advertisers, and the other efforts previously mentioned, there is little public transparency about these efforts.

Instagram is a platform under the ownership of Facebook. It is based on a follower model, where users follow accounts of their choosing and engage with their photo-based posts. Users can also share others’ posts through ‘stories’ that sit at the top of a user’s feed. Instagram is another platform used for disinformation, and it abides by many of Facebook’s defense guidelines due to being under the same administration. In 2018, Facebook made efforts to remove Kremlin-linked accounts from its platforms, including 65 fake Instagram accounts. A brief analysis of 28 currently known Kremlin-linked Instagram accounts showed about 2.5 million engagements with Instagram users and 145 million interactions with passive users, meaning users who scrolled past without engaging through ‘likes’ or other actions. Another example of disinformation on Instagram is the “Sudan Meal Project”, in which multiple accounts of the same type, accounting for 400,000 followers in total, claimed to donate $1 for every share of their post via the Instagram stories mentioned previously. These accounts were not backed by any legitimate organizations, and no proof of donations was provided. The goal of this campaign was exposure, hence more followers, and thus a sustained social media presence that could later be used to the operators’ benefit.

TWITTER

Twitter has been an active platform in disinformation, with actors using fake account networks and algorithms to promote false political narratives. The structure of the algorithm facilitates the passing of both accidental and deliberate forms of disinformation. Twitter is built on follower-and-followed relationships, creating two major types of users: talkers and listeners. The ‘talkers’ are accounts with large followings that easily distribute information to their followers, the ‘listeners’. The larger the following, the more reputable a user’s tweets are considered. However, if a talker tweets a falsehood and a listener refutes it, the correction will go unrecognized unless it receives more engagement than the original tweet. On Twitter, timeliness of information is championed over credibility. In some cases, Twitter has flagged tweets that contain violent information, but this does not extend to disinformation, which can be extreme and misleading without being violent. According to the State Department, over 700,000 Twitter accounts were linked to disinformation during the 2016 Presidential election, and 15% of the platform’s total accounts were part of disinformation campaigns. Twitter is implementing some defense mechanisms against disinformation, including labeling political ads with their organization of origin and allowing users to report those ads. Machine learning and AI have been utilized to identify fake accounts, and over 70 million have been suspended. However, 90% of the disinformation on the platform has been traced back to the same 50 news sites, so the problem is far from solved.
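To make the talker/listener dynamic concrete, the minimal sketch below models it directly. The follower and engagement numbers, and the simple “out-engage the original” visibility rule, are assumptions for illustration only, not Twitter’s actual ranking logic.

```python
# Illustrative model of the talker/listener dynamic described above.
# All numbers and the visibility rule are invented for demonstration.
from dataclasses import dataclass

@dataclass
class Tweet:
    author: str
    followers: int   # proxy for how "reputable" the account appears
    engagement: int  # likes + retweets + replies

def correction_recognized(original: Tweet, correction: Tweet) -> bool:
    """A refutation is widely seen only if it out-engages the original."""
    return correction.engagement > original.engagement

falsehood = Tweet(author="talker", followers=2_000_000, engagement=48_000)
rebuttal = Tweet(author="listener", followers=300, engagement=95)

# Prints False: the rebuttal never reaches the talker's audience, so
# the falsehood stands for most readers.
print(correction_recognized(falsehood, rebuttal))
```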

--

These companies have no incentive to prioritize limiting disinformation. In fact, their incentives are aligned with allowing it or even spreading it. Facebook’s vaunted marketing tools ease the spread of disinformation, and advertisements make money by maximizing user engagement. In its defense, Facebook has taken steps to reduce false forms of advertising, creating a warning on certain posts that contain false information, such as a pop-up that appears before a flagged Instagram post can be viewed. But these actions are simply not enough, and disinformation still poses a large threat to democracy as technologies continue to evolve and provide access to malicious foreign actors. Social media companies simply do not hold the threat of disinformation above their revenue.
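The incentive problem can be stated almost mechanically: if a feed is ranked purely by predicted engagement, truthfulness never enters the objective. The toy sketch below, with entirely invented posts and scores, shows that ranking logic; nothing in it penalizes a post for being false.

```python
# Toy illustration of an engagement-ranked feed. All posts and scores
# are invented; the point is that truth is absent from the objective.
posts = [
    {"text": "Shocking falsehood engineered to provoke outrage",
     "predicted_engagement": 0.92, "is_true": False},
    {"text": "Careful correction with cited sources",
     "predicted_engagement": 0.31, "is_true": True},
    {"text": "Routine local news update",
     "predicted_engagement": 0.18, "is_true": True},
]

# Rank purely by predicted engagement -- the revenue-aligned objective.
feed = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

for post in feed:
    print(f'{post["predicted_engagement"]:.2f}  {post["text"]}')
# The false but provocative post tops the feed because the objective
# rewards engagement, not accuracy.
```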

As previously mentioned, greater emphasis should be placed on regulating these platforms without infringing on First Amendment rights, and this can be achieved through a form of publishing law, specifically defamation law. Defamation law addresses false statements made to damage someone’s reputation. Defamation in written form is called libel, while slander is spoken defamation. Currently, media platforms such as publications and broadcasts have no First Amendment protection if slander or libel is found, that is, where false information is presented as fact with intent to defame another person. Social media entities do not strictly abide by defamation laws, since individuals are held responsible for what they present in cyberspace. However, according to Legal-Dictionary, publishers are defined as “organizations that dispense information to the public”. According to Merriam-Webster Dictionary, social media comprises forms “of electronic communication (such as websites for social networking and microblogging) through which users create online communities to share information, ideas, personal messages, and other content (such as videos)”. Broadly, these two definitions intersect in their role of sharing information with the public. To hold social media companies accountable for the rampant disinformation occurring on their platforms, a form of defamation law should be applied under which social media companies are fined if a disinformation campaign is deemed successful.