In a recent case involving Google (Gonzalez v. Google), the Supreme Court will examine the boundaries of free speech and the limits of online organizations' duty to monitor and control their content. The case concerns Nohemi Gonzalez, a young woman killed in the 2015 ISIS terrorist attacks in Paris. Her family claims that YouTube aided and abetted ISIS by hosting the group's videos on its platform. At the heart of the case is Section 230 of the Communications Decency Act of 1996, which shields internet platforms from liability for harmful content posted by third parties on their sites. The plaintiffs contend that Section 230 should not protect harmful content. The Supreme Court is not only reviewing the boundaries of Section 230; the case could also test the limits of the First Amendment, the Decriminalizing Artistic Expression Act, and the RICO Act. Google contends that Section 230 is fundamental to the operation not only of YouTube but of the internet itself. The company states that Section 230 protects it from liability for content posted by users on its site, even when it monitors that content to the best of its ability. The Court of Appeals for the Ninth Circuit agreed with Google.
In various amicus curiae briefs, organizations have warned against using Section 230 to shield social media platforms. The Children's Advocacy Institute (CAI) opposes the protections afforded to social media platforms under Section 230. The CAI questions how companies like Google, Facebook, and TikTok monitor and regulate content using AI. It states that companies use AI recommendation engines to “keep users online as long as possible to maximize user engagement.” The CAI contends that Google designs its algorithms for business outcomes rather than “traditional editorial functions.” As such, YouTube cannot enjoy immunity under Section 230, since it is “responsible . . . in part for the creation and development of information.”
In its briefing, Google asks the Supreme Court to uphold the Court of Appeals' decision. It notes that sites such as YouTube receive approximately 720,000 hours of new content per day. (Melvin M. Vopson, The World’s Data Explained, Conversation (May 4, 2021), https://bit.ly/3XaIZ7i.) To manage this vast amount of third-party data, the company must create and use algorithms to organize information and make it “accessible and useful.” (Google mission statement.)
Amici curiae supporting YouTube have likewise stressed the importance of Section 230 in providing immunity to intermediaries for the display of third-party content. Many organizations state that the protections afforded by Section 230 enable online creators to reach audiences, established and new, on the internet. For social media companies and the individuals who use them, current law promotes the internet as a democratic space that advances free expression, creativity, and innovation. Many in the industry are concerned that altering intermediary-liability protections could discourage social media platforms from hosting and promoting independent content. Such changes could affect artists' free speech and their ability to monetize content.
Jonathan Rauch states that society “is not self-maintaining; it relies on an array of sometimes delicate social settings and understandings, and those need to be understood, affirmed, and protected.” (The Constitution of Knowledge.) Similarly, social scientists observe that successful democracies rest on three collective forces: high levels of trust within social networks, shared stories, and stable institutions. Early incarnations of social platforms, such as Myspace and Facebook, allowed users to create pages consisting of personal information, photos, and links to the websites of their friends or favorite pastimes. In 2009, Facebook and Twitter began using public endorsements via likes, retweets, and share buttons. YouTube and other social media organizations changed the traditional top-down creation/distribution model to a user/creator model.

YouTube tells creators that it wants “[t]o make sure [it] reward[s] good creators” by reviewing their channels. While providing creators the opportunity to monetize their content, the company also acts as an overseer, “constantly review[ing] channels to make sure your content is in line with our policies.” Google reserves the right to enforce community guidelines on its platform and to “remove content . . . if it violates Google policies.” Kent Walker, Senior Vice President for Global Affairs and Chief Legal Officer at Google, states that “companies that act reasonably in helping rights holders identify and control the use of their content shouldn’t be held liable for anything a user uploads, any more than a telephone company should be liable for the content of conversations.” (https://blog.google/around-the-globe/google-europe/now-time-fix-eu-copyright-directive/)
Each day, users worldwide generate over 500 million tweets, 294 billion emails, 4 million gigabytes of Facebook data, and 720,000 hours of new YouTube content. (Melvin M. Vopson, The World’s Data Explained, Conversation (May 4, 2021), https://bit.ly/3XaIZ7i.) To monitor this vast flow of data, platforms such as YouTube will continue to rely on AI for the foreseeable future. The case before the Supreme Court will determine whether Google must monitor its content with greater vigilance, with or without AI.