As the digital sphere becomes increasingly dominant, the interplay between freedom of expression, internet governance, and the regulation of the digital space is growing ever more relevant in social, academic, and political debates. Internet governance comprises the collective principles, norms, rules, and decision-making processes that shape the development and use of the digital space. While major actors address these issues at the international level, there is a clear chasm between the approaches of major political players such as the EU, the US, and China, especially with regard to geopolitical positioning towards artificial intelligence (AI). As AI becomes increasingly relevant, its growing use has fed the debate around its governance and regulation1. Legal questions have centred mostly on cases of intellectual property (IP), with a specific focus on copyright.
Before a generative AI system can produce content, it must undergo a process known as ‘unsupervised learning’. This form of machine learning uses data that lacks predefined labels or classifications; its goal is to discover patterns and connections within the data2. Amid the growing debate around the regulation of AI and its sources of data, in December 2023 the New York Times (NYT) filed a lawsuit against OpenAI and Microsoft over the unauthorised use of its articles to ‘train’ or ‘feed’ GPT language models3. This case could significantly influence the intersection of generative AI and copyright law, particularly concerning fair use, and may ultimately shape the methods and legality of AI model development. NYT v. OpenAI and Microsoft has sparked debate over the collection of information and data from open sources that should traditionally (and legally) be subject to international copyright law.
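To make the idea of unsupervised learning concrete, the following minimal sketch runs a tiny k-means clustering (a classic unsupervised technique) on unlabelled one-dimensional data. It is purely illustrative and does not reflect how GPT-style models are actually trained; all data points and function names here are invented for the example.

```python
# Illustrative sketch: unsupervised learning discovers structure in
# unlabelled data. A tiny k-means (k = 2) groups 1-D points into
# clusters without any predefined labels or categories.

def kmeans_1d(points, k=2, iterations=20):
    """Cluster unlabelled 1-D points into k groups by proximity."""
    # Initialise centroids with the first k distinct sorted values.
    centroids = sorted(set(points))[:k]
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest centroid.
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Unlabelled data: two natural groups, but nothing in the input says so.
data = [1.0, 1.2, 0.8, 9.0, 9.3, 8.7]
centroids, clusters = kmeans_1d(data)
print(sorted(round(c, 1) for c in centroids))  # → [1.0, 9.0]
```

The algorithm recovers the two underlying group centres purely from the shape of the data, which is the sense in which unsupervised learning ‘discovers patterns’ rather than being told them.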
Hence, AI processes are becoming steadily more complex, sparking public controversy over their application4. AI does not only affect the digital space and technological innovation; it also shapes, inter alia, security, economic development, human rights, and geopolitical dynamics5. While the EU aims to become the frontrunner in the global governance of AI, certain threats and challenges need to be taken into consideration.
In December 2020, the European Commission presented its proposal for the Digital Services Act (DSA), aiming to govern online platforms, including marketplaces, social networks, and content-sharing sites. Its primary objectives are to prevent illegal and harmful online activities and to curb the spread of disinformation. The DSA seeks to ensure user safety, uphold fundamental rights, and promote a fair and open environment for online platforms6. Given that the DSA aims to curb disinformation whilst upholding fundamental rights, the possible implications of such regulation for media pluralism and freedom of expression merit close attention. Notably, the DSA explicitly mentions the protection of fundamental rights, yet its role becomes challenging as AI takes over the digital space.
The DSA’s proposals do include several safeguards for freedom of expression. For example, users must specify why they consider notified content to be illegal (Article 14.2(a)). In practice, this means that someone complaining about a defamatory post should clarify why it is not protected under certain defences, such as fair comment. Additionally, online hosts are required to provide a statement of reasons for any decision they make (Article 15). Online platforms must also implement measures against misuse, such as dealing with unfounded notices (Article 20.2). However, many hosting providers and platforms are likely to lack the resources to employ teams of lawyers to scrutinise the provisions of Article 15; as a result, they may find it easier to remove content to minimise the risk of liability.
Since Article 14 of the DSA grants hosting providers the authority to determine the legality of content once they receive a substantiated notice of alleged illegality (Article 14.3), hosting providers are strongly motivated to remove content upon receiving such notices7. What is worrisome is that the DSA allows social media companies to make decisions to suspend and/or block content, like the decisions Facebook and Twitter took to indefinitely suspend then-President Trump’s accounts following the January 6, 2021 U.S. Capitol attack. Hence, while safeguards for freedom of expression exist de jure, in practice private companies retain broad discretion over user activity, and de facto users may often resort to self-censorship to avoid disputes.
Following the DSA, in 2023 the EU launched the AI Act, the world’s first comprehensive AI law, addressing the threats of AI and setting specific requirements for the use of AI applications by both developers and users8. The AI Act classifies AI systems by risk level and affects how content is used and distributed in two key ways. First, it bans certain AI-driven practices that could manipulate individuals. Second, it places extra requirements on those using AI to generate synthetic content, such as deepfakes9. Although the AI Act does not specifically highlight freedom of expression as a human right it aims to protect across AI systems, some of its provisions offer safeguards against certain types of AI-driven manipulation that undermine the right to hold opinions, a fundamental aspect of freedom of expression10. However, the requirements needed to prove a breach of these provisions may reduce their overall impact and may themselves infringe freedom of expression online.
Obligations primarily fall on providers of high-risk AI, who must comply with strict rules to operate on the EU market. Deployers of high-risk AI also have responsibilities, although less stringent ones. For general-purpose AI, providers must supply technical documentation, comply with copyright law, and disclose summaries of training data. Providers of models presenting systemic risk must conduct evaluations and adversarial testing and ensure cybersecurity protections11. The DSA and the AI Act both address, inter alia, the regulation of profiling. The DSA prohibits advertisements that rely on profiling using special categories of data defined by the EU’s General Data Protection Regulation (GDPR) (Article 26(3)). It also bans targeted advertising to minors based on profiling, regardless of the type of data involved (Article 28(2))12.
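The tiered structure described above can be sketched as a simple lookup: a system’s risk tier determines which obligations attach to it. This is a hedged, informal paraphrase for illustration only; the tier labels follow the Act’s broad categories, but the obligation strings are summaries invented for this sketch, not legal text.

```python
# Informal sketch of the AI Act's tiered logic: obligations scale with
# the risk tier of a system. Obligation summaries are paraphrases for
# illustration, not quotations from the Regulation.

OBLIGATIONS = {
    "prohibited": ["banned from the EU market"],
    "high-risk": ["conformity with strict rules", "deployer duties (lighter)"],
    "general-purpose": ["technical documentation", "copyright compliance",
                        "training-data summary"],
    "gp-systemic-risk": ["model evaluation", "adversarial testing",
                         "cybersecurity protections"],
    "minimal": [],
}

def obligations_for(tier):
    """Return the paraphrased obligations attached to a risk tier."""
    return OBLIGATIONS.get(tier, [])

print(obligations_for("general-purpose"))
```

The design point the sketch captures is that the Act regulates by category rather than case by case: once a system is classified, its compliance burden follows mechanically from the tier.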
While regulations such as the DSA and the AI Act are intended to safeguard users and provide transparency, they may have serious consequences for freedom of expression on the internet. These policies, which impose limitations on AI-driven content moderation and prohibit certain types of tailored advertising, may unintentionally limit the diversity of voices and perspectives online. Overregulation can cause platforms and AI developers to err on the side of caution, removing or banning information that is legal but viewed as harmful or controversial. In this respect, limiting certain content to avoid disinformation infringes users’ right to information, an important part of freedom of expression, which encompasses not only the freedom to exchange ideas but also the freedom to seek and receive information from diverse sources. When content is removed or restricted in the name of combating disinformation, there is a risk of overreach, as valid content, diverse perspectives, and critical discourse are silenced.
References
Article 19 (2021), At a glance: Does the EU Digital Services Act protect freedom of expression?, Retrieved from https://www.article19.org/resources/does-the-digital-services-act-protect-freedom-of-expression/
Burnay, Matthieu and Alexandru Circiumaru. “The AI global order: What place for the European Union?”, in Contestation and Polarization in Global Governance: European Responses, edited by Michelle Egan, Kolja Raube, Jan Wouters, and Julien Chaisse, 264–281. (Cheltenham: Edward Elgar, 2023)
Cabrera, Laura Lazaro (2024). “EU AI Act Brief – Pt. 3, Freedom of Expression”, Center for Democracy & Technology, Retrieved from https://cdt.org/insights/eu-ai-act-brief-pt-3-freedom-of-expression/#:~:text=The%20AI%20Act%20impacts%20the,create%20synthetic%20content%2C%20including%20deepfakes.
Diplo Foundation (n.d.), Internet governance and digital policy, Retrieved from https://www.diplomacy.edu/topics/internet-governance-and-digital-policy/
Global Partnership on Artificial Intelligence, (n.d.), About: GPAI, Retrieved from https://gpai.ai/about/
European Commission (n.d.), The Digital Services Act, Retrieved from https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age/digital-services-act_en
European Commission, “Shaping Europe’s digital future”, 06 March 2024, accessed 11 April 2024, https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
Polo-Jansen, Anna (2024), Ties between generative artificial intelligence and intellectual property rights, Retrieved from https://www.diplomacy.edu/blog/ties-between-generative-artificial-intelligence-and-intellectual-property-rights/
Pope, Audrey (2024), NYT v. OpenAI: The Times’s About-Face, Harvard Law Review, Retrieved from https://harvardlawreview.org/blog/2024/04/nyt-v-openai-the-timess-about-face/
Wulf, Alexander J. and Ognyan Seizov, “Artificial Intelligence and Transparency: A Blueprint for Improving the Regulation of AI Applications in the EU”, European Business Law Review, Issue 4, (2020): 611-640.
Footnotes
1. Polo-Jansen, Anna (2024), Ties between generative artificial intelligence and intellectual property rights, Retrieved from https://www.diplomacy.edu/blog/ties-between-generative-artificial-intelligence-and-intellectual-property-rights/
2. Polo-Jansen, Anna (2024), Ties between generative artificial intelligence and intellectual property rights, Retrieved from https://www.diplomacy.edu/blog/ties-between-generative-artificial-intelligence-and-intellectual-property-rights/
3. Audrey Pope (2024), NYT v. OpenAI: The Times’s About-Face, Harvard Law Review, Retrieved from https://harvardlawreview.org/blog/2024/04/nyt-v-openai-the-timess-about-face/
4. Alexander J. Wulf and Ognyan Seizov, “Artificial Intelligence and Transparency: A Blueprint for Improving the Regulation of AI Applications in the EU”, European Business Law Review, Issue 4, (2020): 611-640.
5. Burnay, Matthieu and Alexandru Circiumaru. “The AI global order: What place for the European Union?”, in Contestation and Polarization in Global Governance: European Responses, edited by Michelle Egan, Kolja Raube, Jan Wouters, and Julien Chaisse, 264–281. (Cheltenham: Edward Elgar, 2023)
6. European Commission (n.d.), The Digital Services Act, Retrieved from https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age/digital-services-act_en
7. Article 19, (2021) At a glance: Does the EU Digital Services Act protect freedom of expression?, Retrieved from https://www.article19.org/resources/does-the-digital-services-act-protect-freedom-of-expression/
8. European Commission, “Shaping Europe’s digital future”, 06 March 2024, accessed 11 April 2024, https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
9. Laura Lazaro Cabrera (2024). “EU AI Act Brief – Pt. 3, Freedom of Expression”, Center for Democracy & Technology, Retrieved from https://cdt.org/insights/eu-ai-act-brief-pt-3-freedom-of-expression/#:~:text=The%20AI%20Act%20impacts%20the,create%20synthetic%20content%2C%20including%20deepfakes.
10. Laura Lazaro Cabrera (2024). “EU AI Act Brief – Pt. 3, Freedom of Expression”, Center for Democracy & Technology, Retrieved from https://cdt.org/insights/eu-ai-act-brief-pt-3-freedom-of-expression/#:~:text=The%20AI%20Act%20impacts%20the,create%20synthetic%20content%2C%20including%20deepfakes.
11. Laura Lazaro Cabrera (2024). “EU AI Act Brief – Pt. 3, Freedom of Expression”, Center for Democracy & Technology, Retrieved from https://cdt.org/insights/eu-ai-act-brief-pt-3-freedom-of-expression/#:~:text=The%20AI%20Act%20impacts%20the,create%20synthetic%20content%2C%20including%20deepfakes.
12. Laura Lazaro Cabrera (2024). “EU AI Act Brief – Pt. 3, Freedom of Expression”, Center for Democracy & Technology, Retrieved from https://cdt.org/insights/eu-ai-act-brief-pt-3-freedom-of-expression/#:~:text=The%20AI%20Act%20impacts%20the,create%20synthetic%20content%2C%20including%20deepfakes.