Author: Iana Kazeeva
The world’s first comprehensive law regulating artificial intelligence, the EU Artificial Intelligence Act, was enacted on 13 June 2024 and entered into force on 1 August 2024. The AI Act aims to provide transparency and ensure the safe use of AI systems by imposing obligations and requirements on developers and deployers according to the risk posed by an AI system. Despite the lengthy legislative process, launched in 2020, and multiple rounds of negotiations, the final version of the Act contains a number of controversial provisions that undermine both the concept of open source and the future of open source AI systems. For instance, the AI Act exempts certain AI systems from its scope, one exemption covering “AI systems released under free and open source licenses” (Article 2(12)). However, the Act fails to define such systems, leaving it to the open source community to decide on the exact criteria an AI system must meet to qualify as open source. The concept of open source is further misinterpreted: the Act sets no meaningful training dataset transparency obligations, limiting the disclosure requirement to “a summary about the content” of the training data. Furthermore, the Act carves monetized open source AI models out of the exemptions, thereby misinterpreting the concept of “free” software. This research paper provides a critical and detailed analysis of the EU AI Act’s provisions on open source AI systems and develops suggestions for filling the legal gaps in the Act’s treatment of open source AI. While focusing mainly on the EU AI Act, the paper also examines legislative approaches to open source AI systems in other jurisdictions and analyzes the problem of disclosing information about AI systems from both transparency- and safety-related perspectives.