- Tech giants are lobbying for leniency in the EU's AI Act to avoid potential billion-dollar penalties.
- The AI code of practice, set to take effect next year, will serve as a compliance checklist for firms.
- The AI Act has sparked a debate over data scraping and the requirement for detailed summaries of the data used to train AI models.
- Tech companies will have until August 2025 before their compliance efforts are measured against the code; non-profit organizations have also applied to help draft it.
The world's leading technology companies are pressing the European Union to take a lenient approach to regulating artificial intelligence (AI), in an effort to avoid the risk of billions of dollars in penalties. In May, EU lawmakers reached agreement on the AI Act, the world's first comprehensive set of rules governing AI technology, capping months of intense negotiations among various political factions.
However, the specifics of how the rules surrounding general purpose AI systems, such as OpenAI's ChatGPT, will be enforced remain uncertain until the law's accompanying codes of practice are finalized. That uncertainty extends to how many copyright lawsuits and multi-billion dollar fines companies may face. The EU has invited companies, academics, and other stakeholders to contribute to the drafting of the code of practice. The response has been overwhelming: nearly 1,000 applications have been received, an unusually high number according to a source familiar with the matter who asked not to be named.
The AI Code of Practice and its Implications
The AI code of practice, set to take effect late next year, will not be legally binding. However, it will serve as a compliance checklist for firms. Any company claiming to adhere to the law while disregarding the code could potentially face a legal challenge. Boniface de Champris, a senior policy manager at trade organization CCIA Europe, whose members include tech giants Amazon, Google, and Meta, emphasized the importance of the code of practice. "If we get it right, we will be able to continue innovating," he stated. He further cautioned that if the code is too narrow or too specific, it could pose significant challenges.
The AI Act has also sparked a debate over data scraping. Companies like Stability AI and OpenAI have been questioned about the legality of using bestselling books or photo archives to train their models without the creators' permission, which could potentially constitute a copyright breach. The AI Act requires companies to provide detailed summaries of the data used to train their models. In theory, a content creator who discovers their work has been used to train an AI model could seek compensation, although this is currently being tested in the courts.
The requirement for detailed summaries has elicited mixed reactions from business leaders. Some argue that these summaries should contain minimal details to protect trade secrets, while others advocate for the rights of copyright holders to know if their content has been used without permission. OpenAI, which has faced criticism for its refusal to answer questions about the data used to train its models, has also applied to join the working groups.
Tech Giants' Involvement and Future Implications
Google has submitted an application as well, and Amazon said it hoped to "contribute our expertise and ensure the code of practice succeeds." Maximilian Gahntz, AI policy lead at the Mozilla Foundation, expressed concern about companies' attempts to evade transparency, stating, "The AI Act presents the best chance to shine a light on this crucial aspect and illuminate at least part of the black box."
The EU's prioritization of tech regulation over innovation has drawn criticism from some business quarters. Those tasked with drafting the text of the code of practice will strive for a compromise. Last week, former European Central Bank chief Mario Draghi advised the bloc to adopt a better coordinated industrial policy, expedite decision-making, and increase investment to keep pace with China and the United States.
Thierry Breton, a vocal advocate of EU regulation and critic of non-compliant tech companies, recently resigned from his role as European Commissioner for the Internal Market, following a disagreement with Ursula von der Leyen, the president of the bloc's executive arm. Amid growing protectionism within the EU, local tech companies are hoping for carve-outs in the AI Act to benefit emerging European firms.
Once the code is published in the first part of next year, tech companies will have until August 2025 before their compliance efforts start being measured against it. Non-profit organizations, including Access Now, the Future of Life Institute, and Mozilla, have also applied to help draft the code. Gahntz warned, "As we enter the stage where many of the AI Act's obligations are spelled out in more detail, we have to be careful not to allow the big AI players to water down important transparency mandates."