IN BRUSSELS, AN AGREEMENT IN PRINCIPLE HAS BEEN REACHED ON THE REGULATION OF AI IN EUROPE

Image: Cointelegraph

As reported in the Future of Life Institute's short biweekly newsletter, news has arrived from Brussels: there is an agreement on the regulation of Artificial Intelligence in Europe.

  • Risto Uuk, EU Research Lead at the Future of Life Institute, reports that the European Parliament and the Council have reached a provisional agreement on the AI Act. The legislation covers safeguards for general-purpose AI, limits on the use of biometric identification systems by law enforcement, bans on social scoring and manipulative AI, and a right for consumers to file complaints. Fines for non-compliance range from €7.5 million or 1.5% of global turnover to €35 million or 7% of global turnover (a worked example follows this list). Specifically prohibited applications include biometric categorization systems that use sensitive characteristics, untargeted scraping of facial images, emotion recognition in workplaces and educational institutions, social scoring based on social behavior or personal characteristics, and AI systems that manipulate human behavior or exploit vulnerabilities. Law enforcement exemptions for biometric identification systems are subject to prior judicial authorization and are limited to targeted searches connected to specific crimes or threats. High-risk AI systems face mandatory fundamental rights impact assessments, which also apply to sectors such as insurance and banking. General-purpose AI (GPAI) models and systems must meet transparency requirements, including technical documentation and compliance with EU copyright law; GPAI models posing systemic risk must additionally conduct model evaluations, assess and mitigate systemic risks, perform adversarial testing, report serious incidents to the Commission, ensure cybersecurity, and report on energy efficiency. The legislation supports innovation and SMEs through regulatory sandboxes and real-world testing. The agreement still awaits formal adoption by both Parliament and the Council to become EU law.
  • European Commission President Ursula von der Leyen has welcomed the political agreement as a historic moment. Von der Leyen says it is the world's first comprehensive legal framework on AI, prioritizing safety and fundamental rights while supporting the development and deployment of responsible, transparent and human-centred AI. In anticipation of the new rules taking effect, around 100 companies have already expressed interest in joining the voluntary AI Pact. Von der Leyen emphasizes the EU's commitment to global AI governance, highlighting efforts in international forums such as the G7, the OECD, the Council of Europe, the G20 and the UN.
  • New York Times technology correspondent Adam Satariano reports that the political agreement is one of the world's first attempts to address the social and economic impacts of this rapidly evolving technology. Satariano says the legislation aims to set a global benchmark by balancing the benefits of AI against potential risks such as job automation, disinformation and national security threats. He warns that while the Act is hailed as a regulatory breakthrough, concerns remain over its effectiveness, with some provisions not expected to take effect for 12 to 24 months. The agreement, reached after three days of negotiations in Brussels, awaits approval by votes in the European Parliament and the Council. The law, which affects leading AI developers as well as companies in education, healthcare and banking, will be closely watched around the world, influencing the trajectory of AI development and its economic implications. Enforcement challenges are anticipated, including regulatory coordination across 27 countries and potential legal disputes.
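
To make the fine arithmetic concrete, the sketch below computes the ceiling for the two tiers named above. It assumes, as widely reported for the provisional agreement, that the applicable ceiling is the higher of the flat amount and the turnover percentage, and that the top tier applies to prohibited practices while the bottom tier applies to supplying incorrect information; the final text may assign the tiers differently.

```python
# Hedged sketch of the AI Act fine ceilings described above.
# Assumptions: the ceiling is the higher of a flat amount and a share of
# global annual turnover, and the tier names follow public reporting on
# the provisional agreement rather than the final legal text.

FINE_TIERS_EUR = {
    "prohibited_practice":   (35_000_000, 0.070),  # top tier named in the article
    "incorrect_information": (7_500_000,  0.015),  # bottom tier named in the article
}

def fine_ceiling(tier: str, global_turnover_eur: float) -> float:
    """Return the maximum fine for a violation tier, given global annual turnover."""
    flat_amount, turnover_share = FINE_TIERS_EUR[tier]
    return max(flat_amount, turnover_share * global_turnover_eur)

# A provider with EUR 2 billion global turnover committing a prohibited practice:
# 7% of turnover (EUR 140M) exceeds the EUR 35M flat amount, so EUR 140M applies.
print(f"EUR {fine_ceiling('prohibited_practice', 2_000_000_000):,.0f}")
```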

In an analysis of the agreement, experts Rishi Bommasani, Tatsunori Hashimoto, Daniel E. Ho, Marietje Schaake and Percy Liang reach the following conclusions:

  • While a political agreement has been reached on the AI Act, many technical details remain to be resolved in the coming months. Researchers at Stanford University published a detailed proposal for the regulation of foundation models ahead of last week's trilogue meetings. Among the paper's many technical details, some takeaways follow. The authors propose proportionate disclosure requirements for foundation model providers that are companies or otherwise well-resourced entities, with exemptions for low-resource entities such as students, hobbyists, and non-commercial academic groups. Regarding the disclosure of computing information, they argue that the amount of compute should be reported in FLOPs, that the amount of hardware should be reported in relevant hardware units (e.g., the number of NVIDIA A100-80GB GPUs), and that standard-setting bodies should be required to establish standards for how training time is measured. Regarding environmental information, they note that measuring energy use and emissions directly will likely require information about the specific data center, or about who operates the hardware and at which location; this should allow reasonable estimates of energy and emissions to be calculated (a sketch of such estimates follows this list).
  • Many interest groups have published statements in light of the political agreement on the AI Act. For example, the civil society organization AlgorithmWatch says EU lawmakers have introduced key safeguards into the Act to protect fundamental rights, improving on the European Commission's original 2021 draft. It adds that advocacy efforts by EU civil society led to mandatory fundamental rights impact assessments and public transparency duties for high-risk AI systems. However, gaps remain, such as AI developers themselves determining high-risk status, and exceptions for national security, law enforcement, and migration contexts. The trade association DIGITALEUROPE says that while agreement has been reached on the Act, concerns are now being raised about the last-minute regulation of foundation models, which diverts attention from a risk-based approach. It adds that the new requirements, together with laws such as the Data Act, may divert resources towards legal and compliance work, hampering SMEs that are not familiar with product legislation. Despite these concerns, it acknowledges that the AI Act, if implemented effectively, can produce positive results for AI adoption and innovation in Europe.
  • The European DIGITAL SME Alliance has published a statement supporting the tiered regulation of foundation models to protect SME innovation. The alliance supports exempting ICT startups and medium-sized companies that develop foundation models, but advocates regulation for large dominant providers. It argues that these major players, often large tech companies, supply smaller developers with large foundation models and should be subject to third-party conformity assessments to ensure a fair distribution of responsibility. This approach aims to prevent downstream users, especially SMEs, from bearing excessive compliance costs, and to reduce market entry barriers for smaller entities. To avoid over-regulating a digital economy composed mainly of SMEs, the alliance proposes clear criteria defining "very large foundation models", potentially based on computing power and the number of end users and business users, ensuring a balance between safety and innovation (a sketch of such criteria follows this list).
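
To make the Stanford proposal's disclosure quantities concrete, here is a minimal sketch of how training compute, energy, and emissions estimates could be derived. Every formula and number below is an illustrative assumption (the common 6-FLOPs-per-parameter-per-token heuristic for dense transformers, a nominal 400 W per NVIDIA A100-80GB, a PUE of 1.2, and a grid intensity of 0.3 kg CO2/kWh); the proposal calls for such disclosures but does not prescribe these values.

```python
# Illustrative estimates of the disclosure quantities discussed above:
# training compute in FLOPs, plus energy/emissions from data-center details.

def training_flops(params: float, tokens: float) -> float:
    """Rough dense-transformer training compute: ~6 FLOPs per parameter per token."""
    return 6 * params * tokens

def energy_kwh(gpu_count: int, gpu_power_kw: float, hours: float, pue: float) -> float:
    """Energy estimate: device power x device count x wall-clock hours x data-center PUE."""
    return gpu_count * gpu_power_kw * hours * pue

def emissions_tco2(kwh: float, grid_kgco2_per_kwh: float) -> float:
    """Emissions estimate from energy use and the local grid's carbon intensity."""
    return kwh * grid_kgco2_per_kwh / 1000

flops = training_flops(params=7e9, tokens=2e12)      # e.g., a 7B model on 2T tokens
kwh = energy_kwh(gpu_count=1024, gpu_power_kw=0.4,   # assumed A100-80GB at ~400 W
                 hours=30 * 24, pue=1.2)             # 30 days wall clock, PUE 1.2
print(f"{flops:.2e} FLOPs, {kwh:,.0f} kWh, {emissions_tco2(kwh, 0.3):.1f} tCO2")
```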
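
In the same spirit, the tiering criteria suggested by the DIGITAL SME Alliance could be expressed as simple thresholds. The numbers below are hypothetical: the statement proposes computing power and the number of end users and business users as possible criteria but does not fix values (public reporting on the agreement mentions a 10^25-FLOP training-compute presumption for systemic-risk GPAI models, which the compute threshold here borrows).

```python
# Hypothetical "very large foundation model" check based on the criteria the
# DIGITAL SME Alliance suggests. All thresholds are invented for illustration.

from dataclasses import dataclass

@dataclass
class FoundationModel:
    training_flops: float   # total training compute
    end_users: int          # consumers using the model or systems built on it
    business_users: int     # downstream companies building on the model

def is_very_large(model: FoundationModel,
                  flop_threshold: float = 1e25,          # assumed compute criterion
                  end_user_threshold: int = 45_000_000,  # assumed user criterion
                  business_user_threshold: int = 10_000  # assumed business criterion
                  ) -> bool:
    """Classify a model as 'very large' if it exceeds any single criterion."""
    return (model.training_flops >= flop_threshold
            or model.end_users >= end_user_threshold
            or model.business_users >= business_user_threshold)

print(is_very_large(FoundationModel(5e25, 1_000_000, 200)))  # True via compute criterion
```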

Source: The EU AI Act Newsletter

The article can also be read in Spanish at this link.

