


Representatives of EU member governments approved the EU AI Act this month. Credit: Jonathan Raa/NurPhoto via Getty

European Union countries are poised to adopt the world’s first comprehensive set of laws to regulate artificial intelligence (AI). The EU AI Act puts its toughest rules on the riskiest AI models, and is designed to ensure that AI systems are safe and respect fundamental rights and EU values.

“The act is enormously consequential, in terms of shaping how we think about AI regulation and setting a precedent,” says Rishi Bommasani, who researches the societal impact of AI at Stanford University in California.

The legislation comes as AI develops apace. This year is expected to see the launch of new versions of generative AI models — such as GPT, which powers ChatGPT, developed by OpenAI in San Francisco, California — and existing systems are being used in scams and to propagate misinformation. China already uses a patchwork of laws to guide commercial use of AI, and US regulation is under way. Last October, President Joe Biden signed the nation’s first AI executive order, requiring federal agencies to take action to manage the risks of AI.

EU nations’ governments approved the legislation on 2 February, and the law now needs final sign-off from the European Parliament, one of the EU’s three legislative branches; this is expected to happen in April. If the text remains unchanged, as policy watchers expect, the law will enter into force in 2026.

Some researchers have welcomed the act for its potential to encourage open science, whereas others worry that it could stifle innovation. Nature examines how the law will affect research.

What is the EU’s approach?

The EU has chosen to regulate AI models on the basis of their potential risk, by applying stricter rules to riskier applications and outlining separate regulations for general-purpose AI models, such as GPT, which have broad and unpredictable uses.

The law bans AI systems that carry ‘unacceptable risk’, for example those that use biometric data to infer sensitive characteristics, such as people’s sexual orientation. High-risk applications, such as using AI in hiring and law enforcement, must fulfil certain obligations; for example, developers must show that their models are safe, transparent and explainable to users, and that they adhere to privacy regulations and do not discriminate. For lower-risk AI tools, developers will still have to tell users when they are interacting with AI-generated content. The law applies to models operating in the EU, and any firm that violates the rules risks a fine of up to 7% of its annual global turnover.

“I think it’s a good approach,” says Dirk Hovy, a computer scientist at Bocconi University in Milan, Italy. AI has quickly become powerful and ubiquitous, he says. “Putting a framework up to guide its use and development makes absolute sense.”

Some don’t think the laws go far enough, leaving “gaping” exemptions for military and national-security purposes, as well as loopholes for AI use in law enforcement and migration, says Kilian Vieth-Ditlmann, a political scientist at AlgorithmWatch, a Berlin-based non-profit organization that studies the effects of automation on society.

How much will it affect researchers?

In theory, very little. Last year, the European Parliament added a clause to the draft act that would exempt AI models developed purely for research, development or prototyping. The EU has worked hard to make sure that the act doesn’t affect research negatively, says Joanna Bryson, who studies AI and its regulation at the Hertie School in Berlin. “They really don’t want to cut off innovation, so I’d be astounded if this is going to be a problem.”


The European Parliament must give the final green light to the law. A vote is expected in April. Credit: Jean-Francois Badias/AP via Alamy

But the act is still likely to have an effect, by making researchers think about transparency, how they report on their models and potential biases, says Hovy. “I think it will filter down and foster good practice,” he says.

Robert Kaczmarczyk, a physician at the Technical University of Munich in Germany and co-founder of LAION (Large-scale Artificial Intelligence Open Network), a non-profit organization aimed at democratizing machine learning, worries that the law could hinder small companies that drive research, and which might need to establish internal structures to adhere to the laws. “To adapt as a small company is really hard,” he says.

What does it mean for powerful models such as GPT?

After heated debate, policymakers chose to regulate powerful general-purpose models — such as the generative models that create images, code and video — in their own two-tier category.

The first tier covers all general-purpose models, except those used only in research or published under an open-source licence. These will be subject to transparency requirements, including detailing their training methodologies and energy consumption, and must show they respect copyright laws.

The second, much stricter, tier will cover general-purpose models deemed to have “high-impact capabilities”, which pose a higher “systemic risk”. These models will be subject to “some pretty significant obligations”, says Bommasani, including stringent safety testing and cybersecurity checks. Developers will be made to release details of their architecture and data sources.

For the EU, ‘big’ effectively equals dangerous: any model that uses more than 10²⁵ FLOPs (floating-point operations, a measure of computation) in training qualifies as high impact. Training a model with that amount of computing power costs between US$50 million and $100 million, so it is a high bar, says Bommasani. It should capture models such as GPT-4, OpenAI’s current model, and could include future iterations of Meta’s open-source rival, LLaMA. Open-source models in this tier are subject to regulation, although research-only models are exempt.
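To give a rough sense of how such a compute threshold might be estimated in practice, the sketch below uses the common rule of thumb that training a dense transformer takes roughly 6 × parameters × training tokens floating-point operations. Both that heuristic and the example figures are illustrative assumptions; they are not part of the act.

```python
# Illustrative sketch: comparing a model's estimated training compute with the
# EU AI Act's 10^25-FLOP threshold for "high-impact" general-purpose models.
# The 6 * N * D approximation and the example numbers are assumptions for
# illustration only, not anything specified by the act itself.

THRESHOLD_FLOPS = 1e25  # compute threshold cited for the high-impact tier

def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Rough training compute for a dense transformer: ~6 FLOPs per parameter per token."""
    return 6.0 * n_parameters * n_tokens

# Hypothetical model: 200 billion parameters trained on 10 trillion tokens.
flops = estimated_training_flops(n_parameters=200e9, n_tokens=10e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Above the high-impact threshold?", flops > THRESHOLD_FLOPS)
```

On these illustrative numbers the estimate lands just above the threshold, consistent with Bommasani’s point that only the very largest training runs, costing tens of millions of dollars, would qualify.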

Some scientists are against regulating AI models, preferring to focus on how they’re used. “Smarter and more capable does not mean more harm,” says Jenia Jitsev, an AI researcher at the Jülich Supercomputing Centre in Germany and another co-founder of LAION. Basing regulation on any measure of capability has no scientific basis, adds Jitsev. They use the analogy of defining as dangerous all chemistry that uses a certain number of person-hours. “It’s as unproductive as this.”

Will the act bolster open-source AI?

EU policymakers and open-source advocates hope so. The act incentivizes making AI information available, replicable and transparent, which is almost like “reading off the manifesto of the open-source movement”, says Hovy. Some models are more open than others, and it remains unclear how the language of the act will be interpreted, says Bommasani. But he thinks legislators intend general-purpose models, such as LLaMA-2 and those from start-up Mistral AI in Paris, to be exempt.

The EU’s approach of encouraging open-source AI is notably different from the US strategy, says Bommasani. “The EU’s line of reasoning is that open source is going to be vital to getting the EU to compete with the US and China.”

How is the act going to be enforced?

The European Commission will create an AI Office to oversee general-purpose models, advised by independent experts. The office will develop ways to evaluate the capabilities of these models and monitor related risks. But even if companies such as OpenAI comply with regulations and submit, for example, their enormous data sets, Jitsev questions whether a public body will have the resources to scrutinize submissions adequately. “The demand to be transparent is very important,” they say. “But there was little thought spent on how these procedures have to be executed.”


