Partner Keith Enright Speaks with Bloomberg Law About the Challenges of Implementing the EU Artificial Intelligence Act
In the Media | April 8, 2025
World’s Broadest AI Law Pushes Legal Teams on Managing Risk
Bloomberg Law
By Isabel Gottlieb
April 8, 2025
The world’s most comprehensive AI regulation went into effect earlier this year in the European Union, affecting corporations in the US and around the world that are using or selling artificial intelligence systems in the EU.
The AI Act aims to protect consumers from the technology’s harms by focusing on risk to individuals. Higher-risk uses face more reporting obligations and require companies to build more safety measures than lower-risk applications.
For example, using AI to determine whether someone should get a job or a home loan falls into the “high-risk” category, while using AI to generate ideas or images may be considered lower-risk.
The law applies in stages: In February, the EU began banning AI deemed unacceptably risky, like emotion recognition in the workplace. A requirement that companies train their staff in AI literacy also took effect. Next up, in August, are rules for general-purpose AI systems. The obligations surrounding high-risk uses will come in 2026.
Multinationals with mature data governance structures in place—across industries including financial services, not just big tech—are treating the law as a global standard that helps them manage AI risk, said Matthew Worsfold, a partner at Ashurst Risk Advisory.
At the other end of the spectrum, Worsfold said, some companies will plan closer to August 2026, when most of the obligations come into effect. Another group is “very much getting stuck into a lot of the nuance and the practicalities” as they try to figure out how the law applies to their specific uses, he said.
Companies should review what they’re doing with AI, who their business partners are, and where the data in their AI comes from, then turn to the act itself, said Jean-Marc Leclerc, head of EU policy at IBM.
“First step, ignore the AI Act. Look at what you’re doing yourself,” Leclerc said.
“It’s all based on a certain level of risk,” he added. “You will not be able to assess that level of risk if you don’t know your AI yourself.”
Who’s in scope?
The act sets different obligations around high-risk AI uses for what the EU deems “providers”—including not just model developers but also companies that “substantially modify” AI products—and “deployers,” those using AI systems.
A provider might be the big tech company that built the AI, or a company that used underlying AI built by someone else to create an AI-based tool. The company that buys the tool may be the deployer, but it can become the provider through actions like rebranding the tool under its own name.
A company’s preparation for compliance will depend on where in the value chain it sits, said Ashley Casovan, managing director of the AI Governance Center at the International Association for Privacy Professionals.
Some companies are setting up their AI governance programs, “because they recognize that they’re operating or going to be introducing systems that could introduce risk to them,” and are looking to mitigate risk through their own governance programs or through their contracts with service or product providers, she said.
What’s required?
Starting Aug. 2, general-purpose AI providers must report details about their models’ training data, and confirm they’re putting appropriate safety guardrails in place.
Many of the law’s provisions come into effect a year later—including obligations surrounding high-risk uses.
High-risk use cases already covered under existing EU product safety legislation are addressed by rules coming into effect in 2027.
How will it be enforced?
The EU AI Act carries steep fines for noncompliance—as high as 7% of a company’s global annual revenue for violating the prohibitions, and 3% for violating certain rules around high-risk AI.
Questions around the law’s enforcement are creating uncertainty for companies, said Keith Enright, a partner at Gibson Dunn and former chief privacy officer at Google. The act lets each EU country decide who will enforce it: some have given the job to existing privacy authorities, while others are creating new enforcement bodies.
“It’s an untested law,” Enright said. “It proposes new regulators that companies don’t yet have a sense of their enforcement priorities, they don’t yet have relationships.”
EU officials have also talked recently about softening tech regulation to allow more innovation. And geopolitical tensions are rising, fueled by the Trump administration’s threats of retaliation against countries that fine US tech players.
Of the tension between innovation and regulation, Enright said, “There is so much political friction between those two forces right now, it’s difficult to predict exactly how it’s going to play out.”
How are companies preparing?
In-house legal departments generally know which of their use cases will trigger provisions of the law, said Marcus Evans, a Norton Rose Fulbright partner. Guidance is continually developing, so involving outside counsel at every step could get expensive, he added. Evans said his clients are often working through most compliance questions on their own, reaching out to outside counsel for the most complicated issues.
Mike Jackson, associate general counsel in Microsoft’s Office of Responsible AI, said he calls in outside counsel in three scenarios: When he’s looking at a nuanced legal question that will take more time to analyze than his team has; when there are concerns regarding attorney-client privilege under EU law; and for benchmarking—to find out how other companies are interpreting some of the law’s requirements.
What’s next?
As the effective date of the general-purpose AI rules approaches, companies are navigating uncertainty around those provisions. The rules will capture not just the tech giants creating large language models but also companies that retrain or fine-tune those models on their own data to a sufficient degree. It’s unclear, though, exactly how much additional training crosses that line.
The EU has said it will issue more guidance.
Microsoft encourages customers to engage with EU regulators on how they could be affected, said Amanda Craig Deckard, who leads the public policy team at the company’s Office of Responsible AI.
Reproduced with permission. April 8, 2025, Bloomberg Industry Group 800-372-1033 https://www.bloombergindustry.com