Why Europe is Leading The Race To Regulate AI

The European Union took a major step Wednesday toward setting rules — the first in the world — on how companies can use artificial intelligence.

It’s a bold move that Brussels hopes will pave the way for global standards for a technology used in everything from chatbots such as OpenAI’s ChatGPT to surgical procedures and fraud detection at banks.

“We have made history today,” Brando Benifei, a member of the European Parliament working on the EU AI Act, told journalists.

Lawmakers have agreed on a draft version of the Act, which will now be negotiated with the Council of the European Union and EU member states before becoming law.

“While Big Tech companies are sounding the alarm over their own creations, Europe has gone ahead and proposed a concrete response to the risks AI is starting to pose,” Benifei added.

Hundreds of top AI scientists and researchers warned last month that the technology posed an extinction risk to humanity, and several prominent figures — including Microsoft President Brad Smith and OpenAI CEO Sam Altman — have called for greater regulation.

At the Yale CEO Summit this week, more than 40% of business leaders — including Walmart chief Doug McMillon and Coca-Cola CEO James Quincey — said AI has the potential to destroy humanity five to 10 years from now.

Against that backdrop, the EU AI Act seeks to “promote the uptake of human-centric and trustworthy artificial intelligence and to ensure a high level of protection of health, safety, fundamental rights, democracy and rule of law and the environment from harmful effects.”

Here are the key takeaways.

High-risk, low-risk, prohibited

Once approved, the Act will apply to anyone who develops and deploys AI systems in the EU, including companies located outside the bloc.

The extent of regulation depends on the risks created by a particular application, from minimal to “unacceptable.”

Systems that fall into the latter category are banned outright. These include real-time facial recognition systems in public spaces, predictive policing tools and social scoring systems, such as those used in China, which classify people based on their behavior.

The legislation also sets tight restrictions on “high-risk” AI applications, which are those that threaten “significant harm to people’s health, safety, fundamental rights or the environment.”

These include systems used to influence voters in an election, as well as the recommendation systems of social media platforms with more than 45 million users — a list that would include Facebook, Twitter and Instagram.

The Act also outlines transparency requirements for AI systems.

For instance, systems such as ChatGPT would have to disclose that their content was AI-generated, distinguish deep-fake images from real ones and provide safeguards against the generation of illegal content.

Detailed summaries of the copyrighted data used to train these AI systems would also have to be published.

AI systems with minimal or no risk, such as spam filters, fall largely outside of the rules.

Hefty penalties

Most AI systems will likely fall into the high-risk or prohibited categories, leaving their owners exposed to potentially enormous fines if they fall foul of the regulations, according to Racheal Muldoon, a barrister (litigator) at London law firm Maitland Chambers.

Engaging in prohibited AI practices could lead to a fine of up to €40 million ($43 million) or an amount equal to up to 7% of a company’s worldwide annual turnover, whichever is higher.

That goes much further than Europe’s signature data privacy law, the General Data Protection Regulation, under which Meta was hit with a €1.2 billion ($1.3 billion) fine last month. For its most serious violations, GDPR sets fines of up to €20 million ($21.6 million), or up to 4% of a firm’s global turnover, whichever is higher.
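As a rough illustration of how the “whichever is higher” rule scales with company size, the sketch below compares the two ceilings for a purely hypothetical company; the turnover figure is invented for the example, and only the caps described above are used.

    def ai_act_max_fine(annual_turnover_eur: float) -> float:
        # AI Act ceiling for prohibited practices: €40 million or 7% of
        # worldwide annual turnover, whichever is higher.
        return max(40_000_000, 0.07 * annual_turnover_eur)

    def gdpr_max_fine(annual_turnover_eur: float) -> float:
        # GDPR ceiling for the most serious violations: €20 million or 4% of
        # worldwide annual turnover, whichever is higher.
        return max(20_000_000, 0.04 * annual_turnover_eur)

    # Hypothetical company with €10 billion in annual turnover.
    turnover = 10_000_000_000
    print(f"AI Act ceiling: €{ai_act_max_fine(turnover):,.0f}")  # €700,000,000
    print(f"GDPR ceiling: €{gdpr_max_fine(turnover):,.0f}")      # €400,000,000

For a large company, in other words, the percentage-based cap quickly dwarfs the flat amount, which is why the turnover clause matters most to Big Tech.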

Fines under the AI Act serve as a “war cry from the legislators to say, ‘take this seriously’,” Muldoon said.

Protections for innovation

At the same time, penalties would be “proportionate” and consider the market position of small-scale providers, suggesting there could be some leniency for start-ups.

The Act also requires EU member states to establish at least one regulatory “sandbox” to test AI systems before they are deployed.

“The one thing that we wanted to achieve with this text is balance,” Dragoș Tudorache, a member of the European Parliament, told journalists. The Act protects citizens while also “promoting innovation, not hindering creativity, and deployment and development of AI in Europe,” he added.

The Act gives citizens the right to file complaints against providers of AI systems and makes a provision for an EU AI Office to monitor enforcement of the legislation. It also requires member states to designate national supervisory authorities for AI.

Companies respond

Microsoft, which together with Google is at the forefront of AI development globally, welcomed progress on the Act but said it looked forward to “further refinement.”

“We believe that AI requires legislative guardrails, alignment efforts at an international level, and meaningful voluntary actions by companies that develop and deploy AI,” a Microsoft spokesperson said in a statement.

IBM, meanwhile, called on EU policymakers to take a “risk-based approach” and suggested four “key improvements” to the draft Act, including further clarity around high-risk AI “so that only truly high-risk use cases are captured.”

The Act may not come into force until 2026, according to Muldoon, who said revisions were likely, given how rapidly AI was advancing. The legislation has already gone through several updates since drafting began in 2021.

“The law will expand in scope as the technology develops,” Muldoon said.

CNN

