
EU’s Massive New AI Law Won’t Stop Worst-Case Systems

The Act is drafted using legal language that confers broad additional power on governments

Whether we fear or cheer artificial intelligence (AI) systems, we see AI technology everywhere. Using it to help detect diseases, refine machine designs, and free humans from tedious tasks seems quite beneficial. But when individuals or small groups use it to commit vicious financial and physical crimes, or when corporations and governments use it to track and manipulate people’s thoughts and decisions, we want somebody “to do something about it.”


The European Union has now “done something about it.” On March 13, 2024, the European Parliament passed a comprehensive legal package called the Artificial Intelligence Act, hoping to stimulate positive uses of AI while forbidding or heavily regulating uses that endanger human values. The Parliament’s media release announced:

[The Act] aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field. The regulation establishes obligations for AI based on its potential risks and level of impact.

The AI Act, over 88,000 words long in English, establishes a comprehensive multi-national set of regulations to define AI system categories, forbid or regulate AI uses, establish new regulatory bodies, require reporting by business and government entities, and penalize violations of the new law. Perhaps the most interesting questions for those who watch the AI tidal waves are: (1) How does the AI Act work to prevent AI-enabled private-sector criminal uses and public-sector violations of human rights? (2) Does the AI Act incline toward expanding and consolidating government power to advance politically selected goals?

Four Categories of AI Systems Defined

The Act defines an AI system as “machine-based” and “designed to operate with varying levels of autonomy,” exhibiting a certain “adaptiveness,” which “infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” (From a lawyer’s perspective, the definition is broad, with many terms inviting legal challenges when the Act is enforced.)

Not all AI systems are created equal. Systems that play chess, track consumer buying habits, calculate the social acceptability of individual thoughts, or guide missiles to blow up targets all differ in purpose, scope, quantity, and quality. The AI Act recognizes four main categories of AI systems based upon the “risk” they pose to fundamental human values. Generative systems such as ChatGPT, which deliver text or images based upon user input, may fall into these categories as a special case.

Prohibited: Unacceptable Risk AI Systems

The first category covers AI systems posing unacceptable risks: those that enable certain manipulation, exploitation, and social-control practices. AI-powered manipulation that can or does harm individuals or entities, or that exploits the vulnerabilities of specific groups of persons, is prohibited. Also banned are AI systems that categorize people by race, religious beliefs, or political opinions, or that conduct “social scoring” leading to detrimental or unfavorable treatment of people with certain characteristics “in social contexts.” The wholesale gathering of facial images from the internet, and the use of “emotion recognition software” in workplace and educational contexts, are likewise prohibited.

On its face, the AI Act prohibits remote biometric identification of people and predictive policing of individuals based upon their personal traits. These prohibitions seem strong until we read the “exceptions” for government activities and law enforcement. Examining the written exceptions and their implications will undoubtedly occupy lawyers, judges and policy-makers for years.

Regulated: High-Risk AI Systems

The AI Act devotes a great deal of text to defining and regulating the second category: AI systems deemed “high-risk.” According to Parliament, these systems are so classified because of “their significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law.” The examples Parliament suggests include AI that poses risks to critical infrastructure, education and vocational training, employment, essential private and public services (e.g., healthcare, banking), certain systems in law enforcement, migration and border management, and justice and democratic processes (e.g., influencing elections).


The second category has two sub-categories: (1) AI systems used in products falling under the EU’s product safety legislation, e.g., toys, aviation, cars, medical devices and elevators; and (2) AI systems identified as requiring registration in an EU database. This second sub-category would include:

•   Critical infrastructure, e.g., public utilities

•   Educational and vocational training, e.g., exam scoring and student selection

•   Employment application processing

•   Access to essential private and public services and benefits (e.g., healthcare)

•   Creditworthiness evaluations of individuals

•   Risk assessment and pricing for life and health insurance

•   Law-enforcement systems deploying automated analysis, risk scoring of potential offenders, and assessment of evidence reliability

•   Migration, asylum and border control management, e.g., verifying and authenticating travel documents and asylum requests

•   Administration of justice and democratic processes, including systems to interpret and apply laws for judges

Government and private entities using high-risk systems under the Act have extensive duties to monitor, gather statistics, and make reports to EU regulatory agencies. 

Transparency Required: Limited Risk Systems

The third category under the AI Act comprises systems posing limited risks of harm. In practice, these include systems that interact with people or generate content: chatbots such as ChatGPT, emotion-recognition systems outside the high-risk category, biometric measurement systems, and “deepfake” content generators. The Act requires these systems to disclose their AI sources. Thus, they must inform users: (1) when users are interacting with an AI system; (2) when users are exposed to an otherwise lawful emotion-recognition or biometric system; and (3) when content is a “deepfake,” i.e., artificially generated or manipulated.

Generative AI Systems Overlap Category Definitions

Systems not subject to the Act fall into the fourth category. AI-powered video games and spam filters, for example, aren’t expressly regulated for specifically AI-based risks. (Evidently, Parliament was unaware that spam filters impose outside editorial judgment that isn’t, but should be, transparent.)

Potentially falling under the first three categories, depending upon their functionality, are general-purpose AI (GPAI) systems that recognize speech and images, generate audio and video products, and detect patterns in huge amounts of data for various applications. Popular examples of such applications are ChatGPT and DALL-E. GPAI systems typically derive from underlying “foundation model” systems trained on very broad and deep data sources, such as the internet or vast written and image libraries. Depending upon their functions and power, GPAI and foundation-model providers have duties to be transparent with users, to identify their data sources, and to monitor the risks of harm their systems pose.

Exemptions and Exceptions Empower EU Governments

Keeping the peace and protecting humans from crimes of violence, theft and fraud is what governments legitimately do. The AI Act recognizes and addresses several dangers to human life, liberty, reputation and property posed by businesses and some governmental activities that use powerful AI. That’s to the good.


Not significantly addressed, however, are two facts. First: The greatest dangers to fundamental human rights have come from governments misusing power. Death by Government (Routledge 1994), the late R. J. Rummel’s masterwork, surveys the awful results when governments have justified to themselves the use of mass murder to achieve some supposed “good” goal.

The AI Act, in Article 2 section 3, exempts all uses of AI for military purposes. Article 2 section 6 also exempts purely academic research facilities. Those exemptions guarantee continued research and development of every sort of AI system capable of any kind of harm. Human history teaches that any “good” invention will be turned to evil purposes if possible. The Act does not prevent the march of potentially devastating “progress.”

The Philosophy of the Act and Political Reality

Second, and far more subtle: The Act is drafted using legal language that confers nearly unlimited power on governments. Judges routinely interpret such language in favor of government’s purported motivations to protect “public health and safety,” to “fight terrorism,” and to promote one or another social-engineering goal. All of the “people’s republic” and “socialist” nations of the 20th century boasted constitutions extending broad rights to the people, yet those rights were nonexistent in practice because bureaucrats and judicial authorities could always interpret the “rights” in ways that served the government’s ends.

The Act’s law enforcement exceptions contain the escape routes. The preamble language, for example in Recitals 18 and 19, describes the philosophy for biometric AI systems:

The use of those systems for the purpose of law enforcement should therefore be prohibited, except in exhaustively listed and narrowly defined situations, where the use is strictly necessary to achieve a substantial public interest, the importance of which outweighs the risks.

What will be the government’s justification for extending and deepening the use of AI for biometric classifications of people? Some stated “substantial public interest” is easy to articulate. Government experts will tell the judge why the AI technology is “strictly necessary.” Others will detail how “the importance of using this technology outweighs the risks.” And the risks will be minimized or ridiculed. Expect government expansions of power to win almost every time.

At minimum, the AI Act serves humanity by identifying the many ways AI endangers fundamental human rights as well as human peace and prosperity. We cannot relax or consider the “problem solved,” however. Eternal vigilance is the price of life and liberty.


Richard Stevens

Fellow, Walter Bradley Center on Natural and Artificial Intelligence
Richard W. Stevens is a lawyer, author, and a Fellow of Discovery Institute's Walter Bradley Center on Natural and Artificial Intelligence. He has written extensively on how code and software systems evidence intelligent design in biological systems. He holds a J.D. with high honors from the University of San Diego Law School and a computer science degree from UC San Diego. Richard has practiced civil and administrative law litigation in California and Washington D.C., taught legal research and writing at George Washington University and George Mason University law schools, and now specializes in writing dispositive motion and appellate briefs. He has authored or co-authored four books, and has written numerous articles and spoken on subjects including legal writing, economics, the Bill of Rights and Christian apologetics. His fifth book, Investigation Defense, is forthcoming.
