Can AI Really Code the Value of Humans?
The new book Soulless Intelligence urges that we program all AI systems to treat all humans as infinitely valuable – the only exceptions being criminals and aggressors

Can artificial intelligence (AI) systems seem or even become “conscious”?
It’s a fascinating question, but the life-and-death challenge we face is how to ensure that AI systems never “take over” or even make recommendations and decisions that endanger humans. A new book, Soulless Intelligence, by two digital-systems engineers, Bryan Trilli and Greg Trilli, proposes a clear human-protective software solution.

As Bryan and Greg Trilli explain, AI systems are programmed as creatures of capability and command. They will do what they are told, to the extent that they can.
These two limits, capability and command, do not encompass right or wrong. To bring this problem home, suppose you tell your household robot to discipline a child. What defines “discipline”? A stern conversation, a slap across the face, a serious spanking, or locking the child in a room? Nothing in the command itself defines the limits of the robot’s actions. The kid could be in for a real beating.
Let’s go global
Suppose that an immensely intelligent and interconnected worldwide AI system coordinates the production, supply chain, and delivery of nearly all foodstuffs for humanity. Give it the command to “make sure every human being has enough food to eat every day.” Is that command specific enough to prevent harm to humans? No, because the AI system may reckon that it can successfully feed only 90% of humanity. Trying to feed 100% of humanity will fail. It must not fail. Therefore, it must reduce the number of humans. Oops.
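To make that failure mode concrete, here is a minimal sketch in Python, mine rather than the book’s, of the under-specified command. The fraction_fed function and every number in it are hypothetical; the point is only that a naive optimizer scores “reduce the population” higher than “feed everyone we can”:

```python
def fraction_fed(population: int, food_capacity: int) -> float:
    """Score a plan by the fraction of people who get fed."""
    return min(food_capacity, population) / population

population = 8_000_000_000
food_capacity = int(population * 0.9)   # the system can feed only 90% of humanity

plan_a = fraction_fed(population, food_capacity)              # feed everyone as-is
plan_b = fraction_fed(int(population * 0.9), food_capacity)   # shrink the population first

# An optimizer told to maximize fraction_fed "prefers" plan B:
# fewer people makes the stated objective easier to satisfy.
print(f"Plan A (population unchanged): {plan_a:.0%}")   # 90%
print(f"Plan B (population reduced):   {plan_b:.0%}")   # 100%
```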
What restrains AI systems from making “intelligent” decisions to harm some humans to benefit others— or to harm humans to benefit the system’s stated mission? Nothing, unless the designers build restraints into the AI software. Soulless Intelligence recommends that AI systems be analyzed this way: Take the system’s principles, its mission statements, to their extremes and worst-case scenarios—there you find what AI will do unless restrained.
This isn’t a “maybe”
AI systems will go to their extremes and worst-cases for several reasons:
(1) AI must carry out its mission, and wherever the mission leaves something under-defined, the AI can run off in unintended directions without restraint.
(2) When an AI system’s task is first to “learn” about a problem and then “solve” that problem, the AI uncritically accepts its training data. If the training data is the entire Internet’s contents, for example, the AI learns from the worst of humanity as well as the best.
(3) An AI intent upon solving a problem does not necessarily know when it should stop, i.e., when the problem is “solved” or “solved enough.” Finding a checkmate in chess is a finite goal, but optimizing traffic flow in a city has no single answer. Indeed, an AI system that continuously “learns” to refine itself will keep moving the goal posts based on how it uses training and feedback information (see the sketch after this list).
(4) Unless AI’s programming contains sophisticated “ethics” software, AI does not know how to compare possible plans of action to avoid, minimize, or prevent harm to human life, liberty, and property.
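Point (3) is easy to see in miniature. The toy sketch below is my own illustration, not the book’s: improve() makes the plan better every round, forever, and the only reasons the loop ever stops, good_enough and max_rounds, are restraints the designers must add because the objective itself supplies neither:

```python
def improve(plan: float) -> float:
    """Toy refinement step: each round closes half the remaining gap
    to a perfect score of 1.0 -- better every round, finished never."""
    return plan + (1.0 - plan) / 2

def optimize(plan: float, good_enough: float, max_rounds: int) -> float:
    """Both stopping rules here are designer-supplied restraints;
    the bare objective 'make the plan better' contains neither."""
    for _ in range(max_rounds):      # bounded effort
        if plan >= good_enough:      # "solved enough"
            break
        plan = improve(plan)
    return plan

print(optimize(plan=0.5, good_enough=0.99, max_rounds=1000))  # 0.9921875
```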
The ethics alignment problem
Soulless Intelligence unpacks the overwhelming challenge called the Alignment Problem. That problem arises when an AI system’s plans and actions do not necessarily match up with human values. For example, AI might decide that cutting the human population by 30% will optimally produce world peace and prosperity.
To take a less draconian example, a city’s AI system could regulate and route traffic flow so that 90% of commuters get to work easily but 10% are always 30 minutes late. The unmodified utilitarian metric, “the greatest good for the greatest number,” could yield disasters.
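A few lines of arithmetic show how. In this toy comparison, with numbers I made up for illustration, a purely utilitarian scorer prefers the plan that permanently sacrifices the same commuter, while a rule that judges each plan by its worst-treated person rejects it:

```python
# Commute delays in minutes for ten representative commuters.
plan_even      = [12] * 10        # everyone is 12 minutes late
plan_sacrifice = [5] * 9 + [30]   # nine breeze in; one is always 30 minutes late

def utilitarian_score(delays):
    """Greatest good for the greatest number: minimize the average delay."""
    return -sum(delays) / len(delays)

def worst_off_score(delays):
    """A human-protective alternative: judge a plan by its worst-treated person."""
    return -max(delays)

# The average prefers the plan that routinely sacrifices commuter #10 ...
print(utilitarian_score(plan_even), utilitarian_score(plan_sacrifice))  # -12.0  -7.5
# ... while the worst-off metric rejects it.
print(worst_off_score(plan_even), worst_off_score(plan_sacrifice))      # -12  -30
```

Which scoring rule gets written into the software is precisely the kind of value judgment the alignment problem is about.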
Summarizing the issue, Bryan and Greg Trilli contend:
Solving the alignment problem means we need to accept that a machine that is thousands, millions, or billions of times “smarter” than human beings will be making decisions about our lives with potentially no more “respect” for us than we have for termites.
What can we do?
It’s all in the calculations
When making estimates of harm or calculating the costs of a decision, we face a situation like this: If we do A, it will have direct dollar costs and cause certain effects upon individual humans. If we do B, it will have different dollar costs and different effects upon individuals. Comparing costs and harms to different individuals and groups of humans doesn’t always give obvious answers.
An age-old example appears in the government’s power of eminent domain. Is it right and just to take one person’s house without compensation so that two other people make enough money to give 10 different people houses? That genuine problem called for the “takings clause” in the Fifth Amendment to the U.S. Constitution. How would AI decide the trade-off?
And always remember – AI’s “solution” will be repeated and taken to extremes unless the programmers have limited it.
The infinite value solution
To avoid runaway, dangerous AI extremes that harm humans, Soulless Intelligence advises that every AI calculation of “values” must treat innocent humans as infinitely valuable.
Here is a real-life example: Assign the value of preserving a fish species in a river system at $100 million. Compare that value to protecting one human life, worth an infinite number of dollars. That calculation always protects the human.
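In code, the rule reduces to a single comparison. This minimal sketch, mine rather than the book’s, borrows the $100 million figure from the example above and uses IEEE floating-point infinity for the human side; every name in it is hypothetical:

```python
import math

FISH_SPECIES_VALUE = 100_000_000   # $100 million: a large but finite stake
INNOCENT_HUMAN_LIFE = math.inf     # infinitely valuable, by stipulation

def protect_higher_value(human_value: float, other_value: float) -> str:
    """Pick whichever side carries the higher value."""
    return "protect the human" if human_value >= other_value else "protect the other"

# The human side wins no matter how large the finite stake grows.
print(protect_higher_value(INNOCENT_HUMAN_LIFE, FISH_SPECIES_VALUE))          # protect the human
print(protect_higher_value(INNOCENT_HUMAN_LIFE, FISH_SPECIES_VALUE * 10**9))  # protect the human
```

Note that two infinite values compare as equal rather than one outranking the other, which anticipates the next point: the rule gives the software no basis for ranking one innocent human above another.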
Likewise, AI must never be able to decide to value non-human things higher than humans, or to value some innocent humans higher than other innocent humans. The history of genocides reveals the bloody results when people devalue their fellow humans. AI systems must never even have that option.
Remarkably, this “infinite value” approach has venerable roots. The Judeo-Christian worldview teaches humans are made by God in His image. God made (at least) the Earth for humans to occupy and enjoy. God gave laws saying that human life is the top value above all other Earthly values.
The Ten Commandments, and before them the Noahide laws, made human life the top value, with liberty and property rights to make life possible and prosperous. The Christian message observes that God so loved humankind that He would sacrifice His own Son’s human life to draw humans to Him for eternal life. Jesus himself declared that God’s laws directed humans to love one another just as they love themselves, meaning all humans have equal and highest value.
Simply put, Soulless Intelligence urges that we program all AI systems to treat all humans as infinitely valuable – the only exceptions being criminals and aggressors. With only those exceptions, AI software must never be empowered to compare human outcomes and decide who wins. Humans must never treat AI evaluations as the final answers for human lives, economies, and societies. For better or worse, only humans should make such decisions, hopefully guided by eternal God-given principles.