Image: Young mechanic repairing the robot in his workshop (Adobe Stock)

AI Should Be Less Perfect, More Human

Authors Angus Fletcher and Erik J. Larson point us toward a more sustainable future working alongside artificial intelligence

Artificial intelligence is fragile. When faced with the ambiguity of the world, it breaks. And when it breaks, our untenable solution is to erase ambiguity. This means erasing our humanness, which in turn breaks us.

That’s the problem Angus Fletcher and Erik J. Larson address in their piece published this week in Wired.

AI can malfunction at the mildest hint of data slip, so its architects are doing all they can to dampen ambiguity and volatility. And since the world’s primary source of ambiguity and volatility is humans, we have found ourselves aggressively stifled. We’ve been forced into metric assessments at school, standard flow patterns at work, and regularized sets at hospitals, gyms, and social-media hangouts. In the process, we’ve lost large chunks of the independence, creativity, and daring that our biology evolved to keep us resilient, making us more anxious, angry, and burned out.

Angus Fletcher and Erik J. Larson, “Optimizing Machines Is Perilous. Consider ‘Creatively Adequate’ AI.” at Wired

Fletcher is a professor at Ohio State’s Project Narrative, with a Ph.D. in literature from Yale and earlier training in neuroscience, a background he blends to study the science of stories. He also works with artificial intelligence, including his current project to engineer AI “that’s smart enough to know it’s dumb.”

Larson is a tech entrepreneur, a computer scientist, and the author of The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do, a book that William Dembski called “far and away the best refutation of Kurzweil’s overpromises…”.


In their Wired article, Fletcher and Larson make the case that we need to rethink artificial intelligence (AI) and change our approach to programming and using it. Since ambiguities cannot be removed from reality, AI should be designed to handle them better. And AI can handle ambiguities better if it is a little less perfectionistic and a little more… human.

Or, in their own words, “Instead of remaking ourselves in AI’s brittle image, we should do the opposite. We should remake AI in the image of our antifragility.”

To do this would mean reprogramming AI away from one of its greatest strengths: optimization. But optimization also proves to be one of AI’s greatest weaknesses. Fletcher and Larson go a step further: not only is AI’s optimization a weakness for the system itself, it is “antidemocratic.” Optimization requires that AI collect as much data as possible, which is why we have found ourselves in a world where we are constantly surveilled and our activities tracked.

Optimization is the push to make AI as accurate as possible. In the abstract world of logic, this push is unambiguously good. Yet in the real world where AI operates, every benefit comes at a cost. In the case of optimization, the cost is data. More data is required to improve the precision of machine-learning’s statistical computations, and better data is necessary to ensure that the computations are true. To optimize AI’s performance, its handlers must intel-gather at scale, hoovering up cookies from apps and online spaces, spying on us when we’re too oblivious or exhausted to resist, and paying top dollar for inside information and backroom spreadsheets.

Angus Fletcher and Erik J. Larson, “Optimizing Machines Is Perilous. Consider ‘Creatively Adequate’ AI.” at Wired

Optimization, they argue, must be scaled back to restore the proper balance between man and man’s machines.

But how would we go about this?

Fletcher and Larson provide three prescriptions that can be implemented in our treatment of AI immediately:

  1. Program AI to hold several possible interpretations at once, like “a human brain that continues reading a poem with multiple potential interpretations held simultaneously in mind.” (A rough sketch of this idea follows the list.)
  2. Use data as a source of falsification instead of inspiration. Fletcher and Larson explain that AI currently acts as “a mass-generator of trivially novel ideas.” What if its skills were turned toward discovering “today’s unappreciated van Goghs” instead of creating (a feat AI is incapable of at the human level)? It could mine the countless works posted online to find the “wildly unprecedented ones” and bring them to light.
  3. In their final suggestion, Fletcher and Larson boldly propose that we merge ourselves with AI. But they do not mean this in the sci-fi sense of creating cyborgs. Instead, they mean improving the interplay between AI and humans so that humans are not subordinated to AI processes.
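
To make the first prescription concrete, here is a minimal, hypothetical sketch in Python (the labels, scores, and margin value are invented for illustration; this is not code from Fletcher and Larson’s article). Rather than forcing a classifier to commit to its single top answer, the program keeps every interpretation whose score is close to the best one and flags the ambiguity rather than hiding it:

```python
# Hypothetical sketch of "holding several interpretations at once":
# instead of collapsing a classifier's output to its single top label,
# keep every label whose probability is within a margin of the best one,
# and admit ambiguity when more than one survives.

def plausible_interpretations(scores: dict, margin: float = 0.15) -> dict:
    """Return all labels whose probability is within `margin` of the best."""
    best = max(scores.values())
    return {label: p for label, p in scores.items() if best - p <= margin}

# An imagined classifier scoring an ambiguous photo.
scores = {"mechanic": 0.41, "robot repair": 0.38, "toy workshop": 0.12}

kept = plausible_interpretations(scores)
if len(kept) > 1:
    # The system knows it is unsure and says so, instead of guessing.
    print("Ambiguous; holding interpretations:", sorted(kept))
else:
    print("Confident:", next(iter(kept)))
```

An AI built this way is, in Fletcher’s phrase, “smart enough to know it’s dumb”: when several readings survive, it defers rather than erasing the ambiguity.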

You can read the entire article, a brilliant piece to ponder and digest, here.

In all of these suggestions, Fletcher and Larson lay out an argument for making AI and humans better partners, a truly complementary team in which the strengths of one make up for the weaknesses of the other. AI will never be as smart as humans. We should not seek AI that replaces us but AI that helps us, and that means recognizing where AI will never measure up to human potential.


Caitlin Cory

Communications Coordinator, Discovery Institute
Caitlin Cory is the Communications Coordinator for Discovery Institute. She has previously written for Discovery on the topics of homelessness and mental illness, as well as on Big Tech and its impact on human freedom. Caitlin grew up in the Pacific Northwest, graduated from Liberty University in 2017 with her Bachelor's in Politics and Policy, and now lives in Maryland with her husband.
