Mind Matters Natural and Artificial Intelligence News and Analysis

# Category: Artificial Intelligence

## World’s Fastest Computer Breaks Into the Exascale

How fast? “If each person on Earth completed one calculation per second, it would take more than 4 years to do what an exascale computer can do in 1 second.”

The Department of Energy’s Oak Ridge National Laboratory in Tennessee announced earlier this week that its Frontier supercomputer, having broken the exascale barrier, is the world’s fastest. It can do more than a quintillion calculations per second: “The Frontier supercomputer at the Department of Energy’s Oak Ridge National Laboratory earned the top ranking today as the world’s fastest on the 59th TOP500 list, with 1.1 exaflops of performance. The system is the first to achieve an unprecedented level of computing performance known as exascale, a threshold of a quintillion calculations per second. Frontier features a theoretical peak performance of 2 exaflops, or two quintillion calculations per second, making it ten times more powerful than ORNL’s Summit system.” News, “Frontier supercomputer…
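The “more than 4 years” figure is easy to sanity-check with back-of-envelope arithmetic. A minimal sketch, assuming a world population of roughly 7.9 billion (a mid-2022 estimate, not a number from the ORNL announcement):

```python
# Sanity check of the "4 years" claim from the ORNL announcement.
SECONDS_PER_YEAR = 365 * 24 * 3600   # ~3.15e7 seconds
WORLD_POPULATION = 7.9e9             # assumed mid-2022 estimate
FRONTIER_FLOPS = 1.1e18              # 1.1 exaflops, from the TOP500 result

# If every person does one calculation per second, humanity's combined
# rate equals the population. How long to match one Frontier second?
seconds_needed = FRONTIER_FLOPS / WORLD_POPULATION
years_needed = seconds_needed / SECONDS_PER_YEAR
print(f"{years_needed:.1f} years")   # → 4.4 years
```

So the quoted comparison holds: at one calculation per person per second, humanity would need more than four years to match one second of Frontier’s measured performance.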

## AI expert: Stop Distinguishing Between AI, Human and Animal Minds

Aaron Sloman’s approach to minds sounds a bit like panpsychism — which is increasingly accepted in science — but there are differences

Philip Ball, author of The Book of Minds: How to understand ourselves and other beings, from animals to AI to aliens (University of Chicago Press, 2022), profiles University of Birmingham computer scientist Aaron Sloman, whose 1984 paper, “The structure of the space of possible minds” sought to account for human, animal, and AI minds as “behaving systems.” Along the way, Sloman came to a significant conclusion: “We must abandon the idea that there is one major boundary between things with and without minds,” he wrote. “Instead, informed by the variety of types of computational mechanisms already explored, we must acknowledge that there are many discontinuities, or divisions within the space of possible systems: the space is not a continuum, nor…

## Do Ants Think? Yes, They Do — But They Think Like Computers

Computer programmers have adapted some ant problem-solving methods to software programs (but without the need for complex chemical scents)

Navigation expert Eric Cassell, author of Animal Algorithms: Evolution and the Mysterious Origin of Ingenious Instincts (2021), offers some insights in the book into how ants organize themselves using what amount to algorithms, without any central command: Ants are remarkably consistent in their lifestyle: All of the roughly 11,000 species of ants live in groups, large or small. There are no known solitary ants. Living in groups, they have developed a social lifestyle that includes “agriculture, territorial wars, slavery, division of labor, castes, consensus building, cities, and a symbolic language.” (p. 85) How is this managed by ants with very small brains (200,000 to 250,000 neurons) and very limited individuality? For comparison, among mammals, the agouti has roughly 857 million…
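The best-known software adaptation of ant problem-solving is ant colony optimization, in which a numerical “pheromone” value stands in for the chemical scent. A minimal, deterministic sketch of the trail-reinforcement idea (the route lengths, evaporation rate, and deposit rule are illustrative assumptions, not values from Cassell’s book):

```python
# Two routes from nest to food; a unit of "ants" splits between them in
# proportion to trail strength. Shorter routes earn stronger deposits,
# so positive feedback concentrates the colony on the better path.
paths = {"short": 1.0, "long": 2.0}        # route lengths (arbitrary units)
pheromone = {"short": 1.0, "long": 1.0}    # trails start out equal

for _ in range(100):
    total = sum(pheromone.values())
    for route, length in paths.items():
        share = pheromone[route] / total           # fraction of ants taking this route
        # Evaporation (×0.95) weakens all trails; the deposit is inversely
        # proportional to route length, so "short" is reinforced faster.
        pheromone[route] = 0.95 * pheromone[route] + share * (1.0 / length)

print(max(pheromone, key=pheromone.get))   # → short
```

No ant (and no line of code) compares the routes directly; the colony-level preference for the shorter path emerges from local deposit-and-evaporate rules, which is the decentralized, algorithmic character Cassell describes.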

## Why You Are Not — and Cannot Be — Computable

A computer science prof explains in a new book that computer intelligence does not hold a candle to human intelligence.

An excerpt from Chapter 1 of Non-Computable You (2022) by Walter Bradley Center director Robert J. Marks (Discovery Institute Press, June 2022). The Non-Computable Human: If you memorized all of Wikipedia, would you be more intelligent? It depends on how you define intelligence. Consider John Jay Osborn Jr.’s 1971 novel The Paper Chase. In this semi-autobiographical story about Harvard Law School, students are deathly afraid of Professor Kingsfield’s course on contract law. Kingsfield’s classroom presence elicits both awe and fear. He is the all-knowing professor with the power to make or break every student. He is demanding, uncompromising, and scary smart. In the iconic film adaptation, Kingsfield walks into the room on the first day of class, puts his notes…

## At Salon, Funk and Smith Take On “Stealth AI Research”

All we know for sure about the claims about Google AI’s LaMDA showing human-like understanding is that, since 2020, three researchers who expressed doubts or concerns were fired

Yesterday at Salon, Jeffrey Funk and Gary N. Smith took a critical look at “stealth research” in artificial intelligence. Stealth research? They explain, A lot of stealth research today involves artificial intelligence (AI), which Sundar Pichai, Alphabet’s CEO, has compared to mankind’s harnessing of fire and electricity — a comparison that itself attests to the overhyped atmosphere that surrounds AI research. For many companies, press releases are more important than peer review. Blaise Agüera y Arcas, the head of Google’s AI group in Seattle, recently reported that LaMDA, Google’s state-of-the-art large language model (LLM), generated this text, which is remarkably similar to human conversation: Blaise: How do you know if a thing loves you back? LaMDA: There isn’t an easy answer…

## Will AI Overtake Humans? Only If We Let It

We are getting too cozy with dependence on high tech to ease the burden of thinking for ourselves

This story originally appeared at Newsmax. (May 19, 2022) As I reported last year, there’s been a lot of talk about a Singularity in the last decade. That’s the point when machine intelligence (AI) exceeds human intelligence and begins to rule humanity and eventually the entire universe. It’s a scary proposition to be sure, but we can rest easy on that front, because it’s not going to happen. The futurists assume there’s a bridge between narrow applications of AI and the general intelligence humans possess. But no such bridge exists. As Erik J. Larson explains in his book The Myth of Artificial Intelligence, we’re not even on the right road to such a bridge. You can also take George Gilder’s word for it. One…

## New Learning Model for Brain Overturns 70 Years of Theory

The new model, if confirmed, could change the way algorithms are developed

According to new research, when learning takes place, it’s not just the synapses (by which neurons send signals to each other) but the whole communication structure (the dendrites) of the neuron that changes. The researchers compare the synapses to leaves and the dendrites to a tree. This, if it replicates, is a radical revision of a hypothesis that has stood for some 70 years: For the last 70 years a core hypothesis of neuroscience has been that brain learning occurs by modifying the strength of the synapses, following the relative firing activity of their connecting neurons. This hypothesis has been the basis for machine and deep learning algorithms which increasingly affect almost all aspects of our lives. But after seven decades, this long-lasting hypothesis has…
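The synapse-strength hypothesis maps directly onto artificial neural networks, where “learning” means nothing more than adjusting connection weights. A one-neuron sketch of that weight-update idea (the data point, learning rate, and squared-error loss here are illustrative assumptions, not details from the study):

```python
# A single linear "neuron": learning adjusts the weights, the software
# analogue of modifying synaptic strengths.
def train_step(w, b, x, y, lr=0.1):
    """One gradient-descent step on squared error for one example."""
    pred = sum(wi * xi for wi, xi in zip(w, x)) + b
    err = pred - y
    # Strengthen or weaken each "synapse" in proportion to the error:
    w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    b = b - lr * err
    return w, b

w, b = [0.0, 0.0], 0.0                 # start with no learned connections
for _ in range(50):                    # repeated exposure to one example
    w, b = train_step(w, b, x=[1.0, 2.0], y=1.0)
print([round(v, 2) for v in w], round(b, 2))   # → [0.17, 0.33] 0.17
```

Notice that nothing structural changes here: the connections themselves are fixed and only their strengths move. If learning in the brain also reshapes the dendritic “tree,” as the new model proposes, then algorithms built on this weight-only picture are imitating an incomplete account.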

## Of Woman and Machine: Are Women and Technology at Odds?

A DailyWire host turns to Mary Shelley’s Frankenstein for some insights

That’s what author and DailyWire host Andrew Klavan argues in his book The Truth and Beauty (2022). In perhaps the best chapter in a book analyzing the Romantic poets, Klavan turns to Mary Shelley (1797–1851), the teenage author of Frankenstein. Shelley was not a Romantic poet, Klavan admits, but she was married to a Romantic poet (Percy Shelley) and was greatly influenced by the Romantics of her era. The common conclusion is that Frankenstein is about man’s attempt to usurp God. Even Shelley herself stated that about her book. “But I don’t think this is what the novel is about at all,” Klavan posits. To me, the greatness of the story, the horror of the story, and the threat…

## Turing Tests Are Terribly Misleading

Black box algorithms are now being trusted to approve loans, price insurance, screen job applicants, trade stocks, determine prison sentences, and much more. Is that wise?

In 1950 Alan Turing proposed that the question, “Can machines think?,” be replaced by a test of how well a computer plays the “imitation game.” A man and woman go into separate rooms and respond with typewritten answers to questions that are intended to identify the players, each of whom is trying to persuade the interrogators that they are the other person. Turing proposed that a computer take the part of one of the players and the experiment be deemed a success if the interrogators are no more likely to make a correct identification. There are other versions of the game, some of which were suggested by Turing. The standard Turing test today involves a human and a computer and…

## How Far Will Unicorn Share Prices Fall?

Cumulative losses give us some insights

Most investors know that America’s Unicorns are losing money. What they don’t know is that most Unicorns have dug big holes for themselves and aren’t sure how to dig themselves out. What do I mean by holes? I mean massive cumulative losses accumulated over many years of annual losses. Because many of today’s Unicorn startups were founded at least 10 years ago, and are still unprofitable, they have had a long time to create huge cumulative losses, some much more than the $3 billion that Amazon once had. The biggest losses are for Uber ($29.1 billion), WeWork ($12.2 billion), Snap ($8.7 billion), Lyft ($8.5 billion), Teladoc Health ($8.1 billion), and Airbnb ($6.4 billion), followed by four…

## Would Health Care AI Mean Caregivers Spend More Time on Patients?

Chances are, it will just mean fewer and less qualified caregivers

Pat Baird, regulatory head of global software standards at Philips, recently wrote an article titled, “Can Artificial Intelligence ‘Rehumanize’ Healthcare?” His thesis is that “By lowering administrative burden, AI can increase caregivers’ time spent actually caring for patients.” I will argue that this vision for the contribution of AI to healthcare delivery will not happen due to some very observable forces. A place to begin the analysis is with the funding source for AI in healthcare. AI is bought or developed by healthcare delivery organizations. These organizations are following a business plan and if AI does not provide a business benefit, they will not pay for it. We can conclude that AI in healthcare will be designed and used to…

## Why AI Can’t Save Us From Ourselves — If Evolution Is Any Guide

Famous evolutionary theorist E. O. Wilson’s reflections help us understand

The late E. O. Wilson (1929–2021) received more than one hundred awards for his research and writing, including two Pulitzer Prizes. As a professor at Harvard University, Wilson influenced generations with his ideas about human evolution and ethics. In his 2012 New York Times essay “Evolution and Our Inner Conflict,” Wilson asked two key questions regarding the problem of evil in our world: Are human beings intrinsically good but corruptible by the forces of evil, or the reverse, innately sinful yet redeemable by the forces of good? Are we built to pledge our lives to a group, even to the risk of death, or the opposite, built to place ourselves and our families above all else? Wilson believed that humans…

## Why GPT-3 Can’t Understand Anything

Without long-term memory, human conversation becomes impossible

There is a mathematical reason why machine learning systems like GPT-3 are incapable of understanding. The reason comes down to the fact that machine learning has no memory. It is just probabilistic associations. If there is only a 10% chance of going off topic, then after just seven exchanges there is a greater than 50% chance the machine learning model has gone off topic. The problem is that when prediction is based only on probabilities, the likelihood of at least one misprediction compounds with every exchange. A long-term memory is needed in order to maintain long-term coherence. GPT-3 is essentially a sophisticated Markov process. What is important about the Markov process is that the next step in the process is only dependent on…
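The seven-exchange figure follows directly from compounding: staying on topic for n independent exchanges has probability 0.9^n, so the chance of having drifted at least once is 1 − 0.9^n. A quick check:

```python
# Checking the off-topic claim: a 10% per-exchange drift chance,
# compounded over seven exchanges, already exceeds even odds.
p_drift = 0.10
n_exchanges = 7

p_on_topic = (1 - p_drift) ** n_exchanges   # 0.9^7 ≈ 0.478
p_off_topic = 1 - p_on_topic                # chance of at least one drift
print(round(p_off_topic, 3))                # → 0.522
```

At ten exchanges the figure passes 65%, and it approaches certainty as the conversation lengthens, which is the mathematical core of the argument: without memory, coherence decays geometrically.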

## AI Companies Are Massively Faking the Loophole in the Turing Test

I propose the Turing Test be further strengthened by presuming a chatbot is human until proven otherwise

Computer pioneer Alan Turing was posed the question: how do we know if an AI has human-like intelligence? He offered his famous Turing test: If human judges cannot differentiate the AI from a human, then it has human-like intelligence. His test has spawned a number of competitions in which participants try to fool judges into thinking that a chatbot is really a human. One of the best-known chatbots was Eugene Goostman, which fooled the judges into thinking it was a 13-year-old boy — mostly by indirection and other distraction techniques to avoid the sort of in-depth questioning that shows that a chatbot lacks understanding. However, there is a loophole in this test. Can you spot the loophole? What better…

## Soylent AI is…people!

OpenAI advertises itself as AI-powered, but at the end of the day, the system is human-powered

In the sci-fi movie “Soylent Green,” the big reveal is that a food called soylent green is actually made from human beings, the catchphrase being “soylent green is people.” Likewise, as I discovered from a recent exchange with OpenAI’s GPT-3, “soylent AI is people.” GPT-3 is the product of AI company OpenAI. The company made headlines in 2019 with the claim that its AI model was too dangerous to publicly release. OpenAI is not a mere research company. While its publicly stated goal is fairly modest – “Aligning AI systems with human intent” – its CEO Sam Altman has bigger plans. He left his very successful role as president of Y Combinator, one of Silicon Valley’s most successful venture capital…

## Will AI Start the Next Pandemic? It Easily Could.

It’s a bigger risk than we might think, as an experiment written up in a Nature journal has shown

In a recent paper in Nature Machine Intelligence, three drug discovery researchers share an unsettling result from their experiment with AI drug discovery. Their normal practice when getting AI software to motor through thousands of possibilities (which might take human researchers years) is to penalize toxicity and reward bioactivity. They wondered what would happen when they decided to reward both toxicity and bioactivity — to challenge their artificial intelligence — modeled on open source software — to create a lethal bioweapon: To narrow the universe of molecules, we chose to drive the generative model towards compounds such as the nerve agent VX, one of the most toxic chemical warfare agents developed during the twentieth century — a few salt-sized grains…
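In outline, the change the researchers describe amounts to a sign flip in the scoring function that steers a generative model. The sketch below is hypothetical: the function name, weights, and values are illustrative assumptions, not the authors’ actual code.

```python
# Hypothetical sketch of the scoring change: a generative model is steered
# by a scalar score over candidate molecules. Normal practice subtracts
# toxicity; the experiment flipped that penalty into a reward.
def score(bioactivity: float, toxicity: float, seek_toxic: bool = False) -> float:
    """Higher scores steer the generator toward a candidate molecule."""
    sign = 1.0 if seek_toxic else -1.0   # normal drug discovery penalizes toxicity
    return bioactivity + sign * toxicity

# Normal practice: an active but toxic candidate scores poorly.
print(round(score(bioactivity=0.9, toxicity=0.8), 2))                   # → 0.1
# The experiment: the same candidate now scores best of all.
print(round(score(bioactivity=0.9, toxicity=0.8, seek_toxic=True), 2))  # → 1.7
```

The unsettling point is how small the change is: the model, the data, and the search machinery are identical, and a one-character difference in the objective turns a drug-discovery tool into a toxin-discovery tool.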

## What Happened When a Deepfake Got Weaponized in the Ukraine War…

“Zelensky surrenders” wasn’t a world class effort so it got found out. But Nixon’s (fictional) eulogy for astronauts won awards…

On March 16, Ukrainians were surprised by a video shared at Facebook and YouTube in which their president Volodymyr Zelensky told them to surrender to the invading Russians: A fake video emerged on Facebook and YouTube in which a strangely motionless version of Zelensky asked Ukrainian troops to lay down their weapons in a voice different from his usual tone. The clip was also posted to Telegram and Russian social network VKontakte, according to the US think tank the Atlantic Council. TV Channel Ukraine 24 said hackers defaced its website with a still from the video and inserted a summary of the fake news into a broadcast’s scrolling chyron. Minutes after the TV station posted about the hack, Zelensky himself…

## Lead Us Not Into the Uncanny Valley …

Robert Marks and Gretchen Huizinga discuss whether future developments in artificial intelligence will lead to a better future or a worse one

This is the fourth and final segment of the recent podcast, “What Does It Mean to Be Human in an Age of Artificial Intelligence?”, featuring Walter Bradley Center director Robert J. Marks and veteran podcaster Gretchen Huizinga. In the first segment, they discussed what AI can and can’t do. In the second segment, they discussed “How did all the AI hype get started?” Then, in the third part, the discussion turned to “Straight talk about killer robots” because Dr. Marks is the author of The Case for Killer Robots. And now we come at last to the Uncanny Valley, where too much AI and robotics risks making everything weird. The entire interview was originally published by Christian think tank,…

## Computer Prof: We Can’t Give Machines Understanding of the World

Not now, anyway. Melanie Mitchell of the Santa Fe Institute finds that ever larger computers are learning to sound more sophisticated but have no intrinsic knowledge

Last December, computer science prof Melanie Mitchell, author of Artificial Intelligence: A Guide for Thinking Humans (2019), let us in on a little-publicized fact: Despite the greatly increased capacity of the vast new neural networks, they are no closer to actually understanding what they read: The crux of the problem, in my view, is that understanding language requires understanding the world, and a machine exposed only to language cannot gain such an understanding. Consider what it means to understand “The sports car passed the mail truck because it was going slower.” You need to know what sports cars and mail trucks are, that cars can “pass” one another, and, at an even more basic level, that vehicles are objects that…

## Promising New Developments in AI Prostheses Raise Stark Questions

The Olympic performance of amputee runner Oscar Pistorius in 2012 led to accusations that the prostheses performed better than natural limbs

Our rapidly developing ability to interface neurons and electronics offers amputees much more functional prostheses (though it is still a long and winding road). Here are some encouraging recent developments:

● A newer technology pioneered at Helsinki University Hospital and Imperial College London enables improved compatibility between a prosthesis and the remaining portion of the amputee’s limb. One current problem is that the connections between the prosthesis and the muscle that gives the commands (the myoelectric interface) can grow weaker due to external factors like sweating. Currently existing systems require adjustments or other measures from the user, but Yeung and his team developed a fully automated system that learns during normal use and thus adapts to varying conditions. “In this…