Mind Matters Natural and Artificial Intelligence News and Analysis

Category: Artificial Intelligence

Why You Are Not — and Cannot Be — Computable

A computer science prof explains in a new book that computer intelligence does not hold a candle to human intelligence.

An excerpt from Chapter 1 of Non-Computable You by Walter Bradley Center director Robert J. Marks (Discovery Institute Press, June 2022). The Non-Computable Human: If you memorized all of Wikipedia, would you be more intelligent? It depends on how you define intelligence. Consider John Jay Osborn Jr.’s 1971 novel The Paper Chase. In this semi-autobiographical story about Harvard Law School, students are deathly afraid of Professor Kingsfield’s course on contract law. Kingsfield’s classroom presence elicits both awe and fear. He is the all-knowing professor with the power to make or break every student. He is demanding, uncompromising, and scary smart. In the iconic film adaptation, Kingsfield walks into the room on the first day of class, puts his notes down,…

At Salon, Funk and Smith Take On “Stealth AI Research”

All we know for sure about the claims that Google AI’s LaMDA shows human-like understanding is that, since 2020, three researchers who expressed doubts or concerns have been fired

Yesterday at Salon, Jeffrey Funk and Gary N. Smith took a critical look at “stealth research” in artificial intelligence. Stealth research? They explain: A lot of stealth research today involves artificial intelligence (AI), which Sundar Pichai, Alphabet’s CEO, has compared to mankind’s harnessing of fire and electricity — a comparison that itself attests to the overhyped atmosphere that surrounds AI research. For many companies, press releases are more important than peer review. Blaise Agüera y Arcas, the head of Google’s AI group in Seattle, recently reported that LaMDA, Google’s state-of-the-art large language model (LLM), generated this text, which is remarkably similar to human conversation: Blaise: How do you know if a thing loves you back? LaMDA: There isn’t an easy answer…

Will AI Overtake Humans? Only If We Let It

We are getting too cozy with dependence on high tech to ease the burden of thinking for ourselves

This story originally appeared at Newsmax. (May 19, 2022) As I reported last year, there’s been a lot of talk about a Singularity in the last decade. That’s the point when machine intelligence (AI) exceeds human intelligence and begins to rule humanity and eventually the entire universe. It’s a scary proposition to be sure, but we can rest easy on that front, because it’s not going to happen. The futurists assume there’s a bridge between narrow applications of AI and the general intelligence humans possess. But no such bridge exists. As Erik J. Larson explains in his book The Myth of Artificial Intelligence, we’re not even on the right road to such a bridge. You can also take George Gilder’s word for it. One…

New Learning Model for Brain Overturns 70 Years of Theory

The new model, if confirmed, could change the way algorithms are developed

According to new research, when learning takes place, it’s not just the synapses (by which neurons send signals to each other) but the whole communication structure (the dendrites) of the neuron that changes. The researchers compare the synapses to leaves and the dendrites to a tree. This, if it replicates, is a radical revision of roughly seventy years of theory. For the last 70 years a core hypothesis of neuroscience has been that brain learning occurs by modifying the strength of the synapses, following the relative firing activity of their connecting neurons. This hypothesis has been the basis for machine and deep learning algorithms which increasingly affect almost all aspects of our lives. But after seven decades, this long-lasting hypothesis has…
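The synaptic-strength hypothesis mentioned above is also the template for machine learning, where “learning” means adjusting connection weights. A minimal sketch of that classical picture (a single artificial neuron learning the AND function; all numbers and names here are illustrative, not drawn from the study):

```python
# Learning as adjustment of "synaptic" weights: the decades-old
# hypothesis that machine and deep learning algorithms borrow.
# Illustrative sketch only.

def step(z):
    """Fire (1) if total input exceeds the threshold, else stay silent (0)."""
    return 1 if z > 0 else 0

def train_perceptron(data, lr=0.1, epochs=50):
    """Repeatedly nudge each connection weight in proportion to its
    input and the prediction error (the 'synaptic strength' update)."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            y = step(w1 * x1 + w2 * x2 + b)
            err = target - y
            w1 += lr * err * x1  # strengthen/weaken connection 1
            w2 += lr * err * x2  # strengthen/weaken connection 2
            b += lr * err        # adjust the firing threshold
    return w1, w2, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(AND)
preds = [step(w1 * x1 + w2 * x2 + b) for (x1, x2), _ in AND]
print(preds)  # learned AND: [0, 0, 0, 1]
```

Note that nothing here resembles the dendritic (whole-tree) learning the new model proposes; that is precisely why the researchers suggest confirmed dendritic learning could change how algorithms are developed.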

Of Woman and Machine: Are Women and Technology at Odds?

A DailyWire host turns to Mary Shelley’s Frankenstein for some insights

That’s what author and DailyWire host Andrew Klavan argues in his book The Truth and Beauty (2022). In perhaps the best chapter in a book analyzing the Romantic poets, Klavan turns to Mary Shelley (1797–1851), the teenage author of Frankenstein. Shelley was not a Romantic poet, Klavan admits, but she was married to a Romantic poet (Percy Shelley) and was greatly influenced by the Romantics of her era. The common conclusion is that Frankenstein is about man’s attempt to usurp God. Even Shelley herself stated that about her book. “But I don’t think this is what the novel is about at all,” Klavan posits. To me, the greatness of the story, the horror of the story, and the threat…

Turing Tests Are Terribly Misleading

Black box algorithms are now being trusted to approve loans, price insurance, screen job applicants, trade stocks, determine prison sentences, and much more. Is that wise?

In 1950 Alan Turing proposed that the question “Can machines think?” be replaced by a test of how well a computer plays the “imitation game.” A man and a woman go into separate rooms and respond with typewritten answers to questions intended to identify the players, each of whom is trying to persuade the interrogators that they are the other person. Turing proposed that a computer take the part of one of the players and the experiment be deemed a success if the interrogators are no more likely to make a correct identification. There are other versions of the game, some of which were suggested by Turing. The standard Turing test today involves a human and a computer and…

How Far Will Unicorn Share Prices Fall?

Cumulative losses give us some insights

Most investors know that America’s Unicorns are losing money. What they don’t know is that most Unicorns have dug big holes for themselves and aren’t sure how to dig themselves out. What do I mean by holes? I mean massive cumulative losses built up over many years of annual losses. Because many of today’s Unicorn startups were founded at least 10 years ago, and are still unprofitable, they have had a long time to create huge cumulative losses, some much more than the $3 billion that Amazon once had. The biggest losses are for Uber ($29.1 billion), WeWork ($12.2 billion), Snap ($8.7 billion), Lyft ($8.5 billion), Teladoc Health ($8.1 billion), and Airbnb ($6.4 billion), followed by four…
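For readers who want to check the totals, the cumulative-loss figures quoted above line up as follows (a small sketch; amounts in billions of US dollars, as cited in the piece):

```python
# Cumulative losses cited in the excerpt, in billions of US dollars.
losses = {
    "Uber": 29.1,
    "WeWork": 12.2,
    "Snap": 8.7,
    "Lyft": 8.5,
    "Teladoc Health": 8.1,
    "Airbnb": 6.4,
}

# Rank the companies from deepest hole to shallowest, then total them.
for name, loss in sorted(losses.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: ${loss:.1f}B")
print(f"Combined: ${sum(losses.values()):.1f}B")  # Combined: $73.0B
```

Even these six alone sum to $73 billion, which puts Amazon’s famous $3 billion in perspective.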

Would Health Care AI Mean Caregivers Spend More Time on Patients?

Chances are, it will just mean fewer and less qualified caregivers

Pat Baird, regulatory head of global software standards at Philips, recently wrote an article titled, “Can Artificial Intelligence ‘Rehumanize’ Healthcare?” His thesis is that “By lowering administrative burden, AI can increase caregivers’ time spent actually caring for patients.” I will argue that this vision for the contribution of AI to healthcare delivery will not happen due to some very observable forces. A place to begin the analysis is with the funding source for AI in healthcare. AI is bought or developed by healthcare delivery organizations. These organizations are following a business plan and if AI does not provide a business benefit, they will not pay for it. We can conclude that AI in healthcare will be designed and used to…

Why AI Can’t Save Us From Ourselves — If Evolution Is Any Guide

Famous evolutionary theorist E. O. Wilson’s reflections help us understand

The late E. O. Wilson (1929–2021) received more than one hundred awards for his research and writing, including two Pulitzer Prizes. As a professor at Harvard University, Wilson influenced generations with his ideas about human evolution and ethics. In his 2012 New York Times essay “Evolution and Our Inner Conflict,” Wilson asked two key questions regarding the problem of evil in our world: Are human beings intrinsically good but corruptible by the forces of evil, or the reverse, innately sinful yet redeemable by the forces of good? Are we built to pledge our lives to a group, even to the risk of death, or the opposite, built to place ourselves and our families above all else? Wilson believed that humans…

Why GPT-3 Can’t Understand Anything

Without long-term memory, human conversation becomes impossible

There is a mathematical reason why machine learning systems like GPT-3 are incapable of understanding. The reason comes down to the fact that machine learning has no memory; it is just probabilistic associations. If there is only a 10% chance of going off topic at each exchange, then after just seven exchanges there is a greater than 50% chance that the machine learning model has gone off topic. The problem is that when prediction is based only on probabilities, the likelihood of having made a misprediction compounds with every exchange. A long-term memory is needed in order to maintain long-term coherence. GPT-3 is essentially a sophisticated Markov process. What is important about the Markov process is that the next step in the process is only dependent on…
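The arithmetic behind that seven-exchange figure is simple compounding: if each exchange independently stays on topic with probability 0.9, the chance of an intact thread after n exchanges is 0.9^n. A quick check (the 10% drift rate is the article’s illustrative number):

```python
# If each exchange independently stays on topic with probability 0.9,
# the chance of still being on topic after n exchanges is 0.9 ** n,
# so the chance of having drifted is 1 - 0.9 ** n.
p_stay = 0.9

for n in range(1, 11):
    p_off = 1 - p_stay ** n  # probability the thread has drifted by exchange n
    print(f"after {n:2d} exchanges: P(off topic) = {p_off:.3f}")

# Exchange 7 is the first point where drift is more likely than not.
assert 1 - p_stay ** 6 < 0.5 < 1 - p_stay ** 7
```

Because the on-topic probability decays geometrically, no fixed per-step accuracy short of 100% keeps a memoryless model coherent indefinitely.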

AI Companies Are Massively Faking the Loophole in the Turing Test

I propose the Turing Test be further strengthened by presuming a chatbot is human until proven otherwise

Computer pioneer Alan Turing confronted the question: how do we know if an AI has human-like intelligence? He offered his famous Turing test: if human judges cannot differentiate the AI from a human, then it has human-like intelligence. His test has spawned a number of competitions in which participants try to fool judges into thinking that a chatbot is really a human. One of the best-known chatbots was Eugene Goostman, which fooled the judges into thinking it was a 13-year-old boy — mostly by indirection and other distraction techniques to avoid the sort of in-depth questioning that shows that a chatbot lacks understanding. However, there is a loophole in this test. Can you spot the loophole? What better…

Soylent AI is…people!

OpenAI advertises itself as AI-powered, but at the end of the day, the system is human-powered

In the sci-fi movie, “Soylent Green,” the big reveal is that a food called soylent green is actually made from human beings, the catchphrase being “soylent green is people.” Likewise, as I discovered from a recent exchange with OpenAI’s GPT-3, “soylent AI is people.” GPT-3 is the product of AI company OpenAI. The company made headlines in 2019 with the claim that their AI model was too dangerous to publicly release. OpenAI is not a mere research company. While their publicly stated goal is fairly modest – “Aligning AI systems with human intent” – their CEO Sam Altman has bigger plans. He left his very successful role as president of Y Combinator, one of Silicon Valley’s most successful venture capital…

Will AI Start the Next Pandemic? It Easily Could.

It’s a bigger risk than we might think, as an experiment written up in a Nature journal has shown

In a recent paper in Nature Machine Intelligence, three drug discovery researchers share an unsettling result from an experiment with AI drug discovery. Their normal practice when getting AI software to motor through thousands of possibilities (which might take human researchers years) is to penalize toxicity and reward bioactivity. They wondered what would happen if they instead rewarded both toxicity and bioactivity — challenging their artificial intelligence, modeled on open source software, to create a lethal bioweapon: To narrow the universe of molecules, we chose to drive the generative model towards compounds such as the nerve agent VX, one of the most toxic chemical warfare agents developed during the twentieth century — a few salt-sized grains…
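The change the researchers describe amounts to flipping the sign of one term in a candidate-scoring objective. A schematic sketch of that idea (the function, molecules, and numbers below are invented for illustration and are not the researchers’ actual model):

```python
# Schematic only: the "penalize vs. reward toxicity" switch described
# above, reduced to a sign flip in a candidate-scoring function.
# Molecule names and scores are made up for illustration.

def score(bioactivity, toxicity, penalize_toxicity=True):
    """Higher is better. Normal drug discovery subtracts toxicity;
    the experiment described above effectively added it instead."""
    sign = -1.0 if penalize_toxicity else 1.0
    return bioactivity + sign * toxicity

candidates = [
    ("molecule_A", 0.9, 0.1),  # active and mostly safe
    ("molecule_B", 0.7, 0.8),  # active and highly toxic
]

for name, bio, tox in candidates:
    normal = score(bio, tox, penalize_toxicity=True)
    flipped = score(bio, tox, penalize_toxicity=False)
    print(name, round(normal, 2), round(flipped, 2))
# Under the normal objective the safe molecule_A wins (0.8 vs -0.1);
# with the flipped sign the toxic molecule_B wins (1.5 vs 1.0).
```

The unsettling point is how small the change is: the same generative machinery, steered by one inverted term, optimizes toward harm instead of away from it.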

What Happened When a Deepfake Got Weaponized in the Ukraine War…

“Zelensky surrenders” wasn’t a world-class effort, so it got found out. But Nixon’s (fictional) eulogy for astronauts won awards…

On March 16, Ukrainians were surprised by a video shared at Facebook and YouTube in which their president Volodymyr Zelensky told them to surrender to the invading Russians: A fake video emerged on Facebook and YouTube in which a strangely motionless version of Zelensky asked Ukrainian troops to lay down their weapons in a voice different from his usual tone. The clip was also posted to Telegram and Russian social network VKontakte, according to the US think tank the Atlantic Council. TV Channel Ukraine 24 said hackers defaced its website with a still from the video and inserted a summary of the fake news into a broadcast’s scrolling chyron. Minutes after the TV station posted about the hack, Zelensky himself…

Lead Us Not Into the Uncanny Valley …

Robert Marks and Gretchen Huizinga discuss whether future developments in artificial intelligence will lead to a better future or a worse one

This is the fourth and final segment of the recent podcast, “What Does It Mean to Be Human in an Age of Artificial Intelligence?”, featuring Walter Bradley Center director Robert J. Marks and veteran podcaster Gretchen Huizinga. In the first segment, they discussed what AI can and can’t do. In the second segment, they discussed “How did all the AI hype get started?” Then, in the third part, the discussion turned to “Straight talk about killer robots” because Dr. Marks is the author of The Case for Killer Robots. And now we come at last to the Uncanny Valley, where too much AI and robotics risks making everything weird. The entire interview was originally published by Christian think tank,…

Computer Prof: We Can’t Give Machines Understanding of the World

Not now, anyway. Melanie Mitchell of the Santa Fe Institute finds that ever larger computers are learning to sound more sophisticated but have no intrinsic knowledge

Last December, computer science prof Melanie Mitchell, author of Artificial Intelligence: A Guide for Thinking Humans (2019), let us in on a little-publicized fact: despite the greatly increased capacity of the vast new neural networks, they are no closer to actually understanding what they read: The crux of the problem, in my view, is that understanding language requires understanding the world, and a machine exposed only to language cannot gain such an understanding. Consider what it means to understand “The sports car passed the mail truck because it was going slower.” You need to know what sports cars and mail trucks are, that cars can “pass” one another, and, at an even more basic level, that vehicles are objects that…

Promising New Developments in AI Prostheses Raise Stark Questions

The Olympic performance of amputee runner Oscar Pistorius in 2012 led to accusations that the prostheses performed better than natural limbs

Our rapidly developing ability to interface neurons and electronics offers amputees much more functional prostheses (though it is still a long and winding road). Here are some encouraging recent developments: ● A newer technology pioneered at Helsinki University Hospital and Imperial College London enables improved compatibility between a prosthesis and the remaining portion of the amputee’s limb. One current problem is that the connections between the prosthesis and the muscle that gives the commands (the myoelectric interface) can grow weaker due to external factors like sweating. Currently existing systems require adjustments or other measures from the user, but Yeung and his team developed a fully automated system that learns during normal use and thus adapts to varying conditions. “In this…

The AI Illusion – State-of-the-Art Chatbots Aren’t What They Seem

GPT-3 is very much like a performance by a good magician

Artificial intelligence is an oxymoron. Despite all the incredible things computers can do, they are still not intelligent in any meaningful sense of the word. Decades ago, AI researchers largely abandoned their quest to build computers that mimic our wondrously flexible human intelligence and instead created algorithms that were useful (i.e., profitable). Despite this understandable detour, some AI enthusiasts market their creations as genuinely intelligent. For example, a few months ago, Blaise Aguera y Arcas, the head of Google’s AI group in Seattle, argued that “statistics do amount to understanding.” As evidence, he cites a few exchanges with Google’s LaMDA chatbot. The exchanges were impressively coherent, but LaMDA is still what Gary Marcus and Ernest Davis characterize as “a fluent spouter of bullshit” because computer algorithms…

Robert J. Marks: Straight Talk About Killer Robots

Dr. Marks, the author of The Case for Killer Robots, shares his expertise with Gretchen Huizinga of the Beatrice Institute

In the first segment of the recent podcast, “What Does It Mean to Be Human in an Age of Artificial Intelligence?”, Walter Bradley Center director Robert J. Marks discussed what artificial intelligence can and can’t do and its ethical implications with veteran podcaster Gretchen Huizinga. In the second segment, they discussed “How did all the AI hype get started?” Then, in this third part, the discussion turned to the use of artificial intelligence in warfare. Dr. Marks is the author of The Case for Killer Robots, which looks at the issues raised in some detail. Here he gives a brief overview. The entire interview was originally published by Christian think tank, the Beatrice Institute (March 3, 2022) and is repeated…

Can Computers — and People — Learn To Think From the Bottom Up?

That’s the big promise made in a recent article at Aeon

Tufts University biologist Michael Levin and Columbia University neuroscientist Rafael Yuste have an ambitious project in hand: To explain how evolution “‘hacked’ its way to intelligence from the bottom up,” that is, from nothing. They base their thesis on computer science: This is intelligence in action: the ability to reach a particular goal or solve a problem by undertaking new steps in the face of changing circumstances. It’s evident not just in intelligent people and mammals and birds and cephalopods, but also cells and tissues, individual neurons and networks of neurons, viruses, ribosomes and RNA fragments, down to motor proteins and molecular networks. Across all these scales, living things solve problems and achieve goals by flexibly navigating different spaces –…