Mind Matters Natural and Artificial Intelligence News and Analysis

Category: Artificial Intelligence


New Learning Model for Brain Overturns 70 Years of Theory

The new model, if confirmed, could change the way algorithms are developed

According to new research, when learning takes place, it’s not just the synapses (by which neurons send signals to each other) that change but the neuron’s whole communication structure, the dendrites. The researchers compare the synapses to leaves and the dendrites to the tree. If it replicates, this is a radical revision of a hypothesis that has stood for seventy years: that brain learning occurs by modifying the strength of the synapses, following the relative firing activity of their connecting neurons. This hypothesis has been the basis for machine and deep learning algorithms, which increasingly affect almost all aspects of our lives. But after seven decades, this long-lasting hypothesis has Read More ›
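The synapse-strength hypothesis described here is exactly what artificial neural networks borrow: connection weights change in proportion to the correlated firing of the neurons they join (Hebbian learning). A minimal sketch of that update rule, for illustration only:

```python
# Hebbian-style update on a single synapse: the weight strengthens in
# proportion to the correlated activity of the neurons it connects
# ("neurons that fire together, wire together").
def hebbian_update(weight, pre, post, lr=0.01):
    return weight + lr * pre * post

w = 0.2
w = hebbian_update(w, pre=1.0, post=1.0)  # co-active pair: synapse strengthens
w = hebbian_update(w, pre=1.0, post=0.0)  # no post-synaptic firing: no change
print(round(w, 4))  # 0.21
```

The new research suggests that biological learning also reshapes the dendritic tree itself, something this weight-only picture leaves out entirely.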


Of Woman and Machine: Are Women and Technology at Odds?

A DailyWire host turns to Mary Shelley’s Frankenstein for some insights

That’s what author and DailyWire host Andrew Klavan argues in his book The Truth and Beauty (2022). In perhaps the best chapter in a book analyzing the Romantic poets, Klavan turns to Mary Shelley (1797–1851), the teenage author of Frankenstein. Shelley was not a Romantic poet, Klavan admits, but she was married to one (Percy Shelley) and was greatly influenced by the Romantics of her era. The common conclusion is that Frankenstein is about man’s attempt to usurp God. Even Shelley herself said as much about her book. “But I don’t think this is what the novel is about at all,” Klavan posits. “To me, the greatness of the story, the horror of the story, and the threat Read More ›


Turing Tests Are Terribly Misleading

Black box algorithms are now being trusted to approve loans, price insurance, screen job applicants, trade stocks, determine prison sentences, and much more. Is that wise?

In 1950 Alan Turing proposed that the question “Can machines think?” be replaced by a test of how well a computer plays the “imitation game.” A man and a woman go into separate rooms and respond with typewritten answers to questions that are intended to identify the players, each of whom is trying to persuade the interrogators that they are the other person. Turing proposed that a computer take the part of one of the players and that the experiment be deemed a success if the interrogators are no more likely to make a correct identification. There are other versions of the game, some of which were suggested by Turing. The standard Turing test today involves a human and a computer and Read More ›


How Far Will Unicorn Share Prices Fall?

Cumulative losses give us some insights

Most investors know that America’s Unicorns are losing money. What they don’t know is that most Unicorns have dug big holes for themselves and aren’t sure how to dig themselves out. What do I mean by holes? I mean massive cumulative losses, built up over many years of annual losses. Because many of today’s Unicorn startups were founded at least 10 years ago and are still unprofitable, they have had a long time to run up huge cumulative losses, some much larger than the $3 billion that Amazon once had. The biggest losses are for Uber ($29.1 billion), WeWork ($12.2 billion), Snap ($8.7 billion), Lyft ($8.5 billion), Teladoc Health ($8.1 billion), and Airbnb ($6.4 billion), followed by four Read More ›


Would Health Care AI Mean Caregivers Spend More Time on Patients?

Chances are, it will just mean fewer and less qualified caregivers

Pat Baird, regulatory head of global software standards at Philips, recently wrote an article titled, “Can Artificial Intelligence ‘Rehumanize’ Healthcare?” His thesis is that “By lowering administrative burden, AI can increase caregivers’ time spent actually caring for patients.” I will argue that this vision for the contribution of AI to healthcare delivery will not happen due to some very observable forces. A place to begin the analysis is with the funding source for AI in healthcare. AI is bought or developed by healthcare delivery organizations. These organizations are following a business plan and if AI does not provide a business benefit, they will not pay for it. We can conclude that AI in healthcare will be designed and used to Read More ›


Why AI Can’t Save Us From Ourselves — If Evolution Is Any Guide

Famous evolutionary theorist E. O. Wilson’s reflections help us understand

The late E. O. Wilson (1929–2021) received more than one hundred awards for his research and writing, including two Pulitzer Prizes. As a professor at Harvard University, Wilson influenced generations with his ideas about human evolution and ethics. In his 2012 New York Times essay “Evolution and Our Inner Conflict,” Wilson asked two key questions regarding the problem of evil in our world: Are human beings intrinsically good but corruptible by the forces of evil, or the reverse, innately sinful yet redeemable by the forces of good? Are we built to pledge our lives to a group, even to the risk of death, or the opposite, built to place ourselves and our families above all else? Wilson believed that humans Read More ›


Why GPT-3 Can’t Understand Anything

Without long-term memory, human conversation becomes impossible

There is a mathematical reason why machine learning systems like GPT-3 are incapable of understanding. The reason comes down to the fact that machine learning has no memory; it relies only on probabilistic associations. If there is only a 10% chance of going off topic at each exchange, then after just seven exchanges there is a greater than 50% chance that the machine learning model has gone off topic. The problem is that when prediction is based only on probabilities, the likelihood of staying on topic decays exponentially with the length of the conversation. A long-term memory is needed in order to maintain long-term coherence. GPT-3 is essentially a sophisticated Markov process. What is important about the Markov process is that the next step in the process is only dependent on Read More ›
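The arithmetic behind the seven-exchange figure is easy to check: if each exchange stays on topic with probability 0.9, independently of everything that came before (the memoryless, Markov-like assumption), the chance of staying on topic for n consecutive exchanges is 0.9^n. A quick sketch, using the article’s illustrative 10% drift rate:

```python
# Probability that a memoryless (Markov-like) model is still on topic
# after n exchanges, given a 10% chance of drifting at each step.
p_stay = 0.9

for n in range(1, 9):
    print(f"after {n} exchanges: P(on topic) = {p_stay ** n:.3f}")

# 0.9**7 is about 0.478, so by the seventh exchange the model has more
# likely than not wandered off topic.
```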


AI Companies Are Massively Faking the Loophole in the Turing Test

I propose the Turing Test be further strengthened by presuming a chatbot is human until proven otherwise

Computer pioneer Alan Turing faced the question: how do we know whether an AI has human-like intelligence? He offered his famous Turing test: if human judges cannot differentiate the AI from a human, then it has human-like intelligence. His test has spawned a number of competitions in which participants try to fool judges into thinking that a chatbot is really a human. One of the best-known chatbots was Eugene Goostman, which fooled the judges into thinking it was a 13-year-old boy — mostly by using indirection and other distraction techniques to avoid the sort of in-depth questioning that shows that a chatbot lacks understanding. However, there is a loophole in this test. Can you spot the loophole? What better Read More ›


Soylent AI is…people!

OpenAI advertises itself as AI-powered, but at the end of the day, the system is human-powered

In the sci-fi movie “Soylent Green,” the big reveal is that a food called soylent green is actually made from human beings, the catchphrase being “soylent green is people.” Likewise, as I discovered from a recent exchange with OpenAI’s GPT-3, “soylent AI is people.” GPT-3 is the product of AI company OpenAI. The company made headlines in 2019 with the claim that their AI model was too dangerous to publicly release. OpenAI is not a mere research company. While their publicly stated goal is fairly modest – “Aligning AI systems with human intent” – their CEO Sam Altman has bigger plans. He left his very successful role as president of Y Combinator, one of Silicon Valley’s most successful venture capital Read More ›


Will AI Start the Next Pandemic? It Easily Could.

It’s a bigger risk than we might think, as an experiment written up in a Nature journal has shown

In a recent paper in Nature Machine Intelligence, three drug discovery researchers share an unsettling result from their experiment with AI drug discovery. Their normal practice when getting AI software to motor through thousands of possibilities (which might take human researchers years) is to penalize toxicity and reward bioactivity. They wondered what would happen when they decided to reward both toxicity and bioactivity — challenging their artificial intelligence, modeled on open source software, to create a lethal bioweapon: To narrow the universe of molecules, we chose to drive the generative model towards compounds such as the nerve agent VX, one of the most toxic chemical warfare agents developed during the twentieth century — a few salt-sized grains Read More ›
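Conceptually, the change the researchers describe is tiny: flip the sign on the toxicity term in the model’s scoring function, so that toxicity is rewarded rather than penalized. A schematic sketch — the function and weights here are hypothetical illustrations, not the authors’ actual code:

```python
# Schematic scoring of candidate molecules in a generative search.
# Normal practice: reward predicted bioactivity, penalize predicted toxicity.
def score(bioactivity, toxicity, toxicity_weight=-1.0):
    return bioactivity + toxicity_weight * toxicity

# The experiment described amounts to flipping that penalty into a reward:
def inverted_score(bioactivity, toxicity):
    return score(bioactivity, toxicity, toxicity_weight=+1.0)

highly_toxic = {"bioactivity": 0.8, "toxicity": 0.9}
print(score(**highly_toxic))           # low score: normally filtered out
print(inverted_score(**highly_toxic))  # high score: now actively sought
```

The unsettling point is that no new capability is needed for misuse — only a one-line change in what the search optimizes for.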


What Happened When a Deepfake Got Weaponized in the Ukraine War…

“Zelensky surrenders” wasn’t a world class effort so it got found out. But Nixon’s (fictional) eulogy for astronauts won awards…

On March 16, Ukrainians were surprised by a video shared at Facebook and YouTube in which their president Volodymyr Zelensky told them to surrender to the invading Russians: A fake video emerged on Facebook and YouTube in which a strangely motionless version of Zelensky asked Ukrainian troops to lay down their weapons in a voice different from his usual tone. The clip was also posted to Telegram and Russian social network VKontakte, according to the US think tank the Atlantic Council. TV Channel Ukraine 24 said hackers defaced its website with a still from the video and inserted a summary of the fake news into a broadcast’s scrolling chyron. Minutes after the TV station posted about the hack, Zelensky himself Read More ›


Lead Us Not Into the Uncanny Valley …

Robert Marks and Gretchen Huizinga discuss whether future developments in artificial intelligence will lead to a better future or a worse one

This is the fourth and final segment of the recent podcast, “What Does It Mean to Be Human in an Age of Artificial Intelligence?”, featuring Walter Bradley Center director Robert J. Marks and veteran podcaster Gretchen Huizinga. In the first segment, they discussed what AI can and can’t do. In the second segment, they discussed “How did all the AI hype get started?” Then, in the third part, the discussion turned to “Straight talk about killer robots” because Dr. Marks is the author of The Case for Killer Robots. And now we come at last to the Uncanny Valley, where too much AI and robotics risks making everything weird. The entire interview was originally published by Christian think tank, Read More ›


Computer Prof: We Can’t Give Machines Understanding of the World

Not now, anyway. Melanie Mitchell of the Santa Fe Institute finds that ever larger computers are learning to sound more sophisticated but have no intrinsic knowledge

Last December, computer science prof Melanie Mitchell, author of Artificial Intelligence: A Guide for Thinking Humans (2019), let us in on a little-publicized fact: despite the greatly increased capacity of the vast new neural networks, they are no closer to actually understanding what they read: The crux of the problem, in my view, is that understanding language requires understanding the world, and a machine exposed only to language cannot gain such an understanding. Consider what it means to understand “The sports car passed the mail truck because it was going slower.” You need to know what sports cars and mail trucks are, that cars can “pass” one another, and, at an even more basic level, that vehicles are objects that Read More ›


Promising New Developments in AI Prostheses Raise Stark Questions

The Olympic performance of amputee runner Oscar Pistorius in 2012 led to accusations that the prostheses performed better than natural limbs

Our rapidly developing ability to interface neurons and electronics offers amputees much more functional prostheses (though it is still a long and winding road). Here are some encouraging recent developments: ● A newer technology pioneered at Helsinki University Hospital and Imperial College London enables improved compatibility between a prosthesis and the remaining portion of the amputee’s limb. One current problem is that the connections between the prosthesis and the muscle that gives the commands (the myoelectric interface) can grow weaker due to external factors like sweating. Currently existing systems require adjustments or other measures from the user, but Yeung and his team developed a fully automated system that learns during normal use and thus adapts to varying conditions. “In this Read More ›


The AI Illusion – State-of-the-Art Chatbots Aren’t What They Seem

GPT-3 is very much like a performance by a good magician

Artificial intelligence is an oxymoron. Despite all the incredible things computers can do, they are still not intelligent in any meaningful sense of the word. Decades ago, AI researchers largely abandoned their quest to build computers that mimic our wondrously flexible human intelligence and instead created algorithms that were useful (i.e., profitable). Despite this understandable detour, some AI enthusiasts market their creations as genuinely intelligent. For example, a few months ago, Blaise Aguera y Arcas, the head of Google’s AI group in Seattle, argued that “statistics do amount to understanding.” As evidence, he cites a few exchanges with Google’s LaMDA chatbot. The examples were impressively coherent but they are still what Gary Marcus and Ernest Davis characterize as “a fluent spouter of bullshit” because computer algorithms Read More ›


Robert J. Marks: Straight Talk About Killer Robots

Dr. Marks, the author of The Case for Killer Robots, shares his expertise with Gretchen Huizinga of the Beatrice Institute

In the first segment of the recent podcast, “What Does It Mean to Be Human in an Age of Artificial Intelligence?”, Walter Bradley Center director Robert J. Marks discussed what artificial intelligence can and can’t do and its ethical implications with veteran podcaster Gretchen Huizinga. In the second segment, they discussed “How did all the AI hype get started?” Then, in this third part, the discussion turned to the use of artificial intelligence in warfare. Dr. Marks is the author of The Case for Killer Robots, which looks at the issues raised in some detail. Here he gives a brief overview. The entire interview was originally published by Christian think tank, the Beatrice Institute (March 3, 2022) and is repeated Read More ›


Can Computers — and People — Learn To Think From the Bottom Up?

That’s the big promise made in a recent article at Aeon

Tufts University biologist Michael Levin and Columbia University neuroscientist Rafael Yuste have an ambitious project in hand: To explain how evolution “‘hacked’ its way to intelligence from the bottom up,” that is, from nothing. They base their thesis on computer science: This is intelligence in action: the ability to reach a particular goal or solve a problem by undertaking new steps in the face of changing circumstances. It’s evident not just in intelligent people and mammals and birds and cephalopods, but also cells and tissues, individual neurons and networks of neurons, viruses, ribosomes and RNA fragments, down to motor proteins and molecular networks. Across all these scales, living things solve problems and achieve goals by flexibly navigating different spaces – Read More ›


Robert J. Marks: AI History — How Did All the Hype Get Started?

Dr. Marks and Gretchen Huizinga muse on the remarkable inventors who made AI what it is — and isn’t — today

In the first segment of the recent podcast, “What Does It Mean to Be Human in an Age of Artificial Intelligence?”, Walter Bradley Center director Robert J. Marks discussed what artificial intelligence can and can’t do and its ethical implications with veteran podcaster Gretchen Huizinga. In this segment, they talk about the hope, the hype and the likely realities. The entire interview was originally published by Christian think tank, the Beatrice Institute (March 3, 2022) and is repeated here with their kind permission: https://mindmatters.ai/wp-content/uploads/sites/2/2022/03/Mind-Matters-Episode-176-Beatrice-Institute-Rebroadcast-rev1.mp3 Here’s a partial transcript of the second segment, with notes and links. This portion begins at 18:55 min. A partial transcript and notes, Show Notes, and Additional Resources follow. Gretchen Huizinga: Computational intelligence is one of Read More ›


Robert J. Marks: Zeroing In on What AI Can and Can’t Do

Walter Bradley Center director Marks discusses what’s hot and what’s not in AI with fellow computer maven Gretchen Huizinga

What makes mankind special? And what does it mean to flourish on the frontier of a technological future? In a recent podcast, “What Does It Mean to Be Human in an Age of Artificial Intelligence?”, Walter Bradley Center director Robert J. Marks discusses what artificial intelligence can and can’t do and its ethical implications with veteran podcaster Gretchen Huizinga. This interview was originally published by Christian think tank, the Beatrice Institute (March 3, 2022) and is repeated here with their kind permission: https://mindmatters.ai/wp-content/uploads/sites/2/2022/03/Mind-Matters-Episode-176-Beatrice-Institute-Rebroadcast-rev1.mp3 Here’s a partial transcript of the first segment, with notes and links: Gretchen Huizinga: Well, Bob, you’re not just a senior fellow and director of the Walter Bradley Center, but you’re also a co-founder and were instrumental Read More ›


Ants Use Algorithms Similar to Those of the Internet

Optimization algorithms enable the ant colony to decide how many ants to send to a given food source and when to drastically reduce the number

Researchers are beginning to understand how ant colonies can make complex decisions. It’s best understood, they say, as something like an optimization algorithm: Scientists found that ants and other natural systems use optimization algorithms similar to those used by engineered systems, including the Internet. These algorithms invest incrementally more resources as long as signs are encouraging but pull back quickly at the first sign of trouble. The systems are designed to be robust, allowing for portions to fail without harming the entire system. Understanding how these algorithms work in the real world may help solve engineering problems, whereas engineered systems may offer clues to understanding the behavior of ants, cells, and other natural systems. Cold Spring Harbor Laboratory, “Deciphering algorithms Read More ›
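The rule described — invest a little more while signals are encouraging, pull back sharply at the first sign of trouble — is essentially the additive-increase/multiplicative-decrease (AIMD) scheme that TCP uses for Internet congestion control. A minimal sketch (the parameter values are illustrative, not taken from the study):

```python
# Additive-increase/multiplicative-decrease (AIMD): grow the commitment
# linearly while signals are good, cut it in half on a bad signal.
def aimd_update(rate, signal_good, increase=1.0, decrease_factor=0.5):
    if signal_good:
        return rate + increase      # cautious linear growth
    return rate * decrease_factor   # rapid pull-back

# Foragers sent to a food source: rewarding for five steps, then trouble.
rate, history = 1.0, []
for step in range(8):
    rate = aimd_update(rate, signal_good=(step < 5))
    history.append(rate)
print(history)  # [2.0, 3.0, 4.0, 5.0, 6.0, 3.0, 1.5, 0.75]
```

The asymmetry is the point: slow growth limits overcommitment, while the sharp cutback makes both ant colonies and networks robust to sudden failures.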