Mind Matters Natural and Artificial Intelligence News and Analysis

Category: Machine Learning


How Do We Know the Machine Is Right If No One Knows How It Works?

We don’t, and that’s a problem, says Oxford philosopher John Zerilli

Oxford philosopher John Zerilli, author of A Citizen’s Guide to Artificial Intelligence (2021), asks us to consider how machine learning, the most widely used type of AI, might be deciding our lives without our knowing it: There are many reasons not to take job rejections personally, but there’s one in particular you might not consider: you might have been screened out by an algorithm that taught itself to filter candidates by gender, surname or ethnicity – in other words, by factors that have nothing to do with your ability to do the job. Even if you’re unfazed by the spectre of runaway robots enslaving humanity, this little tale shows how the ascendancy of machine learning (ML) comes with risks that…


How Marvin Minsky Inspired Artificial Neural Networks

And what Minsky said when a scientist seeking to further develop the field finally met him

Dr. Paul Werbos calls it “a soap opera you wouldn’t believe”: the story of how a young Werbos was inspired by pioneering computer scientist Marvin Minsky to pursue the development of artificial neural networks, and how Minsky later could not support the effort because he did not believe there was a solution to its many problems. In this week’s podcast, Dr. Robert J. Marks interviewed Dr. Paul Werbos, famous for his 1974 dissertation, which proposed training artificial neural networks through backpropagation of errors. The two discuss Werbos’s journey in the development of artificial neural networks and the role Marvin Minsky played throughout. This portion begins at 04:25. A partial transcript, Show Notes, and Additional Resources follow. Robert J. Marks:…


Will AI Ever Replace Human Beings? Why Do You Ask?

A better question might be: Why do we want to know the future of artificial intelligence?
The question of whether a machine can ever fully replace a human can have only one predefined answer. My question is: why bother asking it? You already know the only answer you will accept! Read More ›

Consensus Gives Us Information Only If We Are Free to Doubt

There are so many credentialed people on the internet with sufficiently differing views that it sometimes seems as if we could find an expert somewhere to support almost any harebrained idea. So how does a non-expert figure out the truth? Most of us lack the time, training, and inclination to investigate most subjects sufficiently so we are often urged to adopt the consensus opinion. While an individual expert may have wild and crazy ideas, the consensus will most likely be an average informed view. But it’s not that simple. Most of the time it is impossible for the public to determine the consensus opinion. What is usually labeled as consensus opinion is what media believe it to be. And the…


Will Humans Ever Be Fully Replaceable by AI? Part 1

We must first ask: what is a person, and what is the nature of the universe in which a person can exist?

The title question has been around for quite some time. In this discussion, I would like to take an ontological look at it. What is the essential nature of being a person? To fully replace humans, what must AI machines become capable of? If we want to consider the possibility of making humans obsolete, we need to know: what is the essence of humanity? What is the ontological nature of a person? What characteristics define being a person? Even before we can address the essential nature of a person, we must identify the essential nature of the universe in which that person exists. What is the universe? How many dimensions does it have? Can the universe, or in it…


A Critical Look at the Myth of “Deep Learning”

“Deep learning” is as misnamed a computational technique as exists.

I’ve been reviewing philosopher and programmer Erik Larson’s The Myth of Artificial Intelligence. See my earlier posts, here, here, and here. “Deep learning” is as misnamed a computational technique as exists. The actual technique refers to multi-layered neural networks, and, true enough, those multiple layers can do a lot of significant computational work. But the phrase “deep learning” suggests that the machine is doing something profound and beyond the capacity of humans. That’s far from the case. The Wikipedia article on deep learning is instructive in this regard. Consider the following image used there to illustrate deep learning: Note the rendition of the elephant at the top and compare it with the image of the elephant as we experience it at the bottom. The image at the bottom is rich,…


Artificial Unintelligence

The failure of computer programs to recognize a rudimentary drawing of a wagon reveals the vast differences between artificial and human intelligence

In 1979, when he was just 34 years old, Douglas Hofstadter won a National Book Award and Pulitzer Prize for his book, Gödel, Escher, Bach: An Eternal Golden Braid, which explored how our brains work and how computers might someday mimic human thought. He has spent his life trying to solve this incredibly difficult puzzle. How do humans learn from experience? How do we understand the world we live in? Where do emotions come from? How do we make decisions? Can we write inflexible computer code that will mimic the mysteriously flexible human mind?  Hofstadter has concluded that analogy is “the fuel and fire of thinking.” When humans see, hear, or read something, we can focus on the most salient features, its “skeletal essence.”…


AI Researcher: Stop Calling Everything “Artificial Intelligence”

It’s not really intelligence, says Berkeley’s Michael Jordan, and we risk misunderstanding what these machines can really do for us

Computer scientist Michael I. Jordan, a leading AI researcher, says today’s artificial intelligence systems aren’t actually intelligent and people should stop talking about them as if they were: They are showing human-level competence in low-level pattern recognition skills, but at the cognitive level they are merely imitating human intelligence, not engaging deeply and creatively, says Michael I. Jordan, a leading researcher in AI and machine learning. Jordan is a professor in the department of electrical engineering and computer science, and the department of statistics, at the University of California, Berkeley. Kathy Pretz, “Stop Calling Everything AI, Machine-Learning Pioneer Says” at IEEE Spectrum (March 31, 2021) Their principal role, he says, is to “augment human intelligence, via painstaking analysis of large…


How an AI Giant Beat Cancel Culture (You Can Too!)

A Twitter mob led by an AI industry bully made a mistake when it came for University of Washington's Pedro Domingos

These days, Cancel culture can descend suddenly on anyone who doesn’t think the way a Twitter mob likes about one or another issue. For example: ➤ Celebrity atheist scientist Richard Dawkins was Canceled from speaking at Trinity College in Ireland because he has said critical things about Islam and about some claims of sexual assault. Note: Dawkins says critical things about all religions but Cancel mobs focus narrowly. ➤ The enforcement is irrational. Antiracist author Ibram X. Kendi can make negative statements about transgender culture comparatively safely but J. K. Rowling, in a similar circumstance, became the target of a vicious “deplatform” campaign, against which she ably defended herself. However, people who cannot write like Rowling have not nearly been…


Exactly What IS Artificial Intelligence Anyway?

How does AI relate to machine learning (ML), neural computing, informatics, and a host of other hot CS buzz words?

Robert J. Marks, director of the Walter Bradley Center for Natural & Artificial Intelligence, likes to explain AI by saying “AI is anything computers do that is kind of amazing.” (“Human Exceptionalism,” Reasons to Believe, August 8, 2020). Using this definition, AI is a general term that includes a collection of computer science technologies. AI is fluid. Dr. Elaine Rich (pictured), noted computer scientist and an author of Artificial Intelligence, offers a more specific definition: “AI is the study of how to make computers do things which, at the moment, people do better.” (Accessed February 17, 2021) Relying on this definition, John Hsia observes: “By definition, once a computer can do what people used to do better, it’s no longer…


AI Tool Now Predicts Attacks of Locust Swarms for African Farmers

Under the right circumstances, data from the past can be used to predict data in the future

A new free AI tool now forewarns African farmers about impending locust attacks: “Farmers and pastoralists receive free SMS alerts 2-3 months in advance of when locusts are highly likely to attack farms and livestock forage in their areas, allowing for early intervention.” The Kuzi early warning tool is one of a number of new tools that can predict reasonably expected futures. This sort of forecasting is possible if there is a large body of oracle ergodic data to train machine intelligence. “Oracle ergodic” simply means that data from the past can be used to predict data in the future. That’s not self-evident. Flipping a coin, for example, is not oracle ergodic in the sense that a history of past flips…
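The distinction can be sketched in a few lines. Everything below is simulated and illustrative — the `majority_rule` predictor and the 0.9 persistence coefficient are assumptions for the demo, not anything from the Kuzi system: a fair coin’s history says nothing about its next flip, while a persistent series (like the climate and vegetation data behind locust forecasts) lets past values reduce error in predicting future ones.

```python
import random

random.seed(1)

# Coin flips are NOT oracle ergodic: past outcomes carry no
# information about future ones.
flips = [random.choice([0, 1]) for _ in range(1000)]

def majority_rule(history):
    # Predict whichever outcome has dominated so far.
    return 1 if sum(history) * 2 >= len(history) else 0

hits = sum(majority_rule(flips[:i]) == flips[i] for i in range(500, 1000))
print(f"coin-flip accuracy: {hits / 500:.2f}")  # hovers near 0.50 -- no better than guessing

# A persistent (autocorrelated) series IS oracle ergodic in this sense:
# tomorrow tends to resemble today.
series = [0.0]
for _ in range(999):
    series.append(0.9 * series[-1] + random.gauss(0, 1))

# Total error of "predict tomorrow = today" vs. ignoring history entirely.
err_history = sum(abs(series[i] - series[i - 1]) for i in range(500, 1000))
err_blind = sum(abs(series[i]) for i in range(500, 1000))
print(err_history < err_blind)  # True: past data improved the forecast
```

The point of the two halves is the contrast: the same “learn from the past” strategy helps only when the process generating the data actually connects past to future.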


Torturing Data Can Destroy a Career: The Case of Brian Wansink

Wansink wasn’t alone. A surprising number of studies published in highly respected peer-reviewed journals are complete nonsense and could not be replicated with fresh data

Until a few years ago, Brian Wansink (pictured in 2007) was a Professor of Marketing at Cornell and the Director of the Cornell Food and Brand Lab. He authored (or co-authored) more than 200 peer-reviewed papers and wrote two popular books, Mindless Eating and Slim by Design, which have been translated into more than 25 languages. In one of his most famous studies, 54 volunteers were served tomato soup. Half were served from normal bowls and half from “bottomless bowls” which had hidden tubes that imperceptibly refilled the bowls. Those with the bottomless bowls ate, on average, 73 percent more soup but they did not report feeling any fuller than the people who ate from normal bowls. Eating is evidently…


Can AI Find You the Love of Your Life?

Faced with a steeply declining birth rate, Japan’s government has decided to try AI matchmaking

Well, outsourcing everything to technology is the thing these days, and the Japanese government, faced with a steeply declining birthrate, is giving AI matchmaking a try: Around half of the nation’s 47 prefectures offer matchmaking services and some of them have already introduced AI systems, according to the Cabinet Office. The human-run matchmaking services often use standardized forms to list people’s interests and hobbies, and AI systems can perform more advanced analysis of this data. “We are especially planning to offer subsidies to local governments operating or starting up matchmaking projects that use AI,” the official said. AFP-JIJI, “We have a match! Japan taps AI to boost birth rate slump” at Japan Times (December 7, 2020) Declining birthrate? Japan Times…


Why Robotics Offers Hope for Paraplegics

The breakthrough idea is that the human brain can process electronic signals from machines as well as signals from peripheral nerves

Neuroengineer Gordon Cheng compares technology that can help paraplegics to walk again to learning to drive a car: The idea behind this is that the coupling between the brain and the machine should work in a way where the brain thinks of the machine as an extension of the body. Let’s take driving as an example. While driving a car, you don’t think about your moves, do you? But we still don’t know how this really works. My theory is that the brain somehow adapts to the car as if it is a part of the body. With this general idea in mind, it would be great to have an exoskeleton that would be embraced by the brain in the…


Can a Computer Write Your Paper for You Someday Soon?

GPT-3 recently came up with a paragraph that—a pop psychologist agreed—sounded just like him

This summer the OpenAI lab, backed by $1 billion in funding from Microsoft, Google, and Facebook, released an updated version of GPT-3, a text generator that produces convincing sentences by analyzing, among other online sources, Wikipedia, countless blog posts, and thousands of digital books. According to a recent story by Cade Metz in the New York Times, one GPT-3 programmer decided to target pop psychologist Scott Barry Kaufman. Could GPT-3 really come up with a paragraph that sounded just like him? Kaufman himself (pictured) was really impressed with this one, on the subject of becoming more creative: I think creative expression is a natural byproduct of growing up in a diverse world. The more diverse the world is, the more…


Can AI Really Know When It Shouldn’t Be Trusted?

Some researchers claim to have taught AI how to recognize its own untrustworthiness. Others say that’s impossible

Recently, we were told that artificial intelligence is now smart enough to know when it can’t be trusted: How might The Terminator have played out if Skynet had decided it probably wasn’t responsible enough to hold the keys to the entire US nuclear arsenal? As it turns out, scientists may just have saved us from such a future AI-led apocalypse, by creating neural networks that know when they’re untrustworthy. David Nield, “Artificial Intelligence Is Now Smart Enough to Know When It Can’t Be Trusted” at ScienceAlert (November 25, 2020) That’s a big claim. Intelligent humans often can’t know when they are untrustworthy. These deep learning neural networks are designed to mimic the human brain by weighing up a multitude of…
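One generic way to give a model a trustworthiness signal — offered purely as an illustrative sketch, not as the specific technique the ScienceAlert story reports — is ensemble disagreement: train several models on resampled data and treat the spread of their predictions as a warning flag. Where the members agree, the data supported the answer; where they diverge, the prediction deserves suspicion.

```python
import random

random.seed(42)

def fit_line(pts):
    # Ordinary least squares fit of y = a*x + b.
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    a = sum((x - mx) * (y - my) for x, y in pts) / sum((x - mx) ** 2 for x, _ in pts)
    return a, my - a * mx

# Noisy observations of y = 2x, seen only for x in [0, 1].
data = [(x / 50, 2 * (x / 50) + random.gauss(0, 0.3)) for x in range(51)]

# An "ensemble": each member fits a bootstrap resample of the data.
ensemble = [fit_line([random.choice(data) for _ in data]) for _ in range(100)]

def spread(x):
    # Standard deviation of the ensemble's predictions at x.
    preds = [a * x + b for a, b in ensemble]
    m = sum(preds) / len(preds)
    return (sum((p - m) ** 2 for p in preds) / len(preds)) ** 0.5

print(spread(0.5))  # small: members agree where training data existed
print(spread(5.0))  # much larger: disagreement flags an untrustworthy prediction
```

The sketch shows why such signals are useful but limited: the flag fires far from the training data, yet nothing guarantees it fires everywhere the model is wrong.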


Are Computers That Win at Chess Smarter Than Geniuses?

No, and we need to look at why they can win at chess without showing even basic common sense

Big computers conquered chess quite easily. But then there was the Chinese game of go (pictured), estimated to be 4000 years old, which offers more “degrees of freedom” (possible moves, strategy, and rules) than chess (2 × 10^170). As futurist George Gilder tells us, in Gaming AI, it was a rite of passage for aspiring intellects in Asia: “Go began as a rigorous rite of passage for Chinese gentlemen and diplomats, testing their intellectual skills and strategic prowess. Later, crossing the Sea of Japan, Go enthralled the Shogunate, which brought it into the Japanese Imperial Court and made it a national cult.” (p. 9) Then AlphaGo, from Google’s DeepMind, appeared on the scene in 2016: As the Chinese American titan Kai-Fu Lee…


Interview: New Book Outlines the Perils of Big (Meaningless) Data

Gary Smith, co-author with Jay Cordes of Phantom Patterns, shows why human wisdom and common sense are more important than ever now

Economist Gary Smith and statistician Jay Cordes have a new book out, The Phantom Pattern Problem: The Mirage of Big Data, on why we should not trust Big Data over common sense. In their view, it’s a dangerous mix: Humans naturally assume that all patterns are significant. But AI cannot grasp the meaning of any pattern, significant or not. Thus, from massive number crunches, we may “learn” (if that’s the right word) that:
➤ Stock prices can be predicted from Google searches for the word debt.
➤ Stock prices can be predicted from the number of Twitter tweets that use “calm” words.
➤ An unborn baby’s sex can be predicted by the amount of breakfast cereal the mother eats.
➤ Bitcoin prices can be…
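A few lines of simulation show how such phantom patterns arise. Everything below is random noise by construction: search enough meaningless series and one will always correlate with your target in-sample, then fall apart on fresh data.

```python
import random

random.seed(0)

def corr(x, y):
    # Pearson correlation coefficient, plain-Python version.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((v - mx) ** 2 for v in x) ** 0.5
    sy = sum((v - my) ** 2 for v in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

# A target series and 500 candidate "predictors" -- all pure noise.
target = [random.gauss(0, 1) for _ in range(30)]
candidates = [[random.gauss(0, 1) for _ in range(30)] for _ in range(500)]

# Data mining: keep whichever noise series fits the target best.
best = max(candidates, key=lambda c: abs(corr(c, target)))
print(abs(corr(best, target)))  # looks impressive in-sample

# Fresh data: the "predictor" was a phantom all along.
fresh = [random.gauss(0, 1) for _ in range(30)]
print(abs(corr(best, fresh)))   # typically far smaller
```

The number crunch finds a pattern because, with 500 candidates and only 30 observations, some pattern is statistically guaranteed — which is exactly why the in-sample fit tells us nothing.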


How Much Can New AI Tell Us About Ancient Times?

An ambitious new project hopes to use the predictive text that cell phones use to unlock their stories

Many researchers hope that AI will lead to a “golden age” of discovery for lost languages, hard-to-decipher writings, and badly damaged Biblical scrolls. Algorithms can chug through vast numbers of possible interpretations, presenting the scholar with probabilities to choose from. But even powerful algorithms have their work cut out for them. For example, of the hundreds of thousands of clay (cuneiform) tablets that survive from an ancient part of the Near East called Mesopotamia, many are damaged. We may know the language but we don’t know what’s missing from the text and what difference the missing part makes to what is being said. Experts try to fill in the missing parts but guessing at all the possibilities is…


SwiftKey Co-founder: Computers Can’t Just “Evolve” Intelligence

Can vain hopes for AI spring from a wrong understanding of evolution?
Ben Medlock asks us to look at self-organization as a principle of life, lacking in computers. Read More ›