Mind Matters: Natural and Artificial Intelligence News and Analysis

Category: Machine Learning

Desert locust Schistocerca gregaria is a species of locust, a periodically swarming, short-horned grasshopper in the family Acrididae

AI Tool Now Predicts Attacks of Locust Swarms for African Farmers

Under the right circumstances, data from the past can be used to predict data in the future

A new free AI tool now forewarns African farmers about impending locust attacks: “Farmers and pastoralists receive free SMS alerts 2-3 months in advance of when locusts are highly likely to attack farms and livestock forage in their areas, allowing for early intervention.” The Kuzi early warning tool is one of a number of new tools that can predict reasonably expected futures. This sort of forecasting is possible if there is a large body of oracle ergodic data to train machine intelligence. “Oracle ergodic” simply means that data from the past can be used to predict data in the future. That’s not self-evident. Flipping a coin, for example, is not oracle ergodic in the sense that a history of past flips…
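To make “oracle ergodic” concrete, here is a minimal Python sketch (a toy simulation, not the Kuzi system): past coin flips carry no information about the next flip, but a persistent process, where each value builds on the last, can be forecast from its own history. The 0.9 persistence factor is an arbitrary illustrative choice.

```python
import random

random.seed(0)

# A fair coin is not "oracle ergodic": the past says nothing about the future.
flips = [random.randint(0, 1) for _ in range(100_000)]
after_heads = [b for a, b in zip(flips, flips[1:]) if a == 1]
after_tails = [b for a, b in zip(flips, flips[1:]) if a == 0]
print(sum(after_heads) / len(after_heads))  # ~0.5 either way:
print(sum(after_tails) / len(after_tails))  # history doesn't help

# A persistent process (each value builds on the last) *can* be forecast:
x = [0.0]
for _ in range(100_000):
    x.append(0.9 * x[-1] + random.gauss(0, 1))
high = [b for a, b in zip(x, x[1:]) if a > 1]
low = [b for a, b in zip(x, x[1:]) if a < -1]
print(sum(high) / len(high))  # clearly positive: high values follow high values
print(sum(low) / len(low))    # clearly negative: low values follow low values
```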

Data technology background. Big data visualization. Flow of data. Information code. Background in a matrix style. 4k rendering.

Torturing Data Can Destroy a Career: The Case of Brian Wansink

Wansink wasn’t alone. A surprising number of studies published in highly respected peer-reviewed journals are complete nonsense and could not be replicated with fresh data

Until a few years ago, Brian Wansink (pictured in 2007) was a Professor of Marketing at Cornell and the Director of the Cornell Food and Brand Lab. He authored (or co-authored) more than 200 peer-reviewed papers and wrote two popular books, Mindless Eating and Slim by Design, which have been translated into more than 25 languages. In one of his most famous studies, 54 volunteers were served tomato soup. Half were served from normal bowls and half from “bottomless bowls,” which had hidden tubes that imperceptibly refilled the bowls. Those with the bottomless bowls ate, on average, 73 percent more soup, but they did not report feeling any fuller than the people who ate from normal bowls. Eating is evidently…
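What “torturing data” can look like is easy to simulate (hypothetical numbers, not Wansink’s actual data): test enough arbitrary predictors against one outcome and something will look publishable, yet the “finding” collapses on fresh data.

```python
import random
import statistics

random.seed(1)

def corr(a, b):
    """Pearson correlation between two equal-length lists."""
    n = len(a)
    ma, mb = statistics.mean(a), statistics.mean(b)
    sa, sb = statistics.stdev(a), statistics.stdev(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / ((n - 1) * sa * sb)

n = 54  # same sample size as the soup study
outcome = [random.gauss(0, 1) for _ in range(n)]

# "Torture" the data: try 200 arbitrary predictors, keep the most impressive.
predictors = [[random.gauss(0, 1) for _ in range(n)] for _ in range(200)]
best = max(predictors, key=lambda p: abs(corr(p, outcome)))
print(f"in-sample: r = {corr(best, outcome):+.2f}")  # looks publishable

# Replication check: the same predictor against fresh data.
fresh_outcome = [random.gauss(0, 1) for _ in range(n)]
print(f"fresh data: r = {corr(best, fresh_outcome):+.2f}")  # collapses toward zero
```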

Online dating app or site in mobile phone. Finding love and romance from internet with smartphone. Man giving like. Many hologram photos of beautiful woman around cellphone. Stalker looking at profile

Can AI Find You the Love of Your Life?

Faced with a steeply declining birth rate, Japan’s government has decided to try AI matchmaking

Well, outsourcing everything to technology is the thing these days, and the Japanese government, faced with a steeply declining birthrate, is giving AI matchmaking a try: Around half of the nation’s 47 prefectures offer matchmaking services and some of them have already introduced AI systems, according to the Cabinet Office. The human-run matchmaking services often use standardized forms to list people’s interests and hobbies, and AI systems can perform more advanced analysis of this data. “We are especially planning to offer subsidies to local governments operating or starting up matchmaking projects that use AI,” the official said. AFP-JIJI, “We have a match! Japan taps AI to boost birth rate slump” at Japan Times (December 7, 2020) Declining birthrate? Japan Times…

Therapist helping young man learning to walk again in rehabilitation

Why Robotics Offers Hope for Paraplegics

The breakthrough idea is that the human brain can process electronic signals from machines as well as signals from peripheral nerves

Neuroengineer Gordon Cheng compares technology that can help paraplegics to walk again to learning to drive a car: The idea behind this is that the coupling between the brain and the machine should work in a way where the brain thinks of the machine as an extension of the body. Let’s take driving as an example. While driving a car, you don’t think about your moves, do you? But we still don’t know how this really works. My theory is that the brain somehow adapts to the car as if it is a part of the body. With this general idea in mind, it would be great to have an exoskeleton that would be embraced by the brain in the…

Student term paper showing an ‘A’ grade

Can a Computer Write Your Paper for You Someday Soon?

GPT-3 recently came up with a paragraph that—a pop psychologist agreed—sounded just like him

This summer the OpenAI lab, backed by $1 billion in funding from Microsoft, released GPT-3, a text generator that produces convincing sentences by analyzing, among other online sources, Wikipedia, countless blog posts, and thousands of digital books. According to a recent story by Cade Metz in the New York Times, one GPT-3 programmer decided to target pop psychologist Scott Barry Kaufman. Could GPT-3 really come up with a paragraph that sounded just like him? Kaufman himself (pictured) was really impressed with this one, on the subject of becoming more creative: I think creative expression is a natural byproduct of growing up in a diverse world. The more diverse the world is, the more…
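GPT-3 itself is a proprietary neural network with billions of parameters, but the core idea of learning to predict the next word from a corpus can be sketched with a toy bigram model. The “corpus” below is just the excerpt’s own words; a real system trains on billions of them.

```python
import random
from collections import defaultdict

# Toy corpus: the excerpt's own words stand in for Wikipedia, blogs, and books.
corpus = ("i think creative expression is a natural byproduct of growing up "
          "in a diverse world the more diverse the world is the more").split()

# Learn P(next word | current word) by counting bigrams.
model = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev].append(nxt)

random.seed(3)
word, output = "creative", ["creative"]
for _ in range(15):
    choices = model.get(word)
    if not choices:                    # dead end: no observed continuation
        break
    word = random.choice(choices)      # sample from the observed continuations
    output.append(word)
print(" ".join(output))
```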

Computer error

Can AI Really Know When It Shouldn’t Be Trusted?

Some researchers claim to have taught AI how to recognize its own untrustworthiness. Others say that’s impossible

Recently, we were told that artificial intelligence is now smart enough to know when it can’t be trusted: How might The Terminator have played out if Skynet had decided it probably wasn’t responsible enough to hold the keys to the entire US nuclear arsenal? As it turns out, scientists may just have saved us from such a future AI-led apocalypse, by creating neural networks that know when they’re untrustworthy. David Nield, “Artificial Intelligence Is Now Smart Enough to Know When It Can’t Be Trusted” at ScienceAlert (November 25, 2020) That’s a big claim. Intelligent humans often can’t know when they are untrustworthy. These deep learning neural networks are designed to mimic the human brain by weighing up a multitude of…
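The researchers’ actual method is more sophisticated, but one common way to give a model a sense of its own reliability can be sketched with an ensemble: train several copies on resampled data and treat their disagreement as a distrust signal. Far from the training data, the members diverge. This is a minimal illustration, not the paper’s technique.

```python
import random
import statistics

random.seed(4)

# Training data: y = 2x + noise, but only observed for x between 0 and 1.
xs = [random.random() for _ in range(200)]
ys = [2 * x + random.gauss(0, 0.1) for x in xs]

def fit_line(px, py):
    """Ordinary least squares for y = a*x + b."""
    mx, my = statistics.mean(px), statistics.mean(py)
    a = sum((x - mx) * (y - my) for x, y in zip(px, py)) / \
        sum((x - mx) ** 2 for x in px)
    return a, my - a * mx

# Ensemble: each member fits a different bootstrap resample of the data.
members = []
for _ in range(20):
    idx = [random.randrange(len(xs)) for _ in range(len(xs))]
    members.append(fit_line([xs[i] for i in idx], [ys[i] for i in idx]))

def predict(x):
    preds = [a * x + b for a, b in members]
    # The spread across members is the model's "don't trust me" signal.
    return statistics.mean(preds), statistics.stdev(preds)

print(predict(0.5))   # inside the training range: members agree closely
print(predict(50.0))  # far outside it: disagreement balloons
```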

Fembot Robot Playing Chess with Woman

Are Computers That Win at Chess Smarter Than Geniuses?

No, and we need to look at why they can win at chess without showing even basic common sense

Big computers conquered chess quite easily. But then there was the Chinese game of go (pictured), estimated to be 4000 years old, which offers far more “degrees of freedom” (possible moves, strategy, and rules) than chess: roughly 2 × 10¹⁷⁰ legal board positions. As futurist George Gilder tells us, in Gaming AI, it was a rite of passage for aspiring intellects in Asia: “Go began as a rigorous rite of passage for Chinese gentlemen and diplomats, testing their intellectual skills and strategic prowess. Later, crossing the Sea of Japan, Go enthralled the Shogunate, which brought it into the Japanese Imperial Court and made it a national cult.” (p. 9) Then AlphaGo, from Google’s DeepMind, appeared on the scene in 2016: As the Chinese American titan Kai-Fu Lee…
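For a sense of scale, the 2 × 10¹⁷⁰ figure is the approximate number of legal positions on a 19 × 19 Go board. A quick back-of-the-envelope comparison, treating all numbers as ballpark published estimates:

```python
# Published ballpark estimates (orders of magnitude, not precise counts):
go_positions = 2e170       # legal positions on a 19x19 Go board (Tromp's count)
chess_positions = 1e47     # a generous upper bound for legal chess positions
atoms_in_universe = 1e80   # common rough estimate

print(go_positions / chess_positions)    # Go exceeds chess by ~10^123
print(go_positions > atoms_in_universe)  # True: more positions than atoms
```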

Abstract 3d render, geometric composition, yellow background design with cubes

Interview: New Book Outlines the Perils of Big (Meaningless) Data

Gary Smith, co-author with Jay Cordes of Phantom Patterns, shows why human wisdom and common sense are more important than ever now

Economist Gary Smith and statistician Jay Cordes have a new book out, The Phantom Pattern Problem: The Mirage of Big Data, on why we should not trust Big Data over common sense. In their view, it’s a dangerous mix: Humans naturally assume that all patterns are significant. But AI cannot grasp the meaning of any pattern, significant or not. Thus, from massive number crunches, we may “learn” (if that’s the right word) that:

- Stock prices can be predicted from Google searches for the word “debt.”
- Stock prices can be predicted from the number of Twitter tweets that use “calm” words.
- An unborn baby’s sex can be predicted by the amount of breakfast cereal the mother eats.
- Bitcoin prices can be…
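The phantom-pattern trap is easy to reproduce. Two independent random walks, which by construction have nothing to do with each other, routinely show impressive correlations (a hypothetical simulation in the spirit of the book’s examples):

```python
import random
import statistics

random.seed(5)

def walk(n):
    """A random walk: pure luck, no underlying meaning."""
    x, out = 0.0, []
    for _ in range(n):
        x += random.gauss(0, 1)
        out.append(x)
    return out

def corr(a, b):
    """Pearson correlation between two equal-length lists."""
    n = len(a)
    ma, mb = statistics.mean(a), statistics.mean(b)
    sa, sb = statistics.stdev(a), statistics.stdev(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / ((n - 1) * sa * sb)

# Pair up independent walks ("bitcoin price" vs. "cereal sales") 1,000 times.
rs = [abs(corr(walk(100), walk(100))) for _ in range(1000)]
print(sum(1 for r in rs if r > 0.5), "of 1000 unrelated pairs have |r| > 0.5")
```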

Entrance gate to Persepolis Persia Iran Gate of All Nations

How Much Can New AI Tell Us About Ancient Times?

An ambitious new project hopes to use the predictive text technology found in cell phones to unlock the stories of ancient texts

Many researchers hope that AI will lead to a “golden age” of discovery for lost languages, hard-to-decipher writings, and badly damaged Biblical scrolls. Algorithms can chug through vast numbers of possible interpretations, presenting the scholar with probabilities to choose from. But even powerful algorithms have their work cut out for them. For example, of the hundreds of thousands of clay (cuneiform) tablets that survive from an ancient part of the Near East called Mesopotamia, many are damaged. We may know the language, but we don’t know what’s missing from the text or what difference the missing part makes to what is being said. Experts try to fill in the missing parts but guessing at all the possibilities is…
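The gap-filling idea can be sketched in a few lines: rank candidate restorations by how often each completes the surviving context in intact texts. The four-line “corpus” below is an invented stand-in for thousands of transliterated tablets.

```python
from collections import Counter

# Stand-in corpus of intact lines (invented for illustration; real work
# draws on thousands of transliterated tablets).
corpus = [
    "the king built the temple",
    "the king built the temple",
    "the king built the wall",
    "the merchant sold the grain",
]

# Damaged line: "... built the [???]". Rank candidates for the gap by how
# often each word follows the surviving two-word context in intact texts.
context = ("built", "the")
counts = Counter()
for line in corpus:
    words = line.split()
    for i in range(len(words) - 2):
        if (words[i], words[i + 1]) == context:
            counts[words[i + 2]] += 1

total = sum(counts.values())
for candidate, k in counts.most_common():
    print(candidate, round(k / total, 2))  # temple 0.67, wall 0.33
```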

Virtual keyboard

SwiftKey Co-founder: Computers Can’t Just “Evolve” Intelligence

Can vain hopes for AI spring from a wrong understanding of evolution?
Ben Medlock asks us to look at self-organization as a principle of life, lacking in computers. Read More ›
Oh no!

Six Limitations of Artificial Intelligence As We Know It

You’d better hope it doesn’t run your life, as Robert J. Marks explains to Larry Linenschmidt

The list is a selection from “Bingecast: Robert J. Marks on the Limitations of Artificial Intelligence,” a discussion between Larry L. Linenschmidt of the Hill Country Institute and Walter Bradley Center director Robert J. Marks. The focus is on why we mistakenly attribute understanding and creativity to computers. The interview was originally published by the Hill Country Institute and is reproduced with thanks. https://episodes.castos.com/mindmatters/Mind-Matters-097-Robert-Marks.mp3 Here is a partial transcript, listing six limits of AI as we know it: (The Show Notes, Additional Resources, and a link to the full transcript are below.) 1. Computers can do a great deal but, by their nature, they are limited to algorithms. Larry L. Linenschmidt: When I read the term “classical computer,” how does a computer function? Let’s build on…

Real PHP code developing screen. Programming workflow abstract algorithm concept. Lines of PHP code visible under a magnifying lens.

Will Ideas or Algorithms Rule Science Tomorrow?

David Krakauer of the Santa Fe Institute offers an unsettling vision of future science as produced by machines that no one really understands

The basic problem is that accepting on faith what we can’t ever hope to understand is not a traditional stance of science. Thus it’s a good question whether science could survive such a transition and still be recognizable to scientists. But does turning things over to incomprehensible algorithms, as Krakauer proposes, really work anyway? Current results from a variety of areas give pause for thought.

Read More ›
Robot studies a coronavirus with magnifier; nano robot with bacterium; 3D render.

Can AI Save Us from COVID-19? An Expert Is Skeptical

To use AI more successfully next time, we need a clear understanding of its limitations as well as its capabilities

Experts list various problems, including the fact that AI is vulnerable to failure from unforeseen difficulties with data (too sparse, too noisy, too many outliers, etc.). It also doesn’t learn from experience as well as humans do.
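The outlier problem alone is easy to demonstrate (a hypothetical sketch, not any specific COVID-19 model): a handful of corrupted readings can drag an ordinary least-squares fit well away from the true trend.

```python
import random
import statistics

random.seed(7)

def fit_slope(px, py):
    """Ordinary least-squares slope."""
    mx, my = statistics.mean(px), statistics.mean(py)
    return sum((x - mx) * (y - my) for x, y in zip(px, py)) / \
        sum((x - mx) ** 2 for x in px)

xs = [i / 10 for i in range(100)]                # x from 0.0 to 9.9
ys = [2 * x + random.gauss(0, 0.5) for x in xs]  # true trend: slope 2
print(round(fit_slope(xs, ys), 2))               # ~2.0

# Three corrupted readings are enough to wreck the estimate:
bad = ys[:]
for i in (95, 96, 97):
    bad[i] = 100.0
print(round(fit_slope(xs, bad), 2))              # pulled far from 2.0 (~3.4)
```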

Read More ›
Asian doctor with stethoscope holding an artificial-intelligence brain hologram over a blurred hospital background; AI and physician concept

Why Depend on Only One Source for Modeling AI in Healthcare?

We may be missing many of the ways AI can help us

As we struggle with the COVID-19 crisis, many are beginning to ask hard questions about how our system works: its strengths, weaknesses, and vulnerabilities. One vulnerability might be too heavy a reliance on a single source for data modeling and predictions. Considering all the uses to which AI may be put in health care, getting our modeling guidance exclusively from the Institute for Health Metrics and Evaluation is reckless.

Read More ›
robots in a car plant

Will the COVID-19 Pandemic Promote Mass Automation?

Caution! Robots don’t file for benefits but that’s not all we need to know about them

I understand the panic many business leaders experience as they try to stay solvent while customers evaporate. Panic, however, is a poor teacher: AI-based automation will not only not solve all their problems, it may very well add to them. AI is not a magic box into which we can stuff problems and make them disappear.

Read More ›
Woman in medical protective mask applying an antibacterial antiseptic gel for hands disinfection and health protection during during flu virus outbreak. Coronavirus quarantine and novel covid ncov

AI Is Not Ready to Moderate Content!

In the face of COVID-19 quarantines for human moderators, some look to AI to keep the bad stuff off social media

Big social media companies have long wanted to replace human content moderators with AI. COVID-19 quarantines have only intensified that discussion. But AI is far, far from ready to moderate content successfully in an age where virtual monopolies make single-point failure a frequent risk.

Read More ›
Businessman with psychopathic behaviors

All AIs Are Psychopaths

We can use them but we can’t trust them with moral decisions. They don’t care why

Building an AI entails moving parts of our intelligence into a machine. We can do that with rules, (simplified) virtual worlds, statistical learning… We’ll likely create other means as well. But, as long as “no one is home”—that is, the machines lack minds—gaps will remain and those gaps, without human oversight, can put us at risk.

Read More ›
Citation

Anti-Plagiarism Software Goof: Paper Rejected for Repeat Citations

The scholar was obliged by discipline rules to cite the flagged information repetitively

Not only was Jean-François Bonnefon’s paper rejected by conventional anti-plagiarism software, but the rejection didn’t make any sense. Bonnefon, research director at Toulouse School of Economics, was informed of “a high level of textual overlap with previous literature” (plagiarism) when he was citing scientists’ affiliations, standard descriptions, and papers cited by others—information he was obliged to cite accurately, according to a standard format. “It would have taken two [minutes] for a human to realise the bot was acting up,” he wrote on Twitter. “But there is obviously no human in the loop here. We’re letting bots make autonomous decisions to reject scientific papers.” Reaction to the post by Dr Bonnefon, who is currently a visiting scientist at the Massachusetts Institute…
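A crude sketch shows how a naive “textual overlap” score produces exactly this false alarm: mandatory, correctly formatted material is guaranteed to match earlier papers. The text is invented, and real detectors are more elaborate, but they share the weakness.

```python
def overlap(text, prior, n=5):
    """Share of the text's 5-word phrases that also appear in prior papers;
    a crude stand-in for commercial 'textual overlap' detectors."""
    grams = lambda s: {tuple(s.split()[i:i + n])
                       for i in range(len(s.split()) - n + 1)}
    g = grams(text)
    return len(g & grams(prior)) / len(g)

# Mandatory, correctly formatted material that MUST match earlier papers:
boilerplate = ("toulouse school of economics toulouse france participants "
               "gave informed consent and the study was approved by the "
               "institutional review board")
new_paper = boilerplate + " we report a novel experiment on moral judgment"
earlier_paper = boilerplate + " we examine risk preferences in lotteries"

print(f"{overlap(new_paper, earlier_paper):.0%} overlap")
# Alarmingly high, yet every matching phrase is required wording, not plagiarism.
```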

Demographic Change

Can The Machine TELL If You Are Psychotic or Gay?

No, and the hype around what machine learning can do is enough to make old-fashioned tabloids sound dull and respectable

Media often cooperate with researchers’ inflated claims about machine learning’s powers of discovery. An ingenious “creative” approach to accuracy enables the misrepresentation, says data analyst Eric Siegel.
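Siegel’s point about “creative” accuracy claims is easy to make concrete: when a trait is rare, a model that simply predicts “no” for everyone posts a spectacular accuracy score while detecting nothing. The confusion counts below are hypothetical.

```python
# A "psychosis detector" evaluated on 1,000 people where the trait is rare:
positives, negatives = 10, 990

# Model A: always predicts "no" -- learns nothing, helps no one.
accuracy_always_no = negatives / (positives + negatives)
print(f"always-no accuracy: {accuracy_always_no:.1%}")  # 99.0%, sounds amazing

# The honest questions are precision and recall on the rare class.
true_pos, false_pos = 6, 120  # a hypothetical model's confusion counts
precision = true_pos / (true_pos + false_pos)
recall = true_pos / positives
print(f"precision: {precision:.1%}, recall: {recall:.1%}")  # far less impressive
```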

Read More ›
robot work on microscope

Will an AI Win a Nobel Prize for Science All by Itself One Day?

No, but Support Vector Machines (SVMs) can allow scientists to frame questions so that a comprehensible answer is more likely

AI can certainly help scientists. But to understand why AI can’t do science on its own, we should take a look at the NP-Hard Problem in computer science. The “Hard” is in the name of the problem for a reason… 
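To feel why the “Hard” matters, consider brute-force search in the classic NP-hard subset-sum problem: the work doubles with every item added (a minimal timing sketch; exact times depend on your machine).

```python
import time
from itertools import combinations

def subset_sum_brute_force(nums, target):
    """Try every subset: 2**len(nums) candidates in the worst case."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return combo
    return None

for n in (10, 15, 20):
    nums = list(range(1, n + 1))
    start = time.perf_counter()
    subset_sum_brute_force(nums, -1)  # impossible target forces a full search
    print(n, f"{time.perf_counter() - start:.3f}s")
# Each added item doubles the work: 100 items would need 2**100 checks,
# which no computer will ever finish.
```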

Read More ›