Mind Matters Natural and Artificial Intelligence News and Analysis

Category: Artificial Intelligence


Walter Bradley: An Engineer Who Has Made a Difference

He has impacted many, many lives, and the world is a much better place because of him

Mind Matters News is sponsored by the Walter Bradley Center for Natural and Artificial Intelligence at Discovery Institute. A new biography is just out about Distinguished Fellow Walter Bradley, after whom our center is named. Walter Bradley has been many things: scientist, professor, NASA researcher, proponent of the reconciliation of faith and science, and a leader in empowering people in Africa with appropriate technologies. Walter Bradley is not a household name, but in a fairer world he would be. He’s sort of like George Bailey in the classic film It’s a Wonderful Life: He has impacted many, many lives, and the world is a much better place because of him. Titled For a Greater Purpose: The Life and Legacy of Walter…


Help Mind Matters News Continue in 2021

In an otherwise bad year, you have a chance to make your mind matter

Please help Mind Matters News thrive in 2021 by donating to our sponsor. We live in a culture where human uniqueness is increasingly questioned, and where claims about “intelligent” machines replacing human beings are embraced without serious skepticism. So where can you go to separate fact from fiction when it comes to debates over humans and machines? If you are reading this article, you know where: Mind Matters News! We supply news, analysis, and weekly podcasts that explore issues relating to mind, brain, neuroscience, personal responsibility, free speech, automation, and the use and abuse of new technologies. We do this all from the perspective that humans are unique and can’t be replaced by machines. Under the editorship of Denyse O’Leary,…


How Do You Know That Your “AI” Isn’t a Human Being?

AI often depends not on geniuses, but on thousands of anonymous, toiling human workers

Many people think that AI happens without human intervention. In reality, many toiling workers help make it possible. In a piece at Medium on the ethical dangers of AI, Dorothea Baur (pictured) lists four concerns, but one stands out. And it’s not science fiction: 4. AI hype downplays human contribution: AI hype is also part of stories that exaggerate the capabilities of AI in the present when effectively humans are still doing most of the work — we have all heard about the thousands of ghost workers who are manually labeling data to feed algorithms under dire working conditions. So, presenting something as machine intelligence when it’s actually human intelligence, is also dishonest and it deprecates the humans doing the real…


Google’s Leading AI Ethics Researcher Fired, Amid Controversy

Her research team targeted Google’s “cash cow”: advertising

Timnit Gebru, a leading AI ethics researcher, was fired from Google early this month under circumstances that have raised suspicions across the industry: On December 2, the AI research community was shocked to learn that Timnit Gebru had been fired from her post at Google. Gebru, one of the leading voices in responsible AI research, is known among other things for coauthoring groundbreaking work that revealed the discriminatory nature of facial recognition, cofounding the Black in AI affinity group, and relentlessly advocating for diversity in the tech industry. But on Wednesday evening, she announced on Twitter that she had been terminated from her position as Google’s ethical AI co-lead. “Apparently my manager’s manager sent an email [to] my direct reports…


AI: Still Just Curve Fitting, Not Finding a Theory of Everything

The AI Feynman algorithm is impressive, as the New York Times notes, but it doesn’t devise any laws of physics

Judea Pearl, a winner of the Turing Award (the “Nobel Prize of computing”), has argued that “All the impressive achievements of deep learning amount to just curve fitting.” Finding patterns in data may be useful but it is not real intelligence. A recent New York Times article, “Can a Computer Devise a Theory of Everything?” suggested that Pearl is wrong because computer algorithms have moved beyond mere curve fitting. The article quoted Stephen Hawking’s 1980 prediction that “The end might not be in sight for theoretical physics, but it might be in sight for theoretical physicists.” If computers can now devise theories that make theoretical physicists redundant, then they are surely smarter than the rest of us. The program behind the…
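For readers who want to see what “just curve fitting” looks like in practice, here is a minimal sketch in Python with made-up synthetic data. It illustrates Pearl’s point and is not the AI Feynman algorithm itself: a flexible curve can reproduce noisy observations closely while saying nothing about why the underlying law holds.

```python
# Minimal curve-fitting sketch with synthetic (made-up) data.
# A degree-5 polynomial reproduces the noisy observations closely,
# but its coefficients are only a pattern summary, not a physical law.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 3.0 * x**2 + rng.normal(scale=5.0, size=x.size)  # "observations" from y = 3x^2 plus noise

coeffs = np.polyfit(x, y, deg=5)       # fit a generic degree-5 polynomial
y_hat = np.polyval(coeffs, x)          # the fit tracks the data closely...
print("max fit error:", round(float(np.max(np.abs(y - y_hat))), 2))

# ...yet nothing in `coeffs` explains *why* y grows as the square of x.
# Recovering that law from the pattern is still the theorist's job.
```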


Why Google’s “Quantum Supremacy” Isn’t Changing Much—Not Yet

Quantum computing was suggested by physicist Richard Feynman in 1982; the supremacy battles are quite recent

In last week’s podcast, “Enrique Blair on quantum computing,” Walter Bradley Center director Robert J. Marks talks with fellow computer engineer Enrique Blair about why quantum mechanics is so strange but important to our future. One thing they discussed was Google’s claim to quantum supremacy. What does it mean? Does it matter? https://episodes.castos.com/mindmatters/Mind-Matters-110-Enrique-Blair.mp3 The discussion of quantum supremacy begins at approximately 47:55. The Show Notes and transcript follow. Excerpts from the transcript: Robert J. Marks (pictured): Google recently announced they had achieved quantum supremacy. What is quantum supremacy, and how does that relate to the quantum computer and the other computers that we use today? Enrique Blair: It’s a pretty interesting buzzword. Maybe the first thing to mention is what…


Can a Computer Write Your Paper for You Someday Soon?

GPT-3 recently came up with a paragraph that—a pop psychologist agreed—sounded just like him

This summer the OpenAI lab, backed by $1 billion in funding from Microsoft, Google, and Facebook, released an updated version of GPT-3, a text generator that produces convincing sentences by analyzing, among other online sources, Wikipedia, countless blog posts, and thousands of digital books. According to a recent story by Cade Metz in the New York Times, one GPT-3 programmer decided to target pop psychologist Scott Barry Kaufman. Could GPT-3 really come up with a paragraph that sounded just like him? Kaufman himself (pictured) was really impressed with this one, on the subject of becoming more creative: I think creative expression is a natural byproduct of growing up in a diverse world. The more diverse the world is, the more…


Can a Powerful Enough Computer Work Out a Theory of Everything?

Some physicists hope so even if it would put them out of work. But is it possible?

Recently, prominent physicists were asked whether a sufficiently powerful computer could come up with a Theory of Everything, by the sheer power of crunching numbers. As a recent New York Times article by Dennis Overbye shows, physicists were divided and uncertain: “It might be possible, physicists say, but not anytime soon. And there’s no guarantee that we humans will understand the result.” But doubt, in the view of multiverse theorist Max Tegmark, means we are guilty of “carbon chauvinism”—the idea that humans could be smarter than computers. The late Stephen Hawking thought that computers would replace humans and was alarmed by the prospect. According to Overbye, Hawking had been warning that computers would start to replace physicists in particular since…


AlphaFold Scores Huge Breakthrough in Analyzing Causes of Disease

In a world so deeply designed and complexly organized, we need a quick and practical way of knowing what is going on in cells and viruses. AI can help

Alphabet’s DeepMind team has just scored a breakthrough in finding treatments for diseases. Their latest AlphaFold system won a grand challenge in analyzing the “folds” of proteins. Proteins—large and often very complex chains of amino acids—do the work in our cells. But, like all bodies, they are three-dimensional. We can’t understand them until we can analyze the folds (the third dimension) that are unique to each type among hundreds of thousands. Knowing what a given protein actually does (or doesn’t do) is critical to developing many new medical treatments. How hard is the problem? In his acceptance speech for the 1972 Nobel Prize in Chemistry, Christian Anfinsen famously postulated that, in theory, a protein’s amino acid sequence should fully determine its…


Can AI Really Know When It Shouldn’t Be Trusted?

Some researchers claim to have taught AI how to recognize its own untrustworthiness. Others say that’s impossible

Recently, we were told that artificial intelligence is now smart enough to know when it can’t be trusted: How might The Terminator have played out if Skynet had decided it probably wasn’t responsible enough to hold the keys to the entire US nuclear arsenal? As it turns out, scientists may just have saved us from such a future AI-led apocalypse, by creating neural networks that know when they’re untrustworthy. David Nield, “Artificial Intelligence Is Now Smart Enough to Know When It Can’t Be Trusted” at ScienceAlert (November 25, 2020) That’s a big claim. Intelligent humans often can’t know when they are untrustworthy. These deep learning neural networks are designed to mimic the human brain by weighing up a multitude of…
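To make the claim concrete, here is a generic sketch in Python of one common, much simpler proxy for “self-distrust”: have a classifier abstain whenever its softmax output is too uncertain. This is an assumed illustration only, not the specific technique reported in the ScienceAlert story.

```python
# A generic "abstain when unsure" sketch: flag predictions whose softmax
# distribution has high entropy. Illustrative only; not the researchers' method.
import numpy as np

def softmax(logits):
    z = np.exp(logits - np.max(logits))   # subtract max for numerical stability
    return z / z.sum()

def predict_or_abstain(logits, entropy_threshold=0.5):
    probs = softmax(np.asarray(logits, dtype=float))
    entropy = -np.sum(probs * np.log(probs + 1e-12))
    if entropy > entropy_threshold:
        return None, entropy               # too uncertain: decline to predict
    return int(np.argmax(probs)), entropy

print(predict_or_abstain([4.0, 0.5, 0.2]))   # confident -> (0, low entropy)
print(predict_or_abstain([1.1, 1.0, 0.9]))   # near-uniform -> (None, high entropy)
```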


Can an 18th Century Statistician Help Us Think More Clearly?

Distinguishing between types of probability can help us worry less and do more

Thomas Bayes (1702–1761) (pictured), a statistician and clergyman, developed a theory of decision-making that was published only after his death and became important only in the 20th century. It is now a significant topic in philosophy, in the form of Bayesian epistemology. Understanding Bayes’ Rule may be essential to making good decisions. Let’s say that you are a generally healthy person and have no symptoms of any illness and no specific risk factors for any illness. Acting on a friend’s suggestion, you get screened for a variety of diseases, just to be sure. Of the diseases you test for, the HIV test comes back positive. You read on the package that the test is 99.6% accurate. Are you more likely…
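To see why the answer is not obvious, here is a minimal worked sketch of Bayes’ Rule in Python. The 99.6% accuracy figure comes from the example above; the 1-in-1,000 prevalence for a low-risk group, and the assumption that “accuracy” applies equally to sensitivity and specificity, are illustrative assumptions rather than figures from the article.

```python
# Bayes' Rule sketch: P(disease | positive test).
# Assumes "99.6% accurate" means both sensitivity and specificity are 0.996,
# and that roughly 1 in 1,000 low-risk people actually has the disease
# (both assumptions are for illustration only).

def posterior_given_positive(prevalence, sensitivity, specificity):
    p_pos_if_sick = sensitivity
    p_pos_if_healthy = 1.0 - specificity
    p_pos = prevalence * p_pos_if_sick + (1.0 - prevalence) * p_pos_if_healthy
    return prevalence * p_pos_if_sick / p_pos

p = posterior_given_positive(prevalence=0.001, sensitivity=0.996, specificity=0.996)
print(round(p, 2))  # about 0.20: even after a positive result, infection is far from certain
```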


Are Computers That Win at Chess Smarter Than Geniuses?

No, and we need to look at why they can win at chess without showing even basic common sense

Big computers conquered chess quite easily. But then there was the Chinese game of go (pictured), estimated to be 4000 years old, which offers far more “degrees of freedom” (possible moves, strategy, and rules) than chess (some 2×10¹⁷⁰ possible board positions). As futurist George Gilder tells us, in Gaming AI, it was a rite of passage for aspiring intellects in Asia: “Go began as a rigorous rite of passage for Chinese gentlemen and diplomats, testing their intellectual skills and strategic prowess. Later, crossing the Sea of Japan, Go enthralled the Shogunate, which brought it into the Japanese Imperial Court and made it a national cult.” (p. 9) Then AlphaGo, from Google’s DeepMind, appeared on the scene in 2016: As the Chinese American titan Kai-Fu Lee…


Computer Prof: Feds Should Avoid University, Seek Entrepreneurs

Too much time at the U is wasted on getting papers into theoretical journals, not enough time spent on innovation, he contends

In a recent podcast, Walter Bradley Center director Robert J. Marks spoke with Robert D. Atkinson and Jackie Whisman at the prominent AI think tank, Information Technology and Innovation Foundation, about his recent book, The Case for Killer Robots—a plea for American military brass to see that AI is an inevitable part of modern defense strategies, to be managed rather than avoided. It may be downloaded free here. In this fourth part (here are Part 1, Part 2, and Part 3), the discussion turned to what the government’s top priority in tech policy should be: https://episodes.castos.com/mindmatters/Mind-Matters-109-Robert-Marks.mp3 The discussion of a top priority in tech policy begins at 20:31. (A portion of the transcript follows. The whole transcript is here. Notes…


China’s Eyes Are Watching Africa Closely

In exchange for help with high-tech communication systems, China gets to install mass surveillance technology

Depending on who you talk to, the twenty-year relationship between China and several countries in the African Union has been described as everything from mutually beneficial to asymmetric and dysfunctional right down to exploitative and neo-colonialist. Recent pre-COVID-19 surveys indicate that citizens of several African nations see their country’s interaction with China as largely positive. But if some African science fiction writers are any indicator, others see a dystopian future. The current onslaught of high-tech surveillance technologies from China provides a chance to compare the two views. Recently, I have written about racial tensions between African nationals and local government in Guangzhou amidst the COVID-19 pandemic and about how the pandemic has worsened African countries’ debt crisis, particularly due to…


Is the Research Money for AI Spent at Universities Just Wasted?

A computer engineering prof tells an AI think tank about the time a four-star general visited his university

In a recent podcast, Walter Bradley Center director Robert J. Marks spoke with Robert D. Atkinson and Jackie Whisman at the prominent AI think tank, Information Technology and Innovation Foundation, about his recent book, The Case for Killer Robots—a plea for American military brass to see that AI is an inevitable part of modern defense strategies, to be managed rather than avoided. The book may be downloaded free here. In this third part (here are Part 1 and Part 2), the discussion turned to stalled AI research at universities: https://episodes.castos.com/mindmatters/Mind-Matters-109-Robert-Marks.mp3 The discussion of the state of AI research begins at 17:48. (A portion of the transcript follows. The whole transcript is here. Notes and links follow below.) Rob Atkinson: You…


AI Is Not Nearly Smart Enough to Morph Into the Terminator

Computer engineering prof Robert J. Marks offers some illustrations in an ITIF think tank interview

In a recent podcast, Walter Bradley Center director Robert J. Marks spoke with Robert D. Atkinson and Jackie Whisman at the prominent AI think tank, Information Technology and Innovation Foundation, about his recent book, The Case for Killer Robots—a plea for American military brass to see that AI is an inevitable part of modern defense strategies, to be managed rather than avoided. It may be downloaded free here. In this second part (here’s Part 1), the discussion (starts at 6:31) turned to what might happen if AI goes “rogue.” The three parties agreed that AI isn’t nearly smart enough to turn into the Terminator: Jackie Whisman: Well, opponents of so-called killer robots, of course argue that the technologies can’t be…


Is the U.S. Military Falling Behind in Artificial Intelligence?

What is the likely outcome of allowing those with very different value systems to have control of global AI warfare technology?

In a recent podcast, Walter Bradley Center director Robert J. Marks spoke with Robert D. Atkinson and Jackie Whisman at the prominent AI think tank, Information Technology and Innovation Foundation, about his recent book, The Case for Killer Robots—a plea for American military brass to see that AI is an inevitable part of modern defense strategies, to be managed rather than avoided. (The book may be downloaded free here.) As they jointly see the problem: There’s a lot of doomsday hype around artificial intelligence in general, and the idea of so-called “killer robots” has been especially controversial. But when it comes to the ethics of these technologies, one can argue that robots actually could be more ethical than human operators.…


Do Time and Space Mean the Same Thing to Humans as to Computers?

Futurist George Gilder tells us that humans don’t treat physical and chemical forces or clock pulses the way computers do

Recently, we have looked at four of the six assumptions that, according to futurist George Gilder in Gaming AI, are generally shared by those who believe that, sometime soon in a Singularity, we will merge with our machines. Those four were: 1) The brain is a computer and Big Data is a Big Answer (here), and 2) maps are territories and reality follows our rules (here). Now here are the final two: • The Locality Assumption: Actions of human agents reflect only immediate physical forces impinging directly on them. • The Digital Time Assumption: Time is objective and measured by discrete increments. (p. 50) Gilder tells us that the Locality Assumption means that “minds respond to local inputs…


What Real Advantage Do Self-Driving Cars Provide?

It’s time for a hard-headed look at the costs and benefits of the pursuit of fully self-driving cars

More and more people are realizing that autonomous (self-driving) vehicles are not a road to automotive prosperity. To recap, Level 5 self-driving is what most people think of when they hear the term “self-driving.” You type in an address and the car takes you where you want to go while you sleep in the back. That car is not going to hit the road anytime soon. Level 4 self-driving is similar but only works within well-defined areas or situations. In practice, Level 4 essentially relies on either intelligent infrastructure or a territory that is so predictable and well-mapped that it obviates the need for intelligent infrastructure. Huge amounts have been invested in self-driving vehicles. The Information estimated that $16 billion…


Does Schrödinger’s Cat Think Quantum Computing Is a Sure Thing?

It might lead to more security, though not to thinking computers

Some hope that a move to quantum computing—qubits instead of bits, analog instead of digital—will work wonders, including the invention of the true thinking computer. In last week’s podcast, futurist George Gilder and computer engineer Robert J. Marks looked at, among other things, what’s really happening with quantum computing: https://episodes.castos.com/mindmatters/Mind-Matters-108-George-Gilder.mp3 (The quantum computing discussion begins at 15:04.) Robert J. Marks: What’s your take on quantum computing? It seems to me that there’s been glacial progress in the technology. George Gilder (pictured): I think quantum computing is rather like AI, in that it moves the actual problem outside the computational process and gives the illusion that it solved the problem, but it’s really just pushed the problem out. Quantum computing is…