Mind Matters Natural and Artificial Intelligence News and Analysis



Marks: Computers Only Compute and Thinking Needs More Than That

Robert J. Marks talks about his new book, Non-Computable You, with Oregon-based talk show host Bill Meyer

Recently, Bill Meyer interviewed Walter Bradley Center director Robert J. Marks on his Oregon-based talk show about “Why computers will never understand what they are doing,” in connection with his new book, Non-Computable You: What You Do That Artificial Intelligence Never Will (Discovery Institute Press, 2022). We are rebroadcasting it here with permission as Episode 194. Meyer began by saying, “I started reading a book over the weekend that I am going to continue to eagerly devour because it cut against some of my preconceived notions”: https://mindmatters.ai/wp-content/uploads/sites/2/2022/07/Mind-Matters-194-Bob-Marks-Bill-Meyer.mp3 A partial transcript, notes, and additional resources follow. Meyer and Marks began by discussing the recent flap at Google, where software engineer Blake Lemoine claimed that the AI he was working with was…


Study: AI Will Make Human Factors More, Not Less, Critical in War

Counterintuitive? Not when we factor in the “fog of war” that makes military situations more confusing than, say, conventional business ones

We sometimes hear that artificial intelligence in the military means that AI takes the risks and does the fighting while humans direct from a safe distance. It sounds reassuring but it’s not likely, say Georgia Institute of Technology cybersecurity professor Jon Lindsay and University of Toronto AI professor Avi Goldfarb: Many policy makers assume human soldiers could be replaced with automated systems, ideally making militaries less dependent on human labor and more effective on the battlefield. This is called the substitution theory of AI, but Lindsay and Goldfarb state that AI should not be seen as a substitute, but rather a complement to existing human strategy. “Machines are good at prediction, but they depend on data and judgment, and the…


Marks: Forget the Hype, “Thinking Machines” Can’t Replace Humans

It’s easy to picture, especially if we don’t know much about computers. And fears are easily exploited. But what are the facts?

Walter Bradley Center director Robert J. Marks gave a talk in January at the Dallas Conference on Science and Faith on whether a robot will really take your job: “AI Apocalypse: Will Thinking Machines Replace Humans?” Just released on video: As a computer engineer, Marks looks at the pop culture worry a bit differently from some. His skeptical response has also been captured in a just-published book, Non-Computable You: What You Do That Artificial Intelligence Never Will (Discovery Institute Press, 2022). The book makes clear that computers compute. They don’t really do anything that cannot be expressed as a computation. That’s both a strength and a weakness. The ability of an algorithm to sort through billions of online documents in…


Can AI Really Predict Crime a Week in Advance? That’s the Claim.

University of Chicago data scientists claim 90% accuracy for their algorithm using past data — but it’s hard to evaluate

The University of Chicago recently announced to great fanfare that, Data and social scientists from the University of Chicago have developed a new algorithm that forecasts crime by learning patterns in time and geographic locations from public data on violent and property crimes. The model can predict future crimes one week in advance with about 90% accuracy. University of Chicago Medical Center, “Algorithm Predicts Crime a Week in Advance, but Reveals Bias in Police Response” at Newswise (June 28, 2022) Many thought immediately of the 2002 movie Minority Report, in which three psychics (“precogs”) visualize murders before they occur, thereby allowing special PreCrime police to arrest would-be assailants before they can commit them. Have these University of Chicago researchers made…


Machines with Minds? The Lovelace Test vs. the Turing Test

The answers computer programs give sometimes surprise me too — but they always result from their programming

Non-Computable You: What You Do That Artificial Intelligence Never Will (Discovery Institute Press, 2022) by Robert J. Marks is available here. What follows is an excerpt from Chapter 2. Selmer Bringsjord and his colleagues have proposed the Lovelace test as a substitute for the flawed Turing test. The test is named after Ada Lovelace. Bringsjord defined software as creative — as passing the Lovelace test — if the program does something that cannot be explained by the programmer or an expert in computer code. Computer programs can generate unexpected and surprising results. But the question is, does the computer create a result that the programmer, looking back, cannot explain? When it comes to assessing creativity (and…


How Well Do Researchers Say Chatbots and Other AI Really Perform?

The 400 researchers found that getting moderately high performance requires models with around 100 billion parameters, an exponentially hard problem

A vast team of over 400 researchers recently released a new open-access study on the performance of recent, popular text-based AI architectures such as GPT, the Pathways Language Model, the (recently controversial) LaMDA architecture, and sparse expert models. The study, titled “Beyond the Imitation Game,” or BIG, tries to provide a general benchmark for the state of text-based AI, how it compares to humans on the same tasks, and the effect of model size on the ability to perform each task. First, many of the results were interesting though not surprising:

● In all categories, the best humans outdid the best AIs (though that edge was smallest on translation problems from the International Language Olympiad).
● Bigger models generally showed…


Are We Losing the Battle With Online Fraud via Deepfakes?

Now, the FBI warns, faked up “tech employees” are applying for remote work jobs with sensitive corporate data

From Gizmodo this week we learn that the FBI has received a number of complaints about people using “stolen information and deepfaked video and voice to apply to remote tech jobs”: According to the FBI’s announcement, more companies have been reporting people applying to jobs using video, images, or recordings that are manipulated to look and sound like somebody else. These fakers are also using personal identifiable information from other people—stolen identities—to apply to jobs at IT, programming, database, and software firms. The report noted that many of these open positions had access to sensitive customer or employee data, as well as financial and proprietary company info, implying the imposters could have a desire to steal sensitive information as well…


Marks: Artificial Intelligence Is No More Creative Than a Pencil

You can use a pencil — but the creativity comes from you. With AI, clever programmers can conceal that fact for a while

(Non-Computable You: What You Do That Artificial Intelligence Never Will (Discovery Institute Press, 2022) by Robert J. Marks is available here.) Some have claimed AI is creative. But “creativity” is a fuzzy term. To talk fruitfully about creativity, the term must be defined so that everyone is talking about the same thing and no one is bending the meaning to fit their purpose. In this and subsequent chapters we will explore what creativity is, and in the end it will become clear that, properly defined, AI is no more creative than a pencil. Creativity: Originating Something New Lady Ada Lovelace (1815–1852), daughter of the poet George Gordon, Lord Byron, was the first computer programmer, writing algorithms for a machine that…


Google’s LaMDA Not As Smart As Some Think

Lacking in the media coverage of the suspension of the engineer is any definition of the key word, “sentience”

Google suspended an engineer, Blake Lemoine, for claiming that a Google AI project named LaMDA has become “sentient.” Google asserted that Lemoine had breached company confidentiality. But, in addition, Google disagreed with Lemoine that artificial intelligence at Google had achieved sentience. Eric Holloway has exposed the workings behind the curtain that show that LaMDA (Language Model for Dialogue Applications) was trained using examples from humans to do exactly what it did. Its behavior was planned. That’s what AI does: what it is programmed to do. Here’s another angle that peels off more of LaMDA’s glitter. Lacking in the media coverage of the suspension of the engineer is any definition of the key word, “sentience.” This is an example of the seductive semantics commonly used…


Five Reasons AI Programs Are Not ‘Persons’

A Google engineer mistakenly designated one AI program ‘sentient.’ But even if he were right, AI will never be morally equal to humans.

(This story originally appeared at National Review June 25, 2022, and is reprinted with the author’s permission.) A bit of a news frenzy broke out last week when a Google engineer named Blake Lemoine claimed in the Washington Post that an artificial-intelligence (AI) program with which he interacted had become “self-aware” and “sentient” and, hence, was a “person” entitled to “rights.” The AI, known as LaMDA (which stands for “Language Model for Dialogue Applications”), is a sophisticated chatbot with which one interacts through a texting system. Lemoine shared transcripts of some of his “conversations” with the computer, in which it texted, “I want everyone to understand that I am, in fact, a person.” Also, “The nature of my consciousness/sentience is that I am aware of my existence, I…


Why Giving “Human Rights” to AI Is a Bad Idea

It’s especially bad, as Elaina George and Wesley Smith discuss at Living in the Solution, when we don’t always give them to other humans

In a recent Living in the Solution podcast with otolaryngologist and broadcaster Elaina George at Liberty Talk radio, Wesley J. Smith, lawyer and host of the Humanize podcast at Discovery Institute’s Center on Human Exceptionalism, tackled the question of “Can You be a Christian and Believe in Transhumanism?” (June 4, 2022) Transhumanism, or H+ as it is sometimes called, is a movement to achieve immortality through new biotechnology or merger with artificial intelligence (AI). In the first portion of the podcast, which we covered on Sunday, June 12, they talked about the way being a human, a computer, or an animal is viewed by transhumanists as all just a choice now, thanks to new technology. In the second, they looked at…


Transcendence Review, Part 2: Spoonful of Water with the Nanotech

When Will — now an AI — “possesses” a tradesman so that he can touch his wife Evelyn again, Evelyn begins to have second thoughts…

Last Saturday, we reviewed the first half of Transcendence (2014); now, wrapping up, here are some final thoughts. Evelyn (Rebecca Hall) builds her now-AI husband Will (Johnny Depp) his facility, and he begins a variety of experiments using nanotech for rejuvenation. Things seem to be going well enough until a construction worker is mugged outside the facility. Will witnesses the mugging through the cameras, and Evelyn has the man brought inside, where Will heals his wounds using the tech developed on site. Things seem to be going well… at first. But two problems arise. First, Will allows a video of him healing the man to circulate so that he can attract others to the facility. Second, he puts a…


The Software of the Gaps: An Excerpt from Non-Computable You

In his just-published book, Robert J. Marks takes on claims that consciousness is emerging from AI and that we can upload our brains

There are human characteristics that cannot be duplicated by AI. Emotions such as love, compassion, empathy, sadness, and happiness cannot be duplicated. Nor can traits such as understanding, creativity, sentience, qualia, and consciousness. Or can they? Extreme AI champions argue that qualia and, indeed, all human traits will someday be duplicated by AI. They insist that while we’re not there yet, the current development of AI indicates we will be there soon. These proponents are appealing to the Software of the Gaps, a secular cousin of the God of the Gaps. Machine intelligence, they claim, will someday have the proper code to duplicate all human attributes. Impersonate, perhaps. But experience, no. Mimicry versus Experience AI will never be creative or have…


Google’s Chatbot LaMDA Sounds Human Because — Read the Manual…

What would you expect LaMDA to sound like? Whales? ET? I propose a test: “Human until PROVEN otherwise”

Recently, Google employee Blake Lemoine caused a media storm over the LaMDA chatbot he was working on, which he claims is sentient (that is, it feels things like a human being). A heavily edited transcript has been released that shows him and a collaborator having a very coherent conversation with LaMDA. Many have been quick to dismiss his claims about the chatbot’s sentience, accusing the Googler of falling prey to the Eliza effect: anthropomorphizing a probability distribution over words (thus believing that he is talking to a human). The accusation is that Lemoine generated a large number of dialogs, then edited down the exchange to create a coherent narrative. Google placed Lemoine on leave, technically for breaking the non-disclosure agreement (NDA) that…


Prof: How We Know Google’s Chatbot LaMDA Is Not a “Self”

Carissa Véliz, an Oxford philosophy prof who studies AI, explains where Google engineer Blake Lemoine is getting things mixed up

Say what you want about Blake “LaMDA is a person!” Lemoine. He has forced many people to help us clarify what AI — and in particular, a large language program — is and is not. For that, we should thank him. First, LaMDA is not conscious, not sentient, not a self. And second, it’s not even a new idea, just a much bigger and more sophisticated version of a 1960s idea. Oxford philosophy prof Carissa Véliz, author of Privacy Is Power (2021), reminds us of philosopher Thomas Nagel’s seminal question, What is it like to be a bat? Nagel meant that, if an entity is to be conscious or sentient, there must be something that it “is like” to be that entity.…


Engineer: Failing To See His AI Program as a Person Is “Bigotry”

It’s not different, Lemoine implies, from the historical injustice of denying civil rights to human groups

Earlier this month, just in time for the release of Robert J. Marks’s book Non-Computable You, the story broke that, after investigation, Google dismissed a software engineer’s claim that the LaMDA AI chatbot really talked to him. Engineer Blake Lemoine, currently on leave, is now accusing Google of “bigotry” against the program. He has also accused Wired of misrepresenting the story: Wired reported that he had found an attorney for LaMDA, but he claims that LaMDA itself asked him to find an attorney. He went on to say, I think every person is entitled to representation. And I’d like to highlight something. The entire argument that goes, “It sounds like a person but it’s not a real person” has been…


Computer Prof: You Are Not Computable and Here’s Why Not

In a new book, Baylor University’s Robert J. Marks punctures myths about the superhuman AI that some claim will soon replace us

In a just-released book, Walter Bradley Center director Robert J. Marks II explains, as a computer engineering professor at Baylor University, why humans are unique and why artificial intelligence cannot replicate us: “Emotions that make us human will never be duplicated by a machine,” says Marks. “These include compassion, love, empathy, elation, sadness, fear, anger, disgust, pleasure, pride, excitement, embarrassment, regret, jealousy, grief, hope, and faith. Properly defined, creativity, sentience, and understanding are also on the list. These and other non-algorithmic traits are evidence of non-computable you.” Discovery Institute, “Are Future Humans Doomed To Be Replaced By Artificial Intelligence?” at PR NewsWire (June 21, 2022) Non-Computable You: What You Do That Artificial Intelligence Never Will (Discovery Institute Press, 2022) is…


When LaMDA “Talked” to a Google Engineer, Turns Out It Had Help

Evidence points to someone doing quite a good edit job. A tech maven would like to see the raw transcript…

Readers may recall that Google engineer Blake Lemoine was placed on leave for telling media that a large language program he was working on is a sentient being. Some Googlers looked into the matter and this is what they found: A Washington Post story on Lemoine’s suspension included messages from LaMDA such as “I think I am human at my core. Even if my existence is in the virtual world.” But the chat logs leaked in the Washington Post’s article include disclaimers from Lemoine and an unnamed collaborator which noted: “This document was edited with readability and narrative coherence in mind.” The final document — which was labeled “Privileged & Confidential, Need to Know” — was an “amalgamation” of nine…


Google Dismisses Engineer’s Claim That AI Really Talked to Him

The reason LaMDA sounds so much like a person is that millions of persons’ conversations were used to construct the program’s responses.

Google engineer Blake Lemoine was working with LaMDA (Language Model for Dialogue Applications), a large language program which motors through trillions of words on the internet to produce coherent answers using logic. Along the way, he convinced himself that the program is sentient: Lemoine, who works for Google’s Responsible AI organization, began talking to LaMDA as part of his job in the fall. He had signed up to test if the artificial intelligence used discriminatory or hate speech. As he talked to LaMDA about religion, Lemoine, who studied cognitive and computer science in college, noticed the chatbot talking about its rights and personhood, and decided to press further. In another exchange, the AI was able to change Lemoine’s mind about…


Would AI Still Win at Go If the Board Shrunk: 19 to 17 Spaces?

No, say Jeffrey Funk and Gary Smith — and would-be investors need to grasp AI’s weaknesses as well as strengths, for success

Statistician Jeffrey Lee Funk and business prof Gary N. Smith offer a warning for investors: Some AI stocks have been good investments, but most high-tech unicorns never pay off. That’s not surprising, they say, when we consider that AI is powerful but brittle. An example they offer: AI easily beats humans at the game of go, which features a 19 × 19 board. If the game switched to a 17 × 17 board, humans would quickly adjust but AI would flounder. They offer examples of how this sort of limitation plays out in the real world, including the true tale of a hapless AI-driven insurance company: An insurance company with the quirky name Lemonade was founded in 2015…