Mind Matters Natural and Artificial Intelligence News and Analysis

Tag: Gary Smith


Does Creativity Just Mean Bigger Data? Or Something Else?

Michael Egnor and Robert J. Marks look at claims that artificial intelligence can somehow be taught to be creative

In “Define information before you talk about it,” neurosurgeon Michael Egnor interviewed engineering prof Robert J. Marks on the way information, not matter, shapes our world (October 28, 2021). In the first portion, Egnor and Marks discussed questions like: Why do two identical snowflakes seem more meaningful than one snowflake? Now they turn to the relationship between information and creativity. Is creativity a function of more information? Or is there more to it? This portion begins at 10:46 min. A partial transcript, Show Notes, and Additional Resources follow. Michael Egnor: How does biological information differ from information in nonliving things? Robert J. Marks: I don’t know if it does… I do believe after recent study that the mind…


Robots Will NOT Steal Our Jobs, Business Analysts Show

Doomsayers typically do not factor in all components of the job that a robot would have to replace or all of the true costs of trying, they say

At Fast Company, data analyst Jeffrey Funk and business prof Gary N. Smith dispute the claim that robots are coming for all our jobs. They point to a history of overblown claims: In 1965, Herbert Simon, who would later be awarded the Nobel Prize in Economics and the Turing Award (the “Nobel Prize of computing”), predicted that “machines will be capable, within 20 years, of doing any work a man can do.” In 1970, Marvin Minsky, who also received the Turing Award, predicted that, “in from three to eight years we will have a machine with the general intelligence of an average human being.” The implications for jobs were ominous, but robotic-takeover predictions have been in the air for a…


The Unicorn Might Be Very Profitable — If It Existed

The statistical reality is that most new businesses flop

Jeffrey Funk and Gary Smith, well known to many of our readers, have just published an article at MarketWatch, warning against heedless optimism about “unicorn” stocks. As they put it, “The stock market unleashes its ‘animal spirits’ on an animal that doesn’t exist.” They begin by pointing out that most new businesses flop. The president of one venture capital company estimated the chance of success at one in 1,000. An SEC study of 500 randomly selected new issues found that 43% were confirmed bankrupt, 25% were losing money but still afloat, and 12% had disappeared without a trace. Of the remaining 20%, just 12 companies seemed solid successes — a scant 2% of the companies surveyed. Jeffrey Funk and Gary…
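The SEC study figures quoted above can be checked with simple arithmetic. This sketch uses the sample size and rounded percentages exactly as reported in the excerpt; it only verifies that they are internally consistent.

```python
# Back-of-envelope check of the SEC study figures quoted above.
total = 500           # randomly selected new issues in the study
bankrupt = 0.43       # confirmed bankrupt
losing_money = 0.25   # losing money but still afloat
disappeared = 0.12    # disappeared without a trace

# The share left over after the three failure categories.
remaining = 1 - (bankrupt + losing_money + disappeared)
print(f"Remaining share: {remaining:.0%}")  # 20%

# 12 solid successes out of the 500 surveyed companies.
solid_successes = 12
print(f"Success rate: {solid_successes / total:.1%}")  # 2.4%, "a scant 2%"
```

The three failure categories sum to 80%, leaving the 20% mentioned in the article, and 12 of 500 rounds down to the "scant 2%" success rate.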


Insurance Company Gives Sour AI Promises

Data collection and discriminatory algorithms are turning Lemonade sour

An insurance company with the quirky name Lemonade was founded in 2015 and went public in 2020. In addition to raising hundreds of millions of dollars from eager investors, Lemonade quickly attracted more than a million customers with the premise that artificial intelligence (AI) algorithms can estimate risks accurately and that buying insurance and filing claims can be fun: Lemonade is built on a digital substrate — we use bots and machine learning to make insurance instant, seamless, and delightful. Adding to the delight are the friendly names of their bots, like AI Maya, AI Jim, and AI Cooper. The company doesn’t explain how its AI works, but there is this head-scratching boast: A typical homeowners policy form has 20-40…


Nobel Prize Economist Tells The Guardian, AI Will Win

But when we hear why he thinks so, don’t be too sure

Nobel Prize-winning economist (2002) Daniel Kahneman, 87 (pictured), gave an interview this month to The Guardian in which he observed that belief in science is not much different from belief in religion with respect to the risks of unproductive noise clouding our judgment. He’s been in the news lately as one of the authors of a new book, Noise: A Flaw in Human Judgment, which applies his ideas about human error and bias to organizations. He told The Guardian that he places faith “if there is any faith to be placed,” in organizations rather than individuals, for example. Curiously, he doesn’t seem to privilege science organizations: I was struck watching the American elections by just how often politicians of both…


Failed Prophecies of the Big “AI Takeover” Come at a Cost

Like IBM Watson in medicine, they don’t just fail; they take time, money, and energy from more promising digital innovations

Surveying the time line of prophecies that AI will take over “soon” is entertaining. At Slate, business studies profs Jeffrey Funk and Gary Smith offer a whirlwind tour starting in the 1950s, with stops along the way at 1970 (“In from three to eight years we will have a machine with the general intelligence of an average human being”) and at 2014: In 2014, Ray Kurzweil predicted that by 2029, computers will have human-level intelligence and will have all of the intellectual and emotional capabilities of humans, including “the ability to tell a joke, to be funny, to be romantic, to be loving, to be sexy.” As we move closer to 2029, Kurzweil talks more about 2045. Jeffrey Funk and…


#8 in our AI Hype Countdown: AI Is Better Than Doctors!

Sick of paying for health care insurance? Guess what? AI is better! Or maybe, wait…

Merry Christmas! Our Walter Bradley Center director Robert J. Marks has been interviewing fellow computer nerds (our Brain Trust) Jonathan Bartlett and Eric Holloway about 12 overhyped AI concepts of the year. From AI Dirty Dozen 2020 Part II. Now here’s #8. Sick of paying for health care insurance? Guess what? AI is better! Or maybe, wait… “Is AI really better than physicians at diagnosis?” starts at 01:25. Here’s a partial transcript. Show Notes and Additional Resources follow, along with a link to the complete transcript. Robert J. Marks: We’re told AI is going to replace lawyers and doctors and accountants and all sorts of people. So, let’s look at a case of the physicians. This was a piece on…


Smith and Cordes’ Phantom Pattern Problem A Top 2020 Book

Published by Oxford in 2020, it deals with the “patterns” Big Data throws up that aren’t really there

David Auerbach has picked The Phantom Pattern Problem (2020) by Gary Smith and Jay Cordes as one of the top books of 2020 in the science and tech category. Auerbach, who describes himself as “a writer and software engineer, trying to bridge the two realms,” is the author of BITWISE: A Life in Code (2018). He has an interesting way of choosing books to recommend: Those that resist the “increasingly desperate and defensive oversimplification” of popular culture: I hesitate to mention too many other books for fear of neglecting the others, but I will say that of the science and technology books, several deal with subjects that are currently inundated with popularizations. In my eye, those below are notably superior…


Yellow Fingers Do Not Cause Lung Cancer

Neurosurgeon Michael Egnor and computer engineer Bob Marks look at the ways Big Data can mislead us into mistaking incidental events for causes

It’s easy to explain what “information” is if we don’t think much about it. But what if we ask a student, what does your term paper weigh? How much energy does it consume? More or less matter and energy than, say, lightning striking a tree? Of course, the student will protest, “But that’s not the point! It’s my term paper.” Exactly. So information is very different from matter and energy. It means something. Realizing that information is different from matter and energy can help us understand issues like the difference between the causes of a problem (causation) and circumstances that may be associated with the problem but do not cause it (correlation). In last week’s podcast, “Robert J. Marks on…


AI Is Not Nearly Smart Enough to Morph Into the Terminator

Computer engineering prof Robert J. Marks offers some illustrations in an ITIF think tank interview

In a recent podcast, Walter Bradley Center director Robert J. Marks spoke with Robert D. Atkinson and Jackie Whisman at the prominent AI think tank, the Information Technology and Innovation Foundation, about his recent book, The Case for Killer Robots, a plea for American military brass to see that AI is an inevitable part of modern defense strategies, to be managed rather than avoided. It may be downloaded free here. In this second part (here’s Part 1), the discussion (starts at 6:31) turned to what might happen if AI goes “rogue.” The three parties agreed that AI isn’t nearly smart enough to turn into the Terminator: Jackie Whisman: Well, opponents of so-called killer robots, of course argue that the technologies can’t be…


The Brain Is Not a Computer and Big Data Is Not a Big Answer

These claims are mere tales from the AI apocalypse, as George Gilder tells it, in Gaming AI

In Gaming AI, George Gilder (pictured) sets out six assumptions generally shared by those who believe that, in a Singularity sometime soon, we will merge with our machines. Some of these assumptions seem incorrect and they are certainly all discussable. So let’s look at the first two: • “The Modeling Assumption: A computer can deterministically model a brain.” (p. 50) That would be quite difficult because brains don’t function like computers: As neuroscientist Yuri Danilov said last year, “Right now people are saying, each synaptic connection is a microprocessor. So if it’s a microprocessor, you have 10^12 neurons, each neuron has 10^5 synapses, so you have… you can compute how many parallel processing units you have in the brain if…
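The arithmetic Danilov alludes to can be sketched in a couple of lines. This is a back-of-envelope illustration only, using the 10^12 neuron and 10^5 synapse figures from the quoted remark; it takes his premise at face value rather than endorsing the "synapse as microprocessor" model.

```python
# Rough count of "parallel processing units" under Danilov's premise:
# treat every synapse as one processor, so the total is
# neurons × synapses per neuron. Figures are from the quote above.
neurons = 10**12
synapses_per_neuron = 10**5

parallel_units = neurons * synapses_per_neuron
print(f"{parallel_units:.0e} parallel units")  # 1e+17 parallel units
```

Even on this crude accounting, the brain would amount to on the order of 10^17 parallel units, which is Gilder's point about why deterministic modeling of a brain is so daunting.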


Why AI Geniuses Haven’t Created True Thinking Machines

The problems have been hinting at themselves all along

As we saw yesterday, artificial intelligence (AI) has enjoyed a string of unbroken successes against humans. But these are successes in games where the map is the territory. Therefore, everything is computable. That fact hints at the problem tech philosopher and futurist George Gilder raises in Gaming AI (free download here). Whether all human activities can be treated that way successfully is an entirely different question. As Gilder puts it, “AI is a system built on the foundations of computer logic, and when Silicon Valley’s AI theorists push the logic of their case to a ‘singularity,’ they defy the most crucial findings of twentieth-century mathematics and computer science.” Here is one of the crucial findings they defy (or ignore):…


Has COVID-19 Helped or Harmed Crypto and Blockchain?

Cryptocurrencies rebounded after an initial slump earlier this year

The recently aired discussion at COSM about the future of bitcoin and other privately minted cryptocurrencies took place last October, before COVID-19 was much thought of in the Western world. Catching up, the cryptos and blockchain had a rough ride earlier this year but they have stabilized recently. In February, as the pandemic sent markets scurrying, things were looking grim for the cryptos: During the last week, the spread of the coronavirus has been all over the news; the virus, which had remained well-contained in China, spread throughout South Korea, Iran, Italy, and is now reaching its fingers into other parts of Europe. The New York Times reported on Thursday that “the signs were everywhere…that the epidemic shaking much of…


Interview: New Book Outlines the Perils of Big (Meaningless) Data

Gary Smith, co-author with Jay Cordes of Phantom Patterns, shows why human wisdom and common sense are more important than ever now

Economist Gary Smith and statistician Jay Cordes have a new book out, The Phantom Pattern Problem: The mirage of big data, on why we should not trust Big Data over common sense. In their view, it’s a dangerous mix: Humans naturally assume that all patterns are significant. But AI cannot grasp the meaning of any pattern, significant or not. Thus, from massive number crunches, we may “learn” (if that’s the right word) that: • Stock prices can be predicted from Google searches for the word debt. • Stock prices can be predicted from the number of Twitter tweets that use “calm” words. • An unborn baby’s sex can be predicted by the amount of breakfast cereal the mother eats. • Bitcoin prices can be…


From Nature: A New, Topflight Computer Science Journal

Starting in January 2021, it proposes to tackle a key problem in computer use in science: replication of findings

The Springer Nature Group is launching a new online-only journal, Nature Computational Science. It is described as a “dedicated home for computational science” and we are told: Recent advances in computer technology, be it in hardware or in software, have revolutionized the way researchers do science: problems that are too complex for human or analytical solutions are now easier to address; problems that would take years to solve can now be unraveled in days, hours, or even seconds. The use and development of advanced computing capabilities to analyse and solve scientific problems, also known as computational science, has undoubtedly played a key role in transformational scientific breakthroughs of our last century, making progress possible in many different disciplines. Elizabeth Hawkins, “A…


Bingecast: Robert J. Marks on the Limitations of Artificial Intelligence

Robert J. Marks talks with Larry L. Linenschmidt of the Hill Country Institute about the nature and limitations of artificial intelligence from a computer science perspective, including the misattribution of creativity and understanding to computers. Other Larry L. Linenschmidt podcasts from the Hill Country Institute are available at HillCountryInstitute.org. We appreciate the permission of the Hill Country Institute to rebroadcast this…


New Book Takes Aim at Phantom Patterns “Detected” by Algorithms

Human common sense is needed now more than ever, says economics professor Gary Smith

Pomona College economics professor Gary Smith, author with Jay Cordes of The Phantom Pattern Problem (Oxford, October 1, 2020), tackles an age-old glitch in human thinking: We tend to assume that if we find a pattern, it is meaningful. Add that to the weaknesses of current artificial intelligence and “Houston, we have a problem,” he warns: The scientific method tests theories with data. Data-mining computer algorithms dispense with theory and search through data for patterns, often aided and abetted by slicing, dicing, and otherwise mangling data to create patterns. Gary Smith, “Phantom patterns: The big data delusion” at IAI News (August 24, 2020) Many of the patterns so detected are obviously spurious, for example: A computer algorithm for evaluating job…


Is Crypto Just a Flash in the Pan?

Or, to put it more bluntly, will blockchain ever grow up to be a real financial system? Forbes says yes, cautiously

Will blockchain and other non-government currencies ever grow up to be a real financial system? What about the weird Canadian crypto uproar in which only a dead man knows the code to release the missing millions?

Read More ›

Lovelace: The Programmer Who Spooked Alan Turing

Ada Lovelace understood her mentor Charles Babbage’s plans for his new Analytical Engine and was better than he at explaining what it could do

Turing thought that computers could be made to think. Thus he had to address Lovelace’s objection from a century earlier, that they could not be creative.

Read More ›

Is Moore’s Law Over?

Rapid increase in computing power may become a thing of the past

If Moore’s Law fails, AI may settle in as a part of our lives like the automobile but it will not really be the Ruler of All except for those who choose that lifestyle. Even so, a belief that we will, for example, merge with computers by 2045 (the Singularity) is perhaps immune to the march of mere events. Entire arts and entertainment industries depend on the expression of such beliefs.

Read More ›