Mind Matters: Natural and Artificial Intelligence News and Analysis

Tag: Gary Smith


Why Generative AI Will Not End Up Running the World

Business prof Gary Smith and computer engineering prof Robert J. Marks talk about what we should and shouldn't believe about claims that computers will take over
Smith urged caution in applying AI to high-stakes decisions. For critical areas like finance, law, and medicine, reliance on AI remains risky. Read More ›

The Hype and Limitations of Generative AI

On this episode, host Robert J. Marks concludes his conversation with economics professor and author Gary Smith about the hype and limitations of generative AI. Smith is the Fletcher Jones Professor of Economics at Pomona College and a frequent contributor to Mind Matters News. In this portion of the conversation, Smith and Marks explore the hype around artificial general intelligence Read More ›


AI Pessimists vs True Believers: A Deepening Divide

There is a growing divide about AI, even among AI experts, but many of them like polarizing content, just as consumers of political information do
True believers don’t focus on short-term trends because they aren’t very positive. But short-term trends should be consistent with a long-term forecast. Read More ›

AI Health Coaching: Risk vs. Benefit

As health care analyst Katie Suleta points out, familiar problems like bias and hallucination could impact the health advice the AI coach gives
Maybe it comes down to the traditional advice about the internet in general: If it sounds unbelievable, don’t believe it, and when in doubt, doubt. Read More ›

Tech Industry: Are the Unicorns an Endangered Species?

The canaries are cheeping loudly that new tech startup funding is fading
How serious the current downturn will be, they say, depends on how long the current AI bubble lasts. Winter always comes, just not when many of us expect it. Read More ›

Let’s Dispose of Exploding Pie Charts

Pie charts are seldom a good idea. Here's why.
Points can be made without sensationalized graphs that undermine the credibility of the argument. Let’s dispose of exploding pie charts. Read More ›

Blue Zone BS: The Longevity Cluster Myth

We need to be reminded how much real science has done for us and how real science is done.
Real science is currently under siege, pummeled by conspiracy nuts and undermined internally by a replication crisis created by sloppy science. Read More ›

Sabrina Ionescu’s Hot Hand

When basketball players hit a "streak," does that elevate the probability of success?
Athletes do sometimes get hot—not that their chance of success is 100% but that it is temporarily elevated above their normal probability. Read More ›

OpenAI Is Now Under Investigation

The Federal Trade Commission wants to know how OpenAI gets its data and how much harm ChatGPT could cause

The Federal Trade Commission (F.T.C.) sent a letter to OpenAI, the San Francisco company responsible for creating ChatGPT, the large language model that captured the world’s imagination in November of 2022. Per the New York Times, the F.T.C. is investigating the AI company’s methods of data acquisition and also plans to measure the potential harms of AI on society, citing concerns over false information and job replacement. Cecilia Kang and Cade Metz report: In a 20-page letter sent to the San Francisco company this week, the agency said it was also looking into OpenAI’s security practices. The F.T.C. asked the company dozens of questions in its letter, including how the start-up trains its A.I. models and treats personal data. The Read More ›


Scientists Have Been Recommending Changes to Science Education for Decades

The modern education system seems designed to squelch curiosity

Gary Smith describes the problems with today’s science in his new book Distrust: Big Data, Data-Torturing, and the Assault on Science. He recounts endless examples of disinformation, data torture, and data mining, much of which we already knew. Taken together, however, and as I described in this review, they are mind-blowing. He argues that many of these problems come from things scientists do such as p-hacking during statistical analysis, too little emphasis on “impact” in statistical analyses, outright data falsification, and the creation of the Internet, which can be a huge disinformation machine in addition to a valuable resource. In the last chapter, he also offers some solutions such as ending the artificial thresholds for p-values such as 0.05, requiring Read More ›


Review of Distrust: Big Data, Data-Torturing, and the Assault on Science

Tech expert Jeffrey Funk reviews Gary Smith's enlightening new book on data, disinformation, and the "assault on science"

The pandemic proved a lot of things, one of them being that science is under assault. In this enlightening and entertaining new book, Professor Gary Smith shows us how much of the assault has its roots in what scientists do. The easiest impact to understand is the Internet, which was originally created by scientists in the 1970s to exchange scientific information. Now it has become a great way to spread disinformation on almost every subject. A former chief historian of NASA noted: “The reality is, the internet has made it possible for people to say whatever the hell they like to a broader number of people than ever before.” Smith recounts endless examples of this disinformation, much of which Read More ›


The LLM Deep Fake—Follow the Money

Hype sells. Just look at what's happening with AI

Large language models (LLMs) are far more impressive now than they were just a few months or a year ago. Compare these exchanges I had with GPT-3 in June 2022 and with ChatGPT two days ago:

Gary: Can I improve my test scores if I study after taking the test?

GPT-3: That’s a good question. I’m not sure if you can improve your test scores by studying after taking the test, but it couldn’t hurt to try!

ChatGPT: Studying after taking a test is unlikely to directly improve your scores on that particular test since the test has already been completed. Once a test is finished, the opportunity to answer questions and provide responses has passed. However, studying after taking Read More ›


The Death of Peer Review?

Science is built on useful research and thoroughly vetted peer review

Two years ago, I wrote about how peer review has become an example of Goodhart’s law: “When a measure becomes a target, it ceases to be a good measure.” Once scientific accomplishments came to be gauged by the publication of peer-reviewed research papers, peer review ceased to be a good measure of scientific accomplishments. The situation has not improved. One consequence of the pressure to publish is the temptation researchers have to p-hack or HARK. P-hacking occurs when a researcher tortures the data in order to support a desired conclusion. For example, a researcher might look at subsets of the data, discard inconvenient data, or try different model specifications until the desired results are obtained and deemed statistically significant—and therefore Read More ›
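The subgroup-searching version of p-hacking described above is easy to demonstrate. Here is a minimal Python sketch, not anything from Smith's work: the data and labels are entirely made up, every variable is pure noise, yet hunting through enough arbitrary slices of the sample usually turns up at least one comparison that clears the conventional 0.05 threshold.

```python
# Minimal sketch of subgroup-style p-hacking (hypothetical data and labels,
# invented for illustration): every variable is random noise, yet searching
# many arbitrary subsets usually yields a "statistically significant" result.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200

outcome = rng.normal(size=n)                        # outcome unrelated to anything
treated = rng.integers(0, 2, size=n).astype(bool)   # random "treatment" label

# Twenty arbitrary ways to slice the sample (stand-ins for age bands, regions, etc.)
subgroups = {f"subgroup_{k}": rng.integers(0, 2, size=n).astype(bool) for k in range(20)}

results = []
for name, mask in subgroups.items():
    a = outcome[mask & treated]
    b = outcome[mask & ~treated]
    if len(a) > 1 and len(b) > 1:
        _, p = stats.ttest_ind(a, b)
        results.append((name, p))

results.sort(key=lambda item: item[1])   # smallest p-value first
best_name, best_p = results[0]
print(f"Smallest p-value across {len(results)} subgroup tests: {best_p:.3f} ({best_name})")
print("Reporting only this comparison looks significant, even though the data are noise.")
```

With 20 independent tests at the 0.05 level, the chance of at least one spurious "significant" result is roughly 1 − 0.95^20, about 64 percent, which is why reporting only the winning test is so misleading.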


An Illusion of Emergence, Part 2

A figure can tell a story but, intentionally or unintentionally, the story that is told may be fiction

I recently wrote about how graphs that use logarithms on the horizontal axis can create a misleading impression of the relationship between two variables. The specific example I used was the claim made in a recent paper (with 16 coauthors from Google, Stanford, UNC Chapel Hill, and DeepMind) that scaling up the number of parameters in large language models (LLMs) like ChatGPT can cause “emergence,” which they define as qualitative changes in abilities that are not present in smaller-scale models but are present in large-scale models; thus they cannot be predicted by simply extrapolating the performance improvements on smaller-scale models. They present several graphs similar to this one that seem to show emergence. However, their graphs have the logarithms of Read More ›
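The point about the horizontal axis can be seen with a few lines of plotting code. The sketch below uses a synthetic accuracy curve, invented for illustration rather than taken from the paper being discussed: the metric improves smoothly and predictably with the raw parameter count, but the same curve looks like a sudden jump when the parameter axis is logarithmic.

```python
# Minimal sketch of how a log-scaled horizontal axis can make smooth improvement
# look like a sudden "emergent" jump. The accuracy curve is synthetic, invented
# for illustration; it is not data from the paper discussed above.
import numpy as np
import matplotlib.pyplot as plt

params = np.logspace(6, 11, 200)            # model sizes from 1e6 to 1e11 "parameters"
accuracy = np.clip(params / 1e11, 0, 1)     # accuracy grows smoothly with raw size

fig, (ax_linear, ax_log) = plt.subplots(1, 2, figsize=(10, 4))

ax_linear.plot(params, accuracy)
ax_linear.set_xlabel("Parameters (linear axis)")
ax_linear.set_ylabel("Accuracy")
ax_linear.set_title("Smooth, predictable improvement")

ax_log.plot(params, accuracy)
ax_log.set_xscale("log")
ax_log.set_xlabel("Parameters (log axis)")
ax_log.set_ylabel("Accuracy")
ax_log.set_title("Same data: looks like sudden 'emergence'")

plt.tight_layout()
plt.show()
```

Because each equal step to the right on a log axis multiplies the parameter count by a constant factor, a metric that rises in proportion to the raw count stays near zero across most of the plot and then appears to leap upward at the far right.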


Gary Smith’s New Book Reviewed in Washington Post

Smith argues that science itself is being undermined by the tools scientists use

Walter Bradley Center Senior Fellow Gary Smith’s book Distrust: Big Data, Data-Torturing, and the Assault on Science was reviewed in The Washington Post today. Smith is a frequent contributor to Mind Matters and teaches economics at Pomona College. In his new book, Smith argues that science itself is being “undermined” by the tools scientists use. Reviewer Abby Ohlheiser writes, Smith, an economist whose work often examines the misuse of data and statistics in a variety of disciplines, argues that the current crisis of trust in science falls at the intersection of three forces: disinformation, data torturing and data mining. Disinformation, as Smith writes, is “as old as the human race,” but accelerated in speed and reach alongside social media. Data Read More ›


Dear Silicon Valley: You’re Over-Hyping ChatGPT

The abilities of these new chatbots are grossly overstated

Gary Smith and Jeffrey Funk, frequent Mind Matters contributors, co-wrote a piece at Salon on the exaggerated dreams big tech has for AI. They write, Silicon Valley’s pre-eminent leaders love prematurely predicting that their products will completely upend the world as we know it. The latest case study comes from Sam Altman, CEO of OpenAI, the company behind the ChatGPT AI chatbot that has gone viral for its convincing imitations of human writing. Two years ago, Altman wrote a manifesto, “Moore’s Law for Everything,” in which he forecast that artificial intelligence would make huge swaths of both white collar and blue collar jobs obsolete. -Smith & Funk, Don’t believe the hype: why ChatGPT is not the “holy grail” of AI research | Read More ›


Observing and Communing

What human art and literature do that AI can't

AI image generators like Midjourney or DALL-E are generally adept at rendering the human form accurately. Copyright infringement, job displacement, and the general degradation of the visual arts via such AI are ongoing concerns for many artists and practitioners. However, a new New Yorker article by Kyle Chayka identifies a noticeable flaw in AI artwork: human hands.

Missing the Big Picture

Chayka begins by recalling an art class where he was asked to draw his own hand. It’s an assignment for beginners and, as befits a novice, it tempts the artist to focus more on the specific contours of the hand than on the overall structure and form. The forest gets lost in the trees, so to speak. Read More ›


Learning to Communicate

Why writing skills are so important, especially in today's artificial world

Educators have been shaken by fears that students will use ChatGPT and other large language models (LLMs) to answer questions and write essays. LLMs are indeed astonishingly good at finding facts and generating coherent essays — although the alleged facts are sometimes false and the essays are sometimes tedious BS supported by fake references. I am more optimistic than most. I am hopeful that LLMs will be a catalyst for a widespread discussion of our educational goals. What might students learn in schools that will be useful long after they graduate? There are many worthy goals, but critical thinking and communication skills should be high on any list. I’ve written elsewhere about how critical thinking abilities are important for students Read More ›


Let’s Take the “I” Out of AI

Large language models, though impressive, are not the solution. They may well be the catalyst for calamity.

When OpenAI’s text generator, ChatGPT, was released to the public this past November, the initial reaction was widespread astonishment. Marc Andreessen described it as “Pure, absolute, indescribable magic.” Bill Gates said that the creation of ChatGPT was as important as the creation of the internet. Nvidia’s CEO, Jensen Huang, said that “ChatGPT is one of the greatest things ever created in the computing industry.” Conversations with ChatGPT are, indeed, very much like conversations with a super-intelligent human. For many, it seems that the 70-year search for a computer program that could rival or surpass human intelligence has finally paid off. Perhaps we are close to the long-anticipated singularity where computers improve rapidly and autonomously, leaving humans far behind, Read More ›