
Tag: Gary Smith


The Hype and Limitations of Generative AI
On this episode, host Robert J. Marks concludes his conversation with economics professor and author Gary Smith about the hype and limitations of generative AI. Smith is the Fletcher Jones Professor of Economics at Pomona College and a frequent contributor to Mind Matters News. In this portion of the conversation, Smith and Marks explore the hype around artificial general intelligence Read More ›

AI Pessimists vs True Believers: A Deepening Divide
There is a growing divide over AI, even among AI experts, but many of them like polarizing content, just as consumers of political information do
AI Health Coaching: Risk vs. Benefit
As health care analyst Katie Suleta points out, familiar problems like bias and hallucination could impact the health advice the AI coach gives
Tech Industry: Are the Unicorns an Endangered Species?
The canaries are cheeping loudly: funding for new tech startups is fading
Let’s Dispose of Exploding Pie Charts
Pie charts are seldom a good idea. Here's why.
The MLB Coin-Flipping Contest
What are the chances that wild-card teams will make it to the World Series and win?
Blue Zone BS: The Longevity Cluster Myth
We need to be reminded how much real science has done for us and how real science is done.
Sabrina Ionescu’s Hot Hand
When basketball players hit a "streak," does that elevate the probability of success?
OpenAI is Now Under Investigation
The Federal Trade Commission wants to know how OpenAI gets its data and how much harm ChatGPT could cause
The Federal Trade Commission (F.T.C.) sent a letter to OpenAI, the San Francisco company responsible for creating ChatGPT, the large language model that captured the world’s imagination in November 2022. Per the New York Times, the F.T.C. is investigating the AI company’s methods of data acquisition and also plans to measure the potential harms of AI to society, citing concerns over false information and job replacement. Cecilia Kang and Cade Metz report: In a 20-page letter sent to the San Francisco company this week, the agency said it was also looking into OpenAI’s security practices. The F.T.C. asked the company dozens of questions in its letter, including how the start-up trains its A.I. models and treats personal data. The Read More ›

Scientists Have Been Recommending Changes to Science Education for Decades
The modern education system seems designed to squelch curiosity
Gary Smith describes the problems with today’s science in his new book Distrust: Big Data, Data-Torturing, and the Assault on Science. He recounts endless examples of disinformation, data torture, and data mining, much of which we already knew. Taken together, however, and as I described in this review, they are mind-blowing. He argues that many of these problems come from things scientists do, such as p-hacking during statistical analysis, too little emphasis on “impact” in statistical analyses, outright data falsification, and the creation of the Internet, which can be a huge disinformation machine in addition to a valuable resource. In the last chapter, he also offers some solutions, such as ending artificial p-value thresholds such as 0.05, requiring Read More ›

Review of Distrust: Big Data, Data-Torturing, and the Assault on Science
Tech expert Jeffrey Funk reviews Gary Smith's enlightening new book on data, disinformation, and the "assault on science"
The pandemic proved a lot of things, one of them being that science is under assault. In this enlightening and entertaining new book, Professor Gary Smith shows us how much of the assault has its roots in what scientists do. The easiest impact to understand is the Internet, which was originally created by scientists in the 1970s to exchange scientific information. Now it has become a great way to spread disinformation on almost every subject. A former chief historian of NASA noted: “The reality is, the internet has made it possible for people to say whatever the hell they like to a broader number of people than ever before.” Smith recounts endless examples of this disinformation, much of which Read More ›

The LLM Deep Fake—Follow the Money
Hype sells. Just look at what's happening with AI.
Large language models (LLMs) are far more impressive now than they were just a few months or a year ago. Compare these exchanges I had with GPT-3 in June 2022 and with ChatGPT two days ago: Gary: Can I improve my test scores if I study after taking the test? GPT-3: That’s a good question. I’m not sure if you can improve your test scores by studying after taking the test, but it couldn’t hurt to try! ChatGPT: Studying after taking a test is unlikely to directly improve your scores on that particular test since the test has already been completed. Once a test is finished, the opportunity to answer questions and provide responses has passed. However, studying after taking Read More ›

The Death of Peer Review?
Science is built on useful research and thoroughly vetted peer review
Two years ago, I wrote about how peer review has become an example of Goodhart’s law: “When a measure becomes a target, it ceases to be a good measure.” Once scientific accomplishments came to be gauged by the publication of peer-reviewed research papers, peer review ceased to be a good measure of scientific accomplishments. The situation has not improved. One consequence of the pressure to publish is the temptation researchers have to p-hack or HARK. P-hacking occurs when a researcher tortures the data in order to support a desired conclusion. For example, a researcher might look at subsets of the data, discard inconvenient data, or try different model specifications until the desired results are obtained and deemed statistically significant—and therefore Read More ›
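Smith's description of p-hacking lends itself to a quick demonstration. Here is a minimal sketch (hypothetical code, not from Smith's article) of how testing enough arbitrary subsets of pure-noise data will almost always turn up a result that clears the conventional 0.05 significance threshold:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two groups drawn from the SAME distribution: there is no real effect.
treatment = rng.normal(size=200)
control = rng.normal(size=200)

# P-hacking: keep testing arbitrary subgroups until one crosses p < 0.05.
for attempt in range(1, 101):
    idx = rng.choice(200, size=50, replace=False)  # an arbitrary "subgroup"
    _, p = stats.ttest_ind(treatment[idx], control[idx])
    if p < 0.05:
        print(f"Subgroup #{attempt} is 'significant': p = {p:.3f}")
        break
```

With a 5 percent false-positive rate per test, a researcher who quietly examines dozens of subsets is nearly guaranteed to find something "significant" in pure noise, which is exactly the data torture Smith describes.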

An Illusion of Emergence, Part 2
A figure can tell a story but, intentionally or unintentionally, the story that is told may be fiction
I recently wrote about how graphs that use logarithms on the horizontal axis can create a misleading impression of the relationship between two variables. The specific example I used was the claim made in a recent paper (with 16 coauthors from Google, Stanford, UNC Chapel Hill, and DeepMind) that scaling up the number of parameters in large language models (LLMs) like ChatGPT can cause “emergence,” which they define as qualitative changes in abilities that are not present in smaller-scale models but are present in large-scale models; thus they cannot be predicted by simply extrapolating the performance improvements on smaller-scale models. They present several graphs that seem to show emergence. However, their graphs have the logarithms of Read More ›
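Smith's point is easy to reproduce. The sketch below (illustrative data only, not the paper's benchmarks) plots a score that grows as a perfectly smooth straight line in the number of parameters; on a linear axis the growth is obviously predictable, but on a logarithmic axis the same line looks flat and then suddenly "emergent":

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical benchmark score that grows *linearly* with model size.
# Completely smooth and predictable; illustrative data only.
params = np.linspace(1e8, 1e11, 500)   # number of model parameters
score = 100 * params / params.max()    # a straight line from ~0 to 100

fig, (ax_lin, ax_log) = plt.subplots(1, 2, figsize=(10, 4))

ax_lin.plot(params, score)
ax_lin.set_xlabel("Parameters (linear axis)")
ax_lin.set_ylabel("Score")
ax_lin.set_title("Smooth, predictable growth")

ax_log.plot(params, score)
ax_log.set_xscale("log")               # same data, log-scaled x-axis
ax_log.set_xlabel("Parameters (log axis)")
ax_log.set_title("Apparent sudden 'emergence'")

plt.tight_layout()
plt.show()
```

Nothing about the underlying relationship changes between the two panels; only the axis does. The flat-then-steep "emergent" shape appears because the log scale devotes two-thirds of the plot to the lowest tenth of the parameter range.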

Gary Smith’s New Book Reviewed in Washington Post
Smith argues that science itself is being undermined by the tools scientists use
Walter Bradley Center Senior Fellow Gary Smith’s book Distrust: Big Data, Data-Torturing, and the Assault on Science was reviewed in The Washington Post today. Smith is a frequent contributor to Mind Matters and teaches economics at Pomona College. In his new book, Smith argues that science itself is being “undermined” by the tools scientists use. Reviewer Abby Ohlheiser writes: Smith, an economist whose work often examines the misuse of data and statistics in a variety of disciplines, argues that the current crisis of trust in science falls at the intersection of three forces: disinformation, data torturing and data mining. Disinformation, as Smith writes, is “as old as the human race,” but accelerated in speed and reach alongside social media. Data Read More ›

Dear Silicon Valley: You’re Over-Hyping ChatGPT
The abilities of these new chatbots are grossly overstated
Gary Smith and Jeffrey Funk, frequent Mind Matters contributors, co-wrote a piece at Salon on the exaggerated dreams big tech has for AI. They write, Silicon Valley’s pre-eminent leaders love prematurely predicting that their products will completely upend the world as we know it. The latest case study comes from Sam Altman, CEO of OpenAI, the company behind the ChatGPT AI chatbot that has gone viral for its convincing imitations of human writing. Two years ago, Altman wrote a manifesto, “Moore’s Law for Everything,” in which he forecast that artificial intelligence would make huge swaths of both white-collar and blue-collar jobs obsolete. -Smith & Funk, Don’t believe the hype: why ChatGPT is not the “holy grail” of AI research | Read More ›

Observing and Communing
What human art and literature do that AI can't
AI image generators like Midjourney or DALL-E are generally adept at capturing the human form accurately. Copyright infringement, job displacement, and the general degradation of the visual arts via such AI are ongoing concerns for many artists and practitioners. However, a new New Yorker article by Kyle Chayka identifies a noticeable flaw in AI artwork: human hands. Missing the Big Picture Chayka begins by recalling an art class where he was asked to draw his own hand. It’s an assignment for beginners and, as befits a novice, it tempts the artist to focus more on the specific contours of the hand than on the overall structure and form. The forest gets lost in the trees, so to speak. Read More ›

Learning to Communicate
Why writing skills are so important, especially in today's artificial world
Educators have been shaken by fears that students will use ChatGPT and other large language models (LLMs) to answer questions and write essays. LLMs are indeed astonishingly good at finding facts and generating coherent essays — although the alleged facts are sometimes false and the essays are sometimes tedious BS supported by fake references. I am more optimistic than most. I am hopeful that LLMs will be a catalyst for a widespread discussion of our educational goals. What might students learn in schools that will be useful long after they graduate? There are many worthy goals, but critical thinking and communication skills should be high on any list. I’ve written elsewhere about how critical thinking abilities are important for students Read More ›

Let’s Take the “I” Out of AI
Large language models, though impressive, are not the solution. They may well be the catalyst for calamity.
When OpenAI’s text generator, ChatGPT, was released to the public this past November, the initial reaction was widespread astonishment. Marc Andreessen described it as “pure, absolute, indescribable magic.” Bill Gates said that the creation of ChatGPT was as important as the creation of the internet. Nvidia CEO Jensen Huang said, “ChatGPT is one of the greatest things ever created in the computing industry.” Conversations with ChatGPT are, indeed, very much like conversations with a super-intelligent human. For many, it seems that the 70-year search for a computer program that could rival or surpass human intelligence has finally paid off. Perhaps we are close to the long-anticipated singularity where computers improve rapidly and autonomously, leaving humans far behind, Read More ›