
Of Logic and Lawyers: AI’s Fragile Competence
The case against near‑term legal automation—and the AI blind spots that still matter.
Read More ›
First, we must remind ourselves that “data is not the plural of anecdote.”
Read More ›
MIT’s Joseph Weizenbaum was only one of many pioneers, but he saw how readily people came to depend on the bots as if they were real people.
Read More ›
Smith urged caution in applying AI to high-stakes decisions. For critical areas like finance, law, and medicine, reliance on AI remains risky.
Read More ›
On this episode, host Robert J. Marks concludes his conversation with economics professor and author Gary Smith about the hype and limitations of generative AI. Smith is the Fletcher Jones Professor of Economics at Pomona College and a frequent contributor to Mind Matters News. In this portion of the conversation, Smith and Marks explore the hype around artificial general intelligence Read More ›

True believers don’t focus on short-term trends because they aren’t very positive. But short-term trends should be consistent with a long-term forecast.
Read More ›
Maybe it comes down to the traditional advice about the internet in general: If it sounds unbelievable, don’t believe it, and when in doubt, doubt.
Read More ›
How serious the current downturn will be, they say, depends on how long the current AI bubble lasts. Winter always comes, just not when many of us expect it.
Read More ›
Points can be made without sensationalized graphs that undermine the credibility of the argument. Let’s dispose of exploding pie charts.
Read More ›
One anomaly this year is that Seattle didn’t qualify for the playoffs even though it had a better record than the Central Division winner, Minnesota.
Read More ›
Real science is currently under siege, pummeled by conspiracy nuts and undermined internally by a replication crisis created by sloppy science.
Read More ›
Athletes do sometimes get hot—not that their chance of success is 100% but that it is temporarily elevated above their normal probability.
Read More ›
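As a rough illustration of what “temporarily elevated” means, here is a minimal simulation in Python with made-up numbers (nothing here comes from the post): a shooter whose success probability rises for a short stretch after each make shows a measurable gap between performance after makes and after misses.

```python
import numpy as np

rng = np.random.default_rng(1)

def shoot(n, base=0.45, hot=0.60, hot_len=10):
    """Simulate n shots; each make elevates the success
    probability to `hot` for the next `hot_len` shots."""
    makes, streak_left = [], 0
    for _ in range(n):
        p = hot if streak_left > 0 else base
        made = rng.random() < p
        makes.append(made)
        streak_left = hot_len if made else max(streak_left - 1, 0)
    return np.array(makes)

shots = shoot(100_000)
after_make = shots[1:][shots[:-1]]    # attempts right after a make
after_miss = shots[1:][~shots[:-1]]   # attempts right after a miss
print(f"P(make | previous make): {after_make.mean():.3f}")
print(f"P(make | previous miss): {after_miss.mean():.3f}")
```

With a truly constant success probability the two conditional rates would match; a persistent gap like the one this model produces is the signature of a genuine hot hand.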
The Federal Trade Commission (F.T.C.) sent a letter to OpenAI, the San Francisco company responsible for creating ChatGPT, the large language model that captured the world’s imagination in November of 2022. Per the New York Times, the F.T.C. is investigating the AI company’s methods of data acquisition and also plans to measure the potential harms of AI on society, citing concerns over false information and job replacement. Cecilia Kang and Cade Metz report: In a 20-page letter sent to the San Francisco company this week, the agency said it was also looking into OpenAI’s security practices. The F.T.C. asked the company dozens of questions in its letter, including how the start-up trains its A.I. models and treats personal data. The Read More ›

Gary Smith describes the problems with today’s science in his new book Distrust: Big Data, Data-Torturing, and the Assault on Science. He recounts endless examples of disinformation, data torture, and data mining, much of which we already knew. Taken together, however, and as I described in this review, they are mind-blowing. He argues that many of these problems come from things scientists do, such as p-hacking during statistical analysis, too little emphasis on “impact,” outright data falsification, and the creation of the Internet, which can be a huge disinformation machine in addition to a valuable resource. In the last chapter, he also offers some solutions, such as ending artificial p-value thresholds such as 0.05, requiring Read More ›

The pandemic proved a lot of things, one of them being that science is under assault. In this enlightening and entertaining new book, Professor Gary Smith shows us how much of the assault has its roots in what scientists do. The easiest example to understand is the Internet, which was originally created by scientists in the 1970s to exchange scientific information. Now it has become a great way to spread disinformation on almost every subject. A former chief historian of NASA noted: “The reality is, the internet has made it possible for people to say whatever the hell they like to a broader number of people than ever before.” Smith recounts endless examples of this disinformation, much of which Read More ›

Large language models (LLMs) are far more impressive now than they were just a few months or a year ago. Compare these exchanges I had with GPT-3 in June 2022 and with ChatGPT two days ago: Gary: Can I improve my test scores if I study after taking the test? GPT-3: That’s a good question. I’m not sure if you can improve your test scores by studying after taking the test, but it couldn’t hurt to try! ChatGPT: Studying after taking a test is unlikely to directly improve your scores on that particular test since the test has already been completed. Once a test is finished, the opportunity to answer questions and provide responses has passed. However, studying after taking Read More ›

Two years ago, I wrote about how peer review has become an example of Goodhart’s law: “When a measure becomes a target, it ceases to be a good measure.” Once scientific accomplishments came to be gauged by the publication of peer-reviewed research papers, peer review ceased to be a good measure of scientific accomplishments. The situation has not improved. One consequence of the pressure to publish is the temptation researchers have to p-hack or HARK (hypothesize after the results are known). P-hacking occurs when a researcher tortures the data in order to support a desired conclusion. For example, a researcher might look at subsets of the data, discard inconvenient data, or try different model specifications until the desired results are obtained and deemed statistically significant—and therefore Read More ›
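To make the mechanics concrete, here is a minimal sketch of subset-shopping in Python (the data and the 200-trial search are illustrative assumptions, not anything from the original post): two groups of pure noise are compared on arbitrary subsets until one comparison happens to clear the 0.05 threshold.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two groups of pure noise: there is no real effect to find.
group_a = rng.normal(0, 1, 100)
group_b = rng.normal(0, 1, 100)

# Subset-shopping: keep testing arbitrary slices of the data
# until one happens to cross the p < 0.05 threshold.
for trial in range(1, 201):
    idx = rng.choice(100, size=30, replace=False)
    _, p = stats.ttest_ind(group_a[idx], group_b[idx])
    if p < 0.05:
        print(f"'Significant' result after {trial} subsets: p = {p:.3f}")
        break
```

Each individual test is legitimate; it is the undisclosed search across many subsets that manufactures the “significant” finding.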

I recently wrote about how graphs that use logarithms on the horizontal axis can create a misleading impression of the relationship between two variables. The specific example I used was the claim made in a recent paper (with 16 coauthors from Google, Stanford, UNC Chapel Hill, and DeepMind) that scaling up the number of parameters in large language models (LLMs) like ChatGPT can cause “emergence,” which they define as qualitative changes in abilities that are not present in smaller-scale models but are present in large-scale models; thus they cannot be predicted by simply extrapolating the performance improvements on smaller-scale models. They present several graphs that seem to show emergence. However, their graphs have the logarithms of Read More ›
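The visual effect is easy to reproduce. Below is a minimal sketch with made-up numbers (not the paper’s data): performance that grows in simple proportion to parameter count traces a straight line on a linear axis, but looks like a long flat stretch followed by a sudden jump once the horizontal axis is logarithmic.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical scaling data: performance exactly proportional
# to the number of parameters (no emergence by construction).
params = np.logspace(6, 11, 500)   # 1e6 to 1e11 "parameters"
performance = params / 1e11

fig, (ax_log, ax_lin) = plt.subplots(1, 2, figsize=(10, 4))

ax_log.plot(params, performance)
ax_log.set_xscale("log")           # log axis: flat, then an apparent jump
ax_log.set_title("Log x-axis: looks 'emergent'")

ax_lin.plot(params, performance)   # linear axis: the same data, a straight line
ax_lin.set_title("Linear x-axis: steady improvement")

for ax in (ax_log, ax_lin):
    ax.set_xlabel("Number of parameters")
    ax.set_ylabel("Task performance")

plt.tight_layout()
plt.show()
```

The apparent “emergence” in the left panel is an artifact of compressing each tenfold increase in scale into the same horizontal distance.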

Walter Bradley Center Senior Fellow Gary Smith’s book Distrust: Big Data, Data-Torturing, and the Assault on Science was reviewed in The Washington Post today. Smith is a frequent contributor to Mind Matters and teaches economics at Pomona College. In his new book, Smith argues that science itself is being “undermined” by the tools scientists use. Reviewer Abby Ohlheiser writes, Smith, an economist whose work often examines the misuse of data and statistics in a variety of disciplines, argues that the current crisis of trust in science falls at the intersection of three forces: disinformation, data torturing and data mining. Disinformation, as Smith writes, is “as old as the human race,” but accelerated in speed and reach alongside social media. Data Read More ›

Gary Smith and Jeffrey Funk, frequent Mind Matters contributors, co-wrote a piece at Salon on the overblown dreams big tech has for AI. They write, Silicon Valley’s pre-eminent leaders love prematurely predicting that their products will completely upend the world as we know it. The latest case study comes from Sam Altman, CEO of OpenAI, the company behind the ChatGPT AI chatbot that has gone viral for its convincing imitations of human writing. Two years ago, Altman wrote a manifesto, “Moore’s Law for Everything,” in which he forecast that artificial intelligence would make huge swaths of both white collar and blue collar jobs obsolete. -Smith & Funk, Don’t believe the hype: why ChatGPT is not the “holy grail” of AI research | Read More ›

AI image generators like Midjourney or DALL-E are generally adept at rendering the human form accurately. Copyright, job displacement, and the general degradation of the visual arts via such AI remain ongoing concerns for many artists and practitioners. However, a new New Yorker article by Kyle Chayka identifies a noticeable flaw in AI artwork: human hands.
Missing the Big Picture
Chayka begins by recalling an art class where he was asked to draw his own hand. It’s an assignment for beginners and, as befits a novice exercise, it tempts the artist to focus more on the specific contours of the hand than on the overall structure and form. The forest gets lost in the trees, so to speak. Read More ›