Mind Matters Natural and Artificial Intelligence News and Analysis

Tag: Large Language Models (LLMs)


ChatGPT Listed as “Co-Author” For Several Scientific Papers

Large language models can’t be authors of text because they can’t have responsibility, critics say

ChatGPT was listed as a contributing author on at least four scientific articles, according to a report from Nature. The news arrives amid a flurry of debate over the place of AI in journalism and in artistic and academic disciplines, and now the issue has spread to the scientific community. People are pushing back against the idea of ChatGPT “authoring” text, arguing that because AI cannot take responsibility for what it produces, only humans should be listed as authors. The article notes: The editors-in-chief of Nature and Science told Nature’s news team that ChatGPT doesn’t meet the standard for authorship. “An attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs,” says Magdalena Skipper, editor-in-chief of Nature in London. Authors using… Read More ›


Large Language Models Can Entertain but Are They Useful?

Humans who value correct responses will need to fact-check everything LLMs generate

In 1987, economics Nobel Laureate Robert Solow said that the computer age was everywhere—except in the productivity data. A similar thing could be said about AI today: It dominates tech news but does not seem to have boosted productivity a whit. In fact, productivity growth has been declining since Solow’s observation. Productivity increased by an average of 2.7% a year from 1948 to 1986, but by less than 2% a year from 1987 to 2022. Labor productivity is the amount of goods and services we produce in a given amount of time—output per hour. More productive workers can build more cars, construct more houses, and educate more children. More productive workers can also enjoy more free time. If workers can do in four… Read More ›
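The compounding arithmetic behind those growth rates is worth making concrete. A minimal sketch, assuming the year spans implied by the article's dates (38 years for 1948–1986, 35 years for 1987–2022) and treating the "less than 2%" figure as a 2% upper bound:

```python
# Compound annual growth: how much cumulative difference a percentage
# point or so of yearly productivity growth makes over decades.
# Rates and periods are taken from the article; exact spans are an
# assumption, not official BLS figures.

def cumulative_growth(rate: float, years: int) -> float:
    """Total growth factor after compounding `rate` annually for `years` years."""
    return (1.0 + rate) ** years

early = cumulative_growth(0.027, 38)  # 1948-1986 at 2.7%/yr
late = cumulative_growth(0.020, 35)   # 1987-2022 at (at most) 2%/yr

print(f"1948-1986: output per hour grew roughly {early:.2f}x")
print(f"1987-2022: output per hour grew at most roughly {late:.2f}x")
```

Under these assumptions, output per hour roughly tripled in the earlier period but only about doubled in the later one, which is the gap behind the "declining productivity growth" claim.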


Chatbots: Still Dumb After All These Years

Intelligence is more than statistically appropriate responses

This story, by Pomona College business and investment prof Gary Smith, was #6 in 2022 at Mind Matters News in terms of reader numbers. As we approach the New Year, we are rerunning the top ten Mind Matters News stories of 2022, based on reader interest. At any rate: “Chatbots: Still dumb after all these years.” (January 3, 2022) In 1970, Marvin Minsky, recipient of the Turing Award (“the Nobel Prize of Computing”), predicted that within “three to eight years we will have a machine with the general intelligence of an average human being.” Fifty-two years later, we’re still waiting. The fundamental roadblock is that, although computer algorithms are really, really good at identifying statistical patterns, they have no way of… Read More ›


Large Language Models Are an Unfortunate Detour in AI

Gary Smith: Even though LLMs have no way of assessing the truth or falsity of the text they generate, the responses sound convincing

For decades, computer scientists have struggled to construct systems possessing artificial general intelligence (AGI) that rivals the human brain—including the ability to use analogies, take context into account, and understand cause and effect. Marvin Minsky (1927–2016) was hardly alone in his overly optimistic 1970 prediction that, “In from three to eight years we will have a machine with the general intelligence of an average human being.” AGI turned out to be immensely more difficult than imagined, and researchers turned their attention to bite-size projects that were doable (and profitable). Recently, large language models (LLMs) — most notably OpenAI’s GPT-3 — have fueled a resurgence of hope that AGI is almost here. GPT-3 was trained by breaking 450 gigabytes of text data into… Read More ›


Turing Tests Are Terribly Misleading

Black box algorithms are now being trusted to approve loans, price insurance, screen job applicants, trade stocks, determine prison sentences, and much more. Is that wise?

In 1950, Alan Turing proposed that the question, “Can machines think?,” be replaced by a test of how well a computer plays the “imitation game.” A man and woman go into separate rooms and respond with typewritten answers to questions that are intended to identify the players, each of whom is trying to persuade the interrogators that they are the other person. Turing proposed that a computer take the part of one of the players and the experiment be deemed a success if the interrogators are no more likely to make a correct identification. There are other versions of the game, some of which were suggested by Turing. The standard Turing test today involves a human and a computer and… Read More ›


Chatbots: Still Dumb After All These Years

Intelligence is more than statistically appropriate responses

In 1970, Marvin Minsky, recipient of the Turing Award (“the Nobel Prize of Computing”), predicted that within “three to eight years we will have a machine with the general intelligence of an average human being.” Fifty-two years later, we’re still waiting. The fundamental roadblock is that, although computer algorithms are really, really good at identifying statistical patterns, they have no way of knowing what these patterns mean because they are confined to MathWorld and never experience the real world. As Richard Feynman famously explained, there is a fundamental difference between labeling things and understanding them: [My father] taught me “See that bird? It’s a brown-throated thrush, but in Germany it’s called a halsenflugel, and in Chinese they call it a chung ling and even… Read More ›