Mind Matters Natural and Artificial Intelligence News and Analysis

Tag: AI


Navigating the Technological Age

Humans have created technology for centuries. But what happens when we stop using tech and tech starts using us? In this episode, Discovery Institute Senior Fellow Andrew McDiarmid sits down with Robert J. Marks to talk about the root meaning of technology, how it has developed, and the ways we can healthily navigate our highly technologized world. Additional Resources


Elon Musk to AI Labs: Press Pause

The petition reflects growing concern over the proper role of AI in human society and its potential for overreach

Over 1,000 leaders and experts in technology and science, including Elon Musk, are now urging artificial intelligence labs to pause their research and distribution of new AI technologies. They believe that moving forward so swiftly on AI research could bring about unintended consequences, and that we don’t yet understand AI well enough to be placing so much trust in it. According to The New York Times: The open letter called for a pause in the development of A.I. systems more powerful than GPT-4, the chatbot introduced this month by the research lab OpenAI, which Mr. Musk co-founded. The pause would provide time to introduce “shared safety protocols” for A.I. systems, the letter said. “If such a pause cannot be Read More ›


Robert Marks at The Daily Caller

Despite the confidence of Big Tech executives, new AI makes quite glaring mistakes

Robert J. Marks wrote a piece this week at The Daily Caller on artificial intelligence, ChatGPT, and the manifold problems of new AI systems like Google’s Bard and older ones such as Amazon’s Alexa. Despite the confidence of Big Tech executives, new AI makes quite glaring mistakes, although Marks believes AI has its genuine uses and benefits. Snapchat’s chatbot “My AI” gave advice on hiding the smell of pot and alcohol to someone posing as a disgruntled teenager. Microsoft’s Bing bot professed its love for a tech journalist. A Google app made egregiously racist errors. ChatGPT is also politically biased despite claiming neutrality. Marks writes, Many warn of the future dangers of artificial intelligence. Many Read More ›


The Bard is Beginning to Sing

Google has unleashed its own bot. Does that mean they're in the running in the AI race now?

Google has officially made its AI chatbot “Bard” available in the United States and the UK, staking their place in the artificial intelligence “arms race” that has taken the tech world by storm since OpenAI released ChatGPT last November. Microsoft released its own bot just a few weeks ago, though it immediately raised concerns over its unpredictable mannerisms and even “creepy” behavior, as described by Kevin Roose of The New York Times. “Google’s had a rocky AI journey after being caught out by the runaway success of OpenAI’s ChatGPT chatbot,” write a team of contributors at Forbes. This all raises the question: is Google really prepared to be a leader in this field? It may be a forefather Read More ›


Gary Smith’s New Book Reviewed in Washington Post

Smith argues that science itself is being undermined by the tools scientists use

Walter Bradley Center Senior Fellow Gary Smith’s book Distrust: Big Data, Data-Torturing, and the Assault on Science was reviewed in The Washington Post today. Smith is a frequent contributor to Mind Matters and teaches economics at Pomona College. In his new book, Smith argues that science itself is being “undermined” by the tools scientists use. Reviewer Abby Ohlheister writes, Smith, an economist whose work often examines the misuse of data and statistics in a variety of disciplines, argues that the current crisis of trust in science falls at the intersection of three forces: disinformation, data torturing and data mining. Disinformation, as Smith writes, is “as old as the human race,” but accelerated in speed and reach alongside social media. Data Read More ›


Lemoine at COSM 2022: A Conversation on AI and LaMDA

Will AI ever become "sentient"?

Blake Lemoine, ex-Google employee and AI expert, sat down with Discovery Institute’s Jay Richards at the 2022 COSM conference last November. Together they discussed AI, Google, and how and why Lemoine got to where he is today. Lemoine famously claimed last year that LaMDA, Google’s breakthrough AI technology, had achieved sentience. Lemoine explains that many people at Google thought AI had the potential for sentience, but believed such technology should not be built prematurely for fear of the negative impacts it could have on society. You can listen to their brief but interesting conversation in the video below, and be sure to see more sessions from the 2022 COSM conference featuring Lemoine and other leaders and innovators in technology on Read More ›


Dear Silicon Valley: You’re Over-Hyping ChatGPT

The abilities of these new chatbots are grossly overstated

Gary Smith and Jeffrey Funk, frequent Mind Matters contributors, co-wrote a piece at Salon on Big Tech’s exaggerated dreams for AI. They write, Silicon Valley’s pre-eminent leaders love prematurely predicting that their products will completely upend the world as we know it. The latest case study comes from Sam Altman, CEO of OpenAI, the company behind the ChatGPT AI chatbot that has gone viral for its convincing imitations of human writing. Two years ago, Altman wrote a manifesto, “Moore’s Law for Everything,” in which he forecast that artificial intelligence would make huge swaths of both white collar and blue collar jobs obsolete. -Smith & Funk, Don’t believe the hype: why ChatGPT is not the “holy grail” of AI research | Read More ›


Artists and Writers Unite Against AI

New Guardian article highlights the artists and writers pushing back against AI

A new article from The Guardian covers the struggle of artists, writers, and other “creatives” who are pushing back against AI systems like Midjourney and ChatGPT, which they say are beginning to displace them. Betsy Reed writes, Poetry may still be a hard nut for AI to crack convincingly, but among the first to face a genuine threat to their livelihoods are photographers and designers. Generative software can produce images at the touch of the button, while sites like the popular NightCafe make “original”, data-derived artwork in response to a few simple verbal prompts. The first line of defence is a growing movement of visual artists and image agencies who are now “opting out” of allowing their work to Read More ›


AI vs. Human Intentionality

If ChatGPT were trained over and over on its own output, it would eventually turn to gibberish

We can do a simple experiment that demonstrates the difference between AI and human intentionality. ChatGPT and the like are a sophisticated form of a mathematical model known as a Markov chain. A Markov chain is based on the Markov assumption that the future is entirely a product of the recent past. In other words, if we know the recent past, then nothing else we learn about the more distant past will improve our ability to predict the future. In ChatGPT’s case, this means the model assumes that everything needed to predict future words is contained within a limited window of previously seen words. ChatGPT’s window was 3,000 words, and I believe the newest version has Read More ›
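To make the windowed-context idea concrete, here is a minimal, hypothetical sketch of a word-level Markov chain in Python. It is only a toy lookup table, not ChatGPT’s actual neural-network architecture, and the sample text, window size, and function names are invented for illustration; but it shows the Markov assumption at work: the next word is chosen using only the last few words, and nothing earlier.

```python
# Toy word-level Markov chain (illustrative only; not how ChatGPT is implemented).
# The next word is predicted from just the last `window` words -- the Markov assumption.
import random
from collections import defaultdict

def build_chain(text, window=2):
    """Map each `window`-word context to the words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - window):
        context = tuple(words[i:i + window])
        chain[context].append(words[i + window])
    return chain

def generate(chain, seed, length=20):
    """Generate text; each step depends only on the current context window."""
    context = tuple(seed)
    output = list(seed)
    for _ in range(length):
        followers = chain.get(context)
        if not followers:              # no continuation ever observed for this context
            break
        next_word = random.choice(followers)
        output.append(next_word)
        context = tuple(output[-len(seed):])   # slide the window forward
    return " ".join(output)

# Hypothetical sample corpus and seed, purely for demonstration.
sample = ("the tribe built a hut and the tribe built a taller hut "
          "and still the moon stayed far above the tallest hut")
chain = build_chain(sample, window=2)
print(generate(chain, seed=["the", "tribe"]))
```

Anything outside the two-word window is invisible to this toy model, which is the point of the Markov assumption. Large language models use learned neural representations and far larger windows, but the passage’s argument is that the same limited-context principle applies.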


Robert J. Marks on Fox News

Dr. Marks talked about the hype over AI and where the danger really lies

Robert J. Marks, director of the Walter Bradley Center, was the subject of a new op-ed at Fox News, which featured his unique insights on artificial intelligence and how it can best be used by human beings. Contrary to much of the present alarmism over AI systems like ChatGPT, Marks remains level-headed about AI’s place in today’s world, pushing back against “sci-fi” apocalypticism while at the same time addressing real issues and concerns. Marks said, “No, AI is not going to take over. You’re not going to be its pet in the future. It’s going to be a tool which disrupts society, but we’re still going to be in charge.” -Scott Whitlock, AI expert warns of too much ‘hype’: Humans Read More ›


We Can’t Build a Hut to the Moon

The history of AI is a story of a recurring cycle of hype and disappointment

Once upon a time there lived a tribe on the plains. They were an adventurous tribe, constantly wanting to explore. At night they would see the moon drift lazily overhead, and they became curious: how could they reach the moon? The moon was obviously higher than their huts. Standing on the highest hut, no one could reach the moon. At the same time, standing on the hut got them closer to the moon. So they decided to amass all their resources and build a gigantic hut, reasoning that if standing on a short hut got them closer to the moon, then standing on a gigantic hut would get them even closer. Eventually the tribe ran out of mud and Read More ›


AI is Old News, Says Tech Consultant

AI actually dates back to the 1950s. It is not new, says Funk

This week, Robert J. Marks sat down with technology consultant and retired professor Jeffrey Funk, who contributes often to Mind Matters, usually in tandem with Gary Smith. Marks and Funk talked about tech startups, where the industry is headed, and the exaggerated hype that currently attends the discourse over AI. Funk talked about the various stages of AI development. “AI is not new,” he said. “AI is 70 years old. ChatGPT and other generative AI models are based on neural networks, which have become economical through Moore’s Law, through this incredible increase in computing power that has been going on since the 1950s. But it’s slowed dramatically.” -Jeffrey Funk, Jeffrey Funk on AI, Startups, and Big Tech | Mind Matters Read More ›


Jeffrey Funk on AI, Startups, and Big Tech

In this podcast episode, technology consultant and author Jeffrey Funk joins Robert J. Marks to talk about the artificial intelligence industry, how it’s used by Big Tech, and AI’s exaggerated hype. How do we respond to AI when technology is changing every year? Additional Resources


You Too, Snapchat? Another AI Bot Hits the Scene

"My AI" is eerily human, like the Bing bot, and just as inappropriate

Snapchat introduced a new feature in its app: an AI chatbot “friend” called “My AI.” (Just what lonesome teens need.) We’ve already seen the rogue behavior of Bing’s chatbot, which, in conversation with a New York Times tech journalist, dubbed itself “Sydney” and started beseeching its human counterpart to leave his wife and fall in love with it. Romantic, right? Not so much. The journalist left the experience with the creepy sense that AI had just crossed a sensitive boundary, and that tech companies need to get better at controlling the unpredictable beast they’ve unleashed. “My AI” Gives Shady Advice to Kids Just a couple of weeks later and here we are with AI making inroads into an app used Read More ›


Love Thy Robot as Thyself

Academics worry about AI feelings, call for AI rights

Riffing on the popular fascination with AI (artificial intelligence) systems ChatGPT and Bing Chat, two authors in the Los Angeles Times recently declared: We are approaching an era of legitimate dispute about whether the most advanced AI systems have real desires and emotions and deserve substantial care and solicitude. The authors, Prof. Eric Schwitzgebel of UC Riverside and Henry Shevlin, a senior researcher at the University of Cambridge, observed AI thinkers saying that “large neural networks” might be “conscious,” that the sophisticated chatbot LaMDA “might have real emotions,” and that ordinary human users are reportedly “falling in love” with the chatbot Replika. Reportedly, “some leading theorists contend that we already have the core technological ingredients for conscious machines.” The authors argue that if or when Read More ›


Observing and Communing

What human art and literature do that AI can't

AI image generators like Midjourney and DALL-E are generally adept at rendering the human form accurately. Concerns over copyright infringement, job displacement, and the general degradation of the visual arts via such AI remain ongoing for many artists and practitioners. However, a new New Yorker article by Kyle Chayka identifies a noticeable flaw in AI artwork: human hands. Missing the Big Picture Chayka begins by recalling an art class in which he was asked to draw his own hand. It’s an assignment for beginners, and, as befits a novice, it tempts the artist to focus on the specific contours of the hand rather than its overall structure and form. The forest gets lost in the trees, so to speak. Read More ›


Blake Lemoine and the LaMDA Question

In this continuation of last week’s conversation, ex-Googler Blake Lemoine tells Robert J. Marks what originally got him interested in AI: reading the science fiction of Isaac Asimov as a boy in rural Louisiana. The two go on to discuss and debate sentience in AI, non-computable traits of human beings, and the question of the soul. Additional Resources


AI and “Qualia,” the Ability to Experience

Robert J. Marks writes on AI's limits in new article at Salvo

Robert J. Marks wrote an article for the Spring issue of Salvo Magazine on AI, covering his ideas on its “non-computability” in the areas of love, empathy, and creativity. The Quality of Qualia I was particularly intrigued by Marks’s thoughts on qualia, a term used to describe the multifaceted realm of sensory experience. We often report on AI’s inability to be creative here at Mind Matters, but what about experiencing the world through touch, smell, and sight? Qualia are bound up with the mystery of consciousness, another non-computable feature of human life, and, according to Marks, far outside the purview of AI. Marks writes about the experience of biting into an orange as an example: If the experience Read More ›


Time for Artificial General Intelligence? Not So Fast, OpenAI

OpenAI CEO is ambitious about the company's direction, but are his hopes profoundly misguided?

OpenAI CEO Sam Altman is ambitious about his company’s future, promising the world that it is developing “artificial general intelligence” (AGI) that will supposedly compete with human intelligence, per a recent Futurism piece. The ambition, however, is misguided; or, more than that, it is simply delusional. AI is “not even close” to attaining the creativity and intelligence of human beings, and Altman shouldn’t be parading OpenAI’s products as if it were. Victor Tangermann writes, In reality, however, LLMs have a very long way to go until they’re able to compete with the intellect of a human being — which is why several experts are calling foul on Altman’s recent blog post, calling it meaningless and misleading. After all, AGI Read More ›


A Chat with Blake Lemoine on Google and AI Sentience

Former Google employee Blake Lemoine claimed that the Large Language Model LaMDA was a sentient being. The claim got him fired. In this episode, Lemoine sits down with Robert J. Marks to discuss AI, what he was doing at Google, and why he believes artificial intelligence can be sentient. Additional Resources