Mind Matters Natural and Artificial Intelligence News and Analysis

Category: Artificial Intelligence


Will AI Ever Achieve Consciousness?

A former Facebook executive thinks so, assuming progress will eventually get us there

John Carmack, a former Facebook executive who famously expressed doubts over Mark Zuckerberg’s ambitious metaverse project, thinks AI is “on the cusp” of simulating the human brain. Per a report from Futurism, Carmack sat down with Dallas Innovates and talked about the possibilities of AI, as well as its prime obstacle: an inconvenient thing called consciousness. Carmack said, “The thing we don’t yet have is sort of the consciousness, the associative memory, the things that have a life and goals and planning. I mean, forget human brains; we don’t even have things that can act like a mouse or a cat.” Despite the far-off dream of developing consciousness in AI, Carmack thinks it’s plausible, given the great strides we’ve seen Read More ›


Artificial Intelligence and the Love of Jesus

The "He Gets Us" video ends with the declaration "Jesus' love was never artificial"

How does artificial intelligence deal with the teachings of Jesus Christ? Apparently quite well in some cases. Super Bowl ads this year included two about Jesus from the ministry He Gets Us. There are more thought-provoking videos at their web site HeGetsUs.com. One, linked here, is about AI. He Gets Us asked Midjourney, an artificial intelligence image synthesizer, to generate images about love from simple text prompts; the video shows the results. When prompted to synthesize an image from the prompt “love,” the response was pictures containing hearts – the kind you might see on a cheesy Valentine’s Day card. Then the AI was asked to visualize love the way Read More ›


The Need for Accountability in AI-Generated Content

Just because we live in a world of AI does not mean we can escape responsibility

AI-generated content has become increasingly common on the web. However, as we enter this new era, we will need to think through the moral and social ramifications of what we are doing, and how we should negotiate the new ethical landscape. But first, a brief recap of recent history. The first major player to pioneer AI-generated content was the Associated Press. AP realized that many market-oriented articles were pretty monotonous and read like templates anyway, so they decided to fully commit and auto-generate many of them. If you read an AP story about a company’s earnings report and it sounds eerily like every other story about other companies’ earnings reports, there’s a reason for that. Templated content, while annoying, provides window-dressing to raw Read More ›
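To make the idea of templated content concrete, here is a minimal sketch in Python of the fill-in-the-blanks pattern described above. The template wording, field names, and figures are hypothetical illustrations, not AP’s actual system.

```python
# Hypothetical illustration of templated earnings coverage: a fixed story
# skeleton filled in from structured data (not AP's actual pipeline).
TEMPLATE = (
    "{company} reported quarterly earnings of {eps} per share, "
    "{beat_or_miss} analyst expectations of {expected_eps}. "
    "Revenue came in at {revenue}."
)

def earnings_story(data: dict) -> str:
    beat_or_miss = "beating" if data["eps"] > data["expected_eps"] else "missing"
    return TEMPLATE.format(
        company=data["company"],
        eps=f"${data['eps']:.2f}",
        expected_eps=f"${data['expected_eps']:.2f}",
        beat_or_miss=beat_or_miss,
        revenue=f"${data['revenue'] / 1e9:.1f} billion",
    )

# Example data (made up) -- every story generated this way reads like
# every other one, which is the author's point about templated content.
print(earnings_story({
    "company": "Acme Corp",
    "eps": 1.42,
    "expected_eps": 1.35,
    "revenue": 2_300_000_000,
}))
```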


ChatGPT: Beware the Self-Serving AI Editor

The chatbot "edits" by reworking your article to achieve its own goals, not necessarily yours

My article, Utopia’s Brainiac (short title), reported results from experiments showing, first, that ChatGPT actually lies and, second, that it gives results plainly biased to favor certain political figures over others. I next ran a follow-up experiment: asking ChatGPT to “edit and improve” the Utopia’s Brainiac manuscript before submitting it. Close friends told me they’d used ChatGPT to improve their written work and said the process is easy. So, I tried it myself on February 6, 2023. I entered “Please edit and improve the following essay” and pasted my piece in full text (as ultimately published). In under a minute, ChatGPT delivered its edited and revised copy. What did it do? I. Deleted Whole Section That Gave Readers an Everyday Context Read More ›


Gary N. Smith Weighs in on the Microsoft and Google AI “Arms Race”

The optimism over LLMs is exaggerated, but tech companies are competing for AI sovereignty nonetheless

Gary N. Smith, who writes frequently for Mind Matters News, was interviewed for a recent article from The Daily Beast on the developing AI “arms race” among tech giants like Microsoft and Google. Because of the success of OpenAI’s ChatGPT, tech companies are scrambling to adjust and compete. Microsoft announced that it is now incorporating ChatGPT into its search engine, and Google advertised its AI alternative “Bard,” which it also plans to integrate with its browser. This comes just a couple of months after the public release of ChatGPT, which saw several million users within the first week of its existence. Smith commented, “Big companies don’t want to miss the next big thing, and startups want to cash in on unrestrained Read More ›


Is It Worth Having ChatGPT Janitors to Clean Up Its Toxic Content?

This piece by Mathew Otieno originally appeared at MercatorNet (February 8, 2023) and is republished here under a Creative Commons License. Ever since OpenAI’s ChatGPT chatbot burst into the limelight late last year, its popularity has grown by leaps and bounds. By the end of January 2023, according to a report from UBS, a bank, ChatGPT had garnered over 100 million monthly active users, beating all social media sites as the fastest consumer internet service to achieve that distinction. Unsurprisingly, in lockstep with its growing popularity, controversies have also started dogging the company. For instance, in mid-January, Time magazine published a bombshell report about how OpenAI sub-contracted Kenyan workers earning less than US$2 per hour to label toxic content, like violence, sexual abuse and hate speech, to be used to train Read More ›


Will Stability AI Go Down in Court?

Getty Images, popular artist sue generative AI companies and have a strong case

The stock photo company Getty Images is suing Stability AI, the creator of the AI image generator Stable Diffusion. Getty alleges that Stability AI has committed a “brazen infringement of Getty Images’ intellectual property on a staggering scale.” Stable Diffusion has purportedly used millions of Getty stock photos to generate content for users—without acknowledgment or permission. While Stable Diffusion usually produces an image that looks different from the original, it has noticeably reproduced Getty’s watermark, spurring concern among Getty executives that the company’s image and reputation are being tarnished. The lawsuit arrives amid growing tension between AI image generators, on the one hand, and artists and copyright holders, on the other. A recent article from The Wall Street Journal details the experience of artist Grzegorz Rutkowski, whose popular Read More ›


Utopia’s Brainiac? ChatGPT Gives Biased Views, Not Neutral Truth

Look at what happens when you try to get ChatGPT to offer unbiased responses about political figures

Do you trust your pocket calculator? Why?  Maybe you’re using the calculator app on your phone. Enter: 2 + 2. You get an answer: 4. But you knew that already. Now enter 111 x 111. Do you get 12,321? Is that the correct answer? Work it out with a pencil. That answer is correct. Try 1234 x 5678.  My calculator app returns 7,006,652. Correct? I’m not going to check it. I’m going to trust the calculator. And so it goes. The harder the problem, the more we trust the computer. That’s one reason why many people trumpet the powers of Artificial Intelligence (AI) systems. Those systems can give answers to problems we individuals couldn’t solve in a lifetime.  But are the AI “answers” correct?  Read More ›
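For readers who would rather not take either the pencil or the calculator on faith, the three products above are easy to check with a few lines of Python; this is just ordinary integer arithmetic, not anything drawn from the article’s experiments.

```python
# Check the calculator examples with exact integer arithmetic.
print(2 + 2)        # 4
print(111 * 111)    # 12321
print(1234 * 5678)  # 7006652 -- matches the app's answer of 7,006,652
```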


Whatever You Do, Don’t Ask GPT for Sources

The chatbot will give you a lot of links that don't necessarily direct you where you want to go

One of the more amusing things I’ve found about OpenAI’s GPT-3 and ChatGPT is that they will very confidently provide you with sources on anything you ask—and the sources will often be completely made up. They will even provide fake (but real-looking) URLs for you! I stumbled across this feature when researching a previous GPT-3 article about how well it could write blog posts compared to real authors. I initially tried asking GPT-3 to include sources, and it generated complete nonsense for them. I decided that, for that article, sources were not the main question, so I left them out of the final queries. However, in response to my latest article about ChatGPT not being a Google replacement, someone commented Read More ›


What Can’t A.I. Do? Quite a Lot, Actually

NYT columnist David Brooks makes a list of uniquely human skills that students should develop in college

In an increasingly artificial world, how are we to remain human? New York Times columnist David Brooks wrote an opinion article this week seeking to answer that question. Brooks notes some of the benefits of “machine learning,” but also lists some of the characteristics artificial intelligence will forever fail to embody. “A.I. will probably give us fantastic tools that will help us outsource a lot of our current mental work,” he writes. “At the same time, A.I. will force us humans to double down on those talents and skills that only humans possess.” Uniquely Human Traits What are some of these “talents and skills” that people should intentionally develop in the age of A.I.? Brooks says an incoming college student Read More ›


ChatGPT Violates Its Own Model

Based on these exchanges, we can at least say the chatbot is more than just the ChatGPT neural network

Here is a quick overview of how ChatGPT operates under the hood. This will make it easier to spot suspicious behavior. The following is at a very high level. For the gory details, see two guides: “The Illustrated GPT-2 (Visualizing Transformer Language Models)” by Jay Alammar (jalammar.github.io) and “The GPT-3 Architecture, on a Napkin” (dugas.ch). What is ChatGPT? Let’s start with what ChatGPT is. ChatGPT is a kind of machine learning algorithm known as a neural network. To understand what a neural network is, recall your algebra classes. You remember being given a set of equations and being told to solve for some variables. Then you learned you Read More ›
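The algebra analogy can be made concrete with a few lines of code. Below is a minimal sketch (mine, not the author’s, and not anything from ChatGPT itself) showing that a tiny neural network is just nested matrix equations with fixed coefficients: running it means plugging numbers into those equations. Real models like GPT use the same kind of arithmetic, only with billions of learned coefficients.

```python
import numpy as np

# A toy two-layer network: y = W2 @ relu(W1 @ x + b1) + b2.
# The weights here are arbitrary placeholders; in a real language model
# they are learned from text, but the forward pass is still just this
# kind of matrix arithmetic applied layer by layer.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

def forward(x):
    hidden = np.maximum(W1 @ x + b1, 0.0)  # layer 1 with ReLU nonlinearity
    return W2 @ hidden + b2                # layer 2 produces the output

print(forward(np.array([1.0, 2.0, 3.0])))
```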


Tech Journal CNET Used AI to Write Articles

A writer laments CNET's reckless embrace of AI-generated content

The prominent technology journal CNET has used AI to generate some of its articles, although the results have been embarrassing. Apart from general outrage from critics, who claim this maneuver will obliterate the need for entry-level writers, the unspecified AI system made lots of errors. Jon Christian gives an example of the AI’s “boneheaded” work in a Futurism report: “To calculate compound interest, use the following formula: Initial balance (1+ interest rate / number of compounding periods) ^ number of compoundings per period x number of periods. For example, if you deposit $10,000 into a savings account that earns 3% interest compounding annually, you’ll earn $10,300 at the end of the first year.” It sounds authoritative, but it’s wrong. In reality, Read More ›
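For comparison, the standard compound-interest formula is A = P(1 + r/n)^(nt). Running the quoted example through it (a quick check of my own, not from the CNET or Futurism pieces) shows the mistake: $10,300 is the ending balance after one year; the interest actually earned is $300.

```python
def compound(principal: float, rate: float, n: int, years: float) -> float:
    """Ending balance under the standard formula A = P * (1 + r/n) ** (n * t)."""
    return principal * (1 + rate / n) ** (n * years)

balance = compound(10_000, 0.03, n=1, years=1)  # 3% compounded annually, 1 year
print(balance)            # 10300.0 -- the ending balance
print(balance - 10_000)   # 300.0   -- the interest actually earned
```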


China Leads the World in Face Recognition Exports

The global power is exporting the technology to weak democracies and autocratic countries, according to a report from the Brookings Institution

China is the world’s leading exporter of facial recognition technology, according to a study conducted by researchers from Harvard and MIT. The study aimed to elucidate the ramifications of the AI revolution and what it means for such technology to be exported by a global power like China. The researchers found that China is more likely than other countries to export AI technology abroad, and that it focuses on exporting facial recognition to countries with weak democracies that are experiencing social unrest and upheaval. Political motivation might be driving such trades. The authors write, The Chinese government’s demand for surveillance and political control translates into more exports of AI. Moreover, our finding that autocrats and would-be autocrats abroad demand surveillance Read More ›


US Military Updates Autonomous Killer Robot Policy

Like it or not, properly vetted AI autonomous weapons are in the future of US adversaries and in ours as well

The Pentagon has updated its policy on the use of artificially intelligent autonomous weapons. Autonomous means the weapon can operate on its own without human oversight. As unpacked in my book, The Case for Killer Robots, technology wins wars and gives pause to potentially dangerous adversaries. This includes autonomous AI systems currently under development in the US, China and elsewhere. When there is an option, a human should be given oversight of the AI to avoid unexpected contingencies. But this is not always wise when (1) autonomy gives an overwhelming strategic advantage, (2) communication with remote AI is unwise or not possible, and (3) events unfold faster than a human can respond. Imagine, for example, being attacked by a large swarm of armed drones. Read More ›


Why ChatGPT Won’t Replace Google

With Google, the algorithm eventually leads you to content made by real people. With ChatGPT, you never leave the algorithm

To some extent, ChatGPT is a newer, easier-to-use interface than Google.  Unlike Google, it doesn’t make you waste time by visiting those pesky websites.  It not only looks into its database for content, but it also summarizes it for you as paragraphs. There is a problem lurking in there, however.  Being computers, neither Google nor ChatGPT cares about the truth.  They are algorithms, and they merely do as they are told.  Additionally, you can’t code the human mind into algorithms.  However, there is a fundamental difference between what ChatGPT does and what Google does that will prevent content generators like ChatGPT from displacing search engines like Google: Google eventually lets you out of its system. Ultimately, the goal of search Read More ›


The Lensa Magic Avatar Feature Needs Attention

The app uses Stable Diffusion and can generate inappropriate and grotesque material

We’ve covered ChatGPT, OpenAI’s new text generator, quite a bit over the last few weeks here at Mind Matters, but lest we forget, AI has made great strides in image generation, too. Another explosive AI tool is Lensa, an app that uses Stability AI’s Stable Diffusion model. Lensa was first developed by Prisma Labs in 2018 as a photo editing tool, but in late 2022, it added the “Magic Avatar” feature, which allows users to recreate their portraits artistically. Since then, it has skyrocketed in popularity alongside its AI cousin, ChatGPT. Despite its reputation as a “fun app,” some users have complained that the app produced sexualized and racialized avatars of themselves. In an Insider article, Bethany Biron writes, Melissa Heikkilä, a Read More ›


AI Can Pass the MBA Exam, Wharton Professor Says

In the wake of the ChatGPT uproar, its abilities (and limits) are becoming better understood

OpenAI’s ChatGPT passed the Master of Business Administration (MBA) exam, according to a news report from NBC. Professor Christian Terwiesch of Wharton, who conducted the research and authored a paper on the matter, said that the bot scored somewhere in the B range on the exam, and that this has major implications for education. The report is just another in a flood of news and concerns about AI invading the spheres of education and academia. Terwiesch wrote, “[The bot’s score shows] remarkable ability to automate some of the skills of highly compensated knowledge workers in general and specifically the knowledge workers in the jobs held by MBA graduates including analysts, managers, and consultants.” Despite its apparent advancement, the bot did Read More ›


ChatGPT Listed as “Co-Author” For Several Scientific Papers

Large language models can’t be authors of text because they can’t have responsibility, critics say

ChatGPT was listed as a contributing author for at least four scientific articles, according to a report from Nature. The news arrives amid a flurry of debate over the place of AI in journalism and artistic and academic disciplines, and now the issue has spread to the scientific community. People are pushing back against the idea of ChatGPT “authoring” text, claiming that because AI cannot take responsibility for what it produces, only humans should be listed as authors. The article notes, The editors-in-chief of Nature and Science told Nature’s news team that ChatGPT doesn’t meet the standard for authorship. “An attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs,” says Magdalena Skipper, editor-in-chief of Nature in London. Authors using Read More ›


GPT-3 Versus the Writers at Mind Matters

How does the AI fare when it is asked to write on topics covered in Mind Matters articles?

In order to give a real-world comparison of the output of GPT-3 to human writing, I decided it would be a fun activity to see how OpenAI’s GPT-3 compares to Mind Matters on a variety of topics that we cover. Here, we are using OpenAI’s direct API, not ChatGPT, as there is a lot of evidence that ChatGPT responses have a human in the loop. Therefore, we are going to focus on the outputs from their API directly. I used several criteria for article selection in order to level the playing field as much as possible. For instance, I only chose articles that did not depend on recent events. This way, GPT-3 is not disadvantaged for not having up-to-date material. However, I also Read More ›
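For readers who want to see what “using OpenAI’s direct API” looked like at the time, here is a minimal sketch using the legacy 0.x openai Python package and its Completions endpoint. The model name, prompt, and parameters are illustrative assumptions; the article’s actual prompts and settings are not reproduced here.

```python
import openai  # legacy 0.x package; the v1+ client uses a different interface

openai.api_key = "YOUR_API_KEY"  # placeholder -- set your own key

# A hypothetical prompt standing in for the kinds of Mind Matters topics
# the comparison covered; the real prompts are not shown in this excerpt.
response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Write a short article about whether chatbots can replace search engines.",
    max_tokens=400,
    temperature=0.7,
)

print(response["choices"][0]["text"])
```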


Princeton Student Develops AI Detector App

Software engineers are finding creative ways to regulate and detect ChatGPT

A 22-year-old student from Princeton, Edward Tian, has designed an app to discern whether text is human- or AI-generated. The tool, GPTZero, is already garnering interest from potential investors and will come as a relief to teachers and others who are worried about the advanced abilities of ChatGPT, OpenAI’s new text generator. According to a piece from Fast Company, “Tian says his tool measures randomness in sentences (“perplexity”) plus overall randomness (“burstiness”) to calculate the probability that the text was written by ChatGPT. Since tweeting about GPTZero on January 2, Tian says he’s already been approached by VCs wanting to invest and will be developing updated versions soon.” Megan Morrone, Was this written by a robot? These tools help detect AI-generated text (fastcompany.com) Read More ›
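The terms in that quote can be made a bit more concrete. Below is a rough sketch, under my own assumption (not spelled out in the article) that “perplexity” is computed from a language model’s per-token log-probabilities and “burstiness” from how much per-sentence perplexity varies; GPTZero’s actual implementation may differ.

```python
import math
from statistics import pstdev

def perplexity(token_logprobs: list[float]) -> float:
    """Perplexity = exp(-mean log-probability) over the tokens of a passage."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def burstiness(per_sentence_logprobs: list[list[float]]) -> float:
    """One simple proxy: how much perplexity fluctuates from sentence to sentence."""
    return pstdev(perplexity(lp) for lp in per_sentence_logprobs)

# Toy numbers standing in for the log-probabilities a language model
# might assign; they are illustrative only.
human_like = [[-2.1, -5.0, -0.3, -4.2], [-0.9, -6.3, -2.7], [-3.5, -0.2, -1.1, -7.0]]
ai_like    = [[-1.4, -1.6, -1.5, -1.4], [-1.5, -1.4, -1.6], [-1.5, -1.5, -1.4, -1.6]]

print(burstiness(human_like))  # higher: uneven, "bursty" text
print(burstiness(ai_like))     # lower: uniformly predictable text
```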