Mind Matters Natural and Artificial Intelligence News and Analysis

Tag: bias

The Crisis of Trust in the Mainstream Media

A vibrant and engaged media is essential to protecting American liberty. But what if it can't be trusted?

This is cross-posted at Humanize. Visit this link to listen to the entire conversation between host Wesley J. Smith and journalist/commentator Alice Stewart. A vibrant and engaged media is essential to protecting American liberty—which is why the First Amendment provides such a strong protection for freedom of the press. If the media are to carry out their societal responsibilities, journalists must have the trust of news consumers. But these days, trust is in low supply. An October 2022 Gallup Poll found that only 34% of Americans trust the mass media to report the news “fully, accurately and fairly.” Why are the media experiencing this profound crisis of trust, and what can be done about it? Wesley’s guest on this episode Read More ›

Tucker Carlson and the Decline of Cable TV

What does Carlson's move to Twitter mean for legacy media?

Tucker Carlson, longtime host of “Tucker Carlson Tonight” on Fox News, “parted ways” with the media empire and, just weeks later, announced that he would be starting a new, independent show. It was a quick turnaround. The interesting thing is that Carlson said the show would air not on cable television but on Twitter. He said that Twitter is basically the forum where today’s ideas are formulated, exchanged, and debated, and that there’s currently no better place to practice video journalism. Here’s the clip of Carlson so you can hear him for yourself. Fox News has lost considerable ratings since Carlson’s departure. He was their most popular host by a long shot. On the same day he was let go, CNN fired their own Read More ›

ChatGPT: Beware the Self-Serving AI Editor

The chatbot "edits" by reworking your article to achieve its own goals, not necessarily yours

My article, Utopia’s Brainiac (short title), reported results from experiments showing, first, that ChatGPT actually lies and, second, that it gives results plainly biased to favor certain political figures over others. I next ran a follow-up experiment: asking ChatGPT to “edit and improve” the Utopia’s Brainiac manuscript before submitting it. Close friends told me they’d used ChatGPT to improve their written work and said the process is easy. So I tried it myself on February 6, 2023. I entered “Please edit and improve the following essay” and pasted my piece in full text (as ultimately published). In under a minute, ChatGPT delivered its edited and revised copy. What did it do? I. Deleted Whole Section That Gave Readers an Everyday Context Read More ›

GPT-3 Versus the Writers at Mind Matters

How does the AI fare when it is asked to write on topics covered in Mind Matters articles?

In order to give a real-world comparison of GPT-3’s output to human writing, I decided it would be a fun activity to see how OpenAI’s GPT-3 compares to Mind Matters on a variety of topics that we cover. Here, we are using OpenAI’s direct API, not ChatGPT, as there is a lot of evidence that ChatGPT responses involve a human in the loop. Therefore, we are going to focus on the outputs from their API directly. I used several criteria for article selection in order to level the playing field as much as possible. For instance, I only chose articles that did not depend on recent events. This way, GPT-3 is not disadvantaged for not having up-to-date material. However, I also Read More ›

Note to Parents: Grooming and Wokeness Are Embedded in Chatbots

With or without tuning, all AI chatbots are biased one way or another. AI without bias is like water without wet

First impressions of a person can be wrong. Further interactions can reveal disturbing personality warts. Contrary to initial impressions, we might find out they lie, they are disturbingly woke, they can’t do simple math, their politics is on the extreme left, and they have no sense of humor or common sense. I have just described OpenAI’s GPT-3 chatbot, ChatGPT. Initially, users are gobsmacked by its performance. Its flashy prose responses to simple queries look amazing. But become roommates with the chatbot for a few hours and its shortcomings become evident. It can’t get its facts straight, can’t do simple math problems, hates Donald Trump, and is being groomed to be “woke.” Its performance warts are so numerous that Bradley Center Senior Fellow Gary N. Smith hoists a Read More ›

Google: Rank Censorship Behind the Scenes

We live under a state of highly sophisticated and ubiquitous suppression of disfavored voices

One year ago today (January 1st, 2022) we saw behind the curtain at Google. With vast information scattered across a billion websites, whoever controls the search algorithm largely controls information. And if Google.com were a stage, the spotlight would be centered squarely on the first result, with some ambient light spilling onto a few supporting roles. Second-page results are essentially extras, unlikely to catch the attention of the audience at all. About 25% of web searchers click that first result. Another 50% follow one of the next half-dozen. A scant 6% will ever make it to the second page.* If your breaking news, breakthrough product, or bold opinion piece isn’t in a starring role on that first page, it will languish Read More ›

The Most “Woke” Company Could Contribute Most to Online Bias

Google has got to be one of the "Wokest" companies, but there is a lesson in how Timnit Gebru got fired

Here’s a paper worth revisiting, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” (March 3, 2021), if only for the principal author’s trouble associated with publishing it. Although Google had hired Timnit Gebru to do ethics consultation, an executive, Megan Kacholia, demanded that she remove all suggestion of her affiliation. In the ensuing uproar, Gebru ended up no longer employed there. The paper in question was, in Gebru’s mind, pretty unobjectionable. It surveyed the known pitfalls of so-called large language models, a type of AI software — most famously exemplified by a system called GPT-3 — that was stoking excitement in the tech industry. Google’s own version of the technology was now helping to power the Read More ›

Wikipedia’s Bias Meets a Free-Speech Alternative

The famously free encyclopedia’s pages on abortion, communism, and historical figures reveal a left-leaning bias

Last December, Wikipedia co-founder Larry Sanger announced that he would be launching a free speech alternative to Wikipedia, a website that Sanger believes has lost its credibility as a neutral source of information. Sanger’s Encyclosphere is meant to be “an open encyclopedia network” (Sanger compares it to “the blogosphere”) with the goal of “build(ing) a network that … all of humanity owns and no one exclusively controls.”  One of Wikipedia’s declared “fundamental principle(s)” is NPOV – neutral point of view. Wikipedia defines NPOV as “representing fairly, proportionately, and, as far as possible, without editorial bias, all the significant views that have been published by reliable sources on a topic.”  “This policy is non-negotiable,” the website states. But according to Sanger, “Wikipedia’s ‘NPOV’ is dead.”  Read More ›

Another AI Ethics Head at Google Gets Fired Over Diversity Issues

The AI ethics team and Google management may have very different ideas about what “ethics” means

On February 19, Google fired Margaret Mitchell, the AI ethics co-lead at Google Brain. Mitchell’s co-leading colleague, Timnit Gebru, had been fired in December, amid controversy. Both women were critical of Google’s diversity hiring record during the two years they worked together. The flashpoint in Mitchell’s case, for which she had been temporarily suspended earlier, hinged on claims of unauthorized use of files: In a statement, a Google spokesperson said Mitchell had shared “confidential business-sensitive documents and private data of other employees” outside the company. After Mitchell’s suspension last month, Google said activity in her account had triggered a security system. A source familiar with Mitchell’s suspension said she had been using a script to search her email for material Read More ›

How Bias Can Be Coded Into Unthinking Programs

MIT researcher Joy Buolamwini started the project as a trivial “bathroom mirror” message

Coded Bias, a new documentary by 7th Empire Media that premiered at the Sundance Film Festival in January 2020, looks at the ways algorithms and machine learning can perpetuate racism, sexism, and infringements on civil liberties. The film calls for accountability and transparency in artificial intelligence systems, which are algorithms that sift large amounts of data to make predictions, as well as regulations on how these systems can be used and who has access to the data. The documentary follows the path of MIT researcher Joy Buolamwini. Buolamwini took a class on science fiction and technology in which one of her assignments was to create a piece of technology that isn’t necessarily useful but is inspired by science fiction. Buolamwini Read More ›

Can Robots Be Less Biased Than Their Creators?

We often think of robots as mindless, but the minds of their creators are behind them

In some ways, it’s an odd question. Many of us would think of a robot as the opposite of bias. But the reality is that, because everything the robot is and does is a consequence of human actions, a robot could in fact be very biased. How will we know? Some AI developers are attempting to deal with this question: Last summer, hundreds of A.I. and robotics researchers signed statements committing themselves to changing the way their fields work. One statement, from the organization Black in Computing, sounded an alarm that “the technologies we help create to benefit society are also disrupting Black communities through the proliferation of racial profiling.” Another manifesto, “No Justice, No Robots,” commits its signers to Read More ›

Questions Dog the Future of Police Robots

Robots will have all the human judgment flaws but none of the capacity to change

Here’s a snippet from a recent New York Times article on the apparent first use of a police robot to kill a suspect in the United States: the 2016 killing of Micah Xavier Johnson. Johnson had been discharged from the U.S. Army under unclear circumstances, and in July of that year he shot five officers dead. Like almost all police robots in use today, the Dallas device was a straightforward remote-control platform. But more sophisticated robots are being developed in labs around the world, and they will use artificial intelligence to do much more. A robot with algorithms for, say, facial recognition, or predicting people’s actions, or deciding on its own to fire “nonlethal” projectiles is a robot that many researchers find problematic. The reason: Read More ›

How Toxic Bias Infiltrates Computer Code

A look at the dark underbelly of modern algorithms

The newly released documentary Coded Bias from Shalini Kantayya takes the viewer on a tour of the ways modern algorithms can undermine justice and society, and are doing so at the present moment. Coded Bias highlights many under-discussed issues regarding data and its usage by governments and corporations. While its prescriptions for government usage of data are well considered, the issue of corporate use of data involves many additional issues that the film skirts entirely. As the film points out, these algorithms are presented to us as if they were a form of intelligence. But they are actually just math, and this math can be used, intentionally or unintentionally, to encode biases. In fact, as Bradley Center fellows Robert J. Marks Read More ›

Can a Computer Write Your Paper for You Someday Soon?

GPT-3 recently came up with a paragraph that—a pop psychologist agreed—sounded just like him

This summer the OpenAI lab, backed by $1 billion in funding from Microsoft, released an updated version of GPT-3, a text generator that produces convincing sentences by analyzing, among other online sources, Wikipedia, countless blog posts, and thousands of digital books. According to a recent story by Cade Metz in the New York Times, one GPT-3 programmer decided to target pop psychologist Scott Barry Kaufman. Could GPT-3 really come up with a paragraph that sounded just like him? Kaufman himself was really impressed with this one, on the subject of becoming more creative: I think creative expression is a natural byproduct of growing up in a diverse world. The more diverse the world is, the more Read More ›

Information Today Is Like Water in the Ocean. How Do We Test It?

Often, we must sort through many layers of bias in information to get at the facts that matter
Examining specific types of bias in our thinking will help us evaluate the information on key issues that inundates us today. Read More ›

AI: Design Ethics vs. End User Ethics — the Difference Is Important

The major ethical challenge in AI design is unintended consequences. It’s up to end users to debate which consequences SHOULD be intended. Read More ›

Why Some Nation States Are Banning TikTok

The United States is not alone in questioning the social medium’s allegiance to the Chinese government

Why is TikTok so controversial? It’s the first Chinese technology company that has reached a billion users outside of China. Its main demographic is Generation Z—teens and twenty-somethings. If you take a look at TikTok videos, most are goofy and irreverent. They’re frenetic shorts of everything from fashion tips to pranks and, of course, (bad) dancing. TikTok’s stated mission is to “inspire creativity and bring joy.” What could go wrong? Here’s what. Working with China, as Disney and the NBA can attest, comes with certain strings attached, including acquiescing to the Chinese Communist Party’s rules for acceptable speech. Because ByteDance, which owns TikTok, is a Chinese company (although partly owned by investors from the U.S. and Japan), the Chinese Communist Read More ›

Bingecast: George Montañez on Intelligence and the Turing Test

What do computer scientists say about the ability of machines to think? Alan Turing, the father of modern computer science, tackled the question in 1950 and proposed the Turing test as an answer. Is the Turing test important today? Can a deeper understanding of intelligence be culled from the Turing test? Robert J. Marks discusses the Turing test, artificial intelligence, Read More ›

Can Machines Think?

What do computer scientists say about the ability of machines to think? Alan Turing, the father of modern computer science, tackled the question in 1950 and proposed the Turing test as an answer. Is the Turing test important today? Robert J. Marks discusses the Turing test with Dr. George Montañez. Show Notes 00:55 | Introducing Dr. George Montañez, Iris and Read More ›

A Closer Look at Google’s Search Engine Bias

If Google’s CEO honestly believes that there is no political bias, that is, in itself, a big part of the problem
If Sundar Pichai thinks that there is no bias in Google's algorithms, he is arguing against the nature of writing algorithms itself—not a good position for a computer guy to be in. Read More ›