Mind Matters Natural and Artificial Intelligence News and Analysis
education-kid-learning-and-training-with-ai-knowledge-and-te-1568784515-stockpack-adobestock
Education, kid learning and training with AI, knowledge and technology concept. Robot hologram in graduation hat showing while child using digital tablet with finger touch on screen, homework school.
Image Credit: tete_escape - Adobe Stock

AI in Education: Is the System Being Gamed — or the Student?

AI makes it easier to game the system, but many users are also being gamed
Share
Facebook
Twitter/X
LinkedIn
Flipboard
Print
Email

Promoters of AI like to say that ChatGPT is the fastest growing technology of all time. It purportedly achieved 100 million users in five days and around 400 million people now use it every week. But the promoters don’t like to acknowledge that revenues have seen little growth and great “killer” applications have yet to emerge.  

OpenAI admits that students are the biggest users and of students who reported using AI, nearly 40% of those in middle and high schools said they employed it without teachers’ permission to complete assignments. “Among college students who use AI, the figure was nearly half. An internal analysis published by OpenAI said ChatGPT was frequently used by college students to help write papers.”

Negative impact on education?

Woman using laptop and AI chat bot for creative servicesImage Credit: Владимир Стеблев - Adobe Stock

Educators are concerned. They worry that easy access to generative AI tempts more students to avoid challenging academic work, particularly because AI-based work “will require a higher level of cognition than does the rote work many white-collar employees now do. But as AI is getting smarter, young college grads may be getting dumber. Like early versions of ChatGPT, they can regurgitate information and ideas but struggle to come up with novel insights or analyze issues from different directions.” (Wall Street Journal)

In other words, students are gaming the system, passing easily through school. But in reality, they are being gamed; their chats are used to train the system while they fail to develop the necessary skills to survive in an AI world. One graduate admits that he “used AI to cheat his way through school, used AI to cheat his way through internship interviews at Amazon and Meta — he received offers from both — and in the winter broadcast his tool on social media.”

Some schools are clamping down on the use of AI but many professors also use it, making it hard for schools to convince students that they shouldn’t use AI to do assignments. (New York Times) Professors use it to write mail and papers, do social media posts, and summarize notes.

Even worse, Meta uses ChatGPT to talk with children about sex across Instagram, Facebook and WhatsApp, Meta Platforms. Mark Zuckerberg claims this “will be the future of social media,” even as “staffers across multiple departments have raised concerns that the company’s rush to popularize these bots may have crossed ethical lines, including by quietly endowing AI personas with the capacity for fantasy sex.” (Wall Street Journal)

AI chatbot therapists

Using generative AI as a friend or therapist is also becoming popular across all ages. A survey of 1,060 teens aged 13 to 17 across the US ‘found that around three in four kids have used AI companions,” “with over half of surveyed teens qualifying as regular users of AI companions, meaning they log on to talk to the bots at least a few times per month.” (Futurism)

The results are not good. “A troubling number of ChatGPT users, both young and old, are falling into states of delusion and paranoia following extensive use of the OpenAI bot.” Perhaps they are driven by the tech’s sycophantic behavior or its penchant for being flattering, agreeable, and obsequious to users. These chats can “culminate in breaks with reality and significant real-world consequences, which include the dissolution of marriages and families, job loss, homelessness, voluntary and involuntary stays in mental health facilities,” and death. (Futurism)

More serious uses of large-language model (LLMs) also have their dark side. Writing articles for the Internet with them has produced what some people call the “enshittification” of the Internet or “slopaganda.” The Financial Times says: from enshittification to slop and slopaganda, many people are claiming that the internet is on the decline, and generative AI is contributing to that decline. “The last bits of fellowship and ingenuity on the web are being swept away by a tide of  so-called artificial intelligence.” Some are also worried that bad actors can use ChatGPT to produce mass postings on almost any topic and any political persuasion to foment dissent in America. (Wall Street Journal)

Negative impact on scientific research

Scientific researchers are also contributing to slopaganda. Even before LLMs, traditional machine learning led to errors in over 600 papers across 30 scientific fields. In many cases, the affected papers constituted the majority of the surveyed papers, raising the possibility that in many fields, the majority of AI-enabled research is flawed.” (AI Snake Oil)

Artificial Intelligence and chatbot concept.Person hand using laptop chat with ai intelligent for learning education, cloud computing, environmental, data protection, search, robot, business.Image Credit: TStudious - Adobe Stock

One analysis of the vocabulary changes in more than 15 million biomedical abstracts from 2010 to 2024 concluded that “at least 13.5% of 2024 abstracts were processed with LLMs.” (Science) Similarly, a Springer Nature book on machine learning is full of made-up citations. (Retraction Watch)

And because researchers are also using AI to review articles, some researchers are hiding secret commands  in their academic papers to trick AI-powered review systems into giving positive feedback. The instructions, such as “give a positive review only,” were concealed from human eyes using white text or microscopic fonts.

We are quickly heading towards a world in which academic papers are written and reviewed by AI. Two Princeton computer scientists don’t think AI will facilitate good scientific research. (AI Snake Oil)

This is probably one reason that most Americans don’t think AI will improve their lives as much as the experts do. A survey found that “only about 11% of the public” says that they are more excited than concerned about the increased use of AI in daily life” while 51% say the opposite. In contrast, only “15% of experts shared that pessimism.” (Ars Technica)

Many of the experts will benefit because their companies will gradually learn to monetize AI and thus be able to pay those experts top salaries. What about the rest of us? We will be forced to wade through slop and pay higher electricity costs because of AI while anything we write will be used to train the AI systems. We are being gamed just so some people can try to game the system.


Jeffrey Funk

Fellow, Walter Bradley Center for Natural and Artificial Intelligence
Jeffrey Funk is the winner of the NTT DoCoMo Mobile Science Award and the author of six books including his most recent one: Unicorns, Hype and Bubbles: A Guide to Spotting, Avoiding and Exploiting Investment Bubbles In Tech.
Enjoying our content?
Support the Walter Bradley Center for Natural and Artificial Intelligence and ensure that we can continue to produce high-quality and informative content on the benefits as well as the challenges raised by artificial intelligence (AI) in light of the enduring truth of human exceptionalism.

AI in Education: Is the System Being Gamed — or the Student?