And Sci Fi Thought AI Was Going To… Take Over?
Here are some non-hype news briefs from the world of the supposed AI takeover that could help us separate fact from fiction. From recent news:
● “AI Adoption Rate Trending Down for Large Companies.” Down?
The US Census Bureau conducts a biweekly survey of 1.2 million firms, and one question is whether a business has used AI tools such as machine learning, natural language processing, virtual agents or voice recognition to help produce goods or services in the past two weeks. Recent data by firm size shows that AI adoption has been declining among companies with more than 250 employees, see chart below.
Torsten Sløk, Apollo Chief Economist, September 7, 2025
Apollo is a global asset management firm.
Currently, unemployment is up, but it is still more likely due to traditional economic issues than to “A hot bot got my slot!” However, some may see an advantage in portraying the situation as caused by the relentless advance of AI rather than by a slow economy…
● “Education report calling for ethical AI use contains over 15 fake sources – Experts find fake sources in Canadian government report that took 18 months to complete.” Benj Edwards reports for Ars Technica on the report prepared for the government of the Canadian province of Newfoundland and Labrador on modernizing education using ethical AI:
Aaron Tucker, a Memorial assistant professor whose research focuses on AI history in Canada, told CBC he could not find numerous sources cited in the report despite searching the MUN Library, other academic databases, and Google. “The fabrication of sources at least begs the question: did this come from generative AI?” Tucker told CBC. “Whether that’s AI, I don’t know, but fabricating sources is a telltale sign of artificial intelligence.” September 12, 2025
One commentator wonders aloud whether the ethical AI academics “wasted 17+ months on pointless meetings, then threw something together at the last minute, cheating with ChatGPT,” as so many students have done.
The tendency to just make stuff up appears to be a hard ceiling for these alleged superintelligences. So if you are inclined to check your work to ensure accuracy, don’t quit your job over ChatGPT just yet. People may soon be getting jobs spotting this stuff.
● At The Scientist, Sneha Khedkar reports, “ChatGPT Fails to Flag Retracted and Problematic Articles”:
The large language model scored a majority of discredited or retracted articles highly, highlighting that information obtained from AI tools must be verified.
“This is a fantastic paper [on a] really, really important topic,” said Jodi Schneider, an information scientist at the University of Illinois Urbana-Champaign, who was not involved in the study. The bottom line for researchers is “don’t trust any fact that is coming from AI [tools],” she noted.
For their study, Thelwall and his team identified 217 articles that either had controversial claims or had been retracted. They then submitted the article titles and abstracts to ChatGPT, requesting the tool to evaluate the quality—benchmarked against standard guidelines—of each paper 30 times, yielding 6,510 responses. They did not ask the LLM whether the article had been retracted upfront, “because that’s not what a user would do,” said Thelwall.
None of the 6,510 responses that ChatGPT generated mentioned that the articles were retracted or had been flagged for serious concerns. The tool scored a majority of the papers highly, indicating that the articles were world-leading or internationally excellent.
September 11, 2025

The paper is open access.
And this is science, not a pop star’s fan club. Neither Skynet nor the Matrix nor HAL could be reached for comment.
● Lastly, at City Journal, Chris Rufo identifies another problem: Value systems alien to users may be embedded in large language models like ChatGPT. Those values may be shaping the system’s responses without the user even knowing:
The answer begins with the fact that artificial intelligence companies deliberately select the values embedded in the code base, which chatbots use to formulate responses to users’ questions. For example, the AI company Anthropic published an official “constitution” that outlines the values it embeds in its software, including those embodied in the United Nations Declaration of Human Rights and in several concepts borrowed from critical race theory. Since the software’s responses are filtered through those values, Anthropic has what many consider the most left-wing-biased AI.
The choice of values is inevitable. All artificial intelligence companies have, explicitly or implicitly, baked an ideological formula into their “constitutions,” “system cards,” “alignment principles,” or “trust and safety rubrics.” The question is not whether an AI system will be built upon a set of values; the question is which set of values the programmers will select.
“Trump Strikes a Blow Against ‘Woke AI,’” July 25, 2025
It sounds like having a colleague who is constantly sabotaging a project in subtle ways that you can’t specifically identify. Rufo says that US President Trump has signed an executive order against using this stuff in the federal government:
Trump’s executive order makes clear that while all AI companies are free to select any operating ideology, the federal government will purchase only software that is “truth-seeking” and committed to “ideological neutrality.” In other words, Washington will not do business with companies whose models will result in “the suppression or distortion of factual information about race or sex; manipulation of racial or sexual representation in model outputs; incorporation of concepts like critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism; and discrimination on the basis of race or sex.”
Odds are, the problem is deeply embedded. To avoid frustration, people who aren’t cool with parroting Valley politics should go on doing their own writing and thinking. They’d have to anyway, in order to say anything they would want others to hear.
Anyway, it sounds as though, if a computer takes your job — and the job is much more complex than adding up figures — it will certainly make one heck of a mess of it.
