OpenAI CEO Sam Altman is ambitious about his company’s future, promising the world that it is developing “artificial general intelligence” (AGI) that will supposedly compete with human intelligence, per a recent Futurism piece.
However, the ambition is misguided. More than that, it is simply delusional. AI is “not even close” to attaining the creativity and intelligence of human beings, and Altman shouldn’t be parading OpenAI products as if it were. Victor Tangermann writes,
In reality, however, LLMs have a very long way to go until they’re able to compete with the intellect of a human being — which is why several experts are calling foul on Altman’s recent blog post, calling it meaningless and misleading. After all, AGI is a vague term, borrowed from the realm of science fiction, referring to something that simply doesn’t exist yet. In fact, we haven’t even settled on a common definition.

(Victor Tangermann, “Experts Slam OpenAI’s ‘Meaningless’ New Promises About AGI,” futurism.com)
Altman acknowledges the potential abuses of such technology (technology that, again, doesn’t exist yet), even as OpenAI faces many complaints about harms happening right now. Tangermann quotes University of Washington linguistics professor Emily Bender, who wrote in a Twitter thread,
“There are harms NOW: to privacy, theft of creative output, harms to our information ecosystems, and harms from the scaled reproduction of biases. An org that cared about ‘benefitting humanity’ wouldn’t be developing/disseminating tech that does those things.”
In Bender’s view, OpenAI has already crossed serious ethical boundaries and needs to be regulated.