We have enough problems attaining universal human rights, but activists want animals and “nature” to have human-type rights. Transhumanists and futurists also worry about guaranteeing rights for AI technologies once they attain “consciousness.”
The latest example comes in The Conversation from a professor of game design — who knew that was an academic discipline? — named Richard A. Bartle, at the University of Essex. He believes that “we may one day create virtual worlds with creatures as intelligent as ourselves.” From “How to Be a God”:
“I believe we will have virtual worlds containing characters as smart as we are — if not smarter — and in full possession of free will. What will our responsibilities towards these beings be? We will after all be the literal gods of the realities in which they dwell, controlling the physics of their worlds. We can do anything we like to them.”

Wesley J. Smith, “Professor Explains ‘How to Be a God’” at Evolution News and Science Today (January 18, 2022)
Smith does not see what Prof. Bartle’s ethical issue is:
Actually, that would not be a problem because they would be neither alive nor real. No matter how sophisticated these avatars or cyber creatures, it would all be mere programming, in a fictional universe of our own conjuring. That would not make us gods, but gamers.
But Bartle believes we would have a concrete moral obligation to these non-existent beings:
“If we create our characters to be free-thinking beings, then we must treat them as if they are such — regardless of how they might appear to an external observer.”

Wesley J. Smith, “Professor Explains ‘How to Be a God’” at Evolution News and Science Today (January 18, 2022)
The obvious problem is that we have only so much time, energy, and mental space, and if we were to start worrying about the ethical status of AI characters in games, it would be at the expense of actual humans. It would be a bit like worrying about the fate of characters in a daytime soap opera.
Read the rest here.
You may also wish to read: Eugenics, transhumanism, and artificial intelligence: If we were to succeed at creating an ethical decision-making AI, whose ethics would it abide by? The utilitarian goal of a “sustainable future” must be guided by a higher ethic in order to avoid grave mistakes of the past. (J. R. Miller)