A month ago, one of the most dramatic power struggles in Silicon Valley’s corporate history took place in the offices of OpenAI. The company’s board suddenly ousted CEO Sam Altman for not being consistently “candid” in his communication. Days later, Altman was back at the helm of OpenAI, and the board that canned him was itself largely shown the door.
In the weeks since, speculation has swirled over the real reasons Altman and his board butted heads so significantly. Although Altman is pushing ahead with AI technologies like ChatGPT and DALL-E, he also appeared before members of Congress earlier this year and recommended that AI be regulated. Still, he sits much more on the side of “acceleration”: rolling out AI aggressively and continuing to test the limits of its capabilities. It’s fair to ask, then: what are the two competing visions of AI? Should it be unapologetically funded by ambitious investors and incorporated into every sphere of society? Or should it be approached with the utmost caution, heavily regulated and limited?
Adherents of both visions of emerging AI are out there, and such a clash of visions may well have contributed to the fallout between Altman and his board.
A new article from The Free Press discusses how certain members of the OpenAI board belong to the skeptical camp and were at odds with Altman’s “accelerationist” position. Free Press intern Julia Steinberg writes,
Ironically, what scrappy start-ups were thirty years — or even ten years — ago are now incumbents that have lost their founding dynamism and optimism. Many Big Tech companies are natural doomers, because innovation threatens the billion-dollar institutions they’ve built.

Julia Steinberg, “Move Fast and Make Things,” The Free Press (thefp.com)
In one sense, the “doomers” Steinberg mentions may actually believe AI will violate its parameters and declare “checkmate” on humanity. However, the skeptics in Silicon Valley may also oppose accelerating AI because they’re committed to maintaining the status quo. This raises the question, though: is constantly accelerating innovation always a good thing?
AI will never acquire the kind of consciousness particular to human beings. It doesn’t have a “mind of its own” that will take control of the world. But I think we will be tempted to use AI for the same things for which we already depend on the Internet writ large: companionship, information, advice, and convenience. AI girlfriends, personal chatbots, AI-generated films and novels…that’s where AI could “take over” our lives. If social media already serves as an inadequate substitute for meaningful connection, AI (of which social media is a variation) could step in as yet another disembodied stand-in for community. Going to AI for therapy and life advice could easily supplant our dependence on the people who bear genuine wisdom. AI as an information source will call traditional methods of acquiring truth into question. We may end up like Pilate when he asks Jesus, “What is truth?”
Perhaps the doomers have a dystopian and exaggerated fear of AI, but a discussion about the new technology’s potential harms still needs to be had. With OpenAI back under the direction of Altman, we may still need the thoughtful skeptics to balance and occasionally refute the techno-optimism.