Artificial Intelligence, Artificial Wisdom

What manner of harms are we creating?
By Tom Gilson

Richard Stevens' May 11 Stream article, "AI Legal Theories," suggests we consider making Artificial Intelligence companies legally responsible for the harms they cause. We already do that with consumer products, so in principle it should be possible to do the same with AI. Enforcement would be by civil law: injured parties would presumably be given standing to sue the source of the harm without having to prove negligence.

That gets us somewhere, but not far enough. It settles the question of who is legally responsible. But responsible for what? Specifically, what will we call harm? Who will decide? Based on what standard of wisdom?

Stevens gives this example of harm, citing an earlier Stream article by Robert J. Marks: "The Snapchat ChatGPT-powered