
The Need for Accountability in AI-Generated Content

Just because we live in a world of AI does not mean we can escape responsibility

AI-generated content has become increasingly common on the web. As we enter this new era, we need to think through the moral and social ramifications of what we are doing and how we should negotiate the new ethical landscape.

But first, a brief recap of recent history.

The first major player to pioneer AI-generated content was the Associated Press. AP realized that many market-oriented articles were monotonous and read like templates anyway, so it committed fully and began auto-generating many of them. If you read an AP story about a company’s earnings report and it sounds eerily like every other story about other companies’ earnings reports, there’s a reason for that.
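To make the mechanism concrete, here is a minimal Python sketch of how templated story generation works. The template text, field names, and figures are invented for illustration and do not come from AP’s actual system:

```python
# Hypothetical earnings-report template; a real pipeline is more sophisticated.
TEMPLATE = (
    "{company} reported quarterly earnings of ${eps:.2f} per share, "
    "{direction} analyst expectations of ${expected_eps:.2f}. "
    "Revenue came in at ${revenue_m:,.0f} million."
)

def render_earnings_story(company: str, eps: float,
                          expected_eps: float, revenue_m: float) -> str:
    # The only "intelligence" here is a comparison; everything else is fill-in-the-blank.
    direction = "beating" if eps > expected_eps else "missing"
    return TEMPLATE.format(company=company, eps=eps, direction=direction,
                           expected_eps=expected_eps, revenue_m=revenue_m)

print(render_earnings_story("Acme Corp", 1.42, 1.35, 987))
```

Because every sentence is anchored to a raw data field, a templated story can be monotonous but rarely states something the data does not support.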

Templated content, while annoying, provides window-dressing for raw data and is unlikely to lead to direct harm. CNET, however, has recently gone beyond templated content, trying its hand at producing original articles using AI technology. This has not gone well: many of the generated articles contained serious errors. In a strangely ironic twist, the computers were caught making obvious basic mathematical errors, such as saying that a $10,000 deposit earning 3% interest will earn a whopping $10,300 each year, when the correct figure is $300. CNET had to correct 41 of its 77 generated articles.
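As a sanity check on the arithmetic the AI got wrong, here is a minimal Python sketch; the function name is illustrative, not from CNET’s system:

```python
def first_year_interest(principal: float, annual_rate: float) -> float:
    """Simple interest earned in the first year."""
    return principal * annual_rate

deposit = 10_000.00
rate = 0.03  # 3% annual interest

interest = first_year_interest(deposit, rate)
print(f"Interest earned: ${interest:,.2f}")                # $300.00, not $10,300
print(f"Balance after one year: ${deposit + interest:,.2f}")  # $10,300.00
```

Note that $10,300 is the year-end balance, not the interest earned; conflating the two appears to be exactly the mistake the AI made.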

This brings back a topic that we have covered several times on this site: who exactly is responsible for the results of an AI system? The self-driving car industry has had to grapple with this question for a while now (some companies more responsibly than others), and it is becoming an increasingly important question for society at large.

Interestingly, some of the best guidance on how to manage this environment comes not from the software industry but from a group of theologians. The “Evangelical Statement of Principles” on artificial intelligence makes some excellent points, among them that humans not only should not cede our moral responsibilities and accountability to computers, but that we cannot. Ultimately, somewhere in the chain, a human is responsible, whether we admit it or not.

Discerning Who Is Responsible

Recognizing this fact helps us move forward on what to do about AI-generated content. What is needed is a method for clearly marking who is taking responsibility for the content. This is not just byline attribution; it is a declaration of who should be held accountable for mistakes in the content. Let’s say that Fred uses AI tools to generate content. Fred should be held accountable for mistakes in that content just as if he had made them himself. This responsibility should not be punted to a committee or group; an individual should be explicitly named as responsible for the content, as the sketch below illustrates.
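Here is a minimal Python sketch of what explicit, machine-readable accountability metadata for an article might look like. The AccountabilityRecord type and its field names are hypothetical illustrations, not an existing standard:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AccountabilityRecord:
    """Hypothetical metadata naming the individual accountable for a piece of content."""
    article_id: str
    accountable_person: str   # a named individual, never a committee or a machine
    role: str                 # e.g., "editor" for generated content, "developer" for templated content
    generation_method: str    # "human", "templated", or "ai_generated"
    review_date: date

record = AccountabilityRecord(
    article_id="example-article-001",
    accountable_person="Jane Doe",       # hypothetical name
    role="editor",
    generation_method="ai_generated",
    review_date=date(2023, 1, 17),
)
print(f"{record.accountable_person} ({record.role}) is accountable for {record.article_id}")
```

The design point is that the accountable party is a single named human, recorded alongside the content itself rather than buried in an organizational chart.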

For strictly templated content, the software developer could be that named individual. After all, if the content is not populating properly, it probably is their fault. For completely generated content, it should probably be the editor. However, this means that the editor must play a more engaged role than before. With human authors, the primary responsibility for article content lies with the writer, not the editor; the editor can entrust another human with that responsibility. With AI-generated content, there is no human author to entrust, so the editor must bear the responsibility directly.

We need to hold content-creation companies accountable for bad content they provide, and demand that they hold their writers and editors accountable as well. Responsibility is not something that can be passed off to a machine, and we cannot scapegoat our computers for bad content. Living in a world of AI does not mean that we can escape responsibility.


Jonathan Bartlett

Senior Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Jonathan Bartlett is a senior software engineer at McElroy Manufacturing. Jonathan is part of McElroy's Digital Products group, where he writes software for their next generation of tablet-controlled machines and develops mobile applications that help customers better manage their equipment and jobs. He also offers his time as the Director of The Blyth Institute, focusing on the interplay between mathematics, philosophy, engineering, and science. Jonathan is the author of several textbooks and edited volumes which have been used by universities as diverse as Princeton and DeVry.
