
Framework for AI Legislation

Unfortunately, current calls for AI legislation seem to be largely motivated by fear of the unknown rather than by specific policy goals.

The sudden rise of artificial intelligence (AI) in the Internet landscape has caused many people concern. The people advancing AI seem to have few scruples about where and how it should be applied. This sudden technological change, coupled with the fact that those at the forefront seem to be largely amoral opportunists, has raised calls for legislation of AI technology. Unfortunately, current calls for AI legislation seem to be largely motivated by fear of the unknown rather than by specific policy goals.

In this article, I am going to lay the groundwork for what I think good AI legislation should look like. However, before I do that, I want to offer some cautionary advice about such legislation.

It wasn’t too long ago that, during another radical technological shift (cryptocurrencies), Sam Bankman-Fried used his position to steal billions from his customers. One might think that this gives reason to regulate the cryptocurrency industry. Indeed, it might. However, keep in mind that the person at the forefront of calling for regulation was Bankman-Fried himself. Opportunists tend to go where there are few rules, expand their prominence and power through unethical actions, and then work with legislatures to prevent anyone else from overtaking their prominent position.

As such, I am quite skeptical when I hear people like Elon Musk calling for an “AI pause” or Sam Altman asking Congress to regulate him. I think that, in reality, these are the people who have gotten out in front, and what they are really trying to do is prevent anyone else from overtaking their lead by making further advancement illegal.

So, let me propose a series of what I think are commonsense AI regulations, along with why I think each makes sense.

Protect Content Consumers from AI

One of the biggest problems with AI right now is that AI-generated content is being mixed in with human-generated content. This creates three problems: (1) users do not know that the content came from an AI, so they do not know to be on the lookout for invented facts or an amoral perspective; (2) there is no standard for who is responsible for the content, and many outlets have created fake profiles for their AI authors, leading users to expect accountability where there is none; and (3) AI is built on human-generated content and quickly devolves when it consumes its own output.

Proposal: The government should legislate technical and visual markers for AI-generated content, and the FTC should ensure that consumers always know whether or not there is a human taking responsibility for the content.

This could be done by creating special content markings that communicate to users that content is AI-generated. Since entire books are now being generated with AI, this would apply to all forms of AI-generated media. I can think of several “levels” of AI involvement that could be clearly marked out for users:

  • Content that is purely AI-generated and for which no human assumes responsibility.
  • Content that is AI-generated but the person/company stands behind the results, i.e., they have implemented sufficient controls that they are confident the output is correct. The reader should be able to identify the person or corporate entity responsible.
  • Content that is a combination of human and AI input. Here, the responsible editor should be identified.
  • Content whose origin the company cannot determine, such as submissions from anonymous users who are not asked whether their submission was AI-generated.

I can see putting a box around AI-generated content with a robot icon in one corner to identify that the content is AI, and using slightly different icons/colors for the different levels of AI involvement. A similar watermark could be developed for AI images and video.
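
To make these levels concrete, here is a minimal sketch of how they might be represented in machine-readable form. All of the names below (the level strings, `AIDisclosureLevel`, `ContentDisclosure`) are hypothetical illustrations of the proposal, not part of any existing standard:

```typescript
// Hypothetical disclosure levels for AI involvement in content.
// These names are illustrative only; no such standard currently exists.
type AIDisclosureLevel =
  | "ai-unattributed" // purely AI-generated; no human assumes responsibility
  | "ai-attributed"   // AI-generated, but a named person/company stands behind it
  | "ai-assisted"     // combined human and AI input; a responsible editor is named
  | "ai-unknown";     // origin undetermined (e.g., anonymous submissions)

// Each piece of content would carry its level plus the responsible party, if any.
interface ContentDisclosure {
  level: AIDisclosureLevel;
  responsibleParty?: string; // expected for "ai-attributed" and "ai-assisted"
}
```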

On a technical level, this could be implemented with specialized HTML tags and attributes. One could mark AI-generated content with a new tag and attributes designating that the contained content is AI-generated. The FTC could enforce their usage with heavy fines when proper tagging is not followed. This would enable search engines such as Google to let users exclude AI content when searching. It would enable users to see which parts of the content they read are AI-generated and apply the appropriate level of skepticism. And future AI language models could use these tags to avoid consuming AI-generated content.
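
As a minimal sketch of how this might work in practice, suppose AI-generated content were wrapped in an element carrying a hypothetical `data-ai-generated` attribute (not an existing standard). A search engine's indexer, a training-data pipeline, or a browser extension could then strip such content before using the page, assuming the tagging is honored:

```typescript
// Hypothetical markup; the data-ai-generated attribute is illustrative only.
const html = `
  <article>
    <p>Human-written introduction.</p>
    <div data-ai-generated="ai-unattributed">
      <p>AI-generated summary that no human stands behind.</p>
    </div>
  </article>`;

// Remove all marked elements, e.g., before indexing a page for search
// or before adding its text to a training corpus.
function stripAIGenerated(doc: Document): void {
  doc.querySelectorAll("[data-ai-generated]").forEach((el) => el.remove());
}

// Usage (browser context, where DOMParser is available):
const parsed = new DOMParser().parseFromString(html, "text/html");
stripAIGenerated(parsed);
console.log(parsed.body.innerHTML); // only the human-written paragraph remains
```

A user-facing “exclude AI content” search option and a model-training pipeline that skips tagged content would amount to the same filter applied at different stages.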

Note that this can also be used to limit the role of chatbots attempting to influence political outcomes. If all AI-generated content must be marked as AI-generated, then it would not be useful for someone to use AIs to generate a “sock puppet army” to try to persuade the public of something.

Clarify Rules Regarding Responsible Parties

One of the problems created by AI is the inability to identify who is actually responsible for an outcome. This is not so much a problem with chatbots; users are generally aware that a chatbot is not a responsible agent and is not generating reliable content. However, as AI moves into other products, the dividing line may not be as clear. Therefore, legislation should ensure that companies are clear about who exactly is taking responsibility for what.

It’s fine for a software product to produce a result that the software company views as advisory only, but it has to be clearly marked as such. Additionally, if one company includes software built by another company, all companies need to be clear about which outputs are derived from identifiable algorithms and which outputs are the result of AI. If the company supplying the component is not willing to stand behind the AI results it produces, then that needs to be made clear.
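
One way to picture this is a disclosure manifest that a component vendor ships alongside its software, declaring which outputs come from deterministic algorithms and which from AI, and who (if anyone) stands behind each. The format below is a hypothetical sketch, not an existing requirement; the field names and example outputs are invented for illustration:

```typescript
// Hypothetical "responsibility manifest" for a software component.
// All names and values here are illustrative only.
interface OutputDisclosure {
  outputName: string; // e.g., "taxCalculation", "auditRiskSummary"
  derivation: "deterministic-algorithm" | "ai-model";
  advisoryOnly: boolean;           // true if the vendor treats the result as advisory
  responsibleParty: string | null; // null if no one stands behind the output
}

const manifest: OutputDisclosure[] = [
  {
    outputName: "taxCalculation",
    derivation: "deterministic-algorithm",
    advisoryOnly: false,
    responsibleParty: "Vendor A, Inc.", // vendor stands behind this result
  },
  {
    outputName: "auditRiskSummary",
    derivation: "ai-model",
    advisoryOnly: true,
    responsibleParty: null, // clearly marked: no one assumes responsibility
  },
];
```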

In short, right now, people are using AI to produce an “impressive” demo and then disclaiming responsibility when it produces bad output (“it’s just an AI”). Legislation should ensure that all parties are clear about the extent to which they are taking responsibility for the outcomes of their software. As we have mentioned before, computers cannot be held responsible for harms. Ultimately, responsibility must belong to humans.

Clarify Copyright Rules on Content Used in Models

The primary way that modern AI systems such as ChatGPT work is by ingesting huge amounts of content from the Internet and processing it into an internal representation that allows the software to respond in a similar fashion when faced with similar inputs. The problem here is that, in an important way, these internal representations are derivative works of the content they ingest.

However, from another perspective, for public content, there is no intrinsic limit on who can view such content, and, in fact, it is perfectly legal to learn from such content and be influenced by it. One could argue that, since the internal representations do not store the contents themselves but merely use the content to generate an algorithm, this is a fair use of Internet content.

Personally, I think the true answer lies somewhere in the middle. However, at the present moment, there is no clarity on where this line is drawn.

The question also extends to chat content. Can Twitter use your conversations to improve its AI? Does it owe you money for your contributions? What if you didn’t want your personal chats being used to further the AI agenda? To what extent do users have the right to withdraw consent or demand compensation?

Until legislators answer these questions clearly, they are left to the whims of the court system, which likely means that the lines will be drawn in favor of whoever can afford the more expensive lawyers.

Summary

Note that nothing here limits the technological development of artificial intelligence. Nor is this a set of arbitrary rules about what you can and can’t do with AI. AI is just a tool. The goal of these proposals is to give clarity to everyone involved about each party’s expectations and responsibilities.


Jonathan Bartlett

Senior Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Jonathan Bartlett is a senior software R&D engineer at Specialized Bicycle Components, where he focuses on solving problems that span multiple software teams. Previously he was a senior developer at ITX, where he developed applications for companies across the US. He also offers his time as the Director of The Blyth Institute, focusing on the interplay between mathematics, philosophy, engineering, and science. Jonathan is the author of several textbooks and edited volumes which have been used by universities as diverse as Princeton and DeVry.
