
Your Software Could Have More Rights Than You

Depending on politics and court judgments, legal loopholes could lead to AI personhood

Can software be legal “persons”?

Debates about rights are frequently framed around the concept of legal personhood. Personhood is granted not just to human beings but also to some non-human entities, such as corporations or governments. Legal entities, also known as legal persons, are granted certain privileges and responsibilities by the jurisdictions in which they are recognized, and many such rights are not available to non-person agents. Securing legal personhood is often seen as a potential pathway to certain rights and protections for animals,1 fetuses,2 trees and rivers,3 and artificially intelligent (AI) agents.4

It is commonly believed that a new law or judicial ruling is necessary to grant personhood to a new type of entity. But recent legal literature5–8 suggests that loopholes in current law may permit legal personhood to be granted to AI/software without the need to change the law or persuade a court.

For example, L. M. LoPucki6 points out, citing Shawn Bayern’s work on conferring legal personhood on AI7, 8, “Professor Shawn Bayern demonstrated that anyone can confer legal personhood on an autonomous computer algorithm merely by putting it in control of a limited liability company (LLC). The algorithm can exercise the rights of the entity, making them effectively rights of the algorithm. The rights of such an algorithmic entity (AE) would include the rights to privacy, to own property, to enter into contracts, to be represented by counsel, to be free from unreasonable search and seizure, to equal protection of the laws, to speak freely, and perhaps even to spend money on political campaigns. Once an algorithm had such rights, Bayern observed, it would also have the power to confer equivalent rights on other algorithms by forming additional entities and putting those algorithms in control of them.”6 (See Note 1.)

Would a loss of human rights result?

The process of obtaining legal rights for AI, described above, doesn’t specify any minimal intelligence or capability for the AI involved. It could amount to just a few “if” statements, a random decision generator, or an emulation of an amoeba. (See Note 2.) Granting most, if not all, human rights to the equivalent of a cockroach, for example, would be an ultimate assault on human dignity (though it might please some people9). It might be done as an art project, or as a human rights activist’s protest against the unequal treatment of humans. We have already witnessed an example of such an indignity, and the consequent outrage from many feminist scholars,10 when Sophia the robot was granted citizenship in Saudi Arabia, a country notorious for its unequal treatment of women.

One outcome of granting legal personhood and associated rights to AI is that some humans would have fewer rights than trivial (non-intelligent) software and robots, a great indignity and discriminatory humiliation. For example, certain jurisdictions limit the rights of their citizens, such as the right to free speech, freedom of religious practice, or expression of sexuality, while AIs with legal personhood in other jurisdictions would enjoy such rights.

If, on the other hand, AIs were to become more intelligent than humans, the indignity for humanity would come from being relegated to an inferior place in the world, outcompeted in the workplace and in all other domains of human interest.11, 12 AI-led corporations, for example, would be in a position to fire their human workers. This could lead to deteriorating economic and living conditions, permanent unemployment, and a potential reduction in rights, not to mention a worsened risk of existential catastrophes such as extermination.13

If AI gains legal personhood via the corporate loophole, laws granting equal rights to artificially intelligent agents may follow, as a matter of equal treatment. That would lead to a number of indignities for the human population. Because software can reproduce itself almost indefinitely, granting it civil rights would quickly render human suffrage inconsequential,14 leading to a loss of self-determination for human beings. Such a loss of power would likely lead to the redistribution of resources from humanity to machines, as well as the possibility of AIs serving as leaders, presidents, judges, jurors, and even executioners. We might see military AIs targeting human populations and deciding on their own targets and acceptable collateral damage, not necessarily bound by the Geneva Conventions or other rules of war. Torture, genocide, and nuclear war might become options to consider in reaching desired goals.

Conclusion

We have looked at a number of problems that AI personhood could cause, as well as the direct impact on human dignity that arises from such legal recognition. The question before us: Is there anything we can do to avoid such a dehumanizing future? While some solutions may be possible in theory, that does not mean they are possible in practice. Changing the law to explicitly exclude AIs from becoming legal entities may be desirable, but it is unlikely to happen, because it would require amending existing corporate law across multiple jurisdictions, and such major reforms rarely pass. It might help to at least standardize corporate law across jurisdictions, but that is likewise unlikely in the foreseeable future.


This article is an abridged version of a research paper: Human Indignity: From Legal AI Personhood to Selfish Memes, Roman V. Yampolskiy (submitted on 2 Oct 2018), arXiv.org > cs > arXiv:1810.02724

Note 1: See the original article for footnotes, which have been removed to improve the readability of quotes.

Note 2: The same legal loophole could be used to grant personhood to animals or others with inferior rights.

1 Varner, G.E., Personhood, ethics, and animal cognition: Situating animals in Hare’s two level utilitarianism. 2012: Oxford University Press.

2 Schroedel, J.R., P. Fiber, and B.D. Snyder, Women’s Rights and Fetal Personhood in Criminal Law. Duke J. Gender L. & Pol’y, 2000. 7: p. 89.

3 Gordon, G.J., Environmental Personhood. Colum. J. Envtl. L., 2018. 43: p. 49.

4 Chopra, S. and L. White, Artificial agents - personhood in law and philosophy. in Proceedings of the 16th European Conference on Artificial Intelligence. 2004. IOS Press.

5 Solum, L.B., Legal personhood for artificial intelligences. NCL Rev., 1991. 70: p. 1231.

6 LoPucki, L.M., Algorithmic Entities. Washington University Law Review, 2018. 95(4).

7 Bayern, S., The Implications of Modern Business–Entity Law for the Regulation of Autonomous Systems. European Journal of Risk Regulation, 2016. 7(2): p. 297-309.

8 Bayern, S., Of Bitcoins, Independently Wealthy Software, and the Zero-Member LLC. Northwestern University Law Review, 2013. 108: p. 1485.

9 Tomasik, B., The importance of insect suffering. Essays on Reducing Suffering, 2016.

10 Kanso, H., Saudi women riled by robot with no hijab and more rights than them. Reuters, November 1, 2017.

11 Bostrom, N., Superintelligence: Paths, dangers, strategies. 2014: Oxford University Press.

12 Yampolskiy, R.V., Artificial Superintelligence: a Futuristic Approach. 2015: Chapman and Hall/CRC.

13 Pistono, F. and R.V. Yampolskiy. Unethical Research: How to Create a Malevolent Artificial Intelligence. in 25th International Joint Conference on Artificial Intelligence (IJCAI-16). Ethics for Artificial Intelligence Workshop (AI-Ethics-2016). 2016.

14 Yampolskiy, R.V., Artificial intelligence safety engineering: Why machine ethics is a wrong approach, in Philosophy and Theory of Artificial Intelligence. 2013, Springer. p. 389-396.


Roman Yampolskiy

Dr. Roman V. Yampolskiy is a tenured associate professor in the Department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. He is the founding and current director of the Cyber Security Lab and the author of many books, including Artificial Superintelligence: A Futuristic Approach. Yampolskiy is a senior member of IEEE and AGI and a member of the Kentucky Academy of Science. His main areas of interest are AI safety and cybersecurity.
