Ryan Abbott wrote:
AI Legal Neutrality
The law plays a critical role in the use and development of AI. Laws establish binding rules and standards of behavior to ensure social well-being and protect individual rights, and they can help us realize the benefits of AI while minimizing its risks – which are significant. AI has been involved in flash crashes in the stock market, cybercrime, and social and political manipulation. Famous technologists like Elon Musk and academics like Stephen Hawking have even argued that AI may doom the human race. Most concerns, however, focus on nearer-term and more practical problems such as technological unemployment, discrimination, and safety.
Although the risks and benefits of AI are widely acknowledged, there is little consensus about how best to regulate AI, and jurisdictions around the world are grappling with what actions to take. Already, there is significant international division regarding the extent to which states can use AI in the surveillance of their residents, whether companies or consumers “own” the personal data vital to AI development, and when individuals have a right to an explanation for decisions made by AI (ranging from credit approval to criminal sentencing). It is tempting to hope that AI will fit seamlessly into existing rules, but laws designed to regulate the behavior of human actors often have unintended and negative consequences once machines start acting like people. Despite this, AI-centric laws have been slow to develop, due in part to a concern that an overly burdensome regulatory environment would deter innovation. Yet AI is already subject to regulations that may have been created decades ago to deal with issues like privacy, security, and unfair competition. What is needed is not necessarily more or less law but the right law.
In 1925, Judge Benjamin Cardozo admonished a graduating law school class that “the new generations bring with them their new problems which call for new rules, to be patterned, indeed, after the rules of the past, and yet adapted to the needs and justice of another day and hour”. This is the case for AI, even if it only differs in degree from other disruptive technologies like personal computers and the Internet. A legal regime optimized for AI is even more important if AI turns out to be different in kind.
There is not likely to be a single legal change, such as granting AI legal personality similar to a corporation, that will solve matters in every area of the law, which is why it is necessary to do the difficult work of thinking through the implications of AI in different settings. In this respect, it is promising that there have been efforts in recent years to articulate policy standards or best principles such as trustworthiness and sustainability specifically for AI regulation by governments, think tanks, and industry. For example, the Organisation for Economic Co-operation and Development (OECD) adopted Principles on Artificial Intelligence in May 2019, and one month later the G20 adopted human-centered AI principles guided by those outlined by the OECD.
The central thesis of this book is that there needs to be a new guiding tenet for AI regulation: a principle of AI legal neutrality, which asserts that the law should not discriminate between AI and human behavior. Currently, the legal system is not neutral. An AI that is significantly safer than a person may be the best choice for driving a vehicle, but existing laws may prohibit driverless vehicles. A person may be a better choice for manufacturing goods, but a business may automate because it saves on taxes. AI may be better at generating certain types of innovation, but businesses may not want to use AI if this restricts ownership of intellectual property rights. In all these instances, neutral legal treatment would ultimately benefit human well-being by helping the law better achieve its underlying policy goals.
AI can behave like a person, but it is not like a person. Differences between AI and people will occasionally require differential rules. The most important difference is that AI, which lacks humanlike consciousness and interests, does not morally deserve rights, so treating AI as if it does should be justified only if this would benefit people. An example would be if autonomous vehicles needed to directly hold insurance policies or other forms of security to cover potential injury to pedestrians. This is essentially the rationale for allowing corporations to enter into contracts and own property. Their legal rights exist only to improve the efficiency of human activities such as commerce and entrepreneurship, and, like AI, corporations do not morally deserve rights. They are members of our legal community but not our moral community.
Consequently, this book does not advocate for AI’s having rights or legal personhood. Nor is a principle of AI legal neutrality a moral principle of nondiscrimination in the way that term is traditionally used. Antidiscrimination laws have helped improve conditions for historically marginalized groups, primarily as a matter of fairness. However, antidiscrimination laws can also promote competition and efficiency.
Certainly, AI legal neutrality should not be the driving force behind every decision. It should not come at the expense of other principles such as transparency and accountability. A person may be more efficient at mining minerals in hazardous conditions, but automation could be preferable based on safety considerations. An AI may be more efficient at identifying and eliminating military targets, but there could be other reasons not to delegate life-and-death decisions to an AI.
Rather than a dispositive policymaking principle, AI legal neutrality is an appropriate default that may be departed from when there are good reasons for so doing. This book examines how such a principle would impact four areas of the law – tax, tort, intellectual property, and criminal – and argues that as AI increasingly occupies roles once reserved for people, AI will need to be treated more like people, and sometimes people will need to be treated more like AI. …
… Over the years, there have been many proposals for extending some kind of legal personality to AI. Most famously, a 2017 report by the European Parliament called on the European Commission to create a legislative instrument to deal with “civil liability caused by robots.” It further requested that the commission consider “a specific legal status for robots” or “possibly [apply] electronic personality” as one solution to tort liability. Even in such a speculative and tentative form, this proposal proved highly controversial. More than 150 AI “experts” subsequently sent an open letter to the European Commission warning that “from an ethical and legal perspective, creating a legal personality for a robot is inappropriate whatever the legal status model.”
Full-fledged legal personality for AI equivalent to that afforded to natural persons, with all the legal rights that they enjoy, would clearly be inappropriate. For example, allowing AI to vote would undermine democracy, given the ease with which anyone looking to determine the outcome of an election could create AI systems to vote for a designated candidate. However, the rights and obligations associated with legal personality vary, even for natural persons such as children, who are treated differently from adults.
Crucially, no artificial person enjoys all the same rights and obligations as a natural person. Companies – the best-known class of artificial persons – have long enjoyed only a limited set of rights and obligations that allows them to sue and be sued, enter contracts, incur debt, own property, and be convicted of crimes. However, they do not receive protection under constitutional provisions such as the Equal Protection Clause of the Fourteenth Amendment, nor can they bear arms, run for or hold public office, marry, or enjoy other fundamental rights possessed by natural persons. Thus, granting legal personality to AI in order to allow its punishment would not require AI to receive the rights afforded to natural persons, or even those afforded to companies. AI legal personality could consist solely of obligations.
Even so, any sort of legal personhood for AI would be a dramatic legal change that could prove problematic. Providing legal personality to AI could result in increased anthropomorphism. People who humanize AI expect it to adhere to social norms and have higher expectations of its capabilities. This is problematic where such expectations are inaccurate and AI is operating from a position of trust. Such anthropomorphism could result in “cognitive and psychological damages to manipulability and reduced quality of life” for users. These outcomes may be more likely if AI were held accountable by the state in ways normally reserved for human members of society. Strengthening questionable anthropomorphic tendencies regarding AI could also lead to more violent or destructive behavior directed at AI, such as vandalism. In addition, punishing AI could affect human well-being in less direct ways, such as by producing anxiety about one’s own status within society due to the perception that AI is given a legal status on par with human beings.
Source: Ryan Abbott, The Reasonable Robot: Artificial Intelligence and the Law (Cambridge University Press, 2020), pp. 2–4, 127–28.