Liability With Regard to Artificial Intelligence

In 2016, Microsoft developed a chatbot—a piece of artificial intelligence (“AI”) software created to “develop conversational understanding by interacting with humans”—named Tay to engage with users on Twitter.[1] Tay was designed to appeal to millennial users and learn from other users’ posts.[2] However, within 24 hours, Twitter users manipulated Tay into posting right-wing and anti-Semitic hate speech.[3] In response, Microsoft shut Tay down, explained that users “exploited a vulnerability in Tay” which developed from “a critical oversight [by Microsoft] for this specific attack,” and took “full responsibility” for not anticipating this vulnerability.[4] But how exactly were Twitter users able to do this? And what would Microsoft taking responsibility mean in a legal sense?

A very brief overview of how AI software like Tay works may be helpful in unpacking both questions. Basic AI networks involve code that resembles a series of layered “neurons” that use linear algebra to process data.[5] The initial layer of neurons, which are the code’s vessels for holding values, receives the information and assigns each input a value from zero to one.[6] These neurons then pass their values along to further “hidden” layers of neurons, which refine them in some desired way.[7] After a series of hidden layers refines these inputs and attributes some new value, the hidden-layer neurons activate the output-layer neurons (which correspond to outputs designated by the code) to varying degrees.[8] The value of each output neuron represents the confidence the network has in that output, where zero indicates no confidence that the output is correct and one indicates complete confidence.[9] The output with the most confidence is then selected as the network’s “answer.”[10]
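
To make that description concrete, here is a minimal sketch of such a forward pass in Python. The layer sizes, weights, and input values are invented for illustration, and the zero-to-one squashing is done with a sigmoid function, one common choice:

```python
import numpy as np

def sigmoid(x):
    # Squashes each neuron's value into the zero-to-one range described above.
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical network: 4 input neurons, one hidden layer of 5 neurons, 3 output neurons.
rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(5, 4))   # connections from the input layer to the hidden layer
b_hidden = np.zeros(5)
W_output = rng.normal(size=(3, 5))   # connections from the hidden layer to the output layer
b_output = np.zeros(3)

def forward(inputs):
    # The hidden layer "refines" the input values in a way determined by its weights.
    hidden = sigmoid(W_hidden @ inputs + b_hidden)
    # The output neurons hold confidence values between zero and one.
    return sigmoid(W_output @ hidden + b_output)

confidences = forward(np.array([0.2, 0.9, 0.1, 0.5]))  # each input is a value from zero to one
answer = int(np.argmax(confidences))  # the output with the most confidence is the "answer"
print(confidences, answer)
```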

However, issues arise when the network either chooses the wrong answer or is uncertain about its answer (usually reflected in middling confidence spread across many different outputs).[11] To rectify these issues, the system undergoes a process called “deep learning,” in which it tries to minimize a “cost function” that measures the system’s output inaccuracy; the greater the cost function, the more inaccurate the system.[12] The system then undergoes “backpropagation” in order to change the values of the prior neurons so as to minimize its cost function.[13] Backpropagation changes how the system’s neurons interpret the input data in the hope of producing more accurate outputs.[14] Once this process is complete, the system repeats it with each new batch of input data it receives.
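
A similarly hedged sketch of what minimizing a cost function through backpropagation can look like, here for a toy single-layer network with an invented training example and learning rate (real systems repeat this over many large batches of data):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy single-layer network: 3 inputs feeding 2 output neurons.
rng = np.random.default_rng(1)
W = rng.normal(size=(2, 3))
inputs = np.array([0.4, 0.7, 0.1])
target = np.array([1.0, 0.0])         # the output the network "should" have produced

for step in range(100):
    outputs = sigmoid(W @ inputs)
    # Cost function: squared distance between actual and desired outputs;
    # the greater the cost, the more inaccurate the network.
    cost = np.sum((outputs - target) ** 2)
    # Backpropagation: the chain rule tells us how much each weight contributed
    # to the cost, so we know which direction to nudge it.
    grad_outputs = 2 * (outputs - target) * outputs * (1 - outputs)
    W -= 0.5 * np.outer(grad_outputs, inputs)   # adjust the weights to shrink the cost

print(cost)  # the cost shrinks as the weights are repeatedly adjusted
```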

This process allows the programmers to control how the AI “learns,” but not exactly what it “learns.” With Tay, a flood of right-wing posts manipulated its algorithm into “thinking” that this was the proper way to have conversations.[15] Its algorithm essentially told it that these were the correct inputs and, consequently, its further communications attempted to mirror those inputs in its own outputs. The network’s ability to “learn” and then act on this learning—without further input from the developers—creates an issue for assigning liability: if a program “learns” and then “chooses” to do something without the developer’s input, can the developer be held liable?

Very little litigation has occurred that would develop a uniform framework for assigning liability for AI software like Tay.[16] Some courts hearing breach-of-warranty claims have categorized non-AI software as a service rather than a product.[17] Meanwhile, others have found software to be a product, rather than a service.[18] Categorizing software as a product could subject developers to product liability claims, and thus hold them strictly liable for the “decisions” of an AI robot.[19]

Some state legislatures have adopted autonomous vehicle laws to address similar liability concerns.[20] California requires autonomous vehicle manufacturers to purchase a $5,000,000 surety bond to protect citizens if the manufacturer “fails to pay any final judgement for damages for personal injury, death or property damage arising from an accident [involving] an autonomous vehicle operated by the manufacturer….”[21] Florida, like California, also puts liability for autonomous vehicles onto the manufacturers, requiring that they hold a $1,000,000 insurance policy.[22] While these regulations take a stance as to where liability falls, some academic authors advocate that some AI (like IBM’s Watson) be treated differently from autonomous vehicles and instead be given personhood designations.[23] Although the conversation about liability for some forms of AI software is underway, we will have to wait for courts and legislatures to address the issue directly.

Pearse Walsh

Pearse Walsh is a 2LE staff member on the IPLJ and graduated from Fordham College at Rose Hill in 2019 with a B.S. in General Science and Philosophy. At FLS, he is also a TA for Professor Denno’s Crim class, one of the co-VPs on the FEDS E-Board, and the EVP of the SBA E-Board.