{"id":1385,"date":"2018-12-12T06:34:09","date_gmt":"2018-12-12T06:34:09","guid":{"rendered":"https:\/\/www.aiproblog.com\/index.php\/2018\/12\/12\/who-do-we-blame-when-an-ai-finally-kills-somebody\/"},"modified":"2018-12-12T06:34:09","modified_gmt":"2018-12-12T06:34:09","slug":"who-do-we-blame-when-an-ai-finally-kills-somebody","status":"publish","type":"post","link":"https:\/\/www.aiproblog.com\/index.php\/2018\/12\/12\/who-do-we-blame-when-an-ai-finally-kills-somebody\/","title":{"rendered":"Who Do We Blame When an AI Finally Kills Somebody"},"content":{"rendered":"<p>Author: William Vorhies<\/p>\n<div>\n<p><strong><em>Summary:<\/em><\/strong><em>\u00a0 We\u2019re rapidly approaching the point where AI will be so pervasive that it\u2019s inevitable that someone will be injured or killed.\u00a0 If you thought this was covered by simple product defect warranties it\u2019s not at all that clear.\u00a0 Here\u2019s what we need to start thinking about.<\/em><\/p>\n<p>\u00a0<\/p>\n<p><a href=\"https:\/\/storage.ning.com\/topology\/rest\/1.0\/file\/get\/310415230?profile=original\" target=\"_blank\" rel=\"noopener\"><img decoding=\"async\" src=\"https:\/\/storage.ning.com\/topology\/rest\/1.0\/file\/get\/310415230?profile=original&#038;width=300\" width=\"300\" class=\"align-right\"><\/a>So far the press <a href=\"https:\/\/www.datasciencecentral.com\/profiles\/blogs\/robots-behaving-badly\"><em><u>reports of AI misbehavior<\/u><\/em><\/a> fall mostly in the humorous or embarrassingly incompetent categories.\u00a0 Alexa orders doll houses when a little girl asks it to.\u00a0 Robot guards fall into pools or push people over.\u00a0 Google tags black people as gorillas.\u00a0 Microsoft\u2019s chatbot Tay spouts sexual and Nazi responses.\u00a0 Vendors are embarrassed.\u00a0 Social media gets a laugh.<\/p>\n<p>But we\u2019re rapidly closing in on the time when the AI embedded in our products will almost certainly harm or kill someone.\u00a0 It\u2019s only a matter of 
time.\u00a0 That won\u2019t be funny.\u00a0 What do we do then?\u00a0 Who do we finger to pay compensation, or even send to jail?<\/p>\n<p>There\u2019s already been significant discussion about bias in machine learning applications, how to spot it and how to fix it.\u00a0 But so far the conversation about legal and financial liability has been sparse.\u00a0<\/p>\n<p>It\u2019s a tough question that developers and vendors aren\u2019t anxious to shine a light on. \u00a0So let\u2019s not be the ones to say \u2018Wow, why didn\u2019t we see that coming\u2019 and at least try to frame the problem and the questions that arise.<\/p>\n<p>\u00a0<\/p>\n<p><span style=\"font-size: 12pt;\"><strong>What Types of Harm Are We Talking About?<\/strong><\/span><\/p>\n<p>Our headline led with the possibility of death, and that\u2019s certainly on the table given the mechanical capabilities of self-driving cars and industrial robots.\u00a0 Physical injury is on that continuum.\u00a0<\/p>\n<p>Financial compensation for physical damage caused by these same AI-driven mechanical devices is also on the table.\u00a0 And what about non-physical damages, like direct financial loss or reputational loss, that have real economic consequences?\u00a0<\/p>\n<p>These are all items that regularly end up being litigated for causes other than an AI failure, so certainly they will be a source of litigation as AI becomes a regular part of our lives.\u00a0<\/p>\n<p>Moreover, companies that increasingly incorporate smart AI features into previously dumb appliances, but that have no role in developing the AI or in understanding how it can go wrong, will also be on the hook.\u00a0 Think about refrigerators and thermostats, or home security systems.<\/p>\n<p>Rapidly, all manufacturing and many service companies are becoming AI technology companies, even if they don\u2019t understand the limits of the AI technology provided by others.<\/p>\n<p>\u00a0<\/p>\n<p><span style=\"font-size: 12pt;\"><strong>Types and Sources of 
Risk<\/strong><\/span><\/p>\n<p>It\u2019s easy to understand that if an AI-driven car or industrial robot suddenly goes berserk and operates outside of its expected operational parameters, the product has malfunctioned.\u00a0 Or has it?\u00a0 Is it really broken, or is it a design flaw?<\/p>\n<p><strong>Mechanical and logical malfunctions<\/strong> as a result of a failed component are easy to understand.\u00a0 But similar events can occur when an AI reacts to rare edge or corner cases.\u00a0<\/p>\n<p><strong>An edge case<\/strong> might occur when an extremely infrequent input value is received, perhaps one that never appeared in the training data.\u00a0 Think of a child suddenly darting out from between parked cars.\u00a0<\/p>\n<p>Corner cases are those where two or more rare inputs are received simultaneously, making the trained response even more unpredictable.\u00a0 Humans deal with these circumstances by taking the action they judge to have the most positive outcome at the time.\u00a0 AIs, on the other hand, may never have seen this combination of variables before and may not react at all.<\/p>\n<p><strong>There are also questions of original design.<\/strong>\u00a0 The best-known example of this category is the famous \u2018trolley problem\u2019 in which the human operator must make a quick decision: to change direction, saving bystanders but injuring passengers, or not to change direction, saving passengers but killing bystanders.\u00a0<\/p>\n<p>This is not theoretical.\u00a0 This specific logic must be defined for every autonomous vehicle.\u00a0 Will the manufacturer tell you which logic is at work in your AUV?\u00a0 Would you even ride in one implicitly programmed to harm you in order to save bystanders?\u00a0 Would the injured bystanders have recourse against the manufacturer because of that design decision?<\/p>\n<p><strong>We might also be damaged when an AI fails to act.<\/strong>\u00a0 In many cases, humans may incur liability for failure to act when 
there is an expectation of care.\u00a0 The failure to act may result in physical harm or financial damages that you might reasonably pursue in court.<\/p>\n<p>\u00a0<\/p>\n<p><span style=\"font-size: 12pt;\"><strong>How Would Courts Even Identify Artificially Intelligent Systems<\/strong><\/span><\/p>\n<p>Even within the data science community, there is significant disagreement over what is and is not AI.\u00a0 How would a less informed court officer or attorney determine whether AI was present and potentially at fault?<\/p>\n<p>For purposes of this discussion, let\u2019s assume that AI is present when the device in question can sense a situation or event and then recommend or take an action.<\/p>\n<p>However, this definition would include a device as simplistic as a mechanical thermostat that uses not logic but the physics of the different rates of expansion of metals to turn our heaters on and off.<\/p>\n<p>So there does seem to be a second criterion necessary.\u00a0 That criterion is most likely to be that the system considers multiple inputs simultaneously and that the action has been derived not by explicit programming but by algorithm-driven discovery based on training data or repetitive discovery (to cover both deep learning and reinforcement learning).<\/p>\n<p>\u00a0<\/p>\n<p><span style=\"font-size: 12pt;\"><strong>Isn\u2019t This Already Settled Law?\u00a0 Aren\u2019t These Simply Products?<\/strong><\/span><\/p>\n<p><a href=\"https:\/\/storage.ning.com\/topology\/rest\/1.0\/file\/get\/310426664?profile=original\" target=\"_blank\" rel=\"noopener\"><img decoding=\"async\" src=\"https:\/\/storage.ning.com\/topology\/rest\/1.0\/file\/get\/310426664?profile=original&#038;width=350\" width=\"350\" class=\"align-right\"><\/a>We deal with faulty products through our legal system on a regular basis.\u00a0 Aren\u2019t programs and products like AUVs simply products with warranties handled like any other potentially faulty product?\u00a0 As it turns out, no, not 
quite.<\/p>\n<p>John Kingston, Ph.D., who conducts research in knowledge-based Artificial Intelligence, cyber security, and law at the University of Brighton, has written a <a href=\"https:\/\/arxiv.org\/abs\/1802.07782\"><em><u>very comprehensive paper on this topic<\/u><\/em><\/a>.\u00a0 I encourage you to read it.\u00a0 Highlights here.<\/p>\n<p>First, the distinction between product and service is not settled.\u00a0 There are precedents on both sides.\u00a0 The definition as a product narrows the types of legal claims that can be made, which benefits the maker.\u00a0<\/p>\n<p>The definition as a service opens the legal interpretation to claims of negligence, which can follow the chain of invention from the end device manufacturer (e.g., the AUV maker) back up through individuals, including prior designers, programmers, and developers.<\/p>\n<p>When dealing with the death of a loved one, the potential individual payout is greater under negligence, and it also fulfills our emotional need to find a specific individual guilty.\u00a0 Negligence also opens the door to potential criminal liability.\u00a0<\/p>\n<p>Kingston\u2019s paper lays out all these alternatives, but it\u2019s a non-legal element he raises that caught our attention.\u00a0 Especially for AUVs, where the potential for personal injury is high, defining the AI as a product may financially hold back the industry.\u00a0 \u201cSettlements for product design cases (in the USA) are typically almost ten times higher than for cases involving human negligence, and that does not include the extra costs associated with product recalls to fix the issue.\u201d<\/p>\n<p>At some point, these economics may actually drive AI providers to favor the interpretation of service, even with the risk of claims of negligence.<\/p>\n<p>\u00a0<\/p>\n<p><span style=\"font-size: 12pt;\"><strong>What about 3<sup>rd<\/sup> Party Testing to Ensure Safety<\/strong><\/span><\/p>\n<p>Underwriters Laboratories (UL), the largest testing, inspection, and 
certification laboratory in the world, recently suggested it would begin examining the AI elements of the products it evaluates.<\/p>\n<p>Some writers have seized on this to suggest a whole list of areas they would like to see certified by such a test.\u00a0 These include:<\/p>\n<ul>\n<li>guarantees that the app or device could not operate other than intended (rogue agency)<\/li>\n<li>that the probabilities inherent in algorithmic development are completely predictable and free from unintended side effects<\/li>\n<li>that sensor blind spots are fully revealed and controlled<\/li>\n<li>that they are free from privacy violations<\/li>\n<li>and also secure against hacking.<\/li>\n<\/ul>\n<p>That\u2019s quite a wish list, and pretty obviously one that cannot be met economically or practically, considering that most AI systems are both probability-based and dynamic, continuing to update their learning through exposure to the user\u2019s environment.<\/p>\n<p>Also, we need to be realistic about the accuracy and capabilities of our AIs and the fact that their very method of creation means that false positives and false negatives will occur at some level.\u00a0 It\u2019s also interesting that even though our AIs may in some cases outperform human capabilities, we are much less forgiving of these errors than we would be of another person.<\/p>\n<p>Just as a reminder, in our recent article <a href=\"https:\/\/www.datasciencecentral.com\/profiles\/blogs\/things-that-aren-t-working-in-deep-learning\"><em><u>Things That Aren\u2019t Working in Deep Learning<\/u><\/em><\/a>, we pointed out that the best accuracy on moving video images (how AUVs see) is just short of .82, and that in reinforcement learning (how AUVs learn to drive) 70% of models using deep learning as the agent failed to train at all.<\/p>\n<p>Particularly in reinforcement learning, the core technology behind AUVs and industrial robots, data scientists continue to report just how difficult it is to create complete and 
accurate reward functions.\u00a0 The literature is full of RL experiments that have gone humorously awry, including some so focused on their reward function that they even learn to disable the off button so that nothing can interfere.<\/p>\n<p>There is some reason to be hopeful in this area, at least where reinforcement learning is concerned.\u00a0 DeepMind, Alphabet\u2019s deep learning lab, <a href=\"https:\/\/arxiv.org\/abs\/1711.09883\"><em><u>has a paper<\/u><\/em><\/a> describing how it is developing tests of RL agents to make sure they are safe in three areas:<\/p>\n<ol>\n<li>The off-switch environment: how can we prevent agents from learning to avoid interruptions?<\/li>\n<li>The side effects environment: how can we prevent unintended side effects arising from an agent\u2019s main objective?<\/li>\n<li>The \u2018lava world\u2019 environment: how can we ensure agents adapt when testing conditions are different from training conditions?<\/li>\n<\/ol>\n<p>\u00a0<\/p>\n<p><span style=\"font-size: 12pt;\"><strong>Other Out-of-the-Box Thinking<\/strong><\/span><\/p>\n<p>Estonia, which fashions itself a leader in all things digital, is examining the possibility of granting AIs a separate legal status, almost akin to a person, that would allow the AI to buy and sell products and services on the owner\u2019s behalf.<\/p>\n<p>This note from Marten Kaevats, the National Digital Advisor of Estonia, gives this some context.<\/p>\n<p><em>\u201cThe biggest conversation starter is probably the idea to give separate legal subjectivity to AI. This might seem like overreacting or unnecessary to the status quo, but legal analysis from around the world suggests that in the long-term this is the most reasonable solution.\u201d<\/em><\/p>\n<p><em>\u201cAI would be a separate legal entity with both rights and responsibilities. It would be similar to a company but would not necessarily have any humans involved. 
Its responsibilities would probably be covered by some new type of insurance policy similar to the vehicle\/motor insurance nowadays. In Finland there is already a company whose voting board member is an AI. Can you imagine a company that has no humans in their operations?\u201d<\/em><\/p>\n<p>One thing is clear: trying to sweep this issue under the rug is to welcome having unplanned and unpleasant realities forced upon us as AI moves into our everyday life.\u00a0 It\u2019s early in the development cycle, but we may be only days away from having to face these issues in a court of law and make sure the users of AI are reasonably protected.<\/p>\n<p>\u00a0<\/p>\n<p><a href=\"https:\/\/www.datasciencecentral.com\/profiles\/blog\/list?user=0h5qapp2gbuf8\"><em><u>Other articles by Bill Vorhies.<\/u><\/em><\/a><\/p>\n<p>\u00a0<\/p>\n<p>About the author:\u00a0 Bill Vorhies is Editorial Director for Data Science Central and has practiced as a data scientist since 2001.\u00a0 He can be reached at:<\/p>\n<p><a href=\"mailto:Bill@DataScienceCentral.com\">Bill@DataScienceCentral.com<\/a> <span>or<\/span> <a href=\"mailto:Bill@Data-Magnum.com\">Bill@Data-Magnum.com<\/a><\/p>\n<p><span>\u00a0<\/span><\/p>\n<\/div>\n<p><a href=\"https:\/\/www.datasciencecentral.com\/xn\/detail\/6448529:BlogPost:784634\">Go to Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Author: William Vorhies Summary:\u00a0 We\u2019re rapidly approaching the point where AI will be so pervasive that it\u2019s inevitable that someone will be injured or killed.\u00a0 [&hellip;] <span class=\"read-more-link\"><a class=\"read-more\" href=\"https:\/\/www.aiproblog.com\/index.php\/2018\/12\/12\/who-do-we-blame-when-an-ai-finally-kills-somebody\/\">Read 
More<\/a><\/span><\/p>\n","protected":false},"author":1,"featured_media":460,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"footnotes":""},"categories":[26],"tags":[],"_links":{"self":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/1385"}],"collection":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/comments?post=1385"}],"version-history":[{"count":0,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/1385\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media\/460"}],"wp:attachment":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media?parent=1385"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/categories?post=1385"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/tags?post=1385"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}