Academic journal article Texas Law Review

The Foreseeability of Human–Artificial Intelligence Interactions *

Article excerpt

Consider the following hypotheticals:

(1) A hospital uses artificial intelligence software to analyze a patient's medical history and determine whether he or she needs surgery. One day, the artificial intelligence software incorrectly diagnoses a patient and recommends an unnecessary surgery. In preparation for the surgery, an anesthesiologist administers an incorrect dosage of the surgical anesthetic and kills the patient.

(2) An investment firm uses artificial intelligence software to identify promising stocks for investment. Without any further research, an investment banker negligently recommends stocks from the software's prepared list. Those stocks go bust, costing their new owners thousands of dollars.

(3) A vehicle with autonomous-driving software is cruising down a two-lane road. The lane to its right is filled with cars driving in the same direction. A human driver in oncoming traffic recognizes the autonomous car as being from a notable autonomous car brand. The human driver decides it would be fun to "play chicken" with the car to see how it will react. The human driver proceeds to swerve into the autonomous vehicle's lane, and the autonomous vehicle, thinking it best to avoid a head-on collision and not realizing the human driver will not actually hit it, swerves into the right lane, triggering a collision with an innocent third-party car.

(4) A delivery drone, piloted by autonomous-piloting software, is en route to deliver a package. On its way, it passes the home of a paranoid man who is very concerned with his privacy. He proceeds to take a baseball and, with an impressive throw, knocks the drone out of the sky. The drone crashes down and hits a child playing in a nearby park.

(5) A company selling artificial intelligence software sells its product to a racist. The racist proceeds to install the software onto a robot butler, and the robot butler proceeds to learn and develop under the teachings of its owner. One day, a black UPS driver delivers a package to the front door. The now-racist robot answers the door and upon seeing the black UPS driver, thinks, "The only reason a black person would be on my front porch would be if he were here to burgle my owner." The robot proceeds to attack the UPS driver under the mistaken assumption that he is a burglar.

In each of the above hypotheticals, the use of artificial intelligence led to the injury of an innocent person. When faced with an injury caused by another, each of these persons may seek a remedy through the tort system. The tort system is designed to provide monetary damages for injured parties when they are harmed by the negligent conduct of another.1 In this way, the tort system ensures that the costs of negligent conduct lie with those responsible for causing the injury.2 Each injured party in the hypotheticals above can sue the negligent actor who caused the harm, but who (or what) exactly caused the injured party's harm? In the above hypotheticals, there are human actors who cause the injured party's harm through obviously negligent conduct or even intentional conduct. These human actors present themselves as obvious targets, but what about the developers of the artificial intelligence software? When the injured parties sue in court, they are likely to sue whoever has the deepest pockets.3 This should strike fear into the hearts of many artificial intelligence companies, because in these tort suits, they are likely to be the parties in the best financial position to pay out damages.

If artificial intelligence companies are sued for the negligent development of their software, courts will be faced with a difficult question of foreseeability. When proving a case of negligence, plaintiffs are required to show the harm that occurred was a foreseeable consequence of the defendant's negligent conduct.4 This is also called satisfying the proximate cause requirement of a negligence case.5 In each hypothetical, was the interaction between the artificial intelligence software and human actor foreseeable? …
