
San Diego Law Review

Library of Congress Authority File

http://id.loc.gov/authorities/names/n79122466.html

Document Type

Article

Abstract

This is the first article to call for a ban on the use of AI technology designed to influence human decision-making, “Persuasive AI,” for the purpose of recruiting or enrolling human participants in drug trials sponsored by commercial entities. It does so from a perspective of precaution, not fear: advances in AI technology that can assist human decision-making have tremendous potential for good. It makes the case for a ban based on both the substantial risk of harm to the decision-making process and the ineffectiveness of intermediate regulatory measures. This Article looks directly at Persuasive AI, a type of AI that claims to respond directly to the emotions of the humans with whom it interacts. There is already considerable evidence of its ability to analyze data in health and military settings not just faster than humans but beyond human capacity. But there is also growing international concern about characteristics associated with “Emotion AI,” such as its persistent reproduction of societal biases and its ability to develop beyond its programming, that militate against its use in specific, sensitive settings such as health care and the military. The challenge of mitigating these risks is that even the people who develop these programs do not know how they make decisions and cannot intervene in ways that might prevent harm. All of these factors have led the EU to propose a ban on Persuasive AI’s use in a broad range of spheres, including health care, where manipulating decision-making would violate an individual’s human rights. While there is at present no direct federal regulation of Persuasive AI in the United States, concerns about the risks of biased or manipulated decision-making have led to calls within the United States for a moratorium on its use in the settings where it is most likely to cause harm. In response, some states and cities have passed or are contemplating legislation to limit its use in law enforcement and employment decisions. As this Article discusses, one reason the United States has lagged behind other nations is a general disinclination to interfere with the process of adult decision-making: so long as there is no fraud, threat, or deliberate deception, adults are assumed to be competent to evaluate the claims of those trying to persuade them without government protection.

The federal laws regulating informed consent for biomedical research are a dramatic exception to this mostly hands-off approach. Often collectively referred to as the “Common Rule,” these laws were promulgated in direct response to the U.S. government’s unethical conduct in withholding treatment from Black sharecroppers in what is formally known as the Tuskegee Syphilis Experiment. Specifically, these laws create a system in which an ethics committee conducts a preemptive ethical review of all information, including advertisements, that the trial’s sponsor directs to potential participants. Such a review would be meaningless if, as this Article details, the party seeking consent employs a technology that can influence decision-making in ways beyond the ability of the ethics committees to detect.

While the Common Rule’s jurisdiction extends to all forms of human subject research, this Article focuses on clinical drug trials sponsored by commercial entities because they are, by definition, conducted for the purpose of marketing a product for the financial benefit of the trial’s sponsor.

Although enrollment in a clinical trial is sometimes the only path to lifesaving treatment, in many cases the drug being studied is a variation of a product already being sold by another company. Sponsors of these “me-too” trials often find it difficult to enroll the number of patients required to complete the study. New FDA regulations intended to encourage the enrollment of populations disproportionately absent from drug trials, primarily Black adults, make the recruiting process for sponsors even more difficult. This is demonstrated by original research in this Article showing that many companies are marketing AI to sponsors to assist in obtaining a diverse pool of subjects. While there is no suggestion that these companies are offering to exert undue influence, the inability to control what these algorithms are doing supports a call for restraint. It would be especially unfortunate if these efforts to increase diversity created an incentive to manipulate and coerce the very populations originally exploited by the Public Health Service in Alabama. So, without alleging ill intent on the part of anyone involved in developing, marketing, or using this software to recruit participants or obtain their consent to participate in clinical drug trials, this Article argues that Persuasive AI’s capacity to undermine the free will of potential participants, in ways that are both undetectable and impossible to remediate, justifies a ban on its use.

