From “AI for Social Good” to “AI for Democracy”

by Colin Garvey, PhD Candidate, Department of Science & Technology Studies, Rensselaer Polytechnic Institute, Troy, NY

1. Introduction

Regarding the responsible research, development, and use of artificial intelligence (AI) technologies, the most pressing issue I see is democracy. As a kind of litmus test for AI’s impact on society at large, public reaction to the 2016 US presidential election suggests that current AI technologies are perceived as having failed regular people while extending the power of elites. On one hand, the intelligent algorithms behind Facebook’s newsfeeds and Google’s search results kept voters in private “echo chambers,” exposing them only to news and information that fit their worldviews. On the other, less-than-scrupulous algorithms were deployed to rapidly disseminate “fake news” through armies of Twitter-bots, and carefully cultivate the outrage, fear, and suspicion of certain ethnic and religious groups by micro-targeting specific voter demographics with divisive messages. Regardless of which candidate won, at the macrosocial level, the result was a win for misinformation and hate; a loss for fact-based decision making and the value of truth. Many political commentators are suggesting that America’s democracy is now in crisis. One need not claim that AI caused this problem to see that it is nevertheless implicated in it.

These events have shown that maintaining American democracy in a digital age presents new and unforeseen challenges. And the concerns are not limited to electoral politics. Several reports have demonstrated how discriminatory racial bias can be reproduced by and entrenched within AI technologies, despite a lack of ill-will on the part of developers.[i] Moreover, data scientists are beginning to document the numerous ways in which AI technologies can exacerbate socioeconomic inequality, further threatening modern democracy.[ii]

It is my position in this essay that if our goal as researchers, developers, entrepreneurs, and scientists is to ensure that AI technologies are used responsibly for the “maximum benefit of society,” then our aim should be not merely to maintain democracy, but to strengthen it. Rather than pondering exactly which “human values” AI should be designed to align with, AI scientists in the English-speaking world should take a look in the mirror, step down a level of generality, and ask: what are our values? In my view, the answer is clear. Democracy, hallmark of modern liberal societies and noble heritage of Western culture, should be the defining value for AI in the 21st century.

This essay argues that we should not limit our concerns to the responsible use of AI technologies, but emphasize responsibility in research and development (R&D) as well—and that a democratic decision making process is the best means of doing this. In the next section I will define what I mean by “democracy” in the context of AI, and offer a justification for choosing this ideal over others. The third section suggests steps governments, industries, and other organizations can take to both address the role of AI in modern democratic nations and to democratize AI R&D. In the fourth section I present a defense of democracy in AI against possible counter-claims. Finally, in the conclusion I look to the future, reemphasizing the importance of democracy in relating to the broader public.

2. What does “AI for Social Good” even mean?

Recently both citizens and experts have been paying much more attention to the social impacts of AI. What began in 2008 as part of the AAAI’s Asilomar Study on Long-Term AI Futures has since blossomed into a variety of organizations dedicated to the beneficial use of AI, including the Partnership on AI to Benefit Society, as well as multiple conferences, including a string of White House-sponsored events focused on social aspects of AI. Coming full circle not quite a decade later in January 2017, the Future of Life Institute released a set of 23 Asilomar AI Principles intended to guide responsible AI R&D. Common to all of these efforts is the attempt to ensure that AI has positive social impacts, and much of this activity is coming together under the umbrella of “AI for Social Good.”

But what does “AI for Social Good” even mean? In addition to reading relevant literature, I have attended some of these events, including the conference on “AI for Social Good” in Washington, DC, as well as the recent workshop on “AI and OR for Social Good” at AAAI 2017 in San Francisco, CA. My impression is that “AI for Social Good” remains a relatively new concept that means different things to different people. For some it means doing good AI work in explicitly social domains. For others, it means addressing existing social problems with AI technologies. In this essay, however, I want to think bigger. What does “AI for Social Good” need to mean in order to ensure that AI is developed and used for the maximum benefit of society?

The answer, I argue, is democracy. My reasoning becomes clear if we take a step back and ask, What is “social good”? In attempting a definition, we enter the political: here, definitions are not universal, but contestable. Disagreement is a fundamental fact of political life, and the definition of “social good” is no exception to this rule. Your social good—say, equal access to medical care for men and women—could be my social evil—if I opposed abortion, for instance. Even on an issue which you and I both agree is of the utmost importance—relief aid to victims of natural disasters, for example—disagreements abound over how the measure should be implemented: where the funds should come from, how they should be allocated, and so on. It seems then that there is no prima facie means by which “social good” can be reasonably defined, since what counts as “good” depends on one’s perspective. The definition of “social good” is therefore both context- and perspective-dependent. In other words, the answer to the question depends on who you ask. Viewed in this light, it almost seems surprising that any major political decisions get made at all—but they do, and that is precisely my point in this essay. The Western cultural tradition side-stepped the philosophical problem of defining “social good” with a political innovation that dates back to 5th century BCE Athens, Greece: the process of democracy.

Lacking the possibility of achieving a universal definition that works for all people in all places, democracy provides a decision making process by which a people (demos) can, through deliberation and mutual adjustment of their partisan positions, “muddle through” to a compromise and determine for themselves what “social good” means.[iii] The possibility that this process will produce an outcome that no single partisan would themselves have suggested, but that all participants eventually agree to, is what political theorist Charles Lindblom famously referred to as the “potential intelligence of democracy.”[iv] Of course, everyone knows that “two heads are better than one,” but in today’s political climate it is worth reiterating that democracy is the only means Western culture has discovered for achieving this intelligence boost at scale.

So then what is “AI for Social Good”? There can be no single answer, but I propose “AI for Democracy” as an orienting concept. To ensure that AI technologies are used for the maximum benefit of society, it is necessary to establish a multitude of democratic decision processes at many scales in which people can meaningfully participate to decide for themselves what “AI for Social Good” means and how it should be implemented.

3. Steps toward the Democratization of AI

How might social institutions and practices be reconfigured so as to take differing views and priorities into more substantial account from the outset when innovating technologically? This section offers two pragmatic steps that governments, industries, and other organizations can take to initiate or improve democratic processes for technological decision making. Each step includes several sub-points to provide variables that decision makers can use to calibrate their own processes.

Step 1: Start Deliberations about AI

Deliberation is at the heart of democracy. When diverse partisans interact in a structured forum, ideally three things happen. First, the multiplicity of viewpoints brought to bear on an issue allows more potential risks and unintended consequences to be anticipated than would be the case if a narrower group were to consider the same issue. Second, partisans become aware of the diversity of perspectives on a single issue, and this allows for a process of “mutual adjustment” to take place between them, generating a “competition of ideas bearing on problem definition, agenda setting, option specification, and final judgement.”[v] Third, while there is no guarantee, it is possible for participants in such deliberations to derive a meaningful sense of satisfaction from having contributed to the decision making process. In order to facilitate effective deliberations and engage this tripartite process, I offer the following four recommendations.

Deliberation should be initiated as early as possible. “Innovate first, ask questions later” has become something of a mantra in certain tech circles. But this approach flatly undermines efforts toward responsible innovation and maximizing social benefit. An alternative mantra might be: “No innovation without deliberation!” By bringing a multiplicity of viewpoints to bear on a given issue as soon as possible, potential hazards, risks, and unintended consequences may be identified early, allowing revisions to be made before significant resources are expended on a project that ultimately ends in failure.

A maximum feasible diversity of concerns should be debated. In an ideally democratic decision making process, every single person whose life may be affected by AI technologies would have a say in how those technologies are developed. Obviously, this is not feasible for most projects; however, this does not invalidate the ideal. A democratic compromise can be struck by aiming to include a “maximum feasible diversity” of viewpoints, positions, and concerns in deliberations about AI technologies. One place to start would be the demographic composition of the tech sector, which is constituted primarily by white and Asian males. Naturally, the inclusion of more women and underrepresented minorities in the deliberations would do much to rectify this demographic disparity. That said, “diversity” does not stop at race and gender: social and economic class backgrounds, academic disciplines, political orientations, occupational histories, and a host of other variables should also be included for consideration. AI-focused institutions and universities could make grants and fellowships available to facilitate inclusion of these groups.

Participants should be well-informed. Technical expertise in the field of AI should not be a requirement for inclusion in deliberations. Nevertheless, it is the case that better-informed participants are better able to contribute to the deliberation process, and this is likely to lead to better outcomes overall. Two points are important here. First, “well-informed” should be understood broadly to include cognate fields: historians of technology, anthropologists of science, risk assessors, and policy analysts may be very well-informed about AI, but in ways that technical experts are wont to overlook. Second, almost everyone is already overworked and underpaid. Rather than assuming that individuals will bear the responsibility of educating themselves about AI, those organizations with the resources to provide education and information should pursue means to incentivize well-informed participation from the broader public.

Deliberations should be intense and long-lasting. The future of humanity is a serious matter. If AI technologies really do contain the potential to radically transform life on Earth as we know it, then clearly it is worth spending a non-trivial amount of time in careful deliberation about how to research, develop, and use them. In practice, this may mean slowing down the R&D process to allow time for these critical deliberations. Unfortunately, however, because of the enormous economic incentives for accelerating R&D and streamlining the innovation pipeline, industries may find it difficult or impossible to adjust their pace. If this is the case, it provides an excellent opportunity for governments to step in and modulate the pace of innovation.

In sum, deliberation is the first step toward the responsible development of AI technologies for the maximum benefit of society. But where to begin? Conferences are an excellent venue to initiate such deliberations. In this vein, the series of “AI for Social Good”-themed events held throughout 2016 set a fine precedent. Going further, scholarly societies such as the AAAI and ACM could ask that a portion of future conferences be devoted to deliberations about AI, and their executive councils might regularly revisit the issue of how to diversify the stakeholders participating in those deliberations.

Step 2: Revamp the Decision Making Process

All the deliberation in the world is meaningless if decision making processes in AI R&D do not begin to change. In industry, decision making is rarely democratic, following instead a “top-down” pattern that begins in the executive suite and disseminates through middle management before being executed by workers on the floor. Governments, too, often replicate this pattern, falling short of the democratic ideals they were built upon. Below I offer four questions that decision makers, whether in industry or government, can ask themselves and others to make their processes more democratic.

Is there a fair sharing of influence? What exactly constitutes “fair” may vary with context and be difficult to define. An alternative approach is to identify practices that are “unfair” and work to mitigate them. Obviously, after including a wide range of partisan participants with a diversity of concerns in deliberations as part of Step 1, it would be unfair if those with the greatest resources, authority, or expertise ultimately had the final say on important decisions. Furthermore, it could be argued that it is unfair that tech-industry leaders set the technological agenda by innovating rapidly, forcing governments, NGOs, and the public to constantly play catch-up. This disparity of influence could be mitigated by implementing something like Arizona State University’s Real-time Technology Assessment program, in which every significant design initiative is deliberated with outside interests.[vi]

Is the process transparent? In comparison to the amount of attention being paid to the transparency of algorithms and decision making processes internal to AI technologies, there is relatively little focus on the organizations behind them. If it is reasonable to expect that AI programs should be able to explain how they made a given decision, then it is not unreasonable to expect the same of the organizations that produce them. Of course, the strong economic incentives for secrecy in AI pose a barrier to perfect transparency, but the Securities and Exchange Commission and equivalent regulatory institutions have developed “secrecy guarantees” and other processes for the safe handling of proprietary commercial information that could serve as models.

Is the burden of proof distributed evenly? Historically, the responsibility of anticipating the risks and unintended consequences of new technologies has fallen almost exclusively to critics. And saying “I told you so” after a major disaster is a comfort to no one. A better division of labor could be established by sharing this burden more equally between critics and innovators alike, perhaps by implementing a “precautionary principle” in AI. Adopted widely throughout Europe in the wake of the GMO controversy, the precautionary principle states that when an innovation poses risks, it is the innovators’ responsibility to demonstrate its safety before embarking on their proposed course of action. The rigor of this demonstration should be commensurate with the degree of risk: governments may insist that highly disruptive technologies achieve scientific consensus within relevant fields before permitting implementation.

Is the authority to decide allocated properly? Simple heuristics such as “make whatever people will buy” and “go for the lowest-hanging fruit” are probably inappropriate for making decisions about technologies that could induce epochal transformations affecting the lives of billions of people living now and in the future. To better address the gravity of the situation, AI could adopt best practices from established areas that have already been socially sanctioned with authority over life and death, such as law and medicine. For example, AI-focused institutes could offer rigorous third-party accreditation processes modeled on those found in the legal profession, or governments and universities could collaborate to establish institutional review boards (IRBs) for AI research. Importantly, industry should recognize such possibilities not as “government interference” but as opportunities to establish trust with a fearful, robot-averse public.

To summarize, democratizing the decision making around AI may require significant, unprecedented changes to the R&D process itself. However, I believe that the pioneers at the forefront of the field are capable of social innovations in addition to technological breakthroughs. But where to start? Modern engineering education provides a possible model. Many engineering programs now require courses in the ethical, legal, and social impacts of technology. This trend could be extended to encompass the political dimensions of decision making in R&D. Universities and polytechnics could draw upon expertise within their political science and other departments to introduce “democracy training” as a part of graduate-level engineering curricula. This would do much to guide future generations of engineers’ thinking about the kind of values they build into AI.

4. A Defense of Democracy

Having outlined the case for democracy in AI, I now address several counter-arguments. Aren’t there cases in which democratic decision making just does not work? And isn’t AI one of those cases? Let us consider three possible objections:

1) AI technologies are too complex for most people to understand, and therefore it makes no sense to include them in the decision making process—what could they possibly add?

2) Determining what should count as “social good” is not as difficult as I made it appear in Section 2, and therefore a democratic decision making process is unnecessary.

3) A sufficiently intelligent artificial agent, perhaps endowed with “artificial general intelligence”[vii] (AGI) or “artificial super-intelligence”[viii] (ASI), would know better than any group of people how to maximize the benefit of AI technologies for society, and therefore no one need worry about these issues until such an entity is developed.

Although they concern cutting-edge technologies, these arguments are by no means new. Indeed, they have an illustrious precedent in Plato’s Republic, the classic treatise on government. In it, Plato, traumatized by the experience of seeing his beloved teacher Socrates sentenced to death on trumped-up charges by a mob of his fellow Athenians, set out to write the definitive refutation of democratic governance. Longing for the enlightened rule of the “philosopher-king”—something that even he acknowledged was a bit too idealistic—Plato settled on the creation of a special class of elites that he called “guardians.” Selected from among the best of the city’s youth, the guardians were to be raised separately from their peers in communal settings under conditions of the harshest discipline and given an unparalleled education in philosophy and statecraft. Emerging from this rigorous training process, guardians would be endowed with superior intelligence, knowledge, wisdom, and physiques. And, Plato concluded, by virtue of their superior qualities, guardians would have the right to rule over the polis. Their rulership would protect the ignorant masses from themselves: hence, “guardians.” What is more, Plato insisted that the guardians would embody the very soul of the state, and therefore would naturally act in all cases with its best interests at heart. Surely this is preferable to “mob rule”?

Yet Western political history has shown that Plato’s theory of governance by elites fails. In case after case, the rule of would-be guardians devolved into the outright tyranny of a corrupt, self-serving minority. Whenever an exclusive elite has been allowed to dominate political decision making, the majority of people suffer. Nevertheless, despite the history of failure, the dream of what political scientists today call guardianship, the “proposition that some people are so much wiser and more virtuous than others that their rule can promote the good of all better than it can be promoted by giving everyone a voice,”[ix] continues on into the present.

In the modern era, guardianship often manifests as technocracy, the view that scientists and engineers are better equipped than anyone else to govern for the good of all. There are striking parallels: modern scientists and engineers also undergo a rigorous selection and training process; they also possess superior knowledge (of their field); and are also often granted positions of power within society. Does this mean that Plato’s theory has been vindicated by modern science and technology? Not at all. Technocracy, like guardianship, also fails. The disastrous “scientifically planned economies” of the former Soviet Union and its satellites provide the classic example of technocratic guardianship run amok. In America, the abuses of famed city planner Robert Moses are often cited as an example of technocracy’s ills. To promote his version of “social good,” Moses had the overpasses on New York’s Long Island parkways built too low to allow the passage of buses, thereby effectively preventing low-income citizens from accessing the public beaches and parks he sought to segregate.[x] In my field of Science & Technology Studies (STS), hundreds if not thousands more examples of technocratic abuses have been documented at length.

Returning to the three hypothetical objections to democracy, I will address them in reverse order, last to first, showing how each reflects a technocratic mindset. First, regarding the claim that an AGI/ASI, thanks to its superior intelligence, will know how to maximize social benefit better than the human members of a given society—is this not simply Plato’s longing for a “philosopher-king” come back to haunt us in a new guise? Even if this AGI/ASI were wise, powerful, and selflessly dedicated to the maximally beneficial governance of humanity—and there are certainly arguments[xi] why this would not be the case—there still remains the problem of concentrated authority in a single individual entity. After centuries of hard-won progress towards democratization in nations around the globe, how can anyone seriously advocate a regression to computerized monarchy? The embrace of AI philosopher-kings—no matter how wise—is an indefensible proposition in our modern world.

The second claim, that democracy is unnecessary because technical experts know as well as anyone what “social good” is, is countered by one of the starkest lessons of 20th century technological history: unintended consequences. Consider a well-known example in which a group of well-meaning German chemists sent a drug to market intended to combat bouts of morning sickness in expectant mothers. Thalidomide, the drug in question, did indeed relieve morning sickness—but it also resulted in thousands of miscarriages and infants born with serious physical deformities. Perhaps more stringent pre-market testing could have averted this disaster, but does anything like the US Food & Drug Administration (FDA) exist for AI technologies? Or consider the American innovator and entrepreneur Henry Ford, credited with making cars cheap enough that his own workers could afford one. Could he have foreseen a future where wars are fought for oil and entire ecosystems are devastated by petrochemically-fueled global warming? One may reply that no one could have anticipated such things—and they might be right—but how tenable is that position in the case of driverless cars? Advocates of autonomous vehicles may be passionate about saving lives, but what plan do they have for the massive sociopolitical fallout entailed by the displacement of millions of drivers from the workforce? Technical experts are of course entitled to their own vision of “social good,” but they are by no means entitled, by virtue of their superior knowledge, to unilaterally impose that vision on the rest of society. In our world of increasingly interrelated, complex sociotechnical systems, where even the best predictive analyses come up short, it is by no means obvious that any amount of technical expertise justifies the exclusion of a majority of humanity from the decision making processes behind the technologies that will affect their lives.

Finally, to the first anti-democratic argument, that AI technologies are too complex for the lay public to understand. This may be true—but it does not follow from this premise that the broader public has nothing to contribute to the decision making process. Indeed, as debates over the “interpretability” of algorithms have made clear, many “deep learning” algorithms rely on multi-layer neural networks too complex for even leading AI experts to understand in detail. Yet this incomplete understanding does not bar those experts from making important decisions about the way AI technologies are researched, developed, and deployed. This is because insisting upon expert-level understanding as a baseline for participation in decision making is simply a form of guardianship designed to restrict authority over the process to a certain community. Expertise is undoubtedly useful and even necessary for intelligent democratic decision making, but it is by no means necessary that all participants possess it. Instead, it is only necessary that participants understand how the issue or technology in question will affect them. This is enough to have a say and to vote up or down on proposals. Importantly, a vote of “No” provides crucial negative feedback about social impacts. Had biotechnology corporations been more receptive to negative feedback in the early days of genetically-modified organisms (GMOs), they might have avoided the massive publicity campaigns and protests that transformed GMOs into an ongoing public controversy with no end in sight. Will AI suffer the same fate, or will its advocates work to construct a process by which decisions about how to do “AI for Social Good” can be made democratically?

5. The Future Looks Bright: Brilliant Technologies or Metallic Glare?

AI scientists often lament what might be called the “Terminator syndrome,” in which every news article about AI seems to include a photo of the eponymous robot killing machines under a headline like, “Rise of the Machines: Meet the new Robot Overlords.” Indeed, AI is often presented throughout popular culture as a potentially dangerous yet inevitable technological development that humanity can do little about, and this generates fear. In the AI community, Terminator syndrome is typically attributed to the overwhelming influence of Hollywood in American popular culture, and it is often suggested that if only the public knew more about the practical realities of what AI scientists actually do, they would be much less fascinated with “machine takeovers” and “killer robots.”

I suggest the Terminator syndrome might be more effectively countered—and the ideal of democracy served—by starting or expanding broad-based, public deliberations about the goals of AI, and the means of implementation. The fact is that there is no single, inevitable “AI,” but rather a diversity of programs and approaches, each of which may have different social impacts. Although it does seem inevitable that some kinds of AI systems will become pervasive in developed nations in the near future, there still remains considerable latitude over the design and functionality of these systems, as well as the way they will be implemented, monitored, and controlled. These are the very issues that need to be deliberated broadly and decided upon democratically if the AI technologies of the future are to maximize benefit to society as a whole, rather than a small group of technologists. Another mantra: “Maximum benefit to society requires maximum input from society.”

The robot-averse public will probably never stop watching Hollywood movies. But they may very well find their attitude toward AI changing when they are given a chance to participate meaningfully in the decision making processes governing R&D.

------------------

[i] Julia Angwin et al., “Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And It’s Biased Against Blacks,” ProPublica, May 23, 2016, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing; Kate Crawford, “Artificial Intelligence’s White Guy Problem,” The New York Times, June 25, 2016, http://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html.

[ii] Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (New York: Crown, 2016).

[iii] Charles E. Lindblom, “The Science of ‘Muddling Through,’” Public Administration Review 19, no. 2 (1959): 79, doi:10.2307/973677.

[iv] Charles E. Lindblom, The Intelligence of Democracy: Decision Making through Mutual Adjustment (New York: Free Press, 1965).

[v] Charles E. Lindblom and Edward J. Woodhouse, The Policy Making Process, Prentice-Hall Foundations of Modern Political Science Series (Englewood Cliffs, N.J.: Prentice Hall, 1993).

[vi] David H. Guston and Daniel Sarewitz, “Real-Time Technology Assessment,” Technology in Society, American Perspectives on Science and Technology Policy, 24, no. 1–2 (2002): 93–109, doi:10.1016/S0160-791X(01)00047-1.

[vii] Ben Goertzel, “Artificial General Intelligence: Concept, State of the Art, and Future Prospects,” Journal of Artificial General Intelligence 5, no. 1 (January 1, 2014): 1–48, doi:10.2478/jagi-2014-0001.

[viii] Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford: Oxford University Press, 2014).

[ix] Michael Bailey and David Braybrooke, “Robert A. Dahl’s Philosophy of Democracy, Exhibited in His Essays,” Annual Review of Political Science 6, no. 1 (2003): 99–118, doi:10.1146/annurev.polisci.6.121901.085839.

[x] Langdon Winner, “Do Artifacts Have Politics?,” Daedalus, 1980, 121–136.

[xi] Steve Omohundro, “The Basic AI Drives,” in AGI, vol. 171, 2008, 483–492; Steve Omohundro, “Autonomous Technology and the Greater Human Good,” Journal of Experimental & Theoretical Artificial Intelligence 26, no. 3 (July 3, 2014): 303–15, doi:10.1080/0952813X.2014.895111.