Regulatory Dynamics of Artificial Intelligence Global Governance


Analysis of the Internet Governance Analogue, Incentives & Nascent Efforts

Introduction

Artificial Intelligence is widely acknowledged to hold radically transformative—and disruptive—potential on par with the invention of the printing press and the Industrial Revolution. Sergey Brin (2018), co-founder of Google, has downplayed his own accomplishments by comparison: “The new spring in artificial intelligence is the most significant development in computing in my lifetime.” As AI transforms the economy (Chui et al., 2018) and changes the role of government (Eggers et al., 2017), it also holds the potential for mass unemployment (Nedelkoska & Quintini, 2018) and may even constitute an existential threat to humanity (Bostrom, 2014). The careful governance of this disruption is a key challenge and imperative of our time.

Scholars have suggested that AI should see coordinated governance at the global level, drawing inspiration from existing Internet governance processes (Gasser & Almeida, 2017; Turner, 2018). This paper interrogates this hypothetical model and finds it of limited use. AI resembles the uppermost, application layer of the Internet, which does not see coordinated global governance; as such, the underlying technology encourages national-level governance to dominate AI. Given the importance of national regulations, a framework for regulatory competition among states reveals that AI industry dynamics will generate little pressure for global coordination. These dynamics call into question the durability and influence of existing AI global governance efforts. The paper concludes with recommendations for existing institutions and areas for further research.

A Brief Overview of Internet Governance

Although the history of the Internet dates back to the 1960s, its governance and regulation are a more recent concern (Mueller, 2009, pp.1-10; Leiner et al., 1997). What eventually emerged was a layered approach to governance (Figure 1). This perspective sees the Internet divided into three layers: the infrastructure layer, the logical layer, and the application (or economic and social) layer, each governed by different organizations with different expertise. The first two layers see global governance that ensures the interoperability of both infrastructure and software protocols for a worldwide connected Internet. This interoperability, in and of itself, does not pose a threat to national governments, and as such, their representatives readily participate in transnational governance via the UN International Telecommunication Union (ITU) and ICANN, among other venues.

Figure 1. The Three Layers of Digital Governance (XPLANE, 2015)

At the third, application layer, however, there is less agreement on global governance: a whole host of actors regulates the applications and content delivered in response to an Internet traffic request. These actors include national governments, corporations, and, indirectly, the UN Internet Governance Forum (IGF). Regulations at this third layer can vary considerably around the world: the Great Firewall of China blocks many globally popular applications, for example, while the Right to Be Forgotten affects search results in the EU.

Each layer sees a series of groups interact with different mandates. The result has been called a governance mosaic (Dutton & Peltu, 2005). Notably, however, within this diverse ecosystem, constituent groups’ internal organizational governance often follows a common multistakeholder model. The multistakeholder model varies in practice, but at its most abstract it works to “bring together all major stakeholders in a new form of communication, decision-finding (and possibly decision-making) on a particular issue” (Hemmati, 2002, p.2). In the context of Internet governance, these stakeholders are generally the academic and technical community, corporations, governments, and civil society (Almeida et al., 2015, p.75). Initially epitomized by ICANN and the IETF, the multistakeholder model has over the past twenty years been endorsed by numerous international organizations, including the UN General Assembly, OECD, Council of Europe, ITU, and G8, to the extent that some claim it as an international norm (Internet Society, 2016). Yet, this model is no panacea; it must fit its particular context if it is to be successful (Hemmati, 2002, p.3; DeNardis & Raymond, 2013).

To what extent can Internet governance inform AI governance? Turner (2018) suggests that ICANN, the preeminent multistakeholder organization governing the logical layer of the Internet, may be a useful case. Gasser & Almeida (2017) draw inspiration from the layered governance model itself, suggesting a similarly layered model for AI, with each layer developing over time. Yet, to what extent are the technologies sufficiently similar to enable this approach? Are incentives similarly aligned to see actors coordinate at the global level?

AI as the Internet Application Layer

In seeking to apply a governance model from one technology to another, policymakers confront a tension between minimizing regulatory uncertainty by using established methods well understood by stakeholders and maximizing the ‘fit’ of the regulation to the particularities of the technology in question (Brownsword & Yeung, 2008, p.5). Insofar as the technologies and the actors’ incentives within them resemble one another, this tension can be abated. The benefits of reusing governance models, if they fit, are significant for both business efficiency and institutional legitimacy (ibid., p.6), so the impetus to use Internet governance for AI is understandable. But is it fit for purpose?

An essential question is the extent to which the two technologies resemble one another. Although AI presents definitional problems, for our purposes consider it as a set of computational decision-making techniques (See Elish & Hwang, 2016; Hernández-Orallo, 2017). These techniques–ranging from a hidden Markov model for predictive texting to a deep neural network implementation that can play the board game Go better than any human–are akin to computer applications. Thus, AI is essentially the application layer of the Internet. Indeed, AI applications are often already deployed at this layer, e.g., as software-as-a-service and cloud computing solutions (The Economist, 2017). Even if a particular AI application remains firmly in a single locale, it likely uses data that was transported over the Internet.
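To make the claim concrete, consider how small a predictive-texting model can be. The following is a minimal sketch in Python, using a first-order Markov chain as a simplified stand-in for the hidden Markov models mentioned above; the corpus, function names, and variable names are illustrative inventions, not drawn from any cited system.

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count word-to-next-word transitions in a whitespace-tokenized corpus."""
    transitions = defaultdict(Counter)
    words = corpus.split()
    for current_word, next_word in zip(words, words[1:]):
        transitions[current_word][next_word] += 1
    return transitions

def predict_next(transitions, word):
    """Return the most frequently observed next word, or None if unseen."""
    if word not in transitions:
        return None
    return transitions[word].most_common(1)[0][0]

# Toy usage; a real predictive-text system would train on far more data.
model = train_bigram_model("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # -> "cat"
```

Nothing about such a program implicates the infrastructure or logical layers: it is ordinary application software that happens to make decisions.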

Understanding AI as the application layer of the Internet helps clarify stakeholders’ incentives. The application layer need not see universal interoperability for nations and corporations to achieve their goals on the global network. China argues that although its Great Firewall blocks access to some international content, businesses in and outside of China are still able to connect to each other and to potential customers (Global Times, 2016). Following the Snowden revelations in 2013, many countries adopted data localization laws that require firms to store users’ data domestically (See Selby, 2017). More broadly, there is growing concern about the “balkanization” of the Internet, whereby application-specific regulations undermine global connectivity for content (Bleiberg & West, 2014; Frosio, 2018). These regulatory developments at the application layer represent important interventions by national governments and harbingers of a possible AI governance landscape.

Regulatory Competition for AI

If AI resembles the application layer of the Internet, national regulations are essential to understanding any possible global system of governance. Insofar as AI research is geographically concentrated, requires significant capital expenditure, and yields highly valuable intellectual property (The Economist, 2017), AI resembles many technologically advanced industries. National governments are increasingly attempting to foster such industry domestically: the UK (HM Government, 2017), France (Villani, 2018), Japan (METI, 2015), and China (State Council, 2017), among other countries, have published strategies with significant financial incentives for the domestic development of an AI industry. These concerted national efforts to attract AI industry underscore the salience of regulatory competition between states for understanding the potential global governance of AI.

In the regulatory competition model, industry characteristics–particularly firm concentration, asset specificity, and cross-border value chains–impact companies’ incentives and national responses (See Murphy, 2004). Companies may choose to lobby for regulatory change, leave for a more favorable country, or passively accept the status quo (ibid., p.10). Figure 2 plots the possible outcomes of aggregate national responses to industry characteristics and companies’ actions: national regulations can converge towards a higher common denominator, a lower common denominator, or fail to converge at all.

Figure 2. Trajectories of regulatory competition among competing states (Murphy, 2004, p.6)

How does regulatory competition for AI play out? Today, US-based multinational corporations and Chinese Internet companies dominate AI (The Economist, 2017). Market share is diffuse among the leading companies and competition among these firms is heated (ibid.), so companies may be unable to lobby for individually advantageous regulation beyond the generally low-regulation status quo (Murphy, 2004, p.14; Chander, 2013). Two key inputs for AI are highly specific to particular jurisdictions. AI demonstrates high asset specificity in human capital: demand for technical talent far outstrips supply, so firms compete vigorously for new hires and tend to locate near university research hubs and preexisting technical clusters (The Economist, 2017). Similarly, data-localization requirements see data, essential for AI, fixed to a particular country (Selby, 2017). The national AI strategies of the UK, France, and China all acknowledge as much: they seek to make rich datasets available and to increase the number of qualified PhDs in their countries (See HM Government, 2017; Villani, 2018; State Council, 2017). AI does not see the traditional global value chains expected in manufacturing and other industries. Some leading AI companies, including Google, Microsoft, Facebook, Amazon, and Baidu, do have research labs across multiple countries (The Economist, 2017) and must comply with preexisting, differing process regulations in those jurisdictions, e.g., EU data protection rules (See ICO, 2017). Still, the cross-border value chain pressures that might support homogeneous global regulations are weaker for AI (Murphy, 2004, pp.17-8). These factors—a competitive market, high asset specificity, and modest cross-border value chains—predict heterogeneous regulatory outcomes (ibid., pp.20-1). A fractured AI governance landscape becomes all the more likely when one considers the industry’s strategic and military value: recent speculation about US regulation to prevent Chinese investment in, and even collaboration with, US firms on AI research is but the latest illustration of this trend (Qing, 2018).
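The preceding paragraph applies Murphy’s (2004) framework qualitatively; the sketch below restates that application as a simple decision function. The mapping rules are an illustrative simplification written for this summary, not a formalization taken from Murphy (2004), and the input values encode the characterization of the AI industry argued above.

```python
def regulatory_trajectory(concentration, asset_specificity, value_chains):
    """Schematic mapping from industry characteristics ('high' or 'low')
    to a predicted trajectory of regulatory competition. The rules are
    an illustrative simplification, not Murphy's (2004) formal model."""
    if concentration == "high" and value_chains == "high":
        # Concentrated firms with global value chains can lobby in unison
        # for uniform, often more stringent, rules across jurisdictions.
        return "convergence toward a higher common denominator"
    if asset_specificity == "low":
        # Mobile firms can credibly threaten exit, pressuring states to
        # compete by loosening regulation.
        return "convergence toward a lower common denominator"
    # Immobile assets and a fragmented industry leave each state free
    # to regulate as it sees fit.
    return "no convergence: heterogeneous national regulation"

# The AI industry as characterized above: diffuse market concentration,
# high asset specificity (talent and data), modest cross-border value chains.
print(regulatory_trajectory(concentration="low",
                            asset_specificity="high",
                            value_chains="low"))
# -> "no convergence: heterogeneous national regulation"
```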

In sum, the ongoing regulatory competition dynamics for AI do not resemble those of the early Internet. In contrast to AI’s high asset specificity in both talent and data, the early Internet exemplified low asset specificity: websites could easily move servers from one jurisdiction to another, allowing companies to effectively skirt national regulations on online gambling, among other applications (See Kelly, 2000). Globally, regulatory competition dynamics do not see a race to the bottom on AI ethics, in which nations lower privacy and oversight standards to attract companies. Nor does AI currently see strong incentives for globally coordinated stringent regulation like those observed in past climate agreements, particularly the Montreal Protocol’s ban on ozone-depleting CFC emissions (Murphy, 2004, pp.115-27). Instead, the regulatory dynamics of AI see national-level governance dominate. An understanding of the underlying technology and its related incentives thus indicates that global governance akin to that of the infrastructure and logical layers of the Internet does not appear to be on the horizon for AI, despite some scholars’ hypotheses (Gasser & Almeida, 2017; Turner, 2018). This has significant implications for the evolution of existing and aspiring global governance institutions.

Surveying the AI Global Governance Landscape

Despite the above findings, today there are multiple ongoing efforts towards global AI governance. What follows is an overview of groups that work on AI governance at a global level; thus, it excludes national governments as well as individual corporations. Although this stakeholder mapping cannot claim to be comprehensive, it benefits from participant observation at the UN IGF 2017, research on AI ethics academic communities on Twitter (See Chowdhury, 2018), and a conversation with Stephen Cave, Executive Director of the Leverhulme Centre for the Future of Intelligence (CFI).

A number of academic think tanks and institutes have research projects that address the governance of AI. These groups–including CFI, the Future of Humanity Institute at Oxford, the Berkman Klein Center for Internet and Society at Harvard, AI Now at New York University, the MIT Media Lab, the World Economic Forum, and Data & Society–host researchers and events on the topic. The Knight Foundation has pledged over $27 million for multidisciplinary AI research through its Ethics and Governance of Artificial Intelligence Fund (Knight Foundation, 2018). Although not governing institutions in their own right, these groups are important stakeholders in generating both the ideas and the participants for future governance structures.

The United Nations has been active in AI governance. The Group of Governmental Experts on Lethal Autonomous Weapons Systems is composed of representatives from over 60 countries and organizations (UNOG, 2017). The ITU recently hosted its second annual AI for Good Global Summit, which it describes as “the leading United Nations platform for dialogue on AI”; Houlin Zhao, ITU Secretary-General, sees the organization’s role as “providing a neutral platform for international dialogue aimed at building a common understanding of the capabilities of emerging AI technologies” (ITU, 2018). The annual IGF has similarly promoted dialogue on AI, with the 2017 session hosting nine panels on the topic (IGF, 2017). The UN Interregional Crime and Justice Research Institute (UNICRI) opened a Centre on Artificial Intelligence and Robotics to help ensure that “all stakeholders, including policy makers and governmental officials, possess improved knowledge and understanding of both the risks and benefits of such technologies” (UNICRI, 2018). Taken together, these UN initiatives promote dialogue and information sharing among stakeholders; they do not constitute anything more than a modest effort at global governance for AI.

Two private initiatives represent the most developed efforts towards governance. The Partnership on AI to Benefit People and Society is an industry group founded by Amazon, Apple, DeepMind, Facebook, Google, IBM, and Microsoft. Its mission is “to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences” (Partnership on AI, 2018a). In pursuit of that mission, it has partnered with some 50 academic think tanks, companies, and civil society organizations (Partnership on AI, 2018b). The Partnership on AI thus adheres to the multistakeholder model common in Internet governance, albeit without direct government involvement. Although it enjoys a nearly global membership, leading Chinese firms are notably absent (Turner, 2017).

The second private initiative towards global governance of AI is the Institute of Electrical and Electronics Engineers (IEEE) Global Initiative on Ethics of Autonomous and Intelligent Systems. The IEEE is the largest technical professional association in the world and, among its mandates, develops technology standards. The Global Initiative has produced a series of documents identifying ethical best practices for AI designers (IEEE, 2018), and the IEEE is also developing formal standards on ethics, transparency, privacy, and bias in AI (Rozenfeld, 2017). Yet, the impact of these global standards and best practices may be limited: as one leading AI legal scholar notes, industry-created codes of conduct have been repeatedly invalidated by the US government (Calo, 2017, pp.407-9).

The field grows more crowded by the day as new groups promising global governance of AI emerge. In late April 2018, the UK-based Big Innovation Centre, together with representatives of the All-Party Parliamentary Group on AI, proposed creating an AI Global Governance Commission (Big Innovation Centre, 2018).

Implications and Recommendations for Governance Efforts

It is too soon to tell how the AI Global Governance Commission will fare. But given that the nature of the technology favors national regulation and that the nature of the industry predicts heterogeneity in national outcomes, the Commission is unlikely to hold significant governing power. Indeed, if it survives at all, it is likely to promulgate vague or unenforceable standards, given the heterogeneous national landscape (See Drezner, 2004, pp.484-5).

With over 50 partner organizations around the world (Partnership on AI, 2018b), the Partnership on AI is primed to operate as a legitimate multistakeholder governance organization (Abbott et al., 2016, pp.270-1). Although it includes members from companies and civil society, it should also involve government stakeholders to ensure the sharing of best practices and technical developments. The UN ITU and IGF have considerable convening power and, as such, are well situated to continue hosting global dialogues on AI. These groups, the Partnership on AI and the UN bodies, together with the IEEE, form a triad of soft global governance for AI. Given the incentives at play, it is unlikely that global governance will strengthen in the short term. Governance of short-term harms will fall to national governments (Calo, 2017; Gasser, 2016).

This paper has focused on what is, not what ought to be. Insofar as AI poses a significant disruptive threat, these governance incentives may shift over time. Indeed, the status quo does not reflect some prominent academics’ concern about the existential threat posed by a super-intelligent AI (See e.g. Bostrom, 2014). If such a possibility appears increasingly likely, then preexisting institutions will be well situated to expand their mandates into formally governing aspects of AI (See Conran & Thelen, 2016). In the interim, the “soft governance triad” described above should maintain open channels of communication on the topic so that policymakers and institutional designers have a sufficient understanding of the current state of the technology.

In the meantime, research institutions and funders should support efforts to understand the geopolitical implications of AI and to build a roadmap for institutional development should it be needed. Although many possible governance models have been circulated (Turner, 2018; Gasser & Almeida, 2017; Murray, 2011; Tarko, 2013; Perritt, 2000; Mifsud Bonnici, 2008), there is insufficient literature on how to achieve them in practice. This paper has sought to fill that practical gap, but more work remains.