Over the course of the last decade, there has been enormous interest in artificial intelligence (AI). Due to breakthroughs in machine learning and new ways to gather and analyze large amounts of data, expectations abound that AI will be implemented in more and more everyday technologies and reshape societies in far-reaching, even revolutionary ways. The massive interest in AI from a technological and economic perspective has been quickly followed by an intense discussion of its societal impact, especially problems of fairness and accountability. Only recently has this sociological critique been complemented by a discourse on the effects that AI might have on politics, especially democratic politics and procedures.

As a field of research, AI was established long before our current debates took off; its origins can be traced back at least to the mid-1950s. At that time, expectations were high that machines would soon be able to think and act like humans, but the predicted progress never materialized and a long so-called AI winter followed. During that period, AI mostly disappeared from public discourse. Although it was not visible, however, progress was made, and many of the conceptual tools and algorithmic techniques on which our current expectations are based were developed or fine-tuned. At the same time, digitalization made enormous inroads into society, preparing the ground for AI’s rise. Two factors were decisive: the availability of enormous computing power and the ever-expanding collection of data. New modes of communication, new technologies to sense, collect and store data, and strong economic incentives to gather data necessitated new ways of analyzing it.
From 2010 onwards, AI took off as the now-dominant machine-learning methods proved their worth in a series of spectacular successes in fields such as speech and image recognition, and a machine surpassed humans in playing the board game Go, an achievement long thought to be impossible.
Today’s deep learning techniques are characterized by an inductive approach. While early AI development often relied on complex deductive classification and reasoning, current approaches work by analyzing large data sets, generating or adapting decision rules in order to optimize pre-defined criteria. In many domains, AI systems have become highly effective at detecting and categorizing patterns, and they can adapt to new developments or patterns while running. These capabilities are best understood as learning processes; their sheer scope and complexity make them superior to human analysis in some cases (the optimization of complex supply chains is a good example). Still, it is important to understand current approaches to AI as narrow or weak AI. Their narrowness lies in their non-transferability: an AI system must be trained to perform a certain kind of task and cannot apply itself, or be applied, to other problems without further adaptation.
Representation is key to democracy. If we wish to have democratic representation, we need procedural arrangements that support the free and reflexive transmission of citizens’ voices and preferences to political institutions (“public will formation”), and institutions that can be held accountable even when no elections are taking place. Democracy is not mainly about outputs or effectiveness and therefore should not be equated with a just or fail-proof society (this false equivalence is sometimes the premise in AI debates, when results are characterized as good or just, and therefore democratic). On the other hand, democracy should not be defined narrowly by electoral procedures alone, since public will formation and the way power is exercised are also important to any complex appraisal of democracy. The debate on AI’s impact on the public sphere is currently the most prominent and the one most familiar to a general audience.
It is also directly connected to long-running debates on the structural transformation of the digital public sphere. The digital transformation has already paved the way for the rise of social networks that, among other things, have intensified the personalization of news consumption and broken down barriers between private and public conversations. Such developments are often thought to be responsible for echo-chamber or filter-bubble effects, which in turn are portrayed as root causes of the intensified political polarization in democracies all over the world. Although empirical research on filter bubbles, echo chambers, and societal polarization has convincingly shown that these effects are grossly overestimated and that many non-technological factors better explain the democratic retreat, the spread of AI applications is often expected to revive the direct link between technological developments and democracy-endangering societal fragmentation. The assumption here is that AI will massively enhance the possibilities for analyzing and steering public discourses and/or intensify the automated compartmentalization of will formation. The argument goes that the strengths of today’s AI applications lie in their ability to observe and analyze enormous amounts of communication and information in real time, to detect patterns, and to allow for instant and often invisible reactions. In a world of communicative abundance, automated content moderation is a necessity, and commercial as well as political pressures further ensure that digital tools are created to oversee and intervene in communication streams. Possibilities for control are distributed, albeit highly asymmetrically, among users, moderators, platforms, commercial actors and states, and all these developments push toward automation. AI is thus baked into the backend of all communications and becomes a subtle yet enormously powerful structuring force.
The risk emerging from this development is twofold. On the one hand, malicious actors may use these new possibilities to manipulate citizens on a massive scale; the Cambridge Analytica scandal comes to mind as an attempt to read and steer political discourses. The other risk lies in the changing relationship between public authorities and private corporations. Private powers are becoming increasingly involved in political questions, and their capacity to exert opaque influence over political processes has been growing for structural and technological reasons. Furthermore, the reshaping of the public sphere by private business models has been catapulted forward by the changing economic rationality of digital societies, such as the development of the attention economy. Private entities grow stronger and become less accountable to public authorities, a development accelerated by the adoption of AI applications, which create dependencies and allow for opacity at the same time. The ‘politicization’ of surveillance capitalism lies in its tendency, as Shoshana Zuboff has argued, not only to be ever more invasive and encompassing but also to use the data gathered to predict, modify, and control the behavior of individuals.
AI technologies are an integral part of this ‘politicization’ of surveillance capitalism, since they allow for the fulfilment of these aspirations. Yet at the same time, AI also insulates the companies developing and deploying it from public scrutiny, through network effects on the one hand and opacity on the other. AI relies on massive amounts of data and has high upfront costs (for example, the talent required to develop it and the energy consumed by the giant platforms on which it operates), but once established it is very hard to tame through competitive markets, as we are currently witnessing in South Asian markets, primarily India. Although applications can be developed by many actors and for many purposes, the underlying AI infrastructure is rather centralized and hard to reproduce. As in other platform markets, the dominant players are those able to keep a tight grip on the most important resources (models and data) and to benefit from every individual or corporate user. We can therefore already see that AI development tightens the grip of today’s internet giants even further and helps them influence, shape and build narratives across the globe without any liability. Public powers are expected to make increasing use of AI applications and will therefore become ever more dependent on the actors able to provide the best infrastructure, although this infrastructure, for commercial and technical reasons, remains largely opaque in its design, data collection, privacy practices and uses.
The developments sketched out above, the heightened manipulability of public discourse and the fortification of private powers, feed into each other, with the likely result that many of the deficiencies already visible in today’s digital public spheres will only grow. It is very hard to estimate whether these developments can be counteracted by state action, although a regulatory discourse has begun, and the assumption that digital matters elude the grasp of state regulation has often been proven wrong in the history of networked communication. Another possibility would be a creative appropriation of AI applications by users, one whose democratic potential outweighs its democratic risks, thus enabling the rise of differently structured, more empowering and inclusive public spaces. This is the hope of many of the more utopian variants of the literature on AI and the public sphere, according to which AI-based technologies bear the potential to grant individuals the power to navigate complex, information-rich environments and to allow for coordinated action and effective oversight.
There is a range of challenges that the widespread adoption of AI applications in the public sphere, democratic politics and public services might create for democracy in the emerging economies of India, Pakistan, Nepal, Sri Lanka and Bangladesh. Although many of the risks highlighted so far are rather speculative and do not sufficiently take into account countervailing forces and other balancing factors, we should certainly be aware of the risks that even narrow AI poses for democratic politics, even in smaller island countries like the Maldives. That AI needs to be regulated has been a major claim in world politics for about five years. There has been a strong proliferation of regulatory proposals in political systems as diverse as China, the United States and the European Union, each of which has worked on a comprehensive approach to regulating AI and its applications. These proposals share some similarities in that they create a master narrative of AI as an inevitable and disruptive development. All proposals point out the high degree of uncertainty regarding the uses and impacts of AI and then translate this uncertainty into a high demand for regulatory leadership. India aims to regulate AI “through the prism of user harm”: “We will protect digital nagriks from harm or from derived harm through technology. We will not let platforms operate in India that inflict user harm.”[1] Bangladesh has come up with a development roadmap of pillars to establish a sustainable AI ecosystem in the country.
The six strategic pillars of Bangladesh’s AI strategy are i) research and development, ii) skilling and reskilling of the AI workforce, iii) data and digital infrastructure, iv) ethics, data privacy, security and regulations, v) funding and accelerating AI start-ups, and vi) industrialization of AI technologies.[2] While the American approach, predominantly developed during the Trump administration, is mostly concerned with economic opportunities for American industry, the Chinese AI development plan is more focused on the question of how best to govern society and carries a more behaviorist logic. European attempts to regulate AI, such as the Draft EU AI Act, the Study Commission on Artificial Intelligence (deployed by the German Bundestag), the German AI strategy, and the French Villani report “For a Meaningful Artificial Intelligence”, make the strongest use of normative language and most explicitly claim to strike a balance between economic demands and ethical considerations. In South Asia, by contrast, AI regulation is focused on controlling harms and protecting national security and integrity, along with readiness for the economic opportunities AI may bring, without giving much thought to the ethical consequences or the impact of AI technologies on civic spaces. None of the AI strategies of India, Pakistan, Bangladesh and Sri Lanka demands high standards of transparency in order to permit but supervise and qualify the use of AI applications by private actors. Regarding the structural transformation of the public sphere, for instance, no AI strategy or policy draft prohibits the use of AI for manipulating deliberation or creates transparency requirements for the use of social bots and deep fakes. The national AI strategy of Pakistan emphasizes the need to regulate private powers but mostly focuses on economic aspects and on securing national security.
Topics directly linked to democratic elections are rarely addressed outright in the AI regulation proposals. The reason may be that elections and campaigns are seen as a matter for national election laws; but since these laws most often adapt only slowly to technological change, more attention should be given to this topic. The ethical application of AI in the high-risk environment of public administration should also create an incentive for AI development to fulfill criteria of transparency and accountability.
The developments discussed in this backgrounder show that the discourse on AI and democracy is still in its infancy in South Asian countries. Academic treatments and policy adaptation started around the same time and are still mostly driven by broader debates on digitalization and democracy and by exemplary cases of misuse. The expected spread of AI applications in society will lead to more thorough inspection, and one can expect that the use of AI in public services in particular will become a more important issue in the next decade.
[1] https://www.livemint.com/ai/artificial-intelligence/india-will-regulate-ai-to-ensure-user-protection-11686318485631.html
[2] https://ictd.portal.gov.bd/sites/default/files/files/ictd.portal.gov.bd/legislative_information/c2fafbbe_599c_48e2_bae7_bfa15e0d745d/National%20Strategy%20for%20Artificial%20Intellgence%20-%20Bangladesh%20.pdf