Public administrations increasingly use AI to allocate social benefits, judges use risk assessment algorithms to gauge a defendant’s suitability for bail or parole, social media platforms use AI to optimise content moderation, and political actors exploit these platforms to micro-target disinformation and extend state surveillance of citizens. However, given its relatively “black box” nature, how does AI threaten our capacity to scrutinise the decisions of public democratic institutions?
Answering this question requires us first to consider why we value democracy. First, there is intrinsic value in governing ourselves as free and equal citizens through collective self-determination. Second, democratic institutions have instrumental value: democratic deliberation can improve the quality of public decisions by surfacing more information about citizens and making public institutions more responsive to citizens’ preferences.
Political decisions derive much of their legitimacy from their public nature, which allows for scrutiny, but this ideal of publicity is undermined when opaque AI is embedded in sociotechnical systems. The chief reason is the complexity of AI’s internal architecture: because modern models comprise many layers of hidden neural units, it is nearly impossible to make an algorithm fully transparent. This runs contrary to remarks made by Elon Musk about Twitter, who argued that making the algorithm open source would achieve the total transparency needed to preserve public trust and democracy. Yet there are practical concerns and real questions about the benefits of such measures; some argue, for example, that open-source code could allow malicious actors to find and exploit vulnerabilities in the software.
Furthermore, AI threatens democracy because its decisions are statistically more likely to reproduce structural injustice. First, AI that shapes technological systems may encode statistical bias, producing decisions that are unintentionally discriminatory when the underlying data reflects conditions of structural injustice. Second, even accurate algorithms can entrench injustice when deployed in structurally unjust societies, since marginalised groups may have statistically lower prospects owing to past injustices. COMPAS, a criminal recidivism risk prediction algorithm, illustrates this: it was introduced to assess a defendant’s likelihood of reoffending, and although proponents argued that big data and advanced machine learning would make such analyses more accurate and less biased, COMPAS proved unreliable and racially biased. Its scores appeared to favour white defendants (67.0% accuracy) over black defendants (63.8% accuracy), and it underpredicted recidivism for white defendants while overpredicting it for black defendants.
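The disparity described above — similar headline accuracy concealing very different error rates across groups — can be made concrete with a small sketch. The records below are invented purely for illustration (they are not COMPAS data), and the metric names follow common fairness-auditing usage:

```python
# Hypothetical sketch of a group-wise error audit. The data is invented
# for illustration only; it is NOT the COMPAS dataset.
# Each record is (group, predicted_high_risk, actually_reoffended).
records = [
    ("A", True, False), ("A", True, True), ("A", False, False), ("A", False, False),
    ("B", True, False), ("B", False, True), ("B", True, True), ("B", False, True),
]

def group_rates(records, group):
    """Return (accuracy, false positive rate, false negative rate) for one group."""
    rows = [r for r in records if r[0] == group]
    tp = sum(1 for _, pred, actual in rows if pred and actual)
    fp = sum(1 for _, pred, actual in rows if pred and not actual)
    fn = sum(1 for _, pred, actual in rows if not pred and actual)
    tn = sum(1 for _, pred, actual in rows if not pred and not actual)
    accuracy = (tp + tn) / len(rows)
    # False positive rate: share of non-reoffenders wrongly flagged high risk.
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    # False negative rate: share of reoffenders wrongly rated low risk.
    fnr = fn / (fn + tp) if (fn + tp) else 0.0
    return accuracy, fpr, fnr

for g in ("A", "B"):
    acc, fpr, fnr = group_rates(records, g)
    print(f"group {g}: accuracy={acc:.2f}, FPR={fpr:.2f}, FNR={fnr:.2f}")
```

In this toy data, group B receives both a lower accuracy and a far higher false positive rate than group A, mirroring the kind of asymmetry reported for COMPAS: a single overall accuracy figure says little about which group bears the cost of the errors.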
Inadequate AI moderation on social media platforms can distort democratic deliberation and undermine the quality of information. The Reuters Institute’s 2022 Digital News Report found that, since 2019, smartphone users across 46 markets have preferred accessing news through social media (39%), which surged ahead of direct access to news sites (31%) as the main gateway to online news. More than half of respondents (61%) worried about their ability to identify misinformation, which can give voice to extreme perspectives that previously would not have been widely disseminated. The use of AI in content moderation is unavoidable given the sheer volume of content to be moderated; yet the lack of human expertise, especially in engaging with political content in other languages, leaves content that threatens democratic integrity effectively unmoderated. This threatens democracy not only within borders, where governments can combine big data with “nudging” – governing masses efficiently without involving citizens in democratic processes – but also across borders, as individuals can be geo-targeted with individually customised suggestions and local trends can be gradually reinforced through repetition, creating echo-chamber effects that cause social polarisation and the formation of separate groups in conflict with each other. The disastrous effects of such a breakdown can be seen in the fragmentation of society, notably in American politics between the Democrats and the Republicans.
Overall, AI has made breathtaking advances across the technological space. With the capacity to be retrained and to adapt continuously, algorithms have demonstrated their potential from both a technological and an economic perspective. To harness that potential while minimising its inherent risks, however, AI’s societal and political impact must be continuously re-evaluated so as to preserve the legitimacy of representative democracy, support democratic politics and procedures, and ensure fairness and accountability.
Council of Europe Portal. (2018, July 3). Safeguarding human rights in the era of artificial intelligence. Commissioner for Human Rights. Retrieved November 27, 2022, from https://www.coe.int/en/web/commissioner/-/safeguarding-human-rights-in-the-era-of-artificial-intelligence
Dressel, J., & Farid, H. (2018). The accuracy, fairness, and limits of predicting recidivism. Science Advances, 4(1). https://doi.org/10.1126/sciadv.aao5580
Reuters Institute for the Study of Journalism. (n.d.). Overview and key findings of the 2022 digital news report. Retrieved November 27, 2022, from https://reutersinstitute.politics.ox.ac.uk/digital-news-report/2022/dnr-executive-summary