Use of AI in elections could damage fabric of democracy according to Queen’s researchers
The use of AI in the administration of elections has the potential to seriously undermine the democratic process – with minorities likely to be most adversely affected – according to new research from Queen’s.
The interdisciplinary research, published in AI Magazine, warns that using certain AI technologies, e.g. video monitoring of electoral activity in an attempt to address fraud, could impair the integrity of elections if they become widely used.
In their article, AI expert and computer scientist Dr Deepak Padmanabhan (School of Electronics, Electrical Engineering and Computer Science); public-administration academic Professor Muiris MacCarthaigh (School of History, Anthropology, Philosophy and Politics); and early-stage researcher Stanley Simoes (School of EEECS) call for a public conversation around the use of AI in ‘core’ electoral processes – such as the administration of mailing lists, voter identification and even the location of polling stations – before the use of such technology becomes widespread.
They point to the use of facial-recognition technology at polling stations as an example, among several others. Although facial recognition has been lauded in some quarters for a high accuracy rate, research has shown it to be less accurate when used with people of colour, women and younger people. This potential for disenfranchising minority groups is of significant concern, argue the researchers, especially in relation to important processes such as elections.
Similarly, there are risks associated with the use of AI to decide where polling stations should be located, with the technology possibly selecting populous, urban centres with limited consideration of rural concerns around accessibility or public transport links. This could, the researchers point out, lead to an accentuated exclusion of rural voters or those who cannot independently travel to a polling station.
Prof MacCarthaigh commented: “There has been quite a lot of debate already around the use of fake news, ‘deepfakes’ and other misinformation to influence election campaigns and manipulate voters and results.
“But there hasn’t been much focus on the core, administrative elements of the election process – in fact, we believe our research to be among the first, if not the first, in this area.
“We don’t think AI is widespread yet in core electoral processes, although it is being used in some jurisdictions, particularly in the US and parts of Asia. The literature on this is very limited, which is partly what motivated us to want to dig deeper.”
Dr Padmanabhan added: “It’s very likely that AI will become pervasive in election administration in the near future, so we’re raising a flag in order to prompt and inform a public debate. We’re not saying it’s necessarily all bad, but our research uncovered several significant concerns. What we are saying is that using it will fundamentally change the nature of elections, and voters need to be aware of that and look more closely at the potential for harm to our democratic processes.
“The usage of AI within the private sector is often driven by the promise of efficiency, given that efficiency is often treated as the primary criterion in the market-based society that we find ourselves in.
“The use of any technology within the public sector, especially in critical democratic processes such as elections, however, needs to be considered against other criteria. For example, public trust in technology and acceptance of the legitimacy of AI-based outcomes are essential, particularly amongst vulnerable stakeholders such as ethnic minorities.
“It is here that we need to be cautious about possibilities of reckless and invisible AI usage, especially when it comes to back-end processes such as voter list cleansing.”
Stanley Simoes, whose doctoral research focuses on AI usage within the public sector, flagged potential incompatibilities between the culture and needs of the public sector and the ethos driving the development and uptake of AI.
He said: “AI, as a technology, currently relies on building statistical models from data, where the models are optimised for certain criteria. These criteria are very much driven by the marketplace and economic considerations. It could be argued that AI for the public sector should be developed along fundamentally different lines, using different criteria. At the very least, this is an important issue for society to consider and debate.”
The full article is open-access and can be read here: https://onlinelibrary.wiley.com/toc/23719621/2023/44/3
Dr Deepak Padmanabhan
Media
Inquiries to Una Bradley at u.bradley@qub.ac.uk