

Tech Journalist

Can AI now influence an election? Big tech to take action


Several of the world's leading technology companies, such as Amazon, Google, and Microsoft, have come together to address the issue of deceptive artificial intelligence (AI) in elections. These companies have signed an agreement committing to combat voter-deceiving content by using technology to detect and counter it. However, an industry expert believes that this voluntary pact may not be sufficient to prevent harmful content from being posted.



The Tech Accord to Combat Deceptive Use of AI in 2024 Elections was announced at the Munich Security Conference, recognizing the significance of the upcoming elections in countries like the US, UK, and India, where billions of people are expected to vote. The accord includes commitments to develop technology that mitigates risks associated with deceptive AI-generated election content and to provide transparency to the public regarding the actions taken by these companies. It also involves sharing best practices and educating the public on identifying manipulated content.


Signatories of the accord include the social media platforms X (formerly Twitter) and Snap, as well as Adobe and Meta, the parent company of Facebook, Instagram, and WhatsApp. While the initiative is commendable, computer scientist Dr Deepak Padmanabhan of Queen's University Belfast believes more proactive action is necessary.


Rather than waiting for harmful content to be posted and then removing it, he argues, companies should act before it spreads. Dr Padmanabhan warns that realistic AI-generated content, which may cause the most damage, could remain on platforms for longer than obvious fakes, which are easier to detect and take down.


Another concern raised by Dr Padmanabhan is the accord's lack of nuance in defining harmful content. He cites the example of a jailed Pakistani politician using AI to deliver speeches from prison and asks whether such content should also be taken down. The signatories aim to target content that deceptively alters the appearance, voice, or actions of key figures in elections, as well as false information delivered to voters through audio, images, or video about when, where, and how to vote.


Brad Smith, the president of Microsoft, emphasised the companies' responsibility to ensure that AI tools are not weaponised in elections, while US Deputy Attorney General Lisa Monaco warned that AI could "supercharge" disinformation during election campaigns. Google and Meta have previously set out policies on AI-generated images and videos in political advertising, requiring advertisers to disclose the use of deepfakes or AI-manipulated content.


In conclusion, while the Tech Accord to Combat Deceptive Use of AI in 2024 Elections is a step in the right direction, more proactive measures and a more nuanced approach may be needed to address the challenges AI poses to elections effectively.
