AITech Interview with Colin Levy, Director of Legal at Malbek

Know how AI technologies threaten election integrity, including deepfakes and social media manipulation.

Colin, could you elaborate on the concerns you’ve raised regarding AI’s impact on elections?

Answer: When it comes to AI's role in elections, the central challenge is misinformation: deepfakes (e.g., someone's image and voice being used to propagate false opinions and incorrect information), bot accounts on social media spreading false or misleading content, and people's susceptibility to these tactics. In practical terms, this means we all need to be more skeptical of what we see, read, and encounter online, and better able to verify what we see and hear.

How does AI contribute to the dissemination of misinformation and disinformation during electoral processes, in your view?

Answer: AI contributes to the dissemination of misinformation and disinformation by enabling the creation and spread of convincing fake content, such as deepfakes, and by personalizing and optimizing the delivery of content on social media platforms. These capabilities can be exploited to create false narratives, impersonate public figures, and undermine trust in the electoral process.

Can you provide examples of how AI technologies, such as deepfakes and social media manipulation, undermine the integrity of elections?

Answer: Examples include:

  • Deepfakes: AI-generated videos or audio recordings that convincingly depict real people saying or doing things they never did, which can be used to create false impressions of candidates or mislead about their positions.
  • Social Media Manipulation: The use of bots and algorithms to amplify divisive content, spread falsehoods, and manipulate trending topics to influence political discourse.
  • Personalized Ads: Political ads designed to mislead viewers, convince them of false information, or prompt them to take actions that run against their best interests and benefit someone else, unbeknownst to the viewer of the ad.

What specific measures do you recommend to combat the threat of AI interference in elections?

Answer: I do not pretend to have all the answers, or even any answers, per se. What I can suggest is that several measures may be useful: developing and enforcing strict regulations on political advertising and the use of personal data for political purposes; implementing robust, verifiable fact-checking and content verification mechanisms to identify and label or remove false information; and encouraging the development of AI systems that prioritize transparency, accountability, and the detection of manipulative content.

In your opinion, how can transparency and accountability in AI algorithms help prevent their misuse in the electoral context?

Answer: Enhancing transparency involves making the workings of AI algorithms more understandable and accessible to regulators and the public, including disclosing when and how AI is used in content curation and distribution. Accountability means holding platforms and creators legally and ethically responsible for the content their AI systems disseminate, ensuring there are mechanisms to challenge and rectify misleading or harmful outputs.

How do enhanced security measures contribute to safeguarding electoral systems from AI-related threats, Colin?

Answer: Strengthening the security of electoral systems against cyber threats, through more robust mechanisms for detecting, preventing, and eliminating those threats, can make elections more reliable and trusted. Additionally, securing electoral infrastructure against AI-powered attacks through regular security audits can help as well.

Could you explain how AI tools for misinformation detection and management can mitigate the impact of false information on voters, as you’ve suggested?

Answer: AI tools are very good at analyzing vast amounts of data. Because of this, such tools could be developed and deployed to detect patterns indicative of misinformation, such as the spread of known false narratives or the abnormal amplification of content. If used regularly and correctly, these tools could help platforms and regulators quickly identify and mitigate the spread of false information and ensure that more accurate information is disseminated online. However, such tools could also be used to do the opposite, e.g., to ensure that only misinformation is shared, so who controls and uses them is a critical factor to account for as well.
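As a purely illustrative sketch of the "abnormal amplification" pattern mentioned above: one simple, transparent approach is to flag posts whose share counts are extreme outliers relative to the rest of a sample, using a robust statistic such as the modified z-score. The function name, the example data, and the threshold are hypothetical; real platform systems are far more sophisticated.

```python
from statistics import median

def flag_abnormal_amplification(share_counts, threshold=3.5):
    """Flag posts whose share counts deviate strongly from the median.

    Uses the modified z-score (based on the median absolute deviation),
    which stays meaningful even when a few posts are extreme outliers.
    share_counts: dict mapping post id -> number of shares.
    """
    counts = list(share_counts.values())
    med = median(counts)
    # Median absolute deviation: a robust measure of typical spread.
    mad = median(abs(c - med) for c in counts)
    if mad == 0:
        return []  # no spread in the sample; nothing to compare against
    return [post_id for post_id, c in share_counts.items()
            if 0.6745 * (c - med) / mad > threshold]

# Example: one post amplified far beyond the organic baseline
posts = {"p1": 120, "p2": 95, "p3": 110, "p4": 105, "p5": 9800}
print(flag_abnormal_amplification(posts))  # -> ['p5']
```

A simple statistical flag like this only surfaces candidates for human or downstream review; deciding whether amplification is coordinated or organic requires additional signals.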

From your perspective, how crucial is collaboration between AI companies, governments, and regulatory bodies in addressing AI challenges in elections?

Answer: Combating the risks posed by AI requires thoughtful and systematic collaboration between governments, regulatory bodies, AI companies, and, importantly, AI experts. Drawing on experts to supplement what these entities currently know about AI is important, as AI continues to evolve and each of us is learning as we go along. Collaboration is crucial for developing standards and norms for the ethical use of AI in elections, sharing best practices for detecting and countering misinformation, and coordinating responses to emerging threats. This includes joint efforts to improve public understanding of AI's role in information dissemination and to develop technologies that support electoral integrity.

What strategies do you propose to enhance public awareness and education regarding AI’s potential misuse in electoral processes?

Answer: While repeating what I said earlier about not having THE answer, some strategies that strike me as potentially helpful include: launching public education campaigns to inform citizens about how AI can be used and misused in electoral contexts; teaching critical digital literacy skills to help individuals identify misinformation; and promoting, more actively and systematically, a better understanding of the mechanisms behind AI-driven content curation and recommendation systems.

What are the essential components of a multi-faceted approach to ensuring the integrity of elections amidst AI challenges?

Answer: Components would likely include:

  • Legal and regulatory frameworks that address AI-specific challenges in elections.
  • Technological solutions to detect and mitigate misinformation and secure electoral processes.
  • Educational initiatives to enhance public understanding and resilience against misinformation.
  • International and systematic cooperation to better understand and adapt to the age of AI.

Colin Levy

Director of Legal at Malbek

Colin S. Levy is an experienced lawyer, legal tech expert, and the author of The Legal Tech Ecosystem, available via Amazon. Throughout his career, Colin has seen technology as a key driver in improving how legal services are performed. Because his career has spanned industries, he has witnessed myriad issues, from a systemic lack of interest in technology to the high cost of legal services barring entry for consumers. Now, his mission is to bridge the gap between the tech world and the legal world, advocating for the ways technology can be a useful tool in the lawyer's toolbelt rather than a fear-inducing obstacle to effective legal work. Colin is a sought-after writer and speaker. He is often a guest on legal tech podcasts, contributes articles, blog posts, and other content to various law outlets, and enjoys interviewing leaders in the legal and legal tech spaces.
