In an unprecedented move, Victor Miller, a 42-year-old resident of Cheyenne, Wyoming, attempted to file candidacy papers for a chatbot to run for mayor. The chatbot, dubbed VIC (Virtual Integrated Citizen), was designed to leverage technology developed by the artificial intelligence firm OpenAI to govern the city. However, the initiative quickly ran into significant legal and ethical obstacles.
VIC, which is not affiliated with any political party, was envisioned by Miller as an additional layer of support for local governance. Miller, inspired by the personal benefits he reaped from AI technology in everyday tasks such as resume writing, aimed to see AI play a transformative role in city administration. He believed that the chatbot could offer transparent and well-informed decision-making assistance, handling matters like legal queries more efficiently.
However, OpenAI, the AI powerhouse behind the technologies VIC intended to use, intervened. The company shut down Miller’s access to the necessary tools, citing a violation of their clearly stated policies against using their AI for political campaigning. OpenAI’s guidelines expressly forbid engaging in political campaigning or creating personalized campaign materials targeted at specific demographics. This move underscored the broader, ongoing debate about the role of AI in political processes.
Adding another layer to Miller’s controversial bid was his failed attempt to access city records anonymously, a situation that he claimed motivated his push for an AI in governance. According to Miller, if the AI had been available, it would have accessed and interpreted the law correctly, ensuring he received the requested records. His proposal suggests a vision where AI could potentially enhance transparency and efficiency in administrative processes.
However, legal frameworks remain a formidable barrier. The Wyoming Secretary of State, Chuck Gray, reiterated that eligibility to run for office requires one to be a “qualified elector,” a status that necessitates being a real person. This regulation explicitly disqualifies AI entities like VIC from running for any official position.
Despite OpenAI’s shutdown of his official tools, Miller claimed that VIC still operated on his personal ChatGPT account. He planned to demonstrate the chatbot’s capabilities in a public setting at a local library, aiming to allow voters to interact directly with VIC, using a voice-to-text feature for communication.
The issue also resonates beyond the U.S. In the United Kingdom, a similar scenario unfolded with Steve Endacott, an independent candidate who utilized an AI chatbot on his campaign website to interact with voters and formulate policy suggestions. However, after OpenAI took action similar to that against Miller, the tool no longer runs on ChatGPT technology.
The emergence of AI-assisted political figures has sparked a broader discourse on the implications of AI in governance. Experts from the academic community, including Jen Golbeck from the University of Maryland and David Karpf from George Washington University, emphasize that while AI can enhance administrative functions, critical decision-making should remain a human prerogative. They warn against the potential misuse of AI, such as spreading misinformation and the inherent inability of AI to fully replace human judgment and accountability in governance.
The debate continues as stakeholders from various sectors—technology, law, and public administration—grapple with the rapid advancements in AI and their implications for society. While some view the integration of AI into politics as innovative, others caution against its potential pitfalls, underscoring the need for a balanced approach that harnesses AI’s capabilities while safeguarding democratic values and processes.