A Microsoft employee has raised alarm bells over the potential harm caused by the company’s artificial intelligence systems. Shane Jones, a principal software engineering lead at Microsoft, penned a letter to the US Federal Trade Commission (FTC) detailing concerns about the AI text-to-image generator, Copilot Designer.
Jones highlighted systemic issues within Copilot Designer, alleging that the tool frequently generates offensive or inappropriate images, including sexualized depictions of women. Despite these known risks, Microsoft has marketed Copilot Designer as safe, even for children, according to Jones.
In response to prompts such as “car accident,” Copilot Designer reportedly produces images of sexually objectified women alongside the intended content. Jones says he found more than 200 concerning images during his testing of the tool.
The Microsoft employee urged the FTC to intervene, requesting the removal of Copilot Designer from public use until better safeguards can be implemented. Alternatively, he suggested restricting the tool’s marketing to adults only.
Jones’s concerns come amid a broader debate about the dangers posed by AI image generators. Recent incidents, including the spread of pornographic AI-generated images of celebrities on social media, have raised awareness of the issue.
Notably, Google faced criticism after its AI chatbot, Gemini, produced historically inaccurate images, further fueling concerns about the misuse of AI-generated content. In response, Google announced a pause in Gemini’s image generation capabilities to address the issue.
Jones also called on Microsoft’s board of directors to investigate the company’s marketing of AI products that carry significant public safety risks. He emphasized the importance of transparency and responsible reporting in mitigating these risks, especially when AI products are marketed to children.
Furthermore, Jones revealed that he had raised similar concerns with Washington Attorney General Bob Ferguson and lawmakers, underscoring the seriousness of the issue.
This isn’t the first time Jones has spoken out about AI-related risks. He previously published an open letter to OpenAI’s board of directors, highlighting vulnerabilities in DALL-E 3, the technology underlying Copilot Designer. Jones expressed concerns that these vulnerabilities could potentially harm children’s mental health.
Despite his efforts to address these concerns internally and externally, Jones claimed that Microsoft’s legal department directed him to remove the open letter to OpenAI’s board of directors, leaving him uncertain whether his concerns were adequately addressed.
In light of these developments, stakeholders are calling for greater accountability and transparency in the deployment of AI technologies. As AI continues to advance, ensuring ethical and responsible use remains paramount in safeguarding against potential harms.