Meta, the parent company of social media giants Facebook and Instagram, has announced plans to label AI-generated images on its platforms, taking a proactive step to combat the potential spread of false information in the run-up to the 2024 elections and beyond.
In a recent blog post, Meta Global Affairs President Nick Clegg unveiled the company’s initiative to add “AI generated” labels to images created with various third-party artificial intelligence tools. These labels will help users distinguish between AI-generated and naturally captured visuals.
The move comes in response to growing concerns within the tech industry, as well as among lawmakers and information experts, about the proliferation of AI tools capable of creating highly realistic images. When coupled with social media’s rapid content dissemination, these tools pose a significant risk of spreading deceptive content that could mislead voters in various countries, including the United States.
Meta’s labeling effort will encompass images produced using a range of popular AI tools, including those from Google, Microsoft, OpenAI, Adobe, Midjourney, and Shutterstock. The new labels extend the company’s existing “Imagined with AI” tag, which already marks photorealistic images produced by its in-house generator, to images created with these third-party tools.
To achieve consistent and accurate labeling, Meta is collaborating with leading AI tool developers to establish common technical standards. These standards will involve the use of invisible metadata or watermarks embedded within AI-generated images, allowing Meta’s systems to identify them with precision.
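One such convention, defined by standards bodies like the IPTC, has generators embed a machine-readable “digital source type” value inside the image file’s XMP metadata. As a rough illustration of how such a marker can be detected, here is a minimal Python sketch; the file path and the simple byte-search approach are illustrative assumptions, not Meta’s actual detection pipeline.

```python
# Minimal sketch: check an image file for the IPTC "DigitalSourceType"
# marker that some generative tools embed in XMP metadata. This is an
# illustrative screening pass, not Meta's production system.
from pathlib import Path

# IPTC NewsCodes value signalling algorithmically generated media
AI_SOURCE_MARKER = b"trainedAlgorithmicMedia"

def looks_ai_generated(image_path: str) -> bool:
    """Return True if the file's embedded XMP metadata carries the
    IPTC digital-source-type marker for AI-generated media.
    A missing marker proves nothing: metadata is easily stripped."""
    data = Path(image_path).read_bytes()
    # XMP packets are plain text embedded inside the binary container,
    # so a byte search is enough for a rough check.
    start = data.find(b"<?xpacket begin")
    if start == -1:
        return False  # no XMP packet at all
    end = data.find(b"<?xpacket end", start)
    packet = data[start:end if end != -1 else len(data)]
    return AI_SOURCE_MARKER in packet

if __name__ == "__main__":
    print(looks_ai_generated("example.jpg"))  # hypothetical file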
The “AI generated” labels will be introduced across Facebook, Instagram, and Threads, and they will be available in multiple languages to ensure broad accessibility.
Clegg emphasized the importance of transparency in dealing with AI-generated content, noting that many people are encountering such content for the first time and appreciate knowing when the technology is involved. The labeling approach will be maintained through the coming year, which includes crucial elections worldwide, giving Meta insight into user preferences and the evolving landscape of AI-generated content.
However, the industry-standard markers used to label AI-generated images do not yet extend to AI-generated video and audio. To address this gap, Meta plans to introduce a feature allowing users to disclose AI-generated video or audio when sharing it; failure to disclose such content may result in penalties.
Meta’s commitment to transparency also extends to preventing users from removing the invisible watermarks embedded in AI-generated images, a crucial step given the increasingly adversarial nature of AI-generated content creation.
In related news, Meta has announced an expansion of its anti-sextortion tool, “Take It Down,” built in partnership with the National Center for Missing & Exploited Children. The tool lets users, particularly teens and parents, generate unique identifiers (hash values) of intimate images on their own devices, without uploading the images themselves, so that participating platforms such as Meta’s can swiftly identify and remove matching content.
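The mechanism rests on hash matching: the identifier is computed locally, so the sensitive image never leaves the user’s device, and platforms compare only the resulting fingerprints. Here is a minimal sketch of the idea, assuming a plain cryptographic hash for clarity; real deployments favor perceptual hashes (such as PDQ) so that resized or re-encoded copies still match.

```python
# Illustrative sketch of hash-based matching in the spirit of "Take It
# Down": platforms compare fingerprints, never the images themselves.
# File names are hypothetical; note that SHA-256 only matches
# byte-identical files, whereas production systems use perceptual
# hashes so near-duplicates still match.
import hashlib
from pathlib import Path

def fingerprint(image_path: str) -> str:
    """Compute a fingerprint on the user's own device; only this short
    hex digest is shared with platforms, never the image itself."""
    return hashlib.sha256(Path(image_path).read_bytes()).hexdigest()

if __name__ == "__main__":
    # Hypothetical files for illustration only.
    blocked = {fingerprint("reported_image.jpg")}    # user-submitted hash
    print(fingerprint("new_upload.jpg") in blocked)  # platform-side check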
Initially launched in English and Spanish last year, “Take It Down” will now be available in 25 languages and in additional countries. This expansion follows recent Senate hearings in which Meta CEO Mark Zuckerberg and other social media leaders faced scrutiny over their platforms’ protections for young users.
As Meta takes these steps to address the challenges posed by AI-generated content and protect the integrity of its platforms, the tech industry continues to grapple with the evolving landscape of information dissemination and digital manipulation in an era where visual authenticity is increasingly difficult to discern.