Meta says it will label AI-generated images on Facebook and Instagram

SeattlePI.com

Facebook and Instagram users will start seeing labels on AI-generated images that appear on their social media feeds, part of a broader tech industry initiative to distinguish between what's real and what's not.

Meta said Tuesday it's working with industry partners on technical standards that will make it easier to identify images and eventually video and audio generated by artificial intelligence tools.

What remains to be seen is how well it will work at a time when it's easier than ever to make and distribute AI-generated imagery that can cause harm — from election misinformation to nonconsensual fake nudes of celebrities.

“It's kind of a signal that they’re taking seriously the fact that generation of fake content online is an issue for their platforms,” said Gili Vidan, an assistant professor of information science at Cornell University. It could be “quite effective” in flagging a large portion of AI-generated content made with commercial tools, but it won't likely catch everything, she said.

Meta's president of global affairs, Nick Clegg, didn’t specify Tuesday when the labels would appear but said it will be “in the coming months” and in different languages, noting that a “number of important elections are taking place around the world.”

“As the difference between human and synthetic content gets blurred, people want to know where the boundary lies,” he said in a blog post.

Meta already puts an “Imagined with AI” label on photorealistic images made by its own tool, but most of the AI-generated content flooding its social media services comes from elsewhere.

A number of tech industry collaborations, including the Adobe-led Content Authenticity Initiative, have been working to set standards. A push for digital watermarking and labeling of AI-generated content was also part of...