As celebrities wage an increasingly fierce fight against AI-generated deepfakes—perhaps most notoriously Taylor Swift last week—prominent talent agency WME announced a deal with Chicago-based tech company Vermillio that aims to better protect the performers it represents from having their images manipulated and exploited online.
The deal with Vermillio, The New York Times reported on Tuesday, will allow WME to embed a digital tracker called Trace ID in images of its clients, which can then be used to monitor and identify authentic images—in turn giving clients a way to protect and monetize their likenesses.
While details on how the technology works are sparse, Vermillio’s Trace ID utilizes blockchain technology to record and track images, the Times said.
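Vermillio has not published how Trace ID actually works, but the general pattern described—fingerprinting authentic images and recording those fingerprints on a ledger so later copies can be checked—can be sketched in a few lines. Everything below (the `ImageLedger` class, its methods, and the sample data) is hypothetical illustration, not Vermillio's implementation:

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class ImageLedger:
    """Toy append-only registry mapping image fingerprints to owners.

    Hypothetical sketch only: Vermillio has not disclosed Trace ID's design.
    A production system would write to an actual blockchain rather than a dict.
    """
    records: dict = field(default_factory=dict)

    def register(self, image_bytes: bytes, owner: str) -> str:
        # Fingerprint the authentic image with a cryptographic hash
        # and record who it belongs to.
        digest = hashlib.sha256(image_bytes).hexdigest()
        self.records[digest] = owner
        return digest

    def verify(self, image_bytes: bytes):
        # An exact hash match identifies a registered original; returns None
        # for anything else. A real system would likely use perceptual
        # hashing to also catch resized or lightly edited copies.
        return self.records.get(hashlib.sha256(image_bytes).hexdigest())

ledger = ImageLedger()
ledger.register(b"official-press-photo", "WME client")
print(ledger.verify(b"official-press-photo"))  # WME client
print(ledger.verify(b"manipulated-copy"))      # None
```

The key limitation the comment notes is real: a cryptographic hash changes completely if a single pixel changes, which is why image-provenance systems typically pair an on-chain record with perceptual hashing or watermarking to flag near-duplicates.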
“We have been at this for a while to try and tackle this issue so that our clients have protections in place to at least start to address what is clearly a rampant issue,” WME’s head of digital strategies, Chris Jacquemin, told the newspaper. “You have no real ability to stop it other than manually stumbling across it,” he said, adding that Vermillio automates the process.
WME and Vermillio have not yet responded to Decrypt’s request for comment.
A deepfake is video or audio content created or manipulated with artificial intelligence to depict false events. Deepfakes are increasingly difficult to discern as fake, thanks to the growing power of generative AI platforms like Stable Diffusion, Midjourney, and OpenAI’s DALL-E.
While world leaders and law enforcement have repeatedly sounded the alarm about the threat AI deepfakes pose to elections, public safety, and conflict zones, the issue jumped to the forefront last week as fake sexual images of global megastar Swift flooded the internet—or at least Twitter.
The images were so prevalent on social media that Twitter removed the ability to search for Swift’s name before restoring it on Tuesday.
“So there’s that ongoing thing of you can’t trust whether things are real or not,” Internet Watch Foundation CTO Dan Sexton previously told Decrypt. “The things that will tell us whether things are real or not are not 100%, and therefore, you can’t trust them either.”
Edited by Ryan Ozawa.