OpenAI said on Monday it’s joining forces with Bryan Cranston, SAG‑AFTRA, and major Hollywood talent groups to tackle the growing mess of deepfakes made with its AI video tool, Sora.
This comes after fake clips of Cranston’s face and voice popped up online right after Sora 2 launched at the end of September. Cranston had no idea they existed until they were shared with him, and he wasn’t happy.
Speaking through SAG‑AFTRA, which posted about it on X, Cranston said: “I am grateful to OpenAI for its policy and for improving its guardrails, and hope that they and all of the companies involved in this work, respect our personal and professional right to manage replication of our voice and likeness.”
That line dropped right as OpenAI announced it would team up with SAG‑AFTRA, Cranston’s agency United Talent Agency (UTA), and other top groups like the Creative Artists Agency (CAA) and the Association of Talent Agents to shut down unauthorized uses of actors’ identities.
OpenAI faces heat from agencies over Sora 2’s misuse
OpenAI has been under fire from talent agencies for a while now. Both CAA and UTA blasted the company earlier this year for using copyrighted work to train its models, calling Sora a straight-up threat to their clients’ intellectual property.
Those warnings turned real when users started uploading disrespectful videos of Martin Luther King Jr. to Sora. The videos were so bad that King’s estate had to step in last week and ask for them to be blocked, and OpenAI complied.
The heat didn’t stop there. Zelda Williams, daughter of the late comedian Robin Williams, also told people to stop sending her AI-made videos of her dad after Sora 2 dropped. She made her frustration public not long after the launch, adding more fuel to the fire already building around OpenAI’s loose grip on identity protections.
With complaints stacking up, the company decided to tighten its policies. Sora had already required opt-in consent for voice and likeness use, but OpenAI said it’s now also promising to respond fast to any complaints it gets about impersonations or misuse.
Sam Altman updates policy and pushes NO FAKES Act
On October 3, OpenAI CEO Sam Altman made it official: the old opt-out policy, which let the company use material unless someone objected, has been scrapped. The company now gives rightsholders “more granular control over generation of characters,” meaning agencies can finally manage how and when their clients’ identities are used in Sora.
Altman also doubled down on his support for the NO FAKES Act, a U.S. bill aimed at stopping unauthorized AI replicas. “OpenAI is deeply committed to protecting performers from the misappropriation of their voice and likeness,” he said. “We were an early supporter of the NO FAKES Act when it was introduced last year, and will always stand behind the rights of performers.”
OpenAI has gone from a research outfit to an AI empire chasing everything: chat apps, social platforms, and enterprise tools. But with billions locked up in AI chips, and its giant data center build-out still hungry for cash, it’s looking hard at government and corporate contracts to pay the bills. That means keeping actors, agents, and lawmakers off its back, and staying out of court, is now just as important as training the next AI model.