OpenAI tightened its content rules after Sora 2 users turned Pokémon characters like Pikachu into viral A.I. videos. John Keeble/Getty Images
When OpenAI’s Sora 2 launched on Sept. 30, it was billed as a playful experiment in A.I. video generation. Within 48 hours, the app, which lets users create short-form videos from simple text prompts, soared to the top of Apple’s App Store with more than 160,000 downloads. Notably, Japanese franchises such as Pokémon and Mario dominated Sora 2’s early outputs, flooding X and Instagram with A.I.-generated clips like Mario getting arrested for reckless driving and Pikachu cosplaying as Batman. The viral frenzy soon sparked backlash from intellectual property holders and lawmakers in Japan, prompting OpenAI to revise its content policy.
Nintendo, which owns the Pokémon and Mario franchises, said on X that it “will take necessary actions against infringement of our intellectual property rights.” Akihisa Shiozaki, a lawyer and member of Japan’s House of Representatives, urged immediate action to protect the nation’s content industry. “When I tried inputting prompts into Sora 2 myself, it generated footage of popular anime characters with a quality indistinguishable from the originals, one after another. Yet, for some reason, characters owned by major U.S. companies, like Mickey Mouse or Superman, didn’t appear,” he wrote on X. “This was clearly an imbalance and potentially a serious issue under copyright law.”
In a blog post on Oct. 3, OpenAI CEO Sam Altman announced policy changes following the surge of videos featuring Nintendo characters. “We’d like to acknowledge the remarkable creative output from Japan—we’re struck by how deep the connection between users and Japanese content is,” he wrote.
OpenAI’s previous “opt-out” policy, under which rights holders had to actively request that their characters and works be excluded from Sora’s outputs, was replaced with a stricter “opt-in” system. Under the new rule, the company must receive explicit permission from rights holders before Sora can generate content featuring their IP. Altman said the change would give rights holders “granular control over character generation,” aligning OpenAI’s approach with existing likeness and IP protection standards.
He admitted the company had underestimated how quickly users would push the boundaries, noting that “there may be some edge cases of generations that get through that shouldn’t,” and that refining the system “will take some iteration.”
Altman also suggested that creators could eventually earn royalties when their characters appear in Sora-generated videos. “We are going to try sharing some of this revenue with rightsholders who want their characters generated by users,” he wrote. “Our hope is that the new kind of engagement is even more valuable than the revenue share, but of course we want both to be valuable.”
OpenAI is already entangled in a growing number of lawsuits from authors, media companies and other rights holders who accuse the company of using copyrighted material without permission to train its models. The New York Times sued OpenAI and Microsoft in late 2023, alleging that ChatGPT reproduced many of its articles verbatim. A separate group of fiction writers, including George R.R. Martin, John Grisham and Jonathan Franzen, filed a similar suit, arguing that OpenAI’s training methods violate copyright law by replicating their works.
One of OpenAI’s early backers, Vinod Khosla, came to the company’s defense. The billionaire venture capitalist hit back at critics of Sora 2, calling them “tunnel-vision creatives” who lack imagination.
“Let the viewers of this ‘slop’ judge it, not ivory tower luddite snooty critics or defensive creatives. Opens up so many more avenues of creativity if you have an imagination. This is the same initial reaction to digital music in the 90s and digital photography in the 2000s,” he wrote on X. “There will be a role for traditional video still, but many more dimensions of creative video (through A.I.).”
For now, Sora stands as both a marvel and a warning: a glimpse of how A.I. might democratize storytelling, and of how quickly its boundaries can be tested. The moment may have gone viral, but the ethical battle it unleashed is just beginning.