Insights & Developments

Copyright, Publicity Rights, and the Legal Reckoning in Light of ByteDance’s Seedance 2.0

On February 12, 2026, ByteDance, the Chinese parent company of TikTok, released Seedance 2.0, an artificial intelligence video generation model that accepts text prompts, images, reference video, and audio to produce hyperrealistic video clips of up to fifteen seconds.[1] Within twenty-four hours, a two-line prompt produced a viral clip of AI-generated Tom Cruise and Brad Pitt fighting on a rooftop that amassed over 1.2 million views on X.[2] Other users quickly generated riffs on Avengers: Endgame, Game of Thrones, Spider-Man, Titanic, Lord of the Rings, Shrek, Friends, and Stranger Things, flooding social media with content built from the likenesses and copyrighted properties of Hollywood’s most valuable franchises.[3]

The response was swift and unified. The Motion Picture Association (“MPA”), through Chairman and CEO Charles Rivkin, denounced Seedance 2.0 for engaging “in unauthorized use of U.S. copyrighted works on a massive scale” and demanded that ByteDance “immediately cease its infringing activity.”[4] SAG-AFTRA condemned the blatant infringement enabled by the platform, including “the unauthorized use of our members’ voices and likenesses,” noting that SAG-AFTRA President Sean Astin was among those whose likeness had been exploited.[5] The Human Artistry Campaign, backed by Scarlett Johansson, Cate Blanchett, and Joseph Gordon-Levitt, characterized the launch as “an attack on every creator around the world.”[6]

This article examines the multi-layered legal exposure that ByteDance and similar AI video platforms face under existing U.S. copyright law, the right of publicity, emerging federal legislation, and the still-evolving judicial framework for AI and fair use. For intellectual property holders and content creators, the Seedance 2.0 episode is not merely a technology story; it is a clarion call that the tools of infringement have outpaced the tools of enforcement.

What Are the Copyright Implications?

To understand the copyright implications of Seedance 2.0, one must first appreciate that AI video generation implicates copyright law at two distinct stages: (1) the training phase, in which copyrighted works are ingested by the model to learn patterns, styles, and representations; and (2) the output phase, in which the model generates new video content that may incorporate, reproduce, or closely mimic protected works. Each stage raises different but overlapping legal questions.

1. The Training Phase: Reproduction Without Authorization

Generative AI models are trained on vast datasets of existing works. Seedance 2.0’s ability to produce recognizable renderings of Tom Cruise, Brad Pitt, Optimus Prime, and dozens of copyrighted characters strongly suggests its training corpus included substantial quantities of copyrighted film, television, and photographic material. While ByteDance has not publicly disclosed Seedance 2.0’s training data, the outputs speak volumes: the model cannot generate faithful depictions of copyrighted characters and celebrity likenesses without having been trained on works depicting those characters and individuals.

Under Section 106 of the Copyright Act, the copyright holder possesses the exclusive right to reproduce the copyrighted work. The act of copying protected works into a training dataset constitutes reproduction, and multiple courts have now addressed whether such reproduction is excused by fair use. The answer, as discussed below, is far from settled.

2. The Output Phase: Derivative Works and Substantial Similarity

Perhaps more immediately troubling for rights holders is what Seedance 2.0 produces. When a user prompts the system and receives video featuring recognizable copyrighted characters (Spider-Man swinging through New York, Darth Vader wielding a lightsaber, or the cast of Friends reimagined as otters), the output may constitute an unauthorized derivative work under 17 U.S.C. § 101, which defines a derivative work as one “based upon one or more preexisting works.” The output may also be directly infringing if it is substantially similar to protected expression in the underlying works.

In Andersen v. Stability AI Ltd. (N.D. Cal. 2024), Judge William Orrick permitted visual artists to proceed with claims that AI-generated images were infringing derivative works, finding plausible both the plaintiffs’ “model theory” (that the AI model itself constitutes an infringing copy) and their “distribution theory” (that distributing the AI model is tantamount to distributing copyrighted works).[7] That case, set for trial on September 8, 2026, may provide the first judicial determination of whether AI-generated images are infringing derivative works.

Applied to Seedance 2.0, the case for infringement at the output stage is arguably even stronger than in Andersen. There, plaintiffs had to argue that AI-generated images were “in the style of” particular artists. Here, Seedance 2.0 is producing video clips featuring the actual characters and likenesses of copyrighted works: not abstract stylistic similarities, but concrete, identifiable reproductions of protectable expression.

Fair Use: The Three Decisions of 2025 and What They Mean for Video Generation

The year 2025 produced three landmark judicial decisions on fair use and AI training. Critically, these decisions do not march in a single direction, and their divergent reasoning creates both risk and opportunity for AI developers and rights holders alike.[8]

Thomson Reuters Enterprise Centre GmbH v. Ross Intelligence Inc. (D. Del., Feb. 2025)

In the first federal court decision to squarely address AI training and fair use, Judge Stephanos Bibas granted partial summary judgment to Thomson Reuters, holding that Ross Intelligence’s use of 2,243 Westlaw headnotes to train a competing AI legal research tool was not fair use as a matter of law.[9] The court found that Ross’s use was not transformative because the headnotes were used for the same purpose (legal research) for which Thomson Reuters had created them, and that the use threatened Thomson Reuters’ existing and potential markets.[10]

The Thomson Reuters holding carries particular relevance for Seedance 2.0. When an AI model is trained on copyrighted films and then generates video content that directly competes with or substitutes for those films, the logic of Thomson Reuters suggests that such use is unlikely to qualify as transformative. The model is not producing commentary, criticism, or a fundamentally different product; it is producing entertainment content that mimics the very works on which it was trained.

Bartz v. Anthropic PBC (N.D. Cal., June 2025)

Judge William Alsup issued what many observers have called a “split the baby” ruling.[11] He held that Anthropic’s use of lawfully acquired books to train its Claude large language model was “transformative—spectacularly so,” reasoning that the conversion of human-authored text into statistical patterns for a conversational AI constituted a fundamentally different use.[12] However, Judge Alsup drew a sharp line at Anthropic’s acquisition of millions of pirated books from shadow libraries, holding that “[p]irating copies to build a research library without paying for it… was its own use—and not a transformative one.”[13]

The subsequent class certification and $1.5 billion settlement, the largest copyright settlement in U.S. history, underscored the financial magnitude of the exposure AI companies face.[14] For Seedance 2.0, the Bartz framework presents a complex picture: even if one accepts that AI video training is transformative in the abstract, the question of how the training data was acquired remains critically important. If ByteDance trained Seedance on copyrighted films and television content without authorization, the Bartz piracy analysis could negate any fair use defense.

Kadrey v. Meta Platforms, Inc. (N.D. Cal., June 2025)

Two days after Bartz, Judge Vince Chhabria granted summary judgment for Meta in a case brought by authors whose books were used to train Meta’s Llama models.[15] While finding Meta’s use “highly transformative,” Judge Chhabria introduced a critical nuance that future courts and ByteDance’s attorneys should heed: he acknowledged that large language models are a unique technology, one that can be at once highly transformative and highly dilutive of the markets for the works on which it was trained.[16]

Judge Chhabria’s most significant observation was his warning about market dilution. He stated that “it seems likely that market dilution will often cause plaintiffs to decisively win the fourth factor—and thus win the fair use question overall.”[17] He also dismissed the argument that requiring AI companies to pay for training data would stifle innovation, calling it ridiculous and noting that “[t]hese products are expected to generate billions, even trillions, of dollars” and developers “will figure out a way to compensate copyright holders for it.”[18]

For Seedance 2.0, the market dilution warning is particularly salient. When users can generate fifteen-second clips of recognizable Hollywood characters, scenes, and actors at zero marginal cost, the potential for market substitution is not hypothetical; it is already occurring. As Deadpool writer Rhett Reese observed upon viewing the Cruise-Pitt clip: “I hate to say it. It’s likely over for us.”[19]

The U.S. Copyright Office Weighs In: Part 3 of the AI Report

On May 9, 2025, the U.S. Copyright Office released its 108-page Copyright and Artificial Intelligence Part 3: Generative AI Training report, offering its most comprehensive guidance to date on whether using copyrighted works to train generative AI constitutes fair use.[20] While not binding on courts, the Copyright Office’s analysis is persuasive authority that will inform judicial reasoning in pending cases.

The report’s central conclusion is directly relevant to Seedance 2.0: “making commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets, especially where this is accomplished through illegal access, goes beyond established fair use boundaries.”[21] The Copyright Office further concluded that where AI outputs closely resemble and compete with original works in their existing markets, fair use does not apply.[22]

The report also addressed market harm, finding that “[t]he speed and scale at which AI systems generate content pose a serious risk of diluting markets for works of the same kind as in their training data,” creating unprecedented competition for sales of an author’s works.[23] This finding echoes Judge Chhabria’s warning in Kadrey and provides additional intellectual support for arguments that AI video generators like Seedance 2.0 inflict cognizable market harm.

Beyond Copyright: The Right of Publicity and Celebrity Likeness

Copyright infringement is only one dimension of the legal exposure created by Seedance 2.0. The viral Tom Cruise-Brad Pitt video, and numerous other Seedance clips featuring recognizable celebrity likenesses, implicate the right of publicity, a body of state law that protects individuals from the unauthorized commercial exploitation of their name, image, voice, or likeness.

The Foundational Precedent: Midler v. Ford Motor Co. (9th Cir. 1988)

The Ninth Circuit’s decision in Midler v. Ford Motor Co., 849 F.2d 460 (9th Cir. 1988), remains the foundational precedent for voice and likeness misappropriation claims. When Ford used a sound-alike singer to imitate Bette Midler’s voice in a commercial, after Midler had explicitly declined to participate, the court held that “when a distinctive voice of a professional singer is widely known and is deliberately imitated in order to sell a product, the sellers have appropriated what is not theirs and have committed a tort…”[24] The court famously observed that “[a] voice is as distinctive and personal as a face.”[25]

The Midler logic extends naturally to AI-generated celebrity likenesses. When Seedance 2.0 produces video of a recognizable Tom Cruise without authorization, it is doing precisely what Ford did: deliberately deploying an imitation of a distinctive, commercially valuable identity attribute to attract users and generate commercial value.

The Johansson-OpenAI Episode: A Modern Illustration

The 2024 controversy between Scarlett Johansson and OpenAI provided a vivid modern illustration of right of publicity principles applied to AI. After Johansson declined OpenAI’s request to license her voice for ChatGPT’s voice assistant, OpenAI launched a voice called “Sky” that observers widely compared to Johansson’s voice in the 2013 film Her, a comparison that CEO Sam Altman invited by posting the word “Her” on X.[26] Legal experts noted that under Midler and its progeny, even if OpenAI used a different actress, the deliberate imitation of Johansson’s distinctive voice for commercial advantage could give rise to liability.[27]

The Seedance 2.0 situation is arguably more egregious. While the Johansson-OpenAI dispute involved a voice that merely resembled the actress, Seedance 2.0 produces video content that directly depicts identifiable celebrities in wholly fabricated scenarios. The right of publicity claim is correspondingly stronger when the misappropriation extends beyond vocal similarity to full visual and behavioral likeness.

State Law Patchwork and the Need for Federal Action

Approximately thirty-five states recognize some form of statutory or common law right of publicity.[28] California’s statute, which prohibits the unauthorized use of a person’s “name, voice, signature, photograph, or likeness” for advertising or selling purposes, is among the strongest. However, the state-by-state patchwork creates enforcement challenges, particularly against foreign operators such as ByteDance, whose assets may lie largely beyond the reach of state-court judgments.

Legislative Developments: The NO FAKES Act and the Take It Down Act

Recognizing the limitations of existing law, Congress has moved to address AI-generated likenesses through two significant legislative initiatives.

The NO FAKES Act of 2025

On April 9, 2025, a bipartisan group of Senators reintroduced the Nurture Originals, Foster Art, and Keep Entertainment Safe Act (“NO FAKES Act”), which would create a federal “digital replication right” allowing individuals to control AI-generated versions of their voice or likeness.[29] The proposed law would make it unlawful to create or distribute an AI-generated replica of a person’s voice or likeness without consent, with limited exceptions for satire, commentary, and news reporting. The right would extend for up to seventy years after death and would be transferable to heirs.[30]

The NO FAKES Act has garnered unusually broad support from both the entertainment industry and major technology companies.[31] If enacted, it would provide a federal cause of action directly applicable to platforms like Seedance 2.0 that generate celebrity likenesses without authorization, eliminating the need to navigate the patchwork of state right of publicity laws.

The Take It Down Act (Enacted May 2025)

On May 19, 2025, President Trump signed the Take It Down Act into law, the first federal statute specifically addressing AI-generated deepfakes.[32] The Act prohibits the knowing publication of non-consensual intimate visual depictions and deepfakes intended to cause harm, and requires covered platforms to establish notice-and-takedown procedures for such content by May 19, 2026. While the Take It Down Act is narrower in scope than the NO FAKES Act, focusing primarily on intimate imagery, it establishes the principle that AI-generated depictions of individuals can trigger federal liability.

The Sora 2 Precedent: A Roadmap for Seedance 2.0 Enforcement

The Seedance 2.0 controversy echoes and escalates a near-identical confrontation between Hollywood and OpenAI over Sora 2 in October 2025. When OpenAI launched Sora 2 with an opt-out copyright policy that allowed users to generate videos featuring copyrighted characters unless rights holders specifically requested exclusion, the MPA, studios, and talent agencies condemned the approach.[33] Warner Bros. Discovery stated that “decades of enforceable copyright law establishes that content owners do not need to ‘opt out’ to prevent infringing uses of their protected IP.”[34]

Under intense pressure, OpenAI reversed course within seventy-two hours, moving to an opt-in model requiring affirmative permission before copyrighted characters could appear in Sora 2 videos.[35] The speed of this reversal raised an uncomfortable question: if OpenAI could implement these guardrails in three days, why were they absent at launch?

The Sora 2 episode ultimately led to a landmark licensing deal with Disney in December 2025, under which Disney licensed more than 200 characters from its Disney, Marvel, Pixar, and Star Wars properties for use on Sora in exchange for a $1 billion equity investment in OpenAI.[36] Crucially, the Disney-OpenAI agreement explicitly excluded talent likenesses and voices, did not authorize OpenAI to use Disney IP for AI training, and imposed robust controls on character usage.[37]

The Sora 2 arc, from unrestrained generation to industry backlash to licensing agreements, may foreshadow the trajectory of Seedance 2.0. However, a critical difference exists: OpenAI is a U.S.-based company subject to U.S. jurisdiction and responsive to reputational and commercial pressure from Hollywood partners. ByteDance, while operating globally, is headquartered in Beijing and has shown markedly less responsiveness to Western intellectual property concerns. As of this writing, ByteDance has not responded to requests for comment regarding the MPA’s demands.[38]

Enforcement Challenges: DMCA, Jurisdiction, and Scale

Even where the substantive law supports infringement claims, enforcement against AI-generated content presents formidable practical challenges.

The Inadequacy of the DMCA Framework

The Digital Millennium Copyright Act’s notice-and-takedown system was designed for discrete, static infringing files: a pirated movie uploaded to a hosting platform, a song shared on a file-sharing service. AI-generated content fundamentally breaks this model. Each Seedance 2.0 output is dynamically produced in response to a user prompt and delivered directly to the user. There is no persistent “file” to target with a takedown notice; the infringing content is generated, consumed, and potentially shared before any rights holder can identify it.[39]

Moreover, the DMCA’s one-at-a-time takedown procedure is wholly inadequate when a single AI service can generate millions of potentially infringing outputs per day. Rights holders cannot be expected to monitor and send individual takedown notices for each infringing generation. As the MPA has argued in the Sora 2 context, the burden of preventing infringement must rest on the platform, not the rights holder.[40]

Jurisdictional Complexity

ByteDance’s Chinese headquarters raises significant jurisdictional questions. While U.S. courts may assert jurisdiction over ByteDance based on its U.S. operations (including TikTok), enforcing a judgment against a Chinese company presents well-documented challenges. China does not enforce U.S. civil judgments, and ByteDance’s assets outside the United States may be difficult to reach. This jurisdictional gap is precisely what makes legislative solutions like the NO FAKES Act, which could authorize injunctive relief against platforms operating in the United States, particularly important.

The Emerging Licensing Model: Confrontation Leading to Collaboration?

Despite the adversarial framing of the Seedance 2.0 controversy, the broader trajectory of the AI-content industry suggests that litigation may serve primarily as a forcing mechanism for licensing. The pattern is emerging clearly:

In music, Warner Music settled its litigation against AI music generator Suno and pivoted into a licensing partnership.[41] In film and entertainment, Disney’s $1 billion investment in OpenAI, coupled with a three-year character licensing agreement, demonstrated that major IP holders see commercial opportunity in controlled AI content generation.[42] In publishing, the $1.5 billion Bartz v. Anthropic settlement created a financial template for compensating authors whose works were used in training.[43]

This pattern suggests that the endgame may not be prohibition but rather structured licensing regimes in which AI platforms pay for the right to train on and generate content using copyrighted works and celebrity likenesses. For intellectual property holders, this means that the litigation and public pressure campaigns currently being directed at ByteDance serve a dual purpose: vindicating legal rights and establishing the commercial leverage necessary to negotiate favorable licensing terms.

Practical Recommendations for IP Holders

Based on our analysis of the current legal landscape, we recommend that intellectual property holders consider the following measures:

1. Document and preserve evidence of infringement. Screenshot and archive AI-generated content that incorporates your protected works or likenesses. Establish a systematic monitoring protocol for major AI generation platforms. This evidence will be essential for any future litigation or licensing negotiation.

2. Ensure copyright registrations are current. Under U.S. law, copyright registration is a prerequisite to filing an infringement action. The court in Andersen v. Stability AI dismissed claims from artists who had not obtained registrations.[44] Proactive registration of key works, including character designs, film footage, and other visual assets, is essential.

3. Evaluate right of publicity protections. For talent and celebrities, review the right of publicity laws applicable in your jurisdiction and consider whether federal legislation such as the NO FAKES Act would strengthen your position. In states with strong statutory protections, such as California, consider proactive enforcement against platforms generating unauthorized likenesses.

4. Engage with industry coalitions. The MPA, SAG-AFTRA, and the Human Artistry Campaign’s “Stealing Isn’t Innovation” initiative represent coordinated efforts to shape both judicial and legislative outcomes. Participation in these coalitions amplifies the voice of individual rights holders and contributes to the development of industry-wide standards.

5. Prepare for licensing negotiations. The Disney-OpenAI template suggests that major AI platforms will ultimately seek licensed access to copyrighted content. IP holders should begin developing licensing frameworks, valuation methodologies, and contractual protections for AI use cases. Those who wait for litigation to force negotiations may find themselves at a disadvantage relative to early movers.

6. Monitor legislative developments. The NO FAKES Act, the Take It Down Act, and the more than 300 state-level deepfake bills introduced in 2025 reflect a rapidly evolving regulatory landscape.[45] IP holders should actively engage with legislative processes to ensure that new laws adequately protect their interests.

Conclusion

ByteDance’s Seedance 2.0 represents a qualitative leap in generative AI capability and a correspondingly significant escalation of the legal challenges that AI video generation poses to copyright holders and to individuals whose likenesses are exploited without consent. The existing legal framework provides multiple avenues for redress, including copyright infringement claims at both the training and output stages, right of publicity claims under state law, and, should the NO FAKES Act be enacted, federal digital replication rights.

The three fair use decisions of 2025 provide a nuanced but ultimately encouraging landscape for rights holders. Thomson Reuters established that non-transformative, competitive AI uses are not fair use. Bartz drew a clear line against piracy as a means of acquiring training data. And Kadrey, while finding for Meta on the specific facts presented, explicitly warned that market dilution by AI-generated content will “often cause plaintiffs to decisively win” the fair use analysis.

The path forward will likely be a combination of litigation, legislation, and ultimately, structured licensing. For IP holders, the imperative is clear: protect your rights aggressively, participate in industry coalitions, prepare your licensing positions, and ensure that the unprecedented capabilities of generative AI do not become a license to appropriate without compensation the creative works on which our cultural industries depend.

ENDNOTES

[1] Variety, “After AI Video of ‘Tom Cruise’ Fighting ‘Brad Pitt’ Goes Viral, Motion Picture Association Denounces ‘Massive’ Infringement on Seedance 2.0,” Feb. 13, 2026.

[2] PEOPLE, “AI-Generated Video of Brad Pitt and Tom Cruise Fighting Sparks Backlash in Hollywood,” Feb. 17, 2026.

[3] Deadline, “MPA Calls On TikTok Owner ByteDance To Curb New AI Model That Created Tom Cruise Vs. Brad Pitt Deepfake,” Feb. 13, 2026.

[4] Variety, supra note 1 (quoting MPA Chairman Charles Rivkin).

[5] Variety, “SAG-AFTRA Slams ‘Blatant Infringement’ in Seedance AI Videos,” Feb. 13, 2026.

[6] Deadline, “Seedance 2.0’s AI Deepfakes Slammed As ‘Destructive’ By Creative Group,” Feb. 13, 2026.

[7] Andersen v. Stability AI Ltd., No. 3:23-cv-00201 (N.D. Cal. Aug. 12, 2024) (order on motion to dismiss second amended complaint).

[8] IPWatchdog, “Copyright and AI Collide: Three Key Decisions on AI Training and Copyrighted Content from 2025,” Dec. 23, 2025.

[9] Thomson Reuters Enterprise Centre GmbH v. Ross Intelligence Inc., No. 1:20-cv-00613-SB (D. Del. Feb. 11, 2025).

[10] Reed Smith, “Court Shuts Down AI Fair Use Argument in Thomson Reuters v. Ross Intelligence,” Feb. 2025; Loeb & Loeb, “Thomson Reuters v. Ross Intelligence, Inc.,” Feb. 2025.

[11] Bloomberg Law, “Mixed Anthropic Ruling Builds Roadmap for Generative AI Fair Use,” June 25, 2025.

[12] Bartz v. Anthropic PBC, No. 3:24-cv-05417 (N.D. Cal. June 23, 2025) (order on summary judgment).

[13] Id.; ArentFox Schiff, “Landmark Ruling on AI Copyright: Fair Use vs. Infringement in Bartz v. Anthropic,” 2025.

[14] Kluwer Copyright Blog, “The Bartz v. Anthropic Settlement: Understanding America’s Largest Copyright Settlement,” 2025. Settlement amount: $1.5 billion for approximately 500,000 works (≈$3,000 per work).

[15] Kadrey v. Meta Platforms, Inc., No. 3:2023cv03417 (N.D. Cal. June 25, 2025).

[16] Skadden, “Fair Use and AI Training: Two Recent Decisions Highlight the Complexity of This Issue,” July 2025.

[17] Id.; FisherBroyles, “Client Alert – Summary and Strategic Analysis of Judge Chhabria’s Fair Use Ruling in Kadrey v. Meta,” 2025.

[18] Kadrey, supra note 15; Goodwin, “Northern District of California Judge Rules That Meta’s Training of AI Models Is Fair Use,” June 2025.

[19] Variety, supra note 1 (quoting Rhett Reese).

[20] U.S. Copyright Office, “Copyright and Artificial Intelligence Part 3: Generative AI Training,” May 9, 2025.

[21] Id. at 5–10; Authors Guild, “U.S. Copyright Office Releases Part 3 of AI Report: What Authors Should Know,” 2025.

[22] Wiley, “Copyright Office Issues Key Guidance on Fair Use in Generative AI Training,” 2025.

[23] U.S. Copyright Office, supra note 20.

[24] Midler v. Ford Motor Co., 849 F.2d 460, 463 (9th Cir. 1988).

[25] Id.

[26] NPR, “Scarlett Johansson Wants Answers About ChatGPT Voice That Sounds Like ‘Her,’” May 20, 2024; CNN, “Why OpenAI Should Fear a Scarlett Johansson Lawsuit,” May 22, 2024.

[27] Georgetown University, “OpenAI v. Scarlett Johansson? Law Professor Answers Legal Questions on AI-Generated Content,” 2024; American Bar Association, “OpenAI’s Use of Scarlett Johansson-Like Voice in ChatGPT Exposed Gaps in the Law,” Nov. 2024.

[28] Iowa Law Review, “How Can Iowans Effectively Prevent the Commercial Misappropriation of Their Identities? Why Iowa Needs a Right of Publicity Statute,” Nov. 15, 2025.

[29] H.R.2794, 119th Congress (2025–2026): NO FAKES Act of 2025; Columbia Undergraduate Law Review, “A New Age of Publicity: The NO FAKES Act and Federal Regulation on AI Replicas,” 2025.

[30] O’Melveny, “Proposed Legislation Reflects Growing Concern Over ‘Deep Fakes’: What Companies Need to Know,” 2025.

[31] CNN, “Celebrity AI Deepfakes Are Flooding the Internet. Hollywood Is Pushing Congress to Fight Back,” Mar. 8, 2025.

[32] Skadden, “‘Take It Down Act’ Requires Online Platforms to Remove Unauthorized Intimate Images and Deepfakes When Notified,” June 2025.

[33] CNBC, “OpenAI’s Sora 2 Must Stop Allowing Copyright Infringement, Motion Picture Association Says,” Oct. 7, 2025.

[34] Los Angeles Times, “Hollywood-AI battle deepens, as OpenAI and studios clash over copyrights and consent,” Oct. 11, 2025.

[35] Copyright Lately, “Sora, Not Sorry: OpenAI Backtracks on Opt-Out Copyright Policy,” Oct. 4, 2025.

[36] NPR, “Billion-Dollar OpenAI Deal Allows Users to Make Content with Disney Characters,” Dec. 11, 2025; CNBC, “Disney Making $1 Billion Investment in OpenAI, Will Allow Characters on Sora AI Video Generator,” Dec. 11, 2025.

[37] Variety, “Disney Inks Blockbuster OpenAI Deal to Bring More Than 200 Characters to Sora Video Platform, Will Invest $1 Billion in AI Company,” Dec. 11, 2025.

[38] Variety, supra note 1.

[39] PatentPC, “DMCA Takedowns for AI-Generated Content: What Creators Need to Know,” 2025; Oxford Academic, “From Safe Harbours to AI Harbours: Reimagining DMCA Immunity for the Generative AI Era,” 2025.

[40] Variety, “Motion Picture Association Blasts OpenAI Over Sora 2 Video Copyright Opt-Outs,” Oct. 2025.

[41] Copyright Alliance, “AI Copyright Lawsuit Developments in 2025: A Year in Review,” 2025.

[42] Disney-OpenAI Agreement, supra notes 36–37.

[43] Bartz v. Anthropic Settlement, supra note 14.

[44] Andersen v. Stability AI Ltd., No. 3:23-cv-00201 (N.D. Cal. Oct. 2023) (order on initial motion to dismiss).

[45] Regula Forensics, “Deepfake Regulations: AI and Deepfake Laws of 2025,” 2025; Deepfake Legislation Tracker, programs.com, 2025.

 

This advisory is provided for informational purposes only and does not constitute legal advice. The views expressed are those of the authors and do not necessarily reflect the views of Devlin Law Firm or its clients. For specific legal guidance, please contact the firm’s Intellectual Property Practice Group.

Devlin Law Firm  |  devlinlawfirm.com

Contributors:

Robyn T. Williams

Robyn T. Williams is a partner at Devlin Law Firm and Co-chair of the Trademark Practice Group. Please contact Robyn T. Williams via the Devlin Law Firm site to schedule a consultation.

correspondence@devlinlawfirm.com

 

Angie Chen, Associate

correspondence@devlinlawfirm.com

Devlin Law Firm

 

>> Learn More about Devlin Law Firm Copyright Litigation and Prosecution