The Take It Down Act: U.S. Federal Law Addressing Non-Consensual Intimate Imagery and AI-Generated Deepfakes
On May 19, 2025, the United States enacted the Take It Down Act, a federal law targeting the distribution of non-consensual intimate images, including those generated or manipulated by artificial intelligence. This legislation introduces new legal obligations for online platforms and formally defines “digital forgeries” as a prosecutable category under U.S. law. The Act was publicly endorsed by First Lady Melania Trump, who advocated for stronger protections against the exploitation of intimate imagery, particularly where minors and AI-generated content are involved.
Overview of the Take It Down Act
The legislation, formally titled the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act, was introduced by Senators Ted Cruz and Amy Klobuchar. It criminalizes the knowing publication, or threat of publication, of intimate imagery without the subject's consent.
The law applies to both authentic and AI-generated depictions and outlines specific compliance duties for platforms that host user-generated content.
Key Provisions
Criminal liability for knowingly publishing or threatening to publish intimate images without consent.
Application to AI-generated deepfakes, including imagery altered or fabricated through machine learning or other technologies.
Mandated removal by covered platforms within 48 hours of receiving a valid request from a victim or an authorized representative.
Restitution to victims, along with potential fines and imprisonment for violators.
Obligations for platforms to establish clear reporting mechanisms accessible to individuals seeking removal of such content.
Definition of Digital Forgeries
The law introduces the term “digital forgery” to describe intimate images created or modified using AI, deepfake technology, or other computational tools. Such images are considered unlawful under this act when they portray an identifiable individual and are indistinguishable from authentic depictions.
This classification brings synthetic media squarely within the scope of intimate-privacy protections and broadens the law's reach over image-based abuse.
Effective Date and Implementation
The Take It Down Act was signed into law on May 19, 2025, and its obligations phase in as follows:
The criminal provisions took effect immediately upon signing and are enforceable now.
Covered platforms must establish a notice-and-removal process, including the 48-hour takedown requirement, no later than one year after enactment (May 19, 2026), with enforcement by the Federal Trade Commission.
Service providers should begin implementing content moderation, reporting channels, and risk protocols now to meet that deadline.
The criminal provisions carry no grace period, and the one-year window for platforms is a deadline, not a reason to wait. Organizations are expected to act without delay to meet the obligations outlined in the statute.
Compliance Considerations for Organizations
The introduction of this law requires organizations, particularly those offering content-sharing platforms, AI-based media services, or user engagement tools, to assess operational readiness and legal exposure.
ART25 Consulting recommends the following steps:
Review moderation policies and takedown procedures to ensure alignment with the 48-hour rule.
Assess existing AI governance frameworks to account for misuse of generative technologies in synthetic imagery.
Implement internal protocols for documenting, responding to, and auditing takedown requests.
Train relevant staff in compliance, risk mitigation, and victim-centered handling of removal requests.
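To make the 48-hour requirement in the steps above concrete, the sketch below models a takedown request as a record with a deadline derived from the time a valid request is received. It is purely illustrative: every name (`TakedownRequest`, `overdue_requests`, the 48-hour constant's placement) is a hypothetical design choice, not drawn from the statute or any real platform's API, and a production system would need identity verification, audit logging, and legal review on top of it.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

# Illustrative only: hypothetical names, not a statutory or vendor API.
TAKEDOWN_WINDOW = timedelta(hours=48)  # the Act's removal window

@dataclass
class TakedownRequest:
    request_id: str
    reporter: str        # victim or authorized representative
    content_url: str
    received_at: datetime            # when a valid request was received
    resolved_at: Optional[datetime] = None  # set once content is removed

    @property
    def deadline(self) -> datetime:
        # The 48-hour clock starts at receipt of a valid request.
        return self.received_at + TAKEDOWN_WINDOW

    def is_overdue(self, now: datetime) -> bool:
        # Open and past the window -> escalate.
        return self.resolved_at is None and now > self.deadline

def overdue_requests(requests: list[TakedownRequest], now: datetime) -> list[TakedownRequest]:
    """Return open requests past the 48-hour window, for escalation and audit."""
    return [r for r in requests if r.is_overdue(now)]
```

Keeping all timestamps timezone-aware (e.g. UTC) avoids ambiguity when computing whether a request has crossed the 48-hour boundary, which is the kind of detail an internal audit protocol would need to pin down.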
Conclusion
The Take It Down Act expands the regulatory perimeter around AI-generated media and places new burdens on technology platforms to monitor, detect, and respond to non-consensual imagery. While questions remain regarding enforcement and scope, the law is now active and requires immediate attention from legal, compliance, and technology teams.
For organizations operating globally, the Act also signals a potential trend toward stricter treatment of synthetic content and digital identity abuse across jurisdictions.
If you have any questions or would like to discuss how this law may affect your operations, please don't hesitate to contact us.