
The U.S. Congress has approved the "Take It Down Act," a landmark bill aimed at combating the malicious use of non-consensual intimate imagery (NCII), including AI-generated deepfakes. It has garnered bipartisan support, with President Donald Trump and the First Lady advocating for its adoption.
The Act's name is an acronym for "Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks," and it is expected to be signed into law imminently.
At its core, the Take It Down Act seeks to criminalize the sharing of NCII, as well as threats to share it. Violators could face fines and prison sentences of up to two years for offenses involving adults, and up to three years for cases involving minors.
Online platforms are also required to remove flagged NCII content within 48 hours of a victim's notification.
The Act excludes content related to matters of public interest, commercial pornography, and legally sanctioned uses such as medical procedures, national security, and law enforcement activities. Lawmakers designed these carve-outs to clarify the bill's intended scope and to guard against its misuse against legitimate content.
Social media companies and advocacy groups generally regard the criminalization of NCII as a long-overdue measure to curb harms associated with malicious content sharing.
Organizations such as the Cyber Civil Rights Initiative (CCRI) have expressed support for the intent behind criminalizing non-consensual distribution of intimate images, recognizing its potential to reduce abuse and exploitation online.
Despite its noble intent, the Take It Down Act has sparked concerns over its takedown provisions, with critics warning that they could be abused to suppress lawful speech.
The Electronic Frontier Foundation (EFF) has argued that automated content removal systems, which platforms are likely to rely on to comply with the Act's 48-hour takedown requirement, could mistakenly flag lawful content such as news articles, commentary, or other legal material.
According to the EFF, the absence of robust safeguards against false removal requests and the pressure on platforms to monitor encrypted content pose significant threats to privacy and security standards. Smaller platforms, with limited resources to verify takedown claims within the stipulated timeframe, may opt to remove content preemptively to mitigate legal liability.
The CCRI has voiced similar frustrations, noting that certain provisions may unintentionally create gaps for misuse, such as allowing perpetrators who themselves appear in a shared intimate image to bypass penalties. The group fears the Act's lack of safeguards might facilitate politically motivated removals or ideological censorship.