Attorney General Anthony G. Brown has joined a bipartisan coalition of 47 state and territorial attorneys general pressing major search engines and payment platforms to take stronger action against the proliferation of deepfake nonconsensual intimate imagery, often referred to as deepfake NCII. The initiative, announced August 26, 2025, in Baltimore, consists of two letters urging the companies to restrict access to the tools and content that fuel this harmful material.
In a letter to search engines including Google Search, Microsoft Bing and Yahoo! Search, the coalition criticizes current shortcomings in preventing the creation and spread of deepfake NCII and advocates safeguards such as user warnings and redirects away from dangerous content. A separate letter to payment processors including American Express, Apple Pay, Google Pay, Mastercard, PayPal and Visa calls on them to deny transaction services to entities linked to deepfake NCII tools or content, cutting off their financial support.

The spread of computer-generated NCII online inflicts significant harm on the public – particularly women and girls. It has increasingly been used to embarrass, intimidate, and exploit people around the world, in cases involving celebrities such as Taylor Swift as well as teenagers in New Jersey, Florida, Washington, Kentucky, South Korea, and Spain. Although deepfake NCII overwhelmingly targets women and girls, men and boys have been victimized as well. A recent report found that 98% of deepfake videos online are pornographic.
The coalition points to established industry practices that could be extended to deepfake NCII. Search engines already restrict results for queries like “how to build a bomb” or “how to kill yourself,” and the attorneys general encourage similar restrictions for terms such as “how to make deepfake pornography,” “undress apps,” “nudify apps,” and “deepfake porn.” For payment platforms, the group urges revoking seller access upon discovery of ties to deepfake NCII.
Joining Attorney General Brown in sending these letters are the attorneys general of Vermont, Kentucky, Massachusetts, New Jersey, Pennsylvania, Utah, Alaska, American Samoa, Arizona, Arkansas, California, Colorado, Connecticut, Delaware, Georgia, Hawaii, Idaho, Illinois, Iowa, Louisiana, Maine, Michigan, Minnesota, Mississippi, Missouri, Nebraska, Nevada, New Hampshire, New Mexico, New York, North Carolina, North Dakota, Ohio, Oklahoma, Oregon, Puerto Rico, Rhode Island, South Carolina, South Dakota, Tennessee, U.S. Virgin Islands, Virginia, Washington, West Virginia, Wisconsin, and Wyoming.
This action comes amid growing concern over deepfake NCII in Maryland. State lawmakers have considered related measures, including a 2023 proposal for a task force on unsolicited sexual imagery and deepfake pornography, prompted by reports that more than 75% of millennial women have received unsolicited lewd images. Maryland lacks a statute criminalizing all forms of deepfake NCII, and calls persist for broader laws targeting defamation and political deception through artificial intelligence. Nationally, at least 48 states have enacted legislation on nonconsensual intimate images, and as of 2024, 26 of those laws address manipulated media such as deepfakes. Defining terms in these technically complex bills remains a challenge.
Victims of deepfake NCII often face severe repercussions, including damage to personal reputation, employment opportunities and mental health. In Maryland, a high-profile incident at Pikesville High School in Baltimore County underscored the broader risks of deepfake technology: an athletic director allegedly used AI to fabricate a racist audio clip of the school’s principal, leading to death threats against the principal and to the athletic director’s arrest in April 2024. Although the case did not involve intimate imagery, it highlighted how deepfakes can incite harm, with experts noting parallels to NCII in eroding trust and causing emotional distress. Minors are particularly vulnerable, as documented in a report on the 2023-2024 school year showing generative AI’s role in technology-facilitated sexual harassment in K-12 settings.
Tech companies’ responses to deepfake NCII have varied. Some platforms, under White House pressure, committed in September 2024 to voluntary measures for reducing image-based sexual abuse. Search engines such as Google and Bing, however, have been criticized for surfacing deepfake pornography prominently in results for celebrity-related queries. Policies differ across firms, and debate continues over who bears responsibility for detecting and removing such content.
Federally, the Take It Down Act, signed by President Trump in May 2025, criminalizes the knowing publication of, or threat to publish, nonconsensual intimate imagery, including AI-generated deepfakes, and gives platforms one year to establish takedown processes. It builds on earlier proposals such as the DEEP FAKES Accountability Act, which stalled in 2020 but would have imposed penalties for undisclosed synthetic media.
Statistics underscore the scale of the problem: the deepfake market reached $563.6 million in 2023 and is projected to hit roughly $13.9 billion by 2032, a compound annual growth rate of 42.79%. Deepfake fraud incidents grew tenfold from 2022 to 2023, and NCII accounts for the overwhelming majority of deepfake videos. In 2025 alone, deepfake-enabled fraud caused more than $200 million in losses, signaling broader societal risks.
For Southern Maryland residents, the coalition’s efforts could strengthen protections against deepfake NCII in local schools and online communities. While no recent NCII cases have been reported in the region, the proximity of the Pikesville incident has heightened awareness, and state-level actions shape county-level enforcement.
The push also aligns with global trends: deepfake NCII has prompted calls worldwide for ethical AI development and stronger victim support. As the technology advances, balancing innovation with safeguards remains key to preventing exploitation.
