Leaders pitch law to regulate harmful AI images
Proposed measure would provide enhanced civil and criminal penalties for pornographic deepfakes
The emerging threat posed by deepfake and pornographic images created with new artificial intelligence tools calls for a comprehensive law to protect New Mexicans, said two state leaders who outlined their proposed legislation Thursday.
New Mexico Attorney General Raúl Torrez and state Rep. Linda Serrato, D-Santa Fe, said the proposed bill would create the state's first legal framework for regulating AI images, and enhance civil and criminal penalties for violators.
Key elements include setting technical standards for AI developers and authorizing the New Mexico Department of Justice to investigate large tech companies' enforcement of those standards, Torrez and Serrato said at a news conference in Albuquerque.
The measure, called the Artificial Intelligence Accountability Act, would require AI companies to include digital markers in content to help identify creators of harmful deepfakes and allow people to pursue civil actions against violators.
"One of the difficult things because of this technology is the way in which it allows anonymous people to create real harm to others," Torrez said. The bill includes a provision that helps people harmed by malicious video or audio to pursue civil litigation against the creators, he said.
"This creates a private right of action for individuals who may have been harmed by the unlawful production and dissemination of those materials," Torrez said. The act would allow individuals to recover actual damages, or $1,000 per view, he said.
"That's a steep and heavy price to pay, but I think it is in line with the type of harm and the necessary deterrent for making sure that this activity is not in any way allowed," he said.
Torrez and Serrato said they plan to introduce the bill in the 2026 regular session of the Legislature, which begins Tuesday. The bill had not been prefiled as of Thursday.
The measure would authorize the New Mexico Department of Justice to bring civil action and seek penalties up to $15,000 per violation from companies that fail to comply with technical standards.
The need for the measure was illustrated this week by the arrest of an Albuquerque man who allegedly lifted photos of children from social media platforms and used commercial AI tools to manufacture thousands of images of himself engaged in sex acts with nude children, Torrez said.
Richard Gallagher, 68, was arraigned Wednesday in Bernalillo County Metropolitan Court on 12 felony charges, including manufacturing and distribution of visual medium of sexual exploitation of children. He remained in custody Thursday at the Metropolitan Detention Center.
"This is the first instance, as far as we are aware, of someone who actually used artificial intelligence to generate images of sexual exploitation" using public images of children and AI imaging tools, Torrez said. "I think it's important for New Mexico to take a leading role in trying to develop a framework that sets clear guidelines and boundaries for the ethical development of this technology."
The proposed bill also would add a one-year enhancement to the prison sentence of anyone convicted of using AI to manufacture harmful deepfake images.
Serrato said she led an AI summit in December attended by 140 people with a clear interest in using the technology for a variety of legitimate purposes.
"But it was very clear that in order to get an AI ecosystem that we want to see in our state, we have to crack down on the bad actors who are misusing this and harming individuals," Serrato said.