'There's a potential here for things to go very, very bad': George R.R. Martin raises AI concerns
Mel MacKaron, left, an actor, writer and producer, and author and producer George R.R. Martin address members of the Science, Technology and Telecommunications Committee during a meeting at the UNM campus in Los Alamos on Monday.
LOS ALAMOS — In the town that produced the first atomic bomb, New Mexico legislators dedicated a day to another technology — artificial intelligence — that could shape human history.
They heard from university experts and author George R.R. Martin as they explored some of the policy questions facing states amid the rise of AI, machine learning and algorithms.
Martin, the novelist and producer based in Santa Fe, suggested the use of artificial intelligence will emerge as a point of conflict in other professions, just as it has in the Hollywood strikes.
“You have to wonder what jobs are going to be left,” Martin said, “because AI is not going to stop with actors and writers.”
The legislative hearing produced some mind-bending moments. At one point, two University of New Mexico professors called up an AI-generated version of the lecture they’d just given. The themes were remarkably similar, though the AI text tended to be more vague.
It also was unreliable.
Melanie Moses, a University of New Mexico computer science and biology professor, described how she sometimes asks an AI program to write a short biographical blurb about her.
“It is absolutely always wrong in ways that are completely baffling to me,” she said.
The AI program gets basics wrong — such as where she attended school — even when the information could be googled in seconds. The quirk, she said, illustrates the way AI programs that generate text often draw answers from broader statistical associations — perhaps, in her case, an assumption about where computer science professors are likely to have received a doctorate — that can have built-in biases and inaccuracies.
Making art
Martin, in his presentation, raised fundamental questions about what it means to be an author or artist. He said he’s read AI-generated work that isn’t very good.
Even if it improves over time, he said, “will it be good? It might be OK. Maybe it’s just me, but I don’t think a computer — AI — no matter how it’s programmed can do something truly original, something truly moving.”
But there may be practical applications, Martin said, that could force writers or actors out of work. Perhaps a movie studio, he said, would employ an artificial intelligence program to adapt a novel into a screenplay, or use the image of a long-dead actor to star in a new film.
“There’s a potential here for things to go very, very bad — bad for us anyway, not necessarily bad for the big studios,” Martin said.
‘Falsehoods’
Moses, the UNM professor, encouraged lawmakers to focus less on existential questions — like whether artificial intelligence will destroy humanity — and more on practical safeguards for ordinary New Mexicans.
Nuclear weapons, for example, haven’t destroyed Earth, but uranium mining has caused contamination.
For artificial intelligence, Moses said, thinking about similar ripple effects is useful.
Policymakers, she said, might consider requiring transparency so people know when artificial intelligence or a similar technology has been used to produce a fake image or automate the handling of a consumer complaint.
Scams, she said, are likely to grow more convincing and effective.
“The real concern with generative AI,” Moses said, “is it will be really good at producing stories that sound convincing, that sound true, but are based on falsehoods.”
She also encouraged legislators — and the public — to be realistic about what artificial intelligence can and can’t do.
Chat functions powered by artificial intelligence may give the impression of human-like intelligence.
But “it’s not thinking,” Moses said. “This is not a brain.”
On the other hand, Joshua Garland, interim director of the Center on Narrative, Disinformation and Strategic Influence at Arizona State University, encouraged lawmakers to avoid underestimating AI’s potential impact on democracy, even with current imperfections.
He shared realistic-looking videos of a presidential speech that never happened. How many people, he asked, would watch closely enough to notice that Joe Biden’s lips in the video didn’t match the words being spoken?
It could be a concern, he said, for any public figure.
“With three seconds of your voice,” Garland said, “I can make you say anything. It’s a real threat.”
Free classes in digital and media literacy, he said, could be part of a broader strategy to help the public understand the dangers of disinformation.
Legislation expected
Monday’s presentations — held at the UNM campus in Los Alamos — seemed to make an impact on lawmakers.
Senate Majority Leader Peter Wirth, D-Santa Fe, asked his colleagues to help craft legislation that could be introduced in next year’s 30-day legislative session, perhaps by starting with transparency requirements. In the medical arena, for instance, patients should know if AI played a role in a decision about their care, he said.
“The transparency piece seems absolutely critical and a first step,” Wirth said.
More broadly, several lawmakers expressed concern about “dumbing down” society.
Rep. Jason Harper, a Rio Rancho Republican and research engineer at Sandia National Laboratories, outlined his experience asking an AI model to analyze his writing style and do some writing for him.
“I don’t know how you can stop it,” he said. “How do we know anything is original anymore?”
Martin, the author, offered a few of his own scenarios to ponder — deadly drones and AI-programmed soldiers, among them.
“There’s a lot to worry about here for you guys,” he said, “not just us writers.”