Elections officials in Utah and nationwide are facing an onslaught of disinformation from people trying to discredit democratic institutions, Lt. Gov. Deidre Henderson said Monday, and it requires that voters be “vigilant” and trust the systems in place.
Henderson’s comments came as students at Utah Valley University presented the results of their research on people’s ability to recognize a deepfake video generated with artificial intelligence.
Half of the students surveyed in the research could not tell a deepfake from an actual human, and, in many instances, the deepfake was considered more knowledgeable, trustworthy and persuasive than a human speaker.
Using eye-tracking software and measuring physical reactions, the student researchers also found that subjects were more engaged with deepfakes than with videos of an actual person, but experienced more confusion as well.
Henderson said that sometimes, amid the “disinformation,” there are legitimate concerns raised “but sometimes those questions and concerns are raised because people are trying deliberately to undermine our confidence and faith and trust in our democratic process.”
“The tools may be different, the methods may be different, but the attempts to dissuade, the attempts to undermine, the attempts to trick people, that’s nothing new,” she said, “but it is something that we have to be continually vigilant and guard against.”
Henderson, as lieutenant governor, is the director of state elections. She also is running for reelection this year with Gov. Spencer Cox. In June, a deepfake circulated of Cox purportedly admitting to election improprieties.
The Utah Valley University students’ project was the first phase in testing how convincing AI-generated images can be. Brandon Amacher, director of the Emerging Tech Lab and an instructor of national security at UVU who oversaw the students’ work, said the goal was to quantify the problem.
Next, the students plan to create a political campaign with an AI-generated candidate to specifically test its application in politics.
Russia, Iran, China and North Korea have all used deepfakes to promote their interests this election, said Michael Kaiser, president and CEO of Defending Digital Campaigns, a nonpartisan nonprofit that provides cybersecurity resources to political campaigns. But the aim is not necessarily to sway voters one way or the other.
“The goal of the people creating deepfakes is not just about the way you vote, it’s to make our democracy not work. That is their goal,” he said. “In some ways, they could care less who you vote for as long as you believe the system doesn’t work, because, for our adversaries, that’s winning.”
Amacher said the spread of deepfakes can “contribute to the rot we have in the trust in our information landscape, and we get to a point where the termites eat out the foundation and we don’t know if we need to condemn the building anymore.”
Kaiser and other experts on the panel said that when people see a video online that seems sensational or generates an emotional response, they should pause, try to check the source of the original post and question the poster’s intent before spreading it further.