Manjeet Rege, professor of software engineering and data science at the University of St. Thomas School of Engineering, recently spoke with PolitiFact about the rise of audio deepfakes and the increasing difficulty of detecting them.
From the story:
We tested four free online tools that claim to determine whether an audio clip is AI-generated. Only one of them signaled that the Biden-like robocall was likely AI-generated. Experts told us that AI audio detection tools are lacking in accuracy and shouldn’t be trusted on their own. Nevertheless, people can employ other techniques to spot potential misinformation.
“Audio deepfakes may be more challenging to detect than image or video deepfakes,” said Manjeet Rege, director of the Center for Applied Artificial Intelligence at the University of St. Thomas. “Audio lacks the full context and visual cues of video, so believable audio alone is easier to synthesize convincingly.” ...
Rege said that although researchers have also developed open-source detection tools, their accuracy remains to be seen.
“I would say no single tool is considered fully reliable yet for the general public to detect deepfake audio,” Rege said. “A combined approach using multiple detection methods is what I will advise at this stage.”