“This work represents an important step forward in strengthening our information edge as we combat sophisticated disinformation campaigns and synthetic media threats,” says Bustamante. Hive was selected from a pool of 36 companies to test its deepfake detection and attribution technology with the DOD. The contract will enable the department to detect and respond to AI fraud at scale.
Kevin Guo, CEO of Hive AI, says defending against deepfakes is “existential.” “This is the evolution of cyber warfare,” he says.
Hive’s technology has been trained on a large corpus of content, both AI-generated and not. It picks up on signals and patterns in AI-generated content that are invisible to the human eye but can be detected by an AI model.
“It turns out that every image produced by one of these generators carries that kind of pattern if you know where to look for it,” says Guo. The Hive team continuously tracks new generative models and updates its technology accordingly.
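Hive has not published its architecture, training data, or API, but the general approach described here, a binary classifier trained on labeled real and AI-generated images, can be illustrated with a short sketch. Everything below is an assumption for illustration: the PyTorch ResNet backbone, the DeepfakeDetector class, and the score_image helper are hypothetical names, not Hive's actual system.

```python
# Illustrative sketch only: Hive's real model, data, and API are not public.
# This shows the approach the article describes: a binary classifier trained
# on real vs. AI-generated images, learning generator artifacts that humans
# cannot see. All names here are hypothetical.
import torch
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms

class DeepfakeDetector(nn.Module):
    def __init__(self):
        super().__init__()
        # A standard CNN backbone; the final layer outputs a single logit
        # for the probability that the input image is AI-generated.
        self.backbone = models.resnet18(weights=None)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)

    def forward(self, x):
        return self.backbone(x)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def score_image(model: DeepfakeDetector, path: str) -> float:
    """Return the model's probability that an image file is AI-generated."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    model.eval()
    with torch.no_grad():
        logit = model(img)
    return torch.sigmoid(logit).item()

# Training (omitted) would minimize binary cross-entropy over a large labeled
# corpus of real and generated images, retraining as new generators appear,
# which is the continuous tracking the article mentions.
```

Because each generator tends to leave its own statistical fingerprint, the same setup extends naturally to the attribution side mentioned above: swap the single logit for one output per known generator and the classifier predicts which model produced an image.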
The Department of Defense said in a statement that the tools and methodologies developed through this initiative have the potential for broad application, not only in solving defense-specific problems but also in protecting civilian institutions from disinformation and fraud.
Hive’s technology offers state-of-the-art performance in detecting AI-generated content, says Siwei Lyu, a professor of computer science and engineering at the University at Buffalo. He was not involved in Hive’s work but has tested its detection tools.
University of Chicago professor Ben Zhao, who independently evaluated Hive AI’s deepfake technology, agrees, but notes that it is far from perfect.
“Hive is certainly better than most of the commercial offerings and some of the research techniques we tried, but we also showed that it is not at all hard to circumvent,” says Zhao. His team found that attackers could manipulate images in ways that bypass Hive’s detection.
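The article does not say how Zhao’s team circumvented the detector, so the sketch below shows only the general shape of such a robustness test: apply cheap perturbations to a known AI-generated image and check whether the detector’s score drops below its decision threshold. It reuses the hypothetical DeepfakeDetector and preprocess from the earlier sketch, and nothing in it reflects the team’s actual methodology.

```python
# Sketch of perturbation-based evasion testing, assuming the hypothetical
# DeepfakeDetector and preprocess defined in the earlier sketch. This is
# not Zhao's method, which the article leaves unstated.
import io

import numpy as np
import torch
from PIL import Image

def score_pil(model, img: Image.Image) -> float:
    """Probability that a PIL image is AI-generated, per the detector."""
    x = preprocess(img.convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return torch.sigmoid(model(x)).item()

def perturbations(img: Image.Image):
    """Yield (name, perturbed image) pairs an attacker might try."""
    # Heavy JPEG recompression discards high-frequency generator artifacts.
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=30)
    buf.seek(0)
    yield "jpeg_q30", Image.open(buf).convert("RGB")

    # Downscale-then-upscale resampling blurs pixel-level patterns.
    small = img.resize((img.width // 2, img.height // 2))
    yield "resample", small.resize((img.width, img.height))

    # Low-amplitude Gaussian noise can shift the features the model keys on.
    arr = np.asarray(img, dtype=np.float32)
    noisy = np.clip(arr + np.random.normal(0.0, 4.0, arr.shape), 0, 255)
    yield "noise", Image.fromarray(noisy.astype(np.uint8))

def test_evasion(model, path: str, threshold: float = 0.5) -> None:
    """Report which cheap perturbations push the score below threshold."""
    base = Image.open(path).convert("RGB")
    print(f"original: score={score_pil(model, base):.3f}")
    for name, img in perturbations(base):
        score = score_pil(model, img)
        print(f"{name}: score={score:.3f} evaded={score < threshold}")
```

The point of a test like this is that none of the perturbations require access to the detector’s internals; if scores collapse under ordinary recompression or resizing, an attacker can evade detection with off-the-shelf image tools.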