AI Detection Tools Nail Easy Fakes, Choke on Complex Ones
Tests of over 12 AI-detection tools reveal a gap between catching simple fakes and handling sophisticated manipulations.
A New York Times investigation put more than 12 AI-detection tools through their paces. The verdict? They can spot the obvious stuff, but throw anything sophisticated at them and they start to crumble.
Most tools handled basic AI-generated fakes reasonably well. The real problems emerged with complex images — the kind of nuanced manipulations that actually fool people in the wild. That's exactly where you need detection to work, and it doesn't.
Video analysis is a bigger blind spot still. Few tools attempt to analyze AI-generated video at all, a growing concern as synthetic video becomes cheaper and easier to produce.
Audio detection was the bright spot: most tools successfully flagged fake audio content.
These tools are increasingly deployed to verify online content. But the gap between catching amateur fakes and catching dangerous ones remains uncomfortably wide.