Handle: 20.500.12592/wvgbcb

Fighting deepfakes when detection fails

14 Nov 2019

Deepfakes intended to spread misinformation are already a threat to online discourse, and there is every reason to believe this problem will become more significant in the future. So far, most research and mitigation efforts have focused on automated deepfake detection, which will aid deepfake discovery for the next few years. However, the outlook is worse than cybersecurity's perpetual cat-and-mouse game: automated deepfake detection is likely to become impossible in the relatively near future, as the techniques that generate fake digital content improve considerably. In addition to supporting the near-term creation and responsible dissemination of deepfake detection technology, policymakers should invest in discovering and developing longer-term solutions. Policymakers should take actions that:

- Support ongoing deepfake detection efforts with continued funding through DARPA's MediFor program, and add new grants to support collaboration between detection efforts and the training of journalists and fact-checkers to use these tools.
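The reason detection is expected to fail is structural: a generator can be trained against any fixed detector, shrinking the statistical artifacts the detector relies on until real and fake content are indistinguishable. The toy simulation below illustrates that dynamic. It is a sketch under stated assumptions, not any real detection system: "media" is reduced to a single hypothetical artifact feature, the detector is a simple threshold classifier refit each round, and the generator's improvement is modeled by shrinking the artifact gap.

```python
import random

random.seed(0)

def sample_real(n):
    # Toy assumption: real media yields one artifact feature near 1.0.
    return [random.gauss(1.0, 0.3) for _ in range(n)]

def sample_fake(n, gap):
    # The generator's telltale artifacts shrink as `gap` approaches 0.
    return [random.gauss(1.0 - gap, 0.3) for _ in range(n)]

def fit_threshold(real, fake):
    # Detector: classify by the midpoint between the two sample means.
    return (sum(real) / len(real) + sum(fake) / len(fake)) / 2

def accuracy(real, fake, t):
    # Fraction of samples the threshold detector labels correctly.
    hits = sum(x > t for x in real) + sum(x <= t for x in fake)
    return hits / (len(real) + len(fake))

gap = 1.0
accs = []
for _ in range(6):
    real, fake = sample_real(2000), sample_fake(2000, gap)
    t = fit_threshold(real, fake)
    accs.append(accuracy(real, fake, t))
    gap *= 0.4  # each round, the generator closes the artifact gap

print([round(a, 2) for a in accs])
```

Even though the detector is refit on fresh data every round, its accuracy falls toward 50% (coin-flip) as the gap closes, which is the abstract's point: improvement on the generation side eventually erases the signal detection depends on.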
Topics

technology & innovation

Authors

Alex Engler

Published in
United States of America