DARPA Contracts Lockheed Martin To Develop AI Detection System To Defeat Disinformation And Propaganda

The United States Defense Advanced Research Projects Agency (DARPA) has reportedly contracted Lockheed Martin, the country’s top defense contractor, to produce a prototype censorship system that scans and analyzes media for “disinformation.”

U.S. military researchers are asking Lockheed Martin Corp. to continue work on prototyping a system to detect and defeat automated enemy disinformation campaigns launched by manipulating the Internet, news, and entertainment media. Officials of the Air Force Research Laboratory Information Directorate in Rome, N.Y., announced a $19.3 million order in August to the Lockheed Martin Advanced Technology Laboratories in Cherry Hill, N.J., to finish a prototype for the Semantic Forensics (SemaFor) program.

DARPA explains the purpose of SemaFor in a short post on its website, which states:

Media generation and manipulation technologies are advancing rapidly and purely statistical detection methods are quickly becoming insufficient for identifying falsified media assets. Detection techniques that rely on statistical fingerprints can often be fooled with limited additional resources (algorithm development, data, or compute). However, existing automated media generation and manipulation algorithms are heavily reliant on purely data driven approaches and are prone to making semantic errors. For example, generative adversarial network (GAN)-generated faces may have semantic inconsistencies such as mismatched earrings. These semantic failures provide an opportunity for defenders to gain an asymmetric advantage. A comprehensive suite of semantic inconsistency detectors would dramatically increase the burden on media falsifiers, requiring the creators of falsified media to get every semantic detail correct, while defenders only need to find one, or a very few, inconsistencies.


The Semantic Forensics (SemaFor) program seeks to develop innovative semantic technologies for analyzing media. These technologies include semantic detection algorithms, which will determine if multi-modal media assets have been generated or manipulated. Attribution algorithms will infer if multi-modal media originates from a particular organization or individual. Characterization algorithms will reason about whether multi-modal media was generated or manipulated for malicious purposes. These SemaFor technologies will help detect, attribute, and characterize adversary disinformation campaigns.
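The asymmetric advantage DARPA describes can be illustrated with a simple sketch. The following is not SemaFor code; the check names and the toy media records are hypothetical. It shows the core idea: a falsifier must pass every semantic consistency check, while a defender only needs one failure to flag an asset.

```python
# Illustrative sketch (hypothetical, not DARPA/Lockheed code): a suite of
# semantic consistency detectors where a media asset is flagged if ANY
# single check fails.

def earrings_match(media):
    # Semantic check: GAN-generated faces often have mismatched earrings.
    return media.get("left_earring") == media.get("right_earring")

def lighting_consistent(media):
    # Semantic check: light direction should agree across the scene.
    return media.get("face_light_dir") == media.get("scene_light_dir")

def detect_falsified(media, checks):
    """Return the names of all failed semantic checks.

    The defender's asymmetric advantage: the falsifier must satisfy
    every detector, while one inconsistency is enough to flag the asset.
    """
    return [check.__name__ for check in checks if not check(media)]

checks = [earrings_match, lighting_consistent]

real = {"left_earring": "gold stud", "right_earring": "gold stud",
        "face_light_dir": "left", "scene_light_dir": "left"}
fake = {"left_earring": "gold stud", "right_earring": "silver hoop",
        "face_light_dir": "left", "scene_light_dir": "left"}

print(detect_falsified(real, checks))  # []
print(detect_falsified(fake, checks))  # ['earrings_match']
```

Each additional detector in the suite raises the falsifier’s burden multiplicatively, while the defender’s flagging condition remains a single failed check.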
