The Pentagon is actively addressing vulnerabilities within its AI systems that could be exploited by attackers utilizing visual manipulation or altered signals.
Since 2022, the Guaranteeing AI Robustness Against Deception (GARD) program has been investigating such "adversarial attacks" and working to make AI systems more resilient to them.
Researchers have demonstrated how innocuous patterns can deceive AI, potentially leading to critical misidentifications on the battlefield, such as mistaking a bus for a tank.
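The article does not spell out how such deceptive patterns are constructed, but the widely used fast gradient sign method illustrates the principle: a perturbation too small to notice by eye can flip a classifier's prediction. The Python sketch below is purely illustrative; the stand-in model, input shape, and perturbation budget are assumptions, not details drawn from GARD.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, eps=0.03):
    """Nudge `image` by a small, nearly invisible perturbation chosen to push
    the classifier toward a wrong answer (fast gradient sign method)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel slightly in the direction that most increases the loss.
    adversarial = image + eps * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Toy demonstration with a stand-in classifier and a random "image".
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)
y = model(x).argmax(dim=1)            # whatever the model currently predicts
x_adv = fgsm_perturb(model, x, y)
print(model(x).argmax(dim=1), model(x_adv).argmax(dim=1))  # labels may now differ
```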
Amid public apprehension about autonomous weapons, the Department of Defense has revised its AI development guidelines, prioritizing responsible behaviour and mandating approval for all deployed systems.
Despite modest funding, the GARD program has made strides in developing defences against adversarial attacks and has provided tools to the Defense Department's Chief Digital and Artificial Intelligence Office (CDAO).
Some advocacy groups express concerns that AI-powered weapons could act without cause, potentially leading to unintended escalations, particularly in tense regions.
The Pentagon's active modernization of its arsenal with autonomous weapons underscores the importance of addressing vulnerabilities and ensuring responsible development of AI technology.
Researchers from Two Six Technologies, IBM, MITRE, the University of Chicago, and Google Research have developed virtual testbeds, toolboxes, benchmarking datasets, and training materials to aid in defending against adversarial attacks.
These resources, including the Armory virtual platform, Adversarial Robustness Toolbox (ART), Adversarial Patches Rearranged In COnText (APRICOT) dataset, and Google Research Self-Study repository, are now accessible to the broader research community to enhance AI security measures.
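To give a sense of how one of these releases might be used in practice, the sketch below exercises IBM's Adversarial Robustness Toolbox (ART) to wrap a classifier, generate adversarial inputs, and compare clean versus adversarial accuracy. The toy network, random data, and parameter values are placeholder assumptions rather than anything taken from the GARD deliverables.

```python
import numpy as np
import torch
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# A tiny stand-in network and random data; in practice you would plug in your
# own trained model and evaluation set.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x_test = np.random.rand(16, 3, 32, 32).astype(np.float32)
y_test = np.random.randint(0, 10, size=16)

# Wrap the PyTorch model so ART's attack implementations can drive it.
classifier = PyTorchClassifier(
    model=model,
    loss=torch.nn.CrossEntropyLoss(),
    input_shape=(3, 32, 32),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Craft adversarial versions of the inputs under a small perturbation budget,
# then compare clean and adversarial accuracy to quantify the robustness gap.
attack = FastGradientMethod(estimator=classifier, eps=0.03)
x_adv = attack.generate(x=x_test)

clean_acc = (classifier.predict(x_test).argmax(axis=1) == y_test).mean()
adv_acc = (classifier.predict(x_adv).argmax(axis=1) == y_test).mean()
print(f"clean accuracy: {clean_acc:.2%}, adversarial accuracy: {adv_acc:.2%}")
```

Measuring the gap between the two accuracies is typically the first step before applying the toolbox's defences, such as adversarial training.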