AI killing machines could go 'rogue', and the Pentagon is trying to prevent it

Pentagon's AI Vulnerability Concerns

The Pentagon is actively addressing vulnerabilities in its AI systems that attackers could exploit through visual manipulation or altered signals.

Research Program GARD

Since 2022, the Guaranteeing AI Robustness Against Deception (GARD) program, run by the Defense Advanced Research Projects Agency (DARPA), has been investigating these "adversarial attacks" to enhance the resilience of AI systems.

Risk of Misidentification

Researchers have demonstrated how innocuous patterns can deceive AI, potentially leading to critical misidentifications on the battlefield, such as mistaking a bus for a tank.
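
How can an innocuous pattern fool a model? As a purely illustrative sketch, the snippet below implements the Fast Gradient Sign Method, one well-known adversarial technique (not necessarily the one used in GARD's demonstrations). The untrained model and random image here are hypothetical stand-ins, so a prediction flip is not guaranteed in this toy setup, but against a real trained classifier a perturbation this small can change the output while remaining nearly invisible to a human.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-in classifier and image; a real attack targets a trained vision model.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # placeholder input image
label = torch.tensor([3])                             # placeholder true class

# Compute the loss gradient with respect to the input pixels.
loss = nn.functional.cross_entropy(model(image), label)
loss.backward()

# FGSM: nudge every pixel slightly in the direction that increases the loss.
epsilon = 0.03  # perturbation budget, small enough to look innocuous
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)

with torch.no_grad():
    print("clean prediction:    ", model(image).argmax(dim=1).item())
    print("perturbed prediction:", model(adversarial).argmax(dim=1).item())
```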

Updated AI Development Rules

Amidst public apprehension about autonomous weapons, the Department of Defence has revised its AI development guidelines, prioritising responsible behaviour and mandating approval before any system is deployed.

Progress of GARD Program

Despite modest funding, the GARD program has made strides in developing defences against adversarial attacks and has provided tools to the Defence Department's Chief Digital and AI Office (CDAO).

Advocacy Group Concerns

Some advocacy groups worry that AI-powered weapons could attack without cause, for example after misidentifying a target, potentially triggering unintended escalation in already tense regions.

Urgency of Addressing Vulnerabilities

As the Pentagon modernises its arsenal with autonomous weapons, closing these vulnerabilities and ensuring the responsible development of AI technology becomes all the more urgent.

GARD Research Achievements

Researchers from Two Six Technologies, IBM, MITRE, University of Chicago, and Google Research have generated virtual testbeds, toolboxes, benchmarking datasets, and training materials to aid in defending against adversarial attacks.

Available Resources

These resources, including the Armory virtual platform, the Adversarial Robustness Toolbox (ART), the Adversarial Patches Rearranged In COnText (APRICOT) dataset, and the Google Research Self-Study repository, are now accessible to the broader research community to strengthen AI security measures.
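
As a rough illustration of how such tooling is used, here is a minimal sketch built on the open-source Adversarial Robustness Toolbox named above; the tiny untrained PyTorch model and random images are hypothetical placeholders, not anything drawn from the GARD program.

```python
import numpy as np
import torch
import torch.nn as nn

from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Placeholder network; a real evaluation would wrap a trained model instead.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

# Wrap the model so ART's attacks and defences can operate on it.
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam(model.parameters(), lr=1e-3),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Craft adversarial versions of some (here random) images with FGSM.
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x = np.random.rand(4, 1, 28, 28).astype(np.float32)
x_adv = attack.generate(x=x)

# Compare predictions on clean and perturbed inputs.
print("clean:    ", classifier.predict(x).argmax(axis=1))
print("perturbed:", classifier.predict(x_adv).argmax(axis=1))
```

The same wrapped classifier can then be handed to ART's defence components, such as its adversarial training utilities, to measure how much robustness a mitigation actually buys.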