In 2023, former UN weapons inspector and scientist Rocco Casagrande arrived at the Eisenhower Executive Office Building, next to the West Wing of the White House, carrying a small, sealed container. Inside were a dozen test tubes containing ingredients that, if properly assembled, could cause a deadly pandemic.
According to Casagrande, an AI chatbot had not only provided him with the lethal recipe; it had also offered ideas about how to pick the best weather conditions and targets for an attack. Casagrande was there to brief government officials on AI-enabled pandemic risks, and his prop sent a powerful message about how rapidly AI had collapsed the barriers to engineering devastating bioweapons.
The barriers to engineering a pandemic have been lowered, and policymakers have taken note. America’s AI Action Plan proposes defensive measures, and the UK’s 2025 Strategic Defence Review prioritizes chemical and biological defense.
Likely driven by both genuine concern and the risk of legal liability, frontier AI companies have also shown leadership, reflected in the Frontier AI Safety Commitments at the Seoul Summit. These companies have publicly committed to mitigating risks across the chemical, biological, radiological, nuclear, and explosives (CBRNe) spectrum.
Yet, despite these steps, an alarming security gap remains. While it is good that companies are focusing on pandemics, the ecosystem is fixated on a single threat model: the "lone wolf virus terrorist." Far less attention is paid to other risk scenarios, and the focus on individual actors leaves threats from states and terrorist groups dangerously under-examined.
Furthermore, pandemic virus risks are not an adequate proxy for other threats. The technical steps for developing improvised explosives, non-transmissible biological agents, and chemical weapons are distinct from those for transmissible pathogens. Building and deploying a chemical weapon such as sarin, for example, involves an entirely different process from engineering a virus. For this reason, we need distinct AI safety tests.
Yet, on their own, these “lesser” threats could still kill many, devastate communities, and sow chaos across countries. We are in danger of creating a safety system that fixates on guarding against the next pandemic while leaving the door wide open to other forms of terror.
Concerningly, we have seen major AI labs backtracking on safety commitments, likely in part because they lack the classified data to fulfill them. For instance, while some AI companies publish tests on whether their models can help cause a pandemic, they do not detail whether those same models could aid in chemical or improvised explosive attacks.
This omission is dangerous. Advanced AI can not only help individuals build weapons; it can also help them bypass export controls and hide their tracks, allowing anyone to circumvent international guardrails without detection. Crucially, this creates a blind spot that leaves us all vulnerable to the very pandemics we fear most. A rogue actor exploring the use of AI for a chemical or explosive attack today may be rehearsing for a biological attack tomorrow.
If our safety systems are designed to only flash red for the apocalypse, we will miss the warning signals leading up to it.
We need a broader categorization of risks, informed by experts. A fundamental rebalancing is required to address the more probable AI-enabled threats, not just the most apocalyptic ones. Frontier companies and the institutions advising them cannot solve this alone: they lack the classified intelligence needed for a clearer understanding of state and non-state threats, and they lack the governmental mechanisms and international relationships needed to create the laws, structures, and partnerships the challenge demands. Governments, conversely, lack a full understanding of AI capabilities and proprietary knowledge, as well as essential data on suspicious user behavior. These silos produce a dangerous silence.
Public-private partnerships should be designed for national security. Such partnerships would merge the classified intelligence of the state with the proprietary data of tech firms and third parties, bringing together cross-cleared personnel to spot threats that no side can see alone. This will require a level of innovation beyond anything attempted before, but the benefits would be immense.
Indeed, history teaches us that diplomacy and policy can save lives. Roughly half a century ago, world leaders signed the Biological Weapons Convention, a disarmament treaty prohibiting the development of biological and toxin weapons. And last year marked the centenary of the Geneva Protocol, which banned the use of chemical and biological weapons in international armed conflicts. Both represent collective agreements about the fundamental inhumanity of these weapons and our shared responsibility for ensuring their use is prevented.
A century ago, those who understood the devastating potential of new technologies realized the need to take determined action across a full range of foreseeable threats.
In a radically different age, we need that same determination again—and we need it before it is too late.