Leading AI developers, like OpenAI and Anthropic, are threading a delicate needle to sell software to the United States military: make the Pentagon more efficient, without letting their AI kill people.
For now, their tools aren't being used as weapons, but AI does give the Department of Defense a “significant advantage” in identifying, tracking, and assessing threats, the Pentagon's Chief Digital and AI Officer, Dr. Radha Plumb, told TechCrunch in a phone interview.
“Obviously we are increasing the ways in which we can accelerate the execution of the kill chain so that our commanders can respond in time to protect our forces,” Plumb said.
The “kill chain” refers to the military's process of identifying, tracking, and eliminating threats, involving a complex system of sensors, platforms, and weapons. Generative AI is proving helpful during the planning and strategizing stages of the kill chain, according to Plumb.
The relationship between the Pentagon and AI developers is relatively new. OpenAI, Anthropic, and Meta walked back their usage policies in 2024 to let US intelligence and defense agencies use their AI systems. However, they still don't allow their AI to harm humans.
“We've been really clear on what we will and won't use their technologies for,” Plumb said, when asked how the Pentagon works with AI model providers.
Still, this kicked off a speed-dating round between AI companies and defense contractors.
Meta partnered with Lockheed Martin and Booz Allen, among others, to bring its Llama AI models to defense agencies in November. That same month, Anthropic teamed up with Palantir. In December, OpenAI struck a similar deal with Anduril. More quietly, Cohere has also been deploying its models with Palantir.
As generative AI proves its usefulness in the Pentagon, it could push Silicon Valley to loosen its rules on the use of AI and allow more military applications.
“Playing through different scenarios is something that generative AI can be helpful with,” Plumb said. “It allows you to take advantage of the full range of tools available to our commanders, but also to think creatively about different response options and potential trade-offs in an environment where there is a potential threat, or series of threats, that need to be prosecuted.”
It is unclear whose technology the Pentagon is using for this work; using generative AI in the kill chain (even in the early planning stages) appears to violate the usage policies of several leading model developers. Anthropic's policy, for example, prohibits using its models to produce or modify “systems designed to cause harm to or loss of human life.”
In response to our questions, Anthropic pointed TechCrunch to CEO Dario Amodei's recent interview with the Financial Times, in which he defended his company's military work:
The position that we shouldn't use AI in defense and intelligence settings doesn't make sense to me. The position that we should go gangbusters and use it to do whatever we want — up to and including doomsday weapons — is just plain crazy. We try to find the middle ground, to do things responsibly.
OpenAI, Meta, and Cohere did not respond to TechCrunch's request for comment.
Life and death, and AI weapons
In recent months, a debate has broken out in defense tech circles over whether AI weapons should really be allowed to make life-and-death decisions. Some argue the US military already has weapons that do.
Anduril CEO Palmer Luckey recently noted on X that the US military has a long history of purchasing and using autonomous weapons systems, such as the CIWS turret.
“The DoD has been purchasing and using autonomous weapons systems for decades now. Their use (and export!) is well understood, tightly defined, and explicitly regulated by rules that are not voluntary,” Luckey said.
But when TechCrunch asked if the Pentagon was buying and operating fully autonomous weapons — ones without humans in the loop — Plumb rejected the idea on principle.
“No, is the short answer,” Plumb said. “As a matter of both reliability and ethics, we always have people involved in the decision to use force, and that includes for our weapons systems.”
The word “autonomy” is somewhat ambiguous and has sparked debates across the tech industry about when automated systems – such as AI coding agents, self-driving cars, or self-firing weapons – become truly independent.
Plumb said the idea that automated systems independently make life-and-death decisions is “too binary,” and that the reality is less “science fiction-y.” Instead, she suggested that the Pentagon's use of AI systems is really a collaboration between humans and machines, with senior leaders making active decisions throughout the process.
“People tend to think about it as if there are robots somewhere, and then the gonculator [a fictional autonomous machine] spits out a sheet of paper, and humans just check a box,” Plumb said. “That's not how human-machine teaming works, and that's not an effective way to use these types of AI systems.”
AI safety at the Pentagon
Military partnerships haven't always gone over well with Silicon Valley employees. Last year, dozens of Amazon and Google employees were fired and arrested after protesting their companies' military contracts with Israel, cloud deals codenamed “Project Nimbus.”
In comparison, there has been a relatively muted response from the AI community. Some AI researchers, like Anthropic's Evan Hubinger, say that the use of AI in the military is inevitable, and that it's crucial to work directly with the military to make sure they get it right.
“If you take the catastrophic risks from AI seriously, the US government is a very important actor to be involved with, and trying to just block the US government from using AI is not a viable strategy,” Hubinger said in a November post on the online forum LessWrong. “It's not enough to just focus on catastrophic risks, you also have to prevent any way the government could potentially misuse your models.”