Pentagon Will Let OpenAI Build Its Own Safety Rules
Sam Altman told staff the DOD won't force compliance if OpenAI's model refuses a military task.
OpenAI is hammering out a deal with the Department of Defense that comes with a remarkable concession: the Pentagon will let the company build its own "safety stack."
CEO Sam Altman dropped the news during an all-hands meeting on Friday. The key detail? If OpenAI's model refuses a task, the DOD won't force the company to override it.
That's a significant carve-out. It means OpenAI would maintain autonomous control over what its AI will and won't do, even in a military context. The DOD is essentially agreeing to work within OpenAI's safety boundaries — not the other way around.
Altman described the arrangement as a "potential agreement" still taking shape. But the direction is clear: the Pentagon wants access to OpenAI's models badly enough to play by OpenAI's rules.