OpenAI Teams Up with the Defense Department: A Controversial Deal
Okay, this is a bit of a head-scratcher. OpenAI, led by Sam Altman, has shaken hands with the Department of Defense (DoD), or as Altman cheekily calls it, the Department of War (DoW), to get its AI models working within the agency's network. Interesting, right?
Apparently, Altman made the announcement on X, highlighting that OpenAI's core principles – no mass domestic surveillance and humans in charge of lethal force – are supposedly baked into the deal. It sounds reassuring, but I can't help but feel a little uneasy.
Here's where it gets really interesting. This deal comes hot on the heels of Donald Trump's order to ditch Anthropic's Claude and its other services across all government agencies. Why? Reportedly, Defense Secretary Pete Hegseth threatened to label Anthropic a "supply chain risk" if it didn't remove the guardrails preventing its AI from being used for mass surveillance and autonomous weapons. It appears that Anthropic stood its ground. Good for them!
So, if OpenAI's models also have these guardrails, why the sudden partnership? Altman says they're pushing for the government to offer the same terms to all AI companies it works with. Jeremy Lewin chipped in, saying that the DoD's contracts include "certain existing legal authorities and mutually agreed upon safety mechanisms." Apparently, this was the same "compromise that Anthropic was offered, and rejected."
Anthropic, bless their hearts, isn't backing down. They released a statement saying no amount of intimidation will change their stance on mass surveillance or autonomous weapons and that they're ready to challenge any "supply chain risk" label in court. Talk about sticking to your principles!
Altman claims OpenAI will build technical safeguards to ensure its models behave as they should, which is apparently what the DoD wants. The company is even sending engineers to work with the agency and will only deploy its models on cloud networks. The New York Times pointed out that OpenAI's models aren't yet available on Amazon's cloud, which the government uses. But hey, OpenAI just announced a partnership with Amazon Web Services (AWS), so that might change soon.
Honestly, I find this whole situation a bit perplexing. On one hand, it's good to see AI companies seemingly prioritizing safety and ethics. However, the fact that the Department of Defense is so eager to get its hands on these AI models raises some serious questions. What exactly do they plan to use them for? And can we really trust that these "safeguards" will prevent misuse?
For me, the most important thing is transparency. We, the public, need to know how AI is being used, especially when it comes to matters of national security. It's a slippery slope, and we need to make sure we're heading in the right direction. One thing is certain: the AI race is on, and it's going to be interesting to see how it all plays out.
Source: Engadget