> Amodei refused to budge on relinquishing final say over Claude usage.
So did Altman. The terms of each company’s agreement with the DoW are roughly the same when they come out of the wash.
“Mr. Altman negotiated with the Department of Defense in a different way from Anthropic, agreeing to the use of OpenAI’s technology for all lawful purposes. Along the way, he also negotiated the right to put safeguards into OpenAI’s technologies that would prevent its systems from being used in ways that it did not want them to be.”
> a company acting in a way that the Department of War perceives as benefiting enemy states could certainly be a justification for declaring a supply chain risk.
What’s the difference between a company not building something that’s fit for purpose for fighting a war (like a nursery refusing to build land mines), and thus not being a qualified supplier to the Government for conducting military operations, vs. being tarred with the “supply chain risk” brush? The former seems uncontroversial; the latter seems petty and retaliatory. “Supply chain risk” designations are for companies that you would do business with but might be compromised by the enemy, like when a supplier agrees to provide the DoW grenades, but the grenades could be intentionally defective such that they detonate prematurely in the soldier’s hand.
Besides, as an Israeli, imagine a world in which the manufacturers of Zyklon B refused to sell Hitler their product for the purposes of gassing human beings. It might not have prevented the Holocaust, but at least maybe impeded it a little.
>Besides, as an Israeli, imagine a world in which the manufacturers of Zyklon B refused to sell Hitler their product for the purposes of gassing human beings. It might not have prevented the Holocaust, but at least maybe impeded it a little.
Honestly, if the Holocaust happened today, we would probably get 10% of the comments here trying to defend "both sides". Some people feel a need to defend every side, even when one of those sides is calling for them to be murdered.
This is not true. A different deal was offered to Anthropic, and they refused. Then the DoW turned around and went with OpenAI even though their terms weren’t materially different from the terms of their agreement with Anthropic.
The stories I’ve been reading say that the DoW’s agreement with OpenAI contains the very same limitations as the agreement with Anthropic did. In other words, they pressured Anthropic to eliminate those restrictions, Anthropic declined, then they made a huge fuss calling them “a radical left, woke company,” put them on the supply-chain risk list, then went with OpenAI even though OpenAI isn’t changing anything either.
The whole story makes no sense to me. The DoW didn’t get what they wanted, and now Anthropic is tarred and feathered.
“OpenAI Chief Executive Sam Altman said the company’s deal with the Defense Department includes those same prohibitions on mass surveillance and autonomous weapons, as well as technical safeguards to make sure the models behave as they should.”
The PR strategy described here is often referred to as "The Overton Window Shift" or "Strategic Iteration." Essentially, OpenAI (or any entity using this tactic) enters a negotiation or public debate by asserting a position that seems flexible or "safety-first." When a competitor like Anthropic holds a firm ethical line, the entity uses aggressive framing—or coordinates with third parties—to paint that competitor as an outlier or "radical." By the time the dust settles and the entity signs a deal with the exact same restrictions they previously criticized, the public and stakeholders have been fatigued by the controversy. The goal is to normalize their own brand as the "pragmatic" choice while the competitor remains "tarred and feathered," effectively moving the goalposts of acceptable behavior until the original contradiction is ignored.
It also helps greatly if you can leverage the opportunity window of a temper tantrum being thrown by an incompetent, petulant, volatile, and impulsive President.
That’s where the “free as in puppy” comes in. It’s still a classic case of build vs buy, except building is now quicker than it used to be. You still have to ask, “suppose I did build it myself. Then what?”
Yeah. So then you get your own product, tailor-made to your organisation, that you own (well, it's public domain because LLM-generated, but same same), and that you can change whenever you want without having to deal with a SaaS company's backlog. If you don't like something in it, you fire up Claude Code and get it changed.
There's also no danger of it being enshittified. Or of some twat of a product manager deciding to completely change the UI because they need to change something to prove their importance. Or of the product getting cancelled because it's not making enough money. Or of it getting sold to an evil corp who then sells your data to your competition. Or any of the other stupid shit we've seen SaaS companies pull over the past 20 years.
Respectfully, I think you’re only considering upsides and not considering downsides, opportunity costs, and ongoing maintenance costs. This is not what smart managers do. Plus, just because you can build something cheaper with an LLM doesn’t mean you can operate it more cheaply than a specialist can. Economies of scale haven’t been obviated by AI.
It’s useful to take an argument to its logical extreme: I just don’t see every company in the world, large and small alike, building everything they depend on in-house, as though they were a prepper stocking up for Armageddon. That seems pretty fanciful on its face.
As an attorney (and this is not legal advice), I would argue--and the U.S. Copyright Office has already stated--that machine-generated content is not copyrightable, because it's not a form of human creative expression. https://www.copyright.gov/ai/Copyright-and-Artificial-Intell... ("Copyright does not extend to purely AI-generated material, or material where there is insufficient human control over the expressive elements.")
That said, the inquiry doesn't end there. What happens next, after the content is generated, matters. If human creativity is then applied to the output such that it transforms it into something the machine didn't generate itself, then the resulting product might be copyrightable. See Section F on page 24 of the Report.
Consider that a dictionary contains words that aren't copyrightable; but the selection of words an author chooses to write a novel constitutes a copyrightable work. It's just that in this case, the author is creatively constructing from much larger components than words.
Lots of questions then obviously follow, like how much and what kind of transformation needs to be applied. But I think this is probably where the law is headed.
Can the output of the service be licensed? A bit like the AGPL, you're licensed to use/reuse/derive new works.
So if it's distributed outside of the license, that's subject to contractual penalties? I guess that's what all the "wrapper" SaaS businesses will do.
Read that report; it defines the issues and the boundaries well for the current generation of AI tools. As they develop and expand, it's going to get interesting, especially if robotics, 3D printing, etc. get involved.
If I use an Optimus Prime to help create art, similar to Andy Warhol's "factory", do I own the copyright on the completed work?
If a person uses AI to generate work that ends up being patentable, are patents also not available?
I couldn't possibly disagree with this more. Since the acquisition Twitter/X has had far more features at a far faster pace than in the 10 years prior. They've added all sorts of great stuff, and recently have been near the top of the charts in the Apple App Store.
It doesn’t cost much to keep the lights on. As far as I know, X post-acquisition is not investing in innovation anymore.
Musk might have been right that shifting to KTLO mode was a good idea, but the company would still be better off if someone other than him had bought it and done the same thing.
https://www.nytimes.com/2026/02/27/technology/openai-agreeme...