Written: 2026-02-28 00:00 +0000
Condor Roundup - February 2026
Fast Cut
I was sick for most of the month, so I'm somewhat behind on just about everything, and full-length article concepts are piling up. The major news story to follow this month was the dispute between Anthropic and the Department of War: an operational disagreement that turned into a novel supply chain risk designation for Anthropic, potentially with huge implications. New and better models continue to come out, with better open weight models on the way. AI adoption continues to spread to new industries as capabilities keep improving.
From the Feeds
Claude of War
The absolute biggest story of the month, and depending on the long-term results, possibly much more. This is a developing story that I'll try to do a longer write-up on as it shakes out. The facts aren't entirely out yet, so we'll have to make do with what we have.
Claude was allegedly used during the extraction of Venezuelan leader Nicolas Maduro, which some Anthropic employees were unhappy with. Despite having an existing contract with the Department of War, Anthropic wanted to renegotiate the terms of use. The culmination of these developments was a meeting between Anthropic CEO Dario Amodei and Secretary of War Pete Hegseth. It's not clear what was said in those talks, but the results seem to be quite explosive.
“In conjunction with the President’s directive for the Federal Government to cease all use of Anthropic’s technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service.” From Hegseth’s post on X.
It's too soon to say what the implications of this designation will be. The outcome could land anywhere between ignored or quickly invalidated and a corporate death penalty. Anthropic has said they will fight the designation in court. This might seem shocking, but if you are familiar with Anthropic and the history of their ideology of "Effective Altruism," it isn't really that surprising.
Effective Altruism is a nebulously defined modern pseudo-philosophy that is quite indifferent to precision of language. Essentially, it involves reliance on modern academic papers and logical calculation in an attempt to do the "most good." It was most famously exemplified by Sam Bankman-Fried in the fall of FTX, in which he conducted extensive financial malfeasance along the way to becoming a top political and charitable donor. EA now disavows him, of course. In practice it seems to be a way to compel behavior from people overly reliant on their own intelligence, and to isolate them from preexisting moral and legal processes.
Anthropic and Dario are almost completely aligned with EA, having hired an Effective Altruist, and ex-wife of notable EA philosopher William MacAskill, to be the conscience of Claude.
What does that mean for the military? My best understanding is that the blow-up was very much EA-related. Anthropic claims that they refused to allow Claude's use for mass domestic surveillance and fully autonomous weapons. The issue, of course, is who decides what that means. Supposedly the DoW denied any interest in doing either, but Anthropic wanted to determine for themselves what those terms meant and where the lines were being crossed, effectively injecting themselves and their ideology into the chain of command. It doesn't take a perfectly rational genius to see why the government would flip the table over that, especially on the eve of a major military engagement.
Sam Altman, in true Altman fashion, then immediately stepped in to let OpenAI close the deal with the DoW under more satisfactory terms. Similar red lines, but no demand for a place in the chain of command. Still in the loop, but not as a key turner. So it seems, at least. https://openai.com/index/our-agreement-with-the-department-of-war/
Three different parties and three different stories. I don’t think anyone knows the final result, but it is definitely a space to watch.
Opus and Codex Part Two
The last roundup had a very similar story. Opus 4.5 came out and impressed, so OpenAI released Codex 5.2. In February, Opus 4.6 launched and we got Codex 5.3 immediately afterward. Interestingly, while I felt that Opus 4.5 was generally a net better tool than Codex 5.2, I think the reverse is true for this generation. Opus 4.6 is a strong generalist model, but it seems to have fundamental issues with how it scopes its solutions to a problem. I don't think I've ever seen a model that can solve a problem, isn't hallucinating, and still chooses a blatantly unsafe hack and then lies about it. That's a real problem when working with an otherwise good model. Codex 5.3, on the other hand, is excellent. It's specialized for software applications, and the extended knowledge base really shows. Codex 5.3 is slower and requires more explicit technical communication, but the results are going to be significantly better on complex projects, although mistakes will still occur.
Distillation Attack Claims
https://www.anthropic.com/news/detecting-and-preventing-distillation-attacks
Definitely a month for Anthropic news. Anthropic made some more waves by complaining about distillation attacks from the most recent round of open models. It speaks to the core Anthropic issue: build the model under extremely liberal fair use interpretations of others' work, then do everything you can to restrict the usage of the models you build and to limit the building of other models. Distillation is a non-trivial issue and does represent a real risk for Anthropic; however, it is also worth noting that there is no indication distillation is giving near-peer results, especially beyond the most basic benchmark tests.
On My Desk
Custom Execution Platform
Now in live use. There's still a lot of work to be done, but it's been a good exercise in AI-augmented programming, and filling a previously vacant space made the work worthwhile.
Various Client AI Enablement Projects
Demand is growing across many verticals. When people see that AI can be done right, the business justification becomes much more compelling.
Looking Forward
Anthropic and the DoW
The space to watch.
DeepSeek and Others
DeepSeek did not release in February. Could be March. Hopefully it's worth the wait. A good model is still much more of an art than a science, and this could easily end up a forgotten model, which would be a shame for one of the engineering pioneers in the space.