Written: 2026-01-31 00:00 +0000
Condor Roundup - January 2026
This isn’t intended to be yet another AI newsletter, and especially not an AI-generated one. It’s more tales from the front lines than an actual news roundup, covering what I am seeing, what I am working on, and what to look out for. It should galvanize some more articles and be as much of a build-in-public effort as private enterprise work allows.
Fast Cut
Recursive agents are definitively here. Model provider competition is heating up, with increasing attempts at vendor lock-in and narrowing gaps between open and closed models. Token prices are the story of the day, heavily subsidized for now. Will they get cheap enough before the subsidies run out? Lots of applied AI work is going on at the company I work for, and we are hiring.
From the Feeds
OpenCode Disputes
A small incident with large implications. The Claude Max subscription previously worked with OpenCode, and I’d been using this heavily as the basis for my Emacs client. Anthropic blocked access from OpenCode and labeled workarounds a ban-worthy TOS violation. Anthropic is within its rights to do so, but it highlighted a hostility towards third-party tooling and a desire to enforce vendor lock-in.
To give some insight into how much goodwill Anthropic lost here, OpenAI immediately collaborated with OpenCode to allow their Max plan equivalent to be used. Losing any goodwill battle to OpenAI should be thought-provoking. In my testing, OpenCode with the OpenAI plan actually didn’t work very well, but that may just be early teething and routing issues.
The larger implication is that Anthropic is demonstrating a need to lock users into its ecosystem while its model advantage persists. It’s not an enviable place to be. No lab can afford to always have the best model, and given current investments no lab can afford not to have the best model to command premium pricing. Trying to mitigate this by having your model used through high-lock-in tools is understandable, but it is also understandable that consumers will fight this as much as possible.
Token Subsidization
The attempted software lock-in is currently being paid for by token subsidization. The Claude Max plan and its competitor alternatives are very much in the $5-Uber-rides phase. Max is $200 a month, and if you are fully consuming the plan allotments, you are consuming thousands of dollars of list-price tokens.
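As a rough sketch of the subsidy math: the per-token list prices and monthly usage figures below are illustrative assumptions I made up for the example, not published numbers, but the shape of the calculation is the point.

```python
# Back-of-envelope subsidy math for a flat-rate coding plan.
# All prices and token counts below are illustrative assumptions,
# not published figures.

PLAN_PRICE = 200.00  # hypothetical monthly plan cost in USD

# Hypothetical API list prices, USD per million tokens.
INPUT_PRICE_PER_MTOK = 5.00
OUTPUT_PRICE_PER_MTOK = 25.00

# Hypothetical heavy-usage month for an agentic coding workflow:
# agents re-read large contexts constantly, so input dwarfs output.
input_tokens = 400_000_000
output_tokens = 40_000_000

list_price = (input_tokens / 1_000_000) * INPUT_PRICE_PER_MTOK \
           + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_MTOK

print(f"List-price value consumed: ${list_price:,.0f}")
print(f"Effective subsidy multiple: {list_price / PLAN_PRICE:.0f}x")
# → List-price value consumed: $3,000
# → Effective subsidy multiple: 15x
```

Under these made-up numbers a fully consumed plan burns roughly 15x its sticker price in list-price tokens, which is why "thousands of dollars" of value on a $200 plan is plausible for heavy agentic use.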
This is a space to watch. With the various AI assistant fads and swarm programming (see Clawdbot, a.k.a. moltbot, a.k.a. openclaw.ai, and Gas Town for examples), tokens can be burned faster than ever before. The question is to what end. There are stories of people having their personal assistants spend hundreds of dollars on tokens to perform basic tasks, at a certain point becoming more expensive than a real assistant.
A big question is how token prices will trend. Will they get significantly cheaper to enable the growing automated demand, or will frontier intelligence get more expensive as hardware demands increase? Historically you’d expect cheaper, but a lot of people are going to run into a token price wall in the meantime.
Opus and Codex
Not to be outdone by the launch of Opus 4.5, OpenAI followed up with Codex 5.2: officially gpt-5.2-codex, xhigh, with xhigh referring to a manually set thinking level. Hard to believe OpenAI is on the back foot with branding. I am making heavy use of both of these models, and it is striking how different they are. Codex is a specialized coding model, and it really shows: it has significantly improved design patterns and sight-unseen issue identification compared to Opus 4.5. Despite that, it really isn’t as good at iterating through problems or operating autonomously. On top of that, especially in xhigh mode, Codex is extremely slow.
The best way I can explain it is that these models have a structure to them. To work with them effectively, you are attempting to fit these structures into the problem space that you have. I use both models for problem solving, and I find that they can complement each other well. I am waiting on a third model that isn’t out yet to fill in some of the remaining problem-space gaps. Model shape and integration probably deserves an independent article at some point.
Kimi 2.5 and Open Models Return
As is tradition, some months after the frontier model launches, a near-peer open competitor emerges. I haven’t used Kimi 2.5 as much as I’d like, but it seems like a budget alternative that won’t be actively painful if that is your use case. My general expectation is that if you intelligently harness and iterate K2.5, you could probably get superior performance to base Opus for a similar cost, but without actually doing it that’s just speculation.
Kimi’s launch is notable for following particularly close behind. Two months after Opus 4.5 does not leave a lot of breathing room for the frontier labs. With Kimi at roughly 10x cheaper than Opus 4.5 (and ~30x cheaper than 4.1), there is a lot of pressure on the frontier labs to deliver a significantly superior product at those prices.
Stressful times all around at the labs.
On My Desk
Custom Execution Platform
I’ve been spending a lot of time working on a custom execution setup for the core company platform. Whether it makes it in or not remains to be seen, but at worst it’ll be my Arrakis for AI projects. It has provided such a large range of problem-solving opportunities and modern tooling experiences that the final product should be novel. Essentially, it prioritizes extremely rapid container execution, with fully extensible infrastructure that can adapt to many complex enterprise client needs.
Various Client AI Enablement Projects
AI is definitely hitting the corporate world, with lots of large-scale document processing and human task replication. Issues with common human tasks are becoming like the bad fingers of old AI images. That doesn’t mean there isn’t a lot of wrangling and testing to get it there, but the results are impressive. I’ll have to write about those separately at some stage.
Hiring
I’ve been conducting interviews for various positions at the company, a task that leads to a lot of thinking about what a good hire looks like in the age of commodity intelligence. Another possible article topic. I think the largest difference is that tightly directed and managed work can now be heavily streamlined by frontier models with the proper configuration. That still takes time, but it takes a different skill set. The technology is also changing so fast that it is unfavorable to be too locked into any one way of working. I would describe the primary skill we are looking for as independence: independent learning, thinking, and delivering. If you can work well autonomously and effectively communicate your needs and results, I think there’s strong potential to do well. If that sounds like you, or anyone you know, you can find my contact info on the about me page.
Looking Forward
DeepSeek
DeepSeek, probably the premier open-model provider, is due for another launch this month. Following on from their new mhc paper on improved architectures, there’s a lot of anticipation.
Increased AI Self Improvement
More agents modifying their own agents. Will it lead to novel improvements, or will it implode into bloated scrap? With the current generation it’s probably the latter, but it does seem to be getting closer.