The power play of packaging persistence
Large players like OpenAI and Anthropic are working hard on building better and better models, and it feels like a race that is unequivocally positive for us as users and builders. The better the models, the greater the opportunities. And the more fantastic the things we get to build. Right?
The more interesting, and telling, work is what they're less vocal about, or aren't talking about at all. There have been signs here and there, easier to spot in retrospect. But only after Anthropic accidentally published 500k lines of Claude Code's source did we get real insight into the plans they likely didn't want us talking about.
While most outlets have focused on Mythos/Capybara (how impressive the new model looks, how Anthropic spins it as so good they hardly dare release it, and so on), another part of the leaked code will likely matter far more to the future of both users and builders.
That part has a name: Conway.
The bet on more walls
A quick Google search for anthropic "mythos" yields 2.5M results, while anthropic "conway" sits just above 200K. And digging into the results, most pages take the same angle: Conway is an always-on agent, it works while you sleep, it's OpenClaw for Claude, and so on. But the leaked code revealed something more about Conway: the cnw.zip extension protocol.
Some outlets mention the cnw.zip extensions and how Anthropic likely wants them to grow into an ecosystem in the vein of Apple's App Store: builders create tools, apps, handlers and more, and people get easy access to them through cnw.zip's discoverability features. The protocol sits on top of MCP, which has seen rapid adoption since Anthropic released it as an open standard. But cnw.zip itself is proprietary, so anything you build for this protocol will only work inside Anthropic's ecosystem. Their bet, of course, is that you'll choose to base your tool on their proprietary protocol and enjoy the reach it provides, rather than build on MCP and solve the distribution problem yourself.
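For contrast, it's worth remembering what the open layer underneath looks like. A tool written directly against MCP runs in any client that speaks the protocol; only the cnw.zip packaging on top is proprietary. Here is a minimal sketch using the official MCP Python SDK's FastMCP helper; the "notes" server and its two tools are made-up illustrations, and nothing here assumes anything about cnw.zip's undocumented API:

```python
# A plain MCP server: portable across any MCP-capable client.
# Requires the official SDK: pip install "mcp[cli]"
from mcp.server.fastmcp import FastMCP

# The server name "notes" is an illustrative example, not a real product.
mcp = FastMCP("notes")

# In-memory store for the demo; a real tool would persist somewhere you own.
NOTES: list[str] = []

@mcp.tool()
def add_note(text: str) -> str:
    """Store a short note for later retrieval."""
    NOTES.append(text)
    return f"Stored note #{len(NOTES)}"

@mcp.tool()
def list_notes() -> list[str]:
    """Return all stored notes."""
    return NOTES

if __name__ == "__main__":
    # stdio transport is the default; any MCP client can launch this server.
    mcp.run()
```

Everything above works in any MCP client today. The moment the same tool is wrapped as a cnw.zip extension, its reach, and its continued availability, become Anthropic's to grant.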
And this lock-in extends to users as well. When you use Conway, or anything built on top of it, the memory compounds the longer you use it. The assistant that started as an interesting, sometimes useful, sometimes quirky helper gradually transforms into an indispensable companion you simply cannot imagine losing.
But the compounded memory that is YOU belongs to Anthropic. They own the treasure chamber that makes this assistant unlike any other assistant. How likely is it that Anthropic will let you leave with this memory? Or that it's even possible?
Not likely.
Will you then start from scratch somewhere else, or just stay inside their beautiful garden? Of course you'll stay. When leaving is that painful, you are effectively locked in.
This is lock-in at a layer that hasn’t existed before. It’s not about data portability - we have laws and frameworks for that. It’s about intelligence portability. —Nate B Jones
Nate's Substack is the only place I've found this angle on Conway, and I think it's the most astute take on it. It's easy to be blinded by the dazzling lights in these times, so it's especially important to put on some extra-dark sunglasses and look for patterns rather than the individual shining stars. And it is actually written in the stars: just as Microsoft, Google, Apple and many more have done before, the AI companies are tending their gardens and building the walls. Staying outside will likely be challenging, because if you build your tools and try to do business on the outside, the result may be, as Nate says:
You're building a website in 2008 when everyone's downloading iPhone apps. —Nate B Jones
Do you want to be the creator of the website or the iPhone app in 2008?
Build your own memory persistence
But as users, the future may look different. Your memory doesn't have to be handled and owned by the AI companies: you can build your own. Even today it's not that hard, even for non-technical people; there are plenty of tutorials and videos on building a second brain as a persistent store for your memories. Your agents can learn about you and your habits and write what they learn to a place you own. It might be less streamlined than what the AI companies will provide in their harnesses, but it'll be worth it. You can take your memories with you wherever you want, to whatever provider you want, and the longer you use the store, the more it compounds into something truly useful (a minimal sketch follows below). Choose your harness with care. Because
when most models converge and offer similar performance and quality, the harness will be the most consequential layer in artificial intelligence.
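To make the second-brain idea concrete, here is a minimal sketch of a memory store you own: one JSON file on disk that any agent or assistant can read and append to. The file path and the helper names (remember, recall) are illustrative assumptions, not a reference to any particular tutorial or tool:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical sketch: a provider-agnostic memory store the user owns.
# One JSON file on disk; any agent or assistant can read and append to it.
MEMORY_FILE = Path.home() / "second-brain" / "memories.json"

def load_memories() -> list[dict]:
    """Read all stored memories; an empty store is just an empty list."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def remember(text: str, tags: list[str] | None = None) -> None:
    """Append one memory with a UTC timestamp, then write the file back."""
    memories = load_memories()
    memories.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "text": text,
        "tags": tags or [],
    })
    MEMORY_FILE.parent.mkdir(parents=True, exist_ok=True)
    MEMORY_FILE.write_text(json.dumps(memories, indent=2))

def recall(keyword: str) -> list[dict]:
    """Naive keyword search; a real store might layer embeddings on top."""
    return [m for m in load_memories() if keyword.lower() in m["text"].lower()]

if __name__ == "__main__":
    remember("Prefers concise answers; works in UTC+1", tags=["preferences"])
    print(recall("concise"))
```

Because it's plain JSON in a directory you control, switching providers means pointing a new assistant at the same file. The compounding accrues to you, not to the harness.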