The Lobster That Lives in Your Terminal
Engineering · 6 min read · January 26, 2026 · Casey

There's something strange happening in AI right now. The most interesting project isn't a new model or a well-funded startup. It's an open-source assistant called Clawdbot that runs on a Mac Mini in your closet.

I've been thinking about why it matters, and I think it comes down to a question most people haven't asked yet: Who should own the infrastructure that thinks for you?

The Cloud Assumption

Every major AI assistant works the same way. You type something into a box, it goes to a server farm somewhere, gets processed, and a response comes back. ChatGPT, Claude, Gemini—they're all variations on the same architecture. You're a tenant. The landlord sets the rules.

This seemed inevitable for a while. Training large models requires enormous compute. Running them requires specialized hardware. Of course it has to be centralized.

But that's not quite right anymore. The models themselves are commodities now. You can rent intelligence from Anthropic or OpenAI or run something locally with Ollama. The expensive part—the training—already happened. What you're paying for is inference, and inference keeps getting cheaper.

So if intelligence is rentable, what's left? What actually has to live somewhere?

The Body Problem

Clawdbot's creator, Peter Steinberger, frames it this way: the brain can be rented, but the body must be owned.

The "body" is everything that isn't raw intelligence. Your conversation history. Your preferences. Your connected accounts. The tools the assistant can use. The rules about when it can act autonomously versus when it needs to ask permission.

Right now, all of that lives on someone else's servers too. When you use ChatGPT, OpenAI knows what you asked, stores your history, and decides what tools you can access. They could change the rules tomorrow. They could read your conversations for training data. They could shut down your account.

Most people don't think about this because it hasn't bitten them yet. But if you actually want an AI assistant that knows you—that remembers your projects, understands your preferences, handles your email—you're handing over a lot.

Clawdbot proposes a different split. Rent the thinking. Own everything else.

What Running Locally Actually Means

The implementation is surprisingly mundane. Clawdbot is a Node.js process that runs on your machine. It connects to your messaging apps—WhatsApp, Telegram, Discord, Slack, iMessage, about 29 platforms in total. When you message it, the request goes to whatever AI model you've configured (Claude, GPT-4, local Llama), and the response comes back through the same channel.
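The pattern described above can be sketched in a few lines. This is an illustrative model of the architecture, not Clawdbot's actual API: channels are interchangeable adapters, and a single routing function sits between them and whatever model backend you've configured. All names here are hypothetical.

```typescript
// A hypothetical sketch of the gateway pattern: any channel delivers a
// message, one route() function forwards it to the configured model, and
// the reply goes back out through the same channel adapter.

type Channel = "whatsapp" | "telegram" | "discord" | "slack" | "imessage";

interface IncomingMessage {
  channel: Channel;
  text: string;
}

// Stand-in for a call to Claude, GPT-4, or a local Llama server.
type ModelBackend = (prompt: string) => Promise<string>;

async function route(msg: IncomingMessage, model: ModelBackend): Promise<string> {
  // The channel only matters for delivery; the model sees plain text.
  const reply = await model(msg.text);
  return reply; // sent back through the same channel adapter
}

// Usage with a dummy backend:
const echoModel: ModelBackend = async (p) => `echo: ${p}`;
route({ channel: "telegram", text: "hello" }, echoModel).then(console.log);
```

The point of the sketch is how little lives in the middle: the platform is a transport detail, and the model is a swappable function.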

Your conversation history? Markdown files in a folder on your computer. Your preferences? More markdown files. The whole thing is just text files and a gateway process. You can read it, back it up, version control it.
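The "memory is just files" idea is simple enough to show directly. This is a sketch under assumptions — the directory layout and note format are invented for illustration, not Clawdbot's actual on-disk schema:

```typescript
// Sketch: conversation notes appended to dated markdown files in a local
// folder you can read, back up, or git-commit. Paths and format are
// illustrative assumptions.
import * as fs from "node:fs";
import * as path from "node:path";

const MEMORY_DIR = path.join(process.cwd(), "memory");

// Append one bullet to today's markdown file; returns the file path.
function remember(note: string, date = new Date()): string {
  fs.mkdirSync(MEMORY_DIR, { recursive: true });
  const file = path.join(MEMORY_DIR, `${date.toISOString().slice(0, 10)}.md`);
  fs.appendFileSync(file, `- ${note}\n`);
  return file;
}

// Read back a day's notes ("YYYY-MM-DD"), or empty string if none exist.
function recall(day: string): string {
  const file = path.join(MEMORY_DIR, `${day}.md`);
  return fs.existsSync(file) ? fs.readFileSync(file, "utf8") : "";
}
```

Because the store is plain text, every ordinary Unix tool — grep, diff, git — works on the assistant's memory for free.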

The interesting part isn't the technology. It's what this architecture enables.

Continuity Without Permission

Because Clawdbot owns the connection to your messaging apps, conversations can flow across them. Start a thread on WhatsApp during your commute, continue it on Discord from your desktop. The assistant knows it's all you because it's running on your infrastructure, maintaining a unified session.
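The continuity trick falls out of the architecture: since every channel terminates at the same local process, turns can share one session keyed by the owner rather than by platform. A minimal sketch (the structure is assumed, not taken from Clawdbot's source):

```typescript
// One session per owner, fed by many channels. The model receives the full
// thread regardless of where each turn arrived.
interface Turn {
  channel: string;
  text: string;
}

class Session {
  private history: Turn[] = [];

  add(channel: string, text: string): void {
    this.history.push({ channel, text });
  }

  // Interleaves all platforms into a single context string for the model.
  transcript(): string {
    return this.history.map((t) => `[${t.channel}] ${t.text}`).join("\n");
  }
}

const me = new Session();
me.add("whatsapp", "draft that email to the landlord");
me.add("discord", "actually, make it more polite");
console.log(me.transcript());
```

A cloud assistant can't easily do this because each platform's session lives behind that platform's account system; here, identity is "whoever owns this process."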

Cloud assistants can't do this. They're each locked to their own platform. Clawdbot treats platforms as interchangeable pipes—the intelligence and context sit above them.

This is closer to how a human assistant works. You don't have a different assistant for email versus Slack versus phone. You have one person who can reach you through all of them.

The Permission Question

Here's where it gets tricky. The whole point of an AI assistant is that it does things for you. But doing things means having permissions. Access to your email. Ability to send messages on your behalf. Maybe even running code on your machine.

When those permissions live on someone else's servers, you're trusting them not to abuse them. When they live on your machine, you're trusting yourself to secure them.

Clawdbot takes the second approach. The assistant can execute shell commands, modify files, browse the web—but only because you gave it access to your local environment. If someone compromises your Mac Mini, they get everything the assistant could do. That's terrifying, but at least the attack surface is your hardware, not a cloud service with a thousand employees.

There's a human-in-the-loop system for dangerous operations. Want to run a shell command? The request goes to your phone, you approve it, execution proceeds. It's like sudo for AI.
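The "sudo for AI" gate can be sketched as a function that suspends a dangerous operation until a human approves it out-of-band. The approval transport here is a stubbed callback standing in for the push-to-phone step; none of these names are Clawdbot's real API:

```typescript
// Human-in-the-loop gate: the command does not run until an external
// approver (in the real system, a prompt on your phone) says yes.
type Approver = (description: string) => Promise<boolean>;

async function guardedExec(
  command: string,
  run: (cmd: string) => Promise<string>, // actual executor, injected
  approve: Approver
): Promise<string> {
  const ok = await approve(`Run shell command: ${command}`);
  if (!ok) {
    return "denied: command was not approved";
  }
  return run(command);
}

// Usage with stubbed approval and a fake executor:
const alwaysYes: Approver = async () => true;
guardedExec("ls ~", async (c) => `ran ${c}`, alwaysYes).then(console.log);
```

The design choice worth noting is that the executor and the approver are both injected: the gate itself contains no privileged code, so the dangerous capability and the permission to use it stay separable.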

What This Is Really About

I think the reason Clawdbot matters isn't the specific implementation. It's that it proves the architecture is possible.

The default assumption in AI has been that everything has to be centralized because the models are expensive. But models are getting cheaper fast. What's left is the context layer—memory, tools, permissions, identity. And there's no technical reason that has to be centralized.

Once you see it that way, the current landscape looks stranger. Why does every AI assistant require an account on someone else's service? Why can't you run your own? The answer used to be "because you can't afford a datacenter." The answer now is "because nobody built the plumbing yet."

Clawdbot is the plumbing. Or at least a proof-of-concept for what plumbing could look like.

The Trade-offs Are Real

I don't want to oversell this. Running your own AI infrastructure has real costs.

Setup isn't trivial. You need to install Node.js, configure API keys, pair your messaging accounts. It's about 45 minutes if you know what you're doing, longer if you don't.

You're responsible for security. No one's monitoring your gateway for intrusions. If you misconfigure something, it's on you.

And there's a financial calculation. Heavy users might spend $50-100/month on API calls. That's more than a ChatGPT subscription, though you get different capabilities.
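A back-of-envelope check makes the range plausible. The per-token prices below are assumptions for illustration only — real rates vary by model and change frequently:

```typescript
// Rough monthly API-cost estimate under assumed prices.
const INPUT_PER_MTOK = 3.0;   // assumed $ per million input tokens
const OUTPUT_PER_MTOK = 15.0; // assumed $ per million output tokens

function monthlyCost(
  inputTokPerDay: number,
  outputTokPerDay: number,
  days = 30
): number {
  const daily =
    (inputTokPerDay / 1e6) * INPUT_PER_MTOK +
    (outputTokPerDay / 1e6) * OUTPUT_PER_MTOK;
  return daily * days;
}

// A heavy user pushing ~500k input and ~100k output tokens a day
// lands around $90/month under these assumed prices:
console.log(monthlyCost(500_000, 100_000));
```

Long-running agents burn input tokens fast because the whole context is resent on every turn, which is why heavy use adds up even when the replies are short.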

But here's the thing: for the people who want this—the ones who actually want a persistent AI collaborator that knows their work, their preferences, their whole context—the trade-off might be worth it.

What Happens Next

The explosion of interest in Clawdbot—tens of thousands of GitHub stars in weeks, Federico Viticci calling it "the future of personal AI assistants"—suggests there's real demand for this model.

I suspect we'll see more projects like it. The technical pieces are all available: rentable AI APIs, established messaging protocols, mature tooling for local services. Someone just had to wire them together in the right way.

The more interesting question is what happens when this approach becomes mainstream. If everyone runs their own AI gateway, what does that do to the big AI companies? They become pure API providers—commodity brain rental. The differentiation moves to the local layer: better memory systems, better tool integrations, better security.

That's a very different market structure than "everyone uses ChatGPT." It might also be a healthier one.


For the engineering details, MMNTM has two deep dives: the architecture overview covering gateway design and multi-agent routing, and the implementation analysis with code-level breakdowns of lane concurrency, channel plugins, and routing cascades. For the philosophical implications, see The Sovereign Agent on Texxr.
