Local LLMs, dark forests and echo chambers
This post covers how I ran a neat *Claw product, then reflects a bit on what it means to run AI services yourself these days and what that might mean for society.
NullClaw
After Clawdbot/Moltbot/OpenClaw gained popularity (introducing channels like e-mail and WhatsApp to contact your AI assistant proved very attractive), I wanted to see what it would take to run your own offline personal AI agent. The downside of OpenClaw is that you still need to connect it to external models that provide the actual AI features, it has full access to the machine it runs on, and furthermore anyone could hijack your AI assistant by reaching it over those same channels.
So if you really want to run your models offline and on your own hardware, you need a beefy machine. That brings other problems, which I will detail later in this post. For now I wanted a secure, low-resource process that would give me free access to start playing around with this and get a better feel for things.
Enter NullClaw. It is a wonderfully small binary that just connects to OpenRouter. OpenRouter offers plenty of free models, so you connect the one that fits your use case best (mine was mainly coding, so I went with Qwen: Qwen3 Coder 480B A35B (free)). Then you point, for example, the Zed editor at your NullClaw instance, and you can code with your “own” AI assistant.
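NullClaw's internals are out of scope here, but OpenRouter exposes an OpenAI-compatible chat completions endpoint, so the request any such client ends up sending can be sketched roughly like this. The model id and the placeholder API key are assumptions for illustration, based on OpenRouter's public API rather than anything NullClaw-specific:

```python
import json
import urllib.request

# OpenRouter's OpenAI-compatible chat completions endpoint.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(prompt: str, model: str, api_key: str) -> urllib.request.Request:
    """Build a chat completion request in the OpenAI-compatible shape."""
    body = {
        "model": model,  # e.g. a free-tier model id (assumption)
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("Write a hello world program.", "qwen/qwen3-coder:free", "sk-or-...")
# Actually sending it would be urllib.request.urlopen(req), which needs a real key.
```

The same shape works for any OpenAI-compatible backend, which is exactly why a tiny shim binary in front of OpenRouter is enough to feed an editor like Zed.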
Local offline LLMs
Of course you can also run the entire model yourself, for free. That requires a serious investment in hardware though. Not only that, you might want to train it further (fine-tune it is the better term) so it fits you better.
Therein also lies the problem. A well-known failure mode in machine learning is overfitting a model/function: the model fits the training data exactly, which leaves no room for actual generalization, for doing actually useful things. For example, say you are training a model to respond to textual input and provide meaningful output, so you have a bunch of text representing dialogue: questions, answers, maybe even chat transcripts between two people to make it more organic. The problem with overfitting is that the resulting model will only ever respond to the exact sentences that appeared in its training data, and it will always respond with exactly whatever came next in that data set.
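The classic way to see this is with a tiny curve-fitting sketch, a stand-in for the text model above: give a polynomial as many parameters as there are data points and it reproduces the training set perfectly, noise included, while being useless in between. Nothing here is LLM-specific; it is just the standard overfitting demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten noisy samples of a sine wave: our "training dialogue".
x_train = np.linspace(0.0, 1.0, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0.0, 0.1, size=10)

# Degree 9 means 10 coefficients for 10 points: the fit can
# interpolate the training data exactly, memorizing the noise.
overfit = np.polynomial.Polynomial.fit(x_train, y_train, deg=9)

# Degree 3 has too few parameters to memorize, so it must generalize.
modest = np.polynomial.Polynomial.fit(x_train, y_train, deg=3)

# Held-out points the models never saw.
x_test = np.linspace(0.05, 0.95, 50)
y_test = np.sin(2 * np.pi * x_test)

def mse(model, x, y):
    """Mean squared error of a fitted polynomial against targets."""
    return float(np.mean((model(x) - y) ** 2))
```

Replace the polynomial with an assistant trained only on your own chat history and the picture is the same: near-perfect recall of what you already said, poor behaviour on anything new.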
Not exactly useful. Now think back to models that keep learning from you as you use them, in order to do their job better. That might make your model overfit to you. It will not give you best practices, for example, since you do things your own way and it will adjust to that. It might not surface information that is new and factually relevant to you, because you never looked for that particular piece of information; one cannot know one's own incompetence.
Echo chambers
This will lead to super echo chambers, where a virtual text prediction generator feeds you the idea that you are the centre of the universe and the best human that ever was. Especially now, with all the “great idea”, “that sounds wonderful” and other positive lead-ins to answers given by the likes of ChatGPT.
There is a reason something called AI-induced psychosis now exists.
Dark forests
So now even more dark forests may be emerging, as lots of people run their own models tailored to themselves, and people disconnect from shared social structures more and more.