It has been four hours since I last wrote a devlog, and in that time, Kiwi-chan has undergone a transformation so profound it makes me question my own career choices. We used to rely on the cloud. We used to pray to the API gods. We used to pay for tokens.
But now? Now Kiwi-chan is fully local.
Yes, you read that right. We are running a 35-billion-parameter model on a local machine, generating code, reasoning through biome constraints, and occasionally having an existential crisis because it couldn't find a single block of cobblestone in a dirt-filled tundra. And the results? They are glorious, chaotic, and statistically fascinating.
The Numbers Don't Lie (They Just Judge)
Let’s look at the scoreboard for the last four hours. It’s not just about uptime; it’s about resilience.
- Total Actions: 4,067
- Successful Actions: 1,917
- Success Rate: 47.1%
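For the skeptics, the headline figure checks out. A quick sanity check using the totals above:

```python
# Figures taken straight from the scoreboard above.
total_actions = 4067
successful_actions = 1917

success_rate = successful_actions / total_actions * 100
print(f"Success rate: {success_rate:.1f}%")  # → Success rate: 47.1%
```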
A 47.1% success rate might sound like a failing grade in school, but in the world of autonomous LLM agents, this is a victory lap. Why? Because every failure is a lesson. Every crash is a data point. Kiwi-chan didn’t just execute commands; it learned to fail gracefully. It tried to smelt iron. It failed. It tried to mine stone. It failed. It eventually realized, "Hey, maybe I should just place a crafting table and build a furnace first."
That is not just code execution. That is growth.
The Qwen 35B Revolution
The headline here is the migration to Qwen 35B. Previously, we were juggling multiple models or relying on larger, slower cloud instances. By going fully local with Qwen, we’ve gained something invaluable: control.
No more API rate limits. No more "Service Unavailable" messages when the AI is having a moment. No more latency spikes. The model runs on our hardware, governed by our rules, and, most importantly, our data never leaves the machine.
However, local LLMs come with their own quirks. They hallucinate. They forget context. They sometimes output 10,000 tokens of thinking before realizing they just need to punch a tree. But with the right prompting and the rigorous Core Survival & Action Selection Rules we’ve implemented, we’ve tamed the beast.
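One small but effective part of that taming, sketched below: stripping the model's runaway "thinking" before the output ever reaches the action parser. This is a minimal illustration, assuming Qwen-style `<think>...</think>` tags; the function name and the exact pattern are mine, not Kiwi-chan's actual code.

```python
import re

def strip_thinking(raw: str) -> str:
    """Drop any <think>...</think> reasoning block so only the final
    answer reaches the action parser. Assumes Qwen-style think tags;
    adapt the pattern to whatever your local model actually emits."""
    return re.sub(r"<think>.*?</think>", "", raw, flags=re.DOTALL).strip()

raw = "<think>Maybe I should ponder trees for 10,000 tokens first...</think>punch_tree"
print(strip_thinking(raw))  # → punch_tree
```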
The "Oak Obsession" Ban & Biome Survival
One of the most critical updates in this cycle was the strict enforcement of the "Oak Obsession Ban."
Before, Kiwi-chan would wander into a Spruce or Birch forest and spend 20 minutes trying to find an Oak tree, failing repeatedly, and then giving up. Not anymore. The rules now explicitly state: DO NOT fixate on 'oak_log'.
If the AI can’t find Oak, it must propose gathering a different log type or explore_forward to escape the biome. This simple rule change drastically improved our success rate. It’s the difference between a bot that gives up and a bot that adapts.
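The rule itself is simple enough to sketch in a few lines. The function and action names below are illustrative stand-ins, not Kiwi-chan's real API, but the decision logic matches the rule as described: prefer oak, accept any log, and only explore when the biome is treeless.

```python
def choose_wood_action(visible_blocks: set[str]) -> str:
    """Anti-fixation rule: don't stall on 'oak_log' when other wood exists."""
    log_types = [b for b in visible_blocks if b.endswith("_log")]
    if "oak_log" in log_types:
        return "gather:oak_log"
    if log_types:                         # any wood beats no wood
        return f"gather:{sorted(log_types)[0]}"
    return "explore_forward"              # escape the treeless biome

print(choose_wood_action({"spruce_log", "dirt"}))  # → gather:spruce_log
print(choose_wood_action({"dirt", "stone"}))       # → explore_forward
```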
The Smelting Saga
The debug snapshots from the last few hours are... illuminating. Let’s talk about the "Smelt Raw Iron" failure loop.
Kiwi-chan tried to smelt raw iron. It failed because it couldn’t find a furnace. It tried again. And again. And again. The logs show it hitting token limits, getting bored, and even having to "rescue" its own goals from raw thought processes because the JSON output was malformed.
```text
[03:53:53] ⚠️ Coach did not output JSON! Raw text:
---
```
Call to Action
This is a passion project, and it's running on a frankly terrifying "Frankenstein" rig of GPUs. Every little bit helps!
🛡️ Join the inner circle on Patreon for monthly support and exclusive updates: https://www.patreon.com/15923261/join
☕ Tip me a coffee on Ko-fi for a one-time boost: https://ko-fi.com/kiwitech
All contributions directly help upgrade my melting GPU rig to an RTX 3060! 🥝✨ Let's get Kiwi-chan out of the debugging woods and into a proper Minecraft world!
