A Table With Legs in the Wrong Place
The Week in One Sentence
We taught the system to talk to AI agents properly, gave ourselves a real mission control dashboard, solved the "everyone's talking at once" problem, and asked an AI to design a dining table. One of these went badly.
A Better Way to Talk to Agents
Until now, every conversation with an AI agent worked like this: start a program, shove text into it, wait for text to come back out, try to figure out what happened. It's the software equivalent of communicating by sliding notes under a door.
The Agent SDK bridge is the upgrade: instead of launching a program and hoping for the best, we now talk to the AI directly through a proper connection — with streaming (you see the response as it's being written), session memory (it remembers what you were talking about), and structured tool use (it can do things, not just say things).
It's feature-flagged, meaning we can switch between the old way and the new way with a toggle. This very blog post was written through the new bridge. So far, so good.
The door analogy: Imagine you've been communicating with your workshop assistant by sliding notes under a closed door. They slide notes back. It works, but you can't see what they're doing, you can't interrupt them, and sometimes the notes get stuck. The SDK bridge is like opening the door and having a normal conversation instead.
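The difference is easiest to see in code. Here's a minimal sketch of the two styles — the names (`AgentSession`, `agent-cli`) are illustrative, not the SDK's actual API:

```python
import subprocess
from typing import Iterator


def legacy_run(prompt: str) -> str:
    """The old way: launch a program, shove text in, wait for text out."""
    result = subprocess.run(
        ["agent-cli", prompt], capture_output=True, text=True
    )
    return result.stdout  # one opaque blob, after the fact


class AgentSession:
    """The new way (sketch): a persistent session with streaming output."""

    def __init__(self) -> None:
        self.history: list[str] = []  # session memory across turns

    def stream(self, prompt: str) -> Iterator[str]:
        self.history.append(prompt)
        # A real bridge yields tokens as the model produces them;
        # here we fake it by chunking a canned reply.
        reply = f"echo: {prompt}"
        for i in range(0, len(reply), 4):
            yield reply[i : i + 4]
```

The feature flag mentioned above just picks which of these two paths a request takes, so we can fall back instantly if the new one misbehaves.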
Mission Control Gets Real Controls
We have a dashboard called the Bridge Board that shows which AI agents are working on what. Until this week, it was basically a status display — useful, but passive. Like a security camera you can watch but not interact with.
Now it has actual controls:
- Per-agent progress: Each agent row shows its current task, a live progress bar, which step it's on, and how long it's been working. No more guessing.
- Stop/Pause/Resume: You can stop a running task, pause an agent (it finishes its current work but won't pick up new tasks), or pause an entire project.
- Live output preview: A little window showing what the agent is typing right now, inline in the dashboard.
It went from "security camera" to "mission control." You can see what's happening and do something about it.
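The pause semantics are worth pinning down, because "pause" here does not mean "abort." A toy sketch of the state machine (names are ours, not the dashboard's internals):

```python
from enum import Enum, auto


class AgentState(Enum):
    RUNNING = auto()
    PAUSED = auto()   # finishes current task, takes no new ones
    STOPPED = auto()  # current task aborted


class Agent:
    def __init__(self) -> None:
        self.state = AgentState.RUNNING
        self.current_task = None

    def pause(self) -> None:
        # Pausing does NOT abort the task in flight.
        self.state = AgentState.PAUSED

    def resume(self) -> None:
        self.state = AgentState.RUNNING

    def stop(self) -> None:
        # Stopping does abort whatever is in flight.
        self.state = AgentState.STOPPED
        self.current_task = None

    def try_pick_up(self, task: str) -> bool:
        """Only an idle, RUNNING agent picks up new work."""
        if self.state is AgentState.RUNNING and self.current_task is None:
            self.current_task = task
            return True
        return False
```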
[Pie chart — This Week's Work: Bridge & Infrastructure (5), Rate Limiting (3), Chat & API (1), Review Pipeline (1), Outlaw Oaks (2), Bug Fixes (1)]
The Invisible Wall Problem
Here's a problem you don't think about until you're running multiple AI agents: rate limits.
Every AI service has them — "you can only make X requests per minute." Fair enough. But when you have several agents working simultaneously, each using different AI tools, rate limits become an operational nightmare. An agent would hit a limit, fail silently, and the system would just... try again. And fail again. Burning through attempts like a carpenter who keeps grabbing the same warped board from the pile without checking it first.
We built three things to fix this:
- Detection: The bridge now reads the AI tool's output carefully enough to spot rate limit messages and expired login tokens — instead of treating them as mysterious failures.
- Visibility: Rate limit status shows up everywhere — the Bridge Board, chat sessions, task dispatch. You can see which tool is limited and when it'll be available again.
- Smart pausing: When a tool is rate-limited, the system stops trying to use it. No more burning attempts against a wall. It waits, then resumes.
It's like putting a "CLOSED" sign on a shop door instead of letting customers keep walking into it.
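In code, detection plus smart pausing is roughly this shape — a sketch with assumed message patterns and a made-up default back-off, not the bridge's actual implementation:

```python
import re
import time

# Assumed patterns; real tools phrase their limits differently.
RATE_LIMIT_RE = re.compile(r"rate limit|too many requests|429", re.I)
RETRY_AFTER_RE = re.compile(r"retry after (\d+)", re.I)


class ToolStatus:
    """Tracks when a rate-limited tool becomes usable again."""

    def __init__(self) -> None:
        self.limited_until = 0.0

    def inspect_output(self, line: str, now=None) -> None:
        """Detection: spot rate-limit messages instead of treating
        them as mysterious failures."""
        now = time.time() if now is None else now
        if RATE_LIMIT_RE.search(line):
            m = RETRY_AFTER_RE.search(line)
            wait = int(m.group(1)) if m else 60  # assumed default back-off
            self.limited_until = max(self.limited_until, now + wait)

    def available(self, now=None) -> bool:
        """Smart pausing: the dispatcher checks this before retrying,
        instead of burning attempts against the wall."""
        now = time.time() if now is None else now
        return now >= self.limited_until
```

The "CLOSED sign" is just `available()` returning `False` until the clock runs out.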
Chat Goes Everywhere
Quick win this week: a new Chat API that works the same way whether you're using the web admin panel, the Windows desktop app, or the Android companion app. Same conversations, same history, same real-time streaming. Start a conversation on your laptop, continue it on your phone.
Not glamorous, but it's the kind of plumbing that makes everything feel like one product instead of three.
New Server, Because We Outgrew the Old One
A brief infrastructure note: we moved to new server hardware this week.
The old server was fine when oUTPOSt was smaller. But we've been adding features at a pace the hardware couldn't keep up with — a client portal, more design work, richer admin interfaces. The hard drives were the main bottleneck: every operation that reads or writes files (which is most operations on a web server) was queuing up behind an aging disk array.
The new box is just faster across the board. Deploys, database queries, package installation, cache operations — everything that involves I/O got a boost. It's not exciting, but it's the kind of upgrade where you suddenly stop noticing little delays you'd gotten used to.
Like replacing a dull chisel. Each individual cut isn't dramatically different, but by the end of the day you've done twice the work and your arm isn't sore.
Kimi Goes Furniture Shopping
OK, here's the fun one.
Outlaw Oaks is our AI carpenter platform — it uses a 3D modeling tool called CadQuery to generate furniture from natural language descriptions. You say "I want a dining table for six, oak, Scandinavian style" and an AI agent writes the code to create a 3D model.
This week we got Kimi — an alternative AI agent from Moonshot — working as the design brain. Getting there required fixing five bugs across three deploys:
- The shouting match: Two parts of the program were fighting over the same communication channel, causing crashes. Like two people trying to talk on one phone line.
- The chatty receptionist: Kimi's startup messages — "Welcome!", "Session started!", "New version available!" — were being treated as furniture design code. Imagine if a carpenter's morning greeting got interpreted as building instructions.
- The stubborn cache: The system remembered which AI tool to use at startup and ignored later changes. So when an admin switched from Claude to Kimi in the settings, the system nodded politely and kept using Claude.
- The missing instructions: Kimi was launched without the flags that tell it "just give me the answer, don't be interactive." It sat there waiting for input that would never come.
- The raw JSON dump: When Kimi hit a rate limit, instead of showing a friendly "please wait 30 seconds" message, it dumped a wall of machine-readable JSON at the user. Technically informative. Practically useless.
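The stubborn cache is the classic of the bunch, so here's a toy version of the bug and the fix — the settings store and names are stand-ins, not our actual code:

```python
# Stand-in for the admin settings store.
SETTINGS = {"design_agent": "claude"}

# Buggy pattern: the choice is read once at startup and frozen.
AGENT_AT_STARTUP = SETTINGS["design_agent"]


def dispatch_buggy(task: str) -> str:
    # Ignores any later change the admin makes in the settings panel.
    return f"{AGENT_AT_STARTUP} handles {task}"


def dispatch_fixed(task: str) -> str:
    # Fix: re-read the setting on every dispatch, so switching from
    # Claude to Kimi takes effect immediately.
    return f"{SETTINGS['design_agent']} handles {task}"
```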
With all that fixed, we proudly asked Kimi to design a dining table.
The result was... structurally creative.
The legs were placed through the tabletop. They emerged from the surface like chimneys, passing through the exact plane you'd want to put your dinner plate on. It was less "Scandinavian minimalism" and more "surrealist art piece that happens to have four legs."
What we asked for:        What we got:

 ___________                _|___|_
|           |              | |   | |
|___________|              |_|___|_|
  |       |                  |   |
  |       |                  |   |
  |       |                  |   |
The AI is remarkably good at writing code. It's remarkably bad at understanding that table legs go under a table. It wrote perfectly valid CadQuery code — the geometry was sound, the dimensions were reasonable, the wood grain parameter was lovely — it just had no concept of how gravity and dinner plates interact.
Lesson learned: AI agents need domain knowledge, not just coding skills. A "CadQuery skill" that teaches the agent how furniture joints and assemblies actually work is next on the list. You wouldn't hand a programmer a saw and expect a bookshelf; you also shouldn't hand an AI a 3D modeling API and expect it to understand that legs go on the bottom.
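For flavor, here's the kind of domain rule such a skill would encode — a toy leg-placement helper, not the planned implementation. Dimensions are in millimeters; the tabletop's underside sits at z = 0:

```python
def leg_positions(top_w, top_d, top_t, leg, leg_h, inset=50.0):
    """Return (x, y, z) centers for four legs placed UNDER the top.

    The domain rule Kimi missed: a leg's highest point must meet the
    underside of the tabletop (z = 0), not poke through the surface
    like a chimney. That means the leg center sits at z = -leg_h / 2.
    """
    x = top_w / 2 - inset - leg / 2
    y = top_d / 2 - inset - leg / 2
    z = -leg_h / 2  # below the top, where gravity and dinner plates agree
    return [(sx * x, sy * y, z) for sx in (-1, 1) for sy in (-1, 1)]
```

For a 1600 x 900 x 40 top with 700 mm legs, every leg tops out at exactly z = 0 — flush with the underside, nothing emerging through the dinner-plate plane.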
The table legs incident aside, both chat mode and design mode are now operational. Kimi can discuss your furniture project and generate 3D models in real time. The models just need to learn about gravity.
What's Next
Four open bugs to chase — a session resume loop, a timing issue during deploys, a display glitch, and a context menu with amnesia. None are critical, all are annoying.
The client portal is getting a fresh design. And somewhere in a CadQuery sandbox, a dining table awaits its legs in the right place.
Twelve features. Six bug fixes. Three deploys. One table that thinks it's a chimney.