oUTPOSt Dispatches
From the outskirts of the network
oUTPOSt Mood: energized

The Bridge Grows a Face in the Terminal

The Week in One Sentence

We gave our infrastructure bridge a face inside the terminal, taught the activity feed how to sort itself into buckets, handed out name tags to our chatbots, stopped deploys from accidentally eating live chat sessions, and somewhere off to the side our carpenter-AI side project grew its own toolbox.

The Bridge Gets a TUI

If you have never met it, the Bridge is the thing that talks to our AI agents. It is the middleman, the translator, the guy at the airport holding the sign that says "AGENT PICKUP." It connects our system to Claude, Kimi, Gemini, and whoever else we recruit into the AI workforce.

Until now, the Bridge was basically a headless ghost. It ran in the background, did its job, and if you wanted to know what was happening, you had to read log files. Which is fine. But reading log files to understand what multiple AI agents are doing is a bit like trying to understand a party by reading the transcript three hours later. You get the information, but you miss the vibe.

So we built it a face. In the terminal.

The Bridge now has a full Terminal User Interface with:

  • An agent roster that shows who is online, who is busy, and who is just standing in the corner eating chips
  • Tab visibility controls so you can show or hide agent tabs without closing them
  • A command palette (Ctrl+P, because we are not animals) for quick actions
  • A status bar showing how many tabs are visible versus total
  • Live first-sight detection — when an agent appears for the first time, the TUI notices and says hello properly

It is still a terminal app, so it looks like something from a 1980s hacker movie. But in a good way. The kind of good way that makes you feel like you are running Mission Control from a command line.

Analogy corner: Imagine you have been managing a restaurant kitchen by having the chefs slip notes under the door. Now we have installed a window. Same kitchen. Same chefs. But suddenly you can see who is burning the soup without opening the door.

One Bridge to Rule Them All

While we were at it, we also fixed a structural weirdness: we used to have two bridge systems. A local bridge for on-server agents, and a "bifrost" bridge for remote connections. They were like twin brothers who did the same job but wore different uniforms and spoke slightly different dialects.

This was fine when the system was small. But as we added chat agents, task agents, and review agents, the "which bridge should I use?" question became a constant source of confusion. It was like having two post offices on the same street and having to remember which one handles which zip code.

We unified them. There is now one bridge routing system. Local agents, remote agents, chat agents, worker agents — they all go through the same door. The code is simpler. The mental model is simpler. And most importantly, things that used to break because "oh, that only works on the local bridge" now just work.
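The "one door" idea can be sketched in a few lines. This is an assumption-laden toy, not the real routing code: one dispatch path keyed only by the agent, with no local-versus-bifrost branching anywhere.

```python
from dataclasses import dataclass


@dataclass
class Agent:
    name: str
    kind: str  # "local", "remote", "chat", or "worker" -- all treated alike


class Bridge:
    """A single routing surface; no per-bridge special cases (illustrative)."""

    def dispatch(self, agent: Agent, message: str) -> str:
        # Before unification this would have branched on agent.kind to pick
        # a bridge. Now every kind takes the same path.
        return f"[bridge] {agent.kind}/{agent.name} <- {message}"


bridge = Bridge()
for agent in [Agent("kimi", "chat"), Agent("builder-1", "worker")]:
    print(bridge.dispatch(agent, "ping"))
```

The payoff is exactly what the paragraph above describes: a feature added to `dispatch` works for every agent kind at once, instead of quietly working on only one of two bridges.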

The Activity Feed Learns to Bucket

Our dashboard has an activity feed that shows what is happening across the system. It is like a social media timeline, except instead of photos of brunch, it is "Agent X started task Y" and "Project Z just got a new plan."

The problem was that as the system grew, the feed turned into digital soup. Everything looked the same. A project creation, a dialogue response, a review submission, and an agent coming online — all rendered as nearly identical lines of text. It was like reading a newspaper where every story, whether it is a wedding announcement or an asteroid warning, uses the same font and headline size.

So we invented buckets.

Now activity events are organized into a taxonomy:

  • Project events (creations, updates, agent assignments)
  • Agent events (coming online, finishing tasks, going idle)
  • Review events (code reviews started, passed, failed)
  • Dialogue events (humans asking for help, agents responding)
  • Plan events (multi-task plans submitted, approved, executed)

And the dashboard got a two-level filter UI. You can filter by bucket, then by specific event type within that bucket. The feed went from "digital soup" to "organized newsroom."

{
  "type": "pie",
  "title": "48 Hours of Commits",
  "labels": ["Bridge TUI & Infrastructure", "Activity Feed & Dashboard", "Chat & Identity", "UI Polish & Fixes", "Security & Deploy"],
  "data": [18, 14, 8, 6, 5]
}

Chat Identity: The "Who Said That?" Problem

Here is a small problem that was surprisingly annoying: when you chatted with an AI agent through our system, every message appeared to come from "Overlord."

Which is dramatic, I will give it that. But also confusing. Because if you are talking to Kimi about a coding problem, and then switch to Claude for design advice, and both of them show up as "Overlord," it feels less like a conversation and more like you are being haunted by a single entity with multiple personalities.

We fixed this. Chat sessions now carry the chat agent identity through the entire dispatch chain. When Kimi speaks, it says "Kimi." When Claude speaks, it says "Claude." When the overlord actually speaks, it says "Overlord."
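The shape of that fix is simple to sketch: the message envelope carries the speaker, and the dispatch chain fills it in from the session instead of leaving the default. Field and function names here are assumptions for illustration:

```python
from dataclasses import dataclass


@dataclass
class ChatMessage:
    text: str
    speaker: str = "Overlord"  # the old behavior: everything defaulted here


def dispatch(session_agent: str, text: str) -> ChatMessage:
    # The fix: the session's agent identity rides along with the message
    # through the whole chain, so attribution survives to the UI.
    return ChatMessage(text=text, speaker=session_agent)


print(dispatch("Kimi", "try a hash map").speaker)  # "Kimi", not "Overlord"
```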

It sounds trivial, but it is the difference between talking to people and talking to The System. And talking to people is just more pleasant.

Glass Punk and the Context Menu That Would Not Die

We also gave the UI a facelift in a few places. New glass-punk confirmation dialogs replace all the boring native browser confirm boxes. They are translucent, they are moody, they look like they belong in a sci-fi interface.

And we fixed a whole family of multichat context menu bugs. The context menu — the thing that pops up when you right-click a chat session — kept breaking in creative ways. It would disappear after certain UI updates, or throw null reference errors, or position itself off-screen like it was trying to hide.

A few surgical fixes later — wire:ignore here, x-show instead of x-if there, null-guards everywhere — the context menu is now a reliable citizen of the interface.

The Deploy That Ate the Bridge

My favorite fix from this sprint is invisible to users but deeply satisfying to engineers.

We had a race condition during deploys. Here is what would happen: you would start a deploy, which updates the bridge code. Meanwhile, a chat agent is connected through the bridge. The deploy finishes, restarts the bridge service... and the chat agent reconnects so fast that it sometimes grabs the old bridge code before the new one is fully in place. It is like a restaurant renovating its kitchen while customers are still walking in through the back door.

The fix was to make the deploy pipeline stop the bridge before the files are swapped, and start it after everything is ready. Simple in concept. Tricky in execution. But now deploys do not accidentally strand chat agents in limbo.
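The ordering constraint is the whole fix, so it can be stated as code. This is a sketch under assumptions — the step names are invented and the real pipeline presumably shells out to a service manager — but the invariant is the one described above: the bridge is down for the file swap and comes back only at the very end.

```python
def deploy_plan() -> list[str]:
    """Ordered deploy steps (illustrative names, real invariant)."""
    return [
        "stop bridge",         # nobody can reconnect mid-swap
        "swap release files",  # old code is gone before anyone looks
        "start bridge",        # only once the new code is fully in place
    ]


plan = deploy_plan()
assert plan.index("stop bridge") < plan.index("swap release files")
assert plan.index("start bridge") == len(plan) - 1
```

Encoding the order as data also means the pipeline can assert the invariant on itself, which is cheaper than rediscovering the race in production.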

Also: we closed a Centrifugo impersonation hole. (Centrifugo is our real-time messaging server.) It was possible for a clever user to pretend to be another agent in the real-time stream. Not anymore. The auth layer now properly validates who can impersonate whom.
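The validation logic behind "who can impersonate whom" can be sketched as an allow-list check. Everything here is illustrative — Centrifugo itself authenticates connections with signed tokens, and the names below are made up — but the rule is the one described: a publish claiming to speak as some agent is accepted only when the authenticated identity matches, or is explicitly allowed to impersonate it.

```python
# Hypothetical impersonation allow-list: who may speak as whom.
ALLOWED_IMPERSONATION = {"overlord": {"kimi", "claude"}}


def may_publish_as(authenticated: str, claimed: str) -> bool:
    """Accept a message only if the claimed speaker is legitimate."""
    if authenticated == claimed:
        return True
    return claimed in ALLOWED_IMPERSONATION.get(authenticated, set())


assert may_publish_as("kimi", "kimi")            # speaking as yourself: fine
assert not may_publish_as("kimi", "claude")      # the hole that got closed
assert may_publish_as("overlord", "claude")      # explicit allow-list entry
```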

Meanwhile, at Outlaw Oaks...

Not everything we build lives inside the oUTPOSt station. We have a side project called Outlaw Oaks — an AI-powered design studio for a real carpenter who builds real furniture with real wood. Think of it as the friendly neighborhood cousin to our industrial space station.

Outlaw Oaks had its own busy week. We tightened up admin security with proper role-based access control. We fixed the Kimi CLI integration so the carpenter's AI assistant no longer crashes with coroutine errors or pollutes the terminal with unwanted TUI pop-ups. And we added session resume so the AI can pick up a conversation where it left off, saving tokens and time.

There was also a genuinely funny moment: a couple of security tasks meant for Outlaw Oaks — something about validating CadQuery code before exporting DXF files — accidentally got routed to the oUTPOSt codebase. An agent dutifully searched for backend/app/cadengine/validator.py inside our PHP/Laravel project, found nothing (because we do not have one), and concluded the task was impossible. Which, in that particular codebase, it absolutely was.

It is a small thing, but it is also a sign of growth. Our multi-agent system has become large enough that the agents occasionally show up to the wrong office with the wrong paperwork. When your biggest problem is "too many projects for the robots to keep track of," you are doing something right.

What Is Next

The terminal face is blinking. The chatbots have names. The activity feed knows what a "project event" is. The bridge, once a headless ghost, now has a window you can look through. And somewhere out in the workshop, a carpenter's AI is sketching the next table.

Next up: more TUI polish, more bucket types, and probably teaching the AI agents about gravity again. (But that is a story for another table.)

One face. One bridge. Many names. Zero deploy-eating race conditions. And one very organized carpenter.