Work in progress
This will eventually be a long-form blog post.
Using AI Without Losing Your Mind
Everyone’s talking about what AI can do. Nobody’s talking about what it costs — and where the gains actually go.
The biological case for slowing down
Richard Cytowic makes this plain in Your Brain Wasn’t Built to Hold This Much Information: your brain burns ATP for every decision, every context switch, every review pass. You have a finite cognitive budget every day, and you’re spending it whether the work feels hard or not.
A Harvard Business Review study confirmed what a lot of people already felt but couldn’t name: AI-assisted work produces a specific kind of exhaustion. Not from deep focus, but from throughput. You moved fast, you produced a lot, and you’re somehow more depleted than before. What’s new? Context switching. Every time you jump between tasks or check on an agent’s output, you pay an ATP toll.
I felt this for years before I had the vocabulary for it. Understanding how my brain handles flow and interruption isn’t just personal history; it’s what shaped how I built my setup.
The systemic trap: the vampiric effect
Steve Yegge named this in From IDEs to AI Agents: companies absorb every productivity gain you produce. The baseline just moves up. You’re not 3x more productive and compensated accordingly — you’re just expected to produce 3x now. The HBR study corroborates it. This isn’t new. It happened with email, spreadsheets, and smartphones. AI is just faster and the extraction is harder to see.
The vampire doesn’t kill you. It keeps you just alive enough to keep giving.
The junior engineer problem
Here’s a useful mental model: each agent you run is a junior engineer who wants your attention.
Now imagine you have five of them. They’re all working on something, they all have questions, and they’re all waiting on you. That’s not a productivity multiplier — that’s management overhead you didn’t sign up for, and it’s not showing up in your comp.
The people who get wrecked by this are the ones who treat AI agents like background processes. They’re not. They require steering, review, judgment calls, and escalation handling. That’s real cognitive work. If you don’t design around it, the agents run you instead of the other way around.
What I actually do
I run agents in WezTerm. For notifications, I forked a ZeBar theme and added an “agent-deck”-style feature: a silent visual indicator at the top of my screen that shows agent status without interrupting me. No pop-ups, no sounds.
The reason: notifications designed to interrupt get addressed immediately, even when that’s not what the moment calls for. That’s a panic response masquerading as productivity. The fix is simple — don’t let the tool set the terms of your attention. You check when you’re ready, not when the computer demands it.
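The core of the idea is easy to sketch. This is not my actual ZeBar fork — it’s a minimal, hypothetical Python version, assuming each agent writes its state to a small JSON file (the file names, the `state` field, and the icon set are all my invention). A status bar would just render the returned string; nothing pops up, nothing beeps, and you glance at it when you’re ready.

```python
# Hypothetical "agent-deck" status line: collapse per-agent state files
# into one short, glanceable string for a passive status bar.
import json
from pathlib import Path

# Assumed agent states and the glyphs used to show them (my invention).
STATE_ICONS = {"working": "·", "waiting": "?", "done": "✓"}


def agent_summary(status_dir: Path) -> str:
    """Read every agent's JSON status file and build a compact summary."""
    parts = []
    for f in sorted(status_dir.glob("*.json")):
        state = json.loads(f.read_text()).get("state", "working")
        parts.append(f"{f.stem}{STATE_ICONS.get(state, '·')}")
    return " ".join(parts) if parts else "no agents"


if __name__ == "__main__":
    import tempfile

    # Fake two agents for demonstration.
    d = Path(tempfile.mkdtemp())
    (d / "refactor.json").write_text(json.dumps({"state": "waiting"}))
    (d / "tests.json").write_text(json.dumps({"state": "done"}))
    print(agent_summary(d))  # → refactor? tests✓
```

The key design choice is that the poll happens on the status bar’s schedule, not the agent’s: an agent flipping to “waiting” changes a glyph, and nothing else happens until you choose to look.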
This isn’t just useful if you have focus challenges. It’s useful for anyone trying to do deep work alongside AI agents. The goal is the same: protect the windows where you can actually get into flow.
Practical advice
Build tooling to fill your gaps, not to replace your judgment.
The goal isn’t maximum agent throughput. The goal is maximizing the time you spend on work that gets you into flow — and minimizing the interruptions that pull you out. AI should handle the mechanical, the repetitive, the first-pass. You handle the judgment, the architecture, the decisions that require context no agent has.
And watch the baseline. If your output keeps going up and nothing else does, you’re feeding the vampire. That’s worth naming — with yourself, and with whoever sets your goals.