My Engineering Team Runs in Four Terminals
How I run multiple Claude Code agents in parallel — and why it changed everything
You know the feeling. Three Slack threads about the same pull request. A standup that could’ve been a commit message. A feature that’s been “almost done” for two weeks because it’s waiting on someone’s review, someone’s availability, someone’s context.
Half your week isn’t building. It’s coordinating building.
I used to live in that world. Now I don’t.
I looked at my screen the other day. Four terminals open. Four Claude Code agents, each running Opus 4.6. One building a new API endpoint. Another refactoring authentication middleware. A third writing tests for a feature I’d shipped the day before. A fourth investigating a data issue, querying the database and surfacing patterns I’d have spent hours finding manually.
No Slack messages. No standups. No blocked PRs. Just work getting done, in parallel.
This is my engineering team now.
The Bottleneck Was Never Talent
Here’s the thing nobody wants to say out loud: the bottleneck in most engineering teams isn’t skill. It’s coordination.
I co-founded a supply chain AI company a couple of years back, and you've probably lived some version of this story: three people minimum for a single feature. Backend, frontend, product designer. PRs waiting on someone's review schedule. One feature took over a month and still wasn't complete.
That’s not a failure story. That’s just how software gets built with teams. And it’s the part AI agents eliminated.
How It Actually Works
Claude Code isn’t ChatGPT in a terminal. It’s an agentic coding tool that lives inside your codebase, reads your project structure, runs commands, writes and edits files, executes tests, and commits code. You talk to it like you’d talk to an engineer: “Build the subscription billing endpoint using the same patterns as the user service.” And it does.
I’m on the Claude Max plan, $200 a month. For context, that’s less than what a single engineer costs per day in many markets. And I’m running multiple agents simultaneously.
Here’s the workflow. I spin up multiple Claude Code sessions, each in its own git worktree: an isolated checkout of the same repository on its own branch. Agent one works on feature A. Agent two handles feature B. Agent three writes tests. Agent four investigates bugs. They don’t step on each other’s files.
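The worktree setup is a few commands. A sketch, with placeholder repo and branch names — the scratch-repo lines at the top just make the snippet self-contained, so you can paste it without touching a real project:

```shell
# Scratch repo so the sketch runs anywhere (skip this in your real project;
# worktrees need at least one commit to branch from).
cd "$(mktemp -d)"
git init -q myapp && cd myapp
git -c user.email=me@example.com -c user.name=me commit -q --allow-empty -m "init"

# One worktree per agent: each is an isolated checkout on its own branch.
git worktree add ../myapp-feature-a -b feature-a   # agent 1: feature A
git worktree add ../myapp-feature-b -b feature-b   # agent 2: feature B
git worktree add ../myapp-tests     -b test-suite  # agent 3: test writer

git worktree list   # each path is a separate working copy; agents don't collide
```

Then it’s one Claude Code session per directory: `cd ../myapp-feature-a && claude`, and so on for each agent.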
Each project gets a CLAUDE.md file, a context document that tells every agent the architecture, conventions, and patterns to follow. Think of it as the onboarding doc you’d give a new engineer, except this one gets read every single session and never forgotten.
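If you haven’t written one, a CLAUDE.md can start very small. The sections below are my example of a useful starting shape, not a required schema — the project name, paths, and rules are all placeholders:

```shell
# A minimal CLAUDE.md sketch; adapt the sections to your own project.
cat > CLAUDE.md <<'EOF'
# Project: myapp

## Architecture
- Express API in src/api, one router per resource
- All database access goes through src/db/queries.ts

## Conventions
- TypeScript strict mode; never use `any`
- Every new endpoint gets an integration test in tests/api/

## Workflow
- Run `npm test` before committing; commit messages in imperative mood
EOF
```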
I also build custom skills, reusable workflows for common tasks: writing publications, planning articles, debugging, code review. When I type a command, the agent loads the skill and follows the workflow. It’s like having SOPs for your team, except the team actually follows them every time.
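One way to package a reusable workflow like this in current Claude Code is a custom slash command: a markdown file under `.claude/commands/`, where the filename becomes the command name (the exact layout may differ by version). The review checklist below is a hypothetical example of mine, not a standard:

```shell
# Sketch of a /review skill as a project-level custom slash command.
mkdir -p .claude/commands
cat > .claude/commands/review.md <<'EOF'
Review the staged diff against the conventions in CLAUDE.md.
1. Flag deviations from existing patterns.
2. List new endpoints that lack tests.
3. Call out unhandled error paths.
Report findings as a checklist before proposing fixes.
EOF
```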
Then I review the output, approve, iterate, merge. The cycle that used to take a week of a team coordinating across calendars now takes hours.
It’s Not Just for Code
Here’s what I didn’t expect. Claude Code became my tool for everything.
This article you’re reading? Written with Claude Code. I have a skill system that guides the agent through planning, structuring, and drafting publications in my voice. It reads my previous articles, my personal resources, and follows a tone guide. I review and edit, but the heavy lifting is delegated.
Data investigation? I gave Claude Code access to query a database. Asked it to find patterns in customer behavior data. It wrote the SQL, ran the queries, analyzed the results, and summarized everything with specific numbers. What would’ve been half a day of manual querying took about ten minutes.
Research, competitive analysis, debugging production issues by reading logs, generating marketing copy. The use cases keep expanding beyond what I originally imagined.
The Honest Part
I’m not going to pretend this is flawless.
Agents sometimes go in circles on complex architectural decisions. They can miss edge cases that an experienced engineer would catch on instinct. Long sessions start to drift if context gets too heavy. And sometimes I spend more time correcting an agent’s approach than it would take to just write the code myself. Even with Opus 4.6, the most capable model available right now, these limitations are real.
The big one: you need to know what to build. The agents are exceptional at how. They don’t know why. Strategic thinking, customer conversations, product decisions, those are still entirely on me. If I give a bad brief, I get a beautifully built wrong thing.
I’m still the architect. The agents are the builders. That division matters, and I don’t think it’s going away anytime soon.
What This Means for You
Here’s the question I’d ask yourself: how much of your week is actually building?
Not coordinating. Not waiting on reviews. Not sitting in alignment meetings. Not holding dependency graphs in your head. Building.
The definition of “team” is shifting. I’m not arguing every company should replace its engineers. I’m saying the math has changed. The coordination cost is real: the standups, the Slack threads, the blocked PRs, the context-switching. And for certain types of work, AI agents just eliminated it.
How many features are sitting in your backlog right now, not because nobody knows how to build them, but because nobody has the bandwidth?
What would you build if your only bottleneck was your own thinking?