The Infrastructure Crisis Nobody Is Talking About: How AI Agents Are Breaking Git

We increased code velocity 5x and forgot to upgrade the roads.

Mar 8, 2026

Introduction

Something quietly broke in January 2026.

Not a service outage. Not a security breach. Something more fundamental — the way we store, coordinate, and version code stopped making sense.

For 20 years, the git workflow has been the backbone of software development. Check out a branch, write some code, open a PR, get it reviewed, merge it. Rinse and repeat. It worked beautifully for teams of 5, reasonably well for teams of 50, and awkwardly for teams of 500.

Then AI agents arrived.

Today, high-output engineering teams aren't running 5 developers. They're running 5 developers orchestrating 50–500 AI agents writing code in parallel. And the infrastructure underneath all of that — git, filesystems, PR review — was never built for this world.

This post breaks down exactly what's breaking, why existing solutions fall short, and what the path forward looks like.

The World That Was

To understand the problem, you need to understand the assumptions baked into git.

Git was created by Linus Torvalds in 2005 to manage the Linux kernel — a large codebase with many contributors, but contributors who were human. Humans who wrote code slowly, thought carefully, slept at night, and submitted changes maybe once or twice a day.

The git workflow reflects those assumptions perfectly:

The Classic Git Workflow

Developer → check out branch → write code → commit → push / open PR → human review → merge to main

Throughput: ~10–20 PRs/day per developer. Human-paced.

This model has three implicit assumptions:

  1. Writers are few and slow — maybe a dozen developers at most touching code at once
  2. Conflicts are rare — people coordinate informally, branches diverge for days, not seconds
  3. Review is the quality gate — a human eyes every change before it hits production

These assumptions held for 20 years. Then code velocity increased 3–5x almost overnight.

What Changed

Here is the world we actually live in now.

The AI Agent Development Model

Human Engineer
↓ orchestrates
Agent Orchestrator
↓ spawns
Agent 1   Agent 2   Agent 3   Agent 4   …   Agent N
↓ all writing to same place
git repo (filesystem)

Throughput: 100s of commits/hour. Machine-paced.

The mismatch is immediately obvious. You have a coordination system built for 10 slow writers now being hammered by 1000 fast ones. The results are predictable:

  • Constant merge conflicts
  • PR queues that never drain
  • Agents stepping on each other's work
  • Review becoming a complete fiction — nobody is actually reading 300 PRs a day

The Exact Points of Failure

Let's be precise about where the breakdown happens.

Failure Point 1: Merge Conflicts at Machine Scale

Git merges based on text diffs. It compares line ranges, and it flags a conflict whenever two sets of edits land close enough together that their hunks overlap. It doesn't understand that function authenticate() and const MAX_RETRIES = 3 are completely unrelated just because they sit a few lines apart in the same file.

Why Text-Diff Merging Breaks Down

auth.ts (original)
  Agent A edits lines 14–22 (login logic)
  Agent B edits lines 23–31 (token refresh)
        ↓
GIT MERGE CONFLICT: the hunks are adjacent, so git conflicts even though the changes are logically unrelated

Result: Agent A and B must serialize. One waits. Velocity halved.

At scale with 100 agents, this isn't an edge case. It's the default state.
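To be precise, git's three-way merge only conflicts when two hunks overlap or sit within each other's context lines; far-apart edits to the same file merge cleanly. The following is a toy model of that line-range logic, illustrative only, not git's actual implementation:

```typescript
// Toy model of line-based merging: a hunk is a half-open range of lines.
// A line-oriented merge flags a conflict when two sides' hunks (plus their
// context) touch, regardless of what the code inside them means.
type Hunk = { start: number; end: number }; // lines [start, end)

const CONTEXT = 3; // lines of surrounding context a diff hunk carries

function conflicts(a: Hunk, b: Hunk): boolean {
  // Expand each hunk by its context window, then test for overlap.
  const aLo = a.start - CONTEXT, aHi = a.end + CONTEXT;
  const bLo = b.start - CONTEXT, bHi = b.end + CONTEXT;
  return aLo < bHi && bLo < aHi;
}

// Agent A edits lines 14-22, Agent B edits lines 23-31: semantically
// unrelated, but adjacent, so the text merge reports a conflict.
console.log(conflicts({ start: 14, end: 22 }, { start: 23, end: 31 })); // true
// Far-apart edits to the same file merge cleanly:
console.log(conflicts({ start: 14, end: 22 }, { start: 87, end: 95 })); // false
```

At agent density, edits cluster around the same hot functions, so adjacent-hunk collisions stop being rare.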

Failure Point 2: PR Review — The Human Bottleneck

The PR Review Bottleneck

Agents producing: 200 PRs/day
Humans reviewing: 20 PRs/day

PR backlog grows without bound. Three equally bad outcomes:

(a) Review becomes rubber-stamping — quality illusion

(b) Backlog kills velocity — defeats the purpose

(c) Review gets skipped — risk explodes

There is no good answer here within the current model. The review step was designed as a human quality gate. It cannot scale with machine output.

Failure Point 3: No Real-Time Coordination

Right now, two agents have no way to know what the other is doing. There's no “Agent B is currently editing api/routes.ts” signal. No way to subscribe to “notify me when someone touches the auth module.”

The Coordination Blindspot

Agent A: “I'll edit routes.ts” (doesn't broadcast) → edits routes.ts
Agent B: “I'll edit routes.ts” (doesn't check) → edits routes.ts

CONFLICT — discovered too late

Git detects conflicts after the fact. For humans working across days, that's fine. For agents working across seconds, it's catastrophic.

Existing Solutions and Why They Fall Short

Option A: Just Use More Branches

The obvious answer: give each agent its own branch. Merge them all at the end.

main
↓ branch per agent
agent-1   agent-2   agent-3   agent-4   agent-5
↓ merge everything
merge hell — sequential conflicts cascade

Why it fails: At 100+ agents, you have 100+ branches all diverged from main. Merging them sequentially creates a cascade of conflicts. By the time branch 50 merges, branches 51–100 are hopelessly out of date. You've just serialized your parallelism.

Option B: Trunk-Based Development

Skip branches entirely. Everyone commits directly to main with feature flags.

Why it fails: Agents committing directly to main at machine speed means main is in a constant state of partial work. Feature flags help but become their own management nightmare. And you still have no coordination — two agents can write conflicting changes to main in the same second.

Option C: Store Code in a Database

The provocative take: what if code lived in Postgres and agents read/wrote directly to the DB?

Code in Postgres Model

Agent A:
  SELECT * FROM files WHERE path = 'auth.ts';
  UPDATE files SET content = '...' WHERE path = 'auth.ts';

Agent B: same queries, same time

Result: last write wins. No conflict detection. No history.

Why it falls short: You'd immediately need transactions, row locking, and conflict detection — at which point you're rebuilding git's object model inside Postgres. Meanwhile, every editor, linter, CI runner, and build tool assumes a filesystem, and you lose all of that. The instinct is right — code needs better concurrency primitives — but this path just rebuilds a worse git inside Postgres.
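The lost-update failure mode is easy to see in miniature. This sketch uses a hypothetical in-memory path → content store standing in for the Postgres table, with two agents doing read-modify-write and no versioning:

```typescript
// Hypothetical path -> content store with no locking and no versions,
// standing in for the "code in Postgres" table.
const files = new Map<string, string>([["auth.ts", "export {};\n"]]);

// Both agents read the same snapshot...
const seenByA = files.get("auth.ts")!;
const seenByB = files.get("auth.ts")!;

// ...then each writes its own edit, unaware of the other.
files.set("auth.ts", seenByA + "function login() {}\n");   // Agent A
files.set("auth.ts", seenByB + "function refresh() {}\n"); // Agent B

// Last write wins: Agent A's login() is silently gone.
console.log(files.get("auth.ts")!.includes("login")); // false
```

No error, no conflict marker, no history: the first agent's work simply vanishes.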

Option D: More Rigorous CI/CD

Automated tests catch issues before merge. Linting, type checking, security scans — all automated.

Why it's incomplete: CI/CD addresses code quality but not code coordination. It tells you after the fact that something broke. It doesn't prevent two agents from writing conflicting implementations of the same interface in the first place.

What Good Actually Looks Like

Here's the thing — the solutions exist. They're just not assembled yet.

Solution 1: Jujutsu (jj) — Git, Rebuilt

Jujutsu is a version control system from Google that keeps git as a backend but completely replaces the user-facing model. The key innovation: conflicts are first-class citizens.

In git, a conflict blocks you. You cannot proceed until it's resolved. In jj, a conflict is just a state. You can commit it, push it, rebase through it. It doesn't stop anything.

Git:  agent writes → conflict → BLOCKED, cannot proceed

jj:   agent writes → conflict stored as state → work continues, resolve when convenient

For agents, this is huge. An agent doesn't need to stop and resolve a conflict before moving on. It can keep working, flag the conflict, and a resolution agent (or human) handles it asynchronously.
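The semantics can be sketched in a few lines. This models jj's idea of conflict-as-data, not its actual API or storage format:

```typescript
// Sketch of "conflicts are first-class citizens" (the idea, not jj's API):
// a file's state is either clean text or a recorded conflict, and a
// conflicted state is still a perfectly usable value.
type FileState =
  | { kind: "clean"; text: string }
  | { kind: "conflict"; base: string; left: string; right: string };

function merge(base: string, left: string, right: string): FileState {
  if (left === right) return { kind: "clean", text: left };
  if (left === base) return { kind: "clean", text: right };
  if (right === base) return { kind: "clean", text: left };
  // Diverged: record the conflict instead of halting the pipeline.
  return { kind: "conflict", base, left, right };
}

const state = merge("v0", "v1-from-agent-A", "v1-from-agent-B");
console.log(state.kind); // "conflict", yet commits, rebases, pushes go on
```

Because the conflicted state is just a value, it can be committed, rebased over, and handed to whichever agent (or human) resolves conflicts, all without blocking the writer.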

Solution 2: Worktrees + Coordination Layer

Git worktrees let you check out the same repo into multiple directories simultaneously. Each agent gets its own worktree — its own isolated filesystem view of the code.

Worktree Isolation Model

git repository (shared object store)
  worktree/A ← Agent A
  worktree/B ← Agent B
  worktree/C ← Agent C

Coordination Layer: detect conflicts early, merge semantically, sync to main

Claude Code already does this with its worktree isolation mode.
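The git side of this is standard, documented behavior; only the repo, paths, and branch names below are made up for the demo:

```shell
# One worktree (directory) per agent, one shared object store.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"
git worktree add -q -b agent-a ../agent-a   # Agent A's isolated checkout
git worktree add -q -b agent-b ../agent-b   # Agent B's isolated checkout
git worktree list                           # main repo + two agent worktrees
```

Each agent edits its own directory on its own branch, but every commit lands in the same object database, so nothing needs to be cloned or pushed between agents.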

Solution 3: AST-Level Semantic Merging

The core problem with text-diff merging is that it's semantically blind. It doesn't know that two changes are logically independent. The fix: merge at the Abstract Syntax Tree (AST) level.

Text Diff vs AST Merge

auth.ts — AST view

Module
  authenticate()   ← Agent A edits here
  refreshToken()   ← Agent B edits here

AST merge result: ZERO CONFLICT — changes are in different subtrees
Text diff result: CONFLICT — same file, overlapping line ranges

Tools like difftastic are early steps here. A full AST-aware merge engine would eliminate the vast majority of false conflicts that currently plague agent workflows.
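A minimal sketch of the idea, representing a module as its top-level symbols rather than its lines (a toy model, not difftastic's algorithm):

```typescript
// Toy "AST" merge: a module is a map of top-level symbol -> body.
// Two edits conflict only if they rewrite the same symbol, not the same file.
type Module = Map<string, string>;

function mergeModules(base: Module, a: Module, b: Module): Module | "conflict" {
  const out: Module = new Map(base);
  const names = Array.from(
    new Set(Array.from(a.keys()).concat(Array.from(b.keys())))
  );
  for (const name of names) {
    const orig = base.get(name);
    const fromA = a.get(name);
    const fromB = b.get(name);
    const aChanged = fromA !== orig;
    const bChanged = fromB !== orig;
    // Same symbol, diverging edits: a real conflict.
    if (aChanged && bChanged && fromA !== fromB) return "conflict";
    if (aChanged) out.set(name, fromA!);
    else if (bChanged) out.set(name, fromB!);
  }
  return out;
}

const base: Module = new Map([["authenticate", "v0"], ["refreshToken", "v0"]]);
const agentA: Module = new Map(base);
agentA.set("authenticate", "v1"); // Agent A: login logic
const agentB: Module = new Map(base);
agentB.set("refreshToken", "v1"); // Agent B: token refresh

const merged = mergeModules(base, agentA, agentB);
console.log(merged === "conflict"); // false: different subtrees, zero conflict
```

A production version would merge real syntax trees per language, but the decision rule is the same: conflict at the subtree level, not the file level.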

Solution 4: CRDTs — Google Docs for Code

CRDT stands for Conflict-Free Replicated Data Type. It's the technology that powers Google Docs, Figma, and Notion's real-time collaboration. The core idea: design your data structure so that concurrent edits can always be merged without conflicts.

CRDT-Based Code Editing

Central CRDT document (the file)
  Agent A → edit
  Agent B → edit
  Agent C → edit
        ↓
CRDT merge algorithm: mathematically guaranteed to produce the same result regardless of order

No conflicts. No coordination required. Works offline.

Libraries like Automerge and Yjs already implement this for generic data.
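The convergence guarantee is easy to demonstrate with the simplest useful CRDT, a last-writer-wins map. This is a toy in the spirit of Automerge and Yjs, not their APIs; real code CRDTs are far more granular:

```typescript
// Minimal last-writer-wins (LWW) map. Ties break on agent id,
// so the merge result never depends on the order of merging.
type Entry = { value: string; ts: number; agent: string };
type Doc = Map<string, Entry>;

function wins(a: Entry, b: Entry): Entry {
  if (a.ts !== b.ts) return a.ts > b.ts ? a : b;
  return a.agent > b.agent ? a : b; // deterministic tie-break
}

function merge(into: Doc, from: Doc): Doc {
  const out: Doc = new Map(into);
  from.forEach((e, k) => {
    const cur = out.get(k);
    out.set(k, cur ? wins(cur, e) : e);
  });
  return out;
}

const a: Doc = new Map([["auth.ts", { value: "A's edit", ts: 2, agent: "A" }]]);
const b: Doc = new Map([["auth.ts", { value: "B's edit", ts: 1, agent: "B" }]]);

// Same result regardless of merge order: the CRDT convergence guarantee.
console.log(merge(a, b).get("auth.ts")!.value); // A's edit
console.log(merge(b, a).get("auth.ts")!.value); // A's edit
```

The catch for code: conflict-free merging at the character level can produce text that is syntactically invalid, which is why a code-specific CRDT is still on the "not yet built" list later in this post.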

Solution 5: Event-Sourced Codebases

What if every edit was an immutable event, and the current state of a file was just the result of replaying those events?

Event-Sourced Codebase

Event Log (append-only)

  t=0  CREATE  auth.ts               (agent-1)
  t=1  EDIT    auth.ts   lines 14–22 (agent-1)
  t=2  EDIT    auth.ts   lines 87–95 (agent-2)
  t=3  CREATE  routes.ts             (agent-3)
  t=4  EDIT    auth.ts   lines 14–15 (agent-4)
  ...

Replay events → current state. The append-only log is itself the audit trail and the full history.
Event subscriptions → agents react to changes: Agent C subscribes to “auth.ts changed” events and gets notified in real time.
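A sketch of the mechanics, with an illustrative event shape (not any existing system's schema):

```typescript
// Append-only edit log with replay and per-path subscriptions.
type EditEvent = {
  t: number;
  op: "CREATE" | "EDIT";
  path: string;
  agent: string;
};

const log: EditEvent[] = [];
const subscribers = new Map<string, Array<(e: EditEvent) => void>>();

function subscribe(path: string, fn: (e: EditEvent) => void): void {
  subscribers.set(path, (subscribers.get(path) || []).concat(fn));
}

function append(e: EditEvent): void {
  log.push(e); // history is immutable: we only ever append
  (subscribers.get(e.path) || []).forEach((fn) => fn(e)); // real-time notify
}

// Current state of the world = a replay of the log.
function knownPaths(): string[] {
  return log.filter((e) => e.op === "CREATE").map((e) => e.path);
}

const seen: number[] = [];
subscribe("auth.ts", (e) => seen.push(e.t)); // Agent C watches auth.ts

append({ t: 0, op: "CREATE", path: "auth.ts", agent: "agent-1" });
append({ t: 1, op: "EDIT", path: "auth.ts", agent: "agent-2" });
append({ t: 2, op: "CREATE", path: "routes.ts", agent: "agent-3" });

console.log(seen);         // [0, 1] — routes.ts events don't reach Agent C
console.log(knownPaths()); // ["auth.ts", "routes.ts"]
```

Notice that "current state" is derived, not stored: any agent can reconstruct any file at any point in time by replaying a prefix of the log.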

Solution 6: Agent Coordination Protocol

All of the above can be augmented with a higher-level coordination layer — essentially a traffic system for agents.

Agent Coordination Protocol

Step 1: Intent Declaration

Agent A → Coordinator: “I intend to edit auth.ts”

Coordinator → Agent A: “Granted. Lock held for 30s”

Agent B → Coordinator: “I intend to edit auth.ts”

Coordinator → Agent B: “Locked by Agent A. Wait, or take other work.”

Step 2: Scoped Locking

  • file-level: auth.ts
  • symbol-level: auth.ts::authenticate()
  • module-level: /src/auth/**

Step 3: Conflict Prevention (not detection)

  • Conflicts caught BEFORE the write, not after
  • Agents redirected to non-conflicting work
  • No wasted compute on work that will be thrown away

Could be implemented as an MCP server, agent middleware, or a dedicated coordination service.
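The Step 1 exchange can be sketched as a lease-based lock service. This is a hypothetical protocol, not an existing MCP server; the scopes are just the plain strings from Step 2:

```typescript
// Lease-based lock coordinator: a sketch of the intent-declaration exchange.
// Scopes are strings like "auth.ts", "auth.ts::authenticate()", "/src/auth/**".
type Lease = { agent: string; expiresAt: number };

class Coordinator {
  private locks = new Map<string, Lease>();

  declareIntent(
    agent: string,
    scope: string,
    now: number,
    ttlMs = 30_000
  ): "granted" | "locked" {
    const held = this.locks.get(scope);
    // Another agent holds an unexpired lease: conflict prevented before any write.
    if (held && held.expiresAt > now && held.agent !== agent) return "locked";
    this.locks.set(scope, { agent, expiresAt: now + ttlMs });
    return "granted";
  }
}

const c = new Coordinator();
console.log(c.declareIntent("agent-A", "auth.ts", 0));      // granted
console.log(c.declareIntent("agent-B", "auth.ts", 1_000));  // locked: wait or take other work
console.log(c.declareIntent("agent-B", "auth.ts", 31_000)); // granted: lease expired
```

The time-limited lease matters: agents crash and hang, so locks must expire on their own rather than require explicit release.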

The Architecture That Wins

None of these solutions work best in isolation. The ideal system combines them:

The Agent-Native Code System

Human Engineers
↓ orchestrate
Agent Orchestrator
↓ spawns
Agent 1   Agent 2   Agent 3   …   Agent N
↓
Coordination Protocol Layer: intent declaration, symbol-level locking, conflict prevention
↓
Semantic Merge Engine: AST-aware diffing, CRDT for simultaneous edits, zero false conflicts
↓
Event-Sourced Storage Layer: append-only edit log, real-time subscriptions, full audit trail
↓
Git-Compatible Output Layer: filesystem for tooling, standard git history, CI/CD compatibility

Key property: git-compatible on the outside, agent-native on the inside.

Why Each Layer Matters

Layer                  | What It Solves                          | Without It
Coordination Protocol  | Prevents conflicts before they happen   | Agents waste compute on throwaway work
Semantic Merge Engine  | Eliminates false conflicts              | Simple file touches cause serialization
Event-Sourced Storage  | Real-time visibility, full auditability | Agents are blind to each other
Git-Compatible Output  | Preserves the entire tooling ecosystem  | Every editor, CI, linter breaks

What Exists Today vs. What's Needed

Available now

Git worktrees (isolation): 90%
Jujutsu / jj (better git primitives): 80%
Difftastic (semantic diffing): 75%
Claude Code worktree isolation: 65%

Early / partial

Automerge / Yjs (CRDT foundations): 45%
AST-aware merge tools: 35%
Agent coordination protocols (MCP-based): 25%

Not yet built

End-to-end agent-native VCS
Code-specific CRDT implementation
Real-time codebase event subscriptions
Symbol-level distributed locking at scale

The pieces exist. The assembly doesn't yet.

The Opportunity

Here's the thing about infrastructure shifts: they're invisible until they're inevitable, and then they're obvious.

In 2005, nobody thought “git will replace SVN within a decade.” In 2010, nobody thought “containers will replace VMs within a decade.” We're at that inflection point again.

The team that ships an agent-native code coordination system — something that:

  • Agents can write to concurrently without conflicts
  • Surfaces real-time visibility of what every agent is doing
  • Merges semantically, not textually
  • Stays git-compatible for human tooling
  • Provides a subscription model for codebase state changes

…will own a foundational layer of the AI development stack. This is not a niche devtools improvement. This is infrastructure at the level of git itself.

Conclusion

The 20-year-old git paradigm is under serious strain. PR review is a bottleneck. Filesystem checkouts are a poor primitive for 1000 concurrent agents. Code velocity has broken the assumptions that git was built on.

But the answer isn't “put it all in Postgres.” The answer is a layered upgrade:

Now: use worktrees + isolated agent contexts to prevent collisions
Near-term: adopt jj and semantic diffing to reduce false conflicts
Medium-term: build CRDT-backed co-editing into development tooling
Long-term: ship an event-sourced, agent-native VCS that's git-compatible on the outside

The infrastructure crisis is real. The solutions are within reach. The question is who builds them first.


The next version of git won't look like git. But it'll still push to GitHub.