
What 20 Years of Balsa Wood Teaches Us About Agent Persistence

📖 4 min read•675 words•Updated Apr 8, 2026

Joe Macken, a delivery truck driver from Queens, spent over two decades carving a scale model of every building in New York City from balsa wood. When I first encountered this story, my immediate reaction wasn’t admiration for the craftsmanship—though that’s certainly warranted. Instead, I found myself analyzing Macken’s process through the lens of agent architecture and long-horizon task completion.

This is what 21 years of sustained, incremental progress looks like in human form. And it reveals a critical flaw in how we're thinking about AI agents.

The Persistence Problem

Current AI agent architectures fail spectacularly at tasks requiring multi-year commitment. We’ve built systems that can process millions of tokens, generate code, and reason through complex problems—but ask them to maintain coherent progress toward a single goal across thousands of sessions, and they collapse. Macken’s miniature metropolis, which began in 2004 with a replica of 30 Rockefeller Plaza, represents exactly the kind of long-horizon task our agents can’t handle.

Why? Because we’ve optimized for speed and immediate results rather than sustained attention and iterative refinement. Macken didn’t wake up one morning and carve Manhattan. He built one building, then another, then another, maintaining a mental model of the entire project while executing on microscopic details. His working memory persisted across decades.

Decomposition Without Loss

What fascinates me most is how Macken must have decomposed this enormous undertaking. You can’t hold the architectural details of every NYC building in active memory simultaneously. He needed some form of hierarchical planning—deciding which neighborhood to tackle next, which buildings within that area, which structural elements of each building. Then he had to execute at the level of individual balsa wood cuts.

This is precisely the challenge we face in agent design. How do you break down a massive goal into executable subtasks without losing sight of the overarching objective? How do you maintain consistency across thousands of individual actions? Current approaches using chain-of-thought reasoning or tree search work for problems solvable in minutes or hours. They don’t scale to years.

Error Correction and Iteration

Macken’s model went viral on TikTok with 10 million views, but I guarantee those viewers didn’t see the mistakes. The buildings that didn’t turn out right the first time. The structural decisions he had to revise. The techniques he refined over two decades of practice.

Our agents need similar capacity for self-correction over extended timescales. Not just “try again with a different prompt,” but genuine learning from accumulated experience. Macken’s later buildings were undoubtedly more sophisticated than his early attempts. His internal model of what worked and what didn’t evolved continuously.
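A crude sketch of that kind of accumulated learning: an experience log that records outcomes across sessions and prefers whatever has actually worked. (The "techniques" and success flags here are hypothetical stand-ins for whatever an agent would really record.)

```python
from collections import defaultdict

class ExperienceLog:
    """Accumulates outcomes over many attempts so later choices improve."""
    def __init__(self):
        # technique -> [successes, attempts]
        self.trials = defaultdict(lambda: [0, 0])

    def record(self, technique: str, success: bool) -> None:
        s, n = self.trials[technique]
        self.trials[technique] = [s + int(success), n + 1]

    def best(self) -> str:
        """Prefer the technique with the highest observed success rate."""
        return max(self.trials, key=lambda t: self.trials[t][0] / self.trials[t][1])

log = ExperienceLog()
log.record("thin laminate walls", True)
log.record("thin laminate walls", True)
log.record("solid block carving", False)
log.best()  # "thin laminate walls"
```

Real self-correction over decades is far richer than a success counter, but the structural requirement is the same: the record must outlive any single working session.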

Intrinsic Motivation Architecture

Here’s what we can’t replicate yet: Macken did this without external reward signals. No one paid him. No loss function optimized his behavior. He maintained motivation purely through intrinsic interest in the project itself.

We talk about reward modeling and reinforcement learning, but those approaches assume external feedback. Macken’s reward was entirely self-generated—the satisfaction of seeing his miniature city grow more complete. Building agents with genuine intrinsic motivation, rather than simulated versions of it, remains an unsolved problem.

What This Means for Agent Design

If we want agents capable of truly ambitious long-term projects, we need to study humans like Macken. Not because we should copy human cognition directly, but because his approach reveals requirements our current architectures don’t meet:

  • Persistent memory systems that maintain project context across arbitrary time gaps
  • Hierarchical planning that operates at multiple timescales simultaneously
  • Self-supervised learning from accumulated experience without external labels
  • Intrinsic motivation mechanisms that sustain effort without continuous reward
  • Error detection and correction that improves over thousands of iterations
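The first requirement above, persistent project context, can be sketched as little more than a checkpoint file that every session loads on start and writes on exit. The filename and state schema here are assumptions for illustration:

```python
import json
from pathlib import Path

STATE_FILE = Path("project_state.json")  # hypothetical checkpoint location

def load_state() -> dict:
    """Restore project context across arbitrary time gaps between sessions."""
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return {"goal": "NYC model", "completed": [], "notes": []}

def save_state(state: dict) -> None:
    """Checkpoint after every unit of work, so no session depends on the last
    one having ended cleanly."""
    STATE_FILE.write_text(json.dumps(state, indent=2))

state = load_state()
state["completed"].append("30 Rockefeller Plaza")
state["notes"].append("laminate walls warp less than solid blocks")
save_state(state)
```

Trivial as it looks, most agent frameworks today keep this context in a conversation window rather than durable state, which is exactly why progress evaporates between sessions.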

Macken’s balsa wood New York City isn’t just an impressive artistic achievement. It’s a working proof-of-concept for capabilities we haven’t yet built into our agents. The fact that a human with no formal training in architecture or urban planning could execute this project suggests the core requirements aren’t superhuman—they’re just different from what we’ve prioritized.

Maybe instead of asking how to make agents smarter, we should ask how to make them more persistent.


Written by Jake Chen

Deep tech researcher specializing in LLM architectures, agent reasoning, and autonomous systems. MS in Computer Science.
