Blog

Thoughts on software architecture, AI-assisted development, and building production systems.

I spend most of my time these days working with LLM agents. Not chatting with them. Working with them. I treat them like a junior dev team: I set architectural direction, write design specs, review code, and dive into complex areas directly. The agent handles the volume work, the test scaffolding, the boilerplate. This arrangement works well for coding, but research is where it breaks down: different sessions would redo the same research, repeating outbound requests and dragging large blocks of duplicated material into context.

I've been spending a lot of time lately working with coding agents across multiple processes. An agent refactoring code in one project, another running tests in a second, a build pipeline doing its thing, doctrove syncing documentation in the background. The problem is visibility. Each of these runs in its own terminal or process, and when something goes wrong or you want to understand what's happening across the system, you're tabbing between windows, tailing logs, and piecing together a timeline in your head.

I've been writing code for long enough to see a lot of cycles come and go. The current LLM hype reminds me of every other "this changes everything" moment we've lived through. Like those other moments, there's some real value buried under a mountain of breathless takes and venture capital.

LLMs are good at generating CSS. They're terrible at following design systems. Give Claude or Cursor a component to build and it will happily use #FF5734 instead of your brand red, 16px instead of your spacing tokens, and font-weight: 600 instead of your typography scale.

The promise is intoxicating: AI coding tools that boost developer productivity by up to 55%. The reality is more sobering. While generative AI can dramatically accelerate code creation, recent research reveals it's simultaneously creating a new category of technical debt that compounds faster than traditional development approaches, often remaining invisible until systems begin to fail.

The challenge of constraining AI-generated CSS to follow design systems has led to sophisticated approaches using multi-level design token hierarchies. Rather than allowing LLMs to generate arbitrary CSS values, these systems create structured constraints that force AI tools to select from predefined, semantically meaningful tokens.
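As a rough sketch of what that constraint layer can look like: instead of letting generated CSS through verbatim, you lint each declaration against a token allowlist. The token names and values below are hypothetical examples, not any particular design system's vocabulary.

```typescript
// Hypothetical token allowlist: for each constrained property, the only
// values an AI-generated declaration is permitted to use.
const tokens: Record<string, Set<string>> = {
  color: new Set(["var(--color-brand-red)", "var(--color-text-primary)"]),
  padding: new Set(["var(--space-2)", "var(--space-4)"]),
  "font-weight": new Set(["var(--font-weight-bold)"]),
};

// Returns the properties whose values fall outside the token set --
// e.g. a raw hex color or pixel length the LLM made up.
function findViolations(css: Record<string, string>): string[] {
  return Object.entries(css)
    .filter(([prop, value]) => tokens[prop] && !tokens[prop].has(value))
    .map(([prop]) => prop);
}

// A generated declaration using a raw hex value instead of the brand token:
const generated = { color: "#FF5734", padding: "var(--space-2)" };
console.log(findViolations(generated)); // → ["color"]
```

A check like this can run as a lint step or be fed back to the agent as a correction loop; properties without an entry in the token map pass through unconstrained.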

The web's approach to user privacy preferences has evolved significantly since the failed Do Not Track (DNT) standard. Global Privacy Control (GPC) represents the next generation of privacy signaling, one with actual legal teeth and growing industry adoption. For developers building web applications in 2026, understanding and implementing GPC isn't just good practice; it's becoming a legal requirement.
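The minimal server-side check is small: per the GPC specification, a participating browser sends a `Sec-GPC: 1` request header (and exposes `navigator.globalPrivacyControl` client-side). The header shape below follows the spec; the commented-out handler wiring is a hypothetical stand-in for your own consent logic.

```typescript
// Returns true when the request carries a Global Privacy Control opt-out
// signal, i.e. the `Sec-GPC` header with the value "1".
function gpcOptOutRequested(headers: Record<string, string | undefined>): boolean {
  // HTTP header names are case-insensitive; normalize before comparing.
  const value = Object.entries(headers).find(
    ([name]) => name.toLowerCase() === "sec-gpc"
  )?.[1];
  return value === "1";
}

// In a request handler you might do something like (hypothetical helper):
// if (gpcOptOutRequested(req.headers)) recordDoNotSellOptOut(userId);
```

Treating the signal as a valid opt-out under laws like the CCPA is the part with legal teeth, so the interesting work is in what your consent logic does once this returns true.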