Finding Comfort in the Uncertainty


I’m writing this having just landed back in New Zealand after a very long journey home from Utah. On the final leg from Houston to Auckland, I found myself wedged between a retired industrial chemist and a theoretical physicist who was reading a C++ book. I couldn’t not say anything.

We ended up in a brief conversation about the future of software engineers. He was learning memory management for a trading firm. He thought AI-generated code was “pretty crap”. He’d also noticed, though, that engineers around him were losing their jobs.

The same questions I’d just spent two days exploring at the retreat, surfacing unprompted between strangers somewhere over the Pacific.

About 40 of us - practitioners, researchers, technical leaders from around the world - gathered in Deer Valley, Utah, for an invite-only retreat on the future of software development. The event was hosted by Martin Fowler and Thoughtworks, and held in the same mountains where the Agile Manifesto was written 25 years ago - with some of the original signatories in the room. Not an attempt to reproduce that moment, but an acknowledgement that the ground has shifted enough to warrant the same kind of conversation.

The Future of Software Development Retreat

The format was an unconference. No presentations. No agenda handed down from above. Participants proposed sessions, voted with their feet, and talked. Over 30 sessions ran in parallel - far more than any one person could attend. The ones I was in were fascinating, and comparing notes with others afterwards revealed just how much convergence there was across sessions I’d missed. A long way to go for two days - but worth every hour.

Here’s what I took away.

Nobody Has This Figured Out

This was, for me, the single most important realisation.

I walked into that room expecting to learn from people who were further ahead. People who’d cracked the code on how to adopt AI at scale, how to restructure teams around it, how to make it work. Some of the sharpest minds in the software industry were sitting around those tables.

And nobody has it all figured out.

There is more uncertainty than certainty. About how to use AI well, what it’s really doing to productivity, how roles are shifting, what the impact will be, how things will evolve. Everyone is working it out as they go.

In many ways, I actually found that quite comforting. Yes, we walked away with more questions than answers, but at least we now have a shared understanding of the sorts of questions we should be asking. That might be the most valuable outcome of all.

The Increasing Cognitive Load

Something that came through in conversation, not just in the sessions, was that several people who are going hardest on AI openly admitted that it’s exhausting them.

There’s a pattern emerging. You spin up one AI coding agent, give it a plan, and while it loops you spin up another. Then another. It’s genuinely addictive - in fact, several people used that exact word. The problem comes when they all start returning at once with pages and pages of output for you to review. The cognitive load is immense, and it’s not being talked about enough.

We’re burning out on the very thing that excites us. The very technology that promises to make us more productive is, for some, creating an unsustainable pace. Because the people experiencing this most acutely are often the most senior and most enthusiastic, it’s easy to miss the warning signs.

I found this one of the most honest and human moments of the retreat - the quiet admission, between sessions, that this is hard in ways we didn’t expect.

What I Brought to the Table

I proposed two topics of my own, drawing on findings from my Master’s research into the impact of AI on software engineering. The first explored what I’m calling supervisory engineering work - a new category of work emerging as we delegate more of the coding to agents. The second examined whether productivity and developer experience, long assumed to move together, might be starting to diverge. Both sparked discussions that carried well beyond the sessions themselves.

Eight Themes That Kept Emerging

Across those 30+ sessions, certain threads kept surfacing independently. Different people, different contexts, converging on the same questions.

1. The Bottleneck Has Moved

For some, the constraint is no longer engineering capacity. Agents can burn through backlogs at extraordinary speed - but then they hit what one group called ‘the network of blocks’. Slow decisions. Organisational dependencies. Human-speed discovery. Market absorption.

One of the most striking comments from the retreat: ‘Humanity is not ready for this much software’. The customer doesn’t want a new version of the app every day. We can produce more than the system around us - and the world beyond it - can absorb. Decision fatigue is becoming the real bottleneck, not engineering throughput.

2. What’s the Artifact if Not Code?

Multiple sessions questioned the primacy of source code. If agents can regenerate code from a specification, is the code the durable artifact - or is it the domain model, the test suite, the intent?

Ideas that surfaced: code as ‘just another projection’ of intended behaviour. Tests as an alternative projection. Domain models as the thing that endures. One group posed the provocative question: what would have to be true for us to ‘check English into the repository’ instead of code?

The implications are significant. If code is disposable and regenerable, then what we review, what we version-control, and what we protect all need rethinking.

3. Trust, Care, and What’s Lost in Abstraction

This was the emotional thread that the room kept circling without quite naming it directly. Several sessions touched on what happens when we lose intimacy with the systems we build.

Concepts that surfaced across these discussions:

These aren’t just process problems. They’re about identity, meaning, and what it feels like to be responsible and accountable for something you didn’t fully build and don’t fully understand.

4. Platform Engineering as the Enabling Layer

Across security, governance, and adoption discussions, platforms kept emerging as the thing that makes everything else possible. The pattern: create safe defaults (the ‘pit of success’), embed guardrails, and give teams a fast but safe path.

The argument isn’t for more controls. It’s for better defaults - so that speed and safety aren’t in tension. One group observed that the things that make agents effective (clear standards, good documentation, well-structured code) are the same things that made developers effective. We spent years advocating for Developer Experience and often struggled to get investment. Now organisations are keen to invest in the same things - but for agents. Is ‘Agent Experience’ the new DevEx? And if so, might humans quietly benefit too?

5. How Do You Conceptualise Agents?

We don’t have a good taxonomy for agents yet. Are they more like APIs or more like humans? They’re not deterministic enough for the first framing, but anthropomorphising them is risky too. One participant offered a third option: they’re more like chemical vats - non-deterministic processes best managed through engineering constraints. Some companies are starting to add agents to their organisational chart. Others are talking about ‘Agent HR’ - performance management and cost management for non-human workers.

This might seem like a semantic debate, but it shapes everything downstream. If you see agents as tools, you govern them with technical guardrails. If you see them as team members, you invest in onboarding and culture. The industry hasn’t settled on a mental model yet, and that ambiguity is slowing down coherent governance and investment decisions.

6. Organisational Readiness Gates Everything

This echoes something we learned the hard way with microservices: without a certain level of organisational maturity, the technology makes things worse, not better. The same is true with AI.

AI amplifies existing conditions. Organisations that already had strong teams, good platforms, and clear governance got faster. Organisations without those foundations got messier. Several groups converged on the idea that the first year of AI adoption should focus on preparing the system - platform foundations, governance, AI fluency uplift - rather than expecting broad productivity gains immediately.

7. The Human Role Is Being Redefined - But Not Yet Designed

Everyone agrees the role of the software engineer is shifting. Nobody has a clear design for what it shifts to.

Sessions explored this from multiple angles: what should a staff engineer spend their time on now? How does education need to change? What happens to juniors, or mid-career engineers, whose primary value was in implementation detail? There’s a category of work emerging around supervising, orchestrating, and evaluating AI output - a ‘middle loop’ between the increasingly automated inner loop of coding and the outer loop of delivery. But the skills, pathways, and even the dignity of this new work haven’t been designed. We’re stumbling into it rather than shaping it.

8. The Ledger as a Convergent Insight

In at least four separate conversations - spanning agentic operating systems, production operations, enterprise adoption, and hallway chats - people independently arrived at the same idea: we need a complete, verifiable record of everything agents do. Every step taken, every change made, with the ability to roll back.

The framing varied. Some described it as a work ledger analogous to a financial blockchain. Others as an audit trail for verification and accountability. Others as a unified log aggregating across infrastructure, software, and network layers. But the core insight was the same: if we’re going to trust agents with real work, we need a ledger that tells us exactly what they did and why. When this many people reach the same conclusion independently, it’s worth paying attention to.
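To make the idea concrete, here is a minimal sketch of what such a ledger could look like - my own illustration, not a design anyone proposed at the retreat. It hash-chains each recorded agent action to its predecessor (the blockchain analogy some used), so history can be verified for tampering and rolled back to a known-good point:

```python
import hashlib
import json

class AgentLedger:
    """Toy append-only ledger of agent actions.

    Each entry is hash-chained to its predecessor, so altering
    history is detectable, and the action sequence can be
    inspected or rolled back to any earlier entry.
    """

    def __init__(self):
        self.entries = []

    def append(self, agent, action, detail):
        """Record one action; returns the entry's hash."""
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {"agent": agent, "action": action, "detail": detail, "prev": prev_hash}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append({**body, "hash": digest})
        return digest

    def verify(self):
        """Recompute every hash; False means the chain was altered."""
        prev = "genesis"
        for e in self.entries:
            body = {k: e[k] for k in ("agent", "action", "detail", "prev")}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev"] != prev or recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True

    def rollback_to(self, digest):
        """Drop (and return) all entries recorded after the given hash."""
        for i, e in enumerate(self.entries):
            if e["hash"] == digest:
                undone = self.entries[i + 1:]
                self.entries = self.entries[:i + 1]
                return undone
        raise KeyError("unknown ledger entry")
```

A real system would also need to aggregate across infrastructure, software, and network layers, as some groups discussed - this sketch only captures the core property of a verifiable, reversible record.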

Why It Mattered

I care deeply about this profession and the people in it. It’s why I’m researching this for a Master’s degree, and it’s why being there mattered to me.

Being invited to this retreat - and getting to spend two days in conversation with some of the most respected technical leaders in the world - was a genuine privilege. The hallway conversations, the dinners, the breakfast debates. The moments where someone you’ve read and admired for years turns to you and says, ‘That’s a really interesting idea’. There were some familiar faces, but I came away with many new friendships and a lot to think about.

The organisers weren’t trying to create a new Agile Manifesto. But there was an unspoken sense that the ground has shifted enough that we need new shared language, new principles, new ways of thinking about how we build software together. We didn’t leave with those. Not yet. But I’m grateful to everyone in that group for their openness, their honesty, and their willingness to keep working on it together.

At the close, when the organisers offered the mic to anyone who wanted to share a takeaway, I took my chance. I challenged the group - all of us with platforms and voices - to help the current and next generation of engineers find their way. Not by reassuring them that everything will be fine, but by getting better at describing the new problems they get to solve now that writing code isn’t the main one anymore. There are many engineers out there who were hired to write code and are now being told to outsource that to AI. They deserve more than platitudes. They deserve us doing the hard work of defining what comes next.

But we also left with something that might matter more right now: the shared recognition that nobody has this figured out, and the commitment to keep the discourse open. Because this wasn’t the end of an event. It was the beginning of a much longer conversation.