You Can't Assess AI-assisted Development Through Theory Or Tyre-kicking Alone
I see people forming views about the impact of AI-assisted development through a primarily theoretical lens or from brief early experiences with AI development tools. Neither is enough for an informed view.
Hi everyone,
Thank you for reading Great CTOs Focus on Outcomes. I publish weekly and have an archive of over 150 posts, each packed with valuable insights on various topics relevant to CTOs and the issues they face, distilled from my career experience.
I strive to make each post a helpful resource on the topic it focuses on so that when a CTO has a need, they can reference an atomic nugget of insight. To this end, I regularly revisit and refine posts, ensuring you always receive the best and most up-to-date information with the most clarity.
If you’d like to support the growth of this resource, consider upgrading to paid and taking advantage of the other ways I can help you.
As a CTO coach, I work with numerous companies and see the whole spectrum of AI-assisted development adoption. It's been interesting to speak to people across that spectrum and listen to why they hold their positions, especially as I started from a reasonably conservative viewpoint on generative AI myself and have only become more engaged over the past few years.
I note a fascinating pattern from those recent conversations: those who are more bullish tend to have more practical experience using AI-assisted development, while those who are more bearish tend to take a more theoretical approach.
This may seem unsurprising; you are less likely to engage if you are more sceptical. But it suggests where to seek informed views: don't ask those on the outside looking in; engage with those doing real work and seek to understand their experiences. And don't limit yourself to tyre-kicking, by which I mean a shallow appraisal of the tools: plugging in a few prompts, forming an opinion and then moving on.
There are people with a firm grounding in the theoretical context AND substantial practical experience with the latest generation of tools. Many concerns we arrive at from theory or tyre-kicking have long since been overcome or reduced by those who have engaged more thoroughly.
Note: For context, earlier in my career, as a senior technology leader and CTO, I led substantial developments using ML, recommenders and other data science practices, both in the products I was responsible for and to improve operational processes.
The majority of my career has involved working with big data in some form, so I am not engaging with AI from a place of complete inexperience. I do, however, recognise a gap between my experience and where AI-assisted development has moved in the past few years, especially given my focus on leadership matters. I've been directly involved in the strategy of using these technologies, as both a leader and a consultant, the entire time.
Objection: AI hallucinates and can generate noise (but various strategies exist to minimise this)
The evidence fuelling the concerns of AI sceptics consists primarily of issues already addressed by the community of AI-assisted developers, who have been defining new practices and workflows.
I've seen enough to be confident of the impact AI-assisted coding will have on changing well-accepted norms of code development, including team sizes, roles, and skill mix (a post on this is coming soon).
As experienced technologists, we must be careful when looking for evidence that supports our natural scepticism. Scepticism is healthy and leads to better-quality engineering, but it can also betray us.
Here's my mildly antagonistic post on the matter - written from a place of love, based on observations noted through self-reflection as I overcame my own hesitancy:
We can dig shallowly and convince ourselves that the impact of AI-assisted development is overstated and is a flash in the pan, or we can do what people on the frontier are doing: challenge assumptions, get hands-on and find patterns that accentuate the strengths of AI-assisted tooling.
I am seeing small teams achieve terrific results, and picking their brains about their patterns is eye-opening. They are well beyond the issues people post about after wasting time on an LLM hallucination.
Patterns similar to this one are helping developers manage and control their cognitive load:
The effectiveness of any LLM is influenced by the model's capability, how it is trained and fine-tuned, the information it is given via methods such as RAG, the quality of the prompts it is supplied, its context window, and various other factors. Fortunately, the tool providers are improving and managing some of these elements on our behalf. However, it is also relatively straightforward for your team to manage these factors for your organisation's context and get far more reliable results.
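As one illustration of a factor a team can manage itself, here is a minimal sketch of grounding an assistant in your organisation's own documents before it answers. Everything in it is hypothetical: the naive keyword retrieval merely stands in for whatever search index or embedding store your team would actually use.

```python
# Minimal sketch: supply organisational context to an assistant before it answers.
# The retrieval here is deliberately naive (keyword overlap); all names are illustrative.

from dataclasses import dataclass


@dataclass
class Snippet:
    source: str   # e.g. an ADR, runbook, or module README
    text: str


KNOWLEDGE_BASE = [
    Snippet("adr/007-payments.md", "All payment amounts are stored as integer cents."),
    Snippet("runbook/deploys.md", "Deploys to production require a passing contract-test suite."),
]


def retrieve(question: str, k: int = 2) -> list[Snippet]:
    """Rank snippets by crude keyword overlap with the question."""
    words = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda s: len(words & set(s.text.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_prompt(question: str) -> str:
    """Assemble a prompt that supplies your context and constrains the answer."""
    context = "\n".join(f"[{s.source}] {s.text}" for s in retrieve(question))
    return (
        "Answer using only the context below. If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\n"
    )


if __name__ == "__main__":
    print(build_prompt("How should payment amounts be represented?"))
```

The specific mechanism matters far less than the principle: the context the model receives, and therefore much of its reliability, is under your team's control.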
Objection: The main bottleneck in software development is not writing code.
Indeed, the main bottleneck in software development is not writing code. I’ve heard this rejoinder when talking with teams and leaders who are hesitant to experiment much with AI-assisted development. It’s been a valid objection to many suggestions for improving productivity that amount to helping developers type faster.
At first glance, AI-assisted development seems like another way to type faster. However, when I got more hands-on experience and did real work with these tools, I found that the most effective ways to work with AI involve much more than time-saving typing.
Writing specs can clarify thinking and design, and create a productive feedback loop when working with the assistant. There are many more points in the software development lifecycle where AI assistance can be massively helpful from both an efficiency and effectiveness standpoint. Typing is not the bottleneck, and AIs are not limited to helping with typing.
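To make that feedback loop concrete, here is a minimal sketch of treating a written spec as both the instruction given to the assistant and the checklist used to review what comes back. The structure, names and example content are purely illustrative, not a prescribed format.

```python
# Minimal sketch: a written spec that drives both the request and the review.
# Everything here is illustrative; use whatever spec format your team already has.

from dataclasses import dataclass, field


@dataclass
class Spec:
    goal: str
    constraints: list[str] = field(default_factory=list)
    acceptance_criteria: list[str] = field(default_factory=list)

    def as_prompt(self) -> str:
        """Render the spec into the instruction handed to the assistant."""
        return (
            f"Goal: {self.goal}\n"
            + "Constraints:\n" + "".join(f"- {c}\n" for c in self.constraints)
            + "Acceptance criteria:\n" + "".join(f"- {a}\n" for a in self.acceptance_criteria)
        )

    def review_checklist(self) -> list[str]:
        """The same criteria become the checklist for reviewing what comes back."""
        return [f"Verified: {a}" for a in self.acceptance_criteria]


spec = Spec(
    goal="Add rate limiting to the public search endpoint",
    constraints=["No new external dependencies", "Limit is configurable per API key"],
    acceptance_criteria=["Requests over the limit receive HTTP 429", "Existing tests still pass"],
)

print(spec.as_prompt())
print(spec.review_checklist())
```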
More to the point, AI-assisted development is not limited to helping write code, so this rejoinder is not a valid reason to disengage. AI can assist with research, design, requirements analysis, observability, testing, product strategy, architecture and many other facets of software development.
Objection: The benefits of AI-assisted development are all hype
Focusing on vibe-coding and AGI distracts from the practical use of AI-assisted development. More practical approaches receive less coverage but are far more relevant to software development.
Cynicism is reasonable, and sorting through real and imagined impacts can be difficult; not the least of the confounding factors is the Productivity Paradox. Erik Brynjolfsson popularised the term in his 1993 paper, "The Productivity Paradox of Information Technology", inspired by Nobel Laureate Robert Solow's quip: "You can see the computer age everywhere but in the productivity statistics." More recently, there is conjecture and evidence that the productivity gains from automation do arrive, but only after a lag.
Not all AI-assisted development is vibe-coding
Vibe-coding reflects some interesting progress in its own right and is an enjoyable experience, though fraught with risk should you ride it into production. Whilst strongly related, vibe-coding is currently on a separate trajectory from AI-assisted development, but the two understandably get conflated.
As Simon Willison highlights, "Not all AI-assisted programming is vibe coding (but vibe coding rocks)".
Maybe down the track, they become one and the same.
The discourse focuses on the endgame instead of what’s already possible.
When it comes to understanding how we will work and what software development jobs will look like in the future, examining the most sophisticated use cases and assessing the gap is less informative than reviewing the many mundane instances in which AI assistance excels. Instead of looking at where it falls short of replacing the best software engineers, look for evidence of what it is already replacing.
As I covered in this post, the capabilities of the median software engineer and the conditions they work under are a far cry from where the most productive engineers and teams are, and this is where AI will have an impact first - not necessarily through their adoption, but certainly where displacement is far more likely:
And, of course, the capabilities keep improving and will disrupt a broader segment of developers as they do.
Managing Cognitive Load in an AI-assisted context
Our productivity as individuals depends on how long we can spend thinking about the task at hand in a flow state. Our productivity as a team is a function of how easily the team stays aligned: signalling progress and choices in a way that doesn't overwhelm any individual and enables a flow of value that can be validated.
Our effectiveness in deriving benefit from AI-assisted development results from managing our cognitive load. One opportunity to conserve cognition is to remove many mundane tasks from our concerns.
Early experimentation with AI assistance will suggest to most people that, out of the box, it threatens to make that worse. The amount and scope of the changes an AI undertakes on our behalf all need to be checked, and some will be misaligned with our intent. They can overwhelm us: they become an inventory of things to consider and verify, whereas without assistance we handle each change in turn, with our focus limited to its scope and so naturally prevented from becoming too draining. This was precisely my experience, but it shouldn't be the point at which we step off, especially if we see an opportunity for competitive advantage for our companies or consider ourselves thought leaders.
So, to persevere beyond this point of friction, the effectiveness of AI-assisted development becomes a question of "How can we express intents whose results we are confident of, or can verify simply?", such that the cognitive load of the assistance remains within a comfortable range most of the time.
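One cheap way to keep verification simple, offered as a sketch rather than a prescription, is to state the intent as an executable check before accepting what the assistant produces. The function, behaviour and figures below are invented purely for illustration.

```python
# Minimal sketch of "express an intent with a simple way to verify it":
# write the executable check first, then let the assistant produce the implementation.
# The function name and behaviour below are purely illustrative.

def normalise_order_total(cents: int, discount_percent: float) -> int:
    """In practice the assistant would produce this body; the test below defines the intent."""
    discounted = cents * (1 - discount_percent / 100)
    return max(0, round(discounted))


# Executable intent: cheap to run and cheap to reason about.
def test_normalise_order_total():
    assert normalise_order_total(1000, 10) == 900    # straightforward discount
    assert normalise_order_total(999, 33.3) == 666   # rounding stays in integer cents
    assert normalise_order_total(500, 150) == 0      # never goes negative


if __name__ == "__main__":
    test_normalise_order_total()
    print("intent verified")
```

A check like this bounds how much generated code we have to hold in our heads before accepting it, which is exactly the relief in cognitive load we are after.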
Objection: Companies’ adoption of AI will be similar to other trends
One last thought, and this one is a bit more speculative: most software organisations' difficulty with change suggests that the likelihood of adapting to make good use of AI is low for most of them.
This is a fair point; it will be a challenge, but I am not sure it will limit the impact of AI-assisted development. We may see a resurgence of outsourcing arrangements where the capability builds up in other firms and the shift in unit economics makes it feasible.
Conclusion
The typical objections to using AI-assisted development tools seem more plausible from afar. Still, they suffer from the limitations of theory alone: the details that would contradict the inferred limitations are unavailable from that distance.
Similarly, shallow tyre-kicking can lead to confirmation bias, as known limitations can be read as confirmation that your reasoning for not investing more time was justified. The junior developers will be all over this stuff. As Geoff Huntley puts it, the future is for people who can do things, and he signals a period of discovery and acquisition of how to do things using the new paradigms.
Instead, with potentially momentous trends, it is essential to look at what they may offer as viable and valuable tools that can help you. There are tools you use today that others were once hesitant to embrace, and it is hard to imagine that you, too, might be susceptible to the same rationalisation, the kind that leads to late adoption and puts you at a disadvantage.
Where are you in your AI-assisted development adoption journey? In the comments, share your concerns, experiences, and lessons learned.
If you enjoyed this publication, please help others find us and benefit from the insights.