Are Software Development Progressives Becoming the New Curmudgeons?
Knowing AI is hype-prone is one thing. It's another to deny the impact it's already having and risk becoming irrelevant. How can we avoid the hype without missing the essential?
Hi everyone,
Thank you for reading Great CTOs Focus on Outcomes. I publish weekly and have an archive of over 150 posts, each packed with valuable insights on various topics relevant to CTOs and the issues they face, distilled from my career experience.
I strive to make each post a helpful resource on the topic it focuses on so that when a CTO has a need, they can reference an atomic nugget of insight. To this end, I regularly revisit and refine my posts, ensuring you always receive the best and most up-to-date information with maximum clarity.
If you’d like to support the growth of this resource, consider upgrading to a paid plan and taking advantage of the additional ways I can help you.
There’s something about getting older and, somewhere along the line, realising you're becoming your parents. You notice things like when your parents stopped listening to new music and the potential for that to start happening to you. I think it might be like that with learning new technologies. I've ridden a few waves, including the rise of SaaS, client-side web programming, and big data and analytics, and I will probably ride a few more before they become beyond my reach or interest. Each time, I went deep and learned everything I could, and my reward was gainful employment.
The age at which you stop listening to “new music” differs for each of us. It happens naturally and is a function of both your networks and the effort you make. Your networks influence what you are exposed to, and even as that exposure diminishes, you can overcome it with effort if you are determined to learn, discover and sense-make. You must be aware, however, that with age and experience you will develop cynicism and preferences, and you will need deliberate tactics to overcome these.
Is AI Hype an Inhibitor for Adoption by Experienced Software Engineers?
Because AI hype is a mix of informed professionals, excited newcomers and bad actors all preaching from the pulpit, there is much noise and, frankly, wrong information being shared. It is understandable why this is a turn-off. This is also a risk, though, as it presents an easy out for some who may have been contemplating learning something new.
The Risk for Late Career Technologists
An example of this, I suspect, is emerging with the growth of AI. I’ve noticed a pattern among my contemporaries, which appears to be a mix of both healthy and unhealthy skepticism regarding the value of the current state of AI, especially the generative AI popularised by commodity AI providers such as OpenAI, Anthropic, Google, Meta, and the like. I won’t highlight any specific examples in this post as my intent is not to shame, and in many ways, I can empathise with the thought process. In almost all cases, these individuals hold prominent positions in the industry and have a strong history of championing progressive movements within software development.
In some cases, these individuals have a strong affiliation with the agile, lean software, or systems thinking movements. They’ve helped organisations achieve more value sooner for decades. However, there is now a new opportunity to do this using new methods, and not all of them are on board. Whether they are on board or not is their prerogative; what I seek to explore in this post is the range of considerations, both as an exploration of my own thinking and in case it helps others weighing the same choice.
Healthy skepticism of AI-assisted software development
The healthy part is the skepticism itself: calling out hype, and providing worthwhile education on the nature of the models underlying LLMs, which compromises their effectiveness for certain types of problems and suggests they won’t be the path to achieving the much-vaunted Artificial General Intelligence (AGI), a hypothetical kind of AI that possesses the ability to understand and learn any intellectual task that a human being can. Even more significant are the worries about the societal impacts of blind adoption.
These are concerns I share. However, I believe the change is inevitable, and while I won’t work to accelerate the disruption, I will do what I can to ensure that people are prepared to adapt. From my perspective, as I addressed in my earlier post, I worry about the rate of change having significant social impacts when large numbers of information workers are displaced and lack the skills to compete for the new types of jobs being created. My position is not about an unfettered progression of AI; that will happen, but what we do as a society to manage the changes is within our control.
Unhealthy skepticism of AI-assisted software development
The less healthy part is more of a knee-jerk reaction to the AI hype by some of those contemporaries. With so much focus on the march towards achieving AGI and the perceived threat of software engineers being replaced by AI, there is a general dismissal of the capabilities already available - some because they fall considerably short of the promise of AGI and some because they fall short in terms of being trustworthy. Concerns for the latter appear to be focused on the tendency for these models to hallucinate, making them unreliable, as well as the aforementioned limitations of LLMs and similar generative AI approaches. The models, after all, are leveraging probabilities based on the many examples they’ve processed as part of their training.
These are valid limitations of these models. The misleading part is that some contemporaries have suggested there aren’t practical applications of these technologies, or at least that they haven’t seen any. As technologists who have navigated numerous technological waves over the past few decades, we’ve developed a keen sense of where to look for practical applications of emerging technologies. The practical applications of generative AI technologies are similarly discoverable, although admittedly obscured by a mountain of AI hype posts flooding social media.
Such a claim ignores significant examples of companies creating demonstrable competitive advantages with their AI investments. There are, of course, other companies whose early AI efforts have misstepped. More substantially, numerous jobs have already been reduced to menial or narrowly scoped responsibilities, and it is these that AI is disrupting first.
Some may argue that the size of the investment in these technologies is far greater than what can reasonably be returned in the short term. In response, I emphasise that investment bubbles have a rich history, and while we can anticipate a correction, this does not diminish the utility of the technology itself. All investments are priced on the uncertainty inherent in them. As certainty grows, the potential for return on investment becomes clearer and speculation reduces, leading to a reallocation of capital.
Evidence of the Applicability of AI and AI-Assisted Software Development
The likelihood is increasing that if you’ve made or received a call to a call centre, the conversation was with an AI. It accounted for nearly 2% of all interactions in 2023 and is projected to make up 10% of all interactions by 2025. Of course, it’s not all smooth sailing; companies are still learning which interactions synthetic call centre agents can manage most effectively. You can read about the experiences of company owners and agents themselves working alongside synthetic counterparts, as well as what works and what doesn't.
Some of these automations were limited simply by their lack of access, which is being addressed in the public commodity models through capabilities such as computer use and tool connectivity approaches such as the Model Context Protocol (MCP). Commodity providers such as OpenAI and Anthropic are often mistaken as the only game in town due to the prevalence of their use in the consumer sphere. Companies have progressed much further with custom-developed models they train or by working with open-source models that they extend.
When it comes to AI-assisted software development, it’s easy to become both amazed and then quickly disenchanted. Through my experimentation and conversations with others who are well ahead of me, I am learning that there are many things you can do to improve the results you can achieve when assisted by AI. Some of these may become universal practices, while others will be based on the preferences you develop as you discover what works for you. The journey to mastery may well be a long one due to the range of variables at play combined with human factors.
The Experience of the AI-Natives
The inverse of the risks late-career technologists experience is valid for the generation that will be the AI-Natives. Whilst we may imagine that those entering the industry will all become time bombs prone to riding their synthetic partners into production incidents, like Major T. J. "King" Kong rode the bomb in Dr. Strangelove, the reality is that there will be a spectrum. There will be capable software engineers who learn to leverage AI tools to maximise what they can achieve, and there will be those who will make a mess ignorantly.
What AI-Natives don’t have is the baggage of prior assumptions about how things work. That knowledge can be an advantage for my generation, but it can also blind us to specific changes, as some assumptions may change drastically in ways we don’t expect. The time that all our experience provides an advantage may be shorter than we assume. The AI-natives have access to all the world’s knowledge, supported by unprecedented tooling for accessing that knowledge. And they are actively using the tools that others are still tentatively trying to decide whether to dip their toes into.
A Strategy For Dealing With AI-hype