On Business Leaders' Hypocritical Attitudes Towards AI and People
There is a lot of excitement about the potential of AI and, of late, agentic AI. What this excitement reveals about how leaders want to use AI contradicts how workers are treated today.
There is a lot of excitement about agentic AI, and the idea interests me too. But it gets me thinking: agentic AI leads to emergent behaviour between agents, and the potential of that is unbounded. Agents working together spontaneously could achieve things we have yet to imagine.
This leads me to a question about agentic AI:
Do the leaders excited by this development realise most of them have been suppressing similar emergent behaviours between real people all this time?
They imagine AI agents collaborating to get work done. Meanwhile, humans endure workplaces where they request the things they need to achieve what they have been asked to do, then wait a year for them, or never get them at all. AI agents will need information to act on. Meanwhile, humans work in workplaces where bosses hoard information to gain influence.
This is just one such hypocrisy, and in this post, I will explore a few more regarding AI and people.
Whilst what is possible attracts my problem-solving tendencies as a technologist, I am also painfully aware of the implications. As with all technological progress, rapid change can bring both rapid growth and the rapid displacement of work and, in turn, people’s livelihoods.
Failing to Develop Your Team Leaves Them Vulnerable
One factor that makes knowledge workers vulnerable to premature replacement by AI is a lack of personal development opportunities, in organisations and environments that are not conducive to learning.
There are many organisations with no advancement and no opportunity to learn. Workers’ psychological safety has been eroded, thinking for yourself is disincentivised, and trying something new is discouraged. Places where you play it safe, because doing otherwise risks being blamed when things go wrong. Where ideas flow downhill from the top, and engineers are expected to 'do' following a prompt (sounds familiar, right?).
Knowledge workers in these places do not care as much as they could, because caring inevitably leads to trouble. It is disincentivised out of them.
And it is the output of these knowledge workers that is far more replicable by AI today.
Software Engineering as a Case in Point
AI still has a long way to go before it replaces the top engineers. Various people working in companies and actively experimenting with AI-augmented software development, using publicly available consumer and commercial tooling, have shared this view with me.
I am open to the idea that more progress may have been made in some research contexts, but I haven’t seen it; there are many chasms to cross before AI competently replicates the range of engineering decisions an experienced engineer makes. That said, I am not suggesting any engineer should assume their job is safe from disruption by AI.
Top engineers work in organisations that allow them to learn from their mistakes. They have others to learn from and discuss things openly with. They better understand the business they are in and the needs of the users they serve. They actively evolve the architecture of their work to yield the qualities needed to meet users’ expectations of the software they create.
Where they work, sensible policies for using AI tools exist; they have become familiar with those tools and use them to amplify their work further. AI may one day replicate the value these engineers provide, but that day is not today.
Addy Osmani wrote a post about the current state of AI-assisted coding and some patterns he’s observed regarding how inexperienced versus experienced coders use it.
To grossly paraphrase, both groups can use AI assistance to make progress more quickly.
The 70% Problem for Inexperienced Coders
Inexperienced developers often run into what he calls the ‘70% problem’: the first 70% of progress comes quickly, and AI can be excellent at helping them validate assumptions and produce meaningful demonstrations of a concept. But the last 30%, the work of making the software fit for actual ongoing use, addressing edge cases and hardening it to production quality, feels interminable to most inexperienced engineers. They may not even be excited by the idea of seeing these software quality issues through to resolution.
It reminds me of the classic project management adage: the first 90% of the scope takes 90% of the time, and the last 10% takes another 90%… (twice as long; don’t think about it too hard).
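If you do think about it, the arithmetic is straightforward. A playful sketch, where $T$ is simply my label for the original time estimate (not anything from the adage itself):

```latex
% The 90/90 rule as arithmetic, with T as the original estimate:
% the "first 90%" of the scope consumes 0.9T of effort,
% and the "last 10%" consumes another 0.9T.
T_{\text{actual}} = 0.9\,T + 0.9\,T = 1.8\,T \approx 2\,T
```

In other words, the project lands at roughly double the original estimate, which is the joke.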
There are exceptions, of course. I’ve known a few people who would describe themselves as less technical but who, through sheer bloody-mindedness about closing every issue their users faced, persevered until their applications addressed every critical need. They remain the exception, though.
Even more affected is the class of workers I described earlier, whose inexperience comes not from a lack of time on the job but from the lack of an environment conducive to learning. I’ve met many career programmers I’d classify as inexperienced because their growth was stunted early in their careers by harmful work environments that made learning and caring very difficult.
The 70% Problem for Experienced Coders
For experienced engineers, progress is also fast, but because they use the tools to augment and automate their expertise, they are more efficient with the remaining 30% of software creation too.
They are using the tools to leverage their knowledge of what it takes to produce high-quality software and to increase their productivity.
To contradict myself for a moment: this certainly does not mean all experienced engineers succeed with AI-assisted coding. You know the idea that, at some point in our lives, we stop actively seeking out new music, which is why it’s relatively easy to predict someone’s age from their listening interests?
For some experienced engineers, the same may be true of new technology trends. At some point, the energy required to learn the next trend may be too much, or the gulf between it and what they know best feels too wide. Or perhaps the cynicism accumulated over a career has grown to the point that they are suspicious of anything new and dismiss recent developments as mere fads.
But for those who embrace the tools, fundamental knowledge of how to create high-quality software manually helps them use AI-assisted coding to achieve the same results faster and more effectively.
Now, there’s no doubt there’s a spectrum, and some relatively new coders will become AI-assisted natives who may be even more effective than experienced engineers using the new tools. They may find ways to learn the necessary skills faster with AI-supported learning.
The Risk I See Playing Out With AI Job Displacement
I find it bitterly ironic that the terminology around AI uses keywords such as ‘training’ and ‘learning’ when the state of training and learning in most organisations is dire and exposes so many people to the risk of being replaceable.
The contradiction of leaders wanting to provide for AI workers what they have rarely afforded their human workers is repulsive. I fully expect that, at some point, AI workers will be more readily trusted, encouraged to fraternise in ways humans were often discouraged from, and permitted to behave in ways humans were not, among many other hypocrisies.
Unfortunately, throughout my career, I have observed quite a number of these organisations that do almost nothing effective to grow their people.
Eventually, these workplaces become career cul-de-sacs. Good talent leaves, and those who stay become a shadow of their potential. Many of these same organisations have a voracious appetite for silver-bullet AI workers.
Workers who don’t ask questions, don’t need training (or so these leaders assume), don’t take breaks, and don’t need HR, ping-pong tables, or social events... They tolerated all those things only because, until now, there was no alternative.
The point is not about AI; it’s about the prevalence of lousy engineering, caused by the compounding effects of harmful environments and a lack of effort to grow people. It’s about lousy, low-skilled jobs that exist because organisations could not find a way to evolve humanely, leaving themselves exposed to rapid mass displacement. AI is much closer to replacing people in these environments because the products and services they produce are already low-quality and low-value. The same shortsighted thinking led to this hunger for easy AI workers.
Let’s explore the impacts:
There Is an Opportunity Gap Between Knowledge Workers in Organisations With Growth and Those Without
Lack of Talent Development or Talent Transition Worsens the Jobs Gap
The Degree of Societal Unrest Will Be a Function of the Breadth of These Gaps