I've been speaking with engineers and PMs inside some of the world's largest and most advanced tech companies to understand what's actually happening with AI adoption on the ground. This is part 4 of 6.
Insight: AI doesn't close the expertise gap - it widens it
There's a well-documented psychological phenomenon called the Dunning-Kruger effect: people with limited knowledge in a domain tend to overestimate their own competence, while genuine experts are more likely to recognise the limits of what they know. The effect is often visualised as a curve - a spike of misplaced confidence early on (sometimes called "Mt. Stupid"), followed by a valley of humility as one discovers how much they don't know, and eventually a plateau of actual competence.
AI is putting a rocket under that early spike. It's never been easier to produce work that looks competent. The problem is that looking competent and being competent are two very different things - and AI can't tell the difference.
Give AI tools to someone who understands architecture, security, and code quality, and they catch the bad plans, reject the duplicated code, and spot the holes the AI introduced. Give the same tools to someone earlier on that curve, and as one engineer put it:
"They can churn out a lot of crap - not by malice, just by not knowing better."
The AI won't push back. It'll say "great plan, let's do this" every time.
He drew a direct comparison to the vibe-coding trend happening in the open-source world: non-engineers building applications that look functional on the surface but are riddled with security holes.
"Your app probably has so many issues, and some malicious actor can extract all your user data and your API keys and make you pay bills of thousands of dollars. But it's cool, because you see some nice things on the screen, and everyone is hyped."
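The failure mode he's describing is often mundane. A common one, sketched below with an entirely hypothetical key and endpoint, is an AI-generated frontend that ships a paid API's secret key to every visitor's browser - anyone who opens dev tools can read it and bill against your account. A more experienced builder would route the call through their own backend instead:

```typescript
// ANTI-PATTERN (hypothetical): a secret embedded in client-side code.
// Every visitor's browser receives this key; extracting it is trivial.
const API_KEY = "sk-example-not-a-real-key";

function buildClientRequest(prompt: string): {
  url: string;
  headers: Record<string, string>;
  body: string;
} {
  return {
    url: "https://api.example.com/v1/chat", // hypothetical paid API
    headers: { Authorization: `Bearer ${API_KEY}` }, // visible to everyone
    body: JSON.stringify({ prompt }),
  };
}

// SAFER PATTERN: the client calls your own backend, which holds the key
// server-side and can enforce auth and rate limits before forwarding.
function buildProxiedRequest(prompt: string): {
  url: string;
  headers: Record<string, string>;
  body: string;
} {
  return {
    url: "/api/chat", // your backend attaches the secret; it never leaves the server
    headers: {},
    body: JSON.stringify({ prompt }),
  };
}
```

The code looks almost identical either way, and both versions render "some nice things on the screen" - which is exactly why the gap is invisible to someone who doesn't know to look for it.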
Inside organisations, the same dynamic plays out. At one company, PMs without engineering backgrounds were submitting thousands of lines of AI-generated code for review.
"If people who don't actually have the ability to make quality changes using LLMs dump thousands of lines of code on engineers, that's not scalable."
Meanwhile, a PM made a point that reframes the whole conversation:
"Whether you're a product manager or not, if you have a product-focused, customer-focused and outcome-focused mindset, you're fine. Just play with the tools."
The differentiator isn't who can produce the most output. It's who can determine what's worth building, and evaluate whether what was built is actually good.
The practical takeaway: Invest in capability at least as much as tooling
AI raises the stakes on expertise. If you're investing in AI tooling, invest at least as much in capability uplift and organisational support. The organisations that will get the most value from AI are the ones where senior people are empowered to slow things down, ask hard questions, and reject work that doesn't meet the bar.
Other considerations: Who can tell good AI output from bad?
If your organisation is light on senior technical or product talent, this becomes your biggest risk. AI doesn't close the expertise gap. It widens it, and it does so in a way that's initially invisible because the output looks polished.
If your PMs are order-takers rather than strategic thinkers, they won't be able to evaluate whether the AI-generated strategy is any good. The people who look the most productive with AI in the short term may be generating the most technical debt and strategic noise. Meanwhile, the experienced people who slow down to verify and refactor AI output will look less productive by naive metrics.
The organisation that gets the most from AI isn't the one with the most licences. It's the one with the most judgment.
Hey! I'm Brendan and I'm a Product / Org Advisor at Organa. I help organisations sharpen their strategy, build capability, and improve how they operate so that they create more successful products with greater impact. If this was an enlightening read and you think your leaders would benefit from hearing more, I'm currently in between engagements and offering free in-house talks for Australian (or APAC) organisations. Shoot me a DM if you're interested in having me come and speak at your company, either on AI Adoption in big tech or on the organisational enablers that are necessary to achieve these gains - product strategy, org design, product management capability.