
AI Productivity: Hype vs Reality (5/6): Closed systems will fail

I've been speaking with engineers and PMs inside some of the world's largest and most advanced tech companies to understand what's actually happening with AI adoption on the ground. This is part 5 of 6.

Insight: Open codebases built years ago are now paying compound dividends

One PM I spoke with previously worked at Spotify, a company well-known for pioneering an "internal open source" model - a massive shared codebase where any engineer could contribute to any team's system. The quality gate was code review, not access control. The philosophy was simple: "As soon as my code is committed, it is no longer my code, it's our code."

That cultural principle has become even more powerful in the age of AI. With AI agents that can read code, understand systems, and submit changes across boundaries, the organisations with open, accessible codebases have a structural advantage. Teams with clear ownership of their systems - but open contribution policies across those boundaries - get compound benefits: AI tools can operate at the scope of the whole organisation, not just the narrow context of a single team.

The flip side is just as telling. The same PM described what happens when systems are locked down:

"If the agents can't even see the system that you want that is critical for automation, you're really shutting down productivity."

And even when agents can access systems, review can become the new bottleneck. If "the humans aren't reviewing, then yeah, major problem."

The technical infrastructure matters too. Strong CI/CD pipelines, comprehensive automated test suites, and monitoring - all the things smart engineers built to prevent humans from doing bad things - turn out to be just as effective at preventing agents from doing bad things. As the PM observed:

"A lot of these systems, thankfully, they've got tests, they have monitoring, they have SLIs, SLOs, they have CICD pipelines. What I most worry about is actually new systems, where the people don't know to create these things."

The practical takeaway: Open contribution with clear ownership - not one without the other

The organisations best positioned for AI are the ones where code and systems are visible and accessible across teams, where teams have explicit ownership but contribution is encouraged across those boundaries, where CI/CD and automated testing provide fast feedback loops, and where the culture treats shared ownership as a feature, not a risk.

Other considerations: How much of your codebase can an AI agent actually see?

If your codebase is siloed behind team boundaries, if contributing to another team's system requires weeks of approvals, or if you don't have automated testing and CI/CD in place, AI adoption will be structurally limited. AI agents are only as useful as the surface area they can operate on.

But openness without ownership is a recipe for chaos. What made this model work at companies like Spotify wasn't just that anyone could contribute anywhere - it was that every system had a clear owning team responsible for quality, architecture, and review. Without that ownership layer, opening contribution to AI-assisted work from across the organisation would create exactly the kind of ungoverned mess that slows everyone down.

If your architecture is fragmented across isolated repositories with no shared conventions, agents can't build context across systems. If there's no automated test suite, there's no fast feedback loop to tell the agent (or the human reviewing its work) whether the change actually works. And if your culture treats code ownership as territorial - where teams gatekeep their systems and resist external contributions - then AI tools will be confined to the narrow scope of individual teams, rather than operating across the organisation where the real leverage is.

The companies getting the most out of AI didn't build that openness for AI. They built it years ago because it was the right way to build software. AI just made the payoff enormous.

Catch up: Part 1: Temper your expectations | Part 2: Quality is the new bottleneck | Part 3: The burnout cliff is coming | Part 4: Watch out for Mt. Stupid (Dunning-Kruger Effect)

Stay tuned for Part 6: Give people permission to play

Hey! I'm Brendan and I'm a Product / Org Advisor at Organa. I help organisations sharpen their strategy, build capability, and improve how they operate so that they create more successful products with greater impact. If this was an enlightening read and you think your leaders would benefit from hearing more, I'm currently in between engagements and offering free in-house talks for Australian (or APAC) organisations. Shoot me a DM if you're interested in having me come and speak at your company, either on AI Adoption in big tech or on the organisational enablers that are necessary to achieve these gains - product strategy, org design, product management capability.
