AI Productivity: Hype vs Reality (6/6): Give people permission to play

I've been speaking with engineers and PMs inside some of the world's largest and most advanced tech companies to understand what's actually happening with AI adoption on the ground. This is the final part - 6 of 6.

Insight: The fastest adopters didn't mandate anything - they removed friction

What stood out across every conversation wasn't just the tools these companies use. It was the environment around the tools. A few engineers built their own agent runtimes just to learn and have fun. A product manager built a personal AI assistant that could review code, summarise emails, and run tests - on his own initiative. Another team created a code reviewer with multiple AI "personalities" modelled on different team members' review styles. None of this was mandated. People were experimenting because the culture made it safe to.

One PM described teaching a room full of non-technical product managers how to build production dashboards and websites in 40-minute workshops. "I talked for about 10 minutes, and then they built something for a half an hour, and everybody built it." The barrier to entry wasn't skill - it was permission and access.

At the company level, internal API gateways let anyone, on any team, access any major AI model through a single key - no procurement process, no approval chain. "They basically just provide you all the tooling to just start building." Internal Slack channels and shared forums meant that when someone figured out a useful pattern, it spread across the company within days. "Almost every day, there's 3 new things that people post that are like, I tried this, and it's super cool."

When a few teams compressed their roadmaps using AI, the organisation didn't lock it down or create a governance committee. They created an "AI Native Week" - essentially telling the whole company: take a week, play with the tools, see what's possible. One PM's job that week was simply teaching other PMs how to use the tools so they could try things themselves.

This is the modern version of what earlier tech cultures achieved through guilds, 20% time, and internal communities of practice: structured space for cross-pollination and self-directed learning. The difference now is that the tools are powerful enough that a small experiment can produce something genuinely useful in an afternoon.

Takeaway: Remove friction, build channels, get out of the way

The fastest way to accelerate AI adoption isn't a mandate or a training program. It's removing friction: give people access to tools without a procurement nightmare, create space for experimentation without requiring a business case for every hour spent learning, and build channels for people to share what they discover. The companies seeing real transformation aren't the ones with the best AI strategy documents. They're the ones where someone can try something on a Tuesday, share it on Wednesday, and have three other teams using it by Friday.

Considerations: Does your culture make it safe to try something and fail?

If your organisation has a culture of tight control over tools and time - where every tool requires procurement approval, where every hour needs to be billable, where experimentation happens in a quarterly innovation sprint rather than daily - AI adoption will be slow and shallow. People will use AI the way they were told to use it, within the boundaries they were given, and they'll miss the creative, unexpected applications that create real competitive advantage.

If your people need permission to try something new, if "that's not my job" is a common response, if knowledge stays inside teams rather than flowing across them - those are the constraints that will limit your AI transformation more than any technology choice.

One PM made a point that stuck with me: there was a period where people felt some shame about using AI. They'd say things like "by the way, Claude built this for me" as a disclaimer. In his view, the companies that move fastest are the ones where that stigma erodes quickly - "where you value some creativity, and you're just happy that somebody tried something, and you celebrate that."

The organisational structure you need for AI is the same one you've always needed for innovation: autonomy, trust, and the slack to explore.

That's a wrap on this series. Thanks for following along. I'd genuinely love to hear what resonated, what you disagreed with, and what you're seeing in your own organisation.

Catch up: Part 1: Temper your expectations | Part 2: Quality is the new bottleneck | Part 3: The burnout cliff is coming | Part 4: Watch out for Mt. Stupid (Dunning-Kruger Effect) | Part 5: Closed systems will fail

Hey! I'm Brendan and I'm a Product / Org Advisor at Organa. I help organisations sharpen their strategy, build capability, and improve how they operate so that they create more successful products with greater impact. If this was an enlightening read and you think your leaders would benefit from hearing more, I'm currently in between engagements and offering free in-house talks for Australian (or APAC) organisations. Shoot me a DM if you're interested in having me come and speak at your company, either on AI Adoption in big tech or on the organisational enablers that are necessary to achieve these gains - product strategy, org design, product management capability.
