API The Docs Podcast - Documentation by humans, for humans

How do you get people on board when you shift to a new tool or a new approach? Even if the value proposition is clear, it can be a challenge to turn the change into a practice. In this episode of API The Docs, Polina Zaichkina (Senior Technical Writer at Codat) and Max Clayton Clowes (Product Director for Experience at Codat) share insights about building a culture of participation around documentation and achieving internal enablement.

I joined the Developing Documentation podcast to talk through how Codat rethought its documentation — not just the tooling, but the culture around it.

The short version: we moved from a CMS to a docs-as-code setup built on GitHub, rebuilt the site architecture around our product structure, and shifted from a small writing team doing everything to a model where the whole organisation contributes. Polina is the primary steward of our technical content; I engineered the new site and manage the broader documentation programme.
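To make the "docs-as-code on GitHub" shape concrete, here is a minimal sketch of the kind of CI workflow such a setup typically rests on. This is an illustrative assumption, not Codat's actual pipeline: the workflow name, paths, and build commands are hypothetical placeholders.

```yaml
# Hypothetical GitHub Actions workflow: rebuild and publish the docs site
# whenever documentation content changes on the main branch.
name: publish-docs

on:
  push:
    branches: [main]
    paths: ["docs/**"]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci          # install the static-site generator's dependencies
      - run: npm run build   # compile markdown sources into the docs site
      # The deploy step depends on hosting (GitHub Pages, Netlify, etc.)
```

The point of a pipeline like this is that contributions from across the organisation arrive as ordinary pull requests, reviewed and published the same way code is.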

The harder version: none of it was quick, and most of the difficulty was human rather than technical. Getting internal buy-in before the work is finished is a strange kind of problem — you have to build enough that people can see where you're going, without the runway to do it properly. We did the bulk of the migration in two to three months while keeping the existing site live. It worked, but only just.

The thing I keep coming back to is metrics. The instinct in tech is to reach for analytics — page views, bounce rates, engagement. I'm sceptical. Those tools were built for marketing sites. Documentation is used differently, and the data tends to flatten behaviour that's actually quite varied. We've moved toward internal NPS and, more usefully, actual user research — watching people use the docs in context. It's slower, less dashboardable, and significantly more informative.

One concrete example: we trialled Arcade, a step-by-step interactive guide tool. The in-product data suggested engagement. Sitting with users, we saw they weren't really interacting with it as intended. The data wasn't wrong, exactly — it just wasn't telling us the right thing.

The full episode is worth a listen, particularly for Polina's perspective on scaling contribution and managing feedback across irregular contributors.
