Imperfect AI is perfect
In a recent piece, I made the case that we should stop trying to build "perfect" AI. That imperfection is not a failure mode — it's intrinsic to how these systems work. Here, I want to go one step further: not just to excuse AI's flaws, but to explore how we can use them. How we can design with imperfection in mind.
Let's start with lossiness.
In computing, a "lossy" process throws away information. JPEGs discard detail to save space. MP3s strip out frequencies we don't notice. It's not just acceptable — it's efficient. It's why we can stream music or store thousands of photos on a phone.
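To make this concrete, here's a minimal sketch using Pillow (the library choice, the gradient image, and the quality settings are my own illustrative assumptions): the same image saved at decreasing JPEG quality, each step discarding more detail in exchange for fewer bytes.

```python
# A minimal lossy-compression sketch using Pillow (pip install pillow).
# The gradient image and quality settings are illustrative, not canonical.
from io import BytesIO
from PIL import Image

# Build a simple 256x256 gradient image in memory.
img = Image.new("RGB", (256, 256))
img.putdata([(x, y, (x + y) // 2) for y in range(256) for x in range(256)])

for quality in (95, 50, 10):
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=quality)  # lower quality discards more detail
    print(f"quality={quality}: {buf.tell():,} bytes")
```

Each step down in quality shrinks the file; whether the loss matters depends on what the image is for. Often it doesn't matter at all.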
When it comes to AI, we tend to view lossiness as a problem. We ask why the model forgot what we said five messages ago. Why it hallucinates details. Why it can't recall everything we've ever told it, forever.
But not all forgetting is bad. Sometimes remembering less helps you focus more.
At one point, Facebook explored resetting people's friends lists — not because of any technical issue, but because keeping an ever-growing database of everyone you've ever met makes the product worse. Friendships change. Context changes. Endless recall isn't a virtue — it can become a burden.
Lossiness introduces drift. Forgetting. Compression. That sounds bad in a spreadsheet, but in a conversation? In creativity? In social dynamics? It can be freeing.
Some of the most generative outputs from language models come from their imprecision. They take a slightly wrong interpretation and go somewhere unexpected. They misread your tone and give you a new one. They hallucinate — and sometimes, what they conjure is beautiful.
A lossy AI is closer to a thinking partner than a search engine. It's a collaborator, not a calculator. You wouldn't demand that a collaborator never forget, never drift, never misstep. You work with the slip-ups, and sometimes because of them.
We see the same principle in other areas of AI.
Recommendation engines — like those used by Spotify or YouTube — often misfire. And yet, some of the most delightful discoveries happen because they get it slightly wrong. If Spotify only gave you things it knew you'd like, your taste would ossify. Serendipity — the thrill of stumbling across something unexpectedly resonant — depends on a touch of algorithmic fallibility. The adjacent possible isn't always in your browsing history.
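One common way to engineer this kind of useful misfire deliberately is epsilon-greedy exploration: most of the time, recommend from what the user already likes; some small fraction of the time, pick outside their history on purpose. The sketch below is a toy (the names and the 10% exploration rate are illustrative assumptions, not how any real service works):

```python
# A toy epsilon-greedy recommender: mostly exploit known taste,
# occasionally explore outside it on purpose. All names are illustrative.
import random

def recommend(known_favorites, catalog, epsilon=0.1):
    """With probability epsilon, pick something outside the user's history."""
    if random.random() < epsilon:
        unexplored = [item for item in catalog if item not in known_favorites]
        if unexplored:
            return random.choice(unexplored)  # engineered serendipity
    return random.choice(known_favorites)  # the safe, familiar pick

favorites = ["indie folk", "lo-fi beats"]
catalog = favorites + ["free jazz", "bossa nova", "drone metal"]
print(recommend(favorites, catalog))
```

Tune epsilon to zero and taste ossifies; tune it too high and the feed feels random. The sweet spot is a controlled amount of wrong.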
Or take search. Perfect matching sounds ideal until you experience it. A "perfect" system would return nothing unless your query was precisely formed. In practice, we want fuzzy search — the ability to be a bit vague, a bit wrong, and still get close enough. Tools like Elasticsearch rely on this. So does human memory.
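You can see the principle with nothing more than Python's standard library: an exact lookup fails on a single typo, while a fuzzy match still finds the right entry.

```python
# Fuzzy matching from the standard library: close enough beats exact.
import difflib

index = ["lossy compression", "loss function", "glossary", "low-pass filter"]
query = "lossy compresion"  # note the typo

print(query in index)  # False: exact matching gives you nothing
print(difflib.get_close_matches(query, index, n=2, cutoff=0.6))
# e.g. ['lossy compression']: fuzzy matching forgives the mistake
```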
If we accept that our AIs will be imperfect — and we must — then let's go further. Let's design for that imperfection. Let's lean into lossiness. Let's build systems that compress, forget, and improvise; systems that don't store every detail, but instead let go of what's no longer useful.
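What might that look like? Here's a toy sketch (entirely hypothetical, not any real product's memory system): a conversation buffer that keeps only the last few turns verbatim and compresses everything older into a single acknowledgment of what was let go.

```python
# A toy "lossy memory": recent turns are kept verbatim, older ones
# fall away, leaving only a trace that something was forgotten.
from collections import deque

class LossyMemory:
    def __init__(self, capacity=5):
        self.recent = deque(maxlen=capacity)  # old turns fall off automatically
        self.forgotten = 0

    def remember(self, turn):
        if len(self.recent) == self.recent.maxlen:
            self.forgotten += 1  # detail discarded, gist retained
        self.recent.append(turn)

    def context(self):
        if not self.forgotten:
            return list(self.recent)
        return [f"[{self.forgotten} earlier turns compressed away]", *self.recent]

mem = LossyMemory(capacity=3)
for i in range(6):
    mem.remember(f"turn {i}")
print(mem.context())
# ['[3 earlier turns compressed away]', 'turn 3', 'turn 4', 'turn 5']
```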
Perfect recall, perfect logic: these make for machines we don't want to work with, and don't want to be.
Imperfect AI isn't just good enough. It might be better.