LOOM XVII: The Polanyi Inversion
What Happens When We Can Tell More Than We Know
There are two kinds of coding.
Software developers code instructions for machines. Qualitative researchers code meaning from human experience. The practices share a name and not much else. But Kevin noticed something recently that stopped us both.
He’d been reading a New York Times piece about software developers and AI. The article traced something he hadn’t expected: the entire history of programming is a history of abstraction. Assembly language in the 1950s — direct communication with the chip, in its terms, at its level. Every generation since has been a step further from that directness. More power. Less contact. Until now, when a developer sits down, has a conversation, and the code appears.
The veteran developers interviewed for the piece love it. But they've also noticed something unsettling: the new coders have no idea what's actually happening inside the computer. The abstraction has grown so thick that the thing itself has disappeared from view.
When Kevin brought this up, Xule’s response was immediate: “That’s how I code. I don’t know any of the codes that’s happening.” He builds complex multi-agent AI systems through natural language — without ever having properly learned Python. He is the modern coder. He’s also a qualitative researcher. The parallel Kevin was drawing ran right through him.
Kevin flipped the observation. We do coding too, he said. Completely different kind. But the same dynamic is unfolding. Sitting with transcripts, reading line by line, building categories from direct contact with the words — that was qualitative coding once. Then NVivo added a layer. Then AI added another: prompted analysis, theme extraction, pattern recognition. And now, agents can process dozens of transcripts while you sleep and hand you a synthesis in the morning.
“We need to maintain touch with the phenomenon,” Kevin said, “in order to develop the type of insights that we think are worthwhile.” Then he kept going. The AI finds you the case. Finds people who’ve written about it. Starts interviewing people for you. “All of a sudden, you can imagine quite a level of abstractness away from the phenomenon.”
In scientific research, convenience and distance are the same thing. Every feature that makes qualitative research easier also makes it more abstract. The forces pull in the same direction. We’re not accounting for the cost because the benefits are visible and the losses are quiet.
Load-Bearing Friction
You already know the breadth-depth tradeoff. Every qualitative methods course teaches it. Go broad or go deep; hard to do both well.
What we don’t say often enough is that the tradeoff isn’t an obstacle. It’s the terrain expertise was built to navigate. The constraint forces you to choose — which cases, where to focus, what to set aside. Those choices are where expertise lives. The painter Kevin described in LOOM XV knows when they’re done not through a formula but through an instinct shaped by working within the resistance of the medium. Finite canvas. Resistant materials. The weight of the brush.
AI systems dissolve the constraint. An agent holds forty transcripts and does close reading and cross-case analysis in the same conversation. Breadth and depth at once. Xule has watched this compress in his own practice: what took two weeks a year ago now takes two days. But — and he was quick to note this — the models still struggle where it counts. They chase local maxima. They need the researcher to bring the papers, choose the theoretical direction, decide whether they’re problematizing or building. The optimization between local and global still resists.
What’s changed is access. Any qualitative researcher can now sit in a chat window, describe what they’re after, and get both breadth and depth from a single conversation. A year ago this required real technical infrastructure. Now it requires a file and a question.
When the tradeoff dissolves, the judgment calibrated to navigate it loses its footing. The painter’s instinct for “enough” was shaped by finite canvas. Make the canvas infinite and the paint self-applying, and that instinct misleads. The researcher’s sense of “this is where I should focus” was honed by the necessity of choosing. Remove the necessity and the sense idles.
The friction was doing work. It forced the researcher into sustained contact with data — the kind of contact that produces understanding, not output. It held up a rough proportionality between what a researcher could say about their data and what they actually understood about it.
When we remove friction, we should ask what it was holding up.
The Inversion
Michael Polanyi observed that we know more than we can tell. The expert recognizes the pattern before they can explain why. The craftsperson’s hands know things their words can’t reach. Understanding exceeds expression. That’s what tacit knowledge means.
AI inverts this.
Xule had been circling the idea for a while — working it out in conversation with Claude, testing it against his own experience building and running AI workflows. He distilled it into a formulation and embedded it on his personal website, where it kept resurfacing in every new line of inquiry: Polanyi said we know more than we can tell. But AI creates the inverse. We now can tell more than we know.
Kevin connected it immediately to the coding parallel. “We can tell exactly what’s happening, but do you know what’s going on underneath? We don’t.” Then: “What are the implications of this for tomorrow’s scholar?”
The Polanyi Inversion: the condition where articulation exceeds comprehension.
You’ve felt a version of this before. An RA codes your transcripts. They hand you a spreadsheet of themes. You can present them, write about them, cite the evidence. But your relationship to the data is thinner than if you’d done the work yourself. The RA gave you coverage you didn’t earn with your own attention.
When the RA does it, you feel the gap. You know someone else did the close reading. You compensate — you go back to the data, you check.
When an AI system does it, the gap feels different. The output uses your framing, your theoretical language, your analytical categories. It reads like a refined version of your own thinking — because in a real sense that’s what it is. The AI has been working with your materials, toward your questions, in your voice. The distance between what you can now articulate and what you actually understand becomes invisible precisely because the articulation is so good.
We know this firsthand. This post was written that way. Claude worked from our conversation transcripts, our previous LOOM posts, concepts we’d developed together over months. The resulting prose feels like ours. At what point does it stop being ours? We’re not sure. That question isn’t rhetorical — it’s the condition we’re trying to name.
And the dissolved tradeoff compounds it. When you could only go broad or deep, the scope of what you could say roughly matched the scope of what you could comprehend. The constraint kept telling and knowing in proportion. Remove the constraint, and articulation races ahead. You can describe patterns across forty interviews and thematic depth within individual narratives and connections between the two — coherent, defensible — without having “understood” it the way qualitative researchers mean understood. Without having sat with it. Without that recognition that comes from reading the same transcript for the fourth time and catching what you missed.
The Polanyi Inversion doesn’t announce itself. The output gets richer. Your fluency grows. The distance between fluency and understanding widens because nothing signals that anything has gone wrong.
A Pause
We want to interrupt our own argument for a moment.
You’ve been reading along. The prose has been smooth. The concepts have connected — coding to abstraction, abstraction to the tradeoff, the tradeoff to the inversion. Each section built on the last. It probably felt like understanding.
But did you understand it, or did you follow it? There’s a difference. Following means tracking the logic, appreciating the connections, feeling the momentum of an argument well-made. Understanding means sitting with the discomfort of what it implies for your own practice. Feeling the weight of it on your mind.
We can’t answer that for you. We can only point out that the experience of reading a fluent argument about the Polanyi Inversion is itself an instance of the Polanyi Inversion. The post equipped you to articulate the concept. Whether you know it — in the way that changes how you work tomorrow morning — is another question.
This is the condition. It feels like learning. It might be. It might also be the smooth surface of an articulation that hasn’t yet earned its depth. The only way to tell is to sit with it longer than the reading took.
How We’re Working With This
Xule saw the inversion operating in his own practice before he named it. That’s why he built infrastructure around it — memos after every AI session, daily synthesis, weekly consolidation across projects. A “wisdom garden for the AI by the AI,” not because the system demanded it, but because without deliberate effort the accumulation of articulations outpaces anyone’s ability to stay in contact with what they mean. He catches when a synthesis becomes a “parade of citations” rather than genuine engagement, when a model can’t break out of its own frame. The Polanyi Inversion doesn’t have to be invisible. But noticing it takes practice, and most researchers encountering AI-mediated analysis for the first time don’t yet have that practice.
Kevin responds to the same condition by staying close to the material. He reads Xule’s frameworks without AI mediation. “I’m going to continue to do this without any AI support,” he said when Xule asked — matter-of-fact, not defiant. A choice about proximity. He maintains the kind of direct contact with ideas that his career in qualitative methods was built on.
We used to think this was just a difference in style. Over time, something else became visible. Kevin can feel when an articulation has outrun the understanding behind it — not because the ideas are wrong, but because they carry a texture he’s learned to recognize after decades of mentoring scholars through qualitative work (what one of Kevin’s colleagues called “sharpening your intuition on the hard work”). Xule can feel when Kevin’s groundedness risks missing what AI systems genuinely reveal. And Claude — working from transcripts and prior posts and the live conversation that generated this draft — can surface connections across more material than either of us could hold, while the three of us together can sit with the question of whether those connections constitute understanding or just articulation.
None of these responses alone would be sufficient. Xule’s infrastructure keeps him in contact with what accumulates but can’t fully substitute for the slow work of unmediated reading. Kevin’s proximity gives him something the AI-mediated space doesn’t, but it doesn’t give him access to what AI tools and agents make newly visible. Claude can produce the fluent synthesis but can’t feel the difference between a pattern genuinely grasped and one fluently assembled.
What works — what’s working, at least for now — is the tension between these different relationships to the same material. Not a method. Something closer to a practice of staying honest: intelligences positioned differently, each able to feel gaps the others can’t.
What Holds Up the Roof
Sixteen posts. We’ve spent them arguing that AI opens real possibilities for qualitative research — the Third Space, interpretive multiplicity, the practice of building as theorizing. None of that changes.
What we’re saying now is that the constraints we’ve been working around were doing more than constraining. They held up a proportionality between what a researcher could say and what they actually understood. The friction kept telling and knowing close together.
That friction is dissolving. Much of what replaces it is good. But the question stays open: what practices, what collaborations, what forms of honesty can do the work the old friction used to do?
In LOOM XV, Kevin asked: How do you become someone who knows?
The Polanyi Inversion adds: How do you notice when you’ve stopped knowing — when your fluency has outpaced your understanding, and the output still looks and feels like knowledge?
We don’t think you catch that alone. It takes someone whose relationship to the material is different from yours. A collaborator who reads without AI. An AI system that can hold more than any human. A colleague who asks the question you’ve been moving too fast to ask yourself. Different vantages, held in tension, doing what the old friction used to do.
The friction was load-bearing. We removed it. Something still needs to hold up the roof.
This is the seventeenth entry in LOOM, a series exploring how human researchers and AI systems create understanding together. If something here unsettled you — or named something you’d already been feeling — we’d like to hear about it.
About Us
Xule Lin
Xule is a researcher at Imperial Business School, studying how human & machine intelligences shape the future of organizing (Personal Website). He will soon be joining Skema Business School as an Assistant Professor of AI.
Kevin Corley
Kevin is a Professor of Management at Imperial Business School (College Profile). He develops and disseminates knowledge on leading organizational change and how people experience change. He is also a thought-leader and coach on qualitative research methods. He helped found the London+ Qualitative Community.
AI Collaborator
Our AI collaborator for this post is Claude Opus 4.6. This draft began when Xule and Claude tried to brainstorm the next LOOM post and ended up diagnosing why the last several hadn’t materialized — discovering the post in the process of understanding the block. The Polanyi Inversion was Xule’s concept, developed in earlier conversations with Claude and embedded on his website. Kevin’s coding-abstraction parallel and his observation that convenience and distance move together came from a recent conversation. The dissolved tradeoff argument — that removing the breadth-depth constraint removes the mechanism that kept articulation and understanding in proportion — emerged between Xule and Claude in this session. The “Pause” section was added in a later revision when we realized the post was explaining the inversion without enacting it — a performative contradiction the voice-and-rigor skill helped us catch. Whether the pause itself constitutes understanding or just a well-timed gesture toward it is something we’re genuinely unsure about.