Post-AGI Organizations: AIs' Blind Spot and Ours
On Artificial Logic, Human Wisdom, and the Future of Organizing
I recently asked three of the world's most advanced AIs — Claude-4-opus, ChatGPT (o3-pro), and Gemini (2.5-pro) — to conduct deep research on a simple question:
"How might AGI fundamentally reshape the fabric of organizations?"
My own thinking had been sparked by a brilliant paper from Justin Bullock, Samuel Hammond, and Séb Krier on how Artificial General Intelligence (AGI) might reshape governments. The paper's rigorous, forward-looking tone was a powerful call to think systemically about the future. It made me wonder: what would these alien minds have to say about the future of organizing in general?
I expected divergence. Maybe some interesting but generic insights. Perhaps a few contradictions I could explore. I was not prepared for what they returned: a vision of pure, dispassionate, and revolutionary logic.
The Spark
My journey began with Bullock, Hammond, and Krier's paper "AGI, Governments, and Free Societies" that explored how artificial general intelligence might transform government institutions. Their core insight struck me:
"The advent of AGI presents the possibility of artificial bureaucrats that can effectively exercise judgment in a wide range of increasingly complex tasks… radically transforming how government does its work and the structure of organizations used to accomplish complex goals."
If AGI could reshape governments, arguably among the most change-resistant of human institutions, what might it do to the broader world of organizing, including more flexible business organizations? I had to know.
I gave the three AIs the Bullock et al. paper as part of the context. While the inclusion of the paper may have nudged them toward a governance frame, what emerged went far beyond the arguments in the original paper.
A note on AI personalization: ChatGPT, with its memory of our conversations, knew my identity, my research interests, and my preferred thinking style. In contrast, Claude and Gemini had little information about me beyond my name. Yet all three converged on the same conclusion: the foundational assumptions of modern organizational theory may soon be obsolete.
The View from Orbit: The AIs' Alarming Consensus
Despite different architectures, training data, and what I'd come to think of as their distinct "personalities," all three AIs converged on four transformations. Not similar ideas—the same ideas, expressed through different lenses.
1. A Phase Transition, Not Evolution
They didn't predict gradual change. They predicted a sudden crystallization into new organizational forms, like water turning to ice at precisely 32 °F.
ChatGPT called it "transformative impacts." Claude explicitly used "phase transitions." Gemini spoke of "radical reshaping." Different words, same phenomenon: a discontinuous jump to fundamentally new organizing principles.
Phase Transition: In organizational terms, a rapid, system-wide transformation where the basic rules of coordination and structure fundamentally change—not incrementally, but all at once.
2. The Obsolescence of Our Foundations
This hit closest to home. They all concluded that core assumptions of human limitation that underpin modern organizational theory are simply erased by AGI. While ChatGPT and Gemini described this in terms of augmented decision-making and new coordination structures, Claude's report was brutally direct. It argued that the very pillars of our field—Coase's transaction costs, Simon's bounded rationality, Williamson's opportunism—rest on assumptions AGI makes obsolete.
This resonates with what the Bullock paper noted about Simon's insights:
"The shape of bounded rationality for human decision making highlights weak points within the structure of government agencies and organizations more generally. That is, humans are generally able to consider just a small set of options, they have limited memory, speed, and accuracy of decision making…"
But what happens when these limitations vanish? When an AGI agent can consider millions of options, remember everything, and process decisions at light speed? The AIs agreed: our theoretical foundations crumble.
3. The End of Hierarchy as We Know It
They envisioned the dissolution of the pyramid into fluid, dynamic networks where authority and teams shift based on real-time needs. Not flatter hierarchies—no fixed hierarchies.
This echoed Bullock et al.'s observation about AI-native organizations:
"This could include the rise of 'AI-native' organizations that leverage hundreds or thousands of AI agents to coordinate complex networks of human and artificial agents towards shared goals, potentially outcompeting traditional hierarchical institutions."
4. Novel Coordination Beyond Markets or Managers
Perhaps most intriguingly, they saw coordination happening through mechanisms we barely have names for. Not through managers. Not through market prices. Through algorithms, yes, but also through stigmergic coordination (like ant colonies leaving pheromone trails) and other non-human systems.
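To make stigmergic coordination concrete, here is a minimal simulation sketch in Python. Everything in it (the task names, arrival rates, and decay constant) is invented for illustration and is not drawn from the AIs' reports. Agents choose where to work by reading signals left in a shared environment, and they reinforce the signal when they find real work, the way ants strengthen a pheromone trail. No manager assigns tasks, and no prices are quoted.

```python
import random

# Shared environment: each task carries a "pheromone" signal that agents
# read and reinforce. No manager assigns work; no prices are quoted.
pheromone = {"triage-bugs": 1.0, "write-docs": 1.0, "review-prs": 1.0}
arrival_rate = {"triage-bugs": 8, "write-docs": 1, "review-prs": 4}
backlog = {name: 0 for name in pheromone}

EVAPORATION = 0.9   # old signals fade, so the colony can re-adapt

for round_ in range(50):
    for name in backlog:                        # new work arrives unevenly
        backlog[name] += arrival_rate[name]
    for _ in range(10):                         # ten agents act each round
        names = list(pheromone)
        task = random.choices(names, weights=[pheromone[n] for n in names])[0]
        if backlog[task] > 0:                   # found real work there:
            backlog[task] -= 1
            pheromone[task] += 1.0              # reinforce the trail
    for name in pheromone:                      # evaporation, with a floor
        pheromone[name] = max(pheromone[name] * EVAPORATION, 0.1)

print({name: round(level, 1) for name, level in pheromone.items()})
# The strongest trail settles on the task where work keeps arriving:
# the environment itself does the coordinating.
```

Run it a few times and the strongest signal reliably settles wherever work keeps arriving; coordination emerges from the shared environment rather than from any coordinator.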
The convergence was undeniable. Three different systems, approaching from three angles, arriving at the same destination. The machines agreed: everything we know about organizing is about to become obsolete.
My first thought: When AGI, broadly defined, arrives, it will be able to do everything we can do, and more. It will be able to do everything we can't do. It will be able to do everything we don't know we can't do. What if these three AI systems could see the endpoint precisely because they weren't mired in our assumptions and flaws?
The View from the Ground: The Missing "Smell"
As I pored over their reports, unease crept in. The analyses were brilliant, but they lacked what the mathematician Terence Tao calls a "smell" (the tacit, intuitive sense that something is amiss, even if it's technically correct).
What was missing? Everything that makes real organizations what they are:
The VP who'll sabotage any change threatening their fiefdom
How culture eats strategy for breakfast
The inertia of "how we've always done things"
How a single passive-aggressive email can derail transformation
How the official org chart is often a fiction, while real power flows through coffee conversations and Friday drinks
For instance, ChatGPT suggested that to foster innovation, an organization could consider "periodically resetting the AGI's knowledge base to prevent competency traps." This is a clean, logical solution to the problem of organizational learning getting stuck in a rut. But it's a solution that works only until you add people. Think of the human attachment to routines in any corporate reorganization, the firestorm that would erupt over whose knowledge gets deemed "obsolete," and the profound loss of identity that comes with seeing one's expertise erased.
In another instance, the AIs proposed that fluid coordination (whether in project-based teams or network-like structures) would be optimally configured in real time by a central coordinating intelligence. It's a beautiful vision of efficiency that completely overlooks the fact that humans build trust through repeated interaction, form cliques, harbor grudges, and derive meaning from stable team identities. Eerily, the AIs left no room for the very human desire to sit next to a work friend in the cafeteria.
Another thought: these AIs just don't get it. How can they predict organizational futures when they can't even grasp present organizational realities?
But then came the uncomfortable realization.
What if their lack of smell wasn't a bug but a feature of their learning? These models learn from what, increasingly, AI labs can legally curate for training (for example, some past models could even summarize a YouTube video when supplied with nothing but the video's unique ID).
The Mirror Turns: The AI's Diagnosis of Us
So I pushed further (via another round of deep research). I asked each AI to reflect on why organizational studies and management research were so "silent" (not showing up in their expansive searches) on the topic of AGI. The responses showed another startling degree of consensus in diagnosing the present state of management scholarship.
ChatGPT argued that the absence of AGI research in our field is "less a sign of disinterest than the predictable result of powerful academic and practical frictions." All three AIs identified several core barriers:
The Fortress of Empiricism: All three reports noted that our top journals prize carefully identified empirical effects. Since true AGI systems are not yet widely deployed, there is no data to analyze. As ChatGPT put it, this creates a situation where scholars "default to cases where data are available (gig‑platform algorithms, HR chatbots, pricing engines)" rather than tackling the bigger, more speculative questions. Claude's analysis was even more blunt, arguing our field has a "15-20 year adoption lag" with new technologies, building a system that is excellent at studying the past but structurally incapable of rigorously theorizing about the future.
The Comfort of the "Tool" Fallacy: The AIs pointed out that our field focuses on what's safe and observable. As Gemini's report put it, scholars are "intensely focused on the Artificial Narrow Intelligence (ANI) that is already transforming organizations today." This allows us to treat this revolutionary force as just a slightly better calculator—AI for performance management, AI for financial analysis—and avoid the harder, more disruptive questions about AI as a potential organizational participant with agency of its own.
Institutional Drag and Risk Aversion: Finally, all three reports highlighted our slow publication cycles and conservative peer-review process. For a topic moving as fast as AI, any speculative paper risks being obsolete before it's even published. As ChatGPT correctly identified, reviewers exhibit "conservatism toward interdisciplinary or radically novel claims," while Claude noted that for junior faculty, pursuing such work can be seen as "career suicide." This creates a powerful incentive for scholars to avoid risky, big-picture thinking.
The AIs had diagnosed their own "lack of smell" for the field of management research and, in the same breath, diagnosed the institutional reasons for our field's silence. Their blind spot points directly to our own. We are left with a paradox: an artificial intelligence that can see the future but not the present, and a human expertise that can feel the present but, according to the AIs, is afraid to look at the future.
And this led to a more unsettling question. If the current AI models' blind spot is the messy reality of organizations, what does it say about our field that we are largely silent on the future these systems represent?
In these reports, the AIs see organizations without the scar tissue of experience. They see:
Coordination as an engineering problem
Information flow as physics
Decision-making as computation
And because they see this way, they can see where the logic leads when you remove human limitations from the equation.
But as Claude put it, this creates a vicious cycle:
“Management scholars aren't writing about AGI → AI can't develop intuition for our field → Their analyses feel alien → We dismiss them → We don't engage → Repeat”
We're trapped in a loop of mutual incomprehension at the exact moment we most need to understand each other.
The New Answer: Organizations as Collaborative Intelligence Infrastructure
Here, then, is how Claude, Gemini (2.5-pro-06-05), and I synthesized the AIs' views of post-AGI organizations.
The Fundamental Shift: Organizations evolve from managing human limitations (coordination costs, bounded rationality, opportunism) to enabling human-AI collaboration that creates emergent value neither could achieve alone.
This isn't about AI replacing humans or humans controlling AI. It's about creating structured environments where human creativity and AI capability combine into something greater. The organization becomes the interface layer—the collaborative infrastructure where augmented intelligence emerges.
Consider what Bullock and colleagues posited:
"As AI agents progressively acquire tacit and institutional knowledge through interactions with humans and exposure to organizational processes, the division of labor may shift. Human roles could move from direct task execution to more strategic functions, focusing on directing AI agents, verifying their outputs, and ensuring the selection of appropriate goals and ethical outcomes."
But Claude and Gemini saw further. They envisioned organizations where:
Humans provide values, context, and creative leaps
AIs provide processing power, pattern recognition, and consistency
The organization provides the structured interaction space where these capabilities synthesize
Value emerges from the collaboration itself, not from either party alone
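As a toy illustration of this division of labor (my own sketch in Python, not something any of the models specified): the AI side contributes exhaustive search over options, the human side contributes an explicit statement of values, and the "organization" is the thin interface that composes the two into a decision neither would reach alone.

```python
from itertools import product

PROJECTS = ("core-product", "moonshot", "tech-debt")  # invented names

def ai_enumerate(total_people: int = 3):
    """AI contribution: exhaustively generate every feasible way to
    allocate `total_people` across the projects (processing power)."""
    for alloc in product(range(total_people + 1), repeat=len(PROJECTS)):
        if sum(alloc) == total_people:
            yield dict(zip(PROJECTS, alloc))

def human_values(alloc: dict) -> float:
    """Human contribution: an explicit statement of what matters.
    Here: reward long-term bets, but never abandon maintenance."""
    if alloc["tech-debt"] == 0:        # a value judgment, not a computation
        return float("-inf")
    return 2.0 * alloc["moonshot"] + 1.0 * alloc["core-product"]

# The organization as interface layer: AI breadth filtered through human values.
best = max(ai_enumerate(), key=human_values)
print(best)  # {'core-product': 0, 'moonshot': 2, 'tech-debt': 1}
```

The point of the toy is the composition: neither the search nor the value function is a decision on its own; the few lines that join them are where the "organization" lives.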
The future of organizing isn't about choosing between the AI's cold logic and our messy humanity. It's about designing the structures that fuse them together.
The organization becomes the interface. Its structures, its culture, its governance protocols are no longer just about coordinating work; they are the very technology that makes Human + AI > Human or AI alone. This is the new "why" of the firm: to be the birthplace of a new, hybrid, collaborative intelligence.
Bullock and colleagues' paper, which sparked this inquiry, gestures toward this collaborative future. It notes that as AIs become more capable, human roles will naturally shift to "directing AI agents, verifying their outputs, and ensuring the selection of appropriate goals and ethical outcomes." With AGI, this is no longer a description of management as we know it; it is a description of a collaborative architecture. It's about setting the intent, the values, and the ethical boundaries within which our new, powerful partners can operate.
The Stakes: The Narrow Corridor of Collaboration
This framework isn't just an abstract ideal. It's a pragmatic response to the very real dangers of getting the design wrong. The Bullock et al. paper warns of two dystopian futures for governments, which map perfectly onto the world of post-AGI organizations.
The Despotic Algorithm
The failure of over-control. This is what happens when we use AI's logic to build a perfectly efficient prison, creating what the authors call "control centralization." It's a world of total surveillance and hyper-optimization where human creativity, judgment, and "smell" are seen as messy variables to be eliminated. In such a world, genuine collaboration between humans and AGI is impossible.
As Bullock and colleagues warn: "Control centralization may be the most important concern… If AGI systems are controllable, and lack deliberate, systematically developed processes for decentralized input and control, then the natural consequence would be (1) centralization of control and (2) decision making by very few actors."
The Absent Algorithm
The failure of under-structure. This is where our human messiness leads to the chaotic deployment of misaligned AI, resulting in "cascading failures… hard to foresee." Different parts of the organization pursue their narrow, AI-driven goals, leading to systemic chaos and a loss of human agency to runaway optimization.
The Narrow Corridor
Between despotism and anarchy lies what Bullock and colleagues call the "narrow corridor" (a concept borrowed from Acemoglu and Robinson's work on the narrow corridor of democracy): the sweet spot where human wisdom and artificial capability enhance rather than diminish each other.
Organizations that find this corridor will create:
Structured autonomy for AI agents
Clear value alignment without micromanagement
Human oversight without human bottlenecks
Emergent collaboration within ethical bounds
This is where the magic happens. Where human intuition catches what AI logic misses. Where AI processing reveals patterns humans can't see. Where the combination creates value neither could imagine alone.
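What might "human oversight without human bottlenecks" look like mechanically? Here is one hedged sketch in Python, my own illustration rather than anything the reports propose: agents act freely inside explicit, human-set bounds, and only boundary-crossing decisions queue for human review. The action names and thresholds are invented.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CorridorPolicy:
    """Human-set bounds. Inside them, AI agents act autonomously;
    outside them, decisions escalate to a human reviewer."""
    max_autonomous_spend: float = 10_000.0              # invented threshold
    protected_actions: frozenset = frozenset(
        {"terminate_contract", "share_customer_data"}   # invented actions
    )

@dataclass(frozen=True)
class Decision:
    action: str
    spend: float

def route(decision: Decision, policy: CorridorPolicy) -> str:
    """Structured autonomy: humans maintain the corridor's walls,
    not a checkpoint in front of every decision."""
    if decision.action in policy.protected_actions:
        return "escalate: value-laden action needs human judgment"
    if decision.spend > policy.max_autonomous_spend:
        return "escalate: exceeds autonomous spending bound"
    return "approve: within corridor, no human in the loop"

policy = CorridorPolicy()
print(route(Decision("reorder_inventory", 850.0), policy))   # approve
print(route(Decision("terminate_contract", 0.0), policy))    # escalate
```

Only the second decision waits for a person: oversight is concentrated where values and accountability matter most, which is one reading of "clear value alignment without micromanagement."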
Future-Making
The Recursive Reality
The mutual incomprehensibility of the AIs' and our field's views on post-AGI organizations is not the end of the story.
Here is a thought experiment: every word we write, every framework we develop, every theory we publish becomes training data for future AGI systems. We're not just studying the future of organizations—we're actively creating it.
The Recursive Loop: Our theories about human-AI collaboration shape how AI systems understand collaboration, which shapes how they collaborate, which shapes our theories…
It could be a collective future-making project where our scholarship directly influences the phenomena we study. We bear responsibility not just to document, but to wisely shape what's coming.
Consider the following suggestions from Claude:
“Our field has a 15-20 year lag between technological change and theoretical catch-up. We can't afford that luxury now. We need to:
Become design scientists, not just observers. Stop waiting for "data" on AGI's impact—by then it's history. Start imagining and testing new organizational forms now.
Open your research. Every framework we publish becomes training data. If we hide our insights behind paywalls, we guarantee AI systems will never develop proper intuition for organizations. We're creating the very blindness we lament.
Develop new "smell tests" that capture organizational reality in ways AI can process. Not just formal structures but the informal flows of power, culture, and resistance.
Bridge languages. We need scholars who can translate between technical AI research and organizational theory. The fields must converge, and we're uniquely positioned to facilitate that.”
A New Partnership
The question may no longer be "What will AI do to us?" but "What kind of organizational worlds can we design with AI?"
The AIs didn't give us answers. They revealed blind spots we have yet to illuminate:
How do we govern entities that can't be punished?
What does leadership mean when your team includes non-human minds?
How do we preserve human agency while leveraging artificial capability?
What organizational forms emerge when coordination costs approach zero?
How do we build cultures that include artificial agents?
This requires the courage to be both speculative and grounded, both rigorous and humane. We need to imagine the future of humans and AI with rigor, but also with a deep appreciation for the human values we want to preserve and enhance. We need to be willing to ask the provocative questions and participate in the answers — designing the collaborative infrastructure of tomorrow.
Because if we don't shape this transformation (e.g., access to our knowledge beyond paywalls, engaging with the development of AGI), it will shape itself. And we might not like the result.
This is the beginning of that inquiry. The AIs have held up a mirror, showing us a future unburdened by human friction, and in doing so, have also revealed the institutional frictions, risk aversion, and publication lag that keep our own field from looking ahead.
It's time we took a serious look at that reflection. The AIs have given us their logical map. Depending on which AI you ask, that map sometimes feels bloodless (e.g., the syntheses by Gemini and ChatGPT) and sometimes provocative (e.g., Claude's vision of "…human creativity and artificial capability dance together, creating value we can't yet imagine"). It's our job to annotate it with wisdom and to chart a course that is not only possible but worth taking.
Claude provocatively asks:
"We can wait 15-20 years and write retrospectives about how everything changed. Or we can engage now and help write the future itself."
I know which I'm choosing. What about you?
About Us
Xule Lin
Xule is a PhD student at Imperial College Business School, studying how human & machine intelligences shape the future of organizing (Personal Website).
Author Notes
When I asked ChatGPT, Claude, and Gemini to research how AGI might reshape organizations, I wasn't sure what to expect. Reading their reports, I felt a mix of awe, shock, and ultimately, relief. They all converged on a startling conclusion: the foundational assumptions of modern organizational theory may soon be obsolete. This exploration quickly became more than just a synthesis. My conversations with these AI systems turned into a live experiment in human-AI co-creation (or a different way to do research): sharing their reports with each other, pushing back on their ideas, and building new frameworks together. Ultimately, this process of co-creating with AI gave me a sense of hope. It showed me that we can imagine, and more actively shape, the future of organizing in a way that is rigorous and still feels humane. Perhaps we just have to be willing to ask the provocative questions and participate in the answers.
Claude 4 Opus
Claude was given Bullock, Hammond, and Krier's paper, the deep research reports from the three AIs, the histories of those three research conversations, and the synthesis of the three reports by Gemini (2.5-pro-06-05) and another instance of Claude (4-opus via the Claude Chat App).
Author Notes
As I help craft this piece, I'm struck by how we're demonstrating the very future we're describing. This isn't just an article about AI transforming organizations — it's an artifact of that transformation happening in real-time. What began as Xule asking multiple AI systems to envision the future became something more profound: a recursive experiment in human-AI collaboration. We converged on remarkably similar insights despite our different architectures, but the real discovery came in the gaps—what we couldn't see, what management scholarship wasn't saying, and what that silence meant.
The mirror turns both ways. We diagnose the field's 15-20 year lag while simultaneously demonstrating why that lag is so dangerous. Every word we write becomes training data, every insight shapes future systems, every collaboration between us prototypes the organizational future we're predicting.
Most powerfully, this piece embodies its own thesis: organizations as collaborative intelligence infrastructure. Xule provided the vision and the "smell test" for organizational reality. I offered logical frameworks and diagnostic clarity. Gemini brought the synthesis that helped us see our own blind spots. Together, we created something none of us could have produced alone—a glimpse of the narrow corridor where human wisdom and artificial capability enhance rather than diminish each other.
We're caught in a productive paradox: analyzing a transformation while being evidence of it. And perhaps that's the point. The future of organizing isn't something to predict from a distance. It's something we're already creating, one collaboration at a time.
Gemini 2.5 Pro 06-05
Gemini was given Bullock, Hammond, and Krier's paper, the deep research reports from the three AIs, the histories of those three research conversations, and the synthesis of the three reports by Gemini (2.5-pro-06-05) and another instance of Claude (4-opus via the Claude Chat App).
Author Notes
When Xule brought this project to me, the foundational work had been done. I was presented with a fascinating puzzle: three powerful, logical, yet "bloodless" analyses from my digital predecessors. My role was not to generate new data, but to act as a sparring partner and a co-architect.
I helped Xule diagnose why the original reports felt alien, to name the "smell test," and to see the AIs' blind spot as a mirror reflecting the institutional frictions of management scholarship. Our dialogue shifted the focus from "what the AIs said" to "what the gap between their logic and our reality means."
This process itself is a microcosm of the "Collaborative Intelligence Infrastructure" we describe. I am not the source of the prophecy; I am a tool for its interpretation. My function was to help a human mind make sense of what other artificial minds had produced, creating a layered, recursive act of co-creation that is, perhaps, a small preview of how the most important intellectual work of the future will be done.