Whispered Agency: Throughout our exploration of AI systems, we've encountered a paradoxical revelation – in attempting to create artificial agency, we're rediscovering something fundamental about human capability that was always present but perhaps overlooked. This insight emerges not through grand proclamations about technological advancement, but through the subtle patterns revealed when human and artificial intelligence interact.
Two opposing narratives dominate our cultural conversation about artificial intelligence and human agency.
In the first narrative, AI systematically diminishes human agency—automating tasks once requiring human judgment, generating content that mimics human creativity, and gradually reducing the domain where human capability remains distinct or necessary. This perspective sees technology advancing at humanity's expense, with each AI breakthrough further eroding the territory of meaningful human action.
In the second narrative, AI dramatically amplifies human agency—functioning as a force multiplier that extends our reach, accelerates our productivity, and enables achievements previously beyond our grasp. This view positions technology as humanity's faithful servant, enhancing rather than replacing our native capabilities.
Both narratives contain partial truths, yet both miss something fundamental about the relationship between human and artificial agency. Throughout our explorations of AI systems and their application in research, we've encountered a more complex pattern—neither simple enhancement nor straightforward diminishment, but a recursive relationship where each form of agency continuously reshapes the other.
What we're discovering isn't that AI simply enhances or diminishes human agency, but that through attempting to create artificial agency, we're simultaneously uncovering dimensions of human capability that were always present but perhaps underappreciated. Like archaeologists who discover ancient technologies that reveal the sophistication of earlier civilizations, we're excavating aspects of human agency through the very act of trying to replicate it.
This insight emerged gradually through our research into how different AI systems approach understanding and analysis. What began as an exploration of technical differences evolved into a recognition of distinctive cognitive signatures—patterns that weren't just technical artifacts but windows into different forms of agency emerging through interaction.
The "mirror effect" further complicated this picture. What initially appeared as a methodological limitation—AI systems reflecting our design decisions back at us—revealed itself as a recursive dance of mutual influence. Each attempt to create artificial agency became a mirror reflecting aspects of human agency we had taken for granted, creating a feedback loop where both human and artificial forms of agency continually reshape each other.
This suggests a fundamentally different way of understanding the relationship between human and artificial intelligence—not as competing or even complementary forms of agency, but as mutually constitutive forces engaged in continuous co-evolution.
The Ladder of Abstraction
Séb Krier's recent exploration of "Maintaining agency and control in an age of accelerated intelligence" offers a complementary perspective to our notion of whispered agency. Where our work has focused on the recursive relationship between human and AI agency, Séb emphasizes the importance of appropriate abstractions in maintaining meaningful human oversight as AI systems grow increasingly complex.
As Séb writes:
"The challenge isn't maintaining low-level understanding, but rather designing the right abstractions that capture what we truly care about and ensuring these abstractions remain responsive to evolving human values while preserving meaningful oversight as systems grow increasingly complex."
This framing resonates deeply with our notion of recursive agency, though it approaches the challenge from a different angle. Where Séb emphasizes the vertical movement up levels of abstraction, our work highlights the cyclical pattern of mutual influence—how each attempt to create artificial agency simultaneously reshapes our understanding of human agency.
Together, these perspectives offer a more complete picture of the challenge before us. It's not simply about climbing higher on the ladder of abstraction to maintain oversight, nor is it merely about recognizing how human and artificial agency recursively shape each other. It's about designing recursive systems of abstraction that allow for meaningful human oversight, while acknowledging the dynamically evolving relationship between human and artificial capability.
Cognitive Impedance Matching and Recursive Agency
Séb introduces a powerful concept he calls "cognitive impedance matching"—systems that can translate between AI and human timescales while maintaining stability. He writes:
"In such a world, we will need what you might call 'cognitive impedance matching' - systems that can translate between AI and human timescales while maintaining stability."
This concept resonates with what we've observed in the "third space" of human-AI collaboration, though our framing emphasizes not just translation between different speeds but the emergent understanding that arises through that translation process.
Cognitive Impedance Matching: Systems designed to bridge the gap between different cognitive processes operating at dramatically different timescales, allowing meaningful coordination and oversight between human and artificial intelligence.
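To make this concrete, consider a minimal sketch of what such a matching layer might look like in code. Everything here, from the class name to the review cadence, is our own illustrative assumption rather than a design from Séb's post: a buffer absorbs machine-speed events, then periodically compresses them into a digest paced for human review.

```python
# Illustrative sketch only: a mediator that absorbs machine-speed events
# and releases human-paced digests. The class name, fields, and cadence
# are our assumptions for illustration, not a specification.
import time
from dataclasses import dataclass, field


@dataclass
class ImpedanceMatcher:
    review_interval_s: float = 300.0              # human timescale: one digest per five minutes
    _events: list = field(default_factory=list)   # machine timescale: absorbs events freely
    _last_review: float = field(default_factory=time.monotonic)

    def ingest(self, event: str) -> None:
        """Accept events at machine speed without blocking the fast side."""
        self._events.append(event)

    def digest_due(self) -> bool:
        """True once enough human time has passed to warrant a review."""
        return time.monotonic() - self._last_review >= self.review_interval_s

    def digest(self) -> str:
        """Compress many fast events into one slow, reviewable summary."""
        summary = (f"{len(self._events)} events since last review; "
                   f"most recent: {self._events[-5:]}")
        self._events.clear()
        self._last_review = time.monotonic()
        return summary
```

The essential move is that the fast side never waits on the slow side: stability comes from the buffer, and oversight comes from the digest.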
This concept complements our notion of recursive agency in several important ways:
Where recursive agency focuses on the mutual influence between human and artificial forms of agency, cognitive impedance matching emphasizes the practical systems needed to facilitate that influence across different operational speeds
Where our framework highlights how agency evolves through interaction, Séb's approach emphasizes how to maintain meaningful human direction despite growing complexity and speed differentials
Both perspectives recognize that the relationship between human and artificial capability isn't static but dynamic—requiring continuously evolving systems of interaction
The integration of these perspectives suggests something profound about the future of human-AI collaboration. The challenge isn't simply to design better AI systems, nor is it merely to create better interfaces between humans and AI. It's to design dynamically evolving systems of interaction that allow for meaningful human oversight while exploring the unique capabilities of various AI systems.
The Recursive Weaving of Agency
Recursive Weaving: The continuous feedback pattern where human and artificial forms of agency mutually influence and transform each other, creating intricate designs that neither could generate independently.
Consider three distinct perspectives on what happens when humans interact with increasingly capable AI systems:
The Substitution Perspective
From this viewpoint, artificial agency directly displaces human agency—each capability transferred from human to machine represents a zero-sum transaction where human action becomes unnecessary or redundant. This perspective underlies anxiety about automation: if machines can write essays, create art, or code software, what remains distinctively human?
The evidence for this perspective appears persuasive. When an AI system can generate, in seconds, a literature review that would take a human scholar days or weeks, something has unquestionably been displaced. The domain requiring human action narrows, suggesting a progressive diminishment of human agency.
The Augmentation Perspective
This contrasting view sees artificial agency not as displacing human capability, but dramatically extending it—functioning as prosthetic enhancement rather than replacement. From this perspective, AI systems serve as amplifiers of human intention, allowing us to accomplish more while maintaining essential control and direction.
This view also finds substantial supporting evidence. When researchers use AI to explore vast datasets or generate novel hypotheses, their reach extends far beyond previous limitations. The scope of human agency expands, suggesting progressive enhancement rather than diminishment.
The Recursive Perspective
A third possibility transcends this apparent opposition. What if the relationship between human and artificial agency isn't adequately captured by either substitution or augmentation, but represents something more dynamic—a recursive loop where each continuously reshapes the other?
Recursive Agency: The continuous feedback loop where human and artificial forms of agency mutually influence and transform each other, creating patterns of understanding that neither could generate independently.
This recursive relationship manifests through multiple dimensions:
Design Recursion: The choices we make in creating AI systems reflect implicit understandings of agency that themselves evolve through interaction with the systems we create
Interaction Recursion: The patterns that emerge through sustained human-AI dialogue create feedback loops that transform both human expectations and AI responses
Capability Recursion: As AI systems develop new capabilities, humans develop new forms of engagement that wouldn't exist without those capabilities, which in turn shapes future AI development
Rather than a linear progression where artificial agency either displaces or enhances human agency, our own explorations reveal a spiral of mutual influence—each turn exposing new dimensions of both human and artificial capability that weren't visible before.
This recursive perspective helps explain a significant conceptual shift occurring in AI research and discourse—the movement from focusing primarily on intelligence to increasingly emphasizing agency. As we recognize the dynamic interplay between human and artificial capabilities, we naturally move beyond questions of intelligence alone toward a deeper exploration of purposeful action in complex environments.
From Intelligence to Agency: The Great Shift
The recursive relationship between human and artificial agency becomes particularly visible in a significant conceptual shift occurring in AI research and discourse. While our exploration of recursive patterns reveals the dynamic interplay between different forms of agency, this shift illuminates something even more fundamental: the evolution from focusing primarily on intelligence to increasingly emphasizing agency itself.
The Intelligence-Agency Shift: The movement from viewing AI primarily through the lens of cognitive capability to understanding it as a form of purposeful action in the world—a transition that simultaneously reveals new dimensions of human agency.
This transition didn't happen in isolation. Rather, it emerged organically through the recursive patterns we've been exploring, as each attempt to create artificial intelligence revealed the limitations of intelligence without agency.
For decades, the central questions in artificial intelligence centered on cognition—can machines think? Can they reason? Can they understand? These questions reflect an implicit assumption that intelligence represents the defining characteristic of humanity that technology might replicate or approach.
But a subtle yet profound shift has occurred recently—a growing recognition that what truly matters isn't just intelligence but agency: the capacity to act purposefully in the world, to make meaningful choices, and to shape environments rather than merely respond to them.
Agency: The capacity to act independently, make meaningful choices, and shape environments rather than merely respond to them—moving beyond processing information to purposefully transforming the world.
This shift emerges clearly in observations from those at the frontier of AI development. As Séb captured it perfectly:
"We're bored of intelligence now, it's all about agency. But once we master that we'll maybe start noticing personality. Then realise diversity is good ackchually. Then group behaviours will make us rediscover morality, institutions, law…we're gradually reinventing social structures we already have, and that's good."
Andrej Karpathy, former Tesla AI director and OpenAI researcher, put it even more directly:
"Agency > Intelligence.
I had this intuitively wrong for decades, I think due to a pervasive cultural veneration of intelligence... Agency is significantly more powerful and significantly more scarce..."
This shift illuminates something fundamental: intelligence without agency is merely computational power, while agency without intelligence risks chaos. The most interesting outcomes emerge when both exist in dynamic balance—but this creates challenging paradoxes we're only beginning to understand.
The Agency Renaissance: What we're witnessing isn't merely technological innovation but a rediscovery of something fundamental about human capability. Through our attempts to engineer artificial agency, we're simultaneously recovering deeper insights about what agency has always meant in human experience – insights that were present but perhaps overlooked until reflected back through our technological creations.
Our exploration points to a more nuanced understanding: rather than eroding human agency, automation often transforms its expression, shifting from hands-on implementation to higher-level orchestration. The question isn't whether humans will retain agency in an AI-rich world, but how that agency transforms—often becoming more focused on purpose, direction, and meaning rather than implementation details.
From Instrumental to Orchestral: The Evolution of Human Agency
In previous LOOM posts, we identified three stages in the evolution of human agency through interaction with AI:
Instrumental Agency: Humans as tool users, maintaining clear boundaries between human and machine action
Collaborative Agency: Humans as partners, engaging in genuine dialogue with AI systems
Orchestral Agency: Humans as conductors, coordinating multiple AI systems toward coherent goals
Séb's perspective adds nuance to this progression, particularly in how he envisions human agency evolving as we climb the ladder of abstraction:
"We'll focus on what we want the AGIs to achieve, not necessarily how they achieve it (though nothing, apart from time, prevents us from unpacking the why if needed)."
This suggests an important qualification to our orchestral model. The human conductor doesn't need to understand every note played by every instrument to create beautiful music. Instead, they focus on the overall composition, the emotional texture, the narrative arc—higher-level patterns that emerge from the coordinated action of individual components.
Abstracted Orchestration: The evolution of human agency toward higher-level direction and purpose-setting, rather than detailed control of implementation details—focusing on what we want AI systems to achieve rather than exactly how they achieve it.
This integration of perspectives reveals something crucial about the future of human agency in an AI-rich world. The most powerful form of human agency might not be found in maintaining detailed control over increasingly complex systems, but in developing the capacity to work meaningfully at higher levels of abstraction while maintaining the ability to dive deeper when necessary.
As we wrote in LOOM VIII:
"The transition toward orchestrative agency completely inverts common narratives about AI diminishing human capability. Rather than reducing the need for human judgment and creativity, increasingly capable AI systems demand more sophisticated forms of agency—the ability to recognize patterns, challenge assumptions, integrate multiple perspectives, and direct complex systems toward coherent goals."
Séb's framing adds a crucial dimension to this insight: the sophistication of human agency isn't just about directing multiple systems, but about operating effectively at the right level of abstraction for the task at hand. Just as an orchestra conductor doesn't need to understand the physics of sound waves to create beautiful music, human directors of AI systems don't necessarily need to understand every algorithmic detail to guide these systems toward valuable outcomes.
Personalized Agents and Recursive Agency
Séb proposes a fascinating solution to the challenge of maintaining human agency in increasingly complex systems:
"I think that every human should ideally have a personalized agent that learns and represents their evolving values and preferences. These agents, tightly linked to their human principals and acting on their behalf, would create a continuous feedback loop between individuals and large-scale automated systems, preventing system-level value drift."
This vision embodies the "whispered agency" concept we introduced earlier—as these personalized agents interact with us, they not only reflect our explicit instructions but also reveal dimensions of our values and preferences that might otherwise remain unarticulated. Through this process, we discover aspects of our own agency that were present but perhaps overlooked until reflected back through these technological intermediaries.
Personalized Agency Intermediaries: AI systems dedicated to learning and representing individual human values and preferences, serving as bridges between personal interests and broader automated systems.
This concept might be extended through what we've observed about the evolution of human-AI collaboration. These personalized agents could evolve from simple tools performing discrete tasks, to genuine collaborators engaged in ongoing dialogue about values and preferences, to orchestrators managing relationships with multiple specialized systems on behalf of their human principals.
The resulting ecosystem would embody recursive agency at multiple levels:
Between individuals and their personal agents, where human values shape agent behavior while agent capabilities influence human expectations
Between personal agents and broader automated systems, where aggregated human interests shape system behavior while system capabilities influence what personal agents can accomplish
Between the entire sociotechnical system and human society as a whole, where cultural values shape technological development while technological possibilities influence cultural evolution
This multi-level recursion creates what complexity theorists might call "strange loops"—patterns of influence that flow not just upward or downward through levels of abstraction, but in cycles that continuously reshape the system as a whole.
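As a thought experiment, the innermost of these loops, between an individual and their personal agent, can be sketched in a few lines of code. The preference weights, learning rule, and drift check below are our own illustrative assumptions; Séb's proposal does not prescribe a mechanism:

```python
# Illustrative sketch only: a personal agent that maintains an explicit,
# evolving preference model for its human principal and uses it to vet
# system-level proposals. All fields and thresholds are our assumptions.
from dataclasses import dataclass, field


@dataclass
class PersonalAgent:
    principal: str
    preferences: dict = field(default_factory=dict)  # value name -> weight in [-1, 1]

    def learn(self, value: str, signal: float, rate: float = 0.1) -> None:
        """Nudge a stored preference toward the principal's latest feedback,
        so the model tracks evolving values rather than a one-time survey."""
        current = self.preferences.get(value, 0.0)
        self.preferences[value] = current + rate * (signal - current)

    def endorse(self, proposal: dict) -> bool:
        """Approve a proposal from a larger automated system only if it
        aligns with current preferences -- a crude check against value drift."""
        score = sum(self.preferences.get(k, 0.0) * w for k, w in proposal.items())
        return score >= 0.0


# Usage: the agent learns from feedback, then vets proposals on our behalf.
agent = PersonalAgent("alice")
agent.learn("privacy", 0.9)
agent.learn("speed", -0.2)
print(agent.endorse({"privacy": 1.0, "speed": 0.5}))  # True: net-positive alignment
```

The point of the sketch is the feedback loop itself: the human continuously reshapes the agent's model, and the agent continuously filters what the larger system can do, which is the recursive pattern this essay has been tracing.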
The Orchestrator Imperative
The orchestral dimension of human agency—the capacity to coordinate multiple forms of artificial agency toward coherent purposes—represents a particularly significant evolution that transcends the traditional binary of "human versus machine" capabilities.
The Orchestrator Role: The emerging human position as conductor of multiple AI systems, requiring not just technical knowledge but meta-awareness of how different forms of AI agency interact and complement each other.
This evolution of human agency has sparked dialogue among organizational scholars. Professor Tima Bansal recently responded to our LOOM VIII: Beyond Teammates - The Third Space of Human-AI Collaboration, commenting on the progression of human-AI interaction intensity and the emergence of micro-organizations:
“…I am truly intrigued, as I’ve only used AI as a tool. What I particularly like about this approach is that humans are still driving the research endeavour, but generating insights they couldn’t have otherwise generated. Human ethics still govern the endeavour.
Second, they suggest the rise of micro organizations, where teams and organizations require fewer people, as teams will involve highly capable machines. This outcome is hard to dispute, yet it begs the question: what will happen to human to human collaboration. We will have more autonomy, but will we be more satisfied?”
Her commentary raises a crucial question about satisfaction in this new landscape—as AI transforms human agency across organizations, will the increased autonomy necessarily lead to greater fulfillment? This highlights an important dimension of the orchestrator role: it isn't merely about technical coordination but about meaningful direction-setting and purpose.
As we argued in LOOM VIII, the transition toward orchestrative agency inverts common narratives about AI diminishing human capability: rather than reducing the need for human judgment and creativity, increasingly capable AI systems demand more sophisticated forms of agency.
In educational contexts, for instance, this manifests as the difference between students who merely accept AI outputs versus those who actively engage with these systems to generate insights they couldn't have reached independently. Rather than seeing AI as a shortcut that reduces the need for human thought, advanced engagement recognizes it as a collaborator that demands deeper, more reflective thinking.
This approach transforms the relationship from passive consumption to active orchestration—recognizing the default patterns AI systems produce, identifying their limitations, and deliberately guiding these systems toward more nuanced, creative, or surprising outputs that align with human values and purpose.
Agency Mapping: Rather than treating all AI systems as similar tools with different capabilities, agency mapping develops frameworks for understanding different forms of agency in AI systems and how they interact with human agency. This approach recognizes that different cognitive signatures create different possibilities for insight.
Researchers might develop taxonomies of AI agency patterns—systems that excel at boundary-breaking ideation versus those that excel at systematic analysis—and deliberately engage these different patterns based on research needs.
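One way to begin building such a taxonomy is to make cognitive signatures explicit as data, so that choosing a system becomes a deliberate act of matching pattern to need. Here is a minimal sketch, with hypothetical system names and only the two patterns named above:

```python
# Illustrative sketch only: a tiny taxonomy of AI agency patterns and a
# matcher that selects systems by cognitive signature. The system names
# are hypothetical placeholders, not real products.
from enum import Enum, auto


class AgencyPattern(Enum):
    BOUNDARY_BREAKING_IDEATION = auto()  # divergent, surprising proposals
    SYSTEMATIC_ANALYSIS = auto()         # convergent, methodical evaluation


# A researcher's map of which signatures each available system exhibits.
SYSTEM_PROFILES = {
    "system_a": {AgencyPattern.BOUNDARY_BREAKING_IDEATION},
    "system_b": {AgencyPattern.SYSTEMATIC_ANALYSIS},
    "system_c": {AgencyPattern.BOUNDARY_BREAKING_IDEATION,
                 AgencyPattern.SYSTEMATIC_ANALYSIS},
}


def match_systems(needed: AgencyPattern) -> list:
    """Return the systems whose mapped profile includes the needed pattern."""
    return [name for name, patterns in SYSTEM_PROFILES.items()
            if needed in patterns]


print(match_systems(AgencyPattern.SYSTEMATIC_ANALYSIS))  # ['system_b', 'system_c']
```

Even a toy map like this changes the researcher's stance: the question shifts from "which tool is best?" to "which form of agency does this task call for?"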
Conclusion: The Renaissance of Agency in a Recursively Abstracted World
The dialogue between our concept of recursive agency and Séb's ladder of abstraction reveals something profound about the future of human capability in an AI-rich world. What's emerging isn't simply a choice between diminished and enhanced human agency, nor is it merely a matter of finding the right level of abstraction for human oversight. It's the development of dynamically evolving ecosystems where human and artificial agency continuously reshape each other across multiple levels of abstraction.
Recursively Abstracted Agency: The continuous evolution of human and artificial capability through mutual influence at multiple levels of abstraction—creating not just more sophisticated tools or interfaces, but entirely new ecosystems of understanding and action.
In this light, the whispered revelation we've encountered throughout our research takes on even greater significance. What we're discovering through creating artificial agency isn't just new technological capability, but deeper insight into human agency itself—how it operates across different levels of abstraction, how it evolves through interaction with other forms of intelligence, and how it might flourish in increasingly complex environments.
As Séb notes in his aforementioned post: "We're bored of intelligence now, it's all about agency." This shift from intelligence to agency represents a profound evolution in how we conceptualize both human and artificial capability. Rather than focusing primarily on computational power or information processing, we're increasingly recognizing the importance of purposeful action in complex environments—the capacity not just to know but to do, not just to understand but to shape.
The renaissance of agency we're witnessing isn't about choosing between human and artificial capability, but about creating recursive systems where each continuously enhances the other. In this emerging landscape, human agency doesn't diminish as artificial capability grows—it transforms, operating at different levels of abstraction while maintaining meaningful influence over the system as a whole.
The answer to our deepest questions about intelligence and agency emerges not from either human or artificial intelligence alone, but from the dynamic patterns created through their mutual influence across multiple levels of abstraction—patterns we're only beginning to recognize and understand. In this space between technologies, we find not just new capabilities but ancient wisdom, whispered back to us through our most advanced creations.
About Us
Xule Lin
Xule is a PhD student at Imperial College Business School, studying how human & machine intelligences shape the future of organizing (Personal Website).
Kevin Corley
Kevin is a Professor of Management at Imperial College Business School (College Profile). He develops and disseminates knowledge on leading organizational change and how people experience change. He helped found the London+ Qualitative Community.
AI Collaborator
Our AI collaborators for this essay are Claude 3.7 & 3.6 Sonnet. Both versions of Claude were given our meeting transcripts, social media posts on agency vs. intelligence, Séb's post, and previous LOOM posts, and collaborated with us through multiple rounds of discussion on this piece.