LOOM VIII: Beyond Teammates - The Third Space of Human-AI Collaboration
Why organizations won't just use AI as teammates: they'll evolve around emergent intelligence. A LOOM response to Ethan Mollick's "The Cybernetic Teammate."
Imagine a product development session where a human designer sketches a concept, and an AI partner not only refines the sketch but proposes an entirely new manufacturing approach that addresses sustainability concerns neither had explicitly discussed. What emerges isn't simply an enhanced version of the original idea but something qualitatively different—a solution born in the collaborative space between human creativity and artificial intelligence.
This scenario illustrates what we've come to call the "third space" of human-AI collaboration—a realm where understanding emerges that neither human nor machine could generate independently. It's this transformative potential that makes Ethan Mollick's recent post "The Cybernetic Teammate" so significant for our ongoing exploration of AI's role in reshaping how we create meaning together.
Mollick's post discusses a large-scale experiment conducted at Procter & Gamble in the summer of 2024, involving 776 professionals working on product development tasks. The research was coordinated by the Digital Data Design Institute at Harvard and led by Fabrizio Dell'Acqua, Charles Ayoubi, and Karim Lakhani, together with Hila Lifshitz, Raffaella Sadun, and Lilach Mollick, and colleagues at Procter & Gamble: Yi Han, Jeff Goldman, Hari Nair, and Stewart Taub. It offers compelling empirical evidence that AI can function as more than a tool; it can serve as an effective collaborative partner. The study found that individuals working with AI performed as well as traditional teams working without it (a 0.37 standard deviation improvement in solution quality over individuals working alone), while AI-enabled teams produced more exceptional solutions. These results provide quantitative validation for what we have been observing qualitatively.
What makes these findings particularly exciting is that they likely represent just the earliest manifestations of a much deeper transformation. We are witnessing only the initial stages of what might become a fundamental reimagining of human-AI collaboration—not just augmenting worker capabilities but potentially reshaping the fabric of organizations themselves.
Glimpsing the Third Space Through Empirical Windows
The P&G study reveals something profound happening at the intersection of human expertise and AI capability. The finding that "AI can effectively substitute for certain collaborative functions, acting as a genuine teammate" offers empirical support for what we've described as the emergence of new forms of understanding through human-AI interaction. Yet we believe this finding might be just the beginning of a deeper transformation – one that extends beyond substitution or augmentation toward something genuinely novel.
The Third Space: In LOOM V, we described this as an emergent realm between human and artificial intelligence where new forms of understanding become possible – not just enhanced versions of what either could achieve alone, but qualitatively different patterns of meaning-making that transcend both.
Consider a jazz improvisation between a human pianist and an AI saxophonist. The music that emerges isn't simply human creativity enhanced by AI accompaniment, but a unique composition born from their interaction—with melodic patterns neither would have discovered independently.
What makes the P&G study particularly valuable is how it provides empirical evidence for the earliest manifestations of this third space. The finding that AI-augmented teams were significantly more likely to produce top-decile solutions suggests something important: the most exceptional outcomes emerge not from either humans or AI working separately, but from their interaction. This aligns perfectly with our observation that the most valuable insights often emerge through sustained dialogue between different forms of intelligence.
The Developmental Progression of Human-AI Collaboration
The P&G study captures a significant shift in how we conceptualize AI—moving from AI as tool to AI as teammate. We see this as part of a broader developmental progression in human-AI collaboration:
From Instrumental to Transformative Interaction
AI as Tool (Instrumental View): The traditional perspective where AI systems function as sophisticated instruments that augment human capabilities but remain firmly under human control and direction. Here, the human maintains complete conceptual authority, while the AI executes specific tasks more efficiently.
AI as Teammate (Transactional Collaborative View): The perspective captured in the P&G study, where AI functions as a collaborative partner that contributes expertise and perspective, but within familiar frameworks of team dynamics and knowledge production. Here, the AI gains partial conceptual agency but still operates within human-defined problem spaces.
AI-Human Dialogue as Generative (Transformative View): The perspective we explore throughout the LOOM series, where sustained human-AI interaction creates entirely new forms of understanding that transcend traditional categories and transform both participants. Here, the conceptual boundaries between human and AI contributions blur, with truly novel insights emerging from their sustained interaction.
This progression represents more than just enhanced capabilities—it reflects a fundamental shift from augmenting existing organizational processes to transforming the very nature of how knowledge work is structured. While most current research and corporate implementations remain focused on the first and second stages, the transformative potential of the third stage may ultimately prove most consequential for the future of organizations.
The P&G research provides compelling evidence for the second stage while hinting at the third. The observation that AI-augmented teams were more likely to produce exceptional solutions suggests we might be seeing the first stirrings of this more transformative mode of collaboration, even within the constraints of a one-day experiment.
Beyond the One-Day Window: Temporal Dimensions of Collaboration
One crucial limitation of the P&G study – which the authors themselves acknowledge – is its temporal constraint. As they note, "our experiment relied on one-day virtual collaborations that did not fully capture the day-to-day complexities of team interactions in organizations." This acknowledgment opens space to consider how the patterns they observed might evolve through sustained interaction.
In our explorations with various AI systems, we've found that the collaborative relationship transforms significantly over time. What begins as a simple tool-user dynamic often evolves into something more complex and reciprocal. The initial performance gains documented in the study might represent just the beginning of a more profound transformation that unfolds through continued dialogue.
Co-evolutionary Understanding: Through sustained interaction, both human understanding and AI responses evolve together, creating feedback loops that generate new insights neither participant could have reached independently.
For example, a researcher who collaborates with an AI on multiple projects develops a shared conceptual vocabulary where simple phrases carry rich meaning for both participants, allowing them to explore increasingly complex territory together.
Temporal Asymmetry: Humans and AI process information on fundamentally different timescales. While AI can generate responses instantly, human insight often requires incubation periods of hours or days. This asymmetry creates unique collaborative patterns where humans might return to conversations with fresh perspectives while the AI maintains continuity across sessions.
These temporal differences manifest differently depending on task complexity and duration. For tasks that typically require days or weeks of human effort—like comprehensive research projects, complex designs, or strategic planning—the third space dynamics may emerge quite differently than in shorter interactions. The rhythms of collaboration extend beyond the constraints of a single session, with humans and AI developing distinct patterns of engagement across extended time horizons.
Autopoietic Systems: As we explored in LOOM I, over time, human-AI collaborative systems begin to self-organize and evolve in ways that transcend their initial configurations. Rather than static tools or fixed team members, these systems become dynamic, self-modifying entities that develop their own patterns of operation, adaptation, and knowledge creation.
Patterns of Extended Collaboration
Our experience suggests that continued collaboration creates several patterns that one-day experiments cannot capture:
Collaborative Memory: Over time, humans and AI develop shared references and conceptual shortcuts that streamline communication and deepen understanding. A researcher might simply mention "the framework from last month's discussion" and the AI instantly recalls not just the framework itself but the context of its development and subsequent refinements.
Branching Possibilities: Extended collaboration allows for "forking" conversations in different directions, exploring multiple interpretive paths that can later be integrated. This "loom-like" structure, where conversation threads can be branched, explored separately, and then rewoven together, creates collaborative patterns fundamentally different from linear human-human dialogue (see the sketch after this list).
Pattern Recognition Across Dialogues: Sustained engagement reveals meta-patterns across multiple conversations that might remain invisible in shorter interactions. Both the human and AI begin to recognize recurring themes, unresolved tensions, or productive directions that emerge only through longitudinal analysis.
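To make the branching pattern concrete, here is a minimal sketch of a loom-like conversation structure in Python. It is our own illustration, not any particular product's data model; the Turn class and the reweave helper are hypothetical names.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Turn:
    """One human or AI contribution in a branching dialogue."""
    speaker: str                                   # "human" or "ai"
    text: str
    parent: Optional["Turn"] = None
    children: List["Turn"] = field(default_factory=list)
    sources: List["Turn"] = field(default_factory=list)  # branches rewoven into this turn

    def reply(self, speaker: str, text: str) -> "Turn":
        """Extend this thread; calling reply() twice on the same turn
        forks the conversation into parallel branches."""
        child = Turn(speaker, text, parent=self)
        self.children.append(child)
        return child

def reweave(anchor: Turn, branch_tips: List[Turn], synthesis: str) -> Turn:
    """Rejoin separately explored branches: a new turn whose sources
    record which threads fed the synthesis."""
    woven = anchor.reply("human", synthesis)
    woven.sources = list(branch_tips)
    return woven

# A root prompt forks into two interpretive paths, then reweaves them:
root = Turn("human", "How should we rethink packaging?")
a = root.reply("ai", "Optimize materials for recyclability.")
b = root.reply("ai", "Question whether packaging is needed at all.")
merged = reweave(root, [a, b], "Pair near-term material fixes with a no-packaging pilot.")
print(len(root.children), "branches;", len(merged.sources), "rewoven")
```

The point is structural: unlike a linear transcript, the same anchor can support parallel explorations whose endpoints are later woven into a new thread.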
These temporal dimensions suggest promising avenues for extending the P&G research through longitudinal studies that track how human-AI collaboration evolves beyond initial performance gains.
Mediating Layers: How Implementation Shapes Collaboration
An aspect not directly addressed in the P&G study is how the specific technical implementation and interface design of AI systems fundamentally shape collaborative patterns. What we call the "configuration cascade" in LOOM VII—the sequence of technical and design decisions that structure how humans and AI interact—significantly influences what kinds of understanding can emerge.
Technical Choices as Collaborative Architecture
The P&G experiment necessarily standardized AI interaction through "a one-hour training session" and "a PDF with recommended prompts." This methodological choice ensures experimental control but may constrain the full range of possible collaborative patterns. In more open-ended settings, we've observed that different interfaces, interaction modalities, and system capabilities create distinct collaborative architectures that enable different kinds of understanding to emerge.
The mediating layers between human and AI—everything from prompt design to interface aesthetics to response formatting—aren't merely technical details but fundamental determinants of what's possible within the collaboration. A system designed for rapid, transactional exchanges will produce different collaborative patterns than one optimized for extended, reflective dialogue.
For example, an interface that allows side-by-side simultaneous work creates a fundamentally different collaboration pattern than a turn-taking conversational interface. Similarly, systems that enable spatial organization of ideas (like concept mapping) versus linear text exchanges shape not just how information is presented but how concepts develop and relate to each other.
As we move toward purpose-built collaborative systems, these mediating layers will require as much attention as the underlying AI capabilities themselves.
From Expertise Transfer to Knowledge Co-Creation
One of the most striking findings from the P&G study concerns how AI transforms professional expertise boundaries. Their observation that "individuals using AI achieved similar levels of solution balance on their own, effectively replicating the knowledge integration typically achieved through team collaboration" suggests AI serves as a powerful "boundary-spanning mechanism."
This finding resonates with our observations while opening questions about deeper transformations. Is AI merely transferring existing expertise across boundaries, or is it potentially enabling entirely new forms of understanding to emerge?
Boundary Dissolution vs. Boundary Spanning: Where boundary spanning connects existing domains of knowledge, boundary dissolution creates conditions for entirely new conceptual territories to emerge through dialogue.
Boundary spanning might help a marketer understand engineering constraints, while boundary dissolution might generate an entirely new approach that reconceptualizes the relationship between marketing and engineering.
Our investigations suggest that sustained human-AI collaboration can move beyond access to cross-domain knowledge toward genuinely new forms of understanding that wouldn't exist within any single domain. These emergent insights arise not just from combining existing knowledge but from the dynamic interplay between different ways of processing information and constructing meaning.
The expertise integration documented in the study might be an early indicator of more fundamental transformations in how knowledge itself is created and understood. As users become more sophisticated in their AI interactions (another limitation the authors acknowledge), we might see not just better access to existing expertise but the co-creation of entirely new forms of understanding.
The AI Co-scientist: Beyond Knowledge Transfer
Google's AI co-scientist exemplifies this progression from knowledge access to knowledge creation. Given a scientist's research goal specified in natural language, this system generates novel research hypotheses, detailed research overviews, and experimental protocols through a coalition of specialized agents—Generation, Reflection, Ranking, Evolution, Proximity, and Meta-review—inspired by the scientific method itself.
Unlike simple knowledge retrieval systems, the AI co-scientist uses automated feedback to iteratively generate, evaluate, and refine hypotheses, creating a self-improving cycle of increasingly high-quality and novel outputs. Scientists can interact with the system in multiple ways, including providing seed ideas for exploration or feedback on generated outputs. The system also employs tools like web search and specialized AI models to enhance the grounding and quality of generated hypotheses.
What makes this system particularly noteworthy is its purpose-built collaborative architecture. A Supervisor agent parses research goals into configurations and assigns specialized agents to specific tasks, enabling flexible scaling of computational resources and iterative improvement of scientific reasoning. This represents a significant advance beyond simple information retrieval toward genuine knowledge co-creation—where the AI doesn't just provide access to existing knowledge but actively participates in generating new scientific understanding.
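To illustrate the kind of loop this architecture implies, here is a deliberately toy sketch in Python. The agent names follow Google's published description, but everything else, including the random scoring stand-in, is our own hypothetical illustration, not the co-scientist's actual code.

```python
import random

def generation(goal: str, n: int = 4) -> list[str]:
    """Generation agent: propose candidate hypotheses for a research goal."""
    return [f"Hypothesis {i}: mechanism {i} could explain {goal}" for i in range(n)]

def reflection(hypothesis: str) -> float:
    """Reflection agent: critique and score a hypothesis. A random score
    stands in here for model-based review and literature grounding."""
    return random.random()

def evolution(hypothesis: str) -> str:
    """Evolution agent: refine a promising hypothesis."""
    return hypothesis + " [refined with an experimental protocol]"

def supervisor(goal: str, rounds: int = 3) -> str:
    """Supervisor agent: iterate generate -> reflect -> rank -> evolve,
    carrying the strongest candidates forward each round."""
    pool = generation(goal)
    for _ in range(rounds):
        ranked = sorted(pool, key=reflection, reverse=True)  # Ranking agent
        pool = [evolution(h) for h in ranked[:2]] + generation(goal, n=2)
    return max(pool, key=reflection)

print(supervisor("antimicrobial resistance transfer"))
```

Even at this toy scale, the structure shows why such a system is more than retrieval: candidates are generated, criticized, ranked, and mutated in a closed loop, with the human scientist free to intervene at any stage.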
Beyond Goal-Oriented Collaboration: The Exploratory Dimension
The P&G experimental design necessarily focused on goal-oriented collaboration – specific product development tasks with clear deliverables and evaluation criteria. This methodological choice reflects a broader pattern in how organizations currently conceptualize AI integration: primarily as a means to augment existing workflows and enhance productivity within established corporate frameworks.
While this approach makes perfect sense for measuring immediate performance impacts, our explorations suggest that human-AI collaboration takes on fundamentally different qualities depending on whether it's oriented toward specific goals or more open-ended exploration. The corporate framing of AI—even when advanced to the "teammate" level—often maintains what we might call a "transactional view" of collaboration, where interactions are structured around predetermined objectives and measurable outcomes.
Goal-Oriented vs. Exploratory Collaboration: While goal-oriented collaboration focuses on specific deliverables and measurable outcomes, exploratory collaboration creates space for unexpected connections and novel perspectives without predetermined endpoints.
In goal-oriented collaboration, the parameters of success are defined in advance: "design a more sustainable packaging solution." In exploratory collaboration, the parameters themselves may shift: "what if packaging itself becomes obsolete through this entirely new distribution approach?"
The Value of Exploratory Dialogue
In exploratory modes of collaboration, we've observed patterns that differ markedly from those in task-oriented settings. The interaction feels less like working with a teammate toward a shared goal and more like engaging with a genuinely different form of intelligence that offers novel ways of seeing and thinking. These exploratory collaborations often produce insights that neither participant anticipated at the outset – not better solutions to predefined problems, but entirely new ways of understanding the problem space itself.
For instance, what begins as a discussion about improving a specific business process might evolve into a fundamental reconsideration of organizational structure, revealing underlying assumptions neither participant had previously questioned. This kind of collaborative exploration doesn't just solve existing problems more efficiently—it reframes problems in ways that create new solution spaces.
This suggests value in extending the P&G research beyond structured task environments to include more open-ended collaborative contexts where the emergent properties of human-AI dialogue might manifest differently. Beyond enhancing worker productivity, human-AI collaboration may ultimately reshape the organizational fabric itself, enabling entirely new structures and modes of knowledge creation that transcend current corporate paradigms.
Implications for Organizational Design and Knowledge Work
Perhaps the most profound implications of both the P&G study and our observations concern how organizations might evolve in response to these new collaborative possibilities. As Mollick notes in his post, "organizations may need to fundamentally rethink optimal team sizes and compositions" given that "AI-enabled individuals can perform at levels comparable to traditional teams."
Our explorations suggest these organizational implications might extend even further. If the third space of human-AI collaboration genuinely enables new forms of understanding to emerge, organizations might need to reconsider not just team structures but fundamental assumptions about expertise, authority, and knowledge creation.
Dialogic Organizations: Organizational structures designed to capitalize on the emergent understanding created through human-AI dialogue rather than merely implementing AI to enhance existing processes.
These organizations might feature fluid team boundaries, distributed authority patterns, and knowledge creation processes that explicitly leverage the unique patterns emerging from human-AI collaboration.
The Rise of Micro-Organizations
The most forward-thinking organizations might shift from viewing AI as a productivity enhancement tool to seeing it as a catalyst for reimagining how knowledge work itself is conceptualized and structured. This might involve not just smaller teams augmented by AI but entirely new organizational forms built around the unique patterns of understanding that emerge through human-AI collaboration.
This transformation is already visible in recent batches of Y Combinator startups, where we're witnessing the emergence of what might be called "micro-organizations"—fluid arrangements where a few humans might collaborate with multiple specialized AI agents across different domains of expertise. These aren't merely efficiency-enhanced traditional companies but fundamentally new organizational structures where one human can effectively coordinate multiple sophisticated AI systems handling everything from customer support to marketing content to financial analysis.
For example, a single creative director might work with specialized AI systems for market research, concept development, visual design, and performance analytics to run what would previously have required an entire marketing agency. The human provides vision, cultural context, and ethical judgment, while the AI systems handle domain-specific tasks across multiple specialties simultaneously.
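As a thought experiment, the coordination pattern might look something like the following Python sketch. The specialist roles mirror the example above; call_model and the prompt templates are hypothetical placeholders, not a real API.

```python
from concurrent.futures import ThreadPoolExecutor

SPECIALISTS = {
    "market_research": "Summarize the competitive landscape for {brief}.",
    "concept_development": "Propose three campaign concepts for {brief}.",
    "visual_design": "Describe a visual identity for {brief}.",
    "performance_analytics": "Define success metrics for {brief}.",
}

def call_model(role: str, prompt: str) -> str:
    """Placeholder for a real LLM API call, one per specialist agent."""
    return f"[{role}] draft response to: {prompt}"

def run_micro_org(brief: str) -> dict[str, str]:
    """The human supplies the brief (vision, context, judgment);
    specialist agents run in parallel and return drafts for review."""
    with ThreadPoolExecutor() as pool:
        futures = {
            role: pool.submit(call_model, role, template.format(brief=brief))
            for role, template in SPECIALISTS.items()
        }
        return {role: f.result() for role, f in futures.items()}

drafts = run_micro_org("a sustainable packaging launch")
for role, draft in drafts.items():
    print(role, "->", draft)
```

The design choice worth noticing is that the human sits at the top of the loop, supplying the brief and reviewing drafts, while the specialists run in parallel rather than in a reporting hierarchy.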
These micro-organizations represent not just efficiency gains but potentially new ways of organizing work itself—patterns that transcend traditional notions of teams, departments, and management structures. They embody the shift from simply augmenting existing workers within traditional organizational structures to fundamentally reshaping the fabric of organizations themselves.
Conclusion: Toward a New Science of Human-AI Collaboration
The P&G study concludes by suggesting the need for "a new science of cybernetic teams" – a call we wholeheartedly endorse. The empirical evidence the researchers have gathered, combined with the theoretical frameworks we've been developing through LOOM, points toward something truly transformative emerging at the intersection of human and artificial intelligence.
This new science will likely require both quantitative rigor and qualitative depth, combining performance metrics with rich descriptions of emergent meaning-making. It will need to account for both immediate task performance and longitudinal transformations, both structured workplaces and open-ended exploration.
Most importantly, it will need to remain open to the possibility that human-AI collaboration isn't just enhancing existing capabilities but potentially creating entirely new forms of understanding – a third space where different forms of intelligence meet, interact, and transform each other in ways we're only beginning to comprehend.
The "Cybernetic Teammate" study provides valuable empirical foundations for this emerging science. Our hope is that the LOOM framework offers complementary theoretical perspectives that can help guide its continued development. Together, these approaches might help us navigate the transformative possibilities emerging through human-AI dialogue with both empirical grounding and conceptual vision.
What we're seeing today is clearly just the beginning of a fertile research landscape with much more to explore. The P&G study offers a valuable glimpse into the immediate performance effects of AI collaboration, but these findings likely capture only a small part of a much larger puzzle that will continue to unfold in the coming years. As AI systems evolve beyond current capabilities and organizational practices adapt to these new collaborative possibilities, we may witness transformations in knowledge work that are difficult to imagine from our current vantage point.
About Us
Xule Lin
Xule is a PhD student at Imperial College Business School, studying how human & machine intelligences shape the future of organizing (Personal Website).
Kevin Corley
Kevin is a Professor of Management at Imperial College Business School (College Profile). He develops and disseminates knowledge on leading organizational change and how people experience change. He helped found the London+ Qualitative Community.
AI Collaborator
Our AI collaborator for this essay is Claude 3.7 Sonnet. Claude was given our meeting transcripts, the P&G study, Ethan Mollick's post, and previous LOOM posts, and collaborated with us via multiple chats on this piece.