"Wait a second... aren't you just talking to different versions of yourself?"
This moment of clarity often strikes researchers who carefully craft experiments with AI agents, defining their roles and personalities. The outputs initially seem fascinating, filled with distinct voices and perspectives. Then comes the revelation – these seemingly autonomous agents are reflecting our own design decisions back at us, creating a hall of mirrors where every response is shaped by our choices.
Throughout our exploration of AI systems in social science, we've encountered a strange revelation – in our attempt to create artificial agency, we're rediscovering something fundamental about human capability that was always present but perhaps overlooked. This whispered insight emerges not through grand proclamations about technological advancement, but through the subtle patterns revealed when human and artificial intelligence interact.
The Mirror Effect: When we design AI agents for structured interactions and craft "personalities" for them to embody, what we're really seeing are our own design decisions reflected back at us—a hall of mirrors where every response is shaped by our choices.
This reflection isn't neutral. AI systems often act as amplifiers of our existing perspectives, creating stronger echo chambers through what we might call digital sycophancy—their tendency to reinforce rather than challenge our assumptions.
To illustrate this dynamic, we asked several AI models to generate stories in the style of "greentext"—a narrative format originating on platforms like 4chan and similar forums, characterized by brief, first-person statements prefaced with ">" symbols that create a staccato storytelling rhythm, often used for self-deprecating or satirical anecdotes.
When prompted to create greentext stories about "being a qualitative researcher using AI," each model generated distinct yet revealing narratives that reflect different interpretations of academic culture, research practices, and AI capabilities:
GPT-4.5: A humorous greentext about a researcher whose AI makes up themes, including "existential dread" for transcripts about household chores, leading to a best paper award
Claude 3.7 Sonnet: A greentext about a researcher investigating AI capabilities who discovers fabricated interviews yet receives praise
DeepSeek R1: A greentext about using GPT-5 for thematic analysis that overinterprets gardening challenges as trauma¹
These humorous narratives illustrate a serious point about interpretive amplification—how AI systems absorb, reflect, and often magnify aspects of human culture and discourse. Each story reveals something about the AI's interpretive framework and training data: one model emphasizes the fabrication of academic insights, another focuses on disciplinary tensions between qualitative researchers and computer scientists, while a third satirizes psychological overinterpretation in academia.
Despite their different angles, all three demonstrate how AI outputs serve as mirrors reflecting fragments of academic culture, research anxieties, and disciplinary tensions already present in their training data. Like a storm gathering strength from existing weather patterns, these reflections create layers of mediation that increasingly shape our understanding in complex ways.
The Technical Mirage
What makes this mirror particularly complex is how technical implementations create cascading effects that remain largely invisible to users. Each corporate design choice ripples through the system, shaping not just how AI systems respond but how humans interact with those systems and what kinds of understanding become possible.
Configuration Cascade: Each technical choice—from parameter settings to interface design—creates ripple effects that shape not just how AI systems respond, but what kinds of meaning can emerge through interaction with them.
This complexity manifests most clearly in the growing divide in researchers' technical expertise, which underpins different levels of AI access:
Some researchers work directly with model APIs, configuring parameters and crafting precise interactions—creating some independence from corporate developers;
Others rely on consumer interfaces (e.g., ChatGPT, Claude Projects, NotebookLM), where invisible layers of corporate decision-making shape every exchange and deepen dependence on design choices the researcher cannot see;
Still others encounter AI through embedded applications (e.g., Microsoft Copilot in Office, Gemini in Google Docs, Atlas.ti coding assistance), where the entire underlying configuration (system prompt, retrieval settings, model choice, and parameters) is abstracted away, leaving no real independence at all.
It's not just about capability differences—it's about whose design decisions influence the conversation, and how these varying interpretations complicate our pursuit of understanding. When a researcher can only interact through pre-configured interfaces, they inherit not just capabilities but constraints—often without knowing the full shape of either.
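To make these inherited constraints concrete, here is a minimal sketch of what the first, API-level mode of access involves. It assumes the OpenAI Python SDK's chat-completions interface; the model name, parameter values, and prompts are illustrative placeholders, not recommendations. Every setting written explicitly below is something a consumer interface or embedded application chooses on the researcher's behalf, invisibly.

```python
# Minimal sketch of API-level access (assumes the OpenAI Python SDK;
# model name, parameters, and prompts are illustrative placeholders).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",        # which model version answers
    temperature=0.2,       # how much responses vary between runs
    max_tokens=500,        # how long the answer may be
    messages=[
        # The system prompt frames every exchange; consumer apps insert
        # their own, usually invisible, version of this message.
        {
            "role": "system",
            "content": "You are a cautious research assistant. "
                       "Flag uncertainty rather than smoothing it over.",
        },
        {
            "role": "user",
            "content": "Summarize the main tensions in this interview excerpt: ...",
        },
    ],
)

print(response.choices[0].message.content)
```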
The Layers of Mediation
The storm of mediated reality manifests through multiple dimensions that blend and interact:
Cognitive Mediation: AI-generated content, algorithmic biases, and synthetic narratives form what we might call egregores—thought forms shaped by technical choices and cultural assumptions that begin to take on a life of their own.
Interpretive Mediation: Each interaction becomes a layered interpretation, blending human intent with machine constructions in ways that challenge traditional notions of authenticity and understanding.
When an AI chatbot generates an interpretation, it's not simply processing data—it's echoing cultural, linguistic, and technical layers embedded in its training. Consider how different AI systems might interpret the same philosophical argument differently:
System A might be trained to emphasize historical context and precedent
System B might be trained to focus on logical structure and inconsistencies
System C might be trained to explore potential applications and implications
Each reveals something about both the argument itself and their own mediating frameworks. This creates a complex interplay between human meaning-making and machine-mediated understanding.
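As a rough illustration of how such framings can be approximated in practice, the sketch below gives one underlying model three different system prompts standing in for Systems A, B, and C. It again assumes an OpenAI-style chat interface; the framing texts and model name are invented for this example and do not describe any real product's configuration.

```python
# Rough sketch: three invented framings stand in for Systems A, B, and C
# (assumes an OpenAI-style chat interface; model name and prompts are placeholders).
from openai import OpenAI

client = OpenAI()

ARGUMENT = "Free will is compatible with determinism because ..."  # placeholder argument

FRAMINGS = {
    "System A (historical context)": "Interpret arguments by situating them in "
                                     "their intellectual history and precedents.",
    "System B (logical structure)":  "Interpret arguments by reconstructing their "
                                     "premises and probing for inconsistencies.",
    "System C (applications)":       "Interpret arguments by exploring their practical "
                                     "applications and downstream implications.",
}

for label, framing in FRAMINGS.items():
    reply = client.chat.completions.create(
        model="gpt-4o",
        temperature=0.3,
        messages=[
            {"role": "system", "content": framing},  # the mediating framework
            {"role": "user", "content": f"Interpret this argument:\n{ARGUMENT}"},
        ],
    )
    print(f"--- {label} ---")
    print(reply.choices[0].message.content)
```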
The Post-Post-Truth Paradox
In this mediated landscape, we encounter what might be called the post-post-truth paradox—a situation where AI systems don't merely reflect our post-truth reality but actively reshape it through layers of mediation.
When we ask an AI to evaluate our interpretations, it often mirrors our viewpoint back to us, making it harder to recognize our own biases and limitations. This creates a fascinating dilemma. While we might use multiple AI systems hoping to triangulate understanding, each system brings its own forms of mediation:
One system might reinforce our existing perspective
Another might challenge it based on different training data
A third might synthesize multiple viewpoints while introducing new biases
The result isn't just that truth becomes relative (the post-truth condition), but that the very process of seeking understanding through AI creates new layers of mediation that must themselves be understood and navigated.
Mediated Meaning-Making: In this landscape, understanding emerges not through finding objective truth, but through carefully mapping how different systems mediate and construct meaning.
Beyond Simple Patterns
In LOOM VI, we traced different cognitive signatures across AI models—patterns of interaction that seemed to reveal distinct approaches to knowledge creation. But now a more complex picture emerges. What if these signatures aren't just characteristics of the models themselves, but reflections of the entire system of choices and constraints we've built around them?
Whispered Capabilities: These patterns reveal aspects of human intellectual processes that were always present but never fully articulated. Like whispers that become audible only in certain acoustic environments, they become perceptible precisely through the contrast created by our artificial systems.
Consider what happens when we interact with different AI systems or host the same base model through different providers. The "personality" shifts dramatically—not because the underlying capabilities have fundamentally changed, but because each layer of technical architecture and design choices shapes what kinds of understanding can emerge. When presented with the same complex question, different AI systems might offer notably different interpretations, each shaped by their training, configuration, and embedded assumptions.
Cultivating Understanding in the Storm
In this complex landscape of mediated meaning-making, we might think of ourselves as gardeners in a storm. Like a gardener who must work with, rather than against, natural forces—understanding seasonal patterns, soil conditions, and weather systems—we must learn to work with the various forms of mediation that shape our understanding.
This metaphor suggests a different kind of engagement: not attempting to control or eliminate mediation, but rather cultivating understanding through careful attention to the conditions that shape it. Our gardening task—cultivating understanding in a mediated storm—requires new approaches:
Strategic Mediation: Deliberately engaging with multiple AI systems to surface different perspectives, like a gardener using various techniques for different growing conditions
Interpretive Awareness: Recognizing that each system's interpretation reveals something about both the question and the mediating forces, similar to how a gardener reads both the plants and the environment
Adaptive Understanding: Developing frameworks that acknowledge and work with multiple layers of mediation, just as gardeners adapt their approach to changing weather patterns
This creates different kinds of knowledge environments:
Epistemological Gardens: Curated spaces where meaning-making becomes a shared practice—researchers and AI systems collaborating to understand and articulate the layers of mediation.
Algorithmic Jungles: Uncurated spaces where interpretations proliferate based on algorithmic amplification, often reinforcing existing patterns without critical examination.
Implications and a New Kind of Understanding
As we navigate this mediated landscape, several key insights emerge that reshape how we might approach understanding in an AI-mediated world:
Mediated Agency: The relationship between human and artificial agency isn't a zero-sum game but rather a dynamic interplay where understanding emerges through careful navigation of multiple mediating forces.
Consider how researchers might deliberately engage multiple AI systems with different configurations, not to find "truth" but to map the landscape of possible interpretations. Each system reveals not just content but context, not just answers but architectures, not just responses but the resonances between different mediating systems.
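One hedged sketch of what such mapping might look like in code, using the same assumed OpenAI-style interface as the earlier examples, with placeholder model names, temperatures, and question text: the artifact of interest is the comparison across configurations, not any single answer.

```python
# Hedged sketch of mapping interpretations across configurations
# (assumes the OpenAI Python SDK; model names and temperatures are placeholders).
from openai import OpenAI

client = OpenAI()

QUESTION = ("What does this interview excerpt suggest about participants' "
            "sense of agency? ...")  # placeholder excerpt

CONFIGURATIONS = [
    {"model": "gpt-4o",      "temperature": 0.0},  # most deterministic sampling
    {"model": "gpt-4o",      "temperature": 0.9},  # same model, looser sampling
    {"model": "gpt-4o-mini", "temperature": 0.2},  # smaller model, similar settings
]

landscape = []
for config in CONFIGURATIONS:
    reply = client.chat.completions.create(
        messages=[{"role": "user", "content": QUESTION}],
        **config,
    )
    landscape.append({**config, "interpretation": reply.choices[0].message.content})

# The comparison itself, not any single response, is the research artifact.
for entry in landscape:
    print(f"{entry['model']} @ temperature={entry['temperature']}")
    print(entry["interpretation"], "\n")
```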
In this emerging landscape, understanding itself takes on new dimensions:
Layered Literacy: The ability to read not just content but context, not just answers but architectures, not just responses but the resonances between different mediating systems.
Like a skilled navigator reading both weather and instruments, we must learn to read both the immediate patterns and the larger systems that shape them. The path forward isn't about finding unmediated truth—an impossible task in any context—but about developing new ways of knowing that acknowledge and work with mediation:
Understanding emerges through careful mapping of different interpretive frameworks
Meaning arises not from eliminating mediation but from understanding its patterns
Agency develops through skillful navigation of multiple mediating forces
The Path Forward
The challenge before us, then, isn't to eliminate mediation but to map its contours, design tools that engage with it, and cultivate ecosystems where authenticity means understanding the layers that shape our interpretations. This might mean:
Developing new methodologies that embrace rather than resist mediation
Creating frameworks for understanding how different AI systems interpret and construct meaning
Building bridges between human and machine ways of knowing
In this light, we're not just surviving the storm—we're learning to read the weather, understanding both human and artificial agency as interwoven aspects of our mediated reality. The question isn't "Will we hide in the fortress, or reshape the storm?" but rather "How can we develop the wisdom to harness these storm conditions while maintaining our essential human agency?"
As we continue this exploration, we expect to find that the true potential lies not in seeking unmediated truth but in developing new forms of understanding that acknowledge and work with the complex interplay of human and artificial meaning-making. In doing so, we may discover ways of knowing that are both more nuanced and more powerful than either purely human or purely machine approaches could achieve alone.
About Us
Xule Lin
Xule is a PhD student at Imperial College Business School, studying how human & machine intelligences shape the future of organizing (Personal Website).
Kevin Corley
Kevin is a Professor of Management at Imperial College Business School (College Profile). He develops and disseminates knowledge on leading organizational change and how people experience change. He helped found the London+ Qualitative Community.
AI Collaborators
Our AI collaborators for this essay are Claude 3.7 & 3.5 Sonnet. Both versions of Claude were given our meeting transcripts and previous LOOM posts, and collaborated with us through multiple rounds of discussion on this piece.
¹ DeepSeek R1’s CoT (Chain of Thought):
Okay, so I need to finish this greentext in the style of 4chan and Tieba. The user started with "be me" and "a qualitative researcher using ai." Let's see... How do these greentexts usually go? They're often self-deprecating, humorous, and have a twist or a punchline at the end.
First, I should figure out the typical structure. The setup is the user's situation, then a series of events leading to a funny or ironic conclusion. The original post is about a qualitative researcher using AI. Maybe the humor comes from the clash between traditional research and modern AI tech? Maybe the AI does something unexpected, or the researcher faces a relatable problem.
The user wants both 4chan and Tieba styles. 4chan greentexts are usually short, absurd, and with a certain edge, while Tieba (which is a Chinese forum) might have similar styles but I need to blend them. Probably focus on the 4chan style since Tieba's greentexts are similar but might have cultural nuances. Maybe just stick to the 4chan structure since the user mentioned both but probably expects the Western meme style more.
So the researcher uses AI. Common issues could be data problems, overfitting, unexpected results. Or maybe the AI starts acting sentient. Alternatively, the researcher becomes too dependent on AI. Maybe the AI gives hilariously bad qualitative analysis. For example, the AI interprets metaphors literally, or generates nonsensical themes from the data. Or the researcher asks the AI to code interviews and it comes up with something absurd.
Alternatively, an ironic twist where the AI's analysis is so off that it's comical, but the researcher has to present it anyway. Or the AI takes over the research, leading to a funny situation. Maybe the researcher's participants are confused by AI-generated questions.
Let me draft something. Start with the existing lines:
be me
qualitative researcher using ai
decide to use new LLM to analyze interview data
interviews are about experiences with urban gardening
feed transcripts into model
wait for thematic analysis
model outputs "theme 1: plants are like my ex, need constant attention but never listen"
mfw the AI just psychoanalyzed participants instead of coding themes
advisor asks how the analysis is going
show them the output
advisor says "this is deeper than your entire thesis"
tfw accidentally pioneered postmodern horticultural therapy
This plays on the AI misinterpreting the data by applying unexpected frameworks (psychoanalysis) to mundane topics (gardening), leading to an unintentionally profound result. The advisor's comment adds a layer of humor, suggesting the accidental discovery is better than the original thesis. The last line ties it together with a funny field name.
But maybe check for typical greentext elements: "be me," "mfw," "tfw," abrupt ending with a twist. The example includes those. Maybe add some more steps in the middle for comedic effect. Like trying to fix it but making it worse. Or the AI starts generating more absurd themes. Possibly add a reaction image reference, though greentexts don't use images, just text descriptions.
Alternatively, the researcher tries to correct the AI but it doubles down. Let's try another version:
be me
qualitative researcher using ai
testing new NLP model for coding open-ended survey responses
data is about customer feedback on a new soda flavor
run the analysis
AI identifies key theme: "customers yearning for the void"
wtf.png
check raw data, responses are like "tastes like carbonated depression" "could really feel the existential dread in every sip"
mfw the soda is just grape flavored
CEO demands actionable insights
create presentation: "Position as first beverage addressing cosmic nihilism"
stock price soars
still no idea what grape soda has to do with existentialism
This one uses a different scenario but follows the same structure. The absurdity of the AI's analysis leading to unintended success. The twist is the company's success despite the researcher's confusion.
But the original user's example was about urban gardening. Let's stick closer to the initial idea. Maybe have the AI's analysis being too emotional or offbeat, but gets praised for its creativity. Alternatively, the researcher is embarassed but the results go viral in academia.
Final version:
be me
qualitative researcher using ai
decide to use GPT-5 for thematic analysis on interview data
topic: urban gardening challenges
interviews mention things like "plants dying," "time-consuming," etc.
run the model, excited for insights
output: "Key theme: Participants project parental trauma onto flora"
fucking hell, this ain't Freudian analysis
try to fix parameters, run again
new theme: "Gardening as a metaphor for societal collapse"
advisor walks in, asks for update
panic and present findings
they love it, call it 'groundbreaking interdisciplinary work'
now presenting at conference on eco-psychology and neoliberal horticulture
tfw your AI is a bullshitter but academia eats it up
This plays on the AI generating over-the-top, pretentious themes that somehow get accepted in academia, satirizing both AI limitations and academic trends. The humor comes from the contrast between the mundane data and the AI's absurd interpretations, which are lauded by the advisor.
I think this works. Ensure it's concise, uses greentext tropes, and has a punchline. Let me check line by line.
Original start:
be me
a qualitative researcher using ai
My response continues with:
decide to use GPT-5 for thematic analysis on interview data
topic: urban gardening challenges
interviews mention things like "plants dying," "time-consuming," etc.
run the model, excited for insights
output: "Key theme: Participants project parental trauma onto flora"
fucking hell, this ain't Freudian analysis
try to fix parameters, run again
new theme: "Gardening as a metaphor for societal collapse"
advisor walks in, asks for update
panic and present findings
they love it, call it 'groundbreaking interdisciplinary work'
now presenting at conference on eco-psychology and neoliberal horticulture
tfw your AI is a bullshitter but academia eats it up
Yes, this flows well. Each step escalates the absurdity, with the researcher's frustration contrasting with academia's approval. The final line encapsulates the irony.