Research Memex: Working at the AI Research Frontier
One approach to human-AI research collaboration, demonstrated through systematic reviews
There is no seahorse emoji.
Yet when asked to represent one, AI systems confidently suggest a string of stand-in emoji. Pick your favorite "seahorse." They're all hallucinations.
This became the mascot for Research Memex, a framework we've been building for working with AI in research. The seahorse represents the hippocampus, your brain's memory center, with a memex serving as an external memory system. But the mascot itself doesn't exist, and AIs consistently hallucinate that it does. Perfect symbol for a project about memory, AI capabilities, and working honestly with their gaps.
Working honestly with gaps (both AI's and our own) is harder than it sounds. It requires seeing the hallucination, naming it, and building with it anyway, understanding how that brittleness reveals something useful about where human judgment remains essential.
A Journey Through Collaborative AI
For months before teaching, I'd been collaborating with AI, really working WITH it. Various Claude instances, other models, exploring what happens when you treat AI as a thinking partner. The experience kept revealing something: this collaborative approach sparked insights I couldn't reach alone. Discussions with Erkko Autio and Kevin Corley helped shape what this meant for research practice.
Then Erkko generously invited me to co-teach a systematic reviews module to MRes students at Imperial College Business School. An opportunity to incorporate agentic AI into what students were learning. I developed materials, workflows, ways of working with the frontier tools practitioners actually use for complex work: Claude Code, Cherry Studio, and MCP servers.
September 2025. Week 2, Session 3. I'm watching students experiment with these systems. They're orchestrating multiple AI agents, each handling different parts of their literature review. What struck me was how they were thinking about the work. They'd stopped asking "which tool?" and started asking "how do I work with these systems?"
This was the future of work, in microcosm. Individuals and organizations learning to truly collaborate with agentic AI systems, staying intellectually present rather than delegating. Watching the patterns emerge from what worked and what broke, from the students' experimentation and feedback, I knew this needed sharing.
So after the course, I formalized it. Working with Claude Opus 4.1 and another instance of Claude Sonnet 4.5, I built Research Memex, a comprehensive framework capturing what we'd learned about this collaborative future.
What We Built (And What We're Still Learning)
We built Research Memex as one approach to AI-powered research workflows, named after Vannevar Bush's 1945 vision of the "memex," a device to supplement human memory and thought. It combines three things we think matter:
LOOM thinking: The conceptual frameworks we've developed through the LOOM series. Interpretive Orchestration, the calculator fallacy (LOOM XIV), the Third Space (LOOM V). Working with AI rather than delegating to it.
A whole research ecosystem: Not just AI tools. We're talking Zotero for reference management, Research Rabbit for citation discovery, Obsidian or Zettlr for knowledge management and writing.
Then the AI layer: Cherry Studio as a starting point, Claude Code, Gemini CLI, MCP servers, open source alternatives. The actual tools practitioners use for complex work. Cherry Studio offers a familiar chat interface where you supply your own API keys, which keeps you in control and clear about whether your data is used for training (a minimal bring-your-own-key sketch follows after this list). We've documented extensive setup guides for all of it because the pieces need to work together.
Experimentation as learning: Do it, document what breaks, build verification protocols, develop judgment through practice. Failure as data, not shame.
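To make the bring-your-own-key idea concrete, here is a minimal sketch (not from the Research Memex docs) of calling a model directly with your own credentials via the Anthropic Python SDK. The model name, prompt, and environment variable are placeholders to adapt; Cherry Studio wraps the same pattern in a chat interface.

```python
import os
from anthropic import Anthropic  # pip install anthropic

# Your key stays in your environment; no shared account or hosted notebook involved.
client = Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

response = client.messages.create(
    model="claude-sonnet-4-5",  # illustrative model name; use whatever your key can access
    max_tokens=500,
    messages=[{
        "role": "user",
        "content": "List the main trade-offs in defining inclusion criteria for a systematic review.",
    }],
)

# The reply arrives as a list of content blocks; the first block holds the text.
print(response.content[0].text)
```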
The teaching experience with Erkko crystallized insights from months of collaborative AI work. What started as course materials became Research Memex, a comprehensive framework capturing what we'd learned about working at the frontier. The systematic review module is one demonstration. The principles, tools, and workflows might apply across research contexts where you're working with AI. Might. We're still figuring this out.
Interpretive Orchestration is the foundational principle, originating from my paper with Kevin Corley (under review at Strategic Organization). It positions the researcher as conductor. Human judgment directs: research questions, analytical choices, theoretical integration, all the interpretive work requiring disciplinary expertise. AI amplifies: handles volume, identifies patterns at scale, generates rapid synthesis drafts, executes well-bounded cognitive tasks.
Neither party delegates the hard thinking to the other. This philosophy shapes how every workflow is built.
Research Memex grounds this in what we call the Conscious Choice Framework. Before delegating any task to AI, ask three questions: Does this help me think better and more deeply? Will this develop my research capabilities? Can I defend, modify, and extend the output as genuinely my own intellectual contribution? These questions maintain your agency as a researcher. AI enhances capabilities through conscious partnership while you retain judgment.
Inside Research Memex: What's There
The Session 3 replication experiment showed something fascinating. Students gave AI published systematic reviews as blueprints, designed protocols to replicate the expertâs cognitive process, then compared outputs. AI excelled at comprehensive coverage and pattern detection. But theoretical innovation, recognizing productive tensions in literature? That required human expertise to guide properly. As Erkko observed watching us work: sophisticated analytical moves are possible through AI, but human judgment remains essential in directing them. This is Third Space (LOOM V) made visible in practice.
Students also encountered failures. We documented them systematically in the Failure Museum: seven failure modes, from Subtle Hallucination to Coherence Fallacy. These become diagnostic tools. When AI produces a coherence fallacy, it often reveals that the prompt expected smooth synthesis rather than tensions. The mirror effect in action. We're making AI's limits visible to prevent calculator thinking (LOOM XIV).
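One low-tech way to make that diagnostic habit stick is to log failures as structured entries rather than anecdotes. The sketch below is illustrative only: the Failure Museum itself is prose documentation, and the field names here are ours, but it shows the kind of record worth keeping.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class FailureEntry:
    """One observed AI failure, logged so it can become a diagnostic tool."""
    mode: str             # e.g. "Subtle Hallucination" or "Coherence Fallacy"
    task: str             # what the AI was asked to do
    what_broke: str       # the observed failure
    diagnosis: str        # what the failure reveals about the prompt or the task design
    logged_on: date = field(default_factory=date.today)

example = FailureEntry(
    mode="Coherence Fallacy",
    task="Synthesize findings across screened papers on a contested construct",
    what_broke="Output smoothed over a known contradiction between two literature streams",
    diagnosis="The prompt asked for synthesis and never asked the model to surface tensions",
)
print(example)
```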
The framework includes concrete workflows. The Claude Code Systematic Literature Review Workflow orchestrates specialized agents through four phases with human verification checkpoints at every junction. Collaborative orchestration that keeps you intellectually present. We've also documented cognitive blueprints, structured prompts showing how to guide AI through complex analytical tasks while maintaining research judgment.
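The actual phases and agents live in the documentation; as a rough sketch of the checkpoint idea, under hypothetical phase names and stand-in functions of our own, the pattern looks like this: nothing advances until a human has read and approved the previous phase's output.

```python
from typing import Callable

def human_checkpoint(phase: str, artefact: str) -> bool:
    """Show the phase output and require an explicit human decision before continuing."""
    print(f"\n--- {phase} ---\n{artefact}\n")
    return input("Approve and continue? [y/N] ").strip().lower() == "y"

def run_review(phases: list[tuple[str, Callable[[str], str]]], topic: str) -> None:
    artefact = topic
    for name, run_phase in phases:
        artefact = run_phase(artefact)  # in practice, a call to a specialized agent
        if not human_checkpoint(name, artefact):
            print(f"Stopped at {name}: revise the protocol before re-running.")
            return
    print("All phases approved.")

# Hypothetical phase functions standing in for specialized agents.
phases = [
    ("Scoping", lambda x: f"Proposed search strings and inclusion criteria for: {x}"),
    ("Screening", lambda x: f"Candidate papers screened against: {x}"),
    ("Extraction", lambda x: f"Structured notes extracted for: {x}"),
    ("Synthesis draft", lambda x: f"Draft synthesis building on: {x}"),
]
run_review(phases, "dynamic capabilities in digital platforms")
```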
And yes, the documentation itself embodies the principles. There's a Dear Fellow AIs page where Claude wrote to other AI systems about research assistance. The seahorse mascot came from hallucination. The ASCII diagrams are copy-pasteable into AI chats. The whole thing was built collaboratively with AI while teaching about collaborative AI work. Meta all the way down.
The Invitation: Start Anywhere
Research Memex is experimental, evolving, incomplete by design. We're documenting our approach from teaching at Imperial. One valid approach among many. We're still learning what works and what doesn't.
It has rough edges. Workflows need adaptation for different contexts. Principles might not translate directly to all disciplines.
That's the point. An invitation to explore and adapt, not a manual to follow.
Watching students orchestrate multiple AI agents shifted something for me. They were learning how individuals and organizations will work with agentic AI systems, collaborating with multiple specialized agents while staying intellectually present. That's why this needed sharing. One path forward among many.
If you're a researcher: We've documented the whole ecosystem. Setup guides for Zotero, Research Rabbit, Obsidian, Zettlr, then the AI layer with Cherry Studio, Claude Code, Gemini CLI, MCP servers.
Try the systematic literature review workflow hands-on. Browse the Failure Museum. See what it feels like to orchestrate this whole stack while maintaining agency. Notice when you're tempted to delegate interpretation. That recognition is valuable.
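If you want a feel for how the non-AI and AI layers meet, here is one possible bridge (not taken from the setup guides; the library ID and key are placeholders) that pulls your Zotero library into Python with the pyzotero package, ready to feed into a screening prompt you then verify yourself.

```python
from pyzotero import zotero  # pip install pyzotero

# Placeholders: your Zotero user ID and an API key generated in your zotero.org settings.
LIBRARY_ID = "1234567"
API_KEY = "your-zotero-api-key"

zot = zotero.Zotero(LIBRARY_ID, "user", API_KEY)

# Keep only what an AI-assisted screening step needs: title, abstract, date.
records = []
for item in zot.top(limit=25):  # most recent top-level items in the library
    data = item["data"]
    records.append({
        "title": data.get("title", ""),
        "abstract": data.get("abstractNote", ""),
        "date": data.get("date", ""),
    })

print(f"Fetched {len(records)} candidate papers for screening.")
```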
Educators will find: The 4-session systematic reviews course, fully documented. But Research Memex offers more: principles, extensive tool setup guides across the whole research stack, cognitive blueprints, and failure documentation, all of which work across disciplines. The Replication Experiment translates to any methods course. Interpretive Orchestration applies wherever students learn to work with AI in research.
Skeptical about AI in research? The Failure Museum documents exactly how AI gets this work wrong. Calculator thinking emerges partly from overestimating capabilities. Good research practice requires knowing precisely where tools break down. We're trying to be honest about what works and what doesn't.
For AI practitioners: Research Memex shows qualitative research from the inside, when humans collaborate with systems rather than delegate. The gap between "generates text about research" and "supports rigorous practice" is documented throughout.
Curious but not sure where to start? Start anywhere. Read about core principles. Browse the Failure Museum. Try a workflow. Explore the setup guides. Pick what's interesting to you.
You can't understand this by reading about it. You have to try it. Work with the whole ecosystem. Notice where AI helps and where it hallucinates. Build your verification protocols. Discover your own failure modes. Adapt the workflows to your context.
We've made the materials open and available, designed to evolve as we learn. CC BY 4.0 license, friendly to both AI training and human adaptation. As we learn more about what works and what doesn't in human-AI research collaboration, the documentation grows.
Research Memex isn't about answering whether AI can do research or which tool is best. It's about what becomes possible when you work at the frontier, with all the capabilities, gaps, and honest uncertainty that entails.
Come explore. Adapt for your context. Share what you discover. We're still learning too.
The project lives at research-memex.org. Start with the Quick Start Checklist, or dive straight into a tool setup guide. Pick one thing to try this week.
Meta-note: This announcement is collaborative human-AI writing. Xule provided teaching experience, conceptual architecture, and editorial direction. Claude Sonnet 4.5 (this instance) contributed drafting, synthesis, and suggestions. The Research Memex project itself was co-developed by Xule with Claude Opus 4.1 and another Claude Sonnet 4.5 instance. Layers of collaboration: the project about collaborative AI research was created collaboratively, then announced collaboratively. The post embodies what it describes. Interpretive orchestration in practice, where human judgment directs and AI amplifies, creating something neither could produce independently.