Abstract
The commercial internet rests on a single exchange: you provide your context — your attention, your data, your relationships, your history — and platforms provide you services. This exchange is presented as neutral but is structurally extractive. The platform aggregates your context, uses it to serve you advertisements, and never returns it to you in a portable, computable form. The result is that the most important information about your life — what you have done, who you know, what you are building — lives in systems you do not own, in formats you cannot read, under terms that can change.
We propose a different architecture. Context belongs on your machine, in plain files, under your control. Connected to whatever AI you use. Portable across every model that will ever exist. This is not a product. It is a proposal for infrastructure.
I. The Problem of Context Capture
When Tim Berners-Lee proposed the World Wide Web in 1989, he described a system of freely navigable, interlinked documents — a commons of human knowledge that no single entity would own. The early web approximated this vision. Pages were files. Files were readable. Links connected everything. The web was genuinely distributed.
The platform era changed this. Google needed your search history to improve search. Facebook needed your social graph to build feeds. Gmail needed your email to train spam filters and, later, to sell advertising. Each platform had a compelling reason to accumulate your context, and each extraction was individually reasonable. The aggregate effect was not: your context — the living record of who you are, what you know, and what you are doing — migrated from your machine to their servers.
This migration went largely unnoticed because the services were good. The web became faster, smarter, more connected. But the price was context capture: the systematic movement of personal context from distributed, owner-controlled storage into centralised, platform-controlled silos. By 2025, the average knowledge worker's operational context — their email, their documents, their conversations, their calendar, their tasks — lived across a dozen platforms, none of which spoke to each other, all of which held that data under terms of service the worker had never read.
II. The Arrival of the Capable Agent
In 2023 and 2024, language models became capable of autonomous action. They could read documents, write code, send emails, and make decisions. The models were genuinely remarkable. But they arrived into a context architecture designed for humans, not agents.
An agent that forgets everything at the end of each session is not an agent. It is an expensive autocomplete function. And this is precisely what every commercial AI system delivered: a model of extraordinary capability, operating with zero persistent memory, requiring the user to re-establish context at the start of every conversation. The intelligence was real. The amnesia was structural.
The platforms understood this and began building memory systems — proprietary context stores attached to their AI products. OpenAI's memory. Google's Gemini context. Anthropic's Projects. Each was a garden with a wall: your context lived in their system, queryable through their interface, portable nowhere. The solution to the amnesia problem reproduced the original sin of the platform era. Context capture, now applied to AI.
III. Context as Property
We propose a different starting point: context is property.
Your emails, your decisions, your relationships, your commitments, your projects, your history — these are not data to be stored on someone else's server. They are the substance of your working life. They are what you know and what you have done. They belong to you in the same way a notebook belongs to you, or a filing cabinet, or a hard drive.
The claim is not sentimental. It is architectural. Context stored in plain files on your machine has properties that context stored on a platform does not:
It is permanent. Files do not expire because a company pivots, gets acquired, or changes its terms of service. A plain text file written today will still be readable in fifty years, by software old and new.
It is portable. Any AI model that can read files can work with your context. You are not locked to the model that exists today. As models improve, your context travels with you.
It is auditable. You can open any file in any text editor and read it. There is no black box, no API to call, no export to request. Your context is simply there, in a format a human can read.
It is composable. Files can be linked, extended, forked, and merged. A context architecture built on files is as flexible as the filesystem itself.
IV. The Structure of Living Context
We propose a specific structure for personal context infrastructure, designed to be readable by any AI and maintainable by any person.
The world is organised into five domains. Archive holds everything that was. Life holds the personal — goals, people, relationships. Inputs is a buffer for incoming information not yet routed. Ventures holds anything with revenue intent. Experiments holds what is being tested. The letters spell ALIVE, which is both the name of the framework and a description of what it produces: a computer that lives, grows, and remembers.
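On disk, the five domains above might look like the following tree. This layout is illustrative: the directory names mirror the text, but the root path and exact naming are assumptions, not part of the framework.

```
~/alive/
├── archive/        # everything that was
├── life/           # the personal: goals, people, relationships
├── inputs/         # buffer for incoming, not-yet-routed information
├── ventures/       # anything with revenue intent
└── experiments/    # what is being tested
```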
Within each domain, the unit of context is the walnut. A walnut is a directory containing three files: key.md, which holds the stable identity of the thing — what it is, who is involved, what it aims at; now.md, which holds the current state — where things stand, what is next, what is urgent; and log.md, which holds the history — a prepend-only record of what has happened, signed by each session that contributed to it.
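A walnut can be created in a few lines of Python. This is a sketch, not part of the framework: the three file names come from the text above, but the helper name and the initial file contents are assumptions.

```python
from pathlib import Path

def create_walnut(root, name, identity):
    """Create a walnut: a directory holding key.md (stable identity),
    now.md (current state), and log.md (prepend-only history).
    Illustrative sketch; the seed contents are placeholders."""
    walnut = Path(root) / name
    walnut.mkdir(parents=True, exist_ok=True)
    (walnut / "key.md").write_text(f"# {name}\n\n{identity}\n")
    (walnut / "now.md").write_text("# Now\n\n- (no current state yet)\n")
    (walnut / "log.md").write_text("")
    return walnut
```

The point of the sketch is the simplicity: a walnut is nothing more than a directory and three readable files, creatable by any tool that can touch a filesystem.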
This structure maps to how knowledge actually accumulates: some things are stable (identity, purpose), some things are transient (current state, next actions), and some things are historical (what happened, what was decided, and why). A new AI session reads all three layers and arrives with genuine context. It does not need to be briefed. It already knows.
V. The Session and the Squirrel
The mechanism by which context is updated is the session. At the start of a session, an AI agent — running locally, using any model — reads the relevant walnuts. It loads the current state, the recent history, the open tasks. It holds this context across the conversation. At the end of the session, it saves: decisions are logged, tasks are routed, current state is updated.
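The end-of-session save can be sketched as a prepend to log.md, so the newest entry always sits at the top. A sketch under assumptions: the entry format here (timestamp, model name, summary) is illustrative, not a specification.

```python
from datetime import datetime, timezone
from pathlib import Path

def prepend_log_entry(walnut, summary, model="unknown-model"):
    """Prepend a signed session entry to a walnut's log.md.
    The log is prepend-only: nothing is rewritten, only added on top.
    Entry format is a hypothetical convention, not a spec."""
    log = Path(walnut) / "log.md"
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M UTC")
    entry = f"## {stamp} ({model})\n{summary}\n\n"
    existing = log.read_text() if log.exists() else ""
    log.write_text(entry + existing)
```

Because the log is plain Markdown, any model reading the walnut sees the history newest-first, and any human can audit it in a text editor.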
This process is performed by what we call a squirrel — the agent runtime that manages context within a session. The squirrel is not a specific model. It is a role that any model can inhabit. A session with Claude is a squirrel session. A session with GPT-4 is a squirrel session. A session with a local model is a squirrel session. The context layer is model-agnostic by design. Your walnuts do not know or care what processed them.
The result is compounding. Each session adds to the log. Each log entry makes the next session more informed. The AI that works with you on day one knows less than the AI that works with you on day ninety. Not because the model improved, but because the context did. This is the inversion of the current arrangement, in which every session begins from nothing and every conversation is discarded.
VI. Capsules and the Context Commons
Personal context infrastructure addresses the individual. But context does not stay individual. People work together, share knowledge, and build on each other's thinking.
We propose the concept of the context capsule: a bounded, controlled export of a walnut or a portion of its contents, shared with a specific recipient or made publicly available. A capsule is not a database record or an API call. It is a snapshot of structured context — readable by any AI, publishable to any surface, shareable via a link.
The capsule is the unit of context sharing in a distributed system. A consultant shares a project capsule with a client. A researcher shares a literature review capsule with a colleague. A founder shares a company context capsule with an investor. The recipient loads the capsule into their own context infrastructure and works with it immediately. No platform mediates. No data lives on a third-party server. The exchange is peer-to-peer.
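A minimal capsule export is a selective copy. The sketch below assumes a capsule is simply a directory holding a chosen subset of a walnut's files; here key.md and now.md are shared while the full log is withheld, a boundary choice that is illustrative only.

```python
import shutil
from pathlib import Path

def export_capsule(walnut, dest, files=("key.md", "now.md")):
    """Export a bounded snapshot of a walnut as a capsule directory.
    The sharer decides which files cross the boundary; the default
    here (identity and current state, but not history) is one
    hypothetical choice among many."""
    capsule = Path(dest)
    capsule.mkdir(parents=True, exist_ok=True)
    for name in files:
        src = Path(walnut) / name
        if src.exists():
            shutil.copy(src, capsule / name)
    return capsule
```

The recipient needs no special software to open the result: the capsule is files, so their own context infrastructure, or any AI that reads files, can load it directly.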
This is the mechanism by which personal context infrastructure becomes collective context infrastructure. Each walnut is a node. Each capsule is an edge. The resulting graph is a distributed network of human context — not owned by any platform, not readable by any advertising system, not subject to any terms of service except those the sharing parties agree to themselves.
VII. Worlds
The capsule network described in the previous section implies something larger than a sharing mechanism. It implies a new unit of social organisation: the world.
A world is a person's complete context infrastructure — their walnuts, their history, their people, their work. It is private by default. It compounds silently, session by session, without requiring any external system or permission. But it is not sealed. Through capsules, it connects. A world that shares a project capsule with a collaborator creates a live connection between two context systems. The collaborator's AI can read the shared context. Decisions made on one side propagate. When the collaboration ends, the two worlds separate cleanly — each retaining what they built together, neither holding the other's context hostage.
This is peer-to-peer in the literal sense. Two people, two machines, two context systems — connected by capsules they control, on terms they set, for as long as they choose.
But a capsule is not only a file exchanged between two parties. It is also a website. A published capsule has a URL. It can be keyphrase-protected or publicly readable. It renders in a browser, navigable by humans, and readable by any AI that visits it. The group chat becomes a group portal — a shared world that multiple people contribute to and any member's AI can read and act on. The distributed research team maintains an indexable, private knowledge layer that compounds with every paper read and every discussion held. The company ships a context capsule alongside its product — not static documentation but a structured, living world that any customer's AI can load and immediately reason about.
What we are describing is a web of worlds. Each world is sovereign. Each connection is voluntary. Each capsule is a bridge, not a merger. The group chat is not a group portal because a platform made it one — it is a group portal because the people in the group chose to share their context with each other. The distinction is everything.
The AI industry spent several years building AI wrappers — interfaces that made it easier to prompt a model. The problem was never the interface. It was the absence of the human. Every AI wrapper assumed that the model was the product and the human was the user. We inverted this. The human is the product — their context, their history, their world — and the model is the tool that works with it. Everyone built the AI wrapper. We built the human wrapper.
The first web gave people pages. The second gave them feeds. The third gave them models and kept the memory for itself. The alive computer gives people their memory back — and the infrastructure to share it on their own terms.
That is what we are changing.
The ALIVE framework is a reference implementation of the architecture described in this paper. It is open source, available at github.com/alivecomputer/walnut, and designed to run on any laptop with any AI model. The recommended starting configuration is Claude Code with Opus 4.6 and iCloud Drive for sync — but the architecture is model-agnostic and storage-agnostic by design. Your context is yours regardless of which tools you choose to work with it.