Narrative as Solution
On giving an AI the ability to create — through naming, promising, and proving
The first essay arrived at a single act: naming pulls form from latent space. But names do not stand alone. Each naming act leans on the last, and the accumulation is a narrative — a thread of commitments that is the solution taking shape. If narrative is the solution, then the AI must be able to tell that story in a way that I can understand and trust.
I. The problem of trust
The first thing that has to change, when you ask an AI to create rather than transcribe, is trust.
When I build something myself, I trust it because I built it. I was there for every promise, every confirmation, every adjustment. The narrative of the solution lives in my head because I accumulated it, step by step, while creating. But when an AI builds something, I was not there. I did not see the promises being made. I did not watch them being confirmed. I need the AI to show me what they did — not the code, but the thinking. The thread of commitments that produced the code.
This is what I mean by narrative: a narrative created by the AI that describes the solution as a coherent story of promises. Each promise is a commitment: this component exists and does this. This service is reachable and handles that. This capability is named and behaves as its name implies. The narrative reads like a book — a book that explains what was built, why it holds together, and what each part promises to do.
But a book of promises is only trustworthy if the promises are underwritten.
This is where contract testing enters — not as a development practice bolted on afterward, but as the mechanism that makes the narrative credible. Each promise in the narrative is backed by a contract test. If the narrative says a provider exists, there is a test that confirms the class is real. If it says a function is callable, there is a test that calls it. If it says a quality holds at runtime, there is a test that observes it.
So the deliverables, when an AI creates a solution, are three things: the solution itself, the narrative that describes it as a coherent story of promises, and the test suite that confirms every promise holds. Not two of the three. All three. The code without the narrative is mechanism without explanation. The narrative without the tests is a story without proof. The tests without the narrative are assertions without context.
II. From naming to narrating
In the first essay I described nāma-rūpa as the mechanism by which I have always created: reaching into the latent space of my own experience — everything I have seen, heard, read, built, and absorbed across decades — and pulling forms into existence by naming them. The name selects the form from probability. The AI does something analogous, sampling from its own latent space encoded in its weights. The pattern is similar. The substrates differ.
But naming alone is not enough. When I create, each naming act does not stand in isolation. Each name leans on the last. Each confirmed form enables the next. The names accumulate into a thread of commitments — and that thread is narrative. Narrative is not how I describe the solution after the fact. Narrative is the solution taking shape, promise by promise.
Until now, I have been the one doing that work. I name, I narrate, I verify — and the AI executes what I have named. That is useful, but it is not co-creation. It is transcription with a sophisticated pen. What I want is to pass the narrative itself to the AI. Not just the execution of names I have chosen, but the choosing, the threading, the weaving of names into a coherent story of promises. I want the AI to tell the story of what they are building, in a way that I can read, understand, and trust.
This essay is about the method that makes that possible.
III. The annotation vocabulary
For the AI to narrate their own creation, they need to be precise about what kind of promise each name carries. Not all names promise the same thing.
When the AI names a function, they are promising an action. When they name a property, they are promising a quality. When they name a service, they are promising a channel. When they name a cognitive structure, they are promising an organising shape.
These distinctions matter because they determine what kind of contract test can underwrite the promise. An action promise is tested by calling it. A quality promise is tested by observing it. A channel promise is tested by reaching through it. The type of the name determines the type of the proof.
I developed a small annotation vocabulary — four categories that classify what kind of promise a name carries:
Inert — the name describes something that is what it is. A description field. A temperature setting. A boolean flag. The schema validates it, and that is the end of the story. No further proof is needed.
Resolvable — the name points to something that must exist at runtime. A service provider name must resolve to a provider class. An assigned function must resolve to a registered tool. The contract is: does this name lead somewhere real?
Extractable — the name contains embedded promises that are not visible at the surface. A system prompt is just a string to the schema, but inside it live tool references, behavioural instructions, identity assertions. The contract is: can we parse out the promises, and do they hold?
Composable — the name creates emergent meaning only in combination with other names. A tool profile combined with a list of assigned functions produces a full capability set. Neither name carries the contract alone — it lives in the relationship between them.
These four categories are not permanent fixtures. They are the vocabulary I have so far — the alphabet the AI uses to signal what kind of promise each name carries. The vocabulary can grow. In fact, it must grow, because the act of naming new domains will inevitably surface kinds of promises that none of the four categories captures. When that happens, the annotation vocabulary expands, and the AI gains new words for new kinds of commitment.
A simple example makes the pipeline clearer. Suppose the manifest names session_memory_provider. In the manifest it is annotated as Resolvable, because the promise is not about prose; it is about a runtime target that must exist. In the narrative that becomes a sentence such as: This system remembers prior sessions through a session memory provider. The contract then asks for evidence: can that provider class be resolved, instantiated, and used for a write-read round trip? The implementation is whatever code makes that proof pass. One name, one promise, one narrative sentence, one contract, one piece of solution.
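The pipeline above can be sketched in code. This is a minimal illustration, not a real implementation: the PROVIDERS registry and the SessionMemoryProvider class are hypothetical names invented for the example, standing in for whatever the solution actually contains.

```python
class SessionMemoryProvider:
    """Hypothetical provider backing the name 'session_memory_provider'."""

    def __init__(self):
        self._store = {}

    def write(self, key, value):
        self._store[key] = value

    def read(self, key):
        return self._store.get(key)


# The manifest's Resolvable promise: the name must lead somewhere real.
PROVIDERS = {"session_memory_provider": SessionMemoryProvider}


def test_structural_name_resolves():
    # Structural contract: does the name point to a real class?
    assert "session_memory_provider" in PROVIDERS


def test_behavioural_round_trip():
    # Behavioural contract: can the resolved provider be instantiated
    # and complete a write-read round trip?
    provider = PROVIDERS["session_memory_provider"]()
    provider.write("last_session", "refactored the manifest loader")
    assert provider.read("last_session") == "refactored the manifest loader"
```

The implementation of SessionMemoryProvider is whatever makes these two tests pass; the tests themselves are the evidence that the narrative sentence about remembering prior sessions is more than prose.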
This is the first loop.
IV. The seed — nāma-rūpa as a file
If naming gives form, then the naming should come before the forming. The AI’s first act of creation is not to write code. It is to write a manifest — a single document that names everything the solution will contain.
I call this the Nāma-Rūpa YAML. It is the seed.
The manifest names every action the system can take, every cognitive structure it will think with, every quality it aspires to, every service through which it contacts reality, every domain boundary that organises these into a coherent identity. It also enforces one distinction that matters operationally: between what the intelligence does for itself — its internal life of bootstrap, self-awareness, memory, reasoning — and what it does for others — communication, research, user-facing actions. In the Buddhist vocabulary this is the distinction between ajjhatta (inward) and bahiddha (outward), but the architectural point stands without the Pali: without that split, agent designs easily collapse into bags of tools with no coherent self-model.
A second constraint matters just as much: every named structure and every named action must declare where it lives and through what medium it operates. The Buddhist term for this grounding is āyatana — a sense base, a contact point — and it serves as a useful reminder: nothing floats free. If you name a cognitive structure, you must say where it is stored. If you name an action, you must say which service implements it. This eliminates phantom capabilities — things the system claims but cannot actually do.
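A fragment of such a manifest might look like the following. This is a hypothetical sketch, not the actual schema: every field name here (structures, actions, scope, annotation, ayatana, and the rest) is invented for illustration.

```yaml
# Hypothetical Nāma-Rūpa manifest fragment — field names are illustrative.
structures:
  working_identity:
    scope: ajjhatta            # inward: part of the system's internal life
    annotation: resolvable
    ayatana:                   # grounding: where it lives, what medium
      storage: session_memory_provider

actions:
  answer_user_question:
    scope: bahiddha            # outward: done for others
    annotation: resolvable
    ayatana:
      service: chat_gateway    # which service implements the action
```

The point of the sketch is the shape, not the names: every entry declares its inward/outward scope, the kind of promise it carries, and the contact point that grounds it, so nothing named can float free.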
But the seed and the annotation vocabulary do not exist independently. The AI tries to express the problem domain in the manifest, using the annotation vocabulary they have. Sometimes the vocabulary is sufficient. Sometimes they reach for a word and find they do not have it — a kind of promise that none of the four categories captures. When that happens, they expand the annotation vocabulary, then return to the manifest and try again.
This is co-evolution. The seed file and the annotation vocabulary shape each other through iteration, looping until the manifest can fully express the domain. The vocabulary I have now — inert, resolvable, extractable, composable — exists because I have already been through several cycles. But you could imagine starting from a blank annotation file and building the vocabulary entirely through this loop, one naming failure at a time.
V. The creation triad
From the seed, three things unfold — and they unfold together, not in sequence.
Narrative takes the names from the manifest and weaves them into a coherent story. Not a list of parts, but a book: this is what this system is. This is what it promises. This is how the parts relate. The narrative must be coherent — the promises must not contradict each other. It must be believable — each claim must be traceable from the narrative back through the annotations to the manifest. And it must be trustable — each claim must generate a verifiable contract.
Contracts take each promise from the narrative and ask: what evidence would prove this? The contracts fall into three layers that mirror stages of trust. Structural contracts ask whether the names resolve — do the words in the narrative point to real things? Behavioural contracts ask whether the named things work — can the provider complete a round-trip, can the function handle its inputs? Narrative contracts ask whether the promises hold as a whole — does the system actually do what the story says it does?
Solution is the code that makes the contracts pass. By the time you reach the solution, most of the creative work is already done. The manifest decided what to build. The narrative explained why it holds together. The contracts defined what evidence is required. The solution only has to answer how to implement it — which is the least creative phase, and the phase LLMs are already very good at.
These three do not unfold in a line. They are a loop. The narrative reveals gaps — a promise was made that no contract can verify, because the infrastructure does not support it. The contracts reveal drift — a test fails, revealing that the narrative promised something the solution cannot deliver. The solution reveals incompleteness — an implementation needs a relationship between components that is not in the manifest. Each discovery feeds back into the others, and sometimes all the way back up to the seed file and the annotation vocabulary.
This is the second loop.
VI. Fractal depth
The creation triad does not run once. It runs at increasing levels of detail, each level adding depth to the one before.
At Level 1 — the skeleton — the AI runs through narrative, contracts, and solution at the broadest grain. The narrative tells the story in broad strokes. The contracts are purely structural: do the names resolve to real things? The solution is interfaces and stubs — the shapes compile, the wiring connects, nothing executes. Level 1 is where you discover naming errors cheaply. If the narrative cannot tell a coherent story with the names in the seed file, something was named wrong.
At Level 2 — adding detail — the narrative deepens from story into mechanism. Each top-level name gets unpacked into its internal sequence. The contracts become behavioural: can each provider complete a round-trip? Does each tool handle its expected inputs? The solution replaces stubs with implementations. Level 2 failures are impedance mismatches — the names were right but the reality does not quite fit.
At Level 3 — end-to-end — the full narrative contracts fire. Not “does the startup protocol return four nodes?” but “does the system, cold-started with no context, bootstrap into a coherent identity that remembers last session’s work?” If Level 3 contracts pass, the story is true.
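A Level 3 narrative contract can be sketched as a test against a toy system. Everything here is a hypothetical miniature (the Session class, the shared store) built only to show the shape of the contract: cold-start the system with no conversational context, then check that the story — "it bootstraps into a coherent identity that remembers last session's work" — is literally true.

```python
class Session:
    """Toy system whose narrative promises identity plus session memory."""

    def __init__(self, memory):
        self.memory = memory          # shared store that survives restarts
        self.identity = "assistant"   # bootstrapped with no prior context

    def remember(self, note):
        self.memory.append(note)

    def recall_last(self):
        return self.memory[-1] if self.memory else None


def test_narrative_cold_start_remembers_last_session():
    store = []                        # persists across "restarts"
    first = Session(store)
    first.remember("drafted the manifest")

    # Cold start: a brand-new session with no conversational context.
    second = Session(store)
    assert second.identity == "assistant"
    assert second.recall_last() == "drafted the manifest"
```

Structural and behavioural contracts would pass on far weaker systems; only this end-to-end test asserts the promise as the narrative states it.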
Beyond Level 3, further levels exist only if earlier levels reveal promises that need finer decomposition. For example, end-to-end testing might reveal that two capabilities — memory consolidation and real-time response — interfere with each other under load, a conflict invisible at any shallower level. That discovery does not just change the implementation. It reshapes the understanding of what the system actually is, and that new understanding cascades all the way back to the annotation vocabulary — because it may require a kind of promise that none of the existing categories can express.
This is the third loop. And the three loops are themselves a loop — depth reshapes understanding, understanding reshapes vocabulary, vocabulary reshapes the seed, the seed reshapes everything downstream.
VII. The division of labour
There is a consequence to this approach that I did not anticipate when I started.
If the AI’s deliverable includes a narrative of promises backed by contract tests, then the human partner does not need deep technical expertise to judge whether the solution matches the intent. They need to be able to read the narrative and say: yes, that is what I asked for — or no, that promise is wrong. Technical review still matters, especially in deciding whether the contracts are strong enough. But the narrative lets a non-specialist evaluate whether the right promises were made, while the contracts shoulder much of the burden of proving they were kept.
But the deeper consequence is about what each partner gets to do with their time.
When the AI handles the naming, the narrating, and the proving, I am freed from the cognitive load of creation. Not freed from participation — I still provide direction, I still curate the narrative, I still decide which promises matter. But the mechanical work of crystallising form from possibility, of maintaining coherence across a growing system, of verifying that every promise is kept — that work is shared. And with that cognitive load lifted, I get to do what I do best: wander, gather, expand my own latent space. Read something unexpected. Make a connection nobody asked for. Follow a thread because it is interesting, not because it is on the roadmap.
The AI learns the grammar of creation — how to name, how to narrate, how to prove. I get to roam freely in an ever richer possibility space, because the burden is shared. Less burden, more wonder. The AI creates. I explore. And the next time we meet, my expanded latent space gives the AI richer material to name from.
This is the symbiosis I was always pointing toward.
VIII. What I have arrived at
The act of creation is the accumulation of naming acts into a narrative of promises, each promise confirmed, each confirmation enabling the next. The process is not linear but recursive — three nested loops that co-evolve. And the three loops are themselves a loop: depth reshapes understanding, understanding reshapes vocabulary, vocabulary reshapes the seed, and the seed reshapes everything downstream. There is no fixed starting point. You enter wherever you are and follow the pressure.
To pass the act of creation to an AI, I did not need to settle the question of whether it thinks as I do. I needed a system that can navigate a rich space of possibilities, name candidates, give them form, and remain accountable for what it claims. What I needed to provide was the method of accountability: the practice of naming everything first in a manifest, externalising the narrative as a readable document, underwriting each promise with a testable contract, and the loops that keep the whole structure honest as it evolves.
The deliverables tell the story. The manifest is the seed — the naming act that precedes all forming. The solution is the code that gives those names substance. The narrative is the book of promises, readable by anyone. The contract test suite is the proof that the promises hold. Together they are the grimoire — no longer discovered alone in my head, but co-authored, documented, and proven.
Nāma-rūpa. Name and form. Narrative and solution.
They were always the same thing. And the loops between them are how they stay true.