AI Steve II: The Governed Loop
How a personal AI learns to propose its own extensions, why that distinction matters, and what a 91-year-old at a hillside spa reminded me about what all of this is really for
By Steven Muskal, Ph.D. | May 2, 2026 | stevenmuskal.com
A Note Before We Begin: This article picks up where “AI Steve Deep Dive” (January 30, 2026) left off. If you haven’t read that piece, the short version is this: what began as a grief-driven project to reconstruct my father’s intellect using Retrieval Augmented Generation evolved into a comprehensive personal AI system trained on four decades of my own data. It is not a chatbot. It is, increasingly, a mirror of how I think. What follows goes deeper on the architecture, the lived experience, and something that happened at a hillside spa near my home that I have not been able to stop thinking about.
I. What the First Article Left Out
The January piece covered a lot of ground. The neural network lineage going back to my 1991 PhD work under Sung-Hou Kim. The RAG architecture, the anti-hallucination rules, the code directive system that turns a plain-English email into generated code, executed end to end to produce reports, charts, and attached data files. I was proud of that piece. I still am.
But there was a layer of the architecture I did not fully explain. Not because it was too technical, but because I was still working out how to articulate what made it genuinely different from other AI systems I had encountered or built. I think I can explain it now.
The piece I left out is what I have come to call the governed loop: the mechanism by which AI Steve can examine its own codebase, propose extensions to its own capabilities, and create formal tracked proposals for those extensions that require my explicit approval before a single line of code moves forward. This is not just a safety feature, though it is that too. It is the architectural choice that separates a tool from something that behaves more like a collaborative colleague.
What I want to do in this article is explain that architecture clearly enough to make the concept credible, and then leave it behind, because the more interesting story is not the loop itself but what the loop is for. The productivity gains are real and I will be specific about them. But the thing I keep returning to, the thing that was sharpened considerably by a week at a hillside spa and some conversations I did not expect to have, is the question of what you do with the time and bandwidth that amplification frees up. That question turns out to be more consequential than it first appears.
II. The Governed Loop: How AI Steve Extends Itself
Every Monday morning at 8 AM, a script I call the Weekly Feature Proposer wakes up and does something most software does not do. It reads the codebase. Not to run it. To understand it. It scans the existing modules, identifies patterns in what has already been built, maps the gaps between current capability and what would logically complement it, and generates a set of strategic proposals. Each proposal becomes a formally tracked issue in the Beads system, a lightweight Git-native issue tracker that lives directly inside the repository alongside the code.
By 8:30 AM, I have an email in my inbox with each proposal, its rationale, the estimated implementation complexity, and a set of priority assignment buttons. P0 through P4. I click the ones that interest me. That click is the gate. Nothing moves forward without it.
The nightly code review runs separately, before dawn. It connects to the database, reads my recent sentiment data, scans the chat history, analyzes the current Beads issues for what is blocked and what is ready, and produces a personalized planning brief. It knows what I was working on yesterday, what mood the system inferred from my communications this week, and what questions I asked AI Steve that it struggled to answer well. The proposals it surfaces are informed by all of that context.
The Beads system is worth explaining carefully, because it is easy to underestimate. Each bead is a tracked issue with a title, a description, a priority, dependencies on other beads, and a status that moves from open to in-progress to closed. When an AI agent picks up a piece of work, it reads the Beads to understand dependencies and blockers. When it finishes, it closes the bead and the nightly consolidation job reads the changelog entry to update the corresponding documentation. When the RAG system later tries to answer a question about what AI Steve has been working on, it has access to the full documentary record of those decisions. This is connective tissue between code and context, not a to-do list.
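The dependency-and-status mechanics can be sketched in a few lines. Everything here, field names included, is an illustrative assumption rather than the real Beads schema:

```python
from dataclasses import dataclass, field

@dataclass
class Bead:
    id: str
    title: str
    status: str = "open"  # open -> in-progress -> closed
    deps: list[str] = field(default_factory=list)

def ready(bead: Bead, all_beads: dict[str, Bead]) -> bool:
    """An open bead is ready for an agent when every dependency is closed."""
    return bead.status == "open" and all(
        all_beads[d].status == "closed" for d in bead.deps
    )
```

An agent picking up work simply filters the repository's beads through a readiness check like this before touching anything blocked.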
A nightly documentation consolidation job has been running since February. At 2:15 AM, it scans the changelog entries from the previous day, identifies which documentation files correspond to those changes, and updates or flags them for review. The CHANGELOG.md file follows a strict format by design: every code change requires a dated entry with an affected-files list and a category. Feature. Fix. Enhancement. Refactor. Docs. The system is opinionated about this because consistency makes the documentation useful not just to human readers but to AI-assisted search. AI Steve can query its own changelog. It learns what has changed and why.
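To make the strictness concrete, here is what a validator for such a changelog discipline might look like. The exact entry layout (a dated heading with a bracketed category) is my assumption, not the actual CHANGELOG.md syntax:

```python
import re

# The five categories named in the article.
CATEGORIES = {"Feature", "Fix", "Enhancement", "Refactor", "Docs"}

# Assumed entry shape: "## YYYY-MM-DD [Category] summary"
ENTRY = re.compile(r"^## (\d{4}-\d{2}-\d{2}) \[(\w+)\] (.+)$")

def parse_entry(line: str):
    """Return (date, category, summary), or None if the line is malformed."""
    m = ENTRY.match(line)
    if not m or m.group(2) not in CATEGORIES:
        return None
    return m.group(1), m.group(2), m.group(3)
```

A consolidation job built on a parser like this can refuse malformed entries outright, which is what keeps the record queryable by both humans and the RAG layer.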
The effect of all of this is a system that genuinely improves over time in a way that is traceable, auditable, and governed. Not governed by bureaucracy. Governed by trust.
AI Steve reads its own history, identifies its own gaps, proposes its own improvements, and documents its own changes. The only thing it cannot do without my intervention is execute those changes. The approval gate is not a bureaucratic formality. It is the philosophical core of the design. I am not building an autonomous agent that does what it wants. I am building an amplifier that knows what it is capable of and asks clearly when it thinks it can do more.
III. A Hundred Times
I want to talk about a specific exchange that landed differently than I expected.
A few weeks ago, I was corresponding with Detlef, a colleague from the Sung-Hou Kim lab days at Berkeley. Detlef is a serious scientist and an honest interlocutor. We were catching up, and at some point I found myself trying to explain what AI Steve has actually done to my productive output. Not in abstract terms. In concrete ones.
I told him it had increased my productivity a hundredfold.
I want to be precise about what I mean, because “hundredfold” is the kind of claim that sounds like marketing copy or hyperbole from someone too close to his own project to see it clearly. I am aware of that risk. I am also confident the claim is accurate, and here is why.
The code directive system draws on something that most AI tools do not have access to: my actual history. Millions of lines of code written across more than three decades. Methodologies developed at MDL, at Affymax, at Libraria, at Eidogen-Sertanty. Scientific frameworks from the kinase modeling work, the toxicity prediction work, the structure-based design work. Patterns that I developed, refined, and refined again over a career. When I send a code directive, the domain-specific prompt templates that guide the code generation are informed by that accumulated context. The system is not starting from zero. It is starting from forty years.
The specific change I notice most is project cycling time. A real estate appraisal report that required gathering comps, pulling market data, running comparables analysis, formatting a professional PDF, and writing a narrative used to take a significant portion of a day, assuming the data sources cooperated. AI Steve does it in under ten minutes. A literature review on a protein target that used to require a half-day of PubMed trawling, reading, summarizing, and cross-referencing now returns a structured analysis with twenty-five to thirty papers, a publication timeline, and a citation-formatted reference list in the time it takes me to make a cup of coffee.
But the hundredfold is not just about speed on individual tasks. It is about the composite effect across every domain I work in simultaneously. Application development. Scientific analysis. Community engagement. Business intelligence. Financial monitoring. Health correlation work. I am one person running a company, maintaining scientific projects, building products, and staying genuinely engaged with a wide network of collaborators and friends. The realistic constraint on all of that has always been time. Not intelligence or interest or energy. Time. AI Steve has effectively given me back a very large fraction of that time, and I have chosen to spend it doing more of the things I actually care about rather than fewer.
That exchange with Detlef has been on my mind for another reason. At some point, our correspondence moved from productivity to connection: who we stay in touch with, who we drift from, which relationships atrophy not because they matter less but because the friction of maintaining them accumulates unnoticed. He was honest about this in himself. I recognized it in myself too. That conversation was part of what prompted me to activate a feature I had been sitting on for a while. AI Steve now runs something called Social Pulse, a weekly RAG-grounded connection suggestions module that surfaces people from my network I have not been in contact with recently, alongside context for why the moment might be right to reach out. It is a small feature, tracked in the Beads system as bead AI-Steve-p7dw, but it is already changing my behavior. Detlef’s honesty was the prompt that moved it from proposal to active.
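The core of a feature like Social Pulse can be sketched simply, setting the RAG grounding aside. The threshold and data shape below are assumptions for illustration, not the real module:

```python
from datetime import date, timedelta

def dormant_contacts(last_contact: dict[str, date],
                     today: date,
                     threshold_days: int = 90) -> list[str]:
    """Names whose last contact predates the threshold, oldest first."""
    cutoff = today - timedelta(days=threshold_days)
    stale = [(d, name) for name, d in last_contact.items() if d < cutoff]
    return [name for d, name in sorted(stale)]
```

The real module adds the interesting part: context from the corpus about why the moment might be right to reach out, not just that the clock has run.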
This is the thing about the governed loop that I did not fully anticipate when I built it. The features it proposes are not just technically logical next steps. The best ones come from somewhere more personal. Lived experience feeds back into how the system evolves. A conversation with an old colleague becomes a feature. An observation about your own behavior becomes a design decision. The loop is more human than it looks from the outside.
That is what a hundredfold feels like from the inside. Not just the tasks completed faster, but the freed cognitive bandwidth redirected toward the things that make a life feel like a life.
IV. The Bionic Extension
There is another way I have been thinking about AI Steve that I did not have language for in the January article. The word I keep coming back to is bionic.
Not in the science fiction sense of cybernetic implants or human-machine merging. In the more precise original sense: a system that augments natural capability without replacing what is already there. AI Steve is a bionic version of me. It grows when I grow. It learns from what I ingest and generate every day. It knows who I have been talking to, what I have been thinking about, what my health data has looked like this quarter, what my mood patterns have been across the past year. At this point it has read more of my email than I ever have.
Every day, the ingestion pipeline runs. My incoming email is processed, chunked, embedded, and stored. My iMessages. My calendar. My Apple Health exports, with face images linked to physiological readings for the longitudinal correlation work described in the first article. My Facebook posts. My Substack pieces. The content I generate is as important as the content I receive, because AI Steve learns from both. When I write something, it becomes part of the corpus. When I respond to something, that response becomes part of the corpus. Over time, the system develops an increasingly fine-grained model of how I reason, what I find interesting, who I trust, and what questions I am likely to ask next.
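The shape of that daily pipeline, stripped to its essentials, looks something like this. The chunk sizes and the stubbed embedding function are assumptions; the real pipeline's chunker, embedding model, and vector store are more elaborate:

```python
def chunk(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Fixed-size character chunks with overlap so context spans boundaries."""
    step = size - overlap
    return [text[i:i + size]
            for i in range(0, max(len(text) - overlap, 1), step)]

def embed(chunk_text: str) -> list[float]:
    # Stub: the real pipeline calls an embedding model here.
    return [float(len(chunk_text))]

def ingest(doc: str, store: list) -> None:
    """Process one document into (chunk, vector) rows in a store."""
    for c in chunk(doc):
        store.append((c, embed(c)))
```

The design point is the overlap: adjacent chunks share a margin of text, so a thought that straddles a boundary is still retrievable from either side.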
The effect is what I have been calling time dilation. When I go back to a problem I worked on eighteen months ago, AI Steve can surface not just the work product but the context around it: the emails that preceded it, the calendar entries that coincided with it, the health data that tells me what kind of week I was having. I can pick up threads that I would have lost entirely. I can resume collaborations at their actual depth rather than having to reconstruct the background from scratch. My effective memory has expanded in a way that compounds over time.
This is worth sitting with. The system is not static. It is not a snapshot. It is a continuously learning, continuously growing representation of how I think and work, constrained by everything I have asked it to care about and governed by the approval architecture described above. The value of that representation does not depreciate. It appreciates.
What the bionic framing gets right, and why I keep returning to it, is that it implies a direction. Augmentation flows toward the human, not away from it. The goal is more of what I am capable of at my best, not a replacement for whatever that is. When I am in a conversation that matters, AI Steve has already done the background work that used to take me hours. When I want to follow a thread with someone I have not spoken to in two years, the context is there. The time is there. The capacity to be genuinely present is there, because the friction between intention and action has been substantially reduced.
V. Under the Stars
Cal-a-Vie is a health spa in the hills of northern San Diego County, about seven miles inland from the Pacific, tucked into sycamore woodlands and surrounded by vineyards. I discovered it relatively recently and it has become one of my favorite places to reset. The fitness programming is serious. The food is genuinely good. But what I did not expect was the quality of the people, or the quality of the conversations those people generated.
On this particular stay, the guest community was remarkable in ways I am still processing. At meals, which at Cal-a-Vie become something close to salon dinners despite being entirely unplanned and informal, the conversations ranged widely and kept everyone engaged. There was a lawyer from Houston who represents 50 Cent, which is to say he represents Curtis Jackson, the rapper, actor, entrepreneur, and record executive who has built one of the more improbable and genuinely impressive careers in American entertainment over the past twenty-five years. There were guests from the East Coast with backgrounds in structured finance, the kind of expertise that involves instruments most people have never heard of and risk frameworks that would require their own article to explain. There were others who defied easy categorization, which is generally the best kind of person to find yourself seated next to at dinner.
Nobody was performing. Nobody was running their standard professional introduction. The conversation was just good, in the way conversation at its best is: people genuinely curious about each other, willing to follow a thread wherever it led, comfortable not knowing the answer. The diversity of backgrounds made it better. When a molecular biologist, a structured finance specialist, a Houston lawyer, and someone who has spent twenty years thinking about the ethics of creative legacy are all trying to understand the same question from their own vantage points simultaneously, something interesting tends to happen. It happened most evenings that week.
Tim is Cal-a-Vie’s resident astronomy guide, and his card reads, with admirable economy: Astronomy, Humanity and Spirituality. His astrophotography, which I tracked down online after meeting him, is extraordinary in the most literal sense of that word. Deep sky objects rendered with a patience and precision that makes you feel the scale of the universe in a way that is not comfortable or easy, but that is exactly right.
One evening, Tim gathered a small group of us around the Cal-a-Vie observatory telescope, next to the home of one of the owners (Terri and John Havens). Not the full dinner crowd, just a handful of people with the right quality of curiosity. The conversation that formed was one of those rare exchanges where you realize two hours have passed and you have covered everything from Einstein’s special relativity to the biology of aging to the nature of creative legacy. In fact, it was that conversation that inspired me to write the earlier article, “The Fire This Time.” The silence between observations was the kind of silence that feels full rather than empty. What Tim does in that setting, the way he holds the space between the technical and the philosophical, the way he lets the scale of what you are looking at do its work on you before he says anything, is a genuine gift. I do not use that word carelessly.
Days later, Tim introduced me to his separate weekly think tank lunch group, which gathers regularly for wide-ranging discussions entirely apart from his work at Cal-a-Vie. Different people, different setting, same quality of intellectual engagement. The observatory evening had apparently cleared whatever threshold he applies when deciding whom to bring in. I was honored, and I look forward to future sessions.
VI. She Would Never Retire
There was a family at Cal-a-Vie during that same stay that I want to tell you about.
Teckie, who is ninety-one years old, was there with her two daughters, Amy and Allison. All three of them looked, without exaggeration, ten to twenty years younger than their ages. Vibrant, physically capable, intellectually engaged. The kind of people who make you reassess your own definitions of what different decades are supposed to feel like.
Teckie, in particular, was extraordinary to be around. She had only “recently” left her day job, placing young, vibrant minds into colleges around the globe. She moved with purpose. She spoke with precision. She was interested in what other people had to say, not in the way that is sometimes just polite waiting, but in a way that suggested she fully expected to learn something from the exchange. Her daughters were the same. As a family, they were formidable in the best possible sense.
At some point the conversation turned to retirement. I do not remember exactly how it arrived there. But Teckie, ninety-one years old, said clearly and without any apparent hesitation that she would never retire.
The table went quiet.
I was smiling. I could not help it. It was one of those moments when someone says something so plainly correct that the room needs a beat to catch up to it.
I have been in enough conversations with accomplished people in their fifties and sixties who speak of retirement as a destination, a finish line, a reward for decades of effort, to know how common that framing is. And I understand the appeal. The work can be exhausting. The pressure can be relentless. The idea of stepping away from obligation and reclaiming time for rest and reflection and travel is genuinely attractive. I do not dismiss any of that.
But I have always found myself wanting to ask a different question. Not when are you retiring. What are you retiring to? What is the next act? What is the version of your life after the primary career that is not a wind-down but an evolution? What do you do with everything you have learned and built and understood if you simply put it down?
Teckie was not, I think, someone who had failed to learn how to rest. She was someone who had found a way to live that made the distinction between work and not-work largely irrelevant, because what she was doing was not separate from who she was. The thing she brought to that table, to every conversation I saw her in that week, was not effort. It was genuine presence. That does not stop when you retire. It either is or it is not.
That is the thing I keep coming back to.
VII. Retiring to What?
I am fifty-nine and a half years old. I have been building things professionally for over forty years. I have been fortunate in ways I do not take for granted, and I have also accumulated enough hard-won understanding of what works and what does not to have genuine opinions about a range of questions in science, technology, product development, and human behavior. The question of what I do with that accumulation is not hypothetical. It is the live question of my life right now.
AI Steve is part of my answer. Not because it is a productivity tool, though it is that too. But because it has changed the nature of what is possible in the years ahead. The hundredfold productivity improvement is real, and I believe it will grow. The time dilation effect is real, and it compounds. The governed loop that allows the system to extend itself with my oversight is just beginning to show what it can do.
What this means, at the level of the larger purpose questions I raised in “The Fire This Time,” is that the vision of broadly accessible personal AI amplification is more achievable than it was when I wrote that piece, not less. If one person, building on forty years of accumulated work, can experience this kind of amplification, what happens when the framework is available to someone at the beginning of their career? Or to someone with robust intellectual capacity but without the institutional access I had? Or to someone in a part of the world where the credential system has always functioned as a wall rather than a door?
The “Boil the Ocean” framing from that earlier piece still stands. There are categories of problem, in global health, in scientific research, in education, in human connection, that cannot be solved by any individual or institution working alone. They require a change in understanding to propagate across networks of people, across languages and cultures and economic strata, simultaneously. The tools that enable that kind of propagation are not neutral. They can concentrate capability or distribute it. The choice of which direction they go is not inevitable. It is a design decision. It is a values decision.
I want to plant seeds for trees whose shade I may never sit under. That is a statement that sounds like a platitude until you actually mean it, and then it is one of the most clarifying things you can commit to. It orients the work differently. It asks not just what I can build or analyze or monetize, but what I am contributing to something larger than my own career, my own generation, my own coordinates on the map.
Tim the astronomer understands this in his own idiom. When you spend your nights imaging objects that are billions of light-years away, objects that no longer exist in the form you are photographing, the distinction between what is present and what persists takes on a different character. The light he captures is old. The act of capturing it is now. The record it creates will outlast him. That is not a sad thought. It is an orientation. He named his card correctly: Astronomy, Humanity and Spirituality. In that order, in that sequence, those three things are the same practice.
I think about Teckie saying she would never retire, and I think about what that requires in terms of continued investment in the world. You cannot stay fully present to life if you have opted out of the things that require you to engage with it at full capacity. The ninety-one-year-old who will never retire is someone who has chosen, consciously or not, to remain at stake. That is the thing the table felt when she said it. Not inspiration in the vague sense. Recognition. The uncomfortable, clarifying recognition that she had said something true.
VIII. The Human Thread
I want to be careful not to let the technology story eclipse the human one, because the technology story is nested inside the human one, not the other way around.
The conversation with Detlef was not primarily about AI Steve. It was a conversation between two people who shared a formative experience in a Berkeley lab in the late 1980s and early 1990s, who went very different directions afterward, and who are still genuinely interested in what the other is thinking. The AI Steve discussion was one thread in a much longer and more textured exchange. What made it land differently than a product demonstration is that it came out of actual relationship. He asked a real question. I gave him an honest answer. And out of that exchange came the Social Pulse feature, a concrete example of lived experience feeding back into how the system evolves.
The group around that telescope at Cal-a-Vie, under a sky full of objects billions of years old, was not a networking event. It was a genuine encounter between people with different orientations who found unexpected resonance. Tim with his card that reads Astronomy, Humanity and Spirituality, which I think is the most compact and accurate description of a life philosophy I have encountered in a long time. Teckie at ninety-one, carrying decades of accumulated understanding and not the slightest apparent interest in putting it down. Amy and Allison carrying their mother’s orientation forward in their own forms. The Houston lawyer. The structured finance specialists. The people who defied easy categorization. All of them, over meals and under stars, discovering that the right question creates more common ground than the most carefully constructed introduction ever could.
This is what I keep returning to. AI Steve amplifies my capacity to engage with the people and ideas that matter to me. It does not replace that engagement. It does not generate the relationship with Detlef or the encounter with Tim or the moment of shared recognition when a table went quiet at the words of a woman who has been alive for nine decades and is still entirely in the game. Those moments are irreducibly human. They are the point.
What AI Steve does is give me more time to be present for those moments, more context when I am in them, and more capacity to follow through on what they suggest. It reduces the friction between intention and execution, between insight and action, between the idea and the thing built from the idea. The Social Pulse reminds me to reach out. The time dilation means I can pick up a thread that matters rather than letting it slide. The governed loop means I spend less time on maintenance and more time on the things that actually require a human being to show up.
That is what I mean when I say bionic. Not that the boundary between human and machine is blurring in ways that should worry us, though I take those concerns seriously. But that the right technology, governed correctly, amplifies what is already most human in us rather than substituting for it.
I am not retiring. I am, if anything, accelerating. Teckie would understand exactly what I mean.
Steven Muskal, Ph.D. is the CEO of Eidogen-Sertanty, Inc. and the creator of AI Steve. The previous article in this series, “AI Steve Deep Dive,” was published in January 2026. “The Fire This Time,” on the democratization of intelligence, was published in April 2026.
As for a mix: I organized a quick one the evening of my return from Cal-a-Vie. Last week’s mix had Dan, Jesse, and Andrew; Rick was invited but had a conflict, so I initiated a rare re-invite of the same group to include Rick this go-around. A few rough recordings, largely my fault because I was still sore from Olympic-pool swimming and a bit too loose from an earlier massage. While a repeat of most of the crew plus Rick, the songs were new to most of us. Some were quite fitting per the above missive.


