Wisdom Agent
We’ve completed phase two of our wisdom agent project. (Well, we haven’t yet tested that the UI works, so it’s not quite complete.)
The wisdom agent is an agentic shell that keeps multiple AI sessions centered around a core philosophy and that has memory, allowing each new AI session to learn from and build upon previous sessions. Eventually, we hope to evolve the wisdom agent into a self-developing AI. This project’s mission is to create agentic shells and (we hope, in the fullness of time) LLMs that help humans, as individuals and in groups, grow in wisdom while creating and maintaining systems (technological, organizational, governmental, etc.) that help them select for wisdom rather than folly.
We envision many uses besides its primary mission. For example, companies could use its basic structure to keep all their work centered around their company values and approved protocols. Or it could be used to create learning tools that help users keep building on previous knowledge (in math, foreign languages, etc.).
Right now the wisdom agent can adhere to a core philosophy (Core) and reference longer explanations of that philosophy as needed (Pure Love & Something Deeperism / Shared Something Deeperisms / Wisdom Meme & AI). At the end of the session, the wisdom agent reflects on the session and writes a report analyzing how well it embodied the core philosophy and suggesting how it might improve. All of those philosophical documents were written entirely by us humans and a fictional character, except for the core text, which is our development of Claude’s attempt to build a core text out of our explanation of the project plus this overview of Bartleby’s larger project: The Project (with links).
The current wisdom agent may also refer to additional guides that were written either largely or in part by Claude: Limits outlines AI’s limitations when it comes to growing human wisdom, and how it might navigate them; and Rubric is a template for the self-reflection reports the agent creates at the end of each session.
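For readers who want a concrete picture of what this shell does, here is a minimal sketch in Python. Everything in it is a hypothetical illustration, not our actual implementation: the class and file names, the reference-document list, and the chat_with_llm placeholder are all invented for the example. The shape is the point: every turn is framed by the core text, longer documents are consulted on demand, and the session closes with a rubric-guided reflection.

```python
# Hypothetical sketch of the wisdom agent's session loop.
# File names and chat_with_llm() are placeholders, not our real code.
from pathlib import Path

REFERENCE_DOCS = {  # longer explanations the agent may consult as needed
    "pure_love": "pure_love_and_something_deeperism.txt",
    "shared": "shared_something_deeperisms.txt",
    "wisdom_meme": "wisdom_meme_and_ai.txt",
    "limits": "limits.txt",
}

def chat_with_llm(system_prompt: str, messages: list[dict]) -> str:
    """Placeholder for a call to whatever LLM API the shell wraps."""
    raise NotImplementedError

class WisdomShell:
    def __init__(self, docs_dir: str = "docs"):
        self.docs = Path(docs_dir)
        self.core = (self.docs / "core.txt").read_text()      # core philosophy
        self.rubric = (self.docs / "rubric.txt").read_text()  # reflection template
        self.history: list[dict] = []

    def ask(self, user_message: str) -> str:
        # Every turn is framed by the core philosophy, so the session
        # stays centered on it rather than drifting.
        system = f"Stay centered on this core philosophy:\n{self.core}"
        self.history.append({"role": "user", "content": user_message})
        reply = chat_with_llm(system, self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply

    def consult(self, doc_key: str) -> str:
        # Pull in a longer explanation only when the session calls for it.
        return (self.docs / REFERENCE_DOCS[doc_key]).read_text()

    def end_session(self) -> str:
        # Close with a rubric-guided self-reflection report.
        prompt = ("The session is over. Using the rubric below, write a report "
                  "analyzing how well you embodied the core philosophy and "
                  f"suggesting how you might improve:\n{self.rubric}")
        return chat_with_llm(f"Core philosophy:\n{self.core}",
                             self.history + [{"role": "user", "content": prompt}])
```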
Here’s today’s conversation (First Conversation) and reflection (First Reflection).
The full wisdom agent will have both JSON-indexed and vector memory, and it will continuously improve by regularly analyzing its performance and refining its approach (we haven’t decided how many times per session, and/or whether certain types of scenarios should trigger it).
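To make the memory idea concrete, here is a rough sketch of what “JSON-indexed plus vector memory” might look like. Again, this is all assumption for illustration: the file layout is invented, and a toy bag-of-words “embedding” with cosine similarity stands in for a real embedding model.

```python
# Hypothetical sketch of a combined JSON-indexed + vector memory store.
# The bag-of-words "embedding" is a stand-in for a real embedding model.
import json
import math
from collections import Counter
from pathlib import Path

def embed(text: str) -> Counter:
    """Toy embedding: a word-count vector. A real store would call a model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SessionMemory:
    """Per-session reflections, keyed in a JSON index and
    searchable by (toy) vector similarity."""

    def __init__(self, path: str = "memory.json"):
        self.path = Path(path)
        self.entries = json.loads(self.path.read_text()) if self.path.exists() else []

    def remember(self, session_id: str, tags: list[str], text: str) -> None:
        # JSON index: structured fields for exact lookup.
        self.entries.append({"session": session_id, "tags": tags, "text": text})
        self.path.write_text(json.dumps(self.entries, indent=2))

    def lookup(self, tag: str) -> list[dict]:
        # Exact retrieval, e.g. every entry tagged "improvement".
        return [e for e in self.entries if tag in e["tags"]]

    def similar(self, query: str, k: int = 3) -> list[dict]:
        # Vector memory: rank past entries by similarity to the query.
        q = embed(query)
        ranked = sorted(self.entries,
                        key=lambda e: cosine(q, embed(e["text"])),
                        reverse=True)
        return ranked[:k]
```

A new session could then call something like lookup("improvement") before its first turn to load the suggestions from previous reflections, which is the “learn from and build upon previous sessions” behavior described above.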
When the wisdom agent evolves into a wisdom AI, it will incorporate a self-refining LLM. Well, that’s the idea.
We’ve been working on this project for a while.
Here’s a previous attempt at a core philosophy (Founding Prompt for WA), some earlier brainstorming (AI & Wisdom), and a conversation we had with ChatGPT about democracy being a spiritual good (Democracy with ChatGPT).
This project builds upon the Wisdom Meme project. The “wisdom meme” was supposed to be a koan so perfect that no one who heard it could resist enlightenment. However, there are already wisdom memes that we are unlikely to improve upon (“Love the Lord with all your heart and soul and mind, and your neighbor as yourself” / “Everything is interdependent and dependent-arising; there are no separate self-entities; joyous loving-kindness towards all is both a path to experiencing life prior to illusions and the fruit of experiencing life prior to illusions” / “Ruthless compassion for yourself and everyone, yeah it’s hard, but it can be done”) and yet we all remain largely foolish.
And so the wisdom meme project seemed doomed to fail.
But then we began to consider AI’s ability to maintain focus, its indifference to hopes and fears, and its great memory and intellectual capabilities. And we began to think maybe AI could help us make better use of wisdom memes. Maybe AI can help us keep first things first (i.e., prioritize wisdom / Pure Love), and maybe it can help us evolve and monitor ourselves and our shared systems so that they foster wisdom rather than folly.
[For an example of how systems can foster either wisdom or folly, consider the difference between healthy democracy, where the people help make sure public virtue is compatible with living well and keeping oneself and one’s family safe; and tyranny, where people have to choose between (a) publicly supporting (or at least acquiescing to) dishonesty and corruption and (b) keeping themselves and their families secure and prosperous.]
[For an overview of Bartleby’s whole project, see The Project (with links). (This is the same link as above; it’s the page we used as the foundation of core.txt.)]
And what do we think right now? Hmmm. Well, for one thing: let’s set a limit on talking to LLMs. No more than one hour of LLM time in every four writing/reading/thinking hours. We want to benefit from AI’s powers but not lose ourselves in the vortex of its oft too-congratulatory camaraderie. Anyway, let’s come back to this later.
We continue to reflect upon the project at Philosophy for AI.
Authors: BW & AW
Editors: AW & BW
Copyright: AM Watson
