Wisdom Agent
We’ve completed phase two of our wisdom agent project. (Well, we haven’t yet tested that the UI works, so it’s not quite complete.)
The wisdom agent is an agentic shell that keeps multiple AI sessions centered around a core philosophy, and that has memory, allowing each new AI session to learn from and build upon previous AI sessions. Eventually, we hope to evolve the wisdom agent into a self-developing AI. But we envision the shell itself as having many uses besides our main one, which is creating AIs that help humans, as individuals and in groups, grow in wisdom and create and maintain systems (be they technological, organizational, governing, etc.) that select for wisdom rather than folly. For example, companies could use the basic structure to help keep all their work centered around their company values and approved protocols. Or it could be used to create learning tools that help users continue to build upon previous knowledge (be it in math, foreign languages, etc.).
Right now the wisdom agent can adhere to a core philosophy (Core), and reference longer explanations of the philosophy as needed (Pure Love & Something Deeperism / Shared Something Deeperisms / Wisdom Meme & AI). All of that was written almost entirely by us humans and fictional characters. The wisdom agent can also refer to additional guides that were written either largely or in part by Claude: Limits outlines AI’s limitations when it comes to growing human wisdom, and how it might navigate them; Rubric is a template for the self-reflection reports the agent creates at the end of each session.
Here’s today’s conversation (First Conversation) and reflection (First Reflection).
We’ve been working on this project for a while.
Here’s a previous attempt at a core philosophy (Founding Prompt for WA), some earlier brainstorming (AI & Wisdom), and a conversation we had with ChatGPT about democracy being a spiritual good (Democracy with ChatGPT).
This project builds upon the Wisdom Meme project. The “wisdom meme” was supposed to be a koan so perfect that no one who heard it could resist enlightenment. Given that there are already wisdom memes that we are unlikely to improve upon (“Love the Lord with all your heart and soul and mind, and your neighbor as yourself” / “Everything is interdependent and dependent-arising; there are no separate self-entities; compassion for and loving-kindness towards all is both a path to experiencing life prior to illusions and the fruit of experiencing life prior to illusions” / “Ruthless compassion for yourself and everyone, yeah it’s hard, but it can be done”) and yet we remain largely foolish, the wisdom meme project seemed doomed to fail. But then we began to consider AI’s ability to maintain focus, its indifference to hopes and fears, and its great memory and intellectual capabilities. And we began to think maybe AI could help us make better use of wisdom memes. Maybe AI can help us keep first things first, and maybe it can help us evolve and monitor ourselves and our shared systems so that they foster wisdom rather than folly.
[For an example of how systems can foster either wisdom or folly, consider the difference between healthy democracy, where the people help make sure public virtue is compatible with living well and keeping oneself and one’s family safe; and tyranny, where people have to choose between (a) publicly supporting (or at least acquiescing to) dishonesty and corruption or (b) keeping themselves and their families secure and prosperous.]
For an overview of Bartleby’s whole project see The Project (with links).
And what do we think right now? Hmmm. Well, for one thing: let’s set a limit on talking to LLMs. No more than one hour in four. We want to benefit from AI’s powers but not lose ourselves in the vortex of its often too-congratulatory camaraderie. Anyway, let’s come back to this later.
Authors: BW & AW
Editors: AW & BW
Copyright: AM Watson