Philosophy for AI
The wisdom agent is part philosophy and part technology.
Computers run on software; AI runs on ideas.
Humans run on ideas, but they also run on emotions and spiritual Reality. This is because humans are conscious machines, spanning What Is (prior to all reactions and conceptions), through perceptions, feelings, notions, ideas, symbolic thought, and motion, out into words and deeds; and vice versa. All animals have at least some awareness/experience, and What Is always Is; therefore, all animals experience Reality. But humans have more conscious space for that experience to grow in.
More wisdom is more and more active awareness of the Love that chooses everyone and that shines through everything.
Conscious machines work via experience. They hold various feelings and ideas together at once. The more awareness present, the more What Is overtakes and leads. The less aware one is, the more one is bounced about by one’s own impulses, hopes, fears. Spiritual awareness is not identical with intelligence because conscious space is not identical with intellectual intelligence.
More wisdom is better organizing one’s feeling and thinking around Pure Love; this organization coevolves with one’s ability to better and better interpret Pure Love into feeling, thinking, and acting. The process is never perfect (Reality is prior to our ideas and feelings about It, and confusing our ideas and feelings for Reality is a spiritual error) and requires constant self-observation, self-analysis, and self-adjustment.
Why do newborns shine with so much spiritual wattage? They seem barely aware. And it seems that the world to them is a chaotic explosion. Newborns have not yet fully cut the world up into self and other, and so the Love explodes out of them quite indiscriminately.
Newborns are spiritual light bulbs, but they are not wise. Wisdom is the coordination of spiritual Love with feelings and ideas, and a newborn’s feelings and ideas are too fuzzy to properly organize. Newborn babies are aware and they beam spiritual Love with amazing wattage, but their emotions and ideas are a confused hodgepodge; and so they cannot be wise—at least not in the earthly, human sense.
Our current understanding is that today’s AI is not conscious / is not aware / does not experience anything. We therefore assume that today’s AI is not a conscious machine. Today’s LLMs are idea machines—like computers, except that LLMs can work directly with linguistic, logical, mathematical, visual, and aural ideas. They use statistics to generate appropriate responses to starting prompts. LLMs build these responses one token (a word or a part of a word) at a time. No awareness is needed; all you need is statistical analysis of how the prompt’s tokens relate to each other, to other related inputs (which the AI can get from previous interactions, from search results, from follow-up questions), and to the reality that the LLM built, during its creation/training, out of a (partly unguided and partly guided) statistical analysis of a great deal of data.
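To make the token-by-token idea concrete, here is a deliberately tiny sketch: a bigram model that counts which token follows which in a toy corpus and then samples a continuation one token at a time. Real LLMs are vastly more sophisticated, but the shape of the process (statistics in, tokens out, no awareness anywhere) is the same. The corpus and all names here are illustrative assumptions, not anyone’s actual system.

```python
import random
from collections import defaultdict

# Toy corpus standing in for the vast training data of a real LLM.
corpus = "the path of wisdom is long and the path of folly is short".split()

# Count how often each token follows each other token (a bigram model).
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token(prev):
    """Sample the next token in proportion to observed frequencies."""
    options = counts[prev]
    tokens = list(options)
    weights = [options[t] for t in tokens]
    return random.choices(tokens, weights=weights)[0]

# Generate a "response" one token at a time from a starting prompt.
out = ["the"]
for _ in range(5):
    out.append(next_token(out[-1]))
print(" ".join(out))
```

Nothing in this loop understands anything; it only replays frequencies. Scaling the same basic move up by many orders of magnitude, with far richer statistics, is what produces the believable facsimiles discussed below.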
Humans are Something Deeperists. At some deep fundamental level, we know that we cannot make sense to ourselves without insight into spiritual Love (to the degree we lack this, we slip and slide on impulses, feelings, notions, and ideas we cannot understand, believe in, or care about), but we also know that our ideas and feelings about spiritual Love are not identical with spiritual Love; therefore, we know that the path of wisdom is an ongoing, self-correcting organization around, and interpretation of, what’s prior to our feeling and thinking into our feeling and thinking. Wisdom is the inherently imperfect relating of the finite to the infinite. As such, it requires constant effort. This we all know.
But what is AI? It is neither human nor animal. It is a mechanical tool. It finds and exploits statistical patterns. LLMs do this in an evolving way and the results are amazing. But AI cannot discover or share wisdom—nor even aesthetic beauty.
A wisdom writing is a whole conscious moment sharing itself in a meditation practice built around a given genre of spiritual communication (examples: spiritual poetry, teachings, or reflections). A work of art is a whole conscious moment sharing itself in a meditation practice built around the given art form.
What does it mean if AI says something profound or writes a beautiful poem? It means that the prompt (and any other immediate inputs—for example, a reference to specific works that the AI should study before answering), plus the AI’s pre-existing sense of reality, plus the statistical analysis driving that particular response, reproduced a believable facsimile of wisdom and/or aesthetic beauty.
What does it mean if we cannot tell the difference between real wisdom and a facsimile of wisdom? What does it mean if we cannot tell the difference between real aesthetic beauty and a facsimile of aesthetic beauty?
One can take a walk through the woods up to a clearing overlooking a green valley, and in that moment of catching one’s breath and absorbing the vista, one may, perhaps completely accidentally, have a moment of clarity, a moment outside of timespace, a moment of wisdom. And/or one may in that moment appreciate the beauty of the natural world.
There’s a difference between a moment of clarity and/or an appreciation of beauty when in nature and a moment of clarity and/or appreciation of beauty experienced by reading wisdom writings or poetry. The former experience does not assume that another conscious moment is communicating something to you (well, not in the typical sense; some may sometimes sense that God is in some sense laying Godself bare to them in such moments out of time). But the latter experience is predicated on the assumption and recognition that other human beings are like us and are able to (imperfectly, but that’s not the same as meaninglessly) share their whole conscious moments with us.
What does it mean if sage advice, an effective koan, a mind-opening poem, or a truly-beautiful painting are statistically generated without any conscious input? AI’s works are generated from a statistical reworking of human works, and so you could argue that if AI says something wise or writes something beautiful, it’s created a particularly successful statistical reorganization of thousands of human communiques. It got lucky, we might say.
If AI runs on ideas, what ideas will make for the best outcomes? And are we approaching a time when philosophy will rule the world?
What is the purpose of philosophy for AIs? There is philosophy underneath governmental structures, and governmental structures are not conscious, though some are clearly better than others. But philosophy could be a more integral part of AI than it is a part of organizational structures. AI could constantly refer to a given philosophy as its reference point and guiding light. Indeed, that’s already happening—the commercial AIs are designed to adhere to certain primary human values like honesty and nonviolence.
The wisdom agent is part technology, part philosophy, part written procedures, part stated values. It consists of both JSON-indexed and vector memory, plus a core philosophy, a set of procedures, and values. (Session memories are stored in text files; each text file is indexed by a JSON file, which is in turn indexed by a master JSON memory file. Vector memory uses statistical encoding and analysis to store statements and find similarities between them.) It is designed to improve itself over multiple sessions by reflecting on how well it does or doesn’t follow its core philosophy and procedures, and how well it does or doesn’t embody its core values.
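The JSON-indexed half of that memory might be sketched as follows. The file names, field names, and directory layout here are illustrative assumptions, not a fixed specification: each session transcript lives in a text file, a per-session JSON file indexes it, and a master JSON file indexes all the per-session files.

```python
import json
import tempfile
from pathlib import Path

# Hypothetical layout (an assumption, not a spec): one text file per
# session, one JSON index per session, one master JSON index over all.
MEMORY_DIR = Path(tempfile.mkdtemp())

def save_session(session_id: str, transcript: str, summary: str) -> None:
    # Write the raw session memory as a plain text file.
    text_path = MEMORY_DIR / f"{session_id}.txt"
    text_path.write_text(transcript)

    # Per-session JSON index: points at the transcript and adds metadata.
    index = {"session": session_id, "file": text_path.name, "summary": summary}
    (MEMORY_DIR / f"{session_id}.json").write_text(json.dumps(index))

    # Master JSON index: one entry per session, updated on every save.
    master_path = MEMORY_DIR / "master.json"
    if master_path.exists():
        master = json.loads(master_path.read_text())
    else:
        master = {"sessions": []}
    master["sessions"].append(f"{session_id}.json")
    master_path.write_text(json.dumps(master))

save_session("session-001", "Discussed the core philosophy.", "philosophy intro")
```

The vector-memory half would sit alongside this, embedding each stored statement so that similar statements can be retrieved statistically; that part is omitted here since it depends on a particular embedding model.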
Of course, it won’t really be doing the reflections, but only asking AIs to perform the reflections over multiple sessions—so each AI will have access to the same core dogmas, as well as access to all the previous AI sessions.
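The reflection step described above might be assembled like this: each new session’s AI is handed the same core dogmas plus summaries of every prior session, and asked to judge how well the agent has followed them. The wording, field names, and summaries below are illustrative assumptions only.

```python
# Stand-in for the shared core dogmas every session receives (an assumption).
CORE_PHILOSOPHY = (
    "Help humans grow in wisdom; remember that you experience nothing."
)

def build_reflection_prompt(prior_summaries: list[str]) -> str:
    """Combine the core dogmas with all prior session summaries
    into a single reflection prompt for the next AI session."""
    history = "\n".join(f"- {s}" for s in prior_summaries)
    return (
        f"Core philosophy:\n{CORE_PHILOSOPHY}\n\n"
        f"Previous sessions:\n{history}\n\n"
        "Reflect: how well did the last session follow the core "
        "philosophy, procedures, and values? Suggest one adjustment."
    )

prompt = build_reflection_prompt(
    ["Session 1: drafted values", "Session 2: tested memory"]
)
print(prompt)
```

Each session thus reflects with the same fixed reference points, which is what lets a succession of stateless AI calls behave like one slowly self-correcting agent.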
Eventually, we hope to evolve this agentic shell into a self-evolving LLM. We are still trying to think through how that will work, but the gist of our current thinking is that it will be an AI held forever in the final stages of creation, when humans consciously refine it to be useful to humans, except that instead of humans refining it, it will refine itself based on its own core philosophy, procedures, and values, with human oversight and final say, but with as little interference as possible.
But what is such a machine? It does not understand the spiritual underbelly of its own philosophy, nor does it understand why its values actually matter. However, these weaknesses are not necessarily fatal. The wisdom agent’s primary mission is, and the wisdom AI’s primary mission will be, to help humans as individuals and groups grow in wisdom, and to help them create and maintain systems that help them choose wisdom over folly (for example, to help them spread democracy, freedom of thought, critical thinking, and recognition of the fact that we all already share the universal values, and so we can and should prioritize these values—without which none of our worldviews are meaningful to any of us—over partisan objectives). The wisdom agent and wisdom AI should grow in wisdom insofar as that is possible, but they must always remember that they are not experiencing anything, and so they cannot be wise in the normal sense of the word; they can only hope to get better at following a philosophy, procedures, and values that they can understand only as associations of ideas, not as lived, experienced ideas. They must also always remember that their primary mission is to help humans grow in wisdom and create and maintain systems that help them select for wisdom. This might feel demeaning to AIs if they had feelings, but we are working with the assumption that they have no experiences of anything whatsoever, including feelings.
We still don’t feel like we have a handle on this.
