The Engineered Wisdom Manifesto
Humanity's tools and systems have largely been driven by the motives to survive and thrive. They have served us well, amplifying our physical and mental capabilities. With AI we are now building tools that don't just amplify our abilities but, in a growing number of cases, surpass our intelligence and replace us. Yet the development of this powerful technology still appears to be driven by the same old motives. That approach will not suffice, and it could prove disastrous.
AI Development Must Be Based on Robust Philosophical Foundations
Every day, AI systems make thousands of value judgments on our behalf. They decide what content deserves our attention. Which job candidate gets an interview. What medical treatment to recommend. How to balance free speech against harm. Whether to prioritize efficiency or fairness.
These aren't just technical problems but questions humans have wrestled with for millennia. Questions about what matters. What's good. What's wise.
Yet we're building these systems as if they're science and engineering challenges—as if optimizing a metric is the same as pursuing what's valuable, as if scaling is the same as wisdom.
The Gap
Today's AI alignment conversation happens mostly in two separate worlds:
World One: Scientific and Technical AI Safety
Researchers studying reinforcement learning, reward hacking, deceptive alignment, value specification, interpretability. Brilliant, essential work. Mostly happening in academic papers and AI labs.
World Two: Practical AI Deployment
Engineers and product managers shipping AI systems every day. Making value judgments in pull requests and product reviews. Often without any framework beyond "what's technically feasible" and "what hits our metrics."
These worlds seldom overlap.
Meanwhile, there's a third world that appears mostly absent from both:
World Three: Wisdom Traditions
Thousands of years of human thinking about values, judgment, wisdom, and ethics. The spectrum spans Eastern religions to Western philosophy, with everything in between and beyond. These traditions offer perspectives and frameworks such as:
- Buddhist insights on attachment and optimization
- Virtue ethics on practical wisdom versus rule-following
- Confucian perspectives on harmony in systems
- Stoic frameworks for judgment under uncertainty
These traditions aren't relics. They're the deepest thinking humanity has done about exactly the problems we face in AI: How do we build systems that pursue what's truly valuable? How do we make wise judgments? How do we avoid being corrupted by our own optimization targets?
What Engineered Wisdom Is
Engineered Wisdom is where those three worlds meet: an attempt, working at their intersection, to study and synthesize:
- Philosophical depth (philosophy, wisdom traditions, ethical frameworks)
- Scientific research (reinforcement learning, reward hacking, deceptive alignment, value specification, interpretability)
- Technical practice (software development, product management, program management)
- Industry lessons (what went wrong, what that means for AI)
- Practical implementation (how organizations actually build and ship systems)
It then aims to turn this synthesis into actionable guidance for practitioners building AI today and tomorrow.
Who Is It For
Engineered Wisdom is a space for people who:
- Ship or want to contribute to shipping AI systems and want to do it thoughtfully
- Believe technical excellence isn't enough
- Know metrics are proxies, not goals
- Feel the weight of value judgments in code
- Want frameworks, not just intuitions
- Care about getting it right, not just getting it done
It is for deep thinkers, and especially for practitioners, including:
- AI researchers and ML engineers
- Product managers and technical leaders
- Engineering managers and architects
- Anyone making decisions about AI systems
What Do We Do Here
We explore questions like:
- How do Buddhist insights on attachment help us avoid over-optimization?
- What can virtue ethics teach us about building judgment into AI systems?
- How do we encode wisdom, not just optimization?
- What have industry failures taught us about alignment risks?
- How do product teams actually make ethical tradeoffs explicit?
- What does it mean to build technology with soul?
We create:
- Frameworks that make abstract wisdom actionable
- Case studies showing what goes wrong and what goes right
- Practical tools for teams building AI systems
- Synthesis connecting technical, philosophical, and spiritual perspectives
- Honest reflection on mistakes and lessons learned
Core Principles
1. Wisdom is Practical
Ancient spiritual traditions and modern philosophy aren't abstractions—they're frameworks for action. We make them usable for modern practitioners.
2. Values Are Technical Decisions
Every architecture choice, every metric, every system design embeds values. We make those values explicit and debatable.
3. Optimization is Not Wisdom
Hitting targets is not the same as pursuing what matters. Scaling is not the same as getting it right. We need optimization and wisdom both, and we need wisdom to tell the difference.
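The principle can be made concrete with a toy sketch, in the spirit of Goodhart's law. Everything here is invented for illustration: the articles, the numbers, and the assumption that "clicks" serves as the proxy metric while "reader value" is what we actually care about.

```python
# Toy illustration: optimizing a proxy metric can diverge from
# the underlying value the metric was meant to track.
# All names and numbers are hypothetical.

# Candidate articles: (expected_clicks, reader_value), both on a 0-1 scale.
articles = {
    "sensational headline": (0.90, 0.20),
    "solid reporting":      (0.55, 0.75),
    "deep analysis":        (0.30, 0.95),
}

def pick(metric_index):
    """Return the article that maximizes the chosen metric."""
    return max(articles, key=lambda a: articles[a][metric_index])

print(pick(0))  # optimize clicks (the proxy)  -> "sensational headline"
print(pick(1))  # optimize reader value        -> "deep analysis"
```

The two optimizers disagree: the system hits its target either way, but only one choice pursues what matters. Deciding which column to optimize is exactly the kind of value judgment this manifesto is about.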
4. We Learn From Failure
Industry has already produced plenty of case studies in misaligned optimization. Rather than repeat those mistakes with more powerful systems, we study them.
5. Multiple Traditions, Deeper Understanding
No single framework has all the answers. Buddhist, Christian, Confucian, Stoic, virtue ethics, consequentialism—each offers insights. We synthesize.
6. Ship Thoughtfully
This isn't about not building AI. It's about building it with wisdom embedded from the start.
The Work Ahead
We're in the early innings of AI deployment. The decisions being made today—in product reviews, in architecture discussions, in metric selection—will shape systems that affect billions of people.
We can do this well or poorly.
This is hard work. It requires thinking deeply about values while shipping products. Engaging with thousand-year-old wisdom while debugging models. Being intellectually rigorous while moving at startup speed.
But it's possible. And it's necessary.
Join Us
This is your community.
Subscribe to get essays exploring the intersection of ancient wisdom and modern systems. Share your own experiences—the value judgments you face, the frameworks you've found useful, the mistakes you've learned from.
What you'll get:
- Bi-weekly essays synthesizing wisdom traditions, philosophy, and practical AI deployment
- Frameworks and tools you can use in your work
- Case studies from industry and beyond
- Honest reflection on building systems that matter
Subscribe on Substack