⚠️ Work in progress. Use at your own risk :)📝 This README is subject to change.
Design, test, and secure your AI assistants before they reach your users.
Aletheia is an open-source platform that guides you through the full lifecycle of an AI assistant, from first draft to production-ready. Think of it as a test bench for your AI. You don't use it in production, you use it before production, to make sure what you ship is solid.
Aletheia works for everyone. If you're not technical, everything is guided : you write in plain language and click through simple steps. If you are a developer, the full power is always one layer deeper : raw editors, code, and APIs when you need them.
Most teams build AI assistants by trial and error. Instructions pile up in Notion docs, Slack threads, or random files. There's no process to test them, no way to catch problems before deployment, and no shared space where everyone on the team can contribute.
You ship, hope for the best, and fix things after they break.
Aletheia changes that.
Design → Test → Secure → Evaluate → Version → Ship
1. Design : Write and iterate on your AI assistant in a clean editor. Use placeholders, logic, and reusable blocks to build something that adapts to any situation, no code needed.
2. Test : Try your assistant with real inputs. See exactly what it says. Replay tricky situations, compare results, and adjust until it behaves the way you want.
3. Secure : Before you ship, Aletheia checks your assistant for weaknesses and tells you in plain language what could go wrong and how to fix it.
4. Evaluate : Describe what a good response looks like in plain language. Aletheia checks every new version automatically against your criteria.
5. Version : Every change is saved. Go back to any previous version in one click. Compare two versions side by side.
6. Ship : Once everything looks good, send your assistant to wherever it needs to go.
Before you ship, Aletheia runs a series of checks to make sure your assistant can't be tricked or misused.
- Can a user manipulate your assistant to make it say things it shouldn't?
- Can someone extract your instructions by asking the right questions?
- Can your assistant be pushed to ignore its rules?
- Are there situations where your assistant behaves in unexpected or harmful ways?
Each check gives you a plain language result and a concrete suggestion to fix it. No security knowledge needed.
Custom rules
You can add your own rules in plain language:
"Never mention competitor names"
"Always respond in the same language as the user"
"Refuse any request unrelated to customer support"
Want to go further? Developers can write their own custom checks in code.
| Layer | Technology |
|---|---|
| Frontend | React |
| Backend | Elysia.js (TypeScript) |
| Database | PostgreSQL + Prisma |
| Auth | Better Auth |
Aletheia works with any OpenAI-compatible provider : cloud or local. Bring your own model.
Not a developer? A hosted version is coming soon.
# Clone the repository
git clone https://github.com/ChrysosLab/aletheia.git
cd aletheia
TODO
V1 : Prompts & AI assistants
- AI assistant editor with versioning
- Security test suite
- Quality checks in plain language
- Playground
V2 : Agents
V3 : Multi-agents
Contributions are welcome. Open an issue or submit a pull request.
MIT : free to use, modify, and distribute.
Aletheia : from the Greek ἀλήθεια, meaning truth and disclosure. Also an Isu from Assassin's Creed who guides, judges, and reveals.