As AI agents make more and more decisions autonomously on our behalf, we face a dilemma:
Should we take moral responsibility for their actions, or should we assume that the responsibility lies with those who trained the LLMs running these agents?
This choice is not just about our agents, but also about our own freedom: without responsibility, there can be no real freedom.
To help individuals, corporations, and governments take an active, explicit, and transparent stance on the ethical frameworks their agents should follow, we propose a new file format called VALUES.md.
Just as Claude Code uses CLAUDE.md files to follow user preferences on syntax, design patterns, and style, we propose that all AI agents autodiscover a VALUES.md file and actively include it in their system prompt whenever possible.
Discover your values through 12 ethical dilemmas
Your responses are stored locally for privacy. You can optionally contribute anonymously to research at the end.