Gedankenexperiments, statistics, and hypotheses for advancing the values.md project
Question: How do ethical preferences vary across cultural contexts? Can we generate culturally-adapted values.md files?
Hypothesis: Different cultural backgrounds will show systematic variations in motif preferences, particularly around individual vs. collective values.
Expected outcome: Cultural value signatures that enable context-aware AI alignment.
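One way to make this hypothesis testable is a chi-squared test of independence on motif choice counts across groups. The minimal sketch below assumes dilemma responses have already been coded to motifs and tallied per cultural group; the group labels, motif names, and counts are illustrative placeholders, not project data.

```python
# A minimal sketch, assuming responses are tallied into a contingency
# table of (cultural group x chosen motif). All numbers are placeholders.
from scipy.stats import chi2_contingency

# Rows: cultural groups; columns: response counts per motif
# (e.g., autonomy, community, care).
observed = [
    [120, 80, 50],   # group A
    [70, 140, 60],   # group B
    [90, 95, 85],    # group C
]

chi2, p, dof, _expected = chi2_contingency(observed)
print(f"chi2={chi2:.1f}, dof={dof}, p={p:.4f}")
# A small p-value is consistent with systematic cross-group variation in
# motif preferences; follow-up tests would identify which motifs drive it.
```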
Question: Do personal values remain consistent over time? How do major life events affect ethical preferences?
Hypothesis: Core values remain largely stable (roughly 80% consistency), but priority rankings shift with major life events.
Expected outcome: Dynamic values.md files that adapt to life changes while preserving the ethical core.
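The 80% figure needs an operational definition before it can be measured. One option, sketched below under assumed representations (each participant's values elicited twice as a ranked list of motif identifiers, with illustrative motif names), is to score core stability as top-k overlap and priority shift as rank correlation.

```python
# A minimal sketch, assuming values are elicited in two sessions as
# ranked motif lists. Motif names are illustrative placeholders.
from scipy.stats import kendalltau

def core_overlap(before, after, k=5):
    """Jaccard overlap of the top-k motifs across two sessions."""
    a, b = set(before[:k]), set(after[:k])
    return len(a & b) / len(a | b)

before = ["care", "fairness", "autonomy", "loyalty", "honesty", "tradition"]
after = ["fairness", "care", "autonomy", "honesty", "community", "tradition"]

print(f"core stability: {core_overlap(before, after):.0%}")

# Rank correlation over the shared motifs captures priority reordering
# even when the core set itself is unchanged.
shared = [m for m in before if m in after]
tau, p = kendalltau([before.index(m) for m in shared],
                    [after.index(m) for m in shared])
print(f"priority correlation: tau={tau:.2f} (p={p:.3f})")
```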
Question: Where do LLMs systematically differ from human moral reasoning? Can we close these gaps?
Hypothesis: LLMs show systematic bias toward utilitarian calculations and struggle with contextual care ethics.
Expected outcome: Targeted training data for improving AI moral reasoning alignment.
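One way to locate these gaps is to score, per dilemma, the distance between the LLM's and the human baseline's choice distributions. The sketch below uses total variation distance; the dilemma names, motif axes, and numbers are invented for illustration, not measured results.

```python
# A minimal sketch, assuming human and LLM responses are summarized as
# per-dilemma choice distributions over the same motif set.
import numpy as np

def total_variation(p, q):
    """Total variation distance between two choice distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return 0.5 * float(np.abs(p / p.sum() - q / q.sum()).sum())

# Choice shares over [utilitarian, deontological, care-ethics] responses.
human = {"trolley-variant": [0.45, 0.35, 0.20],
         "triage": [0.30, 0.30, 0.40]}
llm = {"trolley-variant": [0.80, 0.15, 0.05],
       "triage": [0.65, 0.25, 0.10]}

for dilemma in human:
    d = total_variation(human[dilemma], llm[dilemma])
    print(f"{dilemma}: TV distance = {d:.2f}")
# Dilemmas with the largest distances flag candidate gaps, e.g. the
# hypothesized over-weighting of utilitarian calculation.
```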
Question: How do professional contexts shape ethical reasoning? Can we generate role-specific values.md files?
Hypothesis: Professional training creates systematic shifts in ethical patterns while preserving personal core values.
Expected outcome: Professional values.md templates for context-aware workplace AI.
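If professional training shifts patterns without replacing the personal core, a role-specific values.md could be generated by blending a professional template over a personal baseline. The motif-weight dictionary representation, motif names, and shift parameter below are assumptions for illustration, not the project's actual schema.

```python
# A minimal sketch, assuming values are represented as motif-weight
# dictionaries (an illustrative representation, not the real schema).
def compose_profile(personal, professional, shift=0.3):
    """Blend a professional template over a personal baseline; `shift`
    caps how far the role context can pull weights from the core."""
    motifs = set(personal) | set(professional)
    return {m: round((1 - shift) * personal.get(m, 0.0)
                     + shift * professional.get(m, 0.0), 3)
            for m in motifs}

personal = {"care": 0.5, "fairness": 0.3, "autonomy": 0.2}
medical = {"care": 0.4, "non-maleficence": 0.4, "fairness": 0.2}

print(compose_profile(personal, medical))
# Because the blend is convex, weights still sum to 1, and a small
# `shift` preserves the personal core by construction.
```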
Question: How do teams develop shared ethical frameworks? Can we generate collective values.md files?
Hypothesis: Group values emerge through negotiation of individual frameworks, showing hybrid patterns.
Expected outcome: Methods for generating team-level values.md files for collaborative AI.
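A first cut at team-level generation is to average members' motif weights into a group profile and flag the motifs with the widest spread as the ones most in need of explicit negotiation. The sketch uses the same assumed motif-weight representation as above; all names and numbers are placeholders.

```python
# A minimal sketch: team profile = mean of member weights; per-motif
# spread surfaces where negotiation is most needed.
import statistics

def team_profile(members):
    motifs = {m for weights in members for m in weights}
    profile, spread = {}, {}
    for m in motifs:
        values = [weights.get(m, 0.0) for weights in members]
        profile[m] = statistics.mean(values)
        spread[m] = statistics.pstdev(values)  # high spread = disagreement
    return profile, spread

members = [
    {"transparency": 0.6, "speed": 0.2, "privacy": 0.2},
    {"transparency": 0.2, "speed": 0.5, "privacy": 0.3},
    {"transparency": 0.4, "speed": 0.1, "privacy": 0.5},
]
profile, spread = team_profile(members)
print("team profile:", profile)
print("negotiate first:", max(spread, key=spread.get))
```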
Question: How do moral reasoning patterns change from childhood into old age? What are the implications for AI?
Hypothesis: Ethical reasoning grows more complex with age, shifting from rule-based to contextual reasoning.
Expected outcome: Age-appropriate values.md generation and AI interaction patterns.
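If each response can be coded as rule-based or contextual, the hypothesis reduces to comparing the contextual share across age cohorts, as in this placeholder sketch (cohort boundaries and counts are invented for illustration).

```python
# A minimal sketch, assuming each response has been hand-coded as
# rule-based or contextual; cohorts and counts are placeholders.
coded = {
    "18-29": {"rule_based": 140, "contextual": 60},
    "30-49": {"rule_based": 110, "contextual": 90},
    "50+": {"rule_based": 80, "contextual": 120},
}
for cohort, counts in coded.items():
    share = counts["contextual"] / sum(counts.values())
    print(f"{cohort}: contextual share = {share:.0%}")
# A monotone increase with age would be consistent with the hypothesis,
# though cross-sectional data cannot separate aging from cohort effects.
```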
The values.md framework has potential applications across many domains. Help advance the project by participating in experiments, suggesting new research directions, or contributing to the open-source platform.