Creating space for the next breakthrough in AI safety research.
The field of AI safety is moving fast, and brilliant minds are often left behind simply because they aren't inside a major lab.
Our vision is a world where the next key insight in interpretability comes from a philosophy major, an autodidact, or an undergraduate CS student. We are building the "Watercooler": the place where informal, high-trust conversations happen before ideas become formal papers.
"Bridging the cultural gap between LessWrong style rationalists and academic ethicists through intellectual honesty."
We believe that meaningful progress in mechanistic interpretability requires diverse perspectives. The most important safety insights often emerge from unexpected places, from people who bring fresh eyes to problems that insiders have grown accustomed to.
Our community serves as a decentralized research pipeline, connecting junior researchers, students, and independent thinkers with the broader AI safety ecosystem. We provide the infrastructure for high-trust dialogue without the barriers of formal institutions.
Whether you're a philosophy major questioning the foundations of alignment, an engineer curious about interpretability, or a student just beginning to explore these questions, you belong here.