We foster high-trust, informal dialogue among junior safety researchers, students, and engineers who feel alienated by formal corporate labs.
The field of AI safety is moving fast, often leaving brilliant minds behind simply because they aren't inside a major lab.
Our vision is a world where the next key insight in interpretability comes from a philosophy major, an autodidact, or an undergraduate CS student. We are building the "Watercooler"—the place where informal, high-trust conversations happen before they become formal papers.
"Bridging the cultural gap between LessWrong style rationalists and academic ethicists through intellectual honesty."
The principles that guide our discourse and research.
We prioritize asking the hard questions over defending established positions. We explore mechanistic interpretability not just to solve it, but to understand it.
We admit when we are confused. We value truth-seeking over status-seeking. High-trust dialogue requires vulnerability about what we don't know.
We actively bridge the cultural divide. Whether you come from a rationalist blog background or a formal philosophy department, you belong here.
How we collaborate, learn, and grow together.
Join our daily Zoom calls where we dissect the latest papers from Redwood Research, Anthropic, and independent alignment researchers. We don't just skim; we analyze methodologies, critique assumptions, and brainstorm follow-up experiments.
Breaking into AI safety is hard. We anonymously pair senior researchers with juniors, students, and autodidacts for unfiltered career advice and technical guidance.
Designed for: junior safety researchers, students, engineers, and autodidacts.
Our tech stack is simple because the value is in the people. We operate a standard Discourse forum for long-form thought and a Discord server for the daily watercooler chats.
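For the curious, the two pieces wire together with almost no glue: Discourse exposes its topic list as plain JSON, and Discord channels accept incoming webhooks. Below is a minimal, hypothetical sketch of a bridge that announces new forum threads in the chat server. The forum URL, the webhook URL, and the script itself are illustrative placeholders under those assumptions, not a description of our actual setup.

```python
"""Hypothetical sketch: announce new Discourse topics in a Discord channel.

Assumes a public Discourse instance and a Discord incoming webhook;
both URLs below are placeholders, not our real endpoints.
"""
import requests

DISCOURSE_URL = "https://forum.example.org"  # placeholder forum address
WEBHOOK_URL = "https://discord.com/api/webhooks/<id>/<token>"  # placeholder webhook


def latest_topics(limit: int = 5) -> list[dict]:
    # Discourse serves its front-page topic list as JSON at /latest.json
    resp = requests.get(f"{DISCOURSE_URL}/latest.json", timeout=10)
    resp.raise_for_status()
    return resp.json()["topic_list"]["topics"][:limit]


def announce(topic: dict) -> None:
    # Discord incoming webhooks accept a simple JSON payload with a "content" field
    link = f"{DISCOURSE_URL}/t/{topic['slug']}/{topic['id']}"
    requests.post(
        WEBHOOK_URL,
        json={"content": f"New thread: {topic['title']}\n{link}"},
        timeout=10,
    )


if __name__ == "__main__":
    for topic in latest_topics():
        announce(topic)
```

Run on a schedule (cron or similar), a script like this keeps the long-form forum and the watercooler chat loosely coupled without any shared database or hosting beyond the two off-the-shelf platforms.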