This is a weekly newsletter listing newly announced AI safety events and training programs. Visit AISafety.com/events-and-training for the full directory of upcoming events and programs.
New events
ControlAI Policy Sprint June 13 (online).
1-day sprint red teaming the policy framework "A Narrow Path" – a comprehensive plan to address extinction risks from ASI, developed by ControlAI. It is already being actively pushed to lawmakers in the UK and US through their Direct Institutional Plan, making this red teaming exercise relevant to real-world policy implementation.

AI Safety Meetup Los Angeles June 19 (Los Angeles, USA).
Co-hosted by BlueDot Impact, the AI Safety Awareness Project (AISAP), and AE Studio. This is a chance to hear from AE Studio's alignment team, engage in discussions, connect with others working on AI safety, and meet alumni from BlueDot programs, AISAP workshops, and members of local communities interested in AI safety.

AI Futures Forum June 21 (Berlin, Germany).
Organized by BlueDot Impact and AI Safety Berlin, this event is designed to spark meaningful conversations, share opportunities, and facilitate lasting connections within Berlin’s AI safety ecosystem. There will be flash talks and small group discussions, followed by open space for deep dives, networking, and collaboration.

Ooty AI Alignment Retreat 2.0 June 23–30 (Ooty, India).
Retreat involving hands-on projects and discussions at the intersection of AI safety and “post-rationalist thinking”. There will be coding, AI-assisted writing sprints, strategy talks, forecasting tabletop exercises, and contemplative practices like meditation. Non-technical people are welcome.

EA Summit: Santiago July 19 (Santiago, Chile).
A day of talks, workshops, and discussions (mostly in Spanish) focused on high-impact projects in effective altruism – including AI safety. Whether you're just discovering EA or have been working on pressing global challenges for years, the summit is an opportunity to connect, exchange ideas, and discover new ways to make a difference.

EAGxSãoPaulo 2025 August 22–24 (São Paulo, Brazil).
3-day conference designed for people committed to using evidence and reason to maximize their positive impact on the world – including those working on AI safety. The event fosters deep learning and high-quality connections around some of the most pressing global challenges.

EA Summit: Philippines September 27 (Manila, Philippines).
1-day event for individuals across the Philippines who are thinking seriously about how they can make the world a better place – including through AI safety. It will bring together people from a wide range of fields to explore how evidence, reason, and compassion can guide our efforts toward doing the most good.

EAGxAustralasia 2025 November 28–30 (Melbourne, Australia).
This is the major event of the year for the effective altruism community across Australia and New Zealand – including those working on AI safety. This year’s conference will be filled with opportunities to learn, connect, and boost your impact.
New training programs
Impact Research Groups July 12 – September 7 (London, UK).
Research program helping students take the first steps toward a high-impact research career. Over the course of 8 weeks, participants work in small teams to explore a research question in a key focus area, including AI governance and technical AI safety. Guidance from experienced mentors is provided.
In a time when AI is advancing at unprecedented speed, a few voices are quietly choosing a harder path: one that puts safety before scale, wisdom before hype, and humanity before power.

A new initiative called “Safe Superintelligence Inc.”, founded by Ilya Sutskever, Daniel Gross, and Daniel Levy, is a lab built around a single goal: to develop AGI that is safe by design, not just by hope or regulation.

If you're someone with world-class technical skills and the ethical depth to match, this is your call to action. We don’t need more AI – we need better, safer, more compassionate AI. Spread the word. Support the mission.

https://ssi.safesuperintelligence.network/p/our-team/