AI Safety Events & Training: 2025 week 25 update
This is a weekly newsletter listing newly announced AI safety events and training programs. Visit AISafety.com/events-and-training for the full directory of upcoming events and programs.
New events
AI Safety Communicators Meet-up June 24 (online).
Aiming to bring together AI safety communicators from around the world, this event is an opportunity to meet others doing similar work, share insights, and explore potential collaborations. Hosted by AISafety.info (a project of Rob Miles), the event will consist of lightning talks and speed-networking.
Exploring Multi-Agent Risks from Advanced AI June 26 (online).
The first in a new monthly seminar series, "Updates in Cooperative AI", from the Cooperative AI Foundation (CAIF). This seminar will explore the risks presented by AI-AI interactions within multi-agent systems, based on key findings from CAIF's recent report, which features contributions from Lewis Hammond, Gillian Hadfield, and Michael Dennis.
Debate: Will there be an intelligence explosion? July 1 (London, UK).
There has been growing discussion about the possibility of making much faster progress in AI by automating the research pipeline. Debating this likelihood and its possible effects will be Tyler Cowen, Michael Webb, Tom Davidson, and Connor Leahy. This event will bring together a group from policy, media, investing, and AI research to attend the debate and a reception afterwards.
Post-AGI Civilizational Equilibria (PACE Workshop) July 14 (Vancouver, Canada).
This workshop will address the technical and institutional questions of how to safeguard human interests after AI surpasses human abilities. The hope is to gather insights from across various domains about the roles humans could play in a world transformed by AGI, and which positive equilibria are stable, if any.
New training programs
Algoverse AI Safety Program July 14 – October 3 (online).
12-week, part-time, mentored fellowship to help university students and professionals get started in AI safety research, especially those who already have programming and ML experience. The fellowship will involve an exploration of AI safety research agendas, technical exercises to upskill in relevant domains, and final research projects.
ML4Good France August '25 August 30 – September 7 (France).
Intensive in-person bootcamp aiming to equip participants with the technical skills and critical understanding needed to tackle important AI safety challenges. There will be peer-coding sessions, presentations by experts in the field, reviews of AI safety literature, personal career advice and mentorship, and discussion groups. This edition is aimed at those currently based in Western Europe.
ML4Good Brazil August '25 August 30 – September 7 (Brazil).
As above, but aimed at those currently based in South America.
ML4Good Germany September '25 September 6–14 (Germany).
As above, but aimed at those currently based in Central and Eastern Europe.
ML4Good Canada September '25 September 13–21 (Canada).
As above, but aimed at those currently based in Canada and the Eastern USA.
ML4Good UK September '25 September 15–23 (UK).
As above, but aimed at those currently based in the UK and Ireland.
ML4Good Singapore September '25 September 20–28 (Singapore).
As above, but aimed at those currently based in Southeast Asia.
ML4Good Governance September '25 September 18–26 (France).
Unlike ML4Good's technical AI safety bootcamps, this one focuses on governance, strategy, and communication. Workshops will include examining various types of governance, developing scenario plans for AI deployment outcomes, understanding practical policy levers and intervention points, and learning to articulate AI risks clearly and adapt messages for diverse audiences.