AI Safety Events & Training: 2025 week 20 update
This is a weekly newsletter listing newly announced AI safety events and training programs. Visit AISafety.com/events-and-training for the full directory of upcoming events and programs.
New events
Summer Institute on Law and AI July 11–15 (near Washington D.C., USA).
5-day gathering of law students, professionals, and academics eager to explore pressing issues at the intersection of AI, law, and policy. Hosted by the Institute for Law & AI, the event aims to equip current and future legal scholars with the knowledge, tools, and networks needed to ensure that advances in AI are beneficial and safe for everyone.
LessWrong Community Weekend 2025 August 29 – September 1 (Berlin, Germany).
The world’s largest rationalist social gathering brings together 250+ aspiring rationalists from across Europe and beyond for 4 days of intellectual exploration and socialising. The event has an unconference format; while many of the activities are unrelated to AI safety, those are optional, and there will be plenty of AI safety people there to make attending worthwhile.
New training programs
Machines with Morals: interdisciplinary perspectives June 5 (Cranfield, UK or online).
This free workshop will focus on machine ethics, bringing together people from a range of disciplines across academia, industry, and government. There will be 5 keynote speeches, followed by a hackathon. The aim is to foster a community of researchers, academics, and industry practitioners interested in the question of how we can embed human values in a robot.
Finnish Alignment Engineering Bootcamp (FAEB) 2025 June 16 – July 27 (Helsinki, Finland & online).
6-week technical AI safety bootcamp run by Tutke and based on the ARENA curriculum. The program includes 5 weeks of remote learning followed by a collaborative project week in Helsinki. Best suited for those with a strong math background and Python coding experience who want to contribute to ensuring AI has a positive long-term impact.
AI Safety, Ethics and Society Course June 23 – September 14 (online).
Run by the Center for AI Safety (CAIS), this course aims to provide a comprehensive introduction to how current AI systems work, why many experts are concerned that continued advances in AI may pose severe societal-scale risks, and how society can manage and mitigate these risks. It's based on the textbook by Dan Hendrycks, and no prior technical knowledge is necessary.
Alignment Research Bootcamp Oxford (ARBOx2) June 30 – July 11 (Oxford, UK).
Run by OAISI, this 2-week intensive bootcamp will help participants rapidly build skills in ML safety, including building GPT-2-small, learning interpretability techniques, understanding RLHF, and replicating key research papers. Ideally suited for those new to mechanistic interpretability who have basic familiarity with linear algebra, Python, and AI safety.
Human-aligned AI Summer School 2025 July 22–25 (Prague, Czechia).
4 intensive days of talks, workshops, and discussions covering the latest trends in AI alignment research and broader framings of the field. The school focuses on teaching and exploring approaches and frameworks rather than presenting the latest research results. The content is mostly technical – it is assumed that attendees understand current ML approaches and some of the underlying theoretical frameworks.
AI Brains Accelerator August 6 – November (Washington D.C., USA & online).
Speculative Technologies is running a special cohort of its Brains Accelerator for ambitious AI research programs, with a particular focus on security and governance capabilities. This program is meant to help talented researchers with experience in AI hardware and software build skills, refine ideas, and make the connections needed to spin up coordinated research programs in governments and nonprofits.
Featured resource
If Anyone Builds It, Everyone Dies – this new book from Eliezer Yudkowsky and Nate Soares is a no-nonsense primer on why building artificial superintelligence using current techniques will predictably lead to human extinction. Pre-ordering it now will help the book reach bestseller lists and thereby a wider audience, so please consider doing so.