This is a weekly newsletter listing newly announced AI safety events and training programs. Visit AISafety.com/events-and-training for the full directory of upcoming events and programs.
New events
Introduction to AI Evals June 11 (San Francisco, USA).
Workshop from the AI Safety Awareness Project (AISAP) teaching practical skills in evaluating LLMs and AI agents. Key concepts will include using LLM APIs, understanding jailbreaks, understanding why evaluations are essential yet challenging, and insights from Anthropic's “Alignment Faking” paper. Prior Python experience is required.
Applied AI Safety: Real-World Risk Mitigation June 14 (Dubai, UAE).
Half-day event hosted by CodersHQ, an initiative under the UAE Minister of State for AI, exploring practical strategies and frameworks for AI safety, governance, and risk mitigation. Talks will cover both AI ethics and AI safety.
New training programs
AISST Introductory Technical AI Safety Fellowship June 16 – August 4 (Cambridge, USA).
Harvard's AI Safety Student Team (AISST) is running its regular 8-week introductory reading group on AI safety, covering topics like neural network interpretability, learning from human feedback, goal misgeneralization in reinforcement learning agents, and eliciting latent knowledge. Aimed primarily (but not exclusively) at undergraduate, master's, and graduate students at Harvard.
AISST Introductory AI Policy Fellowship June 16 – August 4 (Cambridge, USA).
Every semester, AISST also runs an 8-week introductory reading group on the foundational policy and governance issues posed by the development of advanced AI systems. The fellowship aims to introduce students interested in AI policy and governance to risks from advanced AI, and meets weekly in small groups.
Canadian AI policy course June 25 – July 30 (online).
Run by AIGS Canada, the course will involve a small, focused cohort of people looking to jumpstart their careers in Canadian AI policy. Over 6 weeks, participants will obtain an understanding of the Canadian AI policy landscape and the opportunities it affords for reducing catastrophic risk.
AI Security Bootcamp (AISB) August 4–29 (London, UK).
Intensive program designed to bring researchers and engineers up to speed on security fundamentals for AI systems. The bootcamp will cover cybersecurity fundamentals (cryptography, networks), AI infrastructure security (GPUs, supply chain security), and more novel attacks on ML systems (dataset trojans, model extraction).
ARENA 6.0 September 1 – October 3 (London, UK).
The Alignment Research Engineer Accelerator (ARENA) is a 4–5 week ML bootcamp focused on AI safety. Its mission is to provide talented individuals with the skills, tools, confidence, and connections they need to upskill in ML engineering and contribute directly to AI alignment in technical roles.