AI Safety Events and Training: 2024 Week 47 update
This is a weekly newsletter listing newly announced AI safety events and training programs. Visit AISafety.com/events-and-training for the full list of upcoming events and programs.
Events
AI Safety: X Risks or Spectrum of Risks? Singapore Roundtable November 29, 2024 (online).
The Singapore Roundtable aims to foster a comprehensive understanding of AI safety in Singapore, an emerging global leader in AI innovation and governance. Join us for a dialogue with leading AI policy and governance experts to learn about the technical and regulatory challenges and opportunities faced by one of the fastest-growing markets in AI adoption. We bring together leading experts, policymakers, and stakeholders in the field of AI to discuss the unique challenges and opportunities of AI safety in the context of Singapore and Southeast Asia.
AI Evaluations - Research Sprint November 30, 2024 (UK).
Competition to design evaluation tasks that will be used in pre-deployment testing of the world's most advanced AI models. This competition is run in partnership between Arcadia Impact and the UK's AI Safety Institute.
Recurse Center's AI Safety Workshop December 7, 2024 (USA).
The Recurse Center, in partnership with the AI Safety Awareness Foundation, is hosting a comprehensive one-day workshop on AI safety and development. The event features a general introduction to the current AI landscape and safety concerns, followed by two specialized technical tracks: Track A for Python programmers new to AI (focusing on neural networks and analyzing malicious reinforcement learning agents), and Track B for experienced LLM developers (covering GPT-2 training and transformer interpretability).
Training opportunities
CLR S-Risk Foundations Course Winter 2024 December 9 – January 18, 2025 (online).
The Foundations Course is designed to introduce people to CLR's research on how transformative AI (TAI) might be involved in the creation of large amounts of suffering (s-risks). Our priority areas for addressing these risks include work on multiagent AI safety, AI governance, epistemology, risks from malevolent or fanatical actors, and macrostrategy.
AISI - Residency on Autonomous Systems Team, 6 months in 2025 (UK).
You'll be mentored by a multi-disciplinary team of scientists, engineers, and domain experts on autonomy risks. You will work alongside a team of other scholars to build evaluations.
NeurIPS 2024
This year’s NeurIPS, from Dec 10 through Dec 15, is happening in Vancouver. We identified the following safety-related workshops:
Socially Responsible Language Modelling Research (SoLaR) 2024, Dec 14
Pluralistic Alignment, Dec 14
Safe Generative AI, Dec 15
Evaluating Evaluations: Examining Best Practices for Measuring Broader Impacts of Generative AI, Dec 15
Regulatable ML: Towards Bridging the Gaps between Machine Learning Research and Regulations, Dec 15