Events

Upcoming events

May 15–17, 2024

Computational Mechanics for AI Safety
Adam Shai (PIBBSS)

Hybrid: online / in person in Berkeley

Join us for an exciting hackathon where we’ll leverage Computational Mechanics to understand and control neural network behavior. This new approach to AI safety and interpretability is built upon a rigorous mathematical framework from physics and enables precise predictions about the internal geometry of AI systems, overcoming limitations of current interpretability methods. Collaborate with researchers and enthusiasts to stress-test the approach, create benchmarks, and develop a science of AI cognitive capabilities. Help advance AI safety and build a safer, more controllable AI future. Sign up now to be part of this new approach to AI interpretability and safety, and for your chance to win cash prizes!

Partners: Apart Labs

Past events

March 2024

Agents, AI & Alignment Workshop

Oxford

Over the last few decades, computational mechanics has rigorously investigated the ultimate limits of prediction, together with the resources and internal representations required to predict optimally. In just the last few years, AI models trained to predict have developed impressive and surprising general capabilities. This workshop explores how computational mechanics can be applied and adapted to understand current and future AI models, with the aim of anticipating emerging capabilities and enabling alignment with human values. The program consists of a mix of tutorials, talks, and ample time for small-group discussions.

Lead organizers: Alexander Gietelink Oldenziel, Paul Riechers, Nora Ammann

October 2023

Workshop:
Active Inference meets AI Alignment

Oxford

Active inference and the Free Energy Principle provide a powerful way of understanding agency, cognition and perception, as well as, more broadly, self-organising behaviour. The field of AI Alignment aims to understand the ways in which advanced, artificially implemented intelligent systems may come to cause important harms, and how to prevent that from happening. As such, both fields find themselves grappling with fundamental questions about agency, intelligence, intentions/drives/values, individuality, and more.

Partners: Alignment of Complex Systems Group

October 2023

Agent Foundations for Alignment:
Clear Thinking for Messy Minds

Oxford

The theme of this workshop is to examine the theoretical underpinnings of Bayesian expected utility maximizers, to see where the conditions need to be relaxed to accommodate more realistic models of agency, and to consider what consequences this may have for understanding how agents form, observe, believe, desire, act, interact, aggregate and reflect. During the Renaissance, Italian humanists began to re-read the classics: first as an earnest celebration of ancient achievements in culture, science and philosophy, eventually evolving into the ambition to surpass ancient wisdom. The Agent Foundations for Alignment workshop will similarly revisit some of the classical theorems of agent foundations, celebrating and critically examining the classics.

https://agentfoundations.net/

Lead organizers: Alexander Gietelink Oldenziel, Matt McDermott, Nora Ammann, Kaarel Hänni

Varied, usually ~3 per year

PIBBSS Fellowship Retreats
/ Affiliate Retreats

Prague / Oxford / San Francisco

PIBBSS aims to foster a community of thinkers from different fields and to connect them in addressing problems posed by artificial intelligence. Retreats usually last four to five days and aim to create an environment of knowledge-sharing and reflection, but also of project-starting energy. Opening days tend to be more talk-heavy, while later days consist of deep-dive conversations. Ideal outcomes from our retreats include intellectual progress on hard problems (attendees report being unblocked on problems where they had been stuck for months), new connections, and new projects.