ICFP 2024
Mon 2 - Sat 7 September 2024 Milan, Italy
Mon 2 Sep 2024 17:00 - 17:30 at Orange 2 - Session 3

Emerging agentic AI systems have the potential to substantially accelerate progress on critical scientific and societal problems, but they also present serious privacy, security, and safety challenges, because they execute commands autonomously or semi-autonomously in environments where they have access to sensitive data and can cause sensitive effects. Formal methods are a key approach to enforcing mathematically rigorous safeguards that limit the ability of agentic AIs to cause these kinds of harms.

In particular, we envision enforcing AI capability safety policies that impose strict constraints of various kinds on the effects of an AI agent. Specifying AI capability safety policies able to enforce such constraints in practical settings will necessarily be a large-scale, collaborative effort. Specifically, it will require (1) employing a wide variety of approaches to specification and proof; (2) developing large-scale world models encompassing organizational access control and information flow models, legal models, and more general causal models of the world; and (3) developing robust AI safety policies and specifications that are likely to minimize the risk of catastrophic harm from future AI agents.

The proposed talk will outline the vision for a recently initiated research project aiming to build a formally verified prototype of a foundational "operating system" for safeguarded AI, called Bastion, that grounds these activities within programming language theory, namely by combining dependent type theory (as a practical general-purpose theory of computational structures and proofs), Dijkstra monads (as a flexible formalism for reasoning about an AI agent's computational effects), and abstract types (for modularizing reasoning to individual components that can be separately developed by various stakeholders, including AI safety researchers, formal methods experts, organizations of various scales, and governments seeking to develop actionable, specific policy).
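To make the role of these ingredients concrete, the following is a minimal, self-contained sketch in Lean 4 of a state-based Dijkstra monad in the general style the talk builds on; it is illustrative only, not code from the Bastion prototype, and all names (PredTrans, DST, and so on) are hypothetical. Computations are indexed by a predicate-transformer specification, so a safety policy stated as a pre/postcondition pair is discharged compositionally as computations are sequenced.

-- Illustrative sketch only (hypothetical names, not from the Bastion prototype).

-- Specification monad: weakest-precondition style predicate transformers
-- over a state type σ, mapping a postcondition on (result, final state)
-- to a precondition on the initial state.
abbrev PredTrans (σ α : Type) : Type :=
  (α → σ → Prop) → σ → Prop

def PredTrans.pure {σ α : Type} (x : α) : PredTrans σ α :=
  fun post s => post x s

def PredTrans.bind {σ α β : Type} (w : PredTrans σ α)
    (wf : α → PredTrans σ β) : PredTrans σ β :=
  fun post s => w (fun a s' => wf a post s') s

-- Dijkstra monad over state: a computation packaged with a proof that it
-- satisfies the predicate-transformer specification indexing its type.
structure DST (σ α : Type) (w : PredTrans σ α) where
  run   : σ → α × σ
  sound : ∀ (post : α → σ → Prop) (s : σ), w post s → post (run s).1 (run s).2

def DST.pure {σ α : Type} (x : α) : DST σ α (PredTrans.pure x) :=
  ⟨fun s => (x, s), fun _ _ h => h⟩

-- Sequencing composes both the computations and their soundness proofs,
-- so specifications are discharged compositionally.
def DST.bind {σ α β : Type} {w : PredTrans σ α} {wf : α → PredTrans σ β}
    (c : DST σ α w) (f : ∀ a, DST σ β (wf a)) :
    DST σ β (PredTrans.bind w wf) :=
  ⟨fun s => (f (c.run s).1).run (c.run s).2,
   fun post s h =>
     (f (c.run s).1).sound post (c.run s).2
       (c.sound (fun a s' => wf a post s') s h)⟩

In the envisioned design, abstract types would then hide the concrete state and effect representations behind such specifications, so that each stakeholder reasons only against the interface a component exports.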

Slides (Bastion_HOPE_2024_slides.pdf) 1.92 MiB
Abstract (hope_2024.pdf) 653 KiB

Mon 2 Sep

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

16:00 - 17:30
Session 3 (HOPE) at Orange 2

16:00 (30m) Talk
Arrows as applicatives in a monad
Leo White (Jane Street)
File Attached

16:30 (30m) Talk
Mechanized monadic equational reasoning for ML references
Reynald Affeldt (AIST), Jacques Garrigue (Nagoya University), Takafumi Saikawa (Nagoya University)
File Attached

17:00 (30m) Talk
Modularizing Reasoning about AI Capabilities via Abstract Dijkstra Monads
Cyrus Omar (University of Michigan), Patrick Ferris (University of Cambridge, UK), Anil Madhavapeddy (University of Cambridge, UK)
File Attached