The Adjoint School 2025

The 2025 Adjoint School features projects mentored by Juliana Kaizer Vizzotto, Laura Scull, Justin Hsu, and Georgios Bakirtzis. It is organized by Ari Rosenfield, Elena Dimitriadis Bermejo, Innocent Obi, and Drew McNeely. The research week will be held at the University of Florida.

Important Dates


Research Projects

Modeling Quantum Parallelism Using Monads

Mentor: Juliana Kaizer Vizzotto
TA: TBD
Students: TBD


Category theory provides a mathematical framework for understanding structures and their relationships abstractly. Its tools for reasoning about abstraction, modularity, and compositionality offer a powerful framework for modeling complex systems in computer science. In the context of quantum computing, we need to deal with the properties inherent to quantum information. Traditional categorical frameworks often model computations as sequential transformations, but quantum processes demand a representation that captures: i) quantum parallelism, caused by the non-local wave character of quantum information, which is qualitatively different from the classical notion of parallelism; and ii) the notion of observation, or measurement, in which the observed part of the quantum state, and every other part entangled with it, immediately loses its wave character.

In this project we will investigate the use of monads to model quantum parallelism, inspired by Moggi's work on modeling computational effects. Quantum computation introduces additional complexity, particularly with respect to measurement and the collapse of quantum states, so we will then study how to construct a categorical representation of the traditional general model of quantum computing, based on density matrices and superoperators, using a generalization of monads called arrows. Finally, we will investigate the use of relative monads in the context of quantum measurement.
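To make the monadic picture concrete, here is a small Haskell sketch of finite superpositions as a monad-like structure. It is written for this description rather than taken from the readings: the names Vec, ret, bindV, and hadamard are ours, and the representation glosses over normalization and over combining equal basis values.

```haskell
import Data.Complex (Complex)

-- A finite superposition: basis values paired with complex amplitudes.
type Vec a = [(a, Complex Double)]

-- Inject a classical value as a trivial superposition (the monadic "return").
ret :: a -> Vec a
ret x = [(x, 1)]

-- Apply a quantum-parallel function to every branch, multiplying amplitudes
-- (the monadic "bind"); every branch of the input contributes to the output.
bindV :: Vec a -> (a -> Vec b) -> Vec b
bindV v f = [ (y, p * q) | (x, p) <- v, (y, q) <- f x ]

-- Example: the Hadamard gate on a single classical bit.
hadamard :: Bool -> Vec Bool
hadamard False = [(False, 1 / sqrt 2), (True,  1 / sqrt 2)]
hadamard True  = [(False, 1 / sqrt 2), (True, -1 / sqrt 2)]

-- Two Hadamards return the input up to summing amplitudes of equal basis
-- values; handling that properly is one reason the full construction works
-- with linear maps, and why measurement needs more than a plain monad.
twice :: Vec Bool
twice = ret False `bindV` hadamard `bindV` hadamard
```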

Readings

  • What is the Categorical Model of Arrows?, R. Atkey.
  • Monads Need Not be Endofunctors, T. Altenkirch, J. Chapman, and T. Uustalu.
  • Structuring Quantum Effects: Superoperators as Arrows, J. K. Vizzotto, T. Altenkirch, A. Sabry.

Homotopy of Graphs

Mentor: Laura Scull
TA: TBD
Students: TBD


Graphs are discrete structures made of vertices connected by edges, so examples are easy to construct and work with. We take a categorical approach, working in the category of graphs and the graph homomorphisms between them. Even though many standard ideas from graph theory can be phrased in these terms, this area remains relatively undeveloped.

This project will consider discrete homotopy theory, where we define the notion of homotopy between graph morphisms by adapting definitions from topological spaces. In particular, we will look at the theory of ×-homotopy as developed by Dochtermann and Chih-Scull. The resulting theory has some but not all of the formal properties of classical homotopy of spaces, and diverges in some interesting ways.

Our project will start by learning about the basic category of graphs and graph homomorphisms, and understanding categorical concepts such as limits, colimits and exponentials in this world. This offers an opportunity to play with concrete examples of abstract universal properties. We will then consider the following question: do homotopy limits and colimits exist for graphs? If so, what do they look like? This specific question will be our entry into the larger inquiries around what sort of structure is present in homotopy of graphs, and how it compares to the classical homotopy theory of topological spaces. We will develop this theme further in directions that most interest our group.
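For orientation, here is a rough statement of the central definition (our paraphrase; the readings fix the precise conventions about loops). For graphs G and H, the exponential graph H^G has as vertices all functions V(G) → V(H), with f adjacent to g whenever every edge uv of G gives an edge f(u)g(v) of H; the looped vertices of H^G are exactly the graph homomorphisms G → H. Writing I_n for the path on vertices 0, 1, …, n with a loop at every vertex, a ×-homotopy can then be phrased as:

```latex
% A x-homotopy between homomorphisms f, g : G -> H is a graph homomorphism
\[
  \Lambda \colon G \times I_n \longrightarrow H
  \qquad \text{with} \qquad
  \Lambda(-,0) = f \quad \text{and} \quad \Lambda(-,n) = g,
\]
% equivalently, a sequence of homomorphisms
\[
  f = f_0 \sim f_1 \sim \cdots \sim f_n = g
\]
% in which consecutive maps are adjacent in the exponential graph H^G.
```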


Readings

  • On the Concrete Categories of Graphs, G. McRae, D. Plessas, and L. Rafferty.
  • A Homotopy Category for Graphs, T. Chih and L. Scull.

Compositional Generalization in Reinforcement Learning

Mentor: Georgios Bakirtzis
TA: TBD
Students: TBD


Reinforcement learning is a form of semi-supervised learning. In reinforcement learning we have an environment, an agent that acts on this environment through actions, and a reward signal. It is the reward signal that makes reinforcement learning a powerful technique for the control of autonomous systems, but it is also the sparsity of this reward structure that engenders issues. Compositional methods decompose reinforcement learning into parts that are tractable. Categories provide a nice framework for thinking about compositional reinforcement learning.

An important open problem in reinforcement learning is compositional generalization. This project will tackle the problem of compositional generalization in reinforcement learning in a category-theoretic computational framework in Julia. Expected outcomes of this project are category-theory-derived algorithms and concrete experiments. Participants will be expected to have strong computational skills, but not necessarily prior experience in reinforcement learning.
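As a toy illustration of the decomposition idea (our own sketch, not the project's Julia framework; the names MDP, runEpisode, and thenDo are invented for this example), one can regard each sub-task as a small decision process and compose solved sub-tasks in sequence. Compositional generalization asks when behaviour learned on the pieces transfers to composites like thenDo.

```haskell
-- A deterministic caricature of a decision process: a transition function,
-- a reward, and a predicate saying when the sub-task is complete.
-- Real RL settings are stochastic; this is only a sketch.
data MDP s a = MDP
  { step   :: s -> a -> s
  , reward :: s -> a -> Double
  , done   :: s -> Bool
  }

-- Run a fixed policy until the sub-task finishes, accumulating reward.
runEpisode :: MDP s a -> (s -> a) -> s -> (s, Double)
runEpisode mdp policy = go 0
  where
    go acc s
      | done mdp s = (s, acc)
      | otherwise  = let a = policy s
                     in go (acc + reward mdp s a) (step mdp s a)

-- Sequential composition of sub-tasks: finish the first, then start the
-- second from wherever the first one ended.
thenDo :: MDP s a -> MDP s a -> (s -> a) -> (s -> a) -> s -> (s, Double)
thenDo m1 m2 p1 p2 s0 =
  let (s1, r1) = runEpisode m1 p1 s0
      (s2, r2) = runEpisode m2 p2 s1
  in  (s2, r1 + r2)
```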

Readings

  • Structure in Deep Reinforcement Learning: A Survey and Open Problems, A. Mohan, A. Zhang, and M. Lindauer.
  • Categorical Semantics of Compositional Reinforcement Learning, G. Bakirtzis, M. Savvas, and U. Topcu.

Categorical Metric Structures for Numerical Analysis

Mentor: Justin Hsu
TA: TBD
Students: TBD


Numerical analysis studies computations that use finite approximations to continuous data, e.g., finite precision floating point numbers instead of the reals. A core challenge is to bound the amount of error incurred. Recent work develops several type systems to reason about roundoff error, supported by semantics in categories of metric spaces. This project will focus on categorical structures uncovered by these works, seeking to understand and generalize them.

More specifically, the first strand of work will investigate the neighborhood monad, a novel graded monad in the category of (pseudo)metric spaces. This monad supports the forward rounding error analysis in the NumFuzz type system. There are several known extensions incorporating particular computational effects (e.g., failure, non-determinism, randomization), but a more general picture is currently lacking.
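For reference, the structure being generalized is roughly the following (our paraphrase of the standard definition; see the Fujii, Katsumata, and Melliès reading below). A monad graded by a monoid (M, ·, 1) on a category C assigns an endofunctor to each grade, with unit and multiplication indexed by grades:

```latex
\[
  T_m \colon \mathcal{C} \to \mathcal{C} \quad (m \in M),
  \qquad
  \eta \colon \mathrm{Id}_{\mathcal{C}} \Rightarrow T_1,
  \qquad
  \mu_{m,n} \colon T_m T_n \Rightarrow T_{m \cdot n},
\]
% natural in all arguments and satisfying unit and associativity laws that
% generalize those of an ordinary monad (the case where M is trivial).
```

In the forward-error reading, grades act as error budgets: one expects a grading monoid along the lines of ([0, ∞], +, 0), so that sequencing computations adds their bounds.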

The second strand of work will investigate backward error lenses, a lens-like structure on metric spaces that supports the backward error analysis in the Bean type system. The construction resembles concepts from the lens literature, but a precise connection is not known. Connecting these lenses to known constructions could enable backward error analysis for more complex programs.
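To convey the lens-like shape, here is a heavily simplified Haskell sketch (our illustration only: the type ErrLens, its field names, and the composition below follow the ordinary get/put lens pattern rather than the Bean paper's definitions).

```haskell
-- A backward-error-flavored lens: a forward (ideal) map together with a map
-- that, given the original input and an approximate output, produces a nearby
-- input which the ideal map sends exactly to that output.
data ErrLens x y = ErrLens
  { fwd  :: x -> y       -- the ideal computation
  , back :: x -> y -> x  -- backward error witness; should stay close to x
  }

-- Composition follows the usual lens pattern: push forward through the first
-- component, then pull the backward witness back through both in turn.
compose :: ErrLens y z -> ErrLens x y -> ErrLens x z
compose (ErrLens f2 b2) (ErrLens f1 b1) =
  ErrLens
    { fwd  = f2 . f1
    , back = \x z -> b1 x (b2 (f1 x) z)
    }
```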

Readings

  • Toward a Formal Theory of Graded Monads, S. Fujii, S. Katsumata, and P.-A. Melliès.
  • The Dialectica Categories, V. de Paiva.

Sponsors