This tutorial provides a comprehensive introduction to graph learning and GNNs, covering foundational concepts and recent advancements in the field. We begin with an overview of traditional graph representation and embedding methods, and then focus on modern approaches such as Graph Neural Networks (GNNs), Message-Passing Neural Networks (MPNNs), and Graph Transformers (GTs). The second part delves into the expressivity and generalizability of current GNN architectures. We will explore what functions and tasks GNNs can learn, referencing recent research that connects GNN expressivity with the Weisfeiler-Lehman (WL) graph isomorphism test. We will also discuss the generalizability of MPNNs, including their VC dimension and its implications for model performance. Next, we address key information-flow challenges in graph learning architectures, such as under-reaching, over-smoothing, and over-squashing. We will highlight recent research aimed at understanding and alleviating these issues, including graph rewiring techniques. The tutorial will conclude with a panel discussion on future directions in graph machine learning. We will explore the limitations of GNNs, discuss graph foundation models, and consider the potential for integrating graph learning with large language models (LLMs) to enhance reasoning and complex data analysis capabilities.
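To make the message-passing idea concrete, here is a minimal sketch of one MPNN layer, assuming a simple sum aggregation over neighbors followed by a linear update with a ReLU nonlinearity. The graph, features, and weights are hypothetical toy values, not taken from the tutorial itself.

```python
import numpy as np

def message_passing_layer(A, X, W):
    """One toy MPNN layer: aggregate neighbor features, then transform.

    A: (n, n) adjacency matrix of the graph.
    X: (n, d) node feature matrix.
    W: (d, d') learnable weight matrix (fixed here for illustration).
    """
    messages = A @ X          # sum of each node's neighbors' features
    H = (X + messages) @ W    # combine self features with aggregated messages
    return np.maximum(H, 0)   # ReLU nonlinearity

# Hypothetical toy graph: a path 0 - 1 - 2.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
X = np.eye(3)                 # one-hot initial node features
W = np.full((3, 2), 0.5)      # fixed weights so the output is reproducible
H = message_passing_layer(A, X, W)
print(H)
```

Stacking k such layers lets information travel k hops, which is exactly where the information-flow issues discussed above arise: too few layers cause under-reaching, while too many can lead to over-smoothing.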
ICML Tutorial with Ameya Velingker (Google Research) on Graph Machine Learning. In addition to the foundations, we cover challenges in expressiveness and generalizability, and discuss how to understand and address under-reaching, over-smoothing, and over-squashing, including graph rewiring techniques. We also moderate a panel discussion with Michael Bronstein, Bryan Perozzi, Christopher Morris, and Michael Galkin about future steps in GNNs.
More information is available on the official web page.