[MLG @ ECML 25] Demystifying Common Beliefs of Graph ML

MLG Workshop

Abstract

This paper was accepted at the workshop as a Contributed Talk. After a renaissance phase in which researchers revisited the message-passing paradigm through the lens of deep learning, the graph machine learning community shifted its attention towards a deeper and more practical understanding of message passing’s benefits and limitations. In this position paper, we observe how the fast pace of progress around oversmoothing and oversquashing, the homophily-heterophily dichotomy, and long-range tasks has come with the consolidation of commonly accepted beliefs and assumptions that are neither always true nor easy to distinguish from each other. We argue that this has led to ambiguities around the investigated problems, preventing researchers from focusing on and addressing precise research questions while causing a good amount of misunderstanding. Our contribution aims to make such common beliefs explicit and to encourage critical thinking around these topics, supported by simple but noteworthy counterexamples. The hope is to clarify the distinction between the different issues and to promote separate but intertwined research directions to address them.

Date
Sep 15, 2025 5:00 PM β€” 5:30 PM
Location
Porto, Portugal

Best Paper Award πŸ† and Best Poster Award πŸ† at MLG 2025.


Federico Errica and I presented the paper Oversmoothing, Oversquashing, Heterophily, Long-Range, and more: Demystifying Common Beliefs in Graph Machine Learning at the 22nd International Workshop on Mining and Learning with Graphs at the ECML-PKDD 2025 Conference.


Adrian Arnaiz-Rodriguez
Postdoctoral Researcher

Postdoctoral Researcher in Trustworthy AI, AI Regulation, and Graph Neural Networks at ELLIS Alicante.