Università della Svizzera italiana
INF Newsletter
Integrating Large Language Models and Graph Neural Networks
Host: Prof. Michael Bronstein
Wednesday, 22.01.2025
15:00 - 16:30
USI East Campus, Room D0.02
Prof. Xavier Bresson
National University of Singapore (NUS)
Abstract: Language models pre-trained on large-scale datasets have revolutionized text-based applications, enabling new capabilities in natural language processing. When documents are connected, they form a text-attributed graph (TAG); examples include the Internet, Wikipedia, social networks, scientific literature networks, biological networks, scene graphs, and knowledge graphs. Key applications for TAGs include recommendation (web), classification (node, link, graph), text- and visual-based reasoning, and retrieval-augmented generation (RAG). In this talk, I will introduce two approaches that integrate Large Language Models (LLMs) with Graph Neural Networks (GNNs). The first method demonstrates how LLMs' reasoning capabilities can enhance TAG node features. The second approach introduces a technique called GraphRAG, which grounds LLM responses in a relevant sub-graph structure. This scalable technique regularizes the language model, significantly reducing incorrect responses, known as hallucinations.
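To give a flavour of the grounding idea mentioned in the abstract, the toy sketch below retrieves a k-hop subgraph around a seed entity of a text-attributed graph and serializes its node texts into the prompt. This is an illustrative approximation only, not the speaker's actual GraphRAG method; the names (knowledge_graph, k_hop_subgraph, build_prompt) and the tiny graph are hypothetical.

```python
# Illustrative GraphRAG-flavoured sketch: retrieve a subgraph, then ground
# the LLM prompt in its text attributes. Not the speaker's actual method.
from collections import deque

# Toy text-attributed graph: node -> (text attribute, neighbor list)
knowledge_graph = {
    "Paris": ("Paris is the capital of France.", ["France", "Seine"]),
    "France": ("France is a country in Europe.", ["Paris", "Europe"]),
    "Seine": ("The Seine is a river flowing through Paris.", ["Paris"]),
    "Europe": ("Europe is a continent.", ["France"]),
}

def k_hop_subgraph(graph, seed, k):
    """Collect all nodes within k hops of the seed via breadth-first search."""
    seen = {seed}
    frontier = deque([(seed, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:
            continue  # do not expand beyond k hops
        for nbr in graph[node][1]:
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, depth + 1))
    return seen

def build_prompt(graph, question, seed, k=1):
    """Ground the question in the retrieved subgraph's node texts."""
    nodes = sorted(k_hop_subgraph(graph, seed, k))
    facts = "\n".join(graph[n][0] for n in nodes)
    return f"Answer using only these facts:\n{facts}\n\nQuestion: {question}"

print(build_prompt(knowledge_graph, "What river runs through Paris?", "Paris"))
```

Restricting the prompt to the retrieved subgraph is what "regularizes" the model in spirit: the LLM is steered toward facts present in the graph rather than free generation.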
Biography: Xavier Bresson is an Associate Professor in the Department of Computer Science at the National University of Singapore (NUS). His research focuses on Graph Deep Learning, a new framework that combines graph theory and neural networks to tackle complex data domains. He received the USD 2M NRF Fellowship, the largest individual grant in Singapore, to develop this new framework, and has also been awarded several research grants in the U.S. and Hong Kong. He co-authored one of the most cited works in this domain (the 10th most cited paper at NeurIPS) and has contributed significantly to the maturation of these emerging techniques. He has organized several conferences, workshops and tutorials on graph deep learning, including the IPAM'23 workshop on "Learning and Emergence in Molecular Systems", the IPAM'21 and IPAM'23 workshops on "Deep Learning and Combinatorial Optimization", the MLSys'21 workshop on "Graph Neural Networks and Systems", the IPAM'18 and IPAM'19 workshops on "New Deep Learning Techniques", and the NeurIPS'17, CVPR'17 and SIAM'18 tutorials on "Geometric Deep Learning on Graphs and Manifolds". He has been a regular invited speaker at universities and companies, and has also spoken at the ICML'20, AAAI'21, KDD'21, NeurIPS'22 and KDD'23 workshops on "Graph Representation Learning" and the ICLR'20 workshop on "Deep Neural Models and Differential Equations". He has taught undergraduate and graduate courses on Deep Learning and Graph Neural Networks since 2014.