Archive / INF Seminars / INF_2024_04_10_DanielMartin

Probabilistic and multimodal models of visual attention and perception in virtual reality


Host: Prof. Piotr Didyk

East Campus - D3.01
13:30 - 14:30

Daniel Martin
Universidad de Zaragoza
Abstract: Virtual reality (VR) is an emerging medium with the potential to unlock unprecedented experiences. Since the late 1960s, the technology has advanced steadily, and it can nowadays serve as a gateway to a completely different world. VR offers a degree of realism, immersion, and engagement never seen before, and new virtual content is continuously being created. However, to get the most out of this promising medium, there is still much to learn about people’s visual attention and gaze behavior in virtual environments. Questions like “What attracts users’ attention?” or “How malleable is the human brain during a virtual experience?” have no definite answer yet. We argue that it is important to build a principled understanding of viewing and attentional behavior in VR.

In this talk, I will discuss recent work that sheds light on both of these questions. First, I will focus on some of our latest models of gaze behavior, which account not only for the visual conspicuity of what we see, but also for inter- and intra-user variability, uncertainty, and even the impact of multimodal stimuli. These models approach visual attention prediction from different perspectives, yet each reveals features crucial to better understanding how our gaze behaves. Then, I will briefly discuss one of our latest works on virtual manipulations, how it relates to visual attention, and one of its potential applications.

Biography: Dr. Daniel Martin is a postdoctoral researcher at the Universidad de Zaragoza, where he obtained his PhD in the Graphics and Imaging Lab under the supervision of Prof. Belen Masia and Prof. Diego Gutierrez. His research mainly spans virtual reality and encompasses topics such as understanding and modeling visual attention and gaze behavior, multimodality, content generation, and diverse perceptual manipulations. He has been a research intern at Adobe Research, first under the supervision of Dr. Xin Sun and later under Dr. Aaron Hertzmann and Dr. Stephen DiVerdi, and at Meta Reality Labs Research, supervised by Dr. Michael Proulx. He was also granted a Fulbright Predoctoral Scholarship to conduct his research in the US for six months. More details can be found at