Latest News

Academic Talk by Professor Yoichi Sato: Predicting Visual Attention from First-Person Videos


Posted: 2018-11-10 14:29:09



Title: Predicting Visual Attention from First-Person Videos

Time: Sunday, November 11, 2018, 4:00 PM

Venue: Lecture Hall, 2nd Floor, Software Building

Speaker: Professor Yoichi Sato

 
Abstract: First-person videos recorded by a wearable camera provide a close-up view of various types of human activities, such as manipulating objects with the hands or interacting with other people, from the wearer's own perspective. In this talk, I will discuss predicting visual attention in first-person videos. Unlike existing gaze prediction methods that estimate gaze in a bottom-up manner, our method exploits high-level knowledge about patterns of eye movements that are unique to the task being carried out. I will explain the overall framework of our method, along with experimental results demonstrating its state-of-the-art performance on gaze estimation benchmark datasets.
 
Bio:
Yoichi Sato is a professor at the Institute of Industrial Science, the University of Tokyo. He received his B.S. degree from the University of Tokyo in 1990, and his M.S. and Ph.D. degrees in robotics from the School of Computer Science, Carnegie Mellon University, in 1993 and 1997, respectively. His research interests include gaze sensing and analysis, first-person vision, physics-based vision, and reflectance analysis. He has served, or is currently serving, in several conference organization and journal editorial roles, including IEEE Transactions on Pattern Analysis and Machine Intelligence, International Journal of Computer Vision, Computer Vision and Image Understanding, ICCV 2021 Program Co-Chair, ACCV 2018 General Co-Chair, ACCV 2016 Program Co-Chair, and ECCV 2012 Program Co-Chair.

 



Author: 姜玮