How Humans (and Machines) Integrate Language and Vision: Inaugural Lecture
When humans process text or speech, this often happens in a visual context, e.g., when listening to a lecture, reading a map, or describing an image. An eye-tracker can accurately record human gaze during such tasks, and in this lecture I will show how eye-tracking data can be combined with computational modelling to understand how human cognition integrates linguistic and visual information. I will also touch upon applications such as computer-aided translation and automatic image annotation.
Frank Keller is professor of computational cognitive science in the School of Informatics at the University of Edinburgh. His background includes an undergraduate degree from Stuttgart University, a PhD from Edinburgh, and postdoctoral and visiting positions at Saarland University and MIT.
His research focuses on how people solve complex tasks such as understanding language or processing visual information. His work combines experimental techniques with computational modelling to investigate reading, sentence comprehension, translation, and language generation, both in isolation and in the context of visual information such as photographs or diagrams.
Professor Keller serves on the management committee of the European Network on Vision and Language, is a member of the governing board of the European Association for Computational Linguistics, and holds an ERC starting grant in the area of language and vision.
All are welcome to attend. RSVP to Marjorie Dunlop: mdunlop2 [at] inf.ed.ac.uk