Published: Sept. 12, 2018

The Department of Linguistics is pleased to welcome back Claire Bonial, a graduate of our own PhD program who now works for the Computational and Information Sciences Directorate at the Army Research Laboratory (ARL). In this Ling Circle talk, Dr. Bonial will share some of her work on human-robot dialogue.

Title: "Event semantics in text constructions, vision, and human-robot dialogue"
When: Monday, October 15
Where: Hellems 237

Abstract: "Ok, robot, make a right and take a picture" – a simple instruction like this exemplifies some of the obstacles in our research on human-robot dialogue: how are "make" and "take" to be interpreted? What precise actions should be executed? In this presentation, I explore three challenges: 1) interpreting the semantics of constructions in which verb meanings are extended in novel usages, 2) recognizing activities and events in images/video by employing information about the objects and participants typically involved, and 3) mapping natural language instructions to the physically situated actions executed by a robot. Across these distinct research areas, I leverage both Neo-Davidsonian styles of event representation and the principles of Construction Grammar to address these challenges for interpretation and execution.

Speaker Bio: Claire Bonial is a computational linguist specializing in the murky world of event semantics. In her efforts to make this world computationally tractable, she has collaborated on a variety of Natural Language Processing semantic role labeling projects, including PropBank, VerbNet, and Abstract Meaning Representation. A focused contribution to these projects has been her theoretical and psycholinguistic research on both the syntax and semantics of English light verb constructions (e.g., take a walk, make a mistake). Bonial received her Ph.D. in Linguistics and Cognitive Science in 2014 from the University of Colorado Boulder. She began her current position in the Computational and Information Sciences Directorate of the Army Research Laboratory (ARL) in 2015. Since joining ARL, she has expanded her research portfolio to include multi-modal representations of events (text and imagery/video), as well as human-robot dialogue.