Marie-Francine Moens

Date: Wednesday, March 8 at 2 pm
Place: LORIA, room A008
Speaker: Marie-Francine Moens (KU Leuven)

Title: Acquiring Knowledge from Multimodal Sources to Aid Language Understanding

Abstract: Human language understanding (HLU) by a machine is of great economic and social value. In this lecture we consider language understanding of written text. First, we give an overview of the latest methods for HLU that map language to a formal knowledge representation, which facilitates other automated tasks. Most current HLU systems are trained on manually annotated texts, which are often lacking in open-domain applications. In addition, much content is left implicit in a text; when humans read, they infer it by relying on their world and common-sense knowledge. We then go deeper into the field of representation learning, which is currently widely studied in computational linguistics. This field investigates methods for representing language as statistical concepts or as vectors, allowing straightforward methods of compositionality. These methods often use deep learning and its underlying neural network technologies to learn concepts from large text collections in an unsupervised way (i.e., without the need for manual annotations). We show how these methods can help, but also demonstrate that they are still insufficient to automatically acquire the necessary background knowledge, and more specifically the world and common-sense knowledge needed for language understanding. Finally, we go deeper into how knowledge can be learned jointly from textual and visual data to aid language understanding, which will be illustrated with the first results obtained in the MUSTER CHIST-ERA project.
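
As a side note for readers unfamiliar with the vector representations mentioned in the abstract, the short Python sketch below illustrates the idea of composing word vectors arithmetically and comparing them with cosine similarity. The vectors and word choices here are purely hypothetical toy values for illustration and are not taken from the talk; real systems learn such embeddings from large text collections in an unsupervised way.

import numpy as np

# Toy word vectors (hypothetical values, purely for illustration;
# real systems learn embeddings from large corpora, e.g. with word2vec or GloVe).
embeddings = {
    "king":  np.array([0.80, 0.65, 0.10]),
    "queen": np.array([0.78, 0.68, 0.95]),
    "man":   np.array([0.75, 0.20, 0.05]),
    "woman": np.array([0.72, 0.22, 0.90]),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Compositionality by vector arithmetic: king - man + woman should land near queen.
composed = embeddings["king"] - embeddings["man"] + embeddings["woman"]

# Rank the toy vocabulary by similarity to the composed vector.
ranking = sorted(embeddings, key=lambda w: cosine(composed, embeddings[w]), reverse=True)
print(ranking)  # with these toy vectors, 'queen' ranks first
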