Jump to one of these dates:
08 November 2010, Machine Translation Day (Organizers: David Langlois and Kamel Smaili)
09 November 2010, Mariet Theune, University of Twente (The Netherlands)
23 November 2010, François Mairesse, University of Cambridge (UK)
07 December 2010, Myroslava Dzikovska, University of Edinburgh (Scotland)
04 January 2011, Kristina Striegnitz, Union College, Schenectady (USA)
18 January 2011, Michaela Regneri, University of Saarbruecken (Germany)
01 February 2011, Stephan Schlogl, Trinity College Dublin (Ireland)
15 February 2011, Nicolas Spyratos, LRI, Paris
Unless otherwise indicated, talks take place on Tuesdays at 2 pm at LORIA in room B013.
08 November 2010; 09:30-12:00 and 13:30-17:15, Room A008
Machine Translation Day
09 November 2010; 14:00-15:00, Room B013
Mariet Theune, University of Twente (The Netherlands)
Politeness and language style in dialogues with a Virtual Guide
In this talk I will discuss the Virtual Guide: an embodied conversational agent that can help users find their way in a virtual environment, while adapting its affective linguistic style to that of the user. I will briefly describe the multimodal dialogue management and the language and gesture generation in the Virtual Guide. The focus of the talk will be on how the Virtual Guide detects the level of politeness of the user’s utterances during the dialogue and subsequently aligns its own language to that of the user, using different politeness strategies. I will present the results of an evaluation of the Guide’s politeness model, and the outcomes of some initial user tests.
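The alignment idea can be illustrated with a toy sketch (this is not the Virtual Guide's actual model; the cue lists and templates are invented for illustration): score the politeness of a user utterance from simple lexical cues, then pick a response template whose politeness level matches the user's.

```python
# Toy politeness-alignment sketch: lexical cues and templates are
# invented for illustration, not taken from the Virtual Guide.

POLITE_CUES = {"please", "could", "would", "thanks", "thank"}
BLUNT_CUES = {"now", "hurry", "just"}

def politeness_score(utterance: str) -> int:
    """Crude politeness score: positive = polite, negative = blunt."""
    words = utterance.lower().split()
    return sum(w in POLITE_CUES for w in words) - sum(w in BLUNT_CUES for w in words)

# One response template per politeness level.
TEMPLATES = {
    "polite": "Certainly! If you'd like, you could take the corridor on your left.",
    "neutral": "Take the corridor on your left.",
    "blunt": "Left corridor.",
}

def aligned_response(utterance: str) -> str:
    """Choose the template matching the user's detected politeness level."""
    score = politeness_score(utterance)
    if score > 0:
        return TEMPLATES["polite"]
    if score < 0:
        return TEMPLATES["blunt"]
    return TEMPLATES["neutral"]
```

A real system would of course use a trained classifier over richer features, but the align-then-generate loop has this shape.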
23 November 2010; 14:00-15:00, Room B013
François Mairesse, University of Cambridge (UK)
Crowdsourcing a statistical language generator using phrase-based factored language models to improve dialogue naturalness
This talk will focus on a novel method for automatically learning how to map a meaning representation to natural language from data, in order to facilitate the development of natural spoken dialogue interfaces for complex domains. Most previous work on trainable language generation has focused on two paradigms: (a) using a statistical model to rank a set of pre-generated utterances, or (b) using statistics to inform the generation decision process. Both approaches rely on the existence of a handcrafted generator, which limits their scalability to new domains. This talk presents BAGEL, a fully statistical language generator which uses Factored Language Models to learn from semantically-aligned data produced by 42 untrained annotators. A human evaluation shows that BAGEL can generate natural and informative utterances from unseen inputs in the information presentation domain. Additionally, generation performance on sparse datasets is improved significantly by using certainty-based active learning, yielding ratings close to the human gold standard with a fraction of the data. While data-driven methods for language generation largely reduce development and maintenance effort, they also provide a principled way to model the large linguistic variation found in human utterances. Hence this talk will also investigate how BAGEL can further improve dialogue naturalness through data-driven paraphrasing.
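The core idea of learning generation from semantically-aligned data can be reduced to a counting sketch (this is not BAGEL itself, which uses factored language models; the slot names and phrases below are invented): learn which phrase annotators used to realise each semantic slot, then generate by emitting the most frequent phrase per slot.

```python
# Counting sketch of generation from semantically-aligned data.
# Slot names and phrases are hypothetical, not from the BAGEL corpus.

from collections import Counter, defaultdict

# Each training utterance is a list of (semantic slot, aligned phrase) pairs.
aligned_data = [
    [("inform(name)", "the Rice Boat"), ("inform(food)", "serves French food")],
    [("inform(name)", "the Rice Boat"), ("inform(food)", "offers French cuisine")],
    [("inform(food)", "serves French food")],
]

def train(pairs):
    """Count how often each phrase realises each slot."""
    counts = defaultdict(Counter)
    for utterance in pairs:
        for slot, phrase in utterance:
            counts[slot][phrase] += 1
    return counts

def generate(meaning, counts):
    """Realise each slot in the input meaning with its most frequent phrase."""
    return " ".join(counts[slot].most_common(1)[0][0] for slot in meaning)

model = train(aligned_data)
```

BAGEL replaces the raw counts with smoothed factored language models and searches over phrase orderings, but the mapping from aligned data to a slot-to-phrase model is the same in spirit.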
07 December 2010; 14:00-15:00, Room A008
Myroslava Dzikovska, University of Edinburgh (Scotland)
Better tutoring with natural language dialogue
One-on-one tutoring is widely considered to be the most effective form of instruction, but it is not yet clear what makes it so effective. One possible explanation is that dialogue with a tutor keeps students engaged with the material and allows them to notice and address gaps in their understanding. Intelligent tutoring systems aim to deliver improved learning through interaction with a computer rather than a human, by replicating effective tutoring strategies in human-computer interaction.
The goal of our research is twofold: to identify properties of natural language dialogue that lead to effective learning, and to investigate the use of more complex language that arises in tutorial dialogue. I will briefly discuss our human-human tutorial dialogue study, and then describe the Beetle2 tutorial dialogue system. Our system can serve as a platform for controlled experimentation and dialogue research, and in particular for evaluating the effectiveness of different strategies for language interpretation, dialogue management and tutoring. The first system evaluation was successfully completed in 2009, and I will discuss its results and future research directions.
04 January 2011; 14:00-15:00, Room B013
Kristina Striegnitz, Union College, Schenectady (USA)
Can an EEG-based video game controller detect non-verbal feedback for dialog systems?
In face-to-face dialog, human speakers and listeners produce many non-verbal signals, such as arm gestures, head and body movements, and facial expressions. One important function of these signals is to give feedback to the dialog partner to indicate understanding or confusion. Human speakers monitor their dialog partners for these signals and react to them quickly, sometimes in mid-sentence. Misunderstandings are usually avoided or dealt with efficiently and elegantly.
In many dialog systems, in contrast, the identification and handling of misunderstandings is still problematic and leads to clumsy sub-dialogs. So far, few dialog systems have made use of the non-verbal feedback provided by human users because the technology for detecting these signals has not been accurate enough and has been too expensive. Recently, this has started to change as camera-based detection methods have become more accurate. Furthermore, a number of very affordable new video game controllers have come out that use cameras, accelerometers or EEG-based technology to allow for new ways of human-computer interaction based on gestures, body movements, facial expressions or even thoughts.
In this talk, I present the first steps of a research program that investigates whether these game controllers can be used to detect non-verbal signals relevant for dialog and how this information can be integrated into a dialog system. More specifically, I will describe a study that tries to establish whether the Emotiv EPOC, an EEG-based game controller, can be used to detect confusion. I will outline an experiment in which we videotape two participants solving a dialog task and record the signals detected by the Emotiv EPOC worn by one of the participants. I will present some preliminary results.
18 January 2011; 14:00-15:00, Room B013
Michaela Regneri, University of Saarbruecken (Germany)
Learning Script Knowledge with Web Experiments
Scripts are fundamental pieces of commonsense knowledge that describe stereotypical event sequences of human activities (like "eating in a restaurant" or "visiting a doctor"). There have been a couple of attempts to learn script data from corpora; however, many scripts are shared implicit knowledge and usually not elaborated in detail. I’ll present our approach for learning this kind of script knowledge by "crowdsourcing" the Internet and generalizing over the gathered data instances. We collected natural-language descriptions of script-specific event sequences from volunteers over the web. We then fed this data to an algorithm that computes a graph representation of the script’s temporal structure using multiple sequence alignment. In this talk, I’m going to show the promising results of this explorative study, and sketch some ideas for future work.
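A minimal sketch of the alignment idea (not the paper's algorithm, which aligns many annotator sequences at once): align two event-sequence descriptions of the same script, merge matched events into shared nodes, and keep unaligned events as alternatives, yielding a small temporal graph. Here the alignment is done with Python's `difflib` for illustration; the event descriptions are invented.

```python
# Align two "restaurant" script descriptions and chain the merged
# events into temporal edges. Sketch only: real MSA handles many
# sequences and paraphrase matching, not just exact string equality.

from difflib import SequenceMatcher

seq_a = ["enter restaurant", "sit down", "order food", "eat", "pay", "leave"]
seq_b = ["enter restaurant", "order food", "eat", "pay bill", "leave"]

def align_and_merge(a, b):
    """Return temporal edges; events matched by the alignment become one node."""
    matcher = SequenceMatcher(a=a, b=b)
    nodes = []
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "equal":
            nodes.extend(a[i1:i2])      # shared event: a single node
        else:
            nodes.extend(a[i1:i2])      # unaligned events from each
            nodes.extend(b[j1:j2])      # sequence kept as alternatives
    return list(zip(nodes, nodes[1:]))  # simple chain of temporal edges

edges = align_and_merge(seq_a, seq_b)
```

With these two sequences, "enter restaurant", "order food", "eat" and "leave" are merged, while "sit down" and the "pay" / "pay bill" variants survive as separate nodes.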
01 February 2011; 14:00-15:00, Room B013
Stephan Schlogl, Trinity College Dublin (Ireland)
Sketching Language: User-Centred Design of a Wizard of Oz Prototyping Tool
Language Technology (LT) based applications are becoming more popular as technology improves. Prototyping early in the design process is critical for the development of high-quality applications. It is difficult, however, to do low-fidelity prototyping (e.g. paper prototyping) of applications based on LT. One technique that has been used for this kind of prototyping is Wizard of Oz (WOZ). The idea of WOZ is that a ‘human wizard’ mimics the functionality of a computer system, or parts of it. Doing so effectively, however, requires an interface that supports the wizard’s task. While several wizard interfaces have been built to date, most of them were designed for specific experiments. The more general issue of supporting the sometimes highly demanding cognitive task of the wizard has remained largely unexplored. In my talk I want to report on work in progress that explores the wizard task and aims at defining a generic wizard user interface layout that can be used in different experimental settings using different Language Technology Components (LTCs). I will talk about a first WOZ experiment in which different wizards were observed and their behaviour was analysed. Finally, I will show how the results of this experiment were combined with sketching in order to identify high-level concepts for a future generic wizard interface, and how focusing on the role of the wizard with respect to the different LTCs in use allows for the description of an abstract system architecture to build on.
15 February 2011; 14:00-15:00, Room B013
Nicolas Spyratos, LRI-UMR 8623, Université Paris Sud
Query personalization over large data tables
Tailoring the information returned according to user needs and preferences is a major theme in today’s information systems, referred to as information personalization; query personalization is just one of its multiple facets. The general idea is to allow the user of an information system to express conditions and preferences in a query, and have the system perform the following tasks: (a) retrieve the information satisfying the conditions and (b) present the results in a way that respects the preferences. We focus on queries over large data tables, such as those contained in data warehouses, e-commerce catalogues, or catalogues of digital libraries, and present rewriting techniques for handling such aspects as ranking query answers, top-k queries and skyline queries. This is work in progress, in the context of the European project ASSETS on digital libraries.
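Of the query types mentioned, the skyline query is perhaps the least familiar; a small sketch may help (the data is invented, and this is the textbook definition rather than the talk's rewriting technique). A skyline query returns the rows not dominated by any other row, where row x dominates row y if x is at least as good on every attribute and strictly better on at least one. The classic example ranks hotels by price and distance, both to be minimized.

```python
# Naive skyline computation over (price, distance) tuples.
# Data is hypothetical; real systems use rewriting or indexing
# instead of this O(n^2) scan.

def dominates(x, y):
    """True if x is <= y on every attribute and < on at least one."""
    return all(a <= b for a, b in zip(x, y)) and any(a < b for a, b in zip(x, y))

def skyline(rows):
    """Keep only the rows no other row dominates."""
    return [r for r in rows if not any(dominates(o, r) for o in rows)]

# (price, distance_to_beach)
hotels = [(50, 8), (80, 2), (60, 5), (90, 9), (60, 4)]
```

Here `(60, 5)` is dropped because `(60, 4)` is equally cheap and strictly closer, and `(90, 9)` is dropped because `(50, 8)` beats it on both attributes; the remaining hotels are all incomparable trade-offs.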