Past seminars in 2016

Alain Rakotomamonjy on Wednesday May 11

Date: Wednesday May 11 at 2 pm
Place: LORIA, room C005
Speaker: Alain Rakotomamonjy (Université de Rouen)

Title: Optimal transport for domain adaptation

Abstract:
Domain adaptation addresses one of the most challenging tasks in machine learning: coping with the mismatch between training and testing probability distributions. If adaptation is done correctly, models learned on a specific data representation become more robust when confronted with data depicting the same problems but described through another observation system. Among the many strategies proposed, finding domain-invariant representations has shown excellent properties, in particular because it makes it possible to train a single classifier that is effective in all domains. In this talk, we propose a regularized unsupervised optimal transportation model to align the representations of the source and target domains. We learn a transportation plan matching both probability distributions, which constrains labeled samples of the same class in the source domain to remain close during transport. In this way, we exploit both the few labeled samples in the source domain and the data distributions observed in the two domains. Experiments on toy problems and on challenging real visual adaptation tasks show the benefits of the method, which consistently outperforms state-of-the-art approaches. In addition, numerical experiments show that our approach leads to better performance on domain-invariant deep learning features and can easily be adapted to the semi-supervised case where a few labeled samples are available in the target domain.
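
As a rough illustration of the alignment idea only (not the regularized model described in the talk, which additionally keeps same-class source samples together during transport), the NumPy sketch below computes an entropic optimal transport plan between toy source and target samples and maps the source onto the target domain; all names and sizes are illustrative.

# Minimal sketch of OT-based domain adaptation (entropic Sinkhorn only).
# The class-based regulariser of the talk is omitted for brevity.
import numpy as np

def sinkhorn(a, b, M, reg, n_iter=200):
    """Entropic-regularised OT between histograms a and b with cost matrix M."""
    K = np.exp(-M / reg)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]            # transport plan

rng = np.random.default_rng(0)
Xs = rng.normal(loc=0.0, size=(50, 2))            # source samples
Xt = rng.normal(loc=3.0, size=(60, 2))            # target samples (shifted domain)

M = ((Xs[:, None, :] - Xt[None, :, :]) ** 2).sum(-1)   # squared Euclidean cost
a = np.full(len(Xs), 1 / len(Xs))
b = np.full(len(Xt), 1 / len(Xt))

G = sinkhorn(a, b, M, reg=1.0)

# Barycentric mapping: transport source samples into the target domain, after
# which a classifier trained on the mapped source data can be applied to Xt.
Xs_mapped = (G / G.sum(axis=1, keepdims=True)) @ Xt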


Lina Maria Rojas Barahona on Wednesday, September 28

Date: Wednesday, September 28 at 2 pm
Place: LORIA, room C005
Speaker: Lina Maria Rojas Barahona (Cambridge University)

Title: Exploiting Sentence and Context Representations in Deep Neural Models for Spoken Language Understanding

Abstract:
This talk presents a deep learning architecture for the semantic decoder component of a statistical spoken dialogue system. In a slot-filling dialogue, the semantic decoder predicts the dialogue act and a set of slot-value pairs from a set of n-best hypotheses returned by the automatic speech recognizer. Most current models for spoken language understanding assume (i) word-aligned semantic annotations, as in sequence taggers, and (ii) delexicalisation, a mapping of input words to domain-specific concepts using heuristics that try to capture morphological variation but that do not scale to other domains or to language variation (e.g., morphology, synonyms, paraphrasing). In this work the semantic decoder is trained using unaligned semantic annotations and uses distributed semantic representation learning to overcome the limitations of explicit delexicalisation. The proposed architecture uses a convolutional neural network for the sentence representation and a long short-term memory network for the context representation. Results are presented for the publicly available DSTC2 corpus and for an In-car corpus, which is similar to DSTC2 but has a significantly higher word error rate (WER).
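
A minimal PyTorch sketch of the two encoders mentioned above may help fix ideas: a CNN that pools over the words of the current utterance and an LSTM that runs over the sequence of utterance representations. Layer sizes, the single dialogue-act head and all names are assumptions for illustration, not the exact architecture of the talk.

# Illustrative sketch: CNN sentence encoder + LSTM context encoder.
import torch
import torch.nn as nn

class CNNSentenceEncoder(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, n_filters=64, kernel=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, n_filters, kernel_size=kernel, padding=1)

    def forward(self, tokens):                      # tokens: (batch, seq_len)
        x = self.embed(tokens).transpose(1, 2)      # (batch, embed_dim, seq_len)
        h = torch.relu(self.conv(x))                # (batch, n_filters, seq_len)
        return h.max(dim=2).values                  # max-over-time pooling

class ContextualDecoder(nn.Module):
    """Encodes each utterance with the CNN, then runs an LSTM over the dialogue."""
    def __init__(self, vocab_size, n_acts, n_filters=64, hidden=128):
        super().__init__()
        self.sent_enc = CNNSentenceEncoder(vocab_size, n_filters=n_filters)
        self.context = nn.LSTM(n_filters, hidden, batch_first=True)
        self.act_head = nn.Linear(hidden, n_acts)   # dialogue-act classifier

    def forward(self, dialogue):                    # dialogue: (batch, n_turns, seq_len)
        b, t, l = dialogue.shape
        sents = self.sent_enc(dialogue.view(b * t, l)).view(b, t, -1)
        ctx, _ = self.context(sents)                # (batch, n_turns, hidden)
        return self.act_head(ctx[:, -1])            # predict the act of the last turn

model = ContextualDecoder(vocab_size=1000, n_acts=10)
acts = model(torch.randint(0, 1000, (2, 5, 12)))    # 2 dialogues, 5 turns, 12 tokens each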


Fabien Ringeval on Wednesday, October 5

Date: Wednesday October 5 at 2 pm
Place: LORIA, room C005
Speaker: Fabien Ringeval (Université Grenoble Alpes)

Title: Affective computing from speech: towards robust recognition of emotions in ecologically valid situations

Abstract:
Technologies for the automatic recognition of emotion from speech have attracted increasing attention over the last decade, from both academia and industry, as they have found applications in domains as varied as health care, education, serious games, brand reputation, advertisement, and robotics. Whereas good performance has been reported in the literature for acted emotions, the automatic recognition of spontaneous emotions, as expressed in ecologically valid situations, remains an open challenge: such emotions are subtle, their expression and meaning depend on the speaker, the language and the culture, and they may be produced in noisy environments, which complicates the extraction of relevant cues from the speech signal. In this talk, I will present the most recent advances in the field and will show that deep learning based methods, such as long short-term memory recurrent neural networks (LSTM-RNNs), can help to contextualise relevant cues and tackle asynchrony issues for the "time- and value-continuous" prediction of emotion, and can also enhance both the acoustic waveform and low-level descriptors captured in noisy conditions. Finally, I will show that, even though end-to-end learning with convolutional networks and LSTM-RNNs can provide promising results, it does not yet spell the end of signal processing for hand-engineered feature extraction: such features, combined with non-context-aware predictors, can generalise even better than features learned end-to-end, provided that they are carefully designed.
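
As a minimal sketch of the kind of model mentioned above, the PyTorch snippet below defines an LSTM-RNN that maps frame-level acoustic features to frame-wise, value-continuous emotion targets (e.g., arousal and valence); the feature dimension, layer sizes and targets are assumptions, not the systems discussed in the talk.

# Illustrative LSTM-RNN for time- and value-continuous emotion prediction.
import torch
import torch.nn as nn

class ContinuousEmotionLSTM(nn.Module):
    def __init__(self, n_features=40, hidden=64, n_targets=2):  # arousal, valence
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden, n_targets)

    def forward(self, frames):            # frames: (batch, time, n_features)
        h, _ = self.lstm(frames)          # one hidden state per frame
        return self.out(h)                # (batch, time, n_targets)

model = ContinuousEmotionLSTM()
x = torch.randn(4, 500, 40)               # 4 recordings, 500 feature frames each
pred = model(x)                            # frame-wise arousal/valence trajectories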

Slides


Joël Legrand on Friday, October 14

Date: Friday October 14 at 1 pm
Place: LORIA, room B011-B013
Speaker: Joël Legrand (LIDIAP/EPFL, Lausanne, Switzerland and LORIA, Nancy, France)

Title: Word Sequence Modeling using Deep Learning

Abstract: For a long time, natural language processing (NLP) has relied on generative models with task-specific, manually engineered features. Recently, the rapidly growing interest in deep learning has led to state-of-the-art results in various fields such as computer vision, speech processing and natural language processing. The central idea behind these approaches is to learn features and models simultaneously, in an end-to-end manner, while making as few assumptions as possible. In NLP, word embeddings, which map the words of a dictionary to a continuous low-dimensional vector space, have proven very efficient for a large variety of tasks while requiring almost no a priori linguistic assumptions. In this talk, I will present the results of my research on continuous representations of segments of sentences for the purpose of solving NLP tasks that involve complex sentence-level relationships. I will first introduce the key concepts of deep learning for NLP. I will then focus on two recent empirical studies concerning the tasks of syntactic parsing and bilingual word alignment. For each of them, I will present the main challenges as well as the deep learning-based solutions used to overcome them.
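
For readers new to word embeddings, the toy NumPy snippet below shows the basic mechanics: each dictionary word indexes a row of a dense matrix, and similarity is measured in that vector space. The vectors here are random placeholders; in practice they are learned from raw text.

# Toy illustration of word embeddings as a lookup table plus cosine similarity.
import numpy as np

vocab = ["cat", "dog", "car", "truck", "bank"]
rng = np.random.default_rng(42)
E = rng.normal(size=(len(vocab), 50))      # embedding matrix: |V| x d
word2id = {w: i for i, w in enumerate(vocab)}

def nearest(word, k=2):
    """Return the k most similar words under cosine similarity."""
    v = E[word2id[word]]
    sims = E @ v / (np.linalg.norm(E, axis=1) * np.linalg.norm(v))
    order = np.argsort(-sims)
    return [vocab[i] for i in order if vocab[i] != word][:k]

print(nearest("cat"))   # with trained embeddings, semantically close words rank first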

Recent publications:

[1]. Joint RNN-based greedy parsing and word composition.
Joël Legrand and Ronan Collobert. Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015)

[2]. Neural Network-based Word Alignment through Score Aggregation.
Joël Legrand, Michael Auli and Ronan Collobert. Proceedings of the First Conference on Machine Translation (WMT 2016)

Short bio:
Joël Legrand received his MSc degree in Computer Science from the Université de Lorraine and his Ph.D. in Electrical Engineering from the École Polytechnique Fédérale de Lausanne. He recently joined the ORPAILLEUR team as a postdoctoral fellow, working on the PractiKPharma project.


Hans van Ditmarsch on Wednesday, October 19

Date: Wednesday, October 19 at 2 pm
Place: LORIA, room C005
Speaker: Hans van Ditmarsch (LORIA, Cello team)

Title: Epistemic Gossip Protocols

Abstract:
Optimal schedules for distributing information by one-to-one communication between nodes have been well studied in network theory since the 1970s. One can take these communicative actions to be telephone calls, and protocols to spread information in this way are known as gossip protocols or epidemic protocols. Statistical approaches to gossip have flourished since then; witness, for example, the survey "Epidemic Information Dissemination in Distributed Systems" by Eugster et al. (IEEE Computer, 2004). It is typical to assume a global scheduler who executes a possibly non-deterministic or randomized protocol. A departure from this methodology is to investigate epistemic gossip protocols, where an agent (node) calls another agent not because it is instructed to do so by a scheduler, but based on its knowledge or ignorance of the distribution of secrets over the network and of other agents' knowledge or ignorance of that distribution. Such protocols are distributed and do not need a central scheduler. This comes at a cost: they may take longer to terminate than non-epistemic, globally scheduled protocols. A number of works have appeared over the past years (Apt et al., Attamah et al., van Ditmarsch et al., van Eijck & Gattinger, Herzig & Maffre), of which we present a survey, including open problems yet to be solved by the community.
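
As a concrete illustration of one protocol from this literature, the Python sketch below simulates a random execution of the protocol often called "Learn New Secrets" (LNS), in which an agent may call another agent only if it does not yet know that agent's secret. The simulation is illustrative and not taken from any of the cited papers.

# Random execution of the "Learn New Secrets" epistemic gossip protocol.
import random

def lns_gossip(n_agents, seed=0):
    rng = random.Random(seed)
    # knows[a] = set of secrets agent a currently knows (initially only its own)
    knows = [{a} for a in range(n_agents)]
    calls = 0
    while True:
        # calls permitted by the epistemic condition: a may call b only if
        # a does not yet know b's secret
        possible = [(a, b) for a in range(n_agents) for b in range(n_agents)
                    if a != b and b not in knows[a]]
        if not possible:
            break
        a, b = rng.choice(possible)
        knows[a] |= knows[b]            # a call exchanges all secrets both agents know
        knows[b] = set(knows[a])
        calls += 1
    success = all(len(k) == n_agents for k in knows)
    return calls, success

# A distributed, epistemically scheduled run may need more calls than the
# optimal centralised schedule (2n - 4 calls for n >= 4 agents).
print(lns_gossip(6))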


Romain Serizel on Wednesday, November 2

Date: Wednesday, November 2 at 2 pm
Place: LORIA, room A008
Speaker: Romain Serizel (LORIA)

Title: Feature learning based on nonnegative matrix factorisation for speaker identification

Abstract: The main goal of speaker identification is to determine whether the speaker in an audio recording is known and, if so, to find his or her identity. A recent trend is to use feature learning to overcome the limitations of hand-crafted features. This talk will review the dominant paradigm (the so-called i-vector approach) and propose an alternative solution based on group nonnegative matrix factorisation (NMF). We will then propose to integrate this approach into a task-driven supervised framework inspired by supervised dictionary learning. The goal is to capture both the speaker variability and the session variability while exploiting the discriminative learning aspect of the task-driven approach.
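
As a minimal illustration of the NMF building block only (plain multiplicative updates in NumPy, without the group structure or the task-driven training described in the talk), the sketch below factorises a nonnegative feature matrix into a dictionary and activations; all sizes are placeholders.

# Basic NMF with multiplicative updates (Euclidean cost): V ≈ W H.
import numpy as np

def nmf(V, k, n_iter=200, eps=1e-9):
    rng = np.random.default_rng(0)
    n, m = V.shape
    W = rng.random((n, k)) + eps      # dictionary (e.g., spectral patterns)
    H = rng.random((k, m)) + eps      # activations over time
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

V = np.abs(np.random.default_rng(1).normal(size=(257, 400)))   # fake magnitude spectrogram
W, H = nmf(V, k=32)
# Statistics of the activations H can then serve as learned features
# for a downstream speaker classifier.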

Denis Paperno on Wednesday, November 30

Date: Wednesday, November 30 at 2 pm
Place: LORIA, room A008
Speaker: Denis Paperno (LORIA, Synalp team)

Title: Distributional Semantic Spaces: Creation and Applications

Abstract:
Distributional semantic vectors (also known as word embeddings) are increasingly popular in various natural language tasks. The talk will describe how distributional semantic models are created, investigate some of the model hyperparameters, and illustrate their applications.
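
As one classic recipe for creating such vectors (counting co-occurrences in a window, reweighting with positive PMI, and reducing the dimensionality with SVD), the NumPy sketch below builds toy embeddings from a tiny corpus; predictive models such as word2vec are a common alternative, and nothing here is specific to the talk.

# Count-based distributional vectors: co-occurrence counts -> PPMI -> SVD.
import numpy as np
from collections import Counter

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
window = 2

counts = Counter()
for i, w in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if i != j:
            counts[(idx[w], idx[corpus[j]])] += 1

C = np.zeros((len(vocab), len(vocab)))
for (a, b), c in counts.items():
    C[a, b] = c

# Positive pointwise mutual information reweighting
total = C.sum()
p_w = C.sum(axis=1, keepdims=True) / total
p_c = C.sum(axis=0, keepdims=True) / total
ppmi = np.maximum(np.log((C / total + 1e-12) / (p_w * p_c)), 0)

# Low-dimensional word vectors via truncated SVD
U, S, _ = np.linalg.svd(ppmi)
dim = 5
embeddings = U[:, :dim] * S[:dim]          # one row per vocabulary word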


Johan Bos on Wednesday, December 7

Date: Wednesday, December 7 at 2:15 pm
Place: LORIA, Amphi C
Speaker: Johan Bos (Rijksuniversiteit Groningen)

Title: The Parallel Meaning Bank: A Large Corpus of Translated Texts Annotated with Formal Meaning Representations

Abstract:
Several large corpora annotated with meaning representations are available nowadays, such as the Groningen Meaning Bank, the AMR Corpus, or Treebank Semantics. These are usually resources for a single language. In this talk I present a project whose aim is to develop a meaning bank for translations of texts; in other words, a parallel meaning bank. The languages involved are English, Dutch, German and Italian. The idea is to use language technology developed for English and to project the outcome of the analyses onto the other languages. There are five processing steps:
– Tokenisation: segmentation of words, multi-word expressions and sentences, using Elephant, a statistical tokenizer;
– Semantic Tagging: mapping word tokens to semantic tags (abstracting over traditional part-of-speech tags and named entities and a bit more);
– Symbolisation: assigning appropriate non-logical symbols to word tokens (combining lemmatization and normalisation);
– Syntactic Parsing: based on Combinatory Categorial Grammar;
– Semantic Parsing: based on Discourse Representation Theory, using the semantic parser Boxer.
The first aim of the project is to provide appropriate compositional semantic analyses for the aforementioned languages, taking advantage of the translations. The second aim is to study the role of meaning in translation: even though one would expect meaning to be preserved in translation, human translators often perform little tricks involving meaning shifts and changes to arrive at better translations.
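
To make the layered output of these processing steps more concrete, the Python sketch below shows one hypothetical, much-simplified way to represent the annotation layers for a single token; it is not the Parallel Meaning Bank's actual data format, and the tag values are illustrative only.

# Simplified, hypothetical per-token view of the pipeline's annotation layers.
from dataclasses import dataclass

@dataclass
class TokenAnnotation:
    token: str        # output of tokenisation (Elephant)
    semtag: str       # semantic tag
    symbol: str       # non-logical symbol (lemmatised / normalised form)
    ccg_cat: str      # CCG category from syntactic parsing

sentence = [
    TokenAnnotation("Tom", semtag="PER", symbol="tom", ccg_cat="NP"),
    TokenAnnotation("sleeps", semtag="EXS", symbol="sleep", ccg_cat=r"S\NP"),
]
# The final layer, produced by Boxer, is a Discourse Representation Structure
# for the whole sentence, roughly of the form: [x, e | tom(x), sleep(e), agent(e, x)]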