Aarne Talman

PhD Student in Language Technology at the University of Helsinki

Research

My research focuses on Natural Language Inference and Natural Language Understanding.

The main task of Natural Language Inference (NLI) research is to build computational systems that can recognise valid inferences, as well as contradictions, in text input. NLI is of central importance for the study of natural language understanding, computational semantics and artificial intelligence more generally. Traditionally, NLI researchers focused on rule-based approaches and various “shallow” methods such as bag-of-words models. Since the publication of the Stanford NLI corpus, however, there has been growing interest in deep learning approaches to NLI. These approaches offer significant benefits over rule-based systems, but they are still in many ways far from solving the problem.
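To make the task concrete, here is a small illustrative sketch, not tied to any particular system: each NLI example pairs a premise with a hypothesis and a gold label (entailment, contradiction or neutral, as in SNLI-style datasets), and a toy word-overlap heuristic stands in for the kind of “shallow” baseline mentioned above. The sentences, function name and threshold are hypothetical.

```python
# Illustrative only: premise/hypothesis/label triples in the style of SNLI;
# the overlap threshold below is an arbitrary choice for the example.

examples = [
    # (premise, hypothesis, gold label)
    ("A man is playing a guitar on stage.", "A man is performing music.", "entailment"),
    ("A man is playing a guitar on stage.", "The man is asleep at home.", "contradiction"),
    ("A man is playing a guitar on stage.", "The man is a famous musician.", "neutral"),
]

def shallow_overlap_baseline(premise: str, hypothesis: str) -> str:
    """Toy 'bag-of-words' heuristic: guess entailment when most
    hypothesis words also appear in the premise."""
    p = set(premise.lower().split())
    h = set(hypothesis.lower().split())
    overlap = len(p & h) / max(len(h), 1)
    return "entailment" if overlap > 0.6 else "neutral"

for premise, hypothesis, gold in examples:
    prediction = shallow_overlap_baseline(premise, hypothesis)
    print(f"gold={gold:13s} predicted={prediction}")
```

Heuristics of this kind illustrate why lexical overlap alone is not enough, and why the field has moved towards neural models that learn sentence representations.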

One of my areas of focus is how multilingual neural network models can help with natural language inference. The goal is to build language-independent abstract meaning representations by training neural networks on massively parallel multilingual datasets. I will apply these abstract meaning representations to natural language inference tasks, roughly along the lines sketched below.
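The sketch below shows one common way to set this up, assuming a PyTorch environment and an InferSent-style combination of sentence vectors; it is not the actual model used in my research. A single sentence encoder is shared across languages, so sentences from any language are mapped into the same vector space, and the resulting vectors are combined and fed to an NLI classifier. The vocabulary size, dimensions and pooling choice are assumptions made for the example.

```python
# Minimal sketch: a shared multilingual sentence encoder feeding an NLI
# classifier. Architecture details here are illustrative assumptions.
import torch
import torch.nn as nn

class SharedSentenceEncoder(nn.Module):
    """One encoder shared across all languages: token ids from any language
    (in a shared vocabulary) are mapped to a fixed-size sentence vector."""
    def __init__(self, vocab_size=10000, emb_dim=300, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)

    def forward(self, token_ids):                      # (batch, seq_len)
        hidden_states, _ = self.lstm(self.embed(token_ids))
        return hidden_states.max(dim=1).values         # max-pooled sentence vector

class NLIClassifier(nn.Module):
    """Combines premise and hypothesis vectors and predicts
    entailment / neutral / contradiction."""
    def __init__(self, encoder, hidden_dim=512, num_classes=3):
        super().__init__()
        self.encoder = encoder
        self.mlp = nn.Sequential(
            nn.Linear(8 * hidden_dim, 512), nn.ReLU(), nn.Linear(512, num_classes)
        )

    def forward(self, premise_ids, hypothesis_ids):
        u = self.encoder(premise_ids)
        v = self.encoder(hypothesis_ids)
        # InferSent-style feature combination of the two sentence vectors.
        features = torch.cat([u, v, torch.abs(u - v), u * v], dim=1)
        return self.mlp(features)

# Toy usage with random token ids standing in for two tokenised sentence pairs.
model = NLIClassifier(SharedSentenceEncoder())
premise = torch.randint(1, 10000, (2, 12))       # batch of 2 premises
hypothesis = torch.randint(1, 10000, (2, 9))     # batch of 2 hypotheses
logits = model(premise, hypothesis)              # shape: (2, 3)
print(logits.shape)
```

In practice, the shared encoder would first be trained on massively parallel data so that translations of the same sentence receive similar vectors, and the resulting representations would then be reused for inference tasks such as NLI.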

Projects

Found in Translation: Natural Language Understanding with Cross-lingual Grounding is an ERC-funded project led by Jörg Tiedemann. The project proposes a line of research focused on developing novel data-driven models that can learn language-independent abstract meaning representations from the indirect supervision provided by human translations covering a substantial proportion of the world's linguistic diversity. A guiding principle is cross-lingual grounding, the effect of resolving ambiguities through translation. Eventually, this will lead to language-independent meaning representations, and we will test our ideas on multilingual machine translation and on tasks that require semantic reasoning and inference.