Representation learning for natural language
Doctoral thesis, 2018

Artificial neural networks have achieved astonishing results on a wide range of tasks.
One of the reasons for this success is their ability to learn the whole task at once (end-to-end learning), including the representations of the data. This thesis investigates representation learning for natural language through the study of a number of tasks ranging from automatic multi-document summarization to named entity recognition and the transformation of words into morphological forms specified by analogies.

In the first two papers, we investigate whether automatic multi-document summarization can benefit from learned representations, and how learned representations are best incorporated into an extractive summarization system. We propose a novel summarization approach that represents sentences using word embeddings, together with a strategy for aggregating multiple sentence similarity scores to compute summaries that take multiple aspects into account. The approach is evaluated quantitatively using the de facto standard evaluation system ROUGE, and obtains state-of-the-art results on standard benchmark datasets for generic multi-document summarization.
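
As an illustration of the idea (not the code from Papers I-II), the sketch below embeds a sentence as the average of its word embeddings, computes pairwise cosine similarities between sentences, and aggregates several similarity matrices by elementwise multiplication so that a sentence pair must score well on every aspect; the function names and the multiplicative aggregation rule are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the thesis code): sentence vectors as
# averaged word embeddings, cosine similarity, and aggregation of several
# similarity matrices by elementwise multiplication.
import numpy as np

def sentence_vector(sentence, word_vectors, dim=100):
    """Average the embeddings of the words in a sentence."""
    vecs = [word_vectors[w] for w in sentence.split() if w in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def cosine_similarity_matrix(vectors):
    """Pairwise cosine similarities between rows of a sentence-vector matrix."""
    norms = np.linalg.norm(vectors, axis=1, keepdims=True) + 1e-12
    unit = vectors / norms
    return unit @ unit.T

def aggregate_similarities(similarity_matrices):
    """Combine several sentence-similarity matrices into one score matrix.
    With multiplication, a sentence pair must be similar under every measure
    to receive a high aggregated similarity."""
    agg = np.ones_like(similarity_matrices[0])
    for sim in similarity_matrices:
        agg *= sim
    return agg
```

The aggregated similarity matrix can then be plugged into an extractive selection procedure that picks a subset of sentences as the summary.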

The rest of the thesis studies models trained end-to-end for specific tasks, and investigates how to train the models to perform well and to learn internal representations that explain the factors of variation in the data.

Specifically, we investigate whether character-based recurrent neural networks (RNNs) can learn the necessary representations for tasks such as named entity recognition (NER) and morphological analogies, and how the representations needed to solve these tasks are best learned. We devise a novel character-based recurrent neural network model that recognizes medical terms in health record data. The model is trained on openly available data and evaluated using standard metrics on sensitive medical health records in Swedish. We conclude that the model learns to solve the task and is able to generalize from the training-data domain to the test domain.
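
The following is a minimal sketch of a character-based bidirectional LSTM tagger of the general kind described here; the layer sizes, the tag set, and the library choice (PyTorch) are assumptions for illustration, not the exact model of Paper III.

```python
# Minimal PyTorch sketch of a character-based bidirectional LSTM tagger
# (illustrative only; sizes and tagging scheme are assumptions).
import torch
import torch.nn as nn

class CharBiLSTMTagger(nn.Module):
    def __init__(self, n_chars, n_tags, char_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(n_chars, char_dim)
        self.bilstm = nn.LSTM(char_dim, hidden_dim, bidirectional=True,
                              batch_first=True)
        self.out = nn.Linear(2 * hidden_dim, n_tags)  # forward + backward states

    def forward(self, char_ids):
        # char_ids: (batch, sequence_length) integer-encoded characters
        x = self.embed(char_ids)
        h, _ = self.bilstm(x)   # (batch, seq_len, 2 * hidden_dim)
        return self.out(h)      # per-character tag scores

# Usage: predict a tag (e.g. drug / symptom / body part / other) for each
# character, then read off term spans from contiguous non-"other" predictions.
model = CharBiLSTMTagger(n_chars=256, n_tags=4)
scores = model(torch.randint(0, 256, (1, 40)))  # one sequence of 40 characters
```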

We then present a novel recurrent neural model that transforms a query word into the morphological form demonstrated by another word. The model is trained and evaluated on word analogies and takes the raw character sequences of the words as input, with no explicit features needed. We conclude that character-based RNNs can successfully learn good internal representations and that the proposed model performs well on the analogy task, beating the baseline by a large margin. As the model learns to transform words, it learns internal representations that disentangle morphological relations using only cues from the training objective, which is to perform well on the word transformation task.
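
To make the setup concrete, here is a hedged sketch of a character-level encoder-decoder for the analogy task: the demonstration pair and the query word are encoded as character sequences, and a decoder emits the transformed query one character at a time. The three-encoder layout and all dimensions are illustrative assumptions rather than the exact architecture of Paper IV.

```python
# Minimal sketch of a character-level encoder-decoder for morphological
# analogies ("see" : "sees" :: "eat" : ?). Illustrative assumptions only.
import torch
import torch.nn as nn

class MorphAnalogyModel(nn.Module):
    def __init__(self, n_chars, char_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(n_chars, char_dim)
        self.encoder = nn.GRU(char_dim, hidden_dim, batch_first=True)
        # The decoder is initialized from the three encoded words and
        # predicts the transformed query one character at a time.
        self.bridge = nn.Linear(3 * hidden_dim, hidden_dim)
        self.decoder = nn.GRU(char_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, n_chars)

    def encode(self, char_ids):
        _, h = self.encoder(self.embed(char_ids))
        return h[-1]  # final hidden state, (batch, hidden_dim)

    def forward(self, demo_src, demo_tgt, query, decoder_input):
        # Encode the demonstration pair ("see", "sees") and the query ("eat").
        state = torch.cat([self.encode(demo_src),
                           self.encode(demo_tgt),
                           self.encode(query)], dim=-1)
        h0 = torch.tanh(self.bridge(state)).unsqueeze(0)
        # Teacher-forced decoding of the target form ("eats").
        dec, _ = self.decoder(self.embed(decoder_input), h0)
        return self.out(dec)  # per-step character scores
```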

In other settings, such cues may not be available at training time, and we therefore present a regularizer that improves disentanglement in the learned representations by penalizing the correlation between activations in a layer. In the second part of the thesis, we have thus proposed models and associated training strategies that solve the tasks and simultaneously learn informative internal representations; in Paper V this is enforced by an explicit regularization signal, suitable when such a signal is missing from the training data (e.g. in the case of autoencoders).
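
A minimal sketch of such a penalty is given below: the off-diagonal entries of the correlation matrix of a layer's activations, estimated over a mini-batch, are pushed toward zero, and the penalty is added to the task loss with a weighting factor. The exact formulation in Paper V may differ; this is an illustration only.

```python
# Minimal sketch of a decorrelation penalty on a layer's activations:
# discourage redundant units by penalizing off-diagonal entries of the
# batch-estimated correlation matrix (illustrative, not the Paper V code).
import torch

def decorrelation_penalty(activations, eps=1e-8):
    """activations: (batch, n_units). Returns a scalar penalty that is zero
    when all pairs of units are uncorrelated over the batch."""
    centered = activations - activations.mean(dim=0, keepdim=True)
    std = (centered.pow(2).mean(dim=0, keepdim=True) + eps).sqrt()
    normalized = centered / std
    corr = normalized.t() @ normalized / activations.shape[0]
    off_diag = corr - torch.diag(torch.diag(corr))
    return (off_diag ** 2).sum()

# Added to the task loss with a weighting factor, e.g.:
# loss = task_loss + lambda_decorr * decorrelation_penalty(hidden_layer)
```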

artificial neural networks

artificial intelligence

natural language processing

deep learning

machine learning

summarization

representation learning

MC, Hörsalsvägen 5, entrance floor, Chalmers
Opponent: Prof. Dr. Hinrich Schütze, CIS Ludwig-Maximilians Universität, München

Author

Olof Mogren

Chalmers, Computer Science and Engineering, Computer Science

Extractive Summarization using Continuous Vector Space Models

Proceedings of the 2nd Workshop on Continuous Vector Space Models and their Compositionality (CVSC), EACL, April 26-30, 2014, Gothenburg, Sweden (2014), pp. 31-39

Paper in proceedings

Extractive summarization by aggregating multiple similarities

International Conference on Recent Advances in Natural Language Processing (RANLP), Vol. 2015 (2015), pp. 451-457

Paper in proceedings

S. Almgren, S. Pavlov, and O. Mogren (2016). “Named Entity Recognition in Swedish Health Records with Character-Based Deep Bidirectional LSTMs”

O. Mogren and R. Johansson (2018). “Character-based recurrent neural networks for morphological relational reasoning”

M. Kågebäck and O. Mogren (2018). “Disentangled activations in deep networks”

The advances in artificial intelligence have been astonishing in recent years, with new algorithms showing super-human performance on a wide range of tasks. An important reason for this development is the availability of large datasets and powerful computers, making it possible to train larger machine learning models with higher learning capacity. Artificial neural networks (ANNs) are machine learning models that have been of paramount importance to this development. ANNs are composed of layers of artificial neurons, each of which can only perform a simple computation, but when stacked together in deep architectures, they can be trained to approximate complicated non-linear functions. These models have achieved fantastic results on various data modalities such as audio, vision, and text. One reason for the success is the internal vector representations computed by the layers, each transforming its input into numerical feature vectors that are increasingly useful for the end task. A complete model is often trained all at once (end-to-end learning), and the representations are optimized during training to solve the given task.

This thesis studies the representations computed by artificial neural networks that are trained on and applied to natural language. In Papers I and II, we apply learned representations of words to improve the performance of multi-document summarization. In Paper III, we study the use of deep neural sequence models working on the raw character stream as input, and how this class of models can be used to detect medical terms in text (such as drugs, symptoms, and body parts). The system is evaluated on medical health records in Swedish. In Paper IV, we propose a novel deep neural sequence model trained to transform words into inflected forms as demonstrated by analogies: "see" is to "sees" as "eat" is to what? The model outperforms previous rule-based approaches by a massive margin, and when inspecting the internal representations computed by this model, one can see that it learns to distinguish classes of transformations of word forms, without being explicitly told to do so. This is an effect of training the model to transform words while being provided with the analogous word forms. In other cases, however, the training objective may not provide such cues for the learning algorithm. In Paper V, we study how to improve the way learned representations disentangle the underlying factors of variation in the data. This can be useful for unsupervised representation learning, such as using autoencoders for task-agnostic representations, or when the final use case is unknown.

Subject categories

Other Computer and Information Science

Language Technology (Computational Linguistics)

Computer Vision and Robotics (Autonomous Systems)

ISBN

978-91-7597-675-4

Technical report D - School of Computer Science and Engineering, Chalmers University of Technology: 155D

Doctoral theses at Chalmers University of Technology. New series: 4356

Publisher

Chalmers


More information

Last updated

2018-11-15