Implementing Natural Language Processing with Deep Learning

There are two primary topics driving current trends in NLP with deep learning:
1. Pre-Training using Unsupervised/Unlabeled data
2. OpenAI GPT-2 breakthrough

1. Pre-Training using Unsupervised/Unlabeled data

Supervised data is expensive and limited, so how can we use unsupervised data to supplement training, followed by supervised fine-tuning, to do better?
Let's apply this to the problem of machine translation and see how it helps.
Suppose you have two corpora of text (e.g., transcriptions or Wikipedia articles) in different languages, with no cross-language mapping between them.
We can use these corpora for pre-training: train an encoder LSTM and a decoder LSTM (without attention) separately, one on each corpus, then fit them together into a single model and fine-tune it on a labeled dataset.

How does this help? The encoder and decoder LSTMs have each learned the distribution of their respective language and act as good generative models for it. When you put them together, the combined model learns to use the encoder's compressed representation to map from the source language to the target language. Pre-training, in other words, amounts to a kind of "smart" initialization. Let's take this further.
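Below is a minimal PyTorch sketch of this setup. The sizes (32k vocabularies, 256-d embeddings, 512-d hidden states) and the way the encoder's final state seeds the decoder are assumptions for illustration, not the exact models discussed above.

```python
import torch
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    """An LSTM pre-trained as a plain language model on one monolingual corpus."""
    def __init__(self, vocab_size, emb_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, state=None):
        out, state = self.lstm(self.embed(tokens), state)
        return self.proj(out), state

class Seq2Seq(nn.Module):
    """Glue the two pre-trained LSTMs into an encoder-decoder (no attention)."""
    def __init__(self, encoder_lm, decoder_lm):
        super().__init__()
        self.encoder, self.decoder = encoder_lm, decoder_lm

    def forward(self, src_tokens, tgt_tokens):
        # The encoder's final hidden state is the compressed representation
        # that the decoder conditions on.
        _, enc_state = self.encoder(src_tokens)
        logits, _ = self.decoder(tgt_tokens, enc_state)
        return logits

# 1) Pre-train one language model per language on its own unlabeled corpus (not shown).
src_lm = LSTMLanguageModel(vocab_size=32000)  # e.g. English
tgt_lm = LSTMLanguageModel(vocab_size=32000)  # e.g. French

# 2) Combine the two and fine-tune on the (small) labeled parallel dataset.
model = Seq2Seq(src_lm, tgt_lm)
```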


The issue with this kind of pre-training is that the network was never originally trained to learn a source-to-target mapping. This is where self-training, applied as back-translation, helps: we use an NMT (Neural Machine Translation) model, NMT1, to translate from English to French, then feed NMT1's output to a second model, NMT2, as a reversed pair. In other words, given NMT1's output, NMT2 learns to generate NMT1's input. This amounts to "augmented" supervised data: given the noisy output of another model, your network learns to predict that model's original input.
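A minimal sketch of that back-translation loop is below; the `translate` and `train_on_pairs` methods are hypothetical placeholders for whatever NMT implementation is actually used.

```python
def back_translation_step(nmt1, nmt2, english_monolingual):
    """Create synthetic (fr, en) pairs with nmt1 and train nmt2 on them.

    `nmt1.translate` and `nmt2.train_on_pairs` are assumed, illustrative APIs.
    """
    synthetic_pairs = []
    for en_sentence in english_monolingual:
        fr_noisy = nmt1.translate(en_sentence)           # noisy machine output
        synthetic_pairs.append((fr_noisy, en_sentence))  # reverse the pair
    # nmt2 learns to recover the original English from the noisy French,
    # i.e. "augmented" supervised data built from unlabeled text.
    nmt2.train_on_pairs(synthetic_pairs)
```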


So far so good. Now let's look at a simpler approach that performs NMT with zero labeled data. It works by learning word vectors in both languages; the useful property of word vectors is that they have inherent geometric structure. This lets us learn an affine mapping, a weight matrix W, between the two languages' embedding spaces.
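A minimal sketch of fitting W with NumPy least squares. The aligned anchor matrices `X` and `Y` (rows of source and target word vectors) are an assumption; in the fully unsupervised setting these anchors come from an adversarial first pass or identical character strings rather than a bilingual dictionary.

```python
import numpy as np

def fit_mapping(X, Y):
    """Solve W = argmin ||XW - Y||_F via least squares.

    X: (n, d_src) source-language vectors, Y: (n, d_tgt) target-language
    vectors, row i of X and row i of Y assumed to refer to the same concept.
    """
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W
```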


Words with similar meanings end up close to each other in the shared embedding space across languages. Plugging these mapped embeddings into our earlier encoder and decoder, we can train the network to translate between languages, since we now know the meanings of words in both languages.
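Once W is learned, word-level translation can be sketched as a nearest-neighbour lookup in the mapped space. The embedding dictionaries and the cosine-similarity search below are illustrative assumptions, not part of the original method description.

```python
import numpy as np

def translate_word(word, src_emb, tgt_emb, W):
    """Map a source word into target space and return the closest target word.

    src_emb / tgt_emb are assumed to be dicts mapping words to vectors.
    """
    mapped = src_emb[word] @ W
    tgt_words = list(tgt_emb.keys())
    tgt_matrix = np.stack([tgt_emb[w] for w in tgt_words])
    sims = tgt_matrix @ mapped / (
        np.linalg.norm(tgt_matrix, axis=1) * np.linalg.norm(mapped) + 1e-8)
    return tgt_words[int(np.argmax(sims))]
```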


A summary of the objective functions of the optimization (a sketch of the combined training step follows this list):

  • The shared encoder is trained twice: once to reproduce the source language in an auto-encoder fashion, which helps the network further cement the mapping between the source and target languages via Src -> Target -> Src learning.
  • Secondly, back-translation is performed using the output of the other network as "augmented" supervised data.
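Here is a minimal sketch of how these two objectives might combine into a single training step; the `encode`, `decode`, `translate`, `noise`, and `loss` helpers are hypothetical stand-ins for a real unsupervised-NMT implementation.

```python
def training_step(model, src_batch, tgt_batch):
    """One combined step: denoising auto-encoding + back-translation (sketch)."""
    # 1) Denoising auto-encoding: reconstruct each language from a noisy copy
    #    through the shared encoder.
    l_auto_src = loss(model.decode(model.encode(noise(src_batch)), lang="src"), src_batch)
    l_auto_tgt = loss(model.decode(model.encode(noise(tgt_batch)), lang="tgt"), tgt_batch)

    # 2) Back-translation: translate src -> tgt with the current model, then
    #    train the model to recover src from that synthetic translation
    #    (the Src -> Target -> Src round trip).
    synthetic_tgt = model.translate(src_batch, to="tgt")
    l_bt = loss(model.decode(model.encode(synthetic_tgt), lang="src"), src_batch)

    return l_auto_src + l_auto_tgt + l_bt
```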

Recently, the research team at Facebook augmented BERT's masked-language-modeling objective: instead of masking words in a single language, the network is given inputs in two languages (a supervised, parallel pair) and words are masked in one of them. This forces the model to learn how meaning maps between languages and provided a significant performance improvement; the approach was published as Cross-lingual Language Model Pretraining (XLM), often described as a cross-lingual BERT.
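A minimal sketch of that cross-lingual masking objective: a parallel sentence pair is concatenated and tokens are masked on one side only, so the model must look at the other language to fill in the blanks. The placeholder mask token ID, the 15% masking rate, and the -100 "ignore" label are illustrative assumptions.

```python
import random

MASK_ID = 4  # placeholder ID for the [MASK] token (assumption)

def build_cross_lingual_example(src_ids, tgt_ids, mask_prob=0.15):
    """Concatenate a parallel pair and mask tokens on the source side only."""
    inputs = list(src_ids) + list(tgt_ids)
    labels = [-100] * len(inputs)  # -100 = commonly ignored by the loss
    for i in range(len(src_ids)):
        if random.random() < mask_prob:
            labels[i] = inputs[i]  # predict the original token here
            inputs[i] = MASK_ID
    # The target-language tokens stay visible, so the model can use them as
    # cross-lingual context when recovering the masked source tokens.
    return inputs, labels
```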


2. OpenAI GPT-2 breakthrough

Next, let's look at OpenAI's GPT-2 model. It is essentially a huge language model (1.5 billion parameters) trained on high-quality data scraped from highly rated outbound links on Reddit (the WebText dataset).


GPT-2 is trained purely as a language model but was evaluated on many downstream tasks, i.e., it was evaluated as a zero-shot model. (Zero-shot learning means attempting a task without ever training on it.)


Now the question is: how does GPT-2 perform translation even though it was never explicitly trained to do so? The answer can be seen in snippets of the training corpus: in a fair amount of the data, quotes in one language are accompanied by their translation into another. The model learns a mapping between the two languages from these examples and uses it implicitly to perform translation.
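A minimal sketch of coaxing translation out of GPT-2 by prompting, using the Hugging Face transformers library. The public "gpt2" checkpoint is far smaller than the 1.5-billion-parameter model discussed here, and the exact prompt format is an assumption for illustration.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Condition the language model on a translation-shaped context and let it
# continue; no translation-specific training is involved.
prompt = "english: how are you? french: comment allez-vous? english: good morning french:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10, do_sample=False,
                         pad_token_id=tokenizer.eos_token_id)
# Decode only the newly generated continuation.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:]))
```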


Models have continued to grow in size and to perform better on both vision and language tasks. How long this trend will continue is a question we look forward to answering in the future 🙂
