Many of these articles have focused on BERT — the model that arrived, dominated the world of natural language processing (NLP), and marked a new age for language models.
For those of you who may not have used transformer models (e.g., BERT) before, the process looks a little like this:
pip install transformers
- Initialize a pre-trained transformer model with from_pretrained.
- Test it on some data.
- Maybe fine-tune the model (train it some more).
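Concretely, that usual workflow might look something like the rough sketch below — here using the generic Auto classes and the public bert-base-uncased checkpoint purely as an illustration:

from transformers import AutoTokenizer, AutoModelForMaskedLM

# initialize a pre-trained model and its tokenizer
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
model = AutoModelForMaskedLM.from_pretrained('bert-base-uncased')

# test it on some data
inputs = tokenizer('hello, how are you?', return_tensors='pt')
outputs = model(**inputs)
print(outputs.logits.shape)  # (batch_size, sequence_length, vocab_size)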
Now, this is a great approach, but if we only ever do this, we lack the understanding behind creating our own transformer models.
And, if we cannot create our own transformer models, we must rely on there being a pre-trained model that fits our problem — which is not always the case.
So in this article, we will explore the steps we must take to build our own transformer model — specifically a further developed version of BERT, called RoBERTa.
An Overview
There are a few steps to the process, so before we dive in let’s first summarize what we need to do. In total, there are four key parts:
- Getting the data
- Building a tokenizer
- Creating an input pipeline
- Training the model
Once we have worked through each of these sections, we will take the tokenizer and model we have built — and save them both so that we can then use them in the same way we usually would with from_pretrained.
Getting The Data
As with any machine learning project, we need data. In terms of data for training a transformer model, we really are spoilt for choice — we can use almost any text data.
And, if there’s one thing that we have plenty of on the internet — it’s unstructured text data.
One of the largest datasets in the domain of text scraped from the internet is the OSCAR dataset.
The OSCAR dataset boasts a huge number of different languages — and one of the clearest use-cases for training from scratch is so that we can apply BERT to some less commonly used languages, such as Telugu or Navajo.
Unfortunately, the only language I can speak with any degree of competency is English — but my girlfriend is Italian, and so she — Laura — will be assessing the results of our Italian-speaking BERT model, FiliBERTo.
So, to download the Italian segment of the OSCAR dataset we will be using HuggingFace's datasets library — which we can install with pip install datasets. Then we download OSCAR_IT with:
In [1]: from datasets import load_dataset
In [2]: dataset = load_dataset('oscar', 'unshuffled_deduplicated_it')
Let's take a look at the dataset object. The dataset is a DatasetDict containing a single train dataset.

DatasetDict({
    train: Dataset({
        features: ['id', 'text'],
        num_rows: 28522082
    })
})
We can access the dataset itself through the train key. From here we can view more information, like the number of rows and the structure of the dataset.
In [5]: dataset['train']
Out[5]: Dataset({
    features: ['id', 'text'],
    num_rows: 28522082
})
Let's take a look at a single sample — each record is a dictionary containing an id and the raw Italian text under the text feature.
Great, now let's store our data in a format that we can use when building our tokenizer. We need to create a set of plaintext files containing just the text feature from our dataset, and we will split each sample using a newline \n.
from tqdm.auto import tqdm

text_data = []
file_count = 0

for sample in tqdm(dataset['train']):
    # remove newlines within a sample so each sample sits on a single line
    sample = sample['text'].replace('\n', '')
    text_data.append(sample)
    if len(text_data) == 10_000:
        # once we hit the 10K mark, save to file
        with open(f'../../data/text/oscar_it/text_{file_count}.txt', 'w', encoding='utf-8') as fp:
            fp.write('\n'.join(text_data))
        text_data = []
        file_count += 1

# after saving in 10K chunks, we will have ~2082 leftover samples, so we save those too
with open(f'../../data/text/oscar_it/text_{file_count}.txt', 'w', encoding='utf-8') as fp:
    fp.write('\n'.join(text_data))
Over in our data/text/oscar_it directory we will find our newly created plaintext files — text_0.txt, text_1.txt, and so on — each holding up to 10,000 samples.
Building a Tokenizer
Next up is the tokenizer! When using transformers we typically load a tokenizer, alongside its respective transformer model — the tokenizer is a key component in the process.
When building our tokenizer we will feed it all of our OSCAR data, specify our vocabulary size (number of tokens in the tokenizer), and any special tokens.
Now, the RoBERTa special tokens look like this:
| Token | Use |
| --- | --- |
| <s> | Beginning of sequence (BOS) or classifier (CLS) token |
| </s> | End of sequence (EOS) or separator (SEP) token |
| <unk> | Unknown token |
| <pad> | Padding token |
| <mask> | Masking token |
Get a list of paths to each file in our oscar_it directory.
from pathlib import Path
paths = [str(x) for x in Path('../../data/text/oscar_it').glob('**/*.txt')]
Now we move on to training the tokenizer. We use a byte-level byte-pair encoding (BPE) tokenizer. This allows us to build the vocabulary from an alphabet of single bytes, meaning all words will be decomposable into tokens.
from tokenizers import ByteLevelBPETokenizer

# initialize and train a byte-level BPE tokenizer on our plaintext files
tokenizer = ByteLevelBPETokenizer()
tokenizer.train(files=paths[:5], vocab_size=30_522, min_frequency=2,
                special_tokens=['<s>', '<pad>', '</s>', '<unk>', '<mask>'])

import os

# save the two tokenizer files to a new filiberto directory
os.mkdir('./filiberto')
tokenizer.save_model('filiberto')
Saving the tokenizer leaves us with two files that define our new FiliBERTo tokenizer:
- merges.txt — performs the initial mapping of text to tokens
- vocab.json — maps the tokens to token IDs
And with those, we can move on to initializing our tokenizer so that we can use it as we would use any other from_pretrained tokenizer.
Initializing the Tokenizer
We first initialize the tokenizer using the two files we built before — with a simple from_pretrained:
from transformers import RobertaTokenizer

# initialize the tokenizer using the vocab and merges files we saved earlier
tokenizer = RobertaTokenizer.from_pretrained('filiberto', max_len=512)

# test our tokenizer on a simple sentence
tokens = tokenizer('ciao, come va?')
print(tokens)

tokens.input_ids
In the output we can see our encoded text in input_ids, wrapped by the tokenizer's <s> and </s> special tokens, alongside the attention_mask.
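Because we will lean on the special token IDs when building the masking logic later, it is worth checking them now. Given the order we passed the special tokens during training ('<s>', '<pad>', '</s>', '<unk>', '<mask>'), we expect them to have been assigned IDs 0 through 4:

# check the IDs our tokenizer assigned to each special token
print(tokenizer.bos_token_id, tokenizer.pad_token_id, tokenizer.eos_token_id,
      tokenizer.unk_token_id, tokenizer.mask_token_id)
# expect: 0 1 2 3 4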
Creating the Input Pipeline
The input pipeline is the most complex part of the entire process. It consists of taking our raw OSCAR training data, transforming it, and loading it into a DataLoader ready for training.
Preparing the Data
We’ll start with a single sample and work through the preparation logic.
First, we need to open our file — one of the .txt files we saved earlier — and split it on newline characters \n, as these mark the individual samples.
with open('../../data/text/oscar_it/text_0.txt', 'r', encoding='utf-8') as fp:
    lines = fp.read().split('\n')

# tokenize every line, padding/truncating each to 512 tokens
batch = tokenizer(lines, max_length=512, padding='max_length', truncation=True)
len(batch['input_ids'])
For training with masked-language modeling (MLM), we need three tensors from our tokenized batch:
- input_ids — our token IDs, with ~15% of tokens masked using the mask token <mask>.
- attention_mask — a tensor of 1s and 0s, marking the positions of 'real' tokens and padding tokens — used in attention calculations.
- labels — our token IDs with no masking.
If you’re not familiar with MLM, I’ve explained it here.
Our attention_mask and labels tensors are simply extracted from our batch. The input_ids tensor requires a little more work, however; for this tensor we mask ~15% of the tokens, assigning them our <mask> token ID.
import torch

# extract our labels (unmasked token IDs) and attention mask from the tokenized batch
labels = torch.tensor(batch['input_ids'])
mask = torch.tensor(batch['attention_mask'])
# make copy of labels tensor, this will be input_ids
input_ids = labels.detach().clone()
# create random array of floats with equal dims to input_ids
rand = torch.rand(input_ids.shape)
# mask a random ~15% of tokens, excluding the special tokens 0 (<s>), 1 (<pad>), and 2 (</s>)
mask_arr = (rand < .15) * (input_ids != 0) * (input_ids != 1) * (input_ids != 2)
# loop through each row in input_ids tensor (cannot do in parallel)
for i in range(input_ids.shape[0]):
    # get indices of mask positions from mask array
    selection = torch.flatten(mask_arr[i].nonzero()).tolist()
    # mask input_ids with our <mask> token ID (using tokenizer.mask_token_id rather than a hard-coded value)
    input_ids[i, selection] = tokenizer.mask_token_id
We have 10000 tokenized sequences, each containing 512 tokens.
input_ids.shape
We can see the special tokens in the first sequence: 0 is our <s> (BOS/CLS) token, 2 our </s> (EOS/SEP) token, the masked positions now carry our <mask> token ID, and the sequence is padded out with <pad> tokens.
input_ids[0][:200]
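To confirm the masking behaved as intended, we can measure roughly what proportion of tokens received the mask. This is only an approximate, illustrative check — special tokens are never masked but are still counted in the denominator, so the figure will sit a little under 15%:

# fraction of non-padding tokens replaced with the <mask> token ID
masked = (input_ids == tokenizer.mask_token_id).sum()
real = (labels != tokenizer.pad_token_id).sum()
print(masked / real)  # roughly 0.15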
Building the DataLoader
Next, we define our Dataset class — which we use to wrap our three encoded tensors as a PyTorch torch.utils.data.Dataset object.
encodings = {'input_ids': input_ids, 'attention_mask': mask, 'labels': labels}

class Dataset(torch.utils.data.Dataset):
    def __init__(self, encodings):
        # store encodings internally
        self.encodings = encodings

    def __len__(self):
        # return the number of samples
        return self.encodings['input_ids'].shape[0]

    def __getitem__(self, i):
        # return dictionary of input_ids, attention_mask, and labels for index i
        return {key: tensor[i] for key, tensor in self.encodings.items()}
Next, we initialize our Dataset.
dataset = Dataset(encodings)
And initialize the dataloader, which will load the data into the model during training.
loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)
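Before moving on, we can pull a single batch from the loader to confirm it produces tensors of the expected shape — 16 samples of 512 tokens each:

# grab one batch and check the tensor shapes
sample_batch = next(iter(loader))
print(sample_batch['input_ids'].shape)       # torch.Size([16, 512])
print(sample_batch['attention_mask'].shape)  # torch.Size([16, 512])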
Training the Model
We need two things for training: our DataLoader and a model. The DataLoader we have — but no model.
Initializing the Model
For training, we need a raw (not pre-trained) model with a language modeling head — in our case, RobertaForMaskedLM. To create that, we first need to create a RoBERTa config object describing the parameters we'd like to initialize FiliBERTo with.
from transformers import RobertaConfig

config = RobertaConfig(
    vocab_size=30_522,  # we align this to the tokenizer vocab_size
    max_position_embeddings=514,
    hidden_size=768,
    num_attention_heads=12,
    num_hidden_layers=6,
    type_vocab_size=1
)

from transformers import RobertaForMaskedLM

# initialize a raw (randomly weighted) RoBERTa model with an MLM head from our config
model = RobertaForMaskedLM(config)
Set up GPU/CPU usage.
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
# and move our model over to the selected device
model.to(device)
Activate the training mode of our model, and initialize our optimizer (Adam with weight decay – reduces the chance of overfitting).
from transformers import AdamW
# activate training mode
model.train()
# initialize optimizer
optim = AdamW(model.parameters(), lr=1e-4)
Finally — training time! We train just as we usually would when training via PyTorch.
epochs = 2

for epoch in range(epochs):
    # setup loop with TQDM and dataloader
    loop = tqdm(loader, leave=True)
    for batch in loop:
        # initialize calculated gradients (from prev step)
        optim.zero_grad()
        # pull all tensor batches required for training
        input_ids = batch['input_ids'].to(device)
        attention_mask = batch['attention_mask'].to(device)
        labels = batch['labels'].to(device)
        # process
        outputs = model(input_ids, attention_mask=attention_mask,
                        labels=labels)
        # extract loss
        loss = outputs.loss
        # calculate gradients for every parameter that needs a grad update
        loss.backward()
        # update parameters
        optim.step()
        # print relevant info to progress bar
        loop.set_description(f'Epoch {epoch}')
        loop.set_postfix(loss=loss.item())

model.save_pretrained('./filiberto')  # and don't forget to save filiBERTo!
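With both the tokenizer files and the model weights saved to the filiberto directory, we can later reload everything with from_pretrained, just as promised at the start. A minimal illustrative check:

from transformers import RobertaTokenizer, RobertaForMaskedLM

# reload our saved tokenizer and model exactly as we would any pre-trained checkpoint
tokenizer = RobertaTokenizer.from_pretrained('./filiberto')
model = RobertaForMaskedLM.from_pretrained('./filiberto')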
The Real Test
Now it’s time for the real test. We set up an MLM pipeline — and ask Laura to assess the results. You can watch the video review at 22:44 here:
We first initialize a pipeline object, using the 'fill-mask' argument. Then we begin testing our model like so:
from transformers import pipeline
fill = pipeline('fill-mask', model='filiberto', tokenizer='filiberto')
fill(f'ciao {fill.tokenizer.mask_token} va?')
We start with “buongiorno, come va?” — or “good day, how are you?”:
fill(f'buongiorno, {fill.tokenizer.mask_token} va?')
Next up, a slightly harder phrase, “ciao, dove ci incontriamo oggi pomeriggio?” — or “hi, where are we going to meet this afternoon?”:
fill(f'ciao, dove ci {fill.tokenizer.mask_token} oggi pomeriggio? ')
✅ "hi, where do we see each other this afternoon?"
✅ "hi, where do we meet this afternoon?"
❌ "hi, where here we are this afternoon?"
✅ "hi, where are we meeting this afternoon?"
✅ "hi, where do we meet this afternoon?"
Finally, one more, harder sentence, “cosa sarebbe successo se avessimo scelto un altro giorno?” — or “what would have happened if we had chosen another day?”:
fill(f'cosa sarebbe successo se {fill.tokenizer.mask_token} scelto un altro giorno?')
✅ "what would have happened if we had chosen another day?"
✅ "what would have happened if I had chosen another day?"
✅ "what would have happened if they had chosen another day?"
✅ "what would have happened if you had chosen another day?"
❌ "what would have happened if another day was chosen?"
Overall, it looks like our model passed Laura’s tests — and we now have a competent Italian language model called FiliBERTo!
That’s it for this walkthrough of training a BERT model from scratch!
We’ve covered a lot of ground, from getting and formatting our data — all the way through to using language modeling to train our raw BERT model.