Mirror of https://github.com/rasbt/LLMs-from-scratch.git (synced 2026-04-10 12:33:42 +00:00)

add BERT experiment results (#333)

* add BERT experiment results
* cleanup
* formatting

Committed via GitHub. Parent: ed257789a4. Commit: 564362044a
@@ -17,111 +17,91 @@ The codes are using the 50k movie reviews from IMDb ([dataset source](https://ai

Run the following code to create the `train.csv`, `validation.csv`, and `test.csv` datasets:

```bash
python download-prepare-dataset.py
python download_prepare_dataset.py
```
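
As a quick sanity check of the resulting splits, a minimal sketch (not part of the original instructions; it assumes the CSVs contain the `text` and `label` columns used by the training scripts below):

```python
import pandas as pd

# Print the size and class balance of each split
for split in ("train.csv", "validation.csv", "test.csv"):
    df = pd.read_csv(split)
    print(split, len(df), df["label"].value_counts().to_dict())
```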

## Step 3: Run Models

The 124M GPT-2 model used in the main chapter, starting from the pretrained weights and only training the last transformer block plus output layers:
The 124M GPT-2 model used in the main chapter, starting with pretrained weights and finetuning all weights:

```bash
python train-gpt.py
python train_gpt.py --trainable_layers "all" --num_epochs 1
```

```
Ep 1 (Step 000000): Train loss 2.829, Val loss 3.433
Ep 1 (Step 000050): Train loss 1.440, Val loss 1.669
Ep 1 (Step 000100): Train loss 0.879, Val loss 1.037
Ep 1 (Step 000150): Train loss 0.838, Val loss 0.866
Ep 1 (Step 000000): Train loss 3.706, Val loss 3.853
Ep 1 (Step 000050): Train loss 0.682, Val loss 0.706
...
Ep 1 (Step 004300): Train loss 0.174, Val loss 0.202
Ep 1 (Step 004350): Train loss 0.309, Val loss 0.190
Training accuracy: 88.75% | Validation accuracy: 91.25%
Ep 2 (Step 004400): Train loss 0.263, Val loss 0.205
Ep 2 (Step 004450): Train loss 0.226, Val loss 0.188
...
Ep 2 (Step 008650): Train loss 0.189, Val loss 0.171
Ep 2 (Step 008700): Train loss 0.225, Val loss 0.179
Training accuracy: 85.00% | Validation accuracy: 90.62%
Ep 3 (Step 008750): Train loss 0.206, Val loss 0.187
Ep 3 (Step 008800): Train loss 0.198, Val loss 0.172
...
Training accuracy: 96.88% | Validation accuracy: 90.62%
Training completed in 18.62 minutes.
Ep 1 (Step 004300): Train loss 0.199, Val loss 0.285
Ep 1 (Step 004350): Train loss 0.188, Val loss 0.208
Training accuracy: 95.62% | Validation accuracy: 95.00%
Training completed in 9.48 minutes.

Evaluating on the full datasets ...

Training accuracy: 93.66%
Validation accuracy: 90.02%
Test accuracy: 89.96%
Training accuracy: 95.64%
Validation accuracy: 92.32%
Test accuracy: 91.88%
```

<br>

---

<br>

A 340M parameter encoder-style [BERT](https://arxiv.org/abs/1810.04805) model:

```bash
python train_bert_hf.py --trainable_layers "all" --num_epochs 1 --bert_model "bert"
```

```
Ep 1 (Step 000000): Train loss 0.848, Val loss 0.775
Ep 1 (Step 000050): Train loss 0.655, Val loss 0.682
...
Ep 1 (Step 004300): Train loss 0.146, Val loss 0.318
Ep 1 (Step 004350): Train loss 0.204, Val loss 0.217
Training accuracy: 92.50% | Validation accuracy: 88.75%
Training completed in 7.65 minutes.

Evaluating on the full datasets ...

Training accuracy: 94.35%
Validation accuracy: 90.74%
Test accuracy: 90.89%
```

<br>

---

<br>

A 66M parameter encoder-style [DistilBERT](https://arxiv.org/abs/1910.01108) model (distilled down from a 340M parameter BERT model), starting from the pretrained weights and only training the last transformer block plus output layers:

```bash
python train-bert-hf.py
python train_bert_hf.py --trainable_layers "all" --num_epochs 1 --bert_model "distilbert"
```

```
Ep 1 (Step 000000): Train loss 0.693, Val loss 0.697
Ep 1 (Step 000050): Train loss 0.532, Val loss 0.596
Ep 1 (Step 000100): Train loss 0.431, Val loss 0.446
Ep 1 (Step 000000): Train loss 0.693, Val loss 0.688
Ep 1 (Step 000050): Train loss 0.452, Val loss 0.460
...
Ep 1 (Step 004300): Train loss 0.234, Val loss 0.351
Ep 1 (Step 004350): Train loss 0.190, Val loss 0.222
Training accuracy: 88.75% | Validation accuracy: 88.12%
Ep 2 (Step 004400): Train loss 0.258, Val loss 0.270
Ep 2 (Step 004450): Train loss 0.204, Val loss 0.295
...
Ep 2 (Step 008650): Train loss 0.088, Val loss 0.246
Ep 2 (Step 008700): Train loss 0.084, Val loss 0.247
Training accuracy: 98.75% | Validation accuracy: 90.62%
Ep 3 (Step 008750): Train loss 0.067, Val loss 0.209
Ep 3 (Step 008800): Train loss 0.059, Val loss 0.256
...
Ep 3 (Step 013050): Train loss 0.068, Val loss 0.280
Ep 3 (Step 013100): Train loss 0.064, Val loss 0.306
Training accuracy: 99.38% | Validation accuracy: 87.50%
Training completed in 16.70 minutes.
Ep 1 (Step 004300): Train loss 0.179, Val loss 0.272
Ep 1 (Step 004350): Train loss 0.199, Val loss 0.182
Training accuracy: 95.62% | Validation accuracy: 91.25%
Training completed in 4.26 minutes.

Evaluating on the full datasets ...

Training accuracy: 98.87%
Validation accuracy: 90.98%
Test accuracy: 90.81%
Training accuracy: 95.30%
Validation accuracy: 91.12%
Test accuracy: 91.40%
```

---

A 355M parameter encoder-style [RoBERTa](https://arxiv.org/abs/1907.11692) model, starting from the pretrained weights and only training the last transformer block plus output layers:

```bash
python train-bert-hf.py --bert_model roberta
```

---

A scikit-learn Logistic Regression model as a baseline:

```bash
python train-sklearn-logreg.py
```

```
Dummy classifier:
Training Accuracy: 50.01%
Validation Accuracy: 50.14%
Test Accuracy: 49.91%

Logistic regression classifier:
Training Accuracy: 99.80%
Validation Accuracy: 88.60%
Test Accuracy: 88.84%
```
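
For context, a minimal sketch of what such a baseline typically looks like (an illustrative approximation, not the contents of `train-sklearn-logreg.py`): bag-of-words features via scikit-learn's `CountVectorizer` combined with `LogisticRegression`:

```python
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

train_df = pd.read_csv("train.csv")
val_df = pd.read_csv("validation.csv")

# Bag-of-words features, fitted on the training split only
vectorizer = CountVectorizer()
X_train = vectorizer.fit_transform(train_df["text"])
X_val = vectorizer.transform(val_df["text"])

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, train_df["label"])
print(f"Validation accuracy: {accuracy_score(val_df['label'], clf.predict(X_val))*100:.2f}%")
```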

@@ -16,29 +16,47 @@ from transformers import AutoTokenizer, AutoModelForSequenceClassification


class IMDBDataset(Dataset):
    def __init__(self, csv_file, tokenizer, max_length=None, pad_token_id=50256):
    def __init__(self, csv_file, tokenizer, max_length=None, pad_token_id=50256, use_attention_mask=False):
        self.data = pd.read_csv(csv_file)
        self.max_length = max_length if max_length is not None else self._longest_encoded_length(tokenizer)
        self.pad_token_id = pad_token_id
        self.use_attention_mask = use_attention_mask

        # Pre-tokenize texts
        # Pre-tokenize texts and create attention masks if required
        self.encoded_texts = [
            tokenizer.encode(text)[:self.max_length]
            tokenizer.encode(text, truncation=True, max_length=self.max_length)
            for text in self.data["text"]
        ]
        # Pad sequences to the longest sequence

        # Debug
        pad_token_id = 0

        self.encoded_texts = [
            et + [pad_token_id] * (self.max_length - len(et))
            for et in self.encoded_texts
        ]

        if self.use_attention_mask:
            self.attention_masks = [
                self._create_attention_mask(et)
                for et in self.encoded_texts
            ]
        else:
            self.attention_masks = None

    def _create_attention_mask(self, encoded_text):
        return [1 if token_id != self.pad_token_id else 0 for token_id in encoded_text]
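
    # Illustrative example (not part of the original diff): for an encoded text
    # padded with pad_token_id=0, e.g. [2023, 3185, 2001, 0, 0],
    # _create_attention_mask returns [1, 1, 1, 0, 0], so the model's attention
    # ignores the padding positions.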

    def __getitem__(self, index):
        encoded = self.encoded_texts[index]
        label = self.data.iloc[index]["label"]
        return torch.tensor(encoded, dtype=torch.long), torch.tensor(label, dtype=torch.long)

        if self.use_attention_mask:
            attention_mask = self.attention_masks[index]
        else:
            attention_mask = torch.ones(self.max_length, dtype=torch.long)

        return (
            torch.tensor(encoded, dtype=torch.long),
            torch.tensor(attention_mask, dtype=torch.long),
            torch.tensor(label, dtype=torch.long)
        )

    def __len__(self):
        return len(self.data)

@@ -52,10 +70,11 @@ class IMDBDataset(Dataset):
        return max_length


def calc_loss_batch(input_batch, target_batch, model, device):
def calc_loss_batch(input_batch, attention_mask_batch, target_batch, model, device):
    attention_mask_batch = attention_mask_batch.to(device)
    input_batch, target_batch = input_batch.to(device), target_batch.to(device)
    # logits = model(input_batch)[:, -1, :]  # Logits of last output token
    logits = model(input_batch).logits
    logits = model(input_batch, attention_mask=attention_mask_batch).logits
    loss = torch.nn.functional.cross_entropy(logits, target_batch)
    return loss
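
# Illustrative sketch (not part of the original diff): with the dataset now returning
# (input_ids, attention_mask, label) triples, a training step flows through as
#   for input_batch, attention_mask_batch, target_batch in train_loader:
#       loss = calc_loss_batch(input_batch, attention_mask_batch, target_batch, model, device)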


@@ -69,9 +88,9 @@ def calc_loss_loader(data_loader, model, device, num_batches=None):
        # Reduce the number of batches to match the total number of batches in the data loader
        # if num_batches exceeds the number of batches in the data loader
        num_batches = min(num_batches, len(data_loader))
    for i, (input_batch, target_batch) in enumerate(data_loader):
    for i, (input_batch, attention_mask_batch, target_batch) in enumerate(data_loader):
        if i < num_batches:
            loss = calc_loss_batch(input_batch, target_batch, model, device)
            loss = calc_loss_batch(input_batch, attention_mask_batch, target_batch, model, device)
            total_loss += loss.item()
        else:
            break
@@ -87,11 +106,12 @@ def calc_accuracy_loader(data_loader, model, device, num_batches=None):
        num_batches = len(data_loader)
    else:
        num_batches = min(num_batches, len(data_loader))
    for i, (input_batch, target_batch) in enumerate(data_loader):
    for i, (input_batch, attention_mask_batch, target_batch) in enumerate(data_loader):
        if i < num_batches:
            attention_mask_batch = attention_mask_batch.to(device)
            input_batch, target_batch = input_batch.to(device), target_batch.to(device)
            # logits = model(input_batch)[:, -1, :]  # Logits of last output token
            logits = model(input_batch).logits
            logits = model(input_batch, attention_mask=attention_mask_batch).logits
            predicted_labels = torch.argmax(logits, dim=1)
            num_examples += predicted_labels.shape[0]
            correct_predictions += (predicted_labels == target_batch).sum().item()
@@ -119,9 +139,9 @@ def train_classifier_simple(model, train_loader, val_loader, optimizer, device,
    for epoch in range(num_epochs):
        model.train()  # Set model to training mode

        for input_batch, target_batch in train_loader:
        for input_batch, attention_mask_batch, target_batch in train_loader:
            optimizer.zero_grad()  # Reset loss gradients from previous batch iteration
            loss = calc_loss_batch(input_batch, target_batch, model, device)
            loss = calc_loss_batch(input_batch, attention_mask_batch, target_batch, model, device)
            loss.backward()  # Calculate loss gradients
            optimizer.step()  # Update model weights using loss gradients
            examples_seen += input_batch.shape[0]  # New: track examples instead of tokens
@@ -159,17 +179,33 @@ if __name__ == "__main__":
    parser.add_argument(
        "--trainable_layers",
        type=str,
        default="last_block",
        default="all",
        help=(
            "Which layers to train. Options: 'all', 'last_block', 'last_layer'."
        )
    )
    parser.add_argument(
        "--use_attention_mask",
        type=str,
        default="true",
        help=(
            "Whether to use an attention mask for padding tokens. Options: 'true', 'false'"
        )
    )
    parser.add_argument(
        "--bert_model",
        type=str,
        default="distilbert",
        help=(
            "Which layers to train. Options: 'all', 'last_block', 'last_layer'."
            "Which model to train. Options: 'distilbert', 'bert'."
        )
    )
    parser.add_argument(
        "--num_epochs",
        type=int,
        default=1,
        help=(
            "Number of epochs."
        )
    )
    args = parser.parse_args()
@@ -201,19 +237,21 @@ if __name__ == "__main__":

        tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

    elif args.bert_model == "roberta":
    elif args.bert_model == "bert":

        model = AutoModelForSequenceClassification.from_pretrained(
            "FacebookAI/roberta-large", num_labels=2
            "bert-base-uncased", num_labels=2
        )
        model.classifier.out_proj = torch.nn.Linear(in_features=1024, out_features=2)
        model.classifier = torch.nn.Linear(in_features=768, out_features=2)

        if args.trainable_layers == "last_layer":
            pass
        elif args.trainable_layers == "last_block":
            for param in model.classifier.parameters():
                param.requires_grad = True
            for param in model.roberta.encoder.layer[-1].parameters():
            for param in model.bert.pooler.dense.parameters():
                param.requires_grad = True
            for param in model.bert.encoder.layer[-1].parameters():
                param.requires_grad = True
        elif args.trainable_layers == "all":
            for param in model.parameters():
@@ -221,7 +259,7 @@ if __name__ == "__main__":
        else:
            raise ValueError("Invalid --trainable_layers argument.")

        tokenizer = AutoTokenizer.from_pretrained("FacebookAI/roberta-large")
        tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

    else:
        raise ValueError("Selected --bert_model not supported.")
@@ -234,13 +272,36 @@ if __name__ == "__main__":
    # Instantiate dataloaders
    ###############################

    pad_token_id = tokenizer.encode(tokenizer.pad_token)

    base_path = Path(".")

    train_dataset = IMDBDataset(base_path / "train.csv", max_length=256, tokenizer=tokenizer, pad_token_id=pad_token_id)
    val_dataset = IMDBDataset(base_path / "validation.csv", max_length=256, tokenizer=tokenizer, pad_token_id=pad_token_id)
    test_dataset = IMDBDataset(base_path / "test.csv", max_length=256, tokenizer=tokenizer, pad_token_id=pad_token_id)
    if args.use_attention_mask.lower() == "true":
        use_attention_mask = True
    elif args.use_attention_mask.lower() == "false":
        use_attention_mask = False
    else:
        raise ValueError("Invalid argument for `use_attention_mask`.")

    train_dataset = IMDBDataset(
        base_path / "train.csv",
        max_length=256,
        tokenizer=tokenizer,
        pad_token_id=tokenizer.pad_token_id,
        use_attention_mask=use_attention_mask
    )
    val_dataset = IMDBDataset(
        base_path / "validation.csv",
        max_length=256,
        tokenizer=tokenizer,
        pad_token_id=tokenizer.pad_token_id,
        use_attention_mask=use_attention_mask
    )
    test_dataset = IMDBDataset(
        base_path / "test.csv",
        max_length=256,
        tokenizer=tokenizer,
        pad_token_id=tokenizer.pad_token_id,
        use_attention_mask=use_attention_mask
    )

    num_workers = 0
    batch_size = 8
@@ -275,10 +336,9 @@ if __name__ == "__main__":
    torch.manual_seed(123)
    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5, weight_decay=0.1)

    num_epochs = 3
    train_losses, val_losses, train_accs, val_accs, examples_seen = train_classifier_simple(
        model, train_loader, val_loader, optimizer, device,
        num_epochs=num_epochs, eval_freq=50, eval_iter=20,
        num_epochs=args.num_epochs, eval_freq=50, eval_iter=20,
        max_steps=None
    )

ch06/03_bonus_imdb-classification/train_bert_hf_spam.py (new file, 463 lines)

@@ -0,0 +1,463 @@
# Copyright (c) Sebastian Raschka under Apache License 2.0 (see LICENSE.txt).
# Source for "Build a Large Language Model From Scratch"
#   - https://www.manning.com/books/build-a-large-language-model-from-scratch
# Code: https://github.com/rasbt/LLMs-from-scratch

import argparse
import os
from pathlib import Path
import time
import urllib.request
import zipfile

import pandas as pd
import torch
from torch.utils.data import DataLoader
from torch.utils.data import Dataset

from transformers import AutoTokenizer, AutoModelForSequenceClassification


class SpamDataset(Dataset):
    def __init__(self, csv_file, tokenizer, max_length=None, pad_token_id=50256, no_padding=False):
        self.data = pd.read_csv(csv_file)
        self.max_length = max_length if max_length is not None else self._longest_encoded_length(tokenizer)

        # Pre-tokenize texts
        self.encoded_texts = [
            tokenizer.encode(text)[:self.max_length]
            for text in self.data["Text"]
        ]

        if not no_padding:
            # Pad sequences to the longest sequence
            self.encoded_texts = [
                et + [pad_token_id] * (self.max_length - len(et))
                for et in self.encoded_texts
            ]

    def __getitem__(self, index):
        encoded = self.encoded_texts[index]
        label = self.data.iloc[index]["Label"]
        return torch.tensor(encoded, dtype=torch.long), torch.tensor(label, dtype=torch.long)

    def __len__(self):
        return len(self.data)

    def _longest_encoded_length(self, tokenizer):
        max_length = 0
        for text in self.data["Text"]:
            encoded_length = len(tokenizer.encode(text))
            if encoded_length > max_length:
                max_length = encoded_length
        return max_length


def download_and_unzip(url, zip_path, extract_to, new_file_path):
    if new_file_path.exists():
        print(f"{new_file_path} already exists. Skipping download and extraction.")
        return

    # Downloading the file
    with urllib.request.urlopen(url) as response:
        with open(zip_path, "wb") as out_file:
            out_file.write(response.read())

    # Unzipping the file
    with zipfile.ZipFile(zip_path, "r") as zip_ref:
        zip_ref.extractall(extract_to)

    # Renaming the file to indicate its format
    original_file = Path(extract_to) / "SMSSpamCollection"
    os.rename(original_file, new_file_path)
    print(f"File downloaded and saved as {new_file_path}")


def random_split(df, train_frac, validation_frac):
    # Shuffle the entire DataFrame
    df = df.sample(frac=1, random_state=123).reset_index(drop=True)

    # Calculate split indices
    train_end = int(len(df) * train_frac)
    validation_end = train_end + int(len(df) * validation_frac)

    # Split the DataFrame
    train_df = df[:train_end]
    validation_df = df[train_end:validation_end]
    test_df = df[validation_end:]

    return train_df, validation_df, test_df
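
# Illustrative usage (not part of this file): random_split(df, 0.7, 0.1) returns
# a 70% / 10% / 20% train / validation / test partition of df, matching the
# fractions used in create_dataset_csvs() below.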


def create_dataset_csvs(new_file_path):
    df = pd.read_csv(new_file_path, sep="\t", header=None, names=["Label", "Text"])

    # Create balanced dataset
    n_spam = df[df["Label"] == "spam"].shape[0]
    ham_sampled = df[df["Label"] == "ham"].sample(n_spam, random_state=123)
    balanced_df = pd.concat([ham_sampled, df[df["Label"] == "spam"]])
    balanced_df = balanced_df.sample(frac=1, random_state=123).reset_index(drop=True)
    balanced_df["Label"] = balanced_df["Label"].map({"ham": 0, "spam": 1})

    # Sample and save csv files
    train_df, validation_df, test_df = random_split(balanced_df, 0.7, 0.1)
    train_df.to_csv("train.csv", index=None)
    validation_df.to_csv("validation.csv", index=None)
    test_df.to_csv("test.csv", index=None)


class SPAMDataset(Dataset):
    def __init__(self, csv_file, tokenizer, max_length=None, pad_token_id=50256, use_attention_mask=False):
        self.data = pd.read_csv(csv_file)
        self.max_length = max_length if max_length is not None else self._longest_encoded_length(tokenizer)
        self.pad_token_id = pad_token_id
        self.use_attention_mask = use_attention_mask

        # Pre-tokenize texts and create attention masks if required
        self.encoded_texts = [
            tokenizer.encode(text, truncation=True, max_length=self.max_length)
            for text in self.data["Text"]
        ]
        self.encoded_texts = [
            et + [pad_token_id] * (self.max_length - len(et))
            for et in self.encoded_texts
        ]

        if self.use_attention_mask:
            self.attention_masks = [
                self._create_attention_mask(et)
                for et in self.encoded_texts
            ]
        else:
            self.attention_masks = None

    def _create_attention_mask(self, encoded_text):
        return [1 if token_id != self.pad_token_id else 0 for token_id in encoded_text]

    def __getitem__(self, index):
        encoded = self.encoded_texts[index]
        label = self.data.iloc[index]["Label"]

        if self.use_attention_mask:
            attention_mask = self.attention_masks[index]
        else:
            attention_mask = torch.ones(self.max_length, dtype=torch.long)

        return (
            torch.tensor(encoded, dtype=torch.long),
            torch.tensor(attention_mask, dtype=torch.long),
            torch.tensor(label, dtype=torch.long)
        )

    def __len__(self):
        return len(self.data)

    def _longest_encoded_length(self, tokenizer):
        max_length = 0
        for text in self.data["Text"]:
            encoded_length = len(tokenizer.encode(text))
            if encoded_length > max_length:
                max_length = encoded_length
        return max_length


def calc_loss_batch(input_batch, attention_mask_batch, target_batch, model, device):
    attention_mask_batch = attention_mask_batch.to(device)
    input_batch, target_batch = input_batch.to(device), target_batch.to(device)
    # logits = model(input_batch)[:, -1, :]  # Logits of last output token
    logits = model(input_batch, attention_mask=attention_mask_batch).logits
    loss = torch.nn.functional.cross_entropy(logits, target_batch)
    return loss


# Same as in chapter 5
def calc_loss_loader(data_loader, model, device, num_batches=None):
    total_loss = 0.
    if num_batches is None:
        num_batches = len(data_loader)
    else:
        # Reduce the number of batches to match the total number of batches in the data loader
        # if num_batches exceeds the number of batches in the data loader
        num_batches = min(num_batches, len(data_loader))
    for i, (input_batch, attention_mask_batch, target_batch) in enumerate(data_loader):
        if i < num_batches:
            loss = calc_loss_batch(input_batch, attention_mask_batch, target_batch, model, device)
            total_loss += loss.item()
        else:
            break
    return total_loss / num_batches


@torch.no_grad()  # Disable gradient tracking for efficiency
def calc_accuracy_loader(data_loader, model, device, num_batches=None):
    model.eval()
    correct_predictions, num_examples = 0, 0

    if num_batches is None:
        num_batches = len(data_loader)
    else:
        num_batches = min(num_batches, len(data_loader))
    for i, (input_batch, attention_mask_batch, target_batch) in enumerate(data_loader):
        if i < num_batches:
            attention_mask_batch = attention_mask_batch.to(device)
            input_batch, target_batch = input_batch.to(device), target_batch.to(device)
            # logits = model(input_batch)[:, -1, :]  # Logits of last output token
            logits = model(input_batch, attention_mask=attention_mask_batch).logits
            predicted_labels = torch.argmax(logits, dim=1)
            num_examples += predicted_labels.shape[0]
            correct_predictions += (predicted_labels == target_batch).sum().item()
        else:
            break
    return correct_predictions / num_examples


def evaluate_model(model, train_loader, val_loader, device, eval_iter):
    model.eval()
    with torch.no_grad():
        train_loss = calc_loss_loader(train_loader, model, device, num_batches=eval_iter)
        val_loss = calc_loss_loader(val_loader, model, device, num_batches=eval_iter)
    model.train()
    return train_loss, val_loss


def train_classifier_simple(model, train_loader, val_loader, optimizer, device, num_epochs,
                            eval_freq, eval_iter, max_steps=None):
    # Initialize lists to track losses and tokens seen
    train_losses, val_losses, train_accs, val_accs = [], [], [], []
    examples_seen, global_step = 0, -1

    # Main training loop
    for epoch in range(num_epochs):
        model.train()  # Set model to training mode

        for input_batch, attention_mask_batch, target_batch in train_loader:
            optimizer.zero_grad()  # Reset loss gradients from previous batch iteration
            loss = calc_loss_batch(input_batch, attention_mask_batch, target_batch, model, device)
            loss.backward()  # Calculate loss gradients
            optimizer.step()  # Update model weights using loss gradients
            examples_seen += input_batch.shape[0]  # New: track examples instead of tokens
            global_step += 1

            # Optional evaluation step
            if global_step % eval_freq == 0:
                train_loss, val_loss = evaluate_model(
                    model, train_loader, val_loader, device, eval_iter)
                train_losses.append(train_loss)
                val_losses.append(val_loss)
                print(f"Ep {epoch+1} (Step {global_step:06d}): "
                      f"Train loss {train_loss:.3f}, Val loss {val_loss:.3f}")

            if max_steps is not None and global_step > max_steps:
                break

        # New: Calculate accuracy after each epoch
        train_accuracy = calc_accuracy_loader(train_loader, model, device, num_batches=eval_iter)
        val_accuracy = calc_accuracy_loader(val_loader, model, device, num_batches=eval_iter)
        print(f"Training accuracy: {train_accuracy*100:.2f}% | ", end="")
        print(f"Validation accuracy: {val_accuracy*100:.2f}%")
        train_accs.append(train_accuracy)
        val_accs.append(val_accuracy)

        if max_steps is not None and global_step > max_steps:
            break

    return train_losses, val_losses, train_accs, val_accs, examples_seen


if __name__ == "__main__":

    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--trainable_layers",
        type=str,
        default="all",
        help=(
            "Which layers to train. Options: 'all', 'last_block', 'last_layer'."
        )
    )
    parser.add_argument(
        "--use_attention_mask",
        type=str,
        default="true",
        help=(
            "Whether to use an attention mask for padding tokens. Options: 'true', 'false'"
        )
    )
    parser.add_argument(
        "--bert_model",
        type=str,
        default="distilbert",
        help=(
            "Which model to train. Options: 'distilbert', 'bert'."
        )
    )
    parser.add_argument(
        "--num_epochs",
        type=int,
        default=1,
        help=(
            "Number of epochs."
        )
    )
    args = parser.parse_args()

    ###############################
    # Load model
    ###############################

    torch.manual_seed(123)
    if args.bert_model == "distilbert":

        model = AutoModelForSequenceClassification.from_pretrained(
            "distilbert-base-uncased", num_labels=2
        )
        model.out_head = torch.nn.Linear(in_features=768, out_features=2)

        if args.trainable_layers == "last_layer":
            pass
        elif args.trainable_layers == "last_block":
            for param in model.pre_classifier.parameters():
                param.requires_grad = True
            for param in model.distilbert.transformer.layer[-1].parameters():
                param.requires_grad = True
        elif args.trainable_layers == "all":
            for param in model.parameters():
                param.requires_grad = True
        else:
            raise ValueError("Invalid --trainable_layers argument.")

        tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

    elif args.bert_model == "bert":

        model = AutoModelForSequenceClassification.from_pretrained(
            "bert-base-uncased", num_labels=2
        )
        model.classifier = torch.nn.Linear(in_features=768, out_features=2)

        if args.trainable_layers == "last_layer":
            pass
        elif args.trainable_layers == "last_block":
            for param in model.classifier.parameters():
                param.requires_grad = True
            for param in model.bert.pooler.dense.parameters():
                param.requires_grad = True
            for param in model.bert.encoder.layer[-1].parameters():
                param.requires_grad = True
        elif args.trainable_layers == "all":
            for param in model.parameters():
                param.requires_grad = True
        else:
            raise ValueError("Invalid --trainable_layers argument.")

        tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

    else:
        raise ValueError("Selected --bert_model not supported.")

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model.to(device)
    model.eval()

    ###############################
    # Instantiate dataloaders
    ###############################

    url = "https://archive.ics.uci.edu/static/public/228/sms+spam+collection.zip"
    zip_path = "sms_spam_collection.zip"
    extract_to = "sms_spam_collection"
    new_file_path = Path(extract_to) / "SMSSpamCollection.tsv"

    base_path = Path(".")
    file_names = ["train.csv", "validation.csv", "test.csv"]
    all_exist = all((base_path / file_name).exists() for file_name in file_names)

    if not all_exist:
        download_and_unzip(url, zip_path, extract_to, new_file_path)
        create_dataset_csvs(new_file_path)

    if args.use_attention_mask.lower() == "true":
        use_attention_mask = True
    elif args.use_attention_mask.lower() == "false":
        use_attention_mask = False
    else:
        raise ValueError("Invalid argument for `use_attention_mask`.")

    train_dataset = SPAMDataset(
        base_path / "train.csv",
        max_length=256,
        tokenizer=tokenizer,
        pad_token_id=tokenizer.pad_token_id,
        use_attention_mask=use_attention_mask
    )
    val_dataset = SPAMDataset(
        base_path / "validation.csv",
        max_length=256,
        tokenizer=tokenizer,
        pad_token_id=tokenizer.pad_token_id,
        use_attention_mask=use_attention_mask
    )
    test_dataset = SPAMDataset(
        base_path / "test.csv",
        max_length=256,
        tokenizer=tokenizer,
        pad_token_id=tokenizer.pad_token_id,
        use_attention_mask=use_attention_mask
    )

    num_workers = 0
    batch_size = 8

    train_loader = DataLoader(
        dataset=train_dataset,
        batch_size=batch_size,
        shuffle=True,
        num_workers=num_workers,
        drop_last=True,
    )

    val_loader = DataLoader(
        dataset=val_dataset,
        batch_size=batch_size,
        num_workers=num_workers,
        drop_last=False,
    )

    test_loader = DataLoader(
        dataset=test_dataset,
        batch_size=batch_size,
        num_workers=num_workers,
        drop_last=False,
    )

    ###############################
    # Train model
    ###############################

    start_time = time.time()
    torch.manual_seed(123)
    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5, weight_decay=0.1)

    train_losses, val_losses, train_accs, val_accs, examples_seen = train_classifier_simple(
        model, train_loader, val_loader, optimizer, device,
        num_epochs=args.num_epochs, eval_freq=50, eval_iter=20,
        max_steps=None
    )

    end_time = time.time()
    execution_time_minutes = (end_time - start_time) / 60
    print(f"Training completed in {execution_time_minutes:.2f} minutes.")

    ###############################
    # Evaluate model
    ###############################

    print("\nEvaluating on the full datasets ...\n")

    train_accuracy = calc_accuracy_loader(train_loader, model, device)
    val_accuracy = calc_accuracy_loader(val_loader, model, device)
    test_accuracy = calc_accuracy_loader(test_loader, model, device)

    print(f"Training accuracy: {train_accuracy*100:.2f}%")
    print(f"Validation accuracy: {val_accuracy*100:.2f}%")
    print(f"Test accuracy: {test_accuracy*100:.2f}%")
@@ -227,6 +227,14 @@ if __name__ == "__main__":
            "Options: 'longest_training_example', 'model_context_length' or integer value."
        )
    )
    parser.add_argument(
        "--num_epochs",
        type=int,
        default=1,
        help=(
            "Number of epochs."
        )
    )

    args = parser.parse_args()

@@ -340,10 +348,9 @@ if __name__ == "__main__":
    torch.manual_seed(123)
    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5, weight_decay=0.1)

    num_epochs = 3
    train_losses, val_losses, train_accs, val_accs, examples_seen = train_classifier_simple(
        model, train_loader, val_loader, optimizer, device,
        num_epochs=num_epochs, eval_freq=50, eval_iter=20,
        num_epochs=args.num_epochs, eval_freq=50, eval_iter=20,
        max_steps=None, trainable_token=args.trainable_token
    )