fixed minor issues (#252)

* fixed typo

* fixed var name in md text
Author: Daniel Kleine
Date: 2024-06-29 13:38:25 +02:00
Committed by: GitHub
Parent: e296e8f6be
Commit: 1e69c8e0b5
4 changed files with 14 additions and 14 deletions

@@ -1715,7 +1715,7 @@
    "source": [
     "- Before we can start finetuning (/training), we first have to define the loss function we want to optimize during training\n",
     "- The goal is to maximize the spam classification accuracy of the model; however, classification accuracy is not a differentiable function\n",
-    "- Hence, instead, we minimize the cross entropy loss as a proxy for maximizing the classification accuracy (you can learn more about this topic in lecture 8 of my freely available [Introduction to Deep Learning](https://sebastianraschka.com/blog/2021/dl-course.html#l08-multinomial-logistic-regression--softmax-regression) class)\n",
+    "- Hence, instead, we minimize the cross-entropy loss as a proxy for maximizing the classification accuracy (you can learn more about this topic in lecture 8 of my freely available [Introduction to Deep Learning](https://sebastianraschka.com/blog/2021/dl-course.html#l08-multinomial-logistic-regression--softmax-regression) class)\n",
     "\n",
     "- The `calc_loss_batch` function is the same here as in chapter 5, except that we are only interested in optimizing the last token `model(input_batch)[:, -1, :]` instead of all tokens `model(input_batch)`"
    ]
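The last-token loss described in the hunk above can be sketched as follows. This is a minimal sketch, not the repository's exact implementation; `DummyModel` is a hypothetical stand-in for a model whose forward pass returns per-token logits of shape `(batch, seq_len, num_classes)`:

```python
import torch

def calc_loss_batch(input_batch, target_batch, model, device):
    # Move the batch to the target device
    input_batch = input_batch.to(device)
    target_batch = target_batch.to(device)
    # Use only the last token's logits for classification,
    # rather than the logits of all tokens
    logits = model(input_batch)[:, -1, :]
    # Cross-entropy serves as a differentiable proxy for
    # (non-differentiable) classification accuracy
    return torch.nn.functional.cross_entropy(logits, target_batch)

# Hypothetical stand-in: uniform (all-zero) logits over 2 classes
class DummyModel(torch.nn.Module):
    def forward(self, x):
        batch_size, seq_len = x.shape
        return torch.zeros(batch_size, seq_len, 2)

loss = calc_loss_batch(
    torch.zeros(2, 5, dtype=torch.long),  # batch of token IDs
    torch.tensor([0, 1]),                 # class labels (e.g., ham/spam)
    DummyModel(),
    "cpu",
)
```

With uniform logits over two classes, the cross-entropy loss equals ln(2) ≈ 0.693, which is a quick sanity check for the sketch.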
@@ -2370,7 +2370,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.11.4"
+   "version": "3.10.11"
   }
 },
 "nbformat": 4,