Automated link checking (#117)

* Automated link checking

* Fix links in Jupyter Nbs
Authored by Sebastian Raschka on 2024-04-12 19:08:34 -04:00, committed by GitHub
parent 33b27368a3
commit 55ebabf95c
3 changed files with 19 additions and 24 deletions


@@ -1164,7 +1164,7 @@
"metadata": {},
"source": [
"- In this section, we finally implement the code for training the LLM\n",
"- We focus on a simple training function (if you are interested in augmenting this training function with more advanced techniques, such as learning rate warmup, cosine annealing, and gradient clipping, please refer to [Appendix D](../../appendix-D/03_main-chapter-code))\n",
"- We focus on a simple training function (if you are interested in augmenting this training function with more advanced techniques, such as learning rate warmup, cosine annealing, and gradient clipping, please refer to [Appendix D](../../appendix-D/01_main-chapter-code))\n",
"\n",
"<img src=\"https://sebastianraschka.com/images/LLMs-from-scratch-images/ch05_compressed/train-steps.webp\" width=300px>"
]
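The hunk above mentions learning rate warmup and cosine annealing as the more advanced scheduling techniques covered in Appendix D. As a rough illustration of how the two combine, here is a minimal, framework-agnostic sketch of such a schedule; the function and parameter names are illustrative and are not taken from the book's code:

```python
import math

def get_lr(step, total_steps, warmup_steps, peak_lr, min_lr):
    """Illustrative schedule: linear warmup followed by cosine annealing.

    In a training loop, this per-step learning rate would be assigned to
    the optimizer's parameter groups; gradient clipping (also mentioned
    in Appendix D) is typically a separate step, e.g. via
    torch.nn.utils.clip_grad_norm_ in PyTorch.
    """
    if step < warmup_steps:
        # Linear warmup: ramp from ~0 up to peak_lr
        return peak_lr * (step + 1) / warmup_steps
    # Cosine annealing: decay from peak_lr down to min_lr
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * progress))
```

For example, with `warmup_steps=20` the rate reaches `peak_lr` at step 19 and then decays smoothly, hitting `min_lr` at `total_steps`.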
@@ -2028,7 +2028,7 @@
"metadata": {},
"source": [
"- Previously, we only trained a small GPT-2 model using a very small short-story book for educational purposes\n",
"- Interested readers can also find a longer pretraining run on the complete Project Gutenberg book corpus in [../03_bonus_pretraining_on_gutenberg](03_bonus_pretraining_on_gutenberg)\n",
"- Interested readers can also find a longer pretraining run on the complete Project Gutenberg book corpus in [../03_bonus_pretraining_on_gutenberg](../03_bonus_pretraining_on_gutenberg)\n",
"- Fortunately, we don't have to spend tens to hundreds of thousands of dollars to pretrain the model on a large pretraining corpus but can load the pretrained weights provided by OpenAI"
]
},
@@ -2438,7 +2438,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.2"
"version": "3.10.10"
}
},
"nbformat": 4,