Mirror of https://github.com/rasbt/LLMs-from-scratch.git, synced 2026-04-10 12:33:42 +00:00
Add appendix D
31 README.md
@@ -9,7 +9,7 @@ This repository contains the code for coding, pretraining, and finetuning a GPT-
 
 <a href="http://mng.bz/orYv"><img src="images/cover.jpg" width="250px"></a>
 
-In [*Build a Large Language Model (from Scratch)*](http://mng.bz/orYv), you'll discover how LLMs work from the inside out. In this book, I'll guide you step by step through creating your own LLM, explaining each stage with clear text, diagrams, and examples.
+In [*Build a Large Language Model (From Scratch)*](http://mng.bz/orYv), you'll discover how LLMs work from the inside out. In this book, I'll guide you step by step through creating your own LLM, explaining each stage with clear text, diagrams, and examples.
 
 The method described in this book for training and developing your own small-but-functional model for educational purposes mirrors the approach used in creating large-scale foundational models such as those behind ChatGPT.
@@ -31,21 +31,20 @@ Alternatively, you can view this and other files on GitHub at [https://github.co
 
 <br>
 
-| Chapter Title | Main Code (for quick access) | All Code + Supplementary |
-|------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------|-------------------------------|
-| Ch 1: Understanding Large Language Models | No code | No code |
-| Ch 2: Working with Text Data | - [ch02.ipynb](ch02/01_main-chapter-code/ch02.ipynb)<br/>- [dataloader.ipynb](ch02/01_main-chapter-code/dataloader.ipynb) (summary)<br/>- [exercise-solutions.ipynb](ch02/01_main-chapter-code/exercise-solutions.ipynb) | [./ch02](./ch02) |
-| Ch 3: Coding Attention Mechanisms | - [ch03.ipynb](ch03/01_main-chapter-code/ch03.ipynb)<br/>- [multihead-attention.ipynb](ch03/01_main-chapter-code/multihead-attention.ipynb) (summary) <br/>- [exercise-solutions.ipynb](ch03/01_main-chapter-code/exercise-solutions.ipynb)| [./ch03](./ch03) |
-| Ch 4: Implementing a GPT Model from Scratch | - [ch04.ipynb](ch04/01_main-chapter-code/ch04.ipynb)<br/>- [gpt.py](ch04/01_main-chapter-code/gpt.py) (summary)<br/>- [exercise-solutions.ipynb](ch04/01_main-chapter-code/exercise-solutions.ipynb) | [./ch04](./ch04) |
-| Ch 5: Pretraining on Unlabeled Data | Q1 2024 | ... |
-| Ch 6: Finetuning for Text Classification | Q2 2024 | ... |
-| Ch 7: Finetuning with Human Feedback | Q2 2024 | ... |
-| Ch 8: Using Large Language Models in Practice | Q2/3 2024 | ... |
-| Appendix A: Introduction to PyTorch | - [code-part1.ipynb](appendix-A/03_main-chapter-code/code-part1.ipynb)<br/>- [code-part2.ipynb](appendix-A/03_main-chapter-code/code-part2.ipynb)<br/>- [DDP-script.py](appendix-A/03_main-chapter-code/DDP-script.py)<br/>- [exercise-solutions.ipynb](appendix-A/03_main-chapter-code/exercise-solutions.ipynb) | [./appendix-A](./appendix-A) |
-| Appendix B: References and Further Reading | No code | |
-| Appendix C: Exercises | No code | |
+| Chapter Title | Main Code (for quick access) | All Code + Supplementary |
+|------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------|-------------------------------|
+| Ch 1: Understanding Large Language Models | No code | - |
+| Ch 2: Working with Text Data | - [ch02.ipynb](ch02/01_main-chapter-code/ch02.ipynb)<br/>- [dataloader.ipynb](ch02/01_main-chapter-code/dataloader.ipynb) (summary)<br/>- [exercise-solutions.ipynb](ch02/01_main-chapter-code/exercise-solutions.ipynb) | [./ch02](./ch02) |
+| Ch 3: Coding Attention Mechanisms | - [ch03.ipynb](ch03/01_main-chapter-code/ch03.ipynb)<br/>- [multihead-attention.ipynb](ch03/01_main-chapter-code/multihead-attention.ipynb) (summary) <br/>- [exercise-solutions.ipynb](ch03/01_main-chapter-code/exercise-solutions.ipynb)| [./ch03](./ch03) |
+| Ch 4: Implementing a GPT Model from Scratch | - [ch04.ipynb](ch04/01_main-chapter-code/ch04.ipynb)<br/>- [gpt.py](ch04/01_main-chapter-code/gpt.py) (summary)<br/>- [exercise-solutions.ipynb](ch04/01_main-chapter-code/exercise-solutions.ipynb) | [./ch04](./ch04) |
+| Ch 5: Pretraining on Unlabeled Data | Q1 2024 | ... |
+| Ch 6: Finetuning for Text Classification | Q2 2024 | ... |
+| Ch 7: Finetuning with Human Feedback | Q2 2024 | ... |
+| Ch 8: Using Large Language Models in Practice | Q2/3 2024 | ... |
+| Appendix A: Introduction to PyTorch | - [code-part1.ipynb](appendix-A/03_main-chapter-code/code-part1.ipynb)<br/>- [code-part2.ipynb](appendix-A/03_main-chapter-code/code-part2.ipynb)<br/>- [DDP-script.py](appendix-A/03_main-chapter-code/DDP-script.py)<br/>- [exercise-solutions.ipynb](appendix-A/03_main-chapter-code/exercise-solutions.ipynb) | [./appendix-A](./appendix-A) |
+| Appendix B: References and Further Reading | No code | - |
+| Appendix C: Exercises | No code | - |
+| Appendix D: Adding Bells and Whistles to the Training Loop | - [appendix-D.ipynb](appendix-D/01_main-chapter-code/appendix-D.ipynb) | [./appendix-D](./appendix-D) |
 
 <br>
 
 > [!TIP]