PyTorch tips for better training performance (#525)

* PyTorch tips for better training performance

* formatting

* pep 8
Author: Sebastian Raschka
Date: 2025-02-12 16:10:34 -06:00 (committed via GitHub)
Parent: 3c29b67cd0
Commit: 908dd2f71e
6 changed files with 1853 additions and 0 deletions

@@ -121,6 +121,7 @@ Several folders contain optional materials as a bonus for interested readers:
- [Llama 3.2 From Scratch](ch05/07_gpt_to_llama/standalone-llama32.ipynb)
- [Memory-efficient Model Weight Loading](ch05/08_memory_efficient_weight_loading/memory-efficient-state-dict.ipynb)
- [Extending the Tiktoken BPE Tokenizer with New Tokens](ch05/09_extending-tokenizers/extend-tiktoken.ipynb)
- [PyTorch Performance Tips for Faster LLM Training](ch05/10_llm-training-speed)
- **Chapter 6: Finetuning for classification**
- [Additional experiments finetuning different layers and using larger models](ch06/02_bonus_additional-experiments)
- [Finetuning different models on 50k IMDB movie review dataset](ch06/03_bonus_imdb-classification)