Chapter 5: Pretraining on Unlabeled Data
Main Chapter Code
- 01_main-chapter-code contains the main chapter code
Bonus Materials
- 02_alternative_weight_loading contains code to load the GPT model weights from alternative places in case the model weights become unavailable from OpenAI
- 03_bonus_pretraining_on_gutenberg contains code to pretrain the LLM longer on the whole corpus of books from Project Gutenberg
- 04_learning_rate_schedulers contains code implementing a more sophisticated training function, including learning rate schedulers and gradient clipping (see the learning-rate sketch after this list)
- 05_bonus_hparam_tuning contains an optional hyperparameter tuning script
- 06_user_interface implements an interactive user interface for interacting with the pretrained LLM
- 07_gpt_to_llama contains a step-by-step guide for converting a GPT architecture implementation to Llama 3.2 and for loading the pretrained weights from Meta AI
- 08_memory_efficient_weight_loading contains a bonus notebook showing how to load model weights via PyTorch's `load_state_dict` method more efficiently (see the weight-loading sketch after this list)
- 09_extending-tokenizers contains a from-scratch implementation of the GPT-2 BPE tokenizer
- 10_llm-training-speed shows PyTorch performance tips to improve the LLM training speed (see the training-speed sketch after this list)
- 11_qwen3 contains a from-scratch implementation of Qwen3 0.6B and Qwen3 30B-A3B (Mixture-of-Experts), including code to load the pretrained weights of the base, reasoning, and coding model variants
- 12_gemma3 contains a from-scratch implementation of Gemma 3 270M and an alternative variant with KV cache, including code to load the pretrained weights
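
Learning-rate sketch: the following is a minimal, self-contained illustration (not the actual code in 04_learning_rate_schedulers; the stand-in model, dummy data, and hyperparameter values are placeholders) of how a linear-warmup plus cosine-decay schedule and gradient clipping can be combined in a PyTorch training loop:

```python
# Minimal sketch: warmup + cosine LR schedule with gradient clipping.
# The model, data, and hyperparameters below are illustrative only.
import torch

model = torch.nn.Linear(10, 10)  # stand-in for the GPT model
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4, weight_decay=0.1)

warmup_steps, total_steps = 20, 200
warmup = torch.optim.lr_scheduler.LinearLR(
    optimizer, start_factor=0.1, total_iters=warmup_steps
)
cosine = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=total_steps - warmup_steps
)
scheduler = torch.optim.lr_scheduler.SequentialLR(
    optimizer, schedulers=[warmup, cosine], milestones=[warmup_steps]
)

for step in range(total_steps):
    inputs = torch.randn(8, 10)          # dummy batch
    loss = model(inputs).pow(2).mean()   # dummy loss
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # clip gradients
    optimizer.step()
    scheduler.step()                     # update the learning rate each step
```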
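Weight-loading sketch: below is a minimal illustration of the memory-efficient idea behind 08_memory_efficient_weight_loading, assuming a checkpoint produced by `torch.save`; the tiny stand-in model and the `model.pth` file name are hypothetical. `torch.load(..., mmap=True)` memory-maps the checkpoint from disk, and `load_state_dict(..., assign=True)` reuses the loaded tensors rather than copying them into the model's existing parameters:

```python
# Minimal sketch, assuming a checkpoint saved with torch.save:
# avoid holding two full copies of the weights in RAM at once.
import torch

model = torch.nn.Linear(10, 10)                    # stand-in for the GPT model
torch.save(model.state_dict(), "model.pth")        # hypothetical checkpoint

fresh_model = torch.nn.Linear(10, 10)
state_dict = torch.load("model.pth", map_location="cpu", mmap=True, weights_only=True)
fresh_model.load_state_dict(state_dict, assign=True)  # assign=True reuses the loaded tensors
```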
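Training-speed sketch: here is a short example of two common PyTorch speed-ups of the kind discussed in 10_llm-training-speed, namely `torch.compile` and bfloat16 autocast; the tiny model and random batch are placeholders, not the folder's benchmark code:

```python
# Minimal sketch: torch.compile plus bfloat16 mixed precision.
# The model and data are placeholders for an actual LLM training setup.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Sequential(torch.nn.Linear(256, 256), torch.nn.GELU()).to(device)
model = torch.compile(model)             # reduce Python overhead / fuse kernels

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
inputs = torch.randn(32, 256, device=device)

optimizer.zero_grad()
with torch.autocast(device_type=device, dtype=torch.bfloat16):
    loss = model(inputs).pow(2).mean()   # forward pass in bfloat16
loss.backward()
optimizer.step()
```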
