Chapter 5: Pretraining on Unlabeled Data

 

Main Chapter Code

 

Bonus Materials

 

LLM Architectures From Scratch

 

  • 07_gpt_to_llama contains a step-by-step guide for converting a GPT architecture implementation into Llama 3.2, including code to load the pretrained weights from Meta AI
  • 11_qwen3 contains a from-scratch implementation of Qwen3 0.6B and Qwen3 30B-A3B (Mixture-of-Experts), including code to load the pretrained weights of the base, reasoning, and coding model variants
  • 12_gemma3 contains a from-scratch implementation of Gemma 3 270M and an alternative variant with a KV cache, including code to load the pretrained weights
  • 13_olmo3 contains a from-scratch implementation of Olmo 3 7B and 32B (Base, Instruct, and Think variants) and an alternative with a KV cache, including code to load the pretrained weights
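Several of the variants above mention a KV cache. The core idea can be sketched in a few lines of plain Python: during autoregressive decoding, the keys and values of past tokens are stored and reused, so each step only computes the new token's entries instead of reprocessing the whole sequence. All names below are illustrative, not taken from the repository code.

```python
class KVCache:
    """Minimal per-layer key/value store for autoregressive decoding.

    Real implementations cache tensors of shape (batch, heads, seq, head_dim);
    plain lists of vectors are used here to keep the sketch dependency-free.
    """

    def __init__(self):
        self.keys = []    # one cached key per generated token
        self.values = []  # one cached value per generated token

    def update(self, new_key, new_value):
        # Append only the current token's key/value and return the
        # full history, which attention then scores against.
        self.keys.append(new_key)
        self.values.append(new_value)
        return self.keys, self.values


cache = KVCache()
# Simulate three decoding steps; each step sees all previously cached keys.
for step in range(3):
    keys, values = cache.update(new_key=[float(step)], new_value=[2.0 * step])

print(len(cache.keys))  # 3 cached entries after 3 decoding steps
```

This is why cached decoding reduces per-step cost from quadratic to roughly linear in sequence length: attention for the new token reads the cache instead of recomputing keys and values for every earlier token.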

 

Code-Along Video for This Chapter



Link to the video