Chapter 5: Pretraining on Unlabeled Data

 

Main Chapter Code

 

Bonus Materials

 

LLM Architectures From Scratch

 

  • 07_gpt_to_llama contains a step-by-step guide for converting a GPT architecture implementation to Llama 3.2, including code to load the pretrained weights from Meta AI
  • 11_qwen3 contains a from-scratch implementation of Qwen3 0.6B and Qwen3 30B-A3B (Mixture-of-Experts), including code to load the pretrained weights of the base, reasoning, and coding model variants
  • 12_gemma3 contains a from-scratch implementation of Gemma 3 270M and an alternative with KV cache, including code to load the pretrained weights
  • 13_olmo3 contains a from-scratch implementation of Olmo 3 7B and 32B (Base, Instruct, and Think variants) and an alternative with KV cache, including code to load the pretrained weights
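
Several of the bonus materials above provide an "alternative with KV cache" variant. The core idea behind such a cache can be sketched in a few lines of PyTorch: during autoregressive decoding, the keys and values of already-generated tokens are stored so that each new token only appends to the cache instead of recomputing attention inputs for the whole sequence. The class name, method, and tensor shapes below are illustrative assumptions for this sketch, not the actual API used in the bonus notebooks.

```python
import torch


class KVCache:
    """Minimal sketch of a key/value cache for autoregressive decoding."""

    def __init__(self):
        self.k = None  # cached keys,   shape (batch, heads, seq_len, head_dim)
        self.v = None  # cached values, shape (batch, heads, seq_len, head_dim)

    def update(self, k_new, v_new):
        # Append the new token's keys/values along the sequence dimension (dim=2)
        if self.k is None:
            self.k, self.v = k_new, v_new
        else:
            self.k = torch.cat([self.k, k_new], dim=2)
            self.v = torch.cat([self.v, v_new], dim=2)
        return self.k, self.v


cache = KVCache()
for step in range(3):
    # One new token per decoding step: (batch=1, heads=4, seq=1, head_dim=8)
    k_new = torch.randn(1, 4, 1, 8)
    v_new = torch.randn(1, 4, 1, 8)
    k, v = cache.update(k_new, v_new)

# After 3 steps the cache covers all 3 generated tokens
print(k.shape)  # torch.Size([1, 4, 3, 8])
```

In a full model, the attention layer would compute scores between the newest query and the cached keys `k`, trading extra memory for a large reduction in per-token compute; the actual implementations in the notebooks above integrate this into the attention modules directly.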

 

Code-Along Video for This Chapter



Link to the video