mirror of https://github.com/rasbt/LLMs-from-scratch.git (synced 2026-04-10 12:33:42 +00:00)
commit a6b883c9f9
parent e9c1c1da38
committed by GitHub

Gemma 3 270M From Scratch (#771)

* Gemma 3 270M From Scratch
* fix path
* update readme
@@ -159,6 +159,7 @@ Several folders contain optional materials as a bonus for interested readers:
 - [Converting GPT to Llama](ch05/07_gpt_to_llama)
 - [Llama 3.2 From Scratch](ch05/07_gpt_to_llama/standalone-llama32.ipynb)
 - [Qwen3 Dense and Mixture-of-Experts (MoE) From Scratch](ch05/11_qwen3/)
+- [Gemma 3 From Scratch](ch05/12_gemma3/)
 - [Memory-efficient Model Weight Loading](ch05/08_memory_efficient_weight_loading/memory-efficient-state-dict.ipynb)
 - [Extending the Tiktoken BPE Tokenizer with New Tokens](ch05/09_extending-tokenizers/extend-tiktoken.ipynb)
 - [PyTorch Performance Tips for Faster LLM Training](ch05/10_llm-training-speed)