Mirror of https://github.com/rasbt/LLMs-from-scratch.git, synced 2026-04-10 12:33:42 +00:00
Add chapter 3 coding along video link (#572)
Commit 4db0e826b7, parent 54474fb452, committed via GitHub.
@@ -9,4 +9,13 @@
## Bonus Materials
- [02_bonus_efficient-multihead-attention](02_bonus_efficient-multihead-attention) implements and compares different implementation variants of multihead-attention
- [03_understanding-buffers](03_understanding-buffers) explains the idea behind PyTorch buffers, which are used to implement the causal attention mechanism in chapter 3
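The buffer idea mentioned in the bullet above can be sketched in a few lines. This is a minimal illustration, not the bonus material's actual code: the class name `CausalMask` and the shapes are made up here, but the core pattern — `register_buffer` storing a non-trainable causal mask that still moves with `.to(device)` and is saved in the `state_dict` — is the standard PyTorch idiom.

```python
import torch
import torch.nn as nn

class CausalMask(nn.Module):
    # Hypothetical minimal module illustrating PyTorch buffers.
    # A buffer is part of the module's state (saved, moved across
    # devices with the module) but is NOT a trainable parameter.
    def __init__(self, context_length):
        super().__init__()
        # Upper-triangular mask: 1s above the main diagonal mark
        # "future" positions that causal attention must not see.
        mask = torch.triu(
            torch.ones(context_length, context_length), diagonal=1
        )
        self.register_buffer("mask", mask)

    def forward(self, attn_scores):
        # Fill masked (future) positions with -inf so that a
        # subsequent softmax assigns them zero attention weight.
        n = attn_scores.shape[-1]
        return attn_scores.masked_fill(
            self.mask.bool()[:n, :n], float("-inf")
        )

m = CausalMask(4)
masked = m(torch.zeros(4, 4))
print(masked)  # entries above the diagonal are -inf, the rest are 0
```

Because `mask` is a buffer rather than a parameter, it shows up in `m.state_dict()` and follows `m.to("cuda")`, yet the optimizer never updates it.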
In the video below, I provide a code-along session that covers some of the chapter contents as supplementary material.
<br>
<br>
[Chapter 3 code-along video](https://www.youtube.com/watch?v=Ll8DtpNtvk)