diff --git a/ch03/README.md b/ch03/README.md
index ad89208..789aa69 100644
--- a/ch03/README.md
+++ b/ch03/README.md
@@ -9,4 +9,13 @@
## Bonus Materials
- [02_bonus_efficient-multihead-attention](02_bonus_efficient-multihead-attention) implements and compares different implementation variants of multihead-attention
-- [03_understanding-buffers](03_understanding-buffers) explains the idea behind PyTorch buffers, which are used to implement the causal attention mechanism in chapter 3
\ No newline at end of file
+- [03_understanding-buffers](03_understanding-buffers) explains the idea behind PyTorch buffers, which are used to implement the causal attention mechanism in chapter 3
+
+
+
+In the video below, I provide a supplementary code-along session that walks through some of the chapter's contents.
+
+
+
+
+[Chapter 3 code-along video](https://www.youtube.com/watch?v=Ll8DtpNtvk)
\ No newline at end of file