From 7084123d10ae7faef13241481cff634131c193d8 Mon Sep 17 00:00:00 2001 From: Sebastian Raschka Date: Wed, 1 Oct 2025 10:47:04 -0500 Subject: [PATCH] Note about output dimensions (#862) --- ch03/01_main-chapter-code/ch03.ipynb | 28 ++++++++++++++++++++++++++-- 1 file changed, 26 insertions(+), 2 deletions(-) diff --git a/ch03/01_main-chapter-code/ch03.ipynb b/ch03/01_main-chapter-code/ch03.ipynb index 46c82b8..c01d0f2 100644 --- a/ch03/01_main-chapter-code/ch03.ipynb +++ b/ch03/01_main-chapter-code/ch03.ipynb @@ -1900,8 +1900,32 @@ "metadata": {}, "source": [ "- Note that the above is essentially a rewritten version of `MultiHeadAttentionWrapper` that is more efficient\n", - "- The resulting output looks a bit different since the random weight initializations differ, but both are fully functional implementations that can be used in the GPT class we will implement in the upcoming chapters\n", - "- Note that in addition, we added a linear projection layer (`self.out_proj `) to the `MultiHeadAttention` class above. This is simply a linear transformation that doesn't change the dimensions. 
It's a standard convention to use such a projection layer in LLM implementation, but it's not strictly necessary (recent research has shown that it can be removed without affecting the modeling performance; see the further reading section at the end of this chapter)\n" + "- The resulting output looks a bit different since the random weight initializations differ, but both are fully functional implementations that can be used in the GPT class we will implement in the upcoming chapters" + ] + }, + { + "cell_type": "markdown", + "id": "c8bd41e1-32d4-4067-a6d0-fe756a6511a9", + "metadata": {}, + "source": [ + "---\n", + "\n", + "**A note about the output dimensions**\n", + "\n", + "- In the `MultiHeadAttention` above, I used `d_out=2` to match the setting in the `MultiHeadAttentionWrapper` class earlier\n", + "- The `MultiHeadAttentionWrapper`, due to the concatenation, returns the output head dimension `d_out * num_heads` (i.e., `2*2 = 4`)\n", + "- However, the `MultiHeadAttention` class (to make it more user-friendly) allows us to control the output head dimension directly via `d_out`; this means that if we set `d_out = 2`, the output head dimension will be 2, regardless of the number of heads\n", + "- In hindsight, as readers [pointed out](https://github.com/rasbt/LLMs-from-scratch/pull/859), it may be more intuitive to use `MultiHeadAttention` with `d_out = 4` so that it produces the same output dimensions as `MultiHeadAttentionWrapper` with `d_out = 2`.\n", + "\n", + "---" + ] + }, + { + "cell_type": "markdown", + "id": "9310bfa5-9aa9-40b4-8081-a5d8db5faf74", + "metadata": {}, + "source": [ + "- Note that, in addition, we added a linear projection layer (`self.out_proj`) to the `MultiHeadAttention` class above. This is simply a linear transformation that doesn't change the dimensions. 
It's a standard convention to use such a projection layer in LLM implementations, but it's not strictly necessary (recent research has shown that it can be removed without affecting the modeling performance; see the further reading section at the end of this chapter)" ] }, {
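To make the dimension note in the patched cell concrete, here is a minimal shape-only sketch. It is not the book's PyTorch classes: NumPy matrix products stand in for the per-head value projections, and the attention weighting itself is omitted, since only the output shapes are at issue.

```python
import numpy as np

rng = np.random.default_rng(0)
b, num_tokens, d_in = 1, 6, 3   # batch, sequence length, input dim
num_heads, d_out = 2, 2         # the chapter's d_out=2 setting

x = np.ones((b, num_tokens, d_in))

# Wrapper-style: each head maps d_in -> d_out independently and the head
# outputs are concatenated, so the final embedding dim is d_out * num_heads.
head_outputs = []
for _ in range(num_heads):
    W_v = rng.standard_normal((d_in, d_out))   # stand-in for one head's projection
    head_outputs.append(x @ W_v)
wrapper_out = np.concatenate(head_outputs, axis=-1)

# Efficient-class-style: d_out is the final embedding dim; each head works
# with head_dim = d_out // num_heads and the heads are merged back to d_out.
head_dim = d_out // num_heads
W_v = rng.standard_normal((d_in, num_heads * head_dim))
mha_out = x @ W_v

print(wrapper_out.shape)  # (1, 6, 4), i.e., d_out * num_heads
print(mha_out.shape)      # (1, 6, 2), i.e., d_out
```

This is exactly the mismatch the note describes: with `d_out = 2`, the wrapper yields a last dimension of 4 while the fused class yields 2, which is why `d_out = 4` in `MultiHeadAttention` reproduces the wrapper's output shape.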