fixed typos (#414)

* fixed typos

* fixed formatting

* Update ch03/02_bonus_efficient-multihead-attention/mha-implementations.ipynb

* del weights after load into model

---------

Co-authored-by: Sebastian Raschka <mail@sebastianraschka.com>
Daniel Kleine
2024-10-25 01:23:53 +02:00
committed by GitHub
parent 8b60460319
commit 0ed1e0d099
2 changed files with 18 additions and 14 deletions


@@ -1843,7 +1843,7 @@
"id": "VlH7qYVdDKQr"
},
"source": [
"- Note that the Llama 3 model should ideally used with the correct prompt template that was used during finetuning (as discussed in chapter 7)\n",
"- Note that the Llama 3 model should ideally be used with the correct prompt template that was used during finetuning (as discussed in chapter 7)\n",
"- Below is a wrapper class around the tokenizer based on Meta AI's Llama 3-specific [ChatFormat code](https://github.com/meta-llama/llama3/blob/11817d47e1ba7a4959b025eb1ca308572e0e3963/llama/tokenizer.py#L202) that constructs the prompt template"
]
},
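For context, the wrapper class this cell refers to follows Meta AI's `ChatFormat` code linked above. Below is a minimal sketch of the idea, assuming Meta's tokenizer interface (a `special_tokens` dict plus an `encode(text, bos, eos)` method); it is an illustration of the prompt template, not the notebook's exact implementation:

```python
# Sketch of a Llama 3 chat-format wrapper (assumes Meta's tokenizer interface;
# see the ChatFormat code linked in the cell above)
class ChatFormat:
    def __init__(self, tokenizer):
        self.tokenizer = tokenizer

    def encode_header(self, message):
        # Each turn starts with <|start_header_id|>role<|end_header_id|>\n\n
        tokens = [self.tokenizer.special_tokens["<|start_header_id|>"]]
        tokens += self.tokenizer.encode(message["role"], bos=False, eos=False)
        tokens.append(self.tokenizer.special_tokens["<|end_header_id|>"])
        tokens += self.tokenizer.encode("\n\n", bos=False, eos=False)
        return tokens

    def encode(self, text):
        # Wrap a single user message and close it with <|eot_id|>
        message = {"role": "user", "content": text}
        tokens = self.encode_header(message)
        tokens += self.tokenizer.encode(message["content"].strip(), bos=False, eos=False)
        tokens.append(self.tokenizer.special_tokens["<|eot_id|>"])
        return tokens
```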
@@ -2099,7 +2099,7 @@
"metadata": {},
"outputs": [],
"source": [
"LLAMA32_CONFIG[\"context_length\"] = 8192"
"LLAMA31_CONFIG_8B[\"context_length\"] = 8192"
]
},
{
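This hunk (and the matching one for the 1B model further down) corrects the name of the config dict; the override itself shortens the context window to save memory, since buffers such as the RoPE tables and the causal mask are typically precomputed at `context_length`. A hedged sketch, where every key except `context_length` is an illustrative assumption about the notebook's config layout:

```python
# Hypothetical excerpt of the notebook's config dict; exact keys may differ
LLAMA31_CONFIG_8B = {
    "vocab_size": 128_256,      # Llama 3 vocabulary size
    "context_length": 131_072,  # full context window supported by Llama 3.1
    # ... further architecture hyperparameters ...
}

# Shrink the supported context window before instantiating the model so that
# the precomputed position buffers take up less memory
LLAMA31_CONFIG_8B["context_length"] = 8192
```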
@@ -2319,7 +2319,8 @@
" combined_weights.update(current_weights)\n",
"\n",
"load_weights_into_llama(model, LLAMA31_CONFIG_8B, combined_weights)\n",
"model.to(device);"
"model.to(device);\n",
"del combined_weights # free up memory"
]
},
{
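The new `del` line is the commit's memory fix: once `load_weights_into_llama` has copied the tensors into the model's parameters, the separate state-dict copy is redundant. A sketch of the full pattern, assuming the notebook's `load_weights_into_llama`, `model`, and `device` are defined earlier (shard file names are illustrative):

```python
import gc

from safetensors.torch import load_file

# Merge the sharded safetensors checkpoints into one state dict
combined_weights = {}
for weights_file in ["model-00001-of-00004.safetensors",
                     "model-00002-of-00004.safetensors"]:  # illustrative names
    current_weights = load_file(weights_file)
    combined_weights.update(current_weights)

load_weights_into_llama(model, LLAMA31_CONFIG_8B, combined_weights)
model.to(device)

# The parameters now hold their own copies of the tensors, so the state dict
# only wastes host RAM; delete it and let Python reclaim the memory
del combined_weights
gc.collect()
```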
@@ -2466,7 +2467,7 @@
"metadata": {},
"outputs": [],
"source": [
"LLAMA32_CONFIG[\"context_length\"] = 8192"
"LLAMA32_CONFIG_1B[\"context_length\"] = 8192"
]
},
{
@@ -2594,7 +2595,8 @@
"current_weights = load_file(weights_file)\n",
"\n",
"load_weights_into_llama(model, LLAMA32_CONFIG_1B, current_weights)\n",
"model.to(device);"
"model.to(device);\n",
"del current_weights # free up memory"
]
},
{