Fix 8-billion-parameter spelling

rasbt
2024-07-28 10:48:56 -05:00
parent 9a3b04f92f
commit a7869ad2bf
4 changed files with 18 additions and 18 deletions


@@ -35,7 +35,7 @@
"id": "a128651b-f326-4232-a994-42f38b7ed520",
"metadata": {},
"source": [
"- This notebook uses an 8 billion parameter Llama 3 model through ollama to evaluate responses of instruction finetuned LLMs based on a dataset in JSON format that includes the generated model responses, for example:\n",
"- This notebook uses an 8-billion-parameter Llama 3 model through ollama to evaluate responses of instruction finetuned LLMs based on a dataset in JSON format that includes the generated model responses, for example:\n",
"\n",
"\n",
"\n",
@@ -108,7 +108,7 @@
"<img src=\"https://sebastianraschka.com/images/LLMs-from-scratch-images/bonus/ollama-eval/ollama-serve.webp?1\">\n",
"\n",
"\n",
"- With the ollama application or `ollama serve` running, in a different terminal, on the command line, execute the following command to try out the 8 billion parameters Llama 3 model (the model, which takes up 4.7 GB of storage space, will be automatically downloaded the first time you execute this command)\n",
"- With the ollama application or `ollama serve` running, in a different terminal, on the command line, execute the following command to try out the 8-billion-parameter Llama 3 model (the model, which takes up 4.7 GB of storage space, will be automatically downloaded the first time you execute this command)\n",
"\n",
"```bash\n",
"# 8B model\n",
@@ -132,9 +132,9 @@
"success \n",
"```\n",
"\n",
"- Note that `llama3` refers to the instruction finetuned 8 billion Llama 3 model\n",
"- Note that `llama3` refers to the instruction finetuned 8-billion-parameter Llama 3 model\n",
"\n",
"- Alternatively, you can also use the larger 70 billion parameters Llama 3 model, if your machine supports it, by replacing `llama3` with `llama3:70b`\n",
"- Alternatively, you can also use the larger 70-billion-parameter Llama 3 model, if your machine supports it, by replacing `llama3` with `llama3:70b`\n",
"\n",
"- After the download has been completed, you will see a command line prompt that allows you to chat with the model\n",
"\n",
@@ -640,7 +640,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.10.6"
}
},
"nbformat": 4,