mirror of
https://github.com/rasbt/LLMs-from-scratch.git
synced 2026-04-10 12:33:42 +00:00
Fix 8-billion-parameter spelling
@@ -35,7 +35,7 @@
     "id": "a128651b-f326-4232-a994-42f38b7ed520",
     "metadata": {},
     "source": [
-    "- This notebook uses an 8 billion parameter Llama 3 model through ollama to generate a synthetic dataset using the \"hack\" proposed in the \"Magpie: Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing\" paper ([https://arxiv.org/abs/2406.08464](https://arxiv.org/abs/2406.08464))\n",
+    "- This notebook uses an 8-billion-parameter Llama 3 model through ollama to generate a synthetic dataset using the \"hack\" proposed in the \"Magpie: Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing\" paper ([https://arxiv.org/abs/2406.08464](https://arxiv.org/abs/2406.08464))\n",
     "\n",
     "- The generated dataset will be an instruction dataset with \"instruction\" and \"output\" field similar to what can be found in Alpaca:\n",
     "\n",
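For context, the Magpie "hack" the notebook refers to exploits the chat template: if the model is sent only the header that opens a user turn, an instruction-tuned Llama 3 autocompletes a plausible user instruction, which can be harvested as synthetic training data. A minimal sketch of building that empty-user-turn prefix (an illustrative helper, not the notebook's actual code; the special tokens follow Llama 3's published chat format):

```python
# Sketch of the Magpie "prompting with nothing" idea (hypothetical helper,
# not taken from the notebook). The prefix opens a user turn and stops, so
# a Llama 3 instruct model's continuation is a synthetic user instruction.

def build_magpie_prompt() -> str:
    """Return the empty user-turn prefix of the Llama 3 chat template."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
    )

prompt = build_magpie_prompt()
print(repr(prompt))
```

Feeding this prefix to the model verbatim (e.g. via an API that accepts raw prompts) and sampling a continuation yields one synthetic instruction per request.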
@@ -109,7 +109,7 @@
     "<img src=\"https://sebastianraschka.com/images/LLMs-from-scratch-images/bonus/ollama-eval/ollama-serve.webp?1\">\n",
     "\n",
     "\n",
-    "- With the ollama application or `ollama serve` running, in a different terminal, on the command line, execute the following command to try out the 8 billion parameters Llama 3 model (the model, which takes up 4.7 GB of storage space, will be automatically downloaded the first time you execute this command)\n",
+    "- With the ollama application or `ollama serve` running, in a different terminal, on the command line, execute the following command to try out the 8-billion-parameter Llama 3 model (the model, which takes up 4.7 GB of storage space, will be automatically downloaded the first time you execute this command)\n",
     "\n",
     "```bash\n",
     "# 8B model\n",
@@ -133,9 +133,9 @@
     "success \n",
     "```\n",
     "\n",
-    "- Note that `llama3` refers to the instruction finetuned 8 billion Llama 3 model\n",
+    "- Note that `llama3` refers to the instruction finetuned 8-billion-parameter Llama 3 model\n",
     "\n",
-    "- Alternatively, you can also use the larger 70 billion parameters Llama 3 model, if your machine supports it, by replacing `llama3` with `llama3:70b`\n",
+    "- Alternatively, you can also use the larger 70-billion-parameter Llama 3 model, if your machine supports it, by replacing `llama3` with `llama3:70b`\n",
     "\n",
     "- After the download has been completed, you will see a command line prompt that allows you to chat with the model\n",
     "\n",
@@ -498,7 +498,7 @@
     "name": "python",
     "nbconvert_exporter": "python",
     "pygments_lexer": "ipython3",
-    "version": "3.11.4"
+    "version": "3.10.6"
    }
  },
  "nbformat": 4,
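Besides the interactive `ollama run` prompt the notebook describes, a running `ollama serve` also exposes a local REST API (by default on port 11434). A minimal sketch of assembling a request body for its `/api/chat` endpoint, using a hypothetical helper name (`build_chat_payload` is illustrative, not from the notebook):

```python
import json

# Hypothetical helper: build the JSON body for ollama's local REST
# endpoint, POST http://localhost:11434/api/chat. Setting "stream" to
# False requests one complete JSON response instead of partial chunks.

def build_chat_payload(model: str, user_message: str) -> str:
    body = {
        "model": model,  # e.g. "llama3" or "llama3:70b"
        "messages": [{"role": "user", "content": user_message}],
        "stream": False,
    }
    return json.dumps(body)

payload = build_chat_payload("llama3", "What do llamas eat?")
print(payload)
```

Sending this payload with any HTTP client (e.g. `urllib.request`) to a local ollama server returns the model's reply in the response's `message.content` field.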