mirror of
https://github.com/rasbt/LLMs-from-scratch.git
synced 2026-04-10 12:33:42 +00:00
minor fixes (#246)

* removed duplicated white spaces
* Update ch07/01_main-chapter-code/ch07.ipynb
* Update ch07/05_dataset-generation/llama3-ollama.ipynb
* removed duplicated white spaces
* fixed title again

---------

Co-authored-by: Sebastian Raschka <mail@sebastianraschka.com>
File diff suppressed because one or more lines are too long
@@ -267,7 +267,7 @@
     "Model saved as gpt2-medium355M-sft-phi3-prompt.pth\n",
     "```\n",
     "\n",
-    "For comparison, you can run the original chapter 7 finetuning code via `python exercise_experiments.py --exercise_solution baseline`. \n",
+    "For comparison, you can run the original chapter 7 finetuning code via `python exercise_experiments.py --exercise_solution baseline`. \n",
     "\n",
     "Note that on an Nvidia L4 GPU, the code above, using the Phi-3 prompt template, takes 1.5 min to run. In comparison, the Alpaca-style template takes 1.80 minutes to run. So, the Phi-3 template is approximately 17% faster since it results in shorter model inputs. \n",
     "\n",
@@ -954,7 +954,7 @@
     "Model saved as gpt2-medium355M-sft-lora.pth\n",
     "```\n",
     "\n",
-    "For comparison, you can run the original chapter 7 finetuning code via `python exercise_experiments.py --exercise_solution baseline`. \n",
+    "For comparison, you can run the original chapter 7 finetuning code via `python exercise_experiments.py --exercise_solution baseline`. \n",
     "\n",
     "Note that on an Nvidia L4 GPU, the code above, using LoRA, takes 1.30 min to run. In comparison, the baseline takes 1.80 minutes to run. So, LoRA is approximately 28% faster.\n",
     "\n",
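The speedup percentages quoted in the hunks above follow directly from the reported runtimes. A minimal sketch of that arithmetic (the `speedup` helper is illustrative and not part of the repository):

```python
def speedup(baseline_min: float, variant_min: float) -> float:
    """Percent reduction in runtime relative to the baseline."""
    return (baseline_min - variant_min) / baseline_min * 100

# Phi-3 prompt template: 1.5 min vs. the 1.80 min Alpaca-style baseline
print(round(speedup(1.80, 1.5)))   # 17 -> "approximately 17% faster"

# LoRA: 1.30 min vs. the 1.80 min baseline
print(round(speedup(1.80, 1.30)))  # 28 -> "approximately 28% faster"
```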
@@ -138,7 +138,7 @@
     "\n",
     "- After the download has been completed, you will see a command line prompt that allows you to chat with the model\n",
     "\n",
-    "- Try a prompt like \"What do llamas eat?\", which should return an output similar to the following:\n",
+    "- Try a prompt like \"What do llamas eat?\", which should return an output similar to the following:\n",
     "\n",
     "```\n",
     ">>> What do llamas eat?\n",
@@ -139,7 +139,7 @@
     "\n",
     "- After the download has been completed, you will see a command line prompt that allows you to chat with the model\n",
     "\n",
-    "- Try a prompt like \"What do llamas eat?\", which should return an output similar to the following:\n",
+    "- Try a prompt like \"What do llamas eat?\", which should return an output similar to the following:\n",
     "\n",
     "```\n",
     ">>> What do llamas eat?\n",