minor fixes (#246)

* removed duplicated white spaces

* Update ch07/01_main-chapter-code/ch07.ipynb

* Update ch07/05_dataset-generation/llama3-ollama.ipynb

* removed duplicated white spaces

* fixed title again

---------

Co-authored-by: Sebastian Raschka <mail@sebastianraschka.com>
Daniel Kleine authored 2024-06-26 00:30:30 +02:00, committed by GitHub
parent 9a9b3530c9
commit 81c843bdc0
10 changed files with 19 additions and 19 deletions


@@ -267,7 +267,7 @@
"Model saved as gpt2-medium355M-sft-phi3-prompt.pth\n",
"```\n",
"\n",
"For comparison, you can run the original chapter 7 finetuning code via `python exercise_experiments.py --exercise_solution baseline`. \n",
"For comparison, you can run the original chapter 7 finetuning code via `python exercise_experiments.py --exercise_solution baseline`. \n",
"\n",
"Note that on an Nvidia L4 GPU, the code above, using the Phi-3 prompt template, takes 1.5 min to run. In comparison, the Alpaca-style template takes 1.80 minutes to run. So, the Phi-3 template is approximately 17% faster since it results in shorter model inputs. \n",
"\n",
@@ -954,7 +954,7 @@
"Model saved as gpt2-medium355M-sft-lora.pth\n",
"```\n",
"\n",
"For comparison, you can run the original chapter 7 finetuning code via `python exercise_experiments.py --exercise_solution baseline`. \n",
"For comparison, you can run the original chapter 7 finetuning code via `python exercise_experiments.py --exercise_solution baseline`. \n",
"\n",
"Note that on an Nvidia L4 GPU, the code above, using LoRA, takes 1.30 min to run. In comparison, the baseline takes 1.80 minutes to run. So, LoRA is approximately 28% faster.\n",
"\n",