Readability and code quality improvements (#959)

* Consistent dataset naming

* consistent section headers
This commit is contained in:
Sebastian Raschka
2026-02-17 19:44:56 -05:00
committed by GitHub
parent 7b1f740f74
commit be5e2a3331
48 changed files with 419 additions and 297 deletions


@@ -89,6 +89,7 @@
"id": "8bbc68e9-75b3-41f1-ac2c-e071c3cd0813"
},
"source": [
" \n",
"## 7.1 Introduction to instruction finetuning"
]
},
@@ -133,6 +134,7 @@
"id": "5384f0cf-ef3c-4436-a5fa-59bd25649f86"
},
"source": [
" \n",
"## 7.2 Preparing a dataset for supervised instruction finetuning"
]
},
@@ -499,6 +501,7 @@
"id": "fcaaf606-f913-4445-8301-632ae10d387d"
},
"source": [
" \n",
"## 7.3 Organizing data into training batches"
]
},
@@ -1492,6 +1495,7 @@
"id": "d6aad445-8f19-4238-b9bf-db80767fb91a"
},
"source": [
" \n",
"## 7.5 Loading a pretrained LLM"
]
},
@@ -1724,6 +1728,7 @@
"id": "70d27b9d-a942-4cf5-b797-848c5f01e723"
},
"source": [
" \n",
"## 7.6 Finetuning the LLM on instruction data"
]
},
@@ -1995,6 +2000,7 @@
"id": "87b79a47-13f9-4d1f-87b1-3339bafaf2a3"
},
"source": [
" \n",
"## 7.7 Extracting and saving responses"
]
},
@@ -2251,6 +2257,7 @@
"id": "obgoGI89dgPm"
},
"source": [
" \n",
"## 7.8 Evaluating the finetuned LLM"
]
},
@@ -2847,6 +2854,7 @@
"id": "tIbNMluCDjVM"
},
"source": [
" \n",
"### 7.9.1 What's next\n",
"\n",
"- This marks the final chapter of this book\n",
@@ -2857,12 +2865,26 @@
"- An optional step that is sometimes followed after instruction finetuning, as described in this chapter, is preference finetuning\n",
"- The preference finetuning process can be particularly useful for customizing a model to better align with specific user preferences; see the [../04_preference-tuning-with-dpo](../04_preference-tuning-with-dpo) folder if you are interested in this\n",
"\n",
"- This GitHub repository also contains a large selection of additional bonus material you may enjoy; for more information, please see the [Bonus Material](https://github.com/rasbt/LLMs-from-scratch?tab=readme-ov-file#bonus-material) section on this repository's README page"
]
},
{
"cell_type": "markdown",
"id": "0e2b7bc2-2e8d-483f-a8f5-e2aa093db189",
"metadata": {},
"source": [
" \n",
"### 7.9.2 Staying up to date in a fast-moving field\n",
"\n",
"- No code in this section"
]
},
{
"cell_type": "markdown",
"id": "e3d8327d-afb5-4d24-88af-e253889251cf",
"metadata": {},
"source": [
" \n",
"### 7.9.3 Final words\n",
"\n",
"- I hope you enjoyed this journey of implementing an LLM from the ground up and coding the pretraining and finetuning functions\n",