# Chapter 7: Finetuning to Follow Instructions
## Main Chapter Code
- [01_main-chapter-code](01_main-chapter-code) contains the main chapter code and exercise solutions.
## Bonus Materials
- [02_dataset-utilities](02_dataset-utilities) contains utility code that can be used for preparing an instruction dataset.
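As context for the dataset utilities above, instruction datasets in this chapter consist of instruction–response entries that are rendered into a single prompt string before training. The sketch below illustrates the idea with an Alpaca-style prompt template; the field names and function name are illustrative assumptions, not the utilities' actual API.

```python
# Hedged sketch of Alpaca-style prompt formatting for an instruction
# dataset entry. The dict keys ("instruction", "input") and the template
# wording are assumptions for illustration, not the repo's exact code.
def format_entry(entry):
    prompt = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request."
        f"\n\n### Instruction:\n{entry['instruction']}"
    )
    # Entries with an optional "input" field get an extra section
    if entry.get("input"):
        prompt += f"\n\n### Input:\n{entry['input']}"
    return prompt


example = {"instruction": "Convert 45 kilometers to meters.", "input": ""}
print(format_entry(example))
```

During finetuning, the model's target response is appended after a `### Response:` header so the model learns to complete the formatted prompt.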
- [03_model-evaluation](03_model-evaluation) contains utility code for evaluating instruction responses using a local Llama 3 model and the GPT-4 API.
- [04_preference-tuning-with-dpo](04_preference-tuning-with-dpo) implements code for preference finetuning with DPO (in progress).
- [05_dataset-generation](05_dataset-generation) contains code to generate synthetic datasets for instruction finetuning.