Direct Preference Optimization from scratch (#294)

This commit is contained in:
Sebastian Raschka
2024-08-04 08:57:36 -05:00
committed by GitHub
parent 3ea0798d44
commit 09dc080cf3
5 changed files with 3570 additions and 7 deletions


@@ -10,6 +10,6 @@
- [03_model-evaluation](03_model-evaluation) contains utility code for evaluating instruction responses using a local Llama 3 model and the GPT-4 API.
-- [04_preference-tuning-with-dpo](04_preference-tuning-with-dpo) implements code for preference finetuning with DPO (in progress)
+- [04_preference-tuning-with-dpo](04_preference-tuning-with-dpo) implements code for preference finetuning with Direct Preference Optimization (DPO)
- [05_dataset-generation](05_dataset-generation) contains code to generate synthetic datasets for instruction finetuning
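
As a quick illustration of what the preference-tuning code in that folder is built around, here is a minimal sketch of the DPO loss for a single preference pair. The function name and inputs are hypothetical, not taken from the repository; it assumes you already have summed log-probabilities of the chosen and rejected responses under both the policy and a frozen reference model:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one preference pair: -log sigmoid(beta * margin),
    where the margin compares the policy's log-prob ratios (chosen vs.
    rejected) against the reference model's."""
    logits = beta * ((policy_chosen_logp - ref_chosen_logp)
                     - (policy_rejected_logp - ref_rejected_logp))
    # Numerically stable form of -log(sigmoid(logits))
    return math.log1p(math.exp(-logits))

# When the policy matches the reference exactly, the margin is zero
# and the loss is log(2):
print(dpo_loss(-1.0, -2.0, -1.0, -2.0))  # → 0.6931...
```

Raising the policy's log-probability of the chosen response relative to the reference (while the rejected side stays put) increases the margin and drives the loss below log(2), which is the gradient signal DPO training exploits.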