mirror of
https://github.com/rasbt/LLMs-from-scratch.git
synced 2026-04-10 12:33:42 +00:00
Direct Preference Optimization from scratch (#294)
committed via GitHub
parent 3ea0798d44
commit 09dc080cf3
@@ -10,6 +10,6 @@

 - [03_model-evaluation](03_model-evaluation) contains utility code for evaluating instruction responses using a local Llama 3 model and the GPT-4 API.

-- [04_preference-tuning-with-dpo](04_preference-tuning-with-dpo) implements code for preference finetuning with DPO (in progress)
+- [04_preference-tuning-with-dpo](04_preference-tuning-with-dpo) implements code for preference finetuning with Direct Preference Optimization (DPO)

 - [05_dataset-generation](05_dataset-generation) contains code to generate synthetic datasets for instruction finetuning
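For context, the DPO objective that the new 04_preference-tuning-with-dpo section covers can be sketched as follows. This is a minimal, generic sketch of the standard per-pair DPO loss, not the repository's actual implementation; the function and parameter names here are my own, and it takes summed token log-probabilities (for the chosen and rejected responses, under both the policy and the frozen reference model) rather than computing them from a model.

```python
import math

def dpo_loss(policy_chosen_logprob, policy_rejected_logprob,
             ref_chosen_logprob, ref_rejected_logprob, beta=0.1):
    """Per-pair DPO loss (sketch; names are hypothetical, not the repo's API).

    loss = -log sigmoid(beta * (policy_logratio - reference_logratio))
    where each logratio = logprob(chosen) - logprob(rejected).
    """
    policy_logratio = policy_chosen_logprob - policy_rejected_logprob
    ref_logratio = ref_chosen_logprob - ref_rejected_logprob
    logits = beta * (policy_logratio - ref_logratio)
    # -log(sigmoid(x)) written out with math.exp for a dependency-free sketch
    return -math.log(1.0 / (1.0 + math.exp(-logits)))

# When the policy matches the reference, the loss is -log(0.5) = ln 2;
# it drops as the policy favors the chosen response more than the reference does.
baseline = dpo_loss(-1.0, -2.0, -1.0, -2.0)
improved = dpo_loss(-0.5, -2.5, -1.0, -2.0)
```

The `beta` hyperparameter scales how strongly the policy is pushed away from the reference model; a real implementation would compute the log-probabilities with the policy and reference LLMs and average this loss over a batch.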