Mirror of https://github.com/rasbt/LLMs-from-scratch.git (synced 2026-04-10 12:33:42 +00:00)
Add user interface to ch06 and ch07 (#366)

* Add user interface to ch06 and ch07
* pep8
* fix url

Committed by GitHub
Parent: 6f6dfb6796
Commit: 76e9a9ec02
@@ -78,16 +78,15 @@ You can alternatively view this and other files on GitHub at [https://github.com
| Appendix D: Adding Bells and Whistles to the Training Loop | - [appendix-D.ipynb](appendix-D/01_main-chapter-code/appendix-D.ipynb) | [./appendix-D](./appendix-D) |
| Appendix E: Parameter-efficient Finetuning with LoRA | - [appendix-E.ipynb](appendix-E/01_main-chapter-code/appendix-E.ipynb) | [./appendix-E](./appendix-E) |

<br>
&nbsp;

The mental model below summarizes the contents covered in this book.

<img src="https://sebastianraschka.com/images/LLMs-from-scratch-images/mental-model.jpg" width="650px">

<br>
&nbsp;

## Hardware Requirements
@@ -120,6 +119,7 @@ Several folders contain optional materials as a bonus for interested readers:
- **Chapter 6:**
  - [Additional experiments finetuning different layers and using larger models](ch06/02_bonus_additional-experiments)
  - [Finetuning different models on 50k IMDB movie review dataset](ch06/03_bonus_imdb-classification)
  - [Building a User Interface to Interact With the GPT-based Spam Classifier](ch06/04_user_interface)
- **Chapter 7:**
  - [Dataset Utilities for Finding Near Duplicates and Creating Passive Voice Entries](ch07/02_dataset-utilities)
  - [Evaluating Instruction Responses Using the OpenAI API and Ollama](ch07/03_model-evaluation)
@@ -127,9 +127,10 @@ Several folders contain optional materials as a bonus for interested readers:
  - [Improving a Dataset for Instruction Finetuning](ch07/05_dataset-generation/reflection-gpt4.ipynb)
  - [Generating a Preference Dataset with Llama 3.1 70B and Ollama](ch07/04_preference-tuning-with-dpo/create-preference-data-ollama.ipynb)
  - [Direct Preference Optimization (DPO) for LLM Alignment](ch07/04_preference-tuning-with-dpo/dpo-from-scratch.ipynb)
  - [Building a User Interface to Interact With the Instruction Finetuned GPT Model](ch07/06_user_interface)

<br>
&nbsp;

## Questions, Feedback, and Contributing to This Repository