- rslora
- liger
- sociology
datasets:
- agentlans/finewebedu-multiple-choice
---

# Qwen2.5-1.5B-Instruct-Multiple-Choice-Maker

Special thanks to [mradermacher](https://huggingface.co/mradermacher) for quantizing this model.

### Training Details

- **Data Source**:
  - The training dataset was generated from open-source sociology textbooks using a custom prompt powered by the [agentlans/Llama3.1-LexiHermes-SuperStorm](https://huggingface.co/agentlans/Llama3.1-LexiHermes-SuperStorm) model. The dataset contains 3739 rows; due to licensing restrictions, it is not provided here.
  - Additional finetuning was performed on the [agentlans/finewebedu-multiple-choice](https://huggingface.co/datasets/agentlans/finewebedu-multiple-choice) dataset.
- **Training Method**:
  - Fine-tuning was conducted using [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) with rank 16 LoRA, alpha = 32, and rsLoRA, leveraging the Liger kernel.
  - The additional finetuning used the same settings but with 0.2 LoRA dropout.

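The `rslora` tag refers to rank-stabilized LoRA, which changes the adapter scaling factor from `alpha / r` to `alpha / sqrt(r)` so that update magnitudes stay stable as the rank grows. A minimal sketch of what that means for the settings above (rank 16, alpha 32):

```python
import math

# LoRA settings from this card: rank 16, alpha 32.
r, alpha = 16, 32

# Standard LoRA scales the adapter update by alpha / r.
scaling_plain = alpha / r             # 32 / 16 = 2.0

# rsLoRA (rank-stabilized LoRA) scales by alpha / sqrt(r) instead,
# which keeps the effective update magnitude stable at higher ranks.
scaling_rs = alpha / math.sqrt(r)     # 32 / 4 = 8.0

print(scaling_plain, scaling_rs)
```

In PEFT-style tooling this corresponds to enabling the rank-stabilized option alongside `r=16` and `lora_alpha=32` (and, for the additional run described above, `lora_dropout=0.2`).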
### Potential Applications

- **Education**: Automate the creation of multiple-choice questions for exams or quizzes.
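For automated quiz pipelines, the generated text can be post-processed into structured fields. A minimal parsing sketch; the letter-prefixed option format ("A) … D)" plus an "Answer:" line) is an assumption, since this card does not document the model's exact output format:

```python
import re

def parse_mcq(text: str) -> dict:
    """Split a generated multiple-choice question into question,
    options, and answer. Assumes the hypothetical format:
    question line, 'A) ... D) ...' option lines, 'Answer: X' line."""
    lines = [ln.strip() for ln in text.strip().splitlines() if ln.strip()]
    question = lines[0]
    # Collect option lines such as "B) The lifelong process ..."
    options = {m.group(1): m.group(2)
               for ln in lines[1:]
               if (m := re.match(r"^([A-D])[).]\s*(.+)$", ln))}
    # Pull the letter from a trailing "Answer: X" line, if present.
    answer = next((re.sub(r"(?i)^answer:\s*", "", ln)
                   for ln in lines if ln.lower().startswith("answer")), None)
    return {"question": question, "options": options, "answer": answer}

sample = """What is socialization?
A) A biological instinct
B) The lifelong process of learning societal norms
C) A form of government
D) An economic system
Answer: B"""

parsed = parse_mcq(sample)
print(parsed["answer"])  # B
```

Adjust the regexes to whatever format the model actually emits in practice.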