Update README.md
README.md

---
license: apache-2.0
language:
- en
datasets:
- instruction-pretrain/ft-instruction-synthesizer-collection
---

# Instruction Pre-Training: Language Models are Supervised Multitask Learners

This repo contains the **context-based instruction synthesizer** used in our paper **Instruction Pre-Training: Language Models are Supervised Multitask Learners**.

We explore supervised multitask pre-training by proposing ***Instruction Pre-Training***, a framework that scalably augments massive raw corpora with instruction-response pairs to pre-train language models. The instruction-response pairs are generated by an efficient instruction synthesizer built on open-source models. In our experiments, we synthesize 200M instruction-response pairs covering 40+ task categories to verify the effectiveness of *Instruction Pre-Training*. ***Instruction Pre-Training* outperforms *Vanilla Pre-training* in both general pre-training from scratch and domain-adaptive continual pre-training.** In pre-training from scratch, *Instruction Pre-Training* not only improves pre-trained base models but also benefits more from further instruction tuning. In continual pre-training, *Instruction Pre-Training* enables Llama3-8B to be comparable to or even outperform Llama3-70B.

<p align='center'>
    <img src="https://cdn-uploads.huggingface.co/production/uploads/66711d2ee12fa6cc5f5dfc89/vRdsFIVQptbNaGiZ18Lih.png" width="400">
</p>

## Resources
**🤗 We share our data and models with example usages, feel free to open any issues or discussions! 🤗**

- Context-Based Instruction Synthesizer: [instruction-synthesizer](https://huggingface.co/instruction-pretrain/instruction-synthesizer)
- Fine-Tuning Data for the Synthesizer: [ft-instruction-synthesizer-collection](https://huggingface.co/datasets/instruction-pretrain/ft-instruction-synthesizer-collection)
- General Models Pre-Trained from Scratch:
  - [InstructLM-500M](https://huggingface.co/instruction-pretrain/InstructLM-500M)
  - [InstructLM-1.3B](https://huggingface.co/instruction-pretrain/InstructLM-1.3B)
- Domain-Specific Models Pre-Trained from Llama3-8B:
  - [Finance-Llama3-8B](https://huggingface.co/instruction-pretrain/finance-Llama3-8B)
  - [Biomedicine-Llama3-8B](https://huggingface.co/instruction-pretrain/medicine-Llama3-8B)
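
All of the checkpoints above can be loaded through the standard 🤗 Transformers API. The snippet below is a minimal sketch using [InstructLM-500M](https://huggingface.co/instruction-pretrain/InstructLM-500M); the prompt and generation settings are illustrative placeholders rather than the exact usage examples from the individual model cards:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch: load one of the released checkpoints listed above.
model_name = "instruction-pretrain/InstructLM-500M"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Placeholder prompt; the generation settings are illustrative, not tuned.
inputs = tokenizer("Explain why the sky appears blue.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```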

## Synthesize Instruction-Response Pairs Based on Any Raw Text
We conduct multitask fine-tuning on a language model to develop an instruction synthesizer capable of generating instruction-response pairs from any raw text. The fine-tuning data are available at [ft-instruction-synthesizer-collection](https://huggingface.co/datasets/instruction-pretrain/ft-instruction-synthesizer-collection).
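
To inspect that fine-tuning data, it can be pulled with the 🤗 `datasets` library. This is only a sketch: the `data_files` pattern below is an assumed placeholder, so please check the dataset card for the actual subset names and file layout:

```python
from datasets import load_dataset

# Sketch only: "squad/*.jsonl" is a hypothetical subset path;
# see the dataset card for the real file layout.
ds = load_dataset(
    "instruction-pretrain/ft-instruction-synthesizer-collection",
    data_files="squad/*.jsonl",
    split="train",
)
print(ds[0])
```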

<p align='center'>
    <img src="https://cdn-uploads.huggingface.co/production/uploads/66711d2ee12fa6cc5f5dfc89/0889QyG59QM3rPeZlcTzZ.png" width="700">
</p>

For example, to prompt the synthesizer to generate instruction-response pairs based on a given raw text:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# ... (the intermediate lines of this example are unchanged and not shown in this diff) ...

for index, pair in enumerate(instruction_response_pairs):
    print(f'## Instruction {index + 1}:\n{pair["Q"]}\n## Response {index + 1}:\n{pair["A"]}\n')
```
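
The synthesized pairs are what *Instruction Pre-Training* uses to augment a raw corpus before pre-training. The helper below is only an illustrative sketch of that idea; the concatenation template and the variable holding the raw text are assumptions, not the exact format used in the paper:

```python
def augment_with_instructions(raw_text, pairs):
    """Append synthesized instruction-response pairs to a raw text.

    Illustrative only: the actual template used for Instruction Pre-Training
    may differ; see the paper and code release for the exact format.
    """
    qa_block = "\n\n".join(f"Q: {p['Q']}\nA: {p['A']}" for p in pairs)
    return f"{raw_text}\n\n{qa_block}"

# Example usage with the `instruction_response_pairs` produced above
# (`text` is assumed to hold the raw input passed to the synthesizer):
# augmented_example = augment_with_instructions(text, instruction_response_pairs)
```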

### To-Do
- [ ] Add example usages for synthesizing few-shot examples

## Citation
If you find our work helpful, please cite us:

[AdaptLLM](https://huggingface.co/papers/2309.09530)
```bibtex
@inproceedings{
cheng2024adapting,