Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


Sailor-4B-Chat - bnb 8bits
- Model creator: https://huggingface.co/sail/
- Original model: https://huggingface.co/sail/Sailor-4B-Chat/
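
This repository provides the model above quantized to 8 bits with bitsandbytes. As a minimal sketch of how such a checkpoint can be loaded (assuming the `bitsandbytes` and `accelerate` packages are installed; the repo id below points at the original weights, which are quantized on load, so substitute this repository's id to use the pre-quantized weights directly):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Quantize the weights to 8 bits on load via bitsandbytes.
bnb_config = BitsAndBytesConfig(load_in_8bit=True)

model = AutoModelForCausalLM.from_pretrained(
    "sail/Sailor-4B-Chat",  # original weights, quantized on the fly
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("sail/Sailor-4B-Chat")
```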

Original model description:
---
language:
- en
- zh
- id
- th
- vi
- ms
- lo
datasets:
- CohereForAI/aya_dataset
- CohereForAI/aya_collection
- Open-Orca/OpenOrca
tags:
- multilingual
- sea
- sailor
- sft
- chat
- instruction
widget:
- text: "如何制作烤鱼?"
  example_title: "Chinese"
- text: "How to bake fish?"
  example_title: "English"
- text: "Bagaimana cara memanggang ikan?"
  example_title: "Malay"
- text: "วิธีย่างปลา?"
  example_title: "Thai"
- text: "Bagaimana membuat bakaran ikan?"
  example_title: "Indonesian"
- text: "Làm thế nào để nướng cá?"
  example_title: "Vietnamese"
license: apache-2.0
base_model: sail/Sailor-4B
inference: false
---

<div align="center">
  <img src="banner_sailor.jpg" width="700"/>
</div>

Sailor is a suite of Open Language Models tailored for South-East Asia (SEA), focusing on languages such as 🇮🇩Indonesian, 🇹🇭Thai, 🇻🇳Vietnamese, 🇲🇾Malay, and 🇱🇦Lao.
Developed with careful data curation, Sailor models are designed to understand and generate text across the diverse linguistic landscape of the SEA region.
Built from [Qwen 1.5](https://huggingface.co/collections/Qwen/qwen15-65c0a2f577b1ecb76d786524), Sailor encompasses models of varying sizes, spanning from 0.5B to 14B versions for different requirements.
We further fine-tune the base models with open-source datasets to obtain instruction-tuned models, named Sailor-Chat.
Benchmarking results demonstrate Sailor's proficiency in question answering, commonsense reasoning, and other tasks in SEA languages.

> The logo was generated by MidJourney

## Model Summary
- **Model Collections:** [Base Model & Chat Model](https://huggingface.co/collections/sail/sailor-65e19a749f978976f1959825)
- **Project Website:** [sailorllm.github.io](https://sailorllm.github.io/)
- **Codebase:** [github.com/sail-sg/sailor-llm](https://github.com/sail-sg/sailor-llm)
- **Technical Report:** [arxiv.org/pdf/2404.03608.pdf](https://arxiv.org/pdf/2404.03608.pdf)

76
+ Sailor is crafted by continually pre-training from language models like the remarkable Qwen 1.5 models, which already has a great performance on SEA languages.
77
+ The pre-training corpus heavily leverages the publicly available corpus, including
78
+ [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B),
79
+ [SkyPile](https://huggingface.co/datasets/Skywork/SkyPile-150B),
80
+ [CC100](https://huggingface.co/datasets/cc100) and [MADLAD-400](https://huggingface.co/datasets/allenai/MADLAD-400).
81
+ The instruction tuning corpus are all publicly available including
82
+ [aya_collection](https://huggingface.co/datasets/CohereForAI/aya_collection),
83
+ [aya_dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset),
84
+ [OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca).
85
+
86
+ By employing aggressive data deduplication and careful data cleaning on the collected corpus, we have attained a high-quality dataset spanning various languages.
87
+ Through systematic experiments to determine the weights of different languages, Sailor models undergo training from 200B to 400B tokens, tailored to different model sizes.
88
+ The approach boosts their performance on SEA languages while maintaining proficiency in English and Chinese without significant compromise.
89
+ Finally, we continually pre-train the Qwen1.5-0.5B model with 400 Billion tokens, and other models with 200 Billion tokens to obtain the Sailor models.
90
+
## Requirements
The code for Sailor is included in the latest Hugging Face Transformers; we advise you to install `transformers>=4.37.0`.
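
For example, in a pip-based environment (a sketch; `accelerate` is assumed for `device_map="auto"` below, and `bitsandbytes` for the 8-bit loading shown above):

```shell
pip install "transformers>=4.37.0" accelerate bitsandbytes
```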

## Quickstart

The following code snippet shows how to load the tokenizer and model, and how to generate text.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"

model = AutoModelForCausalLM.from_pretrained(
    'sail/Sailor-4B-Chat',
    torch_dtype="auto",
    device_map="auto"
)

tokenizer = AutoTokenizer.from_pretrained('sail/Sailor-4B-Chat')
system_prompt = 'You are a helpful assistant'

# "Give me a brief introduction to large language models." (Indonesian)
prompt = "Beri saya pengenalan singkat tentang model bahasa besar."
# The same prompt in Vietnamese:
# prompt = "Hãy cho tôi một giới thiệu ngắn gọn về mô hình ngôn ngữ lớn."
# The same prompt in Thai:
# prompt = "ให้ฉันแนะนำสั้น ๆ เกี่ยวกับโมเดลภาษาขนาดใหญ่"

# Sailor's chat template uses "question" (not "user") as the role name.
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "question", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(device)
input_ids = model_inputs.input_ids.to(device)

generated_ids = model.generate(
    input_ids,
    max_new_tokens=512,
)

# Keep only the newly generated tokens, dropping the echoed prompt.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
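
For interactive use, a small variation of the snippet above streams tokens to stdout as they are generated. This is a sketch reusing `model`, `tokenizer`, and `model_inputs` from above; `TextStreamer` is the standard transformers streaming helper:

```python
from transformers import TextStreamer

# Decode and print tokens as they arrive, skipping the echoed prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

_ = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512,
    streamer=streamer,
)
```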

## License

Sailor is distributed under the terms of the Apache License 2.0.
There are no restrictions on research or commercial use, but usage must comply with the [Qwen License](https://huggingface.co/Qwen/Qwen1.5-1.8B/blob/main/LICENSE).

## Citation

If you find Sailor useful, please cite our work as follows:

```bibtex
@article{dou2024sailor,
  title={Sailor: Open Language Models for South-East Asia},
  author={Dou, Longxu and Liu, Qian and Zeng, Guangtao and Guo, Jia and Zhou, Jiahui and Lu, Wei and Lin, Min},
  journal={arXiv preprint arXiv:2404.03608},
  year={2024}
}
```

## Contact Us

If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]) or [[email protected]](mailto:[email protected]).