---
language:
- ar
- es
- de
- fr
- it
- ja
- nl
- pl
- pt
- ru
- tr
- bg
- bn
- cs
- da
- el
- fa
- fi
- hi
- hu
- id
- ko
- no
- ro
- sk
- sv
- th
- uk
- vi
- am
- az
- bo
- he
- hr
- hy
- is
- jv
- ka
- kk
- km
- ky
- lo
- mn
- mr
- ms
- my
- ne
- ps
- si
- sw
- ta
- te
- tg
- tl
- ug
- ur
- uz
- yue
base_model:
- Qwen/Qwen3-8B-Base
license: apache-2.0
pipeline_tag: translation
---

## LMT
- Paper: [Beyond English: Toward Inclusive and Scalable Multilingual Machine Translation with LLMs](https://arxiv.org/abs/2511.07003)
- Github: [LMT](https://github.com/NiuTrans/LMT)

**LMT-60** is a suite of **Chinese-English-centric** MMT models trained on **90B** mixed monolingual and bilingual tokens, covering **60 languages across 234 translation directions** and achieving **SOTA performance** among models with similar language coverage.
We release both the CPT and SFT versions of LMT-60 in four sizes (0.6B/1.7B/4B/8B). All checkpoints are available:

| Models | Model Link |
|:------------|:------------|
| LMT-60-0.6B-Base | [NiuTrans/LMT-60-0.6B-Base](https://huggingface.co/NiuTrans/LMT-60-0.6B-Base) |
| LMT-60-0.6B | [NiuTrans/LMT-60-0.6B](https://huggingface.co/NiuTrans/LMT-60-0.6B) |
| LMT-60-1.7B-Base | [NiuTrans/LMT-60-1.7B-Base](https://huggingface.co/NiuTrans/LMT-60-1.7B-Base) |
| LMT-60-1.7B | [NiuTrans/LMT-60-1.7B](https://huggingface.co/NiuTrans/LMT-60-1.7B) |
| LMT-60-4B-Base | [NiuTrans/LMT-60-4B-Base](https://huggingface.co/NiuTrans/LMT-60-4B-Base) |
| LMT-60-4B | [NiuTrans/LMT-60-4B](https://huggingface.co/NiuTrans/LMT-60-4B) |
| LMT-60-8B-Base | [NiuTrans/LMT-60-8B-Base](https://huggingface.co/NiuTrans/LMT-60-8B-Base) |
| LMT-60-8B | [NiuTrans/LMT-60-8B](https://huggingface.co/NiuTrans/LMT-60-8B) |

Our supervised fine-tuning (SFT) data are released at [NiuTrans/LMT-60-sft-data](https://huggingface.co/datasets/NiuTrans/LMT-60-sft-data).
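
The released SFT data can be pulled with the standard `datasets` library. The snippet below is a minimal sketch rather than official usage: it only assumes the repository id above loads with the default configuration, and it prints the splits and column names instead of guessing at a schema.

```python
from datasets import load_dataset

# Minimal sketch (assumption: the repo loads with its default configuration).
sft_data = load_dataset("NiuTrans/LMT-60-sft-data")

# Inspect the available splits and columns before building a training pipeline.
print(sft_data)
```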

## Quickstart

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "NiuTrans/LMT-60-8B"

# Left padding keeps generated continuations aligned when batching several prompts.
tokenizer = AutoTokenizer.from_pretrained(model_name, padding_side='left')
model = AutoModelForCausalLM.from_pretrained(model_name)

# Prompt template: source language, source sentence, then the target-language cue.
prompt = "Translate the following text from English into Chinese.\nEnglish: The concept came from China where plum blossoms were the flower of choice.\nChinese: "
messages = [{"role": "user", "content": prompt}]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Beam search without sampling gives deterministic translations.
generated_ids = model.generate(**model_inputs, max_new_tokens=512, num_beams=5, do_sample=False)

# Drop the prompt tokens and decode only the newly generated continuation.
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
response = tokenizer.decode(output_ids, skip_special_tokens=True)

print("response:", response)
```
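
Because the tokenizer is loaded with `padding_side='left'`, several prompts can also be translated in one batch. The sketch below is illustrative rather than part of the official card: it reuses `model` and `tokenizer` from the snippet above and assumes the same prompt template carries over to other supported directions (German and French are picked arbitrarily from the language table below).

```python
import torch

source = "The concept came from China where plum blossoms were the flower of choice."
targets = ["German", "French"]  # illustrative choices from the supported-language table

prompts = [
    f"Translate the following text from English into {lang}.\nEnglish: {source}\n{lang}: "
    for lang in targets
]
texts = [
    tokenizer.apply_chat_template(
        [{"role": "user", "content": p}], tokenize=False, add_generation_prompt=True
    )
    for p in prompts
]

# Left padding (set above) aligns all prompts to the same length on the right.
batch = tokenizer(texts, return_tensors="pt", padding=True).to(model.device)

with torch.no_grad():
    out = model.generate(**batch, max_new_tokens=512, num_beams=5, do_sample=False)

# Strip the (padded) prompt portion of each row, then decode only the new tokens.
new_tokens = out[:, batch.input_ids.shape[1]:]
print(tokenizer.batch_decode(new_tokens, skip_special_tokens=True))
```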

## Supported Languages

| Resource Tier | Languages |
| :---- | :---- |
| High-resource languages (13) | Arabic (ar), English (en), Spanish (es), German (de), French (fr), Italian (it), Japanese (ja), Dutch (nl), Polish (pl), Portuguese (pt), Russian (ru), Turkish (tr), Chinese (zh) |
| Medium-resource languages (18) | Bulgarian (bg), Bengali (bn), Czech (cs), Danish (da), Modern Greek (el), Persian (fa), Finnish (fi), Hindi (hi), Hungarian (hu), Indonesian (id), Korean (ko), Norwegian (no), Romanian (ro), Slovak (sk), Swedish (sv), Thai (th), Ukrainian (uk), Vietnamese (vi) |
| Low-resource languages (29) | Amharic (am), Azerbaijani (az), Tibetan (bo), Modern Hebrew (he), Croatian (hr), Armenian (hy), Icelandic (is), Javanese (jv), Georgian (ka), Kazakh (kk), Central Khmer (km), Kirghiz (ky), Lao (lo), Chinese Mongolian (mn_cn), Marathi (mr), Malay (ms), Burmese (my), Nepali (ne), Pashto (ps), Sinhala (si), Swahili (sw), Tamil (ta), Telugu (te), Tajik (tg), Tagalog (tl), Uighur (ug), Urdu (ur), Uzbek (uz), Yue Chinese (yue) |

## Citation

If you find our paper useful for your research, please kindly cite it:

```bibtex
@misc{luoyf2025lmt,
  title={Beyond English: Toward Inclusive and Scalable Multilingual Machine Translation with LLMs},
  author={Yingfeng Luo and Ziqiang Xu and Yuxuan Ouyang and Murun Yang and Dingyang Lin and Kaiyan Chang and Tong Zheng and Bei Li and Peinan Feng and Quan Du and Tong Xiao and Jingbo Zhu},
  year={2025},
  eprint={2511.07003},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2511.07003},
}
```