BM-K committed
Commit b7a6d1e · verified · 1 parent: 2ac83de

Initial commit
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,611 @@
- ---
- license: apache-2.0
- ---
---
tags:
- sentence-transformers
- cross-encoder
- reranker
- generated_from_trainer
- dataset_size:1792739
- loss:CachedMultipleNegativesRankingLoss
base_model: tomaarsen/Qwen3-Reranker-0.6B-seq-cls
pipeline_tag: text-ranking
library_name: sentence-transformers
---

# CrossEncoder based on tomaarsen/Qwen3-Reranker-0.6B-seq-cls

This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [tomaarsen/Qwen3-Reranker-0.6B-seq-cls](https://huggingface.co/tomaarsen/Qwen3-Reranker-0.6B-seq-cls) on the json dataset using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.

## Model Details

### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [tomaarsen/Qwen3-Reranker-0.6B-seq-cls](https://huggingface.co/tomaarsen/Qwen3-Reranker-0.6B-seq-cls) <!-- at revision 6a5829f5079c66e78d911e06fe21931cc00232f7 -->
- **Maximum Sequence Length:** 40960 tokens
- **Number of Output Labels:** 1 label
- **Training Dataset:**
    - json
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder

# Download from the 🤗 Hub
model = CrossEncoder("cross_encoder_model_id")
# Get scores for pairs of texts
pairs = [
    ['<|im_start|>system\nJudge whether the Document meets the requirements based on the Query and the Instruct provided. Note that the answer can only be "yes" or "no".<|im_end|>\n<|im_start|>user\n<Instruct>: Given a web search query, retrieve relevant passages that answer the query\n<Query>: ATP란?\n', '<Document>: 아데노신 삼인산 아데노신 삼인산(, ATP)은 생명체의 주된 에너지원이다.<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n'],
    ['<|im_start|>system\nJudge whether the Document meets the requirements based on the Query and the Instruct provided. Note that the answer can only be "yes" or "no".<|im_end|>\n<|im_start|>user\n<Instruct>: Given a web search query, retrieve relevant passages that answer the query\n<Query>: 난촨구와 둥촨구는 어느 나라에 위치해 있습니까?\n', '<Document>: 난촨구(南川区)는 중국 충칭의 구이자 이전의 현이다.<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n'],
    ['<|im_start|>system\nJudge whether the Document meets the requirements based on the Query and the Instruct provided. Note that the answer can only be "yes" or "no".<|im_end|>\n<|im_start|>user\n<Instruct>: Given a web search query, retrieve relevant passages that answer the query\n<Query>: 그저우와 헤이룽장성 동닝은 어떤 나라와 접경하고 있습니까?\n', '<Document>: 허주(贺州)는 중화인민공화국 광시 좡족 자치구 북동부에 위치한 지급시이다.<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n'],
    ['<|im_start|>system\nJudge whether the Document meets the requirements based on the Query and the Instruct provided. Note that the answer can only be "yes" or "no".<|im_end|>\n<|im_start|>user\n<Instruct>: Given a web search query, retrieve relevant passages that answer the query\n<Query>: 가짜대나무(Pseudosasa)와 별꽃(Cerastium)은 모두 자생 식물과 관련이 있습니까?\n', '<Document>: 가짜사사(Pseudosasa)는 풀과에 속하는 동아시아 대나무의 속입니다.<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n'],
    ['<|im_start|>system\nJudge whether the Document meets the requirements based on the Query and the Instruct provided. Note that the answer can only be "yes" or "no".<|im_end|>\n<|im_start|>user\n<Instruct>: Given a web search query, retrieve relevant passages that answer the query\n<Query>: 샤허(Shahhe), 허베이(河北)와 조청(邹城)은 모두 현급 도시인가요?\n', '<Document>: 샤허(Shahe)는 중국 허베이성의 남부에 위치한 싱타이(Xingtai) 지구의 군급 도시입니다.<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n'],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)

# Or rank different texts based on similarity to a single text
ranks = model.rank(
    '<|im_start|>system\nJudge whether the Document meets the requirements based on the Query and the Instruct provided. Note that the answer can only be "yes" or "no".<|im_end|>\n<|im_start|>user\n<Instruct>: Given a web search query, retrieve relevant passages that answer the query\n<Query>: ATP란?\n',
    [
        '<Document>: 아데노신 삼인산 아데노신 삼인산(, ATP)은 생명체의 주된 에너지원이다.<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n',
        '<Document>: 난촨구(南川区)는 중국 충칭의 구이자 이전의 현이다.<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n',
        '<Document>: 허주(贺州)는 중화인민공화국 광시 좡족 자치구 북동부에 위치한 지급시이다.<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n',
        '<Document>: 가짜사사(Pseudosasa)는 풀과에 속하는 동아시아 대나무의 속입니다.<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n',
        '<Document>: 샤허(Shahe)는 중국 허베이성의 남부에 위치한 싱타이(Xingtai) 지구의 군급 도시입니다.<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n',
    ]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
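The usage example above passes fully formatted chat-template strings. When starting from raw query and document text, a small helper keeps the formatting consistent. Note that `format_query` and `format_document` are hypothetical helper names, not part of this model's published code; the template strings themselves are taken verbatim from the pairs shown above.

```python
# Hypothetical helpers for building (query, document) pairs in the format
# this reranker was trained on. The template is copied from the usage
# example in this model card; only the helper names are invented here.
SYSTEM = (
    '<|im_start|>system\nJudge whether the Document meets the requirements '
    'based on the Query and the Instruct provided. Note that the answer can '
    'only be "yes" or "no".<|im_end|>\n'
)
DEFAULT_INSTRUCT = 'Given a web search query, retrieve relevant passages that answer the query'


def format_query(query: str, instruct: str = DEFAULT_INSTRUCT) -> str:
    """Build the first element of a (query, document) pair."""
    return f'{SYSTEM}<|im_start|>user\n<Instruct>: {instruct}\n<Query>: {query}\n'


def format_document(document: str) -> str:
    """Build the second element, ending with the empty <think> block."""
    return (
        f'<Document>: {document}<|im_end|>\n'
        '<|im_start|>assistant\n<think>\n\n</think>\n\n'
    )


pair = [format_query('ATP란?'), format_document('아데노신 삼인산 아데노신 삼인산(, ATP)은 생명체의 주된 에너지원이다.')]
```

The resulting `pair` is byte-for-byte identical to the first entry of `pairs` in the example above, so the same `model.predict` / `model.rank` calls apply.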
<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### json

* Dataset: json
* Size: 1,792,739 training samples
* Columns: <code>query</code>, <code>positive</code>, <code>negative_1</code>, <code>negative_2</code>, and <code>negative_3</code>
* Approximate statistics based on the first 1000 samples:
  |         | query | positive | negative_1 | negative_2 | negative_3 |
  |:--------|:------|:---------|:-----------|:-----------|:-----------|
  | type    | string | string | string | string | string |
  | details | <ul><li>min: 289 characters</li><li>mean: 317.46 characters</li><li>max: 406 characters</li></ul> | <ul><li>min: 90 characters</li><li>mean: 154.19 characters</li><li>max: 184 characters</li></ul> | <ul><li>min: 72 characters</li><li>mean: 149.13 characters</li><li>max: 184 characters</li></ul> | <ul><li>min: 79 characters</li><li>mean: 148.5 characters</li><li>max: 184 characters</li></ul> | <ul><li>min: 70 characters</li><li>mean: 149.09 characters</li><li>max: 184 characters</li></ul> |
* Samples:
  | query | positive | negative_1 | negative_2 | negative_3 |
  |:------|:---------|:-----------|:-----------|:-----------|
  | <code><|im_start|>system<br>Judge whether the Document meets the requirements based on the Query and the Instruct provided. Note that the answer can only be "yes" or "no".<|im_end|><br><|im_start|>user<br><Instruct>: Given a web search query, retrieve relevant passages that answer the query<br><Query>: ATP란?<br></code> | <code><Document>: 아데노신 삼인산 아데노신 삼인산(, ATP)은 생명체의 주된 에너지원이다.<|im_end|><br><|im_start|>assistant<br><think><br><br></think><br><br></code> | <code><Document>: ATP ATP는 다음 뜻의 약자이다.<|im_end|><br><|im_start|>assistant<br><think><br><br></think><br><br></code> | <code><Document>: 해당 실제로 ADP는 ADPMg로, ATP는 ATPMg로 존재한다.<|im_end|><br><|im_start|>assistant<br><think><br><br></think><br><br></code> | <code><Document>: ATE ATE는 다음을 가리킨다.<|im_end|><br><|im_start|>assistant<br><think><br><br></think><br><br></code> |
  | <code><|im_start|>system<br>Judge whether the Document meets the requirements based on the Query and the Instruct provided. Note that the answer can only be "yes" or "no".<|im_end|><br><|im_start|>user<br><Instruct>: Given a web search query, retrieve relevant passages that answer the query<br><Query>: 난촨구와 둥촨구는 어느 나라에 위치해 있습니까?<br></code> | <code><Document>: 난촨구(南川区)는 중국 충칭의 구이자 이전의 현이다.<|im_end|><br><|im_start|>assistant<br><think><br><br></think><br><br></code> | <code><Document>: 남풍현(南丰县)은 중국 장시성(江西省) 푸저우(福州)에 위치한 군이다.<|im_end|><br><|im_start|>assistant<br><think><br><br></think><br><br></code> | <code><Document>: 도교, 광둥 도교(道滘)는 중국 남부 광둥성 동관 시의 관할 하에 있는 도시입니다.<|im_end|><br><|im_start|>assistant<br><think><br><br></think><br><br></code> | <code><Document>: 동포구 동포구는 중국 쓰촨성의 구역입니다. 이곳은 메이산시의 관할 하에 있습니다.<|im_end|><br><|im_start|>assistant<br><think><br><br></think><br><br></code> |
  | <code><|im_start|>system<br>Judge whether the Document meets the requirements based on the Query and the Instruct provided. Note that the answer can only be "yes" or "no".<|im_end|><br><|im_start|>user<br><Instruct>: Given a web search query, retrieve relevant passages that answer the query<br><Query>: 그저우와 헤이룽장성 동닝은 어떤 나라와 접경하고 있습니까?<br></code> | <code><Document>: 허주(贺州)는 중화인민공화국 광시 좡족 자치구 북동부에 위치한 지급시이다.<|im_end|><br><|im_start|>assistant<br><think><br><br></think><br><br></code> | <code><Document>: 지관구(지관구)는 중국 인민공화국 헤이룽장성 지시시의 구이자 시청 소재지입니다.<|im_end|><br><|im_start|>assistant<br><think><br><br></think><br><br></code> | <code><Document>: 헤동 가도(河东街道)는 중국 광시(广西) 리우저우(柳州) 청중 구(城中区)의 가도입니다.<|im_end|><br><|im_start|>assistant<br><think><br><br></think><br><br></code> | <code><Document>: 화닝현 (华宁县; 병음: Huáníng Xiàn)은 중국 윈난성 유시시에 위치해 있습니다.<|im_end|><br><|im_start|>assistant<br><think><br><br></think><br><br></code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
  ```json
  {
      "scale": 15,
      "num_negatives": 61,
      "activation_fn": "torch.nn.modules.activation.Sigmoid",
      "mini_batch_size": 4
  }
  ```
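As a rough illustration, the loss configuration above maps onto the sentence-transformers cross-encoder loss API as sketched below. This is an assumption-laden reconstruction, not the published training script: the model card only reports the parameters, and the `build_loss` helper name is invented here.

```python
# Sketch (not the actual training code): wiring the reported loss parameters
# into sentence-transformers' CachedMultipleNegativesRankingLoss.
# Parameters are copied from the JSON block in this model card.
loss_params = {
    "scale": 15,          # similarity scale applied before softmax
    "num_negatives": 61,  # negatives considered per query
    "mini_batch_size": 4, # chunked forward passes to bound memory use
}


def build_loss(model):
    """Construct the loss for a CrossEncoder; imports are deferred because
    instantiating the model requires downloading weights."""
    import torch
    from sentence_transformers.cross_encoder.losses import (
        CachedMultipleNegativesRankingLoss,
    )

    return CachedMultipleNegativesRankingLoss(
        model=model,
        activation_fn=torch.nn.Sigmoid(),  # "torch.nn.modules.activation.Sigmoid"
        **loss_params,
    )
```

The caching variant trades extra compute for a small memory footprint per mini-batch, which is what allows the large effective batch size (1024 per device) reported under Training Hyperparameters.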
### Training Hyperparameters
#### Non-Default Hyperparameters

- `per_device_train_batch_size`: 1024
- `per_device_eval_batch_size`: 32
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.05
- `bf16`: True
- `ddp_find_unused_parameters`: True
- `ddp_timeout`: 7200
- `batch_sampler`: no_duplicates

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 1024
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.05
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: True
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 7200
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}

</details>
### Training Logs
<details><summary>Click to expand</summary>

| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.0034 | 1 | 1.2714 |
| 0.0069 | 2 | 1.3902 |
| 0.0103 | 3 | 1.3308 |
| 0.0137 | 4 | 1.2726 |
| 0.0172 | 5 | 1.2519 |
| 0.0206 | 6 | 1.1254 |
| 0.0241 | 7 | 0.9001 |
| 0.0275 | 8 | 0.7529 |
| 0.0309 | 9 | 0.9942 |
| 0.0344 | 10 | 0.8769 |
| 0.0378 | 11 | 0.6895 |
| 0.0412 | 12 | 0.6813 |
| 0.0447 | 13 | 0.6841 |
| 0.0481 | 14 | 0.6025 |
| 0.0515 | 15 | 0.619 |
| 0.0550 | 16 | 0.6005 |
| 0.0584 | 17 | 0.5917 |
| 0.0619 | 18 | 0.5658 |
| 0.0653 | 19 | 0.5571 |
| 0.0687 | 20 | 0.5411 |
| 0.0722 | 21 | 0.5374 |
| 0.0756 | 22 | 0.5304 |
| 0.0790 | 23 | 0.5103 |
| 0.0825 | 24 | 0.5184 |
| 0.0859 | 25 | 0.5036 |
| 0.0893 | 26 | 0.5213 |
| 0.0928 | 27 | 0.5399 |
| 0.0962 | 28 | 0.5414 |
| 0.0997 | 29 | 0.5177 |
| 0.1031 | 30 | 0.5248 |
| 0.1065 | 31 | 0.5196 |
| 0.1100 | 32 | 0.499 |
| 0.1134 | 33 | 0.514 |
| 0.1168 | 34 | 0.5154 |
| 0.1203 | 35 | 0.5114 |
| 0.1237 | 36 | 0.508 |
| 0.1271 | 37 | 0.5117 |
| 0.1306 | 38 | 0.495 |
| 0.1340 | 39 | 0.5304 |
| 0.1375 | 40 | 0.4956 |
| 0.1409 | 41 | 0.5274 |
| 0.1443 | 42 | 0.5181 |
| 0.1478 | 43 | 0.5103 |
| 0.1512 | 44 | 0.5116 |
| 0.1546 | 45 | 0.499 |
| 0.1581 | 46 | 0.5072 |
| 0.1615 | 47 | 0.5044 |
| 0.1649 | 48 | 0.5071 |
| 0.1684 | 49 | 0.5129 |
| 0.1718 | 50 | 0.5095 |
| 0.1753 | 51 | 0.5174 |
| 0.1787 | 52 | 0.4748 |
| 0.1821 | 53 | 0.4507 |
| 0.1856 | 54 | 0.4927 |
| 0.1890 | 55 | 0.452 |
| 0.1924 | 56 | 0.4999 |
| 0.1959 | 57 | 0.4744 |
| 0.1993 | 58 | 0.4486 |
| 0.2027 | 59 | 0.4725 |
| 0.2062 | 60 | 0.4723 |
| 0.2096 | 61 | 0.4747 |
| 0.2131 | 62 | 0.4317 |
| 0.2165 | 63 | 0.4668 |
| 0.2199 | 64 | 0.453 |
| 0.2234 | 65 | 0.4457 |
| 0.2268 | 66 | 0.4179 |
| 0.2302 | 67 | 0.4124 |
| 0.2337 | 68 | 0.4454 |
| 0.2371 | 69 | 0.4222 |
| 0.2405 | 70 | 0.4151 |
| 0.2440 | 71 | 0.4172 |
| 0.2474 | 72 | 0.422 |
| 0.2509 | 73 | 0.4088 |
| 0.2543 | 74 | 0.4107 |
| 0.2577 | 75 | 0.3977 |
| 0.2612 | 76 | 0.4141 |
| 0.2646 | 77 | 0.3991 |
| 0.2680 | 78 | 0.3955 |
| 0.2715 | 79 | 0.3864 |
| 0.2749 | 80 | 0.4147 |
| 0.2784 | 81 | 0.4084 |
| 0.2818 | 82 | 0.4139 |
| 0.2852 | 83 | 0.3999 |
| 0.2887 | 84 | 0.4305 |
| 0.2921 | 85 | 0.4188 |
| 0.2955 | 86 | 0.4171 |
| 0.2990 | 87 | 0.407 |
| 0.3024 | 88 | 0.3871 |
| 0.3058 | 89 | 0.389 |
| 0.3093 | 90 | 0.3813 |
| 0.3127 | 91 | 0.3814 |
| 0.3162 | 92 | 0.3732 |
| 0.3196 | 93 | 0.3899 |
| 0.3230 | 94 | 0.3655 |
| 0.3265 | 95 | 0.3638 |
| 0.3299 | 96 | 0.3784 |
| 0.3333 | 97 | 0.3729 |
| 0.3368 | 98 | 0.3665 |
| 0.3402 | 99 | 0.3579 |
| 0.3436 | 100 | 0.3414 |
| 0.3471 | 101 | 0.3304 |
| 0.3505 | 102 | 0.347 |
| 0.3540 | 103 | 0.3076 |
| 0.3574 | 104 | 0.3111 |
| 0.3608 | 105 | 0.3121 |
| 0.3643 | 106 | 0.3272 |
| 0.3677 | 107 | 0.3108 |
| 0.3711 | 108 | 0.3092 |
| 0.3746 | 109 | 0.2951 |
| 0.3780 | 110 | 0.3195 |
| 0.3814 | 111 | 0.2915 |
| 0.3849 | 112 | 0.2855 |
| 0.3883 | 113 | 0.2904 |
| 0.3918 | 114 | 0.2873 |
| 0.3952 | 115 | 0.273 |
| 0.3986 | 116 | 0.2779 |
| 0.4021 | 117 | 0.2939 |
| 0.4055 | 118 | 0.276 |
| 0.4089 | 119 | 0.2535 |
| 0.4124 | 120 | 0.2774 |
| 0.4158 | 121 | 0.2597 |
| 0.4192 | 122 | 0.2541 |
| 0.4227 | 123 | 0.2587 |
| 0.4261 | 124 | 0.27 |
| 0.4296 | 125 | 0.2724 |
| 0.4330 | 126 | 0.2446 |
| 0.4364 | 127 | 0.2747 |
| 0.4399 | 128 | 0.268 |
| 0.4433 | 129 | 0.2585 |
| 0.4467 | 130 | 0.2652 |
| 0.4502 | 131 | 0.2685 |
| 0.4536 | 132 | 0.2565 |
| 0.4570 | 133 | 0.2503 |
| 0.4605 | 134 | 0.2634 |
| 0.4639 | 135 | 0.2501 |
| 0.4674 | 136 | 0.2479 |
| 0.4708 | 137 | 0.2628 |
| 0.4742 | 138 | 0.2505 |
| 0.4777 | 139 | 0.2468 |
| 0.4811 | 140 | 0.2365 |
| 0.4845 | 141 | 0.2496 |
| 0.4880 | 142 | 0.248 |
| 0.4914 | 143 | 0.2604 |
| 0.4948 | 144 | 0.2477 |
| 0.4983 | 145 | 0.259 |
| 0.5017 | 146 | 0.2556 |
| 0.5052 | 147 | 0.2618 |
| 0.5086 | 148 | 0.2583 |
| 0.5120 | 149 | 0.2588 |
| 0.5155 | 150 | 0.2468 |
| 0.5189 | 151 | 0.2437 |
| 0.5223 | 152 | 0.2595 |
| 0.5258 | 153 | 0.2647 |
| 0.5292 | 154 | 0.2699 |
| 0.5326 | 155 | 0.2529 |
| 0.5361 | 156 | 0.2339 |
| 0.5395 | 157 | 0.2557 |
| 0.5430 | 158 | 0.2402 |
| 0.5464 | 159 | 0.2583 |
| 0.5498 | 160 | 0.2688 |
| 0.5533 | 161 | 0.2567 |
| 0.5567 | 162 | 0.2702 |
| 0.5601 | 163 | 0.2669 |
| 0.5636 | 164 | 0.2699 |
| 0.5670 | 165 | 0.2561 |
| 0.5704 | 166 | 0.2406 |
| 0.5739 | 167 | 0.2438 |
| 0.5773 | 168 | 0.2523 |
| 0.5808 | 169 | 0.2535 |
| 0.5842 | 170 | 0.2533 |
| 0.5876 | 171 | 0.2643 |
| 0.5911 | 172 | 0.2684 |
| 0.5945 | 173 | 0.2503 |
| 0.5979 | 174 | 0.2735 |
| 0.6014 | 175 | 0.2612 |
| 0.6048 | 176 | 0.2721 |
| 0.6082 | 177 | 0.2533 |
| 0.6117 | 178 | 0.2704 |
| 0.6151 | 179 | 0.2609 |
| 0.6186 | 180 | 0.2605 |
| 0.6220 | 181 | 0.2664 |
| 0.6254 | 182 | 0.2516 |
| 0.6289 | 183 | 0.2513 |
| 0.6323 | 184 | 0.2439 |
| 0.6357 | 185 | 0.258 |
| 0.6392 | 186 | 0.2534 |
| 0.6426 | 187 | 0.2638 |
| 0.6460 | 188 | 0.2535 |
| 0.6495 | 189 | 0.2481 |
| 0.6529 | 190 | 0.264 |
| 0.6564 | 191 | 0.2418 |
| 0.6598 | 192 | 0.2326 |
| 0.6632 | 193 | 0.2476 |
| 0.6667 | 194 | 0.2271 |
| 0.6701 | 195 | 0.229 |
| 0.6735 | 196 | 0.2303 |
| 0.6770 | 197 | 0.2272 |
| 0.6804 | 198 | 0.2309 |
| 0.6838 | 199 | 0.2159 |
| 0.6873 | 200 | 0.2178 |
| 0.6907 | 201 | 0.208 |
| 0.6942 | 202 | 0.2257 |
| 0.6976 | 203 | 0.2032 |
| 0.7010 | 204 | 0.2047 |
| 0.7045 | 205 | 0.2223 |
| 0.7079 | 206 | 0.1964 |
| 0.7113 | 207 | 0.1846 |
| 0.7148 | 208 | 0.1899 |
| 0.7182 | 209 | 0.1986 |
| 0.7216 | 210 | 0.1898 |
| 0.7251 | 211 | 0.1999 |
| 0.7285 | 212 | 0.1754 |
| 0.7320 | 213 | 0.1912 |
| 0.7354 | 214 | 0.1702 |
| 0.7388 | 215 | 0.17 |
| 0.7423 | 216 | 0.1768 |
| 0.7457 | 217 | 0.1647 |
| 0.7491 | 218 | 0.1711 |
| 0.7526 | 219 | 0.1507 |
| 0.7560 | 220 | 0.1657 |
| 0.7595 | 221 | 0.1498 |
| 0.7629 | 222 | 0.1557 |
| 0.7663 | 223 | 0.1651 |
| 0.7698 | 224 | 0.1446 |
| 0.7732 | 225 | 0.1519 |
| 0.7766 | 226 | 0.1453 |
| 0.7801 | 227 | 0.1561 |
| 0.7835 | 228 | 0.1557 |
| 0.7869 | 229 | 0.1493 |
| 0.7904 | 230 | 0.1476 |
| 0.7938 | 231 | 0.1453 |
| 0.7973 | 232 | 0.1312 |
| 0.8007 | 233 | 0.1531 |
| 0.8041 | 234 | 0.1498 |
| 0.8076 | 235 | 0.134 |
| 0.8110 | 236 | 0.1361 |
| 0.8144 | 237 | 0.1461 |
| 0.8179 | 238 | 0.148 |
| 0.8213 | 239 | 0.1465 |
| 0.8247 | 240 | 0.1452 |
| 0.8282 | 241 | 0.1399 |
| 0.8316 | 242 | 0.1291 |
| 0.8351 | 243 | 0.1354 |
| 0.8385 | 244 | 0.1719 |
| 0.8419 | 245 | 0.1555 |
| 0.8454 | 246 | 0.1472 |
| 0.8488 | 247 | 0.1516 |
| 0.8522 | 248 | 0.1579 |
| 0.8557 | 249 | 0.161 |
| 0.8591 | 250 | 0.1661 |
| 0.8625 | 251 | 0.155 |
| 0.8660 | 252 | 0.1706 |
| 0.8694 | 253 | 0.1527 |
| 0.8729 | 254 | 0.1695 |
| 0.8763 | 255 | 0.1904 |
| 0.8797 | 256 | 0.186 |
| 0.8832 | 257 | 0.1723 |
| 0.8866 | 258 | 0.1881 |
| 0.8900 | 259 | 0.1915 |
| 0.8935 | 260 | 0.1969 |
| 0.8969 | 261 | 0.1967 |
| 0.9003 | 262 | 0.2038 |
| 0.9038 | 263 | 0.1917 |
| 0.9072 | 264 | 0.19 |
| 0.9107 | 265 | 0.2161 |
| 0.9141 | 266 | 0.222 |
| 0.9175 | 267 | 0.2361 |
| 0.9210 | 268 | 0.2538 |
| 0.9244 | 269 | 0.2408 |
| 0.9278 | 270 | 0.2372 |
| 0.9313 | 271 | 0.2292 |
| 0.9347 | 272 | 0.238 |
| 0.9381 | 273 | 0.2243 |
| 0.9416 | 274 | 0.2443 |
| 0.9450 | 275 | 0.2435 |
| 0.9485 | 276 | 0.2476 |
| 0.9519 | 277 | 0.2259 |
| 0.9553 | 278 | 0.2327 |
| 0.9588 | 279 | 0.2345 |
| 0.9622 | 280 | 0.2413 |

</details>

### Framework Versions
- Python: 3.11.12
- Sentence Transformers: 5.0.0
- Transformers: 4.53.1
- PyTorch: 2.8.0+cu128
- Accelerate: 1.5.2
- Datasets: 2.21.0
- Tokenizers: 0.21.1

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
added_tokens.json ADDED
@@ -0,0 +1,28 @@
{
  "</think>": 151668,
  "</tool_call>": 151658,
  "</tool_response>": 151666,
  "<think>": 151667,
  "<tool_call>": 151657,
  "<tool_response>": 151665,
  "<|box_end|>": 151649,
  "<|box_start|>": 151648,
  "<|endoftext|>": 151643,
  "<|file_sep|>": 151664,
  "<|fim_middle|>": 151660,
  "<|fim_pad|>": 151662,
  "<|fim_prefix|>": 151659,
  "<|fim_suffix|>": 151661,
  "<|im_end|>": 151645,
  "<|im_start|>": 151644,
  "<|image_pad|>": 151655,
  "<|object_ref_end|>": 151647,
  "<|object_ref_start|>": 151646,
  "<|quad_end|>": 151651,
  "<|quad_start|>": 151650,
  "<|repo_name|>": 151663,
  "<|video_pad|>": 151656,
  "<|vision_end|>": 151653,
  "<|vision_pad|>": 151654,
  "<|vision_start|>": 151652
}
config.json ADDED
@@ -0,0 +1,71 @@
{
  "architectures": [
    "Qwen3ForSequenceClassification"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 151643,
  "eos_token_id": 151645,
  "head_dim": 128,
  "hidden_act": "silu",
  "hidden_size": 1024,
  "id2label": {
    "0": "LABEL_0"
  },
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "label2id": {
    "LABEL_0": 0
  },
  "layer_types": [
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention"
  ],
  "max_position_embeddings": 40960,
  "max_window_layers": 28,
  "model_type": "qwen3",
  "num_attention_heads": 16,
  "num_hidden_layers": 28,
  "num_key_value_heads": 8,
  "pad_token_id": 151643,
  "rms_norm_eps": 1e-06,
  "rope_scaling": null,
  "rope_theta": 1000000,
  "sentence_transformers": {
    "activation_fn": "torch.nn.modules.activation.Sigmoid",
    "version": "5.0.0"
  },
  "sliding_window": null,
  "tie_word_embeddings": true,
  "torch_dtype": "float32",
  "transformers_version": "4.53.1",
  "use_cache": true,
  "use_sliding_window": false,
  "vocab_size": 151669
}
merges.txt ADDED
The diff for this file is too large to render. See raw diff
model.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8768cb13c91eca7dcf4b21741856c9a012b382634206149299a2625398beea76
+ size 2383145520
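The weights file itself is stored as a Git LFS pointer: three `key value` lines giving the spec version, a `sha256` object id, and the blob size in bytes. A minimal illustrative parser for such a pointer (`parse_lfs_pointer` is a hypothetical helper, not part of any git-lfs API):

```python
# The pointer text as committed above.
POINTER = """version https://git-lfs.github.com/spec/v1
oid sha256:8768cb13c91eca7dcf4b21741856c9a012b382634206149299a2625398beea76
size 2383145520"""

def parse_lfs_pointer(text):
    """Split each 'key value' line into a dict entry and decode the oid."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    return {
        "version": fields["version"],
        "hash_algorithm": algo,
        "digest": digest,
        "size_bytes": int(fields["size"]),
    }

ptr = parse_lfs_pointer(POINTER)
print(ptr["hash_algorithm"])    # sha256
print(ptr["size_bytes"] / 1e9)  # 2.38314552  (~2.4 GB of float32 weights)
```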
special_tokens_map.json ADDED
@@ -0,0 +1,31 @@
+ {
+   "additional_special_tokens": [
+     "<|im_start|>",
+     "<|im_end|>",
+     "<|object_ref_start|>",
+     "<|object_ref_end|>",
+     "<|box_start|>",
+     "<|box_end|>",
+     "<|quad_start|>",
+     "<|quad_end|>",
+     "<|vision_start|>",
+     "<|vision_end|>",
+     "<|vision_pad|>",
+     "<|image_pad|>",
+     "<|video_pad|>"
+   ],
+   "eos_token": {
+     "content": "<|im_end|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0bc04542e8e8fa70d398aea108486408a0320c9d5b460b448358363cd06382ac
+ size 11422922
tokenizer_config.json ADDED
@@ -0,0 +1,239 @@
+ {
+   "add_bos_token": false,
+   "add_prefix_space": false,
+   "added_tokens_decoder": {
+     "151643": {
+       "content": "<|endoftext|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151644": {
+       "content": "<|im_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151645": {
+       "content": "<|im_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151646": {
+       "content": "<|object_ref_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151647": {
+       "content": "<|object_ref_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151648": {
+       "content": "<|box_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151649": {
+       "content": "<|box_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151650": {
+       "content": "<|quad_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151651": {
+       "content": "<|quad_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151652": {
+       "content": "<|vision_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151653": {
+       "content": "<|vision_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151654": {
+       "content": "<|vision_pad|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151655": {
+       "content": "<|image_pad|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151656": {
+       "content": "<|video_pad|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151657": {
+       "content": "<tool_call>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151658": {
+       "content": "</tool_call>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151659": {
+       "content": "<|fim_prefix|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151660": {
+       "content": "<|fim_middle|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151661": {
+       "content": "<|fim_suffix|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151662": {
+       "content": "<|fim_pad|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151663": {
+       "content": "<|repo_name|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151664": {
+       "content": "<|file_sep|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151665": {
+       "content": "<tool_response>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151666": {
+       "content": "</tool_response>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151667": {
+       "content": "<think>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151668": {
+       "content": "</think>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     }
+   },
+   "additional_special_tokens": [
+     "<|im_start|>",
+     "<|im_end|>",
+     "<|object_ref_start|>",
+     "<|object_ref_end|>",
+     "<|box_start|>",
+     "<|box_end|>",
+     "<|quad_start|>",
+     "<|quad_end|>",
+     "<|vision_start|>",
+     "<|vision_end|>",
+     "<|vision_pad|>",
+     "<|image_pad|>",
+     "<|video_pad|>"
+   ],
+   "bos_token": null,
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "<|im_end|>",
+   "errors": "replace",
+   "extra_special_tokens": {},
+   "model_max_length": 40960,
+   "pad_token": "<|endoftext|>",
+   "split_special_tokens": false,
+   "tokenizer_class": "Qwen2Tokenizer",
+   "unk_token": null
+ }
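At inference time, the ids these files declare (eos `<|im_end|>` = 151645 and pad `<|endoftext|>` = 151643, per config.json) govern how query–document pairs are batched for scoring. A minimal, tokenizer-free sketch of right-padding a batch with those ids (`pad_batch` is illustrative, not a sentence-transformers or transformers API):

```python
# Ids copied from config.json / tokenizer_config.json above.
EOS_ID = 151645  # <|im_end|>
PAD_ID = 151643  # <|endoftext|>

def pad_batch(sequences):
    """Append EOS to each id sequence, then right-pad to the longest length,
    returning input_ids plus a matching attention mask (1 = real token)."""
    with_eos = [seq + [EOS_ID] for seq in sequences]
    max_len = max(len(seq) for seq in with_eos)
    input_ids = [seq + [PAD_ID] * (max_len - len(seq)) for seq in with_eos]
    attention_mask = [
        [1] * len(seq) + [0] * (max_len - len(seq)) for seq in with_eos
    ]
    return input_ids, attention_mask

ids, mask = pad_batch([[10, 11, 12], [20]])
print(ids)   # [[10, 11, 12, 151645], [20, 151645, 151643, 151643]]
print(mask)  # [[1, 1, 1, 1], [1, 1, 0, 0]]
```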
vocab.json ADDED
The diff for this file is too large to render. See raw diff