yjoonjang committed (verified) · Commit 8aee430 · Parent(s): 12250d7

Add new CrossEncoder model

Files changed (7):
1. README.md +503 -0
2. config.json +34 -0
3. model.safetensors +3 -0
4. special_tokens_map.json +37 -0
5. tokenizer.json +0 -0
6. tokenizer_config.json +65 -0
7. vocab.txt +0 -0
README.md ADDED
---
language:
- en
tags:
- sentence-transformers
- cross-encoder
- generated_from_trainer
- dataset_size:78704
- loss:PListMLELoss
base_model: microsoft/MiniLM-L12-H384-uncased
datasets:
- microsoft/ms_marco
pipeline_tag: text-ranking
library_name: sentence-transformers
metrics:
- map
- mrr@10
- ndcg@10
model-index:
- name: CrossEncoder based on microsoft/MiniLM-L12-H384-uncased
  results:
  - task:
      type: cross-encoder-reranking
      name: Cross Encoder Reranking
    dataset:
      name: NanoMSMARCO R100
      type: NanoMSMARCO_R100
    metrics:
    - type: map
      value: 0.5062
      name: Map
    - type: mrr@10
      value: 0.4946
      name: Mrr@10
    - type: ndcg@10
      value: 0.5634
      name: Ndcg@10
  - task:
      type: cross-encoder-reranking
      name: Cross Encoder Reranking
    dataset:
      name: NanoNFCorpus R100
      type: NanoNFCorpus_R100
    metrics:
    - type: map
      value: 0.3309
      name: Map
    - type: mrr@10
      value: 0.5983
      name: Mrr@10
    - type: ndcg@10
      value: 0.3493
      name: Ndcg@10
  - task:
      type: cross-encoder-reranking
      name: Cross Encoder Reranking
    dataset:
      name: NanoNQ R100
      type: NanoNQ_R100
    metrics:
    - type: map
      value: 0.5849
      name: Map
    - type: mrr@10
      value: 0.5876
      name: Mrr@10
    - type: ndcg@10
      value: 0.6396
      name: Ndcg@10
  - task:
      type: cross-encoder-nano-beir
      name: Cross Encoder Nano BEIR
    dataset:
      name: NanoBEIR R100 mean
      type: NanoBEIR_R100_mean
    metrics:
    - type: map
      value: 0.474
      name: Map
    - type: mrr@10
      value: 0.5602
      name: Mrr@10
    - type: ndcg@10
      value: 0.5174
      name: Ndcg@10
---

# CrossEncoder based on microsoft/MiniLM-L12-H384-uncased

This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on the [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco) dataset using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.

## Model Details

### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) <!-- at revision 44acabbec0ef496f6dbc93adadea57f376b7c0ec -->
- **Maximum Sequence Length:** 512 tokens
- **Number of Output Labels:** 1 label
- **Training Dataset:**
  - [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco)
- **Language:** en
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder

# Download from the 🤗 Hub
model = CrossEncoder("yjoonjang/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-plistmle-normalize-minmax")
# Get scores for pairs of texts
pairs = [
    ['How many calories in an egg', 'There are on average between 55 and 80 calories in an egg depending on its size.'],
    ['How many calories in an egg', 'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.'],
    ['How many calories in an egg', 'Most of the calories in an egg come from the yellow yolk in the center.'],
]
scores = model.predict(pairs)
print(scores.shape)
# (3,)

# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'How many calories in an egg',
    [
        'There are on average between 55 and 80 calories in an egg depending on its size.',
        'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.',
        'Most of the calories in an egg come from the yellow yolk in the center.',
    ]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
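Conceptually, `rank` scores every (query, document) pair with `predict` and then reorders the candidates by descending score. A minimal pure-Python sketch of that reordering step, using hypothetical scores rather than real model output (`rank_by_score` is an illustrative helper, not part of the library):

```python
def rank_by_score(scores):
    """Sort candidate indices by descending score, mirroring the shape
    of the CrossEncoder.rank output shown above."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return [{"corpus_id": i, "score": scores[i]} for i in order]

# Hypothetical scores for the three candidate passages above
print(rank_by_score([0.91, 0.35, 0.62]))
# [{'corpus_id': 0, 'score': 0.91}, {'corpus_id': 2, 'score': 0.62}, {'corpus_id': 1, 'score': 0.35}]
```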

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Cross Encoder Reranking

* Datasets: `NanoMSMARCO_R100`, `NanoNFCorpus_R100` and `NanoNQ_R100`
* Evaluated with [<code>CrossEncoderRerankingEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderRerankingEvaluator) with these parameters:
  ```json
  {
      "at_k": 10,
      "always_rerank_positives": true
  }
  ```

| Metric      | NanoMSMARCO_R100     | NanoNFCorpus_R100    | NanoNQ_R100          |
|:------------|:---------------------|:---------------------|:---------------------|
| map         | 0.5062 (+0.0166)     | 0.3309 (+0.0699)     | 0.5849 (+0.1653)     |
| mrr@10      | 0.4946 (+0.0171)     | 0.5983 (+0.0985)     | 0.5876 (+0.1609)     |
| **ndcg@10** | **0.5634 (+0.0229)** | **0.3493 (+0.0243)** | **0.6396 (+0.1390)** |
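ndcg@10, the headline metric in the table above, rewards relevant documents placed near the top of the reranked list, discounting each position by a log factor. A self-contained sketch of the computation for a single query with binary relevance labels (not the evaluator's actual implementation, just the standard formula):

```python
import math

def ndcg_at_k(relevances, k=10):
    """relevances: relevance label of each document in model-ranked order (best first)."""
    dcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))
    ideal = sorted(relevances, reverse=True)
    idcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ideal[:k]))
    return dcg / idcg if idcg > 0 else 0.0

# A single relevant document ranked 3rd among 10 candidates:
print(round(ndcg_at_k([0, 0, 1, 0, 0, 0, 0, 0, 0, 0]), 4))
# 0.5
```

Dataset-level scores like those above are the mean of this per-query value over all evaluation queries.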

#### Cross Encoder Nano BEIR

* Dataset: `NanoBEIR_R100_mean`
* Evaluated with [<code>CrossEncoderNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderNanoBEIREvaluator) with these parameters:
  ```json
  {
      "dataset_names": [
          "msmarco",
          "nfcorpus",
          "nq"
      ],
      "rerank_k": 100,
      "at_k": 10,
      "always_rerank_positives": true
  }
  ```

| Metric      | Value                |
|:------------|:---------------------|
| map         | 0.4740 (+0.0839)     |
| mrr@10      | 0.5602 (+0.0921)     |
| **ndcg@10** | **0.5174 (+0.0621)** |
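The `NanoBEIR_R100_mean` values are the unweighted means of the three per-dataset scores reported earlier, which can be checked directly:

```python
# Per-dataset metrics from the Cross Encoder Reranking section
map_ = [0.5062, 0.3309, 0.5849]  # NanoMSMARCO, NanoNFCorpus, NanoNQ
mrr  = [0.4946, 0.5983, 0.5876]
ndcg = [0.5634, 0.3493, 0.6396]

mean = lambda xs: round(sum(xs) / len(xs), 4)
print(mean(map_), mean(mrr), mean(ndcg))
# 0.474 0.5602 0.5174
```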

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### ms_marco

* Dataset: [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco) at [a47ee7a](https://huggingface.co/datasets/microsoft/ms_marco/tree/a47ee7aae8d7d466ba15f9f0bfac3b3681087b3a)
* Size: 78,704 training samples
* Columns: <code>query</code>, <code>docs</code>, and <code>labels</code>
* Approximate statistics based on the first 1000 samples:
  |         | query                                                                                          | docs                                                                                    | labels                                                                                  |
  |:--------|:-----------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------|
  | type    | string                                                                                          | list                                                                                     | list                                                                                     |
  | details | <ul><li>min: 11 characters</li><li>mean: 33.66 characters</li><li>max: 97 characters</li></ul>  | <ul><li>min: 2 elements</li><li>mean: 6.00 elements</li><li>max: 10 elements</li></ul>   | <ul><li>min: 2 elements</li><li>mean: 6.00 elements</li><li>max: 10 elements</li></ul>   |
* Samples:
  | query | docs | labels |
  |:------|:-----|:-------|
  | <code>what year did the us acquire land from the miami indians</code> | <code>['By 1846, most of the Miami had been removed to Indian Territory (now Oklahoma). The Miami Tribe of Oklahoma is the only federally recognized tribe of Miami Indians in the United States. The Miami Nation of Indiana is an unrecognized tribe. The Miami of Kekionga remained allies of the British, but were not openly hostile to the United States (US) (except when attacked by Augustin de La Balme in 1780). The U.S. government did not trust their neutrality, however.', 'In June 1816, a constitutional convention was held and a state government was formed. The territory was dissolved on December 11, 1816, by an act of Congress granting statehood to Indiana. In February 1815, the United States House of Representatives began debate on granting Indiana Territory statehood. In early 1816, the Territory approved a census and Pennington was named to be the census enumerator.', 'Stuart Banner, a law professor, does not deny that between the early 17th century and the end of the 19th, nearly the enti...</code> | <code>[1, 1, 0, 0, 0, ...]</code> |
  | <code>what is a business director</code> | <code>['Intel Board of Directors. A director is a person from a group of managers who leads or supervises a particular area of a company, program, or project. Companies that use this term often have many directors spread throughout different business functions or roles (e.g. director of human resources). The director usually reports directly to a vice president or to the CEO directly in order to let them know the progress of the organization. An executive director within a company or an organization is usually from the board of directors and oversees a specific department within the organization such as Marketing, Finance, Production and IT.', 'company director. An appointed or elected member of the board of directors of a company who, with other directors, has the responsibility for determining and implementing the company’s policy.', 'Microsoft Outlook 2013 with Business Contact Manager is a great customer relationship management (CRM) tool for small business owners because they can use it...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
  | <code>why is the thyroid gland important</code> | <code>['The thyroid is a small, butterfly-shaped gland located at the base of your neck. It is one of many glands in the endocrine system in the body that regulate the function, growth and development of virtually every cell, tissue and organ in the body. Endocrine glands secrete hormones directly into the bloodstream.', 'Thyroid dysfunction is when the thyroid gland, a small, butterfly-shaped gland located at the base of your neck, produces too much thyroid hormone. This is when you body’s endocrine system speed up, which is referred to as hyperthyroidism.', 'Thyroxine is the most important hormone produced by the thyroid gland. When the gland produces little or too much of this hormone, the body system faces major challenges. For example, if the thyroid is under-active, this could result in Goitre, which is a swelling at the neck.', 'The anterior pituitary makes several important hormones-growth hormone, puberty hormones (or gonadotrophins), thyroid stimulating hormone (TSH, which stimulat...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
* Loss: [<code>PListMLELoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#plistmleloss) with these parameters:
  ```json
  {
      "lambda_weight": "sentence_transformers.cross_encoder.losses.PListMLELoss.PListMLELambdaWeight",
      "activation_fct": "torch.nn.modules.linear.Identity",
      "mini_batch_size": null,
      "respect_input_order": true
  }
  ```
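PListMLE (position-aware ListMLE, Lan et al. 2014) maximizes the likelihood of the ground-truth document ordering, with a per-position weight so that mistakes near the top of the ranking cost more. A self-contained sketch of the loss on raw scores, assuming a simple illustrative position weight w_k = 1/log2(k+1); the actual `PListMLELambdaWeight` used by sentence-transformers may weight positions differently:

```python
import math

def plistmle_loss(scores, weights=None):
    """Position-aware ListMLE. `scores` are model scores listed in
    ground-truth rank order (most relevant document first). Each term is a
    weighted log-softmax of the current score over the remaining suffix."""
    n = len(scores)
    if weights is None:
        # Assumed position weighting for illustration: 1/log2(k+1), 1-indexed
        weights = [1.0 / math.log2(k + 2) for k in range(n)]
    loss = 0.0
    for k in range(n):
        denom = sum(math.exp(s) for s in scores[k:])
        loss -= weights[k] * (scores[k] - math.log(denom))
    return loss

# Scoring the ground-truth order correctly (higher score first) yields a
# lower loss than producing the reversed order with the same score values:
assert plistmle_loss([3.0, 1.0, -1.0]) < plistmle_loss([-1.0, 1.0, 3.0])
```

This is why `respect_input_order: true` matters above: the loss assumes the documents arrive already sorted by their labels.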

### Evaluation Dataset

#### ms_marco

* Dataset: [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco) at [a47ee7a](https://huggingface.co/datasets/microsoft/ms_marco/tree/a47ee7aae8d7d466ba15f9f0bfac3b3681087b3a)
* Size: 1,000 evaluation samples
* Columns: <code>query</code>, <code>docs</code>, and <code>labels</code>
* Approximate statistics based on the first 1000 samples:
  |         | query                                                                                          | docs                                                                                    | labels                                                                                  |
  |:--------|:-----------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------|
  | type    | string                                                                                          | list                                                                                     | list                                                                                     |
  | details | <ul><li>min: 11 characters</li><li>mean: 33.08 characters</li><li>max: 94 characters</li></ul>  | <ul><li>min: 1 elements</li><li>mean: 5.50 elements</li><li>max: 10 elements</li></ul>   | <ul><li>min: 1 elements</li><li>mean: 5.50 elements</li><li>max: 10 elements</li></ul>   |
* Samples:
  | query | docs | labels |
  |:------|:-----|:-------|
  | <code>how many assistants does michelle have</code> | <code>['Never in the history of the White House has a First Lady spent so much on so many personal assistants, all paid from taxpayer dollars. Hilary Clinton had three (3)! Michelle has 26, from makeup artist Ingrid Miles and hairstylist Johnny Wright to her “chief of staff” Susan Sher whose salary is $172,200.00!', 'Allegations that Michelle Obama has an excessively large staff compared to other first ladies is nothing new. In 2009, FactCheck.org and Snopes.com debunked the claim circulated in a chain e-mail that Michelle Obama had an unprecedented number of staffers, with 22.', "Of course since Michelle Obama's Twitter account tweeted today to announce that it couldn't Tweet, the situation probably won't become urgent until Thursday. But we finally have an answer to the question: How many assistants does it take to Tweet a link for Michelle Obama. The answer: 16. You filthy Republicans.", "Myra Gutin, an expert on first ladies and politics at Rider University in New Jersey, said that as of...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
  | <code>how long and at what temperature to bake salmon</code> | <code>["Oven Temperature. Another thing that determines how long the salmon is baked is oven temperature. Typically, recipes for baking salmon call for an oven temperature of 350 to 450 degrees. The salmon should always be put into a pre-heated oven. Cooking in an oven that hasn't been pre-heated can cause drying of the fish. 1 A two-inch thick fillet will bake for 20 minutes. 2 A 1-1/2 filet will take 15 minutes and so on. 3 Check the salmon frequently. 4 Start checking at about 10 minutes, and keep checking until the flesh of the fish is just barely an opaque pink.", 'Preheat the oven to 450 degrees F. Season salmon with salt and pepper. Place salmon, skin side down, on a non-stick baking sheet or in a non-stick pan with an oven-proof handle. Bake until salmon is cooked through, about 12 to 15 minutes. Serve with the Toasted Almond Parsley Salad and squash, if desired. Mince the shallot and add to a small bowl.', 'Report Abuse. I preheat the oven to 500 degrees, really hot. Put the salm...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
  | <code>what is gene deletion</code> | <code>["Deletion on a chromosome. In genetics, a deletion (also called gene deletion, deficiency, or deletion mutation) (sign: δ) is a mutation (a genetic aberration) in which a part of a chromosome or a sequence of DNA is lost during DNA replication. Any number of nucleotides can be deleted, from a single base to an entire piece of chromosome. The smallest single base deletion mutations are believed occur by a single base flipping in the template DNA, followed by template DNA strand slippage, within the DNA polymerase active site. 1 ' Terminal Deletion' — a deletion that occurs towards the end of a chromosome. 2 Intercalary Deletion / Interstitial Deletion — a deletion that occurs from the interior of a chromosome. 3 Microdeletion — a relatively small amount of deletion (up to 5Mb that could include a dozen genes).", '22q11.2 deletion syndrome (which is also known by several other names, listed below) is a disorder caused by the deletion of a small piece of chromosome 22. The deletion occ...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
* Loss: [<code>PListMLELoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#plistmleloss) with these parameters:
  ```json
  {
      "lambda_weight": "sentence_transformers.cross_encoder.losses.PListMLELoss.PListMLELambdaWeight",
      "activation_fct": "torch.nn.modules.linear.Identity",
      "mini_batch_size": null,
      "respect_input_order": true
  }
  ```

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `seed`: 12
- `bf16`: True
- `load_best_model_at_end`: True

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 12
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional

</details>

### Training Logs
| Epoch      | Step     | Training Loss | Validation Loss | NanoMSMARCO_R100_ndcg@10 | NanoNFCorpus_R100_ndcg@10 | NanoNQ_R100_ndcg@10  | NanoBEIR_R100_mean_ndcg@10 |
|:----------:|:--------:|:-------------:|:---------------:|:------------------------:|:-------------------------:|:--------------------:|:--------------------------:|
| -1         | -1       | -             | -               | 0.0224 (-0.5181)         | 0.2459 (-0.0791)          | 0.0785 (-0.4221)     | 0.1156 (-0.3398)           |
| 0.0002     | 1        | 2.2034        | -               | -                        | -                         | -                    | -                          |
| 0.0508     | 250      | 2.1047        | -               | -                        | -                         | -                    | -                          |
| 0.1016     | 500      | 1.9773        | 1.9326          | 0.1454 (-0.3951)         | 0.2533 (-0.0717)          | 0.2214 (-0.2793)     | 0.2067 (-0.2487)           |
| 0.1525     | 750      | 1.911         | -               | -                        | -                         | -                    | -                          |
| 0.2033     | 1000     | 1.8706        | 1.8490          | 0.4764 (-0.0640)         | 0.3298 (+0.0048)          | 0.5301 (+0.0295)     | 0.4455 (-0.0099)           |
| 0.2541     | 1250     | 1.8645        | -               | -                        | -                         | -                    | -                          |
| 0.3049     | 1500     | 1.857         | 1.8414          | 0.5404 (-0.0001)         | 0.3443 (+0.0192)          | 0.6513 (+0.1506)     | 0.5120 (+0.0566)           |
| 0.3558     | 1750     | 1.8524        | -               | -                        | -                         | -                    | -                          |
| 0.4066     | 2000     | 1.841         | 1.8224          | 0.5780 (+0.0375)         | 0.3498 (+0.0247)          | 0.6080 (+0.1074)     | 0.5119 (+0.0565)           |
| 0.4574     | 2250     | 1.8239        | -               | -                        | -                         | -                    | -                          |
| 0.5082     | 2500     | 1.8221        | 1.8216          | 0.5538 (+0.0134)         | 0.3481 (+0.0230)          | 0.6245 (+0.1238)     | 0.5088 (+0.0534)           |
| 0.5591     | 2750     | 1.8238        | -               | -                        | -                         | -                    | -                          |
| 0.6099     | 3000     | 1.8377        | 1.8066          | 0.5280 (-0.0124)         | 0.3363 (+0.0113)          | 0.5669 (+0.0663)     | 0.4771 (+0.0217)           |
| 0.6607     | 3250     | 1.8357        | -               | -                        | -                         | -                    | -                          |
| 0.7115     | 3500     | 1.8221        | 1.8041          | 0.5424 (+0.0020)         | 0.3481 (+0.0230)          | 0.5980 (+0.0973)     | 0.4962 (+0.0408)           |
| 0.7624     | 3750     | 1.8245        | -               | -                        | -                         | -                    | -                          |
| 0.8132     | 4000     | 1.8287        | 1.8026          | 0.5627 (+0.0223)         | 0.3564 (+0.0314)          | 0.6185 (+0.1178)     | 0.5125 (+0.0572)           |
| 0.8640     | 4250     | 1.8125        | -               | -                        | -                         | -                    | -                          |
| **0.9148** | **4500** | **1.8198**    | **1.803**       | **0.5634 (+0.0229)**     | **0.3493 (+0.0243)**      | **0.6396 (+0.1390)** | **0.5174 (+0.0621)**       |
| 0.9656     | 4750     | 1.8193        | -               | -                        | -                         | -                    | -                          |
| -1         | -1       | -             | -               | 0.5634 (+0.0229)         | 0.3493 (+0.0243)          | 0.6396 (+0.1390)     | 0.5174 (+0.0621)           |

* The bold row denotes the saved checkpoint.
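The epoch fractions in the log follow directly from the dataset and batch size: 78,704 training samples at a per-device batch size of 16 give 4,919 optimizer steps per epoch, so step 4,500 corresponds to epoch 4500 / 4919 ≈ 0.9148, and with `warmup_ratio: 0.1` roughly the first 492 steps are linear warmup:

```python
import math

train_samples, batch_size = 78_704, 16
steps_per_epoch = math.ceil(train_samples / batch_size)  # 78704 / 16 divides evenly here
print(steps_per_epoch)                   # 4919
print(round(4500 / steps_per_epoch, 4))  # 0.9148 (the bold checkpoint row)
print(round(0.1 * steps_per_epoch))      # 492 warmup steps
```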

### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.5.0.dev0
- Transformers: 4.49.0
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.2
- Datasets: 3.4.0
- Tokenizers: 0.21.1

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### PListMLELoss
```bibtex
@inproceedings{lan2014position,
    title={Position-Aware ListMLE: A Sequential Learning Process for Ranking.},
    author={Lan, Yanyan and Zhu, Yadong and Guo, Jiafeng and Niu, Shuzi and Cheng, Xueqi},
    booktitle={UAI},
    volume={14},
    pages={449--458},
    year={2014}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
config.json ADDED
{
  "_name_or_path": "models/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-plistmle-normalize=minmax/final",
  "architectures": [
    "BertForSequenceClassification"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 384,
  "id2label": {
    "0": "LABEL_0"
  },
  "initializer_range": 0.02,
  "intermediate_size": 1536,
  "label2id": {
    "LABEL_0": 0
  },
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "sentence_transformers": {
    "activation_fn": "torch.nn.modules.activation.Sigmoid"
  },
  "torch_dtype": "float32",
  "transformers_version": "4.49.0",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
}
model.safetensors ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:be164d8f8ccdcb42506a6b58b5a1ea75fc3cd73d11cffce5652ab5f40b21580b
size 133464836
special_tokens_map.json ADDED
{
  "cls_token": {
    "content": "[CLS]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "mask_token": {
    "content": "[MASK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "[PAD]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "sep_token": {
    "content": "[SEP]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "[UNK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
{
  "added_tokens_decoder": {
    "0": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "100": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "101": {
      "content": "[CLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "102": {
      "content": "[SEP]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "103": {
      "content": "[MASK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "do_basic_tokenize": true,
  "do_lower_case": true,
  "extra_special_tokens": {},
  "mask_token": "[MASK]",
  "max_length": 512,
  "model_max_length": 512,
  "never_split": null,
  "pad_to_multiple_of": null,
  "pad_token": "[PAD]",
  "pad_token_type_id": 0,
  "padding_side": "right",
  "sep_token": "[SEP]",
  "stride": 0,
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
  "truncation_side": "right",
  "truncation_strategy": "longest_first",
  "unk_token": "[UNK]"
}
vocab.txt ADDED
The diff for this file is too large to render. See raw diff