CrossEncoder based on cross-encoder/ms-marco-MiniLM-L6-v2

This is a Cross Encoder model fine-tuned from cross-encoder/ms-marco-MiniLM-L6-v2 using the sentence-transformers library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.

Model Details

Model Description

  • Model Type: Cross Encoder
  • Base model: cross-encoder/ms-marco-MiniLM-L6-v2
  • Model size: 22.7M parameters (F32)

Model Sources

  • Documentation: https://www.sbert.net
  • Repository: https://github.com/UKPLab/sentence-transformers

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import CrossEncoder

# Download from the 🤗 Hub
model = CrossEncoder("andrewma5/harvard-loop-reranker")
# Get scores for pairs of texts
pairs = [
    ['The item is a promotional display featuring a variety of phone cases, including solid blue cases, cases with artistic designs, and one showcasing a kitten wearing a Santa hat.', 'A black phone case.'],
    ['It was a black umbrella with a loop.', 'A new, mustard-yellow, waffle-knit long-sleeved henley shirt features a three-button placket, a chest pocket with a "Custom Supply" label, and an "L.O.G.G." tag at the neckline.'],
    ['A white sneaker with black, pink, and silver accents.', 'A blue backpack has an orange and white front with black straps.'],
    ['Oh, that sleek white TYESO tumbler with the silver top, I was just about to try it out for keeping my coffee warm all day.', 'It is a white, metal TYESO brand vacuum-insulated bottle/mug with a silver rim and a black lid with a clear straw.'],
    ['It is a bright orange backpack with a small pink strawberry charm.', 'The medium-sized black backpack, likely made of nylon or a similar synthetic material, features a white rectangular tag with "MUSIC IS POWER" printed on it and appears to be in good condition.'],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)

# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'The item is a promotional display featuring a variety of phone cases, including solid blue cases, cases with artistic designs, and one showcasing a kitten wearing a Santa hat.',
    [
        'A black phone case.',
        'A new, mustard-yellow, waffle-knit long-sleeved henley shirt features a three-button placket, a chest pocket with a "Custom Supply" label, and an "L.O.G.G." tag at the neckline.',
        'A blue backpack has an orange and white front with black straps.',
        'It is a white, metal TYESO brand vacuum-insulated bottle/mug with a silver rim and a black lid with a clear straw.',
        'The medium-sized black backpack, likely made of nylon or a similar synthetic material, features a white rectangular tag with "MUSIC IS POWER" printed on it and appears to be in good condition.',
    ]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
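
Each entry in ranks pairs a corpus_id (the index into the document list passed to model.rank) with its score, ordered from most to least relevant. A minimal sketch for inspecting the ranking, reusing the ranks variable from the snippet above:

# 'ranks' is sorted by descending score; corpus_id indexes into the
# document list that was passed to model.rank above.
for entry in ranks:
    print(f"{entry['score']:.4f}\t(document {entry['corpus_id']})")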

Evaluation

Metrics

Cross Encoder Binary Classification

Metric               Value
accuracy             0.8988
accuracy_threshold   0.1037
f1                   0.8318
f1_threshold        -0.4537
precision            0.7978
recall               0.8688
average_precision    0.9072
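
The *_threshold values are decision thresholds over the model's raw scores; the negative f1_threshold indicates these are unbounded logits rather than sigmoid outputs. Accuracy is maximized when pairs scoring above 0.1037 are labeled as matching. A minimal sketch of applying the accuracy threshold, reusing the model and pairs from the Usage section:

# Raw logit scores; compare against the tuned accuracy threshold directly
scores = model.predict(pairs)
predictions = (scores > 0.1037).astype(int)  # 1 = matching pair, 0 = non-matching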

Training Details

Training Dataset

Unnamed Dataset

  • Size: 114,138 training samples
  • Columns: sentence_0, sentence_1, and label
  • Approximate statistics based on the first 1000 samples:
               sentence_0                sentence_1                label
    type       string                    string                    float
    details    min: 15 characters        min: 14 characters        min: 0.0
               mean: 106.73 characters   mean: 110.94 characters   mean: 0.3
               max: 361 characters       max: 403 characters       max: 1.0
  • Samples:
    Sample 1
      sentence_0: The item is a promotional display featuring a variety of phone cases, including solid blue cases, cases with artistic designs, and one showcasing a kitten wearing a Santa hat.
      sentence_1: A black phone case.
      label: 0.0
    Sample 2
      sentence_0: It was a black umbrella with a loop.
      sentence_1: A new, mustard-yellow, waffle-knit long-sleeved henley shirt features a three-button placket, a chest pocket with a "Custom Supply" label, and an "L.O.G.G." tag at the neckline.
      label: 0.0
    Sample 3
      sentence_0: A white sneaker with black, pink, and silver accents.
      sentence_1: A blue backpack has an orange and white front with black straps.
      label: 0.0
  • Loss: BinaryCrossEntropyLoss (a construction sketch follows this list) with these parameters:
    {
        "activation_fn": "torch.nn.modules.linear.Identity",
        "pos_weight": null
    }
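
BinaryCrossEntropyLoss applies torch's BCEWithLogitsLoss to the model's raw output, and the parameters above are its defaults. A minimal construction sketch, assuming the base model as the starting point:

from sentence_transformers.cross_encoder import CrossEncoder
from sentence_transformers.cross_encoder.losses import BinaryCrossEntropyLoss

model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L6-v2", num_labels=1)
# activation_fn=nn.Identity() and pos_weight=None are the defaults,
# matching the parameters listed above
loss = BinaryCrossEntropyLoss(model)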
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
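
These correspond to fields on CrossEncoderTrainingArguments. A minimal end-to-end training sketch, under the assumption that train_dataset is a datasets.Dataset with the sentence_0, sentence_1, and label columns described above (eval_pairs and eval_labels are hypothetical held-out data for the evaluator):

from sentence_transformers.cross_encoder import (
    CrossEncoder,
    CrossEncoderTrainer,
    CrossEncoderTrainingArguments,
)
from sentence_transformers.cross_encoder.evaluation import CrossEncoderClassificationEvaluator
from sentence_transformers.cross_encoder.losses import BinaryCrossEntropyLoss

model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L6-v2", num_labels=1)
loss = BinaryCrossEntropyLoss(model)

# Reports accuracy, f1, precision, recall, and average_precision,
# matching the metrics table above
evaluator = CrossEncoderClassificationEvaluator(
    sentence_pairs=eval_pairs,  # hypothetical list of [sentence_0, sentence_1]
    labels=eval_labels,         # hypothetical list of 0/1 labels
)

args = CrossEncoderTrainingArguments(
    output_dir="harvard-loop-reranker",  # hypothetical output path
    num_train_epochs=3,
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    eval_strategy="steps",
)

trainer = CrossEncoderTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
    evaluator=evaluator,
)
trainer.train()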

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 3
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • project: huggingface
  • trackio_space_id: trackio
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: no
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: True
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch Step Training Loss eval_average_precision
0.0701 500 0.414 0.8339
0.1402 1000 0.3334 0.8344
0.2103 1500 0.2989 0.8549
0.2803 2000 0.2984 0.8596
0.3504 2500 0.2921 0.8707
0.4205 3000 0.2882 0.8734
0.4906 3500 0.2831 0.8802
0.5607 4000 0.2878 0.8828
0.6308 4500 0.2651 0.8857
0.7009 5000 0.2693 0.8854
0.7710 5500 0.2731 0.8876
0.8410 6000 0.2666 0.8905
0.9111 6500 0.2594 0.8925
0.9812 7000 0.2631 0.8956
1.0 7134 - 0.8921
1.0513 7500 0.2434 0.8955
1.1214 8000 0.2374 0.8969
1.1915 8500 0.2197 0.8962
1.2616 9000 0.2487 0.8980
1.3317 9500 0.2406 0.8990
1.4017 10000 0.2384 0.8995
1.4718 10500 0.2339 0.9021
1.5419 11000 0.2292 0.9034
1.6120 11500 0.2214 0.9046
1.6821 12000 0.2264 0.9049
1.7522 12500 0.2384 0.9058
1.8223 13000 0.2309 0.9072

Framework Versions

  • Python: 3.12.10
  • Sentence Transformers: 5.1.2
  • Transformers: 4.57.1
  • PyTorch: 2.9.1+cu128
  • Accelerate: 1.11.0
  • Datasets: 4.4.1
  • Tokenizers: 0.22.1
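
To reproduce this environment, the listed versions can be pinned at install time (a sketch; substitute the PyTorch build matching your CUDA version):

pip install "sentence-transformers==5.1.2" "transformers==4.57.1" "accelerate==1.11.0" "datasets==4.4.1" "tokenizers==0.22.1"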

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}