--- language: - en license: apache-2.0 tags: - sentence-transformers - sparse-encoder - sparse - splade - generated_from_trainer - dataset_size:99000 - loss:SpladeLoss - loss:SparseMultipleNegativesRankingLoss - loss:FlopsLoss base_model: distilbert/distilbert-base-uncased widget: - text: How do I know if a girl likes me at school? - text: What are some five star hotel in Jaipur? - text: Is it normal to fantasize your wife having sex with another man? - text: What is the Sahara, and how do the average temperatures there compare to the ones in the Simpson Desert? - text: What are Hillary Clinton's most recognized accomplishments while Secretary of State? datasets: - sentence-transformers/quora-duplicates pipeline_tag: feature-extraction library_name: sentence-transformers metrics: - cosine_accuracy - cosine_accuracy_threshold - cosine_f1 - cosine_f1_threshold - cosine_precision - cosine_recall - cosine_ap - cosine_mcc - dot_accuracy - dot_accuracy_threshold - dot_f1 - dot_f1_threshold - dot_precision - dot_recall - dot_ap - dot_mcc - euclidean_accuracy - euclidean_accuracy_threshold - euclidean_f1 - euclidean_f1_threshold - euclidean_precision - euclidean_recall - euclidean_ap - euclidean_mcc - manhattan_accuracy - manhattan_accuracy_threshold - manhattan_f1 - manhattan_f1_threshold - manhattan_precision - manhattan_recall - manhattan_ap - manhattan_mcc - max_accuracy - max_accuracy_threshold - max_f1 - max_f1_threshold - max_precision - max_recall - max_ap - max_mcc - active_dims - sparsity_ratio - dot_accuracy@1 - dot_accuracy@3 - dot_accuracy@5 - dot_accuracy@10 - dot_precision@1 - dot_precision@3 - dot_precision@5 - dot_precision@10 - dot_recall@1 - dot_recall@3 - dot_recall@5 - dot_recall@10 - dot_ndcg@10 - dot_mrr@10 - dot_map@100 - query_active_dims - query_sparsity_ratio - corpus_active_dims - corpus_sparsity_ratio co2_eq_emissions: emissions: 29.19330199735101 energy_consumed: 0.07510458396754072 source: codecarbon training_type: fine-tuning on_cloud: false cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K ram_total_size: 31.777088165283203 hours_used: 0.306 hardware_used: 1 x NVIDIA GeForce RTX 3090 model-index: - name: splade-distilbert-base-uncased trained on Quora Duplicates Questions results: - task: type: sparse-binary-classification name: Sparse Binary Classification dataset: name: quora duplicates dev type: quora_duplicates_dev metrics: - type: cosine_accuracy value: 0.759 name: Cosine Accuracy - type: cosine_accuracy_threshold value: 0.8012633323669434 name: Cosine Accuracy Threshold - type: cosine_f1 value: 0.6741573033707865 name: Cosine F1 - type: cosine_f1_threshold value: 0.542455792427063 name: Cosine F1 Threshold - type: cosine_precision value: 0.528169014084507 name: Cosine Precision - type: cosine_recall value: 0.9316770186335404 name: Cosine Recall - type: cosine_ap value: 0.6875984052094628 name: Cosine Ap - type: cosine_mcc value: 0.5059561809366392 name: Cosine Mcc - type: dot_accuracy value: 0.754 name: Dot Accuracy - type: dot_accuracy_threshold value: 47.276466369628906 name: Dot Accuracy Threshold - type: dot_f1 value: 0.6759581881533101 name: Dot F1 - type: dot_f1_threshold value: 40.955284118652344 name: Dot F1 Threshold - type: dot_precision value: 0.5398886827458256 name: Dot Precision - type: dot_recall value: 0.9037267080745341 name: Dot Recall - type: dot_ap value: 0.6070585464263578 name: Dot Ap - type: dot_mcc value: 0.5042382773971489 name: Dot Mcc - type: euclidean_accuracy value: 0.677 name: Euclidean Accuracy - type: euclidean_accuracy_threshold 
value: -14.295218467712402 name: Euclidean Accuracy Threshold - type: euclidean_f1 value: 0.48599545798637395 name: Euclidean F1 - type: euclidean_f1_threshold value: -0.5385364294052124 name: Euclidean F1 Threshold - type: euclidean_precision value: 0.3213213213213213 name: Euclidean Precision - type: euclidean_recall value: 0.9968944099378882 name: Euclidean Recall - type: euclidean_ap value: 0.20430811061248494 name: Euclidean Ap - type: euclidean_mcc value: -0.04590966956831287 name: Euclidean Mcc - type: manhattan_accuracy value: 0.677 name: Manhattan Accuracy - type: manhattan_accuracy_threshold value: -163.6865234375 name: Manhattan Accuracy Threshold - type: manhattan_f1 value: 0.48599545798637395 name: Manhattan F1 - type: manhattan_f1_threshold value: -2.7509355545043945 name: Manhattan F1 Threshold - type: manhattan_precision value: 0.3213213213213213 name: Manhattan Precision - type: manhattan_recall value: 0.9968944099378882 name: Manhattan Recall - type: manhattan_ap value: 0.20563864564607998 name: Manhattan Ap - type: manhattan_mcc value: -0.04590966956831287 name: Manhattan Mcc - type: max_accuracy value: 0.759 name: Max Accuracy - type: max_accuracy_threshold value: 47.276466369628906 name: Max Accuracy Threshold - type: max_f1 value: 0.6759581881533101 name: Max F1 - type: max_f1_threshold value: 40.955284118652344 name: Max F1 Threshold - type: max_precision value: 0.5398886827458256 name: Max Precision - type: max_recall value: 0.9968944099378882 name: Max Recall - type: max_ap value: 0.6875984052094628 name: Max Ap - type: max_mcc value: 0.5059561809366392 name: Max Mcc - type: active_dims value: 83.36341094970703 name: Active Dims - type: sparsity_ratio value: 0.9972687434981421 name: Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoMSMARCO type: NanoMSMARCO metrics: - type: dot_accuracy@1 value: 0.24 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.44 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.56 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.74 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.24 name: Dot Precision@1 - type: dot_precision@3 value: 0.14666666666666667 name: Dot Precision@3 - type: dot_precision@5 value: 0.11200000000000002 name: Dot Precision@5 - type: dot_precision@10 value: 0.07400000000000001 name: Dot Precision@10 - type: dot_recall@1 value: 0.24 name: Dot Recall@1 - type: dot_recall@3 value: 0.44 name: Dot Recall@3 - type: dot_recall@5 value: 0.56 name: Dot Recall@5 - type: dot_recall@10 value: 0.74 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.46883808093835555 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.3849920634920634 name: Dot Mrr@10 - type: dot_map@100 value: 0.39450094910993877 name: Dot Map@100 - type: query_active_dims value: 84.87999725341797 name: Query Active Dims - type: query_sparsity_ratio value: 0.9972190551977781 name: Query Sparsity Ratio - type: corpus_active_dims value: 104.35554504394531 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9965809729033503 name: Corpus Sparsity Ratio - type: dot_accuracy@1 value: 0.24 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.44 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.6 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.74 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.24 name: Dot Precision@1 - type: dot_precision@3 value: 0.14666666666666667 name: Dot Precision@3 - type: dot_precision@5 value: 0.12000000000000002 name: Dot Precision@5 - type: 
dot_precision@10 value: 0.07400000000000001 name: Dot Precision@10 - type: dot_recall@1 value: 0.24 name: Dot Recall@1 - type: dot_recall@3 value: 0.44 name: Dot Recall@3 - type: dot_recall@5 value: 0.6 name: Dot Recall@5 - type: dot_recall@10 value: 0.74 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.46663046446554135 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.3821587301587301 name: Dot Mrr@10 - type: dot_map@100 value: 0.39141822290426725 name: Dot Map@100 - type: query_active_dims value: 94.9000015258789 name: Query Active Dims - type: query_sparsity_ratio value: 0.9968907672653863 name: Query Sparsity Ratio - type: corpus_active_dims value: 115.97699737548828 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9962002163234556 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoNQ type: NanoNQ metrics: - type: dot_accuracy@1 value: 0.18 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.44 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.52 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.58 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.18 name: Dot Precision@1 - type: dot_precision@3 value: 0.14666666666666667 name: Dot Precision@3 - type: dot_precision@5 value: 0.10400000000000001 name: Dot Precision@5 - type: dot_precision@10 value: 0.06000000000000001 name: Dot Precision@10 - type: dot_recall@1 value: 0.17 name: Dot Recall@1 - type: dot_recall@3 value: 0.41 name: Dot Recall@3 - type: dot_recall@5 value: 0.48 name: Dot Recall@5 - type: dot_recall@10 value: 0.55 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.3711173352982992 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.32435714285714284 name: Dot Mrr@10 - type: dot_map@100 value: 0.32104591506684527 name: Dot Map@100 - type: query_active_dims value: 76.81999969482422 name: Query Active Dims - type: query_sparsity_ratio value: 0.9974831269348396 name: Query Sparsity Ratio - type: corpus_active_dims value: 139.53028869628906 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9954285338871539 name: Corpus Sparsity Ratio - type: dot_accuracy@1 value: 0.18 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.46 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.5 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.64 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.18 name: Dot Precision@1 - type: dot_precision@3 value: 0.1533333333333333 name: Dot Precision@3 - type: dot_precision@5 value: 0.10000000000000002 name: Dot Precision@5 - type: dot_precision@10 value: 0.066 name: Dot Precision@10 - type: dot_recall@1 value: 0.17 name: Dot Recall@1 - type: dot_recall@3 value: 0.43 name: Dot Recall@3 - type: dot_recall@5 value: 0.46 name: Dot Recall@5 - type: dot_recall@10 value: 0.61 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.39277722565932277 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.33549999999999996 name: Dot Mrr@10 - type: dot_map@100 value: 0.3266050492721919 name: Dot Map@100 - type: query_active_dims value: 85.72000122070312 name: Query Active Dims - type: query_sparsity_ratio value: 0.9971915339354989 name: Query Sparsity Ratio - type: corpus_active_dims value: 156.10665893554688 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.994885438079564 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoNFCorpus type: NanoNFCorpus metrics: - type: dot_accuracy@1 value: 0.28 name: Dot Accuracy@1 - type: 
dot_accuracy@3 value: 0.42 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.46 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.52 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.28 name: Dot Precision@1 - type: dot_precision@3 value: 0.24 name: Dot Precision@3 - type: dot_precision@5 value: 0.2 name: Dot Precision@5 - type: dot_precision@10 value: 0.16 name: Dot Precision@10 - type: dot_recall@1 value: 0.010055870806195594 name: Dot Recall@1 - type: dot_recall@3 value: 0.03299225609257712 name: Dot Recall@3 - type: dot_recall@5 value: 0.043240249260663235 name: Dot Recall@5 - type: dot_recall@10 value: 0.0575687615260951 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.1901013298743406 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.3606904761904762 name: Dot Mrr@10 - type: dot_map@100 value: 0.06747201795263198 name: Dot Map@100 - type: query_active_dims value: 92.18000030517578 name: Query Active Dims - type: query_sparsity_ratio value: 0.9969798833528217 name: Query Sparsity Ratio - type: corpus_active_dims value: 196.1699981689453 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.993572832770823 name: Corpus Sparsity Ratio - type: dot_accuracy@1 value: 0.3 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.42 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.48 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.52 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.3 name: Dot Precision@1 - type: dot_precision@3 value: 0.24666666666666665 name: Dot Precision@3 - type: dot_precision@5 value: 0.21600000000000003 name: Dot Precision@5 - type: dot_precision@10 value: 0.174 name: Dot Precision@10 - type: dot_recall@1 value: 0.020055870806195596 name: Dot Recall@1 - type: dot_recall@3 value: 0.03516880470242261 name: Dot Recall@3 - type: dot_recall@5 value: 0.07436160102717629 name: Dot Recall@5 - type: dot_recall@10 value: 0.08924749441772001 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.2174721143005973 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.3753888888888888 name: Dot Mrr@10 - type: dot_map@100 value: 0.08327101018955965 name: Dot Map@100 - type: query_active_dims value: 101.91999816894531 name: Query Active Dims - type: query_sparsity_ratio value: 0.9966607693411655 name: Query Sparsity Ratio - type: corpus_active_dims value: 217.09109497070312 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9928873895887982 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoQuoraRetrieval type: NanoQuoraRetrieval metrics: - type: dot_accuracy@1 value: 0.9 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.96 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.96 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 1.0 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.9 name: Dot Precision@1 - type: dot_precision@3 value: 0.38666666666666655 name: Dot Precision@3 - type: dot_precision@5 value: 0.24799999999999997 name: Dot Precision@5 - type: dot_precision@10 value: 0.13599999999999998 name: Dot Precision@10 - type: dot_recall@1 value: 0.804 name: Dot Recall@1 - type: dot_recall@3 value: 0.9053333333333333 name: Dot Recall@3 - type: dot_recall@5 value: 0.9326666666666666 name: Dot Recall@5 - type: dot_recall@10 value: 0.99 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.940813094731721 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.9366666666666665 name: Dot Mrr@10 - type: dot_map@100 value: 0.9174399766899767 name: Dot Map@100 - type: query_active_dims value: 
80.30000305175781 name: Query Active Dims - type: query_sparsity_ratio value: 0.9973691107053353 name: Query Sparsity Ratio - type: corpus_active_dims value: 83.33353424072266 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9972697223563096 name: Corpus Sparsity Ratio - type: dot_accuracy@1 value: 0.9 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.96 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 1.0 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 1.0 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.9 name: Dot Precision@1 - type: dot_precision@3 value: 0.38666666666666655 name: Dot Precision@3 - type: dot_precision@5 value: 0.25599999999999995 name: Dot Precision@5 - type: dot_precision@10 value: 0.13599999999999998 name: Dot Precision@10 - type: dot_recall@1 value: 0.804 name: Dot Recall@1 - type: dot_recall@3 value: 0.9086666666666667 name: Dot Recall@3 - type: dot_recall@5 value: 0.97 name: Dot Recall@5 - type: dot_recall@10 value: 0.99 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.9434418368741703 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.94 name: Dot Mrr@10 - type: dot_map@100 value: 0.9210437710437711 name: Dot Map@100 - type: query_active_dims value: 87.4000015258789 name: Query Active Dims - type: query_sparsity_ratio value: 0.9971364916609043 name: Query Sparsity Ratio - type: corpus_active_dims value: 90.32620239257812 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.997040619802353 name: Corpus Sparsity Ratio - task: type: sparse-nano-beir name: Sparse Nano BEIR dataset: name: NanoBEIR mean type: NanoBEIR_mean metrics: - type: dot_accuracy@1 value: 0.4 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.565 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.625 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.71 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.4 name: Dot Precision@1 - type: dot_precision@3 value: 0.22999999999999998 name: Dot Precision@3 - type: dot_precision@5 value: 0.166 name: Dot Precision@5 - type: dot_precision@10 value: 0.10750000000000001 name: Dot Precision@10 - type: dot_recall@1 value: 0.30601396770154893 name: Dot Recall@1 - type: dot_recall@3 value: 0.4470813973564776 name: Dot Recall@3 - type: dot_recall@5 value: 0.5039767289818324 name: Dot Recall@5 - type: dot_recall@10 value: 0.5843921903815238 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.4927174602106791 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.5016765873015872 name: Dot Mrr@10 - type: dot_map@100 value: 0.4251147147048482 name: Dot Map@100 - type: query_active_dims value: 83.54500007629395 name: Query Active Dims - type: query_sparsity_ratio value: 0.9972627940476937 name: Query Sparsity Ratio - type: corpus_active_dims value: 123.28323480743562 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9959608402199255 name: Corpus Sparsity Ratio - type: dot_accuracy@1 value: 0.4021664050235479 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.5765463108320251 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.6598116169544741 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.7337833594976453 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.4021664050235479 name: Dot Precision@1 - type: dot_precision@3 value: 0.25656724228152794 name: Dot Precision@3 - type: dot_precision@5 value: 0.20182103610675042 name: Dot Precision@5 - type: dot_precision@10 value: 0.14312715855572997 name: Dot Precision@10 - type: dot_recall@1 value: 0.23408727816164185 name: Dot Recall@1 - type: dot_recall@3 value: 
0.3568914414902249 name: Dot Recall@3 - type: dot_recall@5 value: 0.4275402562349963 name: Dot Recall@5 - type: dot_recall@10 value: 0.5040607961406979 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.45167521970189345 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.5088102589020956 name: Dot Mrr@10 - type: dot_map@100 value: 0.37853024172675503 name: Dot Map@100 - type: query_active_dims value: 105.61787400444042 name: Query Active Dims - type: query_sparsity_ratio value: 0.9965396149005816 name: Query Sparsity Ratio - type: corpus_active_dims value: 163.73635361872905 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9946354644643625 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoClimateFEVER type: NanoClimateFEVER metrics: - type: dot_accuracy@1 value: 0.14 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.32 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.42 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.52 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.14 name: Dot Precision@1 - type: dot_precision@3 value: 0.11333333333333333 name: Dot Precision@3 - type: dot_precision@5 value: 0.09200000000000001 name: Dot Precision@5 - type: dot_precision@10 value: 0.064 name: Dot Precision@10 - type: dot_recall@1 value: 0.07166666666666666 name: Dot Recall@1 - type: dot_recall@3 value: 0.14833333333333332 name: Dot Recall@3 - type: dot_recall@5 value: 0.19 name: Dot Recall@5 - type: dot_recall@10 value: 0.25 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.1928494772790168 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.2526666666666666 name: Dot Mrr@10 - type: dot_map@100 value: 0.14153388517603807 name: Dot Map@100 - type: query_active_dims value: 102.33999633789062 name: Query Active Dims - type: query_sparsity_ratio value: 0.9966470088350079 name: Query Sparsity Ratio - type: corpus_active_dims value: 217.80722045898438 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9928639269884351 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoDBPedia type: NanoDBPedia metrics: - type: dot_accuracy@1 value: 0.56 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.78 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.82 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.88 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.56 name: Dot Precision@1 - type: dot_precision@3 value: 0.5133333333333333 name: Dot Precision@3 - type: dot_precision@5 value: 0.488 name: Dot Precision@5 - type: dot_precision@10 value: 0.436 name: Dot Precision@10 - type: dot_recall@1 value: 0.042268334576683116 name: Dot Recall@1 - type: dot_recall@3 value: 0.1179684188048045 name: Dot Recall@3 - type: dot_recall@5 value: 0.17514937366700764 name: Dot Recall@5 - type: dot_recall@10 value: 0.2739338942789917 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.5024388532207343 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.6801666666666667 name: Dot Mrr@10 - type: dot_map@100 value: 0.38220472918007364 name: Dot Map@100 - type: query_active_dims value: 79.80000305175781 name: Query Active Dims - type: query_sparsity_ratio value: 0.9973854923317031 name: Query Sparsity Ratio - type: corpus_active_dims value: 146.68072509765625 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.995194262332165 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: 
NanoFEVER type: NanoFEVER metrics: - type: dot_accuracy@1 value: 0.64 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.72 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.82 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.88 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.64 name: Dot Precision@1 - type: dot_precision@3 value: 0.2533333333333333 name: Dot Precision@3 - type: dot_precision@5 value: 0.176 name: Dot Precision@5 - type: dot_precision@10 value: 0.09399999999999999 name: Dot Precision@10 - type: dot_recall@1 value: 0.6066666666666667 name: Dot Recall@1 - type: dot_recall@3 value: 0.7033333333333333 name: Dot Recall@3 - type: dot_recall@5 value: 0.8033333333333332 name: Dot Recall@5 - type: dot_recall@10 value: 0.8633333333333333 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.7368677901493659 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.7063809523809523 name: Dot Mrr@10 - type: dot_map@100 value: 0.697561348294107 name: Dot Map@100 - type: query_active_dims value: 104.22000122070312 name: Query Active Dims - type: query_sparsity_ratio value: 0.9965854137598879 name: Query Sparsity Ratio - type: corpus_active_dims value: 228.74359130859375 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9925056159062776 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoFiQA2018 type: NanoFiQA2018 metrics: - type: dot_accuracy@1 value: 0.2 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.28 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.4 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.46 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.2 name: Dot Precision@1 - type: dot_precision@3 value: 0.12666666666666665 name: Dot Precision@3 - type: dot_precision@5 value: 0.10400000000000001 name: Dot Precision@5 - type: dot_precision@10 value: 0.07 name: Dot Precision@10 - type: dot_recall@1 value: 0.09469047619047619 name: Dot Recall@1 - type: dot_recall@3 value: 0.15076984126984128 name: Dot Recall@3 - type: dot_recall@5 value: 0.25362698412698415 name: Dot Recall@5 - type: dot_recall@10 value: 0.3211825396825397 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.23331922670891586 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.27135714285714285 name: Dot Mrr@10 - type: dot_map@100 value: 0.18392178053045694 name: Dot Map@100 - type: query_active_dims value: 89.73999786376953 name: Query Active Dims - type: query_sparsity_ratio value: 0.9970598257694853 name: Query Sparsity Ratio - type: corpus_active_dims value: 131.34085083007812 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9956968465097282 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoHotpotQA type: NanoHotpotQA metrics: - type: dot_accuracy@1 value: 0.8 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.9 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.92 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.94 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.8 name: Dot Precision@1 - type: dot_precision@3 value: 0.3933333333333333 name: Dot Precision@3 - type: dot_precision@5 value: 0.264 name: Dot Precision@5 - type: dot_precision@10 value: 0.14200000000000002 name: Dot Precision@10 - type: dot_recall@1 value: 0.4 name: Dot Recall@1 - type: dot_recall@3 value: 0.59 name: Dot Recall@3 - type: dot_recall@5 value: 0.66 name: Dot Recall@5 - type: dot_recall@10 value: 0.71 name: Dot Recall@10 - type: dot_ndcg@10 
value: 0.6848748058213975 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.8541666666666665 name: Dot Mrr@10 - type: dot_map@100 value: 0.6060670580971632 name: Dot Map@100 - type: query_active_dims value: 111.23999786376953 name: Query Active Dims - type: query_sparsity_ratio value: 0.9963554158356671 name: Query Sparsity Ratio - type: corpus_active_dims value: 166.19056701660156 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9945550564505407 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoSCIDOCS type: NanoSCIDOCS metrics: - type: dot_accuracy@1 value: 0.34 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.56 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.66 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.78 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.34 name: Dot Precision@1 - type: dot_precision@3 value: 0.26 name: Dot Precision@3 - type: dot_precision@5 value: 0.2 name: Dot Precision@5 - type: dot_precision@10 value: 0.14200000000000002 name: Dot Precision@10 - type: dot_recall@1 value: 0.07166666666666668 name: Dot Recall@1 - type: dot_recall@3 value: 0.16066666666666665 name: Dot Recall@3 - type: dot_recall@5 value: 0.20566666666666664 name: Dot Recall@5 - type: dot_recall@10 value: 0.2916666666666667 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.2850130343263586 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.47407142857142853 name: Dot Mrr@10 - type: dot_map@100 value: 0.20070977606957205 name: Dot Map@100 - type: query_active_dims value: 113.77999877929688 name: Query Active Dims - type: query_sparsity_ratio value: 0.9962721971437226 name: Query Sparsity Ratio - type: corpus_active_dims value: 226.21810913085938 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9925883589171464 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoArguAna type: NanoArguAna metrics: - type: dot_accuracy@1 value: 0.08 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.32 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.38 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.44 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.08 name: Dot Precision@1 - type: dot_precision@3 value: 0.10666666666666666 name: Dot Precision@3 - type: dot_precision@5 value: 0.07600000000000001 name: Dot Precision@5 - type: dot_precision@10 value: 0.044000000000000004 name: Dot Precision@10 - type: dot_recall@1 value: 0.08 name: Dot Recall@1 - type: dot_recall@3 value: 0.32 name: Dot Recall@3 - type: dot_recall@5 value: 0.38 name: Dot Recall@5 - type: dot_recall@10 value: 0.44 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.26512761684329256 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.20850000000000002 name: Dot Mrr@10 - type: dot_map@100 value: 0.2135415485154769 name: Dot Map@100 - type: query_active_dims value: 202.02000427246094 name: Query Active Dims - type: query_sparsity_ratio value: 0.9933811675423477 name: Query Sparsity Ratio - type: corpus_active_dims value: 176.61155700683594 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.994213630921734 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoSciFact type: NanoSciFact metrics: - type: dot_accuracy@1 value: 0.44 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.58 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.7 name: Dot Accuracy@5 - type: 
dot_accuracy@10 value: 0.78 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.44 name: Dot Precision@1 - type: dot_precision@3 value: 0.19999999999999996 name: Dot Precision@3 - type: dot_precision@5 value: 0.14800000000000002 name: Dot Precision@5 - type: dot_precision@10 value: 0.08599999999999998 name: Dot Precision@10 - type: dot_recall@1 value: 0.415 name: Dot Recall@1 - type: dot_recall@3 value: 0.55 name: Dot Recall@3 - type: dot_recall@5 value: 0.665 name: Dot Recall@5 - type: dot_recall@10 value: 0.76 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.5848481832222858 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.5400476190476191 name: Dot Mrr@10 - type: dot_map@100 value: 0.5247408283859897 name: Dot Map@100 - type: query_active_dims value: 102.4800033569336 name: Query Active Dims - type: query_sparsity_ratio value: 0.9966424217496581 name: Query Sparsity Ratio - type: corpus_active_dims value: 216.64508056640625 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9929020024714499 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoTouche2020 type: NanoTouche2020 metrics: - type: dot_accuracy@1 value: 0.40816326530612246 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.7551020408163265 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.8775510204081632 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.9591836734693877 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.40816326530612246 name: Dot Precision@1 - type: dot_precision@3 value: 0.43537414965986393 name: Dot Precision@3 - type: dot_precision@5 value: 0.38367346938775504 name: Dot Precision@5 - type: dot_precision@10 value: 0.3326530612244898 name: Dot Precision@10 - type: dot_recall@1 value: 0.027119934527989286 name: Dot Recall@1 - type: dot_recall@3 value: 0.08468167459585536 name: Dot Recall@3 - type: dot_recall@5 value: 0.12088537223378343 name: Dot Recall@5 - type: dot_recall@10 value: 0.21342642144981977 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.36611722725361623 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.5941286038224813 name: Dot Mrr@10 - type: dot_map@100 value: 0.24827413478914825 name: Dot Map@100 - type: query_active_dims value: 97.30612182617188 name: Query Active Dims - type: query_sparsity_ratio value: 0.9968119349378752 name: Query Sparsity Ratio - type: corpus_active_dims value: 147.016357421875 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9951832659255005 name: Corpus Sparsity Ratio --- # splade-distilbert-base-uncased trained on Quora Duplicates Questions This is a [SPLADE Sparse Encoder](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) model finetuned from [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the [quora-duplicates](https://huggingface.co/datasets/sentence-transformers/quora-duplicates) dataset using the [sentence-transformers](https://www.SBERT.net) library. It maps sentences & paragraphs to a 30522-dimensional sparse vector space and can be used for semantic search and sparse retrieval. 
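
To make the idea of a 30522-dimensional sparse vector concrete, the short sketch below encodes one question and counts how many vocabulary dimensions are non-zero; this is exactly what the `active_dims` and `sparsity_ratio` metrics reported further down measure, with `sparsity_ratio = 1 - active_dims / 30522`. It reuses the checkpoint name from the usage example below; the densification step is an assumption, added in case `encode` returns sparse torch tensors.

```python
import torch
from sentence_transformers import SparseEncoder

# Same checkpoint as in the usage example below
model = SparseEncoder("tomaarsen/splade-distilbert-base-uncased-quora-duplicates")

embeddings = model.encode(["How do I know if a girl likes me at school?"])

# Densify in case the embeddings come back as a sparse torch tensor (assumption)
if isinstance(embeddings, torch.Tensor) and embeddings.is_sparse:
    embeddings = embeddings.to_dense()
embeddings = torch.as_tensor(embeddings)

active_dims = int((embeddings[0] != 0).sum())
sparsity_ratio = 1.0 - active_dims / embeddings.shape[1]
print(active_dims, round(sparsity_ratio, 4))
# Only a small fraction of the 30522 dimensions are active (the query_active_dims
# values reported below average roughly 80-110), giving a sparsity ratio near 0.997.
```
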
## Model Details ### Model Description - **Model Type:** SPLADE Sparse Encoder - **Base model:** [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) - **Maximum Sequence Length:** 256 tokens - **Output Dimensionality:** 30522 dimensions - **Similarity Function:** Dot Product - **Training Dataset:** - [quora-duplicates](https://huggingface.co/datasets/sentence-transformers/quora-duplicates) - **Language:** en - **License:** apache-2.0 ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Documentation:** [Sparse Encoder Documentation](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sparse Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=sparse-encoder) ### Full Model Architecture ``` SparseEncoder( (0): MLMTransformer({'max_seq_length': 256, 'do_lower_case': False}) with MLMTransformer model: DistilBertForMaskedLM (1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SparseEncoder # Download from the 🤗 Hub model = SparseEncoder("tomaarsen/splade-distilbert-base-uncased-quora-duplicates") # Run inference sentences = [ 'What accomplishments did Hillary Clinton achieve during her time as Secretary of State?', "What are Hillary Clinton's most recognized accomplishments while Secretary of State?", 'What are Hillary Clinton’s qualifications to be President?', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 30522] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities) # tensor([[ 83.9635, 60.9402, 26.0887], # [ 60.9402, 85.6474, 33.3293], # [ 26.0887, 33.3293, 104.0980]]) ``` ## Evaluation ### Metrics #### Sparse Binary Classification * Dataset: `quora_duplicates_dev` * Evaluated with [SparseBinaryClassificationEvaluator](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseBinaryClassificationEvaluator) | Metric | Value | |:-----------------------------|:-----------| | cosine_accuracy | 0.759 | | cosine_accuracy_threshold | 0.8013 | | cosine_f1 | 0.6742 | | cosine_f1_threshold | 0.5425 | | cosine_precision | 0.5282 | | cosine_recall | 0.9317 | | cosine_ap | 0.6876 | | cosine_mcc | 0.506 | | dot_accuracy | 0.754 | | dot_accuracy_threshold | 47.2765 | | dot_f1 | 0.676 | | dot_f1_threshold | 40.9553 | | dot_precision | 0.5399 | | dot_recall | 0.9037 | | dot_ap | 0.6071 | | dot_mcc | 0.5042 | | euclidean_accuracy | 0.677 | | euclidean_accuracy_threshold | -14.2952 | | euclidean_f1 | 0.486 | | euclidean_f1_threshold | -0.5385 | | euclidean_precision | 0.3213 | | euclidean_recall | 0.9969 | | euclidean_ap | 0.2043 | | euclidean_mcc | -0.0459 | | manhattan_accuracy | 0.677 | | manhattan_accuracy_threshold | -163.6865 | | manhattan_f1 | 0.486 | | manhattan_f1_threshold | -2.7509 | | manhattan_precision | 0.3213 | | manhattan_recall | 0.9969 | | manhattan_ap | 0.2056 | | manhattan_mcc | -0.0459 | | max_accuracy | 0.759 | | max_accuracy_threshold | 47.2765 | | max_f1 | 0.676 | | 
max_f1_threshold | 40.9553 | | max_precision | 0.5399 | | max_recall | 0.9969 | | **max_ap** | **0.6876** | | max_mcc | 0.506 | | active_dims | 83.3634 | | sparsity_ratio | 0.9973 | #### Sparse Information Retrieval * Datasets: `NanoMSMARCO`, `NanoNQ`, `NanoNFCorpus`, `NanoQuoraRetrieval`, `NanoClimateFEVER`, `NanoDBPedia`, `NanoFEVER`, `NanoFiQA2018`, `NanoHotpotQA`, `NanoMSMARCO`, `NanoNFCorpus`, `NanoNQ`, `NanoQuoraRetrieval`, `NanoSCIDOCS`, `NanoArguAna`, `NanoSciFact` and `NanoTouche2020` * Evaluated with [SparseInformationRetrievalEvaluator](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) | Metric | NanoMSMARCO | NanoNQ | NanoNFCorpus | NanoQuoraRetrieval | NanoClimateFEVER | NanoDBPedia | NanoFEVER | NanoFiQA2018 | NanoHotpotQA | NanoSCIDOCS | NanoArguAna | NanoSciFact | NanoTouche2020 | |:----------------------|:------------|:-----------|:-------------|:-------------------|:-----------------|:------------|:-----------|:-------------|:-------------|:------------|:------------|:------------|:---------------| | dot_accuracy@1 | 0.24 | 0.18 | 0.3 | 0.9 | 0.14 | 0.56 | 0.64 | 0.2 | 0.8 | 0.34 | 0.08 | 0.44 | 0.4082 | | dot_accuracy@3 | 0.44 | 0.46 | 0.42 | 0.96 | 0.32 | 0.78 | 0.72 | 0.28 | 0.9 | 0.56 | 0.32 | 0.58 | 0.7551 | | dot_accuracy@5 | 0.6 | 0.5 | 0.48 | 1.0 | 0.42 | 0.82 | 0.82 | 0.4 | 0.92 | 0.66 | 0.38 | 0.7 | 0.8776 | | dot_accuracy@10 | 0.74 | 0.64 | 0.52 | 1.0 | 0.52 | 0.88 | 0.88 | 0.46 | 0.94 | 0.78 | 0.44 | 0.78 | 0.9592 | | dot_precision@1 | 0.24 | 0.18 | 0.3 | 0.9 | 0.14 | 0.56 | 0.64 | 0.2 | 0.8 | 0.34 | 0.08 | 0.44 | 0.4082 | | dot_precision@3 | 0.1467 | 0.1533 | 0.2467 | 0.3867 | 0.1133 | 0.5133 | 0.2533 | 0.1267 | 0.3933 | 0.26 | 0.1067 | 0.2 | 0.4354 | | dot_precision@5 | 0.12 | 0.1 | 0.216 | 0.256 | 0.092 | 0.488 | 0.176 | 0.104 | 0.264 | 0.2 | 0.076 | 0.148 | 0.3837 | | dot_precision@10 | 0.074 | 0.066 | 0.174 | 0.136 | 0.064 | 0.436 | 0.094 | 0.07 | 0.142 | 0.142 | 0.044 | 0.086 | 0.3327 | | dot_recall@1 | 0.24 | 0.17 | 0.0201 | 0.804 | 0.0717 | 0.0423 | 0.6067 | 0.0947 | 0.4 | 0.0717 | 0.08 | 0.415 | 0.0271 | | dot_recall@3 | 0.44 | 0.43 | 0.0352 | 0.9087 | 0.1483 | 0.118 | 0.7033 | 0.1508 | 0.59 | 0.1607 | 0.32 | 0.55 | 0.0847 | | dot_recall@5 | 0.6 | 0.46 | 0.0744 | 0.97 | 0.19 | 0.1751 | 0.8033 | 0.2536 | 0.66 | 0.2057 | 0.38 | 0.665 | 0.1209 | | dot_recall@10 | 0.74 | 0.61 | 0.0892 | 0.99 | 0.25 | 0.2739 | 0.8633 | 0.3212 | 0.71 | 0.2917 | 0.44 | 0.76 | 0.2134 | | **dot_ndcg@10** | **0.4666** | **0.3928** | **0.2175** | **0.9434** | **0.1928** | **0.5024** | **0.7369** | **0.2333** | **0.6849** | **0.285** | **0.2651** | **0.5848** | **0.3661** | | dot_mrr@10 | 0.3822 | 0.3355 | 0.3754 | 0.94 | 0.2527 | 0.6802 | 0.7064 | 0.2714 | 0.8542 | 0.4741 | 0.2085 | 0.54 | 0.5941 | | dot_map@100 | 0.3914 | 0.3266 | 0.0833 | 0.921 | 0.1415 | 0.3822 | 0.6976 | 0.1839 | 0.6061 | 0.2007 | 0.2135 | 0.5247 | 0.2483 | | query_active_dims | 94.9 | 85.72 | 101.92 | 87.4 | 102.34 | 79.8 | 104.22 | 89.74 | 111.24 | 113.78 | 202.02 | 102.48 | 97.3061 | | query_sparsity_ratio | 0.9969 | 0.9972 | 0.9967 | 0.9971 | 0.9966 | 0.9974 | 0.9966 | 0.9971 | 0.9964 | 0.9963 | 0.9934 | 0.9966 | 0.9968 | | corpus_active_dims | 115.977 | 156.1067 | 217.0911 | 90.3262 | 217.8072 | 146.6807 | 228.7436 | 131.3409 | 166.1906 | 226.2181 | 176.6116 | 216.6451 | 147.0164 | | corpus_sparsity_ratio | 0.9962 | 0.9949 | 0.9929 | 0.997 | 0.9929 | 0.9952 | 0.9925 | 0.9957 | 0.9946 | 0.9926 | 
0.9942 | 0.9929 | 0.9952 | #### Sparse Nano BEIR * Dataset: `NanoBEIR_mean` * Evaluated with [SparseNanoBEIREvaluator](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters: ```json { "dataset_names": [ "msmarco", "nq", "nfcorpus", "quoraretrieval" ] } ``` | Metric | Value | |:----------------------|:-----------| | dot_accuracy@1 | 0.4 | | dot_accuracy@3 | 0.565 | | dot_accuracy@5 | 0.625 | | dot_accuracy@10 | 0.71 | | dot_precision@1 | 0.4 | | dot_precision@3 | 0.23 | | dot_precision@5 | 0.166 | | dot_precision@10 | 0.1075 | | dot_recall@1 | 0.306 | | dot_recall@3 | 0.4471 | | dot_recall@5 | 0.504 | | dot_recall@10 | 0.5844 | | **dot_ndcg@10** | **0.4927** | | dot_mrr@10 | 0.5017 | | dot_map@100 | 0.4251 | | query_active_dims | 83.545 | | query_sparsity_ratio | 0.9973 | | corpus_active_dims | 123.2832 | | corpus_sparsity_ratio | 0.996 | #### Sparse Nano BEIR * Dataset: `NanoBEIR_mean` * Evaluated with [SparseNanoBEIREvaluator](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters: ```json { "dataset_names": [ "climatefever", "dbpedia", "fever", "fiqa2018", "hotpotqa", "msmarco", "nfcorpus", "nq", "quoraretrieval", "scidocs", "arguana", "scifact", "touche2020" ] } ``` | Metric | Value | |:----------------------|:-----------| | dot_accuracy@1 | 0.4022 | | dot_accuracy@3 | 0.5765 | | dot_accuracy@5 | 0.6598 | | dot_accuracy@10 | 0.7338 | | dot_precision@1 | 0.4022 | | dot_precision@3 | 0.2566 | | dot_precision@5 | 0.2018 | | dot_precision@10 | 0.1431 | | dot_recall@1 | 0.2341 | | dot_recall@3 | 0.3569 | | dot_recall@5 | 0.4275 | | dot_recall@10 | 0.5041 | | **dot_ndcg@10** | **0.4517** | | dot_mrr@10 | 0.5088 | | dot_map@100 | 0.3785 | | query_active_dims | 105.6179 | | query_sparsity_ratio | 0.9965 | | corpus_active_dims | 163.7364 | | corpus_sparsity_ratio | 0.9946 | ## Training Details ### Training Dataset #### quora-duplicates * Dataset: [quora-duplicates](https://huggingface.co/datasets/sentence-transformers/quora-duplicates) at [451a485](https://huggingface.co/datasets/sentence-transformers/quora-duplicates/tree/451a4850bd141edb44ade1b5828c259abd762cdb) * Size: 99,000 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Samples: | anchor | positive | negative | |:----------------------------------------------------------------------|:---------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | What are the best GMAT coaching institutes in Delhi NCR? | Which are the best GMAT coaching institutes in Delhi/NCR? | What are the best GMAT coaching institutes in Delhi-Noida Area? | | Is a third world war coming? | Is World War 3 more imminent than expected? 
| Since the UN is unable to control terrorism and groups like ISIS, al-Qaeda and countries that promote terrorism (even though it consumed those countries), can we assume that the world is heading towards World War III? | | Should I build iOS or Android apps first? | Should people choose Android or iOS first to build their App? | How much more effort is it to build your app on both iOS and Android? | * Loss: [SpladeLoss](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters: ```json { "loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')", "lambda_corpus": 3e-05, "lambda_query": 5e-05 } ``` ### Evaluation Dataset #### quora-duplicates * Dataset: [quora-duplicates](https://huggingface.co/datasets/sentence-transformers/quora-duplicates) at [451a485](https://huggingface.co/datasets/sentence-transformers/quora-duplicates/tree/451a4850bd141edb44ade1b5828c259abd762cdb) * Size: 1,000 evaluation samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Samples: | anchor | positive | negative | |:-------------------------------------------------------------------|:------------------------------------------------------------|:-----------------------------------------------------------------| | What happens if we use petrol in diesel vehicles? | Why can't we use petrol in diesel? | Why are diesel engines noisier than petrol engines? | | Why is Saltwater taffy candy imported in Switzerland? | Why is Saltwater taffy candy imported in Laos? | Is salt a consumer product? | | Which is your favourite film in 2016? | What movie is the best movie of 2016? | What will the best movie of 2017 be? | * Loss: [SpladeLoss](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters: ```json { "loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')", "lambda_corpus": 3e-05, "lambda_query": 5e-05 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 12 - `per_device_eval_batch_size`: 12 - `learning_rate`: 2e-05 - `num_train_epochs`: 1 - `bf16`: True - `load_best_model_at_end`: True - `batch_sampler`: no_duplicates #### All Hyperparameters
Click to expand - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 12 - `per_device_eval_batch_size`: 12 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional - `router_mapping`: {} - `learning_rate_mapping`: {}
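
For reference, the sketch below shows how a comparable fine-tuning run could be set up with the loss configuration and non-default hyperparameters listed above. It is a minimal sketch rather than the exact training script: the `triplet` dataset config name, the import paths, and the `SparseEncoderTrainer` / `SparseEncoderTrainingArguments` class names are assumptions that may differ between sentence-transformers versions.

```python
from datasets import load_dataset
from sentence_transformers import (
    SparseEncoder,
    SparseEncoderTrainer,
    SparseEncoderTrainingArguments,
)
from sentence_transformers.sparse_encoder.losses import SpladeLoss, SparseMultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

# (anchor, positive, negative) triplets; the "triplet" config name is an assumption
train_dataset = load_dataset("sentence-transformers/quora-duplicates", "triplet", split="train")

# Start from the same MLM backbone as this model; assumed to be wrapped into
# MLMTransformer + SpladePooling automatically
model = SparseEncoder("distilbert/distilbert-base-uncased")

# Ranking loss plus FLOPS-style sparsity regularization on queries and documents,
# using the lambda weights reported in this card
loss = SpladeLoss(
    model=model,
    loss=SparseMultipleNegativesRankingLoss(model=model),
    lambda_query=5e-5,
    lambda_corpus=3e-5,
)

args = SparseEncoderTrainingArguments(
    output_dir="splade-distilbert-base-uncased-quora-duplicates",
    num_train_epochs=1,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=12,
    learning_rate=2e-5,
    bf16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # matches `batch_sampler: no_duplicates` above
)

trainer = SparseEncoderTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```
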
### Training Logs | Epoch | Step | Training Loss | Validation Loss | quora_duplicates_dev_max_ap | NanoMSMARCO_dot_ndcg@10 | NanoNQ_dot_ndcg@10 | NanoNFCorpus_dot_ndcg@10 | NanoQuoraRetrieval_dot_ndcg@10 | NanoBEIR_mean_dot_ndcg@10 | NanoClimateFEVER_dot_ndcg@10 | NanoDBPedia_dot_ndcg@10 | NanoFEVER_dot_ndcg@10 | NanoFiQA2018_dot_ndcg@10 | NanoHotpotQA_dot_ndcg@10 | NanoSCIDOCS_dot_ndcg@10 | NanoArguAna_dot_ndcg@10 | NanoSciFact_dot_ndcg@10 | NanoTouche2020_dot_ndcg@10 | |:-------:|:--------:|:-------------:|:---------------:|:---------------------------:|:-----------------------:|:------------------:|:------------------------:|:------------------------------:|:-------------------------:|:----------------------------:|:-----------------------:|:---------------------:|:------------------------:|:------------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:--------------------------:| | 0.0242 | 200 | 6.2275 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.0485 | 400 | 0.4129 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.0727 | 600 | 0.3238 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.0970 | 800 | 0.2795 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.1212 | 1000 | 0.255 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.1455 | 1200 | 0.2367 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.1697 | 1400 | 0.25 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.1939 | 1600 | 0.2742 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.2 | 1650 | - | 0.1914 | 0.6442 | 0.3107 | 0.2820 | 0.1991 | 0.8711 | 0.4157 | - | - | - | - | - | - | - | - | - | | 0.2182 | 1800 | 0.2102 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.2424 | 2000 | 0.1797 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.2667 | 2200 | 0.2021 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.2909 | 2400 | 0.1734 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.3152 | 2600 | 0.1849 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.3394 | 2800 | 0.1871 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.3636 | 3000 | 0.1685 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.3879 | 3200 | 0.1512 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.4 | 3300 | - | 0.1139 | 0.6637 | 0.4200 | 0.3431 | 0.1864 | 0.9222 | 0.4679 | - | - | - | - | - | - | - | - | - | | 0.4121 | 3400 | 0.1165 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.4364 | 3600 | 0.1518 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.4606 | 3800 | 0.1328 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.4848 | 4000 | 0.1098 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.5091 | 4200 | 0.1389 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.5333 | 4400 | 0.1224 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.5576 | 4600 | 0.09 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.5818 | 4800 | 0.1162 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.6 | 4950 | - | 0.0784 | 0.6666 | 0.4404 | 0.3688 | 0.2239 | 0.9478 | 0.4952 | - | - | - | - | - | - | - | - | - | | 0.6061 | 5000 | 0.1054 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - 
| - | | 0.6303 | 5200 | 0.0949 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.6545 | 5400 | 0.1315 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.6788 | 5600 | 0.1246 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.7030 | 5800 | 0.1047 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.7273 | 6000 | 0.0861 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.7515 | 6200 | 0.103 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.7758 | 6400 | 0.1062 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | **0.8** | **6600** | **0.1275** | **0.0783** | **0.6856** | **0.4666** | **0.3928** | **0.2175** | **0.9434** | **0.5051** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | | 0.8242 | 6800 | 0.1131 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.8485 | 7000 | 0.0651 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.8727 | 7200 | 0.0657 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.8970 | 7400 | 0.1065 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.9212 | 7600 | 0.0691 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.9455 | 7800 | 0.1136 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.9697 | 8000 | 0.0834 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.9939 | 8200 | 0.0867 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 1.0 | 8250 | - | 0.0720 | 0.6876 | 0.4688 | 0.3711 | 0.1901 | 0.9408 | 0.4927 | - | - | - | - | - | - | - | - | - | | -1 | -1 | - | - | - | 0.4666 | 0.3928 | 0.2175 | 0.9434 | 0.4517 | 0.1928 | 0.5024 | 0.7369 | 0.2333 | 0.6849 | 0.2850 | 0.2651 | 0.5848 | 0.3661 | * The bold row denotes the saved checkpoint. ### Environmental Impact Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon). 
- **Energy Consumed**: 0.075 kWh
- **Carbon Emitted**: 0.029 kg of CO2
- **Hours Used**: 0.306 hours

### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB

### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 4.2.0.dev0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.1
- Datasets: 2.21.0
- Tokenizers: 0.21.1

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### SpladeLoss
```bibtex
@misc{formal2022distillationhardnegativesampling,
    title={From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective},
    author={Thibault Formal and Carlos Lassance and Benjamin Piwowarski and Stéphane Clinchant},
    year={2022},
    eprint={2205.04733},
    archivePrefix={arXiv},
    primaryClass={cs.IR},
    url={https://arxiv.org/abs/2205.04733},
}
```

#### SparseMultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

#### FlopsLoss
```bibtex
@article{paria2020minimizing,
    title={Minimizing flops to learn efficient sparse representations},
    author={Paria, Biswajit and Yeh, Chih-Kuan and Yen, Ian EH and Xu, Ning and Ravikumar, Pradeep and P{\'o}czos, Barnab{\'a}s},
    journal={arXiv preprint arXiv:2004.05665},
    year={2020}
}
```