---
pipeline_tag: sentence-similarity
language: en
license: mit
tags:
  - passage-retrieval
  - sentence-similarity
  - pruned
library_name: sentence-transformers
base_model: intfloat/multilingual-e5-base
base_model_relation: quantized
---

# 🇬🇧 english-multilingual-e5-base

This model is a 58.0% smaller version of [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) for the English language, created with the mtem-pruner Hugging Face Space.

This pruned model should perform similarly to the original on English-language tasks while having a much smaller memory footprint. However, it may perform poorly on the other languages covered by the original multilingual model, since tokens uncommon in English were removed from the vocabulary.

## Usage

You can use this model with the Transformers library:

```python
from transformers import AutoModel, AutoTokenizer

model_name = "KyberNull/english-multilingual-e5-base"
model = AutoModel.from_pretrained(model_name, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True, use_fast=True)
```
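With the raw Transformers API you still need to pool the token embeddings yourself. A minimal sketch, following the usage documented for the original `intfloat/multilingual-e5-base` model (the `"query: "` / `"passage: "` prefixes and attention-mask-weighted mean pooling are taken from that model card, not verified against this pruned checkpoint):

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_name = "KyberNull/english-multilingual-e5-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

# E5-family models expect "query: " / "passage: " input prefixes.
texts = [
    "query: how do sentence embeddings work",
    "passage: Sentence embeddings map text to dense vectors.",
]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**batch)

# Mean-pool the last hidden states, ignoring padding tokens.
mask = batch["attention_mask"].unsqueeze(-1).float()
emb = (outputs.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
emb = F.normalize(emb, p=2, dim=1)  # unit-length vectors

score = float(emb[0] @ emb[1])  # cosine similarity of query and passage
```

Because the embeddings are L2-normalized, the dot product between two of them is their cosine similarity.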

Or with the sentence-transformers library:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("KyberNull/english-multilingual-e5-base")
```

Credits: cc @antoinelouis