This is a version of broadfield-dev/bert-mini-ner-pii-training-tuned-12270113 that has been converted to ONNX and optimized.
For a lightweight mobile/serverless setup, you only need onnxruntime and tokenizers:
```bash
pip install onnxruntime tokenizers
```
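The snippet below reads `model.onnx` from the local working directory. If you don't have the file yet, one optional way to fetch it is with the huggingface_hub package (not required at inference time; the filename is an assumption, so check the repo's file listing):

```python
# Optional: download the ONNX weights from the Hub
# (requires `pip install huggingface_hub`; the filename is an assumption)
from huggingface_hub import hf_hub_download

onnx_path = hf_hub_download(
    repo_id="broadfield-dev/bert-mini-ner-pii-mobile",
    filename="model.onnx",
)
```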
```python
from tokenizers import Tokenizer
import onnxruntime as ort
import numpy as np

# 1. Load the lightweight tokenizer (no Transformers dependency needed)
tokenizer = Tokenizer.from_pretrained("broadfield-dev/bert-mini-ner-pii-mobile")

# 2. Load the ONNX model (a local path, e.g. the file fetched above)
session = ort.InferenceSession("model.onnx")

# 3. Preprocess (simple text encoding)
text = "Run inference on mobile!"
encoding = tokenizer.encode(text)

# Prepare inputs (exact names vary by model, usually input_ids + attention_mask;
# check session.get_inputs() -- some BERT exports also expect token_type_ids)
inputs = {
    "input_ids": np.array([encoding.ids], dtype=np.int64),
    "attention_mask": np.array([encoding.attention_mask], dtype=np.int64),
}

# 4. Run inference
outputs = session.run(None, inputs)
print("Output logits shape:", outputs[0].shape)
```
This model was exported using Optimum. It uses FP32 precision (no quantization) and ships with a pre-compiled tokenizer.json for fast loading.
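For reference, a comparable export can be reproduced with the optimum-cli exporter. This is a sketch; the exact task flag and output directory here are assumptions, not the commands used for this repo:

```bash
pip install "optimum[exporters]"
# Export the fine-tuned checkpoint to ONNX for token classification (FP32, no quantization)
optimum-cli export onnx \
  --model broadfield-dev/bert-mini-ner-pii-training-tuned-12270113 \
  --task token-classification \
  onnx_output/
```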
Base model: prajjwal1/bert-mini