---
library_name: transformers
tags:
- text-classification
- prompt-classification
- user-vs-system
- transformers
- distilbert
---

# 🧠 DistilBERT Prompt Classifier

This is a fine-tuned DistilBERT model that classifies a prompt as either a **"user prompt"** or a **"system prompt"**. It is useful for distinguishing between roles in conversation-based datasets, such as those used for chatbots, assistants, or instruction tuning.

## ✨ Model Details

- **Model Name:** distilbert-prompt-classifier
- **Developed by:** [Mayuresh Mane](mailto:mayuresh.mane@reaktr.ai)
- **Base Model:** `distilbert-base-uncased`
- **Task:** Text Classification (Binary)
- **Labels:** `0 = system prompt`, `1 = user prompt`
- **Language:** English
- **License:** Apache 2.0
- **Framework:** 🤗 Transformers

## 🔗 Model Sources

- **Model Hub:** [rushi-shaharao/distilbert-prompt-classifier](https://huggingface.co/rushi-shaharao/distilbert-prompt-classifier)

## 💡 Uses

### ✅ Direct Use

Use this model to classify any single prompt as either a system prompt or a user prompt.

### 🚫 Out-of-Scope Use

- Not intended for multi-language prompt classification.
- May not generalize well to noisy or adversarial text outside typical prompt formatting.

## 🧪 How to Use

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import torch.nn.functional as F

model_name = "rushi-shaharao/distilbert-prompt-classifier"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

prompt = "You are a helpful assistant."
inputs = tokenizer(prompt, return_tensors="pt", truncation=True, padding=True)

with torch.no_grad():
    outputs = model(**inputs)

probs = F.softmax(outputs.logits, dim=1)
predicted_class = torch.argmax(probs, dim=1).item()

label_map = {0: "system prompt", 1: "user prompt"}
print(f"Predicted: {label_map[predicted_class]} ({probs[0][predicted_class]:.2f} confidence)")
```
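The post-processing step in the example above (softmax over the logits, then argmax into `label_map`) can be isolated and sanity-checked without downloading the model. The sketch below uses dummy logits in place of `model(**inputs).logits`; the `decode` helper and the example tensor are illustrative, not part of the model's API:

```python
import torch
import torch.nn.functional as F

label_map = {0: "system prompt", 1: "user prompt"}

def decode(logits: torch.Tensor):
    """Turn raw classifier logits of shape (batch, 2) into (label, confidence) pairs."""
    probs = F.softmax(logits, dim=1)
    preds = torch.argmax(probs, dim=1)
    return [(label_map[p.item()], probs[i, p].item()) for i, p in enumerate(preds)]

# Dummy logits standing in for model(**inputs).logits
logits = torch.tensor([[2.0, -1.0], [-0.5, 1.5]])
print(decode(logits))  # first row maps to "system prompt", second to "user prompt"
```

This also extends naturally to batched inputs: tokenize a list of prompts with `padding=True`, run one forward pass, and pass the whole logits tensor to `decode`.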