# NEED AI - Content Moderation

A BERT-based model for detecting toxic and inappropriate content in user messages and reviews.
## Model Details
- Base Model: unitary/toxic-bert
- Task: Text Classification
- Fine-tuned for: NEED Service Marketplace Platform
- Language: English
- License: MIT
## Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "yogami9/need-content-moderation"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Example usage
input_text = "Your input here"
inputs = tokenizer(input_text, return_tensors="pt", truncation=True)
with torch.no_grad():
    outputs = model(**inputs)

# The unitary/toxic-bert base is multi-label, so sigmoid gives per-label scores;
# use softmax instead if your checkpoint is single-label.
scores = torch.sigmoid(outputs.logits)[0]
for i, score in enumerate(scores.tolist()):
    print(f"{model.config.id2label[i]}: {score:.3f}")
```
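
To turn the per-label scores into a moderation decision, one simple option is to flag a message whenever any score exceeds a threshold. The helper below is a minimal sketch that continues from the snippet above; the `is_flagged` name and the 0.5 threshold are illustrative assumptions, not part of the model card.

```python
# Minimal sketch: flag a message when any toxicity score crosses a threshold.
# Reuses `tokenizer` and `model` loaded above; the 0.5 threshold is an
# assumption and should be tuned on your own moderation data.
def is_flagged(text: str, threshold: float = 0.5) -> bool:
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        scores = torch.sigmoid(model(**inputs).logits)[0]
    return bool((scores > threshold).any())

print(is_flagged("Your input here"))
```

In practice the threshold can also be set per label rather than globally, depending on how strictly each category should be moderated.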
## Training Data
Trained on curated datasets specific to the NEED platform's service categories and user interactions.
## Limitations

- Optimized for English-language text
- Best performance on NEED platform-specific queries
- May require fine-tuning for other domains (see the sketch below)
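
For adapting the model to another domain, a standard `transformers` fine-tuning loop is one option. The sketch below is a hedged illustration only: the CSV file names, the binary clean/toxic label scheme, and the hyperparameters are assumptions, not the released model's actual training setup.

```python
# Illustrative fine-tuning sketch for a new domain. File names, label scheme,
# and hyperparameters are assumptions, not the model's actual training recipe.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "yogami9/need-content-moderation"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Re-initialize the classification head for a hypothetical binary task
# (0 = clean, 1 = toxic); drop these kwargs to keep the original label set.
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=2, ignore_mismatched_sizes=True
)

# Hypothetical CSV files with "text" and integer "label" columns.
dataset = load_dataset("csv", data_files={"train": "train.csv", "validation": "val.csv"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="need-moderation-finetuned",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()
```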
## Contact
- Organization: NEED Service App
- Email: [email protected]
- GitHub: https://github.com/Need-Service-App
## Related Models
All NEED AI models: