# bert-medium-tiny

## Model Description

A BERT model fine-tuned for binary sentiment classification on the SST-2 (Stanford Sentiment Treebank) dataset.
## Model Details

- Base model: google-bert/bert-base-uncased
- Task: text-classification
- Dataset: sst2
## Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("takedarn/bert-medium-tiny")
model = AutoModelForSequenceClassification.from_pretrained("takedarn/bert-medium-tiny")

# Tokenize the input text
text = "This movie is great!"
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)

# Run inference without tracking gradients
with torch.no_grad():
    outputs = model(**inputs)

# Convert logits to class probabilities and take the most likely class
predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)
predicted_class = torch.argmax(predictions, dim=-1)
print(f"Predicted class: {predicted_class.item()}")
```
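Note that the mapping from class index to sentiment label depends on how the checkpoint was saved. For SST-2 fine-tunes, index 0 usually means negative and index 1 positive, but `model.config.id2label` is the authoritative source for this particular model.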
## Training Details
This model was fine-tuned using the following configuration:
- Task: text-classification
- Dataset: sst2
- Base model: google-bert/bert-base-uncased
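The card does not list the exact training hyperparameters. As a rough illustration only, a fine-tune with this base model and dataset could be reproduced along the following lines; the epoch count, batch size, learning rate, and sequence length below are common defaults, not the values used for this checkpoint.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Load SST-2 and the base model; SST-2 is a two-class task
dataset = load_dataset("sst2")
tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "google-bert/bert-base-uncased", num_labels=2
)

# Tokenize the "sentence" column; max_length here is an assumption
def tokenize(batch):
    return tokenizer(
        batch["sentence"], truncation=True, padding="max_length", max_length=128
    )

tokenized = dataset.map(tokenize, batched=True)

# Illustrative hyperparameters, not the ones used for this checkpoint
args = TrainingArguments(
    output_dir="bert-medium-tiny",
    num_train_epochs=3,
    per_device_train_batch_size=32,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
)
trainer.train()
```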
## Citation
If you use this model, please cite:
```bibtex
@misc{bert_medium_tiny,
  author = {Your Name},
  title = {bert-medium-tiny},
  year = {2025},
  publisher = {Hugging Face},
  url = {https://huggingface.co/takedarn/bert-medium-tiny}
}
```