Sentiment Analysis Model

This model is used in our transcription service, where audio is first transcribed and then analysed by this model.

The model expects a sentence and returns a number from 1 to 5, where 1 is the most negative sentiment and 5 the most positive. A post-processing step checks the prediction confidence: if it is below 0.7, it also takes the second most probable class, averages the two labels, and applies math.ceil for optimistic rounding.
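
A minimal sketch of this post-processing rule, assuming the model exposes a probability per class; the function name and signature are illustrative, not the service's actual code:

```python
import math

def postprocess(probs: list[float], threshold: float = 0.7) -> int:
    """Map a 5-class probability distribution to a 1-5 sentiment score.

    Hypothetical sketch: if the top class's confidence is below `threshold`,
    average the two most probable labels and round up (math.ceil).
    """
    # Rank class indices by predicted probability, highest first.
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    best, second = ranked[0] + 1, ranked[1] + 1  # 0-based index -> 1-5 label

    if probs[ranked[0]] >= threshold:
        return best
    # Low confidence: average the top two labels, round up for optimism.
    return math.ceil((best + second) / 2)


# Example: top class 3 at 0.55, runner-up 4 -> ceil((3 + 4) / 2) = 4
print(postprocess([0.05, 0.10, 0.55, 0.25, 0.05]))
```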

The model is based on BERT (nlptown/bert-base-multilingual-uncased-sentiment), which has an MIT license, and fine-tuned on distilled LLM results.
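
The fine-tuning setup itself is not included here; the sketch below only shows one common way to load the base checkpoint and get per-class probabilities with the transformers library (the example sentence and the use of PyTorch tensors are assumptions):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base = "nlptown/bert-base-multilingual-uncased-sentiment"
tokenizer = AutoTokenizer.from_pretrained(base)
# The base checkpoint already predicts 5 sentiment classes (1-5 stars).
model = AutoModelForSequenceClassification.from_pretrained(base)

inputs = tokenizer("The call went really well, thank you!", return_tensors="pt")
logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # probabilities; class indices 0..4 map to scores 1..5
```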

The model was trained for 20 epochs, with the following evaluation results:

|              | Precision | Recall | F1-score | Support |
|--------------|-----------|--------|----------|---------|
| Class 1      | 0.95      | 0.88   | 0.92     | 43      |
| Class 2      | 0.78      | 0.86   | 0.82     | 37      |
| Class 3      | 0.80      | 0.72   | 0.76     | 39      |
| Class 4      | 0.79      | 0.88   | 0.83     | 66      |
| Class 5      | 0.85      | 0.78   | 0.81     | 45      |
| Accuracy     |           |        | 0.83     | 230     |
| Macro avg    | 0.84      | 0.82   | 0.83     | 230     |
| Weighted avg | 0.83      | 0.83   | 0.83     | 230     |
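
The table above follows the layout of scikit-learn's classification_report; a report in this format can be produced as sketched below (the labels and predictions here are placeholders, not the actual evaluation set):

```python
from sklearn.metrics import classification_report

# Hypothetical held-out labels and predictions for illustration only.
y_true = [1, 2, 3, 4, 5, 4, 3, 5]
y_pred = [1, 2, 3, 4, 4, 4, 2, 5]
print(classification_report(y_true, y_pred, digits=2))
```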

sentiment_model_6:

| Version | Changelog |
|---------|-----------|
| 1.0     | Initial training |
| 1.1     | Fine-tuning time and datetime to a neutral sentiment |
| 1.2     | Fine-tuning numbers to a neutral sentiment |
Model format: Safetensors, 0.2B parameters, F32 tensors.
