Belle-whisper-large-v3-turbo-zh-ggml-quantized

Quantized GGML versions (q5_0, q5_1, q8_0) of Belle-whisper-large-v3-turbo-zh, for use with whisper.cpp.

For more information about the model, please refer to https://huggingface.co/BELLE-2/Belle-whisper-large-v3-turbo-zh
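
Below is a minimal sketch of loading one of the quantized GGML files through the whisper.cpp C API and transcribing Chinese audio. The file name `ggml-model-q5_0.bin` and the silent placeholder audio buffer are assumptions for illustration; substitute the actual file you downloaded and your own 16 kHz mono float PCM samples.

```cpp
// Sketch: transcribe Chinese audio with a quantized GGML model via whisper.cpp.
// Assumes whisper.h is on the include path and the model file name is a placeholder.
#include <cstdio>
#include <vector>
#include "whisper.h"

int main() {
    // Load the quantized model (the q5_0 / q5_1 / q8_0 files are loaded the same way).
    struct whisper_context * ctx = whisper_init_from_file_with_params(
        "ggml-model-q5_0.bin", whisper_context_default_params());
    if (!ctx) {
        fprintf(stderr, "failed to load model\n");
        return 1;
    }

    // Placeholder: one second of silence at 16 kHz; replace with real audio samples.
    std::vector<float> pcmf32(WHISPER_SAMPLE_RATE, 0.0f);

    // Greedy decoding with the language forced to Chinese.
    whisper_full_params params = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);
    params.language = "zh";

    if (whisper_full(ctx, params, pcmf32.data(), (int) pcmf32.size()) != 0) {
        fprintf(stderr, "transcription failed\n");
        whisper_free(ctx);
        return 1;
    }

    // Print the recognized segments.
    const int n_segments = whisper_full_n_segments(ctx);
    for (int i = 0; i < n_segments; ++i) {
        printf("%s\n", whisper_full_get_segment_text(ctx, i));
    }

    whisper_free(ctx);
    return 0;
}
```

The same files can also be passed directly to the whisper.cpp example binaries via their `-m` flag.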
