
πŸ“ Overview

Tensordyne builds advanced AI-inference systems, enabling faster, more affordable, and sustainable generative AI.

This repository provides resources to quickly get started with Whisper-large-v3 on the Tensordyne Inference System and its SDK.

🧩 Model Details

  • Quantization: post-training quantization of the base model; no fine-tuning or additional training was performed
  • Supported data types: Tensordyne FP16 (tFP16), Tensordyne FP8 (tFP8), mixed-precision

βš™οΈ Quantization

The Tensordyne SDK offers multiple post-training quantization strategies to convert AI models for efficient inference on the Tensordyne Inference System β€” fully customizable for your optimization targets.
Here we showcase several preselected quantization variants that can be applied on the fly to convert the model to Tensordyne data types. The calibration-based strategies are defined by quantization configurations provided as JSON files.
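The SDK's actual configuration schema is not reproduced in this repository; purely as an illustration, a calibration-based quantization configuration might look something like the following (all keys and values are hypothetical, not the Tensordyne SDK's real schema):

```json
{
  "strategy": "calibration_based",
  "target_dtype": "tFP16",
  "calibration": {
    "dataset": "librispeech_asr_subset",
    "num_samples": 128
  }
}
```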

The quantized models are evaluated on a subset of the LibriSpeech ASR test set. A negative relative WER drop indicates that the quantized model performs better than the float baseline.

| Model Configuration | Absolute WER [%] | Relative WER Drop vs. BF16 | Details |
|---|---|---|---|
| BF16 | 1.933 | – | The baseline model trained in BF16 |
| calibration_based_tFP16 | 1.921 | -0.61 % | calibration-based tFP16 quantization |
| layerwise_mixed_precision | 1.909 | -1.23 % | calibration-based mixed-precision: tFP8, outliers in tFP16 |
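The relative WER drop is computed against the BF16 baseline. A minimal sketch of that arithmetic, using the absolute WER values from the table (tiny differences versus the reported drops stem from rounding of the published absolute WERs):

```python
# Relative WER change of a quantized model vs. the BF16 baseline.
# Negative values mean the quantized model outperforms the float baseline.
BASELINE_WER = 1.933  # BF16 absolute WER in percent

def relative_wer_drop(quantized_wer: float, baseline: float = BASELINE_WER) -> float:
    """Return the relative WER change in percent vs. the baseline."""
    return (quantized_wer - baseline) / baseline * 100.0

print(round(relative_wer_drop(1.921), 2))  # calibration_based_tFP16 -> -0.62
print(round(relative_wer_drop(1.909), 2))  # layerwise_mixed_precision -> -1.24
```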

πŸš€ Getting Started

Refer to the Tensordyne Hugging Face Hub tutorial for instructions on using the artifacts provided in this repository.
Our hosted documentation provides more information on Tensordyne's quantization strategies and introduces you to our SDK.
