Original model: https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct

Quantization documentation: https://docs.openvino.ai/nightly/notebooks/qwen2-vl-with-output.html

Quantization config:

import nncf

# Weight-only 8-bit asymmetric quantization: all weights are stored as INT8.
compression_configuration = {
    "mode": nncf.CompressWeightsMode.INT8_ASYM,
}