Commit 8f7abbc (verified) · cicdatopea · 1 parent: fb72325

Update README.md

Files changed (1): README.md (+1 −1)
README.md CHANGED
@@ -25,7 +25,7 @@ language:
 
  This model is an int4 model with group_size 128 and symmetric quantization of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) generated by [intel/auto-round](https://github.com/intel/auto-round). Load the model with `revision="14dbc8"` to use AutoGPTQ format
 
- ⚠️ Important: This model is used for internal testing with Hugginface. Please do not delete or modify without approval.
+ ⚠️ Important: This model is used for internal testing with Hugging Face and vLLM. Please do not delete or modify without approval.
 
  ## How To Use
 
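For reference, a minimal loading sketch for the AutoGPTQ-format revision named in the README, using transformers. The repo id below is a placeholder (the actual model id is not shown on this page), and a GPTQ backend (auto-gptq or gptqmodel) plus accelerate are assumed to be installed; this is an illustration, not part of the commit.

```python
# Minimal sketch: load the int4 AutoGPTQ-format revision with transformers.
# Assumptions: the repo id is a placeholder (not taken from this page), and a
# GPTQ backend (auto-gptq or gptqmodel) plus accelerate are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OPEA/<this-model-repo>"  # placeholder: replace with this repository's id

tokenizer = AutoTokenizer.from_pretrained(model_id, revision="14dbc8")
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    revision="14dbc8",   # AutoGPTQ-format revision mentioned in the README
    device_map="auto",
    torch_dtype="auto",
)

prompt = "Give me a short introduction to large language models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```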