Can we please get this model in the same quant?
#1 opened by groxaxo
Hi, could you please quantize this model, inclusionAI/Ling-flash-2.0, to q2ks? Thank you very much! Keep up the good work.
@groxaxo
Here is the model: Intel/Ling-flash-2.0-gguf-q2ks-mixed-AutoRound.
If you have model requests in the future, please open an issue on our GitHub (https://github.com/intel/auto-round) so that we can respond faster.
Thank you very much, will do!