W4A16 GPTQ quantized version of mistralai/Mistral-Large-Instruct-2411
Quantized with intel/auto-round (version git+7b8e280).
Generation command line:
auto-round --model mistral-large --scheme "W4A16" --format "auto_gptq" --output_dir "./mistral-large-gptq"
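In the W4A16 scheme, weights are quantized to 4-bit integers (with per-group scales and zero points) while activations stay in 16-bit floats. The sketch below illustrates plain group-wise asymmetric 4-bit weight quantization; it is illustrative only and is not auto-round's actual GPTQ algorithm, which additionally minimizes layer output error.

```python
import numpy as np

def quantize_w4(w, group_size=128):
    # Illustrative asymmetric 4-bit group quantization:
    # each group of `group_size` weights shares one scale and zero point.
    w = w.reshape(-1, group_size)
    wmin = w.min(axis=1, keepdims=True)
    wmax = w.max(axis=1, keepdims=True)
    scale = (wmax - wmin) / 15.0          # 4 bits -> 16 levels (0..15)
    scale = np.where(scale == 0, 1.0, scale)
    q = np.clip(np.round((w - wmin) / scale), 0, 15).astype(np.uint8)
    return q, scale, wmin

def dequantize_w4(q, scale, wmin):
    # Reconstruct fp32 weights from the 4-bit codes.
    return q.astype(np.float32) * scale + wmin

w = np.random.randn(256).astype(np.float32)
q, scale, zero = quantize_w4(w, group_size=128)
w_hat = dequantize_w4(q, scale, zero).reshape(-1)
max_err = np.abs(w - w_hat).max()       # bounded by half a quantization step
```

With rounding to the nearest level, the per-weight reconstruction error is at most half the group's quantization step, which is why group size trades memory (more scales) against accuracy.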
Notice
Licensed by Mistral AI under the Mistral AI Research License. A copy of the license is available in LICENSE.md.