eaddario/Mistral-Small-3.2-24B-Instruct-2506-pruned-GGUF

Text Generation
GGUF
English
quant
pruned
experimental
187 GB · 1 contributor (eaddario) · History: 14 commits
Latest commit: 0804d81 "Pruned & layer-wise quantization Q3_K_S" (3 months ago)
  • imatrix · Generate imatrices · 3 months ago
  • logits · Generate base model logits · 3 months ago
  • .gitattributes · 1.6 kB · Update .gitattributes · 3 months ago
  • .gitignore · 6.78 kB · Add .gitignore · 3 months ago
  • Mistral-Small-3.2-24B-Instruct-2506-pruned-F16.gguf · 44.9 GB · Convert & prune safetensor to GGUF @ F16 · 3 months ago
  • Mistral-Small-3.2-24B-Instruct-2506-pruned-Q3_K_S.gguf · 9.01 GB · Pruned & layer-wise quantization Q3_K_S · 3 months ago
  • Mistral-Small-3.2-24B-Instruct-2506-pruned-Q4_K_M.gguf · 12.5 GB · Pruned & layer-wise quantization Q4_K_M · 3 months ago
  • Mistral-Small-3.2-24B-Instruct-2506-pruned-Q4_K_S.gguf · 11.6 GB · Pruned & layer-wise quantization Q4_K_S · 3 months ago
  • Mistral-Small-3.2-24B-Instruct-2506-pruned-Q5_K_M.gguf · 15.3 GB · Pruned & layer-wise quantization Q5_K_M · 3 months ago
  • Mistral-Small-3.2-24B-Instruct-2506-pruned-Q5_K_S.gguf · 14.5 GB · Pruned & layer-wise quantization Q5_K_S · 3 months ago
  • Mistral-Small-3.2-24B-Instruct-2506-pruned-Q6_K.gguf · 18.9 GB · Pruned & layer-wise quantization Q6_K · 3 months ago
  • Mistral-Small-3.2-24B-Instruct-2506-pruned-Q8_0.gguf · 20.3 GB · Pruned & layer-wise quantization Q8_0 · 3 months ago
  • README.md · 431 Bytes · Update README.md · 3 months ago