# MetalGPT-1 GGUF
This repository contains unofficial GGUF conversions of the nn-tech/MetalGPT-1 model for use with GGUF-compatible runtimes.
MetalGPT-1 is a 32B chat model based on Qwen/Qwen3-32B, further trained with both continual pre-training and supervised fine-tuning on domain-specific data from the mining and metallurgy industry.
⚠️ Disclaimer:
This repository is not affiliated with the original authors of MetalGPT-1.
These are pure quantizations of the original model weights - no additional training, fine-tuning, or modifications were applied.
Quality, correctness, and safety of the quantized variants are not guaranteed.
See the original model card: https://huggingface.co/nn-tech/MetalGPT-1
## GGUF variants in this repository
The following GGUF quantized variants of MetalGPT-1 are provided:
| File name | Quantization | Size (GB) | Notes |
|---|---|---|---|
| MetalGPT-1-32B-Q8_0.gguf | Q8_0 | 34.8 | Best quality among these quants; requires more VRAM |
| MetalGPT-1-32B-Q6_K.gguf | Q6_K | 26.9 | High quality; lower VRAM usage than Q8_0 |
| MetalGPT-1-32B-Q4_K_M.gguf | Q4_K_M | 19.8 | Good quality; memory-efficient |
| MetalGPT-1-32B-Q4_K_S.gguf | Q4_K_S | 18.8 | Slightly more aggressive quantization than Q4_K_M |
Choose a variant based on your hardware and quality requirements:
- Q4_K_M / Q4_K_S: best options for low‑VRAM environments.
- Q6_K / Q8_0: better fidelity for demanding generation quality.
Note: Try adding the `/think` tag to your prompts if you want to explicitly trigger reasoning capabilities.
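If you only need a single file from the table above, you can download it programmatically with the `huggingface_hub` package instead of cloning the whole repository. A minimal sketch (the repository and file names are taken from this page; adjust `filename` to the quant you picked):

```python
from huggingface_hub import hf_hub_download

# Download one quantized file from this repository into the current directory.
model_path = hf_hub_download(
    repo_id="NuisanceValue/MetalGPT-1-GGUF",
    filename="MetalGPT-1-32B-Q4_K_M.gguf",  # pick any variant from the table above
    local_dir=".",
)
print(model_path)  # path to the downloaded .gguf file
```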
## VRAM guidance
These numbers are rough rules of thumb for 32B GGUF inference; actual VRAM/RAM usage depends on runtime/backend, context size (KV cache), and overhead.
- < 24 GB VRAM: you’ll likely need partial GPU offload (some weights/layers stay in system RAM). Prefer Q4_K_M / Q4_K_S.
- ~24 GB VRAM: Q4 variants typically fit best; higher quants may still require partial offload depending on context size.
- ~32 GB VRAM: Q6_K is a reasonable target; may still require tuning/offload for large contexts.
- 40 GB+ VRAM: Q8_0 is usually the go-to “max fidelity quant” option among the listed files.
- 80 GB+ VRAM: consider running the original (non-quantized) weights instead of quants if you want maximum fidelity.
Note: partial offload (keeping some layers in system RAM) can significantly reduce throughput vs full GPU offload.
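As a back-of-the-envelope check of the numbers above, total memory is roughly the quantized file size plus the KV cache plus some runtime overhead. The sketch below is only an approximation: the layer/head/head-dim values are assumptions based on the Qwen3-32B architecture (verify against the original model's config.json), and the overhead constant is a guess.

```python
# Rough memory estimate: quantized weights + fp16 KV cache + fixed overhead.
# Architecture numbers are assumptions for a Qwen3-32B-style model; check config.json.
def estimate_memory_gib(file_size_gb: float, n_ctx: int,
                        n_layers: int = 64, n_kv_heads: int = 8,
                        head_dim: int = 128, kv_bytes_per_elem: int = 2,
                        overhead_gib: float = 1.5) -> float:
    # K and V caches: one pair per layer, per KV head, per context position.
    kv_cache_bytes = 2 * n_layers * n_kv_heads * head_dim * kv_bytes_per_elem * n_ctx
    return file_size_gb + kv_cache_bytes / 1024**3 + overhead_gib

# Example: Q4_K_M (19.8 GB file) with an 8192-token context.
print(f"{estimate_memory_gib(19.8, 8192):.1f} GiB")  # ~23 GiB with these assumptions
```

With these assumptions, Q4_K_M at an 8192-token context lands close to the 24 GB boundary, which is consistent with the guidance above.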
## Usage with LM Studio
- Download and install LM Studio from the official website.
- Search for "NuisanceValue/MetalGPT-1-GGUF" in the model hub within LM Studio.
- Select a quantization variant.
- Once downloaded, select the model in the menu.
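Besides the chat UI, LM Studio can expose the loaded model through a local OpenAI-compatible server (enable it in LM Studio's developer/server view; recent versions listen on http://localhost:1234/v1 by default). A minimal sketch using the official `openai` Python client; the `model` value below is a placeholder, use whatever identifier LM Studio shows for the loaded model:

```python
from openai import OpenAI

# Point the OpenAI client at LM Studio's local server; the API key is ignored.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="metalgpt-1-gguf",  # placeholder: use the identifier shown in LM Studio
    messages=[
        {"role": "system", "content": "You are a specialist in metallurgy."},
        {"role": "user", "content": "Name the pros and cons of chloride and sulfate nickel production technologies."},
    ],
    temperature=0.7,
)
print(response.choices[0].message.content)
```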
## Usage with Ollama
- Install Ollama from the official website and ensure the `ollama` command is available in your terminal.
- In the terminal, run the model directly from Hugging Face (you can specify the desired quantization tag after a colon):

  ```bash
  ollama run hf.co/NuisanceValue/MetalGPT-1-GGUF:Q4_K_M
  ```

- After the first run, the model will appear in your local model list:

  ```bash
  ollama list
  ```
Note: You can also use Ollama through a web UI such as OpenWebUI by configuring it to connect to your Ollama server.
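Once the model has been pulled, you can also call it programmatically through Ollama's local REST API (it listens on http://localhost:11434 by default). A minimal sketch using `requests`; the model tag matches the `ollama run` command above:

```python
import requests

# Non-streaming chat request against the local Ollama server.
response = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "hf.co/NuisanceValue/MetalGPT-1-GGUF:Q4_K_M",
        "messages": [
            {"role": "user", "content": "Name the pros and cons of chloride and sulfate nickel production technologies."},
        ],
        "stream": False,  # return the full answer in one JSON response
    },
    timeout=600,
)
print(response.json()["message"]["content"])
```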
## Usage with llama.cpp
Download one of the GGUF files (for example `MetalGPT-1-32B-Q4_K_M.gguf`) and run:

```bash
./llama-cli \
  -m MetalGPT-1-32B-Q4_K_M.gguf \
  -p "Name the pros and cons of chloride and sulfate nickel production technologies." \
  --temp 0.7 \
  --top-p 0.8 \
  --top-k 70 \
  --n-predict 512 \
  --ctx-size 8192
```
Tip (GPU offload): you can add `-ngl N` (aka `--n-gpu-layers`), which controls how many layers are offloaded to VRAM while the rest stays in system RAM. Start with `-ngl -1` (try to offload all layers); if you hit an out-of-memory error, lower it (e.g., `-ngl 30`, `-ngl 20`, …) until it fits.
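The same GGUF file can also be served over HTTP with llama.cpp's bundled `llama-server` (for example `./llama-server -m MetalGPT-1-32B-Q4_K_M.gguf --ctx-size 8192 -ngl -1`, which listens on port 8080 by default). A minimal sketch querying the server's native `/completion` endpoint from Python; field names follow the llama.cpp server API and may differ between versions:

```python
import requests

# Ask llama-server for a completion via its native JSON API.
response = requests.post(
    "http://localhost:8080/completion",
    json={
        "prompt": "Name the pros and cons of chloride and sulfate nickel production technologies.",
        "n_predict": 512,
        "temperature": 0.7,
    },
    timeout=600,
)
print(response.json()["content"])
```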
## Usage with llama-cpp-python
Install `llama-cpp-python` if you haven't already:

```bash
pip install llama-cpp-python
```
Then use the following code snippet to load the model and generate text:
```python
from llama_cpp import Llama

# Path to your GGUF file
model_path = "MetalGPT-1-32B-Q4_K_M.gguf"

# Initialize the model
llm = Llama(
    model_path=model_path,
    n_gpu_layers=-1,  # Offload all layers to GPU; on an OOM error, lower this (e.g., to 30 or 20) to keep some layers in RAM.
    n_ctx=8192,       # Context window (adjust based on available VRAM)
    verbose=False,
)

messages = [
    {"role": "system", "content": "You are a specialist in metallurgy."},
    {"role": "user", "content": "Name the pros and cons of chloride and sulfate nickel production technologies."},
]

output = llm.create_chat_completion(
    messages=messages,
    max_tokens=2048,
    temperature=0.7,
    top_p=0.8,
)

print(output["choices"][0]["message"]["content"])
```
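If you want tokens to appear as they are generated rather than waiting for the full answer, llama-cpp-python also supports streaming chat completions. A short sketch reusing the `llm` and `messages` objects from the snippet above:

```python
# Stream the response chunk by chunk (OpenAI-style deltas).
stream = llm.create_chat_completion(
    messages=messages,
    max_tokens=2048,
    temperature=0.7,
    top_p=0.8,
    stream=True,
)
for chunk in stream:
    delta = chunk["choices"][0]["delta"]
    print(delta.get("content", ""), end="", flush=True)
print()
```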