NVIDIA PersonaPlex 7B v1 — Q8_0 GGUF

This model was converted from personaplex-7b-v1 to GGUF and quantized to Q8_0 using moshi.cpp.

To use the model and learn more, see moshi.cpp.
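Whatever runtime loads it, a GGUF file can be sanity-checked by reading its fixed little-endian header: a 4-byte `GGUF` magic, a uint32 version, a uint64 tensor count, and a uint64 metadata key/value count. The sketch below parses that header from raw bytes; it is a minimal illustration of the container layout, not part of moshi.cpp.

```python
import struct

GGUF_MAGIC = b"GGUF"

def parse_gguf_header(raw: bytes) -> dict:
    """Parse the fixed-size GGUF header: 4-byte magic, uint32 version,
    uint64 tensor count, uint64 metadata KV count (all little-endian)."""
    if raw[:4] != GGUF_MAGIC:
        raise ValueError(f"not a GGUF file: magic={raw[:4]!r}")
    version, n_tensors, n_kv = struct.unpack_from("<IQQ", raw, 4)
    return {"version": version, "n_tensors": n_tensors, "n_kv": n_kv}

# Synthetic header for demonstration (version 3, 2 tensors, 5 metadata keys);
# for a real check, pass the first 24 bytes of model-q8_0.gguf instead.
header = GGUF_MAGIC + struct.pack("<IQQ", 3, 2, 5)
print(parse_gguf_header(header))
```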

Quantization

| Property | Value |
|---|---|
| Original Model | nvidia/personaplex-7b-v1 |
| Quantization | Q8_0 |
| Format | GGUF |
| File | model-q8_0.gguf |
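Q8_0 quantizes weights in blocks of 32: each block stores one shared scale (the block's max absolute value divided by 127) plus 32 signed 8-bit values, about 8.5 bits per weight. The round-trip below is a simplified sketch of that scheme, not ggml's exact implementation (which stores the scale as fp16):

```python
def q8_0_quantize(block):
    """Quantize one block of 32 floats, Q8_0-style: one shared scale
    (max |x| / 127) plus 32 signed 8-bit integers."""
    assert len(block) == 32
    amax = max(abs(x) for x in block)
    scale = amax / 127.0 if amax > 0 else 1.0
    qs = [max(-127, min(127, round(x / scale))) for x in block]
    return scale, qs

def q8_0_dequantize(scale, qs):
    """Recover approximate floats from a quantized block."""
    return [scale * q for q in qs]

weights = [(-1) ** i * i / 31.0 for i in range(32)]
scale, qs = q8_0_quantize(weights)
recon = q8_0_dequantize(scale, qs)
# Rounding to the nearest step bounds the error by half a step.
err = max(abs(a - b) for a, b in zip(weights, recon))
```

Because each weight is rounded to the nearest multiple of the scale, the reconstruction error per weight is at most half the scale, which is why Q8_0 is nearly lossless in practice.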
Model size: 8B params
