# NVIDIA PersonaPlex 7B v1 - Q8_0 GGUF
This model was converted from nvidia/personaplex-7b-v1 to the GGUF format and quantized to Q8_0 using moshi.cpp. See moshi.cpp for usage instructions and further details.
## Quantization
| Property | Value |
|---|---|
| Original Model | nvidia/personaplex-7b-v1 |
| Quantization | Q8_0 |
| Format | GGUF |
| File | model-q8_0.gguf |
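As a quick sanity check after downloading, you can verify that `model-q8_0.gguf` is a valid GGUF file by reading its header. A minimal sketch (the GGUF container starts with the 4-byte magic `GGUF` followed by a little-endian `uint32` format version):

```python
import struct

def read_gguf_version(path: str) -> int:
    """Return the GGUF format version, or raise if the file is not GGUF."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"{path} is not a GGUF file (magic={magic!r})")
        # Format version is a little-endian uint32 right after the magic.
        (version,) = struct.unpack("<I", f.read(4))
    return version

# Example (path is hypothetical):
# version = read_gguf_version("model-q8_0.gguf")
```

This only inspects the container header; full tensor and metadata parsing is left to moshi.cpp itself.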