Update README.md
README.md CHANGED

@@ -20,7 +20,7 @@ pipeline_tag: text-generation
 
 # Manticore 13B Chat GGML
 
-This is GGML format quantised 4-bit, 5-bit and 8-bit models of [OpenAccess AI Collective's Manticore 13B Chat](https://huggingface.co/openaccess-ai-collective/manticore-13b-chat-pyg
+This is GGML format quantised 4-bit, 5-bit and 8-bit models of [OpenAccess AI Collective's Manticore 13B Chat](https://huggingface.co/openaccess-ai-collective/manticore-13b-chat-pyg).
 
 This repo is the result of quantising to 4-bit, 5-bit and 8-bit GGML for CPU (+CUDA) inference using [llama.cpp](https://github.com/ggerganov/llama.cpp).
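The README text in this hunk points to llama.cpp for CPU (+CUDA) inference on the quantised GGML files. Below is a minimal sketch of loading one such file through the llama-cpp-python bindings; the choice of bindings, the model file name, the context size, and the USER/ASSISTANT prompt format are all assumptions and are not stated in the diff above.

```python
# Minimal sketch, not from the repo: run one of the quantised GGML files with
# the llama-cpp-python bindings for llama.cpp (CPU inference).
# The model file name and the USER/ASSISTANT prompt format are assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="manticore-13b-chat-pyg.ggmlv3.q4_0.bin",  # hypothetical local file
    n_ctx=2048,   # context window
    n_threads=8,  # CPU threads to use
)

result = llm(
    "USER: Summarise what GGML quantisation does.\nASSISTANT:",
    max_tokens=128,
    stop=["USER:"],
)
print(result["choices"][0]["text"])
```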