# Sagicc/granite-8b-code-instruct-Q5_K_M-GGUF

This model was converted to GGUF format from [`ibm-granite/granite-8b-code-instruct`](https://huggingface.co/ibm-granite/granite-8b-code-instruct) using llama.cpp, after support for small Granite Code models was added in the [b3026 llama.cpp release](https://github.com/ggerganov/llama.cpp/releases/tag/b3026).

Refer to the [original model card](https://huggingface.co/ibm-granite/granite-8b-code-instruct) for more details on the model.
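If you want to reproduce the conversion yourself, the usual llama.cpp workflow looks roughly like the sketch below. The script and binary names, paths, and output filenames are assumptions for a llama.cpp checkout at b3026 or newer, not the exact commands used to produce this repo.

```bash
# Rough conversion sketch (names and paths are assumptions, not the exact
# commands used for this repo): convert the HF model to an f16 GGUF, then
# quantize it to Q5_K_M with llama.cpp's quantize tool.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
pip install -r requirements.txt

python convert-hf-to-gguf.py /path/to/granite-8b-code-instruct \
  --outtype f16 --outfile granite-8b-code-instruct-f16.gguf

# The quantize binary is named `quantize` in older builds and
# `llama-quantize` in newer ones.
make quantize
./quantize granite-8b-code-instruct-f16.gguf \
  granite-8b-code-instruct-Q5_K_M.gguf Q5_K_M
```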

## For now, this model only works with llama.cpp

## Use with llama.cpp

Install llama.cpp through brew.
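Once it is installed, you can run the model straight from this repo with `llama-cli` or `llama-server`. A minimal sketch follows, assuming the GGUF file in this repo is named `granite-8b-code-instruct-q5_k_m.gguf` (check the repo's Files tab for the actual filename); note that older llama.cpp builds name the CLI binary `main` instead of `llama-cli`.

```bash
# Install llama.cpp via Homebrew (macOS/Linux)
brew install llama.cpp

# One-off generation with llama-cli, pulling the GGUF directly from the Hub.
# The --hf-file value is an assumption; use the actual filename from this repo.
llama-cli --hf-repo Sagicc/granite-8b-code-instruct-Q5_K_M-GGUF \
  --hf-file granite-8b-code-instruct-q5_k_m.gguf \
  -p "Write a Python function that checks whether a number is prime."

# Or serve the model over a local HTTP endpoint with llama-server
llama-server --hf-repo Sagicc/granite-8b-code-instruct-Q5_K_M-GGUF \
  --hf-file granite-8b-code-instruct-q5_k_m.gguf \
  -c 2048
```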