Upload README.md with huggingface_hub
README.md CHANGED
@@ -11,6 +11,8 @@ A collection of GGUF and quantizations for [`jina-embeddings-v4`](https://huggin
 > [!IMPORTANT]
 > We highly recommend first reading [this blog post for more technical details and the customized llama.cpp build](https://jina.ai/news/optimizing-ggufs-for-decoder-only-embedding-models).
 
+> [!TIP]
+> Multimodal v4-GGUF is now available, [check out this blog post for the walkthrough](https://jina.ai/news/multimodal-embeddings-in-llama-cpp-and-gguf/).
 
 
 ## Overview
@@ -34,9 +36,9 @@ All models above provide F16, Q8_0, Q6_K, Q5_K_M, Q4_K_M, Q3_K_M and dynamic qua
 - They cannot output multi-vector embeddings.
 - You must add `Query: ` or `Passage: ` in front of the input. [Check this table for the details](#consistency-wrt-automodelfrom_pretrained).
 
-## Multimodal
+## Multimodal Retrieval Model
 
-
+We forked llama.cpp and made it work with image input and embedding output. [Check out this new blog post for the walkthrough](https://jina.ai/news/multimodal-embeddings-in-llama-cpp-and-gguf/).
 
 ## Get Embeddings
 
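
As the second hunk notes, every input to these GGUFs must be prefixed with `Query: ` or `Passage: `. Below is a minimal sketch of applying that prefix when requesting embeddings from a llama.cpp `llama-server` through its OpenAI-compatible `/v1/embeddings` endpoint; the model filename, port, and server flags are illustrative assumptions, and v4 requires the customized llama.cpp build described in the blog post linked above.

```python
# Minimal sketch (not the official usage): embedding text with the required
# "Query: " / "Passage: " prefixes via llama-server's OpenAI-compatible
# /v1/embeddings endpoint. The model filename, port, and flags below are
# placeholder assumptions; v4 needs the customized llama.cpp build from the
# linked blog post. Illustrative server start:
#   llama-server -m jina-embeddings-v4-text-retrieval-Q4_K_M.gguf --embedding --port 8080
import requests

BASE_URL = "http://localhost:8080"  # assumed local llama-server


def embed(texts, prefix):
    """Prefix each input ("Query: " for queries, "Passage: " for documents) and embed it."""
    payload = {"input": [f"{prefix}{t}" for t in texts]}
    resp = requests.post(f"{BASE_URL}/v1/embeddings", json=payload, timeout=60)
    resp.raise_for_status()
    return [item["embedding"] for item in resp.json()["data"]]


query_vecs = embed(["what is gguf?"], prefix="Query: ")
passage_vecs = embed(["GGUF is a file format for storing quantized model weights."],
                     prefix="Passage: ")
print(len(query_vecs[0]), len(passage_vecs[0]))  # embedding dimensionality
```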
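
The new "Multimodal Retrieval Model" section points to the forked llama.cpp for producing image embeddings; the exact build and invocation are covered in the linked walkthrough. Once query and passage (or image) embeddings are in hand, retrieval reduces to a cosine-similarity ranking, sketched below with NumPy; the vectors are assumed to come from the models above.

```python
# Minimal sketch: ranking candidate embeddings (passages or images) against a
# query embedding by cosine similarity. The vectors are assumed to come from the
# text GGUFs above or from the forked llama.cpp build for image input.
import numpy as np


def cosine_rank(query_vec, candidate_vecs):
    q = np.asarray(query_vec, dtype=np.float32)
    c = np.asarray(candidate_vecs, dtype=np.float32)
    q = q / np.linalg.norm(q)
    c = c / np.linalg.norm(c, axis=1, keepdims=True)
    scores = c @ q                    # cosine similarity for each candidate
    order = np.argsort(-scores)       # highest similarity first
    return [(int(i), float(scores[i])) for i in order]


# e.g. with the vectors from the previous sketch:
# ranking = cosine_rank(query_vecs[0], passage_vecs)
```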