Update README.md
README.md CHANGED

@@ -1,7 +1,6 @@
 ---
 base_model:
--
-- google/siglip2-so400m-patch14-384
+- fancyfeast/llama-joycaption-beta-one-hf-llava
 tags:
 - captioning
 ---

@@ -106,4 +105,4 @@ vLLM provides the highest performance inference for JoyCaption, and an OpenAI co
 vllm serve fancyfeast/llama-joycaption-beta-one-hf-llava --max-model-len 4096 --enable-prefix-caching
 ```
 
-VLMs are a bit finicky on vLLM, and vLLM is memory hungry, so you may have to adjust settings for your particular environment, such as forcing eager mode, adjusting max-model-len, adjusting gpu_memory_utilization, etc.
+VLMs are a bit finicky on vLLM, and vLLM is memory hungry, so you may have to adjust settings for your particular environment, such as forcing eager mode, adjusting max-model-len, adjusting gpu_memory_utilization, etc.
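The note added at the end of the diff mentions forcing eager mode and adjusting max-model-len and gpu_memory_utilization. A minimal sketch of what those adjustments look like as `vllm serve` flags, with illustrative values rather than recommendations:

```bash
# The README's serve command, plus the tuning knobs the note mentions:
# --enforce-eager disables CUDA graph capture (slower, but saves memory),
# --max-model-len caps the context length, and --gpu-memory-utilization
# limits the fraction of VRAM vLLM pre-allocates (0.9 by default).
vllm serve fancyfeast/llama-joycaption-beta-one-hf-llava \
  --max-model-len 4096 \
  --enable-prefix-caching \
  --enforce-eager \
  --gpu-memory-utilization 0.85
```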
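The hunk context also notes that vLLM exposes an OpenAI-compatible server. Once `vllm serve` is up, a caption request can be sent with any OpenAI-style client; a minimal curl sketch, assuming the default port 8000 and a placeholder image URL and prompt:

```bash
# Send one image plus a caption instruction to the OpenAI-compatible
# chat completions endpoint that vllm serve exposes.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "fancyfeast/llama-joycaption-beta-one-hf-llava",
    "messages": [{
      "role": "user",
      "content": [
        {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        {"type": "text", "text": "Write a descriptive caption for this image."}
      ]
    }],
    "max_tokens": 512
  }'
```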