NeoChen1024 committed
Commit abc66ab · verified · 1 Parent(s): ed1c1f1

Update README.md

Files changed (1):
  1. README.md +2 -3
README.md CHANGED
@@ -1,7 +1,6 @@
  ---
  base_model:
- - meta-llama/Llama-3.1-8B-Instruct
- - google/siglip2-so400m-patch14-384
+ - fancyfeast/llama-joycaption-beta-one-hf-llava
  tags:
  - captioning
  ---
@@ -106,4 +105,4 @@ vLLM provides the highest performance inference for JoyCaption, and an OpenAI co
  vllm serve fancyfeast/llama-joycaption-beta-one-hf-llava --max-model-len 4096 --enable-prefix-caching
  ```
 
- VLMs are a bit finicky on vLLM, and vLLM is memory hungry, so you may have to adjust settings for your particular environment, such as forcing eager mode, adjusting max-model-len, adjusting gpu_memory_utilization, etc.
+ VLMs are a bit finicky on vLLM, and vLLM is memory hungry, so you may have to adjust settings for your particular environment, such as forcing eager mode, adjusting max-model-len, adjusting gpu_memory_utilization, etc.
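
For reference, a sketch of what that tuning advice might look like in practice. The specific flag values below are illustrative assumptions, not part of this commit or the README:

```bash
# Illustrative only: a more memory-conservative vLLM launch for a constrained GPU.
# --enforce-eager disables CUDA graph capture (saves VRAM at some speed cost);
# --gpu-memory-utilization caps the fraction of GPU memory vLLM will claim.
# The 0.85 value is an assumed example, not a recommendation from the README.
vllm serve fancyfeast/llama-joycaption-beta-one-hf-llava \
  --max-model-len 4096 \
  --enable-prefix-caching \
  --enforce-eager \
  --gpu-memory-utilization 0.85
```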