Modification to use 1 image instead of 3
blog/openvino_vlm/openvino-vlm.md
@@ -18,14 +18,6 @@ Let’s first recap: A Vision Language Model (VLM) can understand both text and
   <img src="https://huggingface.co/datasets/openvino/documentation/resolve/main/blog/openvino_vlm/chat1.png">
 </figure>
 
-<figure class="image text-center">
-  <img src="https://huggingface.co/datasets/openvino/documentation/resolve/main/blog/openvino_vlm/chat2.png">
-</figure>
-
-<figure class="image text-center">
-  <img src="https://huggingface.co/datasets/openvino/documentation/resolve/main/blog/openvino_vlm/chat3.png">
-</figure>
-
 It’s impressive, but not exactly accessible. Take [CogVLM](https://github.com/THUDM/CogVLM), for example: a powerful open-source vision-language model with around 17 billion parameters (a 10B vision encoder plus a 7B language model) that can require [about 80GB of RAM](https://inference.roboflow.com/foundation/cogvlm/) to run in full precision. Inference is also relatively slow: captioning a single image takes 10 to 13 seconds on an NVIDIA T4 GPU ([Roboflow benchmark](https://inference.roboflow.com/foundation/cogvlm/?utm_source=chatgpt.com)). Users attempting to run CogVLM on CPUs have reported crashes or memory errors even with 64 GB of RAM, highlighting its impracticality for typical local deployment ([GitHub issue](https://github.com/THUDM/CogVLM/issues/162)). CogVLM is just one example; until recently, most VLMs posed the same challenge.
 
 In contrast, SmolVLM is purpose-built for low-resource environments, making it a highly efficient option for deploying vision-language models on laptops or edge devices.

@@ -137,6 +129,7 @@ print(generated_texts[0])
 ```
 Try the complete notebook [here](https://github.com/huggingface/optimum-intel/blob/main/notebooks/openvino/vision_language_quantization.ipynb).
 
+
 ## Conclusion
 
 Multimodal AI is becoming more accessible thanks to smaller, optimized models like **SmolVLM**, along with tools such as **Hugging Face Optimum** and **OpenVINO**. While deploying vision-language models locally still comes with challenges, this workflow shows that it's possible to run lightweight image-and-text models on modest hardware.
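
For context on the code block whose closing fence appears in the second hunk (it ends with `print(generated_texts[0])`): the post walks through running SmolVLM with OpenVINO via `optimum-intel`. Below is a minimal sketch of that workflow using `OVModelForVisualCausalLM`, optimum-intel's OpenVINO class for vision-language models. The checkpoint name, image URL, and prompt here are illustrative assumptions, not taken from this diff:

```python
# Minimal sketch (not the post's exact code): run SmolVLM on CPU with
# OpenVINO via optimum-intel. Assumes `pip install optimum[openvino]`.
from io import BytesIO

import requests
from PIL import Image
from transformers import AutoProcessor
from optimum.intel import OVModelForVisualCausalLM

model_id = "HuggingFaceTB/SmolVLM-Instruct"  # assumed checkpoint
processor = AutoProcessor.from_pretrained(model_id)

# export=True converts the PyTorch checkpoint to OpenVINO IR on the fly
model = OVModelForVisualCausalLM.from_pretrained(model_id, export=True)

# Illustrative test image
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"
image = Image.open(BytesIO(requests.get(url, timeout=30).content))

# Build a chat prompt with an image placeholder, then run generation
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt")

generated_ids = model.generate(**inputs, max_new_tokens=100)
generated_texts = processor.batch_decode(generated_ids, skip_special_tokens=True)
print(generated_texts[0])
```

Since `export=True` re-converts the checkpoint on every load, calling `model.save_pretrained(...)` after the first export lets you reload the OpenVINO IR directly on subsequent runs.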
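The linked notebook focuses on quantization; a hedged sketch of what weight-only 4-bit quantization at export time can look like with optimum-intel's `OVWeightQuantizationConfig` (the bit width, checkpoint, and output directory are assumptions; the notebook's actual settings may differ):

```python
# Sketch: weight-only 4-bit quantization at export time with optimum-intel.
# Settings are illustrative; the linked notebook may use different ones.
from optimum.intel import OVModelForVisualCausalLM, OVWeightQuantizationConfig

q_config = OVWeightQuantizationConfig(bits=4)
model = OVModelForVisualCausalLM.from_pretrained(
    "HuggingFaceTB/SmolVLM-Instruct",  # assumed checkpoint
    export=True,
    quantization_config=q_config,
)
model.save_pretrained("smolvlm-int4-ov")  # reload later without re-exporting
```

Weight-only 4-bit quantization cuts the memory footprint of the weights by roughly 4x compared to FP16, which is a large part of what makes CPU-only laptops viable targets for models like SmolVLM.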