mmproj?

#2
by cpuQ - opened

Hello, maybe it's too early to ask, but will there be mmproj files for this model?

Not sure if this is still relevant, but I took one from unsloth/gemma-4-31B-it-GGUF, and it seems to work fine.
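For anyone landing here later, a minimal sketch of what "taking an mmproj from another repo" looks like in practice with llama.cpp's multimodal CLI. The filenames below are illustrative assumptions, not files from this thread; substitute whatever quant and mmproj you actually downloaded.

```shell
# Illustrative sketch: pair the main model GGUF with an externally
# sourced mmproj via llama.cpp's multimodal CLI (llama-mtmd-cli).
# All filenames here are placeholders, not actual release artifacts.
llama-mtmd-cli \
  -m Gemma-4-31B-Cognitive-Unshackled.Q8_0.gguf \
  --mmproj mmproj-gemma-4-31B-it.gguf \
  --image test.png \
  -p "Describe this image."
```

The mmproj only has to match the base model's vision architecture, which is why projectors extracted from other quants of the same base model can work; mismatches tend to show up as the kind of refusals and inconsistencies mentioned below.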

Yes, I have also tried mmproj files from unsloth, bartowski, and another quantized model by mradermacher, but I saw refusals and inconsistencies in small tests.

I'm not really sure why no mmproj files were provided for this model. I will queue it again and see whether they get generated this time, and if not, why mmproj extraction fails.

I assume this likely happened because an older llama.cpp version was in use at the time. It is now properly recognized as a vision model and queued to nico1:

nico1 ~# llmc add force -6000 si https://huggingface.co/aifeifei798/Gemma-4-31B-Cognitive-Unshackled worker nico1 llama nico
submit tokens: ["force","-6000","static","imatrix","worker","nico1","llama","nico","https://huggingface.co/aifeifei798/Gemma-4-31B-Cognitive-Unshackled","provenance","api-10.28.1.6-2026-04-09T23:16:21"]
https://huggingface.co/aifeifei798/Gemma-4-31B-Cognitive-Unshackled
["https://huggingface.co/aifeifei798/Gemma-4-31B-Cognitive-Unshackled",["-2000","static","imatrix","llama","nico","worker","rich1","provenance","api-10.28.1.7-2026-04-04T18:55:19"],1775321720],
https://huggingface.co/aifeifei798/Gemma-4-31B-Cognitive-Unshackled already in llmjob.submit.txt
forcing.
aifeifei798/Gemma-4-31B-Cognitive-Unshackled: vision model, forcing worker nico1

-6000 63 sI Gemma-4-31B-Cognitive-Unshackled run/noquant mmproj extraction hf Q8_0

Nice!

Awesome! Thank you.

cpuQ changed discussion status to closed
