Missing <think> tags

#3 opened by sousekd

Currently llama.cpp does not return the <think> token in responses. If you know how to fix that, please share in the "Community" section!

I was researching the reasoning-formatting issues of Kimi-K2-Thinking and found this llama.cpp parameter:

-sp,   --special                        special tokens output enabled (default: false)

@unsloth mentions it here:
https://docs.unsloth.ai/models/kimi-k2-thinking-how-to-run-locally

I have not had a chance to test it yet, but I am quite positive it might help :).
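
For reference, a minimal sketch of how the flag might be passed when launching llama-server (the model path and port here are placeholders, not from this thread):

    # Enable special-token output so <think>/</think> markers are kept in responses
    llama-server -m model.gguf --special --port 8080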

Thanks, I tried --special with PrimeIntellect_INTELLECT-3-Q8_0, but it didn't work. I'll try it with some other models as well.
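
For anyone else testing, one way to check is to query the server's OpenAI-compatible endpoint and inspect the raw reply for the tags (localhost and port 8080 are assumptions based on the default setup):

    # Ask a question and look for <think> tags in the returned content
    curl http://localhost:8080/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{"messages": [{"role": "user", "content": "What is 2+2?"}]}'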
