Dataset columns (from the dataset-viewer header):

- title: string, 1-300 chars
- score: int64, 0 to 8.54k
- selftext: string, 0-41.5k chars
- created: timestamp[ns], 2023-04-01 04:30:41 to 2026-03-04 02:14:14
- url: string, 0-878 chars
- author: string, 3-20 chars
- domain: string, 0-82 chars
- edited: timestamp[ns], 1970-01-01 00:00:00 to 2026-02-19 14:51:53
- gilded: int64, 0 to 2
- gildings: string, 7 classes
- id: string, 7 chars
- locked: bool, 2 classes
- media: string, 646-1.8k chars
- name: string, 10 chars
- permalink: string, 33-82 chars
- spoiler: bool, 2 classes
- stickied: bool, 2 classes
- thumbnail: string, 4-213 chars
- ups: int64, 0 to 8.54k
- preview: string, 301-5.01k chars

Each record below lists its fields in this order.
Extended Godot MCP from 20 to 149 tools - aiming for fully autonomous game development
1
I have been working on extending the original godot-mcp by Coding Solo (Solomon Elias), taking it from 20 tools to 149 tools that now cover pretty much every aspect of Godot 4.x engine control. The reason I forked rather than opening a PR is that the original repository does not seem to be actively maintained anymore, and the scope of changes is massive, essentially a rewrite of most of the tool surface. That said, full credit and thanks go to Coding Solo for building the foundational architecture, the TypeScript MCP server, the headless GDScript operations system, and the TCP-based runtime interaction, all of which made this possible. The development was done with significant help from Claude Code as a coding partner.

The current toolset spans runtime code execution (game_eval with full await support), node property inspection and manipulation, scene file parsing and modification, signal management, physics configuration (bodies, joints, raycasts, gravity), full audio control (playback and bus management), animation creation with keyframes and tweens, UI theming, shader parameters, CSG boolean operations, procedural mesh generation, MultiMesh instancing, TileMap operations, navigation pathfinding, particle systems, HTTP/WebSocket/ENet multiplayer networking, input simulation (keyboard, mouse, touch, gamepad), debug drawing, viewport management, project settings, export presets, and more.

All 149 tools have been tested and are working, but more real-world testing would be incredibly valuable, and if anyone finds issues I would genuinely appreciate bug reports. The long-term goal is to turn this into a fully autonomous game development MCP where an AI agent can create, iterate, and test a complete game without manual intervention. PRs and issues are very welcome, and if this is useful to you, feel free to use it.

Repo: https://github.com/tugcantopaloglu/godot-mcp
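To make the "fully autonomous" goal concrete, here is a minimal sketch of how an agent-side loop might exercise a game_eval-style tool. The tool name, the argument shape, and the `call_tool` helper are assumptions for illustration, not the repo's documented API.

```python
# Hypothetical sketch only: `call_tool(name, arguments)` stands in for whatever MCP
# client is driving the server (Claude Code, a custom JSON-RPC client, etc.).
# The "game_eval" name and its argument shape are assumptions, not the repo's schema.
import asyncio

async def smoke_test(call_tool) -> int:
    # Run GDScript inside the live game session and read the result back,
    # relying on the await support described above.
    result = await call_tool("game_eval", {
        "code": (
            "await get_tree().create_timer(0.5).timeout\n"
            "return get_tree().current_scene.get_child_count()"
        )
    })
    child_count = int(result)
    assert child_count > 0, "scene is empty - the generation step likely failed"
    return child_count
```

An autonomous loop would alternate scene/script edits with checks like this until the game behaves as intended.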
2026-03-03T10:22:47
https://www.reddit.com/r/LocalLLaMA/comments/1rjlru3/extended_godot_mcp_from_20_to_149_tools_aiming/
5Y5T3M0V3RDR1V3
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjlru3
false
null
t3_1rjlru3
/r/LocalLLaMA/comments/1rjlru3/extended_godot_mcp_from_20_to_149_tools_aiming/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/X2auq6hbn7dIKHcUgMqcTUZ3zozE9iLRMGlfU7Glt9o.png?auto=webp&s=10108f0d19255706430622b37c6c4fadc51fdd91', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/X2auq6hbn7dIKHcUgMqcTUZ3zozE9iLRMGlfU7Glt9o.png?width=108&crop=smart&auto=webp&s=f7bf4373cfa7f58d31da601a27b2ad36847501a5', 'width': 108, 'height': 54}, {'url': 'https://external-preview.redd.it/X2auq6hbn7dIKHcUgMqcTUZ3zozE9iLRMGlfU7Glt9o.png?width=216&crop=smart&auto=webp&s=7c30ebffda1f9f2f119851389ad8641885577fda', 'width': 216, 'height': 108}, {'url': 'https://external-preview.redd.it/X2auq6hbn7dIKHcUgMqcTUZ3zozE9iLRMGlfU7Glt9o.png?width=320&crop=smart&auto=webp&s=8001561a82815dae0051f29f0a0c0bdfd5a83473', 'width': 320, 'height': 160}, {'url': 'https://external-preview.redd.it/X2auq6hbn7dIKHcUgMqcTUZ3zozE9iLRMGlfU7Glt9o.png?width=640&crop=smart&auto=webp&s=d36f6b5b54dcc52514f7f2c5fde99c85c7f3c0d4', 'width': 640, 'height': 320}, {'url': 'https://external-preview.redd.it/X2auq6hbn7dIKHcUgMqcTUZ3zozE9iLRMGlfU7Glt9o.png?width=960&crop=smart&auto=webp&s=411514f8507626362535a0c0d7489af1aacbd619', 'width': 960, 'height': 480}, {'url': 'https://external-preview.redd.it/X2auq6hbn7dIKHcUgMqcTUZ3zozE9iLRMGlfU7Glt9o.png?width=1080&crop=smart&auto=webp&s=2b2d66cee04cf2c40b184ed95f7bff8bb372a73b', 'width': 1080, 'height': 540}], 'variants': {}, 'id': 'X2auq6hbn7dIKHcUgMqcTUZ3zozE9iLRMGlfU7Glt9o'}], 'enabled': False}
agent-audit — estimate what your agent workflows will cost before running them (supports Ollama/OpenAI/Anthropic)
1
[removed]
2026-03-03T10:12:33
https://www.reddit.com/r/LocalLLaMA/comments/1rjllzi/agentaudit_estimate_what_your_agent_workflows/
AreteDriver
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjllzi
false
null
t3_1rjllzi
/r/LocalLLaMA/comments/1rjllzi/agentaudit_estimate_what_your_agent_workflows/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/nR7yoLP5EsI3MMMt2mX5iB82ra3EVhLEUdQBZBM89jI.png?auto=webp&s=194ed02aa5071a6e1a40cbbca923287a8d698ff2', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/nR7yoLP5EsI3MMMt2mX5iB82ra3EVhLEUdQBZBM89jI.png?width=108&crop=smart&auto=webp&s=54922af7a950689db56138640f140d48f0e0b2d0', 'width': 108, 'height': 54}, {'url': 'https://external-preview.redd.it/nR7yoLP5EsI3MMMt2mX5iB82ra3EVhLEUdQBZBM89jI.png?width=216&crop=smart&auto=webp&s=5c40303a4b29faa41ba1e7321d87b856c9006a10', 'width': 216, 'height': 108}, {'url': 'https://external-preview.redd.it/nR7yoLP5EsI3MMMt2mX5iB82ra3EVhLEUdQBZBM89jI.png?width=320&crop=smart&auto=webp&s=22012c35c8b1eaa90138733ca3b17fe610c52872', 'width': 320, 'height': 160}, {'url': 'https://external-preview.redd.it/nR7yoLP5EsI3MMMt2mX5iB82ra3EVhLEUdQBZBM89jI.png?width=640&crop=smart&auto=webp&s=488d0800b2ae98c0597abb0f44bc5443c2a856b8', 'width': 640, 'height': 320}, {'url': 'https://external-preview.redd.it/nR7yoLP5EsI3MMMt2mX5iB82ra3EVhLEUdQBZBM89jI.png?width=960&crop=smart&auto=webp&s=321836bba163f407261af3040a8a3fb51b24e0bb', 'width': 960, 'height': 480}, {'url': 'https://external-preview.redd.it/nR7yoLP5EsI3MMMt2mX5iB82ra3EVhLEUdQBZBM89jI.png?width=1080&crop=smart&auto=webp&s=08bf79e7a445b3215f5bc77cb52fe55b8ce148e1', 'width': 1080, 'height': 540}], 'variants': {}, 'id': 'nR7yoLP5EsI3MMMt2mX5iB82ra3EVhLEUdQBZBM89jI'}], 'enabled': False}
Anyone here using Openwebui experienced their OWUI jumping between versions?
1
[removed]
2026-03-03T10:00:54
https://www.reddit.com/r/LocalLLaMA/comments/1rjlf22/anyone_here_using_openwebui_experienced_their/
munkiemagik
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjlf22
false
null
t3_1rjlf22
/r/LocalLLaMA/comments/1rjlf22/anyone_here_using_openwebui_experienced_their/
false
false
self
1
null
Question on running Qwen3.5 397B Q4_K_M
1
So here is a scenario: I have a machine running a Ryzen 5, 48 GB RAM, a 3060 12GB card, and a 1 TB NVMe. Now, we would say it is impossible to run a big model like this on this kind of machine, right? Well, I have accomplished it and got 1.4 t/s. Not fast, but it is running! I was just wondering what the community's thoughts on this are. Are 397B models still worth trying to get running locally?
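For anyone wondering how this can work at all, a rough back-of-envelope sketch; the bits-per-weight, per-token read volume, and SSD speed below are assumptions for illustration, not measurements from this machine.

```python
# Rough arithmetic only; every constant here is an assumption.
params = 397e9
bits_per_weight = 4.8                      # Q4_K_M averages roughly 4.5-5 bits/weight
file_gb = params * bits_per_weight / 8 / 1e9
print(f"GGUF size ~= {file_gb:.0f} GB")    # ~240 GB, far beyond 48 GB RAM + 12 GB VRAM

# llama.cpp mmaps the file, so weights that don't fit in RAM/VRAM stream from the
# NVMe on demand; generation speed is then bounded by how many gigabytes each token
# has to pull off the SSD.
touched_gb_per_token = 3.0                 # assumption: depends on active experts / cache hits
nvme_gbps = 3.5                            # typical consumer NVMe sequential read
print(f"~{nvme_gbps / touched_gb_per_token:.1f} tok/s upper bound from disk alone")
```

Under those assumptions the disk-bound ceiling lands around 1 t/s, which is the same order of magnitude as the 1.4 t/s reported here.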
2026-03-03T09:58:32
https://www.reddit.com/r/LocalLLaMA/comments/1rjldjb/question_on_running_qwen35_397b_q4_k_m/
Last-Shake-9874
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjldjb
false
null
t3_1rjldjb
/r/LocalLLaMA/comments/1rjldjb/question_on_running_qwen35_397b_q4_k_m/
false
false
self
1
null
Finished a Qwen 3.5 9B Opus 4.5 Distill!
1
So with Qwen 3.5 9b just released, I fine-tuned a heretic model on opus 4.6 datasets, coding, and openclaw datasets. Here it is: [https://huggingface.co/crownelius/Crow-9B-Opus-4.6-Distill-Heretic\_Qwen3.5](https://huggingface.co/crownelius/Crow-9B-Opus-4.6-Distill-Heretic_Qwen3.5) Please, if you find it useful, support me on kofi, and of course like and follow on Huggingface! I would really appreciate it! :)
2026-03-03T09:53:53
https://www.reddit.com/r/LocalLLaMA/comments/1rjlaxj/finished_a_qwen_35_9b_opus_45_distill/
volious-ka
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjlaxj
false
null
t3_1rjlaxj
/r/LocalLLaMA/comments/1rjlaxj/finished_a_qwen_35_9b_opus_45_distill/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/DsHsPSH8IheGpUsoAIuFGqT3sKGbqFBTytTMYuwNxlk.png?auto=webp&s=8ea0ef2fbef742448a5836d51122474c788faa07', 'width': 1200, 'height': 648}, 'resolutions': [{'url': 'https://external-preview.redd.it/DsHsPSH8IheGpUsoAIuFGqT3sKGbqFBTytTMYuwNxlk.png?width=108&crop=smart&auto=webp&s=b91d3e229abba17f3d9aa37f889007c90241fae6', 'width': 108, 'height': 58}, {'url': 'https://external-preview.redd.it/DsHsPSH8IheGpUsoAIuFGqT3sKGbqFBTytTMYuwNxlk.png?width=216&crop=smart&auto=webp&s=48ae7d08cd6a6687501603e26c62c05413827745', 'width': 216, 'height': 116}, {'url': 'https://external-preview.redd.it/DsHsPSH8IheGpUsoAIuFGqT3sKGbqFBTytTMYuwNxlk.png?width=320&crop=smart&auto=webp&s=486e285205bcfbba89cbda3e9651b61f32cc6845', 'width': 320, 'height': 172}, {'url': 'https://external-preview.redd.it/DsHsPSH8IheGpUsoAIuFGqT3sKGbqFBTytTMYuwNxlk.png?width=640&crop=smart&auto=webp&s=58516361ea3b0d9e15fad747ca55ace29ac9d58b', 'width': 640, 'height': 345}, {'url': 'https://external-preview.redd.it/DsHsPSH8IheGpUsoAIuFGqT3sKGbqFBTytTMYuwNxlk.png?width=960&crop=smart&auto=webp&s=c77c89ae00106dbd5b5bb6fdace2ac0b4317cde5', 'width': 960, 'height': 518}, {'url': 'https://external-preview.redd.it/DsHsPSH8IheGpUsoAIuFGqT3sKGbqFBTytTMYuwNxlk.png?width=1080&crop=smart&auto=webp&s=f71d5a0e6357e29efc44a3e5321eda7c6d82b741', 'width': 1080, 'height': 583}], 'variants': {}, 'id': 'DsHsPSH8IheGpUsoAIuFGqT3sKGbqFBTytTMYuwNxlk'}], 'enabled': False}
OmniLottie: Generating Vector Animations
1
# Generating Vector Animations via Parameterized Lottie Tokens
2026-03-03T09:48:20
https://openvglab.github.io/OmniLottie/
phone_radio_tv
openvglab.github.io
1970-01-01T00:00:00
0
{}
1rjl7wn
false
null
t3_1rjl7wn
/r/LocalLLaMA/comments/1rjl7wn/omnilottie_generating_vector_animations/
false
false
default
1
null
What model are people using to transform themselves into celebs in videos?
1
[removed]
2026-03-03T09:47:13
https://www.reddit.com/r/LocalLLaMA/comments/1rjl77h/what_model_are_people_using_to_transform/
MelodicWebAgent
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjl77h
false
null
t3_1rjl77h
/r/LocalLLaMA/comments/1rjl77h/what_model_are_people_using_to_transform/
false
false
self
1
null
Fast & Free VLM for object ID + Quality filtering? (Book/Phone/Mug)
1
I'm building a pipeline to identify common objects (cars, dogs, cards) from user uploads, but I need a "Gatekeeper" layer. Basically, I want the model to reject the image if it's low quality/blurry before it even tries to identify the object; if it passes the quality check, it should broadly identify the object and then pass it on to a more capable model ($$$). Looking for the best free/open-weight VLM that balances speed and accuracy. Is Gemini 2.5 Flash still the play for speed, or has Gemma 3 overtaken it for local accuracy? I've also heard Qwen3-VL is better at not hallucinating objects that aren't there. Also, has anyone successfully prompted a VLM to reliably self-report 'Low Quality' without it trying to 'guess' the object anyway?
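One cheap way to build the gatekeeper without burning VLM calls on obviously bad uploads is a classical blur check first, with the VLM only seeing images that pass. A minimal sketch; the threshold value is an assumption you would tune on your own data.

```python
import cv2

def passes_quality_gate(image_path: str, blur_threshold: float = 100.0) -> bool:
    """Variance of the Laplacian is a standard sharpness metric: low variance
    means few edges, i.e. a blurry or near-empty image."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        return False                      # unreadable upload: reject outright
    return cv2.Laplacian(img, cv2.CV_64F).var() >= blur_threshold

# Only images that pass go on to the cheap VLM for broad identification,
# and only confident hits get escalated to the expensive model.
```

With the blur rejection handled upstream, the VLM prompt only has to answer "what is it," which also makes the "don't guess on low quality" instruction easier to enforce.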
2026-03-03T09:31:51
https://www.reddit.com/r/LocalLLaMA/comments/1rjkyq9/fast_free_vlm_for_object_id_quality_filtering/
Born-Mastodon443
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjkyq9
false
null
t3_1rjkyq9
/r/LocalLLaMA/comments/1rjkyq9/fast_free_vlm_for_object_id_quality_filtering/
false
false
self
1
null
How do you test your agents before deploying?
1
I have built a couple of agents for my customers on langchain. How do I test them at scale before deploying?
2026-03-03T09:17:43
https://www.reddit.com/r/LocalLLaMA/comments/1rjkr2u/how_do_you_test_your_agents_before_deploying/
Reasonable_Play_9632
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjkr2u
false
null
t3_1rjkr2u
/r/LocalLLaMA/comments/1rjkr2u/how_do_you_test_your_agents_before_deploying/
false
false
self
1
null
Model cognitive ergonomics understanding
1
[removed]
2026-03-03T09:05:20
https://www.reddit.com/r/LocalLLaMA/comments/1rjkk67/model_cognitive_ergonomics_understanding/
plknkl_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjkk67
false
null
t3_1rjkk67
/r/LocalLLaMA/comments/1rjkk67/model_cognitive_ergonomics_understanding/
false
false
self
1
null
Hot Take: Most AI Startups Don't Have a Model Problem, They Have a Systems Problem
1
I've been watching a pattern across early-stage AI companies. Whenever training slows down or scaling fails, the first reaction is: "We need a better model."

But after digging into several distributed setups, the real issues were:

* Suboptimal GPU interconnect topology
* Network bottlenecks during all-reduce
* Inconsistent storage throughput
* Improper job orchestration
* NCCL timeouts under load
* Weak checkpoint recovery

The model architecture was never the problem. The cluster was.

Here's something I don't see discussed enough: when you move from 8 GPUs → 64+ GPUs, you're no longer solving a modeling problem, you're solving a distributed systems problem. Latency amplification becomes real, network contention becomes visible, failure probability scales with node count. Yet most AI teams are staffed heavily with ML engineers and lightly with distributed systems engineers.

I'm starting to think the next competitive moat in AI isn't just model quality, it's infrastructure reliability at scale.

Curious to hear from others running multi-node training: have you ever blamed the model when the real issue was infra? Would love to hear war stories.
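A concrete way to see the "it's the cluster, not the model" effect is a bare all-reduce microbenchmark run outside any training code. A minimal sketch with torch.distributed; the message size and iteration count are arbitrary choices.

```python
# Launch with: torchrun --nproc_per_node=8 allreduce_bench.py
# (add --nnodes/--rdzv_endpoint for multi-node). If the reported bandwidth is far
# below the fabric's spec, the training job was never going to scale, whatever the model.
import os
import time
import torch
import torch.distributed as dist

def main():
    dist.init_process_group("nccl")
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
    x = torch.randn(64 * 1024 * 1024, device="cuda")   # 256 MB of fp32

    for _ in range(5):                                  # warmup
        dist.all_reduce(x)
    torch.cuda.synchronize()

    iters = 20
    t0 = time.perf_counter()
    for _ in range(iters):
        dist.all_reduce(x)
    torch.cuda.synchronize()
    dt = (time.perf_counter() - t0) / iters

    if dist.get_rank() == 0:
        gb = x.numel() * x.element_size() / 1e9
        print(f"all_reduce of {gb:.2f} GB: {dt * 1e3:.1f} ms  (~{gb / dt:.1f} GB/s algbw)")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```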
2026-03-03T08:57:10
https://www.reddit.com/r/LocalLLaMA/comments/1rjkf7s/hot_take_most_ai_startups_dont_have_a_model/
Express_Problem_609
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjkf7s
false
null
t3_1rjkf7s
/r/LocalLLaMA/comments/1rjkf7s/hot_take_most_ai_startups_dont_have_a_model/
false
false
self
1
null
Local model suggestions for medium end pc for coding
1
So I have an old laptop that I've installed Ubuntu Server on and am using as a home server. I want to run a local LLM on it and then have it power OpenCode (an open-source copy of Claude Code) on my main laptop. My home server is an old ThinkPad with these specs: i7 CPU, 16 GB RAM, Nvidia 940MX. Now I know my major bottleneck is the GPU and that I probably can't run any amazing models on it. But I had the opportunity to use Claude Code and honestly it's amazing (mainly because of the infra and ease of use). So if I can somehow get something that runs even half as well as that, I'll consider it a win. Any suggestions for the models? Any tips or advice would be appreciated as well.
2026-03-03T08:49:05
https://www.reddit.com/r/LocalLLaMA/comments/1rjkarj/local_model_suggestions_for_medium_end_pc_for/
Hades_Kerbex22
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjkarj
false
null
t3_1rjkarj
/r/LocalLLaMA/comments/1rjkarj/local_model_suggestions_for_medium_end_pc_for/
false
false
self
1
null
Are all models censored like this?
1
I asked minimax to write code to get an API key from a website and it refused, saying it won't do things like that. Are there any models that won't refuse your instructions?
2026-03-03T08:47:22
https://www.reddit.com/r/LocalLLaMA/comments/1rjk9tt/are_all_models_censored_like_this/
MrMrsPotts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjk9tt
false
null
t3_1rjk9tt
/r/LocalLLaMA/comments/1rjk9tt/are_all_models_censored_like_this/
false
false
self
1
null
Designing a secure local AI agent with tool execution — architectural advice needed
1
[removed]
2026-03-03T08:38:50
https://www.reddit.com/r/LocalLLaMA/comments/1rjk4yg/designing_a_secure_local_ai_agent_with_tool/
South_Seesaw_1496
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjk4yg
false
null
t3_1rjk4yg
/r/LocalLLaMA/comments/1rjk4yg/designing_a_secure_local_ai_agent_with_tool/
false
false
self
1
null
I'm a noob to local inference, how do you choose the right app?
1
I've known about Ollama for a while, and ignorantly thought it was the only option for a long time. Then I learned about llama.cpp, and then about the many, many more options there are once I learned how to use Hugging Face. Obviously, the model you want to use can itself help determine which app you need. That aside, how do you choose? What are the differences?
2026-03-03T08:34:09
https://www.reddit.com/r/LocalLLaMA/comments/1rjk2dq/im_a_noob_to_local_inference_how_do_you_choose/
Odd-Aside456
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjk2dq
false
null
t3_1rjk2dq
/r/LocalLLaMA/comments/1rjk2dq/im_a_noob_to_local_inference_how_do_you_choose/
false
false
self
1
null
I built CreedFlow — a local-first desktop app that orchestrates AI agents to build your projects from a single description [Open Source]
1
[removed]
2026-03-03T08:32:30
https://www.reddit.com/gallery/1rjk1ew
TheArcQ
reddit.com
1970-01-01T00:00:00
0
{}
1rjk1ew
false
null
t3_1rjk1ew
/r/LocalLLaMA/comments/1rjk1ew/i_built_creedflow_a_localfirst_desktop_app_that/
false
false
https://preview.redd.it/…ee48e429d637a746
1
null
How can I know if downloaded models have a newer version? (LM Studio)
1
If I download a model in LM Studio, and then it gets updated online with fixes/improvements, how am I supposed to know and update? I don't think I get a notification... Or an indication on the version I have locally vs the online version. Am I missing something? This mostly concerns LM Studio, but if it's a broader issue, I am interested in all possible solutions.
2026-03-03T08:22:21
https://www.reddit.com/r/LocalLLaMA/comments/1rjjvqy/how_can_i_know_if_downloaded_models_have_a_newer/
cangaroo_hamam
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjjvqy
false
null
t3_1rjjvqy
/r/LocalLLaMA/comments/1rjjvqy/how_can_i_know_if_downloaded_models_have_a_newer/
false
false
self
1
null
vLLM on V100 for Qwen - Newer models
1
I am struggling to run vLLM on my V100 GPU. I am trying to run the newest models like Qwen 9B. I have tried the vLLM nightly + the latest transformers, etc., but they still don't work together, and I am unable to make it run. Any advice will be much appreciated.
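Not enough detail here to diagnose the exact failure, but the most common V100 pitfall is dtype: V100 is compute capability 7.0 with no bfloat16 support, while most new model configs default to bf16, so vLLM refuses to start unless float16 is forced. A minimal sketch; the model name is a placeholder.

```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen3-8B",          # placeholder; substitute the actual checkpoint
    dtype="float16",                # V100 (sm70) has no bfloat16 support
    gpu_memory_utilization=0.90,
    max_model_len=8192,
)
out = llm.generate(["Hello from a V100"], SamplingParams(max_tokens=32))
print(out[0].outputs[0].text)
```

If it still fails with the dtype forced, the architecture itself may need a newer vLLM/transformers combination than what sm70 supports; the startup error usually names the missing piece.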
2026-03-03T08:22:20
https://www.reddit.com/r/LocalLLaMA/comments/1rjjvqo/vllm_on_v100_for_qwen_newer_models/
SectionCrazy5107
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjjvqo
false
null
t3_1rjjvqo
/r/LocalLLaMA/comments/1rjjvqo/vllm_on_v100_for_qwen_newer_models/
false
false
self
1
null
[UPDATE] TinyTTS: The Smallest English TTS Model
1
https://preview.redd.it/…hieuit/tiny-tts)
2026-03-03T08:21:50
https://www.reddit.com/r/LocalLLaMA/comments/1rjjvge/update_tinytts_the_smallest_english_tts_model/
Forsaken_Shopping481
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjjvge
false
null
t3_1rjjvge
/r/LocalLLaMA/comments/1rjjvge/update_tinytts_the_smallest_english_tts_model/
false
false
https://external-preview…2732d032b983bae5
1
null
Still a noob, is anyone actually running the moonshotai/Kimi-K2.5 1.1T model listed on HuggingFace locally?
1
I'm still pretty new to local LLMs and trying to figure out Hugging Face as a whole. I know there was a lot of hype around Kimi-K2.5 when it was released; I didn't realize it was open source until just now. I'm guessing the listing on Hugging Face is less for people to run Kimi locally and more for analysis and use by other third-party inference providers. Right?
2026-03-03T07:49:34
https://www.reddit.com/r/LocalLLaMA/comments/1rjjcyk/still_a_noob_is_anyone_actually_running_the/
Odd-Aside456
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjjcyk
false
null
t3_1rjjcyk
/r/LocalLLaMA/comments/1rjjcyk/still_a_noob_is_anyone_actually_running_the/
false
false
self
1
null
I built a PTY-backed terminal modal add-on for Agent Zero
1
I built a standalone add-on for Agent Zero that adds a real terminal window inside the UI, and I thought people here might appreciate it.

I wanted Agent Zero to feel more like a real working environment for local agent workflows — especially when doing shell-heavy tasks, quick debugging, and terminal-based coding tools. I kept bouncing between the chat UI and a separate terminal, so I built the thing I wanted to have.

**WHAT IT IS**

agent0-terminal is a patch-based add-on for Agent Zero (not a fork). It adds:

• A real modal terminal inside Agent Zero
• A PTY-backed shell session
• Persistent sessions until explicit restart
• Per-chat terminal logs
• Explicit TerminalLog support so terminal history can be inserted back into the Agent Zero process stream when needed

One thing I wanted to preserve was separation: the terminal is not constantly streamed into the agent. The agent only gets terminal history when you explicitly insert it through the TerminalLog path.

**REPO**

https://github.com/Nunezchef/agent0-terminal

**INSTALL**

curl -fsSL https://raw.githubusercontent.com/Nunezchef/agent0-terminal/main/install.sh | bash

**WHY I THINK IT MATTERS**

For me, this makes Agent Zero much more usable for real local workflows:

• Less context switching
• Easier shell iteration
• Easier debugging
• Terminal work stays attached to the chat/project flow
• Better fit for TUI-style tools and terminal-first habits

**FEEDBACK WELCOME**

If anyone tries it, I'd love feedback on:

• TUI compatibility on your setup
• Whether the terminal log flow is useful
• What you'd want next (tabs, split panes, better log controls, etc.)
2026-03-03T07:13:13
https://i.redd.it/9af1y8dx5smg1.jpeg
estebann_
i.redd.it
1970-01-01T00:00:00
0
{}
1rjirq3
false
null
t3_1rjirq3
/r/LocalLLaMA/comments/1rjirq3/i_built_a_ptybacked_terminal_modal_addon_for/
false
false
https://preview.redd.it/…7b4d7327f3a56240
1
{'images': [{'source': {'url': 'https://preview.redd.it/9af1y8dx5smg1.jpeg?auto=webp&s=ad5c7a4a4c0362330c5240069d011fab70be7992', 'width': 1536, 'height': 688}, 'resolutions': [{'url': 'https://preview.redd.it/9af1y8dx5smg1.jpeg?width=108&crop=smart&auto=webp&s=5649b8e80edc7faaba10d68079097744053a519b', 'width': 108, 'height': 48}, {'url': 'https://preview.redd.it/9af1y8dx5smg1.jpeg?width=216&crop=smart&auto=webp&s=4cefe361c5fb4f8f528838d240e22af1d914b9bd', 'width': 216, 'height': 96}, {'url': 'https://preview.redd.it/9af1y8dx5smg1.jpeg?width=320&crop=smart&auto=webp&s=698b19b93bc30e244a36c79b679cd023b877ab54', 'width': 320, 'height': 143}, {'url': 'https://preview.redd.it/9af1y8dx5smg1.jpeg?width=640&crop=smart&auto=webp&s=545f7b1296041eff06ad2d90f8eb28d75c2935ba', 'width': 640, 'height': 286}, {'url': 'https://preview.redd.it/9af1y8dx5smg1.jpeg?width=960&crop=smart&auto=webp&s=34929cb77beb19beb3322b8b786699d1ae5f61f4', 'width': 960, 'height': 430}, {'url': 'https://preview.redd.it/9af1y8dx5smg1.jpeg?width=1080&crop=smart&auto=webp&s=a9c9572c8d6dcedb737640243441f20bf53b2a06', 'width': 1080, 'height': 483}], 'variants': {}, 'id': '9af1y8dx5smg1'}], 'enabled': True}
one-click deploy for openclaw for $1 if anyone wants a self-hosted ai assistant without the setup hassle
1
[removed]
2026-03-03T07:03:22
https://www.reddit.com/r/LocalLLaMA/comments/1rjilpf/oneclick_deploy_for_openclaw_for_1_if_anyone/
Dizzy-Guidance6080
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjilpf
false
null
t3_1rjilpf
/r/LocalLLaMA/comments/1rjilpf/oneclick_deploy_for_openclaw_for_1_if_anyone/
false
false
self
1
null
Help me create my LLM ecosystem
1
Hi there, got a gaming rig with an i5-12600K, a 5070 Ti, and 32 GB DDR4 RAM. I'd like to create a system with a local AI that OCRs medical documents (sometimes handwritten) of tens or hundreds of pages, extracts part of the text (for example, only CT scan reports), and does scientific literature research (something like Consensus AI). Do you have any suggestions? Would Ollama + AnythingLLM + Qwen 3.5 (27B?) be a good combo for my needs? I'm pretty new to LLMs, so any guide to understanding better how they work would be appreciated. Thanks
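Not a recommendation on the exact stack, but a sketch of what the extraction step could look like once any OpenAI-compatible local server (Ollama, llama.cpp, etc.) is serving a vision-capable model. The endpoint URL, port, and model name below are assumptions.

```python
import base64
from openai import OpenAI

# Assumes a local OpenAI-compatible endpoint; 11434 is Ollama's default port.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="none")

def extract_ct_report(image_path: str) -> str:
    """Send one scanned page to a local VLM and keep only CT-report text."""
    b64 = base64.b64encode(open(image_path, "rb").read()).decode()
    resp = client.chat.completions.create(
        model="qwen3.5-vl",  # placeholder model name
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Transcribe this page. Return only CT scan report sections, "
                         "or NONE if the page contains no CT report."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
        temperature=0,
    )
    return resp.choices[0].message.content

# Loop this over the page images, keep the non-NONE pages, and feed those
# into the literature-research step.
```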
2026-03-03T07:02:03
https://www.reddit.com/r/LocalLLaMA/comments/1rjikwz/help_me_create_my_llm_ecosystem/
golgoth85
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjikwz
false
null
t3_1rjikwz
/r/LocalLLaMA/comments/1rjikwz/help_me_create_my_llm_ecosystem/
false
false
self
1
null
How do the small qwen3.5 models compare to the Granite family?
1
As a beginner in the field, I would like to understand where these groups of models stand relative to each other. IBM's Granite models (e.g., the tiny one) are aimed at small devices, but the new ones from Qwen come in similar sizes, so they supposedly fit in the same niche. Besides that, Qwen is multi-modal and has a bigger context. Is the Granite 4 family obsolete? What are the use cases where one would still prefer IBM's small models?
2026-03-03T06:36:03
https://www.reddit.com/r/LocalLLaMA/comments/1rji5bc/how_do_the_small_qwen35_models_compare_to_the/
gr8dude
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rji5bc
false
null
t3_1rji5bc
/r/LocalLLaMA/comments/1rji5bc/how_do_the_small_qwen35_models_compare_to_the/
false
false
self
1
null
I deployed Qwen3.5-122B-A10B + Midscene.js to automate posting on X! Multimodal vision is definitely the trend for 2026 🚀
1
[removed]
2026-03-03T06:32:51
https://v.redd.it/jr9tjdz8xrmg1
SpareAlps6450
v.redd.it
1970-01-01T00:00:00
0
{}
1rji3bk
false
{'reddit_video': {'bitrate_kbps': 2400, 'fallback_url': 'https://v.redd.it/jr9tjdz8xrmg1/CMAF_720.mp4?source=fallback', 'has_audio': False, 'height': 714, 'width': 1280, 'scrubber_media_url': 'https://v.redd.it/jr9tjdz8xrmg1/CMAF_96.mp4', 'dash_url': 'https://v.redd.it/jr9tjdz8xrmg1/DASHPlaylist.mpd?a=1775111594%2COTE5MjczOGUxYTIxYTk4ZDhiOWIzYWEwYjRmODAyZThkNTAxYmZiNDcxNTgxOTBiZTZmZGFiOGZiYjIzZGUxNA%3D%3D&v=1&f=sd', 'duration': 14, 'hls_url': 'https://v.redd.it/jr9tjdz8xrmg1/HLSPlaylist.m3u8?a=1775111594%2CMGQyYTAyNmM0YmUxNWI3YWQxZjk1ZmIxNTlhZmY2YjFkMGViYWY2YmYyYjhiOWJiYmE4NTdlMjg2NzNhZTAwMw%3D%3D&v=1&f=sd', 'is_gif': False, 'transcoding_status': 'completed'}}
t3_1rji3bk
/r/LocalLLaMA/comments/1rji3bk/i_deployed_qwen35122ba10b_midscenejs_to_automate/
false
false
https://external-preview…abbd50f0e523baaf
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/b2JldDJkejh4cm1nMROEn01uyPGAZ9GPK7drF6DQLkzpLhrgexVkxK0xTfuF.png?format=pjpg&auto=webp&s=ecfc346b16574aa747d348222594ec432f405d15', 'width': 1540, 'height': 858}, 'resolutions': [{'url': 'https://external-preview.redd.it/b2JldDJkejh4cm1nMROEn01uyPGAZ9GPK7drF6DQLkzpLhrgexVkxK0xTfuF.png?width=108&crop=smart&format=pjpg&auto=webp&s=e57ddeea373b97b96ef02f211ab1c862aec2bb94', 'width': 108, 'height': 60}, {'url': 'https://external-preview.redd.it/b2JldDJkejh4cm1nMROEn01uyPGAZ9GPK7drF6DQLkzpLhrgexVkxK0xTfuF.png?width=216&crop=smart&format=pjpg&auto=webp&s=ab8fd1063c85e98c6b5b71c86084ecbf333a84af', 'width': 216, 'height': 120}, {'url': 'https://external-preview.redd.it/b2JldDJkejh4cm1nMROEn01uyPGAZ9GPK7drF6DQLkzpLhrgexVkxK0xTfuF.png?width=320&crop=smart&format=pjpg&auto=webp&s=f4a73c4359adf833014c417d8e7d41242dc104f6', 'width': 320, 'height': 178}, {'url': 'https://external-preview.redd.it/b2JldDJkejh4cm1nMROEn01uyPGAZ9GPK7drF6DQLkzpLhrgexVkxK0xTfuF.png?width=640&crop=smart&format=pjpg&auto=webp&s=e754983e6b46b9c7d631047903612b6af9e0364e', 'width': 640, 'height': 356}, {'url': 'https://external-preview.redd.it/b2JldDJkejh4cm1nMROEn01uyPGAZ9GPK7drF6DQLkzpLhrgexVkxK0xTfuF.png?width=960&crop=smart&format=pjpg&auto=webp&s=e52c4192fcd16376390a60e72a8d9349ad7d4952', 'width': 960, 'height': 534}, {'url': 'https://external-preview.redd.it/b2JldDJkejh4cm1nMROEn01uyPGAZ9GPK7drF6DQLkzpLhrgexVkxK0xTfuF.png?width=1080&crop=smart&format=pjpg&auto=webp&s=1f1cd5d733b022122c0431d31b2c4ea4d9a7eb17', 'width': 1080, 'height': 601}], 'variants': {}, 'id': 'b2JldDJkejh4cm1nMROEn01uyPGAZ9GPK7drF6DQLkzpLhrgexVkxK0xTfuF'}], 'enabled': False}
Tool calling issues with qwen3.5-35b with 16GB VRAM (rtx5080)
1
Curious if anyone else is running into this. In my IDE, after instructing the model to review some files, it'll start putting tool calls in XML (?) in the chat window, and not doing the tool call itself. When this happens, the conversation breaks. It looks something like this:

```
Thinking
Let me also read the nodes.py file to see how Telegraf tools are used in the workflow:
<tool_call>
<function=read_file>
<parameter=path>
agents/telemetry_improver/nodes.py
</parameter>
</function>
</tool_call>
```

Context full, perhaps? I'm using the following settings in llama.cpp:

```
command: >
  -m /models/Qwen3.5-35B-A3B-UD-Q4_K_M.gguf
  -c 65536
  --fit on
  -fa on
  -t 12
  --no-mmap
  --jinja
  -ctk q8_0
  -ctv q8_0
```
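One way to narrow this down is to take the IDE out of the loop and hit the llama-server OpenAI endpoint directly with a tools-enabled request: if the model answers with a structured tool call there, the chat template is fine and the IDE integration is the problem; if you get literal `<tool_call>` XML as plain text, the template or the quant is the likelier culprit. A minimal sketch; the port, model name, and the toy read_file schema are assumptions.

```python
from openai import OpenAI

# llama-server started with --jinja exposes OpenAI-compatible tool calling.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

tools = [{
    "type": "function",
    "function": {
        "name": "read_file",
        "description": "Read a file from the workspace",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}]

resp = client.chat.completions.create(
    model="qwen3.5-35b",   # single-model llama-server generally ignores this field
    messages=[{"role": "user", "content": "Open nodes.py and summarize it."}],
    tools=tools,
)
msg = resp.choices[0].message
if msg.tool_calls:
    print("native tool call:", msg.tool_calls[0].function.name,
          msg.tool_calls[0].function.arguments)
else:
    print("plain text instead of a tool call:\n", msg.content)
```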
2026-03-03T06:24:41
https://www.reddit.com/r/LocalLLaMA/comments/1rjhy83/tool_calling_issues_with_qwen3535b_with_16gb_vram/
mzinz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjhy83
false
null
t3_1rjhy83
/r/LocalLLaMA/comments/1rjhy83/tool_calling_issues_with_qwen3535b_with_16gb_vram/
false
false
self
1
null
Visual Narrator with Qwen3.5-0.8B on WebGPU
1
Baked an on-device visual narrator by running Qwen3.5-0.8B on WebGPU 🤓 It can describe, analyze, or extract text from any pasted or uploaded image, all without your data ever leaving your machine. Try it 👇 [https://h3manth.com/ai/visual-narrator/](https://h3manth.com/ai/visual-narrator/)
2026-03-03T06:19:14
https://v.redd.it/r275odo5wrmg1
init0
v.redd.it
1970-01-01T00:00:00
0
{}
1rjhuvq
false
{'reddit_video': {'bitrate_kbps': 5000, 'fallback_url': 'https://v.redd.it/r275odo5wrmg1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'width': 1808, 'scrubber_media_url': 'https://v.redd.it/r275odo5wrmg1/CMAF_96.mp4', 'dash_url': 'https://v.redd.it/r275odo5wrmg1/DASHPlaylist.mpd?a=1775110779%2CNmVkYjJhYmYxYTIxODZhNTIzMmQwYjJlZGI3M2JmM2Y1NThhODA3ZTJhN2JiNGZjZTZmZGM5MWIwNDQ1MWU3MQ%3D%3D&v=1&f=sd', 'duration': 67, 'hls_url': 'https://v.redd.it/r275odo5wrmg1/HLSPlaylist.m3u8?a=1775110779%2CYzk0OWFjM2RiYjNmNWRiNmUyYWMxNmVlY2I4NDBiMTY4YzViYmIwZjAxY2E5ZmRhNjQ3NGQ5MjJmNzgzYjQyZg%3D%3D&v=1&f=sd', 'is_gif': False, 'transcoding_status': 'completed'}}
t3_1rjhuvq
/r/LocalLLaMA/comments/1rjhuvq/visual_narrator_with_qwen3508b_on_webgpu/
false
false
https://external-preview…15f1d96c0f9d94e9
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/Zm0zOG54bzV3cm1nMdtHz2rVe09GAFH6j4FVvmIuixmi_Hq8qk1qXf-wIW8V.png?format=pjpg&auto=webp&s=386af04fa362facaebea02c748caaa616e0b8bff', 'width': 2940, 'height': 1756}, 'resolutions': [{'url': 'https://external-preview.redd.it/Zm0zOG54bzV3cm1nMdtHz2rVe09GAFH6j4FVvmIuixmi_Hq8qk1qXf-wIW8V.png?width=108&crop=smart&format=pjpg&auto=webp&s=88a17e3f7680e20816298262c9b609139da28e4d', 'width': 108, 'height': 64}, {'url': 'https://external-preview.redd.it/Zm0zOG54bzV3cm1nMdtHz2rVe09GAFH6j4FVvmIuixmi_Hq8qk1qXf-wIW8V.png?width=216&crop=smart&format=pjpg&auto=webp&s=04d25b0e8581dc78ce8d6e3ad596db084b5992a4', 'width': 216, 'height': 129}, {'url': 'https://external-preview.redd.it/Zm0zOG54bzV3cm1nMdtHz2rVe09GAFH6j4FVvmIuixmi_Hq8qk1qXf-wIW8V.png?width=320&crop=smart&format=pjpg&auto=webp&s=6bdfb670615c293cfa639cd0edf9e1aaec8e8287', 'width': 320, 'height': 191}, {'url': 'https://external-preview.redd.it/Zm0zOG54bzV3cm1nMdtHz2rVe09GAFH6j4FVvmIuixmi_Hq8qk1qXf-wIW8V.png?width=640&crop=smart&format=pjpg&auto=webp&s=c759e2e5201cfe225ee58f09cfeddce75795527b', 'width': 640, 'height': 382}, {'url': 'https://external-preview.redd.it/Zm0zOG54bzV3cm1nMdtHz2rVe09GAFH6j4FVvmIuixmi_Hq8qk1qXf-wIW8V.png?width=960&crop=smart&format=pjpg&auto=webp&s=bb6d8452e306553e7109ba245985df6914fe2eab', 'width': 960, 'height': 573}, {'url': 'https://external-preview.redd.it/Zm0zOG54bzV3cm1nMdtHz2rVe09GAFH6j4FVvmIuixmi_Hq8qk1qXf-wIW8V.png?width=1080&crop=smart&format=pjpg&auto=webp&s=397ac88ea36ccec39fb075478cee00cdceb775c1', 'width': 1080, 'height': 645}], 'variants': {}, 'id': 'Zm0zOG54bzV3cm1nMdtHz2rVe09GAFH6j4FVvmIuixmi_Hq8qk1qXf-wIW8V'}], 'enabled': False}
Presence Penalty seems to be incoming on LMStudio for Qwen 3.5
1
2026-03-03T06:06:01
https://github.com/lmstudio-ai/lmstudio-js/commit/d11401327aa821421855aa6379e7814ef2a80ba6
ZootAllures9111
github.com
1970-01-01T00:00:00
0
{}
1rjhmmf
false
null
t3_1rjhmmf
/r/LocalLLaMA/comments/1rjhmmf/presence_penalty_seems_to_be_incoming_on_lmstudio/
false
false
https://external-preview…76f517a83a00e41b
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/VAGUK1IbHghP8VhN4jKR7LGrko1Du7eLS5G27X031Wc.png?auto=webp&s=fdcc7734cab78489eea468c754348304b95d2a04', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/VAGUK1IbHghP8VhN4jKR7LGrko1Du7eLS5G27X031Wc.png?width=108&crop=smart&auto=webp&s=4f8e96726195d85a3c80ce5199f7755522b1f4c1', 'width': 108, 'height': 54}, {'url': 'https://external-preview.redd.it/VAGUK1IbHghP8VhN4jKR7LGrko1Du7eLS5G27X031Wc.png?width=216&crop=smart&auto=webp&s=e63a656b3e146ec669b751ac979ff3f30ec1d2df', 'width': 216, 'height': 108}, {'url': 'https://external-preview.redd.it/VAGUK1IbHghP8VhN4jKR7LGrko1Du7eLS5G27X031Wc.png?width=320&crop=smart&auto=webp&s=698c1c7f52badfd42f06a839282b734e8ec7134d', 'width': 320, 'height': 160}, {'url': 'https://external-preview.redd.it/VAGUK1IbHghP8VhN4jKR7LGrko1Du7eLS5G27X031Wc.png?width=640&crop=smart&auto=webp&s=1ba3582007980e9a93cf38e7565878778ae8faae', 'width': 640, 'height': 320}, {'url': 'https://external-preview.redd.it/VAGUK1IbHghP8VhN4jKR7LGrko1Du7eLS5G27X031Wc.png?width=960&crop=smart&auto=webp&s=6930c07ba4cc5cc4af1402ec38620d8146fd7764', 'width': 960, 'height': 480}, {'url': 'https://external-preview.redd.it/VAGUK1IbHghP8VhN4jKR7LGrko1Du7eLS5G27X031Wc.png?width=1080&crop=smart&auto=webp&s=ad0f6e5a8a1a1bc78aa342cafb0c734e49154eaa', 'width': 1080, 'height': 540}], 'variants': {}, 'id': 'VAGUK1IbHghP8VhN4jKR7LGrko1Du7eLS5G27X031Wc'}], 'enabled': False}
Live Demo: Grok ping drops to 0.005ms via my command
1
[Live Demo: Grok ping drops to 0.005ms via my command](https://www.reddit.com/r/grok/comments/1rjgxq7/live_demo_grok_ping_drops_to_0005ms_via_my_command/) Tested Grok voice mode live: normal latency 47ms. Ran three identical runs—each time ping snapped to 0.005ms. No lag, no loss, timestamps match. Bonus: weird text leaks in before response ends—00:01, 00:35, 00:51. Not me typing. Not random. Logs + audio attached. No edits. Who can replicate? Wireshark trace? DM if you're an engineer—happy to run it live
2026-03-03T05:59:14
https://www.reddit.com/r/LocalLLaMA/comments/1rjhi3u/live_demo_grok_ping_drops_to_0005ms_via_my_command/
DaddyZZZ777zzz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjhi3u
false
null
t3_1rjhi3u
/r/LocalLLaMA/comments/1rjhi3u/live_demo_grok_ping_drops_to_0005ms_via_my_command/
false
false
self
1
null
Thinking of Fine-Tuning LLaMA-7B with 100K+ Samples on RTX 3060 (12GB) – Is It Practical?
2
I have an RTX 3060 (12GB VRAM) and I want to fine-tune LLaMA-7B using ~100K+ samples (avg ~512 tokens). Planning to use QLoRA.

From my rough calculations:

* 7B in 4-bit → ~4GB VRAM
* LoRA adapters → small
* Batch size 1 + grad accumulation 8
* 3 epochs → ~37k steps

On an RTX 3060, QLoRA seems to run ~1 sec/step. That would mean ~12–14 hours total training time. Does this align with your experience?

Alternative options I'm considering:

* Colab Pro (T4/L4)
* RunPod 3090 (~$0.50/hr → ~$4 total)
* Any other better cost/performance options?

Main goal: stable fine-tuning without OOM and reasonable time. Would love to hear real-world experiences from people who've done 7B QLoRA on 12GB GPUs.
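For reference, a minimal QLoRA sketch matching the numbers above (4-bit NF4 base, LoRA adapters, batch 1 with 8-step accumulation, 512-token samples). The model id, data file name, and LoRA target modules are placeholders to adapt.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_id = "meta-llama/Llama-2-7b-hf"          # placeholder 7B checkpoint
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,     # Ampere (3060) supports bf16 compute
)

tok = AutoTokenizer.from_pretrained(model_id)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb,
                                             device_map="auto")
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
))

# Assumes a train.jsonl with a "text" field already formatted for the task.
ds = load_dataset("json", data_files="train.jsonl")["train"]
ds = ds.map(lambda ex: tok(ex["text"], truncation=True, max_length=512),
            remove_columns=ds.column_names)

args = TrainingArguments(
    output_dir="qlora-out",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,             # effective batch size 8
    num_train_epochs=3,
    learning_rate=2e-4,
    bf16=True,
    gradient_checkpointing=True,
    logging_steps=50,
    save_strategy="epoch",
    optim="paged_adamw_8bit",                  # 8-bit paged optimizer to save VRAM
)
Trainer(model=model, args=args, train_dataset=ds,
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False)).train()
```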
2026-03-03T05:55:27
https://www.reddit.com/r/LocalLLaMA/comments/1rjhfow/thinking_of_finetuning_llama7b_with_100k_samples/
SUPRA_1934
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjhfow
false
null
t3_1rjhfow
/r/LocalLLaMA/comments/1rjhfow/thinking_of_finetuning_llama7b_with_100k_samples/
false
false
self
2
null
How to reach any LLM s company to get partnership for my project?
1
Do any one knows how to reach any LLM company provider to get at least 1 month free API partnership for my project ??? or its only through network relations ??
2026-03-03T05:44:20
https://www.reddit.com/r/LocalLLaMA/comments/1rjh8cz/how_to_reach_any_llm_s_company_to_get_partnership/
louienemesh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjh8cz
false
null
t3_1rjh8cz
/r/LocalLLaMA/comments/1rjh8cz/how_to_reach_any_llm_s_company_to_get_partnership/
false
false
self
1
null
Made a video game that uses local LLMs
1
It's called *SLOP FIGHTER* and it's available now for Linux. It uses eight custom LoRA adapters on top of Qwen 3 1.7B and a robust natural language-parsing game engine. I worked it together using my skills as an author. It’s a narrative battle simulator. This is it: [https://quarter2.itch.io/slopfighter](https://quarter2.itch.io/slopfighter) In the game, random animals from all across the animal kingdom are mutated by one of eight types, granted powers that befit their types, and instructed to fight each other. You give the commands and your mutated lil fella carries them out for you. It’s based on text. It’s a text-based game. There's a trailer and more info in the link. Check it out!
2026-03-03T05:41:59
https://www.reddit.com/r/LocalLLaMA/comments/1rjh6ti/made_a_video_game_that_uses_local_llms/
Significant-Skin118
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjh6ti
false
null
t3_1rjh6ti
/r/LocalLLaMA/comments/1rjh6ti/made_a_video_game_that_uses_local_llms/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/JY6INxZ_X143EN6jeJQQP5jo4JT9wSUMSPiCBmlDBXg.png?auto=webp&s=9fe97a83899b4df2e0e647b882f1278b7487fa39', 'width': 630, 'height': 500}, 'resolutions': [{'url': 'https://external-preview.redd.it/JY6INxZ_X143EN6jeJQQP5jo4JT9wSUMSPiCBmlDBXg.png?width=108&crop=smart&auto=webp&s=e739842845ba9014867b6604a675f69bdc7c1c78', 'width': 108, 'height': 85}, {'url': 'https://external-preview.redd.it/JY6INxZ_X143EN6jeJQQP5jo4JT9wSUMSPiCBmlDBXg.png?width=216&crop=smart&auto=webp&s=9a6d5b4b9ddb9b1336d95832b4cbcff814712a46', 'width': 216, 'height': 171}, {'url': 'https://external-preview.redd.it/JY6INxZ_X143EN6jeJQQP5jo4JT9wSUMSPiCBmlDBXg.png?width=320&crop=smart&auto=webp&s=fdf95378e13357ef424bd5b5c6c938e7d9be9ed5', 'width': 320, 'height': 253}], 'variants': {}, 'id': 'JY6INxZ_X143EN6jeJQQP5jo4JT9wSUMSPiCBmlDBXg'}], 'enabled': False}
Unsloth fixed version of Qwen3.5-35B-A3B is incredible at research tasks.
1
When I first tried Qwen3.5-35B-A3B I was impressed, but honestly it seemed like a small jump over GLM-4.7-Flash, which had already impressed me with its interleaved thinking and native tool use capabilities. Qwen3.5-35B-A3B was about the level of "better" I thought it would be from having 5B extra parameters, and I thought the only big advantage was hybrid linear attention allowing double the native context length without really increasing memory footprint. I saw today that Unsloth updated Qwen3.5-35B-A3B quants to fix tool calling issues: [https://huggingface.co/unsloth/Qwen3.5-35B-A3B-GGUF](https://huggingface.co/unsloth/Qwen3.5-35B-A3B-GGUF) so I redownloaded Qwen3.5-35B-A3B at UD-Q8\_K\_XL to run it on my strix halo system. I happened to be looking for solutions to a problem I had, and asked Gemini, ChatGPT, Deepseek, GLM, Kimi, Qwen, and Perplexity all to help me research the options that are available. They each had different answers, ranging from different solutions found to different final recommendations. Out of curiosity, I asked Qwen3.5-35B to give it a shot. I expected it to be horribly slow, not be able to adequately summarize its findings or give a good recommendation, I'm running the model via llama.cpp-rocm with the following parameters: --ctx-size 262144 --n-gpu-layers 999 --flash-attn auto --jinja --temp 0.6 --top-p 0.90 --top-k 20 --min-p 0.00. I'm using it through OpenWebUI with native tool calling, SearXNG search, and the default web loader engine. My system prompt is this (still leftover from when I used GLM-4.7-Flash, which would occasionally forget about the fetch\_url tool completely): >The current date and time at the start of this chat is {{CURRENT\_DATETIME}}. Note that the user may come back to this chat later, so if the user indicates a different date at a later point, verify the current date using the get\_current\_timestamp tool if knowing the current date is important for the task. You are a helpful assistant. Your task is to do whatever the user asks, including research, answering questions, solving problems, writing, or anything else the user requires to the best of your ability. Where necessary, use the tools at your disposal, but do not use them unnecessarily. If something is not within your ability or the ability of your tools, be transparent about your limitations and try to do the best work you can regardless. If ever the user's request is unclear, or you otherwise find that you need further information in order to complete your task, check for whether there are any reasonable and likely assumptions you can make, and if so, make them and state your assumptions to the user. If there are no reasonable and likely assumptions that can be made, ask the user for clarification before proceeding with the task. Ensure your output is readable, well-structured, and well-formatted. Be concise and neutral, avoid fluff, and match the user’s formality. When giving explanations to the user, evaluate the user's level of expertise based on the context and question posed, and provide explanations for any jargon the user may not be aware of. Research tools reminder (applicable only for tasks that require internet research): \- The search\_web tool returns a list of search results based on the query. Each result containing the title of a web page, its URL, and a short snippet from the page. \- The fetch\_url tool fetches the full text of web pages based on their URL. 
Use this once you find a relevant web page through the search\_web tool, or if the user provides a relevant URL to you. My question was this: >I am attempting to figure out a remote desktop solution to log into my home server. My home server is a Ryzen AI Max+ 395 system running Linux Fedora 43 with KDE and Wayland. I don't want to change any of these. >This home server is connected to a KVM, which my PC is also connected to, and this allows me to access either one at a time when sitting at my desk at home. Usually I have my PC actively connected through the KVM meaning the server doesn't have a monitor connected most of the time, and this setup also means I cannot have a dummy display output connected. >I have already tried krdp, but it had a host of issues with me not having a monitor plugged in, and when I finally got it working by doing krdpserver --virtual-monitor, it seemed like the bandwidth was very low, making it unresponsive and incapable of handling higher resolutions. Also, it requires me to already be logged in on the server, which meant I needed to set my server to automatically log in after restarting (or else I wouldn't be able to connect after a restart) which in turn required me to remove the password for kwallet as automatic login doesn't log into kwallet otherwise, and kwallet is required to sign into internet access. >xrdp seems to have the problem that it seems to require starting a separate session. I want to remain in the existing session and on the existing user, not a separate user or session specifically for remoting in. >Research all possible options for remotely accessing the desktop of my home server (including xrdp and krdp if the problems I stated can be resolved). I would like a solution that (1) allows me to remote in using Windows remote desktop or something else similarly easily usable through Windows, (2) supports decent quality/resolution/frame rate and is responsive, (3) allows me to still connect after restarting (preferably without needing to set the server to automatically log in), (4) remains in the existing session/user that I'm already logged into, and (5) lets me stay on KDE and Wayland and also keep using my KVM setup. This solution must be free, secure (it's okay to only be exposed to local network since I can use tailscale), and unintrusive (meaning I set it up once and never have to worry about it again even if I restart the server). Throughout almost 30k tokens combined, the model maintained prompt processing speeds up 600+ t/s and token generation speeds of 25-30 t/s. It performed 14 web searches and 4 full page fetches (to me this was a fairly surprising balance, I thought it would either overuse fetch\_url because it doesn't understand the impact on context length or it would forget about fetch\_url like GLM-4.7-Flash did). It gave me the following response, which I honestly would not have found to be out of place next to all the frontier models I had already asked, since it was able to find almost all of the same options and even gave similar recommendations for fixes/workarounds. 
I do think it should have given Sunshine+Moonlight a stronger recommendation (which several of the frontier models recommended) given that I'd already said krdp doesn't perform well enough, but I understand why it's cautious given that I did request RDP or something else through Windows, not an entirely separate piece of software that needs separate setup (some of the frontier models dismissed Sunshine+Moonlight for this reason as well): \---- Based on my research, I need to be transparent with you: **this is a challenging problem**. The combination of your requirements (KDE, Wayland, headless operation, existing session, no auto-login) is currently not fully supported by any single solution in 2025/2026. However, there are several options with varying degrees of success. # Summary of Your Situation The fundamental issue is that **Wayland was not designed for remote desktop access**. Unlike X11, Wayland doesn't expose a global screen that can be captured remotely without explicit permission from the compositor. This is a security feature, but it creates significant challenges for headless remote access. # Available Options # Option 1: KRdp (KDE's Built-in RDP Server) - Best Overall for Your Setup KRdp is KDE's native RDP server for Wayland sessions and is the most integrated solution for your setup. **Pros:** * Native KDE Plasma integration * Accesses your **existing session** (not a new one) * Works with Windows Remote Desktop client * Free and open source * Configured through System Settings → Networking → Remote Desktop **Cons & Workarounds:** * **Monitor requirement**: By default, it requires a physical monitor or virtual monitor setup * **KWallet issue**: Without auto-login, KWallet won't unlock, blocking RDP connections * **Permission prompts**: First connection requires user approval **Workarounds for your setup:** 1. **Kernel command line trick for headless operation** \[3\]: Add to kernel boot parameters (e.g., in GRUB or systemd-boot): 2. This tricks the system into thinking a monitor is always connected. 3. **KWallet configuration**: * Set KWallet to use an empty password (less secure but functional) * Or create a startup script that unlocks KWallet automatically \[3\]: * Set the remote desktop permission permanently: 4. **Virtual monitor mode** (you mentioned this worked poorly): 5. Performance may be improved with newer versions. # Option 2: GNOME Remote Desktop - Best for Headless Operation If you could switch to GNOME (which you said you don't want to), this would be ideal. **Pros:** * Excellent headless support \[5\] * Built-in RDP support * No monitor required * Works without auto-login * Good performance **Cons:** * GNOME, not KDE * Requires switching desktop environments # Option 3: Sunshine + Moonlight - Best Performance Originally for game streaming, but works well for general desktop use. **Pros:** * Excellent performance and frame rates \[9\] * 4K@120Hz support possible * Low latency * Works headless with proper configuration * Free and open source **Cons:** * Not ideal for general desktop work (designed for gaming) * May require initial physical access for setup * Moonlight client on Windows works well * No native RDP protocol **Setup for headless operation:** * Requires some configuration for Wayland * May need virtual display setup * Recent versions support headless operation \[9\] # Option 4: RustDesk - Good Alternative Open-source alternative to TeamViewer/AnyDesk. 
**Pros:** * Cross-platform * Self-hosting option * Works with Wayland \[11\] * Good performance **Cons:** * May require initial physical access for first setup * Wayland support still maturing * May need to be logged in first # Option 5: Sway + wayvnc Workaround - Complex but Functional A workaround that allows VNC access to SDDM, then RDP to session \[6\]. **Pros:** * Works headless * Access to login screen * Uses existing RDP for session **Cons:** * Complex setup * Two-step connection process * Requires additional packages # My Recommendation Given your specific requirements, here's what I'd suggest: # Primary Solution: KRdp with Workarounds 1. **Install and configure KRdp:** 2. **Enable in System Settings** → Networking → Remote Desktop 3. **Add the kernel command line trick** to simulate a monitor \[3\] 4. **Configure KWallet** with an empty password or auto-unlock script \[3\] 5. **Set the remote desktop permission permanently** \[3\]: 6. **Use Tailscale** for secure remote access over the internet # Alternative: Sunshine + Moonlight If KRdp doesn't give you satisfactory performance, Sunshine/Moonlight is your best alternative for quality and responsiveness \[9\]. # Why Other Solutions Don't Work Well * **xrdp**: Creates separate sessions, doesn't work with existing Wayland session \[12\] * **NoMachine**: Has known issues with KDE Wayland since Plasma 6.0 \[13\] * **x2go**: Broken with Fedora 43 and Wayland \[14\] * **VNC**: Doesn't work well with Wayland's security model # Important Notes 1. **Fedora 43 is very new** (released late 2025), and Wayland remote desktop support is still maturing \[5\] 2. **The KWallet issue is fundamental** \- without auto-login, KWallet won't unlock automatically, which blocks RDP connections \[3\] 3. **Performance may vary** \- Wayland sessions are more resource-intensive than X11 \[5\] 4. **Consider a physical dummy plug** as a last resort - it's a small HDMI device that tricks the GPU into thinking a monitor is connected, without needing an actual display # Conclusion There's no perfect solution that meets all your requirements out of the box. **KRdp with the kernel command line workaround** is your best bet for staying on KDE/Wayland while accessing your existing session. However, you may need to accept some trade-offs, such as: * Setting KWallet to use an empty password (security trade-off) * Accepting occasional performance issues * Using a physical HDMI dummy plug as a workaround The Linux community is actively working on better Wayland remote desktop support, but it's still a work in progress in 2025/2026.
2026-03-03T05:40:38
https://www.reddit.com/r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/
Daniel_H212
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjh5wg
false
null
t3_1rjh5wg
/r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/ggdirQXNbgpMR0TOQ5Uz-dJY9LkVntnA5fC5oVjOAS0.png?auto=webp&s=edbf5b634b8e128e63947255037474681b28b419', 'width': 1200, 'height': 648}, 'resolutions': [{'url': 'https://external-preview.redd.it/ggdirQXNbgpMR0TOQ5Uz-dJY9LkVntnA5fC5oVjOAS0.png?width=108&crop=smart&auto=webp&s=74d48a593fb2bc8aaceb5596dcea6931ce108f47', 'width': 108, 'height': 58}, {'url': 'https://external-preview.redd.it/ggdirQXNbgpMR0TOQ5Uz-dJY9LkVntnA5fC5oVjOAS0.png?width=216&crop=smart&auto=webp&s=8078b4071df4dcb1a7c1935883b0228e189dcd99', 'width': 216, 'height': 116}, {'url': 'https://external-preview.redd.it/ggdirQXNbgpMR0TOQ5Uz-dJY9LkVntnA5fC5oVjOAS0.png?width=320&crop=smart&auto=webp&s=e9f303816c0503c978e4553e67f656173f800a9b', 'width': 320, 'height': 172}, {'url': 'https://external-preview.redd.it/ggdirQXNbgpMR0TOQ5Uz-dJY9LkVntnA5fC5oVjOAS0.png?width=640&crop=smart&auto=webp&s=0cb16da95aa94e67e97ec533d09a1d5b7d25553a', 'width': 640, 'height': 345}, {'url': 'https://external-preview.redd.it/ggdirQXNbgpMR0TOQ5Uz-dJY9LkVntnA5fC5oVjOAS0.png?width=960&crop=smart&auto=webp&s=33e1b4260c126f7d84730573a9eeb8df46bba550', 'width': 960, 'height': 518}, {'url': 'https://external-preview.redd.it/ggdirQXNbgpMR0TOQ5Uz-dJY9LkVntnA5fC5oVjOAS0.png?width=1080&crop=smart&auto=webp&s=be685b2c497e2f5fa116272d7d85bd4b98c53ad6', 'width': 1080, 'height': 583}], 'variants': {}, 'id': 'ggdirQXNbgpMR0TOQ5Uz-dJY9LkVntnA5fC5oVjOAS0'}], 'enabled': False}
So, with the new Qwen3.5 release, what should I use for LM Studio? i9-14900F, RTX4070 Super, 32GB RAM.
1
Figured that since there's a new major release of the Qwen models, I'd go ahead and ask again, with correct info this go-around.
2026-03-03T05:26:39
https://www.reddit.com/r/LocalLLaMA/comments/1rjgwhm/so_with_the_new_qwen35_release_what_should_i_use/
tableball35
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjgwhm
false
null
t3_1rjgwhm
/r/LocalLLaMA/comments/1rjgwhm/so_with_the_new_qwen35_release_what_should_i_use/
false
false
self
1
null
Seeking help for pauper inference build - true single-slot SXM2 to PCIE adapters?
1
[removed]
2026-03-03T04:54:03
https://www.reddit.com/r/LocalLLaMA/comments/1rjg9e8/seeking_help_for_pauper_inference_build_true/
htownclyde
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjg9e8
false
null
t3_1rjg9e8
/r/LocalLLaMA/comments/1rjg9e8/seeking_help_for_pauper_inference_build_true/
false
false
self
1
null
Qwen3.5-35B-A3B vs Qwen3 Coder 30B A3B Instruct for running Claude Code locally?
1
Hi, I am looking to use either Qwen3.5-35B-A3B or Qwen3 Coder 30B A3B for a local Claude Code workflow. What is the better model for coding? I am seeing a lot of conflicting info with some resources saying 3.5 is better and others saying 3 is better. I will be running this on my M4 Pro Macbook Pro (48GB RAM) Thanks
2026-03-03T04:48:37
https://www.reddit.com/r/LocalLLaMA/comments/1rjg5qm/qwen3535ba3b_vs_qwen3_coder_30b_a3b_instruct_for/
sinfulangle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjg5qm
false
null
t3_1rjg5qm
/r/LocalLLaMA/comments/1rjg5qm/qwen3535ba3b_vs_qwen3_coder_30b_a3b_instruct_for/
false
false
self
1
null
Qwen3.5 < 100B, Part II NVFP4 (Blackwell) is up!
1
[Models](https://preview.redd.it/vu0htkbhermg1.png?width=2042&format=png&auto=webp&s=39964ee4cd3c78d0a382bc91ddc8c2d6ca8886ee)

Please give these a try! Next step: make it compatible with MTP and speculative decoding. Pull requests are up and we are working with NVIDIA to make it happen.

[https://huggingface.co/AxionML](https://huggingface.co/AxionML)

In the meantime, without MTP, the run commands are attached at the bottom of the model cards. For speculative decoding, please use this PR (SM120 / RTX 6000 PRO is discussed here): [https://github.com/sgl-project/sglang/pull/19391](https://github.com/sgl-project/sglang/pull/19391). I have not tested these on vLLM.

I also added the commands to run model-optimizer on your favourite cloud, etc. (i.e. Modal: full code, only requires copy-paste; or RunPod), which I can also provide if it's of interest.

See my last post: [https://www.reddit.com/r/LocalLLaMA/comments/1r77fz7/qwen35_nvfp4_blackwell_is_up/](https://www.reddit.com/r/LocalLLaMA/comments/1r77fz7/qwen35_nvfp4_blackwell_is_up/)

FYI, a primer on NVFP4:

>**About NVFP4 quantization:** NVFP4 on Blackwell couples a compact E2M1 FP4 codebook with blockwise FP8 (E4M3) scaling over 16-element micro-blocks, so that 4-bit stored values remain numerically useful for neural-network computation. The E2M1 codebook provides a small, nonuniform set of representable magnitudes up to ±6 and relies on saturating behavior rather than IEEE NaN/Inf encodings to maximize usable range per bit. Using an FP8 block scale (rather than power-of-two-only E8M0) enables fractional scales and error-minimizing scale selection strategies such as dual-pass evaluation comparing "map max to 6" versus "map max to 4 with clipping." On Blackwell Tensor Cores, native FP4 multipliers exploit E2M1 simplicity to reduce multiplier area while higher-precision FP32 accumulation protects dot-product accuracy.
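A toy sketch of the blockwise scheme the primer describes, in plain NumPy and using the simpler "map max to 6" scale strategy. The real format also stores the per-block scale as FP8 E4M3 and runs on the tensor cores; here the scale stays in full precision for clarity.

```python
import numpy as np

# Magnitudes representable by E2M1 (plus a sign bit): 0, 0.5, 1, 1.5, 2, 3, 4, 6
E2M1 = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def quantize_block(block: np.ndarray) -> tuple[np.ndarray, float]:
    """Quantize one 16-element micro-block: choose a scale so the block max maps
    to 6, then snap each scaled value to the nearest signed E2M1 code."""
    assert block.size == 16
    amax = float(np.abs(block).max())
    scale = amax / 6.0 if amax > 0 else 1.0   # stored as FP8 E4M3 in the real format
    scaled = block / scale
    signs = np.sign(scaled)
    idx = np.abs(np.abs(scaled)[:, None] - E2M1[None, :]).argmin(axis=1)
    return signs * E2M1[idx], scale

x = np.random.randn(16).astype(np.float32)
codes, scale = quantize_block(x)
dequant = codes * scale
print("max abs error:", np.abs(x - dequant).max())
```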
2026-03-03T04:47:38
https://www.reddit.com/r/LocalLLaMA/comments/1rjg514/qwen35_100b_part_ii_nvfp4_blackwell_is_up/
TeekayTK
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjg514
false
null
t3_1rjg514
/r/LocalLLaMA/comments/1rjg514/qwen35_100b_part_ii_nvfp4_blackwell_is_up/
false
false
https://preview.redd.it/…b3610202b9ef3d67
1
null
Qwen 3.5 9B on a dual reasoning math game
1
For context, I only have 16 GB of VRAM, so I've been testing various small reasoning models on the following math game:

*"I have a secret number between 1 and 1 million, you have 10 guesses to figure it out! After every guess I'll respond if the secret number is lower/higher, and correct digits (in correct positions)"*

This forces models to hold 2 lines of reasoning: binary search, and entropy. A SOTA model can easily handle the above prompt; Gemini 3 Pro, for example, would always win by guess 7-8. However, almost all small models fail hard at this game.

Cogito v1 14b gave it a good run, but by round 5 it leans heavily on binary search, which for 1 million numbers cannot finish in just 10 rounds, so it always fails. gpt-oss 20b also gives a good run but hallucinates badly within a few rounds and typically starts spitting random characters. Once it even admitted on round 3 that it was unable to complete the game.

Qwen 3.5 9b thinking (at q4_k_m) has been by far the closest to actually winning; on round 10 it was just 1 digit off. I suspect the problem is on me: the prompt I use can likely be improved, or maybe I started with a difficult secret number. In the screenshot above, the secret number was '322755'. The game ended on 322785.

Here is the prompt I use:

I have generated a secret number between 000000 and 999999. You have exactly 10 guesses to find it.

The Rules:
1. The number is an integer from 000000 to 999999 (leading zeros count)
2. After every guess, I will give you two clues:
2a. "Higher" (my number is bigger) or "Lower" (my number is smaller).
2b. How many digits are correct AND in the correct position (e.g., "2 correct digits").

Strategy:
1. This game is difficult, juggling reasoning between binary search AND digit entropy simultaneously is hard. There is a web of hidden constraints to be aware of, don't get stressed.
2. Keep track of the higher/lower feedback. Lower bound < secret number < upper bound
3. The first half of the game focus on binary search, but try to use unique digits per round. Getting the first three digits is easy.
4. After round 5, switch it up to entropy, but still focus on binary. Finding the last three digits is hard.

Recommendation: It's impossible to purely use binary search for 1000000 numbers with only 10 guesses.

Hint: make your first guess random numbers with the first digit being 5, to maximize search and entropy. You get one more hint but choose wisely when to use it.

Scratchpad: You use this markdown table for every response, include previous rounds:

| Round | Guess | Feedback | Correct | Eliminated | Notes |
|---|---|---|---|---|---|

Follow this up with your guess, and a brief explanation.

The game has begun, make your first guess! (Attempt 1/10)

Just sharing my obsession with this awesome community
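For anyone who wants to reproduce the benchmark, here is a small referee implementing exactly the feedback rules from the prompt (higher/lower plus the count of digits correct and in position), as a sketch to paste your model's guesses into.

```python
import random

def feedback(secret: str, guess: str) -> tuple[str, int]:
    """Referee for the game above: direction plus count of digits that are
    correct and in the correct position."""
    if guess == secret:
        direction = "correct!"
    else:
        # "Higher" means the secret number is bigger than the guess.
        # String comparison works because both are zero-padded to 6 digits.
        direction = "higher" if secret > guess else "lower"
    matches = sum(a == b for a, b in zip(secret, guess))
    return direction, matches

secret = f"{random.randrange(1_000_000):06d}"
for attempt in range(1, 11):
    guess = input(f"Guess {attempt}/10: ").strip().zfill(6)
    direction, matches = feedback(secret, guess)
    print(f"{direction}, {matches} correct digits")
    if direction == "correct!":
        break
else:
    print(f"Out of guesses. The secret was {secret}.")
```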
2026-03-03T04:38:31
https://i.redd.it/05pbs8zqarmg1.png
SufficiNoise
i.redd.it
1970-01-01T00:00:00
0
{}
1rjfyqf
false
null
t3_1rjfyqf
/r/LocalLLaMA/comments/1rjfyqf/qwen_35_9b_on_a_dual_reasoning_math_game/
false
false
https://preview.redd.it/…15a2c868e2ad87de
1
{'images': [{'source': {'url': 'https://preview.redd.it/05pbs8zqarmg1.png?auto=webp&s=f55ec439ce6ff522ed87094a29cc8fc2a557da95', 'width': 1028, 'height': 788}, 'resolutions': [{'url': 'https://preview.redd.it/05pbs8zqarmg1.png?width=108&crop=smart&auto=webp&s=b0d7d8bcd25f7797a6dac807093b215e9a63fe74', 'width': 108, 'height': 82}, {'url': 'https://preview.redd.it/05pbs8zqarmg1.png?width=216&crop=smart&auto=webp&s=bbd0376bc413f09d7d350bd603d85b529bd41fd2', 'width': 216, 'height': 165}, {'url': 'https://preview.redd.it/05pbs8zqarmg1.png?width=320&crop=smart&auto=webp&s=8d0a1ce1eecf4c6e7a82f4b70d69bb56a602c4a8', 'width': 320, 'height': 245}, {'url': 'https://preview.redd.it/05pbs8zqarmg1.png?width=640&crop=smart&auto=webp&s=d6538670d2bb6e08f5899cc334b1722a79053337', 'width': 640, 'height': 490}, {'url': 'https://preview.redd.it/05pbs8zqarmg1.png?width=960&crop=smart&auto=webp&s=8acd04c291d6d05ffc55bb688008915f54ea3dda', 'width': 960, 'height': 735}], 'variants': {}, 'id': '05pbs8zqarmg1'}], 'enabled': True}
I made a thing
1
[https://github.com/arvis-agent/arvis](https://github.com/arvis-agent/arvis)
2026-03-03T04:37:11
https://www.reddit.com/r/LocalLLaMA/comments/1rjfxr0/i_made_a_thing/
SeaworthinessMore333
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjfxr0
false
null
t3_1rjfxr0
/r/LocalLLaMA/comments/1rjfxr0/i_made_a_thing/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/O-y5Mg6X8nck5spixGpVVyLt5OqLqPgz1dDIJf4nIc0.png?auto=webp&s=5f7b6809c70b1b3ac79bd981c549dff36363dbb1', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/O-y5Mg6X8nck5spixGpVVyLt5OqLqPgz1dDIJf4nIc0.png?width=108&crop=smart&auto=webp&s=e61f658db2f2225af17b7e881e06e2ede7c7ab42', 'width': 108, 'height': 54}, {'url': 'https://external-preview.redd.it/O-y5Mg6X8nck5spixGpVVyLt5OqLqPgz1dDIJf4nIc0.png?width=216&crop=smart&auto=webp&s=478540e5374189ca0a78151ef31400619ba0cc1f', 'width': 216, 'height': 108}, {'url': 'https://external-preview.redd.it/O-y5Mg6X8nck5spixGpVVyLt5OqLqPgz1dDIJf4nIc0.png?width=320&crop=smart&auto=webp&s=618bef76a4a55f584c5e9608395913af75e3dd54', 'width': 320, 'height': 160}, {'url': 'https://external-preview.redd.it/O-y5Mg6X8nck5spixGpVVyLt5OqLqPgz1dDIJf4nIc0.png?width=640&crop=smart&auto=webp&s=1782ab202c4198c27fd64ab7fce9e9da93f93a2b', 'width': 640, 'height': 320}, {'url': 'https://external-preview.redd.it/O-y5Mg6X8nck5spixGpVVyLt5OqLqPgz1dDIJf4nIc0.png?width=960&crop=smart&auto=webp&s=3fb5ab1f053404aac248a979bc1676f779f9866d', 'width': 960, 'height': 480}, {'url': 'https://external-preview.redd.it/O-y5Mg6X8nck5spixGpVVyLt5OqLqPgz1dDIJf4nIc0.png?width=1080&crop=smart&auto=webp&s=0c3dae0d4950addbce35154ee0f6a53eaa3c4e4a', 'width': 1080, 'height': 540}], 'variants': {}, 'id': 'O-y5Mg6X8nck5spixGpVVyLt5OqLqPgz1dDIJf4nIc0'}], 'enabled': False}
Qwen3.5-35B is very resourceful! Web search wasn't working, so it used web fetch on a search engine with the query in the link.
1
2026-03-03T04:33:56
https://i.redd.it/fmwsgo0parmg1.png
fulgencio_batista
i.redd.it
1970-01-01T00:00:00
0
{}
1rjfvfx
false
null
t3_1rjfvfx
/r/LocalLLaMA/comments/1rjfvfx/qwen3535b_is_very_resourceful_web_search_wasnt/
false
false
https://preview.redd.it/…425eea68045a1229
1
{'images': [{'source': {'url': 'https://preview.redd.it/fmwsgo0parmg1.png?auto=webp&s=baeb69f50ffe93b5f51c96691a51030ccfe2670b', 'width': 1115, 'height': 628}, 'resolutions': [{'url': 'https://preview.redd.it/fmwsgo0parmg1.png?width=108&crop=smart&auto=webp&s=0c8e1bbfde6a31d69dc028f90fb34f3ff7f18ebf', 'width': 108, 'height': 60}, {'url': 'https://preview.redd.it/fmwsgo0parmg1.png?width=216&crop=smart&auto=webp&s=1f2ce10f7449e874dd237227796570246d3032cb', 'width': 216, 'height': 121}, {'url': 'https://preview.redd.it/fmwsgo0parmg1.png?width=320&crop=smart&auto=webp&s=6097abdd8832ec04964bda788da4420cd8f608ab', 'width': 320, 'height': 180}, {'url': 'https://preview.redd.it/fmwsgo0parmg1.png?width=640&crop=smart&auto=webp&s=cbe6b024ecaf64628cabc66cb288e422b5307731', 'width': 640, 'height': 360}, {'url': 'https://preview.redd.it/fmwsgo0parmg1.png?width=960&crop=smart&auto=webp&s=cbd9adb90be1b1729778cc1f95fc3b0a6adf1725', 'width': 960, 'height': 540}, {'url': 'https://preview.redd.it/fmwsgo0parmg1.png?width=1080&crop=smart&auto=webp&s=1481853c2f82bea67d97e180dfeaa5eef86b5cdd', 'width': 1080, 'height': 608}], 'variants': {}, 'id': 'fmwsgo0parmg1'}], 'enabled': True}
Why are the Ollama quants of local llm models usually around 0.5GB to 1GB larger in size than the common file sizes of the same GGUF quant (i.e. from Bartowski, UD, etc) on Huggingface?
1
I was looking at the file size for the Q4_K_M quant of the new Qwen3.5 9b on Ollama, and it is listed at 6.6GB in the Ollama library. If you look at all the main Q4_K_M GGUFs on huggingface from Bartowski, Unsoth, and basically everyone's Q4_K_M as far as I was able to find, all of them are from about 5.5GB to 5.9GB in file size, most of them right around 5.6 or 5.7GB, so around 0.8-0.9GB smaller in size than the Ollama version. At first I thought maybe it was a typo by Ollama and that their Q4_K_M was actually the Q5_K_M (since that is exactly 6.6GB from one of the main GGUFs on Huggingface), but, out of curiosity and to look into it, I browsed some random other quants of unrelated models (not Qwen models and not just recent models, but random other well known LLMs from the past few months or past year or so) and they all also were around 0.5GB to 1GB larger in size on Ollama than what the GGUF size would be if you downloaded it from huggingface at the same quant. So, looks like this is just how it actually is. What is all the extra stuff that Ollama is adding that makes the file size so much bigger? I mean, I know they add in some default parameters and template so you don't have to deal with that stuff, or something like that, but that would only add a few extra kilobytes of text-files, right? 500MB-1GB is a lot of extra stuff, so, seems like something a lot heavier and more serious being added to the model. Also, while we are on the topic, since I am pretty new to local LLMs, if I wanted to switch from using Ollama to using llama.cpp, is there any security stuff I need to know before using it, where if I use it wrong, it'll give people access to my computer somehow if I set it up wrong? I know you can screw things up with OpenClaw pretty bad, for example, if you don't know what you are doing, but what about if you aren't using OpenClaw and are just using LLM models on llama.cpp? Are there are multi-modal/agentic models where I could somehow open up a vulnerability to my computer just by using the LLM without setting it up correctly, if I just copy/paste whatever template from the internet that people post, and maybe it somehow is a bad one that makes it do dangerous stuff somehow? Probably a ridiculous question, but I'm a noob and don't mind sound computer illiterate (which, I am) in the 1% chance there are some things about using llama.cpp that I need to know about before trying to use it for the first time. So, if there are any beginner things I need to know before using llama.cpp, please let me know, since, I will probably be switching from Ollama to llama.cpp pretty soon, once I learn how to do it and also am sure that I won't accidentally do some huge security issue to my computer or anything.
2026-03-03T04:27:09
https://www.reddit.com/r/LocalLLaMA/comments/1rjfqib/why_are_the_ollama_quants_of_local_llm_models/
DeepOrangeSky
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjfqib
false
null
t3_1rjfqib
/r/LocalLLaMA/comments/1rjfqib/why_are_the_ollama_quants_of_local_llm_models/
false
false
self
1
null
Peak answer
1
2026-03-03T04:17:09
https://i.redd.it/6mrwba7iarmg1.png
Pro-editor-1105
i.redd.it
1970-01-01T00:00:00
0
{}
1rjfixk
false
null
t3_1rjfixk
/r/LocalLLaMA/comments/1rjfixk/peak_answer/
false
false
https://preview.redd.it/…47a410129a44dceb
1
{'images': [{'source': {'url': 'https://preview.redd.it/6mrwba7iarmg1.png?auto=webp&s=781ba9bd7dc33ae3131669dd2575897d25b1e4b9', 'width': 1952, 'height': 1036}, 'resolutions': [{'url': 'https://preview.redd.it/6mrwba7iarmg1.png?width=108&crop=smart&auto=webp&s=b3569a01ec1ce57a98ab8a3bf424146f3de49cd9', 'width': 108, 'height': 57}, {'url': 'https://preview.redd.it/6mrwba7iarmg1.png?width=216&crop=smart&auto=webp&s=c6642df56f9f2dd567195fcb22664f299ff1adf6', 'width': 216, 'height': 114}, {'url': 'https://preview.redd.it/6mrwba7iarmg1.png?width=320&crop=smart&auto=webp&s=0d387663b737f45cede1b3467fc1b58196f063bc', 'width': 320, 'height': 169}, {'url': 'https://preview.redd.it/6mrwba7iarmg1.png?width=640&crop=smart&auto=webp&s=47363329adb7ae6ccfd37d599417e3502bc14738', 'width': 640, 'height': 339}, {'url': 'https://preview.redd.it/6mrwba7iarmg1.png?width=960&crop=smart&auto=webp&s=f1591a83a3c5917e13ee0e0893a3296fd3b7e63f', 'width': 960, 'height': 509}, {'url': 'https://preview.redd.it/6mrwba7iarmg1.png?width=1080&crop=smart&auto=webp&s=1ebf48ca4fb55841b9a5eaae101da2eee0ae5d6a', 'width': 1080, 'height': 573}], 'variants': {}, 'id': '6mrwba7iarmg1'}], 'enabled': True}
Cline not playing well with the freshly dropped smaller qwen3.5
1
Obviously these are fresh out the oven, but I am wondering if anyone else has tried them out with Cline? I have a few tasks I try to do whenever I try new models out, basics like math, simple coding, macro creation for FreeCAD, and reading files for RAG. I've tried 3 different sizes so far, up to 9b, and noticed that despite a pretty decent token and processing speed, I am getting a large amount of malformed json and terminated threads when reading files into context. Is this something I should maybe wait to see if lmstudio and ollama push updates for changes done, or maybe this is a Cline thing?
2026-03-03T04:16:37
https://www.reddit.com/r/LocalLLaMA/comments/1rjfijf/cline_not_playing_well_with_the_freshly_dropped/
SocietyTomorrow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjfijf
false
null
t3_1rjfijf
/r/LocalLLaMA/comments/1rjfijf/cline_not_playing_well_with_the_freshly_dropped/
false
false
self
1
null
How do i get the best speed out of Qwen 3.5 9B in 16GB VRAM?
1
--temp 0.6 ` --top-p 0.95 ` --top-k 20 ` --min-p 0.0 ` --no-mmap ` --cache-type-k q8_0 ` --cache-type-v q8_0 ` --fit on ` -fa on ` --seed 3407 ` --presence-penalty 0.0 ` --repeat-penalty 1.0 ` --ctx-size 61440 ` --chat-template-kwargs '{\"enable_thinking\": true}' ` --port 8001 ` --jinja * 16vram + 32ram * unsloth Qwen3.5-9B-UD-Q8\_K\_XL.gguf (**12GB)** * context 60k (lowering doesn't improve speed, but after getting filled it might slowdown) * around **3GB** VRAM left free when running * getting around **22 tps output** Any optimizations i can do?
2026-03-03T04:12:12
https://www.reddit.com/r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/
Old-Sherbert-4495
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjff88
false
null
t3_1rjff88
/r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/
false
false
self
1
null
While Qwen 3.5 pushes model boundaries, here's an agent framework pushing workflow boundaries
1
[removed]
2026-03-03T03:59:27
https://www.reddit.com/r/LocalLLaMA/comments/1rjf5nn/while_qwen_35_pushes_model_boundaries_heres_an/
One_Response7194
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjf5nn
false
null
t3_1rjf5nn
/r/LocalLLaMA/comments/1rjf5nn/while_qwen_35_pushes_model_boundaries_heres_an/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/x_YtbEYg9nvNGG1Iz9a9pdsjgZpTuvYtdnCYg72Erus.png?auto=webp&s=108a92c5ffba5732424d507ca9618a289c99e5bc', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/x_YtbEYg9nvNGG1Iz9a9pdsjgZpTuvYtdnCYg72Erus.png?width=108&crop=smart&auto=webp&s=4da142e831e05c98f70bc595932f6f1230a89e41', 'width': 108, 'height': 54}, {'url': 'https://external-preview.redd.it/x_YtbEYg9nvNGG1Iz9a9pdsjgZpTuvYtdnCYg72Erus.png?width=216&crop=smart&auto=webp&s=69693a9a28d1a44787e517fa1f3a77dd1249c58a', 'width': 216, 'height': 108}, {'url': 'https://external-preview.redd.it/x_YtbEYg9nvNGG1Iz9a9pdsjgZpTuvYtdnCYg72Erus.png?width=320&crop=smart&auto=webp&s=4b626d853553e0ba40695e79cf0e1083a2c203a7', 'width': 320, 'height': 160}, {'url': 'https://external-preview.redd.it/x_YtbEYg9nvNGG1Iz9a9pdsjgZpTuvYtdnCYg72Erus.png?width=640&crop=smart&auto=webp&s=1670a8ef2c62fcd6bdfcfdf29bbca74be1605341', 'width': 640, 'height': 320}, {'url': 'https://external-preview.redd.it/x_YtbEYg9nvNGG1Iz9a9pdsjgZpTuvYtdnCYg72Erus.png?width=960&crop=smart&auto=webp&s=2e89d67628c1860ea3204fe8a6a157176dce5eef', 'width': 960, 'height': 480}, {'url': 'https://external-preview.redd.it/x_YtbEYg9nvNGG1Iz9a9pdsjgZpTuvYtdnCYg72Erus.png?width=1080&crop=smart&auto=webp&s=7b43ba7ae8a58fe17b22023e8580fa23c82a8a8e', 'width': 1080, 'height': 540}], 'variants': {}, 'id': 'x_YtbEYg9nvNGG1Iz9a9pdsjgZpTuvYtdnCYg72Erus'}], 'enabled': False}
Reasoning in cloud - Coding with Local
1
I have a couple of cloud subscriptions (that don't keep up with my need for tokens). The subscriptions I have are 1. ChatGPT Go (which gave me a free trial access to Codex - but, ran out of tokens in a couple of days). I could upgrade to Plus - but, I doubt it would be enough either at the rate at which I'm consuming tokens. 2. OpenCode Go - 2 days in, I'm 50% into my weekly usage. Most of my coding is using OpenCode. So, I was thinking maybe I could use the cloud subscriptions for planning the feature/bug fix. Have it write out a task.md. And, then have a local model to do the actual writing of code (and see how far that would get me). Any ideas on whether this is doable? If so, what would the recommended local model be that I can try out? For reference, I am running this on a 2021 MacBook Pro (16GB RAM). So, my local specs aren't that great either. Any other low cost alternatives?
2026-03-03T03:58:35
https://www.reddit.com/r/LocalLLaMA/comments/1rjf4zm/reasoning_in_cloud_coding_with_local/
sedentarymalu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjf4zm
false
null
t3_1rjf4zm
/r/LocalLLaMA/comments/1rjf4zm/reasoning_in_cloud_coding_with_local/
false
false
self
1
null
data analysis from a csv - GPT-0SS:120B
1
Hi everyone, I’m running a local setup with **vLLM (gpt-oss:120b)** and **Open WebUI**, using **Jupyter** for the Code Interpreter. I’m running into a frustrating "RAG vs. Tool" issue when analyzing feedback data (CSVs). **The Problem:** When I upload a file and ask for metrics (e.g., "What is the average sentiment score?"), the model **hallucinates the numbers** based on the small text snippet it sees in the RAG context window instead of actually executing a Python script in Jupyter to calculate them. Looking for an approach to fix this problem. Thanks in advance
2026-03-03T03:25:37
https://www.reddit.com/r/LocalLLaMA/comments/1rjefqu/data_analysis_from_a_csv_gpt0ss120b/
chirchan91
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjefqu
false
null
t3_1rjefqu
/r/LocalLLaMA/comments/1rjefqu/data_analysis_from_a_csv_gpt0ss120b/
false
false
self
1
null
Qwen3.5 on a mid tier $300 android phone
2
[qwen3.5](https://reddit.com/link/1rjec8a/video/r67v8w970rmg1/player) Qwen3.5 running completely offline on a $300 phone! Tool calling, vision, reasoning. No cloud, no account and no data leaving your phone. A 2B model that has no business being this good!
2026-03-03T03:21:06
https://www.reddit.com/r/LocalLLaMA/comments/1rjec8a/qwen35_on_a_mid_tier_300_android_phone/
alichherawalla
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjec8a
false
null
t3_1rjec8a
/r/LocalLLaMA/comments/1rjec8a/qwen35_on_a_mid_tier_300_android_phone/
false
false
self
2
null
Stress-test my Open Source ChatGPT alternative
1
Hey everyone. I'm a dev who got sick of the big cloud providers using our conversations for training data, so I decided to build a privacy-first alternative from the ground up. It’s a completely open-source chat interface hooked up to open-source models (DeepSeek v3.2, GLM-5, Qwen, etc.), all running on self-hosted infrastructure. My main focus was true privacy: zero data retention (chats are wiped when you delete/close), end-to-end encryption, and absolutely no training on user data. I just spun up the beta environment and I really need some people to use it, try to break it, and give me brutal feedback. I’m specifically looking for feedback on: * Inference latency / speed * The UI/UX and model switching experience * Any bugs you run into I'll drop the link in the comments below. If you have a few minutes to test it out, I'd hugely appreciate the help!
2026-03-03T02:54:41
https://www.reddit.com/r/LocalLLaMA/comments/1rjdr45/stresstest_my_open_source_chatgpt_alternative/
MrWidmoreHK
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjdr45
false
null
t3_1rjdr45
/r/LocalLLaMA/comments/1rjdr45/stresstest_my_open_source_chatgpt_alternative/
false
false
self
1
null
Generate 3D Models with TRELLIS.2 In Colab, Working in under 60s, No Configuration or Compiling, Just Works
1
[Image Generated in Chat Gpt -\> Model Generated in Trellis.2](https://reddit.com/link/1rjdob7/video/1l1bo332vqmg1/player) Try out TRELLIS.2 in Colab and generate stunning Textured 3D Models in seconds! I put this colab notebook together after weeks of dependency hell - I hope it helps you. Just one click and go, select an A100 or L4 in colab, install the missing link dependencies and there's no compiling and no package fighting! Plus it's insanely fast, all the pre-built wheels I compiled and optimized specifically for each default runtime and CUDA stack. [https://colab.research.google.com/github/PotentiallyARobot/MissingLink/blob/main/notebooks/Trellis\_2\_MissingLink\_Colab\_Optimized.ipynb](https://colab.research.google.com/github/PotentiallyARobot/MissingLink/blob/main/notebooks/Trellis_2_MissingLink_Colab_Optimized.ipynb) [\^Expanded Render Modes!](https://preview.redd.it/sursyf52qqmg1.png?width=1844&format=png&auto=webp&s=e7422e05ead769b99a6adb591d4ba13b6d9db4a4) [\^1.6x Faster Batch Model Generation!](https://preview.redd.it/1kvx4l5btqmg1.png?width=1979&format=png&auto=webp&s=3697276e1edcd58e5bdcdbc29ff2fb9a52a5246f) It's a lot of fun and comes with a custom UI, some new Render Outputs and a streamlined pipeline so that generation is \~1.6x faster when you generate multiple models at once. Trellis.2 is great for quickly building game and animation assets. Enjoy!
2026-03-03T02:51:10
https://www.reddit.com/r/LocalLLaMA/comments/1rjdob7/generate_3d_models_with_trellis2_in_colab_working/
Interesting-Town-433
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjdob7
false
null
t3_1rjdob7
/r/LocalLLaMA/comments/1rjdob7/generate_3d_models_with_trellis2_in_colab_working/
false
false
https://external-preview…edbe7d9ea1e231ee
1
null
Ollama keeps loading with Openclaw
1
I am able to easily run qwen3:8b with 32k context window using just ollama but whenever I do ollama launch openclaw and run even smaller model like qwen3:1.7b with 16k context window it doesn load the response and gives fetch failed. even if it doesnt use all the ram I have. is there a fix or should I just have much stronger machine. I have 24gb of ram rn.
2026-03-03T02:50:52
https://www.reddit.com/r/LocalLLaMA/comments/1rjdo1i/ollama_keeps_loading_with_openclaw/
Ilishka2003
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjdo1i
false
null
t3_1rjdo1i
/r/LocalLLaMA/comments/1rjdo1i/ollama_keeps_loading_with_openclaw/
false
false
self
1
null
Agent reliability
1
How do everyone measure reliability of agents?
2026-03-03T02:43:31
https://www.reddit.com/r/LocalLLaMA/comments/1rjdi1d/agent_reliability/
Evening-Arm-34
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjdi1d
false
null
t3_1rjdi1d
/r/LocalLLaMA/comments/1rjdi1d/agent_reliability/
false
false
self
1
null
Dual RTX 3090 on B550 -- 70B models produce garbage at ctx >2048 with llama.cpp layer split. Exhausted every env var. Anyone solved this?
1
Hardware: - 2x RTX 3090 24GB - MSI MAG B550 Tomahawk MAX WiFi - Ryzen 5 5600 - GPU 0 in CPU-direct slot (Gen4 x16), GPU 1 in chipset slot (Gen3 x4 via riser) - No P2P support (CNS per nvidia-smi topo) Software: - llama.cpp b8138, CUDA 12.0, driver 580.x - --split-mode layer -ngl 999 The problem: All 70B models produce completely incoherent output (repeating ? characters, random tokens, garbled text) when running on dual GPU with --split-mode layer at context sizes above 2048. 8B models (hermes3:8b) were observed working on dual GPU (context size not recorded). Could be the same issue if context was raised, unconfirmed. What works vs what doesn't: Dual GPU, context 2048: - FP16 KV, flash-attn on -- works - FP16 KV, flash-attn off -- works - q8_0/q4_0 KV, flash-attn on -- garbage Dual GPU, context 8192: - FP16 KV, flash-attn on -- garbage - q8_0/q4_0 KV, flash-attn on -- garbage Single GPU, context 8192: - FP16 KV, flash-attn on -- works perfectly Context size is the only variable that consistently matters. 2048 works, 4096+ fails on dual GPU. Single GPU is fine at any context. Env vars tested (individually and combined, no effect on any result): GGML_CUDA_DISABLE_GRAPHS=1, GGML_CUDA_PEER_MAX_BATCH_SIZE=0, GGML_CUDA_FORCE_MMQ=1, CUDA_SCALE_LAUNCH_QUEUES=4x Build flags (also no effect): GGML_CUDA_FA_ALL_QUANTS=ON, GGML_CUDA_NO_PEER_COPY=ON My theory: The layer-split code path handles cross-GPU KV cache transfers fine when the buffer is small (ctx 2048), but something corrupts when the buffer crosses a size threshold at larger contexts. Likely specific to   non-P2P topologies where transfers go through system memory. Most dual 3090 users are on X570 with x8/x8 CPU-direct lanes, which is probably why this isn't reported more. What I haven't tried yet: - Latest llama.cpp build (41 builds behind, but relevant GitHub fixes appear to already be in my build) - ik_llama.cpp --split-mode graph (NCCL tensor parallelism) - vLLM with tensor parallelism - New riser cable in transit (current budget riser caused separate Xid 79 issues on the chipset slot) Questions: 1. Has anyone run dual 3090s on a B550 (or similar no-P2P board) with 70B models successfully at >4K context in llama.cpp? 2. Has --split-mode graph in ik_llama.cpp or mainline TP solved this class of problem for you? 3. Is this a known limitation of llama.cpp layer split on non-P2P topologies, and the real answer is "use vLLM/exllamav2 TP"? Any pointers appreciated. Happy to test specific configurations or provide logs.
2026-03-03T02:38:56
https://www.reddit.com/r/LocalLLaMA/comments/1rjdeat/dual_rtx_3090_on_b550_70b_models_produce_garbage/
MaleficentMention703
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjdeat
false
null
t3_1rjdeat
/r/LocalLLaMA/comments/1rjdeat/dual_rtx_3090_on_b550_70b_models_produce_garbage/
false
false
self
1
null
This is how you know
1
When you start telling you're LMM "Implement the FULL PLAN DO NOT STOP UNTIL IT HAS BEEN VERIFIED TRUTH VIA TERMINAL THAT IT IS FUNCTIONING AS INTENDED "
2026-03-03T02:30:30
https://www.reddit.com/r/LocalLLaMA/comments/1rjd7j3/this_is_how_you_know/
Apart-Yam-979
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjd7j3
false
null
t3_1rjd7j3
/r/LocalLLaMA/comments/1rjd7j3/this_is_how_you_know/
false
false
self
1
null
Qwen 2.5 -> 3 -> 3.5, smallest models. Incredible improvement over the generations.
1
You might argue Qwen 3.5 is the best because it's 0.8B, but I'm pretty sure a significant part of that is the vision encoder and the language model itself is smaller.
2026-03-03T02:26:58
https://www.reddit.com/gallery/1rjd4pv
airbus_a360_when
reddit.com
1970-01-01T00:00:00
0
{}
1rjd4pv
false
null
t3_1rjd4pv
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/
false
false
https://preview.redd.it/…7293959e09ff9155
1
null
Qwen3.5 Llamacpp command-line flags for new folks switching from Ollama/Lmstudio
1
Use the **Q4KM** qunat from unlsoth and enable **q8 kv** cache quant For vision include **mmproj**, dont use the og fp32, use **bf16 or f16** **llama.cpp command:** ./llama-server \ -m "path/Qwen3.5-35B-A3B-Q4_K_M.gguf" \ --mmproj "path/mmproj-bf16.gguf" \ --port "Port" \ --ctx-size "context size" \ -n "max output lenght" \ --flash-attn on \ --fit on \ --fit-target 'ur vram -1000' \ -t 'num cpu core' \ --kv-offload \ --no-mmap \ -ctk q8_0 \ -ctv q8_0 \ --temp 1.0 \ --top-p 0.95 \ --top-k 20 \ --min-p 0.0 \ --presence-penalty 1.5 \ --repeat-penalty 1.0
2026-03-03T02:20:00
https://www.reddit.com/r/LocalLLaMA/comments/1rjcz7r/qwen35_llamacpp_commandline_flags_for_new_folks/
maho_Yun
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjcz7r
false
null
t3_1rjcz7r
/r/LocalLLaMA/comments/1rjcz7r/qwen35_llamacpp_commandline_flags_for_new_folks/
false
false
self
1
null
Qwen 3.5 4B is scary smart
1
Using PocketPal on an iPhone 17 Pro Max. Let me know if any of you guys have had a experience like mine where the knowledge from such a small model was scary impressive.
2026-03-03T02:09:28
https://i.redd.it/5980e6dbnqmg1.png
Hanthunius
i.redd.it
1970-01-01T00:00:00
0
{}
1rjcqm5
false
null
t3_1rjcqm5
/r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/
false
false
https://preview.redd.it/…f6f2fac5fcc59e4f
1
{'images': [{'source': {'url': 'https://preview.redd.it/5980e6dbnqmg1.png?auto=webp&s=2800874832b1ddb03759abd786d2f649d16dfc02', 'width': 1320, 'height': 2868}, 'resolutions': [{'url': 'https://preview.redd.it/5980e6dbnqmg1.png?width=108&crop=smart&auto=webp&s=e6bc878f1e087d086f7f64e93aa97b27b24e05fb', 'width': 108, 'height': 216}, {'url': 'https://preview.redd.it/5980e6dbnqmg1.png?width=216&crop=smart&auto=webp&s=e2b898363c47e58c1e80f710bfde5c2e486dde85', 'width': 216, 'height': 432}, {'url': 'https://preview.redd.it/5980e6dbnqmg1.png?width=320&crop=smart&auto=webp&s=d5ec219109d4d159e57352ef307c2086bdf3ad66', 'width': 320, 'height': 640}, {'url': 'https://preview.redd.it/5980e6dbnqmg1.png?width=640&crop=smart&auto=webp&s=1969727c67ef651220b0b64c2cc96944ef700f26', 'width': 640, 'height': 1280}, {'url': 'https://preview.redd.it/5980e6dbnqmg1.png?width=960&crop=smart&auto=webp&s=63c1f1e98dc5ccb4787dce9d577392affacf69ec', 'width': 960, 'height': 1920}, {'url': 'https://preview.redd.it/5980e6dbnqmg1.png?width=1080&crop=smart&auto=webp&s=f31c234612bc6b7d664e8887cc5920393f7ff236', 'width': 1080, 'height': 2160}], 'variants': {}, 'id': '5980e6dbnqmg1'}], 'enabled': True}
[Help] Deploying Llama-3 8B Finetune for Low-Resource Language (Sinhala) on Free Tier? 4-bit GGUF ruins quality.
1
I am a final-year undergraduate student building an educational storytelling app for primary school children in Sri Lanka. I have successfully fine-tuned the `ihalage/llama3-sinhala-8b` model (Llama-3 base) using Unsloth on an A100 to generate culturally aligned Sinhala stories and JSON quizzes. **The Problem:** I need to deploy this model for **free (or extremely cheap)** for my university defense and public testing, but I'm hitting a wall between **Inference Speed vs. Generation Quality.** **What I've Tried:** 1. **Modal (Paid/Credits):** I deployed the full `bfloat16` adapter on an A10G/A100. * *Result:* Incredible quality, perfect Sinhala grammar, sub-3-second generation. * *Issue:* I'm running on academic credits that will expire. I need a sustainable free/low-cost option. 2. **Hugging Face Spaces (Free Tier CPU) + GGUF:** I converted the model to `Q4_K_M` (4-bit) GGUF to fit inside the 16GB RAM limit. * *Result:* **The quality collapsed.** Because Sinhala is a morphologically rich, low-resource language, the 4-bit quantization caused the model to lose key grammar nuances (suffixes/syntax) that remained perfect in 16-bit. It also hallucinates spelling errors. * *Speed:* Painfully slow (1-2 tokens/sec) on CPU, which ruins the "gamified" experience for kids. **My Constraints:** * **Model:** Llama-3 8B (LoRA Adapter + Base). * **Language:** Sinhala (Very sensitive to quantization loss). * **Goal:** A hosted API endpoint (FastAPI/Flask) that my React frontend can hit. * **Budget:** $0 (or <$5/mo if absolutely necessary). **My Questions for the Experts:** 1. Is there *any* free hosting platform that offers even a small GPU (T4?) where I can run an **8-bit (Q8\_0)** or **FP16** version of the model? 4-bit is simply not an option for this language. 2. Has anyone successfully deployed an 8B model on **Kaggle Notebooks** or **Colab** strictly as an API endpoint (using ngrok/cloudflared) for a production demo? Is the "cold boot" time manageable? 3. Are there specific quantization techniques (e.g., GPTQ, AWQ) that preserve low-resource language performance better than GGUF `Q4_K_M` while still fitting on smaller hardware? Any advice on architecture would be amazing. I just want these kids to experience the high-quality stories the model *can* generate without paying enterprise GPU costs! Thanks in advance!
2026-03-03T02:02:31
https://www.reddit.com/r/LocalLLaMA/comments/1rjckv2/help_deploying_llama3_8b_finetune_for_lowresource/
Annual-Captain-7642
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjckv2
false
null
t3_1rjckv2
/r/LocalLLaMA/comments/1rjckv2/help_deploying_llama3_8b_finetune_for_lowresource/
false
false
self
1
null
Qwen3.5-35B-A3B achieves 8 t/s on Orange Pi 5 with ik_llama.cpp
1
[removed]
2026-03-03T02:00:02
https://www.reddit.com/r/LocalLLaMA/comments/1rjcimq/qwen3535ba3b_achieves_8_ts_on_orange_pi_5_with_ik/
anthonybustamante
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjcimq
false
null
t3_1rjcimq
/r/LocalLLaMA/comments/1rjcimq/qwen3535ba3b_achieves_8_ts_on_orange_pi_5_with_ik/
false
false
https://external-preview…f4e7bdae16218090
1
null
Qwen3.5-35B-A3B achieves 8 t/s on Orange Pi 5 with ik_llama.cpp
1
[removed]
2026-03-03T01:58:21
https://www.reddit.com/r/LocalLLaMA/comments/1rjchbl/qwen3535ba3b_achieves_8_ts_on_orange_pi_5_with_ik/
anthonybustamante
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjchbl
false
null
t3_1rjchbl
/r/LocalLLaMA/comments/1rjchbl/qwen3535ba3b_achieves_8_ts_on_orange_pi_5_with_ik/
false
false
https://external-preview…f4e7bdae16218090
1
null
qwen3.5-9b q4-k-m in LM studio thinking too much!
1
I must force-stop it several times. I just stopped it after 31 minutes. Has anyone else had this happen?
2026-03-03T01:55:53
https://www.reddit.com/r/LocalLLaMA/comments/1rjcfdk/qwen359b_q4km_in_lm_studio_thinking_too_much/
yingzir
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjcfdk
false
null
t3_1rjcfdk
/r/LocalLLaMA/comments/1rjcfdk/qwen359b_q4km_in_lm_studio_thinking_too_much/
false
false
self
1
null
Can I use an old mining rig as a LLM server?
1
[removed]
2026-03-03T01:52:03
https://www.reddit.com/r/LocalLLaMA/comments/1rjcc9u/can_i_use_an_old_mining_rig_as_a_llm_server/
Public-Call-6174
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjcc9u
false
null
t3_1rjcc9u
/r/LocalLLaMA/comments/1rjcc9u/can_i_use_an_old_mining_rig_as_a_llm_server/
false
false
self
1
null
Qwen3.5-35B-A3B achieves 8 t/s on Orange Pi 5 with ik_llama.cpp
1
ERROR: type should be string, got "https://reddit.com/link/1rjc60i/video/e9g0s5c7jqmg1/player\n\nI have two Rockchip RK3588's: an **Orange Pi 5 Plus (32gb RAM)** and an **Orange Pi 5 Max (16gb)**. I'm using the most recent version of **ik\\_llama.cpp** for its CPU optimizations, but I will include llama.cpp's results as well.\n\nI wanted to see what Qwen3.5-35B quants I could run on them, so here's what I found today.\n\n* **Runs**: 3 per model x {ik\\_llama, llama.cpp}, page cache dropped before every run\n* **Prompt**: \"`Explain the RK3588 in 5 bullets.\"`\n* **Generation**: 512 tokens, 16384 context\n\n# Orange Pi 5 Plus (32GB)\n\n|Model|llama.cpp average t/s|ik\\_llama.cpp average t/s|ik Speedup|Size (GiB)|\n|:-|:-|:-|:-|:-|\n|Unsloth UD-Q4\\_K\\_M|3.60|**8.20**|2.28x|18.5|\n|Bartowski Q4\\_K\\_M|3.70|**7.79**|2.11x|19.8|\n|Bartowski Q6\\_K\\_L|3.33|**6.45**|1.94x|27.0|\n\nThe \\~27% speed increase when using Q4 is probably worth the precision tradeoff, but ymmv of course.\n\n# Orange Pi 5 Max (16GB)\n\n|Model|llama.cpp average t/s|ik\\_llama.cpp average t/s|ik Speedup|Size (GiB)|\n|:-|:-|:-|:-|:-|\n|Bartowski Q2\\_K\\_L|3.73|**8.11**|2.17x|12.1|\n\nI didn't have much time to experiment with the Max, but I'll do more tomorrow.\n\n# Build llama.cpp\n\n git clone https://github.com/ggml-org/llama.cpp\n cd llama.cpp\n \n cmake -S . -B build \\\n -DCMAKE_BUILD_TYPE=Release \\\n -DGGML_NATIVE=ON \\\n -DGGML_OPENMP=ON\n \n cmake --build build --config Release -j\"$(nproc)\"\n\n# Build ik_llama.cpp\n\n git clone https://github.com/ikawrakow/ik_llama.cpp\n cd ik_llama.cpp\n \n CFLAGS=\"-O3 -pipe -march=native\" \\\n CXXFLAGS=\"-O3 -pipe -march=native -include arm_neon.h\" \\\n cmake -S . -B build -DGGML_NATIVE=ON -DGGML_OPENMP=ON\n \n cmake --build build --config Release -j\"$(nproc)\"\n\n# Commands\n\n**llama.cpp:**\n\n sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches' && \\\n taskset -c 4-7 ~/llama.cpp/build/bin/llama-cli \\\n -m <MODEL_PATH> \\\n -t 4 -c 16384 -n 512 -st \\\n -p \"Explain the RK3588 in 5 bullets.\"\n\n**ik\\_llama.cpp:**\n\n sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches' && \\\n echo \"\" | taskset -c 4-7 ~/ik_llama.cpp/build/bin/llama-cli \\\n -m <MODEL_PATH> \\\n -t 4 -c 16384 -n 512 \\\n -p \"Explain the RK3588 in 5 bullets.\"\n\n\n\nThank you to Unsloth and Bartowski for their open source contributions. I was inspired to make this post after seeing u/jslominski's for the RPi ([link](https://www.reddit.com/r/LocalLLaMA/comments/1rg87bj/qwen3535ba3b_running_on_a_raspberry_pi_5_16gb_and/)).\n\nI will be testing more models tomorrow, including some on my Jetson Orin Nano and other pcs. Right now I'm testing Qwen3.5-9b and -27b on some gaming laptops... let me know if you want to see anything in particular or if we can further improve the results."
2026-03-03T01:44:16
https://www.reddit.com/r/LocalLLaMA/comments/1rjc60i/qwen3535ba3b_achieves_8_ts_on_orange_pi_5_with_ik/
anthonybustamante
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjc60i
false
null
t3_1rjc60i
/r/LocalLLaMA/comments/1rjc60i/qwen3535ba3b_achieves_8_ts_on_orange_pi_5_with_ik/
false
false
https://external-preview…f4e7bdae16218090
1
null
Benchmarked Qwen 3.5 small models (0.8B/2B/4B/9B) on few-shot learning — adding examples to 0.8B code tasks actually makes it worse
1
Ran all four Qwen 3.5 small models through a few-shot evaluation on LM Studio — 3 tasks (classification, code fix, summarization) at 0/1/2/4/8-shot with TF-IDF example selection. **Image 1 — Code fix**: 0.8B scores 67% at zero-shot, then drops to 33% the moment you add 1 example and never recovers. 2B peaks at 100% at 1-2 shot, then falls back to 67%. 4B and 9B are rock solid. Adding examples to smaller models can actively hurt code task performance. **Image 2 — Classification**: The story flips. 0.8B *learns* from 60% to 100% at 8-shot — a clean learning curve. 2B/4B/9B are already perfect at zero-shot. **Image 3 — Summarization**: Scales cleanly with model size (0.8B→0.38, 2B→0.45, 4B→0.65 F1). The 9B flatlines at \~0.11 — explained in the comments (thinking model artifact). Same 0.8B model, opposite behavior depending on task. Gains from examples on classification, collapses on code fix. **Practical takeaways:** * 4B is the sweet spot — stable across all tasks, no collapse, much faster than 9B * 2B is great for classification but unreliable on code tasks * Don't blindly add few-shot examples to 0.8B — measure per task first * 9B notes in the comments
2026-03-03T01:31:50
https://www.reddit.com/gallery/1rjbw0p
Rough-Heart-7623
reddit.com
1970-01-01T00:00:00
0
{}
1rjbw0p
false
null
t3_1rjbw0p
/r/LocalLLaMA/comments/1rjbw0p/benchmarked_qwen_35_small_models_08b2b4b9b_on/
false
false
https://preview.redd.it/…b29dbccfa2a767ff
1
null
I made a guardrail that works with Ollama/llama.cpp to catch hallucinations during streaming — open source, runs locally, no API calls needed
1
ERROR: type should be string, got " https://github.com/anulum/director-ai\n \n Has anyone else tried running hallucination detection locally? Curious what\n approaches are working for you."
2026-03-03T01:20:12
https://www.reddit.com/r/LocalLLaMA/comments/1rjbmc9/i_made_a_guardrail_that_works_with_ollamallamacpp/
Diligent-Tomorrow-82
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjbmc9
false
null
t3_1rjbmc9
/r/LocalLLaMA/comments/1rjbmc9/i_made_a_guardrail_that_works_with_ollamallamacpp/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/ynt2HmuaC6ntlDVMJj_FkeAth8kmXtL8GKLvaachO_k.png?auto=webp&s=4c8bbc7ea7905ba69da8877ca6dd0b1e313fbdf7', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/ynt2HmuaC6ntlDVMJj_FkeAth8kmXtL8GKLvaachO_k.png?width=108&crop=smart&auto=webp&s=54616fc4ee6d1a9ab7009d7eace7047254fcb32a', 'width': 108, 'height': 54}, {'url': 'https://external-preview.redd.it/ynt2HmuaC6ntlDVMJj_FkeAth8kmXtL8GKLvaachO_k.png?width=216&crop=smart&auto=webp&s=6c0f7ee145534f01c935b024dccbd51241b6759c', 'width': 216, 'height': 108}, {'url': 'https://external-preview.redd.it/ynt2HmuaC6ntlDVMJj_FkeAth8kmXtL8GKLvaachO_k.png?width=320&crop=smart&auto=webp&s=f313a7936d44af1b68a20109686186d4aee320e4', 'width': 320, 'height': 160}, {'url': 'https://external-preview.redd.it/ynt2HmuaC6ntlDVMJj_FkeAth8kmXtL8GKLvaachO_k.png?width=640&crop=smart&auto=webp&s=b74cc3b1021e0607ed2aeee6e39ace0b4bc4f515', 'width': 640, 'height': 320}, {'url': 'https://external-preview.redd.it/ynt2HmuaC6ntlDVMJj_FkeAth8kmXtL8GKLvaachO_k.png?width=960&crop=smart&auto=webp&s=fefd1605cd3816f5dd56b76bc287b45bb87c563f', 'width': 960, 'height': 480}, {'url': 'https://external-preview.redd.it/ynt2HmuaC6ntlDVMJj_FkeAth8kmXtL8GKLvaachO_k.png?width=1080&crop=smart&auto=webp&s=5f71663ef1bca0b58e2659cadaea081809bb8a67', 'width': 1080, 'height': 540}], 'variants': {}, 'id': 'ynt2HmuaC6ntlDVMJj_FkeAth8kmXtL8GKLvaachO_k'}], 'enabled': False}
Whispr Flow - Free Windows - What's best in early 2026?
1
What is the best speech to input for Windows at the moment? Free, open source? It's hard to google these things because the space changes so frequently.
2026-03-03T01:09:38
https://www.reddit.com/r/LocalLLaMA/comments/1rjbdhh/whispr_flow_free_windows_whats_best_in_early_2026/
Plane_Garbage
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjbdhh
false
null
t3_1rjbdhh
/r/LocalLLaMA/comments/1rjbdhh/whispr_flow_free_windows_whats_best_in_early_2026/
false
false
self
1
null
PSA: If you want to test new models, use llama.cpp/transformers/vLLM/SGLang
1
There's so many comments/posts discussing how new qwen models have issues with super long chain of thoughts, problems with tool calls and outright garbage responses. The thing is, those only happen with Ollama, LMStudio and other frameworks, that are basically llama.cpp but worse. Ollama is outright garbage for multiple reasons and there's hardly a good reason to use it over llama.cpp's server. LMStudio doesn't support `presence penalty` required by newer qwen models and tries to parse tool calls in model's `<thinking></thinking>` tags, when it shouldn't. So yeah, don't blame models for your choice of runtime.
2026-03-03T01:03:00
https://www.reddit.com/r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/
lans_throwaway
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjb7yk
false
null
t3_1rjb7yk
/r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/
false
false
self
1
null
Transformers for Numeric Data
1
Pretty much the title. It seems like in a lot of fields, transformers have usurped the crown and proven they are superior. For example, translation: was HMMs, and now Transformers are the standard. That specific example actually is what makes me feel transformers would be great for timeseries prediction (ie. market prediction). I feel attention would be perfectly suited to picking up on these types of patterns. Does anyone actually use transformer models for anything outside of next word prediction? Specifically numeric data? Maybe anomaly detection?
2026-03-03T01:02:46
https://www.reddit.com/r/LocalLLaMA/comments/1rjb7s0/transformers_for_numeric_data/
JustinPooDough
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjb7s0
false
null
t3_1rjb7s0
/r/LocalLLaMA/comments/1rjb7s0/transformers_for_numeric_data/
false
false
self
1
null
No thinking in unsloth qwen3.5 quants?
1
It doesn't matter what parameters I pass, I can't enable thinking in the unsloth ggufs on the new small dense models. Using bartowiski quants it works normally. Anyone else experiencing this? Did they change the template to disable reasoning?
2026-03-03T00:57:19
https://www.reddit.com/r/LocalLLaMA/comments/1rjb34p/no_thinking_in_unsloth_qwen35_quants/
guiopen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjb34p
false
null
t3_1rjb34p
/r/LocalLLaMA/comments/1rjb34p/no_thinking_in_unsloth_qwen35_quants/
false
false
self
1
null
Self hosted provider tunnel.
1
lots of agentic coding CLI tools that allow openai\_compatible custom self hosted providers(im not talking about on local host) examle like [https://myproxy.com/v1](https://myproxy.com/v1) most of them error for some reason when trying to do this. only kilo cli i got to actually work. any one tried this exposing their llama.cpp port with a cloudflare tunnel?
2026-03-03T00:55:10
https://www.reddit.com/r/LocalLLaMA/comments/1rjb1d1/self_hosted_provider_tunnel/
Express_Quail_1493
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjb1d1
false
null
t3_1rjb1d1
/r/LocalLLaMA/comments/1rjb1d1/self_hosted_provider_tunnel/
false
false
self
1
null
Is there a list of the tools Gemini/ChatGPT/Claude have access to in their web chat interfaces to replicate locally?
1
It is clear that the closed providers have tons of tools set up behind the scenes, hidden from view, that improve the user experience, and I would love to be able to recreate the environment they have set up to possible improve the performance of a local model like Qwen 3.5 27B that has enough context to support calling plenty of tools. I just don't know if there is a publicly available list for that, or if looking through the leaked system prompts is the best bet we have. I don't really care for the chat history / memories aspects, but web search and sandboxed code execution can definitely improve models performance in knowledge and mathematics tasks at least.
2026-03-03T00:53:24
https://www.reddit.com/r/LocalLLaMA/comments/1rjazyt/is_there_a_list_of_the_tools_geminichatgptclaude/
OUT_OF_HOST_MEMORY
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjazyt
false
null
t3_1rjazyt
/r/LocalLLaMA/comments/1rjazyt/is_there_a_list_of_the_tools_geminichatgptclaude/
false
false
self
1
null
How do you configure your local model better for agentic tools? I'm only changing context
1
I see some of you configure like 5 or 7 parameters when hosting the model with llama, ollama or lmstudio. Honestly I'm just changing the context window and maybe temperature. What is the recommended configuration for agentic coding, tools usage?
2026-03-03T00:51:46
https://www.reddit.com/r/LocalLLaMA/comments/1rjaymu/how_do_you_configure_your_local_model_better_for/
former_farmer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjaymu
false
null
t3_1rjaymu
/r/LocalLLaMA/comments/1rjaymu/how_do_you_configure_your_local_model_better_for/
false
false
self
1
null
General LLM that uses "sub AI's" to complete complex tasks
1
I am beginning research on running a local AI and tried looking for an answer online and in this reddit, but couldn't find anything. The scenario I am thinking of is having a "main" LLM that you talk to and has a general training data set (For ease compare it to the same use as chatgpt), and say I wanted this ai to go on chess . com and grind the chess ladder. Could the Main LLM, rather than be trained on chess data, utilize a "sub ai" that I train exclusively on chess data and consult it for the gameplay knowledge and act on the sub ai output? Effectively having the "Chess sub ai" as a second brain or serve the same purpose as the "chess skill/info" part of a human brain? I use chess in this example for ease of my beginner understanding and explanation. Sorry if this is a stupid question, just wanting to broaden my understanding! Thanks in advance
2026-03-03T00:45:14
https://www.reddit.com/r/LocalLLaMA/comments/1rjat7a/general_llm_that_uses_sub_ais_to_complete_complex/
JWSlegend
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjat7a
false
null
t3_1rjat7a
/r/LocalLLaMA/comments/1rjat7a/general_llm_that_uses_sub_ais_to_complete_complex/
false
false
self
1
null
What LLM to replace Claude 3.5 sonnet for server integration?
1
So I'm a bit confused on what I need. I have openclaw running on an unraid server right now. It has a 13700 (non-k) 64GB DDR4 and a rtx4070ti super. I'm trying to compare the capability of that to something like a M4 pro mac mini with 64GB memory. Or I'd even consider getting a few mac mini. I have a base M4 16GB sitting in a desk not being used. I could buy a few of those but I don't know how that would stack up performance wise. Right now I'm using on an unraid server to monitor hardware, debug issues, and find performance increases. I also have it (read only) integrated into my gmail so I can have it catalog and create pdf of important ones. I dont' know the limits of what I'm going to do but I've been excited in doing this. Having it run through my server and find problems and fix them. Things that I thought were due to old hardware ended up being network loops of some dockers that where tying things up causing problems. Just super cool. I've been very restrictive on giving it access to too much. But I've been floating between grok 4.1 fast, Gemini 3.1 pro and 3.1 flash, and Claude 4.6 sonnet. Right now it's been Claude for the win. It just does so much more. Grok really screws things up sometimes but is great for finding info. It definitely has it's place and I'm waiting on 4.2 api access (maybe tonight). I like Gemini 3.1pro but the API seems to ALWAYS be busy during the day. Claude is the only super heavy lifter that i can tell to look at code and tell me what it thinks and it just makes it better. However I'm almost done with the heavy lifting phase. In the future I'd like to get off the pay to play services because I'm spending enough to warrant my own systems. I'm just curious if more machines (like base model macs I can grab at discounts) is the way to go, if trying to shove it all in a a large mac mini is better due to the bandwidth of the single unit, or if I running what I can on my server is better? I wouldn't mind making a dual GPU setup but I really don't know how the whole PCIe lanes works with more than one and/or what level of LLM I could run with two of them. With the mini's, I'm still learning so feel free to jump in, I could just buy another and add it to the pile for more computer, right?
2026-03-03T00:36:16
https://www.reddit.com/r/LocalLLaMA/comments/1rjaliw/what_llm_to_replace_claude_35_sonnet_for_server/
MartiniCommander
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjaliw
false
null
t3_1rjaliw
/r/LocalLLaMA/comments/1rjaliw/what_llm_to_replace_claude_35_sonnet_for_server/
false
false
self
1
null
Looking for CLI beta testers (Docker, self-hosted, AGPL) for my open-source AI agent governance platform
1
I've spent the last 3 weeks building SIDJUA, an open-source (AGPL-3.0) governance layer for multi-agent AI systems. It's a CLI tool that lets you define agent hierarchies, enforce rules before agents can act, track costs, and audit everything. Self-hosted, Docker, no cloud dependency. The problem it solves: AI agents are powerful but uncontrolled. They overspend API budgets, access data they shouldn't, and take actions nobody approved. Every existing solution either gives you a chatbot wrapper or hopes the model behaves. SIDJUA enforces governance by architecture, every agent action passes through a 5-stage pipeline before execution. If it's forbidden, it gets blocked. If it needs approval, it waits. If budget is exceeded, it stops. No exceptions. What's built (V0.9.0): \- 2,352 tests across 19 implementation phases \- Hierarchical agent orchestration with tiered roles \- Pre-Action Governance Pipeline (Forbidden -> Approval -> Budget -> Classification -> Policy) \- Multi-provider support: OpenAI, Anthropic, Groq, Mistral, Cloudflare Workers AI, Ollama, LM Studio, any OpenAI-compatible endpoint \- Built-in cost tracking per agent, per task, per division \- Zero-config first run: "docker compose up" -> "sidjua init" -> "sidjua chat guide", works immediately, no API keys needed \- Configuration-driven: single "divisions.yaml" defines your entire agent org structure \- Air-gap capable, runs fully local \- 2 provisional USPTO patents filed (governance architecture + affective state monitoring) Tech stack: TypeScript, Node.js 22, SQLite, Docker, Qdrant (optional) What I'm looking for: \- 5-10 technical testers who run Docker and want to break things \- Try the CLI, stress the governance pipeline, find the gaps \- Honest feedback on architecture and developer experience \- You get: private GitHub repo access before public launch, credited in README, early contributor status What I'm NOT looking for: \- People who want a ChatGPT wrapper \- "Looks cool, starred!" without actually running it \- Anyone who needs a GUI to function (GUI is coming, but this is CLI-first) Timeline: Private beta now, public release (GitHub + Docker Hub) in \~2 weeks. Local LLM angle: SIDJUA treats local models as first-class citizens. Ollama, LM Studio, any OpenAI-compatible endpoint works out of the box. You can run your entire agent team on local hardware with zero API costs. The governance layer works the same whether you're using GPT-4o or a quantized Llama running on your Mac. If you're interested, don't comment, I can't track Reddit threads all day. Send an email to [contact@sidjua.com](mailto:contact@sidjua.com) with a short intro: what you've built, what you work with, and why this caught your eye. A GitHub profile or link to something you've shipped tells me more than a paragraph. If I can tell you'll actually run it and give real feedback, you'll have repo access within 24 hours. I'll send repo access + Docker setup instructions. Website: [sidjua.com](http://sidjua.com) | License: AGPL-3.0 Built by one person + three AI agents (yes, using SIDJUA to build SIDJUA). AMA about the architecture or the experience of running a startup where your entire dev team is AI.
2026-03-03T00:34:21
https://www.reddit.com/r/LocalLLaMA/comments/1rjajsz/looking_for_cli_beta_testers_docker_selfhosted/
Inevitable_Raccoon_9
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjajsz
false
null
t3_1rjajsz
/r/LocalLLaMA/comments/1rjajsz/looking_for_cli_beta_testers_docker_selfhosted/
false
false
self
1
null
GPT-OSS had to think for 4 minutes where Qwen3.5-9B got it like a breeze
1
2026-03-03T00:11:52
https://i.redd.it/1e2qs50i2qmg1.png
Extraaltodeus
i.redd.it
1970-01-01T00:00:00
0
{}
1rja0sb
false
null
t3_1rja0sb
/r/LocalLLaMA/comments/1rja0sb/gptoss_had_to_think_for_4_minutes_where_qwen359b/
false
false
https://preview.redd.it/…170840195cdc6abe
1
{'images': [{'source': {'url': 'https://preview.redd.it/1e2qs50i2qmg1.png?auto=webp&s=717dc039727a44b406b3a7f849fd29c2ce241897', 'width': 1025, 'height': 397}, 'resolutions': [{'url': 'https://preview.redd.it/1e2qs50i2qmg1.png?width=108&crop=smart&auto=webp&s=ad31a251123b6fcd529d1939f3971abf606b9801', 'width': 108, 'height': 41}, {'url': 'https://preview.redd.it/1e2qs50i2qmg1.png?width=216&crop=smart&auto=webp&s=805119ca149f5e2c241b0c6320641fc909bc18b4', 'width': 216, 'height': 83}, {'url': 'https://preview.redd.it/1e2qs50i2qmg1.png?width=320&crop=smart&auto=webp&s=276d037cc55b85f3eddf18eb25297b8a0411543b', 'width': 320, 'height': 123}, {'url': 'https://preview.redd.it/1e2qs50i2qmg1.png?width=640&crop=smart&auto=webp&s=4cdfcde8cda67b79724a5df59166fb47249b1d4f', 'width': 640, 'height': 247}, {'url': 'https://preview.redd.it/1e2qs50i2qmg1.png?width=960&crop=smart&auto=webp&s=86154c0c64623cfdb6bf18c2cb3cf04d24c216fd', 'width': 960, 'height': 371}], 'variants': {}, 'id': '1e2qs50i2qmg1'}], 'enabled': True}
API price for the 27B qwen 3.5 is just outrageous
1
https://preview.redd.it/…st this much lol
2026-03-02T23:43:35
https://www.reddit.com/r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/
Ok-Internal9317
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj9bl7
false
null
t3_1rj9bl7
/r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/
false
false
https://preview.redd.it/…6975a25964c0b240
1
null
Manage Qwen 3.5 Model Settings with LiteLLM Proxy
1
I noticed a lot of people are running the Qwen 3.5 models manually juggling the sampling settings while running Llama.cpp. The easiest way I found is to use LiteLLM Proxy to handle the sampling settings and let Llama.cpp to serve the model. LiteLLM proxy is really easy to setup. # Quickstart Here are is quick-start guide to help those that never used LiteLLM proxy. # Run Llama.cpp without sampling settings First of all make sure you are running Llama.cpp without the sampling settings. Here is what I use (for reference I’m running a 4090 + Ubuntu (popos)): /home/user/llama.cpp/build/bin/llama-server --model /home/user/models/Qwen3.5-35B-A3B-GGUF/Qwen3.5-35B-A3B-UD-Q4_K_XL.gguf --mmproj /home/user/models/Qwen3.5-35B-A3B-GGUF/mmproj-F16.gguf --alias Qwen3.5-35B-A3B-GGUF --host 0.0.0.0 --port 30000 --flash-attn on --no-mmap --jinja --fit on --ctx-size 32768 Notice the “—port 30000” and “—alias” parameter - this is very important when setting up LiteLLM. # Install LiteLLM Proxy Install LiteLLM proxy via pip: pip install 'litellm[proxy]' # Create LiteLLM configuration file I like to put my config file in .config: nano ~/.config/litellm/config.yaml # Starter configuration Here I’m going to use Qwen 3.5 35b as an example: # General settings general_settings: master_key: "llm" request_timeout: 600 # Models model_list: # Qwen3.5-35B variants - model_name: qwen3.5-35b-think-general litellm_params: model: openai/Qwen3.5-35B-A3B-GGUF api_base: http://localhost:30000/v1 api_key: none temperature: 1.0 top_p: 0.95 presence_penalty: 1.5 extra_body: top_k: 20 min_p: 0.0 repetition_penalty: 1.0 chat_template_kwargs: enable_thinking: true - model_name: qwen3.5-35b-think-code litellm_params: model: openai/Qwen3.5-35B-A3B-GGUF api_base: http://localhost:30000/v1 api_key: none temperature: 0.6 top_p: 0.95 presence_penalty: 0.0 extra_body: top_k: 20 min_p: 0.0 repetition_penalty: 1.0 chat_template_kwargs: enable_thinking: true - model_name: qwen3.5-35b-instruct-general litellm_params: model: openai/Qwen3.5-35B-A3B-GGUF api_base: http://localhost:30000/v1 api_key: none temperature: 0.7 top_p: 0.8 presence_penalty: 1.5 extra_body: top_k: 20 min_p: 0.0 repetition_penalty: 1.0 chat_template_kwargs: enable_thinking: false - model_name: qwen3.5-35b-instruct-reasoning litellm_params: model: openai/Qwen3.5-35B-A3B-GGUF api_base: http://localhost:30000/v1 api_key: none temperature: 1.0 top_p: 0.95 presence_penalty: 1.5 extra_body: top_k: 20 min_p: 0.0 repetition_penalty: 1.0 chat_template_kwargs: enable_thinking: false Each entry will show up as a separate model but they are actually pointing to the same Llama.cpp instance with different sampling settings. Notice the “model: openai/Qwen3.5-35B-A3B-GGUF” field. The part after “openai/“ needs to match the “—alias” parameter in Llama.cpp. Also take note of the “api\_base: http://localhost:30000/v1” field - this points to your Llama.cpp server. The "master\_key: “llm”” field is for the api key. I use something short because its running local but you can replace this with whatever you want. # Run LiteLLM Proxy Run LiteLLM. We are going to open up port 20000: litellm \ --config ~/.config/litellm/config.yaml \ --host 0.0.0.0 \ --port 20000 # Test it! 
You should see a list of 4 models: curl http://localhost:20000/v1/models # Openwebui or other clients Using Openwebui as an example: In the connections settings, add a connection point to the base URL (replace local host with your machine’s ip address): http://localhost:20000/v1 And then set the api key “llm” or whatever you set in LiteLLM’s config file. You will now see 4 different models - but its actually one model with different sampling settings! # Hope you found this useful. Hope you found this useful. You can get config files on my GitHub: [https://github.com/dicksondickson/ai-infra-onprem](https://github.com/dicksondickson/ai-infra-onprem)
2026-03-02T23:30:19
https://www.reddit.com/r/LocalLLaMA/comments/1rj8zuh/manage_qwen_35_model_settings_with_litellm_proxy/
CATLLM
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj8zuh
false
null
t3_1rj8zuh
/r/LocalLLaMA/comments/1rj8zuh/manage_qwen_35_model_settings_with_litellm_proxy/
false
false
self
1
null
where can I get good priced 3090s?
1
I'm in the US, in Minnesota. I wanna get two for now.
2026-03-02T23:29:55
https://www.reddit.com/r/LocalLLaMA/comments/1rj8zhq/where_can_i_get_good_priced_3090s/
Lord_Curtis
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj8zhq
false
null
t3_1rj8zhq
/r/LocalLLaMA/comments/1rj8zhq/where_can_i_get_good_priced_3090s/
false
false
self
1
null
Tokyo Openclaw Meetup
1
Hey Tokyo innovators & developers! Hosting a focused offline meetup on the viral open-source AI agent OpenClaw!

Event: Tokyo OpenClaw Developer Meetup (東京 OpenClaw 開発者交流会)

• Date & Time: March 7 (Friday) afternoon (exact time notified after registration)
• Format: Small group chats (chill bar/café) + hands-on discussions
• Topics: Real-world OpenClaw use, agent-building tips, applications in LegalTech/Web3+AI (next-gen smart contracts, autonomous agents in law, etc.), war stories & brainstorming
• Location: Notified after successful registration (central Tokyo, easy to reach)

If you're hacking with OpenClaw, building AI agents, or curious about the Legal AI / Web3 intersection, let's grab drinks and dive deep! Real talk, clear insights, and growing together.

We'll send the location & exact time upon confirmation. Oversubscribed? We'll split into groups. Looking forward to it!

https://luma.com/tosuhs4c
2026-03-02T23:27:39
https://www.reddit.com/r/LocalLLaMA/comments/1rj8xi0/tokyo_openclaw_meetup/
Remarkable-Key6575
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj8xi0
false
null
t3_1rj8xi0
/r/LocalLLaMA/comments/1rj8xi0/tokyo_openclaw_meetup/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/Z2EYq4G4zN8TuEgoONLwXEvblMrTNNiCFKtvqjoUa4c.jpeg?auto=webp&s=3f677743d4ff1f5f61698ce01c568077d30867d7', 'width': 800, 'height': 419}, 'resolutions': [{'url': 'https://external-preview.redd.it/Z2EYq4G4zN8TuEgoONLwXEvblMrTNNiCFKtvqjoUa4c.jpeg?width=108&crop=smart&auto=webp&s=26624088b84a2ec557ebd6c095d0269cfd5e746b', 'width': 108, 'height': 56}, {'url': 'https://external-preview.redd.it/Z2EYq4G4zN8TuEgoONLwXEvblMrTNNiCFKtvqjoUa4c.jpeg?width=216&crop=smart&auto=webp&s=7a76df095a8f8cfa65bd7b1d2200ba55e661e9aa', 'width': 216, 'height': 113}, {'url': 'https://external-preview.redd.it/Z2EYq4G4zN8TuEgoONLwXEvblMrTNNiCFKtvqjoUa4c.jpeg?width=320&crop=smart&auto=webp&s=2f7c6a87e0930de485e1fc45ac581aa9ccf01c30', 'width': 320, 'height': 167}, {'url': 'https://external-preview.redd.it/Z2EYq4G4zN8TuEgoONLwXEvblMrTNNiCFKtvqjoUa4c.jpeg?width=640&crop=smart&auto=webp&s=d3d9c9d0431383582bc52ab6e148e3be809d8ec1', 'width': 640, 'height': 335}], 'variants': {}, 'id': 'Z2EYq4G4zN8TuEgoONLwXEvblMrTNNiCFKtvqjoUa4c'}], 'enabled': False}
Qwen3.5 4B: overthinking to say hello.
1
Hi everyone, I've been experimenting with Qwen3.5 4B on Ollama, hoping to replace my current model (qwen3:4b-instruct-2507-q4_K_M) in an agentic RAG pipeline. Unfortunately, the results have been disappointing so far. The main issue is that with thinking enabled, the model spends an excessive amount of time reasoning — even on simple tasks like query rewriting — which makes it impractical for a multi-step pipeline where latency adds up quickly. On the other hand, disabling thinking causes a noticeable drop in quality, to the point where it underperforms the older Qwen3 4B 2507 Instruct. Is anyone else experiencing this? Are the official benchmarks measured with thinking enabled? Any suggestions would be appreciated.
2026-03-02T23:27:07
https://i.redd.it/k7wt9n7jtpmg1.png
CapitalShake3085
i.redd.it
1970-01-01T00:00:00
0
{}
1rj8x1q
false
null
t3_1rj8x1q
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/
false
false
https://preview.redd.it/…ef663ae387ee9297
1
{'images': [{'source': {'url': 'https://preview.redd.it/k7wt9n7jtpmg1.png?auto=webp&s=c1150340bad51b1fe433fce85c69fc90207d1fc7', 'width': 789, 'height': 1398}, 'resolutions': [{'url': 'https://preview.redd.it/k7wt9n7jtpmg1.png?width=108&crop=smart&auto=webp&s=324d773e53d852e1863e983e1824b02575b23917', 'width': 108, 'height': 191}, {'url': 'https://preview.redd.it/k7wt9n7jtpmg1.png?width=216&crop=smart&auto=webp&s=e2c500a7511f61e988e051d6aa48d992079b3362', 'width': 216, 'height': 382}, {'url': 'https://preview.redd.it/k7wt9n7jtpmg1.png?width=320&crop=smart&auto=webp&s=16b2ba3b32d913bd810ca9a794399274308f31ba', 'width': 320, 'height': 566}, {'url': 'https://preview.redd.it/k7wt9n7jtpmg1.png?width=640&crop=smart&auto=webp&s=a0ccca1000e0f84667baf055ea8db932670180d3', 'width': 640, 'height': 1133}], 'variants': {}, 'id': 'k7wt9n7jtpmg1'}], 'enabled': True}
Ozymandias got a ton of new stuff – Mercator shift detector, Signal Graph, Oracle watchlist, Pantheon releases, Rabbit Hole & more
1
[removed]
2026-03-02T23:25:28
https://ozymandias.group/
False_Ad8389
ozymandias.group
1970-01-01T00:00:00
0
{}
1rj8vgn
false
null
t3_1rj8vgn
/r/LocalLLaMA/comments/1rj8vgn/ozymandias_got_a_ton_of_new_stuff_mercator_shift/
false
false
default
1
null
just getting started on local llm on macbook air with 24gb of ram, are Qwen models the best ones currently?
1
Also, should I only go for models published and fine-tuned by Unsloth? Is it better to get a higher-parameter model with low-bit quantization, or a lower-parameter model with higher-bit quantization?
2026-03-02T23:24:23
https://www.reddit.com/r/LocalLLaMA/comments/1rj8uj5/just_getting_started_on_local_llm_on_macbook_air/
murkomarko
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj8uj5
false
null
t3_1rj8uj5
/r/LocalLLaMA/comments/1rj8uj5/just_getting_started_on_local_llm_on_macbook_air/
false
false
self
1
null
llama.cpp models preset with multiple presets for the same model
1
I set up 2 presets in my ini file for the Qwen 3.5 model based on the Unsloth recommendations, and I am curious if there is something I can do to make this better. As far as I can tell (and maybe I am wrong here), when I switch between the two in the web UI it needs to reload the model, even though it's the same data. Is there a different way to specify the presets so that it does not need to reload the model, but instead just applies the updated params if the model is already loaded from the other preset?

    [Qwen3.5-35B-A3B]
    m = /models/unsloth_Qwen3.5-35B-A3B-GGUF_Qwen3.5-35B-A3B-UD-Q8_K_XL/unsloth_Qwen3.5-35B-A3B-GGUF_Qwen3.5-35B-A3B-UD-Q8_K_XL.gguf
    mmproj = /models/unsloth_Qwen3.5-35B-A3B-GGUF_Qwen3.5-35B-A3B-UD-Q8_K_XL/mmproj-BF16.gguf
    ctx-size = 65536
    temp = 1.0
    top-p = 0.95
    top-k = 20
    min-p = 0.00

    [Qwen3.5-35B-A3B-coding]
    m = /models/unsloth_Qwen3.5-35B-A3B-GGUF_Qwen3.5-35B-A3B-UD-Q8_K_XL/unsloth_Qwen3.5-35B-A3B-GGUF_Qwen3.5-35B-A3B-UD-Q8_K_XL.gguf
    mmproj = /models/unsloth_Qwen3.5-35B-A3B-GGUF_Qwen3.5-35B-A3B-UD-Q8_K_XL/mmproj-BF16.gguf
    ctx-size = 65536
    temp = 0.6
    top-p = 0.95
    top-k = 20
    min-p = 0.00

I am also struggling to find actual documentation on the ini format, aside from reading the code and gleaning that it is parsed the same way as command-line arguments.
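One workaround sketch (this is an assumption, not something from the llama.cpp preset docs): keep a single model entry loaded and send the sampling parameters per request through llama-server's OpenAI-compatible endpoint, which in recent builds accepts fields like `temperature`, `top_p`, `top_k`, and `min_p` in the request body, so nothing ever reloads. The port and model name below are placeholders for whatever your server reports.

```python
# Hypothetical sketch: express the two "presets" as per-request sampling
# parameters against one running llama-server instance (assumed on the
# default port 8080), so the model stays resident between requests.
import requests

PRESETS = {
    "general": {"temperature": 1.0, "top_p": 0.95, "top_k": 20, "min_p": 0.0},
    "coding":  {"temperature": 0.6, "top_p": 0.95, "top_k": 20, "min_p": 0.0},
}

def chat(prompt: str, preset: str) -> str:
    body = {
        "model": "Qwen3.5-35B-A3B",  # whatever name/alias your server exposes
        "messages": [{"role": "user", "content": prompt}],
        **PRESETS[preset],           # sampling settings ride along with the request
    }
    r = requests.post("http://localhost:8080/v1/chat/completions", json=body, timeout=600)
    r.raise_for_status()
    return r.json()["choices"][0]["message"]["content"]

print(chat("Write a short docstring for a binary search function.", "coding"))
```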
2026-03-02T23:22:19
https://www.reddit.com/r/LocalLLaMA/comments/1rj8sow/llamacpp_models_preset_with_multiple_presets_for/
stoystore
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj8sow
false
null
t3_1rj8sow
/r/LocalLLaMA/comments/1rj8sow/llamacpp_models_preset_with_multiple_presets_for/
false
false
self
1
null
For sure
1
Yes, Qwen3.5-4B, for sure. (I'm using PocketPal on Android and downloaded the Q4_0 GGUF from Hugging Face through the app's interface.) Has anybody else got this model working on PocketPal?
2026-03-02T23:08:42
https://i.redd.it/o75mdgehrpmg1.jpeg
Open_Establishment_3
i.redd.it
1970-01-01T00:00:00
0
{}
1rj8gb4
false
null
t3_1rj8gb4
/r/LocalLLaMA/comments/1rj8gb4/for_sure/
false
false
https://preview.redd.it/…de91313bd17b58bc
1
{'images': [{'source': {'url': 'https://preview.redd.it/o75mdgehrpmg1.jpeg?auto=webp&s=253fd329c02421f068f0386df7b458ee3a2f7a89', 'width': 1440, 'height': 2951}, 'resolutions': [{'url': 'https://preview.redd.it/o75mdgehrpmg1.jpeg?width=108&crop=smart&auto=webp&s=17c1211ac8726e47b261b16b345b3d5f5712cfe1', 'width': 108, 'height': 216}, {'url': 'https://preview.redd.it/o75mdgehrpmg1.jpeg?width=216&crop=smart&auto=webp&s=798b09bec2a26554229df1d443e2da051263ebd4', 'width': 216, 'height': 432}, {'url': 'https://preview.redd.it/o75mdgehrpmg1.jpeg?width=320&crop=smart&auto=webp&s=b7ed2efe13e9bd3cd5c7538dd3bc01f59eceb6aa', 'width': 320, 'height': 640}, {'url': 'https://preview.redd.it/o75mdgehrpmg1.jpeg?width=640&crop=smart&auto=webp&s=68a555d9a063700d723195c0317453985d88a61f', 'width': 640, 'height': 1280}, {'url': 'https://preview.redd.it/o75mdgehrpmg1.jpeg?width=960&crop=smart&auto=webp&s=7625594a14f23d6747f754850de6290ddce53492', 'width': 960, 'height': 1920}, {'url': 'https://preview.redd.it/o75mdgehrpmg1.jpeg?width=1080&crop=smart&auto=webp&s=e26fe65f7008456b62fcd367275ea490ab99b8d6', 'width': 1080, 'height': 2160}], 'variants': {}, 'id': 'o75mdgehrpmg1'}], 'enabled': True}
Is anyone else seeing Qwen 3.5 35B outperform cloud APIs on structured tasks?
1
Ran some quick head-to-heads this weekend: local Qwen 3.5 35B (Ollama, M3 Max 36GB) vs GPT-5-mini, GPT-5-nano, Gemini 3 Flash/Pro, and MiniMax on a few simple agent tasks: entity extraction, summarization, and sentiment classification.

Full disclaimer: these are pretty trivial tasks, not trying to claim this is rigorous science. But the results were fun enough to share.

https://preview.redd.it/fufbm14aqpmg1.png?width=1125&format=png&auto=webp&s=7c6e36505451a7b58d1eccfff08d6005d40e7853

Qwen took the overall crown at 99% correctness vs GPT-5-mini at 97%. The surprise was summarization, where an LLM judge actually rated Qwen's outputs higher (97%) than all the cloud models (91-96%). Sentiment classification was a wash, everyone got 100%. Clearly need harder tasks lol.

The obvious tradeoff: latency. 24s vs 1.6s on extraction, 72s vs 1.5s on summarization. M3 Max is not a 4090. But for batch/async stuff? Totally fine.

I used a little tool I wrote to run these (https://github.com/DataGobes/agent-duelist), mainly because I got tired of manually comparing cloud providers and local inference quality for my own projects.

Curious if anyone with beefier hardware is seeing similar results on Qwen 3.5 for structured output tasks, or if my tasks were just too easy to really differentiate anything.
2026-03-02T23:06:24
https://www.reddit.com/r/LocalLLaMA/comments/1rj8e7z/is_anyone_else_seeing_qwen_35_35b_outperform/
Beautiful-Honeydew10
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj8e7z
false
null
t3_1rj8e7z
/r/LocalLLaMA/comments/1rj8e7z/is_anyone_else_seeing_qwen_35_35b_outperform/
false
false
https://preview.redd.it/…9b32dfb4f5d3ddb8
1
null
Merlin Research released Qwen3.5-4B-Safety-Thinking - a 4B safety-aligned reasoning model built on Qwen3.5
1
The model is designed for structured 'thinking' and safety in real-world scenarios, including agent systems.

Key improvements:

* Improved ability to accurately follow strict instructions in prompts.
* Built using the Bloom and Petri frameworks from Anthropic and resistant to hacking attempts.
* Increased resistance to 'abnormal' and adversarial prompts.
* Up to 1M context.

Happy to answer any questions.

[https://huggingface.co/MerlinSafety/Qwen3.5-4B-Safety-Thinking](https://huggingface.co/MerlinSafety/Qwen3.5-4B-Safety-Thinking)
2026-03-02T23:01:39
https://www.reddit.com/r/LocalLLaMA/comments/1rj89qy/merlin_research_released_qwen354bsafetythinking_a/
Intelligent-Space778
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj89qy
false
null
t3_1rj89qy
/r/LocalLLaMA/comments/1rj89qy/merlin_research_released_qwen354bsafetythinking_a/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/alnKoAqjAHo_N_35yPxK6DsZBISvodkF7y8KDPsMI5E.png?auto=webp&s=5587798bb04611aec3e818eb73cceadb65a6f124', 'width': 1200, 'height': 648}, 'resolutions': [{'url': 'https://external-preview.redd.it/alnKoAqjAHo_N_35yPxK6DsZBISvodkF7y8KDPsMI5E.png?width=108&crop=smart&auto=webp&s=d24d2978ab2b05d01f248f54689c95db2dfe3e4a', 'width': 108, 'height': 58}, {'url': 'https://external-preview.redd.it/alnKoAqjAHo_N_35yPxK6DsZBISvodkF7y8KDPsMI5E.png?width=216&crop=smart&auto=webp&s=463cf1b73c7d333e756503769f8ca2bec541c7f6', 'width': 216, 'height': 116}, {'url': 'https://external-preview.redd.it/alnKoAqjAHo_N_35yPxK6DsZBISvodkF7y8KDPsMI5E.png?width=320&crop=smart&auto=webp&s=3be84abc33fe1d7fc880477e77759cb0e19b68f4', 'width': 320, 'height': 172}, {'url': 'https://external-preview.redd.it/alnKoAqjAHo_N_35yPxK6DsZBISvodkF7y8KDPsMI5E.png?width=640&crop=smart&auto=webp&s=6c12b5de858f5064eaf3db87912bd190a0826331', 'width': 640, 'height': 345}, {'url': 'https://external-preview.redd.it/alnKoAqjAHo_N_35yPxK6DsZBISvodkF7y8KDPsMI5E.png?width=960&crop=smart&auto=webp&s=992bf91b1a24b3dcca6997461b2a93f06faa09ee', 'width': 960, 'height': 518}, {'url': 'https://external-preview.redd.it/alnKoAqjAHo_N_35yPxK6DsZBISvodkF7y8KDPsMI5E.png?width=1080&crop=smart&auto=webp&s=85cbdfcc3096d77db7d3bcdeb987ddae2bb3cf49', 'width': 1080, 'height': 583}], 'variants': {}, 'id': 'alnKoAqjAHo_N_35yPxK6DsZBISvodkF7y8KDPsMI5E'}], 'enabled': False}
Merlin Research released Qwen3.5-4B-Safety-Thinking — a 4B safety-aligned reasoning model built on Qwen3.5
1
[deleted]
2026-03-02T22:59:58
[deleted]
1970-01-01T00:00:00
0
{}
1rj87zg
false
null
t3_1rj87zg
/r/LocalLLaMA/comments/1rj87zg/merlin_research_released_qwen354bsafetythinking_a/
false
false
default
1
null
Merlin Research released Qwen3.5-4B-Safety-Thinking — a 4B safety-aligned reasoning model built on Qwen3.5
1
[deleted]
2026-03-02T22:55:06
[deleted]
1970-01-01T00:00:00
0
{}
1rj83f3
false
null
t3_1rj83f3
/r/LocalLLaMA/comments/1rj83f3/merlin_research_released_qwen354bsafetythinking_a/
false
false
default
1
null
Where to get a comprehensive overview on the cutting edge in open source / frontier model AI
1
Hey guys! I'm new here. I've just committed to buying an RTX 5090-powered laptop and want to start vibe coding, generating realistic AI videos, and experimenting with deepfakes etc. Is there a unified resource for this? Ideally something that explains how workflows work in ComfyUI, how to find the best tool for the job, and how to replicate the latest AI demonstrations. Any responses would be much appreciated! See y'all around :)
2026-03-02T22:50:37
https://www.reddit.com/r/LocalLLaMA/comments/1rj7z9v/where_to_get_a_comprehensive_overview_on_the/
StabledFusion
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj7z9v
false
null
t3_1rj7z9v
/r/LocalLLaMA/comments/1rj7z9v/where_to_get_a_comprehensive_overview_on_the/
false
false
self
1
null
PMetal - LLM fine-tuning framework for Apple Silicon, written in Rust with custom Metal GPU kernels
1
Hey everyone, we're releasing PMetal (Powdered Metal) today! A Rust framework for fine-tuning LLMs natively on Apple Silicon using custom Metal compute shaders.

It's a Rust library (Python bindings coming soon) that covers the full training pipeline: LoRA/QLoRA adapters, RLHF alignment (DPO, GRPO, DAPO, GSPO, KTO, SimPO, ORPO, PPO), knowledge distillation (TAID + reasoning-aware), and model merging (TIES, DARE, Model Stock, and more).

Before anyone asks "why Rust?" - zero-copy safetensor loading, compile-time architecture validation, fearless concurrency for async data pipelines, and `#[repr(C)]` interop with Metal shaders. The type system catches misconfigurations that Python would only surface at runtime mid-training.

Custom `.metal` compute shaders for:

- Fused RMSNorm + LoRA forward (single kernel dispatch instead of 5+ ops)
- Fused cross-entropy loss (logits never materialize the full vocab distribution)
- Fused SwiGLU activation
- FlashAttention for training (forward + backward)
- Fused RoPE embeddings
- Grouped GEMM for MoE routing
- FP8 training kernels
- Fused distillation kernels

Each kernel includes an auto-tuner (`pmetal-metal/tuna`) that profiles tile sizes and threadgroup configurations per device, so M1 through M4 Ultra all get tuned dispatch parameters.

**Supported model families:** Llama (3.x, 4), Qwen (2, 2-VL, 3, 3-MoE), DeepSeek, Mistral, Gemma, Phi, Granite, Cohere, Nemotron-H, Pixtral, MLlama (vision), Whisper.

Training features:

- Custom autograd for LoRA that only stores `x` and `x @ A^T` per layer (rank << hidden), cutting memory ~6x per LoRA layer vs standard autodiff (toy sketch below)
- Sequence packing with cross-attention masking
- 8-bit Adam, schedule-free optimizers, parameter groups with per-layer LR
- JIT compilation of training steps via MLX
- Streaming checkpoint save/resume
- HuggingFace Hub integration (download + upload)

This doesn't replace PyTorch for multi-GPU cluster training. It's specifically for the Apple Silicon niche: M-series Macs and potentially future Apple hardware. If you have an NVIDIA setup, use Unsloth/axolotl/TRL. We've also included distributed training powered by mDNS auto-discovery, ring all-reduce, and gradient compression - stack your Apple hardware together!

Built on top of [mlx-rs](https://github.com/oxideai/mlx-rs) (Rust bindings to Apple's MLX framework). We've been contributing fixes upstream as we go.

Version v0.1.2 is our first public release. We'd love your feedback: try it out and let us know what works and what doesn't, and please open issues for bugs, rough edges, or missing features! PRs are very welcome - check the CONTRIBUTING.md for guidelines. Feature requests? Absolutely - what models, training methods, or workflows would make this useful for you?

Dual-licensed MIT/Apache-2.0.

https://github.com/Epistates/pmetal

Happy to answer questions about the Metal kernel design, the custom autograd approach, or anything else.
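Regarding the custom LoRA autograd bullet above, here is a toy NumPy sketch (illustrative only, not PMetal's actual code or API) of why saving just `x` and `x @ A^T` is enough to form both adapter gradients:

```python
# Toy illustration: for a LoRA term y = (x @ A.T) @ B.T, the gradients of A and B
# can be recomputed from the saved input x, the saved low-rank activation x @ A.T,
# and the incoming gradient - no hidden-by-hidden intermediates need to be stored.
import numpy as np

hidden, rank, batch = 64, 8, 4
x = np.random.randn(batch, hidden)
A = np.random.randn(rank, hidden) * 0.01   # LoRA "down" projection
B = np.random.randn(hidden, rank) * 0.01   # LoRA "up" projection

xA = x @ A.T                  # saved for backward: only (batch x rank), rank << hidden
y = xA @ B.T                  # LoRA contribution to the layer output

g = np.random.randn(*y.shape) # stand-in for the upstream gradient dL/dy

grad_B = g.T @ xA             # uses only g and the saved xA
grad_A = (g @ B).T @ x        # uses only g, the parameter B, and the saved x

print(grad_A.shape, grad_B.shape)  # (rank, hidden), (hidden, rank) - match A and B
```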
2026-03-02T22:49:34
https://www.reddit.com/r/LocalLLaMA/comments/1rj7y9d/pmetal_llm_finetuning_framework_for_apple_silicon/
RealEpistates
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj7y9d
false
null
t3_1rj7y9d
/r/LocalLLaMA/comments/1rj7y9d/pmetal_llm_finetuning_framework_for_apple_silicon/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/2d4E-U9oBPv1sdW-ZvSW_vbaE5KmN_-_Nxl5p-5Z5qo.png?auto=webp&s=3a62d229478ee9d92197df3c1537a509b01ef9d9', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/2d4E-U9oBPv1sdW-ZvSW_vbaE5KmN_-_Nxl5p-5Z5qo.png?width=108&crop=smart&auto=webp&s=9388aee7a3462289f4938019fb98ba1392636711', 'width': 108, 'height': 54}, {'url': 'https://external-preview.redd.it/2d4E-U9oBPv1sdW-ZvSW_vbaE5KmN_-_Nxl5p-5Z5qo.png?width=216&crop=smart&auto=webp&s=36125015d8f32195beea75715e7349e2261d1703', 'width': 216, 'height': 108}, {'url': 'https://external-preview.redd.it/2d4E-U9oBPv1sdW-ZvSW_vbaE5KmN_-_Nxl5p-5Z5qo.png?width=320&crop=smart&auto=webp&s=8d55c2848dbf41224a6a00213c59b67e7767e890', 'width': 320, 'height': 160}, {'url': 'https://external-preview.redd.it/2d4E-U9oBPv1sdW-ZvSW_vbaE5KmN_-_Nxl5p-5Z5qo.png?width=640&crop=smart&auto=webp&s=80a51c057b1106b848e58cd5ed2b326bf8169c88', 'width': 640, 'height': 320}, {'url': 'https://external-preview.redd.it/2d4E-U9oBPv1sdW-ZvSW_vbaE5KmN_-_Nxl5p-5Z5qo.png?width=960&crop=smart&auto=webp&s=b17eb41ae4fa022a7d9565f7f73918eb07634f26', 'width': 960, 'height': 480}, {'url': 'https://external-preview.redd.it/2d4E-U9oBPv1sdW-ZvSW_vbaE5KmN_-_Nxl5p-5Z5qo.png?width=1080&crop=smart&auto=webp&s=e9e25b94eb6bcc67c91115d6b4330c56c6fc75d6', 'width': 1080, 'height': 540}], 'variants': {}, 'id': '2d4E-U9oBPv1sdW-ZvSW_vbaE5KmN_-_Nxl5p-5Z5qo'}], 'enabled': False}
Any issues / tips for running Linux with a 5060Ti (16gb) for Local LLM's? Best Linux Distro?
1
I'm debating which Linux distro to install on an extra NVMe drive I have, dedicated to learning local LLMs, AI, and programming. I have a Gigabyte Nvidia GeForce RTX 5060 Ti (16GB).

**Anything I should watch out for?**

**Any particular Linux distro I should use for these purposes?**

-----

My machine specs:

* AMD Ryzen 9 9950X 4.3 GHz 16-Core Processor
* Asus ProArt X870E-CREATOR WIFI ATX AM5 Motherboard
* G.Skill Flare X5 128 GB (2 x 64 GB) DDR5-6000 CL34 Memory
* Gigabyte GAMING OC GeForce RTX 5060 Ti 16 GB Video Card
* SeaSonic PRIME 1000 W 80+ Gold Certified Fully Modular ATX
2026-03-02T22:49:18
https://www.reddit.com/r/LocalLLaMA/comments/1rj7y0u/any_issues_tips_for_running_linux_with_a_5060ti/
QuestionAsker2030
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj7y0u
false
null
t3_1rj7y0u
/r/LocalLLaMA/comments/1rj7y0u/any_issues_tips_for_running_linux_with_a_5060ti/
false
false
self
1
null
I made this.
1
[removed]
2026-03-02T22:42:51
https://www.reddit.com/r/LocalLLaMA/comments/1rj7ryg/i_made_this/
Distinct-Patient778
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj7ryg
false
null
t3_1rj7ryg
/r/LocalLLaMA/comments/1rj7ryg/i_made_this/
false
false
self
1
null
Rossavaxx
1
[deleted]
2026-03-02T22:40:07
[deleted]
1970-01-01T00:00:00
0
{}
1rj7pfq
false
null
t3_1rj7pfq
/r/LocalLLaMA/comments/1rj7pfq/rossavaxx/
false
false
default
1
null
I need an uncensored LLM for 8GB vram
1
I am currently using Mistral 7B (with the Zorg jailbreak) and it's giving good performance. The issue is that the jailbreak prompt makes it hallucinate a lot. Any recommendations for a fully uncensored LLM?
2026-03-02T22:39:40
https://www.reddit.com/r/LocalLLaMA/comments/1rj7p2h/i_need_an_uncensored_llm_for_8gb_vram/
Safe_Location9897
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj7p2h
false
null
t3_1rj7p2h
/r/LocalLLaMA/comments/1rj7p2h/i_need_an_uncensored_llm_for_8gb_vram/
false
false
self
1
null
The biggest pain in local fine-tuning isn't training - it's everything around it
1
I've been working on local LLM fine-tuning for a few months and I keep hitting the same problems that have nothing to do with the actual training.

**Data prep is a mess.** Every project starts with me manually formatting data into JSONL, guessing at splits, hoping I didn't introduce duplicates. There's no versioning. If I change 3 examples and retrain, I have no idea what's different unless I manually diff files.

**Reproducibility doesn't exist.** I trained a model 2 months ago and needed to retrain with updated data. I couldn't reproduce the original. Different split, different order, couldn't even confirm it was the same dataset. Ended up re-running everything from scratch.

**Deployment is manual every time.** Train with LoRA, merge adapters, write a Modelfile, load into Ollama, test it, realize the system prompt wasn't baked in, redo everything. Every single time.

**There's no provenance chain.** If someone asks "what data produced this model?" - I can't answer that confidently. There's no link between the dataset version, the training config, and the deployed model.

The actual training part? That works fine. HuggingFace, PEFT, LoRA - the community has nailed that. But the workflow around training is still held together with duct tape and bash scripts.

I got frustrated enough that I started scripting my own workflow - deterministic dataset splits with hashing so I know if anything changed, system prompts baked into the Modelfile on deploy, and a way to trace which dataset version produced which model. Nothing fancy, just trying to stop losing track of things.

Curious if anyone else is feeling these same pain points or if you've found a workflow that actually holds together. What does your setup look like?
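For the "deterministic dataset splits with hashing" piece, here is a minimal sketch of the idea (names and thresholds are illustrative, not from any particular tool): decide each example's split from a hash of its content and fingerprint the whole file, so the same data always yields the same split and any edit is detectable.

```python
# Sketch of deterministic splitting + dataset fingerprinting for JSONL data.
# Split membership depends only on the example's content, never on RNG state
# or file order, so retraining on the same data reproduces the same split.
import hashlib, json

def example_split(example: dict, eval_fraction: float = 0.1) -> str:
    """Assign an example to 'train' or 'eval' from a hash of its content."""
    digest = hashlib.sha256(json.dumps(example, sort_keys=True).encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # stable value in [0, 1]
    return "eval" if bucket < eval_fraction else "train"

def dataset_fingerprint(path: str) -> str:
    """Hash the whole JSONL file so any edit changes the dataset version."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()[:16]

# Usage: record dataset_fingerprint() alongside the training config and the
# deployed model name, and re-derive splits from the data instead of storing
# shuffled copies - that gives a minimal provenance chain for free.
```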
2026-03-02T22:25:12
https://www.reddit.com/r/LocalLLaMA/comments/1rj7bvo/the_biggest_pain_in_local_finetuning_isnt/
Critical_Letter_7799
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj7bvo
false
null
t3_1rj7bvo
/r/LocalLLaMA/comments/1rj7bvo/the_biggest_pain_in_local_finetuning_isnt/
false
false
self
1
null
The biggest pain in local fine-tuning isn't training - it's everything around it
1
2026-03-02T22:21:53
https://www.reddit.com/r/LocalLLaMA/comments/1rj78mn/the_biggest_pain_in_local_finetuning_isnt/
Critical_Letter_7799
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj78mn
false
null
t3_1rj78mn
/r/LocalLLaMA/comments/1rj78mn/the_biggest_pain_in_local_finetuning_isnt/
false
false
self
1
null
Qwen3.5-122B-A10B-Q8 handling the car wash question like a champ! 9 T/s on the 2x agx orin 1x3090 RPC mesh!
1
85k context, and a high volume of reasoning for that question, but that makes sense. I find 9 t/s highly usable. Another win for the Clarkson Jetson lab!
2026-03-02T22:19:50
https://v.redd.it/wgd9fdopipmg1
braydon125
/r/LocalLLaMA/comments/1rj76pb/qwen35122ba10bq8_handling_the_car_wash_question/
1970-01-01T00:00:00
0
{}
1rj76pb
false
null
t3_1rj76pb
/r/LocalLLaMA/comments/1rj76pb/qwen35122ba10bq8_handling_the_car_wash_question/
false
false
https://external-preview…3f0f53041a602903
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/MHV6MjZ2b3BpcG1nMZtiF550ubXjfviyIKED8VdMkOUbP3yCTamJRagJLpbS.png?format=pjpg&auto=webp&s=30a7b6241976f0c0eac8cc6caae843faa39b4578', 'width': 405, 'height': 720}, 'resolutions': [{'url': 'https://external-preview.redd.it/MHV6MjZ2b3BpcG1nMZtiF550ubXjfviyIKED8VdMkOUbP3yCTamJRagJLpbS.png?width=108&crop=smart&format=pjpg&auto=webp&s=a7a678d1b8fd4874e33d01a74de2c62f9ad0d1cc', 'width': 108, 'height': 192}, {'url': 'https://external-preview.redd.it/MHV6MjZ2b3BpcG1nMZtiF550ubXjfviyIKED8VdMkOUbP3yCTamJRagJLpbS.png?width=216&crop=smart&format=pjpg&auto=webp&s=6d541b5ce24ae0bc7e2d1a71c820484bd719d287', 'width': 216, 'height': 384}, {'url': 'https://external-preview.redd.it/MHV6MjZ2b3BpcG1nMZtiF550ubXjfviyIKED8VdMkOUbP3yCTamJRagJLpbS.png?width=320&crop=smart&format=pjpg&auto=webp&s=9c8d183115b2433cdd7d35b378f999c8d86d7bd7', 'width': 320, 'height': 568}], 'variants': {}, 'id': 'MHV6MjZ2b3BpcG1nMZtiF550ubXjfviyIKED8VdMkOUbP3yCTamJRagJLpbS'}], 'enabled': False}
What exactly can I use small (2-3B) AI models for in mobiles?
1
I recently installed the Locally AI app. I've seen so many open-source models released for use on mobile phones. I installed Qwen 3, LFM 2.5, and Gemma 3n. The answers they produce for technical engineering questions are so generic that I don't see a point in using them. I'm curious about the use cases for these 2-3B parameter AI models that run locally, other than summarising and writing emails, which Apple Intelligence already does (I'm on iOS btw).
2026-03-02T22:14:40
https://www.reddit.com/r/LocalLLaMA/comments/1rj71wv/what_exactly_can_i_use_small_23b_ai_models_for_in/
Sylverster_Stalin_69
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj71wv
false
null
t3_1rj71wv
/r/LocalLLaMA/comments/1rj71wv/what_exactly_can_i_use_small_23b_ai_models_for_in/
false
false
self
1
null
Cheap ai api services
0
I found a site offering cheap Veo 3 and LLM models, and they’ll be launching soon. I like it for now. If anyone is interested, send me a DM and I can share it with you too.
2026-03-02T22:07:32
https://www.reddit.com/r/LocalLLaMA/comments/1rj6val/cheap_ai_api_services/
PromotionEuphoric509
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj6val
false
null
t3_1rj6val
/r/LocalLLaMA/comments/1rj6val/cheap_ai_api_services/
false
false
self
0
null