@arc DeepSeek R1 is probably the best you can get locally (you can even get the full model, but it's like 700GB). That thinking is a feature; it helps with accuracy quite a bit.
I think that unfortunately you simply can't fit a good enough model into such a small number of parameters.
@arc Try this model. Just got released and should perform close to Deepseek R1.
https://huggingface.co/Qwen/QwQ-32B
@matrix I have been alternating between a Mistral Small 22B model and DeepSeek R1 Distill 14B (I can run the 32B but it's just too slow to chat with on my computer). I pretty much just searched and downloaded whatever showed up first on Hugging Face.
I don't really know whether the models I got are representative of their true capabilities, but it's fun anyway. While it's like talking to a goldfish sometimes, maybe one day there's hope for an actually helpful "AI" assistant. Although DeepSeek seems to do this thing where it constantly "verbalizes" what it's "thinking" through, which is annoying. Maybe I got some setting fucky.
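That's probably not a setting: the R1 distills are trained to emit their chain of thought between `<think>` tags before the final answer, and some frontends just don't hide it. If yours doesn't, you can strip it yourself. A rough sketch (assumes the model actually wraps its reasoning in `<think>...</think>`, which is the standard R1 output format):

```python
import re

def strip_thinking(text: str) -> str:
    """Remove <think>...</think> blocks so only the final answer is shown."""
    # Drop complete thinking blocks (the model may emit more than one).
    text = re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL)
    # If generation was cut off mid-thought, drop the dangling open block too.
    text = re.sub(r"<think>.*", "", text, flags=re.DOTALL)
    return text.strip()

reply = "<think>The user asked for 2+2. That is 4.</think>The answer is 4."
print(strip_thinking(reply))  # → The answer is 4.
```

Hiding the trace only changes what you see, not how the model answers, so you keep the accuracy benefit of the thinking.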