It predicted 50 h for a small finetune :NotLikeThis: I give up

@matrix For LLMs, the real bottleneck is memory bandwidth. We had a damaged card that could only run at 10% of its compute power, yet we only noticed a 20% drop in tokens per second.
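That anecdote matches the standard back-of-envelope model: during autoregressive decoding, every weight is read once per generated token, so tokens per second is roughly capped by memory bandwidth divided by model size, largely independent of compute. A minimal sketch, with all bandwidth and model-size numbers being illustrative assumptions, not measurements:

```python
# Rough decode-speed ceiling for a memory-bandwidth-bound LLM:
# each token requires streaming the full set of weights from memory.

def max_tokens_per_second(model_size_gb: float, bandwidth_gb_s: float) -> float:
    """Upper bound on decode tokens/s when memory bandwidth is the limit."""
    return bandwidth_gb_s / model_size_gb

# Hypothetical 7B-class model quantized to ~3.5 GB of weights:
vram_bw = 900.0    # GB/s, assumed high-end GPU VRAM bandwidth
sys_ram_bw = 60.0  # GB/s, assumed dual-channel system RAM bandwidth

print(round(max_tokens_per_second(3.5, vram_bw)))    # ceiling on VRAM
print(round(max_tokens_per_second(3.5, sys_ram_bw))) # ceiling on system RAM
```

Under these assumed numbers, the same weights served from system RAM are over an order of magnitude slower than from VRAM, which is why halving compute barely moves tokens/s but spilling out of VRAM does.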


@r000t So I guess my initial hunch was correct: it was spilling over to RAM, and PCIe bandwidth became the bottleneck. It just confused me because Task Manager didn't show any spillover, and inference with llama.cpp was faster when I let it spill.
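The spillover effect can be sketched with the same per-token model: weights resident in VRAM stream at VRAM bandwidth, while the spilled fraction must cross the much slower PCIe link (or be read from system RAM) every token. All figures here are assumed, illustrative values:

```python
# Estimated decode tokens/s when a fraction of the weights has spilled
# out of VRAM and must traverse PCIe each token. Numbers are assumptions.

def spill_tokens_per_second(model_gb: float, spill_frac: float,
                            vram_bw: float, pcie_bw: float) -> float:
    """Per-token time = resident weights at VRAM speed + spilled weights at PCIe speed."""
    t = (model_gb * (1.0 - spill_frac)) / vram_bw \
        + (model_gb * spill_frac) / pcie_bw
    return 1.0 / t

# Hypothetical 3.5 GB model, assumed 900 GB/s VRAM, 16 GB/s effective PCIe:
print(round(spill_tokens_per_second(3.5, 0.0, 900.0, 16.0)))  # nothing spilled
print(round(spill_tokens_per_second(3.5, 0.2, 900.0, 16.0)))  # 20% spilled
```

Even a modest spill fraction lets the PCIe term dominate the per-token time, which is consistent with the link, not the GPU, becoming the bottleneck.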

Game Liberty Mastodon
