3 articles

Unsloth and community developers have released multiple GGUF quantizations of MiniMax M2.7, making the model viable on consumer hardware.

The latest llama-server build automatically migrates the local cache without warning, disrupting established workflows.

The latest llama-server build auto-migrates local cache directories without user consent, sparking workflow friction.