Unsloth 2026 Update - Faster MoE
Welcome to our first release of 2026. To kick things off, we’re introducing 12× faster MoE training, embedding model support, and ultra-long context for Reinforcement Learning. We’ll also be launching our brand-new UI soon!
We’ve added support for many new models that you can now run and fine-tune locally, including Qwen3-Coder-Next, DeepSeek-OCR 2, GLM-4.7-Flash, Kimi-2.5, and more.
⭐ We’d also like to thank all of you for 50K stars on GitHub: https://github.com/unslothai/unsloth
💎 12× faster MoE training
You can now train MoE models 12× faster with >35% less VRAM and 6× longer context via our new Triton and math kernels, with no accuracy loss. gpt-oss-20b trains on just 12.8GB of VRAM, and Qwen3-30B-A3B (16-bit LoRA) fits in 63GB.
Unsloth supports fast training for gpt-oss, Qwen3 (30B, 235B, VL, Coder), DeepSeek R1/V3-architecture, and GLM (4.7, Flash) models. The larger the model and the longer your context, the more pronounced the memory savings from our Unsloth kernels become.
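To see what this looks like in practice, here’s a minimal sketch of an MoE LoRA run. The checkpoint name and hyperparameters are illustrative placeholders, not a prescribed recipe; any of the MoE families above should load the same way:

```python
# A minimal sketch of an MoE LoRA run with Unsloth. The checkpoint name
# and hyperparameters are illustrative placeholders, not a fixed recipe.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/gpt-oss-20b",   # example MoE checkpoint
    max_seq_length = 4096,
    load_in_4bit = True,                  # QLoRA; use False for 16-bit LoRA
)

model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    lora_alpha = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
)
# From here, training proceeds as usual, e.g. with trl's SFTTrainer.
```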
🔎 Embedding models now train 2× faster
Fine-tuning embedding models can significantly improve retrieval and RAG performance on domain-specific tasks. We collaborated with Hugging Face to enable 1.8–3.3× faster training for embedding, BERT, and classifier models, with 20% less VRAM, 2× longer context, and no accuracy loss vs. FA2 setups.
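As a rough sketch of the shape of such a run, assuming Unsloth’s generic FastModel entry point loads BERT-style encoders (the model name and sequence length are illustrative):

```python
# A hedged sketch of embedding fine-tuning. We assume Unsloth's generic
# FastModel entry point loads BERT-style encoders; the model name and
# sequence length below are illustrative placeholders.
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name = "sentence-transformers/all-MiniLM-L6-v2",  # example encoder
    max_seq_length = 512,
)
# Training then follows the usual sentence-transformers flow: (query,
# positive) pairs plus a contrastive loss such as
# MultipleNegativesRankingLoss, with Unsloth speeding up the encoder.
```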
💡 Ultra Long Context RL is here
The biggest challenge in reinforcement learning (RL) is supporting long reasoning traces. Our new batching algorithms enable ~7× longer-context RL (often more than 12×) with no accuracy or speed degradation vs. other optimized setups that use FA3, kernels, and chunked losses. Unsloth trains gpt-oss QLoRA with 380K context on a single 192GB NVIDIA B200 GPU.
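For a concrete picture, here’s a hedged, toy-scale sketch of a long-context GRPO run via trl. The checkpoint, context lengths, reward, and dataset below are placeholders (the 380K figure is a ceiling on a 192GB B200, not a default):

```python
# A hedged, toy-scale sketch of long-context RL via trl's GRPOTrainer.
# Checkpoint, context lengths, reward, and dataset are all placeholders.
from unsloth import FastLanguageModel
from trl import GRPOConfig, GRPOTrainer
from datasets import Dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/gpt-oss-20b",   # example checkpoint
    max_seq_length = 131072,              # scale toward 380K on a 192GB B200
    load_in_4bit = True,                  # QLoRA
    fast_inference = True,                # vLLM-backed rollouts
    max_lora_rank = 16,
)
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    lora_alpha = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
)

# Toy prompts and reward; real runs use task prompts and task-specific rewards.
train_dataset = Dataset.from_dict({"prompt": ["Solve: 12 * 17 = ?"] * 8})

def brevity_reward(completions, **kwargs):
    # Placeholder reward that prefers shorter completions.
    return [-len(c) / 1000.0 for c in completions]

trainer = GRPOTrainer(
    model = model,
    processing_class = tokenizer,
    reward_funcs = brevity_reward,
    args = GRPOConfig(
        output_dir = "outputs",
        max_prompt_length = 1024,
        max_completion_length = 65536,    # room for long reasoning traces
        num_generations = 4,
        per_device_train_batch_size = 4,
        max_steps = 10,
    ),
    train_dataset = train_dataset,
)
trainer.train()
```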
🔮 New models
🌠 Qwen3-Coder-Next - Run the newly released coding model.
🐳 DeepSeek-OCR 2 - Run and fine-tune the new OCR model.
⚡ GLM-4.7-Flash - Run and fine-tune the best-in-class 30B LLM.
📖 New Guides
</> How To Use Claude Code + Codex with local LLMs: Guide
👾 Train & deploy to LM Studio for local inference: Guide
🎨 Run Diffusion image models with Unsloth GGUFs: Guide
February is shaping up to be an amazing month for LLM releases, and we hope you’re just as excited as we are. 😊