Microsoft's Phi-4 now in Unsloth!
Hope you all had an amazing New Year! Here are our latest updates:
✨ Microsoft Phi-4
Phi-4, Microsoft's new 14B model that performs on par with OpenAI's GPT-4o-mini, is now in Unsloth!
We fixed 4 bugs in Phi-4, greatly increasing the model’s accuracy.
Unsloth Phi-4 finetuning supports context lengths of over 128K tokens.
🦙 Llama 3.3 + Ultra long context
Llama 3.3 is Meta’s new 70B model that performs on par with the much larger Llama 3.1 405B.
We worked with Apple to integrate Cut Cross-Entropy (CCE), so Unsloth now supports 13x longer context lengths.
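The memory savings behind CCE come from never materializing the full logit matrix: the cross-entropy loss only needs the target logit and a log-sum-exp over the vocabulary, which can be streamed in chunks. Below is a minimal, illustrative single-token sketch of that streaming idea in plain Python; it is not Unsloth's or Apple's kernel, and the function and parameter names are our own.

```python
import math

def chunked_cross_entropy(hidden, weight, target, chunk=4):
    """Compute -log p(target) without ever holding the full logit vector.

    hidden: one token's hidden state, list[float] of length d
    weight: unembedding matrix as vocab_size rows of length d
    target: index of the correct next token
    chunk:  how many vocabulary rows to process at a time

    We stream over the vocabulary, keeping a numerically stable running
    log-sum-exp (running_max, running_sum) plus the target's logit.
    """
    running_max = -math.inf
    running_sum = 0.0
    target_logit = None
    for start in range(0, len(weight), chunk):
        # Logits for this vocabulary chunk only (a small dot product batch).
        logits = [sum(h * w for h, w in zip(hidden, row))
                  for row in weight[start:start + chunk]]
        m = max(logits)
        if m > running_max:
            # Rescale the running sum to the new maximum before adding.
            running_sum = (running_sum * math.exp(running_max - m)
                           + sum(math.exp(l - m) for l in logits))
            running_max = m
        else:
            running_sum += sum(math.exp(l - running_max) for l in logits)
        if start <= target < start + chunk:
            target_logit = logits[target - start]
    logsumexp = running_max + math.log(running_sum)
    return logsumexp - target_logit
```

Because only one chunk of logits exists at a time, peak memory no longer scales with vocabulary size; the real CCE implementation applies the same idea as a fused GPU kernel across all tokens at once.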
🦥 Unsloth Dynamic 4-bit
We’re also excited to introduce Unsloth Dynamic 4-bit Quantization!
Unlike naive quantization, which reduces accuracy, our method selectively skips quantizing certain parameters, achieving significantly higher accuracy while using less than 10% more VRAM than BitsandBytes 4-bit.