You can now fine-tune OpenAI’s gpt-oss models with Unsloth! You can also run the models locally using our GGUFs.
We also investigated gpt-oss and fixed issues in its chat template, so be sure to use Unsloth for training and to use our quants.
All other training libraries and methods require a minimum of 65GB of VRAM to train gpt-oss-20b, while Unsloth requires only 14GB. The 120B gpt-oss model fits in 65GB of VRAM.
Fine-tune gpt-oss-20b for free with our Google Colab notebook
View all our gpt-oss model uploads, with fixes, analysis, and our guide:
🦥 Unsloth Updates
Our newest update speeds up every single model while also reducing VRAM usage by more than 20%.
Qwen upgraded Qwen3 and launched their SOTA coding models! Fine-tune and run Qwen3-Coder and Qwen-2507 with Unsloth.
Will this Unsloth version support structured output?