# 6. Conclusion
In this tutorial, we walked through fine-tuning the Qwen1.5 7B model on the MoAI Platform. With MoAI Platform, you can fine-tune PyTorch-based open-source LLMs on GPU clusters while keeping your existing training code intact, and you can change the number of GPUs used for training without modifying any code. Don't hesitate to dive in and develop new models quickly and easily with your own data!
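To recap why no code changes are needed: the fine-tuning loop is ordinary PyTorch and Hugging Face Transformers code. The minimal sketch below shows a single illustrative training step against the public `Qwen/Qwen1.5-7B` checkpoint; the learning rate and the toy one-sentence batch are placeholders rather than the tutorial's actual hyperparameters, and the comment about the `cuda` device reflects MoAI Platform's advertised device abstraction.

```python
import torch
from torch.optim import AdamW
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the public Qwen1.5 7B checkpoint from Hugging Face.
model_id = "Qwen/Qwen1.5-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
model.cuda()  # On MoAI Platform, the "cuda" device maps to the abstracted GPU cluster.
model.train()

optimizer = AdamW(model.parameters(), lr=1e-5)  # placeholder learning rate

# A toy causal-LM batch; in practice this comes from your fine-tuning dataset.
batch = tokenizer("Hello, MoAI Platform!", return_tensors="pt").to("cuda")
labels = batch["input_ids"].clone()

# One ordinary training step: forward, loss, backward, update.
outputs = model(**batch, labels=labels)
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```

Because this is standard PyTorch, the same script runs unchanged whether the platform provisions one GPU or many.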
If you still have any questions about this tutorial, feel free to contact Moreh at support@moreh.io.
# Learn More
- MoAI Platform's Advanced Parallelization (AP)
- Llama3 8B Fine-tuning
- Llama3 70B Fine-tuning
- Mistral Fine-tuning
- GPT Fine-tuning
- Baichuan2 Fine-tuning