Tutorials • Fine-tuning Tutorials

# Qwen Fine-tuning
This tutorial introduces an example of fine-tuning the open-source Qwen2.5 7B model on the MoAI Platform.
## 1. Preparing for Fine-tuning
Preparing the PyTorch script execution environment on the MoAI Platform is similar to doing so on a typical GPU server.
## 2. Understanding Training Code
Once the training data is ready, let's walk through the `train_qwen.py` script used to actually run the fine-tuning process.
## 3. Model Fine-tuning
Now, we will train the model through the following process.
## 4. Checking Training Results
Running the `train_qwen.py` script, as in the previous chapter, saves the resulting model in the `qwen_code_generation` directory.
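Before loading the result, it can help to check that the save directory contains a complete checkpoint. A minimal sketch, assuming the script writes a standard Hugging Face-style checkpoint (the exact file list is an assumption and may differ from what `train_qwen.py` actually saves):

```python
from pathlib import Path

# Files a Hugging Face-style checkpoint directory typically contains.
# This list is an assumption; adjust it to what train_qwen.py actually writes.
EXPECTED_FILES = ["config.json", "model.safetensors", "tokenizer_config.json"]

def missing_checkpoint_files(save_dir: str) -> list[str]:
    """Return the expected checkpoint files that are absent from save_dir."""
    d = Path(save_dir)
    return [name for name in EXPECTED_FILES if not (d / name).exists()]
```

For example, `missing_checkpoint_files("qwen_code_generation")` returning an empty list suggests the save completed.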
## 5. Changing the Number of GPUs
Let's rerun the fine-tuning task with a different number of GPUs.
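When the GPU count changes, a common concern is keeping the effective global batch size constant by adjusting gradient accumulation. A hypothetical helper (not part of `train_qwen.py`) illustrating the arithmetic:

```python
def grad_accum_steps(global_batch: int, per_device_batch: int, num_gpus: int) -> int:
    """Gradient-accumulation steps that keep the effective global batch
    size (per_device_batch * num_gpus * accum_steps) fixed when the
    number of GPUs changes."""
    denom = per_device_batch * num_gpus
    if global_batch % denom != 0:
        raise ValueError("global_batch must be divisible by per_device_batch * num_gpus")
    return global_batch // denom

# Doubling the GPUs halves the accumulation steps:
# grad_accum_steps(256, 8, 4) -> 8
# grad_accum_steps(256, 8, 8) -> 4
```

This keeps the optimizer's effective batch size, and therefore the learning-rate schedule, comparable across runs with different GPU counts.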
## 6. Conclusion
So far, we've explored the process of fine-tuning the Qwen2.5 7B model on the MoAI Platform.