Fine-tuning Tutorials
# Llama2 13B Fine-tuning
This tutorial introduces an example of fine-tuning the open-source Llama2 13B model on the MoAI Platform.
## 1. Preparing for Fine-tuning
Preparing the PyTorch script execution environment on the MoAI Platform is similar to doing so on a typical GPU server.
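Before launching anything, it helps to confirm that the required Python packages are importable. A minimal environment-check sketch; the package names checked below are the usual dependencies for a PyTorch fine-tuning workflow, not a list taken from this tutorial:

```python
import importlib.util

def has_package(name: str) -> bool:
    """Return True if a package can be imported in this environment."""
    return importlib.util.find_spec(name) is not None

# "torch" and "transformers" are typical requirements for a
# Llama2 fine-tuning script; adjust to match your script's imports.
for pkg in ("torch", "transformers"):
    print(pkg, "available:", has_package(pkg))
```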
## 2. Understanding the Training Code
With your training data prepared, let's walk through the code that runs the actual fine-tuning process.
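Before reading the full script, it helps to see the shape of a single training example. Below is a minimal sketch of turning a (document, summary) pair into one training prompt; the template is an illustrative assumption, not the exact format used by the tutorial's `train_llama2.py`:

```python
def format_example(document: str, summary: str) -> str:
    # Hypothetical prompt template for summarization fine-tuning;
    # the actual template in train_llama2.py may differ.
    return (
        "Summarize the following text.\n\n"
        f"{document}\n\n"
        f"Summary: {summary}"
    )

print(format_example("Llama2 is an open LLM released by Meta.", "Meta's open LLM."))
```

In causal-LM fine-tuning, each such string is tokenized and the model learns to continue the prompt with the summary.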
## 3. Model Fine-tuning
Now, we will train the model through the following process.
## 4. Checking Training Results
After running the `train_llama2.py` script as described earlier, the resulting model is saved to the script's output directory.
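A quick way to confirm that the run actually produced a checkpoint is to list the files in the save directory. The directory and file names below are stand-ins for demonstration; a real Hugging Face-style save typically contains a config file plus weight files:

```python
import os
import tempfile

def list_checkpoint_files(ckpt_dir: str) -> list:
    # Sorted filenames in a checkpoint directory, as a quick sanity
    # check that the save step wrote something.
    return sorted(os.listdir(ckpt_dir))

# Demo against a stand-in directory; point this at the script's real
# save path after training finishes.
with tempfile.TemporaryDirectory() as d:
    for name in ("config.json", "pytorch_model.bin"):
        open(os.path.join(d, name), "w").close()
    print(list_checkpoint_files(d))  # ['config.json', 'pytorch_model.bin']
```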
## 5. Changing the Number of GPUs
Let's rerun the fine-tuning task with a different number of GPUs.
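One detail to watch when changing the GPU count: if the training run fixes a global batch size, the per-device batch size must be rescaled so training dynamics stay comparable across runs. A small helper sketch; the names here are illustrative and not taken from the tutorial's script:

```python
def per_device_batch(global_batch: int, num_gpus: int) -> int:
    # Keep the effective global batch size constant when the number
    # of GPUs changes.
    if num_gpus <= 0 or global_batch % num_gpus != 0:
        raise ValueError("global batch must divide evenly across GPUs")
    return global_batch // num_gpus

print(per_device_batch(256, 8))   # 32
print(per_device_batch(256, 16))  # 16
```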
## 6. Conclusion
In this tutorial, we saw how to fine-tune Llama2 13B for text summarization on the MoAI Platform.