# Llama3 8B Fine-tuning
Tutorials • Fine-tuning Tutorials
Tag: llama3_8b
This tutorial introduces an example of fine-tuning the open-source Llama3 8B model on the MoAI Platform.
## 1. Preparing for Fine-tuning
Setting up the PyTorch execution environment on the MoAI Platform is similar to setting it up on a typical GPU server.
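As a minimal sketch of such a setup, the commands below create an isolated Python environment the way you would on a typical GPU server. The environment name and package list are assumptions for illustration, not MoAI-specific values:

```shell
# Create and activate an isolated Python environment (names are assumptions).
python3 -m venv llama3-env
. llama3-env/bin/activate

# Inside the environment you would then install the training dependencies,
# e.g.: pip install torch transformers datasets  (package set assumed)

# Confirm the virtual environment is active.
python -c 'import sys; print(sys.prefix)'
```

On the MoAI Platform the accelerator backend is provided by the platform itself, so the remaining setup is the usual Python dependency installation.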
## 2. Understanding Training Code
Once you have prepared all the training data, let's take a look at the contents of the train_llama3.py script.
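As a hedged sketch of what such a script's entry point often looks like, the snippet below parses typical fine-tuning hyperparameters with argparse. The argument names and default values here are illustrative assumptions; the actual train_llama3.py may differ:

```python
import argparse

def build_parser():
    # Hypothetical arguments for a fine-tuning script; the real
    # train_llama3.py on the MoAI Platform may expose different options.
    p = argparse.ArgumentParser(description="Llama3 8B fine-tuning (sketch)")
    p.add_argument("--model-name-or-path", default="meta-llama/Meta-Llama-3-8B")
    p.add_argument("--epochs", type=int, default=1)
    p.add_argument("--batch-size", type=int, default=16)
    p.add_argument("--lr", type=float, default=1e-5)
    p.add_argument("--save-dir", default="./llama3-finetuned")
    return p

if __name__ == "__main__":
    # With no CLI arguments, all defaults apply.
    args = build_parser().parse_args()
    print(args)
```

Reading the real script with this structure in mind makes it easier to spot where the model, data, and optimizer are configured.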
## 3. Model Fine-tuning
Now, we will actually execute the fine-tuning process.
## 4. Checking Training Results
When you execute the train_llama3.py script as in the previous section, the resulting model will be saved in the directory specified by the script.
## 5. Changing the Number of GPUs
Let's rerun the fine-tuning task with a different number of GPUs.
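As a sketch of what this looks like with the standard PyTorch launcher, the commands below rebuild the launch line for a different device count. The GPU counts are hypothetical, and the MoAI Platform may instead let you change the accelerator count through its own tooling rather than torchrun:

```shell
# Hypothetical: scale from the previous run to 4 devices.
NUM_GPUS=4

# torchrun spawns one training process per device; on a real cluster
# you would execute this command instead of just printing it.
CMD="torchrun --nproc_per_node=${NUM_GPUS} train_llama3.py"
echo "$CMD"
```

When changing the device count, remember that the effective global batch size scales with the number of processes unless you adjust the per-device batch size accordingly.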
## 6. Conclusion
In this tutorial, we have seen how to fine-tune Llama3 8B on the MoAI Platform.