# Tag: llama3_8b

## Tutorials: Fine-tuning on MoAI Platform

This tutorial introduces an example of fine-tuning the open-source Llama3-8B model on the MoAI Platform.

### 1. Preparing for Fine-tuning

Setting up the PyTorch execution environment on the MoAI Platform is similar to setting it up on a typical GPU server.
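
As a quick sanity check before starting, you can confirm that PyTorch sees the accelerators from a Python session. The snippet below is a minimal sketch using standard PyTorch calls; it is not MoAI-specific and simply assumes PyTorch is already installed in your environment.

```python
import torch

# Report the PyTorch build and the accelerators it can see.
print("PyTorch version:", torch.__version__)
print("Accelerator available:", torch.cuda.is_available())
print("Device count:", torch.cuda.device_count())

for i in range(torch.cuda.device_count()):
    print(f"Device {i}:", torch.cuda.get_device_name(i))
```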

### 2. Understanding Training Code

Once you have prepared all the training data, let's take a look at the contents of the train_llama3.py script.
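
The actual contents of train_llama3.py are walked through in the tutorial itself. As a rough orientation only, a typical Llama3-8B fine-tuning script built on Hugging Face transformers looks something like the sketch below; the model ID, dataset, output path, and hyperparameters here are illustrative assumptions, not the tutorial's actual values.

```python
import torch
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Illustrative choice -- the tutorial's train_llama3.py may differ.
MODEL_ID = "meta-llama/Meta-Llama-3-8B"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)

# Any instruction-style text dataset works here; this one is just an example.
dataset = load_dataset("tatsu-lab/alpaca", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="./llama3_finetuned",   # hypothetical output path
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        bf16=True,
        logging_steps=10,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)

trainer.train()
trainer.save_model("./llama3_finetuned")
```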

### 3. Model Fine-tuning

Now, we will actually execute the fine-tuning process.

### 4. Checking Training Results

When you execute the train_llama3.py script as in the previous section, the resulting model will be saved in the specified output directory.
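
One quick way to inspect the result is to load the saved checkpoint back and run a short generation. The sketch below uses standard transformers calls; the checkpoint directory name is a placeholder, not necessarily the path the tutorial's script writes to.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# "./llama3_finetuned" is a hypothetical path -- use the directory that
# train_llama3.py actually wrote its checkpoint to.
ckpt_dir = "./llama3_finetuned"

tokenizer = AutoTokenizer.from_pretrained(ckpt_dir)
model = AutoModelForCausalLM.from_pretrained(ckpt_dir, torch_dtype=torch.bfloat16)
model.eval()

prompt = "Explain what fine-tuning a language model means."
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```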

### 5. Changing the Number of GPUs

Let's rerun the fine-tuning task with a different number of GPUs.
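
How the GPU count interacts with training is largely a matter of keeping the effective (global) batch size where you want it. The sketch below shows one common pattern in plain PyTorch: read the visible device count and derive the gradient accumulation steps from a target global batch size. The variable names and target values are illustrative, not taken from the tutorial.

```python
import torch

# Illustrative targets -- adjust to your own configuration.
TARGET_GLOBAL_BATCH_SIZE = 64
PER_DEVICE_BATCH_SIZE = 1

num_devices = max(torch.cuda.device_count(), 1)

# Keep the effective batch size constant when the GPU count changes by
# absorbing the difference into gradient accumulation steps.
grad_accum_steps = max(
    TARGET_GLOBAL_BATCH_SIZE // (PER_DEVICE_BATCH_SIZE * num_devices), 1
)

print(f"devices={num_devices}, "
      f"per-device batch={PER_DEVICE_BATCH_SIZE}, "
      f"grad accumulation={grad_accum_steps}, "
      f"effective batch={PER_DEVICE_BATCH_SIZE * num_devices * grad_accum_steps}")
```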

### 6. Conclusion

In this tutorial, we have seen how to fine-tune Llama3-8B on the MoAI Platform.

© Copyright Moreh 2024. All rights reserved.