# Fine-tuning Tutorials
This tutorial is for anyone who wants to fine-tune powerful large language models such as Llama2 and Mistral for their own projects. We will walk you through the steps to fine-tune these large language models (LLMs) on the MoAI Platform.
Fine-tuning in machine learning means adjusting a pre-trained model's weights on new data to improve performance on a specific task. Essentially, when you want to apply an AI model to a new task, you take an existing model and optimize it on new datasets. This lets you tailor the model to your specific needs and domain requirements.
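As a concrete illustration, the sketch below continues training a pre-trained causal language model on a new text dataset using Hugging Face Transformers. The model name, data file, and hyperparameters are placeholders for this illustration, not values prescribed by the tutorial.

```python
# A minimal fine-tuning sketch with Hugging Face Transformers.
# The model name, data file, and hyperparameters are illustrative only.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "mistralai/Mistral-7B-v0.1"  # any pre-trained causal LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # some LLM tokenizers lack a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Load the new, task-specific data and tokenize it for causal LM training.
dataset = load_dataset("text", data_files={"train": "my_domain_data.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", num_train_epochs=1),
    train_dataset=dataset,
    # mlm=False makes the collator copy inputs into labels for causal LM loss.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # adjusts the pre-trained weights on the new data
```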
A pre-trained model has a large number of parameters designed for general-purpose use, and effectively fine-tuning such a large model requires a sufficient amount of training data.
With the MoAI Platform, you can easily apply parallelization techniques optimized for the available GPU memory, significantly reducing the time and effort needed to start training.
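As a sketch of what this looks like in code, the MoAI tutorials enable automatic parallelization with a single call on top of otherwise standard PyTorch. The option call below follows the MoAI Platform documentation; treat the exact name as an assumption and verify it against your platform version.

```python
# A minimal sketch of MoAI's automatic parallelization, assuming the
# platform's PyTorch integration is installed. The option call reflects
# the MoAI documentation; confirm the name for your platform version.
import torch
from transformers import AutoModelForCausalLM

# One call lets the platform choose a parallelization plan that fits the
# model size and the GPUs' memory; no manual sharding or device placement.
torch.moreh.option.enable_advanced_parallelization()

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
# ...the rest of the training loop is unchanged single-device PyTorch code.
```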
## What you will learn here:
- Loading datasets, models, and tokenizers
- Running training and checking results
- Applying automatic parallelization
- Choosing the right training environment and AI accelerators