# 5. Changing the Number of GPUs

Let's rerun the fine-tuning task with a different number of GPUs. MoAI Platform abstracts GPU resources into a single accelerator and automatically performs parallel processing. Therefore, there is no need to modify the PyTorch script even when changing the number of GPUs.

# Changing the Accelerator Type

Switch the accelerator type using the `moreh-switch-model` tool. For instructions on changing the accelerator, see 3. Model Fine-tuning.

```bash
$ moreh-switch-model
```

Please contact your infrastructure provider and choose one of the following options before proceeding.

  • AMD MI250 GPU with 32 units
    • When using Moreh's trial container: select 8xlarge
    • When using KT Cloud's Hyperscale AI Computing: select 8xLarge.4096GB
  • AMD MI210 GPU with 64 units
  • AMD MI300X GPU with 16 units

# Training Parameters

Since the number of GPUs has doubled, the available GPU memory has also doubled, so let's increase the batch size to 512 and run the `train_baichuan2_13b.py` script again:

```bash
~/moreh-quickstart$ python tutorial/train_baichuan2_13b.py --batch-size 512
```
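The rule of thumb used here (double the GPUs, double the global batch size) can be sketched as a small helper. Note that `scaled_batch_size` is a hypothetical function written for illustration, not part of the tutorial script, and it assumes the previous run used a batch size of 256 on 16 GPUs.

```python
def scaled_batch_size(base_batch_size: int, base_gpus: int, new_gpus: int) -> int:
    """Scale the global batch size linearly with the number of GPUs.

    Illustrative helper only: assumes per-GPU memory is the limiting
    factor, so total memory (and hence the feasible global batch size)
    grows linearly with the GPU count.
    """
    return base_batch_size * new_gpus // base_gpus

# 256 on 16 GPUs -> 512 on 32 GPUs, matching the --batch-size 512 flag above.
print(scaled_batch_size(256, 16, 32))  # -> 512
```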

If the training proceeds normally, you should see a log similar to the following:

```
...
[info] Got DBs from backend for auto config.
[info] Requesting resources for MoAI Accelerator from the server...
[info] Initializing the worker daemon for MoAI Accelerator
[info] [1/8] Connecting to resources on the server (192.168.xxx.xx:xxxxx)...
[info] [2/8] Connecting to resources on the server (192.168.xxx.xx:xxxxx)...
[info] [3/8] Connecting to resources on the server (192.168.xxx.xx:xxxxx)...
[info] [4/8] Connecting to resources on the server (192.168.xxx.xx:xxxxx)...
[info] [5/8] Connecting to resources on the server (192.168.xxx.xx:xxxxx)...
[info] [6/8] Connecting to resources on the server (192.168.xxx.xx:xxxxx)...
[info] [7/8] Connecting to resources on the server (192.168.xxx.xx:xxxxx)...
[info] [8/8] Connecting to resources on the server (192.168.xxx.xx:xxxxx)...
[info] Establishing links to the resources...
[info] KT AI Accelerator is ready to use.
[info] Moreh Version: 24.11.0
[info] Moreh Job ID: 991786
[info] The number of candidates is 96.
[info] Parallel Graph Compile start...
[info] Elapsed Time to compile all candidates = 99770 [ms]
[info] Parallel Graph Compile finished.
[info] The number of possible candidates is 35.
[info] SelectBestGraphFromCandidates start...
[info] Elapsed Time to compute cost for survived candidates = 20061 [ms]
[info] SelectBestGraphFromCandidates finished.
[info] Configuration for parallelism is selected.
[info] No PP, No TP,  recomputation : default(1), distribute_param : true, distribute_low_prec_param : true
[info] train: true

| INFO     | __main__:main:246 - Model load and warmup done. Duration: 456.21
| INFO     | __main__:main:256 - [Step 10/41] | Loss: 0.7734 | Duration: 121.93 | Throughput: 38698.46 tokens/sec
| INFO     | __main__:main:256 - [Step 20/41] | Loss: 0.6328 | Duration: 134.64 | Throughput: 38938.88 tokens/sec
| INFO     | __main__:main:256 - [Step 30/41] | Loss: 0.5781 | Duration: 134.70 | Throughput: 38921.30 tokens/sec
| INFO     | __main__:main:256 - [Step 40/41] | Loss: 0.5546 | Duration: 134.66 | Throughput: 38933.53 tokens/sec
...
```

Compared with the earlier run on half the number of GPUs, training progresses similarly while throughput improves significantly.

  • When using the AMD MI250 GPU, 16 → 32 units: approximately 15,000 tokens/sec → 38,000 tokens/sec