Multi-GPU training

metatrain supports training a model on several GPUs, which can speed up training considerably, especially when the training dataset is large or many training epochs are needed. This feature is built on the torch.distributed module and can therefore also run multiprocess parallelism across several nodes.

In multi-GPU training, every batch of samples is split into smaller mini-batches, and the computation for each mini-batch runs in parallel on a different GPU; with two GPUs, for example, each device processes half of every batch. The gradients obtained on the different devices are then summed, so each optimizer step still uses the gradient information of the full batch. This approach reduces the wall-clock time needed to train a model.
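To make this mechanism more concrete, here is a minimal, generic sketch of data-parallel training written directly with torch.distributed and DistributedDataParallel. It is not metatrain's internal code (metatrain sets all of this up for you); the toy model, dataset, and hyperparameters are invented purely for illustration. Each process drives one GPU, works on its own share of the data, and gradients are synchronized across processes before every optimizer step.

# Illustrative sketch only, NOT metatrain's implementation: a generic
# torch.distributed / DistributedDataParallel training loop. Launch with one
# process per GPU, e.g. `torchrun --nproc_per_node=2 ddp_sketch.py`.
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset


def main():
    dist.init_process_group(backend="nccl")  # one process per GPU
    local_rank = int(os.environ["LOCAL_RANK"])  # set by the launcher
    device = torch.device("cuda", local_rank)

    # Toy model and dataset, stand-ins for the real architecture and training set
    model = DDP(torch.nn.Linear(16, 1).to(device), device_ids=[local_rank])
    optimizer = torch.optim.Adam(model.parameters())
    dataset = TensorDataset(torch.randn(100, 16), torch.randn(100, 1))

    # DistributedSampler hands each process a different part of the data, so
    # with 2 GPUs and batch_size=5 per process, every step covers 10 samples
    sampler = DistributedSampler(dataset)
    loader = DataLoader(dataset, batch_size=5, sampler=sampler)

    for epoch in range(10):
        sampler.set_epoch(epoch)  # reshuffle consistently across processes
        for x, y in loader:
            optimizer.zero_grad()
            loss = torch.nn.functional.mse_loss(model(x.to(device)), y.to(device))
            loss.backward()  # DDP synchronizes gradients across GPUs here
            optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()

With metatrain, none of this needs to be written by hand: the distributed option described below takes care of it.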

To check whether an architecture supports multi-GPU training, see Available Architectures and verify that its default hyperparameters include the distributed option.

Input file

To enable multi-GPU training, you only need to switch on the distributed option in the .yaml options file. Let's take this tutorial as an example. The options file, options-distributed.yaml, now reads:

architecture:
  name: pet
  model:
    cutoff: 4.5
  training:
    distributed: true  # switch on distributed training mode
    num_epochs: 100  # increased to 100 to compare with non-distributed
    batch_size: 10
    log_interval: 1
    checkpoint_interval: 10

training_set:
  systems:
    read_from: ethanol_reduced_100.xyz
    length_unit: Angstrom
  targets:
    energy:
      key: energy
      unit: eV

test_set: 0.1
validation_set: 0.1

Slurm script

Below is an example Slurm script for submitting the job. Be aware that the actual configuration varies from cluster to cluster, so you will have to adapt it; other schedulers require similar options. metatrain automatically uses all the GPUs you have requested. You should make a single GPU visible to each process (by setting --gpus-per-node equal to the number of GPUs, or by setting --gpus-per-task=1, depending on your cluster configuration).

#!/bin/bash
#SBATCH --nodes 1
#SBATCH --ntasks 2  # must equal the number of GPUs
#SBATCH --ntasks-per-node 2
#SBATCH --gpus-per-node 2  # use 2 GPUs
#SBATCH --cpus-per-task 8
#SBATCH --exclusive
#SBATCH --partition=h100  # adapt this to your cluster
#SBATCH --time=1:00:00

# load modules and/or virtual environments and/or containers here

srun mtt train options-distributed.yaml
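
If you want to verify that each Slurm task really sees a single GPU, as recommended above, one option is to run a small diagnostic with the same srun settings before launching the training. The script below is only a suggestion, and the file name check_gpus.py is made up for this example.

# check_gpus.py -- small diagnostic, not part of metatrain.
# Run it with the same Slurm settings as the training job, e.g.:
#     srun python check_gpus.py
# If a single GPU is visible to each process, every task should print a
# device count of 1.
import os

import torch

rank = os.environ.get("SLURM_PROCID", "?")  # Slurm sets this for each task
print(f"task {rank}: visible CUDA devices = {torch.cuda.device_count()}")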

Performance

If the multi-GPU training runs successfully, you should see this in the training log:

[2025-10-08 11:34:22][INFO] - Distributed environment set up with MASTER_ADDR=kh080,
MASTER_PORT=39591, WORLD_SIZE=2, RANK=0, LOCAL_RANK=0
[2025-10-08 11:34:23][INFO] - Training on 2 devices with dtype torch.float32

This 100-epoch training takes 23 seconds:

[2025-10-08 11:34:22][INFO] - Starting training from scratch
...
[2025-10-08 11:34:45][INFO] - Training finished!

Now let's switch off multi-GPU training by setting distributed: false and submit the job again. The same training now takes 69 seconds, roughly three times as long:

[2025-10-08 11:37:38][INFO] - Setting up model
...
[2025-10-08 11:38:47][INFO] - Training finished!

Multi-GPU fine-tuning

You can use multiple GPUs for fine-tuning too, by setting distributed: true in the .yaml input file. For information about fine-tuning, please refer to this tutorial on fine-tuning.
