Accelerate config in Seq2SeqTrainer

I’m trying to use accelerate on an HPC system where I schedule jobs with SLURM.
For Hugging Face models like this, do we always need to pass the configuration we created for accelerate via the accelerator_config argument, like this? -

    trainer = Seq2SeqTrainer(
        model,
        accelerator_config="path/to/config/accelerate/default_config.yaml",
        compute_metrics=lambda eval_pred: compute_metrics(eval_pred, tokenizer)
    )

Or does specifying the arguments on the command line when launching the Python script suffice?

Like this -

    accelerate launch --num_processes $(( 4 * $SLURM_NNODES )) --num_machines $SLURM_NNODES --multi_gpu --mixed_precision fp16 --main_process_ip $MASTER_ADDR --main_process_port $MASTER_PORT samplescript.py
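For context, here is roughly how I set MASTER_ADDR and MASTER_PORT in my batch script before the launch command above. This is a hedged sketch, not necessarily the right way to do it: the scontrol-based derivation and the fallback values (127.0.0.1, port 29500) are my own assumptions, not anything prescribed by accelerate.

```shell
#!/bin/bash
# Sketch (assumed, not authoritative): derive the rendezvous endpoint for
# accelerate launch from SLURM's node list, so every node agrees on it.
# When SLURM_JOB_NODELIST is unset (e.g. running locally), fall back to
# localhost and an arbitrary free-ish port.
if [ -n "$SLURM_JOB_NODELIST" ]; then
    # First hostname in the allocation acts as the main process host.
    MASTER_ADDR=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1)
fi
export MASTER_ADDR=${MASTER_ADDR:-127.0.0.1}
export MASTER_PORT=${MASTER_PORT:-29500}
echo "main process endpoint: $MASTER_ADDR:$MASTER_PORT"
```

Is this kind of setup the expected pattern, or does accelerate handle the rendezvous details itself?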

I’m a bit new to this package, and I’m confused about how accelerate is meant to be used with Seq2SeqTrainer and other classes from the transformers package.

Any help appreciated! Thank you!