Model (pipeline) parallelism on a SLURM cluster

I have a SLURM-based cluster with 2 nodes, each with a single NVIDIA RTX 4090 GPU (24 GB), so 2 GPUs with 2 × 24 = 48 GB of memory in total. Can I load and fine-tune large LLMs on this cluster by splitting a model across the two nodes? If yes, then how? Any kind of help or clue would be appreciated. Thanks in advance.
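For context, this is roughly the kind of SLURM batch script I imagine using, with `torchrun` launching one process per node; `train.py` and the resource limits are placeholders, and I am not sure this is the right approach for splitting a model across nodes:

```shell
#!/bin/bash
#SBATCH --job-name=llm-finetune
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=1
#SBATCH --gpus-per-node=1
#SBATCH --time=04:00:00

# Use the first node in the allocation as the rendezvous host.
export MASTER_ADDR=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1)
export MASTER_PORT=29500

# One task per node; torchrun spawns 1 process per node (1 GPU each).
srun torchrun \
  --nnodes=2 \
  --nproc_per_node=1 \
  --rdzv_backend=c10d \
  --rdzv_endpoint="$MASTER_ADDR:$MASTER_PORT" \
  train.py   # placeholder fine-tuning script
```

I am unsure what `train.py` would need to contain so that the model's layers are actually partitioned across the two GPUs rather than replicated.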