(Minimal) Lightning -> Accelerate?

Hi, I’m surprised not to find any info on this yet, but… I guess I’m the first one to ask: Is there any way to make Accelerate work with PyTorch Lightning-based code? (Or a recommended way to convert from the latter to the former?)

Up until posting this, I’d been assuming the answer is “No”, and have begun “ripping out” all my Lightning stuff: converting my pl.Trainer module to a straight-PyTorch module to match the Accelerate examples I’ve seen, and writing a “manual” PyTorch training loop to replace Lightning’s trainer.fit().

But… is that the only/best way to do it? Figured it was worth asking before I got too far along in this (kind of) “refactoring”.
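For reference, here’s roughly the kind of “manual” loop I’ve started writing, pieced together from the Accelerate examples (a minimal sketch; the model and data below are just placeholders standing in for my real code):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()

# Placeholder model and data, standing in for what used to live in my
# LightningModule / DataModule.
model = torch.nn.Linear(32, 1)
dataset = TensorDataset(torch.randn(256, 32), torch.randn(256, 1))
dataloader = DataLoader(dataset, batch_size=16, shuffle=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = torch.nn.MSELoss()

# prepare() wraps everything for the current device / distributed setup.
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

# This loop replaces trainer.fit() from Lightning.
for epoch in range(3):
    model.train()
    for inputs, targets in dataloader:
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        accelerator.backward(loss)  # instead of loss.backward()
        optimizer.step()
```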

PL has its own DDP implementation, and it wraps a lot of magic around the training loop, so it is indeed incompatible.

If you want a framework that has Accelerate built in, you can use fastai :wink: (so long as you’re doing multi-GPU, for now)
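Roughly like this, for example (an untested sketch; in recent fastai versions the `distrib_ctx()` context manager is backed by Accelerate, and the pets dataset/model here are just the usual demo):

```python
# Launch the usual distributed way, e.g. `accelerate launch train.py`
# after running `accelerate config`.
from fastai.vision.all import *
from fastai.distributed import *

path = untar_data(URLs.PETS)
dls = ImageDataLoaders.from_name_func(
    path,
    get_image_files(path / "images"),
    label_func=lambda f: f.name[0].isupper(),  # cats vs. dogs
    item_tfms=Resize(224),
)
learn = vision_learner(dls, resnet34, metrics=error_rate)

# distrib_ctx() sets up (and tears down) the multi-GPU training context.
with learn.distrib_ctx():
    learn.fine_tune(1)
```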

Cool, thanks. Yeah, I figured the DDP part was incompatible; I just wasn’t sure if there was any kind of “shortcut”.

Cooool (and of course, given you & Sylvain are on HF staff!).
I may just do that! I had “inherited” the Lightning code from a colleague anyway. :wink: And now that I know the notebook example is fixed, I can go “full nbdev”, LOL.

Just wanted to extend this discussion, since you’ve tried getting pytorch-lightning to work with accelerate.
On the surface, it would seem that pytorch-lightning modules should be compatible with accelerate, since a LightningModule can also be treated as a plain PyTorch nn.Module. So, as long as you don’t interleave accelerate and lightning for the training and optimization steps themselves, it seems they should be able to share components?
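For example, something like this seems like it ought to work, at least in principle (untested sketch; `LitRegressor` is just a toy stand-in):

```python
import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

class LitRegressor(pl.LightningModule):  # toy module for illustration
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Linear(32, 1)

    def forward(self, x):
        return self.net(x)

accelerator = Accelerator()

# A LightningModule is still an nn.Module, so Accelerate can prepare
# and train it directly, bypassing pl.Trainer entirely.
model = LitRegressor()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
dataloader = DataLoader(
    TensorDataset(torch.randn(256, 32), torch.randn(256, 1)), batch_size=16
)
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for inputs, targets in dataloader:
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(inputs), targets)
    accelerator.backward(loss)
    optimizer.step()
```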
