What is the fastest way to run inference on a large dataset with Hugging Face?

Hi! Yes, there is a code example in the docs of multi-GPU inference using map() with multiprocessing.
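To illustrate the idea, here is a minimal stdlib-only sketch of the same pattern: split the dataset into batches and run a prediction function over them with a process pool, which is roughly what `datasets.Dataset.map(fn, batched=True, num_proc=N)` does for you (the docs' multi-GPU variant additionally passes `with_rank=True` so each worker can place the model on its own `cuda:{rank}` device). The `predict_batch` function below is a hypothetical stand-in for real model inference:

```python
from multiprocessing import Pool

def predict_batch(batch):
    # Stand-in for model inference on one batch; a real version would
    # run e.g. a transformers pipeline or model.forward() here.
    return [x * x for x in batch]

def chunk(data, size):
    # Split the dataset into fixed-size batches.
    return [data[i:i + size] for i in range(0, len(data), size)]

if __name__ == "__main__":
    data = list(range(8))           # toy "dataset"
    batches = chunk(data, 4)
    # Each worker process handles whole batches, analogous to
    # Dataset.map(predict_batch, batched=True, num_proc=2).
    with Pool(processes=2) as pool:
        results = pool.map(predict_batch, batches)
    flat = [y for batch in results for y in batch]
    print(flat)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Batched processing matters here: calling the model once per batch amortizes per-call overhead, which is usually the biggest win on large datasets.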

Let me know how it goes!