Problem with sharing models among processes via multiprocessing

Hi All,
I have created a simple ASR decoder script which uses a pretrained XLSR model.
To speed up decoding, I want to do batch decoding through multiprocessing.
Namely, I want to give a batch of audios to the script and have it spawn, let's say, 4 decoder processes that decode the audios in parallel.
I have tried multiprocessing.Process and Pool, but both get stuck at the process-creation step. I want to initialize (load) the ASR model once at the beginning and pass it to each process as a shared object, to avoid reloading the same model in every process.
During process creation it gives the following error:
RuntimeError: Cowardly refusing to serialize non-leaf tensor which requires_grad, since autograd does not support crossing process boundaries. If you just want to transfer the data, call detach() on the tensor before serializing (e.g., putting it on the queue).
Exception ignored in: <function Pool.__del__ at 0x0000029F8F88CE50>

The related code snippet is as follows:
from functools import partial

if model is None or processor is None:
    load_model()  # loads and sets the (global) model and processor

with poolcontext(processes=4) as pool:
    # partial binds processor and model; pool.map iterates over audio_list
    results = pool.map(partial(process, processor, model), audio_list)
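For context, here is the shape of what I'm attempting, reduced to a runnable stdlib-only sketch (DummyModel, process, and run_batch are stand-ins for my real model, decode function, and driver, not my actual code):

```python
from functools import partial
from multiprocessing import Pool

class DummyModel:
    # stand-in for the real XLSR model object
    def decode(self, audio):
        return audio.upper()

def process(model, audio):
    # each task receives the (pickled) model plus one audio item
    return model.decode(audio)

def run_batch(audio_list, processes=4):
    model = DummyModel()
    with Pool(processes=processes) as pool:
        # partial binds the model; pool.map iterates over audio_list only
        return pool.map(partial(process, model), audio_list)

if __name__ == "__main__":
    print(run_batch(["a", "b", "c"]))  # → ['A', 'B', 'C']
```

With a plain Python object this works, because the model is picklable; with the real torch model it fails with the RuntimeError above, since the model's tensors require grad.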

Using the Parallel class from joblib gives the same error.
So the problem seems to be serializing the model and processor objects at process creation. How can I overcome this?
What is the correct way of sharing models when using multiprocessing?
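One workaround I've been considering, though I'm not sure it's the recommended pattern, is to have each worker load the model itself via a Pool initializer instead of pickling it from the parent. A stdlib-only sketch of that idea (init_worker, decode_one, and the dict standing in for the real model are all placeholders I made up):

```python
from multiprocessing import Pool

_model = None  # per-worker global, set once by the initializer

def init_worker():
    # in my real script this would call load_model(); the dict is a placeholder
    global _model
    _model = {"name": "xlsr"}

def decode_one(audio):
    # uses the worker-local model instead of receiving it as a pickled argument
    return (audio, _model["name"])

def decode_batch(audio_list):
    with Pool(processes=2, initializer=init_worker) as pool:
        return pool.map(decode_one, audio_list)

if __name__ == "__main__":
    print(decode_batch(["x.wav", "y.wav"]))
```

This avoids serializing the model entirely, at the cost of each worker loading its own copy. Is that the idiomatic approach here, or is there a proper way to share a single loaded model across processes?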
I would appreciate any recommendations and guidance on this issue.
Thanks in advance for your time and support.