Error DEBUG:filelock:Attempting to acquire lock / related to cache

Hello,

I’m facing an issue while running this code:

Model: Gregor/mblip-bloomz-7b

    import torch
    from datasets import load_dataset
    from transformers import Blip2Processor, Blip2ForConditionalGeneration

    dataset = load_dataset("json", data_files=input_file)["train"]
    dataloader = torch.utils.data.DataLoader(dataset, batch_size=batch_size)

    processor = Blip2Processor.from_pretrained(path_model)
    model = Blip2ForConditionalGeneration.from_pretrained(path_model, device_map="auto")
    model.eval()

Then I started to get errors like these:

DEBUG:filelock:Attempting to acquire lock 22621346415712 on .cache/huggingface/datasets/_.cache_huggingface_datasets_json_default-b2ac538db5cb5168_0.0.0_c8d2d9508a2a2067ab02cd118834ecef34c3700d143b31835ec4235bf10109f7.lock
DEBUG:filelock:Lock 22621346415712 not acquired on .cache/huggingface/datasets/_.cache_huggingface_datasets_json_default-b2ac538db5cb5168_0.0.0_c8d2d9508a2a2067ab02cd118834ecef34c3700d143b31835ec4235bf10109f7.lock, waiting 0.05 seconds ...

At first I worked around it by defining a temporary cache dir in `load_dataset`, which initially helped, but then I got an error from the model like this:
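For reference, the workaround I used looked roughly like this (the per-process cache path is just how I set it up; I'm not sure it's the recommended approach):

```python
import os
import tempfile

# Give each process its own datasets cache directory, so concurrent runs
# don't contend for the same .lock file in the shared cache.
cache_dir = os.path.join(tempfile.gettempdir(), f"hf_datasets_cache_{os.getpid()}")
os.makedirs(cache_dir, exist_ok=True)

# Then pass it to load_dataset instead of using the default cache:
# dataset = load_dataset("json", data_files=input_file, cache_dir=cache_dir)["train"]
```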

NotImplementedError: Cannot copy out of meta tensor; no data!
ERROR:torch.distributed.elastic.multiprocessing.api:failed

Which is weird, because everything ran fine yesterday, so maybe I changed something without realizing it. I tried deleting the cache folder, recreating my conda environment, and reinstalling everything, but the issue persists. Please help!