Unable to load 8bit model in Kaggle with dual GPU

Hello All,

I am trying to load facebook/opt-6.7b in Kaggle with dual T4 GPUs. The code is simple and runs fine in Colab, but I can't make it work in Kaggle. What am I missing?

Here is the code -

!pip install bitsandbytes datasets accelerate loralib
!pip install transformers peft

import os
# os.environ["CUDA_VISIBLE_DEVICES"]="0"
import torch
import torch.nn as nn
import bitsandbytes as bnb
from transformers import AutoTokenizer, AutoConfig, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-6.7b", 
    load_in_8bit=True, 
    device_map='auto',
)

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b")
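Before loading, it's worth confirming that both T4s are actually visible to PyTorch, since `device_map='auto'` can only spread the model across the devices it can see (a small diagnostic sketch; note the commented-out `CUDA_VISIBLE_DEVICES="0"` line above would restrict you to a single GPU if left in):

```python
import torch

# On a dual-T4 Kaggle instance this should report 2 devices; if it reports 1,
# check that CUDA_VISIBLE_DEVICES has not been set to a single GPU.
print("CUDA available:", torch.cuda.is_available())
print("GPU count:", torch.cuda.device_count())
for i in range(torch.cuda.device_count()):
    print(f"cuda:{i} ->", torch.cuda.get_device_name(i))
```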

And here is the stack trace.

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
/tmp/ipykernel_23/3843057406.py in <module>
      9     "facebook/opt-6.7b",
     10     load_in_8bit=True,
---> 11     device_map='auto',
     12 )
     13 

/opt/conda/lib/python3.7/site-packages/transformers/models/auto/auto_factory.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
    463             model_class = _get_model_class(config, cls._model_mapping)
    464             return model_class.from_pretrained(
--> 465                 pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs
    466             )
    467         raise ValueError(

/opt/conda/lib/python3.7/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
   2527         # Dispatch model with hooks on all devices if necessary
   2528         if device_map is not None:
-> 2529             dispatch_model(model, device_map=device_map, offload_dir=offload_folder, offload_index=offload_index)
   2530 
   2531         if output_loading_info:

TypeError: dispatch_model() got an unexpected keyword argument 'offload_index'

Any idea what can be done? I am a bit clueless. Any help is much appreciated.
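For anyone hitting this: the `TypeError` points at a version mismatch rather than the code itself. `transformers` passes `offload_index` to accelerate's `dispatch_model()`, and an older pre-installed accelerate doesn't accept that argument yet. A quick way to see which versions the Kaggle runtime actually ships (a diagnostic sketch using only the standard library):

```python
import importlib.metadata as md

# If transformers is much newer than accelerate, transformers may call
# dispatch_model() with keyword arguments the old accelerate doesn't know,
# producing exactly the TypeError in the stack trace above.
for pkg in ("transformers", "accelerate", "bitsandbytes", "peft"):
    try:
        print(pkg, md.version(pkg))
    except md.PackageNotFoundError:
        print(pkg, "not installed")
```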

Ok, this was super easy actually: I just needed to upgrade the packages. Kaggle comes with pre-installed versions, and they were old. So that was it; I have solved it :slight_smile:

@rcshubhadeep
Hello, can you share which packages you upgraded, please? I am running into the same issue!

I have tried reinstalling transformers, but no luck.

I forgot to update this thread, but unfortunately I could not make it work there despite many attempts. I will try again later. To answer your question, I updated transformers, datasets, etc. The Kaggle runtime comes with older versions that cause this error.

I have made it work…
Please try the following.

!pip uninstall wandb --yes

!pip install --upgrade git+https://github.com/huggingface/transformers.git@main

!pip install --upgrade bitsandbytes datasets accelerate loralib

!pip install --upgrade git+https://github.com/huggingface/peft.git

The key is to upgrade the packages. I did that, and it worked for me.
Hope this helps.
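After running the upgrades above, a quick sanity check (a sketch, assuming accelerate is what the old runtime was tripping over) is to confirm that `dispatch_model()` now accepts the `offload_index` keyword the old version choked on:

```python
import importlib.util
import inspect

# The original TypeError came from an old accelerate whose dispatch_model()
# did not yet accept offload_index; on an upgraded install it should.
if importlib.util.find_spec("accelerate") is not None:
    from accelerate import dispatch_model
    print("offload_index" in inspect.signature(dispatch_model).parameters)
else:
    print("accelerate is not installed")
```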

Thanks for the update. Will check it out!
