Facing Issue in importing pipelines from transformers

from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

model_id = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id, cache_dir = "/kaggle/working/augmented")
lm_model = AutoModelForCausalLM.from_pretrained(model_id, cache_dir = "/kaggle/working/augmented")

pipe = pipeline("text-generation", model = lm_model, tokenizer = tokenizer,
                max_new_tokens = 512, device_map = "auto")

I have tried installing different versions of transformers; I even did this:

! pip install -U git+https://github.com/huggingface/transformers.git
! pip install -U git+https://github.com/huggingface/accelerate.git
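For context on what the error at the bottom of the traceback means: transformers imports PartialState from accelerate, so the ImportError indicates the installed accelerate predates the release that added it. A minimal sketch of a version check (the 0.17.0 threshold is an assumption, not confirmed from the thread; the helper names are hypothetical):

```python
def parse_version(v):
    # "4.28.0" -> (4, 28, 0); non-numeric suffixes end the parse
    parts = []
    for piece in v.split("."):
        digits = "".join(ch for ch in piece if ch.isdigit())
        if not digits:
            break
        parts.append(int(digits))
    return tuple(parts)

def accelerate_too_old(installed, required="0.17.0"):
    # True if the installed accelerate is older than the assumed
    # minimum release that exposes accelerate.state.PartialState
    return parse_version(installed) < parse_version(required)
```

For example, `accelerate_too_old("0.12.0")` is True, while a recent release such as "0.20.3" passes the check.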

But I am still getting this error:

ImportError                               Traceback (most recent call last)
File /opt/conda/lib/python3.10/site-packages/transformers/utils/import_utils.py:1084, in _get_module(self, module_name)
   1083 except Exception as e:
-> 1084     raise RuntimeError(
   1085         f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its"
   1086         f" traceback):\n{e}"
   1087     ) from e

File /opt/conda/lib/python3.10/importlib/__init__.py:126, in import_module(name, package)
    125         level += 1
--> 126 return _bootstrap._gcd_import(name[level:], package, level)

File <frozen importlib._bootstrap>:1050, in _gcd_import(name, package, level)

File <frozen importlib._bootstrap>:1027, in _find_and_load(name, import_)

File <frozen importlib._bootstrap>:1006, in _find_and_load_unlocked(name, import_)

File <frozen importlib._bootstrap>:688, in _load_unlocked(spec)

File <frozen importlib._bootstrap_external>:883, in exec_module(self, module)

File <frozen importlib._bootstrap>:241, in _call_with_frames_removed(f, *args, **kwds)

File /opt/conda/lib/python3.10/site-packages/transformers/pipelines/__init__.py:44
     35 from ..utils import (
     37     is_kenlm_available,
     42     logging,
     43 )
---> 44 from .audio_classification import AudioClassificationPipeline
     45 from .automatic_speech_recognition import AutomaticSpeechRecognitionPipeline

File /opt/conda/lib/python3.10/site-packages/transformers/pipelines/audio_classification.py:21
     20 from ..utils import add_end_docstrings, is_torch_available, is_torchaudio_available, logging
---> 21 from .base import PIPELINE_INIT_ARGS, Pipeline
     24 if is_torch_available():

File /opt/conda/lib/python3.10/site-packages/transformers/pipelines/base.py:35
     34 from ..image_processing_utils import BaseImageProcessor
---> 35 from ..modelcard import ModelCard
     36 from ..models.auto.configuration_auto import AutoConfig

File /opt/conda/lib/python3.10/site-packages/transformers/modelcard.py:48
     32 from .models.auto.modeling_auto import (
     47 )
---> 48 from .training_args import ParallelMode
     49 from .utils import (
     50     MODEL_CARD_NAME,
     51     cached_file,
     57     logging,
     58 )

File /opt/conda/lib/python3.10/site-packages/transformers/training_args.py:67
     66 if is_accelerate_available():
---> 67     from accelerate.state import AcceleratorState, PartialState
     68     from accelerate.utils import DistributedType

ImportError: cannot import name 'PartialState' from 'accelerate.state' (/opt/conda/lib/python3.10/site-packages/accelerate/state.py)

I want to run this code in my Kaggle notebook. How do I fix this error?

Try restarting your runtime after installing; if you imported the package before installing, the old version can still be cached.


I tried that too, but it didn't help.

Can you share your notebook and I will try to help, if I can!

Your notebook would be great, because this is the root of the issue.

Here is the link to my notebook: https://www.kaggle.com/code/akritiupadhyayks/llm-practice?scriptVersionId=134987825

Make sure to restart your kernel after running:
pip install transformers==4.28.0
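After reinstalling and restarting the kernel, it can help to confirm which versions are actually active, since a stale cached import is exactly what causes this class of error to persist. A small sketch using only the standard library (the helper name is hypothetical; the package names match the thread):

```python
from importlib import metadata

def installed_versions(packages=("transformers", "accelerate")):
    """Report the active version of each package, or None if absent."""
    versions = {}
    for pkg in packages:
        try:
            versions[pkg] = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            versions[pkg] = None
    return versions
```

If `installed_versions()` shows an unexpected pair, the kernel is still running against the old environment and needs another restart.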


It worked. Thank you!