Error trying to use instruct_pipeline in order to avoid trust_remote_code

Note: superbeginner here.

Working with Databricks’ dolly-v2-3b.

Installed git 2.40.0.windows.1 and downloaded Dolly
Installed CUDA 12.1.1_531.14
Installed python 3.11.3 and installed the necessary packages using the following commands:

py.exe -m pip install numpy
py.exe -m pip install "accelerate>=0.12.0" "transformers[torch]==4.25.1"
py.exe -m pip install numpy --pre torch --force-reinstall --index-url https://download.pytorch.org/whl/nightly/cu117 --user

Now, when I run the following commands, everything works ok and Dolly will produce responses.

import torch
from transformers import pipeline
generate_text = pipeline(model="databricks/dolly-v2-3b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto")

However, I don't want to use trust_remote_code. So I downloaded instruct_pipeline.py and added it to C:\Users\xxxxxx\AppData\Local\Programs\Python\Python311\Lib. But when I try to do:

import torch
from instruct_pipeline import InstructionTextGenerationPipeline

I get an error, as per the screenshot:

Not sure how to proceed?

I’m running everything on windows 11. I’ve also noticed my GPU (1660 super, driver v531.41) is not used at all (0% gpu at task manager), shouldn’t utilization jump to 100% when I run a prompt?

The latest CUDA version that PyTorch supports is 11.8 (you installed 12.1), and you can find installation instructions for it here: Start Locally | PyTorch

You can confirm that PyTorch can actually see the installed CUDA toolkit by including this in your script:

import torch

print("CUDA available:", torch.cuda.is_available())
print("Number of GPUs:", torch.cuda.device_count())
if torch.cuda.is_available():
    print("GPU name:", torch.cuda.get_device_name(0))
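On the import error: rather than copying instruct_pipeline.py into the Python Lib folder, it's usually cleaner to keep it next to your own script, or to add its folder to the module search path at runtime. A minimal sketch — the folder path below is a placeholder, replace it with wherever you downloaded Dolly:

```python
import sys

# Folder that contains instruct_pipeline.py.
# NOTE: placeholder path -- change this to your actual Dolly download folder.
DOLLY_DIR = r"C:\path\to\dolly"

# Put the folder at the front of the module search path so the
# "from instruct_pipeline import ..." line below can find the file.
if DOLLY_DIR not in sys.path:
    sys.path.insert(0, DOLLY_DIR)

# With the path set, this import should now resolve:
# from instruct_pipeline import InstructionTextGenerationPipeline
```

If the error persists after this, post the full traceback text (not just a screenshot) so people can see which module is actually failing to import.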