Langchain not changing pipeline's model to Llama-2-7b-hf


I am trying to run meta-llama/Llama-2-7b-hf in LangChain with a HuggingFacePipeline. My set-up is below.

Why is the llm loaded with the gpt2 model? I believe gpt2 is the default for HuggingFacePipeline(), but I am passing a model loaded with transformers.AutoModelForCausalLM.from_pretrained() from meta-llama/Llama-2-7b-hf…

What am I doing wrong?

import torch
import transformers
from langchain.llms import HuggingFacePipeline

# 4-bit NF4 quantization with double quantization and bfloat16 compute
bnb_config = transformers.BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type='nf4',
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    # llm_int8_enable_fp32_cpu_offload=True
)


model_config = transformers.AutoConfig.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    use_auth_token=hf_auth  # Hugging Face access token, set earlier in my script
)

tokenizer = transformers.AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

model = transformers.AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    trust_remote_code=True,
    config=model_config,
    quantization_config=bnb_config,
    device_map="auto",
    use_auth_token=hf_auth
)
model.eval()


pipe = transformers.pipeline(
    model=model,
    tokenizer=tokenizer,
    task="text-generation",
    return_full_text=True, 
    temperature=0.1,  # 'randomness' of outputs, 0.0 is the min and 1.0 the max
    max_new_tokens=64,  # max number of tokens to generate in the output
    repetition_penalty=1.1  # without this output begins repeating
)


llm = HuggingFacePipeline(pipeline=pipe)

print(llm)

>>>HuggingFacePipeline
Params: {'model_id': 'gpt2', 'model_kwargs': None}
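
For reference, here is a minimal sketch of how the wrapped model can be inspected directly, independent of the model_id string printed above (this assumes the pipe and llm objects from my code; _name_or_path is where transformers stores the checkpoint name on the model config):

# Assumes `pipe` and `llm` from the code above
print(pipe.model.config._name_or_path)          # model the transformers pipeline actually holds
print(llm.pipeline.model.config._name_or_path)  # same model, reached through the LangChain wrapper

I would expect both lines to show meta-llama/Llama-2-7b-hf if the pipeline really is using the model I passed in.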