BTW, my version.

peft 0.14.0
huggingface-hub 0.27.1
accelerate 1.0.1
bitsandbytes 0.45.1
transformers 4.46.1
torch 2.4.0+cu124

Me too…… Hmmm… here is mine:

peft 0.14.0
huggingface-hub 0.28.1
accelerate 1.3.0
bitsandbytes 0.45.1
transformers 4.48.2
torch 2.1.2
I changed them to your versions; the others don't seem to matter. When I change transformers from 4.48.2 to 4.46.1, the error becomes:
Traceback (most recent call last):
File "/root/miniconda3/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1778, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "/root/miniconda3/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/root/miniconda3/lib/python3.10/site-packages/transformers/models/detr/image_processing_detr_fast.py", line 30, in <module>
from ...image_utils import (
ImportError: cannot import name 'pil_torch_interpolation_mapping' from 'transformers.image_utils' (/root/miniconda3/lib/python3.10/site-packages/transformers/image_utils.py)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/root/autodl-tmp/merge4.py", line 1, in <module>
from transformers import *
File "<frozen importlib._bootstrap>", line 1073, in _handle_fromlist
File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
File "/root/miniconda3/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1767, in __getattr__
value = getattr(module, name)
File "/root/miniconda3/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1766, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/root/miniconda3/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1780, in _get_module
raise RuntimeError(
RuntimeError: Failed to import transformers.models.detr.image_processing_detr_fast because of the following error (look up to see its traceback):
cannot import name 'pil_torch_interpolation_mapping' from 'transformers.image_utils' (/root/miniconda3/lib/python3.10/site-packages/transformers/image_utils.py)
4.46.1 through 4.46.3 give the same result. When I tried 4.47.0, the error went back to "object has no attribute 'merge_and_unload'". There must be some change after 4.47 that creates the conflict.
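For context, a minimal sketch of the kind of call that produces that error. The actual merge4.py is not shown in this thread, so the loading code below is an assumption; only the merge_and_unload call and the resulting AttributeError come from the report above.

from peft import AutoPeftModelForCausalLM

# Hypothetical reconstruction -- the real merge4.py is not shown in the thread.
# AutoPeftModelForCausalLM loads the base model plus the adapter in one call.
model = AutoPeftModelForCausalLM.from_pretrained("P1sc3s007/llm4decompile-pt")

# PeftModel forwards unknown attributes to the wrapped base model; for a
# prompt-tuning adapter that base model is a plain LlamaForCausalLM, which has
# no merge_and_unload, hence: "object has no attribute 'merge_and_unload'".
merged = model.merge_and_unload()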
This is probably the only relevant change in Transformers, and the change itself is nothing special, but what I was curious about was DETR. Why are DETR-related classes being called when the model is actually Llama…?
I was able to reproduce the error on my end, so I'll look into it. I'm not sure what's going on though.
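One possible reason, going only by the traceback above (so treat it as a guess rather than something confirmed here): merge4.py starts with from transformers import *, and with transformers' lazy module layout that wildcard resolves every exported name, including the DETR fast image processor, which is why the DETR module gets imported even though the model is Llama. Importing only what the script actually needs avoids that path entirely, e.g.:

# Instead of `from transformers import *`, import only the names merge4.py needs;
# this never touches transformers.models.detr at all.
from transformers import AutoModelForCausalLM, AutoTokenizer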
Conclusion: it seems the merge cannot be done because the adapter is Prompt Tuning, not LoRA.
from peft import PeftConfig

# Compare the two adapter configs: the first repo turns out to be Prompt Tuning,
# the second is LoRA.
lora_id = "P1sc3s007/llm4decompile-pt"
lora_id2 = "ai-blond/Qwen-Qwen2.5-Coder-1.5B-Instruct-lora"

peft_config = PeftConfig.from_pretrained(lora_id)
peft_config2 = PeftConfig.from_pretrained(lora_id2)

print(lora_id, peft_config)
print(lora_id2, peft_config2)
P1sc3s007/llm4decompile-pt PromptTuningConfig(task_type='CAUSAL_LM', peft_type=<PeftType.PROMPT_TUNING: 'PROMPT_TUNING'>, auto_mapping=None, base_model_name_or_path='LLM4Binary/llm4decompile-1.3b-v1.5', revision=None, inference_mode=True, num_virtual_tokens=8, token_dim=2048, num_transformer_submodules=1, num_attention_heads=16, num_layers=24, prompt_tuning_init='TEXT', prompt_tuning_init_text="What's the souce code of this asm?", tokenizer_name_or_path='LLM4Binary/llm4decompile-1.3b-v1.5', tokenizer_kwargs=None)
ai-blond/Qwen-Qwen2.5-Coder-1.5B-Instruct-lora LoraConfig(task_type='CAUSAL_LM', peft_type=<PeftType.LORA: 'LORA'>, auto_mapping=None, base_model_name_or_path='unsloth/qwen2.5-coder-1.5b-instruct-bnb-4bit', revision=None, inference_mode=True, r=16, target_modules={'down_proj', 'k_proj', 'o_proj', 'q_proj', 'v_proj', 'up_proj', 'gate_proj'}, exclude_modules=None, lora_alpha=16, lora_dropout=0, fan_in_fan_out=False, bias='none', use_rslora=False, modules_to_save=None, init_lora_weights=True, layers_to_transform=None, layers_pattern=None, rank_pattern={}, alpha_pattern={}, megatron_config=None, megatron_core='megatron.core', loftq_config={}, eva_config=None, use_dora=False, layer_replication=None, runtime_config=LoraRuntimeConfig(ephemeral_gpu_offload=False), lora_bias=False)
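Given those two configs, here is a sketch of how one could branch on the adapter type before trying to merge. The guard is my own suggestion, not something from either repo; merge_and_unload only exists for tuner-style adapters such as LoRA.

from peft import AutoPeftModelForCausalLM, PeftConfig, PeftType

adapter_id = "ai-blond/Qwen-Qwen2.5-Coder-1.5B-Instruct-lora"  # or "P1sc3s007/llm4decompile-pt"
cfg = PeftConfig.from_pretrained(adapter_id)

if cfg.peft_type == PeftType.LORA:
    # LoRA weight deltas can be folded back into the base weights.
    model = AutoPeftModelForCausalLM.from_pretrained(adapter_id)
    model = model.merge_and_unload()
    model.save_pretrained("merged-model")
else:
    # Prompt tuning only learns virtual-token embeddings; there are no weight
    # deltas to fold in, so there is nothing to merge -- keep the PeftModel
    # and run inference through it directly.
    print(f"{cfg.peft_type} adapter cannot be merged into the base model")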
OK, so that means merging isn't supported for a prompt-tuning model, only for LoRA. When I use LoRA, it works!
Thank you so much for helping me!