I am loading InstructBLIP based on Vicuna-13B:

```python
import torch
from transformers import InstructBlipForConditionalGeneration

model = InstructBlipForConditionalGeneration.from_pretrained(
    "Salesforce/instructblip-vicuna-13b",
    load_in_4bit=True,
    torch_dtype=torch.bfloat16,
    cache_dir=download_dir,
)
```
However, when I check the parameter count with `model.num_parameters()`, I get ~7B instead of the ~13B I expected. Am I missing something?
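One likely explanation, worth verifying against your setup: with `load_in_4bit=True`, bitsandbytes stores the quantized weights packed two 4-bit values per `uint8` element, and `num_parameters()` counts tensor elements, so it reports roughly half the true count (13B / 2 ≈ 6.5B ≈ the ~7B you see). A minimal stdlib sketch of that packing effect (not the actual bitsandbytes code):

```python
def pack_4bit(values):
    """Pack a list of 4-bit integers (0..15) into bytes, two per byte,
    mimicking how 4-bit quantized weights are stored in uint8 buffers."""
    assert len(values) % 2 == 0, "need an even number of 4-bit values"
    return bytes(
        (values[i] << 4) | values[i + 1] for i in range(0, len(values), 2)
    )

weights = list(range(16)) * 4          # 64 "parameters", each 4 bits wide
packed = pack_4bit(weights)
print(len(weights), len(packed))       # 64 logical weights -> 32 stored bytes

# Counting elements of the packed storage therefore reports half the
# true parameter count, which is consistent with ~13B appearing as ~6.5B.
```

If that is the cause, loading the model without quantization (or counting parameters from the config's architecture) should give the full ~13B figure.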