CLIP scores, with vector input rather than image input

I’ve been tearing my hair out a bit trying to understand how the Transformers CLIP model calculates its similarity scores, so that I can reproduce them myself.

When I run the example code:

from PIL import Image
import requests

from transformers import CLIPProcessor, CLIPModel

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)

outputs = model(**inputs)
logits_per_image = outputs.logits_per_image  # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1)  # we can take the softmax to get the label probabilities

I get good similarity scores - and when I use my own categories and my own images, the scores make sense (i.e. significantly higher scores for the categories that match).
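Incidentally, the outputs object from the example above also exposes the projected embeddings directly, which (if I understand correctly) is what my separately generated vector should correspond to:

print(outputs.text_embeds.shape)   # torch.Size([2, 768]) - one row per prompt
print(outputs.image_embeds.shape)  # torch.Size([1, 768]) - one row for the single image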

However, if I then try to work out the similarity scores using a vector I have generated separately, I can’t seem to get the same results. Ideally I’d like to do:

inputs = processor(text=["a photo of a cat", "a photo of a dog"], return_tensors="pt", padding=True)

# here set my own image vector, provided as an array: vector = [0.034, -0.035, 0.01... etc] (size 768)

outputs = model(**inputs)

Any attempts I’ve tried (sketched below) have just produced similarity scores where all the scores are basically the same.
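For context, here is a minimal sketch of what I’ve been attempting, based on my (possibly wrong) reading of how CLIPModel combines the two embeddings - get the projected text embeddings, L2-normalise both sides, and scale the cosine similarities by the model’s logit_scale. my_vector below is just a random placeholder for the 768-dim vector I generate elsewhere:

import torch

# placeholder for the 768-dim image vector I generate separately
my_vector = torch.randn(1, 768)

inputs = processor(text=["a photo of a cat", "a photo of a dog"], return_tensors="pt", padding=True)

with torch.no_grad():
    # projected (but not yet normalised) text embeddings, shape [2, 768]
    text_embeds = model.get_text_features(**inputs)

# L2-normalise both the text embeddings and my image vector
text_embeds = text_embeds / text_embeds.norm(p=2, dim=-1, keepdim=True)
image_embeds = my_vector / my_vector.norm(p=2, dim=-1, keepdim=True)

# cosine similarity scaled by the model's learned temperature
logit_scale = model.logit_scale.exp()
logits_per_image = logit_scale * image_embeds @ text_embeds.t()
probs = logits_per_image.softmax(dim=1)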

Could someone give me a working example of the above, but providing one’s own vector?