Using OWL-ViT embeddings with cosine similarity

Hi,

Is it possible to use OWL-ViT embeddings with cosine similarity, as we do with the CLIP model? That is, I want to extract embeddings from the image and text encoders and compute cosine similarity between them.

What I want:

import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("CLIP.png")).unsqueeze(0).to(device)
text = clip.tokenize(["a diagram", "a dog", "a cat"]).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    
    logits_per_image, logits_per_text = model(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

print("Label probs:", probs)  # prints: [[0.9927937  0.00421068 0.00299572]]

I have tried the following, but the results are very poor:

from transformers import OwlViTProcessor, OwlViTModel
from sentence_transformers import util
from PIL import Image

model_itself = OwlViTModel.from_pretrained('google/owlvit-base-patch32').to('cuda')
processor = OwlViTProcessor.from_pretrained('google/owlvit-base-patch32')

# encode the text queries
text_classes = ["traffic light and car", "a photo of a dog"]
inputs = processor(text=text_classes, return_tensors="pt").to('cuda')
text_embeddings = model_itself.get_text_features(**inputs)

# encode the image
inputs = processor(images=Image.open('an_image_path'), return_tensors="pt").to('cuda')
image_features = model_itself.get_image_features(**inputs)

# rank the text embeddings against the image embedding by cosine similarity
output_st = util.semantic_search(image_features, text_embeddings)
print(text_classes)
print(output_st)

How can I do this, or is it even possible with OWL-ViT?

Keep in mind that the vision encoder works on patch embeddings, so it may not work directly like this.
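
For completeness, the closest CLIP-style variant I can think of is the sketch below: pass text and image through one processor call, take the pooled features, L2-normalize them, and compare with a dot product. The image path is a placeholder, and I am not sure the pooled image features of OWL-ViT are meaningful for whole-image/text matching, which is exactly my question.

import torch
from PIL import Image
from transformers import OwlViTProcessor, OwlViTModel

device = "cuda" if torch.cuda.is_available() else "cpu"
model = OwlViTModel.from_pretrained("google/owlvit-base-patch32").to(device)
processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")

text_classes = ["traffic light and car", "a photo of a dog"]
image = Image.open("an_image_path")  # placeholder path

inputs = processor(text=text_classes, images=image, return_tensors="pt").to(device)

with torch.no_grad():
    text_features = model.get_text_features(
        input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"]
    )
    image_features = model.get_image_features(pixel_values=inputs["pixel_values"])

# L2-normalize, then cosine similarity is a dot product
text_features = text_features / text_features.norm(dim=-1, keepdim=True)
image_features = image_features / image_features.norm(dim=-1, keepdim=True)

cosine_sim = image_features @ text_features.T  # shape: [1, num_texts]
print(text_classes)
print(cosine_sim.cpu().numpy())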