I am new to this. What are the differences between the model used in `clip.load("ViT-B-32")` and the model `openai/clip-vit-base-patch32`?
Related topics

| Topic | Replies | Views | Activity |
|---|---|---|---|
| Using ResNet50 weights inside `CLIPModel` | 0 | 691 | June 23, 2021 |
| CLIP model incorporated in CLIPSeg | 0 | 734 | February 22, 2023 |
| Discrepancy between OpenAI CLIP and Huggingface CLIP models | 2 | 1622 | August 19, 2024 |
| Converting CLIPModel to VisionTextDualEncoderModel | 1 | 159 | March 21, 2024 |
| Load CLIP pretrained model on GPU | 6 | 7921 | March 6, 2024 |