I'm new to this, so I'd like to ask: what are the differences between the model loaded by `clip.load("ViT-B/32")` (the original OpenAI `clip` package) and the Hugging Face model `openai/clip-vit-base-patch32`?