I'm new to this, and I'm wondering: what are the differences between the model loaded by `clip.load("ViT-B/32")` (from OpenAI's `clip` package, where the model name uses a slash) and the Hugging Face model `openai/clip-vit-base-patch32`?
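Concretely, these are the two loading paths I'm comparing — a minimal sketch, assuming `torch`, `transformers`, and OpenAI's `clip` package (installed from github.com/openai/CLIP) are available, and assuming the two checkpoints should produce matching text embeddings up to small numerical differences between the implementations (the tolerance below is a guess):

```python
import torch
import clip  # OpenAI's reference implementation
from transformers import CLIPModel, CLIPProcessor

device = "cpu"

# OpenAI package: note the slash in the model name.
oai_model, oai_preprocess = clip.load("ViT-B/32", device=device)

# Hugging Face port of (presumably) the same checkpoint.
hf_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device)
hf_processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

texts = ["a photo of a cat", "a photo of a dog"]

with torch.no_grad():
    # OpenAI path: tokenize, then encode (returns projected 512-dim features).
    oai_emb = oai_model.encode_text(clip.tokenize(texts).to(device))
    # Hugging Face path: processor handles tokenization and padding.
    hf_inputs = hf_processor(text=texts, return_tensors="pt", padding=True)
    hf_emb = hf_model.get_text_features(**hf_inputs.to(device))

# Normalize both and compare with a loose tolerance.
oai_emb = oai_emb / oai_emb.norm(dim=-1, keepdim=True)
hf_emb = hf_emb / hf_emb.norm(dim=-1, keepdim=True)
print(torch.allclose(oai_emb.float(), hf_emb.float(), atol=1e-4))
```

Is the Hugging Face model just a converted copy of the OpenAI weights, or do they differ in architecture or preprocessing?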