---
license: mit
datasets:
- ILSVRC/imagenet-1k
- mlfoundations/datacomp_small
base_model:
- openai/clip-vit-large-patch14
---

Model initialized from `openai/clip-vit-large-patch14`. The image encoder is fine-tuned with FARE at $\epsilon = 2/255$.
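
For reference, a rough sketch of the FARE objective (our paraphrase and notation, not necessarily the authors' exact formulation): writing $\phi$ for the image encoder being fine-tuned and $\phi_{\text{org}}$ for the frozen original CLIP encoder, FARE pushes embeddings of adversarially perturbed images to stay close to the original encoder's embeddings of the clean images,

$$
\mathcal{L}_{\text{FARE}}(x) \;=\; \max_{\|z\|_\infty \le \epsilon} \big\| \phi(x + z) - \phi_{\text{org}}(x) \big\|_2^2 ,
$$

where the inner maximization is approximated with a PGD-style attack and $\epsilon = 2/255$ for this checkpoint.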

To load this model, use:

```python
from transformers import CLIPProcessor, CLIPModel

# Robust checkpoint with the FARE-fine-tuned image encoder
model_name = "LEAF-CLIP/CLIP-ViT-L-FARE2"
# Preprocessing comes from the original base model
processor_name = "openai/clip-vit-large-patch14"

model = CLIPModel.from_pretrained(model_name)
processor = CLIPProcessor.from_pretrained(processor_name)
```
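
Once loaded, inference follows the standard Hugging Face CLIP API. A minimal zero-shot classification sketch (the image path and candidate labels below are placeholders):

```python
import torch
from PIL import Image

image = Image.open("example.jpg")  # placeholder: any RGB image
labels = ["a photo of a cat", "a photo of a dog"]  # placeholder labels

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Image-text similarity logits, normalized to probabilities over the labels
probs = outputs.logits_per_image.softmax(dim=-1)
print(probs)
```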