# nmndeep/clic-vit-l-14-336-redcaps
Feature Extraction · Transformers · Safetensors · clip_vision_model · arxiv:1910.09700
1 contributor · History: 2 commits

Latest commit: nmndeep, "Initial release of converted ViT-L-14-336 CLIP Vision Model from OpenCLIP" (f575109, verified, about 1 month ago)
| File | Size | Last commit message | When |
| --- | --- | --- | --- |
| .gitattributes | 1.52 kB | initial commit | about 1 month ago |
| README.md | 5.17 kB | Initial release of converted ViT-L-14-336 CLIP Vision Model from OpenCLIP | about 1 month ago |
| config.json | 567 Bytes | Initial release of converted ViT-L-14-336 CLIP Vision Model from OpenCLIP | about 1 month ago |
| model.safetensors (LFS) | 1.21 GB | Initial release of converted ViT-L-14-336 CLIP Vision Model from OpenCLIP | about 1 month ago |
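The repo name encodes the architecture: a ViT-L/14 vision tower run at 336×336 input resolution. As a quick sanity check on the sizes that name implies (a sketch assuming the standard CLIP patch embedding with a prepended class token; the authoritative values live in `config.json`):

```python
# Dimensions implied by "ViT-L-14-336" (assumption: standard CLIP
# vision tower layout; check config.json for the exact settings).
image_size = 336    # input resolution in pixels
patch_size = 14     # side length of each non-overlapping patch
hidden_size = 1024  # ViT-Large embedding width

patches_per_side = image_size // patch_size  # 336 / 14 = 24
num_patches = patches_per_side ** 2          # 24 * 24 = 576
seq_len = num_patches + 1                    # +1 class token = 577

print(f"{patches_per_side=} {num_patches=} {seq_len=}")
# -> patches_per_side=24 num_patches=576 seq_len=577
```

At roughly 1024 floats per token over 577 tokens, the per-image feature map is small; the 1.21 GB `model.safetensors` file is dominated by the transformer weights themselves.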