Update README.md
README.md CHANGED
@@ -6,7 +6,7 @@ license: mit
 ## Model Description
 - **Homepage:** https://imirandam.github.io/BiVLC_project_page/
 - **Repository:** https://github.com/IMirandaM/BiVLC
-- **Paper:**
+- **Paper:** https://arxiv.org/abs/2406.09952
 - **Point of Contact:** [Imanol Miranda](mailto:imanol.miranda@ehu.eus)
 ### Model Summary
 CLIP_COCO is a model presented in the [BiVLC](https://github.com/IMirandaM/BiVLC) paper for experimentation. It has been fine-tuned with the OpenCLIP framework, using the CLIP ViT-B-32 model pre-trained by 'openai' as its basis. The idea behind this fine-tuning is to provide a baseline against which to compare the [CLIP_TROHN-Text](https://huggingface.co/imirandam/CLIP_TROHN-Text) and [CLIP_TROHN-Img](https://huggingface.co/imirandam/CLIP_TROHN-Img) models. Hyperparameters:
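As an illustration of the summary above (not part of the original card), here is a minimal sketch of loading and querying an OpenCLIP ViT-B-32 checkpoint such as this one. It assumes the weights are published in OpenCLIP's `hf-hub:` format under `imirandam/CLIP_COCO`; if the repository instead ships a raw checkpoint file, the state dict would need to be loaded into a ViT-B-32 model manually.

```python
# Minimal sketch: zero-shot image-text matching with an OpenCLIP ViT-B-32 checkpoint.
# The repo id "hf-hub:imirandam/CLIP_COCO" is an assumption about how the
# fine-tuned weights are published; adjust if the checkpoint is distributed differently.
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms("hf-hub:imirandam/CLIP_COCO")
tokenizer = open_clip.get_tokenizer("hf-hub:imirandam/CLIP_COCO")
model.eval()

image = preprocess(Image.open("example.jpg")).unsqueeze(0)   # 1 x 3 x 224 x 224
text = tokenizer(["a photo of a dog", "a photo of a cat"])   # tokenized prompts

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Standard CLIP scoring: cosine similarity between normalized embeddings
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # probability of each caption matching the image
```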