---

tags:
  - image-to-text
  - image-captioning
license: apache-2.0
metrics:
  - rouge
datasets:
  - nlphuji/flickr30k
widget:
  - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/savanna.jpg
    example_title: Savanna
  - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg
    example_title: Football Match
  - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/airport.jpg
    example_title: Airport
base_model:
  - google/vit-base-patch16-224-in21k

model-index:
  - name: mozilla/distilvit
    results:
      - task:
          type: image-to-text
          name: Image To Text
        dataset:
          name: nlphuji/flickr30k
          type: nlphuji/flickr30k
        metrics:
          - name: ROUGE-1
            type: rouge
            value: 43.006
            verified: true
          - name: ROUGE-2
            type: rouge
            value: 16.9939
            verified: true
          - name: ROUGE-L
            type: rouge
            value: 38.8923
            verified: true
          - name: ROUGE-LSUM
            type: rouge
            value: 38.8877
            verified: true
          - name: loss
            type: loss
            value: 0.19939416646957397
          - name: gen_len
            type: gen_len
            value: 11.327256736227712
            verified: true
---


# distilvit

This model is a work in progress. It is a fine-tuned version of the following base models (a sketch of how the pair is assembled follows the list):

- a ViT model for the image encoder: https://huggingface.co/google/vit-base-patch16-224-in21k
- a distilled GPT-2 model (DistilGPT2) for the text decoder: https://huggingface.co/distilbert/distilgpt2
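
This kind of pairing is what the Transformers `VisionEncoderDecoderModel` API provides. Below is a minimal, hypothetical sketch of how such an encoder-decoder pair can be stitched together; it illustrates the general technique, not the exact training setup (the actual training code is in the repository linked further down):

```python
from transformers import (
    AutoTokenizer,
    ViTImageProcessor,
    VisionEncoderDecoderModel,
)

# Pair the ViT encoder with the DistilGPT2 decoder; cross-attention
# layers are added to the decoder automatically.
model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "google/vit-base-patch16-224-in21k", "distilbert/distilgpt2"
)
image_processor = ViTImageProcessor.from_pretrained(
    "google/vit-base-patch16-224-in21k"
)
tokenizer = AutoTokenizer.from_pretrained("distilbert/distilgpt2")

# GPT-2 has no pad token, so reuse EOS for padding, and tell the
# model which tokens start/end/pad generated sequences.
tokenizer.pad_token = tokenizer.eos_token
model.config.decoder_start_token_id = tokenizer.bos_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.eos_token_id = tokenizer.eos_token_id
```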

This model was trained on:

- Flickr30k : https://huggingface.co/datasets/nlphuji/flickr30k
- COCO 2017: https://cocodataset.org

That checkpoint is available at commit 3083a3cef6e3c8dd90df3f088074bbe836b0f403.

It was then further fine-tuned on:

- Flickr30k debiased: https://huggingface.co/datasets/Mozilla/flickr30k-transformed-captions
- DocOrNot: https://huggingface.co/datasets/Mozilla/docornot

You can find the code used to create the model here: https://github.com/mozilla/distilvit
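
For inference, a minimal sketch using the Transformers `image-to-text` pipeline is shown below. The model id comes from the model-index above and the image URL is one of the widget samples on this card; any local path or URL should work:

```python
from transformers import pipeline

# Load the captioning pipeline for this checkpoint.
captioner = pipeline("image-to-text", model="mozilla/distilvit")

# Caption one of the sample widget images from this card.
result = captioner(
    "https://huggingface.co/datasets/mishig/sample_images/resolve/main/savanna.jpg"
)
print(result[0]["generated_text"])
```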

### Framework versions

- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1