dumperize committed
Commit 0968074
1 Parent(s): 370c372

Update README.md

Files changed (1)
  1. README.md +10 -119
README.md CHANGED
@@ -9,19 +9,19 @@ widget:
  - src: https://huggingface.co/dumperize/movie-picture-captioning/resolve/main/vertical_15x.jpeg
  example_title: Custom Image Sample 1
  ---
- # Model Card for Model ID
+ # Model Card for movie-picture-captioning
  This model generates descriptions for movie posters ... mm, in principle, for any photo.

- # Model Details
+ # Model Details:

- ## Model Description
+ #### Model Description

  This is an encoder-decoder model based on [VisionEncoderDecoderModel](https://huggingface.co/docs/transformers/model_doc/vision-encoder-decoder).
  [Google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) was used as the encoder and [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) as the decoder.

  We fine-tuned the model on a dataset of movie posters and their descriptions from the Russian service Kinopoisk. The model now generates descriptions in the jargon of blockbusters =).

- ## Model Sources
+ #### Model Sources

  - **Repository:** [github.com/slivka83](https://github.com/slivka83/)
  - **Demo [optional]:** [@MPC_project_bot](https://t.me/MPC_project_bot)
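
To make the encoder/decoder pairing above concrete, here is a sketch of how a ViT encoder and a ruBERT decoder are typically combined with `VisionEncoderDecoderModel` in `transformers`. It illustrates the setup described in the card, not the authors' actual training code, and the special-token assignments are the usual boilerplate for this pairing rather than confirmed values.

```python
# Illustrative sketch only: pairing the ViT encoder with the ruBERT decoder
# via transformers' VisionEncoderDecoderModel, as described in the card.
from transformers import VisionEncoderDecoderModel, ViTImageProcessor, AutoTokenizer

encoder_id = "google/vit-base-patch16-224-in21k"
decoder_id = "DeepPavlov/rubert-base-cased"

model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(encoder_id, decoder_id)
image_processor = ViTImageProcessor.from_pretrained(encoder_id)
tokenizer = AutoTokenizer.from_pretrained(decoder_id)

# Standard seq2seq bookkeeping so generation knows how to start, pad and stop;
# the exact values used for this particular model are an assumption.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.eos_token_id = tokenizer.sep_token_id
```
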
@@ -59,131 +59,22 @@ print([pred.strip() for pred in preds])

  # Bias, Risks, and Limitations

- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ## Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+ Even if the training data used for this model could be characterized as fairly neutral, the model can still produce biased predictions.

  # Training Details

- ## Training Data
+ #### Training Data

  We compiled the dataset from an open source covering all Russian-language films as of October 2022 - [kinopoisk](https://www.kinopoisk.ru/). Films with very short or very long descriptions were excluded, as were films with missing or very small images.

- ### Preprocessing
+ #### Preprocessing

  The model was trained on eight 16 GB V100 GPUs for 90 hours.

  # Evaluation

- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ## Testing Data, Factors & Metrics
-
- ### Testing Data
-
- <!-- This should link to a Data Card if possible. -->
-
- [More Information Needed]
-
- ### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
- ### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ## Results
-
- [More Information Needed]
-
- ### Summary
-
-
-
- # Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- # Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- # Technical Specifications [optional]
-
- ## Model Architecture and Objective
-
- [More Information Needed]
-
- ## Compute Infrastructure
-
- [More Information Needed]
-
- ### Hardware
-
- [More Information Needed]
-
- ### Software
-
- [More Information Needed]
-
- # Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- # Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- # More Information [optional]
-
- [More Information Needed]
-
- # Model Card Authors [optional]
-
- [More Information Needed]
-
- # Model Card Contact
-
- [More Information Needed]
-
- # How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- <details>
- <summary> Click to expand </summary>
+ This model achieved the following result: sacrebleu 6.84.

- [More Information Needed]
+ #### Metrics

- </details>
+ We used the [sacrebleu](https://huggingface.co/spaces/evaluate-metric/sacrebleu) metric.
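
For reference, a sacrebleu score such as the 6.84 reported above can be computed with the Hugging Face `evaluate` wrapper around sacrebleu. The snippet below is a generic sketch with placeholder predictions and references, not the project's actual evaluation script.

```python
# Generic sacrebleu computation with the `evaluate` library (placeholder data).
import evaluate

sacrebleu = evaluate.load("sacrebleu")

predictions = ["сгенерированное описание постера ..."]         # model outputs (placeholders)
references = [["эталонное описание фильма с Кинопоиска ..."]]  # one reference list per prediction

result = sacrebleu.compute(predictions=predictions, references=references)
print(round(result["score"], 2))  # the card reports 6.84
```
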
 
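The second hunk header above shows that the card's own usage snippet ends with `print([pred.strip() for pred in preds])`. For readers who want the gist without expanding the full README, a minimal inference sketch in the same spirit might look like the following; the `poster.jpg` path and generation settings are placeholders, it assumes the repository ships the usual image-processor and tokenizer configs, and the card's actual snippet may differ.

```python
# Minimal captioning sketch for dumperize/movie-picture-captioning (placeholder inputs).
import torch
from PIL import Image
from transformers import VisionEncoderDecoderModel, ViTImageProcessor, AutoTokenizer

model_id = "dumperize/movie-picture-captioning"
model = VisionEncoderDecoderModel.from_pretrained(model_id)
image_processor = ViTImageProcessor.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

image = Image.open("poster.jpg").convert("RGB")  # placeholder path to a poster image
pixel_values = image_processor(images=[image], return_tensors="pt").pixel_values.to(device)

output_ids = model.generate(pixel_values, max_length=128, num_beams=4)
preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
print([pred.strip() for pred in preds])
```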