ai-forever committed
Commit 67158dc • 1 Parent(s): d3c3beb

Create README.md

Files changed (1): README.md (+40, -0)

# RuDOLPH-2.7B (XL)

RuDOLPH: One Hyper-Modal Transformer can be as creative as DALL-E and as smart as CLIP

<img src="https://raw.githubusercontent.com/sberbank-ai/ru-dolph/master/pics/rudolph-generated.png" height="60" border="2"/>

The model was trained by the [Sber AI](https://github.com/sberbank-ai) and [SberDevices](https://sberdevices.ru/) teams.
* Tasks: `text2image generation`; `self reranking`; `text ranking`; `image ranking`; `image2text generation`; `zero-shot image classification`; `text2text generation`; `text-qa`; `math-qa`; `image captioning`; `image generation`; `text-in-the-wild`; `vqa`
* Language: `Russian`
* Type: `decoder`
* Num Parameters: `2.7B`
* Training Data Volume: `119 million text-image pairs; 60 million text paragraphs; 43,334 text question-answer pairs; 100,000 math tasks; 85,000 text-image pairs (for captioning, generation); 85,759 visual question-answer pairs; 140,000 image-text pairs for text recognition`

# Model Description

**Ru**ssian **D**iffusion **O**n **L**anguage **P**icture **H**yper-modality (RuDOLPH) 2.7B is a fast and light text-image-text transformer designed for quick and easy fine-tuning on a wide range of tasks: from generating images from text descriptions and image classification to visual question answering and more. This model demonstrates the power of hyper-modality transformers.

*(!) Hyper-modality means generalized multi-modality: for example, a model that consists of two multi-modal parts, text-to-image and image-to-text, becomes a text-and-image hyper-modal model.*

This is a fine-tuned version of the pre-trained RuDOLPH 2.7B model.

The model was prepared as a baseline for AI Journey 2022 (AIJ2) and fine-tuned on 6 tasks:

* Text QA – the SberQuAD dataset.
* Math QA – the DeepMind Mathematics Dataset.
* Captioning – the COCO dataset.
* VQA – the COCO dataset with a prepared question set.
* Generation – the COCO dataset.
* Text-in-the-wild – synthesized data.
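
Below is a minimal loading sketch, assuming the `rudolph` and `rudalle` packages from the [ru-dolph](https://github.com/sberbank-ai/ru-dolph) repository. The `'2.7B'` checkpoint identifier and the keyword arguments are assumptions, not confirmed by this card; consult the repository README for the exact, current API.

```python
# Hypothetical loading sketch based on the sberbank-ai/ru-dolph repository.
# The '2.7B' checkpoint name and keyword arguments are assumptions; check the
# repository README for the exact API before use.
import torch
from rudolph.model import get_rudolph_model   # RuDOLPH decoder weights
from rudalle import get_tokenizer, get_vae    # tokenizer and VQ-VAE come from ru-dalle

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = get_rudolph_model('2.7B', fp16=True, device=device)  # assumed checkpoint id
tokenizer = get_tokenizer()
vae = get_vae(dwt=False).to(device)           # decodes image codebooks back to pixels
```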

# Sparse Attention Mask

The primary proposed method is to modify the sparse transformer's attention mask to better control multi-modality and take it to the next level, "hyper-modality". This allows modality transitions to be modeled in both directions, unlike similar work such as the DALL-E transformer, which uses only one direction, "text to image". The proposed "image to right text" direction is achieved by extending the sparse attention mask to the right, enabling autoregressive text generation conditioned on both the image and the left text.

![rudolph27b_masks.png](https://s3.amazonaws.com/moonup/production/uploads/1663662426135-5f91b1208a61a359f44e1851.png)
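
To make the layout concrete, here is a toy sketch of a combined causal mask over a `[left text | image | right text]` token sequence. The function name, segment lengths, and the use of a dense mask are illustrative assumptions only; the actual model additionally applies row/column sparse patterns inside the image block, which are omitted here.

```python
# Toy illustration (not the model's actual implementation) of the hyper-modal
# attention layout over one sequence: [left text | image | right text].
# mask[i, j] == True means token i may attend to token j.
import torch

def hyper_modal_mask(n_left_text: int, n_image: int, n_right_text: int) -> torch.Tensor:
    n = n_left_text + n_image + n_right_text
    mask = torch.zeros(n, n, dtype=torch.bool)
    lt, img = n_left_text, n_left_text + n_image

    # Left text: causal over itself.
    mask[:lt, :lt] = torch.tril(torch.ones(lt, lt)).bool()

    # Image: sees all of the left text ("text to image") and is causal over its
    # own tokens (the real model uses sparse row/column patterns in this block).
    mask[lt:img, :lt] = True
    mask[lt:img, lt:img] = torch.tril(torch.ones(n_image, n_image)).bool()

    # Right text: sees the left text and the full image ("image to right text")
    # and is causal over itself, so captions or answers can be generated
    # autoregressively conditioned on both the image and the left text.
    mask[img:, :img] = True
    mask[img:, img:] = torch.tril(torch.ones(n_right_text, n_right_text)).bool()
    return mask

print(hyper_modal_mask(n_left_text=3, n_image=4, n_right_text=3).int())
```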

# Authors

+ Alex Shonenkov: [Github](https://github.com/shonenkov), [Kaggle GM](https://www.kaggle.com/shonenkov)