chenlin committed 60d2df3 (1 parent: f87808b)
Files changed (1):
  1. README.md +30 -1
README.md CHANGED
@@ -1,3 +1,32 @@
  ---
- license: apache-2.0
  ---
  ---
+ inference: false
  ---
+ <br>
+ <br>
+
+ # ShareCaptioner Model Card
+
+ ## Model details
+
+ **Model type:**
+ ShareCaptioner is an open-source captioner fine-tuned on GPT4-Vision-assisted [ShareGPT4V](https://huggingface.co/datasets/Lin-Chen/ShareGPT4V) detailed caption data with a resolution of 448x448. ShareCaptioner is based on the improved [InternLM-Xcomposer-7B](https://github.com/InternLM/InternLM-XComposer) base model.
+
+ **Model date:**
+ ShareCaptioner was trained in Nov 2023.
+
+ **Paper or resources for more information:**
+ [[Project](https://ShareGPT4V.github.io/)] [[Paper](https://huggingface.co/papers/2311.12793)] [[Code](https://github.com/InternLM/InternLM-XComposer/tree/main/projects/ShareGPT4V)]
+
+ ## License
+ Llama 2 is licensed under the LLAMA 2 Community License,
+ Copyright (c) Meta Platforms, Inc. All Rights Reserved.
+
+ ## Intended use
+ **Primary intended uses:**
+ The primary use of ShareCaptioner is to produce high-quality image captions.
+
+ **Primary intended users:**
+ The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
+
+ ## Finetuning dataset
+ - 100K GPT4-Vision-generated image-text pairs
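
The 448x448 resolution noted under Model details implies that input images are resized to a fixed square before inference. A minimal preprocessing sketch, assuming a plain Pillow resize — the function name and the bicubic interpolation choice are illustrative assumptions, not taken from the ShareCaptioner code:

```python
from PIL import Image

def to_model_resolution(image: Image.Image, size: int = 448) -> Image.Image:
    # Convert to RGB and resize to the square input resolution the
    # model card states (448x448). Interpolation choice is an assumption.
    return image.convert("RGB").resize((size, size), Image.BICUBIC)

# Dummy image standing in for a real photo.
img = Image.new("RGB", (1024, 768))
resized = to_model_resolution(img)
print(resized.size)  # (448, 448)
```

Aspect ratio is not preserved here; a real pipeline might letterbox or center-crop instead, depending on how the model was trained.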