---
license: apache-2.0
---

# ASM-FT Model Card

## Model details

**Model type:**
ASM is a unified vision-language foundation model for open-world panoptic visual recognition and understanding. Aligned with LLMs, it supports versatile generation tasks and demonstrates impressive region comprehension capability.

**Model date:**
ASM was trained in July 2023.

**Paper or resources for more information:**
https://github.com/OpenGVLab/all-seeing
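
The official loading, inference, and evaluation code lives in the GitHub repository above. As a minimal, hedged sketch, a checkpoint hosted on the Hugging Face Hub can be fetched with `huggingface_hub`; the `repo_id` below is an assumption based on this model card and may need to be adjusted to the actual repository name.

```python
# Minimal sketch (not the official loader): download the ASM-FT checkpoint
# files from the Hugging Face Hub. The repo_id is a hypothetical placeholder
# inferred from this model card.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="Weiyun1025/ASM-FT")  # hypothetical repo_id
print(f"Checkpoint files downloaded to: {local_dir}")
```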

## License
ASM is open-sourced under the Apache License 2.0.

**Where to send questions or comments about the model:**
https://github.com/OpenGVLab/all-seeing/issues

## Intended use
**Primary intended uses:**
The primary use of ASM is research on large multimodal models and chatbots.

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## Training dataset
The pretraining phase uses [AS-1B](https://huggingface.co/datasets/Weiyun1025/AS-100M/tree/main) and [Laion-COCO](https://huggingface.co/datasets/laion/laion-coco).

The finetuning phase uses [AS-Core](https://huggingface.co/datasets/Weiyun1025/AS-Core), [RefCOCOg](https://github.com/lichengunc/refer), [VG](https://homes.cs.washington.edu/~ranjay/visualgenome/index.html), [LLaVA-150K](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K), [COCO Caption](https://cocodataset.org/#home), [TextCaps](https://textvqa.org/textcaps/), [VQAv2](https://visualqa.org/), and [GQA](https://cs.stanford.edu/people/dorarad/gqa/).
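
The AS-1B/AS-100M and AS-Core data are hosted on the Hugging Face Hub, so their raw files can be fetched as in the hedged sketch below; converting them into the format expected by the ASM training pipeline is covered by the GitHub repository above.

```python
# Minimal sketch: fetch the Hub-hosted training data referenced above.
# The repo_ids are taken from the links in this card; preprocessing into the
# format expected by the ASM training code is not shown here.
from huggingface_hub import snapshot_download

for repo_id in ["Weiyun1025/AS-100M", "Weiyun1025/AS-Core"]:
    path = snapshot_download(repo_id=repo_id, repo_type="dataset")
    print(f"{repo_id} -> {path}")
```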

## Evaluation dataset
A collection of 4 benchmarks: 2 image captioning benchmarks and 2 region captioning benchmarks.