---
license: apache-2.0
tags:
- vision
---

# ViTMatte model

ViTMatte model trained on the Composition-1k dataset. It was introduced in the paper [ViTMatte: Boosting Image Matting with Pretrained Plain Vision Transformers](https://arxiv.org/abs/2305.15272) by Yao et al. and first released in [this repository](https://github.com/hustvl/ViTMatte).

Disclaimer: The team releasing ViTMatte did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

ViTMatte is a simple approach to image matting, the task of accurately estimating the foreground object in an image. The model consists of a plain Vision Transformer (ViT) backbone with a lightweight matting head on top.

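For a quick look at the architecture, the sketch below (assuming the `transformers` integration of ViTMatte; the checkpoint id is a placeholder, substitute the id of this repository) loads the model and prints its module tree, which shows the ViT backbone followed by the lightweight head:

```python
from transformers import VitMatteForImageMatting

# NOTE: placeholder checkpoint id; substitute the id of this repository.
model = VitMatteForImageMatting.from_pretrained("hustvl/vitmatte-small-composition-1k")

# Printing the module tree shows the plain ViT backbone and the
# lightweight matting head stacked on top of it.
print(model)
```
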
## Intended uses & limitations

You can use the raw model for image matting. See the [model hub](https://huggingface.co/models?search=vitmatte) to look for other fine-tuned versions that may interest you.

### How to use

We refer to the [docs](https://huggingface.co/docs/transformers/main/en/model_doc/vitmatte#transformers.VitMatteForImageMatting.forward.example) for the official example; a minimal sketch is also shown below.

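The sketch assumes the `transformers` integration of ViTMatte; the checkpoint id and file paths are placeholders, so substitute the id of this repository and your own inputs. It runs inference on an RGB image together with a trimap that marks known foreground, known background, and unknown regions:

```python
import torch
from PIL import Image
from transformers import VitMatteImageProcessor, VitMatteForImageMatting

# NOTE: placeholder checkpoint id; substitute the id of this repository.
checkpoint = "hustvl/vitmatte-small-composition-1k"
processor = VitMatteImageProcessor.from_pretrained(checkpoint)
model = VitMatteForImageMatting.from_pretrained(checkpoint)

# Placeholder paths: ViTMatte takes an RGB image plus a single-channel trimap.
image = Image.open("image.png").convert("RGB")
trimap = Image.open("trimap.png").convert("L")

# The processor prepares the image and trimap as model inputs.
inputs = processor(images=image, trimaps=trimap, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Predicted alpha matte of shape (batch_size, 1, height, width).
alphas = outputs.alphas
```

The predicted alpha matte can then be used to composite the foreground onto a new background.
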
### BibTeX entry and citation info

```bibtex
@misc{yao2023vitmatte,
  title={ViTMatte: Boosting Image Matting with Pretrained Plain Vision Transformers},
  author={Jingfeng Yao and Xinggang Wang and Shusheng Yang and Baoyuan Wang},
  year={2023},
  eprint={2305.15272},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```