ZhengPeng7 committed
Commit 18e7b13
1 Parent(s): 6c09ac5

Update README.md

Files changed (1):
1. README.md (+37 −3)

README.md CHANGED
The previous README contained only the `license: mit` front matter; the updated content follows.
---
license: mit
language:
- en
pipeline_tag: image-segmentation
tags:
- dichotomous-image-segmentation
- salient-object-detection
- camouflaged-object-detection
- image-matting
---

> This BiRefNet for camouflaged object detection (COD) is trained on the standard COD training sets: **COD10K-TR and CAMO-TR**, and validated on **COD10K-TE, NC4K, CAMO, and CHAMELEON**.

## This repo holds the official model weights of "[<ins>Bilateral Reference for High-Resolution Dichotomous Image Segmentation</ins>](https://arxiv.org/pdf/2401.03407.pdf)" (_arXiv 2024_).

This repo contains the weights of BiRefNet proposed in our paper, which achieves state-of-the-art (SOTA) performance on three tasks (DIS, HRSOD, and COD).

Visit the GitHub repository for the BiRefNet code and the latest updates: https://github.com/ZhengPeng7/BiRefNet :)
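
If you want a quick local test of these weights, the sketch below is one possible way to do it. It assumes the checkpoint loads through `transformers`' `AutoModelForImageSegmentation` with `trust_remote_code=True`, uses an illustrative repo ID (`ZhengPeng7/BiRefNet_COD`), and assumes the 1024x1024 input size and list-of-logit-maps output convention of the main BiRefNet codebase; see the GitHub repo above for the authoritative usage.

```python
# Minimal inference sketch (assumptions noted above; repo ID is illustrative).
import torch
from PIL import Image
from torchvision import transforms
from transformers import AutoModelForImageSegmentation

device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Assumption: the weights are loadable as a custom transformers model.
model = AutoModelForImageSegmentation.from_pretrained(
    'ZhengPeng7/BiRefNet_COD',  # hypothetical repo ID for the COD weights
    trust_remote_code=True,
).to(device).eval()

# Assumption: 1024x1024 inputs with ImageNet normalization, as in the main codebase.
preprocess = transforms.Compose([
    transforms.Resize((1024, 1024)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

image = Image.open('camouflaged_example.jpg').convert('RGB')
inputs = preprocess(image).unsqueeze(0).to(device)

with torch.no_grad():
    # Assumption: the model returns a list of logit maps; the last one is the final prediction.
    pred = model(inputs)[-1].sigmoid().cpu()[0].squeeze()

# Convert the [0, 1] probability map back to the original image size and save it.
mask = transforms.ToPILImage()(pred).resize(image.size)
mask.save('camouflaged_example_mask.png')
```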


#### Try our online demos for inference:

+ **Inference and evaluation** with your own weights: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1MaEiBfJ4xIaZZn0DqKrhydHB8X97hNXl#scrollTo=DJ4meUYjia6S)
+ **Online inference with a GUI on Hugging Face**, with adjustable resolutions: [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/ZhengPeng7/BiRefNet_demo)

<img src="https://drive.google.com/thumbnail?id=12XmDhKtO1o2fEvBu4OE4ULVB2BK0ecWi&sz=w1080" />

## Citation

```bibtex
@article{zheng2024birefnet,
  title={Bilateral Reference for High-Resolution Dichotomous Image Segmentation},
  author={Zheng, Peng and Gao, Dehong and Fan, Deng-Ping and Liu, Li and Laaksonen, Jorma and Ouyang, Wanli and Sebe, Nicu},
  journal={arXiv},
  year={2024}
}
```