Commit 873470e by johnnv (parent: 58f60a3): add readme

Files changed (1): README.md (+82 -0)

README.md:
---
license: other
tags:
- vision
- image-segmentation
task_ids:
- semantic-segmentation
datasets:
- lapix/CCAgT
widget:
- src: https://huggingface.co/lapix/segformer-b3-finetuned-ccagt-400-300/resolve/main/sampleA.png
  example_title: Sample A
- src: https://huggingface.co/lapix/segformer-b3-finetuned-ccagt-400-300/resolve/main/sampleB.png
  example_title: Sample B
---

# SegFormer (b3-sized) model fine-tuned on CCAgT dataset

SegFormer model fine-tuned on the CCAgT dataset at resolution 400x300. It was introduced in the paper [Semantic Segmentation for the Detection of Very Small Objects on Cervical Cell Samples Stained with the AgNOR Technique](https://doi.org/10.2139/ssrn.4126881) by [J. G. A. Amorim](https://huggingface.co/johnnv) et al.

This model was trained on a subset of the [CCAgT dataset](https://huggingface.co/datasets/lapix/CCAgT/), so evaluating it on the dataset available on the Hugging Face Hub will give results that differ from those reported in the paper. For more information about how the model was trained, read the paper.
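
If you want to see how your own evaluation compares, a minimal sketch for loading the Hub version of the dataset with the `datasets` library is shown below; the default configuration and the printed schema are assumptions, so consult the [dataset card](https://huggingface.co/datasets/lapix/CCAgT) for the actual splits and column names.

```python
# Minimal sketch (assumption: the Hub dataset loads with its default
# configuration; check the dataset card for the real splits and columns).
from datasets import load_dataset

ccagt = load_dataset("lapix/CCAgT")
print(ccagt)  # inspect the available splits and features before evaluating
```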

Disclaimer: This model card has been written based on the SegFormer [model card](https://huggingface.co/nvidia/mit-b3/blob/main/README.md) by the Hugging Face team.

## Model description

SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.

This repository contains the full fine-tuned model (hierarchical Transformer encoder plus decode head), so it can be used directly for semantic segmentation of CCAgT images.
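
As a quick way to see the two parts described above in this checkpoint, the sketch below (not part of the original card) loads the model and prints the encoder, the decode head, and the label mapping used for CCAgT; the attribute names follow the `transformers` implementation of SegFormer.

```python
# Sketch: inspect the fine-tuned checkpoint's architecture and labels.
from transformers import SegformerForSemanticSegmentation

model = SegformerForSemanticSegmentation.from_pretrained(
    "lapix/segformer-b3-finetuned-ccagt-400-300"
)

print(type(model.segformer.encoder).__name__)  # hierarchical Transformer encoder
print(type(model.decode_head).__name__)        # lightweight all-MLP decode head
print(model.config.id2label)                   # classes the decode head predicts
```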

## Intended uses & limitations

You can use the model for semantic segmentation of AgNOR-stained cervical cell images, such as those in the CCAgT dataset. See the [model hub](https://huggingface.co/models?other=segformer) to look for other fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model to segment an image from the CCAgT dataset:

```python
import requests
from PIL import Image
from torch import nn
from transformers import AutoFeatureExtractor, SegformerForSemanticSegmentation

# Download a sample image from the model repository
url = "https://huggingface.co/lapix/segformer-b3-finetuned-ccagt-400-300/resolve/main/sampleB.png"
image = Image.open(requests.get(url, stream=True).raw)

model = SegformerForSemanticSegmentation.from_pretrained("lapix/segformer-b3-finetuned-ccagt-400-300")
feature_extractor = AutoFeatureExtractor.from_pretrained("lapix/segformer-b3-finetuned-ccagt-400-300")

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits  # shape (batch_size, num_labels, height/4, width/4)

# Rescale logits to the original image size (400, 300)
upsampled_logits = nn.functional.interpolate(
    logits,
    size=image.size[::-1],  # PIL size is (width, height); interpolate expects (height, width)
    mode="bilinear",
    align_corners=False,
)

# Take the most likely class for each pixel
segmentation_mask = upsampled_logits.argmax(dim=1)[0]

print("Predicted mask:", segmentation_mask)
```

For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html).
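
As a simpler alternative that is not covered by the card above, the generic `image-segmentation` pipeline should also work with this checkpoint; treat the snippet below as a sketch, since the exact output format (one binary mask per predicted class) depends on your `transformers` version.

```python
# Sketch: run the same checkpoint through the image-segmentation pipeline,
# which bundles preprocessing, inference, and mask post-processing.
from transformers import pipeline

segmenter = pipeline(
    "image-segmentation",
    model="lapix/segformer-b3-finetuned-ccagt-400-300",
)

url = "https://huggingface.co/lapix/segformer-b3-finetuned-ccagt-400-300/resolve/main/sampleB.png"
for prediction in segmenter(url):
    # each prediction holds a class label and a PIL mask for that class
    print(prediction["label"], prediction["mask"].size)
```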

### License

The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).

### BibTeX entry and citation info

```bibtex
@article{AtkinsonSegmentationAgNORSSRN2022,
  author    = {Jo{\~{a}}o Gustavo Atkinson Amorim and Andr{\'{e}} Vict{\'{o}}ria Matias and Allan Cerentini and Fabiana Botelho de Miranda Onofre and Alexandre Sherlley Casimiro Onofre and Aldo von Wangenheim},
  doi       = {10.2139/ssrn.4126881},
  url       = {https://doi.org/10.2139/ssrn.4126881},
  year      = {2022},
  publisher = {Elsevier {BV}},
  title     = {Semantic Segmentation for the Detection of Very Small Objects on Cervical Cell Samples Stained with the {AgNOR} Technique},
  journal   = {{SSRN} Electronic Journal}
}
```