Lewislou committed
Commit ebe155d
1 Parent(s): 0ca2a11

Update README.md

Files changed (1):
  1. README.md +15 -69
README.md CHANGED
@@ -1,72 +1,18 @@
- # Solution of Team Sribd-med for NeurIPS-CellSeg Challenge
- This repository provides the solution of team Sribd-med for the [NeurIPS-CellSeg](https://neurips22-cellseg.grand-challenge.org/) Challenge. The details of our method are described in our paper [Multi-stream Cell Segmentation with Low-level Cues for Multi-modality Images]. Some parts of the code are adapted from the baseline code in the [NeurIPS-CellSeg-Baseline](https://github.com/JunMa11/NeurIPS-CellSeg) repository.
-
- You can reproduce our method step by step as follows:
-
- ## Environments and Requirements
- Install the requirements with:
-
- ```shell
- python -m pip install -r requirements.txt
- ```
-
- ## Dataset
- The competition training and tuning data can be downloaded from https://neurips22-cellseg.grand-challenge.org/dataset/
- In addition, you can download three published datasets from the following links:
- - Cellpose: https://www.cellpose.org/dataset
- - Omnipose: http://www.cellpose.org/dataset_omnipose
- - Sartorius: https://www.kaggle.com/competitions/sartorius-cell-instance-segmentation/overview
-
20
- ## Automatic cell classification
21
- You can classify the cells into four classes in this step.
22
- Put all the images (competition + Cellpose + Omnipose + Sartorius) in one folder (data/allimages).
23
- Run classification code:
24
-
25
- ```shell
26
- python classification/unsup_classification.py
27
- ```
28
- The results can be stored in data/classification_results/
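If the unsupervised classification writes one subfolder per class (class0 … class3) under data/classification_results/ — an assumption inferred from the fine-tuning paths used later in this README — a quick sanity check of the resulting split could look like this sketch:

```python
from pathlib import Path

def count_per_class(root):
    """Count image files in each class subfolder (e.g. class0 ... class3)."""
    counts = {}
    for sub in sorted(Path(root).glob("class*")):
        if sub.is_dir():
            counts[sub.name] = sum(
                1 for f in sub.iterdir()
                if f.suffix.lower() in {".png", ".tif", ".tiff", ".jpg", ".bmp"}
            )
    return counts

if __name__ == "__main__":
    # Prints a per-class image count so empty or lopsided classes
    # can be spotted before any model training starts.
    print(count_per_class("data/classification_results"))
```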
-
- ## CNN-based classification model training
- Using the classified images in data/classification_results/, a ResNet18 classifier is trained:
- ```shell
- python classification/train_classification.py
- ```
- ## Segmentation Training
- Pre-train convnext-stardist using all the images (data/allimages):
- ```shell
- python train_convnext_stardist.py
- ```
- For classes 0, 1 and 2, fine-tune the pretrained model on the corresponding classified data (class 1 shown as an example):
- ```shell
- python finetune_convnext_stardist.py model_dir=(The pretrained convnext-stardist model) data_dir='data/classification_results/class1'
- ```
- For class 3, train convnext-hover from scratch using the classified class 3 data:
- ```shell
- python train_convnext_hover.py data_dir='data/classification_results/class3'
- ```
-
- In total, four segmentation models are trained.
50
-
51
- ## Trained models
52
- The models can be downloaded from this link:
53
- https://drive.google.com/drive/folders/1MkEOpgmdkg5Yqw6Ng5PoOhtmo9xPPwIj?usp=sharing
54
-
55
- ## Inference
56
- The inference process includes classification and segmentation.
57
- ```shell
58
- python predict.py -i input_path -o output_path --model_path './models'
59
- ```
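The two-stage flow — classify each image's modality, then hand it to the segmentation model trained for that class — can be sketched as below. The function and model names here are hypothetical stand-ins, not the actual API of predict.py:

```python
def run_inference(images, classifier, models):
    """Route each image through the classifier, then the matching segmenter.

    images:     dict mapping image name -> image data
    classifier: callable, image -> class id (0..3)
    models:     dict mapping class id -> segmentation callable
    """
    results = {}
    for name, img in images.items():
        cls = classifier(img)             # stage 1: pick the modality class
        results[name] = models[cls](img)  # stage 2: class-specific segmentation
    return results
```

With the four trained segmentation models loaded into `models`, the same loop covers every input modality.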
-
- ## Evaluation
- Calculate the F-score for evaluation:
- ```shell
- python compute_metric.py --gt_path path_to_labels --seg_path output_path
- ```
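compute_metric.py is the authoritative scorer; as a rough, self-contained illustration of what an instance-level F-score measures, here is a greedy IoU-matching sketch (the IoU-0.5 matching criterion is my assumption about the challenge metric):

```python
import numpy as np

def instance_f1(gt, seg, iou_thr=0.5):
    """F1 between two instance label maps (0 = background).

    A predicted instance matches a ground-truth instance when their
    IoU exceeds `iou_thr`; each prediction is matched at most once.
    """
    gt_ids = [i for i in np.unique(gt) if i != 0]
    seg_ids = [i for i in np.unique(seg) if i != 0]
    matched, tp = set(), 0
    for g in gt_ids:
        gmask = gt == g
        best, best_iou = None, iou_thr
        for s in seg_ids:
            if s in matched:
                continue
            smask = seg == s
            inter = np.logical_and(gmask, smask).sum()
            union = np.logical_or(gmask, smask).sum()
            iou = inter / union if union else 0.0
            if iou > best_iou:
                best, best_iou = s, iou
        if best is not None:
            matched.add(best)
            tp += 1
    fp = len(seg_ids) - tp   # unmatched predictions
    fn = len(gt_ids) - tp    # missed ground-truth instances
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0
```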
-
- ## Results
- The tuning set F1 score of our method is 0.8795. The running time of our method on all 101 tuning-set cases incurred zero time penalty on our local workstation.
-
- ## Acknowledgement
- We thank the contributors of the public datasets.
 
 
+ ---
+ # Example metadata to be added to a model card.
+ # Full model card template at https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md
+ language:
+ - {en} # Example: en
+ license: {apache-2.0} # Example: apache-2.0 or any license from https://hf.co/docs/hub/repositories-licenses
+ library_name: {library_name} # Optional. Example: keras or any library from https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Libraries.ts
+ tags:
+ - {cell segmentation} # Example: audio
+ - {stardist} # Example: automatic-speech-recognition
+ - {hover-net} # Example: speech # Example to specify a library: allennlp
+ datasets:
+ - {nips_cell_seg} # Example: common_voice. Use dataset id from https://hf.co/datasets
+ metrics:
+ - {f-score} # Example: wer. Use metric id from https://hf.co/metrics
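Filled in for this repository, the front matter might look like the sketch below. The license, library name, dataset id, and metric id here are assumptions to be replaced with the real values, and note that Hugging Face front matter is closed by a second `---` line:

```yaml
---
language:
- en
license: apache-2.0        # assumption: use the license that actually applies
library_name: pytorch      # assumption
tags:
- cell-segmentation
- stardist
- hover-net
datasets:
- nips_cell_seg            # hypothetical dataset id
metrics:
- f1
---
```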