lvrgb777 committed on
Commit d171f6f · 1 Parent(s): 45fe2c9

Update STPoseNet/README.md

Files changed (1)
  1. STPoseNet/README.md +9 -20
STPoseNet/README.md CHANGED
@@ -19,13 +19,11 @@ Keypoints identification program for STposeNet
  ```
  ### **Dataset preparation**
 
- In this folder [Neurofinder](https://github.com/XZH-James/NeuroSeg2/tree/main/NeuroSeg%E2%85%A1-main/NeuroSeg%E2%85%A1-main/Neurofinder), We provide two images for testing.
- * In this folder, [leftImg8bit](https://github.com/XZH-James/NeuroSeg2/tree/main/NeuroSeg%E2%85%A1-main/NeuroSeg%E2%85%A1-main/Neurofinder/test/leftImg8bit) stores the two-photon calcium imaging and [gtFine](https://github.com/XZH-James/NeuroSeg2/tree/main/NeuroSeg%E2%85%A1-main/NeuroSeg%E2%85%A1-main/Neurofinder/test/gtFine) stores the corresponding GT.
- * [generate_dataset.py](https://github.com/XZH-James/NeuroSeg2/blob/main/NeuroSeg%E2%85%A1-main/NeuroSeg%E2%85%A1-main/Neurofinder/generate_dataset.py) is used to generate [image list](https://github.com/XZH-James/NeuroSeg2/tree/main/NeuroSeg%E2%85%A1-main/NeuroSeg%E2%85%A1-main/Neurofinder/imglists). After adding new images, run this code to generate the list for training and test code can read new images.
+ In the [datasets](https://huggingface.co/lvrgb777/STPoseNet/tree/main/dataset) folder, we provide the training dataset, test images, and a test video.
 
- ### **Model preparation**
+ ### **Pretrained weight preparation**
 
- This folder [models](https://github.com/XZH-James/NeuroSeg2/tree/main/NeuroSeg%E2%85%A1-main/NeuroSeg%E2%85%A1-main/models) is used to store the pretrained model and the test model. The test model can be downloaded from our [huggingface](https://huggingface.co/XZH-James/NeuroSeg2/tree/main).
+ We provide the pretrained weight in the file [yolov8l_pose_mouse_com.pt](https://huggingface.co/lvrgb777/STPoseNet/blob/main/yolov8l_pose_mouse_com.pt).
 
  ### **training or testing**
 
@@ -33,23 +31,14 @@ After abtaining the dataset and model, running [test.py](https://github.com/XZH-
  Running [train.py](https://github.com/XZH-James/NeuroSeg2/blob/main/NeuroSeg%E2%85%A1-main/NeuroSeg%E2%85%A1-main/train.py) to train the new dataset.
 
  ### **The result**
-
- [logs/evalution](https://github.com/XZH-James/NeuroSeg2/tree/main/NeuroSeg%E2%85%A1-main/NeuroSeg%E2%85%A1-main/logs/evalution/Neurofinder) contains the results of the neurons segmentation of NeuroSeg-II.
- * [plt/difference](https://github.com/XZH-James/NeuroSeg2/tree/main/NeuroSeg%E2%85%A1-main/NeuroSeg%E2%85%A1-main/logs/evalution/Neurofinder/plt/difference) stores the segmented image by NeuroSeg-II.
- * [evaluation_log.csv](https://github.com/XZH-James/NeuroSeg2/blob/main/NeuroSeg%E2%85%A1-main/NeuroSeg%E2%85%A1-main/logs/evalution/Neurofinder/evaluation_log.csv) is the score for this test.
+ 2. Run `./tool/min_img_label.py` to perform data augmentation.
+ 3. Run `mouse-train.py` to train weights on your experimental data.
+ 4. Run `mouse-pre` to predict the results.
 
  ## Other matters
-
- ### **Core code**
- In [neuroseg2](https://github.com/XZH-James/NeuroSeg2/tree/main/NeuroSeg%E2%85%A1-main/NeuroSeg%E2%85%A1-main/neuroseg2) are the core code of NeuroSeg-II.
- * [model.py](https://github.com/XZH-James/NeuroSeg2/blob/main/NeuroSeg%E2%85%A1-main/NeuroSeg%E2%85%A1-main/neuroseg2/model.py) and [utils.py](https://github.com/XZH-James/NeuroSeg2/blob/main/NeuroSeg%E2%85%A1-main/NeuroSeg%E2%85%A1-main/neuroseg2/utils.py) are the code of overall structure.
- * [Down.py](https://github.com/XZH-James/NeuroSeg2/blob/main/NeuroSeg%E2%85%A1-main/NeuroSeg%E2%85%A1-main/neuroseg2/Down.py) is the code of FPN+.
- * [attention.py](https://github.com/XZH-James/NeuroSeg2/blob/main/NeuroSeg%E2%85%A1-main/NeuroSeg%E2%85%A1-main/neuroseg2/attention.py) is the code of attention mechanism.
- * [visualize.py](https://github.com/XZH-James/NeuroSeg2/blob/main/NeuroSeg%E2%85%A1-main/NeuroSeg%E2%85%A1-main/neuroseg2/visualize.py) is the code for visual segmentation result.
-
- ### **Code of preprocessing**
- In [utilities](https://github.com/XZH-James/NeuroSeg2/tree/main/NeuroSeg%E2%85%A1-main/NeuroSeg%E2%85%A1-main/utilities) are the code for preprocessing.
+ ### **Core module code location**
+ The main modifications based on YOLOv8 are located in `./ultralytics/engine/predictor`.
 
  ## Contact information
 
- If you have any questions about this project, please feel free to contact us. Email address: zhehao_xu@qq.com
+ If you have any questions about this project, please feel free to contact us. Email address: 2245162223@qq.com
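
For reference, here is a minimal sketch of how the [dataset](https://huggingface.co/lvrgb777/STPoseNet/tree/main/dataset) folder linked in the diff could be fetched programmatically with `huggingface_hub`; the repo id comes from the diff, while the local target directory is an arbitrary choice.

```python
# Sketch: download only the dataset folder of the STPoseNet repo from the Hugging Face Hub.
# Requires `pip install huggingface_hub`; `local_dir` is an illustrative path, not from the repo.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="lvrgb777/STPoseNet",
    allow_patterns=["dataset/*"],  # restrict the download to the dataset folder
    local_dir="STPoseNet_data",    # hypothetical local target directory
)
print("Dataset files downloaded to:", local_path)
```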
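
Similarly, a hedged sketch of fetching the pretrained weight referenced in the diff and loading it with the `ultralytics` package, assuming it is a standard YOLOv8 pose checkpoint as the filename suggests:

```python
# Sketch: download the pretrained pose weight and load it with Ultralytics YOLO.
# Requires `pip install ultralytics huggingface_hub`; the filename comes from the diff.
from huggingface_hub import hf_hub_download
from ultralytics import YOLO

weight_path = hf_hub_download(
    repo_id="lvrgb777/STPoseNet",
    filename="yolov8l_pose_mouse_com.pt",
)
model = YOLO(weight_path)  # load the checkpoint
print(model.task)          # a pose checkpoint is expected to report "pose"
```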
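
The numbered steps added under "The result" refer to repo scripts (`./tool/min_img_label.py`, `mouse-train.py`, `mouse-pre`) whose contents are not shown in this diff. The sketch below only illustrates the kind of Ultralytics train/predict calls such scripts typically wrap; the `data.yaml` path, the test video name, and the training settings are illustrative assumptions.

```python
# Sketch of a YOLOv8 pose train/predict cycle (not the repo's actual scripts).
# `dataset/data.yaml`, `dataset/test.mp4`, epochs and imgsz are illustrative assumptions.
from ultralytics import YOLO

# Fine-tune from the provided pretrained pose weight.
model = YOLO("yolov8l_pose_mouse_com.pt")
model.train(data="dataset/data.yaml", epochs=100, imgsz=640)

# Run inference on a test video and save the annotated output.
results = model.predict(source="dataset/test.mp4", save=True)
for r in results:
    print(r.keypoints)  # per-frame keypoint predictions
```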