RE_UPLOAD-REBUILD-RESTART
model/layout-model-training/README.md
ADDED

# Scripts for training Layout Detection Models using Detectron2

## Usage

### Directory Structure

- In `tools/`, we provide a series of handy scripts for converting data formats and training the models.
- In `scripts/`, we list the specific commands for processing each supported dataset.
- The `configs/` folder contains the configurations for the different deep learning models, organized by dataset.

### How to train the models?

- Get the dataset and annotations -- if you are not sure, feel free to check [this tutorial](https://github.com/Layout-Parser/layout-parser/tree/main/examples/Customizing%20Layout%20Models%20with%20Label%20Studio%20Annotation).
- Duplicate and modify the config files and training scripts:
  - For example, you might want to copy [`configs/prima/fast_rcnn_R_50_FPN_3x.yaml`](configs/prima/fast_rcnn_R_50_FPN_3x.yaml) to `configs/your-dataset-name/fast_rcnn_R_50_FPN_3x.yaml`, and create your own `scripts/train_<your-dataset-name>.sh` based on [`scripts/train_prima.sh`](scripts/train_prima.sh).
  - You'll modify the `--dataset_name`, `--json_annotation_train`, `--image_path_train`, `--json_annotation_val`, `--image_path_val`, and `--config-file` args appropriately (a sketch of such a script follows this list).
- If you have a dataset with segmentation masks, you can try to train with the [`mask_rcnn` model](configs/prima/mask_rcnn_R_50_FPN_3x.yaml); otherwise you might want to start with the [`fast_rcnn` model](configs/prima/fast_rcnn_R_50_FPN_3x.yaml).
  - If you see the error `AttributeError: Cannot find field 'gt_masks' in the given Instances!` during training, your dataset does not contain segmentation masks, so you should not use the `mask_rcnn` config; switch to the `fast_rcnn` config instead.

## Supported Datasets

- Prima Layout Analysis Dataset [`scripts/train_prima.sh`](https://github.com/Layout-Parser/layout-model-training/blob/master/scripts/train_prima.sh)
  - You will need to download the dataset from the [official website](https://www.primaresearch.org/dataset/) and put it in the `data/prima` folder.
  - As the original dataset is stored in the [PAGE format](https://www.primaresearch.org/tools/PAGEViewer), the script will use [`tools/convert_prima_to_coco.py`](https://github.com/Layout-Parser/layout-model-training/blob/master/tools/convert_prima_to_coco.py) to convert it to COCO format (a usage sketch follows the directory tree below).
  - The final dataset folder structure should look like:
    ```bash
    data/
    └── prima/
        ├── Images/
        ├── XML/
        ├── License.txt
        └── annotations*.json
    ```
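
The conversion step invoked by `scripts/train_prima.sh` boils down to something like the call below. The flag names shown are assumptions based on a typical invocation of the converter; run `python tools/convert_prima_to_coco.py --help` to confirm them.

```bash
# Hypothetical invocation of the PAGE-to-COCO converter, run from the repo root.
# Flag names are assumptions -- check the script's --help output before use.
python tools/convert_prima_to_coco.py \
    --prima_datapath data/prima \
    --anno_savepath  data/prima/annotations.json
```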

## Reference

- **[cocosplit](https://github.com/akarazniewicz/cocosplit)** A script that splits the COCO annotations into train and test sets (an example invocation is sketched below).
- **[Detectron2](https://github.com/facebookresearch/detectron2)** Detectron2 is Facebook AI Research's next generation software system that implements state-of-the-art object detection algorithms.
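
For instance, splitting a COCO annotation file 80/20 into train and test sets with cocosplit typically looks like the command below; treat the exact flags and the file names as assumptions and double-check them against the cocosplit README.

```bash
# Split data/prima/annotations.json into train/test subsets (file names are illustrative;
# flags are as recalled from the cocosplit README -- verify before use).
python cocosplit.py --having-annotations -s 0.8 \
    data/prima/annotations.json \
    data/prima/annotations-train.json \
    data/prima/annotations-val.json
```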