jeffaudi committed
Commit bbd543c
1 Parent(s): 463d78c

First commit

README.md CHANGED
@@ -1,3 +1,127 @@
  ---
  license: cc-by-nc-sa-4.0
  ---
+
+ # Model Card for Oriented R-CNN pretrained on DOTA 1.0
+
+ <!-- Provide a quick summary of what the model is/does. [Optional] -->
+ The original paper is [Oriented R-CNN for Object Detection](https://openaccess.thecvf.com/content/ICCV2021/papers/Xie_Oriented_R-CNN_for_Object_Detection_ICCV_2021_paper.pdf).
+
+ This implementation of the model was developed by [OpenMMLab](https://openmmlab.com/) in the [MMRotate](https://github.com/open-mmlab/mmrotate) framework.
+
+ The model was trained on [DOTA 1.0](https://captain-whu.github.io/DOTA/).
+
+ Its performance, measured as mAP, is 75.69.
+
+ # Table of Contents
+
+ - [Model Card for Oriented R-CNN pretrained on DOTA 1.0](#model-card-for-oriented-r-cnn-pretrained-on-dota-10)
+ - [Table of Contents](#table-of-contents)
+ - [Model Details](#model-details)
+   - [Model Description](#model-description)
+ - [Uses](#uses)
+   - [Direct Use](#direct-use)
+   - [Out-of-Scope Use](#out-of-scope-use)
+ - [Bias, Risks, and Limitations](#bias-risks-and-limitations)
+ - [Training Details](#training-details)
+   - [Training Data](#training-data)
+   - [Metrics](#metrics)
+   - [Results](#results)
+ - [Model Card Contact](#model-card-contact)
+ - [How to Get Started with the Model](#how-to-get-started-with-the-model)
+
+ # Model Details
+
+ ## Model Description
+
+ <!-- Provide a longer summary of what this model is/does. -->
+ The original paper is [Oriented R-CNN for Object Detection](https://openaccess.thecvf.com/content/ICCV2021/papers/Xie_Oriented_R-CNN_for_Object_Detection_ICCV_2021_paper.pdf).
+
+ This implementation of the model was developed by [OpenMMLab](https://openmmlab.com/) in the [MMRotate](https://github.com/open-mmlab/mmrotate) framework.
+
+ The model was trained on [DOTA 1.0](https://captain-whu.github.io/DOTA/).
+
+ Its performance, measured as mAP, is 75.69.
+
+ - **Developed by:** OpenMMLab
+ - **Model type:** Object detection model (oriented bounding boxes)
+ - **License:** cc-by-nc-sa-4.0
+ - **Resources for more information:**
+   - [GitHub Repo](https://github.com/open-mmlab/mmrotate/)
+   - [Associated Paper](https://openaccess.thecvf.com/content/ICCV2021/papers/Xie_Oriented_R-CNN_for_Object_Detection_ICCV_2021_paper.pdf)
+
+ # Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ## Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+ The model can be used as-is to detect oriented objects (the 15 DOTA 1.0 categories, such as planes, ships and vehicles) in aerial imagery.
+
+ ## Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+ The model is not expected to perform well on imagery that differs substantially from DOTA 1.0 (e.g. ground-level photographs), and the cc-by-nc-sa-4.0 license excludes commercial use.
+
+ # Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ Significant research has explored bias and fairness issues in machine learning models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). This model was trained exclusively on aerial images from DOTA 1.0; detections on other sensors, resolutions or regions may be unreliable and should be validated before downstream use.
+
+ # Training Details
+
+ ## Training Data
+
+ <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ The model was trained on [DOTA 1.0](https://captain-whu.github.io/DOTA/).
+
+ ## Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ The performance is measured as mAP.
+
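+ As background: average precision (AP) for one class is the area under its precision–recall curve, and mAP averages AP over the 15 DOTA 1.0 classes (the DOTA benchmark computes IoU on the oriented boxes, typically at a 0.5 threshold):
+
+ $$\mathrm{mAP} = \frac{1}{15}\sum_{c=1}^{15} \mathrm{AP}_c$$
+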
+ ## Results
+
+ The final mAP is 75.69.
+
+ # Model Card Contact
+
+ Jeff Faudi
+
+ # How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ ```python
+ from mmdet.apis import init_detector, inference_detector
+ import mmrotate  # noqa: F401 -- importing mmrotate registers the rotated modules
+
+ # Config and checkpoint are both shipped in this repository
+ config_file = 'oriented_rcnn_r50_fpn_1x_dota_le90.py'
+ checkpoint_file = 'oriented_rcnn_r50_fpn_1x_dota_le90-6d2b2ce0.pth'
+ model = init_detector(config_file, checkpoint_file, device='cuda:0')
+ result = inference_detector(model, 'demo/demo.jpg')
+ ```
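+
+ `inference_detector` returns one array per DOTA class. A minimal sketch of reading the output (the six columns — cx, cy, w, h, angle in radians, score — follow the MMRotate `le90` convention; the 0.3 threshold is an arbitrary choice):
+
+ ```python
+ for class_name, dets in zip(model.CLASSES, result):
+     # dets is an (N, 6) array: cx, cy, w, h, angle, confidence score
+     for cx, cy, w, h, angle, score in dets:
+         if score >= 0.3:  # assumed confidence threshold
+             print(f'{class_name}: center=({cx:.0f}, {cy:.0f}), score={score:.2f}')
+ ```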
oriented_rcnn_r50_fpn_1x_dota_le90-6d2b2ce0.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6d2b2ce0de1becdcb48c26dbcfdbf69d929f0d934a07335dd1065e6e8e24d3af
+ size 165749436
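
The checkpoint above is stored as a Git LFS pointer. A minimal sketch for fetching it programmatically (`repo_id` is a placeholder for this repository's actual id):

```python
from huggingface_hub import hf_hub_download

# Substitute the real repository id for the placeholder below
checkpoint_path = hf_hub_download(
    repo_id='<user>/<repo>',
    filename='oriented_rcnn_r50_fpn_1x_dota_le90-6d2b2ce0.pth')
```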
oriented_rcnn_r50_fpn_1x_dota_le90.py ADDED
@@ -0,0 +1,249 @@
+ dataset_type = 'DOTADataset'
+ data_root = 'data/split_1024_dota1_0/'
+ img_norm_cfg = dict(
+     mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
+ train_pipeline = [
+     dict(type='LoadImageFromFile'),
+     dict(type='LoadAnnotations', with_bbox=True),
+     dict(type='RResize', img_scale=(1024, 1024)),
+     dict(
+         type='RRandomFlip',
+         flip_ratio=[0.25, 0.25, 0.25],
+         direction=['horizontal', 'vertical', 'diagonal'],
+         version='le90'),
+     dict(
+         type='Normalize',
+         mean=[123.675, 116.28, 103.53],
+         std=[58.395, 57.12, 57.375],
+         to_rgb=True),
+     dict(type='Pad', size_divisor=32),
+     dict(type='DefaultFormatBundle'),
+     dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels'])
+ ]
+ test_pipeline = [
+     dict(type='LoadImageFromFile'),
+     dict(
+         type='MultiScaleFlipAug',
+         img_scale=(1024, 1024),
+         flip=False,
+         transforms=[
+             dict(type='RResize'),
+             dict(
+                 type='Normalize',
+                 mean=[123.675, 116.28, 103.53],
+                 std=[58.395, 57.12, 57.375],
+                 to_rgb=True),
+             dict(type='Pad', size_divisor=32),
+             dict(type='DefaultFormatBundle'),
+             dict(type='Collect', keys=['img'])
+         ])
+ ]
+ data = dict(
+     samples_per_gpu=2,
+     workers_per_gpu=2,
+     train=dict(
+         type='DOTADataset',
+         ann_file='data/split_1024_dota1_0/trainval/annfiles/',
+         img_prefix='data/split_1024_dota1_0/trainval/images/',
+         pipeline=[
+             dict(type='LoadImageFromFile'),
+             dict(type='LoadAnnotations', with_bbox=True),
+             dict(type='RResize', img_scale=(1024, 1024)),
+             dict(
+                 type='RRandomFlip',
+                 flip_ratio=[0.25, 0.25, 0.25],
+                 direction=['horizontal', 'vertical', 'diagonal'],
+                 version='le90'),
+             dict(
+                 type='Normalize',
+                 mean=[123.675, 116.28, 103.53],
+                 std=[58.395, 57.12, 57.375],
+                 to_rgb=True),
+             dict(type='Pad', size_divisor=32),
+             dict(type='DefaultFormatBundle'),
+             dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels'])
+         ],
+         version='le90'),
+     val=dict(
+         type='DOTADataset',
+         ann_file='data/split_1024_dota1_0/trainval/annfiles/',
+         img_prefix='data/split_1024_dota1_0/trainval/images/',
+         pipeline=[
+             dict(type='LoadImageFromFile'),
+             dict(
+                 type='MultiScaleFlipAug',
+                 img_scale=(1024, 1024),
+                 flip=False,
+                 transforms=[
+                     dict(type='RResize'),
+                     dict(
+                         type='Normalize',
+                         mean=[123.675, 116.28, 103.53],
+                         std=[58.395, 57.12, 57.375],
+                         to_rgb=True),
+                     dict(type='Pad', size_divisor=32),
+                     dict(type='DefaultFormatBundle'),
+                     dict(type='Collect', keys=['img'])
+                 ])
+         ],
+         version='le90'),
+     test=dict(
+         type='DOTADataset',
+         ann_file='data/split_1024_dota1_0/test/images/',
+         img_prefix='data/split_1024_dota1_0/test/images/',
+         pipeline=[
+             dict(type='LoadImageFromFile'),
+             dict(
+                 type='MultiScaleFlipAug',
+                 img_scale=(1024, 1024),
+                 flip=False,
+                 transforms=[
+                     dict(type='RResize'),
+                     dict(
+                         type='Normalize',
+                         mean=[123.675, 116.28, 103.53],
+                         std=[58.395, 57.12, 57.375],
+                         to_rgb=True),
+                     dict(type='Pad', size_divisor=32),
+                     dict(type='DefaultFormatBundle'),
+                     dict(type='Collect', keys=['img'])
+                 ])
+         ],
+         version='le90'))
+ evaluation = dict(interval=1, metric='mAP')
+ optimizer = dict(type='SGD', lr=0.005, momentum=0.9, weight_decay=0.0001)
+ optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))
+ lr_config = dict(
+     policy='step',
+     warmup='linear',
+     warmup_iters=500,
+     warmup_ratio=0.3333333333333333,
+     step=[8, 11])
+ runner = dict(type='EpochBasedRunner', max_epochs=12)
+ checkpoint_config = dict(interval=1)
+ log_config = dict(interval=50, hooks=[dict(type='TextLoggerHook')])
+ dist_params = dict(backend='nccl')
+ log_level = 'INFO'
+ load_from = None
+ resume_from = None
+ workflow = [('train', 1)]
+ opencv_num_threads = 0
+ mp_start_method = 'fork'
+ angle_version = 'le90'
+ model = dict(
+     type='OrientedRCNN',
+     backbone=dict(
+         type='ResNet',
+         depth=50,
+         num_stages=4,
+         out_indices=(0, 1, 2, 3),
+         frozen_stages=1,
+         norm_cfg=dict(type='BN', requires_grad=True),
+         norm_eval=True,
+         style='pytorch',
+         init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')),
+     neck=dict(
+         type='FPN',
+         in_channels=[256, 512, 1024, 2048],
+         out_channels=256,
+         num_outs=5),
+     rpn_head=dict(
+         type='OrientedRPNHead',
+         in_channels=256,
+         feat_channels=256,
+         version='le90',
+         anchor_generator=dict(
+             type='AnchorGenerator',
+             scales=[8],
+             ratios=[0.5, 1.0, 2.0],
+             strides=[4, 8, 16, 32, 64]),
+         bbox_coder=dict(
+             type='MidpointOffsetCoder',
+             angle_range='le90',
+             target_means=[0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
+             target_stds=[1.0, 1.0, 1.0, 1.0, 0.5, 0.5]),
+         loss_cls=dict(
+             type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
+         loss_bbox=dict(
+             type='SmoothL1Loss', beta=0.1111111111111111, loss_weight=1.0)),
+     roi_head=dict(
+         type='OrientedStandardRoIHead',
+         bbox_roi_extractor=dict(
+             type='RotatedSingleRoIExtractor',
+             roi_layer=dict(
+                 type='RoIAlignRotated',
+                 out_size=7,
+                 sample_num=2,
+                 clockwise=True),
+             out_channels=256,
+             featmap_strides=[4, 8, 16, 32]),
+         bbox_head=dict(
+             type='RotatedShared2FCBBoxHead',
+             in_channels=256,
+             fc_out_channels=1024,
+             roi_feat_size=7,
+             num_classes=15,
+             bbox_coder=dict(
+                 type='DeltaXYWHAOBBoxCoder',
+                 angle_range='le90',
+                 norm_factor=None,
+                 edge_swap=True,
+                 proj_xy=True,
+                 target_means=(0.0, 0.0, 0.0, 0.0, 0.0),
+                 target_stds=(0.1, 0.1, 0.2, 0.2, 0.1)),
+             reg_class_agnostic=True,
+             loss_cls=dict(
+                 type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
+             loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0))),
+     train_cfg=dict(
+         rpn=dict(
+             assigner=dict(
+                 type='MaxIoUAssigner',
+                 pos_iou_thr=0.7,
+                 neg_iou_thr=0.3,
+                 min_pos_iou=0.3,
+                 match_low_quality=True,
+                 ignore_iof_thr=-1),
+             sampler=dict(
+                 type='RandomSampler',
+                 num=256,
+                 pos_fraction=0.5,
+                 neg_pos_ub=-1,
+                 add_gt_as_proposals=False),
+             allowed_border=0,
+             pos_weight=-1,
+             debug=False),
+         rpn_proposal=dict(
+             nms_pre=2000,
+             max_per_img=2000,
+             nms=dict(type='nms', iou_threshold=0.8),
+             min_bbox_size=0),
+         rcnn=dict(
+             assigner=dict(
+                 type='MaxIoUAssigner',
+                 pos_iou_thr=0.5,
+                 neg_iou_thr=0.5,
+                 min_pos_iou=0.5,
+                 match_low_quality=False,
+                 iou_calculator=dict(type='RBboxOverlaps2D'),
+                 ignore_iof_thr=-1),
+             sampler=dict(
+                 type='RRandomSampler',
+                 num=512,
+                 pos_fraction=0.25,
+                 neg_pos_ub=-1,
+                 add_gt_as_proposals=True),
+             pos_weight=-1,
+             debug=False)),
+     test_cfg=dict(
+         rpn=dict(
+             nms_pre=2000,
+             max_per_img=2000,
+             nms=dict(type='nms', iou_threshold=0.8),
+             min_bbox_size=0),
+         rcnn=dict(
+             nms_pre=2000,
+             min_bbox_size=0,
+             score_thr=0.05,
+             nms=dict(iou_thr=0.1),
+             max_per_img=2000)))
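
This is a self-contained MMCV-style config: every component is declared as a plain dict and built from the registry at runtime. A minimal sketch for inspecting it programmatically (assuming mmcv 1.x, which MMRotate 0.x builds on); the same file is what `init_detector` consumes in the README snippet above:

```python
from mmcv import Config

# Parse the config into a nested, attribute-accessible structure
cfg = Config.fromfile('oriented_rcnn_r50_fpn_1x_dota_le90.py')
print(cfg.model.type)                            # OrientedRCNN
print(cfg.model.roi_head.bbox_head.num_classes)  # 15 (the DOTA 1.0 classes)
print(cfg.optimizer)                             # SGD, lr=0.005
```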