model documentation

#3
by nazneen - opened
Files changed (1)
  1. README.md +172 -6
README.md CHANGED
@@ -1,19 +1,183 @@
  ---
  tags:
  - object-detection
+
  ---

- ## Model description
- detr-doc-table-detection is a model trained to detect both **Bordered** and **Borderless** tables in documents, based on [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50)
-
- ## Training data
+ # Model Card for detr-doc-table-detection
+
+ # Model Details
+
+ ## Model Description
+
+ detr-doc-table-detection is a model trained to detect both **Bordered** and **Borderless** tables in documents, based on [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50).
+
+ - **Developed by:** Taha Douaji
+ - **Shared by [Optional]:** Taha Douaji
+ - **Model type:** Object Detection
+ - **Language(s) (NLP):** More information needed
+ - **License:** More information needed
+ - **Parent Model:** [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50)
+ - **Resources for more information:**
+   - [Model Demo Space](https://huggingface.co/spaces/trevbeers/pdf-table-extraction)
+   - [Associated Paper](https://arxiv.org/abs/2005.12872)
+
+ # Uses
+
+ ## Direct Use
+
+ This model can be used for the task of object detection.
+
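+ As a quick check of direct use, the checkpoint can be loaded through the transformers object-detection pipeline. This is a minimal sketch, not from the original card: the repo id `TahaDouaji/detr-doc-table-detection` and the image path are assumptions.
+
+ ```python
+ from transformers import pipeline
+
+ # Assumed repo id for this checkpoint; replace if it differs.
+ detector = pipeline("object-detection", model="TahaDouaji/detr-doc-table-detection")
+
+ # Returns a list of {"score", "label", "box"} dicts, one per detected table.
+ detections = detector("path/to/document_page.png")  # hypothetical image path
+ print(detections)
+ ```
+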
+ ## Downstream Use [Optional]
+
+ More information needed.
+
+ ## Out-of-Scope Use
+
+ The model should not be used to intentionally create hostile or alienating environments for people.
+
+ # Bias, Risks, and Limitations
+
+ Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
+
+ ## Recommendations
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ # Training Details
+
+ ## Training Data
+
  The model was trained on the ICDAR 2019 Table Dataset.
-
- ### How to use
+
+ ## Training Procedure
+
+ ### Preprocessing
+
+ More information needed
+
+ ### Speeds, Sizes, Times
+
+ More information needed

+ # Evaluation
+
+ ## Testing Data, Factors & Metrics
+
+ ### Testing Data
+
+ More information needed
+
+ ### Factors
+
+ More information needed
+
+ ### Metrics
+
+ More information needed
+
+ ## Results
+
+ More information needed
+
+ # Model Examination
+
+ More information needed
+
+ # Environmental Impact
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** More information needed
+ - **Hours used:** More information needed
+ - **Cloud Provider:** More information needed
+ - **Compute Region:** More information needed
+ - **Carbon Emitted:** More information needed
+
+ # Technical Specifications [optional]
+
+ ## Model Architecture and Objective
+
+ More information needed
+
+ ## Compute Infrastructure
+
+ More information needed
+
+ ### Hardware
+
+ More information needed
+
+ ### Software
+
+ More information needed.
+
+ # Citation
+
+ **BibTeX:**
+
+ ```bibtex
+ @article{DBLP:journals/corr/abs-2005-12872,
+   author        = {Nicolas Carion and
+                    Francisco Massa and
+                    Gabriel Synnaeve and
+                    Nicolas Usunier and
+                    Alexander Kirillov and
+                    Sergey Zagoruyko},
+   title         = {End-to-End Object Detection with Transformers},
+   journal       = {CoRR},
+   volume        = {abs/2005.12872},
+   year          = {2020},
+   url           = {https://arxiv.org/abs/2005.12872},
+   archivePrefix = {arXiv},
+   eprint        = {2005.12872},
+   timestamp     = {Thu, 28 May 2020 17:38:09 +0200},
+   biburl        = {https://dblp.org/rec/journals/corr/abs-2005-12872.bib},
+   bibsource     = {dblp computer science bibliography, https://dblp.org}
+ }
+ ```
+
+ # Glossary [optional]
+
+ More information needed
+
+ # More Information [optional]
+
+ More information needed
+
+ # Model Card Authors [optional]
+
+ Taha Douaji in collaboration with Ezi Ozoani and the Hugging Face team
+
+ # Model Card Contact
+
+ More information needed
+
+ # How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ <details>
+ <summary> Click to expand </summary>

  ```python
- from transformers import DetrFeatureExtractor, DetrForObjectDetection
+ from transformers import DetrFeatureExtractor, DetrForObjectDetection
+ import torch  # needed below for torch.tensor
  from PIL import Image

  image = Image.open("Image path")
@@ -27,4 +191,6 @@ outputs = model(**inputs)
  # convert outputs (bounding boxes and class logits) to COCO API
  target_sizes = torch.tensor([image.size[::-1]])
  results = feature_extractor.post_process(outputs, target_sizes=target_sizes)[0]
- ```
+ ```
+ </details>
+
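+ The `results` dict returned by `post_process` holds parallel `scores`, `labels`, and `boxes` tensors. A short usage sketch for reading them out; the 0.9 cutoff is an example value, not from the card:
+
+ ```python
+ # Keep only confident detections; each box is [x_min, y_min, x_max, y_max].
+ for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
+     if score > 0.9:  # example confidence threshold
+         print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
+ ```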