nehulagrawal committed
Commit 5b4c209
1 Parent(s): 65e6d73

add ultralytics model card

Files changed (1)
  1. README.md +25 -131
README.md CHANGED
@@ -1,3 +1,4 @@
 
 ---
 tags:
 - ultralyticsplus
@@ -7,171 +8,64 @@ tags:
 - vision
 - object-detection
 - pytorch
- - awesome-yolov8-models
- - table classification
- - structured table detection
- - unstructured table detection
- - table detection
- - table
- - Document
- - table extraction
- - unstructured table extraction
 library_name: ultralytics
 library_version: 8.0.43
- inference: False
 
 model-index:
 - name: foduucom/table-detection-and-extraction
   results:
   - task:
       type: object-detection
-     metrics:
-     - type: precision
-       value: 0.962
-       name: mAP@0.5(box)
- language:
- - en
- metrics:
- - accuracy
- ---
- Below is the Model Card for the YOLOv8s Table Detection and Extraction model:

 ---

- <p align="center">
- <!-- Smaller size image -->
- <img src="https://example.com/table-detection-model-thumbnail.jpg" alt="Image" style="width:500px; height:300px;">
- </p>
-
- # Model Card for YOLOv8s Table Detection

- ## Model Summary
-
- The YOLOv8s Table Detection model is an object detection model based on the YOLO (You Only Look Once) framework. It detects tables, whether bordered or borderless, in images. The model has been fine-tuned on a large dataset and achieves high accuracy both in detecting tables and in distinguishing bordered from borderless ones.
-
- ## Model Details
-
- ### Model Description
-
- The YOLOv8s Table Detection model is built on the YOLOv8 architecture, known for its real-time object detection capabilities. It has been trained to recognize tables of various designs, bordered and borderless alike, detecting them in images and classifying each into the appropriate category.
 
 ```
- ['Bordered','Borderless']
 ```
- - **Developed by:** FODUU AI
- - **Model type:** Object Detection
- - **Task:** Table Detection (Bordered and Borderless)
-
- The model also encourages user collaboration: you can submit images of different table designs and types to help it detect a wider variety of tables accurately. Contributions can be shared through our community platform or by contacting us at info@foduu.com, and they directly improve the model's recognition and classification of diverse table types.
-
- ## Uses
-
- ### Direct Use
- The YOLOv8s Table Detection model precisely identifies tables in images, whether bordered or borderless. Its role goes beyond detection: by delineating each table with a bounding box, it lets users isolate the tables of interest within unstructured documents.
-
- What sets the model apart is its synergy with Optical Character Recognition (OCR): the predicted bounding boxes guide the cropping of each detected table, and OCR then extracts the text it contains, streamlining information retrieval from unstructured documents, as sketched below.
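For illustration, a minimal sketch of such a detection-plus-OCR pipeline. It assumes Pillow and pytesseract (with a local Tesseract install) are available; neither ships with this model, and the input path is hypothetical:

```python
# Sketch only: detect tables, then OCR each cropped region.
from PIL import Image
from ultralyticsplus import YOLO
import pytesseract

model = YOLO('foduucom/table-detection-and-extraction')
image_path = 'document.jpg'  # hypothetical input image

results = model.predict(image_path)
page = Image.open(image_path)

for box in results[0].boxes:
    # box.xyxy holds the pixel coordinates of one detected table
    x1, y1, x2, y2 = map(int, box.xyxy[0].tolist())
    table_crop = page.crop((x1, y1, x2, y2))
    print(pytesseract.image_to_string(table_crop))
```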
-
- We invite you to explore the model's data extraction capabilities. For assistance, customization, or collaboration, reach out to us at info@foduu.com, or join our community section for shared insights and collective problem-solving. Your input drives our continuous improvement of data extraction and document analysis.
-
- ### Downstream Use
-
- The model can also be fine-tuned for specific table detection tasks or integrated into larger document-processing applications for image-based data extraction and related fields; a fine-tuning sketch follows below.
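A minimal fine-tuning sketch using the standard ultralytics training API; the dataset file `tables.yaml` and the hyperparameters are hypothetical placeholders:

```python
# Sketch only: fine-tune the checkpoint on a custom table dataset.
from ultralyticsplus import YOLO

model = YOLO('foduucom/table-detection-and-extraction')

# 'tables.yaml' is a hypothetical dataset config in the usual
# ultralytics format (train/val image paths plus class names).
model.train(data='tables.yaml', epochs=50, imgsz=640)
```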
-
- ### Out-of-Scope Use
-
- The model is not designed for unrelated object detection tasks or scenarios outside the scope of table detection.
-
- ## Bias, Risks, and Limitations
-
- The YOLOv8s Table Detection model may have some limitations and biases:
-
- - Performance may vary based on the quality, diversity, and representativeness of the training data.
- - The model may face challenges in detecting tables with intricate designs or complex arrangements.
- - Accuracy may be affected by variations in lighting conditions, image quality, and resolution.
- - Detection of very small or distant tables might be less accurate.
- - The model's ability to classify bordered and borderless tables may be influenced by variations in design.

- ### Recommendations

- Users should be informed about the model's limitations and potential biases. Further testing and validation are advised for specific use cases to evaluate its performance accurately.
-
- ## How to Get Started with the Model
-
- To begin using the YOLOv8s Table Detection model, follow these steps:
-
- 1. Install the required libraries, such as [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus) and [ultralytics](https://github.com/ultralytics/ultralytics), using pip:

 ```bash
- pip install ultralyticsplus ultralytics
 ```

- 2. Load the model and perform predictions using the provided code snippet.

 ```python
 from ultralyticsplus import YOLO, render_result

- # Load model
 model = YOLO('foduucom/table-detection-and-extraction')

- # Set model parameters
 model.overrides['conf'] = 0.25  # NMS confidence threshold
- model.overrides['iou'] = 0.45  # NMS IoU threshold
 model.overrides['agnostic_nms'] = False  # NMS class-agnostic
- model.overrides['max_det'] = 1000  # Maximum number of detections per image

- # Set image
- image = 'path/to/your/image'

- # Perform inference
 results = model.predict(image)

- # Observe results
 print(results[0].boxes)
 render = render_result(model=model, image=image, result=results[0])
 render.show()
 ```

- ## Training Details
-
- ### Training Data
-
- The model is trained on a diverse dataset containing images of tables from various sources. The dataset includes examples of both bordered and borderless tables, capturing different designs and styles.
-
- ### Training Procedure
-
- The training process involves extensive computation and is conducted over multiple epochs. The model's weights are adjusted to minimize detection loss and optimize performance.
-
- #### Metrics
-
- - mAP@0.5 (box):
-   - All: 0.962
-   - Bordered: 0.961
-   - Borderless: 0.963
-
- ### Model Architecture and Objective
-
- The YOLOv8s architecture employs a modified CSPDarknet backbone together with a feature pyramid (PAN) neck and an anchor-free detection head. These components contribute to the model's ability to detect and classify tables accurately across variations in size, design, and style.
-
- ### Compute Infrastructure
-
- #### Hardware
-
- NVIDIA GeForce RTX 3060
-
- #### Software
-
- The model was trained and fine-tuned in a Jupyter Notebook environment.
-
- ## Model Card Contact
-
- For inquiries and contributions, please contact us at info@foduu.com.
-
- ```bibtex
- @ModelCard{
-   author = {Nehul Agrawal and Pranjal Singh Thakur},
-   title = {Table Detection and Extraction},
-   year = {2023}
- }
- ```
- ---
-
 
+
 ---
 tags:
 - ultralyticsplus
 
 - vision
 - object-detection
 - pytorch
+
 library_name: ultralytics
 library_version: 8.0.43
+ inference: false
+
 model-index:
 - name: foduucom/table-detection-and-extraction
   results:
   - task:
       type: object-detection
+     metrics:
+     - type: precision  # since mAP@0.5 is not available on hf.co/metrics
+       value: 0.96196  # min: 0.0 - max: 1.0
+       name: mAP@0.5(box)
 ---

+ <div align="center">
+   <img width="640" alt="foduucom/table-detection-and-extraction" src="https://huggingface.co/foduucom/table-detection-and-extraction/resolve/main/thumbnail.jpg">
+ </div>

+ ### Supported Labels

 ```
+ ['bordered', 'borderless']
 ```
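The label mapping can also be read from the loaded model at runtime; a small sketch relying on the standard ultralytics `model.names` attribute:

```python
from ultralyticsplus import YOLO

model = YOLO('foduucom/table-detection-and-extraction')
# class-index-to-label mapping carried with the weights
print(model.names)  # expected: {0: 'bordered', 1: 'borderless'}
```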
 
+ ### How to use

+ - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus):

 ```bash
+ pip install ultralyticsplus==0.0.28 ultralytics==8.0.43
 ```

+ - Load model and perform prediction:

 ```python
 from ultralyticsplus import YOLO, render_result

+ # load model
 model = YOLO('foduucom/table-detection-and-extraction')

+ # set model parameters
 model.overrides['conf'] = 0.25  # NMS confidence threshold
+ model.overrides['iou'] = 0.45  # NMS IoU threshold
 model.overrides['agnostic_nms'] = False  # NMS class-agnostic
+ model.overrides['max_det'] = 1000  # maximum number of detections per image

+ # set image
+ image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'

+ # perform inference
 results = model.predict(image)

+ # observe results
 print(results[0].boxes)
 render = render_result(model=model, image=image, result=results[0])
 render.show()
 ```
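To move from detection toward extraction, the predicted boxes can be turned into per-table crops. A follow-on sketch reusing `model` and `results` from the snippet above; it additionally assumes Pillow and requests are installed, and the output file names are hypothetical:

```python
# Sketch only: save each detected table as its own image crop.
from io import BytesIO

import requests
from PIL import Image

resp = requests.get('https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg')
page = Image.open(BytesIO(resp.content))

for i, box in enumerate(results[0].boxes):
    x1, y1, x2, y2 = map(int, box.xyxy[0].tolist())
    label = model.names[int(box.cls[0])]  # 'bordered' or 'borderless'
    page.crop((x1, y1, x2, y2)).save(f'table_{i}_{label}.png')
```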