MaxMagician, Claude, and Happy committed
Commit c3155e8 · 0 Parent(s):

Initial HF Space: Gradio sitting posture demo


Generated with [Claude Code](https://claude.ai/code)
via [Happy](https://happy.engineering)

Co-Authored-By: Claude <noreply@anthropic.com>
Co-Authored-By: Happy <yesreply@happy.engineering>

.gitattributes ADDED
@@ -0,0 +1,6 @@
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
+ *.jpg filter=lfs diff=lfs merge=lfs -text
+ *.jpeg filter=lfs diff=lfs merge=lfs -text
+ *.webp filter=lfs diff=lfs merge=lfs -text
+ *.gif filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,126 @@
+ ---
+ title: Sitting Posture Detection
+ emoji: 🪑
+ colorFrom: yellow
+ colorTo: blue
+ sdk: gradio
+ sdk_version: "4.44.1"
+ app_file: app.py
+ pinned: false
+ ---
+
+ # Real-Time Lateral Sitting Posture Detection using YOLOv5
+
+ <div align="center">
+ <img src="https://raw.githubusercontent.com/itakurah/SittingPostureDetection/main/data/images/posture.webp" width="80%" height="80%" alt="Sitting Posture">
+
+ *Source: https://www.youtube.com/watch?v=HNgTLml_Zi4*
+ </div>
+
+ This repository provides an open-source solution for **real-time sitting posture detection** using [YOLOv5](https://github.com/ultralytics/yolov5), a state-of-the-art object detection algorithm. The program analyzes a user's sitting posture and offers feedback on whether it aligns with ergonomic best practices, aiming to promote healthier sitting habits.
+
+ ## Key Features
+
+ * **YOLOv5**: Leverages YOLOv5 to accurately detect the user's sitting posture from a webcam feed.
+ * **Real-Time Posture Detection**: Provides real-time feedback on the user's sitting posture, making it suitable for office ergonomics, fitness, and health monitoring.
+ * **Good vs. Bad Posture Classification**: Uses a pre-trained model to classify the detected posture as good or bad, helping users improve their posture and prevent health issues associated with poor sitting habits.
+ * **Open Source**: Released under an open-source license, allowing users to access, modify, and contribute to the project.
+
+ ---
+
+ ### Built With
+
+ ![Python]
+
+ ## IEEE Conference Publication
+
+ This project has been published in an IEEE conference paper, **"Lateral Sitting Posture Detection using YOLOv5,"** presented at the 2024 IEEE International Conference on Biomedical Robotics and Biomechatronics (BioRob). The paper provides a comprehensive overview of the methodology, technical approach, and results of applying YOLOv5 to lateral sitting posture detection:
+
+ **[Read the IEEE Publication on Xplore](https://doi.org/10.1109/BioRob60516.2024.10719953)**
+
+ # Getting Started
+
+ ### Prerequisites
+
+ * Python 3.9.x
+
+ ### Installation
+
+ If you have an NVIDIA graphics processor, you can enable GPU acceleration by installing the GPU requirements. Without GPU acceleration, inference runs on the CPU, which can be very slow.
+
+ #### Windows
+
+ 1. `git clone https://github.com/itakurah/sitting-posture-detection-yolov5.git`
+ 2. `python -m venv venv`
+ 3. `.\venv\Scripts\activate.bat`
+ 4. `pip install -r ./requirements_windows.txt` (default) **OR** `pip install -r ./requirements_windows_gpu.txt` (NVIDIA GPU support)
+
+ #### Linux
+
+ 1. `git clone https://github.com/itakurah/sitting-posture-detection-yolov5.git`
+ 2. `python3 -m venv venv`
+ 3. `source venv/bin/activate`
+ 4. `pip3 install -r requirements_linux.txt` (default) **OR** `pip3 install -r requirements_linux_gpu.txt` (NVIDIA GPU support)
+
+ ### Run the program
+
+ `python application.py <optional: model_file.pt>` **OR** `python3 application.py <optional: model_file.pt>`
+
+ The default model is loaded if no model file is specified.
+
+ # Model Information
+
+ This project uses a custom-trained [YOLOv5s](https://github.com/ultralytics/yolov5/blob/79af1144c270ac7169553d450b9170f9c60f92e4/models/yolov5s.yaml) model fine-tuned on 160 images per class over 146 epochs. It categorizes postures into two classes:
+ * `sitting_good`
+ * `sitting_bad`
+
+ The trained model file is located at `data/inference_models/small640.pt`; a minimal loading sketch is shown below.
+
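+ For reference, here is a hedged sketch of loading this checkpoint directly with the [`yolov5`](https://pypi.org/project/yolov5/) pip package, mirroring the settings in `app_models/load_model.py`; the example image path is illustrative, and on PyTorch 2.6+ the `torch.load(weights_only=False)` patch from `app.py` may be needed first:
+
+ ```python
+ import yolov5
+
+ # Load the fine-tuned YOLOv5s checkpoint on the CPU.
+ model = yolov5.load("data/inference_models/small640.pt", device="cpu")
+ model.conf = 0.50   # NMS confidence threshold, as set in load_model.py
+ model.iou = 0.50    # NMS IoU threshold
+ model.max_det = 1   # keep only the single best detection
+
+ results = model("examples/good_1.png")  # accepts an image path or numpy array
+ detections = results.pandas().xyxy[0].to_dict(orient="records")
+ if detections:
+     det = detections[0]
+     label = "sitting_good" if det["class"] == 0 else "sitting_bad"
+     print(f"{label} (confidence {det['confidence']:.2f})")
+ else:
+     print("no detection above the 0.5 confidence threshold")
+ ```
+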
+ # Architecture
+
+ The model uses the standard YOLOv5s architecture:
+
+ <img src="https://raw.githubusercontent.com/itakurah/SittingPostureDetection/main/data/images/architecture.png" width=75% height=75%>
+
+ *Fig. 1: YOLOv5s network architecture (based on Liu et al.). The CBS module consists of a Convolutional layer, a Batch Normalization layer, and a Sigmoid Linear Unit (SiLU) activation function. The C3 module consists of three CBS modules and one bottleneck block. The SPPF module consists of two CBS modules and three Max Pooling layers.*
+
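+ To make the caption concrete, here is a hedged PyTorch sketch of the CBS building block; the layer arguments are illustrative defaults, not the exact YOLOv5s hyperparameters:
+
+ ```python
+ import torch.nn as nn
+
+ # CBS = Convolution + Batch Normalization + SiLU, the basic block in Fig. 1.
+ class CBS(nn.Sequential):
+     def __init__(self, c_in, c_out, k=3, s=1):
+         super().__init__(
+             nn.Conv2d(c_in, c_out, k, s, padding=k // 2, bias=False),
+             nn.BatchNorm2d(c_out),
+             nn.SiLU(inplace=True),
+         )
+ ```
+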
+ # Model Results
+
+ The validation set contains 80 images (40 sitting_good, 40 sitting_bad). The results are as follows:
+
+ | Class | Images | Instances | Precision | Recall | mAP50 | mAP50-95 |
+ |--|--|--|--|--|--|--|
+ | all | 80 | 80 | 0.87 | 0.939 | 0.931 | 0.734 |
+ | sitting_good | 80 | 40 | 0.884 | 0.954 | 0.908 | 0.744 |
+ | sitting_bad | 80 | 40 | 0.855 | 0.925 | 0.953 | 0.724 |
+
+ F1, Precision, Recall, and Precision-Recall plots:
+
+ <p align="middle">
+ <img src="https://raw.githubusercontent.com/itakurah/SittingPostureDetection/main/data/images/F1_curve.png" width=40% height=40%>
+ <img src="https://raw.githubusercontent.com/itakurah/SittingPostureDetection/main/data/images/P_curve.png" width=40% height=40%>
+ <img src="https://raw.githubusercontent.com/itakurah/SittingPostureDetection/main/data/images/R_curve.png" width=40% height=40%>
+ <img src="https://raw.githubusercontent.com/itakurah/SittingPostureDetection/main/data/images/PR_curve.png" width=40% height=40%>
+ </p>
+
+ # About
+
+ This project was developed by [Niklas Hoefflin](https://github.com/itakurah), [Tim Spulak](https://github.com/T-Lak), Pascal Gerber & Jan Bösch. It was supervised by [André Jeworutzki](https://github.com/AndreJeworutzki) and Jan Schwarzer as part of the [Train Like A Machine](https://csti.haw-hamburg.de/project/TLAM/) module at Hamburg University of Applied Sciences (HAW Hamburg). The project is actively maintained by Niklas Hoefflin and Tim Spulak.
+
+ # Sources
+
+ - Jocher, G. (2020). YOLOv5 by Ultralytics (Version 7.0). https://doi.org/10.5281/zenodo.3908559
+ - Fig. 1: H. Liu, F. Sun, J. Gu, and L. Deng, "SF-YOLOv5: A lightweight small object detection algorithm based on improved feature fusion mode," Sensors (Basel, Switzerland), vol. 22, no. 15, pp. 1–14, 2022. https://doi.org/10.3390/s22155817
+
+ # License
+
+ This project is licensed under the MIT License. See the LICENSE file for details.
+
+ <!-- MARKDOWN LINKS & IMAGES -->
+
+ [Python]: https://img.shields.io/badge/Python-3776AB?style=for-the-badge&logo=python&logoColor=white
analyze.py ADDED
@@ -0,0 +1,107 @@
+ #!/usr/bin/env python3
+ """
+ analyze.py: single-image sitting posture detection
+
+ Usage:
+     python analyze.py <image_path>
+     python analyze.py <image_path> --save
+     python analyze.py <image_path> --save <output_path>
+ """
+
+ import argparse
+ import os
+ import sys
+ import types
+ from pathlib import Path
+
+ # Change to the script's directory so the relative path in load_model.py
+ # (./data/inference_models/) resolves to the bundled model.
+ os.chdir(Path(__file__).parent)
+
+ # yolov5 compatibility shim (newer huggingface_hub releases removed the
+ # utils._errors submodule).
+ try:
+     import huggingface_hub.utils._errors  # noqa: F401
+ except (ModuleNotFoundError, ImportError):
+     import huggingface_hub.errors as _hf_errors
+     _shim = types.ModuleType("huggingface_hub.utils._errors")
+     for _name in dir(_hf_errors):
+         setattr(_shim, _name, getattr(_hf_errors, _name))
+     sys.modules["huggingface_hub.utils._errors"] = _shim
+
+ import torch
+
+ # PyTorch 2.6+ defaults to weights_only=True; older yolov5 checkpoints need it off.
+ _orig_torch_load = torch.load
+ def _patched_torch_load(*args, **kwargs):
+     kwargs.setdefault("weights_only", False)
+     return _orig_torch_load(*args, **kwargs)
+ torch.load = _patched_torch_load
+
+ import cv2
+ from app_models.load_model import InferenceModel
+
+
+ def draw_result(img, x1, y1, x2, y2, label, conf):
+     """Overlay a yellow detection box and label on the original image."""
+     color = (0, 255, 255)  # yellow (BGR)
+     cv2.rectangle(img, (x1, y1), (x2, y2), color, 2)
+     text = f"{label} {conf:.2f}"
+     # Draw a label background so the text stays readable.
+     (tw, th), _ = cv2.getTextSize(text, cv2.FONT_HERSHEY_SIMPLEX, 0.7, 2)
+     cv2.rectangle(img, (x1, y1 - th - 10), (x1 + tw + 4, y1), color, -1)
+     cv2.putText(img, text, (x1 + 2, y1 - 6),
+                 cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 0), 2)
+     return img
+
+
+ def main():
+     parser = argparse.ArgumentParser(description="Sitting posture detection (YOLOv5)")
+     parser.add_argument("image", help="input image path (JPG / PNG)")
+     parser.add_argument(
+         "--save",
+         nargs="?",
+         const="",  # --save without a path uses the default output name
+         metavar="OUTPUT",
+         help="save the annotated image; without a path it is saved as <name>_result.<ext>",
+     )
+     args = parser.parse_args()
+
+     image_path = Path(args.image).resolve()
+     if not image_path.exists():
+         print(f"Error: image not found: {image_path}")
+         sys.exit(1)
+
+     # Read the image.
+     img = cv2.imread(str(image_path))
+     if img is None:
+         print(f"Error: could not read image: {image_path}")
+         sys.exit(1)
+
+     # Load the model and run inference.
+     model = InferenceModel("small640.pt")
+     results = model.predict(img)
+     x1, y1, x2, y2, cls, conf = InferenceModel.get_results(results)
+
+     # The model is configured with conf=0.50; an empty result means the score
+     # was below that threshold.
+     if cls is None:
+         print("No person detected")
+         return
+
+     label = "good" if cls == 0 else "bad"
+     print(f"Posture: {label} (confidence {conf:.2f})")
+     print(f"BBox: [x1={x1}, y1={y1}, x2={x2}, y2={y2}]")
+
+     # Save the annotated image (only when --save is given).
+     if args.save is not None:
+         if args.save == "":
+             output_path = image_path.parent / (image_path.stem + "_result" + image_path.suffix)
+         else:
+             output_path = Path(args.save)
+         annotated = draw_result(img.copy(), x1, y1, x2, y2, label, conf)
+         cv2.imwrite(str(output_path), annotated)
+         print(f"Annotated image saved to: {output_path}")
+
+
+ if __name__ == "__main__":
+     main()
app.py ADDED
@@ -0,0 +1,101 @@
+ #!/usr/bin/env python3
+ """
+ Gradio demo: Sitting Posture Detection
+ HF Spaces entry point (sdk: gradio, app_file: app.py)
+ """
+
+ import sys
+ import types
+
+ # yolov5 imports huggingface_hub.utils._errors internally; newer huggingface_hub
+ # releases moved those classes to huggingface_hub.errors. Install a
+ # forward-compatibility shim to avoid an ImportError.
+ try:
+     import huggingface_hub.utils._errors  # noqa: F401
+ except (ModuleNotFoundError, ImportError):
+     import huggingface_hub.errors as _hf_errors
+     _shim = types.ModuleType("huggingface_hub.utils._errors")
+     for _name in dir(_hf_errors):
+         setattr(_shim, _name, getattr(_hf_errors, _name))
+     sys.modules["huggingface_hub.utils._errors"] = _shim
+
+ import torch
+
+ # PyTorch 2.6+ changed the weights_only default to True; older yolov5
+ # checkpoints need it disabled.
+ _orig_torch_load = torch.load
+ def _patched_torch_load(*args, **kwargs):
+     kwargs.setdefault("weights_only", False)
+     return _orig_torch_load(*args, **kwargs)
+ torch.load = _patched_torch_load
+
+ import cv2
+ import gradio as gr
+ from app_models.load_model import InferenceModel
+
+ # Load the model once at import time (avoids reloading on every request).
+ MODEL = InferenceModel("small640.pt")
+
+
+ def draw_result(img_bgr, x1, y1, x2, y2, label, conf):
+     """Overlay a yellow detection box and label on the image."""
+     color = (0, 255, 255)  # yellow (BGR)
+     cv2.rectangle(img_bgr, (x1, y1), (x2, y2), color, 2)
+     text = f"{label} {conf:.2f}"
+     (tw, th), _ = cv2.getTextSize(text, cv2.FONT_HERSHEY_SIMPLEX, 0.7, 2)
+     cv2.rectangle(img_bgr, (x1, y1 - th - 10), (x1 + tw + 4, y1), color, -1)
+     cv2.putText(img_bgr, text, (x1 + 2, y1 - 6),
+                 cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 0), 2)
+     return img_bgr
+
+
+ def analyze(image):
+     """
+     Gradio inference function.
+     image: numpy array (RGB, Gradio's default format)
+     returns: (annotated_image_rgb, result_text)
+     """
+     if image is None:
+         return None, "Please upload an image"
+
+     img_bgr = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
+
+     results = MODEL.predict(img_bgr)
+     x1, y1, x2, y2, cls, conf = InferenceModel.get_results(results)
+
+     if cls is None:
+         return image, "⚠️ No person detected (confidence below 0.5)\n\nTip: use a side-view image of the sitting posture"
+
+     label = "good" if cls == 0 else "bad"
+     emoji = "✅" if label == "good" else "❌"
+     result_text = (
+         f"{emoji} Posture: {label} (confidence {conf:.2f})\n"
+         f"BBox: [x1={x1}, y1={y1}, x2={x2}, y2={y2}]"
+     )
+
+     annotated_bgr = draw_result(img_bgr.copy(), x1, y1, x2, y2, label, conf)
+     annotated_rgb = cv2.cvtColor(annotated_bgr, cv2.COLOR_BGR2RGB)
+
+     return annotated_rgb, result_text
+
+
+ demo = gr.Interface(
+     fn=analyze,
+     inputs=gr.Image(type="numpy", label="Upload a sitting posture image (side view recommended)"),
+     outputs=[
+         gr.Image(type="numpy", label="Detection result"),
+         gr.Textbox(label="Analysis", lines=3),
+     ],
+     title="🪑 Sitting Posture Detection",
+     description=(
+         "Upload a **side-view photo of a seated person** to classify the posture as good or bad.\n\n"
+         "Based on YOLOv5s, trained on side-view images of standard chair setups."
+     ),
+     examples=[
+         ["examples/bad_1.png"],
+         ["examples/bad_2.png"],
+         ["examples/good_1.png"],
+     ],
+     allow_flagging="never",
+ )
+
+ if __name__ == "__main__":
+     demo.launch()
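
With `app.py` deployed as a Space, the Interface above can also be queried programmatically. A minimal sketch using `gradio_client`; the Space id here is hypothetical (substitute the real one), and `api_name="/predict"` is the default endpoint name Gradio assigns to an `Interface`:

```python
from gradio_client import Client, handle_file

client = Client("MaxMagician/sitting-posture-detection")  # hypothetical Space id

# The Interface returns (annotated image, analysis text); the client yields a
# local file path for the image output plus the text string.
annotated_path, analysis = client.predict(
    handle_file("examples/bad_1.png"),  # example image shipped in this commit
    api_name="/predict",
)
print(analysis)
```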
app_models/__init__.py ADDED
File without changes
app_models/load_model.py ADDED
@@ -0,0 +1,68 @@
+ import sys
+ from pathlib import Path
+
+ import torch
+ import yolov5
+
+
+ class InferenceModel:
+     """Class for loading the YOLOv5 inference models."""
+
+     def __init__(self, model_name):
+         self.model_name = model_name
+         # path to the inference models
+         self.model_path = Path('./data/inference_models/{}'.format(model_name))
+         print(self.model_name + ' loaded')
+         print('cuda available: ' + str(torch.cuda.is_available()))
+         if torch.cuda.is_available():
+             print('running GPU inference..')
+             device_memory = {}
+             # pick the GPU with the highest total memory
+             for i in range(torch.cuda.device_count()):
+                 props = torch.cuda.get_device_properties(i)
+                 device_memory[i] = props.total_memory
+             device_idx = max(device_memory, key=device_memory.get)
+             cuda = torch.device('cuda:{}'.format(device_idx))
+             # load the model into memory
+             try:
+                 self.model = yolov5.load(str(self.model_path), device=str(cuda))
+             except Exception as e:
+                 print(e)
+                 print('Could not load model')
+                 sys.exit(-1)
+         else:
+             print('running CPU inference..')
+             try:
+                 self.model = yolov5.load(str(self.model_path), device='cpu')
+             except Exception as e:
+                 print(e)
+                 print('Could not load model')
+                 sys.exit(-1)
+         # model properties
+         self.model.conf = 0.50  # NMS confidence threshold
+         self.model.iou = 0.50  # NMS IoU threshold
+         self.model.classes = [0, 1]  # only show these classes
+         self.model.agnostic = False  # NMS class-agnostic
+         self.model.multi_label = False  # NMS multiple labels per box
+         self.model.max_det = 1  # maximum number of detections per image
+         self.model.amp = True  # Automatic Mixed Precision (AMP) inference
+
+     # return the raw prediction
+     def predict(self, image):
+         return self.model(image)
+
+     # extract items from the results (max_det=1, so at most one record)
+     @staticmethod
+     def get_results(results):
+         (bbox_x1, bbox_y1, bbox_x2, bbox_y2, class_name, confidence) = None, None, None, None, None, None
+         results = results.pandas().xyxy[0].to_dict(orient="records")
+         if results:
+             for result in results:
+                 confidence = result['confidence']
+                 class_name = result['class']
+                 bbox_x1 = int(result['xmin'])
+                 bbox_y1 = int(result['ymin'])
+                 bbox_x2 = int(result['xmax'])
+                 bbox_y2 = int(result['ymax'])
+         return bbox_x1, bbox_y1, bbox_x2, bbox_y2, class_name, confidence
app_models/model.py ADDED
@@ -0,0 +1,62 @@
+ import cv2
+
+ from app_controllers.utils import camera_helper
+ from app_models.load_model import InferenceModel
+
+
+ class Model:
+     def __init__(self, model_name):
+         super().__init__()
+         self.is_fullscreen = False
+         self.fullscreen_window = None
+         self.worker_thread_pause_screen = None
+         self.worker_thread_memory = None
+         self.memory_usage = None
+         self.cpu_usage = None
+         self.confidence = None
+         self.class_name = None
+         self.width = None
+         self.height = None
+         self.fps = None
+         with open('./commit_hash.txt', 'r') as file:
+             self.commit_hash = file.read()
+         # self.inference_models = Model(get_model_name())
+         self.prev_frame_time = 0
+         self.IMAGE_BOX_SIZE = 600
+         self.flag_is_camera_thread_running = True
+         self.camera_mapping = camera_helper.get_camera_mapping(camera_helper.get_connected_camera_alias(),
+                                                                camera_helper.get_connected_camera_ids())
+         self.camera = None
+         self.work_thread_camera = None
+
+         # Load the frame properties.
+         # bounding box options
+         # bbox color
+         self.box_color = (251, 255, 12)
+         # bbox line thickness
+         self.box_thickness = 2
+
+         # text options
+         # confidence color
+         self.text_color_conf = (251, 255, 12)
+         # class color
+         self.text_color_class = (251, 255, 12)
+         # background color
+         self.text_color_bg = (0, 0, 0)
+         # font thickness
+         self.text_thickness = 1
+         # font style
+         self.text_font = cv2.FONT_HERSHEY_SIMPLEX
+         # font scale
+         self.text_font_scale = 0.5
+         self.model_name = model_name
+         self.inference_model = InferenceModel(self.model_name)
+         self.frame_rotation = 0
+         self.frame_orientation_vertical = 0
+         self.frame_orientation_horizontal = 0
+         self.bbox_mode = 1
+
+     def get_commit_hash(self):
+         return self.commit_hash
data/inference_models/small640.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:477a1f6e2d3ebb67301a7ca84876d140774d5131c703a26f23d22c4703af49d1
+ size 14404413
examples/bad_1.png ADDED
Git LFS Details
- SHA256: 0aa6c3aff598bcd97a5fe22b2c15baea52a95a58f93a014c7f6d45027f44f172
- Pointer size: 131 Bytes
- Size of remote file: 323 kB

examples/bad_2.png ADDED
Git LFS Details
- SHA256: ccd4c2d938e193378c613b3815740fd19c92b47b1d0c98d19ac92ebfe2807f38
- Pointer size: 131 Bytes
- Size of remote file: 319 kB

examples/good_1.png ADDED
Git LFS Details
- SHA256: 037caec46431f816af9724e526a1c16284c528ece62b09cafde6cea1a12163c2
- Pointer size: 131 Bytes
- Size of remote file: 298 kB
requirements.txt ADDED
@@ -0,0 +1,5 @@
+ gradio>=4.0.0
+ yolov5
+ torch
+ torchvision
+ opencv-python-headless