---
tags:
- ultralyticsplus
- yolov8
- ultralytics
- yolo
- vision
- object-detection
- pytorch

library_name: ultralytics
library_version: 8.0.14
inference: false

datasets:
- keremberke/forklift-object-detection

model-index:
- name: keremberke/yolov8n-forklift-detection
  results:
  - task:
      type: object-detection

    dataset:
      type: keremberke/forklift-object-detection
      name: forklift-object-detection
      split: validation

    metrics:
      - type: precision  # since mAP@0.5 is not available on hf.co/metrics
        value: 0.81163  # min: 0.0 - max: 1.0
        name: mAP@0.5(box)
---

<div align="center">
  <img width="640" alt="keremberke/yolov8n-forklift-detection" src="https://huggingface.co/keremberke/yolov8n-forklift-detection/resolve/main/thumbnail.jpg">
</div>

### Supported Labels

```
['forklift', 'person']
```
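The model's predicted class indices map to these names in order (`0 → 'forklift'`, `1 → 'person'`). A minimal sketch of that mapping, with hypothetical example indices standing in for real predictions:

```python
# Class names in index order, as listed above.
names = ['forklift', 'person']

# Hypothetical predicted class indices (in ultralytics these come
# from results[0].boxes.cls, typically as float tensors).
cls_ids = [0.0, 1.0, 0.0]

# Convert indices to human-readable labels.
labels = [names[int(i)] for i in cls_ids]
print(labels)  # ['forklift', 'person', 'forklift']
```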

### How to use

- Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus):

```bash
pip install ultralyticsplus==0.0.17
```

- Load model and perform prediction:

```python
from ultralyticsplus import YOLO, render_result

# load model
model = YOLO('keremberke/yolov8n-forklift-detection')

# set model parameters
model.overrides['conf'] = 0.25  # NMS confidence threshold
model.overrides['iou'] = 0.45  # NMS IoU threshold
model.overrides['agnostic_nms'] = False  # whether NMS is class-agnostic
model.overrides['max_det'] = 1000  # maximum number of detections per image

# set image
image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'

# perform inference
results = model.predict(image)

# observe results
print(results[0].boxes)
render = render_result(model=model, image=image, result=results[0])
render.show()
```
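The `conf` and `iou` overrides set above control post-processing, not the network itself: detections below the confidence threshold are discarded, and non-maximum suppression drops boxes that overlap a higher-scoring box by more than the IoU threshold. A minimal pure-Python sketch of that logic (the `iou` and `nms` helpers below are illustrative, not part of the ultralyticsplus API):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, conf_thres=0.25, iou_thres=0.45):
    """Keep high-confidence boxes; suppress overlaps above iou_thres.

    Each box is an (x1, y1, x2, y2, score) tuple.
    """
    boxes = sorted((b for b in boxes if b[4] >= conf_thres),
                   key=lambda b: b[4], reverse=True)
    kept = []
    for b in boxes:
        if all(iou(b[:4], k[:4]) < iou_thres for k in kept):
            kept.append(b)
    return kept

detections = [
    (10, 10, 110, 110, 0.9),    # kept: highest score
    (12, 12, 108, 108, 0.6),    # suppressed: overlaps the box above
    (200, 200, 300, 300, 0.8),  # kept: no overlap with kept boxes
    (0, 0, 50, 50, 0.1),        # dropped: below confidence threshold
]
print(nms(detections))  # two boxes survive
```

Raising `iou` keeps more overlapping boxes; lowering `conf` admits weaker detections.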