Xenova (HF staff) committed b706cad · verified · 1 Parent(s): e7448f7

Create README.md

Files changed (1): README.md (+57 −0)
README.md ADDED
---
library_name: transformers.js
pipeline_tag: object-detection
license: agpl-3.0
---

# YOLOv10: Real-Time End-to-End Object Detection

ONNX weights for https://github.com/THU-MIG/yolov10.

Latency-accuracy trade-offs | Size-accuracy trade-offs
:-------------------------:|:-------------------------:
![latency-accuracy trade-offs](https://cdn-uploads.huggingface.co/production/uploads/61b253b7ac5ecaae3d1efe0c/cXru_kY_pRt4n4mHERnFp.png) | ![size-accuracy trade-offs](https://cdn-uploads.huggingface.co/production/uploads/61b253b7ac5ecaae3d1efe0c/8apBp9fEZW2gHVdwBN-nC.png)

## Usage (Transformers.js)

If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:
```bash
npm i @xenova/transformers
```

**Example:** Perform object detection.
```js
import { AutoModel, AutoProcessor, RawImage } from '@xenova/transformers';

// Load model
const model = await AutoModel.from_pretrained('onnx-community/yolov10n', {
    // quantized: false, // (Optional) Use unquantized version.
});

// Load processor
const processor = await AutoProcessor.from_pretrained('onnx-community/yolov10n');

// Read image and run processor
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/city-streets.jpg';
const image = await RawImage.read(url);
const { pixel_values } = await processor(image);

// Run object detection
const { output0 } = await model({ images: pixel_values });
const predictions = output0.tolist()[0];
const threshold = 0.5;
for (const [xmin, ymin, xmax, ymax, score, id] of predictions) {
    if (score < threshold) continue;
    const bbox = [xmin, ymin, xmax, ymax].map(x => x.toFixed(2)).join(', ');
    console.log(`Found "${model.config.id2label[id]}" at [${bbox}] with score ${score.toFixed(2)}.`);
}
// Example output:
// Found "car" at [447.54, 378.72, 640.04, 478.45] with score 0.93.
// Found "car" at [179.04, 339.41, 398.66, 416.86] with score 0.90.
// Found "bicycle" at [2.13, 518.43, 110.29, 584.21] with score 0.88.
// Found "bicycle" at [352.12, 521.97, 464.05, 588.28] with score 0.85.
// Found "person" at [550.97, 258.75, 591.22, 332.01] with score 0.85.
// Found "bicycle" at [449.07, 473.14, 556.22, 537.92] with score 0.83.
// Found "person" at [31.36, 469.01, 79.16, 572.99] with score 0.82.
// Found "person" at [473.11, 430.45, 533.71, 527.05] with score 0.79.
// ...
```
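
The coordinates printed above top out near 640, i.e. they appear to be in the coordinate space of the resized model input rather than the original image. Below is a minimal, non-authoritative sketch of how you might rescale them back to the original image size for display; it continues from the variables in the example above and assumes the box coordinates are in the resized input space, whose size is read from `pixel_values.dims`.

```js
// Sketch: rescale detections from model-input coordinates to the original image.
// Assumption: boxes in `output0` are expressed in the resized input space.
const [, , inputHeight, inputWidth] = pixel_values.dims; // [batch, channels, height, width]
const scaleX = image.width / inputWidth;
const scaleY = image.height / inputHeight;

const detections = predictions
    .filter(([, , , , score]) => score >= threshold)
    .map(([xmin, ymin, xmax, ymax, score, id]) => ({
        label: model.config.id2label[id],
        score,
        box: {
            xmin: xmin * scaleX,
            ymin: ymin * scaleY,
            xmax: xmax * scaleX,
            ymax: ymax * scaleY,
        },
    }));
console.log(detections);
```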