Xenova committed
Commit 316c8c3
Parent: 2614a84

Create README.md

Files changed (1): README.md (+57, -0)

README.md (new file):
---
library_name: transformers.js
pipeline_tag: object-detection
license: agpl-3.0
---

# YOLOv10: Real-Time End-to-End Object Detection

ONNX weights for https://github.com/THU-MIG/yolov10.

Latency-accuracy trade-offs | Size-accuracy trade-offs
:-------------------------:|:-------------------------:
![latency-accuracy trade-offs](https://cdn-uploads.huggingface.co/production/uploads/61b253b7ac5ecaae3d1efe0c/cXru_kY_pRt4n4mHERnFp.png) | ![size-accuracy trade-offs](https://cdn-uploads.huggingface.co/production/uploads/61b253b7ac5ecaae3d1efe0c/8apBp9fEZW2gHVdwBN-nC.png)

## Usage (Transformers.js)

If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:
```bash
npm i @xenova/transformers
```
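
Alternatively, if you are running directly in the browser without a bundler, the [Transformers.js documentation](https://huggingface.co/docs/transformers.js) also describes loading the library as an ES module from a CDN. A minimal sketch (placed inside a `<script type="module">` tag; the pinned version is illustrative, not prescribed by this repo):

```js
// Sketch: CDN-based ES module import for browser usage (no npm/bundler).
// The version pin below is illustrative; use the release you prefer.
import { AutoModel, AutoProcessor, RawImage } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.17.2';
```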

**Example:** Perform object-detection.
```js
import { AutoModel, AutoProcessor, RawImage } from '@xenova/transformers';

// Load model
const model = await AutoModel.from_pretrained('onnx-community/yolov10x', {
    // quantized: false, // (Optional) Use unquantized version.
});

// Load processor
const processor = await AutoProcessor.from_pretrained('onnx-community/yolov10x');

// Read image and run processor
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/city-streets.jpg';
const image = await RawImage.read(url);
const { pixel_values } = await processor(image);

// Run object detection
const { output0 } = await model({ images: pixel_values });
const predictions = output0.tolist()[0];
const threshold = 0.5;
for (const [xmin, ymin, xmax, ymax, score, id] of predictions) {
    if (score < threshold) continue;
    const bbox = [xmin, ymin, xmax, ymax].map(x => x.toFixed(2)).join(', ');
    console.log(`Found "${model.config.id2label[id]}" at [${bbox}] with score ${score.toFixed(2)}.`);
}
```

Example output:
```
Found "car" at [177.70, 336.97, 398.84, 417.47] with score 0.97.
Found "car" at [447.32, 378.86, 639.43, 478.14] with score 0.97.
Found "person" at [473.79, 430.18, 533.20, 532.84] with score 0.95.
Found "bicycle" at [352.02, 526.71, 463.56, 588.08] with score 0.93.
Found "bicycle" at [1.32, 517.64, 109.91, 584.40] with score 0.92.
Found "bicycle" at [449.09, 478.36, 555.36, 537.83] with score 0.91.
Found "person" at [550.20, 261.00, 591.49, 332.14] with score 0.90.
Found "person" at [392.72, 481.26, 442.78, 586.88] with score 0.89.
// ...
```
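
The detections above appear to be reported in the coordinate space of the processed input (`pixel_values`) rather than the original image. If that is the case and you want to draw the boxes on the original image, you can map them back to the image's own dimensions. A minimal sketch that continues the example, assuming the processor performs a plain resize (no letterboxing/padding); the `rescale` helper is hypothetical and not part of the library:

```js
// Hypothetical helper (not part of Transformers.js): map a box from the
// processed-input pixel space back to the original image's dimensions.
// Assumes a plain resize with no padding/letterboxing.
const [inputHeight, inputWidth] = pixel_values.dims.slice(-2);
function rescale([xmin, ymin, xmax, ymax]) {
    return [
        xmin / inputWidth * image.width,
        ymin / inputHeight * image.height,
        xmax / inputWidth * image.width,
        ymax / inputHeight * image.height,
    ];
}

// Example: rescale the first prediction's box (extra elements are ignored).
// const [x1, y1, x2, y2] = rescale(predictions[0]);
```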