|
---
library_name: transformers.js
base_model: nielsr/vitpose-base-simple
pipeline_tag: keypoint-detection
---
|
|
|
This repository contains https://huggingface.co/nielsr/vitpose-base-simple with ONNX weights, making it compatible with Transformers.js.
|
|
|
|
|
## Usage (Transformers.js) |
|
|
|
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using: |
|
```bash |
|
npm i @huggingface/transformers |
|
``` |
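Alternatively, if you are not using a bundler, you can import the library in the browser directly from a CDN as an ES module (the pinned version below is illustrative; pin whichever release you need):

```js
// Inside a <script type="module"> tag (CDN version shown is illustrative)
import { AutoModel, AutoImageProcessor, RawImage } from 'https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.0.0';
```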
|
|
|
**Example:** Pose estimation w/ `onnx-community/vitpose-base-simple`. |
|
```js
import { AutoModel, AutoImageProcessor, RawImage } from '@huggingface/transformers';

// Load model and processor
const model_id = 'onnx-community/vitpose-base-simple';
const model = await AutoModel.from_pretrained(model_id);
const processor = await AutoImageProcessor.from_pretrained(model_id);

// Load image and prepare inputs
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/ryan-gosling.jpg';
const image = await RawImage.read(url);
const inputs = await processor(image);

// Predict heatmaps
const { heatmaps } = await model(inputs);

// Post-process heatmaps to get keypoints and scores
const boxes = [[[0, 0, image.width, image.height]]];
const results = processor.post_process_pose_estimation(heatmaps, boxes)[0][0];
console.log(results);
```
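The `results` object holds the detected pose for the (single) box we supplied. As a rough sketch of how you might consume it (this assumes `results.keypoints` is an array of `[x, y]` pairs with a matching per-keypoint `results.scores` array, consistent with the "keypoints and scores" produced above; the threshold value is illustrative):

```js
// Keep only keypoints above a confidence threshold (illustrative value)
const CONFIDENCE_THRESHOLD = 0.3;
const confident = results.keypoints.filter((_, i) => results.scores[i] > CONFIDENCE_THRESHOLD);
console.log(`Kept ${confident.length} of ${results.keypoints.length} keypoints`);
```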
|
|
|
Optionally, visualize the outputs (continuing from the snippet above; Node.js usage shown here, using the [`canvas`](https://www.npmjs.com/package/canvas) library):
|
```js
import fs from 'fs';
import { createCanvas, createImageData } from 'canvas';

// Create canvas and draw image
const canvas = createCanvas(image.width, image.height);
const ctx = canvas.getContext('2d');
const imageData = createImageData(image.rgba().data, image.width, image.height);
ctx.putImageData(imageData, 0, 0);

// Draw edges between keypoints
const points = results.keypoints;
ctx.lineWidth = 4;
ctx.strokeStyle = 'blue';
for (const [i, j] of model.config.edges) {
  const [x1, y1] = points[i];
  const [x2, y2] = points[j];
  ctx.beginPath();
  ctx.moveTo(x1, y1);
  ctx.lineTo(x2, y2);
  ctx.stroke();
}

// Draw circle at each keypoint
ctx.fillStyle = 'red';
for (const [x, y] of points) {
  ctx.beginPath();
  ctx.arc(x, y, 8, 0, 2 * Math.PI);
  ctx.fill();
}

// Save image to file
const out = fs.createWriteStream('pose.png');
const stream = canvas.createPNGStream();
stream.pipe(out);
out.on('finish', () => console.log('The PNG file was created.'));
```
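The drawing code above also works in the browser with a regular `<canvas>` element; only the setup and output steps differ. A minimal sketch, assuming a DOM is available (the `ImageData` constructor requires a `Uint8ClampedArray`):

```js
// Browser equivalent of the node-canvas setup above (sketch)
const canvas = document.createElement('canvas');
canvas.width = image.width;
canvas.height = image.height;
const ctx = canvas.getContext('2d');

// Wrap the RGBA pixel data from RawImage in an ImageData and draw it
const imageData = new ImageData(
  new Uint8ClampedArray(image.rgba().data),
  image.width,
  image.height,
);
ctx.putImageData(imageData, 0, 0);

// ...then draw edges and keypoints exactly as above, and display the result
document.body.appendChild(canvas);
```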
|
|
|
| Input image | Output image |
| :----------:|:------------:|
| ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/61b253b7ac5ecaae3d1efe0c/QpXlLNyLDKZUxXjokbUyy.jpeg) | ![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b253b7ac5ecaae3d1efe0c/xj0jaKo9aAOux-NSU8U7S.png) |
|
|
|
--- |
|
|
|
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`). |
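For reference, a conversion with the Optimum CLI could look like the following (a sketch; exact support depends on your Optimum version and whether the exporter covers the architecture):

```bash
# Export the PyTorch checkpoint to ONNX (output directory name is illustrative)
optimum-cli export onnx --model nielsr/vitpose-base-simple vitpose-base-simple-onnx/
```

The exported `*.onnx` files would then go in an `onnx/` subfolder of the model repository, mirroring the layout of this repo.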