https://huggingface.co/hustvl/vitmatte-small-distinctions-646 with ONNX weights to be compatible with Transformers.js.

## Usage (Transformers.js)

If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:
```bash
npm i @xenova/transformers
```
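
If you are targeting the browser without a bundler, the library can also be loaded as an ES module from a CDN; a minimal sketch, assuming the jsDelivr mirror of the NPM package (the pinned version is illustrative):
```javascript
// Inside a <script type="module"> tag, import directly from the CDN
// instead of installing from NPM (version pin is illustrative):
import { AutoProcessor, VitMatteForImageMatting, RawImage } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.17.2';
```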

**Example:** Perform image matting with a `VitMatteForImageMatting` model.
```javascript
import { AutoProcessor, VitMatteForImageMatting, RawImage } from '@xenova/transformers';

// Load processor and model
const processor = await AutoProcessor.from_pretrained('Xenova/vitmatte-small-distinctions-646');
const model = await VitMatteForImageMatting.from_pretrained('Xenova/vitmatte-small-distinctions-646');

// Load image and trimap
const image = await RawImage.fromURL('https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/vitmatte_image.png');
const trimap = await RawImage.fromURL('https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/vitmatte_trimap.png');

// Prepare image + trimap for the model
const inputs = await processor(image, trimap);

// Predict alpha matte
const { alphas } = await model(inputs);
// Tensor {
//   dims: [ 1, 1, 640, 960 ],
//   type: 'float32',
//   size: 614400,
//   data: Float32Array(614400) [ 0.9894027709960938, 0.9970508813858032, ... ]
// }
```
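
The returned `alphas` tensor has dims `[1, 1, height, width]`, so individual alpha values can be read straight from its flat `data` array; a minimal sketch (the pixel coordinates below are hypothetical):
```javascript
// dims are [batch, channel, height, width]; the alpha for pixel (x, y)
// sits at flat index y * width + x in the underlying data array.
const width = alphas.dims[3]; // 960 in the example above
const x = 100, y = 50; // hypothetical pixel coordinates
const alpha = alphas.data[y * width + x];
console.log(alpha); // in [0, 1]: ~1 for foreground, ~0 for background
```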

You can visualize the alpha matte as follows:
```javascript
import { Tensor, cat } from '@xenova/transformers';

// Visualize predicted alpha matte
const imageTensor = new Tensor(
  'uint8',
  new Uint8Array(image.data),
  [image.height, image.width, image.channels]
).transpose(2, 0, 1);

// Convert float (0-1) alpha matte to uint8 (0-255)
const alphaChannel = alphas
  .squeeze(0)
  .mul_(255)
  .clamp_(0, 255)
  .round_()
  .to('uint8');

// Concatenate original image with predicted alpha
const imageData = cat([imageTensor, alphaChannel], 0);

// Save output image
const outputImage = RawImage.fromTensor(imageData);
outputImage.save('output.png');
```
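
Here, `cat([imageTensor, alphaChannel], 0)` concatenates along the channel axis, turning the `3 × height × width` RGB tensor into a `4 × height × width` RGBA tensor, so the predicted matte becomes the transparency channel of the saved PNG.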

Example inputs:
| Image | Trimap |
|--------|--------|
| ![vitmatte_image](https://github.com/xenova/transformers.js/assets/26504141/7317539e-c9f6-4a61-9542-4578ea7b6292) | ![vitmatte_trimap](https://github.com/xenova/transformers.js/assets/26504141/663ef260-fe2d-4b23-83cf-8f9a9b7ee593) |

Example outputs:
| Quantized | Unquantized |
|--------|--------|
| ![output_quantized](https://github.com/xenova/transformers.js/assets/26504141/00669063-1a7e-447d-947f-1e9e0beaa7c4) | ![output_unquantized](https://github.com/xenova/transformers.js/assets/26504141/437d8ccd-af82-4853-82c4-ae897ac112bf) |
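
Transformers.js v2 loads the quantized ONNX weights by default, which is what produces the "Quantized" output above. To get the unquantized result, you can pass `quantized: false` when loading the model; a minimal sketch:
```javascript
// Load the full-precision ONNX weights instead of the default quantized
// ones (larger download, output closer to the original model).
const model = await VitMatteForImageMatting.from_pretrained(
  'Xenova/vitmatte-small-distinctions-646',
  { quantized: false },
);
```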

---

Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).