https://huggingface.co/CIDAS/clipseg-rd64-refined with ONNX weights to be compatible with Transformers.js.

## Usage (Transformers.js)

If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:
```bash
npm i @xenova/transformers
```

**Example:** Perform zero-shot image segmentation with a `CLIPSegForImageSegmentation` model.

```js
import { AutoTokenizer, AutoProcessor, CLIPSegForImageSegmentation, RawImage } from '@xenova/transformers';

// Load tokenizer, processor, and model
const tokenizer = await AutoTokenizer.from_pretrained('Xenova/clipseg-rd64-refined');
const processor = await AutoProcessor.from_pretrained('Xenova/clipseg-rd64-refined');
const model = await CLIPSegForImageSegmentation.from_pretrained('Xenova/clipseg-rd64-refined');

// Run tokenization
const texts = ['a glass', 'something to fill', 'wood', 'a jar'];
const text_inputs = tokenizer(texts, { padding: true, truncation: true });

// Read image and run processor
const image = await RawImage.read('https://github.com/timojl/clipseg/blob/master/example_image.jpg?raw=true');
const image_inputs = await processor(image);

// Run model with both text and pixel inputs
const { logits } = await model({ ...text_inputs, ...image_inputs });
// logits: Tensor {
//   dims: [4, 352, 352],
//   type: 'float32',
//   data: Float32Array(495616) [ ... ],
//   size: 495616
// }
```
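
By default, Transformers.js loads the quantized ONNX weights from the `onnx` subfolder of this repository. If you prefer the full-precision weights (larger download, potentially slightly cleaner masks), you should be able to request them via the `quantized` option of `from_pretrained` (a minimal sketch; option name as in Transformers.js v2):

```js
// Assumption: `quantized: false` selects the unquantized ONNX weights (the default is quantized).
const model = await CLIPSegForImageSegmentation.from_pretrained('Xenova/clipseg-rd64-refined', {
  quantized: false,
});
```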

You can visualize the predictions as follows:
```js
// Visualize images
const preds = logits
  .unsqueeze_(1)
  .sigmoid_()
  .mul_(255)
  .round_()
  .to('uint8');

for (let i = 0; i < preds.dims[0]; ++i) {
  const img = RawImage.fromTensor(preds[i]);
  img.save(`prediction_${i}.png`);
}
```

| Original | `"a glass"` | `"something to fill"` | `"wood"` | `"a jar"` |
|--------|--------|--------|--------|--------|
| ![image](https://cdn-uploads.huggingface.co/production/uploads/61b253b7ac5ecaae3d1efe0c/B4wAIseP3SokRd7Flu1Y9.png) | ![prediction_0](https://cdn-uploads.huggingface.co/production/uploads/61b253b7ac5ecaae3d1efe0c/s3WBtlA9CyZmm9F5lrOG3.png) | ![prediction_1](https://cdn-uploads.huggingface.co/production/uploads/61b253b7ac5ecaae3d1efe0c/v4_3JqhAZSfOg60v5x1C2.png) | ![prediction_2](https://cdn-uploads.huggingface.co/production/uploads/61b253b7ac5ecaae3d1efe0c/MjZLENI9RMaMCGyk6G6V1.png) | ![prediction_3](https://cdn-uploads.huggingface.co/production/uploads/61b253b7ac5ecaae3d1efe0c/dIHO76NAPTMt9-677yNkg.png) |
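
These predictions are grayscale heatmaps (per-pixel probabilities scaled to 0-255). If you need hard binary masks instead, one option is to threshold those values, e.g. at 128 (roughly a sigmoid probability of 0.5). A minimal sketch that reuses the `preds` tensor from the snippet above; the threshold and the `mask_*.png` filenames are arbitrary choices:

```js
import { Tensor } from '@xenova/transformers';

// Turn each 0-255 heatmap into a hard 0/255 mask by thresholding at 128
// (i.e., sigmoid probability above ~0.5). Adjust the threshold to your use case.
const [batch, channels, height, width] = preds.dims; // [4, 1, 352, 352]
for (let i = 0; i < batch; ++i) {
  const mask = new Uint8Array(height * width);
  const offset = i * channels * height * width;
  for (let j = 0; j < mask.length; ++j) {
    mask[j] = preds.data[offset + j] >= 128 ? 255 : 0;
  }
  const img = RawImage.fromTensor(new Tensor('uint8', mask, [1, height, width]));
  img.save(`mask_${i}.png`);
}
```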

---

Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
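
For example, with 🤗 Optimum installed, the export could look roughly like this (a sketch, not the exact command used for this repository; support for the CLIPSeg architecture depends on your Optimum version):

```bash
# Install Optimum with ONNX export support
pip install "optimum[exporters]"

# Export the original PyTorch checkpoint to ONNX (output folder name is arbitrary)
optimum-cli export onnx --model CIDAS/clipseg-rd64-refined clipseg_onnx/
```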