  https://huggingface.co/stabilityai/stablelm-2-zephyr-1_6b with ONNX weights to be compatible with Transformers.js.
## Usage (Transformers.js)

If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:
```bash
npm i @xenova/transformers
```

**Example:** Text generation with `Xenova/stablelm-2-zephyr-1_6b`.

```js
import { pipeline } from '@xenova/transformers';

// Create text generation pipeline
const generator = await pipeline('text-generation', 'Xenova/stablelm-2-zephyr-1_6b');

// Define the prompt and list of messages
const prompt = "Tell me a funny joke.";
const messages = [
  { "role": "system", "content": "You are a helpful assistant." },
  { "role": "user", "content": prompt },
];

// Apply chat template
const inputs = generator.tokenizer.apply_chat_template(messages, {
  tokenize: false,
  add_generation_prompt: true,
});

// Generate text
const output = await generator(inputs, { max_new_tokens: 20 });
console.log(output[0].generated_text);
// "<|system|>\nYou are a helpful assistant.\n<|user|>\nTell me a funny joke.\n<|assistant|>\nHere's a joke for you:\n\nWhy don't scientists trust atoms?\n\nBecause they make up everything!"
```

---

Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
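As a rough sketch, the Optimum conversion mentioned above might look like the following (the output folder name is illustrative; you would replace the model ID with your own, and extra quantization or optimization flags may be needed depending on your model):

```shell
# Install Optimum with ONNX export support
pip install "optimum[exporters]"

# Export the PyTorch weights to ONNX, writing the results into an `onnx` subfolder
optimum-cli export onnx --model stabilityai/stablelm-2-zephyr-1_6b onnx/
```

After exporting, upload the repo with the `onnx` subfolder alongside the original config and tokenizer files so Transformers.js can locate the weights.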