sylwia-kuros committed
Commit bcda3ba
1 Parent(s): 3cc459f

Create README.md
Files changed (1): README.md (+65, -0)
README.md ADDED
---
license: bigcode-openrail-m
---

# starcoder2-3b-int4-ov

* Model creator: [BigCode](https://huggingface.co/bigcode)
* Original model: [bigcode/starcoder2-3b](https://huggingface.co/bigcode/starcoder2-3b)

## Description

This is the [bigcode/starcoder2-3b](https://huggingface.co/bigcode/starcoder2-3b) model converted to the [OpenVINO™ IR](https://docs.openvino.ai/2024/documentation/openvino-ir-format.html) (Intermediate Representation) format, with weights compressed to INT4 by [NNCF](https://github.com/openvinotoolkit/nncf).

## Quantization Parameters

Weight compression was performed using `nncf.compress_weights` with the following parameters:

* mode: **INT4_SYM**
* group_size: **128**
* ratio: **1.0**

For more information on quantization, check the [OpenVINO model optimization guide](https://docs.openvino.ai/2024/openvino-workflow/model-optimization-guide/weight-compression.html).

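As a rough illustration, a compression step with these parameters might look like the sketch below; the IR paths and the preceding export step are assumptions made for the example, not artifacts of this repository.

```
# Sketch only: assumes the full-precision model has already been exported to
# OpenVINO IR, e.g. with `optimum-cli export openvino --model bigcode/starcoder2-3b`.
import nncf
import openvino as ov

core = ov.Core()
ov_model = core.read_model("starcoder2-3b-fp16/openvino_model.xml")  # hypothetical path

compressed_model = nncf.compress_weights(
    ov_model,
    mode=nncf.CompressWeightsMode.INT4_SYM,  # symmetric 4-bit integer weights
    group_size=128,                          # quantization groups of 128 weights
    ratio=1.0,                               # compress all eligible layers to INT4
)

ov.save_model(compressed_model, "starcoder2-3b-int4-ov/openvino_model.xml")
```
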
## Compatibility

The provided OpenVINO™ IR model is compatible with:

* OpenVINO version 2024.1.0 and higher
* Optimum Intel 1.16.0 and higher

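A quick way to confirm from Python that the installed versions meet these requirements (a convenience sketch; the distribution names are the standard PyPI ones):

```
# Check installed package versions against the minimums listed above.
from importlib.metadata import version

print("openvino:", version("openvino"))            # expect 2024.1.0 or higher
print("optimum-intel:", version("optimum-intel"))  # expect 1.16.0 or higher
```
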
## Running Model Inference

1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend:

```
pip install optimum[openvino]
```

2. Run model inference:

```
from transformers import AutoTokenizer
from optimum.intel.openvino import OVModelForCausalLM

model_id = "OpenVINO/starcoder2-3b-int4-ov"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("What is OpenVINO?", return_tensors="pt")

outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```

For more examples and possible optimizations, refer to the [OpenVINO Large Language Model Inference Guide](https://docs.openvino.ai/2024/learn-openvino/llm_inference_guide.html).

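As one example of such an option, Optimum Intel allows OpenVINO-specific settings to be passed when the model is loaded; the device and performance hint below are illustrative assumptions, not recommended values:

```
# Sketch: load the same model with an explicit device and an OpenVINO performance hint.
from optimum.intel.openvino import OVModelForCausalLM

model = OVModelForCausalLM.from_pretrained(
    "OpenVINO/starcoder2-3b-int4-ov",
    device="CPU",                               # or "GPU" for an Intel GPU
    ov_config={"PERFORMANCE_HINT": "LATENCY"},  # favor low latency over throughput
)
```
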
## Legal information

The original model is distributed under the [bigcode-openrail-m](https://www.bigcode-project.org/docs/pages/bigcode-openrail/) license. More details can be found in [bigcode/starcoder2-3b](https://huggingface.co/bigcode/starcoder2-3b).

## Disclaimer

Intel is committed to respecting human rights and avoiding causing or contributing to adverse impacts on human rights. See [Intel’s Global Human Rights Principles](https://www.intel.com/content/dam/www/central-libraries/us/en/documents/policy-human-rights.pdf). Intel’s products and software are intended only to be used in applications that do not cause or contribute to adverse impacts on human rights.