---
license: apache-2.0
---

# codegen2-3_7B_P-int4-ov

* Model creator: [Salesforce](https://huggingface.co/Salesforce)
* Original model: [codegen2-3_7B_P](https://huggingface.co/Salesforce/codegen2-3_7B_P)

## Description

This is the [codegen2-3_7B_P](https://huggingface.co/Salesforce/codegen2-3_7B_P) model converted to the [OpenVINO™ IR](https://docs.openvino.ai/2024/documentation/openvino-ir-format.html) (Intermediate Representation) format, with weights compressed to INT4 by [NNCF](https://github.com/openvinotoolkit/nncf).

## Quantization Parameters

Weight compression was performed using `nncf.compress_weights` with the following parameters:

* mode: **INT4_SYM**
* group_size: **128**
* ratio: **1**
* sensitivity_metric: **weight_quantization_error**

For more information on quantization, check the [OpenVINO model optimization guide](https://docs.openvino.ai/2024/openvino-workflow/model-optimization-guide/weight-compression.html).
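
To reproduce a comparable compression yourself, the sketch below applies `nncf.compress_weights` with the parameters listed above to a full-precision OpenVINO IR of the original model. It is a minimal sketch, not the exact script used for this repository; the file paths and the export command in the comments are illustrative assumptions.

```
import nncf
import openvino as ov

# Assumes a full-precision OpenVINO IR of the original model has already been
# exported, e.g. with:
#   optimum-cli export openvino --model Salesforce/codegen2-3_7B_P codegen2-3_7B_P-fp
# The paths below are illustrative.
core = ov.Core()
model = core.read_model("codegen2-3_7B_P-fp/openvino_model.xml")

# Apply the parameters listed above: symmetric 4-bit weights, group size 128,
# all weights compressed (ratio=1), sensitivity measured by weight quantization error.
compressed_model = nncf.compress_weights(
    model,
    mode=nncf.CompressWeightsMode.INT4_SYM,
    group_size=128,
    ratio=1.0,
    sensitivity_metric=nncf.SensitivityMetric.WEIGHT_QUANTIZATION_ERROR,
)

ov.save_model(compressed_model, "codegen2-3_7B_P-int4-ov/openvino_model.xml")
```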

## Compatibility

The provided OpenVINO™ IR model is compatible with:

* OpenVINO version 2024.1.0 and higher
* Optimum Intel 1.16.0 and higher
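
To set up an environment that meets these minimums, a pinned install along the following lines should work (package names as published on PyPI; adjust the pins to your needs):

```
pip install "openvino>=2024.1.0" "optimum-intel[openvino]>=1.16.0"
```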

## Running Model Inference

1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend:

```
pip install optimum[openvino]
```

2. Run model inference:

```
from transformers import AutoTokenizer
from optimum.intel.openvino import OVModelForCausalLM

model_id = "OpenVINO/codegen2-3_7B_P-int4-ov"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("What is OpenVINO?", return_tensors="pt")

outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```

For more examples and possible optimizations, refer to the [OpenVINO Large Language Model Inference Guide](https://docs.openvino.ai/2024/learn-openvino/llm_inference_guide.html).
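
As one example of such an adjustment, Optimum Intel lets you choose the inference device and pass OpenVINO runtime options when loading the model. The snippet below is a minimal sketch, assuming an Intel GPU is available; the `GPU` device name and the `PERFORMANCE_HINT` setting are illustrative and depend on your hardware and OpenVINO installation.

```
from optimum.intel.openvino import OVModelForCausalLM

# Load the INT4 model on an Intel GPU instead of the default CPU device.
# `device` selects the OpenVINO device; `ov_config` passes runtime properties.
model = OVModelForCausalLM.from_pretrained(
    "OpenVINO/codegen2-3_7B_P-int4-ov",
    device="GPU",
    ov_config={"PERFORMANCE_HINT": "LATENCY"},
)
```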

## Limitations

Check the original model card for [limitations](https://huggingface.co/Salesforce/codegen2-3_7B_P#intended-use-and-limitations).

## Legal information

The original model is distributed under the Apache 2.0 license. More details can be found in the [original model card](https://huggingface.co/Salesforce/codegen2-3_7B_P).