katuni4ka committed · verified
Commit 1c50191 · 1 Parent(s): 76f206f

Update README.md

Files changed (1): README.md (+35 -3)

README.md CHANGED
@@ -2,7 +2,7 @@
 license: mit
 license_link: https://choosealicense.com/licenses/mit/
 ---
-# dolly-v2-7b -int8-ov
+# dolly-v2-7b-int8-ov
 * Model creator: [Databricks](https://huggingface.co/databricks)
 * Original model: [dolly-v2-7b ](https://huggingface.co/databricks/dolly-v2-7b)
 
@@ -26,7 +26,8 @@ The provided OpenVINO™ IR model is compatible with:
 * OpenVINO version 2024.1.0 and higher
 * Optimum Intel 1.16.0 and higher
 
-## Running Model Inference
+## Running Model Inference with [Optimum Intel](https://huggingface.co/docs/optimum/intel/index)
+
 
 1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend:
 
@@ -40,7 +41,7 @@ pip install optimum[openvino]
 from transformers import AutoTokenizer
 from optimum.intel.openvino import OVModelForCausalLM
 
-model_id = "OpenVINO/dolly-v2-7b -int8-ov"
+model_id = "OpenVINO/dolly-v2-7b-int8-ov"
 tokenizer = AutoTokenizer.from_pretrained(model_id)
 model = OVModelForCausalLM.from_pretrained(model_id)
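
The hunk above stops at model loading; the README's prompt and generation lines fall outside it (the next hunk header shows only `print(text)`). A minimal sketch of how the loaded Optimum Intel model is typically driven end to end; the prompt string and `max_new_tokens` value are illustrative assumptions, not lines from this commit:

```
from transformers import AutoTokenizer
from optimum.intel.openvino import OVModelForCausalLM

model_id = "OpenVINO/dolly-v2-7b-int8-ov"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)  # loads and compiles the OpenVINO IR

# Illustrative prompt and generation settings (not part of the diff)
inputs = tokenizer("Explain the difference between a CPU and a GPU.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
print(text)
```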
 
@@ -53,6 +54,37 @@ print(text)
 
 For more examples and possible optimizations, refer to the [OpenVINO Large Language Model Inference Guide](https://docs.openvino.ai/2024/learn-openvino/llm_inference_guide.html).
 
+## Running Model Inference with [OpenVINO GenAI](https://github.com/openvinotoolkit/openvino.genai)
+
+1. Install packages required for using OpenVINO GenAI:
+```
+pip install openvino-genai huggingface_hub
+```
+
+2. Download the model from the Hugging Face Hub:
+
+```
+import huggingface_hub as hf_hub
+
+model_id = "OpenVINO/dolly-v2-7b-int8-ov"
+model_path = "dolly-v2-7b-int8-ov"
+
+hf_hub.snapshot_download(model_id, local_dir=model_path)
+```
+
+3. Run model inference:
+
+```
+import openvino_genai as ov_genai
+
+device = "CPU"
+pipe = ov_genai.LLMPipeline(model_path, device)
+print(pipe.generate("What is OpenVINO?"))
+```
+
+More GenAI usage examples can be found in the OpenVINO GenAI library [docs](https://github.com/openvinotoolkit/openvino.genai/blob/master/src/README.md) and [samples](https://github.com/openvinotoolkit/openvino.genai?tab=readme-ov-file#openvino-genai-samples).
+
 ## Limitations
 
 Check the original model card for [limitations](https://huggingface.co/databricks/dolly-v2-7b#known-limitations).
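
As a possible follow-up to step 3 of the added GenAI section: `LLMPipeline.generate` also accepts generation parameters such as `max_new_tokens`. A minimal sketch, assuming the `dolly-v2-7b-int8-ov` folder produced by the `snapshot_download` step; the prompt and token limit are illustrative:

```
import openvino_genai as ov_genai

model_path = "dolly-v2-7b-int8-ov"  # local folder from the snapshot_download step

# "CPU" can be replaced with another available OpenVINO device, e.g. "GPU"
pipe = ov_genai.LLMPipeline(model_path, "CPU")

# Cap the response length; the value here is illustrative
print(pipe.generate("What is OpenVINO?", max_new_tokens=100))
```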