---
license: apache-2.0
---

# neural-chat-7b-v3-3-fp16-ov

 * Model creator: [Intel](https://huggingface.co/Intel)
 * Original model: [neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3)

## Description

This is the [neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3) model converted to the [OpenVINO™ IR](https://docs.openvino.ai/2024/documentation/openvino-ir-format.html) (Intermediate Representation) format, with weights stored in FP16 precision.

## Compatibility

The provided OpenVINO™ IR model is compatible with:

* OpenVINO version 2024.1.0 and higher
* Optimum Intel 1.16.0 and higher
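
To confirm that your environment meets these requirements, you can check the installed versions (the PyPI package names are `openvino` and `optimum-intel`):

```
from importlib.metadata import version

print("OpenVINO:", version("openvino"))            # expected: 2024.1.0 or higher
print("Optimum Intel:", version("optimum-intel"))  # expected: 1.16.0 or higher
```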

## Running Model Inference with [Optimum Intel](https://huggingface.co/docs/optimum/intel/index)

1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend:

    ```
    pip install optimum[openvino]
    ```

2. Run model inference:

    ```
    from transformers import AutoTokenizer
    from optimum.intel.openvino import OVModelForCausalLM

    model_id = "OpenVINO/neural-chat-7b-v3-3-fp16-ov"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # OVModelForCausalLM loads the OpenVINO IR directly; no conversion step is needed.
    model = OVModelForCausalLM.from_pretrained(model_id)

    inputs = tokenizer("What is OpenVINO?", return_tensors="pt")

    outputs = model.generate(**inputs, max_length=200)
    text = tokenizer.batch_decode(outputs)[0]
    print(text)
    ```
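
Since neural-chat-7b-v3-3 is a chat model, better results usually come from the prompt format documented in the original model card. A minimal sketch (the `### System:` / `### User:` / `### Assistant:` template is taken from the [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3) card; verify it there before relying on it):

```
from transformers import AutoTokenizer
from optimum.intel.openvino import OVModelForCausalLM

model_id = "OpenVINO/neural-chat-7b-v3-3-fp16-ov"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)

# Prompt template documented in the original Intel model card:
#   ### System:\n{system}\n### User:\n{user}\n### Assistant:\n
prompt = (
    "### System:\nYou are a helpful assistant.\n"
    "### User:\nWhat is OpenVINO?\n"
    "### Assistant:\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```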

For more examples and possible optimizations, refer to the [OpenVINO Large Language Model Inference Guide](https://docs.openvino.ai/2024/learn-openvino/llm_inference_guide.html).

## Running Model Inference with [OpenVINO GenAI](https://github.com/openvinotoolkit/openvino.genai)

1. Install packages required for using OpenVINO GenAI:

    ```
    pip install openvino-genai huggingface_hub
    ```

2. Download the model from the Hugging Face Hub:

    ```
    import huggingface_hub as hf_hub

    model_id = "OpenVINO/neural-chat-7b-v3-3-fp16-ov"
    model_path = "neural-chat-7b-v3-3-fp16-ov"

    # Download the OpenVINO IR files into a local directory.
    hf_hub.snapshot_download(model_id, local_dir=model_path)
    ```

3. Run model inference:

    ```
    import openvino_genai as ov_genai

    # Change to "GPU" to run on a supported Intel GPU.
    device = "CPU"
    pipe = ov_genai.LLMPipeline(model_path, device)
    print(pipe.generate("What is OpenVINO?"))
    ```
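
Generation can also be tuned and streamed through keyword arguments on `generate`; a minimal sketch (the `max_new_tokens` parameter and the callable `streamer` follow the openvino-genai Python API as used in its samples; check the docs linked below if your version differs):

```
import openvino_genai as ov_genai

model_path = "neural-chat-7b-v3-3-fp16-ov"
pipe = ov_genai.LLMPipeline(model_path, "CPU")

# Print tokens as they are produced; returning False tells the
# pipeline to keep generating rather than stop early.
def streamer(subword):
    print(subword, end="", flush=True)
    return False

pipe.generate("What is OpenVINO?", max_new_tokens=200, streamer=streamer)
```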

More GenAI usage examples can be found in the OpenVINO GenAI library [docs](https://github.com/openvinotoolkit/openvino.genai/blob/master/src/README.md) and [samples](https://github.com/openvinotoolkit/openvino.genai?tab=readme-ov-file#openvino-genai-samples).

## Limitations

Check the original model card for [limitations](https://huggingface.co/Intel/neural-chat-7b-v3-3#ethical-considerations-and-limitations).

## Legal information

The original model is distributed under the [Apache 2.0](https://choosealicense.com/licenses/apache-2.0/) license. More details can be found in the [original model card](https://huggingface.co/Intel/neural-chat-7b-v3-3).

## Disclaimer

Intel is committed to respecting human rights and avoiding causing or contributing to adverse impacts on human rights. See [Intel’s Global Human Rights Principles](https://www.intel.com/content/dam/www/central-libraries/us/en/documents/policy-human-rights.pdf). Intel’s products and software are intended only to be used in applications that do not cause or contribute to adverse impacts on human rights.