---
license: mit
datasets:
- databricks/databricks-dolly-15k
language:
- en
tags:
- openvino
pipeline_tag: text-generation
---
|
|
|
# databricks/dolly-v2-3b |
|
|
|
This is the [databricks/dolly-v2-3b](https://huggingface.co/databricks/dolly-v2-3b) model converted to [OpenVINO](https://openvino.ai) for accelerated inference.
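If you want to reproduce the conversion yourself, a minimal sketch could look like the following (assuming `optimum-intel` with OpenVINO support is installed, e.g. via `pip install optimum[openvino]`; the output directory name is just an example):

```python
from optimum.intel.openvino import OVModelForCausalLM
from transformers import AutoTokenizer

# Load the original PyTorch model and convert it to OpenVINO IR on the fly
model = OVModelForCausalLM.from_pretrained("databricks/dolly-v2-3b", export=True)
tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-3b")

# Save the converted model and the tokenizer to a local directory
model.save_pretrained("dolly-v2-3b-ov")
tokenizer.save_pretrained("dolly-v2-3b-ov")
```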
|
|
|
An example of how to run inference with this model:
|
```python
from optimum.intel.openvino import OVModelForCausalLM
from transformers import AutoTokenizer, pipeline

# model_id can be either a local directory or a model available on the Hugging Face Hub
model_id = "katuni4ka/dolly-v2-3b-ov"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
result = pipe("hello world")
print(result)
```
|
|
|
A more detailed example of how to use the model in an instruction-following scenario can be found in this [notebook](https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/240-dolly-2-instruction-following).
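As a rough sketch of that scenario, the example below wraps the user input in the instruction prompt template used by the original dolly-v2 models. The template, the example instruction, and the `max_new_tokens` value are assumptions for illustration; refer to the notebook above for the full pipeline, including response extraction.

```python
from optimum.intel.openvino import OVModelForCausalLM
from transformers import AutoTokenizer, pipeline

model_id = "katuni4ka/dolly-v2-3b-ov"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Instruction prompt template used by the original dolly-v2 models (assumption;
# see the databricks/dolly-v2-3b model card for the authoritative format)
prompt_template = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

prompt = prompt_template.format(instruction="Explain what OpenVINO is in one sentence.")
result = pipe(prompt, max_new_tokens=128)
print(result[0]["generated_text"])
```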