[internal use] Example: loading this model with the `optimum-intel` OpenVINO backend and running it through a `transformers` text-generation pipeline.
```python
import transformers
from optimum.intel.openvino import OVModelForCausalLM

model_id = '<local folder or model_id on HF>'

# Load the OpenVINO-exported model and its matching tokenizer
ov_model = OVModelForCausalLM.from_pretrained(model_id)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)

# Run generation through a standard transformers text-generation pipeline
pipe = transformers.pipelines.TextGenerationPipeline(model=ov_model, tokenizer=tokenizer)
output = pipe('Hello, I am a ', max_new_tokens=16)
```
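To inspect the result: the pipeline returns a list of dicts, one per generated sequence, where `generated_text` contains the prompt followed by the completion. A minimal sketch continuing the snippet above:

```python
# 'output' is a list like [{'generated_text': 'Hello, I am a ...'}]
print(output[0]['generated_text'])
```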