# Notes
This model is derived directly from the `gpt2` checkpoint on the Hugging Face model hub; the GPT-2 OpenVINO IR from the Open Model Zoo (OMZ) was then copied here. The model is intended for use with optimum-intel.

```bash
# Install Optimum-Intel with OpenVINO support
pip install optimum[openvino]
```

```python
from transformers import AutoTokenizer, pipeline
from optimum.intel.openvino import OVModelForCausalLM

model_id = "vuiseng9/ov-gpt2-fp32-no-cache"

# Load the OpenVINO IR through optimum-intel; use_cache=False matches
# the no-KV-cache export of this model
model = OVModelForCausalLM.from_pretrained(model_id, use_cache=False)
tokenizer = AutoTokenizer.from_pretrained(model_id)

generator_pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
output = generator_pipe("It's a beautiful day ...", max_length=30, num_return_sequences=1)
print(output[0]["generated_text"])
```