---
base_model: Geraldine/FineLlama-3.2-3B-Instruct-ead
library_name: transformers
pipeline_tag: text-generation
tags:
- openvino
- openvino-export
license: llama3.2
---

# FineLlama-3.2-3B-Instruct-ead-openvino

This model was converted to OpenVINO from [`Geraldine/FineLlama-3.2-3B-Instruct-ead`](https://huggingface.co/Geraldine/FineLlama-3.2-3B-Instruct-ead) using [optimum-intel](https://github.com/huggingface/optimum-intel) via the [export](https://huggingface.co/spaces/echarlaix/openvino-export) space.

## Model Description

- **Original Model**: Geraldine/FineLlama-3.2-3B-Instruct-ead
- **Framework**: OpenVINO
- **Task**: Text Generation, EAD tag generation
- **Language**: English
- **License**: llama3.2

## Features

- Optimized for Intel hardware using OpenVINO
- Supports text generation inference
- Maintains the original model's capabilities for EAD tag generation
- Integrates with PyTorch through the Transformers API

## Installation

First, make sure you have optimum-intel installed:

```bash
pip install optimum[openvino]
```

To load the model, run:

```python
from optimum.intel import OVModelForCausalLM

model_id = "Geraldine/FineLlama-3.2-3B-Instruct-ead-openvino"
model = OVModelForCausalLM.from_pretrained(model_id)
```
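
If you prefer to convert the original PyTorch checkpoint yourself rather than downloading this pre-converted repository, optimum-intel can export it at load time. A minimal sketch (the local output directory name is just an example):

```python
from optimum.intel import OVModelForCausalLM

# export=True converts the PyTorch checkpoint to OpenVINO IR on the fly
model = OVModelForCausalLM.from_pretrained(
    "Geraldine/FineLlama-3.2-3B-Instruct-ead", export=True
)

# Save the converted model locally (example path) to skip re-exporting later
model.save_pretrained("./FineLlama-3.2-3B-Instruct-ead-openvino")
```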

## Technical Specifications

### Supported Features

- Text Generation
- Transformers integration
- PyTorch compatibility
- OpenVINO export
- Inference Endpoints
- Conversational capabilities (see the chat sketch below)
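
As a sketch of conversational use, the snippet below formats a single-turn conversation with the tokenizer's chat template. The prompt text is illustrative only, and `max_new_tokens=256` is an arbitrary example value.

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "Geraldine/FineLlama-3.2-3B-Instruct-ead-openvino"
model = OVModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Format a single-turn conversation with the model's chat template
messages = [
    {"role": "user", "content": "Generate an EAD <did> element for a small archival collection."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

# Generate, then decode only the newly produced tokens
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```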

### Model Architecture

- Base: meta-llama/Llama-3.2-3B-Instruct
- Fine-tuned: Geraldine/FineLlama-3.2-3B-Instruct-ead
- Final conversion: OpenVINO optimization

## Usage Examples

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

# Load the OpenVINO model and its tokenizer
model_id = "Geraldine/FineLlama-3.2-3B-Instruct-ead-openvino"
model = OVModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Generate EAD text from a prompt
def generate_ead(prompt):
    inputs = tokenizer(prompt, return_tensors="pt")
    # max_new_tokens is an example value; tune it for your documents
    outputs = model.generate(**inputs, max_new_tokens=512)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```
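
A hypothetical call (the prompt wording is illustrative, not a documented prompt format):

```python
# Example invocation; adjust the prompt to your finding-aid content
print(generate_ead("Generate an EAD <unittitle> element for a collection of 19th-century letters."))
```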