|
--- |
|
language: |
|
- en |
|
datasets: |
|
- kyujinpy/Open-platypus-Commercial |
|
library_name: transformers |
|
pipeline_tag: text-generation |
|
license: mit |
|
--- |
|
|
|
# **phi-2-test** |
|
|
|
## Model Details |
|
**Model Developers** |
|
- field2437 |
|
|
|
**Base Model** |
|
- [microsoft/phi-2](https://huggingface.co/microsoft/phi-2)
|
|
|
**Training Dataset** |
|
- [kyujinpy/Open-platypus-Commercial](https://huggingface.co/datasets/kyujinpy/Open-platypus-Commercial)
|
|
|
--- |
|
|
# Model comparisons
|
> Evaluated with AI-Harness; see [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness).
|
|
|
| Model | COPA (0-shot) | HellaSwag (0-shot) | BoolQ (0-shot) | MMLU (0-shot) |
| --- | --- | --- | --- | --- |
| **field2437/phi-2-test** | 0.8900 | 0.5573 | 0.8260 | 0.5513 |
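
The 0-shot scores above should be reproducible with the harness's Python API. A minimal sketch, assuming lm-evaluation-harness >= 0.4; task names and default metrics can differ between harness versions:

```python
# Minimal sketch: re-running the 0-shot evaluation with EleutherAI's
# lm-evaluation-harness (assumes version >= 0.4; task names may vary).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=field2437/phi-2-test,trust_remote_code=True",
    tasks=["copa", "hellaswag", "boolq", "mmlu"],
    num_fewshot=0,
)
print(results["results"])  # per-task accuracy scores
```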
|
|
|
--- |
|
# Sample Code |
|
```python |
|
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Run all tensor operations on the GPU by default
torch.set_default_device("cuda")

# Load the fine-tuned model and its tokenizer
model = AutoModelForCausalLM.from_pretrained("field2437/phi-2-test", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("field2437/phi-2-test", trust_remote_code=True)

# Prompt the model with the start of a function and let it complete the body
inputs = tokenizer('''def print_prime(n):
   """
   Print all primes between 1 and n
   """''', return_tensors="pt", return_attention_mask=False)

outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
|
``` |
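
Because the model was fine-tuned on Open-Platypus-Commercial, an Alpaca-style instruction prompt may also work well. The template below is an assumption based on the dataset's instruction/response format, not a confirmed training template; it reuses `model` and `tokenizer` from the sample above.

```python
# Hypothetical Alpaca-style prompt; the exact template used during fine-tuning
# is an assumption, not confirmed by this card. Reuses model/tokenizer from above.
prompt = """### Instruction:
Explain the difference between a list and a tuple in Python.

### Response:
"""
inputs = tokenizer(prompt, return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_length=256)
print(tokenizer.batch_decode(outputs)[0])
```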
|
|
|
--- |
|
|