---
license: mit
datasets:
- garage-bAInd/Open-Platypus
- lgaalves/camel-ai-physics
language:
- en
pipeline_tag: text-generation
---



# lgaalves/gpt2_camel_physics-platypus

**lgaalves/gpt2_camel_physics-platypus** is an instruction fine-tuned model based on the GPT-2 transformer architecture.



We use the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests reported below, using the same version as the Hugging Face Open LLM Leaderboard. A sketch for reproducing a benchmark run is shown below.
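
The card does not include the exact harness invocation. As a rough sketch, a run with a recent `lm-eval` release (0.4+) might look like the following; note that the leaderboard pins a specific older version of the harness, so scores may not reproduce exactly.

```python
# Sketch of reproducing one benchmark with lm-eval 0.4+ (assumption: the
# leaderboard used an older pinned harness version, so scores may differ).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=lgaalves/gpt2_camel_physics-platypus",
    tasks=["arc_challenge"],
    num_fewshot=25,  # matches the leaderboard's 25-shot ARC setting
)
print(results["results"]["arc_challenge"])
```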

### Model Details

* **Trained by**: Luiz G A Alves
* **Model type:** **gpt2_camel_physics-platypus** is an auto-regressive language model based on the GPT-2 transformer architecture.
* **Language(s)**: English

### How to use:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="lgaalves/gpt2_camel_physics-platypus")
question = "What is a large language model?"
answer = pipe(question)
print(answer[0]["generated_text"])
```

Alternatively, you can load the model directly:

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("lgaalves/gpt2_camel_physics-platypus")
model = AutoModelForCausalLM.from_pretrained("lgaalves/gpt2_camel_physics-platypus")
```
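
With the model and tokenizer loaded as above, a minimal generation call could look like the sketch below; the prompt and decoding settings are illustrative, not taken from the original card.

```python
# Minimal generation sketch (decoding settings are illustrative assumptions)
inputs = tokenizer("What is a large language model?", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=50,                    # cap the length of the reply
    do_sample=True,                       # sample rather than greedy-decode
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no dedicated pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```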

### Training Dataset

`lgaalves/gpt2_camel_physics-platypus` was trained using the STEM- and logic-based dataset [garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus) and the GPT-4-generated dataset [lgaalves/camel-physics](https://huggingface.co/datasets/lgaalves/camel-physics).


### Training Procedure

`lgaalves/gpt2_camel_physics-platypus` was instruction fine-tuned using LoRA on a single V100 GPU on Google Colab. Training took about 17 minutes.
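
The card does not list the LoRA hyperparameters. As a purely illustrative sketch, a comparable setup with the `peft` library might look like this; the rank, alpha, and target modules below are assumptions, not the author's settings.

```python
# Hypothetical LoRA setup with peft (all hyperparameters are assumptions)
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")
lora_config = LoraConfig(
    r=16,                       # assumed adapter rank
    lora_alpha=32,              # assumed scaling factor
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```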


# Intended uses, limitations & biases

You can use the raw model for text generation or fine-tune it for a downstream task. The model was not extensively tested and may produce false information. Because the base GPT-2 model was trained on a large amount of unfiltered content from the internet, its outputs are far from neutral.


# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lgaalves__gpt2_camel_physics-platypus).

| Metric                | Value                     |
|-----------------------|---------------------------|
| Avg.                  | 25.04   |
| ARC (25-shot)         | 23.04          |
| HellaSwag (10-shot)   | 31.32    |
| MMLU (5-shot)         | 26.91         |
| TruthfulQA (0-shot)   | 39.56   |
| Winogrande (5-shot)   | 49.64   |
| GSM8K (5-shot)        | 0.0        |
| DROP (3-shot)         | 4.79         |