---
license: mit
datasets:
- timdettmers/openassistant-guanaco
language:
- en
pipeline_tag: text-generation
---



# Falcon-7b_guanaco

**lgaalves/falcon-7b_guanaco** is an instruction fine-tuned model based on the Falcon 7B transformer architecture.


### Benchmark Metrics

| Metric                | lgaalves/falcon-7b_guanaco | tiiuae/falcon-7b (base) |
|-----------------------|-------|-------|
| Avg.                  | **56.33** | 53.42 |
| ARC (25-shot)         | **50.0** | 47.87 |
| HellaSwag (10-shot)   | **78.54** | 78.13 |
| TruthfulQA (0-shot)   | **40.45** | 34.26 |


We used the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmarks above, pinned to the same version used by the Hugging Face Open LLM Leaderboard. See the harness repository for detailed reproduction instructions.
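As a rough sketch only (not the exact leaderboard configuration), recent releases of the harness expose a Python entry point. The per-task few-shot counts below follow the table above, but the task names and harness version are assumptions and may differ from the leaderboard's pinned setup:

```python
# Sketch: reproducing the benchmarks with lm-evaluation-harness (lm-eval >= 0.4).
# The leaderboard used an older pinned commit; task names here are assumptions.
import lm_eval

# Each task uses the few-shot setting from the table above.
for task, shots in [("arc_challenge", 25), ("hellaswag", 10), ("truthfulqa_mc2", 0)]:
    results = lm_eval.simple_evaluate(
        model="hf",
        model_args="pretrained=lgaalves/falcon-7b_guanaco",
        tasks=[task],
        num_fewshot=shots,
    )
    print(task, results["results"][task])
```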

### Model Details

* **Trained by**: Luiz G A Alves
* **Model type**: **falcon-7b_guanaco** is an auto-regressive language model based on the Falcon 7B transformer architecture.
* **Language(s)**: English

### How to use

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="lgaalves/falcon-7b_guanaco")
question = "What is a large language model?"
answer = pipe(question)
print(answer[0]["generated_text"])
```
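The pipeline forwards standard generation keyword arguments to the underlying `generate` call, so decoding behavior can be tuned per request; the values below are illustrative, not tuned settings:

```python
answer = pipe(
    question,
    max_new_tokens=128,  # cap the length of the completion
    do_sample=True,      # sample instead of greedy decoding
    top_p=0.9,           # nucleus sampling threshold
)
```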

Or you can load the model directly:

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("lgaalves/falcon-7b_guanaco")
model = AutoModelForCausalLM.from_pretrained("lgaalves/falcon-7b_guanaco")
```
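Once loaded, text is generated through `model.generate`. The prompt template below mirrors the `### Human:` / `### Assistant:` turn format of the openassistant-guanaco dataset; treating that as the model's expected prompt format is an assumption, so adjust it if your results look off:

```python
# Assumed prompt template based on the openassistant-guanaco dialogue format.
prompt = "### Human: What is a large language model?### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```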

### Training Dataset

`lgaalves/falcon-7b_guanaco` was trained using the following dataset: [timdettmers/openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco)
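For reference, the dataset can be inspected with the `datasets` library (a quick look, not part of the training code):

```python
from datasets import load_dataset

# openassistant-guanaco is a small (~10k example) subset of OpenAssistant
# conversations; each row has a single "text" field containing the dialogue.
dataset = load_dataset("timdettmers/openassistant-guanaco")
print(dataset["train"][0]["text"][:200])
```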

### Training Procedure

`lgaalves/falcon-7b_guanaco` was instruction fine-tuned with LoRA on a single Tesla V100-SXM2-16GB GPU; training took about 3.5 hours.
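The exact training script is not published here; the following is a minimal sketch of a LoRA setup with `peft` under assumed hyperparameters (rank, alpha, and dropout are illustrative, though `query_key_value` is Falcon's fused attention projection):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Assumed LoRA hyperparameters -- illustrative, not the published recipe.
lora_config = LoraConfig(
    r=16,                                # low-rank update dimension
    lora_alpha=32,                       # scaling factor for the update
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # Falcon's fused attention projection
    bias="none",
    task_type="CAUSAL_LM",
)

model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```

Freezing the base weights and training only the small adapter matrices is what makes fine-tuning a 7B model feasible on a single 16 GB GPU, typically combined with quantization of the frozen weights as in QLoRA.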


# Intended uses, limitations & biases

You can use the raw model for text generation or fine-tune it to a downstream task. The model was not extensively tested and may produce false information. It contains a lot of unfiltered content from the internet, which is far from neutral.