nilabhra committed on
Commit
f31bc28
1 Parent(s): 127bec8

Update README.md

Files changed (1)
  1. README.md +14 -13
README.md CHANGED
@@ -7,7 +7,7 @@
 
 🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blogpost fron HF](https://huggingface.co/blog/falcon)!
 
-⚠️ **This is a raw, pretrained model, which should be further finetuned for most usecases.** If you are looking for a version better suited to taking generic instructions in a chat format, we recommend taking a look at [Falcon2-11B-Chat](https://huggingface.co/tiiuae/Falcon2-11B-chat).
+⚠️ **This is a raw, pretrained model, which should be further finetuned for most usecases.**
 
 ```python
 from transformers import AutoTokenizer, AutoModelForCausalLM
@@ -25,7 +25,7 @@ pipeline = transformers.pipeline(
     trust_remote_code=True,
 )
 sequences = pipeline(
-    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
+    "Can you explain the concepts of Quantum Computing?",
     max_length=200,
     do_sample=True,
     top_k=10,
@@ -93,7 +93,7 @@ pipeline = transformers.pipeline(
     device_map="auto",
 )
 sequences = pipeline(
-    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
+    "Can you explain the concepts of Quantum Computing?",
     max_length=200,
     do_sample=True,
     top_k=10,
@@ -149,16 +149,17 @@ The model training took roughly two months.
 
 ## Evaluation
 
-|English Benchmark | **Value** | **Comment** |
-|--------------------|------------|-------------------------------------------|
-| HellaSwag-10shots | 82.91 | |
-| Winogrande-5shots | 78.30 | |
-| ARC-Challenge-25shots | 59.73 | |
-| TruthfulQA-0shot | 52.56 | |
-| MMLU-5shots | 58.37 | |
-| GSM8k-5shots | 53.83 | |
-| ARC-Challenge-0shot | 50.17 | |
-| ARC-Easy-0shot | 77.78 | |
+|English Benchmark | **Value** |
+|--------------------|------------|
+| ARC-Challenge-25shots | 59.73 |
+| HellaSwag-10shots | 82.91 |
+| MMLU-5shots | 58.37 |
+| Winogrande-5shots | 78.30 |
+| TruthfulQA-0shot | 52.56 |
+| GSM8k-5shots | 53.83 |
+| ARC-Challenge-0shot | 50.17 |
+| ARC-Easy-0shot | 77.78 |
+| Hellaswag-0shot | 82.07 |
 
 We thank the leaderboard team from HuggingFace for providing an official evaluation of our model on the leaderboard tasks.
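
For context, a complete, runnable version of the generation snippet these hunks modify might look like the sketch below. Only the pieces visible in the diff (`trust_remote_code=True`, `device_map="auto"`, the new prompt, `max_length`, `do_sample`, `top_k`, and the `AutoTokenizer` import) come from the commit; the checkpoint name `tiiuae/falcon-11B`, the dtype, and the remaining generation settings are assumptions for illustration, not part of this change.

```python
# Minimal sketch of the full pipeline the diff fragments belong to.
# Assumption: the base checkpoint is "tiiuae/falcon-11B"; only the
# arguments visible in the diff hunks are taken from the commit.
import torch
import transformers
from transformers import AutoTokenizer

model = "tiiuae/falcon-11B"  # assumed model ID (the diff links the chat variant)

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,  # assumed dtype
    trust_remote_code=True,      # as in the first pipeline hunk
    device_map="auto",           # as in the second pipeline hunk
)
sequences = pipeline(
    "Can you explain the concepts of Quantum Computing?",  # new prompt from this commit
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,               # assumed
    eos_token_id=tokenizer.eos_token_id,  # assumed
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```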