---
language:
- en
- fr
- es
- pt
tags:
- falcon3
license: other
license_name: falcon-llm-license
license_link: https://falconllm.tii.ae/falcon-terms-and-conditions.html
---
# Falcon3-10B-Base
The **Falcon3** family of Open Foundation Models is a set of pretrained and instruct LLMs ranging from 1B to 10B parameters.
This repository contains **Falcon3-10B-Base**, which achieves state-of-the-art results (at the time of release) on reasoning, language understanding, instruction following, code, and mathematics tasks.
Falcon3-10B-Base supports four languages (English, French, Spanish, Portuguese) and a context length of up to 32K tokens.
⚠️ **This is a raw, pretrained model, which should be further finetuned using SFT, RLHF, continued pretraining, etc. for most use cases.**
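As a rough illustration of a downstream SFT pass, the sketch below uses TRL's `SFTTrainer`; the dataset (`trl-lib/Capybara`) and all settings are placeholders, not the authors' recipe. At 10B parameters, full fine-tuning typically requires multiple GPUs or a parameter-efficient method such as LoRA.
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset: any instruction-style dataset in a format TRL supports.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="tiiuae/Falcon3-10B-Base",
    train_dataset=dataset,
    args=SFTConfig(output_dir="Falcon3-10B-Base-SFT"),  # illustrative settings only
)
trainer.train()
```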
## Model Details
- Architecture (key values can be checked against the published config; see the sketch after this list)
  - Transformer-based causal decoder-only architecture
  - 40 decoder blocks
  - Grouped-Query Attention (GQA) for faster inference: 12 query heads and 4 key-value heads
  - Wider head dimension: 256
  - High RoPE base value to support long-context understanding: 1000042
  - SwiGLU activation and RMSNorm
  - 32K context length
  - 131K vocab size
- Depth up-scaled from **Falcon3-7B-Base** and trained on 2 teratokens of data spanning web, code, STEM, high-quality, and multilingual sources, using 2,048 H100 GPUs
- Supports EN, FR, ES, PT
- Developed by [Technology Innovation Institute](https://www.tii.ae)
- License: TII Falcon-LLM License 2.0
- Model Release Date: December 2024
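The listed architecture values can be checked without downloading weights, assuming the checkpoint ships a standard Hugging Face Llama-style configuration (the field names below are that assumption, not confirmed by this card):
```python
from transformers import AutoConfig

# Loads only config.json; no model weights are fetched.
config = AutoConfig.from_pretrained("tiiuae/Falcon3-10B-Base")

print(config.num_hidden_layers)        # expected: 40 decoder blocks
print(config.num_attention_heads)      # expected: 12 query heads
print(config.num_key_value_heads)      # expected: 4 key-value heads (GQA)
print(config.rope_theta)               # expected: 1000042
print(config.max_position_embeddings)  # expected: 32768 (32K context)
print(config.vocab_size)               # expected: ~131K

# Head dimension = hidden size / number of query heads (expected: 256).
print(config.hidden_size // config.num_attention_heads)
```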
## Getting started
<details>
<summary> Click to expand </summary>
```python
import torch
from transformers import pipeline

# Load the model in bfloat16 and shard it across available devices.
pipe = pipeline(
    "text-generation",
    model="tiiuae/Falcon3-10B-Base",
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

# The pipeline returns the prompt plus the generated continuation.
response = pipe("Question: How many hours in one day? Answer: ")
print(response[0]["generated_text"])
```
</details>
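For finer control over decoding (e.g. `max_new_tokens`, greedy vs. sampling), the lower-level API can be used instead; the parameter values below are illustrative, not recommended settings:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/Falcon3-10B-Base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Tokenize the prompt and place it on the same device as the model.
inputs = tokenizer("Question: How many hours in one day? Answer: ", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```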
<br>
## Benchmarks
The following table reports results from our internal evaluation pipeline (a rough external reproduction sketch follows the table):
<table border="1" style="width: 100%; text-align: center; border-collapse: collapse;">
<colgroup>
<col style="width: 10%;">
<col style="width: 10%;">
<col style="width: 7%;">
<col style="width: 7%;">
<col style="width: 7%;">
<col style="background-color: rgba(80, 15, 213, 0.5); width: 7%;">
</colgroup>
<thead>
<tr>
<th>Category</th>
<th>Benchmark</th>
<th>Gemma2-9B</th>
<th>Yi1.5-9B</th>
<th>Mistral-NeMo-12B</th>
<th>Falcon3-10B-Base</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="3">General</td>
<td>MMLU (5-shot)</td>
<td>70.8</td>
<td>69.6</td>
<td>68.8</td>
<td><b>73.1</b></td>
</tr>
<tr>
<td>MMLU-PRO (5-shot)</td>
<td>41.4</td>
<td>39.3</td>
<td>34.7</td>
<td><b>42.5</b></td>
</tr>
<tr>
<td>IFEval</td>
<td>21.2</td>
<td>29.1</td>
<td>16.1</td>
<td><b>36.4</b></td>
</tr>
<tr>
<td rowspan="2">Math</td>
<td>GSM8K (5-shot)</td>
<td>69.1</td>
<td>63.8</td>
<td>55.3</td>
<td><b>81.4</b></td>
</tr>
<tr>
<td>MATH (4-shot)</td>
<td>10.5</td>
<td>9.2</td>
<td>4.9</td>
<td><b>22.9</b></td>
</tr>
<tr>
<td rowspan="4">Reasoning</td>
<td>ARC Challenge (25-shot)</td>
<td><b>67.5</b></td>
<td>61.7</td>
<td>64.4</td>
<td>66.8</td>
</tr>
<tr>
<td>GPQA (0-shot)</td>
<td>33.4</td>
<td><b>36.6</b></td>
<td>28.8</td>
<td>34.1</td>
</tr>
<tr>
<td>MUSR (0-shot)</td>
<td><b>45.2</b></td>
<td>43.3</td>
<td>39.2</td>
<td>44.2</td>
</tr>
<tr>
<td>BBH (3-shot)</td>
<td>54.3</td>
<td>51.3</td>
<td>50.2</td>
<td><b>59.7</b></td>
</tr>
<tr>
<td rowspan="4">CommonSense Understanding</td>
<td>PIQA (0-shot)</td>
<td><b>83.0</b></td>
<td>80.5</td>
<td>82.1</td>
<td>79.5</td>
</tr>
<tr>
<td>SciQ (0-shot)</td>
<td><b>97.1</b></td>
<td>95.2</td>
<td>95.2</td>
<td>93.5</td>
</tr>
<tr>
<td>Winogrande (0-shot)</td>
<td><b>74.2</b></td>
<td>72.7</td>
<td>73.2</td>
<td>73.6</td>
</tr>
<tr>
<td>OpenbookQA (0-shot)</td>
<td><b>47.2</b></td>
<td>45.2</td>
<td><b>47.2</b></td>
<td>45.0</td>
</tr>
</tbody>
</table>
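For a rough external cross-check of a subset of these numbers, EleutherAI's lm-evaluation-harness could be used as sketched below; this is not the authors' internal pipeline, the task name is an assumption, and absolute scores will typically differ:
```python
import lm_eval

# Evaluate 5-shot MMLU with lm-evaluation-harness (an external tool, not the
# internal pipeline behind the table above; scores will likely differ).
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=tiiuae/Falcon3-10B-Base,dtype=bfloat16",
    tasks=["mmlu"],   # assumed harness task name for MMLU
    num_fewshot=5,    # matches the 5-shot setting in the table
)
print(results["results"])
```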
## Technical Report
Coming soon.
## Citation
If the Falcon3 family of models was helpful in your work, feel free to cite it:
```bibtex
@misc{Falcon3,
    title = {The Falcon 3 family of Open Models},
    author = {TII Team},
    month = {December},
    year = {2024}
}
```