JingweiZuo committed ef7ba29 (1 parent: c2a45ff): Update README.md

README.md CHANGED
@@ -250,7 +250,7 @@ print(tokenizer.decode(outputs[0]))
 
 ## Training Data
 
-Falcon-Mamba has been trained with ~
+Falcon-Mamba has been trained with ~5,500 GT (gigatokens), mainly coming from [Refined-Web](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a large-volume web-only dataset that has been filtered and deduplicated.
 Similar to the other [Falcon](https://huggingface.co/tiiuae/falcon-11B) suite models, Falcon-Mamba has been trained leveraging a multi-stage training strategy to increase the context length from 2,048 to 8,192.
 Moreover, inspired by the concept of Curriculum Learning, we carefully selected data mixtures throughout the training stages, considering both data diversity and complexity.
 Note that at inference the context length is not relevant, as the Mamba architecture has no limit on long-range dependency.
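As a minimal sketch (not part of the commit) of what the last note means in practice: because the Mamba architecture carries a fixed-size recurrent state, inference is not capped at the 8,192-token training context. The checkpoint name `tiiuae/falcon-mamba-7b` and the surrounding `transformers` calls mirror the usage example earlier in the README and are assumptions here, not something introduced by this change.

```python
# Minimal sketch, assuming the checkpoint is published as "tiiuae/falcon-mamba-7b"
# and that the installed transformers version includes Falcon-Mamba support.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-mamba-7b"  # assumed model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" needs the accelerate package; drop it to run on CPU only.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# A prompt longer than the 8,192-token training context can still be processed:
# the state-space recurrence consumes tokens sequentially with a fixed-size state.
long_prompt = "Summarize the following document:\n" + "..."  # placeholder for a very long input
inputs = tokenizer(long_prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0]))
```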