Gkunsch committed on
Commit c4c491d · verified · 1 Parent(s): 39f86c7

Update README.md

Files changed (1)
  1. README.md +5 -5
README.md CHANGED
@@ -20,16 +20,16 @@ pinned: false
 * 💥 **TII has open-sourced Falcon-180B for research and commercial utilization!** Access the [180B](https://huggingface.co/tiiuae/falcon-180b), as well as [7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b) models, and explore our high-quality web dataset, [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb).
 * ✨ **Falcon-[40B](https://huggingface.co/tiiuae/falcon-40b)/[7B](https://huggingface.co/tiiuae/falcon-7b) are now available under the Apache 2.0 license**, TII has [waived all royalties and commercial usage restrictions](https://www.tii.ae/news/uaes-falcon-40b-worlds-top-ranked-ai-model-technology-innovation-institute-now-royalty-free).
 
-# Falcon Mamba
+# FalconMamba LLM
 
 We are excited to announce the release of our groundbreaking LLM model with a pure SSM architecture, setting a new benchmark by outperforming all previous SSM models and achieving performance on par with leading transformer-based models.
 
 | **Artefact** | **Link** | **Type** | **Details** |
 |---------------------|------------------------------------------------------------------|-------------------------|-------------------------------------------------------------------|
-| 🐍 **Falcon-Mamba-7B** | [Here](https://huggingface.co/tiiuae/falcon-mamba-7b) | *pretrained model* | 7B parameters pure SSM trained on ~6,000 billion tokens. |
-| Falcon-Mamba-7B-Instruct | [Here](https://huggingface.co/tiiuae/falcon-mamba-7b-instruct) | *instruction/chat model* | Falcon-Mamba-7B finetuned using only SFT.|
-| Falcon-Mamba-7B-4bit | [Here](https://huggingface.co/tiiuae/falcon-mamba-7b-4bit) | *pretrained model* | 4bit quantized version using GGUF|
-| Falcon-Mamba-7B-Instruct-4bit | [Here](https://huggingface.co/tiiuae/falcon-mamba-7b-instruct-4bit) | *instruction/chat model* | 4bit quantized version using GGUF.|
+| 🐍 **FalconMamba-7B** | [Here](https://huggingface.co/tiiuae/falcon-mamba-7b) | *pretrained model* | 7B parameters pure SSM trained on ~6,000 billion tokens. |
+| FalconMamba-7B-Instruct | [Here](https://huggingface.co/tiiuae/falcon-mamba-7b-instruct) | *instruction/chat model* | Falcon-Mamba-7B finetuned using only SFT.|
+| FalconMamba-7B-4bit | [Here](https://huggingface.co/tiiuae/falcon-mamba-7b-4bit) | *pretrained model* | 4bit quantized version using GGUF|
+| FalconMamba-7B-Instruct-4bit | [Here](https://huggingface.co/tiiuae/falcon-mamba-7b-instruct-4bit) | *instruction/chat model* | 4bit quantized version using GGUF.|
 
 
 # Falcon2 LLM
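For readers landing on this commit, here is a minimal usage sketch (not part of the README change above) for the pretrained FalconMamba-7B checkpoint linked in the table. It assumes a recent `transformers` release that includes FalconMamba support, plus `accelerate` for `device_map="auto"`; the prompt and generation settings are illustrative only.

```python
# Minimal sketch: load the pretrained FalconMamba-7B checkpoint and generate text.
# Assumes `transformers` with FalconMamba support and `accelerate` are installed,
# and that enough GPU/CPU memory is available for the 7B weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-mamba-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half-precision to reduce memory use
    device_map="auto",           # place weights automatically via accelerate
)

# Illustrative prompt; any text works for the base (non-instruct) model.
inputs = tokenizer("The FalconMamba model is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The 4-bit GGUF artifacts listed in the table target GGUF-compatible runtimes rather than this `transformers` loading path.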