AALF committed
Commit d0f751c · verified · 1 Parent(s): 8439f35

Update README.md

Files changed (1): README.md (+5 -5)
README.md CHANGED
@@ -16,7 +16,7 @@ pinned: false
 
 
 <h4> |<a href="https://arxiv.org/abs/2401.10491"> 📑 FuseLLM Paper @ICLR2024 </a> |
-<!-- <a href="https://arxiv.org/abs/2402.16107"> 📑 FuseChat Tech Report </a> | -->
+<a href="https://arxiv.org/abs/2402.16107"> 📑 FuseChat Tech Report </a> |
 <a href="https://huggingface.co/FuseAI"> 🤗 HuggingFace Repo </a> |
 <a href="https://github.com/fanqiwan/FuseLLM"> 🐱 GitHub Repo </a> |
 </h4>
@@ -37,7 +37,7 @@ Welcome to join us!
 
 ## News
 
-<!-- ### FuseChat [SOTA 7B LLM on MT-Bench]
+### FuseChat [SOTA 7B LLM on MT-Bench]
 
 - **Mar 13, 2024:** 🔥🔥🔥 We release a HuggingFace Space for [FuseChat-7B](https://huggingface.co/spaces/FuseAI/FuseChat-7B), try it now!
 
@@ -62,7 +62,7 @@ Welcome to join us!
 | Claude-2.0 | - | 8.06 | WizardLM-70B-v1.0 | 70B | 7.71 |
 | GPT-3.5-Turbo-0314 | - | 7.94 | Yi-34B-Chat | 34B | 7.67 |
 | Claude-1 | - | 7.90 | Nous-Hermes-2-SOLAR-10.7B | 10.7B | 7.66 |
--->
+
 ### FuseLLM [Surpassing Llama-2-7B]
 
 - **Jan 22, 2024:** 🔥 We release [FuseLLM-7B](https://huggingface.co/Wanfq/FuseLLM-7B), which is the fusion of three open-source foundation LLMs with distinct architectures, including [Llama-2-7B](https://huggingface.co/meta-llama/Llama-2-7b-hf), [OpenLLaMA-7B](https://huggingface.co/openlm-research/open_llama_7b_v2), and [MPT-7B](https://huggingface.co/mosaicml/mpt-7b).
@@ -97,7 +97,7 @@ Please cite the following paper if you reference our model, code, data, or paper
 url={https://openreview.net/pdf?id=jiDsk12qcz}
 }
 ```
-<!--
+
 Please cite the following paper if you reference our model, code, data, or paper related to FuseChat.
 ```
 @article{wan2024fusechat,
@@ -106,4 +106,4 @@ Please cite the following paper if you reference our model, code, data, or paper
 journal={arXiv preprint arXiv:2402.16107},
 year={2024}
 }
-``` -->
+```
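
This commit un-comments the FuseChat material in the org README: the tech report link, the MT-Bench news section, and the FuseChat citation are now live. For readers who want to try the models the README links to, below is a minimal sketch of loading FuseLLM-7B from the Hub. Only the model id `Wanfq/FuseLLM-7B` comes from the README; the `transformers` calls are the standard causal-LM API, and the dtype, prompt, and generation settings are illustrative assumptions, not part of this commit.

```python
# Minimal sketch: load FuseLLM-7B with the standard transformers causal-LM API.
# Only the model id comes from the README; everything else is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Wanfq/FuseLLM-7B"  # linked in the Jan 22, 2024 news item

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumption: fp16 so the 7B model fits on one GPU
    device_map="auto",          # requires `accelerate`; places weights automatically
)

prompt = "Knowledge fusion of large language models means"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The same pattern should apply to FuseChat-7B, whose hosted demo is the HuggingFace Space linked in the Mar 13, 2024 news item.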