chenxingphh committed
Commit 3ca4aaf
1 Parent(s): e325e31

Update README.md

Files changed (1)
  1. README.md +12 -4
README.md CHANGED
README.md CHANGED
@@ -26,7 +26,7 @@ pipeline_tag: text-generation
 <p>
 <b>🌐English</b> |
 <a href="https://huggingface.co/OrionStarAI/Orion-14B-LongChat/blob/main/README_cn.md">🇨🇳中文</a><br><br>
-🤗 <a href="https://huggingface.co/OrionStarAI" target="_blank">HuggingFace Mainpage</a> | 🤖 <a href="https://modelscope.cn/organization/OrionStarAI" target="_blank">ModelScope Mainpage</a><br>🎬 <a href="https://huggingface.co/spaces/OrionStarAI/Orion-14B-App-Demo" target="_blank">HuggingFace Demo</a> | 🎫 <a href="https://modelscope.cn/studios/OrionStarAI/Orion-14B-App-Demo/summary" target="_blank">ModelScope Demo</a>
+🤗 <a href="https://huggingface.co/OrionStarAI" target="_blank">HuggingFace Mainpage</a> | 🤖 <a href="https://modelscope.cn/organization/OrionStarAI" target="_blank">ModelScope Mainpage</a><br>🎬 <a href="https://huggingface.co/spaces/OrionStarAI/Orion-14B-App-Demo" target="_blank">HuggingFace Demo</a> | 🎫 <a href="https://modelscope.cn/studios/OrionStarAI/Orion-14B-App-Demo/summary" target="_blank">ModelScope Demo</a><br>📖 <a href="https://github.com/OrionStarAI/Orion/blob/master/doc/Orion14B_v3.pdf" target="_blank">Tech Report</a>
 <p>
 </h4>
 
@@ -53,9 +53,17 @@ pipeline_tag: text-generation
 - The fine-tuned models demonstrate strong adaptability, excelling in human-annotated blind tests.
 - The long-chat version supports extremely long texts, extending up to 200K tokens.
 - The quantized versions reduce model size by 70%, improve inference speed by 30%, with performance loss less than 1%.
-<div align="center">
-<img src="./assets/imgs/assets_imgs_model_cap_en.png" alt="model_cap_en" width="50%" />
-</div>
+
+<table style="border-collapse: collapse; width: 100%;">
+  <tr>
+    <td style="border: none; padding: 10px; box-sizing: border-box;">
+      <img src="./assets/imgs/opencompass_en.png" alt="opencompass" style="width: 100%; height: auto;">
+    </td>
+    <td style="border: none; padding: 10px; box-sizing: border-box;">
+      <img src="./assets/imgs/model_cap_en.png" alt="modelcap" style="width: 100%; height: auto;">
+    </td>
+  </tr>
+</table>
 
 - Orion-14B series models including:
 - **Orion-14B-Base:** A multilingual large language foundational model with 14 billion parameters, pretrained on a diverse dataset of 2.5 trillion tokens.