Update README.md
README.md CHANGED
@@ -7,6 +7,7 @@ tags:
 - oceangpt
 language:
 - en
+- zh
 datasets:
 - zjunlp/OceanInstruct
 ---
@@ -21,7 +22,7 @@ datasets:
 <a href="https://github.com/zjunlp/OceanGPT">Project</a> •
 <a href="https://arxiv.org/abs/2310.02031">Paper</a> •
 <a href="https://huggingface.co/collections/zjunlp/oceangpt-664cc106358fdd9f09aa5157">Models</a> •
-<a href="http://oceangpt.zjukg.cn
+<a href="http://oceangpt.zjukg.cn/">Web</a> •
 <a href="#quickstart">Quickstart</a> •
 <a href="#citation">Citation</a>
 </p>
@@ -89,13 +90,20 @@ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
 | OceanGPT-14B-v0.1 (based on Qwen) | <a href="https://huggingface.co/zjunlp/OceanGPT-14B-v0.1" target="_blank">14B</a> | <a href="https://wisemodel.cn/models/zjunlp/OceanGPT-14B-v0.1" target="_blank">14B</a> | <a href="https://modelscope.cn/models/ZJUNLP/OceanGPT-14B-v0.1" target="_blank">14B</a> |
 | OceanGPT-7B-v0.2 (based on Qwen) | <a href="https://huggingface.co/zjunlp/OceanGPT-7b-v0.2" target="_blank">7B</a> | <a href="https://wisemodel.cn/models/zjunlp/OceanGPT-7b-v0.2" target="_blank">7B</a> | <a href="https://modelscope.cn/models/ZJUNLP/OceanGPT-7b-v0.2" target="_blank">7B</a> |
 | OceanGPT-2B-v0.1 (based on MiniCPM) | <a href="https://huggingface.co/zjunlp/OceanGPT-2B-v0.1" target="_blank">2B</a> | <a href="https://wisemodel.cn/models/zjunlp/OceanGPT-2b-v0.1" target="_blank">2B</a> | <a href="https://modelscope.cn/models/ZJUNLP/OceanGPT-2B-v0.1" target="_blank">2B</a> |
-
----
+
 
 ## 🌻Acknowledgement
 
 OceanGPT is trained based on the open-sourced large language models including [Qwen](https://huggingface.co/Qwen), [MiniCPM](https://huggingface.co/collections/openbmb/minicpm-2b-65d48bf958302b9fd25b698f), [LLaMA](https://huggingface.co/meta-llama). Thanks for their great contributions!
 
+## Limitations
+
+- The model may have hallucination issues.
+
+- We did not optimize the identity and the model may generate identity information similar to that of Qwen/MiniCPM/LLaMA/GPT series models.
+
+- The model's output is influenced by prompt tokens, which may result in inconsistent results across multiple attempts.
+
 
 ### 🚩Citation
 
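The context line quoted in the last hunk header, `response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]`, comes from the README's quickstart. For reference, here is a minimal sketch of the generation flow that line belongs to, assuming the standard `transformers` chat-template workflow; the model id is taken from the table above, and the prompt text and generation length are illustrative rather than copied from the README.

```python
# Minimal sketch, not the README's verbatim quickstart: load an OceanGPT chat model
# with Hugging Face transformers and generate a reply.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zjunlp/OceanGPT-7b-v0.2"  # any checkpoint from the table above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Illustrative prompt; OceanGPT is tuned for ocean-domain questions.
messages = [{"role": "user", "content": "Which gas dominates the dissolved gases in seawater?"}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(**model_inputs, max_new_tokens=512)
# Drop the prompt tokens so only the newly generated continuation is decoded.
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
# This is the context line quoted in the hunk header above.
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```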
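The new Limitations section notes that output is prompt-sensitive and can differ across attempts. Continuing the sketch above, one generic way a caller can make repeated runs more comparable is to fix the random seed or disable sampling; these are standard `transformers` options, not guidance from the README itself.

```python
# Continues the sketch above (reuses `model` and `model_inputs`).
from transformers import set_seed

set_seed(42)  # fix RNG state so sampling-based generation is repeatable

# Alternatively, greedy decoding removes sampling randomness entirely,
# at the cost of less diverse outputs.
generated_ids = model.generate(**model_inputs, max_new_tokens=512, do_sample=False)
```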