## Related models
- Full model weights: https://huggingface.co/shibing624/llama-3-8b-instruct-262k-chinese
- LoRA weights: https://huggingface.co/shibing624/llama-3-8b-instruct-262k-chinese-lora

## Features

Advantages:
1. Supports an extra-long context length of 262k tokens, well suited for RAG
2. Supports both Chinese and English
3. Supports multi-turn dialogue, with strong coding and reasoning ability and solid English knowledge
4. GPU memory required for inference:

| Quantization | Peak Usage for Encoding 2048 Tokens | Peak Usage for Generating 8192 Tokens |
| -- | -- | -- |
| FP16/BF16 | 17.66GB | 22.58GB |
| Int4 | 8.21GB | 13.62GB |
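As a rough cross-check of the table: FP16 stores two bytes per parameter, so an ~8B-parameter model needs about 15 GiB for the weights alone, and Int4 about a quarter of that; the peak figures above additionally include activations and KV cache. A back-of-envelope sketch (the 8.03B parameter count is an assumed approximation for Llama-3-8B):

```python
# Back-of-envelope weight memory for an ~8B-parameter model.
# 8.03e9 is an assumed parameter count; FP16 uses 2 bytes per weight,
# Int4 quantization half a byte per weight.
params = 8.03e9
fp16_gib = params * 2 / 2**30    # weights only, in GiB
int4_gib = params * 0.5 / 2**30  # 4-bit quantization
print(round(fp16_gib, 2), round(int4_gib, 2))  # → 14.96 3.74
```

The gap between these weight-only numbers and the peak figures in the table is the runtime overhead (activations plus KV cache), which grows with the number of tokens being encoded or generated.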

Shortcomings:
1. With a model size of only 8B, hallucination is noticeable on knowledge-based Q&A
2. Chinese knowledge is limited and prone to hallucination, especially on classical Chinese texts, a common weakness of Llama-family models

## How to use

```python
content = outputs[0]["generated_text"][len(prompt):]
print(content)
```

Result:

```shell
机器学习(Machine Learning)是一种基于计算机算法的自动数据分析技术,用于从数据中学习并预测未来的结果。它是人工智能(AI)和数据挖掘(Data Mining)的子领域,旨在通过训练和调整算法来发现数据中的模式、关系和规律。

机器学习算法可以分为监督学习、无监督学习和半监督学习三类:

1. 监督学习(Supervised Learning):在这种类型的学习中,算法被提供带有标签的数据集,用于训练。算法学习如何将输入数据映射到输出数据,并在新数据上进行预测。常见的监督学习算法包括逻辑回归、决策树、支持向量机(SVM)、随机森林和神经网络。
2. 无监督学习(Unsupervised Learning):在这种类型的学习中,算法没有标签数据。算法学习数据中的模式、结构和关系,并可能发现新的数据集群或特征。常见的无监督学习算法包括聚类、主成分分析(PCA)、独立成分分析(ICA)和高维度数据降维。
3. 半监督学习(Semi-supervised Learning):在这种类型的学习中,算法被提供部分带有标签的数据集。算法学习如何将输入数据映射到输出数据,并在新数据上进行预测。半监督学习算法结合了监督学习和无监督学习的优点,常见的半监督学习算法包括自我标注(Self-Labeling)和基于图的半监督学习(Graph-based Semi-supervised Learning)。

机器学习的应用广泛,包括自然语言处理、计算机视觉、推荐系统、人工智能和自动驾驶等领域。它的优势包括:

1. 自动化:机器学习算法可以自动从数据中发现模式和关系,无需人为干预。
2. 高效性:机器学习算法可以处理大量数据,并且可以在不需要人为干预的情况下进行预测。
3. 适应性:机器学习算法可以根据数据集的变化和更新进行调整。
4. 精准性:机器学习算法可以通过训练和测试来提高预测的准确性。
```
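The `[len(prompt):]` slice in the usage snippet relies on the transformers text-generation pipeline returning, by default, the prompt concatenated with the completion; dropping the first `len(prompt)` characters keeps only the newly generated text. A minimal illustration with plain strings (the prompt and completion texts here are made up):

```python
# A text-generation pipeline's "generated_text" field is, by default,
# prompt + completion, so slicing off the first len(prompt) characters
# isolates just the completion.
prompt = "什么是机器学习?"
generated_text = prompt + "机器学习是一种自动数据分析技术。"
content = generated_text[len(prompt):]
print(content)  # → 机器学习是一种自动数据分析技术。
```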

## Train detail

Train loss:

<img src="https://huggingface.co/shibing624/llama-3-8b-instruct-262k-chinese/raw/main/train_lossv2.svg" width="600">

Eval loss:

<img src="https://huggingface.co/shibing624/llama-3-8b-instruct-262k-chinese/raw/main/eval_lossv2.svg" width="600">

# About Llama-3-8B-Instruct-262k

Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. Contact Gradient to learn more or to collaborate on a custom model.

This model extends Llama-3 8B's context length from 8k to over 160K. It was developed by Gradient, with compute sponsored by [Crusoe Energy](https://huggingface.co/crusoeai), and demonstrates that SOTA LLMs can learn to operate on long context with minimal training (< 200M tokens) by appropriately adjusting RoPE theta.
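Raising RoPE's base theta slows the rotary frequencies so that positions far beyond the original 8k window remain distinguishable. A minimal sketch of the frequency computation (Llama-3's default base is 500,000; the 4,000,000 value below is a made-up illustrative base, not the one Gradient actually used):

```python
# Rotary position embedding (RoPE) inverse frequencies:
#   inv_freq[i] = theta ** (-2*i / dim)
# A larger base theta yields smaller frequencies, i.e. slower rotation
# per position, which is how long-context variants keep distant
# positions distinguishable.
def rope_inv_freq(dim: int, theta: float) -> list:
    return [theta ** (-2 * i / dim) for i in range(dim // 2)]

default = rope_inv_freq(128, 500_000.0)      # Llama-3 default base
extended = rope_inv_freq(128, 4_000_000.0)   # hypothetical larger base
# Every frequency shrinks or stays equal (the i=0 term is 1.0 in both).
print(all(e <= d for e, d in zip(extended, default)))  # → True
```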