Update README.md

README.md CHANGED
@@ -1,6 +1,19 @@
 ---
 license: cc-by-nc-4.0
 ---
+
+# LDCC-Instruct-Llama-2-ko-13B model card
+
+## Model Details
+
+* **Developed by**: [Lotte Data Communication](https://www.ldcc.co.kr)
+
+## Hardware and Software
+
+* **Hardware**: We used a single node of 8x A100 GPUs (A100x8 * 1) to train the model
+* **Training Factors**: We fine-tuned this model using a combination of the [DeepSpeed library](https://github.com/microsoft/DeepSpeed) and the [HuggingFace Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) / [HuggingFace Accelerate](https://huggingface.co/docs/accelerate/index)
+
+
 # **Llama 2**
 Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
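
The added card names the training stack (DeepSpeed plus the HuggingFace Trainer / Accelerate) but does not publish a configuration. Below is a minimal sketch of how such a Trainer run is typically wired to DeepSpeed; the base checkpoint, toy dataset, `ds_config.json` path, and hyperparameters are placeholders for illustration, not values from the card.

```python
# Minimal sketch of a HuggingFace Trainer run backed by DeepSpeed.
# Everything below (base model, dataset, ds_config.json, hyperparameters) is a
# placeholder for illustration and is not taken from the model card.
# Typically launched with: deepspeed train.py (or accelerate launch train.py).
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
    default_data_collator,
)

base_model = "meta-llama/Llama-2-13b-hf"  # hypothetical base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Toy instruction-style example standing in for the real fine-tuning data.
def tokenize(example):
    tokens = tokenizer(example["text"], truncation=True, max_length=512, padding="max_length")
    tokens["labels"] = tokens["input_ids"].copy()
    return tokens

train_dataset = Dataset.from_dict(
    {"text": ["### Instruction:\nExample prompt\n\n### Response:\nExample answer"]}
).map(tokenize, remove_columns=["text"])

args = TrainingArguments(
    output_dir="ldcc-instruct-13b-sft",  # placeholder output path
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    bf16=True,
    deepspeed="ds_config.json",  # hypothetical DeepSpeed ZeRO config file
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    data_collator=default_data_collator,
)
trainer.train()
```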
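For completeness, here is a minimal usage sketch of loading a checkpoint in the Hugging Face Transformers format, as the card describes. The repository id is inferred from the model name and is an assumption, not something stated in the card.

```python
# Minimal sketch: loading and prompting the converted checkpoint with Transformers.
# The repository id is an assumption inferred from the model name; adjust as needed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LDCC/LDCC-Instruct-Llama-2-ko-13B"  # assumed repo id, not confirmed by the card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # roughly 26 GB of GPU memory for a 13B model in fp16
    device_map="auto",
)

prompt = "대한민국의 수도는 어디인가요?"  # "What is the capital of South Korea?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```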