---
language:
- ko
datasets: DopeorNope/OpenOrca-near-dedup-v1
license: cc-by-nc-sa-4.0
---
**This model was developed by the joint LLM research consortium of (주)미디어그룹사람과숲 and (주)마커.**
**The license is `cc-by-nc-sa-4.0`.**
## Model Details
**Model Developers** Seungyoo Lee (DopeorNope)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
pub-llama-13B-v5 is an auto-regressive language model based on the LLaMA 2 transformer architecture.
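
Below is a minimal usage sketch with 🤗 Transformers. The repo id (`DopeorNope/pub-llama-13B-v5`), the example prompt, and the generation settings are illustrative assumptions, not official usage instructions.

```python
# Minimal loading/generation sketch; repo id and settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "DopeorNope/pub-llama-13B-v5"  # assumed Hugging Face Hub repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # 13B parameters fit on a single high-memory GPU in fp16
    device_map="auto",
)

prompt = "대한민국의 수도는 어디인가요?"  # "What is the capital of South Korea?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```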
**Repo Link**
GitHub: [pub-llama📑](Not_yet)
**Training Dataset**
The DopeorNope/OpenOrca-near-dedup-v1 dataset was built by applying a [near-deduplication algorithm](https://arxiv.org/abs/2107.06499) to reduce near-duplicate examples (a rough sketch of the idea follows below).
We will release the dataset publicly soon.
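
For intuition, here is a rough sketch of MinHash-based near-deduplication in the spirit of the linked paper (Lee et al., 2021). The shingling scheme, `num_perm`, and the similarity threshold are illustrative assumptions, not the values used to build this dataset.

```python
# Sketch of MinHash/LSH near-dedup; parameters are assumptions, not the dataset's recipe.
from datasketch import MinHash, MinHashLSH

def minhash_of(text: str, num_perm: int = 128) -> MinHash:
    """Build a MinHash signature from whitespace tokens."""
    m = MinHash(num_perm=num_perm)
    for token in text.split():
        m.update(token.encode("utf-8"))
    return m

def near_dedup(texts, threshold: float = 0.8):
    """Keep only the first occurrence of each group of near-duplicate texts."""
    lsh = MinHashLSH(threshold=threshold, num_perm=128)
    kept = []
    for i, text in enumerate(texts):
        sig = minhash_of(text)
        if lsh.query(sig):           # an earlier kept text is a near-duplicate
            continue
        lsh.insert(str(i), sig)
        kept.append(text)
    return kept

print(near_dedup([
    "The quick brown fox jumps over the lazy dog",
    "The quick brown fox jumps over a lazy dog",   # near-duplicate, dropped
    "A completely different sentence",
]))
```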