---
library_name: transformers
language:
  - en
  - ko
pipeline_tag: translation
tags:
  - llama-3-ko
license: mit
datasets:
  - 4yo1/llama3_enkor_testing_short
---

# Model Card for LLaMA3-ENG-KO-8B

## Model Details

### Model Overview

- **Model Name:** LLaMA3-ENG-KO-8B
- **Model Type:** Transformer-based Language Model
- **Model Size:** 8 billion parameters
- **Developed by:** 4yo1
- **Languages:** English and Korean

## Model Description

LLaMA3-ENG-KO-8B is a language model pre-trained on a diverse corpus of English and Korean texts and then fine-tuned for English-Korean translation. The fine-tuning approach adds only a minimal number of additional parameters, letting the model adapt to specific tasks or datasets efficiently and making it effective for specialized applications.
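This card does not state which fine-tuning method was used; the sketch below illustrates one common parameter-efficient option, LoRA via the `peft` library, consistent with the "minimal number of additional parameters" description. The rank, scaling factor, and target modules are illustrative assumptions, not the model's documented settings.

```python
# Hypothetical parameter-efficient fine-tuning sketch using LoRA (peft).
# The exact method and hyperparameters used for LLaMA3-ENG-KO-8B are not
# documented; everything below is an illustrative assumption.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("4yo1/llama3-eng-ko-8b")

lora_config = LoraConfig(
    r=8,                                   # low-rank dimension (assumed)
    lora_alpha=16,                         # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],   # typical LLaMA attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the small LoRA adapters are trainable
```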

## How to Use - Sample Code

```python
from transformers import AutoConfig, AutoModel, AutoTokenizer

# Load the configuration, weights, and tokenizer from the Hugging Face Hub
config = AutoConfig.from_pretrained("4yo1/llama3-eng-ko-8b")
model = AutoModel.from_pretrained("4yo1/llama3-eng-ko-8b")
tokenizer = AutoTokenizer.from_pretrained("4yo1/llama3-eng-ko-8b")
```
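
For translation, the checkpoint can be driven as a causal language model with `generate()`. The sketch below assumes the repository ships standard Llama-3 causal-LM weights (so `AutoModelForCausalLM` applies) and uses a plain instruction-style prompt; the model's exact prompt template is not documented here.

```python
# Hedged usage sketch: assumes the checkpoint works as a standard causal LM
# and that a plain instruction-style prompt is acceptable; the exact prompt
# format for this model is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("4yo1/llama3-eng-ko-8b")
tokenizer = AutoTokenizer.from_pretrained("4yo1/llama3-eng-ko-8b")

prompt = "Translate the following English sentence into Korean:\nThe weather is nice today.\n"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding kept short for illustration; tune max_new_tokens as needed
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```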

## Datasets

- 4yo1/llama3_enkor_testing_short

## License

MIT