---
library_name: transformers
language:
- en
- ko
pipeline_tag: translation
license: gpl
datasets:
- 4yo1/llama3_enkor_testing_short
tags:
- llama-3-ko
---
|
|
|
### Model Card for LLaMA3-ENG-KO-8B
|
### Model Details
|
|
|
**Model Overview**

- Model Name: LLaMA3-ENG-KO-8B
- Model Type: Transformer-based Language Model
- Model Size: 8 billion parameters
- Developed by: 4yo1
- Languages: English and Korean
|
|
|
### Model Description
|
LLaMA3-ENG-KO-8B is a language model pre-trained on a diverse corpus of English and Korean texts and then fine-tuned for English-Korean tasks. The fine-tuning approach adapts the model to specific tasks or datasets with a minimal number of additional parameters, making it efficient and effective for specialized applications.
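
The card does not state which fine-tuning method was used. As an illustrative sketch only, a LoRA-style parameter-efficient setup with the `peft` library might look like the following; the rank, target modules, and other hyperparameters are assumptions, not the authors' recipe.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Load the base model; the 8B base weights stay frozen during LoRA fine-tuning
base_model = AutoModelForCausalLM.from_pretrained("4yo1/llama3-eng-ko-8b")

# Hypothetical LoRA configuration: small low-rank adapter matrices are the
# only trainable parameters
lora_config = LoraConfig(
    r=8,                                  # adapter rank (assumption)
    lora_alpha=16,                        # scaling factor (assumption)
    target_modules=["q_proj", "v_proj"],  # attention projections (assumption)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # prints the small trainable fraction
```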
|
|
|
### How to Use - Sample Code
|
|
|
```python
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

# Load the configuration, model weights (with the causal LM head needed for
# generation), and tokenizer from the Hugging Face Hub
config = AutoConfig.from_pretrained("4yo1/llama3-eng-ko-8b")
model = AutoModelForCausalLM.from_pretrained("4yo1/llama3-eng-ko-8b")
tokenizer = AutoTokenizer.from_pretrained("4yo1/llama3-eng-ko-8b")
```
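
As a usage illustration (not from the original card), here is a minimal generation call, assuming the checkpoint works with the standard `transformers` generate API; the prompt format is an assumption.

```python
# Translate an English sentence into Korean (illustrative prompt; the exact
# prompt template used during fine-tuning is not documented here)
prompt = "Translate to Korean: The weather is nice today."
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```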