---
library_name: transformers
language:
- en
- ko
pipeline_tag: translation
license: mit
datasets:
- pre
---

### Model Card for 4yo1/llama3-pre1-pre2-ds-ins2-lora3

### Model Details

Model Card: sapie with Fine-Tuning

Model Overview

- Model Name: 4yo1/llama3-pre1-pre2-ds-ins2-lora3
- Model Type: Transformer-based Language Model
- Model Size: 8 billion parameters
- Developed by: 4yo1
- Languages: English and Korean

### How to Use - Sample Code

```python
from transformers import AutoConfig, AutoModel, AutoTokenizer

# Load the configuration, model weights, and tokenizer from the Hub
config = AutoConfig.from_pretrained("4yo1/llama3-pre1-pre2-ds-ins2-lora3")
model = AutoModel.from_pretrained("4yo1/llama3-pre1-pre2-ds-ins2-lora3")
tokenizer = AutoTokenizer.from_pretrained("4yo1/llama3-pre1-pre2-ds-ins2-lora3")
```
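
Since this is a LLaMA-style causal language model tagged for translation, a typical next step is to load it with `AutoModelForCausalLM` and call `generate`. The sketch below is an assumption, not part of the original card: the prompt format is hypothetical (adjust it to whatever instruction format the model was fine-tuned on), and `torch_dtype`/`device_map` are optional conveniences for fitting the 8B weights in memory.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "4yo1/llama3-pre1-pre2-ds-ins2-lora3"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to reduce memory for 8B parameters
    device_map="auto",          # spread layers across available GPUs/CPU
)

# Hypothetical translation prompt -- the actual fine-tuning format may differ.
prompt = "Translate the following English sentence into Korean: The weather is nice today."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Greedy decoding (`do_sample=False`) keeps translations deterministic; sampling parameters can be added for more varied output.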
|