---
library_name: transformers
language:
- en
- ko
pipeline_tag: translation
license: mit
datasets:
- pre
---

### Model Card for 4yo1/llama_pre2_task01

### Model Details

- Model Name: 4yo1/llama_pre2_task01 (fine-tuned)
- Model Type: Transformer-based language model
- Model Size: 8 billion parameters
- Developed by: 4yo1
- Languages: English and Korean
- Datasets: 140kgpt
- License: MIT

### How to Use - Sample Code

```python
from transformers import AutoConfig, AutoModel, AutoTokenizer

# Load the configuration, base model weights, and tokenizer from the Hugging Face Hub.
config = AutoConfig.from_pretrained("4yo1/llama_pre2_task01")
model = AutoModel.from_pretrained("4yo1/llama_pre2_task01")
tokenizer = AutoTokenizer.from_pretrained("4yo1/llama_pre2_task01")
```
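Since the card tags this model for English-to-Korean translation, a generation sketch may be useful. This is a minimal, hypothetical example: the `build_prompt` template is an assumption (the card does not document a prompt format), and `AutoModelForCausalLM` is used here because a Llama-style model needs its language-modeling head to generate text.

```python
def build_prompt(text: str) -> str:
    # Hypothetical instruction-style template; the card does not specify
    # the prompt format the fine-tune expects, so adjust as needed.
    return f"Translate the following English text to Korean.\n{text}\n"

def translate(text: str) -> str:
    # Imported lazily so build_prompt works without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("4yo1/llama_pre2_task01")
    model = AutoModelForCausalLM.from_pretrained("4yo1/llama_pre2_task01")
    inputs = tokenizer(build_prompt(text), return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=128)
    # Drop the prompt tokens so only the generated continuation is decoded.
    generated = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(generated, skip_special_tokens=True)
```

Treat this as a starting point: sampling parameters (temperature, top-p) and the prompt wording will likely need tuning for good translations.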