---
library_name: transformers
license: llama3.1
language:
- ko
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/642c4af1ab0cc792e4373b57/dkUAm0R7Bp3-JYxxsZXsQ.png)

# A model retrained after removing the last 10 layers from the original Llama-3.1-8B-Instruct model

![image/png](https://cdn-uploads.huggingface.co/production/uploads/642c4af1ab0cc792e4373b57/hQfgaFtOZpRHi5x9mNS7T.png)

To restore the knowledge held by the original language model, we first conducted broad fine-tuning to revive its extensive knowledge base. We then applied refined fine-tuning on high-quality datasets to strengthen the model's internal and linguistic representations, thereby improving its reliability.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/642c4af1ab0cc792e4373b57/M1Iy0Sh6q08Nxjwneyf_C.png)

After training the model on a specific task, we merged the pre-trained model with the task-trained model.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/642c4af1ab0cc792e4373b57/dIJhX8qoG5A4mZsmSHXLo.png)

```python
import transformers
import torch

model_id = "kikikara/ko-llama-3.1-5b-instruct-FrankenMerging"

# Load the model as a chat-style text-generation pipeline.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    # "You are a Korean AI model."
    {"role": "system", "content": "당신은 한국어 ai 모델입니다."},
    # "What is the meaning of life?"
    {"role": "user", "content": "인생의 의미란 뭐야?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
)
# Print only the last message, i.e. the assistant's reply.
print(outputs[0]["generated_text"][-1])
```
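The layer-removal step described above can be sketched with the `transformers` API. The snippet below uses a tiny, randomly initialized Llama configuration so it runs without downloading any weights; for the real model you would load `meta-llama/Llama-3.1-8B-Instruct` (32 layers) and drop its last 10 layers instead. The config values and the number of removed layers here are illustrative assumptions, not the card's exact recipe.

```python
import torch
from transformers import LlamaConfig, LlamaForCausalLM

# Tiny stand-in config (assumption) so the sketch runs quickly offline.
config = LlamaConfig(
    hidden_size=64,
    intermediate_size=128,
    num_hidden_layers=8,
    num_attention_heads=4,
    num_key_value_heads=4,
    vocab_size=128,
)
model = LlamaForCausalLM(config)

# Drop the last N decoder layers and keep the config consistent with
# the truncated architecture (N=10 for the real 8B -> 5B model).
n_remove = 3
model.model.layers = model.model.layers[:-n_remove]
model.config.num_hidden_layers -= n_remove

print(len(model.model.layers))
```

After truncation the model can be saved with `model.save_pretrained(...)` and then fine-tuned as usual; the broad and refined fine-tuning stages the card describes would start from this pruned checkpoint.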
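The card states that the pre-trained model was merged with the task-trained model but does not say how. As one common approach, a simple linear (weight-averaging) merge can be sketched as follows; the tiny config, the two randomly initialized stand-in models, and the 0.5 mixing ratio `alpha` are all illustrative assumptions.

```python
import torch
from transformers import LlamaConfig, LlamaForCausalLM

# Tiny illustrative config (assumption); a real merge would load the
# two fine-tuned checkpoints with from_pretrained instead.
config = LlamaConfig(
    hidden_size=64,
    intermediate_size=128,
    num_hidden_layers=2,
    num_attention_heads=4,
    num_key_value_heads=4,
    vocab_size=128,
)

# Stand-ins for the broadly fine-tuned model and the task-trained model.
model_a = LlamaForCausalLM(config)
model_b = LlamaForCausalLM(config)

# Linear merge: average every parameter tensor with mixing weight alpha.
alpha = 0.5  # assumed ratio; the card does not state the actual method
merged_state = {
    name: alpha * p_a + (1 - alpha) * model_b.state_dict()[name]
    for name, p_a in model_a.state_dict().items()
}

merged = LlamaForCausalLM(config)
merged.load_state_dict(merged_state)
```

Both source models must share the same architecture (here enforced by building them from one config) for the state-dict keys and tensor shapes to line up.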