---
language: ko
license: apache-2.0
tags:
- korean
library_name: adapter-transformers
pipeline_tag: text-generation
---

# Chat Model QLoRA Adapter

Fine-tuned QLoRA adapter for the model [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b).

Fine-tuned on the Korean Sympathy Conversation dataset from AIHub.

See more information at [our GitHub](https://github.com/boostcampaitech6/level2-3-nlp-finalproject-nlp-09).

## Datasets

- [공감형 대화 (Sympathetic Conversation)](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&dataSetSn=71305)

## Quick Tour

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/polyglot-ko-5.8b"

# Load the base model
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
    device_map="auto",
)
model.config.use_cache = True

# Load and activate the fine-tuned QLoRA adapter
model.load_adapter("m2af/EleutherAI-polyglot-ko-5.8b-adapter", "loaded")
model.set_adapter("loaded")

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"

# Generate a sample response ("안녕하세요, 반갑습니다." = "Hello, nice to meet you.")
inputs = tokenizer("안녕하세요, 반갑습니다.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
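Because this is a QLoRA adapter, it can also be attached to a 4-bit quantized base model to cut GPU memory use substantially. The sketch below is an assumption, not part of the original card: it uses typical QLoRA-style quantization settings (NF4 with double quantization and bfloat16 compute) via `BitsAndBytesConfig`, and requires the `bitsandbytes` package and a CUDA GPU. Adjust the settings to your hardware.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Typical QLoRA-style 4-bit quantization config (assumed defaults, not
# values confirmed by the adapter authors)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Load the base model in 4-bit, then attach the adapter as in the Quick Tour
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/polyglot-ko-5.8b",
    quantization_config=bnb_config,
    device_map="auto",
)
model.load_adapter("m2af/EleutherAI-polyglot-ko-5.8b-adapter", "loaded")
model.set_adapter("loaded")
```

Generation then works exactly as in the Quick Tour above; only the memory footprint of the base model changes.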