
Model Card for polyglot-5.8B-CoT-e1

polyglot-5.8B-CoT-e1 is a language model built by fine-tuning polyglot-5.8B on 216,227 Chain-of-Thought (CoT) examples from the "CoT Collection" dataset.

Model Sources

  • Repository: [More Information Needed]
  • Paper: [More Information Needed]
  • Demo: [More Information Needed]

Uses

Load Model

from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned model; device_map='auto' spreads the weights across
# available devices automatically (requires the accelerate package).
model = AutoModelForCausalLM.from_pretrained(
    "amphora/polyglot-5.8B-CoT-e1",
    device_map='auto',
)

tokenizer = AutoTokenizer.from_pretrained("amphora/polyglot-5.8B-CoT-e1")
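
If GPU memory is tight, the model can also be loaded in half precision. This is a minimal sketch, assuming a CUDA GPU and that fp16 inference is acceptable for your use case; torch_dtype is a standard from_pretrained argument, not something documented for this specific checkpoint.

import torch
from transformers import AutoModelForCausalLM

# Assumption: half precision is acceptable for inference on this model.
model = AutoModelForCausalLM.from_pretrained(
    "amphora/polyglot-5.8B-CoT-e1",
    device_map='auto',
    torch_dtype=torch.float16,
)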

Generate with CoT Rationale

# Completion-style prompt: a question followed by "풀이: " ("Solution: ") so the
# model continues with a step-by-step rationale.
# (The question asks: drying 10 items of laundry takes 1 hour; assuming there is
# room to hang 20 items at once, how long does it take to dry 20 items?)
input_ = "10개의 빨래를 펼쳐 말리는데 1시간이 걸린다. 20개의 빨래를 동시에 펼칠 공간이 있다고 가정할때, 20개의 빨래를 말리는데 걸리는 시간은?\n풀이: "
input_tensor = tokenizer(input_, return_tensors='pt')

output = model.generate(
    input_ids=input_tensor.input_ids.to("cuda"),
    do_sample=True,          # required for top_k/top_p sampling to take effect
    repetition_penalty=1.0,  # 1.0 means no repetition penalty
    max_new_tokens=64,
    top_k=50,
    top_p=0.95,
)

o = tokenizer.batch_decode(output)[0].split(tokenizer.eos_token)[0]
print(o)
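
The decoded string above still contains the prompt. If you only want the generated rationale, one option is to decode just the newly generated tokens; a minimal sketch, reusing the variable names from the example above:

# Slice off the prompt tokens before decoding so only the rationale remains.
generated_tokens = output[0][input_tensor.input_ids.shape[1]:]
print(tokenizer.decode(generated_tokens, skip_special_tokens=True))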

Out-of-Scope Use

Because polyglot-5.8B-CoT-e1 was not trained on instruction/chat data, it is not suitable for instruction-following or chat use.
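
For its intended use, prompts should follow the completion-style format shown above rather than a chat format. A small hypothetical helper; the "\n풀이: " suffix mirrors the example in this card, not an officially documented template:

def build_cot_prompt(question: str) -> str:
    # Append the "풀이: " ("Solution: ") cue so the model continues with a rationale.
    return question + "\n풀이: "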

Citation

BibTeX:

@misc{polyglot-ko,
  title = {{Polyglot-Ko: Open-Source Korean Autoregressive Language Model}},
  author = {Ko, Hyunwoong and Yang, Kichang and Ryu, Minho and Choi, Taekyoon and Yang, Seungmu and Hyun, Jiwung and Park, Sungho},
  url = {https://www.github.com/eleutherai/polyglot},
  month = {9},
  year = {2022},
}

@misc{kim2023cot,
  title = {The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning},
  author = {Seungone Kim and Se June Joo and Doyoung Kim and Joel Jang and Seonghyeon Ye and Jamin Shin and Minjoon Seo},
  year = {2023},
  eprint = {2305.14045},
  archivePrefix = {arXiv},
  primaryClass = {cs.CL}
}