What to do when you get an error


์ด๋ฒˆ ์žฅ์—์„œ๋Š” Transformer ๋ชจ๋ธ์„ ์ƒˆ๋กญ๊ฒŒ ํŠœ๋‹ ํ›„ ์˜ˆ์ธก์„ ํ•˜๋ ค๊ณ  ํ•  ๋•Œ ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ๋Š” ๋ช‡๊ฐ€์ง€ ์ผ๋ฐ˜์ ์ธ ์—๋Ÿฌ๋ฅผ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค.

์ด๋ฒˆ ์žฅ์—์„œ ๋ชจ๋ธ์˜ ์ €์žฅ์†Œ ํ…œํ”Œ๋ฆฟ์ด ์ค€๋น„๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ๋งŒ์•ฝ ์ด๋ฒˆ ๋‹จ์›์—์„œ ์ฝ”๋“œ๋ฅผ ์‹คํ–‰ํ•˜๋ ค๋ฉด ๋ชจ๋ธ์„ Huggingface Hub์˜ ๊ฐœ์ธ ๊ณ„์ •์— ๋ชจ๋ธ์„ ๋ณต์‚ฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ ๊ณ„์ •์˜ ์ €์žฅ์†Œ์— ๋ณต์ œํ•˜๊ธฐ ์œ„ํ•ด ์ฃผํ”ผํ„ฐ ๋…ธํŠธ๋ถ์—์„œ ์•„๋ž˜์˜ ์ฝ”๋“œ๋ฅผ ์‹คํ–‰ํ•˜๊ฑฐ๋‚˜:

from huggingface_hub import notebook_login

notebook_login()

๋˜๋Š” ์•„๋ž˜์˜ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์›ํ•˜๋Š” ํ„ฐ๋ฏธ๋„์—์„œ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค:

huggingface-cli login

ํ„ฐ๋ฏธ๋„์—์„œ ์•„์ด๋””์™€ ๋น„๋ฐ€๋ฒˆํ˜ธ๋ฅผ ์ž…๋ ฅํ•˜๋Š” ํ”„๋กฌํ”„ํŠธ๊ฐ€ ๋‚˜ํƒ€๋‚˜๋ฉฐ, ์‹๋ณ„ ํ† ํฐ์€ ~/.cache/huggingface/์— ์ €์žฅ๋ฉ๋‹ˆ๋‹ค. ํ•œ๋ฒˆ ๋กœ๊ทธ์ธ ํ•˜๊ณ  ๋‚˜๋ฉด ๋ชจ๋ธ์˜ ์ €์žฅ์†Œ ํ…œํ”Œ๋ฆฟ์„ ์•„๋ž˜์˜ ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•ด ๋ณต์‚ฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค:

from distutils.dir_util import copy_tree
from huggingface_hub import Repository, snapshot_download, create_repo, get_full_repo_name


def copy_repository_template():
    # Clone the repo and extract the local path
    template_repo_id = "lewtun/distilbert-base-uncased-finetuned-squad-d5716d28"
    commit_hash = "be3eaffc28669d7932492681cd5f3e8905e358b4"
    template_repo_dir = snapshot_download(template_repo_id, revision=commit_hash)
    # Create an empty repo on the Hub
    model_name = template_repo_id.split("/")[1]
    create_repo(model_name, exist_ok=True)
    # Clone the empty repo
    new_repo_id = get_full_repo_name(model_name)
    new_repo_dir = model_name
    repo = Repository(local_dir=new_repo_dir, clone_from=new_repo_id)
    # Copy files
    copy_tree(template_repo_dir, new_repo_dir)
    # Push to Hub
    repo.push_to_hub()

Now when you call copy_repository_template(), it will create a copy of the template repository under your account.

๐Ÿค— Transformers์˜ ํŒŒ์ดํ”„๋ผ์ธ ๋””๋ฒ„๊น…

Transformer ๋ชจ๋ธ๋“ค์˜ ๋ฉ‹์ง„ ๋””๋ฒ„๊น… ์„ธ๊ณ„๋กœ ์—ฌ์ •์„ ๋– ๋‚˜๊ธฐ ์œ„ํ•ด, ๋‹ค์Œ์˜ ์‹œ๋‚˜๋ฆฌ์˜ค๋ฅผ ์ƒ๊ฐํ•ด๋ณด์„ธ์š”: ์—ฌ๋Ÿฌ๋ถ„์€ E-commerce ์‚ฌ์ดํŠธ์˜ ๊ณ ๊ฐ์ด ์†Œ๋น„์ž ์ƒํ’ˆ์— ๋Œ€ํ•œ ๋‹ต๋ณ€์„ ์ฐพ๊ธฐ ์œ„ํ•œ ์งˆ๋ฌธ ๋ฐ ๋‹ต๋ณ€ ํ”„๋กœ์ ํŠธ์—์„œ ๋™๋ฃŒ์™€ ํ•จ๊ป˜ ์ผํ•˜๊ณ  ์žˆ์œผ๋ฉฐ, ๋™๋ฃŒ๊ฐ€ ๋‹น์‹ ์—๊ฒŒ ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋ฉ”์„ธ์ง€๋ฅผ ๋ณด๋ƒˆ์Šต๋‹ˆ๋‹ค:

Hi! I just ran an experiment using the techniques in Chapter 7 of the Hugging Face course and got some great results on SQuAD! I think we can use this model as a starting point for our project. The model ID on the Hub is "lewtun/distillbert-base-uncased-finetuned-squad-d5716d28". Feel free to test it out :)

๐Ÿค— Transformers์˜ pipeline์„ ์‚ฌ์šฉ๋Š” ๋ชจ๋ธ์„ ๋ถˆ๋Ÿฌ์˜ค๊ธฐ ์œ„ํ•ด ์šฐ์„  ๊ณ ๋ คํ•ด์•ผ ํ•  ๊ฒƒ์ด ์žˆ์Šต๋‹ˆ๋‹ค:

from transformers import pipeline

model_checkpoint = get_full_repo_name("distillbert-base-uncased-finetuned-squad-d5716d28")
reader = pipeline("question-answering", model=model_checkpoint)
"""
OSError: Can't load config for 'lewtun/distillbert-base-uncased-finetuned-squad-d5716d28'. Make sure that:

- 'lewtun/distillbert-base-uncased-finetuned-squad-d5716d28' is a correct model identifier listed on 'https://huggingface.co/models'

- or 'lewtun/distillbert-base-uncased-finetuned-squad-d5716d28' is the correct path to a directory containing a config.json file
"""

Oh no, something seems to have gone wrong! If you're new to programming, these kinds of errors can seem a bit cryptic at first (what's an OSError anyway?!). The error displayed here is just the last part of a much larger error report called a Python traceback (also known as a stack trace). For example, if you run this code on Google Colab, you should see something like the following screenshot:

A Python traceback.

์ด ๋ฆฌํฌํŠธ์—๋Š” ๋งŽ์€ ์ •๋ณด๋ฅผ ๋‹ด๊ณ  ์žˆ์œผ๋‹ˆ, ๊ฐ™์ด ํ•ต์‹ฌ ๋ถ€๋ถ„์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ์šฐ์„  ๋ช…์‹ฌํ•ด์•ผํ•  ๊ฒƒ์€ tracebacks์€ ์•„๋ž˜๋ถ€ํ„ฐ ์œ„๋กœ ์ฝ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋ง์€ ์˜์–ด ํ…์ŠคํŠธ๋ฅผ ์œ„์—์„œ ์•„๋ž˜๋กœ ์ฝ์–ด์˜ค๊ณค ํ–ˆ๋‹ค๋ฉด ์ด์ƒํ•˜๊ฒŒ ๋“ค๋ฆด ์ˆ˜ ์žˆ๊ฒ ์ง€๋งŒ ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๋‹ค์šด๋กœ๋“œ ํ•  ๋•Œ pipeline์ด ๋งŒ๋“œ๋Š” ํ•จ์ˆ˜ ํ˜ธ์ถœ ์ˆœ์„œ๋ฅผ ๋ณด์—ฌ์ฃผ๋Š” traceback์„ ๋ฐ˜์˜ํ–ˆ๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ๋‚ด๋ถ€์—์„œ pipeline์ด ์ž‘๋™ํ•˜๋Š” ๋ฐฉ์‹์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ ๋‹จ์› 2๋ฅผ ์ฐธ๊ณ ํ•˜์„ธ์š”.

Google Colab์˜ traceback์—์„œ โ€œ6 framesโ€ ์ฃผ๋ณ€์˜ ํŒŒ๋ž€ ์ƒ์ž๋ฅผ ๋ณด์…จ๋‚˜์š”? traceback์„ โ€œframesโ€๋กœ ์••์ถ•ํ•˜๋Š” Colab์˜ ํŠน๋ณ„ํ•œ ๊ธฐ๋Šฅ์ž…๋‹ˆ๋‹ค. ๋งŒ์•ฝ ์˜ค๋ฅ˜์˜ ์›์ธ์„ ์ฐพ์„ ์ˆ˜ ์—†๋‹ค๋ฉด, ๋‘๊ฐœ์˜ ์ž‘์€ ํ™”์‚ดํ‘œ๋ฅผ ํด๋ฆญํ•ด์„œ ์ „์ฒด traceback์„ ํ™•์žฅ๋˜์–ด ์žˆ๋Š”์ง€ ์—ฌ๋ถ€๋ฅผ ํ™•์ธํ•˜์„ธ์š”.

This means that the last line of the traceback indicates the last error message and gives the name of the exception that was raised. In this case, the exception type is OSError, which indicates a system-related error. If we read the accompanying error message, we can see that there seems to be a problem with the model's config.json file, and we're given two suggestions for fixing it:

"""
make sure that:

- 'lewtun/distillbert-base-uncased-finetuned-squad-d5716d28' is a correct model identifier listed on 'https://huggingface.co/models'

- or 'lewtun/distillbert-base-uncased-finetuned-squad-d5716d28' is the correct path to a directory containing a config.json file
"""

๐Ÿ’ก ์ดํ•ดํ•˜๊ธฐ ์–ด๋ ค์šด ์—๋Ÿฌ ๋ฉ”์‹œ์ง€๋ฅผ ์ ‘ํ•˜๊ฒŒ ๋œ๋‹ค๋ฉด, ๋ฉ”์„ธ์ง€๋ฅผ ๋ณต์‚ฌํ•ด์„œ Google ๋˜๋Š” ์Šคํƒ์˜ค๋ฒ„ํ”Œ๋กœ์šฐ ๊ฒ€์ƒ‰์ฐฝ์— ๋ถ™์—ฌ ๋„ฃ๊ธฐ๋งŒ ํ•˜์„ธ์š”(๋„ค ์ง„์งญ๋‹ˆ๋‹ค!). ์ด๋Š” ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•œ ์ฒซ ์‚ฌ๋žŒ์ด ์•„๋‹ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์„๋ฟ๋”๋Ÿฌ, ์ปค๋ฎค๋‹ˆํ‹ฐ์˜ ๋‹ค๋ฅธ ์‚ฌ๋žŒ๋“ค์ด ๊ฒŒ์‹œํ•œ ์†”๋ฃจ์…˜์„ ์ฐพ๋Š” ์ข‹์€ ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ์Šคํƒ์˜ค๋ฒ„ํ”Œ๋กœ์šฐ์—์„œ โ€˜OSError: Canโ€™t load config forโ€™๋ฅผ ๊ฒ€์ƒ‰ํ•˜๋ฉด ์—ฌ๋Ÿฌ ํ•ด๋‹ต์„ ์ œ๊ณตํ•˜๋ฉฐ ๋ฌธ์ œ ํ•ด๊ฒฐ์„ ์œ„ํ•œ ์ถœ๋ฐœ์ ์œผ๋กœ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

์ฒซ ๋ฒˆ์งธ ์ œ์•ˆ์€ ๋ชจ๋ธ ID๊ฐ€ ์‹ค์ œ๋กœ ์ •ํ™•ํ•œ์ง€ ํ™•์ธํ•˜๋„๋ก ์š”์ฒญํ•˜๋Š” ๊ฒƒ์œผ๋กœ ๋น„์ฆˆ๋‹ˆ์Šค์˜ ์ฒซ ์ˆœ์„œ๋Š” ์‹๋ณ„์ž(๋ชจ๋ธ ์ด๋ฆ„)๋ฅผ ๋ณต์‚ฌํ•˜์—ฌ Hub์˜ ๊ฒ€์ƒ‰ ์ฐฝ์— ๋ถ™์—ฌ๋„ฃ๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค:

The wrong model name.

์Œ, ๋™๋ฃŒ์˜ ๋ชจ๋ธ์ด ํ—ˆ๋ธŒ์— ์—†๋Š” ๊ฒƒ ๊ฐ™์Šต๋‹ˆ๋‹คโ€ฆ ์•„ํ•˜, ๋ชจ๋ธ์˜ ์ด๋ฆ„์— ์˜คํƒ€๊ฐ€ ์žˆ์—ˆ์Šต๋‹ˆ๋‹ค! DistilBERT๋Š” ์ด๋ฆ„์— โ€œlโ€์ด ํ•˜๋‚˜๋งŒ ์žˆ์œผ๋ฏ€๋กœ ์ด๋ฅผ ์ˆ˜์ •ํ•˜๊ณ  ๋Œ€์‹  โ€œlewtun/distilbert-base-uncased-finetuned-squad-d5716d28โ€์„ ์ฐพ์•„๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค:

The right model name.

Great, we got a hit. Now let's try downloading the model again with the correct model ID:

model_checkpoint = get_full_repo_name("distilbert-base-uncased-finetuned-squad-d5716d28")
reader = pipeline("question-answering", model=model_checkpoint)
"""
OSError: Can't load config for 'lewtun/distilbert-base-uncased-finetuned-squad-d5716d28'. Make sure that:

- 'lewtun/distilbert-base-uncased-finetuned-squad-d5716d28' is a correct model identifier listed on 'https://huggingface.co/models'

- or 'lewtun/distilbert-base-uncased-finetuned-squad-d5716d28' is the correct path to a directory containing a config.json file
"""

Argh, foiled again. Welcome to the daily life of a machine learning engineer! Since we've fixed the model ID, the problem must lie in the repository itself. A quick way to access the contents of a repository on the ๐Ÿค— Hub is the list_repo_files() function of the huggingface_hub library:

from huggingface_hub import list_repo_files

list_repo_files(repo_id=model_checkpoint)
['.gitattributes', 'README.md', 'pytorch_model.bin', 'special_tokens_map.json', 'tokenizer_config.json', 'training_args.bin', 'vocab.txt']
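A quick programmatic check on that listing can surface the problem: compare the files in the repo against the files a pipeline needs to load a model. This is just a sketch, not part of the course code; repo_files is the output shown above and missing_files is a hypothetical helper:

```python
# Sketch: check a repo's file listing for the files a pipeline needs.
# repo_files is the output of list_repo_files() shown above.
repo_files = [
    ".gitattributes", "README.md", "pytorch_model.bin",
    "special_tokens_map.json", "tokenizer_config.json",
    "training_args.bin", "vocab.txt",
]

def missing_files(files, required=("config.json", "pytorch_model.bin", "vocab.txt")):
    """Return the required files that are absent from the listing."""
    return [name for name in required if name not in files]

print(missing_files(repo_files))  # ['config.json']
```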

Interesting โ€” there doesn't seem to be a config.json file in this repository! No wonder our pipeline couldn't load the model; our colleague must have forgotten to push this file to the Hub after fine-tuning. In this case, the problem is simple to fix: we could ask our colleague to add the file, or, since we can tell from the model ID that the pretrained model was distilbert-base-uncased, we can download the config for this model ourselves and push it to the repo to see if that resolves the problem. Let's try that. Using the techniques we learned in Chapter 2, we can download the model's configuration with the AutoConfig class:

from transformers import AutoConfig

pretrained_checkpoint = "distilbert-base-uncased"
config = AutoConfig.from_pretrained(pretrained_checkpoint)

๐Ÿšจ ์—ฌ๊ธฐ์—์„œ ํ•˜๋Š” ์ ‘๊ทผ ๋ฐฉ์‹์€ ๋™๋ฃŒ๊ฐ€ โ€˜distilbert-base-uncasedโ€™์˜ config๋ฅผ ์ˆ˜์ •ํ–ˆ์„ ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ ์ด ์ ‘๊ทผ ๋ฐฉ์‹์€ ์™„์ „ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ์šฐ๋ฆฌ๋Š” ๋™๋ฃŒ์—๊ฒŒ ๋จผ์ € ํ™•์ธํ•˜๊ณ  ์‹ถ๊ฒ ์ง€๋งŒ, ์ด๋ฒˆ ์žฅ์—์„œ์˜ ๋ชฉ์ ์ƒ, ๋™๋ฃŒ๊ฐ€ ๋””ํดํŠธ config๋ฅผ ์‚ฌ์šฉํ–ˆ๋‹ค๊ณ  ๊ฐ€์ •ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค.

๊ทธ๋Ÿฐ ๋‹ค์Œ config ํด๋ž˜์Šค์˜ push_to_hub() ๊ธฐ๋Šฅ์„ ์‚ฌ์šฉํ•ด์„œ config ํŒŒ์ผ์„ ๋ชจ๋ธ ์ €์žฅ์†Œ๋กœ ํ‘ธ์‹œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: We can then push this to our model repository with the configurationโ€™s push_to_hub() function:

config.push_to_hub(model_checkpoint, commit_message="Add config.json")

Now we can test whether this worked by loading the model from the latest commit on the main branch:

reader = pipeline("question-answering", model=model_checkpoint, revision="main")

context = r"""
Extractive Question Answering is the task of extracting an answer from a text
given a question. An example of a question answering dataset is the SQuAD
dataset, which is entirely based on that task. If you would like to fine-tune a
model on a SQuAD task, you may leverage the
examples/pytorch/question-answering/run_squad.py script.

๐Ÿค— Transformers is interoperable with the PyTorch, TensorFlow, and JAX
frameworks, so you can use your favourite tools for a wide variety of tasks!
"""

question = "What is extractive question answering?"
reader(question=question, context=context)
{'score': 0.38669535517692566,
 'start': 34,
 'end': 95,
 'answer': 'the task of extracting an answer from a text given a question'}

์œ ํ›„, ๋™์ž‘ํ•˜๋„ค์š”! ๋ฐฉ๊ธˆ ๋ฐฐ์šด ๋‚ด์šฉ์„ ์š”์•ฝ ํ•ด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค:

์ด์ œ ํŒŒ์ดํ”„๋ผ์ธ์„ ๋””๋ฒ„๊น…ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์•Œ์•˜์œผ๋‹ˆ ๋ชจ๋ธ ์ž์ฒด์˜ forward pass์—์„œ ๋” ๊นŒ๋‹ค๋กœ์šด ์˜ˆ๋ฅผ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค.

๋ชจ๋ธ์˜ foward pass ๋””๋ฒ„๊น…

Although the pipeline is great for most applications where you need to quickly generate predictions, sometimes you'll need to access the model's logits (say, if you have some custom post-processing you'd like to apply). To see what can go wrong in this case, let's first grab the model and tokenizer from our pipeline:

tokenizer = reader.tokenizer
model = reader.model

๋‹ค์Œ์œผ๋กœ ์งˆ๋ฌธ์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ์„ ํ˜ธํ•˜๋Š” ํ”„๋ ˆ์ž„์›Œํฌ๊ฐ€ ์ง€์›๋˜๋Š”์ง€ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค:

question = "Which frameworks can I use?"

๋‹จ์› 7์—์„œ ๋ณด์•˜๋“ฏ์ด ์ผ๋ฐ˜์ ์ธ ๋‹จ๊ณ„๋Š” ์ž…๋ ฅ์„ ํ† ํฐํ™”ํ•˜๊ณ  ์‹œ์ž‘๊ณผ ๋งˆ์ง€๋ง‰ ํ† ํฐ์˜ logits๋ฅผ ์ถ”์ถœํ•œ ๋‹ค์Œ ์‘๋‹ต ๋ถ€๋ถ„์„ ๋””์ฝ”๋”ฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค:

import torch

inputs = tokenizer(question, context, add_special_tokens=True)
input_ids = inputs["input_ids"][0]
outputs = model(**inputs)
answer_start_scores = outputs.start_logits
answer_end_scores = outputs.end_logits
# Get the most likely beginning of answer with the argmax of the score
answer_start = torch.argmax(answer_start_scores)
# Get the most likely end of answer with the argmax of the score
answer_end = torch.argmax(answer_end_scores) + 1
answer = tokenizer.convert_tokens_to_string(
    tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end])
)
print(f"Question: {question}")
print(f"Answer: {answer}")
"""
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
/var/folders/28/k4cy5q7s2hs92xq7_h89_vgm0000gn/T/ipykernel_75743/2725838073.py in <module>
      1 inputs = tokenizer(question, text, add_special_tokens=True)
      2 input_ids = inputs["input_ids"]
----> 3 outputs = model(**inputs)
      4 answer_start_scores = outputs.start_logits
      5 answer_end_scores = outputs.end_logits

~/miniconda3/envs/huggingface/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1049         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1050                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051             return forward_call(*input, **kwargs)
   1052         # Do not call functions when jit is used
   1053         full_backward_hooks, non_full_backward_hooks = [], []

~/miniconda3/envs/huggingface/lib/python3.8/site-packages/transformers/models/distilbert/modeling_distilbert.py in forward(self, input_ids, attention_mask, head_mask, inputs_embeds, start_positions, end_positions, output_attentions, output_hidden_states, return_dict)
    723         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
    724
--> 725         distilbert_output = self.distilbert(
    726             input_ids=input_ids,
    727             attention_mask=attention_mask,

~/miniconda3/envs/huggingface/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1049         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1050                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051             return forward_call(*input, **kwargs)
   1052         # Do not call functions when jit is used
   1053         full_backward_hooks, non_full_backward_hooks = [], []

~/miniconda3/envs/huggingface/lib/python3.8/site-packages/transformers/models/distilbert/modeling_distilbert.py in forward(self, input_ids, attention_mask, head_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict)
    471             raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
    472         elif input_ids is not None:
--> 473             input_shape = input_ids.size()
    474         elif inputs_embeds is not None:
    475             input_shape = inputs_embeds.size()[:-1]

AttributeError: 'list' object has no attribute 'size'
"""

Oh no, it looks like we have a bug in our code! But a little debugging never scared anyone. You can use the Python debugger in a notebook or in a terminal.

์—ฌ๊ธฐ์—์„œ ์˜ค๋ฅ˜ ๋ฉ”์‹œ์ง€๋ฅผ ์ฝ์œผ๋ฉด 'list' ๊ฐ์ฒด์—๋Š” 'size' ์†์„ฑ์ด ์—†์œผ๋ฉฐ model(**inputs)โ€˜์—์„œ ๋ฌธ์ œ๊ฐ€ ๋ฐœ์ƒํ•œ ๋ผ์ธ์„ ๊ฐ€๋ฆฌํ‚ค๋Š” --> ํ™”์‚ดํ‘œ๋ฅผ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. .Python ๋””๋ฒ„๊ฑฐ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋Œ€ํ™”์‹์œผ๋กœ ๋””๋ฒ„๊ทธํ•  ์ˆ˜ ์žˆ์ง€๋งŒ ์ง€๊ธˆ์€ ๋‹จ์ˆœํžˆ inputs` ๋ถ€๋ถ„์„ ์Šฌ๋ผ์ด์Šคํ•˜์—ฌ ์–ด๋–ค ๊ฐ’์ด ์žˆ๋Š”์ง€ ๋ณผ ๊ฒƒ์ž…๋‹ˆ๋‹ค:

inputs["input_ids"][:5]
[101, 2029, 7705, 2015, 2064]

ํ™•์‹คํžˆ ์ผ๋ฐ˜์ ์ธ Python list์ฒ˜๋Ÿผ ๋ณด์ด์ง€๋งŒ ํƒ€์ž…์„ ๋‹ค์‹œ ํ™•์ธํ•ฉ์‹œ๋‹ค:

type(inputs["input_ids"])
list

Yep, that's a Python list all right. So what went wrong? Recall from Chapter 2 that the AutoModelForXxx classes in ๐Ÿค— Transformers operate on tensors (in either PyTorch or TensorFlow), and a common operation is to extract the dimensions of a tensor using Tensor.size() in, say, PyTorch. Let's take another look at the traceback to see which line triggered the exception:

~/miniconda3/envs/huggingface/lib/python3.8/site-packages/transformers/models/distilbert/modeling_distilbert.py in forward(self, input_ids, attention_mask, head_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict)
    471             raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
    472         elif input_ids is not None:
--> 473             input_shape = input_ids.size()
    474         elif inputs_embeds is not None:
    475             input_shape = inputs_embeds.size()[:-1]

AttributeError: 'list' object has no attribute 'size'
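Before fixing anything, the failure can be reproduced in isolation: calling .size() on a plain Python list raises exactly the AttributeError seen in the traceback. This is a standalone sketch, with no model involved:

```python
# Sketch: a plain Python list has no .size() method, which is exactly
# the AttributeError raised inside the model's forward pass.
token_ids = [101, 2029, 7705, 2015, 2064]

try:
    token_ids.size()
except AttributeError as error:
    print(error)  # 'list' object has no attribute 'size'
```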

The code tried to call input_ids.size(), but this clearly won't work on a Python list. How can we solve this problem? Searching for the error message on Stack Overflow gives quite a few relevant hits. Clicking on the first one displays a question similar to ours, with the answer shown in the screenshot below:

An answer from Stack Overflow.

๋Œ€๋‹ต์€ ํ† ํฌ๋‚˜์ด์ €์— return_tensors='pt'๋ฅผ ์ถ”๊ฐ€ํ•  ๊ฒƒ์„ ๊ถŒ์žฅํ•˜๋Š”๋ฐ, ์ด๊ฒŒ ์ž‘๋™ํ•˜๋Š”์ง€ ํ™•์ธํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค:

inputs = tokenizer(question, context, add_special_tokens=True, return_tensors="pt")
input_ids = inputs["input_ids"][0]
outputs = model(**inputs)
answer_start_scores = outputs.start_logits
answer_end_scores = outputs.end_logits
# Get the most likely beginning of answer with the argmax of the score
answer_start = torch.argmax(answer_start_scores)
# Get the most likely end of answer with the argmax of the score
answer_end = torch.argmax(answer_end_scores) + 1
answer = tokenizer.convert_tokens_to_string(
    tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end])
)
print(f"Question: {question}")
print(f"Answer: {answer}")
"""
Question: Which frameworks can I use?
Answer: pytorch, tensorflow, and jax
"""

์ž˜ ๋™์ž‘ํ•˜๋„ค์š”! ์ด๊ฒŒ ๋ฐ”๋กœ ์Šคํƒ์˜ค๋ฒ„ํ”Œ๋กœ์šฐ๊ฐ€ ์–ผ๋งˆ๋‚˜ ์œ ์šฉํ•œ์ง€ ๋ณด์—ฌ์ฃผ๋Š” ์ข‹์€ ์˜ˆ์ž…๋‹ˆ๋‹ค. ์œ ์‚ฌํ•œ ๋ฌธ์ œ๋ฅผ ์‹๋ณ„ํ•˜์—ฌ ์ปค๋ฎค๋‹ˆํ‹ฐ์˜ ๋‹ค๋ฅธ ์‚ฌ๋žŒ๋“ค์˜ ๊ฒฝํ—˜์„ ํ™œ์šฉํ•  ์ˆ˜ ์žˆ์—ˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ด์™€ ๊ฐ™์€ ๊ฒ€์ƒ‰์ด ํ•ญ์ƒ ์ ์ ˆํ•œ ๋‹ต๋ณ€์„ ์ œ๊ณตํ•˜๋Š” ๊ฒƒ์€ ์•„๋‹™๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๊ฒฝ์šฐ์— ๋ฌด์—‡์„ ํ•  ์ˆ˜ ์žˆ์„๊นŒ์š”? ๋‹คํ–‰ํžˆ๋„ Hugging Face forums์— ์—ฌ๋Ÿฌ๋ถ„์„ ๋ฐ˜๊ธฐ๊ณ  ๋„์™€์ค„ ์ˆ˜ ์žˆ๋Š” ๊ฐœ๋ฐœ์ž ์ปค๋ฎค๋‹ˆํ‹ฐ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค! ๋‹ค์Œ ์žฅ์—์„œ๋Š” ๋‹ต๋ณ€์„ ์–ป์„ ์ˆ˜ ์žˆ๋Š” ์ข‹์€ ํฌ๋Ÿผ ์งˆ๋ฌธ์„ ๋งŒ๋“œ๋Š” ๋ฐฉ๋ฒ•์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค.