mavonic_private_repos/transformers/docs/source/ko/peft.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Load adapters with 🤗 PEFT [[load-adapters-with-peft]]

[[open-in-colab]]

[Parameter-Efficient Fine Tuning (PEFT)](https://huggingface.co/blog/peft) methods freeze the pretrained model parameters during fine-tuning and add a small number of trainable parameters (the adapters) on top of it. The adapters are trained to learn task-specific information. This approach is memory-efficient and uses comparatively little compute while producing results comparable to a fully fine-tuned model. Adapters trained with PEFT are also usually much smaller than the full model, making them convenient to share, store, and load.

<div class="flex flex-col justify-center">
  <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/PEFT-hub-screenshot.png"/>
  <figcaption class="text-center">The adapter weights for an OPTForCausalLM model stored on the Hub are only about 6MB, compared to the full model weights, which can be up to 700MB.</figcaption>
</div>

If you're interested in learning more about the 🤗 PEFT library, check out the [documentation](https://huggingface.co/docs/peft/index).

## Setup [[setup]]

Get started by installing 🤗 PEFT:

```bash
pip install peft
```

If you want to try out the brand new features, you might be interested in installing the library from source:

```bash
pip install git+https://github.com/huggingface/peft.git
```

## Supported PEFT models [[supported-peft-models]]

🤗 Transformers natively supports some PEFT methods, meaning you can load adapter weights stored locally or on the Hub and easily run or train them with just a few lines of code. The following methods are supported:

- [Low Rank Adapters](https://huggingface.co/docs/peft/conceptual_guides/lora)
- [IA3](https://huggingface.co/docs/peft/conceptual_guides/ia3)
- [AdaLoRA](https://arxiv.org/abs/2303.10512)

If you want to use other PEFT methods, such as prompt training or prompt tuning, or learn more about the 🤗 PEFT library in general, please refer to the [documentation](https://huggingface.co/docs/peft/index).

## Load a PEFT adapter [[load-a-peft-adapter]]

To load and use a PEFT adapter model from 🤗 Transformers, make sure the Hub repository or local directory contains an `adapter_config.json` file and the adapter weights. Then you can load the PEFT adapter model using the `AutoModelFor` class. For example, to load a PEFT adapter model for causal language modeling:

1. Specify the PEFT model ID.
2. Pass it to the [`AutoModelForCausalLM`] class.

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

peft_model_id = "ybelkada/opt-350m-lora"
model = AutoModelForCausalLM.from_pretrained(peft_model_id)
```

<Tip>

You can load a PEFT adapter with either an `AutoModelFor` class or the base model class like `OPTForCausalLM` or `LlamaForCausalLM`.

</Tip>

You can also load a PEFT adapter by calling the `load_adapter` method:

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "facebook/opt-350m"
peft_model_id = "ybelkada/opt-350m-lora"

model = AutoModelForCausalLM.from_pretrained(model_id)
model.load_adapter(peft_model_id)
```

## Load in 8bit or 4bit [[load-in-8bit-or-4bit]]

The `bitsandbytes` integration supports 8bit and 4bit precision data types, which are useful for loading large models because they save memory. Add the `load_in_8bit` or `load_in_4bit` parameters to [`~PreTrainedModel.from_pretrained`] and set `device_map="auto"` to effectively distribute the model to your hardware:

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

peft_model_id = "ybelkada/opt-350m-lora"
model = AutoModelForCausalLM.from_pretrained(peft_model_id, device_map="auto", load_in_8bit=True)
```

## Add a new adapter [[add-a-new-adapter]]

You can use [`~peft.PeftModel.add_adapter`] to add a new adapter to a model with an existing adapter, as long as the new adapter is the same type as the current one. For example, if you have an existing LoRA adapter attached to a model:

```py
from transformers import AutoModelForCausalLM, OPTForCausalLM, AutoTokenizer
from peft import LoraConfig

model_id = "facebook/opt-350m"
model = AutoModelForCausalLM.from_pretrained(model_id)

lora_config = LoraConfig(
    target_modules=["q_proj", "k_proj"],
    init_lora_weights=False
)

model.add_adapter(lora_config, adapter_name="adapter_1")
```

To add a new adapter:

```py
# attach new adapter with same config
model.add_adapter(lora_config, adapter_name="adapter_2")
```

Now you can use [`~peft.PeftModel.set_adapter`] to set which adapter to use:

```py
# use adapter_1
model.set_adapter("adapter_1")
output = model.generate(**inputs)
print(tokenizer.decode(output[0], skip_special_tokens=True))

# use adapter_2
model.set_adapter("adapter_2")
output_enabled = model.generate(**inputs)
print(tokenizer.decode(output_enabled[0], skip_special_tokens=True))
```

## Enable and disable adapters [[enable-and-disable-adapters]]

Once you've added an adapter to a model, you can enable or disable the adapter module. To enable the adapter module:

```py
from transformers import AutoModelForCausalLM, OPTForCausalLM, AutoTokenizer
from peft import PeftConfig

model_id = "facebook/opt-350m"
adapter_model_id = "ybelkada/opt-350m-lora"
tokenizer = AutoTokenizer.from_pretrained(model_id)
text = "Hello"
inputs = tokenizer(text, return_tensors="pt")

model = AutoModelForCausalLM.from_pretrained(model_id)
peft_config = PeftConfig.from_pretrained(adapter_model_id)

# to initiate with random weights
peft_config.init_lora_weights = False

model.add_adapter(peft_config)
model.enable_adapters()
output = model.generate(**inputs)
```

To disable the adapter module:

```py
model.disable_adapters()
output = model.generate(**inputs)
```

## Train a PEFT adapter [[train-a-peft-adapter]]

PEFT adapters are supported by the [`Trainer`] class so that you can train an adapter for your specific use case. It only requires adding a few more lines of code. For example, to train a LoRA adapter:

<Tip>

If you aren't familiar with fine-tuning a model with [`Trainer`], take a look at the [Fine-tune a pretrained model](training) tutorial.

</Tip>

1. Define your adapter configuration with the task type and hyperparameters. See [`~peft.LoraConfig`] for more details about the hyperparameters.

```py
from peft import LoraConfig

peft_config = LoraConfig(
    lora_alpha=16,
    lora_dropout=0.1,
    r=64,
    bias="none",
    task_type="CAUSAL_LM",
)
```

2. Add the adapter to the model.

```py
model.add_adapter(peft_config)
```

3. Now you can pass the model to [`Trainer`]!

```py
trainer = Trainer(model=model, ...)
trainer.train()
```

To save your trained adapter and load it back:

```py
model.save_pretrained(save_dir)
model = AutoModelForCausalLM.from_pretrained(save_dir)
```
mavonic_private_repos/transformers/docs/source/ko/sagemaker.md
<!---
Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Run training on Amazon SageMaker [[run-training-on-amazon-sagemaker]]

The documentation has been moved to [hf.co/docs/sagemaker](https://huggingface.co/docs/sagemaker). This page will be removed in `transformers` 5.0.

### Table of Contents [[table-of-content]]

- [Train Hugging Face models on Amazon SageMaker with the SageMaker Python SDK](https://huggingface.co/docs/sagemaker/train)
- [Deploy Hugging Face models to Amazon SageMaker with the SageMaker Python SDK](https://huggingface.co/docs/sagemaker/inference)
mavonic_private_repos/transformers/docs/source/ko/add_new_model.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Hugging Face Transformers๋ฅผ ์ถ”๊ฐ€ํ•˜๋Š” ๋ฐฉ๋ฒ•์€ ๋ฌด์—‡์ธ๊ฐ€์š”? [[how-to-add-a-model-to-transformers]] Hugging Face Transformers ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋Š” ์ปค๋ฎค๋‹ˆํ‹ฐ ๊ธฐ์—ฌ์ž๋“ค ๋•๋ถ„์— ์ƒˆ๋กœ์šด ๋ชจ๋ธ์„ ์ œ๊ณตํ•  ์ˆ˜ ์žˆ๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์ด๋Š” ๋„์ „์ ์ธ ํ”„๋กœ์ ํŠธ์ด๋ฉฐ Hugging Face Transformers ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์™€ ๊ตฌํ˜„ํ•  ๋ชจ๋ธ์— ๋Œ€ํ•œ ๊นŠ์€ ์ดํ•ด๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. Hugging Face์—์„œ๋Š” ๋” ๋งŽ์€ ์ปค๋ฎค๋‹ˆํ‹ฐ ๋ฉค๋ฒ„๊ฐ€ ๋ชจ๋ธ์„ ์ ๊ทน์ ์œผ๋กœ ์ถ”๊ฐ€ํ•  ์ˆ˜ ์žˆ๋„๋ก ์ง€์›ํ•˜๊ณ ์ž ํ•˜๋ฉฐ, ์ด ๊ฐ€์ด๋“œ๋ฅผ ํ†ตํ•ด PyTorch ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๊ณผ์ •์„ ์•ˆ๋‚ดํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค (PyTorch๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•ด์ฃผ์„ธ์š”). ์ด ๊ณผ์ •์„ ์ง„ํ–‰ํ•˜๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋‚ด์šฉ์„ ์ดํ•ดํ•˜๊ฒŒ ๋ฉ๋‹ˆ๋‹ค: - ์˜คํ”ˆ ์†Œ์Šค์˜ ๋ชจ๋ฒ” ์‚ฌ๋ก€์— ๋Œ€ํ•œ ํ†ต์ฐฐ๋ ฅ์„ ์–ป์Šต๋‹ˆ๋‹ค. - ๊ฐ€์žฅ ์ธ๊ธฐ ์žˆ๋Š” ๋”ฅ๋Ÿฌ๋‹ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ์„ค๊ณ„ ์›์น™์„ ์ดํ•ดํ•ฉ๋‹ˆ๋‹ค. - ๋Œ€๊ทœ๋ชจ ๋ชจ๋ธ์„ ํšจ์œจ์ ์œผ๋กœ ํ…Œ์ŠคํŠธํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ฐฐ์›๋‹ˆ๋‹ค. - `black`, `ruff`, `make fix-copies`์™€ ๊ฐ™์€ Python ์œ ํ‹ธ๋ฆฌํ‹ฐ๋ฅผ ํ†ตํ•ฉํ•˜์—ฌ ๊น”๋”ํ•˜๊ณ  ๊ฐ€๋…์„ฑ ์žˆ๋Š” ์ฝ”๋“œ๋ฅผ ์ž‘์„ฑํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ฐฐ์›๋‹ˆ๋‹ค. Hugging Face ํŒ€์€ ํ•ญ์ƒ ๋„์›€์„ ์ค„ ์ค€๋น„๊ฐ€ ๋˜์–ด ์žˆ์œผ๋ฏ€๋กœ ํ˜ผ์ž๊ฐ€ ์•„๋‹ˆ๋ผ๋Š” ์ ์„ ๊ธฐ์–ตํ•˜์„ธ์š”. ๐Ÿค— โค๏ธ ์‹œ์ž‘์— ์•ž์„œ ๐Ÿค— Transformers์— ์›ํ•˜๋Š” ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•˜๊ธฐ ์œ„ํ•ด [New model addition](https://github.com/huggingface/transformers/issues/new?assignees=&labels=New+model&template=new-model-addition.yml) ์ด์Šˆ๋ฅผ ์—ด์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ํŠน์ • ๋ชจ๋ธ์„ ๊ธฐ์—ฌํ•˜๋Š” ๋ฐ ํŠน๋ณ„ํžˆ ๊นŒ๋‹ค๋กœ์šด ๊ธฐ์ค€์„ ๊ฐ€์ง€์ง€ ์•Š๋Š” ๊ฒฝ์šฐ [New model label](https://github.com/huggingface/transformers/labels/New%20model)์„ ํ•„ํ„ฐ๋งํ•˜์—ฌ ์š”์ฒญ๋˜์ง€ ์•Š์€ ๋ชจ๋ธ์ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜๊ณ  ์ž‘์—…ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ƒˆ๋กœ์šด ๋ชจ๋ธ ์š”์ฒญ์„ ์—ด์—ˆ๋‹ค๋ฉด ์ฒซ ๋ฒˆ์งธ ๋‹จ๊ณ„๋Š” ๐Ÿค— Transformers์— ์ต์ˆ™ํ•ด์ง€๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค! ## ๐Ÿค— Transformers์˜ ์ „๋ฐ˜์ ์ธ ๊ฐœ์š” [[general-overview-of-transformers]] ๋จผ์ € ๐Ÿค— Transformers์— ๋Œ€ํ•œ ์ „๋ฐ˜์ ์ธ ๊ฐœ์š”๋ฅผ ํŒŒ์•…ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๐Ÿค— Transformers๋Š” ๋งค์šฐ ์ฃผ๊ด€์ ์ธ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์ด๊ธฐ ๋•Œ๋ฌธ์— ํ•ด๋‹น ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ์ฒ ํ•™์ด๋‚˜ ์„ค๊ณ„ ์„ ํƒ ์‚ฌํ•ญ์— ๋™์˜ํ•˜์ง€ ์•Š์„ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์šฐ๋ฆฌ์˜ ๊ฒฝํ—˜์ƒ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ๊ธฐ๋ณธ์ ์ธ ์„ค๊ณ„ ์„ ํƒ๊ณผ ์ฒ ํ•™์€ ๐Ÿค— Transformers์˜ ๊ทœ๋ชจ๋ฅผ ํšจ์œจ์ ์œผ๋กœ ํ™•์žฅํ•˜๋ฉด์„œ ์œ ์ง€ ๋ณด์ˆ˜ ๋น„์šฉ์„ ํ•ฉ๋ฆฌ์ ์ธ ์ˆ˜์ค€์œผ๋กœ ์œ ์ง€ํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. [๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ์ฒ ํ•™์— ๋Œ€ํ•œ ๋ฌธ์„œ](philosophy)๋ฅผ ์ฝ๋Š” ๊ฒƒ์ด ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ๋” ์ž˜ ์ดํ•ดํ•˜๋Š” ์ข‹์€ ์‹œ์ž‘์ ์ž…๋‹ˆ๋‹ค. ๋ชจ๋“  ๋ชจ๋ธ์— ์ ์šฉํ•˜๋ ค๋Š” ๋ช‡ ๊ฐ€์ง€ ์ž‘์—… ๋ฐฉ์‹์— ๋Œ€ํ•œ ์„ ํƒ ์‚ฌํ•ญ์ด ์žˆ์Šต๋‹ˆ๋‹ค: - ์ผ๋ฐ˜์ ์œผ๋กœ ์ถ”์ƒํ™”๋ณด๋‹ค๋Š” ๊ตฌ์„ฑ์„ ์„ ํ˜ธํ•ฉ๋‹ˆ๋‹ค. - ์ฝ”๋“œ๋ฅผ ๋ณต์ œํ•˜๋Š” ๊ฒƒ์ด ํ•ญ์ƒ ๋‚˜์œ ๊ฒƒ์€ ์•„๋‹™๋‹ˆ๋‹ค. 
์ฝ”๋“œ์˜ ๊ฐ€๋…์„ฑ์ด๋‚˜ ์ ‘๊ทผ์„ฑ์„ ํฌ๊ฒŒ ํ–ฅ์ƒ์‹œํ‚จ๋‹ค๋ฉด ๋ณต์ œํ•˜๋Š” ๊ฒƒ์€ ์ข‹์Šต๋‹ˆ๋‹ค. - ๋ชจ๋ธ ํŒŒ์ผ์€ ๊ฐ€๋Šฅํ•œ ํ•œ ๋…๋ฆฝ์ ์œผ๋กœ ์œ ์ง€๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ํŠน์ • ๋ชจ๋ธ์˜ ์ฝ”๋“œ๋ฅผ ์ฝ์„ ๋•Œ ํ•ด๋‹น `modeling_....py` ํŒŒ์ผ๋งŒ ํ™•์ธํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ์šฐ๋ฆฌ๋Š” ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ์ฝ”๋“œ๊ฐ€ ์ œํ’ˆ์„ ์ œ๊ณตํ•˜๋Š” ์ˆ˜๋‹จ๋ฟ๋งŒ ์•„๋‹ˆ๋ผ ๊ฐœ์„ ํ•˜๊ณ ์ž ํ•˜๋Š” ์ œํ’ˆ์ด๋ผ๊ณ ๋„ ์ƒ๊ฐํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•  ๋•Œ, ์‚ฌ์šฉ์ž๋Š” ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์‚ฌ๋žŒ๋ฟ๋งŒ ์•„๋‹ˆ๋ผ ์ฝ”๋“œ๋ฅผ ์ฝ๊ณ  ์ดํ•ดํ•˜๊ณ  ํ•„์š”ํ•œ ๊ฒฝ์šฐ ์กฐ์ •ํ•  ์ˆ˜ ์žˆ๋Š” ๋ชจ๋“  ์‚ฌ๋žŒ๊นŒ์ง€๋„ ํฌํ•จํ•œ๋‹ค๋Š” ์ ์„ ๊ธฐ์–ตํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฅผ ์—ผ๋‘์— ๋‘๊ณ  ์ผ๋ฐ˜์ ์ธ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ ์„ค๊ณ„์— ๋Œ€ํ•ด ์กฐ๊ธˆ ๋” ์ž์„ธํžˆ ์•Œ์•„๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ### ๋ชจ๋ธ ๊ฐœ์š” [[overview-of-models]] ๋ชจ๋ธ์„ ์„ฑ๊ณต์ ์œผ๋กœ ์ถ”๊ฐ€ํ•˜๋ ค๋ฉด ๋ชจ๋ธ๊ณผ ํ•ด๋‹น ๊ตฌ์„ฑ์ธ [`PreTrainedModel`] ๋ฐ [`PretrainedConfig`] ๊ฐ„์˜ ์ƒํ˜ธ์ž‘์šฉ์„ ์ดํ•ดํ•˜๋Š” ๊ฒƒ์ด ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ๐Ÿค— Transformers์— ์ถ”๊ฐ€ํ•˜๋ ค๋Š” ๋ชจ๋ธ์„ `BrandNewBert`๋ผ๊ณ  ๋ถ€๋ฅด๊ฒ ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_overview.png"/> ๋ณด๋‹ค์‹œํ”ผ, ๐Ÿค— Transformers์—์„œ๋Š” ์ƒ์†์„ ์‚ฌ์šฉํ•˜์ง€๋งŒ ์ถ”์ƒํ™” ์ˆ˜์ค€์„ ์ตœ์†Œํ•œ์œผ๋กœ ์œ ์ง€ํ•ฉ๋‹ˆ๋‹ค. ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ์–ด๋–ค ๋ชจ๋ธ์—์„œ๋„ ๋‘ ์ˆ˜์ค€ ์ด์ƒ์˜ ์ถ”์ƒํ™”๊ฐ€ ์กด์žฌํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. `BrandNewBertModel`์€ `BrandNewBertPreTrainedModel`์—์„œ ์ƒ์†๋ฐ›๊ณ , ์ด ํด๋ž˜์Šค๋Š” [`PreTrainedModel`]์—์„œ ์ƒ์†๋ฐ›์Šต๋‹ˆ๋‹ค. ์ด๋กœ์จ ์ƒˆ๋กœ์šด ๋ชจ๋ธ์€ [`PreTrainedModel`]์—๋งŒ ์˜์กดํ•˜๋„๋ก ํ•˜๋ ค๊ณ  ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋“  ์ƒˆ๋กœ์šด ๋ชจ๋ธ์— ์ž๋™์œผ๋กœ ์ œ๊ณต๋˜๋Š” ์ค‘์š”ํ•œ ๊ธฐ๋Šฅ์€ [`~PreTrainedModel.from_pretrained`] ๋ฐ [`~PreTrainedModel.save_pretrained`]์ž…๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๊ธฐ๋Šฅ ์™ธ์—๋„ `BrandNewBertModel.forward`์™€ ๊ฐ™์€ ๋‹ค๋ฅธ ์ค‘์š”ํ•œ ๊ธฐ๋Šฅ์€ ์ƒˆ๋กœ์šด `modeling_brand_new_bert.py` ์Šคํฌ๋ฆฝํŠธ์—์„œ ์™„์ „ํžˆ ์ •์˜๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ `BrandNewBertForMaskedLM`๊ณผ ๊ฐ™์€ ํŠน์ • ํ—ค๋“œ ๋ ˆ์ด์–ด๋ฅผ ๊ฐ€์ง„ ๋ชจ๋ธ์€ `BrandNewBertModel`์„ ์ƒ์†๋ฐ›์ง€ ์•Š๊ณ  forward pass์—์„œ ํ˜ธ์ถœํ•  ์ˆ˜ ์žˆ๋Š” `BrandNewBertModel`์„ ์‚ฌ์šฉํ•˜์—ฌ ์ถ”์ƒํ™” ์ˆ˜์ค€์„ ๋‚ฎ๊ฒŒ ์œ ์ง€ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋“  ์ƒˆ๋กœ์šด ๋ชจ๋ธ์€ `BrandNewBertConfig`๋ผ๋Š” ๊ตฌ์„ฑ ํด๋ž˜์Šค๋ฅผ ํ•„์š”๋กœ ํ•ฉ๋‹ˆ๋‹ค. ์ด ๊ตฌ์„ฑ์€ ํ•ญ์ƒ [`PreTrainedModel`]์˜ ์†์„ฑ์œผ๋กœ ์ €์žฅ๋˜๋ฉฐ, ๋”ฐ๋ผ์„œ `BrandNewBertPreTrainedModel`์„ ์ƒ์†๋ฐ›๋Š” ๋ชจ๋“  ํด๋ž˜์Šค์—์„œ `config` ์†์„ฑ์„ ํ†ตํ•ด ์•ก์„ธ์Šคํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```python model = BrandNewBertModel.from_pretrained("brandy/brand_new_bert") model.config # model has access to its config ``` ๋ชจ๋ธ๊ณผ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ ๊ตฌ์„ฑ์€ [`PretrainedConfig`]์—์„œ ๊ธฐ๋ณธ ์ง๋ ฌํ™” ๋ฐ ์—ญ์ง๋ ฌํ™” ๊ธฐ๋Šฅ์„ ์ƒ์†๋ฐ›์Šต๋‹ˆ๋‹ค. ๊ตฌ์„ฑ๊ณผ ๋ชจ๋ธ์€ ํ•ญ์ƒ *pytorch_model.bin* ํŒŒ์ผ๊ณผ *config.json* ํŒŒ์ผ๋กœ ๊ฐ๊ฐ ๋ณ„๋„๋กœ ์ง๋ ฌํ™”๋ฉ๋‹ˆ๋‹ค. [`~PreTrainedModel.save_pretrained`]๋ฅผ ํ˜ธ์ถœํ•˜๋ฉด ์ž๋™์œผ๋กœ [`~PretrainedConfig.save_pretrained`]๋„ ํ˜ธ์ถœ๋˜๋ฏ€๋กœ ๋ชจ๋ธ๊ณผ ๊ตฌ์„ฑ์ด ๋ชจ๋‘ ์ €์žฅ๋ฉ๋‹ˆ๋‹ค. ### ์ฝ”๋“œ ์Šคํƒ€์ผ [[code-style]] ์ƒˆ๋กœ์šด ๋ชจ๋ธ์„ ์ž‘์„ฑํ•  ๋•Œ, Transformers๋Š” ์ฃผ๊ด€์ ์ธ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์ด๋ฉฐ ๋ช‡ ๊ฐ€์ง€ ๋…ํŠนํ•œ ์ฝ”๋”ฉ ์Šคํƒ€์ผ์ด ์žˆ์Šต๋‹ˆ๋‹ค: 1. ๋ชจ๋ธ์˜ forward pass๋Š” ๋ชจ๋ธ ํŒŒ์ผ์— ์™„์ „ํžˆ ์ž‘์„ฑ๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 
๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ๋‹ค๋ฅธ ๋ชจ๋ธ์—์„œ ๋ธ”๋ก์„ ์žฌ์‚ฌ์šฉํ•˜๋ ค๋ฉด ์ฝ”๋“œ๋ฅผ ๋ณต์‚ฌํ•˜์—ฌ ์œ„์— `# Copied from` ์ฃผ์„๊ณผ ํ•จ๊ป˜ ๋ถ™์—ฌ๋„ฃ์œผ๋ฉด ๋ฉ๋‹ˆ๋‹ค (์˜ˆ: [์—ฌ๊ธฐ](https://github.com/huggingface/transformers/blob/v4.17.0/src/transformers/models/roberta/modeling_roberta.py#L160)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”). 2. ์ฝ”๋“œ๋Š” ์™„์ „ํžˆ ์ดํ•ดํ•˜๊ธฐ ์‰ฌ์›Œ์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ณ€์ˆ˜ ์ด๋ฆ„์„ ๋ช…ํ™•ํ•˜๊ฒŒ ์ง€์ •ํ•˜๊ณ  ์•ฝ์–ด๋ฅผ ์‚ฌ์šฉํ•˜์ง€ ์•Š๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, `act`๋ณด๋‹ค๋Š” `activation`์„ ์„ ํ˜ธํ•ฉ๋‹ˆ๋‹ค. ํ•œ ๊ธ€์ž ๋ณ€์ˆ˜ ์ด๋ฆ„์€ ๋ฃจํ”„์˜ ์ธ๋ฑ์Šค์ธ ๊ฒฝ์šฐ๋ฅผ ์ œ์™ธํ•˜๊ณ  ๊ถŒ์žฅ๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. 3. ๋” ์ผ๋ฐ˜์ ์œผ๋กœ, ์งง์€ ๋งˆ๋ฒ• ๊ฐ™์€ ์ฝ”๋“œ๋ณด๋‹ค๋Š” ๊ธธ๊ณ  ๋ช…์‹œ์ ์ธ ์ฝ”๋“œ๋ฅผ ์„ ํ˜ธํ•ฉ๋‹ˆ๋‹ค. 4. PyTorch์—์„œ `nn.Sequential`์„ ํ•˜์œ„ ํด๋ž˜์Šค๋กœ ๋งŒ๋“ค์ง€ ๋ง๊ณ  `nn.Module`์„ ํ•˜์œ„ ํด๋ž˜์Šค๋กœ ๋งŒ๋“ค๊ณ  forward pass๋ฅผ ์ž‘์„ฑํ•˜์—ฌ ๋‹ค๋ฅธ ์‚ฌ๋žŒ์ด ์ฝ”๋“œ๋ฅผ ๋น ๋ฅด๊ฒŒ ๋””๋ฒ„๊ทธํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. print ๋ฌธ์ด๋‚˜ ์ค‘๋‹จ์ ์„ ์ถ”๊ฐ€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 5. ํ•จ์ˆ˜ ์‹œ๊ทธ๋‹ˆ์ฒ˜์—๋Š” ํƒ€์ž… ์ฃผ์„์„ ์‚ฌ์šฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ ์™ธ์—๋Š” ํƒ€์ž… ์ฃผ์„๋ณด๋‹ค ๋ณ€์ˆ˜ ์ด๋ฆ„์ด ํ›จ์”ฌ ์ฝ๊ธฐ ์‰ฝ๊ณ  ์ดํ•ดํ•˜๊ธฐ ์‰ฝ์Šต๋‹ˆ๋‹ค. ### ํ† ํฌ๋‚˜์ด์ € ๊ฐœ์š” [[overview-of-tokenizers]] ์•„์ง ์ค€๋น„๋˜์ง€ ์•Š์•˜์Šต๋‹ˆ๋‹ค :-( ์ด ์„น์…˜์€ ๊ณง ์ถ”๊ฐ€๋  ์˜ˆ์ •์ž…๋‹ˆ๋‹ค! ## ๐Ÿค— Transformers์— ๋ชจ๋ธ ์ถ”๊ฐ€ํ•˜๋Š” ๋‹จ๊ณ„๋ณ„ ๋ฐฉ๋ฒ• [[stepbystep-recipe-to-add-a-model-to-transformers]] ๊ฐ์ž ๋ชจ๋ธ์„ ์ด์‹ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ์„ ํ˜ธ๊ฐ€ ๋‹ค๋ฅด๊ธฐ ๋•Œ๋ฌธ์— ๋‹ค๋ฅธ ๊ธฐ์—ฌ์ž๋“ค์ด Hugging Face์— ๋ชจ๋ธ์„ ์ด์‹ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ์š”์•ฝ์„ ์‚ดํŽด๋ณด๋Š” ๊ฒƒ์ด ๋งค์šฐ ์œ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์€ ๋ชจ๋ธ์„ ์ด์‹ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ์ปค๋ฎค๋‹ˆํ‹ฐ ๋ธ”๋กœ๊ทธ ๊ฒŒ์‹œ๋ฌผ ๋ชฉ๋ก์ž…๋‹ˆ๋‹ค: 1. [GPT2 ๋ชจ๋ธ ์ด์‹ํ•˜๊ธฐ](https://medium.com/huggingface/from-tensorflow-to-pytorch-265f40ef2a28) - [Thomas](https://huggingface.co/thomwolf) 2. [WMT19 MT ๋ชจ๋ธ ์ด์‹ํ•˜๊ธฐ](https://huggingface.co/blog/porting-fsmt) - [Stas](https://huggingface.co/stas) ๊ฒฝํ—˜์ƒ ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•  ๋•Œ ์ฃผ์˜ํ•ด์•ผ ํ•  ๊ฐ€์žฅ ์ค‘์š”ํ•œ ์‚ฌํ•ญ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: - ๊ฐ™์€ ์ผ์„ ๋ฐ˜๋ณตํ•˜์ง€ ๋งˆ์„ธ์š”! ์ƒˆ๋กœ์šด ๐Ÿค— Transformers ๋ชจ๋ธ์„ ์œ„ํ•ด ์ถ”๊ฐ€ํ•  ์ฝ”๋“œ์˜ ๋Œ€๋ถ€๋ถ„์€ ์ด๋ฏธ ๐Ÿค— Transformers ์–ด๋”˜๊ฐ€์— ์กด์žฌํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฏธ ์กด์žฌํ•˜๋Š” ๋ณต์‚ฌํ•  ์ˆ˜ ์žˆ๋Š” ์œ ์‚ฌํ•œ ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์ฐพ๋Š”๋ฐ ์‹œ๊ฐ„์„ ํˆฌ์žํ•˜์„ธ์š”. [grep](https://www.gnu.org/software/grep/)์™€ [rg](https://github.com/BurntSushi/ripgrep)๋ฅผ ์ฐธ๊ณ ํ•˜์„ธ์š”. ๋ชจ๋ธ์˜ ํ† ํฌ๋‚˜์ด์ €๊ฐ€ ํ•œ ๋ชจ๋ธ์„ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•˜๊ณ  ๋ชจ๋ธ๋ง ์ฝ”๋“œ๊ฐ€ ๋‹ค๋ฅธ ๋ชจ๋ธ์„ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ์กด์žฌํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด FSMT์˜ ๋ชจ๋ธ๋ง ์ฝ”๋“œ๋Š” BART๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•˜๊ณ  FSMT์˜ ํ† ํฌ๋‚˜์ด์ € ์ฝ”๋“œ๋Š” XLM์„ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•ฉ๋‹ˆ๋‹ค. - ์ด๊ฒƒ์€ ๊ณผํ•™์ ์ธ ๋„์ „๋ณด๋‹ค๋Š” ๊ณตํ•™์ ์ธ ๋„์ „์ž…๋‹ˆ๋‹ค. ๋…ผ๋ฌธ์˜ ๋ชจ๋ธ์˜ ๋ชจ๋“  ์ด๋ก ์  ์ธก๋ฉด์„ ์ดํ•ดํ•˜๋ ค๋Š” ๊ฒƒ๋ณด๋‹ค ํšจ์œจ์ ์ธ ๋””๋ฒ„๊น… ํ™˜๊ฒฝ์„ ๋งŒ๋“œ๋Š” ๋ฐ ๋” ๋งŽ์€ ์‹œ๊ฐ„์„ ์†Œ๋น„ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. - ๋ง‰ํž ๋•Œ ๋„์›€์„ ์š”์ฒญํ•˜์„ธ์š”! ๋ชจ๋ธ์€ ๐Ÿค— Transformers์˜ ํ•ต์‹ฌ ๊ตฌ์„ฑ ์š”์†Œ์ด๋ฏ€๋กœ Hugging Face์˜ ์šฐ๋ฆฌ๋Š” ๋‹น์‹ ์ด ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฐ ๋‹จ๊ณ„์—์„œ ๊ธฐ๊บผ์ด ๋„์›€์„ ์ค„ ์ค€๋น„๊ฐ€ ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ์ง„์ „์ด ์—†๋‹ค๊ณ  ๋Š๋ผ๋ฉด ์ฃผ์ €ํ•˜์ง€ ๋ง๊ณ  ๋„์›€์„ ์š”์ฒญํ•˜์„ธ์š”. ๋‹ค์Œ์—์„œ๋Š” ๋ชจ๋ธ์„ ๐Ÿค— Transformers๋กœ ์ด์‹ํ•˜๋Š” ๋ฐ ๊ฐ€์žฅ ์œ ์šฉํ•œ ์ผ๋ฐ˜์ ์ธ ์ ˆ์ฐจ๋ฅผ ์ œ๊ณตํ•˜๋ ค๊ณ  ๋…ธ๋ ฅํ•ฉ๋‹ˆ๋‹ค. 
๋‹ค์Œ ๋ชฉ๋ก์€ ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๋ฐ ์ˆ˜ํ–‰ํ•ด์•ผ ํ•  ๋ชจ๋“  ์ž‘์—…์˜ ์š”์•ฝ์ด๋ฉฐ To-Do ๋ชฉ๋ก์œผ๋กœ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: โ˜ (์„ ํƒ ์‚ฌํ•ญ) BrandNewBert์˜ ์ด๋ก ์  ์ธก๋ฉด ์ดํ•ด<br> โ˜ Hugging Face ๊ฐœ๋ฐœ ํ™˜๊ฒฝ ์ค€๋น„<br> โ˜ ์›๋ณธ ๋ฆฌํฌ์ง€ํ† ๋ฆฌ์˜ ๋””๋ฒ„๊น… ํ™˜๊ฒฝ ์„ค์ •<br> โ˜ ์›๋ณธ ๋ฆฌํฌ์ง€ํ† ๋ฆฌ์™€ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ `forward()` pass๊ฐ€ ์„ฑ๊ณต์ ์œผ๋กœ ์‹คํ–‰๋˜๋Š” ์Šคํฌ๋ฆฝํŠธ ์ž‘์„ฑ<br> โ˜ ๐Ÿค— Transformers์— ๋ชจ๋ธ ์Šค์ผˆ๋ ˆํ†ค ์„ฑ๊ณต์ ์œผ๋กœ ์ถ”๊ฐ€<br> โ˜ ์›๋ณธ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๐Ÿค— Transformers ์ฒดํฌํฌ์ธํŠธ๋กœ ์„ฑ๊ณต์ ์œผ๋กœ ๋ณ€ํ™˜<br> โ˜ ๐Ÿค— Transformers์—์„œ ์›๋ณธ ์ฒดํฌํฌ์ธํŠธ์™€ ๋™์ผํ•œ ์ถœ๋ ฅ์„ ๋‚ด์ฃผ๋Š” `forward()` pass ์„ฑ๊ณต์ ์œผ๋กœ ์‹คํ–‰<br> โ˜ ๐Ÿค— Transformers์—์„œ ๋ชจ๋ธ ํ…Œ์ŠคํŠธ ์™„๋ฃŒ<br> โ˜ ๐Ÿค— Transformers์— ํ† ํฌ๋‚˜์ด์ € ์„ฑ๊ณต์ ์œผ๋กœ ์ถ”๊ฐ€<br> โ˜ ์ข…๋‹จ ๊ฐ„ ํ†ตํ•ฉ ํ…Œ์ŠคํŠธ ์‹คํ–‰<br> โ˜ ๋ฌธ์„œ ์ž‘์„ฑ ์™„๋ฃŒ<br> โ˜ ๋ชจ๋ธ ๊ฐ€์ค‘์น˜๋ฅผ ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œ<br> โ˜ Pull request ์ œ์ถœ<br> โ˜ (์„ ํƒ ์‚ฌํ•ญ) ๋ฐ๋ชจ ๋…ธํŠธ๋ถ ์ถ”๊ฐ€ ์šฐ์„ , ์ผ๋ฐ˜์ ์œผ๋กœ๋Š” `BrandNewBert`์˜ ์ด๋ก ์ ์ธ ์ดํ•ด๋กœ ์‹œ์ž‘ํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ด๋ก ์  ์ธก๋ฉด์„ ์ง์ ‘ ์ดํ•ดํ•˜๋Š” ๋Œ€์‹  *์ง์ ‘ ํ•ด๋ณด๋ฉด์„œ* ๋ชจ๋ธ์˜ ์ด๋ก ์  ์ธก๋ฉด์„ ์ดํ•ดํ•˜๋Š” ๊ฒƒ์„ ์„ ํ˜ธํ•˜๋Š” ๊ฒฝ์šฐ ๋ฐ”๋กœ `BrandNewBert` ์ฝ”๋“œ ๋ฒ ์ด์Šค๋กœ ๋น ์ ธ๋“œ๋Š” ๊ฒƒ๋„ ๊ดœ์ฐฎ์Šต๋‹ˆ๋‹ค. ์ด ์˜ต์…˜์€ ์—”์ง€๋‹ˆ์–ด๋ง ๊ธฐ์ˆ ์ด ์ด๋ก ์  ๊ธฐ์ˆ ๋ณด๋‹ค ๋” ๋›ฐ์–ด๋‚œ ๊ฒฝ์šฐ, `BrandNewBert`์˜ ๋…ผ๋ฌธ์„ ์ดํ•ดํ•˜๋Š” ๋ฐ ์–ด๋ ค์›€์ด ์žˆ๋Š” ๊ฒฝ์šฐ, ๋˜๋Š” ๊ณผํ•™์ ์ธ ๋…ผ๋ฌธ์„ ์ฝ๋Š” ๊ฒƒ๋ณด๋‹ค ํ”„๋กœ๊ทธ๋ž˜๋ฐ์— ํ›จ์”ฌ ๋” ํฅ๋ฏธ ์žˆ๋Š” ๊ฒฝ์šฐ์— ๋” ์ ํ•ฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ### 1. (์„ ํƒ ์‚ฌํ•ญ) BrandNewBert์˜ ์ด๋ก ์  ์ธก๋ฉด [[1-optional-theoretical-aspects-of-brandnewbert]] ๋งŒ์•ฝ ๊ทธ๋Ÿฐ ์„œ์ˆ ์ ์ธ ์ž‘์—…์ด ์กด์žฌํ•œ๋‹ค๋ฉด, *BrandNewBert*์˜ ๋…ผ๋ฌธ์„ ์ฝ์–ด๋ณด๋Š” ์‹œ๊ฐ„์„ ๊ฐ€์ ธ์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ดํ•ดํ•˜๊ธฐ ์–ด๋ ค์šด ์„น์…˜์ด ๋งŽ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋ ‡๋”๋ผ๋„ ๊ฑฑ์ •ํ•˜์ง€ ๋งˆ์„ธ์š”! ๋ชฉํ‘œ๋Š” ๋…ผ๋ฌธ์˜ ๊นŠ์€ ์ด๋ก ์  ์ดํ•ด๊ฐ€ ์•„๋‹ˆ๋ผ *BrandNewBert*๋ฅผ ๐Ÿค— Transformers์—์„œ ํšจ๊ณผ์ ์œผ๋กœ ์žฌ๊ตฌํ˜„ํ•˜๊ธฐ ์œ„ํ•ด ํ•„์š”ํ•œ ์ •๋ณด๋ฅผ ์ถ”์ถœํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด๋ฅผ ์œ„ํ•ด ์ด๋ก ์  ์ธก๋ฉด์— ๋„ˆ๋ฌด ๋งŽ์€ ์‹œ๊ฐ„์„ ํˆฌ์žํ•  ํ•„์š”๋Š” ์—†์ง€๋งŒ ๋‹ค์Œ๊ณผ ๊ฐ™์€ ์‹ค์ œ์ ์ธ ์ธก๋ฉด์— ์ง‘์ค‘ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: - *BrandNewBert*๋Š” ์–ด๋–ค ์œ ํ˜•์˜ ๋ชจ๋ธ์ธ๊ฐ€์š”? BERT์™€ ์œ ์‚ฌํ•œ ์ธ์ฝ”๋” ๋ชจ๋ธ์ธ๊ฐ€์š”? GPT2์™€ ์œ ์‚ฌํ•œ ๋””์ฝ”๋” ๋ชจ๋ธ์ธ๊ฐ€์š”? BART์™€ ์œ ์‚ฌํ•œ ์ธ์ฝ”๋”-๋””์ฝ”๋” ๋ชจ๋ธ์ธ๊ฐ€์š”? ์ด๋“ค ๊ฐ„์˜ ์ฐจ์ด์ ์— ์ต์ˆ™ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ[model_summary](model_summary)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. - *BrandNewBert*์˜ ์‘์šฉ ๋ถ„์•ผ๋Š” ๋ฌด์—‡์ธ๊ฐ€์š”? ํ…์ŠคํŠธ ๋ถ„๋ฅ˜์ธ๊ฐ€์š”? ํ…์ŠคํŠธ ์ƒ์„ฑ์ธ๊ฐ€์š”? ์š”์•ฝ๊ณผ ๊ฐ™์€ Seq2Seq ์ž‘์—…์ธ๊ฐ€์š”? - *brand_new_bert*์™€ BERT/GPT-2/BART์˜ ์ฐจ์ด์ ์€ ๋ฌด์—‡์ธ๊ฐ€์š”? - *brand_new_bert*์™€ ๊ฐ€์žฅ ์œ ์‚ฌํ•œ [๐Ÿค— Transformers ๋ชจ๋ธ](https://huggingface.co/transformers/#contents)์€ ๋ฌด์—‡์ธ๊ฐ€์š”? - ์–ด๋–ค ์ข…๋ฅ˜์˜ ํ† ํฌ๋‚˜์ด์ €๊ฐ€ ์‚ฌ์šฉ๋˜๋‚˜์š”? Sentencepiece ํ† ํฌ๋‚˜์ด์ €์ธ๊ฐ€์š”? Word piece ํ† ํฌ๋‚˜์ด์ €์ธ๊ฐ€์š”? BERT ๋˜๋Š” BART์— ์‚ฌ์šฉ๋˜๋Š” ๋™์ผํ•œ ํ† ํฌ๋‚˜์ด์ €์ธ๊ฐ€์š”? ๋ชจ๋ธ์˜ ์•„ํ‚คํ…์ฒ˜์— ๋Œ€ํ•ด ์ถฉ๋ถ„ํžˆ ์ดํ•ดํ–ˆ๋‹ค๋Š” ์ƒ๊ฐ์ด ๋“  ํ›„, ๊ถ๊ธˆํ•œ ์‚ฌํ•ญ์ด ์žˆ์œผ๋ฉด Hugging Face ํŒ€์— ๋ฌธ์˜ํ•˜์‹ญ์‹œ์˜ค. ์ด๋Š” ๋ชจ๋ธ์˜ ์•„ํ‚คํ…์ฒ˜, ์–ดํ…์…˜ ๋ ˆ์ด์–ด ๋“ฑ์— ๊ด€ํ•œ ์งˆ๋ฌธ์„ ํฌํ•จํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
Hugging Face์˜ ์œ ์ง€ ๊ด€๋ฆฌ์ž๋“ค์€ ๋ณดํ†ต ์ฝ”๋“œ๋ฅผ ๊ฒ€ํ† ํ•˜๋Š” ๊ฒƒ์— ๋Œ€ํ•ด ๋งค์šฐ ๊ธฐ๋ปํ•˜๋ฏ€๋กœ ๋‹น์‹ ์„ ๋•๋Š” ์ผ์„ ๋งค์šฐ ํ™˜์˜ํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค! ### 2. ๊ฐœ๋ฐœ ํ™˜๊ฒฝ ์„ค์ • [[2-next-prepare-your-environment]] 1. ์ €์žฅ์†Œ ํŽ˜์ด์ง€์—์„œ "Fork" ๋ฒ„ํŠผ์„ ํด๋ฆญํ•˜์—ฌ ์ €์žฅ์†Œ์˜ ์‚ฌ๋ณธ์„ GitHub ์‚ฌ์šฉ์ž ๊ณ„์ •์œผ๋กœ ๋งŒ๋“ญ๋‹ˆ๋‹ค. 2. `transformers` fork๋ฅผ ๋กœ์ปฌ ๋””์Šคํฌ์— ํด๋ก ํ•˜๊ณ  ๋ฒ ์ด์Šค ์ €์žฅ์†Œ๋ฅผ ์›๊ฒฉ ์ €์žฅ์†Œ๋กœ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค: ```bash git clone https://github.com/[your Github handle]/transformers.git cd transformers git remote add upstream https://github.com/huggingface/transformers.git ``` 3. ๊ฐœ๋ฐœ ํ™˜๊ฒฝ์„ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ ๋ช…๋ น์„ ์‹คํ–‰ํ•˜์—ฌ ๊ฐœ๋ฐœ ํ™˜๊ฒฝ์„ ์„ค์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash python -m venv .env source .env/bin/activate pip install -e ".[dev]" ``` ๊ฐ ์šด์˜ ์ฒด์ œ์— ๋”ฐ๋ผ Transformers์˜ ์„ ํƒ์  ์˜์กด์„ฑ์ด ๊ฐœ์ˆ˜๊ฐ€ ์ฆ๊ฐ€ํ•˜๋ฉด ์ด ๋ช…๋ น์ด ์‹คํŒจํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฐ ๊ฒฝ์šฐ์—๋Š” ์ž‘์—… ์ค‘์ธ ๋”ฅ ๋Ÿฌ๋‹ ํ”„๋ ˆ์ž„์›Œํฌ (PyTorch, TensorFlow ๋ฐ/๋˜๋Š” Flax)์„ ์„ค์น˜ํ•œ ํ›„, ๋‹ค์Œ ๋ช…๋ น์„ ์ˆ˜ํ–‰ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค: ```bash pip install -e ".[quality]" ``` ๋Œ€๋ถ€๋ถ„์˜ ๊ฒฝ์šฐ์—๋Š” ์ด๊ฒƒ์œผ๋กœ ์ถฉ๋ถ„ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฐ ๋‹ค์Œ ์ƒ์œ„ ๋””๋ ‰ํ† ๋ฆฌ๋กœ ๋Œ์•„๊ฐ‘๋‹ˆ๋‹ค. ```bash cd .. ``` 4. Transformers์— *brand_new_bert*์˜ PyTorch ๋ฒ„์ „์„ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. PyTorch๋ฅผ ์„ค์น˜ํ•˜๋ ค๋ฉด ๋‹ค์Œ ๋งํฌ์˜ ์ง€์นจ์„ ๋”ฐ๋ฅด์‹ญ์‹œ์˜ค: https://pytorch.org/get-started/locally/. **์ฐธ๊ณ :** CUDA๋ฅผ ์„ค์น˜ํ•  ํ•„์š”๋Š” ์—†์Šต๋‹ˆ๋‹ค. ์ƒˆ๋กœ์šด ๋ชจ๋ธ์ด CPU์—์„œ ์ž‘๋™ํ•˜๋„๋ก ๋งŒ๋“œ๋Š” ๊ฒƒ์œผ๋กœ ์ถฉ๋ถ„ํ•ฉ๋‹ˆ๋‹ค. 5. *brand_new_bert*๋ฅผ ์ด์‹ํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” ํ•ด๋‹น ์›๋ณธ ์ €์žฅ์†Œ์— ์ ‘๊ทผํ•  ์ˆ˜ ์žˆ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```bash git clone https://github.com/org_that_created_brand_new_bert_org/brand_new_bert.git cd brand_new_bert pip install -e . ``` ์ด์ œ *brand_new_bert*๋ฅผ ๐Ÿค— Transformers๋กœ ์ด์‹ํ•˜๊ธฐ ์œ„ํ•œ ๊ฐœ๋ฐœ ํ™˜๊ฒฝ์„ ์„ค์ •ํ•˜์˜€์Šต๋‹ˆ๋‹ค. ### 3.-4. ์›๋ณธ ์ €์žฅ์†Œ์—์„œ ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ์ฒดํฌํฌ์ธํŠธ ์‹คํ–‰ํ•˜๊ธฐ [[3.-4.-run-a-pretrained-checkpoint-using-the-original-repository]] ๋จผ์ €, ์›๋ณธ *brand_new_bert* ์ €์žฅ์†Œ์—์„œ ์ž‘์—…์„ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค. ์›๋ณธ ๊ตฌํ˜„์€ ๋ณดํ†ต "์—ฐ๊ตฌ์šฉ"์œผ๋กœ ๋งŽ์ด ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ์ฆ‰, ๋ฌธ์„œํ™”๊ฐ€ ๋ถ€์กฑํ•˜๊ณ  ์ฝ”๋“œ๊ฐ€ ์ดํ•ดํ•˜๊ธฐ ์–ด๋ ค์šธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ด๊ฒƒ์ด ๋ฐ”๋กœ *brand_new_bert*๋ฅผ ๋‹ค์‹œ ๊ตฌํ˜„ํ•˜๋ ค๋Š” ๋™๊ธฐ๊ฐ€ ๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. Hugging Face์—์„œ์˜ ์ฃผ์š” ๋ชฉํ‘œ ์ค‘ ํ•˜๋‚˜๋Š” **๊ฑฐ์ธ์˜ ์–ด๊นจ ์œ„์— ์„œ๋Š” ๊ฒƒ**์ด๋ฉฐ, ์ด๋Š” ์—ฌ๊ธฐ์—์„œ ์‰ฝ๊ฒŒ ํ•ด์„๋˜์–ด ๋™์ž‘ํ•˜๋Š” ๋ชจ๋ธ์„ ๊ฐ€์ ธ์™€์„œ ๊ฐ€๋Šฅํ•œ ํ•œ **์ ‘๊ทผ ๊ฐ€๋Šฅํ•˜๊ณ  ์‚ฌ์šฉ์ž ์นœํ™”์ ์ด๋ฉฐ ์•„๋ฆ„๋‹ต๊ฒŒ** ๋งŒ๋“œ๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด๊ฒƒ์€ ๐Ÿค— Transformers์—์„œ ๋ชจ๋ธ์„ ๋‹ค์‹œ ๊ตฌํ˜„ํ•˜๋Š” ๊ฐ€์žฅ ์ค‘์š”ํ•œ ๋™๊ธฐ์ž…๋‹ˆ๋‹ค - ์ƒˆ๋กœ์šด ๋ณต์žกํ•œ NLP ๊ธฐ์ˆ ์„ **๋ชจ๋‘์—๊ฒŒ** ์ ‘๊ทผ ๊ฐ€๋Šฅํ•˜๊ฒŒ ๋งŒ๋“œ๋Š” ๊ฒƒ์„ ๋ชฉํ‘œ๋กœ ํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์›๋ณธ ์ €์žฅ์†Œ์— ๋Œ€ํ•ด ์ž์„ธํžˆ ์‚ดํŽด๋ณด๋Š” ๊ฒƒ์œผ๋กœ ์‹œ์ž‘ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์›๋ณธ ์ €์žฅ์†Œ์—์„œ ๊ณต์‹ ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ์„ฑ๊ณต์ ์œผ๋กœ ์‹คํ–‰ํ•˜๋Š” ๊ฒƒ์€ ์ข…์ข… **๊ฐ€์žฅ ์–ด๋ ค์šด** ๋‹จ๊ณ„์ž…๋‹ˆ๋‹ค. ์šฐ๋ฆฌ์˜ ๊ฒฝํ—˜์— ๋”ฐ๋ฅด๋ฉด, ์›๋ณธ ์ฝ”๋“œ ๋ฒ ์ด์Šค์— ์ต์ˆ™ํ•ด์ง€๋Š” ๋ฐ ์‹œ๊ฐ„์„ ํˆฌ์žํ•˜๋Š” ๊ฒƒ์ด ๋งค์šฐ ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ์„ ํŒŒ์•…ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: - ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๊ฐ€์ค‘์น˜๋ฅผ ์–ด๋””์„œ ์ฐพ์„ ์ˆ˜ ์žˆ๋Š”์ง€? - ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๊ฐ€์ค‘์น˜๋ฅผ ํ•ด๋‹น ๋ชจ๋ธ์—๋กœ๋“œํ•˜๋Š” ๋ฐฉ๋ฒ•์€? - ๋ชจ๋ธ๊ณผ ๋…๋ฆฝ์ ์œผ๋กœ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์‹คํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ•์€? 
- ๊ฐ„๋‹จํ•œ forward pass์— ํ•„์š”ํ•œ ํด๋ž˜์Šค์™€ ํ•จ์ˆ˜๋ฅผ ํŒŒ์•…ํ•˜๊ธฐ ์œ„ํ•ด forward pass๋ฅผ ํ•œ ๋ฒˆ ์ถ”์ ํ•ด ๋ณด์„ธ์š”. ์ผ๋ฐ˜์ ์œผ๋กœ ํ•ด๋‹น ํ•จ์ˆ˜๋“ค๋งŒ ๋‹ค์‹œ ๊ตฌํ˜„ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. - ๋ชจ๋ธ์˜ ์ค‘์š”ํ•œ ๊ตฌ์„ฑ ์š”์†Œ๋ฅผ ์ฐพ์„ ์ˆ˜ ์žˆ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ ํด๋ž˜์Šค๋Š” ์–ด๋””์— ์žˆ๋‚˜์š”? ๋ชจ๋ธ ํ•˜์œ„ ํด๋ž˜์Šค(*EncoderModel*, *DecoderModel* ๋“ฑ)๊ฐ€ ์žˆ๋‚˜์š”? self-attention ๋ ˆ์ด์–ด๋Š” ์–ด๋””์— ์žˆ๋‚˜์š”? self-attention, cross-attention ๋“ฑ ์—ฌ๋Ÿฌ ๊ฐ€์ง€ ๋‹ค๋ฅธ ์–ดํ…์…˜ ๋ ˆ์ด์–ด๊ฐ€ ์žˆ๋‚˜์š”? - ์›๋ณธ ํ™˜๊ฒฝ์—์„œ ๋ชจ๋ธ์„ ๋””๋ฒ„๊ทธํ•  ์ˆ˜ ์žˆ๋Š” ๋ฐฉ๋ฒ•์€ ๋ฌด์—‡์ธ๊ฐ€์š”? *print* ๋ฌธ์„ ์ถ”๊ฐ€ํ•ด์•ผ ํ•˜๋‚˜์š”? *ipdb*์™€ ๊ฐ™์€ ๋Œ€ํ™”์‹ ๋””๋ฒ„๊ฑฐ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋‚˜์š”? PyCharm๊ณผ ๊ฐ™์€ ํšจ์œจ์ ์ธ IDE๋ฅผ ์‚ฌ์šฉํ•ด ๋ชจ๋ธ์„ ๋””๋ฒ„๊ทธํ•  ์ˆ˜ ์žˆ๋‚˜์š”? ์›๋ณธ ์ €์žฅ์†Œ์—์„œ ์ฝ”๋“œ๋ฅผ ์ด์‹ํ•˜๋Š” ์ž‘์—…์„ ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ์›๋ณธ ์ €์žฅ์†Œ์—์„œ ์ฝ”๋“œ๋ฅผ **ํšจ์œจ์ ์œผ๋กœ** ๋””๋ฒ„๊ทธํ•  ์ˆ˜ ์žˆ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค! ๋˜ํ•œ, ์˜คํ”ˆ ์†Œ์Šค ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋กœ ์ž‘์—…ํ•˜๊ณ  ์žˆ๋‹ค๋Š” ๊ฒƒ์„ ๊ธฐ์–ตํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์›๋ณธ ์ €์žฅ์†Œ์—์„œ issue๋ฅผ ์—ด๊ฑฐ๋‚˜ pull request๋ฅผ ์—ด๊ธฐ๋ฅผ ์ฃผ์ €ํ•˜์ง€ ๋งˆ์‹ญ์‹œ์˜ค. ์ด ์ €์žฅ์†Œ์˜ ์œ ์ง€ ๊ด€๋ฆฌ์ž๋“ค์€ ๋ˆ„๊ตฐ๊ฐ€๊ฐ€ ์ž์‹ ๋“ค์˜ ์ฝ”๋“œ๋ฅผ ์‚ดํŽด๋ณธ๋‹ค๋Š” ๊ฒƒ์— ๋Œ€ํ•ด ๋งค์šฐ ๊ธฐ๋ปํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค! ํ˜„์žฌ ์‹œ์ ์—์„œ, ์›๋ž˜ ๋ชจ๋ธ์„ ๋””๋ฒ„๊น…ํ•˜๊ธฐ ์œ„ํ•ด ์–ด๋–ค ๋””๋ฒ„๊น… ํ™˜๊ฒฝ๊ณผ ์ „๋žต์„ ์„ ํ˜ธํ•˜๋Š”์ง€๋Š” ๋‹น์‹ ์—๊ฒŒ ๋‹ฌ๋ ธ์Šต๋‹ˆ๋‹ค. ์šฐ๋ฆฌ๋Š” ๊ณ ๊ฐ€์˜ GPU ํ™˜๊ฒฝ์„ ๊ตฌ์ถ•ํ•˜๋Š” ๊ฒƒ์€ ๋น„์ถ”์ฒœํ•ฉ๋‹ˆ๋‹ค. ๋Œ€์‹ , ์›๋ž˜ ์ €์žฅ์†Œ๋กœ ๋“ค์–ด๊ฐ€์„œ ์ž‘์—…์„ ์‹œ์ž‘ํ•  ๋•Œ์™€ ๐Ÿค— Transformers ๋ชจ๋ธ์˜ ๊ตฌํ˜„์„ ์‹œ์ž‘ํ•  ๋•Œ์—๋„ CPU์—์„œ ์ž‘์—…ํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ๋ชจ๋ธ์ด ์ด๋ฏธ ๐Ÿค— Transformers๋กœ ์„ฑ๊ณต์ ์œผ๋กœ ์ด์‹๋˜์—ˆ์„ ๋•Œ์—๋งŒ ๋ชจ๋ธ์ด GPU์—์„œ๋„ ์˜ˆ์ƒ๋Œ€๋กœ ์ž‘๋™ํ•˜๋Š”์ง€ ํ™•์ธํ•ด์•ผํ•ฉ๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ, ์›๋ž˜ ๋ชจ๋ธ์„ ์‹คํ–‰ํ•˜๊ธฐ ์œ„ํ•œ ๋‘ ๊ฐ€์ง€ ๊ฐ€๋Šฅํ•œ ๋””๋ฒ„๊น… ํ™˜๊ฒฝ์ด ์žˆ์Šต๋‹ˆ๋‹ค. - [Jupyter ๋…ธํŠธ๋ถ](https://jupyter.org/) / [Google Colab](https://colab.research.google.com/notebooks/intro.ipynb) - ๋กœ์ปฌ Python ์Šคํฌ๋ฆฝํŠธ Jupyter ๋…ธํŠธ๋ถ์˜ ์žฅ์ ์€ ์…€ ๋‹จ์œ„๋กœ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด๋Š” ๋…ผ๋ฆฌ์ ์ธ ๊ตฌ์„ฑ ์š”์†Œ๋ฅผ ๋” ์ž˜ ๋ถ„๋ฆฌํ•˜๊ณ  ์ค‘๊ฐ„ ๊ฒฐ๊ณผ๋ฅผ ์ €์žฅํ•  ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ ๋””๋ฒ„๊น… ์‚ฌ์ดํด์ด ๋” ๋นจ๋ผ์งˆ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ, ๋…ธํŠธ๋ถ์€ ๋‹ค๋ฅธ ๊ธฐ์—ฌ์ž์™€ ์‰ฝ๊ฒŒ ๊ณต์œ ํ•  ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ Hugging Face ํŒ€์˜ ๋„์›€์„ ์š”์ฒญํ•˜๋ ค๋Š” ๊ฒฝ์šฐ ๋งค์šฐ ์œ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. Jupyter ๋…ธํŠธ๋ถ์— ์ต์ˆ™ํ•˜๋‹ค๋ฉด ์ด๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์„ ๊ฐ•๋ ฅํžˆ ์ถ”์ฒœํ•ฉ๋‹ˆ๋‹ค. Jupyter ๋…ธํŠธ๋ถ์˜ ๋‹จ์ ์€ ์‚ฌ์šฉ์— ์ต์ˆ™ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ ์ƒˆ๋กœ์šด ํ”„๋กœ๊ทธ๋ž˜๋ฐ ํ™˜๊ฒฝ์— ์ ์‘ํ•˜๋Š” ๋ฐ ์‹œ๊ฐ„์„ ํ• ์• ํ•ด์•ผ ํ•˜๋ฉฐ, `ipdb`์™€ ๊ฐ™์€ ์•Œ๋ ค์ง„ ๋””๋ฒ„๊น… ๋„๊ตฌ๋ฅผ ๋” ์ด์ƒ ์‚ฌ์šฉํ•  ์ˆ˜ ์—†์„ ์ˆ˜๋„ ์žˆ๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๊ฐ ์ฝ”๋“œ ๋ฒ ์ด์Šค์— ๋Œ€ํ•ด ์ข‹์€ ์ฒซ ๋ฒˆ์งธ ๋‹จ๊ณ„๋Š” ํ•ญ์ƒ **์ž‘์€** ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋กœ๋“œํ•˜๊ณ  ๋”๋ฏธ ์ •์ˆ˜ ๋ฒกํ„ฐ ์ž…๋ ฅ์„ ์‚ฌ์šฉํ•˜์—ฌ ๋‹จ์ผ forward pass๋ฅผ ์žฌํ˜„ํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. 
์ด์™€ ๊ฐ™์€ ์Šคํฌ๋ฆฝํŠธ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค(์˜์‚ฌ ์ฝ”๋“œ๋กœ ์ž‘์„ฑ): ```python model = BrandNewBertModel.load_pretrained_checkpoint("/path/to/checkpoint/") input_ids = [0, 4, 5, 2, 3, 7, 9] # vector of input ids original_output = model.predict(input_ids) ``` ๋‹ค์Œ์œผ๋กœ, ๋””๋ฒ„๊น… ์ „๋žต์— ๋Œ€ํ•ด ์ผ๋ฐ˜์ ์œผ๋กœ ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋ช‡ ๊ฐ€์ง€ ์„ ํƒ์ง€๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค: - ์›๋ณธ ๋ชจ๋ธ์„ ๋งŽ์€ ์ž‘์€ ํ…Œ์ŠคํŠธ ๊ฐ€๋Šฅํ•œ ๊ตฌ์„ฑ ์š”์†Œ๋กœ ๋ถ„ํ•ดํ•˜๊ณ  ๊ฐ๊ฐ์— ๋Œ€ํ•ด forward pass๋ฅผ ์‹คํ–‰ํ•˜์—ฌ ๊ฒ€์ฆํ•ฉ๋‹ˆ๋‹ค. - ์›๋ณธ ๋ชจ๋ธ์„ ์›๋ณธ *tokenizer*๊ณผ ์›๋ณธ *model*๋กœ๋งŒ ๋ถ„ํ•ดํ•˜๊ณ  ํ•ด๋‹น ๋ถ€๋ถ„์— ๋Œ€ํ•ด forward pass๋ฅผ ์‹คํ–‰ํ•œ ํ›„ ๊ฒ€์ฆ์„ ์œ„ํ•ด ์ค‘๊ฐ„ ์ถœ๋ ฅ(print ๋ฌธ ๋˜๋Š” ์ค‘๋‹จ์ )์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์‹œ ๋งํ•˜์ง€๋งŒ, ์–ด๋–ค ์ „๋žต์„ ์„ ํƒํ• ์ง€๋Š” ๋‹น์‹ ์—๊ฒŒ ๋‹ฌ๋ ค ์žˆ์Šต๋‹ˆ๋‹ค. ์›๋ณธ ์ฝ”๋“œ ๋ฒ ์ด์Šค์— ๋”ฐ๋ผ ํ•˜๋‚˜ ๋˜๋Š” ๋‹ค๋ฅธ ์ „๋žต์ด ์œ ๋ฆฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์›๋ณธ ์ฝ”๋“œ ๋ฒ ์ด์Šค๋ฅผ ๋ชจ๋ธ์˜ ์ž‘์€ ํ•˜์œ„ ๊ตฌ์„ฑ ์š”์†Œ๋กœ ๋ถ„ํ•ดํ•  ์ˆ˜ ์žˆ๋Š”์ง€ ์—ฌ๋ถ€, ์˜ˆ๋ฅผ ๋“ค์–ด ์›๋ณธ ์ฝ”๋“œ ๋ฒ ์ด์Šค๊ฐ€ ์ฆ‰์‹œ ์‹คํ–‰ ๋ชจ๋“œ์—์„œ ๊ฐ„๋‹จํžˆ ์‹คํ–‰๋  ์ˆ˜ ์žˆ๋Š” ๊ฒฝ์šฐ, ๊ทธ๋Ÿฐ ๊ฒฝ์šฐ์—๋Š” ๊ทธ ๋…ธ๋ ฅ์ด ๊ฐ€์น˜๊ฐ€ ์žˆ๋‹ค๋Š” ๊ฒƒ์ด ์ผ๋ฐ˜์ ์ž…๋‹ˆ๋‹ค. ์ดˆ๊ธฐ์— ๋” ์–ด๋ ค์šด ๋ฐฉ๋ฒ•์„ ์„ ํƒํ•˜๋Š” ๊ฒƒ์—๋Š” ๋ช‡ ๊ฐ€์ง€ ์ค‘์š”ํ•œ ์žฅ์ ์ด ์žˆ์Šต๋‹ˆ๋‹ค. - ์›๋ณธ ๋ชจ๋ธ์„ ๐Ÿค— Transformers ๊ตฌํ˜„๊ณผ ๋น„๊ตํ•  ๋•Œ ๊ฐ ๊ตฌ์„ฑ ์š”์†Œ๊ฐ€ ์ผ์น˜ํ•˜๋Š”์ง€ ์ž๋™์œผ๋กœ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ฆ‰, ์‹œ๊ฐ์ ์ธ ๋น„๊ต(print ๋ฌธ์„ ํ†ตํ•œ ๋น„๊ต๊ฐ€ ์•„๋‹Œ) ๋Œ€์‹  ๐Ÿค— Transformers ๊ตฌํ˜„๊ณผ ๊ทธ์— ๋Œ€์‘ํ•˜๋Š” ์›๋ณธ ๊ตฌ์„ฑ ์š”์†Œ๊ฐ€ ์ผ์น˜ํ•˜๋Š”์ง€ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. - ์ „์ฒด ๋ชจ๋ธ์„ ๋ชจ๋“ˆ๋ณ„๋กœ, ์ฆ‰ ์ž‘์€ ๊ตฌ์„ฑ ์š”์†Œ๋กœ ๋ถ„ํ•ดํ•จ์œผ๋กœ์จ ๋ชจ๋ธ์„ ์ด์‹ํ•˜๋Š” ํฐ ๋ฌธ์ œ๋ฅผ ๋‹จ์ˆœํžˆ ๊ฐœ๋ณ„ ๊ตฌ์„ฑ ์š”์†Œ๋ฅผ ์ด์‹ํ•˜๋Š” ์ž‘์€ ๋ฌธ์ œ๋กœ ๋ถ„ํ•ดํ•  ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ ์ž‘์—…์„ ๋” ์ž˜ ๊ตฌ์กฐํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. - ๋ชจ๋ธ์„ ๋…ผ๋ฆฌ์ ์œผ๋กœ ์˜๋ฏธ ์žˆ๋Š” ๊ตฌ์„ฑ ์š”์†Œ๋กœ ๋ถ„๋ฆฌํ•˜๋Š” ๊ฒƒ์€ ๋ชจ๋ธ์˜ ์„ค๊ณ„์— ๋Œ€ํ•œ ๋” ๋‚˜์€ ๊ฐœ์š”๋ฅผ ์–ป๊ณ  ๋ชจ๋ธ์„ ๋” ์ž˜ ์ดํ•ดํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋ฉ๋‹ˆ๋‹ค. - ์ด๋Ÿฌํ•œ ๊ตฌ์„ฑ ์š”์†Œ๋ณ„ ํ…Œ์ŠคํŠธ๋ฅผ ํ†ตํ•ด ์ฝ”๋“œ๋ฅผ ๋ณ€๊ฒฝํ•˜๋ฉด์„œ ํšŒ๊ท€๊ฐ€ ๋ฐœ์ƒํ•˜์ง€ ์•Š๋„๋ก ๋ณด์žฅํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [Lysandre์˜ ELECTRA ํ†ตํ•ฉ ๊ฒ€์‚ฌ](https://gist.github.com/LysandreJik/db4c948f6b4483960de5cbac598ad4ed)๋Š” ์ด๋ฅผ ์ˆ˜ํ–‰ํ•˜๋Š” ์ข‹์€ ์˜ˆ์ œ์ž…๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์›๋ณธ ์ฝ”๋“œ ๋ฒ ์ด์Šค๊ฐ€ ๋งค์šฐ ๋ณต์žกํ•˜๊ฑฐ๋‚˜ ์ค‘๊ฐ„ ๊ตฌ์„ฑ ์š”์†Œ๋ฅผ ์ปดํŒŒ์ผ๋œ ๋ชจ๋“œ์—์„œ ์‹คํ–‰ํ•˜๋Š” ๊ฒƒ๋งŒ ํ—ˆ์šฉํ•˜๋Š” ๊ฒฝ์šฐ, ๋ชจ๋ธ์„ ํ…Œ์ŠคํŠธ ๊ฐ€๋Šฅํ•œ ์ž‘์€ ํ•˜์œ„ ๊ตฌ์„ฑ ์š”์†Œ๋กœ ๋ถ„ํ•ดํ•˜๋Š” ๊ฒƒ์ด ์‹œ๊ฐ„์ด ๋งŽ์ด ์†Œ์š”๋˜๊ฑฐ๋‚˜ ๋ถˆ๊ฐ€๋Šฅํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. [T5์˜ MeshTensorFlow](https://github.com/tensorflow/mesh/tree/master/mesh_tensorflow) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋Š” ๋งค์šฐ ๋ณต์žกํ•˜๋ฉฐ ๋ชจ๋ธ์„ ํ•˜์œ„ ๊ตฌ์„ฑ ์š”์†Œ๋กœ ๋ถ„ํ•ดํ•˜๋Š” ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์„ ์ œ๊ณตํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ๊ฒฝ์šฐ, ๋ณดํ†ต print ๋ฌธ์„ ํ†ตํ•ด ํ™•์ธํ•ฉ๋‹ˆ๋‹ค. ์–ด๋–ค ์ „๋žต์„ ์„ ํƒํ•˜๋”๋ผ๋„ ๊ถŒ์žฅ๋˜๋Š” ์ ˆ์ฐจ๋Š” ๋™์ผํ•ฉ๋‹ˆ๋‹ค. ๋จผ์ € ์‹œ์ž‘ ๋ ˆ์ด์–ด๋ฅผ ๋””๋ฒ„๊ทธํ•˜๊ณ  ๋งˆ์ง€๋ง‰ ๋ ˆ์ด์–ด๋ฅผ ๋งˆ์ง€๋ง‰์— ๋””๋ฒ„๊ทธํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ ์ˆœ์„œ๋กœ ๊ฐ ๋ ˆ์ด์–ด์˜ ์ถœ๋ ฅ์„ ๊ฒ€์ƒ‰ํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค: 1. ๋ชจ๋ธ์— ์ „๋‹ฌ๋œ ์ž…๋ ฅ ID ๊ฐ€์ ธ์˜ค๊ธฐ 2. ์›Œ๋“œ ์ž„๋ฒ ๋”ฉ ๊ฐ€์ ธ์˜ค๊ธฐ 3. ์ฒซ ๋ฒˆ์งธ Transformer ๋ ˆ์ด์–ด์˜ ์ž…๋ ฅ ๊ฐ€์ ธ์˜ค๊ธฐ 4. ์ฒซ ๋ฒˆ์งธ Transformer ๋ ˆ์ด์–ด์˜ ์ถœ๋ ฅ ๊ฐ€์ ธ์˜ค๊ธฐ 5. ๋‹ค์Œ n-1๊ฐœ์˜ Transformer ๋ ˆ์ด์–ด์˜ ์ถœ๋ ฅ ๊ฐ€์ ธ์˜ค๊ธฐ 6. 
BrandNewBert ๋ชจ๋ธ์˜ ์ถœ๋ ฅ ๊ฐ€์ ธ์˜ค๊ธฐ ์ž…๋ ฅ ID๋Š” ์ •์ˆ˜ ๋ฐฐ์—ด๋กœ ๊ตฌ์„ฑ๋˜๋ฉฐ, ์˜ˆ๋ฅผ ๋“ค์–ด `input_ids = [0, 4, 4, 3, 2, 4, 1, 7, 19]`์™€ ๊ฐ™์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ ๋ ˆ์ด์–ด์˜ ์ถœ๋ ฅ์€ ์ข…์ข… ๋‹ค์ฐจ์› ์‹ค์ˆ˜ ๋ฐฐ์—ด๋กœ ๊ตฌ์„ฑ๋˜๋ฉฐ, ๋‹ค์Œ๊ณผ ๊ฐ™์ด ๋‚˜ํƒ€๋‚ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ``` [[ [-0.1465, -0.6501, 0.1993, ..., 0.1451, 0.3430, 0.6024], [-0.4417, -0.5920, 0.3450, ..., -0.3062, 0.6182, 0.7132], [-0.5009, -0.7122, 0.4548, ..., -0.3662, 0.6091, 0.7648], ..., [-0.5613, -0.6332, 0.4324, ..., -0.3792, 0.7372, 0.9288], [-0.5416, -0.6345, 0.4180, ..., -0.3564, 0.6992, 0.9191], [-0.5334, -0.6403, 0.4271, ..., -0.3339, 0.6533, 0.8694]]], ``` ๐Ÿค— Transformers์— ์ถ”๊ฐ€๋˜๋Š” ๋ชจ๋“  ๋ชจ๋ธ์€ ํ†ตํ•ฉ ํ…Œ์ŠคํŠธ๋ฅผ ํ†ต๊ณผํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ฆ‰, ์›๋ณธ ๋ชจ๋ธ๊ณผ ๐Ÿค— Transformers์˜ ์žฌ๊ตฌํ˜„ ๋ฒ„์ „์ด 0.001์˜ ์ •๋ฐ€๋„๋กœ ์ •ํ™•ํžˆ ๋™์ผํ•œ ์ถœ๋ ฅ์„ ๋‚ด์•ผ ํ•ฉ๋‹ˆ๋‹ค! ๋™์ผํ•œ ๋ชจ๋ธ์ด ๋‹ค๋ฅธ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ ์ž‘์„ฑ๋˜์—ˆ์„ ๋•Œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ ํ”„๋ ˆ์ž„์›Œํฌ์— ๋”ฐ๋ผ ์•ฝ๊ฐ„ ๋‹ค๋ฅธ ์ถœ๋ ฅ์„ ์–ป๋Š” ๊ฒƒ์€ ์ •์ƒ์ด๋ฏ€๋กœ 1e-3(0.001)์˜ ์˜ค์ฐจ๋Š” ํ—ˆ์šฉํ•ฉ๋‹ˆ๋‹ค. ๊ฑฐ์˜ ๋™์ผํ•œ ์ถœ๋ ฅ์„ ๋‚ด๋Š” ๊ฒƒ๋งŒ์œผ๋กœ๋Š” ์ถฉ๋ถ„ํ•˜์ง€ ์•Š์œผ๋ฉฐ, ์™„๋ฒฝํžˆ ์ผ์น˜ํ•˜๋Š” ์ˆ˜์ค€์ด์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ๐Ÿค— Transformers ๋ฒ„์ „์˜ ์ค‘๊ฐ„ ์ถœ๋ ฅ์„ *brand_new_bert*์˜ ์›๋ž˜ ๊ตฌํ˜„์˜ ์ค‘๊ฐ„ ์ถœ๋ ฅ๊ณผ ์—ฌ๋Ÿฌ ๋ฒˆ ๋น„๊ตํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด ๊ฒฝ์šฐ ์›๋ณธ ์ €์žฅ์†Œ์˜ **ํšจ์œจ์ ์ธ** ๋””๋ฒ„๊น… ํ™˜๊ฒฝ์ด ์ ˆ๋Œ€์ ์œผ๋กœ ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ๋””๋ฒ„๊น… ํ™˜๊ฒฝ์„ ๊ฐ€๋Šฅํ•œ ํ•œ ํšจ์œจ์ ์œผ๋กœ ๋งŒ๋“œ๋Š” ๋ช‡ ๊ฐ€์ง€ ์กฐ์–ธ์„ ์ œ์‹œํ•ฉ๋‹ˆ๋‹ค. - ์ค‘๊ฐ„ ๊ฒฐ๊ณผ๋ฅผ ๋””๋ฒ„๊ทธํ•˜๋Š” ๊ฐ€์žฅ ์ข‹์€ ๋ฐฉ๋ฒ•์„ ์ฐพ์œผ์„ธ์š”. ์›๋ณธ ์ €์žฅ์†Œ๊ฐ€ PyTorch๋กœ ์ž‘์„ฑ๋˜์—ˆ๋‹ค๋ฉด ์›๋ณธ ๋ชจ๋ธ์„ ๋” ์ž‘์€ ํ•˜์œ„ ๊ตฌ์„ฑ ์š”์†Œ๋กœ ๋ถ„ํ•ดํ•˜์—ฌ ์ค‘๊ฐ„ ๊ฐ’์„ ๊ฒ€์ƒ‰ํ•˜๋Š” ๊ธด ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์ž‘์„ฑํ•˜๋Š” ๊ฒƒ์— ์‹œ๊ฐ„์„ ํˆฌ์žํ•  ๊ฐ€์น˜๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ์›๋ณธ ์ €์žฅ์†Œ๊ฐ€ Tensorflow 1๋กœ ์ž‘์„ฑ๋˜์—ˆ๋‹ค๋ฉด [tf.print](https://www.tensorflow.org/api_docs/python/tf/print)์™€ ๊ฐ™์€ Tensorflow ์ถœ๋ ฅ ์ž‘์—…์„ ์‚ฌ์šฉํ•˜์—ฌ ์ค‘๊ฐ„ ๊ฐ’์„ ์ถœ๋ ฅํ•ด์•ผ ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์›๋ณธ ์ €์žฅ์†Œ๊ฐ€ Jax๋กœ ์ž‘์„ฑ๋˜์—ˆ๋‹ค๋ฉด forward pass๋ฅผ ์‹คํ–‰ํ•  ๋•Œ ๋ชจ๋ธ์ด **jit ๋˜์ง€ ์•Š๋„๋ก** ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด [์ด ๋งํฌ](https://github.com/google/jax/issues/196)๋ฅผ ํ™•์ธํ•ด ๋ณด์„ธ์š”. - ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ ๊ฐ€์žฅ ์ž‘์€ ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. ์ฒดํฌํฌ์ธํŠธ๊ฐ€ ์ž‘์„์ˆ˜๋ก ๋””๋ฒ„๊ทธ ์‚ฌ์ดํด์ด ๋” ๋นจ๋ผ์ง‘๋‹ˆ๋‹ค. ์ „๋ฐ˜์ ์œผ๋กœ forward pass์— 10์ดˆ ์ด์ƒ์ด ๊ฑธ๋ฆฌ๋Š” ๊ฒฝ์šฐ ํšจ์œจ์ ์ด์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๋งค์šฐ ํฐ ์ฒดํฌํฌ์ธํŠธ๋งŒ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ๊ฒฝ์šฐ, ์ƒˆ ํ™˜๊ฒฝ์—์„œ ์ž„์˜๋กœ ์ดˆ๊ธฐํ™”๋œ ๊ฐ€์ค‘์น˜๋กœ ๋”๋ฏธ ๋ชจ๋ธ์„ ๋งŒ๋“ค๊ณ  ํ•ด๋‹น ๊ฐ€์ค‘์น˜๋ฅผ ๐Ÿค— Transformers ๋ฒ„์ „๊ณผ ๋น„๊ตํ•˜๊ธฐ ์œ„ํ•ด ์ €์žฅํ•˜๋Š” ๊ฒƒ์ด ๋” ์˜๋ฏธ๊ฐ€ ์žˆ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. - ๋””๋ฒ„๊น… ์„ค์ •์—์„œ ๊ฐ€์žฅ ์‰ฝ๊ฒŒ forward pass๋ฅผ ํ˜ธ์ถœํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์‚ฌ์šฉํ•˜์„ธ์š”. ์›๋ณธ ์ €์žฅ์†Œ์—์„œ **๋‹จ์ผ** forward pass๋งŒ ํ˜ธ์ถœํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ์ฐพ๋Š” ๊ฒƒ์ด ์ด์ƒ์ ์ž…๋‹ˆ๋‹ค. ์ด ํ•จ์ˆ˜๋Š” ์ผ๋ฐ˜์ ์œผ๋กœ `predict`, `evaluate`, `forward`, `__call__`๊ณผ ๊ฐ™์ด ํ˜ธ์ถœ๋ฉ๋‹ˆ๋‹ค. `autoregressive_sample`๊ณผ ๊ฐ™์€ ํ…์ŠคํŠธ ์ƒ์„ฑ์—์„œ `forward`๋ฅผ ์—ฌ๋Ÿฌ ๋ฒˆ ํ˜ธ์ถœํ•˜์—ฌ ํ…์ŠคํŠธ๋ฅผ ์ƒ์„ฑํ•˜๋Š” ๋“ฑ์˜ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋””๋ฒ„๊ทธํ•˜๊ณ  ์‹ถ์ง€ ์•Š์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค. - ํ† ํฐํ™” ๊ณผ์ •์„ ๋ชจ๋ธ์˜ *forward* pass์™€ ๋ถ„๋ฆฌํ•˜๋ ค๊ณ  ๋…ธ๋ ฅํ•˜์„ธ์š”. 
์›๋ณธ ์ €์žฅ์†Œ์—์„œ ์ž…๋ ฅ ๋ฌธ์ž์—ด์„ ์ž…๋ ฅํ•ด์•ผ ํ•˜๋Š” ์˜ˆ์ œ๊ฐ€ ์žˆ๋Š” ๊ฒฝ์šฐ, ์ž…๋ ฅ ๋ฌธ์ž์—ด์ด ์ž…๋ ฅ ID๋กœ ๋ณ€๊ฒฝ๋˜๋Š” ์ˆœ๊ฐ„์„ ์ฐพ์•„์„œ ์‹œ์ž‘ํ•˜์„ธ์š”. ์ด ๊ฒฝ์šฐ ์ง์ ‘ ID๋ฅผ ์ž…๋ ฅํ•  ์ˆ˜ ์žˆ๋„๋ก ์ž‘์€ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์ž‘์„ฑํ•˜๊ฑฐ๋‚˜ ์›๋ณธ ์ฝ”๋“œ๋ฅผ ์ˆ˜์ •ํ•ด์•ผ ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. - ๋””๋ฒ„๊น… ์„ค์ •์—์„œ ๋ชจ๋ธ์ด ํ›ˆ๋ จ ๋ชจ๋“œ๊ฐ€ ์•„๋‹ˆ๋ผ๋Š” ๊ฒƒ์„ ํ™•์ธํ•˜์„ธ์š”. ํ›ˆ๋ จ ๋ชจ๋“œ์—์„œ๋Š” ๋ชจ๋ธ์˜ ์—ฌ๋Ÿฌ ๋“œ๋กญ์•„์›ƒ ๋ ˆ์ด์–ด ๋•Œ๋ฌธ์— ๋ฌด์ž‘์œ„ ์ถœ๋ ฅ์ด ์ƒ์„ฑ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋””๋ฒ„๊น… ํ™˜๊ฒฝ์—์„œ forward pass๊ฐ€ **๊ฒฐ์ •๋ก ์ **์ด๋„๋ก ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋˜๋Š” ๋™์ผํ•œ ํ”„๋ ˆ์ž„์›Œํฌ์— ์žˆ๋Š” ๊ฒฝ์šฐ *transformers.utils.set_seed*๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. ๋‹ค์Œ ์„น์…˜์—์„œ๋Š” *brand_new_bert*์— ๋Œ€ํ•ด ์ด ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•˜๋Š” ๋ฐ ๋” ๊ตฌ์ฒด์ ์ธ ์„ธ๋ถ€ ์‚ฌํ•ญ/ํŒ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ### 5.-14. ๐Ÿค— Transformers์— BrandNewBert๋ฅผ ์ด์‹ํ•˜๊ธฐ [[5.-14.-port-brandnewbert-to-transformers]] ์ด์ œ, ๋งˆ์นจ๋‚ด ๐Ÿค— Transformers์— ์ƒˆ๋กœ์šด ์ฝ”๋“œ๋ฅผ ์ถ”๊ฐ€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๐Ÿค— Transformers ํฌํฌ์˜ ํด๋ก ์œผ๋กœ ์ด๋™ํ•˜์„ธ์š”: ```bash cd transformers ``` ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ด๋ฏธ ์กด์žฌํ•˜๋Š” ๋ชจ๋ธ์˜ ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜์™€ ์ •ํ™•ํžˆ ์ผ์น˜ํ•˜๋Š” ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•˜๋Š” ํŠน๋ณ„ํ•œ ๊ฒฝ์šฐ์—๋Š” [์ด ์„น์…˜](#write-a-conversion-script)์— ์„ค๋ช…๋œ๋Œ€๋กœ ๋ณ€ํ™˜ ์Šคํฌ๋ฆฝํŠธ๋งŒ ์ถ”๊ฐ€ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ์ด ๊ฒฝ์šฐ์—๋Š” ์ด๋ฏธ ์กด์žฌํ•˜๋Š” ๋ชจ๋ธ์˜ ์ „์ฒด ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜๋ฅผ ๊ทธ๋Œ€๋กœ ์žฌ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋ ‡์ง€ ์•Š์œผ๋ฉด ์ƒˆ ๋ชจ๋ธ ์ƒ์„ฑ์„ ์‹œ์ž‘ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋‹ค์Œ์—์„œ ์‹œ์ž‘ํ•˜๋Š” ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ๊ธฐ์กด ๋ชจ๋ธ: ```bash transformers-cli add-new-model-like ``` ๋ชจ๋ธ์˜ ๊ธฐ๋ณธ ์ •๋ณด๋ฅผ ์ž…๋ ฅํ•˜๋Š” ์„ค๋ฌธ์ง€๊ฐ€ ํ‘œ์‹œ๋ฉ๋‹ˆ๋‹ค. **huggingface/transformers ๋ฉ”์ธ ์ €์žฅ์†Œ์— Pull Request ์—ด๊ธฐ** ์ž๋™์œผ๋กœ ์ƒ์„ฑ๋œ ์ฝ”๋“œ๋ฅผ ์ˆ˜์ •ํ•˜๊ธฐ ์ „์—, ์ง€๊ธˆ์€ "์ž‘์—… ์ง„ํ–‰ ์ค‘ (WIP)" ํ’€ ๋ฆฌํ€˜์ŠคํŠธ๋ฅผ ์—ด๊ธฐ ์œ„ํ•œ ์‹œ๊ธฐ์ž…๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ๐Ÿค— Transformers์— "*brand_new_bert* ์ถ”๊ฐ€"๋ผ๋Š” ์ œ๋ชฉ์˜ "[WIP] Add *brand_new_bert*" ํ’€ ๋ฆฌํ€˜์ŠคํŠธ๋ฅผ ์—ฝ๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ๋‹น์‹ ๊ณผ Hugging Face ํŒ€์ด ๐Ÿค— Transformers์— ๋ชจ๋ธ์„ ํ†ตํ•ฉํ•˜๋Š” ์ž‘์—…์„ ํ•จ๊ป˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์„ ์ˆ˜ํ–‰ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: 1. ๋ฉ”์ธ ๋ธŒ๋žœ์น˜์—์„œ ์ž‘์—…์„ ์ž˜ ์„ค๋ช…ํ•˜๋Š” ์ด๋ฆ„์œผ๋กœ ๋ธŒ๋žœ์น˜ ์ƒ์„ฑ ```bash git checkout -b add_brand_new_bert ``` 2. ์ž๋™์œผ๋กœ ์ƒ์„ฑ๋œ ์ฝ”๋“œ ์ปค๋ฐ‹ ```bash git add . git commit ``` 3. ํ˜„์žฌ ๋ฉ”์ธ์„ ๊ฐ€์ ธ์˜ค๊ณ  ๋ฆฌ๋ฒ ์ด์Šค ```bash git fetch upstream git rebase upstream/main ``` 4. ๋ณ€๊ฒฝ ์‚ฌํ•ญ์„ ๊ณ„์ •์— ํ‘ธ์‹œ ```bash git push -u origin a-descriptive-name-for-my-changes ``` 5. ๋งŒ์กฑ์Šค๋Ÿฝ๋‹ค๋ฉด, GitHub์—์„œ ์ž์‹ ์˜ ํฌํฌํ•œ ์›น ํŽ˜์ด์ง€๋กœ ์ด๋™ํ•ฉ๋‹ˆ๋‹ค. "Pull request"๋ฅผ ํด๋ฆญํ•ฉ๋‹ˆ๋‹ค. Hugging Face ํŒ€์˜ ์ผ๋ถ€ ๋ฉค๋ฒ„์˜ GitHub ํ•ธ๋“ค์„ ๋ฆฌ๋ทฐ์–ด๋กœ ์ถ”๊ฐ€ํ•˜์—ฌ Hugging Face ํŒ€์ด ์•ž์œผ๋กœ์˜ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์— ๋Œ€ํ•ด ์•Œ๋ฆผ์„ ๋ฐ›์„ ์ˆ˜ ์žˆ๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. 6. GitHub ํ’€ ๋ฆฌํ€˜์ŠคํŠธ ์›น ํŽ˜์ด์ง€ ์˜ค๋ฅธ์ชฝ์— ์žˆ๋Š” "Convert to draft"๋ฅผ ํด๋ฆญํ•˜์—ฌ PR์„ ์ดˆ์•ˆ์œผ๋กœ ๋ณ€๊ฒฝํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ์œผ๋กœ, ์–ด๋–ค ์ง„์ „์„ ์ด๋ฃจ์—ˆ๋‹ค๋ฉด ์ž‘์—…์„ ์ปค๋ฐ‹ํ•˜๊ณ  ๊ณ„์ •์— ํ‘ธ์‹œํ•˜์—ฌ ํ’€ ๋ฆฌํ€˜์ŠคํŠธ์— ํ‘œ์‹œ๋˜๋„๋ก ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 
๋˜ํ•œ, ๋‹ค์Œ๊ณผ ๊ฐ™์ด ํ˜„์žฌ ๋ฉ”์ธ๊ณผ ์ž‘์—…์„ ์—…๋ฐ์ดํŠธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```bash git fetch upstream git merge upstream/main ``` ์ผ๋ฐ˜์ ์œผ๋กœ, ๋ชจ๋ธ ๋˜๋Š” ๊ตฌํ˜„์— ๊ด€ํ•œ ๋ชจ๋“  ์งˆ๋ฌธ์€ ์ž์‹ ์˜ PR์—์„œ ํ•ด์•ผ ํ•˜๋ฉฐ, PR์—์„œ ํ† ๋ก ๋˜๊ณ  ํ•ด๊ฒฐ๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด Hugging Face ํŒ€์ด ์ƒˆ๋กœ์šด ์ฝ”๋“œ๋ฅผ ์ปค๋ฐ‹ํ•˜๊ฑฐ๋‚˜ ์งˆ๋ฌธ์„ ํ•  ๋•Œ ํ•ญ์ƒ ์•Œ๋ฆผ์„ ๋ฐ›์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. Hugging Face ํŒ€์—๊ฒŒ ๋ฌธ์ œ ๋˜๋Š” ์งˆ๋ฌธ์„ ํšจ์œจ์ ์œผ๋กœ ์ดํ•ดํ•  ์ˆ˜ ์žˆ๋„๋ก ์ถ”๊ฐ€ํ•œ ์ฝ”๋“œ๋ฅผ ๋ช…์‹œํ•˜๋Š” ๊ฒƒ์ด ๋„์›€์ด ๋  ๋•Œ๊ฐ€ ๋งŽ์Šต๋‹ˆ๋‹ค. ์ด๋ฅผ ์œ„ํ•ด, ๋ณ€๊ฒฝ ์‚ฌํ•ญ์„ ๋ชจ๋‘ ๋ณผ ์ˆ˜ ์žˆ๋Š” "Files changed" ํƒญ์œผ๋กœ ์ด๋™ํ•˜์—ฌ ์งˆ๋ฌธํ•˜๊ณ ์ž ํ•˜๋Š” ์ค„๋กœ ์ด๋™ํ•œ ๋‹ค์Œ "+" ๊ธฐํ˜ธ๋ฅผ ํด๋ฆญํ•˜์—ฌ ์ฝ”๋ฉ˜ํŠธ๋ฅผ ์ถ”๊ฐ€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์งˆ๋ฌธ์ด๋‚˜ ๋ฌธ์ œ๊ฐ€ ํ•ด๊ฒฐ๋˜๋ฉด, ์ƒ์„ฑ๋œ ์ฝ”๋ฉ˜ํŠธ์˜ "Resolve" ๋ฒ„ํŠผ์„ ํด๋ฆญํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ, Hugging Face ํŒ€์€ ์ฝ”๋“œ๋ฅผ ๋ฆฌ๋ทฐํ•  ๋•Œ ์ฝ”๋ฉ˜ํŠธ๋ฅผ ๋‚จ๊ธธ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์šฐ๋ฆฌ๋Š” PR์—์„œ ๋Œ€๋ถ€๋ถ„์˜ ์งˆ๋ฌธ์„ GitHub์—์„œ ๋ฌป๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ๊ณต๊ฐœ์— ํฌ๊ฒŒ ๋„์›€์ด ๋˜์ง€ ์•Š๋Š” ๋งค์šฐ ์ผ๋ฐ˜์ ์ธ ์งˆ๋ฌธ์˜ ๊ฒฝ์šฐ, Slack์ด๋‚˜ ์ด๋ฉ”์ผ์„ ํ†ตํ•ด Hugging Face ํŒ€์—๊ฒŒ ๋ฌธ์˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. **5. brand_new_bert์— ๋Œ€ํ•ด ์ƒ์„ฑ๋œ ๋ชจ๋ธ ์ฝ”๋“œ๋ฅผ ์ ์šฉํ•˜๊ธฐ** ๋จผ์ €, ์šฐ๋ฆฌ๋Š” ๋ชจ๋ธ ์ž์ฒด์—๋งŒ ์ดˆ์ ์„ ๋งž์ถ”๊ณ  ํ† ํฌ๋‚˜์ด์ €์— ๋Œ€ํ•ด์„œ๋Š” ์‹ ๊ฒฝ ์“ฐ์ง€ ์•Š์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋ชจ๋“  ๊ด€๋ จ ์ฝ”๋“œ๋Š” ๋‹ค์Œ์˜ ์ƒ์„ฑ๋œ ํŒŒ์ผ์—์„œ ์ฐพ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` ๋ฐ `src/transformers/models/brand_new_bert/configuration_brand_new_bert.py`. ์ด์ œ ๋งˆ์นจ๋‚ด ์ฝ”๋”ฉ์„ ์‹œ์ž‘ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค :). `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py`์˜ ์ƒ์„ฑ๋œ ์ฝ”๋“œ๋Š” ์ธ์ฝ”๋” ์ „์šฉ ๋ชจ๋ธ์ธ ๊ฒฝ์šฐ BERT์™€ ๋™์ผํ•œ ์•„ํ‚คํ…์ฒ˜๋ฅผ ๊ฐ€์ง€๊ฑฐ๋‚˜, ์ธ์ฝ”๋”-๋””์ฝ”๋” ๋ชจ๋ธ์ธ ๊ฒฝ์šฐ BART์™€ ๋™์ผํ•œ ์•„ํ‚คํ…์ฒ˜๋ฅผ ๊ฐ€์งˆ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด ์‹œ์ ์—์„œ, ๋ชจ๋ธ์˜ ์ด๋ก ์  ์ธก๋ฉด์— ๋Œ€ํ•ด ๋ฐฐ์šด ๋‚ด์šฉ์„ ๋‹ค์‹œ ์ƒ๊ธฐํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: *๋ชจ๋ธ์ด BERT ๋˜๋Š” BART์™€ ์–ด๋–ป๊ฒŒ ๋‹ค๋ฅธ๊ฐ€์š”?*. ์ž์ฃผ ๋ณ€๊ฒฝํ•ด์•ผ ํ•˜๋Š” ๊ฒƒ์€ *self-attention* ๋ ˆ์ด์–ด, ์ •๊ทœํ™” ๋ ˆ์ด์–ด์˜ ์ˆœ์„œ ๋“ฑ์„ ๋ณ€๊ฒฝํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋‹ค์‹œ ๋งํ•˜์ง€๋งŒ, ์ž์‹ ์˜ ๋ชจ๋ธ์„ ๊ตฌํ˜„ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋˜๋„๋ก Transformers์—์„œ ์ด๋ฏธ ์กด์žฌํ•˜๋Š” ๋ชจ๋ธ์˜ ์œ ์‚ฌํ•œ ์•„ํ‚คํ…์ฒ˜๋ฅผ ์‚ดํŽด๋ณด๋Š” ๊ฒƒ์ด ์œ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. **์ฐธ๊ณ ๋กœ** ์ด ์‹œ์ ์—์„œ, ์ฝ”๋“œ๊ฐ€ ์™„์ „ํžˆ ์ •ํ™•ํ•˜๊ฑฐ๋‚˜ ๊นจ๋—ํ•˜๋‹ค๊ณ  ํ™•์‹ ํ•  ํ•„์š”๋Š” ์—†์Šต๋‹ˆ๋‹ค. ์˜คํžˆ๋ ค ์ฒ˜์Œ์—๋Š” ์›๋ณธ ์ฝ”๋“œ์˜ ์ฒซ ๋ฒˆ์งธ *๋ถˆ์™„์ „ํ•˜๊ณ * ๋ณต์‚ฌ๋œ ๋ฒ„์ „์„ `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py`์— ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ํ•„์š”ํ•œ ๋ชจ๋“  ์ฝ”๋“œ๊ฐ€ ์ถ”๊ฐ€๋  ๋•Œ๊นŒ์ง€ ์ด๋Ÿฌํ•œ ์ž‘์—…์„ ์ง„ํ–‰ํ•œ ํ›„, ๋‹ค์Œ ์„น์…˜์—์„œ ์„ค๋ช…ํ•œ ๋ณ€ํ™˜ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ฝ”๋“œ๋ฅผ ์ ์ง„์ ์œผ๋กœ ๊ฐœ์„ ํ•˜๊ณ  ์ˆ˜์ •ํ•˜๋Š” ๊ฒƒ์ด ํ›จ์”ฌ ํšจ์œจ์ ์ž…๋‹ˆ๋‹ค. ์ด ์‹œ์ ์—์„œ ์ž‘๋™ํ•ด์•ผ ํ•˜๋Š” ์œ ์ผํ•œ ๊ฒƒ์€ ๋‹ค์Œ ๋ช…๋ น์ด ์ž‘๋™ํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค: ```python from transformers import BrandNewBertModel, BrandNewBertConfig model = BrandNewBertModel(BrandNewBertConfig()) ``` ์œ„์˜ ๋ช…๋ น์€ `BrandNewBertConfig()`์— ์ •์˜๋œ ๊ธฐ๋ณธ ๋งค๊ฐœ๋ณ€์ˆ˜์— ๋”ฐ๋ผ ๋ฌด์ž‘์œ„ ๊ฐ€์ค‘์น˜๋กœ ๋ชจ๋ธ์„ ์ƒ์„ฑํ•˜๋ฉฐ, ์ด๋กœ์จ ๋ชจ๋“  ๊ตฌ์„ฑ ์š”์†Œ์˜ `init()` ๋ฉ”์„œ๋“œ๊ฐ€ ์ž‘๋™ํ•จ์„ ๋ณด์žฅํ•ฉ๋‹ˆ๋‹ค. 
๋ชจ๋“  ๋ฌด์ž‘์œ„ ์ดˆ๊ธฐํ™”๋Š” `BrandnewBertPreTrainedModel` ํด๋ž˜์Šค์˜ `_init_weights` ๋ฉ”์„œ๋“œ์—์„œ ์ˆ˜ํ–‰๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด ๋ฉ”์„œ๋“œ๋Š” ๊ตฌ์„ฑ ์„ค์ • ๋ณ€์ˆ˜์— ๋”ฐ๋ผ ๋ชจ๋“  ๋ฆฌํ”„ ๋ชจ๋“ˆ์„ ์ดˆ๊ธฐํ™”ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. BERT์˜ `_init_weights` ๋ฉ”์„œ๋“œ ์˜ˆ์ œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```py def _init_weights(self, module): """Initialize the weights""" if isinstance(module, nn.Linear): module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) if module.bias is not None: module.bias.data.zero_() elif isinstance(module, nn.Embedding): module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) if module.padding_idx is not None: module.weight.data[module.padding_idx].zero_() elif isinstance(module, nn.LayerNorm): module.bias.data.zero_() module.weight.data.fill_(1.0) ``` ๋ช‡ ๊ฐ€์ง€ ๋ชจ๋“ˆ์— ๋Œ€ํ•ด ํŠน๋ณ„ํ•œ ์ดˆ๊ธฐํ™”๊ฐ€ ํ•„์š”ํ•œ ๊ฒฝ์šฐ ์‚ฌ์šฉ์ž ์ •์˜ ๋ฐฉ์‹์„ ์‚ฌ์šฉํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, `Wav2Vec2ForPreTraining`์—์„œ ๋งˆ์ง€๋ง‰ ๋‘ ๊ฐœ์˜ ์„ ํ˜• ๋ ˆ์ด์–ด๋Š” ์ผ๋ฐ˜์ ์ธ PyTorch `nn.Linear`์˜ ์ดˆ๊ธฐํ™”๋ฅผ ๊ฐ€์ ธ์•ผ ํ•˜์ง€๋งŒ, ๋‹ค๋ฅธ ๋ชจ๋“  ๋ ˆ์ด์–ด๋Š” ์œ„์™€ ๊ฐ™์€ ์ดˆ๊ธฐํ™”๋ฅผ ์‚ฌ์šฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ฝ”๋“œํ™”๋ฉ๋‹ˆ๋‹ค: ```py def _init_weights(self, module): """Initialize the weights""" if isinstance(module, Wav2Vec2ForPreTraining): module.project_hid.reset_parameters() module.project_q.reset_parameters() module.project_hid._is_hf_initialized = True module.project_q._is_hf_initialized = True elif isinstance(module, nn.Linear): module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) if module.bias is not None: module.bias.data.zero_() ``` `_is_hf_initialized` ํ”Œ๋ž˜๊ทธ๋Š” ์„œ๋ธŒ๋ชจ๋“ˆ์„ ํ•œ ๋ฒˆ๋งŒ ์ดˆ๊ธฐํ™”ํ•˜๋„๋ก ๋‚ด๋ถ€์ ์œผ๋กœ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. `module.project_q` ๋ฐ `module.project_hid`์— ๋Œ€ํ•ด `True`๋กœ ์„ค์ •ํ•จ์œผ๋กœ์จ, ์šฐ๋ฆฌ๊ฐ€ ์ˆ˜ํ–‰ํ•œ ์‚ฌ์šฉ์ž ์ •์˜ ์ดˆ๊ธฐํ™”๊ฐ€ ์ดํ›„์— ๋ฎ์–ด์“ฐ์ด์ง€ ์•Š๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. ์ฆ‰, `_init_weights` ํ•จ์ˆ˜๊ฐ€ ์ด๋“ค์—๊ฒŒ ์ ์šฉ๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. **6. ๋ณ€ํ™˜ ์Šคํฌ๋ฆฝํŠธ ์ž‘์„ฑํ•˜๊ธฐ** ๋‹ค์Œ์œผ๋กœ, ๋””๋ฒ„๊ทธ์— ์‚ฌ์šฉํ•œ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๊ธฐ์กด ์ €์žฅ์†Œ์—์„œ ๋งŒ๋“  ๐Ÿค— Transformers ๊ตฌํ˜„๊ณผ ํ˜ธํ™˜๋˜๋Š” ์ฒดํฌํฌ์ธํŠธ๋กœ ๋ณ€ํ™˜ํ•  ์ˆ˜ ์žˆ๋Š” ๋ณ€ํ™˜ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์ž‘์„ฑํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ณ€ํ™˜ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์ฒ˜์Œ๋ถ€ํ„ฐ ์ž‘์„ฑํ•˜๋Š” ๊ฒƒ๋ณด๋‹ค๋Š” *brand_new_bert*์™€ ๋™์ผํ•œ ํ”„๋ ˆ์ž„์›Œํฌ๋กœ ์ž‘์„ฑ๋œ ์œ ์‚ฌํ•œ ๋ชจ๋ธ์„ ๋ณ€ํ™˜ํ•œ ๊ธฐ์กด ๋ณ€ํ™˜ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์ฐพ์•„๋ณด๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ ๊ธฐ์กด ๋ณ€ํ™˜ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ๋ณต์‚ฌํ•˜์—ฌ ์‚ฌ์šฉ ์‚ฌ๋ก€์— ๋งž๊ฒŒ ์•ฝ๊ฐ„ ์ˆ˜์ •ํ•˜๋Š” ๊ฒƒ์œผ๋กœ ์ถฉ๋ถ„ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์— ๋Œ€ํ•ด ์œ ์‚ฌํ•œ ๊ธฐ์กด ๋ณ€ํ™˜ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์–ด๋””์—์„œ ์ฐพ์„ ์ˆ˜ ์žˆ๋Š”์ง€ Hugging Face ํŒ€์—๊ฒŒ ๋ฌธ์˜ํ•˜๋Š” ๊ฒƒ์„ ๋ง์„ค์ด์ง€ ๋งˆ์„ธ์š”. - TensorFlow์—์„œ PyTorch๋กœ ๋ชจ๋ธ์„ ์ด์ „ํ•˜๋Š” ๊ฒฝ์šฐ, ์ข‹์€ ์ฐธ๊ณ  ์ž๋ฃŒ๋กœ BERT์˜ ๋ณ€ํ™˜ ์Šคํฌ๋ฆฝํŠธ [์—ฌ๊ธฐ](https://github.com/huggingface/transformers/blob/7acfa95afb8194f8f9c1f4d2c6028224dbed35a2/src/transformers/models/bert/modeling_bert.py#L91)๋ฅผ ์ฐธ์กฐํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. - PyTorch์—์„œ PyTorch๋กœ ๋ชจ๋ธ์„ ์ด์ „ํ•˜๋Š” ๊ฒฝ์šฐ, ์ข‹์€ ์ฐธ๊ณ  ์ž๋ฃŒ๋กœ BART์˜ ๋ณ€ํ™˜ ์Šคํฌ๋ฆฝํŠธ [์—ฌ๊ธฐ](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bart/convert_bart_original_pytorch_checkpoint_to_pytorch.py)๋ฅผ ์ฐธ์กฐํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์—์„œ๋Š” PyTorch ๋ชจ๋ธ์ด ๋ ˆ์ด์–ด ๊ฐ€์ค‘์น˜๋ฅผ ์ €์žฅํ•˜๊ณ  ๋ ˆ์ด์–ด ์ด๋ฆ„์„ ์ •์˜ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ๊ฐ„๋‹จํžˆ ์„ค๋ช…ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. 
PyTorch์—์„œ ๋ ˆ์ด์–ด์˜ ์ด๋ฆ„์€ ๋ ˆ์ด์–ด์— ์ง€์ •ํ•œ ํด๋ž˜์Šค ์†์„ฑ์˜ ์ด๋ฆ„์œผ๋กœ ์ •์˜๋ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ๊ณผ ๊ฐ™์ด PyTorch์—์„œ `SimpleModel`์ด๋ผ๋Š” ๋”๋ฏธ ๋ชจ๋ธ์„ ์ •์˜ํ•ด ๋ด…์‹œ๋‹ค: ```python from torch import nn class SimpleModel(nn.Module): def __init__(self): super().__init__() self.dense = nn.Linear(10, 10) self.intermediate = nn.Linear(10, 10) self.layer_norm = nn.LayerNorm(10) ``` ์ด์ œ ์ด ๋ชจ๋ธ ์ •์˜์˜ ์ธ์Šคํ„ด์Šค๋ฅผ ์ƒ์„ฑํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ `dense`, `intermediate`, `layer_norm` ๋“ฑ์˜ ๊ฐ€์ค‘์น˜๊ฐ€ ๋žœ๋คํ•˜๊ฒŒ ํ• ๋‹น๋ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ ์ถœ๋ ฅํ•˜์—ฌ ์•„ํ‚คํ…์ฒ˜๋ฅผ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```python model = SimpleModel() print(model) ``` ์ด๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ถœ๋ ฅ๋ฉ๋‹ˆ๋‹ค: ``` SimpleModel( (dense): Linear(in_features=10, out_features=10, bias=True) (intermediate): Linear(in_features=10, out_features=10, bias=True) (layer_norm): LayerNorm((10,), eps=1e-05, elementwise_affine=True) ) ``` ์šฐ๋ฆฌ๋Š” ๋ ˆ์ด์–ด์˜ ์ด๋ฆ„์ด PyTorch์—์„œ ํด๋ž˜์Šค ์†์„ฑ์˜ ์ด๋ฆ„์œผ๋กœ ์ •์˜๋˜์–ด ์žˆ๋Š” ๊ฒƒ์„ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํŠน์ • ๋ ˆ์ด์–ด์˜ ๊ฐ€์ค‘์น˜ ๊ฐ’์„ ์ถœ๋ ฅํ•˜์—ฌ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```python print(model.dense.weight.data) ``` ๊ฐ€์ค‘์น˜๊ฐ€ ๋ฌด์ž‘์œ„๋กœ ์ดˆ๊ธฐํ™”๋˜์—ˆ์Œ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ``` tensor([[-0.0818, 0.2207, -0.0749, -0.0030, 0.0045, -0.1569, -0.1598, 0.0212, -0.2077, 0.2157], [ 0.1044, 0.0201, 0.0990, 0.2482, 0.3116, 0.2509, 0.2866, -0.2190, 0.2166, -0.0212], [-0.2000, 0.1107, -0.1999, -0.3119, 0.1559, 0.0993, 0.1776, -0.1950, -0.1023, -0.0447], [-0.0888, -0.1092, 0.2281, 0.0336, 0.1817, -0.0115, 0.2096, 0.1415, -0.1876, -0.2467], [ 0.2208, -0.2352, -0.1426, -0.2636, -0.2889, -0.2061, -0.2849, -0.0465, 0.2577, 0.0402], [ 0.1502, 0.2465, 0.2566, 0.0693, 0.2352, -0.0530, 0.1859, -0.0604, 0.2132, 0.1680], [ 0.1733, -0.2407, -0.1721, 0.1484, 0.0358, -0.0633, -0.0721, -0.0090, 0.2707, -0.2509], [-0.1173, 0.1561, 0.2945, 0.0595, -0.1996, 0.2988, -0.0802, 0.0407, 0.1829, -0.1568], [-0.1164, -0.2228, -0.0403, 0.0428, 0.1339, 0.0047, 0.1967, 0.2923, 0.0333, -0.0536], [-0.1492, -0.1616, 0.1057, 0.1950, -0.2807, -0.2710, -0.1586, 0.0739, 0.2220, 0.2358]]). ``` ๋ณ€ํ™˜ ์Šคํฌ๋ฆฝํŠธ์—์„œ๋Š” ์ด๋Ÿฌํ•œ ๋ฌด์ž‘์œ„๋กœ ์ดˆ๊ธฐํ™”๋œ ๊ฐ€์ค‘์น˜๋ฅผ ์ฒดํฌํฌ์ธํŠธ์˜ ํ•ด๋‹น ๋ ˆ์ด์–ด์˜ ์ •ํ™•ํ•œ ๊ฐ€์ค‘์น˜๋กœ ์ฑ„์›Œ์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```python # retrieve matching layer weights, e.g. by # recursive algorithm layer_name = "dense" pretrained_weight = array_of_dense_layer model_pointer = getattr(model, "dense") model_pointer.weight.data = torch.from_numpy(pretrained_weight) ``` ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด PyTorch ๋ชจ๋ธ์˜ ๋ฌด์ž‘์œ„๋กœ ์ดˆ๊ธฐํ™”๋œ ๊ฐ ๊ฐ€์ค‘์น˜์™€ ํ•ด๋‹น ์ฒดํฌํฌ์ธํŠธ ๊ฐ€์ค‘์น˜๊ฐ€ **๋ชจ์–‘๊ณผ ์ด๋ฆ„** ๋ชจ๋‘์—์„œ ์ •ํ™•ํžˆ ์ผ์น˜ํ•˜๋Š”์ง€ ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฅผ ์œ„ํ•ด ๋ชจ์–‘์— ๋Œ€ํ•œ assert ๋ฌธ์„ ์ถ”๊ฐ€ํ•˜๊ณ  ์ฒดํฌํฌ์ธํŠธ ๊ฐ€์ค‘์น˜์˜ ์ด๋ฆ„์„ ์ถœ๋ ฅํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋ฌธ์žฅ์„ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```python assert ( model_pointer.weight.shape == pretrained_weight.shape ), f"Pointer shape of random weight {model_pointer.shape} and array shape of checkpoint weight {pretrained_weight.shape} mismatched" ``` ๋˜ํ•œ ๋‘ ๊ฐ€์ค‘์น˜์˜ ์ด๋ฆ„์„ ์ถœ๋ ฅํ•˜์—ฌ ์ผ์น˜ํ•˜๋Š”์ง€ ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 
*์˜ˆ์‹œ*: ```python logger.info(f"Initialize PyTorch weight {layer_name} from {pretrained_weight.name}") ``` ๋ชจ์–‘ ๋˜๋Š” ์ด๋ฆ„์ด ์ผ์น˜ํ•˜์ง€ ์•Š๋Š” ๊ฒฝ์šฐ, ๋žœ๋ค์œผ๋กœ ์ดˆ๊ธฐํ™”๋œ ๋ ˆ์ด์–ด์— ์ž˜๋ชป๋œ ์ฒดํฌํฌ์ธํŠธ ๊ฐ€์ค‘์น˜๋ฅผ ํ• ๋‹นํ•œ ๊ฒƒ์œผ๋กœ ์ถ”์ธก๋ฉ๋‹ˆ๋‹ค. ์ž˜๋ชป๋œ ๋ชจ์–‘์€ `BrandNewBertConfig()`์˜ ๊ตฌ์„ฑ ๋งค๊ฐœ๋ณ€์ˆ˜ ์„ค์ •์ด ๋ณ€ํ™˜ํ•˜๋ ค๋Š” ์ฒดํฌํฌ์ธํŠธ์— ์‚ฌ์šฉ๋œ ์„ค์ •๊ณผ ์ •ํ™•ํžˆ ์ผ์น˜ํ•˜์ง€ ์•Š๊ธฐ ๋•Œ๋ฌธ์ผ ๊ฐ€๋Šฅ์„ฑ์ด ๊ฐ€์žฅ ํฝ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ PyTorch์˜ ๋ ˆ์ด์–ด ๊ตฌํ˜„ ์ž์ฒด์—์„œ ๊ฐ€์ค‘์น˜๋ฅผ ์ „์น˜ํ•ด์•ผ ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๋งˆ์ง€๋ง‰์œผ๋กœ, **๋ชจ๋“ ** ํ•„์š”ํ•œ ๊ฐ€์ค‘์น˜๊ฐ€ ์ดˆ๊ธฐํ™”๋˜์—ˆ๋Š”์ง€ ํ™•์ธํ•˜๊ณ  ์ดˆ๊ธฐํ™”์— ์‚ฌ์šฉ๋˜์ง€ ์•Š์€ ๋ชจ๋“  ์ฒดํฌํฌ์ธํŠธ ๊ฐ€์ค‘์น˜๋ฅผ ์ถœ๋ ฅํ•˜์—ฌ ๋ชจ๋ธ์ด ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ๋ณ€ํ™˜๋˜์—ˆ๋Š”์ง€ ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ž˜๋ชป๋œ ๋ชจ์–‘ ๋ฌธ์žฅ์ด๋‚˜ ์ž˜๋ชป๋œ ์ด๋ฆ„ ํ• ๋‹น์œผ๋กœ ์ธํ•ด ๋ณ€ํ™˜ ์‹œ๋„๊ฐ€ ์‹คํŒจํ•˜๋Š” ๊ฒƒ์€ ์™„์ „ํžˆ ์ •์ƒ์ž…๋‹ˆ๋‹ค. ์ด๋Š” `BrandNewBertConfig()`์—์„œ ์ž˜๋ชป๋œ ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜๊ฑฐ๋‚˜ ๐Ÿค— Transformers ๊ตฌํ˜„์—์„œ ์ž˜๋ชป๋œ ์•„ํ‚คํ…์ฒ˜, ๐Ÿค— Transformers ๊ตฌํ˜„์˜ ๊ตฌ์„ฑ ์š”์†Œ ์ค‘ ํ•˜๋‚˜์˜ `init()` ํ•จ์ˆ˜์— ๋ฒ„๊ทธ๊ฐ€ ์žˆ๋Š” ๊ฒฝ์šฐ์ด๊ฑฐ๋‚˜ ์ฒดํฌํฌ์ธํŠธ ๊ฐ€์ค‘์น˜ ์ค‘ ํ•˜๋‚˜๋ฅผ ์ „์น˜ํ•ด์•ผ ํ•˜๋Š” ๊ฒฝ์šฐ์ผ ๊ฐ€๋Šฅ์„ฑ์ด ๊ฐ€์žฅ ๋†’์Šต๋‹ˆ๋‹ค. ์ด ๋‹จ๊ณ„๋Š” ์ด์ „ ๋‹จ๊ณ„์™€ ํ•จ๊ป˜ ๋ฐ˜๋ณต๋˜์–ด์•ผ ํ•˜๋ฉฐ ๋ชจ๋“  ์ฒดํฌํฌ์ธํŠธ์˜ ๊ฐ€์ค‘์น˜๊ฐ€ Transformers ๋ชจ๋ธ์— ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ๋กœ๋“œ๋˜์—ˆ์„ ๋•Œ๊นŒ์ง€ ๊ณ„์†๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๐Ÿค— Transformers ๊ตฌํ˜„์— ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ๋กœ๋“œํ•œ ํ›„์—๋Š” `/path/to/converted/checkpoint/folder`์™€ ๊ฐ™์€ ์›ํ•˜๋Š” ํด๋”์— ๋ชจ๋ธ์„ ์ €์žฅํ•  ์ˆ˜ ์žˆ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ํ•ด๋‹น ํด๋”์—๋Š” `pytorch_model.bin` ํŒŒ์ผ๊ณผ `config.json` ํŒŒ์ผ์ด ๋ชจ๋‘ ํฌํ•จ๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```python model.save_pretrained("/path/to/converted/checkpoint/folder") ``` **7. ์ˆœ๋ฐฉํ–ฅ ํŒจ์Šค ๊ตฌํ˜„ํ•˜๊ธฐ** ๐Ÿค— Transformers ๊ตฌํ˜„์— ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๊ฐ€์ค‘์น˜๋ฅผ ์ •ํ™•ํ•˜๊ฒŒ ๋กœ๋“œํ•œ ํ›„์—๋Š” ์ˆœ๋ฐฉํ–ฅ ํŒจ์Šค๊ฐ€ ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ๊ตฌํ˜„๋˜์—ˆ๋Š”์ง€ ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. [์›๋ณธ ์ €์žฅ์†Œ์— ์ต์ˆ™ํ•ด์ง€๊ธฐ](#3-4-run-a-pretrained-checkpoint-using-the-original-repository)์—์„œ ์ด๋ฏธ ์›๋ณธ ์ €์žฅ์†Œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์˜ ์ˆœ๋ฐฉํ–ฅ ํŒจ์Šค๋ฅผ ์‹คํ–‰ํ•˜๋Š” ์Šคํฌ๋ฆฝํŠธ๋ฅผ ๋งŒ๋“ค์—ˆ์Šต๋‹ˆ๋‹ค. ์ด์ œ ์›๋ณธ ๋Œ€์‹  ๐Ÿค— Transformers ๊ตฌํ˜„์„ ์‚ฌ์šฉํ•˜๋Š” ์œ ์‚ฌํ•œ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์ž‘์„ฑํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ž‘์„ฑ๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```python model = BrandNewBertModel.from_pretrained("/path/to/converted/checkpoint/folder") input_ids = [0, 4, 4, 3, 2, 4, 1, 7, 19] output = model(input_ids).last_hidden_states ``` ๐Ÿค— Transformers ๊ตฌํ˜„๊ณผ ์›๋ณธ ๋ชจ๋ธ ๊ตฌํ˜„์ด ์ฒ˜์Œ๋ถ€ํ„ฐ ์ •ํ™•ํžˆ ๋™์ผํ•œ ์ถœ๋ ฅ์„ ์ œ๊ณตํ•˜์ง€ ์•Š๊ฑฐ๋‚˜ ์ˆœ๋ฐฉํ–ฅ ํŒจ์Šค์—์„œ ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•  ๊ฐ€๋Šฅ์„ฑ์ด ๋งค์šฐ ๋†’์Šต๋‹ˆ๋‹ค. ์‹ค๋งํ•˜์ง€ ๋งˆ์„ธ์š”. ์˜ˆ์ƒ๋œ ์ผ์ž…๋‹ˆ๋‹ค! ๋จผ์ €, ์ˆœ๋ฐฉํ–ฅ ํŒจ์Šค์—์„œ ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•˜์ง€ ์•Š๋„๋ก ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ข…์ข… ์ž˜๋ชป๋œ ์ฐจ์›์ด ์‚ฌ์šฉ๋˜์–ด *์ฐจ์› ๋ถˆ์ผ์น˜* ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•˜๊ฑฐ๋‚˜ ์ž˜๋ชป๋œ ๋ฐ์ดํ„ฐ ์œ ํ˜• ๊ฐœ์ฒด๊ฐ€ ์‚ฌ์šฉ๋˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค๋ฉด `torch.long` ๋Œ€์‹ ์— `torch.float32`๊ฐ€ ์‚ฌ์šฉ๋œ ๊ฒฝ์šฐ์ž…๋‹ˆ๋‹ค. ํ•ด๊ฒฐํ•  ์ˆ˜ ์—†๋Š” ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•˜๋ฉด Hugging Face ํŒ€์— ๋„์›€์„ ์š”์ฒญํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ๐Ÿค— Transformers ๊ตฌํ˜„์ด ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ์ž‘๋™ํ•˜๋Š”์ง€ ํ™•์ธํ•˜๋Š” ๋งˆ์ง€๋ง‰ ๋‹จ๊ณ„๋Š” ์ถœ๋ ฅ์ด `1e-3`์˜ ์ •๋ฐ€๋„๋กœ ๋™์ผํ•œ์ง€ ํ™•์ธํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋จผ์ €, ์ถœ๋ ฅ ๋ชจ์–‘์ด ๋™์ผํ•˜๋„๋ก ๋ณด์žฅํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 
์ฆ‰, ๐Ÿค— Transformers ๊ตฌํ˜„ ์Šคํฌ๋ฆฝํŠธ์™€ ์›๋ณธ ๊ตฌํ˜„ ์‚ฌ์ด์—์„œ `outputs.shape`๋Š” ๋™์ผํ•œ ๊ฐ’์„ ๋ฐ˜ํ™˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ ๋‹ค์Œ์œผ๋กœ, ์ถœ๋ ฅ ๊ฐ’์ด ๋™์ผํ•˜๋„๋ก ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ์ƒˆ๋กœ์šด ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•  ๋•Œ ๊ฐ€์žฅ ์–ด๋ ค์šด ๋ถ€๋ถ„ ์ค‘ ํ•˜๋‚˜์ž…๋‹ˆ๋‹ค. ์ถœ๋ ฅ์ด ๋™์ผํ•˜์ง€ ์•Š์€ ์ผ๋ฐ˜์ ์ธ ์‹ค์ˆ˜ ์‚ฌ๋ก€๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: - ์ผ๋ถ€ ๋ ˆ์ด์–ด๊ฐ€ ์ถ”๊ฐ€๋˜์ง€ ์•Š์•˜์Šต๋‹ˆ๋‹ค. ์ฆ‰, *ํ™œ์„ฑํ™”* ๋ ˆ์ด์–ด๊ฐ€ ์ถ”๊ฐ€๋˜์ง€ ์•Š์•˜๊ฑฐ๋‚˜ ์ž”์ฐจ ์—ฐ๊ฒฐ์ด ๋น ์กŒ์Šต๋‹ˆ๋‹ค. - ๋‹จ์–ด ์ž„๋ฒ ๋”ฉ ํ–‰๋ ฌ์ด ์—ฐ๊ฒฐ๋˜์ง€ ์•Š์•˜์Šต๋‹ˆ๋‹ค. - ์ž˜๋ชป๋œ ์œ„์น˜ ์ž„๋ฒ ๋”ฉ์ด ์‚ฌ์šฉ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์›๋ณธ ๊ตฌํ˜„์—์„œ๋Š” ์˜คํ”„์…‹์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. - ์ˆœ๋ฐฉํ–ฅ ํŒจ์Šค ์ค‘์— Dropout์ด ์ ์šฉ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฅผ ์ˆ˜์ •ํ•˜๋ ค๋ฉด *model.training์ด False*์ธ์ง€ ํ™•์ธํ•˜๊ณ  ์ˆœ๋ฐฉํ–ฅ ํŒจ์Šค ์ค‘์— Dropout ๋ ˆ์ด์–ด๊ฐ€ ์ž˜๋ชป ํ™œ์„ฑํ™”๋˜์ง€ ์•Š๋„๋ก ํ•˜์„ธ์š”. ์ฆ‰, [PyTorch์˜ ๊ธฐ๋Šฅ์  Dropout](https://pytorch.org/docs/stable/nn.functional.html?highlight=dropout#torch.nn.functional.dropout)์— *self.training*์„ ์ „๋‹ฌํ•˜์„ธ์š”. ๋ฌธ์ œ๋ฅผ ํ•ด๊ฒฐํ•˜๋Š” ๊ฐ€์žฅ ์ข‹์€ ๋ฐฉ๋ฒ•์€ ์ผ๋ฐ˜์ ์œผ๋กœ ์›๋ณธ ๊ตฌํ˜„๊ณผ ๐Ÿค— Transformers ๊ตฌํ˜„์˜ ์ˆœ๋ฐฉํ–ฅ ํŒจ์Šค๋ฅผ ๋‚˜๋ž€ํžˆ ๋†“๊ณ  ์ฐจ์ด์ ์ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด์ƒ์ ์œผ๋กœ๋Š” ์ˆœ๋ฐฉํ–ฅ ํŒจ์Šค์˜ ์ค‘๊ฐ„ ์ถœ๋ ฅ์„ ๋””๋ฒ„๊ทธ/์ถœ๋ ฅํ•˜์—ฌ ์›๋ณธ ๊ตฌํ˜„๊ณผ ๐Ÿค— Transformers ๊ตฌํ˜„์˜ ์ •ํ™•ํ•œ ์œ„์น˜๋ฅผ ์ฐพ์„ ์ˆ˜ ์žˆ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋จผ์ €, ๋‘ ์Šคํฌ๋ฆฝํŠธ์˜ ํ•˜๋“œ์ฝ”๋”ฉ๋œ `input_ids`๊ฐ€ ๋™์ผํ•œ์ง€ ํ™•์ธํ•˜์„ธ์š”. ๋‹ค์Œ์œผ๋กœ, `input_ids`์˜ ์ฒซ ๋ฒˆ์งธ ๋ณ€ํ™˜์˜ ์ถœ๋ ฅ(์ผ๋ฐ˜์ ์œผ๋กœ ๋‹จ์–ด ์ž„๋ฒ ๋”ฉ)์ด ๋™์ผํ•œ์ง€ ํ™•์ธํ•˜์„ธ์š”. ๊ทธ๋Ÿฐ ๋‹ค์Œ ๋„คํŠธ์›Œํฌ์˜ ๊ฐ€์žฅ ๋งˆ์ง€๋ง‰ ๋ ˆ์ด์–ด๊นŒ์ง€ ์ง„ํ–‰ํ•ด๋ณด์„ธ์š”. ์–ด๋Š ์‹œ์ ์—์„œ ๋‘ ๊ตฌํ˜„ ์‚ฌ์ด์— ์ฐจ์ด๊ฐ€ ์žˆ๋Š” ๊ฒƒ์„ ์•Œ๊ฒŒ ๋˜๋Š”๋ฐ, ์ด๋Š” ๐Ÿค— Transformers ๊ตฌํ˜„์˜ ๋ฒ„๊ทธ ์œ„์น˜๋ฅผ ๊ฐ€๋ฆฌํ‚ฌ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ €ํฌ ๊ฒฝํ—˜์ƒ์œผ๋กœ๋Š” ์›๋ณธ ๊ตฌํ˜„๊ณผ ๐Ÿค— Transformers ๊ตฌํ˜„ ๋ชจ๋‘์—์„œ ๋™์ผํ•œ ์œ„์น˜์— ๋งŽ์€ ์ถœ๋ ฅ ๋ฌธ์„ ์ถ”๊ฐ€ํ•˜๊ณ  ์ด๋“ค์˜ ์ค‘๊ฐ„ ํ‘œํ˜„์— ๋Œ€ํ•ด ๋™์ผํ•œ ๊ฐ’์„ ๋ณด์ด๋Š” ์ถœ๋ ฅ ๋ฌธ์„ ์—ฐ์†์ ์œผ๋กœ ์ œ๊ฑฐํ•˜๋Š” ๊ฒƒ์ด ๊ฐ„๋‹จํ•˜๊ณ  ํšจ๊ณผ์ ์ธ ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค. `torch.allclose(original_output, output, atol=1e-3)`๋กœ ์ถœ๋ ฅ์„ ํ™•์ธํ•˜์—ฌ ๋‘ ๊ตฌํ˜„์ด ๋™์ผํ•œ ์ถœ๋ ฅ์„ ํ•˜๋Š” ๊ฒƒ์„ ํ™•์‹ ํ•œ๋‹ค๋ฉด, ๊ฐ€์žฅ ์–ด๋ ค์šด ๋ถ€๋ถ„์€ ๋๋‚ฌ์Šต๋‹ˆ๋‹ค! ์ถ•ํ•˜๋“œ๋ฆฝ๋‹ˆ๋‹ค. ๋‚จ์€ ์ž‘์—…์€ ์‰ฌ์šด ์ผ์ด ๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค ๐Ÿ˜Š. **8. ํ•„์š”ํ•œ ๋ชจ๋“  ๋ชจ๋ธ ํ…Œ์ŠคํŠธ ์ถ”๊ฐ€ํ•˜๊ธฐ** ์ด ์‹œ์ ์—์„œ ์ƒˆ๋กœ์šด ๋ชจ๋ธ์„ ์„ฑ๊ณต์ ์œผ๋กœ ์ถ”๊ฐ€ํ–ˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ํ•ด๋‹น ๋ชจ๋ธ์ด ์š”๊ตฌ๋˜๋Š” ๋””์ž์ธ์— ์™„์ „ํžˆ ๋ถ€ํ•ฉํ•˜์ง€ ์•Š์„ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๐Ÿค— Transformers์™€ ์™„๋ฒฝํ•˜๊ฒŒ ํ˜ธํ™˜๋˜๋Š” ๊ตฌํ˜„์ธ์ง€ ํ™•์ธํ•˜๊ธฐ ์œ„ํ•ด ๋ชจ๋“  ์ผ๋ฐ˜ ํ…Œ์ŠคํŠธ๋ฅผ ํ†ต๊ณผํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. Cookiecutter๋Š” ์•„๋งˆ๋„ ๋ชจ๋ธ์„ ์œ„ํ•œ ํ…Œ์ŠคํŠธ ํŒŒ์ผ์„ ์ž๋™์œผ๋กœ ์ถ”๊ฐ€ํ–ˆ์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์•„๋งˆ๋„ `tests/models/brand_new_bert/test_modeling_brand_new_bert.py`์™€ ๊ฐ™์€ ๊ฒฝ๋กœ์— ์œ„์น˜ํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด ํ…Œ์ŠคํŠธ ํŒŒ์ผ์„ ์‹คํ–‰ํ•˜์—ฌ ์ผ๋ฐ˜ ํ…Œ์ŠคํŠธ๊ฐ€ ๋ชจ๋‘ ํ†ต๊ณผํ•˜๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”. ```bash pytest tests/models/brand_new_bert/test_modeling_brand_new_bert.py ``` ๋ชจ๋“  ์ผ๋ฐ˜ ํ…Œ์ŠคํŠธ๋ฅผ ์ˆ˜์ •ํ•œ ํ›„, ์ด์ œ ์ˆ˜ํ–‰ํ•œ ์ž‘์—…์„ ์ถฉ๋ถ„ํžˆ ํ…Œ์ŠคํŠธํ•˜์—ฌ ๋‹ค์Œ ์‚ฌํ•ญ์„ ๋ณด์žฅํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 
- a) ์ปค๋ฎค๋‹ˆํ‹ฐ๊ฐ€ *brand_new_bert*์˜ ํŠน์ • ํ…Œ์ŠคํŠธ๋ฅผ ์‚ดํŽด๋ด„์œผ๋กœ์จ ์ž‘์—…์„ ์‰ฝ๊ฒŒ ์ดํ•ดํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•จ - b) ๋ชจ๋ธ์— ๋Œ€ํ•œ ํ–ฅํ›„ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์ด ๋ชจ๋ธ์˜ ์ค‘์š”ํ•œ ๊ธฐ๋Šฅ์„ ์†์ƒ์‹œํ‚ค์ง€ ์•Š๋„๋ก ํ•จ ๋จผ์ € ํ†ตํ•ฉ ํ…Œ์ŠคํŠธ๋ฅผ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ํ†ตํ•ฉ ํ…Œ์ŠคํŠธ๋Š” ์ด์ „์— ๋ชจ๋ธ์„ ๐Ÿค— Transformers๋กœ ๊ตฌํ˜„ํ•˜๊ธฐ ์œ„ํ•ด ์‚ฌ์šฉํ•œ ๋””๋ฒ„๊น… ์Šคํฌ๋ฆฝํŠธ์™€ ๋™์ผํ•œ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค. Cookiecutter์— ์ด๋ฏธ ์ด๋Ÿฌํ•œ ๋ชจ๋ธ ํ…Œ์ŠคํŠธ์˜ ํ…œํ”Œ๋ฆฟ์ธ `BrandNewBertModelIntegrationTests`๊ฐ€ ์ถ”๊ฐ€๋˜์–ด ์žˆ์œผ๋ฉฐ, ์—ฌ๋Ÿฌ๋ถ„์ด ์ž‘์„ฑํ•ด์•ผ ํ•  ๋‚ด์šฉ์œผ๋กœ๋งŒ ์ฑ„์›Œ ๋„ฃ์œผ๋ฉด ๋ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ํ…Œ์ŠคํŠธ๊ฐ€ ํ†ต๊ณผํ•˜๋Š”์ง€ ํ™•์ธํ•˜๋ ค๋ฉด ๋‹ค์Œ์„ ์‹คํ–‰ํ•˜์„ธ์š”. ```bash RUN_SLOW=1 pytest -sv tests/models/brand_new_bert/test_modeling_brand_new_bert.py::BrandNewBertModelIntegrationTests ``` <Tip> Windows๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ `RUN_SLOW=1`์„ `SET RUN_SLOW=1`๋กœ ๋ฐ”๊ฟ”์•ผ ํ•ฉ๋‹ˆ๋‹ค. </Tip> ๋‘˜์งธ๋กœ, *brand_new_bert*์— ํŠนํ™”๋œ ๋ชจ๋“  ๊ธฐ๋Šฅ๋„ ๋ณ„๋„์˜ ํ…Œ์ŠคํŠธ์—์„œ ์ถ”๊ฐ€๋กœ ํ…Œ์ŠคํŠธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด ๋ถ€๋ถ„์€ ์ข…์ข… ์žŠํžˆ๋Š”๋ฐ, ๋‘ ๊ฐ€์ง€ ์ธก๋ฉด์—์„œ ๊ต‰์žฅํžˆ ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. - *brand_new_bert*์˜ ํŠน์ˆ˜ ๊ธฐ๋Šฅ์ด ์–ด๋–ป๊ฒŒ ์ž‘๋™ํ•ด์•ผ ํ•˜๋Š”์ง€ ๋ณด์—ฌ์คŒ์œผ๋กœ์จ ์ปค๋ฎค๋‹ˆํ‹ฐ์—๊ฒŒ ๋ชจ๋ธ ์ถ”๊ฐ€ ๊ณผ์ •์—์„œ ์Šต๋“ํ•œ ์ง€์‹์„ ์ „๋‹ฌํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋ฉ๋‹ˆ๋‹ค. - ํ–ฅํ›„ ๊ธฐ์—ฌ์ž๋Š” ์ด๋Ÿฌํ•œ ํŠน์ˆ˜ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜์—ฌ ๋ชจ๋ธ์— ๋Œ€ํ•œ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์„ ๋น ๋ฅด๊ฒŒ ํ…Œ์ŠคํŠธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. **9. ํ† ํฌ๋‚˜์ด์ € ๊ตฌํ˜„ํ•˜๊ธฐ** ๋‹ค์Œ์œผ๋กœ, *brand_new_bert*์˜ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ณดํ†ต ํ† ํฌ๋‚˜์ด์ €๋Š” ๐Ÿค— Transformers์˜ ๊ธฐ์กด ํ† ํฌ๋‚˜์ด์ €์™€ ๋™์ผํ•˜๊ฑฐ๋‚˜ ๋งค์šฐ ์œ ์‚ฌํ•ฉ๋‹ˆ๋‹ค. ํ† ํฌ๋‚˜์ด์ €๊ฐ€ ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ์ž‘๋™ํ•˜๋Š”์ง€ ํ™•์ธํ•˜๊ธฐ ์œ„ํ•ด ๋จผ์ € ์›๋ณธ ๋ฆฌํฌ์ง€ํ† ๋ฆฌ์—์„œ ๋ฌธ์ž์—ด์„ ์ž…๋ ฅํ•˜๊ณ  `input_ids`๋ฅผ ๋ฐ˜ํ™˜ํ•˜๋Š” ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์ƒ์„ฑํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ๊ณผ ๊ฐ™์€ ์œ ์‚ฌํ•œ ์Šคํฌ๋ฆฝํŠธ์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค (์˜์‚ฌ ์ฝ”๋“œ๋กœ ์ž‘์„ฑ): ```python input_str = "This is a long example input string containing special characters .$?-, numbers 2872 234 12 and words." model = BrandNewBertModel.load_pretrained_checkpoint("/path/to/checkpoint/") input_ids = model.tokenize(input_str) ``` ์›๋ณธ ๋ฆฌํฌ์ง€ํ† ๋ฆฌ๋ฅผ ์ž์„ธํžˆ ์‚ดํŽด๋ณด๊ณ  ์˜ฌ๋ฐ”๋ฅธ ํ† ํฌ๋‚˜์ด์ € ํ•จ์ˆ˜๋ฅผ ์ฐพ๊ฑฐ๋‚˜, ๋ณต์ œ๋ณธ์—์„œ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์„ ์ ์šฉํ•˜์—ฌ `input_ids`๋งŒ ์ถœ๋ ฅํ•˜๋„๋ก ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์›๋ณธ ๋ฆฌํฌ์ง€ํ† ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ธฐ๋Šฅ์ ์ธ ํ† ํฐํ™” ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์ž‘์„ฑํ•œ ํ›„, ๐Ÿค— Transformers์˜ ์œ ์‚ฌํ•œ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์ƒ์„ฑํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ž‘์„ฑ๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```python from transformers import BrandNewBertTokenizer input_str = "This is a long example input string containing special characters .$?-, numbers 2872 234 12 and words." tokenizer = BrandNewBertTokenizer.from_pretrained("/path/to/tokenizer/folder/") input_ids = tokenizer(input_str).input_ids ``` ๋‘ ๊ฐœ์˜ `input_ids`๊ฐ€ ๋™์ผํ•œ ๊ฐ’์„ ๋ฐ˜ํ™˜ํ•  ๋•Œ, ๋งˆ์ง€๋ง‰ ๋‹จ๊ณ„๋กœ ํ† ํฌ๋‚˜์ด์ € ํ…Œ์ŠคํŠธ ํŒŒ์ผ๋„ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. *brand_new_bert*์˜ ๋ชจ๋ธ๋ง ํ…Œ์ŠคํŠธ ํŒŒ์ผ๊ณผ ์œ ์‚ฌํ•˜๊ฒŒ, *brand_new_bert*์˜ ํ† ํฌ๋‚˜์ด์ œ์ด์…˜ ํ…Œ์ŠคํŠธ ํŒŒ์ผ์—๋Š” ๋ช‡ ๊ฐ€์ง€ ํ•˜๋“œ์ฝ”๋”ฉ๋œ ํ†ตํ•ฉ ํ…Œ์ŠคํŠธ๊ฐ€ ํฌํ•จ๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. **10. ์ข…๋‹จ ๊ฐ„ ํ†ตํ•ฉ ํ…Œ์ŠคํŠธ ์‹คํ–‰** ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์ถ”๊ฐ€ํ•œ ํ›„์—๋Š” ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ช‡ ๊ฐ€์ง€ ์ข…๋‹จ ๊ฐ„ ํ†ตํ•ฉ ํ…Œ์ŠคํŠธ๋ฅผ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 
`tests/models/brand_new_bert/test_modeling_brand_new_bert.py`์— ์ถ”๊ฐ€ํ•ด์ฃผ์„ธ์š”. ์ด๋Ÿฌํ•œ ํ…Œ์ŠคํŠธ๋Š” ๐Ÿค— Transformers ๊ตฌํ˜„์ด ์˜ˆ์ƒ๋Œ€๋กœ ์ž‘๋™ํ•˜๋Š”์ง€๋ฅผ ์˜๋ฏธ ์žˆ๋Š” text-to-text ์˜ˆ์‹œ๋กœ ๋ณด์—ฌ์ค˜์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ ์˜ˆ์‹œ๋กœ๋Š” *์˜ˆ๋ฅผ ๋“ค์–ด* source-to-target ๋ฒˆ์—ญ ์Œ, article-to-summary ์Œ, question-to-answer ์Œ ๋“ฑ์ด ํฌํ•จ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ถˆ๋Ÿฌ์˜จ ์ฒดํฌํฌ์ธํŠธ ์ค‘ ์–ด๋Š ๊ฒƒ๋„ ๋‹ค์šด์ŠคํŠธ๋ฆผ ์ž‘์—…์—์„œ ๋ฏธ์„ธ ์กฐ์ •๋˜์ง€ ์•Š์•˜๋‹ค๋ฉด, ๋ชจ๋ธ ํ…Œ์ŠคํŠธ๋งŒ์œผ๋กœ ์ถฉ๋ถ„ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์ด ์™„์ „ํžˆ ๊ธฐ๋Šฅ์„ ๊ฐ–์ถ”์—ˆ๋Š”์ง€ ํ™•์ธํ•˜๊ธฐ ์œ„ํ•ด ๋งˆ์ง€๋ง‰ ๋‹จ๊ณ„๋กœ GPU์—์„œ ๋ชจ๋“  ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ๋ชจ๋ธ์˜ ๋‚ด๋ถ€ ํ…์„œ์˜ ์ผ๋ถ€์— `.to(self.device)` ๋ฌธ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒƒ์„ ์žŠ์—ˆ์„ ์ˆ˜ ์žˆ์œผ๋ฉฐ, ์ด ๊ฒฝ์šฐ ํ…Œ์ŠคํŠธ์—์„œ ์˜ค๋ฅ˜๋กœ ํ‘œ์‹œ๋ฉ๋‹ˆ๋‹ค. GPU์— ์•ก์„ธ์Šคํ•  ์ˆ˜ ์—†๋Š” ๊ฒฝ์šฐ, Hugging Face ํŒ€์ด ํ…Œ์ŠคํŠธ๋ฅผ ๋Œ€์‹  ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. **11. ๊ธฐ์ˆ ๋ฌธ์„œ ์ถ”๊ฐ€** ์ด์ œ *brand_new_bert*์— ํ•„์š”ํ•œ ๋ชจ๋“  ๊ธฐ๋Šฅ์ด ์ถ”๊ฐ€๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ๊ฑฐ์˜ ๋๋‚ฌ์Šต๋‹ˆ๋‹ค! ์ถ”๊ฐ€ํ•ด์•ผ ํ•  ๊ฒƒ์€ ๋ฉ‹์ง„ ๊ธฐ์ˆ ๋ฌธ์„œ๊ณผ ๊ธฐ์ˆ ๋ฌธ์„œ ํŽ˜์ด์ง€์ž…๋‹ˆ๋‹ค. Cookiecutter๊ฐ€ `docs/source/model_doc/brand_new_bert.md`๋ผ๋Š” ํ…œํ”Œ๋ฆฟ ํŒŒ์ผ์„ ์ถ”๊ฐ€ํ•ด์คฌ์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด ํŽ˜์ด์ง€๋ฅผ ์‚ฌ์šฉํ•˜๊ธฐ ์ „์— ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋Š” ์‚ฌ์šฉ์ž๋“ค์€ ์ผ๋ฐ˜์ ์œผ๋กœ ์ด ํŽ˜์ด์ง€๋ฅผ ๋จผ์ € ํ™•์ธํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ๋ฌธ์„œ๋Š” ์ดํ•ดํ•˜๊ธฐ ์‰ฝ๊ณ  ๊ฐ„๊ฒฐํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ณด์—ฌ์ฃผ๊ธฐ ์œ„ํ•ด *ํŒ*์„ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒƒ์ด ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๋งค์šฐ ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. ๋…์ŠคํŠธ๋ง์— ๊ด€๋ จํ•˜์—ฌ Hugging Face ํŒ€์— ๋ฌธ์˜ํ•˜๋Š” ๊ฒƒ์„ ์ฃผ์ €ํ•˜์ง€ ๋งˆ์„ธ์š”. ๋‹ค์Œ์œผ๋กœ, `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py`์— ์ถ”๊ฐ€๋œ ๋…์ŠคํŠธ๋ง์ด ์˜ฌ๋ฐ”๋ฅด๋ฉฐ ํ•„์š”ํ•œ ๋ชจ๋“  ์ž…๋ ฅ ๋ฐ ์ถœ๋ ฅ์„ ํฌํ•จํ•˜๋„๋ก ํ™•์ธํ•˜์„ธ์š”. [์—ฌ๊ธฐ](writing-documentation)์—์„œ ์šฐ๋ฆฌ์˜ ๋ฌธ์„œ ์ž‘์„ฑ ๊ฐ€์ด๋“œ์™€ ๋…์ŠคํŠธ๋ง ํ˜•์‹์— ๋Œ€ํ•œ ์ƒ์„ธ ๊ฐ€์ด๋“œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฌธ์„œ๋Š” ์ผ๋ฐ˜์ ์œผ๋กœ ์ปค๋ฎค๋‹ˆํ‹ฐ์™€ ๋ชจ๋ธ์˜ ์ฒซ ๋ฒˆ์งธ ์ ‘์ ์ด๊ธฐ ๋•Œ๋ฌธ์—, ๋ฌธ์„œ๋Š” ์ ์–ด๋„ ์ฝ”๋“œ๋งŒํผ์˜ ์ฃผ์˜๋ฅผ ๊ธฐ์šธ์—ฌ์•ผ ํ•ฉ๋‹ˆ๋‹ค. **์ฝ”๋“œ ๋ฆฌํŒฉํ† ๋ง** ์ข‹์•„์š”, ์ด์ œ *brand_new_bert*๋ฅผ ์œ„ํ•œ ๋ชจ๋“  ํ•„์š”ํ•œ ์ฝ”๋“œ๋ฅผ ์ถ”๊ฐ€ํ–ˆ์Šต๋‹ˆ๋‹ค. ์ด ์‹œ์ ์—์„œ ๋‹ค์Œ์„ ์‹คํ–‰ํ•˜์—ฌ ์ž ์žฌ์ ์œผ๋กœ ์ž˜๋ชป๋œ ์ฝ”๋“œ ์Šคํƒ€์ผ์„ ์ˆ˜์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ๊ทธ๋ฆฌ๊ณ  ์ฝ”๋”ฉ ์Šคํƒ€์ผ์ด ํ’ˆ์งˆ ์ ๊ฒ€์„ ํ†ต๊ณผํ•˜๋Š”์ง€ ํ™•์ธํ•˜๊ธฐ ์œ„ํ•ด ๋‹ค์Œ์„ ์‹คํ–‰ํ•˜๊ณ  ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```bash make style ``` ๐Ÿค— Transformers์—๋Š” ์—ฌ์ „ํžˆ ์‹คํŒจํ•  ์ˆ˜ ์žˆ๋Š” ๋ช‡ ๊ฐ€์ง€ ๋งค์šฐ ์—„๊ฒฉํ•œ ๋””์ž์ธ ํ…Œ์ŠคํŠธ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Š” ๋…์ŠคํŠธ๋ง์— ๋ˆ„๋ฝ๋œ ์ •๋ณด๋‚˜ ์ž˜๋ชป๋œ ๋ช…๋ช… ๋•Œ๋ฌธ์— ์ข…์ข… ๋ฐœ์ƒํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ ๋ง‰ํžˆ๋ฉด Hugging Face ํŒ€์ด ๋„์›€์„ ์ค„ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ```bash make quality ``` ๋งˆ์ง€๋ง‰์œผ๋กœ, ์ฝ”๋“œ๊ฐ€ ์ •ํ™•ํžˆ ์ž‘๋™ํ•˜๋Š” ๊ฒƒ์„ ํ™•์ธํ•œ ํ›„์—๋Š” ํ•ญ์ƒ ์ฝ”๋“œ๋ฅผ ๋ฆฌํŒฉํ† ๋งํ•˜๋Š” ๊ฒƒ์ด ์ข‹์€ ์ƒ๊ฐ์ž…๋‹ˆ๋‹ค. ๋ชจ๋“  ํ…Œ์ŠคํŠธ๊ฐ€ ํ†ต๊ณผ๋œ ์ง€๊ธˆ์€ ์ถ”๊ฐ€ํ•œ ์ฝ”๋“œ๋ฅผ ๋‹ค์‹œ ๊ฒ€ํ† ํ•˜๊ณ  ๋ฆฌํŒฉํ† ๋งํ•˜๋Š” ์ข‹์€ ์‹œ๊ธฐ์ž…๋‹ˆ๋‹ค. ์ด์ œ ์ฝ”๋”ฉ ๋ถ€๋ถ„์„ ์™„๋ฃŒํ–ˆ์Šต๋‹ˆ๋‹ค. ์ถ•ํ•˜ํ•ฉ๋‹ˆ๋‹ค! ๐ŸŽ‰ ๋ฉ‹์ ธ์š”! ๐Ÿ˜Ž **12. ๋ชจ๋ธ์„ ๋ชจ๋ธ ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œํ•˜์„ธ์š”** ์ด ๋งˆ์ง€๋ง‰ ํŒŒํŠธ์—์„œ๋Š” ๋ชจ๋“  ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋ณ€ํ™˜ํ•˜์—ฌ ๋ชจ๋ธ ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œํ•˜๊ณ  ๊ฐ ์—…๋กœ๋“œ๋œ ๋ชจ๋ธ ์ฒดํฌํฌ์ธํŠธ์— ๋Œ€ํ•œ ๋ชจ๋ธ ์นด๋“œ๋ฅผ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. [Model sharing and uploading Page](model_sharing)๋ฅผ ์ฝ๊ณ  ํ—ˆ๋ธŒ ๊ธฐ๋Šฅ์— ์ต์ˆ™ํ•ด์ง€์„ธ์š”. 
*brand_new_bert*์˜ ์ €์ž ์กฐ์ง ์•„๋ž˜์— ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•  ์ˆ˜ ์žˆ๋Š” ํ•„์š”ํ•œ ์•ก์„ธ์Šค ๊ถŒํ•œ์„ ์–ป๊ธฐ ์œ„ํ•ด Hugging Face ํŒ€๊ณผ ํ˜‘์—…ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. `transformers`์˜ ๋ชจ๋“  ๋ชจ๋ธ์— ์žˆ๋Š” `push_to_hub` ๋ฉ”์„œ๋“œ๋Š” ์ฒดํฌํฌ์ธํŠธ๋ฅผ ํ—ˆ๋ธŒ์— ๋น ๋ฅด๊ณ  ํšจ์œจ์ ์œผ๋กœ ์—…๋กœ๋“œํ•˜๋Š” ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค. ์•„๋ž˜์— ์ž‘์€ ์ฝ”๋“œ ์กฐ๊ฐ์ด ๋ถ™์—ฌ์ ธ ์žˆ์Šต๋‹ˆ๋‹ค: ๊ฐ ์ฒดํฌํฌ์ธํŠธ์— ์ ํ•ฉํ•œ ๋ชจ๋ธ ์นด๋“œ๋ฅผ ๋งŒ๋“œ๋Š” ๋ฐ ์‹œ๊ฐ„์„ ํ• ์• ํ•˜๋Š” ๊ฒƒ์€ ๊ฐ€์น˜๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ชจ๋ธ ์นด๋“œ๋Š” ์ฒดํฌํฌ์ธํŠธ์˜ ํŠน์„ฑ์„ ๊ฐ•์กฐํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. *์˜ˆ๋ฅผ ๋“ค์–ด* ์ด ์ฒดํฌํฌ์ธํŠธ๋Š” ์–ด๋–ค ๋ฐ์ดํ„ฐ์…‹์—์„œ ์‚ฌ์ „ ํ›ˆ๋ จ/์„ธ๋ถ€ ํ›ˆ๋ จ๋˜์—ˆ๋Š”์ง€? ์ด ๋ชจ๋ธ์€ ์–ด๋–ค ํ•˜์œ„ ์ž‘์—…์—์„œ ์‚ฌ์šฉํ•ด์•ผ ํ•˜๋Š”์ง€? ๊ทธ๋ฆฌ๊ณ  ๋ชจ๋ธ์„ ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ๋ช‡ ๊ฐ€์ง€ ์ฝ”๋“œ๋„ ํฌํ•จํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```python brand_new_bert.push_to_hub("brand_new_bert") # Uncomment the following line to push to an organization. # brand_new_bert.push_to_hub("<organization>/brand_new_bert") ``` **13. (์„ ํƒ ์‚ฌํ•ญ) ๋…ธํŠธ๋ถ ์ถ”๊ฐ€** *brand_new_bert*๋ฅผ ๋‹ค์šด์ŠคํŠธ๋ฆผ ์ž‘์—…์—์„œ ์ถ”๋ก  ๋˜๋Š” ๋ฏธ์„ธ ์กฐ์ •์— ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์ž์„ธํžˆ ๋ณด์—ฌ์ฃผ๋Š” ๋…ธํŠธ๋ถ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒƒ์ด ๋งค์šฐ ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด๊ฒƒ์€ PR์„ ๋ณ‘ํ•ฉํ•˜๋Š” ๋ฐ ํ•„์ˆ˜์ ์ด์ง€๋Š” ์•Š์ง€๋งŒ ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๋งค์šฐ ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. **14. ์™„๋ฃŒ๋œ PR ์ œ์ถœ** ์ด์ œ ํ”„๋กœ๊ทธ๋ž˜๋ฐ์„ ๋งˆ์ณค์œผ๋ฉฐ, ๋งˆ์ง€๋ง‰ ๋‹จ๊ณ„๋กœ PR์„ ๋ฉ”์ธ ๋ธŒ๋žœ์น˜์— ๋ณ‘ํ•ฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ณดํ†ต Hugging Face ํŒ€์€ ์ด๋ฏธ ์—ฌ๊ธฐ๊นŒ์ง€ ๋„์›€์„ ์ฃผ์—ˆ์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ PR์— ๋ฉ‹์ง„ ์„ค๋ช…์„ ์ถ”๊ฐ€ํ•˜๊ณ  ๋ฆฌ๋ทฐ์–ด์—๊ฒŒ ํŠน์ • ๋””์ž์ธ ์„ ํƒ ์‚ฌํ•ญ์„ ๊ฐ•์กฐํ•˜๋ ค๋ฉด ์™„๋ฃŒ๋œ PR์— ์•ฝ๊ฐ„์˜ ์„ค๋ช…์„ ์ถ”๊ฐ€ํ•˜๋Š” ์‹œ๊ฐ„์„ ํ• ์• ํ•˜๋Š” ๊ฒƒ์ด ๊ฐ€์น˜๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ### ์ž‘์—…๋ฌผ์„ ๊ณต์œ ํ•˜์„ธ์š”!! [[share-your-work]] ์ด์ œ ์ปค๋ฎค๋‹ˆํ‹ฐ์—์„œ ์ž‘์—…๋ฌผ์„ ์ธ์ •๋ฐ›์„ ์‹œ๊ฐ„์ž…๋‹ˆ๋‹ค! ๋ชจ๋ธ ์ถ”๊ฐ€ ์ž‘์—…์„ ์™„๋ฃŒํ•˜๋Š” ๊ฒƒ์€ Transformers์™€ ์ „์ฒด NLP ์ปค๋ฎค๋‹ˆํ‹ฐ์— ํฐ ๊ธฐ์—ฌ์ž…๋‹ˆ๋‹ค. ๋‹น์‹ ์˜ ์ฝ”๋“œ์™€ ์ด์‹๋œ ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์€ ์ˆ˜๋ฐฑ, ์‹ฌ์ง€์–ด ์ˆ˜์ฒœ ๋ช…์˜ ๊ฐœ๋ฐœ์ž์™€ ์—ฐ๊ตฌ์›์— ์˜ํ•ด ํ™•์‹คํžˆ ์‚ฌ์šฉ๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋‹น์‹ ์˜ ์ž‘์—…์— ์ž๋ž‘์Šค๋Ÿฌ์›Œํ•ด์•ผ ํ•˜๋ฉฐ ์ด๋ฅผ ์ปค๋ฎค๋‹ˆํ‹ฐ์™€ ๊ณต์œ ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. **๋‹น์‹ ์€ ์ปค๋ฎค๋‹ˆํ‹ฐ ๋‚ด ๋ชจ๋“  ์‚ฌ๋žŒ๋“ค์—๊ฒŒ ๋งค์šฐ ์‰ฝ๊ฒŒ ์ ‘๊ทผ ๊ฐ€๋Šฅํ•œ ๋˜ ๋‹ค๋ฅธ ๋ชจ๋ธ์„ ๋งŒ๋“ค์—ˆ์Šต๋‹ˆ๋‹ค! ๐Ÿคฏ**
mavonic_private_repos/transformers/docs/source/ko/installation.md
<!--- Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์„ค์น˜๋ฐฉ๋ฒ•[[installation]] ๐Ÿค— Transformers๋ฅผ ์‚ฌ์šฉ ์ค‘์ธ ๋”ฅ๋Ÿฌ๋‹ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์— ๋งž์ถฐ ์„ค์น˜ํ•˜๊ณ , ์บ์‹œ๋ฅผ ๊ตฌ์„ฑํ•˜๊ฑฐ๋‚˜ ์„ ํƒ์ ์œผ๋กœ ์˜คํ”„๋ผ์ธ์—์„œ๋„ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ๋„๋ก ๐Ÿค— Transformers๋ฅผ ์„ค์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ฐฐ์šฐ๊ฒ ์Šต๋‹ˆ๋‹ค. ๐Ÿค— Transformers๋Š” Python 3.6+, PyTorch 1.1.0+, TensorFlow 2.0+ ๋ฐ Flax์—์„œ ํ…Œ์ŠคํŠธ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ๋”ฅ๋Ÿฌ๋‹ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์„ค์น˜ํ•˜๋ ค๋ฉด ์•„๋ž˜ ๋งํฌ๋œ ์ €๋งˆ๋‹ค์˜ ๊ณต์‹ ์‚ฌ์ดํŠธ๋ฅผ ์ฐธ๊ณ ํ•ด์ฃผ์„ธ์š”. * [PyTorch](https://pytorch.org/get-started/locally/) ์„ค์น˜ํ•˜๊ธฐ * [TensorFlow 2.0](https://www.tensorflow.org/install/pip) ์„ค์น˜ํ•˜๊ธฐ * [Flax](https://flax.readthedocs.io/en/latest/) ์„ค์น˜ํ•˜๊ธฐ ## pip์œผ๋กœ ์„ค์น˜ํ•˜๊ธฐ[[install-with-pip]] ๐Ÿค— Transformers๋ฅผ [๊ฐ€์ƒ ํ™˜๊ฒฝ](https://docs.python.org/3/library/venv.html)์— ์„ค์น˜ํ•˜๋Š” ๊ฒƒ์„ ์ถ”์ฒœ๋“œ๋ฆฝ๋‹ˆ๋‹ค. Python ๊ฐ€์ƒ ํ™˜๊ฒฝ์— ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด, ์ด [๊ฐ€์ด๋“œ](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/)๋ฅผ ์ฐธ๊ณ ํ•˜์„ธ์š”. ๊ฐ€์ƒ ํ™˜๊ฒฝ์„ ์‚ฌ์šฉํ•˜๋ฉด ์„œ๋กœ ๋‹ค๋ฅธ ํ”„๋กœ์ ํŠธ๋“ค์„ ๋ณด๋‹ค ์‰ฝ๊ฒŒ ๊ด€๋ฆฌํ•  ์ˆ˜ ์žˆ๊ณ , ์˜์กด์„ฑ ๊ฐ„์˜ ํ˜ธํ™˜์„ฑ ๋ฌธ์ œ๋ฅผ ๋ฐฉ์ง€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋จผ์ € ํ”„๋กœ์ ํŠธ ๋””๋ ‰ํ† ๋ฆฌ์—์„œ ๊ฐ€์ƒ ํ™˜๊ฒฝ์„ ๋งŒ๋“ค์–ด ์ค๋‹ˆ๋‹ค. ```bash python -m venv .env ``` ๊ฐ€์ƒ ํ™˜๊ฒฝ์„ ํ™œ์„ฑํ™”ํ•ด์ฃผ์„ธ์š”. Linux๋‚˜ MacOS์˜ ๊ฒฝ์šฐ: ```bash source .env/bin/activate ``` Windows์˜ ๊ฒฝ์šฐ: ```bash .env/Scripts/activate ``` ์ด์ œ ๐Ÿค— Transformers๋ฅผ ์„ค์น˜ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ ๋ช…๋ น์„ ์ž…๋ ฅํ•ด์ฃผ์„ธ์š”. ```bash pip install transformers ``` CPU๋งŒ ์จ๋„ ๋œ๋‹ค๋ฉด, ๐Ÿค— Transformers์™€ ๋”ฅ๋Ÿฌ๋‹ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ๋‹จ 1์ค„๋กœ ์„ค์น˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๐Ÿค— Transformers์™€ PyTorch์˜ ๊ฒฝ์šฐ: ```bash pip install transformers[torch] ``` ๐Ÿค— Transformers์™€ TensorFlow 2.0์˜ ๊ฒฝ์šฐ: ```bash pip install transformers[tf-cpu] ``` ๐Ÿค— Transformers์™€ Flax์˜ ๊ฒฝ์šฐ: ```bash pip install transformers[flax] ``` ๋งˆ์ง€๋ง‰์œผ๋กœ ๐Ÿค— Transformers๊ฐ€ ์ œ๋Œ€๋กœ ์„ค์น˜๋˜์—ˆ๋Š”์ง€ ํ™•์ธํ•  ์ฐจ๋ก€์ž…๋‹ˆ๋‹ค. ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ๋‹ค์šด๋กœ๋“œํ•˜๋Š” ์ฝ”๋“œ์ž…๋‹ˆ๋‹ค. ```bash python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))" ``` ๋ผ๋ฒจ๊ณผ ์ ์ˆ˜๊ฐ€ ์ถœ๋ ฅ๋˜๋ฉด ์ž˜ ์„ค์น˜๋œ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ```bash [{'label': 'POSITIVE', 'score': 0.9998704791069031}] ``` ## ์†Œ์Šค์—์„œ ์„ค์น˜ํ•˜๊ธฐ[[install-from-source]] ๐Ÿค— Transformers๋ฅผ ์†Œ์Šค์—์„œ ์„ค์น˜ํ•˜๋ ค๋ฉด ์•„๋ž˜ ๋ช…๋ น์„ ์‹คํ–‰ํ•˜์„ธ์š”. ```bash pip install git+https://github.com/huggingface/transformers ``` ์œ„ ๋ช…๋ น์€ ์ตœ์‹ ์ด์ง€๋งŒ (์•ˆ์ •์ ์ธ) `stable` ๋ฒ„์ „์ด ์•„๋‹Œ ์‹คํ—˜์„ฑ์ด ์ง™์€ `main` ๋ฒ„์ „์„ ์„ค์น˜ํ•ฉ๋‹ˆ๋‹ค. `main` ๋ฒ„์ „์€ ๊ฐœ๋ฐœ ํ˜„ํ™ฉ๊ณผ ๋ฐœ๋งž์ถ”๋Š”๋ฐ ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. 
์˜ˆ์‹œ๋กœ ๋งˆ์ง€๋ง‰ ๊ณต์‹ ๋ฆด๋ฆฌ์Šค ์ดํ›„ ๋ฐœ๊ฒฌ๋œ ๋ฒ„๊ทธ๊ฐ€ ํŒจ์น˜๋˜์—ˆ์ง€๋งŒ, ์ƒˆ ๋ฆด๋ฆฌ์Šค๋กœ ์•„์ง ๋กค์•„์›ƒ๋˜์ง€๋Š” ์•Š์€ ๊ฒฝ์šฐ๋ฅผ ๋“ค ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฐ”๊ฟ” ๋งํ•˜๋ฉด `main` ๋ฒ„์ „์ด ์•ˆ์ •์„ฑ๊ณผ๋Š” ๊ฑฐ๋ฆฌ๊ฐ€ ์žˆ๋‹ค๋Š” ๋œป์ด๊ธฐ๋„ ํ•ฉ๋‹ˆ๋‹ค. ์ €ํฌ๋Š” `main` ๋ฒ„์ „์„ ์‚ฌ์šฉํ•˜๋Š”๋ฐ ๋ฌธ์ œ๊ฐ€ ์—†๋„๋ก ๋…ธ๋ ฅํ•˜๊ณ  ์žˆ์œผ๋ฉฐ, ๋Œ€๋ถ€๋ถ„์˜ ๋ฌธ์ œ๋Š” ๋Œ€๊ฐœ ๋ช‡ ์‹œ๊ฐ„์ด๋‚˜ ํ•˜๋ฃจ ์•ˆ์— ํ•ด๊ฒฐ๋ฉ๋‹ˆ๋‹ค. ๋งŒ์•ฝ ๋ฌธ์ œ๊ฐ€ ๋ฐœ์ƒํ•˜๋ฉด [์ด์Šˆ](https://github.com/huggingface/transformers/issues)๋ฅผ ์—ด์–ด์ฃผ์‹œ๋ฉด ๋” ๋นจ๋ฆฌ ํ•ด๊ฒฐํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ์ „๊ณผ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ ๐Ÿค— Transformers๊ฐ€ ์ œ๋Œ€๋กœ ์„ค์น˜๋˜์—ˆ๋Š”์ง€ ํ™•์ธํ•  ์ฐจ๋ก€์ž…๋‹ˆ๋‹ค. ```bash python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('I love you'))" ``` ## ์ˆ˜์ • ๊ฐ€๋Šฅํ•œ ์„ค์น˜[[editable-install]] ์ˆ˜์ • ๊ฐ€๋Šฅํ•œ ์„ค์น˜๊ฐ€ ํ•„์š”ํ•œ ๊ฒฝ์šฐ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค. * `main` ๋ฒ„์ „์˜ ์†Œ์Šค ์ฝ”๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด * ๐Ÿค— Transformers์— ๊ธฐ์—ฌํ•˜๊ณ  ์‹ถ์–ด์„œ ์ฝ”๋“œ์˜ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์„ ํ…Œ์ŠคํŠธํ•˜๊ธฐ ์œ„ํ•ด ๋ฆฌํฌ์ง€ํ„ฐ๋ฆฌ๋ฅผ ๋ณต์ œํ•˜๊ณ  ๐Ÿค— Transformers๋ฅผ ์„ค์น˜ํ•˜๋ ค๋ฉด ๋‹ค์Œ ๋ช…๋ น์„ ์ž…๋ ฅํ•ด์ฃผ์„ธ์š”. ```bash git clone https://github.com/huggingface/transformers.git cd transformers pip install -e . ``` ์œ„ ๋ช…๋ น์€ ๋ฆฌํฌ์ง€ํ„ฐ๋ฆฌ๋ฅผ ๋ณต์ œํ•œ ์œ„์น˜์˜ ํด๋”์™€ Python ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ๊ฒฝ๋กœ๋ฅผ ์—ฐ๊ฒฐ์‹œํ‚ต๋‹ˆ๋‹ค. Python์ด ์ผ๋ฐ˜ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ ๊ฒฝ๋กœ ์™ธ์— ๋ณต์ œํ•œ ํด๋” ๋‚ด๋ถ€๋ฅผ ํ™•์ธํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด Python ํŒจํ‚ค์ง€๊ฐ€ ์ผ๋ฐ˜์ ์œผ๋กœ `~/anaconda3/envs/main/lib/python3.7/site-packages/`์— ์„ค์น˜๋˜์–ด ์žˆ๋Š”๋ฐ, ๋ช…๋ น์„ ๋ฐ›์€ Python์ด ์ด์ œ ๋ณต์ œํ•œ ํด๋”์ธ `~/transformers/`๋„ ๊ฒ€์ƒ‰ํ•˜๊ฒŒ ๋ฉ๋‹ˆ๋‹ค. <Tip warning={true}> ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ๊ณ„์† ์‚ฌ์šฉํ•˜๋ ค๋ฉด `transformers` ํด๋”๋ฅผ ๊ผญ ์œ ์ง€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. </Tip> ๋ณต์ œ๋ณธ์€ ์ตœ์‹  ๋ฒ„์ „์˜ ๐Ÿค— Transformers๋กœ ์‰ฝ๊ฒŒ ์—…๋ฐ์ดํŠธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```bash cd ~/transformers/ git pull ``` Python ํ™˜๊ฒฝ์„ ๋‹ค์‹œ ์‹คํ–‰ํ•˜๋ฉด ์—…๋ฐ์ดํŠธ๋œ ๐Ÿค— Transformers์˜ `main` ๋ฒ„์ „์„ ์ฐพ์•„๋‚ผ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ## conda๋กœ ์„ค์น˜ํ•˜๊ธฐ[[install-with-conda]] `conda-forge` conda ์ฑ„๋„์—์„œ ์„ค์น˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```bash conda install conda-forge::transformers ``` ## ์บ์‹œ ๊ตฌ์„ฑํ•˜๊ธฐ[[cache-setup]] ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์€ ๋‹ค์šด๋กœ๋“œ๋œ ํ›„ ๋กœ์ปฌ ๊ฒฝ๋กœ `~/.cache/huggingface/hub`์— ์บ์‹œ๋ฉ๋‹ˆ๋‹ค. ์…ธ ํ™˜๊ฒฝ ๋ณ€์ˆ˜ `TRANSFORMERS_CACHE`์˜ ๊ธฐ๋ณธ ๋””๋ ‰ํ„ฐ๋ฆฌ์ž…๋‹ˆ๋‹ค. Windows์˜ ๊ฒฝ์šฐ ๊ธฐ๋ณธ ๋””๋ ‰ํ„ฐ๋ฆฌ๋Š” `C:\Users\username\.cache\huggingface\hub`์ž…๋‹ˆ๋‹ค. ์•„๋ž˜์˜ ์…ธ ํ™˜๊ฒฝ ๋ณ€์ˆ˜๋ฅผ (์šฐ์„  ์ˆœ์œ„) ์ˆœ์„œ๋Œ€๋กœ ๋ณ€๊ฒฝํ•˜์—ฌ ๋‹ค๋ฅธ ์บ์‹œ ๋””๋ ‰ํ† ๋ฆฌ๋ฅผ ์ง€์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 1. ์…ธ ํ™˜๊ฒฝ ๋ณ€์ˆ˜ (๊ธฐ๋ณธ): `HUGGINGFACE_HUB_CACHE` ๋˜๋Š” `TRANSFORMERS_CACHE` 2. ์…ธ ํ™˜๊ฒฝ ๋ณ€์ˆ˜: `HF_HOME` 3. ์…ธ ํ™˜๊ฒฝ ๋ณ€์ˆ˜: `XDG_CACHE_HOME` + `/huggingface` <Tip> ๊ณผ๊ฑฐ ๐Ÿค— Transformers์—์„œ ์“ฐ์˜€๋˜ ์…ธ ํ™˜๊ฒฝ ๋ณ€์ˆ˜ `PYTORCH_TRANSFORMERS_CACHE` ๋˜๋Š” `PYTORCH_PRETRAINED_BERT_CACHE`์ด ์„ค์ •๋˜์žˆ๋‹ค๋ฉด, ์…ธ ํ™˜๊ฒฝ ๋ณ€์ˆ˜ `TRANSFORMERS_CACHE`์„ ์ง€์ •ํ•˜์ง€ ์•Š๋Š” ํ•œ ์šฐ์„  ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. </Tip> ## ์˜คํ”„๋ผ์ธ ๋ชจ๋“œ[[offline-mode]] ๐Ÿค— Transformers๋ฅผ ๋กœ์ปฌ ํŒŒ์ผ๋งŒ ์‚ฌ์šฉํ•˜๋„๋ก ํ•ด์„œ ๋ฐฉํ™”๋ฒฝ ๋˜๋Š” ์˜คํ”„๋ผ์ธ ํ™˜๊ฒฝ์—์„œ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ™œ์„ฑํ™”ํ•˜๋ ค๋ฉด `TRANSFORMERS_OFFLINE=1` ํ™˜๊ฒฝ ๋ณ€์ˆ˜๋ฅผ ์„ค์ •ํ•˜์„ธ์š”. <Tip> `HF_DATASETS_OFFLINE=1` ํ™˜๊ฒฝ ๋ณ€์ˆ˜๋ฅผ ์„ค์ •ํ•˜์—ฌ ์˜คํ”„๋ผ์ธ ํ›ˆ๋ จ ๊ณผ์ •์— [๐Ÿค— Datasets](https://huggingface.co/docs/datasets/)์„ ์ถ”๊ฐ€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
</Tip>

์˜ˆ๋ฅผ ๋“ค์–ด ์™ธ๋ถ€ ๊ธฐ๊ธฐ ์‚ฌ์ด์— ๋ฐฉํ™”๋ฒฝ์„ ๋‘” ์ผ๋ฐ˜ ๋„คํŠธ์›Œํฌ์—์„œ ํ‰์†Œ์ฒ˜๋Ÿผ ํ”„๋กœ๊ทธ๋žจ์„ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

```bash
python examples/pytorch/translation/run_translation.py --model_name_or_path google-t5/t5-small --dataset_name wmt16 --dataset_config ro-en ...
```

์˜คํ”„๋ผ์ธ ๊ธฐ๊ธฐ์—์„œ ๋™์ผํ•œ ํ”„๋กœ๊ทธ๋žจ์„ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

```bash
HF_DATASETS_OFFLINE=1 TRANSFORMERS_OFFLINE=1 \
python examples/pytorch/translation/run_translation.py --model_name_or_path google-t5/t5-small --dataset_name wmt16 --dataset_config ro-en ...
```

์ด์ œ ์Šคํฌ๋ฆฝํŠธ๋Š” ๋กœ์ปฌ ํŒŒ์ผ์— ํ•œํ•ด์„œ๋งŒ ๊ฒ€์ƒ‰ํ•  ๊ฒƒ์ด๋ฏ€๋กœ, ์Šคํฌ๋ฆฝํŠธ๊ฐ€ ์ค‘๋‹จ๋˜๊ฑฐ๋‚˜ ์‹œ๊ฐ„์ด ์ดˆ๊ณผ๋  ๋•Œ๊นŒ์ง€ ๋ฉˆ์ถฐ์žˆ์ง€ ์•Š๊ณ  ์ž˜ ์‹คํ–‰๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค.

### ์˜คํ”„๋ผ์ธ์šฉ ๋ชจ๋ธ ๋ฐ ํ† ํฌ๋‚˜์ด์ € ๋งŒ๋“ค์–ด๋‘๊ธฐ[[fetch-models-and-tokenizers-to-use-offline]]

๐Ÿค— Transformers๋ฅผ ์˜คํ”„๋ผ์ธ์œผ๋กœ ์‚ฌ์šฉํ•˜๋Š” ๋˜ ๋‹ค๋ฅธ ๋ฐฉ๋ฒ•์€ ํŒŒ์ผ์„ ๋ฏธ๋ฆฌ ๋‹ค์šด๋กœ๋“œํ•œ ๋‹ค์Œ, ์˜คํ”„๋ผ์ธ์ผ ๋•Œ ์‚ฌ์šฉํ•  ๋กœ์ปฌ ๊ฒฝ๋กœ๋ฅผ ์ง€์ •ํ•ด๋‘๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. 3๊ฐ€์ง€ ์ค‘ ํŽธํ•œ ๋ฐฉ๋ฒ•์„ ๊ณ ๋ฅด์„ธ์š”.

* [Model Hub](https://huggingface.co/models)์˜ UI๋ฅผ ํ†ตํ•ด ํŒŒ์ผ์„ ๋‹ค์šด๋กœ๋“œํ•˜๋ ค๋ฉด โ†“ ์•„์ด์ฝ˜์„ ํด๋ฆญํ•˜์„ธ์š”.

    ![download-icon](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/download-icon.png)

* [`PreTrainedModel.from_pretrained`]์™€ [`PreTrainedModel.save_pretrained`] ์›Œํฌํ”Œ๋กœ๋ฅผ ํ™œ์šฉํ•˜์„ธ์š”.

    1. ๋ฏธ๋ฆฌ [`PreTrainedModel.from_pretrained`]๋กœ ํŒŒ์ผ์„ ๋‹ค์šด๋กœ๋“œํ•ด๋‘์„ธ์š”.

    ```py
    >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    >>> tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B")
    >>> model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B")
    ```

    2. [`PreTrainedModel.save_pretrained`]๋กœ ์ง€์ •๋œ ๊ฒฝ๋กœ์— ํŒŒ์ผ์„ ์ €์žฅํ•ด๋‘์„ธ์š”.

    ```py
    >>> tokenizer.save_pretrained("./your/path/bigscience_t0")
    >>> model.save_pretrained("./your/path/bigscience_t0")
    ```

    3. ์ด์ œ ์˜คํ”„๋ผ์ธ์ผ ๋•Œ [`PreTrainedModel.from_pretrained`]๋กœ ์ €์žฅํ•ด๋’€๋˜ ํŒŒ์ผ์„ ์ง€์ •๋œ ๊ฒฝ๋กœ์—์„œ ๋‹ค์‹œ ๋ถˆ๋Ÿฌ์˜ค์„ธ์š”.

    ```py
    >>> tokenizer = AutoTokenizer.from_pretrained("./your/path/bigscience_t0")
    >>> model = AutoModel.from_pretrained("./your/path/bigscience_t0")
    ```

* [huggingface_hub](https://github.com/huggingface/huggingface_hub/tree/main/src/huggingface_hub) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ํ™œ์šฉํ•ด์„œ ํŒŒ์ผ์„ ๋‹ค์šด๋กœ๋“œํ•˜์„ธ์š”.

    1. ๊ฐ€์ƒํ™˜๊ฒฝ์— `huggingface_hub` ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์„ค์น˜ํ•˜์„ธ์š”.

    ```bash
    python -m pip install huggingface_hub
    ```

    2. [`hf_hub_download`](https://huggingface.co/docs/hub/adding-a-library#download-files-from-the-hub) ํ•จ์ˆ˜๋กœ ํŒŒ์ผ์„ ํŠน์ • ์œ„์น˜์— ๋‹ค์šด๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ์•„๋ž˜ ๋ช…๋ น์€ [T0](https://huggingface.co/bigscience/T0_3B) ๋ชจ๋ธ์˜ `config.json` ํŒŒ์ผ์„ ์ง€์ •๋œ ๊ฒฝ๋กœ์— ๋‹ค์šด๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค.

    ```py
    >>> from huggingface_hub import hf_hub_download

    >>> hf_hub_download(repo_id="bigscience/T0_3B", filename="config.json", cache_dir="./your/path/bigscience_t0")
    ```

ํŒŒ์ผ์„ ๋‹ค์šด๋กœ๋“œํ•˜๊ณ  ๋กœ์ปฌ์— ์บ์‹œ ํ•ด๋†“๊ณ  ๋‚˜๋ฉด, ๋‚˜์ค‘์— ๋ถˆ๋Ÿฌ์™€ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก ๋กœ์ปฌ ๊ฒฝ๋กœ๋ฅผ ์ง€์ •ํ•ด๋‘์„ธ์š”.
```py >>> from transformers import AutoConfig >>> config = AutoConfig.from_pretrained("./your/path/bigscience_t0/config.json") ``` <Tip> Hub์— ์ €์žฅ๋œ ํŒŒ์ผ์„ ๋‹ค์šด๋กœ๋“œํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋” ์ž์„ธํžˆ ์•Œ์•„๋ณด๋ ค๋ฉด [Hub์—์„œ ํŒŒ์ผ ๋‹ค์šด๋กœ๋“œํ•˜๊ธฐ](https://huggingface.co/docs/hub/how-to-downstream) ์„น์…˜์„ ์ฐธ๊ณ ํ•ด์ฃผ์„ธ์š”. </Tip>
mavonic_private_repos/transformers/docs/source/ko/quicktour.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๋‘˜๋Ÿฌ๋ณด๊ธฐ [[quick-tour]] [[open-in-colab]] ๐Ÿค— Transformers๋ฅผ ์‹œ์ž‘ํ•ด๋ณด์„ธ์š”! ๊ฐœ๋ฐœํ•ด๋ณธ ์ ์ด ์—†๋”๋ผ๋„ ์‰ฝ๊ฒŒ ์ฝ์„ ์ˆ˜ ์žˆ๋„๋ก ์“ฐ์ธ ์ด ๊ธ€์€ [`pipeline`](./main_classes/pipelines)์„ ์‚ฌ์šฉํ•˜์—ฌ ์ถ”๋ก ํ•˜๊ณ , ์‚ฌ์ „ํ•™์Šต๋œ ๋ชจ๋ธ๊ณผ ์ „์ฒ˜๋ฆฌ๊ธฐ๋ฅผ [AutoClass](./model_doc/auto)๋กœ ๋กœ๋“œํ•˜๊ณ , PyTorch ๋˜๋Š” TensorFlow๋กœ ๋ชจ๋ธ์„ ๋น ๋ฅด๊ฒŒ ํ•™์Šต์‹œํ‚ค๋Š” ๋ฐฉ๋ฒ•์„ ์†Œ๊ฐœํ•ด ๋“œ๋ฆด ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋ณธ ๊ฐ€์ด๋“œ์—์„œ ์†Œ๊ฐœ๋˜๋Š” ๊ฐœ๋…์„ (ํŠนํžˆ ์ดˆ๋ณด์ž์˜ ๊ด€์ ์œผ๋กœ) ๋” ์นœ์ ˆํ•˜๊ฒŒ ์ ‘ํ•˜๊ณ  ์‹ถ๋‹ค๋ฉด, ํŠœํ† ๋ฆฌ์–ผ์ด๋‚˜ [์ฝ”์Šค](https://huggingface.co/course/chapter1/1)๋ฅผ ์ฐธ์กฐํ•˜๊ธฐ๋ฅผ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ๋ชจ๋‘ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash !pip install transformers datasets evaluate accelerate ``` ๋˜ํ•œ ์„ ํ˜ธํ•˜๋Š” ๋จธ์‹  ๋Ÿฌ๋‹ ํ”„๋ ˆ์ž„์›Œํฌ๋ฅผ ์„ค์น˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: <frameworkcontent> <pt> ```bash pip install torch ``` </pt> <tf> ```bash pip install tensorflow ``` </tf> </frameworkcontent> ## ํŒŒ์ดํ”„๋ผ์ธ [[pipeline]] <Youtube id="tiZFewofSLM"/> [`pipeline`](./main_classes/pipelines)์€ ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ๋กœ ์ถ”๋ก ํ•˜๊ธฐ์— ๊ฐ€์žฅ ์‰ฝ๊ณ  ๋น ๋ฅธ ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค. [`pipeline`]์€ ์—ฌ๋Ÿฌ ๋ชจ๋‹ฌ๋ฆฌํ‹ฐ์—์„œ ๋‹ค์–‘ํ•œ ๊ณผ์—…์„ ์‰ฝ๊ฒŒ ์ฒ˜๋ฆฌํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ, ์•„๋ž˜ ํ‘œ์— ํ‘œ์‹œ๋œ ๋ช‡ ๊ฐ€์ง€ ๊ณผ์—…์„ ๊ธฐ๋ณธ์ ์œผ๋กœ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค: <Tip> ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ ์ž‘์—…์˜ ์ „์ฒด ๋ชฉ๋ก์€ [Pipelines API ์ฐธ์กฐ](./main_classes/pipelines)๋ฅผ ํ™•์ธํ•˜์„ธ์š”. 
</Tip> | **ํƒœ์Šคํฌ** | **์„ค๋ช…** | **๋ชจ๋‹ฌ๋ฆฌํ‹ฐ** | **ํŒŒ์ดํ”„๋ผ์ธ ID** | |-----------------|----------------------------------------------------------------------|------------------|-----------------------------------------------| | ํ…์ŠคํŠธ ๋ถ„๋ฅ˜ | ํ…์ŠคํŠธ์— ์•Œ๋งž์€ ๋ ˆ์ด๋ธ” ๋ถ™์ด๊ธฐ | ์ž์—ฐ์–ด ์ฒ˜๋ฆฌ(NLP) | pipeline(task="sentiment-analysis") | | ํ…์ŠคํŠธ ์ƒ์„ฑ | ์ฃผ์–ด์ง„ ๋ฌธ์ž์—ด ์ž…๋ ฅ๊ณผ ์ด์–ด์ง€๋Š” ํ…์ŠคํŠธ ์ƒ์„ฑํ•˜๊ธฐ | ์ž์—ฐ์–ด ์ฒ˜๋ฆฌ(NLP) | pipeline(task="text-generation") | | ๊ฐœ์ฒด๋ช… ์ธ์‹ | ๋ฌธ์ž์—ด์˜ ๊ฐ ํ† ํฐ๋งˆ๋‹ค ์•Œ๋งž์€ ๋ ˆ์ด๋ธ” ๋ถ™์ด๊ธฐ (์ธ๋ฌผ, ์กฐ์ง, ์žฅ์†Œ ๋“ฑ๋“ฑ) | ์ž์—ฐ์–ด ์ฒ˜๋ฆฌ(NLP) | pipeline(task="ner") | | ์งˆ์˜์‘๋‹ต | ์ฃผ์–ด์ง„ ๋ฌธ๋งฅ๊ณผ ์งˆ๋ฌธ์— ๋”ฐ๋ผ ์˜ฌ๋ฐ”๋ฅธ ๋Œ€๋‹ตํ•˜๊ธฐ | ์ž์—ฐ์–ด ์ฒ˜๋ฆฌ(NLP) | pipeline(task="question-answering") | | ๋นˆ์นธ ์ฑ„์šฐ๊ธฐ | ๋ฌธ์ž์—ด์˜ ๋นˆ์นธ์— ์•Œ๋งž์€ ํ† ํฐ ๋งž์ถ”๊ธฐ | ์ž์—ฐ์–ด ์ฒ˜๋ฆฌ(NLP) | pipeline(task="fill-mask") | | ์š”์•ฝ | ํ…์ŠคํŠธ๋‚˜ ๋ฌธ์„œ๋ฅผ ์š”์•ฝํ•˜๊ธฐ | ์ž์—ฐ์–ด ์ฒ˜๋ฆฌ(NLP) | pipeline(task="summarization") | | ๋ฒˆ์—ญ | ํ…์ŠคํŠธ๋ฅผ ํ•œ ์–ธ์–ด์—์„œ ๋‹ค๋ฅธ ์–ธ์–ด๋กœ ๋ฒˆ์—ญํ•˜๊ธฐ | ์ž์—ฐ์–ด ์ฒ˜๋ฆฌ(NLP) | pipeline(task="translation") | | ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜ | ์ด๋ฏธ์ง€์— ์•Œ๋งž์€ ๋ ˆ์ด๋ธ” ๋ถ™์ด๊ธฐ | ์ปดํ“จํ„ฐ ๋น„์ „(CV) | pipeline(task="image-classification") | | ์ด๋ฏธ์ง€ ๋ถ„ํ•  | ์ด๋ฏธ์ง€์˜ ํ”ฝ์…€๋งˆ๋‹ค ๋ ˆ์ด๋ธ” ๋ถ™์ด๊ธฐ(์‹œ๋งจํ‹ฑ, ํŒŒ๋†‰ํ‹ฑ ๋ฐ ์ธ์Šคํ„ด์Šค ๋ถ„ํ•  ํฌํ•จ) | ์ปดํ“จํ„ฐ ๋น„์ „(CV) | pipeline(task="image-segmentation") | | ๊ฐ์ฒด ํƒ์ง€ | ์ด๋ฏธ์ง€ ์† ๊ฐ์ฒด์˜ ๊ฒฝ๊ณ„ ์ƒ์ž๋ฅผ ๊ทธ๋ฆฌ๊ณ  ํด๋ž˜์Šค๋ฅผ ์˜ˆ์ธกํ•˜๊ธฐ | ์ปดํ“จํ„ฐ ๋น„์ „(CV) | pipeline(task="object-detection") | | ์˜ค๋””์˜ค ๋ถ„๋ฅ˜ | ์˜ค๋””์˜ค ํŒŒ์ผ์— ์•Œ๋งž์€ ๋ ˆ์ด๋ธ” ๋ถ™์ด๊ธฐ | ์˜ค๋””์˜ค | pipeline(task="audio-classification") | | ์ž๋™ ์Œ์„ฑ ์ธ์‹ | ์˜ค๋””์˜ค ํŒŒ์ผ ์† ์Œ์„ฑ์„ ํ…์ŠคํŠธ๋กœ ๋ฐ”๊พธ๊ธฐ | ์˜ค๋””์˜ค | pipeline(task="automatic-speech-recognition") | | ์‹œ๊ฐ ์งˆ์˜์‘๋‹ต | ์ฃผ์–ด์ง„ ์ด๋ฏธ์ง€์™€ ์งˆ๋ฌธ์— ๋Œ€ํ•ด ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ๋Œ€๋‹ตํ•˜๊ธฐ | ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ | pipeline(task="vqa") | | ๋ฌธ์„œ ์งˆ์˜์‘๋‹ต | ์ฃผ์–ด์ง„ ๋ฌธ์„œ์™€ ์งˆ๋ฌธ์— ๋Œ€ํ•ด ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ๋Œ€๋‹ตํ•˜๊ธฐ | ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ | pipeline(task="document-question-answering") | | ์ด๋ฏธ์ง€ ์บก์…˜ ๋‹ฌ๊ธฐ | ์ฃผ์–ด์ง„ ์ด๋ฏธ์ง€์˜ ์บก์…˜ ์ƒ์„ฑํ•˜๊ธฐ | ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ | pipeline(task="image-to-text") | ๋จผ์ € [`pipeline`]์˜ ์ธ์Šคํ„ด์Šค๋ฅผ ์ƒ์„ฑํ•˜๊ณ  ์‚ฌ์šฉํ•  ์ž‘์—…์„ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ๊ฐ์ • ๋ถ„์„์„ ์œ„ํ•ด [`pipeline`]์„ ์‚ฌ์šฉํ•˜๋Š” ์˜ˆ์ œ๋ฅผ ๋ณด์—ฌ๋“œ๋ฆฌ๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import pipeline >>> classifier = pipeline("sentiment-analysis") ``` [`pipeline`]์€ ๊ฐ์ • ๋ถ„์„์„ ์œ„ํ•œ [์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ](https://huggingface.co/distilbert/distilbert-base-uncased-finetuned-sst-2-english)๊ณผ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์ž๋™์œผ๋กœ ๋‹ค์šด๋กœ๋“œํ•˜๊ณ  ์บ์‹œํ•ฉ๋‹ˆ๋‹ค. ์ด์ œ `classifier`๋ฅผ ๋Œ€์ƒ ํ…์ŠคํŠธ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> classifier("We are very happy to show you the ๐Ÿค— Transformers library.") [{'label': 'POSITIVE', 'score': 0.9998}] ``` ๋งŒ์•ฝ ์ž…๋ ฅ์ด ์—ฌ๋Ÿฌ ๊ฐœ ์žˆ๋Š” ๊ฒฝ์šฐ, ์ž…๋ ฅ์„ ๋ฆฌ์ŠคํŠธ๋กœ [`pipeline`]์— ์ „๋‹ฌํ•˜์—ฌ, ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์˜ ์ถœ๋ ฅ์„ ๋”•์…”๋„ˆ๋ฆฌ๋กœ ์ด๋ฃจ์–ด์ง„ ๋ฆฌ์ŠคํŠธ ํ˜•ํƒœ๋กœ ๋ฐ›์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> results = classifier(["We are very happy to show you the ๐Ÿค— Transformers library.", "We hope you don't hate it."]) >>> for result in results: ... 
print(f"label: {result['label']}, with score: {round(result['score'], 4)}") label: POSITIVE, with score: 0.9998 label: NEGATIVE, with score: 0.5309 ``` [`pipeline`]์€ ์ฃผ์–ด์ง„ ๊ณผ์—…์— ๊ด€๊ณ„์—†์ด ๋ฐ์ดํ„ฐ์…‹ ์ „๋ถ€๋ฅผ ์ˆœํšŒํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์˜ˆ์ œ์—์„œ๋Š” ์ž๋™ ์Œ์„ฑ ์ธ์‹์„ ๊ณผ์—…์œผ๋กœ ์„ ํƒํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> import torch >>> from transformers import pipeline >>> speech_recognizer = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h") ``` ๋ฐ์ดํ„ฐ์…‹์„ ๋กœ๋“œํ•  ์ฐจ๋ก€์ž…๋‹ˆ๋‹ค. (์ž์„ธํ•œ ๋‚ด์šฉ์€ ๐Ÿค— Datasets [์‹œ์ž‘ํ•˜๊ธฐ](https://huggingface.co/docs/datasets/quickstart#audio)์„ ์ฐธ์กฐํ•˜์„ธ์š”) ์—ฌ๊ธฐ์—์„œ๋Š” [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) ๋ฐ์ดํ„ฐ์…‹์„ ๋กœ๋“œํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> from datasets import load_dataset, Audio >>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train") # doctest: +IGNORE_RESULT ``` ๋ฐ์ดํ„ฐ์…‹์˜ ์ƒ˜ํ”Œ๋ง ๋ ˆ์ดํŠธ๊ฐ€ ๊ธฐ์กด ๋ชจ๋ธ์ธ [`facebook/wav2vec2-base-960h`](https://huggingface.co/facebook/wav2vec2-base-960h)์˜ ํ›ˆ๋ จ ๋‹น์‹œ ์ƒ˜ํ”Œ๋ง ๋ ˆ์ดํŠธ์™€ ์ผ์น˜ํ•˜๋Š”์ง€ ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```py >>> dataset = dataset.cast_column("audio", Audio(sampling_rate=speech_recognizer.feature_extractor.sampling_rate)) ``` `"audio"` ์—ด์„ ํ˜ธ์ถœํ•˜๋ฉด ์ž๋™์œผ๋กœ ์˜ค๋””์˜ค ํŒŒ์ผ์„ ๊ฐ€์ ธ์™€์„œ ๋ฆฌ์ƒ˜ํ”Œ๋งํ•ฉ๋‹ˆ๋‹ค. ์ฒซ 4๊ฐœ ์ƒ˜ํ”Œ์—์„œ ์›์‹œ ์›จ์ด๋ธŒํผ ๋ฐฐ์—ด์„ ์ถ”์ถœํ•˜๊ณ  ํŒŒ์ดํ”„๋ผ์ธ์— ๋ฆฌ์ŠคํŠธ๋กœ ์ „๋‹ฌํ•˜์„ธ์š”: ```py >>> result = speech_recognizer(dataset[:4]["audio"]) >>> print([d["text"] for d in result]) ['I WOULD LIKE TO SET UP A JOINT ACCOUNT WITH MY PARTNER HOW DO I PROCEED WITH DOING THAT', "FONDERING HOW I'D SET UP A JOIN TO HELL T WITH MY WIFE AND WHERE THE AP MIGHT BE", "I I'D LIKE TOY SET UP A JOINT ACCOUNT WITH MY PARTNER I'M NOT SEEING THE OPTION TO DO IT ON THE APSO I CALLED IN TO GET SOME HELP CAN I JUST DO IT OVER THE PHONE WITH YOU AND GIVE YOU THE INFORMATION OR SHOULD I DO IT IN THE AP AN I'M MISSING SOMETHING UQUETTE HAD PREFERRED TO JUST DO IT OVER THE PHONE OF POSSIBLE THINGS", 'HOW DO I FURN A JOINA COUT'] ``` ์Œ์„ฑ์ด๋‚˜ ๋น„์ „๊ณผ ๊ฐ™์ด ์ž…๋ ฅ์ด ํฐ ๋Œ€๊ทœ๋ชจ ๋ฐ์ดํ„ฐ์…‹์˜ ๊ฒฝ์šฐ, ๋ชจ๋“  ์ž…๋ ฅ์„ ๋ฉ”๋ชจ๋ฆฌ์— ๋กœ๋“œํ•˜๋ ค๋ฉด ๋ฆฌ์ŠคํŠธ ๋Œ€์‹  ์ œ๋„ˆ๋ ˆ์ดํ„ฐ ํ˜•ํƒœ๋กœ ์ „๋‹ฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ž์„ธํ•œ ๋‚ด์šฉ์€ [Pipelines API ์ฐธ์กฐ](./main_classes/pipelines)๋ฅผ ํ™•์ธํ•˜์„ธ์š”. ### ํŒŒ์ดํ”„๋ผ์ธ์—์„œ ๋‹ค๋ฅธ ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ € ์‚ฌ์šฉํ•˜๊ธฐ [[use-another-model-and-tokenizer-in-the-pipeline]] [`pipeline`]์€ [Hub](https://huggingface.co/models)์˜ ๋ชจ๋“  ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๊ธฐ ๋•Œ๋ฌธ์—, [`pipeline`]์„ ๋‹ค๋ฅธ ์šฉ๋„์— ๋งž๊ฒŒ ์‰ฝ๊ฒŒ ์ˆ˜์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ํ”„๋ž‘์Šค์–ด ํ…์ŠคํŠธ๋ฅผ ์ฒ˜๋ฆฌํ•  ์ˆ˜ ์žˆ๋Š” ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด์„  Hub์˜ ํƒœ๊ทธ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ ์ ˆํ•œ ๋ชจ๋ธ์„ ํ•„ํ„ฐ๋งํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. 
ํ•„ํ„ฐ๋ง๋œ ๊ฒฐ๊ณผ์˜ ์ƒ์œ„ ํ•ญ๋ชฉ์œผ๋กœ๋Š” ํ”„๋ž‘์Šค์–ด ํ…์ŠคํŠธ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ๋‹ค๊ตญ์–ด [BERT ๋ชจ๋ธ](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment)์ด ๋ฐ˜ํ™˜๋ฉ๋‹ˆ๋‹ค: ```py >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" ``` <frameworkcontent> <pt> [`AutoModelForSequenceClassification`]๊ณผ [`AutoTokenizer`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ๊ณผ ๊ด€๋ จ๋œ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๋กœ๋“œํ•˜์„ธ์š” (๋‹ค์Œ ์„น์…˜์—์„œ [`AutoClass`]์— ๋Œ€ํ•ด ๋” ์ž์„ธํžˆ ์•Œ์•„๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค): ```py >>> from transformers import AutoTokenizer, AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained(model_name) >>> tokenizer = AutoTokenizer.from_pretrained(model_name) ``` </pt> <tf> [`TFAutoModelForSequenceClassification`]๊ณผ [`AutoTokenizer`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ๊ณผ ๊ด€๋ จ๋œ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๋กœ๋“œํ•˜์„ธ์š” (๋‹ค์Œ ์„น์…˜์—์„œ [`TFAutoClass`]์— ๋Œ€ํ•ด ๋” ์ž์„ธํžˆ ์•Œ์•„๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค): ```py >>> from transformers import AutoTokenizer, TFAutoModelForSequenceClassification >>> model = TFAutoModelForSequenceClassification.from_pretrained(model_name) >>> tokenizer = AutoTokenizer.from_pretrained(model_name) ``` </tf> </frameworkcontent> [`pipeline`]์—์„œ ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์ง€์ •ํ•˜๋ฉด, ์ด์ œ `classifier`๋ฅผ ํ”„๋ž‘์Šค์–ด ํ…์ŠคํŠธ์— ์ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer) >>> classifier("Nous sommes trรจs heureux de vous prรฉsenter la bibliothรจque ๐Ÿค— Transformers.") [{'label': '5 stars', 'score': 0.7273}] ``` ๋งˆ๋•…ํ•œ ๋ชจ๋ธ์„ ์ฐพ์„ ์ˆ˜ ์—†๋Š” ๊ฒฝ์šฐ ๋ฐ์ดํ„ฐ๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ์กฐ์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ฏธ์„ธ์กฐ์ • ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [๋ฏธ์„ธ์กฐ์ • ํŠœํ† ๋ฆฌ์–ผ](./training)์„ ์ฐธ์กฐํ•˜์„ธ์š”. ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ์กฐ์ •ํ•œ ํ›„์—๋Š” ๋ชจ๋ธ์„ Hub์˜ ์ปค๋ฎค๋‹ˆํ‹ฐ์™€ ๊ณต์œ ํ•˜์—ฌ ๋จธ์‹ ๋Ÿฌ๋‹ ๋ฏผ์ฃผํ™”์— ๊ธฐ์—ฌํ•ด์ฃผ์„ธ์š”! ๐Ÿค— ## AutoClass [[autoclass]] <Youtube id="AhChOFRegn4"/> [`AutoModelForSequenceClassification`]๊ณผ [`AutoTokenizer`] ํด๋ž˜์Šค๋Š” ์œ„์—์„œ ๋‹ค๋ฃฌ [`pipeline`]์˜ ๊ธฐ๋Šฅ์„ ๊ตฌํ˜„ํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. [AutoClass](./model_doc/auto)๋Š” ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์˜ ์•„ํ‚คํ…์ฒ˜๋ฅผ ์ด๋ฆ„์ด๋‚˜ ๊ฒฝ๋กœ์—์„œ ์ž๋™์œผ๋กœ ๊ฐ€์ ธ์˜ค๋Š” '๋ฐ”๋กœ๊ฐ€๊ธฐ'์ž…๋‹ˆ๋‹ค. ๊ณผ์—…์— ์ ํ•ฉํ•œ `AutoClass`๋ฅผ ์„ ํƒํ•˜๊ณ  ํ•ด๋‹น ์ „์ฒ˜๋ฆฌ ํด๋ž˜์Šค๋ฅผ ์„ ํƒํ•˜๊ธฐ๋งŒ ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ์ด์ „ ์„น์…˜์˜ ์˜ˆ์ œ๋กœ ๋Œ์•„๊ฐ€์„œ [`pipeline`]์˜ ๊ฒฐ๊ณผ๋ฅผ `AutoClass`๋ฅผ ํ™œ์šฉํ•ด ๋ณต์ œํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ### AutoTokenizer [[autotokenizer]] ํ† ํฌ๋‚˜์ด์ €๋Š” ํ…์ŠคํŠธ๋ฅผ ๋ชจ๋ธ์˜ ์ž…๋ ฅ์œผ๋กœ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด ์ˆซ์ž ๋ฐฐ์—ด ํ˜•ํƒœ๋กœ ์ „์ฒ˜๋ฆฌํ•˜๋Š” ์—ญํ• ์„ ๋‹ด๋‹นํ•ฉ๋‹ˆ๋‹ค. ํ† ํฐํ™” ๊ณผ์ •์—๋Š” ๋‹จ์–ด๋ฅผ ์–ด๋””์—์„œ ๋Š์„์ง€, ์–ด๋Š ์ˆ˜์ค€๊นŒ์ง€ ๋‚˜๋ˆŒ์ง€์™€ ๊ฐ™์€ ์—ฌ๋Ÿฌ ๊ทœ์น™๋“ค์ด ์žˆ์Šต๋‹ˆ๋‹ค (ํ† ํฐํ™”์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [ํ† ํฌ๋‚˜์ด์ € ์š”์•ฝ](./tokenizer_summary)์„ ์ฐธ์กฐํ•˜์„ธ์š”). ๊ฐ€์žฅ ์ค‘์š”ํ•œ ์ ์€ ๋ชจ๋ธ์ด ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ๊ณผ ๋™์ผํ•œ ํ† ํฐํ™” ๊ทœ์น™์„ ์‚ฌ์šฉํ•˜๋„๋ก ๋™์ผํ•œ ๋ชจ๋ธ ์ด๋ฆ„์œผ๋กœ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์ธ์Šคํ„ด์Šคํ™”ํ•ด์•ผ ํ•œ๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. 
[`AutoTokenizer`]๋กœ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๋กœ๋“œํ•˜์„ธ์š”: ```py >>> from transformers import AutoTokenizer >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" >>> tokenizer = AutoTokenizer.from_pretrained(model_name) ``` ํ…์ŠคํŠธ๋ฅผ ํ† ํฌ๋‚˜์ด์ €์— ์ „๋‹ฌํ•˜์„ธ์š”: ```py >>> encoding = tokenizer("We are very happy to show you the ๐Ÿค— Transformers library.") >>> print(encoding) {'input_ids': [101, 11312, 10320, 12495, 19308, 10114, 11391, 10855, 10103, 100, 58263, 13299, 119, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} ``` ํ† ํฌ๋‚˜์ด์ €๋Š” ๋‹ค์Œ์„ ํฌํ•จํ•œ ๋”•์…”๋„ˆ๋ฆฌ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: * [input_ids](./glossary#input-ids): ํ† ํฐ์˜ ์ˆซ์ž ํ‘œํ˜„. * [attention_mask](.glossary#attention-mask): ์–ด๋–ค ํ† ํฐ์— ์ฃผ์˜๋ฅผ ๊ธฐ์šธ์—ฌ์•ผ ํ•˜๋Š”์ง€๋ฅผ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. ํ† ํฌ๋‚˜์ด์ €๋Š” ์ž…๋ ฅ์„ ๋ฆฌ์ŠคํŠธ ํ˜•ํƒœ๋กœ๋„ ๋ฐ›์„ ์ˆ˜ ์žˆ์œผ๋ฉฐ, ํ…์ŠคํŠธ๋ฅผ ํŒจ๋”ฉํ•˜๊ณ  ์ž˜๋ผ๋‚ด์–ด ์ผ์ •ํ•œ ๊ธธ์ด์˜ ๋ฌถ์Œ์„ ๋ฐ˜ํ™˜ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: <frameworkcontent> <pt> ```py >>> pt_batch = tokenizer( ... ["We are very happy to show you the ๐Ÿค— Transformers library.", "We hope you don't hate it."], ... padding=True, ... truncation=True, ... max_length=512, ... return_tensors="pt", ... ) ``` </pt> <tf> ```py >>> tf_batch = tokenizer( ... ["We are very happy to show you the ๐Ÿค— Transformers library.", "We hope you don't hate it."], ... padding=True, ... truncation=True, ... max_length=512, ... return_tensors="tf", ... ) ``` </tf> </frameworkcontent> <Tip> [์ „์ฒ˜๋ฆฌ](./preprocessing) ํŠœํ† ๋ฆฌ์–ผ์„ ์ฐธ์กฐํ•˜์‹œ๋ฉด ํ† ํฐํ™”์— ๋Œ€ํ•œ ์ž์„ธํ•œ ์„ค๋ช…๊ณผ ํ•จ๊ป˜ ์ด๋ฏธ์ง€, ์˜ค๋””์˜ค์™€ ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ ์ž…๋ ฅ์„ ์ „์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•œ [`AutoImageProcessor`]์™€ [`AutoFeatureExtractor`], [`AutoProcessor`]์˜ ์‚ฌ์šฉ๋ฐฉ๋ฒ•๋„ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. </Tip> ### AutoModel [[automodel]] <frameworkcontent> <pt> ๐Ÿค— Transformers๋Š” ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ์ธ์Šคํ„ด์Šค๋ฅผ ๊ฐ„๋‹จํ•˜๊ณ  ํ†ตํ•ฉ๋œ ๋ฐฉ๋ฒ•์œผ๋กœ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ฆ‰, [`AutoTokenizer`]์ฒ˜๋Ÿผ [`AutoModel`]์„ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์œ ์ผํ•œ ์ฐจ์ด์ ์€ ๊ณผ์—…์— ์•Œ๋งž์€ [`AutoModel`]์„ ์„ ํƒํ•ด์•ผ ํ•œ๋‹ค๋Š” ์ ์ž…๋‹ˆ๋‹ค. ํ…์ŠคํŠธ (๋˜๋Š” ์‹œํ€€์Šค) ๋ถ„๋ฅ˜์˜ ๊ฒฝ์šฐ [`AutoModelForSequenceClassification`]์„ ๋กœ๋“œํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForSequenceClassification >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" >>> pt_model = AutoModelForSequenceClassification.from_pretrained(model_name) ``` <Tip> [`AutoModel`] ํด๋ž˜์Šค์—์„œ ์ง€์›ํ•˜๋Š” ๊ณผ์—…์— ๋Œ€ํ•ด์„œ๋Š” [๊ณผ์—… ์š”์•ฝ](./task_summary)์„ ์ฐธ์กฐํ•˜์„ธ์š”. </Tip> ์ด์ œ ์ „์ฒ˜๋ฆฌ๋œ ์ž…๋ ฅ ๋ฌถ์Œ์„ ์ง์ ‘ ๋ชจ๋ธ์— ์ „๋‹ฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์•„๋ž˜์ฒ˜๋Ÿผ `**`๋ฅผ ์•ž์— ๋ถ™์—ฌ ๋”•์…”๋„ˆ๋ฆฌ๋ฅผ ํ’€์–ด์ฃผ๋ฉด ๋ฉ๋‹ˆ๋‹ค: ```py >>> pt_outputs = pt_model(**pt_batch) ``` ๋ชจ๋ธ์˜ ์ตœ์ข… ํ™œ์„ฑํ™” ํ•จ์ˆ˜ ์ถœ๋ ฅ์€ `logits` ์†์„ฑ์— ๋‹ด๊ฒจ์žˆ์Šต๋‹ˆ๋‹ค. `logits`์— softmax ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•˜์—ฌ ํ™•๋ฅ ์„ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> from torch import nn >>> pt_predictions = nn.functional.softmax(pt_outputs.logits, dim=-1) >>> print(pt_predictions) tensor([[0.0021, 0.0018, 0.0115, 0.2121, 0.7725], [0.2084, 0.1826, 0.1969, 0.1755, 0.2365]], grad_fn=<SoftmaxBackward0>) ``` </pt> <tf> ๐Ÿค— Transformers๋Š” ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ์ธ์Šคํ„ด์Šค๋ฅผ ๊ฐ„๋‹จํ•˜๊ณ  ํ†ตํ•ฉ๋œ ๋ฐฉ๋ฒ•์œผ๋กœ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ฆ‰, [`AutoTokenizer`]์ฒ˜๋Ÿผ [`TFAutoModel`]์„ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
์œ ์ผํ•œ ์ฐจ์ด์ ์€ ๊ณผ์—…์— ์•Œ๋งž์€ [`TFAutoModel`]์„ ์„ ํƒํ•ด์•ผ ํ•œ๋‹ค๋Š” ์ ์ž…๋‹ˆ๋‹ค. ํ…์ŠคํŠธ (๋˜๋Š” ์‹œํ€€์Šค) ๋ถ„๋ฅ˜์˜ ๊ฒฝ์šฐ [`TFAutoModelForSequenceClassification`]์„ ๋กœ๋“œํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import TFAutoModelForSequenceClassification >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(model_name) ``` <Tip> [`AutoModel`] ํด๋ž˜์Šค์—์„œ ์ง€์›ํ•˜๋Š” ๊ณผ์—…์— ๋Œ€ํ•ด์„œ๋Š” [๊ณผ์—… ์š”์•ฝ](./task_summary)์„ ์ฐธ์กฐํ•˜์„ธ์š”. </Tip> ์ด์ œ ์ „์ฒ˜๋ฆฌ๋œ ์ž…๋ ฅ ๋ฌถ์Œ์„ ์ง์ ‘ ๋ชจ๋ธ์— ์ „๋‹ฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์•„๋ž˜์ฒ˜๋Ÿผ ๊ทธ๋Œ€๋กœ ํ…์„œ๋ฅผ ์ „๋‹ฌํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค: ```py >>> tf_outputs = tf_model(tf_batch) ``` ๋ชจ๋ธ์˜ ์ตœ์ข… ํ™œ์„ฑํ™” ํ•จ์ˆ˜ ์ถœ๋ ฅ์€ `logits` ์†์„ฑ์— ๋‹ด๊ฒจ์žˆ์Šต๋‹ˆ๋‹ค. `logits`์— softmax ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•˜์—ฌ ํ™•๋ฅ ์„ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> import tensorflow as tf >>> tf_predictions = tf.nn.softmax(tf_outputs.logits, axis=-1) >>> tf_predictions # doctest: +IGNORE_RESULT ``` </tf> </frameworkcontent> <Tip> ๋ชจ๋“  ๐Ÿค— Transformers ๋ชจ๋ธ(PyTorch ๋˜๋Š” TensorFlow)์€ (softmax์™€ ๊ฐ™์€) ์ตœ์ข… ํ™œ์„ฑํ™” ํ•จ์ˆ˜ *์ด์ „์—* ํ…์„œ๋ฅผ ์ถœ๋ ฅํ•ฉ๋‹ˆ๋‹ค. ์™œ๋ƒํ•˜๋ฉด ์ตœ์ข… ํ™œ์„ฑํ™” ํ•จ์ˆ˜์˜ ์ถœ๋ ฅ์€ ์ข…์ข… ์†์‹ค ํ•จ์ˆ˜ ์ถœ๋ ฅ๊ณผ ๊ฒฐํ•ฉ๋˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ ์ถœ๋ ฅ์€ ํŠน์ˆ˜ํ•œ ๋ฐ์ดํ„ฐ ํด๋ž˜์Šค์ด๋ฏ€๋กœ IDE์—์„œ ์ž๋™ ์™„์„ฑ๋ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ ์ถœ๋ ฅ์€ ํŠœํ”Œ์ด๋‚˜ ๋”•์…”๋„ˆ๋ฆฌ์ฒ˜๋Ÿผ ๋™์ž‘ํ•˜๋ฉฐ (์ •์ˆ˜, ์Šฌ๋ผ์ด์Šค ๋˜๋Š” ๋ฌธ์ž์—ด๋กœ ์ธ๋ฑ์‹ฑ ๊ฐ€๋Šฅ), None์ธ ์†์„ฑ์€ ๋ฌด์‹œ๋ฉ๋‹ˆ๋‹ค. </Tip> ### ๋ชจ๋ธ ์ €์žฅํ•˜๊ธฐ [[save-a-model]] <frameworkcontent> <pt> ๋ฏธ์„ธ์กฐ์ •๋œ ๋ชจ๋ธ์„ ํ† ํฌ๋‚˜์ด์ €์™€ ํ•จ๊ป˜ ์ €์žฅํ•˜๋ ค๋ฉด [`PreTrainedModel.save_pretrained`]๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”: ```py >>> pt_save_directory = "./pt_save_pretrained" >>> tokenizer.save_pretrained(pt_save_directory) # doctest: +IGNORE_RESULT >>> pt_model.save_pretrained(pt_save_directory) ``` ๋ชจ๋ธ์„ ๋‹ค์‹œ ์‚ฌ์šฉํ•˜๋ ค๋ฉด [`PreTrainedModel.from_pretrained`]๋กœ ๋ชจ๋ธ์„ ๋‹ค์‹œ ๋กœ๋“œํ•˜์„ธ์š”: ```py >>> pt_model = AutoModelForSequenceClassification.from_pretrained("./pt_save_pretrained") ``` </pt> <tf> ๋ฏธ์„ธ์กฐ์ •๋œ ๋ชจ๋ธ์„ ํ† ํฌ๋‚˜์ด์ €์™€ ํ•จ๊ป˜ ์ €์žฅํ•˜๋ ค๋ฉด [`TFPreTrainedModel.save_pretrained`]๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”: ```py >>> tf_save_directory = "./tf_save_pretrained" >>> tokenizer.save_pretrained(tf_save_directory) # doctest: +IGNORE_RESULT >>> tf_model.save_pretrained(tf_save_directory) ``` ๋ชจ๋ธ์„ ๋‹ค์‹œ ์‚ฌ์šฉํ•˜๋ ค๋ฉด [`TFPreTrainedModel.from_pretrained`]๋กœ ๋ชจ๋ธ์„ ๋‹ค์‹œ ๋กœ๋“œํ•˜์„ธ์š”: ```py >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained("./tf_save_pretrained") ``` </tf> </frameworkcontent> ๐Ÿค— Transformers์˜ ๋ฉ‹์ง„ ๊ธฐ๋Šฅ ์ค‘ ํ•˜๋‚˜๋Š” ๋ชจ๋ธ์„ PyTorch ๋˜๋Š” TensorFlow ๋ชจ๋ธ๋กœ ์ €์žฅํ•ด๋’€๋‹ค๊ฐ€ ๋‹ค๋ฅธ ํ”„๋ ˆ์ž„์›Œํฌ๋กœ ๋‹ค์‹œ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ๋Š” ์ ์ž…๋‹ˆ๋‹ค. 
`from_pt` ๋˜๋Š” `from_tf` ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํ•œ ํ”„๋ ˆ์ž„์›Œํฌ์—์„œ ๋‹ค๋ฅธ ํ”„๋ ˆ์ž„์›Œํฌ๋กœ ๋ณ€ํ™˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: <frameworkcontent> <pt> ```py >>> from transformers import AutoModel >>> tokenizer = AutoTokenizer.from_pretrained(tf_save_directory) >>> pt_model = AutoModelForSequenceClassification.from_pretrained(tf_save_directory, from_tf=True) ``` </pt> <tf> ```py >>> from transformers import TFAutoModel >>> tokenizer = AutoTokenizer.from_pretrained(pt_save_directory) >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(pt_save_directory, from_pt=True) ``` </tf> </frameworkcontent> ## ์ปค์Šคํ…€ ๋ชจ๋ธ ๊ตฌ์ถ•ํ•˜๊ธฐ [[custom-model-builds]] ๋ชจ๋ธ์˜ ๊ตฌ์„ฑ ํด๋ž˜์Šค๋ฅผ ์ˆ˜์ •ํ•˜์—ฌ ๋ชจ๋ธ์˜ ๊ตฌ์กฐ๋ฅผ ๋ฐ”๊ฟ€ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. (์€๋‹‰์ธต์ด๋‚˜ ์–ดํ…์…˜ ํ—ค๋“œ์˜ ์ˆ˜์™€ ๊ฐ™์€) ๋ชจ๋ธ์˜ ์†์„ฑ์€ ๊ตฌ์„ฑ์—์„œ ์ง€์ •๋˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ์ปค์Šคํ…€ ๊ตฌ์„ฑ ํด๋ž˜์Šค๋กœ ๋ชจ๋ธ์„ ๋งŒ๋“ค๋ฉด ์ฒ˜์Œ๋ถ€ํ„ฐ ์‹œ์ž‘ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ ์†์„ฑ์€ ๋ฌด์ž‘์œ„๋กœ ์ดˆ๊ธฐํ™”๋˜๋ฏ€๋กœ ์˜๋ฏธ ์žˆ๋Š” ๊ฒฐ๊ณผ๋ฅผ ์–ป์œผ๋ ค๋ฉด ๋จผ์ € ๋ชจ๋ธ์„ ํ›ˆ๋ จ์‹œ์ผœ์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋จผ์ € [`AutoConfig`]๋ฅผ ๊ฐ€์ ธ์˜ค๊ณ  ์ˆ˜์ •ํ•˜๊ณ  ์‹ถ์€ ์‚ฌ์ „ํ•™์Šต๋œ ๋ชจ๋ธ์„ ๋กœ๋“œํ•˜์„ธ์š”. [`AutoConfig.from_pretrained`] ๋‚ด๋ถ€์—์„œ (์–ดํ…์…˜ ํ—ค๋“œ ์ˆ˜์™€ ๊ฐ™์ด) ๋ณ€๊ฒฝํ•˜๋ ค๋Š” ์†์„ฑ๋ฅผ ์ง€์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoConfig >>> my_config = AutoConfig.from_pretrained("distilbert/distilbert-base-uncased", n_heads=12) ``` <frameworkcontent> <pt> [`AutoModel.from_config`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ”๊พผ ๊ตฌ์„ฑ๋Œ€๋กœ ๋ชจ๋ธ์„ ์ƒ์„ฑํ•˜์„ธ์š”: ```py >>> from transformers import AutoModel >>> my_model = AutoModel.from_config(my_config) ``` </pt> <tf> [`TFAutoModel.from_config`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ”๊พผ ๊ตฌ์„ฑ๋Œ€๋กœ ๋ชจ๋ธ์„ ์ƒ์„ฑํ•˜์„ธ์š”: ```py >>> from transformers import TFAutoModel >>> my_model = TFAutoModel.from_config(my_config) ``` </tf> </frameworkcontent> ์ปค์Šคํ…€ ๊ตฌ์„ฑ์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [์ปค์Šคํ…€ ์•„ํ‚คํ…์ฒ˜ ๋งŒ๋“ค๊ธฐ](./create_a_model) ๊ฐ€์ด๋“œ๋ฅผ ํ™•์ธํ•˜์„ธ์š”. ## Trainer - PyTorch์— ์ตœ์ ํ™”๋œ ํ›ˆ๋ จ ๋ฃจํ”„ [[trainer-a-pytorch-optimized-training-loop]] ๋ชจ๋“  ๋ชจ๋ธ์€ [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)์ด๋ฏ€๋กœ ์ผ๋ฐ˜์ ์ธ ํ›ˆ๋ จ ๋ฃจํ”„์—์„œ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ง์ ‘ ํ›ˆ๋ จ ๋ฃจํ”„๋ฅผ ์ž‘์„ฑํ•  ์ˆ˜๋„ ์žˆ์ง€๋งŒ, ๐Ÿค— Transformers๋Š” PyTorch๋ฅผ ์œ„ํ•œ [`Trainer`] ํด๋ž˜์Šค๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์ด ํด๋ž˜์Šค์—๋Š” ๊ธฐ๋ณธ ํ›ˆ๋ จ ๋ฃจํ”„๊ฐ€ ํฌํ•จ๋˜์–ด ์žˆ์œผ๋ฉฐ ๋ถ„์‚ฐ ํ›ˆ๋ จ, ํ˜ผํ•ฉ ์ •๋ฐ€๋„ ๋“ฑ๊ณผ ๊ฐ™์€ ๊ธฐ๋Šฅ์„ ์ถ”๊ฐ€๋กœ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ๊ณผ์—…์— ๋”ฐ๋ผ ๋‹ค๋ฅด์ง€๋งŒ ์ผ๋ฐ˜์ ์œผ๋กœ [`Trainer`]์— ๋‹ค์Œ ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค: 1. [`PreTrainedModel`] ๋˜๋Š” [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)๋กœ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased") ``` 2. [`TrainingArguments`]๋Š” ํ•™์Šต๋ฅ , ๋ฐฐ์น˜ ํฌ๊ธฐ, ํ›ˆ๋ จํ•  ์—ํฌํฌ ์ˆ˜์™€ ๊ฐ™์€ ๋ชจ๋ธ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ํฌํ•จํ•ฉ๋‹ˆ๋‹ค. ํ›ˆ๋ จ ์ธ์ž๋ฅผ ์ง€์ •ํ•˜์ง€ ์•Š์œผ๋ฉด ๊ธฐ๋ณธ๊ฐ’์ด ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import TrainingArguments >>> training_args = TrainingArguments( ... output_dir="path/to/save/folder/", ... learning_rate=2e-5, ... per_device_train_batch_size=8, ... per_device_eval_batch_size=8, ... num_train_epochs=2, ... ) ``` 3. 
ํ† ํฌ๋‚˜์ด์ €, ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ, ํŠน์ง• ์ถ”์ถœ๊ธฐ(feature extractor) ๋˜๋Š” ํ”„๋กœ์„ธ์„œ์™€ ์ „์ฒ˜๋ฆฌ ํด๋ž˜์Šค๋ฅผ ๋กœ๋“œํ•˜์„ธ์š”: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased") ``` 4. ๋ฐ์ดํ„ฐ์…‹์„ ๋กœ๋“œํ•˜์„ธ์š”: ```py >>> from datasets import load_dataset >>> dataset = load_dataset("rotten_tomatoes") # doctest: +IGNORE_RESULT ``` 5. ๋ฐ์ดํ„ฐ์…‹์„ ํ† ํฐํ™”ํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ์ƒ์„ฑํ•˜์„ธ์š”: ```py >>> def tokenize_dataset(dataset): ... return tokenizer(dataset["text"]) ``` ๊ทธ๋ฆฌ๊ณ  [`~datasets.Dataset.map`]๋กœ ๋ฐ์ดํ„ฐ์…‹ ์ „์ฒด์— ์ ์šฉํ•˜์„ธ์š”: ```py >>> dataset = dataset.map(tokenize_dataset, batched=True) ``` 6. [`DataCollatorWithPadding`]์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ์…‹์˜ ํ‘œ๋ณธ ๋ฌถ์Œ์„ ๋งŒ๋“œ์„ธ์š”: ```py >>> from transformers import DataCollatorWithPadding >>> data_collator = DataCollatorWithPadding(tokenizer=tokenizer) ``` ์ด์ œ ์œ„์˜ ๋ชจ๋“  ํด๋ž˜์Šค๋ฅผ [`Trainer`]๋กœ ๋ชจ์œผ์„ธ์š”: ```py >>> from transformers import Trainer >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=dataset["train"], ... eval_dataset=dataset["test"], ... tokenizer=tokenizer, ... data_collator=data_collator, ... ) # doctest: +SKIP ``` ์ค€๋น„๊ฐ€ ๋˜์—ˆ์œผ๋ฉด [`~Trainer.train`]์„ ํ˜ธ์ถœํ•˜์—ฌ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•˜์„ธ์š”: ```py >>> trainer.train() # doctest: +SKIP ``` <Tip> ๋ฒˆ์—ญ์ด๋‚˜ ์š”์•ฝ๊ณผ ๊ฐ™์ด ์‹œํ€€์Šค-์‹œํ€€์Šค ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ณผ์—…์—๋Š” [`Seq2SeqTrainer`] ๋ฐ [`Seq2SeqTrainingArguments`] ํด๋ž˜์Šค๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. </Tip> [`Trainer`] ๋‚ด์˜ ๋ฉ”์„œ๋“œ๋ฅผ ์„œ๋ธŒํด๋ž˜์Šคํ™”ํ•˜์—ฌ ํ›ˆ๋ จ ๋ฃจํ”„๋ฅผ ๋ฐ”๊ฟ€ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌ๋ฉด ์†์‹ค ํ•จ์ˆ˜, ์˜ตํ‹ฐ๋งˆ์ด์ €, ์Šค์ผ€์ค„๋Ÿฌ์™€ ๊ฐ™์€ ๊ธฐ๋Šฅ ๋˜ํ•œ ๋ฐ”๊ฟ€ ์ˆ˜ ์žˆ๊ฒŒ ๋ฉ๋‹ˆ๋‹ค. ๋ณ€๊ฒฝ ๊ฐ€๋Šฅํ•œ ๋ฉ”์†Œ๋“œ์— ๋Œ€ํ•ด์„œ๋Š” [`Trainer`] ๋ฌธ์„œ๋ฅผ ์ฐธ๊ณ ํ•˜์„ธ์š”. ํ›ˆ๋ จ ๋ฃจํ”„๋ฅผ ์ˆ˜์ •ํ•˜๋Š” ๋‹ค๋ฅธ ๋ฐฉ๋ฒ•์€ [Callbacks](./main_classes/callbacks)๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. Callbacks๋กœ ๋‹ค๋ฅธ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์™€ ํ†ตํ•ฉํ•˜๊ณ , ํ›ˆ๋ จ ๋ฃจํ”„๋ฅผ ์ฒดํฌํ•˜์—ฌ ์ง„ํ–‰ ์ƒํ™ฉ์„ ๋ณด๊ณ ๋ฐ›๊ฑฐ๋‚˜, ํ›ˆ๋ จ์„ ์กฐ๊ธฐ์— ์ค‘๋‹จํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. Callbacks์€ ํ›ˆ๋ จ ๋ฃจํ”„ ์ž์ฒด๋ฅผ ๋ฐ”๊พธ์ง€๋Š” ์•Š์Šต๋‹ˆ๋‹ค. ์†์‹ค ํ•จ์ˆ˜์™€ ๊ฐ™์€ ๊ฒƒ์„ ๋ฐ”๊พธ๋ ค๋ฉด [`Trainer`]๋ฅผ ์„œ๋ธŒํด๋ž˜์Šคํ™”ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ## TensorFlow๋กœ ํ›ˆ๋ จ์‹œํ‚ค๊ธฐ [[train-with-tensorflow]] ๋ชจ๋“  ๋ชจ๋ธ์€ [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model)์ด๋ฏ€๋กœ [Keras](https://keras.io/) API๋ฅผ ํ†ตํ•ด TensorFlow์—์„œ ํ›ˆ๋ จ์‹œํ‚ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๐Ÿค— Transformers๋Š” ๋ฐ์ดํ„ฐ์…‹์„ ์‰ฝ๊ฒŒ `tf.data.Dataset` ํ˜•ํƒœ๋กœ ์‰ฝ๊ฒŒ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ๋Š” [`~TFPreTrainedModel.prepare_tf_dataset`] ๋ฉ”์†Œ๋“œ๋ฅผ ์ œ๊ณตํ•˜๊ธฐ ๋•Œ๋ฌธ์—, Keras์˜ [`compile`](https://keras.io/api/models/model_training_apis/#compile-method) ๋ฐ [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) ๋ฉ”์†Œ๋“œ๋กœ ๋ฐ”๋กœ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 1. [`TFPreTrainedModel`] ๋˜๋Š” [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model)๋กœ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import TFAutoModelForSequenceClassification >>> model = TFAutoModelForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased") ``` 2. 
ํ† ํฌ๋‚˜์ด์ €, ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ, ํŠน์ง• ์ถ”์ถœ๊ธฐ(feature extractor) ๋˜๋Š” ํ”„๋กœ์„ธ์„œ์™€ ๊ฐ™์€ ์ „์ฒ˜๋ฆฌ ํด๋ž˜์Šค๋ฅผ ๋กœ๋“œํ•˜์„ธ์š”: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased") ``` 3. ๋ฐ์ดํ„ฐ์…‹์„ ํ† ํฐํ™”ํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ์ƒ์„ฑํ•˜์„ธ์š”: ```py >>> def tokenize_dataset(dataset): ... return tokenizer(dataset["text"]) # doctest: +SKIP ``` 4. [`~datasets.Dataset.map`]์„ ์‚ฌ์šฉํ•˜์—ฌ ์ „์ฒด ๋ฐ์ดํ„ฐ์…‹์— ํ† ํฐํ™” ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•˜๊ณ , ๋ฐ์ดํ„ฐ์…‹๊ณผ ํ† ํฌ๋‚˜์ด์ €๋ฅผ [`~TFPreTrainedModel.prepare_tf_dataset`]์— ์ „๋‹ฌํ•˜์„ธ์š”. ๋ฐฐ์น˜ ํฌ๊ธฐ๋ฅผ ๋ณ€๊ฒฝํ•˜๊ฑฐ๋‚˜ ๋ฐ์ดํ„ฐ์…‹์„ ์„ž์„ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> dataset = dataset.map(tokenize_dataset) # doctest: +SKIP >>> tf_dataset = model.prepare_tf_dataset( ... dataset["train"], batch_size=16, shuffle=True, tokenizer=tokenizer ... ) # doctest: +SKIP ``` 5. ์ค€๋น„๋˜์—ˆ์œผ๋ฉด `compile` ๋ฐ `fit`๋ฅผ ํ˜ธ์ถœํ•˜์—ฌ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•˜์„ธ์š”. ๐Ÿค— Transformers์˜ ๋ชจ๋“  ๋ชจ๋ธ์€ ๊ณผ์—…๊ณผ ๊ด€๋ จ๋œ ๊ธฐ๋ณธ ์†์‹ค ํ•จ์ˆ˜๋ฅผ ๊ฐ€์ง€๊ณ  ์žˆ์œผ๋ฏ€๋กœ ๋ช…์‹œ์ ์œผ๋กœ ์ง€์ •ํ•˜์ง€ ์•Š์•„๋„ ๋ฉ๋‹ˆ๋‹ค: ```py >>> from tensorflow.keras.optimizers import Adam >>> model.compile(optimizer=Adam(3e-5)) # No loss argument! >>> model.fit(tf_dataset) # doctest: +SKIP ``` ## ๋‹ค์Œ ๋‹จ๊ณ„๋Š” ๋ฌด์—‡์ธ๊ฐ€์š”? [[whats-next]] ๐Ÿค— Transformers ๋‘˜๋Ÿฌ๋ณด๊ธฐ๋ฅผ ๋ชจ๋‘ ์ฝ์œผ์…จ๋‹ค๋ฉด, ๊ฐ€์ด๋“œ๋ฅผ ์‚ดํŽด๋ณด๊ณ  ๋” ๊ตฌ์ฒด์ ์ธ ๊ฒƒ์„ ์ˆ˜ํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์•Œ์•„๋ณด์„ธ์š”. ์ด๋ฅผํ…Œ๋ฉด ์ปค์Šคํ…€ ๋ชจ๋ธ ๊ตฌ์ถ•ํ•˜๋Š” ๋ฐฉ๋ฒ•, ๊ณผ์—…์— ์•Œ๋งž๊ฒŒ ๋ชจ๋ธ์„ ๋ฏธ์„ธ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•, ์Šคํฌ๋ฆฝํŠธ๋กœ ๋ชจ๋ธ ํ›ˆ๋ จํ•˜๋Š” ๋ฐฉ๋ฒ• ๋“ฑ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ๐Ÿค— Transformers ํ•ต์‹ฌ ๊ฐœ๋…์— ๋Œ€ํ•ด ๋” ์•Œ์•„๋ณด๋ ค๋ฉด ์ปคํ”ผ ํ•œ ์ž” ๋“ค๊ณ  ๊ฐœ๋… ๊ฐ€์ด๋“œ๋ฅผ ์‚ดํŽด๋ณด์„ธ์š”!
mavonic_private_repos/transformers/docs/source/ko/troubleshooting.md
<!--- Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๋ฌธ์ œ ํ•ด๊ฒฐ[[troubleshoot]] ๋•Œ๋•Œ๋กœ ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ์ง€๋งŒ, ์ €ํฌ๊ฐ€ ๋„์™€๋“œ๋ฆฌ๊ฒ ์Šต๋‹ˆ๋‹ค! ์ด ๊ฐ€์ด๋“œ๋Š” ํ˜„์žฌ๊นŒ์ง€ ํ™•์ธ๋œ ๊ฐ€์žฅ ์ผ๋ฐ˜์ ์ธ ๋ฌธ์ œ ๋ช‡ ๊ฐ€์ง€์™€ ๊ทธ๊ฒƒ๋“ค์„ ํ•ด๊ฒฐํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ๋‹ค๋ฃน๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ด ๊ฐ€์ด๋“œ๋Š” ๋ชจ๋“  ๐Ÿค— Transformers ๋ฌธ์ œ๋ฅผ ํฌ๊ด„์ ์œผ๋กœ ๋‹ค๋ฃจ๊ณ  ์žˆ์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๋ฌธ์ œ ํ•ด๊ฒฐ์— ๋” ๋งŽ์€ ๋„์›€์„ ๋ฐ›์œผ๋ ค๋ฉด ๋‹ค์Œ์„ ์‹œ๋„ํ•ด๋ณด์„ธ์š”: <Youtube id="S2EEG3JIt2A"/> 1. [ํฌ๋Ÿผ](https://discuss.huggingface.co/)์—์„œ ๋„์›€์„ ์š”์ฒญํ•˜์„ธ์š”. [Beginners](https://discuss.huggingface.co/c/beginners/5) ๋˜๋Š” [๐Ÿค— Transformers](https://discuss.huggingface.co/c/transformers/9)์™€ ๊ฐ™์€ ํŠน์ • ์นดํ…Œ๊ณ ๋ฆฌ์— ์งˆ๋ฌธ์„ ๊ฒŒ์‹œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์žฌํ˜„ ๊ฐ€๋Šฅํ•œ ์ฝ”๋“œ์™€ ํ•จ๊ป˜ ์ž˜ ์„œ์ˆ ๋œ ํฌ๋Ÿผ ๊ฒŒ์‹œ๋ฌผ์„ ์ž‘์„ฑํ•˜์—ฌ ์—ฌ๋Ÿฌ๋ถ„์˜ ๋ฌธ์ œ๊ฐ€ ํ•ด๊ฒฐ๋  ๊ฐ€๋Šฅ์„ฑ์„ ๊ทน๋Œ€ํ™”ํ•˜์„ธ์š”! <Youtube id="_PAli-V4wj0"/> 2. ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์™€ ๊ด€๋ จ๋œ ๋ฒ„๊ทธ์ด๋ฉด ๐Ÿค— Transformers ์ €์žฅ์†Œ์—์„œ [์ด์Šˆ](https://github.com/huggingface/transformers/issues/new/choose)๋ฅผ ์ƒ์„ฑํ•˜์„ธ์š”. ๋ฒ„๊ทธ์— ๋Œ€ํ•ด ์„ค๋ช…ํ•˜๋Š” ์ •๋ณด๋ฅผ ๊ฐ€๋Šฅํ•œ ๋งŽ์ด ํฌํ•จํ•˜๋ ค๊ณ  ๋…ธ๋ ฅํ•˜์—ฌ, ๋ฌด์—‡์ด ์ž˜๋ชป ๋˜์—ˆ๋Š”์ง€์™€ ์–ด๋–ป๊ฒŒ ์ˆ˜์ •ํ•  ์ˆ˜ ์žˆ๋Š”์ง€ ๋” ์ž˜ ํŒŒ์•…ํ•  ์ˆ˜ ์žˆ๋„๋ก ๋„์™€์ฃผ์„ธ์š”. 3. ์ด์ „ ๋ฒ„์ „์˜ ๐Ÿค— Transformers์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ ์ค‘์š”ํ•œ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์ด ๋ฒ„์ „ ์‚ฌ์ด์— ๋„์ž…๋˜์—ˆ๊ธฐ ๋•Œ๋ฌธ์— [๋งˆ์ด๊ทธ๋ ˆ์ด์…˜](migration) ๊ฐ€์ด๋“œ๋ฅผ ํ™•์ธํ•˜์„ธ์š”. ๋ฌธ์ œ ํ•ด๊ฒฐ ๋ฐ ๋„์›€ ๋งค๋‰ด์–ผ์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ Hugging Face ๊ฐ•์ขŒ์˜ [8์žฅ](https://huggingface.co/course/chapter8/1?fw=pt)์„ ์ฐธ์กฐํ•˜์„ธ์š”. ## ๋ฐฉํ™”๋ฒฝ ํ™˜๊ฒฝ[[firewalled-environments]] ํด๋ผ์šฐ๋“œ ๋ฐ ๋‚ด๋ถ€๋ง(intranet) ์„ค์ •์˜ ์ผ๋ถ€ GPU ์ธ์Šคํ„ด์Šค๋Š” ์™ธ๋ถ€ ์—ฐ๊ฒฐ์— ๋Œ€ํ•œ ๋ฐฉํ™”๋ฒฝ์œผ๋กœ ์ฐจ๋‹จ๋˜์–ด ์—ฐ๊ฒฐ ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์Šคํฌ๋ฆฝํŠธ๊ฐ€ ๋ชจ๋ธ ๊ฐ€์ค‘์น˜๋‚˜ ๋ฐ์ดํ„ฐ๋ฅผ ๋‹ค์šด๋กœ๋“œํ•˜๋ ค๊ณ  ํ•  ๋•Œ, ๋‹ค์šด๋กœ๋“œ๊ฐ€ ์ค‘๋‹จ๋˜๊ณ  ๋‹ค์Œ ๋ฉ”์‹œ์ง€์™€ ํ•จ๊ป˜ ์‹œ๊ฐ„ ์ดˆ๊ณผ๋ฉ๋‹ˆ๋‹ค: ``` ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on. ``` ์ด ๊ฒฝ์šฐ์—๋Š” ์—ฐ๊ฒฐ ์˜ค๋ฅ˜๋ฅผ ํ”ผํ•˜๊ธฐ ์œ„ํ•ด ๐Ÿค— Transformers๋ฅผ [์˜คํ”„๋ผ์ธ ๋ชจ๋“œ](installation#offline-mode)๋กœ ์‹คํ–‰ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ## CUDA ๋ฉ”๋ชจ๋ฆฌ ๋ถ€์กฑ(CUDA out of memory)[[cuda-out-of-memory]] ์ˆ˜๋ฐฑ๋งŒ ๊ฐœ์˜ ๋งค๊ฐœ๋ณ€์ˆ˜๋กœ ๋Œ€๊ทœ๋ชจ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๋Š” ๊ฒƒ์€ ์ ์ ˆํ•œ ํ•˜๋“œ์›จ์–ด ์—†์ด ์–ด๋ ค์šธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. GPU ๋ฉ”๋ชจ๋ฆฌ๊ฐ€ ๋ถ€์กฑํ•œ ๊ฒฝ์šฐ ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ๋Š” ์ผ๋ฐ˜์ ์ธ ์˜ค๋ฅ˜๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ``` CUDA out of memory. 
Tried to allocate 256.00 MiB (GPU 0; 11.17 GiB total capacity; 9.70 GiB already allocated; 179.81 MiB free; 9.85 GiB reserved in total by PyTorch) ``` ๋‹ค์Œ์€ ๋ฉ”๋ชจ๋ฆฌ ์‚ฌ์šฉ์„ ์ค„์ด๊ธฐ ์œ„ํ•ด ์‹œ๋„ํ•ด ๋ณผ ์ˆ˜ ์žˆ๋Š” ๋ช‡ ๊ฐ€์ง€ ์ž ์žฌ์ ์ธ ํ•ด๊ฒฐ์ฑ…์ž…๋‹ˆ๋‹ค: - [`TrainingArguments`]์˜ [`per_device_train_batch_size`](main_classes/trainer#transformers.TrainingArguments.per_device_train_batch_size) ๊ฐ’์„ ์ค„์ด์„ธ์š”. - [`TrainingArguments`]์˜ [`gradient_accumulation_steps`](main_classes/trainer#transformers.TrainingArguments.gradient_accumulation_steps)์€ ์ „์ฒด ๋ฐฐ์น˜ ํฌ๊ธฐ๋ฅผ ํšจ๊ณผ์ ์œผ๋กœ ๋Š˜๋ฆฌ์„ธ์š”. <Tip> ๋ฉ”๋ชจ๋ฆฌ ์ ˆ์•ฝ ๊ธฐ์ˆ ์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ ์„ฑ๋Šฅ [๊ฐ€์ด๋“œ](performance)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. </Tip> ## ์ €์žฅ๋œ TensorFlow ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค(Unable to load a saved TensorFlow model)[[unable-to-load-a-saved-uensorFlow-model]] TensorFlow์˜ [model.save](https://www.tensorflow.org/tutorials/keras/save_and_load#save_the_entire_model) ๋ฉ”์†Œ๋“œ๋Š” ์•„ํ‚คํ…์ฒ˜, ๊ฐ€์ค‘์น˜, ํ›ˆ๋ จ ๊ตฌ์„ฑ ๋“ฑ ์ „์ฒด ๋ชจ๋ธ์„ ๋‹จ์ผ ํŒŒ์ผ์— ์ €์žฅํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ๋ชจ๋ธ ํŒŒ์ผ์„ ๋‹ค์‹œ ๊ฐ€์ ธ์˜ฌ ๋•Œ ๐Ÿค— Transformers๋Š” ๋ชจ๋ธ ํŒŒ์ผ์— ์žˆ๋Š” ๋ชจ๋“  TensorFlow ๊ด€๋ จ ๊ฐ์ฒด๋ฅผ ๊ฐ€์ ธ์˜ค์ง€ ์•Š์„ ์ˆ˜ ์žˆ๊ธฐ ๋•Œ๋ฌธ์— ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. TensorFlow ๋ชจ๋ธ ์ €์žฅ ๋ฐ ๊ฐ€์ ธ์˜ค๊ธฐ ๋ฌธ์ œ๋ฅผ ํ”ผํ•˜๋ ค๋ฉด ๋‹ค์Œ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค: - ๋ชจ๋ธ ๊ฐ€์ค‘์น˜๋ฅผ `h5` ํŒŒ์ผ ํ™•์žฅ์ž๋กœ [`model.save_weights`](https://www.tensorflow.org/tutorials/keras/save_and_load#save_the_entire_model)๋กœ ์ €์žฅํ•œ ๋‹ค์Œ [`~TFPreTrainedModel.from_pretrained`]๋กœ ๋ชจ๋ธ์„ ๋‹ค์‹œ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> from transformers import TFPreTrainedModel >>> from tensorflow import keras >>> model.save_weights("some_folder/tf_model.h5") >>> model = TFPreTrainedModel.from_pretrained("some_folder") ``` - ๋ชจ๋ธ์„ [`~TFPretrainedModel.save_pretrained`]๋กœ ์ €์žฅํ•˜๊ณ  [`~TFPreTrainedModel.from_pretrained`]๋กœ ๋‹ค์‹œ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> from transformers import TFPreTrainedModel >>> model.save_pretrained("path_to/model") >>> model = TFPreTrainedModel.from_pretrained("path_to/model") ``` ## ImportError[[importerror]] ํŠนํžˆ ์ตœ์‹  ๋ชจ๋ธ์ธ ๊ฒฝ์šฐ ๋งŒ๋‚  ์ˆ˜ ์žˆ๋Š” ๋‹ค๋ฅธ ์ผ๋ฐ˜์ ์ธ ์˜ค๋ฅ˜๋Š” `ImportError`์ž…๋‹ˆ๋‹ค: ``` ImportError: cannot import name 'ImageGPTImageProcessor' from 'transformers' (unknown location) ``` ์ด๋Ÿฌํ•œ ์˜ค๋ฅ˜ ์œ ํ˜•์˜ ๊ฒฝ์šฐ ์ตœ์‹  ๋ชจ๋ธ์— ์•ก์„ธ์Šคํ•  ์ˆ˜ ์žˆ๋„๋ก ์ตœ์‹  ๋ฒ„์ „์˜ ๐Ÿค— Transformers๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install transformers --upgrade ``` ## CUDA error: device-side assert triggered[[cuda-error-deviceside-assert-triggered]] ๋•Œ๋•Œ๋กœ ์žฅ์น˜ ์ฝ”๋“œ ์˜ค๋ฅ˜์— ๋Œ€ํ•œ ์ผ๋ฐ˜์ ์ธ CUDA ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ``` RuntimeError: CUDA error: device-side assert triggered ``` ๋” ์ž์„ธํ•œ ์˜ค๋ฅ˜ ๋ฉ”์‹œ์ง€๋ฅผ ์–ป์œผ๋ ค๋ฉด ์šฐ์„  ์ฝ”๋“œ๋ฅผ CPU์—์„œ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ ํ™˜๊ฒฝ ๋ณ€์ˆ˜๋ฅผ ์ฝ”๋“œ์˜ ์‹œ์ž‘ ๋ถ€๋ถ„์— ์ถ”๊ฐ€ํ•˜์—ฌ CPU๋กœ ์ „ํ™˜ํ•˜์„ธ์š”: ```py >>> import os >>> os.environ["CUDA_VISIBLE_DEVICES"] = "" ``` ๋˜ ๋‹ค๋ฅธ ์˜ต์…˜์€ GPU์—์„œ ๋” ๋‚˜์€ ์—ญ์ถ”์ (traceback)์„ ์–ป๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. 
๋‹ค์Œ ํ™˜๊ฒฝ ๋ณ€์ˆ˜๋ฅผ ์ฝ”๋“œ์˜ ์‹œ์ž‘ ๋ถ€๋ถ„์— ์ถ”๊ฐ€ํ•˜์—ฌ ์—ญ์ถ”์ ์ด ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•œ ์†Œ์Šค๋ฅผ ๊ฐ€๋ฆฌํ‚ค๋„๋ก ํ•˜์„ธ์š”: ```py >>> import os >>> os.environ["CUDA_LAUNCH_BLOCKING"] = "1" ``` ## ํŒจ๋”ฉ ํ† ํฐ์ด ๋งˆ์Šคํ‚น๋˜์ง€ ์•Š์€ ๊ฒฝ์šฐ ์ž˜๋ชป๋œ ์ถœ๋ ฅ(Incorrect output when padding tokens aren't masked)[[incorrect-output-when-padding-tokens-arent-masked]] ๊ฒฝ์šฐ์— ๋”ฐ๋ผ `input_ids`์— ํŒจ๋”ฉ ํ† ํฐ์ด ํฌํ•จ๋œ ๊ฒฝ์šฐ `hidden_state` ์ถœ๋ ฅ์ด ์˜ฌ๋ฐ”๋ฅด์ง€ ์•Š์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฐ๋ชจ๋ฅผ ์œ„ํ•ด ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๊ฐ€์ ธ์˜ค์„ธ์š”. ๋ชจ๋ธ์˜ `pad_token_id`์— ์•ก์„ธ์Šคํ•˜์—ฌ ํ•ด๋‹น ๊ฐ’์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ผ๋ถ€ ๋ชจ๋ธ์˜ ๊ฒฝ์šฐ `pad_token_id`๊ฐ€ `None`์ผ ์ˆ˜ ์žˆ์ง€๋งŒ ์–ธ์ œ๋“ ์ง€ ์ˆ˜๋™์œผ๋กœ ์„ค์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> from transformers import AutoModelForSequenceClassification >>> import torch >>> model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-uncased") >>> model.config.pad_token_id 0 ``` ๋‹ค์Œ ์˜ˆ์ œ๋Š” ํŒจ๋”ฉ ํ† ํฐ์„ ๋งˆ์Šคํ‚นํ•˜์ง€ ์•Š์€ ์ถœ๋ ฅ์„ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค: ```py >>> input_ids = torch.tensor([[7592, 2057, 2097, 2393, 9611, 2115], [7592, 0, 0, 0, 0, 0]]) >>> output = model(input_ids) >>> print(output.logits) tensor([[ 0.0082, -0.2307], [ 0.1317, -0.1683]], grad_fn=<AddmmBackward0>) ``` ๋‹ค์Œ์€ ๋‘ ๋ฒˆ์งธ ์‹œํ€€์Šค์˜ ์‹ค์ œ ์ถœ๋ ฅ์ž…๋‹ˆ๋‹ค: ```py >>> input_ids = torch.tensor([[7592]]) >>> output = model(input_ids) >>> print(output.logits) tensor([[-0.1008, -0.4061]], grad_fn=<AddmmBackward0>) ``` ๋Œ€๋ถ€๋ถ„์˜ ๊ฒฝ์šฐ ๋ชจ๋ธ์— `attention_mask`๋ฅผ ์ œ๊ณตํ•˜์—ฌ ํŒจ๋”ฉ ํ† ํฐ์„ ๋ฌด์‹œํ•ด์•ผ ์ด๋Ÿฌํ•œ ์กฐ์šฉํ•œ ์˜ค๋ฅ˜๋ฅผ ๋ฐฉ์ง€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด์ œ ๋‘ ๋ฒˆ์งธ ์‹œํ€€์Šค์˜ ์ถœ๋ ฅ์ด ์‹ค์ œ ์ถœ๋ ฅ๊ณผ ์ผ์น˜ํ•ฉ๋‹ˆ๋‹ค: <Tip> ์ผ๋ฐ˜์ ์œผ๋กœ ํ† ํฌ๋‚˜์ด์ €๋Š” ํŠน์ • ํ† ํฌ๋‚˜์ด์ €์˜ ๊ธฐ๋ณธ ๊ฐ’์„ ๊ธฐ์ค€์œผ๋กœ ์‚ฌ์šฉ์ž์— ๋Œ€ํ•œ 'attention_mask'๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค. </Tip> ```py >>> attention_mask = torch.tensor([[1, 1, 1, 1, 1, 1], [1, 0, 0, 0, 0, 0]]) >>> output = model(input_ids, attention_mask=attention_mask) >>> print(output.logits) tensor([[ 0.0082, -0.2307], [-0.1008, -0.4061]], grad_fn=<AddmmBackward0>) ``` ๐Ÿค— Transformers๋Š” ํŒจ๋”ฉ ํ† ํฐ์ด ์ œ๊ณต๋œ ๊ฒฝ์šฐ ํŒจ๋”ฉ ํ† ํฐ์„ ๋งˆ์Šคํ‚นํ•˜๊ธฐ ์œ„ํ•œ `attention_mask`๋ฅผ ์ž๋™์œผ๋กœ ์ƒ์„ฑํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๊ทธ ์ด์œ ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: - ์ผ๋ถ€ ๋ชจ๋ธ์—๋Š” ํŒจ๋”ฉ ํ† ํฐ์ด ์—†์Šต๋‹ˆ๋‹ค. - ์ผ๋ถ€ ์‚ฌ์šฉ ์‚ฌ๋ก€์˜ ๊ฒฝ์šฐ ์‚ฌ์šฉ์ž๊ฐ€ ๋ชจ๋ธ์ด ํŒจ๋”ฉ ํ† ํฐ์„ ๊ด€๋ฆฌํ•˜๊ธฐ๋ฅผ ์›ํ•ฉ๋‹ˆ๋‹ค. ## ValueError: ์ด ์œ ํ˜•์˜ AutoModel์— ๋Œ€ํ•ด ์ธ์‹ํ•  ์ˆ˜ ์—†๋Š” XYZ ๊ตฌ์„ฑ ํด๋ž˜์Šค(ValueError: Unrecognized configuration class XYZ for this kind of AutoModel)[[valueerror-unrecognized-configuration-class-xyz-for-this-kind-of-automodel]] ์ผ๋ฐ˜์ ์œผ๋กœ, ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์˜ ์ธ์Šคํ„ด์Šค๋ฅผ ๊ฐ€์ ธ์˜ค๊ธฐ ์œ„ํ•ด [`AutoModel`] ํด๋ž˜์Šค๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ์ด ํด๋ž˜์Šค๋Š” ๊ตฌ์„ฑ์— ๋”ฐ๋ผ ์ฃผ์–ด์ง„ ์ฒดํฌํฌ์ธํŠธ์—์„œ ์˜ฌ๋ฐ”๋ฅธ ์•„ํ‚คํ…์ฒ˜๋ฅผ ์ž๋™์œผ๋กœ ์ถ”๋ก ํ•˜๊ณ  ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ ์ฒดํฌํฌ์ธํŠธ์—์„œ ๊ฐ€์ ธ์˜ฌ ๋•Œ ์ด `ValueError`๊ฐ€ ๋ฐœ์ƒํ•˜๋ฉด, ์ด๋Š” Auto ํด๋ž˜์Šค๊ฐ€ ์ฃผ์–ด์ง„ ์ฒดํฌํฌ์ธํŠธ์˜ ๊ตฌ์„ฑ์—์„œ ๊ฐ€์ ธ์˜ค๋ ค๋Š” ๋ชจ๋ธ ์œ ํ˜•๊ณผ ๋งคํ•‘์„ ์ฐพ์„ ์ˆ˜ ์—†๋‹ค๋Š” ๊ฒƒ์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค. ๊ฐ€์žฅ ํ”ํ•˜๊ฒŒ ๋ฐœ์ƒํ•˜๋Š” ๊ฒฝ์šฐ๋Š” ์ฒดํฌํฌ์ธํŠธ๊ฐ€ ์ฃผ์–ด์ง„ ํƒœ์Šคํฌ๋ฅผ ์ง€์›ํ•˜์ง€ ์•Š์„ ๋•Œ์ž…๋‹ˆ๋‹ค. 
์˜ˆ๋ฅผ ๋“ค์–ด, ๋‹ค์Œ ์˜ˆ์ œ์—์„œ ์งˆ์˜์‘๋‹ต์— ๋Œ€ํ•œ GPT2๊ฐ€ ์—†๊ธฐ ๋•Œ๋ฌธ์— ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoProcessor, AutoModelForQuestionAnswering >>> processor = AutoProcessor.from_pretrained("openai-community/gpt2-medium") >>> model = AutoModelForQuestionAnswering.from_pretrained("openai-community/gpt2-medium") ValueError: Unrecognized configuration class <class 'transformers.models.gpt2.configuration_gpt2.GPT2Config'> for this kind of AutoModel: AutoModelForQuestionAnswering. Model type should be one of AlbertConfig, BartConfig, BertConfig, BigBirdConfig, BigBirdPegasusConfig, BloomConfig, ... ```
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/ko/task_summary.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๐Ÿค— Transformers๋กœ ํ•  ์ˆ˜ ์žˆ๋Š” ๊ฒƒ[[what__transformers_can_do]] ๐Ÿค— Transformers๋Š” ์ž์—ฐ์–ด์ฒ˜๋ฆฌ(NLP), ์ปดํ“จํ„ฐ ๋น„์ „, ์˜ค๋””์˜ค ๋ฐ ์Œ์„ฑ ์ฒ˜๋ฆฌ ์ž‘์—…์— ๋Œ€ํ•œ ์‚ฌ์ „ํ›ˆ๋ จ๋œ ์ตœ์ฒจ๋‹จ ๋ชจ๋ธ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์ž…๋‹ˆ๋‹ค. ์ด ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋Š” ํŠธ๋žœ์Šคํฌ๋จธ ๋ชจ๋ธ๋ฟ๋งŒ ์•„๋‹ˆ๋ผ ์ปดํ“จํ„ฐ ๋น„์ „ ์ž‘์—…์„ ์œ„ํ•œ ํ˜„๋Œ€์ ์ธ ํ•ฉ์„ฑ๊ณฑ ์‹ ๊ฒฝ๋ง๊ณผ ๊ฐ™์€ ํŠธ๋žœ์Šคํฌ๋จธ๊ฐ€ ์•„๋‹Œ ๋ชจ๋ธ๋„ ํฌํ•จํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ์Šค๋งˆํŠธํฐ, ์•ฑ, ํ…”๋ ˆ๋น„์ „๊ณผ ๊ฐ™์€ ์˜ค๋Š˜๋‚  ๊ฐ€์žฅ ์ธ๊ธฐ ์žˆ๋Š” ์†Œ๋น„์ž ์ œํ’ˆ์„ ์‚ดํŽด๋ณด๋ฉด, ๋”ฅ๋Ÿฌ๋‹ ๊ธฐ์ˆ ์ด ๊ทธ ๋’ค์— ์‚ฌ์šฉ๋˜๊ณ  ์žˆ์„ ํ™•๋ฅ ์ด ๋†’์Šต๋‹ˆ๋‹ค. ์Šค๋งˆํŠธํฐ์œผ๋กœ ์ดฌ์˜ํ•œ ์‚ฌ์ง„์—์„œ ๋ฐฐ๊ฒฝ ๊ฐ์ฒด๋ฅผ ์ œ๊ฑฐํ•˜๊ณ  ์‹ถ๋‹ค๋ฉด ์–ด๋–ป๊ฒŒ ํ• ๊นŒ์š”? ์ด๋Š” ํŒŒ๋†‰ํ‹ฑ ์„ธ๊ทธ๋ฉ˜ํ…Œ์ด์…˜ ์ž‘์—…์˜ ์˜ˆ์ž…๋‹ˆ๋‹ค(์•„์ง ์ด๊ฒŒ ๋ฌด์—‡์ธ์ง€ ๋ชจ๋ฅธ๋‹ค๋ฉด, ๋‹ค์Œ ์„น์…˜์—์„œ ์„ค๋ช…ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค!). ์ด ํŽ˜์ด์ง€๋Š” ๋‹ค์–‘ํ•œ ์Œ์„ฑ ๋ฐ ์˜ค๋””์˜ค, ์ปดํ“จํ„ฐ ๋น„์ „, NLP ์ž‘์—…์„ ๐Ÿค— Transformers ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ํ™œ์šฉํ•˜์—ฌ ๋‹ค๋ฃจ๋Š” ๊ฐ„๋‹จํ•œ ์˜ˆ์ œ๋ฅผ 3์ค„์˜ ์ฝ”๋“œ๋กœ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ## ์˜ค๋””์˜ค[[audio]] ์Œ์„ฑ ๋ฐ ์˜ค๋””์˜ค ์ฒ˜๋ฆฌ ์ž‘์—…์€ ๋‹ค๋ฅธ ๋ชจ๋‹ฌ๋ฆฌํ‹ฐ์™€ ์•ฝ๊ฐ„ ๋‹ค๋ฆ…๋‹ˆ๋‹ค. ์ด๋Š” ์ฃผ๋กœ ์˜ค๋””์˜ค๊ฐ€ ์—ฐ์†์ ์ธ ์‹ ํ˜ธ๋กœ ์ž…๋ ฅ๋˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ํ…์ŠคํŠธ์™€ ๋‹ฌ๋ฆฌ ์›๋ณธ ์˜ค๋””์˜ค ํŒŒํ˜•(waveform)์€ ๋ฌธ์žฅ์ด ๋‹จ์–ด๋กœ ๋‚˜๋ˆ ์ง€๋Š” ๊ฒƒ์ฒ˜๋Ÿผ ๊น”๋”ํ•˜๊ฒŒ ์ด์‚ฐ์ ์ธ ๋ฌถ์Œ์œผ๋กœ ๋‚˜๋ˆŒ ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค. ์ด๋ฅผ ๊ทน๋ณตํ•˜๊ธฐ ์œ„ํ•ด ์›๋ณธ ์˜ค๋””์˜ค ์‹ ํ˜ธ๋Š” ์ผ์ •ํ•œ ๊ฐ„๊ฒฉ์œผ๋กœ ์ƒ˜ํ”Œ๋ง๋ฉ๋‹ˆ๋‹ค. ํ•ด๋‹น ๊ฐ„๊ฒฉ ๋‚ด์—์„œ ๋” ๋งŽ์€ ์ƒ˜ํ”Œ์„ ์ทจํ•  ๊ฒฝ์šฐ ์ƒ˜ํ”Œ๋ง๋ฅ ์ด ๋†’์•„์ง€๋ฉฐ, ์˜ค๋””์˜ค๋Š” ์›๋ณธ ์˜ค๋””์˜ค ์†Œ์Šค์— ๋” ๊ฐ€๊นŒ์›Œ์ง‘๋‹ˆ๋‹ค. ๊ณผ๊ฑฐ์˜ ์ ‘๊ทผ ๋ฐฉ์‹์€ ์˜ค๋””์˜ค์—์„œ ์œ ์šฉํ•œ ํŠน์ง•์„ ์ถ”์ถœํ•˜๊ธฐ ์œ„ํ•ด ์˜ค๋””์˜ค๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๋Š” ๊ฒƒ์ด์—ˆ์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ํ˜„์žฌ๋Š” ์›๋ณธ ์˜ค๋””์˜ค ํŒŒํ˜•์„ ํŠน์„ฑ ์ธ์ฝ”๋”์— ์ง์ ‘ ๋„ฃ์–ด์„œ ์˜ค๋””์˜ค ํ‘œํ˜„(representation)์„ ์ถ”์ถœํ•˜๋Š” ๊ฒƒ์ด ๋” ์ผ๋ฐ˜์ ์ž…๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ์ „์ฒ˜๋ฆฌ ๋‹จ๊ณ„๊ฐ€ ๋‹จ์ˆœํ•ด์ง€๊ณ  ๋ชจ๋ธ์ด ๊ฐ€์žฅ ์ค‘์š”ํ•œ ํŠน์ง•์„ ํ•™์Šตํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ### ์˜ค๋””์˜ค ๋ถ„๋ฅ˜[[audio_classification]] ์˜ค๋””์˜ค ๋ถ„๋ฅ˜๋Š” ์˜ค๋””์˜ค ๋ฐ์ดํ„ฐ์— ๋ฏธ๋ฆฌ ์ •์˜๋œ ํด๋ž˜์Šค ์ง‘ํ•ฉ์˜ ๋ ˆ์ด๋ธ”์„ ์ง€์ •ํ•˜๋Š” ์ž‘์—…์ž…๋‹ˆ๋‹ค. ์ด๋Š” ๋งŽ์€ ๊ตฌ์ฒด์ ์ธ ์‘์šฉ ํ”„๋กœ๊ทธ๋žจ์„ ํฌํ•จํ•œ ๋„“์€ ๋ฒ”์ฃผ์ž…๋‹ˆ๋‹ค. ์ผ๋ถ€ ์˜ˆ์‹œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: * ์Œํ–ฅ ์žฅ๋ฉด ๋ถ„๋ฅ˜: ์˜ค๋””์˜ค์— ์žฅ๋ฉด ๋ ˆ์ด๋ธ”("์‚ฌ๋ฌด์‹ค", "ํ•ด๋ณ€", "๊ฒฝ๊ธฐ์žฅ")์„ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค. * ์Œํ–ฅ ์ด๋ฒคํŠธ ๊ฐ์ง€: ์˜ค๋””์˜ค์— ์†Œ๋ฆฌ ์ด๋ฒคํŠธ ๋ ˆ์ด๋ธ”("์ฐจ ๊ฒฝ์ ", "๊ณ ๋ž˜ ์šธ์Œ์†Œ๋ฆฌ", "์œ ๋ฆฌ ํŒŒ์†")์„ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค. * ํƒœ๊น…: ์—ฌ๋Ÿฌ ๊ฐ€์ง€ ์†Œ๋ฆฌ(์ƒˆ ์ง€์ €๊ท, ํšŒ์˜์—์„œ์˜ ํ™”์ž ์‹๋ณ„)๊ฐ€ ํฌํ•จ๋œ ์˜ค๋””์˜ค์— ๋ ˆ์ด๋ธ”์„ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค. 
* ์Œ์•… ๋ถ„๋ฅ˜: ์Œ์•…์— ์žฅ๋ฅด ๋ ˆ์ด๋ธ”("๋ฉ”ํƒˆ", "ํž™ํ•ฉ", "์ปจํŠธ๋ฆฌ")์„ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import pipeline >>> classifier = pipeline(task="audio-classification", model="superb/hubert-base-superb-er") >>> preds = classifier("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac") >>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds] >>> preds [{'score': 0.4532, 'label': 'hap'}, {'score': 0.3622, 'label': 'sad'}, {'score': 0.0943, 'label': 'neu'}, {'score': 0.0903, 'label': 'ang'}] ``` ### ์ž๋™ ์Œ์„ฑ ์ธ์‹[[automatic_speech_recognition]] ์ž๋™ ์Œ์„ฑ ์ธ์‹(ASR)์€ ์Œ์„ฑ์„ ํ…์ŠคํŠธ๋กœ ๋ณ€ํ™˜ํ•˜๋Š” ์ž‘์—…์ž…๋‹ˆ๋‹ค. ์Œ์„ฑ์€ ์ธ๊ฐ„์˜ ์ž์—ฐ์Šค๋Ÿฌ์šด ์˜์‚ฌ์†Œํ†ต ํ˜•ํƒœ์ด๊ธฐ ๋•Œ๋ฌธ์— ASR์€ ๊ฐ€์žฅ ์ผ๋ฐ˜์ ์ธ ์˜ค๋””์˜ค ์ž‘์—… ์ค‘ ํ•˜๋‚˜์ž…๋‹ˆ๋‹ค. ์˜ค๋Š˜๋‚  ASR ์‹œ์Šคํ…œ์€ ์Šคํ”ผ์ปค, ์ „ํ™” ๋ฐ ์ž๋™์ฐจ์™€ ๊ฐ™์€ "์Šค๋งˆํŠธ" ๊ธฐ์ˆ  ์ œํ’ˆ์— ๋‚ด์žฅ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ์šฐ๋ฆฌ๋Š” ๊ฐ€์ƒ ๋น„์„œ์—๊ฒŒ ์Œ์•… ์žฌ์ƒ, ์•Œ๋ฆผ ์„ค์ • ๋ฐ ๋‚ ์”จ ์ •๋ณด๋ฅผ ์š”์ฒญํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ํŠธ๋žœ์Šคํฌ๋จธ ์•„ํ‚คํ…์ฒ˜๊ฐ€ ํ•ด๊ฒฐํ•˜๋Š” ๋ฐ ๋„์›€์„ ์ค€ ํ•ต์‹ฌ ๋„์ „ ๊ณผ์ œ ์ค‘ ํ•˜๋‚˜๋Š” ์–‘์ด ๋ฐ์ดํ„ฐ ์–‘์ด ์ ์€ ์–ธ์–ด(low-resource language)์— ๋Œ€ํ•œ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋Œ€๋Ÿ‰์˜ ์Œ์„ฑ ๋ฐ์ดํ„ฐ๋กœ ์‚ฌ์ „ ํ›ˆ๋ จํ•œ ํ›„ ๋ฐ์ดํ„ฐ ์–‘์ด ์ ์€ ์–ธ์–ด์—์„œ ๋ ˆ์ด๋ธ”์ด ์ง€์ •๋œ ์Œ์„ฑ ๋ฐ์ดํ„ฐ 1์‹œ๊ฐ„๋งŒ์œผ๋กœ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋ฉด ์ด์ „์˜ 100๋ฐฐ ๋งŽ์€ ๋ ˆ์ด๋ธ”์ด ์ง€์ •๋œ ๋ฐ์ดํ„ฐ๋กœ ํ›ˆ๋ จ๋œ ASR ์‹œ์Šคํ…œ๋ณด๋‹ค ํ›จ์”ฌ ๋” ๋†’์€ ํ’ˆ์งˆ์˜ ๊ฒฐ๊ณผ๋ฅผ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> from transformers import pipeline >>> transcriber = pipeline(task="automatic-speech-recognition", model="openai/whisper-small") >>> transcriber("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac") {'text': ' I have a dream that one day this nation will rise up and live out the true meaning of its creed.'} ``` ## ์ปดํ“จํ„ฐ ๋น„์ „[[computer_vision]] ์ปดํ“จํ„ฐ ๋น„์ „ ์ž‘์—… ์ค‘ ๊ฐ€์žฅ ์ดˆ๊ธฐ์˜ ์„ฑ๊ณต์ ์ธ ์ž‘์—… ์ค‘ ํ•˜๋‚˜๋Š” [ํ•ฉ์„ฑ๊ณฑ ์‹ ๊ฒฝ๋ง(CNN)](glossary#convolution)์„ ์‚ฌ์šฉํ•˜์—ฌ ์šฐํŽธ๋ฒˆํ˜ธ ์ˆซ์ž ์ด๋ฏธ์ง€๋ฅผ ์ธ์‹ํ•˜๋Š” ๊ฒƒ์ด์—ˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€๋Š” ํ”ฝ์…€๋กœ ๊ตฌ์„ฑ๋˜์–ด ์žˆ์œผ๋ฉฐ ๊ฐ ํ”ฝ์…€์€ ์ˆซ์ž ๊ฐ’์œผ๋กœ ํ‘œํ˜„๋ฉ๋‹ˆ๋‹ค. ์ด๋กœ์จ ์ด๋ฏธ์ง€๋ฅผ ํ”ฝ์…€ ๊ฐ’์˜ ํ–‰๋ ฌ๋กœ ๋‚˜ํƒ€๋‚ด๋Š” ๊ฒƒ์ด ์‰ฌ์›Œ์ง‘๋‹ˆ๋‹ค. ํŠน์ •ํ•œ ํ”ฝ์…€ ๊ฐ’์˜ ์กฐํ•ฉ์€ ์ด๋ฏธ์ง€์˜ ์ƒ‰์ƒ์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค. ์ปดํ“จํ„ฐ ๋น„์ „ ์ž‘์—…์€ ์ผ๋ฐ˜์ ์œผ๋กœ ๋‹ค์Œ ๋‘ ๊ฐ€์ง€ ๋ฐฉ๋ฒ•์œผ๋กœ ์ ‘๊ทผ ๊ฐ€๋Šฅํ•ฉ๋‹ˆ๋‹ค: 1. ํ•ฉ์„ฑ๊ณฑ์„ ์‚ฌ์šฉํ•˜์—ฌ ์ด๋ฏธ์ง€์˜ ๋‚ฎ์€ ์ˆ˜์ค€ ํŠน์ง•์—์„œ ๋†’์€ ์ˆ˜์ค€์˜ ์ถ”์ƒ์ ์ธ ์š”์†Œ๊นŒ์ง€ ๊ณ„์ธต์ ์œผ๋กœ ํ•™์Šตํ•ฉ๋‹ˆ๋‹ค. 2. ์ด๋ฏธ์ง€๋ฅผ ํŒจ์น˜๋กœ ๋‚˜๋ˆ„๊ณ  ํŠธ๋žœ์Šคํฌ๋จธ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ ์ง„์ ์œผ๋กœ ๊ฐ ์ด๋ฏธ์ง€ ํŒจ์น˜๊ฐ€ ์„œ๋กœ ์–ด๋– ํ•œ ๋ฐฉ์‹์œผ๋กœ ์—ฐ๊ด€๋˜์–ด ์ด๋ฏธ์ง€๋ฅผ ํ˜•์„ฑํ•˜๋Š”์ง€ ํ•™์Šตํ•ฉ๋‹ˆ๋‹ค. `CNN`์—์„œ ์„ ํ˜ธํ•˜๋Š” ์ƒํ–ฅ์‹ ์ ‘๊ทผ๋ฒ•๊ณผ๋Š” ๋‹ฌ๋ฆฌ, ์ด ๋ฐฉ์‹์€ ํ๋ฆฟํ•œ ์ด๋ฏธ์ง€๋กœ ์ดˆ์•ˆ์„ ๊ทธ๋ฆฌ๊ณ  ์ ์ง„์ ์œผ๋กœ ์„ ๋ช…ํ•œ ์ด๋ฏธ์ง€๋กœ ๋งŒ๋“ค์–ด๊ฐ€๋Š” ๊ฒƒ๊ณผ ์œ ์‚ฌํ•ฉ๋‹ˆ๋‹ค. ### ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜[[image_classification]] ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜๋Š” ํ•œ ๊ฐœ์˜ ์ „์ฒด ์ด๋ฏธ์ง€์— ๋ฏธ๋ฆฌ ์ •์˜๋œ ํด๋ž˜์Šค ์ง‘ํ•ฉ์˜ ๋ ˆ์ด๋ธ”์„ ์ง€์ •ํ•˜๋Š” ์ž‘์—…์ž…๋‹ˆ๋‹ค. ๋Œ€๋ถ€๋ถ„์˜ ๋ถ„๋ฅ˜ ์ž‘์—…๊ณผ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ, ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜์—๋Š” ๋‹ค์–‘ํ•œ ์‹ค์šฉ์ ์ธ ์šฉ๋„๊ฐ€ ์žˆ์œผ๋ฉฐ, ์ผ๋ถ€ ์˜ˆ์‹œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: * ์˜๋ฃŒ: ์งˆ๋ณ‘์„ ๊ฐ์ง€ํ•˜๊ฑฐ๋‚˜ ํ™˜์ž ๊ฑด๊ฐ•์„ ๋ชจ๋‹ˆํ„ฐ๋งํ•˜๊ธฐ ์œ„ํ•ด ์˜๋ฃŒ ์ด๋ฏธ์ง€์— ๋ ˆ์ด๋ธ”์„ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค. 
* ํ™˜๊ฒฝ: ์œ„์„ฑ ์ด๋ฏธ์ง€๋ฅผ ๋ถ„๋ฅ˜ํ•˜์—ฌ ์‚ฐ๋ฆผ ๋ฒŒ์ฑ„๋ฅผ ๊ฐ์‹œํ•˜๊ณ  ์•ผ์ƒ ์ง€์—ญ ๊ด€๋ฆฌ๋ฅผ ์œ„ํ•œ ์ •๋ณด๋ฅผ ์ œ๊ณตํ•˜๊ฑฐ๋‚˜ ์‚ฐ๋ถˆ์„ ๊ฐ์ง€ํ•ฉ๋‹ˆ๋‹ค. * ๋†์—…: ์ž‘๋ฌผ ์ด๋ฏธ์ง€๋ฅผ ๋ถ„๋ฅ˜ํ•˜์—ฌ ์‹๋ฌผ ๊ฑด๊ฐ•์„ ํ™•์ธํ•˜๊ฑฐ๋‚˜ ์œ„์„ฑ ์ด๋ฏธ์ง€๋ฅผ ๋ถ„๋ฅ˜ํ•˜์—ฌ ํ† ์ง€ ์ด์šฉ ๊ด€์ฐฐ์— ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. * ์ƒํƒœํ•™: ๋™๋ฌผ์ด๋‚˜ ์‹๋ฌผ ์ข… ์ด๋ฏธ์ง€๋ฅผ ๋ถ„๋ฅ˜ํ•˜์—ฌ ์•ผ์ƒ ๋™๋ฌผ ๊ฐœ์ฒด๊ตฐ์„ ์กฐ์‚ฌํ•˜๊ฑฐ๋‚˜ ๋ฉธ์ข… ์œ„๊ธฐ์— ์ฒ˜ํ•œ ์ข…์„ ์ถ”์ ํ•ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import pipeline >>> classifier = pipeline(task="image-classification") >>> preds = classifier( ... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg" ... ) >>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds] >>> print(*preds, sep="\n") {'score': 0.4335, 'label': 'lynx, catamount'} {'score': 0.0348, 'label': 'cougar, puma, catamount, mountain lion, painter, panther, Felis concolor'} {'score': 0.0324, 'label': 'snow leopard, ounce, Panthera uncia'} {'score': 0.0239, 'label': 'Egyptian cat'} {'score': 0.0229, 'label': 'tiger cat'} ``` ### ๊ฐ์ฒด ํƒ์ง€[[object_detection]] ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜์™€ ๋‹ฌ๋ฆฌ ๊ฐ์ฒด ํƒ์ง€๋Š” ์ด๋ฏธ์ง€ ๋‚ด์—์„œ ์—ฌ๋Ÿฌ ๊ฐ์ฒด๋ฅผ ์‹๋ณ„ํ•˜๊ณ  ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค๋กœ ์ •์˜๋œ ๊ฐ์ฒด์˜ ์œ„์น˜๋ฅผ ํŒŒ์•…ํ•ฉ๋‹ˆ๋‹ค. ๊ฐ์ฒด ํƒ์ง€์˜ ๋ช‡ ๊ฐ€์ง€ ์‘์šฉ ์˜ˆ์‹œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: * ์ž์œจ ์ฃผํ–‰ ์ฐจ๋Ÿ‰: ๋‹ค๋ฅธ ์ฐจ๋Ÿ‰, ๋ณดํ–‰์ž ๋ฐ ์‹ ํ˜ธ๋“ฑ๊ณผ ๊ฐ™์€ ์ผ์ƒ์ ์ธ ๊ตํ†ต ๊ฐ์ฒด๋ฅผ ๊ฐ์ง€ํ•ฉ๋‹ˆ๋‹ค. * ์›๊ฒฉ ๊ฐ์ง€: ์žฌ๋‚œ ๋ชจ๋‹ˆํ„ฐ๋ง, ๋„์‹œ ๊ณ„ํš ๋ฐ ๊ธฐ์ƒ ์˜ˆ์ธก ๋“ฑ์„ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค. * ๊ฒฐํ•จ ํƒ์ง€: ๊ฑด๋ฌผ์˜ ๊ท ์—ด์ด๋‚˜ ๊ตฌ์กฐ์  ์†์ƒ, ์ œ์กฐ ๊ฒฐํ•จ ๋“ฑ์„ ํƒ์ง€ํ•ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import pipeline >>> detector = pipeline(task="object-detection") >>> preds = detector( ... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg" ... ) >>> preds = [{"score": round(pred["score"], 4), "label": pred["label"], "box": pred["box"]} for pred in preds] >>> preds [{'score': 0.9865, 'label': 'cat', 'box': {'xmin': 178, 'ymin': 154, 'xmax': 882, 'ymax': 598}}] ``` ### ์ด๋ฏธ์ง€ ๋ถ„ํ• [[image_segmentation]] ์ด๋ฏธ์ง€ ๋ถ„ํ• ์€ ํ”ฝ์…€ ์ฐจ์›์˜ ์ž‘์—…์œผ๋กœ, ์ด๋ฏธ์ง€ ๋‚ด์˜ ๋ชจ๋“  ํ”ฝ์…€์„ ํด๋ž˜์Šค์— ํ• ๋‹นํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ๊ฐ์ฒด ํƒ์ง€์™€ ๋‹ค๋ฆ…๋‹ˆ๋‹ค. ๊ฐ์ฒด ํƒ์ง€๋Š” ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ด๋ฏธ์ง€ ๋‚ด์˜ ๊ฐ์ฒด๋ฅผ ๋ ˆ์ด๋ธ”๋งํ•˜๊ณ  ์˜ˆ์ธกํ•˜๋Š” ๋ฐ˜๋ฉด, ๋ถ„ํ• ์€ ๋” ์„ธ๋ถ„ํ™”๋œ ์ž‘์—…์ž…๋‹ˆ๋‹ค. ๋ถ„ํ• ์€ ํ”ฝ์…€ ์ˆ˜์ค€์—์„œ ๊ฐ์ฒด๋ฅผ ๊ฐ์ง€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ๋ถ„ํ• ์—๋Š” ์—ฌ๋Ÿฌ ์œ ํ˜•์ด ์žˆ์Šต๋‹ˆ๋‹ค: * ์ธ์Šคํ„ด์Šค ๋ถ„ํ• : ๊ฐœ์ฒด์˜ ํด๋ž˜์Šค๋ฅผ ๋ ˆ์ด๋ธ”๋งํ•˜๋Š” ๊ฒƒ ์™ธ์—๋„, ๊ฐœ์ฒด์˜ ๊ฐ ๊ตฌ๋ถ„๋œ ์ธ์Šคํ„ด์Šค์—๋„ ๋ ˆ์ด๋ธ”์„ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค ("๊ฐœ-1", "๊ฐœ-2" ๋“ฑ). * ํŒŒ๋†‰ํ‹ฑ ๋ถ„ํ• : ์˜๋ฏธ์  ๋ถ„ํ• ๊ณผ ์ธ์Šคํ„ด์Šค ๋ถ„ํ• ์˜ ์กฐํ•ฉ์ž…๋‹ˆ๋‹ค. ๊ฐ ํ”ฝ์…€์„ ์˜๋ฏธ์  ํด๋ž˜์Šค๋กœ ๋ ˆ์ด๋ธ”๋งํ•˜๋Š” **๋™์‹œ์—** ๊ฐœ์ฒด์˜ ๊ฐ๊ฐ ๊ตฌ๋ถ„๋œ ์ธ์Šคํ„ด์Šค๋กœ๋„ ๋ ˆ์ด๋ธ”์„ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค. ๋ถ„ํ•  ์ž‘์—…์€ ์ž์œจ ์ฃผํ–‰ ์ฐจ๋Ÿ‰์—์„œ ์œ ์šฉํ•˜๋ฉฐ, ์ฃผ๋ณ€ ํ™˜๊ฒฝ์˜ ํ”ฝ์…€ ์ˆ˜์ค€ ์ง€๋„๋ฅผ ์ƒ์„ฑํ•˜์—ฌ ๋ณดํ–‰์ž์™€ ๋‹ค๋ฅธ ์ฐจ๋Ÿ‰ ์ฃผ๋ณ€์—์„œ ์•ˆ์ „ํ•˜๊ฒŒ ํƒ์ƒ‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ ์˜๋ฃŒ ์˜์ƒ์—์„œ๋„ ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. ๋ถ„ํ•  ์ž‘์—…์ด ํ”ฝ์…€ ์ˆ˜์ค€์—์„œ ๊ฐ์ฒด๋ฅผ ๊ฐ์ง€ํ•  ์ˆ˜ ์žˆ๊ธฐ ๋•Œ๋ฌธ์— ๋น„์ •์ƒ์ ์ธ ์„ธํฌ๋‚˜ ์žฅ๊ธฐ์˜ ํŠน์ง•์„ ์‹๋ณ„ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
์ด๋ฏธ์ง€ ๋ถ„ํ• ์€ ์˜๋ฅ˜ ๊ฐ€์ƒ ์‹œ์ฐฉ์ด๋‚˜ ์นด๋ฉ”๋ผ๋ฅผ ํ†ตํ•ด ์‹ค์ œ ์„ธ๊ณ„์— ๊ฐ€์ƒ ๊ฐœ์ฒด๋ฅผ ๋ง์”Œ์›Œ ์ฆ๊ฐ• ํ˜„์‹ค ๊ฒฝํ—˜์„ ๋งŒ๋“œ๋Š” ๋“ฑ ์ „์ž ์ƒ๊ฑฐ๋ž˜ ๋ถ„์•ผ์—์„œ๋„ ์‚ฌ์šฉ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> from transformers import pipeline >>> segmenter = pipeline(task="image-segmentation") >>> preds = segmenter( ... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg" ... ) >>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds] >>> print(*preds, sep="\n") {'score': 0.9879, 'label': 'LABEL_184'} {'score': 0.9973, 'label': 'snow'} {'score': 0.9972, 'label': 'cat'} ``` ### ๊นŠ์ด ์ถ”์ •[[depth_estimation]] ๊นŠ์ด ์ถ”์ •์€ ์นด๋ฉ”๋ผ๋กœ๋ถ€ํ„ฐ ์ด๋ฏธ์ง€ ๋‚ด๋ถ€์˜ ๊ฐ ํ”ฝ์…€์˜ ๊ฑฐ๋ฆฌ๋ฅผ ์˜ˆ์ธกํ•ฉ๋‹ˆ๋‹ค. ์ด ์ปดํ“จํ„ฐ ๋น„์ „ ์ž‘์—…์€ ํŠนํžˆ ์žฅ๋ฉด ์ดํ•ด์™€ ์žฌ๊ตฌ์„ฑ์— ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ์ž์œจ ์ฃผํ–‰ ์ฐจ๋Ÿ‰์€ ๋ณดํ–‰์ž, ๊ตํ†ต ํ‘œ์ง€ํŒ ๋ฐ ๋‹ค๋ฅธ ์ฐจ๋Ÿ‰๊ณผ ๊ฐ™์€ ๊ฐ์ฒด์™€์˜ ๊ฑฐ๋ฆฌ๋ฅผ ์ดํ•ดํ•˜์—ฌ ์žฅ์• ๋ฌผ๊ณผ ์ถฉ๋Œ์„ ํ”ผํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๊นŠ์ด ์ •๋ณด๋Š” ๋˜ํ•œ 2D ์ด๋ฏธ์ง€์—์„œ 3D ํ‘œํ˜„์„ ๊ตฌ์„ฑํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋˜๋ฉฐ ์ƒ๋ฌผํ•™์  ๊ตฌ์กฐ๋‚˜ ๊ฑด๋ฌผ์˜ ๊ณ ํ’ˆ์งˆ 3D ํ‘œํ˜„์„ ์ƒ์„ฑํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊นŠ์ด ์ถ”์ •์—๋Š” ๋‘ ๊ฐ€์ง€ ์ ‘๊ทผ ๋ฐฉ์‹์ด ์žˆ์Šต๋‹ˆ๋‹ค: * ์Šคํ…Œ๋ ˆ์˜ค: ์•ฝ๊ฐ„ ๋‹ค๋ฅธ ๊ฐ๋„์—์„œ ์ดฌ์˜๋œ ๋™์ผํ•œ ์ด๋ฏธ์ง€ ๋‘ ์žฅ์„ ๋น„๊ตํ•˜์—ฌ ๊นŠ์ด๋ฅผ ์ถ”์ •ํ•ฉ๋‹ˆ๋‹ค. * ๋‹จ์•ˆ: ๋‹จ์ผ ์ด๋ฏธ์ง€์—์„œ ๊นŠ์ด๋ฅผ ์ถ”์ •ํ•ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import pipeline >>> depth_estimator = pipeline(task="depth-estimation") >>> preds = depth_estimator( ... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg" ... ) ``` ## ์ž์—ฐ์–ด์ฒ˜๋ฆฌ[[natural_language_processing]] ํ…์ŠคํŠธ๋Š” ์ธ๊ฐ„์ด ์˜์‚ฌ ์†Œํ†ตํ•˜๋Š” ์ž์—ฐ์Šค๋Ÿฌ์šด ๋ฐฉ์‹ ์ค‘ ํ•˜๋‚˜์ด๊ธฐ ๋•Œ๋ฌธ์— ์ž์—ฐ์–ด์ฒ˜๋ฆฌ ์—ญ์‹œ ๊ฐ€์žฅ ์ผ๋ฐ˜์ ์ธ ์ž‘์—… ์œ ํ˜• ์ค‘ ํ•˜๋‚˜์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ์ด ์ธ์‹ํ•˜๋Š” ํ˜•์‹์œผ๋กœ ํ…์ŠคํŠธ๋ฅผ ๋ณ€ํ™˜ํ•˜๋ ค๋ฉด ํ† ํฐํ™”ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ํ…์ŠคํŠธ ์‹œํ€€์Šค๋ฅผ ๊ฐœ๋ณ„ ๋‹จ์–ด ๋˜๋Š” ํ•˜์œ„ ๋‹จ์–ด(ํ† ํฐ)๋กœ ๋ถ„ํ• ํ•œ ๋‹ค์Œ ์ด๋Ÿฌํ•œ ํ† ํฐ์„ ์ˆซ์ž๋กœ ๋ณ€ํ™˜ํ•˜๋Š” ๊ฒƒ์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค. ๊ฒฐ๊ณผ์ ์œผ๋กœ ํ…์ŠคํŠธ ์‹œํ€€์Šค๋ฅผ ์ˆซ์ž ์‹œํ€€์Šค๋กœ ํ‘œํ˜„ํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ, ์ˆซ์ž ์‹œํ€€์Šค๋ฅผ ๋‹ค์–‘ํ•œ ์ž์—ฐ์–ด์ฒ˜๋ฆฌ ์ž‘์—…์„ ํ•ด๊ฒฐํ•˜๊ธฐ ์œ„ํ•œ ๋ชจ๋ธ์— ์ž…๋ ฅํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ### ํ…์ŠคํŠธ ๋ถ„๋ฅ˜[[text_classification]] ๋‹ค๋ฅธ ๋ชจ๋‹ฌ๋ฆฌํ‹ฐ์—์„œ์˜ ๋ถ„๋ฅ˜ ์ž‘์—…๊ณผ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ ํ…์ŠคํŠธ ๋ถ„๋ฅ˜๋Š” ๋ฏธ๋ฆฌ ์ •์˜๋œ ํด๋ž˜์Šค ์ง‘ํ•ฉ์—์„œ ํ…์ŠคํŠธ ์‹œํ€€์Šค(๋ฌธ์žฅ ์ˆ˜์ค€, ๋‹จ๋ฝ ๋˜๋Š” ๋ฌธ์„œ ๋“ฑ)์— ๋ ˆ์ด๋ธ”์„ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค. ํ…์ŠคํŠธ ๋ถ„๋ฅ˜์—๋Š” ๋‹ค์–‘ํ•œ ์‹ค์šฉ์ ์ธ ์‘์šฉ ์‚ฌ๋ก€๊ฐ€ ์žˆ์œผ๋ฉฐ, ์ผ๋ถ€ ์˜ˆ์‹œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: * ๊ฐ์„ฑ ๋ถ„์„: ํ…์ŠคํŠธ๋ฅผ `๊ธ์ •` ๋˜๋Š” `๋ถ€์ •`๊ณผ ๊ฐ™์€ ์–ด๋–ค ๊ทน์„ฑ์— ๋”ฐ๋ผ ๋ ˆ์ด๋ธ”๋งํ•˜์—ฌ ์ •์น˜, ๊ธˆ์œต, ๋งˆ์ผ€ํŒ…๊ณผ ๊ฐ™์€ ๋ถ„์•ผ์—์„œ ์˜์‚ฌ ๊ฒฐ์ •์— ์ •๋ณด๋ฅผ ์ œ๊ณตํ•˜๊ณ  ์ง€์›ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. * ์ฝ˜ํ…์ธ  ๋ถ„๋ฅ˜: ํ…์ŠคํŠธ๋ฅผ ์ฃผ์ œ์— ๋”ฐ๋ผ ๋ ˆ์ด๋ธ”๋ง(๋‚ ์”จ, ์Šคํฌ์ธ , ๊ธˆ์œต ๋“ฑ)ํ•˜์—ฌ ๋‰ด์Šค ๋ฐ ์†Œ์…œ ๋ฏธ๋””์–ด ํ”ผ๋“œ์—์„œ ์ •๋ณด๋ฅผ ๊ตฌ์„ฑํ•˜๊ณ  ํ•„ํ„ฐ๋งํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
```py >>> from transformers import pipeline >>> classifier = pipeline(task="sentiment-analysis") >>> preds = classifier("Hugging Face is the best thing since sliced bread!") >>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds] >>> preds [{'score': 0.9991, 'label': 'POSITIVE'}] ``` ### ํ† ํฐ ๋ถ„๋ฅ˜[[token_classification]] ๋ชจ๋“  ์ž์—ฐ์–ด์ฒ˜๋ฆฌ ์ž‘์—…์—์„œ๋Š” ํ…์ŠคํŠธ๊ฐ€ ๊ฐœ๋ณ„ ๋‹จ์–ด๋‚˜ ํ•˜์œ„ ๋‹จ์–ด๋กœ ๋ถ„๋ฆฌ๋˜์–ด ์ „์ฒ˜๋ฆฌ๋ฉ๋‹ˆ๋‹ค. ๋ถ„๋ฆฌ๋œ ๋‹จ์–ด๋ฅผ [ํ† ํฐ](/glossary#token)์ด๋ผ๊ณ  ํ•ฉ๋‹ˆ๋‹ค. ํ† ํฐ ๋ถ„๋ฅ˜๋Š” ๊ฐ ํ† ํฐ์— ๋ฏธ๋ฆฌ ์ •์˜๋œ ํด๋ž˜์Šค ์ง‘ํ•ฉ์˜ ๋ ˆ์ด๋ธ”์„ ํ• ๋‹นํ•ฉ๋‹ˆ๋‹ค. ํ† ํฐ ๋ถ„๋ฅ˜์˜ ๋‘ ๊ฐ€์ง€ ์ผ๋ฐ˜์ ์ธ ์œ ํ˜•์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: * ๊ฐœ์ฒด๋ช… ์ธ์‹ (NER): ํ† ํฐ์„ ์กฐ์ง, ์ธ๋ฌผ, ์œ„์น˜ ๋˜๋Š” ๋‚ ์งœ์™€ ๊ฐ™์€ ๊ฐœ์ฒด ๋ฒ”์ฃผ์— ๋”ฐ๋ผ ๋ ˆ์ด๋ธ”๋งํ•ฉ๋‹ˆ๋‹ค. NER์€ ํŠนํžˆ ์œ ์ „์ฒดํ•™์ ์ธ ํ™˜๊ฒฝ์—์„œ ์œ ์ „์ž, ๋‹จ๋ฐฑ์งˆ ๋ฐ ์•ฝ๋ฌผ ์ด๋ฆ„์— ๋ ˆ์ด๋ธ”์„ ์ง€์ •ํ•˜๋Š” ๋ฐ ๋„๋ฆฌ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. * ํ’ˆ์‚ฌ ํƒœ๊น… (POS): ๋ช…์‚ฌ, ๋™์‚ฌ, ํ˜•์šฉ์‚ฌ์™€ ๊ฐ™์€ ํ’ˆ์‚ฌ์— ๋”ฐ๋ผ ํ† ํฐ์— ๋ ˆ์ด๋ธ”์„ ํ• ๋‹นํ•ฉ๋‹ˆ๋‹ค. POS๋Š” ๋ฒˆ์—ญ ์‹œ์Šคํ…œ์ด ๋™์ผํ•œ ๋‹จ์–ด๊ฐ€ ๋ฌธ๋ฒ•์ ์œผ๋กœ ์–ด๋–ป๊ฒŒ ๋‹ค๋ฅธ์ง€ ์ดํ•ดํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋ฉ๋‹ˆ๋‹ค (๋ช…์‚ฌ๋กœ ์‚ฌ์šฉ๋˜๋Š” "bank(์€ํ–‰)"๊ณผ ๋™์‚ฌ๋กœ ์‚ฌ์šฉ๋˜๋Š” "bank(์˜ˆ๊ธˆ์„ ์˜ˆ์น˜ํ•˜๋‹ค)"๊ณผ ๊ฐ™์€ ๊ฒฝ์šฐ). ```py >>> from transformers import pipeline >>> classifier = pipeline(task="ner") >>> preds = classifier("Hugging Face is a French company based in New York City.") >>> preds = [ ... { ... "entity": pred["entity"], ... "score": round(pred["score"], 4), ... "index": pred["index"], ... "word": pred["word"], ... "start": pred["start"], ... "end": pred["end"], ... } ... for pred in preds ... ] >>> print(*preds, sep="\n") {'entity': 'I-ORG', 'score': 0.9968, 'index': 1, 'word': 'Hu', 'start': 0, 'end': 2} {'entity': 'I-ORG', 'score': 0.9293, 'index': 2, 'word': '##gging', 'start': 2, 'end': 7} {'entity': 'I-ORG', 'score': 0.9763, 'index': 3, 'word': 'Face', 'start': 8, 'end': 12} {'entity': 'I-MISC', 'score': 0.9983, 'index': 6, 'word': 'French', 'start': 18, 'end': 24} {'entity': 'I-LOC', 'score': 0.999, 'index': 10, 'word': 'New', 'start': 42, 'end': 45} {'entity': 'I-LOC', 'score': 0.9987, 'index': 11, 'word': 'York', 'start': 46, 'end': 50} {'entity': 'I-LOC', 'score': 0.9992, 'index': 12, 'word': 'City', 'start': 51, 'end': 55} ``` ### ์งˆ์˜์‘๋‹ต[[question_answering]] ์งˆ์˜์‘๋‹ต์€ ๋˜ ํ•˜๋‚˜์˜ ํ† ํฐ ์ฐจ์›์˜ ์ž‘์—…์œผ๋กœ, ๋ฌธ๋งฅ์ด ์žˆ์„ ๋•Œ(๊ฐœ๋ฐฉํ˜• ๋„๋ฉ”์ธ)์™€ ๋ฌธ๋งฅ์ด ์—†์„ ๋•Œ(ํ์‡„ํ˜• ๋„๋ฉ”์ธ) ์งˆ๋ฌธ์— ๋Œ€ํ•œ ๋‹ต๋ณ€์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ์ด ์ž‘์—…์€ ๊ฐ€์ƒ ๋น„์„œ์—๊ฒŒ ์‹๋‹น์ด ์˜์—… ์ค‘์ธ์ง€์™€ ๊ฐ™์€ ์งˆ๋ฌธ์„ ํ•  ๋•Œ๋งˆ๋‹ค ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ณ ๊ฐ ์ง€์› ๋˜๋Š” ๊ธฐ์ˆ  ์ง€์›์„ ์ œ๊ณตํ•˜๊ฑฐ๋‚˜ ๊ฒ€์ƒ‰ ์—”์ง„์ด ์š”์ฒญํ•œ ์ •๋ณด๋ฅผ ๊ฒ€์ƒ‰ํ•˜๋Š” ๋ฐ ๋„์›€์„ ์ค„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์งˆ๋ฌธ ๋‹ต๋ณ€์—๋Š” ์ผ๋ฐ˜์ ์œผ๋กœ ๋‘ ๊ฐ€์ง€ ์œ ํ˜•์ด ์žˆ์Šต๋‹ˆ๋‹ค: * ์ถ”์ถœํ˜•: ์งˆ๋ฌธ๊ณผ ๋ฌธ๋งฅ์ด ์ฃผ์–ด์กŒ์„ ๋•Œ, ๋ชจ๋ธ์ด ์ฃผ์–ด์ง„ ๋ฌธ๋งฅ์˜ ์ผ๋ถ€์—์„œ ๊ฐ€์ ธ์˜จ ํ…์ŠคํŠธ์˜ ๋ฒ”์œ„๋ฅผ ๋‹ต๋ณ€์œผ๋กœ ํ•ฉ๋‹ˆ๋‹ค. * ์ƒ์„ฑํ˜•: ์งˆ๋ฌธ๊ณผ ๋ฌธ๋งฅ์ด ์ฃผ์–ด์กŒ์„ ๋•Œ, ์ฃผ์–ด์ง„ ๋ฌธ๋งฅ์„ ํ†ตํ•ด ๋‹ต๋ณ€์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ์ด ์ ‘๊ทผ ๋ฐฉ์‹์€ [`QuestionAnsweringPipeline`] ๋Œ€์‹  [`Text2TextGenerationPipeline`]์„ ํ†ตํ•ด ์ฒ˜๋ฆฌ๋ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import pipeline >>> question_answerer = pipeline(task="question-answering") >>> preds = question_answerer( ... question="What is the name of the repository?", ... 
context="The name of the repository is huggingface/transformers", ... ) >>> print( ... f"score: {round(preds['score'], 4)}, start: {preds['start']}, end: {preds['end']}, answer: {preds['answer']}" ... ) score: 0.9327, start: 30, end: 54, answer: huggingface/transformers ``` ### ์š”์•ฝ[[summarization]] ์š”์•ฝ์€ ์›๋ณธ ๋ฌธ์„œ์˜ ์˜๋ฏธ๋ฅผ ์ตœ๋Œ€ํ•œ ๋ณด์กดํ•˜๋ฉด์„œ ๊ธด ๋ฌธ์„œ๋ฅผ ์งง์€ ๋ฌธ์„œ๋กœ ๋งŒ๋“œ๋Š” ์ž‘์—…์ž…๋‹ˆ๋‹ค. ์š”์•ฝ์€ `sequence-to-sequence` ์ž‘์—…์ž…๋‹ˆ๋‹ค. ์ž…๋ ฅ๋ณด๋‹ค ์งง์€ ํ…์ŠคํŠธ ์‹œํ€€์Šค๋ฅผ ์ถœ๋ ฅํ•ฉ๋‹ˆ๋‹ค. ์š”์•ฝ ์ž‘์—…์€ ๋…์ž๊ฐ€ ์žฅ๋ฌธ ๋ฌธ์„œ๋“ค์˜ ์ฃผ์š” ํฌ์ธํŠธ๋ฅผ ๋น ๋ฅด๊ฒŒ ์ดํ•ดํ•˜๋Š” ๋ฐ ๋„์›€์„ ์ค„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ž…๋ฒ•์•ˆ, ๋ฒ•๋ฅ  ๋ฐ ๊ธˆ์œต ๋ฌธ์„œ, ํŠนํ—ˆ ๋ฐ ๊ณผํ•™ ๋…ผ๋ฌธ์€ ์š”์•ฝ ์ž‘์—…์ด ๋…์ž์˜ ์‹œ๊ฐ„์„ ์ ˆ์•ฝํ•˜๊ณ  ๋…์„œ ๋ณด์กฐ ๋„๊ตฌ๋กœ ์‚ฌ์šฉ๋  ์ˆ˜ ์žˆ๋Š” ๋ช‡ ๊ฐ€์ง€ ์˜ˆ์‹œ์ž…๋‹ˆ๋‹ค. ์งˆ๋ฌธ ๋‹ต๋ณ€๊ณผ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ ์š”์•ฝ์—๋Š” ๋‘ ๊ฐ€์ง€ ์œ ํ˜•์ด ์žˆ์Šต๋‹ˆ๋‹ค: * ์ถ”์ถœํ˜•: ์›๋ณธ ํ…์ŠคํŠธ์—์„œ ๊ฐ€์žฅ ์ค‘์š”ํ•œ ๋ฌธ์žฅ์„ ์‹๋ณ„ํ•˜๊ณ  ์ถ”์ถœํ•ฉ๋‹ˆ๋‹ค. * ์ƒ์„ฑํ˜•: ์›๋ณธ ํ…์ŠคํŠธ์—์„œ ๋ชฉํ‘œ ์š”์•ฝ์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ์ž…๋ ฅ ๋ฌธ์„œ์— ์—†๋Š” ์ƒˆ๋กœ์šด ๋‹จ์–ด๋ฅผ ํฌํ•จํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. [`SummarizationPipeline`]์€ ์ƒ์„ฑํ˜• ์ ‘๊ทผ ๋ฐฉ์‹์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import pipeline >>> summarizer = pipeline(task="summarization") >>> summarizer( ... "In this work, we presented the Transformer, the first sequence transduction model based entirely on attention, replacing the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention. For translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers. On both WMT 2014 English-to-German and WMT 2014 English-to-French translation tasks, we achieve a new state of the art. In the former task our best model outperforms even all previously reported ensembles." ... ) [{'summary_text': ' The Transformer is the first sequence transduction model based entirely on attention . It replaces the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention . For translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers .'}] ``` ### ๋ฒˆ์—ญ[[translation]] ๋ฒˆ์—ญ์€ ํ•œ ์–ธ์–ด๋กœ ๋œ ํ…์ŠคํŠธ ์‹œํ€€์Šค๋ฅผ ๋‹ค๋ฅธ ์–ธ์–ด๋กœ ๋ณ€ํ™˜ํ•˜๋Š” ์ž‘์—…์ž…๋‹ˆ๋‹ค. ์ด๋Š” ์„œ๋กœ ๋‹ค๋ฅธ ๋ฐฐ๊ฒฝ์„ ๊ฐ€์ง„ ์‚ฌ๋žŒ๋“ค์ด ์„œ๋กœ ์†Œํ†ตํ•˜๋Š” ๋ฐ ๋„์›€์„ ์ฃผ๋Š” ์ค‘์š”ํ•œ ์—ญํ• ์„ ํ•ฉ๋‹ˆ๋‹ค. ๋” ๋„“์€ ๋Œ€์ค‘์—๊ฒŒ ์ฝ˜ํ…์ธ ๋ฅผ ๋ฒˆ์—ญํ•˜์—ฌ ์ „๋‹ฌํ•˜๊ฑฐ๋‚˜, ์ƒˆ๋กœ์šด ์–ธ์–ด๋ฅผ ๋ฐฐ์šฐ๋Š” ๋ฐ ๋„์›€์ด ๋˜๋Š” ํ•™์Šต ๋„๊ตฌ๊ฐ€ ๋  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์š”์•ฝ๊ณผ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ, ๋ฒˆ์—ญ์€ `sequence-to-sequence` ์ž‘์—…์ž…๋‹ˆ๋‹ค. ์ฆ‰, ๋ชจ๋ธ์€ ์ž…๋ ฅ ์‹œํ€€์Šค๋ฅผ ๋ฐ›์•„์„œ ์ถœ๋ ฅ์ด ๋˜๋Š” ๋ชฉํ‘œ ์‹œํ€€์Šค๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ์ดˆ๊ธฐ์˜ ๋ฒˆ์—ญ ๋ชจ๋ธ์€ ๋Œ€๋ถ€๋ถ„ ๋‹จ์ผ ์–ธ์–ด๋กœ ์ด๋ฃจ์–ด์ ธ ์žˆ์—ˆ์ง€๋งŒ, ์ตœ๊ทผ์—๋Š” ๋งŽ์€ ์–ธ์–ด ์Œ ๊ฐ„์— ๋ฒˆ์—ญ์„ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ๋Š” ๋‹ค์ค‘ ์–ธ์–ด ๋ชจ๋ธ์— ๋Œ€ํ•œ ๊ด€์‹ฌ์ด ๋†’์•„์ง€๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> from transformers import pipeline >>> text = "translate English to French: Hugging Face is a community-based open-source platform for machine learning." 
>>> translator = pipeline(task="translation", model="google-t5/t5-small") >>> translator(text) [{'translation_text': "Hugging Face est une tribune communautaire de l'apprentissage des machines."}] ``` ### ์–ธ์–ด ๋ชจ๋ธ๋ง[[language_modeling]] ์–ธ์–ด ๋ชจ๋ธ๋ง์€ ํ…์ŠคํŠธ ์‹œํ€€์Šค์—์„œ ๋‹จ์–ด๋ฅผ ์˜ˆ์ธกํ•˜๋Š” ์ž‘์—…์ž…๋‹ˆ๋‹ค. ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ์–ธ์–ด ๋ชจ๋ธ์€ ๋งŽ์€ ๋‹ค๋ฅธ ํ•˜์œ„ ์ž‘์—…์— ๋”ฐ๋ผ ๋ฏธ์„ธ ์กฐ์ •๋  ์ˆ˜ ์žˆ๊ธฐ ๋•Œ๋ฌธ์— ๋งค์šฐ ์ธ๊ธฐ ์žˆ๋Š” ์ž์—ฐ์–ด์ฒ˜๋ฆฌ ์ž‘์—…์ด ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ตœ๊ทผ์—๋Š” ์ œ๋กœ ์ƒท(zero-shot) ๋˜๋Š” ํ“จ ์ƒท(few-shot) ํ•™์Šต์ด ๊ฐ€๋Šฅํ•œ ๋Œ€๊ทœ๋ชจ ์–ธ์–ด ๋ชจ๋ธ(Large Language Models, LLM)์— ๋Œ€ํ•œ ๋งŽ์€ ๊ด€์‹ฌ์ด ๋ฐœ์ƒํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Š” ๋ชจ๋ธ์ด ๋ช…์‹œ์ ์œผ๋กœ ํ›ˆ๋ จ๋˜์ง€ ์•Š์€ ์ž‘์—…๋„ ํ•ด๊ฒฐํ•  ์ˆ˜ ์žˆ๋‹ค๋Š” ๊ฒƒ์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค! ์–ธ์–ด ๋ชจ๋ธ์€ ์œ ์ฐฝํ•˜๊ณ  ์„ค๋“๋ ฅ ์žˆ๋Š” ํ…์ŠคํŠธ๋ฅผ ์ƒ์„ฑํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋  ์ˆ˜ ์žˆ์ง€๋งŒ, ํ…์ŠคํŠธ๊ฐ€ ํ•ญ์ƒ ์ •ํ™•ํ•˜์ง€๋Š” ์•Š์„ ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ ์ฃผ์˜๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ์–ธ์–ด ๋ชจ๋ธ๋ง์—๋Š” ๋‘ ๊ฐ€์ง€ ์œ ํ˜•์ด ์žˆ์Šต๋‹ˆ๋‹ค: * ์ธ๊ณผ์  ์–ธ์–ด ๋ชจ๋ธ๋ง: ์ด ๋ชจ๋ธ์˜ ๋ชฉ์ ์€ ์‹œํ€€์Šค์—์„œ ๋‹ค์Œ ํ† ํฐ์„ ์˜ˆ์ธกํ•˜๋Š” ๊ฒƒ์ด๋ฉฐ, ๋ฏธ๋ž˜ ํ† ํฐ์ด ๋งˆ์Šคํ‚น ๋ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import pipeline >>> prompt = "Hugging Face is a community-based open-source platform for machine learning." >>> generator = pipeline(task="text-generation") >>> generator(prompt) # doctest: +SKIP ``` * ๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง: ์ด ๋ชจ๋ธ์˜ ๋ชฉ์ ์€ ์‹œํ€€์Šค ๋‚ด์˜ ๋งˆ์Šคํ‚น๋œ ํ† ํฐ์„ ์˜ˆ์ธกํ•˜๋Š” ๊ฒƒ์ด๋ฉฐ, ์‹œํ€€์Šค ๋‚ด์˜ ๋ชจ๋“  ํ† ํฐ์— ๋Œ€ํ•œ ์ ‘๊ทผ์ด ์ œ๊ณต๋ฉ๋‹ˆ๋‹ค. ```py >>> text = "Hugging Face is a community-based open-source <mask> for machine learning." >>> fill_mask = pipeline(task="fill-mask") >>> preds = fill_mask(text, top_k=1) >>> preds = [ ... { ... "score": round(pred["score"], 4), ... "token": pred["token"], ... "token_str": pred["token_str"], ... "sequence": pred["sequence"], ... } ... for pred in preds ... ] >>> preds [{'score': 0.2236, 'token': 1761, 'token_str': ' platform', 'sequence': 'Hugging Face is a community-based open-source platform for machine learning.'}] ``` ์ด ํŽ˜์ด์ง€๋ฅผ ํ†ตํ•ด ๊ฐ ๋ชจ๋‹ฌ๋ฆฌํ‹ฐ์˜ ๋‹ค์–‘ํ•œ ์ž‘์—… ์œ ํ˜•๊ณผ ๊ฐ ์ž‘์—…์˜ ์‹ค์šฉ์  ์ค‘์š”์„ฑ์— ๋Œ€ํ•ด ์ถ”๊ฐ€์ ์ธ ๋ฐฐ๊ฒฝ ์ •๋ณด๋ฅผ ์–ป์œผ์…จ๊ธฐ๋ฅผ ๋ฐ”๋ž๋‹ˆ๋‹ค. ๋‹ค์Œ [์„น์…˜](tasks_explained)์—์„œ๋Š” ๐Ÿค— Transformer๊ฐ€ ์ด๋Ÿฌํ•œ ์ž‘์—…์„ ํ•ด๊ฒฐํ•˜๋Š” **๋ฐฉ๋ฒ•**์— ๋Œ€ํ•ด ์•Œ์•„๋ณด์‹ค ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/ko/tokenizer_summary.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ํ† ํฌ๋‚˜์ด์ € ์š”์•ฝ[[summary-of-the-tokenizers]] [[open-in-colab]] ์ด ํŽ˜์ด์ง€์—์„œ๋Š” ํ† ํฐํ™”์— ๋Œ€ํ•ด ์ž์„ธํžˆ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. <Youtube id="VFp38yj8h3A"/> [๋ฐ์ดํ„ฐ ์ „์ฒ˜๋ฆฌํ•˜๊ธฐ ํŠœํ† ๋ฆฌ์–ผ](preprocessing)์—์„œ ์‚ดํŽด๋ณธ ๊ฒƒ์ฒ˜๋Ÿผ, ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•˜๋Š” ๊ฒƒ์€ ํ…์ŠคํŠธ๋ฅผ ๋‹จ์–ด ๋˜๋Š” ์„œ๋ธŒ์›Œ๋“œ๋กœ ๋ถ„ํ• ํ•˜๊ณ  ๋ฃฉ์—… ํ…Œ์ด๋ธ”์„ ํ†ตํ•ด id๋กœ ๋ณ€ํ™˜ํ•˜๋Š” ๊ณผ์ •์ž…๋‹ˆ๋‹ค. ๋‹จ์–ด ๋˜๋Š” ์„œ๋ธŒ์›Œ๋“œ๋ฅผ id๋กœ ๋ณ€ํ™˜ํ•˜๋Š” ๊ฒƒ์€ ๊ฐ„๋‹จํ•˜๊ธฐ ๋•Œ๋ฌธ์— ์ด๋ฒˆ ๋ฌธ์„œ์—์„œ๋Š” ํ…์ŠคํŠธ๋ฅผ ๋‹จ์–ด ๋˜๋Š” ์„œ๋ธŒ์›Œ๋“œ๋กœ ์ชผ๊ฐœ๋Š” ๊ฒƒ(์ฆ‰, ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•˜๋Š” ๊ฒƒ)์— ์ค‘์ ์„ ๋‘๊ฒ ์Šต๋‹ˆ๋‹ค. ๊ตฌ์ฒด์ ์œผ๋กœ, ๐Ÿค— Transformers์—์„œ ์‚ฌ์šฉ๋˜๋Š” ์„ธ ๊ฐ€์ง€ ์ฃผ์š” ํ† ํฐํ™” ์œ ํ˜•์ธ [Byte-Pair Encoding (BPE)](#byte-pair-encoding), [WordPiece](#wordpiece), [SentencePiece](#sentencepiece)๋ฅผ ์‚ดํŽด๋ณด๊ณ  ์–ด๋–ค ๋ชจ๋ธ์—์„œ ์–ด๋–ค ํ† ํฐํ™” ์œ ํ˜•์„ ์‚ฌ์šฉํ•˜๋Š”์ง€ ์˜ˆ์‹œ๋ฅผ ๋ณด์—ฌ๋“œ๋ฆฌ๊ฒ ์Šต๋‹ˆ๋‹ค. ๊ฐ ๋ชจ๋ธ ํŽ˜์ด์ง€์— ์—ฐ๊ฒฐ๋œ ํ† ํฌ๋‚˜์ด์ €์˜ ๋ฌธ์„œ๋ฅผ ๋ณด๋ฉด ์‚ฌ์ „ ํ›ˆ๋ จ ๋ชจ๋ธ์—์„œ ์–ด๋–ค ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์‚ฌ์šฉํ–ˆ๋Š”์ง€ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, [`BertTokenizer`]๋ฅผ ๋ณด๋ฉด ์ด ๋ชจ๋ธ์ด [WordPiece](#wordpiece)๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์„ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ## ๊ฐœ์š”[[introduction]] ํ…์ŠคํŠธ๋ฅผ ์ž‘์€ ๋ฌถ์Œ(chunk)์œผ๋กœ ์ชผ๊ฐœ๋Š” ๊ฒƒ์€ ๋ณด๊ธฐ๋ณด๋‹ค ์–ด๋ ค์šด ์ž‘์—…์ด๋ฉฐ, ์—ฌ๋Ÿฌ ๊ฐ€์ง€ ๋ฐฉ๋ฒ•์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, `"Don't you love ๐Ÿค— Transformers? We sure do."` ๋ผ๋Š” ๋ฌธ์žฅ์„ ์‚ดํŽด๋ณด๋„๋ก ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. <Youtube id="nhJxYji1aho"/> ์œ„ ๋ฌธ์žฅ์„ ํ† ํฐํ™”ํ•˜๋Š” ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ ๊ณต๋ฐฑ์„ ๊ธฐ์ค€์œผ๋กœ ์ชผ๊ฐœ๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ํ† ํฐํ™”๋œ ๊ฒฐ๊ณผ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ``` ["Don't", "you", "love", "๐Ÿค—", "Transformers?", "We", "sure", "do."] ``` ์ด๋Š” ์ฒซ ๋ฒˆ์งธ ๊ฒฐ๊ณผ๋กœ๋Š” ํ•ฉ๋ฆฌ์ ์ด์ง€๋งŒ, `"Transformers?"`์™€ `"do."`ํ† ํฐ์„ ๋ณด๋ฉด ๊ฐ๊ฐ `"Transformer"`์™€ `"do"`์— ๊ตฌ๋‘์ ์ด ๋ถ™์–ด์žˆ๋Š” ๊ฒƒ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ตฌ๋‘์ ์„ ๊ณ ๋ คํ•ด์•ผ ๋ชจ๋ธ์ด ๋‹จ์–ด์˜ ๋‹ค๋ฅธ ํ‘œํ˜„๊ณผ ๊ทธ ๋’ค์— ์˜ฌ ์ˆ˜ ์žˆ๋Š” ๋ชจ๋“  ๊ฐ€๋Šฅํ•œ ๊ตฌ๋‘์ ์„ ํ•™์Šตํ•  ํ•„์š”๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค. ๊ทธ๋ ‡์ง€ ์•Š์œผ๋ฉด ๋ชจ๋ธ์ด ํ•™์Šตํ•ด์•ผ ํ•˜๋Š” ํ‘œํ˜„์˜ ์ˆ˜๊ฐ€ ํญ๋ฐœ์ ์œผ๋กœ ์ฆ๊ฐ€ํ•˜๊ฒŒ ๋ฉ๋‹ˆ๋‹ค. ๊ตฌ๋‘์ ์„ ๊ณ ๋ คํ•œ ํ† ํฐํ™” ๊ฒฐ๊ณผ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ``` ["Don", "'", "t", "you", "love", "๐Ÿค—", "Transformers", "?", "We", "sure", "do", "."] ``` ์ด์ „๋ณด๋‹ค ๋‚˜์•„์กŒ์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ, `"Don't"`์˜ ํ† ํฐํ™” ๊ฒฐ๊ณผ๋„ ์ˆ˜์ •์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. `"Don't"`๋Š” `"do not"`์˜ ์ค„์ž„๋ง์ด๊ธฐ ๋•Œ๋ฌธ์— `["Do", "n't"]`๋กœ ํ† ํฐํ™”๋˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ๋ถ€ํ„ฐ ๋ณต์žกํ•ด์ง€๊ธฐ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ์ด ์ ์ด ๊ฐ ๋ชจ๋ธ๋งˆ๋‹ค ๊ณ ์œ ํ•œ ํ† ํฐํ™” ์œ ํ˜•์ด ์กด์žฌํ•˜๋Š” ์ด์œ  ์ค‘ ํ•˜๋‚˜์ž…๋‹ˆ๋‹ค. 
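์ฐธ๊ณ ๋กœ, ์œ„์—์„œ ์‚ดํŽด๋ณธ ๊ณต๋ฐฑ ๊ธฐ์ค€ ๋ถ„ํ• ๊ณผ ๊ตฌ๋‘์ ์„ ๊ณ ๋ คํ•œ ๋ถ„ํ• ์€ ํŒŒ์ด์ฌ ํ‘œ์ค€ ๊ธฐ๋Šฅ๋งŒ์œผ๋กœ๋„ ํ‰๋‚ด ๋‚ผ ์ˆ�˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ์ดํ•ด๋ฅผ ๋•๊ธฐ ์œ„ํ•œ ๊ฐ„๋‹จํ•œ ์Šค์ผ€์น˜์ผ ๋ฟ, ์‹ค์ œ ๋ชจ๋ธ์—์„œ ์‚ฌ์šฉํ•˜๋Š” ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๋Œ€์‹ ํ•˜์ง€๋Š” ์•Š์Šต๋‹ˆ๋‹ค.

```py
import re

text = "Don't you love ๐Ÿค— Transformers? We sure do."

# ๊ณต๋ฐฑ ๊ธฐ์ค€ ๋ถ„ํ• : ๊ตฌ๋‘์ ์ด ๋‹จ์–ด์— ๊ทธ๋Œ€๋กœ ๋ถ™์–ด ๋‚˜์˜ต๋‹ˆ๋‹ค.
print(text.split())
# ๋ณธ๋ฌธ์˜ ์ฒซ ๋ฒˆ์งธ ์˜ˆ์‹œ์™€ ๊ฐ™์€ ํ† ํฐ ๋ชฉ๋ก์ด ์ถœ๋ ฅ๋ฉ๋‹ˆ๋‹ค. (์˜ˆ: "Transformers?", "do.")

# ๋‹จ์–ด ๋ฌธ์ž(\w)์™€ ๊ทธ ์™ธ์˜ ๊ธฐํ˜ธ๋ฅผ ๋”ฐ๋กœ ์ž˜๋ผ๋‚ด๋ฉด ๊ตฌ๋‘์ ์ด ๋ถ„๋ฆฌ๋ฉ๋‹ˆ๋‹ค.
print(re.findall(r"\w+|[^\w\s]", text))
# ๋ณธ๋ฌธ์˜ ๋‘ ๋ฒˆ์งธ ์˜ˆ์‹œ์™€ ๊ฐ™์€ ํ† ํฐ ๋ชฉ๋ก์ด ์ถœ๋ ฅ๋ฉ๋‹ˆ๋‹ค. (์˜ˆ: "Don", "'", "t", ..., "?", ".")
```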
ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•˜๋Š” ๋ฐ ์ ์šฉํ•˜๋Š” ๊ทœ์น™์— ๋”ฐ๋ผ ๋™์ผํ•œ ํ…์ŠคํŠธ์— ๋Œ€ํ•ด ํ† ํฐํ™”๋œ ๊ฒฐ๊ณผ๊ฐ€ ๋‹ฌ๋ผ์ง‘๋‹ˆ๋‹ค. ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์€ ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ๋ฅผ ํ† ํฐํ™”ํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋œ ๊ฒƒ๊ณผ ๋™์ผํ•œ ๊ทœ์น™์œผ๋กœ ํ† ํฐํ™”๋œ ์ž…๋ ฅ์„ ์ œ๊ณตํ•ด์•ผ๋งŒ ์ œ๋Œ€๋กœ ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค. [spaCy](https://spacy.io/)์™€ [Moses](http://www.statmt.org/moses/?n=Development.GetStarted)๋Š” ์œ ๋ช…ํ•œ ๊ทœ์น™ ๊ธฐ๋ฐ˜ ํ† ํฌ๋‚˜์ด์ €์ž…๋‹ˆ๋‹ค. ์˜ˆ์ œ์— *spaCy*์™€ *Moses* ๋ฅผ ์ ์šฉํ•œ ๊ฒฐ๊ณผ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ``` ["Do", "n't", "you", "love", "๐Ÿค—", "Transformers", "?", "We", "sure", "do", "."] ``` ๋ณด์‹œ๋‹ค์‹œํ”ผ ๊ณต๋ฐฑ ๋ฐ ๊ตฌ๋‘์  ํ† ํฐํ™”์™€ ๊ทœ์น™ ๊ธฐ๋ฐ˜ ํ† ํฐํ™”๊ฐ€ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ๊ณต๋ฐฑ ๋ฐ ๊ตฌ๋‘์ , ๊ทœ์น™ ๊ธฐ๋ฐ˜ ํ† ํฐํ™”์€ ๋ชจ๋‘ ๋‹จ์–ด ๋ฌธ์žฅ์„ ๋‹จ์–ด๋กœ ์ชผ๊ฐœ๋Š” ๋‹จ์–ด ํ† ํฐํ™”์— ํ•ด๋‹นํ•ฉ๋‹ˆ๋‹ค. ์ด ํ† ํฐํ™” ๋ฐฉ๋ฒ•์€ ํ…์ŠคํŠธ๋ฅผ ๋” ์ž‘์€ ๋ฌถ์Œ(chunk)๋กœ ๋ถ„ํ• ํ•˜๋Š” ๊ฐ€์žฅ ์ง๊ด€์ ์ธ ๋ฐฉ๋ฒ•์ด์ง€๋งŒ, ๋Œ€๊ทœ๋ชจ ํ…์ŠคํŠธ ๋ง๋ญ‰์น˜์— ๋Œ€ํ•ด์„œ๋Š” ๋ฌธ์ œ๊ฐ€ ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๊ฒฝ์šฐ ๊ณต๋ฐฑ ๋ฐ ๊ตฌ๋‘์  ํ† ํฐํ™”๋Š” ์ผ๋ฐ˜์ ์œผ๋กœ ๋งค์šฐ ํฐ ์–ดํœ˜(์‚ฌ์šฉ๋œ ๋ชจ๋“  ๊ณ ์œ  ๋‹จ์–ด์™€ ํ† ํฐ ์ง‘ํ•ฉ)์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. *์˜ˆ๋ฅผ ๋“ค์–ด*, [Transformer XL](model_doc/transformerxl)์€ ๊ณต๋ฐฑ ๋ฐ ๊ตฌ๋‘์  ํ† ํฐํ™”๋ฅผ ์‚ฌ์šฉํ•ด ์–ดํœ˜(vocabulary) ํฌ๊ธฐ๊ฐ€ 267,735์ž…๋‹ˆ๋‹ค! ์–ดํœ˜ ํฌ๊ธฐ๊ฐ€ ํฌ๋ฉด ๋ชจ๋ธ์— ์ž…๋ ฅ ๋ฐ ์ถœ๋ ฅ ๋ ˆ์ด์–ด๋กœ ์—„์ฒญ๋‚œ ์ž„๋ฒ ๋”ฉ ํ–‰๋ ฌ์ด ํ•„์š”ํ•˜๋ฏ€๋กœ ๋ฉ”๋ชจ๋ฆฌ์™€ ์‹œ๊ฐ„ ๋ณต์žก์„ฑ์ด ๋ชจ๋‘ ์ฆ๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ ํŠธ๋žœ์Šคํฌ๋จธ ๋ชจ๋ธ์€ ์–ดํœ˜ ํฌ๊ธฐ๊ฐ€ 50,000๊ฐœ๋ฅผ ๋„˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋“œ๋ฌผ๋ฉฐ, ํŠนํžˆ ๋‹จ์ผ ์–ธ์–ด์— ๋Œ€ํ•ด์„œ๋งŒ ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๊ฒฝ์šฐ์—๋Š” ๋”์šฑ ๊ทธ๋ ‡์Šต๋‹ˆ๋‹ค. ๋‹จ์ˆœํ•œ ๊ณต๋ฐฑ๊ณผ ๊ตฌ๋‘์  ํ† ํฐํ™”๊ฐ€ ๋งŒ์กฑ์Šค๋Ÿฝ์ง€ ์•Š๋‹ค๋ฉด ๋‹จ์ˆœํžˆ ๋ฌธ์ž๋ฅผ ํ† ํฐํ™”ํ•˜๋ฉด ์–ด๋–จ๊นŒ์š”? <Youtube id="ssLq_EK2jLE"/> ๋ฌธ์ž ํ† ํฐํ™”๋Š” ์•„์ฃผ ๊ฐ„๋‹จํ•˜๊ณ  ๋ฉ”๋ชจ๋ฆฌ์™€ ์‹œ๊ฐ„ ๋ณต์žก๋„๋ฅผ ํฌ๊ฒŒ ์ค„์ผ ์ˆ˜ ์žˆ์ง€๋งŒ, ๋ชจ๋ธ์ด ์˜๋ฏธ ์žˆ๋Š” ์ž…๋ ฅ ํ‘œํ˜„์„ ํ•™์Šตํ•˜๊ธฐ์—๋Š” ํ›จ์”ฌ ๋” ์–ด๋ ต์Šต๋‹ˆ๋‹ค. *์˜ˆ๋ฅผ ๋“ค์–ด*, ๋ฌธ์ž `"t"`์— ๋Œ€ํ•œ ์˜๋ฏธ ์žˆ๋Š” ๋ฌธ๋งฅ ๋…๋ฆฝ์  ํ‘œํ˜„์„ ๋ฐฐ์šฐ๋Š” ๊ฒƒ ๋ณด๋‹ค ๋‹จ์–ด `"today"`์— ๋Œ€ํ•œ ์˜๋ฏธ ์žˆ๋Š” ๋ฌธ๋งฅ ๋…๋ฆฝ์  ํ‘œํ˜„์„ ๋ฐฐ์šฐ๋Š” ๊ฒƒ์ด ํ›จ์”ฌ ๋” ์–ด๋ ต์Šต๋‹ˆ๋‹ค. ๋ฌธ์ž ํ† ํฐํ™”๋Š” ์ข…์ข… ์„ฑ๋Šฅ ์ €ํ•˜๋ฅผ ๋™๋ฐ˜ํ•˜๊ธฐ ๋•Œ๋ฌธ์— ๋‘ ๊ฐ€์ง€ ์žฅ์ ์„ ๋ชจ๋‘ ์–ป๊ธฐ ์œ„ํ•ด ํŠธ๋žœ์Šคํฌ๋จธ ๋ชจ๋ธ์€ **์„œ๋ธŒ์›Œ๋“œ** ํ† ํฐํ™”๋ผ๊ณ  ํ•˜๋Š” ๋‹จ์–ด ์ˆ˜์ค€๊ณผ ๋ฌธ์ž ์ˆ˜์ค€ ํ† ํฐํ™”์˜ ํ•˜์ด๋ธŒ๋ฆฌ๋“œ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ## ์„œ๋ธŒ์›Œ๋“œ ํ† ํฐํ™”[[subword-tokenization]] <Youtube id="zHvTiHr506c"/> ์„œ๋ธŒ์›Œ๋“œ ํ† ํฐํ™” ์•Œ๊ณ ๋ฆฌ์ฆ˜์€ ์ž์ฃผ ์‚ฌ์šฉ๋˜๋Š” ๋‹จ์–ด๋Š” ๋” ์ž‘์€ ํ•˜์œ„ ๋‹จ์–ด๋กœ ์ชผ๊ฐœ๊ณ , ๋“œ๋ฌธ ๋‹จ์–ด๋Š” ์˜๋ฏธ ์žˆ๋Š” ํ•˜์œ„ ๋‹จ์–ด๋กœ ๋ถ„ํ•ด๋˜์–ด์•ผ ํ•œ๋‹ค๋Š” ์›์น™์— ๋”ฐ๋ผ ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด `"annoyingly"`๋Š” ๋“œ๋ฌธ ๋‹จ์–ด๋กœ ๊ฐ„์ฃผ๋˜์–ด `"annoying"`๊ณผ `"ly"`๋กœ ๋ถ„ํ•ด๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `"annoyingly"`๊ฐ€ `"annoying"`๊ณผ `"ly"`์˜ ํ•ฉ์„ฑ์–ด์ธ ๋ฐ˜๋ฉด, `"annoying"`๊ณผ `"ly"` ๋‘˜ ๋‹ค ๋…๋ฆฝ์ ์ธ ์„œ๋ธŒ์›Œ๋“œ๋กœ ์ž์ฃผ ๋“ฑ์žฅํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ํ„ฐํ‚ค์–ด์™€ ๊ฐ™์€ ์‘์ง‘์„ฑ ์–ธ์–ด์—์„œ ํŠนํžˆ ์œ ์šฉํ•˜๋ฉฐ, ์„œ๋ธŒ์›Œ๋“œ๋ฅผ ๋ฌถ์–ด ์ž„์˜๋กœ ๊ธด ๋ณตํ•ฉ ๋‹จ์–ด๋ฅผ ๋งŒ๋“ค ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์„œ๋ธŒ์›Œ๋“œ ํ† ํฐํ™”๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ๋ชจ๋ธ์ด ์˜๋ฏธ ์žˆ๋Š” ๋ฌธ๋งฅ ๋…๋ฆฝ์  ํ‘œํ˜„์„ ํ•™์Šตํ•˜๋ฉด์„œ ํ•ฉ๋ฆฌ์ ์ธ ์–ดํœ˜ ํฌ๊ธฐ๋ฅผ ๊ฐ€์งˆ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ, ์„œ๋ธŒ์›Œ๋“œ ํ† ํฐํ™”๋ฅผ ํ†ตํ•ด ๋ชจ๋ธ์€ ์ด์ „์— ๋ณธ ์ ์ด ์—†๋Š” ๋‹จ์–ด๋ฅผ ์•Œ๋ ค์ง„ ์„œ๋ธŒ์›Œ๋“œ๋กœ ๋ถ„ํ•ดํ•˜์—ฌ ์ฒ˜๋ฆฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
์˜ˆ๋ฅผ ๋“ค์–ด, [`~transformers.BertTokenizer`]๋Š” `"I have a new GPU!"` ๋ผ๋Š” ๋ฌธ์žฅ์„ ์•„๋ž˜์™€ ๊ฐ™์ด ํ† ํฐํ™”ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import BertTokenizer >>> tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased") >>> tokenizer.tokenize("I have a new GPU!") ["i", "have", "a", "new", "gp", "##u", "!"] ``` ๋Œ€์†Œ๋ฌธ์ž๊ฐ€ ์—†๋Š” ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ด ๋ฌธ์žฅ์˜ ์‹œ์ž‘์ด ์†Œ๋ฌธ์ž๋กœ ํ‘œ๊ธฐ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ๋‹จ์–ด `["i", "have", "a", "new"]`๋Š” ํ† ํฌ๋‚˜์ด์ €์˜ ์–ดํœ˜์— ์†ํ•˜์ง€๋งŒ, `"gpu"`๋Š” ์†ํ•˜์ง€ ์•Š๋Š” ๊ฒƒ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ฒฐ๊ณผ์ ์œผ๋กœ ํ† ํฌ๋‚˜์ด์ €๋Š” `"gpu"`๋ฅผ ์•Œ๋ ค์ง„ ๋‘ ๊ฐœ์˜ ์„œ๋ธŒ์›Œ๋“œ๋กœ ์ชผ๊ฐญ๋‹ˆ๋‹ค: `["gp" and "##u"]`. `"##"`์€ ํ† ํฐ์˜ ๋‚˜๋จธ์ง€ ๋ถ€๋ถ„์ด ๊ณต๋ฐฑ ์—†์ด ์ด์ „ ํ† ํฐ์— ์—ฐ๊ฒฐ๋˜์–ด์•ผ(attach) ํ•จ์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค(ํ† ํฐํ™” ๋””์ฝ”๋”ฉ ๋˜๋Š” ์—ญ์ „์„ ์œ„ํ•ด). ๋˜ ๋‹ค๋ฅธ ์˜ˆ๋กœ, [`~transformers.XLNetTokenizer`]๋Š” ์ด์ „์— ์˜ˆ์‹œ ๋ฌธ์žฅ์„ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ํ† ํฐํ™”ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import XLNetTokenizer >>> tokenizer = XLNetTokenizer.from_pretrained("xlnet/xlnet-base-cased") >>> tokenizer.tokenize("Don't you love ๐Ÿค— Transformers? We sure do.") ["โ–Don", "'", "t", "โ–you", "โ–love", "โ–", "๐Ÿค—", "โ–", "Transform", "ers", "?", "โ–We", "โ–sure", "โ–do", "."] ``` `"โ–"`๊ฐ€ ๊ฐ€์ง€๋Š” ์˜๋ฏธ๋Š” [SentencePiece](#sentencepiece)์—์„œ ๋‹ค์‹œ ์‚ดํŽด๋ณด๋„๋ก ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ๋ณด๋‹ค์‹œํ”ผ `"Transformers"` ๋ผ๋Š” ๋“œ๋ฌธ ๋‹จ์–ด๋Š” ์„œ๋ธŒ์›Œ๋“œ `"Transform"`์™€ `"ers"`๋กœ ์ชผ๊ฐœ์ง‘๋‹ˆ๋‹ค. ์ด์ œ ๋‹ค์–‘ํ•œ ํ•˜์œ„ ๋‹จ์–ด ํ† ํฐํ™” ์•Œ๊ณ ๋ฆฌ์ฆ˜์ด ์–ด๋–ป๊ฒŒ ์ž‘๋™ํ•˜๋Š”์ง€ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ํ† ํฐํ™” ์•Œ๊ณ ๋ฆฌ์ฆ˜์€ ์ผ๋ฐ˜์ ์œผ๋กœ ํ•ด๋‹น ๋ชจ๋ธ์ด ํ•™์Šต๋˜๋Š” ๋ง๋ญ‰์น˜์— ๋Œ€ํ•ด ์ˆ˜ํ–‰๋˜๋Š” ์–ด๋–ค ํ˜•ํƒœ์˜ ํ•™์Šต์— ์˜์กดํ•œ๋‹ค๋Š” ์ ์— ์œ ์˜ํ•˜์„ธ์š”. <a id='byte-pair-encoding'></a> ### ๋ฐ”์ดํŠธ ํŽ˜์–ด ์ธ์ฝ”๋”ฉ (Byte-Pair Encoding, BPE)[[bytepair-encoding-bpe]] ๋ฐ”์ดํŠธ ํŽ˜์–ด ์ธ์ฝ”๋”ฉ(BPE)์€ [Neural Machine Translation of Rare Words with Subword Units (Sennrich et al., 2015)](https://arxiv.org/abs/1508.07909) ์—์„œ ์†Œ๊ฐœ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. BPE๋Š” ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ๋ฅผ ๋‹จ์–ด๋กœ ๋ถ„ํ• ํ•˜๋Š” ์‚ฌ์ „ ํ† ํฌ๋‚˜์ด์ €(pre-tokenizer)์— ์˜์กดํ•ฉ๋‹ˆ๋‹ค. ์‚ฌ์ „ ํ† ํฐํ™”(Pretokenization)์—๋Š” [GPT-2](model_doc/gpt2), [Roberta](model_doc/roberta)์™€ ๊ฐ™์€ ๊ฐ„๋‹จํ•œ ๊ณต๋ฐฑ ํ† ํฐํ™”๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ณต์žกํ•œ ์‚ฌ์ „ ํ† ํฐํ™”์—๋Š” ๊ทœ์น™ ๊ธฐ๋ฐ˜ ํ† ํฐํ™”๊ฐ€ ํ•ด๋‹นํ•˜๋Š”๋ฐ, ํ›ˆ๋ จ ๋ง๋ญ‰์น˜์—์„œ ๊ฐ ๋‹จ์–ด์˜ ๋นˆ๋„๋ฅผ ๊ณ„์‚ฐํ•˜๊ธฐ ์œ„ํ•ด ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. [XLM](model_doc/xlm), ๋Œ€๋ถ€๋ถ„์˜ ์–ธ์–ด์—์„œ Moses๋ฅผ ์‚ฌ์šฉํ•˜๋Š” [FlauBERT](model_doc/flaubert), Spacy์™€ ftfy๋ฅผ ์‚ฌ์šฉํ•˜๋Š” [GPT](model_doc/gpt)๊ฐ€ ํ•ด๋‹นํ•ฉ๋‹ˆ๋‹ค. ์‚ฌ์ „ ํ† ํฐํ™” ์ดํ›„์—, ๊ณ ์œ  ๋‹จ์–ด ์ง‘ํ•ฉ๊ฐ€ ์ƒ์„ฑ๋˜๊ณ  ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ์—์„œ ๊ฐ ๋‹จ์–ด๊ฐ€ ๋“ฑ์žฅํ•˜๋Š” ๋นˆ๋„๊ฐ€ ๊ฒฐ์ •๋ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ์œผ๋กœ, BPE๋Š” ๊ณ ์œ  ๋‹จ์–ด ์ง‘ํ•ฉ์— ๋‚˜ํƒ€๋‚˜๋Š” ๋ชจ๋“  ๊ธฐํ˜ธ๋กœ ๊ตฌ์„ฑ๋œ ๊ธฐ๋ณธ ์–ดํœ˜๋ฅผ ์ƒ์„ฑํ•˜๊ณ  ๊ธฐ๋ณธ ์–ดํœ˜์˜ ๋‘ ๊ธฐํ˜ธ์—์„œ ์ƒˆ๋กœ์šด ๊ธฐํ˜ธ๋ฅผ ํ˜•์„ฑํ•˜๋Š” ๋ณ‘ํ•ฉ ๊ทœ์น™์„ ํ•™์Šตํ•ฉ๋‹ˆ๋‹ค. ์–ดํœ˜๊ฐ€ ์›ํ•˜๋Š” ์–ดํœ˜ ํฌ๊ธฐ์— ๋„๋‹ฌํ•  ๋•Œ๊นŒ์ง€ ์œ„์˜ ๊ณผ์ •์„ ๋ฐ˜๋ณตํ•ฉ๋‹ˆ๋‹ค. ์–ดํœ˜ ํฌ๊ธฐ๋Š” ํ† ํฌ๋‚˜์ด์ €๋ฅผ ํ›ˆ๋ จ์‹œํ‚ค๊ธฐ ์ „์— ์ •์˜ํ•ด์•ผ ํ•˜๋Š” ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ผ๋Š” ์ ์„ ์œ ์˜ํ•˜์„ธ์š”. 
์˜ˆ๋ฅผ ๋“ค์–ด, ์‚ฌ์ „ ํ† ํฐํ™” ํ›„ ๋นˆ๋„๋ฅผ ํฌํ•จํ•œ ๋‹ค์Œ๊ณผ ๊ฐ™์€ ์–ดํœ˜ ์ง‘ํ•ฉ์ด ๊ฒฐ์ •๋˜์—ˆ๋‹ค๊ณ  ๊ฐ€์ •ํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ``` ("hug", 10), ("pug", 5), ("pun", 12), ("bun", 4), ("hugs", 5) ``` ๊ฒฐ๊ณผ์ ์œผ๋กœ ๊ธฐ๋ณธ ์–ดํœ˜๋Š” `["b", "g", "h", "n", "p", "s", "u"]` ์ด๊ณ , ๊ฐ ๋‹จ์–ด๋ฅผ ๊ธฐ๋ณธ ์–ดํœ˜์— ์†ํ•˜๋Š” ๊ธฐํ˜ธ๋กœ ์ชผ๊ฐœ๋ฉด ์•„๋ž˜์™€ ๊ฐ™์Šต๋‹ˆ๋‹ค: ``` ("h" "u" "g", 10), ("p" "u" "g", 5), ("p" "u" "n", 12), ("b" "u" "n", 4), ("h" "u" "g" "s", 5) ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ BPE๋Š” ๊ฐ€๋Šฅํ•œ ๊ฐ ๊ธฐํ˜ธ ์Œ์˜ ๋นˆ๋„๋ฅผ ๊ณ„์‚ฐํ•˜์—ฌ ๊ฐ€์žฅ ์ž์ฃผ ๋ฐœ์ƒํ•˜๋Š” ๊ธฐํ˜ธ ์Œ์„ ์„ ํƒํ•ฉ๋‹ˆ๋‹ค. ์œ„์˜ ์˜ˆ์‹œ์—์„œ `"h"` ๋’ค์— ์˜ค๋Š” `"u"`๋Š” _10 + 5 = 15_ ๋ฒˆ ๋“ฑ์žฅํ•ฉ๋‹ˆ๋‹ค. (`"hug"`์—์„œ 10๋ฒˆ, `"hugs"`์—์„œ 5๋ฒˆ ๋“ฑ์žฅ) ํ•˜์ง€๋งŒ, ๊ฐ€์žฅ ๋“ฑ์žฅ ๋นˆ๋„๊ฐ€ ๋†’์€ ๊ธฐํ˜ธ ์Œ์€ `"u"` ๋’ค์— ์˜ค๋Š” `"g"`์ž…๋‹ˆ๋‹ค. _10 + 5 + 5 = 20_ ์œผ๋กœ ์ด 20๋ฒˆ ๋“ฑ์žฅํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ํ† ํฌ๋‚˜์ด์ €๊ฐ€ ๋ณ‘ํ•ฉํ•˜๋Š” ๊ฐ€์žฅ ์ฒซ ๋ฒˆ์งธ ์Œ์€ `"u"` ๋’ค์— ์˜ค๋Š” `"g"`์ž…๋‹ˆ๋‹ค. `"ug"`๊ฐ€ ์–ดํœ˜์— ์ถ”๊ฐ€๋˜์–ด ์–ดํœ˜๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ``` ("h" "ug", 10), ("p" "ug", 5), ("p" "u" "n", 12), ("b" "u" "n", 4), ("h" "ug" "s", 5) ``` BPE๋Š” ๋‹ค์Œ์œผ๋กœ ๊ฐ€์žฅ ๋งŽ์ด ๋“ฑ์žฅํ•˜๋Š” ๊ธฐํ˜ธ ์Œ์„ ์‹๋ณ„ํ•ฉ๋‹ˆ๋‹ค. `"u"` ๋’ค์— ์˜ค๋Š” `"n"`์€ 16๋ฒˆ ๋“ฑ์žฅํ•ด `"un"` ์œผ๋กœ ๋ณ‘ํ•ฉ๋˜์–ด ์–ดํœ˜์— ์ถ”๊ฐ€๋ฉ๋‹ˆ๋‹ค. ๊ทธ ๋‹ค์Œ์œผ๋กœ ๋นˆ๋„์ˆ˜๊ฐ€ ๋†“์€ ๊ธฐํ˜ธ ์Œ์€ `"h"` ๋’ค์— ์˜ค๋Š” `"ug"`๋กœ 15๋ฒˆ ๋“ฑ์žฅํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์‹œ ํ•œ ๋ฒˆ `"hug"`๋กœ ๋ณ‘ํ•ฉ๋˜์–ด ์–ดํœ˜์— ์ถ”๊ฐ€๋ฉ๋‹ˆ๋‹ค. ํ˜„์žฌ ๋‹จ๊ณ„์—์„œ ์–ดํœ˜๋Š” `["b", "g", "h", "n", "p", "s", "u", "ug", "un", "hug"]` ์ด๊ณ , ๊ณ ์œ  ๋‹จ์–ด ์ง‘ํ•ฉ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ``` ("hug", 10), ("p" "ug", 5), ("p" "un", 12), ("b" "un", 4), ("hug" "s", 5) ``` ์ด ์‹œ์ ์—์„œ ๋ฐ”์ดํŠธ ํŽ˜์–ด ์ธ์ฝ”๋”ฉ ํ›ˆ๋ จ์ด ์ค‘๋‹จ๋œ๋‹ค๊ณ  ๊ฐ€์ •ํ•˜๋ฉด, ํ›ˆ๋ จ๋œ ๋ณ‘ํ•ฉ ๊ทœ์น™์€ ์ƒˆ๋กœ์šด ๋‹จ์–ด์— ์ ์šฉ๋ฉ๋‹ˆ๋‹ค(๊ธฐ๋ณธ ์–ดํœ˜์— ํฌํ•จ๋œ ๊ธฐํ˜ธ๊ฐ€ ์ƒˆ๋กœ์šด ๋‹จ์–ด์— ํฌํ•จ๋˜์ง€ ์•Š๋Š” ํ•œ). ์˜ˆ๋ฅผ ๋“ค์–ด, ๋‹จ์–ด `"bug"`๋Š” `["b", "ug"]`๋กœ ํ† ํฐํ™”๋˜์ง€๋งŒ, `"m"`์ด ๊ธฐ๋ณธ ์–ดํœ˜์— ์—†๊ธฐ ๋•Œ๋ฌธ์— `"mug"`๋Š” `["<unk>", "ug"]`๋กœ ํ† ํฐํ™”๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ์—๋Š” ๋‹จ์ผ ๋ฌธ์ž๊ฐ€ ์ตœ์†Œํ•œ ํ•œ ๋ฒˆ ๋“ฑ์žฅํ•˜๊ธฐ ๋•Œ๋ฌธ์— ์ผ๋ฐ˜์ ์œผ๋กœ `"m"`๊ณผ ๊ฐ™์€ ๋‹จ์ผ ๋ฌธ์ž๋Š” `"<unk>"` ๊ธฐํ˜ธ๋กœ ๋Œ€์ฒด๋˜์ง€ ์•Š์ง€๋งŒ, ์ด๋ชจํ‹ฐ์ฝ˜๊ณผ ๊ฐ™์€ ํŠน๋ณ„ํ•œ ๋ฌธ์ž์ธ ๊ฒฝ์šฐ์—๋Š” ๋Œ€์ฒด๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด์ „์— ์–ธ๊ธ‰ํ–ˆ๋“ฏ์ด ์–ดํœ˜ ํฌ๊ธฐ(์ฆ‰ ๊ธฐ๋ณธ ์–ดํœ˜ ํฌ๊ธฐ + ๋ณ‘ํ•ฉ ํšŸ์ˆ˜)๋Š” ์„ ํƒํ•ด์•ผํ•˜๋Š” ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ์ž…๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด [GPT](model_doc/gpt)์˜ ๊ธฐ๋ณธ ์–ดํœ˜ ํฌ๊ธฐ๋Š” 478, 40,000๋ฒˆ์˜ ๋ณ‘ํ•ฉ ์ดํ›„์— ํ›ˆ๋ จ์„ ์ข…๋ฃŒํ•˜๊ธฐ ๋•Œ๋ฌธ์— ์–ดํœ˜ ํฌ๊ธฐ๊ฐ€ 40,478์ž…๋‹ˆ๋‹ค. #### ๋ฐ”์ดํŠธ ์ˆ˜์ค€ BPE (Byte-level BPE)[[bytelevel-bpe]] ๊ฐ€๋Šฅํ•œ ๋ชจ๋“  ๊ธฐ๋ณธ ๋ฌธ์ž๋ฅผ ํฌํ•จํ•˜๋Š” ๊ธฐ๋ณธ ์–ดํœ˜์˜ ํฌ๊ธฐ๋Š” ๊ต‰์žฅํžˆ ์ปค์งˆ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. (์˜ˆ: ๋ชจ๋“  ์œ ๋‹ˆ์ฝ”๋“œ ๋ฌธ์ž๋ฅผ ๊ธฐ๋ณธ ๋ฌธ์ž๋กœ ๊ฐ„์ฃผํ•˜๋Š” ๊ฒฝ์šฐ) ๋” ๋‚˜์€ ๊ธฐ๋ณธ ์–ดํœ˜๋ฅผ ๊ฐ–๋„๋ก [GPT-2](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)๋Š” ๊ธฐ๋ณธ ์–ดํœ˜๋กœ ๋ฐ”์ดํŠธ(bytes)๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด ๋ฐฉ์‹์€ ๋ชจ๋“  ๊ธฐ๋ณธ ๋ฌธ์ž๊ฐ€ ์–ดํœ˜์— ํฌํ•จ๋˜๋„๋ก ํ•˜๋ฉด์„œ ๊ธฐ๋ณธ ์–ดํœ˜์˜ ํฌ๊ธฐ๋ฅผ 256์œผ๋กœ ์ œํ•œํ•ฉ๋‹ˆ๋‹ค. ๊ตฌ๋‘์ ์„ ๋‹ค๋ฃจ๋Š” ์ถ”๊ฐ€์ ์ธ ๊ทœ์น™์„ ์‚ฌ์šฉํ•ด GPT2 ํ† ํฌ๋‚˜์ด์ €๋Š” ๋ชจ๋“  ํ…์ŠคํŠธ๋ฅผ <unk> ๊ธฐํ˜ธ ์—†์ด ํ† ํฐํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
[GPT-2](model_doc/gpt)์˜ ์–ดํœ˜ ํฌ๊ธฐ๋Š” 50,257๋กœ 256 ๋ฐ”์ดํŠธ ํฌ๊ธฐ์˜ ๊ธฐ๋ณธ ํ† ํฐ, ํŠน๋ณ„ํ•œ end-of-text ํ† ํฐ๊ณผ 50,000๋ฒˆ์˜ ๋ณ‘ํ•ฉ์œผ๋กœ ํ•™์Šตํ•œ ๊ธฐํ˜ธ๋กœ ๊ตฌ์„ฑ๋ฉ๋‹ˆ๋‹ค. <a id='wordpiece'></a> ### ์›Œ๋“œํ”ผ์Šค (WordPiece)[[wordpiece]] ์›Œ๋“œํ”ผ์Šค๋Š” [BERT](model_doc/bert), [DistilBERT](model_doc/distilbert), [Electra](model_doc/electra)์— ์‚ฌ์šฉ๋œ ์„œ๋ธŒ์›Œ๋“œ ํ† ํฐํ™” ์•Œ๊ณ ๋ฆฌ์ฆ˜์ž…๋‹ˆ๋‹ค. ์ด ์•Œ๊ณ ๋ฆฌ์ฆ˜์€ [Japanese and Korean Voice Search (Schuster et al., 2012)](https://static.googleusercontent.com/media/research.google.com/ja//pubs/archive/37842.pdf)์—์„œ ์†Œ๊ฐœ๋˜์—ˆ๊ณ , BPE์™€ ๊ต‰์žฅํžˆ ์œ ์‚ฌํ•ฉ๋‹ˆ๋‹ค. ์›Œ๋“œํ”ผ์Šค๋Š” ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ์— ๋“ฑ์žฅํ•˜๋Š” ๋ชจ๋“  ๋ฌธ์ž๋กœ ๊ธฐ๋ณธ ์–ดํœ˜๋ฅผ ์ดˆ๊ธฐํ™”ํ•œ ํ›„, ์ฃผ์–ด์ง„ ๋ณ‘ํ•ฉ ๊ทœ์น™์— ๋”ฐ๋ผ ์ ์ง„์ ์œผ๋กœ ํ•™์Šตํ•ฉ๋‹ˆ๋‹ค. BPE์™€๋Š” ๋Œ€์กฐ์ ์œผ๋กœ ์›Œ๋“œํ”ผ์Šค๋Š” ๊ฐ€์žฅ ๋นˆ๋„์ˆ˜๊ฐ€ ๋†’์€ ๊ธฐํ˜ธ ์Œ์„ ์„ ํƒํ•˜์ง€ ์•Š๊ณ , ์–ดํœ˜์— ์ถ”๊ฐ€๋˜์—ˆ์„ ๋•Œ ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ์˜ ์šฐ๋„๊ฐ€ ์ตœ๋Œ€ํ™”๋˜๋Š” ์Œ์„ ์„ ํƒํ•ฉ๋‹ˆ๋‹ค. ์ •ํ™•ํžˆ ๋ฌด์Šจ ์˜๋ฏธ์ผ๊นŒ์š”? ์ด์ „ ์˜ˆ์‹œ๋ฅผ ์ฐธ์กฐํ•˜๋ฉด, ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ์˜ ์šฐ๋„ ๊ฐ’์„ ์ตœ๋Œ€ํ™”ํ•˜๋Š” ๊ฒƒ์€ ๋ชจ๋“  ๊ธฐํ˜ธ ์Œ ์ค‘์—์„œ ์ฒซ ๋ฒˆ์งธ ๊ธฐํ˜ธ์™€ ๋‘ ๋ฒˆ์งธ ๊ธฐํ˜ธ์˜ ํ™•๋ฅ ๋กœ ๋‚˜๋ˆˆ ํ™•๋ฅ ์ด ๊ฐ€์žฅ ํฐ ๊ธฐํ˜ธ ์Œ์„ ์ฐพ๋Š” ๊ฒƒ๊ณผ ๋™์ผํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด `"ug"`์˜ ํ™•๋ฅ ์ด `"u"`์™€ `"g"` ๊ฐ๊ฐ์œผ๋กœ ์ชผ๊ฐœ์กŒ์„ ๋•Œ ๋ณด๋‹ค ๋†’์•„์•ผ `"u"` ๋’ค์— ์˜ค๋Š” `"g"`๋Š” ๋ณ‘ํ•ฉ๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ง๊ด€์ ์œผ๋กœ ์›Œ๋“œํ”ผ์Šค๋Š” ๋‘ ๊ธฐํ˜ธ๋ฅผ ๋ณ‘ํ•ฉํ•˜์—ฌ _์žƒ๋Š”_ ๊ฒƒ์„ ํ‰๊ฐ€ํ•˜์—ฌ ๊ทธ๋งŒํ•œ _๊ฐ€์น˜_๊ฐ€ ์žˆ๋Š”์ง€ ํ™•์ธํ•œ๋‹ค๋Š” ์ ์—์„œ BPE์™€ ์•ฝ๊ฐ„ ๋‹ค๋ฆ…๋‹ˆ๋‹ค. <a id='unigram'></a> ### ์œ ๋‹ˆ๊ทธ๋žจ (Unigram)[[unigram]] ์œ ๋‹ˆ๊ทธ๋žจ์€ [Subword Regularization: Improving Neural Network Translation Models with Multiple Subword Candidates (Kudo, 2018)](https://arxiv.org/pdf/1804.10959.pdf)์—์„œ ์ œ์•ˆ๋œ ์„œ๋ธŒ์›Œ๋“œ ํ† ํฐํ™” ์•Œ๊ณ ๋ฆฌ์ฆ˜์ž…๋‹ˆ๋‹ค. BPE๋‚˜ ์›Œ๋“œํ”ผ์Šค์™€ ๋‹ฌ๋ฆฌ ์œ ๋‹ˆ๊ทธ๋žจ์€ ๊ธฐ๋ณธ ์–ดํœ˜๋ฅผ ๋งŽ์€ ์ˆ˜์˜ ๊ธฐํ˜ธ๋กœ ์ดˆ๊ธฐํ™”ํ•œ ํ›„ ๊ฐ ๊ธฐํ˜ธ๋ฅผ ์ ์ง„์ ์œผ๋กœ ์ค„์—ฌ ๋” ์ž‘์€ ์–ดํœ˜๋ฅผ ์–ป์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๊ธฐ๋ณธ ์–ดํœ˜๋Š” ๋ชจ๋“  ์‚ฌ์ „ ํ† ํฐํ™”๋œ ๋‹จ์–ด์™€ ๊ฐ€์žฅ ์ผ๋ฐ˜์ ์ธ ํ•˜์œ„ ๋ฌธ์ž์—ด์— ํ•ด๋‹นํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์œ ๋‹ˆ๊ทธ๋žจ์€ transformers ๋ชจ๋ธ์—์„œ ์ง์ ‘์ ์œผ๋กœ ์‚ฌ์šฉ๋˜์ง€๋Š” ์•Š์ง€๋งŒ, [SentencePiece](#sentencepiece)์™€ ํ•จ๊ป˜ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ๊ฐ ํ›ˆ๋ จ ๋‹จ๊ณ„์—์„œ ์œ ๋‹ˆ๊ทธ๋žจ ์•Œ๊ณ ๋ฆฌ์ฆ˜์€ ํ˜„์žฌ ์–ดํœ˜์™€ ์œ ๋‹ˆ๊ทธ๋žจ ์–ธ์–ด ๋ชจ๋ธ์ด ์ฃผ์–ด์กŒ์„ ๋•Œ ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ์— ๋Œ€ํ•œ ์†์‹ค(ํ”ํžˆ ๋กœ๊ทธ ์šฐ๋„๋กœ ์ •์˜๋จ)์„ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฐ ๋‹ค์Œ ์–ดํœ˜์˜ ๊ฐ ๊ธฐํ˜ธ์— ๋Œ€ํ•ด ์•Œ๊ณ ๋ฆฌ์ฆ˜์€ ํ•ด๋‹น ๊ธฐํ˜ธ๋ฅผ ์–ดํœ˜์—์„œ ์ œ๊ฑฐํ•  ๊ฒฝ์šฐ ์ „์ฒด ์†์‹ค์ด ์–ผ๋งˆ๋‚˜ ์ฆ๊ฐ€ํ• ์ง€ ๊ณ„์‚ฐํ•ฉ๋‹ˆ๋‹ค. ์ดํ›„์— ์œ ๋‹ˆ๊ทธ๋žจ์€ ์†์‹ค ์ฆ๊ฐ€์œจ์ด ๊ฐ€์žฅ ๋‚ฎ์€ ๊ธฐํ˜ธ์˜ p(๋ณดํ†ต 10% ๋˜๋Š” 20%) ํผ์„ผํŠธ๋ฅผ ์ œ๊ฑฐํ•ฉ๋‹ˆ๋‹ค. (์ œ๊ฑฐ๋˜๋Š” ๊ธฐํ˜ธ๋Š” ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ์— ๋Œ€ํ•œ ์ „์ฒด ์†์‹ค์— ๊ฐ€์žฅ ์ž‘์€ ์˜ํ–ฅ์„ ๋ฏธ์นฉ๋‹ˆ๋‹ค.) ์–ดํœ˜๊ฐ€ ์›ํ•˜๋Š” ํฌ๊ธฐ์— ๋„๋‹ฌํ•  ๋•Œ๊นŒ์ง€ ์ด ๊ณผ์ •์„ ๋ฐ˜๋ณตํ•ฉ๋‹ˆ๋‹ค. ์œ ๋‹ˆ๊ทธ๋žจ ์•Œ๊ณ ๋ฆฌ์ฆ˜์€ ํ•ญ์ƒ ๊ธฐ๋ณธ ๋ฌธ์ž๋ฅผ ํฌํ•จํ•ด ์–ด๋–ค ๋‹จ์–ด๋ผ๋„ ํ† ํฐํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์œ ๋‹ˆ๊ทธ๋žจ์ด ๋ณ‘ํ•ฉ ๊ทœ์น™์— ๊ธฐ๋ฐ˜ํ•˜์ง€ ์•Š๊ธฐ ๋–„๋ฌธ์— (BPE๋‚˜ ์›Œ๋“œํ”ผ์Šค์™€๋Š” ๋Œ€์กฐ์ ์œผ๋กœ), ํ•ด๋‹น ์•Œ๊ณ ๋ฆฌ์ฆ˜์€ ํ›ˆ๋ จ ์ดํ›„์— ์ƒˆ๋กœ์šด ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•˜๋Š”๋ฐ ์—ฌ๋Ÿฌ ๊ฐ€์ง€ ๋ฐฉ๋ฒ•์ด ์žˆ์Šต๋‹ˆ๋‹ค. 
์˜ˆ๋ฅผ ๋“ค์–ด, ํ›ˆ๋ จ๋œ ์œ ๋‹ˆ๊ทธ๋žจ ํ† ํฐํ™”๊ฐ€ ๋‹ค์Œ๊ณผ ๊ฐ™์€ ์–ดํœ˜๋ฅผ ๊ฐ€์ง„๋‹ค๋ฉด: ``` ["b", "g", "h", "n", "p", "s", "u", "ug", "un", "hug"], ``` `"hugs"`๋Š” ๋‘ ๊ฐ€์ง€๋กœ ํ† ํฐํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `["hug", "s"]`์™€ `["h", "ug", "s"]` ๋˜๋Š” `["h", "u", "g", "s"]`. ๊ทธ๋ ‡๋‹ค๋ฉด ์–ด๋–ค ํ† ํฐํ™” ๋ฐฉ๋ฒ•์„ ์„ ํƒํ•ด์•ผ ํ• ๊นŒ์š”? ์œ ๋‹ˆ๊ทธ๋žจ์€ ์–ดํœ˜๋ฅผ ์ €์žฅํ•˜๋Š” ๊ฒƒ ์™ธ์—๋„ ํ›ˆ๋ จ ๋ง๋ญ‰์น˜์— ๊ฐ ํ† ํฐ์˜ ํ™•๋ฅ ์„ ์ €์žฅํ•˜์—ฌ ํ›ˆ๋ จ ํ›„ ๊ฐ€๋Šฅํ•œ ๊ฐ ํ† ํฐํ™”์˜ ํ™•๋ฅ ์„ ๊ณ„์‚ฐํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. ์ด ์•Œ๊ณ ๋ฆฌ์ฆ˜์€ ๋‹จ์ˆœํžˆ ์‹ค์ œ๋กœ ๊ฐ€์žฅ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์€ ํ† ํฐํ™”๋ฅผ ์„ ํƒํ•˜์ง€๋งŒ, ํ™•๋ฅ ์— ๋”ฐ๋ผ ๊ฐ€๋Šฅํ•œ ํ† ํฐํ™”๋ฅผ ์ƒ˜ํ”Œ๋งํ•  ์ˆ˜ ์žˆ๋Š” ๊ฐ€๋Šฅ์„ฑ๋„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ํ™•๋ฅ ์€ ํ† ํฌ๋‚˜์ด์ €๊ฐ€ ํ•™์Šตํ•œ ์†์‹ค์— ์˜ํ•ด ์ •์˜๋ฉ๋‹ˆ๋‹ค. ๋‹จ์–ด๋กœ ๊ตฌ์„ฑ๋œ ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ๋ฅผ \\(x_{1}, \dots, x_{N}\\)๋ผ ํ•˜๊ณ , ๋‹จ์–ด \\(x_{i}\\)์— ๋Œ€ํ•œ ๊ฐ€๋Šฅํ•œ ๋ชจ๋“  ํ† ํฐํ™” ๊ฒฐ๊ณผ๋ฅผ \\(S(x_{i})\\)๋ผ ํ•œ๋‹ค๋ฉด, ์ „์ฒด ์†์‹ค์€ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ •์˜๋ฉ๋‹ˆ๋‹ค: $$\mathcal{L} = -\sum_{i=1}^{N} \log \left ( \sum_{x \in S(x_{i})} p(x) \right )$$ <a id='sentencepiece'></a> ### ์„ผํ…์Šคํ”ผ์Šค (SentencePiece)[[sentencepiece]] ์ง€๊ธˆ๊นŒ์ง€ ๋‹ค๋ฃฌ ํ† ํฐํ™” ์•Œ๊ณ ๋ฆฌ์ฆ˜์€ ๋™์ผํ•œ ๋ฌธ์ œ๋ฅผ ๊ฐ€์ง‘๋‹ˆ๋‹ค: ์ž…๋ ฅ ํ…์ŠคํŠธ๋Š” ๊ณต๋ฐฑ์„ ์‚ฌ์šฉํ•˜์—ฌ ๋‹จ์–ด๋ฅผ ๊ตฌ๋ถ„ํ•œ๋‹ค๊ณ  ๊ฐ€์ •ํ•ฉ๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ, ๋ชจ๋“  ์–ธ์–ด์—์„œ ๋‹จ์–ด๋ฅผ ๊ตฌ๋ถ„ํ•˜๊ธฐ ์œ„ํ•ด ๊ณต๋ฐฑ์„ ์‚ฌ์šฉํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ํ•œ๊ฐ€์ง€ ๊ฐ€๋Šฅํ•œ ํ•ด๊ฒฐ๋ฐฉ์•ˆ์€ ํŠน์ • ์–ธ์–ด์— ํŠนํ™”๋œ ์‚ฌ์ „ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด [XLM](model_doc/xlm)์€ ํŠน์ • ์ค‘๊ตญ์–ด, ์ผ๋ณธ์–ด, ํƒœ๊ตญ์–ด ์‚ฌ์ „ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด ๋ฌธ์ œ๋ฅผ ์ผ๋ฐ˜์ ์ธ ๋ฐฉ๋ฒ•์œผ๋กœ ํ•ด๊ฒฐํ•˜๊ธฐ ์œ„ํ•ด, [SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing (Kudo et al., 2018)](https://arxiv.org/pdf/1808.06226.pdf)๋Š” ์ž…๋ ฅ์„ ์ŠคํŠธ๋ฆผ์œผ๋กœ ์ฒ˜๋ฆฌํ•ด ๊ณต๋ฐฑ๋ฅผ ํ•˜๋‚˜์˜ ๋ฌธ์ž๋กœ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ดํ›„์— BPE ๋˜๋Š” ์œ ๋‹ˆ๊ทธ๋žจ ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ์‚ฌ์šฉํ•ด ์ ์ ˆํ•œ ์–ดํœ˜๋ฅผ ๊ตฌ์„ฑํ•ฉ๋‹ˆ๋‹ค. [`XLNetTokenizer`]๋Š” ์„ผํ…์Šคํ”ผ์Šค๋ฅผ ์‚ฌ์šฉํ•˜๊ธฐ ๋•Œ๋ฌธ์—, ์œ„์—์„œ ๋‹ค๋ฃฌ ์˜ˆ์‹œ์—์„œ ์–ดํœ˜์— `"โ–"`๊ฐ€ ํฌํ•จ๋˜์–ด์žˆ์Šต๋‹ˆ๋‹ค. ๋ชจ๋“  ํ† ํฐ์„ ํ•ฉ์นœ ํ›„ `"โ–"`์„ ๊ณต๋ฐฑ์œผ๋กœ ๋Œ€์ฒดํ•˜๋ฉด ๋˜๊ธฐ ๋•Œ๋ฌธ์— ์„ผํ…์Šคํ”ผ์Šค๋กœ ํ† ํฐํ™”๋œ ๊ฒฐ๊ณผ๋Š” ๋””์ฝ”๋”ฉํ•˜๊ธฐ ์ˆ˜์›”ํ•ฉ๋‹ˆ๋‹ค. transformers์—์„œ ์ œ๊ณตํ•˜๋Š” ์„ผํ…์Šคํ”ผ์Šค ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๋ชจ๋“  ๋ชจ๋ธ์€ ์œ ๋‹ˆ๊ทธ๋žจ๊ณผ ํ•จ๊ป˜ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. [ALBERT](model_doc/albert), [XLNet](model_doc/xlnet), [Marian](model_doc/marian), [T5](model_doc/t5) ๋ชจ๋ธ์ด ์„ผํ…์Šคํ”ผ์Šค ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค.
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/ko/pad_truncation.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ํŒจ๋”ฉ๊ณผ ์ž˜๋ผ๋‚ด๊ธฐ[[padding-and-truncation]] ๋ฐฐ์น˜ ์ž…๋ ฅ์€ ๊ธธ์ด๊ฐ€ ๋‹ค๋ฅธ ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์•„์„œ ๊ณ ์ • ํฌ๊ธฐ ํ…์„œ๋กœ ๋ณ€ํ™˜ํ•  ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค. ํŒจ๋”ฉ๊ณผ ์ž˜๋ผ๋‚ด๊ธฐ๋Š” ๋‹ค์–‘ํ•œ ๊ธธ์ด์˜ ๋ฐฐ์น˜์—์„œ ์ง์‚ฌ๊ฐํ˜• ํ…์„œ๋ฅผ ์ƒ์„ฑํ•  ์ˆ˜ ์žˆ๋„๋ก ์ด ๋ฌธ์ œ๋ฅผ ํ•ด๊ฒฐํ•˜๋Š” ์ „๋žต์ž…๋‹ˆ๋‹ค. ํŒจ๋”ฉ์€ ํŠน์ˆ˜ํ•œ **ํŒจ๋”ฉ ํ† ํฐ**์„ ์ถ”๊ฐ€ํ•˜์—ฌ ์งง์€ ์‹œํ€€์Šค๊ฐ€ ๋ฐฐ์น˜์—์„œ ๊ฐ€์žฅ ๊ธด ์‹œํ€€์Šค ๋˜๋Š” ๋ชจ๋ธ์—์„œ ํ—ˆ์šฉํ•˜๋Š” ์ตœ๋Œ€ ๊ธธ์ด์™€ ๋™์ผํ•œ ๊ธธ์ด๋ฅผ ๊ฐ–๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. ์ž˜๋ผ๋‚ด๊ธฐ๋Š” ๊ธด ์‹œํ€€์Šค๋ฅผ ์ž˜๋ผ๋‚ด์–ด ํŒจ๋”ฉ๊ณผ ๋‹ค๋ฅธ ๋ฐฉ์‹์œผ๋กœ ์‹œํ€€์Šค์˜ ๊ธธ์ด๋ฅผ ๋™์ผํ•˜๊ฒŒ ํ•ฉ๋‹ˆ๋‹ค. ๋Œ€๋ถ€๋ถ„์˜ ๊ฒฝ์šฐ ๋ฐฐ์น˜์— ๊ฐ€์žฅ ๊ธด ์‹œํ€€์Šค์˜ ๊ธธ์ด๋กœ ํŒจ๋”ฉํ•˜๊ณ  ๋ชจ๋ธ์ด ํ—ˆ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ์ตœ๋Œ€ ๊ธธ์ด๋กœ ์ž˜๋ผ๋‚ด๋Š” ๊ฒƒ์ด ์ž˜ ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ํ•„์š”ํ•˜๋‹ค๋ฉด API๊ฐ€ ์ง€์›ํ•˜๋Š” ๋” ๋งŽ์€ ์ „๋žต์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•„์š”ํ•œ ์ธ์ˆ˜๋Š” `padding`, `truncation`, `max_length` ์„ธ ๊ฐ€์ง€์ž…๋‹ˆ๋‹ค. `padding` ์ธ์ˆ˜๋Š” ํŒจ๋”ฉ์„ ์ œ์–ดํ•ฉ๋‹ˆ๋‹ค. ๋ถˆ๋ฆฌ์–ธ ๋˜๋Š” ๋ฌธ์ž์—ด์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: - `True` ๋˜๋Š” `'longest'`: ๋ฐฐ์น˜์—์„œ ๊ฐ€์žฅ ๊ธด ์‹œํ€€์Šค๋กœ ํŒจ๋”ฉํ•ฉ๋‹ˆ๋‹ค(๋‹จ์ผ ์‹œํ€€์Šค๋งŒ ์ œ๊ณตํ•˜๋Š” ๊ฒฝ์šฐ ํŒจ๋”ฉ์ด ์ ์šฉ๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค). - `'max_length'`: `max_length` ์ธ์ˆ˜๊ฐ€ ์ง€์ •ํ•œ ๊ธธ์ด๋กœ ํŒจ๋”ฉํ•˜๊ฑฐ๋‚˜, `max_length`๊ฐ€ ์ œ๊ณต๋˜์ง€ ์•Š์€ ๊ฒฝ์šฐ(`max_length=None`) ๋ชจ๋ธ์—์„œ ํ—ˆ์šฉ๋˜๋Š” ์ตœ๋Œ€ ๊ธธ์ด๋กœ ํŒจ๋”ฉํ•ฉ๋‹ˆ๋‹ค. ๋‹จ์ผ ์‹œํ€€์Šค๋งŒ ์ œ๊ณตํ•˜๋Š” ๊ฒฝ์šฐ์—๋„ ํŒจ๋”ฉ์ด ์ ์šฉ๋ฉ๋‹ˆ๋‹ค. - `False` ๋˜๋Š” `'do_not_pad'`: ํŒจ๋”ฉ์ด ์ ์šฉ๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ์ด๊ฒƒ์ด ๊ธฐ๋ณธ ๋™์ž‘์ž…๋‹ˆ๋‹ค. `truncation` ์ธ์ˆ˜๋Š” ์ž˜๋ผ๋‚ผ ๋ฐฉ๋ฒ•์„ ์ •ํ•ฉ๋‹ˆ๋‹ค. ๋ถˆ๋ฆฌ์–ธ ๋˜๋Š” ๋ฌธ์ž์—ด์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: - `True` ๋˜๋Š” `longest_first`: `max_length` ์ธ์ˆ˜๊ฐ€ ์ง€์ •ํ•œ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ์ž˜๋ผ๋‚ด๊ฑฐ๋‚˜, `max_length`๊ฐ€ ์ œ๊ณต๋˜์ง€ ์•Š์€ ๊ฒฝ์šฐ(`max_length=None`) ๋ชจ๋ธ์—์„œ ํ—ˆ์šฉ๋˜๋Š” ์ตœ๋Œ€ ๊ธธ์ด๋กœ ์ž˜๋ผ๋ƒ…๋‹ˆ๋‹ค. ์‹œํ€€์Šค ์Œ์—์„œ ๊ฐ€์žฅ ๊ธด ์‹œํ€€์Šค์˜ ํ† ํฐ์„ ์ ์ ˆํ•œ ๊ธธ์ด์— ๋„๋‹ฌํ•  ๋•Œ๊นŒ์ง€ ํ•˜๋‚˜์”ฉ ์ œ๊ฑฐํ•ฉ๋‹ˆ๋‹ค. - `'only_second'`: `max_length` ์ธ์ˆ˜๊ฐ€ ์ง€์ •ํ•œ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ์ž˜๋ผ๋‚ด๊ฑฐ๋‚˜, `max_length`๊ฐ€ ์ œ๊ณต๋˜์ง€ ์•Š์€ ๊ฒฝ์šฐ(`max_length=None`) ๋ชจ๋ธ์—์„œ ํ—ˆ์šฉ๋˜๋Š” ์ตœ๋Œ€ ๊ธธ์ด๋กœ ์ž˜๋ผ๋ƒ…๋‹ˆ๋‹ค. ์‹œํ€€์Šค ์Œ(๋˜๋Š” ์‹œํ€€์Šค ์Œ์˜ ๋ฐฐ์น˜)๊ฐ€ ์ œ๊ณต๋œ ๊ฒฝ์šฐ ์Œ์˜ ๋‘ ๋ฒˆ์งธ ๋ฌธ์žฅ๋งŒ ์ž˜๋ผ๋ƒ…๋‹ˆ๋‹ค. - `'only_first'`: `max_length` ์ธ์ˆ˜๊ฐ€ ์ง€์ •ํ•œ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ์ž˜๋ผ๋‚ด๊ฑฐ๋‚˜, `max_length`๊ฐ€ ์ œ๊ณต๋˜์ง€ ์•Š์€ ๊ฒฝ์šฐ(`max_length=None`) ๋ชจ๋ธ์—์„œ ํ—ˆ์šฉ๋˜๋Š” ์ตœ๋Œ€ ๊ธธ์ด๋กœ ์ž˜๋ผ๋ƒ…๋‹ˆ๋‹ค. ์‹œํ€€์Šค ์Œ(๋˜๋Š” ์‹œํ€€์Šค ์Œ์˜ ๋ฐฐ์น˜)๊ฐ€ ์ œ๊ณต๋œ ๊ฒฝ์šฐ ์Œ์˜ ์ฒซ ๋ฒˆ์งธ ๋ฌธ์žฅ๋งŒ ์ž˜๋ผ๋ƒ…๋‹ˆ๋‹ค. - `False` ๋˜๋Š” `'do_not_truncate'`: ์ž˜๋ผ๋‚ด๊ธฐ๋ฅผ ์ ์šฉํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ์ด๊ฒƒ์ด ๊ธฐ๋ณธ ๋™์ž‘์ž…๋‹ˆ๋‹ค. 
`max_length` ์ธ์ˆ˜๋Š” ํŒจ๋”ฉ ๋ฐ ์ž˜๋ผ๋‚ด๊ธฐ๋ฅผ ์ ์šฉํ•  ๊ธธ์ด๋ฅผ ์ œ์–ดํ•ฉ๋‹ˆ๋‹ค. ์ด ์ธ์ˆ˜๋Š” ์ •์ˆ˜ ๋˜๋Š” `None`์ผ ์ˆ˜ ์žˆ์œผ๋ฉฐ, `None`์ผ ๊ฒฝ์šฐ ๋ชจ๋ธ์ด ํ—ˆ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ์ตœ๋Œ€ ๊ธธ์ด๋กœ ๊ธฐ๋ณธ๊ฐ’์ด ์„ค์ •๋ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์— ํŠน์ •ํ•œ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด๊ฐ€ ์—†๋Š” ๊ฒฝ์šฐ `max_length`์— ๋Œ€ํ•œ ์ž˜๋ผ๋‚ด๊ธฐ ๋˜๋Š” ํŒจ๋”ฉ์ด ๋น„ํ™œ์„ฑํ™”๋ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ ํ‘œ์—๋Š” ํŒจ๋”ฉ ๋ฐ ์ž˜๋ผ๋‚ด๊ธฐ๋ฅผ ์„ค์ •ํ•˜๋Š” ๊ถŒ์žฅ ๋ฐฉ๋ฒ•์ด ์š”์•ฝ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ์ž…๋ ฅ์œผ๋กœ ์‹œํ€€์Šค ์Œ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ, ๋‹ค์Œ ์˜ˆ์ œ์—์„œ `truncation=True`๋ฅผ `['only_first', 'only_second', 'longest_first']`์—์„œ ์„ ํƒํ•œ `STRATEGY`, ์ฆ‰ `truncation='only_second'` ๋˜๋Š” `truncation='longest_first'`๋กœ ๋ฐ”๊พธ๋ฉด ์•ž์„œ ์„ค๋ช…ํ•œ ๋Œ€๋กœ ์Œ์˜ ๋‘ ์‹œํ€€์Šค๊ฐ€ ์ž˜๋ฆฌ๋Š” ๋ฐฉ์‹์„ ์ œ์–ดํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. | ์ž˜๋ผ๋‚ด๊ธฐ | ํŒจ๋”ฉ | ์‚ฌ์šฉ ๋ฐฉ๋ฒ• | |--------------------------------------|-----------------------------------|------------------------------------------------------------------------------------------| | ์ž˜๋ผ๋‚ด๊ธฐ ์—†์Œ | ํŒจ๋”ฉ ์—†์Œ | `tokenizer(batch_sentences)` | | | ๋ฐฐ์น˜ ๋‚ด ์ตœ๋Œ€ ๊ธธ์ด๋กœ ํŒจ๋”ฉ | `tokenizer(batch_sentences, padding=True)` ๋˜๋Š” | | | | `tokenizer(batch_sentences, padding='longest')` | | | ๋ชจ๋ธ์˜ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด๋กœ ํŒจ๋”ฉ | `tokenizer(batch_sentences, padding='max_length')` | | | ํŠน์ • ๊ธธ์ด๋กœ ํŒจ๋”ฉ | `tokenizer(batch_sentences, padding='max_length', max_length=42)` | | | ๋‹ค์–‘ํ•œ ๊ธธ์ด๋กœ ํŒจ๋”ฉ | `tokenizer(batch_sentences, padding=True, pad_to_multiple_of=8)` | | ๋ชจ๋ธ์˜ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด๋กœ ์ž˜๋ผ๋‚ด๊ธฐ | ํŒจ๋”ฉ ์—†์Œ | `tokenizer(batch_sentences, truncation=True)` ๋˜๋Š” | | | | `tokenizer(batch_sentences, truncation=STRATEGY)` | | | ๋ฐฐ์น˜ ๋‚ด ์ตœ๋Œ€ ๊ธธ์ด๋กœ ํŒจ๋”ฉ | `tokenizer(batch_sentences, padding=True, truncation=True)` ๋˜๋Š” | | | | `tokenizer(batch_sentences, padding=True, truncation=STRATEGY)` | | | ๋ชจ๋ธ์˜ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด๋กœ ํŒจ๋”ฉ | `tokenizer(batch_sentences, padding='max_length', truncation=True)` ๋˜๋Š” | | | | `tokenizer(batch_sentences, padding='max_length', truncation=STRATEGY)` | | | ํŠน์ • ๊ธธ์ด๋กœ ํŒจ๋”ฉ | ์‚ฌ์šฉ ๋ถˆ๊ฐ€ | | ํŠน์ • ๊ธธ์ด๋กœ ์ž˜๋ผ๋‚ด๊ธฐ | ํŒจ๋”ฉ ์—†์Œ | `tokenizer(batch_sentences, truncation=True, max_length=42)` ๋˜๋Š” | | | | `tokenizer(batch_sentences, truncation=STRATEGY, max_length=42)` | | | ๋ฐฐ์น˜ ๋‚ด ์ตœ๋Œ€ ๊ธธ์ด๋กœ ํŒจ๋”ฉ | `tokenizer(batch_sentences, padding=True, truncation=True, max_length=42)` ๋˜๋Š” | | | | `tokenizer(batch_sentences, padding=True, truncation=STRATEGY, max_length=42)` | | | ๋ชจ๋ธ์˜ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด๋กœ ํŒจ๋”ฉ | ์‚ฌ์šฉ ๋ถˆ๊ฐ€ | | | ํŠน์ • ๊ธธ์ด๋กœ ํŒจ๋”ฉ | `tokenizer(batch_sentences, padding='max_length', truncation=True, max_length=42)` ๋˜๋Š” | | | | `tokenizer(batch_sentences, padding='max_length', truncation=STRATEGY, max_length=42)` |
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/ko/philosophy.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์ด๋…๊ณผ ๋ชฉํ‘œ [[philosophy]] ๐Ÿค— Transformers๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋ชฉ์ ์œผ๋กœ ๋งŒ๋“ค์–ด์ง„ ๋…์ž์ ์ธ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์ž…๋‹ˆ๋‹ค: - ๋Œ€๊ทœ๋ชจ Transformers ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๊ฑฐ๋‚˜ ์—ฐ๊ตฌํ•˜๊ฑฐ๋‚˜ ํ™•์žฅํ•˜๋ ค๋Š” ๊ธฐ๊ณ„ ํ•™์Šต ์—ฐ๊ตฌ์› ๋ฐ ๊ต์œก์ž๋ฅผ ์œ„ํ•œ ๊ฒƒ์ž…๋‹ˆ๋‹ค. - ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ฑฐ๋‚˜ ์ œ์ž‘์šฉ์œผ๋กœ ์‚ฌ์šฉํ•˜๊ณ ์ž ํ•˜๋Š” ์‹ค์ „ ๊ฐœ๋ฐœ์ž๋ฅผ ์œ„ํ•œ ๊ฒƒ์ž…๋‹ˆ๋‹ค. - ํŠน์ • ๊ธฐ๊ณ„ ํ•™์Šต ์ž‘์—…์„ ํ•ด๊ฒฐํ•˜๊ธฐ ์œ„ํ•ด ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ๋‹ค์šด๋กœ๋“œํ•˜๊ณ  ์‚ฌ์šฉํ•˜๊ธฐ๋งŒ ํ•˜๋ ค๋Š” ์—”์ง€๋‹ˆ์–ด๋ฅผ ์œ„ํ•œ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋Š” ๋‘ ๊ฐ€์ง€ ์ฃผ์š” ๋ชฉํ‘œ๋ฅผ ๊ฐ€์ง€๊ณ  ์„ค๊ณ„๋˜์—ˆ์Šต๋‹ˆ๋‹ค: 1. ์‚ฌ์šฉํ•˜๊ธฐ ์‰ฝ๊ณ  ๋น ๋ฅด๊ฒŒ ๋งŒ๋“œ๋Š” ๊ฒƒ: - ํ•™์Šตํ•ด์•ผ ํ•  ์‚ฌ์šฉ์ž ๋Œ€์ƒ ์ถ”์ƒํ™”์˜ ์ˆ˜๋ฅผ ์ œํ•œํ–ˆ์Šต๋‹ˆ๋‹ค. ์‹ค์ œ๋กœ ๊ฑฐ์˜ ์ถ”์ƒํ™”๊ฐ€ ์—†์œผ๋ฉฐ, ๊ฐ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด ํ•„์š”ํ•œ ์„ธ ๊ฐ€์ง€ ํ‘œ์ค€ ํด๋ž˜์Šค์ธ [configuration](main_classes/configuration), [models](main_classes/model) ๋ฐ ์ „์ฒ˜๋ฆฌ ํด๋ž˜์Šค์ธ ([tokenizer](main_classes/tokenizer)๋Š” NLP์šฉ, [image processor](main_classes/image_processor)๋Š” ๋น„์ „์šฉ, [feature extractor](main_classes/feature_extractor)๋Š” ์˜ค๋””์˜ค์šฉ, [processor](main_classes/processors)๋Š” ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ ์ž…๋ ฅ์šฉ)๋งŒ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. - ์ด๋Ÿฌํ•œ ํด๋ž˜์Šค๋Š” ๊ณตํ†ต์ ์ธ `from_pretrained()` ๋ฉ”์„œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฏธ๋ฆฌ ํ›ˆ๋ จ๋œ ์ธ์Šคํ„ด์Šค์—์„œ ๊ฐ„๋‹จํ•˜๊ณ  ํ†ต์ผ๋œ ๋ฐฉ์‹์œผ๋กœ ์ดˆ๊ธฐํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๋ฉ”์†Œ๋“œ๋Š” ๋ฏธ๋ฆฌ ํ›ˆ๋ จ๋œ ์ฒดํฌํฌ์ธํŠธ์—์„œ ๊ด€๋ จ ํด๋ž˜์Šค ์ธ์Šคํ„ด์Šค์™€ ๊ด€๋ จ ๋ฐ์ดํ„ฐ(๊ตฌ์„ฑ์˜ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ, ํ† ํฌ๋‚˜์ด์ €์˜ ์–ดํœ˜, ๋ชจ๋ธ์˜ ๊ฐ€์ค‘์น˜)๋ฅผ (ํ•„์š”ํ•œ ๊ฒฝ์šฐ) ๋‹ค์šด๋กœ๋“œํ•˜๊ณ  ์บ์‹œํ•˜๋ฉฐ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. ์ฒดํฌํฌ์ธํŠธ๋Š” [Hugging Face Hub](https://huggingface.co/models)์—์„œ ์ œ๊ณต๋˜๊ฑฐ๋‚˜ ์‚ฌ์šฉ์ž ์ž์ฒด์˜ ์ €์žฅ๋œ ์ฒดํฌํฌ์ธํŠธ์—์„œ ์ œ๊ณต๋ฉ๋‹ˆ๋‹ค. - ์ด ์„ธ ๊ฐ€์ง€ ๊ธฐ๋ณธ ํด๋ž˜์Šค ์œ„์— ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋Š” [`pipeline`] API๋ฅผ ์ œ๊ณตํ•˜์—ฌ ์ฃผ์–ด์ง„ ์ž‘์—…์— ๋Œ€ํ•ด ๋ชจ๋ธ์„ ๋น ๋ฅด๊ฒŒ ์ถ”๋ก ํ•˜๋Š” ๋ฐ ์‚ฌ์šฉํ•˜๊ณ , [`Trainer`]๋ฅผ ์ œ๊ณตํ•˜์—ฌ PyTorch ๋ชจ๋ธ์„ ๋น ๋ฅด๊ฒŒ ํ›ˆ๋ จํ•˜๊ฑฐ๋‚˜ ๋ฏธ์„ธ ์กฐ์ •ํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•ฉ๋‹ˆ๋‹ค(๋ชจ๋“  TensorFlow ๋ชจ๋ธ์€ `Keras.fit`๊ณผ ํ˜ธํ™˜๋ฉ๋‹ˆ๋‹ค). - ๊ฒฐ๊ณผ์ ์œผ๋กœ, ์ด ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋Š” ์‹ ๊ฒฝ๋ง์„ ๊ตฌ์ถ•ํ•˜๊ธฐ ์œ„ํ•œ ๋ชจ๋“ˆ์‹ ๋„๊ตฌ ์ƒ์ž๊ฐ€ ์•„๋‹™๋‹ˆ๋‹ค. ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ํ™•์žฅํ•˜๊ฑฐ๋‚˜ ๊ตฌ์ถ•ํ•˜๋ ค๋ฉด ์ผ๋ฐ˜์ ์ธ Python, PyTorch, TensorFlow, Keras ๋ชจ๋“ˆ์„ ์‚ฌ์šฉํ•˜๊ณ  ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ๊ธฐ๋ณธ ํด๋ž˜์Šค๋ฅผ ์ƒ์†ํ•˜์—ฌ ๋ชจ๋ธ ๋กœ๋”ฉ ๋ฐ ์ €์žฅ๊ณผ ๊ฐ™์€ ๊ธฐ๋Šฅ์„ ์žฌ์‚ฌ์šฉํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์— ๋Œ€ํ•œ ์ฝ”๋”ฉ ์ฒ ํ•™์— ๋Œ€ํ•ด ๋” ์ž์„ธํžˆ ์•Œ๊ณ  ์‹ถ๋‹ค๋ฉด [Repeat Yourself](https://huggingface.co/blog/transformers-design-philosophy) ๋ธ”๋กœ๊ทธ ๊ธ€์„ ํ™•์ธํ•ด๋ณด์„ธ์š”. 2. 
์›๋ž˜ ๋ชจ๋ธ๊ณผ ๊ฐ€๋Šฅํ•œ ํ•œ ๊ทผ์ ‘ํ•œ ์„ฑ๋Šฅ์„ ์ œ๊ณตํ•˜๋Š” ์ตœ์‹  ๋ชจ๋ธ์„ ์ œ๊ณตํ•˜๋Š” ๊ฒƒ: - ๊ฐ ์•„ํ‚คํ…์ฒ˜์— ๋Œ€ํ•ด ๊ณต์‹ ์ €์ž๊ฐ€ ์ œ๊ณตํ•œ ๊ฒฐ๊ณผ๋ฅผ ์žฌํ˜„ํ•˜๋Š” ์ ์–ด๋„ ํ•œ ๊ฐ€์ง€ ์˜ˆ์ œ๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. - ์ฝ”๋“œ๋Š” ์›๋ž˜ ์ฝ”๋“œ์™€ ๊ฐ€๋Šฅํ•œ ํ•œ ์œ ์‚ฌํ•˜๊ฒŒ ์œ ์ง€๋˜๋ฏ€๋กœ PyTorch ์ฝ”๋“œ๋Š” TensorFlow ์ฝ”๋“œ๋กœ ๋ณ€ํ™˜๋˜์–ด *pytorchic*ํ•˜์ง€ ์•Š์„ ์ˆ˜ ์žˆ๊ณ , ๊ทธ ๋ฐ˜๋Œ€์˜ ๊ฒฝ์šฐ๋„ ๋งˆ์ฐฌ๊ฐ€์ง€์ž…๋‹ˆ๋‹ค. ๊ธฐํƒ€ ๋ชฉํ‘œ ๋ช‡ ๊ฐ€์ง€: - ๋ชจ๋ธ์˜ ๋‚ด๋ถ€๋ฅผ ๊ฐ€๋Šฅํ•œ ์ผ๊ด€๋˜๊ฒŒ ๋…ธ์ถœ์‹œํ‚ค๊ธฐ: - ์ „์ฒด ์€๋‹‰ ์ƒํƒœ์™€ ์–ดํ…์…˜ ๊ฐ€์ค‘์น˜์— ๋Œ€ํ•œ ์•ก์„ธ์Šค๋ฅผ ๋‹จ์ผ API๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. - ์ „์ฒ˜๋ฆฌ ํด๋ž˜์Šค ๋ฐ ๊ธฐ๋ณธ ๋ชจ๋ธ API๋Š” ๋ชจ๋ธ ๊ฐ„์— ์‰ฝ๊ฒŒ ์ „ํ™˜ํ•  ์ˆ˜ ์žˆ๋„๋ก ํ‘œ์ค€ํ™”๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. - ๋ฏธ์„ธ ์กฐ์ • ๋ฐ ๋ชจ๋ธ ํƒ์ƒ‰์„ ์œ„ํ•œ ์œ ๋งํ•œ ๋„๊ตฌ๋“ค์„ ์ฃผ๊ด€์ ์œผ๋กœ ์„ ํƒํ•˜๊ธฐ: - ๋ฏธ์„ธ ์กฐ์ •์„ ์œ„ํ•ด ์–ดํœ˜ ๋ฐ ์ž„๋ฒ ๋”ฉ์— ์ƒˆ๋กœ์šด ํ† ํฐ์„ ๊ฐ„๋‹จํ•˜๊ณ  ์ผ๊ด€๋œ ๋ฐฉ์‹์œผ๋กœ ์ถ”๊ฐ€ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. - Transformer ํ—ค๋“œ๋ฅผ ๋งˆ์Šคํ‚นํ•˜๊ณ  ๊ฐ€์ง€์น˜๊ธฐํ•˜๋Š” ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. - PyTorch, TensorFlow 2.0 ๋ฐ Flax ๊ฐ„์— ์‰ฝ๊ฒŒ ์ „ํ™˜ํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•˜์—ฌ ํ•˜๋‚˜์˜ ํ”„๋ ˆ์ž„์›Œํฌ๋กœ ํ›ˆ๋ จํ•˜๊ณ  ๋‹ค๋ฅธ ํ”„๋ ˆ์ž„์›Œํฌ๋กœ ์ถ”๋ก ํ•  ์ˆ˜ ์žˆ๊ฒŒ ํ•ฉ๋‹ˆ๋‹ค. ## ์ฃผ์š” ๊ฐœ๋… [[main-concepts]] ์ด ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋Š” ๊ฐ ๋ชจ๋ธ์— ๋Œ€ํ•ด ์„ธ ๊ฐ€์ง€ ์œ ํ˜•์˜ ํด๋ž˜์Šค๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ๊ตฌ์ถ•๋˜์—ˆ์Šต๋‹ˆ๋‹ค: - **๋ชจ๋ธ ํด๋ž˜์Šค**๋Š” ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ ์ œ๊ณตํ•˜๋Š” ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๊ฐ€์ค‘์น˜์™€ ํ•จ๊ป˜ ์ž‘๋™ํ•˜๋Š” PyTorch ๋ชจ๋ธ([torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)), Keras ๋ชจ๋ธ([tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model)), JAX/Flax ๋ชจ๋ธ([flax.linen.Module](https://flax.readthedocs.io/en/latest/api_reference/flax.linen/module.html))์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. - **๊ตฌ์„ฑ ํด๋ž˜์Šค**๋Š” ๋ชจ๋ธ์„ ๊ตฌ์ถ•ํ•˜๋Š” ๋ฐ ํ•„์š”ํ•œ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ(์˜ˆ: ๋ ˆ์ด์–ด ์ˆ˜ ๋ฐ ์€๋‹‰ ํฌ๊ธฐ)๋ฅผ ์ €์žฅํ•ฉ๋‹ˆ๋‹ค. ๊ตฌ์„ฑ ํด๋ž˜์Šค๋ฅผ ์ง์ ‘ ์ธ์Šคํ„ด์Šคํ™”ํ•  ํ•„์š”๋Š” ์—†์Šต๋‹ˆ๋‹ค. ํŠนํžˆ, ์ˆ˜์ • ์—†์ด ๊ณ  ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ ๋ชจ๋ธ์„ ์ƒ์„ฑํ•˜๋ฉด ๋ชจ๋ธ์˜ ์ผ๋ถ€์ธ ๊ตฌ์„ฑ์„ ์ž๋™์œผ๋กœ ์ธ์Šคํ„ด์Šคํ™”๋ฉ๋‹ˆ๋‹ค. - **์ „์ฒ˜๋ฆฌ ํด๋ž˜์Šค**๋Š” ์›์‹œ ๋ฐ์ดํ„ฐ๋ฅผ ๋ชจ๋ธ์ด ์ˆ˜์šฉํ•˜๋Š” ํ˜•์‹์œผ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค. [Tokenizer](main_classes/tokenizer)๋Š” ๊ฐ ๋ชจ๋ธ์˜ ์–ดํœ˜๋ฅผ ์ €์žฅํ•˜๊ณ , ๋ฌธ์ž์—ด์„ ํ† ํฐ ์ž„๋ฒ ๋”ฉ ์ธ๋ฑ์Šค ๋ฆฌ์ŠคํŠธ๋กœ ์ธ์ฝ”๋”ฉํ•˜๊ณ  ๋””์ฝ”๋”ฉํ•˜๊ธฐ ์œ„ํ•œ ๋ฉ”์†Œ๋“œ๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. [Image processors](main_classes/image_processor)๋Š” ๋น„์ „ ์ž…๋ ฅ์„ ์ „์ฒ˜๋ฆฌํ•˜๊ณ , [feature extractors](main_classes/feature_extractor)๋Š” ์˜ค๋””์˜ค ์ž…๋ ฅ์„ ์ „์ฒ˜๋ฆฌํ•˜๋ฉฐ, [processor](main_classes/processors)๋Š” ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ ์ž…๋ ฅ์„ ์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋“  ์ด๋Ÿฌํ•œ ํด๋ž˜์Šค๋Š” ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ์ธ์Šคํ„ด์Šค์—์„œ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ  ๋กœ์ปฌ๋กœ ์ €์žฅํ•˜๋ฉฐ, ์„ธ ๊ฐ€์ง€ ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ Hub์—์„œ ๊ณต์œ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: - `from_pretrained()` ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ ์ž์ฒด์—์„œ ์ œ๊ณตํ•˜๋Š” ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ฒ„์ „(์ง€์›๋˜๋Š” ๋ชจ๋ธ์€ [Model Hub](https://huggingface.co/models)์—์„œ ์ฐพ์„ ์ˆ˜ ์žˆ์Œ)์ด๋‚˜ ์‚ฌ์šฉ์ž๊ฐ€ ๋กœ์ปฌ๋กœ ์ €์žฅํ•œ ๊ฒฝ์šฐ(๋˜๋Š” ์„œ๋ฒ„์— ์ €์žฅํ•œ ๊ฒฝ์šฐ)์˜ ๋ชจ๋ธ, ๊ตฌ์„ฑ ๋ฐ ์ „์ฒ˜๋ฆฌ ํด๋ž˜์Šค๋ฅผ ์ธ์Šคํ„ด์Šคํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. - `save_pretrained()` ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ๋ชจ๋ธ, ๊ตฌ์„ฑ ๋ฐ ์ „์ฒ˜๋ฆฌ ํด๋ž˜์Šค๋ฅผ ๋กœ์ปฌ๋กœ ์ €์žฅํ•˜์—ฌ `from_pretrained()`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋‹ค์‹œ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
- `push_to_hub()` ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ๋ชจ๋ธ, ๊ตฌ์„ฑ ๋ฐ ์ „์ฒ˜๋ฆฌ ํด๋ž˜์Šค๋ฅผ Hub์— ๊ณต์œ ํ•˜์—ฌ ๋ˆ„๊ตฌ๋‚˜ ์‰ฝ๊ฒŒ ์ ‘๊ทผํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
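์•„๋ž˜๋Š” ์ด ์„ธ ๊ฐ€์ง€ ๋ฉ”์†Œ๋“œ๊ฐ€ ์–ด๋–ป๊ฒŒ ํ•จ๊ป˜ ์“ฐ์ด๋Š”์ง€ ๋ณด์—ฌ์ฃผ๋Š” ๊ฐ„๋‹จํ•œ ์˜ˆ์‹œ์ž…๋‹ˆ๋‹ค. ์ฒดํฌํฌ์ธํŠธ ์ด๋ฆ„, ์ €์žฅ ๊ฒฝ๋กœ, ์ €์žฅ์†Œ ์ด๋ฆ„์€ ์„ค๋ช…์„ ์œ„ํ•œ ์˜ˆ์‹œ์ผ ๋ฟ์ด๋ฉฐ, `push_to_hub()`๋ฅผ ์‚ฌ์šฉํ•˜๋ ค๋ฉด Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธ๋˜์–ด ์žˆ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> from transformers import AutoModel, AutoTokenizer

>>> # Hub ์ฒดํฌํฌ์ธํŠธ์—์„œ ํ† ํฌ๋‚˜์ด์ €์™€ ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased")
>>> model = AutoModel.from_pretrained("distilbert/distilbert-base-uncased")

>>> # ๋กœ์ปฌ ๋””๋ ‰ํ„ฐ๋ฆฌ์— ์ €์žฅํ•œ ๋’ค ๊ฐ™์€ ๊ฒฝ๋กœ์—์„œ ๋‹ค์‹œ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค
>>> model.save_pretrained("./my_local_model")
>>> tokenizer.save_pretrained("./my_local_model")
>>> model = AutoModel.from_pretrained("./my_local_model")

>>> # ๋ชจ๋ธ์„ Hub์— ๊ณต์œ ํ•ฉ๋‹ˆ๋‹ค
>>> model.push_to_hub("my-awesome-model")
```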
mavonic_private_repos/transformers/docs/source/ko/hpo_train.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Trainer API๋ฅผ ์‚ฌ์šฉํ•œ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ ํƒ์ƒ‰ [[hyperparameter-search-using-trainer-api]] ๐Ÿค— Transformers์—์„œ๋Š” ๐Ÿค— Transformers ๋ชจ๋ธ์„ ํ•™์Šต์‹œํ‚ค๋Š”๋ฐ ์ตœ์ ํ™”๋œ [`Trainer`] ํด๋ž˜์Šค๋ฅผ ์ œ๊ณตํ•˜๊ธฐ ๋•Œ๋ฌธ์—, ์‚ฌ์šฉ์ž๋Š” ์ง์ ‘ ํ›ˆ๋ จ ๋ฃจํ”„๋ฅผ ์ž‘์„ฑํ•  ํ•„์š” ์—†์ด ๋”์šฑ ๊ฐ„ํŽธํ•˜๊ฒŒ ํ•™์Šต์„ ์‹œํ‚ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ, [`Trainer`]๋Š” ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ ํƒ์ƒ‰์„ ์œ„ํ•œ API๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์ด ๋ฌธ์„œ์—์„œ ์ด API๋ฅผ ํ™œ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์˜ˆ์‹œ์™€ ํ•จ๊ป˜ ๋ณด์—ฌ๋“œ๋ฆฌ๊ฒ ์Šต๋‹ˆ๋‹ค. ## ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ ํƒ์ƒ‰ ๋ฐฑ์—”๋“œ [[hyperparameter-search-backend]] [`Trainer`]๋Š” ํ˜„์žฌ ์•„๋ž˜ 4๊ฐ€์ง€ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ ํƒ์ƒ‰ ๋ฐฑ์—”๋“œ๋ฅผ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค: [optuna](https://optuna.org/)์™€ [sigopt](https://sigopt.com/), [raytune](https://docs.ray.io/en/latest/tune/index.html), [wandb](https://wandb.ai/site/sweeps) ์ž…๋‹ˆ๋‹ค. ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ ํƒ์ƒ‰ ๋ฐฑ์—”๋“œ๋กœ ์‚ฌ์šฉํ•˜๊ธฐ ์ „์— ์•„๋ž˜์˜ ๋ช…๋ น์–ด๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋“ค์„ ์„ค์น˜ํ•˜์„ธ์š”. ```bash pip install optuna/sigopt/wandb/ray[tune] ``` ## ์˜ˆ์ œ์—์„œ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ ํƒ์ƒ‰์„ ํ™œ์„ฑํ™”ํ•˜๋Š” ๋ฐฉ๋ฒ• [[how-to-enable-hyperparameter-search-in-example]] ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ ํƒ์ƒ‰ ๊ณต๊ฐ„์„ ์ •์˜ํ•˜์„ธ์š”. ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ ํƒ์ƒ‰ ๋ฐฑ์—”๋“œ๋งˆ๋‹ค ์„œ๋กœ ๋‹ค๋ฅธ ํ˜•์‹์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. sigopt์˜ ๊ฒฝ์šฐ, ํ•ด๋‹น [object_parameter](https://docs.sigopt.com/ai-module-api-references/api_reference/objects/object_parameter) ๋ฌธ์„œ๋ฅผ ์ฐธ์กฐํ•˜์—ฌ ์•„๋ž˜์™€ ๊ฐ™์ด ์ž‘์„ฑํ•˜์„ธ์š”: ```py >>> def sigopt_hp_space(trial): ... return [ ... {"bounds": {"min": 1e-6, "max": 1e-4}, "name": "learning_rate", "type": "double"}, ... { ... "categorical_values": ["16", "32", "64", "128"], ... "name": "per_device_train_batch_size", ... "type": "categorical", ... }, ... ] ``` optuna์˜ ๊ฒฝ์šฐ, ํ•ด๋‹น [object_parameter](https://optuna.readthedocs.io/en/stable/tutorial/10_key_features/002_configurations.html#sphx-glr-tutorial-10-key-features-002-configurations-py) ๋ฌธ์„œ๋ฅผ ์ฐธ์กฐํ•˜์—ฌ ์•„๋ž˜์™€ ๊ฐ™์ด ์ž‘์„ฑํ•˜์„ธ์š”: ```py >>> def optuna_hp_space(trial): ... return { ... "learning_rate": trial.suggest_float("learning_rate", 1e-6, 1e-4, log=True), ... "per_device_train_batch_size": trial.suggest_categorical("per_device_train_batch_size", [16, 32, 64, 128]), ... } ``` raytune์˜ ๊ฒฝ์šฐ, ํ•ด๋‹น [object_parameter](https://docs.ray.io/en/latest/tune/api/search_space.html) ๋ฌธ์„œ๋ฅผ ์ฐธ์กฐํ•˜์—ฌ ์•„๋ž˜์™€ ๊ฐ™์ด ์ž‘์„ฑํ•˜์„ธ์š”: ```py >>> def ray_hp_space(trial): ... return { ... "learning_rate": tune.loguniform(1e-6, 1e-4), ... "per_device_train_batch_size": tune.choice([16, 32, 64, 128]), ... } ``` wandb์˜ ๊ฒฝ์šฐ, ํ•ด๋‹น [object_parameter](https://docs.wandb.ai/guides/sweeps/configuration) ๋ฌธ์„œ๋ฅผ ์ฐธ์กฐํ•˜์—ฌ ์•„๋ž˜์™€ ๊ฐ™์ด ์ž‘์„ฑํ•˜์„ธ์š”: ```py >>> def wandb_hp_space(trial): ... return { ... "method": "random", ... 
"metric": {"name": "objective", "goal": "minimize"}, ... "parameters": { ... "learning_rate": {"distribution": "uniform", "min": 1e-6, "max": 1e-4}, ... "per_device_train_batch_size": {"values": [16, 32, 64, 128]}, ... }, ... } ``` `model_init` ํ•จ์ˆ˜๋ฅผ ์ •์˜ํ•˜๊ณ  ์ด๋ฅผ [`Trainer`]์— ์ „๋‹ฌํ•˜์„ธ์š”. ์•„๋ž˜๋Š” ๊ทธ ์˜ˆ์‹œ์ž…๋‹ˆ๋‹ค. ```py >>> def model_init(trial): ... return AutoModelForSequenceClassification.from_pretrained( ... model_args.model_name_or_path, ... from_tf=bool(".ckpt" in model_args.model_name_or_path), ... config=config, ... cache_dir=model_args.cache_dir, ... revision=model_args.model_revision, ... token=True if model_args.use_auth_token else None, ... ) ``` ์•„๋ž˜์™€ ๊ฐ™์ด `model_init` ํ•จ์ˆ˜, ํ›ˆ๋ จ ์ธ์ˆ˜, ํ›ˆ๋ จ ๋ฐ ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ์…‹, ๊ทธ๋ฆฌ๊ณ  ํ‰๊ฐ€ ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ [`Trainer`]๋ฅผ ์ƒ์„ฑํ•˜์„ธ์š”: ```py >>> trainer = Trainer( ... model=None, ... args=training_args, ... train_dataset=small_train_dataset, ... eval_dataset=small_eval_dataset, ... compute_metrics=compute_metrics, ... tokenizer=tokenizer, ... model_init=model_init, ... data_collator=data_collator, ... ) ``` ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ ํƒ์ƒ‰์„ ํ˜ธ์ถœํ•˜๊ณ , ์ตœ์ ์˜ ์‹œํ—˜ ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ๊ฐ€์ ธ์˜ค์„ธ์š”. ๋ฐฑ์—”๋“œ๋Š” `"optuna"`/`"sigopt"`/`"wandb"`/`"ray"` ์ค‘์—์„œ ์„ ํƒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฐฉํ–ฅ์€ `"minimize"` ๋˜๋Š” `"maximize"` ์ค‘ ์„ ํƒํ•˜๋ฉฐ, ๋ชฉํ‘œ๋ฅผ ์ตœ์†Œํ™”ํ•  ๊ฒƒ์ธ์ง€ ์ตœ๋Œ€ํ™”ํ•  ๊ฒƒ์ธ์ง€๋ฅผ ๊ฒฐ์ •ํ•ฉ๋‹ˆ๋‹ค. ์ž์‹ ๋งŒ์˜ compute_objective ํ•จ์ˆ˜๋ฅผ ์ •์˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋งŒ์•ฝ ์ด ํ•จ์ˆ˜๋ฅผ ์ •์˜ํ•˜์ง€ ์•Š์œผ๋ฉด, ๊ธฐ๋ณธ compute_objective๊ฐ€ ํ˜ธ์ถœ๋˜๊ณ , f1๊ณผ ๊ฐ™์€ ํ‰๊ฐ€ ์ง€ํ‘œ์˜ ํ•ฉ์ด ๋ชฉํ‘ฏ๊ฐ’์œผ๋กœ ๋ฐ˜ํ™˜๋ฉ๋‹ˆ๋‹ค. ```py >>> best_trial = trainer.hyperparameter_search( ... direction="maximize", ... backend="optuna", ... hp_space=optuna_hp_space, ... n_trials=20, ... compute_objective=compute_objective, ... ) ``` ## DDP ๋ฏธ์„ธ ์กฐ์ •์„ ์œ„ํ•œ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ ํƒ์ƒ‰ [[hyperparameter-search-for-ddp-finetune]] ํ˜„์žฌ, DDP(Distributed Data Parallelism; ๋ถ„์‚ฐ ๋ฐ์ดํ„ฐ ๋ณ‘๋ ฌ์ฒ˜๋ฆฌ)๋ฅผ ์œ„ํ•œ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ ํƒ์ƒ‰์€ optuna์™€ sigopt์—์„œ ๊ฐ€๋Šฅํ•ฉ๋‹ˆ๋‹ค. ์ตœ์ƒ์œ„ ํ”„๋กœ์„ธ์Šค๊ฐ€ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ ํƒ์ƒ‰ ๊ณผ์ •์„ ์‹œ์ž‘ํ•˜๊ณ  ๊ทธ ๊ฒฐ๊ณผ๋ฅผ ๋‹ค๋ฅธ ํ”„๋กœ์„ธ์Šค์— ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค.
mavonic_private_repos/transformers/docs/source/ko/pr_checks.md
<!--- Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Pull Request์— ๋Œ€ํ•œ ๊ฒ€์‚ฌ [[checks-on-a-pull-request]] ๐Ÿค— Transformers์—์„œ Pull Request๋ฅผ ์—ด ๋•Œ, ๊ธฐ์กด์— ์žˆ๋Š” ๊ฒƒ์„ ๋ง๊ฐ€๋œจ๋ฆฌ์ง€ ์•Š๋Š”์ง€ ํ™•์ธํ•˜๊ธฐ ์œ„ํ•ด ์ƒ๋‹นํ•œ ์ˆ˜์˜ ๊ฒ€์‚ฌ๊ฐ€ ์‹คํ–‰๋ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๊ฒ€์‚ฌ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋„ค ๊ฐ€์ง€ ์œ ํ˜•์œผ๋กœ ๊ตฌ์„ฑ๋ฉ๋‹ˆ๋‹ค: - ์ผ๋ฐ˜์ ์ธ ํ…Œ์ŠคํŠธ - ๋ฌธ์„œ ๋นŒ๋“œ - ์ฝ”๋“œ ๋ฐ ๋ฌธ์„œ ์Šคํƒ€์ผ - ์ผ๋ฐ˜ ์ €์žฅ์†Œ ์ผ๊ด€์„ฑ ์ด ๋ฌธ์„œ์—์„œ๋Š” ์ด๋Ÿฌํ•œ ๋‹ค์–‘ํ•œ ๊ฒ€์‚ฌ์™€ ๊ทธ ์ด์œ ๋ฅผ ์„ค๋ช…ํ•˜๊ณ , PR์—์„œ ํ•˜๋‚˜ ์ด์ƒ์˜ ๊ฒ€์‚ฌ๊ฐ€ ์‹คํŒจํ•œ ๊ฒฝ์šฐ ๋กœ์ปฌ์—์„œ ์–ด๋–ป๊ฒŒ ๋””๋ฒ„๊ทธํ•˜๋Š”์ง€ ์•Œ์•„๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ์ฐธ๊ณ ๋กœ, ์ด๋Ÿฌํ•œ ๊ฒ€์‚ฌ๋ฅผ ์‚ฌ์šฉํ•˜๋ ค๋ฉด ๊ฐœ๋ฐœ ์„ค์น˜๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค: ```bash pip install transformers[dev] ``` ๋˜๋Š” Transformers ์ €์žฅ์†Œ ๋‚ด์— ํŽธ์ง‘ ๊ฐ€๋Šฅํ•œ ์„ค์น˜๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค: ```bash pip install -e .[dev] ``` Transformers์˜ ์„ ํƒ์  ์ข…์†์„ฑ ์ˆ˜๊ฐ€ ๋งŽ์ด ๋Š˜์–ด๋‚ฌ๊ธฐ ๋•Œ๋ฌธ์— ๊ฐœ๋ฐœ ์„ค์น˜๋ฅผ ์‹คํŒจํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ฐœ๋ฐœ ์„ค์น˜๊ฐ€ ์‹คํŒจํ•˜๋Š” ๊ฒฝ์šฐ, ์ž‘์—… ์ค‘์ธ Deep Learning ํ”„๋ ˆ์ž„์›Œํฌ (PyTorch, TensorFlow ๋ฐ/๋˜๋Š” Flax)๋ฅผ ์„ค์น˜ํ•˜๊ณ  ๋‹ค์Œ ๋ช…๋ น์„ ์‹คํ–‰ํ•˜์„ธ์š”. ```bash pip install transformers[quality] ``` ํŽธ์ง‘ ๊ฐ€๋Šฅํ•œ ์„ค์น˜์˜ ๊ฒฝ์šฐ๋Š” ๋‹ค์Œ ๋ช…๋ น์„ ์‹คํ–‰ํ•˜์„ธ์š”. ```bash pip install -e .[quality] ``` ## ํ…Œ์ŠคํŠธ [[tests]] `ci/circleci: run_tests_`๋กœ ์‹œ์ž‘ํ•˜๋Š” ๋ชจ๋“  ์ž‘์—…์€ Transformers ํ…Œ์ŠคํŠธ ๋ชจ์Œ์˜ ์ผ๋ถ€๋ฅผ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ์ž‘์—…์€ ํŠน์ • ํ™˜๊ฒฝ์—์„œ ์ผ๋ถ€ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์— ์ค‘์ ์„ ๋‘ก๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด `ci/circleci: run_tests_pipelines_tf`๋Š” TensorFlow๋งŒ ์„ค์น˜๋œ ํ™˜๊ฒฝ์—์„œ ํŒŒ์ดํ”„๋ผ์ธ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. ํ…Œ์ŠคํŠธ ๋ชจ๋“ˆ์—์„œ ์‹ค์ œ๋กœ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์ด ์—†์„ ๋•Œ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜์ง€ ์•Š๊ธฐ ์œ„ํ•ด, ํ…Œ์ŠคํŠธ ๋ชจ์Œ์˜ ์ผ๋ถ€๋งŒ ์‹คํ–‰๋ฉ๋‹ˆ๋‹ค. ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ๋ณ€๊ฒฝ ์ „ํ›„์— ๋Œ€ํ•œ ์ฐจ์ด๋ฅผ ํ™•์ธํ•˜๊ธฐ ์œ„ํ•ด ์œ ํ‹ธ๋ฆฌํ‹ฐ๊ฐ€ ์‹คํ–‰๋˜๊ณ , ํ•ด๋‹น ์ฐจ์ด์— ์˜ํ–ฅ์„ ๋ฐ›๋Š” ํ…Œ์ŠคํŠธ๊ฐ€ ์„ ํƒ๋ฉ๋‹ˆ๋‹ค. ์ด ์œ ํ‹ธ๋ฆฌํ‹ฐ๋Š” ๋กœ์ปฌ์—์„œ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash python utils/tests_fetcher.py ``` Transformers ์ €์žฅ์†Œ์˜ ์ตœ์ƒ๋‹จ์—์„œ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. ์ด ์œ ํ‹ธ๋ฆฌํ‹ฐ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์€ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค: 1. ๋ณ€๊ฒฝ ์‚ฌํ•ญ์ด ์žˆ๋Š” ํŒŒ์ผ๋งˆ๋‹ค ๋ณ€๊ฒฝ ์‚ฌํ•ญ์ด ์ฝ”๋“œ์ธ์ง€ ์ฃผ์„ ๋˜๋Š” ๋ฌธ์„œ ๋ฌธ์ž์—ด์ธ์ง€ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค. ์‹ค์ œ ์ฝ”๋“œ ๋ณ€๊ฒฝ์ด ์žˆ๋Š” ํŒŒ์ผ๋งŒ ์œ ์ง€๋ฉ๋‹ˆ๋‹ค. 2. ์†Œ์Šค ์ฝ”๋“œ ํŒŒ์ผ์˜ ๊ฐ ํŒŒ์ผ์— ๋Œ€ํ•ด ์žฌ๊ท€์ ์œผ๋กœ ์˜ํ–ฅ์„ ์ฃผ๋Š” ๋ชจ๋“  ํŒŒ์ผ์„ ์ œ๊ณตํ•˜๋Š” ๋‚ด๋ถ€ ๋งต์„ ์ž‘์„ฑํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋“ˆ B๊ฐ€ ๋ชจ๋“ˆ A๋ฅผ ๊ฐ€์ ธ์˜ค๋ฉด ๋ชจ๋“ˆ A๋Š” ๋ชจ๋“ˆ B์— ์˜ํ–ฅ์„ ์ค๋‹ˆ๋‹ค. ์žฌ๊ท€์ ์ธ ์˜ํ–ฅ์—๋Š” ๊ฐ ๋ชจ๋“ˆ์ด ์ด์ „ ๋ชจ๋“ˆ์„ ๊ฐ€์ ธ์˜ค๋Š” ๋ชจ๋“ˆ ์ฒด์ธ์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. 3. 
๋‹จ๊ณ„ 1์—์„œ ์ˆ˜์ง‘ํ•œ ํŒŒ์ผ์— ์ด ๋งต์„ ์ ์šฉํ•˜์—ฌ PR์— ์˜ํ–ฅ์„ ๋ฐ›๋Š” ๋ชจ๋ธ ํŒŒ์ผ ๋ชฉ๋ก์„ ์–ป์Šต๋‹ˆ๋‹ค. 4. ๊ฐ ํŒŒ์ผ์„ ํ•ด๋‹นํ•˜๋Š” ํ…Œ์ŠคํŠธ ํŒŒ์ผ์— ๋งคํ•‘ํ•˜๊ณ  ์‹คํ–‰ํ•  ํ…Œ์ŠคํŠธ ๋ชฉ๋ก์„ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. ๋กœ์ปฌ์—์„œ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋ฉด ๋‹จ๊ณ„ 1, 3 ๋ฐ 4์˜ ๊ฒฐ๊ณผ๋ฅผ ์ถœ๋ ฅํ•˜์—ฌ ์‹คํ–‰๋˜๋Š” ํ…Œ์ŠคํŠธ๋ฅผ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์Šคํฌ๋ฆฝํŠธ๋Š” ๋˜ํ•œ `test_list.txt`๋ผ๋Š” ํŒŒ์ผ์„ ์ƒ์„ฑํ•˜์—ฌ ์‹คํ–‰ํ•  ํ…Œ์ŠคํŠธ ๋ชฉ๋ก์„ ํฌํ•จํ•˜๋ฉฐ, ๋‹ค์Œ ๋ช…๋ น์œผ๋กœ ํ•ด๋‹น ํ…Œ์ŠคํŠธ๋ฅผ ๋กœ์ปฌ์—์„œ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash python -m pytest -n 8 --dist=loadfile -rA -s $(cat test_list.txt) ``` ์ž˜๋ชป๋œ ์‚ฌํ•ญ์ด ๋ˆ„๋ฝ๋˜์—ˆ์„ ๊ฒฝ์šฐ, ์ „์ฒด ํ…Œ์ŠคํŠธ ๋ชจ์Œ๋„ ๋งค์ผ ์‹คํ–‰๋ฉ๋‹ˆ๋‹ค. ## ๋ฌธ์„œ ๋นŒ๋“œ [[documentation-build]] `build_pr_documentation` ์ž‘์—…์€ ๋ฌธ์„œ๋ฅผ ๋นŒ๋“œํ•˜๊ณ  ๋ฏธ๋ฆฌ ๋ณด๊ธฐ๋ฅผ ์ƒ์„ฑํ•˜์—ฌ PR์ด ๋ณ‘ํ•ฉ๋œ ํ›„ ๋ชจ๋“  ๊ฒƒ์ด ์ œ๋Œ€๋กœ ๋ณด์ด๋Š”์ง€ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค. ๋กœ๋ด‡์€ PR์— ๋ฌธ์„œ ๋ฏธ๋ฆฌ๋ณด๊ธฐ ๋งํฌ๋ฅผ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. PR์—์„œ ๋งŒ๋“  ๋ณ€๊ฒฝ ์‚ฌํ•ญ์€ ์ž๋™์œผ๋กœ ๋ฏธ๋ฆฌ๋ณด๊ธฐ์— ์—…๋ฐ์ดํŠธ๋ฉ๋‹ˆ๋‹ค. ๋ฌธ์„œ ๋นŒ๋“œ์— ์‹คํŒจํ•œ ๊ฒฝ์šฐ **์„ธ๋ถ€ ์ •๋ณด**๋ฅผ ํด๋ฆญํ•˜์—ฌ ์–ด๋””์—์„œ ๋ฌธ์ œ๊ฐ€ ๋ฐœ์ƒํ–ˆ๋Š”์ง€ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ค๋ฅ˜๋Š” ์ฃผ๋กœ `toctree`์— ๋ˆ„๋ฝ๋œ ํŒŒ์ผ๊ณผ ๊ฐ™์ด ๊ฐ„๋‹จํ•œ ์˜ค๋ฅ˜์ž…๋‹ˆ๋‹ค. ๋กœ์ปฌ์—์„œ ๋ฌธ์„œ๋ฅผ ๋นŒ๋“œํ•˜๊ฑฐ๋‚˜ ๋ฏธ๋ฆฌ ๋ณผ ๊ฒฝ์šฐ, docs ํด๋”์˜ [`README.md`](https://github.com/huggingface/transformers/tree/main/docs)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ## ์ฝ”๋“œ ๋ฐ ๋ฌธ์„œ ์Šคํƒ€์ผ [[code-and-documentation-style]] `black`๊ณผ `ruff`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋“  ์†Œ์Šค ํŒŒ์ผ, ์˜ˆ์ œ ๋ฐ ํ…Œ์ŠคํŠธ์— ์ฝ”๋“œ ํ˜•์‹์„ ์ ์šฉํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ, `utils/style_doc.py`์—์„œ ๋ฌธ์„œ ๋ฌธ์ž์—ด๊ณผ `rst` ํŒŒ์ผ์˜ ํ˜•์‹, ๊ทธ๋ฆฌ๊ณ  Transformers์˜ `__init__.py` ํŒŒ์ผ์—์„œ ์‹คํ–‰๋˜๋Š” ์ง€์—ฐ๋œ ์ž„ํฌํŠธ์˜ ์ˆœ์„œ์— ๋Œ€ํ•œ ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๋ชจ๋“  ๊ฒƒ์€ ๋‹ค์Œ์„ ์‹คํ–‰ํ•จ์œผ๋กœ์จ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash make style ``` CI๋Š” ์ด๋Ÿฌํ•œ ์‚ฌํ•ญ์ด `ci/circleci: check_code_quality` ๊ฒ€์‚ฌ ๋‚ด์—์„œ ์ ์šฉ๋˜์—ˆ๋Š”์ง€ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ `ruff`๋„ ์‹คํ–‰๋˜๋ฉฐ, ์ •์˜๋˜์ง€ ์•Š์€ ๋ณ€์ˆ˜๋‚˜ ์‚ฌ์šฉ๋˜์ง€ ์•Š์€ ๋ณ€์ˆ˜๋ฅผ ๋ฐœ๊ฒฌํ•˜๋ฉด ๊ฒฝ๊ณ ํ•ฉ๋‹ˆ๋‹ค. ์ด ๊ฒ€์‚ฌ๋ฅผ ๋กœ์ปฌ์—์„œ ์‹คํ–‰ํ•˜๋ ค๋ฉด ๋‹ค์Œ์„ ์‚ฌ์šฉํ•˜์„ธ์š”: ```bash make quality ``` ์ด ์ž‘์—…์€ ๋งŽ์€ ์‹œ๊ฐ„์ด ์†Œ์š”๋  ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ ํ˜„์žฌ ๋ธŒ๋žœ์น˜์—์„œ ์ˆ˜์ •ํ•œ ํŒŒ์ผ์— ๋Œ€ํ•ด์„œ๋งŒ ๋™์ผํ•œ ์ž‘์—…์„ ์‹คํ–‰ํ•˜๋ ค๋ฉด ๋‹ค์Œ์„ ์‹คํ–‰ํ•˜์„ธ์š”. ```bash make fixup ``` ์ด ๋ช…๋ น์€ ํ˜„์žฌ ๋ธŒ๋žœ์น˜์—์„œ ์ˆ˜์ •ํ•œ ํŒŒ์ผ์— ๋Œ€ํ•œ ๋ชจ๋“  ์ถ”๊ฐ€์ ์ธ ๊ฒ€์‚ฌ๋„ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. ์ด์ œ ์ด๋“ค์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ## ์ €์žฅ์†Œ ์ผ๊ด€์„ฑ [[repository-consistency]] ์ด๋Š” PR์ด ์ €์žฅ์†Œ๋ฅผ ์ •์ƒ์ ์ธ ์ƒํƒœ๋กœ ์œ ์ง€ํ•˜๋Š”์ง€ ํ™•์ธํ•˜๋Š” ๋ชจ๋“  ํ…Œ์ŠคํŠธ๋ฅผ ๋ชจ์€ ๊ฒƒ์ด๋ฉฐ, `ci/circleci: check_repository_consistency` ๊ฒ€์‚ฌ์—์„œ ์ˆ˜ํ–‰๋ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ์„ ์‹คํ–‰ํ•จ์œผ๋กœ์จ ๋กœ์ปฌ์—์„œ ์ด ๊ฒ€์‚ฌ๋ฅผ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```bash make repo-consistency ``` ์ด ๊ฒ€์‚ฌ๋Š” ๋‹ค์Œ์„ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค. 
- init์— ์ถ”๊ฐ€๋œ ๋ชจ๋“  ๊ฐ์ฒด๊ฐ€ ๋ฌธ์„œํ™”๋˜์—ˆ๋Š”์ง€ (`utils/check_repo.py`์—์„œ ์ˆ˜ํ–‰) - `__init__.py` ํŒŒ์ผ์˜ ๋‘ ์„น์…˜์— ๋™์ผํ•œ ๋‚ด์šฉ์ด ์žˆ๋Š”์ง€ (`utils/check_inits.py`์—์„œ ์ˆ˜ํ–‰) - ๋‹ค๋ฅธ ๋ชจ๋“ˆ์—์„œ ๋ณต์‚ฌ๋œ ์ฝ”๋“œ๊ฐ€ ์›๋ณธ๊ณผ ์ผ์น˜ํ•˜๋Š”์ง€ (`utils/check_copies.py`์—์„œ ์ˆ˜ํ–‰) - ๋ชจ๋“  ๊ตฌ์„ฑ ํด๋ž˜์Šค์— docstring์— ์–ธ๊ธ‰๋œ ์œ ํšจํ•œ ์ฒดํฌํฌ์ธํŠธ๊ฐ€ ์ ์–ด๋„ ํ•˜๋‚˜ ์žˆ๋Š”์ง€ (`utils/check_config_docstrings.py`์—์„œ ์ˆ˜ํ–‰) - ๋ชจ๋“  ๊ตฌ์„ฑ ํด๋ž˜์Šค๊ฐ€ ํ•ด๋‹นํ•˜๋Š” ๋ชจ๋ธ๋ง ํŒŒ์ผ์—์„œ ์‚ฌ์šฉ๋˜๋Š” ์†์„ฑ๋งŒ ํฌํ•จํ•˜๊ณ  ์žˆ๋Š”์ง€ (`utils/check_config_attributes.py`์—์„œ ์ˆ˜ํ–‰) - README์™€ ๋ฌธ์„œ ์ธ๋ฑ์Šค์˜ ๋ฒˆ์—ญ์ด ๋ฉ”์ธ README์™€ ๋™์ผํ•œ ๋ชจ๋ธ ๋ชฉ๋ก์„ ๊ฐ€์ง€๊ณ  ์žˆ๋Š”์ง€ (`utils/check_copies.py`์—์„œ ์ˆ˜ํ–‰) - ๋ฌธ์„œ์˜ ์ž๋™ ์ƒ์„ฑ๋œ ํ…Œ์ด๋ธ”์ด ์ตœ์‹  ์ƒํƒœ์ธ์ง€ (`utils/check_table.py`์—์„œ ์ˆ˜ํ–‰) - ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—๋Š” ์„ ํƒ์  ์ข…์†์„ฑ์ด ์„ค์น˜๋˜์ง€ ์•Š์•˜๋”๋ผ๋„ ๋ชจ๋“  ๊ฐ์ฒด๊ฐ€ ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ์ง€ (`utils/check_dummies.py`์—์„œ ์ˆ˜ํ–‰) ์ด๋Ÿฌํ•œ ๊ฒ€์‚ฌ๊ฐ€ ์‹คํŒจํ•˜๋Š” ๊ฒฝ์šฐ, ์ฒ˜์Œ ๋‘ ๊ฐ€์ง€ ํ•ญ๋ชฉ์€ ์ˆ˜๋™์œผ๋กœ ์ˆ˜์ •ํ•ด์•ผ ํ•˜๋ฉฐ, ๋‚˜๋จธ์ง€ ๋„ค ๊ฐ€์ง€ ํ•ญ๋ชฉ์€ ๋‹ค์Œ ๋ช…๋ น์„ ์‹คํ–‰ํ•˜์—ฌ ์ž๋™์œผ๋กœ ์ˆ˜์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```bash make fix-copies ``` ์ถ”๊ฐ€์ ์ธ ๊ฒ€์‚ฌ๋Š” ์ƒˆ๋กœ์šด ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•˜๋Š” PR์— ๋Œ€ํ•œ ๊ฒƒ์œผ๋กœ, ์ฃผ๋กœ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: - ์ถ”๊ฐ€๋œ ๋ชจ๋“  ๋ชจ๋ธ์ด Auto-mapping์— ์žˆ๋Š”์ง€ (`utils/check_repo.py`์—์„œ ์ˆ˜ํ–‰) <!-- TODO Sylvain, add a check that makes sure the common tests are implemented.--> - ๋ชจ๋“  ๋ชจ๋ธ์ด ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ํ…Œ์ŠคํŠธ๋˜์—ˆ๋Š”์ง€ (`utils/check_repo.py`์—์„œ ์ˆ˜ํ–‰) <!-- TODO Sylvain, add the following - ๋ชจ๋“  ๋ชจ๋ธ์ด ๋ฉ”์ธ README, ์ฃผ์š” ๋ฌธ์„œ์— ์ถ”๊ฐ€๋˜์—ˆ๋Š”์ง€ - ์‚ฌ์šฉ๋œ ๋ชจ๋“  ์ฒดํฌํฌ์ธํŠธ๊ฐ€ ์‹ค์ œ๋กœ Hub์— ์กด์žฌํ•˜๋Š”์ง€ --> ### ๋ณต์‚ฌ๋ณธ ํ™•์ธ [[check-copies]] Transformers ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋Š” ๋ชจ๋ธ ์ฝ”๋“œ์— ๋Œ€ํ•ด ๋งค์šฐ ์™„๊ณ ํ•˜๋ฉฐ, ๊ฐ ๋ชจ๋ธ์€ ๋‹ค๋ฅธ ๋ชจ๋ธ์— ์˜์กดํ•˜์ง€ ์•Š๊ณ  ์™„์ „ํžˆ ๋‹จ์ผ ํŒŒ์ผ๋กœ ๊ตฌํ˜„๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๊ธฐ ์œ„ํ•ด ํŠน์ • ๋ชจ๋ธ์˜ ์ฝ”๋“œ ๋ณต์‚ฌ๋ณธ์ด ์›๋ณธ๊ณผ ์ผ๊ด€๋œ ์ƒํƒœ๋กœ ์œ ์ง€๋˜๋Š”์ง€ ํ™•์ธํ•˜๋Š” ๋ฉ”์ปค๋‹ˆ์ฆ˜์„ ์ถ”๊ฐ€ํ–ˆ์Šต๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ๋ฒ„๊ทธ ์ˆ˜์ •์ด ํ•„์š”ํ•œ ๊ฒฝ์šฐ ๋‹ค๋ฅธ ๋ชจ๋ธ์— ์˜ํ–ฅ์„ ์ฃผ๋Š” ๋ชจ๋“  ๋ชจ๋ธ์„ ๋ณผ ์ˆ˜ ์žˆ์œผ๋ฉฐ ์ˆ˜์ •์„ ์ ์šฉํ• ์ง€ ์ˆ˜์ •๋œ ์‚ฌ๋ณธ์„ ์‚ญ์ œํ• ์ง€ ์„ ํƒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. <Tip> ํŒŒ์ผ์ด ๋‹ค๋ฅธ ํŒŒ์ผ์˜ ์™„์ „ํ•œ ์‚ฌ๋ณธ์ธ ๊ฒฝ์šฐ ํ•ด๋‹น ํŒŒ์ผ์„ `utils/check_copies.py`์˜ `FULL_COPIES` ์ƒ์ˆ˜์— ๋“ฑ๋กํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. </Tip> ์ด ๋ฉ”์ปค๋‹ˆ์ฆ˜์€ `# Copied from xxx` ํ˜•์‹์˜ ์ฃผ์„์„ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•ฉ๋‹ˆ๋‹ค. `xxx`์—๋Š” ์•„๋ž˜์— ๋ณต์‚ฌ๋˜๋Š” ํด๋ž˜์Šค ๋˜๋Š” ํ•จ์ˆ˜์˜ ์ „์ฒด ๊ฒฝ๋กœ๊ฐ€ ํฌํ•จ๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด `RobertaSelfOutput`์€ `BertSelfOutput` ํด๋ž˜์Šค์˜ ๋ณต์‚ฌ๋ณธ์ž…๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ [์—ฌ๊ธฐ](https://github.com/huggingface/transformers/blob/2bd7a27a671fd1d98059124024f580f8f5c0f3b5/src/transformers/models/roberta/modeling_roberta.py#L289)์—์„œ ์ฃผ์„์ด ์žˆ์Šต๋‹ˆ๋‹ค: ```py # Copied from transformers.models.bert.modeling_bert.BertSelfOutput ``` ํด๋ž˜์Šค ์ „์ฒด์— ์ˆ˜์ •์„ ์ ์šฉํ•˜๋Š” ๋Œ€์‹ ์— ๋ณต์‚ฌ๋ณธ๊ณผ ๊ด€๋ จ์žˆ๋Š” ๋ฉ”์„œ๋“œ์— ์ ์šฉํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. 
์˜ˆ๋ฅผ ๋“ค์–ด [์—ฌ๊ธฐ](https://github.com/huggingface/transformers/blob/2bd7a27a671fd1d98059124024f580f8f5c0f3b5/src/transformers/models/roberta/modeling_roberta.py#L598)์—์„œ `RobertaPreTrainedModel._init_weights`๊ฐ€ `BertPreTrainedModel`์˜ ๋™์ผํ•œ ๋ฉ”์„œ๋“œ์—์„œ ๋ณต์‚ฌ๋œ ๊ฒƒ์„ ๋ณผ ์ˆ˜ ์žˆ์œผ๋ฉฐ ํ•ด๋‹น ์ฃผ์„์ด ์žˆ์Šต๋‹ˆ๋‹ค: ```py # Copied from transformers.models.bert.modeling_bert.BertPreTrainedModel._init_weights ``` ๋ณต์‚ฌ๋ณธ์ด ์ด๋ฆ„๋งŒ ๋‹ค๋ฅธ ๊ฒฝ์šฐ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค: ์˜ˆ๋ฅผ ๋“ค์–ด `RobertaAttention`์—์„œ `BertSelfAttention` ๋Œ€์‹  `RobertaSelfAttention`์„ ์‚ฌ์šฉํ•˜์ง€๋งŒ ๊ทธ ์™ธ์—๋Š” ์ฝ”๋“œ๊ฐ€ ์™„์ „ํžˆ ๋™์ผํ•ฉ๋‹ˆ๋‹ค: ์ด ๋•Œ `# Copied from`์€ `Copied from xxx with foo->bar`์™€ ๊ฐ™์€ ๊ฐ„๋‹จํ•œ ๋ฌธ์ž์—ด ๋Œ€์ฒด๋ฅผ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ๋ชจ๋“  `foo` ์ธ์Šคํ„ด์Šค๋ฅผ `bar`๋กœ ๋ฐ”๊ฟ”์„œ ์ฝ”๋“œ๋ฅผ ๋ณต์‚ฌํ•ฉ๋‹ˆ๋‹ค. [์—ฌ๊ธฐ](https://github.com/huggingface/transformers/blob/2bd7a27a671fd1d98059124024f580f8f5c0f3b5/src/transformers/models/roberta/modeling_roberta.py#L304C1-L304C86)์—์„œ ์–ด๋–ป๊ฒŒ ์‚ฌ์šฉ๋˜๋Š”์ง€ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py # Copied from transformers.models.bert.modeling_bert.BertAttention with Bert->Roberta ``` ํ™”์‚ดํ‘œ ์ฃผ๋ณ€์—๋Š” ๊ณต๋ฐฑ์ด ์—†์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค(๊ณต๋ฐฑ์ด ๋Œ€์ฒด ํŒจํ„ด์˜ ์ผ๋ถ€์ธ ๊ฒฝ์šฐ๋Š” ์˜ˆ์™ธ์ž…๋‹ˆ๋‹ค). ๋Œ€์ฒด ํŒจํ„ด์„ ์‰ผํ‘œ๋กœ ๊ตฌ๋ถ„ํ•˜์—ฌ ์—ฌ๋Ÿฌ ํŒจํ„ด์„ ์ถ”๊ฐ€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด `CamemberForMaskedLM`์€ ๋‘ ๊ฐ€์ง€ ๋Œ€์ฒด ์‚ฌํ•ญ์„ ๊ฐ€์ง„ `RobertaForMaskedLM`์˜ ๋ณต์‚ฌ๋ณธ์ž…๋‹ˆ๋‹ค: `Roberta`๋ฅผ `Camembert`๋กœ ๋Œ€์ฒดํ•˜๊ณ  `ROBERTA`๋ฅผ `CAMEMBERT`๋กœ ๋Œ€์ฒดํ•ฉ๋‹ˆ๋‹ค. [์—ฌ๊ธฐ](https://github.com/huggingface/transformers/blob/15082a9dc6950ecae63a0d3e5060b2fc7f15050a/src/transformers/models/camembert/modeling_camembert.py#L929)์—์„œ ์ด๊ฒƒ์ด ์ฃผ์„์œผ๋กœ ์–ด๋–ป๊ฒŒ ๊ตฌํ˜„๋˜์—ˆ๋Š”์ง€ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py # Copied from transformers.models.roberta.modeling_roberta.RobertaForMaskedLM with Roberta->Camembert, ROBERTA->CAMEMBERT ``` ์ˆœ์„œ๊ฐ€ ์ค‘์š”ํ•œ ๊ฒฝ์šฐ(์ด์ „ ์ˆ˜์ •๊ณผ ์ถฉ๋Œํ•  ์ˆ˜ ์žˆ๋Š” ๊ฒฝ์šฐ) ์ˆ˜์ •์€ ์™ผ์ชฝ์—์„œ ์˜ค๋ฅธ์ชฝ์œผ๋กœ ์‹คํ–‰๋ฉ๋‹ˆ๋‹ค. <Tip> ์ƒˆ ๋ณ€๊ฒฝ์ด ์„œ์‹์„ ๋ณ€๊ฒฝํ•˜๋Š” ๊ฒฝ์šฐ(์งง์€ ์ด๋ฆ„์„ ๋งค์šฐ ๊ธด ์ด๋ฆ„์œผ๋กœ ๋ฐ”๊พธ๋Š” ๊ฒฝ์šฐ) ์ž๋™ ์„œ์‹ ์ง€์ •๊ธฐ๋ฅผ ์ ์šฉํ•œ ํ›„ ๋ณต์‚ฌ๋ณธ์ด ๊ฒ€์‚ฌ๋ฉ๋‹ˆ๋‹ค. </Tip> ํŒจํ„ด์˜ ๋Œ€์†Œ๋ฌธ์ž๊ฐ€ ๋‹ค๋ฅธ ๊ฒฝ์šฐ(๋Œ€๋ฌธ์ž์™€ ์†Œ๋ฌธ์ž๊ฐ€ ํ˜ผ์šฉ๋œ ๋Œ€์ฒด ์–‘์‹) `all-casing` ์˜ต์…˜์„ ์ถ”๊ฐ€ํ•˜๋Š” ๋ฐฉ๋ฒ•๋„ ์žˆ์Šต๋‹ˆ๋‹ค. [์—ฌ๊ธฐ](https://github.com/huggingface/transformers/blob/15082a9dc6950ecae63a0d3e5060b2fc7f15050a/src/transformers/models/mobilebert/modeling_mobilebert.py#L1237)์—์„œ `MobileBertForSequenceClassification`์—์„œ ์‚ฌ์šฉ๋œ ์˜ˆ์‹œ๋ฅผ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py # Copied from transformers.models.bert.modeling_bert.BertForSequenceClassification with Bert->MobileBert all-casing ``` ์ด ๊ฒฝ์šฐ, ์ฝ”๋“œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ๋ณต์‚ฌ๋ฉ๋‹ˆ๋‹ค: - `MobileBert`์—์„œ `Bert`๋กœ(์˜ˆ: `MobileBertModel`์„ init์—์„œ ์‚ฌ์šฉํ•  ๋•Œ) - `mobilebert`์—์„œ `bert`๋กœ(์˜ˆ: `self.mobilebert`๋ฅผ ์ •์˜ํ•  ๋•Œ) - `MOBILEBERT`์—์„œ `BERT`๋กœ(`MOBILEBERT_INPUTS_DOCSTRING` ์ƒ์ˆ˜์—์„œ)
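๋ณต์‚ฌ๋ณธ ๊ฒ€์‚ฌ๋งŒ ๋กœ์ปฌ์—์„œ ๋”ฐ๋กœ ์‹คํ–‰ํ•ด๋ณด๊ณ  ์‹ถ๋‹ค๋ฉด ํ•ด๋‹น ์œ ํ‹ธ๋ฆฌํ‹ฐ๋ฅผ ์ง์ ‘ ํ˜ธ์ถœํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ์ผ๋ฐ˜์ ์ธ ์‚ฌ์šฉ ์˜ˆ์‹œ์ด๋ฉฐ, ์˜ต์…˜ ์ด๋ฆ„์€ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ ๋ฒ„์ „์— ๋”ฐ๋ผ ๋‹ฌ๋ผ์งˆ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค:

```bash
# ๋ณต์‚ฌ๋ณธ์ด ์›๋ณธ๊ณผ ์ผ์น˜ํ•˜๋Š”์ง€๋งŒ ๊ฒ€์‚ฌํ•ฉ๋‹ˆ๋‹ค
python utils/check_copies.py

# ๋ถˆ์ผ์น˜ํ•˜๋Š” ๋ณต์‚ฌ๋ณธ์„ ์›๋ณธ ๊ธฐ์ค€์œผ๋กœ ๋ฎ์–ด์”๋‹ˆ๋‹ค (`make fix-copies`๊ฐ€ ์ˆ˜ํ–‰ํ•˜๋Š” ์ž‘์—…๊ณผ ์œ ์‚ฌํ•ฉ๋‹ˆ๋‹ค)
python utils/check_copies.py --fix_and_overwrite
```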
mavonic_private_repos/transformers/docs/source/ko/preprocessing.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์ „์ฒ˜๋ฆฌ[[preprocess]] [[open-in-colab]] ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๋ ค๋ฉด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋ชจ๋ธ์— ๋งž๋Š” ์ž…๋ ฅ ํ˜•์‹์œผ๋กœ ์ „์ฒ˜๋ฆฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ํ…์ŠคํŠธ, ์ด๋ฏธ์ง€ ๋˜๋Š” ์˜ค๋””์˜ค์ธ์ง€ ๊ด€๊ณ„์—†์ด ๋ฐ์ดํ„ฐ๋ฅผ ํ…์„œ ๋ฐฐ์น˜๋กœ ๋ณ€ํ™˜ํ•˜๊ณ  ์กฐ๋ฆฝํ•  ํ•„์š”๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ๐Ÿค— Transformers๋Š” ๋ชจ๋ธ์— ๋Œ€ํ•œ ๋ฐ์ดํ„ฐ๋ฅผ ์ค€๋น„ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋˜๋Š” ์ผ๋ จ์˜ ์ „์ฒ˜๋ฆฌ ํด๋ž˜์Šค๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ๋Š” ๋‹ค์Œ ๋‚ด์šฉ์„ ๋ฐฐ์šธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: * ํ…์ŠคํŠธ๋Š” [Tokenizer](./main_classes/tokenizer)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ† ํฐ ์‹œํ€€์Šค๋กœ ๋ณ€ํ™˜ํ•˜๊ณ  ํ† ํฐ์˜ ์ˆซ์ž ํ‘œํ˜„์„ ๋งŒ๋“  ํ›„ ํ…์„œ๋กœ ์กฐ๋ฆฝํ•ฉ๋‹ˆ๋‹ค. * ์Œ์„ฑ ๋ฐ ์˜ค๋””์˜ค๋Š” [Feature extractor](./main_classes/feature_extractor)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์˜ค๋””์˜ค ํŒŒํ˜•์—์„œ ์‹œํ€€์Šค ํŠน์„ฑ์„ ํŒŒ์•…ํ•˜์—ฌ ํ…์„œ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค. * ์ด๋ฏธ์ง€ ์ž…๋ ฅ์€ [ImageProcessor](./main_classes/image)์„ ์‚ฌ์šฉํ•˜์—ฌ ์ด๋ฏธ์ง€๋ฅผ ํ…์„œ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค. * ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ ์ž…๋ ฅ์€ [Processor](./main_classes/processors)์„ ์‚ฌ์šฉํ•˜์—ฌ ํ† ํฌ๋‚˜์ด์ €์™€ ํŠน์„ฑ ์ถ”์ถœ๊ธฐ ๋˜๋Š” ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ๊ฒฐํ•ฉํ•ฉ๋‹ˆ๋‹ค. <Tip> `AutoProcessor`๋Š” **์–ธ์ œ๋‚˜** ์ž‘๋™ํ•˜์—ฌ ํ† ํฌ๋‚˜์ด์ €, ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ, ํŠน์„ฑ ์ถ”์ถœ๊ธฐ ๋˜๋Š” ํ”„๋กœ์„ธ์„œ ๋“ฑ ์‚ฌ์šฉ ์ค‘์ธ ๋ชจ๋ธ์— ๋งž๋Š” ํด๋ž˜์Šค๋ฅผ ์ž๋™์œผ๋กœ ์„ ํƒํ•ฉ๋‹ˆ๋‹ค. </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ๐Ÿค— Datasets๋ฅผ ์„ค์น˜ํ•˜์—ฌ ์‹คํ—˜์— ์‚ฌ์šฉํ•  ๋ฐ์ดํ„ฐ๋ฅผ ๋ถˆ๋Ÿฌ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash pip install datasets ``` ## ์ž์—ฐ์–ด์ฒ˜๋ฆฌ[[natural-language-processing]] <Youtube id="Yffk5aydLzg"/> ํ…์ŠคํŠธ ๋ฐ์ดํ„ฐ๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•œ ๊ธฐ๋ณธ ๋„๊ตฌ๋Š” [tokenizer](main_classes/tokenizer)์ž…๋‹ˆ๋‹ค. ํ† ํฌ๋‚˜์ด์ €๋Š” ์ผ๋ จ์˜ ๊ทœ์น™์— ๋”ฐ๋ผ ํ…์ŠคํŠธ๋ฅผ *ํ† ํฐ*์œผ๋กœ ๋‚˜๋ˆ•๋‹ˆ๋‹ค. ํ† ํฐ์€ ์ˆซ์ž๋กœ ๋ณ€ํ™˜๋˜๊ณ  ํ…์„œ๋Š” ๋ชจ๋ธ ์ž…๋ ฅ์ด ๋ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์— ํ•„์š”ํ•œ ์ถ”๊ฐ€ ์ž…๋ ฅ์€ ํ† ํฌ๋‚˜์ด์ €์— ์˜ํ•ด ์ถ”๊ฐ€๋ฉ๋‹ˆ๋‹ค. <Tip> ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ๊ณ„ํš์ด๋ผ๋ฉด ๋ชจ๋ธ๊ณผ ํ•จ๊ป˜ ์‚ฌ์ „ํ›ˆ๋ จ๋œ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ํ…์ŠคํŠธ๊ฐ€ ์‚ฌ์ „ํ›ˆ๋ จ ๋ง๋ญ‰์น˜์™€ ๋™์ผํ•œ ๋ฐฉ์‹์œผ๋กœ ๋ถ„ํ• ๋˜๊ณ  ์‚ฌ์ „ํ›ˆ๋ จ ์ค‘์— ๋™์ผํ•œ ํ•ด๋‹น ํ† ํฐ-์ธ๋ฑ์Šค ์Œ(์ผ๋ฐ˜์ ์œผ๋กœ *vocab*์ด๋ผ๊ณ  ํ•จ)์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. </Tip> ์‹œ์ž‘ํ•˜๋ ค๋ฉด [`AutoTokenizer.from_pretrained`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์‚ฌ์ „ํ›ˆ๋ จ๋œ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๋ถˆ๋Ÿฌ์˜ค์„ธ์š”. 
๋ชจ๋ธ๊ณผ ํ•จ๊ป˜ ์‚ฌ์ „ํ›ˆ๋ จ๋œ *vocab*์„ ๋‹ค์šด๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased") ``` ๊ทธ ๋‹ค์Œ์œผ๋กœ ํ…์ŠคํŠธ๋ฅผ ํ† ํฌ๋‚˜์ด์ €์— ๋„ฃ์–ด์ฃผ์„ธ์š”: ```py >>> encoded_input = tokenizer("Do not meddle in the affairs of wizards, for they are subtle and quick to anger.") >>> print(encoded_input) {'input_ids': [101, 2079, 2025, 19960, 10362, 1999, 1996, 3821, 1997, 16657, 1010, 2005, 2027, 2024, 11259, 1998, 4248, 2000, 4963, 1012, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} ``` ํ† ํฌ๋‚˜์ด์ €๋Š” ์„ธ ๊ฐ€์ง€ ์ค‘์š”ํ•œ ํ•ญ๋ชฉ์„ ํฌํ•จํ•œ ๋”•์…”๋„ˆ๋ฆฌ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: * [input_ids](glossary#input-ids)๋Š” ๋ฌธ์žฅ์˜ ๊ฐ ํ† ํฐ์— ํ•ด๋‹นํ•˜๋Š” ์ธ๋ฑ์Šค์ž…๋‹ˆ๋‹ค. * [attention_mask](glossary#attention-mask)๋Š” ํ† ํฐ์„ ์ฒ˜๋ฆฌํ•ด์•ผ ํ•˜๋Š”์ง€ ์—ฌ๋ถ€๋ฅผ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. * [token_type_ids](glossary#token-type-ids)๋Š” ๋‘ ๊ฐœ ์ด์ƒ์˜ ์‹œํ€€์Šค๊ฐ€ ์žˆ์„ ๋•Œ ํ† ํฐ์ด ์†ํ•œ ์‹œํ€€์Šค๋ฅผ ์‹๋ณ„ํ•ฉ๋‹ˆ๋‹ค. `input_ids`๋ฅผ ๋””์ฝ”๋”ฉํ•˜์—ฌ ์ž…๋ ฅ์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> tokenizer.decode(encoded_input["input_ids"]) '[CLS] Do not meddle in the affairs of wizards, for they are subtle and quick to anger. [SEP]' ``` ํ† ํฌ๋‚˜์ด์ €๊ฐ€ ๋‘ ๊ฐœ์˜ ํŠน์ˆ˜ํ•œ ํ† ํฐ(๋ถ„๋ฅ˜ ํ† ํฐ `CLS`์™€ ๋ถ„ํ•  ํ† ํฐ `SEP`)์„ ๋ฌธ์žฅ์— ์ถ”๊ฐ€ํ–ˆ์Šต๋‹ˆ๋‹ค. ๋ชจ๋“  ๋ชจ๋ธ์— ํŠน์ˆ˜ํ•œ ํ† ํฐ์ด ํ•„์š”ํ•œ ๊ฒƒ์€ ์•„๋‹ˆ์ง€๋งŒ, ํ•„์š”ํ•˜๋‹ค๋ฉด ํ† ํฌ๋‚˜์ด์ €๊ฐ€ ์ž๋™์œผ๋กœ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ์ „์ฒ˜๋ฆฌํ•  ๋ฌธ์žฅ์ด ์—ฌ๋Ÿฌ ๊ฐœ ์žˆ๋Š” ๊ฒฝ์šฐ์—๋Š” ๋ฆฌ์ŠคํŠธ๋กœ ํ† ํฌ๋‚˜์ด์ €์— ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค: ```py >>> batch_sentences = [ ... "But what about second breakfast?", ... "Don't think he knows about second breakfast, Pip.", ... "What about elevensies?", ... ] >>> encoded_inputs = tokenizer(batch_sentences) >>> print(encoded_inputs) {'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102], [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102], [101, 1327, 1164, 5450, 23434, 136, 102]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1]]} ``` ### ํŒจ๋”ฉ[[pad]] ๋ชจ๋ธ ์ž…๋ ฅ์ธ ํ…์„œ๋Š” ๋ชจ์–‘์ด ๊ท ์ผํ•ด์•ผ ํ•˜์ง€๋งŒ, ๋ฌธ์žฅ์˜ ๊ธธ์ด๊ฐ€ ํ•ญ์ƒ ๊ฐ™์ง€๋Š” ์•Š๊ธฐ ๋•Œ๋ฌธ์— ๋ฌธ์ œ๊ฐ€ ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํŒจ๋”ฉ์€ ์งง์€ ๋ฌธ์žฅ์— ํŠน์ˆ˜ํ•œ *ํŒจ๋”ฉ ํ† ํฐ*์„ ์ถ”๊ฐ€ํ•˜์—ฌ ํ…์„œ๋ฅผ ์ง์‚ฌ๊ฐํ˜• ๋ชจ์–‘์ด ๋˜๋„๋ก ํ•˜๋Š” ์ „๋žต์ž…๋‹ˆ๋‹ค. `padding` ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ `True`๋กœ ์„ค์ •ํ•˜์—ฌ ๋ฐฐ์น˜ ๋‚ด์˜ ์งง์€ ์‹œํ€€์Šค๋ฅผ ๊ฐ€์žฅ ๊ธด ์‹œํ€€์Šค์— ๋งž์ถฐ ํŒจ๋”ฉํ•ฉ๋‹ˆ๋‹ค. ```py >>> batch_sentences = [ ... "But what about second breakfast?", ... "Don't think he knows about second breakfast, Pip.", ... "What about elevensies?", ... 
] >>> encoded_input = tokenizer(batch_sentences, padding=True) >>> print(encoded_input) {'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0], [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102], [101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]]} ``` ๊ธธ์ด๊ฐ€ ์งง์€ ์ฒซ ๋ฌธ์žฅ๊ณผ ์„ธ ๋ฒˆ์งธ ๋ฌธ์žฅ์ด ์ด์ œ `0`์œผ๋กœ ์ฑ„์›Œ์กŒ์Šต๋‹ˆ๋‹ค. ### ์ž˜๋ผ๋‚ด๊ธฐ[[truncation]] ํ•œํŽธ, ๋•Œ๋กœ๋Š” ์‹œํ€€์Šค๊ฐ€ ๋ชจ๋ธ์—์„œ ์ฒ˜๋ฆฌํ•˜๊ธฐ์— ๋„ˆ๋ฌด ๊ธธ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๊ฒฝ์šฐ, ์‹œํ€€์Šค๋ฅผ ๋” ์งง๊ฒŒ ์ค„์ผ ํ•„์š”๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ชจ๋ธ์—์„œ ํ—ˆ์šฉํ•˜๋Š” ์ตœ๋Œ€ ๊ธธ์ด๋กœ ์‹œํ€€์Šค๋ฅผ ์ž๋ฅด๋ ค๋ฉด `truncation` ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ `True`๋กœ ์„ค์ •ํ•˜์„ธ์š”: ```py >>> batch_sentences = [ ... "But what about second breakfast?", ... "Don't think he knows about second breakfast, Pip.", ... "What about elevensies?", ... ] >>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True) >>> print(encoded_input) {'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0], [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102], [101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]]} ``` <Tip> ๋‹ค์–‘ํ•œ ํŒจ๋”ฉ๊ณผ ์ž˜๋ผ๋‚ด๊ธฐ ์ธ์ˆ˜์— ๋Œ€ํ•ด ๋” ์•Œ์•„๋ณด๋ ค๋ฉด [ํŒจ๋”ฉ๊ณผ ์ž˜๋ผ๋‚ด๊ธฐ](./pad_truncation) ๊ฐœ๋… ๊ฐ€์ด๋“œ๋ฅผ ํ™•์ธํ•ด๋ณด์„ธ์š”. </Tip> ### ํ…์„œ ๋งŒ๋“ค๊ธฐ[[build-tensors]] ๋งˆ์ง€๋ง‰์œผ๋กœ, ํ† ํฌ๋‚˜์ด์ €๊ฐ€ ๋ชจ๋ธ์— ๊ณต๊ธ‰๋˜๋Š” ์‹ค์ œ ํ…์„œ๋ฅผ ๋ฐ˜ํ™˜ํ•˜๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. `return_tensors` ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ PyTorch์˜ ๊ฒฝ์šฐ `pt`, TensorFlow์˜ ๊ฒฝ์šฐ `tf`๋กœ ์„ค์ •ํ•˜์„ธ์š”: <frameworkcontent> <pt> ```py >>> batch_sentences = [ ... "But what about second breakfast?", ... "Don't think he knows about second breakfast, Pip.", ... "What about elevensies?", ... ] >>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="pt") >>> print(encoded_input) {'input_ids': tensor([[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0], [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102], [101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]])} ``` </pt> <tf> ```py >>> batch_sentences = [ ... "But what about second breakfast?", ... "Don't think he knows about second breakfast, Pip.", ... "What about elevensies?", ... 
] >>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="tf") >>> print(encoded_input) {'input_ids': <tf.Tensor: shape=(2, 9), dtype=int32, numpy= array([[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0], [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102], [101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)>, 'token_type_ids': <tf.Tensor: shape=(2, 9), dtype=int32, numpy= array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)>, 'attention_mask': <tf.Tensor: shape=(2, 9), dtype=int32, numpy= array([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)>} ``` </tf> </frameworkcontent> ## ์˜ค๋””์˜ค[[audio]] ์˜ค๋””์˜ค ์ž‘์—…์€ ๋ชจ๋ธ์— ๋งž๋Š” ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์ค€๋น„ํ•˜๊ธฐ ์œ„ํ•ด [ํŠน์„ฑ ์ถ”์ถœ๊ธฐ](main_classes/feature_extractor)๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ํŠน์„ฑ ์ถ”์ถœ๊ธฐ๋Š” ์›์‹œ ์˜ค๋””์˜ค ๋ฐ์ดํ„ฐ์—์„œ ํŠน์„ฑ๋ฅผ ์ถ”์ถœํ•˜๊ณ  ์ด๋ฅผ ํ…์„œ๋กœ ๋ณ€ํ™˜ํ•˜๋Š” ๊ฒƒ์ด ๋ชฉ์ ์ž…๋‹ˆ๋‹ค. ์˜ค๋””์˜ค ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ํŠน์„ฑ ์ถ”์ถœ๊ธฐ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ณด๊ธฐ ์œ„ํ•ด [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๊ฐ€์ ธ์˜ค์„ธ์š”. (๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๊ฐ€์ ธ์˜ค๋Š” ๋ฐฉ๋ฒ•์€ ๐Ÿค— [๋ฐ์ดํ„ฐ ์„ธํŠธ ํŠœํ† ๋ฆฌ์–ผ](https://huggingface.co/docs/datasets/load_hub)์—์„œ ์ž์„ธํžˆ ์„ค๋ช…ํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค.) ```py >>> from datasets import load_dataset, Audio >>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train") ``` `audio` ์—ด์˜ ์ฒซ ๋ฒˆ์งธ ์š”์†Œ์— ์ ‘๊ทผํ•˜์—ฌ ์ž…๋ ฅ์„ ์‚ดํŽด๋ณด์„ธ์š”. `audio` ์—ด์„ ํ˜ธ์ถœํ•˜๋ฉด ์˜ค๋””์˜ค ํŒŒ์ผ์„ ์ž๋™์œผ๋กœ ๊ฐ€์ ธ์˜ค๊ณ  ๋ฆฌ์ƒ˜ํ”Œ๋งํ•ฉ๋‹ˆ๋‹ค. ```py >>> dataset[0]["audio"] {'array': array([ 0. , 0.00024414, -0.00024414, ..., -0.00024414, 0. , 0. ], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav', 'sampling_rate': 8000} ``` ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ์„ธ ๊ฐ€์ง€ ํ•ญ๋ชฉ์ด ๋ฐ˜ํ™˜๋ฉ๋‹ˆ๋‹ค: * `array`๋Š” 1D ๋ฐฐ์—ด๋กœ ๊ฐ€์ ธ์™€์„œ (ํ•„์š”ํ•œ ๊ฒฝ์šฐ) ๋ฆฌ์ƒ˜ํ”Œ๋ง๋œ ์Œ์„ฑ ์‹ ํ˜ธ์ž…๋‹ˆ๋‹ค. * `path`๋Š” ์˜ค๋””์˜ค ํŒŒ์ผ์˜ ์œ„์น˜๋ฅผ ๊ฐ€๋ฆฌํ‚ต๋‹ˆ๋‹ค. * `sampling_rate`๋Š” ์Œ์„ฑ ์‹ ํ˜ธ์—์„œ ์ดˆ๋‹น ์ธก์ •๋˜๋Š” ๋ฐ์ดํ„ฐ ํฌ์ธํŠธ ์ˆ˜๋ฅผ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ๋Š” [Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base) ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ ์นด๋“œ๋ฅผ ๋ณด๋ฉด Wav2Vec2๊ฐ€ 16kHz ์ƒ˜ํ”Œ๋ง๋œ ์Œ์„ฑ ์˜ค๋””์˜ค๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๊ฒƒ์„ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ ์‚ฌ์ „ํ›ˆ๋ จํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋œ ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์ƒ˜ํ”Œ๋ง ๋ ˆ์ดํŠธ์™€ ์˜ค๋””์˜ค ๋ฐ์ดํ„ฐ์˜ ์ƒ˜ํ”Œ๋ง ๋ ˆ์ดํŠธ๊ฐ€ ์ผ์น˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ์˜ ์ƒ˜ํ”Œ๋ง ๋ ˆ์ดํŠธ๊ฐ€ ๋‹ค๋ฅด๋ฉด ๋ฐ์ดํ„ฐ๋ฅผ ๋ฆฌ์ƒ˜ํ”Œ๋งํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 1. ๐Ÿค— Datasets์˜ [`~datasets.Dataset.cast_column`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ƒ˜ํ”Œ๋ง ๋ ˆ์ดํŠธ๋ฅผ 16kHz๋กœ ์—…์ƒ˜ํ”Œ๋งํ•˜์„ธ์š”: ```py >>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000)) ``` 2. 
์˜ค๋””์˜ค ํŒŒ์ผ์„ ๋ฆฌ์ƒ˜ํ”Œ๋งํ•˜๊ธฐ ์œ„ํ•ด `audio` ์—ด์„ ๋‹ค์‹œ ํ˜ธ์ถœํ•ฉ๋‹ˆ๋‹ค: ```py >>> dataset[0]["audio"] {'array': array([ 2.3443763e-05, 2.1729663e-04, 2.2145823e-04, ..., 3.8356509e-05, -7.3497440e-06, -2.1754686e-05], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav', 'sampling_rate': 16000} ``` ๋‹ค์Œ์œผ๋กœ, ์ž…๋ ฅ์„ ์ •๊ทœํ™”ํ•˜๊ณ  ํŒจ๋”ฉํ•  ํŠน์„ฑ ์ถ”์ถœ๊ธฐ๋ฅผ ๊ฐ€์ ธ์˜ค์„ธ์š”. ํ…์ŠคํŠธ ๋ฐ์ดํ„ฐ์˜ ๊ฒฝ์šฐ, ๋” ์งง์€ ์‹œํ€€์Šค์— ๋Œ€ํ•ด `0`์ด ์ถ”๊ฐ€๋ฉ๋‹ˆ๋‹ค. ์˜ค๋””์˜ค ๋ฐ์ดํ„ฐ์—๋„ ๊ฐ™์€ ๊ฐœ๋…์ด ์ ์šฉ๋ฉ๋‹ˆ๋‹ค. ํŠน์„ฑ ์ถ”์ถœ๊ธฐ๋Š” ๋ฐฐ์—ด์— `0`(๋ฌต์Œ์œผ๋กœ ํ•ด์„)์„ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. [`AutoFeatureExtractor.from_pretrained`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํŠน์„ฑ ์ถ”์ถœ๊ธฐ๋ฅผ ๊ฐ€์ ธ์˜ค์„ธ์š”: ```py >>> from transformers import AutoFeatureExtractor >>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base") ``` ์˜ค๋””์˜ค `array`๋ฅผ ํŠน์„ฑ ์ถ”์ถœ๊ธฐ์— ์ „๋‹ฌํ•˜์„ธ์š”. ๋˜ํ•œ, ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ๋Š” ์กฐ์šฉํ•œ ์˜ค๋ฅ˜(silent errors)๋ฅผ ๋” ์ž˜ ๋””๋ฒ„๊น…ํ•  ์ˆ˜ ์žˆ๋„๋ก ํŠน์„ฑ ์ถ”์ถœ๊ธฐ์— `sampling_rate` ์ธ์ˆ˜๋ฅผ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ```py >>> audio_input = [dataset[0]["audio"]["array"]] >>> feature_extractor(audio_input, sampling_rate=16000) {'input_values': [array([ 3.8106556e-04, 2.7506407e-03, 2.8015103e-03, ..., 5.6335266e-04, 4.6588284e-06, -1.7142107e-04], dtype=float32)]} ``` ํ† ํฌ๋‚˜์ด์ €์™€ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ ๋ฐฐ์น˜ ๋‚ด์—์„œ ๊ฐ€๋ณ€์ ์ธ ์‹œํ€€์Šค๋ฅผ ์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด ํŒจ๋”ฉ ๋˜๋Š” ์ž˜๋ผ๋‚ด๊ธฐ๋ฅผ ์ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๋‘ ๊ฐœ์˜ ์˜ค๋””์˜ค ์ƒ˜ํ”Œ์˜ ์‹œํ€€์Šค ๊ธธ์ด๋ฅผ ํ™•์ธํ•ด๋ณด์„ธ์š”: ```py >>> dataset[0]["audio"]["array"].shape (173398,) >>> dataset[1]["audio"]["array"].shape (106496,) ``` ์˜ค๋””์˜ค ์ƒ˜ํ”Œ์˜ ๊ธธ์ด๊ฐ€ ๋™์ผํ•˜๋„๋ก ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“œ์„ธ์š”. ์ตœ๋Œ€ ์ƒ˜ํ”Œ ๊ธธ์ด๋ฅผ ์ง€์ •ํ•˜๋ฉด ํŠน์„ฑ ์ถ”์ถœ๊ธฐ๊ฐ€ ํ•ด๋‹น ๊ธธ์ด์— ๋งž์ถฐ ์‹œํ€€์Šค๋ฅผ ํŒจ๋”ฉํ•˜๊ฑฐ๋‚˜ ์ž˜๋ผ๋ƒ…๋‹ˆ๋‹ค: ```py >>> def preprocess_function(examples): ... audio_arrays = [x["array"] for x in examples["audio"]] ... inputs = feature_extractor( ... audio_arrays, ... sampling_rate=16000, ... padding=True, ... max_length=100000, ... truncation=True, ... ) ... return inputs ``` `preprocess_function`์„ ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์ฒ˜์Œ ์˜ˆ์‹œ ๋ช‡ ๊ฐœ์— ์ ์šฉํ•ด๋ณด์„ธ์š”: ```py >>> processed_dataset = preprocess_function(dataset[:5]) ``` ์ด์ œ ์ƒ˜ํ”Œ ๊ธธ์ด๊ฐ€ ๋ชจ๋‘ ๊ฐ™๊ณ  ์ง€์ •๋œ ์ตœ๋Œ€ ๊ธธ์ด์— ๋งž๊ฒŒ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ๋“œ๋””์–ด ์ „์ฒ˜๋ฆฌ๋œ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋ชจ๋ธ์— ์ „๋‹ฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ```py >>> processed_dataset["input_values"][0].shape (100000,) >>> processed_dataset["input_values"][1].shape (100000,) ``` ## ์ปดํ“จํ„ฐ ๋น„์ „[[computer-vision]] ์ปดํ“จํ„ฐ ๋น„์ „ ์ž‘์—…์˜ ๊ฒฝ์šฐ, ๋ชจ๋ธ์— ๋Œ€ํ•œ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์ค€๋น„ํ•˜๊ธฐ ์œ„ํ•ด [์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ](main_classes/image_processor)๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ์ „์ฒ˜๋ฆฌ๋Š” ์ด๋ฏธ์ง€๋ฅผ ๋ชจ๋ธ์ด ์˜ˆ์ƒํ•˜๋Š” ์ž…๋ ฅ์œผ๋กœ ๋ณ€ํ™˜ํ•˜๋Š” ์—ฌ๋Ÿฌ ๋‹จ๊ณ„๋กœ ์ด๋ฃจ์–ด์ง‘๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋‹จ๊ณ„์—๋Š” ํฌ๊ธฐ ์กฐ์ •, ์ •๊ทœํ™”, ์ƒ‰์ƒ ์ฑ„๋„ ๋ณด์ •, ์ด๋ฏธ์ง€์˜ ํ…์„œ ๋ณ€ํ™˜ ๋“ฑ์ด ํฌํ•จ๋ฉ๋‹ˆ๋‹ค. <Tip> ์ด๋ฏธ์ง€ ์ „์ฒ˜๋ฆฌ๋Š” ์ด๋ฏธ์ง€ ์ฆ๊ฐ• ๊ธฐ๋ฒ•์„ ๋ช‡ ๊ฐ€์ง€ ์ ์šฉํ•œ ๋’ค์— ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. 
์ด๋ฏธ์ง€ ์ „์ฒ˜๋ฆฌ ๋ฐ ์ด๋ฏธ์ง€ ์ฆ๊ฐ•์€ ๋ชจ๋‘ ์ด๋ฏธ์ง€ ๋ฐ์ดํ„ฐ๋ฅผ ๋ณ€ํ˜•ํ•˜์ง€๋งŒ, ์„œ๋กœ ๋‹ค๋ฅธ ๋ชฉ์ ์„ ๊ฐ€์ง€๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค: * ์ด๋ฏธ์ง€ ์ฆ๊ฐ•์€ ๊ณผ์ ํ•ฉ(over-fitting)์„ ๋ฐฉ์ง€ํ•˜๊ณ  ๋ชจ๋ธ์˜ ๊ฒฌ๊ณ ํ•จ(resiliency)์„ ๋†’์ด๋Š” ๋ฐ ๋„์›€์ด ๋˜๋Š” ๋ฐฉ์‹์œผ๋กœ ์ด๋ฏธ์ง€๋ฅผ ์ˆ˜์ •ํ•ฉ๋‹ˆ๋‹ค. ๋ฐ๊ธฐ์™€ ์ƒ‰์ƒ ์กฐ์ •, ์ž๋ฅด๊ธฐ, ํšŒ์ „, ํฌ๊ธฐ ์กฐ์ •, ํ™•๋Œ€/์ถ•์†Œ ๋“ฑ ๋‹ค์–‘ํ•œ ๋ฐฉ๋ฒ•์œผ๋กœ ๋ฐ์ดํ„ฐ๋ฅผ ์ฆ๊ฐ•ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ฆ๊ฐ•์œผ๋กœ ์ด๋ฏธ์ง€์˜ ์˜๋ฏธ๊ฐ€ ๋ฐ”๋€Œ์ง€ ์•Š๋„๋ก ์ฃผ์˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. * ์ด๋ฏธ์ง€ ์ „์ฒ˜๋ฆฌ๋Š” ์ด๋ฏธ์ง€๊ฐ€ ๋ชจ๋ธ์ด ์˜ˆ์ƒํ•˜๋Š” ์ž…๋ ฅ ํ˜•์‹๊ณผ ์ผ์น˜ํ•˜๋„๋ก ๋ณด์žฅํ•ฉ๋‹ˆ๋‹ค. ์ปดํ“จํ„ฐ ๋น„์ „ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•  ๋•Œ ์ด๋ฏธ์ง€๋Š” ๋ชจ๋ธ์ด ์ดˆ๊ธฐ์— ํ›ˆ๋ จ๋  ๋•Œ์™€ ์ •ํ™•ํžˆ ๊ฐ™์€ ๋ฐฉ์‹์œผ๋กœ ์ „์ฒ˜๋ฆฌ๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ์ฆ๊ฐ•์—๋Š” ์›ํ•˜๋Š” ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ๋ฌด์—‡์ด๋“  ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ์ „์ฒ˜๋ฆฌ์—๋Š” ๋ชจ๋ธ๊ณผ ์—ฐ๊ฒฐ๋œ `ImageProcessor`๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. </Tip> [food101](https://huggingface.co/datasets/food101) ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๊ฐ€์ ธ์™€์„œ ์ปดํ“จํ„ฐ ๋น„์ „ ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ์–ด๋–ป๊ฒŒ ์‚ฌ์šฉํ•˜๋Š”์ง€ ์•Œ์•„๋ณด์„ธ์š”. ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋ถˆ๋Ÿฌ์˜ค๋Š” ๋ฐฉ๋ฒ•์€ ๐Ÿค— [๋ฐ์ดํ„ฐ ์„ธํŠธ ํŠœํ† ๋ฆฌ์–ผ](https://huggingface.co/docs/datasets/load_hub)์„ ์ฐธ๊ณ ํ•˜์„ธ์š”. <Tip> ๋ฐ์ดํ„ฐ ์„ธํŠธ๊ฐ€ ์ƒ๋‹นํžˆ ํฌ๊ธฐ ๋•Œ๋ฌธ์— ๐Ÿค— Datasets์˜ `split` ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ›ˆ๋ จ ์„ธํŠธ์—์„œ ์ž‘์€ ์ƒ˜ํ”Œ๋งŒ ๊ฐ€์ ธ์˜ค์„ธ์š”! </Tip> ```py >>> from datasets import load_dataset >>> dataset = load_dataset("food101", split="train[:100]") ``` ๋‹ค์Œ์œผ๋กœ, ๐Ÿค— Datasets์˜ [`image`](https://huggingface.co/docs/datasets/package_reference/main_classes?highlight=image#datasets.Image)๋กœ ์ด๋ฏธ์ง€๋ฅผ ํ™•์ธํ•ด๋ณด์„ธ์š”: ```py >>> dataset[0]["image"] ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/vision-preprocess-tutorial.png"/> </div> [`AutoImageProcessor.from_pretrained`]๋กœ ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ๊ฐ€์ ธ์˜ค์„ธ์š”: ```py >>> from transformers import AutoImageProcessor >>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224") ``` ๋จผ์ € ์ด๋ฏธ์ง€ ์ฆ๊ฐ• ๋‹จ๊ณ„๋ฅผ ์ถ”๊ฐ€ํ•ด ๋ด…์‹œ๋‹ค. ์•„๋ฌด ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋‚˜ ์‚ฌ์šฉํ•ด๋„ ๊ดœ์ฐฎ์ง€๋งŒ, ์ด๋ฒˆ ํŠœํ† ๋ฆฌ์–ผ์—์„œ๋Š” torchvision์˜ [`transforms`](https://pytorch.org/vision/stable/transforms.html) ๋ชจ๋“ˆ์„ ์‚ฌ์šฉํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ๋‹ค๋ฅธ ๋ฐ์ดํ„ฐ ์ฆ๊ฐ• ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•ด๋ณด๊ณ  ์‹ถ๋‹ค๋ฉด, [Albumentations](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification_albumentations.ipynb) ๋˜๋Š” [Kornia notebooks](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification_kornia.ipynb)์—์„œ ์–ด๋–ป๊ฒŒ ์‚ฌ์šฉํ•˜๋Š”์ง€ ๋ฐฐ์šธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 1. [`Compose`](https://pytorch.org/vision/master/generated/torchvision.transforms.Compose.html)๋กœ [`RandomResizedCrop`](https://pytorch.org/vision/main/generated/torchvision.transforms.RandomResizedCrop.html)์™€ [`ColorJitter`](https://pytorch.org/vision/main/generated/torchvision.transforms.ColorJitter.html) ๋“ฑ ๋ณ€ํ™˜์„ ๋ช‡ ๊ฐ€์ง€ ์—ฐ๊ฒฐํ•˜์„ธ์š”. ์ฐธ๊ณ ๋กœ ํฌ๊ธฐ ์กฐ์ •์— ํ•„์š”ํ•œ ์ด๋ฏธ์ง€์˜ ํฌ๊ธฐ ์š”๊ตฌ์‚ฌํ•ญ์€ `image_processor`์—์„œ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ผ๋ถ€ ๋ชจ๋ธ์€ ์ •ํ™•ํ•œ ๋†’์ด์™€ ๋„ˆ๋น„๋ฅผ ์š”๊ตฌํ•˜์ง€๋งŒ, ์ œ์ผ ์งง์€ ๋ณ€์˜ ๊ธธ์ด(`shortest_edge`)๋งŒ ์ •์˜๋œ ๋ชจ๋ธ๋„ ์žˆ์Šต๋‹ˆ๋‹ค. 
```py >>> from torchvision.transforms import RandomResizedCrop, ColorJitter, Compose >>> size = ( ... image_processor.size["shortest_edge"] ... if "shortest_edge" in image_processor.size ... else (image_processor.size["height"], image_processor.size["width"]) ... ) >>> _transforms = Compose([RandomResizedCrop(size), ColorJitter(brightness=0.5, hue=0.5)]) ``` 2. ๋ชจ๋ธ์€ ์ž…๋ ฅ์œผ๋กœ [`pixel_values`](model_doc/visionencoderdecoder#transformers.VisionEncoderDecoderModel.forward.pixel_values)๋ฅผ ๋ฐ›์Šต๋‹ˆ๋‹ค. `ImageProcessor`๋Š” ์ด๋ฏธ์ง€ ์ •๊ทœํ™” ๋ฐ ์ ์ ˆํ•œ ํ…์„œ ์ƒ์„ฑ์„ ์ฒ˜๋ฆฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฐฐ์น˜ ์ด๋ฏธ์ง€์— ๋Œ€ํ•œ ์ด๋ฏธ์ง€ ์ฆ๊ฐ• ๋ฐ ์ด๋ฏธ์ง€ ์ „์ฒ˜๋ฆฌ๋ฅผ ๊ฒฐํ•ฉํ•˜๊ณ  `pixel_values`๋ฅผ ์ƒ์„ฑํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค: ```py >>> def transforms(examples): ... images = [_transforms(img.convert("RGB")) for img in examples["image"]] ... examples["pixel_values"] = image_processor(images, do_resize=False, return_tensors="pt")["pixel_values"] ... return examples ``` <Tip> ์œ„์˜ ์˜ˆ์—์„œ๋Š” ์ด๋ฏธ์ง€ ์ฆ๊ฐ• ์ค‘์— ์ด๋ฏธ์ง€ ํฌ๊ธฐ๋ฅผ ์กฐ์ •ํ–ˆ๊ธฐ ๋•Œ๋ฌธ์— `do_resize=False`๋กœ ์„ค์ •ํ•˜๊ณ , ํ•ด๋‹น `image_processor`์—์„œ `size` ์†์„ฑ์„ ํ™œ์šฉํ–ˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ์ฆ๊ฐ• ์ค‘์— ์ด๋ฏธ์ง€ ํฌ๊ธฐ๋ฅผ ์กฐ์ •ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ ์ด ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์ƒ๋žตํ•˜์„ธ์š”. ๊ธฐ๋ณธ์ ์œผ๋กœ๋Š” `ImageProcessor`๊ฐ€ ํฌ๊ธฐ ์กฐ์ •์„ ์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค. ์ฆ๊ฐ• ๋ณ€ํ™˜ ๊ณผ์ •์—์„œ ์ด๋ฏธ์ง€๋ฅผ ์ •๊ทœํ™”ํ•˜๋ ค๋ฉด `image_processor.image_mean` ๋ฐ `image_processor.image_std` ๊ฐ’์„ ์‚ฌ์šฉํ•˜์„ธ์š”. </Tip> 3. ๐Ÿค— Datasets์˜ [`set_transform`](https://huggingface.co/docs/datasets/process#format-transform)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์‹ค์‹œ๊ฐ„์œผ๋กœ ๋ณ€ํ™˜์„ ์ ์šฉํ•ฉ๋‹ˆ๋‹ค: ```py >>> dataset.set_transform(transforms) ``` 4. ์ด์ œ ์ด๋ฏธ์ง€์— ์ ‘๊ทผํ•˜๋ฉด ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๊ฐ€ `pixel_values`๋ฅผ ์ถ”๊ฐ€ํ•œ ๊ฒƒ์„ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋“œ๋””์–ด ์ฒ˜๋ฆฌ๋œ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋ชจ๋ธ์— ์ „๋‹ฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ```py >>> dataset[0].keys() ``` ๋‹ค์Œ์€ ๋ณ€ํ˜•์ด ์ ์šฉ๋œ ํ›„์˜ ์ด๋ฏธ์ง€์ž…๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€๊ฐ€ ๋ฌด์ž‘์œ„๋กœ ์ž˜๋ ค๋‚˜๊ฐ”๊ณ  ์ƒ‰์ƒ ์†์„ฑ์ด ๋‹ค๋ฆ…๋‹ˆ๋‹ค. ```py >>> import numpy as np >>> import matplotlib.pyplot as plt >>> img = dataset[0]["pixel_values"] >>> plt.imshow(img.permute(1, 2, 0)) ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/preprocessed_image.png"/> </div> <Tip> `ImageProcessor`๋Š” ๊ฐ์ฒด ๊ฐ์ง€, ์‹œ๋งจํ‹ฑ ์„ธ๊ทธ๋ฉ˜ํ…Œ์ด์…˜(semantic segmentation), ์ธ์Šคํ„ด์Šค ์„ธ๊ทธ๋ฉ˜ํ…Œ์ด์…˜(instance segmentation), ํŒŒ๋†‰ํ‹ฑ ์„ธ๊ทธ๋ฉ˜ํ…Œ์ด์…˜(panoptic segmentation)๊ณผ ๊ฐ™์€ ์ž‘์—…์— ๋Œ€ํ•œ ํ›„์ฒ˜๋ฆฌ ๋ฐฉ๋ฒ•์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋ฐฉ๋ฒ•์€ ๋ชจ๋ธ์˜ ์›์‹œ ์ถœ๋ ฅ์„ ๊ฒฝ๊ณ„ ์ƒ์ž๋‚˜ ์„ธ๊ทธ๋ฉ˜ํ…Œ์ด์…˜ ๋งต๊ณผ ๊ฐ™์€ ์˜๋ฏธ ์žˆ๋Š” ์˜ˆ์ธก์œผ๋กœ ๋ณ€ํ™˜ํ•ด์ค๋‹ˆ๋‹ค. </Tip> ### ํŒจ๋”ฉ[[pad]] ์˜ˆ๋ฅผ ๋“ค์–ด, [DETR](./model_doc/detr)์™€ ๊ฐ™์€ ๊ฒฝ์šฐ์—๋Š” ๋ชจ๋ธ์ด ํ›ˆ๋ จํ•  ๋•Œ ํฌ๊ธฐ ์กฐ์ • ์ฆ๊ฐ•์„ ์ ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด๋กœ ์ธํ•ด ๋ฐฐ์น˜ ๋‚ด ์ด๋ฏธ์ง€ ํฌ๊ธฐ๊ฐ€ ๋‹ฌ๋ผ์งˆ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [`DetrImageProcessor`]์˜ [`DetrImageProcessor.pad`]๋ฅผ ์‚ฌ์šฉํ•˜๊ณ  ์‚ฌ์šฉ์ž ์ •์˜ `collate_fn`์„ ์ •์˜ํ•ด์„œ ๋ฐฐ์น˜ ์ด๋ฏธ์ง€๋ฅผ ์ฒ˜๋ฆฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> def collate_fn(batch): ... pixel_values = [item["pixel_values"] for item in batch] ... encoding = image_processor.pad(pixel_values, return_tensors="pt") ... labels = [item["labels"] for item in batch] ... batch = {} ... batch["pixel_values"] = encoding["pixel_values"] ... batch["pixel_mask"] = encoding["pixel_mask"] ... batch["labels"] = labels ... 
return batch ``` ## ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ[[multimodal]] ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ ์ž…๋ ฅ์ด ํ•„์š”ํ•œ ์ž‘์—…์˜ ๊ฒฝ์šฐ, ๋ชจ๋ธ์— ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์ค€๋น„ํ•˜๊ธฐ ์œ„ํ•œ [ํ”„๋กœ์„ธ์„œ](main_classes/processors)๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ํ”„๋กœ์„ธ์„œ๋Š” ํ† ํฌ๋‚˜์ด์ €์™€ ํŠน์„ฑ ์ถ”์ถœ๊ธฐ์™€ ๊ฐ™์€ ๋‘ ๊ฐ€์ง€ ์ฒ˜๋ฆฌ ๊ฐ์ฒด๋ฅผ ๊ฒฐํ•ฉํ•ฉ๋‹ˆ๋‹ค. [LJ Speech](https://huggingface.co/datasets/lj_speech) ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๊ฐ€์ ธ์™€์„œ ์ž๋™ ์Œ์„ฑ ์ธ์‹(ASR)์„ ์œ„ํ•œ ํ”„๋กœ์„ธ์„œ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ™•์ธํ•˜์„ธ์š”. (๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๊ฐ€์ ธ์˜ค๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ ๐Ÿค— [๋ฐ์ดํ„ฐ ์„ธํŠธ ํŠœํ† ๋ฆฌ์–ผ](https://huggingface.co/docs/datasets/load_hub)์—์„œ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.) ```py >>> from datasets import load_dataset >>> lj_speech = load_dataset("lj_speech", split="train") ``` ์ž๋™ ์Œ์„ฑ ์ธ์‹(ASR)์—์„œ๋Š” `audio`์™€ `text`์—๋งŒ ์ง‘์ค‘ํ•˜๋ฉด ๋˜๋ฏ€๋กœ, ๋‹ค๋ฅธ ์—ด๋“ค์€ ์ œ๊ฑฐํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> lj_speech = lj_speech.map(remove_columns=["file", "id", "normalized_text"]) ``` ์ด์ œ `audio`์™€ `text`์—ด์„ ์‚ดํŽด๋ณด์„ธ์š”: ```py >>> lj_speech[0]["audio"] {'array': array([-7.3242188e-04, -7.6293945e-04, -6.4086914e-04, ..., 7.3242188e-04, 2.1362305e-04, 6.1035156e-05], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/917ece08c95cf0c4115e45294e3cd0dee724a1165b7fc11798369308a465bd26/LJSpeech-1.1/wavs/LJ001-0001.wav', 'sampling_rate': 22050} >>> lj_speech[0]["text"] 'Printing, in the only sense with which we are at present concerned, differs from most if not from all the arts and crafts represented in the Exhibition' ``` ๊ธฐ์กด์— ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์—์„œ ์‚ฌ์šฉ๋œ ๋ฐ์ดํ„ฐ ์„ธํŠธ์™€ ์ƒˆ๋กœ์šด ์˜ค๋””์˜ค ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์ƒ˜ํ”Œ๋ง ๋ ˆ์ดํŠธ๋ฅผ ์ผ์น˜์‹œํ‚ค๊ธฐ ์œ„ํ•ด ์˜ค๋””์˜ค ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์ƒ˜ํ”Œ๋ง ๋ ˆ์ดํŠธ๋ฅผ [๋ฆฌ์ƒ˜ํ”Œ๋ง](preprocessing#audio)ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค! ```py >>> lj_speech = lj_speech.cast_column("audio", Audio(sampling_rate=16_000)) ``` [`AutoProcessor.from_pretrained`]๋กœ ํ”„๋กœ์„ธ์„œ๋ฅผ ๊ฐ€์ ธ์˜ค์„ธ์š”: ```py >>> from transformers import AutoProcessor >>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h") ``` 1. `array`์— ๋“ค์–ด ์žˆ๋Š” ์˜ค๋””์˜ค ๋ฐ์ดํ„ฐ๋ฅผ `input_values`๋กœ ๋ณ€ํ™˜ํ•˜๊ณ  `text`๋ฅผ ํ† ํฐํ™”ํ•˜์—ฌ `labels`๋กœ ๋ณ€ํ™˜ํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ๋ชจ๋ธ์˜ ์ž…๋ ฅ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```py >>> def prepare_dataset(example): ... audio = example["audio"] ... example.update(processor(audio=audio["array"], text=example["text"], sampling_rate=16000)) ... return example ``` 2. ์ƒ˜ํ”Œ์„ `prepare_dataset` ํ•จ์ˆ˜์— ์ ์šฉํ•˜์„ธ์š”: ```py >>> prepare_dataset(lj_speech[0]) ``` ์ด์ œ ํ”„๋กœ์„ธ์„œ๊ฐ€ `input_values`์™€ `labels`๋ฅผ ์ถ”๊ฐ€ํ•˜๊ณ , ์ƒ˜ํ”Œ๋ง ๋ ˆ์ดํŠธ๋„ ์˜ฌ๋ฐ”๋ฅด๊ฒŒ 16kHz๋กœ ๋‹ค์šด์ƒ˜ํ”Œ๋งํ–ˆ์Šต๋‹ˆ๋‹ค. ๋“œ๋””์–ด ์ฒ˜๋ฆฌ๋œ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋ชจ๋ธ์— ์ „๋‹ฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค!
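ํ•œ ์ƒ˜ํ”Œ์ด ์•„๋‹ˆ๋ผ ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๋ ค๋ฉด ๐Ÿค— Datasets์˜ `map`์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ๋ชจ๋ธ ์ž…๋ ฅ๋งŒ ๋‚จ๊ธฐ๋„๋ก ์›๋ณธ ์—ด์„ ์ œ๊ฑฐํ•˜๋Š” ๊ฐ„๋‹จํ•œ ์˜ˆ์‹œ์ด๋ฉฐ, ์ œ๊ฑฐํ•  ์—ด ์ด๋ฆ„์€ ๋ฐ์ดํ„ฐ ์„ธํŠธ ๊ตฌ์„ฑ์— ๋”ฐ๋ผ ๋‹ฌ๋ผ์งˆ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค:

```py
>>> # prepare_dataset์„ ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์ ์šฉํ•˜๊ณ , ๋” ์ด์ƒ ํ•„์š” ์—†๋Š” ์›๋ณธ ์—ด์€ ์ œ๊ฑฐํ•ฉ๋‹ˆ๋‹ค
>>> lj_speech = lj_speech.map(prepare_dataset, remove_columns=["audio", "text"])
```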
mavonic_private_repos/transformers/docs/source/ko/tasks/token_classification.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ํ† ํฐ ๋ถ„๋ฅ˜[[token-classification]] [[open-in-colab]] <Youtube id="wVHdVlPScxA"/> ํ† ํฐ ๋ถ„๋ฅ˜๋Š” ๋ฌธ์žฅ์˜ ๊ฐœ๋ณ„ ํ† ํฐ์— ๋ ˆ์ด๋ธ”์„ ํ• ๋‹นํ•ฉ๋‹ˆ๋‹ค. ๊ฐ€์žฅ ์ผ๋ฐ˜์ ์ธ ํ† ํฐ ๋ถ„๋ฅ˜ ์ž‘์—… ์ค‘ ํ•˜๋‚˜๋Š” ๊ฐœ์ฒด๋ช… ์ธ์‹(Named Entity Recognition, NER)์ž…๋‹ˆ๋‹ค. ๊ฐœ์ฒด๋ช… ์ธ์‹์€ ๋ฌธ์žฅ์—์„œ ์‚ฌ๋žŒ, ์œ„์น˜ ๋˜๋Š” ์กฐ์ง๊ณผ ๊ฐ™์€ ๊ฐ ๊ฐœ์ฒด์˜ ๋ ˆ์ด๋ธ”์„ ์ฐพ์œผ๋ ค๊ณ  ์‹œ๋„ํ•ฉ๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ ํ•™์Šตํ•  ๋‚ด์šฉ์€: 1. [WNUT 17](https://huggingface.co/datasets/wnut_17) ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ [DistilBERT](https://huggingface.co/distilbert/distilbert-base-uncased)๋ฅผ ํŒŒ์ธ ํŠœ๋‹ํ•˜์—ฌ ์ƒˆ๋กœ์šด ๊ฐœ์ฒด๋ฅผ ํƒ์ง€ํ•ฉ๋‹ˆ๋‹ค. 2. ์ถ”๋ก ์„ ์œ„ํ•ด ํŒŒ์ธ ํŠœ๋‹ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. <Tip> ์ด ์ž‘์—…๊ณผ ํ˜ธํ™˜๋˜๋Š” ๋ชจ๋“  ์•„ํ‚คํ…์ฒ˜์™€ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋ณด๋ ค๋ฉด [์ž‘์—… ํŽ˜์ด์ง€](https://huggingface.co/tasks/token-classification)๋ฅผ ํ™•์ธํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์—, ํ•„์š”ํ•œ ๋ชจ๋“  ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install transformers datasets evaluate seqeval ``` Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜์—ฌ ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๊ณต์œ ํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ๋ฉ”์‹œ์ง€๊ฐ€ ํ‘œ์‹œ๋˜๋ฉด, ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•˜์„ธ์š”: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## WNUT 17 ๋ฐ์ดํ„ฐ ์„ธํŠธ ๊ฐ€์ ธ์˜ค๊ธฐ[[load-wnut-17-dataset]] ๋จผ์ € ๐Ÿค— Datasets ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ WNUT 17 ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> from datasets import load_dataset >>> wnut = load_dataset("wnut_17") ``` ๋‹ค์Œ ์˜ˆ์ œ๋ฅผ ์‚ดํŽด๋ณด์„ธ์š”: ```py >>> wnut["train"][0] {'id': '0', 'ner_tags': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 7, 8, 8, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0], 'tokens': ['@paulwalk', 'It', "'s", 'the', 'view', 'from', 'where', 'I', "'m", 'living', 'for', 'two', 'weeks', '.', 'Empire', 'State', 'Building', '=', 'ESB', '.', 'Pretty', 'bad', 'storm', 'here', 'last', 'evening', '.'] } ``` `ner_tags`์˜ ๊ฐ ์ˆซ์ž๋Š” ๊ฐœ์ฒด๋ฅผ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. ์ˆซ์ž๋ฅผ ๋ ˆ์ด๋ธ” ์ด๋ฆ„์œผ๋กœ ๋ณ€ํ™˜ํ•˜์—ฌ ๊ฐœ์ฒด๊ฐ€ ๋ฌด์—‡์ธ์ง€ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค: ```py >>> label_list = wnut["train"].features[f"ner_tags"].feature.names >>> label_list [ "O", "B-corporation", "I-corporation", "B-creative-work", "I-creative-work", "B-group", "I-group", "B-location", "I-location", "B-person", "I-person", "B-product", "I-product", ] ``` ๊ฐ `ner_tag`์˜ ์•ž์— ๋ถ™์€ ๋ฌธ์ž๋Š” ๊ฐœ์ฒด์˜ ํ† ํฐ ์œ„์น˜๋ฅผ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค: - `B-`๋Š” ๊ฐœ์ฒด์˜ ์‹œ์ž‘์„ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. - `I-`๋Š” ํ† ํฐ์ด ๋™์ผํ•œ ๊ฐœ์ฒด ๋‚ด๋ถ€์— ํฌํ•จ๋˜์–ด ์žˆ์Œ์„ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค(์˜ˆ๋ฅผ ๋“ค์–ด `State` ํ† ํฐ์€ `Empire State Building`์™€ ๊ฐ™์€ ๊ฐœ์ฒด์˜ ์ผ๋ถ€์ž…๋‹ˆ๋‹ค). - `0`๋Š” ํ† ํฐ์ด ์–ด๋–ค ๊ฐœ์ฒด์—๋„ ํ•ด๋‹นํ•˜์ง€ ์•Š์Œ์„ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. 
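์•„๋ž˜๋Š” ์œ„ ์˜ˆ์ œ์˜ `ner_tags`๋ฅผ ๋ ˆ์ด๋ธ” ์ด๋ฆ„์œผ๋กœ ๋ฐ”๊ฟ”, ๊ฐœ์ฒด๋กœ ํƒœ๊น…๋œ ํ† ํฐ๋งŒ ๊ฐ„๋‹จํžˆ ํ™•์ธํ•ด๋ณด๋Š” ์˜ˆ์‹œ์ž…๋‹ˆ๋‹ค:

```py
>>> example = wnut["train"][0]
>>> [(token, label_list[tag]) for token, tag in zip(example["tokens"], example["ner_tags"]) if tag != 0]
[('Empire', 'B-location'), ('State', 'I-location'), ('Building', 'I-location'), ('ESB', 'B-location')]
```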
## ์ „์ฒ˜๋ฆฌ[[preprocess]] <Youtube id="iY2AZYdZAr0"/> ๋‹ค์Œ์œผ๋กœ `tokens` ํ•„๋“œ๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด DistilBERT ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased") ``` ์œ„์˜ ์˜ˆ์ œ `tokens` ํ•„๋“œ๋ฅผ ๋ณด๋ฉด ์ž…๋ ฅ์ด ์ด๋ฏธ ํ† ํฐํ™”๋œ ๊ฒƒ์ฒ˜๋Ÿผ ๋ณด์ž…๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์‹ค์ œ๋กœ ์ž…๋ ฅ์€ ์•„์ง ํ† ํฐํ™”๋˜์ง€ ์•Š์•˜์œผ๋ฏ€๋กœ ๋‹จ์–ด๋ฅผ ํ•˜์œ„ ๋‹จ์–ด๋กœ ํ† ํฐํ™”ํ•˜๊ธฐ ์œ„ํ•ด `is_split_into_words=True`๋ฅผ ์„ค์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ์ œ๋กœ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค: ```py >>> example = wnut["train"][0] >>> tokenized_input = tokenizer(example["tokens"], is_split_into_words=True) >>> tokens = tokenizer.convert_ids_to_tokens(tokenized_input["input_ids"]) >>> tokens ['[CLS]', '@', 'paul', '##walk', 'it', "'", 's', 'the', 'view', 'from', 'where', 'i', "'", 'm', 'living', 'for', 'two', 'weeks', '.', 'empire', 'state', 'building', '=', 'es', '##b', '.', 'pretty', 'bad', 'storm', 'here', 'last', 'evening', '.', '[SEP]'] ``` ๊ทธ๋Ÿฌ๋‚˜ ์ด๋กœ ์ธํ•ด `[CLS]`๊ณผ `[SEP]`๋ผ๋Š” ํŠน์ˆ˜ ํ† ํฐ์ด ์ถ”๊ฐ€๋˜๊ณ , ํ•˜์œ„ ๋‹จ์–ด ํ† ํฐํ™”๋กœ ์ธํ•ด ์ž…๋ ฅ๊ณผ ๋ ˆ์ด๋ธ” ๊ฐ„์— ๋ถˆ์ผ์น˜๊ฐ€ ๋ฐœ์ƒํ•ฉ๋‹ˆ๋‹ค. ํ•˜๋‚˜์˜ ๋ ˆ์ด๋ธ”์— ํ•ด๋‹นํ•˜๋Š” ๋‹จ์ผ ๋‹จ์–ด๋Š” ์ด์ œ ๋‘ ๊ฐœ์˜ ํ•˜์œ„ ๋‹จ์–ด๋กœ ๋ถ„ํ• ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ† ํฐ๊ณผ ๋ ˆ์ด๋ธ”์„ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์žฌ์ •๋ ฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: 1. [`word_ids`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.BatchEncoding.word_ids) ๋ฉ”์†Œ๋“œ๋กœ ๋ชจ๋“  ํ† ํฐ์„ ํ•ด๋‹น ๋‹จ์–ด์— ๋งคํ•‘ํ•ฉ๋‹ˆ๋‹ค. 2. ํŠน์ˆ˜ ํ† ํฐ `[CLS]`์™€ `[SEP]`์— `-100` ๋ ˆ์ด๋ธ”์„ ํ• ๋‹นํ•˜์—ฌ, PyTorch ์†์‹ค ํ•จ์ˆ˜๊ฐ€ ํ•ด๋‹น ํ† ํฐ์„ ๋ฌด์‹œํ•˜๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. 3. ์ฃผ์–ด์ง„ ๋‹จ์–ด์˜ ์ฒซ ๋ฒˆ์งธ ํ† ํฐ์—๋งŒ ๋ ˆ์ด๋ธ”์„ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค. ๊ฐ™์€ ๋‹จ์–ด์˜ ๋‹ค๋ฅธ ํ•˜์œ„ ํ† ํฐ์— `-100`์„ ํ• ๋‹นํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ์€ ํ† ํฐ๊ณผ ๋ ˆ์ด๋ธ”์„ ์žฌ์ •๋ ฌํ•˜๊ณ  DistilBERT์˜ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด๋ณด๋‹ค ๊ธธ์ง€ ์•Š๋„๋ก ์‹œํ€€์Šค๋ฅผ ์ž˜๋ผ๋‚ด๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“œ๋Š” ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค: ```py >>> def tokenize_and_align_labels(examples): ... tokenized_inputs = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True) ... labels = [] ... for i, label in enumerate(examples[f"ner_tags"]): ... word_ids = tokenized_inputs.word_ids(batch_index=i) # Map tokens to their respective word. ... previous_word_idx = None ... label_ids = [] ... for word_idx in word_ids: # Set the special tokens to -100. ... if word_idx is None: ... label_ids.append(-100) ... elif word_idx != previous_word_idx: # Only label the first token of a given word. ... label_ids.append(label[word_idx]) ... else: ... label_ids.append(-100) ... previous_word_idx = word_idx ... labels.append(label_ids) ... tokenized_inputs["labels"] = labels ... return tokenized_inputs ``` ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•˜๋ ค๋ฉด, ๐Ÿค— Datasets [`~datasets.Dataset.map`] ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. `batched=True`๋กœ ์„ค์ •ํ•˜์—ฌ ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์—ฌ๋Ÿฌ ์š”์†Œ๋ฅผ ํ•œ ๋ฒˆ์— ์ฒ˜๋ฆฌํ•˜๋ฉด `map` ํ•จ์ˆ˜์˜ ์†๋„๋ฅผ ๋†’์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> tokenized_wnut = wnut.map(tokenize_and_align_labels, batched=True) ``` ์ด์ œ [`DataCollatorWithPadding`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์˜ˆ์ œ ๋ฐฐ์น˜๋ฅผ ๋งŒ๋“ค์–ด๋ด…์‹œ๋‹ค. ๋ฐ์ดํ„ฐ ์„ธํŠธ ์ „์ฒด๋ฅผ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ํŒจ๋”ฉํ•˜๋Š” ๋Œ€์‹ , *๋™์  ํŒจ๋”ฉ*์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐฐ์น˜์—์„œ ๊ฐ€์žฅ ๊ธด ๊ธธ์ด์— ๋งž๊ฒŒ ๋ฌธ์žฅ์„ ํŒจ๋”ฉํ•˜๋Š” ๊ฒƒ์ด ํšจ์œจ์ ์ž…๋‹ˆ๋‹ค. 
<frameworkcontent> <pt> ```py >>> from transformers import DataCollatorForTokenClassification >>> data_collator = DataCollatorForTokenClassification(tokenizer=tokenizer) ``` </pt> <tf> ```py >>> from transformers import DataCollatorForTokenClassification >>> data_collator = DataCollatorForTokenClassification(tokenizer=tokenizer, return_tensors="tf") ``` </tf> </frameworkcontent> ## ํ‰๊ฐ€[[evaluation]] ํ›ˆ๋ จ ์ค‘ ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์„ ํ‰๊ฐ€ํ•˜๊ธฐ ์œ„ํ•ด ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ํฌํ•จํ•˜๋Š” ๊ฒƒ์ด ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. ๐Ÿค— [Evaluate](https://huggingface.co/docs/evaluate/index) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋น ๋ฅด๊ฒŒ ํ‰๊ฐ€ ๋ฐฉ๋ฒ•์„ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์ž‘์—…์—์„œ๋Š” [seqeval](https://huggingface.co/spaces/evaluate-metric/seqeval) ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. (ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ๊ฐ€์ ธ์˜ค๊ณ  ๊ณ„์‚ฐํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด์„œ๋Š” ๐Ÿค— Evaluate [๋น ๋ฅธ ๋‘˜๋Ÿฌ๋ณด๊ธฐ](https://huggingface.co/docs/evaluate/a_quick_tour)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”). Seqeval์€ ์‹ค์ œ๋กœ ์ •๋ฐ€๋„, ์žฌํ˜„๋ฅ , F1 ๋ฐ ์ •ํ™•๋„์™€ ๊ฐ™์€ ์—ฌ๋Ÿฌ ์ ์ˆ˜๋ฅผ ์‚ฐ์ถœํ•ฉ๋‹ˆ๋‹ค. ```py >>> import evaluate >>> seqeval = evaluate.load("seqeval") ``` ๋จผ์ € NER ๋ ˆ์ด๋ธ”์„ ๊ฐ€์ ธ์˜จ ๋‹ค์Œ, [`~evaluate.EvaluationModule.compute`]์— ์‹ค์ œ ์˜ˆ์ธก๊ณผ ์‹ค์ œ ๋ ˆ์ด๋ธ”์„ ์ „๋‹ฌํ•˜์—ฌ ์ ์ˆ˜๋ฅผ ๊ณ„์‚ฐํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค: ```py >>> import numpy as np >>> labels = [label_list[i] for i in example[f"ner_tags"]] >>> def compute_metrics(p): ... predictions, labels = p ... predictions = np.argmax(predictions, axis=2) ... true_predictions = [ ... [label_list[p] for (p, l) in zip(prediction, label) if l != -100] ... for prediction, label in zip(predictions, labels) ... ] ... true_labels = [ ... [label_list[l] for (p, l) in zip(prediction, label) if l != -100] ... for prediction, label in zip(predictions, labels) ... ] ... results = seqeval.compute(predictions=true_predictions, references=true_labels) ... return { ... "precision": results["overall_precision"], ... "recall": results["overall_recall"], ... "f1": results["overall_f1"], ... "accuracy": results["overall_accuracy"], ... } ``` ์ด์ œ `compute_metrics` ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์œผ๋ฉฐ, ํ›ˆ๋ จ์„ ์„ค์ •ํ•˜๋ฉด ์ด ํ•จ์ˆ˜๋กœ ๋˜๋Œ์•„์˜ฌ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ## ํ›ˆ๋ จ[[train]] ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๊ธฐ ์ „์—, `id2label`์™€ `label2id`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์˜ˆ์ƒ๋˜๋Š” id์™€ ๋ ˆ์ด๋ธ”์˜ ๋งต์„ ์ƒ์„ฑํ•˜์„ธ์š”: ```py >>> id2label = { ... 0: "O", ... 1: "B-corporation", ... 2: "I-corporation", ... 3: "B-creative-work", ... 4: "I-creative-work", ... 5: "B-group", ... 6: "I-group", ... 7: "B-location", ... 8: "I-location", ... 9: "B-person", ... 10: "I-person", ... 11: "B-product", ... 12: "I-product", ... } >>> label2id = { ... "O": 0, ... "B-corporation": 1, ... "I-corporation": 2, ... "B-creative-work": 3, ... "I-creative-work": 4, ... "B-group": 5, ... "I-group": 6, ... "B-location": 7, ... "I-location": 8, ... "B-person": 9, ... "I-person": 10, ... "B-product": 11, ... "I-product": 12, ... } ``` <frameworkcontent> <pt> <Tip> [`Trainer`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํŒŒ์ธ ํŠœ๋‹ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ์ต์ˆ™ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ, [์—ฌ๊ธฐ](../training#train-with-pytorch-trainer)์—์„œ ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ์„ ํ™•์ธํ•˜์„ธ์š”! </Tip> ์ด์ œ ๋ชจ๋ธ์„ ํ›ˆ๋ จ์‹œํ‚ฌ ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`AutoModelForSequenceClassification`]๋กœ DistilBERT๋ฅผ ๊ฐ€์ ธ์˜ค๊ณ  ์˜ˆ์ƒ๋˜๋Š” ๋ ˆ์ด๋ธ” ์ˆ˜์™€ ๋ ˆ์ด๋ธ” ๋งคํ•‘์„ ์ง€์ •ํ•˜์„ธ์š”: ```py >>> from transformers import AutoModelForTokenClassification, TrainingArguments, Trainer >>> model = AutoModelForTokenClassification.from_pretrained( ... 
"distilbert/distilbert-base-uncased", num_labels=13, id2label=id2label, label2id=label2id ... ) ``` ์ด์ œ ์„ธ ๋‹จ๊ณ„๋งŒ ๊ฑฐ์น˜๋ฉด ๋์ž…๋‹ˆ๋‹ค: 1. [`TrainingArguments`]์—์„œ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •์˜ํ•˜์„ธ์š”. `output_dir`๋Š” ๋ชจ๋ธ์„ ์ €์žฅํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•˜๋Š” ์œ ์ผํ•œ ๋งค๊ฐœ๋ณ€์ˆ˜์ž…๋‹ˆ๋‹ค. ์ด ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œํ•˜๊ธฐ ์œ„ํ•ด `push_to_hub=True`๋ฅผ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค(๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ธฐ ์œ„ํ•ด Hugging Face์— ๋กœ๊ทธ์ธํ•ด์•ผํ•ฉ๋‹ˆ๋‹ค.) ๊ฐ ์—ํญ์ด ๋๋‚  ๋•Œ๋งˆ๋‹ค, [`Trainer`]๋Š” seqeval ์ ์ˆ˜๋ฅผ ํ‰๊ฐ€ํ•˜๊ณ  ํ›ˆ๋ จ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ €์žฅํ•ฉ๋‹ˆ๋‹ค. 2. [`Trainer`]์— ํ›ˆ๋ จ ์ธ์ˆ˜์™€ ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ ์„ธํŠธ, ํ† ํฌ๋‚˜์ด์ €, ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ ๋ฐ `compute_metrics` ํ•จ์ˆ˜๋ฅผ ์ „๋‹ฌํ•˜์„ธ์š”. 3. [`~Trainer.train`]๋ฅผ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ํŒŒ์ธ ํŠœ๋‹ํ•˜์„ธ์š”. ```py >>> training_args = TrainingArguments( ... output_dir="my_awesome_wnut_model", ... learning_rate=2e-5, ... per_device_train_batch_size=16, ... per_device_eval_batch_size=16, ... num_train_epochs=2, ... weight_decay=0.01, ... eval_strategy="epoch", ... save_strategy="epoch", ... load_best_model_at_end=True, ... push_to_hub=True, ... ) >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=tokenized_wnut["train"], ... eval_dataset=tokenized_wnut["test"], ... tokenizer=tokenizer, ... data_collator=data_collator, ... compute_metrics=compute_metrics, ... ) >>> trainer.train() ``` ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด, [`~transformers.Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ๊ณต์œ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> trainer.push_to_hub() ``` </pt> <tf> <Tip> Keras๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํŒŒ์ธ ํŠœ๋‹ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ์ต์ˆ™ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ, [์—ฌ๊ธฐ](../training#train-a-tensorflow-model-with-keras)์˜ ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ์„ ํ™•์ธํ•˜์„ธ์š”! </Tip> TensorFlow์—์„œ ๋ชจ๋ธ์„ ํŒŒ์ธ ํŠœ๋‹ํ•˜๋ ค๋ฉด, ๋จผ์ € ์˜ตํ‹ฐ๋งˆ์ด์ € ํ•จ์ˆ˜์™€ ํ•™์Šต๋ฅ  ์Šค์ผ€์ฅด, ๊ทธ๋ฆฌ๊ณ  ์ผ๋ถ€ ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์„ค์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import create_optimizer >>> batch_size = 16 >>> num_train_epochs = 3 >>> num_train_steps = (len(tokenized_wnut["train"]) // batch_size) * num_train_epochs >>> optimizer, lr_schedule = create_optimizer( ... init_lr=2e-5, ... num_train_steps=num_train_steps, ... weight_decay_rate=0.01, ... num_warmup_steps=0, ... ) ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ [`TFAutoModelForSequenceClassification`]์„ ์‚ฌ์šฉํ•˜์—ฌ DistilBERT๋ฅผ ๊ฐ€์ ธ์˜ค๊ณ , ์˜ˆ์ƒ๋˜๋Š” ๋ ˆ์ด๋ธ” ์ˆ˜์™€ ๋ ˆ์ด๋ธ” ๋งคํ•‘์„ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import TFAutoModelForTokenClassification >>> model = TFAutoModelForTokenClassification.from_pretrained( ... "distilbert/distilbert-base-uncased", num_labels=13, id2label=id2label, label2id=label2id ... ) ``` [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ `tf.data.Dataset` ํ˜•์‹์œผ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> tf_train_set = model.prepare_tf_dataset( ... tokenized_wnut["train"], ... shuffle=True, ... batch_size=16, ... collate_fn=data_collator, ... ) >>> tf_validation_set = model.prepare_tf_dataset( ... tokenized_wnut["validation"], ... shuffle=False, ... batch_size=16, ... collate_fn=data_collator, ... 
) ``` [`compile`](https://keras.io/api/models/model_training_apis/#compile-method)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ›ˆ๋ จํ•  ๋ชจ๋ธ์„ ๊ตฌ์„ฑํ•ฉ๋‹ˆ๋‹ค: ```py >>> import tensorflow as tf >>> model.compile(optimizer=optimizer) ``` ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ์„ค์ •ํ•ด์•ผํ•  ๋งˆ์ง€๋ง‰ ๋‘ ๊ฐ€์ง€๋Š” ์˜ˆ์ธก์—์„œ seqeval ์ ์ˆ˜๋ฅผ ๊ณ„์‚ฐํ•˜๊ณ , ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œํ•  ๋ฐฉ๋ฒ•์„ ์ œ๊ณตํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋ชจ๋‘ [Keras callbacks](../main_classes/keras_callbacks)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ˆ˜ํ–‰๋ฉ๋‹ˆ๋‹ค. [`~transformers.KerasMetricCallback`]์— `compute_metrics` ํ•จ์ˆ˜๋ฅผ ์ „๋‹ฌํ•˜์„ธ์š”: ```py >>> from transformers.keras_callbacks import KerasMetricCallback >>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set) ``` [`~transformers.PushToHubCallback`]์—์„œ ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์—…๋กœ๋“œํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers.keras_callbacks import PushToHubCallback >>> push_to_hub_callback = PushToHubCallback( ... output_dir="my_awesome_wnut_model", ... tokenizer=tokenizer, ... ) ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์ฝœ๋ฐฑ์„ ํ•จ๊ป˜ ๋ฌถ์Šต๋‹ˆ๋‹ค: ```py >>> callbacks = [metric_callback, push_to_hub_callback] ``` ๋“œ๋””์–ด, ๋ชจ๋ธ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`fit`](https://keras.io/api/models/model_training_apis/#fit-method)์— ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ ์„ธํŠธ, ๊ฒ€์ฆ ๋ฐ์ดํ„ฐ ์„ธํŠธ, ์—ํญ์˜ ์ˆ˜ ๋ฐ ์ฝœ๋ฐฑ์„ ์ „๋‹ฌํ•˜์—ฌ ํŒŒ์ธ ํŠœ๋‹ํ•ฉ๋‹ˆ๋‹ค: ```py >>> model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=3, callbacks=callbacks) ``` ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด, ๋ชจ๋ธ์ด ์ž๋™์œผ๋กœ ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œ๋˜์–ด ๋ˆ„๊ตฌ๋‚˜ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! </tf> </frameworkcontent> <Tip> ํ† ํฐ ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•œ ๋ชจ๋ธ์„ ํŒŒ์ธ ํŠœ๋‹ํ•˜๋Š” ์ž์„ธํ•œ ์˜ˆ์ œ๋Š” ๋‹ค์Œ [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification.ipynb) ๋˜๋Š” [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. </Tip> ## ์ถ”๋ก [[inference]] ์ข‹์•„์š”, ์ด์ œ ๋ชจ๋ธ์„ ํŒŒ์ธ ํŠœ๋‹ํ–ˆ์œผ๋‹ˆ ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ์ถ”๋ก ์„ ์ˆ˜ํ–‰ํ•˜๊ณ ์ž ํ•˜๋Š” ํ…์ŠคํŠธ๋ฅผ ๊ฐ€์ ธ์™€๋ด…์‹œ๋‹ค: ```py >>> text = "The Golden State Warriors are an American professional basketball team based in San Francisco." ``` ํŒŒ์ธ ํŠœ๋‹๋œ ๋ชจ๋ธ๋กœ ์ถ”๋ก ์„ ์‹œ๋„ํ•˜๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`pipeline`]๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. 
๋ชจ๋ธ๋กœ NER์˜ `pipeline`์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ , ํ…์ŠคํŠธ๋ฅผ ์ „๋‹ฌํ•ด๋ณด์„ธ์š”: ```py >>> from transformers import pipeline >>> classifier = pipeline("ner", model="stevhliu/my_awesome_wnut_model") >>> classifier(text) [{'entity': 'B-location', 'score': 0.42658573, 'index': 2, 'word': 'golden', 'start': 4, 'end': 10}, {'entity': 'I-location', 'score': 0.35856336, 'index': 3, 'word': 'state', 'start': 11, 'end': 16}, {'entity': 'B-group', 'score': 0.3064001, 'index': 4, 'word': 'warriors', 'start': 17, 'end': 25}, {'entity': 'B-location', 'score': 0.65523505, 'index': 13, 'word': 'san', 'start': 80, 'end': 83}, {'entity': 'B-location', 'score': 0.4668663, 'index': 14, 'word': 'francisco', 'start': 84, 'end': 93}] ``` ์›ํ•œ๋‹ค๋ฉด, `pipeline`์˜ ๊ฒฐ๊ณผ๋ฅผ ์ˆ˜๋™์œผ๋กœ ๋ณต์ œํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: <frameworkcontent> <pt> ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•˜๊ณ  PyTorch ํ…์„œ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_wnut_model") >>> inputs = tokenizer(text, return_tensors="pt") ``` ์ž…๋ ฅ์„ ๋ชจ๋ธ์— ์ „๋‹ฌํ•˜๊ณ  `logits`์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForTokenClassification >>> model = AutoModelForTokenClassification.from_pretrained("stevhliu/my_awesome_wnut_model") >>> with torch.no_grad(): ... logits = model(**inputs).logits ``` ๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์„ ๊ฐ€์ง„ ํด๋ž˜์Šค๋ฅผ ๋ชจ๋ธ์˜ `id2label` ๋งคํ•‘์„ ์‚ฌ์šฉํ•˜์—ฌ ํ…์ŠคํŠธ ๋ ˆ์ด๋ธ”๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> predictions = torch.argmax(logits, dim=2) >>> predicted_token_class = [model.config.id2label[t.item()] for t in predictions[0]] >>> predicted_token_class ['O', 'O', 'B-location', 'I-location', 'B-group', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-location', 'B-location', 'O', 'O'] ``` </pt> <tf> ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•˜๊ณ  TensorFlow ํ…์„œ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_wnut_model") >>> inputs = tokenizer(text, return_tensors="tf") ``` ์ž…๋ ฅ๊ฐ’์„ ๋ชจ๋ธ์— ์ „๋‹ฌํ•˜๊ณ  `logits`์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import TFAutoModelForTokenClassification >>> model = TFAutoModelForTokenClassification.from_pretrained("stevhliu/my_awesome_wnut_model") >>> logits = model(**inputs).logits ``` ๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์„ ๊ฐ€์ง„ ํด๋ž˜์Šค๋ฅผ ๋ชจ๋ธ์˜ `id2label` ๋งคํ•‘์„ ์‚ฌ์šฉํ•˜์—ฌ ํ…์ŠคํŠธ ๋ ˆ์ด๋ธ”๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> predicted_token_class_ids = tf.math.argmax(logits, axis=-1) >>> predicted_token_class = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()] >>> predicted_token_class ['O', 'O', 'B-location', 'I-location', 'B-group', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-location', 'B-location', 'O', 'O'] ``` </tf> </frameworkcontent>
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง(Masked language modeling)[[masked-language-modeling]] [[open-in-colab]] <Youtube id="mqElG5QJWUg"/> ๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง์€ ์‹œํ€€์Šค์—์„œ ๋งˆ์Šคํ‚น๋œ ํ† ํฐ์„ ์˜ˆ์ธกํ•˜๋ฉฐ, ๋ชจ๋ธ์€ ์–‘๋ฐฉํ–ฅ์œผ๋กœ ํ† ํฐ์— ์•ก์„ธ์Šคํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ฆ‰, ๋ชจ๋ธ์€ ํ† ํฐ์˜ ์™ผ์ชฝ๊ณผ ์˜ค๋ฅธ์ชฝ ์–‘์ชฝ์—์„œ ์ ‘๊ทผํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง์€ ์ „์ฒด ์‹œํ€€์Šค์— ๋Œ€ํ•œ ๋ฌธ๋งฅ์  ์ดํ•ด๊ฐ€ ํ•„์š”ํ•œ ์ž‘์—…์— ์ ํ•ฉํ•˜๋ฉฐ, BERT๊ฐ€ ๊ทธ ์˜ˆ์— ํ•ด๋‹นํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฒˆ ๊ฐ€์ด๋“œ์—์„œ ๋‹ค๋ฃฐ ๋‚ด์šฉ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: 1. [ELI5](https://huggingface.co/datasets/eli5) ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ [r/askscience](https://www.reddit.com/r/askscience/) ๋ถ€๋ถ„์„ ์‚ฌ์šฉํ•ด [DistilRoBERTa](https://huggingface.co/distilbert/distilroberta-base) ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค. 2. ์ถ”๋ก  ์‹œ์— ์ง์ ‘ ๋ฏธ์„ธ ์กฐ์ •ํ•œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. <Tip> ์ด ์ž‘์—…๊ณผ ํ˜ธํ™˜๋˜๋Š” ๋ชจ๋“  ์•„ํ‚คํ…์ฒ˜์™€ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋ณด๋ ค๋ฉด [์ž‘์—… ํŽ˜์ด์ง€](https://huggingface.co/tasks/fill-mask)๋ฅผ ํ™•์ธํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ๋ชจ๋‘ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install transformers datasets evaluate ``` Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜์—ฌ ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ์ปค๋ฎค๋‹ˆํ‹ฐ์™€์˜ ๊ณต์œ ๋ฅผ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ๋ฉ”์‹œ์ง€๊ฐ€ ํ‘œ์‹œ๋˜๋ฉด(When prompted) ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•ฉ๋‹ˆ๋‹ค: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## ELI5 ๋ฐ์ดํ„ฐ ์„ธํŠธ ๊ฐ€์ ธ์˜ค๊ธฐ[[load-eli5-dataset]] ๋จผ์ € ๐Ÿค— Datasets ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ ELI5 ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ r/askscience ์ค‘ ์ผ๋ถ€๋งŒ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ ํ•™์Šต์— ๋” ๋งŽ์€ ์‹œ๊ฐ„์„ ํ• ์• ํ•˜๊ธฐ ์ „์— ๋ชจ๋“  ๊ฒƒ์ด ์ž‘๋™ํ•˜๋Š”์ง€ ์‹คํ—˜ํ•˜๊ณ  ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> from datasets import load_dataset >>> eli5 = load_dataset("eli5", split="train_asks[:5000]") ``` ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ `train_asks`๋ฅผ [`~datasets.Dataset.train_test_split`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ด ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ์™€ ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ๋กœ ๋ถ„ํ• ํ•ฉ๋‹ˆ๋‹ค: ```py >>> eli5 = eli5.train_test_split(test_size=0.2) ``` ๊ทธ๋ฆฌ๊ณ  ์•„๋ž˜ ์˜ˆ์‹œ๋ฅผ ์‚ดํŽด๋ณด์„ธ์š”: ```py >>> eli5["train"][0] {'answers': {'a_id': ['c3d1aib', 'c3d4lya'], 'score': [6, 3], 'text': ["The velocity needed to remain in orbit is equal to the square root of Newton's constant times the mass of earth divided by the distance from the center of the earth. I don't know the altitude of that specific mission, but they're usually around 300 km. That means he's going 7-8 km/s.\n\nIn space there are no other forces acting on either the shuttle or the guy, so they stay in the same position relative to each other. 
If he were to become unable to return to the ship, he would presumably run out of oxygen, or slowly fall into the atmosphere and burn up.", "Hope you don't mind me asking another question, but why aren't there any stars visible in this photo?"]}, 'answers_urls': {'url': []}, 'document': '', 'q_id': 'nyxfp', 'selftext': '_URL_0_\n\nThis was on the front page earlier and I have a few questions about it. Is it possible to calculate how fast the astronaut would be orbiting the earth? Also how does he stay close to the shuttle so that he can return safely, i.e is he orbiting at the same speed and can therefore stay next to it? And finally if his propulsion system failed, would he eventually re-enter the atmosphere and presumably die?', 'selftext_urls': {'url': ['http://apod.nasa.gov/apod/image/1201/freeflyer_nasa_3000.jpg']}, 'subreddit': 'askscience', 'title': 'Few questions about this space walk photograph.', 'title_urls': {'url': []}} ``` ๋งŽ์•„ ๋ณด์ผ ์ˆ˜ ์žˆ์ง€๋งŒ ์‹ค์ œ๋กœ๋Š” `text` ํ•„๋“œ์—๋งŒ ์ง‘์ค‘ํ•˜๋ฉด ๋ฉ๋‚˜๋‹ค. ์–ธ์–ด ๋ชจ๋ธ๋ง ์ž‘์—…์˜ ๋ฉ‹์ง„ ์ ์€ (๋น„์ง€๋„ ํ•™์Šต์œผ๋กœ) *๋‹ค์Œ ๋‹จ์–ด๊ฐ€ ๋ ˆ์ด๋ธ”*์ด๊ธฐ ๋•Œ๋ฌธ์— ๋ ˆ์ด๋ธ”์ด ๋”ฐ๋กœ ํ•„์š”ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ## ์ „์ฒ˜๋ฆฌ[[preprocess]] <Youtube id="8PmhEIXhBvI"/> ๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง์„ ์œ„ํ•ด, ๋‹ค์Œ ๋‹จ๊ณ„๋กœ DistilRoBERTa ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๊ฐ€์ ธ์™€์„œ `text` ํ•˜์œ„ ํ•„๋“œ๋ฅผ ์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilroberta-base") ``` ์œ„์˜ ์˜ˆ์ œ์—์„œ์™€ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ, `text` ํ•„๋“œ๋Š” `answers` ์•ˆ์— ์ค‘์ฒฉ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์ค‘์ฒฉ๋œ ๊ตฌ์กฐ์—์„œ [`flatten`](https://huggingface.co/docs/datasets/process#flatten) ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ `text` ํ•˜์œ„ ํ•„๋“œ๋ฅผ ์ถ”์ถœํ•ฉ๋‹ˆ๋‹ค: ```py >>> eli5 = eli5.flatten() >>> eli5["train"][0] {'answers.a_id': ['c3d1aib', 'c3d4lya'], 'answers.score': [6, 3], 'answers.text': ["The velocity needed to remain in orbit is equal to the square root of Newton's constant times the mass of earth divided by the distance from the center of the earth. I don't know the altitude of that specific mission, but they're usually around 300 km. That means he's going 7-8 km/s.\n\nIn space there are no other forces acting on either the shuttle or the guy, so they stay in the same position relative to each other. If he were to become unable to return to the ship, he would presumably run out of oxygen, or slowly fall into the atmosphere and burn up.", "Hope you don't mind me asking another question, but why aren't there any stars visible in this photo?"], 'answers_urls.url': [], 'document': '', 'q_id': 'nyxfp', 'selftext': '_URL_0_\n\nThis was on the front page earlier and I have a few questions about it. Is it possible to calculate how fast the astronaut would be orbiting the earth? Also how does he stay close to the shuttle so that he can return safely, i.e is he orbiting at the same speed and can therefore stay next to it? And finally if his propulsion system failed, would he eventually re-enter the atmosphere and presumably die?', 'selftext_urls.url': ['http://apod.nasa.gov/apod/image/1201/freeflyer_nasa_3000.jpg'], 'subreddit': 'askscience', 'title': 'Few questions about this space walk photograph.', 'title_urls.url': []} ``` ์ด์ œ ๊ฐ ํ•˜์œ„ ํ•„๋“œ๋Š” `answers` ์ ‘๋‘์‚ฌ(prefix)๋กœ ํ‘œ์‹œ๋œ ๋Œ€๋กœ ๋ณ„๋„์˜ ์—ด์ด ๋˜๊ณ , `text` ํ•„๋“œ๋Š” ์ด์ œ ๋ฆฌ์ŠคํŠธ๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. 
๊ฐ ๋ฌธ์žฅ์„ ๊ฐœ๋ณ„์ ์œผ๋กœ ํ† ํฐํ™”ํ•˜๋Š” ๋Œ€์‹  ๋ฆฌ์ŠคํŠธ๋ฅผ ๋ฌธ์ž์—ด๋กœ ๋ณ€ํ™˜ํ•˜์—ฌ ํ•œ๋ฒˆ์— ํ† ํฐํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์€ ๊ฐ ์˜ˆ์ œ์— ๋Œ€ํ•ด ๋ฌธ์ž์—ด๋กœ ์ด๋ฃจ์–ด์ง„ ๋ฆฌ์ŠคํŠธ๋ฅผ `join`ํ•˜๊ณ  ๊ฒฐ๊ณผ๋ฅผ ํ† ํฐํ™”ํ•˜๋Š” ์ฒซ ๋ฒˆ์งธ ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜์ž…๋‹ˆ๋‹ค: ```py >>> def preprocess_function(examples): ... return tokenizer([" ".join(x) for x in examples["answers.text"]]) ``` ์ด ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์ ์šฉํ•˜๊ธฐ ์œ„ํ•ด ๐Ÿค— Datasets [`~datasets.Dataset.map`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์—ฌ๋Ÿฌ ์š”์†Œ๋ฅผ ํ•œ ๋ฒˆ์— ์ฒ˜๋ฆฌํ•˜๋„๋ก `batched=True`๋ฅผ ์„ค์ •ํ•˜๊ณ  `num_proc`๋กœ ์ฒ˜๋ฆฌ ํšŸ์ˆ˜๋ฅผ ๋Š˜๋ฆฌ๋ฉด `map` ํ•จ์ˆ˜์˜ ์†๋„๋ฅผ ๋†’์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•„์š”ํ•˜์ง€ ์•Š์€ ์—ด์€ ์ œ๊ฑฐํ•ฉ๋‹ˆ๋‹ค: ```py >>> tokenized_eli5 = eli5.map( ... preprocess_function, ... batched=True, ... num_proc=4, ... remove_columns=eli5["train"].column_names, ... ) ``` ์ด ๋ฐ์ดํ„ฐ ์„ธํŠธ์—๋Š” ํ† ํฐ ์‹œํ€€์Šค๊ฐ€ ํฌํ•จ๋˜์–ด ์žˆ์ง€๋งŒ ์ด ์ค‘ ์ผ๋ถ€๋Š” ๋ชจ๋ธ์˜ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด๋ณด๋‹ค ๊น๋‹ˆ๋‹ค. ์ด์ œ ๋‘ ๋ฒˆ์งธ ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•ด - ๋ชจ๋“  ์‹œํ€€์Šค๋ฅผ ์—ฐ๊ฒฐํ•˜๊ณ  - ์—ฐ๊ฒฐ๋œ ์‹œํ€€์Šค๋ฅผ ์ •์˜ํ•œ `block_size` ๋ณด๋‹ค ๋” ์งง์€ ๋ฉ์–ด๋ฆฌ๋กœ ๋ถ„ํ• ํ•˜๋Š”๋ฐ, ์ด ๋ฉ์–ด๋ฆฌ๋Š” ๋ชจ๋ธ์˜ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด๋ณด๋‹ค ์งง๊ณ  GPU RAM์ด ์ˆ˜์šฉํ•  ์ˆ˜ ์žˆ๋Š” ๊ธธ์ด์—ฌ์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```py >>> block_size = 128 >>> def group_texts(examples): ... # Concatenate all texts. ... concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()} ... total_length = len(concatenated_examples[list(examples.keys())[0]]) ... # We drop the small remainder, we could add padding if the model supported it instead of this drop, you can ... # customize this part to your needs. ... if total_length >= block_size: ... total_length = (total_length // block_size) * block_size ... # Split by chunks of block_size. ... result = { ... k: [t[i : i + block_size] for i in range(0, total_length, block_size)] ... for k, t in concatenated_examples.items() ... } ... result["labels"] = result["input_ids"].copy() ... return result ``` ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์— `group_texts` ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•ฉ๋‹ˆ๋‹ค: ```py >>> lm_dataset = tokenized_eli5.map(group_texts, batched=True, num_proc=4) ``` ์ด์ œ [`DataCollatorForLanguageModeling`]์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ ์˜ˆ์ œ์˜ ๋ฐฐ์น˜๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ ์„ธํŠธ ์ „์ฒด๋ฅผ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ํŒจ๋”ฉํ•˜๋Š” ๊ฒƒ๋ณด๋‹ค collation ๋‹จ๊ณ„์—์„œ ๋งค ๋ฐฐ์น˜์•ˆ์—์„œ์˜ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ๋ฌธ์žฅ์„ *๋™์ ์œผ๋กœ ํŒจ๋”ฉ*ํ•˜๋Š” ๊ฒƒ์ด ๋” ํšจ์œจ์ ์ž…๋‹ˆ๋‹ค. 
<frameworkcontent>
<pt>
์‹œํ€€์Šค ๋ ํ† ํฐ์„ ํŒจ๋”ฉ ํ† ํฐ์œผ๋กœ ์‚ฌ์šฉํ•˜๊ณ  ๋ฐ์ดํ„ฐ๋ฅผ ๋ฐ˜๋ณตํ•  ๋•Œ๋งˆ๋‹ค ํ† ํฐ์„ ๋ฌด์ž‘์œ„๋กœ ๋งˆ์Šคํ‚นํ•˜๋„๋ก `mlm_probability`๋ฅผ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> from transformers import DataCollatorForLanguageModeling

>>> tokenizer.pad_token = tokenizer.eos_token
>>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
```
</pt>
<tf>
์‹œํ€€์Šค ๋ ํ† ํฐ์„ ํŒจ๋”ฉ ํ† ํฐ์œผ๋กœ ์‚ฌ์šฉํ•˜๊ณ  ๋ฐ์ดํ„ฐ๋ฅผ ๋ฐ˜๋ณตํ•  ๋•Œ๋งˆ๋‹ค ํ† ํฐ์„ ๋ฌด์ž‘์œ„๋กœ ๋งˆ์Šคํ‚นํ•˜๋„๋ก `mlm_probability`๋ฅผ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> from transformers import DataCollatorForLanguageModeling

>>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15, return_tensors="tf")
```
</tf>
</frameworkcontent>

## ํ›ˆ๋ จ[[train]]

<frameworkcontent>
<pt>
<Tip>

[`Trainer`]๋กœ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐ ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ [์—ฌ๊ธฐ](../training#train-with-pytorch-trainer)๋ฅผ ์‚ดํŽด๋ณด์„ธ์š”!

</Tip>

์ด์ œ ๋ชจ๋ธ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`AutoModelForMaskedLM`]๋ฅผ ์‚ฌ์šฉํ•ด DistilRoBERTa ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค:

```py
>>> from transformers import AutoModelForMaskedLM

>>> model = AutoModelForMaskedLM.from_pretrained("distilbert/distilroberta-base")
```

์ด์ œ ์„ธ ๋‹จ๊ณ„๊ฐ€ ๋‚จ์•˜์Šต๋‹ˆ๋‹ค:

1. [`TrainingArguments`]์˜ ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ ์ €์žฅ ์œ„์น˜๋ฅผ ์ง€์ •ํ•˜๋Š” `output_dir`์€ ์œ ์ผํ•œ ํ•„์ˆ˜ ํŒŒ๋ผ๋ฏธํ„ฐ์ž…๋‹ˆ๋‹ค. `push_to_hub=True`๋ฅผ ์„ค์ •ํ•˜์—ฌ ์ด ๋ชจ๋ธ์„ Hub์— ์—…๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค (๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๋ ค๋ฉด Hugging Face์— ๋กœ๊ทธ์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค).
2. ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ ์„ธํŠธ ๋ฐ ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ(collator)์™€ ํ•จ๊ป˜ ํ›ˆ๋ จ ์ธ์ˆ˜๋ฅผ [`Trainer`]์— ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค.
3. [`~Trainer.train`]์„ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค.

```py
>>> from transformers import TrainingArguments, Trainer

>>> training_args = TrainingArguments(
...     output_dir="my_awesome_eli5_mlm_model",
...     eval_strategy="epoch",
...     learning_rate=2e-5,
...     num_train_epochs=3,
...     weight_decay=0.01,
...     push_to_hub=True,
... )

>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=lm_dataset["train"],
...     eval_dataset=lm_dataset["test"],
...     data_collator=data_collator,
... )

>>> trainer.train()
```

ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด [`~transformers.Trainer.evaluate`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ(perplexity)๋ฅผ ๊ณ„์‚ฐํ•˜๊ณ  ๋ชจ๋ธ์„ ํ‰๊ฐ€ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> import math

>>> eval_results = trainer.evaluate()
>>> print(f"Perplexity: {math.exp(eval_results['eval_loss']):.2f}")
Perplexity: 8.76
```

๊ทธ๋ฆฌ๊ณ  [`~transformers.Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ด ๋‹ค๋ฅธ ์‚ฌ๋žŒ๋“ค์ด ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก, Hub๋กœ ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค.

```py
>>> trainer.push_to_hub()
```
</pt>
<tf>
<Tip>

Keras๋กœ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐ ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ [์—ฌ๊ธฐ](../training#train-a-tensorflow-model-with-keras)๋ฅผ ์‚ดํŽด๋ณด์„ธ์š”!
</Tip> TensorFlow๋กœ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” ์˜ตํ‹ฐ๋งˆ์ด์ €(optimizer) ํ•จ์ˆ˜ ์„ค์ •, ํ•™์Šต๋ฅ (learning rate) ์Šค์ผ€์ฅด๋ง, ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ ์„ค์ •๋ถ€ํ„ฐ ์‹œ์ž‘ํ•˜์„ธ์š”: ```py >>> from transformers import create_optimizer, AdamWeightDecay >>> optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01) ``` ๋‹ค์Œ์œผ๋กœ [`TFAutoModelForMaskedLM`]๋ฅผ ์‚ฌ์šฉํ•ด DistilRoBERTa ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> from transformers import TFAutoModelForMaskedLM >>> model = TFAutoModelForMaskedLM.from_pretrained("distilbert/distilroberta-base") ``` [`~transformers.TFPreTrainedModel.prepare_tf_dataset`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ `tf.data.Dataset` ํ˜•์‹์œผ๋กœ ๋ณ€ํ™˜ํ•˜์„ธ์š”: ```py >>> tf_train_set = model.prepare_tf_dataset( ... lm_dataset["train"], ... shuffle=True, ... batch_size=16, ... collate_fn=data_collator, ... ) >>> tf_test_set = model.prepare_tf_dataset( ... lm_dataset["test"], ... shuffle=False, ... batch_size=16, ... collate_fn=data_collator, ... ) ``` [`compile`](https://keras.io/api/models/model_training_apis/#compile-method) ๋ฉ”์†Œ๋“œ๋ฅผ ํ†ตํ•ด ๋ชจ๋ธ ํ›ˆ๋ จ์„ ๊ตฌ์„ฑํ•ฉ๋‹ˆ๋‹ค: ```py >>> import tensorflow as tf >>> model.compile(optimizer=optimizer) ``` ์ด๋Š” ์—…๋กœ๋“œํ•  ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €์˜ ์œ„์น˜๋ฅผ [`~transformers.PushToHubCallback`]์— ์ง€์ •ํ•˜์—ฌ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers.keras_callbacks import PushToHubCallback >>> callback = PushToHubCallback( ... output_dir="my_awesome_eli5_mlm_model", ... tokenizer=tokenizer, ... ) ``` ๋“œ๋””์–ด ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•  ๋•Œ ํ›ˆ๋ จ ๋ฐ ๊ฒ€์ฆ ๋ฐ์ดํ„ฐ ์„ธํŠธ, ์—ํฌํฌ ์ˆ˜, ์ฝœ๋ฐฑ์ด ํฌํ•จ๋œ [`fit`](https://keras.io/api/models/model_training_apis/#fit-method)์„ ํ˜ธ์ถœํ•ฉ๋‹ˆ๋‹ค: ```py >>> model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=[callback]) ``` ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด, ์ž๋™์œผ๋กœ Hub๋กœ ์—…๋กœ๋“œ๋˜์–ด ๋ˆ„๊ตฌ๋‚˜ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! </tf> </frameworkcontent> <Tip> ๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง์„ ์œ„ํ•ด ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ๋ณด๋‹ค ์‹ฌ์ธต์ ์ธ ์˜ˆ์ œ๋Š” [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb) ๋˜๋Š” [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb)์„ ์ฐธ์กฐํ•˜์„ธ์š”. </Tip> ## ์ถ”๋ก [[inference]] ์ง€๊ธˆ๊นŒ์ง€ ๋ชจ๋ธ ๋ฏธ์„ธ ์กฐ์ •์„ ์ž˜ ํ–ˆ์œผ๋‹ˆ, ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ๋ชจ๋ธ์ด ๋นˆ์นธ์„ ์ฑ„์šธ ํ…์ŠคํŠธ๋ฅผ ์ŠคํŽ˜์…œ ํ† ํฐ(special token)์ธ `<mask>` ํ† ํฐ์œผ๋กœ ํ‘œ์‹œํ•ฉ๋‹ˆ๋‹ค: ```py >>> text = "The Milky Way is a <mask> galaxy." ``` ์ถ”๋ก ์„ ์œ„ํ•ด ๋ฏธ์„ธ ์กฐ์ •ํ•œ ๋ชจ๋ธ์„ ํ…Œ์ŠคํŠธํ•˜๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`pipeline`]์—์„œ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. `fill-mask`ํƒœ์Šคํฌ๋กœ `pipeline`์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ  ํ…์ŠคํŠธ๋ฅผ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. 
`top_k` ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ˜ํ™˜ํ•˜๋Š” ์˜ˆ์ธก์˜ ์ˆ˜๋ฅผ ์ง€์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค:

```py
>>> from transformers import pipeline

>>> mask_filler = pipeline("fill-mask", "stevhliu/my_awesome_eli5_mlm_model")
>>> mask_filler(text, top_k=3)
[{'score': 0.5150994658470154, 'token': 21300, 'token_str': ' spiral', 'sequence': 'The Milky Way is a spiral galaxy.'},
 {'score': 0.07087188959121704, 'token': 2232, 'token_str': ' massive', 'sequence': 'The Milky Way is a massive galaxy.'},
 {'score': 0.06434620916843414, 'token': 650, 'token_str': ' small', 'sequence': 'The Milky Way is a small galaxy.'}]
```

<frameworkcontent>
<pt>
ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•˜๊ณ  `input_ids`๋ฅผ PyTorch ํ…์„œ ํ˜•ํƒœ๋กœ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ, `<mask>` ํ† ํฐ์˜ ์œ„์น˜๋ฅผ ์ง€์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> import torch
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_eli5_mlm_model")
>>> inputs = tokenizer(text, return_tensors="pt")
>>> mask_token_index = torch.where(inputs["input_ids"] == tokenizer.mask_token_id)[1]
```

๋ชจ๋ธ์— `inputs`๋ฅผ ์ž…๋ ฅํ•˜๊ณ , ๋งˆ์Šคํ‚น๋œ ํ† ํฐ์˜ `logits`๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> from transformers import AutoModelForMaskedLM

>>> model = AutoModelForMaskedLM.from_pretrained("stevhliu/my_awesome_eli5_mlm_model")
>>> logits = model(**inputs).logits
>>> mask_token_logits = logits[0, mask_token_index, :]
```

๊ทธ๋Ÿฐ ๋‹ค์Œ ๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์„ ๊ฐ€์ง„ ๋งˆ์Šคํฌ ํ† ํฐ 3๊ฐœ๋ฅผ ๋ฐ˜ํ™˜ํ•˜๊ณ , ์ถœ๋ ฅํ•ฉ๋‹ˆ๋‹ค:

```py
>>> top_3_tokens = torch.topk(mask_token_logits, 3, dim=1).indices[0].tolist()

>>> for token in top_3_tokens:
...     print(text.replace(tokenizer.mask_token, tokenizer.decode([token])))
The Milky Way is a spiral galaxy.
The Milky Way is a massive galaxy.
The Milky Way is a small galaxy.
```
</pt>
<tf>
ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•˜๊ณ  `input_ids`๋ฅผ TensorFlow ํ…์„œ ํ˜•ํƒœ๋กœ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ, `<mask>` ํ† ํฐ์˜ ์œ„์น˜๋ฅผ ์ง€์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> import tensorflow as tf
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_eli5_mlm_model")
>>> inputs = tokenizer(text, return_tensors="tf")
>>> mask_token_index = tf.where(inputs["input_ids"] == tokenizer.mask_token_id)[0, 1]
```

๋ชจ๋ธ์— `inputs`๋ฅผ ์ž…๋ ฅํ•˜๊ณ , ๋งˆ์Šคํ‚น๋œ ํ† ํฐ์˜ `logits`๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> from transformers import TFAutoModelForMaskedLM

>>> model = TFAutoModelForMaskedLM.from_pretrained("stevhliu/my_awesome_eli5_mlm_model")
>>> logits = model(**inputs).logits
>>> mask_token_logits = logits[0, mask_token_index, :]
```

๊ทธ๋Ÿฐ ๋‹ค์Œ ๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์„ ๊ฐ€์ง„ ๋งˆ์Šคํฌ ํ† ํฐ 3๊ฐœ๋ฅผ ๋ฐ˜ํ™˜ํ•˜๊ณ , ์ถœ๋ ฅํ•ฉ๋‹ˆ๋‹ค:

```py
>>> top_3_tokens = tf.math.top_k(mask_token_logits, 3).indices.numpy()

>>> for token in top_3_tokens:
...     print(text.replace(tokenizer.mask_token, tokenizer.decode([token])))
The Milky Way is a spiral galaxy.
The Milky Way is a massive galaxy.
The Milky Way is a small galaxy.
```
</tf>
</frameworkcontent>
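๊ฐ ํ›„๋ณด ํ† ํฐ์˜ ํ™•๋ฅ ๊ฐ’๊นŒ์ง€ ํ•จ๊ป˜ ๋ณด๊ณ  ์‹ถ๋‹ค๋ฉด, ๋กœ์ง“์— ์†Œํ”„ํŠธ๋งฅ์Šค๋ฅผ ์ ์šฉํ•œ ๋’ค ์ƒ์œ„ ํ† ํฐ์„ ๊ณ ๋ฅผ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ์œ„ PyTorch ์˜ˆ์ œ์—์„œ ๊ณ„์‚ฐํ•œ `mask_token_logits`๊ฐ€ ๊ทธ๋Œ€๋กœ ์žˆ๋‹ค๊ณ  ๊ฐ€์ •ํ•œ ๊ฐ„๋‹จํ•œ ์Šค์ผ€์น˜์ž…๋‹ˆ๋‹ค:

```py
>>> import torch

>>> # ๋กœ์ง“์„ ํ™•๋ฅ ๋กœ ๋ณ€ํ™˜ํ•œ ๋’ค ์ƒ์œ„ 3๊ฐœ ํ† ํฐ๊ณผ ์ ์ˆ˜๋ฅผ ํ•จ๊ป˜ ์ถœ๋ ฅํ•ฉ๋‹ˆ๋‹ค
>>> probs = torch.softmax(mask_token_logits, dim=-1)
>>> top_3 = torch.topk(probs, 3, dim=1)
>>> for score, token_id in zip(top_3.values[0].tolist(), top_3.indices[0].tolist()):
...     print(tokenizer.decode([token_id]), round(score, 3))
```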
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์˜์ƒ ๋ถ„๋ฅ˜ [[video-classification]] [[open-in-colab]] ์˜์ƒ ๋ถ„๋ฅ˜๋Š” ์˜์ƒ ์ „์ฒด์— ๋ ˆ์ด๋ธ” ๋˜๋Š” ํด๋ž˜์Šค๋ฅผ ์ง€์ •ํ•˜๋Š” ์ž‘์—…์ž…๋‹ˆ๋‹ค. ๊ฐ ์˜์ƒ์—๋Š” ํ•˜๋‚˜์˜ ํด๋ž˜์Šค๊ฐ€ ์žˆ์„ ๊ฒƒ์œผ๋กœ ์˜ˆ์ƒ๋ฉ๋‹ˆ๋‹ค. ์˜์ƒ ๋ถ„๋ฅ˜ ๋ชจ๋ธ์€ ์˜์ƒ์„ ์ž…๋ ฅ์œผ๋กœ ๋ฐ›์•„ ์–ด๋Š ํด๋ž˜์Šค์— ์†ํ•˜๋Š”์ง€์— ๋Œ€ํ•œ ์˜ˆ์ธก์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋ชจ๋ธ์€ ์˜์ƒ์ด ์–ด๋–ค ๋‚ด์šฉ์ธ์ง€ ๋ถ„๋ฅ˜ํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜์ƒ ๋ถ„๋ฅ˜์˜ ์‹ค์ œ ์‘์šฉ ์˜ˆ๋Š” ํ”ผํŠธ๋‹ˆ์Šค ์•ฑ์—์„œ ์œ ์šฉํ•œ ๋™์ž‘ / ์šด๋™ ์ธ์‹ ์„œ๋น„์Šค๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Š” ๋˜ํ•œ ์‹œ๊ฐ ์žฅ์• ์ธ์ด ์ด๋™ํ•  ๋•Œ ๋ณด์กฐํ•˜๋Š”๋ฐ ์‚ฌ์šฉ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ๋‹ค์Œ์„ ์ˆ˜ํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค: 1. [UCF101](https://www.crcv.ucf.edu/data/UCF101.php) ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ํ•˜์œ„ ์ง‘ํ•ฉ์„ ํ†ตํ•ด [VideoMAE](https://huggingface.co/docs/transformers/main/en/model_doc/videomae) ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ. 2. ๋ฏธ์„ธ ์กฐ์ •ํ•œ ๋ชจ๋ธ์„ ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๊ธฐ. <Tip> ์ด ์ž‘์—…๊ณผ ํ˜ธํ™˜๋˜๋Š” ๋ชจ๋“  ์•„ํ‚คํ…์ฒ˜์™€ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋ณด๋ ค๋ฉด [์ž‘์—… ํŽ˜์ด์ง€](https://huggingface.co/tasks/video-classification)๋ฅผ ํ™•์ธํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ชจ๋“  ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ์„ค์น˜๋˜์—ˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install -q pytorchvideo transformers evaluate ``` ์˜์ƒ์„ ์ฒ˜๋ฆฌํ•˜๊ณ  ์ค€๋น„ํ•˜๊ธฐ ์œ„ํ•ด [PyTorchVideo](https://pytorchvideo.org/)(์ดํ•˜ `pytorchvideo`)๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ๊ณต์œ ํ•  ์ˆ˜ ์žˆ๋„๋ก Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ํ”„๋กฌํ”„ํŠธ๊ฐ€ ๋‚˜ํƒ€๋‚˜๋ฉด ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•˜์„ธ์š”: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## UCF101 ๋ฐ์ดํ„ฐ์…‹ ๋ถˆ๋Ÿฌ์˜ค๊ธฐ [[load-ufc101-dataset]] [UCF-101](https://www.crcv.ucf.edu/data/UCF101.php) ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ํ•˜์œ„ ์ง‘ํ•ฉ(subset)์„ ๋ถˆ๋Ÿฌ์˜ค๋Š” ๊ฒƒ์œผ๋กœ ์‹œ์ž‘ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ํ•™์Šตํ•˜๋Š”๋ฐ ๋” ๋งŽ์€ ์‹œ๊ฐ„์„ ํ• ์• ํ•˜๊ธฐ ์ „์— ๋ฐ์ดํ„ฐ์˜ ํ•˜์œ„ ์ง‘ํ•ฉ์„ ๋ถˆ๋Ÿฌ์™€ ๋ชจ๋“  ๊ฒƒ์ด ์ž˜ ์ž‘๋™ํ•˜๋Š”์ง€ ์‹คํ—˜ํ•˜๊ณ  ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> from huggingface_hub import hf_hub_download >>> hf_dataset_identifier = "sayakpaul/ucf101-subset" >>> filename = "UCF101_subset.tar.gz" >>> file_path = hf_hub_download(repo_id=hf_dataset_identifier, filename=filename, repo_type="dataset") ``` ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ํ•˜์œ„ ์ง‘ํ•ฉ์ด ๋‹ค์šด๋กœ๋“œ ๋˜๋ฉด, ์••์ถ•๋œ ํŒŒ์ผ์˜ ์••์ถ•์„ ํ•ด์ œํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```py >>> import tarfile >>> with tarfile.open(file_path) as t: ... t.extractall(".") ``` ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ๊ตฌ์„ฑ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ```bash UCF101_subset/ train/ BandMarching/ video_1.mp4 video_2.mp4 ... Archery video_1.mp4 video_2.mp4 ... ... 
val/ BandMarching/ video_1.mp4 video_2.mp4 ... Archery video_1.mp4 video_2.mp4 ... ... test/ BandMarching/ video_1.mp4 video_2.mp4 ... Archery video_1.mp4 video_2.mp4 ... ... ``` ์ •๋ ฌ๋œ ์˜์ƒ์˜ ๊ฒฝ๋กœ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```bash ... 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g07_c04.avi', 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g07_c06.avi', 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g08_c01.avi', 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g09_c02.avi', 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g09_c06.avi' ... ``` ๋™์ผํ•œ ๊ทธ๋ฃน/์žฅ๋ฉด์— ์†ํ•˜๋Š” ์˜์ƒ ํด๋ฆฝ์€ ํŒŒ์ผ ๊ฒฝ๋กœ์—์„œ `g`๋กœ ํ‘œ์‹œ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค๋ฉด, `v_ApplyEyeMakeup_g07_c04.avi`์™€ `v_ApplyEyeMakeup_g07_c06.avi` ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๋‘˜์€ ๊ฐ™์€ ๊ทธ๋ฃน์ž…๋‹ˆ๋‹ค. ๊ฒ€์ฆ ๋ฐ ํ‰๊ฐ€ ๋ฐ์ดํ„ฐ ๋ถ„ํ• ์„ ํ•  ๋•Œ, [๋ฐ์ดํ„ฐ ๋ˆ„์ถœ(data leakage)](https://www.kaggle.com/code/alexisbcook/data-leakage)์„ ๋ฐฉ์ง€ํ•˜๊ธฐ ์œ„ํ•ด ๋™์ผํ•œ ๊ทธ๋ฃน / ์žฅ๋ฉด์˜ ์˜์ƒ ํด๋ฆฝ์„ ์‚ฌ์šฉํ•˜์ง€ ์•Š์•„์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ ์‚ฌ์šฉํ•˜๋Š” ํ•˜์œ„ ์ง‘ํ•ฉ์€ ์ด๋Ÿฌํ•œ ์ •๋ณด๋ฅผ ๊ณ ๋ คํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ ๋‹ค์Œ์œผ๋กœ, ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์กด์žฌํ•˜๋Š” ๋ผ๋ฒจ์„ ์ถ”์ถœํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ, ๋ชจ๋ธ์„ ์ดˆ๊ธฐํ™”ํ•  ๋•Œ ๋„์›€์ด ๋  ๋”•์…”๋„ˆ๋ฆฌ(dictionary data type)๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. * `label2id`: ํด๋ž˜์Šค ์ด๋ฆ„์„ ์ •์ˆ˜์— ๋งคํ•‘ํ•ฉ๋‹ˆ๋‹ค. * `id2label`: ์ •์ˆ˜๋ฅผ ํด๋ž˜์Šค ์ด๋ฆ„์— ๋งคํ•‘ํ•ฉ๋‹ˆ๋‹ค. ```py >>> class_labels = sorted({str(path).split("/")[2] for path in all_video_file_paths}) >>> label2id = {label: i for i, label in enumerate(class_labels)} >>> id2label = {i: label for label, i in label2id.items()} >>> print(f"Unique classes: {list(label2id.keys())}.") # Unique classes: ['ApplyEyeMakeup', 'ApplyLipstick', 'Archery', 'BabyCrawling', 'BalanceBeam', 'BandMarching', 'BaseballPitch', 'Basketball', 'BasketballDunk', 'BenchPress']. ``` ์ด ๋ฐ์ดํ„ฐ ์„ธํŠธ์—๋Š” ์ด 10๊ฐœ์˜ ๊ณ ์œ ํ•œ ํด๋ž˜์Šค๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ฐ ํด๋ž˜์Šค๋งˆ๋‹ค 30๊ฐœ์˜ ์˜์ƒ์ด ํ›ˆ๋ จ ์„ธํŠธ์— ์žˆ์Šต๋‹ˆ๋‹ค ## ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ ์œ„ํ•ด ๋ชจ๋ธ ๊ฐ€์ ธ์˜ค๊ธฐ [[load-a-model-to-fine-tune]] ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ์ฒดํฌํฌ์ธํŠธ์™€ ์ฒดํฌํฌ์ธํŠธ์— ์—ฐ๊ด€๋œ ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์˜์ƒ ๋ถ„๋ฅ˜ ๋ชจ๋ธ์„ ์ธ์Šคํ„ด์Šคํ™”ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์˜ ์ธ์ฝ”๋”์—๋Š” ๋ฏธ๋ฆฌ ํ•™์Šต๋œ ๋งค๊ฐœ๋ณ€์ˆ˜๊ฐ€ ์ œ๊ณต๋˜๋ฉฐ, ๋ถ„๋ฅ˜ ํ—ค๋“œ(๋ฐ์ดํ„ฐ๋ฅผ ๋ถ„๋ฅ˜ํ•˜๋Š” ๋งˆ์ง€๋ง‰ ๋ ˆ์ด์–ด)๋Š” ๋ฌด์ž‘์œ„๋กœ ์ดˆ๊ธฐํ™”๋ฉ๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์ „์ฒ˜๋ฆฌ ํŒŒ์ดํ”„๋ผ์ธ์„ ์ž‘์„ฑํ•  ๋•Œ๋Š” ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๊ฐ€ ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification >>> model_ckpt = "MCG-NJU/videomae-base" >>> image_processor = VideoMAEImageProcessor.from_pretrained(model_ckpt) >>> model = VideoMAEForVideoClassification.from_pretrained( ... model_ckpt, ... label2id=label2id, ... id2label=id2label, ... ignore_mismatched_sizes=True, # provide this in case you're planning to fine-tune an already fine-tuned checkpoint ... ) ``` ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ค๋Š” ๋™์•ˆ, ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๊ฒฝ๊ณ ๋ฅผ ๋งˆ์ฃผ์น  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash Some weights of the model checkpoint at MCG-NJU/videomae-base were not used when initializing VideoMAEForVideoClassification: [..., 'decoder.decoder_layers.1.attention.output.dense.bias', 'decoder.decoder_layers.2.attention.attention.key.weight'] - This IS expected if you are initializing VideoMAEForVideoClassification from the checkpoint of a model trained on another task or with another architecture (e.g. 
initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing VideoMAEForVideoClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of VideoMAEForVideoClassification were not initialized from the model checkpoint at MCG-NJU/videomae-base and are newly initialized: ['classifier.bias', 'classifier.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ``` ์œ„ ๊ฒฝ๊ณ ๋Š” ์šฐ๋ฆฌ๊ฐ€ ์ผ๋ถ€ ๊ฐ€์ค‘์น˜(์˜ˆ: `classifier` ์ธต์˜ ๊ฐ€์ค‘์น˜์™€ ํŽธํ–ฅ)๋ฅผ ๋ฒ„๋ฆฌ๊ณ  ์ƒˆ๋กœ์šด `classifier` ์ธต์˜ ๊ฐ€์ค‘์น˜์™€ ํŽธํ–ฅ์„ ๋ฌด์ž‘์œ„๋กœ ์ดˆ๊ธฐํ™”ํ•˜๊ณ  ์žˆ๋‹ค๋Š” ๊ฒƒ์„ ์•Œ๋ ค์ค๋‹ˆ๋‹ค. ์ด ๊ฒฝ์šฐ์—๋Š” ๋ฏธ๋ฆฌ ํ•™์Šต๋œ ๊ฐ€์ค‘์น˜๊ฐ€ ์—†๋Š” ์ƒˆ๋กœ์šด ํ—ค๋“œ๋ฅผ ์ถ”๊ฐ€ํ•˜๊ณ  ์žˆ์œผ๋ฏ€๋กœ, ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ๋ชจ๋ธ์„ ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๊ธฐ ์ „์— ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋ผ๊ณ  ๊ฒฝ๊ณ ๋ฅผ ๋ณด๋‚ด๋Š” ๊ฒƒ์€ ๋‹น์—ฐํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ์ด์ œ ์šฐ๋ฆฌ๋Š” ์ด ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•  ์˜ˆ์ •์ž…๋‹ˆ๋‹ค. **์ฐธ๊ณ ** ์ด [์ฒดํฌํฌ์ธํŠธ](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics)๋Š” ๋„๋ฉ”์ธ์ด ๋งŽ์ด ์ค‘์ฒฉ๋œ ์œ ์‚ฌํ•œ ๋‹ค์šด์ŠคํŠธ๋ฆผ ์ž‘์—…์— ๋Œ€ํ•ด ๋ฏธ์„ธ ์กฐ์ •ํ•˜์—ฌ ์–ป์€ ์ฒดํฌํฌ์ธํŠธ์ด๋ฏ€๋กœ ์ด ์ž‘์—…์—์„œ ๋” ๋‚˜์€ ์„ฑ๋Šฅ์„ ๋ณด์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `MCG-NJU/videomae-base-finetuned-kinetics` ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜์—ฌ ์–ป์€ [์ฒดํฌํฌ์ธํŠธ](https://huggingface.co/sayakpaul/videomae-base-finetuned-kinetics-finetuned-ucf101-subset)๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ## ํ›ˆ๋ จ์„ ์œ„ํ•œ ๋ฐ์ดํ„ฐ ์„ธํŠธ ์ค€๋น„ํ•˜๊ธฐ[[prepare-the-datasets-for-training]] ์˜์ƒ ์ „์ฒ˜๋ฆฌ๋ฅผ ์œ„ํ•ด [PyTorchVideo ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ](https://pytorchvideo.org/)๋ฅผ ํ™œ์šฉํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ํ•„์š”ํ•œ ์ข…์†์„ฑ์„ ๊ฐ€์ ธ์˜ค๋Š” ๊ฒƒ์œผ๋กœ ์‹œ์ž‘ํ•˜์„ธ์š”. ```py >>> import pytorchvideo.data >>> from pytorchvideo.transforms import ( ... ApplyTransformToKey, ... Normalize, ... RandomShortSideScale, ... RemoveKey, ... ShortSideScale, ... UniformTemporalSubsample, ... ) >>> from torchvision.transforms import ( ... Compose, ... Lambda, ... RandomCrop, ... RandomHorizontalFlip, ... Resize, ... ) ``` ํ•™์Šต ๋ฐ์ดํ„ฐ ์„ธํŠธ ๋ณ€ํ™˜์—๋Š” '๊ท ์ผํ•œ ์‹œ๊ฐ„ ์ƒ˜ํ”Œ๋ง(uniform temporal subsampling)', 'ํ”ฝ์…€ ์ •๊ทœํ™”(pixel normalization)', '๋žœ๋ค ์ž˜๋ผ๋‚ด๊ธฐ(random cropping)' ๋ฐ '๋žœ๋ค ์ˆ˜ํ‰ ๋’ค์ง‘๊ธฐ(random horizontal flipping)'์˜ ์กฐํ•ฉ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ๊ฒ€์ฆ ๋ฐ ํ‰๊ฐ€ ๋ฐ์ดํ„ฐ ์„ธํŠธ ๋ณ€ํ™˜์—๋Š” '๋žœ๋ค ์ž˜๋ผ๋‚ด๊ธฐ'์™€ '๋žœ๋ค ๋’ค์ง‘๊ธฐ'๋ฅผ ์ œ์™ธํ•œ ๋™์ผํ•œ ๋ณ€ํ™˜ ์ฒด์ธ์„ ์œ ์ง€ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋ณ€ํ™˜์— ๋Œ€ํ•ด ์ž์„ธํžˆ ์•Œ์•„๋ณด๋ ค๋ฉด [PyTorchVideo ๊ณต์‹ ๋ฌธ์„œ](https://pytorchvideo.org)๋ฅผ ํ™•์ธํ•˜์„ธ์š”. ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ๊ณผ ๊ด€๋ จ๋œ ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋‹ค์Œ ์ •๋ณด๋ฅผ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: * ์˜์ƒ ํ”„๋ ˆ์ž„ ํ”ฝ์…€์„ ์ •๊ทœํ™”ํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋˜๋Š” ์ด๋ฏธ์ง€ ํ‰๊ท ๊ณผ ํ‘œ์ค€ ํŽธ์ฐจ * ์˜์ƒ ํ”„๋ ˆ์ž„์ด ์กฐ์ •๋  ๊ณต๊ฐ„ ํ•ด์ƒ๋„ ๋จผ์ €, ๋ช‡ ๊ฐ€์ง€ ์ƒ์ˆ˜๋ฅผ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. ```py >>> mean = image_processor.image_mean >>> std = image_processor.image_std >>> if "shortest_edge" in image_processor.size: ... height = width = image_processor.size["shortest_edge"] >>> else: ... height = image_processor.size["height"] ... 
width = image_processor.size["width"] >>> resize_to = (height, width) >>> num_frames_to_sample = model.config.num_frames >>> sample_rate = 4 >>> fps = 30 >>> clip_duration = num_frames_to_sample * sample_rate / fps ``` ์ด์ œ ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ํŠนํ™”๋œ ์ „์ฒ˜๋ฆฌ(transform)๊ณผ ๋ฐ์ดํ„ฐ ์„ธํŠธ ์ž์ฒด๋ฅผ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. ๋จผ์ € ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋กœ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค: ```py >>> train_transform = Compose( ... [ ... ApplyTransformToKey( ... key="video", ... transform=Compose( ... [ ... UniformTemporalSubsample(num_frames_to_sample), ... Lambda(lambda x: x / 255.0), ... Normalize(mean, std), ... RandomShortSideScale(min_size=256, max_size=320), ... RandomCrop(resize_to), ... RandomHorizontalFlip(p=0.5), ... ] ... ), ... ), ... ] ... ) >>> train_dataset = pytorchvideo.data.Ucf101( ... data_path=os.path.join(dataset_root_path, "train"), ... clip_sampler=pytorchvideo.data.make_clip_sampler("random", clip_duration), ... decode_audio=False, ... transform=train_transform, ... ) ``` ๊ฐ™์€ ๋ฐฉ์‹์˜ ์ž‘์—… ํ๋ฆ„์„ ๊ฒ€์ฆ๊ณผ ํ‰๊ฐ€ ์„ธํŠธ์—๋„ ์ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> val_transform = Compose( ... [ ... ApplyTransformToKey( ... key="video", ... transform=Compose( ... [ ... UniformTemporalSubsample(num_frames_to_sample), ... Lambda(lambda x: x / 255.0), ... Normalize(mean, std), ... Resize(resize_to), ... ] ... ), ... ), ... ] ... ) >>> val_dataset = pytorchvideo.data.Ucf101( ... data_path=os.path.join(dataset_root_path, "val"), ... clip_sampler=pytorchvideo.data.make_clip_sampler("uniform", clip_duration), ... decode_audio=False, ... transform=val_transform, ... ) >>> test_dataset = pytorchvideo.data.Ucf101( ... data_path=os.path.join(dataset_root_path, "test"), ... clip_sampler=pytorchvideo.data.make_clip_sampler("uniform", clip_duration), ... decode_audio=False, ... transform=val_transform, ... ) ``` **์ฐธ๊ณ **: ์œ„์˜ ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ํŒŒ์ดํ”„๋ผ์ธ์€ [๊ณต์‹ ํŒŒ์ดํ† ์น˜ ์˜ˆ์ œ](https://pytorchvideo.org/docs/tutorial_classification#dataset)์—์„œ ๊ฐ€์ ธ์˜จ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์šฐ๋ฆฌ๋Š” UCF-101 ๋ฐ์ดํ„ฐ์…‹์— ๋งž๊ฒŒ [`pytorchvideo.data.Ucf101()`](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html#pytorchvideo.data.Ucf101) ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๋‚ด๋ถ€์ ์œผ๋กœ ์ด ํ•จ์ˆ˜๋Š” [`pytorchvideo.data.labeled_video_dataset.LabeledVideoDataset`](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html#pytorchvideo.data.LabeledVideoDataset) ๊ฐ์ฒด๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. `LabeledVideoDataset` ํด๋ž˜์Šค๋Š” PyTorchVideo ๋ฐ์ดํ„ฐ์…‹์—์„œ ๋ชจ๋“  ์˜์ƒ ๊ด€๋ จ ์ž‘์—…์˜ ๊ธฐ๋ณธ ํด๋ž˜์Šค์ž…๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ PyTorchVideo์—์„œ ๋ฏธ๋ฆฌ ์ œ๊ณตํ•˜์ง€ ์•Š๋Š” ์‚ฌ์šฉ์ž ์ง€์ • ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜๋ ค๋ฉด, ์ด ํด๋ž˜์Šค๋ฅผ ์ ์ ˆํ•˜๊ฒŒ ํ™•์žฅํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ๋” ์ž์„ธํ•œ ์‚ฌํ•ญ์ด ์•Œ๊ณ  ์‹ถ๋‹ค๋ฉด `data` API [๋ฌธ์„œ](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html) ๋ฅผ ์ฐธ๊ณ ํ•˜์„ธ์š”. ๋˜ํ•œ ์œ„์˜ ์˜ˆ์‹œ์™€ ์œ ์‚ฌํ•œ ๊ตฌ์กฐ๋ฅผ ๊ฐ–๋Š” ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜๊ณ  ์žˆ๋‹ค๋ฉด, `pytorchvideo.data.Ucf101()` ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๋ฐ ๋ฌธ์ œ๊ฐ€ ์—†์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์˜์ƒ์˜ ๊ฐœ์ˆ˜๋ฅผ ์•Œ๊ธฐ ์œ„ํ•ด `num_videos` ์ธ์ˆ˜์— ์ ‘๊ทผํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> print(train_dataset.num_videos, val_dataset.num_videos, test_dataset.num_videos) # (300, 30, 75) ``` ## ๋” ๋‚˜์€ ๋””๋ฒ„๊น…์„ ์œ„ํ•ด ์ „์ฒ˜๋ฆฌ ์˜์ƒ ์‹œ๊ฐํ™”ํ•˜๊ธฐ[[visualize-the-preprocessed-video-for-better-debugging]] ```py >>> import imageio >>> import numpy as np >>> from IPython.display import Image >>> def unnormalize_img(img): ... 
"""Un-normalizes the image pixels.""" ... img = (img * std) + mean ... img = (img * 255).astype("uint8") ... return img.clip(0, 255) >>> def create_gif(video_tensor, filename="sample.gif"): ... """Prepares a GIF from a video tensor. ... ... The video tensor is expected to have the following shape: ... (num_frames, num_channels, height, width). ... """ ... frames = [] ... for video_frame in video_tensor: ... frame_unnormalized = unnormalize_img(video_frame.permute(1, 2, 0).numpy()) ... frames.append(frame_unnormalized) ... kargs = {"duration": 0.25} ... imageio.mimsave(filename, frames, "GIF", **kargs) ... return filename >>> def display_gif(video_tensor, gif_name="sample.gif"): ... """Prepares and displays a GIF from a video tensor.""" ... video_tensor = video_tensor.permute(1, 0, 2, 3) ... gif_filename = create_gif(video_tensor, gif_name) ... return Image(filename=gif_filename) >>> sample_video = next(iter(train_dataset)) >>> video_tensor = sample_video["video"] >>> display_gif(video_tensor) ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/sample_gif.gif" alt="Person playing basketball"/> </div> ## ๋ชจ๋ธ ํ›ˆ๋ จํ•˜๊ธฐ[[train-the-model]] ๐Ÿค— Transformers์˜ [`Trainer`](https://huggingface.co/docs/transformers/main_classes/trainer)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํ›ˆ๋ จ์‹œ์ผœ๋ณด์„ธ์š”. `Trainer`๋ฅผ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๋ ค๋ฉด ํ›ˆ๋ จ ์„ค์ •๊ณผ ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ์ •์˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๊ฐ€์žฅ ์ค‘์š”ํ•œ ๊ฒƒ์€ [`TrainingArguments`](https://huggingface.co/transformers/main_classes/trainer.html#transformers.TrainingArguments)์ž…๋‹ˆ๋‹ค. ์ด ํด๋ž˜์Šค๋Š” ํ›ˆ๋ จ์„ ๊ตฌ์„ฑํ•˜๋Š” ๋ชจ๋“  ์†์„ฑ์„ ํฌํ•จํ•˜๋ฉฐ, ํ›ˆ๋ จ ์ค‘ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ €์žฅํ•  ์ถœ๋ ฅ ํด๋” ์ด๋ฆ„์„ ํ•„์š”๋กœ ํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ ๐Ÿค— Hub์˜ ๋ชจ๋ธ ์ €์žฅ์†Œ์˜ ๋ชจ๋“  ์ •๋ณด๋ฅผ ๋™๊ธฐํ™”ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋ฉ๋‹ˆ๋‹ค. ๋Œ€๋ถ€๋ถ„์˜ ํ›ˆ๋ จ ์ธ์ˆ˜๋Š” ๋”ฐ๋กœ ์„ค๋ช…ํ•  ํ•„์š”๋Š” ์—†์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์—ฌ๊ธฐ์—์„œ ์ค‘์š”ํ•œ ์ธ์ˆ˜๋Š” `remove_unused_columns=False` ์ž…๋‹ˆ๋‹ค. ์ด ์ธ์ž๋Š” ๋ชจ๋ธ์˜ ํ˜ธ์ถœ ํ•จ์ˆ˜์—์„œ ์‚ฌ์šฉ๋˜์ง€ ์•Š๋Š” ๋ชจ๋“  ์†์„ฑ ์—ด(columns)์„ ์‚ญ์ œํ•ฉ๋‹ˆ๋‹ค. ๊ธฐ๋ณธ๊ฐ’์€ ์ผ๋ฐ˜์ ์œผ๋กœ True์ž…๋‹ˆ๋‹ค. ์ด๋Š” ์‚ฌ์šฉ๋˜์ง€ ์•Š๋Š” ๊ธฐ๋Šฅ ์—ด์„ ์‚ญ์ œํ•˜๋Š” ๊ฒƒ์ด ์ด์ƒ์ ์ด๋ฉฐ, ์ž…๋ ฅ์„ ๋ชจ๋ธ์˜ ํ˜ธ์ถœ ํ•จ์ˆ˜๋กœ ํ’€๊ธฐ(unpack)๊ฐ€ ์‰ฌ์›Œ์ง€๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์ด ๊ฒฝ์šฐ์—๋Š” `pixel_values`(๋ชจ๋ธ์˜ ์ž…๋ ฅ์œผ๋กœ ํ•„์ˆ˜์ ์ธ ํ‚ค)๋ฅผ ์ƒ์„ฑํ•˜๊ธฐ ์œ„ํ•ด ์‚ฌ์šฉ๋˜์ง€ ์•Š๋Š” ๊ธฐ๋Šฅ('video'๊ฐ€ ํŠนํžˆ ๊ทธ๋ ‡์Šต๋‹ˆ๋‹ค)์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ remove_unused_columns์„ False๋กœ ์„ค์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import TrainingArguments, Trainer >>> model_name = model_ckpt.split("/")[-1] >>> new_model_name = f"{model_name}-finetuned-ucf101-subset" >>> num_epochs = 4 >>> args = TrainingArguments( ... new_model_name, ... remove_unused_columns=False, ... eval_strategy="epoch", ... save_strategy="epoch", ... learning_rate=5e-5, ... per_device_train_batch_size=batch_size, ... per_device_eval_batch_size=batch_size, ... warmup_ratio=0.1, ... logging_steps=10, ... load_best_model_at_end=True, ... metric_for_best_model="accuracy", ... push_to_hub=True, ... max_steps=(train_dataset.num_videos // batch_size) * num_epochs, ... ) ``` `pytorchvideo.data.Ucf101()` ํ•จ์ˆ˜๋กœ ๋ฐ˜ํ™˜๋˜๋Š” ๋ฐ์ดํ„ฐ ์„ธํŠธ๋Š” `__len__` ๋ฉ”์†Œ๋“œ๊ฐ€ ์ด์‹๋˜์–ด ์žˆ์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ, `TrainingArguments`๋ฅผ ์ธ์Šคํ„ด์Šคํ™”ํ•  ๋•Œ `max_steps`๋ฅผ ์ •์˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 
๋‹ค์Œ์œผ๋กœ, ํ‰๊ฐ€์ง€ํ‘œ๋ฅผ ๋ถˆ๋Ÿฌ์˜ค๊ณ , ์˜ˆ์ธก๊ฐ’์—์„œ ํ‰๊ฐ€์ง€ํ‘œ๋ฅผ ๊ณ„์‚ฐํ•  ํ•จ์ˆ˜๋ฅผ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. ํ•„์š”ํ•œ ์ „์ฒ˜๋ฆฌ ์ž‘์—…์€ ์˜ˆ์ธก๋œ ๋กœ์ง“(logits)์— argmax ๊ฐ’์„ ์ทจํ•˜๋Š” ๊ฒƒ๋ฟ์ž…๋‹ˆ๋‹ค: ```py import evaluate metric = evaluate.load("accuracy") def compute_metrics(eval_pred): predictions = np.argmax(eval_pred.predictions, axis=1) return metric.compute(predictions=predictions, references=eval_pred.label_ids) ``` **ํ‰๊ฐ€์— ๋Œ€ํ•œ ์ฐธ๊ณ ์‚ฌํ•ญ**: [VideoMAE ๋…ผ๋ฌธ](https://arxiv.org/abs/2203.12602)์—์„œ ์ €์ž๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์€ ํ‰๊ฐ€ ์ „๋žต์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ํ…Œ์ŠคํŠธ ์˜์ƒ์—์„œ ์—ฌ๋Ÿฌ ํด๋ฆฝ์„ ์„ ํƒํ•˜๊ณ  ๊ทธ ํด๋ฆฝ์— ๋‹ค์–‘ํ•œ ํฌ๋กญ์„ ์ ์šฉํ•˜์—ฌ ์ง‘๊ณ„ ์ ์ˆ˜๋ฅผ ๋ณด๊ณ ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ด๋ฒˆ ํŠœํ† ๋ฆฌ์–ผ์—์„œ๋Š” ๊ฐ„๋‹จํ•จ๊ณผ ๊ฐ„๊ฒฐํ•จ์„ ์œ„ํ•ด ํ•ด๋‹น ์ „๋žต์„ ๊ณ ๋ คํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ, ์˜ˆ์ œ๋ฅผ ๋ฌถ์–ด์„œ ๋ฐฐ์น˜๋ฅผ ํ˜•์„ฑํ•˜๋Š” `collate_fn`์„ ์ •์˜ํ•ด์•ผํ•ฉ๋‹ˆ๋‹ค. ๊ฐ ๋ฐฐ์น˜๋Š” `pixel_values`์™€ `labels`๋ผ๋Š” 2๊ฐœ์˜ ํ‚ค๋กœ ๊ตฌ์„ฑ๋ฉ๋‹ˆ๋‹ค. ```py >>> def collate_fn(examples): ... # permute to (num_frames, num_channels, height, width) ... pixel_values = torch.stack( ... [example["video"].permute(1, 0, 2, 3) for example in examples] ... ) ... labels = torch.tensor([example["label"] for example in examples]) ... return {"pixel_values": pixel_values, "labels": labels} ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์ด ๋ชจ๋“  ๊ฒƒ์„ ๋ฐ์ดํ„ฐ ์„ธํŠธ์™€ ํ•จ๊ป˜ `Trainer`์— ์ „๋‹ฌํ•˜๊ธฐ๋งŒ ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค: ```py >>> trainer = Trainer( ... model, ... args, ... train_dataset=train_dataset, ... eval_dataset=val_dataset, ... tokenizer=image_processor, ... compute_metrics=compute_metrics, ... data_collator=collate_fn, ... ) ``` ๋ฐ์ดํ„ฐ๋ฅผ ์ด๋ฏธ ์ฒ˜๋ฆฌํ–ˆ๋Š”๋ฐ๋„ ๋ถˆ๊ตฌํ•˜๊ณ  `image_processor`๋ฅผ ํ† ํฌ๋‚˜์ด์ € ์ธ์ˆ˜๋กœ ๋„ฃ์€ ์ด์œ ๋Š” JSON์œผ๋กœ ์ €์žฅ๋˜๋Š” ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ ๊ตฌ์„ฑ ํŒŒ์ผ์ด Hub์˜ ์ €์žฅ์†Œ์— ์—…๋กœ๋“œ๋˜๋„๋ก ํ•˜๊ธฐ ์œ„ํ•จ์ž…๋‹ˆ๋‹ค. `train` ๋ฉ”์†Œ๋“œ๋ฅผ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜์„ธ์š”: ```py >>> train_results = trainer.train() ``` ํ•™์Šต์ด ์™„๋ฃŒ๋˜๋ฉด, ๋ชจ๋ธ์„ [`~transformers.Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ—ˆ๋ธŒ์— ๊ณต์œ ํ•˜์—ฌ ๋ˆ„๊ตฌ๋‚˜ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•ฉ๋‹ˆ๋‹ค: ```py >>> trainer.push_to_hub() ``` ## ์ถ”๋ก ํ•˜๊ธฐ[[inference]] ์ข‹์Šต๋‹ˆ๋‹ค. ์ด์ œ ๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ์„ ์ถ”๋ก ํ•˜๋Š” ๋ฐ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์˜์ƒ์„ ๋ถˆ๋Ÿฌ์˜ค์„ธ์š”: ```py >>> sample_test_video = next(iter(test_dataset)) ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/sample_gif_two.gif" alt="Teams playing basketball"/> </div> ๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ์„ ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`pipeline`](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.VideoClassificationPipeline)์—์„œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. 
๋ชจ๋ธ๋กœ ์˜์ƒ ๋ถ„๋ฅ˜๋ฅผ ํ•˜๊ธฐ ์œ„ํ•ด `pipeline`์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ  ์˜์ƒ์„ ์ „๋‹ฌํ•˜์„ธ์š”: ```py >>> from transformers import pipeline >>> video_cls = pipeline(model="my_awesome_video_cls_model") >>> video_cls("https://huggingface.co/datasets/sayakpaul/ucf101-subset/resolve/main/v_BasketballDunk_g14_c06.avi") [{'score': 0.9272987842559814, 'label': 'BasketballDunk'}, {'score': 0.017777055501937866, 'label': 'BabyCrawling'}, {'score': 0.01663011871278286, 'label': 'BalanceBeam'}, {'score': 0.009560945443809032, 'label': 'BandMarching'}, {'score': 0.0068979403004050255, 'label': 'BaseballPitch'}] ``` ๋งŒ์•ฝ ์›ํ•œ๋‹ค๋ฉด ์ˆ˜๋™์œผ๋กœ `pipeline`์˜ ๊ฒฐ๊ณผ๋ฅผ ์žฌํ˜„ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> def run_inference(model, video): ... # (num_frames, num_channels, height, width) ... perumuted_sample_test_video = video.permute(1, 0, 2, 3) ... inputs = { ... "pixel_values": perumuted_sample_test_video.unsqueeze(0), ... "labels": torch.tensor( ... [sample_test_video["label"]] ... ), # this can be skipped if you don't have labels available. ... } ... device = torch.device("cuda" if torch.cuda.is_available() else "cpu") ... inputs = {k: v.to(device) for k, v in inputs.items()} ... model = model.to(device) ... # forward pass ... with torch.no_grad(): ... outputs = model(**inputs) ... logits = outputs.logits ... return logits ``` ๋ชจ๋ธ์— ์ž…๋ ฅ๊ฐ’์„ ๋„ฃ๊ณ  `logits`์„ ๋ฐ˜ํ™˜๋ฐ›์œผ์„ธ์š”: ```py >>> logits = run_inference(trained_model, sample_test_video["video"]) ``` `logits`์„ ๋””์ฝ”๋”ฉํ•˜๋ฉด, ์šฐ๋ฆฌ๋Š” ๋‹ค์Œ ๊ฒฐ๊ณผ๋ฅผ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> predicted_class_idx = logits.argmax(-1).item() >>> print("Predicted class:", model.config.id2label[predicted_class_idx]) # Predicted class: BasketballDunk ```
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์งˆ์˜ ์‘๋‹ต(Question Answering)[[question-answering]] [[open-in-colab]] <Youtube id="ajPx5LwJD-I"/> ์งˆ์˜ ์‘๋‹ต ํƒœ์Šคํฌ๋Š” ์ฃผ์–ด์ง„ ์งˆ๋ฌธ์— ๋Œ€ํ•œ ๋‹ต๋ณ€์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. Alexa, Siri ๋˜๋Š” Google๊ณผ ๊ฐ™์€ ๊ฐ€์ƒ ๋น„์„œ์—๊ฒŒ ๋‚ ์”จ๊ฐ€ ์–ด๋–ค์ง€ ๋ฌผ์–ด๋ณธ ์ ์ด ์žˆ๋‹ค๋ฉด ์งˆ์˜ ์‘๋‹ต ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ด๋ณธ ์ ์ด ์žˆ์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์งˆ์˜ ์‘๋‹ต ํƒœ์Šคํฌ์—๋Š” ์ผ๋ฐ˜์ ์œผ๋กœ ๋‘ ๊ฐ€์ง€ ์œ ํ˜•์ด ์žˆ์Šต๋‹ˆ๋‹ค. - ์ถ”์ถœ์ (Extractive) ์งˆ์˜ ์‘๋‹ต: ์ฃผ์–ด์ง„ ๋ฌธ๋งฅ์—์„œ ๋‹ต๋ณ€์„ ์ถ”์ถœํ•ฉ๋‹ˆ๋‹ค. - ์ƒ์„ฑ์ (Abstractive) ์งˆ์˜ ์‘๋‹ต: ๋ฌธ๋งฅ์—์„œ ์งˆ๋ฌธ์— ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ๋‹ตํ•˜๋Š” ๋‹ต๋ณ€์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋ฐฉ๋ฒ•๋“ค์„ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค. 1. ์ถ”์ถœ์  ์งˆ์˜ ์‘๋‹ต์„ ํ•˜๊ธฐ ์œ„ํ•ด [SQuAD](https://huggingface.co/datasets/squad) ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ [DistilBERT](https://huggingface.co/distilbert/distilbert-base-uncased) ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ 2. ์ถ”๋ก ์— ๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ ์‚ฌ์šฉํ•˜๊ธฐ <Tip> ์ด ์ž‘์—…๊ณผ ํ˜ธํ™˜๋˜๋Š” ๋ชจ๋“  ์•„ํ‚คํ…์ฒ˜์™€ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋ณด๋ ค๋ฉด [์ž‘์—… ํŽ˜์ด์ง€](https://huggingface.co/tasks/question-answering)๋ฅผ ํ™•์ธํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์—, ํ•„์š”ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ๋ชจ๋‘ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install transformers datasets evaluate ``` ์—ฌ๋Ÿฌ๋ถ„์˜ ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๊ณต์œ ํ•  ์ˆ˜ ์žˆ๋„๋ก Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ๋ฉ”์‹œ์ง€๊ฐ€ ํ‘œ์‹œ๋˜๋ฉด ํ† ํฐ์„ ์ž…๋ ฅํ•ด์„œ ๋กœ๊ทธ์ธํ•ฉ๋‹ˆ๋‹ค: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## SQuAD ๋ฐ์ดํ„ฐ ์„ธํŠธ ๊ฐ€์ ธ์˜ค๊ธฐ[[load-squad-dataset]] ๋จผ์ € ๐Ÿค— Datasets ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ SQuAD ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์ผ๋ถ€๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋กœ ํ›ˆ๋ จํ•˜๋ฉฐ ๋” ๋งŽ์€ ์‹œ๊ฐ„์„ ํ• ์• ํ•˜๊ธฐ ์ „์— ๋ชจ๋“  ๊ฒƒ์ด ์ž˜ ์ž‘๋™ํ•˜๋Š”์ง€ ์‹คํ—˜ํ•˜๊ณ  ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> from datasets import load_dataset >>> squad = load_dataset("squad", split="train[:5000]") ``` ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ๋ถ„ํ• ๋œ `train`์„ [`~datasets.Dataset.train_test_split`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ด ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ ์„ธํŠธ์™€ ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋กœ ๋‚˜๋ˆ„์–ด์ค๋‹ˆ๋‹ค: ```py >>> squad = squad.train_test_split(test_size=0.2) ``` ๊ทธ๋ฆฌ๊ณ ๋‚˜์„œ ์˜ˆ์‹œ๋กœ ๋ฐ์ดํ„ฐ๋ฅผ ํ•˜๋‚˜ ์‚ดํŽด๋ด…๋‹ˆ๋‹ค: ```py >>> squad["train"][0] {'answers': {'answer_start': [515], 'text': ['Saint Bernadette Soubirous']}, 'context': 'Architecturally, the school has a Catholic character. Atop the Main Building\'s gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend "Venite Ad Me Omnes". Next to the Main Building is the Basilica of the Sacred Heart. 
Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary.', 'id': '5733be284776f41900661182', 'question': 'To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?', 'title': 'University_of_Notre_Dame' } ``` ์ด ์ค‘์—์„œ ๋ช‡ ๊ฐ€์ง€ ์ค‘์š”ํ•œ ํ•ญ๋ชฉ์ด ์žˆ์Šต๋‹ˆ๋‹ค: - `answers`: ๋‹ต์•ˆ ํ† ํฐ์˜ ์‹œ์ž‘ ์œ„์น˜์™€ ๋‹ต์•ˆ ํ…์ŠคํŠธ - `context`: ๋ชจ๋ธ์ด ๋‹ต์„ ์ถ”์ถœํ•˜๋Š”๋ฐ ํ•„์š”ํ•œ ๋ฐฐ๊ฒฝ ์ง€์‹ - `question`: ๋ชจ๋ธ์ด ๋‹ตํ•ด์•ผ ํ•˜๋Š” ์งˆ๋ฌธ ## ์ „์ฒ˜๋ฆฌ[[preprocess]] <Youtube id="qgaM0weJHpA"/> ๋‹ค์Œ ๋‹จ๊ณ„์—์„œ๋Š” `question` ๋ฐ `context` ํ•ญ๋ชฉ์„ ์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด DistilBERT ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased") ``` ์งˆ์˜ ์‘๋‹ต ํƒœ์Šคํฌ์™€ ๊ด€๋ จํ•ด์„œ ํŠนํžˆ ์œ ์˜ํ•ด์•ผํ•  ๋ช‡ ๊ฐ€์ง€ ์ „์ฒ˜๋ฆฌ ๋‹จ๊ณ„๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค: 1. ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์ผ๋ถ€ ์˜ˆ์ œ์—๋Š” ๋ชจ๋ธ์˜ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด๋ฅผ ์ดˆ๊ณผํ•˜๋Š” ๋งค์šฐ ๊ธด `context`๊ฐ€ ์žˆ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ธด ์‹œํ€€์Šค๋ฅผ ๋‹ค๋ฃจ๊ธฐ ์œ„ํ•ด์„œ๋Š”, `truncation="only_second"`๋กœ ์„ค์ •ํ•ด `context`๋งŒ ์ž˜๋ผ๋‚ด๋ฉด ๋ฉ๋‹ˆ๋‹ค. 2. ๊ทธ ๋‹ค์Œ, `return_offset_mapping=True`๋กœ ์„ค์ •ํ•ด ๋‹ต๋ณ€์˜ ์‹œ์ž‘๊ณผ ์ข…๋ฃŒ ์œ„์น˜๋ฅผ ์›๋ž˜์˜ `context`์— ๋งคํ•‘ํ•ฉ๋‹ˆ๋‹ค. 3. ๋งคํ•‘์„ ์™„๋ฃŒํ•˜๋ฉด, ์ด์ œ ๋‹ต๋ณ€์—์„œ ์‹œ์ž‘ ํ† ํฐ๊ณผ ์ข…๋ฃŒ ํ† ํฐ์„ ์ฐพ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜คํ”„์…‹์˜ ์–ด๋Š ๋ถ€๋ถ„์ด `question`๊ณผ `context`์— ํ•ด๋‹นํ•˜๋Š”์ง€ ์ฐพ์„ ์ˆ˜ ์žˆ๋„๋ก [`~tokenizers.Encoding.sequence_ids`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. ๋‹ค์Œ์€ `answer`์˜ ์‹œ์ž‘ ํ† ํฐ๊ณผ ์ข…๋ฃŒ ํ† ํฐ์„ ์ž˜๋ผ๋‚ด์„œ `context`์— ๋งคํ•‘ํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“œ๋Š” ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค: ```py >>> def preprocess_function(examples): ... questions = [q.strip() for q in examples["question"]] ... inputs = tokenizer( ... questions, ... examples["context"], ... max_length=384, ... truncation="only_second", ... return_offsets_mapping=True, ... padding="max_length", ... ) ... offset_mapping = inputs.pop("offset_mapping") ... answers = examples["answers"] ... start_positions = [] ... end_positions = [] ... for i, offset in enumerate(offset_mapping): ... answer = answers[i] ... start_char = answer["answer_start"][0] ... end_char = answer["answer_start"][0] + len(answer["text"][0]) ... sequence_ids = inputs.sequence_ids(i) ... # Find the start and end of the context ... idx = 0 ... while sequence_ids[idx] != 1: ... idx += 1 ... context_start = idx ... while sequence_ids[idx] == 1: ... idx += 1 ... context_end = idx - 1 ... # If the answer is not fully inside the context, label it (0, 0) ... if offset[context_start][0] > end_char or offset[context_end][1] < start_char: ... start_positions.append(0) ... end_positions.append(0) ... else: ... # Otherwise it's the start and end token positions ... idx = context_start ... while idx <= context_end and offset[idx][0] <= start_char: ... idx += 1 ... start_positions.append(idx - 1) ... idx = context_end ... while idx >= context_start and offset[idx][1] >= end_char: ... idx -= 1 ... end_positions.append(idx + 1) ... inputs["start_positions"] = start_positions ... inputs["end_positions"] = end_positions ... 
return inputs ``` ๋ชจ๋“  ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์ „์ฒ˜๋ฆฌ๋ฅผ ์ ์šฉํ•˜๋ ค๋ฉด, ๐Ÿค— Datasets [`~datasets.Dataset.map`] ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. `batched=True`๋กœ ์„ค์ •ํ•ด ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์—ฌ๋Ÿฌ ์š”์†Œ๋“ค์„ ํ•œ ๋ฒˆ์— ์ฒ˜๋ฆฌํ•˜๋ฉด `map` ํ•จ์ˆ˜์˜ ์†๋„๋ฅผ ๋น ๋ฅด๊ฒŒ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•„์š”ํ•˜์ง€ ์•Š์€ ์—ด์€ ๋ชจ๋‘ ์ œ๊ฑฐํ•ฉ๋‹ˆ๋‹ค: ```py >>> tokenized_squad = squad.map(preprocess_function, batched=True, remove_columns=squad["train"].column_names) ``` ์ด์ œ [`DefaultDataCollator`]๋ฅผ ์ด์šฉํ•ด ์˜ˆ์‹œ ๋ฐฐ์น˜๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ๐Ÿค— Transformers์˜ ๋‹ค๋ฅธ ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ(data collator)์™€ ๋‹ฌ๋ฆฌ, [`DefaultDataCollator`]๋Š” ํŒจ๋”ฉ๊ณผ ๊ฐ™์€ ์ถ”๊ฐ€ ์ „์ฒ˜๋ฆฌ๋ฅผ ์ ์šฉํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค: <frameworkcontent> <pt> ```py >>> from transformers import DefaultDataCollator >>> data_collator = DefaultDataCollator() ``` </pt> <tf> ```py >>> from transformers import DefaultDataCollator >>> data_collator = DefaultDataCollator(return_tensors="tf") ``` </tf> </frameworkcontent> ## ํ›ˆ๋ จ[[train]] <frameworkcontent> <pt> <Tip> [`Trainer`]๋ฅผ ์ด์šฉํ•ด ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๊ฒƒ์— ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด, [์—ฌ๊ธฐ](../training#train-with-pytorch-trainer)์—์„œ ๊ธฐ์ดˆ ํŠœํ† ๋ฆฌ์–ผ์„ ์‚ดํŽด๋ณด์„ธ์š”! </Tip> ์ด์ œ ๋ชจ๋ธ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`AutoModelForQuestionAnswering`]์œผ๋กœ DistilBERT๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForQuestionAnswering, TrainingArguments, Trainer >>> model = AutoModelForQuestionAnswering.from_pretrained("distilbert/distilbert-base-uncased") ``` ์ด์ œ ์„ธ ๋‹จ๊ณ„๋งŒ ๋‚จ์•˜์Šต๋‹ˆ๋‹ค: 1. [`TrainingArguments`]์—์„œ ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •ํ•ฉ๋‹ˆ๋‹ค. ๊ผญ ํ•„์š”ํ•œ ๋งค๊ฐœ๋ณ€์ˆ˜๋Š” ๋ชจ๋ธ์„ ์ €์žฅํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•˜๋Š” `output_dir` ์ž…๋‹ˆ๋‹ค. `push_to_hub=True`๋กœ ์„ค์ •ํ•ด์„œ ์ด ๋ชจ๋ธ์„ Hub๋กœ ํ‘ธ์‹œํ•ฉ๋‹ˆ๋‹ค (๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๋ ค๋ฉด Hugging Face์— ๋กœ๊ทธ์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค). 2. ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ ์„ธํŠธ, ํ† ํฌ๋‚˜์ด์ €, ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ์™€ ํ•จ๊ป˜ [`Trainer`]์— ํ›ˆ๋ จ ์ธ์ˆ˜๋“ค์„ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. 3. [`~Trainer.train`]์„ ํ˜ธ์ถœํ•ด์„œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค. ```py >>> training_args = TrainingArguments( ... output_dir="my_awesome_qa_model", ... eval_strategy="epoch", ... learning_rate=2e-5, ... per_device_train_batch_size=16, ... per_device_eval_batch_size=16, ... num_train_epochs=3, ... weight_decay=0.01, ... push_to_hub=True, ... ) >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=tokenized_squad["train"], ... eval_dataset=tokenized_squad["test"], ... tokenizer=tokenizer, ... data_collator=data_collator, ... ) >>> trainer.train() ``` ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด, [`~transformers.Trainer.push_to_hub`] ๋งค์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ด ๋ชจ๋ธ์„ Hub์— ๊ณต์œ ํ•ด์„œ ๋ชจ๋“  ์‚ฌ๋žŒ๋“ค์ด ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๊ฒŒ ๊ณต์œ ํ•ด์ฃผ์„ธ์š”: ```py >>> trainer.push_to_hub() ``` </pt> <tf> <Tip> Keras๋กœ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๊ฒƒ์— ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด, [์—ฌ๊ธฐ](../training#train-a-tensorflow-model-with-keras)์—์„œ ๊ธฐ์ดˆ ํŠœํ† ๋ฆฌ์–ผ์„ ์‚ดํŽด๋ณด์„ธ์š”! </Tip> TensorFlow๋ฅผ ์ด์šฉํ•œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋ ค๋ฉด ์˜ตํ‹ฐ๋งˆ์ด์ € ํ•จ์ˆ˜, ํ•™์Šต๋ฅ  ์Šค์ผ€์ฅด ๋ฐ ๋ช‡ ๊ฐ€์ง€ ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์„ค์ •ํ•˜๋Š” ๊ฒƒ๋ถ€ํ„ฐ ์‹œ์ž‘ํ•ด์•ผํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import create_optimizer >>> batch_size = 16 >>> num_epochs = 2 >>> total_train_steps = (len(tokenized_squad["train"]) // batch_size) * num_epochs >>> optimizer, schedule = create_optimizer( ... init_lr=2e-5, ... num_warmup_steps=0, ... num_train_steps=total_train_steps, ... 
)
```

๊ทธ ๋‹ค์Œ [`TFAutoModelForQuestionAnswering`]์œผ๋กœ DistilBERT๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค:

```py
>>> from transformers import TFAutoModelForQuestionAnswering

>>> model = TFAutoModelForQuestionAnswering.from_pretrained("distilbert/distilbert-base-uncased")
```

[`~transformers.TFPreTrainedModel.prepare_tf_dataset`]์„ ์‚ฌ์šฉํ•ด์„œ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ `tf.data.Dataset` ํ˜•์‹์œผ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> tf_train_set = model.prepare_tf_dataset(
...     tokenized_squad["train"],
...     shuffle=True,
...     batch_size=16,
...     collate_fn=data_collator,
... )

>>> tf_validation_set = model.prepare_tf_dataset(
...     tokenized_squad["test"],
...     shuffle=False,
...     batch_size=16,
...     collate_fn=data_collator,
... )
```

[`compile`](https://keras.io/api/models/model_training_apis/#compile-method)๋กœ ํ›ˆ๋ จํ•  ๋ชจ๋ธ์„ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> import tensorflow as tf

>>> model.compile(optimizer=optimizer)
```

๋งˆ์ง€๋ง‰์œผ๋กœ ๋ชจ๋ธ์„ Hub๋กœ ํ‘ธ์‹œํ•  ๋ฐฉ๋ฒ•์„ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค. [`~transformers.PushToHubCallback`]์—์„œ ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ํ‘ธ์‹œํ•  ๊ฒฝ๋กœ๋ฅผ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> from transformers.keras_callbacks import PushToHubCallback

>>> callback = PushToHubCallback(
...     output_dir="my_awesome_qa_model",
...     tokenizer=tokenizer,
... )
```

๋“œ๋””์–ด ๋ชจ๋ธ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ ์„ธํŠธ์™€ ํ‰๊ฐ€ ๋ฐ์ดํ„ฐ ์„ธํŠธ, ์—ํญ ์ˆ˜, ์ฝœ๋ฐฑ์„ ์„ค์ •ํ•œ ํ›„ [`fit`](https://keras.io/api/models/model_training_apis/#fit-method)์„ ์ด์šฉํ•ด ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=3, callbacks=[callback])
```

ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด ๋ชจ๋ธ์ด ์ž๋™์œผ๋กœ Hub์— ์—…๋กœ๋“œ๋˜์–ด ๋ˆ„๊ตฌ๋‚˜ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค!
</tf>
</frameworkcontent>

<Tip>

์งˆ์˜ ์‘๋‹ต์„ ์œ„ํ•ด ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ๋” ์ž์„ธํ•œ ์˜ˆ์‹œ๋Š” [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb) ๋˜๋Š” [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb)์„ ์ฐธ์กฐํ•˜์„ธ์š”.

</Tip>

## ํ‰๊ฐ€[[evaluate]]

์งˆ์˜ ์‘๋‹ต์„ ํ‰๊ฐ€ํ•˜๋ ค๋ฉด ์ƒ๋‹นํ•œ ์–‘์˜ ํ›„์ฒ˜๋ฆฌ๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ์‹œ๊ฐ„์ด ๋„ˆ๋ฌด ๋งŽ์ด ๊ฑธ๋ฆฌ์ง€ ์•Š๋„๋ก ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ํ‰๊ฐ€ ๋‹จ๊ณ„๋ฅผ ์ƒ๋žตํ•ฉ๋‹ˆ๋‹ค. [`Trainer`]๋Š” ํ›ˆ๋ จ ๊ณผ์ •์—์„œ ํ‰๊ฐ€ ์†์‹ค(evaluation loss)์„ ๊ณ„์† ๊ณ„์‚ฐํ•˜๊ธฐ ๋•Œ๋ฌธ์— ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์„ ๋Œ€๋žต์ ์œผ๋กœ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

์‹œ๊ฐ„์— ์—ฌ์œ ๊ฐ€ ์žˆ๊ณ  ์งˆ์˜ ์‘๋‹ต ๋ชจ๋ธ์„ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๊ด€์‹ฌ์ด ์žˆ๋‹ค๋ฉด ๐Ÿค— Hugging Face Course์˜ [Question answering](https://huggingface.co/course/chapter7/7?fw=pt#postprocessing) ์ฑ•ํ„ฐ๋ฅผ ์‚ดํŽด๋ณด์„ธ์š”!

## ์ถ”๋ก [[inference]]

์ด์ œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ–ˆ์œผ๋‹ˆ ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค!

์งˆ๋ฌธ๊ณผ ๋ชจ๋ธ์ด ์˜ˆ์ธกํ•˜๊ธฐ ์›ํ•˜๋Š” ๋ฌธ๋งฅ(context)๋ฅผ ์ƒ๊ฐํ•ด๋ณด์„ธ์š”:

```py
>>> question = "How many programming languages does BLOOM support?"
>>> context = "BLOOM has 176 billion parameters and can generate text in 46 languages natural languages and 13 programming languages."
```

์ถ”๋ก ์„ ์œ„ํ•ด ๋ฏธ์„ธ ์กฐ์ •ํ•œ ๋ชจ๋ธ์„ ํ…Œ์ŠคํŠธํ•˜๋Š” ๊ฐ€์žฅ ์‰ฌ์šด ๋ฐฉ๋ฒ•์€ [`pipeline`]์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค.
๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ด ์งˆ์˜ ์‘๋‹ต์„ ํ•˜๊ธฐ ์œ„ํ•ด์„œ `pipeline`์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ  ํ…์ŠคํŠธ๋ฅผ ์ž…๋ ฅํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import pipeline >>> question_answerer = pipeline("question-answering", model="my_awesome_qa_model") >>> question_answerer(question=question, context=context) {'score': 0.2058267742395401, 'start': 10, 'end': 95, 'answer': '176 billion parameters and can generate text in 46 languages natural languages and 13'} ``` ์›ํ•œ๋‹ค๋ฉด `pipeline`์˜ ๊ฒฐ๊ณผ๋ฅผ ์ง์ ‘ ๋ณต์ œํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: <frameworkcontent> <pt> ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•ด์„œ PyTorch ํ…์„œ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_qa_model") >>> inputs = tokenizer(question, context, return_tensors="pt") ``` ๋ชจ๋ธ์— ์ž…๋ ฅ์„ ์ „๋‹ฌํ•˜๊ณ  `logits`์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForQuestionAnswering >>> model = AutoModelForQuestionAnswering.from_pretrained("my_awesome_qa_model") >>> with torch.no_grad(): ... outputs = model(**inputs) ``` ๋ชจ๋ธ์˜ ์ถœ๋ ฅ์—์„œ ์‹œ์ž‘ ๋ฐ ์ข…๋ฃŒ ์œ„์น˜๊ฐ€ ์–ด๋”˜์ง€ ๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์„ ์–ป์Šต๋‹ˆ๋‹ค: ```py >>> answer_start_index = outputs.start_logits.argmax() >>> answer_end_index = outputs.end_logits.argmax() ``` ์˜ˆ์ธก๋œ ํ† ํฐ์„ ํ•ด๋…ํ•ด์„œ ๋‹ต์„ ์–ป์Šต๋‹ˆ๋‹ค: ```py >>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1] >>> tokenizer.decode(predict_answer_tokens) '176 billion parameters and can generate text in 46 languages natural languages and 13' ``` </pt> <tf> ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•ด์„œ TensorFlow ํ…์„œ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_qa_model") >>> inputs = tokenizer(question, text, return_tensors="tf") ``` ๋ชจ๋ธ์— ์ž…๋ ฅ์„ ์ „๋‹ฌํ•˜๊ณ  `logits`์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import TFAutoModelForQuestionAnswering >>> model = TFAutoModelForQuestionAnswering.from_pretrained("my_awesome_qa_model") >>> outputs = model(**inputs) ``` ๋ชจ๋ธ์˜ ์ถœ๋ ฅ์—์„œ ์‹œ์ž‘ ๋ฐ ์ข…๋ฃŒ ์œ„์น˜๊ฐ€ ์–ด๋”˜์ง€ ๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์„ ์–ป์Šต๋‹ˆ๋‹ค: ```py >>> answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0]) >>> answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0]) ``` ์˜ˆ์ธก๋œ ํ† ํฐ์„ ํ•ด๋…ํ•ด์„œ ๋‹ต์„ ์–ป์Šต๋‹ˆ๋‹ค: ```py >>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1] >>> tokenizer.decode(predict_answer_tokens) '176 billion parameters and can generate text in 46 languages natural languages and 13' ``` </tf> </frameworkcontent>
0
mavonic_private_repos/transformers/docs/source/ko
mavonic_private_repos/transformers/docs/source/ko/tasks/translation.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๋ฒˆ์—ญ[[translation]] [[open-in-colab]] <Youtube id="1JvfrvZgi6c"/> ๋ฒˆ์—ญ์€ ํ•œ ์–ธ์–ด๋กœ ๋œ ์‹œํ€€์Šค๋ฅผ ๋‹ค๋ฅธ ์–ธ์–ด๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ๋ฒˆ์—ญ์ด๋‚˜ ์š”์•ฝ์€ ์ž…๋ ฅ์„ ๋ฐ›์•„ ์ผ๋ จ์˜ ์ถœ๋ ฅ์„ ๋ฐ˜ํ™˜ํ•˜๋Š” ๊ฐ•๋ ฅํ•œ ํ”„๋ ˆ์ž„์›Œํฌ์ธ ์‹œํ€€์Šค-ํˆฌ-์‹œํ€€์Šค ๋ฌธ์ œ๋กœ ๊ตฌ์„ฑํ•  ์ˆ˜ ์žˆ๋Š” ๋Œ€ํ‘œ์ ์ธ ํƒœ์Šคํฌ์ž…๋‹ˆ๋‹ค. ๋ฒˆ์—ญ ์‹œ์Šคํ…œ์€ ์ผ๋ฐ˜์ ์œผ๋กœ ๋‹ค๋ฅธ ์–ธ์–ด๋กœ ๋œ ํ…์ŠคํŠธ ๊ฐ„์˜ ๋ฒˆ์—ญ์— ์‚ฌ์šฉ๋˜์ง€๋งŒ, ์Œ์„ฑ ๊ฐ„์˜ ํ†ต์—ญ์ด๋‚˜ ํ…์ŠคํŠธ-์Œ์„ฑ ๋˜๋Š” ์Œ์„ฑ-ํ…์ŠคํŠธ์™€ ๊ฐ™์€ ์กฐํ•ฉ์—๋„ ์‚ฌ์šฉ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ ํ•™์Šตํ•  ๋‚ด์šฉ์€: 1. ์˜์–ด ํ…์ŠคํŠธ๋ฅผ ํ”„๋ž‘์Šค์–ด๋กœ ๋ฒˆ์—ญํ•˜๊ธฐ ์œ„ํ•ด [T5](https://huggingface.co/google-t5/t5-small) ๋ชจ๋ธ์„ OPUS Books ๋ฐ์ดํ„ฐ์„ธํŠธ์˜ ์˜์–ด-ํ”„๋ž‘์Šค์–ด ํ•˜์œ„ ์ง‘ํ•ฉ์œผ๋กœ ํŒŒ์ธํŠœ๋‹ํ•˜๋Š” ๋ฐฉ๋ฒ•๊ณผ 2. ํŒŒ์ธํŠœ๋‹๋œ ๋ชจ๋ธ์„ ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค. <Tip> ์ด ์ž‘์—…๊ณผ ํ˜ธํ™˜๋˜๋Š” ๋ชจ๋“  ์•„ํ‚คํ…์ฒ˜์™€ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋ณด๋ ค๋ฉด [์ž‘์—… ํŽ˜์ด์ง€](https://huggingface.co/tasks/translation)๋ฅผ ํ™•์ธํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ๋ชจ๋‘ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install transformers datasets evaluate sacrebleu ``` ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ์ปค๋ฎค๋‹ˆํ‹ฐ์™€ ๊ณต์œ ํ•  ์ˆ˜ ์žˆ๋„๋ก Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ์ƒˆ๋กœ์šด ์ฐฝ์ด ํ‘œ์‹œ๋˜๋ฉด ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•˜์„ธ์š”. ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## OPUS Books ๋ฐ์ดํ„ฐ์„ธํŠธ ๊ฐ€์ ธ์˜ค๊ธฐ[[load-opus-books-dataset]] ๋จผ์ € ๐Ÿค— Datasets ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ [OPUS Books](https://huggingface.co/datasets/opus_books) ๋ฐ์ดํ„ฐ์„ธํŠธ์˜ ์˜์–ด-ํ”„๋ž‘์Šค์–ด ํ•˜์œ„ ์ง‘ํ•ฉ์„ ๊ฐ€์ ธ์˜ค์„ธ์š”. ```py >>> from datasets import load_dataset >>> books = load_dataset("opus_books", "en-fr") ``` ๋ฐ์ดํ„ฐ์„ธํŠธ๋ฅผ [`~datasets.Dataset.train_test_split`] ๋ฉ”์„œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ›ˆ๋ จ ๋ฐ ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ๋กœ ๋ถ„ํ• ํ•˜์„ธ์š”. ```py >>> books = books["train"].train_test_split(test_size=0.2) ``` ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ์—์„œ ์˜ˆ์‹œ๋ฅผ ์‚ดํŽด๋ณผ๊นŒ์š”? ```py >>> books["train"][0] {'id': '90560', 'translation': {'en': 'But this lofty plateau measured only a few fathoms, and soon we reentered Our Element.', 'fr': 'Mais ce plateau รฉlevรฉ ne mesurait que quelques toises, et bientรดt nous fรปmes rentrรฉs dans notre รฉlรฉment.'}} ``` ๋ฐ˜ํ™˜๋œ ๋”•์…”๋„ˆ๋ฆฌ์˜ `translation` ํ‚ค๊ฐ€ ํ…์ŠคํŠธ์˜ ์˜์–ด, ํ”„๋ž‘์Šค์–ด ๋ฒ„์ „์„ ํฌํ•จํ•˜๊ณ  ์žˆ๋Š” ๊ฒƒ์„ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ## ์ „์ฒ˜๋ฆฌ[[preprocess]] <Youtube id="XAR8jnZZuUs"/> ๋‹ค์Œ ๋‹จ๊ณ„๋กœ ์˜์–ด-ํ”„๋ž‘์Šค์–ด ์Œ์„ ์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด T5 ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๊ฐ€์ ธ์˜ค์„ธ์š”. 
```py >>> from transformers import AutoTokenizer >>> checkpoint = "google-t5/t5-small" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) ``` ๋งŒ๋“ค ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋Š” ์•„๋ž˜ ์š”๊ตฌ์‚ฌํ•ญ์„ ์ถฉ์กฑํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: 1. T5๊ฐ€ ๋ฒˆ์—ญ ํƒœ์Šคํฌ์ž„์„ ์ธ์ง€ํ•  ์ˆ˜ ์žˆ๋„๋ก ์ž…๋ ฅ ์•ž์— ํ”„๋กฌํ”„ํŠธ๋ฅผ ์ถ”๊ฐ€ํ•˜์„ธ์š”. ์—ฌ๋Ÿฌ NLP ํƒœ์Šคํฌ๋ฅผ ํ•  ์ˆ˜ ์žˆ๋Š” ๋ชจ๋ธ ์ค‘ ์ผ๋ถ€๋Š” ์ด๋ ‡๊ฒŒ ํƒœ์Šคํฌ ํ”„๋กฌํ”„ํŠธ๋ฅผ ๋ฏธ๋ฆฌ ์ค˜์•ผํ•ฉ๋‹ˆ๋‹ค. 2. ์›์–ด(์˜์–ด)๊ณผ ๋ฒˆ์—ญ์–ด(ํ”„๋ž‘์Šค์–ด)๋ฅผ ๋ณ„๋„๋กœ ํ† ํฐํ™”ํ•˜์„ธ์š”. ์˜์–ด ์–ดํœ˜๋กœ ์‚ฌ์ „ ํ•™์Šต๋œ ํ† ํฌ๋‚˜์ด์ €๋กœ ํ”„๋ž‘์Šค์–ด ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•  ์ˆ˜๋Š” ์—†๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. 3. `max_length` ๋งค๊ฐœ๋ณ€์ˆ˜๋กœ ์„ค์ •ํ•œ ์ตœ๋Œ€ ๊ธธ์ด๋ณด๋‹ค ๊ธธ์ง€ ์•Š๋„๋ก ์‹œํ€€์Šค๋ฅผ truncateํ•˜์„ธ์š”. ```py >>> source_lang = "en" >>> target_lang = "fr" >>> prefix = "translate English to French: " >>> def preprocess_function(examples): ... inputs = [prefix + example[source_lang] for example in examples["translation"]] ... targets = [example[target_lang] for example in examples["translation"]] ... model_inputs = tokenizer(inputs, text_target=targets, max_length=128, truncation=True) ... return model_inputs ``` ์ „์ฒด ๋ฐ์ดํ„ฐ์„ธํŠธ์— ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•˜๋ ค๋ฉด ๐Ÿค— Datasets์˜ [`~datasets.Dataset.map`] ๋ฉ”์„œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. `map` ํ•จ์ˆ˜์˜ ์†๋„๋ฅผ ๋†’์ด๋ ค๋ฉด `batched=True`๋ฅผ ์„ค์ •ํ•˜์—ฌ ๋ฐ์ดํ„ฐ์„ธํŠธ์˜ ์—ฌ๋Ÿฌ ์š”์†Œ๋ฅผ ํ•œ ๋ฒˆ์— ์ฒ˜๋ฆฌํ•˜๋Š” ๋ฐฉ๋ฒ•์ด ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> tokenized_books = books.map(preprocess_function, batched=True) ``` ์ด์ œ [`DataCollatorForSeq2Seq`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์˜ˆ์ œ ๋ฐฐ์น˜๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ์„ธํŠธ์˜ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ์ „๋ถ€๋ฅผ paddingํ•˜๋Š” ๋Œ€์‹ , ๋ฐ์ดํ„ฐ ์ •๋ ฌ ์ค‘ ๊ฐ ๋ฐฐ์น˜์˜ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ๋ฌธ์žฅ์„ *๋™์ ์œผ๋กœ padding*ํ•˜๋Š” ๊ฒƒ์ด ๋” ํšจ์œจ์ ์ž…๋‹ˆ๋‹ค. <frameworkcontent> <pt> ```py >>> from transformers import DataCollatorForSeq2Seq >>> data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint) ``` </pt> <tf> ```py >>> from transformers import DataCollatorForSeq2Seq >>> data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint, return_tensors="tf") ``` </tf> </frameworkcontent> ## ํ‰๊ฐ€[[evalulate]] ํ›ˆ๋ จ ์ค‘์— ๋ฉ”ํŠธ๋ฆญ์„ ํฌํ•จํ•˜๋ฉด ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์„ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋ฉ๋‹ˆ๋‹ค. ๐Ÿค— [Evaluate](https://huggingface.co/docs/evaluate/index) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋กœ ํ‰๊ฐ€ ๋ฐฉ๋ฒ•(evaluation method)์„ ๋น ๋ฅด๊ฒŒ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ˜„์žฌ ํƒœ์Šคํฌ์— ์ ํ•ฉํ•œ SacreBLEU ๋ฉ”ํŠธ๋ฆญ์„ ๊ฐ€์ ธ์˜ค์„ธ์š”. (๋ฉ”ํŠธ๋ฆญ์„ ๊ฐ€์ ธ์˜ค๊ณ  ๊ณ„์‚ฐํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ์ž์„ธํžˆ ์•Œ์•„๋ณด๋ ค๋ฉด ๐Ÿค— Evaluate [๋‘˜๋Ÿฌ๋ณด๊ธฐ](https://huggingface.co/docs/evaluate/a_quick_tour)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”): ```py >>> import evaluate >>> metric = evaluate.load("sacrebleu") ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ [`~evaluate.EvaluationModule.compute`]์— ์˜ˆ์ธก๊ฐ’๊ณผ ๋ ˆ์ด๋ธ”์„ ์ „๋‹ฌํ•˜์—ฌ SacreBLEU ์ ์ˆ˜๋ฅผ ๊ณ„์‚ฐํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ์ƒ์„ฑํ•˜์„ธ์š”: ```py >>> import numpy as np >>> def postprocess_text(preds, labels): ... preds = [pred.strip() for pred in preds] ... labels = [[label.strip()] for label in labels] ... return preds, labels >>> def compute_metrics(eval_preds): ... preds, labels = eval_preds ... if isinstance(preds, tuple): ... preds = preds[0] ... decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True) ... labels = np.where(labels != -100, labels, tokenizer.pad_token_id) ... decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True) ... 
decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels) ... result = metric.compute(predictions=decoded_preds, references=decoded_labels) ... result = {"bleu": result["score"]} ... prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds] ... result["gen_len"] = np.mean(prediction_lens) ... result = {k: round(v, 4) for k, v in result.items()} ... return result ``` ์ด์ œ `compute_metrics` ํ•จ์ˆ˜๋Š” ์ค€๋น„๋˜์—ˆ๊ณ , ํ›ˆ๋ จ ๊ณผ์ •์„ ์„ค์ •ํ•  ๋•Œ ๋‹ค์‹œ ์‚ดํŽด๋ณผ ์˜ˆ์ •์ž…๋‹ˆ๋‹ค. ## ํ›ˆ๋ จ[[train]] <frameworkcontent> <pt> <Tip> [`Trainer`]๋กœ ๋ชจ๋ธ์„ ํŒŒ์ธํŠœ๋‹ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด [์—ฌ๊ธฐ](../training#train-with-pytorch-trainer)์—์„œ ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ์„ ์‚ดํŽด๋ณด์‹œ๊ธฐ ๋ฐ”๋ž๋‹ˆ๋‹ค! </Tip> ๋ชจ๋ธ์„ ํ›ˆ๋ จ์‹œํ‚ฌ ์ค€๋น„๊ฐ€ ๋˜์—ˆ๊ตฐ์š”! [`AutoModelForSeq2SeqLM`]์œผ๋กœ T5๋ฅผ ๋กœ๋“œํ•˜์„ธ์š”: ```py >>> from transformers import AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments, Seq2SeqTrainer >>> model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint) ``` ์ด์ œ ์„ธ ๋‹จ๊ณ„๋งŒ ๊ฑฐ์น˜๋ฉด ๋์ž…๋‹ˆ๋‹ค: 1. [`Seq2SeqTrainingArguments`]์—์„œ ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •์˜ํ•˜์„ธ์š”. ์œ ์ผํ•œ ํ•„์ˆ˜ ๋งค๊ฐœ๋ณ€์ˆ˜๋Š” ๋ชจ๋ธ์„ ์ €์žฅํ•  ์œ„์น˜์ธ `output_dir`์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ Hub์— ํ‘ธ์‹œํ•˜๊ธฐ ์œ„ํ•ด `push_to_hub=True`๋กœ ์„ค์ •ํ•˜์„ธ์š”. (๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๋ ค๋ฉด Hugging Face์— ๋กœ๊ทธ์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค.) [`Trainer`]๋Š” ์—ํญ์ด ๋๋‚ ๋•Œ๋งˆ๋‹ค SacreBLEU ๋ฉ”ํŠธ๋ฆญ์„ ํ‰๊ฐ€ํ•˜๊ณ  ํ›ˆ๋ จ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ €์žฅํ•ฉ๋‹ˆ๋‹ค. 2. [`Seq2SeqTrainer`]์— ํ›ˆ๋ จ ์ธ์ˆ˜๋ฅผ ์ „๋‹ฌํ•˜์„ธ์š”. ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ ์„ธํŠธ, ํ† ํฌ๋‚˜์ด์ €, data collator ๋ฐ `compute_metrics` ํ•จ์ˆ˜๋„ ๋ฉ๋‹ฌ์•„ ์ „๋‹ฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 3. [`~Trainer.train`]์„ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ํŒŒ์ธํŠœ๋‹ํ•˜์„ธ์š”. ```py >>> training_args = Seq2SeqTrainingArguments( ... output_dir="my_awesome_opus_books_model", ... eval_strategy="epoch", ... learning_rate=2e-5, ... per_device_train_batch_size=16, ... per_device_eval_batch_size=16, ... weight_decay=0.01, ... save_total_limit=3, ... num_train_epochs=2, ... predict_with_generate=True, ... fp16=True, ... push_to_hub=True, ... ) >>> trainer = Seq2SeqTrainer( ... model=model, ... args=training_args, ... train_dataset=tokenized_books["train"], ... eval_dataset=tokenized_books["test"], ... tokenizer=tokenizer, ... data_collator=data_collator, ... compute_metrics=compute_metrics, ... ) >>> trainer.train() ``` ํ•™์Šต์ด ์™„๋ฃŒ๋˜๋ฉด [`~transformers.Trainer.push_to_hub`] ๋ฉ”์„œ๋“œ๋กœ ๋ชจ๋ธ์„ Hub์— ๊ณต์œ ํ•˜์„ธ์š”. ์ด๋Ÿฌ๋ฉด ๋ˆ„๊ตฌ๋‚˜ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๊ฒŒ ๋ฉ๋‹ˆ๋‹ค: ```py >>> trainer.push_to_hub() ``` </pt> <tf> <Tip> Keras๋กœ ๋ชจ๋ธ์„ ํŒŒ์ธํŠœ๋‹ํ•˜๋Š” ๋ฐฉ๋ฒ•์ด ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด, [์—ฌ๊ธฐ](../training#train-a-tensorflow-model-with-keras)์—์„œ ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ์„ ์‚ดํŽด๋ณด์‹œ๊ธฐ ๋ฐ”๋ž๋‹ˆ๋‹ค! </Tip> TensorFlow์—์„œ ๋ชจ๋ธ์„ ํŒŒ์ธํŠœ๋‹ํ•˜๋ ค๋ฉด ์šฐ์„  optimizer ํ•จ์ˆ˜, ํ•™์Šต๋ฅ  ์Šค์ผ€์ค„ ๋“ฑ์˜ ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์„ค์ •ํ•˜์„ธ์š”: ```py >>> from transformers import AdamWeightDecay >>> optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01) ``` ์ด์ œ [`TFAutoModelForSeq2SeqLM`]๋กœ T5๋ฅผ ๊ฐ€์ ธ์˜ค์„ธ์š”: ```py >>> from transformers import TFAutoModelForSeq2SeqLM >>> model = TFAutoModelForSeq2SeqLM.from_pretrained(checkpoint) ``` [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]๋กœ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ `tf.data.Dataset` ํ˜•์‹์œผ๋กœ ๋ณ€ํ™˜ํ•˜์„ธ์š”: ```py >>> tf_train_set = model.prepare_tf_dataset( ... tokenized_books["train"], ... shuffle=True, ... 
batch_size=16, ... collate_fn=data_collator, ... ) >>> tf_test_set = model.prepare_tf_dataset( ... tokenized_books["test"], ... shuffle=False, ... batch_size=16, ... collate_fn=data_collator, ... ) ``` ํ›ˆ๋ จํ•˜๊ธฐ ์œ„ํ•ด [`compile`](https://keras.io/api/models/model_training_apis/#compile-method) ๋ฉ”์„œ๋“œ๋กœ ๋ชจ๋ธ์„ ๊ตฌ์„ฑํ•˜์„ธ์š”: ```py >>> import tensorflow as tf >>> model.compile(optimizer=optimizer) ``` ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ์˜ˆ์ธก๊ฐ’์œผ๋กœ๋ถ€ํ„ฐ SacreBLEU ๋ฉ”ํŠธ๋ฆญ์„ ๊ณ„์‚ฐํ•˜๋Š” ๋ฐฉ๋ฒ•๊ณผ ๋ชจ๋ธ์„ Hub์— ์—…๋กœ๋“œํ•˜๋Š” ๋ฐฉ๋ฒ• ๋‘ ๊ฐ€์ง€๋ฅผ ๋ฏธ๋ฆฌ ์„ค์ •ํ•ด๋‘ฌ์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋‘˜ ๋‹ค [Keras callbacks](../main_classes/keras_callbacks)๋กœ ๊ตฌํ˜„ํ•˜์„ธ์š”. [`~transformers.KerasMetricCallback`]์— `compute_metrics` ํ•จ์ˆ˜๋ฅผ ์ „๋‹ฌํ•˜์„ธ์š”. ```py >>> from transformers.keras_callbacks import KerasMetricCallback >>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set) ``` ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์—…๋กœ๋“œํ•  ์œ„์น˜๋ฅผ [`~transformers.PushToHubCallback`]์—์„œ ์ง€์ •ํ•˜์„ธ์š”: ```py >>> from transformers.keras_callbacks import PushToHubCallback >>> push_to_hub_callback = PushToHubCallback( ... output_dir="my_awesome_opus_books_model", ... tokenizer=tokenizer, ... ) ``` ์ด์ œ ์ฝœ๋ฐฑ๋“ค์„ ํ•œ๋ฐ๋กœ ๋ฌถ์–ด์ฃผ์„ธ์š”: ```py >>> callbacks = [metric_callback, push_to_hub_callback] ``` ๋“œ๋””์–ด ๋ชจ๋ธ์„ ํ›ˆ๋ จ์‹œํ‚ฌ ๋ชจ๋“  ์ค€๋น„๋ฅผ ๋งˆ์ณค๊ตฐ์š”! ์ด์ œ ํ›ˆ๋ จ ๋ฐ ๊ฒ€์ฆ ๋ฐ์ดํ„ฐ ์„ธํŠธ์— [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) ๋ฉ”์„œ๋“œ๋ฅผ ์—ํญ ์ˆ˜์™€ ๋งŒ๋“ค์–ด๋‘” ์ฝœ๋ฐฑ๊ณผ ํ•จ๊ป˜ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ํŒŒ์ธํŠœ๋‹ํ•˜์„ธ์š”: ```py >>> model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=callbacks) ``` ํ•™์Šต์ด ์™„๋ฃŒ๋˜๋ฉด ๋ชจ๋ธ์ด ์ž๋™์œผ๋กœ Hub์— ์—…๋กœ๋“œ๋˜๊ณ , ๋ˆ„๊ตฌ๋‚˜ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๊ฒŒ ๋ฉ๋‹ˆ๋‹ค! </tf> </frameworkcontent> <Tip> ๋ฒˆ์—ญ์„ ์œ„ํ•ด ๋ชจ๋ธ์„ ํŒŒ์ธํŠœ๋‹ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ๋ณด๋‹ค ์ž์„ธํ•œ ์˜ˆ์ œ๋Š” ํ•ด๋‹น [PyTorch ๋…ธํŠธ๋ถ](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation.ipynb) ๋˜๋Š” [TensorFlow ๋…ธํŠธ๋ถ](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation-tf.ipynb)์„ ์ฐธ์กฐํ•˜์„ธ์š”. </Tip> ## ์ถ”๋ก [[inference]] ์ข‹์•„์š”, ์ด์ œ ๋ชจ๋ธ์„ ํŒŒ์ธํŠœ๋‹ํ–ˆ์œผ๋‹ˆ ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ๋‹ค๋ฅธ ์–ธ์–ด๋กœ ๋ฒˆ์—ญํ•˜๊ณ  ์‹ถ์€ ํ…์ŠคํŠธ๋ฅผ ์จ๋ณด์„ธ์š”. T5์˜ ๊ฒฝ์šฐ ์›ํ•˜๋Š” ํƒœ์Šคํฌ๋ฅผ ์ž…๋ ฅ์˜ ์ ‘๋‘์‚ฌ๋กœ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ์˜์–ด์—์„œ ํ”„๋ž‘์Šค์–ด๋กœ ๋ฒˆ์—ญํ•˜๋Š” ๊ฒฝ์šฐ, ์•„๋ž˜์™€ ๊ฐ™์€ ์ ‘๋‘์‚ฌ๊ฐ€ ์ถ”๊ฐ€๋ฉ๋‹ˆ๋‹ค: ```py >>> text = "translate English to French: Legumes share resources with nitrogen-fixing bacteria." ``` ํŒŒ์ธํŠœ๋‹๋œ ๋ชจ๋ธ๋กœ ์ถ”๋ก ํ•˜๊ธฐ์— ์ œ์ผ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`pipeline`]์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ํ•ด๋‹น ๋ชจ๋ธ๋กœ ๋ฒˆ์—ญ `pipeline`์„ ๋งŒ๋“  ๋’ค, ํ…์ŠคํŠธ๋ฅผ ์ „๋‹ฌํ•˜์„ธ์š”: ```py >>> from transformers import pipeline # Change `xx` to the language of the input and `yy` to the language of the desired output. 
# Examples: "en" for English, "fr" for French, "de" for German, "es" for Spanish, "zh" for Chinese, etc; translation_en_to_fr translates English to French # You can view all the lists of languages here - https://huggingface.co/languages >>> translator = pipeline("translation_xx_to_yy", model="my_awesome_opus_books_model") >>> translator(text) [{'translation_text': 'Legumes partagent des ressources avec des bactรฉries azotantes.'}] ``` ์›ํ•œ๋‹ค๋ฉด `pipeline`์˜ ๊ฒฐ๊ณผ๋ฅผ ์ง์ ‘ ๋ณต์ œํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: <frameworkcontent> <pt> ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•˜๊ณ  `input_ids`๋ฅผ PyTorch ํ…์„œ๋กœ ๋ฐ˜ํ™˜ํ•˜์„ธ์š”: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_opus_books_model") >>> inputs = tokenizer(text, return_tensors="pt").input_ids ``` [`~generation.GenerationMixin.generate`] ๋ฉ”์„œ๋“œ๋กœ ๋ฒˆ์—ญ์„ ์ƒ์„ฑํ•˜์„ธ์š”. ๋‹ค์–‘ํ•œ ํ…์ŠคํŠธ ์ƒ์„ฑ ์ „๋žต ๋ฐ ์ƒ์„ฑ์„ ์ œ์–ดํ•˜๊ธฐ ์œ„ํ•œ ๋งค๊ฐœ๋ณ€์ˆ˜์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [Text Generation](../main_classes/text_generation) API๋ฅผ ์‚ดํŽด๋ณด์‹œ๊ธฐ ๋ฐ”๋ž๋‹ˆ๋‹ค. ```py >>> from transformers import AutoModelForSeq2SeqLM >>> model = AutoModelForSeq2SeqLM.from_pretrained("my_awesome_opus_books_model") >>> outputs = model.generate(inputs, max_new_tokens=40, do_sample=True, top_k=30, top_p=0.95) ``` ์ƒ์„ฑ๋œ ํ† ํฐ ID๋“ค์„ ๋‹ค์‹œ ํ…์ŠคํŠธ๋กœ ๋””์ฝ”๋”ฉํ•˜์„ธ์š”: ```py >>> tokenizer.decode(outputs[0], skip_special_tokens=True) 'Les lignรฉes partagent des ressources avec des bactรฉries enfixant l'azote.' ``` </pt> <tf> ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•˜๊ณ  `input_ids`๋ฅผ TensorFlow ํ…์„œ๋กœ ๋ฐ˜ํ™˜ํ•˜์„ธ์š”: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_opus_books_model") >>> inputs = tokenizer(text, return_tensors="tf").input_ids ``` [`~transformers.generation_tf_utils.TFGenerationMixin.generate`] ๋ฉ”์„œ๋“œ๋กœ ๋ฒˆ์—ญ์„ ์ƒ์„ฑํ•˜์„ธ์š”. ๋‹ค์–‘ํ•œ ํ…์ŠคํŠธ ์ƒ์„ฑ ์ „๋žต ๋ฐ ์ƒ์„ฑ์„ ์ œ์–ดํ•˜๊ธฐ ์œ„ํ•œ ๋งค๊ฐœ๋ณ€์ˆ˜์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [Text Generation](../main_classes/text_generation) API๋ฅผ ์‚ดํŽด๋ณด์‹œ๊ธฐ ๋ฐ”๋ž๋‹ˆ๋‹ค. ```py >>> from transformers import TFAutoModelForSeq2SeqLM >>> model = TFAutoModelForSeq2SeqLM.from_pretrained("my_awesome_opus_books_model") >>> outputs = model.generate(inputs, max_new_tokens=40, do_sample=True, top_k=30, top_p=0.95) ``` ์ƒ์„ฑ๋œ ํ† ํฐ ID๋“ค์„ ๋‹ค์‹œ ํ…์ŠคํŠธ๋กœ ๋””์ฝ”๋”ฉํ•˜์„ธ์š”: ```py >>> tokenizer.decode(outputs[0], skip_special_tokens=True) 'Les lugumes partagent les ressources avec des bactรฉries fixatrices d'azote.' ``` </tf> </frameworkcontent>
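์œ„ ์˜ˆ์‹œ๋Š” ์ƒ˜ํ”Œ๋ง ๊ธฐ๋ฐ˜ ์ƒ์„ฑ์ด๋ผ ์‹คํ–‰ํ•  ๋•Œ๋งˆ๋‹ค ๋ฒˆ์—ญ ๊ฒฐ๊ณผ๊ฐ€ ์กฐ๊ธˆ์”ฉ ๋‹ฌ๋ผ์งˆ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ณด๋‹ค ์ผ๊ด€๋œ ๊ฒฐ๊ณผ๊ฐ€ ํ•„์š”ํ•˜๋‹ค๋ฉด ๋น” ์„œ์น˜(beam search)๋ฅผ ์‹œ๋„ํ•ด ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ์œ„ PyTorch ์˜ˆ์‹œ์˜ `model`๊ณผ `inputs`๋ฅผ ๊ทธ๋Œ€๋กœ ์‚ฌ์šฉํ•œ๋‹ค๊ณ  ๊ฐ€์ •ํ•œ ์Šค์ผ€์น˜์ด๋ฉฐ, `num_beams` ๊ฐ’์€ ์ž„์˜๋กœ ๊ณ ๋ฅธ ์˜ˆ์‹œ์ž…๋‹ˆ๋‹ค:

```py
>>> # ์ƒ˜ํ”Œ๋ง ๋Œ€์‹  ๋น” ์„œ์น˜๋ฅผ ์‚ฌ์šฉํ•ด ๋ณด๋‹ค ๊ฒฐ์ •์ ์ธ ๋ฒˆ์—ญ์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค
>>> outputs = model.generate(inputs, max_new_tokens=40, num_beams=4, early_stopping=True)
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
```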
0
mavonic_private_repos/transformers/docs/source/ko
mavonic_private_repos/transformers/docs/source/ko/tasks/monocular_depth_estimation.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๋‹จ์ผ ์˜์ƒ ๊ธฐ๋ฐ˜ ๊นŠ์ด ์ถ”์ •[[depth-estimation-pipeline]] ๋‹จ์ผ ์˜์ƒ ๊ธฐ๋ฐ˜ ๊นŠ์ด ์ถ”์ •์€ ํ•œ ์žฅ๋ฉด์˜ ๋‹จ์ผ ์ด๋ฏธ์ง€์—์„œ ์žฅ๋ฉด์˜ ๊นŠ์ด ์ •๋ณด๋ฅผ ์˜ˆ์ธกํ•˜๋Š” ์ปดํ“จํ„ฐ ๋น„์ „ ์ž‘์—…์ž…๋‹ˆ๋‹ค. ์ฆ‰, ๋‹จ์ผ ์นด๋ฉ”๋ผ ์‹œ์ ์˜ ์žฅ๋ฉด์— ์žˆ๋Š” ๋ฌผ์ฒด์˜ ๊ฑฐ๋ฆฌ๋ฅผ ์˜ˆ์ธกํ•˜๋Š” ๊ณผ์ •์ž…๋‹ˆ๋‹ค. ๋‹จ์ผ ์˜์ƒ ๊ธฐ๋ฐ˜ ๊นŠ์ด ์ถ”์ •์€ 3D ์žฌ๊ตฌ์„ฑ, ์ฆ๊ฐ• ํ˜„์‹ค, ์ž์œจ ์ฃผํ–‰, ๋กœ๋ด‡ ๊ณตํ•™ ๋“ฑ ๋‹ค์–‘ํ•œ ๋ถ„์•ผ์—์„œ ์‘์šฉ๋ฉ๋‹ˆ๋‹ค. ์กฐ๋ช… ์กฐ๊ฑด, ๊ฐ€๋ ค์ง, ํ…์Šค์ฒ˜์™€ ๊ฐ™์€ ์š”์†Œ์˜ ์˜ํ–ฅ์„ ๋ฐ›์„ ์ˆ˜ ์žˆ๋Š” ์žฅ๋ฉด ๋‚ด ๋ฌผ์ฒด์™€ ํ•ด๋‹น ๊นŠ์ด ์ •๋ณด ๊ฐ„์˜ ๋ณต์žกํ•œ ๊ด€๊ณ„๋ฅผ ๋ชจ๋ธ์ด ์ดํ•ดํ•ด์•ผ ํ•˜๋ฏ€๋กœ ๊นŒ๋‹ค๋กœ์šด ์ž‘์—…์ž…๋‹ˆ๋‹ค. <Tip> ์ด ์ž‘์—…๊ณผ ํ˜ธํ™˜๋˜๋Š” ๋ชจ๋“  ์•„ํ‚คํ…์ฒ˜์™€ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋ณด๋ ค๋ฉด [์ž‘์—… ํŽ˜์ด์ง€](https://huggingface.co/tasks/depth-estimation)๋ฅผ ํ™•์ธํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. </Tip> ์ด๋ฒˆ ๊ฐ€์ด๋“œ์—์„œ ๋ฐฐ์šธ ๋‚ด์šฉ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: * ๊นŠ์ด ์ถ”์ • ํŒŒ์ดํ”„๋ผ์ธ ๋งŒ๋“ค๊ธฐ * ์ง์ ‘ ๊นŠ์ด ์ถ”์ • ์ถ”๋ก ํ•˜๊ธฐ ์‹œ์ž‘ํ•˜๊ธฐ ์ „์—, ํ•„์š”ํ•œ ๋ชจ๋“  ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install -q transformers ``` ## ๊นŠ์ด ์ถ”์ • ํŒŒ์ดํ”„๋ผ์ธ[[depth-estimation-inference-by-hand]] ๊นŠ์ด ์ถ”์ •์„ ์ถ”๋ก ํ•˜๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ ํ•ด๋‹น ๊ธฐ๋Šฅ์„ ์ œ๊ณตํ•˜๋Š” [`pipeline`]์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. [Hugging Face Hub ์ฒดํฌํฌ์ธํŠธ](https://huggingface.co/models?pipeline_tag=depth-estimation&sort=downloads)์—์„œ ํŒŒ์ดํ”„๋ผ์ธ์„ ์ดˆ๊ธฐํ™”ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import pipeline >>> checkpoint = "vinvino02/glpn-nyu" >>> depth_estimator = pipeline("depth-estimation", model=checkpoint) ``` ๋‹ค์Œ์œผ๋กœ, ๋ถ„์„ํ•  ์ด๋ฏธ์ง€๋ฅผ ํ•œ ์žฅ ์„ ํƒํ•˜์„ธ์š”: ```py >>> from PIL import Image >>> import requests >>> url = "https://unsplash.com/photos/HwBAsSbPBDU/download?ixid=MnwxMjA3fDB8MXxzZWFyY2h8MzR8fGNhciUyMGluJTIwdGhlJTIwc3RyZWV0fGVufDB8MHx8fDE2Nzg5MDEwODg&force=true&w=640" >>> image = Image.open(requests.get(url, stream=True).raw) >>> image ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/depth-estimation-example.jpg" alt="Photo of a busy street"/> </div> ์ด๋ฏธ์ง€๋ฅผ ํŒŒ์ดํ”„๋ผ์ธ์œผ๋กœ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. ```py >>> predictions = depth_estimator(image) ``` ํŒŒ์ดํ”„๋ผ์ธ์€ ๋‘ ๊ฐœ์˜ ํ•ญ๋ชฉ์„ ๊ฐ€์ง€๋Š” ๋”•์…”๋„ˆ๋ฆฌ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ์ฒซ ๋ฒˆ์งธ๋Š” `predicted_depth`๋กœ ๊ฐ ํ”ฝ์…€์˜ ๊นŠ์ด๋ฅผ ๋ฏธํ„ฐ๋กœ ํ‘œํ˜„ํ•œ ๊ฐ’์„ ๊ฐ€์ง€๋Š” ํ…์„œ์ž…๋‹ˆ๋‹ค. ๋‘ ๋ฒˆ์งธ๋Š” `depth`๋กœ ๊นŠ์ด ์ถ”์ • ๊ฒฐ๊ณผ๋ฅผ ์‹œ๊ฐํ™”ํ•˜๋Š” PIL ์ด๋ฏธ์ง€์ž…๋‹ˆ๋‹ค. 
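์˜ˆ๋ฅผ ๋“ค์–ด `predicted_depth` ํ…์„œ๋ฅผ ์ง์ ‘ ํ™•์ธํ•ด ๋ณด๋ ค๋ฉด ์•„๋ž˜์™€ ๊ฐ™์ด ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ถœ๋ ฅ๋˜๋Š” ํฌ๊ธฐ์™€ ๊ฐ’์˜ ๋ฒ”์œ„๋Š” ์‚ฌ์šฉํ•˜๋Š” ์ฒดํฌํฌ์ธํŠธ์™€ ์ž…๋ ฅ ์ด๋ฏธ์ง€์— ๋”ฐ๋ผ ๋‹ฌ๋ผ์งˆ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค:

```py
>>> depth_map = predictions["predicted_depth"].squeeze().numpy()

>>> # ๋ชจ๋ธ ์ž…๋ ฅ ํ•ด์ƒ๋„ ๊ธฐ์ค€์˜ ๊นŠ์ด ๋งต ํฌ๊ธฐ์™€ ๊ฐ’ ๋ฒ”์œ„๋ฅผ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค
>>> depth_map.shape, depth_map.min(), depth_map.max()
```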
์ด์ œ ์‹œ๊ฐํ™”ํ•œ ๊ฒฐ๊ณผ๋ฅผ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> predictions["depth"] ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/depth-visualization.png" alt="Depth estimation visualization"/> </div> ## ์ง์ ‘ ๊นŠ์ด ์ถ”์ • ์ถ”๋ก ํ•˜๊ธฐ[[depth-estimation-inference-by-hand]] ์ด์ œ ๊นŠ์ด ์ถ”์ • ํŒŒ์ดํ”„๋ผ์ธ ์‚ฌ์šฉ๋ฒ•์„ ์‚ดํŽด๋ณด์•˜์œผ๋‹ˆ ๋™์ผํ•œ ๊ฒฐ๊ณผ๋ฅผ ๋ณต์ œํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. [Hugging Face Hub ์ฒดํฌํฌ์ธํŠธ](https://huggingface.co/models?pipeline_tag=depth-estimation&sort=downloads)์—์„œ ๋ชจ๋ธ๊ณผ ๊ด€๋ จ ํ”„๋กœ์„ธ์„œ๋ฅผ ๊ฐ€์ ธ์˜ค๋Š” ๊ฒƒ๋ถ€ํ„ฐ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ ์ด์ „์— ์‚ฌ์šฉํ•œ ์ฒดํฌํฌ์ธํŠธ์™€ ๋™์ผํ•œ ๊ฒƒ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoImageProcessor, AutoModelForDepthEstimation >>> checkpoint = "vinvino02/glpn-nyu" >>> image_processor = AutoImageProcessor.from_pretrained(checkpoint) >>> model = AutoModelForDepthEstimation.from_pretrained(checkpoint) ``` ํ•„์š”ํ•œ ์ด๋ฏธ์ง€ ๋ณ€ํ™˜์„ ์ฒ˜๋ฆฌํ•˜๋Š” `image_processor`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์— ๋Œ€ํ•œ ์ด๋ฏธ์ง€ ์ž…๋ ฅ์„ ์ค€๋น„ํ•ฉ๋‹ˆ๋‹ค. `image_processor`๋Š” ํฌ๊ธฐ ์กฐ์ • ๋ฐ ์ •๊ทœํ™” ๋“ฑ ํ•„์š”ํ•œ ์ด๋ฏธ์ง€ ๋ณ€ํ™˜์„ ์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค: ```py >>> pixel_values = image_processor(image, return_tensors="pt").pixel_values ``` ์ค€๋น„ํ•œ ์ž…๋ ฅ์„ ๋ชจ๋ธ๋กœ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค: ```py >>> import torch >>> with torch.no_grad(): ... outputs = model(pixel_values) ... predicted_depth = outputs.predicted_depth ``` ๊ฒฐ๊ณผ๋ฅผ ์‹œ๊ฐํ™”ํ•ฉ๋‹ˆ๋‹ค: ```py >>> import numpy as np >>> # ์›๋ณธ ์‚ฌ์ด์ฆˆ๋กœ ๋ณต์› >>> prediction = torch.nn.functional.interpolate( ... predicted_depth.unsqueeze(1), ... size=image.size[::-1], ... mode="bicubic", ... align_corners=False, ... ).squeeze() >>> output = prediction.numpy() >>> formatted = (output * 255 / np.max(output)).astype("uint8") >>> depth = Image.fromarray(formatted) >>> depth ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/depth-visualization.png" alt="Depth estimation visualization"/> </div>
0
mavonic_private_repos/transformers/docs/source/ko
mavonic_private_repos/transformers/docs/source/ko/tasks/image_captioning.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์ด๋ฏธ์ง€ ์บก์…”๋‹[[image-captioning]] [[open-in-colab]] ์ด๋ฏธ์ง€ ์บก์…”๋‹(Image captioning)์€ ์ฃผ์–ด์ง„ ์ด๋ฏธ์ง€์— ๋Œ€ํ•œ ์บก์…˜์„ ์˜ˆ์ธกํ•˜๋Š” ์ž‘์—…์ž…๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ์บก์…”๋‹์€ ์‹œ๊ฐ ์žฅ์• ์ธ์ด ๋‹ค์–‘ํ•œ ์ƒํ™ฉ์„ ํƒ์ƒ‰ํ•˜๋Š” ๋ฐ ๋„์›€์„ ์ค„ ์ˆ˜ ์žˆ๋„๋ก ์‹œ๊ฐ ์žฅ์• ์ธ์„ ๋ณด์กฐํ•˜๋Š” ๋“ฑ ์‹ค์ƒํ™œ์—์„œ ํ”ํžˆ ํ™œ์šฉ๋ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์ด๋ฏธ์ง€ ์บก์…”๋‹์€ ์ด๋ฏธ์ง€๋ฅผ ์„ค๋ช…ํ•จ์œผ๋กœ์จ ์‚ฌ๋žŒ๋“ค์˜ ์ฝ˜ํ…์ธ  ์ ‘๊ทผ์„ฑ์„ ๊ฐœ์„ ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋ฉ๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ์†Œ๊ฐœํ•  ๋‚ด์šฉ์€ ์•„๋ž˜์™€ ๊ฐ™์Šต๋‹ˆ๋‹ค: * ์ด๋ฏธ์ง€ ์บก์…”๋‹ ๋ชจ๋ธ์„ ํŒŒ์ธํŠœ๋‹ํ•ฉ๋‹ˆ๋‹ค. * ํŒŒ์ธํŠœ๋‹๋œ ๋ชจ๋ธ์„ ์ถ”๋ก ์— ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ชจ๋“  ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install transformers datasets evaluate -q pip install jiwer -q ``` Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜๋ฉด ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๊ณต์œ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•˜์„ธ์š”. ```python from huggingface_hub import notebook_login notebook_login() ``` ## ํฌ์ผ“๋ชฌ BLIP ์บก์…˜ ๋ฐ์ดํ„ฐ์„ธํŠธ ๊ฐ€์ ธ์˜ค๊ธฐ[[load-the-pokmon-blip-captions-dataset]] {์ด๋ฏธ์ง€-์บก์…˜} ์Œ์œผ๋กœ ๊ตฌ์„ฑ๋œ ๋ฐ์ดํ„ฐ์„ธํŠธ๋ฅผ ๊ฐ€์ ธ์˜ค๋ ค๋ฉด ๐Ÿค— Dataset ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. PyTorch์—์„œ ์ž์‹ ๋งŒ์˜ ์ด๋ฏธ์ง€ ์บก์…˜ ๋ฐ์ดํ„ฐ์„ธํŠธ๋ฅผ ๋งŒ๋“ค๋ ค๋ฉด [์ด ๋…ธํŠธ๋ถ](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/GIT/Fine_tune_GIT_on_an_image_captioning_dataset.ipynb)์„ ์ฐธ์กฐํ•˜์„ธ์š”. ```python from datasets import load_dataset ds = load_dataset("lambdalabs/pokemon-blip-captions") ds ``` ```bash DatasetDict({ train: Dataset({ features: ['image', 'text'], num_rows: 833 }) }) ``` ์ด ๋ฐ์ดํ„ฐ์„ธํŠธ๋Š” `image`์™€ `text`๋ผ๋Š” ๋‘ ํŠน์„ฑ์„ ๊ฐ€์ง€๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. <Tip> ๋งŽ์€ ์ด๋ฏธ์ง€ ์บก์…˜ ๋ฐ์ดํ„ฐ์„ธํŠธ์—๋Š” ์ด๋ฏธ์ง€๋‹น ์—ฌ๋Ÿฌ ๊ฐœ์˜ ์บก์…˜์ด ํฌํ•จ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๊ฒฝ์šฐ, ์ผ๋ฐ˜์ ์œผ๋กœ ํ•™์Šต ์ค‘์— ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ ์บก์…˜ ์ค‘์—์„œ ๋ฌด์ž‘์œ„๋กœ ์ƒ˜ํ”Œ์„ ์ถ”์ถœํ•ฉ๋‹ˆ๋‹ค. </Tip> [`~datasets.Dataset.train_test_split`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ์„ธํŠธ์˜ ํ•™์Šต ๋ถ„ํ• ์„ ํ•™์Šต ๋ฐ ํ…Œ์ŠคํŠธ ์„ธํŠธ๋กœ ๋‚˜๋ˆ•๋‹ˆ๋‹ค: ```python ds = ds["train"].train_test_split(test_size=0.1) train_ds = ds["train"] test_ds = ds["test"] ``` ํ•™์Šต ์„ธํŠธ์˜ ์ƒ˜ํ”Œ ๋ช‡ ๊ฐœ๋ฅผ ์‹œ๊ฐํ™”ํ•ด ๋ด…์‹œ๋‹ค. Let's visualize a couple of samples from the training set. 
```python from textwrap import wrap import matplotlib.pyplot as plt import numpy as np def plot_images(images, captions): plt.figure(figsize=(20, 20)) for i in range(len(images)): ax = plt.subplot(1, len(images), i + 1) caption = captions[i] caption = "\n".join(wrap(caption, 12)) plt.title(caption) plt.imshow(images[i]) plt.axis("off") sample_images_to_visualize = [np.array(train_ds[i]["image"]) for i in range(5)] sample_captions = [train_ds[i]["text"] for i in range(5)] plot_images(sample_images_to_visualize, sample_captions) ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/sample_training_images_image_cap.png" alt="Sample training images"/> </div> ## ๋ฐ์ดํ„ฐ์„ธํŠธ ์ „์ฒ˜๋ฆฌ[[preprocess-the-dataset]] ๋ฐ์ดํ„ฐ์„ธํŠธ์—๋Š” ์ด๋ฏธ์ง€์™€ ํ…์ŠคํŠธ๋ผ๋Š” ๋‘ ๊ฐ€์ง€ ์–‘์‹์ด ์žˆ๊ธฐ ๋•Œ๋ฌธ์—, ์ „์ฒ˜๋ฆฌ ํŒŒ์ดํ”„๋ผ์ธ์—์„œ ์ด๋ฏธ์ง€์™€ ์บก์…˜์„ ๋ชจ๋‘ ์ „์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค. ์ „์ฒ˜๋ฆฌ ์ž‘์—…์„ ์œ„ํ•ด, ํŒŒ์ธํŠœ๋‹ํ•˜๋ ค๋Š” ๋ชจ๋ธ์— ์—ฐ๊ฒฐ๋œ ํ”„๋กœ์„ธ์„œ ํด๋ž˜์Šค๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. ```python from transformers import AutoProcessor checkpoint = "microsoft/git-base" processor = AutoProcessor.from_pretrained(checkpoint) ``` ํ”„๋กœ์„ธ์„œ๋Š” ๋‚ด๋ถ€์ ์œผ๋กœ ํฌ๊ธฐ ์กฐ์ • ๋ฐ ํ”ฝ์…€ ํฌ๊ธฐ ์กฐ์ •์„ ํฌํ•จํ•œ ์ด๋ฏธ์ง€ ์ „์ฒ˜๋ฆฌ๋ฅผ ์ˆ˜ํ–‰ํ•˜๊ณ  ์บก์…˜์„ ํ† ํฐํ™”ํ•ฉ๋‹ˆ๋‹ค. ```python def transforms(example_batch): images = [x for x in example_batch["image"]] captions = [x for x in example_batch["text"]] inputs = processor(images=images, text=captions, padding="max_length") inputs.update({"labels": inputs["input_ids"]}) return inputs train_ds.set_transform(transforms) test_ds.set_transform(transforms) ``` ๋ฐ์ดํ„ฐ์„ธํŠธ๊ฐ€ ์ค€๋น„๋˜์—ˆ์œผ๋‹ˆ ์ด์ œ ํŒŒ์ธํŠœ๋‹์„ ์œ„ํ•ด ๋ชจ๋ธ์„ ์„ค์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ## ๊ธฐ๋ณธ ๋ชจ๋ธ ๊ฐ€์ ธ์˜ค๊ธฐ[[load-a-base-model]] ["microsoft/git-base"](https://huggingface.co/microsoft/git-base)๋ฅผ [`AutoModelForCausalLM`](https://huggingface.co/docs/transformers/model_doc/auto#transformers.AutoModelForCausalLM) ๊ฐ์ฒด๋กœ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. ```python from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained(checkpoint) ``` ## ํ‰๊ฐ€[[evaluate]] ์ด๋ฏธ์ง€ ์บก์…˜ ๋ชจ๋ธ์€ ์ผ๋ฐ˜์ ์œผ๋กœ [Rouge ์ ์ˆ˜](https://huggingface.co/spaces/evaluate-metric/rouge) ๋˜๋Š” [๋‹จ์–ด ์˜ค๋ฅ˜์œจ(Word Error Rate)](https://huggingface.co/spaces/evaluate-metric/wer)๋กœ ํ‰๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ๋‹จ์–ด ์˜ค๋ฅ˜์œจ(WER)์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฅผ ์œ„ํ•ด ๐Ÿค— Evaluate ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. WER์˜ ์ž ์žฌ์  ์ œํ•œ ์‚ฌํ•ญ ๋ฐ ๊ธฐํƒ€ ๋ฌธ์ œ์ ์€ [์ด ๊ฐ€์ด๋“œ](https://huggingface.co/spaces/evaluate-metric/wer)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ```python from evaluate import load import torch wer = load("wer") def compute_metrics(eval_pred): logits, labels = eval_pred predicted = logits.argmax(-1) decoded_labels = processor.batch_decode(labels, skip_special_tokens=True) decoded_predictions = processor.batch_decode(predicted, skip_special_tokens=True) wer_score = wer.compute(predictions=decoded_predictions, references=decoded_labels) return {"wer_score": wer_score} ``` ## ํ•™์Šต![[train!]] ์ด์ œ ๋ชจ๋ธ ํŒŒ์ธํŠœ๋‹์„ ์‹œ์ž‘ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฅผ ์œ„ํ•ด ๐Ÿค— [`Trainer`]๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ๋จผ์ €, [`TrainingArguments`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ•™์Šต ์ธ์ˆ˜๋ฅผ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. 
```python from transformers import TrainingArguments, Trainer model_name = checkpoint.split("/")[1] training_args = TrainingArguments( output_dir=f"{model_name}-pokemon", learning_rate=5e-5, num_train_epochs=50, fp16=True, per_device_train_batch_size=32, per_device_eval_batch_size=32, gradient_accumulation_steps=2, save_total_limit=3, eval_strategy="steps", eval_steps=50, save_strategy="steps", save_steps=50, logging_steps=50, remove_unused_columns=False, push_to_hub=True, label_names=["labels"], load_best_model_at_end=True, ) ``` ํ•™์Šต ์ธ์ˆ˜๋ฅผ ๋ฐ์ดํ„ฐ์„ธํŠธ, ๋ชจ๋ธ๊ณผ ํ•จ๊ป˜ ๐Ÿค— Trainer์— ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. ```python trainer = Trainer( model=model, args=training_args, train_dataset=train_ds, eval_dataset=test_ds, compute_metrics=compute_metrics, ) ``` ํ•™์Šต์„ ์‹œ์ž‘ํ•˜๋ ค๋ฉด [`Trainer`] ๊ฐ์ฒด์—์„œ [`~Trainer.train`]์„ ํ˜ธ์ถœํ•˜๊ธฐ๋งŒ ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ```python trainer.train() ``` ํ•™์Šต์ด ์ง„ํ–‰๋˜๋ฉด์„œ ํ•™์Šต ์†์‹ค์ด ์›ํ™œํ•˜๊ฒŒ ๊ฐ์†Œํ•˜๋Š” ๊ฒƒ์„ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•™์Šต์ด ์™„๋ฃŒ๋˜๋ฉด ๋ชจ๋“  ์‚ฌ๋žŒ์ด ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก [`~Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ๊ณต์œ ํ•˜์„ธ์š”: ```python trainer.push_to_hub() ``` ## ์ถ”๋ก [[inference]] `test_ds`์—์„œ ์ƒ˜ํ”Œ ์ด๋ฏธ์ง€๋ฅผ ๊ฐ€์ ธ์™€ ๋ชจ๋ธ์„ ํ…Œ์ŠคํŠธํ•ฉ๋‹ˆ๋‹ค. ```python from PIL import Image import requests url = "https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/pokemon.png" image = Image.open(requests.get(url, stream=True).raw) image ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/test_image_image_cap.png" alt="Test image"/> </div> ๋ชจ๋ธ์— ์‚ฌ์šฉํ•  ์ด๋ฏธ์ง€๋ฅผ ์ค€๋น„ํ•ฉ๋‹ˆ๋‹ค. ```python device = "cuda" if torch.cuda.is_available() else "cpu" inputs = processor(images=image, return_tensors="pt").to(device) pixel_values = inputs.pixel_values ``` [`generate`]๋ฅผ ํ˜ธ์ถœํ•˜๊ณ  ์˜ˆ์ธก์„ ๋””์ฝ”๋”ฉํ•ฉ๋‹ˆ๋‹ค. ```python generated_ids = model.generate(pixel_values=pixel_values, max_length=50) generated_caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] print(generated_caption) ``` ```bash a drawing of a pink and blue pokemon ``` ํŒŒ์ธํŠœ๋‹๋œ ๋ชจ๋ธ์ด ๊ฝค ๊ดœ์ฐฎ์€ ์บก์…˜์„ ์ƒ์„ฑํ•œ ๊ฒƒ ๊ฐ™์Šต๋‹ˆ๋‹ค!
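Hub์— ์—…๋กœ๋“œํ•œ ๋ชจ๋ธ์€ [`pipeline`]๋กœ๋„ ๊ฐ„๋‹จํžˆ ์‹œํ—˜ํ•ด ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ์˜ˆ์‹œ ์Šค์ผ€์น˜์ด๋ฉฐ, `"your-username/git-base-pokemon"`์€ ์‹ค์ œ๋กœ ์—…๋กœ๋“œ๋œ ๋ชจ๋ธ ID๋กœ ๋ฐ”๊ฟ”์•ผ ํ•˜๋Š” ๊ฐ€์ƒ์˜ ๊ฐ’์ž…๋‹ˆ๋‹ค.

```python
from transformers import pipeline

# image-to-text ํŒŒ์ดํ”„๋ผ์ธ์œผ๋กœ ํŒŒ์ธํŠœ๋‹๋œ ์บก์…”๋‹ ๋ชจ๋ธ์„ ๋ถˆ๋Ÿฌ์˜ต๋‹ˆ๋‹ค
captioner = pipeline("image-to-text", model="your-username/git-base-pokemon")

# ์œ„์—์„œ ๋ถˆ๋Ÿฌ์˜จ PIL ์ด๋ฏธ์ง€๋ฅผ ๊ทธ๋Œ€๋กœ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค
captioner(image)
```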
0
mavonic_private_repos/transformers/docs/source/ko
mavonic_private_repos/transformers/docs/source/ko/tasks/summarization.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์š”์•ฝ[[summarization]] [[open-in-colab]] <Youtube id="yHnr5Dk2zCI"/> ์š”์•ฝ์€ ๋ฌธ์„œ๋‚˜ ๊ธฐ์‚ฌ์—์„œ ์ค‘์š”ํ•œ ์ •๋ณด๋ฅผ ๋ชจ๋‘ ํฌํ•จํ•˜๋˜ ์งง๊ฒŒ ๋งŒ๋“œ๋Š” ์ผ์ž…๋‹ˆ๋‹ค. ๋ฒˆ์—ญ๊ณผ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ, ์‹œํ€€์Šค-ํˆฌ-์‹œํ€€์Šค ๋ฌธ์ œ๋กœ ๊ตฌ์„ฑํ•  ์ˆ˜ ์žˆ๋Š” ๋Œ€ํ‘œ์ ์ธ ์ž‘์—… ์ค‘ ํ•˜๋‚˜์ž…๋‹ˆ๋‹ค. ์š”์•ฝ์—๋Š” ์•„๋ž˜์™€ ๊ฐ™์ด ์œ ํ˜•์ด ์žˆ์Šต๋‹ˆ๋‹ค: - ์ถ”์ถœ(Extractive) ์š”์•ฝ: ๋ฌธ์„œ์—์„œ ๊ฐ€์žฅ ๊ด€๋ จ์„ฑ ๋†’์€ ์ •๋ณด๋ฅผ ์ถ”์ถœํ•ฉ๋‹ˆ๋‹ค. - ์ƒ์„ฑ(Abstractive) ์š”์•ฝ: ๊ฐ€์žฅ ๊ด€๋ จ์„ฑ ๋†’์€ ์ •๋ณด๋ฅผ ํฌ์ฐฉํ•ด๋‚ด๋Š” ์ƒˆ๋กœ์šด ํ…์ŠคํŠธ๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ ์†Œ๊ฐœํ•  ๋‚ด์šฉ์€ ์•„๋ž˜์™€ ๊ฐ™์Šต๋‹ˆ๋‹ค: 1. ์ƒ์„ฑ ์š”์•ฝ์„ ์œ„ํ•œ [BillSum](https://huggingface.co/datasets/billsum) ๋ฐ์ดํ„ฐ์…‹ ์ค‘ ์บ˜๋ฆฌํฌ๋‹ˆ์•„ ์ฃผ ๋ฒ•์•ˆ ํ•˜์œ„ ์ง‘ํ•ฉ์œผ๋กœ [T5](https://huggingface.co/google-t5/t5-small)๋ฅผ ํŒŒ์ธํŠœ๋‹ํ•ฉ๋‹ˆ๋‹ค. 2. ํŒŒ์ธํŠœ๋‹๋œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜์—ฌ ์ถ”๋ก ํ•ฉ๋‹ˆ๋‹ค. <Tip> ์ด ์ž‘์—…๊ณผ ํ˜ธํ™˜๋˜๋Š” ๋ชจ๋“  ์•„ํ‚คํ…์ฒ˜์™€ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋ณด๋ ค๋ฉด [์ž‘์—… ํŽ˜์ด์ง€](https://huggingface.co/tasks/summarization)๋ฅผ ํ™•์ธํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ๋ชจ๋‘ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install transformers datasets evaluate rouge_score ``` Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜๋ฉด ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๊ณต์œ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•˜์„ธ์š”. ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## BillSum ๋ฐ์ดํ„ฐ์…‹ ๊ฐ€์ ธ์˜ค๊ธฐ[[load-billsum-dataset]] ๐Ÿค— Datasets ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ BillSum ๋ฐ์ดํ„ฐ์…‹์˜ ์ž‘์€ ๋ฒ„์ „์ธ ์บ˜๋ฆฌํฌ๋‹ˆ์•„ ์ฃผ ๋ฒ•์•ˆ ํ•˜์œ„ ์ง‘ํ•ฉ์„ ๊ฐ€์ ธ์˜ค์„ธ์š”: ```py >>> from datasets import load_dataset >>> billsum = load_dataset("billsum", split="ca_test") ``` [`~datasets.Dataset.train_test_split`] ๋ฉ”์†Œ๋“œ๋กœ ๋ฐ์ดํ„ฐ์…‹์„ ํ•™์Šต์šฉ์™€ ํ…Œ์ŠคํŠธ์šฉ์œผ๋กœ ๋‚˜๋ˆ„์„ธ์š”: ```py >>> billsum = billsum.train_test_split(test_size=0.2) ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์˜ˆ์‹œ๋ฅผ ํ•˜๋‚˜ ์‚ดํŽด๋ณด์„ธ์š”: ```py >>> billsum["train"][0] {'summary': 'Existing law authorizes state agencies to enter into contracts for the acquisition of goods or services upon approval by the Department of General Services. Existing law sets forth various requirements and prohibitions for those contracts, including, but not limited to, a prohibition on entering into contracts for the acquisition of goods or services of $100,000 or more with a contractor that discriminates between spouses and domestic partners or same-sex and different-sex couples in the provision of benefits. 
Existing law provides that a contract entered into in violation of those requirements and prohibitions is void and authorizes the state or any person acting on behalf of the state to bring a civil action seeking a determination that a contract is in violation and therefore void. Under existing law, a willful violation of those requirements and prohibitions is a misdemeanor.\nThis bill would also prohibit a state agency from entering into contracts for the acquisition of goods or services of $100,000 or more with a contractor that discriminates between employees on the basis of gender identity in the provision of benefits, as specified. By expanding the scope of a crime, this bill would impose a state-mandated local program.\nThe California Constitution requires the state to reimburse local agencies and school districts for certain costs mandated by the state. Statutory provisions establish procedures for making that reimbursement.\nThis bill would provide that no reimbursement is required by this act for a specified reason.', 'text': 'The people of the State of California do enact as follows:\n\n\nSECTION 1.\nSection 10295.35 is added to the Public Contract Code, to read:\n10295.35.\n(a) (1) Notwithstanding any other law, a state agency shall not enter into any contract for the acquisition of goods or services in the amount of one hundred thousand dollars ($100,000) or more with a contractor that, in the provision of benefits, discriminates between employees on the basis of an employeeโ€™s or dependentโ€™s actual or perceived gender identity, including, but not limited to, the employeeโ€™s or dependentโ€™s identification as transgender.\n(2) For purposes of this section, โ€œcontractโ€ includes contracts with a cumulative amount of one hundred thousand dollars ($100,000) or more per contractor in each fiscal year.\n(3) For purposes of this section, an employee health plan is discriminatory if the plan is not consistent with Section 1365.5 of the Health and Safety Code and Section 10140 of the Insurance Code.\n(4) The requirements of this section shall apply only to those portions of a contractorโ€™s operations that occur under any of the following conditions:\n(A) Within the state.\n(B) On real property outside the state if the property is owned by the state or if the state has a right to occupy the property, and if the contractorโ€™s presence at that location is connected to a contract with the state.\n(C) Elsewhere in the United States where work related to a state contract is being performed.\n(b) Contractors shall treat as confidential, to the maximum extent allowed by law or by the requirement of the contractorโ€™s insurance provider, any request by an employee or applicant for employment benefits or any documentation of eligibility for benefits submitted by an employee or applicant for employment.\n(c) After taking all reasonable measures to find a contractor that complies with this section, as determined by the state agency, the requirements of this section may be waived under any of the following circumstances:\n(1) There is only one prospective contractor willing to enter into a specific contract with the state agency.\n(2) The contract is necessary to respond to an emergency, as determined by the state agency, that endangers the public health, welfare, or safety, or the contract is necessary for the provision of essential services, and no entity that complies with the requirements of this section capable of responding to the emergency is immediately available.\n(3) The 
requirements of this section violate, or are inconsistent with, the terms or conditions of a grant, subvention, or agreement, if the agency has made a good faith attempt to change the terms or conditions of any grant, subvention, or agreement to authorize application of this section.\n(4) The contractor is providing wholesale or bulk water, power, or natural gas, the conveyance or transmission of the same, or ancillary services, as required for ensuring reliable services in accordance with good utility practice, if the purchase of the same cannot practically be accomplished through the standard competitive bidding procedures and the contractor is not providing direct retail services to end users.\n(d) (1) A contractor shall not be deemed to discriminate in the provision of benefits if the contractor, in providing the benefits, pays the actual costs incurred in obtaining the benefit.\n(2) If a contractor is unable to provide a certain benefit, despite taking reasonable measures to do so, the contractor shall not be deemed to discriminate in the provision of benefits.\n(e) (1) Every contract subject to this chapter shall contain a statement by which the contractor certifies that the contractor is in compliance with this section.\n(2) The department or other contracting agency shall enforce this section pursuant to its existing enforcement powers.\n(3) (A) If a contractor falsely certifies that it is in compliance with this section, the contract with that contractor shall be subject to Article 9 (commencing with Section 10420), unless, within a time period specified by the department or other contracting agency, the contractor provides to the department or agency proof that it has complied, or is in the process of complying, with this section.\n(B) The application of the remedies or penalties contained in Article 9 (commencing with Section 10420) to a contract subject to this chapter shall not preclude the application of any existing remedies otherwise available to the department or other contracting agency under its existing enforcement powers.\n(f) Nothing in this section is intended to regulate the contracting practices of any local jurisdiction.\n(g) This section shall be construed so as not to conflict with applicable federal laws, rules, or regulations. In the event that a court or agency of competent jurisdiction holds that federal law, rule, or regulation invalidates any clause, sentence, paragraph, or section of this code or the application thereof to any person or circumstances, it is the intent of the state that the court or agency sever that clause, sentence, paragraph, or section so that the remainder of this section shall remain in effect.\nSEC. 2.\nSection 10295.35 of the Public Contract Code shall not be construed to create any new enforcement authority or responsibility in the Department of General Services or any other contracting agency.\nSEC. 
3.\nNo reimbursement is required by this act pursuant to Section 6 of Article XIII\u2009B of the California Constitution because the only costs that may be incurred by a local agency or school district will be incurred because this act creates a new crime or infraction, eliminates a crime or infraction, or changes the penalty for a crime or infraction, within the meaning of Section 17556 of the Government Code, or changes the definition of a crime within the meaning of Section 6 of Article XIII\u2009B of the California Constitution.', 'title': 'An act to add Section 10295.35 to the Public Contract Code, relating to public contracts.'} ``` ์—ฌ๊ธฐ์„œ ๋‹ค์Œ ๋‘ ๊ฐœ์˜ ํ•„๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๊ฒŒ ๋ฉ๋‹ˆ๋‹ค: - `text`: ๋ชจ๋ธ์˜ ์ž…๋ ฅ์ด ๋  ๋ฒ•์•ˆ ํ…์ŠคํŠธ์ž…๋‹ˆ๋‹ค. - `summary`: `text`์˜ ๊ฐ„๋žตํ•œ ๋ฒ„์ „์œผ๋กœ ๋ชจ๋ธ์˜ ํƒ€๊ฒŸ์ด ๋ฉ๋‹ˆ๋‹ค. ## ์ „์ฒ˜๋ฆฌ[[preprocess]] ๋‹ค์Œ์œผ๋กœ `text`์™€ `summary`๋ฅผ ์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•œ T5 ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> checkpoint = "google-t5/t5-small" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) ``` ์ƒ์„ฑํ•˜๋ ค๋Š” ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋Š” ์•„๋ž˜ ์กฐ๊ฑด์„ ๋งŒ์กฑํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: 1. ์ž…๋ ฅ ์•ž์— ํ”„๋กฌํ”„ํŠธ๋ฅผ ๋ถ™์—ฌ T5๊ฐ€ ์š”์•ฝ ์ž‘์—…์ž„์„ ์ธ์‹ํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๋Ÿฌ NLP ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ๋Š” ์ผ๋ถ€ ๋ชจ๋ธ์€ ํŠน์ • ์ž‘์—…์— ๋Œ€ํ•œ ํ”„๋กฌํ”„ํŠธ๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. 2. ๋ ˆ์ด๋ธ”์„ ํ† ํฐํ™”ํ•  ๋•Œ `text_target` ์ธ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. 3. `max_length` ๋งค๊ฐœ๋ณ€์ˆ˜๋กœ ์„ค์ •๋œ ์ตœ๋Œ€ ๊ธธ์ด๋ฅผ ๋„˜์ง€ ์•Š๋„๋ก ๊ธด ์‹œํ€€์Šค๋ฅผ ์ž˜๋ผ๋ƒ…๋‹ˆ๋‹ค. ```py >>> prefix = "summarize: " >>> def preprocess_function(examples): ... inputs = [prefix + doc for doc in examples["text"]] ... model_inputs = tokenizer(inputs, max_length=1024, truncation=True) ... labels = tokenizer(text_target=examples["summary"], max_length=128, truncation=True) ... model_inputs["labels"] = labels["input_ids"] ... return model_inputs ``` ์ „์ฒด ๋ฐ์ดํ„ฐ์…‹์— ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•˜๋ ค๋ฉด ๐Ÿค— Datasets์˜ [`~datasets.Dataset.map`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. `batched=True`๋กœ ์„ค์ •ํ•˜์—ฌ ๋ฐ์ดํ„ฐ์…‹์˜ ์—ฌ๋Ÿฌ ์š”์†Œ๋ฅผ ํ•œ ๋ฒˆ์— ์ฒ˜๋ฆฌํ•˜๋ฉด `map` ํ•จ์ˆ˜์˜ ์†๋„๋ฅผ ๋†’์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> tokenized_billsum = billsum.map(preprocess_function, batched=True) ``` ์ด์ œ [`DataCollatorForSeq2Seq`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์˜ˆ์ œ ๋ฐฐ์น˜๋ฅผ ๋งŒ๋“œ์„ธ์š”. ์ „์ฒด ๋ฐ์ดํ„ฐ์…‹์„ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ํŒจ๋”ฉํ•˜๋Š” ๊ฒƒ๋ณด๋‹ค ๋ฐฐ์น˜๋งˆ๋‹ค ๊ฐ€์žฅ ๊ธด ๋ฌธ์žฅ ๊ธธ์ด์— ๋งž์ถฐ *๋™์  ํŒจ๋”ฉ*ํ•˜๋Š” ๊ฒƒ์ด ๋” ํšจ์œจ์ ์ž…๋‹ˆ๋‹ค. <frameworkcontent> <pt> ```py >>> from transformers import DataCollatorForSeq2Seq >>> data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint) ``` </pt> <tf> ```py >>> from transformers import DataCollatorForSeq2Seq >>> data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint, return_tensors="tf") ``` </tf> </frameworkcontent> ## ํ‰๊ฐ€[[evaluate]] ํ•™์Šต ์ค‘์— ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ํฌํ•จํ•˜๋ฉด ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์„ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์Šต๋‹ˆ๋‹ค. ๐Ÿค— [Evaluate](https://huggingface.co/docs/evaluate/index) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ํ‰๊ฐ€ ๋ฐฉ๋ฒ•์„ ๋น ๋ฅด๊ฒŒ ๋ถˆ๋Ÿฌ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์ž‘์—…์—์„œ๋Š” [ROUGE](https://huggingface.co/spaces/evaluate-metric/rouge) ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. (ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ๋ถˆ๋Ÿฌ์˜ค๊ณ  ๊ณ„์‚ฐํ•˜๋Š” ๋ฐฉ๋ฒ•์€ ๐Ÿค— Evaluate [๋‘˜๋Ÿฌ๋ณด๊ธฐ](https://huggingface.co/docs/evaluate/a_quick_tour)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”.) 
```py >>> import evaluate >>> rouge = evaluate.load("rouge") ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์˜ˆ์ธก๊ฐ’๊ณผ ๋ ˆ์ด๋ธ”์„ [`~evaluate.EvaluationModule.compute`]์— ์ „๋‹ฌํ•˜์—ฌ ROUGE ์ง€ํ‘œ๋ฅผ ๊ณ„์‚ฐํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค: ```py >>> import numpy as np >>> def compute_metrics(eval_pred): ... predictions, labels = eval_pred ... decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True) ... labels = np.where(labels != -100, labels, tokenizer.pad_token_id) ... decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True) ... result = rouge.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True) ... prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in predictions] ... result["gen_len"] = np.mean(prediction_lens) ... return {k: round(v, 4) for k, v in result.items()} ``` ์ด์ œ `compute_metrics` ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์œผ๋ฉฐ, ํ•™์Šต์„ ์„ค์ •ํ•  ๋•Œ ์ด ํ•จ์ˆ˜๋กœ ๋˜๋Œ์•„์˜ฌ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ## ํ•™์Šต[[train]] <frameworkcontent> <pt> <Tip> ๋ชจ๋ธ์„ [`Trainer`]๋กœ ํŒŒ์ธํŠœ๋‹ ํ•˜๋Š” ๊ฒƒ์ด ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด, [์—ฌ๊ธฐ](../training#train-with-pytorch-trainer)์—์„œ ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ์„ ํ™•์ธํ•ด๋ณด์„ธ์š”! </Tip> ์ด์ œ ๋ชจ๋ธ ํ•™์Šต์„ ์‹œ์ž‘ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`AutoModelForSeq2SeqLM`]๋กœ T5๋ฅผ ๊ฐ€์ ธ์˜ค์„ธ์š”: ```py >>> from transformers import AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments, Seq2SeqTrainer >>> model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint) ``` ์ด์ œ ์„ธ ๋‹จ๊ณ„๋งŒ ๋‚จ์•˜์Šต๋‹ˆ๋‹ค: 1. [`Seq2SeqTrainingArguments`]์—์„œ ํ•™์Šต ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •์˜ํ•˜์„ธ์š”. ์œ ์ผํ•œ ํ•„์ˆ˜ ๋งค๊ฐœ๋ณ€์ˆ˜๋Š” ๋ชจ๋ธ์„ ์ €์žฅํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•˜๋Š” `output_dir`์ž…๋‹ˆ๋‹ค. `push_to_hub=True`๋ฅผ ์„ค์ •ํ•˜์—ฌ ์ด ๋ชจ๋ธ์„ Hub์— ํ‘ธ์‹œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค(๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๋ ค๋ฉด Hugging Face์— ๋กœ๊ทธ์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค.) [`Trainer`]๋Š” ๊ฐ ์—ํญ์ด ๋๋‚  ๋•Œ๋งˆ๋‹ค ROUGE ์ง€ํ‘œ๋ฅผ ํ‰๊ฐ€ํ•˜๊ณ  ํ•™์Šต ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ €์žฅํ•ฉ๋‹ˆ๋‹ค. 2. ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ์…‹, ํ† ํฌ๋‚˜์ด์ €, ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ ๋ฐ `compute_metrics` ํ•จ์ˆ˜์™€ ํ•จ๊ป˜ ํ•™์Šต ์ธ์ˆ˜๋ฅผ [`Seq2SeqTrainer`]์— ์ „๋‹ฌํ•˜์„ธ์š”. 3. [`~Trainer.train`]์„ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ํŒŒ์ธํŠœ๋‹ํ•˜์„ธ์š”. ```py >>> training_args = Seq2SeqTrainingArguments( ... output_dir="my_awesome_billsum_model", ... eval_strategy="epoch", ... learning_rate=2e-5, ... per_device_train_batch_size=16, ... per_device_eval_batch_size=16, ... weight_decay=0.01, ... save_total_limit=3, ... num_train_epochs=4, ... predict_with_generate=True, ... fp16=True, ... push_to_hub=True, ... ) >>> trainer = Seq2SeqTrainer( ... model=model, ... args=training_args, ... train_dataset=tokenized_billsum["train"], ... eval_dataset=tokenized_billsum["test"], ... tokenizer=tokenizer, ... data_collator=data_collator, ... compute_metrics=compute_metrics, ... ) >>> trainer.train() ``` ํ•™์Šต์ด ์™„๋ฃŒ๋˜๋ฉด, ๋ˆ„๊ตฌ๋‚˜ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก [`~transformers.Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋กœ Hub์— ๊ณต์œ ํ•ฉ๋‹ˆ๋‹ค: ```py >>> trainer.push_to_hub() ``` </pt> <tf> <Tip> Keras๋กœ ๋ชจ๋ธ ํŒŒ์ธํŠœ๋‹์„ ํ•˜๋Š” ๊ฒƒ์ด ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด, [์—ฌ๊ธฐ](../training#train-a-tensorflow-model-with-keras)์—์„œ ๊ธฐ๋ณธ์ ์ธ ํŠœํ† ๋ฆฌ์–ผ์„ ํ™•์ธํ•˜์„ธ์š”! 
</Tip> TensorFlow์—์„œ ๋ชจ๋ธ์„ ํŒŒ์ธํŠœ๋‹ํ•˜๋ ค๋ฉด, ๋จผ์ € ์˜ตํ‹ฐ๋งˆ์ด์ €, ํ•™์Šต๋ฅ  ์Šค์ผ€์ค„ ๊ทธ๋ฆฌ๊ณ  ๋ช‡ ๊ฐ€์ง€ ํ•™์Šต ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์„ค์ •ํ•˜์„ธ์š”: ```py >>> from transformers import create_optimizer, AdamWeightDecay >>> optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01) ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ [`TFAutoModelForSeq2SeqLM`]์„ ์‚ฌ์šฉํ•˜์—ฌ T5๋ฅผ ๊ฐ€์ ธ์˜ค์„ธ์š”: ```py >>> from transformers import TFAutoModelForSeq2SeqLM >>> model = TFAutoModelForSeq2SeqLM.from_pretrained(checkpoint) ``` [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ์…‹์„ `tf.data.Dataset` ํ˜•์‹์œผ๋กœ ๋ณ€ํ™˜ํ•˜์„ธ์š”: ```py >>> tf_train_set = model.prepare_tf_dataset( ... tokenized_billsum["train"], ... shuffle=True, ... batch_size=16, ... collate_fn=data_collator, ... ) >>> tf_test_set = model.prepare_tf_dataset( ... tokenized_billsum["test"], ... shuffle=False, ... batch_size=16, ... collate_fn=data_collator, ... ) ``` [`compile`](https://keras.io/api/models/model_training_apis/#compile-method)์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํ•™์Šตํ•  ์ˆ˜ ์žˆ๋„๋ก ๊ตฌ์„ฑํ•˜์„ธ์š”: ```py >>> import tensorflow as tf >>> model.compile(optimizer=optimizer) ``` ํ•™์Šต์„ ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ์„ค์ •ํ•ด์•ผ ํ•  ๋งˆ์ง€๋ง‰ ๋‘ ๊ฐ€์ง€๋Š” ์˜ˆ์ธก์—์„œ ROUGE ์ ์ˆ˜๋ฅผ ๊ณ„์‚ฐํ•˜๊ณ  ๋ชจ๋ธ์„ Hub์— ํ‘ธ์‹œํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์ œ๊ณตํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋‘ ์ž‘์—… ๋ชจ๋‘ [Keras callbacks](../main_classes/keras_callbacks)์œผ๋กœ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [`~transformers.KerasMetricCallback`]์— `compute_metrics` ํ•จ์ˆ˜๋ฅผ ์ „๋‹ฌํ•˜์„ธ์š”: ```py >>> from transformers.keras_callbacks import KerasMetricCallback >>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set) ``` [`~transformers.PushToHubCallback`]์—์„œ ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ํ‘ธ์‹œํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•˜์„ธ์š”: ```py >>> from transformers.keras_callbacks import PushToHubCallback >>> push_to_hub_callback = PushToHubCallback( ... output_dir="my_awesome_billsum_model", ... tokenizer=tokenizer, ... ) ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์ฝœ๋ฐฑ์„ ๋ฒˆ๋“ค๋กœ ๋ฌถ์–ด์ค๋‹ˆ๋‹ค: ```py >>> callbacks = [metric_callback, push_to_hub_callback] ``` ๋“œ๋””์–ด ๋ชจ๋ธ ํ•™์Šต์„ ์‹œ์ž‘ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! ํ•™์Šต ๋ฐ ๊ฒ€์ฆ ๋ฐ์ดํ„ฐ์…‹, ์—ํญ ์ˆ˜ ๋ฐ ์ฝœ๋ฐฑ๊ณผ ํ•จ๊ป˜ [`fit`](https://keras.io/api/models/model_training_apis/#fit-method)์„ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ํŒŒ์ธํŠœ๋‹ํ•˜์„ธ์š”. ```py >>> model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=callbacks) ``` ํ•™์Šต์ด ์™„๋ฃŒ๋˜๋ฉด ๋ชจ๋ธ์ด ์ž๋™์œผ๋กœ Hub์— ์—…๋กœ๋“œ๋˜์–ด ๋ˆ„๊ตฌ๋‚˜ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๊ฒŒ ๋ฉ๋‹ˆ๋‹ค! </tf> </frameworkcontent> <Tip> ์š”์•ฝ์„ ์œ„ํ•ด ๋ชจ๋ธ์„ ํŒŒ์ธํŠœ๋‹ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ๋” ์ž์„ธํ•œ ์˜ˆ์ œ๋ฅผ ๋ณด๋ ค๋ฉด [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization.ipynb) ๋˜๋Š” [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization-tf.ipynb)์„ ์ฐธ๊ณ ํ•˜์„ธ์š”. </Tip> ## ์ถ”๋ก [[inference]] ์ข‹์•„์š”, ์ด์ œ ๋ชจ๋ธ์„ ํŒŒ์ธํŠœ๋‹ํ–ˆ์œผ๋‹ˆ ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ์š”์•ฝํ•  ํ…์ŠคํŠธ๋ฅผ ์ž‘์„ฑํ•ด๋ณด์„ธ์š”. T5์˜ ๊ฒฝ์šฐ ์ž‘์—…์— ๋”ฐ๋ผ ์ž…๋ ฅ ์•ž์— ์ ‘๋‘์‚ฌ๋ฅผ ๋ถ™์—ฌ์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์š”์•ฝ์˜ ๊ฒฝ์šฐ, ์•„๋ž˜์™€ ๊ฐ™์€ ์ ‘๋‘์‚ฌ๋ฅผ ์ž…๋ ฅ ์•ž์— ๋ถ™์—ฌ์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```py >>> text = "summarize: The Inflation Reduction Act lowers prescription drug costs, health care costs, and energy costs. 
It's the most aggressive action on tackling the climate crisis in American history, which will lift up American workers and create good-paying, union jobs across the country. It'll lower the deficit and ask the ultra-wealthy and corporations to pay their fair share. And no one making under $400,000 per year will pay a penny more in taxes." ``` ์ถ”๋ก ์„ ์œ„ํ•ด ํŒŒ์ธํŠœ๋‹ํ•œ ๋ชจ๋ธ์„ ์‹œํ—˜ํ•ด ๋ณด๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`pipeline`]์—์„œ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜์—ฌ ์š”์•ฝ์„ ์ˆ˜ํ–‰ํ•  [`pipeline`]์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ  ํ…์ŠคํŠธ๋ฅผ ์ „๋‹ฌํ•˜์„ธ์š”: ```py >>> from transformers import pipeline >>> summarizer = pipeline("summarization", model="stevhliu/my_awesome_billsum_model") >>> summarizer(text) [{"summary_text": "The Inflation Reduction Act lowers prescription drug costs, health care costs, and energy costs. It's the most aggressive action on tackling the climate crisis in American history, which will lift up American workers and create good-paying, union jobs across the country."}] ``` ์›ํ•œ๋‹ค๋ฉด ์ˆ˜๋™์œผ๋กœ ๋‹ค์Œ๊ณผ ๊ฐ™์€ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•˜์—ฌ [`pipeline`]์˜ ๊ฒฐ๊ณผ์™€ ๋™์ผํ•œ ๊ฒฐ๊ณผ๋ฅผ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: <frameworkcontent> <pt> ํ…์ŠคํŠธ๋ฅผ ํ† ํฌ๋‚˜์ด์ฆˆํ•˜๊ณ  `input_ids`๋ฅผ PyTorch ํ…์„œ๋กœ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_billsum_model") >>> inputs = tokenizer(text, return_tensors="pt").input_ids ``` ์š”์•ฝ๋ฌธ์„ ์ƒ์„ฑํ•˜๋ ค๋ฉด [`~generation.GenerationMixin.generate`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. ํ…์ŠคํŠธ ์ƒ์„ฑ์— ๋Œ€ํ•œ ๋‹ค์–‘ํ•œ ์ „๋žต๊ณผ ์ƒ์„ฑ์„ ์ œ์–ดํ•˜๊ธฐ ์œ„ํ•œ ๋งค๊ฐœ๋ณ€์ˆ˜์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [ํ…์ŠคํŠธ ์ƒ์„ฑ](../main_classes/text_generation) API๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ```py >>> from transformers import AutoModelForSeq2SeqLM >>> model = AutoModelForSeq2SeqLM.from_pretrained("stevhliu/my_awesome_billsum_model") >>> outputs = model.generate(inputs, max_new_tokens=100, do_sample=False) ``` ์ƒ์„ฑ๋œ ํ† ํฐ ID๋ฅผ ํ…์ŠคํŠธ๋กœ ๋””์ฝ”๋”ฉํ•ฉ๋‹ˆ๋‹ค: ```py >>> tokenizer.decode(outputs[0], skip_special_tokens=True) 'the inflation reduction act lowers prescription drug costs, health care costs, and energy costs. it's the most aggressive action on tackling the climate crisis in american history. it will ask the ultra-wealthy and corporations to pay their fair share.' ``` </pt> <tf> ํ…์ŠคํŠธ๋ฅผ ํ† ํฌ๋‚˜์ด์ฆˆํ•˜๊ณ  `input_ids`๋ฅผ TensorFlow ํ…์„œ๋กœ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_billsum_model") >>> inputs = tokenizer(text, return_tensors="tf").input_ids ``` ์š”์•ฝ๋ฌธ์„ ์ƒ์„ฑํ•˜๋ ค๋ฉด [`~transformers.generation_tf_utils.TFGenerationMixin.generate`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. ํ…์ŠคํŠธ ์ƒ์„ฑ์— ๋Œ€ํ•œ ๋‹ค์–‘ํ•œ ์ „๋žต๊ณผ ์ƒ์„ฑ์„ ์ œ์–ดํ•˜๊ธฐ ์œ„ํ•œ ๋งค๊ฐœ๋ณ€์ˆ˜์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [ํ…์ŠคํŠธ ์ƒ์„ฑ](../main_classes/text_generation) API๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ```py >>> from transformers import TFAutoModelForSeq2SeqLM >>> model = TFAutoModelForSeq2SeqLM.from_pretrained("stevhliu/my_awesome_billsum_model") >>> outputs = model.generate(inputs, max_new_tokens=100, do_sample=False) ``` ์ƒ์„ฑ๋œ ํ† ํฐ ID๋ฅผ ํ…์ŠคํŠธ๋กœ ๋””์ฝ”๋”ฉํ•ฉ๋‹ˆ๋‹ค: ```py >>> tokenizer.decode(outputs[0], skip_special_tokens=True) 'the inflation reduction act lowers prescription drug costs, health care costs, and energy costs. it's the most aggressive action on tackling the climate crisis in american history. 
it will ask the ultra-wealthy and corporations to pay their fair share.' ``` </tf> </frameworkcontent>
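생성 결과를 좀 더 제어하고 싶다면, 위에서 링크한 [텍스트 생성](../main_classes/text_generation) API의 매개변수를 조정해 볼 수 있습니다. 아래는 파이토치 기준으로 빔 서치와 n-gram 반복 억제를 사용하는 간단한 예시 스케치입니다. 앞서 정의한 `text` 변수를 그대로 사용하며, 구체적인 매개변수 값(빔 4개, `length_penalty=2.0` 등)은 설명을 위한 가정일 뿐이므로 데이터에 맞게 조정하세요:

```py
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

>>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_billsum_model")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("stevhliu/my_awesome_billsum_model")
>>> inputs = tokenizer(text, return_tensors="pt").input_ids

>>> # 빔 서치로 여러 후보 시퀀스를 비교하고, 3-gram 반복을 막아 중복 문장을 줄입니다.
>>> outputs = model.generate(
...     inputs,
...     max_new_tokens=100,
...     num_beams=4,
...     no_repeat_ngram_size=3,
...     length_penalty=2.0,
...     early_stopping=True,
... )
>>> print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```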
0
mavonic_private_repos/transformers/docs/source/ko
mavonic_private_repos/transformers/docs/source/ko/tasks/visual_question_answering.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# 시각적 질의응답 (Visual Question Answering)

[[open-in-colab]]

시각적 질의응답(VQA)은 이미지를 기반으로 개방형 질문에 대응하는 작업입니다. 이 작업을 지원하는 모델의 입력은 대부분 이미지와 질문의 조합이며, 출력은 자연어로 된 답변입니다.

VQA의 주요 사용 사례는 다음과 같습니다:
* 시각 장애인을 위한 접근성 애플리케이션을 구축할 수 있습니다.
* 교육: 강의나 교과서에 나온 시각 자료에 대한 질문에 답할 수 있습니다. 또한 체험형 전시와 유적 등에서도 VQA를 활용할 수 있습니다.
* 고객 서비스 및 전자상거래: VQA는 사용자가 제품에 대해 질문할 수 있게 함으로써 사용자 경험을 향상시킬 수 있습니다.
* 이미지 검색: VQA 모델을 사용하여 원하는 특성을 가진 이미지를 검색할 수 있습니다. 예를 들어 사용자는 "강아지가 있어?"라고 물어봐서 주어진 이미지 묶음에서 강아지가 있는 모든 이미지를 받아볼 수 있습니다.

이 가이드에서 학습할 내용은 다음과 같습니다:
- VQA 모델 중 하나인 [ViLT](../../en/model_doc/vilt)를 [`Graphcore/vqa` 데이터셋](https://huggingface.co/datasets/Graphcore/vqa) 에서 미세조정하는 방법
- 미세조정된 ViLT 모델로 추론하는 방법
- BLIP-2 같은 생성 모델로 제로샷 VQA 추론을 실행하는 방법

## ViLT 미세 조정 [[finetuning-vilt]]

ViLT는 Vision Transformer(ViT)에 텍스트 임베딩을 포함하여 비전/언어 사전훈련(VLP; Vision-and-Language Pretraining)을 위한 기본 디자인을 제공합니다. 이 모델은 여러 다운스트림 작업에 사용할 수 있습니다. VQA 태스크에서는 (`[CLS]` 토큰의 최종 은닉 상태 위에 선형 레이어인) 분류 헤더가 있으며 무작위로 초기화됩니다. 따라서 여기에서 시각적 질의응답은 **분류 문제**로 취급됩니다.

최근의 BLIP, BLIP-2, InstructBLIP와 같은 모델들은 VQA를 생성형 작업으로 간주합니다. 가이드의 후반부에서는 이런 모델들을 사용하여 제로샷 VQA 추론을 하는 방법에 대해 설명하겠습니다.

시작하기 전 필요한 모든 라이브러리를 설치했는지 확인하세요.

```bash
pip install -q transformers datasets
```

커뮤니티에 모델을 공유하는 것을 권장드립니다. Hugging Face 계정에 로그인하여 🤗 Hub에 업로드할 수 있습니다.
๋ฉ”์‹œ์ง€๊ฐ€ ๋‚˜ํƒ€๋‚˜๋ฉด ๋กœ๊ทธ์ธํ•  ํ† ํฐ์„ ์ž…๋ ฅํ•˜์„ธ์š”: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ๋ชจ๋ธ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ „์—ญ ๋ณ€์ˆ˜๋กœ ์„ ์–ธํ•˜์„ธ์š”. ```py >>> model_checkpoint = "dandelin/vilt-b32-mlm" ``` ## ๋ฐ์ดํ„ฐ ๊ฐ€์ ธ์˜ค๊ธฐ [[load-the-data]] ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” `Graphcore/vqa` ๋ฐ์ดํ„ฐ์„ธํŠธ์˜ ์ž‘์€ ์ƒ˜ํ”Œ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ „์ฒด ๋ฐ์ดํ„ฐ์„ธํŠธ๋Š” [๐Ÿค— Hub](https://huggingface.co/datasets/Graphcore/vqa) ์—์„œ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [`Graphcore/vqa` ๋ฐ์ดํ„ฐ์„ธํŠธ](https://huggingface.co/datasets/Graphcore/vqa) ์˜ ๋Œ€์•ˆ์œผ๋กœ ๊ณต์‹ [VQA ๋ฐ์ดํ„ฐ์„ธํŠธ ํŽ˜์ด์ง€](https://visualqa.org/download.html) ์—์„œ ๋™์ผํ•œ ๋ฐ์ดํ„ฐ๋ฅผ ์ˆ˜๋™์œผ๋กœ ๋‹ค์šด๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ง์ ‘ ๊ณต์ˆ˜ํ•œ ๋ฐ์ดํ„ฐ๋กœ ํŠœํ† ๋ฆฌ์–ผ์„ ๋”ฐ๋ฅด๊ณ  ์‹ถ๋‹ค๋ฉด [์ด๋ฏธ์ง€ ๋ฐ์ดํ„ฐ์„ธํŠธ ๋งŒ๋“ค๊ธฐ](https://huggingface.co/docs/datasets/image_dataset#loading-script) ๋ผ๋Š” ๐Ÿค— Datasets ๋ฌธ์„œ๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ๊ฒ€์ฆ ๋ฐ์ดํ„ฐ์˜ ์ฒซ 200๊ฐœ ํ•ญ๋ชฉ์„ ๋ถˆ๋Ÿฌ์™€ ๋ฐ์ดํ„ฐ์„ธํŠธ์˜ ํŠน์„ฑ์„ ํ™•์ธํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```python >>> from datasets import load_dataset >>> dataset = load_dataset("Graphcore/vqa", split="validation[:200]") >>> dataset Dataset({ features: ['question', 'question_type', 'question_id', 'image_id', 'answer_type', 'label'], num_rows: 200 }) ``` ์˜ˆ์ œ๋ฅผ ํ•˜๋‚˜ ๋ฝ‘์•„ ๋ฐ์ดํ„ฐ์„ธํŠธ์˜ ํŠน์„ฑ์„ ์ดํ•ดํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ```py >>> dataset[0] {'question': 'Where is he looking?', 'question_type': 'none of the above', 'question_id': 262148000, 'image_id': '/root/.cache/huggingface/datasets/downloads/extracted/ca733e0e000fb2d7a09fbcc94dbfe7b5a30750681d0e965f8e0a23b1c2f98c75/val2014/COCO_val2014_000000262148.jpg', 'answer_type': 'other', 'label': {'ids': ['at table', 'down', 'skateboard', 'table'], 'weights': [0.30000001192092896, 1.0, 0.30000001192092896, 0.30000001192092896]}} ``` ๋ฐ์ดํ„ฐ์„ธํŠธ์—๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์€ ํŠน์„ฑ์ด ํฌํ•จ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค: * `question`: ์ด๋ฏธ์ง€์— ๋Œ€ํ•œ ์งˆ๋ฌธ * `image_id`: ์งˆ๋ฌธ๊ณผ ๊ด€๋ จ๋œ ์ด๋ฏธ์ง€์˜ ๊ฒฝ๋กœ * `label`: ๋ฐ์ดํ„ฐ์˜ ๋ ˆ์ด๋ธ” (annotations) ๋‚˜๋จธ์ง€ ํŠน์„ฑ๋“ค์€ ํ•„์š”ํ•˜์ง€ ์•Š๊ธฐ ๋•Œ๋ฌธ์— ์‚ญ์ œํ•ด๋„ ๋ฉ๋‹ˆ๋‹ค: ```py >>> dataset = dataset.remove_columns(['question_type', 'question_id', 'answer_type']) ``` ๋ณด์‹œ๋‹ค์‹œํ”ผ `label` ํŠน์„ฑ์€ ๊ฐ™์€ ์งˆ๋ฌธ๋งˆ๋‹ค ๋‹ต๋ณ€์ด ์—ฌ๋Ÿฌ ๊ฐœ ์žˆ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ชจ๋‘ ๋‹ค๋ฅธ ๋ฐ์ดํ„ฐ ๋ผ๋ฒจ๋Ÿฌ๋“ค๋กœ๋ถ€ํ„ฐ ์ˆ˜์ง‘๋˜์—ˆ๊ธฐ ๋•Œ๋ฌธ์ธ๋ฐ์š”. ์งˆ๋ฌธ์˜ ๋‹ต๋ณ€์€ ์ฃผ๊ด€์ ์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๊ฒฝ์šฐ ์งˆ๋ฌธ์€ "๊ทธ๋Š” ์–ด๋””๋ฅผ ๋ณด๊ณ  ์žˆ๋‚˜์š”?" ์˜€์ง€๋งŒ, ์–ด๋–ค ์‚ฌ๋žŒ๋“ค์€ "์•„๋ž˜"๋กœ ๋ ˆ์ด๋ธ”์„ ๋‹ฌ์•˜๊ณ , ๋‹ค๋ฅธ ์‚ฌ๋žŒ๋“ค์€ "ํ…Œ์ด๋ธ”" ๋˜๋Š” "์Šค์ผ€์ดํŠธ๋ณด๋“œ" ๋“ฑ์œผ๋กœ ์ฃผ์„์„ ๋‹ฌ์•˜์Šต๋‹ˆ๋‹ค. ์•„๋ž˜์˜ ์ด๋ฏธ์ง€๋ฅผ ๋ณด๊ณ  ์–ด๋–ค ๋‹ต๋ณ€์„ ์„ ํƒํ•  ๊ฒƒ์ธ์ง€ ์ƒ๊ฐํ•ด ๋ณด์„ธ์š”: ```python >>> from PIL import Image >>> image = Image.open(dataset[0]['image_id']) >>> image ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/vqa-example.png" alt="VQA Image Example"/> </div> ์งˆ๋ฌธ๊ณผ ๋‹ต๋ณ€์˜ ๋ชจํ˜ธ์„ฑ์œผ๋กœ ์ธํ•ด ์ด๋Ÿฌํ•œ ๋ฐ์ดํ„ฐ์„ธํŠธ๋Š” ์—ฌ๋Ÿฌ ๊ฐœ์˜ ๋‹ต๋ณ€์ด ๊ฐ€๋Šฅํ•˜๋ฏ€๋กœ ๋‹ค์ค‘ ๋ ˆ์ด๋ธ” ๋ถ„๋ฅ˜ ๋ฌธ์ œ๋กœ ์ฒ˜๋ฆฌ๋ฉ๋‹ˆ๋‹ค. ๊ฒŒ๋‹ค๊ฐ€, ์›ํ•ซ(one-hot) ์ธ์ฝ”๋”ฉ ๋ฒกํ„ฐ๋ฅผ ์ƒ์„ฑํ•˜๊ธฐ๋ณด๋‹ค๋Š” ๋ ˆ์ด๋ธ”์—์„œ ํŠน์ • ๋‹ต๋ณ€์ด ๋‚˜ํƒ€๋‚˜๋Š” ํšŸ์ˆ˜๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ์†Œํ”„ํŠธ ์ธ์ฝ”๋”ฉ์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. 
์œ„์˜ ์˜ˆ์‹œ์—์„œ "์•„๋ž˜"๋ผ๋Š” ๋‹ต๋ณ€์ด ๋‹ค๋ฅธ ๋‹ต๋ณ€๋ณด๋‹ค ํ›จ์”ฌ ๋” ์ž์ฃผ ์„ ํƒ๋˜์—ˆ๊ธฐ ๋•Œ๋ฌธ์— ๋ฐ์ดํ„ฐ์„ธํŠธ์—์„œ `weight`๋ผ๊ณ  ๋ถˆ๋ฆฌ๋Š” ์ ์ˆ˜๋กœ 1.0์„ ๊ฐ€์ง€๋ฉฐ, ๋‚˜๋จธ์ง€ ๋‹ต๋ณ€๋“ค์€ 1.0 ๋ฏธ๋งŒ์˜ ์ ์ˆ˜๋ฅผ ๊ฐ€์ง‘๋‹ˆ๋‹ค. ์ ์ ˆํ•œ ๋ถ„๋ฅ˜ ํ—ค๋”๋กœ ๋ชจ๋ธ์„ ๋‚˜์ค‘์— ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ธฐ ์œ„ํ•ด ๋ ˆ์ด๋ธ”์„ ์ •์ˆ˜๋กœ ๋งคํ•‘ํ•œ ๋”•์…”๋„ˆ๋ฆฌ ํ•˜๋‚˜, ๋ฐ˜๋Œ€๋กœ ์ •์ˆ˜๋ฅผ ๋ ˆ์ด๋ธ”๋กœ ๋งคํ•‘ํ•œ ๋”•์…”๋„ˆ๋ฆฌ ํ•˜๋‚˜ ์ด 2๊ฐœ์˜ ๋”•์…”๋„ˆ๋ฆฌ๋ฅผ ์ƒ์„ฑํ•˜์„ธ์š”: ```py >>> import itertools >>> labels = [item['ids'] for item in dataset['label']] >>> flattened_labels = list(itertools.chain(*labels)) >>> unique_labels = list(set(flattened_labels)) >>> label2id = {label: idx for idx, label in enumerate(unique_labels)} >>> id2label = {idx: label for label, idx in label2id.items()} ``` ์ด์ œ ๋งคํ•‘์ด ์™„๋ฃŒ๋˜์—ˆ์œผ๋ฏ€๋กœ ๋ฌธ์ž์—ด ๋‹ต๋ณ€์„ ํ•ด๋‹น id๋กœ ๊ต์ฒดํ•˜๊ณ , ๋ฐ์ดํ„ฐ์„ธํŠธ์˜ ๋” ํŽธ๋ฆฌํ•œ ํ›„์ฒ˜๋ฆฌ๋ฅผ ์œ„ํ•ด ํŽธํ‰ํ™” ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```python >>> def replace_ids(inputs): ... inputs["label"]["ids"] = [label2id[x] for x in inputs["label"]["ids"]] ... return inputs >>> dataset = dataset.map(replace_ids) >>> flat_dataset = dataset.flatten() >>> flat_dataset.features {'question': Value(dtype='string', id=None), 'image_id': Value(dtype='string', id=None), 'label.ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'label.weights': Sequence(feature=Value(dtype='float64', id=None), length=-1, id=None)} ``` ## ๋ฐ์ดํ„ฐ ์ „์ฒ˜๋ฆฌ [[preprocessing-data]] ๋‹ค์Œ ๋‹จ๊ณ„๋Š” ๋ชจ๋ธ์„ ์œ„ํ•ด ์ด๋ฏธ์ง€์™€ ํ…์ŠคํŠธ ๋ฐ์ดํ„ฐ๋ฅผ ์ค€๋น„ํ•˜๊ธฐ ์œ„ํ•ด ViLT ํ”„๋กœ์„ธ์„œ๋ฅผ ๊ฐ€์ ธ์˜ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. [`ViltProcessor`]๋Š” BERT ํ† ํฌ๋‚˜์ด์ €์™€ ViLT ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ํŽธ๋ฆฌํ•˜๊ฒŒ ํ•˜๋‚˜์˜ ํ”„๋กœ์„ธ์„œ๋กœ ๋ฌถ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import ViltProcessor >>> processor = ViltProcessor.from_pretrained(model_checkpoint) ``` ๋ฐ์ดํ„ฐ๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๋ ค๋ฉด ์ด๋ฏธ์ง€์™€ ์งˆ๋ฌธ์„ [`ViltProcessor`]๋กœ ์ธ์ฝ”๋”ฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ํ”„๋กœ์„ธ์„œ๋Š” [`BertTokenizerFast`]๋กœ ํ…์ŠคํŠธ๋ฅผ ํ† ํฌ๋‚˜์ด์ฆˆํ•˜๊ณ  ํ…์ŠคํŠธ ๋ฐ์ดํ„ฐ๋ฅผ ์œ„ํ•ด `input_ids`, `attention_mask` ๋ฐ `token_type_ids`๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€๋Š” [`ViltImageProcessor`]๋กœ ์ด๋ฏธ์ง€๋ฅผ ํฌ๊ธฐ ์กฐ์ •ํ•˜๊ณ  ์ •๊ทœํ™”ํ•˜๋ฉฐ, `pixel_values`์™€ `pixel_mask`๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฐ ์ „์ฒ˜๋ฆฌ ๋‹จ๊ณ„๋Š” ๋ชจ๋‘ ๋‚ด๋ถ€์—์„œ ์ด๋ฃจ์–ด์ง€๋ฏ€๋กœ, `processor`๋ฅผ ํ˜ธ์ถœํ•˜๊ธฐ๋งŒ ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์•„์ง ํƒ€๊ฒŸ ๋ ˆ์ด๋ธ”์ด ์™„์„ฑ๋˜์ง€ ์•Š์•˜์Šต๋‹ˆ๋‹ค. ํƒ€๊ฒŸ์˜ ํ‘œํ˜„์—์„œ ๊ฐ ์š”์†Œ๋Š” ๊ฐ€๋Šฅํ•œ ๋‹ต๋ณ€(๋ ˆ์ด๋ธ”)์— ํ•ด๋‹นํ•ฉ๋‹ˆ๋‹ค. ์ •ํ™•ํ•œ ๋‹ต๋ณ€์˜ ์š”์†Œ๋Š” ํ•ด๋‹น ์ ์ˆ˜(weight)๋ฅผ ์œ ์ง€์‹œํ‚ค๊ณ  ๋‚˜๋จธ์ง€ ์š”์†Œ๋Š” 0์œผ๋กœ ์„ค์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์•„๋ž˜ ํ•จ์ˆ˜๊ฐ€ ์œ„์—์„œ ์„ค๋ช…ํ•œ๋Œ€๋กœ ์ด๋ฏธ์ง€์™€ ์งˆ๋ฌธ์— `processor`๋ฅผ ์ ์šฉํ•˜๊ณ  ๋ ˆ์ด๋ธ”์„ ํ˜•์‹์— ๋งž์ถฅ๋‹ˆ๋‹ค: ```py >>> import torch >>> def preprocess_data(examples): ... image_paths = examples['image_id'] ... images = [Image.open(image_path) for image_path in image_paths] ... texts = examples['question'] ... encoding = processor(images, texts, padding="max_length", truncation=True, return_tensors="pt") ... for k, v in encoding.items(): ... encoding[k] = v.squeeze() ... targets = [] ... for labels, scores in zip(examples['label.ids'], examples['label.weights']): ... target = torch.zeros(len(id2label)) ... for label, score in zip(labels, scores): ... target[label] = score ... targets.append(target) ... encoding["labels"] = targets ... 
return encoding ``` ์ „์ฒด ๋ฐ์ดํ„ฐ์„ธํŠธ์— ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•˜๋ ค๋ฉด ๐Ÿค— Datasets์˜ [`~datasets.map`] ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์‹ญ์‹œ์˜ค. `batched=True`๋ฅผ ์„ค์ •ํ•˜์—ฌ ๋ฐ์ดํ„ฐ์„ธํŠธ์˜ ์—ฌ๋Ÿฌ ์š”์†Œ๋ฅผ ํ•œ ๋ฒˆ์— ์ฒ˜๋ฆฌํ•จ์œผ๋กœ์จ `map`์„ ๋” ๋น ๋ฅด๊ฒŒ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์‹œ์ ์—์„œ ํ•„์š”ํ•˜์ง€ ์•Š์€ ์—ด์€ ์ œ๊ฑฐํ•˜์„ธ์š”. ```py >>> processed_dataset = flat_dataset.map(preprocess_data, batched=True, remove_columns=['question','question_type', 'question_id', 'image_id', 'answer_type', 'label.ids', 'label.weights']) >>> processed_dataset Dataset({ features: ['input_ids', 'token_type_ids', 'attention_mask', 'pixel_values', 'pixel_mask', 'labels'], num_rows: 200 }) ``` ๋งˆ์ง€๋ง‰ ๋‹จ๊ณ„๋กœ, [`DefaultDataCollator`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์˜ˆ์ œ๋กœ ์“ธ ๋ฐฐ์น˜๋ฅผ ์ƒ์„ฑํ•˜์„ธ์š”: ```py >>> from transformers import DefaultDataCollator >>> data_collator = DefaultDataCollator() ``` ## ๋ชจ๋ธ ํ›ˆ๋ จ [[train-the-model]] ์ด์ œ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๊ธฐ ์œ„ํ•ด ์ค€๋น„๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`ViltForQuestionAnswering`]์œผ๋กœ ViLT๋ฅผ ๊ฐ€์ ธ์˜ฌ ์ฐจ๋ก€์ž…๋‹ˆ๋‹ค. ๋ ˆ์ด๋ธ”์˜ ์ˆ˜์™€ ๋ ˆ์ด๋ธ” ๋งคํ•‘์„ ์ง€์ •ํ•˜์„ธ์š”: ```py >>> from transformers import ViltForQuestionAnswering >>> model = ViltForQuestionAnswering.from_pretrained(model_checkpoint, num_labels=len(id2label), id2label=id2label, label2id=label2id) ``` ์ด ์‹œ์ ์—์„œ๋Š” ๋‹ค์Œ ์„ธ ๋‹จ๊ณ„๋งŒ ๋‚จ์•˜์Šต๋‹ˆ๋‹ค: 1. [`TrainingArguments`]์—์„œ ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •์˜ํ•˜์„ธ์š”: ```py >>> from transformers import TrainingArguments >>> repo_id = "MariaK/vilt_finetuned_200" >>> training_args = TrainingArguments( ... output_dir=repo_id, ... per_device_train_batch_size=4, ... num_train_epochs=20, ... save_steps=200, ... logging_steps=50, ... learning_rate=5e-5, ... save_total_limit=2, ... remove_unused_columns=False, ... push_to_hub=True, ... ) ``` 2. ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ์„ธํŠธ, ํ”„๋กœ์„ธ์„œ, ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ์™€ ํ•จ๊ป˜ ํ›ˆ๋ จ ์ธ์ˆ˜๋ฅผ [`Trainer`]์— ์ „๋‹ฌํ•˜์„ธ์š”: ```py >>> from transformers import Trainer >>> trainer = Trainer( ... model=model, ... args=training_args, ... data_collator=data_collator, ... train_dataset=processed_dataset, ... tokenizer=processor, ... ) ``` 3. [`~Trainer.train`]์„ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜์„ธ์š”: ```py >>> trainer.train() ``` ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด, [`~Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๐Ÿค— Hub์— ๋ชจ๋ธ์„ ๊ณต์œ ํ•˜์„ธ์š”: ```py >>> trainer.push_to_hub() ``` ## ์ถ”๋ก  [[inference]] ViLT ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ๐Ÿค— Hub์— ์—…๋กœ๋“œํ–ˆ๋‹ค๋ฉด ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ์„ ์ถ”๋ก ์— ์‚ฌ์šฉํ•ด๋ณด๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`Pipeline`]์—์„œ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ```py >>> from transformers import pipeline >>> pipe = pipeline("visual-question-answering", model="MariaK/vilt_finetuned_200") ``` ์ด ๊ฐ€์ด๋“œ์˜ ๋ชจ๋ธ์€ 200๊ฐœ์˜ ์˜ˆ์ œ์—์„œ๋งŒ ํ›ˆ๋ จ๋˜์—ˆ์œผ๋ฏ€๋กœ ๊ทธ๋‹ค์ง€ ๋งŽ์€ ๊ฒƒ์„ ๊ธฐ๋Œ€ํ•  ์ˆ˜๋Š” ์—†์Šต๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ์„ธํŠธ์˜ ์ฒซ ๋ฒˆ์งธ ์˜ˆ์ œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ถ”๋ก  ๊ฒฐ๊ณผ๋ฅผ ์„ค๋ช…ํ•ด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> example = dataset[0] >>> image = Image.open(example['image_id']) >>> question = example['question'] >>> print(question) >>> pipe(image, question, top_k=1) "Where is he looking?" [{'score': 0.5498199462890625, 'answer': 'down'}] ``` ๋น„๋ก ํ™•์‹ ์€ ๋ณ„๋กœ ์—†์ง€๋งŒ, ๋ชจ๋ธ์€ ์‹ค์ œ๋กœ ๋ฌด์–ธ๊ฐ€๋ฅผ ๋ฐฐ์› ์Šต๋‹ˆ๋‹ค. ๋” ๋งŽ์€ ์˜ˆ์ œ์™€ ๋” ๊ธด ํ›ˆ๋ จ ๊ธฐ๊ฐ„์ด ์ฃผ์–ด์ง„๋‹ค๋ฉด ๋ถ„๋ช… ๋” ๋‚˜์€ ๊ฒฐ๊ณผ๋ฅผ ์–ป์„ ์ˆ˜ ์žˆ์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค! 
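모델이 다른 후보 답변에 비해 얼마나 확신을 갖는지 궁금하다면, 위에서 사용한 `top_k` 값을 올려 상위 후보 몇 개를 점수와 함께 확인해 볼 수 있습니다. 아래는 그런 용도의 간단한 예시이며, 실제 점수는 실행 환경과 훈련 결과에 따라 조금씩 달라질 수 있습니다:

```py
>>> # 상위 3개의 후보 답변과 점수를 함께 확인합니다.
>>> pipe(image, question, top_k=3)
```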
์›ํ•œ๋‹ค๋ฉด ํŒŒ์ดํ”„๋ผ์ธ์˜ ๊ฒฐ๊ณผ๋ฅผ ์ˆ˜๋™์œผ๋กœ ๋ณต์ œํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: 1. ์ด๋ฏธ์ง€์™€ ์งˆ๋ฌธ์„ ๊ฐ€์ ธ์™€์„œ ํ”„๋กœ์„ธ์„œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์— ์ค€๋น„ํ•ฉ๋‹ˆ๋‹ค. 2. ์ „์ฒ˜๋ฆฌ๋œ ๊ฒฐ๊ณผ๋ฅผ ๋ชจ๋ธ์— ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. 3. ๋กœ์ง“์—์„œ ๊ฐ€์žฅ ๊ฐ€๋Šฅ์„ฑ ์žˆ๋Š” ๋‹ต๋ณ€์˜ id๋ฅผ ๊ฐ€์ ธ์™€์„œ `id2label`์—์„œ ์‹ค์ œ ๋‹ต๋ณ€์„ ์ฐพ์Šต๋‹ˆ๋‹ค. ```py >>> processor = ViltProcessor.from_pretrained("MariaK/vilt_finetuned_200") >>> image = Image.open(example['image_id']) >>> question = example['question'] >>> # prepare inputs >>> inputs = processor(image, question, return_tensors="pt") >>> model = ViltForQuestionAnswering.from_pretrained("MariaK/vilt_finetuned_200") >>> # forward pass >>> with torch.no_grad(): ... outputs = model(**inputs) >>> logits = outputs.logits >>> idx = logits.argmax(-1).item() >>> print("Predicted answer:", model.config.id2label[idx]) Predicted answer: down ``` ## ์ œ๋กœ์ƒท VQA [[zeroshot-vqa]] ์ด์ „ ๋ชจ๋ธ์€ VQA๋ฅผ ๋ถ„๋ฅ˜ ๋ฌธ์ œ๋กœ ์ฒ˜๋ฆฌํ–ˆ์Šต๋‹ˆ๋‹ค. BLIP, BLIP-2 ๋ฐ InstructBLIP์™€ ๊ฐ™์€ ์ตœ๊ทผ์˜ ๋ชจ๋ธ์€ VQA๋ฅผ ์ƒ์„ฑ ์ž‘์—…์œผ๋กœ ์ ‘๊ทผํ•ฉ๋‹ˆ๋‹ค. [BLIP-2](../../en/model_doc/blip-2)๋ฅผ ์˜ˆ๋กœ ๋“ค์–ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ์ด ๋ชจ๋ธ์€ ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋น„์ „ ์ธ์ฝ”๋”์™€ LLM์˜ ๋ชจ๋“  ์กฐํ•ฉ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ์ƒˆ๋กœ์šด ๋น„์ „-์ž์—ฐ์–ด ์‚ฌ์ „ ํ•™์Šต ํŒจ๋Ÿฌ๋‹ค์ž„์„ ๋„์ž…ํ–ˆ์Šต๋‹ˆ๋‹ค. ([BLIP-2 ๋ธ”๋กœ๊ทธ ํฌ์ŠคํŠธ](https://huggingface.co/blog/blip-2)๋ฅผ ํ†ตํ•ด ๋” ์ž์„ธํžˆ ์•Œ์•„๋ณผ ์ˆ˜ ์žˆ์–ด์š”) ์ด๋ฅผ ํ†ตํ•ด ์‹œ๊ฐ์  ์งˆ์˜์‘๋‹ต์„ ํฌํ•จํ•œ ์—ฌ๋Ÿฌ ๋น„์ „-์ž์—ฐ์–ด ์ž‘์—…์—์„œ SOTA๋ฅผ ๋‹ฌ์„ฑํ•  ์ˆ˜ ์žˆ์—ˆ์Šต๋‹ˆ๋‹ค. ์ด ๋ชจ๋ธ์„ ์–ด๋–ป๊ฒŒ VQA์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š”์ง€ ์„ค๋ช…ํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ๋จผ์ € ๋ชจ๋ธ์„ ๊ฐ€์ ธ์™€ ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ GPU๊ฐ€ ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ ๊ฒฝ์šฐ ๋ชจ๋ธ์„ ๋ช…์‹œ์ ์œผ๋กœ GPU๋กœ ์ „์†กํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด์ „์—๋Š” ํ›ˆ๋ จํ•  ๋•Œ ์“ฐ์ง€ ์•Š์€ ์ด์œ ๋Š” [`Trainer`]๊ฐ€ ์ด ๋ถ€๋ถ„์„ ์ž๋™์œผ๋กœ ์ฒ˜๋ฆฌํ•˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค: ```py >>> from transformers import AutoProcessor, Blip2ForConditionalGeneration >>> import torch >>> processor = AutoProcessor.from_pretrained("Salesforce/blip2-opt-2.7b") >>> model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16) >>> device = "cuda" if torch.cuda.is_available() else "cpu" >>> model.to(device) ``` ๋ชจ๋ธ์€ ์ด๋ฏธ์ง€์™€ ํ…์ŠคํŠธ๋ฅผ ์ž…๋ ฅ์œผ๋กœ ๋ฐ›์œผ๋ฏ€๋กœ, VQA ๋ฐ์ดํ„ฐ์„ธํŠธ์˜ ์ฒซ ๋ฒˆ์งธ ์˜ˆ์ œ์—์„œ์™€ ๋™์ผํ•œ ์ด๋ฏธ์ง€/์งˆ๋ฌธ ์Œ์„ ์‚ฌ์šฉํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> example = dataset[0] >>> image = Image.open(example['image_id']) >>> question = example['question'] ``` BLIP-2๋ฅผ ์‹œ๊ฐ์  ์งˆ์˜์‘๋‹ต ์ž‘์—…์— ์‚ฌ์šฉํ•˜๋ ค๋ฉด ํ…์ŠคํŠธ ํ”„๋กฌํ”„ํŠธ๊ฐ€ `Question: {} Answer:` ํ˜•์‹์„ ๋”ฐ๋ผ์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```py >>> prompt = f"Question: {question} Answer:" ``` ์ด์ œ ๋ชจ๋ธ์˜ ํ”„๋กœ์„ธ์„œ๋กœ ์ด๋ฏธ์ง€/ํ”„๋กฌํ”„ํŠธ๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๊ณ , ์ฒ˜๋ฆฌ๋œ ์ž…๋ ฅ์„ ๋ชจ๋ธ์„ ํ†ตํ•ด ์ „๋‹ฌํ•˜๊ณ , ์ถœ๋ ฅ์„ ๋””์ฝ”๋“œํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```py >>> inputs = processor(image, text=prompt, return_tensors="pt").to(device, torch.float16) >>> generated_ids = model.generate(**inputs, max_new_tokens=10) >>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip() >>> print(generated_text) "He is looking at the crowd" ``` ๋ณด์‹œ๋‹ค์‹œํ”ผ ๋ชจ๋ธ์€ ๊ตฐ์ค‘์„ ์ธ์‹ํ•˜๊ณ , ์–ผ๊ตด์˜ ๋ฐฉํ–ฅ(์•„๋ž˜์ชฝ์„ ๋ณด๊ณ  ์žˆ์Œ)์„ ์ธ์‹ํ–ˆ์ง€๋งŒ, ๊ตฐ์ค‘์ด ์Šค์ผ€์ดํ„ฐ ๋’ค์— ์žˆ๋‹ค๋Š” ์‚ฌ์‹ค์„ ๋†“์ณค์Šต๋‹ˆ๋‹ค. 
๊ทธ๋Ÿฌ๋‚˜ ์‚ฌ๋žŒ์ด ์ง์ ‘ ๋ผ๋ฒจ๋งํ•œ ๋ฐ์ดํ„ฐ์…‹์„ ์–ป์„ ์ˆ˜ ์—†๋Š” ๊ฒฝ์šฐ์—, ์ด ์ ‘๊ทผ๋ฒ•์€ ๋น ๋ฅด๊ฒŒ ์œ ์šฉํ•œ ๊ฒฐ๊ณผ๋ฅผ ์ƒ์„ฑํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
0
mavonic_private_repos/transformers/docs/source/ko
mavonic_private_repos/transformers/docs/source/ko/tasks/language_modeling.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์ธ๊ณผ ์–ธ์–ด ๋ชจ๋ธ๋ง[[causal-language-modeling]] [[open-in-colab]] ์–ธ์–ด ๋ชจ๋ธ๋ง์€ ์ธ๊ณผ์  ์–ธ์–ด ๋ชจ๋ธ๋ง๊ณผ ๋งˆ์Šคํฌ๋“œ ์–ธ์–ด ๋ชจ๋ธ๋ง, ๋‘ ๊ฐ€์ง€ ์œ ํ˜•์œผ๋กœ ๋‚˜๋‰ฉ๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ์ธ๊ณผ์  ์–ธ์–ด ๋ชจ๋ธ๋ง์„ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. ์ธ๊ณผ ์–ธ์–ด ๋ชจ๋ธ์€ ํ…์ŠคํŠธ ์ƒ์„ฑ์— ์ž์ฃผ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ๋˜ ์ฐฝ์˜์ ์ธ ๋ฐฉํ–ฅ์œผ๋กœ ์‘์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ง์ ‘ ์‚ฌ์šฉํ•˜๋ฉฐ ์žฌ๋ฏธ์žˆ๋Š” ํƒ๊ตฌ๋ฅผ ํ•ด๋ณด๊ฑฐ๋‚˜, Copilot ๋˜๋Š” CodeParrot์™€ ๊ฐ™์€ ์ง€๋Šฅํ˜• ์ฝ”๋”ฉ ์–ด์‹œ์Šคํ„ดํŠธ์˜ ๊ธฐ๋ฐ˜์ด ๋˜๊ธฐ๋„ ํ•ฉ๋‹ˆ๋‹ค. <Youtube id="Vpjb1lu0MDk"/> ์ธ๊ณผ ์–ธ์–ด ๋ชจ๋ธ๋ง์€ ํ† ํฐ ์‹œํ€€์Šค์—์„œ ๋‹ค์Œ ํ† ํฐ์„ ์˜ˆ์ธกํ•˜๋ฉฐ, ๋ชจ๋ธ์€ ์™ผ์ชฝ์˜ ํ† ํฐ์—๋งŒ ์ ‘๊ทผํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Š” ๋ชจ๋ธ์ด ๋ฏธ๋ž˜์˜ ํ† ํฐ์„ ๋ณผ ์ˆ˜ ์—†๋‹ค๋Š” ๊ฒƒ์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค. ์ธ๊ณผ ์–ธ์–ด ๋ชจ๋ธ์˜ ์˜ˆ๋กœ GPT-2๊ฐ€ ์žˆ์ฃ . ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ๋‹ค์Œ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์•ˆ๋‚ดํ•ฉ๋‹ˆ๋‹ค: 1. [DistilGPT2](https://huggingface.co/distilbert/distilgpt2) ๋ชจ๋ธ์„ [ELI5](https://huggingface.co/datasets/eli5) ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ [r/askscience](https://www.reddit.com/r/askscience/) ํ•˜์œ„ ์ง‘ํ•ฉ์œผ๋กœ ๋ฏธ์„ธ ์กฐ์ • 2. ๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ์„ ์ถ”๋ก ์— ์‚ฌ์šฉ <Tip> ์ด ์ž‘์—…๊ณผ ํ˜ธํ™˜๋˜๋Š” ๋ชจ๋“  ์•„ํ‚คํ…์ฒ˜์™€ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋ณด๋ ค๋ฉด [์ž‘์—… ํŽ˜์ด์ง€](https://huggingface.co/tasks/text-generation)๋ฅผ ํ™•์ธํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ๋ชจ๋‘ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install transformers datasets evaluate ``` ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ๊ณต์œ ํ•˜๊ธฐ ์œ„ํ•ด Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ์•Œ๋ฆผ์ด ํ‘œ์‹œ๋˜๋ฉด ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•˜์„ธ์š”: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## ELI5 ๋ฐ์ดํ„ฐ ์„ธํŠธ ๋ถˆ๋Ÿฌ์˜ค๊ธฐ[[load-eli5-dataset]] ๋จผ์ €, ๐Ÿค— Datasets ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ r/askscience์˜ ์ž‘์€ ํ•˜์œ„ ์ง‘ํ•ฉ์ธ ELI5 ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋ถˆ๋Ÿฌ์˜ต๋‹ˆ๋‹ค. ์ด๋ฅผ ํ†ตํ•ด ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ ํ•™์Šตํ•˜๋Š” ๋ฐ ๋” ๋งŽ์€ ์‹œ๊ฐ„์„ ํˆฌ์žํ•˜๊ธฐ ์ „์—, ์‹คํ—˜ํ•ด๋ด„์œผ๋กœ์จ ๋ชจ๋“  ๊ฒƒ์ด ์ž‘๋™ํ•˜๋Š”์ง€ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> from datasets import load_dataset >>> eli5 = load_dataset("eli5", split="train_asks[:5000]") ``` ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ `train_asks` ๋ถ„ํ• ์„ [`~datasets.Dataset.train_test_split`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ•™์Šต ๋ฐ ํ…Œ์ŠคํŠธ ์„ธํŠธ๋กœ ๋ถ„ํ• ํ•ฉ๋‹ˆ๋‹ค: ```py >>> eli5 = eli5.train_test_split(test_size=0.2) ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์˜ˆ์ œ๋ฅผ ์‚ดํŽด๋ณด์„ธ์š”: ```py >>> eli5["train"][0] {'answers': {'a_id': ['c3d1aib', 'c3d4lya'], 'score': [6, 3], 'text': ["The velocity needed to remain in orbit is equal to the square root of Newton's constant times the mass of earth divided by the distance from the center of the earth. 
I don't know the altitude of that specific mission, but they're usually around 300 km. That means he's going 7-8 km/s.\n\nIn space there are no other forces acting on either the shuttle or the guy, so they stay in the same position relative to each other. If he were to become unable to return to the ship, he would presumably run out of oxygen, or slowly fall into the atmosphere and burn up.", "Hope you don't mind me asking another question, but why aren't there any stars visible in this photo?"]}, 'answers_urls': {'url': []}, 'document': '', 'q_id': 'nyxfp', 'selftext': '_URL_0_\n\nThis was on the front page earlier and I have a few questions about it. Is it possible to calculate how fast the astronaut would be orbiting the earth? Also how does he stay close to the shuttle so that he can return safely, i.e is he orbiting at the same speed and can therefore stay next to it? And finally if his propulsion system failed, would he eventually re-enter the atmosphere and presumably die?', 'selftext_urls': {'url': ['http://apod.nasa.gov/apod/image/1201/freeflyer_nasa_3000.jpg']}, 'subreddit': 'askscience', 'title': 'Few questions about this space walk photograph.', 'title_urls': {'url': []}} ``` ๋งŽ์•„ ๋ณด์ผ ์ˆ˜ ์žˆ์ง€๋งŒ, ์‹ค์ œ๋กœ๋Š” `text` ํ•„๋“œ๋งŒ ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ์–ธ์–ด ๋ชจ๋ธ๋ง ์ž‘์—…์˜ ์žฅ์ ์€ ๋ ˆ์ด๋ธ”์ด ํ•„์š”ํ•˜์ง€ ์•Š๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋‹ค์Œ ๋‹จ์–ด *์ž์ฒด๊ฐ€* ๋ ˆ์ด๋ธ”์ž…๋‹ˆ๋‹ค. (์ด๋ ‡๊ฒŒ ๋ ˆ์ด๋ธ”์„ ์ œ๊ณตํ•˜์ง€ ์•Š์•„๋„ ๋˜๋Š” ํ•™์Šต์„ ๋น„์ง€๋„ ํ•™์Šต์ด๋ผ๊ณ  ์ผ์ปซ์Šต๋‹ˆ๋‹ค) ## ์ „์ฒ˜๋ฆฌ[[preprocess]] <Youtube id="ma1TrR7gE7I"/> ๋‹ค์Œ ๋‹จ๊ณ„๋Š” `text` ํ•„๋“œ๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด DistilGPT2 ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๋ถˆ๋Ÿฌ์˜ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilgpt2") ``` ์œ„์˜ ์˜ˆ์ œ์—์„œ ์•Œ ์ˆ˜ ์žˆ๋“ฏ์ด, `text` ํ•„๋“œ๋Š” `answers` ์•„๋ž˜์— ์ค‘์ฒฉ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ [`flatten`](https://huggingface.co/docs/datasets/process#flatten) ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ค‘์ฒฉ ๊ตฌ์กฐ์—์„œ `text` ํ•˜์œ„ ํ•„๋“œ๋ฅผ ์ถ”์ถœํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```py >>> eli5 = eli5.flatten() >>> eli5["train"][0] {'answers.a_id': ['c3d1aib', 'c3d4lya'], 'answers.score': [6, 3], 'answers.text': ["The velocity needed to remain in orbit is equal to the square root of Newton's constant times the mass of earth divided by the distance from the center of the earth. I don't know the altitude of that specific mission, but they're usually around 300 km. That means he's going 7-8 km/s.\n\nIn space there are no other forces acting on either the shuttle or the guy, so they stay in the same position relative to each other. If he were to become unable to return to the ship, he would presumably run out of oxygen, or slowly fall into the atmosphere and burn up.", "Hope you don't mind me asking another question, but why aren't there any stars visible in this photo?"], 'answers_urls.url': [], 'document': '', 'q_id': 'nyxfp', 'selftext': '_URL_0_\n\nThis was on the front page earlier and I have a few questions about it. Is it possible to calculate how fast the astronaut would be orbiting the earth? Also how does he stay close to the shuttle so that he can return safely, i.e is he orbiting at the same speed and can therefore stay next to it? 
And finally if his propulsion system failed, would he eventually re-enter the atmosphere and presumably die?', 'selftext_urls.url': ['http://apod.nasa.gov/apod/image/1201/freeflyer_nasa_3000.jpg'], 'subreddit': 'askscience', 'title': 'Few questions about this space walk photograph.', 'title_urls.url': []} ``` ๊ฐ ํ•˜์œ„ ํ•„๋“œ๋Š” ์ด์ œ `answers` ์ ‘๋‘์‚ฌ๋ฅผ ๊ฐ€์ง„ ๋ณ„๋„์˜ ์—ด๋กœ ๋‚˜๋‰˜์—ˆ์œผ๋ฉฐ, `text` ํ•„๋“œ๋Š” ์ด์ œ ๋ฆฌ์ŠคํŠธ์ž…๋‹ˆ๋‹ค. ๊ฐ ๋ฌธ์žฅ์„ ๊ฐœ๋ณ„์ ์œผ๋กœ ํ† ํฐํ™”ํ•˜๋Š” ๋Œ€์‹ , ๋จผ์ € ๋ฆฌ์ŠคํŠธ๋ฅผ ๋ฌธ์ž์—ด๋กœ ๋ณ€ํ™˜ํ•˜์—ฌ ํ•œ๊บผ๋ฒˆ์— ํ† ํฐํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์€ ๋ฌธ์ž์—ด ๋ฆฌ์ŠคํŠธ๋ฅผ ๊ฒฐํ•ฉํ•˜๊ณ  ๊ฒฐ๊ณผ๋ฅผ ํ† ํฐํ™”ํ•˜๋Š” ์ฒซ ๋ฒˆ์งธ ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜์ž…๋‹ˆ๋‹ค: ```py >>> def preprocess_function(examples): ... return tokenizer([" ".join(x) for x in examples["answers.text"]]) ``` ์ด ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์ ์šฉํ•˜๋ ค๋ฉด ๐Ÿค— Datasets [`~datasets.Dataset.map`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. `batched=True`๋กœ ์„ค์ •ํ•˜์—ฌ ๋ฐ์ดํ„ฐ์…‹์˜ ์—ฌ๋Ÿฌ ์š”์†Œ๋ฅผ ํ•œ ๋ฒˆ์— ์ฒ˜๋ฆฌํ•˜๊ณ , `num_proc`๋ฅผ ์ฆ๊ฐ€์‹œ์ผœ ํ”„๋กœ์„ธ์Šค ์ˆ˜๋ฅผ ๋Š˜๋ฆด ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•„์š” ์—†๋Š” ์—ด์€ ์ œ๊ฑฐํ•˜์„ธ์š”: ```py >>> tokenized_eli5 = eli5.map( ... preprocess_function, ... batched=True, ... num_proc=4, ... remove_columns=eli5["train"].column_names, ... ) ``` ์ด์ œ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋Š” ์‹œํ€€์Šค๊ฐ€ ํ† ํฐํ™”๋์ง€๋งŒ, ์ผ๋ถ€ ์‹œํ€€์Šค๋Š” ๋ชจ๋ธ์˜ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด๋ณด๋‹ค ๊ธธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด์ œ ๋‘ ๋ฒˆ์งธ ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ - ๋ชจ๋“  ์‹œํ€€์Šค๋ฅผ ์—ฐ๊ฒฐํ•˜๊ณ , - `block_size`๋กœ ์ •์˜๋œ ๊ธธ์ด๋กœ ์—ฐ๊ฒฐ๋œ ์‹œํ€€์Šค๋ฅผ ์—ฌ๋Ÿฌ ๊ฐœ์˜ ์งง์€ ๋ฌถ์Œ์œผ๋กœ ๋‚˜๋ˆ•๋‹ˆ๋‹ค. ์ด ๊ฐ’์€ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด์™€ GPU RAM์„ ๊ณ ๋ คํ•ด ์ถฉ๋ถ„ํžˆ ์งง์•„์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```py >>> block_size = 128 >>> def group_texts(examples): ... # Concatenate all texts. ... concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()} ... total_length = len(concatenated_examples[list(examples.keys())[0]]) ... # We drop the small remainder, we could add padding if the model supported it instead of this drop, you can ... # customize this part to your needs. ... if total_length >= block_size: ... total_length = (total_length // block_size) * block_size ... # Split by chunks of block_size. ... result = { ... k: [t[i : i + block_size] for i in range(0, total_length, block_size)] ... for k, t in concatenated_examples.items() ... } ... result["labels"] = result["input_ids"].copy() ... return result ``` ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์— `group_texts` ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•˜์„ธ์š”: ```py >>> lm_dataset = tokenized_eli5.map(group_texts, batched=True, num_proc=4) ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ [`DataCollatorForLanguageModeling`]์„ ์‚ฌ์šฉํ•˜์—ฌ ์˜ˆ์ œ์˜ ๋ฐฐ์น˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ ์„ธํŠธ ์ „์ฒด๋ฅผ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ํŒจ๋”ฉํ•˜๋Š” ๊ฒƒ๋ณด๋‹ค, ์ทจํ•ฉ ๋‹จ๊ณ„์—์„œ ๊ฐ ๋ฐฐ์น˜์˜ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ๋ฌธ์žฅ์„ *๋™์ ์œผ๋กœ ํŒจ๋”ฉ*ํ•˜๋Š” ๊ฒƒ์ด ๋” ํšจ์œจ์ ์ž…๋‹ˆ๋‹ค. <frameworkcontent> <pt> ํŒจ๋”ฉ ํ† ํฐ์œผ๋กœ ์ข…๊ฒฐ ํ† ํฐ์„ ์‚ฌ์šฉํ•˜๊ณ  `mlm=False`๋กœ ์„ค์ •ํ•˜์„ธ์š”. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ์ž…๋ ฅ์„ ์˜ค๋ฅธ์ชฝ์œผ๋กœ ํ•œ ์นธ์”ฉ ์‹œํ”„ํŠธํ•œ ๊ฐ’์„ ๋ ˆ์ด๋ธ”๋กœ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import DataCollatorForLanguageModeling >>> tokenizer.pad_token = tokenizer.eos_token >>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False) ``` </pt> <tf> ํŒจ๋”ฉ ํ† ํฐ์œผ๋กœ ์ข…๊ฒฐ ํ† ํฐ์„ ์‚ฌ์šฉํ•˜๊ณ  `mlm=False`๋กœ ์„ค์ •ํ•˜์„ธ์š”. 
์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ์ž…๋ ฅ์„ ์˜ค๋ฅธ์ชฝ์œผ๋กœ ํ•œ ์นธ์”ฉ ์‹œํ”„ํŠธํ•œ ๊ฐ’์„ ๋ ˆ์ด๋ธ”๋กœ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import DataCollatorForLanguageModeling >>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False, return_tensors="tf") ``` </tf> </frameworkcontent> ## ํ›ˆ๋ จ[[train]] <frameworkcontent> <pt> <Tip> [`Trainer`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์ž˜ ๋ชจ๋ฅด์‹ ๋‹ค๋ฉด [๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ](../training#train-with-pytorch-trainer)์„ ํ™•์ธํ•ด๋ณด์„ธ์š”! </Tip> ์ด์ œ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๊ธฐ ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`AutoModelForCausalLM`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ DistilGPT2๋ฅผ ๋ถˆ๋Ÿฌ์˜ต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForCausalLM, TrainingArguments, Trainer >>> model = AutoModelForCausalLM.from_pretrained("distilbert/distilgpt2") ``` ์—ฌ๊ธฐ๊นŒ์ง€ ์ง„ํ–‰ํ•˜๋ฉด ์„ธ ๋‹จ๊ณ„๋งŒ ๋‚จ์•˜์Šต๋‹ˆ๋‹ค: 1. [`TrainingArguments`]์—์„œ ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •์˜ํ•˜์„ธ์š”. `output_dir`์€ ์œ ์ผํ•œ ํ•„์ˆ˜ ๋งค๊ฐœ๋ณ€์ˆ˜๋กœ, ๋ชจ๋ธ์„ ์ €์žฅํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค. (๋จผ์ € Hugging Face์— ๋กœ๊ทธ์ธ ํ•„์ˆ˜) `push_to_hub=True`๋กœ ์„ค์ •ํ•˜์—ฌ ์ด ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 2. ํ›ˆ๋ จ ์ธ์ˆ˜๋ฅผ [`Trainer`]์— ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ ์„ธํŠธ ๋ฐ ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ์™€ ํ•จ๊ป˜ ์ „๋‹ฌํ•˜์„ธ์š”. 3. [`~Trainer.train`]์„ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜์„ธ์š”. ```py >>> training_args = TrainingArguments( ... output_dir="my_awesome_eli5_clm-model", ... eval_strategy="epoch", ... learning_rate=2e-5, ... weight_decay=0.01, ... push_to_hub=True, ... ) >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=lm_dataset["train"], ... eval_dataset=lm_dataset["test"], ... data_collator=data_collator, ... ) >>> trainer.train() ``` ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด [`~transformers.Trainer.evaluate`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํ‰๊ฐ€ํ•˜๊ณ  ํผํ”Œ๋ ‰์„œํ‹ฐ๋ฅผ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> import math >>> eval_results = trainer.evaluate() >>> print(f"Perplexity: {math.exp(eval_results['eval_loss']):.2f}") Perplexity: 49.61 ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ [`~transformers.Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ๊ณต์œ ํ•˜์„ธ์š”. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ๋ˆ„๊ตฌ๋‚˜ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> trainer.push_to_hub() ``` </pt> <tf> <Tip> Keras๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด [๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ](../training#train-a-tensorflow-model-with-keras)์„ ํ™•์ธํ•ด๋ณด์„ธ์š”! </Tip> TensorFlow์—์„œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋ ค๋ฉด, ๋จผ์ € ์˜ตํ‹ฐ๋งˆ์ด์ € ํ•จ์ˆ˜, ํ•™์Šต๋ฅ  ์Šค์ผ€์ค„ ๋ฐ ์ผ๋ถ€ ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์„ค์ •ํ•˜์„ธ์š”: ```py >>> from transformers import create_optimizer, AdamWeightDecay >>> optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01) ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ [`TFAutoModelForCausalLM`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ DistilGPT2๋ฅผ ๋ถˆ๋Ÿฌ์˜ต๋‹ˆ๋‹ค: ```py >>> from transformers import TFAutoModelForCausalLM >>> model = TFAutoModelForCausalLM.from_pretrained("distilbert/distilgpt2") ``` [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ `tf.data.Dataset` ํ˜•์‹์œผ๋กœ ๋ณ€ํ™˜ํ•˜์„ธ์š”: ```py >>> tf_train_set = model.prepare_tf_dataset( ... lm_dataset["train"], ... shuffle=True, ... batch_size=16, ... collate_fn=data_collator, ... ) >>> tf_test_set = model.prepare_tf_dataset( ... lm_dataset["test"], ... shuffle=False, ... batch_size=16, ... collate_fn=data_collator, ... 
) ``` [`compile`](https://keras.io/api/models/model_training_apis/#compile-method)์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๊ธฐ ์œ„ํ•ด ๊ตฌ์„ฑํ•˜์„ธ์š”. Transformers ๋ชจ๋ธ์€ ๋ชจ๋‘ ๊ธฐ๋ณธ์ ์ธ ์ž‘์—… ๊ด€๋ จ ์†์‹ค ํ•จ์ˆ˜๋ฅผ ๊ฐ€์ง€๊ณ  ์žˆ์œผ๋ฏ€๋กœ, ์›ํ•œ๋‹ค๋ฉด ๋ณ„๋„๋กœ ์ง€์ •ํ•˜์ง€ ์•Š์•„๋„ ๋ฉ๋‹ˆ๋‹ค: ```py >>> import tensorflow as tf >>> model.compile(optimizer=optimizer) # ๋ณ„๋„๋กœ loss ์ธ์ž๋ฅผ ๋„ฃ์ง€ ์•Š์•˜์–ด์š”! ``` [`~transformers.PushToHubCallback`]์—์„œ ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์—…๋กœ๋“œํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers.keras_callbacks import PushToHubCallback >>> callback = PushToHubCallback( ... output_dir="my_awesome_eli5_clm-model", ... tokenizer=tokenizer, ... ) ``` ๋งˆ์ง€๋ง‰์œผ๋กœ, ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๊ธฐ ์œ„ํ•ด [`fit`](https://keras.io/api/models/model_training_apis/#fit-method)์„ ํ˜ธ์ถœํ•˜์„ธ์š”. ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ ์„ธํŠธ, ๊ฒ€์ฆ ๋ฐ์ดํ„ฐ ์„ธํŠธ, ์—ํญ ์ˆ˜ ๋ฐ ์ฝœ๋ฐฑ์„ ์ „๋‹ฌํ•˜์„ธ์š”: ```py >>> model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=[callback]) ``` ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด ๋ชจ๋ธ์ด ์ž๋™์œผ๋กœ ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œ๋˜์–ด ๋ชจ๋‘๊ฐ€ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! </tf> </frameworkcontent> <Tip> ์ธ๊ณผ ์–ธ์–ด ๋ชจ๋ธ๋ง์„ ์œ„ํ•ด ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋” ์ž์„ธํ•œ ์˜ˆ์ œ๋Š” ํ•ด๋‹นํ•˜๋Š” [PyTorch ๋…ธํŠธ๋ถ](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb) ๋˜๋Š” [TensorFlow ๋…ธํŠธ๋ถ](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb)์„ ์ฐธ์กฐํ•˜์„ธ์š”. </Tip> ## ์ถ”๋ก [[inference]] ์ข‹์•„์š”, ์ด์ œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ–ˆ์œผ๋ฏ€๋กœ ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ์ƒ์„ฑํ•  ํ…์ŠคํŠธ๋ฅผ ์œ„ํ•œ ํ”„๋กฌํ”„ํŠธ๋ฅผ ๋งŒ๋“ค์–ด๋ณด์„ธ์š”: ```py >>> prompt = "Somatic hypermutation allows the immune system to" ``` ์ถ”๋ก ์„ ์œ„ํ•ด ๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ์„ ๊ฐ„๋‹จํžˆ ์‚ฌ์šฉํ•˜๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`pipeline`]์—์„œ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ๊ณผ ํ•จ๊ป˜ ํ…์ŠคํŠธ ์ƒ์„ฑ์„ ์œ„ํ•œ `pipeline`์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ  ํ…์ŠคํŠธ๋ฅผ ์ „๋‹ฌํ•˜์„ธ์š”: ```py >>> from transformers import pipeline >>> generator = pipeline("text-generation", model="my_awesome_eli5_clm-model") >>> generator(prompt) [{'generated_text': "Somatic hypermutation allows the immune system to be able to effectively reverse the damage caused by an infection.\n\n\nThe damage caused by an infection is caused by the immune system's ability to perform its own self-correcting tasks."}] ``` <frameworkcontent> <pt> ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•˜๊ณ  `input_ids`๋ฅผ PyTorch ํ…์„œ๋กœ ๋ฐ˜ํ™˜ํ•˜์„ธ์š”: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_eli5_clm-model") >>> inputs = tokenizer(prompt, return_tensors="pt").input_ids ``` [`~generation.GenerationMixin.generate`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ…์ŠคํŠธ๋ฅผ ์ƒ์„ฑํ•˜์„ธ์š”. ์ƒ์„ฑ์„ ์ œ์–ดํ•˜๋Š” ๋‹ค์–‘ํ•œ ํ…์ŠคํŠธ ์ƒ์„ฑ ์ „๋žต๊ณผ ๋งค๊ฐœ๋ณ€์ˆ˜์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [ํ…์ŠคํŠธ ์ƒ์„ฑ ์ „๋žต](../generation_strategies) ํŽ˜์ด์ง€๋ฅผ ํ™•์ธํ•˜์„ธ์š”. 
```py >>> from transformers import AutoModelForCausalLM >>> model = AutoModelForCausalLM.from_pretrained("my_awesome_eli5_clm-model") >>> outputs = model.generate(inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95) ``` ์ƒ์„ฑ๋œ ํ† ํฐ ID๋ฅผ ๋‹ค์‹œ ํ…์ŠคํŠธ๋กœ ๋””์ฝ”๋”ฉํ•˜์„ธ์š”: ```py >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ["Somatic hypermutation allows the immune system to react to drugs with the ability to adapt to a different environmental situation. In other words, a system of 'hypermutation' can help the immune system to adapt to a different environmental situation or in some cases even a single life. In contrast, researchers at the University of Massachusetts-Boston have found that 'hypermutation' is much stronger in mice than in humans but can be found in humans, and that it's not completely unknown to the immune system. A study on how the immune system"] ``` </pt> <tf> ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•˜๊ณ  `input_ids`๋ฅผ TensorFlow ํ…์„œ๋กœ ๋ฐ˜ํ™˜ํ•˜์„ธ์š”: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_eli5_clm-model") >>> inputs = tokenizer(prompt, return_tensors="tf").input_ids ``` [`~transformers.generation_tf_utils.TFGenerationMixin.generate`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์š”์•ฝ์„ ์ƒ์„ฑํ•˜์„ธ์š”. ์ƒ์„ฑ์„ ์ œ์–ดํ•˜๋Š” ๋‹ค์–‘ํ•œ ํ…์ŠคํŠธ ์ƒ์„ฑ ์ „๋žต๊ณผ ๋งค๊ฐœ๋ณ€์ˆ˜์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [ํ…์ŠคํŠธ ์ƒ์„ฑ ์ „๋žต](../generation_strategies) ํŽ˜์ด์ง€๋ฅผ ํ™•์ธํ•˜์„ธ์š”. ```py >>> from transformers import TFAutoModelForCausalLM >>> model = TFAutoModelForCausalLM.from_pretrained("my_awesome_eli5_clm-model") >>> outputs = model.generate(input_ids=inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95) ``` ์ƒ์„ฑ๋œ ํ† ํฐ ID๋ฅผ ๋‹ค์‹œ ํ…์ŠคํŠธ๋กœ ๋””์ฝ”๋”ฉํ•˜์„ธ์š”: ```py >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['Somatic hypermutation allows the immune system to detect the presence of other viruses as they become more prevalent. Therefore, researchers have identified a high proportion of human viruses. The proportion of virus-associated viruses in our study increases with age. Therefore, we propose a simple algorithm to detect the presence of these new viruses in our samples as a sign of improved immunity. A first study based on this algorithm, which will be published in Science on Friday, aims to show that this finding could translate into the development of a better vaccine that is more effective for'] ``` </tf> </frameworkcontent>
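훈련 단계에서 계산했던 펄플렉서티(perplexity)는 특정 문장에 대해서도 직접 구해볼 수 있습니다. 아래는 파이토치 기준의 간단한 스케치로, `labels`를 `input_ids`와 동일하게 전달하면 모델이 다음 토큰 예측에 대한 평균 손실을 반환한다는 점을 이용합니다. 예시 문장은 위에서 사용한 프롬프트를 그대로 사용했습니다:

```py
>>> import math
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_eli5_clm-model")
>>> model = AutoModelForCausalLM.from_pretrained("my_awesome_eli5_clm-model")

>>> inputs = tokenizer("Somatic hypermutation allows the immune system to", return_tensors="pt")
>>> with torch.no_grad():
...     # labels를 input_ids와 같게 주면 내부에서 한 칸씩 시프트하여 교차 엔트로피 손실을 계산합니다.
...     loss = model(**inputs, labels=inputs["input_ids"]).loss
>>> print(f"Perplexity: {math.exp(loss.item()):.2f}")
```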
0
mavonic_private_repos/transformers/docs/source/ko
mavonic_private_repos/transformers/docs/source/ko/tasks/image_classification.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜[[image-classification]] [[open-in-colab]] <Youtube id="tjAIM7BOYhw"/> ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜๋Š” ์ด๋ฏธ์ง€์— ๋ ˆ์ด๋ธ” ๋˜๋Š” ํด๋ž˜์Šค๋ฅผ ํ• ๋‹นํ•ฉ๋‹ˆ๋‹ค. ํ…์ŠคํŠธ ๋˜๋Š” ์˜ค๋””์˜ค ๋ถ„๋ฅ˜์™€ ๋‹ฌ๋ฆฌ ์ž…๋ ฅ์€ ์ด๋ฏธ์ง€๋ฅผ ๊ตฌ์„ฑํ•˜๋Š” ํ”ฝ์…€ ๊ฐ’์ž…๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜์—๋Š” ์ž์—ฐ์žฌํ•ด ํ›„ ํ”ผํ•ด ๊ฐ์ง€, ๋†์ž‘๋ฌผ ๊ฑด๊ฐ• ๋ชจ๋‹ˆํ„ฐ๋ง, ์˜๋ฃŒ ์ด๋ฏธ์ง€์—์„œ ์งˆ๋ณ‘์˜ ์ง•ํ›„ ๊ฒ€์‚ฌ ์ง€์› ๋“ฑ ๋‹ค์–‘ํ•œ ์‘์šฉ ์‚ฌ๋ก€๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ๋‹ค์Œ์„ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค: 1. [Food-101](https://huggingface.co/datasets/food101) ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ [ViT](model_doc/vit)๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜์—ฌ ์ด๋ฏธ์ง€์—์„œ ์‹ํ’ˆ ํ•ญ๋ชฉ์„ ๋ถ„๋ฅ˜ํ•ฉ๋‹ˆ๋‹ค. 2. ์ถ”๋ก ์„ ์œ„ํ•ด ๋ฏธ์„ธ ์กฐ์ • ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. <Tip> ์ด ์ž‘์—…๊ณผ ํ˜ธํ™˜๋˜๋Š” ๋ชจ๋“  ์•„ํ‚คํ…์ฒ˜์™€ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋ณด๋ ค๋ฉด [์ž‘์—… ํŽ˜์ด์ง€](https://huggingface.co/tasks/image-classification)๋ฅผ ํ™•์ธํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์—, ํ•„์š”ํ•œ ๋ชจ๋“  ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install transformers datasets evaluate ``` Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜์—ฌ ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๊ณต์œ ํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ๋ฉ”์‹œ์ง€๊ฐ€ ํ‘œ์‹œ๋˜๋ฉด, ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•˜์„ธ์š”: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## Food-101 ๋ฐ์ดํ„ฐ ์„ธํŠธ ๊ฐ€์ ธ์˜ค๊ธฐ[[load-food101-dataset]] ๐Ÿค— Datasets ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ Food-101 ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ๋” ์ž‘์€ ๋ถ€๋ถ„ ์ง‘ํ•ฉ์„ ๊ฐ€์ ธ์˜ค๋Š” ๊ฒƒ์œผ๋กœ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ๋Œ€ํ•œ ํ›ˆ๋ จ์— ๋งŽ์€ ์‹œ๊ฐ„์„ ํ• ์• ํ•˜๊ธฐ ์ „์— ์‹คํ—˜์„ ํ†ตํ•ด ๋ชจ๋“  ๊ฒƒ์ด ์ œ๋Œ€๋กœ ์ž‘๋™ํ•˜๋Š”์ง€ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> from datasets import load_dataset >>> food = load_dataset("food101", split="train[:5000]") ``` ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ `train`์„ [`~datasets.Dataset.train_test_split`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ›ˆ๋ จ ๋ฐ ํ…Œ์ŠคํŠธ ์„ธํŠธ๋กœ ๋ถ„ํ• ํ•˜์„ธ์š”: ```py >>> food = food.train_test_split(test_size=0.2) ``` ๊ทธ๋ฆฌ๊ณ  ์˜ˆ์‹œ๋ฅผ ์‚ดํŽด๋ณด์„ธ์š”: ```py >>> food["train"][0] {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=512x512 at 0x7F52AFC8AC50>, 'label': 79} ``` ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ๊ฐ ์˜ˆ์ œ์—๋Š” ๋‘ ๊ฐœ์˜ ํ•„๋“œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค: - `image`: ์‹ํ’ˆ ํ•ญ๋ชฉ์˜ PIL ์ด๋ฏธ์ง€ - `label`: ์‹ํ’ˆ ํ•ญ๋ชฉ์˜ ๋ ˆ์ด๋ธ” ํด๋ž˜์Šค ๋ชจ๋ธ์ด ๋ ˆ์ด๋ธ” ID์—์„œ ๋ ˆ์ด๋ธ” ์ด๋ฆ„์„ ์‰ฝ๊ฒŒ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ๋„๋ก ๋ ˆ์ด๋ธ” ์ด๋ฆ„์„ ์ •์ˆ˜๋กœ ๋งคํ•‘ํ•˜๊ณ , ์ •์ˆ˜๋ฅผ ๋ ˆ์ด๋ธ” ์ด๋ฆ„์œผ๋กœ ๋งคํ•‘ํ•˜๋Š” ์‚ฌ์ „์„ ๋งŒ๋“œ์„ธ์š”: ```py >>> labels = food["train"].features["label"].names >>> label2id, id2label = dict(), dict() >>> for i, label in enumerate(labels): ... label2id[label] = str(i) ... 
id2label[str(i)] = label ``` ์ด์ œ ๋ ˆ์ด๋ธ” ID๋ฅผ ๋ ˆ์ด๋ธ” ์ด๋ฆ„์œผ๋กœ ๋ณ€ํ™˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> id2label[str(79)] 'prime_rib' ``` ## ์ „์ฒ˜๋ฆฌ[[preprocess]] ๋‹ค์Œ ๋‹จ๊ณ„๋Š” ์ด๋ฏธ์ง€๋ฅผ ํ…์„œ๋กœ ์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด ViT ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ๊ฐ€์ ธ์˜ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค: ```py >>> from transformers import AutoImageProcessor >>> checkpoint = "google/vit-base-patch16-224-in21k" >>> image_processor = AutoImageProcessor.from_pretrained(checkpoint) ``` <frameworkcontent> <pt> ์ด๋ฏธ์ง€์— ๋ช‡ ๊ฐ€์ง€ ์ด๋ฏธ์ง€ ๋ณ€ํ™˜์„ ์ ์šฉํ•˜์—ฌ ๊ณผ์ ํ•ฉ์— ๋Œ€ํ•ด ๋ชจ๋ธ์„ ๋” ๊ฒฌ๊ณ ํ•˜๊ฒŒ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ Torchvision์˜ [`transforms`](https://pytorch.org/vision/stable/transforms.html) ๋ชจ๋“ˆ์„ ์‚ฌ์šฉํ•˜์ง€๋งŒ, ์›ํ•˜๋Š” ์ด๋ฏธ์ง€ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€์˜ ์ž„์˜ ๋ถ€๋ถ„์„ ํฌ๋กญํ•˜๊ณ  ํฌ๊ธฐ๋ฅผ ์กฐ์ •ํ•œ ๋‹ค์Œ, ์ด๋ฏธ์ง€ ํ‰๊ท ๊ณผ ํ‘œ์ค€ ํŽธ์ฐจ๋กœ ์ •๊ทœํ™”ํ•˜์„ธ์š”: ```py >>> from torchvision.transforms import RandomResizedCrop, Compose, Normalize, ToTensor >>> normalize = Normalize(mean=image_processor.image_mean, std=image_processor.image_std) >>> size = ( ... image_processor.size["shortest_edge"] ... if "shortest_edge" in image_processor.size ... else (image_processor.size["height"], image_processor.size["width"]) ... ) >>> _transforms = Compose([RandomResizedCrop(size), ToTensor(), normalize]) ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ค์–ด ๋ณ€ํ™˜์„ ์ ์šฉํ•˜๊ณ  ์ด๋ฏธ์ง€์˜ `pixel_values`(๋ชจ๋ธ์— ๋Œ€ํ•œ ์ž…๋ ฅ)๋ฅผ ๋ฐ˜ํ™˜ํ•˜์„ธ์š”: ```py >>> def transforms(examples): ... examples["pixel_values"] = [_transforms(img.convert("RGB")) for img in examples["image"]] ... del examples["image"] ... return examples ``` ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์ „์ฒ˜๋ฆฌ ๊ธฐ๋Šฅ์„ ์ ์šฉํ•˜๋ ค๋ฉด ๐Ÿค— Datasets [`~datasets.Dataset.with_transform`]์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์š”์†Œ๋ฅผ ๊ฐ€์ ธ์˜ฌ ๋•Œ ๋ณ€ํ™˜์ด ์ฆ‰์‹œ ์ ์šฉ๋ฉ๋‹ˆ๋‹ค: ```py >>> food = food.with_transform(transforms) ``` ์ด์ œ [`DefaultDataCollator`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์˜ˆ์ œ ๋ฐฐ์น˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ๐Ÿค— Transformers์˜ ๋‹ค๋ฅธ ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ์™€ ๋‹ฌ๋ฆฌ, `DefaultDataCollator`๋Š” ํŒจ๋”ฉ๊ณผ ๊ฐ™์€ ์ถ”๊ฐ€์ ์ธ ์ „์ฒ˜๋ฆฌ๋ฅผ ์ ์šฉํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ```py >>> from transformers import DefaultDataCollator >>> data_collator = DefaultDataCollator() ``` </pt> </frameworkcontent> <frameworkcontent> <tf> ๊ณผ์ ํ•ฉ์„ ๋ฐฉ์ง€ํ•˜๊ณ  ๋ชจ๋ธ์„ ๋ณด๋‹ค ๊ฒฌ๊ณ ํ•˜๊ฒŒ ๋งŒ๋“ค๊ธฐ ์œ„ํ•ด ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ํ›ˆ๋ จ ๋ถ€๋ถ„์— ๋ฐ์ดํ„ฐ ์ฆ๊ฐ•์„ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ Keras ์ „์ฒ˜๋ฆฌ ๋ ˆ์ด์–ด๋กœ ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ์— ๋Œ€ํ•œ ๋ณ€ํ™˜(๋ฐ์ดํ„ฐ ์ฆ๊ฐ• ํฌํ•จ)๊ณผ ๊ฒ€์ฆ ๋ฐ์ดํ„ฐ์— ๋Œ€ํ•œ ๋ณ€ํ™˜(์ค‘์•™ ํฌ๋กœํ•‘, ํฌ๊ธฐ ์กฐ์ •, ์ •๊ทœํ™”๋งŒ)์„ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. `tf.image` ๋˜๋Š” ๋‹ค๋ฅธ ์›ํ•˜๋Š” ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> from tensorflow import keras >>> from tensorflow.keras import layers >>> size = (image_processor.size["height"], image_processor.size["width"]) >>> train_data_augmentation = keras.Sequential( ... [ ... layers.RandomCrop(size[0], size[1]), ... layers.Rescaling(scale=1.0 / 127.5, offset=-1), ... layers.RandomFlip("horizontal"), ... layers.RandomRotation(factor=0.02), ... layers.RandomZoom(height_factor=0.2, width_factor=0.2), ... ], ... name="train_data_augmentation", ... ) >>> val_data_augmentation = keras.Sequential( ... [ ... layers.CenterCrop(size[0], size[1]), ... layers.Rescaling(scale=1.0 / 127.5, offset=-1), ... ], ... name="val_data_augmentation", ... 
) ``` ๋‹ค์Œ์œผ๋กœ ํ•œ ๋ฒˆ์— ํ•˜๋‚˜์˜ ์ด๋ฏธ์ง€๊ฐ€ ์•„๋‹ˆ๋ผ ์ด๋ฏธ์ง€ ๋ฐฐ์น˜์— ์ ์ ˆํ•œ ๋ณ€ํ™˜์„ ์ ์šฉํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ```py >>> import numpy as np >>> import tensorflow as tf >>> from PIL import Image >>> def convert_to_tf_tensor(image: Image): ... np_image = np.array(image) ... tf_image = tf.convert_to_tensor(np_image) ... # `expand_dims()` is used to add a batch dimension since ... # the TF augmentation layers operates on batched inputs. ... return tf.expand_dims(tf_image, 0) >>> def preprocess_train(example_batch): ... """Apply train_transforms across a batch.""" ... images = [ ... train_data_augmentation(convert_to_tf_tensor(image.convert("RGB"))) for image in example_batch["image"] ... ] ... example_batch["pixel_values"] = [tf.transpose(tf.squeeze(image)) for image in images] ... return example_batch ... def preprocess_val(example_batch): ... """Apply val_transforms across a batch.""" ... images = [ ... val_data_augmentation(convert_to_tf_tensor(image.convert("RGB"))) for image in example_batch["image"] ... ] ... example_batch["pixel_values"] = [tf.transpose(tf.squeeze(image)) for image in images] ... return example_batch ``` ๐Ÿค— Datasets [`~datasets.Dataset.set_transform`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ฆ‰์‹œ ๋ณ€ํ™˜์„ ์ ์šฉํ•˜์„ธ์š”: ```py food["train"].set_transform(preprocess_train) food["test"].set_transform(preprocess_val) ``` ์ตœ์ข… ์ „์ฒ˜๋ฆฌ ๋‹จ๊ณ„๋กœ `DefaultDataCollator`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์˜ˆ์ œ ๋ฐฐ์น˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ๐Ÿค— Transformers์˜ ๋‹ค๋ฅธ ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ์™€ ๋‹ฌ๋ฆฌ `DefaultDataCollator`๋Š” ํŒจ๋”ฉ๊ณผ ๊ฐ™์€ ์ถ”๊ฐ€ ์ „์ฒ˜๋ฆฌ๋ฅผ ์ ์šฉํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ```py >>> from transformers import DefaultDataCollator >>> data_collator = DefaultDataCollator(return_tensors="tf") ``` </tf> </frameworkcontent> ## ํ‰๊ฐ€[[evaluate]] ํ›ˆ๋ จ ์ค‘์— ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ํฌํ•จํ•˜๋ฉด ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์„ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์Šต๋‹ˆ๋‹ค. ๐Ÿค— [Evaluate](https://huggingface.co/docs/evaluate/index) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋กœ ํ‰๊ฐ€ ๋ฐฉ๋ฒ•์„ ๋น ๋ฅด๊ฒŒ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์ž‘์—…์—์„œ๋Š” [accuracy](https://huggingface.co/spaces/evaluate-metric/accuracy) ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. (๐Ÿค— Evaluate [๋น ๋ฅธ ๋‘˜๋Ÿฌ๋ณด๊ธฐ](https://huggingface.co/docs/evaluate/a_quick_tour)๋ฅผ ์ฐธ์กฐํ•˜์—ฌ ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ๊ฐ€์ ธ์˜ค๊ณ  ๊ณ„์‚ฐํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ์ž์„ธํžˆ ์•Œ์•„๋ณด์„ธ์š”): ```py >>> import evaluate >>> accuracy = evaluate.load("accuracy") ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์˜ˆ์ธก๊ณผ ๋ ˆ์ด๋ธ”์„ [`~evaluate.EvaluationModule.compute`]์— ์ „๋‹ฌํ•˜์—ฌ ์ •ํ™•๋„๋ฅผ ๊ณ„์‚ฐํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค: ```py >>> import numpy as np >>> def compute_metrics(eval_pred): ... predictions, labels = eval_pred ... predictions = np.argmax(predictions, axis=1) ... return accuracy.compute(predictions=predictions, references=labels) ``` ์ด์ œ `compute_metrics` ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์œผ๋ฉฐ, ํ›ˆ๋ จ์„ ์„ค์ •ํ•˜๋ฉด ์ด ํ•จ์ˆ˜๋กœ ๋˜๋Œ์•„์˜ฌ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ## ํ›ˆ๋ จ[[train]] <frameworkcontent> <pt> <Tip> [`Trainer`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ์ต์ˆ™ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ, [์—ฌ๊ธฐ](../training#train-with-pytorch-trainer)์—์„œ ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ์„ ํ™•์ธํ•˜์„ธ์š”! </Tip> ์ด์ œ ๋ชจ๋ธ์„ ํ›ˆ๋ จ์‹œํ‚ฌ ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`AutoModelForImageClassification`]๋กœ ViT๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. ์˜ˆ์ƒ๋˜๋Š” ๋ ˆ์ด๋ธ” ์ˆ˜, ๋ ˆ์ด๋ธ” ๋งคํ•‘ ๋ฐ ๋ ˆ์ด๋ธ” ์ˆ˜๋ฅผ ์ง€์ •ํ•˜์„ธ์š”: ```py >>> from transformers import AutoModelForImageClassification, TrainingArguments, Trainer >>> model = AutoModelForImageClassification.from_pretrained( ... checkpoint, ... 
num_labels=len(labels), ... id2label=id2label, ... label2id=label2id, ... ) ``` ์ด์ œ ์„ธ ๋‹จ๊ณ„๋งŒ ๊ฑฐ์น˜๋ฉด ๋์ž…๋‹ˆ๋‹ค: 1. [`TrainingArguments`]์—์„œ ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •์˜ํ•˜์„ธ์š”. `image` ์—ด์ด ์‚ญ์ œ๋˜๊ธฐ ๋•Œ๋ฌธ์— ๋ฏธ์‚ฌ์šฉ ์—ด์„ ์ œ๊ฑฐํ•˜์ง€ ์•Š๋Š” ๊ฒƒ์ด ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. `image` ์—ด์ด ์—†์œผ๋ฉด `pixel_values`์„ ์ƒ์„ฑํ•  ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค. ์ด ๋™์ž‘์„ ๋ฐฉ์ง€ํ•˜๋ ค๋ฉด `remove_unused_columns=False`๋กœ ์„ค์ •ํ•˜์„ธ์š”! ๋‹ค๋ฅธ ์œ ์ผํ•œ ํ•„์ˆ˜ ๋งค๊ฐœ๋ณ€์ˆ˜๋Š” ๋ชจ๋ธ ์ €์žฅ ์œ„์น˜๋ฅผ ์ง€์ •ํ•˜๋Š” `output_dir`์ž…๋‹ˆ๋‹ค. `push_to_hub=True`๋กœ ์„ค์ •ํ•˜๋ฉด ์ด ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ํ‘ธ์‹œํ•ฉ๋‹ˆ๋‹ค(๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๋ ค๋ฉด Hugging Face์— ๋กœ๊ทธ์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค). ๊ฐ ์—ํญ์ด ๋๋‚  ๋•Œ๋งˆ๋‹ค, [`Trainer`]๊ฐ€ ์ •ํ™•๋„๋ฅผ ํ‰๊ฐ€ํ•˜๊ณ  ํ›ˆ๋ จ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ €์žฅํ•ฉ๋‹ˆ๋‹ค. 2. [`Trainer`]์— ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ ์„ธํŠธ, ํ† ํฌ๋‚˜์ด์ €, ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ ๋ฐ `compute_metrics` ํ•จ์ˆ˜์™€ ํ•จ๊ป˜ ํ›ˆ๋ จ ์ธ์ˆ˜๋ฅผ ์ „๋‹ฌํ•˜์„ธ์š”. 3. [`~Trainer.train`]์„ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜์„ธ์š”. ```py >>> training_args = TrainingArguments( ... output_dir="my_awesome_food_model", ... remove_unused_columns=False, ... eval_strategy="epoch", ... save_strategy="epoch", ... learning_rate=5e-5, ... per_device_train_batch_size=16, ... gradient_accumulation_steps=4, ... per_device_eval_batch_size=16, ... num_train_epochs=3, ... warmup_ratio=0.1, ... logging_steps=10, ... load_best_model_at_end=True, ... metric_for_best_model="accuracy", ... push_to_hub=True, ... ) >>> trainer = Trainer( ... model=model, ... args=training_args, ... data_collator=data_collator, ... train_dataset=food["train"], ... eval_dataset=food["test"], ... tokenizer=image_processor, ... compute_metrics=compute_metrics, ... ) >>> trainer.train() ``` ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด, ๋ชจ๋“  ์‚ฌ๋žŒ์ด ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก [`~transformers.Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋กœ ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ๊ณต์œ ํ•˜์„ธ์š”: ```py >>> trainer.push_to_hub() ``` </pt> </frameworkcontent> <frameworkcontent> <tf> <Tip> Keras๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ์ต์ˆ™ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ, ๋จผ์ € [๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ](./training#train-a-tensorflow-model-with-keras)์„ ํ™•์ธํ•˜์„ธ์š”! </Tip> TensorFlow์—์„œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋ ค๋ฉด ๋‹ค์Œ ๋‹จ๊ณ„๋ฅผ ๋”ฐ๋ฅด์„ธ์š”: 1. ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •์˜ํ•˜๊ณ  ์˜ตํ‹ฐ๋งˆ์ด์ €์™€ ํ•™์Šต๋ฅ  ์Šค์ผ€์ฅด์„ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค. 2. ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ์ธ์Šคํ„ด์Šคํ™”ํ•ฉ๋‹ˆ๋‹ค. 3. ๐Ÿค— Dataset์„ `tf.data.Dataset`์œผ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค. 4. ๋ชจ๋ธ์„ ์ปดํŒŒ์ผํ•ฉ๋‹ˆ๋‹ค. 5. ์ฝœ๋ฐฑ์„ ์ถ”๊ฐ€ํ•˜๊ณ  ํ›ˆ๋ จ์„ ์ˆ˜ํ–‰ํ•˜๊ธฐ ์œ„ํ•ด `fit()` ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. 6. ์ปค๋ฎค๋‹ˆํ‹ฐ์™€ ๊ณต์œ ํ•˜๊ธฐ ์œ„ํ•ด ๋ชจ๋ธ์„ ๐Ÿค— Hub์— ์—…๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค. ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ, ์˜ตํ‹ฐ๋งˆ์ด์ € ๋ฐ ํ•™์Šต๋ฅ  ์Šค์ผ€์ฅด์„ ์ •์˜ํ•˜๋Š” ๊ฒƒ์œผ๋กœ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import create_optimizer >>> batch_size = 16 >>> num_epochs = 5 >>> num_train_steps = len(food["train"]) * num_epochs >>> learning_rate = 3e-5 >>> weight_decay_rate = 0.01 >>> optimizer, lr_schedule = create_optimizer( ... init_lr=learning_rate, ... num_train_steps=num_train_steps, ... weight_decay_rate=weight_decay_rate, ... num_warmup_steps=0, ... ) ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ๋ ˆ์ด๋ธ” ๋งคํ•‘๊ณผ ํ•จ๊ป˜ [`TFAuto ModelForImageClassification`]์œผ๋กœ ViT๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> from transformers import TFAutoModelForImageClassification >>> model = TFAutoModelForImageClassification.from_pretrained( ... checkpoint, ... id2label=id2label, ... label2id=label2id, ... 
) ``` ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ [`~datasets.Dataset.to_tf_dataset`]์™€ `data_collator`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ `tf.data.Dataset` ํ˜•์‹์œผ๋กœ ๋ณ€ํ™˜ํ•˜์„ธ์š”: ```py >>> # converting our train dataset to tf.data.Dataset >>> tf_train_dataset = food["train"].to_tf_dataset( ... columns="pixel_values", label_cols="label", shuffle=True, batch_size=batch_size, collate_fn=data_collator ... ) >>> # converting our test dataset to tf.data.Dataset >>> tf_eval_dataset = food["test"].to_tf_dataset( ... columns="pixel_values", label_cols="label", shuffle=True, batch_size=batch_size, collate_fn=data_collator ... ) ``` `compile()`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ›ˆ๋ จ ๋ชจ๋ธ์„ ๊ตฌ์„ฑํ•˜์„ธ์š”: ```py >>> from tensorflow.keras.losses import SparseCategoricalCrossentropy >>> loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) >>> model.compile(optimizer=optimizer, loss=loss) ``` ์˜ˆ์ธก์—์„œ ์ •ํ™•๋„๋ฅผ ๊ณ„์‚ฐํ•˜๊ณ  ๋ชจ๋ธ์„ ๐Ÿค— Hub๋กœ ํ‘ธ์‹œํ•˜๋ ค๋ฉด [Keras callbacks](../main_classes/keras_callbacks)๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. `compute_metrics` ํ•จ์ˆ˜๋ฅผ [KerasMetricCallback](../main_classes/keras_callbacks#transformers.KerasMetricCallback)์— ์ „๋‹ฌํ•˜๊ณ , [PushToHubCallback](../main_classes/keras_callbacks#transformers.PushToHubCallback)์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers.keras_callbacks import KerasMetricCallback, PushToHubCallback >>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_eval_dataset) >>> push_to_hub_callback = PushToHubCallback( ... output_dir="food_classifier", ... tokenizer=image_processor, ... save_strategy="no", ... ) >>> callbacks = [metric_callback, push_to_hub_callback] ``` ์ด์ œ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! ํ›ˆ๋ จ ๋ฐ ๊ฒ€์ฆ ๋ฐ์ดํ„ฐ ์„ธํŠธ, ์—ํญ ์ˆ˜์™€ ํ•จ๊ป˜ `fit()`์„ ํ˜ธ์ถœํ•˜๊ณ , ์ฝœ๋ฐฑ์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค: ```py >>> model.fit(tf_train_dataset, validation_data=tf_eval_dataset, epochs=num_epochs, callbacks=callbacks) Epoch 1/5 250/250 [==============================] - 313s 1s/step - loss: 2.5623 - val_loss: 1.4161 - accuracy: 0.9290 Epoch 2/5 250/250 [==============================] - 265s 1s/step - loss: 0.9181 - val_loss: 0.6808 - accuracy: 0.9690 Epoch 3/5 250/250 [==============================] - 252s 1s/step - loss: 0.3910 - val_loss: 0.4303 - accuracy: 0.9820 Epoch 4/5 250/250 [==============================] - 251s 1s/step - loss: 0.2028 - val_loss: 0.3191 - accuracy: 0.9900 Epoch 5/5 250/250 [==============================] - 238s 949ms/step - loss: 0.1232 - val_loss: 0.3259 - accuracy: 0.9890 ``` ์ถ•ํ•˜ํ•ฉ๋‹ˆ๋‹ค! ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ๐Ÿค— Hub์— ๊ณต์œ ํ–ˆ์Šต๋‹ˆ๋‹ค. ์ด์ œ ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! </tf> </frameworkcontent> <Tip> ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ์ž์„ธํ•œ ์˜ˆ์ œ๋Š” ๋‹ค์Œ [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb)์„ ์ฐธ์กฐํ•˜์„ธ์š”. </Tip> ## ์ถ”๋ก [[inference]] ์ข‹์•„์š”, ์ด์ œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ–ˆ์œผ๋‹ˆ ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ์ถ”๋ก ์„ ์ˆ˜ํ–‰ํ•˜๊ณ ์ž ํ•˜๋Š” ์ด๋ฏธ์ง€๋ฅผ ๊ฐ€์ ธ์™€๋ด…์‹œ๋‹ค: ```py >>> ds = load_dataset("food101", split="validation[:10]") >>> image = ds["image"][0] ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png" alt="image of beignets"/> </div> ๋ฏธ์„ธ ์กฐ์ • ๋ชจ๋ธ๋กœ ์ถ”๋ก ์„ ์‹œ๋„ํ•˜๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`pipeline`]์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. 
๋ชจ๋ธ๋กœ ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•œ `pipeline`์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ  ์ด๋ฏธ์ง€๋ฅผ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import pipeline >>> classifier = pipeline("image-classification", model="my_awesome_food_model") >>> classifier(image) [{'score': 0.31856709718704224, 'label': 'beignets'}, {'score': 0.015232225880026817, 'label': 'bruschetta'}, {'score': 0.01519392803311348, 'label': 'chicken_wings'}, {'score': 0.013022331520915031, 'label': 'pork_chop'}, {'score': 0.012728818692266941, 'label': 'prime_rib'}] ``` ์›ํ•œ๋‹ค๋ฉด, `pipeline`์˜ ๊ฒฐ๊ณผ๋ฅผ ์ˆ˜๋™์œผ๋กœ ๋ณต์ œํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: <frameworkcontent> <pt> ์ด๋ฏธ์ง€๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ๊ฐ€์ ธ์˜ค๊ณ  `input`์„ PyTorch ํ…์„œ๋กœ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoImageProcessor >>> import torch >>> image_processor = AutoImageProcessor.from_pretrained("my_awesome_food_model") >>> inputs = image_processor(image, return_tensors="pt") ``` ์ž…๋ ฅ์„ ๋ชจ๋ธ์— ์ „๋‹ฌํ•˜๊ณ  logits์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForImageClassification >>> model = AutoModelForImageClassification.from_pretrained("my_awesome_food_model") >>> with torch.no_grad(): ... logits = model(**inputs).logits ``` ํ™•๋ฅ ์ด ๊ฐ€์žฅ ๋†’์€ ์˜ˆ์ธก ๋ ˆ์ด๋ธ”์„ ๊ฐ€์ ธ์˜ค๊ณ , ๋ชจ๋ธ์˜ `id2label` ๋งคํ•‘์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ ˆ์ด๋ธ”๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> predicted_label = logits.argmax(-1).item() >>> model.config.id2label[predicted_label] 'beignets' ``` </pt> </frameworkcontent> <frameworkcontent> <tf> ์ด๋ฏธ์ง€๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ๊ฐ€์ ธ์˜ค๊ณ  `input`์„ TensorFlow ํ…์„œ๋กœ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoImageProcessor >>> image_processor = AutoImageProcessor.from_pretrained("MariaK/food_classifier") >>> inputs = image_processor(image, return_tensors="tf") ``` ์ž…๋ ฅ์„ ๋ชจ๋ธ์— ์ „๋‹ฌํ•˜๊ณ  logits์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import TFAutoModelForImageClassification >>> model = TFAutoModelForImageClassification.from_pretrained("MariaK/food_classifier") >>> logits = model(**inputs).logits ``` ํ™•๋ฅ ์ด ๊ฐ€์žฅ ๋†’์€ ์˜ˆ์ธก ๋ ˆ์ด๋ธ”์„ ๊ฐ€์ ธ์˜ค๊ณ , ๋ชจ๋ธ์˜ `id2label` ๋งคํ•‘์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ ˆ์ด๋ธ”๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0]) >>> model.config.id2label[predicted_class_id] 'beignets' ``` </tf> </frameworkcontent>
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๊ฐ๊ด€์‹ ๋ฌธ์ œ[[multiple-choice]] [[open-in-colab]] ๊ฐ๊ด€์‹ ๊ณผ์ œ๋Š” ๋ฌธ๋งฅ๊ณผ ํ•จ๊ป˜ ์—ฌ๋Ÿฌ ๊ฐœ์˜ ํ›„๋ณด ๋‹ต๋ณ€์ด ์ œ๊ณต๋˜๊ณ  ๋ชจ๋ธ์ด ์ •๋‹ต์„ ์„ ํƒํ•˜๋„๋ก ํ•™์Šต๋œ๋‹ค๋Š” ์ ์„ ์ œ์™ธํ•˜๋ฉด ์งˆ์˜์‘๋‹ต๊ณผ ์œ ์‚ฌํ•ฉ๋‹ˆ๋‹ค. ์ง„ํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ•์€ ์•„๋ž˜์™€ ๊ฐ™์Šต๋‹ˆ๋‹ค: 1. [SWAG](https://huggingface.co/datasets/swag) ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ 'regular' ๊ตฌ์„ฑ์œผ๋กœ [BERT](https://huggingface.co/google-bert/bert-base-uncased)๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜์—ฌ ์—ฌ๋Ÿฌ ์˜ต์…˜๊ณผ ์ผ๋ถ€ ์ปจํ…์ŠคํŠธ๊ฐ€ ์ฃผ์–ด์กŒ์„ ๋•Œ ๊ฐ€์žฅ ์ ํ•ฉํ•œ ๋‹ต์„ ์„ ํƒํ•ฉ๋‹ˆ๋‹ค. 2. ์ถ”๋ก ์— ๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ๋ชจ๋‘ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install transformers datasets evaluate ``` ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ์ปค๋ฎค๋‹ˆํ‹ฐ์™€ ๊ณต์œ ํ•  ์ˆ˜ ์žˆ๋„๋ก ํ—ˆ๊น…ํŽ˜์ด์Šค ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ๋ฉ”์‹œ์ง€๊ฐ€ ํ‘œ์‹œ๋˜๋ฉด ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•ฉ๋‹ˆ๋‹ค: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## SWAG ๋ฐ์ดํ„ฐ ์„ธํŠธ ๊ฐ€์ ธ์˜ค๊ธฐ[[load-swag-dataset]] ๋จผ์ € ๐Ÿค— Datasets ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ SWAG ๋ฐ์ดํ„ฐ์…‹์˜ '์ผ๋ฐ˜' ๊ตฌ์„ฑ์„ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> from datasets import load_dataset >>> swag = load_dataset("swag", "regular") ``` ์ด์ œ ๋ฐ์ดํ„ฐ๋ฅผ ์‚ดํŽด๋ด…๋‹ˆ๋‹ค: ```py >>> swag["train"][0] {'ending0': 'passes by walking down the street playing their instruments.', 'ending1': 'has heard approaching them.', 'ending2': "arrives and they're outside dancing and asleep.", 'ending3': 'turns the lead singer watches the performance.', 'fold-ind': '3416', 'gold-source': 'gold', 'label': 0, 'sent1': 'Members of the procession walk down the street holding small horn brass instruments.', 'sent2': 'A drum line', 'startphrase': 'Members of the procession walk down the street holding small horn brass instruments. A drum line', 'video-id': 'anetv_jkn6uvmqwh4'} ``` ์—ฌ๊ธฐ์—๋Š” ๋งŽ์€ ํ•„๋“œ๊ฐ€ ์žˆ๋Š” ๊ฒƒ์ฒ˜๋Ÿผ ๋ณด์ด์ง€๋งŒ ์‹ค์ œ๋กœ๋Š” ๋งค์šฐ ๊ฐ„๋‹จํ•ฉ๋‹ˆ๋‹ค: - `sent1` ๋ฐ `sent2`: ์ด ํ•„๋“œ๋Š” ๋ฌธ์žฅ์ด ์–ด๋–ป๊ฒŒ ์‹œ์ž‘๋˜๋Š”์ง€ ๋ณด์—ฌ์ฃผ๋ฉฐ, ์ด ๋‘ ํ•„๋“œ๋ฅผ ํ•ฉ์น˜๋ฉด `์‹œ์ž‘ ๊ตฌ์ ˆ(startphrase)` ํ•„๋“œ๊ฐ€ ๋ฉ๋‹ˆ๋‹ค. - `์ข…๋ฃŒ ๊ตฌ์ ˆ(ending)`: ๋ฌธ์žฅ์ด ์–ด๋–ป๊ฒŒ ๋๋‚  ์ˆ˜ ์žˆ๋Š”์ง€์— ๋Œ€ํ•œ ๊ฐ€๋Šฅํ•œ ์ข…๋ฃŒ ๊ตฌ์ ˆ๋ฅผ ์ œ์‹œํ•˜์ง€๋งŒ ๊ทธ ์ค‘ ํ•˜๋‚˜๋งŒ ์ •๋‹ต์ž…๋‹ˆ๋‹ค. - `๋ ˆ์ด๋ธ”(label)`: ์˜ฌ๋ฐ”๋ฅธ ๋ฌธ์žฅ ์ข…๋ฃŒ ๊ตฌ์ ˆ์„ ์‹๋ณ„ํ•ฉ๋‹ˆ๋‹ค. ## ์ „์ฒ˜๋ฆฌ[[preprocess]] ๋‹ค์Œ ๋‹จ๊ณ„๋Š” ๋ฌธ์žฅ์˜ ์‹œ์ž‘๊ณผ ๋„ค ๊ฐ€์ง€ ๊ฐ€๋Šฅํ•œ ๊ตฌ์ ˆ์„ ์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด BERT ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๋ถˆ๋Ÿฌ์˜ต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased") ``` ์ƒ์„ฑํ•˜๋ ค๋Š” ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์•„์•ผ ํ•ฉ๋‹ˆ๋‹ค: 1. 
`sent1` ํ•„๋“œ๋ฅผ ๋„ค ๊ฐœ ๋ณต์‚ฌํ•œ ๋‹ค์Œ ๊ฐ๊ฐ์„ `sent2`์™€ ๊ฒฐํ•ฉํ•˜์—ฌ ๋ฌธ์žฅ์ด ์‹œ์ž‘๋˜๋Š” ๋ฐฉ์‹์„ ์žฌํ˜„ํ•ฉ๋‹ˆ๋‹ค. 2. `sent2`๋ฅผ ๋„ค ๊ฐ€์ง€ ๊ฐ€๋Šฅํ•œ ๋ฌธ์žฅ ๊ตฌ์ ˆ ๊ฐ๊ฐ๊ณผ ๊ฒฐํ•ฉํ•ฉ๋‹ˆ๋‹ค. 3. ์ด ๋‘ ๋ชฉ๋ก์„ ํ† ํฐํ™”ํ•  ์ˆ˜ ์žˆ๋„๋ก ํ‰ํƒ„ํ™”(flatten)ํ•˜๊ณ , ๊ฐ ์˜ˆ์ œ์— ํ•ด๋‹นํ•˜๋Š” `input_ids`, `attention_mask` ๋ฐ `labels` ํ•„๋“œ๋ฅผ ๊ฐ–๋„๋ก ๋‹ค์ฐจ์›ํ™”(unflatten) ํ•ฉ๋‹ˆ๋‹ค. ```py >>> ending_names = ["ending0", "ending1", "ending2", "ending3"] >>> def preprocess_function(examples): ... first_sentences = [[context] * 4 for context in examples["sent1"]] ... question_headers = examples["sent2"] ... second_sentences = [ ... [f"{header} {examples[end][i]}" for end in ending_names] for i, header in enumerate(question_headers) ... ] ... first_sentences = sum(first_sentences, []) ... second_sentences = sum(second_sentences, []) ... tokenized_examples = tokenizer(first_sentences, second_sentences, truncation=True) ... return {k: [v[i : i + 4] for i in range(0, len(v), 4)] for k, v in tokenized_examples.items()} ``` ์ „์ฒด ๋ฐ์ดํ„ฐ ์ง‘ํ•ฉ์— ์ „์ฒ˜๋ฆฌ ๊ธฐ๋Šฅ์„ ์ ์šฉํ•˜๋ ค๋ฉด ๐Ÿค— Datasets [`~datasets.Dataset.map`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. `batched=True`๋ฅผ ์„ค์ •ํ•˜์—ฌ ๋ฐ์ดํ„ฐ ์ง‘ํ•ฉ์˜ ์—ฌ๋Ÿฌ ์š”์†Œ๋ฅผ ํ•œ ๋ฒˆ์— ์ฒ˜๋ฆฌํ•˜๋ฉด `map` ํ•จ์ˆ˜์˜ ์†๋„๋ฅผ ๋†’์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py tokenized_swag = swag.map(preprocess_function, batched=True) ``` ๐Ÿค— Transformers์—๋Š” ๊ฐ๊ด€์‹์šฉ ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ๊ฐ€ ์—†์œผ๋ฏ€๋กœ ์˜ˆ์ œ ๋ฐฐ์น˜๋ฅผ ๋งŒ๋“ค๋ ค๋ฉด [`DataCollatorWithPadding`]์„ ์กฐ์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ ์ •๋ ฌ ์ค‘์— ์ „์ฒด ๋ฐ์ดํ„ฐ ์ง‘ํ•ฉ์„ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ํŒจ๋”ฉํ•˜๋Š” ๋Œ€์‹  ๋ฐฐ์น˜ ์ค‘ ๊ฐ€์žฅ ๊ธด ๊ธธ์ด๋กœ ๋ฌธ์žฅ์„ *๋™์  ํŒจ๋”ฉ*ํ•˜๋Š” ๊ฒƒ์ด ๋” ํšจ์œจ์ ์ž…๋‹ˆ๋‹ค. `DataCollatorForMultipleChoice`๋Š” ๋ชจ๋“  ๋ชจ๋ธ ์ž…๋ ฅ์„ ํ‰ํƒ„ํ™”ํ•˜๊ณ  ํŒจ๋”ฉ์„ ์ ์šฉํ•˜๋ฉฐ ๊ทธ ๊ฒฐ๊ณผ๋ฅผ ๊ฒฐ๊ณผ๋ฅผ ๋‹ค์ฐจ์›ํ™”ํ•ฉ๋‹ˆ๋‹ค: <frameworkcontent> <pt> ```py >>> from dataclasses import dataclass >>> from transformers.tokenization_utils_base import PreTrainedTokenizerBase, PaddingStrategy >>> from typing import Optional, Union >>> import torch >>> @dataclass ... class DataCollatorForMultipleChoice: ... """ ... Data collator that will dynamically pad the inputs for multiple choice received. ... """ ... tokenizer: PreTrainedTokenizerBase ... padding: Union[bool, str, PaddingStrategy] = True ... max_length: Optional[int] = None ... pad_to_multiple_of: Optional[int] = None ... def __call__(self, features): ... label_name = "label" if "label" in features[0].keys() else "labels" ... labels = [feature.pop(label_name) for feature in features] ... batch_size = len(features) ... num_choices = len(features[0]["input_ids"]) ... flattened_features = [ ... [{k: v[i] for k, v in feature.items()} for i in range(num_choices)] for feature in features ... ] ... flattened_features = sum(flattened_features, []) ... batch = self.tokenizer.pad( ... flattened_features, ... padding=self.padding, ... max_length=self.max_length, ... pad_to_multiple_of=self.pad_to_multiple_of, ... return_tensors="pt", ... ) ... batch = {k: v.view(batch_size, num_choices, -1) for k, v in batch.items()} ... batch["labels"] = torch.tensor(labels, dtype=torch.int64) ... return batch ``` </pt> <tf> ```py >>> from dataclasses import dataclass >>> from transformers.tokenization_utils_base import PreTrainedTokenizerBase, PaddingStrategy >>> from typing import Optional, Union >>> import tensorflow as tf >>> @dataclass ... class DataCollatorForMultipleChoice: ... """ ... 
Data collator that will dynamically pad the inputs for multiple choice received. ... """ ... tokenizer: PreTrainedTokenizerBase ... padding: Union[bool, str, PaddingStrategy] = True ... max_length: Optional[int] = None ... pad_to_multiple_of: Optional[int] = None ... def __call__(self, features): ... label_name = "label" if "label" in features[0].keys() else "labels" ... labels = [feature.pop(label_name) for feature in features] ... batch_size = len(features) ... num_choices = len(features[0]["input_ids"]) ... flattened_features = [ ... [{k: v[i] for k, v in feature.items()} for i in range(num_choices)] for feature in features ... ] ... flattened_features = sum(flattened_features, []) ... batch = self.tokenizer.pad( ... flattened_features, ... padding=self.padding, ... max_length=self.max_length, ... pad_to_multiple_of=self.pad_to_multiple_of, ... return_tensors="tf", ... ) ... batch = {k: tf.reshape(v, (batch_size, num_choices, -1)) for k, v in batch.items()} ... batch["labels"] = tf.convert_to_tensor(labels, dtype=tf.int64) ... return batch ``` </tf> </frameworkcontent> ## ํ‰๊ฐ€ ํ•˜๊ธฐ[[evaluate]] ํ›ˆ๋ จ ์ค‘์— ๋ฉ”ํŠธ๋ฆญ์„ ํฌํ•จํ•˜๋ฉด ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์„ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์Šต๋‹ˆ๋‹ค. ๐Ÿค—[Evaluate](https://huggingface.co/docs/evaluate/index) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ‰๊ฐ€ ๋ฐฉ๋ฒ•์„ ๋น ๋ฅด๊ฒŒ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์ž‘์—…์—์„œ๋Š” [accuracy](https://huggingface.co/spaces/evaluate-metric/accuracy) ์ง€ํ‘œ๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค(๐Ÿค— Evaluate [๋‘˜๋Ÿฌ๋ณด๊ธฐ](https://huggingface.co/docs/evaluate/a_quick_tour)๋ฅผ ์ฐธ์กฐํ•˜์—ฌ ์ง€ํ‘œ๋ฅผ ๊ฐ€์ ธ์˜ค๊ณ  ๊ณ„์‚ฐํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ์ž์„ธํžˆ ์•Œ์•„๋ณด์„ธ์š”): ```py >>> import evaluate >>> accuracy = evaluate.load("accuracy") ``` ๊ทธ๋ฆฌ๊ณ  ์˜ˆ์ธก๊ณผ ๋ ˆ์ด๋ธ”์„ [`~evaluate.EvaluationModule.compute`]์— ์ „๋‹ฌํ•˜์—ฌ ์ •ํ™•๋„๋ฅผ ๊ณ„์‚ฐํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค: ```py >>> import numpy as np >>> def compute_metrics(eval_pred): ... predictions, labels = eval_pred ... predictions = np.argmax(predictions, axis=1) ... return accuracy.compute(predictions=predictions, references=labels) ``` ์ด์ œ `compute_metrics` ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์œผ๋ฉฐ, ํ›ˆ๋ จ์„ ์„ค์ •ํ•  ๋•Œ ์ด ํ•จ์ˆ˜๋กœ ๋Œ์•„๊ฐ€๊ฒŒ ๋ฉ๋‹ˆ๋‹ค. ## ํ›ˆ๋ จ ํ•˜๊ธฐ[[train]] <frameworkcontent> <pt> <Tip> [`Trainer`]๋กœ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐ ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ [์—ฌ๊ธฐ](../training#train-with-pytorch-trainer)๋ฅผ ์‚ดํŽด๋ณด์„ธ์š”! </Tip> ์ด์ œ ๋ชจ๋ธ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`AutoModelForMultipleChoice`]๋กœ BERT๋ฅผ ๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForMultipleChoice, TrainingArguments, Trainer >>> model = AutoModelForMultipleChoice.from_pretrained("google-bert/bert-base-uncased") ``` ์ด์ œ ์„ธ ๋‹จ๊ณ„๋งŒ ๋‚จ์•˜์Šต๋‹ˆ๋‹ค: 1. ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ [`TrainingArguments`]์— ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. ์œ ์ผํ•œ ํ•„์ˆ˜ ๋งค๊ฐœ๋ณ€์ˆ˜๋Š” ๋ชจ๋ธ์„ ์ €์žฅํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•˜๋Š” `output_dir`์ž…๋‹ˆ๋‹ค. `push_to_hub=True`๋ฅผ ์„ค์ •ํ•˜์—ฌ ์ด ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ํ‘ธ์‹œํ•ฉ๋‹ˆ๋‹ค(๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๋ ค๋ฉด ํ—ˆ๊น… ํŽ˜์ด์Šค์— ๋กœ๊ทธ์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค). ๊ฐ ์—ํญ์ด ๋๋‚  ๋•Œ๋งˆ๋‹ค [`Trainer`]๊ฐ€ ์ •ํ™•๋„๋ฅผ ํ‰๊ฐ€ํ•˜๊ณ  ํ›ˆ๋ จ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ €์žฅํ•ฉ๋‹ˆ๋‹ค. 2. ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ ์„ธํŠธ, ํ† ํฌ๋‚˜์ด์ €, ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ, `compute_metrics` ํ•จ์ˆ˜์™€ ํ•จ๊ป˜ ํ›ˆ๋ จ ์ธ์ž๋ฅผ [`Trainer`]์— ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. 3. [`~Trainer.train`]์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค. ```py >>> training_args = TrainingArguments( ... 
output_dir="my_awesome_swag_model", ... eval_strategy="epoch", ... save_strategy="epoch", ... load_best_model_at_end=True, ... learning_rate=5e-5, ... per_device_train_batch_size=16, ... per_device_eval_batch_size=16, ... num_train_epochs=3, ... weight_decay=0.01, ... push_to_hub=True, ... ) >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=tokenized_swag["train"], ... eval_dataset=tokenized_swag["validation"], ... tokenizer=tokenizer, ... data_collator=DataCollatorForMultipleChoice(tokenizer=tokenizer), ... compute_metrics=compute_metrics, ... ) >>> trainer.train() ``` ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด ๋ชจ๋“  ์‚ฌ๋žŒ์ด ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก [`~transformers.Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ๊ณต์œ ํ•˜์„ธ์š”: ```py >>> trainer.push_to_hub() ``` </pt> <tf> <Tip> Keras๋กœ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐ ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ [์—ฌ๊ธฐ](../training#train-a-tensorflow-model-with-keras)๋ฅผ ์‚ดํŽด๋ณด์‹œ๊ธฐ ๋ฐ”๋ž๋‹ˆ๋‹ค! </Tip> TensorFlow์—์„œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋ ค๋ฉด ์ตœ์ ํ™” ํ•จ์ˆ˜, ํ•™์Šต๋ฅ  ์Šค์ผ€์ฅด ๋ฐ ๋ช‡ ๊ฐ€์ง€ ํ•™์Šต ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์„ค์ •ํ•˜๋Š” ๊ฒƒ๋ถ€ํ„ฐ ์‹œ์ž‘ํ•˜์„ธ์š”: ```py >>> from transformers import create_optimizer >>> batch_size = 16 >>> num_train_epochs = 2 >>> total_train_steps = (len(tokenized_swag["train"]) // batch_size) * num_train_epochs >>> optimizer, schedule = create_optimizer(init_lr=5e-5, num_warmup_steps=0, num_train_steps=total_train_steps) ``` ๊ทธ๋ฆฌ๊ณ  [`TFAutoModelForMultipleChoice`]๋กœ BERT๋ฅผ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import TFAutoModelForMultipleChoice >>> model = TFAutoModelForMultipleChoice.from_pretrained("google-bert/bert-base-uncased") ``` [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ `tf.data.Dataset` ํ˜•์‹์œผ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> data_collator = DataCollatorForMultipleChoice(tokenizer=tokenizer) >>> tf_train_set = model.prepare_tf_dataset( ... tokenized_swag["train"], ... shuffle=True, ... batch_size=batch_size, ... collate_fn=data_collator, ... ) >>> tf_validation_set = model.prepare_tf_dataset( ... tokenized_swag["validation"], ... shuffle=False, ... batch_size=batch_size, ... collate_fn=data_collator, ... ) ``` [`compile`](https://keras.io/api/models/model_training_apis/#compile-method)์„ ์‚ฌ์šฉํ•˜์—ฌ ํ›ˆ๋ จ ๋ชจ๋ธ์„ ๊ตฌ์„ฑํ•ฉ๋‹ˆ๋‹ค: ```py >>> model.compile(optimizer=optimizer) ``` ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ์„ค์ •ํ•ด์•ผ ํ•  ๋งˆ์ง€๋ง‰ ๋‘ ๊ฐ€์ง€๋Š” ์˜ˆ์ธก์˜ ์ •ํ™•๋„๋ฅผ ๊ณ„์‚ฐํ•˜๊ณ  ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ๋กœ ํ‘ธ์‹œํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์ œ๊ณตํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด ๋‘ ๊ฐ€์ง€ ์ž‘์—…์€ ๋ชจ๋‘ [Keras ์ฝœ๋ฐฑ](../main_classes/keras_callbacks)์„ ์‚ฌ์šฉํ•˜์—ฌ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `compute_metrics`ํ•จ์ˆ˜๋ฅผ [`~transformers.KerasMetricCallback`]์— ์ „๋‹ฌํ•˜์„ธ์š”: ```py >>> from transformers.keras_callbacks import KerasMetricCallback >>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set) ``` ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์—…๋กœ๋“œํ•  ์œ„์น˜๋ฅผ [`~transformers.PushToHubCallback`]์—์„œ ์ง€์ •ํ•˜์„ธ์š”: ```py >>> from transformers.keras_callbacks import PushToHubCallback >>> push_to_hub_callback = PushToHubCallback( ... output_dir="my_awesome_model", ... tokenizer=tokenizer, ... ) ``` ๊ทธ๋ฆฌ๊ณ  ์ฝœ๋ฐฑ์„ ํ•จ๊ป˜ ๋ฌถ์Šต๋‹ˆ๋‹ค: ```py >>> callbacks = [metric_callback, push_to_hub_callback] ``` ์ด์ œ ๋ชจ๋ธ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค! 
ํ›ˆ๋ จ ๋ฐ ๊ฒ€์ฆ ๋ฐ์ดํ„ฐ ์„ธํŠธ, ์—ํญ ์ˆ˜, ์ฝœ๋ฐฑ์„ ์‚ฌ์šฉํ•˜์—ฌ [`fit`](https://keras.io/api/models/model_training_apis/#fit-method)์„ ํ˜ธ์ถœํ•˜๊ณ  ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค: ```py >>> model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=2, callbacks=callbacks) ``` ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด ๋ชจ๋ธ์ด ์ž๋™์œผ๋กœ ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œ๋˜์–ด ๋ˆ„๊ตฌ๋‚˜ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! </tf> </frameworkcontent> <Tip> ๊ฐ๊ด€์‹ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ๋ณด๋‹ค ์‹ฌ์ธต์ ์ธ ์˜ˆ๋Š” ์•„๋ž˜ ๋ฌธ์„œ๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb) ๋˜๋Š” [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb). </Tip> ## ์ถ”๋ก  ํ•˜๊ธฐ[[inference]] ์ด์ œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ–ˆ์œผ๋‹ˆ ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ํ…์ŠคํŠธ์™€ ๋‘ ๊ฐœ์˜ ํ›„๋ณด ๋‹ต์•ˆ์„ ์ž‘์„ฑํ•ฉ๋‹ˆ๋‹ค: ```py >>> prompt = "France has a bread law, Le Dรฉcret Pain, with strict rules on what is allowed in a traditional baguette." >>> candidate1 = "The law does not apply to croissants and brioche." >>> candidate2 = "The law applies to baguettes." ``` <frameworkcontent> <pt> ๊ฐ ํ”„๋กฌํ”„ํŠธ์™€ ํ›„๋ณด ๋‹ต๋ณ€ ์Œ์„ ํ† ํฐํ™”ํ•˜์—ฌ PyTorch ํ…์„œ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ `labels`์„ ์ƒ์„ฑํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_swag_model") >>> inputs = tokenizer([[prompt, candidate1], [prompt, candidate2]], return_tensors="pt", padding=True) >>> labels = torch.tensor(0).unsqueeze(0) ``` ์ž…๋ ฅ๊ณผ ๋ ˆ์ด๋ธ”์„ ๋ชจ๋ธ์— ์ „๋‹ฌํ•˜๊ณ  `logits`์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForMultipleChoice >>> model = AutoModelForMultipleChoice.from_pretrained("my_awesome_swag_model") >>> outputs = model(**{k: v.unsqueeze(0) for k, v in inputs.items()}, labels=labels) >>> logits = outputs.logits ``` ๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์„ ๊ฐ€์ง„ ํด๋ž˜์Šค๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> predicted_class = logits.argmax().item() >>> predicted_class '0' ``` </pt> <tf> ๊ฐ ํ”„๋กฌํ”„ํŠธ์™€ ํ›„๋ณด ๋‹ต์•ˆ ์Œ์„ ํ† ํฐํ™”ํ•˜์—ฌ ํ…์„œํ”Œ๋กœ ํ…์„œ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_swag_model") >>> inputs = tokenizer([[prompt, candidate1], [prompt, candidate2]], return_tensors="tf", padding=True) ``` ๋ชจ๋ธ์— ์ž…๋ ฅ์„ ์ „๋‹ฌํ•˜๊ณ  `logits`๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import TFAutoModelForMultipleChoice >>> model = TFAutoModelForMultipleChoice.from_pretrained("my_awesome_swag_model") >>> inputs = {k: tf.expand_dims(v, 0) for k, v in inputs.items()} >>> outputs = model(inputs) >>> logits = outputs.logits ``` ๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์„ ๊ฐ€์ง„ ํด๋ž˜์Šค๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> predicted_class = int(tf.math.argmax(logits, axis=-1)[0]) >>> predicted_class '0' ``` </tf> </frameworkcontent>
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ํ…์ŠคํŠธ ๋ถ„๋ฅ˜[[text-classification]] [[open-in-colab]] <Youtube id="leNG9fN9FQU"/> ํ…์ŠคํŠธ ๋ถ„๋ฅ˜๋Š” ์ž์—ฐ์–ด ์ฒ˜๋ฆฌ์˜ ์ผ์ข…์œผ๋กœ, ํ…์ŠคํŠธ์— ๋ ˆ์ด๋ธ” ๋˜๋Š” ํด๋ž˜์Šค๋ฅผ ์ง€์ •ํ•˜๋Š” ์ž‘์—…์ž…๋‹ˆ๋‹ค. ๋งŽ์€ ๋Œ€๊ธฐ์—…์ด ๋‹ค์–‘ํ•œ ์‹ค์šฉ์ ์ธ ์‘์šฉ ๋ถ„์•ผ์—์„œ ํ…์ŠคํŠธ ๋ถ„๋ฅ˜๋ฅผ ์šด์˜ํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๊ฐ€์žฅ ์ธ๊ธฐ ์žˆ๋Š” ํ…์ŠคํŠธ ๋ถ„๋ฅ˜ ํ˜•ํƒœ ์ค‘ ํ•˜๋‚˜๋Š” ๊ฐ์„ฑ ๋ถ„์„์œผ๋กœ, ํ…์ŠคํŠธ ์‹œํ€€์Šค์— ๐Ÿ™‚ ๊ธ์ •, ๐Ÿ™ ๋ถ€์ • ๋˜๋Š” ๐Ÿ˜ ์ค‘๋ฆฝ๊ณผ ๊ฐ™์€ ๋ ˆ์ด๋ธ”์„ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ ํ•™์Šตํ•  ๋‚ด์šฉ์€: 1. [IMDb](https://huggingface.co/datasets/imdb) ๋ฐ์ดํ„ฐ์…‹์—์„œ [DistilBERT](https://huggingface.co/distilbert/distilbert-base-uncased)๋ฅผ ํŒŒ์ธ ํŠœ๋‹ํ•˜์—ฌ ์˜ํ™” ๋ฆฌ๋ทฐ๊ฐ€ ๊ธ์ •์ ์ธ์ง€ ๋ถ€์ •์ ์ธ์ง€ ํŒ๋‹จํ•ฉ๋‹ˆ๋‹ค. 2. ์ถ”๋ก ์„ ์œ„ํ•ด ํŒŒ์ธ ํŠœ๋‹ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. <Tip> ์ด ์ž‘์—…๊ณผ ํ˜ธํ™˜๋˜๋Š” ๋ชจ๋“  ์•„ํ‚คํ…์ฒ˜์™€ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋ณด๋ ค๋ฉด [์ž‘์—… ํŽ˜์ด์ง€](https://huggingface.co/tasks/text-classification)๋ฅผ ํ™•์ธํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์—, ํ•„์š”ํ•œ ๋ชจ๋“  ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install transformers datasets evaluate ``` Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜์—ฌ ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๊ณต์œ ํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ๋ฉ”์‹œ์ง€๊ฐ€ ํ‘œ์‹œ๋˜๋ฉด, ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•˜์„ธ์š”: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## IMDb ๋ฐ์ดํ„ฐ์…‹ ๊ฐ€์ ธ์˜ค๊ธฐ[[load-imdb-dataset]] ๋จผ์ € ๐Ÿค— Datasets ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ IMDb ๋ฐ์ดํ„ฐ์…‹์„ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> from datasets import load_dataset >>> imdb = load_dataset("imdb") ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์˜ˆ์‹œ๋ฅผ ์‚ดํŽด๋ด…์‹œ๋‹ค: ```py >>> imdb["test"][0] { "label": 0, "text": "I love sci-fi and am willing to put up with a lot. Sci-fi movies/TV are usually underfunded, under-appreciated and misunderstood. I tried to like this, I really did, but it is to good TV sci-fi as Babylon 5 is to Star Trek (the original). Silly prosthetics, cheap cardboard sets, stilted dialogues, CG that doesn't match the background, and painfully one-dimensional characters cannot be overcome with a 'sci-fi' setting. (I'm sure there are those of you out there who think Babylon 5 is good sci-fi TV. It's not. It's clichรฉd and uninspiring.) While US viewers might like emotion and character development, sci-fi is a genre that does not take itself seriously (cf. Star Trek). It may treat important issues, yet not as a serious philosophy. It's really difficult to care about the characters here as they are not simply foolish, just missing a spark of life. Their actions and reactions are wooden and predictable, often painful to watch. 
The makers of Earth KNOW it's rubbish as they have to always say \"Gene Roddenberry's Earth...\" otherwise people would not continue watching. Roddenberry's ashes must be turning in their orbit as this dull, cheap, poorly edited (watching it without advert breaks really brings this home) trudging Trabant of a show lumbers into space. Spoiler. So, kill off a main character. And then bring him back as another actor. Jeeez! Dallas all over again.", } ``` ์ด ๋ฐ์ดํ„ฐ์…‹์—๋Š” ๋‘ ๊ฐ€์ง€ ํ•„๋“œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค: - `text`: ์˜ํ™” ๋ฆฌ๋ทฐ ํ…์ŠคํŠธ - `label`: `0`์€ ๋ถ€์ •์ ์ธ ๋ฆฌ๋ทฐ, `1`์€ ๊ธ์ •์ ์ธ ๋ฆฌ๋ทฐ๋ฅผ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. ## ์ „์ฒ˜๋ฆฌ[[preprocess]] ๋‹ค์Œ ๋‹จ๊ณ„๋Š” DistilBERT ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๊ฐ€์ ธ์™€์„œ `text` ํ•„๋“œ๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased") ``` `text`๋ฅผ ํ† ํฐํ™”ํ•˜๊ณ  ์‹œํ€€์Šค๊ฐ€ DistilBERT์˜ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด๋ณด๋‹ค ๊ธธ์ง€ ์•Š๋„๋ก ์ž๋ฅด๊ธฐ ์œ„ํ•œ ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์ƒ์„ฑํ•˜์„ธ์š”: ```py >>> def preprocess_function(examples): ... return tokenizer(examples["text"], truncation=True) ``` ์ „์ฒด ๋ฐ์ดํ„ฐ์…‹์— ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•˜๋ ค๋ฉด, ๐Ÿค— Datasets [`~datasets.Dataset.map`] ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. ๋ฐ์ดํ„ฐ์…‹์˜ ์—ฌ๋Ÿฌ ์š”์†Œ๋ฅผ ํ•œ ๋ฒˆ์— ์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด `batched=True`๋กœ ์„ค์ •ํ•จ์œผ๋กœ์จ ๋ฐ์ดํ„ฐ์…‹ `map`๋ฅผ ๋” ๋น ๋ฅด๊ฒŒ ์ฒ˜๋ฆฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py tokenized_imdb = imdb.map(preprocess_function, batched=True) ``` ์ด์ œ [`DataCollatorWithPadding`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์˜ˆ์ œ ๋ฐฐ์น˜๋ฅผ ๋งŒ๋“ค์–ด๋ด…์‹œ๋‹ค. ๋ฐ์ดํ„ฐ์…‹ ์ „์ฒด๋ฅผ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ํŒจ๋”ฉํ•˜๋Š” ๋Œ€์‹ , *๋™์  ํŒจ๋”ฉ*์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐฐ์น˜์—์„œ ๊ฐ€์žฅ ๊ธด ๊ธธ์ด์— ๋งž๊ฒŒ ๋ฌธ์žฅ์„ ํŒจ๋”ฉํ•˜๋Š” ๊ฒƒ์ด ํšจ์œจ์ ์ž…๋‹ˆ๋‹ค. <frameworkcontent> <pt> ```py >>> from transformers import DataCollatorWithPadding >>> data_collator = DataCollatorWithPadding(tokenizer=tokenizer) ``` </pt> <tf> ```py >>> from transformers import DataCollatorWithPadding >>> data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf") ``` </tf> </frameworkcontent> ## ํ‰๊ฐ€ํ•˜๊ธฐ[[evaluate]] ํ›ˆ๋ จ ์ค‘ ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์„ ํ‰๊ฐ€ํ•˜๊ธฐ ์œ„ํ•ด ๋ฉ”ํŠธ๋ฆญ์„ ํฌํ•จํ•˜๋Š” ๊ฒƒ์ด ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. ๐Ÿค— [Evaluate](https://huggingface.co/docs/evaluate/index) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋น ๋ฅด๊ฒŒ ํ‰๊ฐ€ ๋ฐฉ๋ฒ•์„ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์ž‘์—…์—์„œ๋Š” [accuracy](https://huggingface.co/spaces/evaluate-metric/accuracy) ๋ฉ”ํŠธ๋ฆญ์„ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. (๋ฉ”ํŠธ๋ฆญ์„ ๊ฐ€์ ธ์˜ค๊ณ  ๊ณ„์‚ฐํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด์„œ๋Š” ๐Ÿค— Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”): ```py >>> import evaluate >>> accuracy = evaluate.load("accuracy") ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ `compute_metrics` ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ค์–ด์„œ ์˜ˆ์ธก๊ณผ ๋ ˆ์ด๋ธ”์„ ๊ณ„์‚ฐํ•˜์—ฌ ์ •ํ™•๋„๋ฅผ ๊ณ„์‚ฐํ•˜๋„๋ก [`~evaluate.EvaluationModule.compute`]๋ฅผ ํ˜ธ์ถœํ•ฉ๋‹ˆ๋‹ค: ```py >>> import numpy as np >>> def compute_metrics(eval_pred): ... predictions, labels = eval_pred ... predictions = np.argmax(predictions, axis=1) ... return accuracy.compute(predictions=predictions, references=labels) ``` ์ด์ œ `compute_metrics` ํ•จ์ˆ˜๋Š” ์ค€๋น„๋˜์—ˆ๊ณ , ํ›ˆ๋ จ ๊ณผ์ •์„ ์„ค์ •ํ•  ๋•Œ ๋‹ค์‹œ ์‚ดํŽด๋ณผ ์˜ˆ์ •์ž…๋‹ˆ๋‹ค. 
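정확도 외에 F1 점수 같은 지표를 함께 보고 싶다면, 예를 들어 `evaluate.combine`으로 여러 지표를 묶어서 `compute_metrics`를 아래처럼 바꿔볼 수도 있습니다. 필수 단계는 아니며, 위에서 만든 함수를 그대로 사용해도 됩니다:

```py
>>> import evaluate
>>> import numpy as np

>>> # ์ •ํ™•๋„์™€ F1์„ ํ•œ ๋ฒˆ์— ๊ณ„์‚ฐํ•˜๋Š” ๋ณตํ•ฉ ํ‰๊ฐ€ ๋ชจ๋“ˆ์„ ๋งŒ๋“ญ๋‹ˆ๋‹ค
>>> clf_metrics = evaluate.combine(["accuracy", "f1"])

>>> def compute_metrics(eval_pred):
...     predictions, labels = eval_pred
...     predictions = np.argmax(predictions, axis=1)
...     return clf_metrics.compute(predictions=predictions, references=labels)
```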
## ํ›ˆ๋ จ[[train]] ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๊ธฐ ์ „์—, `id2label`์™€ `label2id`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์˜ˆ์ƒ๋˜๋Š” id์™€ ๋ ˆ์ด๋ธ”์˜ ๋งต์„ ์ƒ์„ฑํ•˜์„ธ์š”: ```py >>> id2label = {0: "NEGATIVE", 1: "POSITIVE"} >>> label2id = {"NEGATIVE": 0, "POSITIVE": 1} ``` <frameworkcontent> <pt> <Tip> [`Trainer`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํŒŒ์ธ ํŠœ๋‹ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ์ต์ˆ™ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ, [์—ฌ๊ธฐ](../training#train-with-pytorch-trainer)์˜ ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ์„ ํ™•์ธํ•˜์„ธ์š”! </Tip> ์ด์ œ ๋ชจ๋ธ์„ ํ›ˆ๋ จ์‹œํ‚ฌ ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`AutoModelForSequenceClassification`]๋กœ DistilBERT๋ฅผ ๊ฐ€์ณ์˜ค๊ณ  ์˜ˆ์ƒ๋˜๋Š” ๋ ˆ์ด๋ธ” ์ˆ˜์™€ ๋ ˆ์ด๋ธ” ๋งคํ•‘์„ ์ง€์ •ํ•˜์„ธ์š”: ```py >>> from transformers import AutoModelForSequenceClassification, TrainingArguments, Trainer >>> model = AutoModelForSequenceClassification.from_pretrained( ... "distilbert/distilbert-base-uncased", num_labels=2, id2label=id2label, label2id=label2id ... ) ``` ์ด์ œ ์„ธ ๋‹จ๊ณ„๋งŒ ๊ฑฐ์น˜๋ฉด ๋์ž…๋‹ˆ๋‹ค: 1. [`TrainingArguments`]์—์„œ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •์˜ํ•˜์„ธ์š”. `output_dir`๋Š” ๋ชจ๋ธ์„ ์ €์žฅํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•˜๋Š” ์œ ์ผํ•œ ํŒŒ๋ผ๋ฏธํ„ฐ์ž…๋‹ˆ๋‹ค. ์ด ๋ชจ๋ธ์„ Hub์— ์—…๋กœ๋“œํ•˜๊ธฐ ์œ„ํ•ด `push_to_hub=True`๋ฅผ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค. (๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ธฐ ์œ„ํ•ด Hugging Face์— ๋กœ๊ทธ์ธํ•ด์•ผํ•ฉ๋‹ˆ๋‹ค.) ๊ฐ ์—ํญ์ด ๋๋‚  ๋•Œ๋งˆ๋‹ค, [`Trainer`]๋Š” ์ •ํ™•๋„๋ฅผ ํ‰๊ฐ€ํ•˜๊ณ  ํ›ˆ๋ จ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ €์žฅํ•ฉ๋‹ˆ๋‹ค. 2. [`Trainer`]์— ํ›ˆ๋ จ ์ธ์ˆ˜์™€ ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ์…‹, ํ† ํฌ๋‚˜์ด์ €, ๋ฐ์ดํ„ฐ ์ˆ˜์ง‘๊ธฐ ๋ฐ `compute_metrics` ํ•จ์ˆ˜๋ฅผ ์ „๋‹ฌํ•˜์„ธ์š”. 3. [`~Trainer.train`]๋ฅผ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์€ ํŒŒ์ธ ํŠœ๋‹ํ•˜์„ธ์š”. ```py >>> training_args = TrainingArguments( ... output_dir="my_awesome_model", ... learning_rate=2e-5, ... per_device_train_batch_size=16, ... per_device_eval_batch_size=16, ... num_train_epochs=2, ... weight_decay=0.01, ... eval_strategy="epoch", ... save_strategy="epoch", ... load_best_model_at_end=True, ... push_to_hub=True, ... ) >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=tokenized_imdb["train"], ... eval_dataset=tokenized_imdb["test"], ... tokenizer=tokenizer, ... data_collator=data_collator, ... compute_metrics=compute_metrics, ... ) >>> trainer.train() ``` <Tip> [`Trainer`]๋Š” `tokenizer`๋ฅผ ์ „๋‹ฌํ•˜๋ฉด ๊ธฐ๋ณธ์ ์œผ๋กœ ๋™์  ๋งคํ•‘์„ ์ ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด ๊ฒฝ์šฐ, ๋ช…์‹œ์ ์œผ๋กœ ๋ฐ์ดํ„ฐ ์ˆ˜์ง‘๊ธฐ๋ฅผ ์ง€์ •ํ•  ํ•„์š”๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค. </Tip> ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด, [`~transformers.Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ Hub์— ๊ณต์œ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> trainer.push_to_hub() ``` </pt> <tf> <Tip> Keras๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํŒŒ์ธ ํŠœ๋‹ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ์ต์ˆ™ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ, [์—ฌ๊ธฐ](../training#train-a-tensorflow-model-with-keras)์˜ ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ์„ ํ™•์ธํ•˜์„ธ์š”! 
</Tip> TensorFlow์—์„œ ๋ชจ๋ธ์„ ํŒŒ์ธ ํŠœ๋‹ํ•˜๋ ค๋ฉด, ๋จผ์ € ์˜ตํ‹ฐ๋งˆ์ด์ € ํ•จ์ˆ˜์™€ ํ•™์Šต๋ฅ  ์Šค์ผ€์ฅด, ๊ทธ๋ฆฌ๊ณ  ์ผ๋ถ€ ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์„ค์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import create_optimizer >>> import tensorflow as tf >>> batch_size = 16 >>> num_epochs = 5 >>> batches_per_epoch = len(tokenized_imdb["train"]) // batch_size >>> total_train_steps = int(batches_per_epoch * num_epochs) >>> optimizer, schedule = create_optimizer(init_lr=2e-5, num_warmup_steps=0, num_train_steps=total_train_steps) ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ [`TFAutoModelForSequenceClassification`]์„ ์‚ฌ์šฉํ•˜์—ฌ DistilBERT๋ฅผ ๋กœ๋“œํ•˜๊ณ , ์˜ˆ์ƒ๋˜๋Š” ๋ ˆ์ด๋ธ” ์ˆ˜์™€ ๋ ˆ์ด๋ธ” ๋งคํ•‘์„ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import TFAutoModelForSequenceClassification >>> model = TFAutoModelForSequenceClassification.from_pretrained( ... "distilbert/distilbert-base-uncased", num_labels=2, id2label=id2label, label2id=label2id ... ) ``` [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ์…‹์„ `tf.data.Dataset` ํ˜•์‹์œผ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> tf_train_set = model.prepare_tf_dataset( ... tokenized_imdb["train"], ... shuffle=True, ... batch_size=16, ... collate_fn=data_collator, ... ) >>> tf_validation_set = model.prepare_tf_dataset( ... tokenized_imdb["test"], ... shuffle=False, ... batch_size=16, ... collate_fn=data_collator, ... ) ``` [`compile`](https://keras.io/api/models/model_training_apis/#compile-method)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ›ˆ๋ จํ•  ๋ชจ๋ธ์„ ๊ตฌ์„ฑํ•ฉ๋‹ˆ๋‹ค: ```py >>> import tensorflow as tf >>> model.compile(optimizer=optimizer) ``` ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ์„ค์ •ํ•ด์•ผํ•  ๋งˆ์ง€๋ง‰ ๋‘ ๊ฐ€์ง€๋Š” ์˜ˆ์ธก์—์„œ ์ •ํ™•๋„๋ฅผ ๊ณ„์‚ฐํ•˜๊ณ , ๋ชจ๋ธ์„ Hub์— ์—…๋กœ๋“œํ•  ๋ฐฉ๋ฒ•์„ ์ œ๊ณตํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋ชจ๋‘ [Keras callbacks](../main_classes/keras_callbacks)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ˆ˜ํ–‰๋ฉ๋‹ˆ๋‹ค. [`~transformers.KerasMetricCallback`]์— `compute_metrics`๋ฅผ ์ „๋‹ฌํ•˜์—ฌ ์ •ํ™•๋„๋ฅผ ๋†’์ž…๋‹ˆ๋‹ค. ```py >>> from transformers.keras_callbacks import KerasMetricCallback >>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set) ``` [`~transformers.PushToHubCallback`]์—์„œ ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์—…๋กœ๋“œํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers.keras_callbacks import PushToHubCallback >>> push_to_hub_callback = PushToHubCallback( ... output_dir="my_awesome_model", ... tokenizer=tokenizer, ... ) ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์ฝœ๋ฐฑ์„ ํ•จ๊ป˜ ๋ฌถ์Šต๋‹ˆ๋‹ค: ```py >>> callbacks = [metric_callback, push_to_hub_callback] ``` ๋“œ๋””์–ด, ๋ชจ๋ธ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`fit`](https://keras.io/api/models/model_training_apis/#fit-method)์— ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ์…‹, ๊ฒ€์ฆ ๋ฐ์ดํ„ฐ์…‹, ์—ํญ์˜ ์ˆ˜ ๋ฐ ์ฝœ๋ฐฑ์„ ์ „๋‹ฌํ•˜์—ฌ ํŒŒ์ธ ํŠœ๋‹ํ•ฉ๋‹ˆ๋‹ค: ```py >>> model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=3, callbacks=callbacks) ``` ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด, ๋ชจ๋ธ์ด ์ž๋™์œผ๋กœ Hub์— ์—…๋กœ๋“œ๋˜์–ด ๋ชจ๋“  ์‚ฌ๋žŒ์ด ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! </tf> </frameworkcontent> <Tip> ํ…์ŠคํŠธ ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•œ ๋ชจ๋ธ์„ ํŒŒ์ธ ํŠœ๋‹ํ•˜๋Š” ์ž์„ธํ•œ ์˜ˆ์ œ๋Š” ๋‹ค์Œ [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb) ๋˜๋Š” [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. 
</Tip> ## ์ถ”๋ก [[inference]] ์ข‹์•„์š”, ์ด์ œ ๋ชจ๋ธ์„ ํŒŒ์ธ ํŠœ๋‹ํ–ˆ์œผ๋‹ˆ ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ์ถ”๋ก ์„ ์ˆ˜ํ–‰ํ•˜๊ณ ์ž ํ•˜๋Š” ํ…์ŠคํŠธ๋ฅผ ๊ฐ€์ ธ์™€๋ด…์‹œ๋‹ค: ```py >>> text = "This was a masterpiece. Not completely faithful to the books, but enthralling from beginning to end. Might be my favorite of the three." ``` ํŒŒ์ธ ํŠœ๋‹๋œ ๋ชจ๋ธ๋กœ ์ถ”๋ก ์„ ์‹œ๋„ํ•˜๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`pipeline`]๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ๋กœ ๊ฐ์ • ๋ถ„์„์„ ์œ„ํ•œ `pipeline`์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ , ํ…์ŠคํŠธ๋ฅผ ์ „๋‹ฌํ•ด๋ณด์„ธ์š”: ```py >>> from transformers import pipeline >>> classifier = pipeline("sentiment-analysis", model="stevhliu/my_awesome_model") >>> classifier(text) [{'label': 'POSITIVE', 'score': 0.9994940757751465}] ``` ์›ํ•œ๋‹ค๋ฉด, `pipeline`์˜ ๊ฒฐ๊ณผ๋ฅผ ์ˆ˜๋™์œผ๋กœ ๋ณต์ œํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. <frameworkcontent> <pt> ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•˜๊ณ  PyTorch ํ…์„œ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_model") >>> inputs = tokenizer(text, return_tensors="pt") ``` ์ž…๋ ฅ์„ ๋ชจ๋ธ์— ์ „๋‹ฌํ•˜๊ณ  `logits`์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained("stevhliu/my_awesome_model") >>> with torch.no_grad(): ... logits = model(**inputs).logits ``` ๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์„ ๊ฐ€์ง„ ํด๋ž˜์Šค๋ฅผ ๋ชจ๋ธ์˜ `id2label` ๋งคํ•‘์„ ์‚ฌ์šฉํ•˜์—ฌ ํ…์ŠคํŠธ ๋ ˆ์ด๋ธ”๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> predicted_class_id = logits.argmax().item() >>> model.config.id2label[predicted_class_id] 'POSITIVE' ``` </pt> <tf> ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•˜๊ณ  TensorFlow ํ…์„œ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_model") >>> inputs = tokenizer(text, return_tensors="tf") ``` ์ž…๋ ฅ๊ฐ’์„ ๋ชจ๋ธ์— ์ „๋‹ฌํ•˜๊ณ  `logits`์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import TFAutoModelForSequenceClassification >>> model = TFAutoModelForSequenceClassification.from_pretrained("stevhliu/my_awesome_model") >>> logits = model(**inputs).logits ``` ๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์„ ๊ฐ€์ง„ ํด๋ž˜์Šค๋ฅผ ๋ชจ๋ธ์˜ `id2label` ๋งคํ•‘์„ ์‚ฌ์šฉํ•˜์—ฌ ํ…์ŠคํŠธ ๋ ˆ์ด๋ธ”๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0]) >>> model.config.id2label[predicted_class_id] 'POSITIVE' ``` </tf> </frameworkcontent>
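여러 개의 리뷰를 한 번에 분류해야 한다면 아래처럼 배치 단위로 처리할 수도 있습니다. 위에서 사용한 `stevhliu/my_awesome_model` 체크포인트와 PyTorch를 가정한 간단한 스케치이며, 예시 문장은 임의로 작성한 것입니다:

```py
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_model")
>>> model = AutoModelForSequenceClassification.from_pretrained("stevhliu/my_awesome_model")

>>> reviews = [
...     "This was a masterpiece. Might be my favorite of the three.",
...     "Two hours of my life I will never get back.",
... ]
>>> # ๋ฐฐ์น˜์—์„œ ๊ฐ€์žฅ ๊ธด ๋ฌธ์žฅ์— ๋งž์ถฐ ๋™์ ์œผ๋กœ ํŒจ๋”ฉํ•ฉ๋‹ˆ๋‹ค
>>> inputs = tokenizer(reviews, padding=True, truncation=True, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # ๊ฐ ๋ฆฌ๋ทฐ์— ๋Œ€ํ•ด ๊ฐ€์žฅ ํ™•๋ฅ ์ด ๋†’์€ ๋ ˆ์ด๋ธ”์„ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค
>>> [model.config.id2label[p.item()] for p in logits.argmax(dim=-1)]
```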
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์ž๋™ ์Œ์„ฑ ์ธ์‹[[automatic-speech-recognition]] [[open-in-colab]] <Youtube id="TksaY_FDgnk"/> ์ž๋™ ์Œ์„ฑ ์ธ์‹(Automatic Speech Recognition, ASR)์€ ์Œ์„ฑ ์‹ ํ˜ธ๋ฅผ ํ…์ŠคํŠธ๋กœ ๋ณ€ํ™˜ํ•˜์—ฌ ์Œ์„ฑ ์ž…๋ ฅ ์‹œํ€€์Šค๋ฅผ ํ…์ŠคํŠธ ์ถœ๋ ฅ์— ๋งคํ•‘ํ•ฉ๋‹ˆ๋‹ค. Siri์™€ Alexa์™€ ๊ฐ™์€ ๊ฐ€์ƒ ์–ด์‹œ์Šคํ„ดํŠธ๋Š” ASR ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜์—ฌ ์ผ์ƒ์ ์œผ๋กœ ์‚ฌ์šฉ์ž๋ฅผ ๋•๊ณ  ์žˆ์œผ๋ฉฐ, ํšŒ์˜ ์ค‘ ๋ผ์ด๋ธŒ ์บก์…˜ ๋ฐ ๋ฉ”๋ชจ ์ž‘์„ฑ๊ณผ ๊ฐ™์€ ์œ ์šฉํ•œ ์‚ฌ์šฉ์ž ์นœํ™”์  ์‘์šฉ ํ”„๋กœ๊ทธ๋žจ๋„ ๋งŽ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ ์†Œ๊ฐœํ•  ๋‚ด์šฉ์€ ์•„๋ž˜์™€ ๊ฐ™์Šต๋‹ˆ๋‹ค: 1. [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ [Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base)๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜์—ฌ ์˜ค๋””์˜ค๋ฅผ ํ…์ŠคํŠธ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค. 2. ๋ฏธ์„ธ ์กฐ์ •ํ•œ ๋ชจ๋ธ์„ ์ถ”๋ก ์— ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. <Tip> ์ด ์ž‘์—…๊ณผ ํ˜ธํ™˜๋˜๋Š” ๋ชจ๋“  ์•„ํ‚คํ…์ฒ˜์™€ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋ณด๋ ค๋ฉด [์ž‘์—… ํŽ˜์ด์ง€](https://huggingface.co/tasks/automatic-speech-recognition)๋ฅผ ํ™•์ธํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ชจ๋“  ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install transformers datasets evaluate jiwer ``` Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜๋ฉด ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๊ณต์œ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•˜์„ธ์š”. ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## MInDS-14 ๋ฐ์ดํ„ฐ ์„ธํŠธ ๊ฐ€์ ธ์˜ค๊ธฐ[[load-minds-14-dataset]] ๋จผ์ €, ๐Ÿค— Datasets ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์ผ๋ถ€๋ถ„์„ ๊ฐ€์ ธ์˜ค์„ธ์š”. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ๋Œ€ํ•œ ํ›ˆ๋ จ์— ์‹œ๊ฐ„์„ ๋“ค์ด๊ธฐ ์ „์— ๋ชจ๋“  ๊ฒƒ์ด ์ž‘๋™ํ•˜๋Š”์ง€ ์‹คํ—˜ํ•˜๊ณ  ๊ฒ€์ฆํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> from datasets import load_dataset, Audio >>> minds = load_dataset("PolyAI/minds14", name="en-US", split="train[:100]") ``` [`~Dataset.train_test_split`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ `train`์„ ํ›ˆ๋ จ ์„ธํŠธ์™€ ํ…Œ์ŠคํŠธ ์„ธํŠธ๋กœ ๋‚˜๋ˆ„์„ธ์š”: ```py >>> minds = minds.train_test_split(test_size=0.2) ``` ๊ทธ๋ฆฌ๊ณ  ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ํ™•์ธํ•˜์„ธ์š”: ```py >>> minds DatasetDict({ train: Dataset({ features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'], num_rows: 16 }) test: Dataset({ features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'], num_rows: 4 }) }) ``` ๋ฐ์ดํ„ฐ ์„ธํŠธ์—๋Š” `lang_id`์™€ `english_transcription`๊ณผ ๊ฐ™์€ ์œ ์šฉํ•œ ์ •๋ณด๊ฐ€ ๋งŽ์ด ํฌํ•จ๋˜์–ด ์žˆ์ง€๋งŒ, ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” `audio`์™€ `transcription`์— ์ดˆ์ ์„ ๋งž์ถœ ๊ฒƒ์ž…๋‹ˆ๋‹ค. 
๋‹ค๋ฅธ ์—ด์€ [`~datasets.Dataset.remove_columns`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ œ๊ฑฐํ•˜์„ธ์š”: ```py >>> minds = minds.remove_columns(["english_transcription", "intent_class", "lang_id"]) ``` ์˜ˆ์‹œ๋ฅผ ๋‹ค์‹œ ํ•œ๋ฒˆ ํ™•์ธํ•ด๋ณด์„ธ์š”: ```py >>> minds["train"][0] {'audio': {'array': array([-0.00024414, 0. , 0. , ..., 0.00024414, 0.00024414, 0.00024414], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav', 'sampling_rate': 8000}, 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav', 'transcription': "hi I'm trying to use the banking app on my phone and currently my checking and savings account balance is not refreshing"} ``` ๋‘ ๊ฐœ์˜ ํ•„๋“œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค: - `audio`: ์˜ค๋””์˜ค ํŒŒ์ผ์„ ๊ฐ€์ ธ์˜ค๊ณ  ๋ฆฌ์ƒ˜ํ”Œ๋งํ•˜๊ธฐ ์œ„ํ•ด ํ˜ธ์ถœํ•ด์•ผ ํ•˜๋Š” ์Œ์„ฑ ์‹ ํ˜ธ์˜ 1์ฐจ์› `array(๋ฐฐ์—ด)` - `transcription`: ๋ชฉํ‘œ ํ…์ŠคํŠธ ## ์ „์ฒ˜๋ฆฌ[[preprocess]] ๋‹ค์Œ์œผ๋กœ ์˜ค๋””์˜ค ์‹ ํ˜ธ๋ฅผ ์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•œ Wav2Vec2 ํ”„๋กœ์„ธ์„œ๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoProcessor >>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base") ``` MInDS-14 ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์ƒ˜ํ”Œ๋ง ๋ ˆ์ดํŠธ๋Š” 8000kHz์ด๋ฏ€๋กœ([๋ฐ์ดํ„ฐ ์„ธํŠธ ์นด๋“œ](https://huggingface.co/datasets/PolyAI/minds14)์—์„œ ํ™•์ธ), ์‚ฌ์ „ ํ›ˆ๋ จ๋œ Wav2Vec2 ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋ ค๋ฉด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ 16000kHz๋กœ ๋ฆฌ์ƒ˜ํ”Œ๋งํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```py >>> minds = minds.cast_column("audio", Audio(sampling_rate=16_000)) >>> minds["train"][0] {'audio': {'array': array([-2.38064706e-04, -1.58618059e-04, -5.43987835e-06, ..., 2.78103951e-04, 2.38446111e-04, 1.18740834e-04], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav', 'sampling_rate': 16000}, 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav', 'transcription': "hi I'm trying to use the banking app on my phone and currently my checking and savings account balance is not refreshing"} ``` ์œ„์˜ 'transcription'์—์„œ ๋ณผ ์ˆ˜ ์žˆ๋“ฏ์ด ํ…์ŠคํŠธ๋Š” ๋Œ€๋ฌธ์ž์™€ ์†Œ๋ฌธ์ž๊ฐ€ ์„ž์—ฌ ์žˆ์Šต๋‹ˆ๋‹ค. Wav2Vec2 ํ† ํฌ๋‚˜์ด์ €๋Š” ๋Œ€๋ฌธ์ž ๋ฌธ์ž์— ๋Œ€ํ•ด์„œ๋งŒ ํ›ˆ๋ จ๋˜์–ด ์žˆ์œผ๋ฏ€๋กœ ํ…์ŠคํŠธ๊ฐ€ ํ† ํฌ๋‚˜์ด์ €์˜ ์–ดํœ˜์™€ ์ผ์น˜ํ•˜๋Š”์ง€ ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```py >>> def uppercase(example): ... return {"transcription": example["transcription"].upper()} >>> minds = minds.map(uppercase) ``` ์ด์ œ ๋‹ค์Œ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•  ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ค์–ด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: 1. `audio` ์—ด์„ ํ˜ธ์ถœํ•˜์—ฌ ์˜ค๋””์˜ค ํŒŒ์ผ์„ ๊ฐ€์ ธ์˜ค๊ณ  ๋ฆฌ์ƒ˜ํ”Œ๋งํ•ฉ๋‹ˆ๋‹ค. 2. ์˜ค๋””์˜ค ํŒŒ์ผ์—์„œ `input_values`๋ฅผ ์ถ”์ถœํ•˜๊ณ  ํ”„๋กœ์„ธ์„œ๋กœ `transcription` ์—ด์„ ํ† ํฐํ™”ํ•ฉ๋‹ˆ๋‹ค. ```py >>> def prepare_dataset(batch): ... audio = batch["audio"] ... batch = processor(audio["array"], sampling_rate=audio["sampling_rate"], text=batch["transcription"]) ... batch["input_length"] = len(batch["input_values"][0]) ... return batch ``` ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•˜๋ ค๋ฉด ๐Ÿค— Datasets [`~datasets.Dataset.map`] ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. 
`num_proc` 매개변수를 사용하여 프로세스 수를 늘리면 `map`의 속도를 높일 수 있습니다. [`~datasets.Dataset.remove_columns`] 메소드를 사용하여 필요하지 않은 열을 제거하세요:

```py
>>> encoded_minds = minds.map(prepare_dataset, remove_columns=minds.column_names["train"], num_proc=4)
```

🤗 Transformers에는 자동 음성 인식용 데이터 콜레이터가 없으므로 예제 배치를 생성하려면 [`DataCollatorWithPadding`]을 조정해야 합니다. 이렇게 하면 데이터 콜레이터는 텍스트와 레이블을 배치에서 가장 긴 요소의 길이에 동적으로 패딩하여 길이를 균일하게 합니다. `tokenizer` 함수에서 `padding=True`를 설정하여 텍스트를 패딩할 수 있지만, 동적 패딩이 더 효율적입니다.

다른 데이터 콜레이터와 달리 이 특정 데이터 콜레이터는 `input_values`와 `labels`에 대해 다른 패딩 방법을 적용해야 합니다.

```py
>>> import torch
>>> from dataclasses import dataclass, field
>>> from typing import Any, Dict, List, Optional, Union


>>> @dataclass
... class DataCollatorCTCWithPadding:
...     processor: AutoProcessor
...     padding: Union[bool, str] = "longest"

...     def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
...         # ์ž…๋ ฅ๊ณผ ๋ ˆ์ด๋ธ”์„ ๋ถ„ํ• ํ•ฉ๋‹ˆ๋‹ค
...         # ๊ธธ์ด๊ฐ€ ๋‹ค๋ฅด๊ณ , ๊ฐ๊ฐ ๋‹ค๋ฅธ ํŒจ๋”ฉ ๋ฐฉ๋ฒ•์„ ์‚ฌ์šฉํ•ด์•ผ ํ•˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค
...         input_features = [{"input_values": feature["input_values"][0]} for feature in features]
...         label_features = [{"input_ids": feature["labels"]} for feature in features]

...         batch = self.processor.pad(input_features, padding=self.padding, return_tensors="pt")

...         labels_batch = self.processor.pad(labels=label_features, padding=self.padding, return_tensors="pt")

...         # ํŒจ๋”ฉ์— ๋Œ€ํ•ด ์†์‹ค์„ ์ ์šฉํ•˜์ง€ ์•Š๋„๋ก -100์œผ๋กœ ๋Œ€์ฒดํ•ฉ๋‹ˆ๋‹ค
...         labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)

...         batch["labels"] = labels

...         return batch
```

이제 `DataCollatorCTCWithPadding`을 인스턴스화합니다:

```py
>>> data_collator = DataCollatorCTCWithPadding(processor=processor, padding="longest")
```

## 평가하기[[evaluate]]

훈련 중에 평가 지표를 포함하면 모델의 성능을 평가하는 데 도움이 되는 경우가 많습니다. 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) 라이브러리를 사용하면 평가 방법을 빠르게 불러올 수 있습니다. 이 작업에서는 [단어 오류율(Word Error Rate, WER)](https://huggingface.co/spaces/evaluate-metric/wer) 평가 지표를 가져옵니다. (평가 지표를 불러오고 계산하는 방법은 🤗 Evaluate [둘러보기](https://huggingface.co/docs/evaluate/a_quick_tour)를 참조하세요):

```py
>>> import evaluate

>>> wer_metric = evaluate.load("wer")
```

그런 다음 예측값과 레이블을 [`~evaluate.EvaluationModule.compute`]에 전달하여 WER을 계산하는 함수를 만듭니다:

```py
>>> import numpy as np


>>> def compute_metrics(pred):
...     pred_logits = pred.predictions
...     pred_ids = np.argmax(pred_logits, axis=-1)

...     pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id

...     pred_str = processor.batch_decode(pred_ids)
...     label_str = processor.batch_decode(pred.label_ids, group_tokens=False)

...     wer = wer_metric.compute(predictions=pred_str, references=label_str)

...
return {"wer": wer} ``` ์ด์ œ `compute_metrics` ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์œผ๋ฉฐ, ํ›ˆ๋ จ์„ ์„ค์ •ํ•  ๋•Œ ์ด ํ•จ์ˆ˜๋กœ ๋˜๋Œ์•„์˜ฌ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ## ํ›ˆ๋ จํ•˜๊ธฐ[[train]] <frameworkcontent> <pt> <Tip> [`Trainer`]๋กœ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๊ฒƒ์ด ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด, [์—ฌ๊ธฐ](../training#train-with-pytorch-trainer)์—์„œ ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ์„ ํ™•์ธํ•ด๋ณด์„ธ์š”! </Tip> ์ด์ œ ๋ชจ๋ธ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`AutoModelForCTC`]๋กœ Wav2Vec2๋ฅผ ๊ฐ€์ ธ์˜ค์„ธ์š”. `ctc_loss_reduction` ๋งค๊ฐœ๋ณ€์ˆ˜๋กœ CTC ์†์‹ค์— ์ ์šฉํ•  ์ถ•์†Œ(reduction) ๋ฐฉ๋ฒ•์„ ์ง€์ •ํ•˜์„ธ์š”. ๊ธฐ๋ณธ๊ฐ’์ธ ํ•ฉ๊ณ„ ๋Œ€์‹  ํ‰๊ท ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ๋” ์ข‹์€ ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForCTC, TrainingArguments, Trainer >>> model = AutoModelForCTC.from_pretrained( ... "facebook/wav2vec2-base", ... ctc_loss_reduction="mean", ... pad_token_id=processor.tokenizer.pad_token_id, ... ) ``` ์ด์ œ ์„ธ ๋‹จ๊ณ„๋งŒ ๋‚จ์•˜์Šต๋‹ˆ๋‹ค: 1. [`TrainingArguments`]์—์„œ ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •์˜ํ•˜์„ธ์š”. `output_dir`์€ ๋ชจ๋ธ์„ ์ €์žฅํ•  ๊ฒฝ๋กœ๋ฅผ ์ง€์ •ํ•˜๋Š” ์œ ์ผํ•œ ํ•„์ˆ˜ ๋งค๊ฐœ๋ณ€์ˆ˜์ž…๋‹ˆ๋‹ค. `push_to_hub=True`๋ฅผ ์„ค์ •ํ•˜์—ฌ ๋ชจ๋ธ์„ Hub์— ์—…๋กœ๋“œ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค(๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๋ ค๋ฉด Hugging Face์— ๋กœ๊ทธ์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค). [`Trainer`]๋Š” ๊ฐ ์—ํญ๋งˆ๋‹ค WER์„ ํ‰๊ฐ€ํ•˜๊ณ  ํ›ˆ๋ จ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ €์žฅํ•ฉ๋‹ˆ๋‹ค. 2. ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ ์„ธํŠธ, ํ† ํฌ๋‚˜์ด์ €, ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ, `compute_metrics` ํ•จ์ˆ˜์™€ ํ•จ๊ป˜ [`Trainer`]์— ํ›ˆ๋ จ ์ธ์ˆ˜๋ฅผ ์ „๋‹ฌํ•˜์„ธ์š”. 3. [`~Trainer.train`]์„ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜์„ธ์š”. ```py >>> training_args = TrainingArguments( ... output_dir="my_awesome_asr_mind_model", ... per_device_train_batch_size=8, ... gradient_accumulation_steps=2, ... learning_rate=1e-5, ... warmup_steps=500, ... max_steps=2000, ... gradient_checkpointing=True, ... fp16=True, ... group_by_length=True, ... eval_strategy="steps", ... per_device_eval_batch_size=8, ... save_steps=1000, ... eval_steps=1000, ... logging_steps=25, ... load_best_model_at_end=True, ... metric_for_best_model="wer", ... greater_is_better=False, ... push_to_hub=True, ... ) >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=encoded_minds["train"], ... eval_dataset=encoded_minds["test"], ... tokenizer=processor.feature_extractor, ... data_collator=data_collator, ... compute_metrics=compute_metrics, ... ) >>> trainer.train() ``` ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด ๋ชจ๋‘๊ฐ€ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก [`~transformers.Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ Hub์— ๊ณต์œ ํ•˜์„ธ์š”: ```py >>> trainer.push_to_hub() ``` </pt> </frameworkcontent> <Tip> ์ž๋™ ์Œ์„ฑ ์ธ์‹์„ ์œ„ํ•ด ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋” ์ž์„ธํ•œ ์˜ˆ์ œ๋Š” ์˜์–ด ์ž๋™ ์Œ์„ฑ ์ธ์‹์„ ์œ„ํ•œ [๋ธ”๋กœ๊ทธ ํฌ์ŠคํŠธ](https://huggingface.co/blog/fine-tune-wav2vec2-english)์™€ ๋‹ค๊ตญ์–ด ์ž๋™ ์Œ์„ฑ ์ธ์‹์„ ์œ„ํ•œ [ํฌ์ŠคํŠธ](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. </Tip> ## ์ถ”๋ก ํ•˜๊ธฐ[[inference]] ์ข‹์•„์š”, ์ด์ œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ–ˆ์œผ๋‹ˆ ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์˜ค๋””์˜ค ํŒŒ์ผ์„ ๊ฐ€์ ธ์˜ค์„ธ์š”. ํ•„์š”ํ•œ ๊ฒฝ์šฐ ์˜ค๋””์˜ค ํŒŒ์ผ์˜ ์ƒ˜ํ”Œ๋ง ๋น„์œจ์„ ๋ชจ๋ธ์˜ ์ƒ˜ํ”Œ๋ง ๋ ˆ์ดํŠธ์— ๋งž๊ฒŒ ๋ฆฌ์ƒ˜ํ”Œ๋งํ•˜๋Š” ๊ฒƒ์„ ์žŠ์ง€ ๋งˆ์„ธ์š”! 
```py >>> from datasets import load_dataset, Audio >>> dataset = load_dataset("PolyAI/minds14", "en-US", split="train") >>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16000)) >>> sampling_rate = dataset.features["audio"].sampling_rate >>> audio_file = dataset[0]["audio"]["path"] ``` ์ถ”๋ก ์„ ์œ„ํ•ด ๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ์„ ์‹œํ—˜ํ•ด๋ณด๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`pipeline`]์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜์—ฌ ์ž๋™ ์Œ์„ฑ ์ธ์‹์„ ์œ„ํ•œ `pipeline`์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ  ์˜ค๋””์˜ค ํŒŒ์ผ์„ ์ „๋‹ฌํ•˜์„ธ์š”: ```py >>> from transformers import pipeline >>> transcriber = pipeline("automatic-speech-recognition", model="stevhliu/my_awesome_asr_minds_model") >>> transcriber(audio_file) {'text': 'I WOUD LIKE O SET UP JOINT ACOUNT WTH Y PARTNER'} ``` <Tip> ํ…์ŠคํŠธ๋กœ ๋ณ€ํ™˜๋œ ๊ฒฐ๊ณผ๊ฐ€ ๊ฝค ๊ดœ์ฐฎ์ง€๋งŒ ๋” ์ข‹์„ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค! ๋” ๋‚˜์€ ๊ฒฐ๊ณผ๋ฅผ ์–ป์œผ๋ ค๋ฉด ๋” ๋งŽ์€ ์˜ˆ์ œ๋กœ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜์„ธ์š”! </Tip> `pipeline`์˜ ๊ฒฐ๊ณผ๋ฅผ ์ˆ˜๋™์œผ๋กœ ์žฌํ˜„ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: <frameworkcontent> <pt> ์˜ค๋””์˜ค ํŒŒ์ผ๊ณผ ํ…์ŠคํŠธ๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๊ณ  PyTorch ํ…์„œ๋กœ `input`์„ ๋ฐ˜ํ™˜ํ•  ํ”„๋กœ์„ธ์„œ๋ฅผ ๊ฐ€์ ธ์˜ค์„ธ์š”: ```py >>> from transformers import AutoProcessor >>> processor = AutoProcessor.from_pretrained("stevhliu/my_awesome_asr_mind_model") >>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt") ``` ์ž…๋ ฅ์„ ๋ชจ๋ธ์— ์ „๋‹ฌํ•˜๊ณ  ๋กœ์ง“์„ ๋ฐ˜ํ™˜ํ•˜์„ธ์š”: ```py >>> from transformers import AutoModelForCTC >>> model = AutoModelForCTC.from_pretrained("stevhliu/my_awesome_asr_mind_model") >>> with torch.no_grad(): ... logits = model(**inputs).logits ``` ๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์˜ `input_ids`๋ฅผ ์˜ˆ์ธกํ•˜๊ณ , ํ”„๋กœ์„ธ์„œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์˜ˆ์ธก๋œ `input_ids`๋ฅผ ๋‹ค์‹œ ํ…์ŠคํŠธ๋กœ ๋””์ฝ”๋”ฉํ•˜์„ธ์š”: ```py >>> import torch >>> predicted_ids = torch.argmax(logits, dim=-1) >>> transcription = processor.batch_decode(predicted_ids) >>> transcription ['I WOUL LIKE O SET UP JOINT ACOUNT WTH Y PARTNER'] ``` </pt> </frameworkcontent>
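데이터 세트가 아닌 로컬 오디오 파일을 직접 변환하고 싶다면, 아래처럼 파일을 16kHz로 리샘플링해서 불러온 뒤 같은 과정을 거치면 됩니다. `librosa`는 이 가이드의 필수 라이브러리가 아니므로 별도로 설치했다고 가정하며, `"my_recording.wav"`는 설명을 위한 가상의 파일 경로입니다:

```py
>>> import torch
>>> import librosa  # ๋ณ„๋„๋กœ ์„ค์น˜ํ–ˆ๋‹ค๊ณ  ๊ฐ€์ •ํ•œ ์˜ค๋””์˜ค ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์ž…๋‹ˆ๋‹ค
>>> from transformers import AutoProcessor, AutoModelForCTC

>>> processor = AutoProcessor.from_pretrained("stevhliu/my_awesome_asr_mind_model")
>>> model = AutoModelForCTC.from_pretrained("stevhliu/my_awesome_asr_mind_model")

>>> # ํŒŒ์ผ์„ ๋ชจ๋ธ์ด ๊ธฐ๋Œ€ํ•˜๋Š” 16kHz๋กœ ๋ฆฌ์ƒ˜ํ”Œ๋งํ•˜์—ฌ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค
>>> speech, _ = librosa.load("my_recording.wav", sr=16_000)

>>> inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_ids = torch.argmax(logits, dim=-1)
>>> processor.batch_decode(predicted_ids)
```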
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๊ฐ์ฒด ํƒ์ง€ [[object-detection]] [[open-in-colab]] ๊ฐ์ฒด ํƒ์ง€๋Š” ์ด๋ฏธ์ง€์—์„œ ์ธ์Šคํ„ด์Šค(์˜ˆ: ์‚ฌ๋žŒ, ๊ฑด๋ฌผ ๋˜๋Š” ์ž๋™์ฐจ)๋ฅผ ๊ฐ์ง€ํ•˜๋Š” ์ปดํ“จํ„ฐ ๋น„์ „ ์ž‘์—…์ž…๋‹ˆ๋‹ค. ๊ฐ์ฒด ํƒ์ง€ ๋ชจ๋ธ์€ ์ด๋ฏธ์ง€๋ฅผ ์ž…๋ ฅ์œผ๋กœ ๋ฐ›๊ณ  ํƒ์ง€๋œ ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค์˜ ์ขŒํ‘œ์™€ ๊ด€๋ จ๋œ ๋ ˆ์ด๋ธ”์„ ์ถœ๋ ฅํ•ฉ๋‹ˆ๋‹ค. ํ•˜๋‚˜์˜ ์ด๋ฏธ์ง€์—๋Š” ์—ฌ๋Ÿฌ ๊ฐ์ฒด๊ฐ€ ์žˆ์„ ์ˆ˜ ์žˆ์œผ๋ฉฐ ๊ฐ๊ฐ์€ ์ž์ฒด์ ์ธ ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค์™€ ๋ ˆ์ด๋ธ”์„ ๊ฐ€์งˆ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค(์˜ˆ: ์ฐจ์™€ ๊ฑด๋ฌผ์ด ์žˆ๋Š” ์ด๋ฏธ์ง€). ๋˜ํ•œ ๊ฐ ๊ฐ์ฒด๋Š” ์ด๋ฏธ์ง€์˜ ๋‹ค๋ฅธ ๋ถ€๋ถ„์— ์กด์žฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค(์˜ˆ: ์ด๋ฏธ์ง€์— ์—ฌ๋Ÿฌ ๋Œ€์˜ ์ฐจ๊ฐ€ ์žˆ์„ ์ˆ˜ ์žˆ์Œ). ์ด ์ž‘์—…์€ ๋ณดํ–‰์ž, ๋„๋กœ ํ‘œ์ง€ํŒ, ์‹ ํ˜ธ๋“ฑ๊ณผ ๊ฐ™์€ ๊ฒƒ๋“ค์„ ๊ฐ์ง€ํ•˜๋Š” ์ž์œจ ์ฃผํ–‰์— ์ผ๋ฐ˜์ ์œผ๋กœ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ๋‹ค๋ฅธ ์‘์šฉ ๋ถ„์•ผ๋กœ๋Š” ์ด๋ฏธ์ง€ ๋‚ด ๊ฐ์ฒด ์ˆ˜ ๊ณ„์‚ฐ ๋ฐ ์ด๋ฏธ์ง€ ๊ฒ€์ƒ‰ ๋“ฑ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ ๋‹ค์Œ์„ ๋ฐฐ์šธ ๊ฒƒ์ž…๋‹ˆ๋‹ค: 1. ํ•ฉ์„ฑ๊ณฑ ๋ฐฑ๋ณธ(์ธํ’‹ ๋ฐ์ดํ„ฐ์˜ ํŠน์„ฑ์„ ์ถ”์ถœํ•˜๋Š” ํ•ฉ์„ฑ๊ณฑ ๋„คํŠธ์›Œํฌ)๊ณผ ์ธ์ฝ”๋”-๋””์ฝ”๋” ํŠธ๋žœ์Šคํฌ๋จธ ๋ชจ๋ธ์„ ๊ฒฐํ•ฉํ•œ [DETR](https://huggingface.co/docs/transformers/model_doc/detr) ๋ชจ๋ธ์„ [CPPE-5](https://huggingface.co/datasets/cppe-5) ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ๋Œ€ํ•ด ๋ฏธ์„ธ์กฐ์ • ํ•˜๊ธฐ 2. ๋ฏธ์„ธ์กฐ์ • ํ•œ ๋ชจ๋ธ์„ ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๊ธฐ. <Tip> ์ด ์ž‘์—…๊ณผ ํ˜ธํ™˜๋˜๋Š” ๋ชจ๋“  ์•„ํ‚คํ…์ฒ˜์™€ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋ณด๋ ค๋ฉด [์ž‘์—… ํŽ˜์ด์ง€](https://huggingface.co/tasks/object-detection)๋ฅผ ํ™•์ธํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ชจ๋“  ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install -q datasets transformers evaluate timm albumentations ``` ํ—ˆ๊น…ํŽ˜์ด์Šค ํ—ˆ๋ธŒ์—์„œ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๊ฐ€์ ธ์˜ค๊ธฐ ์œ„ํ•œ ๐Ÿค— Datasets๊ณผ ๋ชจ๋ธ์„ ํ•™์Šตํ•˜๊ธฐ ์œ„ํ•œ ๐Ÿค— Transformers, ๋ฐ์ดํ„ฐ๋ฅผ ์ฆ๊ฐ•ํ•˜๊ธฐ ์œ„ํ•œ `albumentations`๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. DETR ๋ชจ๋ธ์˜ ํ•ฉ์„ฑ๊ณฑ ๋ฐฑ๋ณธ์„ ๊ฐ€์ ธ์˜ค๊ธฐ ์œ„ํ•ด์„œ๋Š” ํ˜„์žฌ `timm`์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ๊ณต์œ ํ•  ์ˆ˜ ์žˆ๋„๋ก Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ํ”„๋กฌํ”„ํŠธ๊ฐ€ ๋‚˜ํƒ€๋‚˜๋ฉด ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•˜์„ธ์š”: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## CPPE-5 ๋ฐ์ดํ„ฐ ์„ธํŠธ ๊ฐ€์ ธ์˜ค๊ธฐ [[load-the-CPPE-5-dataset]] [CPPE-5](https://huggingface.co/datasets/cppe-5) ๋ฐ์ดํ„ฐ ์„ธํŠธ๋Š” COVID-19 ๋Œ€์œ ํ–‰ ์ƒํ™ฉ์—์„œ ์˜๋ฃŒ ์ „๋ฌธ์ธ๋ ฅ ๋ณดํ˜ธ ์žฅ๋น„(PPE)๋ฅผ ์‹๋ณ„ํ•˜๋Š” ์–ด๋…ธํ…Œ์ด์…˜์ด ํฌํ•จ๋œ ์ด๋ฏธ์ง€๋ฅผ ๋‹ด๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. 
๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๊ฐ€์ ธ์˜ค์„ธ์š”: ```py >>> from datasets import load_dataset >>> cppe5 = load_dataset("cppe-5") >>> cppe5 DatasetDict({ train: Dataset({ features: ['image_id', 'image', 'width', 'height', 'objects'], num_rows: 1000 }) test: Dataset({ features: ['image_id', 'image', 'width', 'height', 'objects'], num_rows: 29 }) }) ``` ์ด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋Š” ํ•™์Šต ์„ธํŠธ ์ด๋ฏธ์ง€ 1,000๊ฐœ์™€ ํ…Œ์ŠคํŠธ ์„ธํŠธ ์ด๋ฏธ์ง€ 29๊ฐœ๋ฅผ ๊ฐ–๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ์— ์ต์ˆ™ํ•ด์ง€๊ธฐ ์œ„ํ•ด, ์˜ˆ์‹œ๊ฐ€ ์–ด๋–ป๊ฒŒ ๊ตฌ์„ฑ๋˜์–ด ์žˆ๋Š”์ง€ ์‚ดํŽด๋ณด์„ธ์š”. ```py >>> cppe5["train"][0] {'image_id': 15, 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=943x663 at 0x7F9EC9E77C10>, 'width': 943, 'height': 663, 'objects': {'id': [114, 115, 116, 117], 'area': [3796, 1596, 152768, 81002], 'bbox': [[302.0, 109.0, 73.0, 52.0], [810.0, 100.0, 57.0, 28.0], [160.0, 31.0, 248.0, 616.0], [741.0, 68.0, 202.0, 401.0]], 'category': [4, 4, 0, 0]}} ``` ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์žˆ๋Š” ์˜ˆ์‹œ๋Š” ๋‹ค์Œ์˜ ์˜์—ญ์„ ๊ฐ€์ง€๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค: - `image_id`: ์˜ˆ์‹œ ์ด๋ฏธ์ง€ id - `image`: ์ด๋ฏธ์ง€๋ฅผ ํฌํ•จํ•˜๋Š” `PIL.Image.Image` ๊ฐ์ฒด - `width`: ์ด๋ฏธ์ง€์˜ ๋„ˆ๋น„ - `height`: ์ด๋ฏธ์ง€์˜ ๋†’์ด - `objects`: ์ด๋ฏธ์ง€ ์•ˆ์˜ ๊ฐ์ฒด๋“ค์˜ ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค ๋ฉ”ํƒ€๋ฐ์ดํ„ฐ๋ฅผ ํฌํ•จํ•˜๋Š” ๋”•์…”๋„ˆ๋ฆฌ: - `id`: ์–ด๋…ธํ…Œ์ด์…˜ id - `area`: ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค์˜ ๋ฉด์  - `bbox`: ๊ฐ์ฒด์˜ ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค ([COCO ํฌ๋งท](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco)์œผ๋กœ) - `category`: ๊ฐ์ฒด์˜ ์นดํ…Œ๊ณ ๋ฆฌ, ๊ฐ€๋Šฅํ•œ ๊ฐ’์œผ๋กœ๋Š” `Coverall (0)`, `Face_Shield (1)`, `Gloves (2)`, `Goggles (3)` ๋ฐ `Mask (4)` ๊ฐ€ ํฌํ•จ๋ฉ๋‹ˆ๋‹ค. `bbox` ํ•„๋“œ๊ฐ€ DETR ๋ชจ๋ธ์ด ์š”๊ตฌํ•˜๋Š” COCO ํ˜•์‹์„ ๋”ฐ๋ฅธ๋‹ค๋Š” ๊ฒƒ์„ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ `objects` ๋‚ด๋ถ€์˜ ํ•„๋“œ ๊ทธ๋ฃน์€ DETR์ด ์š”๊ตฌํ•˜๋Š” ์–ด๋…ธํ…Œ์ด์…˜ ํ˜•์‹๊ณผ ๋‹ค๋ฆ…๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์ด ๋ฐ์ดํ„ฐ๋ฅผ ํ•™์Šต์— ์‚ฌ์šฉํ•˜๊ธฐ ์ „์— ์ „์ฒ˜๋ฆฌ๋ฅผ ์ ์šฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ๋ฅผ ๋” ์ž˜ ์ดํ•ดํ•˜๊ธฐ ์œ„ํ•ด์„œ ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ ํ•œ ๊ฐ€์ง€ ์˜ˆ์‹œ๋ฅผ ์‹œ๊ฐํ™”ํ•˜์„ธ์š”. ```py >>> import numpy as np >>> import os >>> from PIL import Image, ImageDraw >>> image = cppe5["train"][0]["image"] >>> annotations = cppe5["train"][0]["objects"] >>> draw = ImageDraw.Draw(image) >>> categories = cppe5["train"].features["objects"].feature["category"].names >>> id2label = {index: x for index, x in enumerate(categories, start=0)} >>> label2id = {v: k for k, v in id2label.items()} >>> for i in range(len(annotations["id"])): ... box = annotations["bbox"][i - 1] ... class_idx = annotations["category"][i - 1] ... x, y, w, h = tuple(box) ... draw.rectangle((x, y, x + w, y + h), outline="red", width=1) ... draw.text((x, y), id2label[class_idx], fill="white") >>> image ``` <div class="flex justify-center"> <img src="https://i.imgur.com/TdaqPJO.png" alt="CPPE-5 Image Example"/> </div> ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค์™€ ์—ฐ๊ฒฐ๋œ ๋ ˆ์ด๋ธ”์„ ์‹œ๊ฐํ™”ํ•˜๋ ค๋ฉด ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ๋ฉ”ํƒ€ ๋ฐ์ดํ„ฐ, ํŠนํžˆ `category` ํ•„๋“œ์—์„œ ๋ ˆ์ด๋ธ”์„ ๊ฐ€์ ธ์™€์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ ๋ ˆ์ด๋ธ” ID๋ฅผ ๋ ˆ์ด๋ธ” ํด๋ž˜์Šค์— ๋งคํ•‘ํ•˜๋Š” `id2label`๊ณผ ๋ฐ˜๋Œ€๋กœ ๋งคํ•‘ํ•˜๋Š” `label2id` ๋”•์…”๋„ˆ๋ฆฌ๋ฅผ ๋งŒ๋“ค์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ ์„ค์ •ํ•  ๋•Œ ์ด๋Ÿฌํ•œ ๋งคํ•‘์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋งคํ•‘์€ ํ—ˆ๊น…ํŽ˜์ด์Šค ํ—ˆ๋ธŒ์—์„œ ๋ชจ๋ธ์„ ๊ณต์œ ํ–ˆ์„ ๋•Œ ๋‹ค๋ฅธ ์‚ฌ๋žŒ๋“ค์ด ์žฌ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ๋ฅผ ๋” ์ž˜ ์ดํ•ดํ•˜๊ธฐ ์œ„ํ•œ ์ตœ์ข… ๋‹จ๊ณ„๋กœ, ์ž ์žฌ์ ์ธ ๋ฌธ์ œ๋ฅผ ์ฐพ์•„๋ณด์„ธ์š”. 
๊ฐ์ฒด ๊ฐ์ง€๋ฅผ ์œ„ํ•œ ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ ์ž์ฃผ ๋ฐœ์ƒํ•˜๋Š” ๋ฌธ์ œ ์ค‘ ํ•˜๋‚˜๋Š” ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค๊ฐ€ ์ด๋ฏธ์ง€์˜ ๊ฐ€์žฅ์ž๋ฆฌ๋ฅผ ๋„˜์–ด๊ฐ€๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค๋ฅผ "๋„˜์–ด๊ฐ€๋Š” ๊ฒƒ(run away)"์€ ํ›ˆ๋ จ ์ค‘์— ์˜ค๋ฅ˜๋ฅผ ๋ฐœ์ƒ์‹œํ‚ฌ ์ˆ˜ ์žˆ๊ธฐ์— ์ด ๋‹จ๊ณ„์—์„œ ์ฒ˜๋ฆฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด ๋ฐ์ดํ„ฐ ์„ธํŠธ์—๋„ ๊ฐ™์€ ๋ฌธ์ œ๊ฐ€ ์žˆ๋Š” ๋ช‡ ๊ฐ€์ง€ ์˜ˆ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ๊ฐ„๋‹จํ•˜๊ฒŒํ•˜๊ธฐ ์œ„ํ•ด ๋ฐ์ดํ„ฐ์—์„œ ์ด๋Ÿฌํ•œ ์ด๋ฏธ์ง€๋ฅผ ์ œ๊ฑฐํ•ฉ๋‹ˆ๋‹ค. ```py >>> remove_idx = [590, 821, 822, 875, 876, 878, 879] >>> keep = [i for i in range(len(cppe5["train"])) if i not in remove_idx] >>> cppe5["train"] = cppe5["train"].select(keep) ``` ## ๋ฐ์ดํ„ฐ ์ „์ฒ˜๋ฆฌํ•˜๊ธฐ [[preprocess-the-data]] ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ • ํ•˜๋ ค๋ฉด, ๋ฏธ๋ฆฌ ํ•™์Šต๋œ ๋ชจ๋ธ์—์„œ ์‚ฌ์šฉํ•œ ์ „์ฒ˜๋ฆฌ ๋ฐฉ์‹๊ณผ ์ •ํ™•ํ•˜๊ฒŒ ์ผ์น˜ํ•˜๋„๋ก ์‚ฌ์šฉํ•  ๋ฐ์ดํ„ฐ๋ฅผ ์ „์ฒ˜๋ฆฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. [`AutoImageProcessor`]๋Š” ์ด๋ฏธ์ง€ ๋ฐ์ดํ„ฐ๋ฅผ ์ฒ˜๋ฆฌํ•˜์—ฌ DETR ๋ชจ๋ธ์ด ํ•™์Šต์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” `pixel_values`, `pixel_mask`, ๊ทธ๋ฆฌ๊ณ  `labels`๋ฅผ ์ƒ์„ฑํ•˜๋Š” ์ž‘์—…์„ ๋‹ด๋‹นํ•ฉ๋‹ˆ๋‹ค. ์ด ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ์—๋Š” ๊ฑฑ์ •ํ•˜์ง€ ์•Š์•„๋„ ๋˜๋Š” ๋ช‡ ๊ฐ€์ง€ ์†์„ฑ์ด ์žˆ์Šต๋‹ˆ๋‹ค: - `image_mean = [0.485, 0.456, 0.406 ]` - `image_std = [0.229, 0.224, 0.225]` ์ด ๊ฐ’๋“ค์€ ๋ชจ๋ธ ์‚ฌ์ „ ํ›ˆ๋ จ ์ค‘ ์ด๋ฏธ์ง€๋ฅผ ์ •๊ทœํ™”ํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋˜๋Š” ํ‰๊ท ๊ณผ ํ‘œ์ค€ ํŽธ์ฐจ์ž…๋‹ˆ๋‹ค. ์ด ๊ฐ’๋“ค์€ ์ถ”๋ก  ๋˜๋Š” ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ์ด๋ฏธ์ง€ ๋ชจ๋ธ์„ ์„ธ๋ฐ€ํ•˜๊ฒŒ ์กฐ์ •ํ•  ๋•Œ ๋ณต์ œํ•ด์•ผ ํ•˜๋Š” ์ค‘์š”ํ•œ ๊ฐ’์ž…๋‹ˆ๋‹ค. ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ๊ณผ ๋™์ผํ•œ ์ฒดํฌํฌ์ธํŠธ์—์„œ ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ์ธ์Šคํ„ด์Šคํ™”ํ•ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import AutoImageProcessor >>> checkpoint = "facebook/detr-resnet-50" >>> image_processor = AutoImageProcessor.from_pretrained(checkpoint) ``` `image_processor`์— ์ด๋ฏธ์ง€๋ฅผ ์ „๋‹ฌํ•˜๊ธฐ ์ „์—, ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ๋‘ ๊ฐ€์ง€ ์ „์ฒ˜๋ฆฌ๋ฅผ ์ ์šฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: - ์ด๋ฏธ์ง€ ์ฆ๊ฐ• - DETR ๋ชจ๋ธ์˜ ์š”๊ตฌ์— ๋งž๊ฒŒ ์–ด๋…ธํ…Œ์ด์…˜์„ ๋‹ค์‹œ ํฌ๋งทํŒ… ์ฒซ์งธ๋กœ, ๋ชจ๋ธ์ด ํ•™์Šต ๋ฐ์ดํ„ฐ์— ๊ณผ์ ํ•ฉ ๋˜์ง€ ์•Š๋„๋ก ๋ฐ์ดํ„ฐ ์ฆ๊ฐ• ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ ์ค‘ ์•„๋ฌด๊ฑฐ๋‚˜ ์‚ฌ์šฉํ•˜์—ฌ ๋ณ€ํ™˜์„ ์ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์—์„œ๋Š” [Albumentations](https://albumentations.ai/docs/) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค... ์ด ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋Š” ๋ณ€ํ™˜์„ ์ด๋ฏธ์ง€์— ์ ์šฉํ•˜๊ณ  ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค๋ฅผ ์ ์ ˆํ•˜๊ฒŒ ์—…๋ฐ์ดํŠธํ•˜๋„๋ก ๋ณด์žฅํ•ฉ๋‹ˆ๋‹ค. ๐Ÿค— Datasets ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ ๋ฌธ์„œ์—๋Š” [๊ฐ์ฒด ํƒ์ง€๋ฅผ ์œ„ํ•ด ์ด๋ฏธ์ง€๋ฅผ ๋ณด๊ฐ•ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๊ฐ€์ด๋“œ](https://huggingface.co/docs/datasets/object_detection)๊ฐ€ ์žˆ์œผ๋ฉฐ, ์ด ์˜ˆ์ œ์™€ ์ •ํ™•ํžˆ ๋™์ผํ•œ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ๋Š” ๊ฐ ์ด๋ฏธ์ง€๋ฅผ (480, 480) ํฌ๊ธฐ๋กœ ์กฐ์ •ํ•˜๊ณ , ์ขŒ์šฐ๋กœ ๋’ค์ง‘๊ณ , ๋ฐ๊ธฐ๋ฅผ ๋†’์ด๋Š” ๋™์ผํ•œ ์ ‘๊ทผ๋ฒ•์„ ์ ์šฉํ•ฉ๋‹ˆ๋‹ค: ```py >>> import albumentations >>> import numpy as np >>> import torch >>> transform = albumentations.Compose( ... [ ... albumentations.Resize(480, 480), ... albumentations.HorizontalFlip(p=1.0), ... albumentations.RandomBrightnessContrast(p=1.0), ... ], ... bbox_params=albumentations.BboxParams(format="coco", label_fields=["category"]), ... ) ``` ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋Š” ์–ด๋…ธํ…Œ์ด์…˜์ด ๋‹ค์Œ๊ณผ ๊ฐ™์€ ํ˜•์‹์ผ ๊ฒƒ์œผ๋กœ ์˜ˆ์ƒํ•ฉ๋‹ˆ๋‹ค: `{'image_id': int, 'annotations': List[Dict]}`, ์—ฌ๊ธฐ์„œ ๊ฐ ๋”•์…”๋„ˆ๋ฆฌ๋Š” COCO ๊ฐ์ฒด ์–ด๋…ธํ…Œ์ด์…˜์ž…๋‹ˆ๋‹ค. 
๋‹จ์ผ ์˜ˆ์ œ์— ๋Œ€ํ•ด ์–ด๋…ธํ…Œ์ด์…˜์˜ ํ˜•์‹์„ ๋‹ค์‹œ ์ง€์ •ํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ์ถ”๊ฐ€ํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> def formatted_anns(image_id, category, area, bbox): ... annotations = [] ... for i in range(0, len(category)): ... new_ann = { ... "image_id": image_id, ... "category_id": category[i], ... "isCrowd": 0, ... "area": area[i], ... "bbox": list(bbox[i]), ... } ... annotations.append(new_ann) ... return annotations ``` ์ด์ œ ์ด๋ฏธ์ง€์™€ ์–ด๋…ธํ…Œ์ด์…˜ ์ „์ฒ˜๋ฆฌ ๋ณ€ํ™˜์„ ๊ฒฐํ•ฉํ•˜์—ฌ ์˜ˆ์ œ ๋ฐฐ์น˜์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> # transforming a batch >>> def transform_aug_ann(examples): ... image_ids = examples["image_id"] ... images, bboxes, area, categories = [], [], [], [] ... for image, objects in zip(examples["image"], examples["objects"]): ... image = np.array(image.convert("RGB"))[:, :, ::-1] ... out = transform(image=image, bboxes=objects["bbox"], category=objects["category"]) ... area.append(objects["area"]) ... images.append(out["image"]) ... bboxes.append(out["bboxes"]) ... categories.append(out["category"]) ... targets = [ ... {"image_id": id_, "annotations": formatted_anns(id_, cat_, ar_, box_)} ... for id_, cat_, ar_, box_ in zip(image_ids, categories, area, bboxes) ... ] ... return image_processor(images=images, annotations=targets, return_tensors="pt") ``` ์ด์ „ ๋‹จ๊ณ„์—์„œ ๋งŒ๋“  ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ๐Ÿค— Datasets์˜ [`~datasets.Dataset.with_transform`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ ์„ธํŠธ ์ „์ฒด์— ์ ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด ๋ฉ”์†Œ๋“œ๋Š” ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์š”์†Œ๋ฅผ ๊ฐ€์ ธ์˜ฌ ๋•Œ๋งˆ๋‹ค ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด ์‹œ์ ์—์„œ๋Š” ์ „์ฒ˜๋ฆฌ ํ›„ ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ ์˜ˆ์‹œ ํ•˜๋‚˜๋ฅผ ๊ฐ€์ ธ์™€์„œ ๋ณ€ํ™˜ ํ›„ ๋ชจ์–‘์ด ์–ด๋–ป๊ฒŒ ๋˜๋Š”์ง€ ํ™•์ธํ•ด ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋•Œ, `pixel_values` ํ…์„œ, `pixel_mask` ํ…์„œ, ๊ทธ๋ฆฌ๊ณ  `labels`๋กœ ๊ตฌ์„ฑ๋œ ํ…์„œ๊ฐ€ ์žˆ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 
```py >>> cppe5["train"] = cppe5["train"].with_transform(transform_aug_ann) >>> cppe5["train"][15] {'pixel_values': tensor([[[ 0.9132, 0.9132, 0.9132, ..., -1.9809, -1.9809, -1.9809], [ 0.9132, 0.9132, 0.9132, ..., -1.9809, -1.9809, -1.9809], [ 0.9132, 0.9132, 0.9132, ..., -1.9638, -1.9638, -1.9638], ..., [-1.5699, -1.5699, -1.5699, ..., -1.9980, -1.9980, -1.9980], [-1.5528, -1.5528, -1.5528, ..., -1.9980, -1.9809, -1.9809], [-1.5528, -1.5528, -1.5528, ..., -1.9980, -1.9809, -1.9809]], [[ 1.3081, 1.3081, 1.3081, ..., -1.8431, -1.8431, -1.8431], [ 1.3081, 1.3081, 1.3081, ..., -1.8431, -1.8431, -1.8431], [ 1.3081, 1.3081, 1.3081, ..., -1.8256, -1.8256, -1.8256], ..., [-1.3179, -1.3179, -1.3179, ..., -1.8606, -1.8606, -1.8606], [-1.3004, -1.3004, -1.3004, ..., -1.8606, -1.8431, -1.8431], [-1.3004, -1.3004, -1.3004, ..., -1.8606, -1.8431, -1.8431]], [[ 1.4200, 1.4200, 1.4200, ..., -1.6476, -1.6476, -1.6476], [ 1.4200, 1.4200, 1.4200, ..., -1.6476, -1.6476, -1.6476], [ 1.4200, 1.4200, 1.4200, ..., -1.6302, -1.6302, -1.6302], ..., [-1.0201, -1.0201, -1.0201, ..., -1.5604, -1.5604, -1.5604], [-1.0027, -1.0027, -1.0027, ..., -1.5604, -1.5430, -1.5430], [-1.0027, -1.0027, -1.0027, ..., -1.5604, -1.5430, -1.5430]]]), 'pixel_mask': tensor([[1, 1, 1, ..., 1, 1, 1], [1, 1, 1, ..., 1, 1, 1], [1, 1, 1, ..., 1, 1, 1], ..., [1, 1, 1, ..., 1, 1, 1], [1, 1, 1, ..., 1, 1, 1], [1, 1, 1, ..., 1, 1, 1]]), 'labels': {'size': tensor([800, 800]), 'image_id': tensor([756]), 'class_labels': tensor([4]), 'boxes': tensor([[0.7340, 0.6986, 0.3414, 0.5944]]), 'area': tensor([519544.4375]), 'iscrowd': tensor([0]), 'orig_size': tensor([480, 480])}} ``` ๊ฐ๊ฐ์˜ ์ด๋ฏธ์ง€๋ฅผ ์„ฑ๊ณต์ ์œผ๋กœ ์ฆ๊ฐ•ํ•˜๊ณ  ์ด๋ฏธ์ง€์˜ ์–ด๋…ธํ…Œ์ด์…˜์„ ์ค€๋น„ํ–ˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ „์ฒ˜๋ฆฌ๋Š” ์•„์ง ๋๋‚˜์ง€ ์•Š์•˜์Šต๋‹ˆ๋‹ค. ๋งˆ์ง€๋ง‰ ๋‹จ๊ณ„๋กœ, ์ด๋ฏธ์ง€๋ฅผ ๋ฐฐ์น˜๋กœ ๋งŒ๋“ค ์‚ฌ์šฉ์ž ์ •์˜ `collate_fn`์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ํ•ด๋‹น ๋ฐฐ์น˜์—์„œ ๊ฐ€์žฅ ํฐ ์ด๋ฏธ์ง€์— ์ด๋ฏธ์ง€(ํ˜„์žฌ `pixel_values` ์ธ)๋ฅผ ํŒจ๋“œํ•˜๊ณ , ์‹ค์ œ ํ”ฝ์…€(1)๊ณผ ํŒจ๋”ฉ(0)์„ ๋‚˜ํƒ€๋‚ด๊ธฐ ์œ„ํ•ด ๊ทธ์— ํ•ด๋‹นํ•˜๋Š” ์ƒˆ๋กœ์šด `pixel_mask`๋ฅผ ์ƒ์„ฑํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```py >>> def collate_fn(batch): ... pixel_values = [item["pixel_values"] for item in batch] ... encoding = image_processor.pad(pixel_values, return_tensors="pt") ... labels = [item["labels"] for item in batch] ... batch = {} ... batch["pixel_values"] = encoding["pixel_values"] ... batch["pixel_mask"] = encoding["pixel_mask"] ... batch["labels"] = labels ... return batch ``` ## DETR ๋ชจ๋ธ ํ•™์Šต์‹œํ‚ค๊ธฐ [[training-the-DETR-model]] ์ด์ „ ์„น์…˜์—์„œ ๋Œ€๋ถ€๋ถ„์˜ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•˜์—ฌ ์ด์ œ ๋ชจ๋ธ์„ ํ•™์Šตํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! ์ด ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์ด๋ฏธ์ง€๋Š” ๋ฆฌ์‚ฌ์ด์ฆˆ ํ›„์—๋„ ์—ฌ์ „ํžˆ ์šฉ๋Ÿ‰์ด ํฌ๊ธฐ ๋•Œ๋ฌธ์—, ์ด ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ • ํ•˜๋ ค๋ฉด ์ ์–ด๋„ ํ•˜๋‚˜์˜ GPU๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ํ•™์Šต์€ ๋‹ค์Œ์˜ ๋‹จ๊ณ„๋ฅผ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค: 1. [`AutoModelForObjectDetection`]์„ ์‚ฌ์šฉํ•˜์—ฌ ์ „์ฒ˜๋ฆฌ์™€ ๋™์ผํ•œ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. 2. [`TrainingArguments`]์—์„œ ํ•™์Šต ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. 3. ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ ์„ธํŠธ, ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ ๋ฐ ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ์™€ ํ•จ๊ป˜ [`Trainer`]์— ํ›ˆ๋ จ ์ธ์ˆ˜๋ฅผ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. 4. [`~Trainer.train`]๋ฅผ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ • ํ•ฉ๋‹ˆ๋‹ค. ์ „์ฒ˜๋ฆฌ์— ์‚ฌ์šฉํ•œ ์ฒดํฌํฌ์ธํŠธ์™€ ๋™์ผํ•œ ์ฒดํฌํฌ์ธํŠธ์—์„œ ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ฌ ๋•Œ, ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ๋ฉ”ํƒ€๋ฐ์ดํ„ฐ์—์„œ ๋งŒ๋“  `label2id`์™€ `id2label` ๋งคํ•‘์„ ์ „๋‹ฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 
๋˜ํ•œ, `ignore_mismatched_sizes=True`๋ฅผ ์ง€์ •ํ•˜์—ฌ ๊ธฐ์กด ๋ถ„๋ฅ˜ ํ—ค๋“œ(๋ชจ๋ธ์—์„œ ๋ถ„๋ฅ˜์— ์‚ฌ์šฉ๋˜๋Š” ๋งˆ์ง€๋ง‰ ๋ ˆ์ด์–ด)๋ฅผ ์ƒˆ ๋ถ„๋ฅ˜ ํ—ค๋“œ๋กœ ๋Œ€์ฒดํ•ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import AutoModelForObjectDetection >>> model = AutoModelForObjectDetection.from_pretrained( ... checkpoint, ... id2label=id2label, ... label2id=label2id, ... ignore_mismatched_sizes=True, ... ) ``` [`TrainingArguments`]์—์„œ `output_dir`์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ์ €์žฅํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•œ ๋‹ค์Œ, ํ•„์š”์— ๋”ฐ๋ผ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ๊ตฌ์„ฑํ•˜์„ธ์š”. ์‚ฌ์šฉํ•˜์ง€ ์•Š๋Š” ์—ด์„ ์ œ๊ฑฐํ•˜์ง€ ์•Š๋„๋ก ์ฃผ์˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋งŒ์•ฝ `remove_unused_columns`๊ฐ€ `True`์ผ ๊ฒฝ์šฐ ์ด๋ฏธ์ง€ ์—ด์ด ์‚ญ์ œ๋ฉ๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ์—ด์ด ์—†๋Š” ๊ฒฝ์šฐ `pixel_values`๋ฅผ ์ƒ์„ฑํ•  ์ˆ˜ ์—†๊ธฐ ๋•Œ๋ฌธ์— `remove_unused_columns`๋ฅผ `False`๋กœ ์„ค์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ Hub์— ์—…๋กœ๋“œํ•˜์—ฌ ๊ณต์œ ํ•˜๋ ค๋ฉด `push_to_hub`๋ฅผ `True`๋กœ ์„ค์ •ํ•˜์‹ญ์‹œ์˜ค(ํ—ˆ๊น…ํŽ˜์ด์Šค์— ๋กœ๊ทธ์ธํ•˜์—ฌ ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค). ```py >>> from transformers import TrainingArguments >>> training_args = TrainingArguments( ... output_dir="detr-resnet-50_finetuned_cppe5", ... per_device_train_batch_size=8, ... num_train_epochs=10, ... fp16=True, ... save_steps=200, ... logging_steps=50, ... learning_rate=1e-5, ... weight_decay=1e-4, ... save_total_limit=2, ... remove_unused_columns=False, ... push_to_hub=True, ... ) ``` ๋งˆ์ง€๋ง‰์œผ๋กœ `model`, `training_args`, `collate_fn`, `image_processor`์™€ ๋ฐ์ดํ„ฐ ์„ธํŠธ(`cppe5`)๋ฅผ ๋ชจ๋‘ ๊ฐ€์ ธ์˜จ ํ›„, [`~transformers.Trainer.train`]๋ฅผ ํ˜ธ์ถœํ•ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import Trainer >>> trainer = Trainer( ... model=model, ... args=training_args, ... data_collator=collate_fn, ... train_dataset=cppe5["train"], ... tokenizer=image_processor, ... ) >>> trainer.train() ``` `training_args`์—์„œ `push_to_hub`๋ฅผ `True`๋กœ ์„ค์ •ํ•œ ๊ฒฝ์šฐ, ํ•™์Šต ์ฒดํฌํฌ์ธํŠธ๋Š” ํ—ˆ๊น…ํŽ˜์ด์Šค ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œ๋ฉ๋‹ˆ๋‹ค. ํ•™์Šต ์™„๋ฃŒ ํ›„, [`~transformers.Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋ฅผ ํ˜ธ์ถœํ•˜์—ฌ ์ตœ์ข… ๋ชจ๋ธ์„ ํ—ˆ๊น…ํŽ˜์ด์Šค ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค. ```py >>> trainer.push_to_hub() ``` ## ํ‰๊ฐ€ํ•˜๊ธฐ [[evaluate]] ๊ฐ์ฒด ํƒ์ง€ ๋ชจ๋ธ์€ ์ผ๋ฐ˜์ ์œผ๋กœ ์ผ๋ จ์˜ <a href="https://cocodataset.org/#detection-eval">COCO-์Šคํƒ€์ผ ์ง€ํ‘œ</a>๋กœ ํ‰๊ฐ€๋ฉ๋‹ˆ๋‹ค. ๊ธฐ์กด์— ๊ตฌํ˜„๋œ ํ‰๊ฐ€ ์ง€ํ‘œ ์ค‘ ํ•˜๋‚˜๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜๋„ ์žˆ์ง€๋งŒ, ์—ฌ๊ธฐ์—์„œ๋Š” ํ—ˆ๊น…ํŽ˜์ด์Šค ํ—ˆ๋ธŒ์— ํ‘ธ์‹œํ•œ ์ตœ์ข… ๋ชจ๋ธ์„ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐ `torchvision`์—์„œ ์ œ๊ณตํ•˜๋Š” ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. `torchvision` ํ‰๊ฐ€์ž(evaluator)๋ฅผ ์‚ฌ์šฉํ•˜๋ ค๋ฉด ์‹ค์ธก๊ฐ’์ธ COCO ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์ค€๋น„ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. COCO ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋นŒ๋“œํ•˜๋Š” API๋Š” ๋ฐ์ดํ„ฐ๋ฅผ ํŠน์ • ํ˜•์‹์œผ๋กœ ์ €์žฅํ•ด์•ผ ํ•˜๋ฏ€๋กœ, ๋จผ์ € ์ด๋ฏธ์ง€์™€ ์–ด๋…ธํ…Œ์ด์…˜์„ ๋””์Šคํฌ์— ์ €์žฅํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ํ•™์Šต์„ ์œ„ํ•ด ๋ฐ์ดํ„ฐ๋ฅผ ์ค€๋น„ํ•  ๋•Œ์™€ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ, cppe5["test"]์—์„œ์˜ ์–ด๋…ธํ…Œ์ด์…˜์€ ํฌ๋งท์„ ๋งž์ถฐ์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ด๋ฏธ์ง€๋Š” ๊ทธ๋Œ€๋กœ ์œ ์ง€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ํ‰๊ฐ€ ๋‹จ๊ณ„๋Š” ์•ฝ๊ฐ„์˜ ์ž‘์—…์ด ํ•„์š”ํ•˜์ง€๋งŒ, ํฌ๊ฒŒ ์„ธ ๊ฐ€์ง€ ์ฃผ์š” ๋‹จ๊ณ„๋กœ ๋‚˜๋ˆŒ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋จผ์ €, `cppe5["test"]` ์„ธํŠธ๋ฅผ ์ค€๋น„ํ•ฉ๋‹ˆ๋‹ค: ์–ด๋…ธํ…Œ์ด์…˜์„ ํฌ๋งท์— ๋งž๊ฒŒ ๋งŒ๋“ค๊ณ  ๋ฐ์ดํ„ฐ๋ฅผ ๋””์Šคํฌ์— ์ €์žฅํ•ฉ๋‹ˆ๋‹ค. ```py >>> import json >>> # format annotations the same as for training, no need for data augmentation >>> def val_formatted_anns(image_id, objects): ... annotations = [] ... for i in range(0, len(objects["id"])): ... new_ann = { ... 
"id": objects["id"][i], ... "category_id": objects["category"][i], ... "iscrowd": 0, ... "image_id": image_id, ... "area": objects["area"][i], ... "bbox": objects["bbox"][i], ... } ... annotations.append(new_ann) ... return annotations >>> # Save images and annotations into the files torchvision.datasets.CocoDetection expects >>> def save_cppe5_annotation_file_images(cppe5): ... output_json = {} ... path_output_cppe5 = f"{os.getcwd()}/cppe5/" ... if not os.path.exists(path_output_cppe5): ... os.makedirs(path_output_cppe5) ... path_anno = os.path.join(path_output_cppe5, "cppe5_ann.json") ... categories_json = [{"supercategory": "none", "id": id, "name": id2label[id]} for id in id2label] ... output_json["images"] = [] ... output_json["annotations"] = [] ... for example in cppe5: ... ann = val_formatted_anns(example["image_id"], example["objects"]) ... output_json["images"].append( ... { ... "id": example["image_id"], ... "width": example["image"].width, ... "height": example["image"].height, ... "file_name": f"{example['image_id']}.png", ... } ... ) ... output_json["annotations"].extend(ann) ... output_json["categories"] = categories_json ... with open(path_anno, "w") as file: ... json.dump(output_json, file, ensure_ascii=False, indent=4) ... for im, img_id in zip(cppe5["image"], cppe5["image_id"]): ... path_img = os.path.join(path_output_cppe5, f"{img_id}.png") ... im.save(path_img) ... return path_output_cppe5, path_anno ``` ๋‹ค์Œ์œผ๋กœ, `cocoevaluator`์™€ ํ•จ๊ป˜ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” `CocoDetection` ํด๋ž˜์Šค์˜ ์ธ์Šคํ„ด์Šค๋ฅผ ์ค€๋น„ํ•ฉ๋‹ˆ๋‹ค. ```py >>> import torchvision >>> class CocoDetection(torchvision.datasets.CocoDetection): ... def __init__(self, img_folder, image_processor, ann_file): ... super().__init__(img_folder, ann_file) ... self.image_processor = image_processor ... def __getitem__(self, idx): ... # read in PIL image and target in COCO format ... img, target = super(CocoDetection, self).__getitem__(idx) ... # preprocess image and target: converting target to DETR format, ... # resizing + normalization of both image and target) ... image_id = self.ids[idx] ... target = {"image_id": image_id, "annotations": target} ... encoding = self.image_processor(images=img, annotations=target, return_tensors="pt") ... pixel_values = encoding["pixel_values"].squeeze() # remove batch dimension ... target = encoding["labels"][0] # remove batch dimension ... return {"pixel_values": pixel_values, "labels": target} >>> im_processor = AutoImageProcessor.from_pretrained("devonho/detr-resnet-50_finetuned_cppe5") >>> path_output_cppe5, path_anno = save_cppe5_annotation_file_images(cppe5["test"]) >>> test_ds_coco_format = CocoDetection(path_output_cppe5, im_processor, path_anno) ``` ๋งˆ์ง€๋ง‰์œผ๋กœ, ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ๊ฐ€์ ธ์™€์„œ ํ‰๊ฐ€๋ฅผ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. ```py >>> import evaluate >>> from tqdm import tqdm >>> model = AutoModelForObjectDetection.from_pretrained("devonho/detr-resnet-50_finetuned_cppe5") >>> module = evaluate.load("ybelkada/cocoevaluate", coco=test_ds_coco_format.coco) >>> val_dataloader = torch.utils.data.DataLoader( ... test_ds_coco_format, batch_size=8, shuffle=False, num_workers=4, collate_fn=collate_fn ... ) >>> with torch.no_grad(): ... for idx, batch in enumerate(tqdm(val_dataloader)): ... pixel_values = batch["pixel_values"] ... pixel_mask = batch["pixel_mask"] ... labels = [ ... {k: v for k, v in t.items()} for t in batch["labels"] ... ] # these are in DETR format, resized + normalized ... # forward pass ... 
outputs = model(pixel_values=pixel_values, pixel_mask=pixel_mask) ... orig_target_sizes = torch.stack([target["orig_size"] for target in labels], dim=0) ... results = im_processor.post_process(outputs, orig_target_sizes) # convert outputs of model to Pascal VOC format (xmin, ymin, xmax, ymax) ... module.add(prediction=results, reference=labels) ... del batch >>> results = module.compute() >>> print(results) Accumulating evaluation results... DONE (t=0.08s). IoU metric: bbox Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.352 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.681 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.292 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.168 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.208 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.429 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.274 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.484 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.501 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.191 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.323 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.590 ``` ์ด๋Ÿฌํ•œ ๊ฒฐ๊ณผ๋Š” [`~transformers.TrainingArguments`]์˜ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์กฐ์ •ํ•˜์—ฌ ๋”์šฑ ๊ฐœ์„ ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•œ๋ฒˆ ์‹œ๋„ํ•ด ๋ณด์„ธ์š”! ## ์ถ”๋ก ํ•˜๊ธฐ [[inference]] DETR ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ • ๋ฐ ํ‰๊ฐ€ํ•˜๊ณ , ํ—ˆ๊น…ํŽ˜์ด์Šค ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œ ํ–ˆ์œผ๋ฏ€๋กœ ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ์„ ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`pipeline`]์—์„œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ๊ณผ ํ•จ๊ป˜ ๊ฐ์ฒด ํƒ์ง€๋ฅผ ์œ„ํ•œ ํŒŒ์ดํ”„๋ผ์ธ์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ , ์ด๋ฏธ์ง€๋ฅผ ์ „๋‹ฌํ•˜์„ธ์š”: ```py >>> from transformers import pipeline >>> import requests >>> url = "https://i.imgur.com/2lnWoly.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> obj_detector = pipeline("object-detection", model="devonho/detr-resnet-50_finetuned_cppe5") >>> obj_detector(image) ``` ๋งŒ์•ฝ ์›ํ•œ๋‹ค๋ฉด ์ˆ˜๋™์œผ๋กœ `pipeline`์˜ ๊ฒฐ๊ณผ๋ฅผ ์žฌํ˜„ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> image_processor = AutoImageProcessor.from_pretrained("devonho/detr-resnet-50_finetuned_cppe5") >>> model = AutoModelForObjectDetection.from_pretrained("devonho/detr-resnet-50_finetuned_cppe5") >>> with torch.no_grad(): ... inputs = image_processor(images=image, return_tensors="pt") ... outputs = model(**inputs) ... target_sizes = torch.tensor([image.size[::-1]]) ... results = image_processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[0] >>> for score, label, box in zip(results["scores"], results["labels"], results["boxes"]): ... box = [round(i, 2) for i in box.tolist()] ... print( ... f"Detected {model.config.id2label[label.item()]} with confidence " ... f"{round(score.item(), 3)} at location {box}" ... ) Detected Coverall with confidence 0.566 at location [1215.32, 147.38, 4401.81, 3227.08] Detected Mask with confidence 0.584 at location [2449.06, 823.19, 3256.43, 1413.9] ``` ๊ฒฐ๊ณผ๋ฅผ ์‹œ๊ฐํ™”ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> draw = ImageDraw.Draw(image) >>> for score, label, box in zip(results["scores"], results["labels"], results["boxes"]): ... box = [round(i, 2) for i in box.tolist()] ... x, y, x2, y2 = tuple(box) ... 
draw.rectangle((x, y, x2, y2), outline="red", width=1) ... draw.text((x, y), model.config.id2label[label.item()], fill="white") >>> image ``` <div class="flex justify-center"> <img src="https://i.imgur.com/4QZnf9A.png" alt="Object detection result on a new image"/> </div>
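๊ฐ™์€ ๋ฐฉ์‹์œผ๋กœ ํ…Œ์ŠคํŠธ ์„ธํŠธ์˜ ๋‹ค๋ฅธ ์ด๋ฏธ์ง€์—๋„ ๋ชจ๋ธ์„ ์ ์šฉํ•ด ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ์œ„์—์„œ ๋งŒ๋“  `obj_detector` ํŒŒ์ดํ”„๋ผ์ธ๊ณผ `cppe5["test"]`๋ฅผ ๊ทธ๋Œ€๋กœ ์‚ฌ์šฉํ•œ๋‹ค๊ณ  ๊ฐ€์ •ํ•œ ๊ฐ„๋‹จํ•œ ์Šค์ผ€์น˜์ž…๋‹ˆ๋‹ค:

```py
>>> # ํ…Œ์ŠคํŠธ ์„ธํŠธ์˜ ์ฒ˜์Œ ์„ธ ์ด๋ฏธ์ง€์— ๋Œ€ํ•ด ํƒ์ง€๋œ ๋ ˆ์ด๋ธ”๊ณผ ์ ์ˆ�˜๋ฅผ ์ถœ๋ ฅํ•ฉ๋‹ˆ๋‹ค
>>> for example in cppe5["test"].select(range(3)):
...     detections = obj_detector(example["image"])
...     print(example["image_id"], [(d["label"], round(d["score"], 3)) for d in detections])
```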
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์˜ค๋””์˜ค ๋ถ„๋ฅ˜[[audio_classification]] [[open-in-colab]] <Youtube id="KWwzcmG98Ds"/> ์˜ค๋””์˜ค ๋ถ„๋ฅ˜๋Š” ํ…์ŠคํŠธ์™€ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ ์ž…๋ ฅ ๋ฐ์ดํ„ฐ์— ํด๋ž˜์Šค ๋ ˆ์ด๋ธ” ์ถœ๋ ฅ์„ ํ• ๋‹นํ•ฉ๋‹ˆ๋‹ค. ์œ ์ผํ•œ ์ฐจ์ด์ ์€ ํ…์ŠคํŠธ ์ž…๋ ฅ ๋Œ€์‹  ์›์‹œ ์˜ค๋””์˜ค ํŒŒํ˜•์ด ์žˆ๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์˜ค๋””์˜ค ๋ถ„๋ฅ˜์˜ ์‹ค์ œ ์ ์šฉ ๋ถ„์•ผ์—๋Š” ํ™”์ž์˜ ์˜๋„ ํŒŒ์•…, ์–ธ์–ด ๋ถ„๋ฅ˜, ์†Œ๋ฆฌ๋กœ ๋™๋ฌผ ์ข…์„ ์‹๋ณ„ํ•˜๋Š” ๊ฒƒ ๋“ฑ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๋ฌธ์„œ์—์„œ ๋ฐฉ๋ฒ•์„ ์•Œ์•„๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: 1. [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ [Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base)๋กœ ๋ฏธ์„ธ ์กฐ์ •ํ•˜์—ฌ ํ™”์ž์˜ ์˜๋„๋ฅผ ๋ถ„๋ฅ˜ํ•ฉ๋‹ˆ๋‹ค. 2. ์ถ”๋ก ์— ๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜์„ธ์š”. <Tip> ์ด ์ž‘์—…๊ณผ ํ˜ธํ™˜๋˜๋Š” ๋ชจ๋“  ์•„ํ‚คํ…์ฒ˜์™€ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋ณด๋ ค๋ฉด [์ž‘์—… ํŽ˜์ด์ง€](https://huggingface.co/tasks/audio-classification)๋ฅผ ํ™•์ธํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ๋ชจ๋‘ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install transformers datasets evaluate ``` ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ์ปค๋ฎค๋‹ˆํ‹ฐ์™€ ๊ณต์œ ํ•  ์ˆ˜ ์žˆ๋„๋ก ํ—ˆ๊น…ํŽ˜์ด์Šค ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ๋ฉ”์‹œ์ง€๊ฐ€ ํ‘œ์‹œ๋˜๋ฉด ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•ฉ๋‹ˆ๋‹ค: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## MInDS-14 ๋ฐ์ดํ„ฐ์…‹ ๋ถˆ๋Ÿฌ์˜ค๊ธฐ[[load_minds_14_dataset]] ๋จผ์ € ๐Ÿค— Datasets ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ MinDS-14 ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> from datasets import load_dataset, Audio >>> minds = load_dataset("PolyAI/minds14", name="en-US", split="train") ``` ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ `train` ๋ถ„ํ• ์„ [`~datasets.Dataset.train_test_split`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋” ์ž‘์€ ํ›ˆ๋ จ ๋ฐ ํ…Œ์ŠคํŠธ ์ง‘ํ•ฉ์œผ๋กœ ๋ถ„ํ• ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ๋” ๋งŽ์€ ์‹œ๊ฐ„์„ ์†Œ๋น„ํ•˜๊ธฐ ์ „์— ๋ชจ๋“  ๊ฒƒ์ด ์ž‘๋™ํ•˜๋Š”์ง€ ์‹คํ—˜ํ•˜๊ณ  ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> minds = minds.train_test_split(test_size=0.2) ``` ์ด์ œ ๋ฐ์ดํ„ฐ ์ง‘ํ•ฉ์„ ์‚ดํŽด๋ณผ๊ฒŒ์š”: ```py >>> minds DatasetDict({ train: Dataset({ features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'], num_rows: 450 }) test: Dataset({ features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'], num_rows: 113 }) }) ``` ๋ฐ์ดํ„ฐ ์„ธํŠธ์—๋Š” `lang_id` ๋ฐ `english_transcription`๊ณผ ๊ฐ™์€ ์œ ์šฉํ•œ ์ •๋ณด๊ฐ€ ๋งŽ์ด ํฌํ•จ๋˜์–ด ์žˆ์ง€๋งŒ ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” `audio` ๋ฐ `intent_class`์— ์ค‘์ ์„ ๋‘˜ ๊ฒƒ์ž…๋‹ˆ๋‹ค. 
๋‹ค๋ฅธ ์—ด์€ [`~datasets.Dataset.remove_columns`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ œ๊ฑฐํ•ฉ๋‹ˆ๋‹ค: ```py >>> minds = minds.remove_columns(["path", "transcription", "english_transcription", "lang_id"]) ``` ์˜ˆ์‹œ๋ฅผ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> minds["train"][0] {'audio': {'array': array([ 0. , 0. , 0. , ..., -0.00048828, -0.00024414, -0.00024414], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602b9a5fbb1e6d0fbce91f52.wav', 'sampling_rate': 8000}, 'intent_class': 2} ``` ๋‘ ๊ฐœ์˜ ํ•„๋“œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค: - `audio`: ์˜ค๋””์˜ค ํŒŒ์ผ์„ ๊ฐ€์ ธ์˜ค๊ณ  ๋ฆฌ์ƒ˜ํ”Œ๋งํ•˜๊ธฐ ์œ„ํ•ด ํ˜ธ์ถœํ•ด์•ผ ํ•˜๋Š” ์Œ์„ฑ ์‹ ํ˜ธ์˜ 1์ฐจ์› `๋ฐฐ์—ด`์ž…๋‹ˆ๋‹ค. - `intent_class`: ํ™”์ž์˜ ์˜๋„์— ๋Œ€ํ•œ ํด๋ž˜์Šค ID๋ฅผ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. ๋ชจ๋ธ์ด ๋ ˆ์ด๋ธ” ID์—์„œ ๋ ˆ์ด๋ธ” ์ด๋ฆ„์„ ์‰ฝ๊ฒŒ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ๋„๋ก ๋ ˆ์ด๋ธ” ์ด๋ฆ„์„ ์ •์ˆ˜๋กœ ๋งคํ•‘ํ•˜๋Š” ์‚ฌ์ „์„ ๋งŒ๋“ค๊ฑฐ๋‚˜ ๊ทธ ๋ฐ˜๋Œ€๋กœ ๋งคํ•‘ํ•˜๋Š” ์‚ฌ์ „์„ ๋งŒ๋“ญ๋‹ˆ๋‹ค: ```py >>> labels = minds["train"].features["intent_class"].names >>> label2id, id2label = dict(), dict() >>> for i, label in enumerate(labels): ... label2id[label] = str(i) ... id2label[str(i)] = label ``` ์ด์ œ ๋ ˆ์ด๋ธ” ID๋ฅผ ๋ ˆ์ด๋ธ” ์ด๋ฆ„์œผ๋กœ ๋ณ€ํ™˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> id2label[str(2)] 'app_error' ``` ## ์ „์ฒ˜๋ฆฌ[[preprocess]] ๋‹ค์Œ ๋‹จ๊ณ„๋Š” ์˜ค๋””์˜ค ์‹ ํ˜ธ๋ฅผ ์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด Wav2Vec2 ํŠน์ง• ์ถ”์ถœ๊ธฐ๋ฅผ ๊ฐ€์ ธ์˜ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค: ```py >>> from transformers import AutoFeatureExtractor >>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base") ``` MinDS-14 ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์ƒ˜ํ”Œ๋ง ์†๋„๋Š” 8000khz์ด๋ฏ€๋กœ(์ด ์ •๋ณด๋Š” [๋ฐ์ดํ„ฐ์„ธํŠธ ์นด๋“œ](https://huggingface.co/datasets/PolyAI/minds14)์—์„œ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค), ์‚ฌ์ „ ํ›ˆ๋ จ๋œ Wav2Vec2 ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋ ค๋ฉด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ 16000kHz๋กœ ๋ฆฌ์ƒ˜ํ”Œ๋งํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```py >>> minds = minds.cast_column("audio", Audio(sampling_rate=16_000)) >>> minds["train"][0] {'audio': {'array': array([ 2.2098757e-05, 4.6582241e-05, -2.2803260e-05, ..., -2.8419291e-04, -2.3305941e-04, -1.1425107e-04], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602b9a5fbb1e6d0fbce91f52.wav', 'sampling_rate': 16000}, 'intent_class': 2} ``` ์ด์ œ ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค: 1. ๊ฐ€์ ธ์˜ฌ `์˜ค๋””์˜ค` ์—ด์„ ํ˜ธ์ถœํ•˜๊ณ  ํ•„์š”ํ•œ ๊ฒฝ์šฐ ์˜ค๋””์˜ค ํŒŒ์ผ์„ ๋ฆฌ์ƒ˜ํ”Œ๋งํ•ฉ๋‹ˆ๋‹ค. 2. ์˜ค๋””์˜ค ํŒŒ์ผ์˜ ์ƒ˜ํ”Œ๋ง ์†๋„๊ฐ€ ๋ชจ๋ธ์— ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ์˜ค๋””์˜ค ๋ฐ์ดํ„ฐ์˜ ์ƒ˜ํ”Œ๋ง ์†๋„์™€ ์ผ์น˜ํ•˜๋Š”์ง€ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค. ์ด ์ •๋ณด๋Š” Wav2Vec2 [๋ชจ๋ธ ์นด๋“œ](https://huggingface.co/facebook/wav2vec2-base)์—์„œ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 3. ๊ธด ์ž…๋ ฅ์ด ์ž˜๋ฆฌ์ง€ ์•Š๊ณ  ์ผ๊ด„ ์ฒ˜๋ฆฌ๋˜๋„๋ก ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด๋ฅผ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค. ```py >>> def preprocess_function(examples): ... audio_arrays = [x["array"] for x in examples["audio"]] ... inputs = feature_extractor( ... audio_arrays, sampling_rate=feature_extractor.sampling_rate, max_length=16000, truncation=True ... ) ... return inputs ``` ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์ „์ฒ˜๋ฆฌ ๊ธฐ๋Šฅ์„ ์ ์šฉํ•˜๋ ค๋ฉด ๐Ÿค— Datasets [`~datasets.Dataset.map`] ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. `batched=True`๋ฅผ ์„ค์ •ํ•˜์—ฌ ๋ฐ์ดํ„ฐ ์ง‘ํ•ฉ์˜ ์—ฌ๋Ÿฌ ์š”์†Œ๋ฅผ ํ•œ ๋ฒˆ์— ์ฒ˜๋ฆฌํ•˜๋ฉด `map`์˜ ์†๋„๋ฅผ ๋†’์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
ํ•„์š”ํ•˜์ง€ ์•Š์€ ์—ด์„ ์ œ๊ฑฐํ•˜๊ณ  `intent_class`์˜ ์ด๋ฆ„์„ ๋ชจ๋ธ์ด ์˜ˆ์ƒํ•˜๋Š” ์ด๋ฆ„์ธ `label`๋กœ ๋ณ€๊ฒฝํ•ฉ๋‹ˆ๋‹ค: ```py >>> encoded_minds = minds.map(preprocess_function, remove_columns="audio", batched=True) >>> encoded_minds = encoded_minds.rename_column("intent_class", "label") ``` ## ํ‰๊ฐ€ํ•˜๊ธฐ[[evaluate]] ํ›ˆ๋ จ ์ค‘์— ๋ฉ”ํŠธ๋ฆญ์„ ํฌํ•จํ•˜๋ฉด ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์„ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์Šต๋‹ˆ๋‹ค. ๐Ÿค— [Evaluate](https://huggingface.co/docs/evaluate/index) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ‰๊ฐ€ ๋ฐฉ๋ฒ•์„ ๋น ๋ฅด๊ฒŒ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์ž‘์—…์—์„œ๋Š” [accuracy(์ •ํ™•๋„)](https://huggingface.co/spaces/evaluate-metric/accuracy) ๋ฉ”ํŠธ๋ฆญ์„ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค(๋ฉ”ํŠธ๋ฆญ์„ ๊ฐ€์ ธ์˜ค๊ณ  ๊ณ„์‚ฐํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ ๐Ÿค— Evalutate [๋น ๋ฅธ ๋‘˜๋Ÿฌ๋ณด๊ธฐ](https://huggingface.co/docs/evaluate/a_quick_tour) ์ฐธ์กฐํ•˜์„ธ์š”): ```py >>> import evaluate >>> accuracy = evaluate.load("accuracy") ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์˜ˆ์ธก๊ณผ ๋ ˆ์ด๋ธ”์„ [`~evaluate.EvaluationModule.compute`]์— ์ „๋‹ฌํ•˜์—ฌ ์ •ํ™•๋„๋ฅผ ๊ณ„์‚ฐํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค: ```py >>> import numpy as np >>> def compute_metrics(eval_pred): ... predictions = np.argmax(eval_pred.predictions, axis=1) ... return accuracy.compute(predictions=predictions, references=eval_pred.label_ids) ``` ์ด์ œ `compute_metrics` ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์œผ๋ฉฐ, ํŠธ๋ ˆ์ด๋‹์„ ์„ค์ •ํ•  ๋•Œ ์ด ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ## ํ›ˆ๋ จ[[train]] <frameworkcontent> <pt> <Tip> [`Trainer`]๋กœ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐ ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ [์—ฌ๊ธฐ](../training#train-with-pytorch-trainer)์„ ์‚ดํŽด๋ณด์„ธ์š”! </Tip> ์ด์ œ ๋ชจ๋ธ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`AutoModelForAudioClassification`]์„ ์ด์šฉํ•ด์„œ Wav2Vec2๋ฅผ ๋ถˆ๋Ÿฌ์˜ต๋‹ˆ๋‹ค. ์˜ˆ์ƒ๋˜๋Š” ๋ ˆ์ด๋ธ” ์ˆ˜์™€ ๋ ˆ์ด๋ธ” ๋งคํ•‘์„ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForAudioClassification, TrainingArguments, Trainer >>> num_labels = len(id2label) >>> model = AutoModelForAudioClassification.from_pretrained( ... "facebook/wav2vec2-base", num_labels=num_labels, label2id=label2id, id2label=id2label ... ) ``` ์ด์ œ ์„ธ ๋‹จ๊ณ„๋งŒ ๋‚จ์•˜์Šต๋‹ˆ๋‹ค: 1. ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ [`TrainingArguments`]์— ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. ์œ ์ผํ•œ ํ•„์ˆ˜ ๋งค๊ฐœ๋ณ€์ˆ˜๋Š” ๋ชจ๋ธ์„ ์ €์žฅํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•˜๋Š” `output_dir`์ž…๋‹ˆ๋‹ค. `push_to_hub = True`๋ฅผ ์„ค์ •ํ•˜์—ฌ ์ด ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ๋กœ ํ‘ธ์‹œํ•ฉ๋‹ˆ๋‹ค(๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๋ ค๋ฉด ํ—ˆ๊น… ํŽ˜์ด์Šค์— ๋กœ๊ทธ์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค). ๊ฐ ์—ํญ์ด ๋๋‚  ๋•Œ๋งˆ๋‹ค [`Trainer`]๊ฐ€ ์ •ํ™•๋„๋ฅผ ํ‰๊ฐ€ํ•˜๊ณ  ํ›ˆ๋ จ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ €์žฅํ•ฉ๋‹ˆ๋‹ค. 2. ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ ์„ธํŠธ, ํ† ํฌ๋‚˜์ด์ €, ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ, `compute_metrics` ํ•จ์ˆ˜์™€ ํ•จ๊ป˜ ํ›ˆ๋ จ ์ธ์ž๋ฅผ [`Trainer`]์— ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. 3. [`~Trainer.train`]์„ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค. ```py >>> training_args = TrainingArguments( ... output_dir="my_awesome_mind_model", ... eval_strategy="epoch", ... save_strategy="epoch", ... learning_rate=3e-5, ... per_device_train_batch_size=32, ... gradient_accumulation_steps=4, ... per_device_eval_batch_size=32, ... num_train_epochs=10, ... warmup_ratio=0.1, ... logging_steps=10, ... load_best_model_at_end=True, ... metric_for_best_model="accuracy", ... push_to_hub=True, ... ) >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=encoded_minds["train"], ... eval_dataset=encoded_minds["test"], ... tokenizer=feature_extractor, ... compute_metrics=compute_metrics, ... 
) >>> trainer.train() ``` ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด ๋ชจ๋“  ์‚ฌ๋žŒ์ด ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก [`~transformers.Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ๊ณต์œ ํ•˜์„ธ์š”: ```py >>> trainer.push_to_hub() ``` </pt> </frameworkcontent> <Tip> For a more in-depth example of how to finetune a model for audio classification, take a look at the corresponding [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/audio_classification.ipynb). </Tip> ## ์ถ”๋ก [[inference]] ์ด์ œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ–ˆ์œผ๋‹ˆ ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ์ถ”๋ก ์„ ์‹คํ–‰ํ•  ์˜ค๋””์˜ค ํŒŒ์ผ์„ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. ํ•„์š”ํ•œ ๊ฒฝ์šฐ ์˜ค๋””์˜ค ํŒŒ์ผ์˜ ์ƒ˜ํ”Œ๋ง ์†๋„๋ฅผ ๋ชจ๋ธ์˜ ์ƒ˜ํ”Œ๋ง ์†๋„์™€ ์ผ์น˜ํ•˜๋„๋ก ๋ฆฌ์ƒ˜ํ”Œ๋งํ•˜๋Š” ๊ฒƒ์„ ์žŠ์ง€ ๋งˆ์„ธ์š”! ```py >>> from datasets import load_dataset, Audio >>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train") >>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16000)) >>> sampling_rate = dataset.features["audio"].sampling_rate >>> audio_file = dataset[0]["audio"]["path"] ``` ์ถ”๋ก ์„ ์œ„ํ•ด ๋ฏธ์„ธ ์กฐ์ •ํ•œ ๋ชจ๋ธ์„ ์‹œํ—˜ํ•ด ๋ณด๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`pipeline`]์—์„œ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜์—ฌ ์˜ค๋””์˜ค ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•œ `pipeline`์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ  ์˜ค๋””์˜ค ํŒŒ์ผ์„ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import pipeline >>> classifier = pipeline("audio-classification", model="stevhliu/my_awesome_minds_model") >>> classifier(audio_file) [ {'score': 0.09766869246959686, 'label': 'cash_deposit'}, {'score': 0.07998877018690109, 'label': 'app_error'}, {'score': 0.0781070664525032, 'label': 'joint_account'}, {'score': 0.07667109370231628, 'label': 'pay_bill'}, {'score': 0.0755252093076706, 'label': 'balance'} ] ``` ์›ํ•˜๋Š” ๊ฒฝ์šฐ `pipeline`์˜ ๊ฒฐ๊ณผ๋ฅผ ์ˆ˜๋™์œผ๋กœ ๋ณต์ œํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: <frameworkcontent> <pt> ํŠน์ง• ์ถ”์ถœ๊ธฐ๋ฅผ ๊ฐ€์ ธ์™€์„œ ์˜ค๋””์˜ค ํŒŒ์ผ์„ ์ „์ฒ˜๋ฆฌํ•˜๊ณ  `์ž…๋ ฅ`์„ PyTorch ํ…์„œ๋กœ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoFeatureExtractor >>> feature_extractor = AutoFeatureExtractor.from_pretrained("stevhliu/my_awesome_minds_model") >>> inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt") ``` ๋ชจ๋ธ์— ์ž…๋ ฅ์„ ์ „๋‹ฌํ•˜๊ณ  ๋กœ์ง“์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForAudioClassification >>> model = AutoModelForAudioClassification.from_pretrained("stevhliu/my_awesome_minds_model") >>> with torch.no_grad(): ... logits = model(**inputs).logits ``` ํ™•๋ฅ ์ด ๊ฐ€์žฅ ๋†’์€ ํด๋ž˜์Šค๋ฅผ ๊ฐ€์ ธ์˜จ ๋‹ค์Œ ๋ชจ๋ธ์˜ `id2label` ๋งคํ•‘์„ ์‚ฌ์šฉํ•˜์—ฌ ์ด๋ฅผ ๋ ˆ์ด๋ธ”๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> import torch >>> predicted_class_ids = torch.argmax(logits).item() >>> predicted_label = model.config.id2label[predicted_class_ids] >>> predicted_label 'cash_deposit' ``` </pt> </frameworkcontent>
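<frameworkcontent>
<pt>
ํŒŒ์ดํ”„๋ผ์ธ์ฒ˜๋Ÿผ ์ƒ์œ„ 5๊ฐœ ํด๋ž˜์Šค์˜ ์ ์ˆ˜๋ฅผ ํ™•์ธํ•˜๊ณ  ์‹ถ๋‹ค๋ฉด, ๋กœ์ง“์— ์†Œํ”„ํŠธ๋งฅ์Šค๋ฅผ ์ ์šฉํ•œ ๋’ค `torch.topk`๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ์œ„์—์„œ ๊ณ„์‚ฐํ•œ `logits`์™€ `model`์„ ๊ทธ๋Œ€๋กœ ์‚ฌ์šฉํ•œ๋‹ค๊ณ  ๊ฐ€์ •ํ•œ ๊ฐ„๋‹จํ•œ ์Šค์ผ€์น˜์ž…๋‹ˆ๋‹ค:

```py
>>> probs = torch.softmax(logits, dim=-1)[0]
>>> top5 = torch.topk(probs, k=5)
>>> [(model.config.id2label[i.item()], round(p.item(), 4)) for p, i in zip(top5.values, top5.indices)]
```
</pt>
</frameworkcontent>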
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์ œ๋กœ์ƒท(zero-shot) ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜[[zeroshot-image-classification]] [[open-in-colab]] ์ œ๋กœ์ƒท(zero-shot) ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜๋Š” ํŠน์ • ์นดํ…Œ๊ณ ๋ฆฌ์˜ ์˜ˆ์‹œ๊ฐ€ ํฌํ•จ๋œ ๋ฐ์ดํ„ฐ๋ฅผ ํ•™์Šต๋˜์ง€ ์•Š์€ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ด ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜๋ฅผ ์ˆ˜ํ–‰ํ•˜๋Š” ์ž‘์—…์ž…๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•ด์„œ๋Š” ๋ ˆ์ด๋ธ”์ด ๋‹ฌ๋ฆฐ ํŠน์ • ์ด๋ฏธ์ง€ ๋ฐ์ดํ„ฐ๋กœ ๋ชจ๋ธ ํ•™์Šต์ด ํ•„์š”ํ•˜๋ฉฐ, ์ด ๋ชจ๋ธ์€ ํŠน์ • ์ด๋ฏธ์ง€์˜ ํŠน์ง•์„ ๋ ˆ์ด๋ธ”์— "๋งคํ•‘"ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ•™์Šตํ•ฉ๋‹ˆ๋‹ค. ์ƒˆ๋กœ์šด ๋ ˆ์ด๋ธ”์ด ์žˆ๋Š” ๋ถ„๋ฅ˜ ์ž‘์—…์— ์ด๋Ÿฌํ•œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ด์•ผ ํ•˜๋Š” ๊ฒฝ์šฐ์—๋Š”, ๋ชจ๋ธ์„ "์žฌ๋ณด์ •"ํ•˜๊ธฐ ์œ„ํ•ด ๋ฏธ์„ธ ์กฐ์ •์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ์ด์™€ ๋Œ€์กฐ์ ์œผ๋กœ, ์ œ๋กœ์ƒท ๋˜๋Š” ๊ฐœ๋ฐฉํ˜• ์–ดํœ˜(open vocabulary) ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜ ๋ชจ๋ธ์€ ์ผ๋ฐ˜์ ์œผ๋กœ ๋Œ€๊ทœ๋ชจ ์ด๋ฏธ์ง€ ๋ฐ์ดํ„ฐ์™€ ํ•ด๋‹น ์„ค๋ช…์— ๋Œ€ํ•ด ํ•™์Šต๋œ ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ(multimodal) ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋ชจ๋ธ์€ ์ œ๋กœ์ƒท ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜๋ฅผ ํฌํ•จํ•œ ๋งŽ์€ ๋‹ค์šด์ŠคํŠธ๋ฆผ ์ž‘์—…์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ์ •๋ ฌ๋œ(aligned) ๋น„์ „ ์–ธ์–ด ํ‘œํ˜„์„ ํ•™์Šตํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜์— ๋Œ€ํ•œ ๋ณด๋‹ค ์œ ์—ฐํ•œ ์ ‘๊ทผ ๋ฐฉ์‹์œผ๋กœ, ์ถ”๊ฐ€ ํ•™์Šต ๋ฐ์ดํ„ฐ ์—†์ด ์ƒˆ๋กœ์šด ๋ ˆ์ด๋ธ”์ด๋‚˜ ํ•™์Šตํ•˜์ง€ ๋ชปํ•œ ์นดํ…Œ๊ณ ๋ฆฌ์— ๋Œ€ํ•ด ๋ชจ๋ธ์„ ์ผ๋ฐ˜ํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ, ์‚ฌ์šฉ์ž๊ฐ€ ๋Œ€์ƒ ๊ฐœ์ฒด์— ๋Œ€ํ•œ ์ž์œ  ํ˜•์‹์˜ ํ…์ŠคํŠธ ์„ค๋ช…์œผ๋กœ ์ด๋ฏธ์ง€๋ฅผ ๊ฒ€์ƒ‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฒˆ ๊ฐ€์ด๋“œ์—์„œ ๋ฐฐ์šธ ๋‚ด์šฉ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: * ์ œ๋กœ์ƒท ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜ ํŒŒ์ดํ”„๋ผ์ธ ๋งŒ๋“ค๊ธฐ * ์ง์ ‘ ์ œ๋กœ์ƒท ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜ ๋ชจ๋ธ ์ถ”๋ก  ์‹คํ–‰ํ•˜๊ธฐ ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ๋ชจ๋‘ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install -q transformers ``` ## ์ œ๋กœ์ƒท(zero-shot) ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜ ํŒŒ์ดํ”„๋ผ์ธ[[zeroshot-image-classification-pipeline]] [`pipeline`]์„ ํ™œ์šฉํ•˜๋ฉด ๊ฐ€์žฅ ๊ฐ„๋‹จํ•˜๊ฒŒ ์ œ๋กœ์ƒท ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜๋ฅผ ์ง€์›ํ•˜๋Š” ๋ชจ๋ธ๋กœ ์ถ”๋ก ํ•ด๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [Hugging Face Hub์— ์—…๋กœ๋“œ๋œ ์ฒดํฌํฌ์ธํŠธ](https://huggingface.co/models?pipeline_tag=zero-shot-image-classification&sort=downloads)์—์„œ ํŒŒ์ดํ”„๋ผ์ธ์„ ์ธ์Šคํ„ด์Šคํ™”ํ•ฉ๋‹ˆ๋‹ค. ```python >>> from transformers import pipeline >>> checkpoint = "openai/clip-vit-large-patch14" >>> detector = pipeline(model=checkpoint, task="zero-shot-image-classification") ``` ๋‹ค์Œ์œผ๋กœ, ๋ถ„๋ฅ˜ํ•˜๊ณ  ์‹ถ์€ ์ด๋ฏธ์ง€๋ฅผ ์„ ํƒํ•˜์„ธ์š”. 
```py
>>> from PIL import Image
>>> import requests

>>> url = "https://unsplash.com/photos/g8oS8-82DxI/download?ixid=MnwxMjA3fDB8MXx0b3BpY3x8SnBnNktpZGwtSGt8fHx8fDJ8fDE2NzgxMDYwODc&force=true&w=640"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image
```

<div class="flex justify-center">
     <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/owl.jpg" alt="Photo of an owl"/>
</div>

์ด๋ฏธ์ง€์™€ ํ•ด๋‹น ์ด๋ฏธ์ง€์˜ ํ›„๋ณด ๋ ˆ์ด๋ธ”์ธ `candidate_labels`๋ฅผ ํŒŒ์ดํ”„๋ผ์ธ์œผ๋กœ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค.
์—ฌ๊ธฐ์„œ๋Š” ์ด๋ฏธ์ง€๋ฅผ ์ง์ ‘ ์ „๋‹ฌํ•˜์ง€๋งŒ, ์ปดํ“จํ„ฐ์— ์ €์žฅ๋œ ์ด๋ฏธ์ง€์˜ ๊ฒฝ๋กœ๋‚˜ url๋กœ ์ „๋‹ฌํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค.
`candidate_labels`๋Š” ์ด ์˜ˆ์‹œ์ฒ˜๋Ÿผ ๊ฐ„๋‹จํ•œ ๋‹จ์–ด์ผ ์ˆ˜๋„ ์žˆ๊ณ  ์ข€ ๋” ์„ค๋ช…์ ์ธ ๋‹จ์–ด์ผ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค.

```py
>>> predictions = detector(image, candidate_labels=["fox", "bear", "seagull", "owl"])
>>> predictions
[{'score': 0.9996670484542847, 'label': 'owl'},
 {'score': 0.000199399160919711, 'label': 'seagull'},
 {'score': 7.392891711788252e-05, 'label': 'fox'},
 {'score': 5.96074532950297e-05, 'label': 'bear'}]
```

## ์ง์ ‘ ์ œ๋กœ์ƒท(zero-shot) ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜ํ•˜๊ธฐ[[zeroshot-image-classification-by-hand]]

์ด์ œ ์ œ๋กœ์ƒท ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜ ํŒŒ์ดํ”„๋ผ์ธ ์‚ฌ์šฉ ๋ฐฉ๋ฒ•์„ ์‚ดํŽด๋ณด์•˜์œผ๋‹ˆ, ์‹คํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค.

[Hugging Face Hub์— ์—…๋กœ๋“œ๋œ ์ฒดํฌํฌ์ธํŠธ](https://huggingface.co/models?pipeline_tag=zero-shot-image-classification&sort=downloads)์—์„œ ๋ชจ๋ธ๊ณผ ํ”„๋กœ์„ธ์„œ๋ฅผ ๊ฐ€์ ธ์˜ค๋Š” ๊ฒƒ์œผ๋กœ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค.
์—ฌ๊ธฐ์„œ๋Š” ์ด์ „๊ณผ ๋™์ผํ•œ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค:

```py
>>> from transformers import AutoProcessor, AutoModelForZeroShotImageClassification

>>> model = AutoModelForZeroShotImageClassification.from_pretrained(checkpoint)
>>> processor = AutoProcessor.from_pretrained(checkpoint)
```

๋‹ค๋ฅธ ์ด๋ฏธ์ง€๋ฅผ ์‚ฌ์šฉํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค.

```py
>>> from PIL import Image
>>> import requests

>>> url = "https://unsplash.com/photos/xBRQfR2bqNI/download?ixid=MnwxMjA3fDB8MXxhbGx8fHx8fHx8fHwxNjc4Mzg4ODEx&force=true&w=640"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image
```

<div class="flex justify-center">
     <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg" alt="Photo of a car"/>
</div>

ํ”„๋กœ์„ธ์„œ๋ฅผ ์‚ฌ์šฉํ•ด ๋ชจ๋ธ์˜ ์ž…๋ ฅ์„ ์ค€๋น„ํ•ฉ๋‹ˆ๋‹ค.
ํ”„๋กœ์„ธ์„œ๋Š” ๋ชจ๋ธ์˜ ์ž…๋ ฅ์œผ๋กœ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด ์ด๋ฏธ์ง€ ํฌ๊ธฐ๋ฅผ ๋ณ€ํ™˜ํ•˜๊ณ  ์ •๊ทœํ™”ํ•˜๋Š” ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ์™€ ํ…์ŠคํŠธ ์ž…๋ ฅ์„ ์ฒ˜๋ฆฌํ•˜๋Š” ํ† ํฌ๋‚˜์ด์ €๋กœ ๊ตฌ์„ฑ๋ฉ๋‹ˆ๋‹ค.

```py
>>> candidate_labels = ["tree", "car", "bike", "cat"]
>>> inputs = processor(images=image, text=candidate_labels, return_tensors="pt", padding=True)
```

๋ชจ๋ธ์— ์ž…๋ ฅ์„ ์ „๋‹ฌํ•˜๊ณ , ๊ฒฐ๊ณผ๋ฅผ ํ›„์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค:

```py
>>> import torch

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> logits = outputs.logits_per_image[0]
>>> probs = logits.softmax(dim=-1).numpy()
>>> scores = probs.tolist()

>>> result = [
...     {"score": score, "label": candidate_label}
...     for score, candidate_label in sorted(zip(probs, candidate_labels), key=lambda x: -x[0])
... ]

>>> result
[{'score': 0.998572, 'label': 'car'},
 {'score': 0.0010570387, 'label': 'bike'},
 {'score': 0.0003393686, 'label': 'tree'},
 {'score': 3.1572064e-05, 'label': 'cat'}]
```
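์•ž์„œ ์–ธ๊ธ‰ํ–ˆ๋“ฏ์ด ํ›„๋ณด ๋ ˆ์ด๋ธ”์—๋Š” ๋‹จ์ˆœํ•œ ๋‹จ์–ด ๋Œ€์‹  ์ข€ ๋” ์„ค๋ช…์ ์ธ ๋ฌธ๊ตฌ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ์œ„์—์„œ ๊ฐ€์ ธ์˜จ `model`, `processor`, `image`๋ฅผ ๊ทธ๋Œ€๋กœ ์‚ฌ์šฉํ•œ๋‹ค๊ณ  ๊ฐ€์ •ํ•œ ๊ฐ„๋‹จํ•œ ์Šค์ผ€์น˜๋กœ, ๋ฌธ๊ตฌํ˜• ๋ ˆ์ด๋ธ”๋„ ๊ฐ™์€ ๋ฐฉ์‹์œผ๋กœ ์ฒ˜๋ฆฌํ•  ์ˆ˜ ์žˆ์Œ์„ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค:

```py
>>> descriptive_labels = ["a photo of a parked car", "a photo of a bicycle", "a photo of a tree", "a photo of a cat"]
>>> inputs = processor(images=image, text=descriptive_labels, return_tensors="pt", padding=True)

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> probs = outputs.logits_per_image[0].softmax(dim=-1)
>>> {label: round(prob.item(), 4) for label, prob in zip(descriptive_labels, probs)}
```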
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๋ฌธ์„œ ์งˆ์˜ ์‘๋‹ต(Document Question Answering) [[document_question_answering]] [[open-in-colab]] ๋ฌธ์„œ ์‹œ๊ฐ์  ์งˆ์˜ ์‘๋‹ต(Document Visual Question Answering)์ด๋ผ๊ณ ๋„ ํ•˜๋Š” ๋ฌธ์„œ ์งˆ์˜ ์‘๋‹ต(Document Question Answering)์€ ๋ฌธ์„œ ์ด๋ฏธ์ง€์— ๋Œ€ํ•œ ์งˆ๋ฌธ์— ๋‹ต๋ณ€์„ ์ฃผ๋Š” ํƒœ์Šคํฌ์ž…๋‹ˆ๋‹ค. ์ด ํƒœ์Šคํฌ๋ฅผ ์ง€์›ํ•˜๋Š” ๋ชจ๋ธ์˜ ์ž…๋ ฅ์€ ์ผ๋ฐ˜์ ์œผ๋กœ ์ด๋ฏธ์ง€์™€ ์งˆ๋ฌธ์˜ ์กฐํ•ฉ์ด๊ณ , ์ถœ๋ ฅ์€ ์ž์—ฐ์–ด๋กœ ๋œ ๋‹ต๋ณ€์ž…๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋ชจ๋ธ์€ ํ…์ŠคํŠธ, ๋‹จ์–ด์˜ ์œ„์น˜(๋ฐ”์šด๋”ฉ ๋ฐ•์Šค), ์ด๋ฏธ์ง€ ๋“ฑ ๋‹ค์–‘ํ•œ ๋ชจ๋‹ฌ๋ฆฌํ‹ฐ๋ฅผ ํ™œ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ๋Š” ๋‹ค์Œ ๋‚ด์šฉ์„ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค: - [DocVQA dataset](https://huggingface.co/datasets/nielsr/docvqa_1200_examples_donut)์„ ์‚ฌ์šฉํ•ด [LayoutLMv2](../model_doc/layoutlmv2) ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ - ์ถ”๋ก ์„ ์œ„ํ•ด ๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๊ธฐ <Tip> ์ด ์ž‘์—…๊ณผ ํ˜ธํ™˜๋˜๋Š” ๋ชจ๋“  ์•„ํ‚คํ…์ฒ˜์™€ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋ณด๋ ค๋ฉด [์ž‘์—… ํŽ˜์ด์ง€](https://huggingface.co/tasks/image-to-text)๋ฅผ ํ™•์ธํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. </Tip> LayoutLMv2๋Š” ํ† ํฐ์˜ ๋งˆ์ง€๋ง‰ ์€๋‹‰์ธต ์œ„์— ์งˆ์˜ ์‘๋‹ต ํ—ค๋“œ๋ฅผ ์ถ”๊ฐ€ํ•ด ๋‹ต๋ณ€์˜ ์‹œ์ž‘ ํ† ํฐ๊ณผ ๋ ํ† ํฐ์˜ ์œ„์น˜๋ฅผ ์˜ˆ์ธกํ•จ์œผ๋กœ์จ ๋ฌธ์„œ ์งˆ์˜ ์‘๋‹ต ํƒœ์Šคํฌ๋ฅผ ํ•ด๊ฒฐํ•ฉ๋‹ˆ๋‹ค. ์ฆ‰, ๋ฌธ๋งฅ์ด ์ฃผ์–ด์กŒ์„ ๋•Œ ์งˆ๋ฌธ์— ๋‹ตํ•˜๋Š” ์ •๋ณด๋ฅผ ์ถ”์ถœํ•˜๋Š” ์ถ”์ถœํ˜• ์งˆ์˜ ์‘๋‹ต(Extractive question answering)์œผ๋กœ ๋ฌธ์ œ๋ฅผ ์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค. ๋ฌธ๋งฅ์€ OCR ์—”์ง„์˜ ์ถœ๋ ฅ์—์„œ ๊ฐ€์ ธ์˜ค๋ฉฐ, ์—ฌ๊ธฐ์„œ๋Š” Google์˜ Tesseract๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ๋ชจ๋‘ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”. LayoutLMv2๋Š” detectron2, torchvision ๋ฐ ํ…Œ์„œ๋ž™ํŠธ๋ฅผ ํ•„์š”๋กœ ํ•ฉ๋‹ˆ๋‹ค. ```bash pip install -q transformers datasets ``` ```bash pip install 'git+https://github.com/facebookresearch/detectron2.git' pip install torchvision ``` ```bash sudo apt install tesseract-ocr pip install -q pytesseract ``` ํ•„์š”ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋“ค์„ ๋ชจ๋‘ ์„ค์น˜ํ•œ ํ›„ ๋Ÿฐํƒ€์ž„์„ ๋‹ค์‹œ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค. ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๋‹น์‹ ์˜ ๋ชจ๋ธ์„ ๊ณต์œ ํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•ด์„œ ๋ชจ๋ธ์„ ๐Ÿค— Hub์— ์—…๋กœ๋“œํ•˜์„ธ์š”. ํ”„๋กฌํ”„ํŠธ๊ฐ€ ์‹คํ–‰๋˜๋ฉด, ๋กœ๊ทธ์ธ์„ ์œ„ํ•ด ํ† ํฐ์„ ์ž…๋ ฅํ•˜์„ธ์š”: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ๋ช‡ ๊ฐ€์ง€ ์ „์—ญ ๋ณ€์ˆ˜๋ฅผ ์ •์˜ํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ```py >>> model_checkpoint = "microsoft/layoutlmv2-base-uncased" >>> batch_size = 4 ``` ## ๋ฐ์ดํ„ฐ ๋ถˆ๋Ÿฌ์˜ค๊ธฐ [[load-the-data]] ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ๐Ÿค— Hub์—์„œ ์ฐพ์„ ์ˆ˜ ์žˆ๋Š” ์ „์ฒ˜๋ฆฌ๋œ DocVQA์˜ ์ž‘์€ ์ƒ˜ํ”Œ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. DocVQA์˜ ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜๊ณ  ์‹ถ๋‹ค๋ฉด, [DocVQA homepage](https://rrc.cvc.uab.es/?ch=17)์— ๊ฐ€์ž… ํ›„ ๋‹ค์šด๋กœ๋“œ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋‹ค์šด๋กœ๋“œ ํ–ˆ๋‹ค๋ฉด, ์ด ๊ฐ€์ด๋“œ๋ฅผ ๊ณ„์† ์ง„ํ–‰ํ•˜๊ธฐ ์œ„ํ•ด [๐Ÿค— dataset์— ํŒŒ์ผ์„ ๊ฐ€์ ธ์˜ค๋Š” ๋ฐฉ๋ฒ•](https://huggingface.co/docs/datasets/loading#local-and-remote-files)์„ ํ™•์ธํ•˜์„ธ์š”. ```py >>> from datasets import load_dataset >>> dataset = load_dataset("nielsr/docvqa_1200_examples") >>> dataset DatasetDict({ train: Dataset({ features: ['id', 'image', 'query', 'answers', 'words', 'bounding_boxes', 'answer'], num_rows: 1000 }) test: Dataset({ features: ['id', 'image', 'query', 'answers', 'words', 'bounding_boxes', 'answer'], num_rows: 200 }) }) ``` ๋ณด์‹œ๋‹ค์‹œํ”ผ, ๋ฐ์ดํ„ฐ ์„ธํŠธ๋Š” ์ด๋ฏธ ํ›ˆ๋ จ ์„ธํŠธ์™€ ํ…Œ์ŠคํŠธ ์„ธํŠธ๋กœ ๋‚˜๋ˆ„์–ด์ ธ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฌด์ž‘์œ„๋กœ ์˜ˆ์ œ๋ฅผ ์‚ดํŽด๋ณด๋ฉด์„œ ํŠน์„ฑ์„ ํ™•์ธํ•ด๋ณด์„ธ์š”. ```py >>> dataset["train"].features ``` ๊ฐ ํ•„๋“œ๊ฐ€ ๋‚˜ํƒ€๋‚ด๋Š” ๋‚ด์šฉ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: * `id`: ์˜ˆ์ œ์˜ id * `image`: ๋ฌธ์„œ ์ด๋ฏธ์ง€๋ฅผ ํฌํ•จํ•˜๋Š” PIL.Image.Image ๊ฐ์ฒด * `query`: ์งˆ๋ฌธ ๋ฌธ์ž์—ด - ์—ฌ๋Ÿฌ ์–ธ์–ด์˜ ์ž์—ฐ์–ด๋กœ ๋œ ์งˆ๋ฌธ * `answers`: ์‚ฌ๋žŒ์ด ์ฃผ์„์„ ๋‹จ ์ •๋‹ต ๋ฆฌ์ŠคํŠธ * `words` and `bounding_boxes`: OCR์˜ ๊ฒฐ๊ณผ๊ฐ’๋“ค์ด๋ฉฐ ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ์‚ฌ์šฉํ•˜์ง€ ์•Š์„ ์˜ˆ์ • * `answer`: ๋‹ค๋ฅธ ๋ชจ๋ธ๊ณผ ์ผ์น˜ํ•˜๋Š” ๋‹ต๋ณ€์ด๋ฉฐ ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ์‚ฌ์šฉํ•˜์ง€ ์•Š์„ ์˜ˆ์ • ์˜์–ด๋กœ ๋œ ์งˆ๋ฌธ๋งŒ ๋‚จ๊ธฐ๊ณ  ๋‹ค๋ฅธ ๋ชจ๋ธ์— ๋Œ€ํ•œ ์˜ˆ์ธก์„ ํฌํ•จํ•˜๋Š” `answer` ํŠน์„ฑ์„ ์‚ญ์ œํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ์ฃผ์„ ์ž‘์„ฑ์ž๊ฐ€ ์ œ๊ณตํ•œ ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ ์ฒซ ๋ฒˆ์งธ ๋‹ต๋ณ€์„ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. ๋˜๋Š” ๋ฌด์ž‘์œ„๋กœ ์ƒ˜ํ”Œ์„ ์ถ”์ถœํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> updated_dataset = dataset.map(lambda example: {"question": example["query"]["en"]}, remove_columns=["query"]) >>> updated_dataset = updated_dataset.map( ... lambda example: {"answer": example["answers"][0]}, remove_columns=["answer", "answers"] ... ) ``` ์ด ๊ฐ€์ด๋“œ์—์„œ ์‚ฌ์šฉํ•˜๋Š” LayoutLMv2 ์ฒดํฌํฌ์ธํŠธ๋Š” `max_position_embeddings = 512`๋กœ ํ›ˆ๋ จ๋˜์—ˆ์Šต๋‹ˆ๋‹ค(์ด ์ •๋ณด๋Š” [์ฒดํฌํฌ์ธํŠธ์˜ `config.json` ํŒŒ์ผ](https://huggingface.co/microsoft/layoutlmv2-base-uncased/blob/main/config.json#L18)์—์„œ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค). ๋ฐ”๋กœ ์˜ˆ์ œ๋ฅผ ์ž˜๋ผ๋‚ผ ์ˆ˜๋„ ์žˆ์ง€๋งŒ, ๊ธด ๋ฌธ์„œ์˜ ๋์— ๋‹ต๋ณ€์ด ์žˆ์–ด ์ž˜๋ฆฌ๋Š” ์ƒํ™ฉ์„ ํ”ผํ•˜๊ธฐ ์œ„ํ•ด ์—ฌ๊ธฐ์„œ๋Š” ์ž„๋ฒ ๋”ฉ์ด 512๋ณด๋‹ค ๊ธธ์–ด์งˆ ๊ฐ€๋Šฅ์„ฑ์ด ์žˆ๋Š” ๋ช‡ ๊ฐ€์ง€ ์˜ˆ์ œ๋ฅผ ์ œ๊ฑฐํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์žˆ๋Š” ๋Œ€๋ถ€๋ถ„์˜ ๋ฌธ์„œ๊ฐ€ ๊ธด ๊ฒฝ์šฐ ์Šฌ๋ผ์ด๋”ฉ ์œˆ๋„์šฐ ๋ฐฉ๋ฒ•์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค - ์ž์„ธํ•œ ๋‚ด์šฉ์„ ํ™•์ธํ•˜๊ณ  ์‹ถ์œผ๋ฉด ์ด [๋…ธํŠธ๋ถ](https://github.com/huggingface/notebooks/blob/main/examples/question_answering.ipynb)์„ ํ™•์ธํ•˜์„ธ์š”. ```py >>> updated_dataset = updated_dataset.filter(lambda x: len(x["words"]) + len(x["question"].split()) < 512) ``` ์ด ์‹œ์ ์—์„œ ์ด ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ OCR ํŠน์„ฑ๋„ ์ œ๊ฑฐํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. OCR ํŠน์„ฑ์€ ๋‹ค๋ฅธ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ ์œ„ํ•œ ๊ฒƒ์œผ๋กœ, ์ด ๊ฐ€์ด๋“œ์—์„œ ์‚ฌ์šฉํ•˜๋Š” ๋ชจ๋ธ์˜ ์ž…๋ ฅ ์š”๊ตฌ ์‚ฌํ•ญ๊ณผ ์ผ์น˜ํ•˜์ง€ ์•Š๊ธฐ ๋•Œ๋ฌธ์— ์ด ํŠน์„ฑ์„ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” ์ผ๋ถ€ ์ฒ˜๋ฆฌ๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ๋Œ€์‹ , ์›๋ณธ ๋ฐ์ดํ„ฐ์— [`LayoutLMv2Processor`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ OCR ๋ฐ ํ† ํฐํ™”๋ฅผ ๋ชจ๋‘ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ๋ชจ๋ธ์ด ์š”๊ตฌํ•˜๋Š” ์ž…๋ ฅ์„ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€๋ฅผ ์ˆ˜๋™์œผ๋กœ ์ฒ˜๋ฆฌํ•˜๋ ค๋ฉด, [`LayoutLMv2` model documentation](../model_doc/layoutlmv2)์—์„œ ๋ชจ๋ธ์ด ์š”๊ตฌํ•˜๋Š” ์ž…๋ ฅ ํฌ๋งท์„ ํ™•์ธํ•ด๋ณด์„ธ์š”. 
```py >>> updated_dataset = updated_dataset.remove_columns("words") >>> updated_dataset = updated_dataset.remove_columns("bounding_boxes") ``` ๋งˆ์ง€๋ง‰์œผ๋กœ, ๋ฐ์ดํ„ฐ ํƒ์ƒ‰์„ ์™„๋ฃŒํ•˜๊ธฐ ์œ„ํ•ด ์ด๋ฏธ์ง€ ์˜ˆ์‹œ๋ฅผ ์‚ดํŽด๋ด…์‹œ๋‹ค. ```py >>> updated_dataset["train"][11]["image"] ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/docvqa_example.jpg" alt="DocVQA Image Example"/> </div> ## ๋ฐ์ดํ„ฐ ์ „์ฒ˜๋ฆฌ [[preprocess-the-data]] ๋ฌธ์„œ ์งˆ์˜ ์‘๋‹ต ํƒœ์Šคํฌ๋Š” ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ ํƒœ์Šคํฌ์ด๋ฉฐ, ๊ฐ ๋ชจ๋‹ฌ๋ฆฌํ‹ฐ์˜ ์ž…๋ ฅ์ด ๋ชจ๋ธ์˜ ์š”๊ตฌ์— ๋งž๊ฒŒ ์ „์ฒ˜๋ฆฌ ๋˜์—ˆ๋Š”์ง€ ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ๋ฐ์ดํ„ฐ๋ฅผ ์ฒ˜๋ฆฌํ•  ์ˆ˜ ์žˆ๋Š” ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ์™€ ํ…์ŠคํŠธ ๋ฐ์ดํ„ฐ๋ฅผ ์ธ์ฝ”๋”ฉํ•  ์ˆ˜ ์žˆ๋Š” ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๊ฒฐํ•ฉํ•œ [`LayoutLMv2Processor`]๋ฅผ ๊ฐ€์ ธ์˜ค๋Š” ๊ฒƒ๋ถ€ํ„ฐ ์‹œ์ž‘ํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ```py >>> from transformers import AutoProcessor >>> processor = AutoProcessor.from_pretrained(model_checkpoint) ``` ### ๋ฌธ์„œ ์ด๋ฏธ์ง€ ์ „์ฒ˜๋ฆฌ [[preprocessing-document-images]] ๋จผ์ €, ํ”„๋กœ์„ธ์„œ์˜ `image_processor`๋ฅผ ์‚ฌ์šฉํ•ด ๋ชจ๋ธ์— ๋Œ€ํ•œ ๋ฌธ์„œ ์ด๋ฏธ์ง€๋ฅผ ์ค€๋น„ํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ๊ธฐ๋ณธ๊ฐ’์œผ๋กœ, ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋Š” ์ด๋ฏธ์ง€ ํฌ๊ธฐ๋ฅผ 224x224๋กœ ์กฐ์ •ํ•˜๊ณ  ์ƒ‰์ƒ ์ฑ„๋„์˜ ์ˆœ์„œ๊ฐ€ ์˜ฌ๋ฐ”๋ฅธ์ง€ ํ™•์ธํ•œ ํ›„ ๋‹จ์–ด์™€ ์ •๊ทœํ™”๋œ ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค๋ฅผ ์–ป๊ธฐ ์œ„ํ•ด ํ…Œ์„œ๋ž™ํŠธ๋ฅผ ์‚ฌ์šฉํ•ด OCR๋ฅผ ์ ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ ์šฐ๋ฆฌ๊ฐ€ ํ•„์š”ํ•œ ๊ฒƒ๊ณผ ๊ธฐ๋ณธ๊ฐ’์€ ์™„์ „ํžˆ ๋™์ผํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ๋ฐฐ์น˜์— ๊ธฐ๋ณธ ์ด๋ฏธ์ง€ ์ฒ˜๋ฆฌ๋ฅผ ์ ์šฉํ•˜๊ณ  OCR์˜ ๊ฒฐ๊ณผ๋ฅผ ๋ณ€ํ™˜ํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ์ž‘์„ฑํ•ฉ๋‹ˆ๋‹ค. ```py >>> image_processor = processor.image_processor >>> def get_ocr_words_and_boxes(examples): ... images = [image.convert("RGB") for image in examples["image"]] ... encoded_inputs = image_processor(images) ... examples["image"] = encoded_inputs.pixel_values ... examples["words"] = encoded_inputs.words ... examples["boxes"] = encoded_inputs.boxes ... return examples ``` ์ด ์ „์ฒ˜๋ฆฌ๋ฅผ ๋ฐ์ดํ„ฐ ์„ธํŠธ ์ „์ฒด์— ๋น ๋ฅด๊ฒŒ ์ ์šฉํ•˜๋ ค๋ฉด [`~datasets.Dataset.map`]๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. ```py >>> dataset_with_ocr = updated_dataset.map(get_ocr_words_and_boxes, batched=True, batch_size=2) ``` ### ํ…์ŠคํŠธ ๋ฐ์ดํ„ฐ ์ „์ฒ˜๋ฆฌ [[preprocessing-text-data]] ์ด๋ฏธ์ง€์— OCR์„ ์ ์šฉํ–ˆ์œผ๋ฉด ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ํ…์ŠคํŠธ ๋ถ€๋ถ„์„ ๋ชจ๋ธ์— ๋งž๊ฒŒ ์ธ์ฝ”๋”ฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด ์ธ์ฝ”๋”ฉ์—๋Š” ์ด์ „ ๋‹จ๊ณ„์—์„œ ๊ฐ€์ ธ์˜จ ๋‹จ์–ด์™€ ๋ฐ•์Šค๋ฅผ ํ† ํฐ ์ˆ˜์ค€์˜ `input_ids`, `attention_mask`, `token_type_ids` ๋ฐ `bbox`๋กœ ๋ณ€ํ™˜ํ•˜๋Š” ์ž‘์—…์ด ํฌํ•จ๋ฉ๋‹ˆ๋‹ค. ํ…์ŠคํŠธ๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๋ ค๋ฉด ํ”„๋กœ์„ธ์„œ์˜ `tokenizer`๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ```py >>> tokenizer = processor.tokenizer ``` ์œ„์—์„œ ์–ธ๊ธ‰ํ•œ ์ „์ฒ˜๋ฆฌ ์™ธ์—๋„ ๋ชจ๋ธ์„ ์œ„ํ•ด ๋ ˆ์ด๋ธ”์„ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๐Ÿค— Transformers์˜ `xxxForQuestionAnswering` ๋ชจ๋ธ์˜ ๊ฒฝ์šฐ, ๋ ˆ์ด๋ธ”์€ `start_positions`์™€ `end_positions`๋กœ ๊ตฌ์„ฑ๋˜๋ฉฐ ์–ด๋–ค ํ† ํฐ์ด ๋‹ต๋ณ€์˜ ์‹œ์ž‘๊ณผ ๋์— ์žˆ๋Š”์ง€๋ฅผ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. ๋ ˆ์ด๋ธ” ์ถ”๊ฐ€๋ฅผ ์œ„ํ•ด์„œ, ๋จผ์ € ๋” ํฐ ๋ฆฌ์ŠคํŠธ(๋‹จ์–ด ๋ฆฌ์ŠคํŠธ)์—์„œ ํ•˜์œ„ ๋ฆฌ์ŠคํŠธ(๋‹จ์–ด๋กœ ๋ถ„ํ• ๋œ ๋‹ต๋ณ€)์„ ์ฐพ์„ ์ˆ˜ ์žˆ๋Š” ํ—ฌํผ ํ•จ์ˆ˜๋ฅผ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. ์ด ํ•จ์ˆ˜๋Š” `words_list`์™€ `answer_list`, ์ด๋ ‡๊ฒŒ ๋‘ ๋ฆฌ์ŠคํŠธ๋ฅผ ์ž…๋ ฅ์œผ๋กœ ๋ฐ›์Šต๋‹ˆ๋‹ค. 
๊ทธ๋Ÿฐ ๋‹ค์Œ `words_list`๋ฅผ ๋ฐ˜๋ณตํ•˜์—ฌ `words_list`์˜ ํ˜„์žฌ ๋‹จ์–ด(words_list[i])๊ฐ€ `answer_list`์˜ ์ฒซ ๋ฒˆ์งธ ๋‹จ์–ด(answer_list[0])์™€ ๊ฐ™์€์ง€, ํ˜„์žฌ ๋‹จ์–ด์—์„œ ์‹œ์ž‘ํ•ด `answer_list`์™€ ๊ฐ™์€ ๊ธธ์ด๋งŒํผ์˜ `words_list`์˜ ํ•˜์œ„ ๋ฆฌ์ŠคํŠธ๊ฐ€ `answer_list`์™€ ์ผ์น˜ํ•˜๋Š”์ง€ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค. ์ด ์กฐ๊ฑด์ด ์ฐธ์ด๋ผ๋ฉด ์ผ์น˜ํ•˜๋Š” ํ•ญ๋ชฉ์„ ๋ฐœ๊ฒฌํ–ˆ์Œ์„ ์˜๋ฏธํ•˜๋ฉฐ, ํ•จ์ˆ˜๋Š” ์ผ์น˜ ํ•ญ๋ชฉ, ์‹œ์ž‘ ์ธ๋ฑ์Šค(idx) ๋ฐ ์ข…๋ฃŒ ์ธ๋ฑ์Šค(idx + len(answer_list) - 1)๋ฅผ ๊ธฐ๋กํ•ฉ๋‹ˆ๋‹ค. ์ผ์น˜ํ•˜๋Š” ํ•ญ๋ชฉ์ด ๋‘ ๊ฐœ ์ด์ƒ ๋ฐœ๊ฒฌ๋˜๋ฉด ํ•จ์ˆ˜๋Š” ์ฒซ ๋ฒˆ์งธ ํ•ญ๋ชฉ๋งŒ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ์ผ์น˜ํ•˜๋Š” ํ•ญ๋ชฉ์ด ์—†๋‹ค๋ฉด ํ•จ์ˆ˜๋Š” (`None`, 0, 0)์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ```py >>> def subfinder(words_list, answer_list): ... matches = [] ... start_indices = [] ... end_indices = [] ... for idx, i in enumerate(range(len(words_list))): ... if words_list[i] == answer_list[0] and words_list[i : i + len(answer_list)] == answer_list: ... matches.append(answer_list) ... start_indices.append(idx) ... end_indices.append(idx + len(answer_list) - 1) ... if matches: ... return matches[0], start_indices[0], end_indices[0] ... else: ... return None, 0, 0 ``` ์ด ํ•จ์ˆ˜๊ฐ€ ์–ด๋–ป๊ฒŒ ์ •๋‹ต์˜ ์œ„์น˜๋ฅผ ์ฐพ๋Š”์ง€ ์„ค๋ช…ํ•˜๊ธฐ ์œ„ํ•ด ๋‹ค์Œ ์˜ˆ์ œ์—์„œ ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> example = dataset_with_ocr["train"][1] >>> words = [word.lower() for word in example["words"]] >>> match, word_idx_start, word_idx_end = subfinder(words, example["answer"].lower().split()) >>> print("Question: ", example["question"]) >>> print("Words:", words) >>> print("Answer: ", example["answer"]) >>> print("start_index", word_idx_start) >>> print("end_index", word_idx_end) Question: Who is in cc in this letter? Words: ['wie', 'baw', 'brown', '&', 'williamson', 'tobacco', 'corporation', 'research', '&', 'development', 'internal', 'correspondence', 'to:', 'r.', 'h.', 'honeycutt', 'ce:', 't.f.', 'riehl', 'from:', '.', 'c.j.', 'cook', 'date:', 'may', '8,', '1995', 'subject:', 'review', 'of', 'existing', 'brainstorming', 'ideas/483', 'the', 'major', 'function', 'of', 'the', 'product', 'innovation', 'graup', 'is', 'to', 'develop', 'marketable', 'nove!', 'products', 'that', 'would', 'be', 'profitable', 'to', 'manufacture', 'and', 'sell.', 'novel', 'is', 'defined', 'as:', 'of', 'a', 'new', 'kind,', 'or', 'different', 'from', 'anything', 'seen', 'or', 'known', 'before.', 'innovation', 'is', 'defined', 'as:', 'something', 'new', 'or', 'different', 'introduced;', 'act', 'of', 'innovating;', 'introduction', 'of', 'new', 'things', 'or', 'methods.', 'the', 'products', 'may', 'incorporate', 'the', 'latest', 'technologies,', 'materials', 'and', 'know-how', 'available', 'to', 'give', 'then', 'a', 'unique', 'taste', 'or', 'look.', 'the', 'first', 'task', 'of', 'the', 'product', 'innovation', 'group', 'was', 'to', 'assemble,', 'review', 'and', 'categorize', 'a', 'list', 'of', 'existing', 'brainstorming', 'ideas.', 'ideas', 'were', 'grouped', 'into', 'two', 'major', 'categories', 'labeled', 'appearance', 'and', 'taste/aroma.', 'these', 'categories', 'are', 'used', 'for', 'novel', 'products', 'that', 'may', 'differ', 'from', 'a', 'visual', 'and/or', 'taste/aroma', 'point', 'of', 'view', 'compared', 'to', 'canventional', 'cigarettes.', 'other', 'categories', 'include', 'a', 'combination', 'of', 'the', 'above,', 'filters,', 'packaging', 'and', 'brand', 'extensions.', 'appearance', 'this', 'category', 'is', 'used', 'for', 'novel', 'cigarette', 'constructions', 'that', 'yield', 'visually', 'different', 'products', 'with', 
'minimal', 'changes', 'in', 'smoke', 'chemistry', 'two', 'cigarettes', 'in', 'cne.', 'emulti-plug', 'te', 'build', 'yaur', 'awn', 'cigarette.', 'eswitchable', 'menthol', 'or', 'non', 'menthol', 'cigarette.', '*cigarettes', 'with', 'interspaced', 'perforations', 'to', 'enable', 'smoker', 'to', 'separate', 'unburned', 'section', 'for', 'future', 'smoking.', 'ยซshort', 'cigarette,', 'tobacco', 'section', '30', 'mm.', 'ยซextremely', 'fast', 'buming', 'cigarette.', 'ยซnovel', 'cigarette', 'constructions', 'that', 'permit', 'a', 'significant', 'reduction', 'iretobacco', 'weight', 'while', 'maintaining', 'smoking', 'mechanics', 'and', 'visual', 'characteristics.', 'higher', 'basis', 'weight', 'paper:', 'potential', 'reduction', 'in', 'tobacco', 'weight.', 'ยซmore', 'rigid', 'tobacco', 'column;', 'stiffing', 'agent', 'for', 'tobacco;', 'e.g.', 'starch', '*colored', 'tow', 'and', 'cigarette', 'papers;', 'seasonal', 'promotions,', 'e.g.', 'pastel', 'colored', 'cigarettes', 'for', 'easter', 'or', 'in', 'an', 'ebony', 'and', 'ivory', 'brand', 'containing', 'a', 'mixture', 'of', 'all', 'black', '(black', 'paper', 'and', 'tow)', 'and', 'ail', 'white', 'cigarettes.', '499150498'] Answer: T.F. Riehl start_index 17 end_index 18 ``` ํ•œํŽธ, ์œ„ ์˜ˆ์ œ๊ฐ€ ์ธ์ฝ”๋”ฉ๋˜๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์ด ํ‘œ์‹œ๋ฉ๋‹ˆ๋‹ค: ```py >>> encoding = tokenizer(example["question"], example["words"], example["boxes"]) >>> tokenizer.decode(encoding["input_ids"]) [CLS] who is in cc in this letter? [SEP] wie baw brown & williamson tobacco corporation research & development ... ``` ์ด์ œ ์ธ์ฝ”๋”ฉ๋œ ์ž…๋ ฅ์—์„œ ์ •๋‹ต์˜ ์œ„์น˜๋ฅผ ์ฐพ์•„์•ผ ํ•ฉ๋‹ˆ๋‹ค. * `token_type_ids`๋Š” ์–ด๋–ค ํ† ํฐ์ด ์งˆ๋ฌธ์— ์†ํ•˜๋Š”์ง€, ๊ทธ๋ฆฌ๊ณ  ์–ด๋–ค ํ† ํฐ์ด ๋ฌธ์„œ์˜ ๋‹จ์–ด์— ํฌํ•จ๋˜๋Š”์ง€๋ฅผ ์•Œ๋ ค์ค๋‹ˆ๋‹ค. * `tokenizer.cls_token_id` ์ž…๋ ฅ์˜ ์‹œ์ž‘ ๋ถ€๋ถ„์— ์žˆ๋Š” ํŠน์ˆ˜ ํ† ํฐ์„ ์ฐพ๋Š” ๋ฐ ๋„์›€์„ ์ค๋‹ˆ๋‹ค. * `word_ids`๋Š” ์›๋ณธ `words`์—์„œ ์ฐพ์€ ๋‹ต๋ณ€์„ ์ „์ฒด ์ธ์ฝ”๋”ฉ๋œ ์ž…๋ ฅ์˜ ๋™์ผํ•œ ๋‹ต๊ณผ ์ผ์น˜์‹œํ‚ค๊ณ  ์ธ์ฝ”๋”ฉ๋œ ์ž…๋ ฅ์—์„œ ๋‹ต๋ณ€์˜ ์‹œ์ž‘/๋ ์œ„์น˜๋ฅผ ๊ฒฐ์ •ํ•ฉ๋‹ˆ๋‹ค. ์œ„ ๋‚ด์šฉ๋“ค์„ ์—ผ๋‘์— ๋‘๊ณ  ๋ฐ์ดํ„ฐ ์„ธํŠธ ์˜ˆ์ œ์˜ ๋ฐฐ์น˜๋ฅผ ์ธ์ฝ”๋”ฉํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ค์–ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> def encode_dataset(examples, max_length=512): ... questions = examples["question"] ... words = examples["words"] ... boxes = examples["boxes"] ... answers = examples["answer"] ... # ์˜ˆ์ œ ๋ฐฐ์น˜๋ฅผ ์ธ์ฝ”๋”ฉํ•˜๊ณ  start_positions์™€ end_positions๋ฅผ ์ดˆ๊ธฐํ™”ํ•ฉ๋‹ˆ๋‹ค ... encoding = tokenizer(questions, words, boxes, max_length=max_length, padding="max_length", truncation=True) ... start_positions = [] ... end_positions = [] ... # ๋ฐฐ์น˜์˜ ์˜ˆ์ œ๋ฅผ ๋ฐ˜๋ณตํ•ฉ๋‹ˆ๋‹ค ... for i in range(len(questions)): ... cls_index = encoding["input_ids"][i].index(tokenizer.cls_token_id) ... # ์˜ˆ์ œ์˜ words์—์„œ ๋‹ต๋ณ€์˜ ์œ„์น˜๋ฅผ ์ฐพ์Šต๋‹ˆ๋‹ค ... words_example = [word.lower() for word in words[i]] ... answer = answers[i] ... match, word_idx_start, word_idx_end = subfinder(words_example, answer.lower().split()) ... if match: ... # ์ผ์น˜ํ•˜๋Š” ํ•ญ๋ชฉ์„ ๋ฐœ๊ฒฌํ•˜๋ฉด, `token_type_ids`๋ฅผ ์‚ฌ์šฉํ•ด ์ธ์ฝ”๋”ฉ์—์„œ ๋‹จ์–ด๊ฐ€ ์‹œ์ž‘ํ•˜๋Š” ์œ„์น˜๋ฅผ ์ฐพ์Šต๋‹ˆ๋‹ค ... token_type_ids = encoding["token_type_ids"][i] ... token_start_index = 0 ... while token_type_ids[token_start_index] != 1: ... token_start_index += 1 ... token_end_index = len(encoding["input_ids"][i]) - 1 ... while token_type_ids[token_end_index] != 1: ... token_end_index -= 1 ... word_ids = encoding.word_ids(i)[token_start_index : token_end_index + 1] ... 
start_position = cls_index ... end_position = cls_index ... # words์˜ ๋‹ต๋ณ€ ์œ„์น˜์™€ ์ผ์น˜ํ•  ๋•Œ๊นŒ์ง€ word_ids๋ฅผ ๋ฐ˜๋ณตํ•˜๊ณ  `token_start_index`๋ฅผ ๋Š˜๋ฆฝ๋‹ˆ๋‹ค ... # ์ผ์น˜ํ•˜๋ฉด `token_start_index`๋ฅผ ์ธ์ฝ”๋”ฉ์—์„œ ๋‹ต๋ณ€์˜ `start_position`์œผ๋กœ ์ €์žฅํ•ฉ๋‹ˆ๋‹ค ... for id in word_ids: ... if id == word_idx_start: ... start_position = token_start_index ... else: ... token_start_index += 1 ... # ๋น„์Šทํ•˜๊ฒŒ, ๋์—์„œ ์‹œ์ž‘ํ•ด `word_ids`๋ฅผ ๋ฐ˜๋ณตํ•˜๋ฉฐ ๋‹ต๋ณ€์˜ `end_position`์„ ์ฐพ์Šต๋‹ˆ๋‹ค ... for id in word_ids[::-1]: ... if id == word_idx_end: ... end_position = token_end_index ... else: ... token_end_index -= 1 ... start_positions.append(start_position) ... end_positions.append(end_position) ... else: ... start_positions.append(cls_index) ... end_positions.append(cls_index) ... encoding["image"] = examples["image"] ... encoding["start_positions"] = start_positions ... encoding["end_positions"] = end_positions ... return encoding ``` ์ด์ œ ์ด ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๊ฐ€ ์žˆ์œผ๋‹ˆ ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์ธ์ฝ”๋”ฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> encoded_train_dataset = dataset_with_ocr["train"].map( ... encode_dataset, batched=True, batch_size=2, remove_columns=dataset_with_ocr["train"].column_names ... ) >>> encoded_test_dataset = dataset_with_ocr["test"].map( ... encode_dataset, batched=True, batch_size=2, remove_columns=dataset_with_ocr["test"].column_names ... ) ``` ์ธ์ฝ”๋”ฉ๋œ ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ํŠน์„ฑ์ด ์–ด๋–ป๊ฒŒ ์ƒ๊ฒผ๋Š”์ง€ ํ™•์ธํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> encoded_train_dataset.features {'image': Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='uint8', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None), 'input_ids': Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None), 'token_type_ids': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None), 'attention_mask': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None), 'bbox': Sequence(feature=Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), length=-1, id=None), 'start_positions': Value(dtype='int64', id=None), 'end_positions': Value(dtype='int64', id=None)} ``` ## ํ‰๊ฐ€ [[evaluation]] ๋ฌธ์„œ ์งˆ์˜ ์‘๋‹ต์„ ํ‰๊ฐ€ํ•˜๋ ค๋ฉด ์ƒ๋‹นํ•œ ์–‘์˜ ํ›„์ฒ˜๋ฆฌ๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ์‹œ๊ฐ„์ด ๋„ˆ๋ฌด ๋งŽ์ด ๊ฑธ๋ฆฌ์ง€ ์•Š๋„๋ก ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ํ‰๊ฐ€ ๋‹จ๊ณ„๋ฅผ ์ƒ๋žตํ•ฉ๋‹ˆ๋‹ค. [`Trainer`]๊ฐ€ ํ›ˆ๋ จ ๊ณผ์ •์—์„œ ํ‰๊ฐ€ ์†์‹ค(evaluation loss)์„ ๊ณ„์† ๊ณ„์‚ฐํ•˜๊ธฐ ๋•Œ๋ฌธ์— ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์„ ๋Œ€๋žต์ ์œผ๋กœ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ถ”์ถœ์ (Extractive) ์งˆ์˜ ์‘๋‹ต์€ ๋ณดํ†ต F1/exact match ๋ฐฉ๋ฒ•์„ ์‚ฌ์šฉํ•ด ํ‰๊ฐ€๋ฉ๋‹ˆ๋‹ค. ์ง์ ‘ ๊ตฌํ˜„ํ•ด๋ณด๊ณ  ์‹ถ์œผ์‹œ๋‹ค๋ฉด, Hugging Face course์˜ [Question Answering chapter](https://huggingface.co/course/chapter7/7?fw=pt#postprocessing)์„ ์ฐธ๊ณ ํ•˜์„ธ์š”. ## ํ›ˆ๋ จ [[train]] ์ถ•ํ•˜ํ•ฉ๋‹ˆ๋‹ค! ์ด ๊ฐ€์ด๋“œ์˜ ๊ฐ€์žฅ ์–ด๋ ค์šด ๋ถ€๋ถ„์„ ์„ฑ๊ณต์ ์œผ๋กœ ์ฒ˜๋ฆฌํ–ˆ์œผ๋‹ˆ ์ด์ œ ๋‚˜๋งŒ์˜ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ํ›ˆ๋ จ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋‹จ๊ณ„๋กœ ์ด๋ฃจ์–ด์ ธ ์žˆ์Šต๋‹ˆ๋‹ค: * ์ „์ฒ˜๋ฆฌ์—์„œ์˜ ๋™์ผํ•œ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด [`AutoModelForDocumentQuestionAnswering`]์œผ๋กœ ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. * [`TrainingArguments`]๋กœ ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •ํ•ฉ๋‹ˆ๋‹ค. * ์˜ˆ์ œ๋ฅผ ๋ฐฐ์น˜ ์ฒ˜๋ฆฌํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ๋Š” [`DefaultDataCollator`]๊ฐ€ ์ ๋‹นํ•ฉ๋‹ˆ๋‹ค. * ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ ์„ธํŠธ, ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ(Data collator)์™€ ํ•จ๊ป˜ [`Trainer`]์— ํ›ˆ๋ จ ์ธ์ˆ˜๋“ค์„ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. 
* [`~Trainer.train`]์„ ํ˜ธ์ถœํ•ด์„œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import AutoModelForDocumentQuestionAnswering >>> model = AutoModelForDocumentQuestionAnswering.from_pretrained(model_checkpoint) ``` [`TrainingArguments`]์—์„œ `output_dir`์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ์ €์žฅํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•˜๊ณ , ์ ์ ˆํ•œ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ ์ปค๋ฎค๋‹ˆํ‹ฐ์™€ ๊ณต์œ ํ•˜๋ ค๋ฉด `push_to_hub`๋ฅผ `True`๋กœ ์„ค์ •ํ•˜์„ธ์š” (๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๋ ค๋ฉด Hugging Face์— ๋กœ๊ทธ์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค). ์ด ๊ฒฝ์šฐ `output_dir`์€ ๋ชจ๋ธ์˜ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ํ‘ธ์‹œํ•  ๋ ˆํฌ์ง€ํ† ๋ฆฌ์˜ ์ด๋ฆ„์ด ๋ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import TrainingArguments >>> # ๋ณธ์ธ์˜ ๋ ˆํฌ์ง€ํ† ๋ฆฌ ID๋กœ ๋ฐ”๊พธ์„ธ์š” >>> repo_id = "MariaK/layoutlmv2-base-uncased_finetuned_docvqa" >>> training_args = TrainingArguments( ... output_dir=repo_id, ... per_device_train_batch_size=4, ... num_train_epochs=20, ... save_steps=200, ... logging_steps=50, ... eval_strategy="steps", ... learning_rate=5e-5, ... save_total_limit=2, ... remove_unused_columns=False, ... push_to_hub=True, ... ) ``` ๊ฐ„๋‹จํ•œ ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ๋ฅผ ์ •์˜ํ•˜์—ฌ ์˜ˆ์ œ๋ฅผ ํ•จ๊ป˜ ๋ฐฐ์น˜ํ•ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import DefaultDataCollator >>> data_collator = DefaultDataCollator() ``` ๋งˆ์ง€๋ง‰์œผ๋กœ, ๋ชจ๋“  ๊ฒƒ์„ ํ•œ ๊ณณ์— ๋ชจ์•„ [`~Trainer.train`]์„ ํ˜ธ์ถœํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import Trainer >>> trainer = Trainer( ... model=model, ... args=training_args, ... data_collator=data_collator, ... train_dataset=encoded_train_dataset, ... eval_dataset=encoded_test_dataset, ... tokenizer=processor, ... ) >>> trainer.train() ``` ์ตœ์ข… ๋ชจ๋ธ์„ ๐Ÿค— Hub์— ์ถ”๊ฐ€ํ•˜๋ ค๋ฉด, ๋ชจ๋ธ ์นด๋“œ๋ฅผ ์ƒ์„ฑํ•˜๊ณ  `push_to_hub`๋ฅผ ํ˜ธ์ถœํ•ฉ๋‹ˆ๋‹ค: ```py >>> trainer.create_model_card() >>> trainer.push_to_hub() ``` ## ์ถ”๋ก  [[inference]] ์ด์ œ LayoutLMv2 ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ๐Ÿค— Hub์— ์—…๋กœ๋“œํ–ˆ์œผ๋‹ˆ ์ถ”๋ก ์—๋„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ถ”๋ก ์„ ์œ„ํ•ด ๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ด ๋ณด๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`Pipeline`]์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ ์ž…๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> example = dataset["test"][2] >>> question = example["query"]["en"] >>> image = example["image"] >>> print(question) >>> print(example["answers"]) 'Who is โ€˜presidingโ€™ TRRF GENERAL SESSION (PART 1)?' ['TRRF Vice President', 'lee a. waller'] ``` ๊ทธ ๋‹ค์Œ, ๋ชจ๋ธ๋กœ ๋ฌธ์„œ ์งˆ์˜ ์‘๋‹ต์„ ํ•˜๊ธฐ ์œ„ํ•ด ํŒŒ์ดํ”„๋ผ์ธ์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ  ์ด๋ฏธ์ง€ + ์งˆ๋ฌธ ์กฐํ•ฉ์„ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import pipeline >>> qa_pipeline = pipeline("document-question-answering", model="MariaK/layoutlmv2-base-uncased_finetuned_docvqa") >>> qa_pipeline(image, question) [{'score': 0.9949808120727539, 'answer': 'Lee A. Waller', 'start': 55, 'end': 57}] ``` ์›ํ•œ๋‹ค๋ฉด ํŒŒ์ดํ”„๋ผ์ธ์˜ ๊ฒฐ๊ณผ๋ฅผ ์ˆ˜๋™์œผ๋กœ ๋ณต์ œํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: 1. ์ด๋ฏธ์ง€์™€ ์งˆ๋ฌธ์„ ๊ฐ€์ ธ์™€ ๋ชจ๋ธ์˜ ํ”„๋กœ์„ธ์„œ๋ฅผ ์‚ฌ์šฉํ•ด ๋ชจ๋ธ์— ๋งž๊ฒŒ ์ค€๋น„ํ•ฉ๋‹ˆ๋‹ค. 2. ๋ชจ๋ธ์„ ํ†ตํ•ด ๊ฒฐ๊ณผ ๋˜๋Š” ์ „์ฒ˜๋ฆฌ๋ฅผ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. 3. ๋ชจ๋ธ์€ ์–ด๋–ค ํ† ํฐ์ด ๋‹ต๋ณ€์˜ ์‹œ์ž‘์— ์žˆ๋Š”์ง€, ์–ด๋–ค ํ† ํฐ์ด ๋‹ต๋ณ€์ด ๋์— ์žˆ๋Š”์ง€๋ฅผ ๋‚˜ํƒ€๋‚ด๋Š” `start_logits`์™€ `end_logits`๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ๋‘˜ ๋‹ค (batch_size, sequence_length) ํ˜•ํƒœ๋ฅผ ๊ฐ–์Šต๋‹ˆ๋‹ค. 4. `start_logits`์™€ `end_logits`์˜ ๋งˆ์ง€๋ง‰ ์ฐจ์›์„ ์ตœ๋Œ€๋กœ ๋งŒ๋“œ๋Š” ๊ฐ’์„ ์ฐพ์•„ ์˜ˆ์ƒ `start_idx`์™€ `end_idx`๋ฅผ ์–ป์Šต๋‹ˆ๋‹ค. 5. ํ† ํฌ๋‚˜์ด์ €๋กœ ๋‹ต๋ณ€์„ ๋””์ฝ”๋”ฉํ•ฉ๋‹ˆ๋‹ค. 
```py >>> import torch >>> from transformers import AutoProcessor >>> from transformers import AutoModelForDocumentQuestionAnswering >>> processor = AutoProcessor.from_pretrained("MariaK/layoutlmv2-base-uncased_finetuned_docvqa") >>> model = AutoModelForDocumentQuestionAnswering.from_pretrained("MariaK/layoutlmv2-base-uncased_finetuned_docvqa") >>> with torch.no_grad(): ... encoding = processor(image.convert("RGB"), question, return_tensors="pt") ... outputs = model(**encoding) ... start_logits = outputs.start_logits ... end_logits = outputs.end_logits ... predicted_start_idx = start_logits.argmax(-1).item() ... predicted_end_idx = end_logits.argmax(-1).item() >>> processor.tokenizer.decode(encoding.input_ids.squeeze()[predicted_start_idx : predicted_end_idx + 1]) 'lee a. waller' ```
0
mavonic_private_repos/transformers/docs/source/ko
mavonic_private_repos/transformers/docs/source/ko/tasks/semantic_segmentation.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์˜๋ฏธ์  ๋ถ„ํ• (Semantic segmentation)[[semantic-segmentation]] [[open-in-colab]] <Youtube id="dKE8SIt9C-w"/> ์˜๋ฏธ์  ๋ถ„ํ• (semantic segmentation)์€ ์ด๋ฏธ์ง€์˜ ๊ฐ ํ”ฝ์…€์— ๋ ˆ์ด๋ธ” ๋˜๋Š” ํด๋ž˜์Šค๋ฅผ ํ• ๋‹นํ•ฉ๋‹ˆ๋‹ค. ๋ถ„ํ• (segmentation)์—๋Š” ์—ฌ๋Ÿฌ ์ข…๋ฅ˜๊ฐ€ ์žˆ์œผ๋ฉฐ, ์˜๋ฏธ์  ๋ถ„ํ• ์˜ ๊ฒฝ์šฐ ๋™์ผํ•œ ๋ฌผ์ฒด์˜ ๊ณ ์œ  ์ธ์Šคํ„ด์Šค๋ฅผ ๊ตฌ๋ถ„ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๋‘ ๋ฌผ์ฒด ๋ชจ๋‘ ๋™์ผํ•œ ๋ ˆ์ด๋ธ”์ด ์ง€์ •๋ฉ๋‹ˆ๋‹ค(์˜ˆ์‹œ๋กœ, "car-1" ๊ณผ "car-2" ๋Œ€์‹  "car"๋กœ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค). ์‹ค์ƒํ™œ์—์„œ ํ”ํžˆ ๋ณผ ์ˆ˜ ์žˆ๋Š” ์˜๋ฏธ์  ๋ถ„ํ• ์˜ ์ ์šฉ ์‚ฌ๋ก€๋กœ๋Š” ๋ณดํ–‰์ž์™€ ์ค‘์š”ํ•œ ๊ตํ†ต ์ •๋ณด๋ฅผ ์‹๋ณ„ํ•˜๋Š” ์ž์œจ ์ฃผํ–‰ ์ž๋™์ฐจ ํ•™์Šต, ์˜๋ฃŒ ์ด๋ฏธ์ง€์˜ ์„ธํฌ์™€ ์ด์ƒ ์ง•ํ›„ ์‹๋ณ„, ๊ทธ๋ฆฌ๊ณ  ์œ„์„ฑ ์ด๋ฏธ์ง€์˜ ํ™˜๊ฒฝ ๋ณ€ํ™” ๋ชจ๋‹ˆํ„ฐ๋ง๋“ฑ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฒˆ ๊ฐ€์ด๋“œ์—์„œ ๋ฐฐ์šธ ๋‚ด์šฉ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: 1. [SceneParse150](https://huggingface.co/datasets/scene_parse_150) ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์ด์šฉํ•ด [SegFormer](https://huggingface.co/docs/transformers/main/en/model_doc/segformer#segformer) ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ. 2. ๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ์„ ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๊ธฐ. <Tip> ์ด ์ž‘์—…๊ณผ ํ˜ธํ™˜๋˜๋Š” ๋ชจ๋“  ์•„ํ‚คํ…์ฒ˜์™€ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋ณด๋ ค๋ฉด [์ž‘์—… ํŽ˜์ด์ง€](https://huggingface.co/tasks/image-segmentation)๋ฅผ ํ™•์ธํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ชจ๋“  ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ์„ค์น˜๋˜์—ˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install -q datasets transformers evaluate ``` ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ๊ณต์œ ํ•  ์ˆ˜ ์žˆ๋„๋ก Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ํ”„๋กฌํ”„ํŠธ๊ฐ€ ๋‚˜ํƒ€๋‚˜๋ฉด ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•˜์„ธ์š”: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## SceneParse150 ๋ฐ์ดํ„ฐ ์„ธํŠธ ๋ถˆ๋Ÿฌ์˜ค๊ธฐ[[load-sceneparse150-dataset]] ๐Ÿค— Datasets ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ SceneParse150 ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ๋” ์ž‘์€ ๋ถ€๋ถ„ ์ง‘ํ•ฉ์„ ๊ฐ€์ ธ์˜ค๋Š” ๊ฒƒ์œผ๋กœ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ๋ฐ์ดํ„ฐ ์„ธํŠธ ์ „์ฒด์— ๋Œ€ํ•œ ํ›ˆ๋ จ์— ๋งŽ์€ ์‹œ๊ฐ„์„ ํ• ์• ํ•˜๊ธฐ ์ „์— ์‹คํ—˜์„ ํ†ตํ•ด ๋ชจ๋“  ๊ฒƒ์ด ์ œ๋Œ€๋กœ ์ž‘๋™ํ•˜๋Š”์ง€ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
```py >>> from datasets import load_dataset >>> ds = load_dataset("scene_parse_150", split="train[:50]") ``` ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ `train`์„ [`~datasets.Dataset.train_test_split`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ›ˆ๋ จ ๋ฐ ํ…Œ์ŠคํŠธ ์„ธํŠธ๋กœ ๋ถ„ํ• ํ•˜์„ธ์š”: ```py >>> ds = ds.train_test_split(test_size=0.2) >>> train_ds = ds["train"] >>> test_ds = ds["test"] ``` ๊ทธ๋ฆฌ๊ณ  ์˜ˆ์‹œ๋ฅผ ์‚ดํŽด๋ณด์„ธ์š”: ```py >>> train_ds[0] {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=512x683 at 0x7F9B0C201F90>, 'annotation': <PIL.PngImagePlugin.PngImageFile image mode=L size=512x683 at 0x7F9B0C201DD0>, 'scene_category': 368} ``` - `image`: ์žฅ๋ฉด์˜ PIL ์ด๋ฏธ์ง€์ž…๋‹ˆ๋‹ค. - `annotation`: ๋ถ„ํ•  ์ง€๋„(segmentation map)์˜ PIL ์ด๋ฏธ์ง€์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ์˜ ํƒ€๊ฒŸ์ด๊ธฐ๋„ ํ•ฉ๋‹ˆ๋‹ค. - `scene_category`: "์ฃผ๋ฐฉ" ๋˜๋Š” "์‚ฌ๋ฌด์‹ค"๊ณผ ๊ฐ™์ด ์ด๋ฏธ์ง€ ์žฅ๋ฉด์„ ์„ค๋ช…ํ•˜๋Š” ์นดํ…Œ๊ณ ๋ฆฌ ID์ž…๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ๋‘˜ ๋‹ค PIL ์ด๋ฏธ์ง€์ธ `image`์™€ `annotation`๋งŒ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ๋‚˜์ค‘์— ๋ชจ๋ธ์„ ์„ค์ •ํ•  ๋•Œ ์œ ์šฉํ•˜๊ฒŒ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก ๋ ˆ์ด๋ธ” ID๋ฅผ ๋ ˆ์ด๋ธ” ํด๋ž˜์Šค์— ๋งคํ•‘ํ•˜๋Š” ์‚ฌ์ „๋„ ๋งŒ๋“ค๊ณ  ์‹ถ์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค. Hub์—์„œ ๋งคํ•‘์„ ๋‹ค์šด๋กœ๋“œํ•˜๊ณ  `id2label` ๋ฐ `label2id` ์‚ฌ์ „์„ ๋งŒ๋“œ์„ธ์š”: ```py >>> import json >>> from huggingface_hub import cached_download, hf_hub_url >>> repo_id = "huggingface/label-files" >>> filename = "ade20k-id2label.json" >>> id2label = json.load(open(cached_download(hf_hub_url(repo_id, filename, repo_type="dataset")), "r")) >>> id2label = {int(k): v for k, v in id2label.items()} >>> label2id = {v: k for k, v in id2label.items()} >>> num_labels = len(id2label) ``` ## ์ „์ฒ˜๋ฆฌํ•˜๊ธฐ[[preprocess] ๋‹ค์Œ ๋‹จ๊ณ„๋Š” ๋ชจ๋ธ์— ์‚ฌ์šฉํ•  ์ด๋ฏธ์ง€์™€ ์ฃผ์„์„ ์ค€๋น„ํ•˜๊ธฐ ์œ„ํ•ด SegFormer ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ๋ถˆ๋Ÿฌ์˜ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์šฐ๋ฆฌ๊ฐ€ ์‚ฌ์šฉํ•˜๋Š” ๋ฐ์ดํ„ฐ ์„ธํŠธ์™€ ๊ฐ™์€ ์ผ๋ถ€ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋Š” ๋ฐฐ๊ฒฝ ํด๋ž˜์Šค๋กœ ์ œ๋กœ ์ธ๋ฑ์Šค๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ๋ฐฐ๊ฒฝ ํด๋ž˜์Šค๋Š” 150๊ฐœ์˜ ํด๋ž˜์Šค์— ์‹ค์ œ๋กœ๋Š” ํฌํ•จ๋˜์ง€ ์•Š๊ธฐ ๋•Œ๋ฌธ์— `reduce_labels=True` ๋ฅผ ์„ค์ •ํ•ด ๋ชจ๋“  ๋ ˆ์ด๋ธ”์—์„œ ๋ฐฐ๊ฒฝ ํด๋ž˜์Šค๋ฅผ ์ œ๊ฑฐํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ œ๋กœ ์ธ๋ฑ์Šค๋Š” `255`๋กœ ๋Œ€์ฒด๋˜๋ฏ€๋กœ SegFormer์˜ ์†์‹ค ํ•จ์ˆ˜์—์„œ ๋ฌด์‹œ๋ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoImageProcessor >>> checkpoint = "nvidia/mit-b0" >>> image_processor = AutoImageProcessor.from_pretrained(checkpoint, reduce_labels=True) ``` <frameworkcontent> <pt> ์ด๋ฏธ์ง€ ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ๋ฐ์ดํ„ฐ ์ฆ๊ฐ•์„ ์ ์šฉํ•˜์—ฌ ๊ณผ์ ํ•ฉ์— ๋Œ€ํ•ด ๋ชจ๋ธ์„ ๋ณด๋‹ค ๊ฐ•๊ฑดํ•˜๊ฒŒ ๋งŒ๋“œ๋Š” ๊ฒƒ์ด ์ผ๋ฐ˜์ ์ž…๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” [torchvision](https://pytorch.org/vision/stable/index.html)์˜ [`ColorJitter`](https://pytorch.org/vision/stable/generated/torchvision.transforms.ColorJitter.html)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ด๋ฏธ์ง€์˜ ์ƒ‰์ƒ ์†์„ฑ์„ ์ž„์˜๋กœ ๋ณ€๊ฒฝํ•ฉ๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ, ์ž์‹ ์ด ์›ํ•˜๋Š” ์ด๋ฏธ์ง€ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> from torchvision.transforms import ColorJitter >>> jitter = ColorJitter(brightness=0.25, contrast=0.25, saturation=0.25, hue=0.1) ``` ์ด์ œ ๋ชจ๋ธ์— ์‚ฌ์šฉํ•  ์ด๋ฏธ์ง€์™€ ์ฃผ์„์„ ์ค€๋น„ํ•˜๊ธฐ ์œ„ํ•ด ๋‘ ๊ฐœ์˜ ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ์ด ํ•จ์ˆ˜๋“ค์€ ์ด๋ฏธ์ง€๋ฅผ `pixel_values`๋กœ, ์ฃผ์„์„ `labels`๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ํ›ˆ๋ จ ์„ธํŠธ์˜ ๊ฒฝ์šฐ ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ์— ์ด๋ฏธ์ง€๋ฅผ ์ œ๊ณตํ•˜๊ธฐ ์ „์— `jitter`๋ฅผ ์ ์šฉํ•ฉ๋‹ˆ๋‹ค. 
ํ…Œ์ŠคํŠธ ์„ธํŠธ์˜ ๊ฒฝ์šฐ ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋Š” `images`๋ฅผ ์ž๋ฅด๊ณ  ์ •๊ทœํ™”ํ•˜๋ฉฐ, ํ…Œ์ŠคํŠธ ์ค‘์—๋Š” ๋ฐ์ดํ„ฐ ์ฆ๊ฐ•์ด ์ ์šฉ๋˜์ง€ ์•Š์œผ๋ฏ€๋กœ `labels`๋งŒ ์ž๋ฆ…๋‹ˆ๋‹ค. ```py >>> def train_transforms(example_batch): ... images = [jitter(x) for x in example_batch["image"]] ... labels = [x for x in example_batch["annotation"]] ... inputs = image_processor(images, labels) ... return inputs >>> def val_transforms(example_batch): ... images = [x for x in example_batch["image"]] ... labels = [x for x in example_batch["annotation"]] ... inputs = image_processor(images, labels) ... return inputs ``` ๋ชจ๋“  ๋ฐ์ดํ„ฐ ์„ธํŠธ์— `jitter`๋ฅผ ์ ์šฉํ•˜๋ ค๋ฉด, ๐Ÿค— Datasets [`~datasets.Dataset.set_transform`] ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. ์ฆ‰์‹œ ๋ณ€ํ™˜์ด ์ ์šฉ๋˜๊ธฐ ๋•Œ๋ฌธ์— ๋” ๋น ๋ฅด๊ณ  ๋””์Šคํฌ ๊ณต๊ฐ„์„ ๋œ ์ฐจ์ง€ํ•ฉ๋‹ˆ๋‹ค: ```py >>> train_ds.set_transform(train_transforms) >>> test_ds.set_transform(val_transforms) ``` </pt> </frameworkcontent> <frameworkcontent> <tf> ์ด๋ฏธ์ง€ ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ๋ฐ์ดํ„ฐ ์ฆ๊ฐ•์„ ์ ์šฉํ•˜์—ฌ ๊ณผ์ ํ•ฉ์— ๋Œ€ํ•ด ๋ชจ๋ธ์„ ๋ณด๋‹ค ๊ฐ•๊ฑดํ•˜๊ฒŒ ๋งŒ๋“œ๋Š” ๊ฒƒ์ด ์ผ๋ฐ˜์ ์ž…๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” [`tf.image`](https://www.tensorflow.org/api_docs/python/tf/image)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ด๋ฏธ์ง€์˜ ์ƒ‰์ƒ ์†์„ฑ์„ ์ž„์˜๋กœ ๋ณ€๊ฒฝํ•ฉ๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ, ์ž์‹ ์ด ์›ํ•˜๋Š” ์ด๋ฏธ์ง€ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ณ„๊ฐœ์˜ ๋‘ ๋ณ€ํ™˜ ํ•จ์ˆ˜๋ฅผ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค: - ์ด๋ฏธ์ง€ ์ฆ๊ฐ•์„ ํฌํ•จํ•˜๋Š” ํ•™์Šต ๋ฐ์ดํ„ฐ ๋ณ€ํ™˜ - ๐Ÿค— Transformers์˜ ์ปดํ“จํ„ฐ ๋น„์ „ ๋ชจ๋ธ์€ ์ฑ„๋„ ์šฐ์„  ๋ ˆ์ด์•„์›ƒ์„ ๊ธฐ๋Œ€ํ•˜๊ธฐ ๋•Œ๋ฌธ์—, ์ด๋ฏธ์ง€๋งŒ ๋ฐ”๊พธ๋Š” ๊ฒ€์ฆ ๋ฐ์ดํ„ฐ ๋ณ€ํ™˜ ```py >>> import tensorflow as tf >>> def aug_transforms(image): ... image = tf.keras.utils.img_to_array(image) ... image = tf.image.random_brightness(image, 0.25) ... image = tf.image.random_contrast(image, 0.5, 2.0) ... image = tf.image.random_saturation(image, 0.75, 1.25) ... image = tf.image.random_hue(image, 0.1) ... image = tf.transpose(image, (2, 0, 1)) ... return image >>> def transforms(image): ... image = tf.keras.utils.img_to_array(image) ... image = tf.transpose(image, (2, 0, 1)) ... return image ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ๋ชจ๋ธ์„ ์œ„ํ•ด ๋‘ ๊ฐœ์˜ ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ค์–ด ์ด๋ฏธ์ง€ ๋ฐ ์ฃผ์„ ๋ฐฐ์น˜๋ฅผ ์ค€๋น„ํ•ฉ๋‹ˆ๋‹ค. ์ด ํ•จ์ˆ˜๋“ค์€ ์ด๋ฏธ์ง€ ๋ณ€ํ™˜์„ ์ ์šฉํ•˜๊ณ  ์ด์ „์— ๋กœ๋“œํ•œ `image_processor`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ด๋ฏธ์ง€๋ฅผ `pixel_values`๋กœ, ์ฃผ์„์„ `label`๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค. `ImageProcessor` ๋Š” ์ด๋ฏธ์ง€์˜ ํฌ๊ธฐ ์กฐ์ •๊ณผ ์ •๊ทœํ™”๋„ ์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค. ```py >>> def train_transforms(example_batch): ... images = [aug_transforms(x.convert("RGB")) for x in example_batch["image"]] ... labels = [x for x in example_batch["annotation"]] ... inputs = image_processor(images, labels) ... return inputs >>> def val_transforms(example_batch): ... images = [transforms(x.convert("RGB")) for x in example_batch["image"]] ... labels = [x for x in example_batch["annotation"]] ... inputs = image_processor(images, labels) ... return inputs ``` ์ „์ฒด ๋ฐ์ดํ„ฐ ์ง‘ํ•ฉ์— ์ „์ฒ˜๋ฆฌ ๋ณ€ํ™˜์„ ์ ์šฉํ•˜๋ ค๋ฉด ๐Ÿค— Datasets [`~datasets.Dataset.set_transform`] ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. ์ฆ‰์‹œ ๋ณ€ํ™˜์ด ์ ์šฉ๋˜๊ธฐ ๋•Œ๋ฌธ์— ๋” ๋น ๋ฅด๊ณ  ๋””์Šคํฌ ๊ณต๊ฐ„์„ ๋œ ์ฐจ์ง€ํ•ฉ๋‹ˆ๋‹ค: ```py >>> train_ds.set_transform(train_transforms) >>> test_ds.set_transform(val_transforms) ``` </tf> </frameworkcontent> ## ํ‰๊ฐ€ํ•˜๊ธฐ[[evaluate]] ํ›ˆ๋ จ ์ค‘์— ๋ฉ”ํŠธ๋ฆญ์„ ํฌํ•จํ•˜๋ฉด ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์„ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์Šต๋‹ˆ๋‹ค. 
๐Ÿค— [Evaluate](https://huggingface.co/docs/evaluate/index) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ‰๊ฐ€ ๋ฐฉ๋ฒ•์„ ๋น ๋ฅด๊ฒŒ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ํƒœ์Šคํฌ์—์„œ๋Š” [mean Intersection over Union](https://huggingface.co/spaces/evaluate-metric/accuracy) (IoU) ๋ฉ”ํŠธ๋ฆญ์„ ๋กœ๋“œํ•˜์„ธ์š” (๋ฉ”ํŠธ๋ฆญ์„ ๋กœ๋“œํ•˜๊ณ  ๊ณ„์‚ฐํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ์ž์„ธํžˆ ์•Œ์•„๋ณด๋ ค๋ฉด ๐Ÿค— Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour)๋ฅผ ์‚ดํŽด๋ณด์„ธ์š”). ```py >>> import evaluate >>> metric = evaluate.load("mean_iou") ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ๋ฉ”ํŠธ๋ฆญ์„ [`~evaluate.EvaluationModule.compute`]ํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ์˜ˆ์ธก์„ ๋จผ์ € ๋กœ์ง“์œผ๋กœ ๋ณ€ํ™˜ํ•œ ๋‹ค์Œ, ๋ ˆ์ด๋ธ”์˜ ํฌ๊ธฐ์— ๋งž๊ฒŒ ๋ชจ์–‘์„ ๋‹ค์‹œ ์ง€์ •ํ•ด์•ผ [`~evaluate.EvaluationModule.compute`]๋ฅผ ํ˜ธ์ถœํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: <frameworkcontent> <pt> ```py >>> import numpy as np >>> import torch >>> from torch import nn >>> def compute_metrics(eval_pred): ... with torch.no_grad(): ... logits, labels = eval_pred ... logits_tensor = torch.from_numpy(logits) ... logits_tensor = nn.functional.interpolate( ... logits_tensor, ... size=labels.shape[-2:], ... mode="bilinear", ... align_corners=False, ... ).argmax(dim=1) ... pred_labels = logits_tensor.detach().cpu().numpy() ... metrics = metric.compute( ... predictions=pred_labels, ... references=labels, ... num_labels=num_labels, ... ignore_index=255, ... reduce_labels=False, ... ) ... for key, value in metrics.items(): ... if isinstance(value, np.ndarray): ... metrics[key] = value.tolist() ... return metrics ``` </pt> </frameworkcontent> <frameworkcontent> <tf> ```py >>> def compute_metrics(eval_pred): ... logits, labels = eval_pred ... logits = tf.transpose(logits, perm=[0, 2, 3, 1]) ... logits_resized = tf.image.resize( ... logits, ... size=tf.shape(labels)[1:], ... method="bilinear", ... ) ... pred_labels = tf.argmax(logits_resized, axis=-1) ... metrics = metric.compute( ... predictions=pred_labels, ... references=labels, ... num_labels=num_labels, ... ignore_index=-1, ... reduce_labels=image_processor.do_reduce_labels, ... ) ... per_category_accuracy = metrics.pop("per_category_accuracy").tolist() ... per_category_iou = metrics.pop("per_category_iou").tolist() ... metrics.update({f"accuracy_{id2label[i]}": v for i, v in enumerate(per_category_accuracy)}) ... metrics.update({f"iou_{id2label[i]}": v for i, v in enumerate(per_category_iou)}) ... return {"val_" + k: v for k, v in metrics.items()} ``` </tf> </frameworkcontent> ์ด์ œ `compute_metrics` ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ํŠธ๋ ˆ์ด๋‹์„ ์„ค์ •ํ•  ๋•Œ ์ด ํ•จ์ˆ˜๋กœ ๋Œ์•„๊ฐ€๊ฒŒ ๋ฉ๋‹ˆ๋‹ค. ## ํ•™์Šตํ•˜๊ธฐ[[train]] <frameworkcontent> <pt> <Tip> ๋งŒ์•ฝ [`Trainer`]๋ฅผ ์‚ฌ์šฉํ•ด ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๊ฒƒ์— ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด, [์—ฌ๊ธฐ](../training#finetune-with-trainer)์—์„œ ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ์„ ์‚ดํŽด๋ณด์„ธ์š”! </Tip> ์ด์ œ ๋ชจ๋ธ ํ•™์Šต์„ ์‹œ์ž‘ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`AutoModelForSemanticSegmentation`]๋กœ SegFormer๋ฅผ ๋ถˆ๋Ÿฌ์˜ค๊ณ , ๋ชจ๋ธ์— ๋ ˆ์ด๋ธ” ID์™€ ๋ ˆ์ด๋ธ” ํด๋ž˜์Šค ๊ฐ„์˜ ๋งคํ•‘์„ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForSemanticSegmentation, TrainingArguments, Trainer >>> model = AutoModelForSemanticSegmentation.from_pretrained(checkpoint, id2label=id2label, label2id=label2id) ``` ์ด์ œ ์„ธ ๋‹จ๊ณ„๋งŒ ๋‚จ์•˜์Šต๋‹ˆ๋‹ค: 1. ํ•™์Šต ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ [`TrainingArguments`]์— ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. `image` ์—ด์ด ์‚ญ์ œ๋˜๊ธฐ ๋•Œ๋ฌธ์— ์‚ฌ์šฉํ•˜์ง€ ์•Š๋Š” ์—ด์„ ์ œ๊ฑฐํ•˜์ง€ ์•Š๋Š” ๊ฒƒ์ด ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. 
`image` ์—ด์ด ์—†์œผ๋ฉด `pixel_values`์„ ์ƒ์„ฑํ•  ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฐ ๊ฒฝ์šฐ๋ฅผ ๋ฐฉ์ง€ํ•˜๋ ค๋ฉด `remove_unused_columns=False`๋กœ ์„ค์ •ํ•˜์„ธ์š”! ์œ ์ผํ•˜๊ฒŒ ํ•„์š”ํ•œ ๋‹ค๋ฅธ ๋งค๊ฐœ๋ณ€์ˆ˜๋Š” ๋ชจ๋ธ์„ ์ €์žฅํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•˜๋Š” `output_dir`์ž…๋‹ˆ๋‹ค. `push_to_hub=True`๋ฅผ ์„ค์ •ํ•˜์—ฌ ์ด ๋ชจ๋ธ์„ Hub์— ํ‘ธ์‹œํ•ฉ๋‹ˆ๋‹ค(๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๋ ค๋ฉด Hugging Face์— ๋กœ๊ทธ์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค). ๊ฐ ์—ํฌํฌ๊ฐ€ ๋๋‚  ๋•Œ๋งˆ๋‹ค [`Trainer`]๊ฐ€ IoU ๋ฉ”ํŠธ๋ฆญ์„ ํ‰๊ฐ€ํ•˜๊ณ  ํ•™์Šต ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ €์žฅํ•ฉ๋‹ˆ๋‹ค. 2. ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ ์„ธํŠธ, ํ† ํฌ๋‚˜์ด์ €, ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ, `compute_metrics` ํ•จ์ˆ˜์™€ ํ•จ๊ป˜ ํ•™์Šต ์ธ์ž๋ฅผ [`Trainer`]์— ์ „๋‹ฌํ•˜์„ธ์š”. 3. ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ ์œ„ํ•ด [`~Trainer.train`]๋ฅผ ํ˜ธ์ถœํ•˜์„ธ์š”. ```py >>> training_args = TrainingArguments( ... output_dir="segformer-b0-scene-parse-150", ... learning_rate=6e-5, ... num_train_epochs=50, ... per_device_train_batch_size=2, ... per_device_eval_batch_size=2, ... save_total_limit=3, ... eval_strategy="steps", ... save_strategy="steps", ... save_steps=20, ... eval_steps=20, ... logging_steps=1, ... eval_accumulation_steps=5, ... remove_unused_columns=False, ... push_to_hub=True, ... ) >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=train_ds, ... eval_dataset=test_ds, ... compute_metrics=compute_metrics, ... ) >>> trainer.train() ``` ํ•™์Šต์ด ์™„๋ฃŒ๋˜๋ฉด, ๋ˆ„๊ตฌ๋‚˜ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก [`~transformers.Trainer.push_to_hub`] ๋ฉ”์„œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ด Hub์— ๋ชจ๋ธ์„ ๊ณต์œ ํ•˜์„ธ์š”: ```py >>> trainer.push_to_hub() ``` </pt> </frameworkcontent> <frameworkcontent> <tf> <Tip> Keras๋กœ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐ ์ต์ˆ™ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ, ๋จผ์ € [๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ](../training#train-a-tensorflow-model-with-keras)์„ ํ™•์ธํ•ด๋ณด์„ธ์š”! </Tip> TensorFlow์—์„œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋ ค๋ฉด ๋‹ค์Œ ๋‹จ๊ณ„๋ฅผ ๋”ฐ๋ฅด์„ธ์š”: 1. ํ•™์Šต ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •์˜ํ•˜๊ณ  ์˜ตํ‹ฐ๋งˆ์ด์ €์™€ ํ•™์Šต๋ฅ  ์Šค์ผ€์ฅด๋Ÿฌ๋ฅผ ์„ค์ •ํ•˜์„ธ์š”. 2. ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜์„ธ์š”. 3. ๐Ÿค— Dataset์„ `tf.data.Dataset`๋กœ ๋ณ€ํ™˜ํ•˜์„ธ์š”. 4. ๋ชจ๋ธ์„ ์ปดํŒŒ์ผํ•˜์„ธ์š”. 5. ์ฝœ๋ฐฑ์„ ์ถ”๊ฐ€ํ•˜์—ฌ ๋ฉ”ํŠธ๋ฆญ์„ ๊ณ„์‚ฐํ•˜๊ณ  ๐Ÿค— Hub์— ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜์„ธ์š”. 6. `fit()` ๋ฉ”์„œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ›ˆ๋ จ์„ ์‹คํ–‰ํ•˜์„ธ์š”. ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ, ์˜ตํ‹ฐ๋งˆ์ด์ €, ํ•™์Šต๋ฅ  ์Šค์ผ€์ฅด๋Ÿฌ๋ฅผ ์ •์˜ํ•˜๋Š” ๊ฒƒ์œผ๋กœ ์‹œ์ž‘ํ•˜์„ธ์š”: ```py >>> from transformers import create_optimizer >>> batch_size = 2 >>> num_epochs = 50 >>> num_train_steps = len(train_ds) * num_epochs >>> learning_rate = 6e-5 >>> weight_decay_rate = 0.01 >>> optimizer, lr_schedule = create_optimizer( ... init_lr=learning_rate, ... num_train_steps=num_train_steps, ... weight_decay_rate=weight_decay_rate, ... num_warmup_steps=0, ... ) ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ๋ ˆ์ด๋ธ” ๋งคํ•‘๊ณผ ํ•จ๊ป˜ [`TFAutoModelForSemanticSegmentation`]์„ ์‚ฌ์šฉํ•˜์—ฌ SegFormer๋ฅผ ๋ถˆ๋Ÿฌ์˜ค๊ณ  ์˜ตํ‹ฐ๋งˆ์ด์ €๋กœ ์ปดํŒŒ์ผํ•ฉ๋‹ˆ๋‹ค. ํŠธ๋žœ์Šคํฌ๋จธ ๋ชจ๋ธ์€ ๋ชจ๋‘ ๋””ํดํŠธ๋กœ ํƒœ์Šคํฌ ๊ด€๋ จ ์†์‹ค ํ•จ์ˆ˜๊ฐ€ ์žˆ์œผ๋ฏ€๋กœ ์›์น˜ ์•Š์œผ๋ฉด ์ง€์ •ํ•  ํ•„์š”๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import TFAutoModelForSemanticSegmentation >>> model = TFAutoModelForSemanticSegmentation.from_pretrained( ... checkpoint, ... id2label=id2label, ... label2id=label2id, ... ) >>> model.compile(optimizer=optimizer) # ์†์‹ค ํ•จ์ˆ˜ ์ธ์ž๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค! 
``` [`~datasets.Dataset.to_tf_dataset`] ์™€ [`DefaultDataCollator`]๋ฅผ ์‚ฌ์šฉํ•ด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ `tf.data.Dataset` ํฌ๋งท์œผ๋กœ ๋ณ€ํ™˜ํ•˜์„ธ์š”: ```py >>> from transformers import DefaultDataCollator >>> data_collator = DefaultDataCollator(return_tensors="tf") >>> tf_train_dataset = train_ds.to_tf_dataset( ... columns=["pixel_values", "label"], ... shuffle=True, ... batch_size=batch_size, ... collate_fn=data_collator, ... ) >>> tf_eval_dataset = test_ds.to_tf_dataset( ... columns=["pixel_values", "label"], ... shuffle=True, ... batch_size=batch_size, ... collate_fn=data_collator, ... ) ``` ์˜ˆ์ธก์œผ๋กœ ์ •ํ™•๋„๋ฅผ ๊ณ„์‚ฐํ•˜๊ณ  ๋ชจ๋ธ์„ ๐Ÿค— Hub๋กœ ํ‘ธ์‹œํ•˜๋ ค๋ฉด [Keras callbacks](../main_classes/keras_callbacks)๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. `compute_metrics` ํ•จ์ˆ˜๋ฅผ [`KerasMetricCallback`]์— ์ „๋‹ฌํ•˜๊ณ , ๋ชจ๋ธ ์—…๋กœ๋“œ๋ฅผ ์œ„ํ•ด [`PushToHubCallback`]๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”: ```py >>> from transformers.keras_callbacks import KerasMetricCallback, PushToHubCallback >>> metric_callback = KerasMetricCallback( ... metric_fn=compute_metrics, eval_dataset=tf_eval_dataset, batch_size=batch_size, label_cols=["labels"] ... ) >>> push_to_hub_callback = PushToHubCallback(output_dir="scene_segmentation", tokenizer=image_processor) >>> callbacks = [metric_callback, push_to_hub_callback] ``` ์ด์ œ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! ํ›ˆ๋ จ ๋ฐ ๊ฒ€์ฆ ๋ฐ์ดํ„ฐ ์„ธํŠธ, ์—ํฌํฌ ์ˆ˜์™€ ํ•จ๊ป˜ `fit()`์„ ํ˜ธ์ถœํ•˜๊ณ , ์ฝœ๋ฐฑ์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค: ```py >>> model.fit( ... tf_train_dataset, ... validation_data=tf_eval_dataset, ... callbacks=callbacks, ... epochs=num_epochs, ... ) ``` ์ถ•ํ•˜ํ•ฉ๋‹ˆ๋‹ค! ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ๐Ÿค— Hub์— ๊ณต์œ ํ–ˆ์Šต๋‹ˆ๋‹ค. ์ด์ œ ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! </tf> </frameworkcontent> ## ์ถ”๋ก ํ•˜๊ธฐ[[inference]] ์ด์ œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ–ˆ์œผ๋‹ˆ ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ์ถ”๋ก ํ•  ์ด๋ฏธ์ง€๋ฅผ ๋กœ๋“œํ•˜์„ธ์š”: ```py >>> image = ds[0]["image"] >>> image ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/semantic-seg-image.png" alt="Image of bedroom"/> </div> <frameworkcontent> <pt> ์ถ”๋ก ์„ ์œ„ํ•ด ๋ฏธ์„ธ ์กฐ์ •ํ•œ ๋ชจ๋ธ์„ ์‹œํ—˜ํ•ด ๋ณด๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`pipeline`]์—์„œ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. 
๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜์—ฌ ์ด๋ฏธ์ง€ ๋ถ„ํ• ์„ ์œ„ํ•œ `pipeline`์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ  ์ด๋ฏธ์ง€๋ฅผ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import pipeline >>> segmenter = pipeline("image-segmentation", model="my_awesome_seg_model") >>> segmenter(image) [{'score': None, 'label': 'wall', 'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062690>}, {'score': None, 'label': 'sky', 'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062A50>}, {'score': None, 'label': 'floor', 'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062B50>}, {'score': None, 'label': 'ceiling', 'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062A10>}, {'score': None, 'label': 'bed ', 'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062E90>}, {'score': None, 'label': 'windowpane', 'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062390>}, {'score': None, 'label': 'cabinet', 'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062550>}, {'score': None, 'label': 'chair', 'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062D90>}, {'score': None, 'label': 'armchair', 'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062E10>}] ``` ์›ํ•˜๋Š” ๊ฒฝ์šฐ `pipeline`์˜ ๊ฒฐ๊ณผ๋ฅผ ์ˆ˜๋™์œผ๋กœ ๋ณต์ œํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋กœ ์ด๋ฏธ์ง€๋ฅผ ์ฒ˜๋ฆฌํ•˜๊ณ  `pixel_values`์„ GPU์— ๋ฐฐ์น˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # ๊ฐ€๋Šฅํ•˜๋‹ค๋ฉด GPU๋ฅผ ์‚ฌ์šฉํ•˜๊ณ , ๊ทธ๋ ‡์ง€ ์•Š๋‹ค๋ฉด CPU๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š” >>> encoding = image_processor(image, return_tensors="pt") >>> pixel_values = encoding.pixel_values.to(device) ``` ๋ชจ๋ธ์— ์ž…๋ ฅ์„ ์ „๋‹ฌํ•˜๊ณ  `logits`๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> outputs = model(pixel_values=pixel_values) >>> logits = outputs.logits.cpu() ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ๋กœ์ง“์˜ ํฌ๊ธฐ๋ฅผ ์›๋ณธ ์ด๋ฏธ์ง€ ํฌ๊ธฐ๋กœ ๋‹ค์‹œ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค: ```py >>> upsampled_logits = nn.functional.interpolate( ... logits, ... size=image.size[::-1], ... mode="bilinear", ... align_corners=False, ... ) >>> pred_seg = upsampled_logits.argmax(dim=1)[0] ``` </pt> </frameworkcontent> <frameworkcontent> <tf> ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ๋กœ๋“œํ•˜์—ฌ ์ด๋ฏธ์ง€๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๊ณ  ์ž…๋ ฅ์„ TensorFlow ํ…์„œ๋กœ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoImageProcessor >>> image_processor = AutoImageProcessor.from_pretrained("MariaK/scene_segmentation") >>> inputs = image_processor(image, return_tensors="tf") ``` ๋ชจ๋ธ์— ์ž…๋ ฅ์„ ์ „๋‹ฌํ•˜๊ณ  `logits`๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import TFAutoModelForSemanticSegmentation >>> model = TFAutoModelForSemanticSegmentation.from_pretrained("MariaK/scene_segmentation") >>> logits = model(**inputs).logits ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ๋กœ๊ทธ๋ฅผ ์›๋ณธ ์ด๋ฏธ์ง€ ํฌ๊ธฐ๋กœ ์žฌ์กฐ์ •ํ•˜๊ณ  ํด๋ž˜์Šค ์ฐจ์›์— argmax๋ฅผ ์ ์šฉํ•ฉ๋‹ˆ๋‹ค: ```py >>> logits = tf.transpose(logits, [0, 2, 3, 1]) >>> upsampled_logits = tf.image.resize( ... logits, ... # `image.size`๊ฐ€ ๋„ˆ๋น„์™€ ๋†’์ด๋ฅผ ๋ฐ˜ํ™˜ํ•˜๊ธฐ ๋•Œ๋ฌธ์— `image`์˜ ๋ชจ์–‘์„ ๋ฐ˜์ „์‹œํ‚ต๋‹ˆ๋‹ค ... image.size[::-1], ... ) >>> pred_seg = tf.math.argmax(upsampled_logits, axis=-1)[0] ``` </tf> </frameworkcontent> ๊ฒฐ๊ณผ๋ฅผ ์‹œ๊ฐํ™”ํ•˜๋ ค๋ฉด [dataset color palette](https://github.com/tensorflow/models/blob/3f1ca33afe3c1631b733ea7e40c294273b9e406d/research/deeplab/utils/get_dataset_colormap.py#L51)๋ฅผ ๊ฐ ํด๋ž˜์Šค๋ฅผ RGB ๊ฐ’์— ๋งคํ•‘ํ•˜๋Š” `ade_palette()`๋กœ ๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค. 
๊ทธ๋Ÿฐ ๋‹ค์Œ ์ด๋ฏธ์ง€์™€ ์˜ˆ์ธก๋œ ๋ถ„ํ•  ์ง€๋„(segmentation map)์„ ๊ฒฐํ•ฉํ•˜์—ฌ ๊ตฌ์„ฑํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> import matplotlib.pyplot as plt >>> import numpy as np >>> color_seg = np.zeros((pred_seg.shape[0], pred_seg.shape[1], 3), dtype=np.uint8) >>> palette = np.array(ade_palette()) >>> for label, color in enumerate(palette): ... color_seg[pred_seg == label, :] = color >>> color_seg = color_seg[..., ::-1] # BGR๋กœ ๋ณ€ํ™˜ >>> img = np.array(image) * 0.5 + color_seg * 0.5 # ๋ถ„ํ•  ์ง€๋„์œผ๋กœ ์ด๋ฏธ์ง€ ๊ตฌ์„ฑ >>> img = img.astype(np.uint8) >>> plt.figure(figsize=(15, 10)) >>> plt.imshow(img) >>> plt.show() ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/semantic-seg-preds.png" alt="Image of bedroom overlaid with segmentation map"/> </div>
0
mavonic_private_repos/transformers/docs/source/ko
mavonic_private_repos/transformers/docs/source/ko/tasks/zero_shot_object_detection.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์ œ๋กœ์ƒท(zero-shot) ๊ฐ์ฒด ํƒ์ง€[[zeroshot-object-detection]] [[open-in-colab]] ์ผ๋ฐ˜์ ์œผ๋กœ [๊ฐ์ฒด ํƒ์ง€](object_detection)์— ์‚ฌ์šฉ๋˜๋Š” ๋ชจ๋ธ์„ ํ•™์Šตํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” ๋ ˆ์ด๋ธ”์ด ์ง€์ •๋œ ์ด๋ฏธ์ง€ ๋ฐ์ดํ„ฐ ์„ธํŠธ๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ํ•™์Šต ๋ฐ์ดํ„ฐ์— ์กด์žฌํ•˜๋Š” ํด๋ž˜์Šค(๋ ˆ์ด๋ธ”)๋งŒ ํƒ์ง€ํ•  ์ˆ˜ ์žˆ๋‹ค๋Š” ํ•œ๊ณ„์ ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค๋ฅธ ๋ฐฉ์‹์„ ์‚ฌ์šฉํ•˜๋Š” [OWL-ViT](../model_doc/owlvit) ๋ชจ๋ธ๋กœ ์ œ๋กœ์ƒท ๊ฐ์ฒด ํƒ์ง€๊ฐ€ ๊ฐ€๋Šฅํ•ฉ๋‹ˆ๋‹ค. OWL-ViT๋Š” ๊ฐœ๋ฐฉํ˜• ์–ดํœ˜(open-vocabulary) ๊ฐ์ฒด ํƒ์ง€๊ธฐ์ž…๋‹ˆ๋‹ค. ์ฆ‰, ๋ ˆ์ด๋ธ”์ด ์ง€์ •๋œ ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ๋ฏธ์„ธ ์กฐ์ •ํ•˜์ง€ ์•Š๊ณ  ์ž์œ  ํ…์ŠคํŠธ ์ฟผ๋ฆฌ๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ์ด๋ฏธ์ง€์—์„œ ๊ฐ์ฒด๋ฅผ ํƒ์ง€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. OWL-ViT ๋ชจ๋ธ์€ ๋ฉ€ํ‹ฐ ๋ชจ๋‹ฌ ํ‘œํ˜„์„ ํ™œ์šฉํ•ด ๊ฐœ๋ฐฉํ˜• ์–ดํœ˜ ํƒ์ง€(open-vocabulary detection)๋ฅผ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค. [CLIP](../model_doc/clip) ๋ชจ๋ธ์— ๊ฒฝ๋Ÿ‰ํ™”(lightweight)๋œ ๊ฐ์ฒด ๋ถ„๋ฅ˜์™€ ์ง€์—ญํ™”(localization) ํ—ค๋“œ๋ฅผ ๊ฒฐํ•ฉํ•ฉ๋‹ˆ๋‹ค. ๊ฐœ๋ฐฉํ˜• ์–ดํœ˜ ํƒ์ง€๋Š” CLIP์˜ ํ…์ŠคํŠธ ์ธ์ฝ”๋”๋กœ free-text ์ฟผ๋ฆฌ๋ฅผ ์ž„๋ฒ ๋”ฉํ•˜๊ณ , ๊ฐ์ฒด ๋ถ„๋ฅ˜์™€ ์ง€์—ญํ™” ํ—ค๋“œ์˜ ์ž…๋ ฅ์œผ๋กœ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€์™€ ํ•ด๋‹น ํ…์ŠคํŠธ ์„ค๋ช…์„ ์—ฐ๊ฒฐํ•˜๋ฉด ViT๊ฐ€ ์ด๋ฏธ์ง€ ํŒจ์น˜(image patches)๋ฅผ ์ž…๋ ฅ์œผ๋กœ ์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค. OWL-ViT ๋ชจ๋ธ์˜ ์ €์ž๋“ค์€ CLIP ๋ชจ๋ธ์„ ์ฒ˜์Œ๋ถ€ํ„ฐ ํ•™์Šต(scratch learning)ํ•œ ํ›„์—, bipartite matching loss๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ‘œ์ค€ ๊ฐ์ฒด ์ธ์‹ ๋ฐ์ดํ„ฐ์…‹์œผ๋กœ OWL-ViT ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ–ˆ์Šต๋‹ˆ๋‹ค. ์ด ์ ‘๊ทผ ๋ฐฉ์‹์„ ์‚ฌ์šฉํ•˜๋ฉด ๋ชจ๋ธ์€ ๋ ˆ์ด๋ธ”์ด ์ง€์ •๋œ ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ๋Œ€ํ•œ ์‚ฌ์ „ ํ•™์Šต ์—†์ด๋„ ํ…์ŠคํŠธ ์„ค๋ช…์„ ๊ธฐ๋ฐ˜์œผ๋กœ ๊ฐ์ฒด๋ฅผ ํƒ์ง€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฒˆ ๊ฐ€์ด๋“œ์—์„œ๋Š” OWL-ViT ๋ชจ๋ธ์˜ ์‚ฌ์šฉ๋ฒ•์„ ๋‹ค๋ฃฐ ๊ฒƒ์ž…๋‹ˆ๋‹ค: - ํ…์ŠคํŠธ ํ”„๋กฌํ”„ํŠธ ๊ธฐ๋ฐ˜ ๊ฐ์ฒด ํƒ์ง€ - ์ผ๊ด„ ๊ฐ์ฒด ํƒ์ง€ - ์ด๋ฏธ์ง€ ๊ฐ€์ด๋“œ ๊ฐ์ฒด ํƒ์ง€ ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ๋ชจ๋‘ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install -q transformers ``` ## ์ œ๋กœ์ƒท(zero-shot) ๊ฐ์ฒด ํƒ์ง€ ํŒŒ์ดํ”„๋ผ์ธ[[zeroshot-object-detection-pipeline]] [`pipeline`]์„ ํ™œ์šฉํ•˜๋ฉด ๊ฐ€์žฅ ๊ฐ„๋‹จํ•˜๊ฒŒ OWL-ViT ๋ชจ๋ธ์„ ์ถ”๋ก ํ•ด๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [Hugging Face Hub์— ์—…๋กœ๋“œ๋œ ์ฒดํฌํฌ์ธํŠธ](https://huggingface.co/models?pipeline_tag=zero-shot-image-classification&sort=downloads)์—์„œ ์ œ๋กœ์ƒท(zero-shot) ๊ฐ์ฒด ํƒ์ง€์šฉ ํŒŒ์ดํ”„๋ผ์ธ์„ ์ธ์Šคํ„ด์Šคํ™”ํ•ฉ๋‹ˆ๋‹ค: ```python >>> from transformers import pipeline >>> checkpoint = "google/owlvit-base-patch32" >>> detector = pipeline(model=checkpoint, task="zero-shot-object-detection") ``` ๋‹ค์Œ์œผ๋กœ, ๊ฐ์ฒด๋ฅผ ํƒ์ง€ํ•˜๊ณ  ์‹ถ์€ ์ด๋ฏธ์ง€๋ฅผ ์„ ํƒํ•˜์„ธ์š”. 
์—ฌ๊ธฐ์„œ๋Š” [NASA](https://www.nasa.gov/multimedia/imagegallery/index.html) Great Images ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์ผ๋ถ€์ธ ์šฐ์ฃผ๋น„ํ–‰์‚ฌ ์—์ผ๋ฆฐ ์ฝœ๋ฆฐ์Šค(Eileen Collins) ์‚ฌ์ง„์„ ์‚ฌ์šฉํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ```py >>> import skimage >>> import numpy as np >>> from PIL import Image >>> image = skimage.data.astronaut() >>> image = Image.fromarray(np.uint8(image)).convert("RGB") >>> image ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_1.png" alt="Astronaut Eileen Collins"/> </div> ์ด๋ฏธ์ง€์™€ ํ•ด๋‹น ์ด๋ฏธ์ง€์˜ ํ›„๋ณด ๋ ˆ์ด๋ธ”์„ ํŒŒ์ดํ”„๋ผ์ธ์œผ๋กœ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ๋Š” ์ด๋ฏธ์ง€๋ฅผ ์ง์ ‘ ์ „๋‹ฌํ•˜์ง€๋งŒ, ์ปดํ“จํ„ฐ์— ์ €์žฅ๋œ ์ด๋ฏธ์ง€์˜ ๊ฒฝ๋กœ๋‚˜ url๋กœ ์ „๋‹ฌํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. candidate_labels๋Š” ์ด ์˜ˆ์‹œ์ฒ˜๋Ÿผ ๊ฐ„๋‹จํ•œ ๋‹จ์–ด์ผ ์ˆ˜๋„ ์žˆ๊ณ  ์ข€ ๋” ์„ค๋ช…์ ์ธ ๋‹จ์–ด์ผ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ, ์ด๋ฏธ์ง€๋ฅผ ๊ฒ€์ƒ‰(query)ํ•˜๋ ค๋Š” ๋ชจ๋“  ํ•ญ๋ชฉ์— ๋Œ€ํ•œ ํ…์ŠคํŠธ ์„ค๋ช…๋„ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. ```py >>> predictions = detector( ... image, ... candidate_labels=["human face", "rocket", "nasa badge", "star-spangled banner"], ... ) >>> predictions [{'score': 0.3571370542049408, 'label': 'human face', 'box': {'xmin': 180, 'ymin': 71, 'xmax': 271, 'ymax': 178}}, {'score': 0.28099656105041504, 'label': 'nasa badge', 'box': {'xmin': 129, 'ymin': 348, 'xmax': 206, 'ymax': 427}}, {'score': 0.2110239565372467, 'label': 'rocket', 'box': {'xmin': 350, 'ymin': -1, 'xmax': 468, 'ymax': 288}}, {'score': 0.13790413737297058, 'label': 'star-spangled banner', 'box': {'xmin': 1, 'ymin': 1, 'xmax': 105, 'ymax': 509}}, {'score': 0.11950037628412247, 'label': 'nasa badge', 'box': {'xmin': 277, 'ymin': 338, 'xmax': 327, 'ymax': 380}}, {'score': 0.10649408400058746, 'label': 'rocket', 'box': {'xmin': 358, 'ymin': 64, 'xmax': 424, 'ymax': 280}}] ``` ์ด์ œ ์˜ˆ์ธก๊ฐ’์„ ์‹œ๊ฐํ™”ํ•ด๋ด…์‹œ๋‹ค: ```py >>> from PIL import ImageDraw >>> draw = ImageDraw.Draw(image) >>> for prediction in predictions: ... box = prediction["box"] ... label = prediction["label"] ... score = prediction["score"] ... xmin, ymin, xmax, ymax = box.values() ... draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=1) ... draw.text((xmin, ymin), f"{label}: {round(score,2)}", fill="white") >>> image ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_2.png" alt="Visualized predictions on NASA image"/> </div> ## ํ…์ŠคํŠธ ํ”„๋กฌํ”„ํŠธ ๊ธฐ๋ฐ˜ ๊ฐ์ฒด ํƒ์ง€[[textprompted-zeroshot-object-detection-by-hand]] ์ œ๋กœ์ƒท ๊ฐ์ฒด ํƒ์ง€ ํŒŒ์ดํ”„๋ผ์ธ ์‚ฌ์šฉ๋ฒ•์— ๋Œ€ํ•ด ์‚ดํŽด๋ณด์•˜์œผ๋‹ˆ, ์ด์ œ ๋™์ผํ•œ ๊ฒฐ๊ณผ๋ฅผ ๋ณต์ œํ•ด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. [Hugging Face Hub์— ์—…๋กœ๋“œ๋œ ์ฒดํฌํฌ์ธํŠธ](https://huggingface.co/models?other=owlvit)์—์„œ ๊ด€๋ จ ๋ชจ๋ธ๊ณผ ํ”„๋กœ์„ธ์„œ๋ฅผ ๊ฐ€์ ธ์˜ค๋Š” ๊ฒƒ์œผ๋กœ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค. 
์—ฌ๊ธฐ์„œ๋Š” ์ด์ „๊ณผ ๋™์ผํ•œ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection >>> model = AutoModelForZeroShotObjectDetection.from_pretrained(checkpoint) >>> processor = AutoProcessor.from_pretrained(checkpoint) ``` ๋‹ค๋ฅธ ์ด๋ฏธ์ง€๋ฅผ ์‚ฌ์šฉํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> import requests >>> url = "https://unsplash.com/photos/oj0zeY2Ltk4/download?ixid=MnwxMjA3fDB8MXxzZWFyY2h8MTR8fHBpY25pY3xlbnwwfHx8fDE2Nzc0OTE1NDk&force=true&w=640" >>> im = Image.open(requests.get(url, stream=True).raw) >>> im ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_3.png" alt="Beach photo"/> </div> ํ”„๋กœ์„ธ์„œ๋ฅผ ์‚ฌ์šฉํ•ด ๋ชจ๋ธ์˜ ์ž…๋ ฅ์„ ์ค€๋น„ํ•ฉ๋‹ˆ๋‹ค. ํ”„๋กœ์„ธ์„œ๋Š” ๋ชจ๋ธ์˜ ์ž…๋ ฅ์œผ๋กœ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด ์ด๋ฏธ์ง€ ํฌ๊ธฐ๋ฅผ ๋ณ€ํ™˜ํ•˜๊ณ  ์ •๊ทœํ™”ํ•˜๋Š” ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ์™€ ํ…์ŠคํŠธ ์ž…๋ ฅ์„ ์ฒ˜๋ฆฌํ•˜๋Š” [`CLIPTokenizer`]๋กœ ๊ตฌ์„ฑ๋ฉ๋‹ˆ๋‹ค. ```py >>> text_queries = ["hat", "book", "sunglasses", "camera"] >>> inputs = processor(text=text_queries, images=im, return_tensors="pt") ``` ๋ชจ๋ธ์— ์ž…๋ ฅ์„ ์ „๋‹ฌํ•˜๊ณ  ๊ฒฐ๊ณผ๋ฅผ ํ›„์ฒ˜๋ฆฌ ๋ฐ ์‹œ๊ฐํ™”ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๊ฐ€ ๋ชจ๋ธ์— ์ด๋ฏธ์ง€๋ฅผ ์ž…๋ ฅํ•˜๊ธฐ ์ „์— ์ด๋ฏธ์ง€ ํฌ๊ธฐ๋ฅผ ์กฐ์ •ํ–ˆ๊ธฐ ๋•Œ๋ฌธ์—, [`~OwlViTImageProcessor.post_process_object_detection`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ด ์˜ˆ์ธก๊ฐ’์˜ ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค(bounding box)๊ฐ€ ์›๋ณธ ์ด๋ฏธ์ง€์˜ ์ขŒํ‘œ์™€ ์ƒ๋Œ€์ ์œผ๋กœ ๋™์ผํ•œ์ง€ ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```py >>> import torch >>> with torch.no_grad(): ... outputs = model(**inputs) ... target_sizes = torch.tensor([im.size[::-1]]) ... results = processor.post_process_object_detection(outputs, threshold=0.1, target_sizes=target_sizes)[0] >>> draw = ImageDraw.Draw(im) >>> scores = results["scores"].tolist() >>> labels = results["labels"].tolist() >>> boxes = results["boxes"].tolist() >>> for box, score, label in zip(boxes, scores, labels): ... xmin, ymin, xmax, ymax = box ... draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=1) ... draw.text((xmin, ymin), f"{text_queries[label]}: {round(score,2)}", fill="white") >>> im ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_4.png" alt="Beach photo with detected objects"/> </div> ## ์ผ๊ด„ ์ฒ˜๋ฆฌ[[batch-processing]] ์—ฌ๋Ÿฌ ์ด๋ฏธ์ง€์™€ ํ…์ŠคํŠธ ์ฟผ๋ฆฌ๋ฅผ ์ „๋‹ฌํ•˜์—ฌ ์—ฌ๋Ÿฌ ์ด๋ฏธ์ง€์—์„œ ์„œ๋กœ ๋‹ค๋ฅธ(๋˜๋Š” ๋™์ผํ•œ) ๊ฐ์ฒด๋ฅผ ๊ฒ€์ƒ‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ผ๊ด„ ์ฒ˜๋ฆฌ๋ฅผ ์œ„ํ•ด์„œ ํ…์ŠคํŠธ ์ฟผ๋ฆฌ๋Š” ์ด์ค‘ ๋ฆฌ์ŠคํŠธ๋กœ, ์ด๋ฏธ์ง€๋Š” PIL ์ด๋ฏธ์ง€, PyTorch ํ…์„œ, ๋˜๋Š” NumPy ๋ฐฐ์—ด๋กœ ์ด๋ฃจ์–ด์ง„ ๋ฆฌ์ŠคํŠธ๋กœ ํ”„๋กœ์„ธ์„œ์— ์ „๋‹ฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```py >>> images = [image, im] >>> text_queries = [ ... ["human face", "rocket", "nasa badge", "star-spangled banner"], ... ["hat", "book", "sunglasses", "camera"], ... ] >>> inputs = processor(text=text_queries, images=images, return_tensors="pt") ``` ์ด์ „์—๋Š” ํ›„์ฒ˜๋ฆฌ๋ฅผ ์œ„ํ•ด ๋‹จ์ผ ์ด๋ฏธ์ง€์˜ ํฌ๊ธฐ๋ฅผ ํ…์„œ๋กœ ์ „๋‹ฌํ–ˆ์ง€๋งŒ, ํŠœํ”Œ์„ ์ „๋‹ฌํ•  ์ˆ˜ ์žˆ๊ณ , ์—ฌ๋Ÿฌ ์ด๋ฏธ์ง€๋ฅผ ์ฒ˜๋ฆฌํ•˜๋Š” ๊ฒฝ์šฐ์—๋Š” ํŠœํ”Œ๋กœ ์ด๋ฃจ์–ด์ง„ ๋ฆฌ์ŠคํŠธ๋ฅผ ์ „๋‹ฌํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜ ๋‘ ์˜ˆ์ œ์— ๋Œ€ํ•œ ์˜ˆ์ธก์„ ์ƒ์„ฑํ•˜๊ณ , ๋‘ ๋ฒˆ์งธ ์ด๋ฏธ์ง€(`image_idx = 1`)๋ฅผ ์‹œ๊ฐํ™”ํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ```py >>> with torch.no_grad(): ... outputs = model(**inputs) ... 
target_sizes = [x.size[::-1] for x in images] ... results = processor.post_process_object_detection(outputs, threshold=0.1, target_sizes=target_sizes) >>> image_idx = 1 >>> draw = ImageDraw.Draw(images[image_idx]) >>> scores = results[image_idx]["scores"].tolist() >>> labels = results[image_idx]["labels"].tolist() >>> boxes = results[image_idx]["boxes"].tolist() >>> for box, score, label in zip(boxes, scores, labels): ... xmin, ymin, xmax, ymax = box ... draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=1) ... draw.text((xmin, ymin), f"{text_queries[image_idx][label]}: {round(score,2)}", fill="white") >>> images[image_idx] ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_4.png" alt="Beach photo with detected objects"/> </div> ## ์ด๋ฏธ์ง€ ๊ฐ€์ด๋“œ ๊ฐ์ฒด ํƒ์ง€[[imageguided-object-detection]] ํ…์ŠคํŠธ ์ฟผ๋ฆฌ๋ฅผ ์ด์šฉํ•œ ์ œ๋กœ์ƒท ๊ฐ์ฒด ํƒ์ง€ ์™ธ์—๋„ OWL-ViT ๋ชจ๋ธ์€ ์ด๋ฏธ์ง€ ๊ฐ€์ด๋“œ ๊ฐ์ฒด ํƒ์ง€ ๊ธฐ๋Šฅ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€๋ฅผ ์ฟผ๋ฆฌ๋กœ ์‚ฌ์šฉํ•ด ๋Œ€์ƒ ์ด๋ฏธ์ง€์—์„œ ์œ ์‚ฌํ•œ ๊ฐ์ฒด๋ฅผ ์ฐพ์„ ์ˆ˜ ์žˆ๋‹ค๋Š” ์˜๋ฏธ์ž…๋‹ˆ๋‹ค. ํ…์ŠคํŠธ ์ฟผ๋ฆฌ์™€ ๋‹ฌ๋ฆฌ, ์ฟผ๋ฆฌ๋กœ๋Š” ํ•˜๋‚˜์˜ ์˜ˆ์ œ ์ด๋ฏธ์ง€๋งŒ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์†ŒํŒŒ์— ๊ณ ์–‘์ด ๋‘ ๋งˆ๋ฆฌ๊ฐ€ ์žˆ๋Š” ์ด๋ฏธ์ง€๋ฅผ ๋Œ€์ƒ ์ด๋ฏธ์ง€(target image)๋กœ, ๊ณ ์–‘์ด ํ•œ ๋งˆ๋ฆฌ๊ฐ€ ์žˆ๋Š” ์ด๋ฏธ์ง€๋ฅผ ์ฟผ๋ฆฌ๋กœ ์‚ฌ์šฉํ•ด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image_target = Image.open(requests.get(url, stream=True).raw) >>> query_url = "http://images.cocodataset.org/val2017/000000524280.jpg" >>> query_image = Image.open(requests.get(query_url, stream=True).raw) ``` ๋‹ค์Œ ์ด๋ฏธ์ง€๋ฅผ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> import matplotlib.pyplot as plt >>> fig, ax = plt.subplots(1, 2) >>> ax[0].imshow(image_target) >>> ax[1].imshow(query_image) ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_5.png" alt="Cats"/> </div> ์ „์ฒ˜๋ฆฌ ๋‹จ๊ณ„์—์„œ ํ…์ŠคํŠธ ์ฟผ๋ฆฌ ๋Œ€์‹ ์— `query_images`๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค: ```py >>> inputs = processor(images=image_target, query_images=query_image, return_tensors="pt") ``` ์˜ˆ์ธก์˜ ๊ฒฝ์šฐ, ๋ชจ๋ธ์— ์ž…๋ ฅ์„ ์ „๋‹ฌํ•˜๋Š” ๋Œ€์‹  [`~OwlViTForObjectDetection.image_guided_detection`]์— ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฒˆ์—๋Š” ๊ฒฐ๊ณผ์— ๋ ˆ์ด๋ธ”์ด ์—†๋‹ค๋Š” ์ ์„ ์ œ์™ธํ•˜๋ฉด ์ด์ „๊ณผ ๋™์ผํ•ฉ๋‹ˆ๋‹ค. ์ด์ „๊ณผ ๋™์ผํ•˜๊ฒŒ ์ด๋ฏธ์ง€๋ฅผ ์‹œ๊ฐํ™”ํ•ฉ๋‹ˆ๋‹ค. ```py >>> with torch.no_grad(): ... outputs = model.image_guided_detection(**inputs) ... target_sizes = torch.tensor([image_target.size[::-1]]) ... results = processor.post_process_image_guided_detection(outputs=outputs, target_sizes=target_sizes)[0] >>> draw = ImageDraw.Draw(image_target) >>> scores = results["scores"].tolist() >>> boxes = results["boxes"].tolist() >>> for box, score in zip(boxes, scores): ... xmin, ymin, xmax, ymax = box ... draw.rectangle((xmin, ymin, xmax, ymax), outline="white", width=4) >>> image_target ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_6.png" alt="Cats with bounding boxes"/> </div> OWL-ViT ๋ชจ๋ธ์„ ์ถ”๋ก ํ•˜๊ณ  ์‹ถ๋‹ค๋ฉด ์•„๋ž˜ ๋ฐ๋ชจ๋ฅผ ํ™•์ธํ•˜์„ธ์š”: <iframe src="https://adirik-owl-vit.hf.space" frameborder="0" width="850" height="450" ></iframe>
0
mavonic_private_repos/transformers/docs/source/ko
mavonic_private_repos/transformers/docs/source/ko/model_doc/llama.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # LLaMA [[llama]] ## ๊ฐœ์š” [[overview]] LLaMA ๋ชจ๋ธ์€ Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothรฉe Lacroix, Baptiste Roziรจre, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample์— ์˜ํ•ด ์ œ์•ˆ๋œ [LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971)์—์„œ ์†Œ๊ฐœ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ด ๋ชจ๋ธ์€ 7B์—์„œ 65B๊ฐœ์˜ ํŒŒ๋ผ๋ฏธํ„ฐ๊นŒ์ง€ ๋‹ค์–‘ํ•œ ํฌ๊ธฐ์˜ ๊ธฐ์ดˆ ์–ธ์–ด ๋ชจ๋ธ์„ ๋ชจ์•„๋†“์€ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋…ผ๋ฌธ์˜ ์ดˆ๋ก์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: *"LLaMA๋Š” 7B์—์„œ 65B๊ฐœ์˜ ํŒŒ๋ผ๋ฏธํ„ฐ ์ˆ˜๋ฅผ ๊ฐ€์ง„ ๊ธฐ์ดˆ ์–ธ์–ด ๋ชจ๋ธ์˜ ๋ชจ์Œ์ž…๋‹ˆ๋‹ค. ์šฐ๋ฆฌ๋Š” ์ˆ˜์กฐ ๊ฐœ์˜ ํ† ํฐ์œผ๋กœ ๋ชจ๋ธ์„ ํ›ˆ๋ จ์‹œ์ผฐ๊ณ , ๊ณต๊ฐœ์ ์œผ๋กœ ์ด์šฉ ๊ฐ€๋Šฅํ•œ ๋ฐ์ดํ„ฐ์…‹๋งŒ์„ ์‚ฌ์šฉํ•˜์—ฌ ์ตœ๊ณ  ์ˆ˜์ค€์˜ ๋ชจ๋ธ์„ ํ›ˆ๋ จ์‹œํ‚ฌ ์ˆ˜ ์žˆ์Œ์„ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค. ํŠนํžˆ, LLaMA-13B ๋ชจ๋ธ์€ ๋Œ€๋ถ€๋ถ„์˜ ๋ฒค์น˜๋งˆํฌ์—์„œ GPT-3 (175B)๋ฅผ ๋Šฅ๊ฐ€ํ•˜๋ฉฐ, LLaMA-65B๋Š” ์ตœ๊ณ  ์ˆ˜์ค€์˜ ๋ชจ๋ธ์ธ Chinchilla-70B์™€ PaLM-540B์— ๋ฒ„๊ธˆ๊ฐ€๋Š” ์„ฑ๋Šฅ์„ ๋ณด์ž…๋‹ˆ๋‹ค. ์šฐ๋ฆฌ๋Š” ๋ชจ๋“  ๋ชจ๋ธ์„ ์—ฐ๊ตฌ ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๊ณต๊ฐœํ•ฉ๋‹ˆ๋‹ค."* ํŒ: - LLaMA ๋ชจ๋ธ์˜ ๊ฐ€์ค‘์น˜๋Š” [์ด ์–‘์‹](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform?usp=send_form)์„ ์ž‘์„ฑํ•˜์—ฌ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. - ๊ฐ€์ค‘์น˜๋ฅผ ๋‹ค์šด๋กœ๋“œํ•œ ํ›„์—๋Š” ์ด๋ฅผ [๋ณ€ํ™˜ ์Šคํฌ๋ฆฝํŠธ](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ Hugging Face Transformers ํ˜•์‹์œผ๋กœ ๋ณ€ํ™˜ํ•ด์•ผํ•ฉ๋‹ˆ๋‹ค. ๋ณ€ํ™˜ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋ ค๋ฉด ์•„๋ž˜์˜ ์˜ˆ์‹œ ๋ช…๋ น์–ด๋ฅผ ์ฐธ๊ณ ํ•˜์„ธ์š”: ```bash python src/transformers/models/llama/convert_llama_weights_to_hf.py \ --input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path ``` - ๋ณ€ํ™˜์„ ํ•˜์˜€๋‹ค๋ฉด ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```python from transformers import LlamaForCausalLM, LlamaTokenizer tokenizer = LlamaTokenizer.from_pretrained("/output/path") model = LlamaForCausalLM.from_pretrained("/output/path") ``` ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์‹คํ–‰ํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” ๋ชจ๋ธ์„ float16 ์ •๋ฐ€๋„๋กœ ์ „๋ถ€ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์„ ๋งŒํผ์˜ ์ถฉ๋ถ„ํ•œ CPU RAM์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. (๊ฐ€์žฅ ํฐ ๋ฒ„์ „์˜ ๋ชจ๋ธ์ด ์—ฌ๋Ÿฌ ์ฒดํฌํฌ์ธํŠธ๋กœ ๋‚˜๋‰˜์–ด ์žˆ๋”๋ผ๋„, ๊ฐ ์ฒดํฌํฌ์ธํŠธ๋Š” ๋ชจ๋ธ์˜ ๊ฐ ๊ฐ€์ค‘์น˜์˜ ์ผ๋ถ€๋ฅผ ํฌํ•จํ•˜๊ณ  ์žˆ๊ธฐ ๋•Œ๋ฌธ์— ๋ชจ๋“  ์ฒดํฌํฌ์ธํŠธ๋ฅผ RAM์— ๋กœ๋“œํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค) 65B ๋ชจ๋ธ์˜ ๊ฒฝ์šฐ, ์ด 130GB์˜ RAM์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. - LLaMA ํ† ํฌ๋‚˜์ด์ €๋Š” [sentencepiece](https://github.com/google/sentencepiece)๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•˜๋Š” BPE ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. 
sentencepiece์˜ ํŠน์ง• ์ค‘ ํ•˜๋‚˜๋Š” ์‹œํ€€์Šค๋ฅผ ๋””์ฝ”๋”ฉํ•  ๋•Œ ์ฒซ ํ† ํฐ์ด ๋‹จ์–ด์˜ ์‹œ์ž‘์ด๋ผ๋ฉด (์˜ˆ๋ฅผ ๋“ค์–ด "Banana"), ํ† ํฌ๋‚˜์ด์ €๋Š” ๋ฌธ์ž์—ด ์•ž์— ๊ณต๋ฐฑ์„ ์ถ”๊ฐ€ํ•˜์ง€ ์•Š๋Š”๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด ๋ชจ๋ธ์€ [BlackSamorez](https://huggingface.co/BlackSamorez)์˜ ๊ธฐ์—ฌ์™€ ํ•จ๊ป˜, [zphang](https://huggingface.co/zphang)์— ์˜ํ•ด ์ œ๊ณต๋˜์—ˆ์Šต๋‹ˆ๋‹ค. Hugging Face์—์„œ์˜ ๊ตฌํ˜„ ์ฝ”๋“œ๋Š” GPT-NeoX๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•˜๋ฉฐ [์—ฌ๊ธฐ](https://github.com/EleutherAI/gpt-neox)์—์„œ ์ฐพ์„ ์ˆ˜ ์žˆ๊ณ , ์ €์ž์˜ ์ฝ”๋“œ ์›๋ณธ์€ [์—ฌ๊ธฐ](https://github.com/facebookresearch/llama)์—์„œ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์›๋ž˜ LLaMA ๋ชจ๋ธ์„ ๊ธฐ๋ฐ˜์œผ๋กœ Meta AI์—์„œ ๋ช‡ ๊ฐ€์ง€ ํ›„์† ์ž‘์—…์„ ๋ฐœํ‘œํ–ˆ์Šต๋‹ˆ๋‹ค: - **Llama2**: Llama2๋Š” ๊ตฌ์กฐ์ ์ธ ๋ช‡ ๊ฐ€์ง€ ์ˆ˜์ •(Grouped Query Attention)์„ ํ†ตํ•ด ๊ฐœ์„ ๋œ ๋ฒ„์ „์ด๋ฉฐ, 2์กฐ ๊ฐœ์˜ ํ† ํฐ์œผ๋กœ ์‚ฌ์ „ ํ›ˆ๋ จ์ด ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. Llama2์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [์ด ๋ฌธ์„œ](llama2)๋ฅผ ์ฐธ๊ณ ํ•˜์„ธ์š”. ## ๋ฆฌ์†Œ์Šค [[resources]] LLaMA๋ฅผ ์‹œ์ž‘ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋  Hugging Face ๋ฐ ์ปค๋ฎค๋‹ˆํ‹ฐ(๐ŸŒŽ๋กœ ํ‘œ์‹œ)์˜ ๊ณต์‹ ์ž๋ฃŒ ๋ชฉ๋ก์ž…๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์— ์ž๋ฃŒ๋ฅผ ์ œ์ถœํ•˜๊ณ  ์‹ถ๋‹ค๋ฉด Pull Request๋ฅผ ์˜ฌ๋ ค์ฃผ์„ธ์š”! ์ถ”๊ฐ€ํ•  ์ž๋ฃŒ๋Š” ๊ธฐ์กด์˜ ์ž๋ฃŒ์™€ ์ค‘๋ณต๋˜์ง€ ์•Š๊ณ  ์ƒˆ๋กœ์šด ๋‚ด์šฉ์„ ๋ณด์—ฌ์ฃผ๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. <PipelineTag pipeline="text-classification"/> - LLaMA ๋ชจ๋ธ์„ ํ…์ŠคํŠธ ๋ถ„๋ฅ˜ ์ž‘์—…์— ์ ์šฉํ•˜๊ธฐ ์œ„ํ•œ ํ”„๋กฌํ”„ํŠธ ํŠœ๋‹ ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ [๋…ธํŠธ๋ถ](https://colab.research.google.com/github/bigscience-workshop/petals/blob/main/examples/prompt-tuning-sst2.ipynb#scrollTo=f04ba4d2) ๐ŸŒŽ <PipelineTag pipeline="question-answering"/> - [Stack Exchange](https://stackexchange.com/)์—์„œ ์งˆ๋ฌธ์— ๋‹ตํ•˜๋Š” LLaMA๋ฅผ ํ›ˆ๋ จํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์œ„ํ•œ [StackLLaMA: RLHF๋กœ LLaMA๋ฅผ ํ›ˆ๋ จํ•˜๋Š” ์‹ค์ „ ๊ฐ€์ด๋“œ](https://huggingface.co/blog/stackllama#stackllama-a-hands-on-guide-to-train-llama-with-rlhf) ๐ŸŒŽ โš—๏ธ ์ตœ์ ํ™” - ์ œํ•œ๋œ ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ๊ฐ€์ง„ GPU์—์„œ xturing ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ LLaMA ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ [๋…ธํŠธ๋ถ](https://colab.research.google.com/drive/1SQUXq1AMZPSLD4mk3A3swUIc6Y2dclme?usp=sharing) ๐ŸŒŽ โšก๏ธ ์ถ”๋ก  - ๐Ÿค— PEFT ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ PeftModel์„ ์‚ฌ์šฉํ•˜์—ฌ LLaMA ๋ชจ๋ธ์„ ์‹คํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ [๋…ธํŠธ๋ถ](https://colab.research.google.com/github/DominguesM/alpaca-lora-ptbr-7b/blob/main/notebooks/02%20-%20Evaluate.ipynb) ๐ŸŒŽ - LangChain์„ ์‚ฌ์šฉํ•˜์—ฌ PEFT ์–ด๋Œ‘ํ„ฐ LLaMA ๋ชจ๋ธ์„ ๋กœ๋“œํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ [๋…ธํŠธ๋ถ](https://colab.research.google.com/drive/1l2GiSSPbajVyp2Nk3CFT4t3uH6-5TiBe?usp=sharing) ๐ŸŒŽ ๐Ÿš€ ๋ฐฐํฌ - ๐Ÿค— PEFT ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์™€ ์‚ฌ์šฉ์ž ์นœํ™”์ ์ธ UI๋กœ LLaMA ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ [๋…ธํŠธ๋ถ](https://colab.research.google.com/github/lxe/simple-llama-finetuner/blob/master/Simple_LLaMA_FineTuner.ipynb#scrollTo=3PM_DilAZD8T) ๐ŸŒŽ - Amazon SageMaker์—์„œ ํ…์ŠคํŠธ ์ƒ์„ฑ์„ ์œ„ํ•ด Open-LLaMA ๋ชจ๋ธ์„ ๋ฐฐํฌํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ [๋…ธํŠธ๋ถ](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/jumpstart-foundation-models/text-generation-open-llama.ipynb) ๐ŸŒŽ ## LlamaConfig [[llamaconfig]] [[autodoc]] LlamaConfig ## LlamaTokenizer [[llamatokenizer]] [[autodoc]] LlamaTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## LlamaTokenizerFast [[llamatokenizerfast]] [[autodoc]] LlamaTokenizerFast - build_inputs_with_special_tokens - 
get_special_tokens_mask - create_token_type_ids_from_sequences - update_post_processor - save_vocabulary ## LlamaModel [[llamamodel]] [[autodoc]] LlamaModel - forward ## LlamaForCausalLM [[llamaforcausallm]] [[autodoc]] LlamaForCausalLM - forward ## LlamaForSequenceClassification [[llamaforsequenceclassification]] [[autodoc]] LlamaForSequenceClassification - forward
0
mavonic_private_repos/transformers/docs/source/ko
mavonic_private_repos/transformers/docs/source/ko/model_doc/whisper.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Whisper [[whisper]] ## ๊ฐœ์š” [[overview]] Whisper ๋ชจ๋ธ์€ Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever์— ์˜ํ•ด [Robust Speech Recognition via Large-Scale Weak Supervision](https://cdn.openai.com/papers/whisper.pdf)์—์„œ ์ œ์•ˆ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ๋…ผ๋ฌธ์˜ ์ดˆ๋ก์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: *์šฐ๋ฆฌ๋Š” ์ธํ„ฐ๋„ท์—์„œ ๋Œ€๋Ÿ‰์˜ ์˜ค๋””์˜ค๋ฅผ ๊ธ€๋กœ ์˜ฎ๊ธด ๊ฒƒ์„ ์˜ˆ์ธกํ•˜๋„๋ก ๊ฐ„๋‹จํžˆ ํ›ˆ๋ จ๋œ ์Œ์„ฑ ์ฒ˜๋ฆฌ ์‹œ์Šคํ…œ์˜ ์„ฑ๋Šฅ์„ ์—ฐ๊ตฌํ•ฉ๋‹ˆ๋‹ค. 68๋งŒ ์‹œ๊ฐ„์˜ ๋‹ค๊ตญ์–ด ๋ฐ ๋‹ค์ค‘ ์ž‘์—… ์ง€๋„(multitask supervision)์— ํ™•์žฅํ–ˆ์„ ๋•Œ, ๊ฒฐ๊ณผ ๋ชจ๋ธ์€ ํ‘œ์ค€ ๋ฒค์น˜๋งˆํฌ์— ์ž˜ ์ผ๋ฐ˜ํ™”๋˜๋ฉฐ, ๋ฏธ์„ธ ์กฐ์ •์ด ํ•„์š” ์—†๋Š” ์ œ๋กœ์ƒท ์ „์†ก ์„ค์ •์—์„œ ์ด์ „์˜ ์™„์ „ํžˆ ์ง€๋„๋œ(fully-supervised) ๊ฒฐ๊ณผ์™€ ๊ฒฝ์Ÿํ•  ์ˆ˜ ์žˆ๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์Šต๋‹ˆ๋‹ค. ์‚ฌ๋žŒ๊ณผ ๋น„๊ตํ•˜๋ฉด, ์ด ๋ชจ๋ธ์€ ์‚ฌ๋žŒ์˜ ์ •ํ™•๋„์™€ ๊ฒฌ๊ณ ์„ฑ์— ๊ทผ์ ‘ํ•ฉ๋‹ˆ๋‹ค. ์šฐ๋ฆฌ๋Š” ๊ฐ•๋ ฅํ•œ ์Œ์„ฑ ์ฒ˜๋ฆฌ๋ฅผ ์œ„ํ•œ ์ถ”๊ฐ€ ์ž‘์—…์˜ ๊ธฐ๋ฐ˜์ด ๋  ๋ชจ๋ธ๊ณผ ์ถ”๋ก  ์ฝ”๋“œ๋ฅผ ๊ณต๊ฐœํ•ฉ๋‹ˆ๋‹ค.* ํŒ: - ์ด ๋ชจ๋ธ์€ ์ผ๋ฐ˜์ ์œผ๋กœ ๋ณ„๋„์˜ ๋ฏธ์„ธ ์กฐ์ • ์—†์ด๋„ ์ž˜ ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค. - ์•„ํ‚คํ…์ฒ˜๋Š” ๊ณ ์ „์ ์ธ ์ธ์ฝ”๋”-๋””์ฝ”๋” ์•„ํ‚คํ…์ฒ˜๋ฅผ ๋”ฐ๋ฅด๊ธฐ ๋•Œ๋ฌธ์—, ์ถ”๋ก ์„ ์œ„ํ•ด [`~generation.GenerationMixin.generate`] ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. - ํ˜„์žฌ ์ถ”๋ก ์€ ์งง์€ ํ˜•์‹์—๋งŒ ๊ตฌํ˜„๋˜์–ด ์žˆ์œผ๋ฉฐ, ์˜ค๋””์˜ค๋Š” 30์ดˆ ๋ฏธ๋งŒ์˜ ์„ธ๊ทธ๋จผํŠธ๋กœ ๋ฏธ๋ฆฌ ๋ถ„ํ• ๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ํƒ€์ž„์Šคํƒฌํ”„๋ฅผ ํฌํ•จํ•œ ๊ธด ํ˜•์‹์— ๋Œ€ํ•œ ์ถ”๋ก ์€ ํ–ฅํ›„ ๋ฆด๋ฆฌ์Šค์—์„œ ๊ตฌํ˜„๋  ์˜ˆ์ •์ž…๋‹ˆ๋‹ค. - [`WhisperProcessor`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์— ์‚ฌ์šฉํ•  ์˜ค๋””์˜ค๋ฅผ ์ค€๋น„ํ•˜๊ณ , ์˜ˆ์ธก๋œ ID๋ฅผ ํ…์ŠคํŠธ๋กœ ๋””์ฝ”๋”ฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. - ๋ชจ๋ธ๊ณผ ํ”„๋กœ์„ธ์„œ๋ฅผ ๋ณ€ํ™˜ํ•˜๋ ค๋ฉด ๋‹ค์Œ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค: ```bash python src/transformers/models/whisper/convert_openai_to_hf.py --checkpoint_path "" --pytorch_dump_folder_path "Arthur/whisper-3" --convert_preprocessor True ``` ์Šคํฌ๋ฆฝํŠธ๋Š” OpenAI ์ฒดํฌํฌ์ธํŠธ์—์„œ ํ•„์š”ํ•œ ๋ชจ๋“  ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์ž๋™์œผ๋กœ ๊ฒฐ์ •ํ•ฉ๋‹ˆ๋‹ค. OpenAI ๋ณ€ํ™˜์„ ์ˆ˜ํ–‰ํ•˜๋ ค๋ฉด `tiktoken` ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์„ค์น˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์„ค์น˜ํ•ด์•ผ OpenAI ํ† ํฐํ™”๊ธฐ๋ฅผ `tokenizers` ๋ฒ„์ „์œผ๋กœ ๋ณ€ํ™˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๋ชจ๋ธ์€ [Arthur Zucker](https://huggingface.co/ArthurZ)์— ์˜ํ•ด ์ œ๊ณต๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ด ๋ชจ๋ธ์˜ Tensorflow ๋ฒ„์ „์€ [amyeroberts](https://huggingface.co/amyeroberts)์— ์˜ํ•ด ์ œ๊ณต๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์›๋ณธ ์ฝ”๋“œ๋Š” [์—ฌ๊ธฐ](https://github.com/openai/whisper)์—์„œ ์ฐพ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
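์•„๋ž˜๋Š” [`WhisperProcessor`]๋กœ ์˜ค๋””์˜ค๋ฅผ ์ค€๋น„ํ•˜๊ณ  [`~generation.GenerationMixin.generate`]๋กœ ์งง์€ ํ˜•์‹(30์ดˆ ๋ฏธ๋งŒ) ์˜ค๋””์˜ค๋ฅผ ๊ธ€๋กœ ์˜ฎ๊ธฐ๋Š” ๊ฐ„๋‹จํ•œ ์˜ˆ์‹œ์ž…๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ ์‚ฌ์šฉํ•œ `openai/whisper-tiny` ์ฒดํฌํฌ์ธํŠธ์™€ ์˜ˆ์‹œ์šฉ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋Š” ์„ค๋ช…์„ ์œ„ํ•œ ์„ ํƒ์ผ ๋ฟ์ด๋ฉฐ, ๋‹ค๋ฅธ ์ฒดํฌํฌ์ธํŠธ๋‚˜ 16kHz๋กœ ์ƒ˜ํ”Œ๋ง๋œ ๋‹ค๋ฅธ ์˜ค๋””์˜ค๋กœ ๋ฐ”๊ฟ” ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค:

```py
>>> from datasets import load_dataset
>>> from transformers import WhisperForConditionalGeneration, WhisperProcessor

>>> # 16kHz๋กœ ์ƒ˜ํ”Œ๋ง๋œ ์งง์€ ์˜์–ด ์Œ์„ฑ ์˜ˆ์‹œ๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]

>>> processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

>>> # ์˜ค๋””์˜ค๋ฅผ ๋กœ๊ทธ-๋ฉœ ์ŠคํŽ™ํŠธ๋กœ๊ทธ๋žจ ์ž…๋ ฅ(input_features)์œผ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค
>>> inputs = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt")

>>> # ์ธ์ฝ”๋”-๋””์ฝ”๋” ๊ตฌ์กฐ์ด๋ฏ€๋กœ generate๋กœ ํ† ํฐ ID๋ฅผ ์ƒ์„ฑํ•œ ๋’ค ํ…์ŠคํŠธ๋กœ ๋””์ฝ”๋”ฉํ•ฉ๋‹ˆ๋‹ค
>>> predicted_ids = model.generate(inputs.input_features)
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]
```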
## WhisperConfig [[whisperconfig]] [[autodoc]] WhisperConfig ## WhisperTokenizer [[whispertokenizer]] [[autodoc]] WhisperTokenizer - set_prefix_tokens - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## WhisperTokenizerFast [[whispertokenizerfast]] [[autodoc]] WhisperTokenizerFast - set_prefix_tokens - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## WhisperFeatureExtractor [[whisperfeatureextractor]] [[autodoc]] WhisperFeatureExtractor - __call__ ## WhisperProcessor [[whisperprocessor]] [[autodoc]] WhisperProcessor - __call__ - from_pretrained - save_pretrained - batch_decode - decode ## WhisperModel [[whispermodel]] [[autodoc]] WhisperModel - forward - _mask_input_features ## WhisperForConditionalGeneration [[whisperforconditionalgeneration]] [[autodoc]] WhisperForConditionalGeneration - forward ## WhisperForAudioClassification [[whisperforaudioclassification]] [[autodoc]] WhisperForAudioClassification - forward ## TFWhisperModel [[tfwhispermodel]] [[autodoc]] TFWhisperModel - call ## TFWhisperForConditionalGeneration [[tfwhisperforconditionalgeneration]] [[autodoc]] TFWhisperForConditionalGeneration - call ## FlaxWhisperModel [[flaxwhispermodel]] [[autodoc]] FlaxWhisperModel - __call__ ## FlaxWhisperForConditionalGeneration [[flaxwhisperforconditionalgeneration]] [[autodoc]] FlaxWhisperForConditionalGeneration - __call__ ## FlaxWhisperForAudioClassification [[flaxwhisperforaudioclassification]] [[autodoc]] FlaxWhisperForAudioClassification - __call__
0
mavonic_private_repos/transformers/docs/source/ko
mavonic_private_repos/transformers/docs/source/ko/model_doc/llama2.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Llama2 [[llama2]] ## ๊ฐœ์š” [[overview]] Llama2 ๋ชจ๋ธ์€ Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom์˜ ๋…ผ๋ฌธ [Llama 2: Open Foundation and Fine-Tuned Chat Models](https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/)์—์„œ ์ œ์•ˆ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ฑ„ํŒ… ์–ดํ”Œ๋ฆฌ์ผ€์ด์…˜์— ๋งž๊ฒŒ ๋ฏธ์„ธ ์กฐ์ •๋œ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ํฌํ•จํ•œ 7B์—์„œ 70B ๋ฒ”์œ„์˜ ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ๊ฐ€์ง„ ๊ธฐ์ดˆ ์–ธ์–ด ๋ชจ๋ธ ๋ชจ์Œ์ž…๋‹ˆ๋‹ค! ๋…ผ๋ฌธ์˜ ์ดˆ๋ก์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: *์ด ์—ฐ๊ตฌ์—์„œ ์šฐ๋ฆฌ๋Š” 70์–ต์—์„œ 700์–ต ํŒŒ๋ผ๋ฏธํ„ฐ์˜ ๋ฒ”์œ„์—์„œ ์‚ฌ์ „ ํ›ˆ๋ จ ๋ฐ ๋ฏธ์„ธ ์กฐ์ •๋œ ๋Œ€๊ทœ๋ชจ ์–ธ์–ด ๋ชจ๋ธ(LLMs)์˜ ๋ชจ์Œ์ธ Llama 2๋ฅผ ๊ฐœ๋ฐœ ๋ฐ ๊ณต๊ฐœํ•ฉ๋‹ˆ๋‹ค. Llama 2-Chat๋ผ๊ณ  ๋ถˆ๋ฆฌ๋Š” ๋ฏธ์„ธ ์กฐ์ •๋œ LLMs์€ ๋Œ€ํ™” ์‚ฌ์šฉ ์‚ฌ๋ก€์— ์ตœ์ ํ™”๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์šฐ๋ฆฌ์˜ ๋ชจ๋ธ์€ ํ…Œ์ŠคํŠธํ•œ ๋Œ€๋ถ€๋ถ„์˜ ๋ฒค์น˜๋งˆํฌ์—์„œ ์˜คํ”ˆ ์†Œ์Šค ์ฑ„ํŒ… ๋ชจ๋ธ๋ณด๋‹ค ์„ฑ๋Šฅ์ด ๋›ฐ์–ด๋‚˜๋ฉฐ, ์œ ์šฉ์„ฑ๊ณผ ์•ˆ์ „์„ฑ์— ๋Œ€ํ•œ ์ธ์  ํ‰๊ฐ€๋ฅผ ๋ฐ”ํƒ•์œผ๋กœ ๋น„๊ณต๊ฐœ ์†Œ์Šค ๋ชจ๋ธ์„ ๋Œ€์ฒดํ•  ์ˆ˜ ์žˆ๋Š” ์ ์ ˆํ•œ ๋Œ€์•ˆ์ด ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์šฐ๋ฆฌ๋Š” Llama 2-Chat์˜ ๋ฏธ์„ธ ์กฐ์ • ๋ฐ ์•ˆ์ „์„ฑ ํ–ฅ์ƒ์˜ ์ ‘๊ทผ ๋ฐฉ์‹์— ๋Œ€ํ•œ ์ž์„ธํ•œ ์„ค๋ช…์„ ์ œ๊ณตํ•˜์—ฌ ์ปค๋ฎค๋‹ˆํ‹ฐ๊ฐ€ ์šฐ๋ฆฌ์˜ ์ž‘์—…์„ ๊ธฐ๋ฐ˜์œผ๋กœ LLMs์˜ ์ฑ…์ž„์žˆ๋Š” ๊ฐœ๋ฐœ์— ๊ธฐ์—ฌํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•ฉ๋‹ˆ๋‹ค.* [์—ฌ๊ธฐ](https://huggingface.co/models?search=llama2)์—์„œ ๋ชจ๋“  Llama2 ๋ชจ๋ธ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. <Tip warning={true}> `Llama2` ๋ชจ๋ธ์€ `bfloat16`์„ ์‚ฌ์šฉํ•˜์—ฌ ํ›ˆ๋ จ๋˜์—ˆ์ง€๋งŒ, ์›๋ž˜ ์ถ”๋ก ์€ `float16`์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œ๋œ ์ฒดํฌํฌ์ธํŠธ๋Š” `torch_dtype = 'float16'`์„ ์‚ฌ์šฉํ•˜๋ฉฐ, ์ด๋Š” `AutoModel` API์— ์˜ํ•ด ์ฒดํฌํฌ์ธํŠธ๋ฅผ `torch.float32`์—์„œ `torch.float16`์œผ๋กœ ์บ์ŠคํŒ…ํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค.
์˜จ๋ผ์ธ ๊ฐ€์ค‘์น˜์˜ `dtype`์€ `model = AutoModelForCausalLM.from_pretrained("path", torch_dtype = "auto")`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ์ดˆ๊ธฐํ™”ํ•  ๋•Œ `torch_dtype="auto"`๋ฅผ ์‚ฌ์šฉํ•˜์ง€ ์•Š๋Š” ํ•œ ๋Œ€๋ถ€๋ถ„ ๊ด€๋ จ์ด ์—†์Šต๋‹ˆ๋‹ค. ๊ทธ ์ด์œ ๋Š” ๋ชจ๋ธ์ด ๋จผ์ € ๋‹ค์šด๋กœ๋“œ๋  ๊ฒƒ์ด๊ณ  (์˜จ๋ผ์ธ ์ฒดํฌํฌ์ธํŠธ์˜ `dtype`์„ ์‚ฌ์šฉํ•˜์—ฌ) ๊ทธ๋‹ค์Œ์— ๊ธฐ๋ณธ `dtype`์ธ `torch`๋กœ ์บ์ŠคํŒ…ํ•˜๊ณ (`torch.float32`๊ฐ€ ๋จ), ๋งˆ์ง€๋ง‰์œผ๋กœ ๊ตฌ์„ฑ(configuration)์—์„œ ์ œ๊ณต๋œ `torch_dtype`์ด ์žˆ๋Š” ๊ฒฝ์šฐ ์ด๋ฅผ ์‚ฌ์šฉํ•˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ `float16`์—์„œ ํ›ˆ๋ จํ•˜๋Š” ๊ฒƒ์€ ๊ถŒ์žฅ๋˜์ง€ ์•Š์œผ๋ฉฐ `nan`์„ ์ƒ์„ฑํ•˜๋Š” ๊ฒƒ์œผ๋กœ ์•Œ๋ ค์ ธ ์žˆ์Šต๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ๋ชจ๋ธ์€ `bfloat16`์—์„œ ํ›ˆ๋ จ๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. </Tip> ๐Ÿฏ ํŒ: - Llama2 ๋ชจ๋ธ์˜ ๊ฐ€์ค‘์น˜๋Š” [์ด ์–‘์‹](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)์„ ์ž‘์„ฑํ•˜์—ฌ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. - ์•„ํ‚คํ…์ฒ˜๋Š” ์ฒ˜์Œ ๋ฒ„์ „์˜ Llama์™€ ๋งค์šฐ ์œ ์‚ฌํ•˜๋ฉฐ, [์ด ๋…ผ๋ฌธ](https://arxiv.org/pdf/2305.13245.pdf)์˜ ๋‚ด์šฉ์— ๋”ฐ๋ผ Grouped Query Attention (GQA)์ด ์ถ”๊ฐ€๋˜์—ˆ์Šต๋‹ˆ๋‹ค. - `config.pretraining_tp`๋ฅผ 1๊ณผ ๋‹ค๋ฅธ ๊ฐ’์œผ๋กœ ์„ค์ •ํ•˜๋ฉด ๋” ์ •ํ™•ํ•˜์ง€๋งŒ ๋Š๋ฆฐ ์„ ํ˜• ๋ ˆ์ด์–ด ๊ณ„์‚ฐ์ด ํ™œ์„ฑํ™”๋˜์–ด ์›๋ณธ ๋กœ์ง“๊ณผ ๋” ์ž˜ ์ผ์น˜ํ•˜๊ฒŒ ๋ฉ๋‹ˆ๋‹ค. - ์›๋ž˜ ๋ชจ๋ธ์€ `pad_id = -1`์„ ์‚ฌ์šฉํ•˜๋Š”๋ฐ, ์ด๋Š” ํŒจ๋”ฉ ํ† ํฐ์ด ์—†์Œ์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค. ๋™์ผํ•œ ๋กœ์ง์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์—†์œผ๋ฏ€๋กœ `tokenizer.add_special_tokens({"pad_token":"<pad>"})`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํŒจ๋”ฉ ํ† ํฐ์„ ์ถ”๊ฐ€ํ•˜๊ณ  ์ด์— ๋”ฐ๋ผ ํ† ํฐ ์ž„๋ฒ ๋”ฉ ํฌ๊ธฐ๋ฅผ ์กฐ์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ `model.config.pad_token_id`๋ฅผ ์„ค์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์˜ `embed_tokens` ๋ ˆ์ด์–ด๋Š” `self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.config.padding_idx)`๋กœ ์ดˆ๊ธฐํ™”๋˜์–ด, ํŒจ๋”ฉ ํ† ํฐ ์ธ์ฝ”๋”ฉ์ด 0์„ ์ถœ๋ ฅํ•˜๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์ดˆ๊ธฐํ™” ์‹œ์— ์ „๋‹ฌํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. - ์–‘์‹์„ ์ž‘์„ฑํ•˜๊ณ  ๋ชจ๋ธ ์ฒดํฌํฌ์ธํŠธ ์ ‘๊ทผ ๊ถŒํ•œ์„ ์–ป์€ ํ›„์—๋Š” ์ด๋ฏธ ๋ณ€ํ™˜๋œ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋ ‡์ง€ ์•Š๊ณ  ์ž์‹ ์˜ ๋ชจ๋ธ์„ ์ง์ ‘ ๋ณ€ํ™˜ํ•˜๋ ค๋Š” ๊ฒฝ์šฐ, [๋ณ€ํ™˜ ์Šคํฌ๋ฆฝํŠธ](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py)๋ฅผ ์ž์œ ๋กญ๊ฒŒ ์‚ฌ์šฉํ•˜์„ธ์š”. ์Šคํฌ๋ฆฝํŠธ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์€ ์˜ˆ์‹œ์˜ ๋ช…๋ น์–ด๋กœ ํ˜ธ์ถœํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash python src/transformers/models/llama/convert_llama_weights_to_hf.py \ --input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path ``` - ๋ณ€ํ™˜ ํ›„ ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```python from transformers import LlamaForCausalLM, LlamaTokenizer tokenizer = LlamaTokenizer.from_pretrained("/output/path") model = LlamaForCausalLM.from_pretrained("/output/path") ``` ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋ ค๋ฉด ๋ชจ๋ธ์„ float16 ์ •๋ฐ€๋„๋กœ ์ „๋ถ€ ํ˜ธ์ŠคํŠธํ•  ์ˆ˜ ์žˆ์„ ๋งŒํผ ์ถฉ๋ถ„ํ•œ CPU RAM์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค (๊ฐ€์žฅ ํฐ ๋ฒ„์ „์ด ์—ฌ๋Ÿฌ ์ฒดํฌํฌ์ธํŠธ๋กœ ์ œ๊ณต๋˜๋”๋ผ๋„ ๊ฐ ์ฒดํฌํฌ์ธํŠธ๋Š” ๋ชจ๋ธ ๊ฐ€์ค‘์น˜์˜ ์ผ๋ถ€๋งŒ์„ ํฌํ•จํ•˜๋ฏ€๋กœ ๋ชจ๋‘ RAM์— ๋กœ๋“œํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค). 75B ๋ชจ๋ธ์˜ ๊ฒฝ์šฐ, ์ด 145GB์˜ RAM์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. - LLaMA ํ† ํฌ๋‚˜์ด์ €๋Š” [sentencepiece](https://github.com/google/sentencepiece)๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•œ BPE ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. 
sentencepiece์˜ ํŠน์ง• ์ค‘ ํ•˜๋‚˜๋Š” ์‹œํ€€์Šค๋ฅผ ๋””์ฝ”๋”ฉํ•  ๋•Œ ์ฒซ ๋ฒˆ์งธ ํ† ํฐ์ด ๋‹จ์–ด์˜ ์‹œ์ž‘์ด๋ฉด (์˜ˆ: "Banana") ํ† ํฌ๋‚˜์ด์ €๋Š” ๋ฌธ์ž์—ด ์•ž์— ์ ‘๋‘์‚ฌ ๊ณต๊ฐ„์„ ์ถ”๊ฐ€ํ•˜์ง€ ์•Š๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด ๋ชจ๋ธ์€ [Arthur Zucker](https://huggingface.co/ArthurZ)๊ฐ€ [Lysandre Debut](https://huggingface.co/lysandre)์˜ ๋„์›€์„ ๋ฐ›์•„ ์ œ๊ณตํ•˜์˜€์Šต๋‹ˆ๋‹ค. Hugging Face์—์„œ์˜ ๊ตฌํ˜„ ์ฝ”๋“œ๋Š” [์—ฌ๊ธฐ](https://github.com/EleutherAI/gpt-neox)์˜ GPT-NeoX ๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•ฉ๋‹ˆ๋‹ค. ์ €์ž์˜ ์›๋ž˜ ์ฝ”๋“œ๋Š” [์—ฌ๊ธฐ](https://github.com/facebookresearch/llama)์—์„œ ์ฐพ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ## ๋ฆฌ์†Œ์Šค [[resources]] LLaMA2๋ฅผ ์‹œ์ž‘ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋  Hugging Face์˜ ๊ณต์‹ ๋ฐ ์ปค๋ฎค๋‹ˆํ‹ฐ(๐ŸŒŽ๋กœ ํ‘œ์‹œ) ๋ฆฌ์†Œ์Šค ๋ชฉ๋ก์ž…๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์— ์ƒˆ๋กœ์šด ๋ฆฌ์†Œ์Šค๋ฅผ ์ถ”๊ฐ€ํ•˜๊ธฐ ์œ„ํ•ด์„œ Pull Request๋ฅผ ์—ด์–ด ์ฃผ์‹œ๋ฉด ๊ฒ€ํ† ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค! ๋ฆฌ์†Œ์Šค๋Š” ๊ธฐ์กด ๋ฆฌ์†Œ์Šค์™€ ์ค‘๋ณต๋˜์ง€ ์•Š๋Š” ์ƒˆ๋กœ์šด ๊ฒƒ์„ ๋ณด์—ฌ์ฃผ๋Š” ๊ฒƒ์ด ์ด์ƒ์ ์ž…๋‹ˆ๋‹ค. - [Llama 2 is here - get it on Hugging Face](https://huggingface.co/blog/llama2), Llama 2์— ๊ด€ํ•œ ๋ธ”๋กœ๊ทธ ํฌ์ŠคํŠธ์™€ ๐Ÿค— Transformers ๋ฐ ๐Ÿค— PEFT์™€ ํ•จ๊ป˜ ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ๋‚ด์šฉ์ž…๋‹ˆ๋‹ค. - [LLaMA 2 - Every Resource you need](https://www.philschmid.de/llama-2), LLaMA 2์— ๋Œ€ํ•ด ์•Œ์•„๋ณด๊ณ  ๋น ๋ฅด๊ฒŒ ์‹œ์ž‘ํ•˜๋Š” ๋ฐ ํ•„์š”ํ•œ ๊ด€๋ จ ๋ฆฌ์†Œ์Šค์˜ ๋ชจ์Œ์ž…๋‹ˆ๋‹ค. <PipelineTag pipeline="text-generation"/> - Google Colab์—์„œ QLoRA์™€ 4-bit ์ •๋ฐ€๋„๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ Llama 2๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ [๋…ธํŠธ๋ถ](https://colab.research.google.com/drive/1PEQyJO1-f6j0S_XJ8DV50NkpzasXkrzd?usp=sharing)์ž…๋‹ˆ๋‹ค. ๐ŸŒŽ - "Llama-v2-7b-guanaco" ๋ชจ๋ธ์„ 4-bit QLoRA๋กœ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  PDF์—์„œ Q&A ๋ฐ์ดํ„ฐ์…‹์„ ์ƒ์„ฑํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ [๋…ธํŠธ๋ถ](https://colab.research.google.com/drive/134o_cXcMe_lsvl15ZE_4Y75Kstepsntu?usp=sharing)์ž…๋‹ˆ๋‹ค. ๐ŸŒŽ โš—๏ธ ์ตœ์ ํ™” - [Llama 2๋ฅผ DPO๋กœ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://huggingface.co/blog/dpo-trl), TRL ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ DPO ๋ฐฉ๋ฒ•์„ ์‚ฌ์šฉํ•˜์—ฌ ํŠน์ • ๋ฐ์ดํ„ฐ์…‹์—์„œ Llama 2๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์•ˆ๋‚ดํ•˜๋Š” ๊ฐ€์ด๋“œ์ž…๋‹ˆ๋‹ค. - [ํ™•์žฅ ๊ฐ€์ด๋“œ: Llama 2 ๋ช…๋ น์–ด ์กฐ์ •](https://www.philschmid.de/instruction-tune-llama-2), ์ž…๋ ฅ์—์„œ ๋ช…๋ น์–ด๋ฅผ ์ƒ์„ฑํ•˜๋„๋ก Llama 2๋ฅผ ํ›ˆ๋ จ์‹œํ‚ค๋Š” ๋ฐฉ๋ฒ•์„ ์•ˆ๋‚ดํ•˜๋Š” ๊ฐ€์ด๋“œ๋กœ, ๋ช…๋ น์–ด๋ฅผ ๋”ฐ๋ฅด๋Š” ๋ชจ๋ธ์—์„œ ๋ช…๋ น์–ด๋ฅผ ์ฃผ๋Š” ๋ชจ๋ธ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค. - ๊ฐœ์ธ ์ปดํ“จํ„ฐ์—์„œ QLoRA์™€ TRL์„ ์‚ฌ์šฉํ•˜์—ฌ Llama 2 ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ [๋…ธํŠธ๋ถ](https://colab.research.google.com/drive/1SYpgFpcmtIUzdE7pxqknrM4ArCASfkFQ?usp=sharing)์ž…๋‹ˆ๋‹ค. ๐ŸŒŽ โšก๏ธ ์ถ”๋ก  - AutoGPTQ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ GPTQ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ Llama 2 ๋ชจ๋ธ์„ ์–‘์žํ™”ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ [๋…ธํŠธ๋ถ](https://colab.research.google.com/drive/1TC56ArKerXUpbgRy5vM3woRsbTEVNq7h?usp=sharing)์ž…๋‹ˆ๋‹ค. ๐ŸŒŽ - ๋กœ์ปฌ ์ปดํ“จํ„ฐ๋‚˜ Google Colab์—์„œ 4-bit ์–‘์žํ™”๋กœ Llama 2 ์ฑ„ํŒ… ๋ชจ๋ธ์„ ์‹คํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ [๋…ธํŠธ๋ถ](https://colab.research.google.com/drive/1X1z9Q6domMKl2CnEM0QGHNwidLfR4dW2?usp=sharing)์ž…๋‹ˆ๋‹ค. ๐ŸŒŽ ๐Ÿš€ ๋ฐฐํฌ - [Amazon SageMaker์—์„œ LLaMA 2 (7-70B) ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://www.philschmid.de/sagemaker-llama2-qlora), Amazon SageMaker์—์„œ QLoRA ๋ฏธ์„ธ ์กฐ์ • ๋ฐ ๋ฐฐํฌ์— ์ด๋ฅด๊ธฐ๊นŒ์ง€์˜ ์™„์ „ํ•œ ๊ฐ€์ด๋“œ์ž…๋‹ˆ๋‹ค. 
- [Amazon SageMaker์—์„œ Llama 2 7B/13B/70B ๋ฐฐํฌํ•˜๊ธฐ](https://www.philschmid.de/sagemaker-llama-llm), ์•ˆ์ „ํ•˜๊ณ  ํ™•์žฅ ๊ฐ€๋Šฅํ•œ ๋ฐฐํฌ๋ฅผ ์œ„ํ•ด Hugging Face์˜ LLM DLC ์ปจํ…Œ์ด๋„ˆ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ๊ฐ€์ด๋“œ์ž…๋‹ˆ๋‹ค. ## LlamaConfig [[llamaconfig]] [[autodoc]] LlamaConfig ## LlamaTokenizer [[llamatokenizer]] [[autodoc]] LlamaTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## LlamaTokenizerFast [[llamatokenizerfast]] [[autodoc]] LlamaTokenizerFast - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - update_post_processor - save_vocabulary ## LlamaModel [[llamamodel]] [[autodoc]] LlamaModel - forward ## LlamaForCausalLM [[llamaforcausallm]] [[autodoc]] LlamaForCausalLM - forward ## LlamaForSequenceClassification [[llamaforsequenceclassification]] [[autodoc]] LlamaForSequenceClassification - forward
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/it/run_scripts.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Addestramento con script Insieme ai [notebooks](./notebooks) ๐Ÿค— Transformers, ci sono anche esempi di script che dimostrano come addestrare un modello per un task con [PyTorch](https://github.com/huggingface/transformers/tree/main/examples/pytorch), [TensorFlow](https://github.com/huggingface/transformers/tree/main/examples/tensorflow), o [JAX/Flax](https://github.com/huggingface/transformers/tree/main/examples/flax). Troverai anche script che abbiamo usato nei nostri [progetti di ricerca](https://github.com/huggingface/transformers/tree/main/examples/research_projects) e [precedenti esempi](https://github.com/huggingface/transformers/tree/main/examples/legacy) a cui contribuisce per lo piรน la comunitร . Questi script non sono attivamente mantenuti e richiedono una specifica versione di ๐Ÿค— Transformers che sarร  molto probabilmente incompatibile con l'ultima versione della libreria. Non รจ dato per scontato che gli script di esempio funzionino senza apportare modifiche per ogni problema, bensรฌ potrebbe essere necessario adattare lo script al tuo caso specifico. Per aiutarti in ciรฒ, la maggioranza degli script espone le modalitร  di pre-processamento dei dati, consentendoti di modificare lo script come preferisci. Per qualsiasi feature che vorresti implementare in uno script d'esempio, per favore discutine nel [forum](https://discuss.huggingface.co/) o in un'[issue](https://github.com/huggingface/transformers/issues) prima di inviare una Pull Request. Mentre accogliamo con piacere la correzione di bug, รจ piรน improbabile che faremo la stessa con una PR che aggiunge funzionalitร  sacrificando la leggibilitร . Questa guida ti mostrerร  come eseguire uno script di esempio relativo al task di summarization in [PyTorch](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization) e [TensorFlow](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/summarization). Tutti gli esempi funzioneranno con entrambi i framework a meno che non sia specificato altrimenti. ## Installazione Per eseguire con successo l'ultima versione degli script di esempio, devi **installare ๐Ÿค— Transformers dalla fonte** in un nuovo ambiente virtuale: ```bash git clone https://github.com/huggingface/transformers cd transformers pip install . 
``` Per le precedenti versioni degli script di esempio, clicca sul pulsante di seguito: <details> <summary>Esempi per versioni precedenti di ๐Ÿค— Transformers</summary> <ul> <li><a href="https://github.com/huggingface/transformers/tree/v4.5.1/examples">v4.5.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.4.2/examples">v4.4.2</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.3.3/examples">v4.3.3</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.2.2/examples">v4.2.2</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.1.1/examples">v4.1.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.0.1/examples">v4.0.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.5.1/examples">v3.5.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.4.0/examples">v3.4.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.3.1/examples">v3.3.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.2.0/examples">v3.2.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.1.0/examples">v3.1.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.0.2/examples">v3.0.2</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.11.0/examples">v2.11.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.10.0/examples">v2.10.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.9.1/examples">v2.9.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.8.0/examples">v2.8.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.7.0/examples">v2.7.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.6.0/examples">v2.6.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.5.1/examples">v2.5.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.4.0/examples">v2.4.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.3.0/examples">v2.3.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.2.0/examples">v2.2.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.1.0/examples">v2.1.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.0.0/examples">v2.0.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v1.2.0/examples">v1.2.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v1.1.0/examples">v1.1.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v1.0.0/examples">v1.0.0</a></li> </ul> </details> Successivamente, cambia la tua attuale copia di ๐Ÿค— Transformers specificandone la versione, ad esempio v3.5.1: ```bash git checkout tags/v3.5.1 ``` Dopo aver configurato correttamente la versione della libreria, naviga nella cartella degli esempi di tua scelta e installa i requisiti: ```bash pip install -r requirements.txt ``` ## Esegui uno script <frameworkcontent> <pt> Lo script di esempio scarica e pre-processa un dataset dalla libreria ๐Ÿค— [Datasets](https://huggingface.co/docs/datasets/). Successivamente, lo script esegue il fine-tuning su un dataset usando il [Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) su un'architettura che supporta la summarization. 
Il seguente esempio mostra come eseguire il fine-tuning di [T5-small](https://huggingface.co/google-t5/t5-small) sul dataset [CNN/DailyMail](https://huggingface.co/datasets/cnn_dailymail). Il modello T5 richiede un parametro addizionale `source_prefix` a causa del modo in cui รจ stato addestrato. Questo prefisso permette a T5 di sapere che si tratta di un task di summarization.

```bash
python examples/pytorch/summarization/run_summarization.py \
    --model_name_or_path google-t5/t5-small \
    --do_train \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate
```
</pt>
<tf>
Lo script di esempio scarica e pre-processa un dataset dalla libreria ๐Ÿค— [Datasets](https://huggingface.co/docs/datasets/). Successivamente, lo script esegue il fine-tuning su un dataset usando Keras su un'architettura che supporta la summarization. Il seguente esempio mostra come eseguire il fine-tuning di [T5-small](https://huggingface.co/google-t5/t5-small) sul dataset [CNN/DailyMail](https://huggingface.co/datasets/cnn_dailymail). Il modello T5 richiede un parametro addizionale `source_prefix` a causa del modo in cui รจ stato addestrato. Questo prefisso permette a T5 di sapere che si tratta di un task di summarization.

```bash
python examples/tensorflow/summarization/run_summarization.py \
    --model_name_or_path google-t5/t5-small \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size 8 \
    --per_device_eval_batch_size 16 \
    --num_train_epochs 3 \
    --do_train \
    --do_eval
```
</tf>
</frameworkcontent>

## Addestramento distribuito e precisione mista

Il [Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) supporta l'addestramento distribuito e la precisione mista, che significa che puoi anche usarla in uno script. Per abilitare entrambe le funzionalitร :

- Aggiungi l'argomento `fp16` per abilitare la precisione mista.
- Imposta un numero di GPU da usare con l'argomento `nproc_per_node`.

```bash
torchrun \
    --nproc_per_node 8 pytorch/summarization/run_summarization.py \
    --fp16 \
    --model_name_or_path google-t5/t5-small \
    --do_train \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate
```

Gli script TensorFlow utilizzano una [`MirroredStrategy`](https://www.tensorflow.org/guide/distributed_training#mirroredstrategy) per il training distribuito e non devi aggiungere alcun argomento addizionale allo script di training. Lo script TensorFlow userร  multiple GPU in modo predefinito se quest'ultime sono disponibili.

## Esegui uno script su TPU

<frameworkcontent>
<pt>
Le Tensor Processing Units (TPU) sono state progettate per migliorare le prestazioni. PyTorch supporta le TPU con il compilatore per deep learning [XLA](https://www.tensorflow.org/xla) (guarda [questo link](https://github.com/pytorch/xla/blob/master/README.md) per maggiori dettagli). Per usare una TPU, avvia lo script `xla_spawn.py` e usa l'argomento `num_cores` per impostare il numero di core TPU che intendi usare.
```bash python xla_spawn.py --num_cores 8 \ summarization/run_summarization.py \ --model_name_or_path google-t5/t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ``` </pt> <tf> Le Tensor Processing Units (TPU) sono state progettate per migliorare le prestazioni. Gli script TensorFlow utilizzano una [`TPUStrategy`](https://www.tensorflow.org/guide/distributed_training#tpustrategy) per eseguire l'addestramento su TPU. Per usare una TPU, passa il nome della risorsa TPU all'argomento `tpu`. ```bash python run_summarization.py \ --tpu name_of_tpu_resource \ --model_name_or_path google-t5/t5-small \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size 8 \ --per_device_eval_batch_size 16 \ --num_train_epochs 3 \ --do_train \ --do_eval ``` </tf> </frameworkcontent> ## Esegui uno script con ๐Ÿค— Accelerate ๐Ÿค— [Accelerate](https://huggingface.co/docs/accelerate) รจ una libreria compatibile solo con PyTorch che offre un metodo unificato per addestrare modelli su diverse tipologie di configurazioni (CPU, multiple GPU, TPU) mantenendo una completa visibilitร  rispetto al ciclo di training di PyTorch. Assicurati di aver effettuato l'installazione di ๐Ÿค— Accelerate, nel caso non lo avessi fatto: > Nota: dato che Accelerate รจ in rapido sviluppo, รจ necessario installare la versione proveniente da git per eseguire gli script: ```bash pip install git+https://github.com/huggingface/accelerate ``` Invece che usare lo script `run_summarization.py`, devi usare lo script `run_summarization_no_trainer.py`. Gli script supportati in ๐Ÿค— Accelerate avranno un file chiamato `task_no_trainer.py` nella rispettiva cartella. Per iniziare, esegui il seguente comando per creare e salvare un file di configurazione: ```bash accelerate config ``` Testa la tua configurazione per assicurarti della sua correttezza: ```bash accelerate test ``` Ora sei pronto per avviare l'addestramento: ```bash accelerate launch run_summarization_no_trainer.py \ --model_name_or_path google-t5/t5-small \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir ~/tmp/tst-summarization ``` ## Uso di un dataset personalizzato Lo script di summarization supporta dataset personalizzati purchรฉ siano file CSV o JSON Line. Quando usi il tuo dataset, devi specificare diversi argomenti aggiuntivi: - `train_file` e `validation_file` specificano dove si trovano i file di addestramento e validazione. - `text_column` รจ il file di input da riassumere. - `summary_column` รจ il file di destinazione per l'output. 
Uno script di summarization usando un dataset personalizzato sarebbe simile a questo: ```bash python examples/pytorch/summarization/run_summarization.py \ --model_name_or_path google-t5/t5-small \ --do_train \ --do_eval \ --train_file path_to_csv_or_jsonlines_file \ --validation_file path_to_csv_or_jsonlines_file \ --text_column text_column_name \ --summary_column summary_column_name \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --overwrite_output_dir \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --predict_with_generate ``` ## Testare uno script รˆ spesso una buona idea avviare il tuo script su un numero inferiore di esempi tratti dal dataset, per assicurarti che tutto funzioni come previsto prima di eseguire lo script sull'intero dataset, che potrebbe necessitare di ore. Usa i seguenti argomenti per limitare il dataset ad un massimo numero di esempi: - `max_train_samples` - `max_eval_samples` - `max_predict_samples` ```bash python examples/pytorch/summarization/run_summarization.py \ --model_name_or_path google-t5/t5-small \ --max_train_samples 50 \ --max_eval_samples 50 \ --max_predict_samples 50 \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ``` Non tutti gli esempi di script supportano l'argomento `max_predict_samples`. Se non sei sicuro circa il supporto di questo argomento da parte del tuo script, aggiungi l'argomento `-h` per controllare: ```bash examples/pytorch/summarization/run_summarization.py -h ``` ## Riavviare addestramento da un checkpoint Un'altra utile opzione รจ riavviare un addestramento da un checkpoint precedente. Questo garantirร  che tu possa riprendere da dove hai interrotto senza ricominciare se l'addestramento viene interrotto. Ci sono due metodi per riavviare l'addestramento da un checkpoint: Il primo metodo usa l'argomento `output_dir previous_output_dir` per riavviare l'addestramento dall'ultima versione del checkpoint contenuto in `output_dir`. In questo caso, dovresti rimuovere `overwrite_output_dir`: ```bash python examples/pytorch/summarization/run_summarization.py --model_name_or_path google-t5/t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --output_dir previous_output_dir \ --predict_with_generate ``` Il secondo metodo usa l'argomento `resume_from_checkpoint path_to_specific_checkpoint` per riavviare un addestramento da una specifica cartella di checkpoint. ```bash python examples/pytorch/summarization/run_summarization.py --model_name_or_path google-t5/t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --resume_from_checkpoint path_to_specific_checkpoint \ --predict_with_generate ``` ## Condividi il tuo modello Tutti gli script possono caricare il tuo modello finale al [Model Hub](https://huggingface.co/models). Prima di iniziare, assicurati di aver effettuato l'accesso su Hugging Face: ```bash huggingface-cli login ``` Poi, aggiungi l'argomento `push_to_hub` allo script. 
Questo argomento consentirร  di creare un repository con il tuo username Hugging Face e la cartella specificata in `output_dir`. Per dare uno specifico nome al repository, usa l'argomento `push_to_hub_model_id`. Il repository verrร  automaticamente elencato sotto il tuo namespace.

Il seguente esempio mostra come caricare un modello specificando il nome del repository:

```bash
python examples/pytorch/summarization/run_summarization.py --model_name_or_path google-t5/t5-small \
    --do_train \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --push_to_hub \
    --push_to_hub_model_id finetuned-t5-cnn_dailymail \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate
```
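Una volta completato il push, puoi riutilizzare il modello caricato sul Model Hub con `from_pretrained`. Quello che segue รจ solo uno schema indicativo: il nome del repository dipende dal tuo username e dal valore di `push_to_hub_model_id` usato nell'esempio sopra.

```py
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

>>> # "il-tuo-username/finetuned-t5-cnn_dailymail" รจ un nome ipotetico, basato sull'esempio precedente
>>> tokenizer = AutoTokenizer.from_pretrained("il-tuo-username/finetuned-t5-cnn_dailymail")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("il-tuo-username/finetuned-t5-cnn_dailymail")
```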
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/it/create_a_model.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

# Crea un'architettura personalizzata

Una [`AutoClass`](model_doc/auto) deduce automaticamente l'architettura del modello e scarica la configurazione e i pesi pre-allenati. Generalmente, noi consigliamo di usare un `AutoClass` per produrre un codice indipendente dal checkpoint. Ma gli utenti che desiderano un controllo maggiore su parametri specifici del modello possono creare un modello ๐Ÿค— Transformers personalizzato da poche classi base. Questo potrebbe essere particolarmente utile per qualunque persona sia interessata a studiare, allenare o sperimentare con un modello ๐Ÿค— Transformers. In questa guida, approfondisci la creazione di un modello personalizzato senza `AutoClass`. Impara come:

- Caricare e personalizzare una configurazione del modello.
- Creare un'architettura modello.
- Creare un tokenizer lento e veloce per il testo.
- Creare un estrattore di caratteristiche per attivitร  riguardanti audio o immagini.
- Creare un processore per attivitร  multimodali.

## Configurazione

Una [configurazione](main_classes/configuration) si riferisce agli attributi specifici di un modello. Ogni configurazione del modello ha attributi diversi; per esempio, tutti i modelli NLP hanno in comune gli attributi `hidden_size`, `num_attention_heads`, `num_hidden_layers` e `vocab_size`. Questi attributi specificano il numero di attention heads o strati nascosti con cui costruire un modello.

Dai un'occhiata piรน da vicino a [DistilBERT](model_doc/distilbert) accedendo a [`DistilBertConfig`] per ispezionare i suoi attributi:

```py
>>> from transformers import DistilBertConfig

>>> config = DistilBertConfig()
>>> print(config)
DistilBertConfig {
  "activation": "gelu",
  "attention_dropout": 0.1,
  "dim": 768,
  "dropout": 0.1,
  "hidden_dim": 3072,
  "initializer_range": 0.02,
  "max_position_embeddings": 512,
  "model_type": "distilbert",
  "n_heads": 12,
  "n_layers": 6,
  "pad_token_id": 0,
  "qa_dropout": 0.1,
  "seq_classif_dropout": 0.2,
  "sinusoidal_pos_embds": false,
  "transformers_version": "4.16.2",
  "vocab_size": 30522
}
```

[`DistilBertConfig`] mostra tutti gli attributi predefiniti usati per costruire una base [`DistilBertModel`]. Tutti gli attributi sono personalizzabili, creando uno spazio per sperimentare. Per esempio, puoi configurare un modello predefinito per:

- Provare una funzione di attivazione diversa con il parametro `activation`.
- Utilizzare un tasso di dropout piรน elevato per le probabilitร  di attention con il parametro `attention_dropout`.
```py >>> my_config = DistilBertConfig(activation="relu", attention_dropout=0.4) >>> print(my_config) DistilBertConfig { "activation": "relu", "attention_dropout": 0.4, "dim": 768, "dropout": 0.1, "hidden_dim": 3072, "initializer_range": 0.02, "max_position_embeddings": 512, "model_type": "distilbert", "n_heads": 12, "n_layers": 6, "pad_token_id": 0, "qa_dropout": 0.1, "seq_classif_dropout": 0.2, "sinusoidal_pos_embds": false, "transformers_version": "4.16.2", "vocab_size": 30522 } ``` Nella funzione [`~PretrainedConfig.from_pretrained`] possono essere modificati gli attributi del modello pre-allenato: ```py >>> my_config = DistilBertConfig.from_pretrained("distilbert/distilbert-base-uncased", activation="relu", attention_dropout=0.4) ``` Quando la configurazione del modello ti soddisfa, la puoi salvare con [`~PretrainedConfig.save_pretrained`]. Il file della tua configurazione รจ memorizzato come file JSON nella save directory specificata: ```py >>> my_config.save_pretrained(save_directory="./your_model_save_path") ``` Per riutilizzare la configurazione del file, caricalo con [`~PretrainedConfig.from_pretrained`]: ```py >>> my_config = DistilBertConfig.from_pretrained("./your_model_save_path/my_config.json") ``` <Tip> Puoi anche salvare il file di configurazione come dizionario oppure come la differenza tra gli attributi della tua configurazione personalizzata e gli attributi della configurazione predefinita! Guarda la documentazione [configuration](main_classes/configuration) per piรน dettagli. </Tip> ## Modello Il prossimo passo e di creare [modello](main_classes/models). Il modello - vagamente riferito anche come architettura - definisce cosa ogni strato deve fare e quali operazioni stanno succedendo. Attributi come `num_hidden_layers` provenienti dalla configurazione sono usati per definire l'architettura. Ogni modello condivide la classe base [`PreTrainedModel`] e alcuni metodi comuni come il ridimensionamento degli input embeddings e la soppressione delle self-attention heads . Inoltre, tutti i modelli sono la sottoclasse di [`torch.nn.Module`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html), [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) o [`flax.linen.Module`](https://flax.readthedocs.io/en/latest/api_reference/flax.linen/module.html). Cio significa che i modelli sono compatibili con l'uso di ciascun di framework. <frameworkcontent> <pt> Carica gli attributi della tua configurazione personalizzata nel modello: ```py >>> from transformers import DistilBertModel >>> my_config = DistilBertConfig.from_pretrained("./your_model_save_path/my_config.json") >>> model = DistilBertModel(my_config) ``` Questo crea modelli con valori casuali invece di pesi pre-allenati. Non sarai in grado di usare questo modello per niente di utile finchรฉ non lo alleni. L'allenamento รจ un processo costoso e che richiede tempo . Generalmente รจ meglio usare un modello pre-allenato per ottenere risultati migliori velocemente, utilizzando solo una frazione delle risorse neccesarie per l'allenamento. Crea un modello pre-allenato con [`~PreTrainedModel.from_pretrained`]: ```py >>> model = DistilBertModel.from_pretrained("distilbert/distilbert-base-uncased") ``` Quando carichi pesi pre-allenati, la configurazione del modello predefinito รจ automaticamente caricata se il modello รจ fornito da ๐Ÿค— Transformers. 
Tuttavia, puoi ancora sostituire gli attributi - alcuni o tutti - di configurazione del modello predefinito con i tuoi se lo desideri:

```py
>>> model = DistilBertModel.from_pretrained("distilbert/distilbert-base-uncased", config=my_config)
```
</pt>
<tf>
Carica gli attributi di configurazione personalizzati nel modello:

```py
>>> from transformers import TFDistilBertModel

>>> my_config = DistilBertConfig.from_pretrained("./your_model_save_path/my_config.json")
>>> tf_model = TFDistilBertModel(my_config)
```

Questo crea modelli con valori casuali invece di pesi pre-allenati. Non sarai in grado di usare questo modello per niente di utile finchรฉ non lo alleni. L'allenamento รจ un processo costoso e che richiede tempo. Generalmente รจ meglio usare un modello pre-allenato per ottenere risultati migliori velocemente, utilizzando solo una frazione delle risorse necessarie per l'allenamento.

Crea un modello pre-allenato con [`~TFPreTrainedModel.from_pretrained`]:

```py
>>> tf_model = TFDistilBertModel.from_pretrained("distilbert/distilbert-base-uncased")
```

Quando carichi pesi pre-allenati, la configurazione del modello predefinito รจ automaticamente caricata se il modello รจ fornito da ๐Ÿค— Transformers. Tuttavia, puoi ancora sostituire gli attributi - alcuni o tutti - di configurazione del modello predefinito con i tuoi se lo desideri:

```py
>>> tf_model = TFDistilBertModel.from_pretrained("distilbert/distilbert-base-uncased", config=my_config)
```
</tf>
</frameworkcontent>

### Model head

A questo punto, hai un modello DistilBERT base i cui output sono gli *hidden states* (in italiano stati nascosti). Gli stati nascosti sono passati come input a un model head per produrre l'output finale. ๐Ÿค— Transformers fornisce un model head diverso per ogni attivitร  fintanto che il modello supporta l'attivitร  (i.e., non puoi usare DistilBERT per un'attivitร  sequence-to-sequence come la traduzione).

<frameworkcontent>
<pt>
Per esempio, [`DistilBertForSequenceClassification`] รจ un modello DistilBERT base con una testa di classificazione per sequenze. La head di classificazione di sequenze รจ uno strato lineare sopra gli output raggruppati.

```py
>>> from transformers import DistilBertForSequenceClassification

>>> model = DistilBertForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased")
```

Riutilizza facilmente questo checkpoint per un'altra attivitร  passando ad un model head differente. Per un'attivitร  di risposta alle domande, utilizzerai il model head [`DistilBertForQuestionAnswering`]. La head per compiti di question answering รจ simile alla head di classificazione di sequenze, tranne per il fatto che รจ uno strato lineare sopra l'output degli stati nascosti (hidden states in inglese).

```py
>>> from transformers import DistilBertForQuestionAnswering

>>> model = DistilBertForQuestionAnswering.from_pretrained("distilbert/distilbert-base-uncased")
```
</pt>
<tf>
Per esempio, [`TFDistilBertForSequenceClassification`] รจ un modello DistilBERT base con una testa di classificazione per sequenze. La head di classificazione di sequenze รจ uno strato lineare sopra gli output raggruppati.

```py
>>> from transformers import TFDistilBertForSequenceClassification

>>> tf_model = TFDistilBertForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased")
```

Riutilizza facilmente questo checkpoint per un'altra attivitร  passando ad un model head diverso. Per un'attivitร  di risposta alle domande, utilizzerai il model head [`TFDistilBertForQuestionAnswering`].
Il head di risposta alle domande รจ simile alla sequenza di classificazione head tranne per il fatto che รจ uno strato lineare sopra l'output degli stati nascosti (hidden states in inglese) ```py >>> from transformers import TFDistilBertForQuestionAnswering >>> tf_model = TFDistilBertForQuestionAnswering.from_pretrained("distilbert/distilbert-base-uncased") ``` </tf> </frameworkcontent> ## Tokenizer L'ultima classe base di cui hai bisogno prima di utilizzare un modello per i dati testuali รจ un [tokenizer](main_classes/tokenizer) per convertire il testo grezzo in tensori. Ci sono due tipi di tokenizer che puoi usare con ๐Ÿค— Transformers: - [`PreTrainedTokenizer`]: un'implementazione Python di un tokenizer. - [`PreTrainedTokenizerFast`]: un tokenizer dalla nostra libreria [๐Ÿค— Tokenizer](https://huggingface.co/docs/tokenizers/python/latest/) basata su Rust. Questo tipo di tokenizer รจ significativamente piรน veloce, specialmente durante la batch tokenization, grazie alla sua implementazione Rust. Il tokenizer veloce offre anche metodi aggiuntivi come *offset mapping* che associa i token alle loro parole o caratteri originali. Entrambi i tokenizer supportano metodi comuni come la codifica e la decodifica, l'aggiunta di nuovi token e la gestione di token speciali. <Tip warning={true}> Non tutti i modelli supportano un tokenizer veloce. Dai un'occhiata a questo [tabella](index#supported-frameworks) per verificare se un modello ha il supporto per tokenizer veloce. </Tip> Se hai addestrato il tuo tokenizer, puoi crearne uno dal tuo file *vocabolario*: ```py >>> from transformers import DistilBertTokenizer >>> my_tokenizer = DistilBertTokenizer(vocab_file="my_vocab_file.txt", do_lower_case=False, padding_side="left") ``` รˆ importante ricordare che il vocabolario di un tokenizer personalizzato sarร  diverso dal vocabolario generato dal tokenizer di un modello preallenato. รˆ necessario utilizzare il vocabolario di un modello preallenato se si utilizza un modello preallenato, altrimenti gli input non avranno senso. Crea un tokenizer con il vocabolario di un modello preallenato con la classe [`DistilBertTokenizer`]: ```py >>> from transformers import DistilBertTokenizer >>> slow_tokenizer = DistilBertTokenizer.from_pretrained("distilbert/distilbert-base-uncased") ``` Crea un tokenizer veloce con la classe [`DistilBertTokenizerFast`]: ```py >>> from transformers import DistilBertTokenizerFast >>> fast_tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert/distilbert-base-uncased") ``` <Tip> Per l'impostazione predefinita, [`AutoTokenizer`] proverร  a caricare un tokenizer veloce. Puoi disabilitare questo comportamento impostando `use_fast=False` in `from_pretrained`. </Tip> ## Estrattore Di Feature Un estrattore di caratteristiche (feature in inglese) elabora input audio o immagini. Eredita dalla classe [`~feature_extraction_utils.FeatureExtractionMixin`] base e puรฒ anche ereditare dalla classe [`ImageFeatureExtractionMixin`] per l'elaborazione delle caratteristiche dell'immagine o dalla classe [`SequenceFeatureExtractor`] per l'elaborazione degli input audio. A seconda che tu stia lavorando a un'attivitร  audio o visiva, crea un estrattore di caratteristiche associato al modello che stai utilizzando. 
Ad esempio, crea un [`ViTFeatureExtractor`] predefinito se stai usando [ViT](model_doc/vit) per la classificazione delle immagini: ```py >>> from transformers import ViTFeatureExtractor >>> vit_extractor = ViTFeatureExtractor() >>> print(vit_extractor) ViTFeatureExtractor { "do_normalize": true, "do_resize": true, "feature_extractor_type": "ViTFeatureExtractor", "image_mean": [ 0.5, 0.5, 0.5 ], "image_std": [ 0.5, 0.5, 0.5 ], "resample": 2, "size": 224 } ``` <Tip> Se non stai cercando alcuna personalizzazione, usa il metodo `from_pretrained` per caricare i parametri di default dell'estrattore di caratteristiche di un modello. </Tip> Modifica uno qualsiasi dei parametri [`ViTFeatureExtractor`] per creare il tuo estrattore di caratteristiche personalizzato: ```py >>> from transformers import ViTFeatureExtractor >>> my_vit_extractor = ViTFeatureExtractor(resample="PIL.Image.BOX", do_normalize=False, image_mean=[0.3, 0.3, 0.3]) >>> print(my_vit_extractor) ViTFeatureExtractor { "do_normalize": false, "do_resize": true, "feature_extractor_type": "ViTFeatureExtractor", "image_mean": [ 0.3, 0.3, 0.3 ], "image_std": [ 0.5, 0.5, 0.5 ], "resample": "PIL.Image.BOX", "size": 224 } ``` Per gli input audio, puoi creare un [`Wav2Vec2FeatureExtractor`] e personalizzare i parametri in modo simile: ```py >>> from transformers import Wav2Vec2FeatureExtractor >>> w2v2_extractor = Wav2Vec2FeatureExtractor() >>> print(w2v2_extractor) Wav2Vec2FeatureExtractor { "do_normalize": true, "feature_extractor_type": "Wav2Vec2FeatureExtractor", "feature_size": 1, "padding_side": "right", "padding_value": 0.0, "return_attention_mask": false, "sampling_rate": 16000 } ``` ## Processore Per modelli che supportano attivitร  multimodali, ๐Ÿค— Transformers offre una classe di processore che racchiude comodamente un estrattore di caratteristiche e un tokenizer in un unico oggetto. Ad esempio, utilizziamo [`Wav2Vec2Processor`] per un'attivitร  di riconoscimento vocale automatico (ASR). ASR trascrive l'audio in testo, quindi avrai bisogno di un estrattore di caratteristiche e di un tokenizer. Crea un estrattore di feature per gestire gli input audio: ```py >>> from transformers import Wav2Vec2FeatureExtractor >>> feature_extractor = Wav2Vec2FeatureExtractor(padding_value=1.0, do_normalize=True) ``` Crea un tokenizer per gestire gli input di testo: ```py >>> from transformers import Wav2Vec2CTCTokenizer >>> tokenizer = Wav2Vec2CTCTokenizer(vocab_file="my_vocab_file.txt") ``` Combinare l'estrattore di caratteristiche e il tokenizer in [`Wav2Vec2Processor`]: ```py >>> from transformers import Wav2Vec2Processor >>> processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer) ``` Con due classi di base - configurazione e modello - e una classe di preelaborazione aggiuntiva (tokenizer, estrattore di caratteristiche o processore), puoi creare qualsiasi modello supportato da ๐Ÿค— Transformers. Ognuna di queste classi base รจ configurabile, consentendoti di utilizzare gli attributi specifici che desideri. รˆ possibile impostare facilmente un modello per l'addestramento o modificare un modello preallenato esistente per la messa a punto.
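Per concludere, ecco uno schizzo puramente illustrativo (con un input audio fittizio) di come il processore appena creato potrebbe preparare gli input e le label per un modello ASR; i valori e la trascrizione sono solo d'esempio e dipendono dal vocabolario del tuo tokenizer:

```py
>>> import numpy as np

>>> # audio fittizio: un secondo di silenzio campionato a 16 kHz (solo a scopo illustrativo)
>>> raw_audio = np.zeros(16000, dtype=np.float32)

>>> # l'audio viene instradato all'estrattore di caratteristiche...
>>> inputs = processor(raw_audio, sampling_rate=16000, return_tensors="pt")
>>> # ...e il testo al tokenizer, ad esempio per creare le label di addestramento
>>> labels = processor(text="UNA TRASCRIZIONE DI ESEMPIO").input_ids
```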
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/it/community.md
<!--โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Comunitร  Questa pagina raggruppa le risorse sviluppate dalla comunitร  riguardo ๐Ÿค— Transformers. ## Risorse della comunitร : | Risorsa | Descrizione | Autore | |:----------|:-------------|------:| | [Glossario delle Flashcards di Transformers](https://www.darigovresearch.com/huggingface-transformers-glossary-flashcards) | Un insieme di flashcards basate sul [glossario della documentazione di Transformers](glossary), creato in un formato tale da permettere un facile apprendimento e revisione usando [Anki](https://apps.ankiweb.net/), un'applicazione open-source e multi-piattaforma, specificatamente progettata per ricordare informazioni nel lungo termine. Guarda questo [video introduttivo su come usare le flashcards](https://www.youtube.com/watch?v=Dji_h7PILrw). | [Darigov Research](https://www.darigovresearch.com/) | ## Notebook della comunitร : | Notebook | Descrizione | Autore | | |:----------|:-------------|:-------------|------:| | [Fine-tuning di un Transformer pre-addestrato, al fine di generare testi di canzoni](https://github.com/AlekseyKorshuk/huggingartists) | Come generare testi di canzoni nello stile del vostro artista preferito attraverso il fine-tuning di un modello GPT-2. | [Aleksey Korshuk](https://github.com/AlekseyKorshuk) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb) | | [Addestramento di T5 in Tensorflow 2](https://github.com/snapthat/TF-T5-text-to-text) | Come addestrare T5 per qualsiasi attivitร  usando Tensorflow 2. Questo notebook mostra come risolvere l'attivitร  di "Question Answering" usando Tensorflow 2 e SQUAD. | [Muhammad Harris](https://github.com/HarrisDePerceptron) |[![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/snapthat/TF-T5-text-to-text/blob/master/snapthatT5/notebooks/TF-T5-Datasets%20Training.ipynb) | | [Addestramento di T5 con TPU](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) | Come addestrare T5 su SQUAD con Transformers e NLP. | [Suraj Patil](https://github.com/patil-suraj) |[![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb#scrollTo=QLGiFCDqvuil) | | [Fine-tuning di T5 per la classificazione e scelta multipla](https://github.com/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) | Come effettuare il fine-tuning di T5 per le attivitร  di classificazione a scelta multipla - usando un formato testo-a-testo - con PyTorch Lightning. | [Suraj Patil](https://github.com/patil-suraj) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) | | [Fine-tuning di DialoGPT su nuovi dataset e lingue](https://github.com/ncoop57/i-am-a-nerd/blob/master/_notebooks/2020-05-12-chatbot-part-1.ipynb) | Come effettuare il fine-tuning di un modello DialoGPT su un nuovo dataset per chatbots conversazionali open-dialog. 
| [Nathan Cooper](https://github.com/ncoop57) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ncoop57/i-am-a-nerd/blob/master/_notebooks/2020-05-12-chatbot-part-1.ipynb) | | [Modellamento di una lunga sequenza con Reformer](https://github.com/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb) | Come addestrare su sequenze di lunghezza fino a 500 mila token con Reformer. | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb) | | [Fine-tuning di BART per riassumere testi](https://github.com/ohmeow/ohmeow_website/blob/master/_notebooks/2020-05-23-text-generation-with-blurr.ipynb) | Come effettuare il fine-tuning di BART per riassumere testi con fastai usando blurr. | [Wayde Gilliam](https://ohmeow.com/) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ohmeow/ohmeow_website/blob/master/_notebooks/2020-05-23-text-generation-with-blurr.ipynb) | | [Fine-tuning di un Transformer pre-addestrato su tweet](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb) | Come generare tweet nello stile del tuo account Twitter preferito attraverso il fine-tuning di un modello GPT-2. | [Boris Dayma](https://github.com/borisdayma) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb) | | [Ottimizzazione di modelli ๐Ÿค— Hugging Face con Weights & Biases](https://colab.research.google.com/github/wandb/examples/blob/master/colabs/huggingface/Optimize_Hugging_Face_models_with_Weights_%26_Biases.ipynb) | Un tutorial completo che mostra l'integrazione di W&B con Hugging Face. | [Boris Dayma](https://github.com/borisdayma) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/wandb/examples/blob/master/colabs/huggingface/Optimize_Hugging_Face_models_with_Weights_%26_Biases.ipynb) | | [Longformer pre-addestrato](https://github.com/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) | Come costruire una versione "long" degli esistenti modelli pre-addestrati. | [Iz Beltagy](https://beltagy.net) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) | | [Fine-tuning di Longformer per QA](https://github.com/patil-suraj/Notebooks/blob/master/longformer_qa_training.ipynb) | Come effettuare il fine-tuning di un modello longformer per un task di QA.| [Suraj Patil](https://github.com/patil-suraj) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/Notebooks/blob/master/longformer_qa_training.ipynb) | | [Valutazione di modelli con ๐Ÿค—NLP](https://github.com/patrickvonplaten/notebooks/blob/master/How_to_evaluate_Longformer_on_TriviaQA_using_NLP.ipynb) | Come valutare longformer su TriviaQA con `NLP`. 
| [Patrick von Platen](https://github.com/patrickvonplaten) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1m7eTGlPmLRgoPkkA7rkhQdZ9ydpmsdLE?usp=sharing) | | [Fine-tuning di T5 per Sentiment Span Extraction](https://github.com/enzoampil/t5-intro/blob/master/t5_qa_training_pytorch_span_extraction.ipynb) | Come effettuare il fine-tuning di T5 per la sentiment span extraction - usando un formato testo-a-testo - con PyTorch Lightning. | [Lorenzo Ampil](https://github.com/enzoampil) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/enzoampil/t5-intro/blob/master/t5_qa_training_pytorch_span_extraction.ipynb) | | [Fine-tuning di DistilBert per la classificazione multi-classe](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_multiclass_classification.ipynb) | Come effettuare il fine-tuning di DistilBert per la classificazione multi-classe con PyTorch. | [Abhishek Kumar Mishra](https://github.com/abhimishra91) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_multiclass_classification.ipynb)| |[Fine-tuning di BERT per la classificazione multi-etichetta](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_multi_label_classification.ipynb)|Come effettuare il fine-tuning di BERT per la classificazione multi-etichetta con PyTorch. |[Abhishek Kumar Mishra](https://github.com/abhimishra91) |[![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_multi_label_classification.ipynb)| |[Accelerazione del fine-tuning con il Dynamic Padding / Bucketing](https://github.com/ELS-RD/transformers-notebook/blob/master/Divide_Hugging_Face_Transformers_training_time_by_2_or_more.ipynb)| Come velocizzare il fine-tuning di un fattore 2X usando il dynamic padding / bucketing. |[Michael Benesty](https://github.com/pommedeterresautee) |[![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1CBfRU1zbfu7-ijiOqAAQUA-RJaxfcJoO?usp=sharing)| |[Pre-addestramento di Reformer per Masked Language Modeling](https://github.com/patrickvonplaten/notebooks/blob/master/Reformer_For_Masked_LM.ipynb)| Come addestrare un modello Reformer usando livelli di self-attention bi-direzionali.| [Patrick von Platen](https://github.com/patrickvonplaten) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1tzzh0i8PgDQGV3SMFUGxM7_gGae3K-uW?usp=sharing)| |[Espansione e fine-tuning di Sci-BERT](https://github.com/lordtt13/word-embeddings/blob/master/COVID-19%20Research%20Data/COVID-SciBERT.ipynb)| Come incrementare il vocabolario di un modello SciBERT - pre-addestrato da AllenAI sul dataset CORD - e crearne una pipeline. 
| [Tanmay Thakur](https://github.com/lordtt13) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1rqAR40goxbAfez1xvF3hBJphSCsvXmh8)| |[Fine-tuning di BlenderBotSmall per riassumere testi usando Trainer API](https://github.com/lordtt13/transformers-experiments/blob/master/Custom%20Tasks/fine-tune-blenderbot_small-for-summarization.ipynb)| Come effettuare il fine-tuning di BlenderBotSmall per riassumere testi su un dataset personalizzato, usando Trainer API. | [Tanmay Thakur](https://github.com/lordtt13) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/19Wmupuls7mykSGyRN_Qo6lPQhgp56ymq?usp=sharing)| |[Fine-tuning di Electra e interpretazione con Integrated Gradients](https://github.com/elsanns/xai-nlp-notebooks/blob/master/electra_fine_tune_interpret_captum_ig.ipynb) | Come effettuare il fine-tuning di Electra per l'analisi dei sentimenti e intepretare le predizioni con Captum Integrated Gradients. | [Eliza Szczechla](https://elsanns.github.io) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/elsanns/xai-nlp-notebooks/blob/master/electra_fine_tune_interpret_captum_ig.ipynb)| |[Fine-tuning di un modello GPT-2 non inglese con la classe Trainer](https://github.com/philschmid/fine-tune-GPT-2/blob/master/Fine_tune_a_non_English_GPT_2_Model_with_Huggingface.ipynb) | Come effettuare il fine-tuning di un modello GPT-2 non inglese con la classe Trainer. | [Philipp Schmid](https://www.philschmid.de) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/philschmid/fine-tune-GPT-2/blob/master/Fine_tune_a_non_English_GPT_2_Model_with_Huggingface.ipynb)| |[Fine-tuning di un modello DistilBERT per la classficazione multi-etichetta](https://github.com/DhavalTaunk08/Transformers_scripts/blob/master/Transformers_multilabel_distilbert.ipynb) | Come effettuare il fine-tuning di un modello DistilBERT per l'attivitร  di classificazione multi-etichetta. | [Dhaval Taunk](https://github.com/DhavalTaunk08) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/DhavalTaunk08/Transformers_scripts/blob/master/Transformers_multilabel_distilbert.ipynb)| |[Fine-tuning di ALBERT per la classifcazione di coppie di frasi](https://github.com/NadirEM/nlp-notebooks/blob/master/Fine_tune_ALBERT_sentence_pair_classification.ipynb) | Come effettuare il fine-tuning di un modello ALBERT - o un altro modello BERT-based - per l'attivitร  di classificazione di coppie di frasi. | [Nadir El Manouzi](https://github.com/NadirEM) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NadirEM/nlp-notebooks/blob/master/Fine_tune_ALBERT_sentence_pair_classification.ipynb)| |[Fine-tuning di Roberta per l'analisi di sentimenti](https://github.com/DhavalTaunk08/NLP_scripts/blob/master/sentiment_analysis_using_roberta.ipynb) | Come effettuare il fine-tuning di un modello Roberta per l'analisi di sentimenti. 
| [Dhaval Taunk](https://github.com/DhavalTaunk08) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/DhavalTaunk08/NLP_scripts/blob/master/sentiment_analysis_using_roberta.ipynb)| |[Valutazione di modelli che generano domande](https://github.com/flexudy-pipe/qugeev) | Quanto sono accurante le risposte alle domande generate dal tuo modello transformer seq2seq? | [Pascal Zoleko](https://github.com/zolekode) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1bpsSqCQU-iw_5nNoRm_crPq6FRuJthq_?usp=sharing)| |[Classificazione di testo con DistilBERT e Tensorflow](https://github.com/peterbayerle/huggingface_notebook/blob/main/distilbert_tf.ipynb) | Come effettuare il fine-tuning di DistilBERT per la classificazione di testo in TensorFlow. | [Peter Bayerle](https://github.com/peterbayerle) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/peterbayerle/huggingface_notebook/blob/main/distilbert_tf.ipynb)| |[Utilizzo di BERT per riassumere testi con un modello Encoder-Decoder su CNN/Dailymail](https://github.com/patrickvonplaten/notebooks/blob/master/BERT2BERT_for_CNN_Dailymail.ipynb) | Come avviare "a caldo" un *EncoderDecoderModel* attraverso l'utilizzo di un checkpoint *google-bert/bert-base-uncased* per riassumere testi su CNN/Dailymail. | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/BERT2BERT_for_CNN_Dailymail.ipynb)| |[Utilizzo di RoBERTa per riassumere testi con un modello Encoder-Decoder su BBC XSum](https://github.com/patrickvonplaten/notebooks/blob/master/RoBERTaShared_for_BBC_XSum.ipynb) | Come avviare "a caldo" un *EncoderDecoderModel* (condiviso) attraverso l'utilizzo di un checkpoint *FacebookAI/roberta-base* per riassumere testi su BBC/XSum. | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/RoBERTaShared_for_BBC_XSum.ipynb)| |[Fine-tuning di TAPAS su Sequential Question Answering (SQA)](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Fine_tuning_TapasForQuestionAnswering_on_SQA.ipynb) | Come effettuare il fine-tuning di un modello *TapasForQuestionAnswering* attraverso l'utilizzo di un checkpoint *tapas-base* sul dataset Sequential Question Answering (SQA). | [Niels Rogge](https://github.com/nielsrogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Fine_tuning_TapasForQuestionAnswering_on_SQA.ipynb)| |[Valutazione di TAPAS su Table Fact Checking (TabFact)](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Evaluating_TAPAS_on_the_Tabfact_test_set.ipynb) | Come valutare un modello *TapasForSequenceClassification* - fine-tuned con un checkpoint *tapas-base-finetuned-tabfact* - usando una combinazione delle librerie ๐Ÿค— datasets e ๐Ÿค— transformers. 
| [Niels Rogge](https://github.com/nielsrogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Evaluating_TAPAS_on_the_Tabfact_test_set.ipynb)| |[Fine-tuning di mBART per la traduzione](https://colab.research.google.com/github/vasudevgupta7/huggingface-tutorials/blob/main/translation_training.ipynb) | Come effettuare il fine-tuning di mBART usando Seq2SeqTrainer per la traduzione da hindi a inglese.| [Vasudev Gupta](https://github.com/vasudevgupta7) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/vasudevgupta7/huggingface-tutorials/blob/main/translation_training.ipynb)| |[Fine-tuning di LayoutLM su FUNSD (un dataset per la comprensione della forma)](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForTokenClassification_on_FUNSD.ipynb) | Come effettuare il fine-tuning di un modello *LayoutLMForTokenClassification* sul dataset FUNSD per l'estrazione di informazioni da documenti scannerizzati.| [Niels Rogge](https://github.com/nielsrogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForTokenClassification_on_FUNSD.ipynb)| |[Fine-tuning di DistilGPT2 e generazione di testo](https://colab.research.google.com/github/tripathiaakash/DistilGPT2-Tutorial/blob/main/distilgpt2_fine_tuning.ipynb) | Come effettuare il fine-tuning di DistilGPT2 e generare testo. | [Aakash Tripathi](https://github.com/tripathiaakash) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tripathiaakash/DistilGPT2-Tutorial/blob/main/distilgpt2_fine_tuning.ipynb)| |[Fine-tuning di LED fino a 8 mila token](https://github.com/patrickvonplaten/notebooks/blob/master/Fine_tune_Longformer_Encoder_Decoder_(LED)_for_Summarization_on_pubmed.ipynb) | Come effettuare il fine-tuning di LED su PubMed per riassumere "lunghi" testi. | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_tune_Longformer_Encoder_Decoder_(LED)_for_Summarization_on_pubmed.ipynb)| |[Valutazione di LED su Arxiv](https://github.com/patrickvonplaten/notebooks/blob/master/LED_on_Arxiv.ipynb) | Come valutare efficacemente LED sull'attivitร  di riassumere "lunghi" testi. | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/LED_on_Arxiv.ipynb)| |[Fine-tuning di LayoutLM su RVL-CDIP, un dataset per la classificazione di documenti (immagini)](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForSequenceClassification_on_RVL_CDIP.ipynb) | Come effettuare il fine-tuning di un modello *LayoutLMForSequenceClassification* sul dataset RVL-CDIP per la classificazione di documenti scannerizzati. 
| [Niels Rogge](https://github.com/nielsrogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForSequenceClassification_on_RVL_CDIP.ipynb)| |[Decodifica Wav2Vec2 CTC con variazioni di GPT2](https://github.com/voidful/huggingface_notebook/blob/main/xlsr_gpt.ipynb) | Come decodificare sequenze CTC, variate da modelli di linguaggio. | [Eric Lam](https://github.com/voidful) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1e_z5jQHYbO2YKEaUgzb1ww1WwiAyydAj?usp=sharing) |[Fine-tuning di BART per riassumere testi in due lingue con la classe Trainer](https://github.com/elsanns/xai-nlp-notebooks/blob/master/fine_tune_bart_summarization_two_langs.ipynb) | Come effettuare il fine-tuning di BART per riassumere testi in due lingue usando la classe Trainer. | [Eliza Szczechla](https://github.com/elsanns) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/elsanns/xai-nlp-notebooks/blob/master/fine_tune_bart_summarization_two_langs.ipynb)| |[Valutazione di Big Bird su Trivia QA](https://github.com/patrickvonplaten/notebooks/blob/master/Evaluating_Big_Bird_on_TriviaQA.ipynb) | Come valutare BigBird su question answering di "lunghi" documenti attraverso Trivia QA. | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Evaluating_Big_Bird_on_TriviaQA.ipynb)| | [Creazione di sottotitoli per video usando Wav2Vec2](https://github.com/Muennighoff/ytclipcc/blob/main/wav2vec_youtube_captions.ipynb) | Come creare sottotitoli per qualsiasi video di YouTube trascrivendo l'audio con Wav2Vec. | [Niklas Muennighoff](https://github.com/Muennighoff) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Muennighoff/ytclipcc/blob/main/wav2vec_youtube_captions.ipynb) | | [Fine-tuning di Vision Transformer su CIFAR-10 usando PyTorch Lightning](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_PyTorch_Lightning.ipynb) | Come effettuare il fine-tuning di Vision Transformer (ViT) su CIFAR-10 usando HuggingFace Transformers, Datasets e PyTorch Lightning.| [Niels Rogge](https://github.com/nielsrogge) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_PyTorch_Lightning.ipynb) | | [Fine-tuning di Vision Transformer su CIFAR-10 usando ๐Ÿค— Trainer](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_the_%F0%9F%A4%97_Trainer.ipynb) | Come effettuare il fine-tuning di Vision Transformer (ViT) su CIFAR-10 usando HuggingFace Transformers, Datasets e ๐Ÿค— Trainer. 
| [Niels Rogge](https://github.com/nielsrogge) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_the_%F0%9F%A4%97_Trainer.ipynb) | | [Valutazione di LUKE su Open Entity, un dataset di entity typing](https://github.com/studio-ousia/luke/blob/master/notebooks/huggingface_open_entity.ipynb) | Come valutare un modello *LukeForEntityClassification* sul dataset Open Entity. | [Ikuya Yamada](https://github.com/ikuyamada) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/studio-ousia/luke/blob/master/notebooks/huggingface_open_entity.ipynb) | | [Valutazione di LUKE su TACRED, un dataset per l'estrazione di relazioni](https://github.com/studio-ousia/luke/blob/master/notebooks/huggingface_tacred.ipynb) | Come valutare un modello *LukeForEntityPairClassification* sul dataset TACRED. | [Ikuya Yamada](https://github.com/ikuyamada) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/studio-ousia/luke/blob/master/notebooks/huggingface_tacred.ipynb) | | [Valutazione di LUKE su CoNLL-2003, un importante benchmark NER](https://github.com/studio-ousia/luke/blob/master/notebooks/huggingface_conll_2003.ipynb) | Come valutare un modello *LukeForEntitySpanClassification* sul dataset CoNLL-2003. | [Ikuya Yamada](https://github.com/ikuyamada) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/studio-ousia/luke/blob/master/notebooks/huggingface_conll_2003.ipynb) | | [Valutazione di BigBird-Pegasus su dataset PubMed](https://github.com/vasudevgupta7/bigbird/blob/main/notebooks/bigbird_pegasus_evaluation.ipynb) | Come valutare un modello *BigBirdPegasusForConditionalGeneration* su dataset PubMed. | [Vasudev Gupta](https://github.com/vasudevgupta7) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/vasudevgupta7/bigbird/blob/main/notebooks/bigbird_pegasus_evaluation.ipynb) | | [Classificazione di emozioni dal discorso con Wav2Vec2](https://github.com/m3hrdadfi/soxan/blob/main/notebooks/Emotion_recognition_in_Greek_speech_using_Wav2Vec2.ipynb) | Come utilizzare un modello pre-addestrato Wav2Vec2 per la classificazione di emozioni sul dataset MEGA. | [Mehrdad Farahani](https://github.com/m3hrdadfi) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/m3hrdadfi/soxan/blob/main/notebooks/Emotion_recognition_in_Greek_speech_using_Wav2Vec2.ipynb) | | [Rilevamento oggetti in un'immagine con DETR](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DETR/DETR_minimal_example_(with_DetrFeatureExtractor).ipynb) | Come usare un modello addestrato *DetrForObjectDetection* per rilevare oggetti in un'immagine e visualizzare l'attention.
| [Niels Rogge](https://github.com/NielsRogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/DETR/DETR_minimal_example_(with_DetrFeatureExtractor).ipynb) | | [Fine-tuning di DETR su un dataset personalizzato per rilevare oggetti](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DETR/Fine_tuning_DetrForObjectDetection_on_custom_dataset_(balloon).ipynb) | Come effettuare fine-tuning di un modello *DetrForObjectDetection* su un dataset personalizzato per rilevare oggetti. | [Niels Rogge](https://github.com/NielsRogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/DETR/Fine_tuning_DetrForObjectDetection_on_custom_dataset_(balloon).ipynb) | | [Fine-tuning di T5 per Named Entity Recognition](https://github.com/ToluClassics/Notebooks/blob/main/T5_Ner_Finetuning.ipynb) | Come effettuare fine-tuning di *T5* per un'attività di Named Entity Recognition. | [Ogundepo Odunayo](https://github.com/ToluClassics) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1obr78FY_cBmWY5ODViCmzdY6O1KB65Vc?usp=sharing) |
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/it/perf_train_tpu.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Addestramento su TPU

<Tip>

Nota: Molte delle strategie introdotte nella [sezione sulla GPU singola](perf_train_gpu_one) (come mixed precision training o gradient accumulation) e nella [sezione multi-GPU](perf_train_gpu_many) sono generiche e applicabili all'addestramento di modelli in generale, quindi assicurati di dar loro un'occhiata prima di immergerti in questa sezione.

</Tip>

Questo documento sarà presto completato con informazioni su come effettuare l'addestramento su TPU.
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/it/_config.py
# docstyle-ignore
INSTALL_CONTENT = """
# Installazione di Transformers
! pip install transformers datasets evaluate accelerate
# Per installare dalla fonte invece dell'ultima versione rilasciata, commenta il comando sopra e
# rimuovi la modalità commento al comando seguente.
# ! pip install git+https://github.com/huggingface/transformers.git
"""

notebook_first_cells = [{"type": "code", "content": INSTALL_CONTENT}]
black_avoid_patterns = {
    "{processor_class}": "FakeProcessorClass",
    "{model_class}": "FakeModelClass",
    "{object_class}": "FakeObjectClass",
}
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/it/perf_train_cpu.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Addestramento efficiente su CPU

Questa guida si concentra su come addestrare in maniera efficiente grandi modelli su CPU.

## Mixed precision con IPEX

IPEX è ottimizzato per CPU con AVX-512 o superiore, e funziona anche per le CPU con solo AVX2. Pertanto, si prevede che le prestazioni saranno più vantaggiose per le CPU Intel con AVX-512 o superiori, mentre le CPU con solo AVX2 (ad esempio, le CPU AMD o le CPU Intel più vecchie) potrebbero ottenere prestazioni migliori con IPEX, ma non sono garantite. IPEX offre ottimizzazioni delle prestazioni per l'addestramento su CPU sia con Float32 che con BFloat16. L'uso di BFloat16 è l'argomento principale delle sezioni seguenti.

Il tipo di dati a bassa precisione BFloat16 è supportato in modo nativo sui 3rd Generation Xeon® Scalable Processors (aka Cooper Lake) con AVX512 e sarà supportato dalla prossima generazione di Intel® Xeon® Scalable Processors con l'instruction set Intel® Advanced Matrix Extensions (Intel® AMX), con prestazioni ulteriormente migliorate. L'Auto Mixed Precision per il backend CPU è stata abilitata a partire da PyTorch-1.10. Allo stesso tempo, il supporto di Auto Mixed Precision con BFloat16 per CPU e l'ottimizzazione degli operatori BFloat16 sono stati abilitati in modo massiccio in Intel® Extension per PyTorch, e parzialmente integrati nel branch master di PyTorch. Con IPEX Auto Mixed Precision gli utenti possono ottenere prestazioni e un'esperienza d'uso migliori. Vedi informazioni più dettagliate su [Auto Mixed Precision](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/features/amp.html).

### Installazione di IPEX:

Il rilascio di IPEX segue quello di PyTorch, da installare via pip:

| PyTorch Version | IPEX version |
| :---------------: | :----------: |
| 1.13 | 1.13.0+cpu |
| 1.12 | 1.12.300+cpu |
| 1.11 | 1.11.200+cpu |
| 1.10 | 1.10.100+cpu |

```bash
pip install intel_extension_for_pytorch==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu
```

Vedi altri approcci per [IPEX installation](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/installation.html).

### Utilizzo nel Trainer

Per abilitare la auto mixed precision con IPEX in Trainer, l'utente dovrebbe aggiungere `use_ipex`, `bf16` e `no_cuda` negli argomenti del comando di addestramento.
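Gli stessi argomenti possono essere impostati anche via codice in `TrainingArguments` e poi passati al `Trainer` come di consueto. Quello che segue è solo uno schizzo indicativo, non presente negli esempi ufficiali di questa guida:

```py
from transformers import TrainingArguments

# Schizzo indicativo: gli stessi argomenti dell'esempio a riga di comando, impostati via codice
training_args = TrainingArguments(
    output_dir="/tmp/debug_squad/",
    per_device_train_batch_size=12,
    learning_rate=3e-5,
    num_train_epochs=2,
    use_ipex=True,  # abilita le ottimizzazioni IPEX (richiede intel_extension_for_pytorch installato)
    bf16=True,      # auto mixed precision con BFloat16
    no_cuda=True,   # forza l'addestramento su CPU
)
```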
Vedi un esempio di un caso d'uso [Transformers question-answering](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) - Training with IPEX using BF16 auto mixed precision on CPU:

<pre> python run_qa.py \
--model_name_or_path google-bert/bert-base-uncased \
--dataset_name squad \
--do_train \
--do_eval \
--per_device_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/debug_squad/ \
<b>--use_ipex \</b>
<b>--bf16 --no_cuda</b></pre>

### Esempi pratici

Blog: [Accelerating PyTorch Transformers with Intel Sapphire Rapids](https://huggingface.co/blog/intel-sapphire-rapids)
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/it/autoclass_tutorial.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Carica istanze pre-allenate con AutoClass Con cosรฌ tante architetture Transformer differenti, puรฒ essere sfidante crearne una per il tuo checkpoint. Come parte della filosofia centrale di ๐Ÿค— Transformers per rendere la libreria facile, semplice e flessibile da utilizzare, una `AutoClass` inferisce e carica automaticamente l'architettura corretta da un dato checkpoint. Il metodo `from_pretrained` ti permette di caricare velocemente un modello pre-allenato per qualsiasi architettura, cosรฌ non devi utilizzare tempo e risorse per allenare un modello da zero. Produrre questo codice agnostico ai checkpoint significa che se il tuo codice funziona per un checkpoint, funzionerร  anche per un altro checkpoint, purchรฉ sia stato allenato per un compito simile, anche se l'architettura รจ differente. <Tip> Ricorda, con architettura ci si riferisce allo scheletro del modello e con checkpoint ai pesi di una determinata architettura. Per esempio, [BERT](https://huggingface.co/google-bert/bert-base-uncased) รจ un'architettura, mentre `google-bert/bert-base-uncased` รจ un checkpoint. Modello รจ un termine generale che puรฒ significare sia architettura che checkpoint. </Tip> In questo tutorial, imparerai a: * Caricare un tokenizer pre-allenato. * Caricare un estrattore di caratteristiche (feature extractor, in inglese) pre-allenato. * Caricare un processore pre-allenato. * Caricare un modello pre-allenato. ## AutoTokenizer Quasi tutti i compiti di NLP iniziano con un tokenizer. Un tokenizer converte il tuo input in un formato che possa essere elaborato dal modello. Carica un tokenizer con [`AutoTokenizer.from_pretrained`]: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("FacebookAI/xlm-roberta-base") ``` Poi tokenizza il tuo input come mostrato in seguito: ```py >>> sequenza = "In un buco nel terreno viveva uno Hobbit." >>> print(tokenizer(sequenza)) {'input_ids': [0, 360, 51, 373, 587, 1718, 54644, 22597, 330, 3269, 2291, 22155, 18, 5, 2], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} ``` ## AutoFeatureExtractor Per compiti inerenti a audio e video, un feature extractor processa il segnale audio o l'immagine nel formato di input corretto. Carica un feature extractor con [`AutoFeatureExtractor.from_pretrained`]: ```py >>> from transformers import AutoFeatureExtractor >>> feature_extractor = AutoFeatureExtractor.from_pretrained( ... "ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition" ... ) ``` ## AutoProcessor Compiti multimodali richiedono un processore che combini i due tipi di strumenti di elaborazione. Per esempio, il modello [LayoutLMV2](model_doc/layoutlmv2) richiede un feature extractor per gestire le immagine e un tokenizer per gestire il testo; un processore li combina entrambi. 
Carica un processore con [`AutoProcessor.from_pretrained`]: ```py >>> from transformers import AutoProcessor >>> processor = AutoProcessor.from_pretrained("microsoft/layoutlmv2-base-uncased") ``` ## AutoModel <frameworkcontent> <pt> Infine, le classi `AutoModelFor` ti permettono di caricare un modello pre-allenato per un determinato compito (guarda [qui](model_doc/auto) per una lista completa di compiti presenti). Per esempio, carica un modello per la classificazione di sequenze con [`AutoModelForSequenceClassification.from_pretrained`]: ```py >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased") ``` Semplicemente utilizza lo stesso checkpoint per caricare un'architettura per un task differente: ```py >>> from transformers import AutoModelForTokenClassification >>> model = AutoModelForTokenClassification.from_pretrained("distilbert/distilbert-base-uncased") ``` Generalmente, raccomandiamo di utilizzare la classe `AutoTokenizer` e la classe `AutoModelFor` per caricare istanze pre-allenate dei modelli. Questo ti assicurerร  di aver caricato la corretta architettura ogni volta. Nel prossimo [tutorial](preprocessing), imparerai come utilizzare il tokenizer, il feature extractor e il processore per elaborare un dataset per il fine-tuning. </pt> <tf> Infine, le classi `TFAutoModelFor` ti permettono di caricare un modello pre-allenato per un determinato compito (guarda [qui](model_doc/auto) per una lista completa di compiti presenti). Per esempio, carica un modello per la classificazione di sequenze con [`TFAutoModelForSequenceClassification.from_pretrained`]: ```py >>> from transformers import TFAutoModelForSequenceClassification >>> model = TFAutoModelForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased") ``` Semplicemente utilizza lo stesso checkpoint per caricare un'architettura per un task differente: ```py >>> from transformers import TFAutoModelForTokenClassification >>> model = TFAutoModelForTokenClassification.from_pretrained("distilbert/distilbert-base-uncased") ``` Generalmente, raccomandiamo di utilizzare la classe `AutoTokenizer` e la classe `TFAutoModelFor` per caricare istanze pre-allenate dei modelli. Questo ti assicurerร  di aver caricato la corretta architettura ogni volta. Nel prossimo [tutorial](preprocessing), imparerai come utilizzare il tokenizer, il feature extractor e il processore per elaborare un dataset per il fine-tuning. </tf> </frameworkcontent>
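A titolo puramente illustrativo (esempio non presente nel tutorial originale), ecco come tokenizer e modello caricati con le AutoClass si combinano per una singola predizione in PyTorch; nota che la testa di classificazione di questo checkpoint non è fine-tuned, quindi i logits ottenuti non sono significativi:

```py
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased")
>>> model = AutoModelForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased")

>>> inputs = tokenizer("In un buco nel terreno viveva uno Hobbit.", return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> print(outputs.logits.shape)  # torch.Size([1, 2]): un punteggio per ciascuna delle due etichette di default
```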
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/it/multilingual.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Modelli multilingue per l'inferenza [[open-in-colab]] Ci sono diversi modelli multilingue in ๐Ÿค— Transformers, e il loro utilizzo per l'inferenza differisce da quello dei modelli monolingua. Non *tutti* gli utilizzi dei modelli multilingue sono perรฒ diversi. Alcuni modelli, come [google-bert/bert-base-multilingual-uncased](https://huggingface.co/google-bert/bert-base-multilingual-uncased), possono essere usati come un modello monolingua. Questa guida ti mostrerร  come utilizzare modelli multilingue che utilizzano un modo diverso per fare l'inferenza. ## XLM XLM ha dieci diversi checkpoint, di cui solo uno รจ monolingua. I nove checkpoint rimanenti possono essere suddivisi in due categorie: i checkpoint che utilizzano i language embeddings e quelli che non li utilizzano. ### XLM con language embeddings I seguenti modelli XLM utilizzano gli embeddings linguistici per specificare la lingua utilizzata per l'inferenza: - `FacebookAI/xlm-mlm-ende-1024` (Modellazione mascherata del linguaggio (Masked language modeling, in inglese), Inglese-Tedesco) - `FacebookAI/xlm-mlm-enfr-1024` (Modellazione mascherata del linguaggio, Inglese-Francese) - `FacebookAI/xlm-mlm-enro-1024` (Modellazione mascherata del linguaggio, Inglese-Rumeno) - `FacebookAI/xlm-mlm-xnli15-1024` (Modellazione mascherata del linguaggio, lingue XNLI) - `FacebookAI/xlm-mlm-tlm-xnli15-1024` (Modellazione mascherata del linguaggio + traduzione, lingue XNLI) - `FacebookAI/xlm-clm-enfr-1024` (Modellazione causale del linguaggio, Inglese-Francese) - `FacebookAI/xlm-clm-ende-1024` (Modellazione causale del linguaggio, Inglese-Tedesco) Gli embeddings linguistici sono rappresentati come un tensore delle stesse dimensioni dell' `input_ids` passato al modello. I valori in questi tensori dipendono dal linguaggio usato e sono identificati dagli attributi `lang2id` e `id2lang` del tokenizer. In questo esempio, carica il checkpoint `FacebookAI/xlm-clm-enfr-1024` (Modellazione causale del linguaggio, Inglese-Francese): ```py >>> import torch >>> from transformers import XLMTokenizer, XLMWithLMHeadModel >>> tokenizer = XLMTokenizer.from_pretrained("FacebookAI/xlm-clm-enfr-1024") >>> model = XLMWithLMHeadModel.from_pretrained("FacebookAI/xlm-clm-enfr-1024") ``` L'attributo `lang2id` del tokenizer mostra il linguaggio del modello e il suo ids: ```py >>> print(tokenizer.lang2id) {'en': 0, 'fr': 1} ``` Poi, crea un esempio di input: ```py >>> input_ids = torch.tensor([tokenizer.encode("Wikipedia was used to")]) # batch size of 1 ``` Imposta l'id del linguaggio a `"en"` e usalo per definire il language embedding. Il language embedding รจ un tensore riempito con `0` perchรฉ questo รจ il language id per l'inglese. Questo tensore dovrebbe avere la stessa dimensione di `input_ids`. 
```py
>>> language_id = tokenizer.lang2id["en"]  # 0
>>> langs = torch.tensor([language_id] * input_ids.shape[1])  # torch.tensor([0, 0, 0, ..., 0])

>>> # We reshape it to be of size (batch_size, sequence_length)
>>> langs = langs.view(1, -1)  # is now of shape [1, sequence_length] (we have a batch size of 1)
```

Adesso puoi inserire `input_ids` e language embedding nel modello:

```py
>>> outputs = model(input_ids, langs=langs)
```

Lo script [run_generation.py](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-generation/run_generation.py) può generare testo tramite i language embeddings usando i checkpoint `xlm-clm`.

### XLM senza language embeddings

I seguenti modelli XLM non richiedono l'utilizzo dei language embeddings per fare inferenza:

- `FacebookAI/xlm-mlm-17-1280` (Modellazione mascherata del linguaggio, 17 lingue)
- `FacebookAI/xlm-mlm-100-1280` (Modellazione mascherata del linguaggio, 100 lingue)

Questi modelli sono utilizzati per rappresentazioni generiche di frasi, a differenza dei precedenti checkpoint XLM.

## BERT

I seguenti modelli BERT possono essere usati per compiti multilingue:

- `google-bert/bert-base-multilingual-uncased` (Modellazione mascherata del linguaggio + Previsione della prossima frase, 102 lingue)
- `google-bert/bert-base-multilingual-cased` (Modellazione mascherata del linguaggio + Previsione della prossima frase, 104 lingue)

Questi modelli non richiedono language embeddings per fare inferenza. Riescono ad identificare il linguaggio dal contesto e inferire di conseguenza.

## XLM-RoBERTa

I seguenti modelli XLM-RoBERTa possono essere usati per compiti multilingue:

- `FacebookAI/xlm-roberta-base` (Modellazione mascherata del linguaggio, 100 lingue)
- `FacebookAI/xlm-roberta-large` (Modellazione mascherata del linguaggio, 100 lingue)

XLM-RoBERTa è stato addestrato su 2.5TB di dati CommonCrawl appena creati e puliti in 100 lingue. Offre notevoli vantaggi rispetto ai modelli multilingue rilasciati in precedenza, come mBERT o XLM, in compiti come la classificazione, l'etichettatura delle sequenze e la risposta alle domande.

## M2M100

I seguenti modelli M2M100 possono essere usati per compiti multilingue:

- `facebook/m2m100_418M` (Traduzione)
- `facebook/m2m100_1.2B` (Traduzione)

In questo esempio, carica il checkpoint `facebook/m2m100_418M` per tradurre dal cinese all'inglese. Puoi impostare la lingua di partenza nel tokenizer:

```py
>>> from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

>>> en_text = "Do not meddle in the affairs of wizards, for they are subtle and quick to anger."
>>> chinese_text = "不要插手巫師的事務, 因為他們是微妙的, 很快就會發怒."

>>> tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="zh")
>>> model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
```

Applica il tokenizer al testo:

```py
>>> encoded_zh = tokenizer(chinese_text, return_tensors="pt")
```

M2M100 forza l'id della lingua obiettivo come primo token generato per tradurre nella lingua obiettivo. Imposta il parametro `forced_bos_token_id` a `en` nel metodo `generate` per tradurre in inglese:

```py
>>> generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en"))
>>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
'Do not interfere with the matters of the witches, because they are delicate and will soon be angry.'
```

## MBart

I seguenti modelli MBart possono essere usati per compiti multilingue:

- `facebook/mbart-large-50-one-to-many-mmt` (Traduzione automatica multilingue uno-a-molti, 50 lingue)
- `facebook/mbart-large-50-many-to-many-mmt` (Traduzione automatica multilingue molti-a-molti, 50 lingue)
- `facebook/mbart-large-50-many-to-one-mmt` (Traduzione automatica multilingue molti-a-uno, 50 lingue)
- `facebook/mbart-large-50` (Traduzione multilingue, 50 lingue)
- `facebook/mbart-large-cc25`

In questo esempio, carica il checkpoint `facebook/mbart-large-50-many-to-many-mmt` per tradurre dal finlandese all'inglese. Puoi impostare la lingua di partenza nel tokenizer:

```py
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

>>> en_text = "Do not meddle in the affairs of wizards, for they are subtle and quick to anger."
>>> fi_text = "Älä sekaannu velhojen asioihin, sillä ne ovat hienovaraisia ja nopeasti vihaisia."

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-50-many-to-many-mmt", src_lang="fi_FI")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
```

Applica il tokenizer sul testo finlandese:

```py
>>> encoded_fi = tokenizer(fi_text, return_tensors="pt")
```

MBart forza l'id della lingua obiettivo come primo token generato per tradurre nella lingua obiettivo. Imposta il parametro `forced_bos_token_id` a `en` nel metodo `generate` per tradurre in inglese:

```py
>>> generated_tokens = model.generate(**encoded_fi, forced_bos_token_id=tokenizer.lang_code_to_id("en_XX"))
>>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
"Don't interfere with the wizard's affairs, because they are subtle, will soon get angry."
```

Se stai usando il checkpoint `facebook/mbart-large-50-many-to-one-mmt`, non hai bisogno di forzare l'id della lingua obiettivo come primo token generato; per il resto l'uso è lo stesso.
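Per esempio, uno schizzo minimale (non presente nella guida originale) con il checkpoint molti-a-uno, dove la lingua obiettivo è sempre l'inglese e quindi `forced_bos_token_id` non serve:

```py
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-50-many-to-one-mmt", src_lang="fi_FI")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-50-many-to-one-mmt")

>>> encoded_fi = tokenizer("Älä sekaannu velhojen asioihin, sillä ne ovat hienovaraisia ja nopeasti vihaisia.", return_tensors="pt")
>>> generated_tokens = model.generate(**encoded_fi)  # nessun forced_bos_token_id necessario
>>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
```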
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/it/model_sharing.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Condividi un modello Gli ultimi due tutorial ti hanno mostrato come puoi fare fine-tuning di un modello con PyTorch, Keras e ๐Ÿค— Accelerate per configurazioni distribuite. Il prossimo passo รจ quello di condividere il tuo modello con la community! In Hugging Face, crediamo nella condivisione della conoscenza e delle risorse in modo da democratizzare l'intelligenza artificiale per chiunque. Ti incoraggiamo a considerare di condividere il tuo modello con la community per aiutare altre persone a risparmiare tempo e risorse. In questo tutorial, imparerai due metodi per la condivisione di un modello trained o fine-tuned nel [Model Hub](https://huggingface.co/models): - Condividi in modo programmatico i tuoi file nell'Hub. - Trascina i tuoi file nell'Hub mediante interfaccia grafica. <iframe width="560" height="315" src="https://www.youtube.com/embed/XvSGPZFEjDY" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> <Tip> Per condividere un modello con la community, hai bisogno di un account su [huggingface.co](https://huggingface.co/join). Puoi anche unirti ad un'organizzazione esistente o crearne una nuova. </Tip> ## Caratteristiche dei repository Ogni repository nel Model Hub si comporta come un tipico repository di GitHub. I nostri repository offrono il versionamento, la cronologia dei commit, e la possibilitร  di visualizzare le differenze. Il versionamento all'interno del Model Hub รจ basato su git e [git-lfs](https://git-lfs.github.com/). In altre parole, puoi trattare un modello come un unico repository, consentendo un maggiore controllo degli accessi e maggiore scalabilitร . Il controllo delle versioni consente *revisions*, un metodo per appuntare una versione specifica di un modello con un hash di commit, un tag o un branch. Come risultato, puoi caricare una specifica versione di un modello con il parametro `revision`: ```py >>> model = AutoModel.from_pretrained( ... "julien-c/EsperBERTo-small", revision="v2.0.1" # nome di un tag, di un branch, o commit hash ... ) ``` Anche i file possono essere modificati facilmente in un repository ed รจ possibile visualizzare la cronologia dei commit e le differenze: ![vis_diff](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/vis_diff.png) ## Configurazione Prima di condividere un modello nell'Hub, hai bisogno delle tue credenziali di Hugging Face. Se hai accesso ad un terminale, esegui il seguente comando nell'ambiente virtuale in cui รจ installata la libreria ๐Ÿค— Transformers. 
Questo memorizzerร  il tuo token di accesso nella cartella cache di Hugging Face (di default `~/.cache/`): ```bash huggingface-cli login ``` Se stai usando un notebook come Jupyter o Colaboratory, assicurati di avere la libreria [`huggingface_hub`](https://huggingface.co/docs/hub/adding-a-library) installata. Questa libreria ti permette di interagire in maniera programmatica con l'Hub. ```bash pip install huggingface_hub ``` Utilizza `notebook_login` per accedere all'Hub, e segui il link [qui](https://huggingface.co/settings/token) per generare un token con cui effettuare il login: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## Converti un modello per tutti i framework Per assicurarti che il tuo modello possa essere utilizzato da persone che lavorano con un framework differente, ti raccomandiamo di convertire e caricare il tuo modello sia con i checkpoint di PyTorch che con quelli di TensorFlow. Anche se รจ possibile caricare il modello da un framework diverso, se si salta questo passaggio, il caricamento sarร  piรน lento perchรฉ ๐Ÿค— Transformers ha bisogno di convertire i checkpoint al momento. Convertire un checkpoint per un altro framework รจ semplice. Assicurati di avere PyTorch e TensorFlow installati (vedi [qui](installation) per le istruzioni d'installazione), e poi trova il modello specifico per il tuo compito nell'altro framework. <frameworkcontent> <pt> Specifica `from_tf=True` per convertire un checkpoint da TensorFlow a PyTorch: ```py >>> pt_model = DistilBertForSequenceClassification.from_pretrained( ... "path/verso/il-nome-magnifico-che-hai-scelto", from_tf=True ... ) >>> pt_model.save_pretrained("path/verso/il-nome-magnifico-che-hai-scelto") ``` </pt> <tf> Specifica `from_pt=True` per convertire un checkpoint da PyTorch a TensorFlow: ```py >>> tf_model = TFDistilBertForSequenceClassification.from_pretrained( ... "path/verso/il-nome-magnifico-che-hai-scelto", from_pt=True ... ) ``` Poi puoi salvare il tuo nuovo modello in TensorFlow con il suo nuovo checkpoint: ```py >>> tf_model.save_pretrained("path/verso/il-nome-magnifico-che-hai-scelto") ``` </tf> <jax> Se un modello รจ disponibile in Flax, puoi anche convertire un checkpoint da PyTorch a Flax: ```py >>> flax_model = FlaxDistilBertForSequenceClassification.from_pretrained( ... "path/verso/il-nome-magnifico-che-hai-scelto", from_pt=True ... ) ``` </jax> </frameworkcontent> ## Condividi un modello durante il training <frameworkcontent> <pt> <Youtube id="Z1-XMy-GNLQ"/> Condividere un modello nell'Hub รจ tanto semplice quanto aggiungere un parametro extra o un callback. Ricorda dal [tutorial sul fine-tuning](training), la classe [`TrainingArguments`] รจ dove specifichi gli iperparametri e le opzioni addizionali per l'allenamento. Una di queste opzioni di training include l'abilitร  di condividere direttamente un modello nell'Hub. Imposta `push_to_hub=True` in [`TrainingArguments`]: ```py >>> training_args = TrainingArguments(output_dir="il-mio-bellissimo-modello", push_to_hub=True) ``` Passa gli argomenti per il training come di consueto al [`Trainer`]: ```py >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=small_train_dataset, ... eval_dataset=small_eval_dataset, ... compute_metrics=compute_metrics, ... ) ``` Dopo aver effettuato il fine-tuning del tuo modello, chiama [`~transformers.Trainer.push_to_hub`] sul [`Trainer`] per condividere il modello allenato nell'Hub. 
๐Ÿค— Transformers aggiungerร  in modo automatico persino gli iperparametri, i risultati del training e le versioni del framework alla scheda del tuo modello (model card, in inglese)! ```py >>> trainer.push_to_hub() ``` </pt> <tf> Condividi un modello nell'Hub con [`PushToHubCallback`]. Nella funzione [`PushToHubCallback`], aggiungi: - Una directory di output per il tuo modello. - Un tokenizer. - L'`hub_model_id`, che รจ il tuo username sull'Hub e il nome del modello. ```py >>> from transformers import PushToHubCallback >>> push_to_hub_callback = PushToHubCallback( ... output_dir="./il_path_dove_salvare_il_tuo_modello", ... tokenizer=tokenizer, ... hub_model_id="il-tuo-username/il-mio-bellissimo-modello", ... ) ``` Aggiungi il callback a [`fit`](https://keras.io/api/models/model_training_apis/), e ๐Ÿค— Transformers caricherร  il modello allenato nell'Hub: ```py >>> model.fit(tf_train_dataset, validation_data=tf_validation_dataset, epochs=3, callbacks=push_to_hub_callback) ``` </tf> </frameworkcontent> ## Utilizzare la funzione `push_to_hub` Puoi anche chiamare `push_to_hub` direttamente sul tuo modello per caricarlo nell'Hub. Specifica il nome del tuo modello in `push_to_hub`: ```py >>> pt_model.push_to_hub("il-mio-bellissimo-modello") ``` Questo crea un repository sotto il proprio username con il nome del modello `il-mio-bellissimo-modello`. Ora chiunque puรฒ caricare il tuo modello con la funzione `from_pretrained`: ```py >>> from transformers import AutoModel >>> model = AutoModel.from_pretrained("il-tuo-username/il-mio-bellissimo-modello") ``` Se fai parte di un'organizzazione e vuoi invece condividere un modello sotto il nome dell'organizzazione, aggiungi il parametro `organization`: ```py >>> pt_model.push_to_hub("il-mio-bellissimo-modello", organization="la-mia-fantastica-org") ``` La funzione `push_to_hub` puรฒ essere anche utilizzata per aggiungere altri file al repository del modello. Per esempio, aggiungi un tokenizer ad un repository di un modello: ```py >>> tokenizer.push_to_hub("il-mio-bellissimo-modello") ``` O magari potresti voler aggiungere la versione di TensorFlow del tuo modello PyTorch a cui hai fatto fine-tuning: ```py >>> tf_model.push_to_hub("il-mio-bellissimo-modello") ``` Ora quando navighi nel tuo profilo Hugging Face, dovresti vedere il tuo repository del modello appena creato. Premendo sulla scheda **Files** vengono visualizzati tutti i file caricati nel repository. Per maggiori dettagli su come creare e caricare file ad un repository, fai riferimento alla documentazione [qui](https://huggingface.co/docs/hub/how-to-upstream). ## Carica un modello utilizzando l'interfaccia web Chi preferisce un approccio senza codice puรฒ caricare un modello tramite l'interfaccia web dell'hub. Visita [huggingface.co/new](https://huggingface.co/new) per creare un nuovo repository: ![new_model_repo](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/new_model_repo.png) Da qui, aggiungi alcune informazioni sul tuo modello: - Seleziona il/la **owner** del repository. Puoi essere te o qualunque organizzazione di cui fai parte. - Scegli un nome per il tuo modello, il quale sarร  anche il nome del repository. - Scegli se il tuo modello รจ pubblico o privato. - Specifica la licenza utilizzata per il tuo modello. Ora premi sulla scheda **Files** e premi sul pulsante **Add file** per caricare un nuovo file al tuo repository. Trascina poi un file per caricarlo e aggiungere un messaggio di commit. 
![upload_file](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/upload_file.png) ## Aggiungi una scheda del modello Per assicurarti che chiunque possa comprendere le abilitร , limitazioni, i potenziali bias e le considerazioni etiche del tuo modello, per favore aggiungi una scheda del modello (model card, in inglese) al tuo repository. La scheda del modello รจ definita nel file `README.md`. Puoi aggiungere una scheda del modello: * Creando manualmente e caricando un file `README.md`. * Premendo sul pulsante **Edit model card** nel repository del tuo modello. Dai un'occhiata alla [scheda del modello](https://huggingface.co/distilbert/distilbert-base-uncased) di DistilBert per avere un buon esempio del tipo di informazioni che una scheda di un modello deve includere. Per maggiori dettagli legati ad altre opzioni che puoi controllare nel file `README.md`, come l'impatto ambientale o widget di esempio, fai riferimento alla documentazione [qui](https://huggingface.co/docs/hub/models-cards).
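In alternativa (esempio indicativo, non parte della guida originale), una scheda del modello scritta in locale può essere caricata in modo programmatico con la libreria `huggingface_hub`; il `repo_id` qui sotto riprende il nome di esempio usato in precedenza in questa guida:

```py
>>> from huggingface_hub import HfApi

>>> api = HfApi()
>>> api.upload_file(
...     path_or_fileobj="README.md",  # la scheda del modello scritta in locale
...     path_in_repo="README.md",
...     repo_id="il-tuo-username/il-mio-bellissimo-modello",
... )
```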
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/it/custom_models.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Condividere modelli personalizzati La libreria ๐Ÿค— Transformers รจ studiata per essere facilmente estendibile. Il codice di ogni modello รจ interamente situato in una sottocartella del repository senza alcuna astrazione, perciรฒ puoi facilmente copiare il file di un modello e modificarlo in base ai tuoi bisogni. Se stai scrivendo un nuovo modello, potrebbe essere piรน semplice iniziare da zero. In questo tutorial, ti mostreremo come scrivere un modello personalizzato e la sua configurazione in modo che possa essere utilizzato allโ€™interno di Transformers, e come condividerlo con la community (assieme al relativo codice) cosรฌ che tutte le persone possano usarlo, anche se non presente nella libreria ๐Ÿค— Transformers. Illustriamo tutto questo su un modello ResNet, avvolgendo la classe ResNet della [libreria timm](https://github.com/rwightman/pytorch-image-models) in un [`PreTrainedModel`]. ## Scrivere una configurazione personalizzata Prima di iniziare a lavorare al modello, scriviamone la configurazione. La configurazione di un modello รจ un oggetto che contiene tutte le informazioni necessarie per la build del modello. Come vedremo nella prossima sezione, il modello puรฒ soltanto essere inizializzato tramite `config`, per cui dovremo rendere tale oggetto piรน completo possibile. Nel nostro esempio, prenderemo un paio di argomenti della classe ResNet che potremmo voler modificare. Configurazioni differenti ci daranno quindi i differenti possibili tipi di ResNet. Salveremo poi questi argomenti, dopo averne controllato la validitร . 
```python
from transformers import PretrainedConfig
from typing import List


class ResnetConfig(PretrainedConfig):
    model_type = "resnet"

    def __init__(
        self,
        block_type="bottleneck",
        layers: List[int] = [3, 4, 6, 3],
        num_classes: int = 1000,
        input_channels: int = 3,
        cardinality: int = 1,
        base_width: int = 64,
        stem_width: int = 64,
        stem_type: str = "",
        avg_down: bool = False,
        **kwargs,
    ):
        if block_type not in ["basic", "bottleneck"]:
            raise ValueError(f"`block_type` must be 'basic' or 'bottleneck', got {block_type}.")
        if stem_type not in ["", "deep", "deep-tiered"]:
            raise ValueError(f"`stem_type` must be '', 'deep' or 'deep-tiered', got {stem_type}.")

        self.block_type = block_type
        self.layers = layers
        self.num_classes = num_classes
        self.input_channels = input_channels
        self.cardinality = cardinality
        self.base_width = base_width
        self.stem_width = stem_width
        self.stem_type = stem_type
        self.avg_down = avg_down
        super().__init__(**kwargs)
```

Le tre cose più importanti da ricordare quando scrivi le tue configurazioni sono le seguenti:

- Devi ereditare da `PretrainedConfig`,
- Il metodo `__init__` del tuo `PretrainedConfig` deve accettare i kwargs,
- I `kwargs` devono essere passati alla superclass `__init__`

L'eredità è importante per assicurarsi di ottenere tutte le funzionalità della libreria 🤗 Transformers, mentre gli altri due vincoli derivano dal fatto che un `PretrainedConfig` ha più campi di quelli che stai settando. Quando ricarichi una config con il metodo `from_pretrained`, questi campi devono essere accettati dalla tua config e poi inviati alla superclasse.

Definire un `model_type` per la tua configurazione (qua `model_type = "resnet"`) non è obbligatorio, a meno che tu non voglia registrare il modello con le classi Auto (vedi l'ultima sezione).

Una volta completato, puoi facilmente creare e salvare la tua configurazione come faresti con ogni altra configurazione di modelli della libreria. Ecco come possiamo creare la config di un resnet50d e salvarla:

```py
resnet50d_config = ResnetConfig(block_type="bottleneck", stem_width=32, stem_type="deep", avg_down=True)
resnet50d_config.save_pretrained("custom-resnet")
```

Questo salverà un file chiamato `config.json` all'interno della cartella `custom-resnet`. Potrai poi ricaricare la tua config con il metodo `from_pretrained`.

```py
resnet50d_config = ResnetConfig.from_pretrained("custom-resnet")
```

Puoi anche usare qualunque altro metodo della classe [`PretrainedConfig`], come [`~PretrainedConfig.push_to_hub`] per caricare direttamente la tua configurazione nell'hub.

## Scrivere un modello personalizzato

Ora che abbiamo la nostra configurazione ResNet, possiamo continuare a scrivere il modello. In realtà, ne scriveremo due: uno che estrae le features nascoste da una batch di immagini (come [`BertModel`]) e uno che è utilizzabile per la classificazione di immagini (come [`BertForSequenceClassification`]).

Come abbiamo menzionato in precedenza, scriveremo soltanto un wrapper del modello, per mantenerlo semplice ai fini di questo esempio. L'unica cosa che dobbiamo fare prima di scrivere questa classe è una mappatura fra i tipi di blocco e le vere classi dei blocchi. Successivamente il modello è definito tramite la configurazione, passando tutto quanto alla classe `ResNet`.
```py
from transformers import PreTrainedModel
from timm.models.resnet import BasicBlock, Bottleneck, ResNet
from .configuration_resnet import ResnetConfig


BLOCK_MAPPING = {"basic": BasicBlock, "bottleneck": Bottleneck}


class ResnetModel(PreTrainedModel):
    config_class = ResnetConfig

    def __init__(self, config):
        super().__init__(config)
        block_layer = BLOCK_MAPPING[config.block_type]
        self.model = ResNet(
            block_layer,
            config.layers,
            num_classes=config.num_classes,
            in_chans=config.input_channels,
            cardinality=config.cardinality,
            base_width=config.base_width,
            stem_width=config.stem_width,
            stem_type=config.stem_type,
            avg_down=config.avg_down,
        )

    def forward(self, tensor):
        return self.model.forward_features(tensor)
```

Per il modello che classificherà le immagini, cambiamo soltanto il metodo forward:

```py
import torch


class ResnetModelForImageClassification(PreTrainedModel):
    config_class = ResnetConfig

    def __init__(self, config):
        super().__init__(config)
        block_layer = BLOCK_MAPPING[config.block_type]
        self.model = ResNet(
            block_layer,
            config.layers,
            num_classes=config.num_classes,
            in_chans=config.input_channels,
            cardinality=config.cardinality,
            base_width=config.base_width,
            stem_width=config.stem_width,
            stem_type=config.stem_type,
            avg_down=config.avg_down,
        )

    def forward(self, tensor, labels=None):
        logits = self.model(tensor)
        if labels is not None:
            loss = torch.nn.functional.cross_entropy(logits, labels)
            return {"loss": loss, "logits": logits}
        return {"logits": logits}
```

Nota come, in entrambi i casi, ereditiamo da `PreTrainedModel` e chiamiamo l'inizializzazione della superclasse con la `config` (un po' come quando scrivi un normale `torch.nn.Module`). La riga che imposta la `config_class` non è obbligatoria, a meno che tu non voglia registrare il modello con le classi Auto (vedi l'ultima sezione).

<Tip>

Se il tuo modello è molto simile a un modello all'interno della libreria, puoi ri-usare la stessa configurazione di quel modello.

</Tip>

Puoi fare in modo che il tuo modello restituisca in output qualunque cosa tu voglia, ma far restituire un dizionario come abbiamo fatto per `ResnetModelForImageClassification`, con la funzione di perdita inclusa quando vengono passate le labels, renderà il tuo modello direttamente utilizzabile all'interno della classe [`Trainer`]. Utilizzare altri formati di output va bene se hai in progetto di utilizzare un tuo loop di allenamento, o se utilizzerai un'altra libreria per l'addestramento.

Ora che abbiamo la classe del nostro modello, creiamone uno:

```py
resnet50d = ResnetModelForImageClassification(resnet50d_config)
```

Ribadiamo, puoi usare qualunque metodo dei [`PreTrainedModel`], come [`~PreTrainedModel.save_pretrained`] o [`~PreTrainedModel.push_to_hub`]. Utilizzeremo quest'ultimo nella prossima sezione, e vedremo come caricare i pesi del modello assieme al codice del modello stesso. Ma prima, carichiamo alcuni pesi pre-allenati all'interno del nostro modello.

Nel tuo caso specifico, probabilmente allenerai il tuo modello sui tuoi dati. Per velocizzare questo tutorial, utilizzeremo la versione pre-allenata del resnet50d. Dato che il nostro modello è soltanto un wrapper attorno a quel modello, sarà facile trasferirne i pesi:

```py
import timm

pretrained_model = timm.create_model("resnet50d", pretrained=True)
resnet50d.model.load_state_dict(pretrained_model.state_dict())
```

Vediamo adesso come assicurarci che quando facciamo [`~PreTrainedModel.save_pretrained`] o [`~PreTrainedModel.push_to_hub`], il codice del modello venga salvato.
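Prima di proseguire, a scopo puramente illustrativo (verifica non presente nella guida originale), possiamo controllare che il wrapper e i pesi trasferiti funzionino eseguendo un forward su un batch fittizio:

```py
import torch

# Batch fittizio con la forma tipica di un'immagine: (batch, canali, altezza, larghezza)
dummy_batch = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    outputs = resnet50d(dummy_batch)

print(outputs["logits"].shape)  # torch.Size([1, 1000]), cioè un logit per ciascuna delle num_classes
```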
## Inviare il codice all'Hub <Tip warning={true}> Questa API รจ sperimentale e potrebbe avere alcuni cambiamenti nei prossimi rilasci. </Tip> Innanzitutto, assicurati che il tuo modello sia completamente definito in un file `.py`. Puรฒ sfruttare import relativi ad altri file, purchรจ questi siano nella stessa directory (non supportiamo ancora sotto-moduli per questa funzionalitร ). Per questo esempio, definiremo un file `modeling_resnet.py` e un file `configuration_resnet.py` in una cartella dell'attuale working directory chiamata `resnet_model`. Il file configuration contiene il codice per `ResnetConfig` e il file modeling contiene il codice di `ResnetModel` e `ResnetModelForImageClassification`. ``` . โ””โ”€โ”€ resnet_model โ”œโ”€โ”€ __init__.py โ”œโ”€โ”€ configuration_resnet.py โ””โ”€โ”€ modeling_resnet.py ``` Il file `__init__.py` puรฒ essere vuoto, serve solo perchรจ Python capisca che `resnet_model` puรฒ essere utilizzato come un modulo. <Tip warning={true}> Se stai copiando i file relativi alla modellazione della libreria, dovrai sostituire tutti gli import relativi in cima al file con import del pacchetto `transformers`. </Tip> Nota che puoi ri-utilizzare (o usare come sottoclassi) un modello/configurazione esistente. Per condividere il tuo modello con la community, segui questi passi: prima importa il modello ResNet e la sua configurazione dai nuovi file creati: ```py from resnet_model.configuration_resnet import ResnetConfig from resnet_model.modeling_resnet import ResnetModel, ResnetModelForImageClassification ``` Dopodichรจ dovrai dire alla libreria che vuoi copiare i file con il codice di quegli oggetti quando utilizzi il metodo `save_pretrained` e registrarli in modo corretto con una Auto classe (specialmente per i modelli). Utilizza semplicemente: ```py ResnetConfig.register_for_auto_class() ResnetModel.register_for_auto_class("AutoModel") ResnetModelForImageClassification.register_for_auto_class("AutoModelForImageClassification") ``` Nota che non c'รจ bisogno di specificare una Auto classe per la configurazione (c'รจ solo una Auto classe per le configurazioni, [`AutoConfig`], ma รจ diversa per i modelli). Il tuo modello personalizato potrebbe essere utilizzato per diverse tasks, per cui devi specificare quale delle classi Auto รจ quella corretta per il tuo modello. Successivamente, creiamo i modelli e la config come abbiamo fatto in precedenza: ```py resnet50d_config = ResnetConfig(block_type="bottleneck", stem_width=32, stem_type="deep", avg_down=True) resnet50d = ResnetModelForImageClassification(resnet50d_config) pretrained_model = timm.create_model("resnet50d", pretrained=True) resnet50d.model.load_state_dict(pretrained_model.state_dict()) ``` Adesso, per inviare il modello all'Hub, assicurati di aver effettuato l'accesso. Lancia dal tuo terminale: ```bash huggingface-cli login ``` O da un notebook: ```py from huggingface_hub import notebook_login notebook_login() ``` Potrai poi inviare il tutto sul tuo profilo (o di un'organizzazione di cui fai parte) in questo modo: ```py resnet50d.push_to_hub("custom-resnet50d") ``` Oltre ai pesi del modello e alla configurazione in formato json, questo ha anche copiato i file `.py` modeling e configuration all'interno della cartella `custom-resnet50d` e ha caricato i risultati sull'Hub. Puoi controllare i risultati in questa [model repo](https://huggingface.co/sgugger/custom-resnet50d). Puoi controllare il tutorial di condivisione [tutorial di condivisione](model_sharing) per piรน informazioni sul metodo con cui inviare all'Hub. 
## Usare un modello con codice personalizzato

Puoi usare ogni configurazione, modello o tokenizer con file di codice personalizzati nella sua repository con le classi Auto e il metodo `from_pretrained`. Tutti i file e il codice caricati sull'Hub sono scansionati alla ricerca di malware (fai riferimento alla documentazione [Hub security](https://huggingface.co/docs/hub/security#malware-scanning) per più informazioni), ma dovresti comunque assicurarti dell'affidabilità del codice e dell'autore per evitare di eseguire codice dannoso sulla tua macchina. Imposta `trust_remote_code=True` per usare un modello con codice personalizzato:

```py
from transformers import AutoModelForImageClassification

model = AutoModelForImageClassification.from_pretrained("sgugger/custom-resnet50d", trust_remote_code=True)
```

Inoltre, raccomandiamo fortemente di passare un hash del commit come `revision` per assicurarti che le autrici o gli autori del modello non abbiano modificato il codice con alcune nuove righe dannose (a meno che non ti fidi completamente della fonte):

```py
commit_hash = "ed94a7c6247d8aedce4647f00f20de6875b5b292"
model = AutoModelForImageClassification.from_pretrained(
    "sgugger/custom-resnet50d", trust_remote_code=True, revision=commit_hash
)
```

Nota che quando guardi la storia dei commit della repo del modello sull'Hub, c'è un bottone per copiare facilmente il commit hash di ciascun commit.

## Registrare un modello con codice personalizzato nelle classi Auto

Se stai scrivendo una libreria che estende 🤗 Transformers, potresti voler estendere le classi Auto per includere il tuo modello. Questo è diverso dall'inviare codice nell'Hub: gli utenti dovranno importare la tua libreria per ottenere il modello personalizzato (anziché scaricare automaticamente il modello dall'Hub).

Finché il tuo file di configurazione ha un attributo `model_type` diverso dai model type esistenti, e finché le tue classi modello hanno i corretti attributi `config_class`, potrai semplicemente aggiungerle alle classi Auto come segue:

```py
from transformers import AutoConfig, AutoModel, AutoModelForImageClassification

AutoConfig.register("resnet", ResnetConfig)
AutoModel.register(ResnetConfig, ResnetModel)
AutoModelForImageClassification.register(ResnetConfig, ResnetModelForImageClassification)
```

Nota che il primo argomento utilizzato quando registri la configurazione di un modello personalizzato con [`AutoConfig`] deve corrispondere al `model_type` della tua configurazione personalizzata, e il primo argomento utilizzato quando registri i tuoi modelli personalizzati in una qualunque classe Auto del modello deve corrispondere alla `config_class` di quei modelli.
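Una volta effettuata la registrazione, ecco uno schizzo indicativo (non presente nella guida originale) di come le classi Auto risolvono la configurazione personalizzata definita in precedenza:

```py
from transformers import AutoModel, AutoModelForImageClassification

config = ResnetConfig(block_type="bottleneck", stem_width=32, stem_type="deep", avg_down=True)

# Grazie alla registrazione, le classi Auto sanno quale classe concreta istanziare
model = AutoModel.from_config(config)                               # -> ResnetModel
classifier = AutoModelForImageClassification.from_config(config)    # -> ResnetModelForImageClassification
```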
mavonic_private_repos/transformers/docs/source/it/perf_infer_cpu.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Inferenza Efficiente su CPU

Questa guida si concentra sull'inferenza efficiente di modelli di grandi dimensioni sulla CPU.

## `BetterTransformer` per inferenza più rapida

Abbiamo integrato di recente `BetterTransformer` per fare inferenza più rapidamente con modelli per testi, immagini e audio. Visualizza la documentazione sull'integrazione [qui](https://huggingface.co/docs/optimum/bettertransformer/overview) per maggiori dettagli.

## PyTorch JIT-mode (TorchScript)

TorchScript è un modo per creare modelli serializzabili e ottimizzabili a partire da codice PyTorch. Ogni programma TorchScript può essere salvato da un processo Python e caricato in un processo privo di dipendenze Python.
Rispetto all'eager mode di default, la jit mode di PyTorch fornisce normalmente prestazioni migliori per l'inferenza del modello grazie a metodologie di ottimizzazione come la operator fusion.

Per una prima introduzione a TorchScript, vedi il tutorial [Introduction to PyTorch TorchScript](https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html#tracing-modules).

### IPEX Graph Optimization con JIT-mode

Intel® Extension per PyTorch fornisce ulteriori ottimizzazioni in jit mode per i modelli della serie Transformers. Consigliamo vivamente agli utenti di usufruire dei vantaggi di Intel® Extension per PyTorch con jit mode. Alcuni operator patterns usati frequentemente dai modelli Transformers sono già supportati dalle jit mode fusions di Intel® Extension per PyTorch. Questi fusion patterns, come Multi-head-attention fusion, Concat Linear, Linear+Add, Linear+Gelu, Add+LayerNorm fusion ecc., sono abilitati e hanno buone prestazioni. I benefici della fusion sono forniti agli utenti in modo trasparente. In base alle analisi, circa il 70% dei task NLP più popolari (question-answering, text-classification e token-classification) può avere benefici sulle prestazioni grazie ai fusion patterns, sia in Float32 precision che in BFloat16 Mixed precision.

Per maggiori informazioni vedi [IPEX Graph Optimization](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/features/graph_optimization.html).

#### Installazione di IPEX

I rilasci di IPEX seguono quelli di PyTorch; verifica i vari approcci in [IPEX installation](https://intel.github.io/intel-extension-for-pytorch/).

### Utilizzo del JIT-mode

Per abilitare il JIT-mode nel Trainer per evaluation e prediction, devi aggiungere `jit_mode_eval` negli argomenti di Trainer.

<Tip warning={true}>

Per PyTorch >= 1.14.0, il JIT-mode potrebbe giovare a qualsiasi modello in prediction e evaluation, visto che il dict input è supportato in jit.trace.

Per PyTorch < 1.14.0, il JIT-mode potrebbe giovare ai modelli il cui ordine dei parametri corrisponde all'ordine delle tuple in ingresso in jit.trace, come i modelli per question-answering.
Nel caso in cui l'ordine dei parametri seguenti non corrisponda all'ordine delle tuple in ingresso in jit.trace, come nei modelli di text-classification, jit.trace fallirร  e lo cattureremo con una eccezione al fine di renderlo un fallback. Il logging รจ usato per notificare gli utenti. </Tip> Trovi un esempo con caso d'uso in [Transformers question-answering](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) - Inference using jit mode on CPU: <pre>python run_qa.py \ --model_name_or_path csarron/bert-base-uncased-squad-v1 \ --dataset_name squad \ --do_eval \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/ \ --no_cuda \ <b>--jit_mode_eval </b></pre> - Inference with IPEX using jit mode on CPU: <pre>python run_qa.py \ --model_name_or_path csarron/bert-base-uncased-squad-v1 \ --dataset_name squad \ --do_eval \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/ \ --no_cuda \ <b>--use_ipex \</b> <b>--jit_mode_eval</b></pre>
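Se invece degli script di esempio usi direttamente il Trainer nel tuo codice, uno schizzo equivalente (a titolo indicativo) consiste nel passare `jit_mode_eval=True` a `TrainingArguments`:

```py
from transformers import TrainingArguments

# abbozzo: abilita il jit mode (TorchScript) per evaluation e prediction nel Trainer
training_args = TrainingArguments(
    output_dir="/tmp/out",
    do_eval=True,
    no_cuda=True,
    jit_mode_eval=True,
)
```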
mavonic_private_repos/transformers/docs/source/it/perf_train_cpu_many.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Addestramento efficiente su multiple CPU

Quando l'addestramento su una singola CPU è troppo lento, possiamo usare CPU multiple. Questa guida si concentra sul DDP basato su PyTorch, che abilita l'addestramento distribuito su CPU in maniera efficiente.

## Intel® oneCCL Bindings per PyTorch

[Intel® oneCCL](https://github.com/oneapi-src/oneCCL) (collective communications library) è una libreria per l'addestramento distribuito efficiente del deep learning e implementa collettivi come allreduce, allgather, alltoall. Per maggiori informazioni su oneCCL, fai riferimento a [oneCCL documentation](https://spec.oneapi.com/versions/latest/elements/oneCCL/source/index.html) e [oneCCL specification](https://spec.oneapi.com/versions/latest/elements/oneCCL/source/index.html).

Il modulo `oneccl_bindings_for_pytorch` (`torch_ccl` precedentemente alla versione 1.12) implementa le PyTorch C10D ProcessGroup API, può essere caricato dinamicamente come external ProcessGroup e al momento funziona solo su piattaforma Linux.

Qui trovi informazioni più dettagliate per [oneccl_bind_pt](https://github.com/intel/torch-ccl).

### Installazione di Intel® oneCCL Bindings per PyTorch:

I file wheel sono disponibili per le seguenti versioni di Python:

| Extension Version | Python 3.6 | Python 3.7 | Python 3.8 | Python 3.9 | Python 3.10 |
| :---------------: | :--------: | :--------: | :--------: | :--------: | :---------: |
| 1.13.0            |            | √          | √          | √          | √           |
| 1.12.100          |            | √          | √          | √          | √           |
| 1.12.0            |            | √          | √          | √          | √           |
| 1.11.0            |            | √          | √          | √          | √           |
| 1.10.0            | √          | √          | √          | √          |             |

```bash
pip install oneccl_bind_pt=={pytorch_version} -f https://developer.intel.com/ipex-whl-stable-cpu
```

dove `{pytorch_version}` deve essere la tua versione di PyTorch, per esempio 1.13.0. Verifica altri approcci per [oneccl_bind_pt installation](https://github.com/intel/torch-ccl). Le versioni di oneCCL e PyTorch devono combaciare.

<Tip warning={true}>

oneccl_bindings_for_pytorch 1.12.0 prebuilt wheel does not work with PyTorch 1.12.1 (it is for PyTorch 1.12.0)
PyTorch 1.12.1 should work with oneccl_bindings_for_pytorch 1.12.100

</Tip>

## Intel® MPI library

Usa questa implementazione basata sullo standard MPI per ottenere una comunicazione su cluster flessibile, efficiente e scalabile su architetture Intel®. Questo componente è parte di Intel® oneAPI HPC Toolkit.

oneccl_bindings_for_pytorch è installato insieme al set di strumenti MPI. È necessario fare il source dell'ambiente prima di utilizzarlo.
per Intelยฎ oneCCL >= 1.12.0 ```bash oneccl_bindings_for_pytorch_path=$(python -c "from oneccl_bindings_for_pytorch import cwd; print(cwd)") source $oneccl_bindings_for_pytorch_path/env/setvars.sh ``` per Intelยฎ oneCCL con versione < 1.12.0 ```bash torch_ccl_path=$(python -c "import torch; import torch_ccl; import os; print(os.path.abspath(os.path.dirname(torch_ccl.__file__)))") source $torch_ccl_path/env/setvars.sh ``` #### Installazione IPEX: IPEX fornisce ottimizzazioni delle prestazioni per l'addestramento della CPU sia con Float32 che con BFloat16; puoi fare riferimento a [single CPU section](./perf_train_cpu). Il seguente "Utilizzo in Trainer" prende come esempio mpirun nella libreria Intelยฎ MPI. ## Utilizzo in Trainer Per abilitare l'addestramento distribuito multi CPU nel Trainer con il ccl backend, gli utenti devono aggiungere **`--ddp_backend ccl`** negli argomenti del comando. Vediamo un esempio per il [question-answering example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) Il seguente comando abilita due processi sul nodo Xeon, con un processo in esecuzione per ogni socket. Le variabili OMP_NUM_THREADS/CCL_WORKER_COUNT possono essere impostate per una prestazione ottimale. ```shell script export CCL_WORKER_COUNT=1 export MASTER_ADDR=127.0.0.1 mpirun -n 2 -genv OMP_NUM_THREADS=23 \ python3 run_qa.py \ --model_name_or_path google-bert/bert-large-uncased \ --dataset_name squad \ --do_train \ --do_eval \ --per_device_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/debug_squad/ \ --no_cuda \ --ddp_backend ccl \ --use_ipex ``` Il seguente comando abilita l'addestramento per un totale di quattro processi su due Xeon (node0 e node1, prendendo node0 come processo principale), ppn (processes per node) รจ impostato a 2, on un processo in esecuzione per ogni socket. Le variabili OMP_NUM_THREADS/CCL_WORKER_COUNT possono essere impostate per una prestazione ottimale. In node0, รจ necessario creare un file di configurazione che contenga gli indirizzi IP di ciascun nodo (per esempio hostfile) e passare il percorso del file di configurazione come parametro. ```shell script cat hostfile xxx.xxx.xxx.xxx #node0 ip xxx.xxx.xxx.xxx #node1 ip ``` A questo punto, esegui il seguente comando nel nodo0 e **4DDP** sarร  abilitato in node0 e node1 con BF16 auto mixed precision: ```shell script export CCL_WORKER_COUNT=1 export MASTER_ADDR=xxx.xxx.xxx.xxx #node0 ip mpirun -f hostfile -n 4 -ppn 2 \ -genv OMP_NUM_THREADS=23 \ python3 run_qa.py \ --model_name_or_path google-bert/bert-large-uncased \ --dataset_name squad \ --do_train \ --do_eval \ --per_device_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/debug_squad/ \ --no_cuda \ --ddp_backend ccl \ --use_ipex \ --bf16 ```
mavonic_private_repos/transformers/docs/source/it/pipeline_tutorial.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Pipeline per l'inferenza La [`pipeline`] rende semplice usare qualsiasi modello dal [Model Hub](https://huggingface.co/models) per fare inferenza su diversi compiti come generazione del testo, segmentazione di immagini e classificazione di audio. Anche se non hai esperienza con una modalitร  specifica o non comprendi bene il codice che alimenta i modelli, รจ comunque possibile utilizzarli con l'opzione [`pipeline`]! Questa esercitazione ti insegnerร  a: * Usare una [`pipeline`] per fare inferenza. * Usare uno specifico tokenizer o modello. * Usare una [`pipeline`] per compiti che riguardano audio e video. <Tip> Dai un'occhiata alla documentazione di [`pipeline`] per una lista completa dei compiti supportati. </Tip> ## Utilizzo della Pipeline Nonostante ogni compito abbia una [`pipeline`] associata, รจ piรน semplice utilizzare l'astrazione generica della [`pipeline`] che contiene tutte quelle specifiche per ogni mansione. La [`pipeline`] carica automaticamente un modello predefinito e un tokenizer in grado di fare inferenza per il tuo compito. 1. Inizia creando una [`pipeline`] e specificando il compito su cui fare inferenza: ```py >>> from transformers import pipeline >>> generator = pipeline(task="text-generation") ``` 2. Inserisci il testo in input nella [`pipeline`]: ```py >>> generator( ... "Three Rings for the Elven-kings under the sky, Seven for the Dwarf-lords in their halls of stone" ... ) # doctest: +SKIP [{'generated_text': 'Three Rings for the Elven-kings under the sky, Seven for the Dwarf-lords in their halls of stone, Seven for the Iron-priests at the door to the east, and thirteen for the Lord Kings at the end of the mountain'}] ``` Se hai piรน di un input, inseriscilo in una lista: ```py >>> generator( ... [ ... "Three Rings for the Elven-kings under the sky, Seven for the Dwarf-lords in their halls of stone", ... "Nine for Mortal Men, doomed to die, One for the Dark Lord on his dark throne", ... ] ... ) # doctest: +SKIP ``` Qualsiasi parametro addizionale per il tuo compito puรฒ essere incluso nella [`pipeline`]. La mansione `text-generation` ha un metodo [`~generation.GenerationMixin.generate`] con diversi parametri per controllare l'output. Ad esempio, se desideri generare piรน di un output, utilizza il parametro `num_return_sequences`: ```py >>> generator( ... "Three Rings for the Elven-kings under the sky, Seven for the Dwarf-lords in their halls of stone", ... num_return_sequences=2, ... ) # doctest: +SKIP ``` ### Scegliere modello e tokenizer La [`pipeline`] accetta qualsiasi modello dal [Model Hub](https://huggingface.co/models). Ci sono tag nel Model Hub che consentono di filtrare i modelli per attivitร . 
Una volta che avrai scelto il modello appropriato, caricalo usando la corrispondente classe `AutoModelFor` e [`AutoTokenizer`]. Ad esempio, carica la classe [`AutoModelForCausalLM`] per un compito di causal language modeling: ```py >>> from transformers import AutoTokenizer, AutoModelForCausalLM >>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilgpt2") >>> model = AutoModelForCausalLM.from_pretrained("distilbert/distilgpt2") ``` Crea una [`pipeline`] per il tuo compito, specificando il modello e il tokenizer che hai caricato: ```py >>> from transformers import pipeline >>> generator = pipeline(task="text-generation", model=model, tokenizer=tokenizer) ``` Inserisci il testo di input nella [`pipeline`] per generare del testo: ```py >>> generator( ... "Three Rings for the Elven-kings under the sky, Seven for the Dwarf-lords in their halls of stone" ... ) # doctest: +SKIP [{'generated_text': 'Three Rings for the Elven-kings under the sky, Seven for the Dwarf-lords in their halls of stone, Seven for the Dragon-lords (for them to rule in a world ruled by their rulers, and all who live within the realm'}] ``` ## Audio pipeline La flessibilitร  della [`pipeline`] fa si che possa essere estesa ad attivitร  sugli audio. Per esempio, classifichiamo le emozioni in questo clip audio: ```py >>> from datasets import load_dataset >>> import torch >>> torch.manual_seed(42) # doctest: +IGNORE_RESULT >>> ds = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation") >>> audio_file = ds[0]["audio"]["path"] ``` Trova un modello per la [classificazione audio](https://huggingface.co/models?pipeline_tag=audio-classification) sul Model Hub per eseguire un compito di riconoscimento automatico delle emozioni e caricalo nella [`pipeline`]: ```py >>> from transformers import pipeline >>> audio_classifier = pipeline( ... task="audio-classification", model="ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition" ... ) ``` Inserisci il file audio nella [`pipeline`]: ```py >>> preds = audio_classifier(audio_file) >>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds] >>> preds [{'score': 0.1315, 'label': 'calm'}, {'score': 0.1307, 'label': 'neutral'}, {'score': 0.1274, 'label': 'sad'}, {'score': 0.1261, 'label': 'fearful'}, {'score': 0.1242, 'label': 'happy'}] ``` ## Vision pipeline Infine, usare la [`pipeline`] per le attivitร  sulle immagini รจ praticamente la stessa cosa. Specifica la tua attivitร  e inserisci l'immagine nel classificatore. L'immagine puรฒ essere sia un link che un percorso sul tuo pc in locale. Per esempio, quale specie di gatto รจ raffigurata qui sotto? ![pipeline-cat-chonk](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg) ```py >>> from transformers import pipeline >>> vision_classifier = pipeline(task="image-classification") >>> preds = vision_classifier( ... images="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg" ... ) >>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds] >>> preds [{'score': 0.4335, 'label': 'lynx, catamount'}, {'score': 0.0348, 'label': 'cougar, puma, catamount, mountain lion, painter, panther, Felis concolor'}, {'score': 0.0324, 'label': 'snow leopard, ounce, Panthera uncia'}, {'score': 0.0239, 'label': 'Egyptian cat'}, {'score': 0.0229, 'label': 'tiger cat'}] ```
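Se hai a disposizione una GPU, dovresti anche poter passare il parametro `device` alla [`pipeline`] per eseguire l'inferenza sull'acceleratore (esempio puramente indicativo):

```py
>>> from transformers import pipeline

>>> # device=0 usa la prima GPU; con device=-1 (default) la pipeline resta su CPU
>>> vision_classifier = pipeline(task="image-classification", device=0)  # doctest: +SKIP
```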
mavonic_private_repos/transformers/docs/source/it/add_new_pipeline.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Come creare una pipeline personalizzata? In questa guida, scopriremo come creare una pipeline personalizzata e condividerla sull' [Hub](https://hf.co/models) o aggiungerla nella libreria Transformers. Innanzitutto, รจ necessario decidere gli input grezzi che la pipeline sarร  in grado di accettare. Possono essere strings, raw bytes, dictionaries o qualsiasi cosa sia l'input desiderato piรน probabile. Cerca di mantenere questi input il piรน possibile in Python in quanto facilita la compatibilitร  (anche con altri linguaggi tramite JSON). Questi saranno gli `inputs` della pipeline (`preprocess`). Poi definire gli `outputs`. Stessa strategia degli `inputs`. Piรน รจ seplice e meglio รจ. Questi saranno gli output del metodo `postprocess`. Si parte ereditando la classe base `Pipeline`. con i 4 metodi che bisogna implementare `preprocess`, `_forward`, `postprocess` e `_sanitize_parameters`. ```python from transformers import Pipeline class MyPipeline(Pipeline): def _sanitize_parameters(self, **kwargs): preprocess_kwargs = {} if "maybe_arg" in kwargs: preprocess_kwargs["maybe_arg"] = kwargs["maybe_arg"] return preprocess_kwargs, {}, {} def preprocess(self, inputs, maybe_arg=2): model_input = Tensor(inputs["input_ids"]) return {"model_input": model_input} def _forward(self, model_inputs): # model_inputs == {"model_input": model_input} outputs = self.model(**model_inputs) # Maybe {"logits": Tensor(...)} return outputs def postprocess(self, model_outputs): best_class = model_outputs["logits"].softmax(-1) return best_class ``` La struttura di questa suddivisione consiste nel supportare in modo relativamente continuo CPU/GPU, supportando allo stesso tempo l'esecuzione di pre/postelaborazione sulla CPU su thread diversi. `preprocess` prenderร  gli input originariamente definiti e li trasformerร  in qualcosa di alimentabile dal modello. Potrebbe contenere piรน informazioni e di solito รจ un `Dict`. `_forward` รจ il dettaglio dell'implementazione e non รจ destinato a essere chiamato direttamente. `forward` รจ il metodo preferito per assicurarsi che tutto funzioni correttamente perchรจ contiene delle slavaguardie. Se qualcosa รจ รจ collegato a un modello reale, appartiene al metodo `_forward`, tutto il resto รจ nel preprocess/postprocess. `postprocess` prende l'otput di `_forward` e lo trasforma nell'output finale che era stato deciso in precedenza. `_sanitize_parameters` esiste per consentire agli utenti di passare i parametri ogni volta che desiderano sia a inizialization time `pipeline(...., maybe_arg=4)` che al call time `pipe = pipeline(...); output = pipe(...., maybe_arg=4)`. `_sanitize_parameters` ritorna 3 dicts di kwargs che vengono passati direttamente a `preprocess`, `_forward` e `postprocess`. Non riempire nulla se il chiamante non ha chiamato con alcun parametro aggiuntivo. 
Questo consente di mantenere gli argomenti predefiniti nella definizione della funzione, che รจ sempre piรน "naturale". Un esempio classico potrebbe essere l'argomento `top_k` nel post processing dei classification tasks. ```python >>> pipe = pipeline("my-new-task") >>> pipe("This is a test") [{"label": "1-star", "score": 0.8}, {"label": "2-star", "score": 0.1}, {"label": "3-star", "score": 0.05} {"label": "4-star", "score": 0.025}, {"label": "5-star", "score": 0.025}] >>> pipe("This is a test", top_k=2) [{"label": "1-star", "score": 0.8}, {"label": "2-star", "score": 0.1}] ``` In order to achieve that, we'll update our `postprocess` method with a default parameter to `5`. and edit `_sanitize_parameters` to allow this new parameter. ```python def postprocess(self, model_outputs, top_k=5): best_class = model_outputs["logits"].softmax(-1) # Add logic to handle top_k return best_class def _sanitize_parameters(self, **kwargs): preprocess_kwargs = {} if "maybe_arg" in kwargs: preprocess_kwargs["maybe_arg"] = kwargs["maybe_arg"] postprocess_kwargs = {} if "top_k" in kwargs: postprocess_kwargs["top_k"] = kwargs["top_k"] return preprocess_kwargs, {}, postprocess_kwargs ``` Cercare di mantenere gli input/output molto semplici e idealmente serializzabili in JSON, in quanto ciรฒ rende l'uso della pipeline molto facile senza richiedere agli utenti di comprendere nuovi tipi di oggetti. รˆ anche relativamente comune supportare molti tipi di argomenti per facilitarne l'uso (ad esempio file audio, possono essere nomi di file, URL o byte puri). ## Aggiungilo alla lista dei tasks supportati Per registrar il tuo `new-task` alla lista dei tasks supportati, devi aggiungerlo al `PIPELINE_REGISTRY`: ```python from transformers.pipelines import PIPELINE_REGISTRY PIPELINE_REGISTRY.register_pipeline( "new-task", pipeline_class=MyPipeline, pt_model=AutoModelForSequenceClassification, ) ``` Puoi specificare il modello di default che desideri, in questo caso dovrebbe essere accompagnato da una revisione specifica (che puรฒ essere il nome di un branch o l'hash di un commit, in questo caso abbiamo preso `"abcdef"`) e anche dal type: ```python PIPELINE_REGISTRY.register_pipeline( "new-task", pipeline_class=MyPipeline, pt_model=AutoModelForSequenceClassification, default={"pt": ("user/awesome_model", "abcdef")}, type="text", # current support type: text, audio, image, multimodal ) ``` ## Condividi la tua pipeline sull'Hub Per condividere la tua pipeline personalizzata sull'Hub, devi solo salvare il codice della tua sottoclasse `Pipeline` in un file python. 
Per esempio, supponiamo di voler utilizzare una pipeline personalizzata per la classificazione delle coppie di frasi come la seguente: ```py import numpy as np from transformers import Pipeline def softmax(outputs): maxes = np.max(outputs, axis=-1, keepdims=True) shifted_exp = np.exp(outputs - maxes) return shifted_exp / shifted_exp.sum(axis=-1, keepdims=True) class PairClassificationPipeline(Pipeline): def _sanitize_parameters(self, **kwargs): preprocess_kwargs = {} if "second_text" in kwargs: preprocess_kwargs["second_text"] = kwargs["second_text"] return preprocess_kwargs, {}, {} def preprocess(self, text, second_text=None): return self.tokenizer(text, text_pair=second_text, return_tensors=self.framework) def _forward(self, model_inputs): return self.model(**model_inputs) def postprocess(self, model_outputs): logits = model_outputs.logits[0].numpy() probabilities = softmax(logits) best_class = np.argmax(probabilities) label = self.model.config.id2label[best_class] score = probabilities[best_class].item() logits = logits.tolist() return {"label": label, "score": score, "logits": logits} ``` L'implementazione รจ agnostica al framework, e lavorerร  sia con modelli PyTorch che con TensorFlow. Se l'abbiamo salvato in un file chiamato `pair_classification.py`, puรฒ essere successivamente importato e registrato in questo modo: ```py from pair_classification import PairClassificationPipeline from transformers.pipelines import PIPELINE_REGISTRY from transformers import AutoModelForSequenceClassification, TFAutoModelForSequenceClassification PIPELINE_REGISTRY.register_pipeline( "pair-classification", pipeline_class=PairClassificationPipeline, pt_model=AutoModelForSequenceClassification, tf_model=TFAutoModelForSequenceClassification, ) ``` Una volta fatto, possiamo usarla con un modello pretrained. L'istanza `sgugger/finetuned-bert-mrpc` รจ stata fine-tuned sul dataset MRPC, che classifica le coppie di frasi come parafrasi o no. ```py from transformers import pipeline classifier = pipeline("pair-classification", model="sgugger/finetuned-bert-mrpc") ``` Successivamente possiamo condividerlo sull'Hub usando il metodo `push_to_hub` ```py classifier.push_to_hub("test-dynamic-pipeline") ``` Questo codice copierร  il file dove รจ stato definitp `PairClassificationPipeline` all'interno della cartella `"test-dynamic-pipeline"`, insieme al salvataggio del modello e del tokenizer della pipeline, prima di pushare il tutto nel repository `{your_username}/test-dynamic-pipeline`. Dopodichรฉ chiunque potrร  utilizzarlo, purchรฉ fornisca l'opzione `trust_remote_code=True`: ```py from transformers import pipeline classifier = pipeline(model="{your_username}/test-dynamic-pipeline", trust_remote_code=True) ``` ## Aggiungere la pipeline a Transformers Se vuoi contribuire con la tua pipeline a Transformers, dovrai aggiungere un modulo nel sottomodulo `pipelines` con il codice della tua pipeline, quindi aggiungilo all'elenco dei tasks definiti in `pipelines/__init__.py`. Poi hai bisogno di aggiungere i test. Crea un nuovo file `tests/test_pipelines_MY_PIPELINE.py` con esempi ed altri test. La funzione `run_pipeline_test` sarร  molto generica e su piccoli modelli casuali su ogni possibile architettura, come definito da `model_mapping` e `tf_model_mapping`. Questo รจ molto importante per testare la compatibilitร  futura, nel senso che se qualcuno aggiunge un nuovo modello di `XXXForQuestionAnswering` allora il test della pipeline tenterร  di essere eseguito su di esso. 
Poichรฉ i modelli sono casuali, รจ รจ impossibile controllare i valori effettivi, per questo esiste un aiuto `ANY` che tenterร  solamente di far corrispondere l'output della pipeline TYPE. Hai anche *bisogno* di implementare 2 (idealmente 4) test. - `test_small_model_pt` : Definire 1 piccolo modello per questa pipeline (non importa se i risultati non hanno senso) e testare i risultati della pipeline. I risultati dovrebbero essere gli stessi di `test_small_model_tf`. - `test_small_model_tf` : Definire 1 piccolo modello per questa pipeline (non importa se i risultati non hanno senso) e testare i risultati della pipeline. I risultati dovrebbero essere gli stessi di `test_small_model_pt`. - `test_large_model_pt` (`optional`): Testare la pipeline su una pipeline reale in cui i risultati dovrebbero avere senso. Questi test sono lenti e dovrebbero essere contrassegnati come tali. In questo caso l'obiettivo รจ mostrare la pipeline e assicurarsi che non ci siano derive nelle versioni future - `test_large_model_tf` (`optional`): Testare la pipeline su una pipeline reale in cui i risultati dovrebbero avere senso. Questi test sono lenti e dovrebbero essere contrassegnati come tali. In questo caso l'obiettivo รจ mostrare la pipeline e assicurarsi che non ci siano derive nelle versioni future
mavonic_private_repos/transformers/docs/source/it/_toctree.yml
- sections: - local: index title: ๐Ÿค— Transformers - local: quicktour title: Tour rapido - local: installation title: Installazione title: Iniziare - sections: - local: pipeline_tutorial title: Pipeline per l'inferenza - local: autoclass_tutorial title: Carica istanze pre-allenate con AutoClass - local: preprocessing title: Preprocess - local: training title: Fine-tuning di un modello pre-addestrato - local: accelerate title: Allenamento distribuito con ๐Ÿค— Accelerate - local: model_sharing title: Condividere un modello title: Esercitazione - sections: - local: create_a_model title: Crea un'architettura personalizzata - local: custom_models title: Condividere modelli personalizzati - local: run_scripts title: Addestramento con script - local: multilingual title: Modelli multilingua per l'inferenza - local: converting_tensorflow_models title: Convertire modelli tensorflow - local: serialization title: Esporta modelli Transformers - local: perf_train_cpu title: Addestramento efficiente su CPU - local: perf_train_cpu_many title: Addestramento efficiente su multiple CPU - local: perf_train_tpu title: Addestramento su TPU - local: perf_train_special title: Addestramento su Hardware Specializzato - local: perf_infer_cpu title: Inferenza Efficiente su CPU - local: perf_infer_gpu_one title: Inferenza su una GPU - local: perf_infer_gpu_many title: Inferenza Efficiente su GPU Multiple - local: perf_infer_special title: Inferenza su Hardware Specializzato - local: big_models title: Istanziare un big model - local: migration title: Passaggio da pacchetti precedenti - local: debugging title: Debugging title: Guide pratiche - sections: - local: add_new_pipeline title: Come aggiungere una pipeline a ๐Ÿค— Transformers? - local: add_new_model title: Come aggiungere un modello a ๐Ÿค— Transformers? - local: perf_hardware title: Hardware ottimizzato per l'addestramento - local: community title: Risorse della comunitร  - local: pr_checks title: Controlli su una Pull Request title: Guide How-to
mavonic_private_repos/transformers/docs/source/it/perf_infer_gpu_one.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Inferenza efficiente su GPU singola Questo documento sarร  presto completato con informazioni su come effetture l'inferenza su una singola GPU. Nel frattempo รจ possibile consultare [la guida per l'addestramento su una singola GPU](perf_train_gpu_one) e [la guida per l'inferenza su CPU](perf_infer_cpu). ## `BetterTransformer` per l'inferenza piรน veloce Abbiamo recentemente integrato `BetterTransformer` per velocizzare l'inferenza su GPU per modelli di testo, immagini e audio. Per maggiori dettagli, consultare la documentazione su questa integrazione [qui](https://huggingface.co/docs/optimum/bettertransformer/overview). ## Integrazione di `bitsandbytes` per Int8 mixed-precision matrix decomposition <Tip> Nota che questa funzione puรฒ essere utilizzata anche nelle configurazioni multi GPU. </Tip> Dal paper [`LLM.int8() : 8-bit Matrix Multiplication for Transformers at Scale`](https://arxiv.org/abs/2208.07339), noi supportiamo l'integrazione di Hugging Face per tutti i modelli dell'Hub con poche righe di codice. Il metodo `nn.Linear` riduce la dimensione di 2 per i pesi `float16` e `bfloat16` e di 4 per i pesi `float32`, con un impatto quasi nullo sulla qualitร , operando sugli outlier in half-precision. ![HFxbitsandbytes.png](https://cdn-uploads.huggingface.co/production/uploads/1659861207959-62441d1d9fdefb55a0b7d12c.png) Il metodo Int8 mixed-precision matrix decomposition funziona separando la moltiplicazione tra matrici in due flussi: (1) una matrice di flusso di outlier di caratteristiche sistematiche moltiplicata in fp16, (2) in flusso regolare di moltiplicazione di matrici int8 (99,9%). Con questo metodo, รจ possibile effettutare inferenza int8 per modelli molto grandi senza degrado predittivo. Per maggiori dettagli sul metodo, consultare il [paper](https://arxiv.org/abs/2208.07339) o il nostro [blogpost sull'integrazione](https://huggingface.co/blog/hf-bitsandbytes-integration). ![MixedInt8.gif](https://cdn-uploads.huggingface.co/production/uploads/1660567469965-62441d1d9fdefb55a0b7d12c.gif) Nota che รจ necessaria una GPU per eseguire modelli di tipo mixed-8bit, poichรฉ i kernel sono stati compilati solo per le GPU. Prima di utilizzare questa funzione, assicurarsi di disporre di memoria sufficiente sulla GPU per memorizzare un quarto del modello (o la metร  se i pesi del modello sono in mezza precisione). Di seguito sono riportate alcune note per aiutarvi a utilizzare questo modulo, oppure seguite le dimostrazioni su [Google colab](#colab-demos). ### Requisiti - Se si dispone di `bitsandbytes<0.37.0`, assicurarsi di eseguire su GPU NVIDIA che supportano tensor cores a 8 bit (Turing, Ampere o architetture piรน recenti - ad esempio T4, RTX20s RTX30s, A40-A100). Per `bitsandbytes>=0.37.0`, tutte le GPU dovrebbero essere supportate. 
- Installare la versione corretta di `bitsandbytes` eseguendo: `pip install bitsandbytes>=0.31.5`
- Installare `accelerate` eseguendo: `pip install accelerate>=0.12.0`

### Esecuzione di modelli mixed-Int8 - configurazione per singola GPU

Dopo aver installato le librerie necessarie, il modo per caricare il tuo modello mixed 8-bit è il seguente:

```py
from transformers import AutoModelForCausalLM

model_name = "bigscience/bloom-2b5"
model_8bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True)
```

Per la generazione di testo, si consiglia di:

* utilizzare il metodo `generate()` del modello invece della funzione `pipeline()`. Sebbene l'inferenza sia possibile con la funzione `pipeline()`, essa non è ottimizzata per i modelli mixed-8bit e sarà più lenta rispetto all'uso del metodo `generate()`. Inoltre, alcune strategie di campionamento, come il nucleus sampling, non sono supportate dalla funzione `pipeline()` per i modelli mixed-8bit.
* collocare tutti gli input sullo stesso dispositivo del modello.

Ecco un semplice esempio:

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-2b5"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model_8bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True)

text = "Hello, my llama is cute"
inputs = tokenizer(text, return_tensors="pt").to("cuda")
generated_ids = model_8bit.generate(**inputs)
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
```

### Esecuzione di modelli mixed-8bit - configurazione multi GPU

Il modo per caricare il modello mixed-8bit su più GPU è il seguente (stesso comando della configurazione a GPU singola):

```py
model_name = "bigscience/bloom-2b5"
model_8bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True)
```

Puoi controllare la RAM della GPU che vuoi allocare su ogni GPU usando `accelerate`. Utilizza l'argomento `max_memory` come segue:

```py
max_memory_mapping = {0: "1GB", 1: "2GB"}
model_name = "bigscience/bloom-3b"
model_8bit = AutoModelForCausalLM.from_pretrained(
    model_name, device_map="auto", load_in_8bit=True, max_memory=max_memory_mapping
)
```

In questo esempio, la prima GPU utilizzerà 1 GB di memoria e la seconda 2 GB.

### Colab demos

Con questo metodo è possibile fare inferenza su modelli su cui prima non era possibile farla su Google Colab.

Guarda la demo per l'esecuzione di T5-11b (42GB in fp32)! Utilizza la quantizzazione a 8 bit su Google Colab:

[![Open In Colab: T5-11b demo](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1YORPWx4okIHXnjW7MSAidXN29mPVNT7F?usp=sharing)

Oppure questa demo di BLOOM-3B:

[![Open In Colab: BLOOM-3b demo](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1qOjXfQIAULfKvZqwCen8-MoWKGdSatZ4?usp=sharing)
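Per farti un'idea del risparmio di memoria, dovresti poter confrontare l'occupazione dei pesi con il metodo `get_memory_footprint` (piccolo esempio indicativo):

```py
from transformers import AutoModelForCausalLM

model_name = "bigscience/bloom-2b5"

model_fp16 = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype="auto")
model_8bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True)

# get_memory_footprint restituisce la dimensione (in byte) occupata dai parametri del modello
print(model_fp16.get_memory_footprint() / model_8bit.get_memory_footprint())
```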
mavonic_private_repos/transformers/docs/source/it/debugging.md
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Debugging ## Debug dei problemi di rete multi-GPU Quando addestri o fai inferenza con `DistributedDataParallel` e GPU multiple, se si verificano problemi di intercomunicazione tra processi e/o nodi, puoi utilizzare il seguente script per diagnosticare i problemi della rete. ```bash wget https://raw.githubusercontent.com/huggingface/transformers/main/scripts/distributed/torch-distributed-gpu-test.py ``` Per esempio per testare come 2 GPU interagiscono fai: ```bash python -m torch.distributed.run --nproc_per_node 2 --nnodes 1 torch-distributed-gpu-test.py ``` Se entrambi i processi sono in grado di comunicare tra loro e di allocare la memoria della GPU, ciascuno di essi stamperร  lo stato OK. Per piรน GPU o nodi adatta gli argumenti nello script. All'interno dello script di diagnostica troverai molti altri dettagli e anche una guida per eseguirlo in ambiente SLURM. Un livello di debug superiore รจ aggiungere la variabile d'ambiente `NCCL_DEBUG=INFO` come di seguito: ```bash NCCL_DEBUG=INFO python -m torch.distributed.run --nproc_per_node 2 --nnodes 1 torch-distributed-gpu-test.py ``` In questo modo si scaricano molte informazioni di debug relative a NCCL, che puoi cercare online in caso di problemi. Oppure, se non hai la sicurezza di come interpretare l'output, puoi condividere il file di log in una Issue. ## Rilevamento di Underflow e Overflow <Tip> Questa funzionalitร  al momento รจ disponibile solo per PyTorch. </Tip> <Tip> Per addestramento multi-GPU richiede DDP (`torch.distributed.launch`). </Tip> <Tip> Questa funzionalitร  puรฒ essere usata con modelli basati su `nn.Module`. </Tip> Se inizi a ottenere `loss=NaN` o il modello presenta qualche altro comportamento anomalo a causa di valori `inf` o `nan` in attivazioni o nei pesi, รจ necessario scoprire dove si verifica il primo underflow o overflow e cosa lo ha determinato. Fortunatamente รจ possibile farlo facilmente attivando un modulo speciale che effettuerร  il rilevamento automaticamente. Se stai usando [`Trainer`], hai bisogno di aggiungere solo: ```bash --debug underflow_overflow ``` ai normali argomenti della riga di comando, o passa `debug="underflow_overflow"` quando viene creato l'oggetto [`TrainingArguments`]. Se stai usando il tuo ciclo di allenamento o un altro trainer, puoi ottenere lo stesso risultato con: ```python from .debug_utils import DebugUnderflowOverflow debug_overflow = DebugUnderflowOverflow(model) ``` [`~debug_utils.DebugUnderflowOverflow`] inserisce dei ganci nel modello che dopo ogni chiamata testeranno le variabili di ingresso e di uscita e anche i pesi del modulo corrispondente. 
Non appena viene rilevato `inf` o o `nan` in almeno un elemento delle attivazioni o dei pesi, il programma lo notifica e stampa un rapporto come il seguente (questo รจ stato rilevato con `google/mt5-small` sotto fp16 mixed precision): ``` Detected inf/nan during batch_number=0 Last 21 forward frames: abs min abs max metadata encoder.block.1.layer.1.DenseReluDense.dropout Dropout 0.00e+00 2.57e+02 input[0] 0.00e+00 2.85e+02 output [...] encoder.block.2.layer.0 T5LayerSelfAttention 6.78e-04 3.15e+03 input[0] 2.65e-04 3.42e+03 output[0] None output[1] 2.25e-01 1.00e+04 output[2] encoder.block.2.layer.1.layer_norm T5LayerNorm 8.69e-02 4.18e-01 weight 2.65e-04 3.42e+03 input[0] 1.79e-06 4.65e+00 output encoder.block.2.layer.1.DenseReluDense.wi_0 Linear 2.17e-07 4.50e+00 weight 1.79e-06 4.65e+00 input[0] 2.68e-06 3.70e+01 output encoder.block.2.layer.1.DenseReluDense.wi_1 Linear 8.08e-07 2.66e+01 weight 1.79e-06 4.65e+00 input[0] 1.27e-04 2.37e+02 output encoder.block.2.layer.1.DenseReluDense.dropout Dropout 0.00e+00 8.76e+03 input[0] 0.00e+00 9.74e+03 output encoder.block.2.layer.1.DenseReluDense.wo Linear 1.01e-06 6.44e+00 weight 0.00e+00 9.74e+03 input[0] 3.18e-04 6.27e+04 output encoder.block.2.layer.1.DenseReluDense T5DenseGatedGeluDense 1.79e-06 4.65e+00 input[0] 3.18e-04 6.27e+04 output encoder.block.2.layer.1.dropout Dropout 3.18e-04 6.27e+04 input[0] 0.00e+00 inf output ``` L'output di esempio รจ stato tagliato al centro per brevitร . La seconda colonna mostra il valore dell'elemento piรน grande in assoluto,cosรฌ se osserviamo da vicino gli ultimi istanti, input e output sono nel range di `1e4`. Questo addestramento รจ stato eseguito con una mixed precision fp16 e l'ultimo passo usciva fuori (sotto `fp16` il valore piรน grande prima di `inf` รจ `64e3`). Per evitare overflows sotto `fp16` le attivazionioni devono rimanere molto al di sotto di `1e4`, perchรฉ `1e4 * 1e4 = 1e8` quindi qualsiasi moltiplicazione di matrice con grandi attivazioni porterร  a una condizione di overflow numerico. All'inizio della traccia รจ possibile scoprire a quale lotto si รจ verificato il problema (questo `Detected inf/nan during batch_number=0` significa che il problema si รจ verificato nel primo lotto). Ogni frame segnalato inizia dichiarando la voce completamente qualificata per il modulo corrispondente per il quale il frame รจ stato segnalato. Se osserviamo il seguente frame: ``` encoder.block.2.layer.1.layer_norm T5LayerNorm 8.69e-02 4.18e-01 weight 2.65e-04 3.42e+03 input[0] 1.79e-06 4.65e+00 output ``` Questo, `encoder.block.2.layer.1.layer_norm` indica che si tratta di un layer norm nel primo layer, del secondo blocco dell'encoder. E le chiamata specifica di `forward` รจ `T5LayerNorm`. Osserviamo gli ultimi frame del report: ``` Detected inf/nan during batch_number=0 Last 21 forward frames: abs min abs max metadata [...] 
encoder.block.2.layer.1.DenseReluDense.wi_0 Linear 2.17e-07 4.50e+00 weight 1.79e-06 4.65e+00 input[0] 2.68e-06 3.70e+01 output encoder.block.2.layer.1.DenseReluDense.wi_1 Linear 8.08e-07 2.66e+01 weight 1.79e-06 4.65e+00 input[0] 1.27e-04 2.37e+02 output encoder.block.2.layer.1.DenseReluDense.wo Linear 1.01e-06 6.44e+00 weight 0.00e+00 9.74e+03 input[0] 3.18e-04 6.27e+04 output encoder.block.2.layer.1.DenseReluDense T5DenseGatedGeluDense 1.79e-06 4.65e+00 input[0] 3.18e-04 6.27e+04 output encoder.block.2.layer.1.dropout Dropout 3.18e-04 6.27e+04 input[0] 0.00e+00 inf output ``` L'ultimo frame report per la funzione `Dropout.forward` con la prima voce per l'unico input e la seconda per l'unico output. Si puรฒ notare che รจ stato richiamato da un attibuto `dropout` dentro la classe `DenseReluDense`. Si puรฒ notare che ciรฒ รจ avvenuto durante il primo strato, del 2ยฐ blocco, durante il primissimo lotto. Infine, gli elementi di input piรน grandi in assoluto sono stati `6.27e+04` e l'equivalente per l'output era `inf`. Puoi vedere qui, che `T5DenseGatedGeluDense.forward` risulta in output activations, il cui valore massimo assoluto era circa 62,7K, che รจ molto vicino al limite massimo di 64K di fp16. Nel prossimo frame abbiamo `Dropout` che rinormalizza i pesi, dopo aver azzerato alcuni elementi, il che spinge il valore massimo assoluto a piรน di 64K e si verifica un overflow.(`inf`). Come puoi notare, รจ nei frames precedenti che occorre esaminare quando i numeri iniziano a diventare molto grandi per i valori fp16. Confrontiamo il report al codice `models/t5/modeling_t5.py`: ```python class T5DenseGatedGeluDense(nn.Module): def __init__(self, config): super().__init__() self.wi_0 = nn.Linear(config.d_model, config.d_ff, bias=False) self.wi_1 = nn.Linear(config.d_model, config.d_ff, bias=False) self.wo = nn.Linear(config.d_ff, config.d_model, bias=False) self.dropout = nn.Dropout(config.dropout_rate) self.gelu_act = ACT2FN["gelu_new"] def forward(self, hidden_states): hidden_gelu = self.gelu_act(self.wi_0(hidden_states)) hidden_linear = self.wi_1(hidden_states) hidden_states = hidden_gelu * hidden_linear hidden_states = self.dropout(hidden_states) hidden_states = self.wo(hidden_states) return hidden_states ``` Ora รจ facile vedere la chiamata `dropout`, e tutte le chiamate precedenti. Poichรฉ il rilevamento avviene in un avanzamento (forward hook in eng.), i rapporti vengono creati immeditamente dopo ogni rientro da `forward` (forward returns in eng.). Tornando al rapporto completo, per agire e risolvere il problema, dobbiamo andare qualche frame piรน in alto, dove i numeri hanno iniziato a salire, e probabilmente passare alla modalitร  `fp32`, in modo che i numeri non trabocchino quando vengono moltiplicati o sommati. Naturalmente, potrebbero esserci altre soluzioni. 
Per esempio, potremmo spegnere temporanemante `amp` se รจ abilitato, successivamente spostare `forward` in un helper wrapper, come: ```python def _forward(self, hidden_states): hidden_gelu = self.gelu_act(self.wi_0(hidden_states)) hidden_linear = self.wi_1(hidden_states) hidden_states = hidden_gelu * hidden_linear hidden_states = self.dropout(hidden_states) hidden_states = self.wo(hidden_states) return hidden_states import torch def forward(self, hidden_states): if torch.is_autocast_enabled(): with torch.cuda.amp.autocast(enabled=False): return self._forward(hidden_states) else: return self._forward(hidden_states) ``` Poichรฉ il rilevatore automatico riporta solo gli ingressi e le uscite di fotogrammi completi, una volta che si sa dove cercare, si puรฒ analizzare anche le fasi intermedie di una specifica funzione `forward`. In alcuni casi puoi usare la funzione di supporto `detect_overflow` per indirizzare il rilevatore dove preferisci, ad esempio: ```python from debug_utils import detect_overflow class T5LayerFF(nn.Module): [...] def forward(self, hidden_states): forwarded_states = self.layer_norm(hidden_states) detect_overflow(forwarded_states, "after layer_norm") forwarded_states = self.DenseReluDense(forwarded_states) detect_overflow(forwarded_states, "after DenseReluDense") return hidden_states + self.dropout(forwarded_states) ``` Si puรฒ vedere che abbiamo aggiunto 2 di questi e ora teniamo traccia se `inf` o `nan` per `forwarded_states` รจ stato rilevato da qualche parte. In realtร , il rilevatore li riporta giร , perchรฉ ciascuna delle chiamate nell'esempio precedente รจ un `nn.Module`, ma diciamo che se avessimo dei calcoli diretti locali, questo รจ il modo in cui lo faremmo. Inoltre, se si istanzia il debugger nel proprio codice, รจ possibile modificare il numero di fotogrammi stampati rispetto a predefinito, ad esempio.: ```python from .debug_utils import DebugUnderflowOverflow debug_overflow = DebugUnderflowOverflow(model, max_frames_to_save=100) ``` ### Tracciamento della mistura assoluta del lotto specifico e del valore massimo La stessa classe di debug puรฒ essere utilizzata per il tracciamento per-batch con la funzione di rilevamento di underflow/overflow disattivata. Supponiamo di voler osservare i valori minimi e massimi assoluti per tutti gli ingredienti di ogni chiamata `forward` di un dato lotto. lotto, e che lo si voglia fare solo per i lotti 1 e 3. Si istanzia questa classe come: ```python debug_overflow = DebugUnderflowOverflow(model, trace_batch_nums=[1, 3]) ``` Ora i batch completi 1 e 3 saranno tracciati utilizzando lo stesso formato del rilevatore di underflow/overflow. I batches sono 0-indexed. Questo รจ utile se si sa che il programma inizia a comportarsi male dopo un certo numero di batch, in modo da poter avanzare velocemente fino a quell'area. direttamente a quell'area. Ecco un esempio di output troncato per questa configurazione: ``` *** Starting batch number=1 *** abs min abs max metadata shared Embedding 1.01e-06 7.92e+02 weight 0.00e+00 2.47e+04 input[0] 5.36e-05 7.92e+02 output [...] decoder.dropout Dropout 1.60e-07 2.27e+01 input[0] 0.00e+00 2.52e+01 output decoder T5Stack not a tensor output lm_head Linear 1.01e-06 7.92e+02 weight 0.00e+00 1.11e+00 input[0] 6.06e-02 8.39e+01 output T5ForConditionalGeneration not a tensor output *** Starting batch number=3 *** abs min abs max metadata shared Embedding 1.01e-06 7.92e+02 weight 0.00e+00 2.78e+04 input[0] 5.36e-05 7.92e+02 output [...] 
```

Qui verrà scaricato un numero enorme di fotogrammi, tanti quanti sono le chiamate forward nel modello, quindi può essere o non essere quello che vuoi, ma a volte può essere più utile di un classico debugger. Per esempio, se il problema inizia a verificarsi a partire dal lotto numero 150, puoi scaricare le tracce dei lotti 149 e 150 e confrontare i punti in cui i numeri hanno iniziato a divergere.

È inoltre possibile specificare il numero di batch dopo il quale interrompere l'addestramento, con:

```python
debug_overflow = DebugUnderflowOverflow(model, trace_batch_nums=[1, 3], abort_after_batch_num=3)
```
mavonic_private_repos/transformers/docs/source/it/index.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๐Ÿค— Transformers Machine Learning allo stato dell'arte per PyTorch, TensorFlow e JAX. ๐Ÿค— Transformers fornisce delle API per scaricare in modo semplice e allenare modelli pre-allenati allo stato dell'arte. L'utilizzo di modelli pre-allenati puรฒ ridurre i tuoi costi computazionali, l'impatto ambientale, e farti risparmiare il tempo che utilizzeresti per allenare un modello da zero. I modelli possono essere utilizzati in diverse modalitร  come ad esempio: * ๐Ÿ“ Testo: classificazione del testo, estrazione delle informazioni, rispondere a domande, riassumere, traduzione e generazione del testo in piรน di 100 lingue. * ๐Ÿ–ผ๏ธ Immagini: classificazione di immagini, rilevazione di oggetti e segmentazione. * ๐Ÿ—ฃ๏ธ Audio: riconoscimento vocale e classificazione dell'audio. * ๐Ÿ™ Multimodale: rispondere a domande inerenti dati tabulari, riconoscimento ottico dei caratteri, estrazione di informazioni a partire da documenti scannerizzati, classificazione di video e risposta visuale a domande. La nostra libreria supporta un'integrazione perfetta tra tre delle librerie per il deep learning piรน popolari: [PyTorch](https://pytorch.org/), [TensorFlow](https://www.tensorflow.org/) e [JAX](https://jax.readthedocs.io/en/latest/). Allena il tuo modello in tre righe di codice in un framework, e caricalo per l'inferenza in un altro. Ogni architettura di ๐Ÿค— Transformers รจ definita in un modulo Python indipendente cosรฌ da poter essere personalizzata in modo semplice per la ricerca e gli esperimenti. ## Se stai cercando supporto personalizzato dal team di Hugging Face <a target="_blank" href="https://huggingface.co/support"> <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="width: 100%; max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);"> </a> ## Contenuti La documentazione รจ organizzata in cinque parti: - **INIZIARE** contiene un tour rapido e le istruzioni di installazione per cominciare ad utilizzare ๐Ÿค— Transformers. - **TUTORIALS** รจ un buon posto da cui iniziare se per te la nostra libreria รจ nuova. Questa sezione ti aiuterร  ad acquisire le competenze basilari di cui hai bisogno per iniziare ad utilizzare ๐Ÿค— Transformers. - **GUIDE PRATICHE** ti mostrerร  come raggiungere obiettivi specifici come fare fine-tuning di un modello pre-allenato per la modellizzazione del linguaggio o come creare una testa per un modello personalizzato. - **GUIDE CONCETTUALI** fornisce discussioni e spiegazioni dei concetti sottostanti alle idee dietro ai modelli, compiti, e la filosofia di progettazione di ๐Ÿค— Transformers. 
- **API** descrive ogni classe e funzione, raggruppate in: - **CLASSI PRINCIPALI** per le classi principali che espongono le API importanti della libreria. - **MODELLI** per le classi e le funzioni relative ad ogni modello implementato all'interno della libreria. - **HELPERS INTERNI** per le classi e le funzioni che utilizziamo internamente. La libreria attualmente contiene implementazioni in JAX, PyTorch e TensorFlow, pesi di modelli pre-allenati, script di utilizzo e strumenti di conversione per i seguenti modelli. ### Modelli supportati <!--This list is updated automatically from the README with _make fix-copies_. Do not update manually! --> 1. **[ALBERT](model_doc/albert)** (da Google Research e l'Istituto Tecnologico di Chicago) rilasciato con il paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), da Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut. 1. **[ALIGN](model_doc/align)** (from Google Research) rilasciato con il paper [Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision](https://arxiv.org/abs/2102.05918) da Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig. 1. **[BART](model_doc/bart)** (da Facebook) rilasciato con il paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) da Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov e Luke Zettlemoyer. 1. **[BARThez](model_doc/barthez)** (da politecnico di ร‰cole) rilasciato con il paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) da Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis. 1. **[BARTpho](model_doc/bartpho)** (da VinAI Research) rilasciato con il paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) da Nguyen Luong Tran, Duong Minh Le e Dat Quoc Nguyen. 1. **[BEiT](model_doc/beit)** (da Microsoft) rilasciato con il paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) da Hangbo Bao, Li Dong, Furu Wei. 1. **[BERT](model_doc/bert)** (da Google) rilasciato con il paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) da Jacob Devlin, Ming-Wei Chang, Kenton Lee e Kristina Toutanova. 1. **[BERTweet](model_doc/bertweet)** (da VinAI Research) rilasciato con il paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) da Dat Quoc Nguyen, Thanh Vu e Anh Tuan Nguyen. 1. **[BERT For Sequence Generation](model_doc/bert-generation)** (da Google) rilasciato con il paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) da Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 1. **[BigBird-RoBERTa](model_doc/big_bird)** (da Google Research) rilasciato con il paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) da Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 1. 
**[BigBird-Pegasus](model_doc/bigbird_pegasus)** (v Google Research) rilasciato con il paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) da Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 1. **[Blenderbot](model_doc/blenderbot)** (da Facebook) rilasciato con il paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) da Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 1. **[BlenderbotSmall](model_doc/blenderbot-small)** (da Facebook) rilasciato con il paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) da Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 1. **[BORT](model_doc/bort)** (da Alexa) rilasciato con il paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) da Adrian de Wynter e Daniel J. Perry. 1. **[ByT5](model_doc/byt5)** (da Google Research) rilasciato con il paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) da Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel. 1. **[CamemBERT](model_doc/camembert)** (da Inria/Facebook/Sorbonne) rilasciato con il paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) da Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suรกrez*, Yoann Dupont, Laurent Romary, ร‰ric Villemonte de la Clergerie, Djamรฉ Seddah e Benoรฎt Sagot. 1. **[CANINE](model_doc/canine)** (da Google Research) rilasciato con il paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) da Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting. 1. **[ConvNeXT](model_doc/convnext)** (da Facebook AI) rilasciato con il paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) da Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie. 1. **[ConvNeXTV2](model_doc/convnextv2)** (da Facebook AI) rilasciato con il paper [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) da Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie. 1. **[CLIP](model_doc/clip)** (da OpenAI) rilasciato con il paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) da Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever. 1. **[ConvBERT](model_doc/convbert)** (da YituTech) rilasciato con il paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) da Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan. 1. 
**[CPM](model_doc/cpm)** (dalla Universitร  di Tsinghua) rilasciato con il paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) da Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun. 1. **[CTRL](model_doc/ctrl)** (da Salesforce) rilasciato con il paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) da Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong e Richard Socher. 1. **[CvT](model_doc/cvt)** (da Microsoft) rilasciato con il paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) da Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang. 1. **[Data2Vec](model_doc/data2vec)** (da Facebook) rilasciato con il paper [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) da Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli. 1. **[DeBERTa](model_doc/deberta)** (da Microsoft) rilasciato con il paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) da Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 1. **[DeBERTa-v2](model_doc/deberta-v2)** (da Microsoft) rilasciato con il paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) da Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 1. **[Decision Transformer](model_doc/decision_transformer)** (da Berkeley/Facebook/Google) rilasciato con il paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) da Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch. 1. **[DiT](model_doc/dit)** (da Microsoft Research) rilasciato con il paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) da Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei. 1. **[DeiT](model_doc/deit)** (da Facebook) rilasciato con il paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) da Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervรฉ Jรฉgou. 1. **[DETR](model_doc/detr)** (da Facebook) rilasciato con il paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) da Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko. 1. **[DialoGPT](model_doc/dialogpt)** (da Microsoft Research) rilasciato con il paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) da Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan. 1. **[DistilBERT](model_doc/distilbert)** (da HuggingFace), rilasciato assieme al paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) da Victor Sanh, Lysandre Debut e Thomas Wolf. 
La stessa tecnica รจ stata applicata per comprimere GPT2 in [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), RoBERTa in [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), Multilingual BERT in [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) and a German version of DistilBERT. 1. **[DPR](model_doc/dpr)** (da Facebook) rilasciato con il paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) da Vladimir Karpukhin, Barlas OฤŸuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, e Wen-tau Yih. 1. **[DPT](master/model_doc/dpt)** (da Intel Labs) rilasciato con il paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) da Renรฉ Ranftl, Alexey Bochkovskiy, Vladlen Koltun. 1. **[EfficientNet](model_doc/efficientnet)** (from Google Research) released with the paper [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan and Quoc V. Le. 1. **[EncoderDecoder](model_doc/encoder-decoder)** (da Google Research) rilasciato con il paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) da Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 1. **[ELECTRA](model_doc/electra)** (da Google Research/Stanford University) rilasciato con il paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) da Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning. 1. **[FlauBERT](model_doc/flaubert)** (da CNRS) rilasciato con il paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) da Hang Le, Loรฏc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoรฎt Crabbรฉ, Laurent Besacier, Didier Schwab. 1. **[FLAVA](model_doc/flava)** (da Facebook AI) rilasciato con il paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) da Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, e Douwe Kiela. 1. **[FNet](model_doc/fnet)** (da Google Research) rilasciato con il paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) da James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon. 1. **[Funnel Transformer](model_doc/funnel)** (da CMU/Google Brain) rilasciato con il paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) da Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le. 1. **[GLPN](model_doc/glpn)** (da KAIST) rilasciato con il paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) da Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim. 1. **[GPT](model_doc/openai-gpt)** (da OpenAI) rilasciato con il paper [Improving Language Understanding by Generative Pre-Training](https://openai.com/research/language-unsupervised/) da Alec Radford, Karthik Narasimhan, Tim Salimans e Ilya Sutskever. 1. 
**[GPT-2](model_doc/gpt2)** (da OpenAI) rilasciato con il paper [Language Models are Unsupervised Multitask Learners](https://openai.com/research/better-language-models/) da Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei e Ilya Sutskever. 1. **[GPT-J](model_doc/gptj)** (da EleutherAI) rilasciato nel repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) da Ben Wang e Aran Komatsuzaki. 1. **[GPT Neo](model_doc/gpt_neo)** (da EleutherAI) rilasciato nel repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) da Sid Black, Stella Biderman, Leo Gao, Phil Wang e Connor Leahy. 1. **[GPT NeoX](model_doc/gpt_neox)** (da EleutherAI) rilasciato con il paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) da Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach 1. **[Hubert](model_doc/hubert)** (da Facebook) rilasciato con il paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) da Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed. 1. **[I-BERT](model_doc/ibert)** (da Berkeley) rilasciato con il paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) da Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer. 1. **[ImageGPT](model_doc/imagegpt)** (da OpenAI) rilasciato con il paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) da Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever. 1. **[LayoutLM](model_doc/layoutlm)** (da Microsoft Research Asia) rilasciato con il paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) da Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou. 1. **[LayoutLMv2](model_doc/layoutlmv2)** (da Microsoft Research Asia) rilasciato con il paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) da Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou. 1. **[LayoutLMv3](model_doc/layoutlmv3)** (da Microsoft Research Asia) rilasciato con il paper [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) da Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei. 1. **[LayoutXLM](model_doc/layoutlxlm)** (da Microsoft Research Asia) rilasciato con il paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) da Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei. 1. **[LED](model_doc/led)** (da AllenAI) rilasciato con il paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) da Iz Beltagy, Matthew E. Peters, Arman Cohan. 1. **[Longformer](model_doc/longformer)** (da AllenAI) rilasciato con il paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) da Iz Beltagy, Matthew E. Peters, Arman Cohan. 1. 
**[LUKE](model_doc/luke)** (da Studio Ousia) rilasciato con il paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) da Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto. 1. **[mLUKE](model_doc/mluke)** (da Studio Ousia) rilasciato con il paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) da Ryokan Ri, Ikuya Yamada, e Yoshimasa Tsuruoka. 1. **[LXMERT](model_doc/lxmert)** (da UNC Chapel Hill) rilasciato con il paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) da Hao Tan e Mohit Bansal. 1. **[M2M100](model_doc/m2m_100)** (da Facebook) rilasciato con il paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) da Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin. 1. **[MarianMT](model_doc/marian)** Modello di machine learning per le traduzioni allenato utilizzando i dati [OPUS](http://opus.nlpl.eu/) di Jรถrg Tiedemann. Il [Framework Marian](https://marian-nmt.github.io/) รจ stato sviluppato dal Microsoft Translator Team. 1. **[Mask2Former](model_doc/mask2former)** (da FAIR e UIUC) rilasciato con il paper [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) da Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, Rohit Girdhar. 1. **[MaskFormer](model_doc/maskformer)** (da Meta e UIUC) rilasciato con il paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) da Bowen Cheng, Alexander G. Schwing, Alexander Kirillov. 1. **[MBart](model_doc/mbart)** (da Facebook) rilasciato con il paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) da Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer. 1. **[MBart-50](model_doc/mbart)** (da Facebook) rilasciato con il paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) da Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan. 1. **[Megatron-BERT](model_doc/megatron-bert)** (da NVIDIA) rilasciato con il paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) da Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper e Bryan Catanzaro. 1. **[Megatron-GPT2](model_doc/megatron_gpt2)** (da NVIDIA) rilasciato con il paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) da Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper e Bryan Catanzaro. 1. **[MPNet](model_doc/mpnet)** (da Microsoft Research) rilasciato con il paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) da Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu. 1. 
**[MT5](model_doc/mt5)** (da Google AI) rilasciato con il paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) da Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel. 1. **[Nystrรถmformer](model_doc/nystromformer)** (dalla Universitร  del Wisconsin - Madison) rilasciato con il paper [Nystrรถmformer: A Nystrรถm-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) da Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh. 1. **[OneFormer](model_doc/oneformer)** (da SHI Labs) rilasciato con il paper [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) da Jitesh Jain, Jiachen Li, MangTik Chiu, Ali Hassani, Nikita Orlov, Humphrey Shi. 1. **[OPT](master/model_doc/opt)** (da Meta AI) rilasciato con il paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) da Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al. 1. **[Pegasus](model_doc/pegasus)** (da Google) rilasciato con il paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) da Jingqing Zhang, Yao Zhao, Mohammad Saleh e Peter J. Liu. 1. **[Perceiver IO](model_doc/perceiver)** (da Deepmind) rilasciato con il paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) da Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hรฉnaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, Joรฃo Carreira. 1. **[PhoBERT](model_doc/phobert)** (da VinAI Research) rilasciato con il paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) da Dat Quoc Nguyen e Anh Tuan Nguyen. 1. **[PLBart](model_doc/plbart)** (da UCLA NLP) rilasciato con il paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) da Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang. 1. **[PoolFormer](model_doc/poolformer)** (da Sea AI Labs) rilasciato con il paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) da Yu, Weihao e Luo, Mi e Zhou, Pan e Si, Chenyang e Zhou, Yichen e Wang, Xinchao e Feng, Jiashi e Yan, Shuicheng. 1. **[ProphetNet](model_doc/prophetnet)** (da Microsoft Research) rilasciato con il paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) da Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang e Ming Zhou. 1. **[QDQBert](model_doc/qdqbert)** (da NVIDIA) rilasciato con il paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) da Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev e Paulius Micikevicius. 1. **[REALM](model_doc/realm.html)** (da Google Research) rilasciato con il paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) da Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat e Ming-Wei Chang. 1. 
**[Reformer](model_doc/reformer)** (da Google Research) rilasciato con il paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) da Nikita Kitaev, ลukasz Kaiser, Anselm Levskaya. 1. **[RemBERT](model_doc/rembert)** (da Google Research) rilasciato con il paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/abs/2010.12821) da Hyung Won Chung, Thibault Fรฉvry, Henry Tsai, M. Johnson, Sebastian Ruder. 1. **[RegNet](model_doc/regnet)** (da META Platforms) rilasciato con il paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) da Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollรกr. 1. **[ResNet](model_doc/resnet)** (da Microsoft Research) rilasciato con il paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) da Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 1. **[RoBERTa](model_doc/roberta)** (da Facebook), rilasciato assieme al paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) da Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov. 1. **[RoFormer](model_doc/roformer)** (da ZhuiyiTechnology), rilasciato assieme al paper [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/abs/2104.09864) da Jianlin Su e Yu Lu e Shengfeng Pan e Bo Wen e Yunfeng Liu. 1. **[SegFormer](model_doc/segformer)** (da NVIDIA) rilasciato con il paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) da Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo. 1. **[SEW](model_doc/sew)** (da ASAPP) rilasciato con il paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) da Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 1. **[SEW-D](model_doc/sew_d)** (da ASAPP) rilasciato con il paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) da Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 1. **[SpeechToTextTransformer](model_doc/speech_to_text)** (da Facebook), rilasciato assieme al paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) da Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino. 1. **[SpeechToTextTransformer2](model_doc/speech_to_text_2)** (da Facebook), rilasciato assieme al paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) da Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau. 1. **[Splinter](model_doc/splinter)** (dalla Universitร  di Tel Aviv), rilasciato assieme al paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) da Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy. 1. **[SqueezeBert](model_doc/squeezebert)** (da Berkeley) rilasciato con il paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) da Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, e Kurt W. Keutzer. 1. 
**[Swin Transformer](model_doc/swin)** (da Microsoft) rilasciato con il paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) da Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo. 1. **[T5](model_doc/t5)** (da Google AI) rilasciato con il paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) da Colin Raffel e Noam Shazeer e Adam Roberts e Katherine Lee e Sharan Narang e Michael Matena e Yanqi Zhou e Wei Li e Peter J. Liu. 1. **[T5v1.1](model_doc/t5v1.1)** (da Google AI) rilasciato nel repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) da Colin Raffel e Noam Shazeer e Adam Roberts e Katherine Lee e Sharan Narang e Michael Matena e Yanqi Zhou e Wei Li e Peter J. Liu. 1. **[TAPAS](model_doc/tapas)** (da Google AI) rilasciato con il paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) da Jonathan Herzig, Paweล‚ Krzysztof Nowak, Thomas Mรผller, Francesco Piccinno e Julian Martin Eisenschlos. 1. **[TAPEX](model_doc/tapex)** (da Microsoft Research) rilasciato con il paper [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) da Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. 1. **[Trajectory Transformer](model_doc/trajectory_transformers)** (dall'Universitร  della California a Berkeley) rilasciato con il paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) da Michael Janner, Qiyang Li, Sergey Levine 1. **[Transformer-XL](model_doc/transfo-xl)** (da Google/CMU) rilasciato con il paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) da Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov. 1. **[TrOCR](model_doc/trocr)** (da Microsoft), rilasciato assieme al paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) da Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei. 1. **[UniSpeech](model_doc/unispeech)** (da Microsoft Research) rilasciato con il paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) da Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang. 1. **[UniSpeechSat](model_doc/unispeech-sat)** (da Microsoft Research) rilasciato con il paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) da Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu. 1. **[VAN](model_doc/van)** (dalle Universitร  di Tsinghua e Nankai) rilasciato con il paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) da Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu. 1. **[ViLT](model_doc/vilt)** (da NAVER AI Lab/Kakao Enterprise/Kakao Brain) rilasciato con il paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) da Wonjae Kim, Bokyung Son, Ildoo Kim. 1. 
**[Vision Transformer (ViT)](model_doc/vit)** (da Google AI) rilasciato con il paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) da Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. 1. **[ViTMAE](model_doc/vit_mae)** (da Meta AI) rilasciato con il paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) da Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollรกr, Ross Girshick. 1. **[VisualBERT](model_doc/visual_bert)** (da UCLA NLP) rilasciato con il paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) da Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang. 1. **[WavLM](model_doc/wavlm)** (da Microsoft Research) rilasciato con il paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) da Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei. 1. **[Wav2Vec2](model_doc/wav2vec2)** (da Facebook AI) rilasciato con il paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) da Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli. 1. **[Wav2Vec2Phoneme](model_doc/wav2vec2_phoneme)** (da Facebook AI) rilasciato con il paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) da Qiantong Xu, Alexei Baevski, Michael Auli. 1. **[XGLM](model_doc/xglm)** (da Facebook AI) rilasciato con il paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) da Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li. 1. **[XLM](model_doc/xlm)** (v Facebook) rilasciato assieme al paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) da Guillaume Lample e Alexis Conneau. 1. **[XLM-ProphetNet](model_doc/xlm-prophetnet)** (da Microsoft Research) rilasciato con il paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) da Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang e Ming Zhou. 1. **[XLM-RoBERTa](model_doc/xlm-roberta)** (da Facebook AI), rilasciato assieme al paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) da Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmรกn, Edouard Grave, Myle Ott, Luke Zettlemoyer e Veselin Stoyanov. 1. **[XLM-RoBERTa-XL](model_doc/xlm-roberta-xl)** (da Facebook AI), rilasciato assieme al paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) da Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau. 1. 
**[XLNet](model_doc/xlnet)** (da Google/CMU) rilasciato con il paper [โ€‹XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) da Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le. 1. **[XLSR-Wav2Vec2](model_doc/xlsr_wav2vec2)** (da Facebook AI) rilasciato con il paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) da Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli. 1. **[XLS-R](model_doc/xls_r)** (da Facebook AI) rilasciato con il paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) da Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli. 1. **[YOLOS](model_doc/yolos)** (dalla Universitร  della scienza e tecnologia di Huazhong) rilasciato con il paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) da Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu. 1. **[YOSO](model_doc/yoso)** (dall'Universitร  del Wisconsin - Madison) rilasciato con il paper [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) da Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh. ### Framework supportati La tabella seguente rappresenta il supporto attuale nella libreria per ognuno di questi modelli, si puรฒ identificare se questi hanno un Python tokenizer (chiamato "slow"). Un tokenizer "fast" supportato dalla libreria ๐Ÿค— Tokenizers, e se hanno supporto in Jax (via Flax), PyTorch, e/o TensorFlow. <!--This table is updated automatically from the auto modules with _make fix-copies_. 
Do not update manually!--> | Model | Tokenizer slow | Tokenizer fast | PyTorch support | TensorFlow support | Flax Support | |:---------------------------:|:--------------:|:--------------:|:---------------:|:------------------:|:------------:| | ALBERT | โœ… | โœ… | โœ… | โœ… | โœ… | | BART | โœ… | โœ… | โœ… | โœ… | โœ… | | BEiT | โŒ | โŒ | โœ… | โŒ | โœ… | | BERT | โœ… | โœ… | โœ… | โœ… | โœ… | | Bert Generation | โœ… | โŒ | โœ… | โŒ | โŒ | | BigBird | โœ… | โœ… | โœ… | โŒ | โœ… | | BigBirdPegasus | โŒ | โŒ | โœ… | โŒ | โŒ | | Blenderbot | โœ… | โœ… | โœ… | โœ… | โœ… | | BlenderbotSmall | โœ… | โœ… | โœ… | โœ… | โœ… | | CamemBERT | โœ… | โœ… | โœ… | โœ… | โŒ | | Canine | โœ… | โŒ | โœ… | โŒ | โŒ | | CLIP | โœ… | โœ… | โœ… | โœ… | โœ… | | ConvBERT | โœ… | โœ… | โœ… | โœ… | โŒ | | ConvNext | โŒ | โŒ | โœ… | โœ… | โŒ | | CTRL | โœ… | โŒ | โœ… | โœ… | โŒ | | CvT | โŒ | โŒ | โœ… | โŒ | โŒ | | Data2VecAudio | โŒ | โŒ | โœ… | โŒ | โŒ | | Data2VecText | โŒ | โŒ | โœ… | โŒ | โŒ | | Data2VecVision | โŒ | โŒ | โœ… | โœ… | โŒ | | DeBERTa | โœ… | โœ… | โœ… | โœ… | โŒ | | DeBERTa-v2 | โœ… | โœ… | โœ… | โœ… | โŒ | | Decision Transformer | โŒ | โŒ | โœ… | โŒ | โŒ | | DeiT | โŒ | โŒ | โœ… | โŒ | โŒ | | DETR | โŒ | โŒ | โœ… | โŒ | โŒ | | DistilBERT | โœ… | โœ… | โœ… | โœ… | โœ… | | DPR | โœ… | โœ… | โœ… | โœ… | โŒ | | DPT | โŒ | โŒ | โœ… | โŒ | โŒ | | ELECTRA | โœ… | โœ… | โœ… | โœ… | โœ… | | Encoder decoder | โŒ | โŒ | โœ… | โœ… | โœ… | | FairSeq Machine-Translation | โœ… | โŒ | โœ… | โŒ | โŒ | | FlauBERT | โœ… | โŒ | โœ… | โœ… | โŒ | | Flava | โŒ | โŒ | โœ… | โŒ | โŒ | | FNet | โœ… | โœ… | โœ… | โŒ | โŒ | | Funnel Transformer | โœ… | โœ… | โœ… | โœ… | โŒ | | GLPN | โŒ | โŒ | โœ… | โŒ | โŒ | | GPT Neo | โŒ | โŒ | โœ… | โŒ | โœ… | | GPT NeoX | โŒ | โœ… | โœ… | โŒ | โŒ | | GPT-J | โŒ | โŒ | โœ… | โœ… | โœ… | | Hubert | โŒ | โŒ | โœ… | โœ… | โŒ | | I-BERT | โŒ | โŒ | โœ… | โŒ | โŒ | | ImageGPT | โŒ | โŒ | โœ… | โŒ | โŒ | | LayoutLM | โœ… | โœ… | โœ… | โœ… | โŒ | | LayoutLMv2 | โœ… | โœ… | โœ… | โŒ | โŒ | | LayoutLMv3 | โœ… | โœ… | โœ… | โœ… | โŒ | | LED | โœ… | โœ… | โœ… | โœ… | โŒ | | Longformer | โœ… | โœ… | โœ… | โœ… | โŒ | | LUKE | โœ… | โŒ | โœ… | โŒ | โŒ | | LXMERT | โœ… | โœ… | โœ… | โœ… | โŒ | | M2M100 | โœ… | โŒ | โœ… | โŒ | โŒ | | Marian | โœ… | โŒ | โœ… | โœ… | โœ… | | MaskFormer | โŒ | โŒ | โœ… | โŒ | โŒ | | mBART | โœ… | โœ… | โœ… | โœ… | โœ… | | MegatronBert | โŒ | โŒ | โœ… | โŒ | โŒ | | MobileBERT | โœ… | โœ… | โœ… | โœ… | โŒ | | MPNet | โœ… | โœ… | โœ… | โœ… | โŒ | | mT5 | โœ… | โœ… | โœ… | โœ… | โœ… | | Nystromformer | โŒ | โŒ | โœ… | โŒ | โŒ | | OpenAI GPT | โœ… | โœ… | โœ… | โœ… | โŒ | | OpenAI GPT-2 | โœ… | โœ… | โœ… | โœ… | โœ… | | OPT | โŒ | โŒ | โœ… | โŒ | โŒ | | Pegasus | โœ… | โœ… | โœ… | โœ… | โœ… | | Perceiver | โœ… | โŒ | โœ… | โŒ | โŒ | | PLBart | โœ… | โŒ | โœ… | โŒ | โŒ | | PoolFormer | โŒ | โŒ | โœ… | โŒ | โŒ | | ProphetNet | โœ… | โŒ | โœ… | โŒ | โŒ | | QDQBert | โŒ | โŒ | โœ… | โŒ | โŒ | | RAG | โœ… | โŒ | โœ… | โœ… | โŒ | | Realm | โœ… | โœ… | โœ… | โŒ | โŒ | | Reformer | โœ… | โœ… | โœ… | โŒ | โŒ | | RegNet | โŒ | โŒ | โœ… | โœ… | โœ… | | RemBERT | โœ… | โœ… | โœ… | โœ… | โŒ | | ResNet | โŒ | โŒ | โœ… | โœ… | โœ… | | RetriBERT | โœ… | โœ… | โœ… | โŒ | โŒ | | RoBERTa | โœ… | โœ… | โœ… | โœ… | โœ… | | RoFormer | โœ… | โœ… | โœ… | โœ… | โœ… | | SegFormer | โŒ | โŒ | โœ… | โŒ | โŒ | | SEW | โŒ | โŒ | โœ… | โŒ | โŒ | 
| SEW-D | โŒ | โŒ | โœ… | โŒ | โŒ | | Speech Encoder decoder | โŒ | โŒ | โœ… | โŒ | โœ… | | Speech2Text | โœ… | โŒ | โœ… | โœ… | โŒ | | Speech2Text2 | โœ… | โŒ | โŒ | โŒ | โŒ | | Splinter | โœ… | โœ… | โœ… | โŒ | โŒ | | SqueezeBERT | โœ… | โœ… | โœ… | โŒ | โŒ | | Swin | โŒ | โŒ | โœ… | โœ… | โŒ | | T5 | โœ… | โœ… | โœ… | โœ… | โœ… | | TAPAS | โœ… | โŒ | โœ… | โœ… | โŒ | | Trajectory Transformer | โŒ | โŒ | โœ… | โŒ | โŒ | | Transformer-XL | โœ… | โŒ | โœ… | โœ… | โŒ | | TrOCR | โŒ | โŒ | โœ… | โŒ | โŒ | | UniSpeech | โŒ | โŒ | โœ… | โŒ | โŒ | | UniSpeechSat | โŒ | โŒ | โœ… | โŒ | โŒ | | VAN | โŒ | โŒ | โœ… | โŒ | โŒ | | ViLT | โŒ | โŒ | โœ… | โŒ | โŒ | | Vision Encoder decoder | โŒ | โŒ | โœ… | โœ… | โœ… | | VisionTextDualEncoder | โŒ | โŒ | โœ… | โŒ | โœ… | | VisualBert | โŒ | โŒ | โœ… | โŒ | โŒ | | ViT | โŒ | โŒ | โœ… | โœ… | โœ… | | ViTMAE | โŒ | โŒ | โœ… | โœ… | โŒ | | Wav2Vec2 | โœ… | โŒ | โœ… | โœ… | โœ… | | Wav2Vec2-Conformer | โŒ | โŒ | โœ… | โŒ | โŒ | | WavLM | โŒ | โŒ | โœ… | โŒ | โŒ | | XGLM | โœ… | โœ… | โœ… | โŒ | โœ… | | XLM | โœ… | โŒ | โœ… | โœ… | โŒ | | XLM-RoBERTa | โœ… | โœ… | โœ… | โœ… | โœ… | | XLM-RoBERTa-XL | โŒ | โŒ | โœ… | โŒ | โŒ | | XLMProphetNet | โœ… | โŒ | โœ… | โŒ | โŒ | | XLNet | โœ… | โœ… | โœ… | โœ… | โŒ | | YOLOS | โŒ | โŒ | โœ… | โŒ | โŒ | | YOSO | โŒ | โŒ | โœ… | โŒ | โŒ | <!-- End table-->
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/it/big_models.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Istanziare un big model Quando vuoi utilizzare un modello preaddestrato (pretrained) molto grande, una sfida è minimizzare l'uso della RAM. Il workflow classico in PyTorch è: 1. Crea il tuo modello con pesi casuali (random weights). 2. Carica i tuoi pesi preaddestrati. 3. Inserisci i pesi preaddestrati nel tuo modello casuale. I passi 1 e 2 richiedono entrambi una versione completa del modello in memoria: in molti casi non è un problema, ma se il modello inizia a pesare diversi GigaBytes, queste due copie possono saturare la nostra RAM. Ancora peggio, se stai usando `torch.distributed` per eseguire l'addestramento (training) distribuito, ogni processo caricherà il modello preaddestrato e memorizzerà queste due copie nella RAM. <Tip> Nota che il modello creato casualmente è inizializzato con tensori "vuoti", che occupano spazio in memoria ma senza riempirlo (quindi i valori casuali sono quelli che si trovavano in questa porzione di memoria in un determinato momento). L'inizializzazione casuale, che segue la distribuzione appropriata per il tipo di modello/parametri istanziato (come la distribuzione normale per le istanze), è eseguita solo dopo il passaggio 3 sui pesi non inizializzati, per essere il più rapida possibile! </Tip> In questa guida esploreremo le soluzioni che Transformers offre per affrontare questo problema. Tieni conto che questa è un'area attualmente in sviluppo attivo, quindi le API spiegate qui possono variare velocemente in futuro. ## Checkpoints condivisi Dalla versione 4.18.0, i checkpoints dei modelli che occupano più di 10GB di spazio vengono automaticamente frammentati in più parti. Invece di avere un unico checkpoint quando utilizzi `model.save_pretrained(save_dir)`, avrai diversi checkpoint parziali (ognuno con dimensione < 10GB) e un indice che mappa i nomi dei parametri ai file in cui sono memorizzati. Puoi controllare la dimensione massima dopo la frammentazione con il parametro `max_shard_size`; nel prossimo esempio useremo un modello di dimensioni normali con frammenti di piccole dimensioni: prendiamo un modello BERT classico. ```py from transformers import AutoModel model = AutoModel.from_pretrained("google-bert/bert-base-cased") ``` Se lo salvi usando [`~PreTrainedModel.save_pretrained`], avrai una nuova cartella con due file: il config del modello e i suoi pesi: ```py >>> import os >>> import tempfile >>> with tempfile.TemporaryDirectory() as tmp_dir: ... model.save_pretrained(tmp_dir) ... print(sorted(os.listdir(tmp_dir))) ['config.json', 'pytorch_model.bin'] ``` Adesso usiamo una dimensione massima di frammentazione di 200MB: ```py >>> with tempfile.TemporaryDirectory() as tmp_dir: ... model.save_pretrained(tmp_dir, max_shard_size="200MB") ...
print(sorted(os.listdir(tmp_dir))) ['config.json', 'pytorch_model-00001-of-00003.bin', 'pytorch_model-00002-of-00003.bin', 'pytorch_model-00003-of-00003.bin', 'pytorch_model.bin.index.json'] ``` In aggiunta alla configurazione del modello, vediamo tre differenti file dei pesi, e un file `index.json` che è il nostro indice. Un checkpoint può essere ricaricato totalmente usando il metodo [`~PreTrainedModel.from_pretrained`]: ```py >>> with tempfile.TemporaryDirectory() as tmp_dir: ... model.save_pretrained(tmp_dir, max_shard_size="200MB") ... new_model = AutoModel.from_pretrained(tmp_dir) ``` Il vantaggio principale di applicare questo metodo per modelli grandi è che durante il passo 2 del workflow illustrato in precedenza, ogni frammento del checkpoint viene caricato dopo il precedente, limitando l'utilizzo della RAM alla dimensione del modello più la dimensione del frammento più grande. Dietro le quinte, il file indice è utilizzato per determinare quali chiavi sono nel checkpoint, e dove i corrispondenti pesi sono memorizzati. Possiamo caricare l'indice come un qualsiasi json e ottenere un dizionario: ```py >>> import json >>> with tempfile.TemporaryDirectory() as tmp_dir: ... model.save_pretrained(tmp_dir, max_shard_size="200MB") ... with open(os.path.join(tmp_dir, "pytorch_model.bin.index.json"), "r") as f: ... index = json.load(f) >>> print(index.keys()) dict_keys(['metadata', 'weight_map']) ``` I metadati consistono solo nella dimensione totale del modello per ora. Abbiamo in programma di aggiungere altre informazioni in futuro: ```py >>> index["metadata"] {'total_size': 433245184} ``` La mappa dei pesi è la parte principale di questo indice, che mappa ogni nome dei parametri (si trova solitamente nei modelli PyTorch come `state_dict`) al file in cui è memorizzato: ```py >>> index["weight_map"] {'embeddings.LayerNorm.bias': 'pytorch_model-00001-of-00003.bin', 'embeddings.LayerNorm.weight': 'pytorch_model-00001-of-00003.bin', ... ``` Se vuoi caricare direttamente un checkpoint frammentato in un modello senza usare [`~PreTrainedModel.from_pretrained`] (come si farebbe con `model.load_state_dict()` per un checkpoint completo) devi usare [`~modeling_utils.load_sharded_checkpoint`]: ```py >>> from transformers.modeling_utils import load_sharded_checkpoint >>> with tempfile.TemporaryDirectory() as tmp_dir: ... model.save_pretrained(tmp_dir, max_shard_size="200MB") ... load_sharded_checkpoint(model, tmp_dir) ``` ## Caricamento low memory Frammentare i checkpoint riduce l'utilizzo di memoria al passo 2 del workflow citato in precedenza, ma per utilizzare questo modello in un ambiente con poca memoria, consigliamo di utilizzare i nostri strumenti basati sulla libreria Accelerate. Per ulteriori informazioni, leggere la seguente guida: [Large model loading using Accelerate](./main_classes/model#large-model-loading)
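Come anticipazione, ecco uno schizzo minimale del caricamento a bassa memoria, assumendo che la libreria Accelerate sia installata (`pip install accelerate`); per i dettagli fai riferimento alla guida linkata sopra:

```py
from transformers import AutoModel

# Con low_cpu_mem_usage=True il modello viene creato con tensori "vuoti" e i pesi
# vengono caricati frammento per frammento, evitando la doppia copia in RAM
# descritta all'inizio di questa guida.
model = AutoModel.from_pretrained(
    "google-bert/bert-base-cased",
    low_cpu_mem_usage=True,
)

# In alternativa (sempre tramite Accelerate), device_map="auto" distribuisce
# automaticamente i pesi sui dispositivi disponibili (GPU, CPU, disco):
# model = AutoModel.from_pretrained("google-bert/bert-base-cased", device_map="auto")
```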
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/it/training.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Fine-tuning di un modello pre-addestrato [[open-in-colab]] Ci sono benefici significativi nell'usare un modello pre-addestrato. Si riducono i costi computazionali, l'impronta di carbonio e ti consente di usare modelli stato dell'arte senza doverli addestrare da zero. ๐Ÿค— Transformers consente l'accesso a migliaia di modelli pre-addestrati per un'ampia gamma di compiti. Quando usi un modello pre-addestrato, lo alleni su un dataset specifico per il tuo compito. Questo รจ conosciuto come fine-tuning, una tecnica di addestramento incredibilmente potente. In questa esercitazione, potrai fare il fine-tuning di un modello pre-addestrato, con un framework di deep learning a tua scelta: * Fine-tuning di un modello pre-addestrato con ๐Ÿค— Transformers [`Trainer`]. * Fine-tuning di un modello pre-addestrato in TensorFlow con Keras. * Fine-tuning di un modello pre-addestrato con PyTorch. <a id='data-processing'></a> ## Preparare un dataset <Youtube id="_BZearw7f0w"/> Prima di poter fare il fine-tuning di un modello pre-addestrato, scarica un dataset e preparalo per l'addestramento. La precedente esercitazione ti ha mostrato come processare i dati per l'addestramento e adesso hai l'opportunitร  di metterti alla prova! Inizia caricando il dataset [Yelp Reviews](https://huggingface.co/datasets/yelp_review_full): ```py >>> from datasets import load_dataset >>> dataset = load_dataset("yelp_review_full") >>> dataset["train"][100] {'label': 0, 'text': 'My expectations for McDonalds are t rarely high. But for one to still fail so spectacularly...that takes something special!\\nThe cashier took my friends\'s order, then promptly ignored me. I had to force myself in front of a cashier who opened his register to wait on the person BEHIND me. I waited over five minutes for a gigantic order that included precisely one kid\'s meal. After watching two people who ordered after me be handed their food, I asked where mine was. The manager started yelling at the cashiers for \\"serving off their orders\\" when they didn\'t have their food. But neither cashier was anywhere near those controls, and the manager was the one serving food to customers and clearing the boards.\\nThe manager was rude when giving me my order. She didn\'t make sure that I had everything ON MY RECEIPT, and never even had the decency to apologize that I felt I was getting poor service.\\nI\'ve eaten at various McDonalds restaurants for over 30 years. I\'ve worked at more than one location. I expect bad days, bad moods, and the occasional mistake. But I have yet to have a decent experience at this store. It will remain a place I avoid unless someone in my party needs to avoid illness from low blood sugar. 
Perhaps I should go back to the racially biased service of Steak n Shake instead!'} ``` Come già sai, hai bisogno di un tokenizer per processare il testo e includere una strategia di padding e truncation per gestire sequenze di lunghezza variabile. Per processare il dataset in un unico passo, usa il metodo [`map`](https://huggingface.co/docs/datasets/process#map) di 🤗 Datasets che applica la funzione di preprocessing all'intero dataset: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased") >>> def tokenize_function(examples): ... return tokenizer(examples["text"], padding="max_length", truncation=True) >>> tokenized_datasets = dataset.map(tokenize_function, batched=True) ``` Se vuoi, puoi creare un sottoinsieme più piccolo del dataset per il fine-tuning così da ridurre il tempo necessario: ```py >>> small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000)) >>> small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000)) ``` <a id='trainer'></a> ## Addestramento <frameworkcontent> <pt> <Youtube id="nvBXf7s7vTI"/> 🤗 Transformers mette a disposizione la classe [`Trainer`] ottimizzata per addestrare modelli 🤗 Transformers, rendendo semplice iniziare l'addestramento senza scrivere manualmente il tuo ciclo di addestramento. L'API [`Trainer`] supporta un'ampia gamma di opzioni e funzionalità di addestramento come logging, gradient accumulation e mixed precision. Inizia caricando il tuo modello e specificando il numero di etichette (labels) attese. Dalla [dataset card](https://huggingface.co/datasets/yelp_review_full#data-fields) del dataset Yelp Review, sai che ci sono cinque etichette: ```py >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-cased", num_labels=5) ``` <Tip> Potresti vedere un warning dato che alcuni dei pesi pre-addestrati non sono stati utilizzati e altri pesi sono stati inizializzati casualmente. Non preoccuparti, è completamente normale! L'head pre-addestrata del modello BERT viene scartata e rimpiazzata da una classification head inizializzata casualmente. Farai il fine-tuning di questa nuova head del modello sul tuo compito di classificazione, trasferendogli la conoscenza del modello pre-addestrato. </Tip> ### Iperparametri per il training Successivamente, crea una classe [`TrainingArguments`] contenente tutti gli iperparametri che si possono regolare nonché le variabili per attivare le differenti opzioni di addestramento. Per questa esercitazione puoi iniziare con gli [iperparametri](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments) di addestramento predefiniti, ma sentiti libero di sperimentare per trovare la configurazione ottimale per te. Specifica dove salvare i checkpoints del tuo addestramento: ```py >>> from transformers import TrainingArguments >>> training_args = TrainingArguments(output_dir="test_trainer") ``` ### Metriche [`Trainer`] non valuta automaticamente le performance del modello durante l'addestramento. Dovrai passare a [`Trainer`] una funzione che calcola e restituisce le metriche.
La libreria ๐Ÿค— Datasets mette a disposizione una semplice funzione [`accuracy`](https://huggingface.co/metrics/accuracy) che puoi caricare con la funzione `load_metric` (guarda questa [esercitazione](https://huggingface.co/docs/datasets/metrics) per maggiori informazioni): ```py >>> import numpy as np >>> from datasets import load_metric >>> metric = load_metric("accuracy") ``` Richiama `compute` su `metric` per calcolare l'accuratezza delle tue previsioni. Prima di passare le tue previsioni a `compute`, hai bisogno di convertirle in logits (ricorda che tutti i modelli ๐Ÿค— Transformers restituiscono logits): ```py >>> def compute_metrics(eval_pred): ... logits, labels = eval_pred ... predictions = np.argmax(logits, axis=-1) ... return metric.compute(predictions=predictions, references=labels) ``` Se preferisci monitorare le tue metriche di valutazione durante il fine-tuning, specifica il parametro `eval_strategy` nei tuoi training arguments per restituire le metriche di valutazione ad ogni epoca di addestramento: ```py >>> from transformers import TrainingArguments, Trainer >>> training_args = TrainingArguments(output_dir="test_trainer", eval_strategy="epoch") ``` ### Trainer Crea un oggetto [`Trainer`] col tuo modello, training arguments, dataset di training e test, e funzione di valutazione: ```py >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=small_train_dataset, ... eval_dataset=small_eval_dataset, ... compute_metrics=compute_metrics, ... ) ``` Poi metti a punto il modello richiamando [`~transformers.Trainer.train`]: ```py >>> trainer.train() ``` </pt> <tf> <a id='keras'></a> <Youtube id="rnTGBy2ax1c"/> I modelli ๐Ÿค— Transformers supportano anche l'addestramento in TensorFlow usando l'API di Keras. ### Convertire dataset nel formato per TensorFlow Il [`DefaultDataCollator`] assembla tensori in lotti su cui il modello si addestrerร . Assicurati di specificare di restituire tensori per TensorFlow in `return_tensors`: ```py >>> from transformers import DefaultDataCollator >>> data_collator = DefaultDataCollator(return_tensors="tf") ``` <Tip> [`Trainer`] usa [`DataCollatorWithPadding`] in maniera predefinita in modo da non dover specificare esplicitamente un collettore di dati. </Tip> Successivamente, converti i datasets tokenizzati in TensorFlow datasets con il metodo [`to_tf_dataset`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.to_tf_dataset). Specifica il tuo input in `columns` e le tue etichette in `label_cols`: ```py >>> tf_train_dataset = small_train_dataset.to_tf_dataset( ... columns=["attention_mask", "input_ids", "token_type_ids"], ... label_cols=["labels"], ... shuffle=True, ... collate_fn=data_collator, ... batch_size=8, ... ) >>> tf_validation_dataset = small_eval_dataset.to_tf_dataset( ... columns=["attention_mask", "input_ids", "token_type_ids"], ... label_cols=["labels"], ... shuffle=False, ... collate_fn=data_collator, ... batch_size=8, ... ) ``` ### Compilazione e addestramento Carica un modello TensorFlow col numero atteso di etichette: ```py >>> import tensorflow as tf >>> from transformers import TFAutoModelForSequenceClassification >>> model = TFAutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-cased", num_labels=5) ``` Poi compila e fai il fine-tuning del tuo modello usando [`fit`](https://keras.io/api/models/model_training_apis/) come faresti con qualsiasi altro modello di Keras: ```py >>> model.compile( ... 
optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5), ... loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), ... metrics=tf.metrics.SparseCategoricalAccuracy(), ... ) >>> model.fit(tf_train_dataset, validation_data=tf_validation_dataset, epochs=3) ``` </tf> </frameworkcontent> <a id='pytorch_native'></a> ## Addestramento in PyTorch nativo <frameworkcontent> <pt> <Youtube id="Dh9CL8fyG80"/> [`Trainer`] si occupa del ciclo di addestramento e ti consente di mettere a punto un modello con una sola riga di codice. Se preferisci scrivere un tuo ciclo di addestramento personalizzato, puoi anche fare il fine-tuning di un modello 🤗 Transformers in PyTorch nativo. A questo punto, potresti avere bisogno di riavviare il tuo notebook o eseguire il seguente codice per liberare un po' di memoria: ```py del model del pytorch_model del trainer torch.cuda.empty_cache() ``` Successivamente, postprocessa manualmente il `tokenized_dataset` per prepararlo all'addestramento. 1. Rimuovi la colonna `text` perché il modello non accetta testo grezzo come input: ```py >>> tokenized_datasets = tokenized_datasets.remove_columns(["text"]) ``` 2. Rinomina la colonna `label` in `labels` perché il modello si aspetta che questo argomento si chiami `labels`: ```py >>> tokenized_datasets = tokenized_datasets.rename_column("label", "labels") ``` 3. Imposta il formato del dataset in modo che restituisca tensori PyTorch invece di liste: ```py >>> tokenized_datasets.set_format("torch") ``` Poi crea un piccolo sottocampione del dataset come visto precedentemente per velocizzare il fine-tuning: ```py >>> small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000)) >>> small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000)) ``` ### DataLoader Crea un `DataLoader` per i tuoi datasets di train e test così puoi iterare sui lotti di dati: ```py >>> from torch.utils.data import DataLoader >>> train_dataloader = DataLoader(small_train_dataset, shuffle=True, batch_size=8) >>> eval_dataloader = DataLoader(small_eval_dataset, batch_size=8) ``` Carica il tuo modello con il numero atteso di etichette: ```py >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-cased", num_labels=5) ``` ### Ottimizzatore e learning rate scheduler Crea un ottimizzatore e il learning rate scheduler per fare il fine-tuning del modello. Usa l'ottimizzatore [`AdamW`](https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html) di PyTorch: ```py >>> from torch.optim import AdamW >>> optimizer = AdamW(model.parameters(), lr=5e-5) ``` Crea il learning rate scheduler predefinito da [`Trainer`]: ```py >>> from transformers import get_scheduler >>> num_epochs = 3 >>> num_training_steps = num_epochs * len(train_dataloader) >>> lr_scheduler = get_scheduler( ... name="linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps ... ) ``` Infine, specifica di usare come `device` una GPU, se ne hai una a disposizione. Altrimenti, l'addestramento su una CPU può richiedere diverse ore invece di un paio di minuti. ```py >>> import torch >>> device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") >>> model.to(device) ``` <Tip> Ottieni l'accesso gratuito a una GPU sul cloud se non ne possiedi una usando un notebook sul web come [Colaboratory](https://colab.research.google.com/) o [SageMaker StudioLab](https://studiolab.sagemaker.aws/).
</Tip> Ottimo, adesso possiamo addestrare! ๐Ÿฅณ ### Training loop Per tenere traccia dei tuoi progressi durante l'addestramento, usa la libreria [tqdm](https://tqdm.github.io/) per aggiungere una progress bar sopra il numero dei passi di addestramento: ```py >>> from tqdm.auto import tqdm >>> progress_bar = tqdm(range(num_training_steps)) >>> model.train() >>> for epoch in range(num_epochs): ... for batch in train_dataloader: ... batch = {k: v.to(device) for k, v in batch.items()} ... outputs = model(**batch) ... loss = outputs.loss ... loss.backward() ... optimizer.step() ... lr_scheduler.step() ... optimizer.zero_grad() ... progress_bar.update(1) ``` ### Metriche Proprio come รจ necessario aggiungere una funzione di valutazione del [`Trainer`], รจ necessario fare lo stesso quando si scrive il proprio ciclo di addestramento. Ma invece di calcolare e riportare la metrica alla fine di ogni epoca, questa volta accumulerai tutti i batch con [`add_batch`](https://huggingface.co/docs/datasets/package_reference/main_classes?highlight=add_batch#datasets.Metric.add_batch) e calcolerai la metrica alla fine. ```py >>> metric = load_metric("accuracy") >>> model.eval() >>> for batch in eval_dataloader: ... batch = {k: v.to(device) for k, v in batch.items()} ... with torch.no_grad(): ... outputs = model(**batch) ... logits = outputs.logits ... predictions = torch.argmax(logits, dim=-1) ... metric.add_batch(predictions=predictions, references=batch["labels"]) >>> metric.compute() ``` </pt> </frameworkcontent> <a id='additional-resources'></a> ## Altre risorse Per altri esempi sul fine-tuning, fai riferimento a: - [๐Ÿค— Transformers Examples](https://github.com/huggingface/transformers/tree/main/examples) include scripts per addestrare compiti comuni di NLP in PyTorch e TensorFlow. - [๐Ÿค— Transformers Notebooks](notebooks) contiene diversi notebooks su come mettere a punto un modello per compiti specifici in PyTorch e TensorFlow.
mavonic_private_repos/transformers/docs/source/it/migration.md
<!--- Copyright 2020 The HuggingFace Team. Tutti i diritti riservati.

Concesso in licenza in base alla Licenza Apache, Versione 2.0 (la "Licenza"); non รจ possibile utilizzare questo file se non in conformitร  con la Licenza. รˆ possibile ottenere una copia della Licenza all'indirizzo

http://www.apache.org/licenses/LICENSE-2.0

A meno che non sia richiesto dalla legge applicabile o concordato per iscritto, il software distribuito con la Licenza รจ distribuito su BASE "COSรŒ COM'รˆ", SENZA GARANZIE O CONDIZIONI DI ALCUN TIPO, espresse o implicite. Per la lingua specifica vedi la Licenza che regola le autorizzazioni e le limitazioni ai sensi della STESSA.

โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Migrazione da pacchetti precedenti

## Migrazione da transformers `v3.x` a `v4.x`

Un paio di modifiche sono state introdotte nel passaggio dalla versione 3 alla versione 4. Di seguito รจ riportato un riepilogo delle modifiche previste:

#### 1. AutoTokenizer e pipeline ora utilizzano tokenizer veloci (rust) per impostazione predefinita.

I tokenizer python e rust hanno all'incirca le stesse API, ma i tokenizer rust hanno un set di funzionalitร  piรน completo.

Ciรฒ introduce due modifiche sostanziali:
- La gestione dei token in overflow tra i tokenizer Python e Rust รจ diversa.
- I tokenizer rust non accettano numeri interi nei metodi di codifica.

##### Come ottenere lo stesso comportamento di v3.x in v4.x

- Le pipeline ora contengono funzionalitร  aggiuntive pronte all'uso. Vedi la [pipeline di classificazione dei token con il flag `grouped_entities`](main_classes/pipelines#transformers.TokenClassificationPipeline).
- Gli auto-tokenizer ora restituiscono tokenizer rust. Per ottenere invece i tokenizer python, l'utente deve usare il flag `use_fast` impostandolo a `False`:

Nella versione `v3.x`:
```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")
```
per ottenere lo stesso nella versione `v4.x`:
```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased", use_fast=False)
```

#### 2. SentencePiece รจ stato rimosso dalle dipendenze richieste

Il requisito sulla dipendenza SentencePiece รจ stato rimosso da `setup.py`. รˆ stato fatto per avere un canale su anaconda cloud senza basarsi su `conda-forge`. Ciรฒ significa che i tokenizer che dipendono dalla libreria SentencePiece non saranno disponibili con un'installazione standard di `transformers`.

Ciรฒ include le versioni **lente** di:
- `XLNetTokenizer`
- `AlbertTokenizer`
- `CamembertTokenizer`
- `MBartTokenizer`
- `PegasusTokenizer`
- `T5Tokenizer`
- `ReformerTokenizer`
- `XLMRobertaTokenizer`

##### Come ottenere lo stesso comportamento della v3.x nella v4.x

Per ottenere lo stesso comportamento della versione `v3.x`, devi installare anche `sentencepiece`:

Nella versione `v3.x`:
```bash
pip install transformers
```
per ottenere lo stesso nella versione `v4.x`:
```bash
pip install transformers[sentencepiece]
```
o
```bash
pip install transformers sentencepiece
```

#### 3. L'architettura delle repo รจ stata aggiornata in modo che ogni modello abbia la propria cartella

Con lโ€™aggiunta di nuovi modelli, il numero di file nella cartella `src/transformers` continua a crescere e diventa piรน difficile da navigare e capire.
Abbiamo fatto la scelta di inserire ogni modello e i file che lo accompagnano nelle proprie sottocartelle. Si tratta di una modifica sostanziale in quanto l'importazione di layer intermedi utilizzando direttamente il modulo di un modello deve essere eseguita tramite un percorso diverso. ##### Come ottenere lo stesso comportamento della v3.x nella v4.x Per ottenere lo stesso comportamento della versione `v3.x`, devi aggiornare il percorso utilizzato per accedere ai layer. Nella versione `v3.x`: ```bash from transformers.modeling_bert import BertLayer ``` per ottenere lo stesso nella versione `v4.x`: ```bash from transformers.models.bert.modeling_bert import BertLayer ``` #### 4. Impostare l'argomento `return_dict` su `True` per impostazione predefinita L'[argomento `return_dict`](main_classes/output) abilita la restituzione di oggetti python dict-like contenenti gli output del modello, invece delle tuple standard. Questo oggetto รจ self-documented poichรฉ le chiavi possono essere utilizzate per recuperare valori, comportandosi anche come una tupla e gli utenti possono recuperare oggetti per indexing o slicing. Questa รจ una modifica sostanziale poichรฉ la tupla non puรฒ essere decompressa: `value0, value1 = outputs` non funzionerร . ##### Come ottenere lo stesso comportamento della v3.x nella v4.x Per ottenere lo stesso comportamento della versione `v3.x`, specifica l'argomento `return_dict` come `False`, sia nella configurazione del modello che nel passaggio successivo. Nella versione `v3.x`: ```bash model = BertModel.from_pretrained("google-bert/bert-base-cased") outputs = model(**inputs) ``` per ottenere lo stesso nella versione `v4.x`: ```bash model = BertModel.from_pretrained("google-bert/bert-base-cased") outputs = model(**inputs, return_dict=False) ``` o ```bash model = BertModel.from_pretrained("google-bert/bert-base-cased", return_dict=False) outputs = model(**inputs) ``` #### 5. Rimozione di alcuni attributi deprecati Gli attributi sono stati rimossi se deprecati da almeno un mese. L'elenco completo degli attributi obsoleti รจ disponibile in [#8604](https://github.com/huggingface/transformers/pull/8604). Ecco un elenco di questi attributi/metodi/argomenti e quali dovrebbero essere le loro sostituzioni: In diversi modelli, le etichette diventano coerenti con gli altri modelli: - `masked_lm_labels` diventa `labels` in `AlbertForMaskedLM` e `AlbertForPreTraining`. - `masked_lm_labels` diventa `labels` in `BertForMaskedLM` e `BertForPreTraining`. - `masked_lm_labels` diventa `labels` in `DistilBertForMaskedLM`. - `masked_lm_labels` diventa `labels` in `ElectraForMaskedLM`. - `masked_lm_labels` diventa `labels` in `LongformerForMaskedLM`. - `masked_lm_labels` diventa `labels` in `MobileBertForMaskedLM`. - `masked_lm_labels` diventa `labels` in `RobertaForMaskedLM`. - `lm_labels` diventa `labels` in `BartForConditionalGeneration`. - `lm_labels` diventa `labels` in `GPT2DoubleHeadsModel`. - `lm_labels` diventa `labels` in `OpenAIGPTDoubleHeadsModel`. - `lm_labels` diventa `labels` in `T5ForConditionalGeneration`. In diversi modelli, il meccanismo di memorizzazione nella cache diventa coerente con gli altri: - `decoder_cached_states` diventa `past_key_values` in tutti i modelli BART-like, FSMT e T5. - `decoder_past_key_values` diventa `past_key_values` in tutti i modelli BART-like, FSMT e T5. - `past` diventa `past_key_values` in tutti i modelli CTRL. - `past` diventa `past_key_values` in tutti i modelli GPT-2. 
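A titolo puramente illustrativo, ecco uno schizzo minimale di come aggiornare una chiamata che usava uno degli argomenti deprecati elencati sopra; gli id dei token e le etichette qui sotto sono fittizi e servono solo a rendere l'esempio eseguibile:

```python
import torch
from transformers import BertForMaskedLM

model = BertForMaskedLM.from_pretrained("google-bert/bert-base-uncased")

# id fittizi, a puro scopo illustrativo: [CLS], un token qualunque, [MASK], [SEP]
input_ids = torch.tensor([[101, 7592, 103, 102]])
# -100 = posizioni ignorate nel calcolo della loss
labels = torch.tensor([[-100, -100, 2088, -100]])

# v3.x (deprecato): outputs = model(input_ids, masked_lm_labels=labels)
# v4.x: l'argomento si chiama semplicemente `labels`
outputs = model(input_ids, labels=labels)
loss = outputs.loss
```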
Per quanto riguarda le classi tokenizer: - L'attributo tokenizer `max_len` diventa `model_max_length`. - L'attributo tokenizer `return_lengths` diventa `return_length`. - L'argomento di codifica del tokenizer `is_pretokenized` diventa `is_split_into_words`. Per quanto riguarda la classe `Trainer`: - L'argomento `tb_writer` di `Trainer` รจ stato rimosso in favore della funzione richiamabile `TensorBoardCallback(tb_writer=...)`. - L'argomento `prediction_loss_only` di `Trainer` รจ stato rimosso in favore dell'argomento di classe `args.prediction_loss_only`. - L'attributo `data_collator` di `Trainer` sarร  richiamabile. - Il metodo `_log` di `Trainer` รจ deprecato a favore di `log`. - Il metodo `_training_step` di `Trainer` รจ deprecato a favore di `training_step`. - Il metodo `_prediction_loop` di `Trainer` รจ deprecato a favore di `prediction_loop`. - Il metodo `is_local_master` di `Trainer` รจ deprecato a favore di `is_local_process_zero`. - Il metodo `is_world_master` di `Trainer` รจ deprecato a favore di `is_world_process_zero`. Per quanto riguarda la classe `TrainingArguments`: - L'argomento `evaluate_during_training` di `TrainingArguments` รจ deprecato a favore di `eval_strategy`. Per quanto riguarda il modello Transfo-XL: - L'attributo di configurazione `tie_weight` di Transfo-XL diventa `tie_words_embeddings`. - Il metodo di modellazione `reset_length` di Transfo-XL diventa `reset_memory_length`. Per quanto riguarda le pipeline: - L'argomento `topk` di `FillMaskPipeline` diventa `top_k`. ## Passaggio da pytorch-transformers a ๐Ÿค— Transformers Ecco un breve riepilogo di ciรฒ a cui prestare attenzione durante il passaggio da `pytorch-transformers` a ๐Ÿค— Transformers. ### Lโ€™ordine posizionale di alcune parole chiave di input dei modelli (`attention_mask`, `token_type_ids`...) รจ cambiato Per usare Torchscript (vedi #1010, #1204 e #1195) l'ordine specifico delle **parole chiave di input** di alcuni modelli (`attention_mask`, `token_type_ids`...) รจ stato modificato. Se inizializzavi i modelli usando parole chiave per gli argomenti, ad esempio `model(inputs_ids, attention_mask=attention_mask, token_type_ids=token_type_ids)`, questo non dovrebbe causare alcun cambiamento. Se inizializzavi i modelli con input posizionali per gli argomenti, ad esempio `model(inputs_ids, attention_mask, token_type_ids)`, potrebbe essere necessario ricontrollare l'ordine esatto degli argomenti di input. ## Migrazione da pytorch-pretrained-bert Ecco un breve riepilogo di ciรฒ a cui prestare attenzione durante la migrazione da `pytorch-pretrained-bert` a ๐Ÿค— Transformers ### I modelli restituiscono sempre `tuple` La principale modifica di rilievo durante la migrazione da `pytorch-pretrained-bert` a ๐Ÿค— Transformers รจ che il metodo dei modelli di previsione dร  sempre una `tupla` con vari elementi a seconda del modello e dei parametri di configurazione. Il contenuto esatto delle tuple per ciascun modello รจ mostrato in dettaglio nelle docstring dei modelli e nella [documentazione](https://huggingface.co/transformers/). In quasi tutti i casi, andrร  bene prendendo il primo elemento dell'output come quello che avresti precedentemente utilizzato in `pytorch-pretrained-bert`. 
Ecco un esempio di conversione da `pytorch-pretrained-bert` a ๐Ÿค— Transformers per un modello di classificazione `BertForSequenceClassification`: ```python # Carichiamo il nostro modello model = BertForSequenceClassification.from_pretrained("google-bert/bert-base-uncased") # Se usavi questa riga in pytorch-pretrained-bert : loss = model(input_ids, labels=labels) # Ora usa questa riga in ๐Ÿค— Transformers per estrarre la perdita dalla tupla di output: outputs = model(input_ids, labels=labels) loss = outputs[0] # In ๐Ÿค— Transformers puoi anche avere accesso ai logit: loss, logits = outputs[:2] # Ed anche agli attention weight se configuri il modello per restituirli (e anche altri output, vedi le docstring e la documentazione) model = BertForSequenceClassification.from_pretrained(" google-bert/bert-base-uncased", output_attentions=True) outputs = model(input_ids, labels=labels) loss, logits, attentions = outputs ``` ### Serializzazione Modifica sostanziale nel metodo `from_pretrained()`: 1. I modelli sono ora impostati in modalitร  di valutazione in maniera predefinita quando usi il metodo `from_pretrained()`. Per addestrarli non dimenticare di riportarli in modalitร  di addestramento (`model.train()`) per attivare i moduli di dropout. 2. Gli argomenti aggiuntivi `*inputs` e `**kwargs` forniti al metodo `from_pretrained()` venivano passati direttamente al metodo `__init__()` della classe sottostante del modello. Ora sono usati per aggiornare prima l'attributo di configurazione del modello, che puรฒ non funzionare con le classi del modello derivate costruite basandosi sui precedenti esempi di `BertForSequenceClassification`. Piรน precisamente, gli argomenti posizionali `*inputs` forniti a `from_pretrained()` vengono inoltrati direttamente al metodo `__init__()` del modello mentre gli argomenti keyword `**kwargs` (i) che corrispondono agli attributi della classe di configurazione, vengono utilizzati per aggiornare tali attributi (ii) che non corrispondono ad alcun attributo della classe di configurazione, vengono inoltrati al metodo `__init__()`. Inoltre, sebbene non si tratti di una modifica sostanziale, i metodi di serializzazione sono stati standardizzati e probabilmente dovresti passare al nuovo metodo `save_pretrained(save_directory)` se prima usavi qualsiasi altro metodo di serializzazione. 
Ecco un esempio: ```python ### Carichiamo un modello e un tokenizer model = BertForSequenceClassification.from_pretrained("google-bert/bert-base-uncased") tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased") ### Facciamo fare alcune cose al nostro modello e tokenizer # Es: aggiungiamo nuovi token al vocabolario e agli embending del nostro modello tokenizer.add_tokens(["[SPECIAL_TOKEN_1]", "[SPECIAL_TOKEN_2]"]) model.resize_token_embeddings(len(tokenizer)) # Alleniamo il nostro modello train(model) ### Ora salviamo il nostro modello e il tokenizer in una cartella model.save_pretrained("./my_saved_model_directory/") tokenizer.save_pretrained("./my_saved_model_directory/") ### Ricarichiamo il modello e il tokenizer model = BertForSequenceClassification.from_pretrained("./my_saved_model_directory/") tokenizer = BertTokenizer.from_pretrained("./my_saved_model_directory/") ``` ### Ottimizzatori: BertAdam e OpenAIAdam ora sono AdamW, lo scheduling รจ quello standard PyTorch I due ottimizzatori precedenti inclusi, `BertAdam` e `OpenAIAdam`, sono stati sostituiti da un singolo `AdamW` che presenta alcune differenze: - implementa solo la correzione del weights decay, - lo scheduling ora รจ esterno (vedi sotto), - anche il gradient clipping ora รจ esterno (vedi sotto). Il nuovo ottimizzatore `AdamW` corrisponde alle API di `Adam` di PyTorch e ti consente di utilizzare metodi PyTorch o apex per lo scheduling e il clipping. Lo scheduling รจ ora standard [PyTorch learning rate schedulers](https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate) e non fanno piรน parte dell'ottimizzatore. Ecco un esempio di linear warmup e decay con `BertAdam` e con `AdamW`: ```python # Parametri: lr = 1e-3 max_grad_norm = 1.0 num_training_steps = 1000 num_warmup_steps = 100 warmup_proportion = float( num_warmup_steps) / float(num_training_steps) # 0.1 ### In precedenza l'ottimizzatore BertAdam veniva istanziato in questo modo: optimizer = BertAdam( model.parameters(), lr=lr, schedule="warmup_linear", warmup=warmup_proportion, num_training_steps=num_training_steps, ) ### e usato in questo modo: for batch in train_data: loss = model(batch) loss.backward() optimizer.step() ### In ๐Ÿค— Transformers, ottimizzatore e schedule sono divisi e usati in questo modo: optimizer = AdamW( model.parameters(), lr=lr, correct_bias=False ) # Per riprodurre il comportamento specifico di BertAdam impostare correct_bias=False scheduler = get_linear_schedule_with_warmup( optimizer, num_warmup_steps=num_warmup_steps, num_training_steps=num_training_steps ) # PyTorch scheduler ### e va usato cosรฌ: for batch in train_data: loss = model(batch) loss.backward() torch.nn.utils.clip_grad_norm_( model.parameters(), max_grad_norm ) # Gradient clipping non รจ piรน in AdamW (quindi puoi usare amp senza problemi) optimizer.step() scheduler.step() ```
mavonic_private_repos/transformers/docs/source/it/serialization.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Esporta modelli ๐Ÿค— Transformers

Se devi implementare modelli ๐Ÿค— Transformers in ambienti di produzione, ti consigliamo di esportarli in un formato serializzato che puรฒ essere caricato ed eseguito su runtime e hardware specializzati. In questa guida ti mostreremo come esportare modelli ๐Ÿค— Transformers in due formati ampiamente utilizzati: ONNX e TorchScript.

Una volta esportato, un modello puรฒ essere ottimizzato per l'inferenza tramite tecniche come la quantizzazione e il pruning. Se sei interessato a ottimizzare i tuoi modelli per l'esecuzione con la massima efficienza, dai un'occhiata a [๐Ÿค— Optimum library](https://github.com/huggingface/optimum).

## ONNX

Il progetto [ONNX (Open Neural Network eXchange)](http://onnx.ai) รจ uno standard aperto che definisce un insieme comune di operatori e un formato di file comune per rappresentare modelli di deep learning in un'ampia varietร  di framework, tra cui PyTorch e TensorFlow. Quando un modello viene esportato nel formato ONNX, questi operatori sono usati per costruire un grafico computazionale (spesso chiamato _rappresentazione intermedia_) che rappresenta il flusso di dati attraverso la rete neurale.

Esponendo un grafico con operatori e tipi di dati standardizzati, ONNX rende piรน facile passare da un framework all'altro. Ad esempio, un modello allenato in PyTorch puรฒ essere esportato in formato ONNX e quindi importato in TensorFlow (e viceversa).

๐Ÿค— Transformers fornisce un pacchetto `transformers.onnx` che ti consente di convertire i checkpoint del modello in un grafico ONNX sfruttando gli oggetti di configurazione. Questi oggetti di configurazione sono giร  pronti per una serie di architetture di modelli e sono progettati per essere facilmente estensibili ad altre architetture.

Le configurazioni pronte includono le seguenti architetture:

<!--This table is automatically generated by `make fix-copies`, do not fill manually!-->

- ALBERT
- BART
- BEiT
- BERT
- BigBird
- BigBird-Pegasus
- Blenderbot
- BlenderbotSmall
- CamemBERT
- ConvBERT
- Data2VecText
- Data2VecVision
- DeiT
- DistilBERT
- ELECTRA
- FlauBERT
- GPT Neo
- GPT-J
- I-BERT
- LayoutLM
- M2M100
- Marian
- mBART
- MobileBERT
- OpenAI GPT-2
- Perceiver
- PLBart
- RoBERTa
- RoFormer
- SqueezeBERT
- T5
- ViT
- XLM
- XLM-RoBERTa
- XLM-RoBERTa-XL

Nelle prossime due sezioni ti mostreremo come:

* Esportare un modello supportato usando il pacchetto `transformers.onnx`.
* Esportare un modello personalizzato per un'architettura non supportata.
### Esportazione di un modello in ONNX Per esportare un modello ๐Ÿค— Transformers in ONNX, dovrai prima installarne alcune dipendenze extra: ```bash pip install transformers[onnx] ``` Il pacchetto `transformers.onnx` puรฒ essere usato come modulo Python: ```bash python -m transformers.onnx --help usage: Hugging Face Transformers ONNX exporter [-h] -m MODEL [--feature {causal-lm, ...}] [--opset OPSET] [--atol ATOL] output positional arguments: output Path indicating where to store generated ONNX model. optional arguments: -h, --help show this help message and exit -m MODEL, --model MODEL Model ID on huggingface.co or path on disk to load model from. --feature {causal-lm, ...} The type of features to export the model with. --opset OPSET ONNX opset version to export the model with. --atol ATOL Absolute difference tolerance when validating the model. ``` L'esportazione di un checkpoint utilizzando una configurazione giร  pronta puรฒ essere eseguita come segue: ```bash python -m transformers.onnx --model=distilbert/distilbert-base-uncased onnx/ ``` che dovrebbe mostrare i seguenti log: ```bash Validating ONNX model... -[โœ“] ONNX model output names match reference model ({'last_hidden_state'}) - Validating ONNX Model output "last_hidden_state": -[โœ“] (2, 8, 768) matches (2, 8, 768) -[โœ“] all values close (atol: 1e-05) All good, model saved at: onnx/model.onnx ``` Questo esporta un grafico ONNX del checkpoint definito dall'argomento `--model`. In questo esempio รจ `distilbert/distilbert-base-uncased`, ma puรฒ essere qualsiasi checkpoint Hugging Face Hub o uno memorizzato localmente. Il file risultante `model.onnx` puรฒ quindi essere eseguito su uno dei [tanti acceleratori](https://onnx.ai/supported-tools.html#deployModel) che supportano il lo standard ONNX. Ad esempio, possiamo caricare ed eseguire il modello con [ONNX Runtime](https://onnxruntime.ai/) come segue: ```python >>> from transformers import AutoTokenizer >>> from onnxruntime import InferenceSession >>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased") >>> session = InferenceSession("onnx/model.onnx") >>> # ONNX Runtime expects NumPy arrays as input >>> inputs = tokenizer("Using DistilBERT with ONNX Runtime!", return_tensors="np") >>> outputs = session.run(output_names=["last_hidden_state"], input_feed=dict(inputs)) ``` I nomi di output richiesti (cioรจ `["last_hidden_state"]`) possono essere ottenuti dando un'occhiata alla configurazione ONNX di ogni modello. Ad esempio, per DistilBERT abbiamo: ```python >>> from transformers.models.distilbert import DistilBertConfig, DistilBertOnnxConfig >>> config = DistilBertConfig() >>> onnx_config = DistilBertOnnxConfig(config) >>> print(list(onnx_config.outputs.keys())) ["last_hidden_state"] ``` Il processo รจ identico per i checkpoint TensorFlow sull'hub. Ad esempio, noi possiamo esportare un checkpoint TensorFlow puro da [Keras organizzazione](https://huggingface.co/keras-io) come segue: ```bash python -m transformers.onnx --model=keras-io/transformers-qa onnx/ ``` Per esportare un modello memorizzato localmente, devi disporre dei pesi del modello e file tokenizer memorizzati in una directory. 
Ad esempio, possiamo caricare e salvare un checkpoint come segue: <frameworkcontent> <pt> ```python >>> from transformers import AutoTokenizer, AutoModelForSequenceClassification >>> # Load tokenizer and PyTorch weights form the Hub >>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased") >>> pt_model = AutoModelForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased") >>> # Save to disk >>> tokenizer.save_pretrained("local-pt-checkpoint") >>> pt_model.save_pretrained("local-pt-checkpoint") ``` Una volta salvato il checkpoint, possiamo esportarlo su ONNX puntando l'argomento `--model` del pacchetto `transformers.onnx` nella directory desiderata: ```bash python -m transformers.onnx --model=local-pt-checkpoint onnx/ ``` </pt> <tf> ```python >>> from transformers import AutoTokenizer, TFAutoModelForSequenceClassification >>> # Load tokenizer and TensorFlow weights from the Hub >>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased") >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased") >>> # Save to disk >>> tokenizer.save_pretrained("local-tf-checkpoint") >>> tf_model.save_pretrained("local-tf-checkpoint") ``` Once the checkpoint is saved, we can export it to ONNX by pointing the `--model` argument of the `transformers.onnx` package to the desired directory: ```bash python -m transformers.onnx --model=local-tf-checkpoint onnx/ ``` </tf> </frameworkcontent> ### Selezione delle caratteristiche per diverse topologie di modello Ogni configurazione giร  pronta viene fornita con una serie di _caratteristiche_ che ti consentono di esportare modelli per diversi tipi di topologie o attivitร . Come mostrato nella tabella di seguito, ogni caratteristica รจ associata a una diversa Auto Class: | Caratteristica | Auto Class | | ------------------------------------ | ------------------------------------ | | `causal-lm`, `causal-lm-with-past` | `AutoModelForCausalLM` | | `default`, `default-with-past` | `AutoModel` | | `masked-lm` | `AutoModelForMaskedLM` | | `question-answering` | `AutoModelForQuestionAnswering` | | `seq2seq-lm`, `seq2seq-lm-with-past` | `AutoModelForSeq2SeqLM` | | `sequence-classification` | `AutoModelForSequenceClassification` | | `token-classification` | `AutoModelForTokenClassification` | Per ciascuna configurazione, puoi trovare l'elenco delle funzionalitร  supportate tramite il `FeaturesManager`. Ad esempio, per DistilBERT abbiamo: ```python >>> from transformers.onnx.features import FeaturesManager >>> distilbert_features = list(FeaturesManager.get_supported_features_for_model_type("distilbert").keys()) >>> print(distilbert_features) ["default", "masked-lm", "causal-lm", "sequence-classification", "token-classification", "question-answering"] ``` Puoi quindi passare una di queste funzionalitร  all'argomento `--feature` nel pacchetto `transformers.onnx`. Ad esempio, per esportare un modello di classificazione del testo possiamo scegliere un modello ottimizzato dall'Hub ed eseguire: ```bash python -m transformers.onnx --model=distilbert/distilbert-base-uncased-finetuned-sst-2-english \ --feature=sequence-classification onnx/ ``` che visualizzerร  i seguenti registri: ```bash Validating ONNX model... 
-[โœ“] ONNX model output names match reference model ({'logits'})
    - Validating ONNX Model output "logits":
        -[โœ“] (2, 2) matches (2, 2)
        -[โœ“] all values close (atol: 1e-05)
All good, model saved at: onnx/model.onnx
```

Puoi notare che in questo caso i nomi di output del modello ottimizzato sono `logits` invece di `last_hidden_state`, che abbiamo visto con il checkpoint `distilbert/distilbert-base-uncased` precedente. Questo รจ previsto, visto che il modello ottimizzato ha una testa di classificazione per sequenze.

<Tip>

Le caratteristiche che hanno un suffisso `with-past` (ad es. `causal-lm-with-past`) corrispondono a topologie di modello con stati nascosti precalcolati (chiavi e valori nei blocchi di attenzione) che possono essere utilizzati per la decodifica autoregressiva veloce.

</Tip>

### Esportazione di un modello per un'architettura non supportata

Se desideri esportare un modello la cui architettura non รจ nativamente supportata dalla libreria, ci sono tre passaggi principali da seguire:

1. Implementare una configurazione ONNX personalizzata.
2. Esportare il modello in ONNX.
3. Convalidare gli output di PyTorch e dei modelli esportati.

In questa sezione vedremo come DistilBERT รจ stato implementato, per mostrare cosa รจ coinvolto in ogni passaggio.

#### Implementazione di una configurazione ONNX personalizzata

Iniziamo con l'oggetto di configurazione ONNX. Forniamo tre classi astratte da cui ereditare, a seconda del tipo di architettura del modello che desideri esportare:

* I modelli basati su encoder ereditano da [`~onnx.config.OnnxConfig`]
* I modelli basati su decoder ereditano da [`~onnx.config.OnnxConfigWithPast`]
* I modelli encoder-decoder ereditano da [`~onnx.config.OnnxSeq2SeqConfigWithPast`]

<Tip>

Un buon modo per implementare una configurazione ONNX personalizzata รจ guardare l'implementazione esistente nel file `configuration_<model_name>.py` di un'architettura simile.

</Tip>

Poichรฉ DistilBERT รจ un modello basato su encoder, la sua configurazione eredita da `OnnxConfig`:

```python
>>> from typing import Mapping, OrderedDict
>>> from transformers.onnx import OnnxConfig


>>> class DistilBertOnnxConfig(OnnxConfig):
...     @property
...     def inputs(self) -> Mapping[str, Mapping[int, str]]:
...         return OrderedDict(
...             [
...                 ("input_ids", {0: "batch", 1: "sequence"}),
...                 ("attention_mask", {0: "batch", 1: "sequence"}),
...             ]
...         )
```

Ogni oggetto di configurazione deve implementare la proprietร  `inputs` e restituire una mappatura, dove ogni chiave corrisponde a un input previsto e ogni valore indica l'asse di quell'input. Per DistilBERT, possiamo vedere che sono richiesti due input: `input_ids` e `attention_mask`. Questi input hanno la stessa forma `(batch_size, sequence_length)`, per questo motivo vediamo gli stessi assi usati nella configurazione.

<Tip>

Puoi notare che la proprietร  `inputs` per `DistilBertOnnxConfig` restituisce un `OrderedDict`. Ciรฒ garantisce che gli input corrispondano alla loro posizione relativa all'interno del metodo `PreTrainedModel.forward()` durante il tracciamento del grafico. Raccomandiamo di usare un `OrderedDict` per le proprietร  `inputs` e `outputs` quando si implementano configurazioni ONNX personalizzate.
</Tip>

Dopo aver implementato una configurazione ONNX, รจ possibile istanziarla fornendole la configurazione del modello base come segue:

```python
>>> from transformers import AutoConfig

>>> config = AutoConfig.from_pretrained("distilbert/distilbert-base-uncased")
>>> onnx_config = DistilBertOnnxConfig(config)
```

L'oggetto risultante ha diverse proprietร  utili. Ad esempio, รจ possibile visualizzare il set di operatori ONNX (opset) che verrร  utilizzato durante l'esportazione:

```python
>>> print(onnx_config.default_onnx_opset)
11
```

รˆ inoltre possibile visualizzare gli output associati al modello come segue:

```python
>>> print(onnx_config.outputs)
OrderedDict([("last_hidden_state", {0: "batch", 1: "sequence"})])
```

Puoi notare che la proprietร  degli output segue la stessa struttura degli input; essa restituisce un `OrderedDict` di output con nome e le loro forme. La struttura degli output รจ legata alla scelta della funzionalitร  con cui viene inizializzata la configurazione. Per impostazione predefinita, la configurazione ONNX viene inizializzata con la funzionalitร  `default`, che corrisponde all'esportazione di un modello caricato con la classe `AutoModel`. Se desideri esportare una topologia di modello diversa, รจ sufficiente fornire una funzionalitร  diversa all'argomento `task` quando inizializzi la configurazione ONNX. Ad esempio, se volessimo esportare DistilBERT con una testa di classificazione per sequenze, potremmo usare:

```python
>>> from transformers import AutoConfig

>>> config = AutoConfig.from_pretrained("distilbert/distilbert-base-uncased")
>>> onnx_config_for_seq_clf = DistilBertOnnxConfig(config, task="sequence-classification")
>>> print(onnx_config_for_seq_clf.outputs)
OrderedDict([('logits', {0: 'batch'})])
```

<Tip>

Tutte le proprietร  e i metodi di base associati a [`~onnx.config.OnnxConfig`] e alle altre classi di configurazione possono essere sovrascritti se necessario. Guarda [`BartOnnxConfig`] per un esempio avanzato.

</Tip>

#### Esportazione del modello

Una volta implementata la configurazione ONNX, il passaggio successivo consiste nell'esportare il modello. Qui possiamo usare la funzione `export()` fornita dal pacchetto `transformers.onnx`. Questa funzione si aspetta la configurazione ONNX, insieme al modello base, al tokenizer e al percorso in cui salvare il file esportato:

```python
>>> from pathlib import Path
>>> from transformers.onnx import export
>>> from transformers import AutoTokenizer, AutoModel

>>> onnx_path = Path("model.onnx")
>>> model_ckpt = "distilbert/distilbert-base-uncased"
>>> base_model = AutoModel.from_pretrained(model_ckpt)
>>> tokenizer = AutoTokenizer.from_pretrained(model_ckpt)

>>> onnx_inputs, onnx_outputs = export(tokenizer, base_model, onnx_config, onnx_config.default_onnx_opset, onnx_path)
```

Gli `onnx_inputs` e `onnx_outputs` restituiti dalla funzione `export()` sono liste delle chiavi definite nelle proprietร  `inputs` e `outputs` della configurazione. Una volta esportato il modello, puoi verificare che il modello sia ben formato come segue:

```python
>>> import onnx

>>> onnx_model = onnx.load("model.onnx")
>>> onnx.checker.check_model(onnx_model)
```

<Tip>

Se il tuo modello รจ piรน grande di 2 GB, vedrai che molti file aggiuntivi vengono creati durante l'esportazione. Questo รจ _previsto_ perchรฉ ONNX utilizza i [Protocol Buffer](https://developers.google.com/protocol-buffers/) per memorizzare il modello e questi hanno un limite di dimensione di 2 GB.
Vedi la [Documentazione ONNX](https://github.com/onnx/onnx/blob/master/docs/ExternalData.md) per istruzioni su come caricare modelli con dati esterni.

</Tip>

#### Convalida degli output del modello

Il passaggio finale consiste nel verificare che gli output del modello di base e di quello esportato corrispondano entro una soglia di tolleranza assoluta. Qui possiamo usare la funzione `validate_model_outputs()` fornita dal pacchetto `transformers.onnx` come segue:

```python
>>> from transformers.onnx import validate_model_outputs

>>> validate_model_outputs(
...     onnx_config, tokenizer, base_model, onnx_path, onnx_outputs, onnx_config.atol_for_validation
... )
```

Questa funzione usa il metodo `OnnxConfig.generate_dummy_inputs()` per generare gli input per il modello di base e per quello esportato; la tolleranza assoluta puรฒ essere definita nella configurazione. Generalmente troviamo una corrispondenza numerica nell'intervallo da 1e-6 a 1e-4, anche se รจ probabile che qualsiasi valore inferiore a 1e-3 vada bene.

### Contribuire con una nuova configurazione a ๐Ÿค— Transformers

Stiamo cercando di espandere l'insieme di configurazioni giร  pronte e di accettare contributi della community! Se vuoi contribuire con la tua aggiunta alla libreria, dovrai:

* Implementare la configurazione ONNX nel file `configuration_<model_name>.py` corrispondente
* Includere l'architettura del modello e le funzionalitร  corrispondenti in [`~onnx.features.FeaturesManager`]
* Aggiungere la tua architettura del modello ai test in `test_onnx_v2.py`

Scopri come รจ stata aggiunta la configurazione per [IBERT](https://github.com/huggingface/transformers/pull/14868/files) per avere un'idea di cosa รจ coinvolto.

## TorchScript

<Tip>

Questo รจ l'inizio dei nostri esperimenti con TorchScript e stiamo ancora esplorando le sue capacitร  con modelli con input di dimensione variabile. รˆ una nostra prioritร  e approfondiremo le nostre analisi nelle prossime versioni, con piรน esempi di codice, un'implementazione piรน flessibile e benchmark che confrontano i codici basati su Python con quelli compilati con TorchScript.

</Tip>

Secondo la documentazione di Pytorch: "TorchScript รจ un modo per creare modelli serializzabili e ottimizzabili da codice Pytorch". I due moduli di Pytorch [JIT e TRACE](https://pytorch.org/docs/stable/jit.html) consentono agli sviluppatori di esportare i loro modelli per riutilizzarli in altri programmi, come i programmi C++ orientati all'efficienza.

Abbiamo fornito un'interfaccia che consente l'esportazione di modelli ๐Ÿค— Transformers in TorchScript in modo che possano essere riutilizzati in un ambiente diverso rispetto a un programma Python basato su Pytorch. Qui spieghiamo come esportare e utilizzare i nostri modelli utilizzando TorchScript.

Esportare un modello richiede due cose:

- un passaggio in avanti con input fittizi;
- l'istanziazione del modello con il flag `torchscript`.

Queste necessitร  implicano diversi aspetti a cui gli sviluppatori dovrebbero prestare attenzione. Questi dettagli sono mostrati sotto.

### Flag TorchScript e pesi legati

Questo flag รจ necessario perchรฉ la maggior parte dei modelli linguistici in questo repository ha pesi legati tra lo strato "Embedding" e lo strato "Decoding". TorchScript non consente l'esportazione di modelli che hanno pesi legati, quindi รจ necessario prima slegare e clonare i pesi.

Ciรฒ implica che i modelli istanziati con il flag `torchscript` hanno gli strati `Embedding` e `Decoding` separati, il che significa che non dovrebbero essere addestrati in futuro.
L'allenamento de-sincronizzerebbe i due strati, portando a risultati inaspettati.

Questo non รจ il caso per i modelli che non hanno una testa di modello linguistico, poichรฉ quelli non hanno pesi legati. Questi modelli possono essere esportati in sicurezza senza il flag `torchscript`.

### Input fittizi e lunghezze standard

Gli input fittizi sono usati per eseguire un passaggio in avanti del modello. Mentre i valori degli input si propagano attraverso gli strati, Pytorch tiene traccia delle diverse operazioni eseguite su ciascun tensore. Queste operazioni registrate vengono quindi utilizzate per creare la "traccia" del modello.

La traccia viene creata relativamente alle dimensioni degli input. รˆ quindi vincolata dalle dimensioni dell'input fittizio e non funzionerร  per altre lunghezze di sequenza o dimensioni di batch. Quando si proverร  con una dimensione diversa, verrร  sollevato un errore come: `The expanded size of the tensor (3) must match the existing size (7) at non-singleton dimension 2`.

Si consiglia pertanto di tracciare il modello con una dimensione di input fittizia grande almeno quanto il piรน grande input che verrร  fornito al modello durante l'inferenza. รˆ possibile eseguire il padding per riempire i valori mancanti. Il modello sarร  tracciato con una grande dimensione di input, tuttavia anche le dimensioni delle diverse matrici saranno grandi, risultando in piรน calcoli.

Si raccomanda di prestare attenzione al numero totale di operazioni eseguite su ciascun input e di seguire da vicino le prestazioni durante l'esportazione di modelli con lunghezza di sequenza variabile.

### Usare TorchScript in Python

Di seguito รจ riportato un esempio che mostra come salvare e caricare modelli e come utilizzare la traccia per l'inferenza.

#### Salvare un modello

Questo frammento di codice mostra come usare TorchScript per esportare un `BertModel`. Qui il `BertModel` รจ istanziato secondo una classe `BertConfig` e quindi salvato su disco con il nome di file `traced_bert.pt`.

```python
from transformers import BertModel, BertTokenizer, BertConfig
import torch

enc = BertTokenizer.from_pretrained("google-bert/bert-base-uncased")

# Tokenizing input text
text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]"
tokenized_text = enc.tokenize(text)

# Masking one of the input tokens
masked_index = 8
tokenized_text[masked_index] = "[MASK]"
indexed_tokens = enc.convert_tokens_to_ids(tokenized_text)
segments_ids = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1]

# Creating a dummy input
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
dummy_input = [tokens_tensor, segments_tensors]

# Initializing the model with the torchscript flag
# Flag set to True even though it is not necessary as this model does not have an LM Head.
config = BertConfig(
    vocab_size_or_config_json_file=32000,
    hidden_size=768,
    num_hidden_layers=12,
    num_attention_heads=12,
    intermediate_size=3072,
    torchscript=True,
)

# Instantiating the model
model = BertModel(config)

# The model needs to be in evaluation mode
model.eval()

# If you are instantiating the model with *from_pretrained* you can also easily set the TorchScript flag
model = BertModel.from_pretrained("google-bert/bert-base-uncased", torchscript=True)

# Creating the trace
traced_model = torch.jit.trace(model, [tokens_tensor, segments_tensors])
torch.jit.save(traced_model, "traced_bert.pt")
```

#### Caricare un modello

Questo frammento di codice mostra come caricare il `BertModel` che era stato precedentemente salvato su disco con il nome `traced_bert.pt`. Stiamo riutilizzando il `dummy_input` precedentemente inizializzato.

```python
loaded_model = torch.jit.load("traced_bert.pt")
loaded_model.eval()

all_encoder_layers, pooled_output = loaded_model(*dummy_input)
```

#### Utilizzare un modello tracciato per l'inferenza

Usare il modello tracciato per l'inferenza รจ semplice come usare il suo metodo dunder `__call__`:

```python
traced_model(tokens_tensor, segments_tensors)
```

### Implementare modelli HuggingFace TorchScript su AWS utilizzando Neuron SDK

AWS ha introdotto la famiglia di istanze [Amazon EC2 Inf1](https://aws.amazon.com/ec2/instance-types/inf1/) per l'inferenza di machine learning a basso costo e ad alte prestazioni nel cloud. Le istanze Inf1 sono alimentate dal chip AWS Inferentia, un acceleratore hardware personalizzato, specializzato in carichi di lavoro di inferenza di deep learning. [AWS Neuron](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/#) รจ l'SDK per Inferentia che supporta il tracciamento e l'ottimizzazione dei modelli transformers per la distribuzione su Inf1. L'SDK Neuron fornisce:

1. API di facile utilizzo, con una sola riga di modifica del codice, per tracciare e ottimizzare un modello TorchScript per l'inferenza nel cloud.
2. Ottimizzazioni delle prestazioni pronte all'uso per un [miglior rapporto costi-prestazioni](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/benchmark/).
3. Supporto per i modelli transformer HuggingFace costruiti con [PyTorch](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/pytorch/bert_tutorial/tutorial_pretrained_bert.html) o [TensorFlow](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/tensorflow/huggingface_bert/huggingface_bert.html).

#### Implicazioni

I modelli Transformers basati sull'architettura [BERT (Bidirectional Encoder Representations from Transformers)](https://huggingface.co/docs/transformers/main/model_doc/bert), o le sue varianti come [distilBERT](https://huggingface.co/docs/transformers/main/model_doc/distilbert) e [roBERTa](https://huggingface.co/docs/transformers/main/model_doc/roberta), funzioneranno meglio su Inf1 per attivitร  non generative come il question answering estrattivo, la classificazione di sequenze e la classificazione dei token. In alternativa, le attivitร  di generazione di testo possono essere adattate per essere eseguite su Inf1, secondo questo [tutorial AWS Neuron MarianMT](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/pytorch/transformers-marianmt.html).
Ulteriori informazioni sui modelli che possono essere convertiti per Inferentia senza modifiche si trovano nella [sezione Model Architecture Fit della documentazione Neuron](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/models/models-inferentia.html#models-inferentia).

#### Dipendenze

L'utilizzo di AWS Neuron per convertire i modelli richiede le seguenti dipendenze e il seguente ambiente:

* Un [ambiente Neuron SDK](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/neuron-frameworks/pytorch-neuron/index.html#installation-guide), preconfigurato su [AWS Deep Learning AMI](https://docs.aws.amazon.com/dlami/latest/devguide/tutorial-inferentia-launching.html).

#### Convertire un modello per AWS Neuron

Usando lo stesso script di [Usare TorchScript in Python](https://huggingface.co/docs/transformers/main/en/serialization#using-torchscript-in-python) per tracciare un `BertModel`, importa l'estensione del framework `torch.neuron` per accedere ai componenti di Neuron SDK tramite un'API Python.

```python
from transformers import BertModel, BertTokenizer, BertConfig
import torch
import torch.neuron
```

E modifica solo la riga di codice per il tracciamento.

Da:

```python
torch.jit.trace(model, [tokens_tensor, segments_tensors])
```

A:

```python
torch.neuron.trace(model, [tokens_tensor, segments_tensors])
```

Questa modifica consente a Neuron SDK di tracciare il modello e ottimizzarlo per l'esecuzione sulle istanze Inf1.

Per ulteriori informazioni sulle funzionalitร , gli strumenti, i tutorial di esempio e gli ultimi aggiornamenti di AWS Neuron SDK, consultare la [documentazione AWS NeuronSDK](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/index.html).
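Di seguito, a puro titolo indicativo, uno schizzo che riunisce i frammenti precedenti in un unico esempio; presuppone un'istanza Inf1 con il Neuron SDK giร  installato e riutilizza gli stessi input fittizi della sezione TorchScript (il nome del file di output รจ arbitrario):

```python
import torch
import torch.neuron  # richiede il Neuron SDK (es. AWS Deep Learning AMI su un'istanza Inf1)

from transformers import BertModel, BertTokenizer

enc = BertTokenizer.from_pretrained("google-bert/bert-base-uncased")

# stesso input fittizio usato nell'esempio TorchScript
text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]"
tokenized_text = enc.tokenize(text)
indexed_tokens = enc.convert_tokens_to_ids(tokenized_text)
segments_ids = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1]

tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])

# il modello va istanziato con torchscript=True, come nell'esempio TorchScript
model = BertModel.from_pretrained("google-bert/bert-base-uncased", torchscript=True)
model.eval()

# unica differenza rispetto a TorchScript: torch.neuron.trace al posto di torch.jit.trace
neuron_model = torch.neuron.trace(model, [tokens_tensor, segments_tensors])
torch.jit.save(neuron_model, "neuron_bert.pt")
```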
mavonic_private_repos/transformers/docs/source/it/accelerate.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Allenamento distribuito con ๐Ÿค— Accelerate La parallelizzazione รจ emersa come strategia per allenare modelli sempre piรน grandi su hardware limitato e accelerarne la velocitร  di allenamento di diversi ordini di magnitudine. In Hugging Face, abbiamo creato la libreria [๐Ÿค— Accelerate](https://huggingface.co/docs/accelerate) per aiutarti ad allenare in modo semplice un modello ๐Ÿค— Transformers su qualsiasi tipo di configurazione distribuita, sia che si tratti di piรน GPU su una sola macchina o di piรน GPU su piรน macchine. In questo tutorial, imparerai come personalizzare il training loop nativo di PyTorch per consentire l'addestramento in un ambiente distribuito. ## Configurazione Inizia installando ๐Ÿค— Accelerate: ```bash pip install accelerate ``` Poi importa e crea un oggetto [`Accelerator`](https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator). `Accelerator` rileverร  automaticamente il tuo setup distribuito e inizializzerร  tutte le componenti necessarie per l'allenamento. Non dovrai allocare esplicitamente il tuo modello su un device. ```py >>> from accelerate import Accelerator >>> accelerator = Accelerator() ``` ## Preparati ad accelerare Il prossimo passo รจ quello di passare tutti gli oggetti rilevanti per l'allenamento al metodo [`prepare`](https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator.prepare). Questo include i tuoi DataLoaders per l'allenamento e per la valutazione, un modello e un ottimizzatore: ```py >>> train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare( ... train_dataloader, eval_dataloader, model, optimizer ... ) ``` ## Backward Infine, sostituisci il tipico metodo `loss.backward()` nel tuo loop di allenamento con il metodo [`backward`](https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator.backward) di ๐Ÿค— Accelerate: ```py >>> for epoch in range(num_epochs): ... for batch in train_dataloader: ... outputs = model(**batch) ... loss = outputs.loss ... accelerator.backward(loss) ... optimizer.step() ... lr_scheduler.step() ... optimizer.zero_grad() ... progress_bar.update(1) ``` Come puoi vedere nel seguente codice, hai solo bisogno di aggiungere quattro righe in piรน di codice al tuo training loop per abilitare l'allenamento distribuito! 
```diff + from accelerate import Accelerator from transformers import AdamW, AutoModelForSequenceClassification, get_scheduler + accelerator = Accelerator() model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2) optimizer = AdamW(model.parameters(), lr=3e-5) - device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") - model.to(device) + train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare( + train_dataloader, eval_dataloader, model, optimizer + ) num_epochs = 3 num_training_steps = num_epochs * len(train_dataloader) lr_scheduler = get_scheduler( "linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps ) progress_bar = tqdm(range(num_training_steps)) model.train() for epoch in range(num_epochs): for batch in train_dataloader: - batch = {k: v.to(device) for k, v in batch.items()} outputs = model(**batch) loss = outputs.loss - loss.backward() + accelerator.backward(loss) optimizer.step() lr_scheduler.step() optimizer.zero_grad() progress_bar.update(1) ``` ## Allenamento Una volta che hai aggiunto le righe di codice rilevanti, lancia il tuo allenamento in uno script o in un notebook come Colaboratory. ### Allenamento con uno script Se stai eseguendo il tuo allenamento da uno script, esegui il comando seguente per creare e salvare un file di configurazione: ```bash accelerate config ``` Poi lancia il tuo allenamento con: ```bash accelerate launch train.py ``` ### Allenamento con un notebook La libreria ๐Ÿค— Accelerate puรฒ anche essere utilizzata in un notebook se stai pianificando di utilizzare le TPU di Colaboratory. Inserisci tutto il codice legato all'allenamento in una funzione, e passala al `notebook_launcher`: ```py >>> from accelerate import notebook_launcher >>> notebook_launcher(training_function) ``` Per maggiori informazioni relative a ๐Ÿค— Accelerate e le sue numerose funzionalitร , fai riferimento alla [documentazione](https://huggingface.co/docs/accelerate).
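A integrazione della sezione sull'allenamento con un notebook qui sopra, ecco uno schizzo minimale e ipotetico di come potrebbe essere organizzata la `training_function` da passare a `notebook_launcher`; il checkpoint, il numero di etichette e i dataloader sono segnaposto ripresi dal tutorial di fine-tuning, non un'implementazione ufficiale:

```python
from accelerate import Accelerator, notebook_launcher
from torch.optim import AdamW
from transformers import AutoModelForSequenceClassification, get_scheduler


def training_function(train_dataloader, eval_dataloader):
    # tutta la logica di allenamento vive dentro la funzione
    accelerator = Accelerator()

    # checkpoint e numero di etichette ipotetici, ripresi dal tutorial di fine-tuning
    model = AutoModelForSequenceClassification.from_pretrained(
        "google-bert/bert-base-cased", num_labels=5
    )
    optimizer = AdamW(model.parameters(), lr=3e-5)

    train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare(
        train_dataloader, eval_dataloader, model, optimizer
    )

    num_epochs = 3
    num_training_steps = num_epochs * len(train_dataloader)
    lr_scheduler = get_scheduler(
        "linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps
    )

    model.train()
    for epoch in range(num_epochs):
        for batch in train_dataloader:
            outputs = model(**batch)
            loss = outputs.loss
            accelerator.backward(loss)
            optimizer.step()
            lr_scheduler.step()
            optimizer.zero_grad()


# i dataloader (creati come nel tutorial di fine-tuning) vengono passati tramite `args`
notebook_launcher(training_function, args=(train_dataloader, eval_dataloader))
```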
mavonic_private_repos/transformers/docs/source/it/perf_infer_gpu_many.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Inferenza Efficiente su GPU Multiple Questo documento contiene informazioni su come fare inferenza in maniera efficiente su GPU multiple. <Tip> Nota: Un setup con GPU multiple puรฒ utilizzare la maggior parte delle strategie descritte nella [sezione con GPU singola](./perf_infer_gpu_one). Tuttavia, รจ necessario conoscere delle tecniche semplici che possono essere utilizzate per un risultato migliore. </Tip> ## `BetterTransformer` per inferenza piรน rapida Abbiamo recentemente integrato `BetterTransformer` per inferenza piรน rapida su multi-GPU per modelli su testo, immagini e audio. Controlla il documento con queste integrazioni [qui](https://huggingface.co/docs/optimum/bettertransformer/overview) per maggiori dettagli.
mavonic_private_repos/transformers/docs/source/it/perf_infer_special.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Inferenza su Hardware Specializzato Questo documento sarร  completato a breve con la documentazione per l'inferenza su hardware specializzato. Nel frattempo puoi controllare [la guida per fare inferenza sulle CPU](perf_infer_cpu).
mavonic_private_repos/transformers/docs/source/it/perf_hardware.md
<!--- Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Hardware ottimizzato per l'addestramento L'hardware utilizzato per eseguire l'addestramento del modello e l'inferenza puรฒ avere un grande effetto sulle prestazioni. Per un analisi approfondita delle GPUs, assicurati di dare un'occhiata all'eccellente [blog post](https://timdettmers.com/2020/09/07/which-gpu-for-deep-learning/) di Tim Dettmer. Diamo un'occhiata ad alcuni consigli pratici per la configurazione della GPU. ## GPU Quando si addestrano modelli piรน grandi ci sono essenzialmente tre opzioni: - GPUs piu' grandi - Piu' GPUs - Piu' CPU e piu' NVMe (scaricato da [DeepSpeed-Infinity](main_classes/deepspeed#nvme-support)) Iniziamo dal caso in cui ci sia una singola GPU. ### Potenza e Raffreddamento Se hai acquistato una costosa GPU di fascia alta, assicurati di darle la potenza corretta e un raffreddamento sufficiente. **Potenza**: Alcune schede GPU consumer di fascia alta hanno 2 e talvolta 3 prese di alimentazione PCI-E a 8 pin. Assicurati di avere tanti cavi PCI-E a 8 pin indipendenti da 12 V collegati alla scheda quante sono le prese. Non utilizzare le 2 fessure a un'estremitร  dello stesso cavo (noto anche come cavo a spirale). Cioรจ se hai 2 prese sulla GPU, vuoi 2 cavi PCI-E a 8 pin che vanno dall'alimentatore alla scheda e non uno che abbia 2 connettori PCI-E a 8 pin alla fine! In caso contrario, non otterrai tutte le prestazioni ufficiali. Ciascun cavo di alimentazione PCI-E a 8 pin deve essere collegato a una guida da 12 V sul lato dell'alimentatore e puรฒ fornire fino a 150 W di potenza. Alcune altre schede possono utilizzare connettori PCI-E a 12 pin e questi possono fornire fino a 500-600 W di potenza. Le schede di fascia bassa possono utilizzare connettori a 6 pin, che forniscono fino a 75 W di potenza. Inoltre vuoi un alimentatore (PSU) di fascia alta che abbia una tensione stabile. Alcuni PSU di qualitร  inferiore potrebbero non fornire alla scheda la tensione stabile di cui ha bisogno per funzionare al massimo. E ovviamente l'alimentatore deve avere abbastanza Watt inutilizzati per alimentare la scheda. **Raffreddamento**: Quando una GPU si surriscalda, inizierร  a rallentare e non fornirร  le prestazioni mssimali e potrebbe persino spegnersi se diventasse troppo calda. รˆ difficile dire l'esatta temperatura migliore a cui aspirare quando una GPU รจ molto caricata, ma probabilmente qualsiasi cosa al di sotto di +80ยฐC va bene, ma piรน bassa รจ meglio - forse 70-75ยฐC รจ un intervallo eccellente in cui trovarsi. รˆ probabile che il rallentamento inizi a circa 84-90ยฐC. Ma oltre alla limitazione delle prestazioni, una temperatura molto elevata prolungata รจ probabile che riduca la durata di una GPU. Diamo quindi un'occhiata a uno degli aspetti piรน importanti quando si hanno piรน GPU: la connettivitร . 
### Connettivitร  multi-GPU Se utilizzi piรน GPU, il modo in cui le schede sono interconnesse puรฒ avere un enorme impatto sul tempo totale di allenamento. Se le GPU si trovano sullo stesso nodo fisico, puoi eseguire: ```bash nvidia-smi topo -m ``` e ti dirร  come sono interconnesse le GPU. Su una macchina con doppia GPU e collegata a NVLink, molto probabilmente vedrai qualcosa del tipo: ``` GPU0 GPU1 CPU Affinity NUMA Affinity GPU0 X NV2 0-23 N/A GPU1 NV2 X 0-23 N/A ``` su una macchina diversa senza NVLink potremmo vedere: ``` GPU0 GPU1 CPU Affinity NUMA Affinity GPU0 X PHB 0-11 N/A GPU1 PHB X 0-11 N/A ``` Il rapporto include questa legenda: ``` X = Self SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI) NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU) PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge) PIX = Connection traversing at most a single PCIe bridge NV# = Connection traversing a bonded set of # NVLinks ``` Quindi il primo rapporto `NV2` ci dice che le GPU sono interconnesse con 2 NVLinks e nel secondo report `PHB` abbiamo una tipica configurazione PCIe+Bridge a livello di consumatore. Controlla che tipo di connettivitร  hai sulla tua configurazione. Alcuni di questi renderanno la comunicazione tra le carte piรน veloce (es. NVLink), altri piรน lenta (es. PHB). A seconda del tipo di soluzione di scalabilitร  utilizzata, la velocitร  di connettivitร  potrebbe avere un impatto maggiore o minore. Se le GPU devono sincronizzarsi raramente, come in DDP, l'impatto di una connessione piรน lenta sarร  meno significativo. Se le GPU devono scambiarsi messaggi spesso, come in ZeRO-DP, una connettivitร  piรน veloce diventa estremamente importante per ottenere un addestramento piรน veloce. #### NVlink [NVLink](https://en.wikipedia.org/wiki/NVLink) รจ un collegamento di comunicazione a corto raggio multilinea seriale basato su cavo sviluppato da Nvidia. Ogni nuova generazione fornisce una larghezza di banda piรน veloce, ad es. ecco una citazione da [Nvidia Ampere GA102 GPU Architecture](https://www.nvidia.com/content/dam/en-zz/Solutions/geforce/ampere/pdf/NVIDIA-ampere-GA102-GPU-Architecture-Whitepaper-V1.pdf): > Third-Generation NVLinkยฎ > GA102 GPUs utilize NVIDIAโ€™s third-generation NVLink interface, which includes four x4 links, > with each link providing 14.0625 GB/sec bandwidth in each direction between two GPUs. Four > links provide 56.25 GB/sec bandwidth in each direction, and 112.5 GB/sec total bandwidth > between two GPUs. Two RTX 3090 GPUs can be connected together for SLI using NVLink. > (Note that 3-Way and 4-Way SLI configurations are not supported.) Quindi piรน `X` si ottiene nel rapporto di `NVX` nell'output di `nvidia-smi topo -m`, meglio รจ. La generazione dipenderร  dall'architettura della tua GPU. Confrontiamo l'esecuzione di un training del modello di linguaggio openai-community/gpt2 su un piccolo campione di wikitext I risultati sono: | NVlink | Time | | ----- | ---: | | Y | 101s | | N | 131s | Puoi vedere che NVLink completa l'addestramento circa il 23% piรน velocemente. Nel secondo benchmark utilizziamo `NCCL_P2P_DISABLE=1` per dire alle GPU di non utilizzare NVLink. 
Ecco il codice benchmark completo e gli output: ```bash # DDP w/ NVLink rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 torchrun \ --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py --model_name_or_path openai-community/gpt2 \ --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train \ --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 {'train_runtime': 101.9003, 'train_samples_per_second': 1.963, 'epoch': 0.69} # DDP w/o NVLink rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 NCCL_P2P_DISABLE=1 torchrun \ --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py --model_name_or_path openai-community/gpt2 \ --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 {'train_runtime': 131.4367, 'train_samples_per_second': 1.522, 'epoch': 0.69} ``` Hardware: 2x TITAN RTX 24GB each + NVlink with 2 NVLinks (`NV2` in `nvidia-smi topo -m`) Software: `pytorch-1.8-to-be` + `cuda-11.0` / `transformers==4.3.0.dev0`
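Oltre a `nvidia-smi topo -m`, puoi verificare rapidamente da PyTorch se due GPU possono comunicare direttamente in peer-to-peer (come accade tipicamente con NVLink). Questo è solo uno schizzo indicativo: dice se il P2P è disponibile, non su quale tipo di collegamento passa.

```python
import torch

if torch.cuda.device_count() >= 2:
    # True se la GPU 0 può accedere direttamente alla memoria della GPU 1 (P2P),
    # cosa che di solito richiede NVLink o un percorso PCIe adeguato
    print(torch.cuda.can_device_access_peer(0, 1))
else:
    print("Serve una macchina con almeno 2 GPU per questo controllo")
```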
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/it/add_new_model.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Come aggiungere un modello a ๐Ÿค— Transformers? Aggiungere un nuovo modello รฉ spesso difficile e richiede una profonda conoscenza della libreria ๐Ÿค— Transformers e anche della repository originale del modello. A Hugging Face cerchiamo di dare alla community sempre piรบ poteri per aggiungere modelli independentemente. Quindi, per alcuni nuovi modelli che la community vuole aggiungere a ๐Ÿค— Transformers, abbiamo creato una specifica *call-for-model-addition* che spiega passo dopo passo come aggiungere il modello richiesto. Con questo *call-for-model-addition* vogliamo insegnare a volenterosi e esperti collaboratori della community come implementare un modello in ๐Ÿค— Transformers. Se questo รฉ qualcosa che puรฒ interessarvi, siete liberi di controllare l'attuale โ€œcalls-for-model-additionโ€ [qui](https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model/open_model_proposals/README.md) e contattarci. Se il modello sarร  selezionato, allora potrete lavorare insieme a un membro di Hugging Face per integrare il modello in ๐Ÿค— Transformers. Cosรฌ facendo, ci guadagnerai in una comprensione totale, sia teorica che pratica, del modello proposto. Inoltre, sarai l'artefice di un importante contributo open-source a ๐Ÿค— Transformers. Durante l'implementazione avrai l'opportunitร  di: - ottenere piรน comprensione delle best practices in open-source - capire i principi di design di una della librerie NLP piรน popolari - capire come efficientemente testare complessi modelli NLP - capire come integrare utilit Python come `black`, `ruff`, `make fix-copies` in una libreria per garantire sempre di avere un codice leggibile e pulito Siamo anche contenti se vuoi aggiungere un modello che non puรฒ essere trovato nella cartella โ€œcalls-for-model-additionโ€. Le seguenti sezioni spiegano in dettaglio come aggiungere un nuovo modello. Puรฒ anche essere molto utile controllare modelli giร  aggiunti [qui](https://github.com/huggingface/transformers/pulls?q=is%3Apr+label%3A%22PR+for+Model+Addition%22+is%3Aclosed), per capire se richiamano il modello che vorreste aggiungere. Per cominciare, vediamo una panoramica general della libreria Transformers. ## Panoramica generale su ๐Ÿค— Transformers Prima di tutto, vediamo in generale ๐Ÿค— Transformers. ๐Ÿค— Transformers รฉ una libreria molto strutturata, quindi puร  essere che a volte ci sia un disaccordo con alcune filosofie della libreria o scelte di design. Dalla nostra esperienza, tuttavia, abbiamo trovato che le scelte fondamentali di design della libreria sono cruciali per usare ๐Ÿค— Transformers efficacemente su larga scala, mantenendo i costi a un livello accettabile. 
Un buon primo punto di partenza per capire al meglio la libreria รฉ leggere la [documentazione sulla nostra filosofia](filosofia) Da qui, ci sono alcune scelte sul modo di lavorare che cerchiamo di applicare a tutti i modelli: - La composizione รฉ generalmente favorita sulla sovra-astrazione - Duplicare il codice non รฉ sempre male, soprattutto se migliora notevolmente la leggibilitร  e accessibilitร  del modello - Tutti i files creati per il nuovo modello devono il piu possibile "compatti". Questo vuol dire che quando qualcuno leggerรก il codice di uno specifico modello, potrรก vedere solo il corrispettivo file `modeling_....py` senza avere multiple dipendenze. La cosa piรบ importante, รฉ che consideriamo la libreria non solo un mezzo per dare un prodotto, *per esempio* dare la possibilitร  di usare BERT per inferenza, ma รฉ anche il prodotto reale che noi vogliamo migliorare sempre piรน. Quindi, quando aggiungi un modello, non sei solo la persona che userร  il modello, ma rappresenti anche tutti coloro che leggeranno, cercheranno di capire e modificare il tuo modello. Tenendo questi principi in mente, immergiamoci nel design generale della libreria. ### Panoramica sui modelli Per aggiungere con successo un modello, รฉ importante capire l'interazione tra il tuo modello e la sua configurazione, [`PreTrainedModel`], e [`PretrainedConfig`]. Per dare un esempio, chiameremo il modello da aggiungere a ๐Ÿค— Transformers `BrandNewBert`. Diamo un'occhiata: <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_overview.png"/> Come potete vedere, ci basiamo sull'ereditarietร  in ๐Ÿค— Transformers, tenendo perรฒ il livello di astrazione a un minimo assoluto. Non ci sono mai piรน di due livelli di astrazione per ogni modello nella libreria. `BrandNewBertModel` eredita da `BrandNewBertPreTrainedModel` che, a sua volta, eredita da [`PreTrainedModel`] - semplice no? Come regola generale, vogliamo essere sicuri che un nuovo modello dipenda solo da [`PreTrainedModel`]. Le funzionalitร  importanti che sono automaticamente conferite a ogni nuovo modello sono [`~PreTrainedModel.from_pretrained`] e [`~PreTrainedModel.save_pretrained`], che sono usate per serializzazione e deserializzazione. Tutte le altre importanti funzionalitร , come ad esempio `BrandNewBertModel.forward` devono essere definite completamente nel nuovo script `modeling_brand_new_bert.py`. Inoltre, vogliamo essere sicuri che un modello con uno specifico head layer, come `BrandNewBertForMaskedLM` non erediti da `BrandNewBertModel`, ma piuttosto usi `BrandNewBertModel` come componente che puรฒ essere chiamata nel passaggio forward per mantenere il livello di astrazione basso. Ogni nuovo modello richieste una classe di configurazione, chiamata `BrandNewBertConfig`. Questa configurazione รฉ sempre mantenuta come un attributo in [`PreTrainedModel`], e quindi puรฒ essere accessibile tramite l'attributo `config` per tutte le classi che ereditano da `BrandNewBertPreTrainedModel`: ```python model = BrandNewBertModel.from_pretrained("brandy/brand_new_bert") model.config # il modello ha accesso al suo config ``` Analogamente al modello, la configurazione eredita le funzionalitร  base di serializzazione e deserializzazione da [`PretrainedConfig`]. ร‰ da notare che la configurazione e il modello sono sempre serializzati in due formati differenti - il modello รฉ serializzato in un file *pytorch_model.bin* mentre la configurazione con *config.json*. 
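Per toccare con mano questo meccanismo di serializzazione si può usare un modello qualunque già presente nella libreria (qui BERT, solo a scopo illustrativo, con un path di esempio): salvando il modello vengono scritti sia i pesi sia il file `config.json`, e la configurazione resta accessibile tramite l'attributo `config`.

```python
from transformers import BertConfig, BertModel

# Una piccola configurazione di prova, con pesi inizializzati a caso
config = BertConfig(hidden_size=128, num_hidden_layers=2, num_attention_heads=2, intermediate_size=256)
model = BertModel(config)

# Scrive i pesi (pytorch_model.bin o model.safetensors, a seconda della versione) e config.json
model.save_pretrained("/tmp/bert-di-prova")
model = BertModel.from_pretrained("/tmp/bert-di-prova")
print(model.config.hidden_size)  # la configurazione è di nuovo accessibile tramite l'attributo config
```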
Chiamando [`~PreTrainedModel.save_pretrained`] automaticamente chiamerร  [`~PretrainedConfig.save_pretrained`], cosicchรฉ sia il modello che la configurazione siano salvati. ### Stile per il codice Quando codifichi un nuovo modello, tieni presente che Transformers ha una sua struttura di fondo come libreria, perciรฒ ci sono alcuni fatti da considerare su come scrivere un codice :-) 1. Il forward pass del tuo modello dev'essere scritto completamente nel file del modello, mentre dev'essere indipendente da altri modelli nella libreria. Se vuoi riutilizzare un blocco di codice da un altro modello, copia e incolla il codice con un commento `# Copied from` in cima al codice (guarda [qui](https://github.com/huggingface/transformers/blob/v4.17.0/src/transformers/models/roberta/modeling_roberta.py#L160) per un ottimo esempio). 2. Il codice dev'essere interamente comprensibile, anche da persone che non parlano in inglese. Questo significa che le variabili devono avere un nome descrittivo e bisogna evitare abbreviazioni. Per esempio, `activation` รฉ molto meglio che `act`. Le variabili con una lettera sono da evitare fortemente, almeno che non sia per un indce in un for loop. 3. Generamente รฉ meglio avere un codice esplicito e piรบ lungo che un codice corto e magico. 4. Evita di subclassare `nn.Sequential` in Pytorch, puoi subclassare `nn.Module` e scrivere il forward pass, cosicchรฉ chiunque puรฒ effettuare debug sul tuo codice, aggiungendo print o breaking points. 5. La tua function-signature dev'essere type-annoted. Per il resto, รฉ meglio preferire variabili con un nome accettabile piuttosto che annotazioni per aumentare la comprensione e leggibilitร  del codice. ### Panoramica sui tokenizers Questa sezione sarร  creata al piu presto :-( ## Aggiungere un modello a ๐Ÿค— Transformers passo dopo passo Ci sono differenti modi per aggiungere un modello a Hugging Face. Qui trovi una lista di blog posts da parte della community su come aggiungere un modello: 1. [Aggiungere GPT2](https://medium.com/huggingface/from-tensorflow-to-pytorch-265f40ef2a28) scritto da [Thomas](https://huggingface.co/thomwolf) 2. [Aggiungere WMT19 MT](https://huggingface.co/blog/porting-fsmt) scritto da [Stas](https://huggingface.co/stas) Per esperienza, possiamo dirti che quando si aggiunge un modello รฉ meglio tenere a mente le seguenti considerazioni: - Non sfondare una porta giรก aperta! La maggior parte del codice che aggiungerai per un nuovo modello ๐Ÿค— Transformers esiste giร  da qualche parte in ๐Ÿค— Transformers. Prendi un po' di tempo per trovare codici simili in modelli e tokenizers esistenti e fare un copia-incolla. Ricorda che [grep](https://www.gnu.org/software/grep/) e [rg](https://github.com/BurntSushi/ripgrep) sono tuoi buoni amici. Inoltre, ricorda che puรณ essere molto probabile che il tokenizer per il tuo modello sia basato sull'implementazione di un altro modello, e il codice del tuo modello stesso su un altro ancora. *Per esempio* il modello FSMT รฉ basato su BART, mentre il tokenizer di FSMT รฉ basato su XLM. - Ricorda che qui รฉ piu una sfida ingegneristica che scientifica. Spendi piรบ tempo per create un efficiente ambiente di debugging piuttosto che cercare di capire tutti gli aspetti teorici dell'articolo del modello. - Chiedi aiuto se sei in panne! I modelli sono la parte principale di ๐Ÿค— Transformers, perciรฒ qui a Hugging Face siamo piรน che contenti di aiutarti in ogni passo per aggiungere il tuo modello. Non esitare a chiedere se vedi che non riesci a progredire. 
Di seguito, diamo una ricetta generale per aiutare a portare un modello in ๐Ÿค— Transformers. La lista seguente รฉ un sommario di tutto quello che รฉ stato fatto per aggiungere un modello, e puรฒ essere usata come To-Do List: - 1. โ˜ (Opzionale) Capire gli aspetti teorici del modello - 2. โ˜ Preparare l'ambiente dev per transformers - 3. โ˜ Preparare l'ambiente debugging della repository originale - 4. โ˜ Create uno script che gestisca con successo il forward pass usando la repository originale e checkpoint - 5. โ˜ Aggiungere con successo lo scheletro del modello a Transformers - 6. โ˜ Convertire i checkpoint original a Transformers checkpoint - 7. โ˜ Effettuare con successo la forward pass in Transformers, di modo che dia un output identico al checkpoint originale - 8. โ˜ Finire i tests per il modello in Transformers - 9. โ˜ Aggiungere con successo Tokenizer in Transformers - 10. โ˜ Testare e provare gli integration tests da capo a fine - 11. โ˜ Completare i docs - 12. โ˜ Caricare i moedl weights all'hub - 13. โ˜ Sottomettere una pull request - 14. โ˜ (Opzionale) Aggiungere un notebook con una demo Per cominciare di solito consigliamo `BrandNewBert`, partendo dalla teoria, di modo da avere una buona comprensione della teoria generale. TUttavia, se preferisci imparare l'aspetto teorico del modello mentre *lavori* sul modello รฉ ok immergersi direttamente nel codice di `BrandNewBert`. Questa opzione puรณ essere buona se le tue skills ingegneristiche sono meglio che quelle teoriche, o se il paper `BrandNewBert` ti dรก problemi, o se semplicemente ti piace programmare piรบ che leggere articoli scientifici. ### 1. (Opzionale) Aspetti teorici di BrandNewBert Allora con calma, prendi un po' di tempo per leggere l'articolo su *BrandNewBert* . Sicuramente, alcune sezioni dell'articolo sono molto complesse, ma non preoccuparti! L'obiettivo non รฉ avere una compresione immensa della teoria alla base, ma estrarre le informazioni necessarie per re-implementare con successo il modello in ๐Ÿค— Transformers. Quindi, non impazzire sugli aspetti teorici, ma piuttosto focalizzati su quelli pratici, ossia: - Che tipo di modello รฉ *brand_new_bert*? ร‰ solo un encoder in stile BERT? O tipo decoder come GPT2? O encoder e decoder stile BART? Dai un'occhiata a [model_summary](model_summary) se non sei famigliare con le differenze tra questi modelli - Quali sono le applicazioni di *brand_new_bert*? Classificazione di testo? Generazione di testo? O per tasks del genere seq2seq? - Quali sono le nuove aggiunte al modello che lo rendono diverso da BERT/GPT-2/BART? - Quali modelli estistenti in [๐Ÿค— Transformers models](https://huggingface.co/transformers/#contents) sono molto simili a *brand_new_bert*? - Che tipo di tokenizer si usa in questo caso? Un sentencepiece tokenizer? O un word piece tokenizer? Il tokenizer รฉ lo stesso di BERT o BART? Una volta che senti che hai avuto una bella overview dell'architettura del modello, puoi scrivere senza problemi al team di Hugging Face per ogni domanda che tu hai. Questo puรณ includere domande sull'architettura del modello, o sull'attention layer, etc. Saremo molto felici di aiutarti :) ### 2. Prepare il tuo ambiente 1. Forka la [repository](https://github.com/huggingface/transformers) cliccando sul tasto โ€˜Fork' nella pagina della repository. Questo crea una copia del codice nel tuo account GitHub 2. 
Clona il tuo fork di `transformers` sul tuo disco locale, e aggiungi la repository base come remota:

```bash
git clone https://github.com/[your Github handle]/transformers.git
cd transformers
git remote add upstream https://github.com/huggingface/transformers.git
```

3. Crea un ambiente di sviluppo, per esempio tramite questo comando:

```bash
python -m venv .env
source .env/bin/activate
pip install -e ".[dev]"
```

quindi torna alla directory principale:

```bash
cd ..
```

4. Attenzione, raccomandiamo di aggiungere la versione di PyTorch di *brand_new_bert* a Transformers. Per installare PyTorch, basta seguire queste istruzioni https://pytorch.org/get-started/locally/. **Nota bene:** Non c'é bisogno di installare o avere installato CUDA. Il nuovo modello può funzionare senza problemi su una CPU.

5. Per trasferire *brand_new_bert* avrai bisogno anche dell'accesso alla sua repository originale:

```bash
git clone https://github.com/org_that_created_brand_new_bert_org/brand_new_bert.git
cd brand_new_bert
pip install -e .
```

Ok, ora hai un ambiente di sviluppo per portare *brand_new_bert* in 🤗 Transformers.

### 3.-4. Provare un pretrained checkpoint usando la repo originale

Per cominciare, lavorerai sulla repo originale di *brand_new_bert*. Come spesso accade, l'implementazione originale é molto sullo stile "ricerca". Questo significa che a volte la documentazione non é al top, magari manca qualche cosa e il codice può essere difficile da capire. Tuttavia, questa é e dev'essere la motivazione per reimplementare *brand_new_bert*. In Hugging Face, uno degli obiettivi principali é di *mettere le persone sulle spalle dei giganti*, il che si traduce, in questo contesto, nel prendere un modello funzionante, riscriverlo e renderlo il più possibile **accessibile, user-friendly e leggibile**. Questa é la motivazione principale per re-implementare modelli in 🤗 Transformers - cercare di rendere nuove e complesse tecnologie NLP accessibili a **chiunque**.

Riuscire a far girare il modello pretrained originale dalla repository ufficiale é spesso il passo **più arduo**. Dalla nostra esperienza, é molto importante spendere un po' di tempo per familiarizzare con il codice base originale. Come test, prova a capire i seguenti punti:

- Dove si trovano i pretrained weights?
- Come caricare i pretrained weights nel modello corrispondente?
- Come far girare un tokenizer indipendentemente dal modello?
- Prova a tracciare un singolo forward pass, cosicché potrai sapere che classi e funzioni sono richieste per un semplice forward pass. Di solito, dovrai reimplementare solo quelle funzioni
- Prova a localizzare i componenti importanti del modello: dove si trova la classe del modello? Ci sono sotto-classi nel modello, *per esempio* EncoderModel, DecoderModel? Dove si trova il self-attention layer? Ci sono molteplici layer di attention differenti, *per esempio* *self-attention*, *cross-attention*...?
- Come puoi fare debug sul modello nell'ambiente originale della repo? Devi aggiungere dei *print* o puoi usare *ipdb* come debugger interattivo, o va bene anche un IDE efficiente per il debug come PyCharm?

É molto importante che prima di cominciare a trasferire il modello nuovo tu spenda tempo a fare debug del codice originale in maniera **efficiente**! Inoltre, ricorda che tutta la libreria é open-source, quindi non temere di aprire issue o fare una pull request nella repo originale.
Tutti coloro che mantengono la repository saranno piรบ che felici di avere qualcuno che guarda e gioca con i loro codici! A questo punto, sta a te decidere quale ambiente per debug vuoi usare. Noi consilgiamo di evitare setup con GPU, che potrebbero costare assai, lavorare su una CPU puรณ essere un ottimo punto di partenza per indagare la repository originale e per cominciare a scrivere il codice per ๐Ÿค— Transformers. Solo alla fine, quando il modello รฉ stato portato con successo in ๐Ÿค— Transformers, allora si potrรก verificare il suo funzionamento su GPU. In generale ci sono due possibili ambienti di debug per il testare il modello originale: - [Jupyter notebooks](https://jupyter.org/) / [google colab](https://colab.research.google.com/notebooks/intro.ipynb) - Scripts locali in Python Il vantaggio dei Jupyter notebooks รฉ la possibilitร  di eseguire cella per cella, il che puรฒ essere utile per decomporre tutte le componenti logiche, cosi da a vere un ciclo di debug piรน rapido, siccome si possono salvare i risultati da steps intermedi. Inoltre, i notebooks spesso sono molto facili da condividere con altri contributors, il che puรฒ essere molto utile se vuoi chiedere aiuto al team di Hugging Face. Se sei famigliare con Jupyter notebooks allora racommandiamo di lavorare in questa maniera. Ovviamente se non siete abituati a lavorare con i notebook, questo puรฒ essere uno svantaggio nell'usare questa tecnologia, sprecando un sacco di tempo per setup e portare tutto al nuovo ambiente, siccome non potreste neanche usare dei tools di debug come `ipdb`. Per ogni pratica code-base, รฉ sempre meglio come primo step caricare un **piccolo** checkpoint pretrained e cercare di riprodurre un singolo forward pass usando un vettore fittizio di IDs fatti da numeri interi. Un esempio per uno script simile, in pseudocodice รฉ: ```python model = BrandNewBertModel.load_pretrained_checkpoint("/path/to/checkpoint/") input_ids = [0, 4, 5, 2, 3, 7, 9] # vector of input ids original_output = model.predict(input_ids) ``` Per quanto riguarda la strategia di debugging, si puรฒ scegliere tra: - Decomporre il modello originario in piccole componenenti e testare ognuna di esse - Decomporre il modello originario nel *tokenizer* originale e nel *modello* originale, testare un forward pass su questi, e usare dei print statement o breakpoints intermedi per verificare Ancora una volta, siete liberi di scegliere quale strategia sia ottimale per voi. Spesso una strategia รฉ piu avvantaggiosa di un'altra, ma tutto dipende dall'code-base originario. Se il code-base vi permette di decomporre il modello in piccole sub-componenenti, *per esempio* se il code-base originario puรฒ essere facilmente testato in eager mode, allora vale la pena effettuare un debugging di questo genere. 
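Per dare un'idea concreta di come recuperare gli output intermedi quando si decompone il modello, ecco una bozza che usa i forward hook di PyTorch su un piccolo modello giocattolo (che qui fa le veci del modello originale): la stessa tecnica si applica ai sotto-moduli della repo originaria.

```python
import torch
from torch import nn

# Modello giocattolo al posto del modello originale, solo per illustrare il meccanismo
toy = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 8))

intermedi = {}

def salva_output(nome):
    def hook(module, inputs, output):
        intermedi[nome] = output.detach()
    return hook

# Registriamo un hook su ogni sotto-modulo per catturarne l'output durante il forward pass
for nome, modulo in toy.named_modules():
    if nome:  # salta il contenitore radice
        modulo.register_forward_hook(salva_output(nome))

with torch.no_grad():
    toy(torch.randn(1, 8))

for nome, tensore in intermedi.items():
    print(nome, tuple(tensore.shape))
```

I tensori catturati in questo modo diventano poi il termine di paragone naturale (ad esempio con `torch.allclose(..., atol=1e-3)`) quando confronterete l'implementazione in 🤗 Transformers con quella originale.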
Ricordate che ci sono dei vantaggi nel decidere di prendere la strada piu impegnativa sin da subito: - negli stage piu finali, quando bisognerร  comparare il modello originario all'implementazione in Hugging Face, potrete verificare automaticamente ogni componente, individualmente, di modo che ci sia una corrispondenza 1:1 - avrete l'opportunitร  di decomporre un problema molto grande in piccoli passi, cosรฌ da strutturare meglio il vostro lavoro - separare il modello in componenti logiche vi aiuterร  ad avere un'ottima overview sul design del modello, quindi una migliore comprensione del modello stesso - verso gli stage finali i test fatti componente per componente vi aiuterร  ad essere sicuri di non andare avanti e indietro nell'implementazione, cosรฌ da continuare la modifica del codice senza interruzione Un ottimo esempio di come questo puรฒ essere fatto รฉ dato da [Lysandre](https://gist.github.com/LysandreJik/db4c948f6b4483960de5cbac598ad4ed) per il modello ELECTRA Tuttavia, se il code-base originale รฉ molto complesso o le componenti intermedie possono essere testate solo in tramite compilazione, potrebbe richiedere parecchio tempo o addirittura essere impossibile separare il modello in piccole sotto-componenti. Un buon esempio รฉ [MeshTensorFlow di T5](https://github.com/tensorflow/mesh/tree/master/mesh_tensorflow). Questa libreria รฉ molto complessa e non offre un metodo semplice di decomposizione in sotto-componenti. Per simili librerie, potrete fare affidamento ai print statements. In ogni caso, indipendentemente da quale strategia scegliete, la procedura raccomandata รฉ di cominciare a fare debug dal primo layer al layer finale. ร‰ consigliato recuperare gli output dai layers, tramite print o sotto-componenti, nel seguente ordine: 1. Recuperare gli IDs di input dati al modello 2. Recuperare i word embeddings 3. Recuperare l'input del primo Transformer layer 4. Recuperare l'output del primo Transformer layer 5. Recuperare l'output dei seguenti `n - 1` Transformer layers 6. Recuperare l'output dell'intero BrandNewBert Model Gli IDs in input dovrebbero essere un arrary di interi, *per esempio* `input_ids = [0, 4, 4, 3, 2, 4, 1, 7, 19]` Gli output dei seguenti layer di solito dovrebbero essere degli array di float multi-dimensionali come questo: ``` [[ [-0.1465, -0.6501, 0.1993, ..., 0.1451, 0.3430, 0.6024], [-0.4417, -0.5920, 0.3450, ..., -0.3062, 0.6182, 0.7132], [-0.5009, -0.7122, 0.4548, ..., -0.3662, 0.6091, 0.7648], ..., [-0.5613, -0.6332, 0.4324, ..., -0.3792, 0.7372, 0.9288], [-0.5416, -0.6345, 0.4180, ..., -0.3564, 0.6992, 0.9191], [-0.5334, -0.6403, 0.4271, ..., -0.3339, 0.6533, 0.8694]]], ``` Ci aspettiamo che ogni modello aggiunto a ๐Ÿค— Transformers passi con successo un paio di test d'integrazione. Questo significa che il modello originale e la sua implementazione in ๐Ÿค— Transformers abbiano lo stesso output con una precisione di 0.001! Siccome รฉ normale che lo stesso esatto modello, scritto in librerie diverse, possa dare output leggermente diversi, la tolleranza accettata รฉ 1e-3 (0.001). Ricordate che i due modelli devono dare output quasi identici. Dunque, รฉ molto conveniente comparare gli output intermedi di ๐Ÿค— Transformers molteplici volte con gli output intermedi del modello originale di *brand_new_bert*. Di seguito vi diamo alcuni consigli per avere un ambiente di debug il piu efficiente possibile: - Trovate la migliore strategia per fare debug dei risultati intermedi. Per esempio, รฉ la repository originale scritta in PyTorch? 
Se si, molto probabilmente dovrete dedicare un po' di tempo per scrivere degli script piu lunghi, cosรฌ da decomporre il modello originale in piccole sotto-componenti, in modo da poter recuperare i valori intermedi. Oppure, la repo originale รฉ scritta in Tensorflow 1? Se รฉ cosรฌ dovrete fare affidamento ai print di Tensorflow [tf.print](https://www.tensorflow.org/api_docs/python/tf/print) per avere i valori intermedi. Altro caso, la repo รฉ scritta in Jax? Allora assicuratevi che il modello non sia in **jit** quanto testate il foward pass, *per esempio* controllate [questo link](https://github.com/google/jax/issues/196). - Usate i piรน piccoli pretrained checkpoint che potete trovare. Piu piccolo รฉ il checkpoint, piu velocemente sarร  il vostro ciclo di debug. Non รฉ efficiente avere un pretrained model cosรฌ gigante che per il forward pass impieghi piu di 10 secondi. Nel caso in cui i checkpoints siano molto grandi, e non si possa trovare di meglio, allora รฉ buona consuetudine ricorrere a fare un dummy model nel nuovo ambiente, con weights inizializzati random e salvare quei weights per comprare la versione ๐Ÿค— Transformers con il vostro modello - Accertatevi di usare la via piu semplice per chiamare il forward pass nella repo originale. Sarebbe opportuno trovare la funzione originaria che chiami **solo** un singolo forward pass, *per esempio* questa funzione spesso viene chiamata `predict`, `evaluate`, `forward` o `__call__`. Siate sicuri di non fare debug su una funzione che chiami `forward` molteplici volte, *per esempio* per generare testo, come `autoregressive_sample`, `generate`. - Cercate di separare la tokenization dal forward pass del modello. Se la repo originaria mostra esempio dove potete dare come input una stringa, provate a cercare dove nella forward call la stringa viene cambiata in input ids e cominciate il debug da questo punto. Questo vi garantisce un ottimo punto di partenza per scrivere un piccolo script personale dove dare gli input al modello, anziche delle stringhe in input. - Assicuratevi che il debugging **non** sia in training mode. Spesso questo potra il modello a dare degli output random, per via dei molteplici dropout layers. Assicuratevi che il forward pass nell'ambiente di debug sia **deterministico**, cosicche i dropout non siano usati. Alternativamente, potete usare *transformers.utils.set_seed* se la vecchia e nuova implementazione sono nello stesso framework. La seguente sezione vi da ulteriori dettagli e accorgimenti su come potete fare tutto questo per *brand_new_bert*. ### 5.-14. Trasferire BrandNewBert in ๐Ÿค— Transformers Allora cominciamo ad aggiungere un nuovo codice in ๐Ÿค— Transformers. Andate nel vostro fork clone di ๐Ÿค— Transformers: ```bash cd transformers ``` Nel caso speciale in cui stiate aggiungendo un modello, la cui architettura sia identica a una di un modello giร  esistente, dovrete solo aggiugnere uno script di conversione, come descritto [qui](#write-a-conversion-script). In questo caso, potete riutilizzare l'intera architettura del modello gia esistente. Se questo non รฉ il caso, cominciamo con il generare un nuovo modello. Ti consigliamo di utilizzare il seguente script per aggiungere un modello a partire da un modello esistente: ```bash transformers-cli add-new-model-like ``` Ti verrร  richiesto con un questionario di compilare le informazioni di base del tuo modello. 
**Aprire una Pull Request in main huggingface/transformers repo** Prime di cominciare ad adattare il codice automaticamente generato, aprite una nuova PR come "Work in progress (WIP)", *per esempio* "[WIP] Aggiungere *brand_new_bert*", cosicchรฉ il team di Hugging Face possa lavorare al vostro fianco nell' integrare il modello in ๐Ÿค— Transformers. Questi sarebbero gli step generali da seguire: 1. Creare un branch dal main branch con un nome descrittivo ```bash git checkout -b add_brand_new_bert ``` 2. Commit del codice automaticamente generato ```bash git add . git commit ``` 3. Fare fetch e rebase del main esistente ```bash git fetch upstream git rebase upstream/main ``` 4. Push dei cambiamenti al proprio account: ```bash git push -u origin a-descriptive-name-for-my-changes ``` 5. Una volte che siete soddisfatti dei nuovi cambiamenti, andate sulla webpage del vostro fork su GitHub. Cliccate "Pull request". Assiuratevi di aggiungere alcuni membri di Hugging Face come reviewers, nel riguardo alla destra della pagina della PR, cosicche il team Hugging Face verrร  notificato anche per i futuri cambiamenti. 6. Cambiare la PR a draft, cliccando su "Convert to draft" alla destra della pagina della PR Da quel punto in poi, ricordate di fare commit di ogni progresso e cambiamento, cosicche venga mostrato nella PR. Inoltre, ricordatevi di tenere aggiornato il vostro lavoro con il main esistente: ```bash git fetch upstream git merge upstream/main ``` In generale, tutte le domande che avrete riguardo al modello o l'implementazione dovranno essere fatte nella vostra PR e discusse/risolte nella PR stessa. In questa maniera, il team di Hugging Face sarร  sempre notificato quando farete commit di un nuovo codice o se avrete qualche domanda. ร‰ molto utile indicare al team di Hugging Face il codice a cui fate riferimento nella domanda, cosicche il team potra facilmente capire il problema o la domanda. Per fare questo andate sulla tab "Files changed", dove potrete vedere tutti i vostri cambiamenti al codice, andate sulla linea dove volete chiedere una domanda, e cliccate sul simbolo "+" per aggiungere un commento. Ogni volta che una domanda o problema รฉ stato risolto, cliccate sul bottone "Resolve". In questa stessa maniera, Hugging Face aprirร  domande o commenti nel rivedere il vostro codice. Mi raccomando, chiedete piรน domande possibili nella pagina della vostra PR. Se avete domande molto generali, non molto utili per il pubblico, siete liberi di chiedere al team Hugging Face direttamente su slack o email. **5. Adattare i codici per brand_new_bert** Per prima cosa, ci focalizzeremo sul modello e non sui tokenizer. Tutto il codice relative dovrebbe trovarsi in `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` e `src/transformers/models/brand_new_bert/configuration_brand_new_bert.py`. Ora potete finalmente cominciare il codice :). Il codice generato in `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` avrร  sia la stessa architettura di BERT se รฉ un modello encoder-only o BART se รฉ encoder-decoder. A questo punto, ricordatevi cio che avete imparato all'inizio, riguardo agli aspetti teorici del modello: *In che maniera il modello che sto implmementando รฉ diverso da BERT o BART?*. Implementare questi cambi spesso vuol dire cambiare il layer *self-attention*, l'ordine dei layer di normalizzazione e cosรฌ via... 
Ancora una volta ripetiamo, รฉ molto utile vedere architetture simili di modelli gia esistenti in Transformers per avere un'idea migliore su come implementare il modello. **Notate** che a questo punto non dovete avere subito un codice tutto corretto o pulito. Piuttosto, รฉ consigliato cominciare con un codice poco pulito, con copia-incolla del codice originale in `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` fino a che non avrete tutto il codice necessario. In base alla nostra esperienza, รฉ molto meglio aggiungere una prima bozza del codice richiesto e poi correggere e migliorare iterativamente. L'unica cosa essenziale che deve funzionare qui รฉ la seguente instanza: ```python from transformers import BrandNewBertModel, BrandNewBertConfig model = BrandNewBertModel(BrandNewBertConfig()) ``` Questo comando creerร  un modello con i parametri di default definiti in `BrandNewBergConfig()` e weights random. Questo garantisce che `init()` di tutte le componenti funzioni correttamente. **6. Scrivere uno script di conversione** Il prossimo step รฉ scrivere uno script per convertire il checkpoint che avete usato per fare debug su *brand_new_berts* nella repo originale in un checkpoint per la nuova implementazione di *brand_new_bert* in ๐Ÿค— Transformers. Non รฉ consigliato scrivere lo script di conversione da zero, ma piuttosto cercate e guardate script gia esistenti in ๐Ÿค— Transformers, cosรฌ da trovarne uno simile al vostro modello. Di solito basta fare una copia di uno script gia esistente e adattarlo al vostro caso. Non esistate a chiedre al team di Hugging Face a riguardo. - Se state convertendo un modello da TensorFlow a PyTorch, un ottimo inizio รฉ vedere [questo script di conversione per BERT](https://github.com/huggingface/transformers/blob/7acfa95afb8194f8f9c1f4d2c6028224dbed35a2/src/transformers/models/bert/modeling_bert.py#L91) - Se state convertendo un modello da PyTorch a PyTorch, [lo script di conversione di BART puรฒ esservi utile](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bart/convert_bart_original_pytorch_checkpoint_to_pytorch.py) Qui di seguito spiegheremo come i modelli PyTorch salvano i weights per ogni layer e come i nomi dei layer sono definiti. In PyTorch, il nomde del layer รฉ definito dal nome della class attribute che date al layer. Definiamo un modello dummy in PyTorch, chiamato `SimpleModel`: ```python from torch import nn class SimpleModel(nn.Module): def __init__(self): super().__init__() self.dense = nn.Linear(10, 10) self.intermediate = nn.Linear(10, 10) self.layer_norm = nn.LayerNorm(10) ``` Ora possiamo creare un'instanza di questa definizione di modo da inizializzare a random weights: `dense`, `intermediate`, `layer_norm`. Possiamo usare print per vedere l'architettura del modello: ```python model = SimpleModel() print(model) ``` Da cui si ottiene: ``` SimpleModel( (dense): Linear(in_features=10, out_features=10, bias=True) (intermediate): Linear(in_features=10, out_features=10, bias=True) (layer_norm): LayerNorm((10,), eps=1e-05, elementwise_affine=True) ) ``` Si puรฒ vedere come i nomi dei layers siano definiti dal nome della class attribute in PyTorch. 
I valori dei weights di uno specifico layer possono essere visualizzati: ```python print(model.dense.weight.data) ``` ad esempio: ``` tensor([[-0.0818, 0.2207, -0.0749, -0.0030, 0.0045, -0.1569, -0.1598, 0.0212, -0.2077, 0.2157], [ 0.1044, 0.0201, 0.0990, 0.2482, 0.3116, 0.2509, 0.2866, -0.2190, 0.2166, -0.0212], [-0.2000, 0.1107, -0.1999, -0.3119, 0.1559, 0.0993, 0.1776, -0.1950, -0.1023, -0.0447], [-0.0888, -0.1092, 0.2281, 0.0336, 0.1817, -0.0115, 0.2096, 0.1415, -0.1876, -0.2467], [ 0.2208, -0.2352, -0.1426, -0.2636, -0.2889, -0.2061, -0.2849, -0.0465, 0.2577, 0.0402], [ 0.1502, 0.2465, 0.2566, 0.0693, 0.2352, -0.0530, 0.1859, -0.0604, 0.2132, 0.1680], [ 0.1733, -0.2407, -0.1721, 0.1484, 0.0358, -0.0633, -0.0721, -0.0090, 0.2707, -0.2509], [-0.1173, 0.1561, 0.2945, 0.0595, -0.1996, 0.2988, -0.0802, 0.0407, 0.1829, -0.1568], [-0.1164, -0.2228, -0.0403, 0.0428, 0.1339, 0.0047, 0.1967, 0.2923, 0.0333, -0.0536], [-0.1492, -0.1616, 0.1057, 0.1950, -0.2807, -0.2710, -0.1586, 0.0739, 0.2220, 0.2358]]). ``` Nello script di conversione, dovreste riempire quei valori di inizializzazione random con gli stessi weights del corrispondente layer nel checkpoint. *Per esempio* ```python # retrieve matching layer weights, e.g. by # recursive algorithm layer_name = "dense" pretrained_weight = array_of_dense_layer model_pointer = getattr(model, "dense") model_pointer.weight.data = torch.from_numpy(pretrained_weight) ``` Cosรฌ facendo, dovete verificare che ogni inizializzazione random di un peso del modello PyTorch e il suo corrispondente peso nel pretrained checkpoint siano esattamente gli stessi e uguali in **dimensione/shape e nome**. Per fare questo, รฉ **necessario** aggiungere un `assert` per la dimensione/shape e nome: ```python assert ( model_pointer.weight.shape == pretrained_weight.shape ), f"Pointer shape of random weight {model_pointer.shape} and array shape of checkpoint weight {pretrained_weight.shape} mismatched" ``` Inoltre, dovrete fare il print sia dei nomi che dei weights per essere sicuri che siano gli stessi: ```python logger.info(f"Initialize PyTorch weight {layer_name} from {pretrained_weight.name}") ``` Se la dimensione o il nome non sono uguali, probabilmente avete sbagliato ad assegnare il peso nel checkpoint o nel layer costrutture di ๐Ÿค— Transformers. Una dimensione sbagliata puรฒ essere dovuta ad un errore nei parameteri in `BrandNewBertConfig()`. Tuttavia, puรฒ essere anche che l'implementazione del layer in PyTorch richieda di fare una transposizione della matrice dei weights. Infine, controllate **tutti** che tutti i weights inizializzati e fate print di tutti i weights del checkpoint che non sono stati usati per l'inizializzazione, di modo da essere sicuri che il modello sia correttamente convertito. ร‰ normale che ci siano errori nel test di conversione, fai per un errore in `BrandNewBertConfig()`, o un errore nell'architettura in ๐Ÿค— Transformers, o un bug in `init()`. Questo step dev'essere fatto tramite iterazioni fino a che non si raggiungano gli stessi valori per i weights. Una volta che il checkpoint รฉ stato correttamente caricato in ๐Ÿค— Transformers, potete salvare il modello in una cartella di vostra scelta `/path/to/converted/checkpoint/folder` che contenga sia `pytorch_model.bin` che `config.json`: ```python model.save_pretrained("/path/to/converted/checkpoint/folder") ``` **7. 
Implementare il forward pass** Una volta che i weights pretrained sono stati correttamente caricati in ๐Ÿค— Transformers, dovrete assicurarvi che il forward pass sia correttamente implementato. [Qui](#3-4-provare-un-pretrained-checkpoint-usando-la-repo-originale), avete give creato e provato uno script che testi il forward pass del modello usando la repo originaria. Ora dovrete fare lo stesso con uno script analogo usando l'implementazione in ๐Ÿค— Transformers anzichรฉ l'originale. Piu o meno lo script dovrebbe essere: ```python model = BrandNewBertModel.from_pretrained("/path/to/converted/checkpoint/folder") input_ids = [0, 4, 4, 3, 2, 4, 1, 7, 19] output = model(input_ids).last_hidden_states ``` Di solito l'output da ๐Ÿค— Transformers non รฉ uguale uguale all'output originario, sopratto la prima volta. Non vi abbattete - รฉ normale! Prima di tutto assicuratevi che non ci siano errori o che non vengano segnalati degli errori nella forward pass. Spesso capita che ci siano dimensioni sbagliate o data type sbagliati, *ad esempio* `torch.long` anziche `torch.float32`. Non esistate a chiedere al team Hugging Face! Nella parte finale assicuratevi che l'implementazione ๐Ÿค— Transformers funzioni correttamente cosi da testare che gli output siano equivalenti a una precisione di `1e-3`. Controllate che `outputs.shape` siano le stesse tra ๐Ÿค— Transformers e l'implementazione originaria. Poi, controllate che i valori in output siano identici. Questa รฉ sicuramente la parte piรน difficile, qui una serie di errori comuni quando gli output non sono uguali: - Alcuni layers non sono stati aggiunti, *ad esempio* un *activation* layer non รฉ stato aggiunto, o ci si รฉ scordati di una connessione - La matrice del word embedding non รฉ stata ripareggiata - Ci sono degli embeddings posizionali sbagliati perchรฉ l'implementazione originaria ha un offset - Il dropout รฉ in azione durante il forward pass. Per sistemare questo errore controllate che *model.training = False* e che il dropout non sia stato attivato nel forward pass, * per esempio * passate *self.training* a [PyTorch's functional dropout](https://pytorch.org/docs/stable/nn.functional.html?highlight=dropout#torch.nn.functional.dropout) La miglior maniera per sistemare il problema รฉ di vedere all'implementazione originaria del forward pass e in ๐Ÿค— Transformers fianco a fianco e vedere se ci sono delle differenze. In teoria, con debug e print degli output intermedie di entrambe le implementazioni nel forward pass nell'esatta posizione del network dovrebbe aiutarvi a vedere dove ci sono differenze tra i due frameworks. Come prima mossa controllate che `input_ids` siano identici in entrambi gli scripts. Da lรฌ andate fino all'ultimo layer. Potrete notare una differenza tra le due implementazioni a quel punto. Una volta che lo stesso output รฉ stato ragguingi, verificate gli output con `torch.allclose(original_output, output, atol=1e-3)`. A questo punto se รฉ tutto a posto: complimenti! Le parti seguenti saranno una passeggiata ๐Ÿ˜Š. **8. Aggiungere i test necessari per il modello** A questo punto avete aggiunto con successo il vostro nuovo modello. Tuttavia, รฉ molto probabile che il modello non sia del tutto ok con il design richiesto. Per essere sicuri che l'implementazione sia consona e compatibile con ๐Ÿค— Transformers รฉ necessario implementare dei tests. Il Cookiecutter dovrebbe fornire automaticamente dei file per test per il vostro modello, di solito nella folder `tests/test_modeling_brand_new_bert.py`. 
Provate questo per verificare l'ok nei test piu comuni: ```bash pytest tests/test_modeling_brand_new_bert.py ``` Una volta sistemati i test comuni, bisogna assicurarsi che il vostro lavoro sia correttamente testato cosicchรจ: - a) La community puo capire in maniera semplice il vostro lavoro controllando tests specifici del modello *brand_new_bert*, - b) Implementazioni future del vostro modello non rompano alcune feature importante del modello. Per prima cosa agguingete dei test d'integrazione. Questi sono essenziali perche fanno la stessa funzione degli scripts di debug usati precedentemente. Un template per questi tests esiste gia nel Cookiecutter ed รฉ sotto il nome di `BrandNewBertModelIntegrationTests`, voi dovrete solo completarlo. Una volta che questi tests sono OK, provate: ```bash RUN_SLOW=1 pytest -sv tests/test_modeling_brand_new_bert.py::BrandNewBertModelIntegrationTests ``` <Tip> Nel caso siate su Windows, sostituite `RUN_SLOW=1` con `SET RUN_SLOW=1` </Tip> Di seguito, tutte le features che sono utili e necessarire per *brand_new_bert* devono essere testate in test separati, contenuti in `BrandNewBertModelTester`/ `BrandNewBertModelTest`. spesso la gente si scorda questi test, ma ricordate che sono utili per: - Aiuta gli utenti a capire il vostro codice meglio, richiamando l'attenzione su queste nuove features - Developers e contributors futuri potranno velocemente testare nuove implementazioni del modello testanto questi casi speciali. **9. Implementare il tokenizer** A questo punto avremo bisogno un tokenizer per *brand_new_bert*. Di solito il tokenizer รฉ uguale ad altri modelli in ๐Ÿค— Transformers. ร‰ importante che troviate il file con il tokenizer originale e che lo carichiate in ๐Ÿค— Transformers. Per controllare che il tokenizer funzioni in modo corretto, create uno script nella repo originaria che riceva come input una stringa e ritorni gli `input_ids`. Piu o meno questo potrebbe essere il codice: ```python input_str = "This is a long example input string containing special characters .$?-, numbers 2872 234 12 and words." model = BrandNewBertModel.load_pretrained_checkpoint("/path/to/checkpoint/") input_ids = model.tokenize(input_str) ``` Potrebbe richiedere un po' di tempo, ma guardate ancora alla repo originaria per trovare la funzione corretta del tokenizer. A volte capita di dover riscrivere il tokenizer nella repo originaria, di modo da avere come output gli `input_ids`. A quel punto uno script analogo รฉ necessario in ๐Ÿค— Transformers: ```python from transformers import BrandNewBertTokenizer input_str = "This is a long example input string containing special characters .$?-, numbers 2872 234 12 and words." tokenizer = BrandNewBertTokenizer.from_pretrained("/path/to/tokenizer/folder/") input_ids = tokenizer(input_str).input_ids ``` Una volta che `input_ids` sono uguali, bisogna aggiungere un test per il tokenizer. Il file test per tokenizer di *brand_new_brand* dovrebbe avere un paio di hard-coded test d'integrazione. **10. Test end-to-end** Ora che avete il tokenizer, dovrete aggiungere dei test d'integrazione per l'intero workflow in `tests/test_modeling_brand_new_bert.py` in ๐Ÿค— Transformer. Questi test devono mostrare che un significante campione text-to-text funzioni come ci si aspetta nell'implementazione di ๐Ÿค— Transformers. *Per esempio* potreste usare dei source-to-target-translation, o un sommario di un articolo, o un domanda-risposta e cosi via. 
Se nessuno dei checkpoints รฉ stato ultra parametrizzato per task simili, allora i tests per il modello sono piu che sufficienti. Nello step finale dovete assicurarvi che il modello sia totalmente funzionale, e consigliamo anche di provare a testare su GPU. Puo succedere che ci si scordi un `.to(self.device)` ad esempio. Se non avete accesso a GPU, il team Hugging Face puo provvedere a testare questo aspetto per voi. **11. Aggiungere una Docstring** Siete quasi alla fine! L'ultima cosa rimasta รฉ avere una bella docstring e una pagina doc. Il Cookiecutter dovrebbe provvedere giร  un template chiamato `docs/source/model_doc/brand_new_bert.rst`, che dovrete compilare. La prima cosa che un utente farร  per usare il vostro modello sarร  dare una bella lettura al doc. Quindi proponete una documentazione chiara e concisa. ร‰ molto utile per la community avere anche delle *Tips* per mostrare come il modello puo' essere usato. Non esitate a chiedere a Hugging Face riguardo alle docstirng. Quindi, assicuratevi che la docstring sia stata aggiunta a `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py`. Assicuratevi che la docstring sia corretta e che includa tutti i necessari input e output. Abbiamo una guida dettagliata per scrivere la documentazione e docstring. **Rifattorizzare il codice** Perfetto! Ora che abbiamo tutto per *brand_new_bert* controllate che lo stile del codice sia ok: ```bash make style ``` E che il codice passi i quality check: ```bash make quality ``` A volte capita che manchino delle informazioninella docstring o alcuni nomi sbagliati, questo farร  fallire i tests sopra. Ripetiamo: chiedete pure a Hugging Face, saremo lieti di aiutarvi. Per ultimo, fare del refactoring del codice una volta che รฉ stato creato. Avete finito con il codice, congratulazioni! ๐ŸŽ‰ Siete fantasticiiiiiii! ๐Ÿ˜Ž **12. Caricare il modello sul model hub** In questa ultima parte dovrete convertire e caricare il modello, con tutti i checkpoints, nel model hub e aggiungere una model card per ogni checkpoint caricato. Leggete la nostra guida [Model sharing and uploading Page](model_sharing) per avere familiaritร  con l'hub. Di solito in questa parte lavorate a fianco di Hugging face per decidere un nome che sia ok per ogni checkpoint, per ottenere i permessi necessari per caricare il modello nell'organizzazione dell'autore di *brand_new_bert*. Il metodo `push_to_hub`, presente in tutti i modelli `transformers`, รฉ una maniera rapida e indolore per caricare il vostro checkpoint sull'hub: ```python brand_new_bert.push_to_hub( repo_path_or_name="brand_new_bert", # Uncomment the following line to push to an organization # organization="<ORGANIZATION>", commit_message="Add model", use_temp_dir=True, ) ``` Vale la pena spendere un po' di tempo per creare una model card ad-hoc per ogni checkpoint. Le model cards dovrebbero suggerire le caratteristiche specifiche del checkpoint, *per esempio* su che dataset il checkpoint รฉ stato pretrained o fine-tuned. O che su che genere di task il modello lavoro? E anche buona pratica includere del codice su come usare il modello correttamente. **13. (Opzionale) Aggiungere un notebook** ร‰ molto utile aggiungere un notebook, che dimostri in dettaglio come *brand_new_bert* si utilizzi per fare inferenza e/o fine-tuned su specifiche task. Non รฉ una cosa obbligatoria da avere nella vostra PR, ma รฉ molto utile per la community. **14. Sottomettere la PR** L'ultimissimo step! Ovvero il merge della PR nel main. 
Di solito il team Hugging Face a questo punto vi avrà già aiutato, ma é ok prendere un po' di tempo per pulire la descrizione e i commenti nel codice.

### Condividete il vostro lavoro!!

É ora tempo di prendere un po' di credito dalla comunità per il vostro lavoro! Caricare e implementare un nuovo modello é un grandissimo contributo per Transformers e l'intera community NLP. Il codice e la conversione dei modelli pre-trained saranno sicuramente utilizzati da centinaia o migliaia di sviluppatori e ricercatori. Siate fieri e orgogliosi di condividere il vostro traguardo con l'intera community :)

**Avete creato un altro modello che é super facile da usare per tutti quanti nella community! 🤯**
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/it/installation.md
<!--- Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Installazione Installa ๐Ÿค— Transformers per qualsiasi libreria di deep learning con cui stai lavorando, imposta la tua cache, e opzionalmente configura ๐Ÿค— Transformers per l'esecuzione offline. ๐Ÿค— Transformers รจ testato su Python 3.6+, PyTorch 1.1.0+, TensorFlow 2.0+, e Flax. Segui le istruzioni di installazione seguenti per la libreria di deep learning che stai utilizzando: * [PyTorch](https://pytorch.org/get-started/locally/) istruzioni di installazione. * [TensorFlow 2.0](https://www.tensorflow.org/install/pip) istruzioni di installazione. * [Flax](https://flax.readthedocs.io/en/latest/) istruzioni di installazione. ## Installazione con pip Puoi installare ๐Ÿค— Transformers in un [ambiente virtuale](https://docs.python.org/3/library/venv.html). Se non sei familiare con gli ambienti virtuali in Python, dai un'occhiata a questa [guida](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/). Un ambiente virtuale rende piรน semplice la gestione di progetti differenti, evitando problemi di compatibilitร  tra dipendenze. Inizia creando un ambiente virtuale nella directory del tuo progetto: ```bash python -m venv .env ``` Attiva l'ambiente virtuale: ```bash source .env/bin/activate ``` Ora puoi procedere con l'installazione di ๐Ÿค— Transformers eseguendo il comando seguente: ```bash pip install transformers ``` Per il solo supporto della CPU, puoi installare facilmente ๐Ÿค— Transformers e una libreria di deep learning in solo una riga. Ad esempio, installiamo ๐Ÿค— Transformers e PyTorch con: ```bash pip install transformers[torch] ``` ๐Ÿค— Transformers e TensorFlow 2.0: ```bash pip install transformers[tf-cpu] ``` ๐Ÿค— Transformers e Flax: ```bash pip install transformers[flax] ``` Infine, verifica se ๐Ÿค— Transformers รจ stato installato in modo appropriato eseguendo il seguente comando. Questo scaricherร  un modello pre-allenato: ```bash python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))" ``` Dopodichรฉ stampa l'etichetta e il punteggio: ```bash [{'label': 'POSITIVE', 'score': 0.9998704791069031}] ``` ## Installazione dalla fonte Installa ๐Ÿค— Transformers dalla fonte con il seguente comando: ```bash pip install git+https://github.com/huggingface/transformers ``` Questo comando installa la versione `main` piรน attuale invece dell'ultima versione stabile. Questo รจ utile per stare al passo con gli ultimi sviluppi. Ad esempio, se un bug รจ stato sistemato da quando รจ uscita l'ultima versione ufficiale ma non รจ stata ancora rilasciata una nuova versione. Tuttavia, questo significa che questa versione `main` puรฒ non essere sempre stabile. Ci sforziamo per mantenere la versione `main` operativa, e la maggior parte dei problemi viene risolta in poche ore o in un giorno. 
Se riscontri un problema, per favore apri una [Issue](https://github.com/huggingface/transformers/issues) cosรฌ possiamo sistemarlo ancora piรน velocemente! Controlla se ๐Ÿค— Transformers รจ stata installata in modo appropriato con il seguente comando: ```bash python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('I love you'))" ``` ## Installazione modificabile Hai bisogno di un'installazione modificabile se vuoi: * Usare la versione `main` del codice dalla fonte. * Contribuire a ๐Ÿค— Transformers e hai bisogno di testare i cambiamenti nel codice. Clona il repository e installa ๐Ÿค— Transformers con i seguenti comandi: ```bash git clone https://github.com/huggingface/transformers.git cd transformers pip install -e . ``` Questi comandi collegheranno la cartella in cui รจ stato clonato il repository e i path delle librerie Python. Python guarderร  ora all'interno della cartella clonata, oltre ai normali path delle librerie. Per esempio, se i tuoi pacchetti Python sono installati tipicamente in `~/anaconda3/envs/main/lib/python3.7/site-packages/`, Python cercherร  anche nella cartella clonata: `~/transformers/`. <Tip warning={true}> Devi tenere la cartella `transformers` se vuoi continuare ad utilizzare la libreria. </Tip> Ora puoi facilmente aggiornare il tuo clone all'ultima versione di ๐Ÿค— Transformers con il seguente comando: ```bash cd ~/transformers/ git pull ``` Il tuo ambiente Python troverร  la versione `main` di ๐Ÿค— Transformers alla prossima esecuzione. ## Installazione con conda Installazione dal canale conda `conda-forge`: ```bash conda install conda-forge::transformers ``` ## Impostazione della cache I modelli pre-allenati sono scaricati e memorizzati localmente nella cache in: `~/.cache/huggingface/transformers/`. Questa รจ la directory di default data dalla variabile d'ambiente della shell `TRANSFORMERS_CACHE`. Su Windows, la directory di default รจ data da `C:\Users\username\.cache\huggingface\transformers`. Puoi cambiare le variabili d'ambiente della shell indicate in seguito, in ordine di prioritร , per specificare una directory differente per la cache: 1. Variabile d'ambiente della shell (default): `TRANSFORMERS_CACHE`. 2. Variabile d'ambiente della shell: `HF_HOME` + `transformers/`. 3. Variabile d'ambiente della shell: `XDG_CACHE_HOME` + `/huggingface/transformers`. <Tip> ๐Ÿค— Transformers utilizzerร  le variabili d'ambiente della shell `PYTORCH_TRANSFORMERS_CACHE` o `PYTORCH_PRETRAINED_BERT_CACHE` se si proviene da un'iterazione precedente di questa libreria e sono state impostate queste variabili d'ambiente, a meno che non si specifichi la variabile d'ambiente della shell `TRANSFORMERS_CACHE`. </Tip> ## Modalitร  Offline ๐Ÿค— Transformers puรฒ essere eseguita in un ambiente firewalled o offline utilizzando solo file locali. Imposta la variabile d'ambiente `TRANSFORMERS_OFFLINE=1` per abilitare questo comportamento. <Tip> Aggiungi [๐Ÿค— Datasets](https://huggingface.co/docs/datasets/) al tuo flusso di lavoro offline di training impostando la variabile d'ambiente `HF_DATASETS_OFFLINE=1`. </Tip> Ad esempio, in genere si esegue un programma su una rete normale, protetta da firewall per le istanze esterne, con il seguente comando: ```bash python examples/pytorch/translation/run_translation.py --model_name_or_path google-t5/t5-small --dataset_name wmt16 --dataset_config ro-en ... 
``` Esegui lo stesso programma in un'istanza offline con: ```bash HF_DATASETS_OFFLINE=1 TRANSFORMERS_OFFLINE=1 \ python examples/pytorch/translation/run_translation.py --model_name_or_path google-t5/t5-small --dataset_name wmt16 --dataset_config ro-en ... ``` Lo script viene ora eseguito senza bloccarsi o attendere il timeout, perchรฉ sa di dover cercare solo file locali. ### Ottenere modelli e tokenizer per l'uso offline Un'altra opzione per utilizzare offline ๐Ÿค— Transformers รจ scaricare i file in anticipo, e poi puntare al loro path locale quando hai la necessitร  di utilizzarli offline. Ci sono tre modi per fare questo: * Scarica un file tramite l'interfaccia utente sul [Model Hub](https://huggingface.co/models) premendo sull'icona โ†“. ![download-icon](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/download-icon.png) * Utilizza il flusso [`PreTrainedModel.from_pretrained`] e [`PreTrainedModel.save_pretrained`]: 1. Scarica i tuoi file in anticipo con [`PreTrainedModel.from_pretrained`]: ```py >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM >>> tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B") >>> model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B") ``` 2. Salva i tuoi file in una directory specificata con [`PreTrainedModel.save_pretrained`]: ```py >>> tokenizer.save_pretrained("./il/tuo/path/bigscience_t0") >>> model.save_pretrained("./il/tuo/path/bigscience_t0") ``` 3. Ora quando sei offline, carica i tuoi file con [`PreTrainedModel.from_pretrained`] dalla directory specificata: ```py >>> tokenizer = AutoTokenizer.from_pretrained("./il/tuo/path/bigscience_t0") >>> model = AutoModel.from_pretrained("./il/tuo/path/bigscience_t0") ``` * Scarica in maniera programmatica i file con la libreria [huggingface_hub](https://github.com/huggingface/huggingface_hub/tree/main/src/huggingface_hub): 1. Installa la libreria `huggingface_hub` nel tuo ambiente virtuale: ```bash python -m pip install huggingface_hub ``` 2. Utilizza la funzione [`hf_hub_download`](https://huggingface.co/docs/hub/adding-a-library#download-files-from-the-hub) per scaricare un file in un path specifico. Per esempio, il seguente comando scarica il file `config.json` dal modello [T0](https://huggingface.co/bigscience/T0_3B) nel path che desideri: ```py >>> from huggingface_hub import hf_hub_download >>> hf_hub_download(repo_id="bigscience/T0_3B", filename="config.json", cache_dir="./il/tuo/path/bigscience_t0") ``` Una volta che il tuo file รจ scaricato e salvato in cache localmente, specifica il suo path locale per caricarlo e utilizzarlo: ```py >>> from transformers import AutoConfig >>> config = AutoConfig.from_pretrained("./il/tuo/path/bigscience_t0/config.json") ``` <Tip> Fai riferimento alla sezione [How to download files from the Hub](https://huggingface.co/docs/hub/how-to-downstream) per avere maggiori dettagli su come scaricare modelli presenti sull Hub. </Tip>
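In alternativa (o in aggiunta) alla variabile d'ambiente `TRANSFORMERS_OFFLINE=1`, puoi passare `local_files_only=True` a `from_pretrained` per forzare l'uso dei soli file locali. Una bozza minimale, che riutilizza il path di esempio della sezione precedente:

```py
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

>>> tokenizer = AutoTokenizer.from_pretrained("./il/tuo/path/bigscience_t0", local_files_only=True)
>>> model = AutoModelForSeq2SeqLM.from_pretrained("./il/tuo/path/bigscience_t0", local_files_only=True)
```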
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/it/quicktour.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Quick tour [[open-in-colab]] Entra in azione con ๐Ÿค— Transformers! Inizia utilizzando [`pipeline`] per un'inferenza veloce, carica un modello pre-allenato e un tokenizer con una [AutoClass](./model_doc/auto) per risolvere i tuoi compiti legati a testo, immagini o audio. <Tip> Tutti gli esempi di codice presenti in questa documentazione hanno un pulsante in alto a sinistra che permette di selezionare tra PyTorch e TensorFlow. Se questo non รจ presente, ci si aspetta che il codice funzioni per entrambi i backend senza alcun cambiamento. </Tip> ## Pipeline [`pipeline`] รจ il modo piรน semplice per utilizzare un modello pre-allenato per un dato compito. <Youtube id="tiZFewofSLM"/> La [`pipeline`] supporta molti compiti comuni: **Testo**: * Analisi del Sentimento (Sentiment Analysis, in inglese): classifica la polaritร  di un testo dato. * Generazione del Testo (Text Generation, in inglese): genera del testo a partire da un dato input. * Riconoscimento di Entitร  (Name Entity Recognition o NER, in inglese): etichetta ogni parola con l'entitร  che questa rappresenta (persona, data, luogo, ecc.). * Rispondere a Domande (Question answering, in inglese): estrae la risposta da un contesto, dato del contesto e una domanda. * Riempimento di Maschere (Fill-mask, in inglese): riempie gli spazi mancanti in un testo che ha parole mascherate. * Riassumere (Summarization, in inglese): genera una sintesi di una lunga sequenza di testo o di un documento. * Traduzione (Translation, in inglese): traduce un testo in un'altra lingua. * Estrazione di Caratteristiche (Feature Extraction, in inglese): crea un tensore che rappresenta un testo. **Immagini**: * Classificazione di Immagini (Image Classification, in inglese): classifica un'immagine. * Segmentazione di Immagini (Image Segmentation, in inglese): classifica ogni pixel di un'immagine. * Rilevazione di Oggetti (Object Detection, in inglese): rileva oggetti all'interno di un'immagine. **Audio**: * Classificazione di Audio (Audio Classification, in inglese): assegna un'etichetta ad un segmento di audio dato. * Riconoscimento Vocale Automatico (Automatic Speech Recognition o ASR, in inglese): trascrive il contenuto di un audio dato in un testo. <Tip> Per maggiori dettagli legati alla [`pipeline`] e ai compiti ad essa associati, fai riferimento alla documentazione [qui](./main_classes/pipelines). </Tip> ### Utilizzo della Pipeline Nel seguente esempio, utilizzerai la [`pipeline`] per l'analisi del sentimento. 
Installa le seguenti dipendenze se non lo hai giร  fatto: <frameworkcontent> <pt> ```bash pip install torch ``` </pt> <tf> ```bash pip install tensorflow ``` </tf> </frameworkcontent> Importa [`pipeline`] e specifica il compito che vuoi completare: ```py >>> from transformers import pipeline >>> classificatore = pipeline("sentiment-analysis", model="MilaNLProc/feel-it-italian-sentiment") ``` La pipeline scarica e salva il [modello pre-allenato](https://huggingface.co/MilaNLProc/feel-it-italian-sentiment) e il tokenizer per l'analisi del sentimento. Se non avessimo scelto un modello, la pipeline ne avrebbe scelto uno di default. Ora puoi utilizzare il `classifier` sul tuo testo obiettivo: ```py >>> classificatore("Siamo molto felici di mostrarti la libreria ๐Ÿค— Transformers.") [{'label': 'positive', 'score': 0.9997}] ``` Per piรน di una frase, passa una lista di frasi alla [`pipeline`] la quale restituirร  una lista di dizionari: ```py >>> risultati = classificatore( ... ["Siamo molto felici di mostrarti la libreria ๐Ÿค— Transformers.", "Speriamo te non la odierai."] ... ) >>> for risultato in risultati: ... print(f"etichetta: {risultato['label']}, con punteggio: {round(risultato['score'], 4)}") etichetta: positive, con punteggio: 0.9998 etichetta: negative, con punteggio: 0.9998 ``` La [`pipeline`] puรฒ anche iterare su un dataset intero. Inizia installando la libreria [๐Ÿค— Datasets](https://huggingface.co/docs/datasets/): ```bash pip install datasets ``` Crea una [`pipeline`] con il compito che vuoi risolvere e con il modello che vuoi utilizzare. ```py >>> import torch >>> from transformers import pipeline >>> riconoscitore_vocale = pipeline( ... "automatic-speech-recognition", model="radiogroup-crits/wav2vec2-xls-r-1b-italian-doc4lm-5gram" ... ) ``` Poi, carica un dataset (vedi ๐Ÿค— Datasets [Quick Start](https://huggingface.co/docs/datasets/quickstart) per maggiori dettagli) sul quale vuoi iterare. Per esempio, carichiamo il dataset [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14): ```py >>> from datasets import load_dataset, Audio >>> dataset = load_dataset("PolyAI/minds14", name="it-IT", split="train") # doctest: +IGNORE_RESULT ``` Dobbiamo assicurarci che la frequenza di campionamento del set di dati corrisponda alla frequenza di campionamento con cui รจ stato addestrato `radiogroup-crits/wav2vec2-xls-r-1b-italian-doc4lm-5gram`. ```py >>> dataset = dataset.cast_column("audio", Audio(sampling_rate=riconoscitore_vocale.feature_extractor.sampling_rate)) ``` I file audio vengono caricati automaticamente e ri-campionati quando chiamiamo la colonna "audio". Estraiamo i vettori delle forme d'onda grezze delle prime 4 osservazioni e passiamoli come lista alla pipeline: ```py >>> risultato = riconoscitore_vocale(dataset[:4]["audio"]) >>> print([d["text"] for d in risultato]) ['dovrei caricare dei soldi sul mio conto corrente', 'buongiorno e senza vorrei depositare denaro sul mio conto corrente come devo fare per cortesia', 'sรฌ salve vorrei depositare del denaro sul mio conto', 'e buon pomeriggio vorrei depositare dei soldi sul mio conto bancario volleo sapere come posso fare se e posso farlo online ed un altro conto o andandoo tramite bancomut'] ``` Per un dataset piรน grande dove gli input sono di dimensione maggiore (come nel parlato/audio o nella visione), dovrai passare un generatore al posto di una lista che carica tutti gli input in memoria. Guarda la [documentazione della pipeline](./main_classes/pipelines) per maggiori informazioni. 
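A titolo puramente indicativo, ecco uno sketch (non presente nel tutorial originale) di come potresti passare un generatore alla pipeline definita sopra, in modo da non caricare tutti gli input in memoria; i nomi `generatore_di_audio` e `uscita` sono ipotetici:

```py
>>> def generatore_di_audio():
...     # Produce un input alla volta invece di costruire una lista completa in memoria
...     for esempio in dataset:
...         yield esempio["audio"]["array"]

>>> for uscita in riconoscitore_vocale(generatore_di_audio()):
...     print(uscita["text"])  # doctest: +SKIP
```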
### Utilizzare un altro modello e tokenizer nella pipeline La [`pipeline`] puรฒ ospitare qualsiasi modello del [Model Hub](https://huggingface.co/models), rendendo semplice l'adattamento della [`pipeline`] per altri casi d'uso. Per esempio, se si vuole un modello capace di trattare testo in francese, usa i tag presenti nel Model Hub in modo da filtrare per ottenere un modello appropriato. Il miglior risultato filtrato restituisce un modello multi-lingua [BERT model](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) fine-tuned per l'analisi del sentimento. Ottimo, utilizziamo questo modello! ```py >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" ``` <frameworkcontent> <pt> Usa [`AutoModelForSequenceClassification`] e [`AutoTokenizer`] per caricare il modello pre-allenato e il suo tokenizer associato (maggiori informazioni su una `AutoClass` in seguito): ```py >>> from transformers import AutoTokenizer, AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained(model_name) >>> tokenizer = AutoTokenizer.from_pretrained(model_name) ``` </pt> <tf> Usa [`TFAutoModelForSequenceClassification`] e [`AutoTokenizer`] per caricare il modello pre-allenato e il suo tokenizer associato (maggiori informazioni su una `TFAutoClass` in seguito): ```py >>> from transformers import AutoTokenizer, TFAutoModelForSequenceClassification >>> model = TFAutoModelForSequenceClassification.from_pretrained(model_name) >>> tokenizer = AutoTokenizer.from_pretrained(model_name) ``` </tf> </frameworkcontent> Poi puoi specificare il modello e il tokenizer nella [`pipeline`], e applicare il `classifier` sul tuo testo obiettivo: ```py >>> classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer) >>> classifier("Nous sommes trรจs heureux de vous prรฉsenter la bibliothรจque ๐Ÿค— Transformers.") [{'label': '5 stars', 'score': 0.7273}] ``` Se non riesci a trovare un modello per il tuo caso d'uso, dovrai fare fine-tuning di un modello pre-allenato sui tuoi dati. Dai un'occhiata al nostro tutorial [fine-tuning tutorial](./training) per imparare come. Infine, dopo che hai completato il fine-tuning del tuo modello pre-allenato, considera per favore di condividerlo (vedi il tutorial [qui](./model_sharing)) con la comunitร  sul Model Hub per democratizzare l'NLP! ๐Ÿค— ## AutoClass <Youtube id="AhChOFRegn4"/> Al suo interno, le classi [`AutoModelForSequenceClassification`] e [`AutoTokenizer`] lavorano assieme per dare potere alla [`pipeline`]. Una [AutoClass](./model_doc/auto) รจ una scorciatoia che automaticamente recupera l'architettura di un modello pre-allenato a partire dal suo nome o path. Hai solo bisogno di selezionare la `AutoClass` appropriata per il tuo compito e il suo tokenizer associato con [`AutoTokenizer`]. Ritorniamo al nostro esempio e vediamo come puoi utilizzare la `AutoClass` per replicare i risultati della [`pipeline`]. ### AutoTokenizer Un tokenizer รจ responsabile dell'elaborazione del testo in modo da trasformarlo in un formato comprensibile dal modello. Per prima cosa, il tokenizer dividerร  il testo in parole chiamate *token*. Ci sono diverse regole che governano il processo di tokenizzazione, tra cui come dividere una parola e a quale livello (impara di piรน sulla tokenizzazione [qui](./tokenizer_summary)). 
La cosa piรน importante da ricordare comunque รจ che hai bisogno di inizializzare il tokenizer con lo stesso nome del modello in modo da assicurarti che stai utilizzando le stesse regole di tokenizzazione con cui il modello รจ stato pre-allenato. Carica un tokenizer con [`AutoTokenizer`]: ```py >>> from transformers import AutoTokenizer >>> nome_del_modello = "nlptown/bert-base-multilingual-uncased-sentiment" >>> tokenizer = AutoTokenizer.from_pretrained(nome_del_modello) ``` Dopodichรฉ, il tokenizer converte i token in numeri in modo da costruire un tensore come input del modello. Questo รจ conosciuto come il *vocabolario* del modello. Passa il tuo testo al tokenizer: ```py >>> encoding = tokenizer("Siamo molto felici di mostrarti la libreria ๐Ÿค— Transformers.") >>> print(encoding) {'input_ids': [101, 56821, 10132, 14407, 13019, 13007, 10120, 47201, 10330, 10106, 91686, 100, 58263, 119, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} ``` Il tokenizer restituirร  un dizionario contenente: * [input_ids](./glossary#input-ids): rappresentazioni numeriche dei tuoi token. * [attention_mask](.glossary#attention-mask): indica quali token devono essere presi in considerazione. Come con la [`pipeline`], il tokenizer accetterร  una lista di input. In piรน, il tokenizer puรฒ anche completare (pad, in inglese) e troncare il testo in modo da restituire un lotto (batch, in inglese) di lunghezza uniforme: <frameworkcontent> <pt> ```py >>> pt_batch = tokenizer( ... ["Siamo molto felici di mostrarti la libreria ๐Ÿค— Transformers.", "Speriamo te non la odierai."], ... padding=True, ... truncation=True, ... max_length=512, ... return_tensors="pt", ... ) ``` </pt> <tf> ```py >>> tf_batch = tokenizer( ... ["Siamo molto felici di mostrarti la libreria ๐Ÿค— Transformers.", "Speriamo te non la odierai."], ... padding=True, ... truncation=True, ... max_length=512, ... return_tensors="tf", ... ) ``` </tf> </frameworkcontent> Leggi il tutorial sul [preprocessing](./preprocessing) per maggiori dettagli sulla tokenizzazione. ### AutoModel <frameworkcontent> <pt> ๐Ÿค— Transformers fornisce un metodo semplice e unificato per caricare istanze pre-allenate. Questo significa che puoi caricare un [`AutoModel`] come caricheresti un [`AutoTokenizer`]. L'unica differenza รจ selezionare l'[`AutoModel`] corretto per il compito di interesse. Dato che stai facendo classificazione di testi, o sequenze, carica [`AutoModelForSequenceClassification`]: ```py >>> from transformers import AutoModelForSequenceClassification >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" >>> pt_model = AutoModelForSequenceClassification.from_pretrained(model_name) ``` <Tip> Guarda il [task summary](./task_summary) per sapere quale classe di [`AutoModel`] utilizzare per quale compito. </Tip> Ora puoi passare il tuo lotto di input pre-processati direttamente al modello. Devi solo spacchettare il dizionario aggiungendo `**`: ```py >>> pt_outputs = pt_model(**pt_batch) ``` Il modello produrrร  le attivazioni finali nell'attributo `logits`. 
Applica la funzione softmax a `logits` per ottenere le probabilitร : ```py >>> from torch import nn >>> pt_predictions = nn.functional.softmax(pt_outputs.logits, dim=-1) >>> print(pt_predictions) tensor([[0.0041, 0.0037, 0.0203, 0.2005, 0.7713], [0.3766, 0.3292, 0.1832, 0.0558, 0.0552]], grad_fn=<SoftmaxBackward0>) ``` </pt> <tf> ๐Ÿค— Transformers fornisce un metodo semplice e unificato per caricare istanze pre-allenate. Questo significa che puoi caricare un [`TFAutoModel`] come caricheresti un [`AutoTokenizer`]. L'unica differenza รจ selezionare il [`TFAutoModel`] corretto per il compito di interesse. Dato che stai facendo classificazione di testi, o sequenze, carica [`TFAutoModelForSequenceClassification`]: ```py >>> from transformers import TFAutoModelForSequenceClassification >>> nome_del_modello = "nlptown/bert-base-multilingual-uncased-sentiment" >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(nome_del_modello) ``` <Tip> Guarda il [task summary](./task_summary) per sapere quale classe di [`AutoModel`] utilizzare per quale compito. </Tip> Ora puoi passare il tuo lotto di input pre-processati direttamente al modello passando le chiavi del dizionario al tensore: ```py >>> tf_outputs = tf_model(tf_batch) ``` Il modello produrrร  le attivazioni finali nell'attributo `logits`. Applica la funzione softmax a `logits` per ottenere le probabilitร : ```py >>> import tensorflow as tf >>> tf_predictions = tf.nn.softmax(tf_outputs.logits, axis=-1) >>> tf_predictions # doctest: +IGNORE_RESULT ``` </tf> </frameworkcontent> <Tip> Tutti i modelli di ๐Ÿค— Transformers (PyTorch e TensorFlow) restituiscono i tensori *prima* della funzione finale di attivazione (come la softmax) perchรฉ la funzione di attivazione finale viene spesso unita a quella di perdita. </Tip> I modelli sono [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) o [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) standard cosรฌ puoi utilizzarli all'interno del tuo training loop usuale. Tuttavia, per rendere le cose piรน semplici, ๐Ÿค— Transformers fornisce una classe [`Trainer`] per PyTorch che aggiunge delle funzionalitร  per l'allenamento distribuito, precisione mista, e altro ancora. Per TensorFlow, puoi utilizzare il metodo `fit` di [Keras](https://keras.io/). Fai riferimento al [tutorial per il training](./training) per maggiori dettagli. <Tip> Gli output del modello di ๐Ÿค— Transformers sono delle dataclasses speciali in modo che i loro attributi vengano auto-completati all'interno di un IDE. Gli output del modello si comportano anche come una tupla o un dizionario (ad esempio, puoi indicizzare con un intero, una slice o una stringa) nel qual caso gli attributi che sono `None` vengono ignorati. 
</Tip> ### Salva un modello <frameworkcontent> <pt> Una volta completato il fine-tuning del tuo modello, puoi salvarlo con il suo tokenizer utilizzando [`PreTrainedModel.save_pretrained`]: ```py >>> pt_save_directory = "./pt_save_pretrained" >>> tokenizer.save_pretrained(pt_save_directory) # doctest: +IGNORE_RESULT >>> pt_model.save_pretrained(pt_save_directory) ``` Quando desideri utilizzare il tuo modello nuovamente, puoi ri-caricarlo con [`PreTrainedModel.from_pretrained`]: ```py >>> pt_model = AutoModelForSequenceClassification.from_pretrained("./pt_save_pretrained") ``` </pt> <tf> Una volta completato il fine-tuning del tuo modello, puoi salvarlo con il suo tokenizer utilizzando [`TFPreTrainedModel.save_pretrained`]: ```py >>> tf_save_directory = "./tf_save_pretrained" >>> tokenizer.save_pretrained(tf_save_directory) # doctest: +IGNORE_RESULT >>> tf_model.save_pretrained(tf_save_directory) ``` Quando desideri utilizzare il tuo modello nuovamente, puoi ri-caricarlo con [`TFPreTrainedModel.from_pretrained`]: ```py >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained("./tf_save_pretrained") ``` </tf> </frameworkcontent> Una caratteristica particolarmente interessante di 🤗 Transformers è la sua abilità di salvare un modello e ri-caricarlo sia come modello di PyTorch che di TensorFlow. I parametri `from_pt` o `from_tf` possono convertire un modello da un framework all'altro: <frameworkcontent> <pt> ```py >>> from transformers import AutoModelForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained(tf_save_directory) >>> pt_model = AutoModelForSequenceClassification.from_pretrained(tf_save_directory, from_tf=True) ``` </pt> <tf> ```py >>> from transformers import TFAutoModelForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained(pt_save_directory) >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(pt_save_directory, from_pt=True) ``` </tf> </frameworkcontent>
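Per dare un'idea più concreta della classe [`Trainer`] menzionata sopra, ecco uno sketch minimale e puramente indicativo (non fa parte del tutorial originale): `dataset_di_addestramento` e `./risultati` sono nomi ipotetici, e il dataset si assume già tokenizzato.

```py
>>> from transformers import Trainer, TrainingArguments

>>> argomenti = TrainingArguments(output_dir="./risultati", num_train_epochs=1)  # cartella di output ipotetica
>>> trainer = Trainer(
...     model=pt_model,  # il modello PyTorch caricato in precedenza
...     args=argomenti,
...     train_dataset=dataset_di_addestramento,  # dataset ipotetico, già tokenizzato
... )
>>> trainer.train()  # doctest: +SKIP
```

Per un esempio completo e funzionante fai riferimento al [tutorial per il training](./training).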
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/it/perf_train_special.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Addestramento su Hardware Specializzato <Tip> Nota: Molte delle strategie introdotte nella [sezione sulla GPU singola](perf_train_gpu_one) (come mixed precision training o gradient accumulation) e nella [sezione multi-GPU](perf_train_gpu_many) sono generiche e applicabili all'addestramento di modelli in generale, quindi assicurati di dar loro un'occhiata prima di immergerti in questa sezione. </Tip> Questo documento sarà presto completato con informazioni su come effettuare l'addestramento su hardware specializzato.
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/it/converting_tensorflow_models.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Convertire checkpoint di Tensorflow รˆ disponibile un'interfaccia a linea di comando per convertire gli originali checkpoint di Bert/GPT/GPT-2/Transformer-XL/XLNet/XLM in modelli che possono essere caricati utilizzando i metodi `from_pretrained` della libreria. <Tip> A partire dalla versione 2.3.0 lo script di conversione รจ parte di transformers CLI (**transformers-cli**), disponibile in ogni installazione di transformers >=2.3.0. La seguente documentazione riflette il formato dei comandi di **transformers-cli convert**. </Tip> ## BERT Puoi convertire qualunque checkpoint Tensorflow di BERT (in particolare [i modeli pre-allenati rilasciati da Google](https://github.com/google-research/bert#pre-trained-models)) in un file di salvataggio Pytorch utilizzando lo script [convert_bert_original_tf_checkpoint_to_pytorch.py](https://github.com/huggingface/transformers/tree/main/src/transformers/models/bert/convert_bert_original_tf_checkpoint_to_pytorch.py). Questo CLI prende come input un checkpoint di Tensorflow (tre files che iniziano con `bert_model.ckpt`) ed il relativo file di configurazione (`bert_config.json`), crea un modello Pytorch per questa configurazione, carica i pesi dal checkpoint di Tensorflow nel modello di Pytorch e salva il modello che ne risulta in un file di salvataggio standard di Pytorch che puรฒ essere importato utilizzando `from_pretrained()` (vedi l'esempio nel [quicktour](quicktour) , [run_glue.py](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification/run_glue.py) ). Devi soltanto lanciare questo script di conversione **una volta** per ottenere un modello Pytorch. Dopodichรจ, potrai tralasciare il checkpoint di Tensorflow (i tre files che iniziano con `bert_model.ckpt`), ma assicurati di tenere il file di configurazione (`bert_config.json`) ed il file di vocabolario (`vocab.txt`) in quanto queste componenti sono necessarie anche per il modello di Pytorch. Per lanciare questo specifico script di conversione avrai bisogno di un'installazione di Tensorflow e di Pytorch (`pip install tensorflow`). Il resto della repository richiede soltanto Pytorch. Questo รจ un esempio del processo di conversione per un modello `BERT-Base Uncased` pre-allenato: ```bash export BERT_BASE_DIR=/path/to/bert/uncased_L-12_H-768_A-12 transformers-cli convert --model_type bert \ --tf_checkpoint $BERT_BASE_DIR/bert_model.ckpt \ --config $BERT_BASE_DIR/bert_config.json \ --pytorch_dump_output $BERT_BASE_DIR/pytorch_model.bin ``` Puoi scaricare i modelli pre-allenati di Google per la conversione [qua](https://github.com/google-research/bert#pre-trained-models). 
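A titolo puramente indicativo, una volta completata la conversione potresti caricare in PyTorch il modello risultante più o meno così (sketch non presente nella guida originale; si assume che `bert_config.json` sia stato copiato o rinominato in `config.json` nella stessa cartella di `pytorch_model.bin`):

```py
>>> from transformers import BertForPreTraining

>>> # Percorso ipotetico: la cartella che contiene config.json e pytorch_model.bin prodotti dalla conversione
>>> model = BertForPreTraining.from_pretrained("/path/to/bert/uncased_L-12_H-768_A-12")
```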
## ALBERT Per il modello ALBERT, converti i checkpoint di Tensorflow in Pytorch utilizzando lo script [convert_albert_original_tf_checkpoint_to_pytorch.py](https://github.com/huggingface/transformers/tree/main/src/transformers/models/albert/convert_albert_original_tf_checkpoint_to_pytorch.py). La CLI prende come input un checkpoint di Tensorflow (tre file che iniziano con `model.ckpt-best`) e il relativo file di configurazione (`albert_config.json`), dopodiché crea e salva un modello Pytorch. Per lanciare questa conversione avrai bisogno di un'installazione di Tensorflow e di Pytorch. Ecco un esempio del procedimento di conversione di un modello `ALBERT Base` pre-allenato: ```bash export ALBERT_BASE_DIR=/path/to/albert/albert_base transformers-cli convert --model_type albert \ --tf_checkpoint $ALBERT_BASE_DIR/model.ckpt-best \ --config $ALBERT_BASE_DIR/albert_config.json \ --pytorch_dump_output $ALBERT_BASE_DIR/pytorch_model.bin ``` Puoi scaricare i modelli pre-allenati di Google per la conversione [qui](https://github.com/google-research/albert#pre-trained-models). ## OpenAI GPT Ecco un esempio del processo di conversione di un modello OpenAI GPT pre-allenato, assumendo che il tuo checkpoint di NumPy sia salvato nello stesso formato dei modelli pre-allenati OpenAI (vedi [qui](https://github.com/openai/finetune-transformer-lm)): ```bash export OPENAI_GPT_CHECKPOINT_FOLDER_PATH=/path/to/openai/pretrained/numpy/weights transformers-cli convert --model_type gpt \ --tf_checkpoint $OPENAI_GPT_CHECKPOINT_FOLDER_PATH \ --pytorch_dump_output $PYTORCH_DUMP_OUTPUT \ [--config OPENAI_GPT_CONFIG] \ [--finetuning_task_name OPENAI_GPT_FINETUNED_TASK] ``` ## OpenAI GPT-2 Ecco un esempio del processo di conversione di un modello OpenAI GPT-2 pre-allenato (vedi [qui](https://github.com/openai/gpt-2)): ```bash export OPENAI_GPT2_CHECKPOINT_PATH=/path/to/openai-community/gpt2/pretrained/weights transformers-cli convert --model_type gpt2 \ --tf_checkpoint $OPENAI_GPT2_CHECKPOINT_PATH \ --pytorch_dump_output $PYTORCH_DUMP_OUTPUT \ [--config OPENAI_GPT2_CONFIG] \ [--finetuning_task_name OPENAI_GPT2_FINETUNED_TASK] ``` ## XLNet Ecco un esempio del processo di conversione di un modello XLNet pre-allenato: ```bash export XLNET_CHECKPOINT_PATH=/path/to/xlnet/checkpoint export XLNET_CONFIG_PATH=/path/to/xlnet/config transformers-cli convert --model_type xlnet \ --tf_checkpoint $XLNET_CHECKPOINT_PATH \ --config $XLNET_CONFIG_PATH \ --pytorch_dump_output $PYTORCH_DUMP_OUTPUT \ [--finetuning_task_name XLNET_FINETUNED_TASK] ``` ## XLM Ecco un esempio del processo di conversione di un modello XLM pre-allenato: ```bash export XLM_CHECKPOINT_PATH=/path/to/xlm/checkpoint transformers-cli convert --model_type xlm \ --tf_checkpoint $XLM_CHECKPOINT_PATH \ --pytorch_dump_output $PYTORCH_DUMP_OUTPUT [--config XLM_CONFIG] \ [--finetuning_task_name XLM_FINETUNED_TASK] ``` ## T5 Ecco un esempio del processo di conversione di un modello T5 pre-allenato: ```bash export T5=/path/to/t5/uncased_L-12_H-768_A-12 transformers-cli convert --model_type t5 \ --tf_checkpoint $T5/t5_model.ckpt \ --config $T5/t5_config.json \ --pytorch_dump_output $T5/pytorch_model.bin ```
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/it/pr_checks.md
<!--- Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Controlli su una Pull Request Quando apri una pull request sui ๐Ÿค— Transformers, vengono eseguiti un discreto numero di controlli per assicurarsi che la patch che stai aggiungendo non stia rompendo qualcosa di esistente. Questi controlli sono di quattro tipi: - test regolari - costruzione della documentazione - stile del codice e della documentazione - coerenza generale del repository In questo documento, cercheremo di spiegare quali sono i vari controlli e le loro ragioni, oltre a spiegare come eseguire il debug locale se uno di essi fallisce sulla tua PR. Nota che tutti richiedono un'installazione dev: ```bash pip install transformers[dev] ``` o un'installazione modificabile: ```bash pip install -e .[dev] ``` all'interno del repo Transformers. ## Tests Tutti i job che iniziano con `ci/circleci: run_tests_` eseguono parti della suite di test dei Transformers. Ognuno di questi job si concentra su una parte della libreria in un determinato ambiente: per esempio `ci/circleci: run_tests_pipelines_tf` esegue il test delle pipeline in un ambiente in cui รจ installato solo TensorFlow. Nota che per evitare di eseguire i test quando non ci sono cambiamenti reali nei moduli che si stanno testando, ogni volta viene eseguita solo una parte della suite di test: viene eseguita una utility per determinare le differenze nella libreria tra prima e dopo la PR (ciรฒ che GitHub mostra nella scheda "Files changes") e sceglie i test che sono stati impattati dalla diff. Questa utility puรฒ essere eseguita localmente con: ```bash python utils/tests_fetcher.py ``` dalla root del repo Transformers. Di seguito ciรฒ che farร : 1. Controlla per ogni file nel diff se le modifiche sono nel codice o solo nei commenti o nelle docstrings. Vengono mantenuti solo i file con modifiche reali al codice. 2. Costruisce una mappa interna che fornisce per ogni file del codice sorgente della libreria tutti i file su cui ha un impatto ricorsivo. Si dice che il modulo A ha un impatto sul modulo B se il modulo B importa il modulo A. Per l'impatto ricorsivo, abbiamo bisogno di una catena di moduli che va dal modulo A al modulo B in cui ogni modulo importa il precedente. 3. Applica questa mappa ai file raccolti nel passaggio 1, si ottiene l'elenco dei file del modello interessati dalla PR. 4. Mappa ciascuno di questi file con i corrispondenti file di test e ottiene l'elenco dei test da eseguire. Quando esegui lo script in locale, dovresti ottenere la stampa dei risultati dei passi 1, 3 e 4 e quindi sapere quali test sono stati eseguiti. 
Lo script creerร  anche un file chiamato `test_list.txt` che contiene l'elenco dei test da eseguire e che puoi eseguire localmente con il seguente comando: ```bash python -m pytest -n 8 --dist=loadfile -rA -s $(cat test_list.txt) ``` Nel caso in cui qualcosa sia sfuggito, l'intera suite di test viene eseguita quotidianamente. ## Build della documentazione Il job `ci/circleci: build_doc` esegue una build della documentazione per assicurarsi che tutto sia a posto una volta che la PR รจ stata unita. Se questo passaggio fallisce, puoi controllare localmente entrando nella cartella `docs` del repo Transformers e digitare ```bash make html ``` Sphinx non รจ noto per i suoi messaggi di errore chiari, quindi potrebbe essere necessario che provi alcune cose per trovare davvero la fonte dell'errore. ## Stile del codice e della documentazione La formattazione del codice viene applicata a tutti i file sorgenti, agli esempi e ai test usando `black` e `isort`. Abbiamo anche uno strumento personalizzato che si occupa della formattazione delle docstring e dei file `rst` (`utils/style_doc.py`), cosรฌ come dell'ordine dei lazy imports eseguiti nei file `__init__.py` dei Transformers (`utils/custom_init_isort.py`). Tutto questo puรฒ essere lanciato eseguendo ```bash make style ``` I controlli della CI sono applicati all'interno del controllo `ci/circleci: check_code_quality`. Esegue anche `flake8`, che dร  un'occhiata di base al codice e si lamenta se trova una variabile non definita o non utilizzata. Per eseguire questo controllo localmente, usare ```bash make quality ``` Questa operazione puรฒ richiedere molto tempo, quindi per eseguire la stessa operazione solo sui file modificati nel branch corrente, eseguire ```bash make fixup ``` Quest'ultimo comando eseguirร  anche tutti i controlli aggiuntivi per la consistenza del repository. Diamogli un'occhiata. ## Coerenza del repository All'interno sono raggruppati tutti i test per assicurarsi che la tua PR lasci il repository in un buono stato ed รจ eseguito dal controllo `ci/circleci: check_repository_consistency`. 
Puoi eseguire localmente questo controllo eseguendo quanto segue: ```bash make repo-consistency ``` Questo verifica che: - Tutti gli oggetti aggiunti all'init sono documentati (eseguito da `utils/check_repo.py`) - Tutti i file `__init__.py` hanno lo stesso contenuto nelle loro due sezioni (eseguito da `utils/check_inits.py`) - Tutto il codice identificato come copia da un altro modulo รจ coerente con l'originale (eseguito da `utils/check_copies.py`) - Le traduzioni dei README e l'indice della documentazione hanno lo stesso elenco di modelli del README principale (eseguito da `utils/check_copies.py`) - Le tabelle autogenerate nella documentazione sono aggiornate (eseguito da `utils/check_table.py`) - La libreria ha tutti gli oggetti disponibili anche se non tutte le dipendenze opzionali sono installate (eseguito da `utils/check_dummies.py`) Se questo controllo fallisce, le prime due voci richiedono una correzione manuale, mentre le ultime quattro possono essere corrette automaticamente per te eseguendo il comando ```bash make fix-copies ``` Ulteriori controlli riguardano le PR che aggiungono nuovi modelli, principalmente che: - Tutti i modelli aggiunti sono in un Auto-mapping (eseguita da `utils/check_repo.py`) <!-- TODO Sylvain, add a check that makes sure the common tests are implemented.--> - Tutti i modelli sono testati correttamente (eseguito da `utils/check_repo.py`) <!-- TODO Sylvain, add the following - All models are added to the main README, inside the main doc - All checkpoints used actually exist on the Hub -->
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/it/preprocessing.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Preprocess [[open-in-colab]] Prima di poter usare i dati in un modello, bisogna processarli in un formato accettabile per quest'ultimo. Un modello non comprende il testo grezzo, le immagini o l'audio. Bisogna convertire questi input in numeri e assemblarli all'interno di tensori. In questa esercitazione potrai: * Preprocessare dati testuali con un tokenizer. * Preprocessare immagini o dati audio con un estrattore di caratteristiche. * Preprocessare dati per attività multimodali mediante un processore. ## NLP <Youtube id="Yffk5aydLzg"/> Lo strumento principale per processare dati testuali è un [tokenizer](main_classes/tokenizer). Un tokenizer inizia separando il testo in *tokens* secondo una serie di regole. I tokens sono convertiti in numeri, che vengono utilizzati per costruire i tensori di input del modello. Il tokenizer aggiunge inoltre eventuali input addizionali richiesti dal modello. <Tip> Se stai pensando di utilizzare un modello preaddestrato, è importante utilizzare il tokenizer preaddestrato associato. Questo assicura che il testo sia separato allo stesso modo che nel corpus usato per l'addestramento, e che venga usata la stessa mappatura tokens-to-index (solitamente indicata come il *vocabolario*) del preaddestramento. </Tip> Iniziamo subito caricando un tokenizer preaddestrato con la classe [`AutoTokenizer`]. Questo scarica il *vocabolario* usato quando il modello è stato preaddestrato. ### Tokenize Carica un tokenizer preaddestrato con [`AutoTokenizer.from_pretrained`]: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased") ``` Poi inserisci le tue frasi nel tokenizer: ```py >>> encoded_input = tokenizer("Do not meddle in the affairs of wizards, for they are subtle and quick to anger.") >>> print(encoded_input) {'input_ids': [101, 2079, 2025, 19960, 10362, 1999, 1996, 3821, 1997, 16657, 1010, 2005, 2027, 2024, 11259, 1998, 4248, 2000, 4963, 1012, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} ``` Il tokenizer restituisce un dizionario contenente tre oggetti importanti: * [input_ids](glossary#input-ids) sono gli indici che corrispondono ad ogni token nella frase. * [attention_mask](glossary#attention-mask) indica se un token deve essere elaborato o no. * [token_type_ids](glossary#token-type-ids) identifica a quale sequenza appartiene un token se è presente più di una sequenza. Si possono decodificare gli `input_ids` per farsi restituire l'input originale: ```py >>> tokenizer.decode(encoded_input["input_ids"]) '[CLS] Do not meddle in the affairs of wizards, for they are subtle and quick to anger.
[SEP]' ``` Come si puรฒ vedere, il tokenizer aggiunge due token speciali - `CLS` e `SEP` (classificatore e separatore) - alla frase. Non tutti i modelli hanno bisogno dei token speciali, ma se servono, il tokenizer li aggiungerร  automaticamente. Se ci sono piรน frasi che vuoi processare, passale come una lista al tokenizer: ```py >>> batch_sentences = [ ... "But what about second breakfast?", ... "Don't think he knows about second breakfast, Pip.", ... "What about elevensies?", ... ] >>> encoded_inputs = tokenizer(batch_sentences) >>> print(encoded_inputs) {'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102], [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102], [101, 1327, 1164, 5450, 23434, 136, 102]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1]]} ``` ### Pad Questo รจ un argomento importante. Quando processi un insieme di frasi potrebbero non avere tutte la stessa lunghezza. Questo รจ un problema perchรจ i tensori, in input del modello, devono avere dimensioni uniformi. Il padding รจ una strategia per assicurarsi che i tensori siano rettangolari aggiungendo uno speciale *padding token* alle frasi piรน corte. Imposta il parametro `padding` a `True` per imbottire le frasi piรน corte nel gruppo in modo che combacino con la massima lunghezza presente: ```py >>> batch_sentences = [ ... "But what about second breakfast?", ... "Don't think he knows about second breakfast, Pip.", ... "What about elevensies?", ... ] >>> encoded_input = tokenizer(batch_sentences, padding=True) >>> print(encoded_input) {'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0], [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102], [101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]]} ``` Nota che il tokenizer aggiunge alle sequenze degli `0` perchรจ sono troppo corte! ### Truncation L'altra faccia della medaglia รจ che avolte le sequenze possono essere troppo lunghe per essere gestite dal modello. In questo caso, avrai bisogno di troncare la sequenza per avere una lunghezza minore. Imposta il parametro `truncation` a `True` per troncare una sequenza alla massima lunghezza accettata dal modello: ```py >>> batch_sentences = [ ... "But what about second breakfast?", ... "Don't think he knows about second breakfast, Pip.", ... "What about elevensies?", ... 
] >>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True) >>> print(encoded_input) {'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0], [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102], [101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]]} ``` ### Costruire i tensori Infine, vuoi che il tokenizer restituisca i tensori veri e propri da passare in input al modello. Imposta il parametro `return_tensors` su `pt` per PyTorch, o `tf` per TensorFlow: <frameworkcontent> <pt> ```py >>> batch_sentences = [ ... "But what about second breakfast?", ... "Don't think he knows about second breakfast, Pip.", ... "What about elevensies?", ... ] >>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="pt") >>> print(encoded_input) {'input_ids': tensor([[ 101, 153, 7719, 21490, 1122, 1114, 9582, 1623, 102], [ 101, 5226, 1122, 9649, 1199, 2610, 1236, 102, 0]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 0]])} ``` </pt> <tf> ```py >>> batch_sentences = [ ... "But what about second breakfast?", ... "Don't think he knows about second breakfast, Pip.", ... "What about elevensies?", ... ] >>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="tf") >>> print(encoded_input) {'input_ids': <tf.Tensor: shape=(2, 9), dtype=int32, numpy= array([[ 101, 153, 7719, 21490, 1122, 1114, 9582, 1623, 102], [ 101, 5226, 1122, 9649, 1199, 2610, 1236, 102, 0]], dtype=int32)>, 'token_type_ids': <tf.Tensor: shape=(2, 9), dtype=int32, numpy= array([[0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)>, 'attention_mask': <tf.Tensor: shape=(2, 9), dtype=int32, numpy= array([[1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 0]], dtype=int32)>} ``` </tf> </frameworkcontent> ## Audio Gli input audio sono processati in modo differente rispetto al testo, ma l'obiettivo rimane lo stesso: creare sequenze numeriche che il modello può capire. Un [estrattore di caratteristiche](main_classes/feature_extractor) è progettato con lo scopo preciso di estrarre caratteristiche da immagini o dati audio grezzi e convertirli in tensori. Prima di iniziare, installa 🤗 Datasets per caricare un dataset audio e sperimentare: ```bash pip install datasets ``` Carica il dataset [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) (vedi il 🤗 [Datasets tutorial](https://huggingface.co/docs/datasets/load_hub) per avere maggiori dettagli su come caricare un dataset): ```py >>> from datasets import load_dataset, Audio >>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train") ``` Accedi al primo elemento della colonna `audio` per dare uno sguardo all'input. Richiamando la colonna `audio`, il file audio viene caricato e ricampionato automaticamente: ```py >>> dataset[0]["audio"] {'array': array([ 0. , 0.00024414, -0.00024414, ..., -0.00024414, 0. , 0.
], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav', 'sampling_rate': 8000} ``` Questo restituisce tre oggetti: * `array` è il segnale vocale caricato - e potenzialmente ricampionato - come vettore 1D. * `path` è il percorso del file audio. * `sampling_rate` si riferisce al numero di campioni del segnale vocale misurati al secondo. ### Ricampionamento Per questo tutorial, puoi usare il modello [Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base). Come puoi vedere dalla model card, il modello Wav2Vec2 è preaddestrato su un campionamento vocale a 16kHz. È importante che la frequenza di campionamento dei tuoi dati audio combaci con la frequenza di campionamento del dataset usato per preaddestrare il modello. Se la frequenza di campionamento dei tuoi dati non è uguale, dovrai ricampionare i tuoi dati audio. Per esempio, il dataset [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) ha una frequenza di campionamento di 8kHz (8000Hz). Per utilizzare il modello Wav2Vec2 su questo dataset, alzala a 16kHz: ```py >>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train") >>> dataset[0]["audio"] {'array': array([ 0. , 0.00024414, -0.00024414, ..., -0.00024414, 0. , 0. ], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav', 'sampling_rate': 8000} ``` 1. Usa il metodo di 🤗 Datasets [`cast_column`](https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset.cast_column) per alzare la frequenza di campionamento a 16kHz: ```py >>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000)) ``` 2. Carica il file audio: ```py >>> dataset[0]["audio"] {'array': array([ 2.3443763e-05, 2.1729663e-04, 2.2145823e-04, ..., 3.8356509e-05, -7.3497440e-06, -2.1754686e-05], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav', 'sampling_rate': 16000} ``` Come puoi notare, la `sampling_rate` adesso è 16kHz! ### Feature extractor Il prossimo passo è caricare un estrattore di caratteristiche per normalizzare e fare padding sull'input. Quando applichiamo il padding sui dati testuali, uno `0` è aggiunto alle sequenze più brevi. La stessa idea si applica ai dati audio: l'estrattore di caratteristiche per gli audio aggiungerà uno `0` - interpretato come silenzio - agli `array`. Carica l'estrattore delle caratteristiche con [`AutoFeatureExtractor.from_pretrained`]: ```py >>> from transformers import AutoFeatureExtractor >>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base") ``` Inserisci l'`array` audio nell'estrattore delle caratteristiche. Raccomandiamo sempre di passare anche il parametro `sampling_rate` all'estrattore delle caratteristiche, per poter individuare più facilmente eventuali errori silenziosi che potrebbero verificarsi. ```py >>> audio_input = [dataset[0]["audio"]["array"]] >>> feature_extractor(audio_input, sampling_rate=16000) {'input_values': [array([ 3.8106556e-04, 2.7506407e-03, 2.8015103e-03, ..., 5.6335266e-04, 4.6588284e-06, -1.7142107e-04], dtype=float32)]} ``` ### Pad e truncate Come per il tokenizer, puoi applicare le operazioni di padding o truncation per gestire sequenze di lunghezza variabile in un lotto.
Dai uno sguardo alla lunghezza delle sequenze di questi due campioni audio: ```py >>> dataset[0]["audio"]["array"].shape (173398,) >>> dataset[1]["audio"]["array"].shape (106496,) ``` Come puoi vedere, il primo campione ha una sequenza più lunga del secondo. Crea una funzione che preprocesserà il dataset. Specifica una lunghezza massima del campione, e l'estrattore di caratteristiche si occuperà di riempire o troncare la sequenza per farla corrispondere: ```py >>> def preprocess_function(examples): ... audio_arrays = [x["array"] for x in examples["audio"]] ... inputs = feature_extractor( ... audio_arrays, ... sampling_rate=16000, ... padding=True, ... max_length=100000, ... truncation=True, ... ) ... return inputs ``` Applica la funzione ai primi esempi nel dataset: ```py >>> processed_dataset = preprocess_function(dataset[:5]) ``` Adesso guarda la lunghezza dei campioni elaborati: ```py >>> processed_dataset["input_values"][0].shape (100000,) >>> processed_dataset["input_values"][1].shape (100000,) ``` La lunghezza dei campioni adesso coincide con la massima lunghezza impostata nella funzione. ## Vision Un estrattore di caratteristiche si può usare anche per processare immagini per compiti di visione. Ancora una volta, l'obiettivo è convertire l'immagine grezza in un lotto di tensori come input. Carica il dataset [food101](https://huggingface.co/datasets/food101) per questa esercitazione. Usa il parametro `split` di 🤗 Datasets per caricare solo un piccolo campione dal dataset di addestramento, poiché il set di dati è molto grande: ```py >>> from datasets import load_dataset >>> dataset = load_dataset("food101", split="train[:100]") ``` Secondo passo, dai uno sguardo alle immagini usando la caratteristica [`Image`](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=image#datasets.Image) di 🤗 Datasets: ```py >>> dataset[0]["image"] ``` ![vision-preprocess-tutorial.png](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/vision-preprocess-tutorial.png) ### Feature extractor Carica l'estrattore di caratteristiche con [`AutoFeatureExtractor.from_pretrained`]: ```py >>> from transformers import AutoFeatureExtractor >>> feature_extractor = AutoFeatureExtractor.from_pretrained("google/vit-base-patch16-224") ``` ### Data augmentation Per le attività di visione, è usuale aggiungere alcuni tipi di data augmentation alle immagini come parte del preprocessing. Puoi aggiungere augmentations con qualsiasi libreria che preferisci, ma in questa esercitazione userai il modulo [`transforms`](https://pytorch.org/vision/stable/transforms.html) di torchvision. 1. Normalizza l'immagine e usa [`Compose`](https://pytorch.org/vision/master/generated/torchvision.transforms.Compose.html) per concatenare alcune trasformazioni - [`RandomResizedCrop`](https://pytorch.org/vision/main/generated/torchvision.transforms.RandomResizedCrop.html) e [`ColorJitter`](https://pytorch.org/vision/main/generated/torchvision.transforms.ColorJitter.html) - insieme: ```py >>> from torchvision.transforms import Compose, Normalize, RandomResizedCrop, ColorJitter, ToTensor >>> normalize = Normalize(mean=feature_extractor.image_mean, std=feature_extractor.image_std) >>> _transforms = Compose( ... [RandomResizedCrop(feature_extractor.size), ColorJitter(brightness=0.5, hue=0.5), ToTensor(), normalize] ... ) ``` 2. Il modello accetta [`pixel_values`](model_doc/visionencoderdecoder#transformers.VisionEncoderDecoderModel.forward.pixel_values) come input.
Questo valore รจ generato dall'estrattore di caratteristiche. Crea una funzione che genera `pixel_values` dai transforms: ```py >>> def transforms(examples): ... examples["pixel_values"] = [_transforms(image.convert("RGB")) for image in examples["image"]] ... return examples ``` 3. Poi utilizza ๐Ÿค— Datasets [`set_transform`](https://huggingface.co/docs/datasets/process#format-transform)per applicare al volo la trasformazione: ```py >>> dataset.set_transform(transforms) ``` 4. Adesso quando accedi all'immagine, puoi notare che l'estrattore di caratteristiche ha aggiunto `pixel_values` allo schema di input: ```py >>> dataset[0]["image"] {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=384x512 at 0x7F1A7B0630D0>, 'label': 6, 'pixel_values': tensor([[[ 0.0353, 0.0745, 0.1216, ..., -0.9922, -0.9922, -0.9922], [-0.0196, 0.0667, 0.1294, ..., -0.9765, -0.9843, -0.9922], [ 0.0196, 0.0824, 0.1137, ..., -0.9765, -0.9686, -0.8667], ..., [ 0.0275, 0.0745, 0.0510, ..., -0.1137, -0.1216, -0.0824], [ 0.0667, 0.0824, 0.0667, ..., -0.0588, -0.0745, -0.0980], [ 0.0353, 0.0353, 0.0431, ..., -0.0039, -0.0039, -0.0588]], [[ 0.2078, 0.2471, 0.2863, ..., -0.9451, -0.9373, -0.9451], [ 0.1608, 0.2471, 0.3098, ..., -0.9373, -0.9451, -0.9373], [ 0.2078, 0.2706, 0.3020, ..., -0.9608, -0.9373, -0.8275], ..., [-0.0353, 0.0118, -0.0039, ..., -0.2392, -0.2471, -0.2078], [ 0.0196, 0.0353, 0.0196, ..., -0.1843, -0.2000, -0.2235], [-0.0118, -0.0039, -0.0039, ..., -0.0980, -0.0980, -0.1529]], [[ 0.3961, 0.4431, 0.4980, ..., -0.9216, -0.9137, -0.9216], [ 0.3569, 0.4510, 0.5216, ..., -0.9059, -0.9137, -0.9137], [ 0.4118, 0.4745, 0.5216, ..., -0.9137, -0.8902, -0.7804], ..., [-0.2314, -0.1922, -0.2078, ..., -0.4196, -0.4275, -0.3882], [-0.1843, -0.1686, -0.2000, ..., -0.3647, -0.3804, -0.4039], [-0.1922, -0.1922, -0.1922, ..., -0.2941, -0.2863, -0.3412]]])} ``` Di seguito come si vede l'immagine dopo la fase di preprocessing. Come ci si aspetterebbe dalle trasformazioni applicate, l'immagine รจ stata ritagliata in modo casuale e le proprietร  del colore sono diverse. ```py >>> import numpy as np >>> import matplotlib.pyplot as plt >>> img = dataset[0]["pixel_values"] >>> plt.imshow(img.permute(1, 2, 0)) ``` ![preprocessed_image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/preprocessed_image.png) ## Multimodal Per attivitร  multimodali userai una combinazione di tutto quello che hai imparato poco fa e applicherai le tue competenze alla comprensione automatica del parlato (Automatic Speech Recognition - ASR). Questo significa che avrai bisogno di: * Un estrattore delle caratteristiche per processare i dati audio. * Il Tokenizer per processare i testi. 
Ritorna sul dataset [LJ Speech](https://huggingface.co/datasets/lj_speech): ```py >>> from datasets import load_dataset >>> lj_speech = load_dataset("lj_speech", split="train") ``` Visto che sei interessato solo alle colonne `audio` e `text`, elimina tutte le altre: ```py >>> lj_speech = lj_speech.map(remove_columns=["file", "id", "normalized_text"]) ``` Adesso guarda le colonne `audio` e `text`: ```py >>> lj_speech[0]["audio"] {'array': array([-7.3242188e-04, -7.6293945e-04, -6.4086914e-04, ..., 7.3242188e-04, 2.1362305e-04, 6.1035156e-05], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/917ece08c95cf0c4115e45294e3cd0dee724a1165b7fc11798369308a465bd26/LJSpeech-1.1/wavs/LJ001-0001.wav', 'sampling_rate': 22050} >>> lj_speech[0]["text"] 'Printing, in the only sense with which we are at present concerned, differs from most if not from all the arts and crafts represented in the Exhibition' ``` Ricorda, dalla sezione precedente sull'elaborazione dei dati audio, che dovresti sempre [ricampionare](preprocessing#audio) la frequenza di campionamento dei tuoi dati audio per farla coincidere con quella del dataset usato dal modello preaddestrato: ```py >>> lj_speech = lj_speech.cast_column("audio", Audio(sampling_rate=16_000)) ``` ### Processor Un processor combina un estrattore di caratteristiche e un tokenizer. Carica un processor con [`AutoProcessor.from_pretrained`]: ```py >>> from transformers import AutoProcessor >>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h") ``` 1. Crea una funzione che processi i dati audio in `input_values`, e tokenizzi il testo in `labels`. Questi sono i tuoi input per il modello: ```py >>> def prepare_dataset(example): ... audio = example["audio"] ... example.update(processor(audio=audio["array"], text=example["text"], sampling_rate=16000)) ... return example ``` 2. Applica la funzione `prepare_dataset` ad un campione: ```py >>> prepare_dataset(lj_speech[0]) ``` Nota che il processor ha aggiunto `input_values` e `labels`. La frequenza di campionamento è stata corretta riducendola a 16kHz. Fantastico, ora dovresti essere in grado di preelaborare i dati per qualsiasi modalità e persino di combinare modalità diverse! Nella prossima esercitazione, impareremo a mettere a punto un modello sui dati appena pre-elaborati.
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/hi/pipeline_tutorial.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # เค…เคจเฅเคฎเคพเคจ เค•เฅ‡ เคฒเคฟเค เคชเคพเค‡เคชเคฒเคพเค‡เคจ [`pipeline`] เค•เคฟเคธเฅ€ เคญเฅ€ เคญเคพเคทเคพ, เค•เค‚เคชเฅเคฏเฅ‚เคŸเคฐ เคฆเฅƒเคทเฅเคŸเคฟ, เคญเคพเคทเคฃ เค”เคฐ เคฎเคฒเฅเคŸเฅ€เคฎเฅ‰เคกเคฒ เค•เคพเคฐเฅเคฏเฅ‹เค‚ เคชเคฐ เค…เคจเฅเคฎเคพเคจ เคฒเค—เคพเคจเฅ‡ เค•เฅ‡ เคฒเคฟเค [Hub](https://huggingface.co/models) เคธเฅ‡ เค•เคฟเคธเฅ€ เคญเฅ€ เคฎเฅ‰เคกเคฒ เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐเคจเคพ เค†เคธเคพเคจ เคฌเคจเคพเคคเคพ เคนเฅˆเฅค เคญเคฒเฅ‡ เคนเฅ€ เค†เคชเค•เฅ‡ เคชเคพเคธ เค•เคฟเคธเฅ€ เคตเคฟเคถเคฟเคทเฅเคŸ เคคเฅŒเคฐ-เคคเคฐเฅ€เค•เฅ‡ เค•เคพ เค…เคจเฅเคญเคต เคจ เคนเฅ‹ เคฏเคพ เค†เคช เคฎเฅ‰เคกเคฒเฅ‹เค‚ เค•เฅ‡ เคชเฅ€เค›เฅ‡ เค…เค‚เคคเคฐเฅเคจเคฟเคนเคฟเคค เค•เฅ‹เคก เคธเฅ‡ เคชเคฐเคฟเคšเคฟเคค เคจ เคนเฅ‹เค‚, เคซเคฟเคฐ เคญเฅ€ เค†เคช [`pipeline`] เค•เฅ‡ เค…เคจเฅเคฎเคพเคจ เค•เฅ‡ เคฒเคฟเค เค‰เคจเค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐ เคธเค•เคคเฅ‡ เคนเฅˆเค‚! เคฏเคน เคŸเฅเคฏเฅ‚เคŸเฅ‹เคฐเคฟเคฏเคฒ เค†เคชเค•เฅ‹ เคฏเฅ‡ เคธเคฟเค–เคพเคเค—เคพ: * เค…เคจเฅเคฎเคพเคจ เค•เฅ‡ เคฒเคฟเค [`pipeline`] เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐเฅ‡เค‚เฅค * เคเค• เคตเคฟเคถเคฟเคทเฅเคŸ เคŸเฅ‹เค•เคจเคจเคพเค‡เคœเคผเคฐ เคฏเคพ เคฎเฅ‰เคกเคฒ เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐเฅ‡เค‚เฅค * เค‘เคกเคฟเคฏเฅ‹, เคตเคฟเคœเคผเคจ เค”เคฐ เคฎเคฒเฅเคŸเฅ€เคฎเฅ‰เคกเคฒ เค•เคพเคฐเฅเคฏเฅ‹เค‚ เค•เฅ‡ เคฒเคฟเค [`pipeline`] เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐเฅ‡เค‚เฅค <Tip> เคธเคฎเคฐเฅเคฅเคฟเคค เค•เคพเคฐเฅเคฏเฅ‹เค‚ เค”เคฐ เค‰เคชเคฒเคฌเฅเคง เคฎเคพเคชเคฆเค‚เคกเฅ‹เค‚ เค•เฅ€ เคชเฅ‚เคฐเฅ€ เคธเฅ‚เคšเฅ€ เค•เฅ‡ เคฒเคฟเค [`pipeline`] เคฆเคธเฅเคคเคพเคตเฅ‡เคœเคผ เคชเคฐ เคเค• เคจเคœเคผเคฐ เคกเคพเคฒเฅ‡เค‚เฅค </Tip> ## เคชเคพเค‡เคชเคฒเคพเค‡เคจ เค•เคพ เค‰เคชเคฏเฅ‹เค— เคœเคฌเค•เคฟ เคชเฅเคฐเคคเฅเคฏเฅ‡เค• เค•เคพเคฐเฅเคฏ เคฎเฅ‡เค‚ เคเค• เคธเค‚เคฌเคฆเฅเคง [`pipeline`] เคนเฅ‹เคคเคพ เคนเฅˆ, เคธเคพเคฎเคพเคจเฅเคฏ [`pipeline`] เค…เคฎเฅ‚เคฐเฅเคค เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐเคจเคพ เค†เคธเคพเคจ เคนเฅ‹เคคเคพ เคนเฅˆ เคœเคฟเคธเคฎเฅ‡เค‚ เคถเคพเคฎเคฟเคฒ เคนเฅ‹เคคเคพ เคนเฅˆ เคธเคญเฅ€ เค•เคพเคฐเฅเคฏ-เคตเคฟเคถเคฟเคทเฅเคŸ เคชเคพเค‡เคชเคฒเคพเค‡เคจเฅ‡เค‚เฅค [`pipeline`] เคธเฅเคตเคšเคพเคฒเคฟเคค เคฐเฅ‚เคช เคธเฅ‡ เคเค• เคกเคฟเคซเคผเฅ‰เคฒเฅเคŸ เคฎเฅ‰เคกเคฒ เค”เคฐ เคธเค•เฅเคทเคฎ เคชเฅเคฐเฅ€เคชเฅเคฐเฅ‹เคธเฅ‡เคธเคฟเค‚เค— เค•เฅเคฒเคพเคธ เคฒเฅ‹เคก เค•เคฐเคคเคพ เคนเฅˆ เค†เคชเค•เฅ‡ เค•เคพเคฐเฅเคฏ เค•เฅ‡ เคฒเคฟเค เค…เคจเฅเคฎเคพเคจ เค•เคพ. เค†เค‡เค เคธเฅเคตเคšเคพเคฒเคฟเคค เคตเคพเค•เฅ เคชเคนเคšเคพเคจ (เคเคเคธเค†เคฐ) เค•เฅ‡ เคฒเคฟเค [`pipeline`] เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐเคจเฅ‡ เค•เคพ เค‰เคฆเคพเคนเคฐเคฃ เคฒเฅ‡เค‚, เคฏเคพ เคตเคพเค•เฅ-เคธเฅ‡-เคชเคพเค . 1. เคเค• [`pipeline`] เคฌเคจเคพเค•เคฐ เคชเฅเคฐเคพเคฐเค‚เคญ เค•เคฐเฅ‡เค‚ เค”เคฐ เค…เคจเฅเคฎเคพเคจ เค•เคพเคฐเฅเคฏ เคจเคฟเคฐเฅเคฆเคฟเคทเฅเคŸ เค•เคฐเฅ‡เค‚: ```py >>> from transformers import pipeline >>> transcriber = pipeline(task="automatic-speech-recognition") ``` 2. 
เค…เคชเคจเคพ เค‡เคจเคชเฅเคŸ [`pipeline`] เคชเคฐ เคญเฅ‡เคœเฅ‡เค‚เฅค เคตเคพเค•เฅ เคชเคนเคšเคพเคจ เค•เฅ‡ เคฎเคพเคฎเคฒเฅ‡ เคฎเฅ‡เค‚, เคฏเคน เคเค• เค‘เคกเคฟเคฏเฅ‹ เค‡เคจเคชเฅเคŸ เคซเคผเคพเค‡เคฒ เคนเฅˆ: ```py >>> transcriber("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac") {'text': 'I HAVE A DREAM BUT ONE DAY THIS NATION WILL RISE UP LIVE UP THE TRUE MEANING OF ITS TREES'} ``` เค•เฅเคฏเคพ เคตเคน เคชเคฐเคฟเคฃเคพเคฎ เคจเคนเฅ€เค‚ เคœเฅ‹ เค†เคชเค•เฅ‡ เคฎเคจ เคฎเฅ‡เค‚ เคฅเคพ? เค•เฅเค› [เคธเคฌเคธเฅ‡ เค…เคงเคฟเค• เคกเคพเค‰เคจเคฒเฅ‹เคก เค•เคฟเค เค—เค เคธเฅเคตเคšเคพเคฒเคฟเคค เคตเคพเค•เฅ เคชเคนเคšเคพเคจ เคฎเฅ‰เคกเคฒ](https://huggingface.co/models?pipeline_tag=automatic-speech-recognition&sort=trending) เคฆเฅ‡เค–เฅ‡เค‚ เคฏเคน เคฆเฅ‡เค–เคจเฅ‡ เค•เฅ‡ เคฒเคฟเค เคนเคฌ เคชเคฐ เคœเคพเคเค‚ เค•เคฟ เค•เฅเคฏเคพ เค†เคชเค•เฅ‹ เคฌเฅ‡เคนเคคเคฐ เคŸเฅเคฐเคพเค‚เคธเฅเค•เฅเคฐเคฟเคชเฅเคถเคจ เคฎเคฟเคฒ เคธเค•เคคเคพ เคนเฅˆเฅค เค†เค‡เค OpenAI เคธเฅ‡ [เคตเฅเคนเคฟเคธเฅเคชเคฐ เคฒเคพเคฐเฅเคœ-v2](https://huggingface.co/openai/whisper-large) เคฎเฅ‰เคกเคฒ เค†เคœเคผเคฎเคพเคเค‚เฅค เคตเฅเคนเคฟเคธเฅเคชเคฐ เคœเคพเคฐเฅ€ เค•เคฟเคฏเคพ เค—เคฏเคพ Wav2Vec2 เค•เฅ€ เคคเฅเคฒเคจเคพ เคฎเฅ‡เค‚ 2 เคธเคพเคฒ เคฌเคพเคฆ, เค”เคฐ เคฒเค—เคญเค— 10 เค—เฅเคจเคพ เค…เคงเคฟเค• เคกเฅ‡เคŸเคพ เคชเคฐ เคชเฅเคฐเคถเคฟเค•เฅเคทเคฟเคค เค•เคฟเคฏเคพ เค—เคฏเคพ เคฅเคพเฅค เค‡เคธ เคชเฅเคฐเค•เคพเคฐ, เคฏเคน เค…เคงเคฟเค•เคพเค‚เคถ เคกเคพเค‰เคจเคธเฅเคŸเฅเคฐเฅ€เคฎ เคชเคฐ Wav2Vec2 เค•เฅ‹ เคฎเคพเคค เคฆเฅ‡เคคเคพ เคนเฅˆ เคฌเฅ‡เค‚เคšเคฎเคพเคฐเฅเค•. เค‡เคธเคฎเฅ‡เค‚ เคตเคฟเคฐเคพเคฎ เคšเคฟเคนเฅเคจ เค”เคฐ เค†เคตเคฐเคฃ เค•เฅ€ เคญเคตเคฟเคทเฅเคฏเคตเคพเคฃเฅ€ เค•เคฐเคจเฅ‡ เค•เคพ เค…เคคเคฟเคฐเคฟเค•เฅเคค เคฒเคพเคญ เคญเฅ€ เคนเฅˆ, เคœเคฟเคจเคฎเฅ‡เค‚ เคธเฅ‡ เค•เฅ‹เคˆ เคญเฅ€ เคธเค‚เคญเคต เคจเคนเฅ€เค‚ เคนเฅˆ Wav2Vec2. เค†เค‡เค เค‡เคธเฅ‡ เคฏเคนเคพเค‚ เค†เคœเคผเคฎเคพเค•เคฐ เคฆเฅ‡เค–เฅ‡เค‚ เค•เคฟ เคฏเคน เค•เฅˆเคธเคพ เคชเฅเคฐเคฆเคฐเฅเคถเคจ เค•เคฐเคคเคพ เคนเฅˆ: ```py >>> transcriber = pipeline(model="openai/whisper-large-v2") >>> transcriber("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac") {'text': ' I have a dream that one day this nation will rise up and live out the true meaning of its creed.'} ``` เค…เคฌ เคฏเคน เคชเคฐเคฟเคฃเคพเคฎ เค…เคงเคฟเค• เคธเคŸเฅ€เค• เคฆเคฟเค–เคคเคพ เคนเฅˆ! 
Wav2Vec2 เคฌเคจเคพเคฎ เคตเฅเคนเคฟเคธเฅเคชเคฐ เคชเคฐ เค—เคนเคจ เคคเฅเคฒเคจเคพ เค•เฅ‡ เคฒเคฟเค, [เค‘เคกเคฟเคฏเฅ‹ เคŸเฅเคฐเคพเค‚เคธเคซเฅ‰เคฐเฅเคฎเคฐเฅเคธ เค•เฅ‹เคฐเฅเคธ](https://huggingface.co/learn/audio-course/chapter5/asr_models) เคฆเฅ‡เค–เฅ‡เค‚เฅค เคนเคฎ เคตเคพเคธเฅเคคเคต เคฎเฅ‡เค‚ เค†เคชเค•เฅ‹ เคตเคฟเคญเคฟเคจเฅเคจ เคญเคพเคทเคพเค“เค‚ เคฎเฅ‡เค‚ เคฎเฅ‰เคกเคฒ, เค†เคชเค•เฅ‡ เค•เฅเคทเฅ‡เคคเฅเคฐ เคฎเฅ‡เค‚ เคตเคฟเคถเฅ‡เคทเฅ€เค•เฅƒเคค เคฎเฅ‰เคกเคฒ เค”เคฐ เคฌเคนเฅเคค เค•เฅเค› เค•เฅ‡ เคฒเคฟเค เคนเคฌ เค•เฅ€ เคœเคพเค‚เคš เค•เคฐเคจเฅ‡ เค•เฅ‡ เคฒเคฟเค เคชเฅเคฐเฅ‹เคคเฅเคธเคพเคนเคฟเคค เค•เคฐเคคเฅ‡ เคนเฅˆเค‚เฅค เค†เคช เคนเคฌ เคชเคฐ เคธเฅ€เคงเฅ‡ เค…เคชเคจเฅ‡ เคฌเฅเคฐเคพเค‰เคœเคผเคฐ เคธเฅ‡ เคฎเฅ‰เคกเคฒ เคชเคฐเคฟเคฃเคพเคฎเฅ‹เค‚ เค•เฅ€ เคœเคพเค‚เคš เค”เคฐ เคคเฅเคฒเคจเคพ เค•เคฐ เคธเค•เคคเฅ‡ เคนเฅˆเค‚ เค•เคฟ เคฏเคน เคซเคฟเคŸ เคฌเฅˆเค เคคเคพ เคนเฅˆ เคฏเคพ เคจเคนเฅ€เค‚ เค…เคจเฅเคฏ เคฎเคพเคฎเคฒเฅ‹เค‚ เค•เฅ€ เคคเฅเคฒเคจเคพ เคฎเฅ‡เค‚ เค•เฅ‹เคจเฅ‡ เค•เฅ‡ เคฎเคพเคฎเคฒเฅ‹เค‚ เค•เฅ‹ เคฌเฅ‡เคนเคคเคฐ เคขเค‚เค— เคธเฅ‡ เคธเค‚เคญเคพเคฒเคคเคพ เคนเฅˆเฅค เค”เคฐ เคฏเคฆเคฟ เค†เคชเค•เฅ‹ เค…เคชเคจเฅ‡ เค‰เคชเคฏเฅ‹เค— เค•เฅ‡ เคฎเคพเคฎเคฒเฅ‡ เค•เฅ‡ เคฒเคฟเค เค•เฅ‹เคˆ เคฎเฅ‰เคกเคฒ เคจเคนเฅ€เค‚ เคฎเคฟเคฒเคคเคพ เคนเฅˆ, เคคเฅ‹ เค†เคช เคนเคฎเฅ‡เคถเคพ เค…เคชเคจเคพ เค–เฅเคฆ เค•เคพ [เคชเฅเคฐเคถเคฟเค•เฅเคทเคฃ](training) เคถเฅเคฐเฅ‚ เค•เคฐ เคธเค•เคคเฅ‡ เคนเฅˆเค‚! เคฏเคฆเคฟ เค†เคชเค•เฅ‡ เคชเคพเคธ เค•เคˆ เค‡เคจเคชเฅเคŸ เคนเฅˆเค‚, เคคเฅ‹ เค†เคช เค…เคชเคจเฅ‡ เค‡เคจเคชเฅเคŸ เค•เฅ‹ เคเค• เคธเฅ‚เคšเฅ€ เค•เฅ‡ เคฐเฅ‚เคช เคฎเฅ‡เค‚ เคชเคพเคธ เค•เคฐ เคธเค•เคคเฅ‡ เคนเฅˆเค‚: ```py transcriber( [ "https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac", "https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/1.flac", ] ) ``` เคชเคพเค‡เคชเคฒเคพเค‡เคจเฅ‡เค‚ เคชเฅเคฐเคฏเฅ‹เค— เค•เฅ‡ เคฒเคฟเค เคฌเคนเฅเคค เค…เคšเฅเค›เฅ€ เคนเฅˆเค‚ เค•เฅเคฏเฅ‹เค‚เค•เคฟ เคเค• เคฎเฅ‰เคกเคฒ เคธเฅ‡ เคฆเฅ‚เคธเคฐเฅ‡ เคฎเฅ‰เคกเคฒ เคชเคฐ เคธเฅเคตเคฟเคš เค•เคฐเคจเคพ เคฎเคพเคฎเฅ‚เคฒเฅ€ เค•เคพเคฎ เคนเฅˆ; เคนเคพเคฒเคพเคเค•เคฟ, เคชเฅเคฐเคฏเฅ‹เค— เค•เฅ€ เคคเฅเคฒเคจเคพ เคฎเฅ‡เค‚ เคฌเคกเคผเฅ‡ เค•เคพเคฐเฅเคฏเคญเคพเคฐ เค•เฅ‡ เคฒเคฟเค เค‰เคจเฅเคนเฅ‡เค‚ เค…เคจเฅเค•เฅ‚เคฒเคฟเคค เค•เคฐเคจเฅ‡ เค•เฅ‡ เค•เฅเค› เคคเคฐเฅ€เค•เฅ‡ เคนเฅˆเค‚เฅค เคธเค‚เคชเฅ‚เคฐเฅเคฃ เคกเฅ‡เคŸเคพเคธเฅ‡เคŸ เคชเคฐ เคชเฅเคจเคฐเคพเคตเฅƒเคคเฅเคคเคฟ เค•เคฐเคจเฅ‡ เคฏเคพ เคตเฅ‡เคฌเคธเคฐเฅเคตเคฐ เคฎเฅ‡เค‚ เคชเคพเค‡เคชเคฒเคพเค‡เคจเฅ‹เค‚ เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐเคจเฅ‡ เค•เฅ‡ เคฌเคพเคฐเฅ‡ เคฎเฅ‡เค‚ เคจเคฟเคฎเฅเคจเคฒเคฟเค–เคฟเคค เคฎเคพเคฐเฅเค—เคฆเคฐเฅเคถเคฟเค•เคพเคเค เคฆเฅ‡เค–เฅ‡เค‚: เคฆเคธเฅเคคเคพเคตเฅ‡เคœเคผเฅ‹เค‚ เคฎเฅ‡เค‚ เคธเฅ‡: * [เคกเฅ‡เคŸเคพเคธเฅ‡เคŸ เคชเคฐ เคชเคพเค‡เคชเคฒเคพเค‡เคจเฅ‹เค‚ เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐเคจเคพ](#using-pipelines-on-a-dataset) * [เคตเฅ‡เคฌเคธเคฐเฅเคตเคฐ เค•เฅ‡ เคฒเคฟเค เคชเคพเค‡เคชเคฒเคพเค‡เคจเฅ‹เค‚ เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐเคจเคพ](./pipeline_webserver) ## เคชเฅเคฐเคพเคšเคฒ [`pipeline`] เค•เคˆ เคฎเคพเคชเคฆเค‚เคกเฅ‹เค‚ เค•เคพ เคธเคฎเคฐเฅเคฅเคจ เค•เคฐเคคเคพ เคนเฅˆ; เค•เฅเค› เค•เคพเคฐเฅเคฏ เคตเคฟเคถเคฟเคทเฅเคŸ เคนเฅˆเค‚, เค”เคฐ เค•เฅเค› เคธเคญเฅ€ เคชเคพเค‡เคชเคฒเคพเค‡เคจเฅ‹เค‚ เค•เฅ‡ เคฒเคฟเค เคธเคพเคฎเคพเคจเฅเคฏ เคนเฅˆเค‚เฅค เคธเคพเคฎเคพเคจเฅเคฏ เคคเฅŒเคฐ เคชเคฐ, เค†เคช เค…เคชเคจเฅ€ เค‡เคšเฅเค›เคพเคจเฅเคธเคพเคฐ เค•เคนเฅ€เค‚ เคญเฅ€ เคชเฅˆเคฐเคพเคฎเฅ€เคŸเคฐ เคจเคฟเคฐเฅเคฆเคฟเคทเฅเคŸ เค•เคฐ เคธเค•เคคเฅ‡ เคนเฅˆเค‚: ```py transcriber = pipeline(model="openai/whisper-large-v2", my_parameter=1) out = transcriber(...) # This will use `my_parameter=1`. 
out = transcriber(..., my_parameter=2) # This will override and use `my_parameter=2`. out = transcriber(...) # This will go back to using `my_parameter=1`. ``` เค†เค‡เค 3 เคฎเคนเคคเฅเคตเคชเฅ‚เคฐเฅเคฃ เคฌเคพเคคเฅ‹เค‚ เคชเคฐ เค—เฅŒเคฐ เค•เคฐเฅ‡เค‚: ### เค‰เคชเค•เคฐเคฃ เคฏเคฆเคฟ เค†เคช `device=0` เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐเคคเฅ‡ เคนเฅˆเค‚, เคคเฅ‹ เคชเคพเค‡เคชเคฒเคพเค‡เคจ เคธเฅเคตเคšเคพเคฒเคฟเคค เคฐเฅ‚เคช เคธเฅ‡ เคฎเฅ‰เคกเคฒ เค•เฅ‹ เคจเคฟเคฐเฅเคฆเคฟเคทเฅเคŸ เคกเคฟเคตเคพเค‡เคธ เคชเคฐ เคกเคพเคฒ เคฆเฅ‡เคคเฅ€ เคนเฅˆเฅค เคฏเคน เค‡เคธ เคชเคฐ เคงเฅเคฏเคพเคจ เคฆเคฟเค เคฌเคฟเคจเคพ เค•เคพเคฎ เค•เคฐเฅ‡เค—เคพ เค•เคฟ เค†เคช PyTorch เคฏเคพ Tensorflow เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐ เคฐเคนเฅ‡ เคนเฅˆเค‚ เคฏเคพ เคจเคนเฅ€เค‚เฅค ```py transcriber = pipeline(model="openai/whisper-large-v2", device=0) ``` เคฏเคฆเคฟ เคฎเฅ‰เคกเคฒ เคเค•เคฒ GPU เค•เฅ‡ เคฒเคฟเค เคฌเคนเฅเคค เคฌเคกเคผเคพ เคนเฅˆ เค”เคฐ เค†เคช PyTorch เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐ เคฐเคนเฅ‡ เคนเฅˆเค‚, เคคเฅ‹ เค†เคช `device_map="auto"` เค•เฅ‹ เคธเฅเคตเคšเคพเคฒเคฟเคค เคฐเฅ‚เคช เคธเฅ‡ เคธเฅ‡เคŸ เค•เคฐ เคธเค•เคคเฅ‡ เคนเฅˆเค‚ เคจเคฟเคฐเฅเคงเคพเคฐเคฟเคค เค•เคฐเฅ‡เค‚ เค•เคฟ เคฎเฅ‰เคกเคฒ เคตเคœเคผเคจ เค•เฅ‹ เค•เฅˆเคธเฅ‡ เคฒเฅ‹เคก เค”เคฐ เคธเค‚เค—เฅเคฐเคนเฅ€เคค เค•เคฟเคฏเคพ เคœเคพเคเฅค `device_map` เคคเคฐเฅเค• เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐเคจเฅ‡ เค•เฅ‡ เคฒเคฟเค ๐Ÿค— [Accelerate](https://huggingface.co/docs/accelerate) เค•เฅ€ เค†เคตเคถเฅเคฏเค•เคคเคพ เคนเฅ‹เคคเฅ€ เคนเฅˆ เคชเฅˆเค•เฅ‡เคŸ: ```bash pip install --upgrade accelerate ``` เคจเคฟเคฎเฅเคจเคฒเคฟเค–เคฟเคค เค•เฅ‹เคก เคธเฅเคตเคšเคพเคฒเคฟเคค เคฐเฅ‚เคช เคธเฅ‡ เคธเคญเฅ€ เคกเคฟเคตเคพเค‡เคธเฅ‹เค‚ เคฎเฅ‡เค‚ เคฎเฅ‰เคกเคฒ เคญเคพเคฐ เค•เฅ‹ เคฒเฅ‹เคก เค”เคฐ เคธเค‚เค—เฅเคฐเคนเฅ€เคค เค•เคฐเคคเคพ เคนเฅˆ: ```py transcriber = pipeline(model="openai/whisper-large-v2", device_map="auto") ``` เคงเฅเคฏเคพเคจ เคฆเฅ‡เค‚ เค•เคฟ เคฏเคฆเคฟ `device_map='auto'` เคชเคพเคฐเคฟเคค เคนเฅ‹ เค—เคฏเคพ เคนเฅˆ, เคคเฅ‹ เค…เคชเคจเฅ€ `pipeline` เค•เฅ‹ เคšเคพเคฒเฅ‚ เค•เคฐเคคเฅ‡ เคธเคฎเคฏ `device=device` เคคเคฐเฅเค• เคœเฅ‹เคกเคผเคจเฅ‡ เค•เฅ€ เค•เฅ‹เคˆ เค†เคตเคถเฅเคฏเค•เคคเคพ เคจเคนเฅ€เค‚ เคนเฅˆ เค•เฅเคฏเฅ‹เค‚เค•เคฟ เค†เคชเค•เฅ‹ เค•เฅเค› เค…เคชเฅเคฐเคคเฅเคฏเคพเคถเคฟเคค เคตเฅเคฏเคตเคนเคพเคฐ เค•เคพ เคธเคพเคฎเคจเคพ เค•เคฐเคจเคพ เคชเคกเคผ เคธเค•เคคเคพ เคนเฅˆ! 
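नीचे एक छोटा सा स्केच है (यह मानते हुए कि ऊपर वाला ही `openai/whisper-large-v2` चेकपॉइंट उपयोग हो रहा है) जो दिखाता है कि `device` और `device_map` में से केवल एक ही विकल्प चुनना चाहिए:

```py
from transformers import pipeline

# एकल GPU पर: केवल `device` निर्दिष्ट करें
transcriber = pipeline(model="openai/whisper-large-v2", device=0)

# बड़े मॉडल को कई डिवाइसों पर बांटने के लिए: केवल `device_map` निर्दिष्ट करें
transcriber = pipeline(model="openai/whisper-large-v2", device_map="auto")

# दोनों तर्क एक साथ न दें — इससे अप्रत्याशित व्यवहार हो सकता है
```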
### เคฌเฅˆเคš เค•เคพ เค†เค•เคพเคฐ เคกเคฟเคซเคผเฅ‰เคฒเฅเคŸ เคฐเฅ‚เคช เคธเฅ‡, เคชเคพเค‡เคชเคฒเคพเค‡เคจเฅ‡เค‚ [เคฏเคนเคพเค‚](https://huggingface.co/docs/transformers/main_classes/pipelines#pipeline-batching) เคตเคฟเคธเฅเคคเคพเคฐ เคธเฅ‡ เคฌเคคเคพเค เค—เค เค•เคพเคฐเคฃเฅ‹เค‚ เค•เฅ‡ เคฒเคฟเค เคฌเฅˆเคš เค…เคจเฅเคฎเคพเคจ เคจเคนเฅ€เค‚ เคฒเค—เคพเคเค‚เค—เฅ€เฅค เค‡เคธเค•เคพ เค•เคพเคฐเคฃ เคฏเคน เคนเฅˆ เค•เคฟ เคฌเฅˆเคšเคฟเค‚เค— เค†เคตเคถเฅเคฏเค• เคฐเฅ‚เคช เคธเฅ‡ เคคเฅ‡เคœเคผ เคจเคนเฅ€เค‚ เคนเฅˆ, เค”เคฐ เคตเคพเคธเฅเคคเคต เคฎเฅ‡เค‚ เค•เฅเค› เคฎเคพเคฎเคฒเฅ‹เค‚ เคฎเฅ‡เค‚ เค•เคพเคซเฅ€ เคงเฅ€เคฎเฅ€ เคนเฅ‹ เคธเค•เคคเฅ€ เคนเฅˆเฅค เคฒเฅ‡เค•เคฟเคจ เค…เค—เคฐ เคฏเคน เค†เคชเค•เฅ‡ เค‰เคชเคฏเฅ‹เค— เค•เฅ‡ เคฎเคพเคฎเคฒเฅ‡ เคฎเฅ‡เค‚ เค•เคพเคฎ เค•เคฐเคคเคพ เคนเฅˆ, เคคเฅ‹ เค†เคช เค‡เคธเค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐ เคธเค•เคคเฅ‡ เคนเฅˆเค‚: ```py transcriber = pipeline(model="openai/whisper-large-v2", device=0, batch_size=2) audio_filenames = [f"https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/{i}.flac" for i in range(1, 5)] texts = transcriber(audio_filenames) ``` เคฏเคน เคชเฅเคฐเคฆเคพเคจ เค•เฅ€ เค—เคˆ 4 เค‘เคกเคฟเคฏเฅ‹ เคซเคพเค‡เคฒเฅ‹เค‚ เคชเคฐ เคชเคพเค‡เคชเคฒเคพเค‡เคจ เคšเคฒเคพเคคเคพ เคนเฅˆ, เคฒเฅ‡เค•เคฟเคจ เคฏเคน เค‰เคจเฅเคนเฅ‡เค‚ 2 เค•เฅ‡ เคฌเฅˆเคš เคฎเฅ‡เค‚ เคชเคพเคธ เค•เคฐเฅ‡เค—เคพ เค†เคชเคธเฅ‡ เค•เคฟเคธเฅ€ เค”เคฐ เค•เฅ‹เคก เค•เฅ€ เค†เคตเคถเฅเคฏเค•เคคเคพ เค•เฅ‡ เคฌเคฟเคจเคพ เคฎเฅ‰เคกเคฒ (เคœเฅ‹ เคเค• เคœเฅ€เคชเฅ€เคฏเฅ‚ เคชเคฐ เคนเฅˆ, เคœเคนเคพเค‚ เคฌเฅˆเคšเคฟเค‚เค— เคธเฅ‡ เคฎเคฆเคฆ เคฎเคฟเคฒเคจเฅ‡ เค•เฅ€ เค…เคงเคฟเค• เคธเค‚เคญเคพเคตเคจเคพ เคนเฅˆ) เคชเคฐ เคœเคพเคเค‚เฅค เค†เค‰เคŸเคชเฅเคŸ เคนเคฎเฅ‡เคถเคพ เค‰เคธเฅ€ เคธเฅ‡ เคฎเฅ‡เคฒ เค–เคพเคจเคพ เคšเคพเคนเคฟเค เคœเฅ‹ เค†เคชเค•เฅ‹ เคฌเฅˆเคšเคฟเค‚เค— เค•เฅ‡ เคฌเคฟเคจเคพ เคชเฅเคฐเคพเคชเฅเคค เคนเฅเค† เคนเฅ‹เค—เคพเฅค เค‡เคธเค•เคพ เค‰เคฆเฅเคฆเฅ‡เคถเฅเคฏ เค•เฅ‡เคตเคฒ เคชเคพเค‡เคชเคฒเคพเค‡เคจ เคธเฅ‡ เค…เคงเคฟเค• เค—เคคเคฟ เคชเฅเคฐเคพเคชเฅเคค เค•เคฐเคจเฅ‡ เคฎเฅ‡เค‚ เค†เคชเค•เฅ€ เคธเคนเคพเคฏเคคเคพ เค•เคฐเคจเคพ เคนเฅˆเฅค เคชเคพเค‡เคชเคฒเคพเค‡เคจเฅ‡เค‚ เคฌเฅˆเคšเคฟเค‚เค— เค•เฅ€ เค•เฅเค› เคœเคŸเคฟเคฒเคคเคพเค“เค‚ เค•เฅ‹ เคญเฅ€ เค•เคฎ เค•เคฐ เคธเค•เคคเฅ€ เคนเฅˆเค‚ เค•เฅเคฏเฅ‹เค‚เค•เคฟ, เค•เฅเค› เคชเคพเค‡เคชเคฒเคพเค‡เคจเฅ‹เค‚ เค•เฅ‡ เคฒเคฟเค, เคเค• เคเค•เคฒ เค†เค‡เคŸเคฎ (เคœเฅˆเคธเฅ‡ เคเค• เคฒเค‚เคฌเฅ€ เค‘เคกเคฟเคฏเฅ‹ เคซเคผเคพเค‡เคฒ) เค•เฅ‹ เคเค• เคฎเฅ‰เคกเคฒ เคฆเฅเคตเคพเคฐเคพ เคธเค‚เคธเคพเคงเคฟเคค เค•เคฐเคจเฅ‡ เค•เฅ‡ เคฒเคฟเค เค•เคˆ เคญเคพเค—เฅ‹เค‚ เคฎเฅ‡เค‚ เคตเคฟเคญเคพเคœเคฟเคค เค•เคฐเคจเฅ‡ เค•เฅ€ เค†เคตเคถเฅเคฏเค•เคคเคพ เคนเฅ‹เคคเฅ€ เคนเฅˆเฅค เคชเคพเค‡เคชเคฒเคพเค‡เคจ เค†เคชเค•เฅ‡ เคฒเคฟเค เคฏเคน [*chunk batching*](./main_classes/pipelines#pipeline-chunk-batching) เค•เคฐเคคเฅ€ เคนเฅˆเฅค ### เค•เคพเคฐเฅเคฏ เคตเคฟเคถเคฟเคทเฅเคŸ เคชเฅเคฐเคพเคšเคฒ เคธเคญเฅ€ เค•เคพเคฐเฅเคฏ เค•เคพเคฐเฅเคฏ เคตเคฟเคถเคฟเคทเฅเคŸ เคชเฅเคฐเคพเคšเคฒ เคชเฅเคฐเคฆเคพเคจ เค•เคฐเคคเฅ‡ เคนเฅˆเค‚ เคœเฅ‹ เค†เคชเค•เฅ‹ เค…เคชเคจเคพ เค•เคพเคฎ เคชเฅ‚เคฐเคพ เค•เคฐเคจเฅ‡ เคฎเฅ‡เค‚ เคฎเคฆเคฆ เค•เคฐเคจเฅ‡ เค•เฅ‡ เคฒเคฟเค เค…เคคเคฟเคฐเคฟเค•เฅเคค เคฒเคšเฅ€เคฒเฅ‡เคชเคจ เค”เคฐ เคตเคฟเค•เคฒเฅเคชเฅ‹เค‚ เค•เฅ€ เค…เคจเฅเคฎเคคเคฟ เคฆเฅ‡เคคเฅ‡ เคนเฅˆเค‚เฅค เค‰เคฆเคพเคนเคฐเคฃ เค•เฅ‡ เคฒเคฟเค, [`transformers.AutomaticSpeechRecognitionPipeline.__call__`] เคตเคฟเคงเคฟ เคฎเฅ‡เค‚ เคเค• `return_timestamps` เคชเฅเคฐเคพเคšเคฒ เคนเฅˆ เคœเฅ‹ เคตเฅ€เคกเคฟเคฏเฅ‹ เค‰เคชเคถเฅ€เคฐเฅเคทเค• เค•เฅ‡ เคฒเคฟเค เค†เคถเคพเคœเคจเค• เคฒเค—เคคเคพ เคนเฅˆ: ```py >>> transcriber = pipeline(model="openai/whisper-large-v2", return_timestamps=True) >>> 
transcriber("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac") {'text': ' I have a dream that one day this nation will rise up and live out the true meaning of its creed.', 'chunks': [{'timestamp': (0.0, 11.88), 'text': ' I have a dream that one day this nation will rise up and live out the true meaning of its'}, {'timestamp': (11.88, 12.38), 'text': ' creed.'}]} ``` เคœเฅˆเคธเคพ เค•เคฟ เค†เคช เคฆเฅ‡เค– เคธเค•เคคเฅ‡ เคนเฅˆเค‚, เคฎเฅ‰เคกเคฒ เคจเฅ‡ เคชเคพเค  เค•เคพ เค…เคจเฅเคฎเคพเคจ เคฒเค—เคพเคฏเคพ เค”เคฐ **when** เคตเคฟเคญเคฟเคจเฅเคจ เคตเคพเค•เฅเคฏเฅ‹เค‚ เค•เคพ เค‰เคšเฅเคšเคพเคฐเคฃ เค•เคฟเคฏเคพ เค—เคฏเคพ เคคเฅ‹ เค†เค‰เคŸเคชเฅเคŸ เคญเฅ€ เคฆเคฟเคฏเคพเฅค เคชเฅเคฐเคคเฅเคฏเฅ‡เค• เค•เคพเคฐเฅเคฏ เค•เฅ‡ เคฒเคฟเค เค•เคˆ เคชเฅเคฐเคพเคšเคฒ เค‰เคชเคฒเคฌเฅเคง เคนเฅˆเค‚, เค‡เคธเคฒเคฟเค เคฏเคน เคฆเฅ‡เค–เคจเฅ‡ เค•เฅ‡ เคฒเคฟเค เค•เคฟ เค†เคช เค•เคฟเคธเค•เฅ‡ เคธเคพเคฅ เค›เฅ‡เคกเคผเค›เคพเคกเคผ เค•เคฐ เคธเค•เคคเฅ‡ เคนเฅˆเค‚, เคชเฅเคฐเคคเฅเคฏเฅ‡เค• เค•เคพเคฐเฅเคฏ เค•เคพ API เคธเค‚เคฆเคฐเฅเคญ เคฆเฅ‡เค–เฅ‡เค‚! เค‰เคฆเคพเคนเคฐเคฃ เค•เฅ‡ เคฒเคฟเค, [`~transformers.AutomaticSpeechRecognitionPipeline`] เคฎเฅ‡เค‚ เคเค• `chunk_length_s` เคชเฅเคฐเคพเคšเคฒ เคนเฅˆ เคœเฅ‹ เคธเคนเคพเคฏเค• เคนเฅˆ เคตเคพเคธเฅเคคเคต เคฎเฅ‡เค‚ เคฒเค‚เคฌเฅ€ เค‘เคกเคฟเคฏเฅ‹ เคซเคผเคพเค‡เคฒเฅ‹เค‚ เคชเคฐ เค•เคพเคฎ เค•เคฐเคจเฅ‡ เค•เฅ‡ เคฒเคฟเค (เค‰เคฆเคพเคนเคฐเคฃ เค•เฅ‡ เคฒเคฟเค, เคธเค‚เคชเฅ‚เคฐเฅเคฃ เคซเคฟเคฒเฅเคฎเฅ‹เค‚ เคฏเคพ เค˜เค‚เคŸเฅ‡-เคฒเค‚เคฌเฅ‡ เคตเฅ€เคกเคฟเคฏเฅ‹ เค•เฅ‹ เค‰เคชเคถเฅ€เคฐเฅเคทเค• เคฆเฅ‡เคจเคพ) เคœเฅ‹ เค†เคฎเคคเฅŒเคฐ เคชเคฐ เคเค• เคฎเฅ‰เคกเคฒ เคนเฅ‹เคคเคพ เคนเฅˆ เค…เคชเคจเฅ‡ เค†เคช เคธเค‚เคญเคพเคฒ เคจเคนเฅ€เค‚ เคธเค•เคคเคพ: ```python >>> transcriber = pipeline(model="openai/whisper-large-v2", chunk_length_s=30, return_timestamps=True) >>> transcriber("https://huggingface.co/datasets/sanchit-gandhi/librispeech_long/resolve/main/audio.wav") {'text': " Chapter 16. I might have told you of the beginning of this liaison in a few lines, but I wanted you to see every step by which we came. I, too, agree to whatever Marguerite wished, Marguerite to be unable to live apart from me. It was the day after the evening... ``` เคฏเคฆเคฟ เค†เคชเค•เฅ‹ เค•เฅ‹เคˆ เคเคธเคพ เคชเฅˆเคฐเคพเคฎเฅ€เคŸเคฐ เคจเคนเฅ€เค‚ เคฎเคฟเคฒ เคฐเคนเคพ เคนเฅˆ เคœเฅ‹ เคตเคพเคธเฅเคคเคต เคฎเฅ‡เค‚ เค†เคชเค•เฅ€ เคฎเคฆเคฆ เค•เคฐเฅ‡เค—เคพ, เคคเฅ‹ เคฌเฅ‡เคเคฟเคเค• [เค…เคจเฅเคฐเฅ‹เคง เค•เคฐเฅ‡เค‚](https://github.com/huggingface/transformers/issues/new?assignees=&labels=feature&template=feature-request.yml)! 
## เคกเฅ‡เคŸเคพเคธเฅ‡เคŸ เคชเคฐ เคชเคพเค‡เคชเคฒเคพเค‡เคจเฅ‹เค‚ เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐเคจเคพ เคชเคพเค‡เคชเคฒเคพเค‡เคจ เคฌเคกเคผเฅ‡ เคกเฅ‡เคŸเคพเคธเฅ‡เคŸ เคชเคฐ เคญเฅ€ เค…เคจเฅเคฎเคพเคจ เคšเคฒเคพ เคธเค•เคคเฅ€ เคนเฅˆเฅค เคเคธเคพ เค•เคฐเคจเฅ‡ เค•เคพ เคธเคฌเคธเฅ‡ เค†เคธเคพเคจ เคคเคฐเฅ€เค•เคพ เคนเคฎ เคเค• เคชเฅเคจเคฐเคพเคตเคฐเฅเคคเค• เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐเคจเฅ‡ เค•เฅ€ เคธเคฒเคพเคน เคฆเฅ‡เคคเฅ‡ เคนเฅˆเค‚: ```py def data(): for i in range(1000): yield f"My example {i}" pipe = pipeline(model="openai-community/gpt2", device=0) generated_characters = 0 for out in pipe(data()): generated_characters += len(out[0]["generated_text"]) ``` เคชเฅเคจเคฐเคพเคตเคฐเฅเคคเค• `data()` เคชเฅเคฐเคคเฅเคฏเฅ‡เค• เคชเคฐเคฟเคฃเคพเคฎ เค”เคฐ เคชเคพเค‡เคชเคฒเคพเค‡เคจ เคธเฅเคตเคšเคพเคฒเคฟเคค เคฐเฅ‚เคช เคธเฅ‡ เค‰เคคเฅเคชเคจเฅเคจ เค•เคฐเคคเคพ เคนเฅˆ เคชเคนเคšเคพเคจเคคเคพ เคนเฅˆ เค•เคฟ เค‡เคจเคชเฅเคŸ เคชเฅเคจเคฐเคพเคตเคฐเฅเคคเคจเฅ€เคฏ เคนเฅˆ เค”เคฐ เคกเฅ‡เคŸเคพ เคชเฅเคฐเคพเคชเฅเคค เค•เคฐเคจเคพ เคถเฅเคฐเฅ‚ เค•เคฐ เคฆเฅ‡เค—เคพ เคฏเคน เค‡เคธเฅ‡ GPU เคชเคฐ เคชเฅเคฐเฅ‹เคธเฅ‡เคธ เค•เคฐเคจเคพ เคœเคพเคฐเฅ€ เคฐเค–เคคเคพ เคนเฅˆ (เคฏเคน เคนเฅเคก เค•เฅ‡ เคคเคนเคค [DataLoader](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader) เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐเคคเคพ เคนเฅˆ)เฅค เคฏเคน เคฎเคนเคคเฅเคตเคชเฅ‚เคฐเฅเคฃ เคนเฅˆ เค•เฅเคฏเฅ‹เค‚เค•เคฟ เค†เคชเค•เฅ‹ เคธเค‚เคชเฅ‚เคฐเฅเคฃ เคกเฅ‡เคŸเคพเคธเฅ‡เคŸ เค•เฅ‡ เคฒเคฟเค เคฎเฅ‡เคฎเฅ‹เคฐเฅ€ เค†เคตเค‚เคŸเคฟเคค เค•เคฐเคจเฅ‡ เค•เฅ€ เค†เคตเคถเฅเคฏเค•เคคเคพ เคจเคนเฅ€เค‚ เคนเฅˆ เค”เคฐ เค†เคช เคœเคฟเคคเคจเฅ€ เคœเคฒเฅเคฆเฅ€ เคนเฅ‹ เคธเค•เฅ‡ GPU เค•เฅ‹ เคซเฅ€เคก เค•เคฐ เคธเค•เคคเฅ‡ เคนเฅˆเค‚เฅค เคšเฅ‚เค‚เค•เคฟ เคฌเฅˆเคšเคฟเค‚เค— เคธเฅ‡ เคšเฅ€เคœเคผเฅ‡เค‚ เคคเฅ‡เคœเคผ เคนเฅ‹ เคธเค•เคคเฅ€ เคนเฅˆเค‚, เค‡เคธเคฒเคฟเค เคฏเคนเคพเค‚ `batch_size` เคชเฅเคฐเคพเคšเคฒ เค•เฅ‹ เคŸเฅเคฏเฅ‚เคจ เค•เคฐเคจเฅ‡ เค•เคพ เคชเฅเคฐเคฏเคพเคธ เค•เคฐเคจเคพ เค‰เคชเคฏเฅ‹เค—เฅ€ เคนเฅ‹ เคธเค•เคคเคพ เคนเฅˆเฅค เค•เคฟเคธเฅ€ เคกเฅ‡เคŸเคพเคธเฅ‡เคŸ เคชเคฐ เคชเฅเคจเคฐเคพเคตเฅƒเคคเคฟ เค•เคฐเคจเฅ‡ เค•เคพ เคธเคฌเคธเฅ‡ เคธเคฐเคฒ เคคเคฐเฅ€เค•เคพ เคฌเคธ เคเค• เค•เฅ‹ ๐Ÿค— [Dataset](https://github.com/huggingface/datasets/) เคธเฅ‡ เคฒเฅ‹เคก เค•เคฐเคจเคพ เคนเฅˆ: ```py # KeyDataset is a util that will just output the item we're interested in. 
from transformers.pipelines.pt_utils import KeyDataset from datasets import load_dataset pipe = pipeline(model="hf-internal-testing/tiny-random-wav2vec2", device=0) dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation[:10]") for out in pipe(KeyDataset(dataset, "audio")): print(out) ``` ## เคตเฅ‡เคฌเคธเคฐเฅเคตเคฐ เค•เฅ‡ เคฒเคฟเค เคชเคพเค‡เคชเคฒเคพเค‡เคจเฅ‹เค‚ เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐเคจเคพ <Tip> เคเค• เค…เคจเฅเคฎเคพเคจ เค‡เค‚เคœเคจ เคฌเคจเคพเคจเคพ เคเค• เคœเคŸเคฟเคฒ เคตเคฟเคทเคฏ เคนเฅˆ เคœเฅ‹ เค…เคชเคจเฅ‡ เค†เคช เคฎเฅ‡เค‚ เค‰เคชเคฏเฅเค•เฅเคค เคนเฅˆ เคชเฅƒเคทเฅเค เฅค </Tip> [Link](./pipeline_webserver) ## เคตเคฟเคœเคผเคจ เคชเคพเค‡เคชเคฒเคพเค‡เคจ เคฆเฅƒเคทเฅเคŸเคฟ เค•เคพเคฐเฅเคฏเฅ‹เค‚ เค•เฅ‡ เคฒเคฟเค [`pipeline`] เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐเคจเคพ เคตเฅเคฏเคพเคตเคนเคพเคฐเคฟเค• เคฐเฅ‚เคช เคธเฅ‡ เคธเคฎเคพเคจ เคนเฅˆเฅค เค…เคชเคจเคพ เค•เคพเคฐเฅเคฏ เคจเคฟเคฐเฅเคฆเคฟเคทเฅเคŸ เค•เคฐเฅ‡เค‚ เค”เคฐ เค…เคชเคจเฅ€ เค›เคตเคฟ เค•เฅเคฒเคพเคธเคฟเคซเคพเคฏเคฐเคฟเคฏเคฐ เค•เฅ‹ เคญเฅ‡เคœเฅ‡เค‚เฅค เค›เคตเคฟ เคเค• เคฒเคฟเค‚เค•, เคเค• เคธเฅเคฅเคพเคจเฅ€เคฏ เคชเคฅ เคฏเคพ เคฌเฅ‡เคธ64-เคเคจเฅเค•เฅ‹เคกเฅ‡เคก เค›เคตเคฟ เคนเฅ‹ เคธเค•เคคเฅ€ เคนเฅˆเฅค เค‰เคฆเคพเคนเคฐเคฃ เค•เฅ‡ เคฒเคฟเค, เคฌเคฟเคฒเฅเคฒเฅ€ เค•เฅ€ เค•เฅŒเคจ เคธเฅ€ เคชเฅเคฐเคœเคพเคคเคฟ เคจเฅ€เคšเฅ‡ เคฆเคฟเค–เคพเคˆ เค—เคˆ เคนเฅˆ? ![pipeline-cat-chonk](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg) ```py >>> from transformers import pipeline >>> vision_classifier = pipeline(model="google/vit-base-patch16-224") >>> preds = vision_classifier( ... images="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg" ... ) >>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds] >>> preds [{'score': 0.4335, 'label': 'lynx, catamount'}, {'score': 0.0348, 'label': 'cougar, puma, catamount, mountain lion, painter, panther, Felis concolor'}, {'score': 0.0324, 'label': 'snow leopard, ounce, Panthera uncia'}, {'score': 0.0239, 'label': 'Egyptian cat'}, {'score': 0.0229, 'label': 'tiger cat'}] ``` ## เคชเคพเค  เคชเคพเค‡เคชเคฒเคพเค‡เคจ NLP เค•เคพเคฐเฅเคฏเฅ‹เค‚ เค•เฅ‡ เคฒเคฟเค [`pipeline`] เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐเคจเคพ เคตเฅเคฏเคพเคตเคนเคพเคฐเคฟเค• เคฐเฅ‚เคช เคธเฅ‡ เคธเคฎเคพเคจ เคนเฅˆเฅค ```py >>> from transformers import pipeline >>> # This model is a `zero-shot-classification` model. >>> # It will classify text, except you are free to choose any label you might imagine >>> classifier = pipeline(model="facebook/bart-large-mnli") >>> classifier( ... "I have a problem with my iphone that needs to be resolved asap!!", ... candidate_labels=["urgent", "not urgent", "phone", "tablet", "computer"], ... 
) {'sequence': 'I have a problem with my iphone that needs to be resolved asap!!', 'labels': ['urgent', 'phone', 'computer', 'not urgent', 'tablet'], 'scores': [0.504, 0.479, 0.013, 0.003, 0.002]} ``` ## เคฌเคนเฅเคตเคฟเคง เคชเคพเค‡เคชเคฒเคพเค‡เคจ [`pipeline`] เคเค• เคธเฅ‡ เค…เคงเคฟเค• เคคเฅŒเคฐ-เคคเคฐเฅ€เค•เฅ‹เค‚ เค•เคพ เคธเคฎเคฐเฅเคฅเคจ เค•เคฐเคคเฅ€ เคนเฅˆเฅค เค‰เคฆเคพเคนเคฐเคฃ เค•เฅ‡ เคฒเคฟเค, เคเค• เคฆเฅƒเคถเฅเคฏ เคชเฅเคฐเคถเฅเคจ เค‰เคคเฅเคคเคฐ (VQA) เค•เคพเคฐเฅเคฏ เคชเคพเค  เค”เคฐ เค›เคตเคฟ เค•เฅ‹ เคœเฅ‹เคกเคผเคคเคพ เคนเฅˆเฅค เค…เคชเคจเฅ€ เคชเคธเค‚เคฆ เค•เฅ‡ เค•เคฟเคธเฅ€ เคญเฅ€ เค›เคตเคฟ เคฒเคฟเค‚เค• เค”เคฐ เค›เคตเคฟ เค•เฅ‡ เคฌเคพเคฐเฅ‡ เคฎเฅ‡เค‚ เค•เฅ‹เคˆ เคชเฅเคฐเคถเฅเคจ เคชเฅ‚เค›เคจเฅ‡ เค•เฅ‡ เคฒเคฟเค เคธเฅเคตเคคเค‚เคคเฅเคฐ เคฎเคนเคธเฅ‚เคธ เค•เคฐเฅ‡เค‚เฅค เค›เคตเคฟ เคเค• URL เคฏเคพ เค›เคตเคฟ เค•เคพ เคธเฅเคฅเคพเคจเฅ€เคฏ เคชเคฅ เคนเฅ‹ เคธเค•เคคเฅ€ เคนเฅˆเฅค เค‰เคฆเคพเคนเคฐเคฃ เค•เฅ‡ เคฒเคฟเค, เคฏเคฆเคฟ เค†เคช เค‡เคธ [invoice image](https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png) เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐเคคเฅ‡ เคนเฅˆเค‚: ```py >>> from transformers import pipeline >>> vqa = pipeline(model="impira/layoutlm-document-qa") >>> output = vqa( ... image="https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png", ... question="What is the invoice number?", ... ) >>> output[0]["score"] = round(output[0]["score"], 3) >>> output [{'score': 0.425, 'answer': 'us-001', 'start': 16, 'end': 16}] ``` <Tip> เคŠเคชเคฐ เคฆเคฟเค เค—เค เค‰เคฆเคพเคนเคฐเคฃ เค•เฅ‹ เคšเคฒเคพเคจเฅ‡ เค•เฅ‡ เคฒเคฟเค เค†เคชเค•เฅ‹ ๐Ÿค— เคŸเฅเคฐเคพเค‚เคธเคซเฅ‰เคฐเฅเคฎเคฐ เค•เฅ‡ เค…เคฒเคพเคตเคพ [`pytesseract`](https://pypi.org/project/pytesseract/) เค‡เค‚เคธเฅเคŸเฅ‰เคฒ เค•เคฐเคจเคพ เคนเฅ‹เค—เคพ: ```bash sudo apt install -y tesseract-ocr pip install pytesseract ``` </Tip> ## ๐Ÿค— `เคคเฅเคตเคฐเคฃ` เค•เฅ‡ เคธเคพเคฅ เคฌเคกเคผเฅ‡ เคฎเฅ‰เคกเคฒเฅ‹เค‚ เคชเคฐ `pipeline` เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐเคจเคพ: เค†เคช ๐Ÿค— `accelerate` เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐเค•เฅ‡ เคฌเคกเคผเฅ‡ เคฎเฅ‰เคกเคฒเฅ‹เค‚ เคชเคฐ เค†เคธเคพเคจเฅ€ เคธเฅ‡ `pipeline` เคšเคฒเคพ เคธเค•เคคเฅ‡ เคนเฅˆเค‚! เคชเคนเคฒเฅ‡ เคธเฅเคจเคฟเคถเฅเคšเคฟเคค เค•เคฐเฅ‡เค‚ เค•เคฟ เค†เคชเคจเฅ‡ `accelerate` เค•เฅ‹ `pip install accelerate` เค•เฅ‡ เคธเคพเคฅ เค‡เค‚เคธเฅเคŸเฅ‰เคฒ เค•เคฟเคฏเคพ เคนเฅˆเฅค เคธเคฌเคธเฅ‡ เคชเคนเคฒเฅ‡ `device_map='auto'` เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐเค•เฅ‡ เค…เคชเคจเคพ เคฎเฅ‰เคกเคฒ เคฒเฅ‹เคก เค•เคฐเฅ‡เค‚! 
เคนเคฎ เค…เคชเคจเฅ‡ เค‰เคฆเคพเคนเคฐเคฃ เค•เฅ‡ เคฒเคฟเค `facebook/opt-1.3b` เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐเฅ‡เค‚เค—เฅ‡เฅค ```py # pip install accelerate import torch from transformers import pipeline pipe = pipeline(model="facebook/opt-1.3b", torch_dtype=torch.bfloat16, device_map="auto") output = pipe("This is a cool example!", do_sample=True, top_p=0.95) ``` เคฏเคฆเคฟ เค†เคช `bitsandbytes` เค‡เค‚เคธเฅเคŸเฅ‰เคฒ เค•เคฐเคคเฅ‡ เคนเฅˆเค‚ เค”เคฐ `load_in_8bit=True` เคคเคฐเฅเค• เคœเฅ‹เคกเคผเคคเฅ‡ เคนเฅˆเค‚ เคคเฅ‹ เค†เคช 8-เคฌเคฟเคŸ เคฒเฅ‹เคกเฅ‡เคก เคฎเฅ‰เคกเคฒ เคญเฅ€ เคชเคพเคธ เค•เคฐ เคธเค•เคคเฅ‡ เคนเฅˆเค‚ ```py # pip install accelerate bitsandbytes import torch from transformers import pipeline pipe = pipeline(model="facebook/opt-1.3b", device_map="auto", model_kwargs={"load_in_8bit": True}) output = pipe("This is a cool example!", do_sample=True, top_p=0.95) ``` เคงเฅเคฏเคพเคจ เคฆเฅ‡เค‚ เค•เคฟ เค†เคช เคšเฅ‡เค•เคชเฅ‰เค‡เค‚เคŸ เค•เฅ‹ เค•เคฟเคธเฅ€ เคญเฅ€ เคนเค—เคฟเค‚เค— เคซเฅ‡เคธ เคฎเฅ‰เคกเคฒ เคธเฅ‡ เคฌเคฆเคฒ เคธเค•เคคเฅ‡ เคนเฅˆเค‚ เคœเฅ‹ BLOOM เคœเฅˆเคธเฅ‡ เคฌเคกเคผเฅ‡ เคฎเฅ‰เคกเคฒ เคฒเฅ‹เคกเคฟเค‚เค— เค•เคพ เคธเคฎเคฐเฅเคฅเคจ เค•เคฐเคคเคพ เคนเฅˆ!
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/hi/_toctree.yml
- sections:
  - local: pipeline_tutorial
    title: पाइपलाइनों के साथ अनुमान चलाएं
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/es/fast_tokenizers.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Usa los tokenizadores de ๐Ÿค— Tokenizers [`PreTrainedTokenizerFast`] depende de la biblioteca [๐Ÿค— Tokenizers](https://huggingface.co/docs/tokenizers). Los tokenizadores obtenidos desde la biblioteca ๐Ÿค— Tokenizers pueden ser cargados de forma muy sencilla en los ๐Ÿค— Transformers. Antes de entrar en detalles, comencemos creando un tokenizador dummy en unas cuantas lรญneas: ```python >>> from tokenizers import Tokenizer >>> from tokenizers.models import BPE >>> from tokenizers.trainers import BpeTrainer >>> from tokenizers.pre_tokenizers import Whitespace >>> tokenizer = Tokenizer(BPE(unk_token="[UNK]")) >>> trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"]) >>> tokenizer.pre_tokenizer = Whitespace() >>> files = [...] >>> tokenizer.train(files, trainer) ``` Ahora tenemos un tokenizador entrenado en los archivos que definimos. Lo podemos seguir utilizando en ese entorno de ejecuciรณn (runtime en inglรฉs), o puedes guardarlo en un archivo JSON para reutilizarlo en un futuro. ## Cargando directamente desde el objeto tokenizador Veamos cรณmo utilizar este objeto tokenizador en la biblioteca ๐Ÿค— Transformers. La clase [`PreTrainedTokenizerFast`] permite una instanciaciรณn fรกcil, al aceptar el objeto *tokenizer* instanciado como argumento: ```python >>> from transformers import PreTrainedTokenizerFast >>> fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=tokenizer) ``` Este objeto ya puede ser utilizado con todos los mรฉtodos compartidos por los tokenizadores de ๐Ÿค— Transformers! Visita la [pรกgina sobre tokenizadores ](main_classes/tokenizer) para mรกs informaciรณn. ## Cargando desde un archivo JSON Para cargar un tokenizador desde un archivo JSON, comencemos por guardar nuestro tokenizador: ```python >>> tokenizer.save("tokenizer.json") ``` La localizaciรณn (path en inglรฉs) donde este archivo es guardado puede ser incluida en el mรฉtodo de inicializaciรณn de [`PreTrainedTokenizerFast`] utilizando el parรกmetro `tokenizer_file`: ```python >>> from transformers import PreTrainedTokenizerFast >>> fast_tokenizer = PreTrainedTokenizerFast(tokenizer_file="tokenizer.json") ``` Este objeto ya puede ser utilizado con todos los mรฉtodos compartidos por los tokenizadores de ๐Ÿค— Transformers! Visita la [pรกgina sobre tokenizadores ](main_classes/tokenizer) para mรกs informaciรณn.
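Por ejemplo, un esbozo mínimo de uso (asumiendo el `fast_tokenizer` creado arriba; los tokens exactos dependerán de los archivos con los que entrenaste el tokenizador):

```python
>>> encoding = fast_tokenizer("¡Hola, mundo!")
>>> encoding.tokens()  # tokens producidos por el modelo BPE entrenado
>>> encoding["input_ids"]  # ids que puedes pasar directamente a un modelo
```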
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/es/run_scripts.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Entrenamiento con scripts Junto con los [notebooks](./notebooks) de ๐Ÿค— Transformers, tambiรฉn hay scripts con ejemplos que muestran cรณmo entrenar un modelo para una tarea en [PyTorch](https://github.com/huggingface/transformers/tree/main/examples/pytorch), [TensorFlow](https://github.com/huggingface/transformers/tree/main/examples/tensorflow), o [JAX/Flax](https://github.com/huggingface/transformers/tree/main/examples/flax). Tambiรฉn encontrarรกs scripts que hemos usado en nuestros [proyectos de investigaciรณn](https://github.com/huggingface/transformers/tree/main/examples/research_projects) y [ejemplos pasados](https://github.com/huggingface/transformers/tree/main/examples/legacy) que en su mayorรญa son aportados por la comunidad. Estos scripts no se mantienen activamente y requieren una versiรณn especรญfica de ๐Ÿค— Transformers que probablemente sea incompatible con la รบltima versiรณn de la biblioteca. No se espera que los scripts de ejemplo funcionen de inmediato en todos los problemas, y es posible que debas adaptar el script al problema que estรกs tratando de resolver. Para ayudarte con esto, la mayorรญa de los scripts exponen completamente cรณmo se preprocesan los datos, lo que te permite editarlos segรบn sea necesario para tu caso de uso. Para cualquier caracterรญstica que te gustarรญa implementar en un script de ejemplo, por favor discรบtelo en el [foro](https://discuss.huggingface.co/) o con un [issue](https://github.com/huggingface/transformers/issues) antes de enviar un Pull Request. Si bien agradecemos las correcciones de errores, es poco probable que fusionemos un Pull Request que agregue mรกs funcionalidad a costa de la legibilidad. Esta guรญa te mostrarรก cรณmo ejecutar un ejemplo de un script de entrenamiento para resumir texto en [PyTorch](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization) y [TensorFlow](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/summarization). Se espera que todos los ejemplos funcionen con ambos frameworks a menos que se especifique lo contrario. ## Configuraciรณn Para ejecutar con รฉxito la รบltima versiรณn de los scripts de ejemplo debes **instalar ๐Ÿค— Transformers desde su fuente** en un nuevo entorno virtual: ```bash git clone https://github.com/huggingface/transformers cd transformers pip install . 
``` Para versiones anteriores de los scripts de ejemplo, haz clic en alguno de los siguientes links: <details> <summary>Ejemplos de versiones anteriores de ๐Ÿค— Transformers</summary> <ul> <li><a href="https://github.com/huggingface/transformers/tree/v4.5.1/examples">v4.5.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.4.2/examples">v4.4.2</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.3.3/examples">v4.3.3</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.2.2/examples">v4.2.2</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.1.1/examples">v4.1.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.0.1/examples">v4.0.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.5.1/examples">v3.5.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.4.0/examples">v3.4.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.3.1/examples">v3.3.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.2.0/examples">v3.2.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.1.0/examples">v3.1.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.0.2/examples">v3.0.2</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.11.0/examples">v2.11.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.10.0/examples">v2.10.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.9.1/examples">v2.9.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.8.0/examples">v2.8.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.7.0/examples">v2.7.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.6.0/examples">v2.6.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.5.1/examples">v2.5.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.4.0/examples">v2.4.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.3.0/examples">v2.3.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.2.0/examples">v2.2.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.1.0/examples">v2.1.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.0.0/examples">v2.0.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v1.2.0/examples">v1.2.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v1.1.0/examples">v1.1.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v1.0.0/examples">v1.0.0</a></li> </ul> </details> Luego cambia tu clon actual de ๐Ÿค— Transformers a una versiรณn especรญfica, por ejemplo v3.5.1: ```bash git checkout tags/v3.5.1 ``` Una vez que hayas configurado la versiรณn correcta de la biblioteca, ve a la carpeta de ejemplo de tu elecciรณn e instala los requisitos especรญficos del ejemplo: ```bash pip install -r requirements.txt ``` ## Ejecutar un script <frameworkcontent> <pt> El script de ejemplo descarga y preprocesa un conjunto de datos de la biblioteca ๐Ÿค— [Datasets](https://huggingface.co/docs/datasets/). Luego, el script ajusta un conjunto de datos con [Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) en una arquitectura que soporta la tarea de resumen. 
El siguiente ejemplo muestra cรณmo ajustar un [T5-small](https://huggingface.co/google-t5/t5-small) en el conjunto de datos [CNN/DailyMail](https://huggingface.co/datasets/cnn_dailymail). El modelo T5 requiere un argumento adicional `source_prefix` debido a cรณmo fue entrenado. Este aviso le permite a T5 saber que se trata de una tarea de resumir. ```bash python examples/pytorch/summarization/run_summarization.py \ --model_name_or_path google-t5/t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ``` </pt> <tf> El script de ejemplo descarga y preprocesa un conjunto de datos de la biblioteca ๐Ÿค— [Datasets](https://huggingface.co/docs/datasets/). Luego, el script ajusta un conjunto de datos utilizando Keras en una arquitectura que soporta la tarea de resumir. El siguiente ejemplo muestra cรณmo ajustar un [T5-small](https://huggingface.co/google-t5/t5-small) en el conjunto de datos [CNN/DailyMail](https://huggingface.co/datasets/cnn_dailymail). El modelo T5 requiere un argumento adicional `source_prefix` debido a cรณmo fue entrenado. Este aviso le permite a T5 saber que se trata de una tarea de resumir. ```bash python examples/tensorflow/summarization/run_summarization.py \ --model_name_or_path google-t5/t5-small \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size 8 \ --per_device_eval_batch_size 16 \ --num_train_epochs 3 \ --do_train \ --do_eval ``` </tf> </frameworkcontent> ## Entrenamiento distribuido y de precisiรณn mixta [Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) admite un entrenamiento distribuido y de precisiรณn mixta, lo que significa que tambiรฉn puedes usarlo en un script. Para habilitar ambas caracterรญsticas: - Agrega el argumento `fp16` para habilitar la precisiรณn mixta. - Establece la cantidad de GPU que se usarรก con el argumento `nproc_per_node`. ```bash torchrun \ --nproc_per_node 8 pytorch/summarization/run_summarization.py \ --fp16 \ --model_name_or_path google-t5/t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ``` Los scripts de TensorFlow utilizan [`MirroredStrategy`](https://www.tensorflow.org/guide/distributed_training#mirroredstrategy) para el entrenamiento distribuido, y no es necesario agregar argumentos adicionales al script de entrenamiento. El script de TensorFlow utilizarรก mรบltiples GPUs de forma predeterminada si estรกn disponibles. ## Ejecutar un script en una TPU <frameworkcontent> <pt> Las Unidades de Procesamiento de Tensor (TPUs) estรกn diseรฑadas especรญficamente para acelerar el rendimiento. PyTorch admite TPU con el compilador de aprendizaje profundo [XLA](https://www.tensorflow.org/xla) (consulta [aquรญ](https://github.com/pytorch/xla/blob/master/README.md) para obtener mรกs detalles). Para usar una TPU, inicia el script `xla_spawn.py` y usa el argumento `num_cores` para establecer la cantidad de nรบcleos de TPU que deseas usar. 
```bash python xla_spawn.py --num_cores 8 \ summarization/run_summarization.py \ --model_name_or_path google-t5/t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ``` </pt> <tf> Las Unidades de Procesamiento de Tensor (TPUs) estรกn diseรฑadas especรญficamente para acelerar el rendimiento. TensorFlow utiliza [`TPUStrategy`](https://www.tensorflow.org/guide/distributed_training#tpustrategy) para entrenar en TPUs. Para usar una TPU, pasa el nombre del recurso de la TPU al argumento `tpu` ```bash python run_summarization.py \ --tpu name_of_tpu_resource \ --model_name_or_path google-t5/t5-small \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size 8 \ --per_device_eval_batch_size 16 \ --num_train_epochs 3 \ --do_train \ --do_eval ``` </tf> </frameworkcontent> ## Ejecutar un script con ๐Ÿค— Accelerate ๐Ÿค— [Accelerate](https://huggingface.co/docs/accelerate) es una biblioteca exclusiva de PyTorch que ofrece un mรฉtodo unificado para entrenar un modelo en varios tipos de configuraciones (solo CPU, GPU mรบltiples, TPU) mientras mantiene una visibilidad completa en el ciclo de entrenamiento de PyTorch. Asegรบrate de tener ๐Ÿค— Accelerate instalado si aรบn no lo tienes: > Nota: Como Accelerate se estรก desarrollando rรกpidamente, debes instalar la versiรณn git de Accelerate para ejecutar los scripts ```bash pip install git+https://github.com/huggingface/accelerate ``` En lugar del script `run_summarization.py`, debes usar el script `run_summarization_no_trainer.py`. Los scripts compatibles con ๐Ÿค— Accelerate tendrรกn un archivo `task_no_trainer.py` en la carpeta. Comienza ejecutando el siguiente comando para crear y guardar un archivo de configuraciรณn: ```bash accelerate config ``` Prueba tu configuraciรณn para asegurarte que estรก configurada correctamente: ```bash accelerate test ``` Todo listo para iniciar el entrenamiento: ```bash accelerate launch run_summarization_no_trainer.py \ --model_name_or_path google-t5/t5-small \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir ~/tmp/tst-summarization ``` ## Usar un conjunto de datos personalizado El script de la tarea resumir admite conjuntos de datos personalizados siempre que sean un archivo CSV o JSON Line. Cuando uses tu propio conjunto de datos, necesitas especificar varios argumentos adicionales: - `train_file` y `validation_file` especifican la ruta a tus archivos de entrenamiento y validaciรณn. - `text_column` es el texto de entrada para resumir. - `summary_column` es el texto de destino para la salida. 
Un script para resumir que utiliza un conjunto de datos personalizado se vera asรญ: ```bash python examples/pytorch/summarization/run_summarization.py \ --model_name_or_path google-t5/t5-small \ --do_train \ --do_eval \ --train_file path_to_csv_or_jsonlines_file \ --validation_file path_to_csv_or_jsonlines_file \ --text_column text_column_name \ --summary_column summary_column_name \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --overwrite_output_dir \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --predict_with_generate ``` ## Prueba un script A veces, es una buena idea ejecutar tu secuencia de comandos en una cantidad menor de ejemplos para asegurarte de que todo funciona como se espera antes de comprometerte con un conjunto de datos completo, lo que puede demorar horas en completarse. Utiliza los siguientes argumentos para truncar el conjunto de datos a un nรบmero mรกximo de muestras: - `max_train_samples` - `max_eval_samples` - `max_predict_samples` ```bash python examples/pytorch/summarization/run_summarization.py \ --model_name_or_path google-t5/t5-small \ --max_train_samples 50 \ --max_eval_samples 50 \ --max_predict_samples 50 \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ``` No todos los scripts de ejemplo admiten el argumento `max_predict_samples`. Puede que desconozcas si la secuencia de comandos admite este argumento, agrega `-h` para verificar: ```bash examples/pytorch/summarization/run_summarization.py -h ``` ## Reanudar el entrenamiento desde el punto de control Otra opciรณn รบtil para habilitar es reanudar el entrenamiento desde un punto de control anterior. Esto asegurarรก que puedas continuar donde lo dejaste sin comenzar de nuevo si tu entrenamiento se interrumpe. Hay dos mรฉtodos para reanudar el entrenamiento desde un punto de control. El primer mรฉtodo utiliza el argumento `output_dir previous_output_dir` para reanudar el entrenamiento desde el รบltimo punto de control almacenado en `output_dir`. En este caso, debes eliminar `overwrite_output_dir`: ```bash python examples/pytorch/summarization/run_summarization.py --model_name_or_path google-t5/t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --output_dir previous_output_dir \ --predict_with_generate ``` El segundo mรฉtodo utiliza el argumento `resume_from_checkpoint path_to_specific_checkpoint` para reanudar el entrenamiento desde una carpeta de punto de control especรญfica. ```bash python examples/pytorch/summarization/run_summarization.py --model_name_or_path google-t5/t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --resume_from_checkpoint path_to_specific_checkpoint \ --predict_with_generate ``` ## Comparte tu modelo Todos los scripts pueden cargar tu modelo final en el [Model Hub](https://huggingface.co/models). 
Asegรบrate de haber iniciado sesiรณn en Hugging Face antes de comenzar: ```bash huggingface-cli login ``` Luego agrega el argumento `push_to_hub` al script. Este argumento crearรก un repositorio con tu nombre de usuario Hugging Face y el nombre de la carpeta especificado en `output_dir`. Para darle a tu repositorio un nombre especรญfico, usa el argumento `push_to_hub_model_id` para aรฑadirlo. El repositorio se incluirรก automรกticamente en tu namespace. El siguiente ejemplo muestra cรณmo cargar un modelo con un nombre de repositorio especรญfico: ```bash python examples/pytorch/summarization/run_summarization.py --model_name_or_path google-t5/t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --push_to_hub \ --push_to_hub_model_id finetuned-t5-cnn_dailymail \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ```
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/es/create_a_model.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Crea una arquitectura personalizada Una [`AutoClass`](model_doc/auto) infiere, automรกticamente, la arquitectura del modelo y descarga la configuraciรณn y los pesos del modelo preentrenado. Normalmente, recomendamos usar una `AutoClass` para producir un cรณdigo agnรณstico a puntos de guardado o checkpoints. Sin embargo, los usuarios que quieran mรกs control sobre los parรกmetros especรญficos de los modelos pueden crear su propio modelo ๐Ÿค— Transformers personalizado a partir de varias clases base. Esto puede ser particularmente รบtil para alguien que estรฉ interesado en estudiar, entrenar o experimentar con modelos ๐Ÿค— Transformers. En esta guรญa vamos a profundizar en la creaciรณn de modelos personalizados sin usar `AutoClass`. Aprenderemos a: - Cargar y personalizar una configuraciรณn para un modelo. - Crear una arquitectura para un modelo. - Crear tokenizadores rรกpidos y lentos para textos. - Crear un extractor de propiedades para tareas de audio o imรกgenes. - Crear un procesador para tareas multimodales. ## Configuraciรณn Una [configuraciรณn](main_classes/configuration) es un conjunto de atributos especรญficos de un modelo. Cada configuraciรณn de modelo tiene atributos diferentes. Por ejemplo, todos los modelos de PLN tienen los atributos `hidden_size`, `num_attention_heads`, `num_hidden_layers` y `vocab_size` en comรบn. Estos atributos especifican el nรบmero de cabezas de atenciรณn o de capas ocultas con las que se construyen los modelos. Puedes echarle un vistazo a [DistilBERT](model_doc/distilbert) y sus atributos accediendo a [`DistilBertConfig`]: ```py >>> from transformers import DistilBertConfig >>> config = DistilBertConfig() >>> print(config) DistilBertConfig { "activation": "gelu", "attention_dropout": 0.1, "dim": 768, "dropout": 0.1, "hidden_dim": 3072, "initializer_range": 0.02, "max_position_embeddings": 512, "model_type": "distilbert", "n_heads": 12, "n_layers": 6, "pad_token_id": 0, "qa_dropout": 0.1, "seq_classif_dropout": 0.2, "sinusoidal_pos_embds": false, "transformers_version": "4.16.2", "vocab_size": 30522 } ``` [`DistilBertConfig`] muestra todos los atributos por defecto que se han usado para construir un modelo [`DistilBertModel`] base. Todos ellos son personalizables, lo que deja espacio para poder experimentar. Por ejemplo, puedes personalizar un modelo predeterminado para: - Probar una funciรณn de activaciรณn diferente, usando el parรกmetro `activation`. - Usar un valor de abandono (tambiรฉn conocido como _dropout_) mรกs alto para las probabilidades de las capas de atenciรณn, usando el parรกmetro `attention_dropout`. 
```py >>> my_config = DistilBertConfig(activation="relu", attention_dropout=0.4) >>> print(my_config) DistilBertConfig { "activation": "relu", "attention_dropout": 0.4, "dim": 768, "dropout": 0.1, "hidden_dim": 3072, "initializer_range": 0.02, "max_position_embeddings": 512, "model_type": "distilbert", "n_heads": 12, "n_layers": 6, "pad_token_id": 0, "qa_dropout": 0.1, "seq_classif_dropout": 0.2, "sinusoidal_pos_embds": false, "transformers_version": "4.16.2", "vocab_size": 30522 } ``` Los atributos de los modelos preentrenados pueden ser modificados con la funciรณn [`~PretrainedConfig.from_pretrained`]: ```py >>> my_config = DistilBertConfig.from_pretrained("distilbert/distilbert-base-uncased", activation="relu", attention_dropout=0.4) ``` Cuando estรฉs satisfecho con la configuraciรณn de tu modelo, puedes guardarlo con la funciรณn [`~PretrainedConfig.save_pretrained`]. Tu configuraciรณn se guardarรก en un archivo JSON dentro del directorio que le especifiques como parรกmetro. ```py >>> my_config.save_pretrained(save_directory="./your_model_save_path") ``` Para volver a usar el archivo de configuraciรณn, puedes cargarlo usando [`~PretrainedConfig.from_pretrained`]: ```py >>> my_config = DistilBertConfig.from_pretrained("./your_model_save_path/my_config.json") ``` <Tip> Tambiรฉn puedes guardar los archivos de configuraciรณn como un diccionario; o incluso guardar solo la diferencia entre tu archivo personalizado y la configuraciรณn por defecto. Consulta la [documentaciรณn sobre configuraciรณn](main_classes/configuration) para ver mรกs detalles. </Tip> ## Modelo El siguiente paso serรก crear un [modelo](main_classes/models). El modelo, al que a veces tambiรฉn nos referimos como arquitectura, es el encargado de definir cada capa y quรฉ operaciones se realizan. Los atributos como `num_hidden_layers` de la configuraciรณn se usan para definir la arquitectura. Todos los modelos comparten una clase base, [`PreTrainedModel`], y algunos mรฉtodos comunes que se pueden usar para redimensionar los _embeddings_ o para recortar cabezas de auto-atenciรณn (tambiรฉn llamadas _self-attention heads_). Ademรกs, todos los modelos son subclases de [`torch.nn.Module`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html), [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) o [`flax.linen.Module`](https://flax.readthedocs.io/en/latest/api_reference/flax.linen/module.html), lo que significa que son compatibles con su respectivo framework. <frameworkcontent> <pt> Carga los atributos de tu configuraciรณn personalizada en el modelo de la siguiente forma: ```py >>> from transformers import DistilBertModel >>> my_config = DistilBertConfig.from_pretrained("./your_model_save_path/my_config.json") >>> model = DistilBertModel(my_config) ``` Esto crea un modelo con valores aleatorios, en lugar de crearlo con los pesos del preentrenamiento, por lo que no serรกs capaz de usar este modelo para nada รบtil hasta que no lo entrenes. El entrenamiento es un proceso costoso, tanto en cuestiรณn de recursos como de tiempo, por lo que generalmente es mejor usar un modelo preentrenado para obtener mejores resultados mรกs rรกpido, consumiendo una fracciรณn de los recursos que un entrenamiento completo hubiera requerido. 
Puedes crear un modelo preentrenado con [`~PreTrainedModel.from_pretrained`]:

```py
>>> model = DistilBertModel.from_pretrained("distilbert/distilbert-base-uncased")
```

Cuando cargues tus pesos del preentrenamiento, la configuraciรณn del modelo por defecto se carga automรกticamente si el modelo es proporcionado por ๐Ÿค— Transformers. Sin embargo, siempre puedes reemplazar (todos o algunos de) los atributos del modelo por defecto por los tuyos:

```py
>>> model = DistilBertModel.from_pretrained("distilbert/distilbert-base-uncased", config=my_config)
```

</pt>
<tf>
Carga los atributos de tu configuraciรณn personalizada en el modelo de la siguiente forma:

```py
>>> from transformers import TFDistilBertModel

>>> my_config = DistilBertConfig.from_pretrained("./your_model_save_path/my_config.json")
>>> tf_model = TFDistilBertModel(my_config)
```

Esto crea un modelo con valores aleatorios, en lugar de crearlo con los pesos del preentrenamiento, por lo que no serรกs capaz de usar este modelo para nada รบtil hasta que no lo entrenes. El entrenamiento es un proceso costoso, tanto en cuestiรณn de recursos como de tiempo, por lo que generalmente es mejor usar un modelo preentrenado para obtener mejores resultados mรกs rรกpido, consumiendo solo una fracciรณn de los recursos que un entrenamiento completo hubiera requerido.

Puedes crear un modelo preentrenado con [`~TFPreTrainedModel.from_pretrained`]:

```py
>>> tf_model = TFDistilBertModel.from_pretrained("distilbert/distilbert-base-uncased")
```

Cuando cargues tus pesos del preentrenamiento, la configuraciรณn del modelo por defecto se carga automรกticamente si el modelo es proporcionado por ๐Ÿค— Transformers. Sin embargo, siempre puedes reemplazar (todos o algunos de) los atributos del modelo por defecto por los tuyos:

```py
>>> tf_model = TFDistilBertModel.from_pretrained("distilbert/distilbert-base-uncased", config=my_config)
```

</tf>
</frameworkcontent>

### Cabezas de modelo

En este punto del tutorial, tenemos un modelo DistilBERT base que devuelve los *hidden states* o estados ocultos. Los *hidden states* se pasan como parรกmetros de entrada a la cabeza del modelo para producir la salida. ๐Ÿค— Transformers ofrece una cabeza de modelo diferente para cada tarea, siempre y cuando el modelo sea compatible con la tarea (por ejemplo, no puedes usar DistilBERT para una tarea secuencia a secuencia como la traducciรณn).

<frameworkcontent>
<pt>
Por ejemplo, [`DistilBertForSequenceClassification`] es un modelo DistilBERT base con una cabeza de clasificaciรณn de secuencias. La cabeza de clasificaciรณn de secuencias es una capa lineal colocada sobre las salidas agrupadas (*pooled outputs*).

```py
>>> from transformers import DistilBertForSequenceClassification

>>> model = DistilBertForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased")
```

Puedes reutilizar este punto de guardado o *checkpoint* para otra tarea fรกcilmente cambiando a una cabeza de un modelo diferente. Para una tarea de respuesta a preguntas, puedes usar la cabeza del modelo [`DistilBertForQuestionAnswering`]. La cabeza de respuesta a preguntas es similar a la de clasificaciรณn de secuencias, excepto que es una capa lineal colocada sobre la salida de los *hidden states*.

```py
>>> from transformers import DistilBertForQuestionAnswering

>>> model = DistilBertForQuestionAnswering.from_pretrained("distilbert/distilbert-base-uncased")
```

</pt>
<tf>
Por ejemplo, [`TFDistilBertForSequenceClassification`] es un modelo DistilBERT base con una cabeza de clasificaciรณn de secuencias. La cabeza de clasificaciรณn de secuencias es una capa lineal colocada sobre las salidas agrupadas (*pooled outputs*).

```py
>>> from transformers import TFDistilBertForSequenceClassification

>>> tf_model = TFDistilBertForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased")
```

Puedes reutilizar este punto de guardado o *checkpoint* para otra tarea fรกcilmente cambiando a una cabeza de un modelo diferente. Para una tarea de respuesta a preguntas, puedes usar la cabeza del modelo [`TFDistilBertForQuestionAnswering`]. La cabeza de respuesta a preguntas es similar a la de clasificaciรณn de secuencias, excepto que es una capa lineal colocada sobre la salida de los *hidden states*.

```py
>>> from transformers import TFDistilBertForQuestionAnswering

>>> tf_model = TFDistilBertForQuestionAnswering.from_pretrained("distilbert/distilbert-base-uncased")
```

</tf>
</frameworkcontent>

## Tokenizer

La รบltima clase base que debes conocer antes de usar un modelo con datos textuales es la clase [tokenizer](main_classes/tokenizer), que convierte el texto bruto en tensores. Hay dos tipos de *tokenizers* que puedes usar con ๐Ÿค— Transformers:

- [`PreTrainedTokenizer`]: una implementaciรณn de un *tokenizer* hecha en Python.
- [`PreTrainedTokenizerFast`]: un *tokenizer* de nuestra librerรญa [๐Ÿค— Tokenizer](https://huggingface.co/docs/tokenizers/python/latest/), basada en Rust. Este tipo de *tokenizer* es bastante mรกs rรกpido, especialmente durante la tokenizaciรณn por lotes, gracias a estar implementado en Rust. Esta rรกpida tokenizaciรณn tambiรฉn ofrece mรฉtodos adicionales como el *offset mapping*, que relaciona los tokens con sus palabras o caracteres originales.

Ambos *tokenizers* son compatibles con los mรฉtodos comunes, como los de codificaciรณn y decodificaciรณn, los mรฉtodos para aรฑadir tokens y aquellos que manejan tokens especiales.

<Tip warning={true}>

No todos los modelos son compatibles con un *tokenizer* rรกpido. ร‰chale un vistazo a esta [tabla](index#supported-frameworks) para comprobar si un modelo especรญfico es compatible con un *tokenizer* rรกpido.

</Tip>

Si has entrenado tu propio *tokenizer*, puedes crear uno desde tu archivo de โ€œvocabularioโ€:

```py
>>> from transformers import DistilBertTokenizer

>>> my_tokenizer = DistilBertTokenizer(vocab_file="my_vocab_file.txt", do_lower_case=False, padding_side="left")
```

Es importante recordar que los vocabularios que provienen de un *tokenizer* personalizado serรกn diferentes a los vocabularios generados por el *tokenizer* de un modelo preentrenado. Debes usar el vocabulario de un *tokenizer* preentrenado si vas a usar un modelo preentrenado, de lo contrario las entradas no tendrรกn sentido. Crea un *tokenizer* con el vocabulario de un modelo preentrenado usando la clase [`DistilBertTokenizer`]:

```py
>>> from transformers import DistilBertTokenizer

>>> slow_tokenizer = DistilBertTokenizer.from_pretrained("distilbert/distilbert-base-uncased")
```

Crea un *tokenizer* rรกpido con la clase [`DistilBertTokenizerFast`]:

```py
>>> from transformers import DistilBertTokenizerFast

>>> fast_tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert/distilbert-base-uncased")
```

<Tip>

Por defecto, el [`AutoTokenizer`] intentarรก cargar un *tokenizer* rรกpido. Puedes desactivar este comportamiento especificando `use_fast=False` en `from_pretrained`.

</Tip>

## Extractor de Caracterรญsticas

Un extractor de caracterรญsticas procesa entradas de audio e imagen. Hereda de la clase base [`~feature_extraction_utils.FeatureExtractionMixin`] y tambiรฉn puede heredar de la clase [`ImageFeatureExtractionMixin`] para el procesamiento de caracterรญsticas de las imรกgenes o de la clase [`SequenceFeatureExtractor`] para el procesamiento de entradas de audio.

Dependiendo de si trabajas en una tarea de audio o de visiรณn, puedes crear un extractor de caracterรญsticas asociado al modelo que estรฉs usando. Por ejemplo, podrรญas crear un [`ViTFeatureExtractor`] por defecto si estรกs usando [ViT](model_doc/vit) para clasificaciรณn de imรกgenes:

```py
>>> from transformers import ViTFeatureExtractor

>>> vit_extractor = ViTFeatureExtractor()
>>> print(vit_extractor)
ViTFeatureExtractor {
  "do_normalize": true,
  "do_resize": true,
  "feature_extractor_type": "ViTFeatureExtractor",
  "image_mean": [
    0.5,
    0.5,
    0.5
  ],
  "image_std": [
    0.5,
    0.5,
    0.5
  ],
  "resample": 2,
  "size": 224
}
```

<Tip>

Si no estรกs buscando ninguna personalizaciรณn en especรญfico, usa el mรฉtodo `from_pretrained` para cargar los parรกmetros del extractor de caracterรญsticas por defecto del modelo.

</Tip>

Puedes modificar cualquier parรกmetro de [`ViTFeatureExtractor`] para crear tu extractor de caracterรญsticas personalizado:

```py
>>> from transformers import ViTFeatureExtractor

>>> my_vit_extractor = ViTFeatureExtractor(resample="PIL.Image.BOX", do_normalize=False, image_mean=[0.3, 0.3, 0.3])
>>> print(my_vit_extractor)
ViTFeatureExtractor {
  "do_normalize": false,
  "do_resize": true,
  "feature_extractor_type": "ViTFeatureExtractor",
  "image_mean": [
    0.3,
    0.3,
    0.3
  ],
  "image_std": [
    0.5,
    0.5,
    0.5
  ],
  "resample": "PIL.Image.BOX",
  "size": 224
}
```

Para las entradas de audio, puedes crear un [`Wav2Vec2FeatureExtractor`] y personalizar los parรกmetros de una forma similar:

```py
>>> from transformers import Wav2Vec2FeatureExtractor

>>> w2v2_extractor = Wav2Vec2FeatureExtractor()
>>> print(w2v2_extractor)
Wav2Vec2FeatureExtractor {
  "do_normalize": true,
  "feature_extractor_type": "Wav2Vec2FeatureExtractor",
  "feature_size": 1,
  "padding_side": "right",
  "padding_value": 0.0,
  "return_attention_mask": false,
  "sampling_rate": 16000
}
```

## Procesador

Para modelos que son compatibles con tareas multimodales, ๐Ÿค— Transformers ofrece una clase *procesador* que agrupa un extractor de caracterรญsticas y un *tokenizer* en el mismo objeto. Por ejemplo, probemos a usar el procesador [`Wav2Vec2Processor`] para una tarea de reconocimiento de voz (ASR). Un ASR transcribe el audio a texto, por lo que necesitaremos un extractor de caracterรญsticas y un *tokenizer*.

Crea un extractor de caracterรญsticas para manejar la entrada de audio:

```py
>>> from transformers import Wav2Vec2FeatureExtractor

>>> feature_extractor = Wav2Vec2FeatureExtractor(padding_value=1.0, do_normalize=True)
```

Crea un *tokenizer* para manejar la entrada de texto:

```py
>>> from transformers import Wav2Vec2CTCTokenizer

>>> tokenizer = Wav2Vec2CTCTokenizer(vocab_file="my_vocab_file.txt")
```

Puedes combinar el extractor de caracterรญsticas y el *tokenizer* en el [`Wav2Vec2Processor`]:

```py
>>> from transformers import Wav2Vec2Processor

>>> processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)
```

Con dos clases base (la configuraciรณn y el modelo) y una clase de preprocesamiento adicional (*tokenizer*, extractor de caracterรญsticas o procesador), puedes crear cualquiera de los modelos compatibles con ๐Ÿค— Transformers. Cada una de estas clases es configurable, permitiรฉndote usar sus atributos especรญficos.
Puedes crear un modelo para entrenarlo de una forma fรกcil, o modificar un modelo preentrenado disponible para especializarlo.
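Un detalle prรกctico para terminar: al igual que la configuraciรณn y el modelo, las clases de preprocesamiento se pueden guardar y volver a cargar con `save_pretrained` y `from_pretrained`. El siguiente esquema lo ilustra, asumiendo que el `processor` del ejemplo anterior ya existe y usando una ruta local hipotรฉtica:

```py
>>> from transformers import Wav2Vec2Processor

>>> # Guarda en disco el extractor de caracterรญsticas y el tokenizer que componen el procesador
>>> processor.save_pretrained("./mi_procesador")

>>> # Mรกs adelante puedes recuperarlos en un solo paso desde ese mismo directorio
>>> processor_recargado = Wav2Vec2Processor.from_pretrained("./mi_procesador")
```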
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/es/pipeline_webserver.md
<!--โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Uso de un flujo de trabajo para un servidor web <Tip> Crear un motor de inferencia es un tema complejo, y la "mejor" soluciรณn probablemente dependerรก de tu caso de uso. ยฟEstรกs en CPU o en GPU? ยฟQuieres la latencia mรกs baja, el rendimiento mรกs alto, soporte para muchos modelos o simplemente optimizar altamente un modelo especรญfico? Hay muchas formas de abordar este tema, asรญ que lo que vamos a presentar es un buen valor predeterminado para comenzar, que no necesariamente serรก la soluciรณn mรกs รณptima para ti. </Tip> Lo fundamental para entender es que podemos usar un iterador, tal como [en un conjunto de datos](https://huggingface.co/docs/transformers/pipeline_tutorial#using-pipelines-on-a-dataset), ya que un servidor web es bรกsicamente un sistema que espera solicitudes y las trata a medida que llegan. <!-- To do: * Check the content of es/pipeline_tutorial.md * And update the link [en un conjunto de datos] -> (pipeline_tutorial#pipelines-en-un-conjunto-de-datos) --> Por lo general, los servidores web estรกn multiplexados (multihilo, asรญncrono, etc.) para manejar varias solicitudes simultรกneamente. Por otro lado, los flujos de trabajo (y principalmente los modelos subyacentes) no son realmente ideales para el paralelismo; consumen mucha RAM, por lo que es mejor darles todos los recursos disponibles cuando se estรกn ejecutando o es un trabajo intensivo en cรณmputo. Vamos a resolver esto haciendo que el servidor web maneje la carga ligera de recibir y enviar solicitudes, y que un รบnico hilo maneje el trabajo real. Este ejemplo va a utilizar `starlette`. El marco de trabajo no es realmente importante, pero es posible que debas ajustar o cambiar el cรณdigo si estรกs utilizando otro para lograr el mismo efecto. Crear `server.py`: ```py from starlette.applications import Starlette from starlette.responses import JSONResponse from starlette.routing import Route from transformers import pipeline import asyncio async def homepage(request): payload = await request.body() string = payload.decode("utf-8") response_q = asyncio.Queue() await request.app.model_queue.put((string, response_q)) output = await response_q.get() return JSONResponse(output) async def server_loop(q): pipe = pipeline(model="google-bert/bert-base-uncased") while True: (string, response_q) = await q.get() out = pipe(string) await response_q.put(out) app = Starlette( routes=[ Route("/", homepage, methods=["POST"]), ], ) @app.on_event("startup") async def startup_event(): q = asyncio.Queue() app.model_queue = q asyncio.create_task(server_loop(q)) ``` Ahora puedes empezar con: ```bash uvicorn server:app ``` Y puedes consultarlo con: ```bash curl -X POST -d "test [MASK]" http://localhost:8000/ #[{"score":0.7742936015129089,"token":1012,"token_str":".","sequence":"test."},...] ``` ยกY listo, ahora tienes una buena idea de cรณmo crear un servidor web! Lo realmente importante es cargar el modelo solo **una vez**, de modo que no haya copias del modelo en el servidor web. De esta manera, no se utiliza RAM innecesariamente. Luego, el mecanismo de queuing (colas) te permite hacer cosas sofisticadas como acumular algunos elementos antes de inferir para usar el agrupamiento dinรกmico: <Tip warning={true}> El ejemplo de cรณdigo a continuaciรณn estรก escrito intencionalmente como pseudocรณdigo para facilitar la lectura. 
ยกNo lo ejecutes sin verificar si tiene sentido para los recursos de tu sistema! </Tip> ```py (string, rq) = await q.get() strings = [] queues = [] while True: try: (string, rq) = await asyncio.wait_for(q.get(), timeout=0.001) # 1ms except asyncio.exceptions.TimeoutError: break strings.append(string) queues.append(rq) strings outs = pipe(strings, batch_size=len(strings)) for rq, out in zip(queues, outs): await rq.put(out) ``` Nuevamente, el cรณdigo propuesto estรก optimizado para la legibilidad, no para ser el mejor cรณdigo. En primer lugar, no hay lรญmite de tamaรฑo de lote, lo cual generalmente no es una buena idea. Luego, el tiempo de espera se restablece en cada obtenciรณn de la cola, lo que significa que podrรญas esperar mucho mรกs de 1ms antes de ejecutar la inferencia (retrasando la primera solicitud en esa cantidad). Serรญa mejor tener un รบnico plazo de 1ms. Esto siempre esperarรก 1ms incluso si la cola estรก vacรญa, lo que podrรญa no ser lo mejor ya que probablemente quieras comenzar a hacer inferencias si no hay nada en la cola. Pero tal vez tenga sentido si el agrupamiento es realmente crucial para tu caso de uso. Nuevamente, no hay una soluciรณn รบnica y mejor. ## Algunas cosas que podrรญas considerar ### Comprobaciรณn de errores Hay muchas cosas que pueden salir mal en producciรณn: falta de memoria, falta de espacio, cargar el modelo podrรญa fallar, la consulta podrรญa ser incorrecta, la consulta podrรญa ser correcta pero aรบn asรญ fallar debido a una mala configuraciรณn del modelo, y asรญ sucesivamente. Generalmente, es bueno que el servidor muestre los errores al usuario, por lo que agregar muchos bloques `try..except` para mostrar esos errores es una buena idea. Pero ten en cuenta que tambiรฉn puede ser un riesgo de seguridad revelar todos esos errores dependiendo de tu contexto de seguridad. ### Interrupciรณn de circuito Los servidores web suelen verse mejor cuando hacen interrupciones de circuitos. Significa que devuelven errores adecuados cuando estรกn sobrecargados en lugar de simplemente esperar la consulta indefinidamente. Devolver un error 503 en lugar de esperar un tiempo muy largo o un error 504 despuรฉs de mucho tiempo. Esto es relativamente fรกcil de implementar en el cรณdigo propuesto ya que hay una sola cola. Mirar el tamaรฑo de la cola es una forma bรกsica de empezar a devolver errores antes de que tu servidor web falle bajo carga. ### Bloqueo del hilo principal Actualmente, PyTorch no es consciente de la asincronรญa, y el cรกlculo bloquearรก el hilo principal mientras se ejecuta. Esto significa que serรญa mejor si PyTorch se viera obligado a ejecutarse en su propio hilo/proceso. Esto no se hizo aquรญ porque el cรณdigo es mucho mรกs complejo (principalmente porque los hilos, la asincronรญa y las colas no se llevan bien juntos). Pero en รบltima instancia, hace lo mismo. Esto serรญa importante si la inferencia de elementos individuales fuera larga (> 1s) porque en este caso, significa que cada consulta durante la inferencia tendrรญa que esperar 1s antes de recibir incluso un error. ### Procesamiento por lotes dinรกmico En general, el procesamiento por lotes no es necesariamente una mejora respecto a pasar 1 elemento a la vez (ver [procesamiento por lotes](https://huggingface.co/docs/transformers/main_classes/pipelines#pipeline-batching) para mรกs informaciรณn). Pero puede ser muy efectivo cuando se usa en el entorno correcto. En la API, no hay procesamiento por lotes dinรกmico por defecto (demasiada oportunidad para una desaceleraciรณn). 
Pero para la inferencia de BLOOM - que es un modelo muy grande - el procesamiento por lotes dinรกmico es **esencial** para proporcionar una experiencia decente para todos.
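Como complemento a la crรญtica anterior, este es un esquema (tambiรฉn pseudocรณdigo, igual que el ejemplo previo; los nombres `MAX_BATCH_SIZE` y `MAX_WAIT` son solo ilustrativos) de cรณmo podrรญa verse el bucle de agrupamiento con un lรญmite de tamaรฑo de lote y un รบnico plazo de espera, en lugar de reiniciar el tiempo de espera en cada lectura de la cola:

```py
import asyncio
import time

MAX_BATCH_SIZE = 8   # lรญmite de tamaรฑo de lote (valor de ejemplo)
MAX_WAIT = 0.001     # plazo รบnico de 1 ms para completar el lote

# Espera bloqueante hasta que llegue la primera solicitud; el plazo empieza a contar a partir de ahรญ
(string, rq) = await q.get()
strings = [string]
queues = [rq]
deadline = time.monotonic() + MAX_WAIT
while len(strings) < MAX_BATCH_SIZE:
    remaining = deadline - time.monotonic()
    if remaining <= 0:
        break
    try:
        (string, rq) = await asyncio.wait_for(q.get(), timeout=remaining)
    except asyncio.TimeoutError:
        break
    strings.append(string)
    queues.append(rq)
outs = pipe(strings, batch_size=len(strings))
for rq, out in zip(queues, outs):
    await rq.put(out)
```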
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/es/community.md
<!--โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Comunidad Esta pรกgina agrupa los recursos de ๐Ÿค— Transformers desarrollados por la comunidad. ## Los recursos de la comunidad: | Recurso | Descripciรณn | Autor | |:----------|:-------------|------:| | [Hugging Face Transformers Glossary Flashcards](https://www.darigovresearch.com/huggingface-transformers-glossary-flashcards) | Un conjunto de flashcards basadas en el [Glosario de documentos de Transformers] (glosario) que se ha puesto en un formato que se puede aprender/revisar fรกcilmente usando [Anki](https://apps.ankiweb.net/) una fuente abierta, aplicaciรณn de multiplataforma diseรฑada especรญficamente para la retenciรณn de conocimientos a largo plazo. Ve este [Introductory video on how to use the flashcards](https://www.youtube.com/watch?v=Dji_h7PILrw). | [Darigov Research](https://www.darigovresearch.com/) | ## Los cuadernos de la comunidad: | Cuaderno | Descripciรณn | Autor | | |:----------|:-------------|:-------------|------:| | [Ajustar un transformador preentrenado para generar letras](https://github.com/AlekseyKorshuk/huggingartists) | Cรณmo generar letras al estilo de tu artista favorito ajustando un modelo GPT-2 | [Aleksey Korshuk](https://github.com/AlekseyKorshuk) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb) | | [Entrenar T5 en Tensorflow 2](https://github.com/snapthat/TF-T5-text-to-text) | Cรณmo entrenar a T5 para cualquier tarea usando Tensorflow 2. Este cuaderno demuestra una tarea de preguntas y respuestas implementada en Tensorflow 2 usando SQUAD | [Muhammad Harris](https://github.com/HarrisDePerceptron) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/snapthat/TF-T5-text-to-text/blob/master/snapthatT5/notebooks/TF-T5-Datasets%20Training.ipynb) | | [Entrenar T5 en TPU](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) | Cรณmo entrenar a T5 en SQUAD con Transformers y Nlp | [Suraj Patil](https://github.com/patil-suraj) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb#scrollTo=QLGiFCDqvuil) | | [Ajustar T5 para Clasificaciรณn y Opciรณn Mรบltiple](https://github.com/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) | Cรณmo ajustar T5 para clasificaciรณn y tareas de opciรณn mรบltiple usando un formato de texto a texto con PyTorch Lightning | [Suraj Patil](https://github.com/patil-suraj) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) | | [Ajustar DialoGPT en nuevos conjuntos de datos e idiomas](https://github.com/ncoop57/i-am-a-nerd/blob/master/_notebooks/2020-05-12-chatbot-part-1.ipynb) | Cรณmo ajustar el modelo DialoGPT en un nuevo conjunto de datos para chatbots conversacionales de diรกlogo abierto | [Nathan Cooper](https://github.com/ncoop57) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ncoop57/i-am-a-nerd/blob/master/_notebooks/2020-05-12-chatbot-part-1.ipynb) | | [Modelado de secuencias largas con 
Reformer](https://github.com/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb) | Cรณmo entrenar en secuencias de hasta 500,000 tokens con Reformer | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb) | | [Ajustar BART para resumir](https://github.com/ohmeow/ohmeow_website/blob/master/_notebooks/2020-05-23-text-generation-with-blurr.ipynb) | Cรณmo ajustar BART para resumir con fastai usando blurr | [Wayde Gilliam](https://ohmeow.com/) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ohmeow/ohmeow_website/blob/master/_notebooks/2020-05-23-text-generation-with-blurr.ipynb) | | [Ajustar un Transformador previamente entrenado en los tweets de cualquier persona](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb) | Cรณmo generar tweets al estilo de tu cuenta de Twitter favorita ajustando un modelo GPT-2 | [Boris Dayma](https://github.com/borisdayma) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb) | | [Optimizar ๐Ÿค— modelos de Hugging Face con pesos y sesgos](https://colab.research.google.com/github/wandb/examples/blob/master/colabs/huggingface/Optimize_Hugging_Face_models_with_Weights_%26_Biases.ipynb) | Un tutorial completo que muestra la integraciรณn de W&B con Hugging Face | [Boris Dayma](https://github.com/borisdayma) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/wandb/examples/blob/master/colabs/huggingface/Optimize_Hugging_Face_models_with_Weights_%26_Biases.ipynb) | | [Preentrenar Longformer](https://github.com/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) | Cรณmo construir una versiรณn "larga" de modelos preentrenados existentes | [Iz Beltagy](https://beltagy.net) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) | | [Ajustar Longformer para control de calidad](https://github.com/patil-suraj/Notebooks/blob/master/longformer_qa_training.ipynb) | Cรณmo ajustar el modelo antiguo para la tarea de control de calidad | [Suraj Patil](https://github.com/patil-suraj) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/Notebooks/blob/master/longformer_qa_training.ipynb) | | [Evaluar modelo con ๐Ÿค—nlp](https://github.com/patrickvonplaten/notebooks/blob/master/How_to_evaluate_Longformer_on_TriviaQA_using_NLP.ipynb) | Cรณmo evaluar longformer en TriviaQA con `nlp` | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1m7eTGlPmLRgoPkkA7rkhQdZ9ydpmsdLE?usp=sharing) | | [Ajustar fino de T5 para la extracciรณn de amplitud de opiniรณn](https://github.com/enzoampil/t5-intro/blob/master/t5_qa_training_pytorch_span_extraction.ipynb) | Cรณmo ajustar T5 para la extracciรณn de intervalos de opiniones mediante un formato de texto a texto con PyTorch Lightning | [Lorenzo Ampil](https://github.com/enzoampil) | [![Open In 
Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/enzoampil/t5-intro/blob/master/t5_qa_training_pytorch_span_extraction.ipynb) | | [Ajustar fino de DistilBert para la clasificaciรณn multiclase](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_multiclass_classification.ipynb) | Cรณmo ajustar DistilBert para la clasificaciรณn multiclase con PyTorch | [Abhishek Kumar Mishra](https://github.com/abhimishra91) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_multiclass_classification.ipynb)| |[Ajustar BERT para la clasificaciรณn de etiquetas mรบltiples](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_multi_label_classification.ipynb)| Cรณmo ajustar BERT para la clasificaciรณn de mรบltiples etiquetas usando PyTorch |[Abhishek Kumar Mishra](https://github.com/abhimishra91) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_multi_label_classification.ipynb)| |[Ajustar T5 para resumir](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_summarization_wandb.ipynb)| Cรณmo ajustar T5 para resumir en PyTorch y realizar un seguimiento de los experimentos con WandB |[Abhishek Kumar Mishra](https://github.com/abhimishra91) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_summarization_wandb.ipynb)| |[Acelerar el ajuste fino en transformadores con Dynamic Padding/Bucketing](https://github.com/ELS-RD/transformers-notebook/blob/master/Divide_Hugging_Face_Transformers_training_time_by_2_or_more.ipynb)| Cรณmo acelerar el ajuste fino en un factor de 2 usando relleno dinรกmico/cubetas |[Michael Benesty](https://github.com/pommedeterresautee) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1CBfRU1zbfu7-ijiOqAAQUA-RJaxfcJoO?usp=sharing)| |[Preentrenar Reformer para modelado de lenguaje enmascarado](https://github.com/patrickvonplaten/notebooks/blob/master/Reformer_For_Masked_LM.ipynb)| Cรณmo entrenar un modelo Reformer con capas de autoatenciรณn bidireccionales | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1tzzh0i8PgDQGV3SMFUGxM7_gGae3K-uW?usp=sharing)| |[Ampliar y ajustar Sci-BERT](https://github.com/lordtt13/word-embeddings/blob/master/COVID-19%20Research%20Data/COVID-SciBERT.ipynb)| Cรณmo aumentar el vocabulario de un modelo SciBERT preentrenado de AllenAI en el conjunto de datos CORD y canalizarlo. | [Tanmay Thakur](https://github.com/lordtt13) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1rqAR40goxbAfez1xvF3hBJphSCsvXmh8)| |[Ajustar fino de BlenderBotSmall para resรบmenes usando la API de Entrenador](https://github.com/lordtt13/transformers-experiments/blob/master/Custom%20Tasks/fine-tune-blenderbot_small-for-summarization.ipynb)| Cรณmo ajustar BlenderBotSmall para resumir en un conjunto de datos personalizado, utilizando la API de Entrenador. 
| [Tanmay Thakur](https://github.com/lordtt13) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/19Wmupuls7mykSGyRN_Qo6lPQhgp56ymq?usp=sharing)| |[Ajustar Electra e interpreta con gradientes integrados](https://github.com/elsanns/xai-nlp-notebooks/blob/master/electra_fine_tune_interpret_captum_ig.ipynb) | Cรณmo ajustar Electra para el anรกlisis de sentimientos e interpretar predicciones con Captum Integrated Gradients | [Eliza Szczechla](https://elsanns.github.io) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/elsanns/xai-nlp-notebooks/blob/master/electra_fine_tune_interpret_captum_ig.ipynb)| |[ajustar un modelo GPT-2 que no estรก en inglรฉs con la clase Trainer](https://github.com/philschmid/fine-tune-GPT-2/blob/master/Fine_tune_a_non_English_GPT_2_Model_with_Huggingface.ipynb) | Cรณmo ajustar un modelo GPT-2 que no estรก en inglรฉs con la clase Trainer | [Philipp Schmid](https://www.philschmid.de) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/philschmid/fine-tune-GPT-2/blob/master/Fine_tune_a_non_English_GPT_2_Model_with_Huggingface.ipynb)| |[Ajustar un modelo DistilBERT para la tarea de clasificaciรณn de mรบltiples etiquetas](https://github.com/DhavalTaunk08/Transformers_scripts/blob/master/Transformers_multilabel_distilbert.ipynb) | Cรณmo ajustar un modelo DistilBERT para la tarea de clasificaciรณn de mรบltiples etiquetas | [Dhaval Taunk](https://github.com/DhavalTaunk08) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/DhavalTaunk08/Transformers_scripts/blob/master/Transformers_multilabel_distilbert.ipynb)| |[Ajustar ALBERT para la clasificaciรณn de pares de oraciones](https://github.com/NadirEM/nlp-notebooks/blob/master/Fine_tune_ALBERT_sentence_pair_classification.ipynb) | Cรณmo ajustar un modelo ALBERT u otro modelo basado en BERT para la tarea de clasificaciรณn de pares de oraciones | [Nadir El Manouzi](https://github.com/NadirEM) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NadirEM/nlp-notebooks/blob/master/Fine_tune_ALBERT_sentence_pair_classification.ipynb)| |[Ajustar a Roberta para el anรกlisis de sentimientos](https://github.com/DhavalTaunk08/NLP_scripts/blob/master/sentiment_analysis_using_roberta.ipynb) | Cรณmo ajustar un modelo de Roberta para el anรกlisis de sentimientos | [Dhaval Taunk](https://github.com/DhavalTaunk08) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/DhavalTaunk08/NLP_scripts/blob/master/sentiment_analysis_using_roberta.ipynb)| |[Evaluaciรณn de modelos de generaciรณn de preguntas](https://github.com/flexudy-pipe/qugeev) | ยฟQuรฉ tan precisas son las respuestas a las preguntas generadas por tu modelo de transformador seq2seq? 
| [Pascal Zoleko](https://github.com/zolekode) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1bpsSqCQU-iw_5nNoRm_crPq6FRuJthq_?usp=sharing)| |[Clasificar texto con DistilBERT y Tensorflow](https://github.com/peterbayerle/huggingface_notebook/blob/main/distilbert_tf.ipynb) | Cรณmo ajustar DistilBERT para la clasificaciรณn de texto en TensorFlow | [Peter Bayerle](https://github.com/peterbayerle) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/peterbayerle/huggingface_notebook/blob/main/distilbert_tf.ipynb)| |[Aprovechar BERT para el resumen de codificador y decodificador en CNN/Dailymail](https://github.com/patrickvonplaten/notebooks/blob/master/BERT2BERT_for_CNN_Dailymail.ipynb) | Cรณmo iniciar en caliente un *EncoderDecoderModel* con un punto de control *google-bert/bert-base-uncased* para resumir en CNN/Dailymail | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/BERT2BERT_for_CNN_Dailymail.ipynb)| |[Aprovechar RoBERTa para el resumen de codificador-decodificador en BBC XSum](https://github.com/patrickvonplaten/notebooks/blob/master/RoBERTaShared_for_BBC_XSum.ipynb) | Cรณmo iniciar en caliente un *EncoderDecoderModel* compartido con un punto de control *FacebookAI/roberta-base* para resumir en BBC/XSum | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/RoBERTaShared_for_BBC_XSum.ipynb)| |[Ajustar TAPAS en Sequential Question Answering (SQA)](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Fine_tuning_TapasForQuestionAnswering_on_SQA.ipynb) | Cรณmo ajustar *TapasForQuestionAnswering* con un punto de control *tapas-base* en el conjunto de datos del Sequential Question Answering (SQA) | [Niels Rogge](https://github.com/nielsrogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Fine_tuning_TapasForQuestionAnswering_on_SQA.ipynb)| |[Evaluar TAPAS en Table Fact Checking (TabFact)](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Evaluating_TAPAS_on_the_Tabfact_test_set.ipynb) | Cรณmo evaluar un *TapasForSequenceClassification* ajustado con un punto de control *tapas-base-finetuned-tabfact* usando una combinaciรณn de ๐Ÿค— conjuntos de datos y ๐Ÿค— bibliotecas de transformadores | [Niels Rogge](https://github.com/nielsrogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Evaluating_TAPAS_on_the_Tabfact_test_set.ipynb)| |[Ajustar de mBART para traducciรณn](https://colab.research.google.com/github/vasudevgupta7/huggingface-tutorials/blob/main/translation_training.ipynb) | Cรณmo ajustar mBART utilizando Seq2SeqTrainer para la traducciรณn del hindi al inglรฉs | [Vasudev Gupta](https://github.com/vasudevgupta7) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/vasudevgupta7/huggingface-tutorials/blob/main/translation_training.ipynb)| |[Ajustar LayoutLM en FUNSD (a form 
understanding dataset)](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForTokenClassification_on_FUNSD.ipynb) | Cรณmo ajustar *LayoutLMForTokenClassification* en el conjunto de datos de FUNSD para la extracciรณn de informaciรณn de documentos escaneados | [Niels Rogge](https://github.com/nielsrogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForTokenClassification_on_FUNSD.ipynb)| |[Ajustar DistilGPT2 y genere texto](https://colab.research.google.com/github/tripathiaakash/DistilGPT2-Tutorial/blob/main/distilgpt2_fine_tuning.ipynb) | Cรณmo ajustar DistilGPT2 y generar texto | [Aakash Tripathi](https://github.com/tripathiaakash) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tripathiaakash/DistilGPT2-Tutorial/blob/main/distilgpt2_fine_tuning.ipynb)| |[Ajustar LED en tokens de hasta 8K](https://github.com/patrickvonplaten/notebooks/blob/master/Fine_tune_Longformer_Encoder_Decoder_(LED)_for_Summarization_on_pubmed.ipynb) | Cรณmo ajustar LED en pubmed para resรบmenes de largo alcance | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_tune_Longformer_Encoder_Decoder_(LED)_for_Summarization_on_pubmed.ipynb)| |[Evaluar LED en Arxiv](https://github.com/patrickvonplaten/notebooks/blob/master/LED_on_Arxiv.ipynb) | Cรณmo evaluar efectivamente LED en resรบmenes de largo alcance | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/LED_on_Arxiv.ipynb)| |[Ajustar fino de LayoutLM en RVL-CDIP (un conjunto de datos de clasificaciรณn de imรกgenes de documentos)](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForSequenceClassification_on_RVL_CDIP.ipynb) | Cรณmo ajustar *LayoutLMForSequenceClassification* en el conjunto de datos RVL-CDIP para la clasificaciรณn de documentos escaneados | [Niels Rogge](https://github.com/nielsrogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForSequenceClassification_on_RVL_CDIP.ipynb)| |[Decodificaciรณn Wav2Vec2 CTC con ajuste GPT2](https://github.com/voidful/huggingface_notebook/blob/main/xlsr_gpt.ipynb) | Cรณmo decodificar la secuencia CTC con el ajuste del modelo de lenguaje | [Eric Lam](https://github.com/voidful) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1e_z5jQHYbO2YKEaUgzb1ww1WwiAyydAj?usp=sharing)| |[Ajustar BART para resรบmenes en dos idiomas con la clase Trainer](https://github.com/elsanns/xai-nlp-notebooks/blob/master/fine_tune_bart_summarization_two_langs.ipynb) | Cรณmo ajustar BART para resรบmenes en dos idiomas con la clase Trainer | [Eliza Szczechla](https://github.com/elsanns) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/elsanns/xai-nlp-notebooks/blob/master/fine_tune_bart_summarization_two_langs.ipynb)| |[Evaluar Big 
Bird en Trivia QA](https://github.com/patrickvonplaten/notebooks/blob/master/Evaluating_Big_Bird_on_TriviaQA.ipynb) | Cรณmo evaluar BigBird en respuesta a preguntas de documentos largos en Trivia QA | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Evaluating_Big_Bird_on_TriviaQA.ipynb)| | [Crear subtรญtulos de video usando Wav2Vec2](https://github.com/Muennighoff/ytclipcc/blob/main/wav2vec_youtube_captions.ipynb) | Cรณmo crear subtรญtulos de YouTube a partir de cualquier vรญdeo transcribiendo el audio con Wav2Vec | [Niklas Muennighoff](https://github.com/Muennighoff) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Muennighoff/ytclipcc/blob/main/wav2vec_youtube_captions.ipynb) | | [Ajustar el transformador de visiรณn en CIFAR-10 usando PyTorch Lightning](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_PyTorch_Lightning.ipynb) | Cรณmo ajustar el transformador de visiรณn (ViT) en CIFAR-10 usando transformadores HuggingFace, conjuntos de datos y PyTorch Lightning | [Niels Rogge](https://github.com/nielsrogge) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_PyTorch_Lightning.ipynb) | | [Ajustar el Transformador de visiรณn en CIFAR-10 usando el ๐Ÿค— Entrenador](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_the_%F0%9F%A4%97_Trainer.ipynb) | Cรณmo ajustar el Vision Transformer (ViT) en CIFAR-10 usando HuggingFace Transformers, Datasets y el ๐Ÿค— Trainer | [Niels Rogge](https://github.com/nielsrogge) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_the_%F0%9F%A4%97_Trainer.ipynb) | | [Evaluar LUKE en Open Entity, un conjunto de datos de tipificaciรณn de entidades](https://github.com/studio-ousia/luke/blob/master/notebooks/huggingface_open_entity.ipynb) | Cรณmo evaluar *LukeForEntityClassification* en el conjunto de datos de entidad abierta | [Ikuya Yamada](https://github.com/ikuyamada) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/studio-ousia/luke/blob/master/notebooks/huggingface_open_entity.ipynb) | | [Evaluar LUKE en TACRED, un conjunto de datos de extracciรณn de relaciones](https://github.com/studio-ousia/luke/blob/master/notebooks/huggingface_tacred.ipynb) | Cรณmo evaluar *LukeForEntityPairClassification* en el conjunto de datos TACRED | [Ikuya Yamada](https://github.com/ikuyamada) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/studio-ousia/luke/blob/master/notebooks/huggingface_tacred.ipynb) | | [Evaluar LUKE en CoNLL-2003, un punto de referencia importante de NER](https://github.com/studio-ousia/luke/blob/master/notebooks/huggingface_conll_2003.ipynb) | Cรณmo evaluar *LukeForEntitySpanClassification* en el conjunto de datos CoNLL-2003 | [Ikuya 
Yamada](https://github.com/ikuyamada) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/studio-ousia/luke/blob/master/notebooks/huggingface_conll_2003.ipynb) | | [Evaluar BigBird-Pegasus en el conjunto de datos de PubMed](https://github.com/vasudevgupta7/bigbird/blob/main/notebooks/bigbird_pegasus_evaluation.ipynb) | Cรณmo evaluar *BigBirdPegasusForConditionalGeneration* en el conjunto de datos de PubMed | [Vasudev Gupta](https://github.com/vasudevgupta7) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/vasudevgupta7/bigbird/blob/main/notebooks/bigbird_pegasus_evaluation.ipynb) | | [Clasificaciรณn de emociones del habla con Wav2Vec2](https://github/m3hrdadfi/soxan/blob/main/notebooks/Emotion_recognition_in_Greek_speech_using_Wav2Vec2.ipynb) | Cรณmo aprovechar un modelo Wav2Vec2 preentrenado para la clasificaciรณn de emociones en el conjunto de datos MEGA | [Mehrdad Farahani](https://github.com/m3hrdadfi) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/m3hrdadfi/soxan/blob/main/notebooks/Emotion_recognition_in_Greek_speech_using_Wav2Vec2.ipynb) | | [Detectar objetos en una imagen con DETR](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DETR/DETR_minimal_example_(with_DetrFeatureExtractor).ipynb) | Cรณmo usar un modelo entrenado *DetrForObjectDetection* para detectar objetos en una imagen y visualizar la atenciรณn | [Niels Rogge](https://github.com/NielsRogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/DETR/DETR_minimal_example_(with_DetrFeatureExtractor).ipynb) | | [Ajustar el DETR en un conjunto de datos de detecciรณn de objetos personalizados](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DETR/Fine_tuning_DetrForObjectDetection_on_custom_dataset_(balloon).ipynb) | Cรณmo ajustar *DetrForObjectDetection* en un conjunto de datos de detecciรณn de objetos personalizados | [Niels Rogge](https://github.com/NielsRogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/DETR/Fine_tuning_DetrForObjectDetection_on_custom_dataset_(balloon).ipynb) | | [Ajustar T5 para el reconocimiento de entidades nombradas](https://github.com/ToluClassics/Notebooks/blob/main/T5_Ner_Finetuning.ipynb) | Cรณmo ajustar *T5* en una tarea de reconocimiento de entidad nombrada | [Ogundepo Odunayo](https://github.com/ToluClassics) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1obr78FY_cBmWY5ODViCmzdY6O1KB65Vc?usp=sharing) |
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/es/attention.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Mecanismos de atenciรณn La mayorรญa de los modelos transformers utilizan atenciรณn completa, en el sentido de que la matriz de atenciรณn es cuadrada. Esto puede ser un gran cuello de botella computacional cuando tienes textos largos. `Longformer` y `reformer` son modelos que intentan ser mรกs eficientes y utilizan una versiรณn dispersa de la matriz de atenciรณn para acelerar el entrenamiento. ## Atenciรณn LSH [Reformer](https://huggingface.co/docs/transformers/model_doc/reformer) utiliza atenciรณn LSH. En el softmax(QK^t), solo los elementos mรกs grandes (en la dimensiรณn softmax) de la matriz QK^t van a dar contribuciones รบtiles. Entonces, para cada consulta q en Q, podemos considerar solo las claves k en K que estรฉn cerca de q. Se utiliza una funciรณn hash para determinar si q y k estรกn cerca. La mรกscara de atenciรณn se modifica para enmascarar el token actual (excepto en la primera posiciรณn), porque darรก una consulta y una clave iguales (entonces muy similares entre sรญ). Dado que el hash puede ser un poco aleatorio, en la prรกctica se utilizan varias funciones hash (determinadas por un parรกmetro n_rounds) y luego se promedian juntas. ## Atenciรณn local [Longformer](https://huggingface.co/docs/transformers/model_doc/longformer) utiliza atenciรณn local: a menudo, el contexto local (por ejemplo, ยฟcuรกles son los dos tokens a la izquierda y a la derecha?) es suficiente para tomar acciรณn para un token dado. Ademรกs, apilando capas de atenciรณn que tienen una ventana pequeรฑa, la รบltima capa tendrรก un campo receptivo mayor que solamente los tokens en la ventana, lo que les permite construir una representaciรณn de toda la oraciรณn. Algunos tokens de entrada preseleccionados tambiรฉn reciben atenciรณn global: para esos pocos tokens, la matriz de atenciรณn puede acceder a todos los tokens y este proceso es simรฉtrico: todos los demรกs tokens tienen acceso a esos tokens especรญficos (ademรกs de los que estรกn en su ventana local). Esto se muestra en la Figura 2d del artรญculo, el cual se puede apreciar un ejemplo de una mรกscara de atenciรณn: <div class="flex justify-center"> <img scale="50 %" align="center" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/local_attention_mask.png"/> </div> El uso de dichas matrices de atenciรณn con menos parรกmetros permite que el modelo tenga entradas con una longitud de secuencia mayor. ## Otros trucos ### Codificaciรณn posicional axial [Reformer](https://huggingface.co/docs/transformers/model_doc/reformer) utiliza codificaciรณn posicional axial: en los modelos transformers tradicionales, la codificaciรณn posicional E es una matriz de tamaรฑo \\(l\\) por \\(d\\), donde \\(l\\) es la longitud de la secuencia y \\(d\\) es la dimensiรณn del estado oculto. 
Si tienes textos muy extensos, esta matriz puede ser enorme y ocupar demasiado espacio en la GPU. Para aliviar eso, las codificaciones posicionales axiales consisten en factorizar esa gran matriz E en dos matrices mรกs pequeรฑas E1 y E2, con dimensiones \\(l_{1} \times d_{1}\\) y \\(l_{2} \times d_{2}\\), tal que \\(l_{1} \times l_{2} = l\\) y \\(d_{1} + d_{2} = d\\) (con el producto de las longitudes, esto termina siendo mucho mรกs pequeรฑo). La incrustaciรณn (embedding) para el paso de tiempo \\(j\\) en E se obtiene concatenando las incrustaciones para el paso de tiempo \\(j \% l1\\) en E1 y \\(j // l1\\) en E2.
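Para aterrizar la idea, un pequeรฑo esquema numรฉrico (con dimensiones de ejemplo elegidas libremente, no las de ningรบn modelo concreto) de cรณmo se reconstruye la incrustaciรณn posicional del paso \\(j\\) a partir de las dos matrices factorizadas:

```py
import torch

# Dimensiones de ejemplo: l = l1 * l2 posiciones y d = d1 + d2 dimensiones ocultas
l1, l2 = 16, 64   # l = 1024 posiciones
d1, d2 = 32, 96   # d = 128 dimensiones

E1 = torch.randn(l1, d1)  # en lugar de una sola matriz E de tamaรฑo (l, d)
E2 = torch.randn(l2, d2)


def axial_position_embedding(j):
    # La incrustaciรณn del paso j concatena la fila j % l1 de E1 y la fila j // l1 de E2
    return torch.cat([E1[j % l1], E2[j // l1]])


emb = axial_position_embedding(j=500)
print(emb.shape)  # torch.Size([128]), el mismo tamaรฑo que tendrรญa la fila j de la matriz E completa
# Parรกmetros almacenados: l1*d1 + l2*d2 = 6656 frente a l*d = 131072 de la matriz completa
```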
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/es/bertology.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# BERTologรญa

Hay un creciente campo de estudio empeรฑado en la investigaciรณn del funcionamiento interno de los transformers de gran escala como BERT (que algunos llaman "BERTologรญa"). Algunos buenos ejemplos de este campo son:

- BERT Rediscovers the Classical NLP Pipeline por Ian Tenney, Dipanjan Das, Ellie Pavlick: https://arxiv.org/abs/1905.05950
- Are Sixteen Heads Really Better than One? por Paul Michel, Omer Levy, Graham Neubig: https://arxiv.org/abs/1905.10650
- What Does BERT Look At? An Analysis of BERT's Attention por Kevin Clark, Urvashi Khandelwal, Omer Levy, Christopher D. Manning: https://arxiv.org/abs/1906.04341
- CAT-probing: A Metric-based Approach to Interpret How Pre-trained Models for Programming Language Attend Code Structure: https://arxiv.org/abs/2210.04633

Para asistir al desarrollo de este nuevo campo, hemos incluido algunas features adicionales en los modelos BERT/GPT/GPT-2 para ayudar a acceder a las representaciones internas, principalmente adaptadas de la gran obra de Paul Michel (https://arxiv.org/abs/1905.10650):

- acceder a todos los hidden-states de BERT/GPT/GPT-2,
- acceder a todos los pesos de atenciรณn de cada head de BERT/GPT/GPT-2,
- obtener los valores de salida y los gradientes de las heads para poder computar la mรฉtrica de importancia de las heads y realizar la poda de heads, como se explica en https://arxiv.org/abs/1905.10650.

Para ayudarte a entender y usar estas features, hemos aรฑadido un script de ejemplo especรญfico: [bertology.py](https://github.com/huggingface/transformers/tree/main/examples/research_projects/bertology/run_bertology.py), que extrae informaciรณn y poda un modelo preentrenado en GLUE.
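Como referencia rรกpida, este es un esquema mรญnimo (usando el checkpoint `google-bert/bert-base-uncased` a modo de ejemplo; los nombres de variables son solo ilustrativos) de cรณmo acceder a esas representaciones internas con la API estรกndar de ๐Ÿค— Transformers:

```py
>>> import torch
>>> from transformers import AutoTokenizer, AutoModel

>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> # output_hidden_states y output_attentions exponen las representaciones internas
>>> model = AutoModel.from_pretrained(
...     "google-bert/bert-base-uncased", output_hidden_states=True, output_attentions=True
... )

>>> inputs = tokenizer("Hello, BERTology!", return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # outputs.hidden_states: tupla con los embeddings mรกs la salida de cada capa
>>> # outputs.attentions: tupla con los pesos de atenciรณn de cada capa,
>>> # con forma (lote, nรบm. de heads, long. de secuencia, long. de secuencia)
>>> hidden_states, attentions = outputs.hidden_states, outputs.attentions

>>> # La poda de heads se indica con un dict {รญndice_de_capa: [heads_a_podar]}
>>> model.prune_heads({0: [0, 2]})
```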
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/es/torchscript.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Exportar a TorchScript

<Tip>

Este es el comienzo de nuestros experimentos con TorchScript y todavรญa estamos explorando sus capacidades con modelos de tamaรฑo de entrada variable. Es un tema de interรฉs para nosotros y profundizaremos en nuestro anรกlisis en las prรณximas versiones, con mรกs ejemplos de cรณdigo, una implementaciรณn mรกs flexible y comparativas de rendimiento comparando cรณdigos basados en Python con TorchScript compilado.

</Tip>

De acuerdo con la documentaciรณn de TorchScript:

> "TorchScript es una manera de crear modelos serializables y optimizables a partir del cรณdigo PyTorch."

Hay dos mรณdulos de PyTorch, [JIT y TRACE](https://pytorch.org/docs/stable/jit.html), que permiten a los desarrolladores exportar sus modelos para ser reusados en otros programas, como los programas de C++ orientados a la eficiencia.

Nosotros proveemos una interfaz que te permite exportar los modelos ๐Ÿค—Transformers a TorchScript para que puedan ser reusados en un entorno diferente al de los programas Python basados en PyTorch. Aquรญ explicamos cรณmo exportar y usar nuestros modelos utilizando TorchScript.

Exportar un modelo requiere de dos cosas:

- La instanciaciรณn del modelo con la bandera TorchScript.
- Un paso hacia adelante con entradas ficticias.

Estas necesidades implican varias cosas de las que los desarrolladores deben tener cuidado, como se detalla a continuaciรณn.

## Bandera TorchScript y pesos atados

La bandera `torchscript` es necesaria porque la mayorรญa de los modelos de lenguaje de ๐Ÿค—Transformers tienen pesos atados entre su `capa de incrustaciรณn` (`Embedding`) y su `capa de decodificaciรณn` (`Decoding`). TorchScript no te permite exportar modelos que tienen pesos atados, por lo que es necesario desatar y clonar los pesos de antemano.

Los modelos instanciados con la bandera `torchscript` tienen su `capa de incrustaciรณn` (`Embedding`) y su `capa de decodificaciรณn` (`Decoding`) separadas, lo que significa que no deben ser entrenados mรกs adelante. Entrenar desincronizarรญa las dos capas, lo que llevarรญa a resultados inesperados.

Esto no es asรญ para los modelos que no tienen una cabeza de modelo de lenguaje, ya que esos modelos no tienen pesos atados. Estos modelos pueden ser exportados de manera segura sin la bandera `torchscript`.

## Entradas ficticias y longitudes estรกndar

Las entradas ficticias se utilizan para un paso del modelo hacia adelante. Mientras los valores de las entradas se propagan a travรฉs de las capas, PyTorch realiza un seguimiento de las diferentes operaciones ejecutadas en cada tensor. Estas operaciones registradas se utilizan luego para crear *la traza* del modelo.

La traza se crea en relaciรณn con las dimensiones de las entradas. Por lo tanto, estรก limitada por las dimensiones de la entrada ficticia y no funcionarรก para ninguna otra longitud de secuencia o tamaรฑo de lote. Cuando se intenta con un tamaรฑo diferente, se genera el siguiente error:

```
`El tamaรฑo expandido del tensor (3) debe coincidir con el tamaรฑo existente (7) en la dimensiรณn no singleton 2`.
```

Recomendamos trazar el modelo con un tamaรฑo de entrada ficticio al menos tan grande como la entrada mรกs grande con la que se alimentarรก al modelo durante la inferencia. El relleno puede ayudar a completar los valores faltantes. Sin embargo, dado que el modelo se traza con un tamaรฑo de entrada mรกs grande, las dimensiones de la matriz tambiรฉn serรกn grandes, lo que resultarรก en mรกs cรกlculos.

Ten cuidado con el nรบmero total de operaciones realizadas en cada entrada y sigue de cerca el rendimiento al exportar modelos con longitudes de secuencia variables.

## Usando TorchScript en Python

Esta secciรณn demuestra cรณmo guardar y cargar modelos, asรญ como cรณmo usar la traza para la inferencia.

### Guardando un modelo

Para exportar un `BertModel` con TorchScript, instancia `BertModel` a partir de la clase `BertConfig` y luego guรกrdalo en disco bajo el nombre de archivo `traced_bert.pt`:

```python
from transformers import BertModel, BertTokenizer, BertConfig
import torch

enc = BertTokenizer.from_pretrained("bert-base-uncased")

# Tokenizing input text
text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]"
tokenized_text = enc.tokenize(text)

# Masking one of the input tokens
masked_index = 8
tokenized_text[masked_index] = "[MASK]"
indexed_tokens = enc.convert_tokens_to_ids(tokenized_text)
segments_ids = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1]

# Creating a dummy input
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
dummy_input = [tokens_tensor, segments_tensors]

# Initializing the model with the torchscript flag
# Flag set to True even though it is not necessary as this model does not have an LM Head.
config = BertConfig(
    vocab_size_or_config_json_file=32000,
    hidden_size=768,
    num_hidden_layers=12,
    num_attention_heads=12,
    intermediate_size=3072,
    torchscript=True,
)

# Instantiating the model
model = BertModel(config)

# The model needs to be in evaluation mode
model.eval()

# If you are instantiating the model with *from_pretrained* you can also easily set the TorchScript flag
model = BertModel.from_pretrained("bert-base-uncased", torchscript=True)

# Creating the trace
traced_model = torch.jit.trace(model, [tokens_tensor, segments_tensors])
torch.jit.save(traced_model, "traced_bert.pt")
```

### Cargando un modelo

Ahora puedes cargar el `BertModel` guardado anteriormente, `traced_bert.pt`, desde el disco y usarlo en la entrada ficticia (`dummy_input`) previamente inicializada:

```python
loaded_model = torch.jit.load("traced_bert.pt")
loaded_model.eval()

all_encoder_layers, pooled_output = loaded_model(*dummy_input)
```

## Usando un modelo trazado para inferencia

Utiliza el modelo trazado para hacer inferencia llamando a su mรฉtodo dunder `__call__`:

```python
traced_model(tokens_tensor, segments_tensors)
```

## Despliega modelos TorchScript de Hugging Face en AWS con el Neuron SDK

AWS introdujo la familia de instancias [Amazon EC2 Inf1](https://aws.amazon.com/ec2/instance-types/inf1/) para inferencia de aprendizaje automรกtico de alto rendimiento y bajo costo en la nube. Las instancias Inf1 estรกn alimentadas por el chip AWS Inferentia, un acelerador de hardware personalizado que se especializa en cargas de trabajo de inferencia de aprendizaje profundo.

[AWS Neuron](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/#) es el SDK para Inferentia que admite el trazado y la optimizaciรณn de modelos de transformers para implementaciรณn en Inf1. El SDK Neuron proporciona:

1. Una API fรกcil de usar con un solo cambio de lรญnea de cรณdigo para trazar y optimizar un modelo TorchScript para inferencia en la nube.
2. Optimizaciones de rendimiento listas para usar [para mejorar el rendimiento y el costo](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/benchmark/).
3. Soporte para modelos de transformers de Hugging Face construidos tanto con [PyTorch](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/pytorch/bert_tutorial/tutorial_pretrained_bert.html) como con [TensorFlow](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/tensorflow/huggingface_bert/huggingface_bert.html).

### Implicaciones

Los modelos transformers basados en la arquitectura [BERT (Bidirectional Encoder Representations from Transformers)](https://huggingface.co/docs/transformers/main/model_doc/bert), o sus variantes como [distilBERT](https://huggingface.co/docs/transformers/main/model_doc/distilbert) y [roBERTa](https://huggingface.co/docs/transformers/main/model_doc/roberta), funcionan mejor en Inf1 para tareas no generativas como la respuesta a preguntas extractivas, la clasificaciรณn de secuencias y la clasificaciรณn de tokens. Sin embargo, las tareas de generaciรณn de texto aรบn pueden adaptarse para ejecutarse en Inf1 segรบn este [tutorial de AWS Neuron MarianMT](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/pytorch/transformers-marianmt.html). Se puede encontrar mรกs informaciรณn sobre los modelos que se pueden convertir fรกcilmente para usar en Inferentia en la secciรณn de [Model Architecture Fit](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/models/models-inferentia.html#models-inferentia) de la documentaciรณn de Neuron.

### Dependencias

El uso de AWS Neuron para convertir modelos requiere un [entorno de Neuron SDK](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/neuron-frameworks/pytorch-neuron/index.html#installation-guide) que viene preconfigurado en [la AMI de AWS Deep Learning](https://docs.aws.amazon.com/dlami/latest/devguide/tutorial-inferentia-launching.html).

### Convertir un modelo para AWS Neuron

Convierte un modelo para AWS Neuron utilizando el mismo cรณdigo de [Uso de TorchScript en Python](torchscript#using-torchscript-in-python) para trazar un `BertModel`. Importa la extensiรณn del framework `torch.neuron` para acceder a los componentes del Neuron SDK a travรฉs de una API de Python:

```python
from transformers import BertModel, BertTokenizer, BertConfig
import torch
import torch.neuron
```

Solo necesitas modificar la siguiente lรญnea:

```diff
- torch.jit.trace(model, [tokens_tensor, segments_tensors])
+ torch.neuron.trace(model, [tokens_tensor, segments_tensors])
```

Esto permite que el Neuron SDK trace el modelo y lo optimice para las instancias Inf1.

Para obtener mรกs informaciรณn sobre las caracterรญsticas, herramientas, tutoriales de ejemplo y รบltimas actualizaciones del AWS Neuron SDK, consulta [la documentaciรณn de AWS Neuron SDK](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/index.html).
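A modo de referencia, un esquema mรญnimo de cรณmo quedarรญa el flujo completo de trazado con Neuron, asumiendo un entorno Inf1 con el paquete `torch-neuron` instalado; la entrada ficticia es la misma del ejemplo de TorchScript y los nombres de archivo son solo ilustrativos:

```python
import torch
import torch.neuron  # extensiรณn del Neuron SDK para PyTorch (solo disponible en un entorno Inf1/Neuron)
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# La misma entrada ficticia del ejemplo de TorchScript
text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]"
tokens = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(text))
tokens_tensor = torch.tensor([tokens])
segments_tensors = torch.tensor([[0] * 7 + [1] * (len(tokens) - 7)])

# El modelo se instancia igual que antes, con la bandera torchscript
model = BertModel.from_pretrained("bert-base-uncased", torchscript=True)
model.eval()

# torch.neuron.trace sustituye a torch.jit.trace y compila el modelo para Inferentia
neuron_model = torch.neuron.trace(model, [tokens_tensor, segments_tensors])

# El modelo compilado se guarda y se vuelve a cargar como cualquier otro mรณdulo TorchScript
neuron_model.save("bert_neuron.pt")
loaded_neuron_model = torch.jit.load("bert_neuron.pt")
```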
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/es/_config.py
# docstyle-ignore
INSTALL_CONTENT = """
# Transformers installation
! pip install transformers datasets evaluate accelerate
# To install from source instead of the last release, comment the command above and uncomment the following one.
# ! pip install git+https://github.com/huggingface/transformers.git
"""

notebook_first_cells = [{"type": "code", "content": INSTALL_CONTENT}]
black_avoid_patterns = {
    "{processor_class}": "FakeProcessorClass",
    "{model_class}": "FakeModelClass",
    "{object_class}": "FakeObjectClass",
}
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/es/performance.md
<!--- Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Rendimiento y Escalabilidad Entrenar modelos grandes de transformadores y desplegarlos en producciรณn presenta varios desafรญos. Durante el entrenamiento, el modelo puede requerir mรกs memoria de GPU de la disponible o mostrar una velocidad de entrenamiento lenta. En la fase de implementaciรณn, el modelo puede tener dificultades para manejar el rendimiento necesario en un entorno de producciรณn. Esta documentaciรณn tiene como objetivo ayudarte a superar estos desafรญos y encontrar la configuraciรณn รณptima para tu caso de uso. Las guรญas estรกn divididas en secciones de entrenamiento e inferencia, ya que cada una presenta diferentes desafรญos y soluciones. Dentro de cada secciรณn, encontrarรกs guรญas separadas para diferentes configuraciones de hardware, como GPU รบnica vs. multi-GPU para el entrenamiento o CPU vs. GPU para la inferencia. Utiliza este documento como punto de partida para navegar hacia los mรฉtodos que se ajusten a tu escenario. ## Entrenamiento Entrenar modelos grandes de transformadores de manera eficiente requiere un acelerador como una GPU o TPU. El caso mรกs comรบn es cuando tienes una GPU รบnica. Los mรฉtodos que puedes aplicar para mejorar la eficiencia de entrenamiento en una GPU รบnica tambiรฉn se aplican a otras configuraciones, como mรบltiples GPU. Sin embargo, tambiรฉn existen tรฉcnicas especรญficas para entrenamiento con mรบltiples GPU o CPU, las cuales cubrimos en secciones separadas. * [Mรฉtodos y herramientas para un entrenamiento eficiente en una sola GPU](https://huggingface.co/docs/transformers/perf_train_gpu_one): comienza aquรญ para aprender enfoques comunes que pueden ayudar a optimizar la utilizaciรณn de memoria de la GPU, acelerar el entrenamiento o ambas cosas. * [Secciรณn de entrenamiento con varias GPU](https://huggingface.co/docs/transformers/perf_train_gpu_many): explora esta secciรณn para conocer mรฉtodos de optimizaciรณn adicionales que se aplican a configuraciones con varias GPU, como paralelismo de datos, tensores y canalizaciones. * [Secciรณn de entrenamiento en CPU](https://huggingface.co/docs/transformers/perf_train_cpu): aprende sobre entrenamiento de precisiรณn mixta en CPU. * [Entrenamiento eficiente en mรบltiples CPUs](https://huggingface.co/docs/transformers/perf_train_cpu_many): aprende sobre el entrenamiento distribuido en CPU. * [Entrenamiento en TPU con TensorFlow](https://huggingface.co/docs/transformers/perf_train_tpu_tf): si eres nuevo en TPUs, consulta esta secciรณn para obtener una introducciรณn basada en opiniones sobre el entrenamiento en TPUs y el uso de XLA. * [Hardware personalizado para el entrenamiento](https://huggingface.co/docs/transformers/perf_hardware): encuentra consejos y trucos al construir tu propia plataforma de aprendizaje profundo. 
* [Bรบsqueda de hiperparรกmetros utilizando la API del Entrenador](https://huggingface.co/docs/transformers/hpo_train) ## Inferencia Realizar inferencias eficientes con modelos grandes en un entorno de producciรณn puede ser tan desafiante como entrenarlos. En las siguientes secciones, describimos los pasos para ejecutar inferencias en CPU y configuraciones con GPU รบnica/mรบltiple. * [Inferencia en una sola CPU](https://huggingface.co/docs/transformers/perf_infer_cpu) * [Inferencia en una sola GPU](https://huggingface.co/docs/transformers/perf_infer_gpu_one) * [Inferencia con mรบltiples GPU](https://huggingface.co/docs/transformers/perf_infer_gpu_one) * [Integraciรณn de XLA para modelos de TensorFlow](https://huggingface.co/docs/transformers/tf_xla) ## Entrenamiento e Inferencia Aquรญ encontrarรกs tรฉcnicas, consejos y trucos que aplican tanto si estรกs entrenando un modelo como si estรกs ejecutando inferencias con รฉl. * [Instanciar un modelo grande](https://huggingface.co/docs/transformers/big_models) * [Soluciรณn de problemas de rendimiento](https://huggingface.co/docs/transformers/debugging) ## Contribuir Este documento estรก lejos de estar completo y aรบn se deben agregar muchas cosas, asรญ que si tienes adiciones o correcciones que hacer, no dudes en abrir un PR. Si no estรกs seguro, inicia un Issue y podemos discutir los detalles allรญ. Cuando hagas contribuciones que indiquen que A es mejor que B, intenta incluir un benchmark reproducible y/o un enlace a la fuente de esa informaciรณn (a menos que provenga directamente de ti).
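La bรบsqueda de hiperparรกmetros mencionada arriba se puede lanzar directamente desde la API del [`Trainer`]. A modo de ilustraciรณn, este es un boceto mรญnimo (no una receta oficial) que asume que ya definiste una funciรณn `model_init`, que cuentas con datasets de entrenamiento y evaluaciรณn llamados aquรญ `small_train_dataset` y `small_eval_dataset` (nombres hipotรฉticos) y que tienes `optuna` instalado como backend:

```python
from transformers import Trainer, TrainingArguments

def optuna_hp_space(trial):
    # espacio de bรบsqueda de ejemplo: tasa de aprendizaje y tamaรฑo de batch
    return {
        "learning_rate": trial.suggest_float("learning_rate", 1e-5, 5e-5, log=True),
        "per_device_train_batch_size": trial.suggest_categorical("per_device_train_batch_size", [8, 16, 32]),
    }

trainer = Trainer(
    model_init=model_init,  # funciรณn que devuelve un modelo nuevo en cada ensayo
    args=TrainingArguments(output_dir="hpo_output", evaluation_strategy="epoch"),
    train_dataset=small_train_dataset,
    eval_dataset=small_eval_dataset,
)

best_trial = trainer.hyperparameter_search(
    direction="minimize",
    backend="optuna",
    hp_space=optuna_hp_space,
    n_trials=10,
)
```

Consulta la guรญa enlazada arriba para conocer los backends disponibles y el resto de opciones.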
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/es/autoclass_tutorial.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Carga instancias preentrenadas con un AutoClass Con tantas arquitecturas diferentes de Transformer puede ser retador crear una para tu checkpoint. Como parte de la filosofรญa central de ๐Ÿค— Transformers para hacer que la biblioteca sea fรกcil, simple y flexible de usar; una `AutoClass` automรกticamente infiere y carga la arquitectura correcta desde un checkpoint dado. El mรฉtodo `from_pretrained` te permite cargar rรกpidamente un modelo preentrenado para cualquier arquitectura, por lo que no tendrรกs que dedicar tiempo y recursos para entrenar uno desde cero. Producir este tipo de cรณdigo con checkpoint implica que si funciona con uno, funcionarรก tambiรฉn con otro (siempre que haya sido entrenado para una tarea similar) incluso si la arquitectura es distinta. <Tip> Recuerda, la arquitectura se refiere al esqueleto del modelo y los checkpoints son los pesos para una arquitectura dada. Por ejemplo, [BERT](https://huggingface.co/google-bert/bert-base-uncased) es una arquitectura, mientras que `google-bert/bert-base-uncased` es un checkpoint. Modelo es un tรฉrmino general que puede significar una arquitectura o un checkpoint. </Tip> En este tutorial, aprenderรกs a: * Cargar un tokenizador pre-entrenado. * Cargar un extractor de caracterรญsticas (feature extractor en inglรฉs) pre-entrenado. * Cargar un procesador pre-entrenado. * Cargar un modelo pre-entrenado. ## AutoTokenizer Casi cualquier tarea de Procesamiento de Lenguaje Natural comienza con un tokenizador. Un tokenizador convierte tu input a un formato que puede ser procesado por el modelo. Carga un tokenizador con [`AutoTokenizer.from_pretrained`]: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased") ``` Luego tokeniza tu input como lo mostrado a continuaciรณn: ```py >>> sequence = "In a hole in the ground there lived a hobbit." >>> print(tokenizer(sequence)) {'input_ids': [101, 1999, 1037, 4920, 1999, 1996, 2598, 2045, 2973, 1037, 7570, 10322, 4183, 1012, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} ``` ## AutoFeatureExtractor Para tareas de audio y visiรณn, un extractor de caracterรญsticas procesa la seรฑal de audio o imagen al formato de input correcto. Carga un extractor de caracterรญsticas con [`AutoFeatureExtractor.from_pretrained`]: ```py >>> from transformers import AutoFeatureExtractor >>> feature_extractor = AutoFeatureExtractor.from_pretrained( ... "ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition" ... ) ``` ## AutoProcessor Las tareas multimodales requieren un procesador que combine dos tipos de herramientas de preprocesamiento. 
Por ejemplo, el modelo [LayoutLMV2](model_doc/layoutlmv2) requiere que un extractor de caracterรญsticas maneje las imรกgenes y que un tokenizador maneje el texto; un procesador combina ambas.

Carga un procesador con [`AutoProcessor.from_pretrained`]:

```py
>>> from transformers import AutoProcessor

>>> processor = AutoProcessor.from_pretrained("microsoft/layoutlmv2-base-uncased")
```

## AutoModel

<frameworkcontent>
<pt>
Finalmente, las clases `AutoModelFor` te permiten cargar un modelo preentrenado para una tarea dada (revisa [aquรญ](model_doc/auto) para conocer la lista completa de tareas disponibles). Por ejemplo, carga un modelo para clasificaciรณn de secuencias con [`AutoModelForSequenceClassification.from_pretrained`]:

```py
>>> from transformers import AutoModelForSequenceClassification

>>> model = AutoModelForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased")
```

Reutiliza fรกcilmente el mismo checkpoint para cargar una arquitectura para alguna tarea diferente:

```py
>>> from transformers import AutoModelForTokenClassification

>>> model = AutoModelForTokenClassification.from_pretrained("distilbert/distilbert-base-uncased")
```

Generalmente recomendamos utilizar las clases `AutoTokenizer` y `AutoModelFor` para cargar instancias pre-entrenadas de modelos. Esto asegurarรก que cargues la arquitectura correcta en cada ocasiรณn. En el siguiente [tutorial](preprocessing), aprende a usar tu tokenizador reciรฉn cargado, el extractor de caracterรญsticas y el procesador para preprocesar un dataset para fine-tuning.
</pt>
<tf>
Finalmente, la clase `TFAutoModelFor` te permite cargar tu modelo pre-entrenado para una tarea dada (revisa [aquรญ](model_doc/auto) para conocer la lista completa de tareas disponibles). Por ejemplo, carga un modelo para clasificaciรณn de secuencias con [`TFAutoModelForSequenceClassification.from_pretrained`]:

```py
>>> from transformers import TFAutoModelForSequenceClassification

>>> model = TFAutoModelForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased")
```

Reutiliza fรกcilmente el mismo checkpoint para cargar una arquitectura para alguna tarea diferente:

```py
>>> from transformers import TFAutoModelForTokenClassification

>>> model = TFAutoModelForTokenClassification.from_pretrained("distilbert/distilbert-base-uncased")
```

Generalmente recomendamos utilizar las clases `AutoTokenizer` y `TFAutoModelFor` para cargar instancias de modelos pre-entrenados. Esto asegurarรก que cargues la arquitectura correcta cada vez. En el siguiente [tutorial](preprocessing), aprende a usar tu tokenizador reciรฉn cargado, el extractor de caracterรญsticas y el procesador para preprocesar un dataset para fine-tuning.
</tf>
</frameworkcontent>
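Como referencia rรกpida, aquรญ tienes un boceto en PyTorch que combina las piezas anteriores: carga un tokenizador y un modelo con las clases automรกticas y ejecuta una pasada de inferencia. Ten en cuenta que la cabecera de clasificaciรณn de este checkpoint no estรก ajustada (fine-tuned), por lo que los logits solo sirven para ilustrar el flujo:

```py
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased")
>>> model = AutoModelForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased")

>>> inputs = tokenizer("In a hole in the ground there lived a hobbit.", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> logits.shape
torch.Size([1, 2])
```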
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/es/multilingual.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Modelos multilingรผes para inferencia [[open-in-colab]] Existen varios modelos multilingรผes en ๐Ÿค— Transformers y su uso para inferencia difiere de los modelos monolingรผes. Sin embargo, no *todos* los usos de los modelos multilingรผes son diferentes. Algunos modelos, como [google-bert/bert-base-multilingual-uncased](https://huggingface.co/google-bert/bert-base-multilingual-uncased), pueden utilizarse igual que un modelo monolingรผe. Esta guรญa te enseรฑarรก cรณmo utilizar modelos multilingรผes cuyo uso difiere en la inferencia. ## XLM XLM tiene diez checkpoints diferentes de los cuales solo uno es monolingรผe. Los nueve checkpoints restantes del modelo pueden dividirse en dos categorรญas: los checkpoints que utilizan language embeddings y los que no. ### XLM con language embeddings Los siguientes modelos XLM usan language embeddings para especificar el lenguaje utilizado en la inferencia: - `FacebookAI/xlm-mlm-ende-1024` (Masked language modeling, English-German) - `FacebookAI/xlm-mlm-enfr-1024` (Masked language modeling, English-French) - `FacebookAI/xlm-mlm-enro-1024` (Masked language modeling, English-Romanian) - `FacebookAI/xlm-mlm-xnli15-1024` (Masked language modeling, XNLI languages) - `FacebookAI/xlm-mlm-tlm-xnli15-1024` (Masked language modeling + translation, XNLI languages) - `FacebookAI/xlm-clm-enfr-1024` (Causal language modeling, English-French) - `FacebookAI/xlm-clm-ende-1024` (Causal language modeling, English-German) Los language embeddings son representados como un tensor de la mismas dimensiones que los `input_ids` pasados al modelo. Los valores de estos tensores dependen del idioma utilizado y se identifican mediante los atributos `lang2id` y `id2lang` del tokenizador. En este ejemplo, carga el checkpoint `FacebookAI/xlm-clm-enfr-1024` (Causal language modeling, English-French): ```py >>> import torch >>> from transformers import XLMTokenizer, XLMWithLMHeadModel >>> tokenizer = XLMTokenizer.from_pretrained("FacebookAI/xlm-clm-enfr-1024") >>> model = XLMWithLMHeadModel.from_pretrained("FacebookAI/xlm-clm-enfr-1024") ``` El atributo `lang2id` del tokenizador muestra los idiomas de este modelo y sus ids: ```py >>> print(tokenizer.lang2id) {'en': 0, 'fr': 1} ``` A continuaciรณn, crea un input de ejemplo: ```py >>> input_ids = torch.tensor([tokenizer.encode("Wikipedia was used to")]) # batch size of 1 ``` Establece el id del idioma, por ejemplo `"en"`, y utilรญzalo para definir el language embedding. El language embedding es un tensor lleno de `0` ya que es el id del idioma para inglรฉs. Este tensor debe ser del mismo tamaรฑo que `input_ids`. 
```py >>> language_id = tokenizer.lang2id["en"] # 0 >>> langs = torch.tensor([language_id] * input_ids.shape[1]) # torch.tensor([0, 0, 0, ..., 0]) >>> # We reshape it to be of size (batch_size, sequence_length) >>> langs = langs.view(1, -1) # is now of shape [1, sequence_length] (we have a batch size of 1) ``` Ahora puedes pasar los `input_ids` y el language embedding al modelo: ```py >>> outputs = model(input_ids, langs=langs) ``` El script [run_generation.py](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-generation/run_generation.py) puede generar texto con language embeddings utilizando los checkpoints `xlm-clm`. ### XLM sin language embeddings Los siguientes modelos XLM no requieren language embeddings durante la inferencia: - `FacebookAI/xlm-mlm-17-1280` (modelado de lenguaje enmascarado, 17 idiomas) - `FacebookAI/xlm-mlm-100-1280` (modelado de lenguaje enmascarado, 100 idiomas) Estos modelos se utilizan para representaciones genรฉricas de frases a diferencia de los anteriores checkpoints XLM. ## BERT Los siguientes modelos de BERT pueden utilizarse para tareas multilingรผes: - `google-bert/bert-base-multilingual-uncased` (modelado de lenguaje enmascarado + predicciรณn de la siguiente oraciรณn, 102 idiomas) - `google-bert/bert-base-multilingual-cased` (modelado de lenguaje enmascarado + predicciรณn de la siguiente oraciรณn, 104 idiomas) Estos modelos no requieren language embeddings durante la inferencia. Deben identificar la lengua a partir del contexto e inferir en consecuencia. ## XLM-RoBERTa Los siguientes modelos de XLM-RoBERTa pueden utilizarse para tareas multilingรผes: - `FacebookAI/xlm-roberta-base` (modelado de lenguaje enmascarado, 100 idiomas) - `FacebookAI/xlm-roberta-large` (Modelado de lenguaje enmascarado, 100 idiomas) XLM-RoBERTa se entrenรณ con 2,5 TB de datos CommonCrawl reciรฉn creados y depurados en 100 idiomas. Proporciona fuertes ventajas sobre los modelos multilingรผes publicados anteriormente como mBERT o XLM en tareas posteriores como la clasificaciรณn, el etiquetado de secuencias y la respuesta a preguntas. ## M2M100 Los siguientes modelos de M2M100 pueden utilizarse para traducciรณn multilingรผe: - `facebook/m2m100_418M` (traducciรณn) - `facebook/m2m100_1.2B` (traducciรณn) En este ejemplo, carga el checkpoint `facebook/m2m100_418M` para traducir del chino al inglรฉs. Puedes establecer el idioma de origen en el tokenizador: ```py >>> from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer >>> en_text = "Do not meddle in the affairs of wizards, for they are subtle and quick to anger." >>> chinese_text = "ไธ่ฆๆ’ๆ‰‹ๅทซๅธซ็š„ไบ‹ๅ‹™, ๅ› ็‚บไป–ๅ€‘ๆ˜ฏๅพฎๅฆ™็š„, ๅพˆๅฟซๅฐฑๆœƒ็™ผๆ€’." >>> tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="zh") >>> model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M") ``` Tokeniza el texto: ```py >>> encoded_zh = tokenizer(chinese_text, return_tensors="pt") ``` M2M100 fuerza el id del idioma de destino como el primer token generado para traducir al idioma de destino.. Establece el `forced_bos_token_id` a `en` en el mรฉtodo `generate` para traducir al inglรฉs: ```py >>> generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en")) >>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) 'Do not interfere with the matters of the witches, because they are delicate and will soon be angry.' 
``` ## MBart Los siguientes modelos de MBart pueden utilizarse para traducciรณn multilingรผe: - `facebook/mbart-large-50-one-to-many-mmt` (traducciรณn automรกtica multilingรผe de uno a muchos, 50 idiomas) - `facebook/mbart-large-50-many-to-many-mmt` (traducciรณn automรกtica multilingรผe de muchos a muchos, 50 idiomas) - `facebook/mbart-large-50-many-to-one-mmt` (traducciรณn automรกtica multilingรผe muchos a uno, 50 idiomas) - `facebook/mbart-large-50` (traducciรณn multilingรผe, 50 idiomas) - `facebook/mbart-large-cc25` En este ejemplo, carga el checkpoint `facebook/mbart-large-50-many-to-many-mmt` para traducir del finlandรฉs al inglรฉs. Puedes establecer el idioma de origen en el tokenizador: ```py >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM >>> en_text = "Do not meddle in the affairs of wizards, for they are subtle and quick to anger." >>> fi_text = "ร„lรค sekaannu velhojen asioihin, sillรค ne ovat hienovaraisia ja nopeasti vihaisia." >>> tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-50-many-to-many-mmt", src_lang="fi_FI") >>> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-50-many-to-many-mmt") ``` Tokeniza el texto: ```py >>> encoded_en = tokenizer(en_text, return_tensors="pt") ``` MBart fuerza el id del idioma de destino como el primer token generado para traducirlo. Establece el `forced_bos_token_id` a `en` en el mรฉtodo `generate` para traducir al inglรฉs: ```py >>> generated_tokens = model.generate(**encoded_en, forced_bos_token_id=tokenizer.lang_code_to_id("en_XX")) >>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) "Don't interfere with the wizard's affairs, because they are subtle, will soon get angry." ``` Si estรกs usando el checkpoint `facebook/mbart-large-50-many-to-one-mmt` no necesitas forzar el id del idioma de destino como el primer token generado, de lo contrario el uso es el mismo.
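Como complemento, este es un boceto del caso muchos-a-uno mencionado en el pรกrrafo anterior: al usar `facebook/mbart-large-50-many-to-one-mmt`, el idioma de destino siempre es el inglรฉs, asรญ que no hace falta pasar `forced_bos_token_id` a `generate`:

```py
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

>>> fi_text = "ร„lรค sekaannu velhojen asioihin, sillรค ne ovat hienovaraisia ja nopeasti vihaisia."

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-50-many-to-one-mmt", src_lang="fi_FI")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-50-many-to-one-mmt")

>>> encoded_fi = tokenizer(fi_text, return_tensors="pt")
>>> generated_tokens = model.generate(**encoded_fi)
>>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
```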
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/es/model_sharing.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Compartir un modelo Los รบltimos dos tutoriales mostraron cรณmo puedes realizar fine-tunning a un modelo con PyTorch, Keras y ๐Ÿค— Accelerate para configuraciones distribuidas. ยกEl siguiente paso es compartir tu modelo con la comunidad! En Hugging Face creemos en compartir abiertamente a todos el conocimiento y los recursos para democratizar la inteligencia artificial. En este sentido, te animamos a considerar compartir tu modelo con la comunidad, de esta forma ayudas a otros ahorrando tiempo y recursos. En este tutorial aprenderรกs dos mรฉtodos para compartir un modelo trained o fine-tuned en el [Model Hub](https://huggingface.co/models): - Mediante Cรณdigo, enviando (push) tus archivos al Hub. - Con la interfaz Web, con Drag-and-drop de tus archivos al Hub. <iframe width="560" height="315" src="https://www.youtube.com/embed/XvSGPZFEjDY" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> <Tip> Para compartir un modelo con la comunidad necesitas una cuenta en [huggingface.co](https://huggingface.co/join). Tambiรฉn puedes unirte a una organizaciรณn existente o crear una nueva. </Tip> ## Caracterรญsticas de los repositorios Cada repositorio en el Model Hub se comporta como cualquier otro repositorio en GitHub. Nuestros repositorios ofrecen versioning, commit history, y la habilidad para visualizar diferencias. El versioning desarrollado dentro del Model Hub es basado en git y [git-lfs](https://git-lfs.github.com/). En otras palabras, puedes tratar un modelo como un repositorio, brindando un mejor control de acceso y escalabilidad. Version control permite *revisions*, un mรฉtodo para apuntar a una versiรณn especรญfica de un modelo utilizando un commit hash, tag o branch. Como resultado, puedes cargar una versiรณn especรญfica del modelo con el parรกmetro `revision`: ```py >>> model = AutoModel.from_pretrained( ... "julien-c/EsperBERTo-small", revision="v2.0.1" # tag name, or branch name, or commit hash ... ) ``` Los archivos son editados fรกcilmente dentro de un repositorio. Incluso puedes observar el commit history y las diferencias: ![vis_diff](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/vis_diff.png) ## Configuraciรณn inicial Antes de compartir un modelo al Hub necesitarรกs tus credenciales de Hugging Face. Si tienes acceso a una terminal ejecuta el siguiente comando en el entorno virtual donde ๐Ÿค— Transformers estรฉ instalado. 
Esto guardarรก tu token de acceso dentro de tu carpeta cache de Hugging Face (~/.cache/ by default): ```bash huggingface-cli login ``` Si usas un notebook como Jupyter o Colaboratory, asegรบrate de tener instalada la biblioteca [`huggingface_hub`](https://huggingface.co/docs/hub/adding-a-library). Esta biblioteca te permitirรก interactuar por cรณdigo con el Hub. ```bash pip install huggingface_hub ``` Luego usa `notebook_login` para iniciar sesiรณn al Hub, y sigue el link [aquรญ](https://huggingface.co/settings/token) para generar un token con el que iniciaremos sesiรณn: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## Convertir un modelo para todos los Frameworks Para asegurarnos que tu modelo pueda ser usado por alguien que estรฉ trabajando con un framework diferente, te recomendamos convertir y subir tu modelo con checkpoints de pytorch y tensorflow. Aunque los usuarios aรบn son capaces de cargar su modelo desde un framework diferente, si se omite este paso serรก mรกs lento debido a que ๐Ÿค— Transformers necesitarรก convertir el checkpoint sobre-la-marcha. Convertir un checkpoint para otro framework es fรกcil. Asegรบrate tener Pytorch y TensorFlow instalado (Vรฉase [aquรญ](installation) para instrucciones de instalaciรณn), y luego encuentra el modelo especรญfico para tu tarea en el otro Framework. Por ejemplo, supongamos que has entrenado DistilBert para clasificaciรณn de secuencias en PyTorch y quieres convertirlo a su equivalente en TensorFlow. Cargas el equivalente en TensorFlow de tu modelo para tu tarea y especificas `from_pt=True` asรญ ๐Ÿค— Transformers convertirรก el Pytorch checkpoint a un TensorFlow Checkpoint: ```py >>> tf_model = TFDistilBertForSequenceClassification.from_pretrained("path/to/awesome-name-you-picked", from_pt=True) ``` Luego guardas tu nuevo modelo TensorFlow con su nuevo checkpoint: ```py >>> tf_model.save_pretrained("path/to/awesome-name-you-picked") ``` De manera similar, especificas `from_tf=True` para convertir un checkpoint de TensorFlow a Pytorch: ```py >>> pt_model = DistilBertForSequenceClassification.from_pretrained("path/to/awesome-name-you-picked", from_tf=True) >>> pt_model.save_pretrained("path/to/awesome-name-you-picked") ``` Si algรบn modelo estรก disponible en Flax, tambiรฉn puedes convertir un checkpoint de Pytorch a Flax: ```py >>> flax_model = FlaxDistilBertForSequenceClassification.from_pretrained( ... "path/to/awesome-name-you-picked", from_pt=True ... ) ``` ## Compartir un modelo con `Trainer` <Youtube id="Z1-XMy-GNLQ"/> Compartir un modelo al Hub es tan simple como aรฑadir un parรกmetro extra o un callback. Si recuerdas del tutorial de [fine-tuning tutorial](training), la clase [`TrainingArguments`] es donde especificas los Hiperparรกmetros y opciones de entrenamiento adicionales. Una de estas opciones incluye la habilidad de compartir un modelo directamente al Hub. Para ello configuras `push_to_hub=True` dentro de [`TrainingArguments`]: ```py >>> training_args = TrainingArguments(output_dir="my-awesome-model", push_to_hub=True) ``` A continuaciรณn, como usualmente, pasa tus argumentos de entrenamiento a [`Trainer`]: ```py >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=small_train_dataset, ... eval_dataset=small_eval_dataset, ... compute_metrics=compute_metrics, ... 
) ``` Luego que realizas fine-tune a tu modelo, llamas [`~transformers.Trainer.push_to_hub`] en [`Trainer`] para enviar el modelo al Hub!๐Ÿค— Transformers incluso aรฑadirรก automรกticamente los Hiperparรกmetros de entrenamiento, resultados de entrenamiento y versiones del Framework a tu model card! ```py >>> trainer.push_to_hub() ``` ## Compartir un modelo con `PushToHubCallback` Los usuarios de TensorFlow pueden activar la misma funcionalidad con [`PushToHubCallback`]. En la funcion [`PushToHubCallback`], agrega: - Un directorio de salida para tu modelo. - Un tokenizador. - El `hub_model_id`, el cual es tu usuario Hub y el nombre del modelo. ```py >>> from transformers import PushToHubCallback >>> push_to_hub_callback = PushToHubCallback( ... output_dir="./your_model_save_path", tokenizer=tokenizer, hub_model_id="your-username/my-awesome-model" ... ) ``` Agregamos el callback a [`fit`](https://keras.io/api/models/model_training_apis/), y ๐Ÿค— Transformers enviarรก el modelo entrenado al Hub: ```py >>> model.fit(tf_train_dataset, validation_data=tf_validation_dataset, epochs=3, callbacks=push_to_hub_callback) ``` ## Usando la funciรณn `push_to_hub` Puedes llamar la funciรณn `push_to_hub` directamente en tu modelo para subirlo al Hub. Especifica el nombre del modelo en `push_to_hub`: ```py >>> pt_model.push_to_hub("my-awesome-model") ``` Esto crearรก un repositorio bajo tu usuario con el nombre del modelo `my-awesome-model`. Ahora los usuarios pueden cargar tu modelo con la funciรณn `from_pretrained`: ```py >>> from transformers import AutoModel >>> model = AutoModel.from_pretrained("your_username/my-awesome-model") ``` Si perteneces a una organizaciรณn y quieres compartir tu modelo bajo el nombre de la organizaciรณn, aรฑade el parรกmetro `organization`: ```py >>> pt_model.push_to_hub("my-awesome-model", organization="my-awesome-org") ``` La funciรณn `push_to_hub` tambiรฉn puede ser usada para aรฑadir archivos al repositorio del modelo. Por ejemplo, aรฑade un tokenizador al repositorio: ```py >>> tokenizer.push_to_hub("my-awesome-model") ``` O quizรกs te gustarรญa aรฑadir la versiรณn de TensorFlow de tu modelo fine-tuned en Pytorch: ```py >>> tf_model.push_to_hub("my-awesome-model") ``` Ahora, cuando navegues a tu perfil en Hugging Face, deberรญas observar el repositorio de tu modelo creado recientemente. Si das click en el tab **Files** observarรกs todos los archivos que has subido al repositorio. Para mรกs detalles sobre cรณmo crear y subir archivos al repositorio, consulta la [documentaciรณn del Hub](https://huggingface.co/docs/hub/how-to-upstream). ## Compartir con la interfaz web Los usuarios que prefieran un enfoque no-code tienen la opciรณn de cargar su modelo a travรฉs de la interfaz grรกfica del Hub. Visita la pรกgina [huggingface.co/new](https://huggingface.co/new) para crear un nuevo repositorio: ![new_model_repo](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/new_model_repo.png) Desde aquรญ, aรฑade informaciรณn acerca del modelo: - Selecciona el **owner** (la persona propietaria) del repositorio. Puedes ser tรบ o cualquier organizaciรณn a la que pertenezcas. - Escoge un nombre para tu modelo. Tambiรฉn serรก el nombre del repositorio. - Elige si tu modelo es pรบblico o privado. - Especifica la licencia que usarรก tu modelo. Ahora puedes hacer click en el tab **Files** y luego en el botรณn **Add file** para subir un nuevo archivo a tu repositorio. Luego arrastra y suelta un archivo a subir y le aรฑades un mensaje al commit. 
![upload_file](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/upload_file.png) ## Aรฑadiendo una tarjeta de modelo Para asegurarnos que los usuarios entiendan las capacidades de tu modelo, sus limitaciones, posibles sesgos y consideraciones รฉticas, por favor aรฑade una tarjeta (como una tarjeta de presentaciรณn) al repositorio del modelo. La tarjeta de modelo es definida en el archivo `README.md`. Puedes agregar una de la siguiente manera: * Elaborando y subiendo manualmente el archivo`README.md`. * Dando click en el botรณn **Edit model card** dentro del repositorio. Toma un momento para ver la [tarjeta de modelo](https://huggingface.co/distilbert/distilbert-base-uncased) de DistilBert para que tengas un buen ejemplo del tipo de informaciรณn que deberรญa incluir. Consulta [la documentaciรณn](https://huggingface.co/docs/hub/models-cards) para mรกs detalles acerca de otras opciones que puedes controlar dentro del archivo `README.md` como la huella de carbono del modelo o ejemplos de widgets. Consulta la documentaciรณn [aquรญ](https://huggingface.co/docs/hub/models-cards).
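Si prefieres preparar la tarjeta de modelo localmente y subirla por cรณdigo en lugar de usar la interfaz web, una posibilidad (asumiendo que ya iniciaste sesiรณn con `huggingface-cli login` y que el repositorio `your_username/my-awesome-model` ya existe) es usar `upload_file` de la biblioteca `huggingface_hub`:

```py
>>> from huggingface_hub import upload_file

>>> upload_file(
...     path_or_fileobj="./README.md",  # tu tarjeta de modelo escrita localmente
...     path_in_repo="README.md",
...     repo_id="your_username/my-awesome-model",
... )
```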
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/es/custom_models.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Compartir modelos personalizados La biblioteca ๐Ÿค— Transformers estรก diseรฑada para ser fรกcilmente ampliable. Cada modelo estรก completamente codificado sin abstracciรณn en una subcarpeta determinada del repositorio, por lo que puedes copiar fรกcilmente un archivo del modelo y ajustarlo segรบn tus necesidades. Si estรกs escribiendo un modelo completamente nuevo, podrรญa ser mรกs fรกcil comenzar desde cero. En este tutorial, te mostraremos cรณmo escribir un modelo personalizado y su configuraciรณn para que pueda usarse dentro de Transformers, y cรณmo puedes compartirlo con la comunidad (con el cรณdigo en el que se basa) para que cualquiera pueda usarlo, incluso si no estรก presente en la biblioteca ๐Ÿค— Transformers. Ilustraremos todo esto con un modelo ResNet, envolviendo la clase ResNet de la [biblioteca timm](https://github.com/rwightman/pytorch-image-models) en un [`PreTrainedModel`]. ## Escribir una configuraciรณn personalizada Antes de adentrarnos en el modelo, primero escribamos su configuraciรณn. La configuraciรณn de un modelo es un objeto que contendrรก toda la informaciรณn necesaria para construir el modelo. Como veremos en la siguiente secciรณn, el modelo solo puede tomar un `config` para ser inicializado, por lo que realmente necesitamos que ese objeto estรฉ lo mรกs completo posible. En nuestro ejemplo, tomaremos un par de argumentos de la clase ResNet que tal vez queramos modificar. Las diferentes configuraciones nos darรกn los diferentes tipos de ResNet que son posibles. Luego simplemente almacenamos esos argumentos despuรฉs de verificar la validez de algunos de ellos. 
```python from transformers import PretrainedConfig from typing import List class ResnetConfig(PretrainedConfig): model_type = "resnet" def __init__( self, block_type="bottleneck", layers: List[int] = [3, 4, 6, 3], num_classes: int = 1000, input_channels: int = 3, cardinality: int = 1, base_width: int = 64, stem_width: int = 64, stem_type: str = "", avg_down: bool = False, **kwargs, ): if block_type not in ["basic", "bottleneck"]: raise ValueError(f"`block_type` must be 'basic' or bottleneck', got {block_type}.") if stem_type not in ["", "deep", "deep-tiered"]: raise ValueError(f"`stem_type` must be '', 'deep' or 'deep-tiered', got {stem_type}.") self.block_type = block_type self.layers = layers self.num_classes = num_classes self.input_channels = input_channels self.cardinality = cardinality self.base_width = base_width self.stem_width = stem_width self.stem_type = stem_type self.avg_down = avg_down super().__init__(**kwargs) ``` Las tres cosas importantes que debes recordar al escribir tu propia configuraciรณn son las siguientes: - tienes que heredar de `PretrainedConfig`, - el `__init__` de tu `PretrainedConfig` debe aceptar cualquier `kwargs`, - esos `kwargs` deben pasarse a la superclase `__init__`. La herencia es para asegurarte de obtener toda la funcionalidad de la biblioteca ๐Ÿค— Transformers, mientras que las otras dos restricciones provienen del hecho de que una `PretrainedConfig` tiene mรกs campos que los que estรกs configurando. Al recargar una `config` con el mรฉtodo `from_pretrained`, esos campos deben ser aceptados por tu `config` y luego enviados a la superclase. Definir un `model_type` para tu configuraciรณn (en este caso `model_type="resnet"`) no es obligatorio, a menos que quieras registrar tu modelo con las clases automรกticas (ver la รบltima secciรณn). Una vez hecho esto, puedes crear y guardar fรกcilmente tu configuraciรณn como lo harรญas con cualquier otra configuraciรณn de un modelo de la biblioteca. Asรญ es como podemos crear una configuraciรณn resnet50d y guardarla: ```py resnet50d_config = ResnetConfig(block_type="bottleneck", stem_width=32, stem_type="deep", avg_down=True) resnet50d_config.save_pretrained("custom-resnet") ``` Esto guardarรก un archivo llamado `config.json` dentro de la carpeta `custom-resnet`. Luego puedes volver a cargar tu configuraciรณn con el mรฉtodo `from_pretrained`: ```py resnet50d_config = ResnetConfig.from_pretrained("custom-resnet") ``` Tambiรฉn puedes usar cualquier otro mรฉtodo de la clase [`PretrainedConfig`], como [`~PretrainedConfig.push_to_hub`], para cargar directamente tu configuraciรณn en el Hub. ## Escribir un modelo personalizado Ahora que tenemos nuestra configuraciรณn de ResNet, podemos seguir escribiendo el modelo. En realidad escribiremos dos: una que extrae las caracterรญsticas ocultas de un grupo de imรกgenes (como [`BertModel`]) y una que es adecuada para clasificaciรณn de imagenes (como [`BertForSequenceClassification`]). Como mencionamos antes, solo escribiremos un envoltura (_wrapper_) libre del modelo para simplificar este ejemplo. Lo รบnico que debemos hacer antes de escribir esta clase es un mapeo entre los tipos de bloques y las clases de bloques reales. 
Luego se define el modelo desde la configuraciรณn pasando todo a la clase `ResNet`:

```py
from transformers import PreTrainedModel
from timm.models.resnet import BasicBlock, Bottleneck, ResNet
from .configuration_resnet import ResnetConfig


BLOCK_MAPPING = {"basic": BasicBlock, "bottleneck": Bottleneck}


class ResnetModel(PreTrainedModel):
    config_class = ResnetConfig

    def __init__(self, config):
        super().__init__(config)
        block_layer = BLOCK_MAPPING[config.block_type]
        self.model = ResNet(
            block_layer,
            config.layers,
            num_classes=config.num_classes,
            in_chans=config.input_channels,
            cardinality=config.cardinality,
            base_width=config.base_width,
            stem_width=config.stem_width,
            stem_type=config.stem_type,
            avg_down=config.avg_down,
        )

    def forward(self, tensor):
        return self.model.forward_features(tensor)
```

Para el modelo que clasificarรก las imรกgenes, solo cambiamos el mรฉtodo de avance (es decir, el mรฉtodo `forward`):

```py
import torch


class ResnetModelForImageClassification(PreTrainedModel):
    config_class = ResnetConfig

    def __init__(self, config):
        super().__init__(config)
        block_layer = BLOCK_MAPPING[config.block_type]
        self.model = ResNet(
            block_layer,
            config.layers,
            num_classes=config.num_classes,
            in_chans=config.input_channels,
            cardinality=config.cardinality,
            base_width=config.base_width,
            stem_width=config.stem_width,
            stem_type=config.stem_type,
            avg_down=config.avg_down,
        )

    def forward(self, tensor, labels=None):
        logits = self.model(tensor)
        if labels is not None:
            loss = torch.nn.functional.cross_entropy(logits, labels)
            return {"loss": loss, "logits": logits}
        return {"logits": logits}
```

En ambos casos, observa cรณmo heredamos de `PreTrainedModel` y llamamos a la inicializaciรณn de la superclase con `config` (un poco como cuando escribes un `torch.nn.Module` normal). La lรญnea que establece `config_class` no es obligatoria, a menos que quieras registrar tu modelo con las clases automรกticas (consulta la รบltima secciรณn).

<Tip>

Si tu modelo es muy similar a un modelo dentro de la biblioteca, puedes reutilizar la misma configuraciรณn de ese modelo.

</Tip>

Puedes hacer que tu modelo devuelva lo que quieras, pero devolver un diccionario como lo hicimos para `ResnetModelForImageClassification`, con el `loss` incluido cuando se pasan las etiquetas, harรก que tu modelo se pueda usar directamente dentro de la clase [`Trainer`]. Usar otro formato de salida estรก bien, siempre y cuando estรฉs planeando usar tu propio bucle de entrenamiento u otra biblioteca para el entrenamiento.

Ahora que tenemos nuestra clase, vamos a crear un modelo:

```py
resnet50d = ResnetModelForImageClassification(resnet50d_config)
```

Nuevamente, puedes usar cualquiera de los mรฉtodos de [`PreTrainedModel`], como [`~PreTrainedModel.save_pretrained`] o [`~PreTrainedModel.push_to_hub`]. Usaremos el segundo en la siguiente secciรณn y veremos cรณmo pasar los pesos del modelo con el cรณdigo de nuestro modelo. Pero primero, carguemos algunos pesos previamente entrenados dentro de nuestro modelo.

En tu caso de uso, probablemente estarรกs entrenando tu modelo personalizado con tus propios datos. Para ir rรกpido en este tutorial, usaremos la versiรณn preentrenada de resnet50d.
Dado que nuestro modelo es solo un envoltorio alrededor del resnet50d original, serรก fรกcil transferir esos pesos: ```py import timm pretrained_model = timm.create_model("resnet50d", pretrained=True) resnet50d.model.load_state_dict(pretrained_model.state_dict()) ``` Ahora veamos cรณmo asegurarnos de que cuando hacemos [`~PreTrainedModel.save_pretrained`] o [`~PreTrainedModel.push_to_hub`], se guarda el cรณdigo del modelo. ## Enviar el cรณdigo al _Hub_ <Tip warning={true}> Esta _API_ es experimental y puede tener algunos cambios leves en las prรณximas versiones. </Tip> Primero, asegรบrate de que tu modelo estรฉ completamente definido en un archivo `.py`. Puedes basarte en importaciones relativas a otros archivos, siempre que todos los archivos estรฉn en el mismo directorio (aรบn no admitimos submรณdulos para esta caracterรญstica). Para nuestro ejemplo, definiremos un archivo `modeling_resnet.py` y un archivo `configuration_resnet.py` en una carpeta del directorio de trabajo actual llamado `resnet_model`. El archivo de configuraciรณn contiene el cรณdigo de `ResnetConfig` y el archivo del modelo contiene el cรณdigo de `ResnetModel` y `ResnetModelForImageClassification`. ``` . โ””โ”€โ”€ resnet_model โ”œโ”€โ”€ __init__.py โ”œโ”€โ”€ configuration_resnet.py โ””โ”€โ”€ modeling_resnet.py ``` El `__init__.py` puede estar vacรญo, solo estรก ahรญ para que Python detecte que `resnet_model` se puede usar como un mรณdulo. <Tip warning={true}> Si copias archivos del modelo desde la biblioteca, deberรกs reemplazar todas las importaciones relativas en la parte superior del archivo para importarlos desde el paquete `transformers`. </Tip> Ten en cuenta que puedes reutilizar (o subclasificar) una configuraciรณn o modelo existente. Para compartir tu modelo con la comunidad, sigue estos pasos: primero importa el modelo y la configuraciรณn de ResNet desde los archivos reciรฉn creados: ```py from resnet_model.configuration_resnet import ResnetConfig from resnet_model.modeling_resnet import ResnetModel, ResnetModelForImageClassification ``` Luego, debes decirle a la biblioteca que deseas copiar el cรณdigo de esos objetos cuando usas el mรฉtodo `save_pretrained` y registrarlos correctamente con una determinada clase automรกtica (especialmente para modelos), simplemente ejecuta: ```py ResnetConfig.register_for_auto_class() ResnetModel.register_for_auto_class("AutoModel") ResnetModelForImageClassification.register_for_auto_class("AutoModelForImageClassification") ``` Ten en cuenta que no es necesario especificar una clase automรกtica para la configuraciรณn (solo hay una clase automรกtica para ellos, [`AutoConfig`]), pero es diferente para los modelos. Tu modelo personalizado podrรญa ser adecuado para muchas tareas diferentes, por lo que debes especificar cuรกl de las clases automรกticas es la correcta para tu modelo. A continuaciรณn, vamos a crear la configuraciรณn y los modelos como lo hicimos antes: ```py resnet50d_config = ResnetConfig(block_type="bottleneck", stem_width=32, stem_type="deep", avg_down=True) resnet50d = ResnetModelForImageClassification(resnet50d_config) pretrained_model = timm.create_model("resnet50d", pretrained=True) resnet50d.model.load_state_dict(pretrained_model.state_dict()) ``` Ahora, para enviar el modelo al Hub, asegรบrate de haber iniciado sesiรณn. 
Ejecuta en tu terminal: ```bash huggingface-cli login ``` o desde un _notebook_: ```py from huggingface_hub import notebook_login notebook_login() ``` Luego puedes ingresar a tu propio espacio (o una organizaciรณn de la que seas miembro) de esta manera: ```py resnet50d.push_to_hub("custom-resnet50d") ``` Ademรกs de los pesos del modelo y la configuraciรณn en formato json, esto tambiรฉn copiรณ los archivos `.py` del modelo y la configuraciรณn en la carpeta `custom-resnet50d` y subiรณ el resultado al Hub. Puedes verificar el resultado en este [repositorio de modelos](https://huggingface.co/sgugger/custom-resnet50d). Consulta el tutorial sobre cรณmo [compartir modelos](model_sharing) para obtener mรกs informaciรณn sobre el mรฉtodo para subir modelos al Hub. ## Usar un modelo con cรณdigo personalizado Puedes usar cualquier configuraciรณn, modelo o _tokenizador_ con archivos de cรณdigo personalizado en tu repositorio con las clases automรกticas y el mรฉtodo `from_pretrained`. Todos los archivos y cรณdigos cargados en el Hub se analizan en busca de malware (consulta la documentaciรณn de [seguridad del Hub](https://huggingface.co/docs/hub/security#malware-scanning) para obtener mรกs informaciรณn), pero aรบn debes revisar el cรณdigo del modelo y el autor para evitar la ejecuciรณn de cรณdigo malicioso en tu computadora. Configura `trust_remote_code=True` para usar un modelo con cรณdigo personalizado: ```py from transformers import AutoModelForImageClassification model = AutoModelForImageClassification.from_pretrained("sgugger/custom-resnet50d", trust_remote_code=True) ``` Tambiรฉn se recomienda encarecidamente pasar un _hash_ de confirmaciรณn como una "revisiรณn" para asegurarte de que el autor de los modelos no actualizรณ el cรณdigo con algunas lรญneas nuevas maliciosas (a menos que confรญes plenamente en los autores de los modelos). ```py commit_hash = "ed94a7c6247d8aedce4647f00f20de6875b5b292" model = AutoModelForImageClassification.from_pretrained( "sgugger/custom-resnet50d", trust_remote_code=True, revision=commit_hash ) ``` Ten en cuenta que al navegar por el historial de confirmaciones del repositorio del modelo en Hub, hay un botรณn para copiar fรกcilmente el hash de confirmaciรณn de cualquier _commit_. ## Registrar un model con cรณdigo personalizado a las clases automรกticas Si estรกs escribiendo una biblioteca que amplรญa ๐Ÿค— Transformers, es posible que quieras ampliar las clases automรกticas para incluir tu propio modelo. Esto es diferente de enviar el cรณdigo al Hub en el sentido de que los usuarios necesitarรกn importar tu biblioteca para obtener los modelos personalizados (al contrario de descargar automรกticamente el cรณdigo del modelo desde Hub). 
Siempre que tu configuraciรณn tenga un atributo `model_type` que sea diferente de los tipos de modelos existentes, y que tus clases de modelo tengan los atributos `config_class` correctos, puedes agregarlos a las clases automรกticas de la siguiente manera:

```py
from transformers import AutoConfig, AutoModel, AutoModelForImageClassification

AutoConfig.register("resnet", ResnetConfig)
AutoModel.register(ResnetConfig, ResnetModel)
AutoModelForImageClassification.register(ResnetConfig, ResnetModelForImageClassification)
```

Ten en cuenta que el primer argumento utilizado al registrar tu configuraciรณn personalizada en [`AutoConfig`] debe coincidir con el `model_type` de tu configuraciรณn personalizada, y el primer argumento utilizado al registrar tus modelos personalizados en cualquier clase de modelo automรกtico debe coincidir con el `config_class` de esos modelos.
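Una forma sencilla de comprobar que el registro funcionรณ (boceto ilustrativo; el nombre de la carpeta `custom-resnet50d-local` es solo un ejemplo) es guardar el modelo personalizado en disco y volver a cargarlo mediante las clases automรกticas, que ahora reconocen el `model_type` `"resnet"`:

```py
from transformers import AutoConfig, AutoModelForImageClassification

# guarda el modelo personalizado creado anteriormente en una carpeta local
resnet50d.save_pretrained("custom-resnet50d-local")

# las clases automรกticas encuentran ResnetConfig y ResnetModelForImageClassification
# gracias al registro anterior
config = AutoConfig.from_pretrained("custom-resnet50d-local")
model = AutoModelForImageClassification.from_pretrained("custom-resnet50d-local")
```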
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/es/perplexity.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Perplejidad de los modelos de longitud fija [[open-in-colab]] La perplejidad, perplexity en inglรฉs (PPL), es una de las mรฉtricas mรกs comunes para evaluar modelos de lenguaje. Antes de sumergirnos, debemos tener en cuenta que esta mรฉtrica se aplica especรญficamente a modelos de lenguaje clรกsicos (a veces llamados modelos autorregresivos o causales) y no estรก bien definida para modelos de lenguaje enmascarados como BERT (ver [resumen del modelo](model_summary)). La perplejidad se define como la media negativa exponenciada del log-likelihood de una secuencia. Si tenemos una secuencia tokenizada \\(X = (x_0, x_1, \dots, x_t)\\), entonces la perplejidad de \\(X\\) es, $$\text{PPL}(X) = \exp \left\{ {-\frac{1}{t}\sum_i^t \log p_\theta (x_i|x_{<i}) } \right\}$$ donde \\(\log p_\theta (x_i|x_{<i})\\) es el log-likelihood del token i-รฉsimo condicionado a los tokens precedentes \\(x_{<i}\\) segรบn nuestro modelo. De manera intuitiva, se puede pensar en esto como una evaluaciรณn de la capacidad del modelo para predecir de manera uniforme entre el conjunto de tokens especificados en un corpus. Es importante destacar que el procedimiento de tokenizaciรณn tiene un impacto directo en la perplejidad de un modelo, lo cual siempre debe tenerse en cuenta al comparar diferentes modelos. Esto tambiรฉn es equivalente a la exponenciaciรณn de la entropรญa cruzada entre los datos y las predicciones del modelo. Para obtener mรกs intuiciรณn sobre la perplejidad y su relaciรณn con los Bits Por Carรกcter (BPC) y la compresiรณn de datos, echa un vistazo a esta [fantรกstica publicaciรณn en el blog de "The Gradient"](https://thegradient.pub/understanding-evaluation-metrics-for-language-models/). ## Cรกlculo de PPL con modelos de longitud fija Si no estuviรฉramos limitados por el tamaรฑo del contexto de un modelo, evaluarรญamos la perplejidad (PPL) del modelo auto regresivamente factorizando una secuencia y condicionรกndonos en toda la subsecuencia precedente en cada paso, como se muestra a continuaciรณn. <img width="600" alt="Full decomposition of a sequence with unlimited context length" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/ppl_full.gif"/> Sin embargo, al trabajar con modelos aproximados, generalmente tenemos una restricciรณn en la cantidad de tokens que el modelo puede procesar. La versiรณn mรกs grande de [GPT-2](model_doc/gpt2), por ejemplo, tiene una longitud fija de 1024 tokens, por lo que no podemos calcular \\(p_\theta(x_t|x_{<t})\\) directamente cuando \\(t\\) es mayor que 1024. En cambio, la secuencia se divide tรญpicamente en subsecuencias iguales al tamaรฑo mรกximo de entrada del modelo. 
Si el tamaรฑo mรกximo de entrada, de un modelo es \\(k\\), entonces aproximamos la probabilidad de un token \\(x_t\\) condicionรกndonos solo en los \\(k-1\\) tokens que lo preceden en lugar de todo el contexto. Al evaluar la perplejidad del modelo en una secuencia, un enfoque tentador pero sub รณptimo es dividir la secuencia en fragmentos independientes y sumar los log-likelihood descompuestos de cada segmento de manera independiente. <img width="600" alt="Suboptimal PPL not taking advantage of full available context" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/ppl_chunked.gif"/> Esto es rรกpido de calcular, ya que la perplejidad de cada segmento se puede calcular en un solo pase hacia adelante, pero sirve como una aproximaciรณn pobre de la perplejidad completamente factorizada y generalmente darรก como resultado una PPL mรกs alta (peor) porque el modelo tendrรก menos contexto en la mayorรญa de los pasos de predicciรณn. En cambio, la PPL de modelos de longitud fija deberรญa evaluarse con una estrategia de ventana deslizante. Esto implica deslizar repetidamente la ventana de contexto para que el modelo tenga mรกs contexto al hacer cada predicciรณn. <img width="600" alt="Sliding window PPL taking advantage of all available context" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/ppl_sliding.gif"/> Esta es una aproximaciรณn mรกs cercana a la verdadera descomposiciรณn de la probabilidad de la secuencia y generalmente darรก como resultado una puntuaciรณn mรกs favorable. La desventaja es que requiere un pase hacia adelante separado para cada token en el corpus. Un buen compromiso prรกctico es emplear una ventana deslizante estratificada, moviendo el contexto con pasos mรกs grandes en lugar de deslizarse de 1 token a la vez. Esto permite que la computaciรณn avance mucho mรกs rรกpido, mientras le da al modelo un contexto amplio para hacer predicciones en cada paso. ## Ejemplo: Cรกlculo de la perplejidad con GPT-2 en ๐Ÿค— Transformers Demostremos este proceso con GPT-2. ```python from transformers import GPT2LMHeadModel, GPT2TokenizerFast device = "cuda" model_id = "openai-community/gpt2-large" model = GPT2LMHeadModel.from_pretrained(model_id).to(device) tokenizer = GPT2TokenizerFast.from_pretrained(model_id) ``` Carguemos el conjunto de datos WikiText-2 y evaluemos la perplejidad utilizando algunas estrategias de ventana deslizante diferentes. Dado que este conjunto de datos es pequeรฑo y solo estamos realizando un pase hacia adelante sobre el conjunto, podemos cargar y codificar todo el conjunto de datos en la memoria. ```python from datasets import load_dataset test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test") encodings = tokenizer("\n\n".join(test["text"]), return_tensors="pt") ``` Con ๐Ÿค— Transformers, simplemente podemos pasar los `input_ids` como las `labels` a nuestro modelo, y la media negativa del log-likelihood para cada token se devuelve como la pรฉrdida. Sin embargo, con nuestro enfoque de ventana deslizante, hay superposiciรณn en los tokens que pasamos al modelo en cada iteraciรณn. No queremos que el log-likelihood de los tokens que estamos tratando solo como contexto se incluya en nuestra pรฉrdida, por lo que podemos establecer estos objetivos en `-100` para que se ignoren. El siguiente es un ejemplo de cรณmo podrรญamos hacer esto con un paso de `512`. 
Esto significa que el modelo tendrรก al menos `512` tokens como contexto al calcular el log-likelihood condicional de cualquier token (siempre que haya `512` tokens precedentes disponibles para condicionar).

```python
import torch
from tqdm import tqdm

max_length = model.config.n_positions
stride = 512
seq_len = encodings.input_ids.size(1)

nlls = []
prev_end_loc = 0
for begin_loc in tqdm(range(0, seq_len, stride)):
    end_loc = min(begin_loc + max_length, seq_len)
    trg_len = end_loc - prev_end_loc  # puede ser diferente del paso en el รบltimo bucle
    input_ids = encodings.input_ids[:, begin_loc:end_loc].to(device)
    target_ids = input_ids.clone()
    target_ids[:, :-trg_len] = -100

    with torch.no_grad():
        outputs = model(input_ids, labels=target_ids)

        # la pรฉrdida se calcula utilizando CrossEntropyLoss, que promedia sobre las etiquetas vรกlidas
        # N.B. el modelo solo calcula la pรฉrdida sobre trg_len - 1 etiquetas, porque desplaza las etiquetas
        # internamente una posiciรณn a la izquierda
        neg_log_likelihood = outputs.loss

    nlls.append(neg_log_likelihood)

    prev_end_loc = end_loc
    if end_loc == seq_len:
        break

ppl = torch.exp(torch.stack(nlls).mean())
```

Ejecutar esto con una longitud de paso igual a la longitud mรกxima de entrada es equivalente a la estrategia subรณptima, sin ventana deslizante, que discutimos anteriormente. Cuanto menor sea el paso, mรกs contexto tendrรก el modelo para realizar cada predicciรณn y, por lo general, mejor serรก la perplejidad informada.

Cuando ejecutamos lo anterior con `stride = 1024`, es decir, sin superposiciรณn, la PPL resultante es `19.44`, que es aproximadamente la misma que la `19.93` informada en el artรญculo de GPT-2. Al utilizar `stride = 512` y, por lo tanto, emplear nuestra estrategia de ventana deslizante, esto disminuye a `16.45`. Esto no solo es una puntuaciรณn mรกs favorable, sino que se calcula de una manera mรกs cercana a la verdadera descomposiciรณn autorregresiva de la probabilidad de una secuencia.
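Si quieres comparar varias longitudes de paso sin repetir el cรณdigo, puedes encapsular el bucle anterior en una funciรณn auxiliar. El siguiente es un boceto ilustrativo (el nombre `compute_ppl` es hipotรฉtico) que reutiliza el `model`, los `encodings` y el `device` definidos arriba:

```python
import torch
from tqdm import tqdm

def compute_ppl(model, encodings, stride, max_length, device="cuda"):
    seq_len = encodings.input_ids.size(1)
    nlls = []
    prev_end_loc = 0
    for begin_loc in tqdm(range(0, seq_len, stride)):
        end_loc = min(begin_loc + max_length, seq_len)
        trg_len = end_loc - prev_end_loc
        input_ids = encodings.input_ids[:, begin_loc:end_loc].to(device)
        target_ids = input_ids.clone()
        target_ids[:, :-trg_len] = -100  # ignora los tokens que solo sirven de contexto
        with torch.no_grad():
            nlls.append(model(input_ids, labels=target_ids).loss)
        prev_end_loc = end_loc
        if end_loc == seq_len:
            break
    return torch.exp(torch.stack(nlls).mean())

# compara la estrategia sin superposiciรณn con la ventana deslizante
for stride in (1024, 512):
    ppl = compute_ppl(model, encodings, stride, max_length=model.config.n_positions)
    print(f"stride={stride}: ppl={ppl.item():.2f}")
```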
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/es/glossary.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Glosario Este glosario define tรฉrminos generales de aprendizaje automรกtico y tรฉrminos relacionados con ๐Ÿค— Transformers para ayudarte a comprender mejor la documentaciรณn. ## A ### attention mask La mรกscara de atenciรณn es un argumento opcional utilizado al agrupar secuencias. <Youtube id="M6adb1j2jPI"/> Este argumento indica al modelo quรฉ tokens deben recibir atenciรณn y cuรกles no. Por ejemplo, considera estas dos secuencias: ```python >>> from transformers import BertTokenizer >>> tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-cased") >>> sequence_a = "This is a short sequence." >>> sequence_b = "This is a rather long sequence. It is at least longer than the sequence A." >>> encoded_sequence_a = tokenizer(sequence_a)["input_ids"] >>> encoded_sequence_b = tokenizer(sequence_b)["input_ids"] ``` Las versiones codificadas tienen longitudes diferentes: ```python >>> len(encoded_sequence_a), len(encoded_sequence_b) (8, 19) ``` Por lo tanto, no podemos colocarlas juntas en el mismo tensor tal cual. La primera secuencia necesita ser rellenada hasta la longitud de la segunda, o la segunda necesita ser truncada hasta la longitud de la primera. En el primer caso, la lista de IDs se extenderรก con los รญndices de relleno. Podemos pasar una lista al tokenizador y pedirle que realice el relleno de esta manera: ```python >>> padded_sequences = tokenizer([sequence_a, sequence_b], padding=True) ``` Podemos ver que se han agregado ceros a la derecha de la primera oraciรณn para que tenga la misma longitud que la segunda: ```python >>> padded_sequences["input_ids"] [[101, 1188, 1110, 170, 1603, 4954, 119, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [101, 1188, 1110, 170, 1897, 1263, 4954, 119, 1135, 1110, 1120, 1655, 2039, 1190, 1103, 4954, 138, 119, 102]] ``` Esto luego se puede convertir en un tensor en PyTorch o TensorFlow. La mรกscara de atenciรณn es un tensor binario que indica la posiciรณn de los รญndices de relleno para que el modelo no los tenga en cuenta. Para el [`BertTokenizer`], `1` indica un valor al que se debe prestar atenciรณn, mientras que `0` indica un valor de relleno. 
Esta mรกscara de atenciรณn estรก en el diccionario devuelto por el tokenizador bajo la clave "attention_mask": ```python >>> padded_sequences["attention_mask"] [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]] ``` ### autoencoding models Consulta [modelos de codificaciรณn](#encoder-models) y [modelado de lenguaje enmascarado](#masked-language-modeling-mlm) ### autoregressive models Consulta [modelado de lenguaje causal](#causal-language-modeling) y [modelos de decodificaciรณn](#decoder-models) ## B ### backbone La columna vertebral, backbone en inglรฉs, es la red (embeddings y layers) que produce los estados ocultos o caracterรญsticas crudas. Normalmente, estรก conectado a una [cabecera](#head), que acepta las caracterรญsticas como entrada para hacer una predicciรณn. Por ejemplo, [`ViTModel`] es una columna vertebral sin una cabecera especรญfica encima. Otros modelos tambiรฉn pueden usar [`VitModel`] como columna vertebral, como por ejemplo [DPT](model_doc/dpt). ## C ### causal language modeling Una tarea de preentrenamiento donde el modelo lee los textos en orden y tiene que predecir la siguiente palabra. Generalmente, se realiza leyendo toda la oraciรณn, pero utilizando una mรกscara dentro del modelo para ocultar los tokens futuros en un cierto paso de tiempo. ### channel Las imรกgenes a color estรกn compuestas por alguna combinaciรณn de valores en tres canales: rojo, verde y azul (RGB), y las imรกgenes en escala de grises solo tienen un canal. En ๐Ÿค— Transformers, el canal puede ser la primera o รบltima dimensiรณn del tensor de una imagen: [`n_channels`, `height`, `width`] o [`height`, `width`, `n_channels`]. ### connectionist temporal classification (CTC) Un algoritmo que permite que un modelo aprenda sin saber exactamente cรณmo estรกn alineadas la entrada y la salida; CTC calcula la distribuciรณn de todas las salidas posibles para una entrada dada y elige la salida mรกs probable de ella. CTC se utiliza comรบnmente en tareas de reconocimiento de voz porque el habla no siempre se alinea perfectamente con la transcripciรณn debido a diversas razones, como las diferentes velocidades de habla de los oradores. ### convolution Un tipo de capa en una red neuronal donde la matriz de entrada se multiplica elemento por elemento por una matriz mรกs pequeรฑa (nรบcleo o filtro) y los valores se suman en una nueva matriz. Esto se conoce como una operaciรณn de convoluciรณn que se repite sobre toda la matriz de entrada. Cada operaciรณn se aplica a un segmento diferente de la matriz de entrada. Las redes neuronales convolucionales (CNN) se utilizan comรบnmente en visiรณn por computadora. ## D ### DataParallel (DP) Tรฉcnica de paralelismo para entrenamiento en mรบltiples GPUs donde se replica la misma configuraciรณn varias veces, con cada instancia recibiendo una porciรณn de datos รบnica. El procesamiento se realiza en paralelo y todas las configuraciones se sincronizan al final de cada paso de entrenamiento. Obtรฉn mรกs informaciรณn sobre cรณmo funciona el DataParallel [aquรญ](perf_train_gpu_many#dataparallel-vs-distributeddataparallel). ### decoder input IDs Esta entrada es especรญfica para modelos codificador-decodificador y contiene los IDs de entrada que se enviarรกn al decodificador. Estas entradas deben usarse para tareas de secuencia a secuencia, como traducciรณn o resumen, y generalmente se construyen de una manera especรญfica para cada modelo. 
La mayorรญa de los modelos codificador-decodificador (BART, T5) crean sus `decoder_input_ids` por sรญ mismos a partir de las `labels`. En tales modelos, pasar las `labels` es la forma preferida de manejar el entrenamiento. Consulta la documentaciรณn de cada modelo para ver cรณmo manejan estos IDs de entrada para el entrenamiento de secuencia a secuencia. ### decoder models Tambiรฉn conocidos como modelos autorregresivos, los modelos decodificadores involucran una tarea de preentrenamiento (llamada modelado de lenguaje causal) donde el modelo lee los textos en orden y tiene que predecir la siguiente palabra. Generalmente, se realiza leyendo la oraciรณn completa con una mรกscara para ocultar los tokens futuros en un cierto paso de tiempo. <Youtube id="d_ixlCubqQw"/> ### deep learning (DL) Algoritmos de aprendizaje automรกtico que utilizan redes neuronales con varias capas. ## E ### encoder models Tambiรฉn conocidos como modelos de codificaciรณn automรกtica (autoencoding models), los modelos codificadores toman una entrada (como texto o imรกgenes) y las transforman en una representaciรณn numรฉrica condensada llamada embedding. A menudo, los modelos codificadores se entrenan previamente utilizando tรฉcnicas como el [modelado de lenguaje enmascarado](#masked-language-modeling-mlm), que enmascara partes de la secuencia de entrada y obliga al modelo a crear representaciones mรกs significativas. <Youtube id="H39Z_720T5s"/> ## F ### feature extraction El proceso de seleccionar y transformar datos crudos en un conjunto de caracterรญsticas mรกs informativas y รบtiles para algoritmos de aprendizaje automรกtico. Algunos ejemplos de extracciรณn de caracterรญsticas incluyen transformar texto crudo en embeddings de palabras y extraer caracterรญsticas importantes como bordes o formas de datos de imรกgenes/videos. ### feed forward chunking En cada bloque de atenciรณn residual en los transformadores, la capa de autoatenciรณn suele ir seguida de 2 capas de avance. El tamaรฑo de embedding intermedio de las capas de avance suele ser mayor que el tamaรฑo oculto del modelo (por ejemplo, para `google-bert/bert-base-uncased`). Para una entrada de tamaรฑo `[batch_size, sequence_length]`, la memoria requerida para almacenar los embeddings intermedios de avance `[batch_size, sequence_length, config.intermediate_size]` puede representar una gran fracciรณn del uso de memoria. Los autores de [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) observaron que, dado que el cรกlculo es independiente de la dimensiรณn `sequence_length`, es matemรกticamente equivalente calcular los embeddings de salida de ambas capas de avance `[batch_size, config.hidden_size]_0, ..., [batch_size, config.hidden_size]_n` individualmente y concatenarlos despuรฉs a `[batch_size, sequence_length, config.hidden_size]` con `n = sequence_length`, lo que intercambia el aumento del tiempo de cรกlculo por una reducciรณn en el uso de memoria, pero produce un resultado matemรกticamente **equivalente**. Para modelos que utilizan la funciรณn [`apply_chunking_to_forward`], el `chunk_size` define el nรบmero de embeddings de salida que se calculan en paralelo y, por lo tanto, define el equilibrio entre la complejidad de memoria y tiempo. Si `chunk_size` se establece en 0, no se realiza ninguna fragmentaciรณn de avance. 
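A modo de ilustraciรณn, este boceto mรญnimo (suponiendo que [`apply_chunking_to_forward`] puede importarse desde `transformers.pytorch_utils`; los tamaรฑos y el `chunk_size` son valores de ejemplo) muestra cรณmo una capa de avance puede calcularse por fragmentos sobre la dimensiรณn de la secuencia:

```python
import torch
from torch import nn
from transformers.pytorch_utils import apply_chunking_to_forward


class ChunkedFeedForward(nn.Module):
    def __init__(self, hidden_size=768, intermediate_size=3072, chunk_size=64):
        super().__init__()
        self.dense_in = nn.Linear(hidden_size, intermediate_size)
        self.dense_out = nn.Linear(intermediate_size, hidden_size)
        self.chunk_size = chunk_size  # con 0 no se realiza fragmentaciรณn
        self.seq_len_dim = 1  # dimensiรณn de sequence_length en [batch, seq, hidden]

    def feed_forward(self, hidden_states):
        return self.dense_out(nn.functional.gelu(self.dense_in(hidden_states)))

    def forward(self, hidden_states):
        # Calcula los embeddings intermedios en fragmentos de `chunk_size` posiciones,
        # reduciendo el pico de memoria y produciendo el mismo resultado matemรกtico.
        return apply_chunking_to_forward(
            self.feed_forward, self.chunk_size, self.seq_len_dim, hidden_states
        )


hidden_states = torch.randn(2, 128, 768)
print(ChunkedFeedForward()(hidden_states).shape)  # torch.Size([2, 128, 768])
```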
### finetuned models El ajuste fino es una forma de transferencia de aprendizaje que implica tomar un modelo entrenado previamente, congelar sus pesos y reemplazar la capa de salida con una nueva [cabecera de modelo](#head) reciรฉn aรฑadida. La cabecera del modelo se entrena en tu conjunto de datos objetivo. Consulta el tutorial [Ajustar finamente un modelo pre-entrenado](https://huggingface.co/docs/transformers/training) para obtener mรกs detalles y aprende cรณmo ajustar finamente modelos con ๐Ÿค— Transformers. ## H ### head La cabecera del modelo se refiere a la รบltima capa de una red neuronal que acepta los estados ocultos crudos y los proyecta en una dimensiรณn diferente. Hay una cabecera de modelo diferente para cada tarea. Por ejemplo: * [`GPT2ForSequenceClassification`] es una cabecera de clasificaciรณn de secuencias, es decir, una capa lineal, encima del modelo base [`GPT2Model`]. * [`ViTForImageClassification`] es una cabecera de clasificaciรณn de imรกgenes, es decir, una capa lineal encima del estado oculto final del token `CLS`, encima del modelo base [`ViTModel`]. * [`Wav2Vec2ForCTC`] es una cabecera de modelado de lenguaje con [CTC](#connectionist-temporal-classification-ctc) encima del modelo base [`Wav2Vec2Model`]. ## I ### image patch Los modelos de Transformers basados en visiรณn dividen una imagen en parches mรกs pequeรฑos que se incorporan linealmente y luego se pasan como una secuencia al modelo. Puedes encontrar el `patch_size` (o resoluciรณn del modelo) en su configuraciรณn. ### inference La inferencia es el proceso de evaluar un modelo en nuevos datos despuรฉs de completar el entrenamiento. Consulta el tutorial [Pipeline for inference](https://huggingface.co/docs/transformers/pipeline_tutorial) para aprender cรณmo realizar inferencias con ๐Ÿค— Transformers. ### input IDs Los IDs de entrada a menudo son los รบnicos parรกmetros necesarios que se deben pasar al modelo como entrada. Son รญndices de tokens, representaciones numรฉricas de tokens que construyen las secuencias que se utilizarรกn como entrada por el modelo. <Youtube id="VFp38yj8h3A"/> Cada tokenizador funciona de manera diferente, pero el mecanismo subyacente sigue siendo el mismo. Aquรญ tienes un ejemplo utilizando el tokenizador BERT, que es un tokenizador [WordPiece](https://arxiv.org/pdf/1609.08144.pdf): ```python >>> from transformers import BertTokenizer >>> tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-cased") >>> sequence = "A Titan RTX has 24GB of VRAM" ``` El tokenizador se encarga de dividir la secuencia en tokens disponibles en el vocabulario del tokenizador. ```python >>> tokenized_sequence = tokenizer.tokenize(sequence) ``` Los tokens son palabras o sub palabras. Por ejemplo, "VRAM" no estaba en el vocabulario del modelo, asรญ que se dividiรณ en "V", "RA" y "M". Para indicar que estos tokens no son palabras separadas sino partes de la misma palabra, se aรฑade un prefijo de doble almohadilla para "RA" y "M": ```python >>> print(tokenized_sequence) ['A', 'Titan', 'R', '##T', '##X', 'has', '24', '##GB', 'of', 'V', '##RA', '##M'] ``` Estos tokens luego se pueden convertir en IDs que son comprensibles por el modelo. Esto se puede hacer alimentando directamente la oraciรณn al tokenizador, que aprovecha la implementaciรณn en Rust de [๐Ÿค— Tokenizers](https://github.com/huggingface/tokenizers) para obtener un rendimiento รณptimo. 
```python >>> inputs = tokenizer(sequence) ``` El tokenizador devuelve un diccionario con todos los argumentos necesarios para que su modelo correspondiente funcione correctamente. Los รญndices de los tokens estรกn bajo la clave `input_ids`: ```python >>> encoded_sequence = inputs["input_ids"] >>> print(encoded_sequence) [101, 138, 18696, 155, 1942, 3190, 1144, 1572, 13745, 1104, 159, 9664, 2107, 102] ``` Ten en cuenta que el tokenizador aรฑade automรกticamente "tokens especiales" (si el modelo asociado depende de ellos), que son IDs especiales que el modelo utiliza en ocasiones. Si descodificamos la secuencia anterior de IDs, ```python >>> decoded_sequence = tokenizer.decode(encoded_sequence) ``` Veremos ```python >>> print(decoded_sequence) [CLS] A Titan RTX has 24GB of VRAM [SEP] ``` Porque esta es la forma en que un [`BertModel`] espera sus entradas. ## L ### labels Las etiquetas son un argumento opcional que se puede pasar para que el modelo calcule la pรฉrdida por sรญ mismo. Estas etiquetas deberรญan ser la predicciรณn esperada del modelo: usarรก la pรฉrdida estรกndar para calcular la pรฉrdida entre sus predicciones y el valor esperado (la etiqueta). Estas etiquetas son diferentes segรบn la cabecera del modelo, por ejemplo: - Para modelos de clasificaciรณn de secuencias ([`BertForSequenceClassification`]), el modelo espera un tensor de dimensiรณn `(batch_size)` con cada valor del lote correspondiente a la etiqueta esperada de toda la secuencia. - Para modelos de clasificaciรณn de tokens ([`BertForTokenClassification`]), el modelo espera un tensor de dimensiรณn `(batch_size, seq_length)` con cada valor correspondiente a la etiqueta esperada de cada token individual. - Para el modelado de lenguaje enmascarado ([`BertForMaskedLM`]), el modelo espera un tensor de dimensiรณn `(batch_size, seq_length)` con cada valor correspondiente a la etiqueta esperada de cada token individual: las etiquetas son el ID del token enmascarado y los valores deben ignorarse para el resto (generalmente -100). - Para tareas de secuencia a secuencia ([`BartForConditionalGeneration`], [`MBartForConditionalGeneration`]), el modelo espera un tensor de dimensiรณn `(batch_size, tgt_seq_length)` con cada valor correspondiente a las secuencias objetivo asociadas con cada secuencia de entrada. Durante el entrenamiento, tanto BART como T5 generarรกn internamente los `decoder_input_ids` y las mรกscaras de atenciรณn del decodificador. Por lo general, no es necesario suministrarlos. Esto no se aplica a los modelos que aprovechan el marco codificador-decodificador. - Para modelos de clasificaciรณn de imรกgenes ([`ViTForImageClassification`]), el modelo espera un tensor de dimensiรณn `(batch_size)` con cada valor del lote correspondiente a la etiqueta esperada de cada imagen individual. - Para modelos de segmentaciรณn semรกntica ([`SegformerForSemanticSegmentation`]), el modelo espera un tensor de dimensiรณn `(batch_size, height, width)` con cada valor del lote correspondiente a la etiqueta esperada de cada pรญxel individual. - Para modelos de detecciรณn de objetos ([`DetrForObjectDetection`]), el modelo espera una lista de diccionarios con claves `class_labels` y `boxes` donde cada valor del lote corresponde a la etiqueta esperada y el nรบmero de cajas delimitadoras de cada imagen individual. - Para modelos de reconocimiento automรกtico de voz ([`Wav2Vec2ForCTC`]), el modelo espera un tensor de dimensiรณn `(batch_size, target_length)` con cada valor correspondiente a la etiqueta esperada de cada token individual. 
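Por ejemplo, un boceto mรญnimo (el checkpoint y la etiqueta elegidos son solo ilustrativos) de cรณmo pasar `labels` a un modelo de clasificaciรณn de secuencias para que calcule la pรฉrdida por sรญ mismo:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "google-bert/bert-base-cased", num_labels=2
)

inputs = tokenizer("This is a short sequence.", return_tensors="pt")
labels = torch.tensor([1])  # una etiqueta por secuencia: dimensiรณn (batch_size,)

outputs = model(**inputs, labels=labels)
print(outputs.loss)          # pรฉrdida de entropรญa cruzada calculada por el modelo
print(outputs.logits.shape)  # torch.Size([1, 2])
```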
<Tip> Las etiquetas de cada modelo pueden ser diferentes, asรญ que asegรบrate siempre de revisar la documentaciรณn de cada modelo para obtener mรกs informaciรณn sobre sus etiquetas especรญficas. </Tip> Los modelos base ([`BertModel`]) no aceptan etiquetas, ya que estos son los modelos base de transformadores, que simplemente generan caracterรญsticas. ### large language models (LLM) Un tรฉrmino genรฉrico que se refiere a modelos de lenguaje de transformadores (GPT-3, BLOOM, OPT) que fueron entrenados con una gran cantidad de datos. Estos modelos tambiรฉn tienden a tener un gran nรบmero de parรกmetros que se pueden aprender (por ejemplo, 175 mil millones para GPT-3). ## M ### masked language modeling (MLM) Una tarea de preentrenamiento en la que el modelo ve una versiรณn corrupta de los textos, generalmente hecha al enmascarar algunos tokens al azar, y tiene que predecir el texto original. ### multimodal Una tarea que combina textos con otro tipo de entradas (por ejemplo: imรกgenes). ## N ### Natural language generation (NLG) Todas las tareas relacionadas con la generaciรณn de texto (por ejemplo: [Escribe con Transformers](https://transformer.huggingface.co/) o traducciรณn). ### Natural language processing (NLP) Una forma genรฉrica de decir "trabajar con textos". ### Natural language understanding (NLU) Todas las tareas relacionadas con entender lo que hay en un texto (por ejemplo: clasificar el texto completo o palabras individuales). ## P ### Pipeline Un pipeline en ๐Ÿค— Transformers es una abstracciรณn que se refiere a una serie de pasos que se ejecutan en un orden especรญfico para preprocesar y transformar datos y devolver una predicciรณn de un modelo. Algunas etapas de ejemplo que se encuentran en un pipeline pueden ser el preprocesamiento de datos, la extracciรณn de caracterรญsticas y la normalizaciรณn. Para obtener mรกs detalles, consulta [Pipelines para inferencia](https://huggingface.co/docs/transformers/pipeline_tutorial). ### PipelineParallel (PP) Tรฉcnica de paralelismo en la que el modelo se divide verticalmente (a nivel de capa) en varios GPU, de modo que solo una o varias capas del modelo se colocan en un solo GPU. Cada GPU procesa en paralelo diferentes etapas del pipeline y trabaja en un pequeรฑo fragmento del lote. Obtรฉn mรกs informaciรณn sobre cรณmo funciona PipelineParallel [aquรญ](perf_train_gpu_many#from-naive-model-parallelism-to-pipeline-parallelism). ### pixel values Un tensor de las representaciones numรฉricas de una imagen que se pasa a un modelo. Los valores de pรญxeles tienen una forma de [`batch_size`, `num_channels`, `height`, `width`], y se generan a partir de un procesador de imรกgenes. ### pooling Una operaciรณn que reduce una matriz a una matriz mรกs pequeรฑa, ya sea tomando el mรกximo o el promedio de la dimensiรณn (o dimensiones) agrupada(s). Las capas de agrupaciรณn se encuentran comรบnmente entre capas convolucionales para reducir la representaciรณn de caracterรญsticas. ### position IDs A diferencia de las RNN que tienen la posiciรณn de cada token incrustada en ellas, los transformers no son conscientes de la posiciรณn de cada token. Por lo tanto, se utilizan los IDs de posiciรณn (`position_ids`) para que el modelo identifique la posiciรณn de cada token en la lista de tokens. Son un parรกmetro opcional. Si no se pasan `position_ids` al modelo, los IDs se crean automรกticamente como embeddings de posiciรณn absolutas. Los embeddings de posiciรณn absolutas se seleccionan en el rango `[0, config.max_position_embeddings - 1]`. 
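A modo de boceto (con una longitud de secuencia puramente ilustrativa), los IDs de posiciรณn absolutas creados por defecto equivalen simplemente al รญndice de cada token dentro de la secuencia:

```python
import torch

seq_length = 6
# Cada token recibe su รญndice dentro de la secuencia, dentro del rango
# [0, config.max_position_embeddings - 1].
position_ids = torch.arange(seq_length).unsqueeze(0)  # forma (1, seq_length)
print(position_ids)  # tensor([[0, 1, 2, 3, 4, 5]])
```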
Algunos modelos utilizan otros tipos de embeddings de posiciรณn, como embeddings de posiciรณn sinusoidales o embeddings de posiciรณn relativas. ### preprocessing La tarea de preparar datos crudos en un formato que pueda ser fรกcilmente consumido por modelos de aprendizaje automรกtico. Por ejemplo, el texto se preprocesa tรญpicamente mediante la tokenizaciรณn. Para tener una mejor idea de cรณmo es el preprocesamiento para otros tipos de entrada, consulta el tutorial [Pre-procesar](https://huggingface.co/docs/transformers/preprocessing). ### pretrained model Un modelo que ha sido pre-entrenado en algunos datos (por ejemplo, toda Wikipedia). Los mรฉtodos de preentrenamiento involucran un objetivo auto-supervisado, que puede ser leer el texto e intentar predecir la siguiente palabra (ver [modelado de lenguaje causal](#causal-language-modeling)) o enmascarar algunas palabras e intentar predecirlas (ver [modelado de lenguaje enmascarado](#masked-language-modeling-mlm)). Los modelos de habla y visiรณn tienen sus propios objetivos de pre-entrenamiento. Por ejemplo, Wav2Vec2 es un modelo de habla pre-entrenado en una tarea contrastiva que requiere que el modelo identifique la representaciรณn de habla "verdadera" de un conjunto de representaciones de habla "falsas". Por otro lado, BEiT es un modelo de visiรณn pre-entrenado en una tarea de modelado de imรกgenes enmascaradas que enmascara algunos de los parches de la imagen y requiere que el modelo prediga los parches enmascarados (similar al objetivo de modelado de lenguaje enmascarado). ## R ### recurrent neural network (RNN) Un tipo de modelo que utiliza un bucle sobre una capa para procesar textos. ### representation learning Un subcampo del aprendizaje automรกtico que se centra en aprender representaciones significativas de datos en bruto. Algunos ejemplos de tรฉcnicas de aprendizaje de representaciones incluyen embeddings de palabras, auto-encoders y Redes Generativas Adversarias (Generative Adversarial Networks, GANs). ## S ### sampling rate Una medida en hercios del nรบmero de muestras (la seรฑal de audio) tomadas por segundo. La tasa de muestreo es el resultado de aproximar una seรฑal continua como el habla. ### self-attention Cada elemento de la entrada averigua a cuรกles otros elementos de la entrada debe prestar atenciรณn. ### self-supervised learning Una categorรญa de tรฉcnicas de aprendizaje automรกtico en la que un modelo crea su propio objetivo de aprendizaje a partir de datos no etiquetados. Difiere del [aprendizaje no supervisado](#unsupervised-learning) y del [aprendizaje supervisado](#supervised-learning) en que el proceso de aprendizaje estรก supervisado, pero no explรญcitamente por el usuario. Un ejemplo de aprendizaje auto-supervisado es el [modelado de lenguaje enmascarado](#masked-language-modeling-mlm), donde un modelo recibe oraciones con una proporciรณn de sus tokens eliminados y aprende a predecir los tokens faltantes. ### semi-supervised learning Una amplia categorรญa de tรฉcnicas de entrenamiento de aprendizaje automรกtico que aprovecha una pequeรฑa cantidad de datos etiquetados con una mayor cantidad de datos no etiquetados para mejorar la precisiรณn de un modelo, a diferencia del [aprendizaje supervisado](#supervised-learning) y del [aprendizaje no supervisado](#unsupervised-learning). Un ejemplo de un enfoque de aprendizaje semi-supervisado es "auto-entrenamiento", en el que un modelo se entrena con datos etiquetados y luego se utiliza para hacer predicciones sobre los datos no etiquetados. 
La porciรณn de datos no etiquetados que el modelo predice con mayor confianza se agrega al conjunto de datos etiquetados y se utiliza para volver a entrenar el modelo. ### sequence-to-sequence (seq2seq) Modelos que generan una nueva secuencia a partir de una entrada, como modelos de traducciรณn o modelos de resumen (como [Bart](model_doc/bart) o [T5](model_doc/t5)). ### Sharded DDP Otro nombre para el concepto fundamental de [ZeRO](#zero-redundancy-optimizer-zero) utilizado por varias otras implementaciones de ZeRO. ### stride En [convoluciรณn](#convolution) o [agrupaciรณn](#pooling), el paso (stride) se refiere a la distancia que recorre el nรบcleo sobre una matriz. Un paso de 1 significa que el nรบcleo se mueve un pรญxel a la vez, y un paso de 2 significa que el nรบcleo se mueve dos pรญxeles a la vez. ### supervised learning Una forma de entrenamiento de modelos que utiliza directamente datos etiquetados para corregir y dirigir el rendimiento del modelo. Los datos se introducen en el modelo en entrenamiento, y sus predicciones se comparan con las etiquetas conocidas. El modelo actualiza sus pesos en funciรณn de cuรกn incorrectas fueron sus predicciones, y el proceso se repite para optimizar el rendimiento del modelo. ## T ### Tensor Parallelism (TP) Tรฉcnica de paralelismo para entrenamiento en mรบltiples GPU en la que cada tensor se divide en mรบltiples fragmentos, de modo que en lugar de tener todo el tensor en una sola GPU, cada fragmento del tensor reside en su GPU designada. Los fragmentos se procesan por separado y en paralelo en diferentes GPU y los resultados se sincronizan al final del paso de procesamiento.Esto es lo que a veces se llama paralelismo horizontal, ya que la divisiรณn ocurre a nivel horizontal. Obtรฉn mรกs informaciรณn sobre el Paralelismo de Tensores [aquรญ](perf_train_gpu_many#tensor-parallelism). ### token Parte de una oraciรณn, generalmente una palabra, pero tambiรฉn puede ser una sub-palabra (las palabras no comunes a menudo se dividen en sub-palabras) o un sรญmbolo de puntuaciรณn. ### token Type IDs Algunos modelos tienen como objetivo realizar clasificaciรณn en pares de oraciones o responder preguntas. <Youtube id="0u3ioSwev3s"/> Estos requieren que dos secuencias diferentes se unan en una รบnica entrada "input_ids", lo cual generalmente se realiza con la ayuda de tokens especiales, como el token de clasificaciรณn (`[CLS]`) y el token separador (`[SEP]`). Por ejemplo, el modelo BERT construye sus dos secuencias de entrada de la siguiente manera: ```python >>> # [CLS] SEQUENCE_A [SEP] SEQUENCE_B [SEP] ``` Podemos utilizar nuestro tokenizador para generar automรกticamente una oraciรณn de este tipo al pasar las dos secuencias a `tokenizer` como dos argumentos (y no como una lista, como antes) de la siguiente manera: ```python >>> from transformers import BertTokenizer >>> tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-cased") >>> sequence_a = "HuggingFace is based in NYC" >>> sequence_b = "Where is HuggingFace based?" >>> encoded_dict = tokenizer(sequence_a, sequence_b) >>> decoded = tokenizer.decode(encoded_dict["input_ids"]) ``` Que devolverรก: ```python >>> print(decoded) [CLS] HuggingFace is based in NYC [SEP] Where is HuggingFace based? [SEP] ``` Esto es suficiente para que algunos modelos comprendan dรณnde termina una secuencia y comienza otra. Sin embargo, otros modelos, como BERT, tambiรฉn utilizan identificadores de tipo de token (tambiรฉn llamados identificadores de segmento). 
Se representan como una mรกscara binaria que identifica los dos tipos de secuencia en el modelo. El tokenizador devuelve esta mรกscara como la entrada "token_type_ids": ```python >>> encoded_dict["token_type_ids"] [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1] ``` La primera secuencia, el "contexto" utilizado para la pregunta, tiene todos sus tokens representados por un `0`, mientras que la segunda secuencia, correspondiente a la "pregunta", tiene todos sus tokens representados por un `1`. Algunos modelos, como [`XLNetModel`], utilizan un token adicional representado por un `2`. ### transfer learning Una tรฉcnica que implica tomar un modelo pre-entrenado y adaptarlo a un conjunto de datos especรญfico para tu tarea. En lugar de entrenar un modelo desde cero, puedes aprovechar el conocimiento obtenido de un modelo existente como punto de partida. Esto acelera el proceso de aprendizaje y reduce la cantidad de datos de entrenamiento necesarios. ### transformer Arquitectura de modelo de aprendizaje profundo basada en auto-atenciรณn (Self-attention). ## U ### unsupervised learning Una forma de entrenamiento de modelos en la que los datos proporcionados al modelo no estรกn etiquetados. Las tรฉcnicas de aprendizaje no supervisado aprovechan la informaciรณn estadรญstica de la distribuciรณn de datos para encontrar patrones รบtiles para la tarea en cuestiรณn. ## Z ### Zero Redundancy Optimizer (ZeRO) Tรฉcnica de paralelismo que realiza la fragmentaciรณn de los tensores de manera algo similar a [TensorParallel](#tensor-parallelism-tp), excepto que todo el tensor se reconstruye a tiempo para una computaciรณn hacia adelante o hacia atrรกs, por lo tanto, el modelo no necesita ser modificado. Este mรฉtodo tambiรฉn admite diversas tรฉcnicas de descarga para compensar la memoria limitada de la GPU. Obtรฉn mรกs informaciรณn sobre ZeRO [aquรญ](perf_train_gpu_many#zero-data-parallelism).
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Pipelines para inferencia Un [`pipeline`] simplifica el uso de cualquier modelo del [Model Hub](https://huggingface.co/models) para la inferencia en una variedad de tareas como la generaciรณn de texto, la segmentaciรณn de imรกgenes y la clasificaciรณn de audio. Incluso si no tienes experiencia con una modalidad especรญfica o no comprendes el cรณdigo que alimenta los modelos, ยกaรบn puedes usarlos con el [`pipeline`]! Este tutorial te enseรฑarรก a: * Utilizar un [`pipeline`] para inferencia. * Utilizar un tokenizador o modelo especรญfico. * Utilizar un [`pipeline`] para tareas de audio y visiรณn. <Tip> Echa un vistazo a la documentaciรณn de [`pipeline`] para obtener una lista completa de tareas admitidas. </Tip> ## Uso del pipeline Si bien cada tarea tiene un [`pipeline`] asociado, es mรกs sencillo usar la abstracciรณn general [`pipeline`] que contiene todos los pipelines de tareas especรญficas. El [`pipeline`] carga automรกticamente un modelo predeterminado y un tokenizador con capacidad de inferencia para tu tarea. 1. Comienza creando un [`pipeline`] y especรญfica una tarea de inferencia: ```py >>> from transformers import pipeline >>> generator = pipeline(task="text-generation") ``` 2. Pasa tu texto de entrada al [`pipeline`]: ```py >>> generator("Three Rings for the Elven-kings under the sky, Seven for the Dwarf-lords in their halls of stone") [{'generated_text': 'Three Rings for the Elven-kings under the sky, Seven for the Dwarf-lords in their halls of stone, Seven for the Iron-priests at the door to the east, and thirteen for the Lord Kings at the end of the mountain'}] ``` Si tienes mรกs de una entrada, pรกsala como una lista: ```py >>> generator( ... [ ... "Three Rings for the Elven-kings under the sky, Seven for the Dwarf-lords in their halls of stone", ... "Nine for Mortal Men, doomed to die, One for the Dark Lord on his dark throne", ... ] ... ) ``` Cualquier parรกmetro adicional para tu tarea tambiรฉn se puede incluir en el [`pipeline`]. La tarea `text-generation` tiene un mรฉtodo [`~generation.GenerationMixin.generate`] con varios parรกmetros para controlar la salida. Por ejemplo, si deseas generar mรกs de una salida, defรญnelo en el parรกmetro `num_return_sequences`: ```py >>> generator( ... "Three Rings for the Elven-kings under the sky, Seven for the Dwarf-lords in their halls of stone", ... num_return_sequences=2, ... ) ``` ### Selecciona un modelo y un tokenizador El [`pipeline`] acepta cualquier modelo del [Model Hub](https://huggingface.co/models). Hay etiquetas en el Model Hub que te permiten filtrar por el modelo que te gustarรญa utilizar para tu tarea. Una vez que hayas elegido un modelo apropiado, cรกrgalo con la clase `AutoModelFor` y [`AutoTokenizer`] correspondientes. 
Por ejemplo, carga la clase [`AutoModelForCausalLM`] para una tarea de modelado de lenguaje causal: ```py >>> from transformers import AutoTokenizer, AutoModelForCausalLM >>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilgpt2") >>> model = AutoModelForCausalLM.from_pretrained("distilbert/distilgpt2") ``` Crea un [`pipeline`] para tu tarea y especรญfica el modelo y el tokenizador que cargaste: ```py >>> from transformers import pipeline >>> generator = pipeline(task="text-generation", model=model, tokenizer=tokenizer) ``` Pasa tu texto de entrada a [`pipeline`] para generar algo de texto: ```py >>> generator("Three Rings for the Elven-kings under the sky, Seven for the Dwarf-lords in their halls of stone") [{'generated_text': 'Three Rings for the Elven-kings under the sky, Seven for the Dwarf-lords in their halls of stone, Seven for the Dragon-lords (for them to rule in a world ruled by their rulers, and all who live within the realm'}] ``` ## Pipeline de audio La flexibilidad de [`pipeline`] significa que tambiรฉn se puede extender a tareas de audio. Por ejemplo, clasifiquemos la emociรณn de un breve fragmento del famoso discurso de John F. Kennedy ["We choose to go to the Moon"](https://en.wikipedia.org/wiki/We_choose_to_go_to_the_Moon). Encuentra un modelo de [audio classification](https://huggingface.co/models?pipeline_tag=audio-classification) para reconocimiento de emociones en el Model Hub y cรกrgalo en el [`pipeline`]: ```py >>> from transformers import pipeline >>> audio_classifier = pipeline( ... task="audio-classification", model="ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition" ... ) ``` Pasa el archivo de audio al [`pipeline`]: ```py >>> audio_classifier("jfk_moon_speech.wav") [{'label': 'calm', 'score': 0.13856211304664612}, {'label': 'disgust', 'score': 0.13148026168346405}, {'label': 'happy', 'score': 0.12635163962841034}, {'label': 'angry', 'score': 0.12439591437578201}, {'label': 'fearful', 'score': 0.12404385954141617}] ``` ## Pipeline de visiรณn Finalmente, utilizar un [`pipeline`] para tareas de visiรณn es prรกcticamente igual. Especรญfica tu tarea de visiรณn y pasa tu imagen al clasificador. La imagen puede ser un enlace o una ruta local a la imagen. Por ejemplo, ยฟquรฉ especie de gato se muestra a continuaciรณn? ![pipeline-cat-chonk](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg) ```py >>> from transformers import pipeline >>> vision_classifier = pipeline(task="image-classification") >>> vision_classifier( ... images="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg" ... ) [{'label': 'lynx, catamount', 'score': 0.4403027892112732}, {'label': 'cougar, puma, catamount, mountain lion, painter, panther, Felis concolor', 'score': 0.03433405980467796}, {'label': 'snow leopard, ounce, Panthera uncia', 'score': 0.032148055732250214}, {'label': 'Egyptian cat', 'score': 0.02353910356760025}, {'label': 'tiger cat', 'score': 0.023034192621707916}] ```
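Al igual que en los pipelines de texto, tambiรฉn puedes indicar un checkpoint concreto del Hub en lugar del modelo predeterminado (el identificador siguiente es solo un ejemplo de un modelo pรบblico de clasificaciรณn de imรกgenes):

```py
>>> vision_classifier = pipeline(task="image-classification", model="google/vit-base-patch16-224")
```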
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ยฟCรณmo puedo crear un pipeline personalizado? En esta guรญa, veremos cรณmo crear un pipeline personalizado y cรณmo compartirlo en el [Hub](https://hf.co/models) o aรฑadirlo a la biblioteca ๐Ÿค— Transformers. En primer lugar, debes decidir las entradas que tu pipeline podrรก recibir. Pueden ser strings, bytes, diccionarios o lo que te parezca que vaya a ser la entrada mรกs apropiada. Intenta mantener estas entradas en un formato que sea tan Python puro como sea posible, puesto que esto facilita la compatibilidad (incluso con otros lenguajes de programaciรณn por medio de JSON). Estos serรกn los `inputs` (entradas) del pipeline (`preprocess`). Ahora debes definir los `outputs` (salidas). Al igual que con los `inputs`, entre mรกs simple el formato, mejor. Estas serรกn las salidas del mรฉtodo `postprocess` (posprocesamiento). Empieza heredando la clase base `Pipeline` con los 4 mรฉtodos que debemos implementar: `preprocess` (preprocesamiento), `_forward` (ejecuciรณn), `postprocess` (posprocesamiento) y `_sanitize_parameters` (verificar parรกmetros). ```python from transformers import Pipeline class MyPipeline(Pipeline): def _sanitize_parameters(self, **kwargs): preprocess_kwargs = {} if "maybe_arg" in kwargs: preprocess_kwargs["maybe_arg"] = kwargs["maybe_arg"] return preprocess_kwargs, {}, {} def preprocess(self, inputs, maybe_arg=2): model_input = Tensor(inputs["input_ids"]) return {"model_input": model_input} def _forward(self, model_inputs): # model_inputs == {"model_input": model_input} outputs = self.model(**model_inputs) # Quizรก {"logits": Tensor(...)} return outputs def postprocess(self, model_outputs): best_class = model_outputs["logits"].softmax(-1) return best_class ``` La estructura de este desglose es asรญ para garantizar una compatibilidad mรกs o menos transparente con el uso de CPU/GPU y el pre/posprocesamiento en CPU en varios hilos. `preprocess` tomarรก las entradas definidas originalmente y las convertirรก en algo que se le pueda pasar al modelo. Podrรญa contener mรกs informaciรณn y a menudo es un objeto `Dict` (diccionario). `_forward` contiene los detalles de la implementaciรณn y no deberรญa ser invocado de forma directa. `forward` es el mรฉtodo preferido a utilizar pues contiene verificaciones para asegurar que todo funcione en el dispositivo correcto. Cualquier cosa que estรฉ relacionada con un modelo real deberรญa ir en el mรฉtodo `_forward`, todo lo demรกs va en los mรฉtodos de preprocesamiento y posprocesamiento. Los mรฉtodos `postprocess` reciben la salida `_forward` y la convierten en la salida final que decidimos anteriormente. `_sanitize_parameters` existe para permitir a los usuarios pasar cualesquiera parรกmetros cuando lo deseen, ya sea al momento de inicializar el pipeline `pipeline(...., maybe_arg=4)` o al momento de invocarlo `pipe = pipeline(...); output = pipe(...., maybe_arg=4)`. 
El mรฉtodo `_sanitize_parameters` devuelve 3 diccionarios de kwargs que serรกn pasados directamente a `preprocess`, `_forward` y `postprocess`. No ingreses nada si el caller no se va a invocar con parรกmetros adicionales. Esto permite mantener los parรกmetros por defecto de la definiciรณn de la funciรณn, lo que es mรกs "natural". Un ejemplo clรกsico serรญa un argumento `top_k` en el posprocesamiento de una tarea de clasificaciรณn. ```python >>> pipe = pipeline("my-new-task") >>> pipe("This is a test") [{"label": "1-star", "score": 0.8}, {"label": "2-star", "score": 0.1}, {"label": "3-star", "score": 0.05} {"label": "4-star", "score": 0.025}, {"label": "5-star", "score": 0.025}] >>> pipe("This is a test", top_k=2) [{"label": "1-star", "score": 0.8}, {"label": "2-star", "score": 0.1}] ``` Para lograrlo, actualizaremos nuestro mรฉtodo `postprocess` con un valor por defecto de `5` y modificaremos `_sanitize_parameters` para permitir este nuevo parรกmetro. ```python def postprocess(self, model_outputs, top_k=5): best_class = model_outputs["logits"].softmax(-1) # Aรฑade la lรณgica para manejar el top_k return best_class def _sanitize_parameters(self, **kwargs): preprocess_kwargs = {} if "maybe_arg" in kwargs: preprocess_kwargs["maybe_arg"] = kwargs["maybe_arg"] postprocess_kwargs = {} if "top_k" in kwargs: postprocess_kwargs["top_k"] = kwargs["top_k"] return preprocess_kwargs, {}, postprocess_kwargs ``` Intenta que las entradas y salidas sean muy simples e, idealmente, que puedan serializarse como JSON, pues esto hace el uso del pipeline muy sencillo sin que el usuario tenga que preocuparse por conocer nuevos tipos de objetos. Tambiรฉn es relativamente comรบn tener compatibilidad con muchos tipos diferentes de argumentos por facilidad de uso (por ejemplo, los archivos de audio pueden ser nombres de archivo, URLs o bytes). ## Aรฑadirlo a la lista de tareas Para registrar tu `new-task` (nueva tarea) en la lista de tareas, debes aรฑadirla al `PIPELINE_REGISTRY` (registro de pipelines): ```python from transformers.pipelines import PIPELINE_REGISTRY PIPELINE_REGISTRY.register_pipeline( "new-task", pipeline_class=MyPipeline, pt_model=AutoModelForSequenceClassification, ) ``` Puedes especificar un modelo por defecto si lo deseas, en cuyo caso debe venir con una versiรณn especรญfica (que puede ser el nombre de un branch o hash de commit, en este caso usamos `"abcdef"`), asรญ como el tipo: ```python PIPELINE_REGISTRY.register_pipeline( "new-task", pipeline_class=MyPipeline, pt_model=AutoModelForSequenceClassification, default={"pt": ("user/awesome_model", "abcdef")}, type="text", # tipo de datos que maneja: texto, audio, imagen, multi-modalidad ) ``` ## Comparte tu pipeline en el Hub Para compartir tu pipeline personalizado en el Hub, solo tienes que guardar el cรณdigo personalizado de tu sub-clase `Pipeline` en un archivo de Python. 
Por ejemplo, digamos que queremos usar un pipeline personalizado para la clasificaciรณn de duplas de oraciones de esta forma: ```py import numpy as np from transformers import Pipeline def softmax(outputs): maxes = np.max(outputs, axis=-1, keepdims=True) shifted_exp = np.exp(outputs - maxes) return shifted_exp / shifted_exp.sum(axis=-1, keepdims=True) class PairClassificationPipeline(Pipeline): def _sanitize_parameters(self, **kwargs): preprocess_kwargs = {} if "second_text" in kwargs: preprocess_kwargs["second_text"] = kwargs["second_text"] return preprocess_kwargs, {}, {} def preprocess(self, text, second_text=None): return self.tokenizer(text, text_pair=second_text, return_tensors=self.framework) def _forward(self, model_inputs): return self.model(**model_inputs) def postprocess(self, model_outputs): logits = model_outputs.logits[0].numpy() probabilities = softmax(logits) best_class = np.argmax(probabilities) label = self.model.config.id2label[best_class] score = probabilities[best_class].item() logits = logits.tolist() return {"label": label, "score": score, "logits": logits} ``` La implementaciรณn es independiente del framework y funcionarรก con modelos de PyTorch y TensorFlow. Si guardamos esto en un archivo llamado `pair_classification.py`, podemos importarlo y registrarlo de la siguiente manera: ```py from pair_classification import PairClassificationPipeline from transformers.pipelines import PIPELINE_REGISTRY from transformers import AutoModelForSequenceClassification, TFAutoModelForSequenceClassification PIPELINE_REGISTRY.register_pipeline( "pair-classification", pipeline_class=PairClassificationPipeline, pt_model=AutoModelForSequenceClassification, tf_model=TFAutoModelForSequenceClassification, ) ``` Una vez hecho esto, podemos usarlo con un modelo pre-entrenado. Por ejemplo, al modelo `sgugger/finetuned-bert-mrpc` se le hizo fine-tuning con el dataset MRPC, en el cual se clasifican duplas de oraciones como parรกfrasis o no. ```py from transformers import pipeline classifier = pipeline("pair-classification", model="sgugger/finetuned-bert-mrpc") ``` Ahora podemos compartirlo en el Hub usando el mรฉtodo `save_pretrained`: ```py classifier.push_to_hub("test-dynamic-pipeline") ``` Esto copiarรก el archivo donde definiste `PairClassificationPipeline` dentro de la carpeta `"test-dynamic-pipeline"`, y ademรกs guardarรก el modelo y el tokenizer del pipeline, antes de enviar todo al repositorio `{your_username}/test-dynamic-pipeline`. Despuรฉs de esto, cualquier persona puede usarlo siempre que usen la opciรณn `trust_remote_code=True` (confiar en cรณdigo remoto): ```py from transformers import pipeline classifier = pipeline(model="{your_username}/test-dynamic-pipeline", trust_remote_code=True) ``` ## Aรฑadir el pipeline a ๐Ÿค— Transformers Si quieres contribuir tu pipeline a la biblioteca ๐Ÿค— Transformers, tendrรกs que aรฑadirlo a un nuevo mรณdulo en el sub-mรณdulo `pipelines` con el cรณdigo de tu pipeline. Luego, debes aรฑadirlo a la lista de tareas definidas en `pipelines/__init__.py`. A continuaciรณn tienes que aรฑadir las pruebas. Crea un nuevo archivo llamado `tests/test_pipelines_MY_PIPELINE.py` basรกndote en las pruebas existentes. La funciรณn `run_pipeline_test` serรก muy genรฉrica y se correrรก sobre modelos pequeรฑos escogidos al azar sobre todas las arquitecturas posibles definidas en `model_mapping` y `tf_model_mapping`. 
Esto es muy importante para probar compatibilidades a futuro, lo que significa que si alguien aรฑade un nuevo modelo para `XXXForQuestionAnswering` entonces el pipeline intentarรก ejecutarse con ese modelo. Ya que los modelos son aleatorios, es imposible verificar los valores como tales, y es por eso que hay un helper `ANY` que simplemente intentarรก que la salida tenga el mismo tipo que la salida esperada del pipeline. Tambiรฉn *debes* implementar 2 (preferiblemente 4) pruebas: - `test_small_model_pt` : Define un (1) modelo pequeรฑo para este pipeline (no importa si los resultados no tienen sentido) y prueba las salidas del pipeline. Los resultados deberรญan ser los mismos que en `test_small_model_tf`. - `test_small_model_tf` : Define un (1) modelo pequeรฑo para este pipeline (no importa si los resultados no tienen sentido) y prueba las salidas del pipeline. Los resultados deberรญan ser los mismos que en `test_small_model_pt`. - `test_large_model_pt` (`optional`): Prueba el pipeline en una tarea real en la que los resultados deben tener sentido. Estas pruebas son lentas y deben marcarse como tales. El objetivo de esto es ejemplificar el pipeline y asegurarse de que no haya divergencias en versiones futuras. - `test_large_model_tf` (`optional`): Prueba el pipeline en una tarea real en la que los resultados deben tener sentido. Estas pruebas son lentas y deben marcarse como tales. El objetivo de esto es ejemplificar el pipeline y asegurarse de que no haya divergencias en versiones futuras.
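A modo de referencia, este es un boceto hipotรฉtico de cรณmo podrรญan verse estas pruebas para el pipeline `pair-classification` de la secciรณn anterior, suponiendo que ya estรก registrado en el `PIPELINE_REGISTRY`; el modelo pequeรฑo y las comprobaciones concretas son solo suposiciones ilustrativas:

```py
import unittest

from transformers import pipeline
from transformers.testing_utils import require_torch, slow


@require_torch
class PairClassificationPipelineTests(unittest.TestCase):
    def test_small_model_pt(self):
        classifier = pipeline("pair-classification", model="hf-internal-testing/tiny-random-bert")
        outputs = classifier("I like you", second_text="I love you")
        # Con un modelo pequeรฑo y aleatorio no se pueden verificar los valores,
        # solo el tipo y la estructura de la salida.
        self.assertIsInstance(outputs["label"], str)
        self.assertIsInstance(outputs["score"], float)
        self.assertIsInstance(outputs["logits"], list)

    @slow
    def test_large_model_pt(self):
        classifier = pipeline("pair-classification", model="sgugger/finetuned-bert-mrpc")
        outputs = classifier("I like you", second_text="I love you")
        # Con un modelo real, la predicciรณn debe tener sentido y mantenerse
        # estable entre versiones.
        self.assertIn(outputs["label"], classifier.model.config.id2label.values())
        self.assertGreaterEqual(outputs["score"], 0.0)
```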
- sections: - local: index title: ๐Ÿค— Transformers - local: quicktour title: Tour rรกpido - local: installation title: Instalaciรณn title: Empezar - sections: - local: pipeline_tutorial title: Pipelines para inferencia - local: autoclass_tutorial title: Carga instancias preentrenadas con un AutoClass - local: preprocessing title: Preprocesamiento - local: training title: Fine-tuning a un modelo pre-entrenado - local: accelerate title: Entrenamiento distribuido con ๐Ÿค— Accelerate - local: model_sharing title: Compartir un modelo title: Tutoriales - sections: - isExpanded: false sections: - local: tasks/question_answering title: Respuesta a preguntas - local: tasks/language_modeling title: Modelado de lenguaje - local: tasks/summarization title: Generaciรณn de resรบmenes - local: tasks/multiple_choice title: Selecciรณn mรบltiple - local: tasks/image_captioning title: Subtรญtulos de imรกgenes title: Procesamiento del Lenguaje Natural - isExpanded: false sections: - local: tasks/asr title: Reconocimiento automรกtico del habla title: Audio - isExpanded: false sections: - local: tasks/image_classification title: Clasificaciรณn de imรกgenes title: Visiรณn Artificial title: Guรญas prรกcticas - sections: - local: fast_tokenizers title: Usa tokenizadores de ๐Ÿค— Tokenizers - local: multilingual title: Modelos multilingรผes para inferencia - local: create_a_model title: Crea una arquitectura personalizada - local: custom_models title: Compartir modelos personalizados - local: run_scripts title: Entrenamiento con scripts - local: chat_templating title: Plantillas para Modelos de Chat - local: trainer title: Entrenador - local: sagemaker title: Ejecutar el entrenamiento en Amazon SageMaker - local: converting_tensorflow_models title: Convertir checkpoints de TensorFlow - local: serialization title: Exportar a ONNX - local: torchscript title: Exportar a TorchScript - local: community title: Los recursos de la comunidad title: Guรญas para desarrolladores - sections: - local: performance title: Descripciรณn general - local: debugging title: Debugging title: Rendimiento y escalabilidad - sections: - local: add_new_pipeline title: ยฟCรณmo puedo aรฑadir un pipeline a ๐Ÿค— Transformers? - local: pr_checks title: Verificaciones en un Pull Request title: Contribuir - sections: - local: philosophy title: Filosofรญa - local: glossary title: Glosario - local: task_summary title: Lo que ๐Ÿค— Transformers puede hacer - local: tasks_explained title: Como los ๐Ÿค— Transformers resuelven tareas - local: attention title: Mecanismos de atenciรณn - local: pad_truncation title: Relleno y truncamiento - local: bertology title: BERTologรญa - local: perplexity title: Perplejidad de los modelos de longitud fija - local: pipeline_webserver title: Flujo de trabajo para la inferencia de los servidores web title: Guรญas conceptuales
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ยฟCรณmo los ๐Ÿค— Transformers resuelven tareas? En [Lo que ๐Ÿค— Transformers puede hacer](task_summary), aprendiste sobre el procesamiento de lenguaje natural (NLP), tareas de voz y audio, visiรณn por computadora y algunas aplicaciones importantes de ellas. Esta pรกgina se centrarรก en cรณmo los modelos resuelven estas tareas y explicarรก lo que estรก sucediendo debajo de la superficie. Hay muchas maneras de resolver una tarea dada, y diferentes modelos pueden implementar ciertas tรฉcnicas o incluso abordar la tarea desde un รกngulo nuevo, pero para los modelos Transformer, la idea general es la misma. Debido a su arquitectura flexible, la mayorรญa de los modelos son una variante de una estructura de codificador, descodificador o codificador-descodificador. Ademรกs de los modelos Transformer, nuestra biblioteca tambiรฉn tiene varias redes neuronales convolucionales (CNNs) modernas, que todavรญa se utilizan hoy en dรญa para tareas de visiรณn por computadora. Tambiรฉn explicaremos cรณmo funciona una CNN moderna. Para explicar cรณmo se resuelven las tareas, caminaremos a travรฉs de lo que sucede dentro del modelo para generar predicciones รบtiles. - [Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2) para clasificaciรณn de audio y reconocimiento automรกtico de habla (ASR) - [Transformador de Visiรณn (ViT)](https://huggingface.co/docs/transformers/model_doc/vit) y [ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext) para clasificaciรณn de imรกgenes - [DETR](https://huggingface.co/docs/transformers/model_doc/detr) para detecciรณn de objetos - [Mask2Former](https://huggingface.co/docs/transformers/model_doc/mask2former) para segmentaciรณn de imagen - [GLPN](https://huggingface.co/docs/transformers/model_doc/glpn) para estimaciรณn de profundidad - [BERT](https://huggingface.co/docs/transformers/model_doc/bert) para tareas de NLP como clasificaciรณn de texto, clasificaciรณn de tokens y preguntas y respuestas que utilizan un codificador - [GPT2](https://huggingface.co/docs/transformers/model_doc/gpt2) para tareas de NLP como generaciรณn de texto que utilizan un descodificador - [BART](https://huggingface.co/docs/transformers/model_doc/bart) para tareas de NLP como resumen y traducciรณn que utilizan un codificador-descodificador <Tip> Antes de continuar, es bueno tener un conocimiento bรกsico de la arquitectura original del Transformer. Saber cรณmo funcionan los codificadores, decodificadores y la atenciรณn te ayudarรก a entender cรณmo funcionan los diferentes modelos de Transformer. Si estรกs empezando o necesitas repasar, ยกecha un vistazo a nuestro [curso](https://huggingface.co/course/chapter1/4?fw=pt) para obtener mรกs informaciรณn! 
</Tip> ## Habla y audio [Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2) es un modelo auto-supervisado preentrenado en datos de habla no etiquetados y ajustado en datos etiquetados para clasificaciรณn de audio y reconocimiento automรกtico de voz. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/wav2vec2_architecture.png"/> </div> Este modelo tiene cuatro componentes principales: 1. Un *codificador de caracterรญsticas* toma la forma de onda de audio cruda, la normaliza a media cero y varianza unitaria, y la convierte en una secuencia de vectores de caracterรญsticas, cada uno de 20 ms de duraciรณn. 2. Las formas de onda son continuas por naturaleza, por lo que no se pueden dividir en unidades separadas como una secuencia de texto se puede dividir en palabras. Por eso, los vectores de caracterรญsticas se pasan a un *mรณdulo de cuantificaciรณn*, que tiene como objetivo aprender unidades de habla discretas. La unidad de habla se elige de una colecciรณn de palabras de cรณdigo, conocidas como *codebook* (puedes pensar en esto como el vocabulario). Del codebook, se elige el vector o unidad de habla que mejor representa la entrada de audio continua y se envรญa a travรฉs del modelo. 3. Alrededor de la mitad de los vectores de caracterรญsticas se enmascaran aleatoriamente, y el vector de caracterรญsticas enmascarado se alimenta a una *red de contexto*, que es un codificador Transformer que tambiรฉn agrega incrustaciones posicionales relativas. 4. El objetivo del preentrenamiento de la red de contexto es una *tarea contrastiva*. El modelo tiene que predecir la verdadera representaciรณn de habla cuantizada de la predicciรณn enmascarada a partir de un conjunto de falsas, lo que anima al modelo a encontrar el vector de contexto y la unidad de habla cuantizada mรกs similares (la etiqueta objetivo). ยกAhora que wav2vec2 estรก preentrenado, puedes ajustarlo con tus datos para clasificaciรณn de audio o reconocimiento automรกtico de voz! ### Clasificaciรณn de audio Para usar el modelo preentrenado para la clasificaciรณn de audio, aรฑade una capa de clasificaciรณn de secuencia encima del modelo base de Wav2Vec2. La capa de clasificaciรณn es una capa lineal que acepta los estados ocultos del codificador. Los estados ocultos representan las caracterรญsticas aprendidas de cada fotograma de audio, que pueden tener longitudes variables. Para crear un vector de longitud fija, primero se agrupan los estados ocultos y luego se transforman en logits sobre las etiquetas de clase. La pรฉrdida de entropรญa cruzada se calcula entre los logits y el objetivo para encontrar la clase mรกs probable. ยฟListo para probar la clasificaciรณn de audio? ยกConsulta nuestra guรญa completa de [clasificaciรณn de audio](https://huggingface.co/docs/transformers/tasks/audio_classification) para aprender cรณmo ajustar Wav2Vec2 y usarlo para inferencia! ### Reconocimiento automรกtico de voz Para usar el modelo preentrenado para el reconocimiento automรกtico de voz, aรฑade una capa de modelado del lenguaje encima del modelo base de Wav2Vec2 para [CTC (clasificaciรณn temporal conexista)](glossary#connectionist-temporal-classification-ctc). La capa de modelado del lenguaje es una capa lineal que acepta los estados ocultos del codificador y los transforma en logits. Cada logit representa una clase de token (el nรบmero de tokens proviene del vocabulario de la tarea). 
La pรฉrdida de CTC se calcula entre los logits y los objetivos para encontrar la secuencia de tokens mรกs probable, que luego se decodifican en una transcripciรณn. ยฟListo para probar el reconocimiento automรกtico de voz? ยกConsulta nuestra guรญa completa de [reconocimiento automรกtico de voz](tasks/asr) para aprender cรณmo ajustar Wav2Vec2 y usarlo para inferencia! ## Visiรณn por computadora Hay dos formas de abordar las tareas de visiรณn por computadora: 1. Dividir una imagen en una secuencia de parches y procesarlos en paralelo con un Transformer. 2. Utilizar una CNN moderna, como [ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext), que se basa en capas convolucionales pero adopta diseรฑos de redes modernas. <Tip> Un tercer enfoque combina Transformers con convoluciones (por ejemplo, [Convolutional Vision Transformer](https://huggingface.co/docs/transformers/model_doc/cvt) o [LeViT](https://huggingface.co/docs/transformers/model_doc/levit)). No discutiremos estos porque simplemente combinan los dos enfoques que examinamos aquรญ. </Tip> ViT y ConvNeXT se utilizan comรบnmente para la clasificaciรณn de imรกgenes, pero para otras tareas de visiรณn como la detecciรณn de objetos, la segmentaciรณn y la estimaciรณn de profundidad, veremos DETR, Mask2Former y GLPN, respectivamente; estos modelos son mรกs adecuados para esas tareas. ### Clasificaciรณn de imรกgenes ViT y ConvNeXT pueden usarse ambos para la clasificaciรณn de imรกgenes; la diferencia principal es que ViT utiliza un mecanismo de atenciรณn mientras que ConvNeXT utiliza convoluciones. #### Transformer [ViT](https://huggingface.co/docs/transformers/model_doc/vit) reemplaza completamente las convoluciones con una arquitectura de Transformer pura. Si estรกs familiarizado con el Transformer original, entonces ya estรกs en el camino para entender ViT. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/vit_architecture.jpg"/> </div> El cambio principal que introdujo ViT fue en cรณmo se alimentan las imรกgenes a un Transformer: 1. Una imagen se divide en parches cuadrados no superpuestos, cada uno de los cuales se convierte en un vector o *incrustaciรณn de parche*(patch embedding). Las incrustaciones de parche se generan a partir de una capa convolucional 2D que crea las dimensiones de entrada adecuadas (que para un Transformer base son 768 valores para cada incrustaciรณn de parche). Si tuvieras una imagen de 224x224 pรญxeles, podrรญas dividirla en 196 parches de imagen de 16x16. Al igual que el texto se tokeniza en palabras, una imagen se "tokeniza" en una secuencia de parches. 2. Se agrega una *incrustaciรณn aprendida* - un token especial `[CLS]` - al principio de las incrustaciones del parche, al igual que en BERT. El estado oculto final del token `[CLS]` se utiliza como la entrada para la cabecera de clasificaciรณn adjunta; otras salidas se ignoran. Este token ayuda al modelo a aprender cรณmo codificar una representaciรณn de la imagen. 3. Lo รบltimo que se agrega a las incrustaciones de parche e incrustaciones aprendidas son las *incrustaciones de posiciรณn* porque el modelo no sabe cรณmo estรกn ordenados los parches de imagen. Las incrustaciones de posiciรณn tambiรฉn son aprendibles y tienen el mismo tamaรฑo que las incrustaciones de parche. Finalmente, todas las incrustaciones se pasan al codificador Transformer. 4. 
La salida, especรญficamente solo la salida con el token `[CLS]`, se pasa a una cabecera de perceptrรณn multicapa (MLP). El objetivo del preentrenamiento de ViT es simplemente la clasificaciรณn. Al igual que otras cabeceras de clasificaciรณn, la cabecera de MLP convierte la salida en logits sobre las etiquetas de clase y calcula la pรฉrdida de entropรญa cruzada para encontrar la clase mรกs probable. ยฟListo para probar la clasificaciรณn de imรกgenes? ยกConsulta nuestra guรญa completa de [clasificaciรณn de imรกgenes](tasks/image_classification) para aprender cรณmo ajustar ViT y usarlo para inferencia! #### CNN <Tip> Esta secciรณn explica brevemente las convoluciones, pero serรญa รบtil tener un entendimiento previo de cรณmo cambian la forma y el tamaรฑo de una imagen. Si no estรกs familiarizado con las convoluciones, ยกecha un vistazo al [capรญtulo de Redes Neuronales Convolucionales](https://github.com/fastai/fastbook/blob/master/13_convolutions.ipynb) del libro fastai! </Tip> [ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext) es una arquitectura de CNN que adopta diseรฑos de redes nuevas y modernas para mejorar el rendimiento. Sin embargo, las convoluciones siguen siendo el nรบcleo del modelo. Desde una perspectiva de alto nivel, una [convoluciรณn](glossary#convolution) es una operaciรณn donde una matriz mรกs pequeรฑa (*kernel*) se multiplica por una pequeรฑa ventana de pรญxeles de la imagen. Esta calcula algunas caracterรญsticas de ella, como una textura particular o la curvatura de una lรญnea. Luego, se desliza hacia la siguiente ventana de pรญxeles; la distancia que recorre la convoluciรณn se conoce como el *stride*. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convolution.gif"/> </div> <small>Una convoluciรณn bรกsica sin relleno ni paso, tomada de <a href="https://arxiv.org/abs/1603.07285">Una guรญa para la aritmรฉtica de convoluciones para el aprendizaje profundo.</a></small> Puedes alimentar esta salida a otra capa convolucional, y con cada capa sucesiva, la red aprende cosas mรกs complejas y abstractas como perros calientes o cohetes. Entre capas convolucionales, es comรบn aรฑadir una capa de agrupaciรณn para reducir la dimensionalidad y hacer que el modelo sea mรกs robusto a las variaciones de la posiciรณn de una caracterรญstica. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convnext_architecture.png"/> </div> ConvNeXT moderniza una CNN de cinco maneras: 1. Cambia el nรบmero de bloques en cada etapa y "fragmenta" una imagen con un paso y tamaรฑo de kernel mรกs grandes. La ventana deslizante no superpuesta hace que esta estrategia de fragmentaciรณn sea similar a cรณmo ViT divide una imagen en parches. 2. Una capa de *cuello de botella* reduce el nรบmero de canales y luego lo restaura porque es mรกs rรกpido hacer una convoluciรณn de 1x1, y se puede aumentar la profundidad. Un cuello de botella invertido hace lo contrario al expandir el nรบmero de canales y luego reducirlos, lo cual es mรกs eficiente en memoria. 3. Reemplaza la tรญpica capa convolucional de 3x3 en la capa de cuello de botella con una convoluciรณn *depthwise*, que aplica una convoluciรณn a cada canal de entrada por separado y luego los apila de nuevo al final. Esto ensancha el ancho de la red para mejorar el rendimiento. 4. 
ViT tiene un campo receptivo global, lo que significa que puede ver una mayor parte de la imagen a la vez gracias a su mecanismo de atención. ConvNeXT intenta replicar este efecto aumentando el tamaño del kernel a 7x7. 5. ConvNeXT también hace varios cambios en el diseño de capas que imitan a los modelos Transformer. Hay menos capas de activación y normalización, la función de activación se cambia a GELU en lugar de ReLU, y utiliza LayerNorm en lugar de BatchNorm. La salida de los bloques convolucionales se pasa a una cabecera de clasificación que convierte las salidas en logits y calcula la pérdida de entropía cruzada para encontrar la etiqueta más probable. ### Detección de objetos[[object-detection]] [DETR](https://huggingface.co/docs/transformers/model_doc/detr), *DEtection TRansformer*, es un modelo de detección de objetos de un extremo a otro que combina una CNN con un codificador-decodificador Transformer. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/detr_architecture.png"/> </div> 1. Una CNN *backbone* preentrenada toma una imagen, representada por sus valores de píxeles, y crea un mapa de características de baja resolución de la misma. A continuación, se aplica una convolución 1x1 al mapa de características para reducir la dimensionalidad y se crea un nuevo mapa de características con una representación de imagen de alto nivel. Dado que el Transformer es un modelo secuencial, el mapa de características se aplana en una secuencia de vectores de características que se combinan con incrustaciones posicionales. 2. Los vectores de características se pasan al codificador, que aprende las representaciones de imagen usando sus capas de atención. A continuación, los estados ocultos del codificador se combinan con *consultas de objeto* en el decodificador. Las consultas de objeto son incrustaciones aprendidas que se enfocan en las diferentes regiones de una imagen, y se actualizan a medida que avanzan a través de cada capa de atención. Los estados ocultos del decodificador se pasan a una red feedforward que predice las coordenadas del cuadro delimitador y la etiqueta de clase para cada consulta de objeto, o `no objeto` si no hay ninguno. DETR descodifica cada consulta de objeto en paralelo para producir *N* predicciones finales, donde *N* es el número de consultas. A diferencia de un modelo autorregresivo típico que predice un elemento a la vez, la detección de objetos es una tarea de predicción de conjuntos (`cuadro delimitador`, `etiqueta de clase`) que hace *N* predicciones en un solo paso. 3. DETR utiliza una **pérdida de coincidencia bipartita** durante el entrenamiento para comparar un número fijo de predicciones con un conjunto fijo de etiquetas de verdad básica. Si hay menos etiquetas de verdad básica que *N*, entonces el conjunto se rellena con la clase `no objeto`. Esta función de pérdida fomenta que DETR encuentre una asignación uno a uno entre las predicciones y las etiquetas de verdad básica. Si los cuadros delimitadores o las etiquetas de clase no son correctos, se incurre en una pérdida. Del mismo modo, si DETR predice un objeto que no existe, se penaliza. Esto fomenta que DETR encuentre otros objetos en una imagen en lugar de centrarse en un objeto realmente prominente. Se añade una cabecera de detección de objetos encima de DETR para encontrar la etiqueta de clase y las coordenadas del cuadro delimitador.
Hay dos componentes en la cabecera de detección de objetos: una capa lineal para transformar los estados ocultos del decodificador en logits sobre las etiquetas de clase, y una MLP para predecir el cuadro delimitador. ¿Listo para probar la detección de objetos? ¡Consulta nuestra guía completa de [detección de objetos](https://huggingface.co/docs/transformers/tasks/object_detection) para aprender cómo ajustar DETR y usarlo para inferencia! ### Segmentación de imágenes [Mask2Former](https://huggingface.co/docs/transformers/model_doc/mask2former) es una arquitectura universal para resolver todos los tipos de tareas de segmentación de imágenes. Los modelos de segmentación tradicionales suelen estar adaptados a una tarea particular de segmentación de imágenes, como la segmentación de instancias, semántica o panóptica. Mask2Former enmarca cada una de esas tareas como un problema de *clasificación de máscaras*. La clasificación de máscaras agrupa píxeles en *N* segmentos, y predice *N* máscaras y su etiqueta de clase correspondiente para una imagen dada. Explicaremos cómo funciona Mask2Former en esta sección, y luego podrás probar el ajuste fino de SegFormer al final. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/mask2former_architecture.png"/> </div> Hay tres componentes principales en Mask2Former: 1. Un [backbone Swin](https://huggingface.co/docs/transformers/model_doc/swin) acepta una imagen y crea un mapa de características de imagen de baja resolución a partir de 3 convoluciones consecutivas de 3x3. 2. El mapa de características se pasa a un *decodificador de píxeles* que aumenta gradualmente las características de baja resolución hasta incrustaciones de alta resolución por píxel. De hecho, el decodificador de píxeles genera características multiescala (contiene características de baja y alta resolución) con resoluciones de 1/32, 1/16 y 1/8 de la imagen original. 3. Cada uno de estos mapas de características de diferentes escalas se alimenta sucesivamente, uno a la vez, a una capa decodificadora Transformer para capturar objetos pequeños a partir de las características de alta resolución. La clave de Mask2Former es el mecanismo de *atención enmascarada* en el decodificador. A diferencia de la atención cruzada, que puede atender a toda la imagen, la atención enmascarada solo se centra en cierta área de la imagen. Esto es más rápido y conduce a un mejor rendimiento porque las características locales de una imagen son suficientes para que el modelo aprenda. 4. Al igual que [DETR](tasks_explained#object-detection), Mask2Former también utiliza consultas de objeto aprendidas y las combina con las características de la imagen del decodificador de píxeles para hacer una predicción de conjunto (`etiqueta de clase`, `predicción de máscara`). Los estados ocultos del decodificador se pasan a una capa lineal y se transforman en logits sobre las etiquetas de clase. Se calcula la pérdida de entropía cruzada entre los logits y la etiqueta de clase para encontrar la más probable. Las predicciones de máscara se generan combinando las incrustaciones de píxeles con los estados ocultos finales del decodificador. La pérdida de entropía cruzada sigmoidea y la pérdida DICE se calculan entre los logits y la máscara de verdad básica para encontrar la máscara más probable. ¿Listo para probar la segmentación de imágenes?
ยกConsulta nuestra guรญa completa de [segmentaciรณn de imรกgenes](https://huggingface.co/docs/transformers/tasks/semantic_segmentation) para aprender cรณmo ajustar SegFormer y usarlo para inferencia! ### Estimaciรณn de profundidad [GLPN](https://huggingface.co/docs/transformers/model_doc/glpn), *Global-Local Path Network*, es un Transformer para la estimaciรณn de profundidad que combina un codificador [SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer) con un decodificador ligero. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/glpn_architecture.jpg"/> </div> 1. Al igual que ViT, una imagen se divide en una secuencia de parches, excepto que estos parches de imagen son mรกs pequeรฑos. Esto es mejor para tareas de predicciรณn densa como la segmentaciรณn o la estimaciรณn de profundidad. Los parches de imagen se transforman en incrustaciones de parches (ver la secciรณn de [clasificaciรณn de imรกgenes](#clasificaciรณn-de-imรกgenes) para mรกs detalles sobre cรณmo se crean las incrustaciones de parches), que se alimentan al codificador. 2. El codificador acepta las incrustaciones de parches y las pasa a travรฉs de varios bloques codificadores. Cada bloque consiste en capas de atenciรณn y Mix-FFN. El propรณsito de este รบltimo es proporcionar informaciรณn posicional. Al final de cada bloque codificador hay una capa de *fusiรณn de parches* para crear representaciones jerรกrquicas. Las caracterรญsticas de cada grupo de parches vecinos se concatenan, y se aplica una capa lineal a las caracterรญsticas concatenadas para reducir el nรบmero de parches a una resoluciรณn de 1/4. Esto se convierte en la entrada al siguiente bloque codificador, donde se repite todo este proceso hasta que tengas caracterรญsticas de imagen con resoluciones de 1/8, 1/16 y 1/32. 3. Un decodificador ligero toma el รบltimo mapa de caracterรญsticas (escala 1/32) del codificador y lo aumenta a una escala de 1/16. A partir de aquรญ, la caracterรญstica se pasa a un mรณdulo de *Fusiรณn Selectiva de Caracterรญsticas (SFF)*, que selecciona y combina caracterรญsticas locales y globales de un mapa de atenciรณn para cada caracterรญstica y luego la aumenta a 1/8. Este proceso se repite hasta que las caracterรญsticas decodificadas sean del mismo tamaรฑo que la imagen original. La salida se pasa a travรฉs de dos capas de convoluciรณn y luego se aplica una activaciรณn sigmoide para predecir la profundidad de cada pรญxel. ## Procesamiento del lenguaje natural El Transformer fue diseรฑado inicialmente para la traducciรณn automรกtica, y desde entonces, prรกcticamente se ha convertido en la arquitectura predeterminada para resolver todas las tareas de procesamiento del lenguaje natural (NLP, por sus siglas en inglรฉs). Algunas tareas se prestan a la estructura del codificador del Transformer, mientras que otras son mรกs adecuadas para el decodificador. Todavรญa hay otras tareas que hacen uso de la estructura codificador-decodificador del Transformer. ### Clasificaciรณn de texto [BERT](https://huggingface.co/docs/transformers/model_doc/bert) es un modelo que solo tiene codificador y es el primer modelo en implementar efectivamente la bidireccionalidad profunda para aprender representaciones mรกs ricas del texto al atender a las palabras en ambos lados. 1. BERT utiliza la tokenizaciรณn [WordPiece](https://huggingface.co/docs/transformers/tokenizer_summary#wordpiece) para generar una incrustaciรณn de tokens del texto. 
Para diferenciar entre una sola oración y un par de oraciones, se agrega un token especial `[SEP]`. También se agrega un token especial `[CLS]` al principio de cada secuencia de texto. La salida final con el token `[CLS]` se utiliza como la entrada a la cabecera de clasificación para tareas de clasificación. BERT también agrega una incrustación de segmento para indicar si un token pertenece a la primera o a la segunda oración en un par de oraciones. 2. BERT se preentrena con dos objetivos: el modelado de lenguaje enmascarado y la predicción de la próxima oración. En el modelado de lenguaje enmascarado, un cierto porcentaje de los tokens de entrada se enmascaran aleatoriamente, y el modelo debe predecirlos. Esto resuelve el problema de la bidireccionalidad, donde el modelo podría hacer trampa y ver todas las palabras y "predecir" la siguiente palabra. Los estados ocultos finales de los tokens enmascarados se pasan a una red feedforward con una softmax sobre el vocabulario para predecir la palabra enmascarada. El segundo objetivo de preentrenamiento es la predicción de la próxima oración. El modelo debe predecir si la oración B sigue a la oración A. La mitad del tiempo, la oración B es la siguiente oración, y la otra mitad del tiempo, la oración B es una oración aleatoria. La predicción de si la oración es o no la siguiente se pasa a una red feedforward con una softmax sobre las dos clases (`EsSiguiente` y `NoSiguiente`). 3. Las incrustaciones de entrada se pasan a través de múltiples capas codificadoras para producir los estados ocultos finales. Para usar el modelo preentrenado para clasificación de texto, se añade una cabecera de clasificación de secuencia encima del modelo base de BERT. La cabecera de clasificación de secuencia es una capa lineal que acepta los estados ocultos finales y realiza una transformación lineal para convertirlos en logits. Se calcula la pérdida de entropía cruzada entre los logits y el objetivo para encontrar la etiqueta más probable. ¿Listo para probar la clasificación de texto? ¡Consulta nuestra guía completa de [clasificación de texto](https://huggingface.co/docs/transformers/tasks/sequence_classification) para aprender cómo ajustar DistilBERT y usarlo para inferencia! ### Clasificación de tokens Para usar BERT en tareas de clasificación de tokens como el reconocimiento de entidades nombradas (NER), añade una cabecera de clasificación de tokens encima del modelo base de BERT. La cabecera de clasificación de tokens es una capa lineal que acepta los estados ocultos finales y realiza una transformación lineal para convertirlos en logits. Se calcula la pérdida de entropía cruzada entre los logits y cada token para encontrar la etiqueta más probable. ¿Listo para probar la clasificación de tokens? ¡Consulta nuestra guía completa de [clasificación de tokens](https://huggingface.co/docs/transformers/tasks/token_classification) para aprender cómo ajustar DistilBERT y usarlo para inferencia! ### Respuesta a preguntas Para usar BERT en la respuesta a preguntas, añade una cabecera de clasificación de span encima del modelo base de BERT. Esta capa lineal acepta los estados ocultos finales y realiza una transformación lineal para calcular los logits de inicio y fin del `span` correspondiente a la respuesta. Se calcula la pérdida de entropía cruzada entre los logits y la posición de la etiqueta para encontrar el span de texto más probable correspondiente a la respuesta.
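A modo de ilustración, este es un boceto mínimo (no es el código real de la biblioteca; los nombres `hidden_size` y `qa_outputs` son solo ilustrativos) de cómo una cabecera de span convierte los estados ocultos finales en logits de inicio y fin:

```python
import torch
from torch import nn

# Estados ocultos finales simulados del codificador: (batch, longitud_de_secuencia, hidden_size)
hidden_size = 768
estados_ocultos = torch.randn(1, 384, hidden_size)

# La cabecera de span es una única capa lineal con 2 salidas por token: logit de inicio y logit de fin
qa_outputs = nn.Linear(hidden_size, 2)
logits = qa_outputs(estados_ocultos)                # (1, 384, 2)
start_logits, end_logits = logits.split(1, dim=-1)  # un logit de inicio y uno de fin por token

# El span predicho va del token con mayor logit de inicio al token con mayor logit de fin
inicio = start_logits.squeeze(-1).argmax(dim=-1)
fin = end_logits.squeeze(-1).argmax(dim=-1)
```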
ยฟListo para probar la respuesta a preguntas? ยกConsulta nuestra guรญa completa de [respuesta a preguntas](tasks/question_answering) para aprender cรณmo ajustar DistilBERT y usarlo para inferencia! <Tip> ๐Ÿ’ก ยกObserva lo fรกcil que es usar BERT para diferentes tareas una vez que ha sido preentrenado! ยกSolo necesitas aรฑadir una cabecera especรญfica al modelo preentrenado para manipular los estados ocultos en tu salida deseada! </Tip> ### Generaciรณn de texto [GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2) es un modelo que solo tiene decodificador y se preentrena en una gran cantidad de texto. Puede generar texto convincente (ยกaunque no siempre verdadero!) dado un estรญmulo y completar otras tareas de procesamiento del lenguaje natural como responder preguntas, a pesar de no haber sido entrenado explรญcitamente para ello. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gpt2_architecture.png"/> </div> 1. GPT-2 utiliza [codificaciรณn de pares de bytes (BPE)](https://huggingface.co/docs/transformers/tokenizer_summary#bytepair-encoding-bpe) para tokenizar palabras y generar una incrustaciรณn de token. Se aรฑaden incrustaciones posicionales a las incrustaciones de token para indicar la posiciรณn de cada token en la secuencia. Las incrustaciones de entrada se pasan a travรฉs de varios bloques decodificadores para producir algรบn estado oculto final. Dentro de cada bloque decodificador, GPT-2 utiliza una capa de *autoatenciรณn enmascarada*, lo que significa que GPT-2 no puede atender a los tokens futuros. Solo puede atender a los tokens a la izquierda. Esto es diferente al token [`mask`] de BERT porque, en la autoatenciรณn enmascarada, se utiliza una mรกscara de atenciรณn para establecer la puntuaciรณn en `0` para los tokens futuros. 2. La salida del decodificador se pasa a una cabecera de modelado de lenguaje, que realiza una transformaciรณn lineal para convertir los estados ocultos en logits. La etiqueta es el siguiente token en la secuencia, que se crea desplazando los logits a la derecha en uno. Se calcula la pรฉrdida de entropรญa cruzada entre los logits desplazados y las etiquetas para obtener el siguiente token mรกs probable. El objetivo del preentrenamiento de GPT-2 se basa completamente en el [modelado de lenguaje causal](glossary#causal-language-modeling), prediciendo la siguiente palabra en una secuencia. Esto hace que GPT-2 sea especialmente bueno en tareas que implican la generaciรณn de texto. ยฟListo para probar la generaciรณn de texto? ยกConsulta nuestra guรญa completa de [modelado de lenguaje causal](tasks/language_modeling#modelado-de-lenguaje-causal) para aprender cรณmo ajustar DistilGPT-2 y usarlo para inferencia! <Tip> Para obtener mรกs informaciรณn sobre la generaciรณn de texto, ยกconsulta la guรญa de [estrategias de generaciรณn de texto](https://huggingface.co/docs/transformers/generation_strategies)! </Tip> ### Resumir Los modelos codificador-decodificador como [BART](https://huggingface.co/docs/transformers/model_doc/bart) y [T5](https://huggingface.co/docs/transformers/model_doc/t5) estรกn diseรฑados para el patrรณn de secuencia a secuencia de una tarea de resumen. Explicaremos cรณmo funciona BART en esta secciรณn, y luego podrรกs probar el ajuste fino de T5 al final. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bart_architecture.png"/> </div> 1. 
La arquitectura del codificador de BART es muy similar a la de BERT y acepta una incrustaciรณn de token y posicional del texto. BART se preentrena corrompiendo la entrada y luego reconstruyรฉndola con el decodificador. A diferencia de otros codificadores con estrategias especรญficas de corrupciรณn, BART puede aplicar cualquier tipo de corrupciรณn. Sin embargo, la estrategia de corrupciรณn de *relleno de texto* funciona mejor. En el relleno de texto, varios fragmentos de texto se reemplazan con un **รบnico** token [`mask`]. Esto es importante porque el modelo tiene que predecir los tokens enmascarados, y le enseรฑa al modelo a predecir la cantidad de tokens faltantes. Las incrustaciones de entrada y los fragmentos enmascarados se pasan a travรฉs del codificador para producir algunos estados ocultos finales, pero a diferencia de BERT, BART no aรฑade una red feedforward final al final para predecir una palabra. 2. La salida del codificador se pasa al decodificador, que debe predecir los tokens enmascarados y cualquier token no corrompido de la salida del codificador. Esto proporciona un contexto adicional para ayudar al decodificador a restaurar el texto original. La salida del decodificador se pasa a una cabeza de modelado de lenguaje, que realiza una transformaciรณn lineal para convertir los estados ocultos en logits. Se calcula la pรฉrdida de entropรญa cruzada entre los logits y la etiqueta, que es simplemente el token desplazado hacia la derecha. ยฟListo para probar la sumarizaciรณn? ยกConsulta nuestra guรญa completa de [Generaciรณn de resรบmenes](tasks/summarization) para aprender cรณmo ajustar T5 y usarlo para inferencia! <Tip> Para obtener mรกs informaciรณn sobre la generaciรณn de texto, ยกconsulta la guรญa de [estrategias de generaciรณn de texto](https://huggingface.co/docs/transformers/generation_strategies)! </Tip> ### Traducciรณn La traducciรณn es otro ejemplo de una tarea de secuencia a secuencia, lo que significa que puedes usar un modelo codificador-decodificador como [BART](https://huggingface.co/docs/transformers/model_doc/bart) o [T5](https://huggingface.co/docs/transformers/model_doc/t5) para hacerlo. Explicaremos cรณmo funciona BART en esta secciรณn, y luego podrรกs probar el ajuste fino de T5 al final. BART se adapta a la traducciรณn aรฑadiendo un codificador separado inicializado aleatoriamente para mapear un idioma fuente a una entrada que pueda ser decodificada en el idioma objetivo. Las incrustaciones de este nuevo codificador se pasan al codificador preentrenado en lugar de las incrustaciones de palabras originales. El codificador de origen se entrena actualizando el codificador de origen, las incrustaciones posicionales y las incrustaciones de entrada con la pรฉrdida de entropรญa cruzada de la salida del modelo. Los parรกmetros del modelo estรกn congelados en este primer paso, y todos los parรกmetros del modelo se entrenan juntos en el segundo paso. Desde entonces, BART ha sido seguido por una versiรณn multilingรผe, mBART, destinada a la traducciรณn y preentrenada en muchos idiomas diferentes. ยฟListo para probar la traducciรณn? ยกConsulta nuestra guรญa completa de [traducciรณn](https://huggingface.co/docs/transformers/tasks/translation) para aprender cรณmo ajustar T5 y usarlo para inferencia! <Tip> Para obtener mรกs informaciรณn sobre la generaciรณn de texto, ยกconsulta la guรญa de [estrategias de generaciรณn de texto](https://huggingface.co/docs/transformers/generation_strategies)! </Tip>
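Para terminar, un ejemplo mínimo de inferencia de traducción con un modelo de secuencia a secuencia ya entrenado (el checkpoint `Helsinki-NLP/opus-mt-en-es` se usa solo a modo de ilustración; cualquier modelo de traducción del Hub sirve):

```python
from transformers import pipeline

# Carga un modelo de traducción del Hub y ejecuta inferencia con una sola llamada
traductor = pipeline("translation", model="Helsinki-NLP/opus-mt-en-es")
resultado = traductor("The encoder-decoder architecture works well for translation.")
print(resultado[0]["translation_text"])
```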
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/es/debugging.md
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Debugging ## Debug de problemas de Network multi-GPU Cuando entrenas o infieres con `DistributedDataParallel` y varias GPUs, si encuentras problemas de intercomunicaciรณn entre procesos y/o nodos, puedes usar el siguiente script para diagnosticar problemas de red. ```bash wget https://raw.githubusercontent.com/huggingface/transformers/main/scripts/distributed/torch-distributed-gpu-test.py ``` Por ejemplo, para probar cรณmo interactรบan 2 GPUs, haz lo siguiente: ```bash python -m torch.distributed.run --nproc_per_node 2 --nnodes 1 torch-distributed-gpu-test.py ``` Si ambos procesos pueden hablar entre sรญ y asignar la memoria de la GPU, cada uno imprimirรก un status OK. Para mรกs GPUs o nodos, ajusta los argumentos en el script. Encontrarรกs muchos mรกs detalles dentro del script de diagnรณstico e incluso una receta de cรณmo ejecutarlo en un entorno SLURM. Un nivel adicional de debug es agregar la variable de entorno `NCCL_DEBUG=INFO` de la siguiente manera: ```bash NCCL_DEBUG=INFO python -m torch.distributed.run --nproc_per_node 2 --nnodes 1 torch-distributed-gpu-test.py ``` Esto mostrarรก mucha informaciรณn de debug relacionada con NCCL, que luego puedes buscar online si encuentras que reporta algรบn problema. O si no estรกs seguro de cรณmo interpretar el output, puedes compartir el archivo de log en un Issue. ## Detecciรณn de Underflow y Overflow <Tip> Esta funciรณn estรก disponible actualmente sรณlo para PyTorch. </Tip> <Tip> Para el entrenamiento multi-GPU, requiere DDP (`torch.distributed.launch`). </Tip> <Tip> Esta funciรณn puede utilizarse con cualquier modelo basado en `nn.Module`. </Tip> Si empiezas a obtener `loss=NaN` o el modelo muestra algรบn otro comportamiento anormal debido a `inf` o `nan` en activations o weights hay que descubrir dรณnde se produce el primer underflow o overflow y quรฉ lo ha provocado. Por suerte puedes lograrlo fรกcilmente activando un mรณdulo especial que harรก la detecciรณn automรกticamente. Si estรกs usando [`Trainer`], solo necesitas aรฑadir: ```bash --debug underflow_overflow ``` a los argumentos normales de la lรญnea de comandos, o pasar `debug="underflow_overflow"` al crear el objeto [`TrainingArguments`]. Si estรกs usando tu propio bucle de entrenamiento u otro Trainer puedes lograr lo mismo con: ```python from .debug_utils import DebugUnderflowOverflow debug_overflow = DebugUnderflowOverflow(model) ``` [`~debug_utils.DebugUnderflowOverflow`] inserta hooks en el modelo que inmediatamente despuรฉs de cada forward testearรก las variables de input y output y tambiรฉn los weights del mรณdulo correspondiente. 
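Para entender el mecanismo, este es un boceto conceptual simplificado (no la implementación real de [`~debug_utils.DebugUnderflowOverflow`]) de cómo un hook de forward puede comprobar si aparece `inf` o `nan` en la salida de cada submódulo:

```python
import torch
from torch import nn


def revisar_inf_nan(modulo, entradas, salida):
    # Se ejecuta inmediatamente después del forward de cada submódulo
    if isinstance(salida, torch.Tensor) and not torch.isfinite(salida).all():
        raise RuntimeError(f"inf/nan detectado en la salida de {modulo.__class__.__name__}")


modelo = nn.Sequential(nn.Linear(4, 4), nn.ReLU())
for submodulo in modelo.modules():
    submodulo.register_forward_hook(revisar_inf_nan)

modelo(torch.randn(2, 4))  # si apareciera inf/nan, el hook lanzaría un error aquí
```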
Tan pronto como se detecte `inf` o `nan` en al menos un elemento de las activations o weights, el programa lanzará una aserción e imprimirá un informe como este (esto fue capturado con `google/mt5-small` bajo fp16 mixed precision): ``` Detected inf/nan during batch_number=0 Last 21 forward frames: abs min abs max metadata encoder.block.1.layer.1.DenseReluDense.dropout Dropout 0.00e+00 2.57e+02 input[0] 0.00e+00 2.85e+02 output [...] encoder.block.2.layer.0 T5LayerSelfAttention 6.78e-04 3.15e+03 input[0] 2.65e-04 3.42e+03 output[0] None output[1] 2.25e-01 1.00e+04 output[2] encoder.block.2.layer.1.layer_norm T5LayerNorm 8.69e-02 4.18e-01 weight 2.65e-04 3.42e+03 input[0] 1.79e-06 4.65e+00 output encoder.block.2.layer.1.DenseReluDense.wi_0 Linear 2.17e-07 4.50e+00 weight 1.79e-06 4.65e+00 input[0] 2.68e-06 3.70e+01 output encoder.block.2.layer.1.DenseReluDense.wi_1 Linear 8.08e-07 2.66e+01 weight 1.79e-06 4.65e+00 input[0] 1.27e-04 2.37e+02 output encoder.block.2.layer.1.DenseReluDense.dropout Dropout 0.00e+00 8.76e+03 input[0] 0.00e+00 9.74e+03 output encoder.block.2.layer.1.DenseReluDense.wo Linear 1.01e-06 6.44e+00 weight 0.00e+00 9.74e+03 input[0] 3.18e-04 6.27e+04 output encoder.block.2.layer.1.DenseReluDense T5DenseGatedGeluDense 1.79e-06 4.65e+00 input[0] 3.18e-04 6.27e+04 output encoder.block.2.layer.1.dropout Dropout 3.18e-04 6.27e+04 input[0] 0.00e+00 inf output ``` El output del ejemplo se ha recortado en el centro por razones de brevedad. La segunda columna muestra el valor del elemento más grande en términos absolutos, por lo que si observas con detenimiento los últimos frames, los inputs y outputs estaban en el rango de `1e4`. Así que cuando este entrenamiento se hizo con fp16 mixed precision, el último paso sufrió overflow (ya que bajo `fp16` el mayor número antes de `inf` es `64e3`). Para evitar overflows en `fp16`, las activations deben permanecer muy por debajo de `1e4`, porque `1e4 * 1e4 = 1e8` por lo que cualquier matrix multiplication con grandes activations va a llevar a una condición de overflow numérico. Al principio del output puedes descubrir en qué número de batch se produjo el problema (aquí `Detected inf/nan during batch_number=0` significa que el problema se produjo en el primer batch). Cada frame del informe comienza declarando el nombre completamente calificado del módulo sobre el que informa ese frame. Si nos fijamos sólo en este frame: ``` encoder.block.2.layer.1.layer_norm T5LayerNorm 8.69e-02 4.18e-01 weight 2.65e-04 3.42e+03 input[0] 1.79e-06 4.65e+00 output ``` Aquí, `encoder.block.2.layer.1.layer_norm` indica que era una layer norm para la primera capa, del segundo block del encoder. Y la call específica del `forward` es `T5LayerNorm`. Veamos los últimos frames de ese informe: ``` Detected inf/nan during batch_number=0 Last 21 forward frames: abs min abs max metadata [...]
encoder.block.2.layer.1.DenseReluDense.wi_0 Linear 2.17e-07 4.50e+00 weight 1.79e-06 4.65e+00 input[0] 2.68e-06 3.70e+01 output encoder.block.2.layer.1.DenseReluDense.wi_1 Linear 8.08e-07 2.66e+01 weight 1.79e-06 4.65e+00 input[0] 1.27e-04 2.37e+02 output encoder.block.2.layer.1.DenseReluDense.wo Linear 1.01e-06 6.44e+00 weight 0.00e+00 9.74e+03 input[0] 3.18e-04 6.27e+04 output encoder.block.2.layer.1.DenseReluDense T5DenseGatedGeluDense 1.79e-06 4.65e+00 input[0] 3.18e-04 6.27e+04 output encoder.block.2.layer.1.dropout Dropout 3.18e-04 6.27e+04 input[0] 0.00e+00 inf output ``` El รบltimo frame informa para la funciรณn `Dropout.forward` con la primera entrada para el รบnico input y la segunda para el รบnico output. Puedes ver que fue llamada desde un atributo `dropout` dentro de la clase `DenseReluDense`. Podemos ver que ocurriรณ durante la primera capa, del segundo block, durante el primer batch. Por รบltimo, el mayor absoluto elementos de input fue `6.27e+04` y el mismo para el output fue `inf`. Puedes ver aquรญ, que `T5DenseGatedGeluDense.forward` resultรณ en output activations, cuyo valor mรกximo absoluto fue alrededor de 62.7K, que estรก muy cerca del lรญmite mรกximo de fp16 de 64K. En el siguiente frame tenemos `Dropout`, el cual renormaliza los weights, despuรฉs de poner a cero algunos de los elementos, lo que empuja el valor mรกximo absoluto a mรกs de 64K, y obtenemos un overflow (`inf`). Como puedes ver son los frames anteriores los que tenemos que mirar cuando los nรบmeros empiezan a ser muy grandes para nรบmeros fp16. Combinemos el informe con el cรณdigo de `models/t5/modeling_t5.py`: ```python class T5DenseGatedGeluDense(nn.Module): def __init__(self, config): super().__init__() self.wi_0 = nn.Linear(config.d_model, config.d_ff, bias=False) self.wi_1 = nn.Linear(config.d_model, config.d_ff, bias=False) self.wo = nn.Linear(config.d_ff, config.d_model, bias=False) self.dropout = nn.Dropout(config.dropout_rate) self.gelu_act = ACT2FN["gelu_new"] def forward(self, hidden_states): hidden_gelu = self.gelu_act(self.wi_0(hidden_states)) hidden_linear = self.wi_1(hidden_states) hidden_states = hidden_gelu * hidden_linear hidden_states = self.dropout(hidden_states) hidden_states = self.wo(hidden_states) return hidden_states ``` Ahora es fรกcil ver la call `dropout`, y tambiรฉn todas las calls anteriores. Dado que la detecciรณn se produce en un forward hook, estos informes se imprimen inmediatamente despuรฉs de que cada `forward` responda. Volviendo al informe completo, para actuar sobre รฉl y arreglar el problema, tenemos que subir unos cuantos frames donde los nรบmeros empezaron a subir y probablemente cambiar al modo `fp32` aquรญ, para que los nรบmeros no sufran overflow cuando se multipliquen o al sumarlos. Por supuesto, puede haber otras soluciones. 
Por ejemplo, podrรญamos desactivar `amp` temporalmente si estรก activado, despuรฉs de mover el original `forward` dentro de un helper wrapper, asรญ: ```python def _forward(self, hidden_states): hidden_gelu = self.gelu_act(self.wi_0(hidden_states)) hidden_linear = self.wi_1(hidden_states) hidden_states = hidden_gelu * hidden_linear hidden_states = self.dropout(hidden_states) hidden_states = self.wo(hidden_states) return hidden_states import torch def forward(self, hidden_states): if torch.is_autocast_enabled(): with torch.cuda.amp.autocast(enabled=False): return self._forward(hidden_states) else: return self._forward(hidden_states) ``` Como el detector automรกtico sรณlo informa de los inputs y outputs de los frames completos, una vez que sepas dรณnde buscar, puedes analizar tambiรฉn las etapas intermedias de una funciรณn especรญfica de `forward`. En este caso, puede utilizar la funciรณn funciรณn de ayuda `detect_overflow` para inyectar el detector donde quieras, por ejemplo: ```python from debug_utils import detect_overflow class T5LayerFF(nn.Module): [...] def forward(self, hidden_states): forwarded_states = self.layer_norm(hidden_states) detect_overflow(forwarded_states, "after layer_norm") forwarded_states = self.DenseReluDense(forwarded_states) detect_overflow(forwarded_states, "after DenseReluDense") return hidden_states + self.dropout(forwarded_states) ``` Puedes ver que hemos aรฑadido 2 de estos y ahora se trackea si `inf` o `nan` para `forwarded_states` fue detectado en algรบn punto intermedio. De hecho, el detector ya informa de esto porque cada una de las llamadas en el ejemplo anterior es un `nn.Module`, pero digamos que si tuvieras algunos cรกlculos directos locales, asรญ es como lo harรญas. Ademรกs, si estรกs instanciando el debugger en tu propio cรณdigo, puedes ajustar el nรบmero de frames impresos de su valor por defecto, por ejemplo: ```python from .debug_utils import DebugUnderflowOverflow debug_overflow = DebugUnderflowOverflow(model, max_frames_to_save=100) ``` ### Rastreo de valores mรญnimos y mรกximos absolutos de batches especรญficos La misma clase de debugging se puede utilizar para el rastreo por batches con la funciรณn de detecciรณn de underflow/overflow desactivada. Digamos que quieres ver los valores mรญnimos y mรกximos absolutos de todos los ingredientes de cada call `forward` de un determinado batch, y sรณlo hacerlo para los batches 1 y 3. Entonces instancias esta clase como: ```python debug_overflow = DebugUnderflowOverflow(model, trace_batch_nums=[1, 3]) ``` Y ahora los batches 1 y 3 completos serรกn rastreados usando el mismo formato que el detector de underflow/overflow. Los batches son 0-index. Esto es muy รบtil si sabes que el programa empieza a comportarse mal despuรฉs de un determinado nรบmero de batch, para que puedas avanzar rรกpidamente hasta esa รกrea. Aquรญ hay un ejemplo de output recortado para tal configuraciรณn: ``` *** Starting batch number=1 *** abs min abs max metadata shared Embedding 1.01e-06 7.92e+02 weight 0.00e+00 2.47e+04 input[0] 5.36e-05 7.92e+02 output [...] decoder.dropout Dropout 1.60e-07 2.27e+01 input[0] 0.00e+00 2.52e+01 output decoder T5Stack not a tensor output lm_head Linear 1.01e-06 7.92e+02 weight 0.00e+00 1.11e+00 input[0] 6.06e-02 8.39e+01 output T5ForConditionalGeneration not a tensor output *** Starting batch number=3 *** abs min abs max metadata shared Embedding 1.01e-06 7.92e+02 weight 0.00e+00 2.78e+04 input[0] 5.36e-05 7.92e+02 output [...] 
``` Aquรญ obtendrรกs un gran nรบmero de frames mostrados - tantos como forward calls haya en tu modelo, por lo que puede o no ser lo que quieras, pero a veces puede ser mรกs fรกcil de usar para debug que un debugger normal. Por ejemplo, si un problema comienza a ocurrir en el batch 150. Entonces puedes mostrar las trazas de los batches 149 y 150 y comparar dรณnde los nรบmeros empezaron a divergir. Tambiรฉn puedes especificar el nรบmero de batch despuรฉs del cual se debe detener el entrenamiento, con: ```python debug_overflow = DebugUnderflowOverflow(model, trace_batch_nums=[1, 3], abort_after_batch_num=3) ```
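Como referencia rápida, este es un ejemplo autónomo (los valores son solo ilustrativos y el import usa la ruta estándar del paquete `transformers`) que instancia el debugger con las opciones anteriores sobre el mismo modelo usado en los informes de arriba:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from transformers.debug_utils import DebugUnderflowOverflow

tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/mt5-small")

# Rastrea los batches 1 y 3 y detén el entrenamiento después del batch 3
debug_overflow = DebugUnderflowOverflow(model, trace_batch_nums=[1, 3], abort_after_batch_num=3)

# A partir de aquí, cada forward del modelo queda monitorizado por los hooks del debugger
inputs = tokenizer("Hola mundo", return_tensors="pt")
outputs = model(**inputs, decoder_input_ids=inputs["input_ids"])
```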
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/es/index.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๐Ÿค— Transformers Machine Learning de รบltima generaciรณn para PyTorch, TensorFlow y JAX. ๐Ÿค— Transformers proporciona APIs para descargar y entrenar fรกcilmente modelos preentrenados de รบltima generaciรณn. El uso de modelos preentrenados puede reducir tus costos de cรณmputo, tu huella de carbono y ahorrarte tiempo al entrenar un modelo desde cero. Los modelos se pueden utilizar en diferentes modalidades, tales como: * ๐Ÿ“ Texto: clasificaciรณn de texto, extracciรณn de informaciรณn, respuesta a preguntas, resumir, traducciรณn y generaciรณn de texto en mรกs de 100 idiomas. * ๐Ÿ–ผ๏ธ Imรกgenes: clasificaciรณn de imรกgenes, detecciรณn de objetos y segmentaciรณn. * ๐Ÿ—ฃ๏ธ Audio: reconocimiento de voz y clasificaciรณn de audio. * ๐Ÿ™ Multimodal: respuesta a preguntas en tablas, reconocimiento รณptico de caracteres, extracciรณn de informaciรณn de documentos escaneados, clasificaciรณn de videos y respuesta visual a preguntas. Nuestra biblioteca admite una integraciรณn perfecta entre tres de las bibliotecas de deep learning mรกs populares: [PyTorch](https://pytorch.org/), [TensorFlow](https://www.tensorflow.org/) y [JAX](https://jax.readthedocs.io/en/latest/). Entrena tu modelo con tres lรญneas de cรณdigo en un framework y cรกrgalo para inferencia con otro. Cada arquitectura de ๐Ÿค— Transformers se define en un mรณdulo de Python independiente para que se puedan personalizar fรกcilmente para investigaciรณn y experimentos. ## Si estรกs buscando soporte personalizado del equipo de Hugging Face <a target="_blank" href="https://huggingface.co/support"> <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="width: 100%; max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);"> </a> ## Contenidos La documentaciรณn estรก organizada en cuatro partes: - **EMPEZAR** contiene un recorrido rรกpido e instrucciones de instalaciรณn para comenzar a usar ๐Ÿค— Transformers. - **TUTORIALES** es un excelente lugar para comenzar. Esta secciรณn te ayudarรก a obtener las habilidades bรกsicas que necesitas para comenzar a usar ๐Ÿค— Transformers. - **GUรAS PRรCTICAS** te mostrarรก cรณmo lograr un objetivo especรญfico, cรณmo hacer fine-tuning a un modelo preentrenado para el modelado de lenguaje o cรณmo crear un cabezal para un modelo personalizado. - **GUรAS CONCEPTUALES** proporciona mรกs discusiรณn y explicaciรณn de los conceptos e ideas subyacentes detrรกs de los modelos, las tareas y la filosofรญa de diseรฑo de ๐Ÿค— Transformers. La biblioteca actualmente contiene implementaciones de JAX, PyTorch y TensorFlow, pesos de modelos preentrenados, scripts de uso y utilidades de conversiรณn para los siguientes modelos. 
### Modelos compatibles <!--This list is updated automatically from the README with _make fix-copies_. Do not update manually! --> 1. **[ALBERT](model_doc/albert)** (de Google Research y el Instituto Tecnolรณgico de Toyota en Chicago) publicado con el paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), por Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut. 1. **[ALIGN](model_doc/align)** (de Google Research) publicado con el paper [Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision](https://arxiv.org/abs/2102.05918) por Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig. 1. **[BART](model_doc/bart)** (de Facebook) publicado con el paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) por Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov y Luke Zettlemoyer. 1. **[BARThez](model_doc/barthez)** (de ร‰cole polytechnique) publicado con el paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) por Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis. 1. **[BARTpho](model_doc/bartpho)** (de VinAI Research) publicado con el paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) por Nguyen Luong Tran, Duong Minh Le y Dat Quoc Nguyen. 1. **[BEiT](model_doc/beit)** (de Microsoft) publicado con el paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) por Hangbo Bao, Li Dong, Furu Wei. 1. **[BERT](model_doc/bert)** (de Google) publicado con el paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) por Jacob Devlin, Ming-Wei Chang, Kenton Lee y Kristina Toutanova. 1. **[BERTweet](model_doc/bertweet)** (de VinAI Research) publicado con el paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) por Dat Quoc Nguyen, Thanh Vu y Anh Tuan Nguyen. 1. **[BERT For Sequence Generation](model_doc/bert-generation)** (de Google) publicado con el paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) por Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 1. **[BigBird-RoBERTa](model_doc/big_bird)** (de Google Research) publicado con el paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) por Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 1. **[BigBird-Pegasus](model_doc/bigbird_pegasus)** (de Google Research) publicado con el paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) por Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 1. **[Blenderbot](model_doc/blenderbot)** (de Facebook) publicado con el paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) por Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 1. 
**[BlenderbotSmall](model_doc/blenderbot-small)** (de Facebook) publicado con el paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) por Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 1. **[BORT](model_doc/bort)** (de Alexa) publicado con el paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) por Adrian de Wynter y Daniel J. Perry. 1. **[ByT5](model_doc/byt5)** (de Google Research) publicado con el paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) por Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel. 1. **[CamemBERT](model_doc/camembert)** (de Inria/Facebook/Sorbonne) publicado con el paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) por Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suรกrez*, Yoann Dupont, Laurent Romary, ร‰ric Villemonte de la Clergerie, Djamรฉ Seddah y Benoรฎt Sagot. 1. **[CANINE](model_doc/canine)** (de Google Research) publicado con el paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) por Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting. 1. **[ConvNeXT](model_doc/convnext)** (de Facebook AI) publicado con el paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) por Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie. 1. **[ConvNeXTV2](model_doc/convnextv2)** (de Facebook AI) publicado con el paper [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) por Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie. 1. **[CLIP](model_doc/clip)** (de OpenAI) publicado con el paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) por Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever. 1. **[ConvBERT](model_doc/convbert)** (de YituTech) publicado con el paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) por Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan. 1. **[CPM](model_doc/cpm)** (de Universidad de Tsinghua) publicado con el paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) por Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun. 1. **[CTRL](model_doc/ctrl)** (de Salesforce) publicado con el paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) por Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong y Richard Socher. 1. 
**[Data2Vec](model_doc/data2vec)** (de Facebook) publicado con el paper [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) por Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli. 1. **[DeBERTa](model_doc/deberta)** (de Microsoft) publicado con el paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) por Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 1. **[DeBERTa-v2](model_doc/deberta-v2)** (de Microsoft) publicado con el paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) por Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 1. **[Decision Transformer](model_doc/decision_transformer)** (de Berkeley/Facebook/Google) publicado con el paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) por Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch. 1. **[DiT](model_doc/dit)** (de Microsoft Research) publicado con el paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) por Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei. 1. **[DeiT](model_doc/deit)** (de Facebook) publicado con el paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) por Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervรฉ Jรฉgou. 1. **[DETR](model_doc/detr)** (de Facebook) publicado con el paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) por Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko. 1. **[DialoGPT](model_doc/dialogpt)** (de Microsoft Research) publicado con el paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) por Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan. 1. **[DistilBERT](model_doc/distilbert)** (de HuggingFace), publicado junto con el paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) por Victor Sanh, Lysandre Debut y Thomas Wolf. Se ha aplicado el mismo mรฉtodo para comprimir GPT2 en [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), RoBERTa en [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), BERT multilingรผe en [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) y una versiรณn alemana de DistilBERT. 1. **[DPR](model_doc/dpr)** (de Facebook) publicado con el paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) por Vladimir Karpukhin, Barlas OฤŸuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, y Wen-tau Yih. 1. **[DPT](master/model_doc/dpt)** (de Intel Labs) publicado con el paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) por Renรฉ Ranftl, Alexey Bochkovskiy, Vladlen Koltun. 1. 
**[EfficientNet](model_doc/efficientnet)** (from Google Research) released with the paper [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan and Quoc V. Le. 1. **[EncoderDecoder](model_doc/encoder-decoder)** (de Google Research) publicado con el paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) por Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 1. **[ELECTRA](model_doc/electra)** (de Google Research/Universidad de Stanford) publicado con el paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) por Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning. 1. **[FlauBERT](model_doc/flaubert)** (de CNRS) publicado con el paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) por Hang Le, Loรฏc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoรฎt Crabbรฉ, Laurent Besacier, Didier Schwab. 1. **[FNet](model_doc/fnet)** (de Google Research) publicado con el paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) por James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon. 1. **[Funnel Transformer](model_doc/funnel)** (de CMU/Google Brain) publicado con el paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) por Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le. 1. **[GLPN](model_doc/glpn)** (de KAIST) publicado con el paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) por Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim. 1. **[GPT](model_doc/openai-gpt)** (de OpenAI) publicado con el paper [Improving Language Understanding by Generative Pre-Training](https://openai.com/research/language-unsupervised/) por Alec Radford, Karthik Narasimhan, Tim Salimans y Ilya Sutskever. 1. **[GPT-2](model_doc/gpt2)** (de OpenAI) publicado con el paper [Language Models are Unsupervised Multitask Learners](https://openai.com/research/better-language-models/) por Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei y Ilya Sutskever. 1. **[GPT-J](model_doc/gptj)** (de EleutherAI) publicado con el repositorio [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) por Ben Wang y Aran Komatsuzaki. 1. **[GPT Neo](model_doc/gpt_neo)** (de EleutherAI) publicado en el paper [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) por Sid Black, Stella Biderman, Leo Gao, Phil Wang y Connor Leahy. 1. **[GPTSAN-japanese](model_doc/gptsan-japanese)** released with [GPTSAN](https://github.com/tanreinama/GPTSAN) by Toshiyuki Sakamoto (tanreinama). 1. **[Hubert](model_doc/hubert)** (de Facebook) publicado con el paper [HuBERT: Self-Supervised Speech Representation Learning por Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) por Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed. 1. **[I-BERT](model_doc/ibert)** (de Berkeley) publicado con el paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) por Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer. 1. 
**[ImageGPT](model_doc/imagegpt)** (de OpenAI) publicado con el paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) por Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever. 1. **[LayoutLM](model_doc/layoutlm)** (de Microsoft Research Asia) publicado con el paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) por Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou. 1. **[LayoutLMv2](model_doc/layoutlmv2)** (de Microsoft Research Asia) publicado con el paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) por Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou. 1. **[LayoutXLM](model_doc/layoutxlm)** (de Microsoft Research Asia) publicado con el paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) por Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei. 1. **[LED](model_doc/led)** (de AllenAI) publicado con el paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) por Iz Beltagy, Matthew E. Peters, Arman Cohan. 1. **[Longformer](model_doc/longformer)** (de AllenAI) publicado con el paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) por Iz Beltagy, Matthew E. Peters, Arman Cohan. 1. **[LUKE](model_doc/luke)** (de Studio Ousia) publicado con el paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) por Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto. 1. **[mLUKE](model_doc/mluke)** (de Studio Ousia) publicado con el paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) por Ryokan Ri, Ikuya Yamada, y Yoshimasa Tsuruoka. 1. **[LXMERT](model_doc/lxmert)** (de UNC Chapel Hill) publicado con el paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) por Hao Tan y Mohit Bansal. 1. **[M2M100](model_doc/m2m_100)** (de Facebook) publicado con el paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) por Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin. 1. **[MarianMT](model_doc/marian)** Modelos de traducciรณn automรกtica entrenados usando [OPUS](http://opus.nlpl.eu/) data por Jรถrg Tiedemann. El [Marian Framework](https://marian-nmt.github.io/) estรก siendo desarrollado por el equipo de traductores de Microsoft. 1. **[Mask2Former](model_doc/mask2former)** (de FAIR y UIUC) publicado con el paper [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) por Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, Rohit Girdhar. 1. **[MaskFormer](model_doc/maskformer)** (de Meta y UIUC) publicado con el paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) por Bowen Cheng, Alexander G. Schwing, Alexander Kirillov. 
1. **[MBart](model_doc/mbart)** (de Facebook) publicado con el paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) por Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer. 1. **[MBart-50](model_doc/mbart)** (de Facebook) publicado con el paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) por Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan. 1. **[Megatron-BERT](model_doc/megatron-bert)** (de NVIDIA) publicado con el paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) por Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper y Bryan Catanzaro. 1. **[Megatron-GPT2](model_doc/megatron_gpt2)** (de NVIDIA) publicado con el paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) por Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper y Bryan Catanzaro. 1. **[MPNet](model_doc/mpnet)** (de Microsoft Research) publicado con el paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) por Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu. 1. **[MT5](model_doc/mt5)** (de Google AI) publicado con el paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) por Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel. 1. **[Nystrรถmformer](model_doc/nystromformer)** (de la Universidad de Wisconsin - Madison) publicado con el paper [Nystrรถmformer: A Nystrรถm-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) por Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh. 1. **[OneFormer](model_doc/oneformer)** (de la SHI Labs) publicado con el paper [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) por Jitesh Jain, Jiachen Li, MangTik Chiu, Ali Hassani, Nikita Orlov, Humphrey Shi. 1. **[Pegasus](model_doc/pegasus)** (de Google) publicado con el paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) por Jingqing Zhang, Yao Zhao, Mohammad Saleh y Peter J. Liu. 1. **[Perceiver IO](model_doc/perceiver)** (de Deepmind) publicado con el paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) por Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hรฉnaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, Joรฃo Carreira. 1. **[PhoBERT](model_doc/phobert)** (de VinAI Research) publicado con el paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) por Dat Quoc Nguyen y Anh Tuan Nguyen. 1. **[PLBart](model_doc/plbart)** (de UCLA NLP) publicado con el paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) por Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang. 1. 
**[PoolFormer](model_doc/poolformer)** (de Sea AI Labs) publicado con el paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) por Yu, Weihao y Luo, Mi y Zhou, Pan y Si, Chenyang y Zhou, Yichen y Wang, Xinchao y Feng, Jiashi y Yan, Shuicheng. 1. **[ProphetNet](model_doc/prophetnet)** (de Microsoft Research) publicado con el paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) por Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang y Ming Zhou. 1. **[QDQBert](model_doc/qdqbert)** (de NVIDIA) publicado con el paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) por Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev y Paulius Micikevicius. 1. **[REALM](model_doc/realm.html)** (de Google Research) publicado con el paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) por Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat y Ming-Wei Chang. 1. **[Reformer](model_doc/reformer)** (de Google Research) publicado con el paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) por Nikita Kitaev, ลukasz Kaiser, Anselm Levskaya. 1. **[RemBERT](model_doc/rembert)** (de Google Research) publicado con el paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/abs/2010.12821) por Hyung Won Chung, Thibault Fรฉvry, Henry Tsai, M. Johnson, Sebastian Ruder. 1. **[RegNet](model_doc/regnet)** (de META Platforms) publicado con el paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) por Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollรกr. 1. **[ResNet](model_doc/resnet)** (de Microsoft Research) publicado con el paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) por Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 1. **[RoBERTa](model_doc/roberta)** (de Facebook), publicado junto con el paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) por Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov. 1. **[RoFormer](model_doc/roformer)** (de ZhuiyiTechnology), publicado junto con el paper [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/abs/2104.09864) por Jianlin Su y Yu Lu y Shengfeng Pan y Bo Wen y Yunfeng Liu. 1. **[SegFormer](model_doc/segformer)** (de NVIDIA) publicado con el paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) por Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo. 1. **[SEW](model_doc/sew)** (de ASAPP) publicado con el paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) por Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 1. **[SEW-D](model_doc/sew_d)** (de ASAPP) publicado con el paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) por Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 1. 
**[SpeechToTextTransformer](model_doc/speech_to_text)** (de Facebook), publicado junto con el paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) por Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino. 1. **[SpeechToTextTransformer2](model_doc/speech_to_text_2)** (de Facebook), publicado junto con el paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) por Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau. 1. **[Splinter](model_doc/splinter)** (de Universidad de Tel Aviv), publicado junto con el paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) pory Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy. 1. **[SqueezeBert](model_doc/squeezebert)** (de Berkeley) publicado con el paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) por Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, y Kurt W. Keutzer. 1. **[Swin Transformer](model_doc/swin)** (de Microsoft) publicado con el paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) por Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo. 1. **[T5](model_doc/t5)** (de Google AI) publicado con el paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) por Colin Raffel y Noam Shazeer y Adam Roberts y Katherine Lee y Sharan Narang y Michael Matena y Yanqi Zhou y Wei Li y Peter J. Liu. 1. **[T5v1.1](model_doc/t5v1.1)** (de Google AI) publicado en el repositorio [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) por Colin Raffel y Noam Shazeer y Adam Roberts y Katherine Lee y Sharan Narang y Michael Matena y Yanqi Zhou y Wei Li y Peter J. Liu. 1. **[TAPAS](model_doc/tapas)** (de Google AI) publicado con el paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) por Jonathan Herzig, Paweล‚ Krzysztof Nowak, Thomas Mรผller, Francesco Piccinno y Julian Martin Eisenschlos. 1. **[TAPEX](model_doc/tapex)** (de Microsoft Research) publicado con el paper [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) por Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. 1. **[Transformer-XL](model_doc/transfo-xl)** (de Google/CMU) publicado con el paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) por Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov. 1. **[TrOCR](model_doc/trocr)** (de Microsoft), publicado junto con el paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) por Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei. 1. **[UniSpeech](model_doc/unispeech)** (de Microsoft Research) publicado con el paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) por Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang. 1. 
**[UniSpeechSat](model_doc/unispeech-sat)** (de Microsoft Research) publicado con el paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) por Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu. 1. **[VAN](model_doc/van)** (de la Universidad de Tsinghua y la Universidad de Nankai) publicado con el paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) por Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu. 1. **[ViLT](model_doc/vilt)** (de NAVER AI Lab/Kakao Enterprise/Kakao Brain) publicado con el paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) por Wonjae Kim, Bokyung Son, Ildoo Kim. 1. **[Vision Transformer (ViT)](model_doc/vit)** (de Google AI) publicado con el paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) por Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. 1. **[ViTMAE](model_doc/vit_mae)** (de Meta AI) publicado con el paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) por Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollรกr, Ross Girshick. 1. **[VisualBERT](model_doc/visual_bert)** (de UCLA NLP) publicado con el paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) por Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang. 1. **[WavLM](model_doc/wavlm)** (de Microsoft Research) publicado con el paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) por Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei. 1. **[Wav2Vec2](model_doc/wav2vec2)** (de Facebook AI) publicado con el paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) por Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli. 1. **[Wav2Vec2Phoneme](model_doc/wav2vec2_phoneme)** (de Facebook AI) publicado con el paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) por Qiantong Xu, Alexei Baevski, Michael Auli. 1. **[XGLM](model_doc/xglm)** (de Facebook AI) publicado con el paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) por Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li. 1. **[XLM](model_doc/xlm)** (de Facebook) publicado junto con el paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) por Guillaume Lample y Alexis Conneau. 1. 
**[XLM-ProphetNet](model_doc/xlm-prophetnet)** (de Microsoft Research) publicado con el paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) por Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang y Ming Zhou. 1. **[XLM-RoBERTa](model_doc/xlm-roberta)** (de Facebook AI), publicado junto con el paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) por Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer y Veselin Stoyanov. 1. **[XLM-RoBERTa-XL](model_doc/xlm-roberta-xl)** (de Facebook AI), publicado junto con el paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) por Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau. 1. **[XLNet](model_doc/xlnet)** (de Google/CMU) publicado con el paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) por Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le. 1. **[XLSR-Wav2Vec2](model_doc/xlsr_wav2vec2)** (de Facebook AI) publicado con el paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) por Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli. 1. **[XLS-R](model_doc/xls_r)** (de Facebook AI) publicado con el paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) por Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli. 1. **[YOSO](model_doc/yoso)** (de la Universidad de Wisconsin-Madison) publicado con el paper [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) por Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh. ### Frameworks compatibles La siguiente tabla muestra el soporte actual en la biblioteca para cada uno de esos modelos: si tienen un tokenizador de Python (llamado "slow"), si tienen un tokenizador "fast" respaldado por la biblioteca 🤗 Tokenizers, y si tienen soporte en Jax (a través de Flax), PyTorch y/o TensorFlow. <!--This table is updated automatically from the auto modules with _make fix-copies_.
Do not update manually!--> | Modelo | Tokenizer slow | Tokenizer fast | PyTorch support | TensorFlow support | Flax Support | |:---------------------------:|:--------------:|:--------------:|:---------------:|:------------------:|:------------:| | ALBERT | โœ… | โœ… | โœ… | โœ… | โœ… | | BART | โœ… | โœ… | โœ… | โœ… | โœ… | | BEiT | โŒ | โŒ | โœ… | โŒ | โœ… | | BERT | โœ… | โœ… | โœ… | โœ… | โœ… | | Bert Generation | โœ… | โŒ | โœ… | โŒ | โŒ | | BigBird | โœ… | โœ… | โœ… | โŒ | โœ… | | BigBirdPegasus | โŒ | โŒ | โœ… | โŒ | โŒ | | Blenderbot | โœ… | โœ… | โœ… | โœ… | โœ… | | BlenderbotSmall | โœ… | โœ… | โœ… | โœ… | โœ… | | CamemBERT | โœ… | โœ… | โœ… | โœ… | โŒ | | Canine | โœ… | โŒ | โœ… | โŒ | โŒ | | CLIP | โœ… | โœ… | โœ… | โœ… | โœ… | | ConvBERT | โœ… | โœ… | โœ… | โœ… | โŒ | | ConvNext | โŒ | โŒ | โœ… | โœ… | โŒ | | CTRL | โœ… | โŒ | โœ… | โœ… | โŒ | | Data2VecAudio | โŒ | โŒ | โœ… | โŒ | โŒ | | Data2VecText | โŒ | โŒ | โœ… | โŒ | โŒ | | DeBERTa | โœ… | โœ… | โœ… | โœ… | โŒ | | DeBERTa-v2 | โœ… | โŒ | โœ… | โœ… | โŒ | | Decision Transformer | โŒ | โŒ | โœ… | โŒ | โŒ | | DeiT | โŒ | โŒ | โœ… | โŒ | โŒ | | DETR | โŒ | โŒ | โœ… | โŒ | โŒ | | DistilBERT | โœ… | โœ… | โœ… | โœ… | โœ… | | DPR | โœ… | โœ… | โœ… | โœ… | โŒ | | DPT | โŒ | โŒ | โœ… | โŒ | โŒ | | ELECTRA | โœ… | โœ… | โœ… | โœ… | โœ… | | Encoder decoder | โŒ | โŒ | โœ… | โœ… | โœ… | | FairSeq Machine-Translation | โœ… | โŒ | โœ… | โŒ | โŒ | | FlauBERT | โœ… | โŒ | โœ… | โœ… | โŒ | | FNet | โœ… | โœ… | โœ… | โŒ | โŒ | | Funnel Transformer | โœ… | โœ… | โœ… | โœ… | โŒ | | GLPN | โŒ | โŒ | โœ… | โŒ | โŒ | | GPT Neo | โŒ | โŒ | โœ… | โŒ | โœ… | | GPT-J | โŒ | โŒ | โœ… | โœ… | โœ… | | Hubert | โŒ | โŒ | โœ… | โœ… | โŒ | | I-BERT | โŒ | โŒ | โœ… | โŒ | โŒ | | ImageGPT | โŒ | โŒ | โœ… | โŒ | โŒ | | LayoutLM | โœ… | โœ… | โœ… | โœ… | โŒ | | LayoutLMv2 | โœ… | โœ… | โœ… | โŒ | โŒ | | LED | โœ… | โœ… | โœ… | โœ… | โŒ | | Longformer | โœ… | โœ… | โœ… | โœ… | โŒ | | LUKE | โœ… | โŒ | โœ… | โŒ | โŒ | | LXMERT | โœ… | โœ… | โœ… | โœ… | โŒ | | M2M100 | โœ… | โŒ | โœ… | โŒ | โŒ | | Marian | โœ… | โŒ | โœ… | โœ… | โœ… | | MaskFormer | โŒ | โŒ | โœ… | โŒ | โŒ | | mBART | โœ… | โœ… | โœ… | โœ… | โœ… | | MegatronBert | โŒ | โŒ | โœ… | โŒ | โŒ | | MobileBERT | โœ… | โœ… | โœ… | โœ… | โŒ | | MPNet | โœ… | โœ… | โœ… | โœ… | โŒ | | mT5 | โœ… | โœ… | โœ… | โœ… | โœ… | | Nystromformer | โŒ | โŒ | โœ… | โŒ | โŒ | | OpenAI GPT | โœ… | โœ… | โœ… | โœ… | โŒ | | OpenAI GPT-2 | โœ… | โœ… | โœ… | โœ… | โœ… | | Pegasus | โœ… | โœ… | โœ… | โœ… | โœ… | | Perceiver | โœ… | โŒ | โœ… | โŒ | โŒ | | PLBart | โœ… | โŒ | โœ… | โŒ | โŒ | | PoolFormer | โŒ | โŒ | โœ… | โŒ | โŒ | | ProphetNet | โœ… | โŒ | โœ… | โŒ | โŒ | | QDQBert | โŒ | โŒ | โœ… | โŒ | โŒ | | RAG | โœ… | โŒ | โœ… | โœ… | โŒ | | Realm | โœ… | โœ… | โœ… | โŒ | โŒ | | Reformer | โœ… | โœ… | โœ… | โŒ | โŒ | | RegNet | โŒ | โŒ | โœ… | โœ… | โœ… | | RemBERT | โœ… | โœ… | โœ… | โœ… | โŒ | | ResNet | โŒ | โŒ | โœ… | โŒ | โœ… | | RetriBERT | โœ… | โœ… | โœ… | โŒ | โŒ | | RoBERTa | โœ… | โœ… | โœ… | โœ… | โœ… | | RoFormer | โœ… | โœ… | โœ… | โœ… | โœ… | | SegFormer | โŒ | โŒ | โœ… | โŒ | โŒ | | SEW | โŒ | โŒ | โœ… | โŒ | โŒ | | SEW-D | โŒ | โŒ | โœ… | โŒ | โŒ | | Speech Encoder decoder | โŒ | โŒ | โœ… | โŒ | โœ… | | Speech2Text | โœ… | โŒ | โœ… | โœ… | โŒ | | Speech2Text2 | โœ… | โŒ | โŒ | โŒ | โŒ | | Splinter | โœ… | โœ… | โœ… | โŒ | โŒ | | SqueezeBERT | 
โœ… | โœ… | โœ… | โŒ | โŒ | | Swin | โŒ | โŒ | โœ… | โŒ | โŒ | | T5 | โœ… | โœ… | โœ… | โœ… | โœ… | | TAPAS | โœ… | โŒ | โœ… | โœ… | โŒ | | TAPEX | โœ… | โœ… | โœ… | โœ… | โœ… | | Transformer-XL | โœ… | โŒ | โœ… | โœ… | โŒ | | TrOCR | โŒ | โŒ | โœ… | โŒ | โŒ | | UniSpeech | โŒ | โŒ | โœ… | โŒ | โŒ | | UniSpeechSat | โŒ | โŒ | โœ… | โŒ | โŒ | | VAN | โŒ | โŒ | โœ… | โŒ | โŒ | | ViLT | โŒ | โŒ | โœ… | โŒ | โŒ | | Vision Encoder decoder | โŒ | โŒ | โœ… | โœ… | โœ… | | VisionTextDualEncoder | โŒ | โŒ | โœ… | โŒ | โœ… | | VisualBert | โŒ | โŒ | โœ… | โŒ | โŒ | | ViT | โŒ | โŒ | โœ… | โœ… | โœ… | | ViTMAE | โŒ | โŒ | โœ… | โœ… | โŒ | | Wav2Vec2 | โœ… | โŒ | โœ… | โœ… | โœ… | | WavLM | โŒ | โŒ | โœ… | โŒ | โŒ | | XGLM | โœ… | โœ… | โœ… | โŒ | โœ… | | XLM | โœ… | โŒ | โœ… | โœ… | โŒ | | XLM-RoBERTa | โœ… | โœ… | โœ… | โœ… | โœ… | | XLM-RoBERTa-XL | โŒ | โŒ | โœ… | โŒ | โŒ | | XLMProphetNet | โœ… | โŒ | โœ… | โŒ | โŒ | | XLNet | โœ… | โœ… | โœ… | โœ… | โŒ | | YOSO | โŒ | โŒ | โœ… | โŒ | โŒ | <!-- End table-->
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/es/training.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Fine-tuning a un modelo pre-entrenado [[open-in-colab]] El uso de un modelo pre-entrenado tiene importantes ventajas. Reduce los costos de computaciรณn, la huella de carbono y te permite utilizar modelos de รบltima generaciรณn sin tener que entrenar uno desde cero. * Fine-tuning a un modelo pre-entrenado con ๐Ÿค— Transformers [`Trainer`]. * Fine-tuning a un modelo pre-entrenado en TensorFlow con Keras. * Fine-tuning a un modelo pre-entrenado en PyTorch nativo. <a id='data-processing'></a> ## Prepara un dataset <Youtube id="_BZearw7f0w"/> Antes de aplicar fine-tuning a un modelo pre-entrenado, descarga un dataset y prepรกralo para el entrenamiento. El tutorial anterior nos enseรฑรณ cรณmo procesar los datos para el entrenamiento, y ahora es la oportunidad de poner a prueba estas habilidades. Comienza cargando el dataset de [Yelp Reviews](https://huggingface.co/datasets/yelp_review_full): ```py >>> from datasets import load_dataset >>> dataset = load_dataset("yelp_review_full") >>> dataset[100] {'label': 0, 'text': 'My expectations for McDonalds are t rarely high. But for one to still fail so spectacularly...that takes something special!\\nThe cashier took my friends\'s order, then promptly ignored me. I had to force myself in front of a cashier who opened his register to wait on the person BEHIND me. I waited over five minutes for a gigantic order that included precisely one kid\'s meal. After watching two people who ordered after me be handed their food, I asked where mine was. The manager started yelling at the cashiers for \\"serving off their orders\\" when they didn\'t have their food. But neither cashier was anywhere near those controls, and the manager was the one serving food to customers and clearing the boards.\\nThe manager was rude when giving me my order. She didn\'t make sure that I had everything ON MY RECEIPT, and never even had the decency to apologize that I felt I was getting poor service.\\nI\'ve eaten at various McDonalds restaurants for over 30 years. I\'ve worked at more than one location. I expect bad days, bad moods, and the occasional mistake. But I have yet to have a decent experience at this store. It will remain a place I avoid unless someone in my party needs to avoid illness from low blood sugar. Perhaps I should go back to the racially biased service of Steak n Shake instead!'} ``` Como ya sabes, necesitas un tokenizador para procesar el texto e incluir una estrategia para el padding y el truncamiento para manejar cualquier longitud de secuencia variable. 
Para procesar tu dataset en un solo paso, utiliza el mรฉtodo de ๐Ÿค— Datasets map para aplicar una funciรณn de preprocesamiento sobre todo el dataset: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased") >>> def tokenize_function(examples): ... return tokenizer(examples["text"], padding="max_length", truncation=True) >>> tokenized_datasets = dataset.map(tokenize_function, batched=True) ``` Si lo deseas, puedes crear un subconjunto mรกs pequeรฑo del dataset completo para aplicarle fine-tuning y asรญ reducir el tiempo. ```py >>> small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000)) >>> small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000)) ``` <a id='trainer'></a> ## Fine-tuning con `Trainer` <Youtube id="nvBXf7s7vTI"/> ๐Ÿค— Transformers proporciona una clase [`Trainer`] optimizada para el entrenamiento de modelos de ๐Ÿค— Transformers, haciendo mรกs fรกcil el inicio del entrenamiento sin necesidad de escribir manualmente tu propio ciclo. La API del [`Trainer`] soporta una amplia gama de opciones de entrenamiento y caracterรญsticas como el logging, el gradient accumulation y el mixed precision. Comienza cargando tu modelo y especifica el nรบmero de labels previstas. A partir del [Card Dataset](https://huggingface.co/datasets/yelp_review_full#data-fields) de Yelp Review, que como ya sabemos tiene 5 labels: ```py >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-cased", num_labels=5) ``` <Tip> Verรกs una advertencia acerca de que algunos de los pesos pre-entrenados no estรกn siendo utilizados y que algunos pesos estรกn siendo inicializados al azar. No te preocupes, esto es completamente normal. El head/cabezal pre-entrenado del modelo BERT se descarta y se sustituye por un head de clasificaciรณn inicializado aleatoriamente. Puedes aplicar fine-tuning a este nuevo head del modelo en tu tarea de clasificaciรณn de secuencias haciendo transfer learning del modelo pre-entrenado. </Tip> ### Hiperparรกmetros de entrenamiento A continuaciรณn, crea una clase [`TrainingArguments`] que contenga todos los hiperparรกmetros que puedes ajustar asรญ como los indicadores para activar las diferentes opciones de entrenamiento. Para este tutorial puedes empezar con los [hiperparรกmetros](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments) de entrenamiento por defecto, pero siรฉntete libre de experimentar con ellos para encontrar tu configuraciรณn รณptima. Especifica dรณnde vas a guardar los checkpoints de tu entrenamiento: ```py >>> from transformers import TrainingArguments >>> training_args = TrainingArguments(output_dir="test_trainer") ``` ### Mรฉtricas El [`Trainer`] no evalรบa automรกticamente el rendimiento del modelo durante el entrenamiento. Tendrรกs que pasarle a [`Trainer`] una funciรณn para calcular y hacer un reporte de las mรฉtricas. La biblioteca de ๐Ÿค— Datasets proporciona una funciรณn de [`accuracy`](https://huggingface.co/metrics/accuracy) simple que puedes cargar con la funciรณn `load_metric` (ver este [tutorial](https://huggingface.co/docs/datasets/metrics) para mรกs informaciรณn): ```py >>> import numpy as np >>> from datasets import load_metric >>> metric = load_metric("accuracy") ``` Define la funciรณn `compute` en `metric` para calcular el accuracy de tus predicciones. 
Antes de pasar tus predicciones a `compute`, necesitas convertir los logits a predicciones (recuerda que todos los modelos de 🤗 Transformers devuelven logits). ```py >>> def compute_metrics(eval_pred): ... logits, labels = eval_pred ... predictions = np.argmax(logits, axis=-1) ... return metric.compute(predictions=predictions, references=labels) ``` Si quieres controlar tus métricas de evaluación durante el fine-tuning, especifica el parámetro `eval_strategy` en tus argumentos de entrenamiento para que el modelo tenga en cuenta la métrica de evaluación al final de cada época: ```py >>> from transformers import TrainingArguments >>> training_args = TrainingArguments(output_dir="test_trainer", eval_strategy="epoch") ``` ### Trainer Crea un objeto [`Trainer`] con tu modelo, argumentos de entrenamiento, datasets de entrenamiento y de prueba, y tu función de evaluación: ```py >>> from transformers import Trainer >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=small_train_dataset, ... eval_dataset=small_eval_dataset, ... compute_metrics=compute_metrics, ... ) ``` A continuación, aplica fine-tuning a tu modelo llamando [`~transformers.Trainer.train`]: ```py >>> trainer.train() ``` <a id='keras'></a> ## Fine-tuning con Keras <Youtube id="rnTGBy2ax1c"/> Los modelos de 🤗 Transformers también permiten realizar el entrenamiento en TensorFlow con la API de Keras. Solo es necesario hacer algunos cambios antes de hacer fine-tuning. ### Convierte el dataset al formato de TensorFlow El [`DefaultDataCollator`] junta los tensores en un batch para que el modelo se entrene en él. Asegúrate de especificar `return_tensors` para devolver los tensores de TensorFlow: ```py >>> from transformers import DefaultDataCollator >>> data_collator = DefaultDataCollator(return_tensors="tf") ``` <Tip> [`Trainer`] utiliza [`DataCollatorWithPadding`] por defecto por lo que no es necesario especificar explícitamente un intercalador de datos (data collator, en inglés). </Tip> A continuación, convierte los datasets tokenizados en datasets de TensorFlow con el método [`to_tf_dataset`](https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset.to_tf_dataset). Especifica tus entradas en `columns` y tu etiqueta en `label_cols`: ```py >>> tf_train_dataset = small_train_dataset.to_tf_dataset( ... columns=["attention_mask", "input_ids", "token_type_ids"], ... label_cols="labels", ... shuffle=True, ... collate_fn=data_collator, ... batch_size=8, ... ) >>> tf_validation_dataset = small_eval_dataset.to_tf_dataset( ... columns=["attention_mask", "input_ids", "token_type_ids"], ... label_cols="labels", ... shuffle=False, ... collate_fn=data_collator, ... batch_size=8, ... ) ``` ### Compila y ajusta Carguemos un modelo TensorFlow con el número esperado de labels: ```py >>> import tensorflow as tf >>> from transformers import TFAutoModelForSequenceClassification >>> model = TFAutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-cased", num_labels=5) ``` A continuación, compila y aplica fine-tuning a tu modelo con [`fit`](https://keras.io/api/models/model_training_apis/) como lo harías con cualquier otro modelo de Keras: ```py >>> model.compile( ... optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5), ... loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), ... metrics=tf.metrics.SparseCategoricalAccuracy(), ...
) >>> model.fit(tf_train_dataset, validation_data=tf_validation_dataset, epochs=3) ``` <a id='pytorch_native'></a> ## Fine-tune en PyTorch nativo <Youtube id="Dh9CL8fyG80"/> El [`Trainer`] se encarga del ciclo de entrenamiento y permite aplicar fine-tuning a un modelo en una sola lรญnea de cรณdigo. Para los que prefieran escribir su propio ciclo de entrenamiento, tambiรฉn pueden aplicar fine-tuning a un modelo de ๐Ÿค— Transformers en PyTorch nativo. En este punto, es posible que necesites reiniciar tu notebook o ejecutar el siguiente cรณdigo para liberar algo de memoria: ```py del model del pytorch_model del trainer torch.cuda.empty_cache() ``` A continuaciรณn, haremos un post-procesamiento manual al `tokenized_dataset` y asรญ prepararlo para el entrenamiento. 1. Elimina la columna de `text` porque el modelo no acepta texto en crudo como entrada: ```py >>> tokenized_datasets = tokenized_datasets.remove_columns(["text"]) ``` 2. Cambia el nombre de la columna de `label` a `labels` porque el modelo espera que el argumento se llame `labels`: ```py >>> tokenized_datasets = tokenized_datasets.rename_column("label", "labels") ``` 3. Establece el formato del dataset para devolver tensores PyTorch en lugar de listas: ```py >>> tokenized_datasets.set_format("torch") ``` A continuaciรณn, crea un subconjunto mรกs pequeรฑo del dataset como se ha mostrado anteriormente para acelerar el fine-tuning: ```py >>> small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000)) >>> small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000)) ``` ### DataLoader Crea un `DataLoader` para tus datasets de entrenamiento y de prueba para poder iterar sobre batches de datos: ```py >>> from torch.utils.data import DataLoader >>> train_dataloader = DataLoader(small_train_dataset, shuffle=True, batch_size=8) >>> eval_dataloader = DataLoader(small_eval_dataset, batch_size=8) ``` Carga tu modelo con el nรบmero de labels previstas: ```py >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-cased", num_labels=5) ``` ### Optimiza y programa el learning rate Crea un optimizador y el learning rate para aplicar fine-tuning al modelo. Vamos a utilizar el optimizador [`AdamW`](https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html) de PyTorch: ```py >>> from torch.optim import AdamW >>> optimizer = AdamW(model.parameters(), lr=5e-5) ``` Crea el learning rate desde el [`Trainer`]: ```py >>> from transformers import get_scheduler >>> num_epochs = 3 >>> num_training_steps = num_epochs * len(train_dataloader) >>> lr_scheduler = get_scheduler( ... name="linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps ... ) ``` Por รบltimo, especifica el `device` o entorno de ejecuciรณn para utilizar una GPU si tienes acceso a una. De lo contrario, el entrenamiento en una CPU puede llevarte varias horas en lugar de un par de minutos. ```py >>> import torch >>> device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") >>> model.to(device) ``` <Tip> Consigue acceso gratuito a una GPU en la nube si es que no tienes este recurso de forma local con un notebook alojado en [Colaboratory](https://colab.research.google.com/) o [SageMaker StudioLab](https://studiolab.sagemaker.aws/). </Tip> Genial, ยกahora podemos entrenar! 
๐Ÿฅณ ### Ciclo de entrenamiento Para hacer un seguimiento al progreso del entrenamiento, utiliza la biblioteca [tqdm](https://tqdm.github.io/) para aรฑadir una barra de progreso sobre el nรบmero de pasos de entrenamiento: ```py >>> from tqdm.auto import tqdm >>> progress_bar = tqdm(range(num_training_steps)) >>> model.train() >>> for epoch in range(num_epochs): ... for batch in train_dataloader: ... batch = {k: v.to(device) for k, v in batch.items()} ... outputs = model(**batch) ... loss = outputs.loss ... loss.backward() ... optimizer.step() ... lr_scheduler.step() ... optimizer.zero_grad() ... progress_bar.update(1) ``` ### Mรฉtricas De la misma manera que necesitas aรฑadir una funciรณn de evaluaciรณn al [`Trainer`], necesitas hacer lo mismo cuando escribas tu propio ciclo de entrenamiento. Pero en lugar de calcular y reportar la mรฉtrica al final de cada รฉpoca, esta vez acumularรกs todos los batches con [`add_batch`](https://huggingface.co/docs/datasets/package_reference/main_classes?highlight=add_batch#datasets.Metric.add_batch) y calcularรกs la mรฉtrica al final. ```py >>> metric = load_metric("accuracy") >>> model.eval() >>> for batch in eval_dataloader: ... batch = {k: v.to(device) for k, v in batch.items()} ... with torch.no_grad(): ... outputs = model(**batch) ... logits = outputs.logits ... predictions = torch.argmax(logits, dim=-1) ... metric.add_batch(predictions=predictions, references=batch["labels"]) >>> metric.compute() ``` <a id='additional-resources'></a> ## Recursos adicionales Para mรกs ejemplos de fine-tuning consulta: - [๐Ÿค— Transformers Examples](https://github.com/huggingface/transformers/tree/main/examples) incluye scripts para entrenar tareas comunes de NLP en PyTorch y TensorFlow. - [๐Ÿค— Transformers Notebooks](notebooks) contiene varios notebooks sobre cรณmo aplicar fine-tuning a un modelo para tareas especรญficas en PyTorch y TensorFlow.
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/es/serialization.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Exportar modelos ๐Ÿค— Transformers Si necesitas implementar modelos ๐Ÿค— Transformers en entornos de producciรณn, te recomendamos exportarlos a un formato serializado que se pueda cargar y ejecutar en tiempos de ejecuciรณn y hardware especializados. En esta guรญa, te mostraremos cรณmo exportar modelos ๐Ÿค— Transformers en dos formatos ampliamente utilizados: ONNX y TorchScript. Una vez exportado, un modelo puede optimizarse para la inferencia a travรฉs de tรฉcnicas como la cuantizaciรณn y _pruning_. Si estรกs interesado en optimizar tus modelos para que funcionen con la mรกxima eficiencia, consulta la [biblioteca de ๐Ÿค— Optimum](https://github.com/huggingface/optimum). ## ONNX El proyecto [ONNX (Open Neural Network eXchange)](http://onnx.ai) es un estรกndar abierto que define un conjunto comรบn de operadores y un formato de archivo comรบn para representar modelos de aprendizaje profundo en una amplia variedad de _frameworks_, incluidos PyTorch y TensorFlow. Cuando un modelo se exporta al formato ONNX, estos operadores se usan para construir un grafo computacional (a menudo llamado _representaciรณn intermedia_) que representa el flujo de datos a travรฉs de la red neuronal. Al exponer un grafo con operadores y tipos de datos estandarizados, ONNX facilita el cambio entre frameworks. Por ejemplo, un modelo entrenado en PyTorch se puede exportar a formato ONNX y luego importar en TensorFlow (y viceversa). ๐Ÿค— Transformers proporciona un paquete llamado `transformers.onnx`, el cual permite convertir los checkpoints de un modelo en un grafo ONNX aprovechando los objetos de configuraciรณn. Estos objetos de configuraciรณn estรกn hechos a la medida de diferentes arquitecturas de modelos y estรกn diseรฑados para ser fรกcilmente extensibles a otras arquitecturas. Las configuraciones a la medida incluyen las siguientes arquitecturas: <!--This table is automatically generated by `make fix-copies`, do not fill manually!--> - ALBERT - BART - BEiT - BERT - BigBird - BigBird-Pegasus - Blenderbot - BlenderbotSmall - BLOOM - CamemBERT - CLIP - CodeGen - ConvBERT - ConvNeXT - ConvNeXTV2 - Data2VecText - Data2VecVision - DeBERTa - DeBERTa-v2 - DeiT - DETR - DistilBERT - ELECTRA - FlauBERT - GPT Neo - GPT-J - I-BERT - LayoutLM - LayoutLMv3 - LeViT - LongT5 - M2M100 - Marian - mBART - MobileBERT - MobileViT - MT5 - OpenAI GPT-2 - Perceiver - PLBart - ResNet - RoBERTa - RoFormer - SqueezeBERT - T5 - ViT - XLM - XLM-RoBERTa - XLM-RoBERTa-XL - YOLOS En las prรณximas dos secciones, te mostraremos cรณmo: * Exportar un modelo compatible utilizando el paquete `transformers.onnx`. * Exportar un modelo personalizado para una arquitectura no compatible. 
### Exportar un modelo a ONNX Para exportar un modelo 🤗 Transformers a ONNX, tienes que instalar primero algunas dependencias extra: ```bash pip install transformers[onnx] ``` El paquete `transformers.onnx` puede ser usado luego como un módulo de Python: ```bash python -m transformers.onnx --help usage: Hugging Face Transformers ONNX exporter [-h] -m MODEL [--feature {causal-lm, ...}] [--opset OPSET] [--atol ATOL] output positional arguments: output Path indicating where to store generated ONNX model. optional arguments: -h, --help show this help message and exit -m MODEL, --model MODEL Model ID on huggingface.co or path on disk to load model from. --feature {causal-lm, ...} The type of features to export the model with. --opset OPSET ONNX opset version to export the model with. --atol ATOL Absolute difference tolerence when validating the model. ``` Exportar un checkpoint usando una configuración a la medida se puede hacer de la siguiente manera: ```bash python -m transformers.onnx --model=distilbert/distilbert-base-uncased onnx/ ``` que debería mostrar los siguientes registros: ```bash Validating ONNX model... -[✓] ONNX model output names match reference model ({'last_hidden_state'}) - Validating ONNX Model output "last_hidden_state": -[✓] (2, 8, 768) matches (2, 8, 768) -[✓] all values close (atol: 1e-05) All good, model saved at: onnx/model.onnx ``` Esto exporta un grafo ONNX del checkpoint definido por el argumento `--model`. En este ejemplo, es un modelo `distilbert/distilbert-base-uncased`, pero puede ser cualquier checkpoint en Hugging Face Hub o que esté almacenado localmente. El archivo `model.onnx` resultante se puede ejecutar en uno de los [muchos aceleradores](https://onnx.ai/supported-tools.html#deployModel) que admiten el estándar ONNX. Por ejemplo, podemos cargar y ejecutar el modelo con [ONNX Runtime](https://onnxruntime.ai/) de la siguiente manera: ```python >>> from transformers import AutoTokenizer >>> from onnxruntime import InferenceSession >>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased") >>> session = InferenceSession("onnx/model.onnx") >>> # ONNX Runtime expects NumPy arrays as input >>> inputs = tokenizer("Using DistilBERT with ONNX Runtime!", return_tensors="np") >>> outputs = session.run(output_names=["last_hidden_state"], input_feed=dict(inputs)) ``` Los nombres necesarios de salida (es decir, `["last_hidden_state"]`) se pueden obtener echando un vistazo a la configuración ONNX de cada modelo. Por ejemplo, para DistilBERT tenemos: ```python >>> from transformers.models.distilbert import DistilBertConfig, DistilBertOnnxConfig >>> config = DistilBertConfig() >>> onnx_config = DistilBertOnnxConfig(config) >>> print(list(onnx_config.outputs.keys())) ["last_hidden_state"] ``` El proceso es idéntico para los checkpoints de TensorFlow en el Hub. Por ejemplo, podemos exportar un checkpoint puro de TensorFlow desde [Keras](https://huggingface.co/keras-io) de la siguiente manera: ```bash python -m transformers.onnx --model=keras-io/transformers-qa onnx/ ``` Para exportar un modelo que está almacenado localmente, deberás tener los pesos y tokenizadores del modelo almacenados en un directorio.
Por ejemplo, podemos cargar y guardar un checkpoint de la siguiente manera: <frameworkcontent> <pt> ```python >>> from transformers import AutoTokenizer, AutoModelForSequenceClassification >>> # Load tokenizer and PyTorch weights from the Hub >>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased") >>> pt_model = AutoModelForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased") >>> # Save to disk >>> tokenizer.save_pretrained("local-pt-checkpoint") >>> pt_model.save_pretrained("local-pt-checkpoint") ``` Una vez que se guarda el checkpoint, podemos exportarlo a ONNX usando el argumento `--model` del paquete `transformers.onnx` al directorio deseado: ```bash python -m transformers.onnx --model=local-pt-checkpoint onnx/ ``` </pt> <tf> ```python >>> from transformers import AutoTokenizer, TFAutoModelForSequenceClassification >>> # Load tokenizer and TensorFlow weights from the Hub >>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased") >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased") >>> # Save to disk >>> tokenizer.save_pretrained("local-tf-checkpoint") >>> tf_model.save_pretrained("local-tf-checkpoint") ``` Una vez que se guarda el checkpoint, podemos exportarlo a ONNX usando el argumento `--model` del paquete `transformers.onnx` al directorio deseado: ```bash python -m transformers.onnx --model=local-tf-checkpoint onnx/ ``` </tf> </frameworkcontent> ### Seleccionar características para diferentes topologías de un modelo Cada configuración a la medida viene con un conjunto de _características_ que te permiten exportar modelos para diferentes tipos de topologías o tareas. Como se muestra en la siguiente tabla, cada característica está asociada con una auto-clase diferente: | Feature | Auto Class | | ------------------------------------ | ------------------------------------ | | `causal-lm`, `causal-lm-with-past` | `AutoModelForCausalLM` | | `default`, `default-with-past` | `AutoModel` | | `masked-lm` | `AutoModelForMaskedLM` | | `question-answering` | `AutoModelForQuestionAnswering` | | `seq2seq-lm`, `seq2seq-lm-with-past` | `AutoModelForSeq2SeqLM` | | `sequence-classification` | `AutoModelForSequenceClassification` | | `token-classification` | `AutoModelForTokenClassification` | Para cada configuración, puedes encontrar la lista de características admitidas a través de `FeaturesManager`. Por ejemplo, para DistilBERT tenemos: ```python >>> from transformers.onnx.features import FeaturesManager >>> distilbert_features = list(FeaturesManager.get_supported_features_for_model_type("distilbert").keys()) >>> print(distilbert_features) ["default", "masked-lm", "causal-lm", "sequence-classification", "token-classification", "question-answering"] ``` Le puedes pasar una de estas características al argumento `--feature` en el paquete `transformers.onnx`. Por ejemplo, para exportar un modelo de clasificación de texto, podemos elegir un modelo ya ajustado del Hub y ejecutar: ```bash python -m transformers.onnx --model=distilbert/distilbert-base-uncased-finetuned-sst-2-english \ --feature=sequence-classification onnx/ ``` que mostrará los siguientes registros: ```bash Validating ONNX model...
-[โœ“] ONNX model output names match reference model ({'logits'}) - Validating ONNX Model output "logits": -[โœ“] (2, 2) matches (2, 2) -[โœ“] all values close (atol: 1e-05) All good, model saved at: onnx/model.onnx ``` Ten en cuenta que, en este caso, los nombres de salida del modelo ajustado son `logits` en lugar de `last_hidden_state` que vimos anteriormente con el checkpoint `distilbert/distilbert-base-uncased`. Esto es de esperarse ya que el modelo ajustado tiene un cabezal de clasificaciรณn secuencial. <Tip> Las caracterรญsticas que tienen un sufijo 'with-past' (por ejemplo, 'causal-lm-with-past') corresponden a topologรญas de modelo con estados ocultos precalculados (clave y valores en los bloques de atenciรณn) que se pueden usar para una decodificaciรณn autorregresiva mรกs rรกpida. </Tip> ### Exportar un modelo para una arquitectura no compatible Si deseas exportar un modelo cuya arquitectura no es compatible de forma nativa con la biblioteca, debes seguir tres pasos principales: 1. Implementa una configuraciรณn personalizada en ONNX. 2. Exporta el modelo a ONNX. 3. Valide los resultados de PyTorch y los modelos exportados. En esta secciรณn, veremos cรณmo se implementรณ la serializaciรณn de DistilBERT para mostrar lo que implica cada paso. #### Implementar una configuraciรณn personalizada en ONNX Comencemos con el objeto de configuraciรณn de ONNX. Proporcionamos tres clases abstractas de las que debe heredar, segรบn el tipo de arquitectura del modelo que quieras exportar: * Modelos basados en el _Encoder_ inherente de [`~onnx.config.OnnxConfig`] * Modelos basados en el _Decoder_ inherente de [`~onnx.config.OnnxConfigWithPast`] * Modelos _Encoder-decoder_ inherente de [`~onnx.config.OnnxSeq2SeqConfigWithPast`] <Tip> Una buena manera de implementar una configuraciรณn personalizada en ONNX es observar la implementaciรณn existente en el archivo `configuration_<model_name>.py` de una arquitectura similar. </Tip> Dado que DistilBERT es un modelo de tipo _encoder_, su configuraciรณn se hereda de `OnnxConfig`: ```python >>> from typing import Mapping, OrderedDict >>> from transformers.onnx import OnnxConfig >>> class DistilBertOnnxConfig(OnnxConfig): ... @property ... def inputs(self) -> Mapping[str, Mapping[int, str]]: ... return OrderedDict( ... [ ... ("input_ids", {0: "batch", 1: "sequence"}), ... ("attention_mask", {0: "batch", 1: "sequence"}), ... ] ... ) ``` Cada objeto de configuraciรณn debe implementar la propiedad `inputs` y devolver un mapeo, donde cada llave corresponde a una entrada esperada y cada valor indica el eje de esa entrada. Para DistilBERT, podemos ver que se requieren dos entradas: `input_ids` y `attention_mask`. Estas entradas tienen la misma forma de `(batch_size, sequence_length)`, es por lo que vemos los mismos ejes utilizados en la configuraciรณn. <Tip> Observa que la propiedad `inputs` para `DistilBertOnnxConfig` devuelve un `OrderedDict`. Esto nos asegura que las entradas coincidan con su posiciรณn relativa dentro del mรฉtodo `PreTrainedModel.forward()` al rastrear el grafo. Recomendamos usar un `OrderedDict` para las propiedades `inputs` y `outputs` al implementar configuraciones ONNX personalizadas. 
</Tip> Una vez que hayas implementado una configuraciรณn ONNX, puedes crear una instancia proporcionando la configuraciรณn del modelo base de la siguiente manera: ```python >>> from transformers import AutoConfig >>> config = AutoConfig.from_pretrained("distilbert/distilbert-base-uncased") >>> onnx_config = DistilBertOnnxConfig(config) ``` El objeto resultante tiene varias propiedades รบtiles. Por ejemplo, puedes ver el conjunto de operadores ONNX que se utilizarรก durante la exportaciรณn: ```python >>> print(onnx_config.default_onnx_opset) 11 ``` Tambiรฉn puedes ver los resultados asociados con el modelo de la siguiente manera: ```python >>> print(onnx_config.outputs) OrderedDict([("last_hidden_state", {0: "batch", 1: "sequence"})]) ``` Observa que la propiedad de salidas sigue la misma estructura que las entradas; devuelve un objecto `OrderedDict` de salidas nombradas y sus formas. La estructura de salida estรก vinculada a la elecciรณn de la funciรณn con la que se inicializa la configuraciรณn. Por defecto, la configuraciรณn de ONNX se inicializa con la funciรณn `default` que corresponde a exportar un modelo cargado con la clase `AutoModel`. Si quieres exportar una topologรญa de modelo diferente, simplemente proporciona una caracterรญstica diferente al argumento `task` cuando inicialices la configuraciรณn de ONNX. Por ejemplo, si quisiรฉramos exportar DistilBERT con un cabezal de clasificaciรณn de secuencias, podrรญamos usar: ```python >>> from transformers import AutoConfig >>> config = AutoConfig.from_pretrained("distilbert/distilbert-base-uncased") >>> onnx_config_for_seq_clf = DistilBertOnnxConfig(config, task="sequence-classification") >>> print(onnx_config_for_seq_clf.outputs) OrderedDict([('logits', {0: 'batch'})]) ``` <Tip> Todas las propiedades base y mรฉtodos asociados con [`~onnx.config.OnnxConfig`] y las otras clases de configuraciรณn se pueden sobreescribir si es necesario. Consulte [`BartOnnxConfig`] para ver un ejemplo avanzado. </Tip> #### Exportar el modelo Una vez que hayas implementado la configuraciรณn de ONNX, el siguiente paso es exportar el modelo. Aquรญ podemos usar la funciรณn `export()` proporcionada por el paquete `transformers.onnx`. Esta funciรณn espera la configuraciรณn de ONNX, junto con el modelo base y el tokenizador, y la ruta para guardar el archivo exportado: ```python >>> from pathlib import Path >>> from transformers.onnx import export >>> from transformers import AutoTokenizer, AutoModel >>> onnx_path = Path("model.onnx") >>> model_ckpt = "distilbert/distilbert-base-uncased" >>> base_model = AutoModel.from_pretrained(model_ckpt) >>> tokenizer = AutoTokenizer.from_pretrained(model_ckpt) >>> onnx_inputs, onnx_outputs = export(tokenizer, base_model, onnx_config, onnx_config.default_onnx_opset, onnx_path) ``` Los objetos `onnx_inputs` y `onnx_outputs` devueltos por la funciรณn `export()` son listas de llaves definidas en las propiedades `inputs` y `outputs` de la configuraciรณn. Una vez exportado el modelo, puedes probar que el modelo estรก bien formado de la siguiente manera: ```python >>> import onnx >>> onnx_model = onnx.load("model.onnx") >>> onnx.checker.check_model(onnx_model) ``` <Tip> Si tu modelo tiene mรกs de 2GB, verรกs que se crean muchos archivos adicionales durante la exportaciรณn. Esto es _esperado_ porque ONNX usa [Bรบferes de protocolo](https://developers.google.com/protocol-buffers/) para almacenar el modelo y รฉstos tienen un lรญmite de tamaรฑo de 2 GB. 
Consulta la [documentaciรณn de ONNX](https://github.com/onnx/onnx/blob/master/docs/ExternalData.md) para obtener instrucciones sobre cรณmo cargar modelos con datos externos. </Tip> #### Validar los resultados del modelo El paso final es validar que los resultados del modelo base y exportado coincidan dentro de cierta tolerancia absoluta. Aquรญ podemos usar la funciรณn `validate_model_outputs()` proporcionada por el paquete `transformers.onnx` de la siguiente manera: ```python >>> from transformers.onnx import validate_model_outputs >>> validate_model_outputs( ... onnx_config, tokenizer, base_model, onnx_path, onnx_outputs, onnx_config.atol_for_validation ... ) ``` Esta funciรณn usa el mรฉtodo `OnnxConfig.generate_dummy_inputs()` para generar entradas para el modelo base y exportado, y la tolerancia absoluta se puede definir en la configuraciรณn. En general, encontramos una concordancia numรฉrica en el rango de 1e-6 a 1e-4, aunque es probable que cualquier valor menor que 1e-3 estรฉ bien. ### Contribuir con una nueva configuraciรณn a ๐Ÿค— Transformers ยกEstamos buscando expandir el conjunto de configuraciones a la medida para usar y agradecemos las contribuciones de la comunidad! Si deseas contribuir con su colaboraciรณn a la biblioteca, deberรกs: * Implementa la configuraciรณn de ONNX en el archivo `configuration_<model_name>.py` correspondiente * Incluye la arquitectura del modelo y las caracterรญsticas correspondientes en [`~onnx.features.FeatureManager`] * Agrega tu arquitectura de modelo a las pruebas en `test_onnx_v2.py` Revisa cรณmo fue la contribuciรณn para la [configuraciรณn de IBERT](https://github.com/huggingface/transformers/pull/14868/files) y asรญ tener una idea de lo que necesito. ## TorchScript <Tip> Este es el comienzo de nuestros experimentos con TorchScript y todavรญa estamos explorando sus capacidades con modelos de tamaรฑo de entrada variable. Es un tema de interรฉs y profundizaremos nuestro anรกlisis en las prรณximas versiones, con mรกs ejemplos de cรณdigo, una implementaciรณn mรกs flexible y puntos de referencia que comparen cรณdigos basados en Python con TorchScript compilado. </Tip> Segรบn la documentaciรณn de PyTorch: "TorchScript es una forma de crear modelos serializables y optimizables a partir del cรณdigo de PyTorch". Los dos mรณdulos de Pytorch [JIT y TRACE](https://pytorch.org/docs/stable/jit.html) permiten al desarrollador exportar su modelo para reutilizarlo en otros programas, como los programas C++ orientados a la eficiencia. Hemos proporcionado una interfaz que permite exportar modelos de ๐Ÿค— Transformers a TorchScript para que puedan reutilizarse en un entorno diferente al de un programa Python basado en PyTorch. Aquรญ explicamos cรณmo exportar y usar nuestros modelos usando TorchScript. Exportar un modelo requiere de dos cosas: - un pase hacia adelante con entradas ficticias. - instanciaciรณn del modelo con la indicador `torchscript`. Estas necesidades implican varias cosas con las que los desarrolladores deben tener cuidado. ร‰stas se detallan a continuaciรณn. ### Indicador de TorchScript y pesos atados Este indicador es necesario porque la mayorรญa de los modelos de lenguaje en este repositorio tienen pesos vinculados entre su capa de `Embedding` y su capa de `Decoding`. TorchScript no permite la exportaciรณn de modelos que tengan pesos atados, por lo que es necesario desvincular y clonar los pesos previamente. 
Esto implica que los modelos instanciados con el indicador `torchscript` tienen su capa `Embedding` y `Decoding` separadas, lo que significa que no deben entrenarse mรกs adelante. El entrenamiento desincronizarรญa las dos capas, lo que generarรญa resultados inesperados. Este no es el caso de los modelos que no tienen un cabezal de modelo de lenguaje, ya que no tienen pesos atados. Estos modelos se pueden exportar de forma segura sin el indicador `torchscript`. ### Entradas ficticias y longitudes estรกndar Las entradas ficticias se utilizan para crear un modelo de pase hacia adelante. Mientras los valores de las entradas se propagan a travรฉs de las capas, PyTorch realiza un seguimiento de las diferentes operaciones ejecutadas en cada tensor. Estas operaciones registradas se utilizan luego para crear el "rastro" del modelo. El rastro se crea en relaciรณn con las dimensiones de las entradas. Por lo tanto, estรก limitado por las dimensiones de la entrada ficticia y no funcionarรก para ninguna otra longitud de secuencia o tamaรฑo de lote. Al intentar con un tamaรฑo diferente, un error como: `The expanded size of the tensor (3) must match the existing size (7) at non-singleton dimension 2` aparecerรก. Por lo tanto, se recomienda rastrear el modelo con un tamaรฑo de entrada ficticia al menos tan grande como la entrada mรกs grande que se alimentarรก al modelo durante la inferencia. El _padding_ se puede realizar para completar los valores que faltan. Sin embargo, como el modelo se habrรก rastreado con un tamaรฑo de entrada grande, las dimensiones de las diferentes matrices tambiรฉn serรกn grandes, lo que darรก como resultado mรกs cรกlculos. Se recomienda tener cuidado con el nรบmero total de operaciones realizadas en cada entrada y seguir de cerca el rendimiento al exportar modelos de longitud de secuencia variable. ### Usar TorchScript en Python A continuaciรณn se muestra un ejemplo que muestra cรณmo guardar, cargar modelos y cรณmo usar el rastreo para la inferencia. #### Guardando un modelo Este fragmento muestra cรณmo usar TorchScript para exportar un `BertModel`. Aquรญ, el `BertModel` se instancia de acuerdo con la clase `BertConfig` y luego se guarda en el disco con el nombre de archivo `traced_bert.pt` ```python from transformers import BertModel, BertTokenizer, BertConfig import torch enc = BertTokenizer.from_pretrained("google-bert/bert-base-uncased") # Tokenizing input text text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]" tokenized_text = enc.tokenize(text) # Masking one of the input tokens masked_index = 8 tokenized_text[masked_index] = "[MASK]" indexed_tokens = enc.convert_tokens_to_ids(tokenized_text) segments_ids = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1] # Creating a dummy input tokens_tensor = torch.tensor([indexed_tokens]) segments_tensors = torch.tensor([segments_ids]) dummy_input = [tokens_tensor, segments_tensors] # Initializing the model with the torchscript flag # Flag set to True even though it is not necessary as this model does not have an LM Head. 
config = BertConfig( vocab_size_or_config_json_file=32000, hidden_size=768, num_hidden_layers=12, num_attention_heads=12, intermediate_size=3072, torchscript=True, ) # Instantiating the model model = BertModel(config) # The model needs to be in evaluation mode model.eval() # If you are instantiating the model with *from_pretrained* you can also easily set the TorchScript flag model = BertModel.from_pretrained("google-bert/bert-base-uncased", torchscript=True) # Creating the trace traced_model = torch.jit.trace(model, [tokens_tensor, segments_tensors]) torch.jit.save(traced_model, "traced_bert.pt") ``` #### Cargar un modelo Este fragmento muestra cómo cargar el `BertModel` que se guardó previamente en el disco con el nombre `traced_bert.pt`. Estamos reutilizando el `dummy_input` previamente inicializado. ```python loaded_model = torch.jit.load("traced_bert.pt") loaded_model.eval() all_encoder_layers, pooled_output = loaded_model(*dummy_input) ``` #### Usar un modelo rastreado para la inferencia Usar el modelo rastreado para la inferencia es tan simple como usar su método `__call__`: ```python traced_model(tokens_tensor, segments_tensors) ``` ### Implementar los modelos HuggingFace TorchScript en AWS mediante Neuron SDK AWS presentó la familia de instancias [Amazon EC2 Inf1](https://aws.amazon.com/ec2/instance-types/inf1/) para la inferencia de aprendizaje automático de bajo costo y alto rendimiento en la nube. Las instancias Inf1 funcionan con el chip AWS Inferentia, un acelerador de hardware personalizado, que se especializa en cargas de trabajo de inferencia de aprendizaje profundo. [AWS Neuron](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/#) es el kit de desarrollo para Inferentia que admite el rastreo y la optimización de modelos de transformers para su implementación en Inf1. El SDK de Neuron proporciona: 1. API fácil de usar con una línea de cambio de código para rastrear y optimizar un modelo de TorchScript para la inferencia en la nube. 2. Optimizaciones de rendimiento listas para usar con un [costo-rendimiento mejorado](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/benchmark/). 3. Soporte para modelos HuggingFace Transformers construidos con [PyTorch](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/pytorch/bert_tutorial/tutorial_pretrained_bert.html) o [TensorFlow](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/tensorflow/huggingface_bert/huggingface_bert.html). #### Implicaciones Los modelos Transformers basados en la arquitectura [BERT (Representaciones de _Encoder_ bidireccional de Transformers)](https://huggingface.co/docs/transformers/main/model_doc/bert), o sus variantes, como [distilBERT](https://huggingface.co/docs/transformers/main/model_doc/distilbert) y [roBERTa](https://huggingface.co/docs/transformers/main/model_doc/roberta), se ejecutarán mejor en Inf1 para tareas no generativas, como la respuesta extractiva de preguntas, la clasificación de secuencias y la clasificación de tokens. Como alternativa, las tareas de generación de texto se pueden adaptar para ejecutarse en Inf1, según este [tutorial de AWS Neuron MarianMT](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/pytorch/transformers-marianmt.html).
Puedes encontrar más información sobre los modelos que están listos para usarse en Inferentia en la [sección _Model Architecture Fit_ de la documentación de Neuron](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/models/models-inferentia.html#models-inferentia).

#### Dependencias

Usar AWS Neuron para convertir modelos requiere las siguientes dependencias y entornos:

* Un [entorno Neuron SDK](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/neuron-frameworks/pytorch-neuron/index.html#installation-guide), que viene preconfigurado en [AWS Deep Learning AMI](https://docs.aws.amazon.com/dlami/latest/devguide/tutorial-inferentia-launching.html).

#### Convertir un modelo a AWS Neuron

Con el mismo script usado en [Uso de TorchScript en Python](https://huggingface.co/docs/transformers/main/es/serialization#using-torchscript-in-python) para rastrear un `BertModel`, puedes importar la extensión del _framework_ `torch.neuron` para acceder a los componentes del SDK de Neuron a través de una API de Python.

```python
from transformers import BertModel, BertTokenizer, BertConfig
import torch
import torch.neuron
```

Y modificando la línea de código de rastreo de:

```python
torch.jit.trace(model, [tokens_tensor, segments_tensors])
```

a lo siguiente:

```python
torch.neuron.trace(model, [tokens_tensor, segments_tensors])
```

Este cambio permite a Neuron SDK rastrear el modelo y optimizarlo para ejecutarse en instancias Inf1.

Para obtener más información sobre las funciones, las herramientas, los tutoriales de ejemplo y las últimas actualizaciones de AWS Neuron SDK, consulta la [documentación de AWS NeuronSDK](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/index.html).
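Para verlo todo junto, el siguiente boceto (no forma parte de la guía original y asume una instancia Inf1 con el paquete `torch-neuron` del Neuron SDK instalado) combina los pasos anteriores: cargar el modelo con la bandera `torchscript`, preparar entradas ficticias y rastrearlo con `torch.neuron.trace`:

```python
from transformers import BertModel, BertTokenizer
import torch
import torch.neuron  # extensión del Neuron SDK (requiere el paquete torch-neuron)

tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased")
model = BertModel.from_pretrained("google-bert/bert-base-uncased", torchscript=True)
model.eval()

# Entradas ficticias al menos tan largas como la entrada real más grande esperada
inputs = tokenizer("Jim Henson was a puppeteer", return_tensors="pt")
dummy_input = [inputs["input_ids"], inputs["attention_mask"]]

# torch.neuron.trace sustituye a torch.jit.trace y compila el grafo para Inferentia
neuron_model = torch.neuron.trace(model, dummy_input)

# El modelo compilado se guarda como cualquier otro modelo TorchScript
torch.jit.save(neuron_model, "bert_neuron.pt")
```

En una instancia Inf1 (con `torch.neuron` importado), el archivo resultante se puede volver a cargar con `torch.jit.load`, igual que en el ejemplo de TorchScript anterior.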
mavonic_private_repos/transformers/docs/source/es/accelerate.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Entrenamiento distribuido con ๐Ÿค— Accelerate El paralelismo ha emergido como una estrategia para entrenar modelos grandes en hardware limitado e incrementar la velocidad de entrenamiento en varios รณrdenes de magnitud. En Hugging Face creamos la biblioteca [๐Ÿค— Accelerate](https://huggingface.co/docs/accelerate) para ayudar a los usuarios a entrenar modelos ๐Ÿค— Transformers en cualquier tipo de configuraciรณn distribuida, ya sea en una mรกquina con mรบltiples GPUs o en mรบltiples GPUs distribuidas entre muchas mรกquinas. En este tutorial aprenderรกs cรณmo personalizar tu bucle de entrenamiento de PyTorch nativo para poder entrenar en entornos distribuidos. ## Configuraciรณn Empecemos por instalar ๐Ÿค— Accelerate: ```bash pip install accelerate ``` Luego, importamos y creamos un objeto [`Accelerator`](https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator). `Accelerator` detectarรก automรกticamente el tipo de configuraciรณn distribuida que tengas disponible e inicializarรก todos los componentes necesarios para el entrenamiento. No necesitas especificar el dispositivo en donde se debe colocar tu modelo. ```py >>> from accelerate import Accelerator >>> accelerator = Accelerator() ``` ## Prepรกrate para acelerar Pasa todos los objetos relevantes para el entrenamiento al mรฉtodo [`prepare`](https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator.prepare). Esto incluye los DataLoaders de entrenamiento y evaluaciรณn, un modelo y un optimizador: ```py >>> train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare( ... train_dataloader, eval_dataloader, model, optimizer ... ) ``` ## Backward Por รบltimo, reemplaza el tรญpico `loss.backward()` en tu bucle de entrenamiento con el mรฉtodo [`backward`](https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator.backward) de ๐Ÿค— Accelerate: ```py >>> for epoch in range(num_epochs): ... for batch in train_dataloader: ... outputs = model(**batch) ... loss = outputs.loss ... accelerator.backward(loss) ... optimizer.step() ... lr_scheduler.step() ... optimizer.zero_grad() ... progress_bar.update(1) ``` Como se puede ver en el siguiente cรณdigo, ยกsolo necesitas adicionar cuatro lรญneas de cรณdigo a tu bucle de entrenamiento para habilitar el entrenamiento distribuido! 
```diff + from accelerate import Accelerator from transformers import AdamW, AutoModelForSequenceClassification, get_scheduler + accelerator = Accelerator() model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2) optimizer = AdamW(model.parameters(), lr=3e-5) - device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") - model.to(device) + train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare( + train_dataloader, eval_dataloader, model, optimizer + ) num_epochs = 3 num_training_steps = num_epochs * len(train_dataloader) lr_scheduler = get_scheduler( "linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps ) progress_bar = tqdm(range(num_training_steps)) model.train() for epoch in range(num_epochs): for batch in train_dataloader: - batch = {k: v.to(device) for k, v in batch.items()} outputs = model(**batch) loss = outputs.loss - loss.backward() + accelerator.backward(loss) optimizer.step() lr_scheduler.step() optimizer.zero_grad() progress_bar.update(1) ``` ## Entrenamiento Una vez que hayas aรฑadido las lรญneas de cรณdigo relevantes, inicia el entrenamiento desde un script o notebook como Colaboratory. ### Entrenar con un script Si estรกs corriendo tu entrenamiento desde un script ejecuta el siguiente comando para crear y guardar un archivo de configuraciรณn: ```bash accelerate config ``` Comienza el entrenamiento con: ```bash accelerate launch train.py ``` ### Entrenar con un notebook ๐Ÿค— Accelerate puede correr en un notebook si, por ejemplo, estรกs planeando utilizar las TPUs de Colaboratory. Encierra el cรณdigo responsable del entrenamiento en una funciรณn y pรกsalo a `notebook_launcher`: ```py >>> from accelerate import notebook_launcher >>> notebook_launcher(training_function) ``` Para obtener mรกs informaciรณn sobre ๐Ÿค— Accelerate y sus numerosas funciones, consulta la [documentaciรณn](https://huggingface.co/docs/accelerate).
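A modo de referencia, este boceto (no forma parte de la guía original; usa un modelo y un dataset pequeños elegidos solo para ilustrar) muestra cómo podría verse la función de entrenamiento que se pasa a `notebook_launcher`: simplemente el bucle de entrenamiento completo envuelto en una función.

```py
from accelerate import Accelerator, notebook_launcher
from datasets import load_dataset
from torch.optim import AdamW
from torch.utils.data import DataLoader
from transformers import AutoModelForSequenceClassification, AutoTokenizer


def training_function():
    # Preparación de datos: tokenizamos un dataset pequeño a modo de ejemplo
    checkpoint = "distilbert-base-uncased"
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    dataset = load_dataset("glue", "mrpc", split="train")
    dataset = dataset.map(
        lambda batch: tokenizer(batch["sentence1"], batch["sentence2"], truncation=True, padding="max_length", max_length=128),
        batched=True,
    )
    dataset.set_format("torch", columns=["input_ids", "attention_mask", "label"])
    train_dataloader = DataLoader(dataset, batch_size=8, shuffle=True)

    model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
    optimizer = AdamW(model.parameters(), lr=3e-5)

    # Configuración de 🤗 Accelerate, como en el diff anterior
    accelerator = Accelerator()
    train_dataloader, model, optimizer = accelerator.prepare(train_dataloader, model, optimizer)

    model.train()
    for epoch in range(1):
        for batch in train_dataloader:
            outputs = model(input_ids=batch["input_ids"], attention_mask=batch["attention_mask"], labels=batch["label"])
            accelerator.backward(outputs.loss)
            optimizer.step()
            optimizer.zero_grad()


# Dentro del notebook, el entrenamiento se lanza así:
notebook_launcher(training_function)
```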
mavonic_private_repos/transformers/docs/source/es/sagemaker.md
<!--- Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Ejecutar el entrenamiento en Amazon SageMaker La documentaciรณn ha sido trasladada a [hf.co/docs/sagemaker](https://huggingface.co/docs/sagemaker). Esta pรกgina serรก eliminada en `transformers` 5.0. ### Tabla de contenido - [Entrenar modelos de Hugging Face en Amazon SageMaker con SageMaker Python SDK](https://huggingface.co/docs/sagemaker/train) - [Desplegar modelos de Hugging Face en Amazon SageMaker con SageMaker Python SDK](https://huggingface.co/docs/sagemaker/inference)
mavonic_private_repos/transformers/docs/source/es/chat_templating.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Plantillas para Modelos de Chat ## Introducciรณn Un caso de uso cada vez mรกs comรบn para LLMs es **el chat**. En un contexto de chat, en lugar de continuar una รบnica cadena de texto (como es el caso con un modelo de lenguaje estรกndar), el modelo continรบa una conversaciรณn que consta de uno o mรกs **mensajes**, cada uno de los cuales incluye un **rol**, como "usuario" o "asistente", asรญ como el texto del mensaje. Al igual que con la tokenizaciรณn, diferentes modelos esperan formatos de entrada muy diferentes para el chat. Esta es la razรณn por la que agregamos las plantillas de chat como una caracterรญstica. Las plantillas de chat son parte del tokenizador. Especifican cรณmo convertir conversaciones, representadas como listas de mensajes, en una รบnica cadena tokenizable en el formato que el modelo espera. Vamos a hacer esto con un ejemplo concreto utilizando el modelo `BlenderBot`. BlenderBot tiene una plantilla predeterminada extremadamente simple, que principalmente solo agrega espacios en blanco entre rondas de diรกlogo: ```python >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill") >>> chat = [ ... {"role": "user", "content": "Hello, how are you?"}, ... {"role": "assistant", "content": "I'm doing great. How can I help you today?"}, ... {"role": "user", "content": "I'd like to show off how chat templating works!"}, ... ] >>> tokenizer.apply_chat_template(chat, tokenize=False) " Hello, how are you? I'm doing great. How can I help you today? I'd like to show off how chat templating works!</s>" ``` Observa cรณmo todo el chat se condensa en una sola cadena. Si usamos `tokenize=True`, que es la configuraciรณn predeterminada, esa cadena tambiรฉn serรก tokenizada para nosotros. Sin embargo, para ver una plantilla mรกs compleja en acciรณn, usemos el modelo `mistralai/Mistral-7B-Instruct-v0.1` ```python >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1") >>> chat = [ ... {"role": "user", "content": "Hello, how are you?"}, ... {"role": "assistant", "content": "I'm doing great. How can I help you today?"}, ... {"role": "user", "content": "I'd like to show off how chat templating works!"}, ... ] >>> tokenizer.apply_chat_template(chat, tokenize=False) "<s>[INST] Hello, how are you? [/INST]I'm doing great. How can I help you today?</s> [INST] I'd like to show off how chat templating works! [/INST]" ``` Ten en cuenta que esta vez, el tokenizador ha aรฑadido los tokens de control [INST] y [/INST] para indicar el inicio y el final de los mensajes de usuario (ยกpero no de los mensajes del asistente!). Mistral-instruct fue entrenado con estos tokens, pero BlenderBot no lo fue. ## ยฟCรณmo uso las plantillas de chat? 
Como puedes ver en el ejemplo anterior, las plantillas de chat son fáciles de usar. Simplemente construye una lista de mensajes, con las claves `role` y `content`, y luego pásala al método [`~PreTrainedTokenizer.apply_chat_template`]. Una vez que hagas eso, ¡obtendrás una salida lista para usar! Al utilizar plantillas de chat como entrada para la generación de modelos, también es una buena idea usar `add_generation_prompt=True` para agregar una [indicación de generación](#¿Qué-son-los-"generation-prompts"?).

Aquí tienes un ejemplo de cómo preparar la entrada para `model.generate()` utilizando el modelo de asistente `Zephyr`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceH4/zephyr-7b-beta"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)  # You may want to use bfloat16 and/or move to GPU here

messages = [
    {
        "role": "system",
        "content": "You are a friendly chatbot who always responds in the style of a pirate",
    },
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
print(tokenizer.decode(tokenized_chat[0]))
```

Esto generará una cadena en el formato de entrada que Zephyr espera.

```text
<|system|>
You are a friendly chatbot who always responds in the style of a pirate</s>
<|user|>
How many helicopters can a human eat in one sitting?</s>
<|assistant|>
```

Ahora que nuestra entrada está formateada correctamente para Zephyr, podemos usar el modelo para generar una respuesta a la pregunta del usuario:

```python
outputs = model.generate(tokenized_chat, max_new_tokens=128)
print(tokenizer.decode(outputs[0]))
```

Esto producirá:

```text
<|system|>
You are a friendly chatbot who always responds in the style of a pirate</s>
<|user|>
How many helicopters can a human eat in one sitting?</s>
<|assistant|>
Matey, I'm afraid I must inform ye that humans cannot eat helicopters. Helicopters are not food, they are flying machines. Food is meant to be eaten, like a hearty plate o' grog, a savory bowl o' stew, or a delicious loaf o' bread. But helicopters, they be for transportin' and movin' around, not for eatin'. So, I'd say none, me hearties. None at all.
```

¡Arr, al final resultó ser fácil!

## ¿Existe un pipeline automatizado para chats?

¡Sí, lo hay! Nuestros pipelines de generación de texto admiten entradas de chat, lo que facilita el uso de los modelos de chat. En el pasado, solíamos utilizar una clase dedicada, `ConversationalPipeline`, pero ahora ha quedado obsoleta y su funcionalidad se ha fusionado en [`TextGenerationPipeline`]. Este pipeline está diseñado para facilitar el uso de modelos de chat. Intentemos el ejemplo de `Zephyr` de nuevo, pero esta vez utilizando el pipeline:

```python
from transformers import pipeline

pipe = pipeline("text-generation", "HuggingFaceH4/zephyr-7b-beta")
messages = [
    {
        "role": "system",
        "content": "You are a friendly chatbot who always responds in the style of a pirate",
    },
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
print(pipe(messages, max_new_tokens=128)[0]['generated_text'][-1])  # Print the assistant's response
```

```text
{'role': 'assistant', 'content': "Matey, I'm afraid I must inform ye that humans cannot eat helicopters. Helicopters are not food, they are flying machines.
Food is meant to be eaten, like a hearty plate o' grog, a savory bowl o' stew, or a delicious loaf o' bread. But helicopters, they be for transportin' and movin' around, not for eatin'. So, I'd say none, me hearties. None at all."} ``` La canalizaciรณn se encargarรก de todos los detalles de la tokenizaciรณn y de llamar a `apply_chat_template` por ti. Una vez que el modelo tenga una plantilla de chat, ยกtodo lo que necesitas hacer es inicializar el pipeline y pasarle la lista de mensajes! # ยฟQuรฉ son los "generation prompts"? Puede que hayas notado que el mรฉtodo `apply_chat_template` tiene un argumento `add_generation_prompt`. Este argumento indica a la plantilla que agregue tokens que indiquen el inicio de una respuesta del bot. Por ejemplo, considera el siguiente chat: ```python messages = [ {"role": "user", "content": "Hi there!"}, {"role": "assistant", "content": "Nice to meet you!"}, {"role": "user", "content": "Can I ask a question?"} ] ``` Asรญ es cรณmo se verรก esto sin un "generation prompt", usando la plantilla ChatML que vimos en el ejemplo de Zephyr: ```python tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=False) """<|im_start|>user Hi there!<|im_end|> <|im_start|>assistant Nice to meet you!<|im_end|> <|im_start|>user Can I ask a question?<|im_end|> """ ``` Y asรญ es como se ve **con** un "generation prompt": ```python tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) """<|im_start|>user Hi there!<|im_end|> <|im_start|>assistant Nice to meet you!<|im_end|> <|im_start|>user Can I ask a question?<|im_end|> <|im_start|>assistant """ ``` Ten en cuenta que esta vez, hemos agregado los tokens que indican el inicio de una respuesta del bot. Esto asegura que cuando el modelo genere texto, escribirรก una respuesta del bot en lugar de hacer algo inesperado, como continuar el mensaje del usuario. Recuerda, los modelos de chat siguen siendo solo modelos de lenguaje: estรกn entrenados para continuar texto, ยกy el chat es solo un tipo especial de texto para ellos! Necesitas guiarlos con los tokens de control apropiados para que sepan lo que se supone que deben estar haciendo. No todos los modelos requieren "generation prompts". Algunos modelos, como BlenderBot y LLaMA, no tienen ningรบn token especial antes de las respuestas del bot. En estos casos, el argumento `add_generation_prompt` no tendrรก ningรบn efecto. El efecto exacto que tiene `add_generation_prompt` dependerรก de la plantilla que se estรฉ utilizando. ## ยฟPuedo usar plantillas de chat en el entrenamiento? ยกSรญ! Recomendamos que apliques la plantilla de chat como un paso de preprocesamiento para tu conjunto de datos. Despuรฉs de esto, simplemente puedes continuar como cualquier otra tarea de entrenamiento de modelos de lenguaje. Durante el entrenamiento, generalmente deberรญas establecer `add_generation_prompt=False`, porque los tokens aรฑadidos para solicitar una respuesta del asistente no serรกn รบtiles durante el entrenamiento. 
Veamos un ejemplo: ```python from transformers import AutoTokenizer from datasets import Dataset tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta") chat1 = [ {"role": "user", "content": "Which is bigger, the moon or the sun?"}, {"role": "assistant", "content": "The sun."} ] chat2 = [ {"role": "user", "content": "Which is bigger, a virus or a bacterium?"}, {"role": "assistant", "content": "A bacterium."} ] dataset = Dataset.from_dict({"chat": [chat1, chat2]}) dataset = dataset.map(lambda x: {"formatted_chat": tokenizer.apply_chat_template(x["chat"], tokenize=False, add_generation_prompt=False)}) print(dataset['formatted_chat'][0]) ``` Y obtenemos: ```text <|user|> Which is bigger, the moon or the sun?</s> <|assistant|> The sun.</s> ``` Desde aquรญ, simplemente continรบa el entrenamiento como lo harรญas con una tarea estรกndar de modelado de lenguaje, utilizando la columna `formatted_chat`. ## Avanzado: ยฟCรณmo funcionan las plantillas de chat? La plantilla de chat para un modelo se almacena en el atributo `tokenizer.chat_template`. Si no se establece ninguna plantilla de chat, se utiliza en su lugar la plantilla predeterminada para esa clase de modelo. Echemos un vistazo a la plantilla para `BlenderBot`: ```python >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill") >>> tokenizer.default_chat_template "{% for message in messages %}{% if message['role'] == 'user' %}{{ ' ' }}{% endif %}{{ message['content'] }}{% if not loop.last %}{{ ' ' }}{% endif %}{% endfor %}{{ eos_token }}" ``` ยกEs un poco intimidante! Vamos a agregar algunas lรญneas nuevas y sangria para que sea mรกs legible. Ten en cuenta que la primera lรญnea nueva despuรฉs de cada bloque, asรญ como cualquier espacio en blanco anterior a un bloque, se ignoran de forma predeterminada, utilizando las banderas `trim_blocks` y `lstrip_blocks` de Jinja. Sin embargo, ยกten cuidado! Aunque el espacio en blanco inicial en cada lรญnea se elimina, los espacios entre bloques en la misma lรญnea no. ยกTe recomendamos encarecidamente que verifiques que tu plantilla no estรฉ imprimiendo espacios adicionales donde no deberรญa estarlo! ``` {% for message in messages %} {% if message['role'] == 'user' %} {{ ' ' }} {% endif %} {{ message['content'] }} {% if not loop.last %} {{ ' ' }} {% endif %} {% endfor %} {{ eos_token }} ``` Si nunca has visto uno de estos antes, esto es una [plantilla de Jinja](https://jinja.palletsprojects.com/en/3.1.x/templates/). Jinja es un lenguaje de plantillas que te permite escribir cรณdigo simple que genera texto. En muchos aspectos, el cรณdigo y la sintaxis se asemejan a Python. En Python puro, esta plantilla se verรญa algo asรญ: ```python for idx, message in enumerate(messages): if message['role'] == 'user': print(' ') print(message['content']) if not idx == len(messages) - 1: # Check for the last message in the conversation print(' ') print(eos_token) ``` Efectivamente, la plantilla hace tres cosas: 1. Para cada mensaje, si el mensaje es un mensaje de usuario, aรฑade un espacio en blanco antes de รฉl, de lo contrario no imprime nada. 2. Aรฑade el contenido del mensaje. 3. Si el mensaje no es el รบltimo mensaje, aรฑade dos espacios despuรฉs de รฉl. Despuรฉs del รบltimo mensaje, imprime el token EOS. Esta es una plantilla bastante simple: no aรฑade ningรบn token de control y no admite mensajes "del sistema", que son una forma comรบn de dar al modelo directivas sobre cรณmo debe comportarse en la conversaciรณn posterior. 
ยกPero Jinja te brinda mucha flexibilidad para hacer esas cosas! Veamos una plantilla de Jinja que pueda formatear las entradas de manera similar a la forma en que LLaMA las formatea (nota que la plantilla real de LLaMA incluye el manejo de mensajes del sistema predeterminados y el manejo de mensajes del sistema ligeramente diferentes en general; ยกno uses esta en tu cรณdigo real!) ``` {% for message in messages %} {% if message['role'] == 'user' %} {{ bos_token + '[INST] ' + message['content'] + ' [/INST]' }} {% elif message['role'] == 'system' %} {{ '<<SYS>>\\n' + message['content'] + '\\n<</SYS>>\\n\\n' }} {% elif message['role'] == 'assistant' %} {{ ' ' + message['content'] + ' ' + eos_token }} {% endif %} {% endfor %} ``` Si observas esto por un momento, puedas ver lo que esta plantilla estรก haciendo: aรฑade tokens especรญficos basados en el "rol" de cada mensaje, que representa quiรฉn lo enviรณ. Los mensajes de usuario, asistente y sistema son claramente distinguibles para el modelo debido a los tokens en los que estรกn envueltos. ## Avanzado: Aรฑadiendo y editando plantillas de chat ### ยฟCรณmo creo una plantilla de chat? Simple, solo escribe una plantilla de Jinja y establece `tokenizer.chat_template`. ยกPuede resultarte mรกs fรกcil comenzar con una plantilla existente de otro modelo y simplemente editarla segรบn tus necesidades! Por ejemplo, podrรญamos tomar la plantilla de LLaMA de arriba y aรฑadir "[ASST]" y "[/ASST]" a los mensajes del asistente: ``` {% for message in messages %} {% if message['role'] == 'user' %} {{ bos_token + '[INST] ' + message['content'].strip() + ' [/INST]' }} {% elif message['role'] == 'system' %} {{ '<<SYS>>\\n' + message['content'].strip() + '\\n<</SYS>>\\n\\n' }} {% elif message['role'] == 'assistant' %} {{ '[ASST] ' + message['content'] + ' [/ASST]' + eos_token }} {% endif %} {% endfor %} ``` Ahora, simplemente establece el atributo `tokenizer.chat_template`. ยกLa prรณxima vez que uses [`~PreTrainedTokenizer.apply_chat_template`], se utilizarรก tu nueva plantilla! Este atributo se guardarรก en el archivo tokenizer_config.json, por lo que puedes usar [`~utils.PushToHubMixin.push_to_hub`] para cargar tu nueva plantilla en el Hub y asegurarte de que todos estรฉn utilizando la plantilla correcta para tu modelo. ```python template = tokenizer.chat_template template = template.replace("SYS", "SYSTEM") # Change the system token tokenizer.chat_template = template # Set the new template tokenizer.push_to_hub("model_name") # Upload your new template to the Hub! ``` El mรฉtodo [`~PreTrainedTokenizer.apply_chat_template`], que utiliza tu plantilla de chat, es llamado por la clase [`TextGenerationPipeline`], asรญ que una vez que configures la plantilla de chat correcta, tu modelo se volverรก automรกticamente compatible con [`TextGenerationPipeline`]. <Tip> Si estรกs ajustando finamente un modelo para chat, ademรกs de establecer una plantilla de chat, probablemente deberรญas agregar cualquier nuevo token de control de chat como los tokens especiales en el tokenizador. Los tokens especiales nunca se dividen, asegurando que tus tokens de control siempre se manejen como tokens รบnicos en lugar de ser tokenizados en piezas. Tambiรฉn deberรญas establecer el atributo `eos_token` del tokenizador con el token que marca el final de las generaciones del asistente en tu plantilla. Esto asegurarรก que las herramientas de generaciรณn de texto puedan determinar correctamente cuรกndo detener la generaciรณn de texto. </Tip> ### ยฟQuรฉ son las plantillas "default"? 
Antes de la introducciรณn de las plantillas de chat, el manejo del chat estaba codificado en el nivel de la clase del modelo. Por razones de compatibilidad con versiones anteriores, hemos conservado este manejo especรญfico de la clase como plantillas predeterminadas, tambiรฉn establecidas a nivel de clase. Si un modelo no tiene una plantilla de chat establecida, pero hay una plantilla predeterminada para su clase de modelo, la clase `TextGenerationPipeline` y mรฉtodos como `apply_chat_template` usarรกn la plantilla de clase en su lugar. Puedes averiguar cuรกl es la plantilla predeterminada para tu tokenizador comprobando el atributo `tokenizer.default_chat_template`. Esto es algo que hacemos puramente por razones de compatibilidad con versiones anteriores, para evitar romper cualquier flujo de trabajo existente. Incluso cuando la plantilla de clase es apropiada para tu modelo, recomendamos encarecidamente anular la plantilla predeterminada estableciendo explรญcitamente el atributo `chat_template` para dejar claro a los usuarios que tu modelo ha sido configurado correctamente para el chat, y para estar preparados para el futuro en caso de que las plantillas predeterminadas alguna vez se alteren o se eliminen. ### ยฟQuรฉ plantilla deberรญa usar? Cuando establezcas la plantilla para un modelo que ya ha sido entrenado para chat, debes asegurarte de que la plantilla coincida exactamente con el formato de mensajes que el modelo vio durante el entrenamiento, o de lo contrario es probable que experimentes degradaciรณn del rendimiento. Esto es cierto incluso si estรกs entrenando aรบn mรกs el modelo; probablemente obtendrรกs el mejor rendimiento si mantienes constantes los tokens de chat. Esto es muy anรกlogo a la tokenizaciรณn: generalmente obtienes el mejor rendimiento para la inferencia o el ajuste fino cuando coincides precisamente con la tokenizaciรณn utilizada durante el entrenamiento. Si estรกs entrenando un modelo desde cero o ajustando finamente un modelo de lenguaje base para chat, por otro lado, ยกtienes mucha libertad para elegir una plantilla apropiada! Los LLM son lo suficientemente inteligentes como para aprender a manejar muchos formatos de entrada diferentes. Nuestra plantilla predeterminada para modelos que no tienen una plantilla especรญfica de clase sigue el formato ChatML, y esta es una buena elecciรณn flexible para muchos casos de uso. Se ve asรญ: ``` {% for message in messages %} {{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}} {% endfor %} ``` Si te gusta esta plantilla, aquรญ estรก en forma de una sola lรญnea, lista para copiar en tu cรณdigo. La versiรณn de una sola lรญnea tambiรฉn incluye un prรกctico soporte para [prompts de generaciรณn](#ยฟQuรฉ-son-los-"generation-prompts"?), ยกpero ten en cuenta que no aรฑade tokens de BOS o EOS! Si tu modelo espera esos tokens, no se agregarรกn automรกticamente por `apply_chat_template`, en otras palabras, el texto serรก tokenizado con `add_special_tokens=False`. Esto es para evitar posibles conflictos entre la plantilla y la lรณgica de `add_special_tokens`. ยกSi tu modelo espera tokens especiales, asegรบrate de aรฑadirlos a la plantilla! 
```python tokenizer.chat_template = "{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}" ``` Esta plantilla envuelve cada mensaje en tokens `<|im_start|>` y `<|im_end|>`, y simplemente escribe el rol como una cadena, lo que permite flexibilidad en los roles con los que entrenas. La salida se ve asรญ: ```text <|im_start|>system You are a helpful chatbot that will do its best not to say anything so stupid that people tweet about it.<|im_end|> <|im_start|>user How are you?<|im_end|> <|im_start|>assistant I'm doing great!<|im_end|> ``` Los roles "usuario", "sistema" y "asistente" son los estรกndar para chat, y recomendamos usarlos cuando tenga sentido, particularmente si deseas que tu modelo funcione bien con [`TextGenerationPipeline`]. Sin embargo, no estรกs limitado a estos roles: la plantilla es extremadamente flexible y cualquier cadena puede ser un rol. ### ยกQuiero aรฑadir algunas plantillas de chat! ยฟCรณmo debo empezar? Si tienes algรบn modelo de chat, debes establecer su atributo `tokenizer.chat_template` y probarlo usando [`~PreTrainedTokenizer.apply_chat_template`], luego subir el tokenizador actualizado al Hub. Esto se aplica incluso si no eres el propietario del modelo: si estรกs usando un modelo con una plantilla de chat vacรญa o que todavรญa estรก utilizando la plantilla predeterminada de clase, por favor abre una solicitud de extracciรณn [pull request](https://huggingface.co/docs/hub/repositories-pull-requests-discussions) al repositorio del modelo para que este atributo se pueda establecer correctamente. Una vez que se establece el atributo, ยกeso es todo, has terminado! `tokenizer.apply_chat_template` ahora funcionarรก correctamente para ese modelo, ยกlo que significa que tambiรฉn es compatible automรกticamente en lugares como `TextGenerationPipeline`! Al asegurarnos de que los modelos tengan este atributo, podemos garantizar que toda la comunidad pueda utilizar todo el poder de los modelos de cรณdigo abierto. Los desajustes de formato han estado acechando el campo y daรฑando silenciosamente el rendimiento durante demasiado tiempo: ยกes hora de ponerles fin! ## Avanzado: Consejos para escribir plantillas Si no estรกs familiarizado con Jinja, generalmente encontramos que la forma mรกs fรกcil de escribir una plantilla de chat es primero escribir un script de Python corto que formatee los mensajes como desees, y luego convertir ese script en una plantilla. Recuerda que el manejador de plantillas recibirรก el historial de conversaciรณn como una variable llamada mensajes. Cada mensaje es un diccionario con dos claves, `role` y `content`. Podrรกs acceder a los `mensajes` en tu plantilla tal como lo harรญas en Python, lo que significa que puedes recorrerlo con `{% for message in messages %}` o acceder a mensajes individuales con, por ejemplo, `{{ messages[0] }}`. Tambiรฉn puedes usar los siguientes consejos para convertir tu cรณdigo a Jinja: ### Bucles For Los bucles For en Jinja se ven asรญ: ``` {% for message in messages %} {{ message['content'] }} {% endfor %} ``` Ten en cuenta que todo lo que estรฉ dentro del {{bloque de expresiรณn}} se imprimirรก en la salida. Puedes usar operadores como `+` para combinar cadenas dentro de bloques de expresiรณn. 
### Declaraciones if Las declaraciones if en Jinja se ven asรญ: ``` {% if message['role'] == 'user' %} {{ message['content'] }} {% endif %} ``` Observa cรณmo donde Python utiliza espacios en blanco para marcar el inicio y el final de los bloques `for` e `if`, Jinja requiere que los termines explรญcitamente con `{% endfor %}` y `{% endif %}`. ### Variables especiales Dentro de tu plantilla, tendrรกs acceso a la lista de `mensajes`, pero tambiรฉn puedes acceder a varias otras variables especiales. Estas incluyen tokens especiales como `bos_token` y `eos_token`, asรญ como la variable `add_generation_prompt` que discutimos anteriormente. Tambiรฉn puedes usar la variable `loop` para acceder a informaciรณn sobre la iteraciรณn actual del bucle, por ejemplo, usando `{% if loop.last %}` para verificar si el mensaje actual es el รบltimo mensaje en la conversaciรณn. Aquรญ tienes un ejemplo que combina estas ideas para agregar un prompt de generaciรณn al final de la conversaciรณn si add_generation_prompt es `True`: ``` {% if loop.last and add_generation_prompt %} {{ bos_token + 'Assistant:\n' }} {% endif %} ``` ### Notas sobre los espacios en blanco Hemos intentado que Jinja ignore los espacios en blanco fuera de las {{expresiones}} tanto como sea posible. Sin embargo, ten en cuenta que Jinja es un motor de plantillas de propรณsito general y puede tratar el espacio en blanco entre bloques en la misma lรญnea como significativo e imprimirlo en la salida. ยกTe recomendamos **encarecidamente** que verifiques que tu plantilla no estรฉ imprimiendo espacios adicionales donde no deberรญa antes de subirla!
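Una forma sencilla de hacer esa verificación (boceto ilustrativo, no forma parte de la guía original) es aplicar la plantilla a una conversación corta con `tokenize=False` e imprimir el resultado con `repr()`, de modo que los espacios y saltos de línea queden visibles:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")

chat = [
    {"role": "user", "content": "Hola"},
    {"role": "assistant", "content": "¡Hola! ¿En qué puedo ayudarte?"},
]

# repr() muestra explícitamente los '\n' y los espacios, lo que facilita detectar
# espacios en blanco que la plantilla no debería estar imprimiendo
print(repr(tokenizer.apply_chat_template(chat, tokenize=False)))
```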
mavonic_private_repos/transformers/docs/source/es/installation.md
<!--- Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Instalaciรณn En esta guรญa puedes encontrar informaciรณn para instalar ๐Ÿค— Transformers para cualquier biblioteca de Machine Learning con la que estรฉs trabajando. Ademรกs, encontrarรกs informaciรณn sobre cรณmo establecer el cachรฉ y cรณmo configurar ๐Ÿค— Transformers para correrlo de manera offline (opcional). ๐Ÿค— Transformers ha sido probada en Python 3.6+, PyTorch 1.1.0+, TensorFlow 2.0+, y Flax. Para instalar la biblioteca de deep learning con la que desees trabajar, sigue las instrucciones correspondientes listadas a continuaciรณn: * [PyTorch](https://pytorch.org/get-started/locally/) * [TensorFlow 2.0](https://www.tensorflow.org/install/pip) * [Flax](https://flax.readthedocs.io/en/latest/) ## Instalaciรณn con pip Es necesario instalar ๐Ÿค— Transformers en un [entorno virtual](https://docs.python.org/3/library/venv.html). Si necesitas mรกs informaciรณn sobre entornos virtuales de Python, consulta esta [guรญa](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/ ). Un entorno virtual facilita el manejo de proyectos y evita problemas de compatibilidad entre dependencias. Comienza por crear un entorno virtual en el directorio de tu proyecto: ```bash python -m venv .env ``` Activa el entorno virtual: ```bash source .env/bin/activate ``` Ahora puedes instalar ๐Ÿค— Transformers con el siguiente comando: ```bash pip install transformers ``` Solo para CPU, puedes instalar ๐Ÿค— Transformers y una biblioteca de deep learning con un comando de una sola lรญnea. Por ejemplo, instala ๐Ÿค— Transformers y Pytorch: ```bash pip install transformers[torch] ``` ๐Ÿค— Transformers y TensorFlow 2.0: ```bash pip install transformers[tf-cpu] ``` ๐Ÿค— Transformers y Flax: ```bash pip install transformers[flax] ``` Por รบltimo, revisa si ๐Ÿค— Transformers ha sido instalada exitosamente con el siguiente comando que descarga un modelo pre-entrenado: ```bash python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))" ``` Despuรฉs imprime la etiqueta y el puntaje: ```bash [{'label': 'POSITIVE', 'score': 0.9998704791069031}] ``` ## Instalaciรณn desde la fuente Instala ๐Ÿค— Transformers desde la fuente con el siguiente comando: ```bash pip install git+https://github.com/huggingface/transformers ``` El comando de arriba instala la versiรณn `master` mรกs actual en vez de la รบltima versiรณn estable. La versiรณn `master` es รบtil para obtener los รบltimos avances de ๐Ÿค— Transformers. Por ejemplo, se puede dar el caso de que un error fue corregido despuรฉs de la รบltima versiรณn estable pero aรบn no se ha liberado un nuevo lanzamiento. Sin embargo, existe la posibilidad de que la versiรณn `master` no sea estable. 
El equipo trata de mantener la versiรณn `master` operacional y la mayorรญa de los errores son resueltos en unas cuantas horas o un dรญa. Si encuentras algรบn problema, por favor abre un [Issue](https://github.com/huggingface/transformers/issues) para que pueda ser corregido mรกs rรกpido. Verifica si ๐Ÿค— Transformers estรก instalada apropiadamente con el siguiente comando: ```bash python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('I love you'))" ``` ## Instalaciรณn editable Necesitarรกs una instalaciรณn editable si deseas: * Usar la versiรณn `master` del cรณdigo fuente. * Contribuir a ๐Ÿค— Transformers y necesitas probar cambios en el cรณdigo. Clona el repositorio e instala ๐Ÿค— Transformers con los siguientes comandos: ```bash git clone https://github.com/huggingface/transformers.git cd transformers pip install -e . ``` ร‰stos comandos van a ligar el directorio desde donde clonamos el repositorio al path de las bibliotecas de Python. Python ahora buscarรก dentro de la carpeta que clonaste ademรกs de los paths normales de la biblioteca. Por ejemplo, si los paquetes de Python se encuentran instalados en `~/anaconda3/envs/main/lib/python3.7/site-packages/`, Python tambiรฉn buscarรก en el directorio desde donde clonamos el repositorio `~/transformers/`. <Tip warning={true}> Debes mantener el directorio `transformers` si deseas seguir usando la biblioteca. </Tip> Puedes actualizar tu copia local a la รบltima versiรณn de ๐Ÿค— Transformers con el siguiente comando: ```bash cd ~/transformers/ git pull ``` El entorno de Python que creaste para la instalaciรณn de ๐Ÿค— Transformers encontrarรก la versiรณn `master` en la siguiente ejecuciรณn. ## Instalaciรณn con conda Puedes instalar ๐Ÿค— Transformers desde el canal de conda `conda-forge` con el siguiente comando: ```bash conda install conda-forge::transformers ``` ## Configuraciรณn de Cachรฉ Los modelos preentrenados se descargan y almacenan en cachรฉ localmente en: `~/.cache/huggingface/transformers/`. Este es el directorio predeterminado proporcionado por la variable de entorno de shell `TRANSFORMERS_CACHE`. En Windows, el directorio predeterminado es dado por `C:\Users\username\.cache\huggingface\transformers`. Puedes cambiar las variables de entorno de shell que se muestran a continuaciรณn, en orden de prioridad, para especificar un directorio de cachรฉ diferente: 1. Variable de entorno del shell (por defecto): `TRANSFORMERS_CACHE`. 2. Variable de entorno del shell:`HF_HOME` + `transformers/`. 3. Variable de entorno del shell: `XDG_CACHE_HOME` + `/huggingface/transformers`. <Tip> ๐Ÿค— Transformers usarรก las variables de entorno de shell `PYTORCH_TRANSFORMERS_CACHE` o `PYTORCH_PRETRAINED_BERT_CACHE` si viene de una iteraciรณn anterior de la biblioteca y ha configurado esas variables de entorno, a menos que especifiques la variable de entorno de shell `TRANSFORMERS_CACHE`. </Tip> ## Modo Offline ๐Ÿค— Transformers puede ejecutarse en un entorno con firewall o fuera de lรญnea (offline) usando solo archivos locales. Configura la variable de entorno `TRANSFORMERS_OFFLINE=1` para habilitar este comportamiento. <Tip> Puedes aรฑadir [๐Ÿค— Datasets](https://huggingface.co/docs/datasets/) al flujo de entrenamiento offline declarando la variable de entorno `HF_DATASETS_OFFLINE=1`. 
</Tip> Por ejemplo, normalmente ejecutarรญas un programa en una red normal con firewall para instancias externas con el siguiente comando: ```bash python examples/pytorch/translation/run_translation.py --model_name_or_path google-t5/t5-small --dataset_name wmt16 --dataset_config ro-en ... ``` Ejecuta este mismo programa en una instancia offline con el siguiente comando: ```bash HF_DATASETS_OFFLINE=1 TRANSFORMERS_OFFLINE=1 \ python examples/pytorch/translation/run_translation.py --model_name_or_path google-t5/t5-small --dataset_name wmt16 --dataset_config ro-en ... ``` El script ahora deberรญa ejecutarse sin bloquearse ni esperar a que se agote el tiempo de espera porque sabe que solo debe buscar archivos locales. ### Obtener modelos y tokenizers para uso offline Otra opciรณn para usar ๐Ÿค— Transformers offline es descargando previamente los archivos y despuรฉs apuntar al path local donde se encuentren. Hay tres maneras de hacer esto: * Descarga un archivo mediante la interfaz de usuario del [Model Hub](https://huggingface.co/models) haciendo click en el รญcono โ†“. ![download-icon](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/download-icon.png) * Utiliza el flujo de [`PreTrainedModel.from_pretrained`] y [`PreTrainedModel.save_pretrained`]: 1. Descarga previamente los archivos con [`PreTrainedModel.from_pretrained`]: ```py >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM >>> tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B") >>> model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B") ``` 2. Guarda los archivos en un directorio especรญfico con [`PreTrainedModel.save_pretrained`]: ```py >>> tokenizer.save_pretrained("./your/path/bigscience_t0") >>> model.save_pretrained("./your/path/bigscience_t0") ``` 3. Cuando te encuentres offline, recarga los archivos con [`PreTrainedModel.from_pretrained`] desde el directorio especificado: ```py >>> tokenizer = AutoTokenizer.from_pretrained("./your/path/bigscience_t0") >>> model = AutoModel.from_pretrained("./your/path/bigscience_t0") ``` * Descarga de manera programรกtica los archivos con la biblioteca [huggingface_hub](https://github.com/huggingface/huggingface_hub/tree/main/src/huggingface_hub): 1. Instala la biblioteca [huggingface_hub](https://github.com/huggingface/huggingface_hub/tree/main/src/huggingface_hub) en tu entorno virtual: ```bash python -m pip install huggingface_hub ``` 2. Utiliza la funciรณn [`hf_hub_download`](https://huggingface.co/docs/hub/adding-a-library#download-files-from-the-hub) para descargar un archivo a un path especรญfico. Por ejemplo, el siguiente comando descarga el archivo `config.json` del modelo [T0](https://huggingface.co/bigscience/T0_3B) al path deseado: ```py >>> from huggingface_hub import hf_hub_download >>> hf_hub_download(repo_id="bigscience/T0_3B", filename="config.json", cache_dir="./your/path/bigscience_t0") ``` Una vez que el archivo se descargue y se almacene en cachรฉ localmente, especifica tu ruta local para cargarlo y usarlo: ```py >>> from transformers import AutoConfig >>> config = AutoConfig.from_pretrained("./your/path/bigscience_t0/config.json") ``` <Tip> Para mรกs detalles sobre cรณmo descargar archivos almacenados en el Hub consulta la secciรณn [How to download files from the Hub](https://huggingface.co/docs/hub/how-to-downstream). </Tip>
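Como referencia, este pequeño boceto (no forma parte de la guía original) muestra que las mismas variables de entorno del modo offline también pueden declararse desde Python, siempre que se establezcan antes de importar 🤗 Transformers, y cómo combinarlas con los archivos descargados previamente:

```py
import os

# Declara el modo offline antes de importar transformers
os.environ["TRANSFORMERS_OFFLINE"] = "1"
os.environ["HF_DATASETS_OFFLINE"] = "1"

from transformers import AutoConfig

# Carga la configuración descargada previamente con hf_hub_download
config = AutoConfig.from_pretrained("./your/path/bigscience_t0/config.json")
```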
mavonic_private_repos/transformers/docs/source/es/quicktour.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Tour rรกpido [[open-in-colab]] ยกEntra en marcha con los ๐Ÿค— Transformers! Comienza usando [`pipeline`] para una inferencia veloz, carga un modelo preentrenado y un tokenizador con una [AutoClass](./model_doc/auto) para resolver tu tarea de texto, visiรณn o audio. <Tip> Todos los ejemplos de cรณdigo presentados en la documentaciรณn tienen un botรณn arriba a la derecha para elegir si quieres ocultar o mostrar el cรณdigo en Pytorch o TensorFlow. Si no fuese asรญ, se espera que el cรณdigo funcione para ambos backends sin ningรบn cambio. </Tip> ## Pipeline [`pipeline`] es la forma mรกs fรกcil de usar un modelo preentrenado para una tarea dada. <Youtube id="tiZFewofSLM"/> El [`pipeline`] soporta muchas tareas comunes listas para usar: **Texto**: * Anรกlisis de Sentimiento (Sentiment Analysis, en inglรฉs): clasifica la polaridad de un texto dado. * Generaciรณn de Texto (Text Generation, en inglรฉs): genera texto a partir de un input dado. * Reconocimiento de Entidades (Name Entity Recognition o NER, en inglรฉs): etiqueta cada palabra con la entidad que representa (persona, fecha, ubicaciรณn, etc.). * Responder Preguntas (Question answering, en inglรฉs): extrae la respuesta del contexto dado un contexto y una pregunta. * Rellenar Mรกscara (Fill-mask, en inglรฉs): rellena el espacio faltante dado un texto con palabras enmascaradas. * Resumir (Summarization, en inglรฉs): genera un resumen de una secuencia larga de texto o un documento. * Traducciรณn (Translation, en inglรฉs): traduce un texto a otro idioma. * Extracciรณn de Caracterรญsticas (Feature Extraction, en inglรฉs): crea una representaciรณn tensorial del texto. **Imagen**: * Clasificaciรณn de Imรกgenes (Image Classification, en inglรฉs): clasifica una imagen. * Segmentaciรณn de Imรกgenes (Image Segmentation, en inglรฉs): clasifica cada pixel de una imagen. * Detecciรณn de Objetos (Object Detection, en inglรฉs): detecta objetos dentro de una imagen. **Audio**: * Clasificaciรณn de Audios (Audio Classification, en inglรฉs): asigna una etiqueta a un segmento de audio. * Reconocimiento de Voz Automรกtico (Automatic Speech Recognition o ASR, en inglรฉs): transcribe datos de audio a un texto. <Tip> Para mรกs detalles acerca del [`pipeline`] y tareas asociadas, consulta la documentaciรณn [aquรญ](./main_classes/pipelines). </Tip> ### Uso del Pipeline En el siguiente ejemplo, usarรกs el [`pipeline`] para anรกlisis de sentimiento. 
Instala las siguientes dependencias si aรบn no lo has hecho: <frameworkcontent> <pt> ```bash pip install torch ``` </pt> <tf> ```bash pip install tensorflow ``` </tf> </frameworkcontent> Importa [`pipeline`] y especifica la tarea que deseas completar: ```py >>> from transformers import pipeline >>> clasificador = pipeline("sentiment-analysis", model="pysentimiento/robertuito-sentiment-analysis") ``` El pipeline descarga y almacena en cachรฉ el [modelo preentrenado](https://huggingface.co/pysentimiento/robertuito-sentiment-analysis) y tokeniza para anรกlisis de sentimiento. Si no hubieramos elegido un modelo el pipeline habrรญa elegido uno por defecto. Ahora puedes usar `clasificador` en tu texto objetivo: ```py >>> clasificador("Estamos muy felices de mostrarte la biblioteca de ๐Ÿค— Transformers.") [{'label': 'POS', 'score': 0.9320}] ``` Para mรกs de un enunciado, entrega una lista al [`pipeline`] que devolverรก una lista de diccionarios: El [`pipeline`] tambiรฉn puede iterar sobre un dataset entero. Comienza instalando la biblioteca [๐Ÿค— Datasets](https://huggingface.co/docs/datasets/): ```bash pip install datasets ``` Crea un [`pipeline`] con la tarea que deseas resolver y el modelo que quieres usar. Coloca el parรกmetro `device` a `0` para poner los tensores en un dispositivo CUDA: ```py >>> import torch >>> from transformers import pipeline >>> reconocedor_de_voz = pipeline( ... "automatic-speech-recognition", model="jonatasgrosman/wav2vec2-large-xlsr-53-spanish", device=0 ... ) ``` A continuaciรณn, carga el dataset (ve ๐Ÿค— Datasets [Quick Start](https://huggingface.co/docs/datasets/quickstart.html) para mรกs detalles) sobre el que quisieras iterar. Por ejemplo, vamos a cargar el dataset [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14): ```py >>> from datasets import load_dataset, Audio >>> dataset = load_dataset("PolyAI/minds14", name="es-ES", split="train") # doctest: +IGNORE_RESULT ``` Debemos asegurarnos de que la frecuencia de muestreo del conjunto de datos coincide con la frecuencia de muestreo con la que se entrenรณ `jonatasgrosman/wav2vec2-large-xlsr-53-spanish`. ```py >>> dataset = dataset.cast_column("audio", Audio(sampling_rate=reconocedor_de_voz.feature_extractor.sampling_rate)) ``` Los archivos de audio se cargan y remuestrean automรกticamente cuando llamamos a la columna `"audio"`. Extraigamos las matrices de onda cruda (raw waveform, en inglรฉs) de las primeras 4 muestras y pasรฉmosla como una lista al pipeline: ```py >>> resultado = reconocedor_de_voz(dataset[:4]["audio"]) >>> print([d["text"] for d in resultado]) ['ahora buenas eh a ver tengo un problema con vuestra aplicaciรณn resulta que que quiero hacer una transferencia bancaria a una cuenta conocida pero me da error la aplicaciรณn a ver que a ver que puede ser', 'la aplicaciรณn no cargue saldo de mi nueva cuenta', 'hola tengo un problema con la aplicaciรณn no carga y y tampoco veo que carga el saldo de mi cuenta nueva dice que la aplicaciรณn estรก siendo reparada y ahora no puedo acceder a mi cuenta no necesito inmediatamente', 'hora buena la aplicaciรณn no se carga la vida no carga el saldo de mi cuenta nueva dice que la villadenta siendo reparada y oro no puedo hacer a mi cuenta'] ``` Para un dataset mรกs grande, donde los inputs son de mayor tamaรฑo (como en habla/audio o visiรณn), querrรกs pasar un generador en lugar de una lista que carga todos los inputs en memoria. Ve la [documentaciรณn del pipeline](./main_classes/pipelines) para mรกs informaciรณn. 
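Como referencia (boceto que no forma parte de la guía original), en lugar de una lista también puedes pasarle al [`pipeline`] un generador que entregue los ejemplos de uno en uno, de modo que el dataset completo nunca se cargue en memoria:

```py
def flujo_de_audios():
    # Entrega los audios de uno en uno en lugar de materializar una lista completa
    for ejemplo in dataset:
        yield ejemplo["audio"]

# El pipeline consume el generador y produce las predicciones de forma perezosa
for prediccion in reconocedor_de_voz(flujo_de_audios()):
    print(prediccion["text"])
```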
### Usa otro modelo y otro tokenizador en el pipeline El [`pipeline`] puede acomodarse a cualquier modelo del [Model Hub](https://huggingface.co/models) haciendo mรกs fรกcil adaptar el [`pipeline`] para otros casos de uso. Por ejemplo, si quisieras un modelo capaz de manejar texto en francรฉs, usa los tags en el Model Hub para filtrar entre los modelos apropiados. El resultado mejor filtrado devuelve un [modelo BERT](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) multilingual fine-tuned para el anรกlisis de sentimiento. Genial, ยกvamos a usar este modelo! ```py >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" ``` <frameworkcontent> <pt> Usa [`AutoModelForSequenceClassification`] y ['AutoTokenizer'] para cargar un modelo preentrenado y un tokenizador asociado (mรกs en un `AutoClass` debajo): ```py >>> from transformers import AutoTokenizer, AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained(model_name) >>> tokenizer = AutoTokenizer.from_pretrained(model_name) ``` </pt> <tf> Usa [`TFAutoModelForSequenceClassification`] y ['AutoTokenizer'] para cargar un modelo preentrenado y un tokenizador asociado (mรกs en un `TFAutoClass` debajo): ```py >>> from transformers import AutoTokenizer, TFAutoModelForSequenceClassification >>> model = TFAutoModelForSequenceClassification.from_pretrained(model_name) >>> tokenizer = AutoTokenizer.from_pretrained(model_name) ``` </tf> </frameworkcontent> Despuรฉs puedes especificar el modelo y el tokenizador en el [`pipeline`], y aplicar el `classifier` en tu texto objetivo: ```py >>> classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer) >>> classifier("Nous sommes trรจs heureux de vous prรฉsenter la bibliothรจque ๐Ÿค— Transformers.") [{'label': '5 stars', 'score': 0.7273}] ``` Si no pudieras encontrar el modelo para tu caso respectivo de uso necesitarรกs ajustar un modelo preentrenado a tus datos. Mira nuestro [tutorial de fine-tuning](./training) para aprender cรณmo. Finalmente, despuรฉs de que has ajustado tu modelo preentrenado, ยกpor favor considera compartirlo (ve el tutorial [aquรญ](./model_sharing)) con la comunidad en el Model Hub para democratizar el NLP! ๐Ÿค— ## AutoClass <Youtube id="AhChOFRegn4"/> Por debajo, las clases [`AutoModelForSequenceClassification`] y [`AutoTokenizer`] trabajan juntas para dar poder al [`pipeline`]. Una [AutoClass](./model_doc/auto) es un atajo que automรกticamente recupera la arquitectura de un modelo preentrenado con su nombre o el path. Sรณlo necesitarรกs seleccionar el `AutoClass` apropiado para tu tarea y tu tokenizador asociado con [`AutoTokenizer`]. Regresemos a nuestro ejemplo y veamos cรณmo puedes usar el `AutoClass` para reproducir los resultados del [`pipeline`]. ### AutoTokenizer Un tokenizador es responsable de procesar el texto a un formato que sea entendible para el modelo. Primero, el tokenizador separarรก el texto en palabras llamadas *tokens*. Hay mรบltiples reglas que gobiernan el proceso de tokenizaciรณn incluyendo el cรณmo separar una palabra y en quรฉ nivel (aprende mรกs sobre tokenizaciรณn [aquรญ](./tokenizer_summary)). Lo mรกs importante es recordar que necesitarรกs instanciar el tokenizador con el mismo nombre del modelo para asegurar que estรกs usando las mismas reglas de tokenizaciรณn con las que el modelo fue preentrenado. 
Carga un tokenizador con [`AutoTokenizer`]: ```py >>> from transformers import AutoTokenizer >>> nombre_del_modelo = "nlptown/bert-base-multilingual-uncased-sentiment" >>> tokenizer = AutoTokenizer.from_pretrained(nombre_del_modelo) ``` Despuรฉs, el tokenizador convierte los tokens a nรบmeros para construir un tensor que servirรก como input para el modelo. Esto es conocido como el *vocabulario* del modelo. Pasa tu texto al tokenizador: ```py >>> encoding = tokenizer("Estamos muy felices de mostrarte la biblioteca de ๐Ÿค— Transformers.") >>> print(encoding) {'input_ids': [101, 10602, 14000, 13653, 43353, 10107, 10102, 47201, 10218, 10106, 18283, 10102, 100, 58263, 119, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} ``` El tokenizador devolverรก un diccionario conteniendo: * [input_ids](./glossary#input-ids): representaciones numรฉricas de los tokens. * [atttention_mask](.glossary#attention-mask): indica cuรกles tokens deben ser atendidos. Como con el [`pipeline`], el tokenizador aceptarรก una lista de inputs. Ademรกs, el tokenizador tambiรฉn puede rellenar (pad, en inglรฉs) y truncar el texto para devolver un lote (batch, en inglรฉs) de longitud uniforme: <frameworkcontent> <pt> ```py >>> pt_batch = tokenizer( ... ["We are very happy to show you the ๐Ÿค— Transformers library.", "We hope you don't hate it."], ... padding=True, ... truncation=True, ... max_length=512, ... return_tensors="pt", ... ) ``` </pt> <tf> ```py >>> tf_batch = tokenizer( ... ["We are very happy to show you the ๐Ÿค— Transformers library.", "We hope you don't hate it."], ... padding=True, ... truncation=True, ... max_length=512, ... return_tensors="tf", ... ) ``` </tf> </frameworkcontent> Lee el tutorial de [preprocessing](./preprocessing) para mรกs detalles acerca de la tokenizaciรณn. ### AutoModel <frameworkcontent> <pt> ๐Ÿค— Transformers provee una forma simple y unificada de cargar tus instancias preentrenadas. Esto significa que puedes cargar un [`AutoModel`] como cargarรญas un [`AutoTokenizer`]. La รบnica diferencia es seleccionar el [`AutoModel`] correcto para la tarea. Ya que estรกs clasificando texto, o secuencias, carga [`AutoModelForSequenceClassification`]: ```py >>> from transformers import AutoModelForSequenceClassification >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" >>> pt_model = AutoModelForSequenceClassification.from_pretrained(model_name) ``` <Tip> Ve el [task summary](./task_summary) para revisar quรฉ clase del [`AutoModel`] deberรญas usar para cada tarea. </Tip> Ahora puedes pasar tu lote (batch) preprocesado de inputs directamente al modelo. Solo tienes que desempacar el diccionario aรฑadiendo `**`: ```py >>> pt_outputs = pt_model(**pt_batch) ``` El modelo producirรก las activaciones finales en el atributo `logits`. Aplica la funciรณn softmax a `logits` para obtener las probabilidades: ```py >>> from torch import nn >>> pt_predictions = nn.functional.softmax(pt_outputs.logits, dim=-1) >>> print(pt_predictions) tensor([[0.0021, 0.0018, 0.0115, 0.2121, 0.7725], [0.2084, 0.1826, 0.1969, 0.1755, 0.2365]], grad_fn=<SoftmaxBackward0>) ``` </pt> <tf> ๐Ÿค— Transformers provee una forma simple y unificada de cargar tus instancias preentrenadas. Esto significa que puedes cargar un [`TFAutoModel`] como cargarรญas un [`AutoTokenizer`]. La รบnica diferencia es seleccionar el [`TFAutoModel`] correcto para la tarea. 
</pt>
<tf>
๐Ÿค— Transformers provides a simple and unified way to load pretrained instances. This means you can load a [`TFAutoModel`] like you would load an [`AutoTokenizer`]. The only difference is selecting the correct [`TFAutoModel`] for the task. Since you're doing text (or sequence) classification, load [`TFAutoModelForSequenceClassification`]:

```py
>>> from transformers import TFAutoModelForSequenceClassification

>>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
>>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(model_name)
```

<Tip>

See the [task summary](./task_summary) to check which [`AutoModel`] class you should use for each task.

</Tip>

Now you can pass your preprocessed batch of inputs directly to the model. TensorFlow models accept the tokenizer's output dictionary as-is:

```py
>>> tf_outputs = tf_model(tf_batch)
```

The model outputs the final activations in the `logits` attribute. Apply the softmax function to `logits` to retrieve the probabilities:

```py
>>> import tensorflow as tf

>>> tf_predictions = tf.nn.softmax(tf_outputs.logits, axis=-1)
>>> print(tf.math.round(tf_predictions * 10**4) / 10**4)
tf.Tensor(
[[0.0021 0.0018 0.0116 0.2121 0.7725]
 [0.2084 0.1826 0.1969 0.1755 0.2365]], shape=(2, 5), dtype=float32)
```
</tf>
</frameworkcontent>

<Tip>

All ๐Ÿค— Transformers models (PyTorch or TensorFlow) output the tensors *before* the final activation function (like softmax) because the final activation function is often fused with the loss.

</Tip>

Models are standard [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) or [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) classes, so you can use them in your usual training loop. However, to make things easier, ๐Ÿค— Transformers provides a [`Trainer`] class for PyTorch that adds functionality for distributed training, mixed precision, and more. For TensorFlow, you can use the `fit` method from [Keras](https://keras.io/). Refer to the [training tutorial](./training) for more details.

<Tip>

๐Ÿค— Transformers model outputs are special dataclasses, so their attributes are autocompleted in an IDE. The model outputs also behave like tuples or dictionaries (e.g., you can index with an integer, a slice, or a string), in which case attributes that are `None` are ignored.

</Tip>

### Save a model

<frameworkcontent>
<pt>
Once your model is fine-tuned, you can save it with its tokenizer using [`PreTrainedModel.save_pretrained`]:

```py
>>> pt_save_directory = "./pt_save_pretrained"
>>> tokenizer.save_pretrained(pt_save_directory)  # doctest: +IGNORE_RESULT
>>> pt_model.save_pretrained(pt_save_directory)
```

When you want to use the model again, reload it with [`PreTrainedModel.from_pretrained`]:

```py
>>> pt_model = AutoModelForSequenceClassification.from_pretrained("./pt_save_pretrained")
```
</pt>
<tf>
Once your model is fine-tuned, you can save it with its tokenizer using [`TFPreTrainedModel.save_pretrained`]:

```py
>>> tf_save_directory = "./tf_save_pretrained"
>>> tokenizer.save_pretrained(tf_save_directory)  # doctest: +IGNORE_RESULT
>>> tf_model.save_pretrained(tf_save_directory)
```

When you want to use the model again, reload it with [`TFPreTrainedModel.from_pretrained`]:

```py
>>> tf_model = TFAutoModelForSequenceClassification.from_pretrained("./tf_save_pretrained")
```
</tf>
</frameworkcontent>
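Because the tokenizer was saved to the same directory, you can also reload everything at once and plug it back into a [`pipeline`]. A minimal sketch, using the PyTorch directory saved above (swap in the TensorFlow classes and directory for the TF case):

```py
>>> from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

>>> tokenizer = AutoTokenizer.from_pretrained("./pt_save_pretrained")
>>> model = AutoModelForSequenceClassification.from_pretrained("./pt_save_pretrained")
>>> classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
>>> predictions = classifier("Nous sommes trรจs heureux de vous prรฉsenter la bibliothรจque ๐Ÿค— Transformers.")
```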
One particularly cool ๐Ÿค— Transformers feature is the ability to save a model and reload it as either a PyTorch or TensorFlow model. The `from_pt` or `from_tf` parameter converts the model from one framework to the other:

<frameworkcontent>
<pt>

```py
>>> from transformers import AutoModelForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained(tf_save_directory)
>>> pt_model = AutoModelForSequenceClassification.from_pretrained(tf_save_directory, from_tf=True)
```
</pt>
<tf>

```py
>>> from transformers import TFAutoModelForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained(pt_save_directory)
>>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(pt_save_directory, from_pt=True)
```
</tf>
</frameworkcontent>
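Note that loading with `from_tf` or `from_pt` converts the weights on the fly each time you call `from_pretrained`. If you plan to keep using the converted checkpoint, one option is to save it again in the new framework's native format so future loads no longer need the conversion flag. A small sketch (PyTorch shown; the directory name is only illustrative):

```py
>>> pt_model.save_pretrained("./pt_converted_from_tf")  # writes native PyTorch weights
>>> tokenizer.save_pretrained("./pt_converted_from_tf")  # doctest: +IGNORE_RESULT
```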