<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# RoCBert

## Overview

The RoCBert model was proposed in [RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining](https://aclanthology.org/2022.acl-long.65.pdf) by Hui Su, Weiwei Shi, Xiaoyu Shen, Xiao Zhou, Tuo Ji, Jiarui Fang, Jie Zhou.
It's a pretrained Chinese language model that is robust under various forms of adversarial attacks.

The abstract from the paper is the following:

*Large-scale pretrained language models have achieved SOTA results on NLP tasks. However, they have been shown vulnerable to adversarial attacks especially for logographic languages like Chinese. In this work, we propose ROCBERT: a pretrained Chinese Bert that is robust to various forms of adversarial attacks like word perturbation, synonyms, typos, etc. It is pretrained with the contrastive learning objective which maximizes the label consistency under different synthesized adversarial examples. The model takes as input multimodal information including the semantic, phonetic and visual features. We show all these features are important to the model robustness since the attack can be performed in all the three forms. Across 5 Chinese NLU tasks, ROCBERT outperforms strong baselines under three blackbox adversarial algorithms without sacrificing the performance on clean testset. It also performs the best in the toxic content detection task under human-made attacks.*

This model was contributed by [weiweishi](https://huggingface.co/weiweishi).

## Resources

- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
- [Causal language modeling task guide](../tasks/language_modeling)
- [Masked language modeling task guide](../tasks/masked_language_modeling)
- [Multiple choice task guide](../tasks/multiple_choice)

## RoCBertConfig

[[autodoc]] RoCBertConfig
    - all

## RoCBertTokenizer

[[autodoc]] RoCBertTokenizer
    - build_inputs_with_special_tokens
    - get_special_tokens_mask
    - create_token_type_ids_from_sequences
    - save_vocabulary

## RoCBertModel

[[autodoc]] RoCBertModel
    - forward

## RoCBertForPreTraining

[[autodoc]] RoCBertForPreTraining
    - forward

## RoCBertForCausalLM

[[autodoc]] RoCBertForCausalLM
    - forward

## RoCBertForMaskedLM

[[autodoc]] RoCBertForMaskedLM
    - forward

## RoCBertForSequenceClassification

[[autodoc]] transformers.RoCBertForSequenceClassification
    - forward

## RoCBertForMultipleChoice

[[autodoc]] transformers.RoCBertForMultipleChoice
    - forward

## RoCBertForTokenClassification

[[autodoc]] transformers.RoCBertForTokenClassification
    - forward

## RoCBertForQuestionAnswering

[[autodoc]] RoCBertForQuestionAnswering
    - forward
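As a quick end-of-page reference, here is a minimal sketch of masked-token prediction with RoCBert, which follows the standard BERT-style fill-mask interface. The `weiweishi/roc-bert-base-zh` checkpoint name is an assumption; substitute whichever RoCBert checkpoint you actually use.

```python
import torch
from transformers import AutoTokenizer, RoCBertForMaskedLM

# checkpoint name is an assumption -- replace with the RoCBert checkpoint you use
checkpoint = "weiweishi/roc-bert-base-zh"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = RoCBertForMaskedLM.from_pretrained(checkpoint)

# predict the masked character in a Chinese sentence
inputs = tokenizer("这家餐厅的菜非常好[MASK]。", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# find the masked position(s) and decode the highest-scoring token(s)
mask_positions = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_ids = logits[0, mask_positions].argmax(dim=-1)
print(tokenizer.decode(predicted_ids))
```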
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

-->

# DINOv2

## Overview

The DINOv2 model was proposed in [DINOv2: Learning Robust Visual Features without Supervision](https://arxiv.org/abs/2304.07193) by
Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, Mahmoud Assran, Nicolas Ballas, Wojciech Galuba, Russell Howes, Po-Yao Huang, Shang-Wen Li, Ishan Misra, Michael Rabbat, Vasu Sharma, Gabriel Synnaeve, Hu Xu, Hervé Jegou, Julien Mairal, Patrick Labatut, Armand Joulin, Piotr Bojanowski.
DINOv2 is an upgrade of [DINO](https://arxiv.org/abs/2104.14294), a self-supervised method applied on [Vision Transformers](vit). This method enables all-purpose visual features, i.e., features that work across image distributions and tasks without finetuning.

The abstract from the paper is the following:

*The recent breakthroughs in natural language processing for model pretraining on large quantities of data have opened the way for similar foundation models in computer vision. These models could greatly simplify the use of images in any system by producing all-purpose visual features, i.e., features that work across image distributions and tasks without finetuning. This work shows that existing pretraining methods, especially self-supervised methods, can produce such features if trained on enough curated data from diverse sources. We revisit existing approaches and combine different techniques to scale our pretraining in terms of data and model size. Most of the technical contributions aim at accelerating and stabilizing the training at scale. In terms of data, we propose an automatic pipeline to build a dedicated, diverse, and curated image dataset instead of uncurated data, as typically done in the self-supervised literature. In terms of models, we train a ViT model (Dosovitskiy et al., 2020) with 1B parameters and distill it into a series of smaller models that surpass the best available all-purpose features, OpenCLIP (Ilharco et al., 2021) on most of the benchmarks at image and pixel levels.*

This model was contributed by [nielsr](https://huggingface.co/nielsr).
The original code can be found [here](https://github.com/facebookresearch/dinov2).

## Usage tips

The model can be traced using `torch.jit.trace`, which leverages JIT compilation to optimize the model, making it faster to run. Note this still produces some mismatched elements and the difference between the original model and the traced model is of the order of 1e-4.

```python
import torch
from transformers import AutoImageProcessor, AutoModel
from PIL import Image
import requests

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained('facebook/dinov2-base')
model = AutoModel.from_pretrained('facebook/dinov2-base')

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs[0]

# We have to force return_dict=False for tracing
model.config.return_dict = False

with torch.no_grad():
    traced_model = torch.jit.trace(model, [inputs.pixel_values])
    traced_outputs = traced_model(inputs.pixel_values)

print((last_hidden_states - traced_outputs[0]).abs().max())
```

## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DINOv2.

- Demo notebooks for DINOv2 can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/DINOv2). 🌎

<PipelineTag pipeline="image-classification"/>

- [`Dinov2ForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
- See also: [Image classification task guide](../tasks/image_classification)

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

## Dinov2Config

[[autodoc]] Dinov2Config

## Dinov2Model

[[autodoc]] Dinov2Model
    - forward

## Dinov2ForImageClassification

[[autodoc]] Dinov2ForImageClassification
    - forward
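Since DINOv2 is primarily used as a general-purpose feature extractor, the following end-of-page sketch shows how one might obtain a global image embedding with the backbone alone; it only relies on the `facebook/dinov2-base` checkpoint used above, and the choice of `pooler_output` as the embedding is a convention rather than a requirement.

```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("facebook/dinov2-base")
model = AutoModel.from_pretrained("facebook/dinov2-base")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# pooler_output is a global image embedding; per-patch features live in last_hidden_state[:, 1:]
embedding = outputs.pooler_output
print(embedding.shape)  # (1, hidden_size)
```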
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

:warning: Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# MusicGen Melody

## Overview

The MusicGen Melody model was proposed in [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) by Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi and Alexandre Défossez.

MusicGen Melody is a single stage auto-regressive Transformer model capable of generating high-quality music samples conditioned on text descriptions or audio prompts. The text descriptions are passed through a frozen text encoder model to obtain a sequence of hidden-state representations. MusicGen is then trained to predict discrete audio tokens, or *audio codes*, conditioned on these hidden-states. These audio tokens are then decoded using an audio compression model, such as EnCodec, to recover the audio waveform.

Through an efficient token interleaving pattern, MusicGen does not require a self-supervised semantic representation of the text/audio prompts, thus eliminating the need to cascade multiple models to predict a set of codebooks (e.g. hierarchically or upsampling). Instead, it is able to generate all the codebooks in a single forward pass.

The abstract from the paper is the following:

*We tackle the task of conditional music generation. We introduce MusicGen, a single Language Model (LM) that operates over several streams of compressed discrete music representation, i.e., tokens. Unlike prior work, MusicGen is comprised of a single-stage transformer LM together with efficient token interleaving patterns, which eliminates the need for cascading several models, e.g., hierarchically or upsampling. Following this approach, we demonstrate how MusicGen can generate high-quality samples, while being conditioned on textual description or melodic features, allowing better controls over the generated output. We conduct extensive empirical evaluation, considering both automatic and human studies, showing the proposed approach is superior to the evaluated baselines on a standard text-to-music benchmark. Through ablation studies, we shed light over the importance of each of the components comprising MusicGen.*

This model was contributed by [ylacombe](https://huggingface.co/ylacombe). The original code can be found [here](https://github.com/facebookresearch/audiocraft). The pre-trained checkpoints can be found on the [Hugging Face Hub](https://huggingface.co/models?sort=downloads&search=facebook%2Fmusicgen).

## Difference with [MusicGen](https://huggingface.co/docs/transformers/main/en/model_doc/musicgen)

There are two key differences with MusicGen:

1. The audio prompt is used here as a conditional signal for the generated audio sample, whereas it's used for audio continuation in [MusicGen](https://huggingface.co/docs/transformers/main/en/model_doc/musicgen).
2. Conditional text and audio signals are concatenated to the decoder's hidden states instead of being used as a cross-attention signal, as in MusicGen.

## Generation

MusicGen Melody is compatible with two generation modes: greedy and sampling. In practice, sampling leads to significantly better results than greedy, thus we encourage sampling mode to be used where possible. Sampling is enabled by default, and can be explicitly specified by setting `do_sample=True` in the call to [`MusicgenMelodyForConditionalGeneration.generate`], or by overriding the model's generation config (see below).

Transformers supports both mono (1-channel) and stereo (2-channel) variants of MusicGen Melody. The mono channel versions generate a single set of codebooks. The stereo versions generate 2 sets of codebooks, 1 for each channel (left/right), and each set of codebooks is decoded independently through the audio compression model. The audio streams for each channel are combined to give the final stereo output.

### Audio Conditional Generation

The model can generate an audio sample conditioned on a text and an audio prompt through use of the [`MusicgenMelodyProcessor`] to pre-process the inputs.

In the following examples, we load an audio file using the 🤗 Datasets library, which can be pip installed through the command below:

```
pip install --upgrade pip
pip install datasets[audio]
```

The audio file we are about to use is loaded as follows:

```python
>>> from datasets import load_dataset

>>> dataset = load_dataset("sanchit-gandhi/gtzan", split="train", streaming=True)
>>> sample = next(iter(dataset))["audio"]
```

The audio prompt should ideally be free of the low-frequency signals usually produced by instruments such as drums and bass. The [Demucs](https://github.com/adefossez/demucs/tree/main) model can be used to separate vocals and other signals from the drums and bass components.

If you wish to use Demucs, you first need to follow the installation steps [here](https://github.com/adefossez/demucs/tree/main?tab=readme-ov-file#for-musicians) before using the following snippet:

```python
from demucs import pretrained
from demucs.apply import apply_model
from demucs.audio import convert_audio
import torch

wav = torch.tensor(sample["array"]).to(torch.float32)

demucs = pretrained.get_model('htdemucs')
wav = convert_audio(wav[None], sample["sampling_rate"], demucs.samplerate, demucs.audio_channels)
wav = apply_model(demucs, wav[None])
```

You can then use the following snippet to generate music:

```python
>>> from transformers import AutoProcessor, MusicgenMelodyForConditionalGeneration

>>> processor = AutoProcessor.from_pretrained("facebook/musicgen-melody")
>>> model = MusicgenMelodyForConditionalGeneration.from_pretrained("facebook/musicgen-melody")

>>> inputs = processor(
...     audio=wav,
...     sampling_rate=demucs.samplerate,
...     text=["80s blues track with groovy saxophone"],
...     padding=True,
...     return_tensors="pt",
... )
>>> audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256)
```

You can also pass the audio signal directly without using Demucs, although the quality of the generation will probably be degraded:

```python
>>> from transformers import AutoProcessor, MusicgenMelodyForConditionalGeneration

>>> processor = AutoProcessor.from_pretrained("facebook/musicgen-melody")
>>> model = MusicgenMelodyForConditionalGeneration.from_pretrained("facebook/musicgen-melody")

>>> inputs = processor(
...     audio=sample["array"],
...     sampling_rate=sample["sampling_rate"],
...     text=["80s blues track with groovy saxophone"],
...     padding=True,
...     return_tensors="pt",
... )
>>> audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256)
```

The audio outputs are a three-dimensional Torch tensor of shape `(batch_size, num_channels, sequence_length)`. To listen to the generated audio samples, you can either play them in an ipynb notebook:

```python
from IPython.display import Audio

sampling_rate = model.config.audio_encoder.sampling_rate
Audio(audio_values[0].numpy(), rate=sampling_rate)
```

Or save them as a `.wav` file using a third-party library, e.g. `soundfile`:

```python
>>> import soundfile as sf

>>> sampling_rate = model.config.audio_encoder.sampling_rate
>>> sf.write("musicgen_out.wav", audio_values[0].T.numpy(), sampling_rate)
```

### Text-only Conditional Generation

The same [`MusicgenMelodyProcessor`] can be used to pre-process a text-only prompt.

```python
>>> from transformers import AutoProcessor, MusicgenMelodyForConditionalGeneration

>>> processor = AutoProcessor.from_pretrained("facebook/musicgen-melody")
>>> model = MusicgenMelodyForConditionalGeneration.from_pretrained("facebook/musicgen-melody")

>>> inputs = processor(
...     text=["80s pop track with bassy drums and synth", "90s rock song with loud guitars and heavy drums"],
...     padding=True,
...     return_tensors="pt",
... )
>>> audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256)
```

The `guidance_scale` is used in classifier free guidance (CFG), setting the weighting between the conditional logits (which are predicted from the text prompts) and the unconditional logits (which are predicted from an unconditional or 'null' prompt). A higher guidance scale encourages the model to generate samples that are more closely linked to the input prompt, usually at the expense of poorer audio quality. CFG is enabled by setting `guidance_scale > 1`. For best results, use `guidance_scale=3` (default).

You can also generate in batch:

```python
>>> from transformers import AutoProcessor, MusicgenMelodyForConditionalGeneration
>>> from datasets import load_dataset

>>> processor = AutoProcessor.from_pretrained("facebook/musicgen-melody")
>>> model = MusicgenMelodyForConditionalGeneration.from_pretrained("facebook/musicgen-melody")

>>> # take the first quarter of the audio sample
>>> sample_1 = sample["array"][: len(sample["array"]) // 4]

>>> # take the first half of the audio sample
>>> sample_2 = sample["array"][: len(sample["array"]) // 2]

>>> inputs = processor(
...     audio=[sample_1, sample_2],
...     sampling_rate=sample["sampling_rate"],
...     text=["80s blues track with groovy saxophone", "90s rock song with loud guitars and heavy drums"],
...     padding=True,
...     return_tensors="pt",
... )
>>> audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256)
```

### Unconditional Generation

The inputs for unconditional (or 'null') generation can be obtained through the method [`MusicgenMelodyProcessor.get_unconditional_inputs`]:

```python
>>> from transformers import MusicgenMelodyForConditionalGeneration, MusicgenMelodyProcessor

>>> model = MusicgenMelodyForConditionalGeneration.from_pretrained("facebook/musicgen-melody")
>>> unconditional_inputs = MusicgenMelodyProcessor.from_pretrained("facebook/musicgen-melody").get_unconditional_inputs(num_samples=1)

>>> audio_values = model.generate(**unconditional_inputs, do_sample=True, max_new_tokens=256)
```

### Generation Configuration

The default parameters that control the generation process, such as sampling, guidance scale and number of generated tokens, can be found in the model's generation config, and updated as desired:

```python
>>> from transformers import MusicgenMelodyForConditionalGeneration

>>> model = MusicgenMelodyForConditionalGeneration.from_pretrained("facebook/musicgen-melody")

>>> # inspect the default generation config
>>> model.generation_config

>>> # increase the guidance scale to 4.0
>>> model.generation_config.guidance_scale = 4.0

>>> # decrease the max length to 256 tokens
>>> model.generation_config.max_length = 256
```

Note that any arguments passed to the generate method will **supersede** those in the generation config, so setting `do_sample=False` in the call to generate will supersede the setting of `model.generation_config.do_sample` in the generation config.

## Model Structure

The MusicGen model can be decomposed into three distinct stages:

1. Text encoder: maps the text inputs to a sequence of hidden-state representations. The pre-trained MusicGen models use a frozen text encoder from either T5 or Flan-T5.
2. MusicGen Melody decoder: a language model (LM) that auto-regressively generates audio tokens (or codes) conditional on the encoder hidden-state representations.
3. Audio decoder: used to recover the audio waveform from the audio tokens predicted by the decoder.

Thus, the MusicGen model can either be used as a standalone decoder model, corresponding to the class [`MusicgenMelodyForCausalLM`], or as a composite model that includes the text encoder and audio encoder, corresponding to the class [`MusicgenMelodyForConditionalGeneration`]. If only the decoder needs to be loaded from the pre-trained checkpoint, it can be loaded by first specifying the correct config, or be accessed through the `.decoder` attribute of the composite model:

```python
>>> from transformers import AutoConfig, MusicgenMelodyForCausalLM, MusicgenMelodyForConditionalGeneration

>>> # Option 1: get decoder config and pass to `.from_pretrained`
>>> decoder_config = AutoConfig.from_pretrained("facebook/musicgen-melody").decoder
>>> decoder = MusicgenMelodyForCausalLM.from_pretrained("facebook/musicgen-melody", **decoder_config.to_dict())

>>> # Option 2: load the entire composite model, but only return the decoder
>>> decoder = MusicgenMelodyForConditionalGeneration.from_pretrained("facebook/musicgen-melody").decoder
```

Since the text encoder and audio encoder models are frozen during training, the MusicGen decoder [`MusicgenMelodyForCausalLM`] can be trained standalone on a dataset of encoder hidden-states and audio codes. For inference, the trained decoder can be combined with the frozen text encoder and audio encoder to recover the composite [`MusicgenMelodyForConditionalGeneration`] model.
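To illustrate that last step, here is a minimal sketch (not part of the official API) of swapping a standalone, fine-tuned decoder back into the composite model; the local path `./my-finetuned-decoder` is hypothetical.

```python
from transformers import MusicgenMelodyForCausalLM, MusicgenMelodyForConditionalGeneration

# load the full composite model (frozen text encoder + audio encoder + decoder)
composite = MusicgenMelodyForConditionalGeneration.from_pretrained("facebook/musicgen-melody")

# load a standalone (e.g. fine-tuned) decoder -- "./my-finetuned-decoder" is a hypothetical local path
decoder = MusicgenMelodyForCausalLM.from_pretrained("./my-finetuned-decoder")

# copy the trained decoder weights into the composite model before running generation
composite.decoder.load_state_dict(decoder.state_dict())
```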
## Checkpoint Conversion

After downloading the original checkpoints from [here](https://github.com/facebookresearch/audiocraft/blob/main/docs/MUSICGEN.md#importing--exporting-models), you can convert them using the **conversion script** available at `src/transformers/models/musicgen_melody/convert_musicgen_melody_transformers.py` with the following command:

```bash
python src/transformers/models/musicgen_melody/convert_musicgen_melody_transformers.py \
    --checkpoint="facebook/musicgen-melody" --pytorch_dump_folder /output/path
```

Tips:

* MusicGen is trained on the 32kHz checkpoint of Encodec. You should ensure you use a compatible version of the Encodec model.
* Sampling mode tends to deliver better results than greedy - you can toggle sampling with the variable `do_sample` in the call to [`MusicgenMelodyForConditionalGeneration.generate`].

## MusicgenMelodyDecoderConfig

[[autodoc]] MusicgenMelodyDecoderConfig

## MusicgenMelodyProcessor

[[autodoc]] MusicgenMelodyProcessor
    - get_unconditional_inputs

## MusicgenMelodyFeatureExtractor

[[autodoc]] MusicgenMelodyFeatureExtractor
    - _extract_stem_indices

## MusicgenMelodyConfig

[[autodoc]] MusicgenMelodyConfig

## MusicgenMelodyModel

[[autodoc]] MusicgenMelodyModel
    - forward

## MusicgenMelodyForCausalLM

[[autodoc]] MusicgenMelodyForCausalLM
    - forward

## MusicgenMelodyForConditionalGeneration

[[autodoc]] MusicgenMelodyForConditionalGeneration
    - forward
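As an end-of-page footnote to the EnCodec tip above: the clip length is determined by the number of generated audio tokens. The sketch below assumes the config exposes the audio encoder's `frame_rate` (roughly 50 tokens per second for the 32 kHz model) and shows one way to pick `max_new_tokens` for a target duration.

```python
from transformers import MusicgenMelodyForConditionalGeneration

model = MusicgenMelodyForConditionalGeneration.from_pretrained("facebook/musicgen-melody")

# the audio encoder produces a fixed number of tokens per second of audio (frame_rate)
frame_rate = model.config.audio_encoder.frame_rate  # ~50 for the 32 kHz EnCodec checkpoint (assumption)
target_seconds = 10
max_new_tokens = int(target_seconds * frame_rate)

print(max_new_tokens)  # pass this value to model.generate(..., max_new_tokens=max_new_tokens)
```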
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# CLIPSeg

## Overview

The CLIPSeg model was proposed in [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) by Timo Lüddecke
and Alexander Ecker. CLIPSeg adds a minimal decoder on top of a frozen [CLIP](clip) model for zero- and one-shot image segmentation.

The abstract from the paper is the following:

*Image segmentation is usually addressed by training a model for a fixed set of object classes. Incorporating additional classes or more complex queries later is expensive as it requires re-training the model on a dataset that encompasses these expressions. Here we propose a system that can generate image segmentations based on arbitrary prompts at test time. A prompt can be either a text or an image. This approach enables us to create a unified model (trained once) for three common segmentation tasks, which come with distinct challenges: referring expression segmentation, zero-shot segmentation and one-shot segmentation. We build upon the CLIP model as a backbone which we extend with a transformer-based decoder that enables dense prediction. After training on an extended version of the PhraseCut dataset, our system generates a binary segmentation map for an image based on a free-text prompt or on an additional image expressing the query. We analyze different variants of the latter image-based prompts in detail. This novel hybrid input allows for dynamic adaptation not only to the three segmentation tasks mentioned above, but to any binary segmentation task where a text or image query can be formulated. Finally, we find our system to adapt well to generalized queries involving affordances or properties.*

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/clipseg_architecture.png"
alt="drawing" width="600"/>

<small> CLIPSeg overview. Taken from the <a href="https://arxiv.org/abs/2112.10003">original paper.</a> </small>

This model was contributed by [nielsr](https://huggingface.co/nielsr).
The original code can be found [here](https://github.com/timojl/clipseg).

## Usage tips

- [`CLIPSegForImageSegmentation`] adds a decoder on top of [`CLIPSegModel`]. The latter is identical to [`CLIPModel`].
- [`CLIPSegForImageSegmentation`] can generate image segmentations based on arbitrary prompts at test time. A prompt can be either a text
(provided to the model as `input_ids`) or an image (provided to the model as `conditional_pixel_values`). One can also provide custom
conditional embeddings (provided to the model as `conditional_embeddings`).

## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with CLIPSeg. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

<PipelineTag pipeline="image-segmentation"/>

- A notebook that illustrates [zero-shot image segmentation with CLIPSeg](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/CLIPSeg/Zero_shot_image_segmentation_with_CLIPSeg.ipynb).

## CLIPSegConfig

[[autodoc]] CLIPSegConfig
    - from_text_vision_configs

## CLIPSegTextConfig

[[autodoc]] CLIPSegTextConfig

## CLIPSegVisionConfig

[[autodoc]] CLIPSegVisionConfig

## CLIPSegProcessor

[[autodoc]] CLIPSegProcessor

## CLIPSegModel

[[autodoc]] CLIPSegModel
    - forward
    - get_text_features
    - get_image_features

## CLIPSegTextModel

[[autodoc]] CLIPSegTextModel
    - forward

## CLIPSegVisionModel

[[autodoc]] CLIPSegVisionModel
    - forward

## CLIPSegForImageSegmentation

[[autodoc]] CLIPSegForImageSegmentation
    - forward
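As a quick end-of-page complement to the notebook linked above, here is a minimal sketch of zero-shot segmentation with text prompts. The `CIDAS/clipseg-rd64-refined` checkpoint name is an assumption; substitute the CLIPSeg checkpoint you use.

```python
import torch
import requests
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

# checkpoint name is an assumption -- any CLIPSeg checkpoint on the Hub should work
processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
prompts = ["a cat", "a remote control"]

# one text prompt per copy of the image
inputs = processor(text=prompts, images=[image] * len(prompts), padding=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# logits have shape (num_prompts, height, width); apply sigmoid for per-pixel probabilities
masks = outputs.logits.sigmoid()
print(masks.shape)
```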
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# TAPAS

## Overview

The TAPAS model was proposed in [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://www.aclweb.org/anthology/2020.acl-main.398)
by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos. It's a BERT-based model specifically
designed (and pre-trained) for answering questions about tabular data. Compared to BERT, TAPAS uses relative position embeddings and has 7
token types that encode tabular structure. TAPAS is pre-trained on the masked language modeling (MLM) objective on a large dataset comprising
millions of tables from English Wikipedia and corresponding texts.

For question answering, TAPAS has 2 heads on top: a cell selection head and an aggregation head, for (optionally) performing aggregations (such as counting or summing) among selected cells. TAPAS has been fine-tuned on several datasets:

- [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253) (Sequential Question Answering by Microsoft)
- [WTQ](https://github.com/ppasupat/WikiTableQuestions) (Wiki Table Questions by Stanford University)
- [WikiSQL](https://github.com/salesforce/WikiSQL) (by Salesforce).

It achieves state-of-the-art on both SQA and WTQ, while having comparable performance to SOTA on WikiSQL, with a much simpler architecture.

The abstract from the paper is the following:

*Answering natural language questions over tables is usually seen as a semantic parsing task. To alleviate the collection cost of full logical forms, one popular approach focuses on weak supervision consisting of denotations instead of logical forms. However, training semantic parsers from weak supervision poses difficulties, and in addition, the generated logical forms are only used as an intermediate step prior to retrieving the denotation. In this paper, we present TAPAS, an approach to question answering over tables without generating logical forms. TAPAS trains from weak supervision, and predicts the denotation by selecting table cells and optionally applying a corresponding aggregation operator to such selection. TAPAS extends BERT's architecture to encode tables as input, initializes from an effective joint pre-training of text segments and tables crawled from Wikipedia, and is trained end-to-end. We experiment with three different semantic parsing datasets, and find that TAPAS outperforms or rivals semantic parsing models by improving state-of-the-art accuracy on SQA from 55.1 to 67.2 and performing on par with the state-of-the-art on WIKISQL and WIKITQ, but with a simpler model architecture. We additionally find that transfer learning, which is trivial in our setting, from WIKISQL to WIKITQ, yields 48.7 accuracy, 4.2 points above the state-of-the-art.*

In addition, the authors have further pre-trained TAPAS to recognize **table entailment**, by creating a balanced dataset of millions of automatically created training examples which are learned in an intermediate step prior to fine-tuning. The authors of TAPAS call this further pre-training intermediate pre-training (since TAPAS is first pre-trained on MLM, and then on another dataset). They found that intermediate pre-training further improves performance on SQA, achieving a new state-of-the-art as well as state-of-the-art on [TabFact](https://github.com/wenhuchen/Table-Fact-Checking), a large-scale dataset with 16k Wikipedia tables for table entailment (a binary classification task). For more details, see their follow-up paper: [Understanding tables with intermediate pre-training](https://www.aclweb.org/anthology/2020.findings-emnlp.27/) by Julian Martin Eisenschlos, Syrine Krichene and Thomas Müller.

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/tapas_architecture.png"
alt="drawing" width="600"/>

<small> TAPAS architecture. Taken from the <a href="https://ai.googleblog.com/2020/04/using-neural-networks-to-find-answers.html">original blog post</a>.</small>

This model was contributed by [nielsr](https://huggingface.co/nielsr). The Tensorflow version of this model was contributed by [kamalkraj](https://huggingface.co/kamalkraj). The original code can be found [here](https://github.com/google-research/tapas).

## Usage tips

- TAPAS is a model that uses relative position embeddings by default (restarting the position embeddings at every cell of the table). Note that this is something that was added after the publication of the original TAPAS paper. According to the authors, this usually results in a slightly better performance, and allows you to encode longer sequences without running out of embeddings. This is reflected in the `reset_position_index_per_cell` parameter of [`TapasConfig`], which is set to `True` by default. The default versions of the models available on the [hub](https://huggingface.co/models?search=tapas) all use relative position embeddings. You can still use the ones with absolute position embeddings by passing in an additional argument `revision="no_reset"` when calling the `from_pretrained()` method. Note that it's usually advised to pad the inputs on the right rather than the left.
- TAPAS is based on BERT, so `TAPAS-base` for example corresponds to a `BERT-base` architecture. Of course, `TAPAS-large` will result in the best performance (the results reported in the paper are from `TAPAS-large`). Results of the various sized models are shown on the [original GitHub repository](https://github.com/google-research/tapas).
- TAPAS has checkpoints fine-tuned on SQA, which are capable of answering questions related to a table in a conversational set-up. This means that you can ask follow-up questions such as "what is his age?" related to the previous question. Note that the forward pass of TAPAS is a bit different in case of a conversational set-up: in that case, you have to feed every table-question pair one by one to the model, such that the `prev_labels` token type ids can be overwritten by the predicted `labels` of the model to the previous question. See the "Usage" section for more info.
- TAPAS is similar to BERT and therefore relies on the masked language modeling (MLM) objective. It is therefore efficient at predicting masked tokens and at NLU in general, but is not optimal for text generation. Models trained with a causal language modeling (CLM) objective are better in that regard. Note that TAPAS can be used as an encoder in the EncoderDecoderModel framework, to combine it with an autoregressive text decoder such as GPT-2.

## Usage: fine-tuning

Here we explain how you can fine-tune [`TapasForQuestionAnswering`] on your own dataset.

**STEP 1: Choose one of the 3 ways in which you can use TAPAS - or experiment**

Basically, there are 3 different ways in which one can fine-tune [`TapasForQuestionAnswering`], corresponding to the different datasets on which Tapas was fine-tuned:

1. SQA: if you're interested in asking follow-up questions related to a table, in a conversational set-up. For example if you first ask "what's the name of the first actor?" then you can ask a follow-up question such as "how old is he?". Here, questions do not involve any aggregation (all questions are cell selection questions).
2. WTQ: if you're not interested in asking questions in a conversational set-up, but rather just asking questions related to a table, which might involve aggregation, such as counting a number of rows, summing up cell values or averaging cell values. You can then for example ask "what's the total number of goals Cristiano Ronaldo made in his career?". This case is also called **weak supervision**, since the model itself must learn the appropriate aggregation operator (SUM/COUNT/AVERAGE/NONE) given only the answer to the question as supervision.
3. WikiSQL-supervised: this dataset is based on WikiSQL with the model being given the ground truth aggregation operator during training. This is also called **strong supervision**. Here, learning the appropriate aggregation operator is much easier.

To summarize:

| **Task**                            | **Example dataset** | **Description**                                                                                           |
|-------------------------------------|---------------------|-----------------------------------------------------------------------------------------------------------|
| Conversational                      | SQA                 | Conversational, only cell selection questions                                                               |
| Weak supervision for aggregation    | WTQ                 | Questions might involve aggregation, and the model must learn this given only the answer as supervision    |
| Strong supervision for aggregation  | WikiSQL-supervised  | Questions might involve aggregation, and the model must learn this given the gold aggregation operator     |

<frameworkcontent>
<pt>
Initializing a model with a pre-trained base and randomly initialized classification heads from the hub can be done as shown below.

```py
>>> from transformers import TapasConfig, TapasForQuestionAnswering

>>> # for example, the base sized model with default SQA configuration
>>> model = TapasForQuestionAnswering.from_pretrained("google/tapas-base")

>>> # or, the base sized model with WTQ configuration
>>> config = TapasConfig.from_pretrained("google/tapas-base-finetuned-wtq")
>>> model = TapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config)

>>> # or, the base sized model with WikiSQL configuration
>>> config = TapasConfig.from_pretrained("google/tapas-base-finetuned-wikisql-supervised")
>>> model = TapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config)
```

Of course, you don't necessarily have to follow one of these three ways in which TAPAS was fine-tuned. You can also experiment by defining any hyperparameters you want when initializing [`TapasConfig`], and then create a [`TapasForQuestionAnswering`] based on that configuration. For example, if you have a dataset that has both conversational questions and questions that might involve aggregation, then you can do it this way. Here's an example:

```py
>>> from transformers import TapasConfig, TapasForQuestionAnswering

>>> # you can initialize the classification heads any way you want (see docs of TapasConfig)
>>> config = TapasConfig(num_aggregation_labels=3, average_logits_per_cell=True)
>>> # initializing the pre-trained base sized model with our custom classification heads
>>> model = TapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config)
```
</pt>
<tf>
Initializing a model with a pre-trained base and randomly initialized classification heads from the hub can be done as shown below. Be sure to have installed the [tensorflow_probability](https://github.com/tensorflow/probability) dependency:

```py
>>> from transformers import TapasConfig, TFTapasForQuestionAnswering

>>> # for example, the base sized model with default SQA configuration
>>> model = TFTapasForQuestionAnswering.from_pretrained("google/tapas-base")

>>> # or, the base sized model with WTQ configuration
>>> config = TapasConfig.from_pretrained("google/tapas-base-finetuned-wtq")
>>> model = TFTapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config)

>>> # or, the base sized model with WikiSQL configuration
>>> config = TapasConfig.from_pretrained("google/tapas-base-finetuned-wikisql-supervised")
>>> model = TFTapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config)
```

Of course, you don't necessarily have to follow one of these three ways in which TAPAS was fine-tuned. You can also experiment by defining any hyperparameters you want when initializing [`TapasConfig`], and then create a [`TFTapasForQuestionAnswering`] based on that configuration. For example, if you have a dataset that has both conversational questions and questions that might involve aggregation, then you can do it this way. Here's an example:

```py
>>> from transformers import TapasConfig, TFTapasForQuestionAnswering

>>> # you can initialize the classification heads any way you want (see docs of TapasConfig)
>>> config = TapasConfig(num_aggregation_labels=3, average_logits_per_cell=True)
>>> # initializing the pre-trained base sized model with our custom classification heads
>>> model = TFTapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config)
```
</tf>
</frameworkcontent>

What you can also do is start from an already fine-tuned checkpoint. A note here is that the already fine-tuned checkpoint on WTQ has some issues due to the L2-loss, which is somewhat brittle. See [here](https://github.com/google-research/tapas/issues/91#issuecomment-735719340) for more info.

For a list of all pre-trained and fine-tuned TAPAS checkpoints available on HuggingFace's hub, see [here](https://huggingface.co/models?search=tapas).

**STEP 2: Prepare your data in the SQA format**

Second, no matter what you picked above, you should prepare your dataset in the [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253) format. This format is a TSV/CSV file with the following columns:

- `id`: optional, id of the table-question pair, for bookkeeping purposes.
- `annotator`: optional, id of the person who annotated the table-question pair, for bookkeeping purposes.
- `position`: integer indicating if the question is the first, second, third,... related to the table. Only required in case of conversational setup (SQA). You don't need this column in case you're going for WTQ/WikiSQL-supervised.
- `question`: string
- `table_file`: string, name of a csv file containing the tabular data
- `answer_coordinates`: list of one or more tuples (each tuple being a cell coordinate, i.e. row, column pair that is part of the answer)
- `answer_text`: list of one or more strings (each string being a cell value that is part of the answer)
- `aggregation_label`: index of the aggregation operator. Only required in case of strong supervision for aggregation (the WikiSQL-supervised case)
- `float_answer`: the float answer to the question, if there is one (np.nan if there isn't). Only required in case of weak supervision for aggregation (such as WTQ and WikiSQL)

The tables themselves should be present in a folder, each table being a separate csv file. Note that the authors of the TAPAS algorithm used conversion scripts with some automated logic to convert the other datasets (WTQ, WikiSQL) into the SQA format. The author explains this [here](https://github.com/google-research/tapas/issues/50#issuecomment-705465960). A conversion of this script that works with HuggingFace's implementation can be found [here](https://github.com/NielsRogge/tapas_utils). Interestingly, these conversion scripts are not perfect (the `answer_coordinates` and `float_answer` fields are populated based on the `answer_text`), meaning that WTQ and WikiSQL results could actually be improved.

**STEP 3: Convert your data into tensors using TapasTokenizer**

<frameworkcontent>
<pt>
Third, given that you've prepared your data in this TSV/CSV format (and corresponding CSV files containing the tabular data), you can then use [`TapasTokenizer`] to convert table-question pairs into `input_ids`, `attention_mask`, `token_type_ids` and so on. Again, based on which of the three cases you picked above, [`TapasForQuestionAnswering`] requires different inputs to be fine-tuned:

| **Task**                           | **Required inputs**                                                                                                   |
|------------------------------------|------------------------------------------------------------------------------------------------------------------------|
| Conversational                     | `input_ids`, `attention_mask`, `token_type_ids`, `labels`                                                               |
| Weak supervision for aggregation   | `input_ids`, `attention_mask`, `token_type_ids`, `labels`, `numeric_values`, `numeric_values_scale`, `float_answer`     |
| Strong supervision for aggregation | `input_ids`, `attention_mask`, `token_type_ids`, `labels`, `aggregation_labels`                                         |

[`TapasTokenizer`] creates the `labels`, `numeric_values` and `numeric_values_scale` based on the `answer_coordinates` and `answer_text` columns of the TSV file. The `float_answer` and `aggregation_labels` are already in the TSV file of step 2. Here's an example:

```py
>>> from transformers import TapasTokenizer
>>> import pandas as pd

>>> model_name = "google/tapas-base"
>>> tokenizer = TapasTokenizer.from_pretrained(model_name)

>>> data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]}
>>> queries = [
...     "What is the name of the first actor?",
...     "How many movies has George Clooney played in?",
...     "What is the total number of movies?",
... ]
>>> answer_coordinates = [[(0, 0)], [(2, 1)], [(0, 1), (1, 1), (2, 1)]]
>>> answer_text = [["Brad Pitt"], ["69"], ["209"]]
>>> table = pd.DataFrame.from_dict(data)
>>> inputs = tokenizer(
...     table=table,
...     queries=queries,
...     answer_coordinates=answer_coordinates,
...     answer_text=answer_text,
...     padding="max_length",
...     return_tensors="pt",
... )
>>> inputs
{'input_ids': tensor([[ ... ]]), 'attention_mask': tensor([[...]]), 'token_type_ids': tensor([[[...]]]),
'numeric_values': tensor([[ ... ]]), 'numeric_values_scale': tensor([[ ... ]]), 'labels': tensor([[ ... ]])}
```

Note that [`TapasTokenizer`] expects the data of the table to be **text-only**. You can use `.astype(str)` on a dataframe to turn it into text-only data.
Of course, this only shows how to encode a single training example. It is advised to create a dataloader to iterate over batches:

```py
>>> import torch
>>> import pandas as pd

>>> tsv_path = "your_path_to_the_tsv_file"
>>> table_csv_path = "your_path_to_a_directory_containing_all_csv_files"


>>> class TableDataset(torch.utils.data.Dataset):
...     def __init__(self, data, tokenizer):
...         self.data = data
...         self.tokenizer = tokenizer

...     def __getitem__(self, idx):
...         item = data.iloc[idx]
...         table = pd.read_csv(table_csv_path + item.table_file).astype(
...             str
...         )  # be sure to make your table data text only
...         encoding = self.tokenizer(
...             table=table,
...             queries=item.question,
...             answer_coordinates=item.answer_coordinates,
...             answer_text=item.answer_text,
...             truncation=True,
...             padding="max_length",
...             return_tensors="pt",
...         )
...         # remove the batch dimension which the tokenizer adds by default
...         encoding = {key: val.squeeze(0) for key, val in encoding.items()}
...         # add the float_answer which is also required (weak supervision for aggregation case)
...         encoding["float_answer"] = torch.tensor(item.float_answer)
...         return encoding

...     def __len__(self):
...         return len(self.data)


>>> data = pd.read_csv(tsv_path, sep="\t")
>>> train_dataset = TableDataset(data, tokenizer)
>>> train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=32)
```
</pt>
<tf>
Third, given that you've prepared your data in this TSV/CSV format (and corresponding CSV files containing the tabular data), you can then use [`TapasTokenizer`] to convert table-question pairs into `input_ids`, `attention_mask`, `token_type_ids` and so on. Again, based on which of the three cases you picked above, [`TFTapasForQuestionAnswering`] requires different inputs to be fine-tuned:

| **Task**                           | **Required inputs**                                                                                                   |
|------------------------------------|------------------------------------------------------------------------------------------------------------------------|
| Conversational                     | `input_ids`, `attention_mask`, `token_type_ids`, `labels`                                                               |
| Weak supervision for aggregation   | `input_ids`, `attention_mask`, `token_type_ids`, `labels`, `numeric_values`, `numeric_values_scale`, `float_answer`     |
| Strong supervision for aggregation | `input_ids`, `attention_mask`, `token_type_ids`, `labels`, `aggregation_labels`                                         |

[`TapasTokenizer`] creates the `labels`, `numeric_values` and `numeric_values_scale` based on the `answer_coordinates` and `answer_text` columns of the TSV file. The `float_answer` and `aggregation_labels` are already in the TSV file of step 2. Here's an example:

```py
>>> from transformers import TapasTokenizer
>>> import pandas as pd

>>> model_name = "google/tapas-base"
>>> tokenizer = TapasTokenizer.from_pretrained(model_name)

>>> data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]}
>>> queries = [
...     "What is the name of the first actor?",
...     "How many movies has George Clooney played in?",
...     "What is the total number of movies?",
... ]
>>> answer_coordinates = [[(0, 0)], [(2, 1)], [(0, 1), (1, 1), (2, 1)]]
>>> answer_text = [["Brad Pitt"], ["69"], ["209"]]
>>> table = pd.DataFrame.from_dict(data)
>>> inputs = tokenizer(
...     table=table,
...     queries=queries,
...     answer_coordinates=answer_coordinates,
...     answer_text=answer_text,
...     padding="max_length",
...     return_tensors="tf",
... )
>>> inputs
{'input_ids': tensor([[ ... ]]), 'attention_mask': tensor([[...]]), 'token_type_ids': tensor([[[...]]]),
'numeric_values': tensor([[ ... ]]), 'numeric_values_scale': tensor([[ ... ]]), 'labels': tensor([[ ... ]])}
```

Note that [`TapasTokenizer`] expects the data of the table to be **text-only**. You can use `.astype(str)` on a dataframe to turn it into text-only data.
Of course, this only shows how to encode a single training example. It is advised to create a dataloader to iterate over batches:

```py
>>> import tensorflow as tf
>>> import pandas as pd

>>> tsv_path = "your_path_to_the_tsv_file"
>>> table_csv_path = "your_path_to_a_directory_containing_all_csv_files"


>>> class TableDataset:
...     def __init__(self, data, tokenizer):
...         self.data = data
...         self.tokenizer = tokenizer

...     def __iter__(self):
...         for idx in range(self.__len__()):
...             item = self.data.iloc[idx]
...             table = pd.read_csv(table_csv_path + item.table_file).astype(
...                 str
...             )  # be sure to make your table data text only
...             encoding = self.tokenizer(
...                 table=table,
...                 queries=item.question,
...                 answer_coordinates=item.answer_coordinates,
...                 answer_text=item.answer_text,
...                 truncation=True,
...                 padding="max_length",
...                 return_tensors="tf",
...             )
...             # remove the batch dimension which the tokenizer adds by default
...             encoding = {key: tf.squeeze(val, 0) for key, val in encoding.items()}
...             # add the float_answer which is also required (weak supervision for aggregation case)
...             encoding["float_answer"] = tf.convert_to_tensor(item.float_answer, dtype=tf.float32)
...             yield encoding["input_ids"], encoding["attention_mask"], encoding["numeric_values"], encoding[
...                 "numeric_values_scale"
...             ], encoding["token_type_ids"], encoding["labels"], encoding["float_answer"]

...     def __len__(self):
...         return len(self.data)


>>> data = pd.read_csv(tsv_path, sep="\t")
>>> train_dataset = TableDataset(data, tokenizer)
>>> output_signature = (
...     tf.TensorSpec(shape=(512,), dtype=tf.int32),
...     tf.TensorSpec(shape=(512,), dtype=tf.int32),
...     tf.TensorSpec(shape=(512,), dtype=tf.float32),
...     tf.TensorSpec(shape=(512,), dtype=tf.float32),
...     tf.TensorSpec(shape=(512, 7), dtype=tf.int32),
...     tf.TensorSpec(shape=(512,), dtype=tf.int32),
...     tf.TensorSpec(shape=(512,), dtype=tf.float32),
... )
>>> train_dataloader = tf.data.Dataset.from_generator(train_dataset, output_signature=output_signature).batch(32)
```
</tf>
</frameworkcontent>

Note that here, we encode each table-question pair independently. This is fine as long as your dataset is **not conversational**. In case your dataset involves conversational questions (such as in SQA), then you should first group together the `queries`, `answer_coordinates` and `answer_text` per table (in the order of their `position` index) and batch encode each table with its questions. This will make sure that the `prev_labels` token types (see docs of [`TapasTokenizer`]) are set correctly. See [this notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Fine_tuning_TapasForQuestionAnswering_on_SQA.ipynb) for more info.
See [this notebook](https://github.com/kamalkraj/Tapas-Tutorial/blob/master/TAPAS/Fine_tuning_TapasForQuestionAnswering_on_SQA.ipynb) for more info regarding using the TensorFlow model.

**STEP 4: Train (fine-tune) the model**

<frameworkcontent>
<pt>
You can then fine-tune [`TapasForQuestionAnswering`] as follows (shown here for the weak supervision for aggregation case):

```py
>>> from transformers import TapasConfig, TapasForQuestionAnswering, AdamW

>>> # this is the default WTQ configuration
>>> config = TapasConfig(
...     num_aggregation_labels=4,
...     use_answer_as_supervision=True,
...     answer_loss_cutoff=0.664694,
...     cell_selection_preference=0.207951,
...     huber_loss_delta=0.121194,
...     init_cell_selection_weights_to_zero=True,
...     select_one_column=True,
...     allow_empty_column_selection=False,
...     temperature=0.0352513,
... )
>>> model = TapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config)

>>> optimizer = AdamW(model.parameters(), lr=5e-5)

>>> model.train()
>>> for epoch in range(2):  # loop over the dataset multiple times
...     for batch in train_dataloader:
...         # get the inputs
...         input_ids = batch["input_ids"]
...         attention_mask = batch["attention_mask"]
...         token_type_ids = batch["token_type_ids"]
...         labels = batch["labels"]
...         numeric_values = batch["numeric_values"]
...         numeric_values_scale = batch["numeric_values_scale"]
...         float_answer = batch["float_answer"]

...         # zero the parameter gradients
...         optimizer.zero_grad()

...         # forward + backward + optimize
...         outputs = model(
...             input_ids=input_ids,
...             attention_mask=attention_mask,
...             token_type_ids=token_type_ids,
...             labels=labels,
...             numeric_values=numeric_values,
...             numeric_values_scale=numeric_values_scale,
...             float_answer=float_answer,
...         )
...         loss = outputs.loss
...         loss.backward()
...         optimizer.step()
```
</pt>
<tf>
You can then fine-tune [`TFTapasForQuestionAnswering`] as follows (shown here for the weak supervision for aggregation case):

```py
>>> import tensorflow as tf
>>> from transformers import TapasConfig, TFTapasForQuestionAnswering

>>> # this is the default WTQ configuration
>>> config = TapasConfig(
...     num_aggregation_labels=4,
...     use_answer_as_supervision=True,
...     answer_loss_cutoff=0.664694,
...     cell_selection_preference=0.207951,
...     huber_loss_delta=0.121194,
...     init_cell_selection_weights_to_zero=True,
...     select_one_column=True,
...     allow_empty_column_selection=False,
...     temperature=0.0352513,
... )
>>> model = TFTapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config)

>>> optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)

>>> for epoch in range(2):  # loop over the dataset multiple times
...     for batch in train_dataloader:
...         # get the inputs (the generator yields them in the order of the output_signature above)
...         input_ids = batch[0]
...         attention_mask = batch[1]
...         numeric_values = batch[2]
...         numeric_values_scale = batch[3]
...         token_type_ids = batch[4]
...         labels = batch[5]
...         float_answer = batch[6]

...         # forward + backward + optimize
...         with tf.GradientTape() as tape:
...             outputs = model(
...                 input_ids=input_ids,
...                 attention_mask=attention_mask,
...                 token_type_ids=token_type_ids,
...                 labels=labels,
...                 numeric_values=numeric_values,
...                 numeric_values_scale=numeric_values_scale,
...                 float_answer=float_answer,
...             )
...         grads = tape.gradient(outputs.loss, model.trainable_weights)
...         optimizer.apply_gradients(zip(grads, model.trainable_weights))
```
</tf>
</frameworkcontent>

## Usage: inference

<frameworkcontent>
<pt>
Here we explain how you can use [`TapasForQuestionAnswering`] or [`TFTapasForQuestionAnswering`] for inference (i.e. making predictions on new data). For inference, only `input_ids`, `attention_mask` and `token_type_ids` (which you can obtain using [`TapasTokenizer`]) have to be provided to the model to obtain the logits. Next, you can use the handy [`~models.tapas.tokenization_tapas.convert_logits_to_predictions`] method to convert these into predicted coordinates and optional aggregation indices.

However, note that inference is **different** depending on whether or not the setup is conversational. In a non-conversational set-up, inference can be done in parallel on all table-question pairs of a batch. Here's an example of that:

```py
>>> from transformers import TapasTokenizer, TapasForQuestionAnswering
>>> import pandas as pd

>>> model_name = "google/tapas-base-finetuned-wtq"
>>> model = TapasForQuestionAnswering.from_pretrained(model_name)
>>> tokenizer = TapasTokenizer.from_pretrained(model_name)

>>> data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]}
>>> queries = [
...     "What is the name of the first actor?",
...     "How many movies has George Clooney played in?",
...     "What is the total number of movies?",
... ]
>>> table = pd.DataFrame.from_dict(data)
>>> inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="pt")
>>> outputs = model(**inputs)
>>> predicted_answer_coordinates, predicted_aggregation_indices = tokenizer.convert_logits_to_predictions(
...     inputs, outputs.logits.detach(), outputs.logits_aggregation.detach()
... )

>>> # let's print out the results:
>>> id2aggregation = {0: "NONE", 1: "SUM", 2: "AVERAGE", 3: "COUNT"}
>>> aggregation_predictions_string = [id2aggregation[x] for x in predicted_aggregation_indices]

>>> answers = []
>>> for coordinates in predicted_answer_coordinates:
...     if len(coordinates) == 1:
...         # only a single cell:
...         answers.append(table.iat[coordinates[0]])
...     else:
...         # multiple cells
...         cell_values = []
...         for coordinate in coordinates:
...             cell_values.append(table.iat[coordinate])
...         answers.append(", ".join(cell_values))

>>> display(table)
>>> print("")
>>> for query, answer, predicted_agg in zip(queries, answers, aggregation_predictions_string):
...     print(query)
...     if predicted_agg == "NONE":
...         print("Predicted answer: " + answer)
...     else:
...         print("Predicted answer: " + predicted_agg + " > " + answer)
What is the name of the first actor?
Predicted answer: Brad Pitt
How many movies has George Clooney played in?
Predicted answer: COUNT > 69
What is the total number of movies?
Predicted answer: SUM > 87, 53, 69
```
</pt>
<tf>
Here we explain how you can use [`TFTapasForQuestionAnswering`] for inference (i.e. making predictions on new data). For inference, only `input_ids`, `attention_mask` and `token_type_ids` (which you can obtain using [`TapasTokenizer`]) have to be provided to the model to obtain the logits. Next, you can use the handy [`~models.tapas.tokenization_tapas.convert_logits_to_predictions`] method to convert these into predicted coordinates and optional aggregation indices.

However, note that inference is **different** depending on whether or not the setup is conversational. In a non-conversational set-up, inference can be done in parallel on all table-question pairs of a batch.
Here's an example of that: ```py >>> from transformers import TapasTokenizer, TFTapasForQuestionAnswering >>> import pandas as pd >>> model_name = "google/tapas-base-finetuned-wtq" >>> model = TFTapasForQuestionAnswering.from_pretrained(model_name) >>> tokenizer = TapasTokenizer.from_pretrained(model_name) >>> data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]} >>> queries = [ ... "What is the name of the first actor?", ... "How many movies has George Clooney played in?", ... "What is the total number of movies?", ... ] >>> table = pd.DataFrame.from_dict(data) >>> inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="tf") >>> outputs = model(**inputs) >>> predicted_answer_coordinates, predicted_aggregation_indices = tokenizer.convert_logits_to_predictions( ... inputs, outputs.logits, outputs.logits_aggregation ... ) >>> # let's print out the results: >>> id2aggregation = {0: "NONE", 1: "SUM", 2: "AVERAGE", 3: "COUNT"} >>> aggregation_predictions_string = [id2aggregation[x] for x in predicted_aggregation_indices] >>> answers = [] >>> for coordinates in predicted_answer_coordinates: ... if len(coordinates) == 1: ... # only a single cell: ... answers.append(table.iat[coordinates[0]]) ... else: ... # multiple cells ... cell_values = [] ... for coordinate in coordinates: ... cell_values.append(table.iat[coordinate]) ... answers.append(", ".join(cell_values)) >>> display(table) >>> print("") >>> for query, answer, predicted_agg in zip(queries, answers, aggregation_predictions_string): ... print(query) ... if predicted_agg == "NONE": ... print("Predicted answer: " + answer) ... else: ... print("Predicted answer: " + predicted_agg + " > " + answer) What is the name of the first actor? Predicted answer: Brad Pitt How many movies has George Clooney played in? Predicted answer: COUNT > 69 What is the total number of movies? Predicted answer: SUM > 87, 53, 69 ``` </tf> </frameworkcontent> In case of a conversational set-up, then each table-question pair must be provided **sequentially** to the model, such that the `prev_labels` token types can be overwritten by the predicted `labels` of the previous table-question pair. Again, more info can be found in [this notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Fine_tuning_TapasForQuestionAnswering_on_SQA.ipynb) (for PyTorch) and [this notebook](https://github.com/kamalkraj/Tapas-Tutorial/blob/master/TAPAS/Fine_tuning_TapasForQuestionAnswering_on_SQA.ipynb) (for TensorFlow). 
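To make that sequential loop a bit more tangible, here is a rough PyTorch sketch of conversational inference. It is not taken from the notebooks above: it assumes an SQA-style checkpoint without an aggregation head (`google/tapas-base-finetuned-sqa`) and relies on the fact that `token_type_ids` stacks seven channels in the order segment, column, row, prev_labels, column ranks, inverse column ranks and numeric relations, with 1-based row/column indices for table tokens. Treat it as a starting point rather than a reference implementation:

```py
import pandas as pd
import torch
from transformers import TapasTokenizer, TapasForQuestionAnswering

model_name = "google/tapas-base-finetuned-sqa"
model = TapasForQuestionAnswering.from_pretrained(model_name)
tokenizer = TapasTokenizer.from_pretrained(model_name)

data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]}
table = pd.DataFrame.from_dict(data)
queries = ["How many movies has George Clooney played in?", "And Brad Pitt?"]

prev_coordinates = None
for query in queries:
    inputs = tokenizer(table=table, queries=query, padding="max_length", return_tensors="pt")
    if prev_coordinates is not None:
        # overwrite the prev_labels channel (index 3) for tokens whose cell was predicted
        # as an answer in the previous turn; column/row ids live in channels 1 and 2 and
        # are 1-based for table tokens (0 is used for question tokens)
        token_type_ids = inputs["token_type_ids"]
        column_ids, row_ids = token_type_ids[0, :, 1], token_type_ids[0, :, 2]
        for row, col in prev_coordinates:
            mask = (row_ids == row + 1) & (column_ids == col + 1)
            token_type_ids[0, :, 3][mask] = 1
    with torch.no_grad():
        outputs = model(**inputs)
    # SQA checkpoints have no aggregation logits, so only cell coordinates are returned
    prev_coordinates = tokenizer.convert_logits_to_predictions(inputs, outputs.logits)[0][0]
    print(query, "->", [table.iat[coordinate] for coordinate in prev_coordinates])
```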
## Resources - [Text classification task guide](../tasks/sequence_classification) - [Masked language modeling task guide](../tasks/masked_language_modeling) ## TAPAS specific outputs [[autodoc]] models.tapas.modeling_tapas.TableQuestionAnsweringOutput ## TapasConfig [[autodoc]] TapasConfig ## TapasTokenizer [[autodoc]] TapasTokenizer - __call__ - convert_logits_to_predictions - save_vocabulary <frameworkcontent> <pt> ## TapasModel [[autodoc]] TapasModel - forward ## TapasForMaskedLM [[autodoc]] TapasForMaskedLM - forward ## TapasForSequenceClassification [[autodoc]] TapasForSequenceClassification - forward ## TapasForQuestionAnswering [[autodoc]] TapasForQuestionAnswering - forward </pt> <tf> ## TFTapasModel [[autodoc]] TFTapasModel - call ## TFTapasForMaskedLM [[autodoc]] TFTapasForMaskedLM - call ## TFTapasForSequenceClassification [[autodoc]] TFTapasForSequenceClassification - call ## TFTapasForQuestionAnswering [[autodoc]] TFTapasForQuestionAnswering - call </tf> </frameworkcontent>
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # VAN <Tip warning={true}> This model is in maintenance mode only, we don't accept any new PRs changing its code. If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0. You can do so by running the following command: `pip install -U transformers==4.30.0`. </Tip> ## Overview The VAN model was proposed in [Visual Attention Network](https://arxiv.org/abs/2202.09741) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu. This paper introduces a new attention layer based on convolution operations able to capture both local and distant relationships. This is done by combining normal and large kernel convolution layers. The latter uses a dilated convolution to capture distant correlations. The abstract from the paper is the following: *While originally designed for natural language processing tasks, the self-attention mechanism has recently taken various computer vision areas by storm. However, the 2D nature of images brings three challenges for applying self-attention in computer vision. (1) Treating images as 1D sequences neglects their 2D structures. (2) The quadratic complexity is too expensive for high-resolution images. (3) It only captures spatial adaptability but ignores channel adaptability. In this paper, we propose a novel large kernel attention (LKA) module to enable self-adaptive and long-range correlations in self-attention while avoiding the above issues. We further introduce a novel neural network based on LKA, namely Visual Attention Network (VAN). While extremely simple, VAN outperforms the state-of-the-art vision transformers and convolutional neural networks with a large margin in extensive experiments, including image classification, object detection, semantic segmentation, instance segmentation, etc. Code is available at [this https URL](https://github.com/Visual-Attention-Network/VAN-Classification).* Tips: - VAN does not have an embedding layer, thus the `hidden_states` will have a length equal to the number of stages. The figure below illustrates the architecture of a Visual Attention Layer. Taken from the [original paper](https://arxiv.org/abs/2202.09741). <img width="600" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/van_architecture.png"/> This model was contributed by [Francesco](https://huggingface.co/Francesco). The original code can be found [here](https://github.com/Visual-Attention-Network/VAN-Classification). ## Resources A list of official Hugging Face and community (indicated by ๐ŸŒŽ) resources to help you get started with VAN. 
<PipelineTag pipeline="image-classification"/> - [`VanForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb). - See also: [Image classification task guide](../tasks/image_classification) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. ## VanConfig [[autodoc]] VanConfig ## VanModel [[autodoc]] VanModel - forward ## VanForImageClassification [[autodoc]] VanForImageClassification - forward
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # EfficientNet ## Overview The EfficientNet model was proposed in [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan and Quoc V. Le. EfficientNets are a family of image classification models, which achieve state-of-the-art accuracy, yet being an order-of-magnitude smaller and faster than previous models. The abstract from the paper is the following: *Convolutional Neural Networks (ConvNets) are commonly developed at a fixed resource budget, and then scaled up for better accuracy if more resources are available. In this paper, we systematically study model scaling and identify that carefully balancing network depth, width, and resolution can lead to better performance. Based on this observation, we propose a new scaling method that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient. We demonstrate the effectiveness of this method on scaling up MobileNets and ResNet. To go even further, we use neural architecture search to design a new baseline network and scale it up to obtain a family of models, called EfficientNets, which achieve much better accuracy and efficiency than previous ConvNets. In particular, our EfficientNet-B7 achieves state-of-the-art 84.3% top-1 accuracy on ImageNet, while being 8.4x smaller and 6.1x faster on inference than the best existing ConvNet. Our EfficientNets also transfer well and achieve state-of-the-art accuracy on CIFAR-100 (91.7%), Flowers (98.8%), and 3 other transfer learning datasets, with an order of magnitude fewer parameters.* This model was contributed by [adirik](https://huggingface.co/adirik). The original code can be found [here](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet). ## EfficientNetConfig [[autodoc]] EfficientNetConfig ## EfficientNetImageProcessor [[autodoc]] EfficientNetImageProcessor - preprocess ## EfficientNetModel [[autodoc]] EfficientNetModel - forward ## EfficientNetForImageClassification [[autodoc]] EfficientNetForImageClassification - forward
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Encoder Decoder Models ## Overview The [`EncoderDecoderModel`] can be used to initialize a sequence-to-sequence model with any pretrained autoencoding model as the encoder and any pretrained autoregressive model as the decoder. The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation tasks was shown in [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. After such an [`EncoderDecoderModel`] has been trained/fine-tuned, it can be saved/loaded just like any other models (see the examples for more information). An application of this architecture could be to leverage two pretrained [`BertModel`] as the encoder and decoder for a summarization model as was shown in: [Text Summarization with Pretrained Encoders](https://arxiv.org/abs/1908.08345) by Yang Liu and Mirella Lapata. ## Randomly initializing `EncoderDecoderModel` from model configurations. [`EncoderDecoderModel`] can be randomly initialized from an encoder and a decoder config. In the following example, we show how to do this using the default [`BertModel`] configuration for the encoder and the default [`BertForCausalLM`] configuration for the decoder. ```python >>> from transformers import BertConfig, EncoderDecoderConfig, EncoderDecoderModel >>> config_encoder = BertConfig() >>> config_decoder = BertConfig() >>> config = EncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder) >>> model = EncoderDecoderModel(config=config) ``` ## Initialising `EncoderDecoderModel` from a pretrained encoder and a pretrained decoder. [`EncoderDecoderModel`] can be initialized from a pretrained encoder checkpoint and a pretrained decoder checkpoint. Note that any pretrained auto-encoding model, *e.g.* BERT, can serve as the encoder and both pretrained auto-encoding models, *e.g.* BERT, pretrained causal language models, *e.g.* GPT2, as well as the pretrained decoder part of sequence-to-sequence models, *e.g.* decoder of BART, can be used as the decoder. Depending on which architecture you choose as the decoder, the cross-attention layers might be randomly initialized. Initializing [`EncoderDecoderModel`] from a pretrained encoder and decoder checkpoint requires the model to be fine-tuned on a downstream task, as has been shown in [the *Warm-starting-encoder-decoder blog post*](https://huggingface.co/blog/warm-starting-encoder-decoder). To do so, the `EncoderDecoderModel` class provides a [`EncoderDecoderModel.from_encoder_decoder_pretrained`] method. 
```python >>> from transformers import EncoderDecoderModel, BertTokenizer >>> tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased") >>> model = EncoderDecoderModel.from_encoder_decoder_pretrained("google-bert/bert-base-uncased", "google-bert/bert-base-uncased") ``` ## Loading an existing `EncoderDecoderModel` checkpoint and perform inference. To load fine-tuned checkpoints of the `EncoderDecoderModel` class, [`EncoderDecoderModel`] provides the `from_pretrained(...)` method just like any other model architecture in Transformers. To perform inference, one uses the [`generate`] method, which allows to autoregressively generate text. This method supports various forms of decoding, such as greedy, beam search and multinomial sampling. ```python >>> from transformers import AutoTokenizer, EncoderDecoderModel >>> # load a fine-tuned seq2seq model and corresponding tokenizer >>> model = EncoderDecoderModel.from_pretrained("patrickvonplaten/bert2bert_cnn_daily_mail") >>> tokenizer = AutoTokenizer.from_pretrained("patrickvonplaten/bert2bert_cnn_daily_mail") >>> # let's perform inference on a long piece of text >>> ARTICLE_TO_SUMMARIZE = ( ... "PG&E stated it scheduled the blackouts in response to forecasts for high winds " ... "amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were " ... "scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow." ... ) >>> input_ids = tokenizer(ARTICLE_TO_SUMMARIZE, return_tensors="pt").input_ids >>> # autoregressively generate summary (uses greedy decoding by default) >>> generated_ids = model.generate(input_ids) >>> generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] >>> print(generated_text) nearly 800 thousand customers were affected by the shutoffs. the aim is to reduce the risk of wildfires. nearly 800, 000 customers were expected to be affected by high winds amid dry conditions. pg & e said it scheduled the blackouts to last through at least midday tomorrow. ``` ## Loading a PyTorch checkpoint into `TFEncoderDecoderModel`. [`TFEncoderDecoderModel.from_pretrained`] currently doesn't support initializing the model from a pytorch checkpoint. Passing `from_pt=True` to this method will throw an exception. If there are only pytorch checkpoints for a particular encoder-decoder model, a workaround is: ```python >>> # a workaround to load from pytorch checkpoint >>> from transformers import EncoderDecoderModel, TFEncoderDecoderModel >>> _model = EncoderDecoderModel.from_pretrained("patrickvonplaten/bert2bert-cnn_dailymail-fp16") >>> _model.encoder.save_pretrained("./encoder") >>> _model.decoder.save_pretrained("./decoder") >>> model = TFEncoderDecoderModel.from_encoder_decoder_pretrained( ... "./encoder", "./decoder", encoder_from_pt=True, decoder_from_pt=True ... ) >>> # This is only for copying some specific attributes of this particular model. >>> model.config = _model.config ``` ## Training Once the model is created, it can be fine-tuned similar to BART, T5 or any other encoder-decoder model. As you can see, only 2 inputs are required for the model in order to compute a loss: `input_ids` (which are the `input_ids` of the encoded input sequence) and `labels` (which are the `input_ids` of the encoded target sequence). 
```python >>> from transformers import BertTokenizer, EncoderDecoderModel >>> tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased") >>> model = EncoderDecoderModel.from_encoder_decoder_pretrained("google-bert/bert-base-uncased", "google-bert/bert-base-uncased") >>> model.config.decoder_start_token_id = tokenizer.cls_token_id >>> model.config.pad_token_id = tokenizer.pad_token_id >>> input_ids = tokenizer( ... "The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side.During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft).Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct.", ... return_tensors="pt", ... ).input_ids >>> labels = tokenizer( ... "the eiffel tower surpassed the washington monument to become the tallest structure in the world. it was the first structure to reach a height of 300 metres in paris in 1930. it is now taller than the chrysler building by 5. 2 metres ( 17 ft ) and is the second tallest free - standing structure in paris.", ... return_tensors="pt", ... ).input_ids >>> # the forward function automatically creates the correct decoder_input_ids >>> loss = model(input_ids=input_ids, labels=labels).loss ``` Detailed [colab](https://colab.research.google.com/drive/1WIk2bxglElfZewOHboPFNj8H44_VAyKE?usp=sharing#scrollTo=ZwQIEhKOrJpl) for training. This model was contributed by [thomwolf](https://github.com/thomwolf). This model's TensorFlow and Flax versions were contributed by [ydshieh](https://github.com/ydshieh). ## EncoderDecoderConfig [[autodoc]] EncoderDecoderConfig <frameworkcontent> <pt> ## EncoderDecoderModel [[autodoc]] EncoderDecoderModel - forward - from_encoder_decoder_pretrained </pt> <tf> ## TFEncoderDecoderModel [[autodoc]] TFEncoderDecoderModel - call - from_encoder_decoder_pretrained </tf> <jax> ## FlaxEncoderDecoderModel [[autodoc]] FlaxEncoderDecoderModel - __call__ - from_encoder_decoder_pretrained </jax> </frameworkcontent>
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # BARTpho ## Overview The BARTpho model was proposed in [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen. The abstract from the paper is the following: *We present BARTpho with two versions -- BARTpho_word and BARTpho_syllable -- the first public large-scale monolingual sequence-to-sequence models pre-trained for Vietnamese. Our BARTpho uses the "large" architecture and pre-training scheme of the sequence-to-sequence denoising model BART, thus especially suitable for generative NLP tasks. Experiments on a downstream task of Vietnamese text summarization show that in both automatic and human evaluations, our BARTpho outperforms the strong baseline mBART and improves the state-of-the-art. We release BARTpho to facilitate future research and applications of generative Vietnamese NLP tasks.* This model was contributed by [dqnguyen](https://huggingface.co/dqnguyen). The original code can be found [here](https://github.com/VinAIResearch/BARTpho). ## Usage example ```python >>> import torch >>> from transformers import AutoModel, AutoTokenizer >>> bartpho = AutoModel.from_pretrained("vinai/bartpho-syllable") >>> tokenizer = AutoTokenizer.from_pretrained("vinai/bartpho-syllable") >>> line = "Chรบng tรดi lร  nhแปฏng nghiรชn cแปฉu viรชn." >>> input_ids = tokenizer(line, return_tensors="pt") >>> with torch.no_grad(): ... features = bartpho(**input_ids) # Models outputs are now tuples >>> # With TensorFlow 2.0+: >>> from transformers import TFAutoModel >>> bartpho = TFAutoModel.from_pretrained("vinai/bartpho-syllable") >>> input_ids = tokenizer(line, return_tensors="tf") >>> features = bartpho(**input_ids) ``` ## Usage tips - Following mBART, BARTpho uses the "large" architecture of BART with an additional layer-normalization layer on top of both the encoder and decoder. Thus, usage examples in the [documentation of BART](bart), when adapting to use with BARTpho, should be adjusted by replacing the BART-specialized classes with the mBART-specialized counterparts. For example: ```python >>> from transformers import MBartForConditionalGeneration >>> bartpho = MBartForConditionalGeneration.from_pretrained("vinai/bartpho-syllable") >>> TXT = "Chรบng tรดi lร  <mask> nghiรชn cแปฉu viรชn." 
>>> input_ids = tokenizer([TXT], return_tensors="pt")["input_ids"] >>> logits = bartpho(input_ids).logits >>> masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item() >>> probs = logits[0, masked_index].softmax(dim=0) >>> values, predictions = probs.topk(5) >>> print(tokenizer.decode(predictions).split()) ``` - This implementation is only for tokenization: "monolingual_vocab_file" consists of Vietnamese-specialized types extracted from the pre-trained SentencePiece model "vocab_file" that is available from the multilingual XLM-RoBERTa. Other languages, if employing this pre-trained multilingual SentencePiece model "vocab_file" for subword segmentation, can reuse BartphoTokenizer with their own language-specialized "monolingual_vocab_file". ## BartphoTokenizer [[autodoc]] BartphoTokenizer
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # RoBERTa <div class="flex flex-wrap space-x-1"> <a href="https://huggingface.co/models?filter=roberta"> <img alt="Models" src="https://img.shields.io/badge/All_model_pages-roberta-blueviolet"> </a> <a href="https://huggingface.co/spaces/docs-demos/roberta-base"> <img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"> </a> <a href="https://huggingface.co/papers/1907.11692"> <img alt="Paper page" src="https://img.shields.io/badge/Paper%20page-1907.11692-green"> </a> </div> ## Overview The RoBERTa model was proposed in [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, [Myle Ott](https://huggingface.co/myleott), Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov. It is based on Google's BERT model released in 2018. It builds on BERT and modifies key hyperparameters, removing the next-sentence pretraining objective and training with much larger mini-batches and learning rates. The abstract from the paper is the following: *Language model pretraining has led to significant performance gains but careful comparison between different approaches is challenging. Training is computationally expensive, often done on private datasets of different sizes, and, as we will show, hyperparameter choices have significant impact on the final results. We present a replication study of BERT pretraining (Devlin et al., 2019) that carefully measures the impact of many key hyperparameters and training data size. We find that BERT was significantly undertrained, and can match or exceed the performance of every model published after it. Our best model achieves state-of-the-art results on GLUE, RACE and SQuAD. These results highlight the importance of previously overlooked design choices, and raise questions about the source of recently reported improvements. We release our models and code.* This model was contributed by [julien-c](https://huggingface.co/julien-c). The original code can be found [here](https://github.com/pytorch/fairseq/tree/master/examples/roberta). ## Usage tips - This implementation is the same as [`BertModel`] with a tiny embeddings tweak as well as a setup for Roberta pretrained models. - RoBERTa has the same architecture as BERT, but uses a byte-level BPE as a tokenizer (same as GPT-2) and uses a different pretraining scheme. - RoBERTa doesn't have `token_type_ids`, you don't need to indicate which token belongs to which segment. 
Just separate your segments with the separation token `tokenizer.sep_token` (or `</s>`)
- Same as BERT with better pretraining tricks:

    * dynamic masking: tokens are masked differently at each epoch, whereas BERT does it once and for all
    * sentence packing: full sentences are packed together to reach 512 tokens (so the sentences are in an order that may span several documents)
    * train with larger batches
    * use BPE with bytes as a subunit and not characters (because of unicode characters)
- [CamemBERT](camembert) is a wrapper around RoBERTa. Refer to this page for usage examples.

## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with RoBERTa. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

<PipelineTag pipeline="text-classification"/>

- A blog on [Getting Started with Sentiment Analysis on Twitter](https://huggingface.co/blog/sentiment-analysis-twitter) using RoBERTa and the [Inference API](https://huggingface.co/inference-api).
- A blog on [Opinion Classification with Kili and Hugging Face AutoTrain](https://huggingface.co/blog/opinion-classification-with-kili) using RoBERTa.
- A notebook on how to [finetune RoBERTa for sentiment analysis](https://colab.research.google.com/github/DhavalTaunk08/NLP_scripts/blob/master/sentiment_analysis_using_roberta.ipynb). 🌎
- [`RobertaForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb).
- [`TFRobertaForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb).
- [`FlaxRobertaForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification_flax.ipynb).
- [Text classification task guide](../tasks/sequence_classification)

<PipelineTag pipeline="token-classification"/>

- [`RobertaForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification.ipynb).
- [`TFRobertaForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb).
- [`FlaxRobertaForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/token-classification).
- [Token classification](https://huggingface.co/course/chapter7/2?fw=pt) chapter of the 🤗 Hugging Face Course.
- [Token classification task guide](../tasks/token_classification) <PipelineTag pipeline="fill-mask"/> - A blog on [How to train a new language model from scratch using Transformers and Tokenizers](https://huggingface.co/blog/how-to-train) with RoBERTa. - [`RobertaForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#robertabertdistilbert-and-masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb). - [`TFRobertaForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_mlmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb). - [`FlaxRobertaForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/masked_language_modeling_flax.ipynb). - [Masked language modeling](https://huggingface.co/course/chapter7/3?fw=pt) chapter of the ๐Ÿค— Hugging Face Course. - [Masked language modeling task guide](../tasks/masked_language_modeling) <PipelineTag pipeline="question-answering"/> - A blog on [Accelerated Inference with Optimum and Transformers Pipelines](https://huggingface.co/blog/optimum-inference) with RoBERTa for question answering. - [`RobertaForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb). - [`TFRobertaForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb). - [`FlaxRobertaForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/question-answering). - [Question answering](https://huggingface.co/course/chapter7/7?fw=pt) chapter of the ๐Ÿค— Hugging Face Course. - [Question answering task guide](../tasks/question_answering) **Multiple choice** - [`RobertaForMultipleChoice`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb). - [`TFRobertaForMultipleChoice`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb). 
- [Multiple choice task guide](../tasks/multiple_choice) ## RobertaConfig [[autodoc]] RobertaConfig ## RobertaTokenizer [[autodoc]] RobertaTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## RobertaTokenizerFast [[autodoc]] RobertaTokenizerFast - build_inputs_with_special_tokens <frameworkcontent> <pt> ## RobertaModel [[autodoc]] RobertaModel - forward ## RobertaForCausalLM [[autodoc]] RobertaForCausalLM - forward ## RobertaForMaskedLM [[autodoc]] RobertaForMaskedLM - forward ## RobertaForSequenceClassification [[autodoc]] RobertaForSequenceClassification - forward ## RobertaForMultipleChoice [[autodoc]] RobertaForMultipleChoice - forward ## RobertaForTokenClassification [[autodoc]] RobertaForTokenClassification - forward ## RobertaForQuestionAnswering [[autodoc]] RobertaForQuestionAnswering - forward </pt> <tf> ## TFRobertaModel [[autodoc]] TFRobertaModel - call ## TFRobertaForCausalLM [[autodoc]] TFRobertaForCausalLM - call ## TFRobertaForMaskedLM [[autodoc]] TFRobertaForMaskedLM - call ## TFRobertaForSequenceClassification [[autodoc]] TFRobertaForSequenceClassification - call ## TFRobertaForMultipleChoice [[autodoc]] TFRobertaForMultipleChoice - call ## TFRobertaForTokenClassification [[autodoc]] TFRobertaForTokenClassification - call ## TFRobertaForQuestionAnswering [[autodoc]] TFRobertaForQuestionAnswering - call </tf> <jax> ## FlaxRobertaModel [[autodoc]] FlaxRobertaModel - __call__ ## FlaxRobertaForCausalLM [[autodoc]] FlaxRobertaForCausalLM - __call__ ## FlaxRobertaForMaskedLM [[autodoc]] FlaxRobertaForMaskedLM - __call__ ## FlaxRobertaForSequenceClassification [[autodoc]] FlaxRobertaForSequenceClassification - __call__ ## FlaxRobertaForMultipleChoice [[autodoc]] FlaxRobertaForMultipleChoice - __call__ ## FlaxRobertaForTokenClassification [[autodoc]] FlaxRobertaForTokenClassification - __call__ ## FlaxRobertaForQuestionAnswering [[autodoc]] FlaxRobertaForQuestionAnswering - __call__ </jax> </frameworkcontent>
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # OneFormer ## Overview The OneFormer model was proposed in [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) by Jitesh Jain, Jiachen Li, MangTik Chiu, Ali Hassani, Nikita Orlov, Humphrey Shi. OneFormer is a universal image segmentation framework that can be trained on a single panoptic dataset to perform semantic, instance, and panoptic segmentation tasks. OneFormer uses a task token to condition the model on the task in focus, making the architecture task-guided for training, and task-dynamic for inference. <img width="600" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/oneformer_teaser.png"/> The abstract from the paper is the following: *Universal Image Segmentation is not a new concept. Past attempts to unify image segmentation in the last decades include scene parsing, panoptic segmentation, and, more recently, new panoptic architectures. However, such panoptic architectures do not truly unify image segmentation because they need to be trained individually on the semantic, instance, or panoptic segmentation to achieve the best performance. Ideally, a truly universal framework should be trained only once and achieve SOTA performance across all three image segmentation tasks. To that end, we propose OneFormer, a universal image segmentation framework that unifies segmentation with a multi-task train-once design. We first propose a task-conditioned joint training strategy that enables training on ground truths of each domain (semantic, instance, and panoptic segmentation) within a single multi-task training process. Secondly, we introduce a task token to condition our model on the task at hand, making our model task-dynamic to support multi-task training and inference. Thirdly, we propose using a query-text contrastive loss during training to establish better inter-task and inter-class distinctions. Notably, our single OneFormer model outperforms specialized Mask2Former models across all three segmentation tasks on ADE20k, CityScapes, and COCO, despite the latter being trained on each of the three tasks individually with three times the resources. With new ConvNeXt and DiNAT backbones, we observe even more performance improvement. We believe OneFormer is a significant step towards making image segmentation more universal and accessible.* The figure below illustrates the architecture of OneFormer. Taken from the [original paper](https://arxiv.org/abs/2211.06220). <img width="600" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/oneformer_architecture.png"/> This model was contributed by [Jitesh Jain](https://huggingface.co/praeclarumjj3). 
The original code can be found [here](https://github.com/SHI-Labs/OneFormer). ## Usage tips - OneFormer requires two inputs during inference: *image* and *task token*. - During training, OneFormer only uses panoptic annotations. - If you want to train the model in a distributed environment across multiple nodes, then one should update the `get_num_masks` function inside in the `OneFormerLoss` class of `modeling_oneformer.py`. When training on multiple nodes, this should be set to the average number of target masks across all nodes, as can be seen in the original implementation [here](https://github.com/SHI-Labs/OneFormer/blob/33ebb56ed34f970a30ae103e786c0cb64c653d9a/oneformer/modeling/criterion.py#L287). - One can use [`OneFormerProcessor`] to prepare input images and task inputs for the model and optional targets for the model. [`OneformerProcessor`] wraps [`OneFormerImageProcessor`] and [`CLIPTokenizer`] into a single instance to both prepare the images and encode the task inputs. - To get the final segmentation, depending on the task, you can call [`~OneFormerProcessor.post_process_semantic_segmentation`] or [`~OneFormerImageProcessor.post_process_instance_segmentation`] or [`~OneFormerImageProcessor.post_process_panoptic_segmentation`]. All three tasks can be solved using [`OneFormerForUniversalSegmentation`] output, panoptic segmentation accepts an optional `label_ids_to_fuse` argument to fuse instances of the target object/s (e.g. sky) together. ## Resources A list of official Hugging Face and community (indicated by ๐ŸŒŽ) resources to help you get started with OneFormer. - Demo notebooks regarding inference + fine-tuning on custom data can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/OneFormer). If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it. The resource should ideally demonstrate something new instead of duplicating an existing resource. ## OneFormer specific outputs [[autodoc]] models.oneformer.modeling_oneformer.OneFormerModelOutput [[autodoc]] models.oneformer.modeling_oneformer.OneFormerForUniversalSegmentationOutput ## OneFormerConfig [[autodoc]] OneFormerConfig ## OneFormerImageProcessor [[autodoc]] OneFormerImageProcessor - preprocess - encode_inputs - post_process_semantic_segmentation - post_process_instance_segmentation - post_process_panoptic_segmentation ## OneFormerProcessor [[autodoc]] OneFormerProcessor ## OneFormerModel [[autodoc]] OneFormerModel - forward ## OneFormerForUniversalSegmentation [[autodoc]] OneFormerForUniversalSegmentation - forward
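As a quick end-to-end illustration of the processor/model/post-processing flow described in the usage tips above, here is a minimal semantic-segmentation sketch. It assumes the publicly released `shi-labs/oneformer_ade20k_swin_tiny` checkpoint, and the image URL is only an example:

```py
import requests
import torch
from PIL import Image
from transformers import OneFormerProcessor, OneFormerForUniversalSegmentation

checkpoint = "shi-labs/oneformer_ade20k_swin_tiny"
processor = OneFormerProcessor.from_pretrained(checkpoint)
model = OneFormerForUniversalSegmentation.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# the processor prepares both the image and the task token ("semantic", "instance" or "panoptic")
inputs = processor(images=image, task_inputs=["semantic"], return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# post-process the outputs into a (height, width) map of class indices
semantic_map = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
print(semantic_map.shape)
```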
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # CodeLlama ## Overview The Code Llama model was proposed in [Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) by Baptiste Roziรจre, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jรฉrรฉmy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Dรฉfossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve. The abstract from the paper is the following: *We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B and 34B parameters each. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. 7B and 13B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama reaches state-of-the-art performance among open models on several code benchmarks, with scores of up to 53% and 55% on HumanEval and MBPP, respectively. Notably, Code Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our models outperform every other publicly available model on MultiPL-E. We release Code Llama under a permissive license that allows for both research and commercial use.* Check out all Code Llama model checkpoints [here](https://huggingface.co/models?search=code_llama) and the officially released ones in the [codellama org](https://huggingface.co/codellama). This model was contributed by [ArthurZucker](https://huggingface.co/ArthurZ). The original code of the authors can be found [here](https://github.com/facebookresearch/llama). ## Usage tips and examples <Tip warning={true}> The `Llama2` family models, on which Code Llama is based, were trained using `bfloat16`, but the original inference uses `float16`. Let's look at the different precisions: * `float32`: PyTorch convention on model initialization is to load models in `float32`, no matter with which `dtype` the model weights were stored. `transformers` also follows this convention for consistency with PyTorch. This will be picked by default. If you want the `AutoModel` API to cast the load the checkpoints with the storage weights type, you must specify `torch_dtype="auto"`, e.g. 
`model = AutoModelForCausalLM.from_pretrained("path", torch_dtype = "auto")`. * `bfloat16`: Code Llama was trained with this precision, so we recommend using it for further training or fine-tuning. * `float16`: We recommend running inference using this precision, as it's usually faster than `bfloat16`, and evaluation metrics show no discernible degradation with respect to `bfloat16`. You can also run inference using `bfloat16`, and we recommend you check inference results with both `float16` and `bfloat16` after fine-tuning. As mentioned above, the `dtype` of the storage weights is mostly irrelevant unless you are using `torch_dtype="auto"` when initializing a model using. The reason is that the model will first be downloaded (using the `dtype` of the checkpoints online) and then will be casted to the default `dtype` of `torch` (becomes `torch.float32`). If there is a specified `torch_dtype`, it will be used instead. </Tip> Tips: - The infilling task is supported out of the box. You should be using the `tokenizer.fill_token` where you want your input to be filled. - The model conversion script is the same as for the `Llama2` family: Here is a sample usage: ```bash python src/transformers/models/llama/convert_llama_weights_to_hf.py \ --input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path ``` Note that executing the script requires enough CPU RAM to host the whole model in float16 precision (even if the biggest versions come in several checkpoints they each contain a part of each weight of the model, so we need to load them all in RAM). After conversion, the model and tokenizer can be loaded via: ```python >>> from transformers import LlamaForCausalLM, CodeLlamaTokenizer >>> tokenizer = CodeLlamaTokenizer.from_pretrained("codellama/CodeLlama-7b-hf") >>> model = LlamaForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf") >>> PROMPT = '''def remove_non_ascii(s: str) -> str: ... """ <FILL_ME> ... return result ... ''' >>> input_ids = tokenizer(PROMPT, return_tensors="pt")["input_ids"] >>> generated_ids = model.generate(input_ids, max_new_tokens=128) >>> filling = tokenizer.batch_decode(generated_ids[:, input_ids.shape[1]:], skip_special_tokens = True)[0] >>> print(PROMPT.replace("<FILL_ME>", filling)) def remove_non_ascii(s: str) -> str: """ Remove non-ASCII characters from a string. <BLANKLINE> Args: s: The string to remove non-ASCII characters from. <BLANKLINE> Returns: The string with non-ASCII characters removed. """ result = "" for c in s: if ord(c) < 128: result += c return result <BLANKLINE> ``` If you only want the infilled part: ```python >>> from transformers import pipeline >>> import torch >>> generator = pipeline("text-generation",model="codellama/CodeLlama-7b-hf",torch_dtype=torch.float16, device_map="auto") >>> generator('def remove_non_ascii(s: str) -> str:\n """ <FILL_ME>\n return result', max_new_tokens = 128) [{'generated_text': 'def remove_non_ascii(s: str) -> str:\n """ <FILL_ME>\n return resultRemove non-ASCII characters from a string. """\n result = ""\n for c in s:\n if ord(c) < 128:\n result += c'}] ``` Under the hood, the tokenizer [automatically splits by `<FILL_ME>`](https://huggingface.co/docs/transformers/main/model_doc/code_llama#transformers.CodeLlamaTokenizer.fill_token) to create a formatted input string that follows [the original training pattern](https://github.com/facebookresearch/codellama/blob/cb51c14ec761370ba2e2bc351374a79265d0465e/llama/generation.py#L402). 
This is more robust than preparing the pattern yourself: it avoids pitfalls, such as token glueing, that are very hard to debug. To see how much CPU and GPU memory you need for this model or others, try [this calculator](https://huggingface.co/spaces/hf-accelerate/model-memory-usage) which can help determine that value. The LLaMA tokenizer is a BPE model based on [sentencepiece](https://github.com/google/sentencepiece). One quirk of sentencepiece is that when decoding a sequence, if the first token is the start of the word (e.g. "Banana"), the tokenizer does not prepend the prefix space to the string. <Tip> Code Llama has the same architecture as the `Llama2` models, refer to [Llama2's documentation page](llama2) for the API reference. Find Code Llama tokenizer reference below. </Tip> ## CodeLlamaTokenizer [[autodoc]] CodeLlamaTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## CodeLlamaTokenizerFast [[autodoc]] CodeLlamaTokenizerFast - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - update_post_processor - save_vocabulary
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # FLAN-T5 ## Overview FLAN-T5 was released in the paper [Scaling Instruction-Finetuned Language Models](https://arxiv.org/pdf/2210.11416.pdf) - it is an enhanced version of T5 that has been finetuned in a mixture of tasks. One can directly use FLAN-T5 weights without finetuning the model: ```python >>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer >>> model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small") >>> tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small") >>> inputs = tokenizer("A step by step recipe to make bolognese pasta:", return_tensors="pt") >>> outputs = model.generate(**inputs) >>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)) ['Pour a cup of bolognese into a large bowl and add the pasta'] ``` FLAN-T5 includes the same improvements as T5 version 1.1 (see [here](https://huggingface.co/docs/transformers/model_doc/t5v1.1) for the full details of the model's improvements.) Google has released the following variants: - [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) - [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) - [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) - [google/flan-t5-xl](https://huggingface.co/google/flan-t5-xl) - [google/flan-t5-xxl](https://huggingface.co/google/flan-t5-xxl). The original checkpoints can be found [here](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints). <Tip> Refer to [T5's documentation page](t5) for all API reference, code examples and notebooks. For more details regarding training and evaluation of the FLAN-T5, refer to the model card. </Tip>
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # DeiT ## Overview The DeiT model was proposed in [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervรฉ Jรฉgou. The [Vision Transformer (ViT)](vit) introduced in [Dosovitskiy et al., 2020](https://arxiv.org/abs/2010.11929) has shown that one can match or even outperform existing convolutional neural networks using a Transformer encoder (BERT-like). However, the ViT models introduced in that paper required training on expensive infrastructure for multiple weeks, using external data. DeiT (data-efficient image transformers) are more efficiently trained transformers for image classification, requiring far less data and far less computing resources compared to the original ViT models. The abstract from the paper is the following: *Recently, neural networks purely based on attention were shown to address image understanding tasks such as image classification. However, these visual transformers are pre-trained with hundreds of millions of images using an expensive infrastructure, thereby limiting their adoption. In this work, we produce a competitive convolution-free transformer by training on Imagenet only. We train them on a single computer in less than 3 days. Our reference vision transformer (86M parameters) achieves top-1 accuracy of 83.1% (single-crop evaluation) on ImageNet with no external data. More importantly, we introduce a teacher-student strategy specific to transformers. It relies on a distillation token ensuring that the student learns from the teacher through attention. We show the interest of this token-based distillation, especially when using a convnet as a teacher. This leads us to report results competitive with convnets for both Imagenet (where we obtain up to 85.2% accuracy) and when transferring to other tasks. We share our code and models.* This model was contributed by [nielsr](https://huggingface.co/nielsr). The TensorFlow version of this model was added by [amyeroberts](https://huggingface.co/amyeroberts). ## Usage tips - Compared to ViT, DeiT models use a so-called distillation token to effectively learn from a teacher (which, in the DeiT paper, is a ResNet like-model). The distillation token is learned through backpropagation, by interacting with the class ([CLS]) and patch tokens through the self-attention layers. - There are 2 ways to fine-tune distilled models, either (1) in a classic way, by only placing a prediction head on top of the final hidden state of the class token and not using the distillation signal, or (2) by placing both a prediction head on top of the class token and on top of the distillation token. 
In that case, the [CLS] prediction head is trained using regular cross-entropy between the prediction of the head and the ground-truth label, while the distillation prediction head is trained using hard distillation (cross-entropy between the prediction of the distillation head and the label predicted by the teacher). At inference time, one takes the average prediction between both heads as final prediction. (2) is also called "fine-tuning with distillation", because one relies on a teacher that has already been fine-tuned on the downstream dataset. In terms of models, (1) corresponds to [`DeiTForImageClassification`] and (2) corresponds to [`DeiTForImageClassificationWithTeacher`]. - Note that the authors also did try soft distillation for (2) (in which case the distillation prediction head is trained using KL divergence to match the softmax output of the teacher), but hard distillation gave the best results. - All released checkpoints were pre-trained and fine-tuned on ImageNet-1k only. No external data was used. This is in contrast with the original ViT model, which used external data like the JFT-300M dataset/Imagenet-21k for pre-training. - The authors of DeiT also released more efficiently trained ViT models, which you can directly plug into [`ViTModel`] or [`ViTForImageClassification`]. Techniques like data augmentation, optimization, and regularization were used in order to simulate training on a much larger dataset (while only using ImageNet-1k for pre-training). There are 4 variants available (in 3 different sizes): *facebook/deit-tiny-patch16-224*, *facebook/deit-small-patch16-224*, *facebook/deit-base-patch16-224* and *facebook/deit-base-patch16-384*. Note that one should use [`DeiTImageProcessor`] in order to prepare images for the model. ## Resources A list of official Hugging Face and community (indicated by ๐ŸŒŽ) resources to help you get started with DeiT. <PipelineTag pipeline="image-classification"/> - [`DeiTForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb). - See also: [Image classification task guide](../tasks/image_classification) Besides that: - [`DeiTForMaskedImageModeling`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-pretraining). If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. 
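To make the usage tips above concrete, here is a minimal image-classification sketch using [`DeiTImageProcessor`] and the distilled base checkpoint (`facebook/deit-base-distilled-patch16-224`); the image URL is only an example:

```py
import requests
import torch
from PIL import Image
from transformers import DeiTImageProcessor, DeiTForImageClassificationWithTeacher

checkpoint = "facebook/deit-base-distilled-patch16-224"
image_processor = DeiTImageProcessor.from_pretrained(checkpoint)
model = DeiTForImageClassificationWithTeacher.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# prepare the image; the model averages the class and distillation head logits internally
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```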
## DeiTConfig [[autodoc]] DeiTConfig ## DeiTFeatureExtractor [[autodoc]] DeiTFeatureExtractor - __call__ ## DeiTImageProcessor [[autodoc]] DeiTImageProcessor - preprocess <frameworkcontent> <pt> ## DeiTModel [[autodoc]] DeiTModel - forward ## DeiTForMaskedImageModeling [[autodoc]] DeiTForMaskedImageModeling - forward ## DeiTForImageClassification [[autodoc]] DeiTForImageClassification - forward ## DeiTForImageClassificationWithTeacher [[autodoc]] DeiTForImageClassificationWithTeacher - forward </pt> <tf> ## TFDeiTModel [[autodoc]] TFDeiTModel - call ## TFDeiTForMaskedImageModeling [[autodoc]] TFDeiTForMaskedImageModeling - call ## TFDeiTForImageClassification [[autodoc]] TFDeiTForImageClassification - call ## TFDeiTForImageClassificationWithTeacher [[autodoc]] TFDeiTForImageClassificationWithTeacher - call </tf> </frameworkcontent>
0
mavonic_private_repos/transformers/docs/source/en
mavonic_private_repos/transformers/docs/source/en/model_doc/switch_transformers.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # SwitchTransformers ## Overview The SwitchTransformers model was proposed in [Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity](https://arxiv.org/abs/2101.03961) by William Fedus, Barret Zoph, Noam Shazeer. The Switch Transformer model uses a sparse T5 encoder-decoder architecture, where the MLP are replaced by a Mixture of Experts (MoE). A routing mechanism (top 1 in this case) associates each token to one of the expert, where each expert is a dense MLP. While switch transformers have a lot more weights than their equivalent dense models, the sparsity allows better scaling and better finetuning performance at scale. During a forward pass, only a fraction of the weights are used. The routing mechanism allows the model to select relevant weights on the fly which increases the model capacity without increasing the number of operations. The abstract from the paper is the following: *In deep learning, models typically reuse the same parameters for all inputs. Mixture of Experts (MoE) defies this and instead selects different parameters for each incoming example. The result is a sparsely-activated model -- with outrageous numbers of parameters -- but a constant computational cost. However, despite several notable successes of MoE, widespread adoption has been hindered by complexity, communication costs and training instability -- we address these with the Switch Transformer. We simplify the MoE routing algorithm and design intuitive improved models with reduced communication and computational costs. Our proposed training techniques help wrangle the instabilities and we show large sparse models may be trained, for the first time, with lower precision (bfloat16) formats. We design models based off T5-Base and T5-Large to obtain up to 7x increases in pre-training speed with the same computational resources. These improvements extend into multilingual settings where we measure gains over the mT5-Base version across all 101 languages. Finally, we advance the current scale of language models by pre-training up to trillion parameter models on the "Colossal Clean Crawled Corpus" and achieve a 4x speedup over the T5-XXL model.* This model was contributed by [Younes Belkada](https://huggingface.co/ybelkada) and [Arthur Zucker](https://huggingface.co/ArthurZ). The original code can be found [here](https://github.com/google/flaxformer/tree/main/flaxformer/architectures/moe). ## Usage tips - SwitchTransformers uses the [`T5Tokenizer`], which can be loaded directly from each model's repository. - The released weights are pretrained on English [Masked Language Modeling](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19323/en/glossary#general-terms) task, and should be finetuned. 
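As a quick illustration of the tips above, the sketch below loads a checkpoint together with its tokenizer and runs the T5-style span-corruption (MLM) generation the released weights were pretrained for; the `google/switch-base-8` identifier is assumed here purely for illustration, and larger variants expose the same API:

```python
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration

# assumed checkpoint for illustration
checkpoint = "google/switch-base-8"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = SwitchTransformersForConditionalGeneration.from_pretrained(checkpoint)

# the released weights were pretrained with the span-corruption (MLM) objective,
# so prompts use sentinel tokens such as <extra_id_0>
input_ids = tokenizer(
    "A <extra_id_0> walks into a bar and orders a <extra_id_1> with <extra_id_2> pinch of salt.",
    return_tensors="pt",
).input_ids

outputs = model.generate(input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```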
## Resources - [Translation task guide](../tasks/translation) - [Summarization task guide](../tasks/summarization) ## SwitchTransformersConfig [[autodoc]] SwitchTransformersConfig ## SwitchTransformersTop1Router [[autodoc]] SwitchTransformersTop1Router - _compute_router_probabilities - forward ## SwitchTransformersSparseMLP [[autodoc]] SwitchTransformersSparseMLP - forward ## SwitchTransformersModel [[autodoc]] SwitchTransformersModel - forward ## SwitchTransformersForConditionalGeneration [[autodoc]] SwitchTransformersForConditionalGeneration - forward ## SwitchTransformersEncoderModel [[autodoc]] SwitchTransformersEncoderModel - forward
0
mavonic_private_repos/transformers/docs/source/en
mavonic_private_repos/transformers/docs/source/en/model_doc/llava_next.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # LLaVA-NeXT ## Overview The LLaVA-NeXT model was proposed in [LLaVA-NeXT: Improved reasoning, OCR, and world knowledge](https://llava-vl.github.io/blog/2024-01-30-llava-next/) by Haotian Liu, Chunyuan Li, Yuheng Li, Bo Li, Yuanhan Zhang, Sheng Shen, Yong Jae Lee. LLaVa-NeXT (also called LLaVa-1.6) improves upon [LLaVa](llava) by increasing the input image resolution and training on an improved visual instruction tuning dataset to improve OCR and common sense reasoning. The introduction from the blog is the following: *In October 2023, we released LLaVA-1.5 with a simple and efficient design along with great performance on a benchmark suite of 12 datasets. It has since served as the foundation of many comprehensive studies of data, model, and capabilities of large multimodal models (LMM), and has enabled various new applications. Today, we are thrilled to present LLaVA-NeXT, with improved reasoning, OCR, and world knowledge. LLaVA-NeXT even exceeds Gemini Pro on several benchmarks. Compared with LLaVA-1.5, LLaVA-NeXT has several improvements: Increasing the input image resolution to 4x more pixels. This allows it to grasp more visual details. It supports three aspect ratios, up to 672x672, 336x1344, 1344x336 resolution. Better visual reasoning and OCR capability with an improved visual instruction tuning data mixture. Better visual conversation for more scenarios, covering different applications. Better world knowledge and logical reasoning. Efficient deployment and inference with SGLang. Along with performance improvements, LLaVA-NeXT maintains the minimalist design and data efficiency of LLaVA-1.5. It re-uses the pretrained connector of LLaVA-1.5, and still uses less than 1M visual instruction tuning samples. The largest 34B variant finishes training in ~1 day with 32 A100s.* <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/llava_next_overview.png" alt="drawing" width="600"/> <small> LLaVa-NeXT incorporates a higher input resolution by encoding various patches of the input image. Taken from the <a href="https://arxiv.org/abs/2310.03744">original paper.</a> </small> This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/haotian-liu/LLaVA/tree/main). ## Usage tips - We advise users to use `padding_side="left"` when computing batched generation as it leads to more accurate results. Simply make sure to call `processor.tokenizer.padding_side = "left"` before generating. - Note that each checkpoint has been trained with a specific prompt format, depending on which large language model (LLM) was used. 
Below, we list the correct prompt formats to use for the text prompt "What is shown in this image?": [llava-v1.6-mistral-7b-hf](https://huggingface.co/llava-hf/llava-v1.6-mistral-7b-hf) requires the following format: ```bash "[INST] <image>\nWhat is shown in this image? [/INST]" ``` [llava-v1.6-vicuna-7b-hf](https://huggingface.co/llava-hf/llava-v1.6-vicuna-7b-hf) and [llava-v1.6-vicuna-13b-hf](https://huggingface.co/llava-hf/llava-v1.6-vicuna-13b-hf) require the following format: ```bash "A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions. USER: <image>\nWhat is shown in this image? ASSISTANT:" ``` [llava-v1.6-34b-hf](https://huggingface.co/llava-hf/llava-v1.6-34b-hf) requires the following format: ```bash "<|im_start|>system\nAnswer the questions.<|im_end|><|im_start|>user\n<image>\nWhat is shown in this image?<|im_end|><|im_start|>assistant\n" ``` ## Usage example Here's how to load the model and perform inference in half-precision (`torch.float16`): ```python from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration import torch from PIL import Image import requests processor = LlavaNextProcessor.from_pretrained("llava-hf/llava-v1.6-mistral-7b-hf") model = LlavaNextForConditionalGeneration.from_pretrained("llava-hf/llava-v1.6-mistral-7b-hf", torch_dtype=torch.float16, low_cpu_mem_usage=True) model.to("cuda:0") # prepare image and text prompt, using the appropriate prompt template url = "https://github.com/haotian-liu/LLaVA/blob/1a91fc274d7c35a9b50b3cb29c4247ae5837ce39/images/llava_v1_5_radar.jpg?raw=true" image = Image.open(requests.get(url, stream=True).raw) prompt = "[INST] <image>\nWhat is shown in this image? [/INST]" inputs = processor(prompt, image, return_tensors="pt").to("cuda:0") # autoregressively complete prompt output = model.generate(**inputs, max_new_tokens=100) print(processor.decode(output[0], skip_special_tokens=True)) ``` ## Model optimization ### Quantization using Bitsandbytes The model can be loaded in 8 or 4 bits, greatly reducing the memory requirements while maintaining the performance of the original model. First make sure to install bitsandbytes, `pip install bitsandbytes` and make sure to have access to a CUDA compatible GPU device. Simply change the snippet above with: ```python from transformers import LlavaNextForConditionalGeneration, BitsAndBytesConfig # specify how to quantize the model quantization_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.float16, ) model = LlavaNextForConditionalGeneration.from_pretrained("llava-hf/llava-v1.6-mistral-7b-hf", quantization_config=quantization_config, device_map="auto") ``` ### Use Flash-Attention 2 to further speed-up generation First make sure to install flash-attn. Refer to the [original repository of Flash Attention](https://github.com/Dao-AILab/flash-attention) regarding that package installation. Simply change the snippet above with: ```python from transformers import LlavaNextForConditionalGeneration model = LlavaNextForConditionalGeneration.from_pretrained( model_id, torch_dtype=torch.float16, low_cpu_mem_usage=True, use_flash_attention_2=True ).to(0) ``` ## LlavaNextConfig [[autodoc]] LlavaNextConfig ## LlavaNextImageProcessor [[autodoc]] LlavaNextImageProcessor - preprocess ## LlavaNextProcessor [[autodoc]] LlavaNextProcessor ## LlavaNextForConditionalGeneration [[autodoc]] LlavaNextForConditionalGeneration - forward
0
mavonic_private_repos/transformers/docs/source/en
mavonic_private_repos/transformers/docs/source/en/model_doc/altclip.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# AltCLIP

## Overview

The AltCLIP model was proposed in [AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities](https://arxiv.org/abs/2211.06679v2) by Zhongzhi Chen, Guang Liu, Bo-Wen Zhang, Fulong Ye, Qinghong Yang, Ledell Wu. AltCLIP (Altering the Language Encoder in CLIP) is a neural network trained on a variety of image-text and text-text pairs. By switching CLIP's text encoder with the pretrained multilingual text encoder XLM-R, we could obtain very close performances with CLIP on almost all tasks, and extended original CLIP's capabilities such as multilingual understanding.

The abstract from the paper is the following:

*In this work, we present a conceptually simple and effective method to train a strong bilingual multimodal representation model. Starting from the pretrained multimodal representation model CLIP released by OpenAI, we switched its text encoder with a pretrained multilingual text encoder XLM-R, and aligned both languages and image representations by a two-stage training schema consisting of teacher learning and contrastive learning. We validate our method through evaluations of a wide range of tasks. We set new state-of-the-art performances on a bunch of tasks including ImageNet-CN, Flickr30k-CN, and COCO-CN. Further, we obtain very close performances with CLIP on almost all tasks, suggesting that one can simply alter the text encoder in CLIP for extended capabilities such as multilingual understanding.*

This model was contributed by [jongjyh](https://huggingface.co/jongjyh).

## Usage tips and example

The usage of AltCLIP is very similar to that of CLIP; the difference from CLIP is the text encoder. Note that we use bidirectional attention instead of causal attention and we take the [CLS] token in XLM-R to represent the text embedding.

AltCLIP is a multi-modal vision and language model. It can be used for image-text similarity and for zero-shot image classification. AltCLIP uses a ViT-like Transformer to get visual features and a bidirectional language model to get the text features. Both the text and visual features are then projected to a latent space with identical dimension. The dot product between the projected image and text features is then used as a similarity score.

To feed images to the Transformer encoder, each image is split into a sequence of fixed-size non-overlapping patches, which are then linearly embedded. A [CLS] token is added to serve as representation of an entire image. The authors also add absolute position embeddings, and feed the resulting sequence of vectors to a standard Transformer encoder. The [`CLIPImageProcessor`] can be used to resize (or rescale) and normalize images for the model.
The [`AltCLIPProcessor`] wraps a [`CLIPImageProcessor`] and a [`XLMRobertaTokenizer`] into a single instance to both encode the text and prepare the images. The following example shows how to get the image-text similarity scores using [`AltCLIPProcessor`] and [`AltCLIPModel`]. ```python >>> from PIL import Image >>> import requests >>> from transformers import AltCLIPModel, AltCLIPProcessor >>> model = AltCLIPModel.from_pretrained("BAAI/AltCLIP") >>> processor = AltCLIPProcessor.from_pretrained("BAAI/AltCLIP") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True) >>> outputs = model(**inputs) >>> logits_per_image = outputs.logits_per_image # this is the image-text similarity score >>> probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities ``` <Tip> This model is based on `CLIPModel`, use it like you would use the original [CLIP](clip). </Tip> ## AltCLIPConfig [[autodoc]] AltCLIPConfig - from_text_vision_configs ## AltCLIPTextConfig [[autodoc]] AltCLIPTextConfig ## AltCLIPVisionConfig [[autodoc]] AltCLIPVisionConfig ## AltCLIPProcessor [[autodoc]] AltCLIPProcessor ## AltCLIPModel [[autodoc]] AltCLIPModel - forward - get_text_features - get_image_features ## AltCLIPTextModel [[autodoc]] AltCLIPTextModel - forward ## AltCLIPVisionModel [[autodoc]] AltCLIPVisionModel - forward
0
mavonic_private_repos/transformers/docs/source/en
mavonic_private_repos/transformers/docs/source/en/model_doc/swin.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Swin Transformer ## Overview The Swin Transformer was proposed in [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo. The abstract from the paper is the following: *This paper presents a new vision Transformer, called Swin Transformer, that capably serves as a general-purpose backbone for computer vision. Challenges in adapting Transformer from language to vision arise from differences between the two domains, such as large variations in the scale of visual entities and the high resolution of pixels in images compared to words in text. To address these differences, we propose a hierarchical Transformer whose representation is computed with \bold{S}hifted \bold{win}dows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connection. This hierarchical architecture has the flexibility to model at various scales and has linear computational complexity with respect to image size. These qualities of Swin Transformer make it compatible with a broad range of vision tasks, including image classification (87.3 top-1 accuracy on ImageNet-1K) and dense prediction tasks such as object detection (58.7 box AP and 51.1 mask AP on COCO test-dev) and semantic segmentation (53.5 mIoU on ADE20K val). Its performance surpasses the previous state-of-the-art by a large margin of +2.7 box AP and +2.6 mask AP on COCO, and +3.2 mIoU on ADE20K, demonstrating the potential of Transformer-based models as vision backbones. The hierarchical design and the shifted window approach also prove beneficial for all-MLP architectures.* <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/swin_transformer_architecture.png" alt="drawing" width="600"/> <small> Swin Transformer architecture. Taken from the <a href="https://arxiv.org/abs/2102.03334">original paper</a>.</small> This model was contributed by [novice03](https://huggingface.co/novice03). The Tensorflow version of this model was contributed by [amyeroberts](https://huggingface.co/amyeroberts). The original code can be found [here](https://github.com/microsoft/Swin-Transformer). ## Usage tips - Swin pads the inputs supporting any input height and width (if divisible by `32`). - Swin can be used as a *backbone*. When `output_hidden_states = True`, it will output both `hidden_states` and `reshaped_hidden_states`. The `reshaped_hidden_states` have a shape of `(batch, num_channels, height, width)` rather than `(batch_size, sequence_length, num_channels)`. 
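To make the backbone tip above concrete, here is a small sketch that requests `output_hidden_states=True` and inspects the `reshaped_hidden_states` feature maps; the `microsoft/swin-tiny-patch4-window7-224` checkpoint and the COCO image URL are assumed purely for illustration:

```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, SwinModel

# assumed checkpoint for illustration
checkpoint = "microsoft/swin-tiny-patch4-window7-224"
image_processor = AutoImageProcessor.from_pretrained(checkpoint)
model = SwinModel.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# `hidden_states` have shape (batch_size, sequence_length, num_channels), while
# `reshaped_hidden_states` have shape (batch_size, num_channels, height, width),
# which is convenient when Swin is used as a backbone for dense prediction heads
for feature_map in outputs.reshaped_hidden_states:
    print(feature_map.shape)
```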
## Resources A list of official Hugging Face and community (indicated by ๐ŸŒŽ) resources to help you get started with Swin Transformer. <PipelineTag pipeline="image-classification"/> - [`SwinForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb). - See also: [Image classification task guide](../tasks/image_classification) Besides that: - [`SwinForMaskedImageModeling`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-pretraining). If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. ## SwinConfig [[autodoc]] SwinConfig <frameworkcontent> <pt> ## SwinModel [[autodoc]] SwinModel - forward ## SwinForMaskedImageModeling [[autodoc]] SwinForMaskedImageModeling - forward ## SwinForImageClassification [[autodoc]] transformers.SwinForImageClassification - forward </pt> <tf> ## TFSwinModel [[autodoc]] TFSwinModel - call ## TFSwinForMaskedImageModeling [[autodoc]] TFSwinForMaskedImageModeling - call ## TFSwinForImageClassification [[autodoc]] transformers.TFSwinForImageClassification - call </tf> </frameworkcontent>
0
mavonic_private_repos/transformers/docs/source/en
mavonic_private_repos/transformers/docs/source/en/model_doc/opt.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # OPT ## Overview The OPT model was proposed in [Open Pre-trained Transformer Language Models](https://arxiv.org/pdf/2205.01068) by Meta AI. OPT is a series of open-sourced large causal language models which perform similar in performance to GPT3. The abstract from the paper is the following: *Large language models, which are often trained for hundreds of thousands of compute days, have shown remarkable capabilities for zero- and few-shot learning. Given their computational cost, these models are difficult to replicate without significant capital. For the few that are available through APIs, no access is granted to the full model weights, making them difficult to study. We present Open Pre-trained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M to 175B parameters, which we aim to fully and responsibly share with interested researchers. We show that OPT-175B is comparable to GPT-3, while requiring only 1/7th the carbon footprint to develop. We are also releasing our logbook detailing the infrastructure challenges we faced, along with code for experimenting with all of the released models.* This model was contributed by [Arthur Zucker](https://huggingface.co/ArthurZ), [Younes Belkada](https://huggingface.co/ybelkada), and [Patrick Von Platen](https://huggingface.co/patrickvonplaten). The original code can be found [here](https://github.com/facebookresearch/metaseq). Tips: - OPT has the same architecture as [`BartDecoder`]. - Contrary to GPT2, OPT adds the EOS token `</s>` to the beginning of every prompt. ## Resources A list of official Hugging Face and community (indicated by ๐ŸŒŽ) resources to help you get started with OPT. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it. The resource should ideally demonstrate something new instead of duplicating an existing resource. <PipelineTag pipeline="text-generation" /> - A notebook on [fine-tuning OPT with PEFT, bitsandbytes, and Transformers](https://colab.research.google.com/drive/1jCkpikz0J2o20FBQmYmAGdiKmJGOMo-o?usp=sharing). ๐ŸŒŽ - A blog post on [decoding strategies with OPT](https://huggingface.co/blog/introducing-csearch#62-example-two---opt). - [Causal language modeling](https://huggingface.co/course/en/chapter7/6?fw=pt#training-a-causal-language-model-from-scratch) chapter of the ๐Ÿค— Hugging Face Course. - [`OPTForCausalLM`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#gpt-2gpt-and-causal-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb). 
- [`TFOPTForCausalLM`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_clmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb).
- [`FlaxOPTForCausalLM`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#causal-language-modeling).

<PipelineTag pipeline="text-classification" />

- [Text classification task guide](../tasks/sequence_classification)
- [`OPTForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb).

<PipelineTag pipeline="question-answering" />

- [`OPTForQuestionAnswering`] is supported by this [question answering example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb).
- [Question answering](https://huggingface.co/course/chapter7/7?fw=pt) chapter of the 🤗 Hugging Face Course.

⚡️ Inference

- A blog post on [How 🤗 Accelerate runs very large models thanks to PyTorch](https://huggingface.co/blog/accelerate-large-models) with OPT.

## Combining OPT and Flash Attention 2

First, make sure to install the latest version of Flash Attention 2.

```bash
pip install -U flash-attn --no-build-isolation
```

Also make sure that your hardware is compatible with Flash Attention 2. Read more about it in the official documentation of the flash-attn repository. Also make sure to load your model in half-precision (e.g. `torch.float16`).

To load and run a model using Flash Attention 2, refer to the snippet below:

```python
>>> import torch
>>> from transformers import OPTForCausalLM, GPT2Tokenizer
>>> device = "cuda"  # the device to load the model onto

>>> model = OPTForCausalLM.from_pretrained("facebook/opt-350m", torch_dtype=torch.float16, attn_implementation="flash_attention_2")
>>> tokenizer = GPT2Tokenizer.from_pretrained("facebook/opt-350m")

>>> prompt = ("A chat between a curious human and the Statue of Liberty.\n\nHuman: What is your name?\nStatue: I am the "
              "Statue of Liberty.\nHuman: Where do you live?\nStatue: New York City.\nHuman: How long have you lived "
              "there?")

>>> model_inputs = tokenizer([prompt], return_tensors="pt").to(device)
>>> model.to(device)

>>> generated_ids = model.generate(**model_inputs, max_new_tokens=30, do_sample=False)
>>> tokenizer.batch_decode(generated_ids)[0]
'</s>A chat between a curious human and the Statue of Liberty.\n\nHuman: What is your name?\nStatue: I am the Statue of Liberty.\nHuman: Where do you live?\nStatue: New York City.\nHuman: How long have you lived there?\nStatue: I have lived here for about a year.\nHuman: What is your favorite place to eat?\nStatue: I love'
```

### Expected speedups

Below is an expected speedup diagram that compares pure inference time between the native implementation in transformers using the `facebook/opt-2.7b` checkpoint and the Flash Attention 2 version of the model using two different sequence lengths.
<div style="text-align: center"> <img src="https://user-images.githubusercontent.com/49240599/281101546-d2fca6d2-ee44-48f3-9534-ba8d5bee4531.png"> </div> Below is an expected speedup diagram that compares pure inference time between the native implementation in transformers using `facebook/opt-350m` checkpoint and the Flash Attention 2 version of the model using two different sequence lengths. <div style="text-align: center"> <img src="https://user-images.githubusercontent.com/49240599/281101682-d1144e90-0dbc-46f4-8fc8-c6206cb793c9.png"> </div> ## OPTConfig [[autodoc]] OPTConfig <frameworkcontent> <pt> ## OPTModel [[autodoc]] OPTModel - forward ## OPTForCausalLM [[autodoc]] OPTForCausalLM - forward ## OPTForSequenceClassification [[autodoc]] OPTForSequenceClassification - forward ## OPTForQuestionAnswering [[autodoc]] OPTForQuestionAnswering - forward </pt> <tf> ## TFOPTModel [[autodoc]] TFOPTModel - call ## TFOPTForCausalLM [[autodoc]] TFOPTForCausalLM - call </tf> <jax> ## FlaxOPTModel [[autodoc]] FlaxOPTModel - __call__ ## FlaxOPTForCausalLM [[autodoc]] FlaxOPTForCausalLM - __call__ </jax> </frameworkcontent>
0
mavonic_private_repos/transformers/docs/source/en
mavonic_private_repos/transformers/docs/source/en/model_doc/cohere.md
# Cohere ## Overview The Cohere Command-R model was proposed in the blogpost [Command-R: Retrieval Augmented Generation at Production Scale](https://txt.cohere.com/command-r/) by the Cohere Team. The abstract from the paper is the following: *Command-R is a scalable generative model targeting RAG and Tool Use to enable production-scale AI for enterprise. Today, we are introducing Command-R, a new LLM aimed at large-scale production workloads. Command-R targets the emerging โ€œscalableโ€ category of models that balance high efficiency with strong accuracy, enabling companies to move beyond proof of concept, and into production.* *Command-R is a generative model optimized for long context tasks such as retrieval augmented generation (RAG) and using external APIs and tools. It is designed to work in concert with our industry-leading Embed and Rerank models to provide best-in-class integration for RAG applications and excel at enterprise use cases. As a model built for companies to implement at scale, Command-R boasts: - Strong accuracy on RAG and Tool Use - Low latency, and high throughput - Longer 128k context and lower pricing - Strong capabilities across 10 key languages - Model weights available on HuggingFace for research and evaluation Checkout model checkpoints [here](https://huggingface.co/CohereForAI/c4ai-command-r-v01). This model was contributed by [Saurabh Dash](https://huggingface.co/saurabhdash) and [Ahmet รœstรผn](https://huggingface.co/ahmetustun). The code of the implementation in Hugging Face is based on GPT-NeoX [here](https://github.com/EleutherAI/gpt-neox). ## Usage tips <Tip warning={true}> The checkpoints uploaded on the Hub use `torch_dtype = 'float16'`, which will be used by the `AutoModel` API to cast the checkpoints from `torch.float32` to `torch.float16`. The `dtype` of the online weights is mostly irrelevant unless you are using `torch_dtype="auto"` when initializing a model using `model = AutoModelForCausalLM.from_pretrained("path", torch_dtype = "auto")`. The reason is that the model will first be downloaded ( using the `dtype` of the checkpoints online), then it will be casted to the default `dtype` of `torch` (becomes `torch.float32`), and finally, if there is a `torch_dtype` provided in the config, it will be used. Training the model in `float16` is not recommended and is known to produce `nan`; as such, the model should be trained in `bfloat16`. </Tip> The model and tokenizer can be loaded via: ```python # pip install transformers from transformers import AutoTokenizer, AutoModelForCausalLM model_id = "CohereForAI/c4ai-command-r-v01" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) # Format message with the command-r chat template messages = [{"role": "user", "content": "Hello, how are you?"}] input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt") ## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> gen_tokens = model.generate( input_ids, max_new_tokens=100, do_sample=True, temperature=0.3, ) gen_text = tokenizer.decode(gen_tokens[0]) print(gen_text) ``` - When using Flash Attention 2 via `attn_implementation="flash_attention_2"`, don't pass `torch_dtype` to the `from_pretrained` class method and use Automatic Mixed-Precision training. When using `Trainer`, it is simply specifying either `fp16` or `bf16` to `True`. 
Otherwise, make sure you are using `torch.autocast`. This is required because Flash Attention only supports the `fp16` and `bf16` data types.

## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Command-R. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

<PipelineTag pipeline="text-generation"/>

Loading FP16 model

```python
# pip install transformers
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "CohereForAI/c4ai-command-r-v01"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Format message with the command-r chat template
messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>

gen_tokens = model.generate(
    input_ids,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.3,
)

gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```

Loading bitsandbytes 4-bit quantized model

```python
# pip install transformers bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(load_in_4bit=True)

model_id = "CohereForAI/c4ai-command-r-v01"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)

# Format message with the command-r chat template (defines `input_ids` before generating)
messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")

gen_tokens = model.generate(
    input_ids,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.3,
)

gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```

## CohereConfig

[[autodoc]] CohereConfig

## CohereTokenizerFast

[[autodoc]] CohereTokenizerFast
    - build_inputs_with_special_tokens
    - get_special_tokens_mask
    - create_token_type_ids_from_sequences
    - update_post_processor
    - save_vocabulary

## CohereModel

[[autodoc]] CohereModel
    - forward

## CohereForCausalLM

[[autodoc]] CohereForCausalLM
    - forward
0
mavonic_private_repos/transformers/docs/source/en
mavonic_private_repos/transformers/docs/source/en/model_doc/mluke.md
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # mLUKE ## Overview The mLUKE model was proposed in [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka. It's a multilingual extension of the [LUKE model](https://arxiv.org/abs/2010.01057) trained on the basis of XLM-RoBERTa. It is based on XLM-RoBERTa and adds entity embeddings, which helps improve performance on various downstream tasks involving reasoning about entities such as named entity recognition, extractive question answering, relation classification, cloze-style knowledge completion. The abstract from the paper is the following: *Recent studies have shown that multilingual pretrained language models can be effectively improved with cross-lingual alignment information from Wikipedia entities. However, existing methods only exploit entity information in pretraining and do not explicitly use entities in downstream tasks. In this study, we explore the effectiveness of leveraging entity representations for downstream cross-lingual tasks. We train a multilingual language model with 24 languages with entity representations and show the model consistently outperforms word-based pretrained models in various cross-lingual transfer tasks. We also analyze the model and the key insight is that incorporating entity representations into the input allows us to extract more language-agnostic features. We also evaluate the model with a multilingual cloze prompt task with the mLAMA dataset. We show that entity-based prompt elicits correct factual knowledge more likely than using only word representations.* This model was contributed by [ryo0634](https://huggingface.co/ryo0634). The original code can be found [here](https://github.com/studio-ousia/luke). ## Usage tips One can directly plug in the weights of mLUKE into a LUKE model, like so: ```python from transformers import LukeModel model = LukeModel.from_pretrained("studio-ousia/mluke-base") ``` Note that mLUKE has its own tokenizer, [`MLukeTokenizer`]. You can initialize it as follows: ```python from transformers import MLukeTokenizer tokenizer = MLukeTokenizer.from_pretrained("studio-ousia/mluke-base") ``` <Tip> As mLUKE's architecture is equivalent to that of LUKE, one can refer to [LUKE's documentation page](luke) for all tips, code examples and notebooks. </Tip> ## MLukeTokenizer [[autodoc]] MLukeTokenizer - __call__ - save_vocabulary
0
mavonic_private_repos/transformers/docs/source/en
mavonic_private_repos/transformers/docs/source/en/model_doc/table-transformer.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Table Transformer ## Overview The Table Transformer model was proposed in [PubTables-1M: Towards comprehensive table extraction from unstructured documents](https://arxiv.org/abs/2110.00061) by Brandon Smock, Rohith Pesala, Robin Abraham. The authors introduce a new dataset, PubTables-1M, to benchmark progress in table extraction from unstructured documents, as well as table structure recognition and functional analysis. The authors train 2 [DETR](detr) models, one for table detection and one for table structure recognition, dubbed Table Transformers. The abstract from the paper is the following: *Recently, significant progress has been made applying machine learning to the problem of table structure inference and extraction from unstructured documents. However, one of the greatest challenges remains the creation of datasets with complete, unambiguous ground truth at scale. To address this, we develop a new, more comprehensive dataset for table extraction, called PubTables-1M. PubTables-1M contains nearly one million tables from scientific articles, supports multiple input modalities, and contains detailed header and location information for table structures, making it useful for a wide variety of modeling approaches. It also addresses a significant source of ground truth inconsistency observed in prior datasets called oversegmentation, using a novel canonicalization procedure. We demonstrate that these improvements lead to a significant increase in training performance and a more reliable estimate of model performance at evaluation for table structure recognition. Further, we show that transformer-based object detection models trained on PubTables-1M produce excellent results for all three tasks of detection, structure recognition, and functional analysis without the need for any special customization for these tasks.* <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/table_transformer_architecture.jpeg" alt="drawing" width="600"/> <small> Table detection and table structure recognition clarified. Taken from the <a href="https://arxiv.org/abs/2110.00061">original paper</a>. </small> The authors released 2 models, one for [table detection](https://huggingface.co/microsoft/table-transformer-detection) in documents, one for [table structure recognition](https://huggingface.co/microsoft/table-transformer-structure-recognition) (the task of recognizing the individual rows, columns etc. in a table). This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/microsoft/table-transformer). 
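As a rough sketch of how the table detection checkpoint mentioned above can be used: the local file name `document_page.png` is a placeholder for an image of a document page, and the detection threshold is chosen only for illustration.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, TableTransformerForObjectDetection

# "document_page.png" is a placeholder path to an image of a document page
image = Image.open("document_page.png").convert("RGB")

# table-detection checkpoint referenced in the overview above
checkpoint = "microsoft/table-transformer-detection"
image_processor = AutoImageProcessor.from_pretrained(checkpoint)
model = TableTransformerForObjectDetection.from_pretrained(checkpoint)

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# convert the raw outputs to bounding boxes in (xmin, ymin, xmax, ymax) format
target_sizes = torch.tensor([image.size[::-1]])
results = image_processor.post_process_object_detection(outputs, threshold=0.7, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```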
## Resources <PipelineTag pipeline="object-detection"/> - A demo notebook for the Table Transformer can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/Table%20Transformer). - It turns out padding of images is quite important for detection. An interesting Github thread with replies from the authors can be found [here](https://github.com/microsoft/table-transformer/issues/68). ## TableTransformerConfig [[autodoc]] TableTransformerConfig ## TableTransformerModel [[autodoc]] TableTransformerModel - forward ## TableTransformerForObjectDetection [[autodoc]] TableTransformerForObjectDetection - forward
0
mavonic_private_repos/transformers/docs/source/en
mavonic_private_repos/transformers/docs/source/en/model_doc/flaubert.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # FlauBERT <div class="flex flex-wrap space-x-1"> <a href="https://huggingface.co/models?filter=flaubert"> <img alt="Models" src="https://img.shields.io/badge/All_model_pages-flaubert-blueviolet"> </a> <a href="https://huggingface.co/spaces/docs-demos/flaubert_small_cased"> <img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"> </a> </div> ## Overview The FlauBERT model was proposed in the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le et al. It's a transformer model pretrained using a masked language modeling (MLM) objective (like BERT). The abstract from the paper is the following: *Language models have become a key step to achieve state-of-the art results in many different Natural Language Processing (NLP) tasks. Leveraging the huge amount of unlabeled texts nowadays available, they provide an efficient way to pre-train continuous word representations that can be fine-tuned for a downstream task, along with their contextualization at the sentence level. This has been widely demonstrated for English using contextualized representations (Dai and Le, 2015; Peters et al., 2018; Howard and Ruder, 2018; Radford et al., 2018; Devlin et al., 2019; Yang et al., 2019b). In this paper, we introduce and share FlauBERT, a model learned on a very large and heterogeneous French corpus. Models of different sizes are trained using the new CNRS (French National Centre for Scientific Research) Jean Zay supercomputer. We apply our French language models to diverse NLP tasks (text classification, paraphrasing, natural language inference, parsing, word sense disambiguation) and show that most of the time they outperform other pretraining approaches. Different versions of FlauBERT as well as a unified evaluation protocol for the downstream tasks, called FLUE (French Language Understanding Evaluation), are shared to the research community for further reproducible experiments in French NLP.* This model was contributed by [formiel](https://huggingface.co/formiel). The original code can be found [here](https://github.com/getalp/Flaubert). Tips: - Like RoBERTa, without the sentence ordering prediction (so just trained on the MLM objective). 
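As a minimal sketch of plain feature extraction with FlauBERT (the `flaubert/flaubert_base_cased` checkpoint is assumed here purely for illustration; other sizes expose the same interface):

```python
import torch
from transformers import FlaubertModel, FlaubertTokenizer

# assumed checkpoint for illustration
checkpoint = "flaubert/flaubert_base_cased"
tokenizer = FlaubertTokenizer.from_pretrained(checkpoint)
model = FlaubertModel.from_pretrained(checkpoint)

# encode a French sentence and extract contextual token representations
inputs = tokenizer("Le chat dort sur le canapé.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

last_hidden_state = outputs.last_hidden_state  # (batch_size, sequence_length, hidden_size)
print(last_hidden_state.shape)
```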
## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) ## FlaubertConfig [[autodoc]] FlaubertConfig ## FlaubertTokenizer [[autodoc]] FlaubertTokenizer <frameworkcontent> <pt> ## FlaubertModel [[autodoc]] FlaubertModel - forward ## FlaubertWithLMHeadModel [[autodoc]] FlaubertWithLMHeadModel - forward ## FlaubertForSequenceClassification [[autodoc]] FlaubertForSequenceClassification - forward ## FlaubertForMultipleChoice [[autodoc]] FlaubertForMultipleChoice - forward ## FlaubertForTokenClassification [[autodoc]] FlaubertForTokenClassification - forward ## FlaubertForQuestionAnsweringSimple [[autodoc]] FlaubertForQuestionAnsweringSimple - forward ## FlaubertForQuestionAnswering [[autodoc]] FlaubertForQuestionAnswering - forward </pt> <tf> ## TFFlaubertModel [[autodoc]] TFFlaubertModel - call ## TFFlaubertWithLMHeadModel [[autodoc]] TFFlaubertWithLMHeadModel - call ## TFFlaubertForSequenceClassification [[autodoc]] TFFlaubertForSequenceClassification - call ## TFFlaubertForMultipleChoice [[autodoc]] TFFlaubertForMultipleChoice - call ## TFFlaubertForTokenClassification [[autodoc]] TFFlaubertForTokenClassification - call ## TFFlaubertForQuestionAnsweringSimple [[autodoc]] TFFlaubertForQuestionAnsweringSimple - call </tf> </frameworkcontent>
0
mavonic_private_repos/transformers/docs/source/en
mavonic_private_repos/transformers/docs/source/en/model_doc/mobilevit.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # MobileViT ## Overview The MobileViT model was proposed in [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari. MobileViT introduces a new layer that replaces local processing in convolutions with global processing using transformers. The abstract from the paper is the following: *Light-weight convolutional neural networks (CNNs) are the de-facto for mobile vision tasks. Their spatial inductive biases allow them to learn representations with fewer parameters across different vision tasks. However, these networks are spatially local. To learn global representations, self-attention-based vision trans-formers (ViTs) have been adopted. Unlike CNNs, ViTs are heavy-weight. In this paper, we ask the following question: is it possible to combine the strengths of CNNs and ViTs to build a light-weight and low latency network for mobile vision tasks? Towards this end, we introduce MobileViT, a light-weight and general-purpose vision transformer for mobile devices. MobileViT presents a different perspective for the global processing of information with transformers, i.e., transformers as convolutions. Our results show that MobileViT significantly outperforms CNN- and ViT-based networks across different tasks and datasets. On the ImageNet-1k dataset, MobileViT achieves top-1 accuracy of 78.4% with about 6 million parameters, which is 3.2% and 6.2% more accurate than MobileNetv3 (CNN-based) and DeIT (ViT-based) for a similar number of parameters. On the MS-COCO object detection task, MobileViT is 5.7% more accurate than MobileNetv3 for a similar number of parameters.* This model was contributed by [matthijs](https://huggingface.co/Matthijs). The TensorFlow version of the model was contributed by [sayakpaul](https://huggingface.co/sayakpaul). The original code and weights can be found [here](https://github.com/apple/ml-cvnets). ## Usage tips - MobileViT is more like a CNN than a Transformer model. It does not work on sequence data but on batches of images. Unlike ViT, there are no embeddings. The backbone model outputs a feature map. You can follow [this tutorial](https://keras.io/examples/vision/mobilevit) for a lightweight introduction. - One can use [`MobileViTImageProcessor`] to prepare images for the model. Note that if you do your own preprocessing, the pretrained checkpoints expect images to be in BGR pixel order (not RGB). - The available image classification checkpoints are pre-trained on [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k) (also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes). - The segmentation model uses a [DeepLabV3](https://arxiv.org/abs/1706.05587) head. 
The available semantic segmentation checkpoints are pre-trained on [PASCAL VOC](http://host.robots.ox.ac.uk/pascal/VOC/). - As the name suggests MobileViT was designed to be performant and efficient on mobile phones. The TensorFlow versions of the MobileViT models are fully compatible with [TensorFlow Lite](https://www.tensorflow.org/lite). You can use the following code to convert a MobileViT checkpoint (be it image classification or semantic segmentation) to generate a TensorFlow Lite model: ```py from transformers import TFMobileViTForImageClassification import tensorflow as tf model_ckpt = "apple/mobilevit-xx-small" model = TFMobileViTForImageClassification.from_pretrained(model_ckpt) converter = tf.lite.TFLiteConverter.from_keras_model(model) converter.optimizations = [tf.lite.Optimize.DEFAULT] converter.target_spec.supported_ops = [ tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS, ] tflite_model = converter.convert() tflite_filename = model_ckpt.split("/")[-1] + ".tflite" with open(tflite_filename, "wb") as f: f.write(tflite_model) ``` The resulting model will be just **about an MB** making it a good fit for mobile applications where resources and network bandwidth can be constrained. ## Resources A list of official Hugging Face and community (indicated by ๐ŸŒŽ) resources to help you get started with MobileViT. <PipelineTag pipeline="image-classification"/> - [`MobileViTForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb). - See also: [Image classification task guide](../tasks/image_classification) **Semantic segmentation** - [Semantic segmentation task guide](../tasks/semantic_segmentation) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. ## MobileViTConfig [[autodoc]] MobileViTConfig ## MobileViTFeatureExtractor [[autodoc]] MobileViTFeatureExtractor - __call__ - post_process_semantic_segmentation ## MobileViTImageProcessor [[autodoc]] MobileViTImageProcessor - preprocess - post_process_semantic_segmentation <frameworkcontent> <pt> ## MobileViTModel [[autodoc]] MobileViTModel - forward ## MobileViTForImageClassification [[autodoc]] MobileViTForImageClassification - forward ## MobileViTForSemanticSegmentation [[autodoc]] MobileViTForSemanticSegmentation - forward </pt> <tf> ## TFMobileViTModel [[autodoc]] TFMobileViTModel - call ## TFMobileViTForImageClassification [[autodoc]] TFMobileViTForImageClassification - call ## TFMobileViTForSemanticSegmentation [[autodoc]] TFMobileViTForSemanticSegmentation - call </tf> </frameworkcontent>
0
mavonic_private_repos/transformers/docs/source/en
mavonic_private_repos/transformers/docs/source/en/model_doc/bert.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # BERT <div class="flex flex-wrap space-x-1"> <a href="https://huggingface.co/models?filter=bert"> <img alt="Models" src="https://img.shields.io/badge/All_model_pages-bert-blueviolet"> </a> <a href="https://huggingface.co/spaces/docs-demos/bert-base-uncased"> <img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"> </a> </div> ## Overview The BERT model was proposed in [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. It's a bidirectional transformer pretrained using a combination of masked language modeling objective and next sentence prediction on a large corpus comprising the Toronto Book Corpus and Wikipedia. The abstract from the paper is the following: *We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications.* *BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).* This model was contributed by [thomwolf](https://huggingface.co/thomwolf). The original code can be found [here](https://github.com/google-research/bert). ## Usage tips - BERT is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left. - BERT was trained with the masked language modeling (MLM) and next sentence prediction (NSP) objectives. It is efficient at predicting masked tokens and at NLU in general, but is not optimal for text generation. 
- Corrupts the inputs by using random masking, more precisely, during pretraining, a given percentage of tokens (usually 15%) is masked by: * a special mask token with probability 0.8 * a random token different from the one masked with probability 0.1 * the same token with probability 0.1 - The model must predict the original sentence, but has a second objective: inputs are two sentences A and B (with a separation token in between). With probability 50%, the sentences are consecutive in the corpus, in the remaining 50% they are not related. The model has to predict if the sentences are consecutive or not. ### Using Scaled Dot Product Attention (SDPA) PyTorch includes a native scaled dot-product attention (SDPA) operator as part of `torch.nn.functional`. This function encompasses several implementations that can be applied depending on the inputs and the hardware in use. See the [official documentation](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) or the [GPU Inference](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#pytorch-scaled-dot-product-attention) page for more information. SDPA is used by default for `torch>=2.1.1` when an implementation is available, but you may also set `attn_implementation="sdpa"` in `from_pretrained()` to explicitly request SDPA to be used. ``` from transformers import BertModel model = BertModel.from_pretrained("bert-base-uncased", torch_dtype=torch.float16, attn_implementation="sdpa") ... ``` For the best speedups, we recommend loading the model in half-precision (e.g. `torch.float16` or `torch.bfloat16`). On a local benchmark (A100-80GB, CPUx12, RAM 96.6GB, PyTorch 2.2.0, OS Ubuntu 22.04) with `float16`, we saw the following speedups during training and inference. #### Training |batch_size|seq_len|Time per batch (eager - s)|Time per batch (sdpa - s)|Speedup (%)|Eager peak mem (MB)|sdpa peak mem (MB)|Mem saving (%)| |----------|-------|--------------------------|-------------------------|-----------|-------------------|------------------|--------------| |4 |256 |0.023 |0.017 |35.472 |939.213 |764.834 |22.800 | |4 |512 |0.023 |0.018 |23.687 |1970.447 |1227.162 |60.569 | |8 |256 |0.023 |0.018 |23.491 |1594.295 |1226.114 |30.028 | |8 |512 |0.035 |0.025 |43.058 |3629.401 |2134.262 |70.054 | |16 |256 |0.030 |0.024 |25.583 |2874.426 |2134.262 |34.680 | |16 |512 |0.064 |0.044 |46.223 |6964.659 |3961.013 |75.830 | #### Inference |batch_size|seq_len|Per token latency eager (ms)|Per token latency SDPA (ms)|Speedup (%)|Mem eager (MB)|Mem BT (MB)|Mem saved (%)| |----------|-------|----------------------------|---------------------------|-----------|--------------|-----------|-------------| |1 |128 |5.736 |4.987 |15.022 |282.661 |282.924 |-0.093 | |1 |256 |5.689 |4.945 |15.055 |298.686 |298.948 |-0.088 | |2 |128 |6.154 |4.982 |23.521 |314.523 |314.785 |-0.083 | |2 |256 |6.201 |4.949 |25.303 |347.546 |347.033 |0.148 | |4 |128 |6.049 |4.987 |21.305 |378.895 |379.301 |-0.107 | |4 |256 |6.285 |5.364 |17.166 |443.209 |444.382 |-0.264 | ## Resources A list of official Hugging Face and community (indicated by ๐ŸŒŽ) resources to help you get started with BERT. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. 
<PipelineTag pipeline="text-classification"/> - A blog post on [BERT Text Classification in a different language](https://www.philschmid.de/bert-text-classification-in-a-different-language). - A notebook for [Finetuning BERT (and friends) for multi-label text classification](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/BERT/Fine_tuning_BERT_(and_friends)_for_multi_label_text_classification.ipynb). - A notebook on how to [Finetune BERT for multi-label classification using PyTorch](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_multi_label_classification.ipynb). ๐ŸŒŽ - A notebook on how to [warm-start an EncoderDecoder model with BERT for summarization](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/BERT2BERT_for_CNN_Dailymail.ipynb). - [`BertForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb). - [`TFBertForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb). - [`FlaxBertForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification_flax.ipynb). - [Text classification task guide](../tasks/sequence_classification) <PipelineTag pipeline="token-classification"/> - A blog post on how to use [Hugging Face Transformers with Keras: Fine-tune a non-English BERT for Named Entity Recognition](https://www.philschmid.de/huggingface-transformers-keras-tf). - A notebook for [Finetuning BERT for named-entity recognition](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/BERT/Custom_Named_Entity_Recognition_with_BERT_only_first_wordpiece.ipynb) using only the first wordpiece of each word in the word label during tokenization. To propagate the label of the word to all wordpieces, see this [version](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/BERT/Custom_Named_Entity_Recognition_with_BERT.ipynb) of the notebook instead. - [`BertForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification.ipynb). - [`TFBertForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb). - [`FlaxBertForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/token-classification). - [Token classification](https://huggingface.co/course/chapter7/2?fw=pt) chapter of the ๐Ÿค— Hugging Face Course. 
- [Token classification task guide](../tasks/token_classification) <PipelineTag pipeline="fill-mask"/> - [`BertForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#robertabertdistilbert-and-masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb). - [`TFBertForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_mlmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb). - [`FlaxBertForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/masked_language_modeling_flax.ipynb). - [Masked language modeling](https://huggingface.co/course/chapter7/3?fw=pt) chapter of the ๐Ÿค— Hugging Face Course. - [Masked language modeling task guide](../tasks/masked_language_modeling) <PipelineTag pipeline="question-answering"/> - [`BertForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb). - [`TFBertForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb). - [`FlaxBertForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/question-answering). - [Question answering](https://huggingface.co/course/chapter7/7?fw=pt) chapter of the ๐Ÿค— Hugging Face Course. - [Question answering task guide](../tasks/question_answering) **Multiple choice** - [`BertForMultipleChoice`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb). - [`TFBertForMultipleChoice`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb). - [Multiple choice task guide](../tasks/multiple_choice) โšก๏ธ **Inference** - A blog post on how to [Accelerate BERT inference with Hugging Face Transformers and AWS Inferentia](https://huggingface.co/blog/bert-inferentia-sagemaker). - A blog post on how to [Accelerate BERT inference with DeepSpeed-Inference on GPUs](https://www.philschmid.de/bert-deepspeed-inference). โš™๏ธ **Pretraining** - A blog post on [Pre-Training BERT with Hugging Face Transformers and Habana Gaudi](https://www.philschmid.de/pre-training-bert-habana). ๐Ÿš€ **Deploy** - A blog post on how to [Convert Transformers to ONNX with Hugging Face Optimum](https://www.philschmid.de/convert-transformers-to-onnx). 
- A blog post on how to [Setup Deep Learning environment for Hugging Face Transformers with Habana Gaudi on AWS](https://www.philschmid.de/getting-started-habana-gaudi#conclusion). - A blog post on [Autoscaling BERT with Hugging Face Transformers, Amazon SageMaker and Terraform module](https://www.philschmid.de/terraform-huggingface-amazon-sagemaker-advanced). - A blog post on [Serverless BERT with HuggingFace, AWS Lambda, and Docker](https://www.philschmid.de/serverless-bert-with-huggingface-aws-lambda-docker). - A blog post on [Hugging Face Transformers BERT fine-tuning using Amazon SageMaker and Training Compiler](https://www.philschmid.de/huggingface-amazon-sagemaker-training-compiler). - A blog post on [Task-specific knowledge distillation for BERT using Transformers & Amazon SageMaker](https://www.philschmid.de/knowledge-distillation-bert-transformers). ## BertConfig [[autodoc]] BertConfig - all ## BertTokenizer [[autodoc]] BertTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary <frameworkcontent> <pt> ## BertTokenizerFast [[autodoc]] BertTokenizerFast </pt> <tf> ## TFBertTokenizer [[autodoc]] TFBertTokenizer </tf> </frameworkcontent> ## Bert specific outputs [[autodoc]] models.bert.modeling_bert.BertForPreTrainingOutput [[autodoc]] models.bert.modeling_tf_bert.TFBertForPreTrainingOutput [[autodoc]] models.bert.modeling_flax_bert.FlaxBertForPreTrainingOutput <frameworkcontent> <pt> ## BertModel [[autodoc]] BertModel - forward ## BertForPreTraining [[autodoc]] BertForPreTraining - forward ## BertLMHeadModel [[autodoc]] BertLMHeadModel - forward ## BertForMaskedLM [[autodoc]] BertForMaskedLM - forward ## BertForNextSentencePrediction [[autodoc]] BertForNextSentencePrediction - forward ## BertForSequenceClassification [[autodoc]] BertForSequenceClassification - forward ## BertForMultipleChoice [[autodoc]] BertForMultipleChoice - forward ## BertForTokenClassification [[autodoc]] BertForTokenClassification - forward ## BertForQuestionAnswering [[autodoc]] BertForQuestionAnswering - forward </pt> <tf> ## TFBertModel [[autodoc]] TFBertModel - call ## TFBertForPreTraining [[autodoc]] TFBertForPreTraining - call ## TFBertModelLMHeadModel [[autodoc]] TFBertLMHeadModel - call ## TFBertForMaskedLM [[autodoc]] TFBertForMaskedLM - call ## TFBertForNextSentencePrediction [[autodoc]] TFBertForNextSentencePrediction - call ## TFBertForSequenceClassification [[autodoc]] TFBertForSequenceClassification - call ## TFBertForMultipleChoice [[autodoc]] TFBertForMultipleChoice - call ## TFBertForTokenClassification [[autodoc]] TFBertForTokenClassification - call ## TFBertForQuestionAnswering [[autodoc]] TFBertForQuestionAnswering - call </tf> <jax> ## FlaxBertModel [[autodoc]] FlaxBertModel - __call__ ## FlaxBertForPreTraining [[autodoc]] FlaxBertForPreTraining - __call__ ## FlaxBertForCausalLM [[autodoc]] FlaxBertForCausalLM - __call__ ## FlaxBertForMaskedLM [[autodoc]] FlaxBertForMaskedLM - __call__ ## FlaxBertForNextSentencePrediction [[autodoc]] FlaxBertForNextSentencePrediction - __call__ ## FlaxBertForSequenceClassification [[autodoc]] FlaxBertForSequenceClassification - __call__ ## FlaxBertForMultipleChoice [[autodoc]] FlaxBertForMultipleChoice - __call__ ## FlaxBertForTokenClassification [[autodoc]] FlaxBertForTokenClassification - __call__ ## FlaxBertForQuestionAnswering [[autodoc]] FlaxBertForQuestionAnswering - __call__ </jax> </frameworkcontent>
mavonic_private_repos/transformers/docs/source/en/model_doc/squeezebert.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # SqueezeBERT ## Overview The SqueezeBERT model was proposed in [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, Kurt W. Keutzer. It's a bidirectional transformer similar to the BERT model. The key difference between the BERT architecture and the SqueezeBERT architecture is that SqueezeBERT uses [grouped convolutions](https://blog.yani.io/filter-group-tutorial) instead of fully-connected layers for the Q, K, V and FFN layers. The abstract from the paper is the following: *Humans read and write hundreds of billions of messages every day. Further, due to the availability of large datasets, large computing systems, and better neural network models, natural language processing (NLP) technology has made significant strides in understanding, proofreading, and organizing these messages. Thus, there is a significant opportunity to deploy NLP in myriad applications to help web users, social networks, and businesses. In particular, we consider smartphones and other mobile devices as crucial platforms for deploying NLP models at scale. However, today's highly-accurate NLP neural network models such as BERT and RoBERTa are extremely computationally expensive, with BERT-base taking 1.7 seconds to classify a text snippet on a Pixel 3 smartphone. In this work, we observe that methods such as grouped convolutions have yielded significant speedups for computer vision networks, but many of these techniques have not been adopted by NLP neural network designers. We demonstrate how to replace several operations in self-attention layers with grouped convolutions, and we use this technique in a novel network architecture called SqueezeBERT, which runs 4.3x faster than BERT-base on the Pixel 3 while achieving competitive accuracy on the GLUE test set. The SqueezeBERT code will be released.* This model was contributed by [forresti](https://huggingface.co/forresti). ## Usage tips - SqueezeBERT is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left. - SqueezeBERT is similar to BERT and therefore relies on the masked language modeling (MLM) objective. It is therefore efficient at predicting masked tokens and at NLU in general, but is not optimal for text generation. Models trained with a causal language modeling (CLM) objective are better in that regard. - For best results when finetuning on sequence classification tasks, it is recommended to start with the *squeezebert/squeezebert-mnli-headless* checkpoint. 
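Building on the finetuning tip above, the following is a minimal sketch of starting from the *squeezebert/squeezebert-mnli-headless* checkpoint for sequence classification. The `num_labels=2` value and the example sentence are placeholders for your own task; a fresh classification head is initialized on top of the pretrained encoder.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Start from the recommended "headless" checkpoint; a new classification
# head is randomly initialized and should be finetuned on your task.
tokenizer = AutoTokenizer.from_pretrained("squeezebert/squeezebert-mnli-headless")
model = AutoModelForSequenceClassification.from_pretrained(
    "squeezebert/squeezebert-mnli-headless", num_labels=2
)

inputs = tokenizer("SqueezeBERT runs efficiently on mobile devices.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # torch.Size([1, 2])
```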
## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) ## SqueezeBertConfig [[autodoc]] SqueezeBertConfig ## SqueezeBertTokenizer [[autodoc]] SqueezeBertTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## SqueezeBertTokenizerFast [[autodoc]] SqueezeBertTokenizerFast ## SqueezeBertModel [[autodoc]] SqueezeBertModel ## SqueezeBertForMaskedLM [[autodoc]] SqueezeBertForMaskedLM ## SqueezeBertForSequenceClassification [[autodoc]] SqueezeBertForSequenceClassification ## SqueezeBertForMultipleChoice [[autodoc]] SqueezeBertForMultipleChoice ## SqueezeBertForTokenClassification [[autodoc]] SqueezeBertForTokenClassification ## SqueezeBertForQuestionAnswering [[autodoc]] SqueezeBertForQuestionAnswering
mavonic_private_repos/transformers/docs/source/en/model_doc/nat.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Neighborhood Attention Transformer ## Overview NAT was proposed in [Neighborhood Attention Transformer](https://arxiv.org/abs/2204.07143) by Ali Hassani, Steven Walton, Jiachen Li, Shen Li, and Humphrey Shi. It is a hierarchical vision transformer based on Neighborhood Attention, a sliding-window self attention pattern. The abstract from the paper is the following: *We present Neighborhood Attention (NA), the first efficient and scalable sliding-window attention mechanism for vision. NA is a pixel-wise operation, localizing self attention (SA) to the nearest neighboring pixels, and therefore enjoys a linear time and space complexity compared to the quadratic complexity of SA. The sliding-window pattern allows NA's receptive field to grow without needing extra pixel shifts, and preserves translational equivariance, unlike Swin Transformer's Window Self Attention (WSA). We develop NATTEN (Neighborhood Attention Extension), a Python package with efficient C++ and CUDA kernels, which allows NA to run up to 40% faster than Swin's WSA while using up to 25% less memory. We further present Neighborhood Attention Transformer (NAT), a new hierarchical transformer design based on NA that boosts image classification and downstream vision performance. Experimental results on NAT are competitive; NAT-Tiny reaches 83.2% top-1 accuracy on ImageNet, 51.4% mAP on MS-COCO and 48.4% mIoU on ADE20K, which is 1.9% ImageNet accuracy, 1.0% COCO mAP, and 2.6% ADE20K mIoU improvement over a Swin model with similar size. * <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/neighborhood-attention-pattern.jpg" alt="drawing" width="600"/> <small> Neighborhood Attention compared to other attention patterns. Taken from the <a href="https://arxiv.org/abs/2204.07143">original paper</a>.</small> This model was contributed by [Ali Hassani](https://huggingface.co/alihassanijr). The original code can be found [here](https://github.com/SHI-Labs/Neighborhood-Attention-Transformer). ## Usage tips - One can use the [`AutoImageProcessor`] API to prepare images for the model. - NAT can be used as a *backbone*. When `output_hidden_states = True`, it will output both `hidden_states` and `reshaped_hidden_states`. The `reshaped_hidden_states` have a shape of `(batch, num_channels, height, width)` rather than `(batch_size, height, width, num_channels)`. Notes: - NAT depends on [NATTEN](https://github.com/SHI-Labs/NATTEN/)'s implementation of Neighborhood Attention. You can install it with pre-built wheels for Linux by referring to [shi-labs.com/natten](https://shi-labs.com/natten), or build on your system by running `pip install natten`. Note that the latter will likely take time to compile. NATTEN does not support Windows devices yet. 
- Patch size of 4 is only supported at the moment. ## Resources A list of official Hugging Face and community (indicated by ๐ŸŒŽ) resources to help you get started with NAT. <PipelineTag pipeline="image-classification"/> - [`NatForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb). - See also: [Image classification task guide](../tasks/image_classification) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. ## NatConfig [[autodoc]] NatConfig ## NatModel [[autodoc]] NatModel - forward ## NatForImageClassification [[autodoc]] NatForImageClassification - forward
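To tie the usage tips above together, here is a hedged sketch of image classification inference that also inspects the reshaped hidden states. The `shi-labs/nat-mini-in1k-224` checkpoint name is an assumption (any NAT checkpoint from the Hub should work), and NATTEN must be installed as described in the notes above.

```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, NatForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("shi-labs/nat-mini-in1k-224")
model = NatForImageClassification.from_pretrained("shi-labs/nat-mini-in1k-224")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

print(model.config.id2label[outputs.logits.argmax(-1).item()])
# reshaped_hidden_states come out as (batch, num_channels, height, width)
print(outputs.reshaped_hidden_states[-1].shape)
```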
mavonic_private_repos/transformers/docs/source/en/model_doc/speech_to_text.md
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Speech2Text ## Overview The Speech2Text model was proposed in [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino. It's a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the transcripts/translations autoregressively. Speech2Text has been fine-tuned on several datasets for ASR and ST: [LibriSpeech](http://www.openslr.org/12), [CoVoST 2](https://github.com/facebookresearch/covost), [MuST-C](https://ict.fbk.eu/must-c/). This model was contributed by [valhalla](https://huggingface.co/valhalla). The original code can be found [here](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text). ## Inference Speech2Text is a speech model that accepts a float tensor of log-mel filter-bank features extracted from the speech signal. It's a transformer-based seq2seq model, so the transcripts/translations are generated autoregressively. The `generate()` method can be used for inference. The [`Speech2TextFeatureExtractor`] class is responsible for extracting the log-mel filter-bank features. The [`Speech2TextProcessor`] wraps [`Speech2TextFeatureExtractor`] and [`Speech2TextTokenizer`] into a single instance to both extract the input features and decode the predicted token ids. The feature extractor depends on `torchaudio` and the tokenizer depends on `sentencepiece` so be sure to install those packages before running the examples. You could either install those as extra speech dependencies with `pip install transformers"[speech, sentencepiece]"` or install the packages separately with `pip install torchaudio sentencepiece`. Also `torchaudio` requires the development version of the [libsndfile](http://www.mega-nerd.com/libsndfile/) package which can be installed via a system package manager. 
On Ubuntu it can be installed as follows: `apt install libsndfile1-dev` - ASR and Speech Translation ```python >>> import torch >>> from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration >>> from datasets import load_dataset >>> model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr") >>> processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr") >>> ds = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation") >>> inputs = processor(ds[0]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], return_tensors="pt") >>> generated_ids = model.generate(inputs["input_features"], attention_mask=inputs["attention_mask"]) >>> transcription = processor.batch_decode(generated_ids, skip_special_tokens=True) >>> transcription ['mister quilter is the apostle of the middle classes and we are glad to welcome his gospel'] ``` - Multilingual speech translation For multilingual speech translation models, `eos_token_id` is used as the `decoder_start_token_id` and the target language id is forced as the first generated token. To force the target language id as the first generated token, pass the `forced_bos_token_id` parameter to the `generate()` method. The following example shows how to transate English speech to French text using the *facebook/s2t-medium-mustc-multilingual-st* checkpoint. ```python >>> import torch >>> from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration >>> from datasets import load_dataset >>> model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-medium-mustc-multilingual-st") >>> processor = Speech2TextProcessor.from_pretrained("facebook/s2t-medium-mustc-multilingual-st") >>> ds = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation") >>> inputs = processor(ds[0]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], return_tensors="pt") >>> generated_ids = model.generate( ... inputs["input_features"], ... attention_mask=inputs["attention_mask"], ... forced_bos_token_id=processor.tokenizer.lang_code_to_id["fr"], ... ) >>> translation = processor.batch_decode(generated_ids, skip_special_tokens=True) >>> translation ["(Vidรฉo) Si M. Kilder est l'apossible des classes moyennes, et nous sommes heureux d'รชtre accueillis dans son รฉvangile."] ``` See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for Speech2Text checkpoints. ## Speech2TextConfig [[autodoc]] Speech2TextConfig ## Speech2TextTokenizer [[autodoc]] Speech2TextTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## Speech2TextFeatureExtractor [[autodoc]] Speech2TextFeatureExtractor - __call__ ## Speech2TextProcessor [[autodoc]] Speech2TextProcessor - __call__ - from_pretrained - save_pretrained - batch_decode - decode <frameworkcontent> <pt> ## Speech2TextModel [[autodoc]] Speech2TextModel - forward ## Speech2TextForConditionalGeneration [[autodoc]] Speech2TextForConditionalGeneration - forward </pt> <tf> ## TFSpeech2TextModel [[autodoc]] TFSpeech2TextModel - call ## TFSpeech2TextForConditionalGeneration [[autodoc]] TFSpeech2TextForConditionalGeneration - call </tf> </frameworkcontent>
mavonic_private_repos/transformers/docs/source/en/model_doc/rwkv.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # RWKV ## Overview The RWKV model was proposed in [this repo](https://github.com/BlinkDL/RWKV-LM) It suggests a tweak in the traditional Transformer attention to make it linear. This way, the model can be used as recurrent network: passing inputs for timestamp 0 and timestamp 1 together is the same as passing inputs at timestamp 0, then inputs at timestamp 1 along with the state of timestamp 0 (see example below). This can be more efficient than a regular Transformer and can deal with sentence of any length (even if the model uses a fixed context length for training). This model was contributed by [sgugger](https://huggingface.co/sgugger). The original code can be found [here](https://github.com/BlinkDL/RWKV-LM). ## Usage example ```py import torch from transformers import AutoTokenizer, RwkvConfig, RwkvModel model = RwkvModel.from_pretrained("sgugger/rwkv-430M-pile") tokenizer = AutoTokenizer.from_pretrained("sgugger/rwkv-430M-pile") inputs = tokenizer("This is an example.", return_tensors="pt") # Feed everything to the model outputs = model(inputs["input_ids"]) output_whole = outputs.last_hidden_state outputs = model(inputs["input_ids"][:, :2]) output_one = outputs.last_hidden_state # Using the state computed on the first inputs, we will get the same output outputs = model(inputs["input_ids"][:, 2:], state=outputs.state) output_two = outputs.last_hidden_state torch.allclose(torch.cat([output_one, output_two], dim=1), output_whole, atol=1e-5) ``` If you want to make sure the model stops generating when `'\n\n'` is detected, we recommend using the following stopping criteria: ```python from transformers import StoppingCriteria class RwkvStoppingCriteria(StoppingCriteria): def __init__(self, eos_sequence = [187,187], eos_token_id = 537): self.eos_sequence = eos_sequence self.eos_token_id = eos_token_id def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool: last_2_ids = input_ids[:,-2:].tolist() return self.eos_sequence in last_2_ids output = model.generate(inputs["input_ids"], max_new_tokens=64, stopping_criteria = [RwkvStoppingCriteria()]) ``` ## RwkvConfig [[autodoc]] RwkvConfig ## RwkvModel [[autodoc]] RwkvModel - forward ## RwkvLMHeadModel [[autodoc]] RwkvForCausalLM - forward ## Rwkv attention and the recurrent formulas In a traditional auto-regressive Transformer, attention is written as $$O = \hbox{softmax}(QK^{T} / \sqrt{d}) V$$ with \\(Q\\), \\(K\\) and \\(V\\) are matrices of shape `seq_len x hidden_size` named query, key and value (they are actually bigger matrices with a batch dimension and an attention head dimension but we're only interested in the last two, which is where the matrix product is taken, so for the sake of simplicity we only consider those two). 
The product \\(QK^{T}\\) then has shape `seq_len x seq_len` and we can take the matrix product with \\(V\\) to get the output \\(O\\) of the same shape as the others. Replacing the softmax by its value gives: $$O_{i} = \frac{\sum_{j=1}^{i} e^{Q_{i} K_{j}^{T} / \sqrt{d}} V_{j}}{\sum_{j=1}^{i} e^{Q_{i} K_{j}^{T} / \sqrt{d}}}$$ Note that the entries in \\(QK^{T}\\) corresponding to \\(j > i\\) are masked (the sum stops at j) because the attention is not allowed to look at future tokens (only past ones). In comparison, the RWKV attention is given by $$O_{i} = \sigma(R_{i}) \frac{\sum_{j=1}^{i} e^{W_{i-j} + K_{j}} V_{j}}{\sum_{j=1}^{i} e^{W_{i-j} + K_{j}}}$$ where \\(R\\) is a new matrix called receptance by the author, \\(K\\) and \\(V\\) are still the key and value (\\(\sigma\\) here is the sigmoid function). \\(W\\) is a new vector that represents the position of the token and is given by $$W_{0} = u \hbox{ and } W_{k} = (k-1)w \hbox{ for } k \geq 1$$ with \\(u\\) and \\(w\\) learnable parameters called in the code `time_first` and `time_decay` respectively. The numerator and denominator can both be expressed recursively. Naming them \\(N_{i}\\) and \\(D_{i}\\) we have: $$N_{i} = e^{u + K_{i}} V_{i} + \hat{N}_{i} \hbox{ where } \hat{N}_{i} = e^{K_{i-1}} V_{i-1} + e^{w + K_{i-2}} V_{i-2} \cdots + e^{(i-2)w + K_{1}} V_{1}$$ so \\(\hat{N}_{i}\\) (called `numerator_state` in the code) satisfies $$\hat{N}_{0} = 0 \hbox{ and } \hat{N}_{j+1} = e^{K_{j}} V_{j} + e^{w} \hat{N}_{j}$$ and $$D_{i} = e^{u + K_{i}} + \hat{D}_{i} \hbox{ where } \hat{D}_{i} = e^{K_{i-1}} + e^{w + K_{i-2}} \cdots + e^{(i-2)w + K_{1}}$$ so \\(\hat{D}_{i}\\) (called `denominator_state` in the code) satisfies $$\hat{D}_{0} = 0 \hbox{ and } \hat{D}_{j+1} = e^{K_{j}} + e^{w} \hat{D}_{j}$$ The actual recurrent formula used are a tiny bit more complex, as for numerical stability we don't want to compute exponentials of big numbers. Usually the softmax is not computed as is, but the exponential of the maximum term is divided of the numerator and denominator: $$\frac{e^{x_{i}}}{\sum_{j=1}^{n} e^{x_{j}}} = \frac{e^{x_{i} - M}}{\sum_{j=1}^{n} e^{x_{j} - M}}$$ with \\(M\\) the maximum of all \\(x_{j}\\). So here on top of saving the numerator state (\\(\hat{N}\\)) and the denominator state (\\(\hat{D}\\)) we also keep track of the maximum of all terms encountered in the exponentials. So we actually use $$\tilde{N}_{i} = e^{-M_{i}} \hat{N}_{i} \hbox{ and } \tilde{D}_{i} = e^{-M_{i}} \hat{D}_{i}$$ defined by the following recurrent formulas: $$\tilde{N}_{0} = 0 \hbox{ and } \tilde{N}_{j+1} = e^{K_{j} - q} V_{j} + e^{w + M_{j} - q} \tilde{N}_{j} \hbox{ where } q = \max(K_{j}, w + M_{j})$$ and $$\tilde{D}_{0} = 0 \hbox{ and } \tilde{D}_{j+1} = e^{K_{j} - q} + e^{w + M_{j} - q} \tilde{D}_{j} \hbox{ where } q = \max(K_{j}, w + M_{j})$$ and \\(M_{j+1} = q\\). With those, we can then compute $$N_{i} = e^{u + K_{i} - q} V_{i} + e^{M_{i}} \tilde{N}_{i} \hbox{ where } q = \max(u + K_{i}, M_{i})$$ and $$D_{i} = e^{u + K_{i} - q} + e^{M_{i}} \tilde{D}_{i} \hbox{ where } q = \max(u + K_{i}, M_{i})$$ which finally gives us $$O_{i} = \sigma(R_{i}) \frac{N_{i}}{D_{i}}$$
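To make the recurrence above concrete, here is a minimal PyTorch sketch of the numerically stable recurrent form, processing one token at a time for a single sequence. Variable names follow the text (`time_first` is \\(u\\), `time_decay` is \\(w\\)); this is an illustrative re-implementation of the formulas, not the optimized kernel shipped with the library.

```python
import torch


def rwkv_recurrent_attention(r, k, v, time_first, time_decay):
    """Naive recurrent form of the RWKV attention derived above.

    r, k, v: tensors of shape (seq_len, hidden_size)
    time_first: the u vector, shape (hidden_size,)
    time_decay: the w vector, shape (hidden_size,) (negative in practice)
    """
    seq_len, hidden_size = k.shape
    output = torch.zeros_like(v)

    numerator_state = torch.zeros(hidden_size)
    denominator_state = torch.zeros(hidden_size)
    max_state = torch.full((hidden_size,), -1e38)  # running max of the exponents

    for t in range(seq_len):
        # Output at step t: combine the state with the current token (uses u + K_t)
        q = torch.maximum(max_state, time_first + k[t])
        e_state = torch.exp(max_state - q)
        e_current = torch.exp(time_first + k[t] - q)
        numerator = e_state * numerator_state + e_current * v[t]
        denominator = e_state * denominator_state + e_current
        output[t] = torch.sigmoid(r[t]) * numerator / denominator

        # State update for step t + 1: decay the state by w and fold in K_t
        q = torch.maximum(max_state + time_decay, k[t])
        e_state = torch.exp(max_state + time_decay - q)
        e_current = torch.exp(k[t] - q)
        numerator_state = e_state * numerator_state + e_current * v[t]
        denominator_state = e_state * denominator_state + e_current
        max_state = q

    return output
```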
mavonic_private_repos/transformers/docs/source/en/model_doc/unispeech-sat.md
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # UniSpeech-SAT ## Overview The UniSpeech-SAT model was proposed in [UniSpeech-SAT: Universal Speech Representation Learning with Speaker Aware Pre-Training](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu . The abstract from the paper is the following: *Self-supervised learning (SSL) is a long-standing goal for speech processing, since it utilizes large-scale unlabeled data and avoids extensive human labeling. Recent years witness great successes in applying self-supervised learning in speech recognition, while limited exploration was attempted in applying SSL for modeling speaker characteristics. In this paper, we aim to improve the existing SSL framework for speaker representation learning. Two methods are introduced for enhancing the unsupervised speaker information extraction. First, we apply the multi-task learning to the current SSL framework, where we integrate the utterance-wise contrastive loss with the SSL objective function. Second, for better speaker discrimination, we propose an utterance mixing strategy for data augmentation, where additional overlapped utterances are created unsupervisedly and incorporate during training. We integrate the proposed methods into the HuBERT framework. Experiment results on SUPERB benchmark show that the proposed system achieves state-of-the-art performance in universal representation learning, especially for speaker identification oriented tasks. An ablation study is performed verifying the efficacy of each proposed method. Finally, we scale up training dataset to 94 thousand hours public audio data and achieve further performance improvement in all SUPERB tasks.* This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The Authors' code can be found [here](https://github.com/microsoft/UniSpeech/tree/main/UniSpeech-SAT). ## Usage tips - UniSpeechSat is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. Please use [`Wav2Vec2Processor`] for the feature extraction. - UniSpeechSat model can be fine-tuned using connectionist temporal classification (CTC) so the model output has to be decoded using [`Wav2Vec2CTCTokenizer`]. - UniSpeechSat performs especially well on speaker verification, speaker identification, and speaker diarization tasks. 
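Putting the tips above together, here is a minimal sketch of speech recognition with a CTC-finetuned UniSpeech-SAT model. The `microsoft/unispeech-sat-base-100h-libri-ft` checkpoint name is an assumption; substitute any UniSpeech-SAT checkpoint finetuned with CTC.

```python
import torch
from datasets import load_dataset
from transformers import Wav2Vec2Processor, UniSpeechSatForCTC

processor = Wav2Vec2Processor.from_pretrained("microsoft/unispeech-sat-base-100h-libri-ft")
model = UniSpeechSatForCTC.from_pretrained("microsoft/unispeech-sat-base-100h-libri-ft")

ds = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding: argmax over the vocabulary, then collapse with the tokenizer
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```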
## Resources - [Audio classification task guide](../tasks/audio_classification) - [Automatic speech recognition task guide](../tasks/asr) ## UniSpeechSatConfig [[autodoc]] UniSpeechSatConfig ## UniSpeechSat specific outputs [[autodoc]] models.unispeech_sat.modeling_unispeech_sat.UniSpeechSatForPreTrainingOutput ## UniSpeechSatModel [[autodoc]] UniSpeechSatModel - forward ## UniSpeechSatForCTC [[autodoc]] UniSpeechSatForCTC - forward ## UniSpeechSatForSequenceClassification [[autodoc]] UniSpeechSatForSequenceClassification - forward ## UniSpeechSatForAudioFrameClassification [[autodoc]] UniSpeechSatForAudioFrameClassification - forward ## UniSpeechSatForXVector [[autodoc]] UniSpeechSatForXVector - forward ## UniSpeechSatForPreTraining [[autodoc]] UniSpeechSatForPreTraining - forward
mavonic_private_repos/transformers/docs/source/en/model_doc/speech-encoder-decoder.md
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Speech Encoder Decoder Models The [`SpeechEncoderDecoderModel`] can be used to initialize a speech-to-text model with any pretrained speech autoencoding model as the encoder (*e.g.* [Wav2Vec2](wav2vec2), [Hubert](hubert)) and any pretrained autoregressive model as the decoder. The effectiveness of initializing speech-sequence-to-text-sequence models with pretrained checkpoints for speech recognition and speech translation has *e.g.* been shown in [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau. An example of how to use a [`SpeechEncoderDecoderModel`] for inference can be seen in [Speech2Text2](speech_to_text_2). ## Randomly initializing `SpeechEncoderDecoderModel` from model configurations. [`SpeechEncoderDecoderModel`] can be randomly initialized from an encoder and a decoder config. In the following example, we show how to do this using the default [`Wav2Vec2Model`] configuration for the encoder and the default [`BertForCausalLM`] configuration for the decoder. ```python >>> from transformers import BertConfig, Wav2Vec2Config, SpeechEncoderDecoderConfig, SpeechEncoderDecoderModel >>> config_encoder = Wav2Vec2Config() >>> config_decoder = BertConfig() >>> config = SpeechEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder) >>> model = SpeechEncoderDecoderModel(config=config) ``` ## Initialising `SpeechEncoderDecoderModel` from a pretrained encoder and a pretrained decoder. [`SpeechEncoderDecoderModel`] can be initialized from a pretrained encoder checkpoint and a pretrained decoder checkpoint. Note that any pretrained Transformer-based speech model, *e.g.* [Wav2Vec2](wav2vec2), [Hubert](hubert) can serve as the encoder and both pretrained auto-encoding models, *e.g.* BERT, pretrained causal language models, *e.g.* GPT2, as well as the pretrained decoder part of sequence-to-sequence models, *e.g.* decoder of BART, can be used as the decoder. Depending on which architecture you choose as the decoder, the cross-attention layers might be randomly initialized. Initializing [`SpeechEncoderDecoderModel`] from a pretrained encoder and decoder checkpoint requires the model to be fine-tuned on a downstream task, as has been shown in [the *Warm-starting-encoder-decoder blog post*](https://huggingface.co/blog/warm-starting-encoder-decoder). To do so, the `SpeechEncoderDecoderModel` class provides a [`SpeechEncoderDecoderModel.from_encoder_decoder_pretrained`] method. ```python >>> from transformers import SpeechEncoderDecoderModel >>> model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained( ... "facebook/hubert-large-ll60k", "google-bert/bert-base-uncased" ... 
) ``` ## Loading an existing `SpeechEncoderDecoderModel` checkpoint and perform inference. To load fine-tuned checkpoints of the `SpeechEncoderDecoderModel` class, [`SpeechEncoderDecoderModel`] provides the `from_pretrained(...)` method just like any other model architecture in Transformers. To perform inference, one uses the [`generate`] method, which allows to autoregressively generate text. This method supports various forms of decoding, such as greedy, beam search and multinomial sampling. ```python >>> from transformers import Wav2Vec2Processor, SpeechEncoderDecoderModel >>> from datasets import load_dataset >>> import torch >>> # load a fine-tuned speech translation model and corresponding processor >>> model = SpeechEncoderDecoderModel.from_pretrained("facebook/wav2vec2-xls-r-300m-en-to-15") >>> processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-xls-r-300m-en-to-15") >>> # let's perform inference on a piece of English speech (which we'll translate to German) >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> input_values = processor(ds[0]["audio"]["array"], return_tensors="pt").input_values >>> # autoregressively generate transcription (uses greedy decoding by default) >>> generated_ids = model.generate(input_values) >>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] >>> print(generated_text) Mr. Quilter ist der Apostel der Mittelschicht und wir freuen uns, sein Evangelium willkommen heiรŸen zu kรถnnen. ``` ## Training Once the model is created, it can be fine-tuned similar to BART, T5 or any other encoder-decoder model on a dataset of (speech, text) pairs. As you can see, only 2 inputs are required for the model in order to compute a loss: `input_values` (which are the speech inputs) and `labels` (which are the `input_ids` of the encoded target sequence). ```python >>> from transformers import AutoTokenizer, AutoFeatureExtractor, SpeechEncoderDecoderModel >>> from datasets import load_dataset >>> encoder_id = "facebook/wav2vec2-base-960h" # acoustic model encoder >>> decoder_id = "google-bert/bert-base-uncased" # text decoder >>> feature_extractor = AutoFeatureExtractor.from_pretrained(encoder_id) >>> tokenizer = AutoTokenizer.from_pretrained(decoder_id) >>> # Combine pre-trained encoder and pre-trained decoder to form a Seq2Seq model >>> model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(encoder_id, decoder_id) >>> model.config.decoder_start_token_id = tokenizer.cls_token_id >>> model.config.pad_token_id = tokenizer.pad_token_id >>> # load an audio input and pre-process (normalise mean/std to 0/1) >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> input_values = feature_extractor(ds[0]["audio"]["array"], return_tensors="pt").input_values >>> # load its corresponding transcription and tokenize to generate labels >>> labels = tokenizer(ds[0]["text"], return_tensors="pt").input_ids >>> # the forward function automatically creates the correct decoder_input_ids >>> loss = model(input_values=input_values, labels=labels).loss >>> loss.backward() ``` ## SpeechEncoderDecoderConfig [[autodoc]] SpeechEncoderDecoderConfig ## SpeechEncoderDecoderModel [[autodoc]] SpeechEncoderDecoderModel - forward - from_encoder_decoder_pretrained ## FlaxSpeechEncoderDecoderModel [[autodoc]] FlaxSpeechEncoderDecoderModel - __call__ - from_encoder_decoder_pretrained
mavonic_private_repos/transformers/docs/source/en/model_doc/xlm-roberta.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # XLM-RoBERTa <div class="flex flex-wrap space-x-1"> <a href="https://huggingface.co/models?filter=xlm-roberta"> <img alt="Models" src="https://img.shields.io/badge/All_model_pages-xlm--roberta-blueviolet"> </a> <a href="https://huggingface.co/spaces/docs-demos/xlm-roberta-base"> <img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"> </a> </div> ## Overview The XLM-RoBERTa model was proposed in [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmรกn, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. The abstract from the paper is the following: *This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-lingual transfer tasks. We train a Transformer-based masked language model on one hundred languages, using more than two terabytes of filtered CommonCrawl data. Our model, dubbed XLM-R, significantly outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +13.8% average accuracy on XNLI, +12.3% average F1 score on MLQA, and +2.1% average F1 score on NER. XLM-R performs particularly well on low-resource languages, improving 11.8% in XNLI accuracy for Swahili and 9.2% for Urdu over the previous XLM model. We also present a detailed empirical evaluation of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale. Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing per-language performance; XLM-Ris very competitive with strong monolingual models on the GLUE and XNLI benchmarks. We will make XLM-R code, data, and models publicly available.* This model was contributed by [stefan-it](https://huggingface.co/stefan-it). The original code can be found [here](https://github.com/pytorch/fairseq/tree/master/examples/xlmr). ## Usage tips - XLM-RoBERTa is a multilingual model trained on 100 different languages. Unlike some XLM multilingual models, it does not require `lang` tensors to understand which language is used, and should be able to determine the correct language from the input ids. - Uses RoBERTa tricks on the XLM approach, but does not use the translation language modeling objective. It only uses masked language modeling on sentences coming from one language. 
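As a quick illustration of the first tip above (no `lang` tensors are needed), here is a minimal sketch running the same fill-mask pipeline on inputs in two different languages with the base checkpoint:

```python
from transformers import pipeline

# One checkpoint covers many languages; the language is inferred from the input ids
unmasker = pipeline("fill-mask", model="FacebookAI/xlm-roberta-base")

print(unmasker("Hello, I'm a <mask> model.")[0]["token_str"])
print(unmasker("Bonjour, je suis un modèle <mask>.")[0]["token_str"])
```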
## Resources A list of official Hugging Face and community (indicated by ๐ŸŒŽ) resources to help you get started with XLM-RoBERTa. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. <PipelineTag pipeline="text-classification"/> - A blog post on how to [finetune XLM RoBERTa for multiclass classification with Habana Gaudi on AWS](https://www.philschmid.de/habana-distributed-training) - [`XLMRobertaForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb). - [`TFXLMRobertaForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb). - [`FlaxXLMRobertaForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification_flax.ipynb). - [Text classification](https://huggingface.co/docs/transformers/tasks/sequence_classification) chapter of the ๐Ÿค— Hugging Face Task Guides. - [Text classification task guide](../tasks/sequence_classification) <PipelineTag pipeline="token-classification"/> - [`XLMRobertaForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification.ipynb). - [`TFXLMRobertaForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb). - [`FlaxXLMRobertaForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/token-classification). - [Token classification](https://huggingface.co/course/chapter7/2?fw=pt) chapter of the ๐Ÿค— Hugging Face Course. - [Token classification task guide](../tasks/token_classification) <PipelineTag pipeline="text-generation"/> - [`XLMRobertaForCausalLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb). - [Causal language modeling](https://huggingface.co/docs/transformers/tasks/language_modeling) chapter of the ๐Ÿค— Hugging Face Task Guides. 
- [Causal language modeling task guide](../tasks/language_modeling) <PipelineTag pipeline="fill-mask"/> - [`XLMRobertaForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#robertabertdistilbert-and-masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb). - [`TFXLMRobertaForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_mlmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb). - [`FlaxXLMRobertaForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/masked_language_modeling_flax.ipynb). - [Masked language modeling](https://huggingface.co/course/chapter7/3?fw=pt) chapter of the ๐Ÿค— Hugging Face Course. - [Masked language modeling](../tasks/masked_language_modeling) <PipelineTag pipeline="question-answering"/> - [`XLMRobertaForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb). - [`TFXLMRobertaForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb). - [`FlaxXLMRobertaForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/question-answering). - [Question answering](https://huggingface.co/course/chapter7/7?fw=pt) chapter of the ๐Ÿค— Hugging Face Course. - [Question answering task guide](../tasks/question_answering) **Multiple choice** - [`XLMRobertaForMultipleChoice`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb). - [`TFXLMRobertaForMultipleChoice`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb). - [Multiple choice task guide](../tasks/multiple_choice) ๐Ÿš€ Deploy - A blog post on how to [Deploy Serverless XLM RoBERTa on AWS Lambda](https://www.philschmid.de/multilingual-serverless-xlm-roberta-with-huggingface). <Tip> This implementation is the same as RoBERTa. Refer to the [documentation of RoBERTa](roberta) for usage examples as well as the information relative to the inputs and outputs. 
</Tip> ## XLMRobertaConfig [[autodoc]] XLMRobertaConfig ## XLMRobertaTokenizer [[autodoc]] XLMRobertaTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## XLMRobertaTokenizerFast [[autodoc]] XLMRobertaTokenizerFast <frameworkcontent> <pt> ## XLMRobertaModel [[autodoc]] XLMRobertaModel - forward ## XLMRobertaForCausalLM [[autodoc]] XLMRobertaForCausalLM - forward ## XLMRobertaForMaskedLM [[autodoc]] XLMRobertaForMaskedLM - forward ## XLMRobertaForSequenceClassification [[autodoc]] XLMRobertaForSequenceClassification - forward ## XLMRobertaForMultipleChoice [[autodoc]] XLMRobertaForMultipleChoice - forward ## XLMRobertaForTokenClassification [[autodoc]] XLMRobertaForTokenClassification - forward ## XLMRobertaForQuestionAnswering [[autodoc]] XLMRobertaForQuestionAnswering - forward </pt> <tf> ## TFXLMRobertaModel [[autodoc]] TFXLMRobertaModel - call ## TFXLMRobertaForCausalLM [[autodoc]] TFXLMRobertaForCausalLM - call ## TFXLMRobertaForMaskedLM [[autodoc]] TFXLMRobertaForMaskedLM - call ## TFXLMRobertaForSequenceClassification [[autodoc]] TFXLMRobertaForSequenceClassification - call ## TFXLMRobertaForMultipleChoice [[autodoc]] TFXLMRobertaForMultipleChoice - call ## TFXLMRobertaForTokenClassification [[autodoc]] TFXLMRobertaForTokenClassification - call ## TFXLMRobertaForQuestionAnswering [[autodoc]] TFXLMRobertaForQuestionAnswering - call </tf> <jax> ## FlaxXLMRobertaModel [[autodoc]] FlaxXLMRobertaModel - __call__ ## FlaxXLMRobertaForCausalLM [[autodoc]] FlaxXLMRobertaForCausalLM - __call__ ## FlaxXLMRobertaForMaskedLM [[autodoc]] FlaxXLMRobertaForMaskedLM - __call__ ## FlaxXLMRobertaForSequenceClassification [[autodoc]] FlaxXLMRobertaForSequenceClassification - __call__ ## FlaxXLMRobertaForMultipleChoice [[autodoc]] FlaxXLMRobertaForMultipleChoice - __call__ ## FlaxXLMRobertaForTokenClassification [[autodoc]] FlaxXLMRobertaForTokenClassification - __call__ ## FlaxXLMRobertaForQuestionAnswering [[autodoc]] FlaxXLMRobertaForQuestionAnswering - __call__ </jax> </frameworkcontent>
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # EnCodec ## Overview The EnCodec neural codec model was proposed in [High Fidelity Neural Audio Compression](https://arxiv.org/abs/2210.13438) by Alexandre Dรฉfossez, Jade Copet, Gabriel Synnaeve, Yossi Adi. The abstract from the paper is the following: *We introduce a state-of-the-art real-time, high-fidelity, audio codec leveraging neural networks. It consists in a streaming encoder-decoder architecture with quantized latent space trained in an end-to-end fashion. We simplify and speed-up the training by using a single multiscale spectrogram adversary that efficiently reduces artifacts and produce high-quality samples. We introduce a novel loss balancer mechanism to stabilize training: the weight of a loss now defines the fraction of the overall gradient it should represent, thus decoupling the choice of this hyper-parameter from the typical scale of the loss. Finally, we study how lightweight Transformer models can be used to further compress the obtained representation by up to 40%, while staying faster than real time. We provide a detailed description of the key design choices of the proposed model including: training objective, architectural changes and a study of various perceptual loss functions. We present an extensive subjective evaluation (MUSHRA tests) together with an ablation study for a range of bandwidths and audio domains, including speech, noisy-reverberant speech, and music. Our approach is superior to the baselines methods across all evaluated settings, considering both 24 kHz monophonic and 48 kHz stereophonic audio.* This model was contributed by [Matthijs](https://huggingface.co/Matthijs), [Patrick Von Platen](https://huggingface.co/patrickvonplaten) and [Arthur Zucker](https://huggingface.co/ArthurZ). The original code can be found [here](https://github.com/facebookresearch/encodec). 
## Usage example Here is a quick example of how to encode and decode an audio using this model: ```python >>> from datasets import load_dataset, Audio >>> from transformers import EncodecModel, AutoProcessor >>> librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> model = EncodecModel.from_pretrained("facebook/encodec_24khz") >>> processor = AutoProcessor.from_pretrained("facebook/encodec_24khz") >>> librispeech_dummy = librispeech_dummy.cast_column("audio", Audio(sampling_rate=processor.sampling_rate)) >>> audio_sample = librispeech_dummy[-1]["audio"]["array"] >>> inputs = processor(raw_audio=audio_sample, sampling_rate=processor.sampling_rate, return_tensors="pt") >>> encoder_outputs = model.encode(inputs["input_values"], inputs["padding_mask"]) >>> audio_values = model.decode(encoder_outputs.audio_codes, encoder_outputs.audio_scales, inputs["padding_mask"])[0] >>> # or the equivalent with a forward pass >>> audio_values = model(inputs["input_values"], inputs["padding_mask"]).audio_values ``` ## EncodecConfig [[autodoc]] EncodecConfig ## EncodecFeatureExtractor [[autodoc]] EncodecFeatureExtractor - __call__ ## EncodecModel [[autodoc]] EncodecModel - decode - encode - forward
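Building on the usage example above, here is a minimal sketch of encoding at an explicit target bandwidth (the available values depend on the checkpoint's `target_bandwidths` configuration, so treat `6.0` as an assumed example):

```python
from datasets import Audio, load_dataset
from transformers import AutoProcessor, EncodecModel

model = EncodecModel.from_pretrained("facebook/encodec_24khz")
processor = AutoProcessor.from_pretrained("facebook/encodec_24khz")

ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
ds = ds.cast_column("audio", Audio(sampling_rate=processor.sampling_rate))
inputs = processor(raw_audio=ds[0]["audio"]["array"], sampling_rate=processor.sampling_rate, return_tensors="pt")

# a higher target bandwidth (in kbps) keeps more codebooks and improves reconstruction quality
encoder_outputs = model.encode(inputs["input_values"], inputs["padding_mask"], bandwidth=6.0)
print(encoder_outputs.audio_codes.shape)
```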
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # VideoMAE ## Overview The VideoMAE model was proposed in [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Zhan Tong, Yibing Song, Jue Wang, Limin Wang. VideoMAE extends masked auto encoders ([MAE](vit_mae)) to video, claiming state-of-the-art performance on several video classification benchmarks. The abstract from the paper is the following: *Pre-training video transformers on extra large-scale datasets is generally required to achieve premier performance on relatively small datasets. In this paper, we show that video masked autoencoders (VideoMAE) are data-efficient learners for self-supervised video pre-training (SSVP). We are inspired by the recent ImageMAE and propose customized video tube masking and reconstruction. These simple designs turn out to be effective for overcoming information leakage caused by the temporal correlation during video reconstruction. We obtain three important findings on SSVP: (1) An extremely high proportion of masking ratio (i.e., 90% to 95%) still yields favorable performance of VideoMAE. The temporally redundant video content enables higher masking ratio than that of images. (2) VideoMAE achieves impressive results on very small datasets (i.e., around 3k-4k videos) without using any extra data. This is partially ascribed to the challenging task of video reconstruction to enforce high-level structure learning. (3) VideoMAE shows that data quality is more important than data quantity for SSVP. Domain shift between pre-training and target datasets are important issues in SSVP. Notably, our VideoMAE with the vanilla ViT backbone can achieve 83.9% on Kinects-400, 75.3% on Something-Something V2, 90.8% on UCF101, and 61.1% on HMDB51 without using any extra data.* <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/videomae_architecture.jpeg" alt="drawing" width="600"/> <small> VideoMAE pre-training. Taken from the <a href="https://arxiv.org/abs/2203.12602">original paper</a>. </small> This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/MCG-NJU/VideoMAE). ## Resources A list of official Hugging Face and community (indicated by ๐ŸŒŽ) resources to help you get started with VideoMAE. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. **Video classification** - [A notebook](https://github.com/huggingface/notebooks/blob/main/examples/video_classification.ipynb) that shows how to fine-tune a VideoMAE model on a custom dataset. 
- [Video classification task guide](../tasks/video_classification) - [A ๐Ÿค— Space](https://huggingface.co/spaces/sayakpaul/video-classification-ucf101-subset) showing how to perform inference with a video classification model. ## VideoMAEConfig [[autodoc]] VideoMAEConfig ## VideoMAEFeatureExtractor [[autodoc]] VideoMAEFeatureExtractor - __call__ ## VideoMAEImageProcessor [[autodoc]] VideoMAEImageProcessor - preprocess ## VideoMAEModel [[autodoc]] VideoMAEModel - forward ## VideoMAEForPreTraining `VideoMAEForPreTraining` includes the decoder on top for self-supervised pre-training. [[autodoc]] transformers.VideoMAEForPreTraining - forward ## VideoMAEForVideoClassification [[autodoc]] transformers.VideoMAEForVideoClassification - forward
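To complement the resources above, here is a minimal inference sketch (assuming the `MCG-NJU/videomae-base-finetuned-kinetics` checkpoint and a dummy 16-frame clip in place of a real video):

```python
import numpy as np
import torch
from transformers import VideoMAEForVideoClassification, VideoMAEImageProcessor

processor = VideoMAEImageProcessor.from_pretrained("MCG-NJU/videomae-base-finetuned-kinetics")
model = VideoMAEForVideoClassification.from_pretrained("MCG-NJU/videomae-base-finetuned-kinetics")

# 16 RGB frames in channels-first format; replace with frames decoded from a real video
video = list(np.random.randint(0, 256, (16, 3, 224, 224), dtype=np.uint8))

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```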
<!--Copyright 2022 The HuggingFace Team and The OpenBMB Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # CPMAnt ## Overview CPM-Ant is an open-source Chinese pre-trained language model (PLM) with 10B parameters. It is also the first milestone of the live training process of CPM-Live. The training process is cost-effective and environment-friendly. CPM-Ant also achieves promising results with delta tuning on the CUGE benchmark. Besides the full model, we also provide various compressed versions to meet the requirements of different hardware configurations. [See more](https://github.com/OpenBMB/CPM-Live/tree/cpm-ant/cpm-live) This model was contributed by [OpenBMB](https://huggingface.co/openbmb). The original code can be found [here](https://github.com/OpenBMB/CPM-Live/tree/cpm-ant/cpm-live). ## Resources - A tutorial on [CPM-Live](https://github.com/OpenBMB/CPM-Live/tree/cpm-ant/cpm-live). ## CpmAntConfig [[autodoc]] CpmAntConfig - all ## CpmAntTokenizer [[autodoc]] CpmAntTokenizer - all ## CpmAntModel [[autodoc]] CpmAntModel - all ## CpmAntForCausalLM [[autodoc]] CpmAntForCausalLM - all
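For reference, a minimal text-generation sketch (assuming the `openbmb/cpm-ant-10b` checkpoint; the compressed variants mentioned above should follow the same API, and the tokenizer relies on the `jieba` package for pre-tokenization):

```python
from transformers import CpmAntForCausalLM, CpmAntTokenizer

tokenizer = CpmAntTokenizer.from_pretrained("openbmb/cpm-ant-10b")
model = CpmAntForCausalLM.from_pretrained("openbmb/cpm-ant-10b")

# prompt in Chinese: "The weather is really nice today,"
inputs = tokenizer("ไปŠๅคฉๅคฉๆฐ”็œŸๅฅฝ๏ผŒ", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0]))
```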
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # LayoutXLM ## Overview LayoutXLM was proposed in [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei. It's a multilingual extension of the [LayoutLMv2 model](https://arxiv.org/abs/2012.14740) trained on 53 languages. The abstract from the paper is the following: *Multimodal pre-training with text, layout, and image has achieved SOTA performance for visually-rich document understanding tasks recently, which demonstrates the great potential for joint learning across different modalities. In this paper, we present LayoutXLM, a multimodal pre-trained model for multilingual document understanding, which aims to bridge the language barriers for visually-rich document understanding. To accurately evaluate LayoutXLM, we also introduce a multilingual form understanding benchmark dataset named XFUN, which includes form understanding samples in 7 languages (Chinese, Japanese, Spanish, French, Italian, German, Portuguese), and key-value pairs are manually labeled for each language. Experiment results show that the LayoutXLM model has significantly outperformed the existing SOTA cross-lingual pre-trained models on the XFUN dataset.* This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/microsoft/unilm). ## Usage tips and examples One can directly plug in the weights of LayoutXLM into a LayoutLMv2 model, like so: ```python from transformers import LayoutLMv2Model model = LayoutLMv2Model.from_pretrained("microsoft/layoutxlm-base") ``` Note that LayoutXLM has its own tokenizer, based on [`LayoutXLMTokenizer`]/[`LayoutXLMTokenizerFast`]. You can initialize it as follows: ```python from transformers import LayoutXLMTokenizer tokenizer = LayoutXLMTokenizer.from_pretrained("microsoft/layoutxlm-base") ``` Similar to LayoutLMv2, you can use [`LayoutXLMProcessor`] (which internally applies [`LayoutLMv2ImageProcessor`] and [`LayoutXLMTokenizer`]/[`LayoutXLMTokenizerFast`] in sequence) to prepare all data for the model. <Tip> As LayoutXLM's architecture is equivalent to that of LayoutLMv2, one can refer to [LayoutLMv2's documentation page](layoutlmv2) for all tips, code examples and notebooks. </Tip> ## LayoutXLMTokenizer [[autodoc]] LayoutXLMTokenizer - __call__ - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## LayoutXLMTokenizerFast [[autodoc]] LayoutXLMTokenizerFast - __call__ ## LayoutXLMProcessor [[autodoc]] LayoutXLMProcessor - __call__
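To illustrate the processor described above, here is a minimal sketch (assuming `pytesseract` is installed, since OCR is applied by default, and using a local `document.png` as a placeholder for a real scanned document):

```python
from PIL import Image
from transformers import LayoutXLMProcessor

processor = LayoutXLMProcessor.from_pretrained("microsoft/layoutxlm-base")

image = Image.open("document.png").convert("RGB")
# words and normalized bounding boxes are extracted by OCR under the hood
encoding = processor(image, return_tensors="pt")
print(encoding.keys())
```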
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

# Persimmon

## Overview

The Persimmon model was created by [ADEPT](https://www.adept.ai/blog/persimmon-8b), and authored by Erich Elsen, Augustus Odena, Maxwell Nye, SaฤŸnak TaลŸฤฑrlar, Tri Dao, Curtis Hawthorne, Deepak Moparthi, Arushi Somani.

The authors introduced Persimmon-8B, a decoder model based on the classic transformers architecture, with query and key normalization. Persimmon-8B is a fully permissively-licensed model with approximately 8 billion parameters, released under the Apache license. Some of the key attributes of Persimmon-8B are long context size (16K), performance, and capabilities for multimodal extensions.

The authors showcase their approach to model evaluation, focusing on practical text generation, mirroring how users interact with language models. The work also includes a comparative analysis, pitting Persimmon-8B against other prominent models (MPT 7B Instruct and Llama 2 Base 7B 1-Shot), across various evaluation tasks. The results demonstrate Persimmon-8B's competitive performance, even with limited training data.

In terms of model details, the work outlines the architecture and training methodology of Persimmon-8B, providing insights into its design choices, sequence length, and dataset composition. The authors present a fast inference code that outperforms traditional implementations through operator fusion and CUDA graph utilization while maintaining code coherence. They express their anticipation of how the community will leverage this contribution to drive innovation, hinting at further upcoming releases as part of an ongoing series of developments.

This model was contributed by [ArthurZ](https://huggingface.co/ArthurZ). The original code can be found [here](https://github.com/persimmon-ai-labs/adept-inference).

## Usage tips

<Tip warning={true}>

The `Persimmon` models were trained using `bfloat16`, but the original inference uses `float16`. The checkpoints uploaded on the hub use `torch_dtype = 'float16'`, which will be used by the `AutoModel` API to cast the checkpoints from `torch.float32` to `torch.float16`.

The `dtype` of the online weights is mostly irrelevant unless you are using `torch_dtype="auto"` when initializing a model with `model = AutoModelForCausalLM.from_pretrained("path", torch_dtype="auto")`. The reason is that the model will first be downloaded (using the `dtype` of the checkpoints online) and then cast to the default `dtype` of `torch` (which becomes `torch.float32`). Users should specify the `torch_dtype` they want; if they don't, it will be `torch.float32`.

Finetuning the model in `float16` is not recommended and is known to produce `nan`; as such, the model should be fine-tuned in `bfloat16`.
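For instance, a minimal loading sketch that follows this advice (assuming the `adept/persimmon-8b-base` checkpoint) and loads the weights directly in `bfloat16`:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# load in bfloat16 (recommended for fine-tuning); torch_dtype="auto" would instead
# keep the float16 dtype stored in the checkpoint
model = AutoModelForCausalLM.from_pretrained("adept/persimmon-8b-base", torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("adept/persimmon-8b-base")
```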
</Tip>

Tips:

- To convert the model, you need to clone the original repository using `git clone https://github.com/persimmon-ai-labs/adept-inference`, then get the checkpoints:

```bash
git clone https://github.com/persimmon-ai-labs/adept-inference
wget https://axtkn4xl5cip.objectstorage.us-phoenix-1.oci.customer-oci.com/n/axtkn4xl5cip/b/adept-public-data/o/8b_base_model_release.tar
tar -xvf 8b_base_model_release.tar
python src/transformers/models/persimmon/convert_persimmon_weights_to_hf.py --input_dir /path/to/downloaded/persimmon/weights/ --output_dir /output/path \
    --pt_model_path /path/to/8b_chat_model_release/iter_0001251/mp_rank_00/model_optim_rng.pt --ada_lib_path /path/to/adept-inference
```

For the chat model:

```bash
wget https://axtkn4xl5cip.objectstorage.us-phoenix-1.oci.customer-oci.com/n/axtkn4xl5cip/b/adept-public-data/o/8b_chat_model_release.tar
tar -xvf 8b_chat_model_release.tar
```

Thereafter, models can be loaded via:

```py
from transformers import PersimmonForCausalLM, PersimmonTokenizer

model = PersimmonForCausalLM.from_pretrained("/output/path")
tokenizer = PersimmonTokenizer.from_pretrained("/output/path")
```

- Persimmon uses a `sentencepiece`-based tokenizer, with a `Unigram` model. It supports bytefallback, which is only available in `tokenizers==0.14.0` for the fast tokenizer. The `LlamaTokenizer` is used as it is a standard wrapper around sentencepiece. The `chat` template will be updated with the templating functions in a follow-up PR!

- The authors suggest using the following prompt format for the chat mode: `f"human: {prompt}\n\nadept:"`

## PersimmonConfig

[[autodoc]] PersimmonConfig

## PersimmonModel

[[autodoc]] PersimmonModel
    - forward

## PersimmonForCausalLM

[[autodoc]] PersimmonForCausalLM
    - forward

## PersimmonForSequenceClassification

[[autodoc]] PersimmonForSequenceClassification
    - forward
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # GPT-Sw3 ## Overview The GPT-Sw3 model was first proposed in [Lessons Learned from GPT-SW3: Building the First Large-Scale Generative Language Model for Swedish](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.376.pdf) by Ariel Ekgren, Amaru Cuba Gyllensten, Evangelia Gogoulou, Alice Heiman, Severine Verlinden, Joey ร–hman, Fredrik Carlsson, Magnus Sahlgren. Since that first paper the authors have extended their work and trained new models on their new 1.2TB corpora named The Nordic Pile. GPT-Sw3 is a collection of large decoder-only pretrained transformer language models that were developed by AI Sweden in collaboration with RISE and the WASP WARA for Media and Language. GPT-Sw3 has been trained on a dataset containing 320B tokens in Swedish, Norwegian, Danish, Icelandic, English, and programming code. The model was pretrained using a causal language modeling (CLM) objective utilizing the NeMo Megatron GPT implementation. This model was contributed by [AI Sweden Models](https://huggingface.co/AI-Sweden-Models). ## Usage example ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM >>> tokenizer = AutoTokenizer.from_pretrained("AI-Sweden-Models/gpt-sw3-356m") >>> model = AutoModelForCausalLM.from_pretrained("AI-Sweden-Models/gpt-sw3-356m") >>> input_ids = tokenizer("Trรคd รคr fina fรถr att", return_tensors="pt")["input_ids"] >>> generated_token_ids = model.generate(inputs=input_ids, max_new_tokens=10, do_sample=True)[0] >>> print(tokenizer.decode(generated_token_ids)) Trรคd รคr fina fรถr att de รคr fรคrgstarka. Men ibland รคr det fint ``` ## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Causal language modeling task guide](../tasks/language_modeling) <Tip> The implementation uses the `GPT2Model` coupled with our `GPTSw3Tokenizer`. Refer to [GPT2Model documentation](gpt2) for API reference and examples. Note that sentencepiece is required to use our tokenizer and can be installed with `pip install transformers[sentencepiece]` or `pip install sentencepiece` </Tip> ## GPTSw3Tokenizer [[autodoc]] GPTSw3Tokenizer - save_vocabulary
<!--Copyright 2023 Mistral AI and The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

# Mixtral

## Overview

Mixtral-8x7B was introduced in the [Mixtral of Experts blogpost](https://mistral.ai/news/mixtral-of-experts/) by Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lรฉlio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothรฉe Lacroix, William El Sayed.

The introduction of the blog post says:

*Today, the team is proud to release Mixtral 8x7B, a high-quality sparse mixture of experts models (SMoE) with open weights. Licensed under Apache 2.0. Mixtral outperforms Llama 2 70B on most benchmarks with 6x faster inference. It is the strongest open-weight model with a permissive license and the best model overall regarding cost/performance trade-offs. In particular, it matches or outperforms GPT3.5 on most standard benchmarks.*

Mixtral-8x7B is the second large language model (LLM) released by [mistral.ai](https://mistral.ai/), after [Mistral-7B](mistral).

### Architectural details

Mixtral-8x7B is a decoder-only Transformer with the following architectural choices:

- Mixtral is a Mixture of Experts (MoE) model with 8 experts per MLP, with a total of 45 billion parameters. To learn more about mixture-of-experts, refer to the [blog post](https://huggingface.co/blog/moe).
- Despite the model having 45 billion parameters, the compute required for a single forward pass is the same as that of a 14 billion parameter model. This is because even though each of the experts has to be loaded in RAM (a 70B-like RAM requirement), each token of the hidden states is dispatched to only two experts (top-2 routing), so the compute (the operations required at each forward computation) is just 2 X sequence_length.

The following implementation details are shared with Mistral AI's first model [Mistral-7B](mistral):
- Sliding Window Attention - Trained with 8k context length and fixed cache size, with a theoretical attention span of 128K tokens
- GQA (Grouped Query Attention) - allowing faster inference and lower cache size.
- Byte-fallback BPE tokenizer - ensures that characters are never mapped to out of vocabulary tokens.

For more details refer to the [release blog post](https://mistral.ai/news/mixtral-of-experts/).

### License

`Mixtral-8x7B` is released under the Apache 2.0 license.

## Usage tips

The Mistral team has released 2 checkpoints:
- a base model, [Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1), which has been pre-trained to predict the next token on internet-scale data.
- an instruction-tuned model, [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1), which is the base model optimized for chat purposes using supervised fine-tuning (SFT) and direct preference optimization (DPO).

The base model can be used as follows:

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer

>>> model = AutoModelForCausalLM.from_pretrained("mistralai/Mixtral-8x7B-v0.1", device_map="auto")
>>> tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-v0.1")

>>> prompt = "My favourite condiment is"

>>> model_inputs = tokenizer([prompt], return_tensors="pt").to("cuda")

>>> generated_ids = model.generate(**model_inputs, max_new_tokens=100, do_sample=True)
>>> tokenizer.batch_decode(generated_ids)[0]
"My favourite condiment is to ..."
```

The instruction-tuned model can be used as follows:

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer

>>> model = AutoModelForCausalLM.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1", device_map="auto")
>>> tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")

>>> messages = [
...     {"role": "user", "content": "What is your favourite condiment?"},
...     {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
...     {"role": "user", "content": "Do you have mayonnaise recipes?"}
... ]

>>> model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")

>>> generated_ids = model.generate(model_inputs, max_new_tokens=100, do_sample=True)
>>> tokenizer.batch_decode(generated_ids)[0]
"Mayonnaise can be made as follows: (...)"
```

As can be seen, the instruction-tuned model requires a [chat template](../chat_templating) to be applied to make sure the inputs are prepared in the right format.

## Speeding up Mixtral by using Flash Attention

The code snippets above showcase inference without any optimization tricks. However, one can drastically speed up the model by leveraging [Flash Attention](../perf_train_gpu_one.md#flash-attention-2), which is a faster implementation of the attention mechanism used inside the model.

First, make sure to install the latest version of Flash Attention 2 to include the sliding window attention feature.

```bash
pip install -U flash-attn --no-build-isolation
```

Also make sure that your hardware is compatible with Flash Attention 2. Read more about it in the official documentation of the [flash attention repository](https://github.com/Dao-AILab/flash-attention). Also make sure to load your model in half-precision (e.g. `torch.float16`).

To load and run a model using Flash Attention-2, refer to the snippet below:

```python
>>> import torch
>>> from transformers import AutoModelForCausalLM, AutoTokenizer

>>> model = AutoModelForCausalLM.from_pretrained("mistralai/Mixtral-8x7B-v0.1", torch_dtype=torch.float16, attn_implementation="flash_attention_2", device_map="auto")
>>> tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-v0.1")

>>> prompt = "My favourite condiment is"

>>> model_inputs = tokenizer([prompt], return_tensors="pt").to("cuda")

>>> generated_ids = model.generate(**model_inputs, max_new_tokens=100, do_sample=True)
>>> tokenizer.batch_decode(generated_ids)[0]
"The expected output"
```

### Expected speedups

Below is an expected speedup diagram that compares pure inference time between the native implementation in transformers using the `mistralai/Mixtral-8x7B-v0.1` checkpoint and the Flash Attention 2 version of the model.

<div style="text-align: center">
<img src="https://huggingface.co/datasets/ybelkada/documentation-images/resolve/main/mixtral-7b-inference-large-seqlen.png">
</div>

### Sliding window Attention

The current implementation supports the sliding window attention mechanism and memory efficient cache management. To enable sliding window attention, just make sure to have a `flash-attn` version that is compatible with sliding window attention (`>=2.3.0`).

The Flash Attention-2 model also uses a more memory-efficient cache slicing mechanism: as recommended by the official implementation of the Mistral model, which uses a rolling cache mechanism, we keep the cache size fixed (`self.config.sliding_window`), support batched generation only for `padding_side="left"`, and use the absolute position of the current token to compute the positional embedding.

## Shrinking down Mixtral using quantization

As the Mixtral model has 45 billion parameters, that would require about 90GB of GPU RAM in half precision (float16), since each parameter is stored in 2 bytes. However, one can shrink down the size of the model using [quantization](../quantization.md). If the model is quantized to 4 bits (or half a byte per parameter), a single A100 with 40GB of RAM is enough to fit the entire model, as in that case only about 27 GB of RAM is required.

Quantizing a model is as simple as passing a `quantization_config` to the model. Below, we'll leverage the bitsandbytes quantization (but refer to [this page](../quantization.md) for other quantization methods):

```python
>>> import torch
>>> from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

>>> # specify how to quantize the model
>>> quantization_config = BitsAndBytesConfig(
...     load_in_4bit=True,
...     bnb_4bit_quant_type="nf4",
...     bnb_4bit_compute_dtype=torch.float16,
... )

>>> model = AutoModelForCausalLM.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1", quantization_config=quantization_config, device_map="auto")
>>> tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")

>>> prompt = "My favourite condiment is"

>>> messages = [
...     {"role": "user", "content": "What is your favourite condiment?"},
...     {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
...     {"role": "user", "content": "Do you have mayonnaise recipes?"}
... ]

>>> model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")

>>> generated_ids = model.generate(model_inputs, max_new_tokens=100, do_sample=True)
>>> tokenizer.batch_decode(generated_ids)[0]
"The expected output"
```

This model was contributed by [Younes Belkada](https://huggingface.co/ybelkada) and [Arthur Zucker](https://huggingface.co/ArthurZ). The original code can be found [here](https://github.com/mistralai/mistral-src).

## Resources

A list of official Hugging Face and community (indicated by ๐ŸŒŽ) resources to help you get started with Mixtral. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

<PipelineTag pipeline="text-generation"/>

- A demo notebook to perform supervised fine-tuning (SFT) of Mixtral-8x7B can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/Mistral/Supervised_fine_tuning_(SFT)_of_an_LLM_using_Hugging_Face_tooling.ipynb). ๐ŸŒŽ
- A [blog post](https://medium.com/@prakharsaxena11111/finetuning-mixtral-7bx8-6071b0ebf114) on fine-tuning Mixtral-8x7B using PEFT. ๐ŸŒŽ
- The [Alignment Handbook](https://github.com/huggingface/alignment-handbook) by Hugging Face includes scripts and recipes to perform supervised fine-tuning (SFT) and direct preference optimization with Mistral-7B. This includes scripts for full fine-tuning, QLoRa on a single GPU as well as multi-GPU fine-tuning.
- [Causal language modeling task guide](../tasks/language_modeling)

## MixtralConfig

[[autodoc]] MixtralConfig

## MixtralModel

[[autodoc]] MixtralModel
    - forward

## MixtralForCausalLM

[[autodoc]] MixtralForCausalLM
    - forward

## MixtralForSequenceClassification

[[autodoc]] MixtralForSequenceClassification
    - forward
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # DeBERTa-v2 ## Overview The DeBERTa model was proposed in [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen It is based on Google's BERT model released in 2018 and Facebook's RoBERTa model released in 2019. It builds on RoBERTa with disentangled attention and enhanced mask decoder training with half of the data used in RoBERTa. The abstract from the paper is the following: *Recent progress in pre-trained neural language models has significantly improved the performance of many natural language processing (NLP) tasks. In this paper we propose a new model architecture DeBERTa (Decoding-enhanced BERT with disentangled attention) that improves the BERT and RoBERTa models using two novel techniques. The first is the disentangled attention mechanism, where each word is represented using two vectors that encode its content and position, respectively, and the attention weights among words are computed using disentangled matrices on their contents and relative positions. Second, an enhanced mask decoder is used to replace the output softmax layer to predict the masked tokens for model pretraining. We show that these two techniques significantly improve the efficiency of model pretraining and performance of downstream tasks. Compared to RoBERTa-Large, a DeBERTa model trained on half of the training data performs consistently better on a wide range of NLP tasks, achieving improvements on MNLI by +0.9% (90.2% vs. 91.1%), on SQuAD v2.0 by +2.3% (88.4% vs. 90.7%) and RACE by +3.6% (83.2% vs. 86.8%). The DeBERTa code and pre-trained models will be made publicly available at https://github.com/microsoft/DeBERTa.* The following information is visible directly on the [original implementation repository](https://github.com/microsoft/DeBERTa). DeBERTa v2 is the second version of the DeBERTa model. It includes the 1.5B model used for the SuperGLUE single-model submission and achieving 89.9, versus human baseline 89.8. You can find more details about this submission in the authors' [blog](https://www.microsoft.com/en-us/research/blog/microsoft-deberta-surpasses-human-performance-on-the-superglue-benchmark/) New in v2: - **Vocabulary** In v2 the tokenizer is changed to use a new vocabulary of size 128K built from the training data. Instead of a GPT2-based tokenizer, the tokenizer is now [sentencepiece-based](https://github.com/google/sentencepiece) tokenizer. - **nGiE(nGram Induced Input Encoding)** The DeBERTa-v2 model uses an additional convolution layer aside with the first transformer layer to better learn the local dependency of input tokens. 
- **Sharing position projection matrix with content projection matrix in attention layer** Based on previous experiments, this can save parameters without affecting the performance. - **Apply bucket to encode relative positions** The DeBERTa-v2 model uses log bucket to encode relative positions similar to T5. - **900M model & 1.5B model** Two additional model sizes are available: 900M and 1.5B, which significantly improves the performance of downstream tasks. This model was contributed by [DeBERTa](https://huggingface.co/DeBERTa). This model TF 2.0 implementation was contributed by [kamalkraj](https://huggingface.co/kamalkraj). The original code can be found [here](https://github.com/microsoft/DeBERTa). ## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) ## DebertaV2Config [[autodoc]] DebertaV2Config ## DebertaV2Tokenizer [[autodoc]] DebertaV2Tokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## DebertaV2TokenizerFast [[autodoc]] DebertaV2TokenizerFast - build_inputs_with_special_tokens - create_token_type_ids_from_sequences <frameworkcontent> <pt> ## DebertaV2Model [[autodoc]] DebertaV2Model - forward ## DebertaV2PreTrainedModel [[autodoc]] DebertaV2PreTrainedModel - forward ## DebertaV2ForMaskedLM [[autodoc]] DebertaV2ForMaskedLM - forward ## DebertaV2ForSequenceClassification [[autodoc]] DebertaV2ForSequenceClassification - forward ## DebertaV2ForTokenClassification [[autodoc]] DebertaV2ForTokenClassification - forward ## DebertaV2ForQuestionAnswering [[autodoc]] DebertaV2ForQuestionAnswering - forward ## DebertaV2ForMultipleChoice [[autodoc]] DebertaV2ForMultipleChoice - forward </pt> <tf> ## TFDebertaV2Model [[autodoc]] TFDebertaV2Model - call ## TFDebertaV2PreTrainedModel [[autodoc]] TFDebertaV2PreTrainedModel - call ## TFDebertaV2ForMaskedLM [[autodoc]] TFDebertaV2ForMaskedLM - call ## TFDebertaV2ForSequenceClassification [[autodoc]] TFDebertaV2ForSequenceClassification - call ## TFDebertaV2ForTokenClassification [[autodoc]] TFDebertaV2ForTokenClassification - call ## TFDebertaV2ForQuestionAnswering [[autodoc]] TFDebertaV2ForQuestionAnswering - call ## TFDebertaV2ForMultipleChoice [[autodoc]] TFDebertaV2ForMultipleChoice - call </tf> </frameworkcontent>
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # NLLB ## Updated tokenizer behavior **DISCLAIMER:** The default behaviour for the tokenizer was fixed and thus changed in April 2023. The previous version adds `[self.eos_token_id, self.cur_lang_code]` at the end of the token sequence for both target and source tokenization. This is wrong as the NLLB paper mentions (page 48, 6.1.1. Model Architecture) : *Note that we prefix the source sequence with the source language, as opposed to the target language as previously done in several works (Arivazhagan et al., 2019; Johnson et al., 2017). This is primarily because we prioritize optimizing zero-shot performance of our model on any pair of 200 languages at a minor cost to supervised performance.* Previous behaviour: ```python >>> from transformers import NllbTokenizer >>> tokenizer = NllbTokenizer.from_pretrained("facebook/nllb-200-distilled-600M") >>> tokenizer("How was your day?").input_ids [13374, 1398, 4260, 4039, 248130, 2, 256047] >>> # 2: '</s>' >>> # 256047 : 'eng_Latn' ``` New behaviour ```python >>> from transformers import NllbTokenizer >>> tokenizer = NllbTokenizer.from_pretrained("facebook/nllb-200-distilled-600M") >>> tokenizer("How was your day?").input_ids [256047, 13374, 1398, 4260, 4039, 248130, 2] ``` Enabling the old behaviour can be done as follows: ```python >>> from transformers import NllbTokenizer >>> tokenizer = NllbTokenizer.from_pretrained("facebook/nllb-200-distilled-600M", legacy_behaviour=True) ``` For more details, feel free to check the linked [PR](https://github.com/huggingface/transformers/pull/22313) and [Issue](https://github.com/huggingface/transformers/issues/19943). ## Overview The NLLB model was presented in [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by Marta R. Costa-jussร , James Cross, Onur ร‡elebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmรกn, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, and Jeff Wang. The abstract of the paper is the following: *Driven by the goal of eradicating language barriers on a global scale, machine translation has solidified itself as a key focus of artificial intelligence research today. However, such efforts have coalesced around a small subset of languages, leaving behind the vast majority of mostly low-resource languages. 
What does it take to break the 200 language barrier while ensuring safe, high quality results, all while keeping ethical considerations in mind? In No Language Left Behind, we took on this challenge by first contextualizing the need for low-resource language translation support through exploratory interviews with native speakers. Then, we created datasets and models aimed at narrowing the performance gap between low and high-resource languages. More specifically, we developed a conditional compute model based on Sparsely Gated Mixture of Experts that is trained on data obtained with novel and effective data mining techniques tailored for low-resource languages. We propose multiple architectural and training improvements to counteract overfitting while training on thousands of tasks. Critically, we evaluated the performance of over 40,000 different translation directions using a human-translated benchmark, Flores-200, and combined human evaluation with a novel toxicity benchmark covering all languages in Flores-200 to assess translation safety. Our model achieves an improvement of 44% BLEU relative to the previous state-of-the-art, laying important groundwork towards realizing a universal translation system.* This implementation contains the dense models available on release. **The sparse model NLLB-MoE (Mixture of Expert) is now available! More details [here](nllb-moe)** This model was contributed by [Lysandre](https://huggingface.co/lysandre). The authors' code can be found [here](https://github.com/facebookresearch/fairseq/tree/nllb). ## Generating with NLLB While generating the target text set the `forced_bos_token_id` to the target language id. The following example shows how to translate English to French using the *facebook/nllb-200-distilled-600M* model. Note that we're using the BCP-47 code for French `fra_Latn`. See [here](https://github.com/facebookresearch/flores/blob/main/flores200/README.md#languages-in-flores-200) for the list of all BCP-47 in the Flores 200 dataset. ```python >>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M") >>> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M") >>> article = "UN Chief says there is no military solution in Syria" >>> inputs = tokenizer(article, return_tensors="pt") >>> translated_tokens = model.generate( ... **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["fra_Latn"], max_length=30 ... ) >>> tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0] Le chef de l'ONU dit qu'il n'y a pas de solution militaire en Syrie ``` ### Generating from any other language than English English (`eng_Latn`) is set as the default language from which to translate. In order to specify that you'd like to translate from a different language, you should specify the BCP-47 code in the `src_lang` keyword argument of the tokenizer initialization. See example below for a translation from romanian to german: ```py >>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained( ... "facebook/nllb-200-distilled-600M", token=True, src_lang="ron_Latn" ... ) >>> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M", token=True) >>> article = "ลžeful ONU spune cฤƒ nu existฤƒ o soluลฃie militarฤƒ รฎn Siria" >>> inputs = tokenizer(article, return_tensors="pt") >>> translated_tokens = model.generate( ... 
**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["deu_Latn"], max_length=30 ... ) >>> tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0] UN-Chef sagt, es gibt keine militรคrische Lรถsung in Syrien ``` ## Resources - [Translation task guide](../tasks/translation) - [Summarization task guide](../tasks/summarization) ## NllbTokenizer [[autodoc]] NllbTokenizer - build_inputs_with_special_tokens ## NllbTokenizerFast [[autodoc]] NllbTokenizerFast ## Using Flash Attention 2 Flash Attention 2 is a faster, optimized version of the attention scores computation which relies on `cuda` kernels. ### Installation First, check whether your hardware is compatible with Flash Attention 2. The latest list of compatible hardware can be found in the [official documentation](https://github.com/Dao-AILab/flash-attention#installation-and-features). Next, [install](https://github.com/Dao-AILab/flash-attention#installation-and-features) the latest version of Flash Attention 2: ```bash pip install -U flash-attn --no-build-isolation ``` ### Usage To load a model using Flash Attention 2, we can pass the argument `attn_implementation="flash_attention_2"` to [`.from_pretrained`](https://huggingface.co/docs/transformers/main/en/main_classes/model#transformers.PreTrainedModel.from_pretrained). You can use either `torch.float16` or `torch.bfloat16` precision. ```python >>> import torch >>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer >>> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M", torch_dtype=torch.float16, attn_implementation="flash_attention_2").to("cuda").eval() >>> tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M") >>> article = "ลžeful ONU spune cฤƒ nu existฤƒ o soluลฃie militarฤƒ รฎn Siria" >>> inputs = tokenizer(article, return_tensors="pt").to("cuda") >>> translated_tokens = model.generate( ... **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["deu_Latn"], max_length=30 ... ) >>> tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0] "UN-Chef sagt, es gibt keine militรคrische Lรถsung in Syrien" ``` ### Expected speedups Below is an expected speedup diagram that compares pure inference time between the native implementation and the Flash Attention 2. <div style="text-align: center"> <img src="https://huggingface.co/datasets/visheratin/documentation-images/resolve/main/nllb-speedup.webp"> </div>
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # TVLT ## Overview The TVLT model was proposed in [TVLT: Textless Vision-Language Transformer](https://arxiv.org/abs/2209.14156) by Zineng Tang, Jaemin Cho, Yixin Nie, Mohit Bansal (the first three authors contributed equally). The Textless Vision-Language Transformer (TVLT) is a model that uses raw visual and audio inputs for vision-and-language representation learning, without using text-specific modules such as tokenization or automatic speech recognition (ASR). It can perform various audiovisual and vision-language tasks like retrieval, question answering, etc. The abstract from the paper is the following: *In this work, we present the Textless Vision-Language Transformer (TVLT), where homogeneous transformer blocks take raw visual and audio inputs for vision-and-language representation learning with minimal modality-specific design, and do not use text-specific modules such as tokenization or automatic speech recognition (ASR). TVLT is trained by reconstructing masked patches of continuous video frames and audio spectrograms (masked autoencoding) and contrastive modeling to align video and audio. TVLT attains performance comparable to its text-based counterpart on various multimodal tasks, such as visual question answering, image retrieval, video retrieval, and multimodal sentiment analysis, with 28x faster inference speed and only 1/3 of the parameters. Our findings suggest the possibility of learning compact and efficient visual-linguistic representations from low-level visual and audio signals without assuming the prior existence of text.* <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/tvlt_architecture.png" alt="drawing" width="600"/> </p> <small> TVLT architecture. Taken from the <a href="[https://arxiv.org/abs/2102.03334](https://arxiv.org/abs/2209.14156)">original paper</a>. </small> The original code can be found [here](https://github.com/zinengtang/TVLT). This model was contributed by [Zineng Tang](https://huggingface.co/ZinengTang). ## Usage tips - TVLT is a model that takes both `pixel_values` and `audio_values` as input. One can use [`TvltProcessor`] to prepare data for the model. This processor wraps an image processor (for the image/video modality) and an audio feature extractor (for the audio modality) into one. - TVLT is trained with images/videos and audios of various sizes: the authors resize and crop the input images/videos to 224 and limit the length of audio spectrogram to 2048. To make batching of videos and audios possible, the authors use a `pixel_mask` that indicates which pixels are real/padding and `audio_mask` that indicates which audio values are real/padding. 
- The design of TVLT is very similar to that of a standard Vision Transformer (ViT) and masked autoencoder (MAE) as in [ViTMAE](vitmae). The difference is that the model includes embedding layers for the audio modality. - The PyTorch version of this model is only available in torch 1.10 and higher. ## TvltConfig [[autodoc]] TvltConfig ## TvltProcessor [[autodoc]] TvltProcessor - __call__ ## TvltImageProcessor [[autodoc]] TvltImageProcessor - preprocess ## TvltFeatureExtractor [[autodoc]] TvltFeatureExtractor - __call__ ## TvltModel [[autodoc]] TvltModel - forward ## TvltForPreTraining [[autodoc]] TvltForPreTraining - forward ## TvltForAudioVisualClassification [[autodoc]] TvltForAudioVisualClassification - forward
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # LiLT ## Overview The LiLT model was proposed in [LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding](https://arxiv.org/abs/2202.13669) by Jiapeng Wang, Lianwen Jin, Kai Ding. LiLT allows to combine any pre-trained RoBERTa text encoder with a lightweight Layout Transformer, to enable [LayoutLM](layoutlm)-like document understanding for many languages. The abstract from the paper is the following: *Structured document understanding has attracted considerable attention and made significant progress recently, owing to its crucial role in intelligent document processing. However, most existing related models can only deal with the document data of specific language(s) (typically English) included in the pre-training collection, which is extremely limited. To address this issue, we propose a simple yet effective Language-independent Layout Transformer (LiLT) for structured document understanding. LiLT can be pre-trained on the structured documents of a single language and then directly fine-tuned on other languages with the corresponding off-the-shelf monolingual/multilingual pre-trained textual models. Experimental results on eight languages have shown that LiLT can achieve competitive or even superior performance on diverse widely-used downstream benchmarks, which enables language-independent benefit from the pre-training of document layout structure.* <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/lilt_architecture.jpg" alt="drawing" width="600"/> <small> LiLT architecture. Taken from the <a href="https://arxiv.org/abs/2202.13669">original paper</a>. </small> This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/jpwang/lilt). ## Usage tips - To combine the Language-Independent Layout Transformer with a new RoBERTa checkpoint from the [hub](https://huggingface.co/models?search=roberta), refer to [this guide](https://github.com/jpWang/LiLT#or-generate-your-own-checkpoint-optional). The script will result in `config.json` and `pytorch_model.bin` files being stored locally. After doing this, one can do the following (assuming you're logged in with your HuggingFace account): ```python from transformers import LiltModel model = LiltModel.from_pretrained("path_to_your_files") model.push_to_hub("name_of_repo_on_the_hub") ``` - When preparing data for the model, make sure to use the token vocabulary that corresponds to the RoBERTa checkpoint you combined with the Layout Transformer. 
- As [lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) uses the same vocabulary as [LayoutLMv3](layoutlmv3), one can use [`LayoutLMv3TokenizerFast`] to prepare data for the model (see the usage sketch at the end of this page). The same is true for [lilt-infoxlm-base](https://huggingface.co/SCUT-DLVCLab/lilt-infoxlm-base): one can use [`LayoutXLMTokenizerFast`] for that model. ## Resources A list of official Hugging Face and community (indicated by ๐ŸŒŽ) resources to help you get started with LiLT. - Demo notebooks for LiLT can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LiLT). **Documentation resources** - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. ## LiltConfig [[autodoc]] LiltConfig ## LiltModel [[autodoc]] LiltModel - forward ## LiltForSequenceClassification [[autodoc]] LiltForSequenceClassification - forward ## LiltForTokenClassification [[autodoc]] LiltForTokenClassification - forward ## LiltForQuestionAnswering [[autodoc]] LiltForQuestionAnswering - forward
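As referenced in the usage tips above, here is a minimal sketch of preparing word-level inputs with [`LayoutLMv3TokenizerFast`] for a LiLT model. The `microsoft/layoutlmv3-base` tokenizer, the dummy words and boxes, and the randomly initialized token-classification head are assumptions for illustration; start from a fine-tuned checkpoint for real predictions:

```python
import torch
from transformers import LayoutLMv3TokenizerFast, LiltForTokenClassification

# lilt-roberta-en-base shares its vocabulary with LayoutLMv3, so the LayoutLMv3 tokenizer can prepare its inputs
tokenizer = LayoutLMv3TokenizerFast.from_pretrained("microsoft/layoutlmv3-base")
model = LiltForTokenClassification.from_pretrained("SCUT-DLVCLab/lilt-roberta-en-base")

# words extracted from a document image and their bounding boxes, normalized to a 0-1000 scale
words = ["invoice", "number", "12345"]
boxes = [[48, 84, 156, 108], [160, 84, 254, 108], [260, 84, 330, 108]]

encoding = tokenizer(words, boxes=boxes, return_tensors="pt")

with torch.no_grad():
    outputs = model(**encoding)

predicted_classes = outputs.logits.argmax(-1)
```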
0
mavonic_private_repos/transformers/docs/source/en
mavonic_private_repos/transformers/docs/source/en/model_doc/vits.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # VITS ## Overview The VITS model was proposed in [Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech](https://arxiv.org/abs/2106.06103) by Jaehyeon Kim, Jungil Kong, Juhee Son. VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior. A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers, much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to synthesise speech with different rhythms from the same input text. The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training. To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor, the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform. The abstract from the paper is the following: *Several recent end-to-end text-to-speech (TTS) models enabling single-stage training and parallel sampling have been proposed, but their sample quality does not match that of two-stage TTS systems. In this work, we present a parallel end-to-end TTS method that generates more natural sounding audio than current two-stage models. Our method adopts variational inference augmented with normalizing flows and an adversarial training process, which improves the expressive power of generative modeling. We also propose a stochastic duration predictor to synthesize speech with diverse rhythms from input text. With the uncertainty modeling over latent variables and the stochastic duration predictor, our method expresses the natural one-to-many relationship in which a text input can be spoken in multiple ways with different pitches and rhythms. 
A subjective human evaluation (mean opinion score, or MOS) on the LJ Speech, a single speaker dataset, shows that our method outperforms the best publicly available TTS systems and achieves a MOS comparable to ground truth.* This model can also be used with TTS checkpoints from [Massively Multilingual Speech (MMS)](https://arxiv.org/abs/2305.13516) as these checkpoints use the same architecture and a slightly modified tokenizer. This model was contributed by [Matthijs](https://huggingface.co/Matthijs) and [sanchit-gandhi](https://huggingface.co/sanchit-gandhi). The original code can be found [here](https://github.com/jaywalnut310/vits). ## Usage examples Both the VITS and MMS-TTS checkpoints can be used with the same API. Since the flow-based model is non-deterministic, it is good practice to set a seed to ensure reproducibility of the outputs. For languages with a Roman alphabet, such as English or French, the tokenizer can be used directly to pre-process the text inputs. The following code example runs a forward pass using the MMS-TTS English checkpoint: ```python import torch from transformers import VitsTokenizer, VitsModel, set_seed tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-eng") model = VitsModel.from_pretrained("facebook/mms-tts-eng") inputs = tokenizer(text="Hello - my dog is cute", return_tensors="pt") set_seed(555) # make deterministic with torch.no_grad(): outputs = model(**inputs) waveform = outputs.waveform[0] ``` The resulting waveform can be saved as a `.wav` file: ```python import scipy scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=waveform) ``` Or displayed in a Jupyter Notebook / Google Colab: ```python from IPython.display import Audio Audio(waveform, rate=model.config.sampling_rate) ``` For certain languages with a non-Roman alphabet, such as Arabic, Mandarin or Hindi, the [`uroman`](https://github.com/isi-nlp/uroman) perl package is required to pre-process the text inputs to the Roman alphabet. You can check whether you require the `uroman` package for your language by inspecting the `is_uroman` attribute of the pre-trained `tokenizer`: ```python from transformers import VitsTokenizer tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-eng") print(tokenizer.is_uroman) ``` If required, you should apply the uroman package to your text inputs **prior** to passing them to the `VitsTokenizer`, since currently the tokenizer does not support performing the pre-processing itself. To do this, first clone the uroman repository to your local machine and set the bash variable `UROMAN` to the local path: ```bash git clone https://github.com/isi-nlp/uroman.git cd uroman export UROMAN=$(pwd) ``` You can then pre-process the text input using the following code snippet. 
You can either rely on using the bash variable `UROMAN` to point to the uroman repository, or you can pass the uroman directory as an argument to the `uromanize` function: ```python import torch from transformers import VitsTokenizer, VitsModel, set_seed import os import subprocess tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-kor") model = VitsModel.from_pretrained("facebook/mms-tts-kor") def uromanize(input_string, uroman_path): """Convert non-Roman strings to Roman using the `uroman` perl package.""" script_path = os.path.join(uroman_path, "bin", "uroman.pl") command = ["perl", script_path] process = subprocess.Popen(command, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) # Execute the perl command stdout, stderr = process.communicate(input=input_string.encode()) if process.returncode != 0: raise ValueError(f"Error {process.returncode}: {stderr.decode()}") # Return the output as a string and skip the new-line character at the end return stdout.decode()[:-1] text = "์ด๋ด ๋ฌด์Šจ ์ผ์ด์•ผ" uromanized_text = uromanize(text, uroman_path=os.environ["UROMAN"]) inputs = tokenizer(text=uromanized_text, return_tensors="pt") set_seed(555) # make deterministic with torch.no_grad(): outputs = model(inputs["input_ids"]) waveform = outputs.waveform[0] ``` ## VitsConfig [[autodoc]] VitsConfig ## VitsTokenizer [[autodoc]] VitsTokenizer - __call__ - save_vocabulary ## VitsModel [[autodoc]] VitsModel - forward
0
mavonic_private_repos/transformers/docs/source/en
mavonic_private_repos/transformers/docs/source/en/model_doc/jamba.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Jamba ## Overview Jamba is a state-of-the-art, hybrid SSM-Transformer LLM. It is the first production-scale Mamba implementation, which opens up interesting research and application opportunities. While this initial experimentation shows encouraging gains, we expect these to be further enhanced with future optimizations and explorations. For full details of this model, please read the [release blog post](https://www.ai21.com/blog/announcing-jamba). ### Model Details Jamba is a pretrained, mixture-of-experts (MoE) generative text model, with 12B active parameters and a total of 52B parameters across all experts. It supports a 256K context length, and can fit up to 140K tokens on a single 80GB GPU. As depicted in the diagram below, Jamba's architecture features a blocks-and-layers approach that allows Jamba to successfully integrate Transformer and Mamba architectures altogether. Each Jamba block contains either an attention or a Mamba layer, followed by a multi-layer perceptron (MLP), producing an overall ratio of one Transformer layer out of every eight total layers. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/jamba_architecture.png" alt="drawing" width="600"/> ## Usage ### Prerequisites Jamba requires you to use `transformers` version 4.39.0 or higher: ```bash pip install transformers>=4.39.0 ``` In order to run optimized Mamba implementations, you first need to install `mamba-ssm` and `causal-conv1d`: ```bash pip install mamba-ssm causal-conv1d>=1.2.0 ``` You also need the model to be on a CUDA device. You can run the model without the optimized Mamba kernels, but it is **not** recommended as it will result in significantly higher latency. To do so, you'll need to specify `use_mamba_kernels=False` when loading the model (a short sketch is given at the end of this page). ### Run the model ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("ai21labs/Jamba-v0.1") tokenizer = AutoTokenizer.from_pretrained("ai21labs/Jamba-v0.1") input_ids = tokenizer("In the recent Super Bowl LVIII,", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) print(tokenizer.batch_decode(outputs)) # ["<|startoftext|>In the recent Super Bowl LVIII, the Kansas City Chiefs emerged victorious, defeating the San Francisco 49ers in a thrilling overtime showdown. The game was a nail-biter, with both teams showcasing their skills and determination.\n\nThe Chiefs, led by their star quarterback Patrick Mahomes, displayed their offensive prowess, while the 49ers, led by their strong defense, put up a tough fight. 
The game went into overtime, with the Chiefs ultimately securing the win with a touchdown.\n\nThe victory marked the Chiefs' second Super Bowl win in four years, solidifying their status as one of the top teams in the NFL. The game was a testament to the skill and talent of both teams, and a thrilling end to the NFL season.\n\nThe Super Bowl is not just about the game itself, but also about the halftime show and the commercials. This year's halftime show featured a star-studded lineup, including Usher, Alicia Keys, and Lil Jon. The show was a spectacle of music and dance, with the performers delivering an energetic and entertaining performance.\n"] ``` <details> <summary><strong>Loading the model in half precision</strong></summary> The published checkpoint is saved in BF16. In order to load it into RAM in BF16/FP16, you need to specify `torch_dtype`: ```python from transformers import AutoModelForCausalLM import torch model = AutoModelForCausalLM.from_pretrained("ai21labs/Jamba-v0.1", torch_dtype=torch.bfloat16) # you can also use torch_dtype=torch.float16 ``` When using half precision, you can enable the [FlashAttention2](https://github.com/Dao-AILab/flash-attention) implementation of the attention blocks. In order to use it, you also need the model on a CUDA device. Since in this precision the model is too big to fit on a single 80GB GPU, you'll also need to parallelize it using [accelerate](https://huggingface.co/docs/accelerate/index): ```python from transformers import AutoModelForCausalLM import torch model = AutoModelForCausalLM.from_pretrained("ai21labs/Jamba-v0.1", torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2", device_map="auto") ``` </details> <details><summary><strong>Load the model in 8-bit</strong></summary> **Using 8-bit precision, it is possible to fit up to 140K sequence lengths on a single 80GB GPU.** You can easily quantize the model to 8-bit using [bitsandbytes](https://huggingface.co/docs/bitsandbytes/index). To avoid degrading model quality, we recommend excluding the Mamba blocks from the quantization: ```python import torch from transformers import AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_skip_modules=["mamba"]) model = AutoModelForCausalLM.from_pretrained( "ai21labs/Jamba-v0.1", torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2", quantization_config=quantization_config ) ``` </details> ## JambaConfig [[autodoc]] JambaConfig ## JambaModel [[autodoc]] JambaModel - forward ## JambaForCausalLM [[autodoc]] JambaForCausalLM - forward ## JambaForSequenceClassification [[autodoc]] transformers.JambaForSequenceClassification - forward
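As mentioned in the prerequisites above, the optimized Mamba kernels can be disabled at load time. A minimal sketch, shown only to illustrate the flag (precision and device placement are omitted here and should be chosen as in the examples above):

```python
from transformers import AutoModelForCausalLM

# fall back to the pure-PyTorch Mamba path; no `mamba-ssm`/`causal-conv1d` kernels required,
# but expect significantly higher latency
model = AutoModelForCausalLM.from_pretrained("ai21labs/Jamba-v0.1", use_mamba_kernels=False)
```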
0
mavonic_private_repos/transformers/docs/source/en
mavonic_private_repos/transformers/docs/source/en/model_doc/glpn.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # GLPN <Tip> This is a recently introduced model so the API hasn't been tested extensively. There may be some bugs or slight breaking changes to fix it in the future. If you see something strange, file a [Github Issue](https://github.com/huggingface/transformers/issues/new?assignees=&labels=&template=bug-report.md&title). </Tip> ## Overview The GLPN model was proposed in [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim. GLPN combines [SegFormer](segformer)'s hierarchical mix-Transformer with a lightweight decoder for monocular depth estimation. The proposed decoder shows better performance than the previously proposed decoders, with considerably less computational complexity. The abstract from the paper is the following: *Depth estimation from a single image is an important task that can be applied to various fields in computer vision, and has grown rapidly with the development of convolutional neural networks. In this paper, we propose a novel structure and training strategy for monocular depth estimation to further improve the prediction accuracy of the network. We deploy a hierarchical transformer encoder to capture and convey the global context, and design a lightweight yet powerful decoder to generate an estimated depth map while considering local connectivity. By constructing connected paths between multi-scale local features and the global decoding stream with our proposed selective feature fusion module, the network can integrate both representations and recover fine details. In addition, the proposed decoder shows better performance than the previously proposed decoders, with considerably less computational complexity. Furthermore, we improve the depth-specific augmentation method by utilizing an important observation in depth estimation to enhance the model. Our network achieves state-of-the-art performance over the challenging depth dataset NYU Depth V2. Extensive experiments have been conducted to validate and show the effectiveness of the proposed approach. Finally, our model shows better generalisation ability and robustness than other comparative models.* <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/glpn_architecture.jpg" alt="drawing" width="600"/> <small> Summary of the approach. Taken from the <a href="https://arxiv.org/abs/2201.07436" target="_blank">original paper</a>. </small> This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/vinvino02/GLPDepth). ## Resources A list of official Hugging Face and community (indicated by ๐ŸŒŽ) resources to help you get started with GLPN. 
- Demo notebooks for [`GLPNForDepthEstimation`] can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/GLPN). - [Monocular depth estimation task guide](../tasks/monocular_depth_estimation) ## GLPNConfig [[autodoc]] GLPNConfig ## GLPNFeatureExtractor [[autodoc]] GLPNFeatureExtractor - __call__ ## GLPNImageProcessor [[autodoc]] GLPNImageProcessor - preprocess ## GLPNModel [[autodoc]] GLPNModel - forward ## GLPNForDepthEstimation [[autodoc]] GLPNForDepthEstimation - forward
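A minimal depth-estimation sketch, assuming the `vinvino02/glpn-kitti` checkpoint and a sample COCO image purely for illustration:

```python
import requests
import torch
from PIL import Image
from transformers import GLPNImageProcessor, GLPNForDepthEstimation

image_processor = GLPNImageProcessor.from_pretrained("vinvino02/glpn-kitti")
model = GLPNForDepthEstimation.from_pretrained("vinvino02/glpn-kitti")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# interpolate the predicted depth map back to the original image resolution
prediction = torch.nn.functional.interpolate(
    outputs.predicted_depth.unsqueeze(1),
    size=image.size[::-1],
    mode="bicubic",
    align_corners=False,
).squeeze()
```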
0
mavonic_private_repos/transformers/docs/source/en
mavonic_private_repos/transformers/docs/source/en/model_doc/wav2vec2.md
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Wav2Vec2 ## Overview The Wav2Vec2 model was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli. The abstract from the paper is the following: *We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.* This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). ## Usage tips - Wav2Vec2 is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. - The Wav2Vec2 model was trained using connectionist temporal classification (CTC), so the model output has to be decoded using [`Wav2Vec2CTCTokenizer`] (a minimal inference sketch is given at the end of this page). ## Using Flash Attention 2 Flash Attention 2 is a faster, optimized implementation of the model's attention computation. ### Installation First, check whether your hardware is compatible with Flash Attention 2. The latest list of compatible hardware can be found in the [official documentation](https://github.com/Dao-AILab/flash-attention#installation-and-features). If your hardware is not compatible with Flash Attention 2, you can still benefit from attention kernel optimisations through Better Transformer support covered [here](https://huggingface.co/docs/transformers/main/en/model_doc/bark#using-better-transformer). Next, [install](https://github.com/Dao-AILab/flash-attention#installation-and-features) the latest version of Flash Attention 2: ```bash pip install -U flash-attn --no-build-isolation ``` ### Usage To load a model using Flash Attention 2, we can pass the argument `attn_implementation="flash_attention_2"` to [`.from_pretrained`](https://huggingface.co/docs/transformers/main/en/main_classes/model#transformers.PreTrainedModel.from_pretrained). We'll also load the model in half-precision (e.g. 
`torch.float16`), since it results in almost no degradation to audio quality but significantly lower memory usage and faster inference: ```python >>> import torch >>> from transformers import Wav2Vec2Model >>> model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-large-960h-lv60-self", torch_dtype=torch.float16, attn_implementation="flash_attention_2").to("cuda") ``` ### Expected speedups Below is an expected speedup diagram comparing the pure inference time between the native implementation in transformers of the `facebook/wav2vec2-large-960h-lv60-self` model and the flash-attention-2 and SDPA (scaled dot-product attention) versions. We show the average speedup obtained on the `librispeech_asr` `clean` validation split: <div style="text-align: center"> <img src="https://huggingface.co/datasets/kamilakesbi/transformers_image_doc/resolve/main/data/Wav2Vec2_speedup.png"> </div> ## Resources A list of official Hugging Face and community (indicated by ๐ŸŒŽ) resources to help you get started with Wav2Vec2. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. <PipelineTag pipeline="audio-classification"/> - A notebook on how to [leverage a pretrained Wav2Vec2 model for emotion classification](https://colab.research.google.com/github/m3hrdadfi/soxan/blob/main/notebooks/Emotion_recognition_in_Greek_speech_using_Wav2Vec2.ipynb). ๐ŸŒŽ - [`Wav2Vec2ForCTC`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/audio_classification.ipynb). - [Audio classification task guide](../tasks/audio_classification) <PipelineTag pipeline="automatic-speech-recognition"/> - A blog post on [boosting Wav2Vec2 with n-grams in ๐Ÿค— Transformers](https://huggingface.co/blog/wav2vec2-with-ngram). - A blog post on how to [finetune Wav2Vec2 for English ASR with ๐Ÿค— Transformers](https://huggingface.co/blog/fine-tune-wav2vec2-english). - A blog post on [finetuning XLS-R for Multi-Lingual ASR with ๐Ÿค— Transformers](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2). - A notebook on how to [create YouTube captions from any video by transcribing audio with Wav2Vec2](https://colab.research.google.com/github/Muennighoff/ytclipcc/blob/main/wav2vec_youtube_captions.ipynb). ๐ŸŒŽ - [`Wav2Vec2ForCTC`] is supported by a notebook on [how to finetune a speech recognition model in English](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/speech_recognition.ipynb), and [how to finetune a speech recognition model in any language](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multi_lingual_speech_recognition.ipynb). - [Automatic speech recognition task guide](../tasks/asr) ๐Ÿš€ Deploy - A blog post on how to deploy Wav2Vec2 for [Automatic Speech Recognition with Hugging Face's Transformers & Amazon SageMaker](https://www.philschmid.de/automatic-speech-recognition-sagemaker). 
## Wav2Vec2Config [[autodoc]] Wav2Vec2Config ## Wav2Vec2CTCTokenizer [[autodoc]] Wav2Vec2CTCTokenizer - __call__ - save_vocabulary - decode - batch_decode - set_target_lang ## Wav2Vec2FeatureExtractor [[autodoc]] Wav2Vec2FeatureExtractor - __call__ ## Wav2Vec2Processor [[autodoc]] Wav2Vec2Processor - __call__ - pad - from_pretrained - save_pretrained - batch_decode - decode ## Wav2Vec2ProcessorWithLM [[autodoc]] Wav2Vec2ProcessorWithLM - __call__ - pad - from_pretrained - save_pretrained - batch_decode - decode ### Decoding multiple audios If you are planning to decode multiple batches of audios, you should consider using [`~Wav2Vec2ProcessorWithLM.batch_decode`] and passing an instantiated `multiprocessing.Pool`. Otherwise, [`~Wav2Vec2ProcessorWithLM.batch_decode`] performance will be slower than calling [`~Wav2Vec2ProcessorWithLM.decode`] for each audio individually, as it internally instantiates a new `Pool` for every call. See the example below: ```python >>> # Let's see how to use a user-managed pool for batch decoding multiple audios >>> from multiprocessing import get_context >>> from transformers import AutoTokenizer, AutoProcessor, AutoModelForCTC >>> from datasets import load_dataset >>> import datasets >>> import torch >>> # import model, feature extractor, tokenizer >>> model = AutoModelForCTC.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm").to("cuda") >>> processor = AutoProcessor.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm") >>> # load example dataset >>> dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> dataset = dataset.cast_column("audio", datasets.Audio(sampling_rate=16_000)) >>> def map_to_array(batch): ... batch["speech"] = batch["audio"]["array"] ... return batch >>> # prepare speech data for batch inference >>> dataset = dataset.map(map_to_array, remove_columns=["audio"]) >>> def map_to_pred(batch, pool): ... inputs = processor(batch["speech"], sampling_rate=16_000, padding=True, return_tensors="pt") ... inputs = {k: v.to("cuda") for k, v in inputs.items()} ... with torch.no_grad(): ... logits = model(**inputs).logits ... transcription = processor.batch_decode(logits.cpu().numpy(), pool).text ... batch["transcription"] = transcription ... return batch >>> # note: pool should be instantiated *after* `Wav2Vec2ProcessorWithLM`. >>> # otherwise, the LM won't be available to the pool's sub-processes >>> # select number of processes and batch_size based on number of CPU cores available and on dataset size >>> with get_context("fork").Pool(processes=2) as pool: ... result = dataset.map( ... map_to_pred, batched=True, batch_size=2, fn_kwargs={"pool": pool}, remove_columns=["speech"] ... 
) >>> result["transcription"][:2] ['MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL', "NOR IS MISTER COULTER'S MANNER LESS INTERESTING THAN HIS MATTER"] ``` ## Wav2Vec2 specific outputs [[autodoc]] models.wav2vec2_with_lm.processing_wav2vec2_with_lm.Wav2Vec2DecoderWithLMOutput [[autodoc]] models.wav2vec2.modeling_wav2vec2.Wav2Vec2BaseModelOutput [[autodoc]] models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForPreTrainingOutput [[autodoc]] models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2BaseModelOutput [[autodoc]] models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2ForPreTrainingOutput <frameworkcontent> <pt> ## Wav2Vec2Model [[autodoc]] Wav2Vec2Model - forward ## Wav2Vec2ForCTC [[autodoc]] Wav2Vec2ForCTC - forward - load_adapter ## Wav2Vec2ForSequenceClassification [[autodoc]] Wav2Vec2ForSequenceClassification - forward ## Wav2Vec2ForAudioFrameClassification [[autodoc]] Wav2Vec2ForAudioFrameClassification - forward ## Wav2Vec2ForXVector [[autodoc]] Wav2Vec2ForXVector - forward ## Wav2Vec2ForPreTraining [[autodoc]] Wav2Vec2ForPreTraining - forward </pt> <tf> ## TFWav2Vec2Model [[autodoc]] TFWav2Vec2Model - call ## TFWav2Vec2ForSequenceClassification [[autodoc]] TFWav2Vec2ForSequenceClassification - call ## TFWav2Vec2ForCTC [[autodoc]] TFWav2Vec2ForCTC - call </tf> <jax> ## FlaxWav2Vec2Model [[autodoc]] FlaxWav2Vec2Model - __call__ ## FlaxWav2Vec2ForCTC [[autodoc]] FlaxWav2Vec2ForCTC - __call__ ## FlaxWav2Vec2ForPreTraining [[autodoc]] FlaxWav2Vec2ForPreTraining - __call__ </jax> </frameworkcontent>
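To complement the CTC decoding tip in the usage section above, here is a minimal inference sketch. The `facebook/wav2vec2-base-960h` checkpoint and the dummy LibriSpeech split are assumptions chosen for illustration:

```python
import torch
from datasets import load_dataset
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

# load a 16 kHz speech sample
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio = dataset[0]["audio"]["array"]

# the model consumes the raw waveform as a float array
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# greedy CTC decoding: take the most likely token per frame, then collapse repeats and blanks
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
print(transcription)
```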
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/fr/_config.py
# docstyle-ignore INSTALL_CONTENT = """ # Installation de Transformers ! pip install transformers datasets evaluate accelerate # Pour installer ร  partir du code source au lieu de la derniรจre version, commentez la commande ci-dessus et dรฉcommentez la suivante. # ! pip install git+https://github.com/huggingface/transformers.git """ notebook_first_cells = [{"type": "code", "content": INSTALL_CONTENT}] black_avoid_patterns = { "{processor_class}": "FakeProcessorClass", "{model_class}": "FakeModelClass", "{object_class}": "FakeObjectClass", }
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/fr/autoclass_tutorial.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Chargement d'instances prรฉ-entraรฎnรฉes avec une AutoClass Avec autant d'architectures Transformer diffรฉrentes, il peut รชtre difficile d'en crรฉer une pour votre ensemble de poids (aussi appelรฉs "weights" ou "checkpoint" en anglais). Dans l'idรฉe de crรฉer une librairie facile, simple et flexible ร  utiliser, ๐Ÿค— Transformers fournit une `AutoClass` qui infรจre et charge automatiquement l'architecture correcte ร  partir d'un ensemble de poids donnรฉ. La fonction `from_pretrained()` vous permet de charger rapidement un modรจle prรฉ-entraรฎnรฉ pour n'importe quelle architecture afin que vous n'ayez pas ร  consacrer du temps et des ressources ร  l'entraรฎnement d'un modรจle ร  partir de zรฉro. Produire un tel code indรฉpendant d'un ensemble de poids signifie que si votre code fonctionne pour un ensemble de poids, il fonctionnera avec un autre ensemble - tant qu'il a รฉtรฉ entraรฎnรฉ pour une tรขche similaire - mรชme si l'architecture est diffรฉrente. <Tip> Rappel, l'architecture fait rรฉfรฉrence au squelette du modรจle et l'ensemble de poids contient les poids pour une architecture donnรฉe. Par exemple, [BERT](https://huggingface.co/google-bert/bert-base-uncased) est une architecture, tandis que `google-bert/bert-base-uncased` est un ensemble de poids. Le terme modรจle est gรฉnรฉral et peut signifier soit architecture soit ensemble de poids. </Tip> Dans ce tutoriel, vous apprendrez ร : * Charger un tokenizer prรฉ-entraรฎnรฉ. * Charger un processeur d'image prรฉ-entraรฎnรฉ. * Charger un extracteur de caractรฉristiques prรฉ-entraรฎnรฉ. * Charger un processeur prรฉ-entraรฎnรฉ. * Charger un modรจle prรฉ-entraรฎnรฉ. ## AutoTokenizer Quasiment toutes les tรขches de traitement du langage (NLP) commencent avec un tokenizer. Un tokenizer convertit votre texte initial dans un format qui peut รชtre traitรฉ par le modรจle. Chargez un tokenizer avec [`AutoTokenizer.from_pretrained`]: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased") ``` Puis, transformez votre texte initial comme montrรฉ ci-dessous: ```py >>> sequence = "In a hole in the ground there lived a hobbit." >>> print(tokenizer(sequence)) {'input_ids': [101, 1999, 1037, 4920, 1999, 1996, 2598, 2045, 2973, 1037, 7570, 10322, 4183, 1012, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} ``` ## AutoImageProcessor Pour les tรขches de vision, un processeur d'image traite l'image pour la formater correctment. 
```py >>> from transformers import AutoImageProcessor >>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224") ``` ## AutoFeatureExtractor Pour les tรขches audio, un extracteur de caractรฉristiques (aussi appelรฉs "features" en anglais) traite le signal audio pour le formater correctement. Chargez un extracteur de caractรฉristiques avec [`AutoFeatureExtractor.from_pretrained`]: ```py >>> from transformers import AutoFeatureExtractor >>> feature_extractor = AutoFeatureExtractor.from_pretrained( ... "ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition" ... ) ``` ## AutoProcessor Les tรขches multimodales nรฉcessitent un processeur qui combine deux types d'outils de prรฉtraitement. Par exemple, le modรจle [LayoutLMV2](model_doc/layoutlmv2) nรฉcessite un processeur d'image pour traiter les images et un tokenizer pour traiter le texte ; un processeur combine les deux. Chargez un processeur avec [`AutoProcessor.from_pretrained`]: ```py >>> from transformers import AutoProcessor >>> processor = AutoProcessor.from_pretrained("microsoft/layoutlmv2-base-uncased") ``` ## AutoModel <frameworkcontent> <pt> Enfin, les classes `AutoModelFor` vous permettent de charger un modรจle prรฉ-entraรฎnรฉ pour une tรขche donnรฉe (voir [ici](model_doc/auto) pour une liste complรจte des tรขches disponibles). Par exemple, chargez un modรจle pour la classification de sรฉquence avec [`AutoModelForSequenceClassification.from_pretrained`]: ```py >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased") ``` Rรฉutilisez facilement le mรชme ensemble de poids pour charger une architecture pour une tรขche diffรฉrente : ```py >>> from transformers import AutoModelForTokenClassification >>> model = AutoModelForTokenClassification.from_pretrained("distilbert/distilbert-base-uncased") ``` <Tip warning={true}> Pour les modรจles PyTorch, la fonction `from_pretrained()` utilise `torch.load()` qui utilise `pickle` en interne et est connu pour รชtre non sรฉcurisรฉ. En gรฉnรฉral, ne chargez jamais un modรจle qui pourrait provenir d'une source non fiable, ou qui pourrait avoir รฉtรฉ altรฉrรฉ. Ce risque de sรฉcuritรฉ est partiellement attรฉnuรฉ pour les modรจles hรฉbergรฉs publiquement sur le Hugging Face Hub, qui sont [scannรฉs pour les logiciels malveillants](https://huggingface.co/docs/hub/security-malware) ร  chaque modification. Consultez la [documentation du Hub](https://huggingface.co/docs/hub/security) pour connaรฎtre les meilleures pratiques comme la [vรฉrification des modifications signรฉes](https://huggingface.co/docs/hub/security-gpg#signing-commits-with-gpg) avec GPG. Les points de contrรดle TensorFlow et Flax ne sont pas concernรฉs, et peuvent รชtre chargรฉs dans des architectures PyTorch en utilisant les arguments `from_tf` et `from_flax` de la fonction `from_pretrained` pour contourner ce problรจme. </Tip> En gรฉnรฉral, nous recommandons d'utiliser les classes `AutoTokenizer` et `AutoModelFor` pour charger des instances prรฉ-entraรฎnรฉes de tokenizers et modรจles respectivement. Cela vous permettra de charger la bonne architecture ร  chaque fois. Dans le prochain [tutoriel](preprocessing), vous apprenez ร  utiliser un tokenizer, processeur d'image, extracteur de caractรฉristiques et processeur pour prรฉ-traiter un jeu de donnรฉes pour le fine-tuning. 
</pt> <tf> Enfin, les classes `TFAutoModelFor` vous permettent de charger un modรจle prรฉ-entraรฎnรฉ pour une tรขche donnรฉe (voir [ici](model_doc/auto) pour une liste complรจte des tรขches disponibles). Par exemple, chargez un modรจle pour la classification de sรฉquence avec [`TFAutoModelForSequenceClassification.from_pretrained`]: ```py >>> from transformers import TFAutoModelForSequenceClassification >>> model = TFAutoModelForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased") ``` Rรฉutilisez facilement le mรชme ensemble de poids pour charger une architecture pour une tรขche diffรฉrente : ```py >>> from transformers import TFAutoModelForTokenClassification >>> model = TFAutoModelForTokenClassification.from_pretrained("distilbert/distilbert-base-uncased") ``` En gรฉnรฉral, nous recommandons d'utiliser les classes `AutoTokenizer` et `TFAutoModelFor` pour charger des instances prรฉ-entraรฎnรฉes de tokenizers et modรจles respectivement. Cela vous permettra de charger la bonne architecture ร  chaque fois. Dans le prochain [tutoriel](preprocessing), vous apprenez ร  utiliser un tokenizer, processeur d'image, extracteur de caractรฉristiques et processeur pour prรฉ-traiter un jeu de donnรฉes pour le fine-tuning. </tf> </frameworkcontent>
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/fr/_toctree.yml
- sections: - local: index title: ๐Ÿค— Transformers - local: quicktour title: Visite rapide - local: installation title: Installation title: Dรฉmarrer - sections: - local: in_translation title: Pipelines pour l'infรฉrence - local: autoclass_tutorial title: Chargement d'instances prรฉ-entraรฎnรฉes avec une AutoClass - local: in_translation title: Prรฉparation des donnรฉes - local: in_translation title: Fine-tune un modรจle prรฉ-entraรฎnรฉ - local: in_translation title: Entraรฎnement avec un script - local: in_translation title: Entraรฎnement distribuรฉ avec ๐Ÿค— Accelerate - local: in_translation title: Chargement et entraรฎnement des adaptateurs avec ๐Ÿค— PEFT - local: in_translation title: Partager un modรจle - local: in_translation title: Agents - local: in_translation title: Gรฉnรฉration avec LLMs title: Tutoriels
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/fr/index.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๐Ÿค— Transformers Apprentissage automatique de pointe pour [PyTorch](https://pytorch.org/), [TensorFlow](https://www.tensorflow.org/), et [JAX](https://jax.readthedocs.io/en/latest/). ๐Ÿค— Transformers fournit des API et des outils pour tรฉlรฉcharger et entraรฎner facilement des modรจles prรฉ-entraรฎnรฉs de pointe. L'utilisation de modรจles prรฉ-entraรฎnรฉs peut rรฉduire vos coรปts de calcul, votre empreinte carbone, et vous faire รฉconomiser le temps et les ressources nรฉcessaires pour entraรฎner un modรจle ร  partir de zรฉro. Ces modรจles prennent en charge des tรขches courantes dans diffรฉrentes modalitรฉs, telles que : ๐Ÿ“ **Traitement automatique des langues**: classification de texte, reconnaissance d'entitรฉs, systรจme de question-rรฉponse, modรจle de langage, gรฉnรฉration de rรฉsumรฉ, traduction, question ร  choix multiples et gรฉnรฉration de texte.<br> ๐Ÿ–ผ๏ธ **Vision par ordinateur**: classification d'image, dรฉtection d'objet et segmentation.<br> ๐Ÿ—ฃ๏ธ **Audio**: reconnaissance automatique de la parole et classification audio.<br> ๐Ÿ™ **Multimodalitรฉ**: systรจme de question-rรฉponse avec des tableaux ou images, reconnaissance optique de caractรจres, extraction d'information depuis des documents scannรฉs et classification de vidรฉo. ๐Ÿค— Transformers prend en charge l'interopรฉrabilitรฉ entre PyTorch, TensorFlow et JAX. Cela permet d'utiliser un framework diffรฉrent ร  chaque รฉtape de la vie d'un modรจle, par exemple entraรฎner un modรจle en trois lignes de code avec un framework, et le charger pour l'infรฉrence avec un autre. Les modรจles peuvent รฉgalement รชtre exportรฉs dans un format comme ONNX et TorchScript pour รชtre dรฉployรฉs dans des environnements de production. Rejoignez la communautรฉ grandissante sur le [Hub](https://huggingface.co/models), le [forum](https://discuss.huggingface.co/) ou [Discord](https://discord.com/invite/JfAtkvEtRb) dรจs aujourd'hui ! ## Si vous cherchez un support personnalisรฉ de l'รฉquipe Hugging Face <a target="_blank" href="https://huggingface.co/support"> <img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="width: 100%; max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);"> </a> ## Contents La documentation est organisรฉe en 5 parties: - **DEMARRER** propose une visite rapide de la bibliothรจque et des instructions d'installation pour รชtre opรฉrationnel. - **TUTORIELS** excellent point de dรฉpart pour les dรฉbutants. Cette section vous aidera ร  acquรฉrir les compรฉtences de base dont vous avez besoin pour commencer ร  utiliser la bibliothรจque. 
- **GUIDES D'UTILISATION** pour diffรฉrentes tรขches comme par exemple le finetuning d'un modรจle prรฉ-entraรฎnรฉ pour la classification de texte ou comment crรฉer et partager votre propre modรจle. - **GUIDES CONCEPTUELS** pour plus de discussions et d'explications sur les concepts et les idรฉes sous-jacentes aux modรจles, aux tรขches et ร  la philosophie de conception de ๐Ÿค— Transformers. - **API** dรฉcrit toutes les classes et fonctions : - **CLASSES PRINCIPALES** dรฉtaille les classes les plus importantes comme la configuration, le modรจle, le tokenizer et le pipeline.. - **MODELES** dรฉtaille les classes et les fonctions propres ร  chaque modรจle de la bibliothรจque. - **UTILITAIRES INTERNES** dรฉtaille les classes et fonctions utilitaires utilisรฉes en interne. ### Modรจles supportรฉs <!--This list is updated automatically from the README with _make fix-copies_. Do not update manually! --> 1. **[ALBERT](model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut. 1. **[ALIGN](model_doc/align)** (from Google Research) released with the paper [Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision](https://arxiv.org/abs/2102.05918) by Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig. 1. **[AltCLIP](model_doc/altclip)** (from BAAI) released with the paper [AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities](https://arxiv.org/abs/2211.06679) by Chen, Zhongzhi and Liu, Guang and Zhang, Bo-Wen and Ye, Fulong and Yang, Qinghong and Wu, Ledell. 1. **[Audio Spectrogram Transformer](model_doc/audio-spectrogram-transformer)** (from MIT) released with the paper [AST: Audio Spectrogram Transformer](https://arxiv.org/abs/2104.01778) by Yuan Gong, Yu-An Chung, James Glass. 1. **[BART](model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer. 1. **[BARThez](model_doc/barthez)** (from ร‰cole polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis. 1. **[BARTpho](model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen. 1. **[BEiT](model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei. 1. **[BERT](model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. 1. 
**[BERT For Sequence Generation](model_doc/bert-generation)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 1. **[BERTweet](model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen. 1. **[BigBird-Pegasus](model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 1. **[BigBird-RoBERTa](model_doc/big_bird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 1. **[BioGpt](model_doc/biogpt)** (from Microsoft Research AI4Science) released with the paper [BioGPT: generative pre-trained transformer for biomedical text generation and mining](https://academic.oup.com/bib/advance-article/doi/10.1093/bib/bbac409/6713511?guestAccessKey=a66d9b5d-4f83-4017-bb52-405815c907b9) by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu. 1. **[BiT](model_doc/bit)** (from Google AI) released with the paper [Big Transfer (BiT): General Visual Representation Learning](https://arxiv.org/abs/1912.11370) by Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil Houlsby. 1. **[Blenderbot](model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 1. **[BlenderbotSmall](model_doc/blenderbot-small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 1. **[BLIP](model_doc/blip)** (from Salesforce) released with the paper [BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation](https://arxiv.org/abs/2201.12086) by Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi. 1. **[BLOOM](model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/). 1. **[BORT](model_doc/bort)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry. 1. **[BridgeTower](model_doc/bridgetower)** (from Harbin Institute of Technology/Microsoft Research Asia/Intel Labs) released with the paper [BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning](https://arxiv.org/abs/2206.08657) by Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, Nan Duan. 1. 
**[ByT5](model_doc/byt5)** (from Google Research) released with the paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel. 1. **[CamemBERT](model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suรกrez*, Yoann Dupont, Laurent Romary, ร‰ric Villemonte de la Clergerie, Djamรฉ Seddah and Benoรฎt Sagot. 1. **[CANINE](model_doc/canine)** (from Google Research) released with the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting. 1. **[Chinese-CLIP](model_doc/chinese_clip)** (from OFA-Sys) released with the paper [Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese](https://arxiv.org/abs/2211.01335) by An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou. 1. **[CLIP](model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever. 1. **[CLIPSeg](model_doc/clipseg)** (from University of Gรถttingen) released with the paper [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) by Timo Lรผddecke and Alexander Ecker. 1. **[CodeGen](model_doc/codegen)** (from Salesforce) released with the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. 1. **[Conditional DETR](model_doc/conditional_detr)** (from Microsoft Research Asia) released with the paper [Conditional DETR for Fast Training Convergence](https://arxiv.org/abs/2108.06152) by Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang. 1. **[ConvBERT](model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan. 1. **[ConvNeXT](model_doc/convnext)** (from Facebook AI) released with the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie. 1. **[ConvNeXTV2](model_doc/convnextv2)** (from Facebook AI) released with the paper [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) by Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie. 1. 
**[CPM](model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun. 1. **[CTRL](model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher. 1. **[CvT](model_doc/cvt)** (from Microsoft) released with the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang. 1. **[Data2Vec](model_doc/data2vec)** (from Facebook) released with the paper [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli. 1. **[DeBERTa](model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 1. **[DeBERTa-v2](model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 1. **[Decision Transformer](model_doc/decision_transformer)** (from Berkeley/Facebook/Google) released with the paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch. 1. **[Deformable DETR](model_doc/deformable_detr)** (from SenseTime Research) released with the paper [Deformable DETR: Deformable Transformers for End-to-End Object Detection](https://arxiv.org/abs/2010.04159) by Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai. 1. **[DeiT](model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervรฉ Jรฉgou. 1. **[DETA](model_doc/deta)** (from The University of Texas at Austin) released with the paper [NMS Strikes Back](https://arxiv.org/abs/2212.06137) by Jeffrey Ouyang-Zhang, Jang Hyun Cho, Xingyi Zhou, Philipp Krรคhenbรผhl. 1. **[DETR](model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko. 1. **[DialoGPT](model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan. 1. 
**[DiNAT](model_doc/dinat)** (from SHI Labs) released with the paper [Dilated Neighborhood Attention Transformer](https://arxiv.org/abs/2209.15001) by Ali Hassani and Humphrey Shi. 1. **[DistilBERT](model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) and a German version of DistilBERT. 1. **[DiT](model_doc/dit)** (from Microsoft Research) released with the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei. 1. **[Donut](model_doc/donut)** (from NAVER), released together with the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park. 1. **[DPR](model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas OฤŸuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 1. **[DPT](master/model_doc/dpt)** (from Intel Labs) released with the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by Renรฉ Ranftl, Alexey Bochkovskiy, Vladlen Koltun. 1. **[EfficientFormer](model_doc/efficientformer)** (from Snap Research) released with the paper [EfficientFormer: Vision Transformers at MobileNetSpeed](https://arxiv.org/abs/2206.01191) by Yanyu Li, Geng Yuan, Yang Wen, Ju Hu, Georgios Evangelidis, Sergey Tulyakov, Yanzhi Wang, Jian Ren. 1. **[ELECTRA](model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning. 1. **[EncoderDecoder](model_doc/encoder-decoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 1. **[ERNIE](model_doc/ernie)** (from Baidu) released with the paper [ERNIE: Enhanced Representation through Knowledge Integration](https://arxiv.org/abs/1904.09223) by Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, Hua Wu. 1. **[ESM](model_doc/esm)** (from Meta AI) are transformer protein language models. **ESM-1b** was released with the paper [Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences](https://www.pnas.org/content/118/15/e2016239118) by Alexander Rives, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, Myle Ott, C. Lawrence Zitnick, Jerry Ma, and Rob Fergus. 
**ESM-1v** was released with the paper [Language models enable zero-shot prediction of the effects of mutations on protein function](https://doi.org/10.1101/2021.07.09.450648) by Joshua Meier, Roshan Rao, Robert Verkuil, Jason Liu, Tom Sercu and Alexander Rives. **ESM-2 and ESMFold** were released with the paper [Language models of protein sequences at the scale of evolution enable accurate structure prediction](https://doi.org/10.1101/2022.07.20.500902) by Zeming Lin, Halil Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Allan dos Santos Costa, Maryam Fazel-Zarandi, Tom Sercu, Sal Candido, Alexander Rives. 1. **[FastSpeech2Conformer](model_doc/fastspeech2_conformer)** (from ESPnet) released with the paper [Recent Developments On Espnet Toolkit Boosted By Conformer](https://arxiv.org/abs/2010.13956) by Pengcheng Guo, Florian Boyer, Xuankai Chang, Tomoki Hayashi, Yosuke Higuchi, Hirofumi Inaguma, Naoyuki Kamo, Chenda Li, Daniel Garcia-Romero, Jiatong Shi, Jing Shi, Shinji Watanabe, Kun Wei, Wangyou Zhang, and Yuekai Zhang. 1. **[FLAN-T5](model_doc/flan-t5)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei 1. **[FlauBERT](model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loรฏc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoรฎt Crabbรฉ, Laurent Besacier, Didier Schwab. 1. **[FLAVA](model_doc/flava)** (from Facebook AI) released with the paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela. 1. **[FNet](model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon. 1. **[Funnel Transformer](model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le. 1. **[GIT](model_doc/git)** (from Microsoft Research) released with the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, Lijuan Wang. 1. **[GLPN](model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim. 1. 
**[GPT](model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://openai.com/research/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever. 1. **[GPT Neo](model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy. 1. **[GPT NeoX](model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach 1. **[GPT NeoX Japanese](model_doc/gpt_neox_japanese)** (from ABEJA) released by Shinya Otani, Takayoshi Makabe, Anuj Arora, and Kyo Hattori. 1. **[GPT-2](model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://openai.com/research/better-language-models/) by Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei and Ilya Sutskever. 1. **[GPT-J](model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki. 1. **[GPT-Sw3](model_doc/gpt-sw3)** (from AI-Sweden) released with the paper [Lessons Learned from GPT-SW3: Building the First Large-Scale Generative Language Model for Swedish](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.376.pdf) by Ariel Ekgren, Amaru Cuba Gyllensten, Evangelia Gogoulou, Alice Heiman, Severine Verlinden, Joey ร–hman, Fredrik Carlsson, Magnus Sahlgren. 1. **[Graphormer](model_doc/graphormer)** (from Microsoft) released with the paper [Do Transformers Really Perform Bad for Graph Representation?](https://arxiv.org/abs/2106.05234) by Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu. 1. **[GroupViT](model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang. 1. **[Hubert](model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed. 1. **[I-BERT](model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer. 1. **[ImageGPT](model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever. 1. **[Jukebox](model_doc/jukebox)** (from OpenAI) released with the paper [Jukebox: A Generative Model for Music](https://arxiv.org/pdf/2005.00341.pdf) by Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, Ilya Sutskever. 1. 
**[LayoutLM](model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou. 1. **[LayoutLMv2](model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou. 1. **[LayoutLMv3](model_doc/layoutlmv3)** (from Microsoft Research Asia) released with the paper [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei. 1. **[LayoutXLM](model_doc/layoutxlm)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei. 1. **[LED](model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan. 1. **[LeViT](model_doc/levit)** (from Meta AI) released with the paper [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervรฉ Jรฉgou, Matthijs Douze. 1. **[LiLT](model_doc/lilt)** (from South China University of Technology) released with the paper [LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding](https://arxiv.org/abs/2202.13669) by Jiapeng Wang, Lianwen Jin, Kai Ding. 1. **[Longformer](model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan. 1. **[LongT5](model_doc/longt5)** (from Google AI) released with the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang. 1. **[LUKE](model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto. 1. **[LXMERT](model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal. 1. **[M-CTC-T](model_doc/mctct)** (from Facebook) released with the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert. 1. 
**[M2M100](model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin. 1. **[MarianMT](model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jรถrg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team. 1. **[MarkupLM](model_doc/markuplm)** (from Microsoft Research Asia) released with the paper [MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518) by Junlong Li, Yiheng Xu, Lei Cui, Furu Wei. 1. **[Mask2Former](model_doc/mask2former)** (from FAIR and UIUC) released with the paper [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) by Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, Rohit Girdhar. 1. **[MaskFormer](model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov. 1. **[mBART](model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer. 1. **[mBART-50](model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan. 1. **[Megatron-BERT](model_doc/megatron-bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro. 1. **[Megatron-GPT2](model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro. 1. **[mLUKE](model_doc/mluke)** (from Studio Ousia) released with the paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka. 1. **[MobileBERT](model_doc/mobilebert)** (from CMU/Google Brain) released with the paper [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. 1. **[MobileNetV1](model_doc/mobilenet_v1)** (from Google Inc.) released with the paper [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861) by Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam. 1. 
**[MobileNetV2](model_doc/mobilenet_v2)** (from Google Inc.) released with the paper [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen. 1. **[MobileViT](model_doc/mobilevit)** (from Apple) released with the paper [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari. 1. **[MPNet](model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu. 1. **[MT5](model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel. 1. **[MVP](model_doc/mvp)** (from RUC AI Box) released with the paper [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen. 1. **[NAT](model_doc/nat)** (from SHI Labs) released with the paper [Neighborhood Attention Transformer](https://arxiv.org/abs/2204.07143) by Ali Hassani, Steven Walton, Jiachen Li, Shen Li, and Humphrey Shi. 1. **[Nezha](model_doc/nezha)** (from Huawei Noahโ€™s Ark Lab) released with the paper [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) by Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu. 1. **[NLLB](model_doc/nllb)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team. 1. **[Nystrรถmformer](model_doc/nystromformer)** (from the University of Wisconsin - Madison) released with the paper [Nystrรถmformer: A Nystrรถm-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh. 1. **[OneFormer](model_doc/oneformer)** (from SHI Labs) released with the paper [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) by Jitesh Jain, Jiachen Li, MangTik Chiu, Ali Hassani, Nikita Orlov, Humphrey Shi. 1. **[OPT](master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al. 1. **[OWL-ViT](model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby. 1. **[Pegasus](model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu. 1. 
**[PEGASUS-X](model_doc/pegasus_x)** (from Google) released with the paper [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) by Jason Phang, Yao Zhao, and Peter J. Liu. 1. **[Perceiver IO](model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hรฉnaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, Joรฃo Carreira. 1. **[PhoBERT](model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen. 1. **[PLBart](model_doc/plbart)** (from UCLA NLP) released with the paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang. 1. **[PoolFormer](model_doc/poolformer)** (from Sea AI Labs) released with the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng. 1. **[ProphetNet](model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou. 1. **[QDQBert](model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius. 1. **[RAG](model_doc/rag)** (from Facebook) released with the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) by Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Kรผttler, Mike Lewis, Wen-tau Yih, Tim Rocktรคschel, Sebastian Riedel, Douwe Kiela. 1. **[REALM](model_doc/realm.html)** (from Google Research) released with the paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang. 1. **[Reformer](model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, ลukasz Kaiser, Anselm Levskaya. 1. **[RegNet](model_doc/regnet)** (from META Platforms) released with the paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollรกr. 1. **[RemBERT](model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/abs/2010.12821) by Hyung Won Chung, Thibault Fรฉvry, Henry Tsai, M. Johnson, Sebastian Ruder. 1. 
**[ResNet](model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 1. **[RoBERTa](model_doc/roberta)** (from Facebook), released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov. 1. **[RoBERTa-PreLayerNorm](model_doc/roberta-prelayernorm)** (from Facebook) released with the paper [fairseq: A Fast, Extensible Toolkit for Sequence Modeling](https://arxiv.org/abs/1904.01038) by Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, Michael Auli. 1. **[RoCBert](model_doc/roc_bert)** (from WeChatAI) released with the paper [RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining](https://aclanthology.org/2022.acl-long.65.pdf) by HuiSu, WeiweiShi, XiaoyuShen, XiaoZhou, TuoJi, JiaruiFang, JieZhou. 1. **[RoFormer](model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/abs/2104.09864) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu. 1. **[SegFormer](model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo. 1. **[SEW](model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 1. **[SEW-D](model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 1. **[SpeechT5](model_doc/speecht5)** (from Microsoft Research) released with the paper [SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing](https://arxiv.org/abs/2110.07205) by Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei. 1. **[SpeechToTextTransformer](model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino. 1. **[SpeechToTextTransformer2](model_doc/speech_to_text_2)** (from Facebook), released together with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau. 1. **[Splinter](model_doc/splinter)** (from Tel Aviv University), released together with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy. 1. 
**[SqueezeBERT](model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer. 1. **[Swin Transformer](model_doc/swin)** (from Microsoft) released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo. 1. **[Swin Transformer V2](model_doc/swinv2)** (from Microsoft) released with the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo. 1. **[Swin2SR](model_doc/swin2sr)** (from University of Wรผrzburg) released with the paper [Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration](https://arxiv.org/abs/2209.11345) by Marcos V. Conde, Ui-Jin Choi, Maxime Burchi, Radu Timofte. 1. **[SwitchTransformers](model_doc/switch_transformers)** (from Google) released with the paper [Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity](https://arxiv.org/abs/2101.03961) by William Fedus, Barret Zoph, Noam Shazeer. 1. **[T5](model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu. 1. **[T5v1.1](model_doc/t5v1.1)** (from Google AI) released in the repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu. 1. **[Table Transformer](model_doc/table-transformer)** (from Microsoft Research) released with the paper [PubTables-1M: Towards Comprehensive Table Extraction From Unstructured Documents](https://arxiv.org/abs/2110.00061) by Brandon Smock, Rohith Pesala, Robin Abraham. 1. **[TAPAS](model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweล‚ Krzysztof Nowak, Thomas Mรผller, Francesco Piccinno and Julian Martin Eisenschlos. 1. **[TAPEX](model_doc/tapex)** (from Microsoft Research) released with the paper [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. 1. **[Time Series Transformer](model_doc/time_series_transformer)** (from HuggingFace). 1. **[TimeSformer](model_doc/timesformer)** (from Facebook) released with the paper [Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095) by Gedas Bertasius, Heng Wang, Lorenzo Torresani. 1. **[Trajectory Transformer](model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine 1. 
**[Transformer-XL](model_doc/transfo-xl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov. 1. **[TrOCR](model_doc/trocr)** (from Microsoft), released together with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei. 1. **[UL2](model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler 1. **[UniSpeech](model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang. 1. **[UniSpeechSat](model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu. 1. **[UPerNet](model_doc/upernet)** (from Peking University) released with the paper [Unified Perceptual Parsing for Scene Understanding](https://arxiv.org/abs/1807.10221) by Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, Jian Sun. 1. **[VAN](model_doc/van)** (from Tsinghua University and Nankai University) released with the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu. 1. **[VideoMAE](model_doc/videomae)** (from Multimedia Computing Group, Nanjing University) released with the paper [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Zhan Tong, Yibing Song, Jue Wang, Limin Wang. 1. **[ViLT](model_doc/vilt)** (from NAVER AI Lab/Kakao Enterprise/Kakao Brain) released with the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Wonjae Kim, Bokyung Son, Ildoo Kim. 1. **[Vision Transformer (ViT)](model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. 1. **[VisualBERT](model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang. 1. 
**[ViT Hybrid](model_doc/vit_hybrid)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. 1. **[ViTMAE](model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollรกr, Ross Girshick. 1. **[ViTMSN](model_doc/vit_msn)** (from Meta AI) released with the paper [Masked Siamese Networks for Label-Efficient Learning](https://arxiv.org/abs/2204.07141) by Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas. 1. **[Wav2Vec2](model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli. 1. **[Wav2Vec2-Conformer](model_doc/wav2vec2-conformer)** (from Facebook AI) released with the paper [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino. 1. **[Wav2Vec2Phoneme](model_doc/wav2vec2_phoneme)** (from Facebook AI) released with the paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli. 1. **[WavLM](model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei. 1. **[Whisper](model_doc/whisper)** (from OpenAI) released with the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://cdn.openai.com/papers/whisper.pdf) by Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever. 1. **[X-CLIP](model_doc/xclip)** (from Microsoft Research) released with the paper [Expanding Language-Image Pretrained Models for General Video Recognition](https://arxiv.org/abs/2208.02816) by Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, Haibin Ling. 1. **[XGLM](model_doc/xglm)** (From Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li. 1. **[XLM](model_doc/xlm)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau. 1. 
**[XLM-ProphetNet](model_doc/xlm-prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
1. **[XLM-RoBERTa](model_doc/xlm-roberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov.
1. **[XLM-RoBERTa-XL](model_doc/xlm-roberta-xl)** (from Facebook AI), released together with the paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau.
1. **[XLNet](model_doc/xlnet)** (from Google/CMU) released with the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
1. **[XLS-R](model_doc/xls_r)** (from Facebook AI) released with the paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli.
1. **[XLSR-Wav2Vec2](model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli.
1. **[YOLOS](model_doc/yolos)** (from Huazhong University of Science & Technology) released with the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu.
1. **[YOSO](model_doc/yoso)** (from the University of Wisconsin - Madison) released with the paper [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh.

### Supported frameworks

The table below shows the current support in the library for each of these models: whether they have a Python ("slow") tokenizer, a "fast" tokenizer backed by the 🤗 Tokenizers library, and whether they are supported in Jax (via Flax), PyTorch, and/or TensorFlow.

<!--This table is updated automatically from the auto modules with _make fix-copies_.
Do not update manually!-->

| Model                         | Slow tokenizer | Fast tokenizer | PyTorch support | TensorFlow support | Flax support |
|:-----------------------------:|:--------------:|:--------------:|:---------------:|:------------------:|:------------:|
| ALBERT | ✅ | ✅ | ✅ | ✅ | ✅ |
| AltCLIP | ❌ | ❌ | ✅ | ❌ | ❌ |
| Audio Spectrogram Transformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| BART | ✅ | ✅ | ✅ | ✅ | ✅ |
| BEiT | ❌ | ❌ | ✅ | ❌ | ✅ |
| BERT | ✅ | ✅ | ✅ | ✅ | ✅ |
| Bert Generation | ✅ | ❌ | ✅ | ❌ | ❌ |
| BigBird | ✅ | ✅ | ✅ | ❌ | ✅ |
| BigBird-Pegasus | ❌ | ❌ | ✅ | ❌ | ❌ |
| BioGpt | ✅ | ❌ | ✅ | ❌ | ❌ |
| BiT | ❌ | ❌ | ✅ | ❌ | ❌ |
| Blenderbot | ✅ | ✅ | ✅ | ✅ | ✅ |
| BlenderbotSmall | ✅ | ✅ | ✅ | ✅ | ✅ |
| BLIP | ❌ | ❌ | ✅ | ❌ | ❌ |
| BLOOM | ❌ | ✅ | ✅ | ❌ | ❌ |
| BridgeTower | ❌ | ❌ | ✅ | ❌ | ❌ |
| CamemBERT | ✅ | ✅ | ✅ | ✅ | ❌ |
| CANINE | ✅ | ❌ | ✅ | ❌ | ❌ |
| Chinese-CLIP | ❌ | ❌ | ✅ | ❌ | ❌ |
| CLIP | ✅ | ✅ | ✅ | ✅ | ✅ |
| CLIPSeg | ❌ | ❌ | ✅ | ❌ | ❌ |
| CodeGen | ✅ | ✅ | ✅ | ❌ | ❌ |
| Conditional DETR | ❌ | ❌ | ✅ | ❌ | ❌ |
| ConvBERT | ✅ | ✅ | ✅ | ✅ | ❌ |
| ConvNeXT | ❌ | ❌ | ✅ | ✅ | ❌ |
| CTRL | ✅ | ❌ | ✅ | ✅ | ❌ |
| CvT | ❌ | ❌ | ✅ | ✅ | ❌ |
| Data2VecAudio | ❌ | ❌ | ✅ | ❌ | ❌ |
| Data2VecText | ❌ | ❌ | ✅ | ❌ | ❌ |
| Data2VecVision | ❌ | ❌ | ✅ | ✅ | ❌ |
| DeBERTa | ✅ | ✅ | ✅ | ✅ | ❌ |
| DeBERTa-v2 | ✅ | ✅ | ✅ | ✅ | ❌ |
| Decision Transformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| Deformable DETR | ❌ | ❌ | ✅ | ❌ | ❌ |
| DeiT | ❌ | ❌ | ✅ | ✅ | ❌ |
| DETA | ❌ | ❌ | ✅ | ❌ | ❌ |
| DETR | ❌ | ❌ | ✅ | ❌ | ❌ |
| DiNAT | ❌ | ❌ | ✅ | ❌ | ❌ |
| DistilBERT | ✅ | ✅ | ✅ | ✅ | ✅ |
| DonutSwin | ❌ | ❌ | ✅ | ❌ | ❌ |
| DPR | ✅ | ✅ | ✅ | ✅ | ❌ |
| DPT | ❌ | ❌ | ✅ | ❌ | ❌ |
| EfficientFormer | ❌ | ❌ | ✅ | ❌ | ❌ |
| ELECTRA | ✅ | ✅ | ✅ | ✅ | ✅ |
| Encoder decoder | ❌ | ❌ | ✅ | ✅ | ✅ |
| ERNIE | ❌ | ❌ | ✅ | ❌ | ❌ |
| ESM | ✅ | ❌ | ✅ | ✅ | ❌ |
| FairSeq Machine-Translation | ✅ | ❌ | ✅ | ❌ | ❌ |
| FastSpeech2Conformer | ✅ | ❌ | ✅ | ❌ | ❌ |
| FlauBERT | ✅ | ❌ | ✅ | ✅ | ❌ |
| FLAVA | ❌ | ❌ | ✅ | ❌ | ❌ |
| FNet | ✅ | ✅ | ✅ | ❌ | ❌ |
| Funnel Transformer | ✅ | ✅ | ✅ | ✅ | ❌ |
| GIT | ❌ | ❌ | ✅ | ❌ | ❌ |
| GLPN | ❌ | ❌ | ✅ | ❌ | ❌ |
| GPT Neo | ❌ | ❌ | ✅ | ❌ | ✅ |
| GPT NeoX | ❌ | ✅ | ✅ | ❌ | ❌ |
| GPT NeoX Japanese | ✅ | ❌ | ✅ | ❌ | ❌ |
| GPT-J | ❌ | ❌ | ✅ | ✅ | ✅ |
| GPT-Sw3 | ✅ | ✅ | ✅ | ✅ | ✅ |
| Graphormer | ❌ | ❌ | ✅ | ❌ | ❌ |
| GroupViT | ❌ | ❌ | ✅ | ✅ | ❌ |
| Hubert | ❌ | ❌ | ✅ | ✅ | ❌ |
| I-BERT | ❌ | ❌ | ✅ | ❌ | ❌ |
| ImageGPT | ❌ | ❌ | ✅ | ❌ | ❌ |
| Jukebox | ✅ | ❌ | ✅ | ❌ | ❌ |
| LayoutLM | ✅ | ✅ | ✅ | ✅ | ❌ |
| LayoutLMv2 | ✅ | ✅ | ✅ | ❌ | ❌ |
| LayoutLMv3 | ✅ | ✅ | ✅ | ✅ | ❌ |
| LED | ✅ | ✅ | ✅ | ✅ | ❌ |
| LeViT | ❌ | ❌ | ✅ | ❌ | ❌ |
| LiLT | ❌ | ❌ | ✅ | ❌ | ❌ |
| Longformer | ✅ | ✅ | ✅ | ✅ | ❌ |
| LongT5 | ❌ | ❌ | ✅ | ❌ | ✅ |
| LUKE | ✅ | ❌ | ✅ | ❌ | ❌ |
| LXMERT | ✅ | ✅ | ✅ | ✅ | ❌ |
| M-CTC-T | ❌ | ❌ | ✅ | ❌ | ❌ |
| M2M100 | ✅ | ❌ | ✅ | ❌ | ❌ |
| Marian | ✅ | ❌ | ✅ | ✅ | ✅ |
| MarkupLM | ✅ | ✅ | ✅ | ❌ | ❌ |
| Mask2Former | ❌ | ❌ | ✅ | ❌ | ❌ |
| MaskFormer | ❌ | ❌ | ✅ | ❌ | ❌ |
| MaskFormerSwin | ❌ | ❌ | ❌ | ❌ | ❌ |
| mBART | ✅ | ✅ | ✅ | ✅ | ✅ |
| Megatron-BERT | ❌ | ❌ | ✅ | ❌ | ❌ |
| MobileBERT | ✅ | ✅ | ✅ | ✅ | ❌ |
| MobileNetV1 | ❌ | ❌ | ✅ | ❌ | ❌ |
| MobileNetV2 | ❌ | ❌ | ✅ | ❌ | ❌ |
| MobileViT | ❌ | ❌ | ✅ | ✅ | ❌ |
| MPNet | ✅ | ✅ | ✅ | ✅ | ❌ |
| MT5 | ✅ | ✅ | ✅ | ✅ | ✅ |
| MVP | ✅ | ✅ | ✅ | ❌ | ❌ |
| NAT | ❌ | ❌ | ✅ | ❌ | ❌ |
| Nezha | ❌ | ❌ | ✅ | ❌ | ❌ |
| Nyströmformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| OneFormer | ❌ | ❌ | ✅ | ❌ | ❌ |
| OpenAI GPT | ✅ | ✅ | ✅ | ✅ | ❌ |
| OpenAI GPT-2 | ✅ | ✅ | ✅ | ✅ | ✅ |
| OPT | ❌ | ❌ | ✅ | ✅ | ✅ |
| OWL-ViT | ❌ | ❌ | ✅ | ❌ | ❌ |
| Pegasus | ✅ | ✅ | ✅ | ✅ | ✅ |
| PEGASUS-X | ❌ | ❌ | ✅ | ❌ | ❌ |
| Perceiver | ✅ | ❌ | ✅ | ❌ | ❌ |
| PLBart | ✅ | ❌ | ✅ | ❌ | ❌ |
| PoolFormer | ❌ | ❌ | ✅ | ❌ | ❌ |
| ProphetNet | ✅ | ❌ | ✅ | ❌ | ❌ |
| QDQBert | ❌ | ❌ | ✅ | ❌ | ❌ |
| RAG | ✅ | ❌ | ✅ | ✅ | ❌ |
| REALM | ✅ | ✅ | ✅ | ❌ | ❌ |
| Reformer | ✅ | ✅ | ✅ | ❌ | ❌ |
| RegNet | ❌ | ❌ | ✅ | ✅ | ✅ |
| RemBERT | ✅ | ✅ | ✅ | ✅ | ❌ |
| ResNet | ❌ | ❌ | ✅ | ✅ | ❌ |
| RetriBERT | ✅ | ✅ | ✅ | ❌ | ❌ |
| RoBERTa | ✅ | ✅ | ✅ | ✅ | ✅ |
| RoBERTa-PreLayerNorm | ❌ | ❌ | ✅ | ✅ | ✅ |
| RoCBert | ✅ | ❌ | ✅ | ❌ | ❌ |
| RoFormer | ✅ | ✅ | ✅ | ✅ | ✅ |
| SegFormer | ❌ | ❌ | ✅ | ✅ | ❌ |
| SEW | ❌ | ❌ | ✅ | ❌ | ❌ |
| SEW-D | ❌ | ❌ | ✅ | ❌ | ❌ |
| Speech Encoder decoder | ❌ | ❌ | ✅ | ❌ | ✅ |
| Speech2Text | ✅ | ❌ | ✅ | ✅ | ❌ |
| Speech2Text2 | ✅ | ❌ | ❌ | ❌ | ❌ |
| SpeechT5 | ✅ | ❌ | ✅ | ❌ | ❌ |
| Splinter | ✅ | ✅ | ✅ | ❌ | ❌ |
| SqueezeBERT | ✅ | ✅ | ✅ | ❌ | ❌ |
| Swin Transformer | ❌ | ❌ | ✅ | ✅ | ❌ |
| Swin Transformer V2 | ❌ | ❌ | ✅ | ❌ | ❌ |
| Swin2SR | ❌ | ❌ | ✅ | ❌ | ❌ |
| SwitchTransformers | ❌ | ❌ | ✅ | ❌ | ❌ |
| T5 | ✅ | ✅ | ✅ | ✅ | ✅ |
| Table Transformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| TAPAS | ✅ | ❌ | ✅ | ✅ | ❌ |
| Time Series Transformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| TimeSformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| Trajectory Transformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| Transformer-XL | ✅ | ❌ | ✅ | ✅ | ❌ |
| TrOCR | ❌ | ❌ | ✅ | ❌ | ❌ |
| UniSpeech | ❌ | ❌ | ✅ | ❌ | ❌ |
| UniSpeechSat | ❌ | ❌ | ✅ | ❌ | ❌ |
| UPerNet | ❌ | ❌ | ✅ | ❌ | ❌ |
| VAN | ❌ | ❌ | ✅ | ❌ | ❌ |
| VideoMAE | ❌ | ❌ | ✅ | ❌ | ❌ |
| ViLT | ❌ | ❌ | ✅ | ❌ | ❌ |
| Vision Encoder decoder | ❌ | ❌ | ✅ | ✅ | ✅ |
| VisionTextDualEncoder | ❌ | ❌ | ✅ | ❌ | ✅ |
| VisualBERT | ❌ | ❌ | ✅ | ❌ | ❌ |
| ViT | ❌ | ❌ | ✅ | ✅ | ✅ |
| ViT Hybrid | ❌ | ❌ | ✅ | ❌ | ❌ |
| ViTMAE | ❌ | ❌ | ✅ | ✅ | ❌ |
| ViTMSN | ❌ | ❌ | ✅ | ❌ | ❌ |
| Wav2Vec2 | ✅ | ❌ | ✅ | ✅ | ✅ |
| Wav2Vec2-Conformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| WavLM | ❌ | ❌ | ✅ | ❌ | ❌ |
| Whisper | ✅ | ❌ | ✅ | ✅ | ❌ |
| X-CLIP | ❌ | ❌ | ✅ | ❌ | ❌ |
| XGLM | ✅ | ✅ | ✅ | ✅ | ✅ |
| XLM | ✅ | ❌ | ✅ | ✅ | ❌ |
| XLM-ProphetNet | ✅ | ❌ | ✅ | ❌ | ❌ |
| XLM-RoBERTa | ✅ | ✅ | ✅ | ✅ | ✅ |
| XLM-RoBERTa-XL | ❌ | ❌ | ✅ | ❌ | ❌ |
| XLNet | ✅ | ✅ | ✅ | ✅ | ❌ |
| YOLOS | ❌ | ❌ | ✅ | ❌ | ❌ |
| YOSO | ❌ | ❌ | ✅ | ❌ | ❌ |

<!-- End table-->
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/fr/in_translation.md
<!--โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Traduction en cours.
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/fr/installation.md
<!--- Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->

# Installation

Install 🤗 Transformers for whichever deep learning library you usually work with, set up your cache, and optionally configure 🤗 Transformers to run offline.

🤗 Transformers is tested on Python 3.6+, PyTorch 1.1.0+, TensorFlow 2.0+, and Flax. Follow the installation instructions below for the deep learning library you are using:

* Installation instructions for [PyTorch](https://pytorch.org/get-started/locally/).
* Installation instructions for [TensorFlow 2.0](https://www.tensorflow.org/install/pip).
* Installation instructions for [Flax](https://flax.readthedocs.io/en/latest/).

## Install with pip

You should install 🤗 Transformers in a [virtual environment](https://docs.python.org/3/library/venv.html). If you are not familiar with virtual environments, take a look at this [guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/). A virtual environment makes it easier to manage different projects and avoid compatibility issues between dependencies.

Start by creating a virtual environment in your project directory:

```bash
python -m venv .env
```

Activate the virtual environment. On Linux or MacOS:

```bash
source .env/bin/activate
```

Activate the virtual environment on Windows:

```bash
.env/Scripts/activate
```

Now 🤗 Transformers can be installed with the following command:

```bash
pip install transformers
```

For CPU-only use, 🤗 Transformers and the deep learning library of your choice can be installed in a single line. For example, install 🤗 Transformers and PyTorch with:

```bash
pip install 'transformers[torch]'
```

🤗 Transformers and TensorFlow 2.0:

```bash
pip install 'transformers[tf-cpu]'
```

<Tip warning={true}>

For Mac M1 / ARM architectures, you need to install the following tools before installing TensorFlow 2.0:

```bash
brew install cmake
brew install pkg-config
```

</Tip>

🤗 Transformers and Flax:

```bash
pip install 'transformers[flax]'
```

Check that 🤗 Transformers has been installed correctly with the following command. It will download a pretrained model:

```bash
python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))"
```

The label and score are then printed:

```bash
[{'label': 'POSITIVE', 'score': 0.9998704791069031}]
```

## Install from source

Install 🤗 Transformers from source with the following command:

```bash
pip install git+https://github.com/huggingface/transformers
```

This command installs the version from the `main` branch rather than the latest stable release. The `main` branch is useful for getting the most recent developments.
For example, a bug may have been fixed since the last stable release without a new release having been published yet. However, this also means that the `main` branch is not always stable. We strive to keep the `main` branch operational, and most issues are usually resolved within a few hours or a day. If you run into a problem, please open an [Issue](https://github.com/huggingface/transformers/issues) so we can fix it as soon as possible!

Check that 🤗 Transformers has been installed correctly with the following command:

```bash
python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('I love you'))"
```

## Editable install

You will need an editable install if you want to:

* Use the `main` branch of the source code.
* Contribute to 🤗 Transformers and test changes to the source code.

Clone the repository and install 🤗 Transformers with the following commands:

```bash
git clone https://github.com/huggingface/transformers.git
cd transformers
pip install -e .
```

These commands link the folder you cloned the project into with your Python library paths. Python will now look inside the folder you cloned in addition to the folders where your other libraries are installed. For example, if your Python libraries are installed in `~/anaconda3/envs/main/lib/python3.7/site-packages/`, Python will also search the folder you cloned: `~/transformers/`.

<Tip warning={true}>

You must keep the `transformers` folder if you want to keep using the library.

</Tip>

You can now easily update your clone to the latest version of 🤗 Transformers with the following command:

```bash
cd ~/transformers/
git pull
```

Your Python environment will use the `main` branch version on its next run.

## Install with conda

Install from the `conda-forge` channel:

```bash
conda install conda-forge::transformers
```

## Cache setup

Pretrained models are downloaded and cached locally in the following directory: `~/.cache/huggingface/hub`. This is the default directory given by the `TRANSFORMERS_CACHE` environment variable. On Windows, the default directory is `C:\Users\username\.cache\huggingface\hub`. You can set the environment variables listed below - in order of priority - to specify a different cache directory:

1. Environment variable (default): `HUGGINGFACE_HUB_CACHE` or `TRANSFORMERS_CACHE`.
2. Environment variable: `HF_HOME`.
3. Environment variable: `XDG_CACHE_HOME` + `/huggingface`.

<Tip>

🤗 Transformers will use the `PYTORCH_TRANSFORMERS_CACHE` or `PYTORCH_PRETRAINED_BERT_CACHE` environment variables if you are coming from an earlier version of this library and have set those variables, unless you specify the `TRANSFORMERS_CACHE` environment variable.

</Tip>

## Offline mode

🤗 Transformers can run in a firewalled or offline environment by using only local files. Set the environment variable `TRANSFORMERS_OFFLINE=1` to enable this mode.
<Tip>

Add [🤗 Datasets](https://huggingface.co/docs/datasets/) to your offline training workflow by setting the environment variable `HF_DATASETS_OFFLINE=1`.

</Tip>

```bash
HF_DATASETS_OFFLINE=1 TRANSFORMERS_OFFLINE=1 \
python examples/pytorch/translation/run_translation.py --model_name_or_path google-t5/t5-small --dataset_name wmt16 --dataset_config ro-en ...
```

The script should now run without hanging or timing out, because it will not try to download models from the Hub.

You can also avoid downloading a model on every call to [`~PreTrainedModel.from_pretrained`] by using the [local_files_only] parameter. Only local files are loaded when it is enabled (i.e. `local_files_only=True`):

```py
from transformers import T5Model

model = T5Model.from_pretrained("./path/to/local/directory", local_files_only=True)
```

### Fetch models and tokenizers to use offline

Another option for using 🤗 Transformers offline is to download the files ahead of time, then point to their local paths when you need them offline. There are three ways to do this:

* Download a file through the user interface on the [Model Hub](https://huggingface.co/models) by clicking the ↓ icon.

    ![download-icon](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/download-icon.png)

* Use the [`PreTrainedModel.from_pretrained`] and [`PreTrainedModel.save_pretrained`] workflow:

    1. Download your files ahead of time with [`PreTrainedModel.from_pretrained`]:

    ```py
    >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    >>> tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B")
    >>> model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B")
    ```

    2. Save the files to a directory of your choice with [`PreTrainedModel.save_pretrained`]:

    ```py
    >>> tokenizer.save_pretrained("./your/path/bigscience_t0")
    >>> model.save_pretrained("./your/path/bigscience_t0")
    ```

    3. Now, when you are offline, reload your files with [`PreTrainedModel.from_pretrained`] from the directory you saved them in:

    ```py
    >>> tokenizer = AutoTokenizer.from_pretrained("./your/path/bigscience_t0")
    >>> model = AutoModelForSeq2SeqLM.from_pretrained("./your/path/bigscience_t0")
    ```

* Download files programmatically with the [huggingface_hub](https://github.com/huggingface/huggingface_hub/tree/main/src/huggingface_hub) library:

    1. Install the `huggingface_hub` library in your virtual environment:

    ```bash
    python -m pip install huggingface_hub
    ```

    2. Use the [`hf_hub_download`](https://huggingface.co/docs/hub/adding-a-library#download-files-from-the-hub) function to download a file to a specific path.
Par exemple, la commande suivante télécharge le fichier `config.json` du modèle [T0](https://huggingface.co/bigscience/T0_3B) vers le chemin de votre choix : ```py >>> from huggingface_hub import hf_hub_download >>> hf_hub_download(repo_id="bigscience/T0_3B", filename="config.json", cache_dir="./your/path/bigscience_t0") ``` Une fois que votre fichier est téléchargé et mis en cache localement, spécifiez son chemin local pour le charger et l'utiliser : ```py >>> from transformers import AutoConfig >>> config = AutoConfig.from_pretrained("./your/path/bigscience_t0/config.json") ``` <Tip> Consultez la section [How to download files from the Hub (Comment télécharger des fichiers depuis le Hub)](https://huggingface.co/docs/hub/how-to-downstream) pour plus de détails sur le téléchargement de fichiers stockés sur le Hub. </Tip>
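Si vous préférez récupérer d'un coup l'ensemble des fichiers d'un dépôt plutôt qu'un fichier à la fois, la fonction `snapshot_download` de la librairie `huggingface_hub` peut également être utilisée (exemple indicatif, à adapter à vos besoins) : ```py >>> from huggingface_hub import snapshot_download >>> snapshot_download(repo_id="bigscience/T0_3B", cache_dir="./your/path/bigscience_t0") ```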
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/fr/quicktour.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Visite rapide [[open-in-colab]] Soyez opรฉrationnel avec ๐Ÿค— Transformers ! Que vous soyez un dรฉveloppeur ou un utilisateur lambda, cette visite rapide vous aidera ร  dรฉmarrer et vous montrera comment utiliser le [`pipeline`] pour l'infรฉrence, charger un modรจle prรฉ-entraรฎnรฉ et un prรฉprocesseur avec une [AutoClass](./model_doc/auto), et entraรฎner rapidement un modรจle avec PyTorch ou TensorFlow. Si vous รชtes un dรฉbutant, nous vous recommandons de consulter nos tutoriels ou notre [cours](https://huggingface.co/course/chapter1/1) suivant pour des explications plus approfondies des concepts prรฉsentรฉs ici. Avant de commencer, assurez-vous que vous avez installรฉ toutes les bibliothรจques nรฉcessaires : ```bash !pip install transformers datasets evaluate accelerate ``` Vous aurez aussi besoin d'installer votre bibliothรจque d'apprentissage profond favorite : <frameworkcontent> <pt> ```bash pip install torch ``` </pt> <tf> ```bash pip install tensorflow ``` </tf> </frameworkcontent> ## Pipeline <Youtube id="tiZFewofSLM"/> Le [`pipeline`] est le moyen le plus simple d'utiliser un modรจle prรฉ-entraรฎnรฉ pour l'infรฉrence. Vous pouvez utiliser le [`pipeline`] prรชt ร  l'emploi pour de nombreuses tรขches dans diffรฉrentes modalitรฉs. Consultez le tableau ci-dessous pour connaรฎtre les tรขches prises en charge : | **Tรขche** | **Description** | **Modalitรฉ** | **Identifiant du pipeline** | |------------------------------|--------------------------------------------------------------------------------------------------------------|----------------------|-----------------------------------------------| | Classification de texte | Attribue une catรฉgorie ร  une sรฉquence de texte donnรฉe | Texte | pipeline(task="sentiment-analysis") | | Gรฉnรฉration de texte | Gรฉnรจre du texte ร  partir d'une consigne donnรฉe | Texte | pipeline(task="text-generation") | | Reconnaissance de token nommรฉ | Attribue une catรฉgorie ร  chaque token dans une sรฉquence (personnes, organisation, localisation, etc.) 
| Texte | pipeline(task="ner") | | Question rรฉponse | Extrait une rรฉponse du texte en fonction du contexte et d'une question | Texte | pipeline(task="question-answering") | | Prรฉdiction de token masquรฉ | Prรฉdit correctement le token masquรฉ dans une sรฉquence | Texte | pipeline(task="fill-mask") | | Gรฉnรฉration de rรฉsumรฉ | Gรฉnรจre un rรฉsumรฉ d'une sรฉquence de texte donnรฉe ou d'un document | Texte | pipeline(task="summarization") | | Traduction | Traduit du texte d'un langage ร  un autre | Texte | pipeline(task="translation") | | Classification d'image | Attribue une catรฉgorie ร  une image | Image | pipeline(task="image-classification") | | Segmentation d'image | Attribue une catรฉgorie ร  chaque pixel d'une image (supporte la segmentation sรฉmantique, panoptique et d'instance) | Image | pipeline(task="image-segmentation") | | Dรฉtection d'objets | Prรฉdit les dรฉlimitations et catรฉgories d'objets dans une image | Image | pipeline(task="object-detection") | | Classification d'audio | Attribue une catรฉgorie ร  un fichier audio | Audio | pipeline(task="audio-classification") | | Reconnaissance automatique de la parole | Extrait le discours d'un fichier audio en texte | Audio | pipeline(task="automatic-speech-recognition") | | Question rรฉponse visuels | Etant donnรฉes une image et une question, rรฉpond correctement ร  une question sur l'image | Modalitรฉs multiples | pipeline(task="vqa") | Commencez par crรฉer une instance de [`pipeline`] et spรฉcifiez la tรขche pour laquelle vous souhaitez l'utiliser. Vous pouvez utiliser le [`pipeline`] pour n'importe laquelle des tรขches mentionnรฉes dans le tableau prรฉcรฉdent. Pour obtenir une liste complรจte des tรขches prises en charge, consultez la documentation de l'[API pipeline](./main_classes/pipelines). Dans ce guide, nous utiliserons le [`pipeline`] pour l'analyse des sentiments ร  titre d'exemple : ```py >>> from transformers import pipeline >>> classifier = pipeline("sentiment-analysis") ``` Le [`pipeline`] tรฉlรฉcharge et stocke en cache un [modรจle prรฉ-entraรฎnรฉ](https://huggingface.co/distilbert/distilbert-base-uncased-finetuned-sst-2-english) et un tokenizer par dรฉfaut pour l'analyse des sentiments. Vous pouvez maintenant utiliser le `classifier` sur le texte de votre choix : ```py >>> classifier("We are very happy to show you the ๐Ÿค— Transformers library.") [{'label': 'POSITIVE', 'score': 0.9998}] ``` Si vous voulez classifier plus qu'un texte, donnez une liste de textes au [`pipeline`] pour obtenir une liste de dictionnaires en retour : ```py >>> results = classifier(["We are very happy to show you the ๐Ÿค— Transformers library.", "We hope you don't hate it."]) >>> for result in results: ... print(f"label: {result['label']}, avec le score de: {round(result['score'], 4)}") label: POSITIVE, avec le score de: 0.9998 label: NEGATIVE, avec le score de: 0.5309 ``` Le [`pipeline`] peut aussi itรฉrer sur un jeu de donnรฉes entier pour n'importe quelle tรขche. Prenons par exemple la reconnaissance automatique de la parole : ```py >>> import torch >>> from transformers import pipeline >>> speech_recognizer = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h") ``` Chargez un jeu de donnรฉes audio (voir le ๐Ÿค— Datasets [Quick Start](https://huggingface.co/docs/datasets/quickstart#audio) pour plus de dรฉtails) sur lequel vous souhaitez itรฉrer. 
Pour cet exemple, nous chargeons le jeu de données [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) : ```py >>> from datasets import load_dataset, Audio >>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")  # doctest: +IGNORE_RESULT ``` Vous devez vous assurer que le taux d'échantillonnage de l'ensemble de données correspond au taux d'échantillonnage sur lequel [`facebook/wav2vec2-base-960h`](https://huggingface.co/facebook/wav2vec2-base-960h) a été entraîné : ```py >>> dataset = dataset.cast_column("audio", Audio(sampling_rate=speech_recognizer.feature_extractor.sampling_rate)) ``` Les fichiers audio sont automatiquement chargés et rééchantillonnés lors de l'appel de la colonne `"audio"`. Extrayez les tableaux de formes d'ondes brutes des quatre premiers échantillons et passez-les comme une liste au pipeline : ```py >>> result = speech_recognizer(dataset[:4]["audio"]) >>> print([d["text"] for d in result]) ['I WOULD LIKE TO SET UP A JOINT ACCOUNT WITH MY PARTNER HOW DO I PROCEED WITH DOING THAT', "FODING HOW I'D SET UP A JOIN TO HET WITH MY WIFE AND WHERE THE AP MIGHT BE", "I I'D LIKE TOY SET UP A JOINT ACCOUNT WITH MY PARTNER I'M NOT SEEING THE OPTION TO DO IT ON THE AP SO I CALLED IN TO GET SOME HELP CAN I JUST DO IT OVER THE PHONE WITH YOU AND GIVE YOU THE INFORMATION OR SHOULD I DO IT IN THE AP AND I'M MISSING SOMETHING UQUETTE HAD PREFERRED TO JUST DO IT OVER THE PHONE OF POSSIBLE THINGS", 'HOW DO I THURN A JOIN A COUNT'] ``` Pour les ensembles de données plus importants où les entrées sont volumineuses (comme dans les domaines de la parole ou de la vision), utilisez plutôt un générateur au lieu d'une liste afin d'éviter de charger toutes les entrées en mémoire. Pour plus d'informations, consultez la documentation de l'[API pipeline](./main_classes/pipelines). ### Utiliser un autre modèle et un autre tokenizer dans le pipeline Le [`pipeline`] peut être utilisé avec n'importe quel modèle du [Hub](https://huggingface.co/models), ce qui permet d'adapter facilement le [`pipeline`] à d'autres cas d'utilisation. Par exemple, si vous souhaitez un modèle capable de traiter du texte français, utilisez les filtres du Hub pour trouver un modèle approprié.
Le premier rรฉsultat renvoie un [modรจle BERT](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) multilingue finetunรฉ pour l'analyse des sentiments que vous pouvez utiliser pour le texte franรงais : ```py >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" ``` <frameworkcontent> <pt> Utilisez [`AutoModelForSequenceClassification`] et [`AutoTokenizer`] pour charger le modรจle prรฉ-entraรฎnรฉ et le tokenizer adaptรฉ (plus de dรฉtails sur une `AutoClass` dans la section suivante) : ```py >>> from transformers import AutoTokenizer, AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained(model_name) >>> tokenizer = AutoTokenizer.from_pretrained(model_name) ``` </pt> <tf> Utilisez [`TFAutoModelForSequenceClassification`] et [`AutoTokenizer`] pour charger le modรจle prรฉ-entraรฎnรฉ et le tokenizer adaptรฉ (plus de dรฉtails sur une `TFAutoClass` dans la section suivante) : ```py >>> from transformers import AutoTokenizer, TFAutoModelForSequenceClassification >>> model = TFAutoModelForSequenceClassification.from_pretrained(model_name) >>> tokenizer = AutoTokenizer.from_pretrained(model_name) ``` </tf> </frameworkcontent> Spรฉcifiez le modรจle et le tokenizer dans le [`pipeline`], et utilisez le `classifier` sur le texte en franรงais : ```py >>> classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer) >>> classifier("Nous sommes trรจs heureux de vous prรฉsenter la bibliothรจque ๐Ÿค— Transformers.") [{'label': '5 stars', 'score': 0.7273}] ``` Si vous ne parvenez pas ร  trouver un modรจle adaptรฉ ร  votre cas d'utilisation, vous devrez finetuner un modรจle prรฉ-entraรฎnรฉ sur vos donnรฉes. Jetez un coup d'ล“il ร  notre [tutoriel sur le finetuning](./training) pour apprendre comment faire. Enfin, aprรจs avoir finetunรฉ votre modรจle prรฉ-entraรฎnรฉ, pensez ร  [partager](./model_sharing) le modรจle avec la communautรฉ sur le Hub afin de dรฉmocratiser l'apprentissage automatique pour tous ! ๐Ÿค— ## AutoClass <Youtube id="AhChOFRegn4"/> Les classes [`AutoModelForSequenceClassification`] et [`AutoTokenizer`] fonctionnent ensemble pour crรฉer un [`pipeline`] comme celui que vous avez utilisรฉ ci-dessus. Une [AutoClass](./model_doc/auto) est un raccourci qui rรฉcupรจre automatiquement l'architecture d'un modรจle prรฉ-entraรฎnรฉ ร  partir de son nom ou de son emplacement. Il vous suffit de sรฉlectionner l'`AutoClass` appropriรฉe ร  votre tรขche et la classe de prรฉtraitement qui lui est associรฉe. Reprenons l'exemple de la section prรฉcรฉdente et voyons comment vous pouvez utiliser l'`AutoClass` pour reproduire les rรฉsultats du [`pipeline`]. ### AutoTokenizer Un tokenizer est chargรฉ de prรฉtraiter le texte pour en faire un tableau de chiffres qui servira d'entrรฉe ร  un modรจle. De nombreuses rรจgles rรฉgissent le processus de tokenisation, notamment la maniรจre de diviser un mot et le niveau auquel les mots doivent รชtre divisรฉs (pour en savoir plus sur la tokenisation, consultez le [rรฉsumรฉ](./tokenizer_summary)). La chose la plus importante ร  retenir est que vous devez instancier un tokenizer avec le mรชme nom de modรจle pour vous assurer que vous utilisez les mรชmes rรจgles de tokenisation que celles avec lesquelles un modรจle a รฉtรฉ prรฉ-entraรฎnรฉ. 
Chargez un tokenizer avec [`AutoTokenizer`] : ```py >>> from transformers import AutoTokenizer >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" >>> tokenizer = AutoTokenizer.from_pretrained(model_name) ``` Passez votre texte au tokenizer : ```py >>> encoding = tokenizer("We are very happy to show you the ๐Ÿค— Transformers library.") >>> print(encoding) {'input_ids': [101, 11312, 10320, 12495, 19308, 10114, 11391, 10855, 10103, 100, 58263, 13299, 119, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} ``` Le tokenizer retourne un dictionnaire contenant : * [input_ids](./glossary#input-ids): la reprรฉsentation numรฉrique des tokens. * [attention_mask](.glossary#attention-mask): indique quels tokens doivent faire l'objet d'une attention particuliรจre (plus particuliรจrement les tokens de remplissage). Un tokenizer peut รฉgalement accepter une liste de textes, et remplir et tronquer le texte pour retourner un รฉchantillon de longueur uniforme : <frameworkcontent> <pt> ```py >>> pt_batch = tokenizer( ... ["We are very happy to show you the ๐Ÿค— Transformers library.", "We hope you don't hate it."], ... padding=True, ... truncation=True, ... max_length=512, ... return_tensors="pt", ... ) ``` </pt> <tf> ```py >>> tf_batch = tokenizer( ... ["We are very happy to show you the ๐Ÿค— Transformers library.", "We hope you don't hate it."], ... padding=True, ... truncation=True, ... max_length=512, ... return_tensors="tf", ... ) ``` </tf> </frameworkcontent> <Tip> Consultez le tutoriel [prรฉtraitement](./preprocessing) pour plus de dรฉtails sur la tokenisation, et sur la maniรจre d'utiliser un [`AutoImageProcessor`], un [`AutoFeatureExtractor`] et un [`AutoProcessor`] pour prรฉtraiter les images, l'audio et les contenus multimodaux. </Tip> ### AutoModel <frameworkcontent> <pt> ๐Ÿค— Transformers fournit un moyen simple et unifiรฉ de charger des instances prรฉ-entraรฎnรฉes. Cela signifie que vous pouvez charger un [`AutoModel`] comme vous chargeriez un [`AutoTokenizer`]. La seule diffรฉrence est de sรฉlectionner l'[`AutoModel`] appropriรฉ pour la tรขche. Pour une classification de texte (ou de sรฉquence de textes), vous devez charger [`AutoModelForSequenceClassification`] : ```py >>> from transformers import AutoModelForSequenceClassification >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" >>> pt_model = AutoModelForSequenceClassification.from_pretrained(model_name) ``` <Tip> Voir le [rรฉsumรฉ de la tรขche](./task_summary) pour vรฉrifier si elle est prise en charge par une classe [`AutoModel`]. </Tip> Maintenant, passez votre รฉchantillon d'entrรฉes prรฉtraitรฉes directement au modรจle. Il vous suffit de dรฉcompresser le dictionnaire en ajoutant `**` : ```py >>> pt_outputs = pt_model(**pt_batch) ``` Le modรจle produit les activations finales dans l'attribut `logits`. Appliquez la fonction softmax aux `logits` pour rรฉcupรฉrer les probabilitรฉs : ```py >>> from torch import nn >>> pt_predictions = nn.functional.softmax(pt_outputs.logits, dim=-1) >>> print(pt_predictions) tensor([[0.0021, 0.0018, 0.0115, 0.2121, 0.7725], [0.2084, 0.1826, 0.1969, 0.1755, 0.2365]], grad_fn=<SoftmaxBackward0>) ``` </pt> <tf> ๐Ÿค— Transformers fournit un moyen simple et unifiรฉ de charger des instances prรฉ-entraรฎnรฉs. Cela signifie que vous pouvez charger un [`TFAutoModel`] comme vous chargeriez un [`AutoTokenizer`]. La seule diffรฉrence est de sรฉlectionner le [`TFAutoModel`] appropriรฉ pour la tรขche. 
Pour une classification de texte (ou de sรฉquence de textes), vous devez charger [`TFAutoModelForSequenceClassification`] : ```py >>> from transformers import TFAutoModelForSequenceClassification >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(model_name) ``` <Tip> Voir le [rรฉsumรฉ de la tรขche](./task_summary) pour vรฉrifier si elle est prise en charge par une classe [`AutoModel`]. </Tip> Passez maintenant votre รฉchantillon d'entrรฉes prรฉtraitรฉes directement au modรจle en passant les clรฉs du dictionnaire directement aux tensors : ```py >>> tf_outputs = tf_model(tf_batch) ``` Le modรจle produit les activations finales dans l'attribut `logits`. Appliquez la fonction softmax aux `logits` pour rรฉcupรฉrer les probabilitรฉs : ```py >>> import tensorflow as tf >>> tf_predictions = tf.nn.softmax(tf_outputs.logits, axis=-1) >>> tf_predictions # doctest: +IGNORE_RESULT ``` </tf> </frameworkcontent> <Tip> Tous les modรจles ๐Ÿค— Transformers (PyTorch ou TensorFlow) produisent les tensors *avant* la fonction d'activation finale (comme softmax) car la fonction d'activation finale est souvent fusionnรฉe avec le calcul de la perte. Les structures produites par le modรจle sont des classes de donnรฉes spรฉciales, de sorte que leurs attributs sont autocomplรฉtรฉs dans un environnement de dรฉveloppement. Les structures produites par le modรจle se comportent comme un tuple ou un dictionnaire (vous pouvez les indexer avec un entier, une tranche ou une chaรฎne), auquel cas les attributs qui sont None sont ignorรฉs. </Tip> ### Sauvegarder un modรจle <frameworkcontent> <pt> Une fois que votre modรจle est finetunรฉ, vous pouvez le sauvegarder avec son tokenizer en utilisant [`PreTrainedModel.save_pretrained`] : ```py >>> pt_save_directory = "./pt_save_pretrained" >>> tokenizer.save_pretrained(pt_save_directory) # doctest: +IGNORE_RESULT >>> pt_model.save_pretrained(pt_save_directory) ``` Lorsque vous voulez rรฉutiliser le modรจle, rechargez-le avec [`PreTrainedModel.from_pretrained`] : ```py >>> pt_model = AutoModelForSequenceClassification.from_pretrained("./pt_save_pretrained") ``` </pt> <tf> Une fois que votre modรจle est finetunรฉ, vous pouvez le sauvegarder avec son tokenizer en utilisant [`TFPreTrainedModel.save_pretrained`] : ```py >>> tf_save_directory = "./tf_save_pretrained" >>> tokenizer.save_pretrained(tf_save_directory) # doctest: +IGNORE_RESULT >>> tf_model.save_pretrained(tf_save_directory) ``` Lorsque vous voulez rรฉutiliser le modรจle, rechargez-le avec [`TFPreTrainedModel.from_pretrained`] : ```py >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained("./tf_save_pretrained") ``` </tf> </frameworkcontent> Une fonctionnalitรฉ particuliรจrement cool ๐Ÿค— Transformers est la possibilitรฉ d'enregistrer un modรจle et de le recharger en tant que modรจle PyTorch ou TensorFlow. 
Le paramรจtre `from_pt` ou `from_tf` permet de convertir le modรจle d'un framework ร  l'autre : <frameworkcontent> <pt> ```py >>> from transformers import AutoModel >>> tokenizer = AutoTokenizer.from_pretrained(tf_save_directory) >>> pt_model = AutoModelForSequenceClassification.from_pretrained(tf_save_directory, from_tf=True) ``` </pt> <tf> ```py >>> from transformers import TFAutoModel >>> tokenizer = AutoTokenizer.from_pretrained(pt_save_directory) >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(pt_save_directory, from_pt=True) ``` </tf> </frameworkcontent> ## Constructions de modรจles personnalisรฉs Vous pouvez modifier la configuration du modรจle pour changer la faรงon dont un modรจle est construit. La configuration spรฉcifie les attributs d'un modรจle, tels que le nombre de couches ou de tรชtes d'attention. Vous partez de zรฉro lorsque vous initialisez un modรจle ร  partir d'une configuration personnalisรฉe. Les attributs du modรจle sont initialisรฉs de maniรจre alรฉatoire et vous devrez entraรฎner le modรจle avant de pouvoir l'utiliser pour obtenir des rรฉsultats significatifs. Commencez par importer [`AutoConfig`], puis chargez le modรจle prรฉ-entraรฎnรฉ que vous voulez modifier. Dans [`AutoConfig.from_pretrained`], vous pouvez spรฉcifier l'attribut que vous souhaitez modifier, tel que le nombre de tรชtes d'attention : ```py >>> from transformers import AutoConfig >>> my_config = AutoConfig.from_pretrained("distilbert/distilbert-base-uncased", n_heads=12) ``` <frameworkcontent> <pt> Crรฉez un modรจle personnalisรฉ ร  partir de votre configuration avec [`AutoModel.from_config`] : ```py >>> from transformers import AutoModel >>> my_model = AutoModel.from_config(my_config) ``` </pt> <tf> Crรฉez un modรจle personnalisรฉ ร  partir de votre configuration avec [`TFAutoModel.from_config`] : ```py >>> from transformers import TFAutoModel >>> my_model = TFAutoModel.from_config(my_config) ``` </tf> </frameworkcontent> Consultez le guide [Crรฉer une architecture personnalisรฉe](./create_a_model) pour plus d'informations sur la crรฉation de configurations personnalisรฉes. ## Trainer - une boucle d'entraรฎnement optimisรฉe par PyTorch Tous les modรจles sont des [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) standard, vous pouvez donc les utiliser dans n'importe quelle boucle d'entraรฎnement typique. Bien que vous puissiez รฉcrire votre propre boucle d'entraรฎnement, ๐Ÿค— Transformers fournit une classe [`Trainer`] pour PyTorch, qui contient la boucle d'entraรฎnement de base et ajoute des fonctionnalitรฉs supplรฉmentaires comme l'entraรฎnement distribuรฉ, la prรฉcision mixte, et plus encore. En fonction de votre tรขche, vous passerez gรฉnรฉralement les paramรจtres suivants ร  [`Trainer`] : 1. Un [`PreTrainedModel`] ou un [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module): ```py >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased") ``` 2. [`TrainingArguments`] contient les hyperparamรจtres du modรจle que vous pouvez changer comme le taux d'apprentissage, la taille de l'รฉchantillon, et le nombre d'รฉpoques pour s'entraรฎner. Les valeurs par dรฉfaut sont utilisรฉes si vous ne spรฉcifiez pas d'hyperparamรจtres d'apprentissage : ```py >>> from transformers import TrainingArguments >>> training_args = TrainingArguments( ... output_dir="path/to/save/folder/", ... learning_rate=2e-5, ... 
per_device_train_batch_size=8, ... per_device_eval_batch_size=8, ... num_train_epochs=2, ... ) ``` 3. Une classe de prรฉtraitement comme un tokenizer, un processeur d'images ou un extracteur de caractรฉristiques : ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased") ``` 4. Chargez un jeu de donnรฉes : ```py >>> from datasets import load_dataset >>> dataset = load_dataset("rotten_tomatoes") # doctest: +IGNORE_RESULT ``` 5. Crรฉez une fonction qui transforme le texte du jeu de donnรฉes en token : ```py >>> def tokenize_dataset(dataset): ... return tokenizer(dataset["text"]) ``` Puis appliquez-la ร  l'intรฉgralitรฉ du jeu de donnรฉes avec [`~datasets.Dataset.map`]: ```py >>> dataset = dataset.map(tokenize_dataset, batched=True) ``` 6. Un [`DataCollatorWithPadding`] pour crรฉer un รฉchantillon d'exemples ร  partir de votre jeu de donnรฉes : ```py >>> from transformers import DataCollatorWithPadding >>> data_collator = DataCollatorWithPadding(tokenizer=tokenizer) ``` Maintenant, rassemblez tous ces รฉlรฉments dans un [`Trainer`] : ```py >>> from transformers import Trainer >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=dataset["train"], ... eval_dataset=dataset["test"], ... tokenizer=tokenizer, ... data_collator=data_collator, ... ) # doctest: +SKIP ``` Une fois que vous รชtes prรชt, appelez la fonction [`~Trainer.train`] pour commencer l'entraรฎnement : ```py >>> trainer.train() # doctest: +SKIP ``` <Tip> Pour les tรขches - comme la traduction ou la gรฉnรฉration de rรฉsumรฉ - qui utilisent un modรจle sรฉquence ร  sรฉquence, utilisez plutรดt les classes [`Seq2SeqTrainer`] et [`Seq2SeqTrainingArguments`]. </Tip> Vous pouvez personnaliser le comportement de la boucle d'apprentissage en redรฉfinissant les mรฉthodes ร  l'intรฉrieur de [`Trainer`]. Cela vous permet de personnaliser des caractรฉristiques telles que la fonction de perte, l'optimiseur et le planificateur. Consultez la documentation de [`Trainer`] pour savoir quelles mรฉthodes peuvent รชtre redรฉfinies. L'autre moyen de personnaliser la boucle d'apprentissage est d'utiliser les [Callbacks](./main_classes/callbacks). Vous pouvez utiliser les callbacks pour intรฉgrer d'autres bibliothรจques et inspecter la boucle d'apprentissage afin de suivre la progression ou d'arrรชter l'apprentissage plus tรดt. Les callbacks ne modifient rien dans la boucle d'apprentissage elle-mรชme. Pour personnaliser quelque chose comme la fonction de perte, vous devez redรฉfinir le [`Trainer`] ร  la place. ## Entraรฎnement avec TensorFlow Tous les modรจles sont des modรจles standard [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) afin qu'ils puissent รชtre entraรฎnรฉs avec TensorFlow avec l'API [Keras](https://keras.io/). ๐Ÿค— Transformers fournit la fonction [`~TFPreTrainedModel.prepare_tf_dataset`] pour charger facilement votre jeu de donnรฉes comme un `tf.data.Dataset` afin que vous puissiez commencer l'entraรฎnement immรฉdiatement avec les fonctions [`compile`](https://keras.io/api/models/model_training_apis/#compile-method) et [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) de Keras. 1. 
Vous commencez avec un modèle [`TFPreTrainedModel`] ou [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) : ```py >>> from transformers import TFAutoModelForSequenceClassification >>> model = TFAutoModelForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased") ``` 2. Une classe de prétraitement comme un tokenizer, un processeur d'images ou un extracteur de caractéristiques : ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased") ``` 3. Créez une fonction qui transforme le texte du jeu de données en token : ```py >>> def tokenize_dataset(dataset): ...     return tokenizer(dataset["text"])  # doctest: +SKIP ``` 4. Appliquez le tokenizer à l'ensemble du jeu de données avec [`~datasets.Dataset.map`] et passez ensuite le jeu de données et le tokenizer à [`~TFPreTrainedModel.prepare_tf_dataset`]. Vous pouvez également modifier la taille de l'échantillon et mélanger le jeu de données ici si vous le souhaitez : ```py >>> dataset = dataset.map(tokenize_dataset)  # doctest: +SKIP >>> tf_dataset = model.prepare_tf_dataset( ...     dataset, batch_size=16, shuffle=True, tokenizer=tokenizer ... )  # doctest: +SKIP ``` 5. Une fois que vous êtes prêt, appelez les fonctions `compile` et `fit` pour commencer l'entraînement : ```py >>> from tensorflow.keras.optimizers import Adam >>> model.compile(optimizer=Adam(3e-5)) >>> model.fit(tf_dataset)  # doctest: +SKIP ``` ## Et après ? Maintenant que vous avez terminé la visite rapide de 🤗 Transformers, consultez nos guides et apprenez à faire des choses plus spécifiques comme créer un modèle personnalisé, finetuner un modèle pour une tâche, et comment entraîner un modèle avec un script. Si vous souhaitez en savoir plus sur les concepts fondamentaux de 🤗 Transformers, jetez un œil à nos guides conceptuels !
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/te/_toctree.yml
- sections: - local: index title: ๐Ÿค— Transformers - local: quicktour title: เฐคเฑเฐตเฐฐเฐฟเฐค เฐชเฐฐเฑเฐฏเฐŸเฐจ title: เฐชเฑเฐฐเฐพเฐฐเฐ‚เฐญเฐฟเฐ‚เฐšเฐกเฐพเฐจเฐฟเฐ•เฐฟ
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/te/index.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> [เฐชเฑˆเฐŸเฑ‹เฐฐเฑเฐšเฑ](https://pytorch.org/), [เฐŸเฑ†เฐจเฑเฐธเฐฐเฑโ€Œเฐซเฑเฐฒเฑ‹](https://www.tensorflow.org/), เฐฎเฐฐเฐฟเฐฏเฑ [เฐœเฐพเฐ•เฑเฐธเฑ](https://jax.readthedocs.io/en/latest/) เฐ•เฑ‹เฐธเฐ‚ เฐธเฑเฐฅเฐฟเฐคเฐฟ-เฐ•เฐฒเฐพเฐจ เฐฏเฐ‚เฐคเฑเฐฐ เฐ…เฐญเฑเฐฏเฐพเฐธเฐ‚. ๐Ÿค— เฐŸเฑเฐฐเฐพเฐจเฑเฐธเฑเฐซเฐพเฐฐเฑเฐฎเฐฐเฑเฐธเฑ เฐ…เฐญเฐฟเฐตเฑƒเฐฆเฑเฐงเฐฟเฐธเฑเฐคเฑเฐจเฑเฐจเฐฆเฐฟ API เฐฎเฐฐเฐฟเฐฏเฑ เฐ‰เฐชเฐ•เฐฐเฐฃเฐพเฐฒเฑ, เฐชเฑ‚เฐฐเฑเฐต-เฐšเฑ‡เฐคเฐจ เฐฎเฑ‹เฐกเฐฒเฑเฐฒเฐจเฑ เฐธเฑเฐฒเฐญเฐ‚เฐ—เฐพ เฐกเฑŒเฐจเฑเฐฒเฑ‹เฐกเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐถเฐฟเฐ•เฑเฐทเฐฃ เฐšเฑ‡เฐฏเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐ…เฐตเฐธเฐฐเฐฎเฑˆเฐจ เฐธเฐฎเฐฏเฐ‚, เฐตเฐจเฐฐเฑเฐฒเฑ, เฐฎเฐฐเฐฟเฐฏเฑ เฐตเฐธเฑเฐคเฑเฐตเฑเฐฒเฐจเฑ เฐจเฑเฐ‚เฐšเฐฟ เฐฎเฑ‹เฐกเฐฒเฑเฐจเฑ เฐถเฑ€เฐฐเฑเฐทเฐฟเฐ•เฐ‚ เฐจเฑเฐ‚เฐšเฐฟ เฐชเฑเฐฐเฐถเฐฟเฐ•เฑเฐทเฐฟเฐ‚เฐšเฐกเฐ‚ เฐตเฐฐเฐ•เฑ เฐฆเฑ‡เฐตเฐพเฐฏเฐจเฐ‚ เฐšเฑ‡เฐธเฑเฐคเฑเฐ‚เฐฆเฐฟ. เฐˆ เฐฎเฑ‹เฐกเฐฒเฑเฐฒเฑ เฐตเฐฟเฐญเฐฟเฐจเฑเฐจ เฐฎเฑ‹เฐกเฐพเฐฒเฐฟเฐŸเฑ€เฐฒเฐฒเฑ‹ เฐธเฐพเฐงเฐพเฐฐเฐฃ เฐชเฐจเฑเฐฒเฐ•เฑ เฐฎเฐฆเฑเฐฆเฐคเฑ เฐšเฑ‡เฐธเฑเฐคเฐพเฐฏเฐฟ, เฐตเฐ‚เฐŸเฐฟเฐตเฐฟ: ๐Ÿ“ **เฐชเฑเฐฐเฐพเฐ•เฑƒเฐคเฐฟเฐ• เฐญเฐพเฐท เฐชเฑเฐฐเฐ•เฑเฐฐเฐฟเฐฏ**: เฐตเฐšเฐจ เฐตเฐฐเฑเฐ—เฑ€เฐ•เฐฐเฐฃ, เฐชเฑ‡เฐฐเฑเฐฒ เฐฏเฑŠเฐ•เฑเฐ• เฐฏเฑ†เฐ‚เฐŸเฐฟเฐŸเฑ€ เฐ—เฑเฐฐเฑเฐคเฑเฐตเฑ, เฐชเฑเฐฐเฐถเฑเฐจ เฐธเฐ‚เฐตเฐพเฐฆ, เฐญเฐพเฐทเฐพ เฐฐเฐšเฐจ, เฐธเฐ‚เฐ•เฑเฐทเฑ‡เฐชเฐฃ, เฐ…เฐจเฑเฐตเฐพเฐฆเฐ‚, เฐ…เฐจเฑ‡เฐ• เฐชเฑเฐฐเฐ•เฐพเฐฐเฐพเฐฒเฑ, เฐฎเฐฐเฐฟเฐฏเฑ เฐตเฐšเฐจ เฐธเฑƒเฐทเฑเฐŸเฐฟ.<br> ๐Ÿ–ผ๏ธ **เฐ•เฐ‚เฐชเฑเฐฏเฑ‚เฐŸเฐฐเฑ เฐตเฐฟเฐทเฐฏเฐ‚**: เฐšเฐฟเฐคเฑเฐฐเฐ‚ เฐตเฐฐเฑเฐ—เฑ€เฐ•เฐฐเฐฃ, เฐตเฐธเฑเฐคเฑเฐฐเฐ‚ เฐ—เฑเฐฐเฑเฐคเฑเฐตเฑ, เฐฎเฐฐเฐฟเฐฏเฑ เฐตเฐฟเฐญเฐœเฐจ.<br> ๐Ÿ—ฃ๏ธ **เฐ†เฐกเฐฟเฐฏเฑ‹**: เฐธเฑเฐตเฐฏเฐ‚เฐšเฐฒเฐจ เฐชเฑเฐฐเฐธเฐ‚เฐ—เฐพเฐจเฑเฐจเฐฟ เฐ—เฑเฐฐเฑเฐคเฑเฐšเฑ‡เฐธเฑ‡เฐ‚เฐฆเฑเฐ•เฑ, เฐ†เฐกเฐฟเฐฏเฑ‹ เฐตเฐฐเฑเฐ—เฑ€เฐ•เฐฐเฐฃ.<br> ๐Ÿ™ **เฐฌเฐนเฑเฐฎเฑ‚เฐฒเฐฟเฐ•**: เฐชเฐŸเฑเฐŸเฐฟ เฐชเฑเฐฐเฐถเฑเฐจ เฐธเฐ‚เฐตเฐพเฐฆ, เฐ†เฐชเฑเฐŸเฐฟเฐ•เฐฒเฑ เฐธเฐฟเฐซเฐฐเฑ เฐ—เฑเฐฐเฑเฐคเฑเฐตเฑ, เฐกเฐพเฐ•เฑเฐฏเฑเฐฎเฑ†เฐ‚เฐŸเฑเฐฒเฑ เฐธเฑเฐ•เฑเฐฏเฐพเฐจเฑ เฐšเฑ‡เฐธเฐฟเฐจเฐ‚เฐคเฐ—เฐพ เฐธเฐฎเฐพเฐšเฐพเฐฐ เฐชเฑŠเฐ‚เฐฆเฐกเฐ‚, เฐตเฑ€เฐกเฐฟเฐฏเฑ‹ เฐตเฐฐเฑเฐ—เฑ€เฐ•เฐฐเฐฃ, เฐฎเฐฐเฐฟเฐฏเฑ เฐฆเฑƒเฐถเฑเฐฏ เฐชเฑเฐฐเฐถเฑเฐจ เฐธเฐ‚เฐตเฐพเฐฆ. ๐Ÿค— เฐŸเฑเฐฐเฐพเฐจเฑเฐธเฑเฐซเฐพเฐฐเฑเฐฎเฐฐเฑเฐธเฑ เฐชเฑˆเฐจ เฐฎเฐฆเฑเฐฆเฐคเฑ เฐšเฑ‡เฐธเฑเฐคเฑเฐ‚เฐฆเฐฟ เฐชเฑˆเฐจ เฐคเฑŠเฐฒเฐ—เฐฟเฐ‚เฐšเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐชเฑˆเฐจ เฐชเฑˆเฐจ เฐชเฑˆเฐจ เฐชเฑเฐฐเฑ‹เฐ—เฑเฐฐเฐพเฐฎเฑเฐฒเฑ‹ เฐฎเฑ‹เฐกเฐฒเฑเฐจเฑ เฐถเฐฟเฐ•เฑเฐทเฐฟเฐ‚เฐšเฐ‚เฐกเฐฟ, เฐฎเฐฐเฐฟเฐฏเฑ เฐ…เฐจเฑเฐจเฐฟ เฐชเฑเฐฐเฐพเฐฅเฐฎเฐฟเฐ• เฐฏเฑŠเฐ•เฑเฐ•เฐกเฐพ เฐ‡เฐจเฑโ€Œเฐซเฐฐเฑ†เฐจเฑเฐธเฑ เฐ•เฑ‹เฐธเฐ‚ เฐฒเฑ‹เฐกเฑ เฐšเฑ‡เฐฏเฐ‚เฐกเฐฟ. 
เฐฎเฑ‹ เฐกเฐฒเฑเฐฒเฑ เฐ•เฑ‚เฐกเฐพ เฐชเฑเฐฐเฑŠเฐกเฐ•เฑเฐทเฐจเฑ เฐตเฐพเฐคเฐพเฐตเฐฐเฐฃเฐพเฐฒเฐฒเฑ‹ เฐตเฐพเฐกเฑเฐ•เฑ‹เฐตเฐกเฐพเฐจเฐฟเฐ•เฐฟ ONNX เฐฎเฐฐเฐฟเฐฏเฑ TorchScript เฐตเฐ‚เฐŸเฐฟ เฐ†เฐ•เฑƒเฐคเฑเฐฒเฐ•เฑ เฐŽเฐ—เฑเฐฎเฐคเฐฟ เฐšเฑ‡เฐฏเฐตเฐšเฑเฐšเฑ. เฐˆเฐฐเฑเฐตเฑเฐฒเฐ•เฑ [เฐนเฐฌเฑ](https://huggingface.co/models), [เฐซเฑ‹เฐฐเฐ‚](https://discuss.huggingface.co/), เฐฒเฑ‡เฐฆเฐพ [เฐกเฐฟเฐธเฑเฐ•เฐพเฐฐเฑเฐกเฑ](https://discord.com/invite/JfAtkvEtRb) เฐฒเฑ‹ เฐˆ เฐชเฑ†เฐฆเฑเฐฆ เฐธเฐฎเฑเฐฆเฐพเฐฏเฐ‚เฐฒเฑ‹ เฐšเฑ‡เฐฐเฐ‚เฐกเฐฟ! ## เฐฎเฑ€เฐฐเฑ เฐนเฐ—เฑเฐ—เฐฟเฐ‚เฐ—เฑ เฐซเฑ‡เฐธเฑ เฐŸเฑ€เฐฎเฑ เฐจเฑเฐ‚เฐกเฐฟ เฐ…เฐจเฑเฐ•เฑ‚เฐฒ เฐฎเฐฆเฑเฐฆเฐคเฑ เฐ•เฑ‹เฐธเฐ‚ เฐšเฑ‚เฐธเฑเฐคเฑเฐจเฑเฐจเฐŸเฑเฐฒเฐฏเฐฟเฐคเฑ‡ <a target="_blank" href="https://huggingface.co/support"> <img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="width: 100%; max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);"> </a> ## เฐตเฐฟเฐทเฐฏเฐพเฐฒเฑ เฐกเฐพเฐ•เฑเฐฏเฑเฐฎเฑ†เฐ‚เฐŸเฑ‡เฐทเฐจเฑ เฐเฐฆเฑ เฐตเฐฟเฐญเฐพเฐ—เฐพเฐฒเฑเฐ—เฐพ เฐจเฐฟเฐฐเฑเฐตเฐนเฐฟเฐ‚เฐšเฐฌเฐกเฐฟเฐ‚เฐฆเฐฟ: - **เฐชเฑเฐฐเฐพเฐฐเฐ‚เฐญเฐฟเฐ‚เฐšเฐ‚เฐกเฐฟ** เฐฒเฑˆเฐฌเฑเฐฐเฐฐเฑ€ เฐฏเฑŠเฐ•เฑเฐ• เฐถเฑ€เฐ˜เฑเฐฐ เฐชเฐฐเฑเฐฏเฐŸเฐจ เฐฎเฐฐเฐฟเฐฏเฑ เฐฐเฐจเฑเฐจเฐฟเฐ‚เฐ—เฑ เฐ•เฑ‹เฐธเฐ‚ เฐ‡เฐจเฑโ€ŒเฐธเฑเฐŸเฐพเฐฒเฑ‡เฐทเฐจเฑ เฐธเฑ‚เฐšเฐจเฐฒเฐจเฑ เฐ…เฐ‚เฐฆเฐฟเฐธเฑเฐคเฑเฐ‚เฐฆเฐฟ. - **เฐŸเฑเฐฏเฑเฐŸเฑ‹เฐฐเฐฟเฐฏเฐฒเฑเฐธเฑ** เฐฎเฑ€เฐฐเฑ เฐ…เฐจเฑเฐญเฐตเฐถเฑ‚เฐจเฑเฐฏเฑเฐกเฑ เฐ…เฐฏเฐฟเฐคเฑ‡ เฐชเฑเฐฐเฐพเฐฐเฐ‚เฐญเฐฟเฐ‚เฐšเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐ—เฑŠเฐชเฑเฐช เฐชเฑเฐฐเฐฆเฑ‡เฐถเฐ‚. เฐฎเฑ€เฐฐเฑ เฐฒเฑˆเฐฌเฑเฐฐเฐฐเฑ€เฐจเฐฟ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐกเฐ‚ เฐชเฑเฐฐเฐพเฐฐเฐ‚เฐญเฐฟเฐ‚เฐšเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐ…เฐตเฐธเฐฐเฐฎเฑˆเฐจ เฐชเฑเฐฐเฐพเฐฅเฐฎเฐฟเฐ• เฐจเฑˆเฐชเฑเฐฃเฑเฐฏเฐพเฐฒเฐจเฑ เฐชเฑŠเฐ‚เฐฆเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐˆ เฐตเฐฟเฐญเฐพเฐ—เฐ‚ เฐฎเฑ€เฐ•เฑ เฐธเฐนเฐพเฐฏเฐ‚ เฐšเฑ‡เฐธเฑเฐคเฑเฐ‚เฐฆเฐฟ. - **เฐนเฑŒ-เฐŸเฑ-เฐ—เฑˆเฐกเฑโ€Œเฐฒเฑ** เฐฒเฐพเฐ‚เฐ—เฑเฐตเฑ‡เฐœเฑ เฐฎเฑ‹เฐกเฐฒเฐฟเฐ‚เฐ—เฑ เฐ•เฑ‹เฐธเฐ‚ เฐชเฑเฐฐเฐฟเฐŸเฑเฐฐเฑˆเฐจเฑเฐกเฑ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐจเฐฟ เฐซเฑˆเฐจเฑโ€ŒเฐŸเฑเฐฏเฑ‚เฐจเฑ เฐšเฑ‡เฐฏเฐกเฐ‚ เฐฒเฑ‡เฐฆเฐพ เฐ•เฐธเฑเฐŸเฐฎเฑ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐจเฑ เฐŽเฐฒเฐพ เฐตเฑเฐฐเฐพเฐฏเฐพเฐฒเฐฟ เฐฎเฐฐเฐฟเฐฏเฑ เฐทเฑ‡เฐฐเฑ เฐšเฑ‡เฐฏเฐพเฐฒเฐฟ เฐตเฐ‚เฐŸเฐฟ เฐจเฐฟเฐฐเฑเฐฆเฐฟเฐทเฑเฐŸ เฐฒเฐ•เฑเฐทเฑเฐฏเฐพเฐจเฑเฐจเฐฟ เฐŽเฐฒเฐพ เฐธเฐพเฐงเฐฟเฐ‚เฐšเฐพเฐฒเฑ‹ เฐฎเฑ€เฐ•เฑ เฐšเฑ‚เฐชเฑเฐคเฐพเฐฏเฐฟ. - **เฐ•เฐพเฐจเฑเฐธเฑ†เฐชเฑเฐšเฑเฐตเฐฒเฑ เฐ—เฑˆเฐกเฑเฐธเฑ** เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐฒเฑ, เฐŸเฐพเฐธเฑเฐ•เฑโ€Œเฐฒเฑ เฐฎเฐฐเฐฟเฐฏเฑ ๐Ÿค— เฐŸเฑเฐฐเฐพเฐจเฑเฐธเฑโ€Œเฐซเฐพเฐฐเฑเฐฎเฐฐเฑเฐฒ เฐกเฐฟเฐœเฑˆเฐจเฑ เฐซเฐฟเฐฒเฐพเฐธเฐซเฑ€ เฐตเฑ†เฐจเฑเฐ• เฐ‰เฐจเฑเฐจ เฐ…เฐ‚เฐคเฐฐเฑเฐฒเฑ€เฐจ เฐญเฐพเฐตเฐจเฐฒเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐ†เฐฒเฑ‹เฐšเฐจเฐฒ เฐ—เฑเฐฐเฐฟเฐ‚เฐšเฐฟ เฐฎเฐฐเฐฟเฐ‚เฐค เฐšเฐฐเฑเฐš เฐฎเฐฐเฐฟเฐฏเฑ เฐตเฐฟเฐตเฐฐเฐฃเฐจเฑ เฐ…เฐ‚เฐฆเฐฟเฐธเฑเฐคเฑเฐ‚เฐฆเฐฟ. - **API** เฐ…เฐจเฑเฐจเฐฟ เฐคเฐฐเฐ—เฐคเฑเฐฒเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐตเฐฟเฐงเฑเฐฒเฐจเฑ เฐตเฐฟเฐตเฐฐเฐฟเฐธเฑเฐคเฑเฐ‚เฐฆเฐฟ: - **เฐชเฑเฐฐเฐงเฐพเฐจ เฐคเฐฐเฐ—เฐคเฑเฐฒเฑ** เฐ•เฐพเฐจเฑเฐซเฐฟเฐ—เฐฐเฑ‡เฐทเฐจเฑ, เฐฎเฑ‹เฐกเฐฒเฑ, เฐŸเฑ‹เฐ•เฑ†เฐจเฑˆเฐœเฐฐเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐชเฑˆเฐชเฑโ€Œเฐฒเฑˆเฐจเฑ เฐตเฐ‚เฐŸเฐฟ เฐ…เฐคเฑเฐฏเฐ‚เฐค เฐฎเฑเฐ–เฑเฐฏเฐฎเฑˆเฐจ เฐคเฐฐเฐ—เฐคเฑเฐฒเฐจเฑ เฐตเฐฟเฐตเฐฐเฐฟเฐธเฑเฐคเฑเฐ‚เฐฆเฐฟ. 
- **เฐฎเฑ‹เฐกเฐฒเฑเฐธเฑ** เฐฒเฑˆเฐฌเฑเฐฐเฐฐเฑ€เฐฒเฑ‹ เฐ…เฐฎเฐฒเฑ เฐšเฑ‡เฐฏเฐฌเฐกเฐฟเฐจ เฐชเฑเฐฐเฐคเฐฟ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐ•เฑ เฐธเฐ‚เฐฌเฐ‚เฐงเฐฟเฐ‚เฐšเฐฟเฐจ เฐคเฐฐเฐ—เฐคเฑเฐฒเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐตเฐฟเฐงเฑเฐฒเฐจเฑ เฐตเฐฟเฐตเฐฐเฐฟเฐธเฑเฐคเฑเฐ‚เฐฆเฐฟ. - **เฐ…เฐ‚เฐคเฐฐเฑเฐ—เฐค เฐธเฐนเฐพเฐฏเฐ•เฑเฐฒเฑ** เฐ…เฐ‚เฐคเฐฐเฑเฐ—เฐคเฐ‚เฐ—เฐพ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฑ‡ เฐฏเฑเฐŸเฐฟเฐฒเฐฟเฐŸเฑ€ เฐ•เฑเฐฒเฐพเฐธเฑโ€Œเฐฒเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐซเฐ‚เฐ•เฑเฐทเฐจเฑโ€Œเฐฒ เฐตเฐฟเฐตเฐฐเฐพเฐฒเฑ. ## เฐฎเฐฆเฑเฐฆเฐคเฑ เฐ‰เฐจเฑเฐจ เฐจเฐฎเฑ‚เฐจเฐพเฐฒเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐซเฑเฐฐเฑ‡เฐฎเฑโ€Œเฐตเฐฐเฑเฐ•เฑโ€Œเฐฒเฑ เฐฆเฐฟเฐ—เฑเฐตเฐจ เฐ‰เฐจเฑเฐจ เฐชเฐŸเฑเฐŸเฐฟเฐ• เฐ† เฐชเฑเฐฐเฐคเฐฟ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐ•เฑ เฐชเฑˆเฐฅเฐพเฐจเฑ เฐ•เฐฒเฐฟเฐ—เฐฟ เฐ‰เฐจเฑเฐจเฐพ เฐฒเฑˆเฐฌเฑเฐฐเฐฐเฑ€เฐฒเฑ‹ เฐชเฑเฐฐเฐธเฑเฐคเฑเฐค เฐฎเฐฆเฑเฐฆเฐคเฑเฐจเฑ เฐธเฑ‚เฐšเฐฟเฐธเฑเฐคเฑเฐ‚เฐฆเฐฟ เฐŸเฑ‹เฐ•เฑ†เฐจเฑˆเฐœเฐฐเฑ ("เฐจเฑ†เฐฎเฑเฐฎเฐฆเฐฟเฐ—เฐพ" เฐ…เฐจเฐฟ เฐชเฐฟเฐฒเฑเฐธเฑเฐคเฐพเฐฐเฑ). Jax (เฐฆเฑเฐตเฐพเฐฐเฐพ เฐซเฑเฐฒเฐพเฐ•เฑเฐธเฑ), เฐชเฑˆเฐŸเฐพเฐฐเฑเฐšเฑ เฐฎเฐฐเฐฟเฐฏเฑ/เฐฒเฑ‡เฐฆเฐพ เฐŸเฑ†เฐจเฑเฐธเฐฐเฑโ€Œเฐซเฑเฐฒเฑ‹. <!--This table is updated automatically from the auto modules with _make fix-copies_. Do not update manually!--> | Model | PyTorch support | TensorFlow support | Flax Support | |:------------------------------------------------------------------------:|:---------------:|:------------------:|:------------:| | [ALBERT](model_doc/albert) | โœ… | โœ… | โœ… | | [ALIGN](model_doc/align) | โœ… | โŒ | โŒ | | [AltCLIP](model_doc/altclip) | โœ… | โŒ | โŒ | | [Audio Spectrogram Transformer](model_doc/audio-spectrogram-transformer) | โœ… | โŒ | โŒ | | [Autoformer](model_doc/autoformer) | โœ… | โŒ | โŒ | | [Bark](model_doc/bark) | โœ… | โŒ | โŒ | | [BART](model_doc/bart) | โœ… | โœ… | โœ… | | [BARThez](model_doc/barthez) | โœ… | โœ… | โœ… | | [BARTpho](model_doc/bartpho) | โœ… | โœ… | โœ… | | [BEiT](model_doc/beit) | โœ… | โŒ | โœ… | | [BERT](model_doc/bert) | โœ… | โœ… | โœ… | | [Bert Generation](model_doc/bert-generation) | โœ… | โŒ | โŒ | | [BertJapanese](model_doc/bert-japanese) | โœ… | โœ… | โœ… | | [BERTweet](model_doc/bertweet) | โœ… | โœ… | โœ… | | [BigBird](model_doc/big_bird) | โœ… | โŒ | โœ… | | [BigBird-Pegasus](model_doc/bigbird_pegasus) | โœ… | โŒ | โŒ | | [BioGpt](model_doc/biogpt) | โœ… | โŒ | โŒ | | [BiT](model_doc/bit) | โœ… | โŒ | โŒ | | [Blenderbot](model_doc/blenderbot) | โœ… | โœ… | โœ… | | [BlenderbotSmall](model_doc/blenderbot-small) | โœ… | โœ… | โœ… | | [BLIP](model_doc/blip) | โœ… | โœ… | โŒ | | [BLIP-2](model_doc/blip-2) | โœ… | โŒ | โŒ | | [BLOOM](model_doc/bloom) | โœ… | โŒ | โœ… | | [BORT](model_doc/bort) | โœ… | โœ… | โœ… | | [BridgeTower](model_doc/bridgetower) | โœ… | โŒ | โŒ | | [BROS](model_doc/bros) | โœ… | โŒ | โŒ | | [ByT5](model_doc/byt5) | โœ… | โœ… | โœ… | | [CamemBERT](model_doc/camembert) | โœ… | โœ… | โŒ | | [CANINE](model_doc/canine) | โœ… | โŒ | โŒ | | [Chinese-CLIP](model_doc/chinese_clip) | โœ… | โŒ | โŒ | | [CLAP](model_doc/clap) | โœ… | โŒ | โŒ | | [CLIP](model_doc/clip) | โœ… | โœ… | โœ… | | [CLIPSeg](model_doc/clipseg) | โœ… | โŒ | โŒ | | [CodeGen](model_doc/codegen) | โœ… | โŒ | โŒ | | [CodeLlama](model_doc/code_llama) | โœ… | โŒ | โŒ | | [Conditional DETR](model_doc/conditional_detr) | โœ… | โŒ | โŒ | | [ConvBERT](model_doc/convbert) | โœ… | โœ… | โŒ | | [ConvNeXT](model_doc/convnext) | โœ… | โœ… | โŒ | | [ConvNeXTV2](model_doc/convnextv2) | โœ… | โŒ | โŒ | | [CPM](model_doc/cpm) | โœ… | โœ… | โœ… | | 
[CPM-Ant](model_doc/cpmant) | โœ… | โŒ | โŒ | | [CTRL](model_doc/ctrl) | โœ… | โœ… | โŒ | | [CvT](model_doc/cvt) | โœ… | โœ… | โŒ | | [Data2VecAudio](model_doc/data2vec) | โœ… | โŒ | โŒ | | [Data2VecText](model_doc/data2vec) | โœ… | โŒ | โŒ | | [Data2VecVision](model_doc/data2vec) | โœ… | โœ… | โŒ | | [DeBERTa](model_doc/deberta) | โœ… | โœ… | โŒ | | [DeBERTa-v2](model_doc/deberta-v2) | โœ… | โœ… | โŒ | | [Decision Transformer](model_doc/decision_transformer) | โœ… | โŒ | โŒ | | [Deformable DETR](model_doc/deformable_detr) | โœ… | โŒ | โŒ | | [DeiT](model_doc/deit) | โœ… | โœ… | โŒ | | [DePlot](model_doc/deplot) | โœ… | โŒ | โŒ | | [DETA](model_doc/deta) | โœ… | โŒ | โŒ | | [DETR](model_doc/detr) | โœ… | โŒ | โŒ | | [DialoGPT](model_doc/dialogpt) | โœ… | โœ… | โœ… | | [DiNAT](model_doc/dinat) | โœ… | โŒ | โŒ | | [DINOv2](model_doc/dinov2) | โœ… | โŒ | โŒ | | [DistilBERT](model_doc/distilbert) | โœ… | โœ… | โœ… | | [DiT](model_doc/dit) | โœ… | โŒ | โœ… | | [DonutSwin](model_doc/donut) | โœ… | โŒ | โŒ | | [DPR](model_doc/dpr) | โœ… | โœ… | โŒ | | [DPT](model_doc/dpt) | โœ… | โŒ | โŒ | | [EfficientFormer](model_doc/efficientformer) | โœ… | โœ… | โŒ | | [EfficientNet](model_doc/efficientnet) | โœ… | โŒ | โŒ | | [ELECTRA](model_doc/electra) | โœ… | โœ… | โœ… | | [EnCodec](model_doc/encodec) | โœ… | โŒ | โŒ | | [Encoder decoder](model_doc/encoder-decoder) | โœ… | โœ… | โœ… | | [ERNIE](model_doc/ernie) | โœ… | โŒ | โŒ | | [ErnieM](model_doc/ernie_m) | โœ… | โŒ | โŒ | | [ESM](model_doc/esm) | โœ… | โœ… | โŒ | | [FairSeq Machine-Translation](model_doc/fsmt) | โœ… | โŒ | โŒ | | [Falcon](model_doc/falcon) | โœ… | โŒ | โŒ | | [FLAN-T5](model_doc/flan-t5) | โœ… | โœ… | โœ… | | [FLAN-UL2](model_doc/flan-ul2) | โœ… | โœ… | โœ… | | [FlauBERT](model_doc/flaubert) | โœ… | โœ… | โŒ | | [FLAVA](model_doc/flava) | โœ… | โŒ | โŒ | | [FNet](model_doc/fnet) | โœ… | โŒ | โŒ | | [FocalNet](model_doc/focalnet) | โœ… | โŒ | โŒ | | [Funnel Transformer](model_doc/funnel) | โœ… | โœ… | โŒ | | [GIT](model_doc/git) | โœ… | โŒ | โŒ | | [GLPN](model_doc/glpn) | โœ… | โŒ | โŒ | | [GPT Neo](model_doc/gpt_neo) | โœ… | โŒ | โœ… | | [GPT NeoX](model_doc/gpt_neox) | โœ… | โŒ | โŒ | | [GPT NeoX Japanese](model_doc/gpt_neox_japanese) | โœ… | โŒ | โŒ | | [GPT-J](model_doc/gptj) | โœ… | โœ… | โœ… | | [GPT-Sw3](model_doc/gpt-sw3) | โœ… | โœ… | โœ… | | [GPTBigCode](model_doc/gpt_bigcode) | โœ… | โŒ | โŒ | | [GPTSAN-japanese](model_doc/gptsan-japanese) | โœ… | โŒ | โŒ | | [Graphormer](model_doc/graphormer) | โœ… | โŒ | โŒ | | [GroupViT](model_doc/groupvit) | โœ… | โœ… | โŒ | | [HerBERT](model_doc/herbert) | โœ… | โœ… | โœ… | | [Hubert](model_doc/hubert) | โœ… | โœ… | โŒ | | [I-BERT](model_doc/ibert) | โœ… | โŒ | โŒ | | [IDEFICS](model_doc/idefics) | โœ… | โŒ | โŒ | | [ImageGPT](model_doc/imagegpt) | โœ… | โŒ | โŒ | | [Informer](model_doc/informer) | โœ… | โŒ | โŒ | | [InstructBLIP](model_doc/instructblip) | โœ… | โŒ | โŒ | | [Jukebox](model_doc/jukebox) | โœ… | โŒ | โŒ | | [LayoutLM](model_doc/layoutlm) | โœ… | โœ… | โŒ | | [LayoutLMv2](model_doc/layoutlmv2) | โœ… | โŒ | โŒ | | [LayoutLMv3](model_doc/layoutlmv3) | โœ… | โœ… | โŒ | | [LayoutXLM](model_doc/layoutxlm) | โœ… | โŒ | โŒ | | [LED](model_doc/led) | โœ… | โœ… | โŒ | | [LeViT](model_doc/levit) | โœ… | โŒ | โŒ | | [LiLT](model_doc/lilt) | โœ… | โŒ | โŒ | | [LLaMA](model_doc/llama) | โœ… | โŒ | โŒ | | [Llama2](model_doc/llama2) | โœ… | โŒ | โŒ | | [Longformer](model_doc/longformer) | 
โœ… | โœ… | โŒ | | [LongT5](model_doc/longt5) | โœ… | โŒ | โœ… | | [LUKE](model_doc/luke) | โœ… | โŒ | โŒ | | [LXMERT](model_doc/lxmert) | โœ… | โœ… | โŒ | | [M-CTC-T](model_doc/mctct) | โœ… | โŒ | โŒ | | [M2M100](model_doc/m2m_100) | โœ… | โŒ | โŒ | | [Marian](model_doc/marian) | โœ… | โœ… | โœ… | | [MarkupLM](model_doc/markuplm) | โœ… | โŒ | โŒ | | [Mask2Former](model_doc/mask2former) | โœ… | โŒ | โŒ | | [MaskFormer](model_doc/maskformer) | โœ… | โŒ | โŒ | | [MatCha](model_doc/matcha) | โœ… | โŒ | โŒ | | [mBART](model_doc/mbart) | โœ… | โœ… | โœ… | | [mBART-50](model_doc/mbart50) | โœ… | โœ… | โœ… | | [MEGA](model_doc/mega) | โœ… | โŒ | โŒ | | [Megatron-BERT](model_doc/megatron-bert) | โœ… | โŒ | โŒ | | [Megatron-GPT2](model_doc/megatron_gpt2) | โœ… | โœ… | โœ… | | [MGP-STR](model_doc/mgp-str) | โœ… | โŒ | โŒ | | [Mistral](model_doc/mistral) | โœ… | โŒ | โŒ | | [mLUKE](model_doc/mluke) | โœ… | โŒ | โŒ | | [MMS](model_doc/mms) | โœ… | โœ… | โœ… | | [MobileBERT](model_doc/mobilebert) | โœ… | โœ… | โŒ | | [MobileNetV1](model_doc/mobilenet_v1) | โœ… | โŒ | โŒ | | [MobileNetV2](model_doc/mobilenet_v2) | โœ… | โŒ | โŒ | | [MobileViT](model_doc/mobilevit) | โœ… | โœ… | โŒ | | [MobileViTV2](model_doc/mobilevitv2) | โœ… | โŒ | โŒ | | [MPNet](model_doc/mpnet) | โœ… | โœ… | โŒ | | [MPT](model_doc/mpt) | โœ… | โŒ | โŒ | | [MRA](model_doc/mra) | โœ… | โŒ | โŒ | | [MT5](model_doc/mt5) | โœ… | โœ… | โœ… | | [MusicGen](model_doc/musicgen) | โœ… | โŒ | โŒ | | [MVP](model_doc/mvp) | โœ… | โŒ | โŒ | | [NAT](model_doc/nat) | โœ… | โŒ | โŒ | | [Nezha](model_doc/nezha) | โœ… | โŒ | โŒ | | [NLLB](model_doc/nllb) | โœ… | โŒ | โŒ | | [NLLB-MOE](model_doc/nllb-moe) | โœ… | โŒ | โŒ | | [Nougat](model_doc/nougat) | โœ… | โœ… | โœ… | | [Nystrรถmformer](model_doc/nystromformer) | โœ… | โŒ | โŒ | | [OneFormer](model_doc/oneformer) | โœ… | โŒ | โŒ | | [OpenAI GPT](model_doc/openai-gpt) | โœ… | โœ… | โŒ | | [OpenAI GPT-2](model_doc/gpt2) | โœ… | โœ… | โœ… | | [OpenLlama](model_doc/open-llama) | โœ… | โŒ | โŒ | | [OPT](model_doc/opt) | โœ… | โœ… | โœ… | | [OWL-ViT](model_doc/owlvit) | โœ… | โŒ | โŒ | | [Pegasus](model_doc/pegasus) | โœ… | โœ… | โœ… | | [PEGASUS-X](model_doc/pegasus_x) | โœ… | โŒ | โŒ | | [Perceiver](model_doc/perceiver) | โœ… | โŒ | โŒ | | [Persimmon](model_doc/persimmon) | โœ… | โŒ | โŒ | | [PhoBERT](model_doc/phobert) | โœ… | โœ… | โœ… | | [Pix2Struct](model_doc/pix2struct) | โœ… | โŒ | โŒ | | [PLBart](model_doc/plbart) | โœ… | โŒ | โŒ | | [PoolFormer](model_doc/poolformer) | โœ… | โŒ | โŒ | | [Pop2Piano](model_doc/pop2piano) | โœ… | โŒ | โŒ | | [ProphetNet](model_doc/prophetnet) | โœ… | โŒ | โŒ | | [PVT](model_doc/pvt) | โœ… | โŒ | โŒ | | [QDQBert](model_doc/qdqbert) | โœ… | โŒ | โŒ | | [RAG](model_doc/rag) | โœ… | โœ… | โŒ | | [REALM](model_doc/realm) | โœ… | โŒ | โŒ | | [Reformer](model_doc/reformer) | โœ… | โŒ | โŒ | | [RegNet](model_doc/regnet) | โœ… | โœ… | โœ… | | [RemBERT](model_doc/rembert) | โœ… | โœ… | โŒ | | [ResNet](model_doc/resnet) | โœ… | โœ… | โœ… | | [RetriBERT](model_doc/retribert) | โœ… | โŒ | โŒ | | [RoBERTa](model_doc/roberta) | โœ… | โœ… | โœ… | | [RoBERTa-PreLayerNorm](model_doc/roberta-prelayernorm) | โœ… | โœ… | โœ… | | [RoCBert](model_doc/roc_bert) | โœ… | โŒ | โŒ | | [RoFormer](model_doc/roformer) | โœ… | โœ… | โœ… | | [RWKV](model_doc/rwkv) | โœ… | โŒ | โŒ | | [SAM](model_doc/sam) | โœ… | โœ… | โŒ | | [SegFormer](model_doc/segformer) | โœ… | โœ… | โŒ | | [SEW](model_doc/sew) | โœ… 
| โŒ | โŒ | | [SEW-D](model_doc/sew-d) | โœ… | โŒ | โŒ | | [Speech Encoder decoder](model_doc/speech-encoder-decoder) | โœ… | โŒ | โœ… | | [Speech2Text](model_doc/speech_to_text) | โœ… | โœ… | โŒ | | [SpeechT5](model_doc/speecht5) | โœ… | โŒ | โŒ | | [Splinter](model_doc/splinter) | โœ… | โŒ | โŒ | | [SqueezeBERT](model_doc/squeezebert) | โœ… | โŒ | โŒ | | [SwiftFormer](model_doc/swiftformer) | โœ… | โŒ | โŒ | | [Swin Transformer](model_doc/swin) | โœ… | โœ… | โŒ | | [Swin Transformer V2](model_doc/swinv2) | โœ… | โŒ | โŒ | | [Swin2SR](model_doc/swin2sr) | โœ… | โŒ | โŒ | | [SwitchTransformers](model_doc/switch_transformers) | โœ… | โŒ | โŒ | | [T5](model_doc/t5) | โœ… | โœ… | โœ… | | [T5v1.1](model_doc/t5v1.1) | โœ… | โœ… | โœ… | | [Table Transformer](model_doc/table-transformer) | โœ… | โŒ | โŒ | | [TAPAS](model_doc/tapas) | โœ… | โœ… | โŒ | | [TAPEX](model_doc/tapex) | โœ… | โœ… | โœ… | | [Time Series Transformer](model_doc/time_series_transformer) | โœ… | โŒ | โŒ | | [TimeSformer](model_doc/timesformer) | โœ… | โŒ | โŒ | | [Trajectory Transformer](model_doc/trajectory_transformer) | โœ… | โŒ | โŒ | | [Transformer-XL](model_doc/transfo-xl) | โœ… | โœ… | โŒ | | [TrOCR](model_doc/trocr) | โœ… | โŒ | โŒ | | [TVLT](model_doc/tvlt) | โœ… | โŒ | โŒ | | [UL2](model_doc/ul2) | โœ… | โœ… | โœ… | | [UMT5](model_doc/umt5) | โœ… | โŒ | โŒ | | [UniSpeech](model_doc/unispeech) | โœ… | โŒ | โŒ | | [UniSpeechSat](model_doc/unispeech-sat) | โœ… | โŒ | โŒ | | [UPerNet](model_doc/upernet) | โœ… | โŒ | โŒ | | [VAN](model_doc/van) | โœ… | โŒ | โŒ | | [VideoMAE](model_doc/videomae) | โœ… | โŒ | โŒ | | [ViLT](model_doc/vilt) | โœ… | โŒ | โŒ | | [Vision Encoder decoder](model_doc/vision-encoder-decoder) | โœ… | โœ… | โœ… | | [VisionTextDualEncoder](model_doc/vision-text-dual-encoder) | โœ… | โœ… | โœ… | | [VisualBERT](model_doc/visual_bert) | โœ… | โŒ | โŒ | | [ViT](model_doc/vit) | โœ… | โœ… | โœ… | | [ViT Hybrid](model_doc/vit_hybrid) | โœ… | โŒ | โŒ | | [VitDet](model_doc/vitdet) | โœ… | โŒ | โŒ | | [ViTMAE](model_doc/vit_mae) | โœ… | โœ… | โŒ | | [ViTMatte](model_doc/vitmatte) | โœ… | โŒ | โŒ | | [ViTMSN](model_doc/vit_msn) | โœ… | โŒ | โŒ | | [VITS](model_doc/vits) | โœ… | โŒ | โŒ | | [ViViT](model_doc/vivit) | โœ… | โŒ | โŒ | | [Wav2Vec2](model_doc/wav2vec2) | โœ… | โœ… | โœ… | | [Wav2Vec2-Conformer](model_doc/wav2vec2-conformer) | โœ… | โŒ | โŒ | | [Wav2Vec2Phoneme](model_doc/wav2vec2_phoneme) | โœ… | โœ… | โœ… | | [WavLM](model_doc/wavlm) | โœ… | โŒ | โŒ | | [Whisper](model_doc/whisper) | โœ… | โœ… | โœ… | | [X-CLIP](model_doc/xclip) | โœ… | โŒ | โŒ | | [X-MOD](model_doc/xmod) | โœ… | โŒ | โŒ | | [XGLM](model_doc/xglm) | โœ… | โœ… | โœ… | | [XLM](model_doc/xlm) | โœ… | โœ… | โŒ | | [XLM-ProphetNet](model_doc/xlm-prophetnet) | โœ… | โŒ | โŒ | | [XLM-RoBERTa](model_doc/xlm-roberta) | โœ… | โœ… | โœ… | | [XLM-RoBERTa-XL](model_doc/xlm-roberta-xl) | โœ… | โŒ | โŒ | | [XLM-V](model_doc/xlm-v) | โœ… | โœ… | โœ… | | [XLNet](model_doc/xlnet) | โœ… | โœ… | โŒ | | [XLS-R](model_doc/xls_r) | โœ… | โœ… | โœ… | | [XLSR-Wav2Vec2](model_doc/xlsr_wav2vec2) | โœ… | โœ… | โœ… | | [YOLOS](model_doc/yolos) | โœ… | โŒ | โŒ | | [YOSO](model_doc/yoso) | โœ… | โŒ | โŒ | <!-- End table-->
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/te/quicktour.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # เฐถเฑ€เฐ˜เฑเฐฐ เฐชเฐฐเฑเฐฏเฐŸเฐจ [[เฐ“เฐชเฑ†เฐจเฑ-เฐ‡เฐจเฑ-เฐ•เฑ‹เฐฒเฐพเฐฌเฑ]] ๐Ÿค— เฐŸเฑเฐฐเฐพเฐจเฑเฐธเฑโ€Œเฐซเฐพเฐฐเฑเฐฎเฐฐเฑโ€Œเฐฒเฐคเฑ‹ เฐฒเฑ‡เฐšเฐฟ เฐชเฐฐเฑเฐ—เฑ†เฐคเฑเฐคเฐ‚เฐกเฐฟ! เฐฎเฑ€เฐฐเฑ เฐกเฑ†เฐตเฐฒเฐชเฐฐเฑ เฐ…เฐฏเฐฟเฐจเฐพ เฐฒเฑ‡เฐฆเฐพ เฐฐเฑ‹เฐœเฑเฐตเฐพเฐฐเฑ€ เฐตเฐฟเฐจเฐฟเฐฏเฑ‹เฐ—เฐฆเฐพเฐฐเฑ เฐ…เฐฏเฐฟเฐจเฐพ, เฐˆ เฐถเฑ€เฐ˜เฑเฐฐ เฐชเฐฐเฑเฐฏเฐŸเฐจ เฐฎเฑ€เฐ•เฑ เฐชเฑเฐฐเฐพเฐฐเฐ‚เฐญเฐฟเฐ‚เฐšเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐธเฐนเฐพเฐฏเฐ‚ เฐšเฑ‡เฐธเฑเฐคเฑเฐ‚เฐฆเฐฟ เฐฎเฐฐเฐฟเฐฏเฑ [`pipeline`] เฐ…เฐจเฑเฐฎเฐฟเฐคเฐฟ เฐ•เฑ‹เฐธเฐ‚ เฐŽเฐฒเฐพ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐพเฐฒเฑ‹ เฐฎเฑ€เฐ•เฑ เฐšเฑ‚เฐชเฑเฐคเฑเฐ‚เฐฆเฐฟ, [AutoClass](./model_doc/auto) เฐคเฑ‹ เฐชเฑเฐฐเฑ€เฐŸเฑเฐฐเฑˆเฐจเฑเฐกเฑ เฐฎเฑ‹เฐกเฐฒเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐชเฑเฐฐเฐฟเฐชเฑเฐฐเฐพเฐธเฑ†เฐธเฐฐเฑ/ เฐ†เฐŸเฑ‹, เฐฎเฐฐเฐฟเฐฏเฑ PyTorch เฐฒเฑ‡เฐฆเฐพ TensorFlowเฐคเฑ‹ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐ•เฑ เฐคเฑเฐตเฐฐเฐ—เฐพ เฐถเฐฟเฐ•เฑเฐทเฐฃ เฐ‡เฐตเฑเฐตเฐ‚เฐกเฐฟ. เฐฎเฑ€เฐฐเฑ เฐ’เฐ• เฐ…เฐจเฑเฐญเฐตเฐถเฑ‚เฐจเฑเฐฏเฑเฐกเฑ เฐ…เฐฏเฐฟเฐคเฑ‡, เฐ‡เฐ•เฑเฐ•เฐก เฐชเฐฐเฐฟเฐšเฐฏเฐ‚ เฐšเฑ‡เฐฏเฐฌเฐกเฐฟเฐจ เฐญเฐพเฐตเฐจเฐฒ เฐ—เฑเฐฐเฐฟเฐ‚เฐšเฐฟ เฐฎเฐฐเฐฟเฐ‚เฐค เฐฒเฑ‹เฐคเฑˆเฐจ เฐตเฐฟเฐตเฐฐเฐฃเฐฒ เฐ•เฑ‹เฐธเฐ‚ เฐฎเฐพ เฐŸเฑเฐฏเฑเฐŸเฑ‹เฐฐเฐฟเฐฏเฐฒเฑเฐธเฑ เฐฒเฑ‡เฐฆเฐพ [course](https://huggingface.co/course/chapter1/1)เฐจเฐฟ เฐคเฐจเฐฟเฐ–เฑ€ เฐšเฑ‡เฐฏเฐฎเฐจเฐฟ เฐฎเฑ‡เฐฎเฑ เฐธเฐฟเฐซเฐพเฐฐเฑเฐธเฑ เฐšเฑ‡เฐธเฑเฐคเฑเฐจเฑเฐจเฐพเฐฎเฑ. เฐฎเฑ€เฐฐเฑ เฐชเฑเฐฐเฐพเฐฐเฐ‚เฐญเฐฟเฐ‚เฐšเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐฎเฑเฐ‚เฐฆเฑ, เฐฎเฑ€เฐฐเฑ เฐ…เฐตเฐธเฐฐเฐฎเฑˆเฐจ เฐ…เฐจเฑเฐจเฐฟ เฐฒเฑˆเฐฌเฑเฐฐเฐฐเฑ€เฐฒเฐจเฑ เฐ‡เฐจเฑโ€ŒเฐธเฑเฐŸเฐพเฐฒเฑ เฐšเฑ‡เฐถเฐพเฐฐเฐจเฐฟ เฐจเฐฟเฐฐเฑเฐงเฐพเฐฐเฐฟเฐ‚เฐšเฑเฐ•เฑ‹เฐ‚เฐกเฐฟ: ```bash !pip install transformers datasets evaluate accelerate ``` เฐฎเฑ€เฐฐเฑ เฐฎเฑ€ เฐชเฑเฐฐเฐพเฐงเฐพเฐจเฑเฐฏ เฐฏเฐ‚เฐคเฑเฐฐ เฐ…เฐญเฑเฐฏเฐพเฐธ เฐซเฑเฐฐเฑ‡เฐฎเฑโ€Œเฐตเฐฐเฑเฐ•เฑโ€Œเฐจเฑ เฐ•เฑ‚เฐกเฐพ เฐ‡เฐจเฑโ€ŒเฐธเฑเฐŸเฐพเฐฒเฑ เฐšเฑ‡เฐฏเฐพเฐฒเฐฟ: <frameworkcontent> <pt> ```bash pip install torch ``` </pt> <tf> ```bash pip install tensorflow ``` </tf> </frameworkcontent> ## เฐชเฑˆเฐชเฑโ€Œเฐฒเฑˆเฐจเฑ <Youtube id="tiZFewofSLM"/> [`pipeline`] เฐ…เฐจเฑเฐฎเฐฟเฐคเฐฟ เฐ•เฑ‹เฐธเฐ‚ เฐฎเฑเฐ‚เฐฆเฑเฐ—เฐพ เฐถเฐฟเฐ•เฑเฐทเฐฃ เฐชเฑŠเฐ‚เฐฆเฐฟเฐจ เฐจเฐฎเฑ‚เฐจเฐพเฐจเฑ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐธเฑเฐฒเฐญเฐฎเฑˆเฐจ เฐฎเฐฐเฐฟเฐฏเฑ เฐตเฑ‡เฐ—เฐตเฐ‚เฐคเฐฎเฑˆเฐจ เฐฎเฐพเฐฐเฑเฐ—เฐ‚. 
เฐฎเฑ€เฐฐเฑ เฐตเฐฟเฐตเฐฟเฐง เฐชเฐฆเฑเฐงเฐคเฑเฐฒเฐฒเฑ‹ เฐ…เฐจเฑ‡เฐ• เฐชเฐจเฑเฐฒ เฐ•เฑ‹เฐธเฐ‚ [`pipeline`] เฐตเฑ†เฐฒเฑเฐชเฐฒ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐตเฐšเฑเฐšเฑ, เฐตเฐพเฐŸเฐฟเฐฒเฑ‹ เฐ•เฑŠเฐจเฑเฐจเฐฟ เฐ•เฑเฐฐเฐฟเฐ‚เฐฆเฐฟ เฐชเฐŸเฑเฐŸเฐฟเฐ•เฐฒเฑ‹ เฐšเฑ‚เฐชเฐฌเฐกเฑเฐกเฐพเฐฏเฐฟ: <Tip> เฐ…เฐ‚เฐฆเฑเฐฌเฐพเฐŸเฑเฐฒเฑ‹ เฐ‰เฐจเฑเฐจ เฐชเฐจเฑเฐฒ เฐชเฑ‚เฐฐเฑเฐคเฐฟ เฐœเฐพเฐฌเฐฟเฐคเฐพ เฐ•เฑ‹เฐธเฐ‚, [เฐชเฑˆเฐชเฑโ€Œเฐฒเฑˆเฐจเฑ API เฐธเฑ‚เฐšเฐจ](./main_classes/pipelines)เฐจเฐฟ เฐคเฐจเฐฟเฐ–เฑ€ เฐšเฑ‡เฐฏเฐ‚เฐกเฐฟ. </Tip> Here is the translation in Telugu: | **เฐชเฐจเฐฟ** | **เฐตเฐฟเฐตเฐฐเฐฃ** | **เฐฎเฑ‹เฐกเฐพเฐฒเฐฟเฐŸเฑ€** | **เฐชเฑˆเฐชเฑโ€Œเฐฒเฑ†เฑ–เฐจเฑ เฐเฐกเฑ†เฐ‚เฐŸเฐฟเฐซเฑˆเฐฏเฐฐเฑ** | |------------------------------|--------------------------------------------------------------------------------------------------------|-----------------|------------------------------------------| | เฐตเฐšเฐจ เฐตเฐฐเฑเฐ—เฑ€เฐ•เฐฐเฐฃเฑ | เฐ•เฑŠเฐจเฑเฐจเฐฟ เฐตเฐšเฐจเฐพเฐฒ เฐ…เฐ‚เฐคเฐพ เฐ’เฐ• เฐฒเฑ‡เฐฌเฑเฐฒเฑโ€Œเฐจเฑ เฐ•เฑŠเฐกเฐฟ | NLP | pipeline(task=โ€œsentiment-analysisโ€) | | เฐตเฐšเฐจ เฐธเฑƒเฐทเฑเฐŸเฐฟ | เฐชเฑเฐฐเฐฎเฑเฐชเฑเฐŸเฐ‚ เฐ•เฐฒเฐฟเฐ—เฐฟเฐจเฐ‚เฐค เฐตเฐšเฐจเฐ‚ เฐธเฑƒเฐทเฑเฐŸเฐฟเฐ‚เฐšเฐ‚เฐกเฐฟ | NLP | pipeline(task=โ€œtext-generationโ€) | | เฐธเฐ‚เฐ•เฑเฐทเฑ‡เฐชเฐฃ | เฐตเฐšเฐจเฐ‚ เฐฒเฑ‡เฐฆเฐพ เฐชเฐคเฑเฐฐเฐ‚ เฐ•เฑŠเฐฐเฐ•เฑ เฐธเฐ‚เฐ•เฑเฐทเฑ‡เฐชเฐฃ เฐคเฐฏเฐพเฐฐเฑเฐšเฑ‡เฐธเฐ‚เฐกเฐฟ | NLP | pipeline(task=โ€œsummarizationโ€) | | เฐšเฐฟเฐคเฑเฐฐเฐ‚ เฐตเฐฐเฑเฐ—เฑ€เฐ•เฐฐเฐฃเฑ | เฐšเฐฟเฐคเฑเฐฐเฐ‚เฐฒเฑ‹ เฐ’เฐ• เฐฒเฑ‡เฐฌเฑเฐฒเฑโ€Œเฐจเฑ เฐ•เฑŠเฐกเฐฟ | เฐ•เฐ‚เฐชเฑเฐฏเฑ‚เฐŸเฐฐเฑ เฐตเฐฟเฐทเฐฏเฐ‚ | pipeline(task=โ€œimage-classificationโ€) | | เฐšเฐฟเฐคเฑเฐฐเฐ‚ เฐตเฐฟเฐญเฐœเฐจ | เฐ’เฐ• เฐšเฐฟเฐคเฑเฐฐเฐ‚เฐฒเฑ‹ เฐชเฑเฐฐเฐคเฐฟ เฐตเฑเฐฏเฐ•เฑเฐคเฐฟเฐ—เฐค เฐชเฐฟเฐ•เฑเฐธเฐฒเฑโ€Œเฐจเฑ เฐ’เฐ• เฐฒเฑ‡เฐฌเฑเฐฒเฑโ€Œเฐ—เฐพ เฐจเฐฎเฑ‹เฐฆเฑ เฐšเฑ‡เฐฏเฐ‚เฐกเฐฟ (เฐธเฑ†เฐฎเฐพเฐ‚เฐŸเฐฟเฐ•เฑ, เฐชเฐพเฐจเฑŠเฐชเฑเฐŸเฐฟเฐ•เฑ, เฐฎเฐฐเฐฟเฐฏเฑ เฐ‡เฐจเฑเฐธเฑเฐŸเฐจเฑเฐธเฑ เฐตเฐฟเฐญเฐœเฐจเฐฒเฐจเฑ เฐฎเฐฆเฑเฐฆเฐคเฑ เฐšเฑ‡เฐธเฑเฐคเฑเฐ‚เฐฆเฐฟ) | เฐ•เฐ‚เฐชเฑเฐฏเฑ‚เฐŸเฐฐเฑ เฐตเฐฟเฐทเฐฏเฐ‚ | pipeline(task=โ€œimage-segmentationโ€) | | เฐตเฐธเฑเฐคเฑเฐฐเฐ‚ เฐ—เฑเฐฐเฑเฐคเฑเฐตเฑ | เฐ’เฐ• เฐšเฐฟเฐคเฑเฐฐเฐ‚เฐฒเฑ‹ เฐชเฐฆเฐพเฐฒ เฐฏเฑŠเฐ•เฑเฐ• เฐฌเฑŒเฐ‚เฐกเฐฟเฐ‚เฐ—เฑ เฐฌเฐพเฐ•เฑเฐธเฑโ€Œเฐฒเฐจเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐตเฐธเฑเฐคเฑเฐฐเฐพเฐฒ เฐตเฐฐเฑเฐ—เฐพเฐฒเฐจเฑ เฐ…เฐ‚เฐšเฐจเฐพ เฐšเฑ‡เฐฏเฐ‚เฐกเฐฟ | เฐ•เฐ‚เฐชเฑเฐฏเฑ‚เฐŸเฐฐเฑ เฐตเฐฟเฐทเฐฏเฐ‚ | pipeline(task=โ€œobject-detectionโ€) | | เฐ†เฐกเฐฟเฐฏเฑ‹ เฐ—เฑเฐฐเฑเฐคเฑเฐตเฑ | เฐ•เฑŠเฐจเฑเฐจเฐฟ เฐ†เฐกเฐฟเฐฏเฑ‹ เฐกเฑ‡เฐŸเฐพเฐจเฐฟเฐ•เฐฟ เฐ’เฐ• เฐฒเฑ‡เฐฌเฑเฐฒเฑโ€Œเฐจเฑ เฐ•เฑŠเฐกเฐฟ | เฐ†เฐกเฐฟเฐฏเฑ‹ | pipeline(task=โ€œaudio-classificationโ€) | | เฐธเฑเฐตเฐฏเฐ‚เฐšเฐฒเฐจ เฐชเฑเฐฐเฐธเฐ‚เฐ— เฐ—เฑเฐฐเฑเฐคเฑเฐตเฑ | เฐชเฑเฐฐเฐธเฐ‚เฐ—เฐพเฐจเฑเฐจเฐฟ เฐตเฐšเฐจเฐ‚เฐ—เฐพ เฐตเฐฐเฑเฐฃเฐฟเฐ‚เฐšเฐ‚เฐกเฐฟ | เฐ†เฐกเฐฟเฐฏเฑ‹ | pipeline(task=โ€œautomatic-speech-recognitionโ€) | | เฐฆเฑƒเฐถเฑเฐฏ เฐชเฑเฐฐเฐถเฑเฐจ เฐธเฐ‚เฐตเฐพเฐฆเฐ‚ | เฐตเฐšเฐจเฐ‚ เฐฎเฐฐเฐฟเฐฏเฑ เฐชเฑเฐฐเฐถเฑเฐจเฐจเฑ เฐจเฐฎเฑ‹เฐฆเฑ เฐšเฑ‡เฐธเฐฟเฐจ เฐšเฐฟเฐคเฑเฐฐเฐ‚เฐคเฑ‹ เฐชเฑเฐฐเฐถเฑเฐจเฐ•เฑ เฐธเฐฎเฐพเฐงเฐพเฐจเฐ‚ เฐ‡เฐตเฑเฐตเฐ‚เฐกเฐฟ | เฐฌเฐนเฑเฐฎเฑ‚เฐฒเฐฟเฐ• | pipeline(task=โ€œvqaโ€) | | เฐชเฐคเฑเฐฐเฐ‚ เฐชเฑเฐฐเฐถเฑเฐจ เฐธเฐ‚เฐตเฐพเฐฆเฐ‚ | เฐชเฑเฐฐเฐถเฑเฐจเฐจเฑ เฐชเฐคเฑเฐฐเฐ‚ เฐฒเฑ‡เฐฆเฐพ เฐกเฐพเฐ•เฑเฐฏเฑเฐฎเฑ†เฐ‚เฐŸเฑโ€Œเฐคเฑ‹ เฐธเฐฎเฐพเฐงเฐพเฐจเฐ‚ เฐ‡เฐตเฑเฐตเฐ‚เฐกเฐฟ | เฐฌเฐนเฑเฐฎเฑ‚เฐฒเฐฟเฐ• | pipeline(task="document-question-answering") | | เฐšเฐฟเฐคเฑเฐฐเฐ‚ 
เฐตเฑเฐฐเฐพเฐธเฐพเฐฏเฐฟเฐ‚เฐ—เฑ | เฐ•เฑŠเฐจเฑเฐจเฐฟ เฐšเฐฟเฐคเฑเฐฐเฐพเฐจเฐฟเฐ•เฐฟ เฐชเฐฟเฐŸเฐฟเฐฏเฐพเฐฐเฑเฐฒเฐจเฑ เฐธเฑƒเฐทเฑเฐŸเฐฟเฐ‚เฐšเฐ‚เฐกเฐฟ | เฐฌเฐนเฑเฐฎเฑ‚เฐฒเฐฟเฐ• | pipeline(task="image-to-text") | [`pipeline`] เฐฏเฑŠเฐ•เฑเฐ• เฐ‰เฐฆเฐพเฐนเฐฐเฐฃเฐจเฑ เฐธเฑƒเฐทเฑเฐŸเฐฟเฐ‚เฐšเฐกเฐ‚ เฐฆเฑเฐตเฐพเฐฐเฐพ เฐฎเฐฐเฐฟเฐฏเฑ เฐฎเฑ€เฐฐเฑ เฐฆเฐพเฐจเฐฟเฐจเฐฟ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐพเฐฒเฐจเฑเฐ•เฑเฐ‚เฐŸเฑเฐจเฑเฐจ เฐชเฐจเฐฟเฐจเฐฟ เฐชเฑ‡เฐฐเฑเฐ•เฑŠเฐจเฐกเฐ‚ เฐฆเฑเฐตเฐพเฐฐเฐพ เฐชเฑเฐฐเฐพเฐฐเฐ‚เฐญเฐฟเฐ‚เฐšเฐ‚เฐกเฐฟ. เฐˆ เฐ—เฑˆเฐกเฑโ€Œเฐฒเฑ‹, เฐฎเฑ€เฐฐเฑ เฐธเฑ†เฐ‚เฐŸเฐฟเฐฎเฑ†เฐ‚เฐŸเฑ เฐตเฐฟเฐถเฑเฐฒเฑ‡เฐทเฐฃ เฐ•เฑ‹เฐธเฐ‚ [`pipeline`]เฐจเฐฟ เฐ‰เฐฆเฐพเฐนเฐฐเฐฃเฐ—เฐพ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐธเฑเฐคเฐพเฐฐเฑ: ```py >>> from transformers import pipeline >>> classifier = pipeline("sentiment-analysis") ``` เฐธเฑ†เฐ‚เฐŸเฐฟเฐฎเฑ†เฐ‚เฐŸเฑ เฐตเฐฟเฐถเฑเฐฒเฑ‡เฐทเฐฃ เฐ•เฑ‹เฐธเฐ‚ [`pipeline`] เฐกเฐฟเฐซเฐพเฐฒเฑเฐŸเฑ [เฐชเฑเฐฐเฑ€เฐŸเฑเฐฐเฑˆเฐจเฑเฐกเฑ เฐฎเฑ‹เฐกเฐฒเฑ](https://huggingface.co/distilbert/distilbert-base-uncased-finetuned-sst-2-english) เฐฎเฐฐเฐฟเฐฏเฑ เฐŸเฑ‹เฐ•เฑ†เฐจเฑˆเฐœเฐฐเฑโ€Œเฐจเฐฟ เฐกเฑŒเฐจเฑโ€Œเฐฒเฑ‹เฐกเฑ เฐšเฑ‡เฐธเฑเฐคเฑเฐ‚เฐฆเฐฟ เฐฎเฐฐเฐฟเฐฏเฑ เฐ•เฐพเฐทเฑ เฐšเฑ‡เฐธเฑเฐคเฑเฐ‚เฐฆเฐฟ. เฐ‡เฐชเฑเฐชเฑเฐกเฑ เฐฎเฑ€เฐฐเฑ เฐฎเฑ€ เฐฒเฐ•เฑเฐทเฑเฐฏ เฐตเฐšเฐจเฐ‚เฐฒเฑ‹ `classifier`เฐจเฐฟ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐตเฐšเฑเฐšเฑ: ```py >>> classifier("We are very happy to show you the ๐Ÿค— Transformers library.") [{'label': 'POSITIVE', 'score': 0.9998}] ``` เฐฎเฑ€เฐฐเฑ เฐ’เฐ•เฐŸเฐฟ เฐ•เฐ‚เฐŸเฑ‡ เฐŽเฐ•เฑเฐ•เฑเฐต เฐ‡เฐจเฑโ€ŒเฐชเฑเฐŸเฑโ€Œเฐฒเฐจเฑ เฐ•เฐฒเฐฟเฐ—เฐฟ เฐ‰เฐ‚เฐŸเฑ‡, เฐจเฐฟเฐ˜เฐ‚เฐŸเฑเฐตเฑเฐฒ เฐœเฐพเฐฌเฐฟเฐคเฐพเฐจเฑ เฐ…เฐ‚เฐฆเฐฟเฐ‚เฐšเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐฎเฑ€ เฐ‡เฐจเฑโ€ŒเฐชเฑเฐŸเฑโ€Œเฐฒเฐจเฑ เฐœเฐพเฐฌเฐฟเฐคเฐพเฐ—เฐพ [`pipeline`]เฐ•เฐฟ เฐชเฐ‚เฐชเฐ‚เฐกเฐฟ: ```py >>> results = classifier(["We are very happy to show you the ๐Ÿค— Transformers library.", "We hope you don't hate it."]) >>> for result in results: ... print(f"label: {result['label']}, with score: {round(result['score'], 4)}") label: POSITIVE, with score: 0.9998 label: NEGATIVE, with score: 0.5309 ``` [`pipeline`] เฐฎเฑ€เฐ•เฑ เฐจเฐšเฑเฐšเฐฟเฐจ เฐเฐฆเฑˆเฐจเฐพ เฐชเฐจเฐฟ เฐ•เฑ‹เฐธเฐ‚ เฐฎเฑŠเฐคเฑเฐคเฐ‚ เฐกเฑ‡เฐŸเฐพเฐธเฑ†เฐŸเฑโ€Œเฐจเฑ เฐ•เฑ‚เฐกเฐพ เฐชเฑเฐจเฐฐเฐพเฐตเฑƒเฐคเฐ‚ เฐšเฑ‡เฐฏเฐ—เฐฒเฐฆเฑ. เฐˆ เฐ‰เฐฆเฐพเฐนเฐฐเฐฃ เฐ•เฑ‹เฐธเฐ‚, เฐธเฑเฐตเฐฏเฐ‚เฐšเฐพเฐฒเฐ• เฐชเฑเฐฐเฐธเฐ‚เฐ— เฐ—เฑเฐฐเฑเฐคเฐฟเฐ‚เฐชเฑเฐจเฑ เฐฎเฐจ เฐชเฐจเฐฟเฐ—เฐพ เฐŽเฐ‚เฐšเฑเฐ•เฑเฐ‚เฐฆเฐพเฐ‚: ```py >>> import torch >>> from transformers import pipeline >>> speech_recognizer = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h") ``` เฐฎเฑ€เฐฐเฑ เฐฎเฐณเฑเฐฒเฑ€ เฐฎเฐณเฑเฐฒเฑ€ เฐšเฑ†เฐชเฑเฐชเฐพเฐฒเฐจเฑเฐ•เฑเฐ‚เฐŸเฑเฐจเฑเฐจ เฐ†เฐกเฐฟเฐฏเฑ‹ เฐกเฑ‡เฐŸเฐพเฐธเฑ†เฐŸเฑโ€Œเฐจเฑ เฐฒเฑ‹เฐกเฑ เฐšเฑ‡เฐฏเฐ‚เฐกเฐฟ (เฐฎเฐฐเฐฟเฐจเฑเฐจเฐฟ เฐตเฐฟเฐตเฐฐเฐพเฐฒ เฐ•เฑ‹เฐธเฐ‚ ๐Ÿค— เฐกเฑ‡เฐŸเฐพเฐธเฑ†เฐŸเฑโ€Œเฐฒเฑ [เฐคเฑเฐตเฐฐเฐฟเฐค เฐชเฑเฐฐเฐพเฐฐเฐ‚เฐญเฐ‚](https://huggingface.co/docs/datasets/quickstart#audio) เฐšเฑ‚เฐกเฐ‚เฐกเฐฟ. 
เฐ‰เฐฆเฐพเฐนเฐฐเฐฃเฐ•เฑ, [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) เฐกเฑ‡เฐŸเฐพเฐธเฑ†เฐŸเฑโ€Œเฐจเฑ เฐฒเฑ‹เฐกเฑ เฐšเฑ‡เฐฏเฐ‚เฐกเฐฟ: ```py >>> from datasets import load_dataset, Audio >>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train") # doctest: +IGNORE_RESULT ``` เฐกเฑ‡เฐŸเฐพเฐธเฑ†เฐŸเฑ เฐฏเฑŠเฐ•เฑเฐ• เฐจเฐฎเฑ‚เฐจเฐพ เฐฐเฑ‡เฐŸเฑ เฐจเฐฎเฑ‚เฐจเฐพเฐคเฑ‹ เฐธเฐฐเฐฟเฐชเฑ‹เฐฒเฑเฐคเฑเฐ‚เฐฆเฐจเฐฟ เฐฎเฑ€เฐฐเฑ เฐจเฐฟเฐฐเฑเฐงเฐพเฐฐเฐฟเฐ‚เฐšเฑเฐ•เฑ‹เฐตเฐพเฐฒเฐฟ เฐฐเฑ‡เฐŸเฑ [`facebook/wav2vec2-base-960h`](https://huggingface.co/facebook/wav2vec2-base-960h) เฐฆเฑ€เฐจเฐฟเฐชเฑˆ เฐถเฐฟเฐ•เฑเฐทเฐฃ เฐชเฑŠเฐ‚เฐฆเฐฟเฐ‚เฐฆเฐฟ: ```py >>> dataset = dataset.cast_column("audio", Audio(sampling_rate=speech_recognizer.feature_extractor.sampling_rate)) ``` `"เฐ†เฐกเฐฟเฐฏเฑ‹"` เฐ•เฐพเฐฒเฐฎเฑโ€Œเฐ•เฐฟ เฐ•เฐพเฐฒเฑ เฐšเฑ‡เฐธเฑเฐคเฑเฐจเฑเฐจเฐชเฑเฐชเฑเฐกเฑ เฐ†เฐกเฐฟเฐฏเฑ‹ เฐซเฑˆเฐฒเฑโ€Œเฐฒเฑ เฐธเฑเฐตเฐฏเฐ‚เฐšเฐพเฐฒเฐ•เฐ‚เฐ—เฐพ เฐฒเฑ‹เฐกเฑ เฐšเฑ‡เฐฏเฐฌเฐกเฐคเฐพเฐฏเฐฟ เฐฎเฐฐเฐฟเฐฏเฑ เฐฎเฐณเฑเฐฒเฑ€ เฐจเฐฎเฑ‚เฐจเฐพ เฐšเฑ‡เฐฏเฐฌเฐกเฐคเฐพเฐฏเฐฟ. เฐฎเฑŠเฐฆเฐŸเฐฟ 4 เฐจเฐฎเฑ‚เฐจเฐพเฐฒ เฐจเฑเฐ‚เฐกเฐฟ เฐฎเฑเฐกเฐฟ เฐตเฑ‡เฐตเฑโ€Œเฐซเฐพเฐฐเฑเฐฎเฑ เฐถเฑเฐฐเฑ‡เฐฃเฑเฐฒเฐจเฑ เฐธเฐ‚เฐ—เฑเฐฐเฐนเฐฟเฐ‚เฐšเฐฟ, เฐชเฑˆเฐชเฑโ€Œเฐฒเฑˆเฐจเฑโ€Œเฐ•เฑ เฐœเฐพเฐฌเฐฟเฐคเฐพเฐ—เฐพ เฐชเฐพเฐธเฑ เฐšเฑ‡เฐฏเฐ‚เฐกเฐฟ: ```py >>> result = speech_recognizer(dataset[:4]["audio"]) >>> print([d["text"] for d in result]) ['I WOULD LIKE TO SET UP A JOINT ACCOUNT WITH MY PARTNER HOW DO I PROCEED WITH DOING THAT', "FONDERING HOW I'D SET UP A JOIN TO HELL T WITH MY WIFE AND WHERE THE AP MIGHT BE", "I I'D LIKE TOY SET UP A JOINT ACCOUNT WITH MY PARTNER I'M NOT SEEING THE OPTION TO DO IT ON THE APSO I CALLED IN TO GET SOME HELP CAN I JUST DO IT OVER THE PHONE WITH YOU AND GIVE YOU THE INFORMATION OR SHOULD I DO IT IN THE AP AN I'M MISSING SOMETHING UQUETTE HAD PREFERRED TO JUST DO IT OVER THE PHONE OF POSSIBLE THINGS", 'HOW DO I FURN A JOINA COUT'] ``` เฐ‡เฐจเฑโ€ŒเฐชเฑเฐŸเฑโ€Œเฐฒเฑ เฐชเฑ†เฐฆเฑเฐฆเฐ—เฐพ เฐ‰เฐจเฑเฐจ เฐชเฑ†เฐฆเฑเฐฆ เฐกเฑ‡เฐŸเฐพเฐธเฑ†เฐŸเฑโ€Œเฐฒ เฐ•เฑ‹เฐธเฐ‚ (เฐธเฑเฐชเฑ€เฐšเฑ เฐฒเฑ‡เฐฆเฐพ เฐตเฐฟเฐœเฐจเฑ เฐตเฐ‚เฐŸเฐฟเฐตเฐฟ), เฐฎเฑ†เฐฎเฐฐเฑ€เฐฒเฑ‹เฐจเฐฟ เฐ…เฐจเฑเฐจเฐฟ เฐ‡เฐจเฑโ€ŒเฐชเฑเฐŸเฑโ€Œเฐฒเฐจเฑ เฐฒเฑ‹เฐกเฑ เฐšเฑ‡เฐฏเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐฎเฑ€เฐฐเฑ เฐœเฐพเฐฌเฐฟเฐคเฐพเฐ•เฑ เฐฌเฐฆเฑเฐฒเฑเฐ—เฐพ เฐœเฑ†เฐจเฐฐเฑ‡เฐŸเฐฐเฑโ€Œเฐจเฑ เฐชเฐพเฐธเฑ เฐšเฑ‡เฐฏเฐพเฐฒเฐจเฑเฐ•เฑเฐ‚เฐŸเฑเฐจเฑเฐจเฐพเฐฐเฑ. เฐฎเฐฐเฐฟเฐ‚เฐค เฐธเฐฎเฐพเฐšเฐพเฐฐเฐ‚ เฐ•เฑ‹เฐธเฐ‚ [เฐชเฑˆเฐชเฑโ€Œเฐฒเฑˆเฐจเฑ API เฐธเฑ‚เฐšเฐจ](./main_classes/pipelines)เฐจเฐฟ เฐšเฑ‚เฐกเฐ‚เฐกเฐฟ. ### เฐชเฑˆเฐชเฑโ€Œเฐฒเฑˆเฐจเฑโ€Œเฐฒเฑ‹ เฐฎเฐฐเฑŠเฐ• เฐฎเฑ‹เฐกเฐฒเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐŸเฑ‹เฐ•เฑ†เฐจเฑˆเฐœเฐฐเฑโ€Œเฐจเฐฟ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐ‚เฐกเฐฟ [`pipeline`] [Hub](https://huggingface.co/models) เฐจเฑเฐ‚เฐกเฐฟ เฐเฐฆเฑˆเฐจเฐพ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐจเฑ เฐ•เฐฒเฐฟเฐ—เฐฟ เฐ‰เฐ‚เฐŸเฑเฐ‚เฐฆเฐฟ, เฐฆเฑ€เฐจเฐฟ เฐตเฐฒเฐจ เฐ‡เฐคเฐฐ เฐตเฐฟเฐจเฐฟเฐฏเฑ‹เฐ—-เฐ•เฑ‡เฐธเฑเฐฒ เฐ•เฑ‹เฐธเฐ‚ [`pipeline`]เฐจเฐฟ เฐธเฑเฐฒเฐญเฐ‚เฐ—เฐพ เฐธเฑเฐตเฑ€เฐ•เฐฐเฐฟเฐ‚เฐšเฐตเฐšเฑเฐšเฑ. เฐ‰เฐฆเฐพเฐนเฐฐเฐฃเฐ•เฑ, เฐฎเฑ€เฐฐเฑ เฐซเฑเฐฐเฑ†เฐ‚เฐšเฑ เฐŸเฑ†เฐ•เฑเฐธเฑเฐŸเฑโ€Œเฐจเฑ เฐนเฑเฐฏเฐพเฐ‚เฐกเฐฟเฐฒเฑ เฐšเฑ‡เฐฏเฐ—เฐฒ เฐฎเฑ‹เฐกเฐฒเฑ เฐ•เฐพเฐตเฐพเฐฒเฐจเฑเฐ•เฑเฐ‚เฐŸเฑ‡, เฐคเฐ—เฐฟเฐจ เฐฎเฑ‹เฐกเฐฒเฑ เฐ•เฑ‹เฐธเฐ‚ เฐซเฐฟเฐฒเฑเฐŸเฐฐเฑ เฐšเฑ‡เฐฏเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐนเฐฌเฑโ€Œเฐฒเฑ‹เฐจเฐฟ เฐŸเฑเฐฏเฐพเฐ—เฑโ€Œเฐฒเฐจเฑ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐ‚เฐกเฐฟ. 
เฐ…เฐ—เฑเฐฐ เฐซเฐฟเฐฒเฑเฐŸเฐฐเฑ เฐšเฑ‡เฐธเฐฟเฐจ เฐซเฐฒเฐฟเฐคเฐ‚ เฐฎเฑ€เฐฐเฑ เฐซเฑเฐฐเฑ†เฐ‚เฐšเฑ เฐŸเฑ†เฐ•เฑเฐธเฑเฐŸเฑ เฐ•เฑ‹เฐธเฐ‚ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐ—เฐฒ เฐธเฑ†เฐ‚เฐŸเฐฟเฐฎเฑ†เฐ‚เฐŸเฑ เฐตเฐฟเฐถเฑเฐฒเฑ‡เฐทเฐฃ เฐ•เฑ‹เฐธเฐ‚ เฐซเฑˆเฐจเฑโ€ŒเฐŸเฑเฐฏเฑ‚เฐจเฑ เฐšเฑ‡เฐฏเฐฌเฐกเฐฟเฐจ เฐฌเฐนเฑเฐญเฐพเฐทเฐพ [BERT เฐฎเฑ‹เฐกเฐฒเฑ](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment)เฐจเฐฟ เฐ…เฐ‚เฐฆเฐฟเฐธเฑเฐคเฑเฐ‚เฐฆเฐฟ: ```py >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" ``` <frameworkcontent> <pt> เฐฎเฑเฐ‚เฐฆเฑเฐ—เฐพ เฐถเฐฟเฐ•เฑเฐทเฐฃ เฐชเฑŠเฐ‚เฐฆเฐฟเฐจ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐจเฑ เฐฒเฑ‹เฐกเฑ เฐšเฑ‡เฐฏเฐกเฐพเฐจเฐฟเฐ•เฐฟ [`AutoModelForSequenceClassification`] เฐฎเฐฐเฐฟเฐฏเฑ [`AutoTokenizer`]เฐจเฐฟ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐ‚เฐกเฐฟ เฐฎเฐฐเฐฟเฐฏเฑ เฐฆเฐพเฐจเฐฟ เฐ…เฐจเฑเฐฌเฐ‚เฐงเฐฟเฐค เฐŸเฑ‹เฐ•เฑ†เฐจเฑˆเฐœเฐฐเฑ (เฐคเฐฆเฑเฐชเฐฐเฐฟ เฐตเฐฟเฐญเฐพเฐ—เฐ‚เฐฒเฑ‹ `AutoClass`เฐชเฑˆ เฐฎเฐฐเฐฟเฐจเฑเฐจเฐฟ): ```py >>> from transformers import AutoTokenizer, AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained(model_name) >>> tokenizer = AutoTokenizer.from_pretrained(model_name) ``` </pt> <tf> เฐฎเฑเฐ‚เฐฆเฑเฐ—เฐพ เฐถเฐฟเฐ•เฑเฐทเฐฃ เฐชเฑŠเฐ‚เฐฆเฐฟเฐจ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐจเฑ เฐฒเฑ‹เฐกเฑ เฐšเฑ‡เฐฏเฐกเฐพเฐจเฐฟเฐ•เฐฟ [`TFAutoModelForSequenceClassification`] เฐฎเฐฐเฐฟเฐฏเฑ [`AutoTokenizer`]เฐจเฐฟ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐ‚เฐกเฐฟ เฐฎเฐฐเฐฟเฐฏเฑ เฐฆเฐพเฐจเฐฟ เฐ…เฐจเฑเฐฌเฐ‚เฐงเฐฟเฐค เฐŸเฑ‹เฐ•เฑ†เฐจเฑˆเฐœเฐฐเฑ (เฐคเฐฆเฑเฐชเฐฐเฐฟ เฐตเฐฟเฐญเฐพเฐ—เฐ‚เฐฒเฑ‹ `TFAutoClass`เฐชเฑˆ เฐฎเฐฐเฐฟเฐจเฑเฐจเฐฟ): ```py >>> from transformers import AutoTokenizer, TFAutoModelForSequenceClassification >>> model = TFAutoModelForSequenceClassification.from_pretrained(model_name) >>> tokenizer = AutoTokenizer.from_pretrained(model_name) ``` </tf> </frameworkcontent> [`pipeline`]เฐฒเฑ‹ เฐฎเฑ‹เฐกเฐฒเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐŸเฑ‹เฐ•เฑ†เฐจเฑˆเฐœเฐฐเฑโ€Œเฐจเฑ เฐชเฑ‡เฐฐเฑเฐ•เฑŠเฐจเฐ‚เฐกเฐฟ เฐฎเฐฐเฐฟเฐฏเฑ เฐ‡เฐชเฑเฐชเฑเฐกเฑ เฐฎเฑ€เฐฐเฑ เฐซเฑเฐฐเฑ†เฐ‚เฐšเฑ เฐŸเฑ†เฐ•เฑเฐธเฑเฐŸเฑโ€Œเฐชเฑˆ `เฐ•เฑเฐฒเฐพเฐธเฐฟเฐซเฑˆเฐฏเฐฐเฑ`เฐจเฐฟ เฐตเฐฐเฑเฐคเฐฟเฐ‚เฐชเฐœเฑ‡เฐฏเฐตเฐšเฑเฐšเฑ: ```py >>> classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer) >>> classifier("Nous sommes trรจs heureux de vous prรฉsenter la bibliothรจque ๐Ÿค— Transformers.") [{'label': '5 stars', 'score': 0.7273}] ``` เฐฎเฑ€เฐฐเฑ เฐฎเฑ€ เฐตเฐฟเฐจเฐฟเฐฏเฑ‹เฐ—-เฐ•เฑ‡เฐธเฑ เฐ•เฑ‹เฐธเฐ‚ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐจเฑ เฐ•เฐจเฑเฐ—เฑŠเฐจเฐฒเฑ‡เฐ•เฐชเฑ‹เฐคเฑ‡, เฐฎเฑ€เฐฐเฑ เฐฎเฑ€ เฐกเฑ‡เฐŸเฐพเฐชเฑˆ เฐฎเฑเฐ‚เฐฆเฑเฐ—เฐพ เฐถเฐฟเฐ•เฑเฐทเฐฃ เฐชเฑŠเฐ‚เฐฆเฐฟเฐจ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐจเฑ เฐšเฐ•เฑเฐ•เฐ—เฐพ เฐฎเฐพเฐฐเฑเฐšเฐพเฐฒเฐฟ. เฐŽเฐฒเฐพเฐ—เฑ‹ เฐคเฑ†เฐฒเฑเฐธเฑเฐ•เฑ‹เฐตเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐฎเฐพ [เฐซเฑˆเฐจเฑโ€ŒเฐŸเฑเฐฏเฑ‚เฐจเฐฟเฐ‚เฐ—เฑ เฐŸเฑเฐฏเฑเฐŸเฑ‹เฐฐเฐฟเฐฏเฐฒเฑ](./training)เฐจเฐฟ เฐšเฑ‚เฐกเฐ‚เฐกเฐฟ. เฐšเฐฟเฐตเฐฐเฐ—เฐพ, เฐฎเฑ€เฐฐเฑ เฐฎเฑ€ เฐชเฑเฐฐเฑ€เฐŸเฑเฐฐเฑˆเฐจเฑเฐกเฑ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐจเฐฟ เฐซเฑˆเฐจเฑโ€ŒเฐŸเฑเฐฏเฑ‚เฐจเฑ เฐšเฑ‡เฐธเฐฟเฐจ เฐคเฐฐเฑเฐตเฐพเฐค, เฐฆเฐฏเฐšเฑ‡เฐธเฐฟ เฐ…เฐ‚เฐฆเฐฐเฐฟ เฐ•เฑ‹เฐธเฐ‚ เฐฎเฑ†เฐทเฐฟเฐจเฑ เฐฒเฑ†เฐฐเฑเฐจเฐฟเฐ‚เฐ—เฑโ€Œเฐจเฐฟ เฐกเฑ†เฐฎเฑ‹เฐ•เฑเฐฐเฐŸเฑˆเฐœเฑ เฐšเฑ‡เฐฏเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐนเฐฌเฑโ€Œเฐฒเฑ‹เฐจเฐฟ เฐธเฐ‚เฐ˜เฐ‚เฐคเฑ‹ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐจเฑ [เฐทเฑ‡เฐฐเฐฟเฐ‚เฐ—เฑ](./model_sharing) เฐชเฐฐเฐฟเฐ—เฐฃเฐฟเฐ‚เฐšเฐ‚เฐกเฐฟ! 
๐Ÿค— ## AutoClass <Youtube id="AhChOFRegn4"/> เฐนเฑเฐกเฑ เฐ•เฐฟเฐ‚เฐฆ, เฐฎเฑ€เฐฐเฑ เฐชเฑˆเฐจ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐฟเฐจ [`pipeline`]เฐ•เฐฟ เฐถเฐ•เฑเฐคเฐฟเฐจเฐฟ เฐ…เฐ‚เฐฆเฐฟเฐ‚เฐšเฐกเฐพเฐจเฐฟเฐ•เฐฟ [`AutoModelForSequenceClassification`] เฐฎเฐฐเฐฟเฐฏเฑ [`AutoTokenizer`] เฐคเฐฐเฐ—เฐคเฑเฐฒเฑ เฐ•เฐฒเฐฟเฐธเฐฟ เฐชเฐจเฐฟ เฐšเฑ‡เฐธเฑเฐคเฐพเฐฏเฐฟ. เฐ’เฐ• [AutoClass](./model_doc/auto) เฐ…เฐจเฑ‡เฐฆเฐฟ เฐฎเฑเฐ‚เฐฆเฑเฐ—เฐพ เฐถเฐฟเฐ•เฑเฐทเฐฃ เฐชเฑŠเฐ‚เฐฆเฐฟเฐจ เฐฎเฑ‹เฐกเฐฒเฑ เฐฏเฑŠเฐ•เฑเฐ• เฐ†เฐฐเฑเฐ•เฐฟเฐŸเฑ†เฐ•เฑเฐšเฐฐเฑโ€Œเฐจเฑ เฐฆเฐพเฐจเฐฟ เฐชเฑ‡เฐฐเฑ เฐฒเฑ‡เฐฆเฐพ เฐฎเฐพเฐฐเฑเฐ—เฐ‚ เฐจเฑเฐ‚เฐกเฐฟ เฐธเฑเฐตเฐฏเฐ‚เฐšเฐพเฐฒเฐ•เฐ‚เฐ—เฐพ เฐคเฐฟเฐฐเฐฟเฐ—เฐฟ เฐชเฑŠเฐ‚เฐฆเฑ‡ เฐธเฐคเฑเฐตเฐฐเฐฎเฐพเฐฐเฑเฐ—เฐ‚. เฐฎเฑ€เฐฐเฑ เฐฎเฑ€ เฐŸเฐพเฐธเฑเฐ•เฑ เฐ•เฑ‹เฐธเฐ‚ เฐคเฐ—เฐฟเฐจ `เฐ†เฐŸเฑ‹เฐ•เฑเฐฒเฐพเฐธเฑ`เฐจเฐฟ เฐฎเฐพเฐคเฑเฐฐเฐฎเฑ‡ เฐŽเฐ‚เฐšเฑเฐ•เฑ‹เฐตเฐพเฐฒเฐฟ เฐฎเฐฐเฐฟเฐฏเฑ เฐ‡เฐฆเฐฟ เฐ…เฐจเฑเฐฌเฐ‚เฐงเฐฟเฐค เฐชเฑเฐฐเฑ€เฐชเฑเฐฐเฐพเฐธเฑ†เฐธเฐฟเฐ‚เฐ—เฑ เฐ•เฑเฐฒเฐพเฐธเฑ. เฐฎเฑเฐจเฑเฐชเฐŸเฐฟ เฐตเฐฟเฐญเฐพเฐ—เฐ‚ เฐจเฑเฐ‚เฐกเฐฟ เฐ‰เฐฆเฐพเฐนเฐฐเฐฃเฐ•เฐฟ เฐคเฐฟเฐฐเฐฟเฐ—เฐฟ เฐตเฑ†เฐณเฑเฐฒเฐฟ, [`pipeline`] เฐซเฐฒเฐฟเฐคเฐพเฐฒเฐจเฑ เฐชเฑเฐฐเฐคเฐฟเฐฌเฐฟเฐ‚เฐฌเฐฟเฐ‚เฐšเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐฎเฑ€เฐฐเฑ `เฐ†เฐŸเฑ‹เฐ•เฑเฐฒเฐพเฐธเฑ`เฐจเฐฟ เฐŽเฐฒเฐพ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐตเฐšเฑเฐšเฑ‹ เฐšเฑ‚เฐฆเฑเฐฆเฐพเฐ‚. ### AutoTokenizer เฐ’เฐ• เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐ•เฑ เฐ‡เฐจเฑโ€ŒเฐชเฑเฐŸเฑโ€Œเฐฒเฑเฐ—เฐพ เฐธเฐ‚เฐ–เฑเฐฏเฐฒ เฐถเฑเฐฐเฑ‡เฐฃเฐฟเฐฒเฑ‹ เฐตเฐšเฐจเฐพเฐจเฑเฐจเฐฟ เฐชเฑเฐฐเฑ€เฐชเฑเฐฐเฐพเฐธเฑ†เฐธเฐฟเฐ‚เฐ—เฑ เฐšเฑ‡เฐฏเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐŸเฑ‹เฐ•เฑ†เฐจเฑˆเฐœเฐฐเฑ เฐฌเฐพเฐงเฑเฐฏเฐค เฐตเฐนเฐฟเฐธเฑเฐคเฑเฐ‚เฐฆเฐฟ. เฐชเฐฆเฐพเฐจเฑเฐจเฐฟ เฐŽเฐฒเฐพ เฐตเฐฟเฐญเฐœเฐฟเฐ‚เฐšเฐพเฐฒเฐฟ เฐฎเฐฐเฐฟเฐฏเฑ เฐ เฐธเฑเฐฅเฐพเฐฏเฐฟเฐฒเฑ‹ เฐชเฐฆเฐพเฐฒเฐจเฑ เฐตเฐฟเฐญเฐœเฐฟเฐ‚เฐšเฐพเฐฒเฐฟ ([tokenizer เฐธเฐพเฐฐเฐพเฐ‚เฐถเฐ‚](./tokenizer_summary)เฐฒเฑ‹ เฐŸเฑ‹เฐ•เฐจเฑˆเฐœเฑ‡เฐทเฐจเฑ เฐ—เฑเฐฐเฐฟเฐ‚เฐšเฐฟ เฐฎเฐฐเฐฟเฐ‚เฐค เฐคเฑ†เฐฒเฑเฐธเฑเฐ•เฑ‹เฐ‚เฐกเฐฟ) เฐธเฐนเฐพ เฐŸเฑ‹เฐ•เฐจเฑˆเฐœเฑ‡เฐทเฐจเฑ เฐชเฑเฐฐเฐ•เฑเฐฐเฐฟเฐฏเฐจเฑ เฐจเฐฟเฐฏเฐ‚เฐคเฑเฐฐเฐฟเฐ‚เฐšเฑ‡ เฐ…เฐจเฑ‡เฐ• เฐจเฐฟเฐฏเฐฎเฐพเฐฒเฑ เฐ‰เฐจเฑเฐจเฐพเฐฏเฐฟ. เฐ—เฑเฐฐเฑเฐคเฑเฐ‚เฐšเฑเฐ•เฑ‹เฐตเฐฒเฐธเฐฟเฐจ เฐฎเฑเฐ–เฑเฐฏเฐฎเฑˆเฐจ เฐตเฐฟเฐทเฐฏเฐ‚ เฐเฐฎเฐฟเฐŸเฐ‚เฐŸเฑ‡, เฐฎเฑ€เฐฐเฑ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐ•เฑ เฐฎเฑเฐ‚เฐฆเฑ‡ เฐถเฐฟเฐ•เฑเฐทเฐฃ เฐชเฑŠเฐ‚เฐฆเฐฟเฐจ เฐ…เฐฆเฑ‡ เฐŸเฑ‹เฐ•เฐจเฑˆเฐœเฑ‡เฐทเฐจเฑ เฐจเฐฟเฐฏเฐฎเฐพเฐฒเฐจเฑ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐธเฑเฐคเฑเฐจเฑเฐจเฐพเฐฐเฐจเฐฟ เฐจเฐฟเฐฐเฑเฐงเฐพเฐฐเฐฟเฐ‚เฐšเฑเฐ•เฑ‹เฐตเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐฎเฑ€เฐฐเฑ เฐ…เฐฆเฑ‡ เฐฎเฑ‹เฐกเฐฒเฑ เฐชเฑ‡เฐฐเฑเฐคเฑ‹ เฐŸเฑ‹เฐ•เฑ†เฐจเฑˆเฐœเฐฐเฑโ€Œเฐจเฑ เฐคเฐ•เฑเฐทเฐฃเฐ‚ เฐšเฑ‡เฐฏเฐพเฐฒเฐฟ. 
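Before loading anything, it can help to see what those tokenization rules actually do to raw text. A small illustration using the same `nlptown/bert-base-multilingual-uncased-sentiment` checkpoint as the rest of this guide; the exact subword split depends on that checkpoint's vocabulary:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("nlptown/bert-base-multilingual-uncased-sentiment")
>>> # Words that are not in the vocabulary as a whole get broken into smaller "##"-prefixed pieces
>>> tokenizer.tokenize("Tokenizers handle unseen words gracefully.")
```

Calling the tokenizer directly, as shown next, applies the same rules and additionally maps every piece to its id.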
[`AutoTokenizer`]เฐคเฑ‹ เฐŸเฑ‹เฐ•เฑ†เฐจเฑˆเฐœเฐฐเฑโ€Œเฐจเฑ เฐฒเฑ‹เฐกเฑ เฐšเฑ‡เฐฏเฐ‚เฐกเฐฟ: ```py >>> from transformers import AutoTokenizer >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" >>> tokenizer = AutoTokenizer.from_pretrained(model_name) ``` เฐฎเฑ€ เฐตเฐšเฐจเฐพเฐจเฑเฐจเฐฟ เฐŸเฑ‹เฐ•เฑ†เฐจเฑˆเฐœเฐฐเฑโ€Œเฐ•เฑ เฐชเฐ‚เฐชเฐ‚เฐกเฐฟ: ```py >>> encoding = tokenizer("We are very happy to show you the ๐Ÿค— Transformers library.") >>> print(encoding) {'input_ids': [101, 11312, 10320, 12495, 19308, 10114, 11391, 10855, 10103, 100, 58263, 13299, 119, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} ``` เฐŸเฑ‹เฐ•เฑ†เฐจเฑˆเฐœเฐฐเฑ เฐตเฑ€เฐŸเฐฟเฐจเฐฟ เฐ•เฐฒเฐฟเฐ—เฐฟ เฐ‰เฐจเฑเฐจ เฐจเฐฟเฐ˜เฐ‚เฐŸเฑเฐตเฑเฐจเฐฟ เฐ…เฐ‚เฐฆเฐฟเฐธเฑเฐคเฑเฐ‚เฐฆเฐฟ: * [input_ids](./glossary#input-ids): เฐฎเฑ€ เฐŸเฑ‹เฐ•เฑ†เฐจเฑโ€Œเฐฒ เฐธเฐ‚เฐ–เฑเฐฏเฐพเฐชเฐฐเฐฎเฑˆเฐจ เฐชเฑเฐฐเฐพเฐคเฐฟเฐจเฐฟเฐงเฑเฐฏเฐ‚. * [เฐ…เฐŸเฑ†เฐจเฑเฐทเฐจเฑ_เฐฎเฐพเฐธเฑเฐ•เฑ](./glossary#attention-mask): เฐ เฐŸเฑ‹เฐ•เฑ†เฐจเฑโ€Œเฐฒเฐ•เฑ เฐนเฐพเฐœเฐฐเฑ เฐ•เฐพเฐตเฐพเฐฒเฑ‹ เฐธเฑ‚เฐšเฐฟเฐธเฑเฐคเฑเฐ‚เฐฆเฐฟ. เฐ’เฐ• เฐŸเฑ‹เฐ•เฑ†เฐจเฑˆเฐœเฐฐเฑ เฐ‡เฐจเฑโ€ŒเฐชเฑเฐŸเฑโ€Œเฐฒ เฐœเฐพเฐฌเฐฟเฐคเฐพเฐจเฑ เฐ•เฑ‚เฐกเฐพ เฐ†เฐฎเฑ‹เฐฆเฐฟเฐ‚เฐšเฐ—เฐฒเฐฆเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐเฐ•เฐฐเฑ€เฐคเฐฟ เฐชเฑŠเฐกเฐตเฑเฐคเฑ‹ เฐฌเฑเฐฏเฐพเฐšเฑโ€Œเฐจเฑ เฐคเฐฟเฐฐเฐฟเฐ—เฐฟ เฐ‡เฐตเฑเฐตเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐŸเฑ†เฐ•เฑเฐธเฑเฐŸเฑโ€Œเฐจเฑ เฐชเฑเฐฏเฐพเฐกเฑ เฐšเฑ‡เฐธเฐฟ เฐ•เฐคเฑเฐคเฐฟเฐฐเฐฟเฐ‚เฐšเฐตเฐšเฑเฐšเฑ: <frameworkcontent> <pt> ```py >>> pt_batch = tokenizer( ... ["We are very happy to show you the ๐Ÿค— Transformers library.", "We hope you don't hate it."], ... padding=True, ... truncation=True, ... max_length=512, ... return_tensors="pt", ... ) ``` </pt> <tf> ```py >>> tf_batch = tokenizer( ... ["We are very happy to show you the ๐Ÿค— Transformers library.", "We hope you don't hate it."], ... padding=True, ... truncation=True, ... max_length=512, ... return_tensors="tf", ... ) ``` </tf> </frameworkcontent> <Tip> เฐŸเฑ‹เฐ•เฐจเฑˆเฐœเฑ‡เฐทเฐจเฑ เฐ—เฑเฐฐเฐฟเฐ‚เฐšเฐฟ เฐฎเฐฐเฐฟเฐจเฑเฐจเฐฟ เฐตเฐฟเฐตเฐฐเฐพเฐฒ เฐ•เฑ‹เฐธเฐ‚ [เฐชเฑเฐฐเฑ€เฐชเฑเฐฐเฐพเฐธเฑ†เฐธเฑ](./preprocessing) เฐŸเฑเฐฏเฑเฐŸเฑ‹เฐฐเฐฟเฐฏเฐฒเฑโ€Œเฐจเฐฟ เฐšเฑ‚เฐกเฐ‚เฐกเฐฟ เฐฎเฐฐเฐฟเฐฏเฑ เฐ‡เฐฎเฑ‡เฐœเฑ, เฐ†เฐกเฐฟเฐฏเฑ‹ เฐฎเฐฐเฐฟเฐฏเฑ เฐฎเฐฒเฑเฐŸเฑ€เฐฎเฑ‹เฐกเฐฒเฑ เฐ‡เฐจเฑโ€ŒเฐชเฑเฐŸเฑโ€Œเฐฒเฐจเฑ เฐชเฑเฐฐเฑ€เฐชเฑเฐฐเฐพเฐธเฑ†เฐธเฑ เฐšเฑ‡เฐฏเฐกเฐพเฐจเฐฟเฐ•เฐฟ [`AutoImageProcessor`], [`AutoFeatureExtractor`] เฐฎเฐฐเฐฟเฐฏเฑ [`AutoProcessor`] เฐŽเฐฒเฐพ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐพเฐฒเฐฟ. </Tip> ### AutoModel <frameworkcontent> <pt> ๐Ÿค— เฐŸเฑเฐฐเฐพเฐจเฑเฐธเฑโ€Œเฐซเฐพเฐฐเฑเฐฎเฐฐเฑเฐฒเฑ เฐชเฑเฐฐเฑ€เฐŸเฑเฐฐเฑˆเฐจเฑเฐกเฑ เฐ‡เฐจเฑโ€ŒเฐธเฑเฐŸเฐพเฐจเฑเฐธเฑโ€Œเฐฒเฐจเฑ เฐฒเฑ‹เฐกเฑ เฐšเฑ‡เฐฏเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐธเฑเฐฒเฐญเฐฎเฑˆเฐจ เฐฎเฐฐเฐฟเฐฏเฑ เฐเฐ•เฑ€เฐ•เฑƒเฐค เฐฎเฐพเฐฐเฑเฐ—เฐพเฐจเฑเฐจเฐฟ เฐ…เฐ‚เฐฆเฐฟเฐธเฑเฐคเฐพเฐฏเฐฟ. เฐฆเฑ€เฐจเฐฟ เฐ…เฐฐเฑเฐฅเฐ‚ เฐฎเฑ€เฐฐเฑ [`AutoTokenizer`]เฐจเฐฟ เฐฒเฑ‹เฐกเฑ เฐšเฑ‡เฐธเฐฟเฐจเฐŸเฑเฐฒเฑเฐ—เฐพ [`AutoModel`]เฐจเฐฟ เฐฒเฑ‹เฐกเฑ เฐšเฑ‡เฐฏเฐตเฐšเฑเฐšเฑ. เฐŸเฐพเฐธเฑเฐ•เฑ เฐ•เฑ‹เฐธเฐ‚ เฐธเฐฐเฑˆเฐจ [`AutoModel`]เฐจเฐฟ เฐŽเฐ‚เฐšเฑเฐ•เฑ‹เฐตเฐกเฐ‚ เฐฎเฐพเฐคเฑเฐฐเฐฎเฑ‡ เฐคเฑ‡เฐกเฐพ. 
เฐŸเฑ†เฐ•เฑเฐธเฑเฐŸเฑ (เฐฒเฑ‡เฐฆเฐพ เฐธเฑ€เฐ•เฑเฐตเฑ†เฐจเฑเฐธเฑ) เฐตเฐฐเฑเฐ—เฑ€เฐ•เฐฐเฐฃ เฐ•เฑ‹เฐธเฐ‚, เฐฎเฑ€เฐฐเฑ [`AutoModelForSequenceClassification`]เฐจเฐฟ เฐฒเฑ‹เฐกเฑ เฐšเฑ‡เฐฏเฐพเฐฒเฐฟ: ```py >>> from transformers import AutoModelForSequenceClassification >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" >>> pt_model = AutoModelForSequenceClassification.from_pretrained(model_name) ``` <Tip> [`AutoModel`] เฐ•เฑเฐฒเฐพเฐธเฑ เฐฆเฑเฐตเฐพเฐฐเฐพ เฐธเฐชเฑ‹เฐฐเฑเฐŸเฑ เฐšเฑ‡เฐธเฑ‡ เฐŸเฐพเฐธเฑเฐ•เฑโ€Œเฐฒ เฐ•เฑ‹เฐธเฐ‚ [เฐŸเฐพเฐธเฑเฐ•เฑ เฐธเฐพเฐฐเฐพเฐ‚เฐถเฐ‚](./task_summary)เฐจเฐฟ เฐšเฑ‚เฐกเฐ‚เฐกเฐฟ. </Tip> เฐ‡เฐชเฑเฐชเฑเฐกเฑ เฐฎเฑ€ เฐชเฑเฐฐเฑ€เฐชเฑเฐฐเฐพเฐธเฑ†เฐธเฑ เฐšเฑ‡เฐฏเฐฌเฐกเฐฟเฐจ เฐฌเฑเฐฏเฐพเฐšเฑ เฐ‡เฐจเฑโ€ŒเฐชเฑเฐŸเฑโ€Œเฐฒเฐจเฑ เฐจเฑ‡เฐฐเฑเฐ—เฐพ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐ•เฐฟ เฐชเฐ‚เฐชเฐ‚เฐกเฐฟ. เฐฎเฑ€เฐฐเฑ `**`เฐจเฐฟ เฐœเฑ‹เฐกเฐฟเฐ‚เฐšเฐกเฐ‚ เฐฆเฑเฐตเฐพเฐฐเฐพ เฐจเฐฟเฐ˜เฐ‚เฐŸเฑเฐตเฑเฐจเฐฟ เฐ…เฐจเฑโ€Œเฐชเฑเฐฏเฐพเฐ•เฑ เฐšเฑ‡เฐฏเฐพเฐฒเฐฟ: ```py >>> pt_outputs = pt_model(**pt_batch) ``` เฐฎเฑ‹เฐกเฐฒเฑ เฐคเฑเฐฆเฐฟ เฐฏเฐพเฐ•เฑเฐŸเฐฟเฐตเฑ‡เฐทเฐจเฑโ€Œเฐฒเฐจเฑ `logits` เฐฒเฐ•เฑเฐทเฐฃเฐ‚เฐฒเฑ‹ เฐ…เฐตเฑเฐŸเฑโ€ŒเฐชเฑเฐŸเฑ เฐšเฑ‡เฐธเฑเฐคเฑเฐ‚เฐฆเฐฟ. เฐธเฐ‚เฐญเฐพเฐตเฑเฐฏเฐคเฐฒเฐจเฑ เฐคเฐฟเฐฐเฐฟเฐ—เฐฟ เฐชเฑŠเฐ‚เฐฆเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐธเฐพเฐซเฑเฐŸเฑโ€Œเฐฎเฐพเฐ•เฑเฐธเฑ เฐซเฐ‚เฐ•เฑเฐทเฐจเฑโ€Œเฐจเฑ `logits` เฐ•เฑ เฐตเฐฐเฑเฐคเฐฟเฐ‚เฐชเฐœเฑ‡เฐฏเฐ‚เฐกเฐฟ: ```py >>> from torch import nn >>> pt_predictions = nn.functional.softmax(pt_outputs.logits, dim=-1) >>> print(pt_predictions) tensor([[0.0021, 0.0018, 0.0115, 0.2121, 0.7725], [0.2084, 0.1826, 0.1969, 0.1755, 0.2365]], grad_fn=<SoftmaxBackward0>) ``` </pt> <tf> ๐Ÿค— เฐŸเฑเฐฐเฐพเฐจเฑเฐธเฑโ€Œเฐซเฐพเฐฐเฑเฐฎเฐฐเฑเฐฒเฑ เฐชเฑเฐฐเฑ€เฐŸเฑเฐฐเฑˆเฐจเฑเฐกเฑ เฐ‡เฐจเฑโ€ŒเฐธเฑเฐŸเฐพเฐจเฑเฐธเฑโ€Œเฐฒเฐจเฑ เฐฒเฑ‹เฐกเฑ เฐšเฑ‡เฐฏเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐธเฑเฐฒเฐญเฐฎเฑˆเฐจ เฐฎเฐฐเฐฟเฐฏเฑ เฐเฐ•เฑ€เฐ•เฑƒเฐค เฐฎเฐพเฐฐเฑเฐ—เฐพเฐจเฑเฐจเฐฟ เฐ…เฐ‚เฐฆเฐฟเฐธเฑเฐคเฐพเฐฏเฐฟ. เฐฎเฑ€เฐฐเฑ [`AutoTokenizer`]เฐจเฐฟ เฐฒเฑ‹เฐกเฑ เฐšเฑ‡เฐธเฐฟเฐจเฐŸเฑเฐฒเฑเฐ—เฐพ เฐฎเฑ€เฐฐเฑ [`TFAutoModel`]เฐจเฐฟ เฐฒเฑ‹เฐกเฑ เฐšเฑ‡เฐฏเฐตเฐšเฑเฐšเฐจเฐฟ เฐฆเฑ€เฐจเฐฟ เฐ…เฐฐเฑเฐฅเฐ‚. เฐŸเฐพเฐธเฑเฐ•เฑ เฐ•เฑ‹เฐธเฐ‚ เฐธเฐฐเฑˆเฐจ [`TFAutoModel`]เฐจเฐฟ เฐŽเฐ‚เฐšเฑเฐ•เฑ‹เฐตเฐกเฐ‚ เฐฎเฐพเฐคเฑเฐฐเฐฎเฑ‡ เฐคเฑ‡เฐกเฐพ. เฐŸเฑ†เฐ•เฑเฐธเฑเฐŸเฑ (เฐฒเฑ‡เฐฆเฐพ เฐธเฑ€เฐ•เฑเฐตเฑ†เฐจเฑเฐธเฑ) เฐตเฐฐเฑเฐ—เฑ€เฐ•เฐฐเฐฃ เฐ•เฑ‹เฐธเฐ‚, เฐฎเฑ€เฐฐเฑ [`TFAutoModelForSequenceClassification`]เฐจเฐฟ เฐฒเฑ‹เฐกเฑ เฐšเฑ‡เฐฏเฐพเฐฒเฐฟ: ```py >>> from transformers import TFAutoModelForSequenceClassification >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(model_name) ``` <Tip> [`AutoModel`] เฐ•เฑเฐฒเฐพเฐธเฑ เฐฆเฑเฐตเฐพเฐฐเฐพ เฐธเฐชเฑ‹เฐฐเฑเฐŸเฑ เฐšเฑ‡เฐธเฑ‡ เฐŸเฐพเฐธเฑเฐ•เฑโ€Œเฐฒ เฐ•เฑ‹เฐธเฐ‚ [เฐŸเฐพเฐธเฑเฐ•เฑ เฐธเฐพเฐฐเฐพเฐ‚เฐถเฐ‚](./task_summary)เฐจเฐฟ เฐšเฑ‚เฐกเฐ‚เฐกเฐฟ. </Tip> เฐ‡เฐชเฑเฐชเฑเฐกเฑ เฐฎเฑ€ เฐชเฑเฐฐเฑ€เฐชเฑเฐฐเฐพเฐธเฑ†เฐธเฑ เฐšเฑ‡เฐฏเฐฌเฐกเฐฟเฐจ เฐฌเฑเฐฏเฐพเฐšเฑ เฐ‡เฐจเฑโ€ŒเฐชเฑเฐŸเฑโ€Œเฐฒเฐจเฑ เฐจเฑ‡เฐฐเฑเฐ—เฐพ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐ•เฐฟ เฐชเฐ‚เฐชเฐ‚เฐกเฐฟ. เฐฎเฑ€เฐฐเฑ เฐŸเฑ†เฐจเฑเฐธเฐฐเฑโ€Œเฐฒเฐจเฑ เฐ‡เฐฒเฐพ เฐชเฐพเฐธเฑ เฐšเฑ‡เฐฏเฐตเฐšเฑเฐšเฑ: ```py >>> tf_outputs = tf_model(tf_batch) ``` เฐฎเฑ‹เฐกเฐฒเฑ เฐคเฑเฐฆเฐฟ เฐฏเฐพเฐ•เฑเฐŸเฐฟเฐตเฑ‡เฐทเฐจเฑโ€Œเฐฒเฐจเฑ `logits` เฐฒเฐ•เฑเฐทเฐฃเฐ‚เฐฒเฑ‹ เฐ…เฐตเฑเฐŸเฑโ€ŒเฐชเฑเฐŸเฑ เฐšเฑ‡เฐธเฑเฐคเฑเฐ‚เฐฆเฐฟ. 
เฐธเฐ‚เฐญเฐพเฐตเฑเฐฏเฐคเฐฒเฐจเฑ เฐคเฐฟเฐฐเฐฟเฐ—เฐฟ เฐชเฑŠเฐ‚เฐฆเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐธเฐพเฐซเฑเฐŸเฑโ€Œเฐฎเฐพเฐ•เฑเฐธเฑ เฐซเฐ‚เฐ•เฑเฐทเฐจเฑโ€Œเฐจเฑ `logits`เฐ•เฑ เฐตเฐฐเฑเฐคเฐฟเฐ‚เฐชเฐœเฑ‡เฐฏเฐ‚เฐกเฐฟ: ```py >>> import tensorflow as tf >>> tf_predictions = tf.nn.softmax(tf_outputs.logits, axis=-1) >>> tf_predictions # doctest: +IGNORE_RESULT ``` </tf> </frameworkcontent> <Tip> เฐ…เฐจเฑเฐจเฐฟ ๐Ÿค— เฐŸเฑเฐฐเฐพเฐจเฑเฐธเฑโ€Œเฐซเฐพเฐฐเฑเฐฎเฐฐเฑเฐธเฑ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐฒเฑ (PyTorch เฐฒเฑ‡เฐฆเฐพ TensorFlow) เฐคเฑเฐฆเฐฟ เฐฏเฐพเฐ•เฑเฐŸเฐฟเฐตเฑ‡เฐทเฐจเฑโ€Œเฐ•เฑ *เฐฎเฑเฐ‚เฐฆเฑ* เฐŸเฑ†เฐจเฑเฐธเฐฐเฑโ€Œเฐฒเฐจเฑ เฐ…เฐตเฑเฐŸเฑโ€ŒเฐชเฑเฐŸเฑ เฐšเฑ‡เฐธเฑเฐคเฐพเฐฏเฐฟ เฐซเฐ‚เฐ•เฑเฐทเฐจเฑ (softmax เฐตเฐ‚เฐŸเฐฟเฐฆเฐฟ) เฐŽเฐ‚เฐฆเฑเฐ•เฐ‚เฐŸเฑ‡ เฐšเฐฟเฐตเฐฐเฐฟ เฐฏเฐพเฐ•เฑเฐŸเฐฟเฐตเฑ‡เฐทเฐจเฑ เฐซเฐ‚เฐ•เฑเฐทเฐจเฑ เฐคเฐฐเฐšเฑเฐ—เฐพ เฐจเฐทเฑเฐŸเฐ‚เฐคเฑ‹ เฐ•เฐฒเฐฟเฐธเฐฟเฐชเฑ‹เฐคเฑเฐ‚เฐฆเฐฟ. เฐฎเฑ‹เฐกเฐฒเฑ เฐ…เฐตเฑเฐŸเฑโ€ŒเฐชเฑเฐŸเฑโ€Œเฐฒเฑ เฐชเฑเฐฐเฐคเฑเฐฏเฑ‡เฐ• เฐกเฑ‡เฐŸเฐพเฐ•เฑเฐฒเฐพเฐธเฑโ€Œเฐฒเฑ เฐ•เฐพเฐฌเฐŸเฑเฐŸเฐฟ เฐตเฐพเฐŸเฐฟ เฐฒเฐ•เฑเฐทเฐฃเฐพเฐฒเฑ IDEเฐฒเฑ‹ เฐธเฑเฐตเฐฏเฐ‚เฐšเฐพเฐฒเฐ•เฐ‚เฐ—เฐพ เฐชเฑ‚เฐฐเฑเฐคเฐฟ เฐšเฑ‡เฐฏเฐฌเฐกเฐคเฐพเฐฏเฐฟ. เฐฎเฑ‹เฐกเฐฒเฑ เฐ…เฐตเฑเฐŸเฑโ€ŒเฐชเฑเฐŸเฑโ€Œเฐฒเฑ เฐŸเฑเฐชเฑเฐฒเฑ เฐฒเฑ‡เฐฆเฐพ เฐกเฐฟเฐ•เฑเฐทเฐจเฐฐเฑ€ เฐฒเฐพเฐ—เฐพ เฐชเฑเฐฐเฐตเฐฐเฑเฐคเฐฟเฐธเฑเฐคเฐพเฐฏเฐฟ (เฐฎเฑ€เฐฐเฑ เฐชเฑ‚เฐฐเฑเฐฃเฐพเฐ‚เฐ•เฐ‚, เฐธเฑเฐฒเฑˆเฐธเฑ เฐฒเฑ‡เฐฆเฐพ เฐธเฑเฐŸเฑเฐฐเฐฟเฐ‚เฐ—เฑโ€Œเฐคเฑ‹ เฐ‡เฐ‚เฐกเฑ†เฐ•เฑเฐธเฑ เฐšเฑ‡เฐฏเฐตเฐšเฑเฐšเฑ) เฐˆ เฐธเฐ‚เฐฆเฐฐเฑเฐญเฐ‚เฐฒเฑ‹, เฐเฐฆเฑ€ เฐฒเฑ‡เฐจเฐฟ เฐ—เฑเฐฃเฐพเฐฒเฑ เฐตเฐฟเฐธเฑเฐฎเฐฐเฐฟเฐ‚เฐšเฐฌเฐกเฐคเฐพเฐฏเฐฟ. </Tip> ### เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐจเฑ เฐธเฑ‡เฐตเฑ เฐšเฑ‡เฐฏเฐ‚เฐกเฐฟ <frameworkcontent> <pt> เฐฎเฑ€ เฐฎเฑ‹เฐกเฐฒเฑ เฐšเฐ•เฑเฐ•เฐ—เฐพ เฐŸเฑเฐฏเฑ‚เฐจเฑ เฐšเฑ‡เฐฏเฐฌเฐกเฐฟเฐจ เฐคเฐฐเฑเฐตเฐพเฐค, เฐฎเฑ€เฐฐเฑ เฐฆเฐพเฐจเฐฟเฐจเฐฟ [`PreTrainedModel.save_pretrained`]เฐจเฐฟ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐฟ เฐฆเฐพเฐจเฐฟ เฐŸเฑ‹เฐ•เฑ†เฐจเฑˆเฐœเฐฐเฑโ€Œเฐคเฑ‹ เฐธเฑ‡เฐตเฑ เฐšเฑ‡เฐฏเฐตเฐšเฑเฐšเฑ: ```py >>> pt_save_directory = "./pt_save_pretrained" >>> tokenizer.save_pretrained(pt_save_directory) # doctest: +IGNORE_RESULT >>> pt_model.save_pretrained(pt_save_directory) ``` เฐฎเฑ€เฐฐเฑ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐจเฐฟ เฐฎเฐณเฑเฐฒเฑ€ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐธเฐฟเฐฆเฑเฐงเฐ‚เฐ—เฐพ เฐ‰เฐจเฑเฐจเฐชเฑเฐชเฑเฐกเฑ, เฐฆเฐพเฐจเฑเฐจเฐฟ [`PreTrainedModel.from_pretrained`]เฐคเฑ‹ เฐฐเฑ€เฐฒเฑ‹เฐกเฑ เฐšเฑ‡เฐฏเฐ‚เฐกเฐฟ: ```py >>> pt_model = AutoModelForSequenceClassification.from_pretrained("./pt_save_pretrained") ``` </pt> <tf> เฐฎเฑ€ เฐฎเฑ‹เฐกเฐฒเฑ เฐšเฐ•เฑเฐ•เฐ—เฐพ เฐŸเฑเฐฏเฑ‚เฐจเฑ เฐšเฑ‡เฐฏเฐฌเฐกเฐฟเฐจ เฐคเฐฐเฑเฐตเฐพเฐค, เฐฎเฑ€เฐฐเฑ เฐฆเฐพเฐจเฐฟเฐจเฐฟ [`TFPreTrainedModel.save_pretrained`]เฐจเฐฟ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐฟ เฐฆเฐพเฐจเฐฟ เฐŸเฑ‹เฐ•เฑ†เฐจเฑˆเฐœเฐฐเฑโ€Œเฐคเฑ‹ เฐธเฑ‡เฐตเฑ เฐšเฑ‡เฐฏเฐตเฐšเฑเฐšเฑ: ```py >>> tf_save_directory = "./tf_save_pretrained" >>> tokenizer.save_pretrained(tf_save_directory) # doctest: +IGNORE_RESULT >>> tf_model.save_pretrained(tf_save_directory) ``` เฐฎเฑ€เฐฐเฑ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐจเฐฟ เฐฎเฐณเฑเฐฒเฑ€ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐธเฐฟเฐฆเฑเฐงเฐ‚เฐ—เฐพ เฐ‰เฐจเฑเฐจเฐชเฑเฐชเฑเฐกเฑ, เฐฆเฐพเฐจเฑเฐจเฐฟ [`TFPreTrainedModel.from_pretrained`]เฐคเฑ‹ เฐฐเฑ€เฐฒเฑ‹เฐกเฑ เฐšเฑ‡เฐฏเฐ‚เฐกเฐฟ: ```py >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained("./tf_save_pretrained") ``` </tf> </frameworkcontent> เฐ’เฐ• เฐชเฑเฐฐเฐคเฑเฐฏเฑ‡เฐ•เฐฟเฐ‚เฐšเฐฟ เฐ…เฐฆเฑเฐญเฑเฐคเฐฎเฑˆเฐจ ๐Ÿค— 
เฐŸเฑเฐฐเฐพเฐจเฑเฐธเฑโ€Œเฐซเฐพเฐฐเฑเฐฎเฐฐเฑเฐธเฑ เฐซเฑ€เฐšเฐฐเฑ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐจเฑ เฐธเฑ‡เฐตเฑ เฐšเฑ‡เฐฏเฐ—เฐฒ เฐธเฐพเฐฎเฐฐเฑเฐฅเฑเฐฏเฐ‚ เฐฎเฐฐเฐฟเฐฏเฑ เฐฆเฐพเฐจเฐฟเฐจเฐฟ PyTorch เฐฒเฑ‡เฐฆเฐพ TensorFlow เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐ—เฐพ เฐฐเฑ€เฐฒเฑ‹เฐกเฑ เฐšเฑ‡เฐฏเฐ—เฐฒเฐฆเฑ. `from_pt` เฐฒเฑ‡เฐฆเฐพ `from_tf` เฐชเฐฐเฐพเฐฎเฐฟเฐคเฐฟ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐจเฑ เฐ’เฐ• เฐซเฑเฐฐเฑ‡เฐฎเฑโ€Œเฐตเฐฐเฑเฐ•เฑ เฐจเฑเฐ‚เฐกเฐฟ เฐฎเฐฐเฑŠเฐ• เฐซเฑเฐฐเฑ‡เฐฎเฑโ€Œเฐตเฐฐเฑเฐ•เฑโ€Œเฐ•เฐฟ เฐฎเฐพเฐฐเฑเฐšเฐ—เฐฒเฐฆเฑ: <frameworkcontent> <pt> ```py >>> from transformers import AutoModel >>> tokenizer = AutoTokenizer.from_pretrained(tf_save_directory) >>> pt_model = AutoModelForSequenceClassification.from_pretrained(tf_save_directory, from_tf=True) ``` </pt> <tf> ```py >>> from transformers import TFAutoModel >>> tokenizer = AutoTokenizer.from_pretrained(pt_save_directory) >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(pt_save_directory, from_pt=True) ``` </tf> </frameworkcontent> ## เฐ•เฐธเฑเฐŸเฐฎเฑ เฐฎเฑ‹เฐกเฐฒเฑ เฐฌเฐฟเฐฒเฑเฐกเฑเฐธเฑ เฐฎเฑ‹เฐกเฐฒเฑ เฐŽเฐฒเฐพ เฐจเฐฟเฐฐเฑเฐฎเฐฟเฐ‚เฐšเฐฌเฐกเฑเฐคเฑเฐ‚เฐฆเฑ‹ เฐฎเฐพเฐฐเฑเฐšเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐฎเฑ€เฐฐเฑ เฐฎเฑ‹เฐกเฐฒเฑ เฐ•เฐพเฐจเฑเฐซเฐฟเฐ—เฐฐเฑ‡เฐทเฐจเฑ เฐ•เฑเฐฒเฐพเฐธเฑโ€Œเฐจเฐฟ เฐธเฐตเฐฐเฐฟเฐ‚เฐšเฐตเฐšเฑเฐšเฑ. เฐฆเฐพเฐšเฐฟเฐจ เฐฒเฑ‡เฐฏเฐฐเฑโ€Œเฐฒเฑ เฐฒเฑ‡เฐฆเฐพ เฐ…เฐŸเฑ†เฐจเฑเฐทเฐจเฑ เฐนเฑ†เฐกเฑโ€Œเฐฒ เฐธเฐ‚เฐ–เฑเฐฏ เฐตเฐ‚เฐŸเฐฟ เฐฎเฑ‹เฐกเฐฒเฑ เฐฒเฐ•เฑเฐทเฐฃเฐพเฐฒเฐจเฑ เฐ•เฐพเฐจเฑเฐซเฐฟเฐ—เฐฐเฑ‡เฐทเฐจเฑ เฐจเฐฟเฐฐเฑเฐฆเฑ‡เฐถเฐฟเฐธเฑเฐคเฑเฐ‚เฐฆเฐฟ. เฐฎเฑ€เฐฐเฑ เฐ•เฐธเฑเฐŸเฐฎเฑ เฐ•เฐพเฐจเฑเฐซเฐฟเฐ—เฐฐเฑ‡เฐทเฐจเฑ เฐ•เฑเฐฒเฐพเฐธเฑ เฐจเฑเฐ‚เฐกเฐฟ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐจเฑ เฐชเฑเฐฐเฐพเฐฐเฐ‚เฐญเฐฟเฐ‚เฐšเฐฟเฐจเฐชเฑเฐชเฑเฐกเฑ เฐฎเฑ€เฐฐเฑ เฐฎเฑŠเฐฆเฐŸเฐฟ เฐจเฑเฐ‚เฐกเฐฟ เฐชเฑเฐฐเฐพเฐฐเฐ‚เฐญเฐฟเฐธเฑเฐคเฐพเฐฐเฑ. เฐฎเฑ‹เฐกเฐฒเฑ เฐ…เฐŸเฑเฐฐเฐฟเฐฌเฑเฐฏเฑ‚เฐŸเฑโ€Œเฐฒเฑ เฐฏเฐพเฐฆเฑƒเฐšเฑเฐ›เฐฟเฐ•เฐ‚เฐ—เฐพ เฐชเฑเฐฐเฐพเฐฐเฐ‚เฐญเฐฟเฐ‚เฐšเฐฌเฐกเฑเฐกเฐพเฐฏเฐฟ เฐฎเฐฐเฐฟเฐฏเฑ เฐ…เฐฐเฑเฐฅเฐตเฐ‚เฐคเฐฎเฑˆเฐจ เฐซเฐฒเฐฟเฐคเฐพเฐฒเฐจเฑ เฐชเฑŠเฐ‚เฐฆเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐฎเฑ€เฐฐเฑ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐจเฑ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฑ‡ เฐฎเฑเฐ‚เฐฆเฑ เฐฆเฐพเฐจเฐฟเฐ•เฐฟ เฐถเฐฟเฐ•เฑเฐทเฐฃ เฐ‡เฐตเฑเฐตเฐพเฐฒเฐฟ. [`AutoConfig`]เฐจเฐฟ เฐฆเฐฟเฐ—เฑเฐฎเฐคเฐฟ เฐšเฑ‡เฐฏเฐกเฐ‚ เฐฆเฑเฐตเฐพเฐฐเฐพ เฐชเฑเฐฐเฐพเฐฐเฐ‚เฐญเฐฟเฐ‚เฐšเฐ‚เฐกเฐฟ, เฐ†เฐชเฑˆ เฐฎเฑ€เฐฐเฑ เฐธเฐตเฐฐเฐฟเฐ‚เฐšเฐพเฐฒเฐจเฑเฐ•เฑเฐ‚เฐŸเฑเฐจเฑเฐจ เฐชเฑเฐฐเฑ€เฐŸเฑเฐฐเฑˆเฐจเฑเฐกเฑ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐจเฑ เฐฒเฑ‹เฐกเฑ เฐšเฑ‡เฐฏเฐ‚เฐกเฐฟ. 
[`AutoConfig.from_pretrained`]เฐฒเฑ‹, เฐฎเฑ€เฐฐเฑ เฐ…เฐŸเฑ†เฐจเฑเฐทเฐจเฑ เฐนเฑ†เฐกเฑโ€Œเฐฒ เฐธเฐ‚เฐ–เฑเฐฏ เฐตเฐ‚เฐŸเฐฟ เฐฎเฑ€เฐฐเฑ เฐฎเฐพเฐฐเฑเฐšเฐพเฐฒเฐจเฑเฐ•เฑเฐ‚เฐŸเฑเฐจเฑเฐจ เฐฒเฐ•เฑเฐทเฐฃเฐพเฐจเฑเฐจเฐฟ เฐชเฑ‡เฐฐเฑเฐ•เฑŠเฐจเฐตเฐšเฑเฐšเฑ: ```py >>> from transformers import AutoConfig >>> my_config = AutoConfig.from_pretrained("distilbert/distilbert-base-uncased", n_heads=12) ``` <frameworkcontent> <pt> [`AutoModel.from_config`]เฐคเฑ‹ เฐฎเฑ€ เฐ…เฐจเฑเฐ•เฑ‚เฐฒ เฐ•เฐพเฐจเฑเฐซเฐฟเฐ—เฐฐเฑ‡เฐทเฐจเฑ เฐจเฑเฐ‚เฐกเฐฟ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐจเฑ เฐธเฑƒเฐทเฑเฐŸเฐฟเฐ‚เฐšเฐ‚เฐกเฐฟ: ```py >>> from transformers import AutoModel >>> my_model = AutoModel.from_config(my_config) ``` </pt> <tf> [`TFAutoModel.from_config`]เฐคเฑ‹ เฐฎเฑ€ เฐ…เฐจเฑเฐ•เฑ‚เฐฒ เฐ•เฐพเฐจเฑเฐซเฐฟเฐ—เฐฐเฑ‡เฐทเฐจเฑ เฐจเฑเฐ‚เฐกเฐฟ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐจเฑ เฐธเฑƒเฐทเฑเฐŸเฐฟเฐ‚เฐšเฐ‚เฐกเฐฟ: ```py >>> from transformers import TFAutoModel >>> my_model = TFAutoModel.from_config(my_config) ``` </tf> </frameworkcontent> เฐ…เฐจเฑเฐ•เฑ‚เฐฒ เฐ•เฐพเฐจเฑเฐซเฐฟเฐ—เฐฐเฑ‡เฐทเฐจเฑโ€Œเฐฒเฐจเฑ เฐฐเฑ‚เฐชเฑŠเฐ‚เฐฆเฐฟเฐ‚เฐšเฐกเฐ‚ เฐ—เฑเฐฐเฐฟเฐ‚เฐšเฐฟ เฐฎเฐฐเฐฟเฐ‚เฐค เฐธเฐฎเฐพเฐšเฐพเฐฐเฐ‚ เฐ•เฑ‹เฐธเฐ‚ [เฐ•เฐธเฑเฐŸเฐฎเฑ เฐ†เฐฐเฑเฐ•เฐฟเฐŸเฑ†เฐ•เฑเฐšเฐฐเฑโ€Œเฐจเฐฟ เฐธเฑƒเฐทเฑเฐŸเฐฟเฐ‚เฐšเฐ‚เฐกเฐฟ](./create_a_model) เฐ—เฑˆเฐกเฑโ€Œเฐจเฑ เฐšเฑ‚เฐกเฐ‚เฐกเฐฟ. ## เฐถเฐฟเฐ•เฑเฐทเฐ•เฑเฐกเฑ - เฐชเฑˆเฐŸเฐพเฐฐเฑเฐšเฑ เฐ†เฐชเฑเฐŸเฐฟเฐฎเฑˆเฐœเฑ เฐšเฑ‡เฐธเฐฟเฐจ เฐถเฐฟเฐ•เฑเฐทเฐฃ เฐฒเฑ‚เฐชเฑ เฐ…เฐจเฑเฐจเฐฟ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐฒเฑ เฐชเฑเฐฐเฐพเฐฎเฐพเฐฃเฐฟเฐ•เฐฎเฑˆเฐจ [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) เฐ•เฐพเฐฌเฐŸเฑเฐŸเฐฟ เฐฎเฑ€เฐฐเฑ เฐตเฐพเฐŸเฐฟเฐจเฐฟ เฐเฐฆเฑˆเฐจเฐพ เฐธเฐพเฐงเฐพเฐฐเฐฃ เฐถเฐฟเฐ•เฑเฐทเฐฃ เฐฒเฑ‚เฐชเฑโ€Œเฐฒเฑ‹ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐตเฐšเฑเฐšเฑ. เฐฎเฑ€เฐฐเฑ เฐฎเฑ€ เฐธเฑเฐตเฐ‚เฐค เฐถเฐฟเฐ•เฑเฐทเฐฃ เฐฒเฑ‚เฐชเฑโ€Œเฐจเฑ เฐตเฑเฐฐเฐพเฐฏเฐ—เฐฒเฐฟเฐ—เฐฟเฐจเฐชเฑเฐชเฐŸเฐฟเฐ•เฑ€, ๐Ÿค— เฐŸเฑเฐฐเฐพเฐจเฑเฐธเฑโ€Œเฐซเฐพเฐฐเฑเฐฎเฐฐเฑเฐฒเฑ PyTorch เฐ•เฑ‹เฐธเฐ‚ [`เฐŸเฑเฐฐเฑˆเฐจเฐฐเฑ`] เฐคเฐฐเฐ—เฐคเฐฟเฐจเฐฟ เฐ…เฐ‚เฐฆเฐœเฑ‡เฐธเฑเฐคเฐพเฐฏเฐฟ, เฐ‡เฐ‚เฐฆเฑเฐฒเฑ‹ เฐชเฑเฐฐเฐพเฐฅเฐฎเฐฟเฐ• เฐถเฐฟเฐ•เฑเฐทเฐฃ เฐฒเฑ‚เฐชเฑ เฐ‰เฐ‚เฐŸเฑเฐ‚เฐฆเฐฟ เฐฎเฐฐเฐฟเฐฏเฑ เฐชเฐ‚เฐชเฐฟเฐฃเฑ€ เฐšเฑ‡เฐฏเฐฌเฐกเฐฟเฐจ เฐถเฐฟเฐ•เฑเฐทเฐฃ, เฐฎเฐฟเฐถเฑเฐฐเฐฎ เฐ–เฐšเฑเฐšเฐฟเฐคเฐคเฑเฐตเฐ‚ เฐฎเฐฐเฐฟเฐฏเฑ เฐฎเฐฐเฐฟเฐจเฑเฐจเฐฟ เฐตเฐ‚เฐŸเฐฟ เฐซเฑ€เฐšเฐฐเฑโ€Œเฐฒ เฐ•เฑ‹เฐธเฐ‚ เฐ…เฐฆเฐจเฐชเฑ เฐ•เฐพเฐฐเฑเฐฏเฐพเฐšเฐฐเฐฃเฐจเฑ เฐœเฑ‹เฐกเฐฟเฐธเฑเฐคเฑเฐ‚เฐฆเฐฟ. เฐฎเฑ€ เฐตเฐฟเฐงเฐฟเฐจเฐฟ เฐฌเฐŸเฑเฐŸเฐฟ, เฐฎเฑ€เฐฐเฑ เฐธเฐพเฐงเฐพเฐฐเฐฃเฐ‚เฐ—เฐพ เฐ•เฐฟเฐ‚เฐฆเฐฟ เฐชเฐพเฐฐเฐพเฐฎเฐฟเฐคเฑเฐฒเฐจเฑ [`เฐŸเฑเฐฐเฑˆเฐจเฐฐเฑ`]เฐ•เฐฟ เฐชเฐ‚เฐชเฑเฐคเฐพเฐฐเฑ: 1. เฐฎเฑ€เฐฐเฑ [`PreTrainedModel`] เฐฒเฑ‡เฐฆเฐพ [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)เฐคเฑ‹ เฐชเฑเฐฐเฐพเฐฐเฐ‚เฐญเฐฟเฐธเฑเฐคเฐพเฐฐเฑ: ```py >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased") ``` 2. [`TrainingArguments`] เฐฎเฑ€เฐฐเฑ เฐจเฑ‡เฐฐเฑเฐšเฑเฐ•เฑเฐจเฑ‡ เฐฐเฑ‡เฐŸเฑ, เฐฌเฑเฐฏเฐพเฐšเฑ เฐชเฐฐเฐฟเฐฎเฐพเฐฃเฐ‚ เฐฎเฐฐเฐฟเฐฏเฑ เฐถเฐฟเฐ•เฑเฐทเฐฃ เฐชเฑŠเฐ‚เฐฆเฐตเฐฒเฐธเฐฟเฐจ เฐฏเฑเฐ—เฐพเฐฒ เฐธเฐ‚เฐ–เฑเฐฏ เฐตเฐ‚เฐŸเฐฟ เฐฎเฐพเฐฐเฑเฐšเฐ—เฐฒ เฐฎเฑ‹เฐกเฐฒเฑ เฐนเฑˆเฐชเฐฐเฑโ€Œเฐชเฐพเฐฐเฐพเฐฎเฑ€เฐŸเฐฐเฑโ€Œเฐฒเฐจเฑ เฐ•เฐฒเฐฟเฐ—เฐฟ เฐ‰เฐ‚เฐฆเฐฟ. 
เฐฎเฑ€เฐฐเฑ เฐŽเฐฒเฐพเฐ‚เฐŸเฐฟ เฐถเฐฟเฐ•เฑเฐทเฐฃเฐพ เฐตเฐพเฐฆเฐจเฐฒเฐจเฑ เฐชเฑ‡เฐฐเฑเฐ•เฑŠเฐจเฐ•เฑเฐ‚เฐŸเฑ‡ เฐกเฐฟเฐซเฐพเฐฒเฑเฐŸเฑ เฐตเฐฟเฐฒเฑเฐตเฐฒเฑ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐฌเฐกเฐคเฐพเฐฏเฐฟ: ```py >>> from transformers import TrainingArguments >>> training_args = TrainingArguments( ... output_dir="path/to/save/folder/", ... learning_rate=2e-5, ... per_device_train_batch_size=8, ... per_device_eval_batch_size=8, ... num_train_epochs=2, ... ) ``` 3. เฐŸเฑ‹เฐ•เฑ†เฐจเฑˆเฐœเฐฐเฑ, เฐ‡เฐฎเฑ‡เฐœเฑ เฐชเฑเฐฐเฐพเฐธเฑ†เฐธเฐฐเฑ, เฐซเฑ€เฐšเฐฐเฑ เฐŽเฐ•เฑเฐธเฑโ€ŒเฐŸเฑเฐฐเฐพเฐ•เฑเฐŸเฐฐเฑ เฐฒเฑ‡เฐฆเฐพ เฐชเฑเฐฐเฐพเฐธเฑ†เฐธเฐฐเฑ เฐตเฐ‚เฐŸเฐฟ เฐชเฑเฐฐเฑ€เฐชเฑเฐฐเฐพเฐธเฑ†เฐธเฐฟเฐ‚เฐ—เฑ เฐ•เฑเฐฒเฐพเฐธเฑโ€Œเฐจเฐฟ เฐฒเฑ‹เฐกเฑ เฐšเฑ‡เฐฏเฐ‚เฐกเฐฟ: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased") ``` 4. เฐกเฑ‡เฐŸเฐพเฐธเฑ†เฐŸเฑโ€Œเฐจเฑ เฐฒเฑ‹เฐกเฑ เฐšเฑ‡เฐฏเฐ‚เฐกเฐฟ: ```py >>> from datasets import load_dataset >>> dataset = load_dataset("rotten_tomatoes") # doctest: +IGNORE_RESULT ``` 5. เฐกเฑ‡เฐŸเฐพเฐธเฑ†เฐŸเฑโ€Œเฐจเฑ เฐŸเฑ‹เฐ•เฐจเฑˆเฐœเฑ เฐšเฑ‡เฐฏเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐ’เฐ• เฐซเฐ‚เฐ•เฑเฐทเฐจเฑโ€Œเฐจเฑ เฐธเฑƒเฐทเฑเฐŸเฐฟเฐ‚เฐšเฐ‚เฐกเฐฟ: ```py >>> def tokenize_dataset(dataset): ... return tokenizer(dataset["text"]) ``` เฐ†เฐชเฑˆ เฐฆเฐพเฐจเฐฟเฐจเฐฟ [`~datasets.Dataset.map`]เฐคเฑ‹ เฐฎเฑŠเฐคเฑเฐคเฐ‚ เฐกเฑ‡เฐŸเฐพเฐธเฑ†เฐŸเฑโ€Œเฐฒเฑ‹ เฐตเฐฐเฑเฐคเฐฟเฐ‚เฐชเฐœเฑ‡เฐฏเฐ‚เฐกเฐฟ: ```py >>> dataset = dataset.map(tokenize_dataset, batched=True) ``` 6. เฐฎเฑ€ เฐกเฑ‡เฐŸเฐพเฐธเฑ†เฐŸเฑ เฐจเฑเฐ‚เฐกเฐฟ เฐ‰เฐฆเฐพเฐนเฐฐเฐฃเฐฒ เฐธเฐฎเฑ‚เฐนเฐพเฐจเฑเฐจเฐฟ เฐธเฑƒเฐทเฑเฐŸเฐฟเฐ‚เฐšเฐกเฐพเฐจเฐฟเฐ•เฐฟ [`DataCollatorWithPadding`]: ```py >>> from transformers import DataCollatorWithPadding >>> data_collator = DataCollatorWithPadding(tokenizer=tokenizer) ``` เฐ‡เฐชเฑเฐชเฑเฐกเฑ เฐˆ เฐคเฐฐเฐ—เฐคเฑเฐฒเฐจเฑเฐจเฐฟเฐ‚เฐŸเฐฟเฐจเฑ€ [`Trainer`]เฐฒเฑ‹ เฐธเฑ‡เฐ•เฐฐเฐฟเฐ‚เฐšเฐ‚เฐกเฐฟ: ```py >>> from transformers import Trainer >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=dataset["train"], ... eval_dataset=dataset["test"], ... tokenizer=tokenizer, ... data_collator=data_collator, ... ) # doctest: +SKIP ``` เฐฎเฑ€เฐฐเฑ เฐธเฐฟเฐฆเฑเฐงเฐ‚เฐ—เฐพ เฐ‰เฐจเฑเฐจเฐชเฑเฐชเฑเฐกเฑ, เฐถเฐฟเฐ•เฑเฐทเฐฃเฐจเฑ เฐชเฑเฐฐเฐพเฐฐเฐ‚เฐญเฐฟเฐ‚เฐšเฐกเฐพเฐจเฐฟเฐ•เฐฟ [`~Trainer.train`]เฐ•เฐฟ เฐ•เฐพเฐฒเฑ เฐšเฑ‡เฐฏเฐ‚เฐกเฐฟ: ```py >>> trainer.train() # doctest: +SKIP ``` <Tip> เฐธเฑ€เฐ•เฑเฐตเฑ†เฐจเฑเฐธเฑ-เฐŸเฑ-เฐธเฑ€เฐ•เฑเฐตเฑ†เฐจเฑเฐธเฑ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐจเฐฟ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฑ‡ - เฐ…เฐจเฑเฐตเฐพเฐฆเฐ‚ เฐฒเฑ‡เฐฆเฐพ เฐธเฐพเฐฐเฐพเฐ‚เฐถเฐ‚ เฐตเฐ‚เฐŸเฐฟ เฐชเฐจเฑเฐฒ เฐ•เฑ‹เฐธเฐ‚, เฐฌเฐฆเฑเฐฒเฑเฐ—เฐพ [`Seq2SeqTrainer`] เฐฎเฐฐเฐฟเฐฏเฑ [`Seq2SeqTrainingArguments`] เฐคเฐฐเฐ—เฐคเฑเฐฒเฐจเฑ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐ‚เฐกเฐฟ. </Tip> เฐฎเฑ€เฐฐเฑ [`Trainer`] เฐฒเฑ‹เฐชเฐฒ เฐ‰เฐจเฑเฐจ เฐชเฐฆเฑเฐงเฐคเฑเฐฒเฐจเฑ เฐ‰เฐชเฐตเฐฐเฑเฐ—เฑ€เฐ•เฐฐเฐฟเฐ‚เฐšเฐกเฐ‚ เฐฆเฑเฐตเฐพเฐฐเฐพ เฐถเฐฟเฐ•เฑเฐทเฐฃ เฐฒเฑ‚เฐชเฑ เฐชเฑเฐฐเฐตเฐฐเฑเฐคเฐจเฐจเฑ เฐ…เฐจเฑเฐ•เฑ‚เฐฒเฑ€เฐ•เฐฐเฐฟเฐ‚เฐšเฐตเฐšเฑเฐšเฑ. เฐ‡เฐฆเฐฟ เฐฒเฐพเฐธเฑ เฐซเฐ‚เฐ•เฑเฐทเฐจเฑ, เฐ†เฐชเฑเฐŸเฐฟเฐฎเฑˆเฐœเฐฐเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐทเฑ†เฐกเฑเฐฏเฑ‚เฐฒเฐฐเฑ เฐตเฐ‚เฐŸเฐฟ เฐฒเฐ•เฑเฐทเฐฃเฐพเฐฒเฐจเฑ เฐ…เฐจเฑเฐ•เฑ‚เฐฒเฑ€เฐ•เฐฐเฐฟเฐ‚เฐšเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐฎเฐฟเฐฎเฑเฐฎเฐฒเฑเฐจเฐฟ เฐ…เฐจเฑเฐฎเฐคเฐฟเฐธเฑเฐคเฑเฐ‚เฐฆเฐฟ. 
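For instance, overriding the loss computation could look roughly like the sketch below. The class weighting is arbitrary and only meant to illustrate the pattern; it assumes the two-label classifier used in this section:

```py
>>> import torch
>>> from torch import nn
>>> from transformers import Trainer

>>> class WeightedLossTrainer(Trainer):
...     def compute_loss(self, model, inputs, return_outputs=False, **kwargs):
...         labels = inputs.pop("labels")
...         outputs = model(**inputs)
...         logits = outputs.logits
...         # Purely illustrative: weight the second class three times as much as the first
...         loss_fct = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 3.0], device=logits.device))
...         loss = loss_fct(logits.view(-1, model.config.num_labels), labels.view(-1))
...         return (loss, outputs) if return_outputs else loss
```

An instance of such a subclass is then passed exactly where [`Trainer`] was used above.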
เฐ‰เฐชเฐตเฐฐเฑเฐ—เฑ€เฐ•เฐฐเฐฟเฐ‚เฐšเฐฌเฐกเฑ‡ เฐชเฐฆเฑเฐงเฐคเฑเฐฒ เฐ•เฑ‹เฐธเฐ‚ [`Trainer`] เฐธเฑ‚เฐšเฐจเฐจเฑ เฐชเฐฐเฐฟเฐถเฑ€เฐฒเฐฟเฐ‚เฐšเฐ‚เฐกเฐฟ. เฐถเฐฟเฐ•เฑเฐทเฐฃ เฐฒเฑ‚เฐชเฑโ€Œเฐจเฑ เฐ…เฐจเฑเฐ•เฑ‚เฐฒเฑ€เฐ•เฐฐเฐฟเฐ‚เฐšเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐฎเฐฐเฑŠเฐ• เฐฎเฐพเฐฐเฑเฐ—เฐ‚ [เฐ•เฐพเฐฒเฑโ€Œเฐฌเฑเฐฏเฐพเฐ•เฑโ€Œเฐฒเฑ](./main_classes/callbacks). เฐฎเฑ€เฐฐเฑ เฐ‡เฐคเฐฐ เฐฒเฑˆเฐฌเฑเฐฐเฐฐเฑ€เฐฒเฐคเฑ‹ เฐ…เฐจเฑเฐธเฐ‚เฐงเฐพเฐจเฐ‚ เฐšเฑ‡เฐฏเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐ•เฐพเฐฒเฑโ€Œเฐฌเฑเฐฏเฐพเฐ•เฑโ€Œเฐฒเฐจเฑ เฐ‰เฐชเฐฏเฑ‹เฐ—เฐฟเฐ‚เฐšเฐตเฐšเฑเฐšเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐชเฑเฐฐเฑ‹เฐ—เฐคเฐฟเฐชเฑˆ เฐจเฐฟเฐตเฑ‡เฐฆเฐฟเฐ‚เฐšเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐถเฐฟเฐ•เฑเฐทเฐฃ เฐฒเฑ‚เฐชเฑโ€Œเฐจเฑ เฐคเฐจเฐฟเฐ–เฑ€ เฐšเฑ‡เฐฏเฐตเฐšเฑเฐšเฑ เฐฒเฑ‡เฐฆเฐพ เฐถเฐฟเฐ•เฑเฐทเฐฃเฐจเฑ เฐฎเฑเฐ‚เฐฆเฑเฐ—เฐพเฐจเฑ‡ เฐ†เฐชเฐตเฐšเฑเฐšเฑ. เฐถเฐฟเฐ•เฑเฐทเฐฃ เฐฒเฑ‚เฐชเฑโ€Œเฐฒเฑ‹เฐจเฑ‡ เฐ•เฐพเฐฒเฑโ€Œเฐฌเฑเฐฏเฐพเฐ•เฑโ€Œเฐฒเฑ เฐฆเฑ‡เฐจเฐฟเฐจเฑ€ เฐธเฐตเฐฐเฐฟเฐ‚เฐšเฐตเฑ. เฐฒเฐพเฐธเฑ เฐซเฐ‚เฐ•เฑเฐทเฐจเฑ เฐตเฐ‚เฐŸเฐฟเฐตเฐพเฐŸเฐฟเฐจเฐฟ เฐ…เฐจเฑเฐ•เฑ‚เฐฒเฑ€เฐ•เฐฐเฐฟเฐ‚เฐšเฐกเฐพเฐจเฐฟเฐ•เฐฟ, เฐฎเฑ€เฐฐเฑ เฐฌเฐฆเฑเฐฒเฑเฐ—เฐพ [`Trainer`]เฐจเฐฟ เฐ‰เฐชเฐตเฐฐเฑเฐ—เฐ‚ เฐšเฑ‡เฐฏเฐพเฐฒเฐฟ. ## TensorFlowเฐคเฑ‹ เฐถเฐฟเฐ•เฑเฐทเฐฃ เฐชเฑŠเฐ‚เฐฆเฐ‚เฐกเฐฟ เฐ…เฐจเฑเฐจเฐฟ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐฒเฑ เฐชเฑเฐฐเฐพเฐฎเฐพเฐฃเฐฟเฐ•เฐฎเฑˆเฐจ [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) เฐ•เฐพเฐฌเฐŸเฑเฐŸเฐฟ เฐตเฐพเฐŸเฐฟเฐจเฐฟ [Keras]เฐคเฑ‹ TensorFlowเฐฒเฑ‹ เฐถเฐฟเฐ•เฑเฐทเฐฃ เฐชเฑŠเฐ‚เฐฆเฐตเฐšเฑเฐšเฑ(https: //keras.io/) API. ๐Ÿค— เฐŸเฑเฐฐเฐพเฐจเฑเฐธเฑโ€Œเฐซเฐพเฐฐเฑเฐฎเฐฐเฑโ€Œเฐฒเฑ เฐฎเฑ€ เฐกเฑ‡เฐŸเฐพเฐธเฑ†เฐŸเฑโ€Œเฐจเฐฟ เฐธเฑเฐฒเฐญเฐ‚เฐ—เฐพ `tf.data.Dataset`เฐ—เฐพ เฐฒเฑ‹เฐกเฑ เฐšเฑ‡เฐฏเฐกเฐพเฐจเฐฟเฐ•เฐฟ [`~TFPreTrainedModel.prepare_tf_dataset`] เฐชเฐฆเฑเฐงเฐคเฐฟเฐจเฐฟ เฐ…เฐ‚เฐฆเฐœเฑ‡เฐธเฑเฐคเฑเฐ‚เฐฆเฐฟ เฐ•เฐพเฐฌเฐŸเฑเฐŸเฐฟ เฐฎเฑ€เฐฐเฑ เฐตเฑ†เฐ‚เฐŸเฐจเฑ‡ Keras' [`compile`](https://keras.io/api/models/model_training_apis/#compile-method) เฐฎเฐฐเฐฟเฐฏเฑ [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) เฐชเฐฆเฑเฐงเฐคเฑเฐฒเฑ. 1. เฐฎเฑ€เฐฐเฑ [`TFPreTrainedModel`] เฐฒเฑ‡เฐฆเฐพ [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model)เฐคเฑ‹ เฐชเฑเฐฐเฐพเฐฐเฐ‚เฐญเฐฟเฐธเฑเฐคเฐพเฐฐเฑ: ```py >>> from transformers import TFAutoModelForSequenceClassification >>> model = TFAutoModelForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased") ``` 2. เฐŸเฑ‹เฐ•เฑ†เฐจเฑˆเฐœเฐฐเฑ, เฐ‡เฐฎเฑ‡เฐœเฑ เฐชเฑเฐฐเฐพเฐธเฑ†เฐธเฐฐเฑ, เฐซเฑ€เฐšเฐฐเฑ เฐŽเฐ•เฑเฐธเฑโ€ŒเฐŸเฑเฐฐเฐพเฐ•เฑเฐŸเฐฐเฑ เฐฒเฑ‡เฐฆเฐพ เฐชเฑเฐฐเฐพเฐธเฑ†เฐธเฐฐเฑ เฐตเฐ‚เฐŸเฐฟ เฐชเฑเฐฐเฑ€เฐชเฑเฐฐเฐพเฐธเฑ†เฐธเฐฟเฐ‚เฐ—เฑ เฐ•เฑเฐฒเฐพเฐธเฑโ€Œเฐจเฐฟ เฐฒเฑ‹เฐกเฑ เฐšเฑ‡เฐฏเฐ‚เฐกเฐฟ: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased") ``` 3. เฐกเฑ‡เฐŸเฐพเฐธเฑ†เฐŸเฑโ€Œเฐจเฑ เฐŸเฑ‹เฐ•เฐจเฑˆเฐœเฑ เฐšเฑ‡เฐฏเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐ’เฐ• เฐซเฐ‚เฐ•เฑเฐทเฐจเฑโ€Œเฐจเฑ เฐธเฑƒเฐทเฑเฐŸเฐฟเฐ‚เฐšเฐ‚เฐกเฐฟ: ```py >>> def tokenize_dataset(dataset): ... return tokenizer(dataset["text"]) # doctest: +SKIP ``` 4. [`~datasets.Dataset.map`]เฐคเฑ‹ เฐฎเฑŠเฐคเฑเฐคเฐ‚ เฐกเฑ‡เฐŸเฐพเฐธเฑ†เฐŸเฑโ€Œเฐชเฑˆ เฐŸเฑ‹เฐ•เฑ†เฐจเฑˆเฐœเฐฐเฑโ€Œเฐจเฐฟ เฐตเฐฐเฑเฐคเฐฟเฐ‚เฐชเฐœเฑ‡เฐฏเฐฟ, เฐ†เฐชเฑˆ เฐกเฑ‡เฐŸเฐพเฐธเฑ†เฐŸเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐŸเฑ‹เฐ•เฑ†เฐจเฑˆเฐœเฐฐเฑโ€Œเฐจเฑ [`~TFPreTrainedModel.prepare_tf_dataset`]เฐ•เฐฟ เฐชเฐ‚เฐชเฐ‚เฐกเฐฟ. 
เฐฎเฑ€เฐฐเฑ เฐ•เฐพเฐตเฐพเฐฒเฐจเฑเฐ•เฑเฐ‚เฐŸเฑ‡ เฐฌเฑเฐฏเฐพเฐšเฑ เฐชเฐฐเฐฟเฐฎเฐพเฐฃเฐพเฐจเฑเฐจเฐฟ เฐ•เฑ‚เฐกเฐพ เฐฎเฐพเฐฐเฑเฐšเฐตเฐšเฑเฐšเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐกเฑ‡เฐŸเฐพเฐธเฑ†เฐŸเฑโ€Œเฐจเฑ เฐ‡เฐ•เฑเฐ•เฐก เฐทเฐซเฑเฐฒเฑ เฐšเฑ‡เฐฏเฐตเฐšเฑเฐšเฑ: ```py >>> dataset = dataset.map(tokenize_dataset) # doctest: +SKIP >>> tf_dataset = model.prepare_tf_dataset( ... dataset["train"], batch_size=16, shuffle=True, tokenizer=tokenizer ... ) # doctest: +SKIP ``` 5. เฐฎเฑ€เฐฐเฑ เฐธเฐฟเฐฆเฑเฐงเฐ‚เฐ—เฐพ เฐ‰เฐจเฑเฐจเฐชเฑเฐชเฑเฐกเฑ, เฐถเฐฟเฐ•เฑเฐทเฐฃเฐจเฑ เฐชเฑเฐฐเฐพเฐฐเฐ‚เฐญเฐฟเฐ‚เฐšเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐฎเฑ€เฐฐเฑ `เฐ•เฐ‚เฐชเฑˆเฐฒเฑ` เฐฎเฐฐเฐฟเฐฏเฑ `เฐซเฐฟเฐŸเฑ`เฐ•เฐฟ เฐ•เฐพเฐฒเฑ เฐšเฑ‡เฐฏเฐตเฐšเฑเฐšเฑ. เฐŸเฑเฐฐเฐพเฐจเฑเฐธเฑโ€Œเฐซเฐพเฐฐเฑเฐฎเฐฐเฑเฐธเฑ เฐฎเฑ‹เฐกเฐฒเฑเฐธเฑ เฐ…เฐจเฑเฐจเฑ€ เฐกเฐฟเฐซเฐพเฐฒเฑเฐŸเฑ เฐŸเฐพเฐธเฑเฐ•เฑ-เฐธเฐ‚เฐฌเฐ‚เฐงเฐฟเฐค เฐฒเฐพเฐธเฑ เฐซเฐ‚เฐ•เฑเฐทเฐจเฑโ€Œเฐจเฐฟ เฐ•เฐฒเฐฟเฐ—เฐฟ เฐ‰เฐจเฑเฐจเฐพเฐฏเฐจเฐฟ เฐ—เฑเฐฐเฑเฐคเฑเฐ‚เฐšเฑเฐ•เฑ‹เฐ‚เฐกเฐฟ, เฐ•เฐพเฐฌเฐŸเฑเฐŸเฐฟ เฐฎเฑ€เฐฐเฑ เฐ•เฑ‹เฐฐเฑเฐ•เฑเฐจเฑ‡ เฐตเฐฐเฐ•เฑ เฐฎเฑ€เฐฐเฑ เฐ’เฐ•เฐฆเฐพเฐจเฐฟเฐจเฐฟ เฐชเฑ‡เฐฐเฑเฐ•เฑŠเฐจเฐตเฐฒเฐธเฐฟเฐจ เฐ…เฐตเฐธเฐฐเฐ‚ เฐฒเฑ‡เฐฆเฑ: ```py >>> from tensorflow.keras.optimizers import Adam >>> model.compile(optimizer=Adam(3e-5)) # No loss argument! >>> model.fit(tf_dataset) # doctest: +SKIP ``` ## เฐคเฐฐเฐตเฐพเฐค เฐเฐ‚เฐŸเฐฟ? เฐ‡เฐชเฑเฐชเฑเฐกเฑ เฐฎเฑ€เฐฐเฑ ๐Ÿค— เฐŸเฑเฐฐเฐพเฐจเฑเฐธเฑโ€Œเฐซเฐพเฐฐเฑเฐฎเฐฐเฑเฐธเฑ เฐคเฑเฐตเฐฐเฐฟเฐค เฐชเฐฐเฑเฐฏเฐŸเฐจเฐจเฑ เฐชเฑ‚เฐฐเฑเฐคเฐฟ เฐšเฑ‡เฐธเฐพเฐฐเฑ, เฐฎเฐพ เฐ—เฑˆเฐกเฑโ€Œเฐฒเฐจเฑ เฐคเฐจเฐฟเฐ–เฑ€ เฐšเฑ‡เฐฏเฐ‚เฐกเฐฟ เฐฎเฐฐเฐฟเฐฏเฑ เฐ…เฐจเฑเฐ•เฑ‚เฐฒ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐจเฑ เฐตเฑเฐฐเฐพเฐฏเฐกเฐ‚, เฐŸเฐพเฐธเฑเฐ•เฑ เฐ•เฑ‹เฐธเฐ‚ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐจเฑ เฐšเฐ•เฑเฐ•เฐ—เฐพ เฐคเฑ€เฐฐเฑเฐšเฐฟเฐฆเฐฟเฐฆเฑเฐฆเฐกเฐ‚ เฐฎเฐฐเฐฟเฐฏเฑ เฐธเฑเฐ•เฑเฐฐเฐฟเฐชเฑเฐŸเฑโ€Œเฐคเฑ‹ เฐฎเฑ‹เฐกเฐฒเฑโ€Œเฐ•เฑ เฐถเฐฟเฐ•เฑเฐทเฐฃ เฐ‡เฐตเฑเฐตเฐกเฐ‚ เฐตเฐ‚เฐŸเฐฟ เฐฎเฐฐเฐฟเฐ‚เฐค เฐจเฐฟเฐฐเฑเฐฆเฐฟเฐทเฑเฐŸเฐฎเฑˆเฐจ เฐชเฐจเฑเฐฒเฐจเฑ เฐŽเฐฒเฐพ เฐšเฑ‡เฐฏเฐพเฐฒเฑ‹ เฐคเฑ†เฐฒเฑเฐธเฑเฐ•เฑ‹เฐ‚เฐกเฐฟ. ๐Ÿค— เฐŸเฑเฐฐเฐพเฐจเฑเฐธเฑโ€Œเฐซเฐพเฐฐเฑเฐฎเฐฐเฑเฐธเฑ เฐ•เฑ‹เฐฐเฑ เฐ•เฐพเฐจเฑเฐธเฑ†เฐชเฑเฐŸเฑโ€Œเฐฒ เฐ—เฑเฐฐเฐฟเฐ‚เฐšเฐฟ เฐฎเฐฐเฐฟเฐ‚เฐค เฐคเฑ†เฐฒเฑเฐธเฑเฐ•เฑ‹เฐตเฐกเฐพเฐจเฐฟเฐ•เฐฟ เฐฎเฑ€เฐ•เฑ เฐ†เฐธเฐ•เฑเฐคเฐฟ เฐ‰เฐ‚เฐŸเฑ‡, เฐ’เฐ• เฐ•เฐชเฑเฐชเฑ เฐ•เฐพเฐซเฑ€ เฐคเฐพเฐ—เฐฟ, เฐฎเฐพ เฐ•เฐพเฐจเฑเฐธเฑ†เฐชเฑเฐŸเฑเฐตเฐฒเฑ เฐ—เฑˆเฐกเฑโ€Œเฐฒเฐจเฑ เฐšเฑ‚เฐกเฐ‚เฐกเฐฟ!
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/ko/fast_tokenizers.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๐Ÿค— Tokenizers ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ํ† ํฌ๋‚˜์ด์ € ์‚ฌ์šฉํ•˜๊ธฐ[[use-tokenizers-from-tokenizers]] [`PreTrainedTokenizerFast`]๋Š” [๐Ÿค— Tokenizers](https://huggingface.co/docs/tokenizers) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์— ๊ธฐ๋ฐ˜ํ•ฉ๋‹ˆ๋‹ค. ๐Ÿค— Tokenizers ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ํ† ํฌ๋‚˜์ด์ €๋Š” ๐Ÿค— Transformers๋กœ ๋งค์šฐ ๊ฐ„๋‹จํ•˜๊ฒŒ ๋ถˆ๋Ÿฌ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ตฌ์ฒด์ ์ธ ๋‚ด์šฉ์— ๋“ค์–ด๊ฐ€๊ธฐ ์ „์—, ๋ช‡ ์ค„์˜ ์ฝ”๋“œ๋กœ ๋”๋ฏธ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๋งŒ๋“ค์–ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```python >>> from tokenizers import Tokenizer >>> from tokenizers.models import BPE >>> from tokenizers.trainers import BpeTrainer >>> from tokenizers.pre_tokenizers import Whitespace >>> tokenizer = Tokenizer(BPE(unk_token="[UNK]")) >>> trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"]) >>> tokenizer.pre_tokenizer = Whitespace() >>> files = [...] >>> tokenizer.train(files, trainer) ``` ์šฐ๋ฆฌ๊ฐ€ ์ •์˜ํ•œ ํŒŒ์ผ์„ ํ†ตํ•ด ์ด์ œ ํ•™์Šต๋œ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๊ฐ–๊ฒŒ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ด ๋Ÿฐํƒ€์ž„์—์„œ ๊ณ„์† ์‚ฌ์šฉํ•˜๊ฑฐ๋‚˜ JSON ํŒŒ์ผ๋กœ ์ €์žฅํ•˜์—ฌ ๋‚˜์ค‘์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ## ํ† ํฌ๋‚˜์ด์ € ๊ฐ์ฒด๋กœ๋ถ€ํ„ฐ ์ง์ ‘ ๋ถˆ๋Ÿฌ์˜ค๊ธฐ[[loading-directly-from-the-tokenizer-object]] ๐Ÿค— Transformers ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ ์ด ํ† ํฌ๋‚˜์ด์ € ๊ฐ์ฒด๋ฅผ ํ™œ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. [`PreTrainedTokenizerFast`] ํด๋ž˜์Šค๋Š” ์ธ์Šคํ„ด์Šคํ™”๋œ *ํ† ํฌ๋‚˜์ด์ €* ๊ฐ์ฒด๋ฅผ ์ธ์ˆ˜๋กœ ๋ฐ›์•„ ์‰ฝ๊ฒŒ ์ธ์Šคํ„ด์Šคํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```python >>> from transformers import PreTrainedTokenizerFast >>> fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=tokenizer) ``` ์ด์ œ `fast_tokenizer` ๊ฐ์ฒด๋Š” ๐Ÿค— Transformers ํ† ํฌ๋‚˜์ด์ €์—์„œ ๊ณต์œ ํ•˜๋Š” ๋ชจ๋“  ๋ฉ”์†Œ๋“œ์™€ ํ•จ๊ป˜ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ์ž์„ธํ•œ ๋‚ด์šฉ์€ [ํ† ํฌ๋‚˜์ด์ € ํŽ˜์ด์ง€](main_classes/tokenizer)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ## JSON ํŒŒ์ผ์—์„œ ๋ถˆ๋Ÿฌ์˜ค๊ธฐ[[loading-from-a-JSON-file]] <!--In order to load a tokenizer from a JSON file, let's first start by saving our tokenizer:--> JSON ํŒŒ์ผ์—์„œ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๋ถˆ๋Ÿฌ์˜ค๊ธฐ ์œ„ํ•ด, ๋จผ์ € ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์ €์žฅํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```python >>> tokenizer.save("tokenizer.json") ``` JSON ํŒŒ์ผ์„ ์ €์žฅํ•œ ๊ฒฝ๋กœ๋Š” `tokenizer_file` ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ [`PreTrainedTokenizerFast`] ์ดˆ๊ธฐํ™” ๋ฉ”์†Œ๋“œ์— ์ „๋‹ฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```python >>> from transformers import PreTrainedTokenizerFast >>> fast_tokenizer = PreTrainedTokenizerFast(tokenizer_file="tokenizer.json") ``` ์ด์ œ `fast_tokenizer` ๊ฐ์ฒด๋Š” ๐Ÿค— Transformers ํ† ํฌ๋‚˜์ด์ €์—์„œ ๊ณต์œ ํ•˜๋Š” ๋ชจ๋“  ๋ฉ”์†Œ๋“œ์™€ ํ•จ๊ป˜ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ์ž์„ธํ•œ ๋‚ด์šฉ์€ [ํ† ํฌ๋‚˜์ด์ € ํŽ˜์ด์ง€](main_classes/tokenizer)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”.
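As a quick sanity check that the wrapped tokenizer really behaves like any other Transformers tokenizer, you can call it on a sentence and save it for later reuse. A small sketch reusing the `fast_tokenizer` object created above; the sample text and the `my-tokenizer` directory are arbitrary:

```python
>>> encoding = fast_tokenizer("A sample sentence for the freshly trained tokenizer")
>>> encoding["input_ids"]  # ids produced by the BPE model trained earlier
>>> encoding["attention_mask"]  # the usual fields are available, just like with pretrained tokenizers

>>> fast_tokenizer.save_pretrained("my-tokenizer")  # the saved directory can later be reloaded, e.g. with AutoTokenizer
```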
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/ko/model_memory_anatomy.md
<!--- Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # ๋ชจ๋ธ ํ•™์Šต ํ•ด๋ถ€ํ•˜๊ธฐ [[model-training-anatomy]] ๋ชจ๋ธ ํ›ˆ๋ จ ์†๋„์™€ ๋ฉ”๋ชจ๋ฆฌ ํ™œ์šฉ์˜ ํšจ์œจ์„ฑ์„ ํ–ฅ์ƒ์‹œํ‚ค๊ธฐ ์œ„ํ•ด ์ ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ์„ฑ๋Šฅ ์ตœ์ ํ™” ๊ธฐ์ˆ ์„ ์ดํ•ดํ•˜๋ ค๋ฉด GPU๊ฐ€ ํ›ˆ๋ จ ์ค‘์— ์–ด๋–ป๊ฒŒ ํ™œ์šฉ๋˜๋Š”์ง€, ๊ทธ๋ฆฌ๊ณ  ์ˆ˜ํ–‰๋˜๋Š” ์—ฐ์‚ฐ์— ๋”ฐ๋ผ ์—ฐ์‚ฐ ๊ฐ•๋„๊ฐ€ ์–ด๋–ป๊ฒŒ ๋ณ€ํ•˜๋Š”์ง€์— ์ต์ˆ™ํ•ด์ ธ์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋จผ์ € GPU ํ™œ์šฉ๊ณผ ๋ชจ๋ธ ํ›ˆ๋ จ ์‹คํ–‰์— ๋Œ€ํ•œ ์˜ˆ์‹œ๋ฅผ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ๋ฐ๋ชจ๋ฅผ ์œ„ํ•ด ๋ช‡๋ช‡ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์„ค์น˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```bash pip install transformers datasets accelerate nvidia-ml-py3 ``` `nvidia-ml-py3` ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋Š” Python ๋‚ด๋ถ€์—์„œ ๋ชจ๋ธ์˜ ๋ฉ”๋ชจ๋ฆฌ ์‚ฌ์šฉ๋Ÿ‰์„ ๋ชจ๋‹ˆํ„ฐ๋งํ•  ์ˆ˜ ์žˆ๊ฒŒ ํ•ด์ค๋‹ˆ๋‹ค. ํ„ฐ๋ฏธ๋„์˜ `nvidia-smi` ๋ช…๋ น์–ด์— ์ต์ˆ™ํ•  ์ˆ˜ ์žˆ๋Š”๋ฐ, ์ด ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋Š” Python์—์„œ ์ง์ ‘ ๋™์ผํ•œ ์ •๋ณด์— ์ ‘๊ทผํ•  ์ˆ˜ ์žˆ๊ฒŒ ํ•ด์ค๋‹ˆ๋‹ค. ๊ทธ ๋‹ค์Œ, 100๊ณผ 30000 ์‚ฌ์ด์˜ ๋ฌด์ž‘์œ„ ํ† ํฐ ID์™€ ๋ถ„๋ฅ˜๊ธฐ๋ฅผ ์œ„ํ•œ ์ด์ง„ ๋ ˆ์ด๋ธ”์ธ ๋”๋ฏธ ๋ฐ์ดํ„ฐ๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ๊ธธ์ด๊ฐ€ ๊ฐ๊ฐ 512์ธ ์ด 512๊ฐœ์˜ ์‹œํ€€์Šค๋ฅผ ๊ฐ€์ ธ์™€ PyTorch ํ˜•์‹์˜ [`~datasets.Dataset`]์— ์ €์žฅํ•ฉ๋‹ˆ๋‹ค. ```py >>> import numpy as np >>> from datasets import Dataset >>> seq_len, dataset_size = 512, 512 >>> dummy_data = { ... "input_ids": np.random.randint(100, 30000, (dataset_size, seq_len)), ... "labels": np.random.randint(0, 1, (dataset_size)), ... } >>> ds = Dataset.from_dict(dummy_data) >>> ds.set_format("pt") ``` GPU ํ™œ์šฉ ๋ฐ [`Trainer`]๋กœ ์‹คํ–‰ํ•œ ํ›ˆ๋ จ ๊ณผ์ •์— ๋Œ€ํ•œ ์š”์•ฝ ํ†ต๊ณ„๋ฅผ ์ถœ๋ ฅํ•˜๊ธฐ ์œ„ํ•ด ๋‘ ๊ฐœ์˜ ๋„์šฐ๋ฏธ ํ•จ์ˆ˜๋ฅผ ์ •์˜ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> from pynvml import * >>> def print_gpu_utilization(): ... nvmlInit() ... handle = nvmlDeviceGetHandleByIndex(0) ... info = nvmlDeviceGetMemoryInfo(handle) ... print(f"GPU memory occupied: {info.used//1024**2} MB.") >>> def print_summary(result): ... print(f"Time: {result.metrics['train_runtime']:.2f}") ... print(f"Samples/second: {result.metrics['train_samples_per_second']:.2f}") ... print_gpu_utilization() ``` ์‹œ์ž‘ํ•  ๋•Œ GPU ๋ฉ”๋ชจ๋ฆฌ๊ฐ€ ๋น„์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•ด ๋ด…์‹œ๋‹ค: ```py >>> print_gpu_utilization() GPU memory occupied: 0 MB. ``` ์ข‹์Šต๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ ๋กœ๋“œํ•˜๊ธฐ ์ „์—๋Š” ์˜ˆ์ƒ๋Œ€๋กœ GPU ๋ฉ”๋ชจ๋ฆฌ๊ฐ€ ์ ์œ ๋˜์ง€ ์•Š์•˜์Šต๋‹ˆ๋‹ค. ๊ทธ๋ ‡์ง€ ์•Š๋‹ค๋ฉด ์‚ฌ์šฉ์ž์˜ ๊ธฐ๊ธฐ์—์„œ GPU ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๋ชจ๋“  ํ”„๋กœ์„ธ์Šค๋ฅผ ์ค‘๋‹จํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์‚ฌ์šฉ์ž๋Š” ๋ชจ๋“  ์—ฌ์œ  GPU ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜๋Š” ์—†์Šต๋‹ˆ๋‹ค. ๋ชจ๋ธ์ด GPU์— ๋กœ๋“œ๋  ๋•Œ ์ปค๋„๋„ ๋กœ๋“œ๋˜๋ฏ€๋กœ 1-2GB์˜ ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ์ฐจ์ง€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์–ผ๋งˆ๋‚˜ ๋˜๋Š”์ง€ ํ™•์ธํ•˜๊ธฐ ์œ„ํ•ด GPU์— ์ž‘์€ ํ…์„œ๋ฅผ ๋กœ๋“œํ•˜์—ฌ ์ปค๋„์ด ๋กœ๋“œ๋˜๋„๋ก ํŠธ๋ฆฌ๊ฑฐํ•ฉ๋‹ˆ๋‹ค. ```py >>> import torch >>> torch.ones((1, 1)).to("cuda") >>> print_gpu_utilization() GPU memory occupied: 1343 MB. ``` ์ปค๋„๋งŒ์œผ๋กœ๋„ GPU ๋ฉ”๋ชจ๋ฆฌ์˜ 1.3GB๋ฅผ ์ฐจ์ง€ํ•ฉ๋‹ˆ๋‹ค. 
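As a side note: `nvidia-ml-py3` reports everything the process occupies on the device, including that CUDA context, while PyTorch's own counters only track tensor allocations. Comparing the two makes the kernel overhead visible — a small sketch, with numbers that will differ per GPU and driver version:

```py
>>> import torch

>>> # Memory currently held by tensors, as seen by PyTorch's caching allocator (tiny at this point)
>>> print(f"allocated by tensors: {torch.cuda.memory_allocated() / 1024**2:.1f} MB")

>>> # Memory the allocator has reserved from the driver (allocated tensors + cached blocks)
>>> print(f"reserved by allocator: {torch.cuda.memory_reserved() / 1024**2:.1f} MB")
```

The gap between these values and the `print_gpu_utilization()` output above is essentially the loaded kernels and CUDA context.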
์ด์ œ ๋ชจ๋ธ์ด ์–ผ๋งˆ๋‚˜ ๋งŽ์€ ๊ณต๊ฐ„์„ ์‚ฌ์šฉํ•˜๋Š”์ง€ ํ™•์ธํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ## ๋ชจ๋ธ ๋กœ๋“œ [[load-model]] ์šฐ์„ , `google-bert/bert-large-uncased` ๋ชจ๋ธ์„ ๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์˜ ๊ฐ€์ค‘์น˜๋ฅผ ์ง์ ‘ GPU์— ๋กœ๋“œํ•ด์„œ ๊ฐ€์ค‘์น˜๋งŒ์ด ์–ผ๋งˆ๋‚˜ ๋งŽ์€ ๊ณต๊ฐ„์„ ์ฐจ์ง€ํ•˜๋Š”์ง€ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-large-uncased").to("cuda") >>> print_gpu_utilization() GPU memory occupied: 2631 MB. ``` ๋ชจ๋ธ์˜ ๊ฐ€์ค‘์น˜๋งŒ์œผ๋กœ๋„ GPU ๋ฉ”๋ชจ๋ฆฌ๋ฅผ 1.3 GB ์ฐจ์ง€ํ•˜๋Š” ๊ฒƒ์„ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ •ํ™•ํ•œ ์ˆซ์ž๋Š” ์‚ฌ์šฉํ•˜๋Š” GPU์— ๋”ฐ๋ผ ๋‹ค๋ฆ…๋‹ˆ๋‹ค. ์ตœ์‹  GPU์—์„œ๋Š” ๋ชจ๋ธ ์‚ฌ์šฉ ์†๋„๋ฅผ ๋†’์ด๋Š” ์ตœ์ ํ™”๋œ ๋ฐฉ์‹์œผ๋กœ ๊ฐ€์ค‘์น˜๊ฐ€ ๋กœ๋“œ๋˜๋ฏ€๋กœ, ๋ชจ๋ธ์ด ๋” ๋งŽ์€ ๊ณต๊ฐ„์„ ์ฐจ์ง€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด์ œ `nvidia-smi` CLI์™€ ๋™์ผํ•œ ๊ฒฐ๊ณผ๋ฅผ ์–ป๋Š”์ง€ ๋น ๋ฅด๊ฒŒ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash nvidia-smi ``` ```bash Tue Jan 11 08:58:05 2022 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 460.91.03 Driver Version: 460.91.03 CUDA Version: 11.2 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 Tesla V100-SXM2... On | 00000000:00:04.0 Off | 0 | | N/A 37C P0 39W / 300W | 2631MiB / 16160MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| | 0 N/A N/A 3721 C ...nvs/codeparrot/bin/python 2629MiB | +-----------------------------------------------------------------------------+ ``` ์ด์ „๊ณผ ๋™์ผํ•œ ์ˆซ์ž๊ฐ€ ์ถœ๋ ฅ๋˜๊ณ  16GB ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ๊ฐ€์ง„ V100 GPU๋ฅผ ์‚ฌ์šฉํ•˜๊ณ  ์žˆ๋‹ค๋Š” ๊ฒƒ๋„ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋ฏ€๋กœ ์ด์ œ ๋ชจ๋ธ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•˜์—ฌ GPU ๋ฉ”๋ชจ๋ฆฌ ์‚ฌ์šฉ๋Ÿ‰์ด ์–ด๋–ป๊ฒŒ ๋‹ฌ๋ผ์ง€๋Š”์ง€ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์šฐ์„  ๋ช‡๋ช‡ ํ‘œ์ค€ ํ›ˆ๋ จ ์ธ์ˆ˜๋ฅผ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค: ```py default_args = { "output_dir": "tmp", "eval_strategy": "steps", "num_train_epochs": 1, "log_level": "error", "report_to": "none", } ``` <Tip> ์—ฌ๋Ÿฌ ์‹คํ—˜์„ ์‹คํ–‰ํ•  ๊ณ„ํš์ด๋ผ๋ฉด, ์‹คํ—˜ ๊ฐ„์— ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ์ œ๋Œ€๋กœ ๋น„์šฐ๊ธฐ ์œ„ํ•ด์„œ Python ์ปค๋„์„ ์‹คํ—˜ ์‚ฌ์ด๋งˆ๋‹ค ์žฌ์‹œ์ž‘ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. </Tip> ## ๊ธฐ๋ณธ ํ›ˆ๋ จ์—์„œ์˜ ๋ฉ”๋ชจ๋ฆฌ ํ™œ์šฉ [[memory-utilization-at-vanilla-training]] [`Trainer`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ, GPU ์„ฑ๋Šฅ ์ตœ์ ํ™” ๊ธฐ์ˆ ์„ ์‚ฌ์šฉํ•˜์ง€ ์•Š๊ณ  ๋ฐฐ์น˜ ํฌ๊ธฐ๊ฐ€ 4์ธ ๋ชจ๋ธ์„ ํ›ˆ๋ จ์‹œํ‚ค๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import TrainingArguments, Trainer, logging >>> logging.set_verbosity_error() >>> training_args = TrainingArguments(per_device_train_batch_size=4, **default_args) >>> trainer = Trainer(model=model, args=training_args, train_dataset=ds) >>> result = trainer.train() >>> print_summary(result) ``` ``` Time: 57.82 Samples/second: 8.86 GPU memory occupied: 14949 MB. ``` ์šฐ๋ฆฌ๋Š” ๋น„๊ต์  ์ž‘์€ ๋ฐฐ์น˜ ํฌ๊ธฐ๋กœ๋„ ์ „์ฒด GPU ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ๊ฑฐ์˜ ๋‹ค ์ฐจ์ง€ํ•˜๋Š” ๊ฒƒ์„ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
๊ทธ๋Ÿฌ๋‚˜ ๋ฐฐ์น˜ ํฌ๊ธฐ๊ฐ€ ํด์ˆ˜๋ก ๋ชจ๋ธ ์ˆ˜๋ ด ์†๋„๊ฐ€ ๋นจ๋ผ์ง€๊ณ  ์ตœ์ข… ์„ฑ๋Šฅ์ด ํ–ฅ์ƒ๋˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์Šต๋‹ˆ๋‹ค. ๊ทธ๋ž˜์„œ ์ด์ƒ์ ์œผ๋กœ๋Š” GPU ์ œํ•œ์ด ์•„๋‹Œ ์šฐ๋ฆฌ ๋ชจ๋ธ์˜ ์š”๊ตฌ์‚ฌํ•ญ์— ๋งž๊ฒŒ ๋ฐฐ์น˜ ํฌ๊ธฐ๋ฅผ ์กฐ์ •ํ•˜๋ ค๊ณ  ํ•ฉ๋‹ˆ๋‹ค. ํฅ๋ฏธ๋กญ๊ฒŒ๋„ ์šฐ๋ฆฌ๋Š” ๋ชจ๋ธ์˜ ํฌ๊ธฐ๋ณด๋‹ค ํ›จ์”ฌ ๋” ๋งŽ์€ ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์™œ ์ด๋Ÿฐ ํ˜„์ƒ์ด ๋ฐœ์ƒํ•˜๋Š”์ง€ ์กฐ๊ธˆ ๋” ์ž˜ ์ดํ•ดํ•˜๊ธฐ ์œ„ํ•ด ๋ชจ๋ธ์˜ ์—ฐ์‚ฐ๊ณผ ๋ฉ”๋ชจ๋ฆฌ ์š”๊ตฌ ์‚ฌํ•ญ์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ## ๋ชจ๋ธ์˜ ์—ฐ์‚ฐ ํ•ด๋ถ€ํ•˜๊ธฐ [[anatomy-of-models-operations]] ํŠธ๋žœ์Šคํฌ๋จธ ์•„ํ‚คํ…์ฒ˜์—๋Š” ์—ฐ์‚ฐ ๊ฐ•๋„(compute-intensity)์— ๋”ฐ๋ผ ๊ทธ๋ฃนํ™”๋œ 3๊ฐ€์ง€ ์ฃผ์š” ์—ฐ์‚ฐ ๊ทธ๋ฃน์ด ์žˆ์Šต๋‹ˆ๋‹ค. 1. **ํ…์„œ ์ถ•์•ฝ(Tensor Contractions)** ์„ ํ˜• ๋ ˆ์ด์–ด์™€ ๋ฉ€ํ‹ฐํ—ค๋“œ ์–ดํ…์…˜์˜ ๊ตฌ์„ฑ ์š”์†Œ๋Š” ๋ชจ๋‘ **ํ–‰๋ ฌ-ํ–‰๋ ฌ ๊ณฑ์…ˆ(matrix-matrix multiplications)**์„ ์ผ๊ด„์ ์œผ๋กœ ์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค. ์ด ์—ฐ์‚ฐ์€ ํŠธ๋žœ์Šคํฌ๋จธ ํ›ˆ๋ จ์—์„œ ๊ฐ€์žฅ ์—ฐ์‚ฐ ๊ฐ•๋„๊ฐ€ ๋†’์€ ๋ถ€๋ถ„์ž…๋‹ˆ๋‹ค. 2. **ํ†ต๊ณ„ ์ •๊ทœํ™”(Statistical Normalizations)** ์†Œํ”„ํŠธ๋งฅ์Šค์™€ ๋ ˆ์ด์–ด ์ •๊ทœํ™”๋Š” ํ…์„œ ์ถ•์•ฝ๋ณด๋‹ค ์—ฐ์‚ฐ ๊ฐ•๋„๊ฐ€ ๋‚ฎ์Šต๋‹ˆ๋‹ค. ํ•˜๋‚˜ ์ด์ƒ์˜ **๊ฐ์†Œ ์—ฐ์‚ฐ(reduction operations)**์„ ํฌํ•จํ•˜๋ฉฐ, ๊ทธ ๊ฒฐ๊ณผ๋Š” map์„ ํ†ตํ•ด ์ ์šฉ๋ฉ๋‹ˆ๋‹ค. 3. **์›์†Œ๋ณ„ ์—ฐ์‚ฐ์ž(Element-wise Operators)** ๊ทธ ์™ธ ์—ฐ์‚ฐ์ž๋“ค, **ํŽธํ–ฅ(biases), ๋“œ๋กญ์•„์›ƒ(dropout), ํ™œ์„ฑํ™” ํ•จ์ˆ˜(activations), ์ž”์ฐจ ์—ฐ๊ฒฐ(residual connections)**์ด ์—ฌ๊ธฐ์— ํ•ด๋‹นํ•ฉ๋‹ˆ๋‹ค. ์ด ์—ฐ์‚ฐ๋“ค์€ ์—ฐ์‚ฐ ๊ฐ•๋„๊ฐ€ ๊ฐ€์žฅ ๋‚ฎ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ์ง€์‹์€ ์„ฑ๋Šฅ ๋ณ‘๋ชฉ ํ˜„์ƒ์„ ๋ถ„์„ํ•  ๋•Œ ๋„์›€์ด ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๋‚ด์šฉ์€ [Data Movement Is All You Need: A Case Study on Optimizing Transformers 2020](https://arxiv.org/abs/2007.00072)์„ ์ฐธ๊ณ ํ•˜์˜€์Šต๋‹ˆ๋‹ค. ## ๋ชจ๋ธ์˜ ๋ฉ”๋ชจ๋ฆฌ ๊ตฌ์กฐ [[anatomy-of-models-memory]] ๋ชจ๋ธ์„ ํ›ˆ๋ จ์‹œํ‚ค๋Š” ๋ฐ๋Š” ๋‹จ์ˆœํžˆ GPU์— ๋ชจ๋ธ์„ ์˜ฌ๋ฆฌ๋Š” ๊ฒƒ๋ณด๋‹ค ํ›จ์”ฌ ๋” ๋งŽ์€ ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•œ๋‹ค๋Š” ๊ฒƒ์„ ๋ณด์•˜์Šต๋‹ˆ๋‹ค. ์ด๋Š” ํ›ˆ๋ จ ์ค‘ GPU ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๋งŽ์€ ๊ตฌ์„ฑ ์š”์†Œ๊ฐ€ ์žˆ๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. GPU ๋ฉ”๋ชจ๋ฆฌ์˜ ๊ตฌ์„ฑ ์š”์†Œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: 1. ๋ชจ๋ธ ๊ฐ€์ค‘์น˜ 2. ์˜ตํ‹ฐ๋งˆ์ด์ € ์ƒํƒœ 3. ๊ทธ๋ผ๋””์–ธํŠธ 4. ๊ทธ๋ผ๋””์–ธํŠธ ๊ณ„์‚ฐ์„ ์œ„ํ•ด ์ €์žฅ๋œ ์ˆœ๋ฐฉํ–ฅ ํ™œ์„ฑํ™” 5. ์ž„์‹œ ๋ฒ„ํผ 6. ๊ธฐ๋Šฅ๋ณ„ ๋ฉ”๋ชจ๋ฆฌ AdamW๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ˜ผํ•ฉ ์ •๋ฐ€๋„๋กœ ํ›ˆ๋ จ๋œ ์ผ๋ฐ˜์ ์ธ ๋ชจ๋ธ์€ ๋ชจ๋ธ ํŒŒ๋ผ๋ฏธํ„ฐ๋‹น 18 ๋ฐ”์ดํŠธ์™€ ํ™œ์„ฑํ™” ๋ฉ”๋ชจ๋ฆฌ๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ์ถ”๋ก  ๋‹จ๊ณ„์—์„œ๋Š” ์˜ตํ‹ฐ๋งˆ์ด์ €์™€ ๊ทธ๋ผ๋””์–ธํŠธ๊ฐ€ ํ•„์š”ํ•˜์ง€ ์•Š์œผ๋ฏ€๋กœ ์ด๋“ค์€ ์ œ์™ธํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ํ˜ผํ•ฉ ์ •๋ฐ€๋„ ์ถ”๋ก ์˜ ๊ฒฝ์šฐ ๋ชจ๋ธ ๋งค๊ฐœ๋ณ€์ˆ˜๋‹น 6 ๋ฐ”์ดํŠธ์™€ ํ™œ์„ฑํ™” ๋ฉ”๋ชจ๋ฆฌ๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ์ž์„ธํžˆ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. **๋ชจ๋ธ ๊ฐ€์ค‘์น˜:** - fp32 ํ›ˆ๋ จ์˜ ๊ฒฝ์šฐ ๋งค๊ฐœ ๋ณ€์ˆ˜ ์ˆ˜ * 4 ๋ฐ”์ดํŠธ - ํ˜ผํ•ฉ ์ •๋ฐ€๋„ ํ›ˆ๋ จ์˜ ๊ฒฝ์šฐ ๋งค๊ฐœ ๋ณ€์ˆ˜ ์ˆ˜ * 6 ๋ฐ”์ดํŠธ (๋ฉ”๋ชจ๋ฆฌ์— fp32์™€ fp16 ๋‘ ๊ฐ€์ง€ ๋ชจ๋ธ์„ ์œ ์ง€) **์˜ตํ‹ฐ๋งˆ์ด์ € ์ƒํƒœ:** - ์ผ๋ฐ˜ AdamW์˜ ๊ฒฝ์šฐ ๋งค๊ฐœ ๋ณ€์ˆ˜ ์ˆ˜ * 8 ๋ฐ”์ดํŠธ (2๊ฐ€์ง€ ์ƒํƒœ ์œ ์ง€) - [bitsandbytes](https://github.com/TimDettmers/bitsandbytes)์™€ ๊ฐ™์€ 8๋น„ํŠธ AdamW ์˜ตํ‹ฐ๋งˆ์ด์ €์˜ ๊ฒฝ์šฐ ๋งค๊ฐœ ๋ณ€์ˆ˜ ์ˆ˜ * 2 ๋ฐ”์ดํŠธ - Momentum์„ ๊ฐ€์ง„ SGD์™€ ๊ฐ™์€ ์˜ตํ‹ฐ๋งˆ์ด์ €์˜ ๊ฒฝ์šฐ ๋งค๊ฐœ ๋ณ€์ˆ˜ ์ˆ˜ * 4 ๋ฐ”์ดํŠธ (ํ•˜๋‚˜์˜ ์ƒํƒœ๋งŒ ์œ ์ง€) **๊ทธ๋ผ๋””์–ธํŠธ** - fp32 ๋˜๋Š” ํ˜ผํ•ฉ ์ •๋ฐ€๋„ ํ›ˆ๋ จ์˜ ๊ฒฝ์šฐ ๋งค๊ฐœ ๋ณ€์ˆ˜ ์ˆ˜ * 4 ๋ฐ”์ดํŠธ (๊ทธ๋ผ๋””์–ธํŠธ๋Š” ํ•ญ์ƒ fp32์œผ๋กœ ์œ ์ง€๋ฉ๋‹ˆ๋‹ค.) 
**์ˆœ๋ฐฉํ–ฅ ํ™œ์„ฑํ™”** - ํฌ๊ธฐ๋Š” ์—ฌ๋Ÿฌ ์š”์ธ์— ๋”ฐ๋ผ ๋‹ฌ๋ผ์ง€๋ฉฐ, ์ฃผ์š” ์š”์ธ์€ ์‹œํ€€์Šค ๊ธธ์ด, ์€๋‹‰ ์ƒํƒœ์˜ ํฌ๊ธฐ ๋ฐ ๋ฐฐ์น˜ ํฌ๊ธฐ์ž…๋‹ˆ๋‹ค. ์ˆœ๋ฐฉํ–ฅ ๋ฐ ์—ญ๋ฐฉํ–ฅ ํ•จ์ˆ˜์—์„œ ์ „๋‹ฌ ๋ฐ ๋ฐ˜ํ™˜๋˜๋Š” ์ž…๋ ฅ๊ณผ ์ถœ๋ ฅ์ด ์žˆ์œผ๋ฉฐ, ๊ทธ๋ผ๋””์–ธํŠธ ๊ณ„์‚ฐ์„ ์œ„ํ•ด ์ €์žฅ๋œ ์ˆœ๋ฐฉํ–ฅ ํ™œ์„ฑํ™”๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. **์ž„์‹œ ๋ฉ”๋ชจ๋ฆฌ** ๋”๋ถˆ์–ด ๋ชจ๋“  ์ข…๋ฅ˜์˜ ์ž„์‹œ ๋ณ€์ˆ˜๋Š” ์—ฐ์‚ฐ์ด ์™„๋ฃŒ๋˜๋ฉด ๊ณง๋ฐ”๋กœ ํ•ด์ œ๋˜์ง€๋งŒ, ๊ทธ ์ˆœ๊ฐ„์—๋Š” ์ถ”๊ฐ€ ๋ฉ”๋ชจ๋ฆฌ๊ฐ€ ํ•„์š”ํ•  ์ˆ˜ ์žˆ๊ณ  OOM์„ ์œ ๋ฐœํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์ฝ”๋”ฉํ•  ๋•Œ ์ด๋Ÿฌํ•œ ์ž„์‹œ ๋ณ€์ˆ˜์— ๋Œ€ํ•ด ์ „๋žต์ ์œผ๋กœ ์ƒ๊ฐํ•˜๊ณ  ๋•Œ๋กœ๋Š” ๋” ์ด์ƒ ํ•„์š” ์—†๋Š” ์ž„์‹œ ๋ณ€์ˆ˜๋ฅผ ์ฆ‰์‹œ ๋ช…์‹œ์ ์œผ๋กœ ๋ฉ”๋ชจ๋ฆฌ์—์„œ ์ œ๊ฑฐํ•˜๋Š” ๊ฒƒ์ด ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. **๊ธฐ๋Šฅ๋ณ„ ๋ฉ”๋ชจ๋ฆฌ** ๊ทธ๋Ÿฐ ๋‹ค์Œ, ์†Œํ”„ํŠธ์›จ์–ด์—๋Š” ํŠน๋ณ„ํ•œ ๋ฉ”๋ชจ๋ฆฌ ์š”๊ตฌ ์‚ฌํ•ญ์ด ์žˆ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ๋น” ๊ฒ€์ƒ‰์„ ์‚ฌ์šฉํ•˜์—ฌ ํ…์ŠคํŠธ๋ฅผ ์ƒ์„ฑํ•  ๋•Œ ์†Œํ”„ํŠธ์›จ์–ด๋Š” ์ž…๋ ฅ๊ณผ ์ถœ๋ ฅ ์‚ฌ๋ณธ์„ ์—ฌ๋Ÿฌ ๊ฐœ ์œ ์ง€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. **`forward` vs `backward` ์‹คํ–‰ ์†๋„** ํ•ฉ์„ฑ๊ณฑ๊ณผ ์„ ํ˜• ๋ ˆ์ด์–ด์˜ ๊ฒฝ์šฐ ์ˆœ๋ฐฉํ–ฅ์— ๋น„ํ•ด ์—ญ๋ฐฉํ–ฅ์—์„œ๋Š” 2๋ฐฐ์˜ ํ”Œ๋กญ์Šค๊ฐ€ ํ•„์š”ํ•˜๋ฏ€๋กœ ์ผ๋ฐ˜์ ์œผ๋กœ 2๋ฐฐ ์ •๋„ ๋Š๋ฆฌ๊ฒŒ ๋ณ€ํ™˜๋ฉ๋‹ˆ๋‹ค(์—ญ๋ฐฉํ–ฅ์˜ ๊ฒฝ์šฐ ์‚ฌ์ด์ฆˆ๊ฐ€ ๋ถ€์ž์—ฐ์Šค๋Ÿฝ๊ธฐ ๋•Œ๋ฌธ์—, ๋•Œ๋กœ๋Š” ๋”์šฑ ๋Š๋ฆด ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค). ํ™œ์„ฑํ™”๋Š” ์ผ๋ฐ˜์ ์œผ๋กœ ๋Œ€์—ญํญ์ด ์ œํ•œ๋˜์–ด ์žˆ์œผ๋ฉฐ, ์ผ๋ฐ˜์ ์œผ๋กœ ์ˆœ๋ฐฉํ–ฅ๋ณด๋‹ค ์—ญ๋ฐฉํ–ฅ์—์„œ ๋” ๋งŽ์€ ๋ฐ์ดํ„ฐ๋ฅผ ์ฝ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. (์˜ˆ๋ฅผ ๋“ค์–ด, ์ˆœ๋ฐฉํ–ฅ ํ™œ์„ฑํ™” ์‹œ ํ•œ ๋ฒˆ ์”ฉ ์ฝ๊ณ  ์“ฐ์ง€๋งŒ, ์—ญ๋ฐฉํ–ฅ ํ™œ์„ฑํ™”์—์„œ๋Š” ์ˆœ๋ฐฉํ–ฅ gradOutput๊ณผ ์ถœ๋ ฅ์— ๋Œ€ํ•ด ์ด ๋‘ ๋ฒˆ ์ฝ๊ณ  gradInput์— ๋Œ€ํ•ด ํ•œ ๋ฒˆ ์”๋‹ˆ๋‹ค.) ๋ณด๋‹ค์‹œํ”ผ, GPU ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ์ ˆ์•ฝํ•˜๊ฑฐ๋‚˜ ์ž‘์—… ์†๋„๋ฅผ ๋†’์ผ ์ˆ˜ ์žˆ๋Š” ๋ช‡ ๊ฐ€์ง€ ๋ฐฉ๋ฒ•์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์ด์ œ GPU ํ™œ์šฉ๊ณผ ๊ณ„์‚ฐ ์†๋„์— ์˜ํ–ฅ์„ ์ฃผ๋Š” ๊ฒƒ์ด ๋ฌด์—‡์ธ์ง€๋ฅผ ์ดํ•ดํ–ˆ์œผ๋ฏ€๋กœ, [Methods and tools for efficient training on a single GPU](perf_train_gpu_one) ๋ฌธ์„œ ํŽ˜์ด์ง€๋ฅผ ์ฐธ์กฐํ•˜์—ฌ ์„ฑ๋Šฅ ์ตœ์ ํ™” ๊ธฐ๋ฒ•์— ๋Œ€ํ•ด ์•Œ์•„๋ณด์„ธ์š”.
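As a quick sanity check of the per-parameter budget described above (6 bytes of weights in mixed precision + 8 bytes of AdamW state + 4 bytes of gradients = 18 bytes), a back-of-the-envelope helper like the following can be useful. The ~340M parameter count for `bert-large-uncased` is approximate, and forward activations, temporary buffers and kernels come on top of this lower bound:

```py
def min_training_memory_gb(num_params: int, bytes_per_param: int = 18) -> float:
    """Lower bound for mixed-precision AdamW training: weights + optimizer states + gradients."""
    return num_params * bytes_per_param / 1024**3


# bert-large-uncased has roughly 340M parameters
print(f"{min_training_memory_gb(340_000_000):.1f} GB")  # ~5.7 GB before any activations or buffers
```

This lower bound is consistent with the ~15 GB observed in the vanilla training run earlier; most of the difference is the forward activations and temporary memory described above.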
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/ko/testing.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ํ…Œ์ŠคํŠธ[[testing]] ๋จผ์ € ๐Ÿค— Transformers ๋ชจ๋ธ์ด ์–ด๋–ป๊ฒŒ ํ…Œ์ŠคํŠธ๋˜๋Š”์ง€ ์‚ดํŽด๋ณด๊ณ , ์ƒˆ๋กœ์šด ํ…Œ์ŠคํŠธ๋ฅผ ์ž‘์„ฑ ๋ฐ ๊ธฐ์กด ํ…Œ์ŠคํŠธ๋ฅผ ๊ฐœ์„ ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์•Œ์•„๋ด…์‹œ๋‹ค. ์ด ์ €์žฅ์†Œ์—๋Š” 2๊ฐœ์˜ ํ…Œ์ŠคํŠธ ์Šค์œ„ํŠธ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค: 1. `tests` - ์ผ๋ฐ˜ API์— ๋Œ€ํ•œ ํ…Œ์ŠคํŠธ 2. `examples` - API์˜ ์ผ๋ถ€๊ฐ€ ์•„๋‹Œ ๋‹ค์–‘ํ•œ ์‘์šฉ ํ”„๋กœ๊ทธ๋žจ์— ๋Œ€ํ•œ ํ…Œ์ŠคํŠธ ## Transformers ํ…Œ์ŠคํŠธ ๋ฐฉ๋ฒ•[[how-transformers-are-tested]] 1. PR์ด ์ œ์ถœ๋˜๋ฉด 9๊ฐœ์˜ CircleCi ์ž‘์—…์œผ๋กœ ํ…Œ์ŠคํŠธ๊ฐ€ ์ง„ํ–‰๋ฉ๋‹ˆ๋‹ค. ํ•ด๋‹น PR์— ๋Œ€ํ•ด ์ƒˆ๋กœ์šด ์ปค๋ฐ‹์ด ์ƒ์„ฑ๋  ๋•Œ๋งˆ๋‹ค ํ…Œ์ŠคํŠธ๋Š” ๋‹ค์‹œ ์ง„ํ–‰๋ฉ๋‹ˆ๋‹ค. ์ด ์ž‘์—…๋“ค์€ ์ด [config ํŒŒ์ผ](https://github.com/huggingface/transformers/tree/main/.circleci/config.yml)์— ์ •์˜๋˜์–ด ์žˆ์œผ๋ฏ€๋กœ ํ•„์š”ํ•˜๋‹ค๋ฉด ์‚ฌ์šฉ์ž์˜ ๋กœ์ปฌ ํ™˜๊ฒฝ์—์„œ ๋™์ผํ•˜๊ฒŒ ์žฌํ˜„ํ•ด ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด CI ์ž‘์—…์€ `@slow` ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. 2. [github actions](https://github.com/huggingface/transformers/actions)์— ์˜ํ•ด ์‹คํ–‰๋˜๋Š” ์ž‘์—…์€ 3๊ฐœ์ž…๋‹ˆ๋‹ค: - [torch hub integration](https://github.com/huggingface/transformers/tree/main/.github/workflows/github-torch-hub.yml): torch hub integration์ด ์ž‘๋™ํ•˜๋Š”์ง€ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค. - [self-hosted (push)](https://github.com/huggingface/transformers/tree/main/.github/workflows/self-push.yml): `main` ๋ธŒ๋žœ์น˜์—์„œ ์ปค๋ฐ‹์ด ์—…๋ฐ์ดํŠธ๋œ ๊ฒฝ์šฐ์—๋งŒ GPU๋ฅผ ์ด์šฉํ•œ ๋น ๋ฅธ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” `src`, `tests`, `.github` ํด๋” ์ค‘ ํ•˜๋‚˜์— ์ฝ”๋“œ๊ฐ€ ์—…๋ฐ์ดํŠธ๋œ ๊ฒฝ์šฐ์—๋งŒ ์‹คํ–‰๋ฉ๋‹ˆ๋‹ค. (model card, notebook, ๊ธฐํƒ€ ๋“ฑ๋“ฑ์„ ์ถ”๊ฐ€ํ•œ ๊ฒฝ์šฐ ์‹คํ–‰๋˜์ง€ ์•Š๋„๋ก ํ•˜๊ธฐ ์œ„ํ•ด์„œ์ž…๋‹ˆ๋‹ค) - [self-hosted runner](https://github.com/huggingface/transformers/tree/main/.github/workflows/self-scheduled.yml): `tests` ๋ฐ `examples`์—์„œ GPU๋ฅผ ์ด์šฉํ•œ ์ผ๋ฐ˜ ํ…Œ์ŠคํŠธ, ๋Š๋ฆฐ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. ```bash RUN_SLOW=1 pytest tests/ RUN_SLOW=1 pytest examples/ ``` ๊ฒฐ๊ณผ๋Š” [์—ฌ๊ธฐ](https://github.com/huggingface/transformers/actions)์—์„œ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ## ํ…Œ์ŠคํŠธ ์‹คํ–‰[[running-tests]] ### ์‹คํ–‰ํ•  ํ…Œ์ŠคํŠธ ์„ ํƒ[[choosing-which-tests-to-run]] ์ด ๋ฌธ์„œ๋Š” ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋Š” ๋‹ค์–‘ํ•œ ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ์ž์„ธํžˆ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋“  ๋‚ด์šฉ์„ ์ฝ์€ ํ›„์—๋„, ๋” ์ž์„ธํ•œ ๋‚ด์šฉ์ด ํ•„์š”ํ•˜๋‹ค๋ฉด [์—ฌ๊ธฐ](https://docs.pytest.org/en/latest/usage.html)์—์„œ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์€ ๊ฐ€์žฅ ์œ ์šฉํ•œ ํ…Œ์ŠคํŠธ ์‹คํ–‰ ๋ฐฉ๋ฒ• ๋ช‡ ๊ฐ€์ง€์ž…๋‹ˆ๋‹ค. 
๋ชจ๋‘ ์‹คํ–‰: ```console pytest ``` ๋˜๋Š”: ```bash make test ``` ํ›„์ž๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ •์˜๋ฉ๋‹ˆ๋‹ค: ```bash python -m pytest -n auto --dist=loadfile -s -v ./tests/ ``` ์œ„์˜ ๋ช…๋ น์–ด๋Š” pytest์—๊ฒŒ ์•„๋ž˜์˜ ๋‚ด์šฉ์„ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค: - ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ CPU ์ฝ”์–ด ์ˆ˜๋งŒํผ ํ…Œ์ŠคํŠธ ํ”„๋กœ์„ธ์Šค๋ฅผ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. (RAM์ด ์ถฉ๋ถ„ํ•˜์ง€ ์•Š๋‹ค๋ฉด, ํ…Œ์ŠคํŠธ ํ”„๋กœ์„ธ์Šค ์ˆ˜๊ฐ€ ๋„ˆ๋ฌด ๋งŽ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค!) - ๋™์ผํ•œ ํŒŒ์ผ์˜ ๋ชจ๋“  ํ…Œ์ŠคํŠธ๋Š” ๋™์ผํ•œ ํ…Œ์ŠคํŠธ ํ”„๋กœ์„ธ์Šค์—์„œ ์‹คํ–‰๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. - ์ถœ๋ ฅ์„ ์บก์ฒ˜ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. - ์ž์„ธํ•œ ๋ชจ๋“œ๋กœ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. ### ๋ชจ๋“  ํ…Œ์ŠคํŠธ ๋ชฉ๋ก ๊ฐ€์ ธ์˜ค๊ธฐ[[getting-the-list-of-all-tests]] ํ…Œ์ŠคํŠธ ์Šค์œ„ํŠธ์˜ ๋ชจ๋“  ํ…Œ์ŠคํŠธ: ```bash pytest --collect-only -q ``` ์ง€์ •๋œ ํ…Œ์ŠคํŠธ ํŒŒ์ผ์˜ ๋ชจ๋“  ํ…Œ์ŠคํŠธ: ```bash pytest tests/test_optimization.py --collect-only -q ``` ### ํŠน์ • ํ…Œ์ŠคํŠธ ๋ชจ๋“ˆ ์‹คํ–‰[[run-a-specific-test-module]] ๊ฐœ๋ณ„ ํ…Œ์ŠคํŠธ ๋ชจ๋“ˆ ์‹คํ–‰ํ•˜๊ธฐ: ```bash pytest tests/utils/test_logging.py ``` ### ํŠน์ • ํ…Œ์ŠคํŠธ ์‹คํ–‰[[run-specific-tests]] ๋Œ€๋ถ€๋ถ„์˜ ํ…Œ์ŠคํŠธ ๋‚ด๋ถ€์—์„œ๋Š” unittest๊ฐ€ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ํŠน์ • ํ•˜์œ„ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋ ค๋ฉด ํ•ด๋‹น ํ…Œ์ŠคํŠธ๋ฅผ ํฌํ•จํ•˜๋Š” unittest ํด๋ž˜์Šค์˜ ์ด๋ฆ„์„ ์•Œ์•„์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๋‹ค์Œ๊ณผ ๊ฐ™์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash pytest tests/test_optimization.py::OptimizationTest::test_adam_w ``` ์œ„์˜ ๋ช…๋ น์–ด์˜ ์˜๋ฏธ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: - `tests/test_optimization.py` - ํ…Œ์ŠคํŠธ๊ฐ€ ์žˆ๋Š” ํŒŒ์ผ - `OptimizationTest` - ํด๋ž˜์Šค์˜ ์ด๋ฆ„ - `test_adam_w` - ํŠน์ • ํ…Œ์ŠคํŠธ ํ•จ์ˆ˜์˜ ์ด๋ฆ„ ํŒŒ์ผ์— ์—ฌ๋Ÿฌ ํด๋ž˜์Šค๊ฐ€ ํฌํ•จ๋œ ๊ฒฝ์šฐ, ํŠน์ • ํด๋ž˜์Šค์˜ ํ…Œ์ŠคํŠธ๋งŒ ์‹คํ–‰ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```bash pytest tests/test_optimization.py::OptimizationTest ``` ์ด ๋ช…๋ น์–ด๋Š” ํ•ด๋‹น ํด๋ž˜์Šค ๋‚ด๋ถ€์˜ ๋ชจ๋“  ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. ์•ž์—์„œ ์–ธ๊ธ‰ํ•œ ๊ฒƒ์ฒ˜๋Ÿผ `OptimizationTest` ํด๋ž˜์Šค์— ํฌํ•จ๋œ ํ…Œ์ŠคํŠธ๋ฅผ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```bash pytest tests/test_optimization.py::OptimizationTest --collect-only -q ``` ํ‚ค์›Œ๋“œ ํ‘œํ˜„์‹์„ ์‚ฌ์šฉํ•˜์—ฌ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. `adam`์ด๋ผ๋Š” ์ด๋ฆ„์„ ํฌํ•จํ•˜๋Š” ํ…Œ์ŠคํŠธ๋งŒ ์‹คํ–‰ํ•˜๋ ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```bash pytest -k adam tests/test_optimization.py ``` ๋…ผ๋ฆฌ ์—ฐ์‚ฐ์ž `and`์™€ `or`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋“  ํ‚ค์›Œ๋“œ๊ฐ€ ์ผ์น˜ํ•ด์•ผ ํ•˜๋Š”์ง€ ๋˜๋Š” ์–ด๋Š ํ•˜๋‚˜๊ฐ€ ์ผ์น˜ํ•ด์•ผ ํ•˜๋Š”์ง€๋ฅผ ๋‚˜ํƒ€๋‚ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `not`์€ ๋ถ€์ •ํ•  ๋•Œ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `adam`์ด๋ผ๋Š” ์ด๋ฆ„์„ ํฌํ•จํ•˜์ง€ ์•Š๋Š” ๋ชจ๋“  ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋ ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```bash pytest -k "not adam" tests/test_optimization.py ``` ๋‘ ๊ฐ€์ง€ ํŒจํ„ด์„ ํ•˜๋‚˜๋กœ ๊ฒฐํ•ฉํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash pytest -k "ada and not adam" tests/test_optimization.py ``` ์˜ˆ๋ฅผ ๋“ค์–ด `test_adafactor`์™€ `test_adam_w`๋ฅผ ๋ชจ๋‘ ์‹คํ–‰ํ•˜๋ ค๋ฉด ๋‹ค์Œ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash pytest -k "test_adam_w or test_adam_w" tests/test_optimization.py ``` ์—ฌ๊ธฐ์„œ `or`๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์— ์œ ์˜ํ•˜์„ธ์š”. ๋‘ ํ‚ค์›Œ๋“œ ์ค‘ ํ•˜๋‚˜๊ฐ€ ์ผ์น˜ํ•˜๋„๋ก ํ•˜๊ธฐ ์œ„ํ•œ ๋ชฉ์ ์œผ๋กœ ์‚ฌ์šฉํ•˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ๋‘ ํŒจํ„ด์ด ๋ชจ๋‘ ํฌํ•จ๋˜์–ด์•ผ ํ•˜๋Š” ํ…Œ์ŠคํŠธ๋งŒ ์‹คํ–‰ํ•˜๋ ค๋ฉด, `and`๋ฅผ ์‚ฌ์šฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```bash pytest -k "test and ada" tests/test_optimization.py ``` ### `accelerate` ํ…Œ์ŠคํŠธ ์‹คํ–‰[[run-`accelerate`-tests]] ๋ชจ๋ธ์—์„œ `accelerate` ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•ด์•ผ ํ•  ๋•Œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. 
์ด๋ฅผ ์œ„ํ•ด์„œ๋Š” ๋ช…๋ น์–ด์— `-m accelerate_tests`๋ฅผ ์ถ”๊ฐ€ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, `OPT`์—์„œ ์ด๋Ÿฌํ•œ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋ ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```bash RUN_SLOW=1 pytest -m accelerate_tests tests/models/opt/test_modeling_opt.py ``` ### ๋ฌธ์„œ ํ…Œ์ŠคํŠธ ์‹คํ–‰[[run-documentation-tests]] ์˜ˆ์‹œ ๋ฌธ์„œ๊ฐ€ ์˜ฌ๋ฐ”๋ฅธ์ง€ ํ…Œ์ŠคํŠธํ•˜๋ ค๋ฉด `doctests`๊ฐ€ ํ†ต๊ณผํ•˜๋Š”์ง€ ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, [`WhisperModel.forward`'s docstring](https://github.com/huggingface/transformers/blob/main/src/transformers/models/whisper/modeling_whisper.py#L1017-L1035)๋ฅผ ์‚ฌ์šฉํ•ด ๋ด…์‹œ๋‹ค: ```python r""" Returns: Example: ```python >>> import torch >>> from transformers import WhisperModel, WhisperFeatureExtractor >>> from datasets import load_dataset >>> model = WhisperModel.from_pretrained("openai/whisper-base") >>> feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-base") >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> inputs = feature_extractor(ds[0]["audio"]["array"], return_tensors="pt") >>> input_features = inputs.input_features >>> decoder_input_ids = torch.tensor([[1, 1]]) * model.config.decoder_start_token_id >>> last_hidden_state = model(input_features, decoder_input_ids=decoder_input_ids).last_hidden_state >>> list(last_hidden_state.shape) [1, 2, 512] ```""" ``` ์›ํ•˜๋Š” ํŒŒ์ผ์˜ ๋ชจ๋“  docstring ์˜ˆ์ œ๋ฅผ ์ž๋™์œผ๋กœ ํ…Œ์ŠคํŠธํ•˜๋ ค๋ฉด ๋‹ค์Œ ๋ช…๋ น์„ ์‹คํ–‰ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค: ```bash pytest --doctest-modules <path_to_file_or_dir> ``` ํŒŒ์ผ์˜ ํ™•์žฅ์ž๊ฐ€ markdown์ธ ๊ฒฝ์šฐ `--doctest-glob="*.md"` ์ธ์ˆ˜๋ฅผ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ### ์ˆ˜์ •๋œ ํ…Œ์ŠคํŠธ๋งŒ ์‹คํ–‰[[run-only-modified-tests]] ์ˆ˜์ •๋œ ํŒŒ์ผ ๋˜๋Š” ํ˜„์žฌ ๋ธŒ๋žœ์น˜ (Git ๊ธฐ์ค€)์™€ ๊ด€๋ จ๋œ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋ ค๋ฉด [pytest-picked](https://github.com/anapaulagomes/pytest-picked)์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Š” ๋ณ€๊ฒฝํ•œ ๋‚ด์šฉ์ด ํ…Œ์ŠคํŠธ์— ์˜ํ–ฅ์„ ์ฃผ์ง€ ์•Š์•˜๋Š”์ง€ ๋น ๋ฅด๊ฒŒ ํ™•์ธํ•  ์ˆ˜ ์žˆ๋Š” ์ข‹์€ ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค. ```bash pip install pytest-picked ``` ```bash pytest --picked ``` ์ˆ˜์ •๋˜์—ˆ์ง€๋งŒ, ์•„์ง ์ปค๋ฐ‹๋˜์ง€ ์•Š์€ ๋ชจ๋“  ํŒŒ์ผ ๋ฐ ํด๋”์—์„œ ํ…Œ์ŠคํŠธ๊ฐ€ ์‹คํ–‰๋ฉ๋‹ˆ๋‹ค. ### ์†Œ์Šค ์ˆ˜์ • ์‹œ ์‹คํŒจํ•œ ํ…Œ์ŠคํŠธ ์ž๋™ ์žฌ์‹คํ–‰[[automatically-rerun-failed-tests-on-source-modification]] [pytest-xdist](https://github.com/pytest-dev/pytest-xdist)๋Š” ๋ชจ๋“  ์‹คํŒจํ•œ ํ…Œ์ŠคํŠธ๋ฅผ ๊ฐ์ง€ํ•˜๊ณ , ํŒŒ์ผ์„ ์ˆ˜์ •ํ•œ ํ›„์— ํŒŒ์ผ์„ ๊ณ„์† ์žฌ์‹คํ–‰ํ•˜์—ฌ ํ…Œ์ŠคํŠธ๊ฐ€ ์„ฑ๊ณตํ•  ๋•Œ๊นŒ์ง€ ๊ธฐ๋‹ค๋ฆฌ๋Š” ๋งค์šฐ ์œ ์šฉํ•œ ๊ธฐ๋Šฅ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์ˆ˜์ •ํ•œ ๋‚ด์šฉ์„ ํ™•์ธํ•œ ํ›„ pytest๋ฅผ ๋‹ค์‹œ ์‹œ์ž‘ํ•  ํ•„์š”๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค. ๋ชจ๋“  ํ…Œ์ŠคํŠธ๊ฐ€ ํ†ต๊ณผ๋  ๋•Œ๊นŒ์ง€ ์ด ๊ณผ์ •์„ ๋ฐ˜๋ณตํ•œ ํ›„ ๋‹ค์‹œ ์ „์ฒด ์‹คํ–‰์ด ์ด๋ฃจ์–ด์ง‘๋‹ˆ๋‹ค. ```bash pip install pytest-xdist ``` ์žฌ๊ท€์  ๋ชจ๋“œ์˜ ์‚ฌ์šฉ: `pytest -f` ๋˜๋Š” `pytest --looponfail` ํŒŒ์ผ์˜ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์€ `looponfailroots` ๋ฃจํŠธ ๋””๋ ‰ํ„ฐ๋ฆฌ์™€ ํ•ด๋‹น ๋‚ด์šฉ์„ (์žฌ๊ท€์ ์œผ๋กœ) ํ™•์ธํ•˜์—ฌ ๊ฐ์ง€๋ฉ๋‹ˆ๋‹ค. ์ด ๊ฐ’์˜ ๊ธฐ๋ณธ๊ฐ’์ด ์ž‘๋™ํ•˜์ง€ ์•Š๋Š” ๊ฒฝ์šฐ, `setup.cfg`์˜ ์„ค์ • ์˜ต์…˜์„ ๋ณ€๊ฒฝํ•˜์—ฌ ํ”„๋กœ์ ํŠธ์—์„œ ๋ณ€๊ฒฝํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```ini [tool:pytest] looponfailroots = transformers tests ``` ๋˜๋Š” `pytest.ini`/`tox.ini`` ํŒŒ์ผ: ```ini [pytest] looponfailroots = transformers tests ``` ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ini-file์˜ ๋””๋ ‰ํ„ฐ๋ฆฌ๋ฅผ ๊ธฐ์ค€์œผ๋กœ ์ƒ๋Œ€์ ์œผ๋กœ ์ง€์ •๋œ ๊ฐ ๋””๋ ‰ํ„ฐ๋ฆฌ์—์„œ ํŒŒ์ผ ๋ณ€๊ฒฝ ์‚ฌํ•ญ๋งŒ ์ฐพ๊ฒŒ ๋ฉ๋‹ˆ๋‹ค. 
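์ฐธ๊ณ ๋กœ, ์•„๋ž˜๋Š” ํŠน์ • ํ…Œ์ŠคํŠธ ํŒŒ์ผ ํ•˜๋‚˜๋งŒ ์ง€์ผœ๋ณด๋ฉด์„œ ์‹คํŒจํ•œ ํ…Œ์ŠคํŠธ๋ฅผ ์ž๋™์œผ๋กœ ์žฌ์‹คํ–‰ํ•˜๋Š” ๊ฐ„๋‹จํ•œ ์‚ฌ์šฉ ์˜ˆ์‹œ์ž…๋‹ˆ๋‹ค. ๋Œ€์ƒ ํŒŒ์ผ ๊ฒฝ๋กœ๋Š” ์„ค๋ช…์„ ์œ„ํ•œ ์˜ˆ์‹œ์ด๋ฏ€๋กœ ์‹ค์ œ๋กœ ์ž‘์—… ์ค‘์ธ ํŒŒ์ผ๋กœ ๋ฐ”๊ฟ”์„œ ์‚ฌ์šฉํ•˜์„ธ์š”:

```bash
# ์ง€์ •ํ•œ ํŒŒ์ผ์˜ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๊ณ , ํŒŒ์ผ์ด ์ˆ˜์ •๋  ๋•Œ๋งˆ๋‹ค ์‹คํŒจํ•œ ํ…Œ์ŠคํŠธ๋ฅผ ๋‹ค์‹œ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค (๊ฒฝ๋กœ๋Š” ์˜ˆ์‹œ)
pytest --looponfail tests/utils/test_logging.py
```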
์ด ๊ธฐ๋Šฅ์„ ๋Œ€์ฒดํ•  ์ˆ˜ ์žˆ๋Š” ๊ตฌํ˜„ ๋ฐฉ๋ฒ•์ธ [pytest-watch](https://github.com/joeyespo/pytest-watch)๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ### ํŠน์ • ํ…Œ์ŠคํŠธ ๋ชจ๋“ˆ ๊ฑด๋„ˆ๋›ฐ๊ธฐ[[skip-a-test-module]] ๋ชจ๋“  ํ…Œ์ŠคํŠธ ๋ชจ๋“ˆ์„ ์‹คํ–‰ํ•˜๋˜ ํŠน์ • ๋ชจ๋“ˆ์„ ์ œ์™ธํ•˜๋ ค๋ฉด, ์‹คํ–‰ํ•  ํ…Œ์ŠคํŠธ ๋ชฉ๋ก์„ ๋ช…์‹œ์ ์œผ๋กœ ์ง€์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, `test_modeling_*.py` ํ…Œ์ŠคํŠธ๋ฅผ ์ œ์™ธํ•œ ๋ชจ๋“  ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋ ค๋ฉด ๋‹ค์Œ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash pytest *ls -1 tests/*py | grep -v test_modeling* ``` ### ์ƒํƒœ ์ดˆ๊ธฐํ™”[[clearing state]] CI ๋นŒ๋“œ ๋ฐ (์†๋„์— ๋Œ€ํ•œ) ๊ฒฉ๋ฆฌ๊ฐ€ ์ค‘์š”ํ•œ ๊ฒฝ์šฐ, ์บ์‹œ๋ฅผ ์ง€์›Œ์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```bash pytest --cache-clear tests ``` ### ํ…Œ์ŠคํŠธ๋ฅผ ๋ณ‘๋ ฌ๋กœ ์‹คํ–‰[[running-tests-in-parallel]] ์ด์ „์— ์–ธ๊ธ‰ํ•œ ๊ฒƒ์ฒ˜๋Ÿผ `make test`๋Š” ํ…Œ์ŠคํŠธ๋ฅผ ๋ณ‘๋ ฌ๋กœ ์‹คํ–‰ํ•˜๊ธฐ ์œ„ํ•ด `pytest-xdist` ํ”Œ๋Ÿฌ๊ทธ์ธ(`-n X` ์ธ์ˆ˜, ์˜ˆ๋ฅผ ๋“ค์–ด `-n 2`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ 2๊ฐœ์˜ ๋ณ‘๋ ฌ ์ž‘์—… ์‹คํ–‰)์„ ํ†ตํ•ด ์‹คํ–‰๋ฉ๋‹ˆ๋‹ค. `pytest-xdist`์˜ `--dist=` ์˜ต์…˜์„ ์‚ฌ์šฉํ•˜์—ฌ ํ…Œ์ŠคํŠธ๋ฅผ ์–ด๋–ป๊ฒŒ ๊ทธ๋ฃนํ™”ํ• ์ง€ ์ œ์–ดํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `--dist=loadfile`์€ ํ•˜๋‚˜์˜ ํŒŒ์ผ์— ์žˆ๋Š” ํ…Œ์ŠคํŠธ๋ฅผ ๋™์ผํ•œ ํ”„๋กœ์„ธ์Šค๋กœ ๊ทธ๋ฃนํ™”ํ•ฉ๋‹ˆ๋‹ค. ์‹คํ–‰๋œ ํ…Œ์ŠคํŠธ์˜ ์ˆœ์„œ๊ฐ€ ๋‹ค๋ฅด๊ณ  ์˜ˆ์ธกํ•  ์ˆ˜ ์—†๊ธฐ ๋•Œ๋ฌธ์—, `pytest-xdist`๋กœ ํ…Œ์ŠคํŠธ ์Šค์œ„ํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋ฉด ์‹คํŒจ๊ฐ€ ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค (๊ฒ€์ถœ๋˜์ง€ ์•Š์€ ๊ฒฐํ•ฉ๋œ ํ…Œ์ŠคํŠธ๊ฐ€ ์žˆ๋Š” ๊ฒฝ์šฐ). ์ด ๊ฒฝ์šฐ [pytest-replay](https://github.com/ESSS/pytest-replay)๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ๋™์ผํ•œ ์ˆœ์„œ๋กœ ํ…Œ์ŠคํŠธ๋ฅผ ๋‹ค์‹œ ์‹คํ–‰ํ•ด์„œ ์‹คํŒจํ•˜๋Š” ์‹œํ€€์Šค๋ฅผ ์ตœ์†Œํ™”ํ•˜๋Š” ๋ฐ์— ๋„์›€์ด ๋ฉ๋‹ˆ๋‹ค. ### ํ…Œ์ŠคํŠธ ์ˆœ์„œ์™€ ๋ฐ˜๋ณต[[test-order-and-repetition]] ์ž ์žฌ์ ์ธ ์ข…์†์„ฑ ๋ฐ ์ƒํƒœ ๊ด€๋ จ ๋ฒ„๊ทธ(tear down)๋ฅผ ๊ฐ์ง€ํ•˜๊ธฐ ์œ„ํ•ด ํ…Œ์ŠคํŠธ๋ฅผ ์—ฌ๋Ÿฌ ๋ฒˆ, ์—ฐ์†์œผ๋กœ, ๋ฌด์ž‘์œ„๋กœ ๋˜๋Š” ์„ธํŠธ๋กœ ๋ฐ˜๋ณตํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ์ง์ ‘์ ์ธ ์—ฌ๋Ÿฌ ๋ฒˆ์˜ ๋ฐ˜๋ณต์€ DL์˜ ๋ฌด์ž‘์œ„์„ฑ์— ์˜ํ•ด ๋ฐœ๊ฒฌ๋˜๋Š” ์ผ๋ถ€ ๋ฌธ์ œ๋ฅผ ๊ฐ์ง€ํ•˜๋Š” ๋ฐ์—๋„ ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. #### ํ…Œ์ŠคํŠธ๋ฅผ ๋ฐ˜๋ณต[[repeat-tests]] - [pytest-flakefinder](https://github.com/dropbox/pytest-flakefinder): ```bash pip install pytest-flakefinder ``` ๋ชจ๋“  ํ…Œ์ŠคํŠธ๋ฅผ ์—ฌ๋Ÿฌ ๋ฒˆ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค(๊ธฐ๋ณธ๊ฐ’์€ 50๋ฒˆ): ```bash pytest --flake-finder --flake-runs=5 tests/test_failing_test.py ``` <Tip> ์ด ํ”Œ๋Ÿฌ๊ทธ์ธ์€ `pytest-xdist`์˜ `-n` ํ”Œ๋ž˜๊ทธ์™€ ํ•จ๊ป˜ ์ž‘๋™ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. </Tip> <Tip> `pytest-repeat`๋ผ๋Š” ๋˜ ๋‹ค๋ฅธ ํ”Œ๋Ÿฌ๊ทธ์ธ๋„ ์žˆ์ง€๋งŒ `unittest`์™€ ํ•จ๊ป˜ ์ž‘๋™ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. </Tip> #### ํ…Œ์ŠคํŠธ๋ฅผ ์ž„์˜์˜ ์ˆœ์„œ๋กœ ์‹คํ–‰[[run-tests-in-a-random-order]] ```bash pip install pytest-random-order ``` ์ค‘์š”: `pytest-random-order`๊ฐ€ ์„ค์น˜๋˜๋ฉด ํ…Œ์ŠคํŠธ๊ฐ€ ์ž๋™์œผ๋กœ ์ž„์˜์˜ ์ˆœ์„œ๋กœ ์„ž์ž…๋‹ˆ๋‹ค. ๊ตฌ์„ฑ ๋ณ€๊ฒฝ์ด๋‚˜ ์ปค๋งจ๋“œ ๋ผ์ธ ์˜ต์…˜์ด ํ•„์š”ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ์•ž์„œ ์„ค๋ช…ํ•œ ๊ฒƒ์ฒ˜๋Ÿผ ์ด๋ฅผ ํ†ตํ•ด ํ•œ ํ…Œ์ŠคํŠธ์˜ ์ƒํƒœ๊ฐ€ ๋‹ค๋ฅธ ํ…Œ์ŠคํŠธ์˜ ์ƒํƒœ์— ์˜ํ–ฅ์„ ๋ฏธ์น˜๋Š” ๊ฒฐํ•ฉ๋œ ํ…Œ์ŠคํŠธ๋ฅผ ๊ฐ์ง€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `pytest-random-order`๊ฐ€ ์„ค์น˜๋˜๋ฉด ํ•ด๋‹น ์„ธ์…˜์—์„œ ์‚ฌ์šฉ๋œ ๋žœ๋ค ์‹œ๋“œ๊ฐ€ ์ถœ๋ ฅ๋˜๋ฉฐ ์˜ˆ๋ฅผ ๋“ค์–ด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```bash pytest tests [...] Using --random-order-bucket=module Using --random-order-seed=573663 ``` ๋”ฐ๋ผ์„œ ํŠน์ • ์‹œํ€€์Šค๊ฐ€ ์‹คํŒจํ•˜๋Š” ๊ฒฝ์šฐ์—๋Š” ์ •ํ™•ํ•œ ์‹œ๋“œ๋ฅผ ์ถ”๊ฐ€ํ•˜์—ฌ ์žฌํ˜„ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```bash pytest --random-order-seed=573663 [...] 
Using --random-order-bucket=module Using --random-order-seed=573663 ``` ์ •ํ™•ํžˆ ๋™์ผํ•œ ํ…Œ์ŠคํŠธ ๋ชฉ๋ก(๋˜๋Š” ๋ชฉ๋ก์ด ์—†์Œ)์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ์—๋งŒ ์ •ํ™•ํ•œ ์ˆœ์„œ๋ฅผ ์žฌํ˜„ํ•ฉ๋‹ˆ๋‹ค. ๋ชฉ๋ก์„ ์ˆ˜๋™์œผ๋กœ ์ขํžˆ๊ธฐ ์‹œ์ž‘ํ•˜๋ฉด ๋” ์ด์ƒ ์‹œ๋“œ์— ์˜์กดํ•  ์ˆ˜ ์—†๊ณ  ์‹คํŒจํ–ˆ๋˜ ์ •ํ™•ํ•œ ์ˆœ์„œ๋กœ ์ˆ˜๋™์œผ๋กœ ๋ชฉ๋ก์„ ๋‚˜์—ดํ•ด์•ผํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  `--random-order-bucket=none`์„ ์‚ฌ์šฉํ•˜์—ฌ pytest์—๊ฒŒ ์ˆœ์„œ๋ฅผ ์ž„์˜๋กœ ์„ค์ •ํ•˜์ง€ ์•Š๋„๋ก ์•Œ๋ ค์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```bash pytest --random-order-bucket=none tests/test_a.py tests/test_c.py tests/test_b.py ``` ๋ชจ๋“  ํ…Œ์ŠคํŠธ์— ๋Œ€ํ•ด ์„ž๊ธฐ๋ฅผ ๋น„ํ™œ์„ฑํ™”ํ•˜๋ ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```bash pytest --random-order-bucket=none ``` ๊ธฐ๋ณธ์ ์œผ๋กœ `--random-order-bucket=module`์ด ๋‚ด์žฌ๋˜์–ด ์žˆ์œผ๋ฏ€๋กœ, ๋ชจ๋“ˆ ์ˆ˜์ค€์—์„œ ํŒŒ์ผ์„ ์„ž์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ `class`, `package`, `global` ๋ฐ `none` ์ˆ˜์ค€์—์„œ๋„ ์„ž์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ž์„ธํ•œ ๋‚ด์šฉ์€ ํ•ด๋‹น [๋ฌธ์„œ](https://github.com/jbasko/pytest-random-order)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ๋˜ ๋‹ค๋ฅธ ๋ฌด์ž‘์œ„ํ™”์˜ ๋Œ€์•ˆ์€ [`pytest-randomly`](https://github.com/pytest-dev/pytest-randomly)์ž…๋‹ˆ๋‹ค. ์ด ๋ชจ๋“ˆ์€ ๋งค์šฐ ์œ ์‚ฌํ•œ ๊ธฐ๋Šฅ/์ธํ„ฐํŽ˜์ด์Šค๋ฅผ ๊ฐ€์ง€๊ณ  ์žˆ์ง€๋งŒ, `pytest-random-order`์— ์žˆ๋Š” ๋ฒ„ํ‚ท ๋ชจ๋“œ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜๋Š” ์—†์Šต๋‹ˆ๋‹ค. ์„ค์น˜ ํ›„์—๋Š” ์ž๋™์œผ๋กœ ์ ์šฉ๋˜๋Š” ๋ฌธ์ œ๋„ ๋™์ผํ•˜๊ฒŒ ๊ฐ€์ง‘๋‹ˆ๋‹ค. ### ์™ธ๊ด€๊ณผ ๋Š๋‚Œ์„ ๋ณ€๊ฒฝ[[look-and-feel-variations] #### pytest-sugar ์‚ฌ์šฉ[[pytest-sugar]] [pytest-sugar](https://github.com/Frozenball/pytest-sugar)๋Š” ํ…Œ์ŠคํŠธ๊ฐ€ ๋ณด์—ฌ์ง€๋Š” ํ˜•ํƒœ๋ฅผ ๊ฐœ์„ ํ•˜๊ณ , ์ง„ํ–‰ ์ƒํ™ฉ ๋ฐ”๋ฅผ ์ถ”๊ฐ€ํ•˜๋ฉฐ, ์‹คํŒจํ•œ ํ…Œ์ŠคํŠธ์™€ ๊ฒ€์ฆ์„ ์ฆ‰์‹œ ํ‘œ์‹œํ•˜๋Š” ํ”Œ๋Ÿฌ๊ทธ์ธ์ž…๋‹ˆ๋‹ค. ์„ค์น˜ํ•˜๋ฉด ์ž๋™์œผ๋กœ ํ™œ์„ฑํ™”๋ฉ๋‹ˆ๋‹ค. ```bash pip install pytest-sugar ``` pytest-sugar ์—†์ด ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋ ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```bash pytest -p no:sugar ``` ๋˜๋Š” ์ œ๊ฑฐํ•˜์„ธ์š”. #### ๊ฐ ํ•˜์œ„ ํ…Œ์ŠคํŠธ ์ด๋ฆ„๊ณผ ์ง„ํ–‰ ์ƒํ™ฉ ๋ณด๊ณ [[report-each-sub-test-name-and-its-progress]] `pytest`๋ฅผ ํ†ตํ•ด ๋‹จ์ผ ๋˜๋Š” ๊ทธ๋ฃน์˜ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋Š” ๊ฒฝ์šฐ(`pip install pytest-pspec` ์ดํ›„): ```bash pytest --pspec tests/test_optimization.py ``` #### ์‹คํŒจํ•œ ํ…Œ์ŠคํŠธ ์ฆ‰์‹œ ํ‘œ์‹œ[[instantly-shows-failed-tests]] [pytest-instafail](https://github.com/pytest-dev/pytest-instafail)์€ ํ…Œ์ŠคํŠธ ์„ธ์…˜์˜ ๋๊นŒ์ง€ ๊ธฐ๋‹ค๋ฆฌ์ง€ ์•Š๊ณ  ์‹คํŒจ ๋ฐ ์˜ค๋ฅ˜๋ฅผ ์ฆ‰์‹œ ํ‘œ์‹œํ•ฉ๋‹ˆ๋‹ค. ```bash pip install pytest-instafail ``` ```bash pytest --instafail ``` ### GPU ์‚ฌ์šฉ ์—ฌ๋ถ€[[to-GPU-or-not-to-GPU]] GPU๊ฐ€ ํ™œ์„ฑํ™”๋œ ํ™˜๊ฒฝ์—์„œ, CPU ์ „์šฉ ๋ชจ๋“œ๋กœ ํ…Œ์ŠคํŠธํ•˜๋ ค๋ฉด `CUDA_VISIBLE_DEVICES=""`๋ฅผ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค: ```bash CUDA_VISIBLE_DEVICES="" pytest tests/utils/test_logging.py ``` ๋˜๋Š” ๋‹ค์ค‘ GPU๊ฐ€ ์žˆ๋Š” ๊ฒฝ์šฐ `pytest`์—์„œ ์‚ฌ์šฉํ•  GPU๋ฅผ ์ง€์ •ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, GPU `0` ๋ฐ `1`์ด ์žˆ๋Š” ๊ฒฝ์šฐ ๋‹ค์Œ์„ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash CUDA_VISIBLE_DEVICES="1" pytest tests/utils/test_logging.py ``` ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ๋‹ค๋ฅธ GPU์—์„œ ๋‹ค๋ฅธ ์ž‘์—…์„ ์‹คํ–‰ํ•˜๋ ค๋Š” ๊ฒฝ์šฐ ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ผ๋ถ€ ํ…Œ์ŠคํŠธ๋Š” ๋ฐ˜๋“œ์‹œ CPU ์ „์šฉ์œผ๋กœ ์‹คํ–‰ํ•ด์•ผ ํ•˜๋ฉฐ, ์ผ๋ถ€๋Š” CPU ๋˜๋Š” GPU ๋˜๋Š” TPU์—์„œ ์‹คํ–‰ํ•ด์•ผ ํ•˜๊ณ , ์ผ๋ถ€๋Š” ์—ฌ๋Ÿฌ GPU์—์„œ ์‹คํ–‰ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ ์Šคํ‚ต ๋ฐ์ฝ”๋ ˆ์ดํ„ฐ๋Š” ํ…Œ์ŠคํŠธ์˜ ์š”๊ตฌ ์‚ฌํ•ญ์„ CPU/GPU/TPU๋ณ„๋กœ ์„ค์ •ํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค: - `require_torch` - ์ด ํ…Œ์ŠคํŠธ๋Š” torch์—์„œ๋งŒ ์‹คํ–‰๋ฉ๋‹ˆ๋‹ค. 
- `require_torch_gpu` - `require_torch`์— ์ถ”๊ฐ€๋กœ ์ ์–ด๋„ 1๊ฐœ์˜ GPU๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค.
- `require_torch_multi_gpu` - `require_torch`์— ์ถ”๊ฐ€๋กœ ์ ์–ด๋„ 2๊ฐœ์˜ GPU๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค.
- `require_torch_non_multi_gpu` - `require_torch`์— ์ถ”๊ฐ€๋กœ 0๊ฐœ ๋˜๋Š” 1๊ฐœ์˜ GPU๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค.
- `require_torch_up_to_2_gpus` - `require_torch`์— ์ถ”๊ฐ€๋กœ 0๊ฐœ, 1๊ฐœ ๋˜๋Š” 2๊ฐœ์˜ GPU๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค.
- `require_torch_xla` - `require_torch`์— ์ถ”๊ฐ€๋กœ ์ ์–ด๋„ 1๊ฐœ์˜ TPU๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค.

GPU ์š”๊ตฌ ์‚ฌํ•ญ์„ ํ‘œ๋กœ ์ •๋ฆฌํ•˜๋ฉด ์•„๋ž˜์™€ ๊ฐ™์Šต๋‹ˆ๋‹ค:

| n gpus | decorator                      |
|--------+--------------------------------|
| `>= 0` | `@require_torch`               |
| `>= 1` | `@require_torch_gpu`           |
| `>= 2` | `@require_torch_multi_gpu`     |
| `< 2`  | `@require_torch_non_multi_gpu` |
| `< 3`  | `@require_torch_up_to_2_gpus`  |

์˜ˆ๋ฅผ ๋“ค์–ด, 2๊ฐœ ์ด์ƒ์˜ GPU๊ฐ€ ์žˆ๊ณ  pytorch๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ์„ ๋•Œ์—๋งŒ ์‹คํ–‰๋˜์–ด์•ผ ํ•˜๋Š” ํ…Œ์ŠคํŠธ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค:

```python no-style
@require_torch_multi_gpu
def test_example_with_multi_gpu():
```

`tensorflow`๊ฐ€ ํ•„์š”ํ•œ ๊ฒฝ์šฐ `require_tf` ๋ฐ์ฝ”๋ ˆ์ดํ„ฐ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค:

```python no-style
@require_tf
def test_tf_thing_with_tensorflow():
```

์ด๋Ÿฌํ•œ ๋ฐ์ฝ”๋ ˆ์ดํ„ฐ๋Š” ์ค‘์ฒฉ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ๋Š๋ฆฐ ํ…Œ์ŠคํŠธ๋กœ ์ง„ํ–‰๋˜๊ณ  pytorch์—์„œ ์ ์–ด๋„ ํ•˜๋‚˜์˜ GPU๊ฐ€ ํ•„์š”ํ•œ ๊ฒฝ์šฐ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์„ค์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค:

```python no-style
@require_torch_gpu
@slow
def test_example_slow_on_gpu():
```

`@parametrized`์™€ ๊ฐ™์€ ์ผ๋ถ€ ๋ฐ์ฝ”๋ ˆ์ดํ„ฐ๋Š” ํ…Œ์ŠคํŠธ ์ด๋ฆ„์„ ๋‹ค์‹œ ์ž‘์„ฑํ•˜๊ธฐ ๋•Œ๋ฌธ์— `@require_*` ์Šคํ‚ต ๋ฐ์ฝ”๋ ˆ์ดํ„ฐ๋Š” ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ์ž‘๋™ํ•˜๋ ค๋ฉด ํ•ญ์ƒ ๋งจ ๋งˆ์ง€๋ง‰์— ๋‚˜์—ด๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ์€ ์˜ฌ๋ฐ”๋ฅธ ์‚ฌ์šฉ ์˜ˆ์ž…๋‹ˆ๋‹ค:

```python no-style
@parameterized.expand(...)
@require_torch_multi_gpu
def test_integration_foo():
```

`@pytest.mark.parametrize`์—๋Š” ์ด๋Ÿฌํ•œ ์ˆœ์„œ ๋ฌธ์ œ๋Š” ์—†์œผ๋ฏ€๋กœ ์ฒ˜์Œ ํ˜น์€ ๋งˆ์ง€๋ง‰์— ์œ„์น˜์‹œํ‚ฌ ์ˆ˜ ์žˆ๊ณ  ์ด๋Ÿฌํ•œ ๊ฒฝ์šฐ์—๋„ ์ž˜ ์ž‘๋™ํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ unittest๊ฐ€ ์•„๋‹Œ ๊ฒฝ์šฐ์—๋งŒ ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค.

ํ…Œ์ŠคํŠธ ๋‚ด๋ถ€์—์„œ ๋‹ค์Œ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค:

- ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ GPU ์ˆ˜:

```python
from transformers.testing_utils import get_gpu_count

n_gpu = get_gpu_count()  # torch์™€ tf์™€ ํ•จ๊ป˜ ์ž‘๋™
```

### ๋ถ„์‚ฐ ํ›ˆ๋ จ[[distributed-training]]

`pytest`๋Š” ๋ถ„์‚ฐ ํ›ˆ๋ จ์„ ์ง์ ‘์ ์œผ๋กœ ๋‹ค๋ฃจ์ง€ ๋ชปํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฅผ ์‹œ๋„ํ•˜๋ฉด ํ•˜์œ„ ํ”„๋กœ์„ธ์Šค๊ฐ€ ์˜ฌ๋ฐ”๋ฅธ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•˜์ง€ ์•Š๊ณ  `pytest`๋ผ๊ณ  ์ƒ๊ฐํ•˜๊ธฐ์— ํ…Œ์ŠคํŠธ ์Šค์œ„ํŠธ๋ฅผ ๋ฐ˜๋ณตํ•ด์„œ ์‹คํ–‰ํ•˜๊ฒŒ ๋ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ผ๋ฐ˜ ํ”„๋กœ์„ธ์Šค๋ฅผ ์ƒ์„ฑํ•œ ๋‹ค์Œ ์—ฌ๋Ÿฌ ์›Œ์ปค๋ฅผ ์ƒ์„ฑํ•˜๊ณ  IO ํŒŒ์ดํ”„๋ฅผ ๊ด€๋ฆฌํ•˜๋„๋ก ํ•˜๋ฉด ๋™์ž‘ํ•ฉ๋‹ˆ๋‹ค.

๋‹ค์Œ์€ ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ ํ…Œ์ŠคํŠธ์ž…๋‹ˆ๋‹ค:

- [test_trainer_distributed.py](https://github.com/huggingface/transformers/tree/main/tests/trainer/test_trainer_distributed.py)
- [test_deepspeed.py](https://github.com/huggingface/transformers/tree/main/tests/deepspeed/test_deepspeed.py)

์‹คํ–‰ ์ง€์ ์œผ๋กœ ๋ฐ”๋กœ ์ด๋™ํ•˜๋ ค๋ฉด, ํ•ด๋‹น ํ…Œ์ŠคํŠธ์—์„œ `execute_subprocess_async` ํ˜ธ์ถœ์„ ๊ฒ€์ƒ‰ํ•˜์„ธ์š”.

์ด๋Ÿฌํ•œ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋ ค๋ฉด ์ ์–ด๋„ 2๊ฐœ์˜ GPU๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค.

```bash
CUDA_VISIBLE_DEVICES=0,1 RUN_SLOW=1 pytest -sv tests/test_trainer_distributed.py
```

### ์ถœ๋ ฅ ์บก์ฒ˜[[output-capture]]

ํ…Œ์ŠคํŠธ ์‹คํ–‰ ์ค‘ `stdout` ๋ฐ `stderr`๋กœ ์ „์†ก๋œ ๋ชจ๋“  ์ถœ๋ ฅ์ด ์บก์ฒ˜๋ฉ๋‹ˆ๋‹ค.
ํ…Œ์ŠคํŠธ๋‚˜ ์„ค์ • ๋ฉ”์†Œ๋“œ๊ฐ€ ์‹คํŒจํ•˜๋ฉด ์บก์ฒ˜๋œ ์ถœ๋ ฅ์€ ์ผ๋ฐ˜์ ์œผ๋กœ ์‹คํŒจ ์ถ”์  ์ •๋ณด์™€ ํ•จ๊ป˜ ํ‘œ์‹œ๋ฉ๋‹ˆ๋‹ค. ์ถœ๋ ฅ ์บก์ฒ˜๋ฅผ ๋น„ํ™œ์„ฑํ™”ํ•˜๊ณ  `stdout` ๋ฐ `stderr`๋ฅผ ์ •์ƒ์ ์œผ๋กœ ๋ฐ›์œผ๋ ค๋ฉด `-s` ๋˜๋Š” `--capture=no`๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”: ```bash pytest -s tests/utils/test_logging.py ``` ํ…Œ์ŠคํŠธ ๊ฒฐ๊ณผ๋ฅผ JUnit ํ˜•์‹์˜ ์ถœ๋ ฅ์œผ๋กœ ๋ณด๋‚ด๋ ค๋ฉด ๋‹ค์Œ์„ ์‚ฌ์šฉํ•˜์„ธ์š”: ```bash py.test tests --junitxml=result.xml ``` ### ์ƒ‰์ƒ ์กฐ์ ˆ[[color-control]] ์ƒ‰์ƒ์ด ์—†๊ฒŒ ํ•˜๋ ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์„ค์ •ํ•˜์„ธ์š”(์˜ˆ๋ฅผ ๋“ค์–ด ํฐ์ƒ‰ ๋ฐฐ๊ฒฝ์— ๋…ธ๋ž€์ƒ‰ ๊ธ€์”จ๋Š” ๊ฐ€๋…์„ฑ์ด ์ข‹์ง€ ์•Š์Šต๋‹ˆ๋‹ค): ```bash pytest --color=no tests/utils/test_logging.py ``` ### online pastebin service์— ํ…Œ์ŠคํŠธ ๋ณด๊ณ ์„œ ์ „์†ก[[sending test report to online pastebin service]] ๊ฐ ํ…Œ์ŠคํŠธ ์‹คํŒจ์— ๋Œ€ํ•œ URL์„ ๋งŒ๋“ญ๋‹ˆ๋‹ค: ```bash pytest --pastebin=failed tests/utils/test_logging.py ``` ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ๊ฐ ์‹คํŒจ์— ๋Œ€ํ•œ URL์„ ์ œ๊ณตํ•˜๋Š” remote Paste service์— ํ…Œ์ŠคํŠธ ์‹คํ–‰ ์ •๋ณด๋ฅผ ์ œ์ถœํ•ฉ๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์ธ ํ…Œ์ŠคํŠธ๋ฅผ ์„ ํƒํ•  ์ˆ˜๋„ ์žˆ๊ณ  ํ˜น์€ ํŠน์ • ์‹คํŒจ๋งŒ ๋ณด๋‚ด๋ ค๋ฉด `-x`์™€ ๊ฐ™์ด ์ถ”๊ฐ€ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์ „์ฒด ํ…Œ์ŠคํŠธ ์„ธ์…˜ ๋กœ๊ทธ์— ๋Œ€ํ•œ URL์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค: ```bash pytest --pastebin=all tests/utils/test_logging.py ``` ## ํ…Œ์ŠคํŠธ ์ž‘์„ฑ[[writing-tests]] ๐Ÿค— transformers ํ…Œ์ŠคํŠธ๋Š” ๋Œ€๋ถ€๋ถ„ `unittest`๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•˜์ง€๋งŒ, `pytest`์—์„œ ์‹คํ–‰๋˜๋ฏ€๋กœ ๋Œ€๋ถ€๋ถ„์˜ ๊ฒฝ์šฐ ๋‘ ์‹œ์Šคํ…œ์˜ ๊ธฐ๋Šฅ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ง€์›๋˜๋Š” ๊ธฐ๋Šฅ์— ๋Œ€ํ•ด [์—ฌ๊ธฐ](https://docs.pytest.org/en/stable/unittest.html)์—์„œ ํ™•์ธํ•  ์ˆ˜ ์žˆ์ง€๋งŒ, ๊ธฐ์–ตํ•ด์•ผ ํ•  ์ค‘์š”ํ•œ ์ ์€ ๋Œ€๋ถ€๋ถ„์˜ `pytest` fixture๊ฐ€ ์ž‘๋™ํ•˜์ง€ ์•Š๋Š”๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ํŒŒ๋ผ๋ฏธํ„ฐํ™”๋„ ์ž‘๋™ํ•˜์ง€ ์•Š์ง€๋งŒ, ์šฐ๋ฆฌ๋Š” ๋น„์Šทํ•œ ๋ฐฉ์‹์œผ๋กœ ์ž‘๋™ํ•˜๋Š” `parameterized` ๋ชจ๋“ˆ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ### ๋งค๊ฐœ๋ณ€์ˆ˜ํ™”[[parametrization]] ๋™์ผํ•œ ํ…Œ์ŠคํŠธ๋ฅผ ๋‹ค๋ฅธ ์ธ์ˆ˜๋กœ ์—ฌ๋Ÿฌ ๋ฒˆ ์‹คํ–‰ํ•ด์•ผ ํ•˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ์ข…์ข… ์žˆ์Šต๋‹ˆ๋‹ค. ํ…Œ์ŠคํŠธ ๋‚ด์—์„œ ์ด ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ์ง€๋งŒ, ๊ทธ๋ ‡๊ฒŒ ํ•˜๋ฉด ํ•˜๋‚˜์˜ ์ธ์ˆ˜ ์„ธํŠธ์— ๋Œ€ํ•ด ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•  ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค. ```python # test_this1.py import unittest from parameterized import parameterized class TestMathUnitTest(unittest.TestCase): @parameterized.expand( [ ("negative", -1.5, -2.0), ("integer", 1, 1.0), ("large fraction", 1.6, 1), ] ) def test_floor(self, name, input, expected): assert_equal(math.floor(input), expected) ``` ์ด์ œ ๊ธฐ๋ณธ์ ์œผ๋กœ ์ด ํ…Œ์ŠคํŠธ๋Š” `test_floor`์˜ ๋งˆ์ง€๋ง‰ 3๊ฐœ ์ธ์ˆ˜๊ฐ€ ๋งค๊ฐœ๋ณ€์ˆ˜ ๋ชฉ๋ก์˜ ํ•ด๋‹น ์ธ์ˆ˜์— ํ• ๋‹น๋˜๋Š” ๊ฒƒ์œผ๋กœ 3๋ฒˆ ์‹คํ–‰๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  `negative` ๋ฐ `integer` ๋งค๊ฐœ๋ณ€์ˆ˜ ์ง‘ํ•ฉ๋งŒ ์‹คํ–‰ํ•˜๋ ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash pytest -k "negative and integer" tests/test_mytest.py ``` ๋˜๋Š” `negative` ํ•˜์œ„ ํ…Œ์ŠคํŠธ๋ฅผ ์ œ์™ธํ•œ ๋ชจ๋“  ์„œ๋ธŒ ํ…Œ์ŠคํŠธ๋ฅผ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash pytest -k "not negative" tests/test_mytest.py ``` ์•ž์—์„œ ์–ธ๊ธ‰ํ•œ `-k` ํ•„ํ„ฐ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ ์™ธ์—๋„, ๊ฐ ์„œ๋ธŒ ํ…Œ์ŠคํŠธ์˜ ์ •ํ™•ํ•œ ์ด๋ฆ„์„ ํ™•์ธํ•œ ํ›„์— ์ผ๋ถ€ ํ˜น์€ ์ „์ฒด ์„œ๋ธŒ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
```bash pytest test_this1.py --collect-only -q ``` ๊ทธ๋ฆฌ๊ณ  ๋‹ค์Œ์˜ ๋‚ด์šฉ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค: ```bash test_this1.py::TestMathUnitTest::test_floor_0_negative test_this1.py::TestMathUnitTest::test_floor_1_integer test_this1.py::TestMathUnitTest::test_floor_2_large_fraction ``` 2๊ฐœ์˜ ํŠน์ •ํ•œ ์„œ๋ธŒ ํ…Œ์ŠคํŠธ๋งŒ ์‹คํ–‰ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash pytest test_this1.py::TestMathUnitTest::test_floor_0_negative test_this1.py::TestMathUnitTest::test_floor_1_integer ``` `transformers`์˜ ๊ฐœ๋ฐœ์ž ์ข…์†์„ฑ์— ์ด๋ฏธ ์žˆ๋Š” [parameterized](https://pypi.org/project/parameterized/) ๋ชจ๋“ˆ์€ `unittests`์™€ `pytest` ํ…Œ์ŠคํŠธ ๋ชจ๋‘์—์„œ ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ํ…Œ์ŠคํŠธ๊ฐ€ `unittest`๊ฐ€ ์•„๋‹Œ ๊ฒฝ์šฐ `pytest.mark.parametrize`๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค(์ด๋ฏธ ์žˆ๋Š” ์ผ๋ถ€ ํ…Œ์ŠคํŠธ์—์„œ ์‚ฌ์šฉ๋˜๋Š” ๊ฒฝ์šฐ๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์ฃผ๋กœ `examples` ํ•˜์œ„์— ์žˆ์Šต๋‹ˆ๋‹ค). ๋‹ค์Œ์€ `pytest`์˜ `parametrize` ๋งˆ์ปค๋ฅผ ์‚ฌ์šฉํ•œ ๋™์ผํ•œ ์˜ˆ์ž…๋‹ˆ๋‹ค: ```python # test_this2.py import pytest @pytest.mark.parametrize( "name, input, expected", [ ("negative", -1.5, -2.0), ("integer", 1, 1.0), ("large fraction", 1.6, 1), ], ) def test_floor(name, input, expected): assert_equal(math.floor(input), expected) ``` `parameterized`์™€ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ `pytest.mark.parametrize`๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด `-k` ํ•„ํ„ฐ๊ฐ€ ์ž‘๋™ํ•˜์ง€ ์•Š๋Š” ๊ฒฝ์šฐ์—๋„ ์‹คํ–‰ํ•  ์„œ๋ธŒ ํ…Œ์ŠคํŠธ๋ฅผ ์ •ํ™•ํ•˜๊ฒŒ ์ง€์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹จ, ์ด ๋งค๊ฐœ๋ณ€์ˆ˜ํ™” ํ•จ์ˆ˜๋Š” ์„œ๋ธŒ ํ…Œ์ŠคํŠธ์˜ ์ด๋ฆ„ ์ง‘ํ•ฉ์„ ์•ฝ๊ฐ„ ๋‹ค๋ฅด๊ฒŒ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋ชจ์Šต์ž…๋‹ˆ๋‹ค: ```bash pytest test_this2.py --collect-only -q ``` ๊ทธ๋ฆฌ๊ณ  ๋‹ค์Œ์˜ ๋‚ด์šฉ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค: ```bash test_this2.py::test_floor[integer-1-1.0] test_this2.py::test_floor[negative--1.5--2.0] test_this2.py::test_floor[large fraction-1.6-1] ``` ํŠน์ •ํ•œ ํ…Œ์ŠคํŠธ์— ๋Œ€ํ•ด์„œ๋งŒ ์‹คํ–‰ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash pytest test_this2.py::test_floor[negative--1.5--2.0] test_this2.py::test_floor[integer-1-1.0] ``` ์ด์ „์˜ ์˜ˆ์‹œ์™€ ๊ฐ™์ด ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ### ํŒŒ์ผ ๋ฐ ๋””๋ ‰ํ„ฐ๋ฆฌ[[files-and-directories]] ํ…Œ์ŠคํŠธ์—์„œ ์ข…์ข… ํ˜„์žฌ ํ…Œ์ŠคํŠธ ํŒŒ์ผ๊ณผ ๊ด€๋ จ๋œ ์ƒ๋Œ€์ ์ธ ์œ„์น˜๋ฅผ ์•Œ์•„์•ผ ํ•˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ํ…Œ์ŠคํŠธ๊ฐ€ ์—ฌ๋Ÿฌ ๋””๋ ‰ํ„ฐ๋ฆฌ์—์„œ ํ˜ธ์ถœ๋˜๊ฑฐ๋‚˜ ๊นŠ์ด๊ฐ€ ๋‹ค๋ฅธ ํ•˜์œ„ ๋””๋ ‰ํ„ฐ๋ฆฌ์— ์žˆ์„ ์ˆ˜ ์žˆ๊ธฐ ๋•Œ๋ฌธ์— ๊ทธ ์œ„์น˜๋ฅผ ์•„๋Š” ๊ฒƒ์€ ๊ฐ„๋‹จํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. `transformers.test_utils.TestCasePlus`๋ผ๋Š” ํ—ฌํผ ํด๋ž˜์Šค๋Š” ๋ชจ๋“  ๊ธฐ๋ณธ ๊ฒฝ๋กœ๋ฅผ ์ฒ˜๋ฆฌํ•˜๊ณ  ๊ฐ„๋‹จํ•œ ์•ก์„ธ์„œ๋ฅผ ์ œ๊ณตํ•˜์—ฌ ์ด ๋ฌธ์ œ๋ฅผ ํ•ด๊ฒฐํ•ฉ๋‹ˆ๋‹ค: - `pathlib` ๊ฐ์ฒด(์™„์ „ํžˆ ์ •ํ•ด์ง„ ๊ฒฝ๋กœ) - `test_file_path` - ํ˜„์žฌ ํ…Œ์ŠคํŠธ ํŒŒ์ผ ๊ฒฝ๋กœ (์˜ˆ: `__file__`) - test_file_dir` - ํ˜„์žฌ ํ…Œ์ŠคํŠธ ํŒŒ์ผ์ด ํฌํ•จ๋œ ๋””๋ ‰ํ„ฐ๋ฆฌ - tests_dir` - `tests` ํ…Œ์ŠคํŠธ ์Šค์œ„ํŠธ์˜ ๋””๋ ‰ํ„ฐ๋ฆฌ - examples_dir` - `examples` ํ…Œ์ŠคํŠธ ์Šค์œ„ํŠธ์˜ ๋””๋ ‰ํ„ฐ๋ฆฌ - repo_root_dir` - ์ €์žฅ์†Œ ๋””๋ ‰ํ„ฐ๋ฆฌ - src_dir` - `src`์˜ ๋””๋ ‰ํ„ฐ๋ฆฌ(์˜ˆ: `transformers` ํ•˜์œ„ ๋””๋ ‰ํ„ฐ๋ฆฌ๊ฐ€ ์žˆ๋Š” ๊ณณ) - ๋ฌธ์ž์—ด๋กœ ๋ณ€ํ™˜๋œ ๊ฒฝ๋กœ---์œ„์™€ ๋™์ผํ•˜์ง€๋งŒ, `pathlib` ๊ฐ์ฒด๊ฐ€ ์•„๋‹Œ ๋ฌธ์ž์—ด๋กœ ๊ฒฝ๋กœ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: - `test_file_path_str` - `test_file_dir_str` - `tests_dir_str` - `examples_dir_str` - `repo_root_dir_str` - `src_dir_str` ์œ„์˜ ๋‚ด์šฉ์„ ์‚ฌ์šฉํ•˜๋ ค๋ฉด ํ…Œ์ŠคํŠธ๊ฐ€ 'transformers.test_utils.TestCasePlus'์˜ ์„œ๋ธŒํด๋ž˜์Šค์— ์žˆ๋Š”์ง€ ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 
์˜ˆ๋ฅผ ๋“ค์–ด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```python from transformers.testing_utils import TestCasePlus class PathExampleTest(TestCasePlus): def test_something_involving_local_locations(self): data_dir = self.tests_dir / "fixtures/tests_samples/wmt_en_ro" ``` ๋งŒ์•ฝ `pathlib`๋ฅผ ํ†ตํ•ด ๊ฒฝ๋กœ๋ฅผ ์กฐ์ž‘ํ•  ํ•„์š”๊ฐ€ ์—†๊ฑฐ๋‚˜ ๊ฒฝ๋กœ๋ฅผ ๋ฌธ์ž์—ด๋กœ๋งŒ ํ•„์š”๋กœ ํ•˜๋Š” ๊ฒฝ์šฐ์—๋Š” `pathlib` ๊ฐ์ฒด์— `str()`์„ ํ˜ธ์ถœํ•˜๊ฑฐ๋‚˜ `_str`๋กœ ๋๋‚˜๋Š” ์ ‘๊ทผ์ž๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```python from transformers.testing_utils import TestCasePlus class PathExampleTest(TestCasePlus): def test_something_involving_stringified_locations(self): examples_dir = self.examples_dir_str ``` ### ์ž„์‹œ ํŒŒ์ผ ๋ฐ ๋””๋ ‰ํ„ฐ๋ฆฌ[[temporary-files-and-directories]] ๊ณ ์œ ํ•œ ์ž„์‹œ ํŒŒ์ผ ๋ฐ ๋””๋ ‰ํ„ฐ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์€ ๋ณ‘๋ ฌ ํ…Œ์ŠคํŠธ ์‹คํ–‰์— ์žˆ์–ด ํ•„์ˆ˜์ ์ž…๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•จ์œผ๋กœ์จ ํ…Œ์ŠคํŠธ๋“ค์ด ์„œ๋กœ์˜ ๋ฐ์ดํ„ฐ๋ฅผ ๋ฎ์–ด์“ฐ์ง€ ์•Š๊ฒŒ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ ์šฐ๋ฆฌ๋Š” ์ƒ์„ฑ๋œ ํ…Œ์ŠคํŠธ์˜ ์ข…๋ฃŒ ๋‹จ๊ณ„์—์„œ ์ด๋Ÿฌํ•œ ์ž„์‹œ ํŒŒ์ผ ๋ฐ ๋””๋ ‰ํ„ฐ๋ฆฌ๋ฅผ ์ œ๊ฑฐํ•˜๊ณ  ์‹ถ์Šต๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์ด๋Ÿฌํ•œ ์š”๊ตฌ ์‚ฌํ•ญ์„ ์ถฉ์กฑ์‹œ์ผœ์ฃผ๋Š” `tempfile`๊ณผ ๊ฐ™์€ ํŒจํ‚ค์ง€๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ํ…Œ์ŠคํŠธ๋ฅผ ๋””๋ฒ„๊น…ํ•  ๋•Œ๋Š” ์ž„์‹œ ํŒŒ์ผ์ด๋‚˜ ๋””๋ ‰ํ„ฐ๋ฆฌ์— ๋“ค์–ด๊ฐ€๋Š” ๋‚ด์šฉ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์–ด์•ผ ํ•˜๋ฉฐ, ์žฌ์‹คํ–‰๋˜๋Š” ๊ฐ ํ…Œ์ŠคํŠธ๋งˆ๋‹ค ์ž„์‹œ ํŒŒ์ผ์ด๋‚˜ ๋””๋ ‰ํ„ฐ๋ฆฌ์˜ ๊ฒฝ๋กœ์— ๋Œ€ํ•ด ๋ฌด์ž‘์œ„ ๊ฐ’์ด ์•„๋‹Œ ์ •ํ™•ํ•œ ๊ฐ’์„ ์•Œ๊ณ  ์‹ถ์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค. `transformers.test_utils.TestCasePlus`๋ผ๋Š” ๋„์šฐ๋ฏธ ํด๋ž˜์Šค๋Š” ์ด๋Ÿฌํ•œ ๋ชฉ์ ์— ๊ฐ€์žฅ ์ ํ•ฉํ•ฉ๋‹ˆ๋‹ค. ์ด ํด๋ž˜์Šค๋Š” `unittest.TestCase`์˜ ํ•˜์œ„ ํด๋ž˜์Šค์ด๋ฏ€๋กœ, ์šฐ๋ฆฌ๋Š” ์ด๊ฒƒ์„ ํ…Œ์ŠคํŠธ ๋ชจ๋“ˆ์—์„œ ์‰ฝ๊ฒŒ ์ƒ์†ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์€ ํ•ด๋‹น ํด๋ž˜์Šค๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ์˜ˆ์‹œ์ž…๋‹ˆ๋‹ค: ```python from transformers.testing_utils import TestCasePlus class ExamplesTests(TestCasePlus): def test_whatever(self): tmp_dir = self.get_auto_remove_tmp_dir() ``` ์ด ์ฝ”๋“œ๋Š” ๊ณ ์œ ํ•œ ์ž„์‹œ ๋””๋ ‰ํ„ฐ๋ฆฌ๋ฅผ ์ƒ์„ฑํ•˜๊ณ  `tmp_dir`์„ ํ•ด๋‹น ์œ„์น˜๋กœ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค. - ๊ณ ์œ ํ•œ ์ž„์‹œ ๋””๋ ‰ํ„ฐ๋ฆฌ๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค: ```python def test_whatever(self): tmp_dir = self.get_auto_remove_tmp_dir() ``` `tmp_dir`์—๋Š” ์ƒ์„ฑ๋œ ์ž„์‹œ ๋””๋ ‰ํ„ฐ๋ฆฌ์˜ ๊ฒฝ๋กœ๊ฐ€ ํฌํ•จ๋ฉ๋‹ˆ๋‹ค. ์ด๋Š” ํ…Œ์ŠคํŠธ์˜ ์ข…๋ฃŒ ๋‹จ๊ณ„์—์„œ ์ž๋™์œผ๋กœ ์ œ๊ฑฐ๋ฉ๋‹ˆ๋‹ค. - ์„ ํƒํ•œ ๊ฒฝ๋กœ๋กœ ์ž„์‹œ ๋””๋ ‰ํ„ฐ๋ฆฌ ์ƒ์„ฑ ํ›„์— ํ…Œ์ŠคํŠธ ์‹œ์ž‘ ์ „์— ๋น„์–ด ์žˆ๋Š” ์ƒํƒœ์ธ์ง€ ํ™•์ธํ•˜๊ณ , ํ…Œ์ŠคํŠธ ํ›„์—๋Š” ๋น„์šฐ์ง€ ๋งˆ์„ธ์š”. ```python def test_whatever(self): tmp_dir = self.get_auto_remove_tmp_dir("./xxx") ``` ์ด๊ฒƒ์€ ๋””๋ฒ„๊น…ํ•  ๋•Œ ํŠน์ • ๋””๋ ‰ํ„ฐ๋ฆฌ๋ฅผ ๋ชจ๋‹ˆํ„ฐ๋งํ•˜๊ณ , ๊ทธ ๋””๋ ‰ํ„ฐ๋ฆฌ์— ์ด์ „์— ์‹คํ–‰๋œ ํ…Œ์ŠคํŠธ๊ฐ€ ๋ฐ์ดํ„ฐ๋ฅผ ๋‚จ๊ธฐ์ง€ ์•Š๋„๋ก ํ•˜๋Š” ๋ฐ์— ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. - `before` ๋ฐ `after` ์ธ์ˆ˜๋ฅผ ์ง์ ‘ ์˜ค๋ฒ„๋ผ์ด๋”ฉํ•˜์—ฌ ๊ธฐ๋ณธ ๋™์ž‘์„ ๋ณ€๊ฒฝํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ ๋‹ค์Œ ์ค‘ ํ•˜๋‚˜์˜ ๋™์ž‘์œผ๋กœ ์ด์–ด์ง‘๋‹ˆ๋‹ค: - `before=True`: ํ…Œ์ŠคํŠธ ์‹œ์ž‘ ์‹œ ์ž„์‹œ ๋””๋ ‰ํ„ฐ๋ฆฌ๊ฐ€ ํ•ญ์ƒ ์ง€์›Œ์ง‘๋‹ˆ๋‹ค. - `before=False`: ์ž„์‹œ ๋””๋ ‰ํ„ฐ๋ฆฌ๊ฐ€ ์ด๋ฏธ ์กด์žฌํ•˜๋Š” ๊ฒฝ์šฐ ๊ธฐ์กด ํŒŒ์ผ์€ ๊ทธ๋Œ€๋กœ ๋‚จ์Šต๋‹ˆ๋‹ค. - `after=True`: ํ…Œ์ŠคํŠธ ์ข…๋ฃŒ ์‹œ ์ž„์‹œ ๋””๋ ‰ํ„ฐ๋ฆฌ๊ฐ€ ํ•ญ์ƒ ์‚ญ์ œ๋ฉ๋‹ˆ๋‹ค. - `after=False`: ํ…Œ์ŠคํŠธ ์ข…๋ฃŒ ์‹œ ์ž„์‹œ ๋””๋ ‰ํ„ฐ๋ฆฌ๊ฐ€ ํ•ญ์ƒ ๊ทธ๋Œ€๋กœ ์œ ์ง€๋ฉ๋‹ˆ๋‹ค. 
<Tip> `rm -r`์— ํ•ด๋‹นํ•˜๋Š” ๋ช…๋ น์„ ์•ˆ์ „ํ•˜๊ฒŒ ์‹คํ–‰ํ•˜๊ธฐ ์œ„ํ•ด, ๋ช…์‹œ์ ์ธ `tmp_dir`์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ ํ”„๋กœ์ ํŠธ ์ €์žฅ์†Œ ์ฒดํฌ ์•„์›ƒ์˜ ํ•˜์œ„ ๋””๋ ‰ํ„ฐ๋ฆฌ๋งŒ ํ—ˆ์šฉ๋ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์‹ค์ˆ˜๋กœ `/tmp`๊ฐ€ ์•„๋‹Œ ์ค‘์š”ํ•œ ํŒŒ์ผ ์‹œ์Šคํ…œ์˜ ์ผ๋ถ€๊ฐ€ ์‚ญ์ œ๋˜์ง€ ์•Š๋„๋ก ํ•ญ์ƒ `./`๋กœ ์‹œ์ž‘ํ•˜๋Š” ๊ฒฝ๋กœ๋ฅผ ์ „๋‹ฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. </Tip> <Tip> ๊ฐ ํ…Œ์ŠคํŠธ๋Š” ์—ฌ๋Ÿฌ ๊ฐœ์˜ ์ž„์‹œ ๋””๋ ‰ํ„ฐ๋ฆฌ๋ฅผ ๋“ฑ๋กํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ, ๋ณ„๋„๋กœ ์š”์ฒญํ•˜์ง€ ์•Š๋Š” ํ•œ ๋ชจ๋‘ ์ž๋™์œผ๋กœ ์ œ๊ฑฐ๋ฉ๋‹ˆ๋‹ค. </Tip> ### ์ž„์‹œ sys.path ์˜ค๋ฒ„๋ผ์ด๋“œ[[temporary-sys.path-override]] `sys.path`๋ฅผ ๋‹ค๋ฅธ ํ…Œ์ŠคํŠธ๋กœ ์ž„์‹œ๋กœ ์˜ค๋ฒ„๋ผ์ด๋“œํ•˜๊ธฐ ์œ„ํ•ด ์˜ˆ๋ฅผ ๋“ค์–ด `ExtendSysPath` ์ปจํ…์ŠคํŠธ ๊ด€๋ฆฌ์ž๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```python import os from transformers.testing_utils import ExtendSysPath bindir = os.path.abspath(os.path.dirname(__file__)) with ExtendSysPath(f"{bindir}/.."): from test_trainer import TrainerIntegrationCommon # noqa ``` ### ํ…Œ์ŠคํŠธ ๊ฑด๋„ˆ๋›ฐ๊ธฐ[[skipping-tests]] ์ด๊ฒƒ์€ ๋ฒ„๊ทธ๊ฐ€ ๋ฐœ๊ฒฌ๋˜์–ด ์ƒˆ๋กœ์šด ํ…Œ์ŠคํŠธ๊ฐ€ ์ž‘์„ฑ๋˜์—ˆ์ง€๋งŒ ์•„์ง ๊ทธ ๋ฒ„๊ทธ๊ฐ€ ์ˆ˜์ •๋˜์ง€ ์•Š์€ ๊ฒฝ์šฐ์— ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด ํ…Œ์ŠคํŠธ๋ฅผ ์ฃผ ์ €์žฅ์†Œ์— ์ปค๋ฐ‹ํ•˜๋ ค๋ฉด `make test` ์ค‘์— ๊ฑด๋„ˆ๋›ฐ๋„๋ก ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ฐฉ๋ฒ•: - **skip**์€ ํ…Œ์ŠคํŠธ๊ฐ€ ์ผ๋ถ€ ์กฐ๊ฑด์ด ์ถฉ์กฑ๋  ๊ฒฝ์šฐ์—๋งŒ ํ†ต๊ณผ๋  ๊ฒƒ์œผ๋กœ ์˜ˆ์ƒ๋˜๊ณ , ๊ทธ๋ ‡์ง€ ์•Š์œผ๋ฉด pytest๊ฐ€ ์ „์ฒด ํ…Œ์ŠคํŠธ๋ฅผ ๊ฑด๋„ˆ๋›ฐ์–ด์•ผ ํ•จ์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์ธ ์˜ˆ๋กœ๋Š” Windows๊ฐ€ ์•„๋‹Œ ํ”Œ๋žซํผ์—์„œ Windows ์ „์šฉ ํ…Œ์ŠคํŠธ๋ฅผ ๊ฑด๋„ˆ๋›ฐ๊ฑฐ๋‚˜ ์™ธ๋ถ€ ๋ฆฌ์†Œ์Šค(์˜ˆ๋ฅผ ๋“ค์–ด ๋ฐ์ดํ„ฐ๋ฒ ์ด์Šค)์— ์˜์กดํ•˜๋Š” ํ…Œ์ŠคํŠธ๋ฅผ ๊ฑด๋„ˆ๋›ฐ๋Š” ๊ฒƒ์ด ์žˆ์Šต๋‹ˆ๋‹ค. - **xfail**์€ ํ…Œ์ŠคํŠธ๊ฐ€ ํŠน์ •ํ•œ ์ด์œ ๋กœ ์ธํ•ด ์‹คํŒจํ•  ๊ฒƒ์œผ๋กœ ์˜ˆ์ƒํ•˜๋Š” ๊ฒƒ์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์ธ ์˜ˆ๋กœ๋Š” ์•„์ง ๊ตฌํ˜„๋˜์ง€ ์•Š์€ ๊ธฐ๋Šฅ์ด๋‚˜ ์•„์ง ์ˆ˜์ •๋˜์ง€ ์•Š์€ ๋ฒ„๊ทธ์˜ ํ…Œ์ŠคํŠธ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. `xfail`๋กœ ํ‘œ์‹œ๋œ ํ…Œ์ŠคํŠธ๊ฐ€ ์˜ˆ์ƒ๋Œ€๋กœ ์‹คํŒจํ•˜์ง€ ์•Š๊ณ  ํ†ต๊ณผ๋œ ๊ฒฝ์šฐ, ์ด๊ฒƒ์€ xpass์ด๋ฉฐ ํ…Œ์ŠคํŠธ ๊ฒฐ๊ณผ ์š”์•ฝ์— ๊ธฐ๋ก๋ฉ๋‹ˆ๋‹ค. ๋‘ ๊ฐ€์ง€ ์ค‘์š”ํ•œ ์ฐจ์ด์  ์ค‘ ํ•˜๋‚˜๋Š” `skip`์€ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜์ง€ ์•Š์ง€๋งŒ `xfail`์€ ์‹คํ–‰ํ•œ๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์˜ค๋ฅ˜๊ฐ€ ์žˆ๋Š” ์ฝ”๋“œ๊ฐ€ ์ผ๋ถ€ ํ…Œ์ŠคํŠธ์— ์˜ํ–ฅ์„ ๋ฏธ์น  ์ˆ˜ ์žˆ๋Š” ๊ฒฝ์šฐ `xfail`์„ ์‚ฌ์šฉํ•˜์ง€ ๋งˆ์„ธ์š”. 
#### ๊ตฌํ˜„[[implementation]] - ์ „์ฒด ํ…Œ์ŠคํŠธ๋ฅผ ๋ฌด์กฐ๊ฑด ๊ฑด๋„ˆ๋›ฐ๋ ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์ด ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```python no-style @unittest.skip("this bug needs to be fixed") def test_feature_x(): ``` ๋˜๋Š” pytest๋ฅผ ํ†ตํ•ด: ```python no-style @pytest.mark.skip(reason="this bug needs to be fixed") ``` ๋˜๋Š” `xfail` ๋ฐฉ์‹์œผ๋กœ: ```python no-style @pytest.mark.xfail def test_feature_x(): ``` - ํ…Œ์ŠคํŠธ ๋‚ด๋ถ€์—์„œ ๋‚ด๋ถ€ ํ™•์ธ์— ๋”ฐ๋ผ ํ…Œ์ŠคํŠธ๋ฅผ ๊ฑด๋„ˆ๋›ฐ๋Š” ๋ฐฉ๋ฒ•์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```python def test_feature_x(): if not has_something(): pytest.skip("unsupported configuration") ``` ๋˜๋Š” ๋ชจ๋“ˆ ์ „์ฒด: ```python import pytest if not pytest.config.getoption("--custom-flag"): pytest.skip("--custom-flag is missing, skipping tests", allow_module_level=True) ``` ๋˜๋Š” `xfail` ๋ฐฉ์‹์œผ๋กœ: ```python def test_feature_x(): pytest.xfail("expected to fail until bug XYZ is fixed") ``` - import๊ฐ€ missing๋œ ๋ชจ๋“ˆ์ด ์žˆ์„ ๋•Œ ๊ทธ ๋ชจ๋“ˆ์˜ ๋ชจ๋“  ํ…Œ์ŠคํŠธ๋ฅผ ๊ฑด๋„ˆ๋›ฐ๋Š” ๋ฐฉ๋ฒ•: ```python docutils = pytest.importorskip("docutils", minversion="0.3") ``` - ์กฐ๊ฑด์— ๋”ฐ๋ผ ํ…Œ์ŠคํŠธ๋ฅผ ๊ฑด๋„ˆ๋›ฐ๋Š” ๋ฐฉ๋ฒ•: ```python no-style @pytest.mark.skipif(sys.version_info < (3,6), reason="requires python3.6 or higher") def test_feature_x(): ``` ๋˜๋Š”: ```python no-style @unittest.skipIf(torch_device == "cpu", "Can't do half precision") def test_feature_x(): ``` ๋˜๋Š” ๋ชจ๋“ˆ ์ „์ฒด๋ฅผ ๊ฑด๋„ˆ๋›ฐ๋Š” ๋ฐฉ๋ฒ•: ```python no-style @pytest.mark.skipif(sys.platform == 'win32', reason="does not run on windows") class TestClass(): def test_feature_x(self): ``` ๋ณด๋‹ค ์ž์„ธํ•œ ์˜ˆ์ œ ๋ฐ ๋ฐฉ๋ฒ•์€ [์—ฌ๊ธฐ](https://docs.pytest.org/en/latest/skipping.html)์—์„œ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ### ๋Š๋ฆฐ ํ…Œ์ŠคํŠธ[[slow-tests]] ํ…Œ์ŠคํŠธ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋Š” ์ง€์†์ ์œผ๋กœ ํ™•์žฅ๋˜๊ณ  ์žˆ์œผ๋ฉฐ, ์ผ๋ถ€ ํ…Œ์ŠคํŠธ๋Š” ์‹คํ–‰ํ•˜๋Š” ๋ฐ ๋ช‡ ๋ถ„์ด ๊ฑธ๋ฆฝ๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ์šฐ๋ฆฌ์—๊ฒŒ๋Š” ํ…Œ์ŠคํŠธ ์Šค์œ„ํŠธ๊ฐ€ CI๋ฅผ ํ†ตํ•ด ์™„๋ฃŒ๋˜๊ธฐ๊นŒ์ง€ ํ•œ ์‹œ๊ฐ„์„ ๊ธฐ๋‹ค๋ฆด ์—ฌ์œ ๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ํ•„์ˆ˜ ํ…Œ์ŠคํŠธ๋ฅผ ์œ„ํ•œ ์ผ๋ถ€ ์˜ˆ์™ธ๋ฅผ ์ œ์™ธํ•˜๊ณ  ๋Š๋ฆฐ ํ…Œ์ŠคํŠธ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ํ‘œ์‹œํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```python no-style from transformers.testing_utils import slow @slow def test_integration_foo(): ``` `@slow`๋กœ ํ‘œ์‹œ๋œ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋ ค๋ฉด `RUN_SLOW=1` ํ™˜๊ฒฝ ๋ณ€์ˆ˜๋ฅผ ์„ค์ •ํ•˜์„ธ์š”. ์˜ˆ๋ฅผ ๋“ค์–ด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```bash RUN_SLOW=1 pytest tests ``` `@parameterized`์™€ ๊ฐ™์€ ๋ช‡ ๊ฐ€์ง€ ๋ฐ์ฝ”๋ ˆ์ดํ„ฐ๋Š” ํ…Œ์ŠคํŠธ ์ด๋ฆ„์„ ๋‹ค์‹œ ์ž‘์„ฑํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋ฏ€๋กœ `@slow`์™€ ๋‚˜๋จธ์ง€ ๊ฑด๋„ˆ๋›ฐ๊ธฐ ๋ฐ์ฝ”๋ ˆ์ดํ„ฐ `@require_*`๊ฐ€ ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ์ž‘๋™๋˜๋ ค๋ฉด ๋งˆ์ง€๋ง‰์— ๋‚˜์—ด๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ์€ ์˜ฌ๋ฐ”๋ฅธ ์‚ฌ์šฉ ์˜ˆ์ž…๋‹ˆ๋‹ค. ```python no-style @parameterized.expand(...) @slow def test_integration_foo(): ``` ์ด ๋ฌธ์„œ์˜ ์ดˆ๋ฐ˜๋ถ€์— ์„ค๋ช…๋œ ๊ฒƒ์ฒ˜๋Ÿผ ๋Š๋ฆฐ ํ…Œ์ŠคํŠธ๋Š” PR์˜ CI ํ™•์ธ์ด ์•„๋‹Œ ์˜ˆ์•ฝ๋œ ์ผ์ • ๊ธฐ๋ฐ˜์œผ๋กœ ์‹คํ–‰๋ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ PR ์ œ์ถœ ์ค‘์— ์ผ๋ถ€ ๋ฌธ์ œ๋ฅผ ๋†“์นœ ์ฑ„๋กœ ๋ณ‘ํ•ฉ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋ฌธ์ œ๋“ค์€ ๋‹ค์Œ๋ฒˆ์˜ ์˜ˆ์ •๋œ CI ์ž‘์—… ์ค‘์— ๊ฐ์ง€๋ฉ๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ PR์„ ์ œ์ถœํ•˜๊ธฐ ์ „์— ์ž์‹ ์˜ ์ปดํ“จํ„ฐ์—์„œ ๋Š๋ฆฐ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋Š” ๊ฒƒ ๋˜ํ•œ ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ๋Š๋ฆฐ ํ…Œ์ŠคํŠธ๋กœ ํ‘œ์‹œํ•ด์•ผ ํ•˜๋Š”์ง€ ์—ฌ๋ถ€๋ฅผ ๊ฒฐ์ •ํ•˜๋Š” ๋Œ€๋žต์ ์ธ ๊ฒฐ์ • ๊ธฐ์ค€์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค. 
๋งŒ์•ฝ ํ…Œ์ŠคํŠธ๊ฐ€ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ๋‚ด๋ถ€ ๊ตฌ์„ฑ ์š”์†Œ ์ค‘ ํ•˜๋‚˜์— ์ง‘์ค‘๋˜์–ด ์žˆ๋‹ค๋ฉด(์˜ˆ: ๋ชจ๋ธ๋ง ํŒŒ์ผ, ํ† ํฐํ™” ํŒŒ์ผ, ํŒŒ์ดํ”„๋ผ์ธ), ํ•ด๋‹น ํ…Œ์ŠคํŠธ๋ฅผ ๋Š๋ฆฐ ํ…Œ์ŠคํŠธ ์Šค์œ„ํŠธ์—์„œ ์‹คํ–‰ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋งŒ์•ฝ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ๋‹ค๋ฅธ ์ธก๋ฉด(์˜ˆ: ๋ฌธ์„œ ๋˜๋Š” ์˜ˆ์ œ)์— ์ง‘์ค‘๋˜์–ด ์žˆ๋‹ค๋ฉด, ํ•ด๋‹น ํ…Œ์ŠคํŠธ๋ฅผ ๋Š๋ฆฐ ํ…Œ์ŠคํŠธ ์Šค์œ„ํŠธ์—์„œ ์‹คํ–‰ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ์ด ์ ‘๊ทผ ๋ฐฉ์‹์„ ๋ณด์™„ํ•˜๊ธฐ ์œ„ํ•ด ์˜ˆ์™ธ๋ฅผ ๋งŒ๋“ค์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. - ๋ฌด๊ฑฐ์šด ๊ฐ€์ค‘์น˜ ์„ธํŠธ๋‚˜ 50MB๋ณด๋‹ค ํฐ ๋ฐ์ดํ„ฐ์…‹์„ ๋‹ค์šด๋กœ๋“œํ•ด์•ผ ํ•˜๋Š” ๋ชจ๋“  ํ…Œ์ŠคํŠธ(์˜ˆ: ๋ชจ๋ธ ํ†ตํ•ฉ ํ…Œ์ŠคํŠธ, ํ† ํฌ๋‚˜์ด์ € ํ†ตํ•ฉ ํ…Œ์ŠคํŠธ, ํŒŒ์ดํ”„๋ผ์ธ ํ†ตํ•ฉ ํ…Œ์ŠคํŠธ)๋ฅผ ๋Š๋ฆฐ ํ…Œ์ŠคํŠธ๋กœ ์„ค์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ƒˆ๋กœ์šด ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒฝ์šฐ ํ†ตํ•ฉ ํ…Œ์ŠคํŠธ์šฉ์œผ๋กœ ๋ฌด์ž‘์œ„ ๊ฐ€์ค‘์น˜๋กœ ์ž‘์€ ๋ฒ„์ „์„ ๋งŒ๋“ค์–ด ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด ๋‚ด์šฉ์€ ์•„๋ž˜ ๋‹จ๋ฝ์—์„œ ์„ค๋ช…๋ฉ๋‹ˆ๋‹ค. - ํŠน๋ณ„ํžˆ ๋น ๋ฅด๊ฒŒ ์‹คํ–‰๋˜๋„๋ก ์ตœ์ ํ™”๋˜์ง€ ์•Š์€ ํ•™์Šต์„ ์ˆ˜ํ–‰ํ•ด์•ผ ํ•˜๋Š” ํ…Œ์ŠคํŠธ๋Š” ๋Š๋ฆฐ ํ…Œ์ŠคํŠธ๋กœ ์„ค์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. - ๋Š๋ฆฌ์ง€ ์•Š์•„์•ผ ํ•  ํ…Œ์ŠคํŠธ ์ค‘ ์ผ๋ถ€๊ฐ€ ๊ทน๋„๋กœ ๋Š๋ฆฐ ๊ฒฝ์šฐ ์˜ˆ์™ธ๋ฅผ ๋„์ž…ํ•˜๊ณ  ์ด๋ฅผ `@slow`๋กœ ์„ค์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋Œ€์šฉ๋Ÿ‰ ํŒŒ์ผ์„ ๋””์Šคํฌ์— ์ €์žฅํ•˜๊ณ  ๋ถˆ๋Ÿฌ์˜ค๋Š” ์ž๋™ ๋ชจ๋ธ๋ง ํ…Œ์ŠคํŠธ๋Š” `@slow`์œผ๋กœ ํ‘œ์‹œ๋œ ํ…Œ์ŠคํŠธ์˜ ์ข‹์€ ์˜ˆ์ž…๋‹ˆ๋‹ค. - CI์—์„œ 1์ดˆ ์ด๋‚ด์— ํ…Œ์ŠคํŠธ๊ฐ€ ์™„๋ฃŒ๋˜๋Š” ๊ฒฝ์šฐ(๋‹ค์šด๋กœ๋“œ ํฌํ•จ)์—๋Š” ๋Š๋ฆฐ ํ…Œ์ŠคํŠธ๊ฐ€ ์•„๋‹ˆ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋Š๋ฆฐ ํ…Œ์ŠคํŠธ๊ฐ€ ์•„๋‹Œ ๊ฒฝ์šฐ์—๋Š” ๋‹ค์–‘ํ•œ ๋‚ด๋ถ€๋ฅผ ์™„์ „ํžˆ ์ปค๋ฒ„ํ•˜๋ฉด์„œ ๋น ๋ฅด๊ฒŒ ์œ ์ง€๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ๋ฌด์ž‘์œ„ ๊ฐ€์ค‘์น˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํŠน๋ณ„ํžˆ ์ƒ์„ฑ๋œ ์ž‘์€ ๋ชจ๋ธ๋กœ ํ…Œ์ŠคํŠธํ•˜๋ฉด ์ƒ๋‹นํ•œ ์ปค๋ฒ„๋ฆฌ์ง€๋ฅผ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋ชจ๋ธ์€ ์ตœ์†Œํ•œ์˜ ๋ ˆ์ด์–ด ์ˆ˜(์˜ˆ: 2), ์–ดํœ˜ ํฌ๊ธฐ(์˜ˆ: 1000) ๋“ฑ์˜ ์š”์†Œ๋งŒ ๊ฐ€์ง‘๋‹ˆ๋‹ค. ๊ทธ๋Ÿฐ ๋‹ค์Œ `@slow` ํ…Œ์ŠคํŠธ๋Š” ๋Œ€ํ˜• ๋Š๋ฆฐ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜์—ฌ ์ •์„ฑ์ ์ธ ํ…Œ์ŠคํŠธ๋ฅผ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ์ž‘์€ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ™•์ธํ•˜๋ ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์ด *tiny* ๋ชจ๋ธ์„ ์ฐพ์•„๋ณด์„ธ์š”. ```bash grep tiny tests examples ``` ๋‹ค์Œ์€ ์ž‘์€ ๋ชจ๋ธ[stas/tiny-wmt19-en-de](https://huggingface.co/stas/tiny-wmt19-en-de)์„ ๋งŒ๋“  [script](https://github.com/huggingface/transformers/tree/main/scripts/fsmt/fsmt-make-tiny-model.py) ์˜ˆ์‹œ์ž…๋‹ˆ๋‹ค. ํŠน์ • ๋ชจ๋ธ์˜ ์•„ํ‚คํ…์ฒ˜์— ๋งž๊ฒŒ ์‰ฝ๊ฒŒ ์กฐ์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๋Œ€์šฉ๋Ÿ‰ ๋ชจ๋ธ์„ ๋‹ค์šด๋กœ๋“œํ•˜๋Š” ๊ฒฝ์šฐ ๋Ÿฐํƒ€์ž„์„ ์ž˜๋ชป ์ธก์ •ํ•˜๊ธฐ ์‰ฝ์ง€๋งŒ, ๋กœ์ปฌ์—์„œ ํ…Œ์ŠคํŠธํ•˜๋ฉด ๋‹ค์šด๋กœ๋“œํ•œ ํŒŒ์ผ์ด ์บ์‹œ๋˜์–ด ๋‹ค์šด๋กœ๋“œ ์‹œ๊ฐ„์ด ์ธก์ •๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๋Œ€์‹  CI ๋กœ๊ทธ์˜ ์‹คํ–‰ ์†๋„ ๋ณด๊ณ ์„œ๋ฅผ ํ™•์ธํ•˜์„ธ์š”(`pytest --durations=0 tests`์˜ ์ถœ๋ ฅ). ์ด ๋ณด๊ณ ์„œ๋Š” ๋Š๋ฆฐ ์ด์ƒ๊ฐ’์œผ๋กœ ํ‘œ์‹œ๋˜์ง€ ์•Š๊ฑฐ๋‚˜ ๋น ๋ฅด๊ฒŒ ๋‹ค์‹œ ์ž‘์„ฑํ•ด์•ผ ํ•˜๋Š” ๋Š๋ฆฐ ์ด์ƒ๊ฐ’์„ ์ฐพ๋Š” ๋ฐ๋„ ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. CI์—์„œ ํ…Œ์ŠคํŠธ ์Šค์œ„ํŠธ๊ฐ€ ๋Š๋ ค์ง€๊ธฐ ์‹œ์ž‘ํ•˜๋ฉด ์ด ๋ณด๊ณ ์„œ์˜ ๋งจ ์œ„ ๋ชฉ๋ก์— ๊ฐ€์žฅ ๋Š๋ฆฐ ํ…Œ์ŠคํŠธ๊ฐ€ ํ‘œ์‹œ๋ฉ๋‹ˆ๋‹ค. ### stdout/stderr ์ถœ๋ ฅ ํ…Œ์ŠคํŠธ[[testing-the-stdout/stderr-output]] `stdout` ๋ฐ/๋˜๋Š” `stderr`๋กœ ์“ฐ๋Š” ํ•จ์ˆ˜๋ฅผ ํ…Œ์ŠคํŠธํ•˜๋ ค๋ฉด `pytest`์˜ [capsys ์‹œ์Šคํ…œ](https://docs.pytest.org/en/latest/capture.html)์„ ์‚ฌ์šฉํ•˜์—ฌ ํ•ด๋‹น ์ŠคํŠธ๋ฆผ์— ์•ก์„ธ์Šคํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
```python import sys def print_to_stdout(s): print(s) def print_to_stderr(s): sys.stderr.write(s) def test_result_and_stdout(capsys): msg = "Hello" print_to_stdout(msg) print_to_stderr(msg) out, err = capsys.readouterr() # ์บก์ฒ˜๋œ ์ถœ๋ ฅ ์ŠคํŠธ๋ฆผ ์‚ฌ์šฉ # ์„ ํƒ ์‚ฌํ•ญ: ์บก์ฒ˜๋œ ์ŠคํŠธ๋ฆผ ์žฌ์ƒ์„ฑ sys.stdout.write(out) sys.stderr.write(err) # ํ…Œ์ŠคํŠธ: assert msg in out assert msg in err ``` ๊ทธ๋ฆฌ๊ณ , ๋ฌผ๋ก  ๋Œ€๋ถ€๋ถ„์˜ ๊ฒฝ์šฐ์—๋Š” `stderr`๋Š” ์˜ˆ์™ธ์˜ ์ผ๋ถ€๋กœ ์ œ๊ณต๋ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋ฏ€๋กœ ํ•ด๋‹น ๊ฒฝ์šฐ์—๋Š” try/except๋ฅผ ์‚ฌ์šฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```python def raise_exception(msg): raise ValueError(msg) def test_something_exception(): msg = "Not a good value" error = "" try: raise_exception(msg) except Exception as e: error = str(e) assert msg in error, f"{msg} is in the exception:\n{error}" ``` `stdout`๋ฅผ ์บก์ฒ˜ํ•˜๋Š” ๋˜ ๋‹ค๋ฅธ ๋ฐฉ๋ฒ•์€ `contextlib.redirect_stdout`๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ```python from io import StringIO from contextlib import redirect_stdout def print_to_stdout(s): print(s) def test_result_and_stdout(): msg = "Hello" buffer = StringIO() with redirect_stdout(buffer): print_to_stdout(msg) out = buffer.getvalue() # ์„ ํƒ ์‚ฌํ•ญ: ์บก์ฒ˜๋œ ์ŠคํŠธ๋ฆผ ์žฌ์ƒ์„ฑ sys.stdout.write(out) # ํ…Œ์ŠคํŠธ: assert msg in out ``` `stdout` ์บก์ฒ˜์— ๊ด€๋ จ๋œ ์ค‘์š”ํ•œ ๋ฌธ์ œ ์ค‘ ํ•˜๋‚˜๋Š” ๋ณดํ†ต `print`์—์„œ ์ด์ „์— ์ธ์‡„๋œ ๋‚ด์šฉ์„ ์žฌ์„ค์ •ํ•˜๋Š” `\r` ๋ฌธ์ž๊ฐ€ ํฌํ•จ๋  ์ˆ˜ ์žˆ๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. `pytest`์—์„œ๋Š” ๋ฌธ์ œ๊ฐ€ ์—†์ง€๋งŒ `pytest -s`์—์„œ๋Š” ์ด๋Ÿฌํ•œ ๋ฌธ์ž๊ฐ€ ๋ฒ„ํผ์— ํฌํ•จ๋˜๋ฏ€๋กœ `-s`๊ฐ€ ์žˆ๊ฑฐ๋‚˜ ์—†๋Š” ์ƒํƒœ์—์„œ ํƒœ์ŠคํŠธ๋ฅผ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ์œผ๋ ค๋ฉด ์บก์ฒ˜๋œ ์ถœ๋ ฅ์— ๋Œ€ํ•ด ์ถ”๊ฐ€์ ์ธ ์ •๋ฆฌ๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ์ด ๊ฒฝ์šฐ์—๋Š” `re.sub(r'~.*\r', '', buf, 0, re.M)`์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ๋„์šฐ๋ฏธ ์ปจํ…์ŠคํŠธ ๊ด€๋ฆฌ์ž ๋ž˜ํผ๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ์ถœ๋ ฅ์— `\r`์ด ํฌํ•จ๋˜์–ด ์žˆ๋Š”์ง€์˜ ์—ฌ๋ถ€์— ๊ด€๊ณ„์—†์ด ๋ชจ๋“  ๊ฒƒ์„ ์ž๋™์œผ๋กœ ์ฒ˜๋ฆฌํ•˜๋ฏ€๋กœ ํŽธ๋ฆฌํ•ฉ๋‹ˆ๋‹ค. ```python from transformers.testing_utils import CaptureStdout with CaptureStdout() as cs: function_that_writes_to_stdout() print(cs.out) ``` ๋‹ค์Œ์€ ์ „์ฒด ํ…Œ์ŠคํŠธ ์˜ˆ์ œ์ž…๋‹ˆ๋‹ค. ```python from transformers.testing_utils import CaptureStdout msg = "Secret message\r" final = "Hello World" with CaptureStdout() as cs: print(msg + final) assert cs.out == final + "\n", f"captured: {cs.out}, expecting {final}" ``` `stderr`๋ฅผ ์บก์ฒ˜ํ•˜๊ณ  ์‹ถ๋‹ค๋ฉด, ๋Œ€์‹  `CaptureStderr` ํด๋ž˜์Šค๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. ```python from transformers.testing_utils import CaptureStderr with CaptureStderr() as cs: function_that_writes_to_stderr() print(cs.err) ``` ๋‘ ์ŠคํŠธ๋ฆผ์„ ๋™์‹œ์— ์บก์ฒ˜ํ•ด์•ผ ํ•œ๋‹ค๋ฉด, ๋ถ€๋ชจ `CaptureStd` ํด๋ž˜์Šค๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. ```python from transformers.testing_utils import CaptureStd with CaptureStd() as cs: function_that_writes_to_stdout_and_stderr() print(cs.err, cs.out) ``` ๋˜ํ•œ, ํ…Œ์ŠคํŠธ์˜ ๋””๋ฒ„๊น…์„ ์ง€์›ํ•˜๊ธฐ ์œ„ํ•ด ์ด๋Ÿฌํ•œ ์ปจํ…์ŠคํŠธ ๊ด€๋ฆฌ์ž๋Š” ๊ธฐ๋ณธ์ ์œผ๋กœ ์ปจํ…์ŠคํŠธ์—์„œ ์ข…๋ฃŒํ•  ๋•Œ ์บก์ฒ˜๋œ ์ŠคํŠธ๋ฆผ์„ ์ž๋™์œผ๋กœ ๋‹ค์‹œ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. ### ๋กœ๊ฑฐ ์ŠคํŠธ๋ฆผ ์บก์ฒ˜[[capturing-logger-stream]] ๋กœ๊ฑฐ ์ถœ๋ ฅ์„ ๊ฒ€์ฆํ•ด์•ผ ํ•˜๋Š” ๊ฒฝ์šฐ `CaptureLogger`๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
```python from transformers import logging from transformers.testing_utils import CaptureLogger msg = "Testing 1, 2, 3" logging.set_verbosity_info() logger = logging.get_logger("transformers.models.bart.tokenization_bart") with CaptureLogger(logger) as cl: logger.info(msg) assert cl.out, msg + "\n" ``` ### ํ™˜๊ฒฝ ๋ณ€์ˆ˜๋ฅผ ์ด์šฉํ•˜์—ฌ ํ…Œ์ŠคํŠธ[[testing-with-environment-variables]] ํŠน์ • ํ…Œ์ŠคํŠธ์˜ ํ™˜๊ฒฝ ๋ณ€์ˆ˜ ์˜ํ–ฅ์„ ๊ฒ€์ฆํ•˜๋ ค๋ฉด `transformers.testing_utils.mockenv`๋ผ๋Š” ๋„์šฐ๋ฏธ ๋ฐ์ฝ”๋ ˆ์ดํ„ฐ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```python from transformers.testing_utils import mockenv class HfArgumentParserTest(unittest.TestCase): @mockenv(TRANSFORMERS_VERBOSITY="error") def test_env_override(self): env_level_str = os.getenv("TRANSFORMERS_VERBOSITY", None) ``` ์ผ๋ถ€ ๊ฒฝ์šฐ์—๋Š” ์™ธ๋ถ€ ํ”„๋กœ๊ทธ๋žจ์„ ํ˜ธ์ถœํ•ด์•ผํ•  ์ˆ˜๋„ ์žˆ๋Š”๋ฐ, ์ด ๋•Œ์—๋Š” ์—ฌ๋Ÿฌ ๊ฐœ์˜ ๋กœ์ปฌ ๊ฒฝ๋กœ๋ฅผ ํฌํ•จํ•˜๋Š” `os.environ`์—์„œ `PYTHONPATH`์˜ ์„ค์ •์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ํ—ฌํผ ํด๋ž˜์Šค `transformers.test_utils.TestCasePlus`๊ฐ€ ๋„์›€์ด ๋ฉ๋‹ˆ๋‹ค: ```python from transformers.testing_utils import TestCasePlus class EnvExampleTest(TestCasePlus): def test_external_prog(self): env = self.get_env() # ์ด์ œ `env`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์™ธ๋ถ€ ํ”„๋กœ๊ทธ๋žจ ํ˜ธ์ถœ ``` ํ…Œ์ŠคํŠธ ํŒŒ์ผ์ด `tests` ํ…Œ์ŠคํŠธ ์Šค์œ„ํŠธ ๋˜๋Š” `examples`์— ์žˆ๋Š”์ง€์— ๋”ฐ๋ผ `env[PYTHONPATH]`๊ฐ€ ๋‘ ๋””๋ ‰ํ„ฐ๋ฆฌ ์ค‘ ํ•˜๋‚˜๋ฅผ ํฌํ•จํ•˜๋„๋ก ์„ค์ •๋˜๋ฉฐ, ํ˜„์žฌ ์ €์žฅ์†Œ์— ๋Œ€ํ•ด ํ…Œ์ŠคํŠธ๊ฐ€ ์ˆ˜ํ–‰๋˜๋„๋ก `src` ๋””๋ ‰ํ„ฐ๋ฆฌ๋„ ํฌํ•จ๋ฉ๋‹ˆ๋‹ค. ํ…Œ์ŠคํŠธ ํ˜ธ์ถœ ์ด์ „์— ์„ค์ •๋œ ๊ฒฝ์šฐ์—๋Š” `env[PYTHONPATH]`๋ฅผ ๊ทธ๋Œ€๋กœ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด ํ—ฌํผ ๋ฉ”์†Œ๋“œ๋Š” `os.environ` ๊ฐ์ฒด์˜ ์‚ฌ๋ณธ์„ ์ƒ์„ฑํ•˜๋ฏ€๋กœ ์›๋ณธ์€ ๊ทธ๋Œ€๋กœ ์œ ์ง€๋ฉ๋‹ˆ๋‹ค. ### ์žฌํ˜„ ๊ฐ€๋Šฅํ•œ ๊ฒฐ๊ณผ ์–ป๊ธฐ[[getting-reproducible-results]] ์ผ๋ถ€ ์ƒํ™ฉ์—์„œ ํ…Œ์ŠคํŠธ์—์„œ ์ž„์˜์„ฑ์„ ์ œ๊ฑฐํ•˜์—ฌ ๋™์ผํ•˜๊ฒŒ ์žฌํ˜„ ๊ฐ€๋Šฅํ•œ ๊ฒฐ๊ณผ๋ฅผ ์–ป๊ณ  ์‹ถ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฅผ ์œ„ํ•ด์„œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์‹œ๋“œ๋ฅผ ๊ณ ์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```python seed = 42 # ํŒŒ์ด์ฌ RNG import random random.seed(seed) # ํŒŒ์ดํ† ์น˜ RNG import torch torch.manual_seed(seed) torch.backends.cudnn.deterministic = True if torch.cuda.is_available(): torch.cuda.manual_seed_all(seed) # ๋„˜ํŒŒ์ด RNG import numpy as np np.random.seed(seed) # ํ…์„œํ”Œ๋กœ RNG tf.random.set_seed(seed) ``` ### ํ…Œ์ŠคํŠธ ๋””๋ฒ„๊น…[[debugging tests]] ๊ฒฝ๊ณ ๊ฐ€ ์žˆ๋Š” ๊ณณ์—์„œ ๋””๋ฒ„๊ฑฐ๋ฅผ ์‹œ์ž‘ํ•˜๋ ค๋ฉด ๋‹ค์Œ์„ ์ˆ˜ํ–‰ํ•˜์„ธ์š”. ```bash pytest tests/utils/test_logging.py -W error::UserWarning --pdb ``` ## Github Actions ์›Œํฌํ”Œ๋กœ์šฐ ์ž‘์—… ์ฒ˜๋ฆฌ[[working-with-github-actions-workflows]] ์…€ํ”„ ํ‘ธ์‹œ ์›Œํฌํ”Œ๋กœ์šฐ CI ์ž‘์—…์„ ํŠธ๋ฆฌ๊ฑฐํ•˜๋ ค๋ฉด, ๋‹ค์Œ์„ ์ˆ˜ํ–‰ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 1. `transformers` ์›๋ณธ์—์„œ ์ƒˆ ๋ธŒ๋žœ์น˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค(ํฌํฌ๊ฐ€ ์•„๋‹™๋‹ˆ๋‹ค!). 2. ๋ธŒ๋žœ์น˜ ์ด๋ฆ„์€ `ci_` ๋˜๋Š” `ci-`๋กœ ์‹œ์ž‘ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค(`main`๋„ ํŠธ๋ฆฌ๊ฑฐํ•˜์ง€๋งŒ `main`์—์„œ๋Š” PR์„ ํ•  ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค). ๋˜ํ•œ ํŠน์ • ๊ฒฝ๋กœ์— ๋Œ€ํ•ด์„œ๋งŒ ํŠธ๋ฆฌ๊ฑฐ๋˜๋ฏ€๋กœ ์ด ๋ฌธ์„œ๊ฐ€ ์ž‘์„ฑ๋œ ํ›„์— ๋ณ€๊ฒฝ๋œ ๋‚ด์šฉ์€ [์—ฌ๊ธฐ](https://github.com/huggingface/transformers/blob/main/.github/workflows/self-push.yml)์˜ *push:*์—์„œ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 3. ์ด ๋ธŒ๋žœ์น˜์—์„œ PR์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค 4. ๊ทธ๋Ÿฐ ๋‹ค์Œ [์—ฌ๊ธฐ](https://github.com/huggingface/transformers/actions/workflows/self-push.yml)์—์„œ ์ž‘์—…์ด ๋‚˜ํƒ€๋‚˜๋Š”์ง€ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฐฑ๋กœ๊ทธ๊ฐ€ ์žˆ๋Š” ๊ฒฝ์šฐ, ๋ฐ”๋กœ ์‹คํ–‰๋˜์ง€ ์•Š์„ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. 
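์•„๋ž˜๋Š” ์œ„ ์ ˆ์ฐจ์˜ 1~2๋‹จ๊ณ„๋ฅผ ๋ช…๋ น์–ด๋กœ ์˜ฎ๊ธด ๊ฐ„๋‹จํ•œ ์Šค์ผ€์น˜์ž…๋‹ˆ๋‹ค. ๋ธŒ๋žœ์น˜ ์ด๋ฆ„๊ณผ ์›๊ฒฉ ์ €์žฅ์†Œ ์ด๋ฆ„(`origin`)์€ ์„ค๋ช…์„ ์œ„ํ•œ ๊ฐ€์ •์ž…๋‹ˆ๋‹ค:

```bash
# transformers ์›๋ณธ ์ €์žฅ์†Œ๋ฅผ ์ฒดํฌ์•„์›ƒํ•œ ์ƒํƒœ์—์„œ `ci_`๋กœ ์‹œ์ž‘ํ•˜๋Š” ๋ธŒ๋žœ์น˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค (์ด๋ฆ„์€ ์˜ˆ์‹œ)
git checkout -b ci_test-my-feature

# ๋ณ€๊ฒฝ ์‚ฌํ•ญ์„ ์ปค๋ฐ‹ํ•œ ๋’ค ๋ธŒ๋žœ์น˜๋ฅผ ํ‘ธ์‹œํ•˜๊ณ , ์ด ๋ธŒ๋žœ์น˜๋กœ PR์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค
git push origin ci_test-my-feature
```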
## ์‹คํ—˜์ ์ธ CI ๊ธฐ๋Šฅ ํ…Œ์ŠคํŠธ[[testing-Experimental-CI-Features]] CI ๊ธฐ๋Šฅ์„ ํ…Œ์ŠคํŠธํ•˜๋Š” ๊ฒƒ์€ ์ผ๋ฐ˜ CI ์ž‘๋™์— ๋ฐฉํ•ด๊ฐ€ ๋  ์ˆ˜ ์žˆ๊ธฐ ๋•Œ๋ฌธ์— ์ž ์žฌ์ ์œผ๋กœ ๋ฌธ์ œ๊ฐ€ ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์ƒˆ๋กœ์šด CI ๊ธฐ๋Šฅ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒฝ์šฐ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ˆ˜ํ–‰ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 1. ํ…Œ์ŠคํŠธํ•ด์•ผ ํ•  ๋‚ด์šฉ์„ ํ…Œ์ŠคํŠธํ•˜๋Š” ์ƒˆ๋กœ์šด ์ „์šฉ ์ž‘์—…์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. 2. ์ƒˆ๋กœ์šด ์ž‘์—…์€ ํ•ญ์ƒ ์„ฑ๊ณตํ•ด์•ผ๋งŒ ๋…น์ƒ‰ โœ“๋ฅผ ๋ฐ›์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค(์•„๋ž˜์— ์ž์„ธํ•œ ๋‚ด์šฉ์ด ์žˆ์Šต๋‹ˆ๋‹ค). 3. ๋‹ค์–‘ํ•œ PR ์œ ํ˜•์— ๋Œ€ํ•œ ํ™•์ธ์„ ์œ„ํ•ด (์‚ฌ์šฉ์ž ํฌํฌ ๋ธŒ๋žœ์น˜, ํฌํฌ๋˜์ง€ ์•Š์€ ๋ธŒ๋žœ์น˜, github.com UI ์ง์ ‘ ํŒŒ์ผ ํŽธ์ง‘์—์„œ ์ƒ์„ฑ๋œ ๋ธŒ๋žœ์น˜, ๊ฐ•์ œ ํ‘ธ์‹œ ๋“ฑ PR์˜ ์œ ํ˜•์€ ์•„์ฃผ ๋‹ค์–‘ํ•ฉ๋‹ˆ๋‹ค.) ๋ฉฐ์น  ๋™์•ˆ ์‹คํ—˜ ์ž‘์—…์˜ ๋กœ๊ทธ๋ฅผ ๋ชจ๋‹ˆํ„ฐ๋งํ•˜๋ฉด์„œ ์‹คํ–‰ํ•ด๋ด…๋‹ˆ๋‹ค. (์˜๋„์ ์œผ๋กœ ํ•ญ์ƒ ๋…น์ƒ‰์„ ํ‘œ์‹œํ•˜๋ฏ€๋กœ ์ž‘์—… ์ „์ฒด๊ฐ€ ๋…น์ƒ‰์€ ์•„๋‹ˆ๋ผ๋Š” ์ ์— ์œ ์˜ํ•ฉ๋‹ˆ๋‹ค.) 4. ๋ชจ๋“  ๊ฒƒ์ด ์•ˆ์ •์ ์ธ์ง€ ํ™•์ธํ•œ ํ›„, ์ƒˆ๋กœ์šด ๋ณ€๊ฒฝ ์‚ฌํ•ญ์„ ๊ธฐ์กด ์ž‘์—…์— ๋ณ‘ํ•ฉํ•ฉ๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด CI ๊ธฐ๋Šฅ ์ž์ฒด์— ๋Œ€ํ•œ ์‹คํ—˜์ด ์ผ๋ฐ˜ ์ž‘์—… ํ๋ฆ„์— ๋ฐฉํ•ด๊ฐ€ ๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ƒˆ๋กœ์šด CI ๊ธฐ๋Šฅ์ด ๊ฐœ๋ฐœ ์ค‘์ธ ๋™์•ˆ, ํ•ญ์ƒ ์„ฑ๊ณตํ•˜๋„๋ก ํ•  ์ˆ˜ ์žˆ๋Š” ๋ฐฉ๋ฒ•์€ ๋ฌด์—‡์ผ๊นŒ์š”? TravisCI์™€ ๊ฐ™์€ ์ผ๋ถ€ CI๋Š” `ignore-step-failure`๋ฅผ ์ง€์›ํ•˜๋ฉฐ ์ „์ฒด ์ž‘์—…์„ ์„ฑ๊ณตํ•œ ๊ฒƒ์œผ๋กœ ๋ณด๊ณ ํ•˜์ง€๋งŒ, ํ˜„์žฌ ์šฐ๋ฆฌ๊ฐ€ ์‚ฌ์šฉํ•˜๋Š” CircleCI์™€ Github Actions๋Š” ์ด๋ฅผ ์ง€์›ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ๋‹ค์Œ๊ณผ ๊ฐ™์€ ํ•ด๊ฒฐ์ฑ…์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 1. bash ์Šคํฌ๋ฆฝํŠธ์—์„œ ๊ฐ€๋Šฅํ•œ ๋งŽ์€ ์˜ค๋ฅ˜๋ฅผ ์–ต์ œํ•˜๊ธฐ ์œ„ํ•ด ์‹คํ–‰ ๋ช…๋ น์˜ ์‹œ์ž‘ ๋ถ€๋ถ„์— `set +euo pipefail`์„ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. 2. ๋งˆ์ง€๋ง‰ ๋ช…๋ น์€ ๋ฐ˜๋“œ์‹œ ์„ฑ๊ณตํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. `echo "done"` ๋˜๋Š” `true`๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ์˜ˆ์‹œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค. ```yaml - run: name: run CI experiment command: | set +euo pipefail echo "setting run-all-despite-any-errors-mode" this_command_will_fail echo "but bash continues to run" # emulate another failure false # but the last command must be a success echo "during experiment do not remove: reporting success to CI, even if there were failures" ``` ๊ฐ„๋‹จํ•œ ๋ช…๋ น์˜ ๊ฒฝ์šฐ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ˆ˜ํ–‰ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ```bash cmd_that_may_fail || true ``` ๊ฒฐ๊ณผ์— ๋งŒ์กฑํ•œ ํ›„์—๋Š” ๋ฌผ๋ก , ์‹คํ—˜์ ์ธ ๋‹จ๊ณ„ ๋˜๋Š” ์ž‘์—…์„ ์ผ๋ฐ˜ ์ž‘์—…์˜ ๋‚˜๋จธ์ง€ ๋ถ€๋ถ„๊ณผ ํ†ตํ•ฉํ•˜๋ฉด์„œ `set +euo pipefail` ๋˜๋Š” ๊ธฐํƒ€ ์ถ”๊ฐ€ํ•œ ์š”์†Œ๋ฅผ ์ œ๊ฑฐํ•˜์—ฌ ์‹คํ—˜ ์ž‘์—…์ด ์ผ๋ฐ˜ CI ์ž‘๋™์— ๋ฐฉํ•ด๋˜์ง€ ์•Š๋„๋ก ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด ์ „๋ฐ˜์ ์ธ ๊ณผ์ •์€ ์‹คํ—˜ ๋‹จ๊ณ„๊ฐ€ PR์˜ ์ „๋ฐ˜์ ์ธ ์ƒํƒœ์— ์˜ํ–ฅ์„ ์ฃผ์ง€ ์•Š๊ณ  ์‹คํŒจํ•˜๋„๋ก `allow-failure`์™€ ๊ฐ™์€ ๊ธฐ๋Šฅ์„ ์„ค์ •ํ•  ์ˆ˜ ์žˆ๋‹ค๋ฉด ํ›จ์”ฌ ๋” ์‰ฌ์› ์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์•ž์—์„œ ์–ธ๊ธ‰ํ•œ ๋ฐ”์™€ ๊ฐ™์ด CircleCI์™€ Github Actions๋Š” ํ˜„์žฌ ์ด๋Ÿฌํ•œ ๊ธฐ๋Šฅ๋“ค ์ง€์›ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ์ด ๊ธฐ๋Šฅ์˜ ์ง€์›์„ ์œ„ํ•œ ํˆฌํ‘œ์— ์ฐธ์—ฌํ•˜๊ณ  CI ๊ด€๋ จ ์Šค๋ ˆ๋“œ๋“ค์—์„œ ์ด๋Ÿฌํ•œ ์ƒํ™ฉ์„ ํ™•์ธํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. - [Github Actions:](https://github.com/actions/toolkit/issues/399) - [CircleCI:](https://ideas.circleci.com/ideas/CCI-I-344)
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์Šคํฌ๋ฆฝํŠธ๋กœ ์‹คํ–‰ํ•˜๊ธฐ[[train-with-a-script]] ๐Ÿค— Transformers ๋…ธํŠธ๋ถ๊ณผ ํ•จ๊ป˜ [PyTorch](https://github.com/huggingface/transformers/tree/main/examples/pytorch), [TensorFlow](https://github.com/huggingface/transformers/tree/main/examples/tensorflow), ๋˜๋Š” [JAX/Flax](https://github.com/huggingface/transformers/tree/main/examples/flax)๋ฅผ ์‚ฌ์šฉํ•ด ํŠน์ • ํƒœ์Šคํฌ์— ๋Œ€ํ•œ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ณด์—ฌ์ฃผ๋Š” ์˜ˆ์ œ ์Šคํฌ๋ฆฝํŠธ๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ [์—ฐ๊ตฌ ํ”„๋กœ์ ํŠธ](https://github.com/huggingface/transformers/tree/main/examples/research_projects) ๋ฐ [๋ ˆ๊ฑฐ์‹œ ์˜ˆ์ œ](https://github.com/huggingface/transformers/tree/main/examples/legacy)์—์„œ ๋Œ€๋ถ€๋ถ„ ์ปค๋ฎค๋‹ˆํ‹ฐ์—์„œ ์ œ๊ณตํ•œ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์ฐพ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ์Šคํฌ๋ฆฝํŠธ๋Š” ์ ๊ทน์ ์œผ๋กœ ์œ ์ง€ ๊ด€๋ฆฌ๋˜์ง€ ์•Š์œผ๋ฉฐ ์ตœ์‹  ๋ฒ„์ „์˜ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์™€ ํ˜ธํ™˜๋˜์ง€ ์•Š์„ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์€ ํŠน์ • ๋ฒ„์ „์˜ ๐Ÿค— Transformers๋ฅผ ํ•„์š”๋กœ ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ์ œ ์Šคํฌ๋ฆฝํŠธ๊ฐ€ ๋ชจ๋“  ๋ฌธ์ œ์—์„œ ๋ฐ”๋กœ ์ž‘๋™ํ•˜๋Š” ๊ฒƒ์€ ์•„๋‹ˆ๋ฉฐ, ํ•ด๊ฒฐํ•˜๋ ค๋Š” ๋ฌธ์ œ์— ๋งž๊ฒŒ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ๋ณ€๊ฒฝํ•ด์•ผ ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฅผ ์œ„ํ•ด ๋Œ€๋ถ€๋ถ„์˜ ์Šคํฌ๋ฆฝํŠธ์—๋Š” ๋ฐ์ดํ„ฐ ์ „์ฒ˜๋ฆฌ ๋ฐฉ๋ฒ•์ด ๋‚˜์™€์žˆ์–ด ํ•„์š”์— ๋”ฐ๋ผ ์ˆ˜์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ์ œ ์Šคํฌ๋ฆฝํŠธ์— ๊ตฌํ˜„ํ•˜๊ณ  ์‹ถ์€ ๊ธฐ๋Šฅ์ด ์žˆ์œผ๋ฉด pull request๋ฅผ ์ œ์ถœํ•˜๊ธฐ ์ „์— [ํฌ๋Ÿผ](https://discuss.huggingface.co/) ๋˜๋Š” [์ด์Šˆ](https://github.com/huggingface/transformers/issues)์—์„œ ๋…ผ์˜ํ•ด ์ฃผ์„ธ์š”. ๋ฒ„๊ทธ ์ˆ˜์ •์€ ํ™˜์˜ํ•˜์ง€๋งŒ ๊ฐ€๋…์„ฑ์„ ํฌ์ƒํ•˜๋ฉด์„œ๊นŒ์ง€ ๋” ๋งŽ์€ ๊ธฐ๋Šฅ์„ ์ถ”๊ฐ€ํ•˜๋Š” pull request๋Š” ๋ณ‘ํ•ฉ(merge)ํ•˜์ง€ ์•Š์„ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์Šต๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” [PyTorch](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization) ๋ฐ [TensorFlow](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/summarization)์—์„œ ์š”์•ฝ ํ›ˆ๋ จํ•˜๋Š” ์Šคํฌ๋ฆฝํŠธ ์˜ˆ์ œ๋ฅผ ์‹คํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. ํŠน๋ณ„ํ•œ ์„ค๋ช…์ด ์—†๋Š” ํ•œ ๋ชจ๋“  ์˜ˆ์ œ๋Š” ๋‘ ํ”„๋ ˆ์ž„์›Œํฌ ๋ชจ๋‘์—์„œ ์ž‘๋™ํ•  ๊ฒƒ์œผ๋กœ ์˜ˆ์ƒ๋ฉ๋‹ˆ๋‹ค. ## ์„ค์ •ํ•˜๊ธฐ[[setup]] ์ตœ์‹  ๋ฒ„์ „์˜ ์˜ˆ์ œ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์„ฑ๊ณต์ ์œผ๋กœ ์‹คํ–‰ํ•˜๋ ค๋ฉด ์ƒˆ ๊ฐ€์ƒ ํ™˜๊ฒฝ์—์„œ **์†Œ์Šค๋กœ๋ถ€ํ„ฐ ๐Ÿค— Transformers๋ฅผ ์„ค์น˜**ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```bash git clone https://github.com/huggingface/transformers cd transformers pip install . 
``` ์ด์ „ ๋ฒ„์ „์˜ ์˜ˆ์ œ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ๋ณด๋ ค๋ฉด ์•„๋ž˜ ํ† ๊ธ€์„ ํด๋ฆญํ•˜์„ธ์š”: <details> <summary>์ด์ „ ๋ฒ„์ „์˜ ๐Ÿค— Transformers ์˜ˆ์ œ</summary> <ul> <li><a href="https://github.com/huggingface/transformers/tree/v4.5.1/examples">v4.5.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.4.2/examples">v4.4.2</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.3.3/examples">v4.3.3</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.2.2/examples">v4.2.2</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.1.1/examples">v4.1.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.0.1/examples">v4.0.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.5.1/examples">v3.5.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.4.0/examples">v3.4.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.3.1/examples">v3.3.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.2.0/examples">v3.2.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.1.0/examples">v3.1.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.0.2/examples">v3.0.2</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.11.0/examples">v2.11.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.10.0/examples">v2.10.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.9.1/examples">v2.9.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.8.0/examples">v2.8.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.7.0/examples">v2.7.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.6.0/examples">v2.6.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.5.1/examples">v2.5.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.4.0/examples">v2.4.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.3.0/examples">v2.3.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.2.0/examples">v2.2.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.1.0/examples">v2.1.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.0.0/examples">v2.0.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v1.2.0/examples">v1.2.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v1.1.0/examples">v1.1.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v1.0.0/examples">v1.0.0</a></li> </ul> </details> ๊ทธ๋ฆฌ๊ณ  ๋‹ค์Œ๊ณผ ๊ฐ™์ด ๋ณต์ œ(clone)ํ•ด์˜จ ๐Ÿค— Transformers ๋ฒ„์ „์„ ํŠน์ • ๋ฒ„์ „(์˜ˆ: v3.5.1)์œผ๋กœ ์ „ํ™˜ํ•˜์„ธ์š”: ```bash git checkout tags/v3.5.1 ``` ์˜ฌ๋ฐ”๋ฅธ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ ๋ฒ„์ „์„ ์„ค์ •ํ•œ ํ›„ ์›ํ•˜๋Š” ์˜ˆ์ œ ํด๋”๋กœ ์ด๋™ํ•˜์—ฌ ์˜ˆ์ œ๋ณ„๋กœ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์— ๋Œ€ํ•œ ์š”๊ตฌ ์‚ฌํ•ญ(requirements)์„ ์„ค์น˜ํ•ฉ๋‹ˆ๋‹ค: ```bash pip install -r requirements.txt ``` ## ์Šคํฌ๋ฆฝํŠธ ์‹คํ–‰ํ•˜๊ธฐ[[run-a-script]] <frameworkcontent> <pt> ์˜ˆ์ œ ์Šคํฌ๋ฆฝํŠธ๋Š” ๐Ÿค— [Datasets](https://huggingface.co/docs/datasets/) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋‹ค์šด๋กœ๋“œํ•˜๊ณ  ์ „์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค. 
๊ทธ๋Ÿฐ ๋‹ค์Œ ์Šคํฌ๋ฆฝํŠธ๋Š” ์š”์•ฝ ๊ธฐ๋Šฅ์„ ์ง€์›ํ•˜๋Š” ์•„ํ‚คํ…์ฒ˜์—์„œ [Trainer](https://huggingface.co/docs/transformers/main_classes/trainer)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ ์˜ˆ๋Š” [CNN/DailyMail](https://huggingface.co/datasets/cnn_dailymail) ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ [T5-small](https://huggingface.co/google-t5/t5-small)์„ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค. T5 ๋ชจ๋ธ์€ ํ›ˆ๋ จ ๋ฐฉ์‹์— ๋”ฐ๋ผ ์ถ”๊ฐ€ `source_prefix` ์ธ์ˆ˜๊ฐ€ ํ•„์š”ํ•˜๋ฉฐ, ์ด ํ”„๋กฌํ”„ํŠธ๋Š” ์š”์•ฝ ์ž‘์—…์ž„์„ T5์— ์•Œ๋ ค์ค๋‹ˆ๋‹ค. ```bash python examples/pytorch/summarization/run_summarization.py \ --model_name_or_path google-t5/t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ``` </pt> <tf> ์˜ˆ์ œ ์Šคํฌ๋ฆฝํŠธ๋Š” ๐Ÿค— [Datasets](https://huggingface.co/docs/datasets/) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋‹ค์šด๋กœ๋“œํ•˜๊ณ  ์ „์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฐ ๋‹ค์Œ ์Šคํฌ๋ฆฝํŠธ๋Š” ์š”์•ฝ ๊ธฐ๋Šฅ์„ ์ง€์›ํ•˜๋Š” ์•„ํ‚คํ…์ฒ˜์—์„œ Keras๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ ์˜ˆ๋Š” [CNN/DailyMail](https://huggingface.co/datasets/cnn_dailymail) ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ [T5-small](https://huggingface.co/google-t5/t5-small)์„ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค. T5 ๋ชจ๋ธ์€ ํ›ˆ๋ จ ๋ฐฉ์‹์— ๋”ฐ๋ผ ์ถ”๊ฐ€ `source_prefix` ์ธ์ˆ˜๊ฐ€ ํ•„์š”ํ•˜๋ฉฐ, ์ด ํ”„๋กฌํ”„ํŠธ๋Š” ์š”์•ฝ ์ž‘์—…์ž„์„ T5์— ์•Œ๋ ค์ค๋‹ˆ๋‹ค. ```bash python examples/tensorflow/summarization/run_summarization.py \ --model_name_or_path google-t5/t5-small \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size 8 \ --per_device_eval_batch_size 16 \ --num_train_epochs 3 \ --do_train \ --do_eval ``` </tf> </frameworkcontent> ## ํ˜ผํ•ฉ ์ •๋ฐ€๋„(mixed precision)๋กœ ๋ถ„์‚ฐ ํ›ˆ๋ จํ•˜๊ธฐ[[distributed-training-and-mixed-precision]] [Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) ํด๋ž˜์Šค๋Š” ๋ถ„์‚ฐ ํ›ˆ๋ จ๊ณผ ํ˜ผํ•ฉ ์ •๋ฐ€๋„(mixed precision)๋ฅผ ์ง€์›ํ•˜๋ฏ€๋กœ ์Šคํฌ๋ฆฝํŠธ์—์„œ๋„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๋‘ ๊ฐ€์ง€ ๊ธฐ๋Šฅ์„ ๋ชจ๋‘ ํ™œ์„ฑํ™”ํ•˜๋ ค๋ฉด ๋‹ค์Œ ๋‘ ๊ฐ€์ง€๋ฅผ ์„ค์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: - `fp16` ์ธ์ˆ˜๋ฅผ ์ถ”๊ฐ€ํ•ด ํ˜ผํ•ฉ ์ •๋ฐ€๋„(mixed precision)๋ฅผ ํ™œ์„ฑํ™”ํ•ฉ๋‹ˆ๋‹ค. - `nproc_per_node` ์ธ์ˆ˜๋ฅผ ์ถ”๊ฐ€ํ•ด ์‚ฌ์šฉํ•  GPU ๊ฐœ์ˆ˜๋ฅผ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค. ```bash torchrun \ --nproc_per_node 8 pytorch/summarization/run_summarization.py \ --fp16 \ --model_name_or_path google-t5/t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ``` TensorFlow ์Šคํฌ๋ฆฝํŠธ๋Š” ๋ถ„์‚ฐ ํ›ˆ๋ จ์„ ์œ„ํ•ด [`MirroredStrategy`](https://www.tensorflow.org/guide/distributed_training#mirroredstrategy)๋ฅผ ํ™œ์šฉํ•˜๋ฉฐ, ํ›ˆ๋ จ ์Šคํฌ๋ฆฝํŠธ์— ์ธ์ˆ˜๋ฅผ ์ถ”๊ฐ€ํ•  ํ•„์š”๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค. ๋‹ค์ค‘ GPU ํ™˜๊ฒฝ์ด๋ผ๋ฉด, TensorFlow ์Šคํฌ๋ฆฝํŠธ๋Š” ๊ธฐ๋ณธ์ ์œผ๋กœ ์—ฌ๋Ÿฌ ๊ฐœ์˜ GPU๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ## TPU ์œ„์—์„œ ์Šคํฌ๋ฆฝํŠธ ์‹คํ–‰ํ•˜๊ธฐ[[run-a-script-on-a-tpu]] <frameworkcontent> <pt> Tensor Processing Units (TPUs)๋Š” ์„ฑ๋Šฅ์„ ๊ฐ€์†ํ™”ํ•˜๊ธฐ ์œ„ํ•ด ํŠน๋ณ„ํžˆ ์„ค๊ณ„๋˜์—ˆ์Šต๋‹ˆ๋‹ค. 
PyTorch๋Š” [XLA](https://www.tensorflow.org/xla) ๋”ฅ๋Ÿฌ๋‹ ์ปดํŒŒ์ผ๋Ÿฌ์™€ ํ•จ๊ป˜ TPU๋ฅผ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค(์ž์„ธํ•œ ๋‚ด์šฉ์€ [์—ฌ๊ธฐ](https://github.com/pytorch/xla/blob/master/README.md) ์ฐธ์กฐ). TPU๋ฅผ ์‚ฌ์šฉํ•˜๋ ค๋ฉด `xla_spawn.py` ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์‹คํ–‰ํ•˜๊ณ  `num_cores` ์ธ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์‚ฌ์šฉํ•˜๋ ค๋Š” TPU ์ฝ”์–ด ์ˆ˜๋ฅผ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค. ```bash python xla_spawn.py --num_cores 8 \ summarization/run_summarization.py \ --model_name_or_path google-t5/t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ``` </pt> <tf> Tensor Processing Units (TPUs)๋Š” ์„ฑ๋Šฅ์„ ๊ฐ€์†ํ™”ํ•˜๊ธฐ ์œ„ํ•ด ํŠน๋ณ„ํžˆ ์„ค๊ณ„๋˜์—ˆ์Šต๋‹ˆ๋‹ค. TensorFlow ์Šคํฌ๋ฆฝํŠธ๋Š” TPU๋ฅผ ํ›ˆ๋ จ์— ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด [`TPUStrategy`](https://www.tensorflow.org/guide/distributed_training#tpustrategy)๋ฅผ ํ™œ์šฉํ•ฉ๋‹ˆ๋‹ค. TPU๋ฅผ ์‚ฌ์šฉํ•˜๋ ค๋ฉด TPU ๋ฆฌ์†Œ์Šค์˜ ์ด๋ฆ„์„ `tpu` ์ธ์ˆ˜์— ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. ```bash python run_summarization.py \ --tpu name_of_tpu_resource \ --model_name_or_path google-t5/t5-small \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size 8 \ --per_device_eval_batch_size 16 \ --num_train_epochs 3 \ --do_train \ --do_eval ``` </tf> </frameworkcontent> ## ๐Ÿค— Accelerate๋กœ ์Šคํฌ๋ฆฝํŠธ ์‹คํ–‰ํ•˜๊ธฐ[[run-a-script-with-accelerate]] ๐Ÿค— [Accelerate](https://huggingface.co/docs/accelerate)๋Š” PyTorch ํ›ˆ๋ จ ๊ณผ์ •์— ๋Œ€ํ•œ ์™„์ „ํ•œ ๊ฐ€์‹œ์„ฑ์„ ์œ ์ง€ํ•˜๋ฉด์„œ ์—ฌ๋Ÿฌ ์œ ํ˜•์˜ ์„ค์ •(CPU ์ „์šฉ, ๋‹ค์ค‘ GPU, TPU)์—์„œ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•  ์ˆ˜ ์žˆ๋Š” ํ†ตํ•ฉ ๋ฐฉ๋ฒ•์„ ์ œ๊ณตํ•˜๋Š” PyTorch ์ „์šฉ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์ž…๋‹ˆ๋‹ค. ๐Ÿค— Accelerate๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: > ์ฐธ๊ณ : Accelerate๋Š” ๋น ๋ฅด๊ฒŒ ๊ฐœ๋ฐœ ์ค‘์ด๋ฏ€๋กœ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋ ค๋ฉด accelerate๋ฅผ ์„ค์น˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```bash pip install git+https://github.com/huggingface/accelerate ``` `run_summarization.py` ์Šคํฌ๋ฆฝํŠธ ๋Œ€์‹  `run_summarization_no_trainer.py` ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์‚ฌ์šฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๐Ÿค— Accelerate ํด๋ž˜์Šค๊ฐ€ ์ง€์›๋˜๋Š” ์Šคํฌ๋ฆฝํŠธ๋Š” ํด๋”์— `task_no_trainer.py` ํŒŒ์ผ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ ๋ช…๋ น์„ ์‹คํ–‰ํ•˜์—ฌ ๊ตฌ์„ฑ ํŒŒ์ผ์„ ์ƒ์„ฑํ•˜๊ณ  ์ €์žฅํ•ฉ๋‹ˆ๋‹ค: ```bash accelerate config ``` ์„ค์ •์„ ํ…Œ์ŠคํŠธํ•˜์—ฌ ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ๊ตฌ์„ฑ๋˜์—ˆ๋Š”์ง€ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค: ```bash accelerate test ``` ์ด์ œ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค: ```bash accelerate launch run_summarization_no_trainer.py \ --model_name_or_path google-t5/t5-small \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir ~/tmp/tst-summarization ``` ## ์‚ฌ์šฉ์ž ์ •์˜ ๋ฐ์ดํ„ฐ ์„ธํŠธ ์‚ฌ์šฉํ•˜๊ธฐ[[use-a-custom-dataset]] ์š”์•ฝ ์Šคํฌ๋ฆฝํŠธ๋Š” ์‚ฌ์šฉ์ž ์ง€์ • ๋ฐ์ดํ„ฐ ์„ธํŠธ๊ฐ€ CSV ๋˜๋Š” JSON ํŒŒ์ผ์ธ ๊ฒฝ์šฐ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. ์‚ฌ์šฉ์ž ์ง€์ • ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ์—๋Š” ๋ช‡ ๊ฐ€์ง€ ์ถ”๊ฐ€ ์ธ์ˆ˜๋ฅผ ์ง€์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: - `train_file`๊ณผ `validation_file`์€ ํ›ˆ๋ จ ๋ฐ ๊ฒ€์ฆ ํŒŒ์ผ์˜ ๊ฒฝ๋กœ๋ฅผ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค. - `text_column`์€ ์š”์•ฝํ•  ์ž…๋ ฅ ํ…์ŠคํŠธ์ž…๋‹ˆ๋‹ค. - `summary_column`์€ ์ถœ๋ ฅํ•  ๋Œ€์ƒ ํ…์ŠคํŠธ์ž…๋‹ˆ๋‹ค. 
์‚ฌ์šฉ์ž ์ง€์ • ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ์š”์•ฝ ์Šคํฌ๋ฆฝํŠธ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```bash python examples/pytorch/summarization/run_summarization.py \ --model_name_or_path google-t5/t5-small \ --do_train \ --do_eval \ --train_file path_to_csv_or_jsonlines_file \ --validation_file path_to_csv_or_jsonlines_file \ --text_column text_column_name \ --summary_column summary_column_name \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --overwrite_output_dir \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --predict_with_generate ``` ## ์Šคํฌ๋ฆฝํŠธ ํ…Œ์ŠคํŠธํ•˜๊ธฐ[[test-a-script]] ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋Œ€์ƒ์œผ๋กœ ํ›ˆ๋ จ์„ ์™„๋ฃŒํ•˜๋Š”๋ฐ ๊ฝค ์˜ค๋žœ ์‹œ๊ฐ„์ด ๊ฑธ๋ฆฌ๊ธฐ ๋•Œ๋ฌธ์—, ์ž‘์€ ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ ๋ชจ๋“  ๊ฒƒ์ด ์˜ˆ์ƒ๋Œ€๋กœ ์‹คํ–‰๋˜๋Š”์ง€ ํ™•์ธํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ ์ธ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์ตœ๋Œ€ ์ƒ˜ํ”Œ ์ˆ˜๋กœ ์ž˜๋ผ๋ƒ…๋‹ˆ๋‹ค: - `max_train_samples` - `max_eval_samples` - `max_predict_samples` ```bash python examples/pytorch/summarization/run_summarization.py \ --model_name_or_path google-t5/t5-small \ --max_train_samples 50 \ --max_eval_samples 50 \ --max_predict_samples 50 \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ``` ๋ชจ๋“  ์˜ˆ์ œ ์Šคํฌ๋ฆฝํŠธ๊ฐ€ `max_predict_samples` ์ธ์ˆ˜๋ฅผ ์ง€์›ํ•˜์ง€๋Š” ์•Š์Šต๋‹ˆ๋‹ค. ์Šคํฌ๋ฆฝํŠธ๊ฐ€ ์ด ์ธ์ˆ˜๋ฅผ ์ง€์›ํ•˜๋Š”์ง€ ํ™•์‹คํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ `-h` ์ธ์ˆ˜๋ฅผ ์ถ”๊ฐ€ํ•˜์—ฌ ํ™•์ธํ•˜์„ธ์š”: ```bash examples/pytorch/summarization/run_summarization.py -h ``` ## ์ฒดํฌํฌ์ธํŠธ(checkpoint)์—์„œ ํ›ˆ๋ จ ์ด์–ด์„œ ํ•˜๊ธฐ[[resume-training-from-checkpoint]] ๋˜ ๋‹ค๋ฅธ ์œ ์šฉํ•œ ์˜ต์…˜์€ ์ด์ „ ์ฒดํฌํฌ์ธํŠธ์—์„œ ํ›ˆ๋ จ์„ ์žฌ๊ฐœํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ํ›ˆ๋ จ์ด ์ค‘๋‹จ๋˜๋”๋ผ๋„ ์ฒ˜์Œ๋ถ€ํ„ฐ ๋‹ค์‹œ ์‹œ์ž‘ํ•˜์ง€ ์•Š๊ณ  ์ค‘๋‹จํ•œ ๋ถ€๋ถ„๋ถ€ํ„ฐ ๋‹ค์‹œ ์‹œ์ž‘ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ฒดํฌํฌ์ธํŠธ์—์„œ ํ›ˆ๋ จ์„ ์žฌ๊ฐœํ•˜๋Š” ๋ฐฉ๋ฒ•์—๋Š” ๋‘ ๊ฐ€์ง€๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ์ฒซ ๋ฒˆ์งธ๋Š” `output_dir previous_output_dir` ์ธ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ `output_dir`์— ์ €์žฅ๋œ ์ตœ์‹  ์ฒดํฌํฌ์ธํŠธ๋ถ€ํ„ฐ ํ›ˆ๋ จ์„ ์žฌ๊ฐœํ•˜๋Š” ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค. ์ด ๊ฒฝ์šฐ `overwrite_output_dir`์„ ์ œ๊ฑฐํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```bash python examples/pytorch/summarization/run_summarization.py --model_name_or_path google-t5/t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --output_dir previous_output_dir \ --predict_with_generate ``` ๋‘ ๋ฒˆ์งธ๋Š” `resume_from_checkpoint path_to_specific_checkpoint` ์ธ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํŠน์ • ์ฒดํฌํฌ์ธํŠธ ํด๋”์—์„œ ํ›ˆ๋ จ์„ ์žฌ๊ฐœํ•˜๋Š” ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค. ```bash python examples/pytorch/summarization/run_summarization.py --model_name_or_path google-t5/t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --resume_from_checkpoint path_to_specific_checkpoint \ --predict_with_generate ``` ## ๋ชจ๋ธ ๊ณต์œ ํ•˜๊ธฐ[[share-your-model]] ๋ชจ๋“  ์Šคํฌ๋ฆฝํŠธ๋Š” ์ตœ์ข… ๋ชจ๋ธ์„ [Model Hub](https://huggingface.co/models)์— ์—…๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
์‹œ์ž‘ํ•˜๊ธฐ ์ „์— Hugging Face์— ๋กœ๊ทธ์ธํ–ˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash huggingface-cli login ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์Šคํฌ๋ฆฝํŠธ์— `push_to_hub` ์ธ์ˆ˜๋ฅผ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ์ด ์ธ์ˆ˜๋Š” Hugging Face ์‚ฌ์šฉ์ž ์ด๋ฆ„๊ณผ `output_dir`์— ์ง€์ •๋œ ํด๋” ์ด๋ฆ„์œผ๋กœ ์ €์žฅ์†Œ๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ์ €์žฅ์†Œ์— ํŠน์ • ์ด๋ฆ„์„ ์ง€์ •ํ•˜๋ ค๋ฉด `push_to_hub_model_id` ์ธ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ์ €์žฅ์†Œ๋Š” ๋„ค์ž„์ŠคํŽ˜์ด์Šค ์•„๋ž˜์— ์ž๋™์œผ๋กœ ๋‚˜์—ด๋ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ ์˜ˆ๋Š” ํŠน์ • ์ €์žฅ์†Œ ์ด๋ฆ„์œผ๋กœ ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๋Š” ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค: ```bash python examples/pytorch/summarization/run_summarization.py --model_name_or_path google-t5/t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --push_to_hub \ --push_to_hub_model_id finetuned-t5-cnn_dailymail \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ```
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/ko/create_a_model.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๋งž์ถคํ˜• ์•„ํ‚คํ…์ฒ˜ ๋งŒ๋“ค๊ธฐ[[create-a-custom-architecture]] [`AutoClass`](model_doc/auto)๋Š” ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜๋ฅผ ์ž๋™์œผ๋กœ ์ถ”๋ก ํ•˜๊ณ  ๋ฏธ๋ฆฌ ํ•™์Šต๋œ configuration๊ณผ ๊ฐ€์ค‘์น˜๋ฅผ ๋‹ค์šด๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ ์ฒดํฌํฌ์ธํŠธ์— ๊ตฌ์• ๋ฐ›์ง€ ์•Š๋Š” ์ฝ”๋“œ๋ฅผ ์ƒ์„ฑํ•˜๋ ค๋ฉด `AutoClass`๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ํŠน์ • ๋ชจ๋ธ ํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ๋ณด๋‹ค ์„ธ๋ฐ€ํ•˜๊ฒŒ ์ œ์–ดํ•˜๊ณ ์ž ํ•˜๋Š” ์‚ฌ์šฉ์ž๋Š” ๋ช‡ ๊ฐ€์ง€ ๊ธฐ๋ณธ ํด๋ž˜์Šค๋งŒ์œผ๋กœ ์ปค์Šคํ…€ ๐Ÿค— Transformers ๋ชจ๋ธ์„ ์ƒ์„ฑํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Š” ๐Ÿค— Transformers ๋ชจ๋ธ์„ ์—ฐ๊ตฌ, ๊ต์œก ๋˜๋Š” ์‹คํ—˜ํ•˜๋Š” ๋ฐ ๊ด€์‹ฌ์ด ์žˆ๋Š” ๋ชจ๋“  ์‚ฌ์šฉ์ž์—๊ฒŒ ํŠนํžˆ ์œ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” 'AutoClass'๋ฅผ ์‚ฌ์šฉํ•˜์ง€ ์•Š๊ณ  ์ปค์Šคํ…€ ๋ชจ๋ธ์„ ๋งŒ๋“œ๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ์•Œ์•„๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: - ๋ชจ๋ธ configuration์„ ๊ฐ€์ ธ์˜ค๊ณ  ์‚ฌ์šฉ์ž ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค. - ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. - ํ…์ŠคํŠธ์— ์‚ฌ์šฉํ•  ๋Š๋ฆฌ๊ฑฐ๋‚˜ ๋น ๋ฅธ ํ† ํฐํ™”๊ธฐ๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค. - ๋น„์ „ ์ž‘์—…์„ ์œ„ํ•œ ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. - ์˜ค๋””์˜ค ์ž‘์—…์„ ์œ„ํ•œ ํŠน์„ฑ ์ถ”์ถœ๊ธฐ๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. - ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ ์ž‘์—…์šฉ ํ”„๋กœ์„ธ์„œ๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ## Configuration[[configuration]] [configuration](main_classes/configuration)์€ ๋ชจ๋ธ์˜ ํŠน์ • ์†์„ฑ์„ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. ๊ฐ ๋ชจ๋ธ ๊ตฌ์„ฑ์—๋Š” ์„œ๋กœ ๋‹ค๋ฅธ ์†์„ฑ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ๋ชจ๋“  NLP ๋ชจ๋ธ์—๋Š” `hidden_size`, `num_attention_heads`, `num_hidden_layers` ๋ฐ `vocab_size` ์†์„ฑ์ด ๊ณตํ†ต์œผ๋กœ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ์†์„ฑ์€ ๋ชจ๋ธ์„ ๊ตฌ์„ฑํ•  attention heads ๋˜๋Š” hidden layers์˜ ์ˆ˜๋ฅผ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค. [DistilBERT](model_doc/distilbert) ์†์„ฑ์„ ๊ฒ€์‚ฌํ•˜๊ธฐ ์œ„ํ•ด [`DistilBertConfig`]์— ์ ‘๊ทผํ•˜์—ฌ ์ž์„ธํžˆ ์‚ดํŽด๋ด…๋‹ˆ๋‹ค: ```py >>> from transformers import DistilBertConfig >>> config = DistilBertConfig() >>> print(config) DistilBertConfig { "activation": "gelu", "attention_dropout": 0.1, "dim": 768, "dropout": 0.1, "hidden_dim": 3072, "initializer_range": 0.02, "max_position_embeddings": 512, "model_type": "distilbert", "n_heads": 12, "n_layers": 6, "pad_token_id": 0, "qa_dropout": 0.1, "seq_classif_dropout": 0.2, "sinusoidal_pos_embds": false, "transformers_version": "4.16.2", "vocab_size": 30522 } ``` [`DistilBertConfig`]๋Š” ๊ธฐ๋ณธ [`DistilBertModel`]์„ ๋นŒ๋“œํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋˜๋Š” ๋ชจ๋“  ๊ธฐ๋ณธ ์†์„ฑ์„ ํ‘œ์‹œํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋“  ์†์„ฑ์€ ์ปค์Šคํ„ฐ๋งˆ์ด์ง•์ด ๊ฐ€๋Šฅํ•˜๋ฏ€๋กœ ์‹คํ—˜์„ ์œ„ํ•œ ๊ณต๊ฐ„์„ ๋งŒ๋“ค ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๊ธฐ๋ณธ ๋ชจ๋ธ์„ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ปค์Šคํ„ฐ๋งˆ์ด์ฆˆํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: - `activation` ํŒŒ๋ผ๋ฏธํ„ฐ๋กœ ๋‹ค๋ฅธ ํ™œ์„ฑํ™” ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•ด ๋ณด์„ธ์š”. - `attention_dropout` ํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์–ดํ…์…˜ ํ™•๋ฅ ์— ๋” ๋†’์€ ๋“œ๋กญ์•„์›ƒ ๋น„์œจ์„ ์‚ฌ์šฉํ•˜์„ธ์š”. 
```py >>> my_config = DistilBertConfig(activation="relu", attention_dropout=0.4) >>> print(my_config) DistilBertConfig { "activation": "relu", "attention_dropout": 0.4, "dim": 768, "dropout": 0.1, "hidden_dim": 3072, "initializer_range": 0.02, "max_position_embeddings": 512, "model_type": "distilbert", "n_heads": 12, "n_layers": 6, "pad_token_id": 0, "qa_dropout": 0.1, "seq_classif_dropout": 0.2, "sinusoidal_pos_embds": false, "transformers_version": "4.16.2", "vocab_size": 30522 } ``` ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ ์†์„ฑ์€ [`~PretrainedConfig.from_pretrained`] ํ•จ์ˆ˜์—์„œ ์ˆ˜์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> my_config = DistilBertConfig.from_pretrained("distilbert/distilbert-base-uncased", activation="relu", attention_dropout=0.4) ``` ๋ชจ๋ธ ๊ตฌ์„ฑ์ด ๋งŒ์กฑ์Šค๋Ÿฌ์šฐ๋ฉด [`~PretrainedConfig.save_pretrained`]๋กœ ์ €์žฅํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์„ค์ • ํŒŒ์ผ์€ ์ง€์ •๋œ ์ž‘์—… ๊ฒฝ๋กœ์— JSON ํŒŒ์ผ๋กœ ์ €์žฅ๋ฉ๋‹ˆ๋‹ค: ```py >>> my_config.save_pretrained(save_directory="./your_model_save_path") ``` configuration ํŒŒ์ผ์„ ์žฌ์‚ฌ์šฉํ•˜๋ ค๋ฉด [`~PretrainedConfig.from_pretrained`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๊ฐ€์ ธ์˜ค์„ธ์š”: ```py >>> my_config = DistilBertConfig.from_pretrained("./your_model_save_path/config.json") ``` <Tip> configuration ํŒŒ์ผ์„ ๋”•์…”๋„ˆ๋ฆฌ๋กœ ์ €์žฅํ•˜๊ฑฐ๋‚˜ ์‚ฌ์šฉ์ž ์ •์˜ configuration ์†์„ฑ๊ณผ ๊ธฐ๋ณธ configuration ์†์„ฑ์˜ ์ฐจ์ด์ ๋งŒ ์ €์žฅํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค! ์ž์„ธํ•œ ๋‚ด์šฉ์€ [configuration](main_classes/configuration) ๋ฌธ์„œ๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. </Tip> ## ๋ชจ๋ธ[[model]] ๋‹ค์Œ ๋‹จ๊ณ„๋Š” [๋ชจ๋ธ(model)](main_classes/models)์„ ๋งŒ๋“œ๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋Š์Šจํ•˜๊ฒŒ ์•„ํ‚คํ…์ฒ˜๋ผ๊ณ ๋„ ๋ถˆ๋ฆฌ๋Š” ๋ชจ๋ธ์€ ๊ฐ ๊ณ„์ธต์ด ์ˆ˜ํ–‰ํ•˜๋Š” ๋™์ž‘๊ณผ ๋ฐœ์ƒํ•˜๋Š” ์ž‘์—…์„ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. configuration์˜ `num_hidden_layers`์™€ ๊ฐ™์€ ์†์„ฑ์€ ์•„ํ‚คํ…์ฒ˜๋ฅผ ์ •์˜ํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ๋ชจ๋“  ๋ชจ๋ธ์€ ๊ธฐ๋ณธ ํด๋ž˜์Šค [`PreTrainedModel`]๊ณผ ์ž…๋ ฅ ์ž„๋ฒ ๋”ฉ ํฌ๊ธฐ ์กฐ์ • ๋ฐ ์…€ํ”„ ์–ดํ…์…˜ ํ—ค๋“œ ๊ฐ€์ง€ ์น˜๊ธฐ์™€ ๊ฐ™์€ ๋ช‡ ๊ฐ€์ง€ ์ผ๋ฐ˜์ ์ธ ๋ฉ”์†Œ๋“œ๋ฅผ ๊ณต์œ ํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ ๋ชจ๋“  ๋ชจ๋ธ์€ [`torch.nn.Module`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html), [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) ๋˜๋Š” [`flax.linen.Module`](https://flax.readthedocs.io/en/latest/api_reference/flax.linen/module.html)์˜ ์„œ๋ธŒํด๋ž˜์Šค์ด๊ธฐ๋„ ํ•ฉ๋‹ˆ๋‹ค. ์ฆ‰, ๋ชจ๋ธ์€ ๊ฐ ํ”„๋ ˆ์ž„์›Œํฌ์˜ ์‚ฌ์šฉ๋ฒ•๊ณผ ํ˜ธํ™˜๋ฉ๋‹ˆ๋‹ค. <frameworkcontent> <pt> ์‚ฌ์šฉ์ž ์ง€์ • configuration ์†์„ฑ์„ ๋ชจ๋ธ์— ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> from transformers import DistilBertModel >>> my_config = DistilBertConfig.from_pretrained("./your_model_save_path/config.json") >>> model = DistilBertModel(my_config) ``` ์ด์ œ ์‚ฌ์ „ ํ•™์Šต๋œ ๊ฐ€์ค‘์น˜ ๋Œ€์‹  ์ž„์˜์˜ ๊ฐ’์„ ๊ฐ€์ง„ ๋ชจ๋ธ์ด ์ƒ์„ฑ๋ฉ๋‹ˆ๋‹ค. ์ด ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๊ธฐ ์ „๊นŒ์ง€๋Š” ์œ ์šฉํ•˜๊ฒŒ ์‚ฌ์šฉํ•  ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค. ํ›ˆ๋ จ์€ ๋น„์šฉ๊ณผ ์‹œ๊ฐ„์ด ๋งŽ์ด ์†Œ์š”๋˜๋Š” ํ”„๋กœ์„ธ์Šค์ž…๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ ํ›ˆ๋ จ์— ํ•„์š”ํ•œ ๋ฆฌ์†Œ์Šค์˜ ์ผ๋ถ€๋งŒ ์‚ฌ์šฉํ•˜๋ฉด์„œ ๋” ๋‚˜์€ ๊ฒฐ๊ณผ๋ฅผ ๋” ๋นจ๋ฆฌ ์–ป์œผ๋ ค๋ฉด ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์„ [`~PreTrainedModel.from_pretrained`]๋กœ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค: ```py >>> model = DistilBertModel.from_pretrained("distilbert/distilbert-base-uncased") ``` ๐Ÿค— Transformers์—์„œ ์ œ๊ณตํ•œ ๋ชจ๋ธ์˜ ์‚ฌ์ „ ํ•™์Šต๋œ ๊ฐ€์ค‘์น˜๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ ๊ธฐ๋ณธ ๋ชจ๋ธ configuration์„ ์ž๋™์œผ๋กœ ๋ถˆ๋Ÿฌ์˜ต๋‹ˆ๋‹ค. 
๊ทธ๋Ÿฌ๋‚˜ ์›ํ•˜๋Š” ๊ฒฝ์šฐ ๊ธฐ๋ณธ ๋ชจ๋ธ configuration ์†์„ฑ์˜ ์ผ๋ถ€ ๋˜๋Š” ์ „๋ถ€๋ฅผ ์‚ฌ์šฉ์ž ์ง€์ •์œผ๋กœ ๋ฐ”๊ฟ€ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> model = DistilBertModel.from_pretrained("distilbert/distilbert-base-uncased", config=my_config) ``` </pt> <tf> ์‚ฌ์šฉ์ž ์ง€์ • configuration ์†์„ฑ์„ ๋ชจ๋ธ์— ๋ถˆ๋Ÿฌ์˜ต๋‹ˆ๋‹ค: ```py >>> from transformers import TFDistilBertModel >>> my_config = DistilBertConfig.from_pretrained("./your_model_save_path/my_config.json") >>> tf_model = TFDistilBertModel(my_config) ``` ์ด์ œ ์‚ฌ์ „ ํ•™์Šต๋œ ๊ฐ€์ค‘์น˜ ๋Œ€์‹  ์ž„์˜์˜ ๊ฐ’์„ ๊ฐ€์ง„ ๋ชจ๋ธ์ด ์ƒ์„ฑ๋ฉ๋‹ˆ๋‹ค. ์ด ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๊ธฐ ์ „๊นŒ์ง€๋Š” ์œ ์šฉํ•˜๊ฒŒ ์‚ฌ์šฉํ•  ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค. ํ›ˆ๋ จ์€ ๋น„์šฉ๊ณผ ์‹œ๊ฐ„์ด ๋งŽ์ด ์†Œ์š”๋˜๋Š” ํ”„๋กœ์„ธ์Šค์ž…๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ ํ›ˆ๋ จ์— ํ•„์š”ํ•œ ๋ฆฌ์†Œ์Šค์˜ ์ผ๋ถ€๋งŒ ์‚ฌ์šฉํ•˜๋ฉด์„œ ๋” ๋‚˜์€ ๊ฒฐ๊ณผ๋ฅผ ๋” ๋นจ๋ฆฌ ์–ป์œผ๋ ค๋ฉด ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์„ [`~TFPreTrainedModel.from_pretrained`]๋กœ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค: ```py >>> tf_model = TFDistilBertModel.from_pretrained("distilbert/distilbert-base-uncased") ``` ๐Ÿค— Transformers์—์„œ ์ œ๊ณตํ•œ ๋ชจ๋ธ์˜ ์‚ฌ์ „ ํ•™์Šต๋œ ๊ฐ€์ค‘์น˜๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ ๊ธฐ๋ณธ ๋ชจ๋ธ configuration์„ ์ž๋™์œผ๋กœ ๋ถˆ๋Ÿฌ์˜ต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์›ํ•˜๋Š” ๊ฒฝ์šฐ ๊ธฐ๋ณธ ๋ชจ๋ธ configuration ์†์„ฑ์˜ ์ผ๋ถ€ ๋˜๋Š” ์ „๋ถ€๋ฅผ ์‚ฌ์šฉ์ž ์ง€์ •์œผ๋กœ ๋ฐ”๊ฟ€ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> tf_model = TFDistilBertModel.from_pretrained("distilbert/distilbert-base-uncased", config=my_config) ``` </tf> </frameworkcontent> ### ๋ชจ๋ธ ํ—ค๋“œ[[model-heads]] ์ด ์‹œ์ ์—์„œ *์€๋‹‰ ์ƒํƒœ(hidden state)*๋ฅผ ์ถœ๋ ฅํ•˜๋Š” ๊ธฐ๋ณธ DistilBERT ๋ชจ๋ธ์„ ๊ฐ–๊ฒŒ ๋ฉ๋‹ˆ๋‹ค. ์€๋‹‰ ์ƒํƒœ๋Š” ์ตœ์ข… ์ถœ๋ ฅ์„ ์ƒ์„ฑํ•˜๊ธฐ ์œ„ํ•ด ๋ชจ๋ธ ํ—ค๋“œ์— ์ž…๋ ฅ์œผ๋กœ ์ „๋‹ฌ๋ฉ๋‹ˆ๋‹ค. ๐Ÿค— Transformers๋Š” ๋ชจ๋ธ์ด ํ•ด๋‹น ์ž‘์—…์„ ์ง€์›ํ•˜๋Š” ํ•œ ๊ฐ ์ž‘์—…๋งˆ๋‹ค ๋‹ค๋ฅธ ๋ชจ๋ธ ํ—ค๋“œ๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค(์ฆ‰, ๋ฒˆ์—ญ๊ณผ ๊ฐ™์€ ์‹œํ€€์Šค ๊ฐ„ ์ž‘์—…์—๋Š” DistilBERT๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์—†์Œ). <frameworkcontent> <pt> ์˜ˆ๋ฅผ ๋“ค์–ด, [`DistilBertForSequenceClassification`]์€ ์‹œํ€€์Šค ๋ถ„๋ฅ˜ ํ—ค๋“œ๊ฐ€ ์žˆ๋Š” ๊ธฐ๋ณธ DistilBERT ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. ์‹œํ€€์Šค ๋ถ„๋ฅ˜ ํ—ค๋“œ๋Š” ํ’€๋ง๋œ ์ถœ๋ ฅ ์œ„์— ์žˆ๋Š” ์„ ํ˜• ๋ ˆ์ด์–ด์ž…๋‹ˆ๋‹ค. ```py >>> from transformers import DistilBertForSequenceClassification >>> model = DistilBertForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased") ``` ๋‹ค๋ฅธ ๋ชจ๋ธ ํ—ค๋“œ๋กœ ์ „ํ™˜ํ•˜์—ฌ ์ด ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋‹ค๋ฅธ ์ž‘์—…์— ์‰ฝ๊ฒŒ ์žฌ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์งˆ์˜์‘๋‹ต ์ž‘์—…์˜ ๊ฒฝ์šฐ, [`DistilBertForQuestionAnswering`] ๋ชจ๋ธ ํ—ค๋“œ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์งˆ์˜์‘๋‹ต ํ—ค๋“œ๋Š” ์ˆจ๊ฒจ์ง„ ์ƒํƒœ ์ถœ๋ ฅ ์œ„์— ์„ ํ˜• ๋ ˆ์ด์–ด๊ฐ€ ์žˆ๋‹ค๋Š” ์ ์„ ์ œ์™ธํ•˜๋ฉด ์‹œํ€€์Šค ๋ถ„๋ฅ˜ ํ—ค๋“œ์™€ ์œ ์‚ฌํ•ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import DistilBertForQuestionAnswering >>> model = DistilBertForQuestionAnswering.from_pretrained("distilbert/distilbert-base-uncased") ``` </pt> <tf> ์˜ˆ๋ฅผ ๋“ค์–ด, [`TFDistilBertForSequenceClassification`]์€ ์‹œํ€€์Šค ๋ถ„๋ฅ˜ ํ—ค๋“œ๊ฐ€ ์žˆ๋Š” ๊ธฐ๋ณธ DistilBERT ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. ์‹œํ€€์Šค ๋ถ„๋ฅ˜ ํ—ค๋“œ๋Š” ํ’€๋ง๋œ ์ถœ๋ ฅ ์œ„์— ์žˆ๋Š” ์„ ํ˜• ๋ ˆ์ด์–ด์ž…๋‹ˆ๋‹ค. ```py >>> from transformers import TFDistilBertForSequenceClassification >>> tf_model = TFDistilBertForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased") ``` ๋‹ค๋ฅธ ๋ชจ๋ธ ํ—ค๋“œ๋กœ ์ „ํ™˜ํ•˜์—ฌ ์ด ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋‹ค๋ฅธ ์ž‘์—…์— ์‰ฝ๊ฒŒ ์žฌ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
์งˆ์˜์‘๋‹ต ์ž‘์—…์˜ ๊ฒฝ์šฐ, [`TFDistilBertForQuestionAnswering`] ๋ชจ๋ธ ํ—ค๋“œ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์งˆ์˜์‘๋‹ต ํ—ค๋“œ๋Š” ์ˆจ๊ฒจ์ง„ ์ƒํƒœ ์ถœ๋ ฅ ์œ„์— ์„ ํ˜• ๋ ˆ์ด์–ด๊ฐ€ ์žˆ๋‹ค๋Š” ์ ์„ ์ œ์™ธํ•˜๋ฉด ์‹œํ€€์Šค ๋ถ„๋ฅ˜ ํ—ค๋“œ์™€ ์œ ์‚ฌํ•ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import TFDistilBertForQuestionAnswering >>> tf_model = TFDistilBertForQuestionAnswering.from_pretrained("distilbert/distilbert-base-uncased") ``` </tf> </frameworkcontent> ## ํ† ํฌ๋‚˜์ด์ €[[tokenizer]] ํ…์ŠคํŠธ ๋ฐ์ดํ„ฐ์— ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๊ธฐ ์ „์— ๋งˆ์ง€๋ง‰์œผ๋กœ ํ•„์š”ํ•œ ๊ธฐ๋ณธ ํด๋ž˜์Šค๋Š” ์›์‹œ ํ…์ŠคํŠธ๋ฅผ ํ…์„œ๋กœ ๋ณ€ํ™˜ํ•˜๋Š” [ํ† ํฌ๋‚˜์ด์ €](main_classes/tokenizer)์ž…๋‹ˆ๋‹ค. ๐Ÿค— Transformers์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ํ† ํฌ๋‚˜์ด์ €๋Š” ๋‘ ๊ฐ€์ง€ ์œ ํ˜•์ด ์žˆ์Šต๋‹ˆ๋‹ค: - [`PreTrainedTokenizer`]: ํŒŒ์ด์ฌ์œผ๋กœ ๊ตฌํ˜„๋œ ํ† ํฌ๋‚˜์ด์ €์ž…๋‹ˆ๋‹ค. - [`PreTrainedTokenizerFast`]: Rust ๊ธฐ๋ฐ˜ [๐Ÿค— Tokenizer](https://huggingface.co/docs/tokenizers/python/latest/) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋กœ ๋งŒ๋“ค์–ด์ง„ ํ† ํฌ๋‚˜์ด์ €์ž…๋‹ˆ๋‹ค. ์ด ํ† ํฌ๋‚˜์ด์ €๋Š” Rust๋กœ ๊ตฌํ˜„๋˜์–ด ๋ฐฐ์น˜ ํ† ํฐํ™”์—์„œ ํŠนํžˆ ๋น ๋ฆ…๋‹ˆ๋‹ค. ๋น ๋ฅธ ํ† ํฌ๋‚˜์ด์ €๋Š” ํ† ํฐ์„ ์›๋ž˜ ๋‹จ์–ด๋‚˜ ๋ฌธ์ž์— ๋งคํ•‘ํ•˜๋Š” *์˜คํ”„์…‹ ๋งคํ•‘*๊ณผ ๊ฐ™์€ ์ถ”๊ฐ€ ๋ฉ”์†Œ๋“œ๋„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ๋‘ ํ† ํฌ๋‚˜์ด์ € ๋ชจ๋‘ ์ธ์ฝ”๋”ฉ ๋ฐ ๋””์ฝ”๋”ฉ, ์ƒˆ ํ† ํฐ ์ถ”๊ฐ€, ํŠน์ˆ˜ ํ† ํฐ ๊ด€๋ฆฌ์™€ ๊ฐ™์€ ์ผ๋ฐ˜์ ์ธ ๋ฐฉ๋ฒ•์„ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. <Tip warning={true}> ๋ชจ๋“  ๋ชจ๋ธ์ด ๋น ๋ฅธ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์ง€์›ํ•˜๋Š” ๊ฒƒ์€ ์•„๋‹™๋‹ˆ๋‹ค. ์ด [ํ‘œ](index#supported-frameworks)์—์„œ ๋ชจ๋ธ์˜ ๋น ๋ฅธ ํ† ํฌ๋‚˜์ด์ € ์ง€์› ์—ฌ๋ถ€๋ฅผ ํ™•์ธํ•˜์„ธ์š”. </Tip> ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์ง์ ‘ ํ•™์Šตํ•œ ๊ฒฝ์šฐ, *์–ดํœ˜(vocabulary)* ํŒŒ์ผ์—์„œ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๋งŒ๋“ค ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import DistilBertTokenizer >>> my_tokenizer = DistilBertTokenizer(vocab_file="my_vocab_file.txt", do_lower_case=False, padding_side="left") ``` ์‚ฌ์šฉ์ž ์ง€์ • ํ† ํฌ๋‚˜์ด์ €์˜ ์–ดํœ˜๋Š” ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์˜ ํ† ํฌ๋‚˜์ด์ €์—์„œ ์ƒ์„ฑ๋œ ์–ดํœ˜์™€ ๋‹ค๋ฅผ ์ˆ˜ ์žˆ๋‹ค๋Š” ์ ์„ ๊ธฐ์–ตํ•˜๋Š” ๊ฒƒ์ด ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์˜ ์–ดํœ˜๋ฅผ ์‚ฌ์šฉํ•ด์•ผ ํ•˜๋ฉฐ, ๊ทธ๋ ‡์ง€ ์•Š์œผ๋ฉด ์ž…๋ ฅ์ด ์˜๋ฏธ๋ฅผ ๊ฐ–์ง€ ๋ชปํ•ฉ๋‹ˆ๋‹ค. [`DistilBertTokenizer`] ํด๋ž˜์Šค๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์˜ ์–ดํœ˜๋กœ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import DistilBertTokenizer >>> slow_tokenizer = DistilBertTokenizer.from_pretrained("distilbert/distilbert-base-uncased") ``` [`DistilBertTokenizerFast`] ํด๋ž˜์Šค๋กœ ๋น ๋ฅธ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import DistilBertTokenizerFast >>> fast_tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert/distilbert-base-uncased") ``` <Tip> [`AutoTokenizer`]๋Š” ๊ธฐ๋ณธ์ ์œผ๋กœ ๋น ๋ฅธ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๊ฐ€์ ธ์˜ค๋ ค๊ณ  ํ•ฉ๋‹ˆ๋‹ค. ์ด ๋™์ž‘์„ ๋น„ํ™œ์„ฑํ™”ํ•˜๋ ค๋ฉด `from_pretrained`์—์„œ `use_fast=False`๋ฅผ ์„ค์ •ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. </Tip> ## ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ[[image-processor]] ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ(image processor)๋Š” ๋น„์ „ ์ž…๋ ฅ์„ ์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค. ๊ธฐ๋ณธ [`~image_processing_utils.ImageProcessingMixin`] ํด๋ž˜์Šค์—์„œ ์ƒ์†ํ•ฉ๋‹ˆ๋‹ค. ์‚ฌ์šฉํ•˜๋ ค๋ฉด ์‚ฌ์šฉ ์ค‘์ธ ๋ชจ๋ธ๊ณผ ์—ฐ๊ฒฐ๋œ ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. 
์˜ˆ๋ฅผ ๋“ค์–ด, ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜์— [ViT](model_doc/vit)๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ ๊ธฐ๋ณธ [`ViTImageProcessor`]๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import ViTImageProcessor >>> vit_extractor = ViTImageProcessor() >>> print(vit_extractor) ViTImageProcessor { "do_normalize": true, "do_resize": true, "feature_extractor_type": "ViTImageProcessor", "image_mean": [ 0.5, 0.5, 0.5 ], "image_std": [ 0.5, 0.5, 0.5 ], "resample": 2, "size": 224 } ``` <Tip> ์‚ฌ์šฉ์ž ์ง€์ •์„ ์›ํ•˜์ง€ ์•Š๋Š” ๊ฒฝ์šฐ `from_pretrained` ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์˜ ๊ธฐ๋ณธ ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ๋ถˆ๋Ÿฌ์˜ค๋ฉด ๋ฉ๋‹ˆ๋‹ค. </Tip> ์‚ฌ์šฉ์ž ์ง€์ • ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ์ƒ์„ฑํ•˜๋ ค๋ฉด [`ViTImageProcessor`] ํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ˆ˜์ •ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import ViTImageProcessor >>> my_vit_extractor = ViTImageProcessor(resample="PIL.Image.BOX", do_normalize=False, image_mean=[0.3, 0.3, 0.3]) >>> print(my_vit_extractor) ViTImageProcessor { "do_normalize": false, "do_resize": true, "feature_extractor_type": "ViTImageProcessor", "image_mean": [ 0.3, 0.3, 0.3 ], "image_std": [ 0.5, 0.5, 0.5 ], "resample": "PIL.Image.BOX", "size": 224 } ``` ## ํŠน์„ฑ ์ถ”์ถœ๊ธฐ[[feature-extractor]] ํŠน์„ฑ ์ถ”์ถœ๊ธฐ(feature extractor)๋Š” ์˜ค๋””์˜ค ์ž…๋ ฅ์„ ์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค. ๊ธฐ๋ณธ [`~feature_extraction_utils.FeatureExtractionMixin`] ํด๋ž˜์Šค์—์„œ ์ƒ์†๋˜๋ฉฐ, ์˜ค๋””์˜ค ์ž…๋ ฅ์„ ์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด [`SequenceFeatureExtractor`] ํด๋ž˜์Šค์—์„œ ์ƒ์†ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์‚ฌ์šฉํ•˜๋ ค๋ฉด ์‚ฌ์šฉ ์ค‘์ธ ๋ชจ๋ธ๊ณผ ์—ฐ๊ฒฐ๋œ ํŠน์„ฑ ์ถ”์ถœ๊ธฐ๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ์˜ค๋””์˜ค ๋ถ„๋ฅ˜์— [Wav2Vec2](model_doc/wav2vec2)๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ ๊ธฐ๋ณธ [`Wav2Vec2FeatureExtractor`]๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import Wav2Vec2FeatureExtractor >>> w2v2_extractor = Wav2Vec2FeatureExtractor() >>> print(w2v2_extractor) Wav2Vec2FeatureExtractor { "do_normalize": true, "feature_extractor_type": "Wav2Vec2FeatureExtractor", "feature_size": 1, "padding_side": "right", "padding_value": 0.0, "return_attention_mask": false, "sampling_rate": 16000 } ``` <Tip> ์‚ฌ์šฉ์ž ์ง€์ •์ด ํ•„์š”ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ `from_pretrained` ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์˜ ๊ธฐ๋ณธ ํŠน์„ฑ ์ถ”์ถœ๊ธฐ ใ…๊ฐœ๋ณ€์ˆ˜๋ฅผ ๋ถˆ๋Ÿฌ ์˜ค๋ฉด ๋ฉ๋‹ˆ๋‹ค. </Tip> ์‚ฌ์šฉ์ž ์ง€์ • ํŠน์„ฑ ์ถ”์ถœ๊ธฐ๋ฅผ ๋งŒ๋“ค๋ ค๋ฉด [`Wav2Vec2FeatureExtractor`] ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์ˆ˜์ •ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import Wav2Vec2FeatureExtractor >>> w2v2_extractor = Wav2Vec2FeatureExtractor(sampling_rate=8000, do_normalize=False) >>> print(w2v2_extractor) Wav2Vec2FeatureExtractor { "do_normalize": false, "feature_extractor_type": "Wav2Vec2FeatureExtractor", "feature_size": 1, "padding_side": "right", "padding_value": 0.0, "return_attention_mask": false, "sampling_rate": 8000 } ``` ## ํ”„๋กœ์„ธ์„œ[[processor]] ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ ์ž‘์—…์„ ์ง€์›ํ•˜๋Š” ๋ชจ๋ธ์˜ ๊ฒฝ์šฐ, ๐Ÿค— Transformers๋Š” ํŠน์„ฑ ์ถ”์ถœ๊ธฐ ๋ฐ ํ† ํฌ๋‚˜์ด์ €์™€ ๊ฐ™์€ ์ฒ˜๋ฆฌ ํด๋ž˜์Šค๋ฅผ ๋‹จ์ผ ๊ฐ์ฒด๋กœ ํŽธ๋ฆฌํ•˜๊ฒŒ ๋ž˜ํ•‘ํ•˜๋Š” ํ”„๋กœ์„ธ์„œ ํด๋ž˜์Šค๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ์ž๋™ ์Œ์„ฑ ์ธ์‹ ์ž‘์—…(Automatic Speech Recognition task (ASR))์— [`Wav2Vec2Processor`]๋ฅผ ์‚ฌ์šฉํ•œ๋‹ค๊ณ  ๊ฐ€์ •ํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ์ž๋™ ์Œ์„ฑ ์ธ์‹ ์ž‘์—…์€ ์˜ค๋””์˜ค๋ฅผ ํ…์ŠคํŠธ๋กœ ๋ณ€ํ™˜ํ•˜๋ฏ€๋กœ ํŠน์„ฑ ์ถ”์ถœ๊ธฐ์™€ ํ† ํฌ๋‚˜์ด์ €๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. 
์˜ค๋””์˜ค ์ž…๋ ฅ์„ ์ฒ˜๋ฆฌํ•  ํŠน์„ฑ ์ถ”์ถœ๊ธฐ๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค: ```py >>> from transformers import Wav2Vec2FeatureExtractor >>> feature_extractor = Wav2Vec2FeatureExtractor(padding_value=1.0, do_normalize=True) ``` ํ…์ŠคํŠธ ์ž…๋ ฅ์„ ์ฒ˜๋ฆฌํ•  ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค: ```py >>> from transformers import Wav2Vec2CTCTokenizer >>> tokenizer = Wav2Vec2CTCTokenizer(vocab_file="my_vocab_file.txt") ``` [`Wav2Vec2Processor`]์—์„œ ํŠน์„ฑ ์ถ”์ถœ๊ธฐ์™€ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๊ฒฐํ•ฉํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import Wav2Vec2Processor >>> processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer) ``` configuration๊ณผ ๋ชจ๋ธ์ด๋ผ๋Š” ๋‘ ๊ฐ€์ง€ ๊ธฐ๋ณธ ํด๋ž˜์Šค์™€ ์ถ”๊ฐ€ ์ „์ฒ˜๋ฆฌ ํด๋ž˜์Šค(ํ† ํฌ๋‚˜์ด์ €, ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ, ํŠน์„ฑ ์ถ”์ถœ๊ธฐ ๋˜๋Š” ํ”„๋กœ์„ธ์„œ)๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ๐Ÿค— Transformers์—์„œ ์ง€์›ํ•˜๋Š” ๋ชจ๋“  ๋ชจ๋ธ์„ ๋งŒ๋“ค ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๊ฐ ๊ธฐ๋ณธ ํด๋ž˜์Šค๋Š” ๊ตฌ์„ฑ์ด ๊ฐ€๋Šฅํ•˜๋ฏ€๋กœ ์›ํ•˜๋Š” ํŠน์ • ์†์„ฑ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•™์Šต์„ ์œ„ํ•ด ๋ชจ๋ธ์„ ์‰ฝ๊ฒŒ ์„ค์ •ํ•˜๊ฑฐ๋‚˜ ๊ธฐ์กด์˜ ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์„ ์ˆ˜์ •ํ•˜์—ฌ ๋ฏธ์„ธ ์กฐ์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/ko/pipeline_webserver.md
<!--โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์›น ์„œ๋ฒ„๋ฅผ ์œ„ํ•œ ํŒŒ์ดํ”„๋ผ์ธ ์‚ฌ์šฉํ•˜๊ธฐ[[using_pipelines_for_a_webserver]] <Tip> ์ถ”๋ก  ์—”์ง„์„ ๋งŒ๋“œ๋Š” ๊ฒƒ์€ ๋ณต์žกํ•œ ์ฃผ์ œ์ด๋ฉฐ, "์ตœ์„ ์˜" ์†”๋ฃจ์…˜์€ ๋ฌธ์ œ ๊ณต๊ฐ„์— ๋”ฐ๋ผ ๋‹ฌ๋ผ์งˆ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์Šต๋‹ˆ๋‹ค. CPU ๋˜๋Š” GPU๋ฅผ ์‚ฌ์šฉํ•˜๋Š”์ง€์— ๋”ฐ๋ผ ๋‹ค๋ฅด๊ณ  ๋‚ฎ์€ ์ง€์—ฐ ์‹œ๊ฐ„์„ ์›ํ•˜๋Š”์ง€, ๋†’์€ ์ฒ˜๋ฆฌ๋Ÿ‰์„ ์›ํ•˜๋Š”์ง€, ๋‹ค์–‘ํ•œ ๋ชจ๋ธ์„ ์ง€์›ํ•  ์ˆ˜ ์žˆ๊ธธ ์›ํ•˜๋Š”์ง€, ํ•˜๋‚˜์˜ ํŠน์ • ๋ชจ๋ธ์„ ๊ณ ๋„๋กœ ์ตœ์ ํ™”ํ•˜๊ธธ ์›ํ•˜๋Š”์ง€ ๋“ฑ์— ๋”ฐ๋ผ ๋‹ฌ๋ผ์ง‘๋‹ˆ๋‹ค. ์ด ์ฃผ์ œ๋ฅผ ํ•ด๊ฒฐํ•˜๋Š” ๋ฐฉ๋ฒ•์—๋Š” ์—ฌ๋Ÿฌ ๊ฐ€์ง€๊ฐ€ ์žˆ์œผ๋ฏ€๋กœ, ์ด ์žฅ์—์„œ ์ œ์‹œํ•˜๋Š” ๊ฒƒ์€ ์ฒ˜์Œ ์‹œ๋„ํ•ด ๋ณด๊ธฐ์— ์ข‹์€ ์ถœ๋ฐœ์ ์ผ ์ˆ˜๋Š” ์žˆ์ง€๋งŒ, ์ด ์žฅ์„ ์ฝ๋Š” ์—ฌ๋Ÿฌ๋ถ„์ด ํ•„์š”๋กœ ํ•˜๋Š” ์ตœ์ ์˜ ์†”๋ฃจ์…˜์€ ์•„๋‹ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. </Tip> ํ•ต์‹ฌ์ ์œผ๋กœ ์ดํ•ดํ•ด์•ผ ํ•  ์ ์€ [dataset](pipeline_tutorial#using-pipelines-on-a-dataset)๋ฅผ ๋‹ค๋ฃฐ ๋•Œ์™€ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ ๋ฐ˜๋ณต์ž๋ฅผ ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•˜๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์™œ๋ƒํ•˜๋ฉด, ์›น ์„œ๋ฒ„๋Š” ๊ธฐ๋ณธ์ ์œผ๋กœ ์š”์ฒญ์„ ๊ธฐ๋‹ค๋ฆฌ๊ณ  ๋“ค์–ด์˜ค๋Š” ๋Œ€๋กœ ์ฒ˜๋ฆฌํ•˜๋Š” ์‹œ์Šคํ…œ์ด๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ๋ณดํ†ต ์›น ์„œ๋ฒ„๋Š” ๋‹ค์–‘ํ•œ ์š”์ฒญ์„ ๋™์‹œ์— ๋‹ค๋ฃจ๊ธฐ ์œ„ํ•ด ๋งค์šฐ ๋‹ค์ค‘ํ™”๋œ ๊ตฌ์กฐ(๋ฉ€ํ‹ฐ ์Šค๋ ˆ๋”ฉ, ๋น„๋™๊ธฐ ๋“ฑ)๋ฅผ ์ง€๋‹ˆ๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฐ˜๋ฉด์—, ํŒŒ์ดํ”„๋ผ์ธ(๋Œ€๋ถ€๋ถ„ ํŒŒ์ดํ”„๋ผ์ธ ์•ˆ์— ์žˆ๋Š” ๋ชจ๋ธ)์€ ๋ณ‘๋ ฌ์ฒ˜๋ฆฌ์— ๊ทธ๋‹ค์ง€ ์ข‹์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ์™œ๋ƒํ•˜๋ฉด ํŒŒ์ดํ”„๋ผ์ธ์€ ๋งŽ์€ RAM์„ ์ฐจ์ง€ํ•˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ, ํŒŒ์ดํ”„๋ผ์ธ์ด ์‹คํ–‰ ์ค‘์ด๊ฑฐ๋‚˜ ๊ณ„์‚ฐ ์ง‘์•ฝ์ ์ธ ์ž‘์—… ์ค‘์ผ ๋•Œ ๋ชจ๋“  ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ ๋ฆฌ์†Œ์Šค๋ฅผ ์ œ๊ณตํ•˜๋Š” ๊ฒƒ์ด ๊ฐ€์žฅ ์ข‹์Šต๋‹ˆ๋‹ค. ์ด ๋ฌธ์ œ๋ฅผ ์šฐ๋ฆฌ๋Š” ์›น ์„œ๋ฒ„๊ฐ€ ์š”์ฒญ์„ ๋ฐ›๊ณ  ๋ณด๋‚ด๋Š” ๊ฐ€๋ฒผ์šด ๋ถ€ํ•˜๋ฅผ ์ฒ˜๋ฆฌํ•˜๊ณ , ์‹ค์ œ ์ž‘์—…์„ ์ฒ˜๋ฆฌํ•˜๋Š” ๋‹จ์ผ ์Šค๋ ˆ๋“œ๋ฅผ ๊ฐ–๋Š” ๋ฐฉ๋ฒ•์œผ๋กœ ํ•ด๊ฒฐํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด ์˜ˆ์ œ๋Š” `starlette` ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์‹ค์ œ ํ”„๋ ˆ์ž„์›Œํฌ๋Š” ์ค‘์š”ํ•˜์ง€ ์•Š์ง€๋งŒ, ๋‹ค๋ฅธ ํ”„๋ ˆ์ž„์›Œํฌ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ ๋™์ผํ•œ ํšจ๊ณผ๋ฅผ ๋ณด๊ธฐ ์œ„ํ•ด์„  ์ฝ”๋“œ๋ฅผ ์กฐ์ •ํ•˜๊ฑฐ๋‚˜ ๋ณ€๊ฒฝํ•ด์•ผ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `server.py`๋ฅผ ์ƒ์„ฑํ•˜์„ธ์š”: ```py from starlette.applications import Starlette from starlette.responses import JSONResponse from starlette.routing import Route from transformers import pipeline import asyncio async def homepage(request): payload = await request.body() string = payload.decode("utf-8") response_q = asyncio.Queue() await request.app.model_queue.put((string, response_q)) output = await response_q.get() return JSONResponse(output) async def server_loop(q): pipe = pipeline(model="google-bert/bert-base-uncased") while True: (string, response_q) = await q.get() out = pipe(string) await response_q.put(out) app = Starlette( routes=[ Route("/", homepage, methods=["POST"]), ], ) @app.on_event("startup") async def startup_event(): q = asyncio.Queue() app.model_queue = q asyncio.create_task(server_loop(q)) ``` ์ด์ œ ๋‹ค์Œ ๋ช…๋ น์–ด๋กœ ์‹คํ–‰์‹œํ‚ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash uvicorn server:app ``` ์ด์ œ ์ฟผ๋ฆฌ๋ฅผ ๋‚ ๋ ค๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash curl -X POST -d "test [MASK]" http://localhost:8000/ #[{"score":0.7742936015129089,"token":1012,"token_str":".","sequence":"test."},...] ``` ์ž, ์ด์ œ ์›น ์„œ๋ฒ„๋ฅผ ๋งŒ๋“œ๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ์ข‹์€ ๊ฐœ๋…์„ ์•Œ๊ฒŒ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! ์ค‘์š”ํ•œ ์ ์€ ๋ชจ๋ธ์„ **ํ•œ ๋ฒˆ๋งŒ** ๊ฐ€์ ธ์˜จ๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. 
๋”ฐ๋ผ์„œ ์›น ์„œ๋ฒ„์—๋Š” ๋ชจ๋ธ์˜ ์‚ฌ๋ณธ์ด ์—†์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฐ ๋ฐฉ์‹์€ ๋ถˆํ•„์š”ํ•œ RAM์ด ์‚ฌ์šฉ๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฐ ๋‹ค์Œ ํ ๋ฉ”์ปค๋‹ˆ์ฆ˜์„ ์‚ฌ์šฉํ•˜๋ฉด, ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋™์  ๋ฐฐ์น˜๋ฅผ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด ์ถ”๋ก  ์ „ ๋‹จ๊ณ„์— ๋ช‡ ๊ฐœ์˜ ํ•ญ๋ชฉ์„ ์ถ•์ ํ•˜๋Š” ๊ฒƒ๊ณผ ๊ฐ™์€ ๋ฉ‹์ง„ ์ž‘์—…์„ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: <Tip warning={true}> ์ฝ”๋“œ๋Š” ์˜๋„์ ์œผ๋กœ ๊ฐ€๋…์„ฑ์„ ์œ„ํ•ด ์˜์‚ฌ ์ฝ”๋“œ์ฒ˜๋Ÿผ ์ž‘์„ฑ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! ์•„๋ž˜ ์ฝ”๋“œ๋ฅผ ์ž‘๋™์‹œํ‚ค๊ธฐ ์ „์— ์‹œ์Šคํ…œ ์ž์›์ด ์ถฉ๋ถ„ํ•œ์ง€ ํ™•์ธํ•˜์„ธ์š”! </Tip> ```py (string, rq) = await q.get() strings = [] queues = [] while True: try: (string, rq) = await asyncio.wait_for(q.get(), timeout=0.001) # 1ms except asyncio.exceptions.TimeoutError: break strings.append(string) queues.append(rq) strings outs = pipe(strings, batch_size=len(strings)) for rq, out in zip(queues, outs): await rq.put(out) ``` ๋‹ค์‹œ ๋ง์”€ ๋“œ๋ฆฌ์ž๋ฉด, ์ œ์•ˆ๋œ ์ฝ”๋“œ๋Š” ๊ฐ€๋…์„ฑ์„ ์œ„ํ•ด ์ตœ์ ํ™”๋˜์—ˆ์œผ๋ฉฐ, ์ตœ์ƒ์˜ ์ฝ”๋“œ๋Š” ์•„๋‹™๋‹ˆ๋‹ค. ์ฒซ์งธ, ๋ฐฐ์น˜ ํฌ๊ธฐ ์ œํ•œ์ด ์—†์œผ๋ฉฐ ์ด๋Š” ์ผ๋ฐ˜์ ์œผ๋กœ ์ข‹์€ ๋ฐฉ์‹์ด ์•„๋‹™๋‹ˆ๋‹ค. ๋‘˜์งธ, ๋ชจ๋“  ํ ๊ฐ€์ ธ์˜ค๊ธฐ์—์„œ ํƒ€์ž„์•„์›ƒ์ด ์žฌ์„ค์ •๋˜๋ฏ€๋กœ ์ถ”๋ก ์„ ์‹คํ–‰ํ•˜๊ธฐ ์ „์— 1ms๋ณด๋‹ค ํ›จ์”ฌ ์˜ค๋ž˜ ๊ธฐ๋‹ค๋ฆด ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค(์ฒซ ๋ฒˆ์งธ ์š”์ฒญ์„ ๊ทธ๋งŒํผ ์ง€์—ฐ์‹œํ‚ด). ๋‹จ์ผ 1ms ๊ธธ์ด์˜ ๋ฐ๋“œ๋ผ์ธ์„ ๋‘๋Š” ํŽธ์ด ๋” ์ข‹์Šต๋‹ˆ๋‹ค. ์ด ๋ฐฉ์‹์„ ์‚ฌ์šฉํ•˜๋ฉด ํ๊ฐ€ ๋น„์–ด ์žˆ์–ด๋„ ํ•ญ์ƒ 1ms๋ฅผ ๊ธฐ๋‹ค๋ฆฌ๊ฒŒ ๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ํ์— ์•„๋ฌด๊ฒƒ๋„ ์—†์„ ๋•Œ ์ถ”๋ก ์„ ์›ํ•˜๋Š” ๊ฒฝ์šฐ์—๋Š” ์ตœ์„ ์˜ ๋ฐฉ๋ฒ•์ด ์•„๋‹ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ๋ฐฐ์น˜ ์ž‘์—…์ด ์‚ฌ์šฉ๋ก€์— ๋”ฐ๋ผ ์ •๋ง๋กœ ์ค‘์š”ํ•˜๋‹ค๋ฉด ์˜๋ฏธ๊ฐ€ ์žˆ์„ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์‹œ ๋งํ•˜์ง€๋งŒ, ์ตœ์ƒ์˜ ์†”๋ฃจ์…˜์€ ์—†์Šต๋‹ˆ๋‹ค. ## ๊ณ ๋ คํ•ด์•ผ ํ•  ๋ช‡ ๊ฐ€์ง€ ์‚ฌํ•ญ[[few_things_you_might want_to_consider]] ### ์—๋Ÿฌ ํ™•์ธ[[error_checking]] ํ”„๋กœ๋•์…˜ ํ™˜๊ฒฝ์—์„œ๋Š” ๋ฌธ์ œ๊ฐ€ ๋ฐœ์ƒํ•  ์—ฌ์ง€๊ฐ€ ๋งŽ์Šต๋‹ˆ๋‹ค. ๋ฉ”๋ชจ๋ฆฌ๊ฐ€ ๋ชจ์ž๋ผ๊ฑฐ๋‚˜, ๊ณต๊ฐ„์ด ๋ถ€์กฑํ•˜๊ฑฐ๋‚˜, ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ค๋Š” ๋ฐ์— ์‹คํŒจํ•˜๊ฑฐ๋‚˜, ์ฟผ๋ฆฌ๊ฐ€ ์ž˜๋ชป๋˜์—ˆ๊ฑฐ๋‚˜, ์ฟผ๋ฆฌ๋Š” ์ •ํ™•ํ•ด๋„ ๋ชจ๋ธ ์„ค์ •์ด ์ž˜๋ชป๋˜์–ด ์‹คํ–‰์— ์‹คํŒจํ•˜๋Š” ๋“ฑ๋“ฑ ๋งŽ์€ ๊ฒฝ์šฐ๊ฐ€ ์กด์žฌํ•ฉ๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ ์„œ๋ฒ„๊ฐ€ ์‚ฌ์šฉ์ž์—๊ฒŒ ์˜ค๋ฅ˜๋ฅผ ์ถœ๋ ฅํ•˜๋Š” ๊ฒƒ์ด ์ข‹์œผ๋ฏ€๋กœ ์˜ค๋ฅ˜๋ฅผ ํ‘œ์‹œํ•˜๊ธฐ ์œ„ํ•ด `try...except` ๋ฌธ์„ ๋งŽ์ด ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ๋ณด์•ˆ ์ƒํ™ฉ์— ๋”ฐ๋ผ ๋ชจ๋“  ์˜ค๋ฅ˜๋ฅผ ํ‘œ์‹œํ•˜๋Š” ๊ฒƒ์€ ๋ณด์•ˆ์ƒ ์œ„ํ—˜ํ•  ์ˆ˜๋„ ์žˆ๋‹ค๋Š” ์ ์„ ๋ช…์‹ฌํ•ด์•ผํ•ฉ๋‹ˆ๋‹ค. ### ์„œํ‚ท ๋ธŒ๋ ˆ์ดํ‚น[[circuit_breaking]] ์›น ์„œ๋ฒ„๋Š” ์ผ๋ฐ˜์ ์œผ๋กœ ์„œํ‚ท ๋ธŒ๋ ˆ์ดํ‚น์„ ์ˆ˜ํ–‰ํ•  ๋•Œ ๋” ๋‚˜์€ ์ƒํ™ฉ์— ์ง๋ฉดํ•ฉ๋‹ˆ๋‹ค. ์ฆ‰, ์ด๋Š” ์„œ๋ฒ„๊ฐ€ ์ฟผ๋ฆฌ๋ฅผ ๋ฌด๊ธฐํ•œ ๊ธฐ๋‹ค๋ฆฌ๋Š” ๋Œ€์‹  ๊ณผ๋ถ€ํ•˜ ์ƒํƒœ์ผ ๋•Œ ์ ์ ˆํ•œ ์˜ค๋ฅ˜๋ฅผ ๋ฐ˜ํ™˜ํ•˜๋Š” ๊ฒƒ์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค. ์„œ๋ฒ„๊ฐ€ ๋งค์šฐ ์˜ค๋žœ ์‹œ๊ฐ„ ๋™์•ˆ ๋Œ€๊ธฐํ•˜๊ฑฐ๋‚˜ ์ ๋‹นํ•œ ์‹œ๊ฐ„์ด ์ง€๋‚œ ํ›„์— 504 ์—๋Ÿฌ๋ฅผ ๋ฐ˜ํ™˜ํ•˜๋Š” ๋Œ€์‹  503 ์—๋Ÿฌ๋ฅผ ๋น ๋ฅด๊ฒŒ ๋ฐ˜ํ™˜ํ•˜๊ฒŒ ํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ œ์•ˆ๋œ ์ฝ”๋“œ์—๋Š” ๋‹จ์ผ ํ๊ฐ€ ์žˆ์œผ๋ฏ€๋กœ ๊ตฌํ˜„ํ•˜๊ธฐ๊ฐ€ ๋น„๊ต์  ์‰ฝ์Šต๋‹ˆ๋‹ค. ํ ํฌ๊ธฐ๋ฅผ ํ™•์ธํ•˜๋Š” ๊ฒƒ์€ ์›น ์„œ๋ฒ„๊ฐ€ ๊ณผ๋ถ€ํ•˜ ์ƒํ•ญ ํ•˜์— ์žˆ์„ ๋•Œ ์—๋Ÿฌ๋ฅผ ๋ฐ˜ํ™˜ํ•˜๊ธฐ ์œ„ํ•œ ๊ฐ€์žฅ ๊ธฐ์ดˆ์ ์ธ ์ž‘์—…์ž…๋‹ˆ๋‹ค. ### ๋ฉ”์ธ ์“ฐ๋ ˆ๋“œ ์ฐจ๋‹จ[[blocking_the_main_thread]] ํ˜„์žฌ PyTorch๋Š” ๋น„๋™๊ธฐ ์ฒ˜๋ฆฌ๋ฅผ ์ง€์›ํ•˜์ง€ ์•Š์œผ๋ฉฐ, ์‹คํ–‰ ์ค‘์—๋Š” ๋ฉ”์ธ ์Šค๋ ˆ๋“œ๊ฐ€ ์ฐจ๋‹จ๋ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ PyTorch๋ฅผ ๋ณ„๋„์˜ ์Šค๋ ˆ๋“œ/ํ”„๋กœ์„ธ์Šค์—์„œ ์‹คํ–‰ํ•˜๋„๋ก ๊ฐ•์ œํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ๋Š” ์ด ์ž‘์—…์ด ์ˆ˜ํ–‰๋˜์ง€ ์•Š์•˜์Šต๋‹ˆ๋‹ค. 
์™œ๋ƒํ•˜๋ฉด ์ฝ”๋“œ๊ฐ€ ํ›จ์”ฌ ๋” ๋ณต์žกํ•˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค(์ฃผ๋กœ ์Šค๋ ˆ๋“œ, ๋น„๋™๊ธฐ ์ฒ˜๋ฆฌ, ํ๊ฐ€ ์„œ๋กœ ์ž˜ ๋งž์ง€ ์•Š๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค). ํ•˜์ง€๋งŒ ๊ถ๊ทน์ ์œผ๋กœ๋Š” ๊ฐ™์€ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋‹จ์ผ ํ•ญ๋ชฉ์˜ ์ถ”๋ก ์ด ์˜ค๋ž˜ ๊ฑธ๋ฆฐ๋‹ค๋ฉด (> 1์ดˆ), ๋ฉ”์ธ ์“ฐ๋ ˆ๋“œ๋ฅผ ์ฐจ๋‹จํ•˜๋Š” ๊ฒƒ์€ ์ค‘์š”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์™œ๋ƒํ•˜๋ฉด ์ด ๊ฒฝ์šฐ ์ถ”๋ก  ์ค‘ ๋ชจ๋“  ์ฟผ๋ฆฌ๋Š” ์˜ค๋ฅ˜๋ฅผ ๋ฐ›๊ธฐ ์ „์— 1์ดˆ๋ฅผ ๊ธฐ๋‹ค๋ ค์•ผ ํ•˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ### ๋™์  ๋ฐฐ์น˜[[dynamic_batching]] ์ผ๋ฐ˜์ ์œผ๋กœ, ๋ฐฐ์น˜ ์ฒ˜๋ฆฌ๊ฐ€ 1๊ฐœ ํ•ญ๋ชฉ์„ ํ•œ ๋ฒˆ์— ์ „๋‹ฌํ•˜๋Š” ๊ฒƒ์— ๋น„ํ•ด ๋ฐ˜๋“œ์‹œ ์„ฑ๋Šฅ ํ–ฅ์ƒ์ด ์žˆ๋Š” ๊ฒƒ์€ ์•„๋‹™๋‹ˆ๋‹ค(์ž์„ธํ•œ ๋‚ด์šฉ์€ [`batching details`](./main_classes/pipelines#pipeline-batching)์„ ์ฐธ๊ณ ํ•˜์„ธ์š”). ํ•˜์ง€๋งŒ ์˜ฌ๋ฐ”๋ฅธ ์„ค์ •์—์„œ ์‚ฌ์šฉํ•˜๋ฉด ๋งค์šฐ ํšจ๊ณผ์ ์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. API์—๋Š” ๊ธฐ๋ณธ์ ์œผ๋กœ ์†๋„ ์ €ํ•˜์˜ ๊ฐ€๋Šฅ์„ฑ์ด ๋งค์šฐ ๋†’๊ธฐ ๋•Œ๋ฌธ์— ๋™์  ๋ฐฐ์น˜ ์ฒ˜๋ฆฌ๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ๋งค์šฐ ํฐ ๋ชจ๋ธ์ธ BLOOM ์ถ”๋ก ์˜ ๊ฒฝ์šฐ ๋™์  ๋ฐฐ์น˜ ์ฒ˜๋ฆฌ๋Š” ๋ชจ๋“  ์‚ฌ๋žŒ์—๊ฒŒ ์ ์ ˆํ•œ ๊ฒฝํ—˜์„ ์ œ๊ณตํ•˜๋Š” ๋ฐ **ํ•„์ˆ˜**์ž…๋‹ˆ๋‹ค.
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/ko/community.md
<!--โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์ปค๋ฎค๋‹ˆํ‹ฐ [[community]] ์ด ํŽ˜์ด์ง€๋Š” ์ปค๋ฎค๋‹ˆํ‹ฐ์—์„œ ๊ฐœ๋ฐœํ•œ ๐Ÿค— Transformers ๋ฆฌ์†Œ์Šค๋ฅผ ์žฌ๊ตฌ์„ฑํ•œ ํŽ˜์ด์ง€์ž…๋‹ˆ๋‹ค. ## ์ปค๋ฎค๋‹ˆํ‹ฐ ๋ฆฌ์†Œ์Šค: [[community-resources]] | ๋ฆฌ์†Œ์Šค | ์„ค๋ช… | ๋งŒ๋“ ์ด | |:----------|:-------------|------:| | [Hugging Face Transformers ์šฉ์–ด์ง‘ ํ”Œ๋ž˜์‹œ์นด๋“œ](https://www.darigovresearch.com/huggingface-transformers-glossary-flashcards) | [Transformers ๋ฌธ์„œ ์šฉ์–ด์ง‘](glossary)์„ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•œ ํ”Œ๋ž˜์‹œ์นด๋“œ ์„ธํŠธ๋กœ, ์ง€์‹์„ ์žฅ๊ธฐ์ ์œผ๋กœ ์œ ์ง€ํ•˜๊ธฐ ์œ„ํ•ด ํŠน๋ณ„ํžˆ ์„ค๊ณ„๋œ ์˜คํ”ˆ์†Œ์Šค ํฌ๋กœ์Šค ํ”Œ๋žซํผ ์•ฑ์ธ [Anki](https://apps.ankiweb.net/)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์‰ฝ๊ฒŒ ํ•™์Šต/์ˆ˜์ •ํ•  ์ˆ˜ ์žˆ๋Š” ํ˜•ํƒœ๋กœ ์ œ์ž‘๋˜์—ˆ์Šต๋‹ˆ๋‹ค. [ํ”Œ๋ž˜์‹œ์นด๋“œ ์‚ฌ์šฉ๋ฒ•์— ๋Œ€ํ•œ ์†Œ๊ฐœ ๋™์˜์ƒ](https://www.youtube.com/watch?v=Dji_h7PILrw)์„ ์ฐธ์กฐํ•˜์„ธ์š”. | [Darigov ๋ฆฌ์„œ์น˜](https://www.darigovresearch.com/) | ## ์ปค๋ฎค๋‹ˆํ‹ฐ ๋…ธํŠธ๋ถ: [[community-notebooks]] | ๋…ธํŠธ๋ถ | ์„ค๋ช… | ๋งŒ๋“ ์ด | | |:----------|:-------------|:-------------|------:| | [๊ฐ€์‚ฌ๋ฅผ ์ƒ์„ฑํ•˜๊ธฐ ์œ„ํ•ด ์‚ฌ์ „ํ›ˆ๋ จ๋œ ํŠธ๋žœ์Šคํฌ๋จธ๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/AlekseyKorshuk/huggingartists) | GPT-2 ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜์—ฌ ์ข‹์•„ํ•˜๋Š” ์•„ํ‹ฐ์ŠคํŠธ์˜ ์Šคํƒ€์ผ๋กœ ๊ฐ€์‚ฌ๋ฅผ ์ƒ์„ฑํ•˜๋Š” ๋ฐฉ๋ฒ• | [Aleksey Korshuk](https://github.com/AlekseyKorshuk) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb) | | [Tensorflow 2๋กœ T5 ํ›ˆ๋ จํ•˜๊ธฐ](https://github.com/snapthat/TF-T5-text-to-text) | Tensorflow 2๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ T5๋ฅผ ํ›ˆ๋ จ์‹œํ‚ค๋Š” ๋ฐฉ๋ฒ•. ์ด ๋…ธํŠธ๋ถ์€ Tensorflow 2๋กœ SQUAD๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๊ตฌํ˜„ํ•œ ์งˆ์˜์‘๋‹ต ์ž‘์—…์„ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค. 
| [Muhammad Harris](https://github.com/HarrisDePerceptron) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/snapthat/TF-T5-text-to-text/blob/master/snapthatT5/notebooks/TF-T5-Datasets%20Training.ipynb) | | [TPU์—์„œ T5 ํ›ˆ๋ จํ•˜๊ธฐ](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) | Transformers์™€ Nlp๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ SQUAD๋กœ T5๋ฅผ ํ›ˆ๋ จํ•˜๋Š” ๋ฐฉ๋ฒ• | [Suraj Patil](https://github.com/patil-suraj) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb#scrollTo=QLGiFCDqvuil) | | [๋ถ„๋ฅ˜ ๋ฐ ๊ฐ๊ด€์‹ ๋ฌธ์ œ๋ฅผ ์œ„ํ•ด T5 ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) | ๋ถ„๋ฅ˜ ๋ฐ ๊ฐ๊ด€์‹ ๋ฌธ์ œ์— ๋งž๊ฒŒ ํ…์ŠคํŠธ-ํ…์ŠคํŠธ ํ˜•์‹์„ ์‚ฌ์šฉํ•˜์—ฌ PyTorch Lightning์œผ๋กœ T5๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Suraj Patil](https://github.com/patil-suraj) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) | | [์ƒˆ๋กœ์šด ๋ฐ์ดํ„ฐ ์„ธํŠธ์™€ ์–ธ์–ด๋กœ DialoGPT ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/ncoop57/i-am-a-nerd/blob/master/_notebooks/2020-05-12-chatbot-part-1.ipynb) | ์ž์œ  ๋Œ€ํ™”ํ˜• ์ฑ—๋ด‡์„ ๋งŒ๋“ค๊ธฐ ์œ„ํ•ด ์ƒˆ๋กœ์šด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋กœ DialoGPT ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Nathan Cooper](https://github.com/ncoop57) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ncoop57/i-am-a-nerd/blob/master/_notebooks/2020-05-12-chatbot-part-1.ipynb) | | [Reformer๋กœ ๊ธด ์‹œํ€€์Šค ๋ชจ๋ธ๋งํ•˜๊ธฐ](https://github.com/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb) | Reformer๋กœ ์ตœ๋Œ€ 50๋งŒ ํ† ํฐ์˜ ์‹œํ€€์Šค๋ฅผ ํ›ˆ๋ จํ•˜๋Š” ๋ฐฉ๋ฒ• | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb) | | [์š”์•ฝ์„ ์œ„ํ•ด BART ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/ohmeow/ohmeow_website/blob/master/posts/2021-05-25-mbart-sequence-classification-with-blurr.ipynb) | blurr๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ fastai๋กœ ์š”์•ฝํ•˜๊ธฐ ์œ„ํ•ด BART๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Wayde Gilliam](https://ohmeow.com/) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ohmeow/ohmeow_website/blob/master/posts/2021-05-25-mbart-sequence-classification-with-blurr.ipynb) | | [๋‹ค๋ฅธ ์‚ฌ๋žŒ์˜ ํŠธ์œ—์œผ๋กœ ์‚ฌ์ „ํ›ˆ๋ จ๋œ ํŠธ๋žœ์Šคํฌ๋จธ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb) | GPT-2 ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜์—ฌ ์ข‹์•„ํ•˜๋Š” ํŠธ์œ„ํ„ฐ ๊ณ„์ • ์Šคํƒ€์ผ๋กœ ํŠธ์œ—์„ ์ƒ์„ฑํ•˜๋Š” ๋ฐฉ๋ฒ• | [Boris Dayma](https://github.com/borisdayma) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb) | | [Weights & Biases๋กœ ๐Ÿค— Hugging Face ๋ชจ๋ธ ์ตœ์ ํ™”ํ•˜๊ธฐ](https://colab.research.google.com/github/wandb/examples/blob/master/colabs/huggingface/Optimize_Hugging_Face_models_with_Weights_%26_Biases.ipynb) | W&B์™€ Hugging Face์˜ ํ†ตํ•ฉ์„ ๋ณด์—ฌ์ฃผ๋Š” ์ „์ฒด ํŠœํ† ๋ฆฌ์–ผ | [Boris 
Dayma](https://github.com/borisdayma) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/wandb/examples/blob/master/colabs/huggingface/Optimize_Hugging_Face_models_with_Weights_%26_Biases.ipynb) | | [Longformer ์‚ฌ์ „ํ›ˆ๋ จํ•˜๊ธฐ](https://github.com/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) | ๊ธฐ์กด ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์˜ "๊ธด" ๋ฒ„์ „์„ ๋นŒ๋“œํ•˜๋Š” ๋ฐฉ๋ฒ• | [Iz Beltagy](https://beltagy.net) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) | | [QA๋ฅผ ์œ„ํ•ด Longformer ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/patil-suraj/Notebooks/blob/master/longformer_qa_training.ipynb) | QA ์ž‘์—…์„ ์œ„ํ•ด Longformer๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Suraj Patil](https://github.com/patil-suraj) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/Notebooks/blob/master/longformer_qa_training.ipynb) | | [๐Ÿค— Nlp๋กœ ๋ชจ๋ธ ํ‰๊ฐ€ํ•˜๊ธฐ](https://github.com/patrickvonplaten/notebooks/blob/master/How_to_evaluate_Longformer_on_TriviaQA_using_NLP.ipynb) | `Nlp`๋กœ TriviaQA์—์„œ Longformer๋ฅผ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1m7eTGlPmLRgoPkkA7rkhQdZ9ydpmsdLE?usp=sharing) | | [๊ฐ์ • ๋ฒ”์œ„ ์ถ”์ถœ์„ ์œ„ํ•ด T5 ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/enzoampil/t5-intro/blob/master/t5_qa_training_pytorch_span_extraction.ipynb) | ๊ฐ์ • ๋ฒ”์œ„ ์ถ”์ถœ์„ ์œ„ํ•ด ํ…์ŠคํŠธ-ํ…์ŠคํŠธ ํ˜•์‹์„ ์‚ฌ์šฉํ•˜์—ฌ PyTorch Lightning์œผ๋กœ T5๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Lorenzo Ampil](https://github.com/enzoampil) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/enzoampil/t5-intro/blob/master/t5_qa_training_pytorch_span_extraction.ipynb) | | [๋‹ค์ค‘ ํด๋ž˜์Šค ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•ด DistilBert ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_multiclass_classification.ipynb) | ๋‹ค์ค‘ ํด๋ž˜์Šค ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•ด PyTorch๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ DistilBert๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Abhishek Kumar Mishra](https://github.com/abhimishra91) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_multiclass_classification.ipynb)| | [๋‹ค์ค‘ ๋ ˆ์ด๋ธ” ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•ด BERT ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_multi_label_classification.ipynb) | ๋‹ค์ค‘ ๋ ˆ์ด๋ธ” ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•ด PyTorch๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ BERT๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Abhishek Kumar Mishra](https://github.com/abhimishra91) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_multi_label_classification.ipynb)| | [์š”์•ฝ์„ ์œ„ํ•ด T5 ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_summarization_wandb.ipynb) | ์š”์•ฝ์„ ์œ„ํ•ด PyTorch๋กœ T5๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  WandB๋กœ ์‹คํ—˜์„ ์ถ”์ ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Abhishek Kumar Mishra](https://github.com/abhimishra91) | [![Open In 
Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_summarization_wandb.ipynb)| | [๋™์  ํŒจ๋”ฉ/๋ฒ„์ผ“ํŒ…์œผ๋กœ Transformers ๋ฏธ์„ธ ์กฐ์ • ์†๋„ ๋†’์ด๊ธฐ](https://github.com/ELS-RD/transformers-notebook/blob/master/Divide_Hugging_Face_Transformers_training_time_by_2_or_more.ipynb)| ๋™์  ํŒจ๋”ฉ/๋ฒ„์ผ“ํŒ…์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ฏธ์„ธ ์กฐ์ • ์†๋„๋ฅผ 2๋ฐฐ๋กœ ๋†’์ด๋Š” ๋ฐฉ๋ฒ• |[Michael Benesty](https://github.com/pommedeterresautee) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1CBfRU1zbfu7-ijiOqAAQUA-RJaxfcJoO?usp=sharing)| |[๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง์„ ์œ„ํ•ด Reformer ์‚ฌ์ „ํ›ˆ๋ จํ•˜๊ธฐ](https://github.com/patrickvonplaten/notebooks/blob/master/Reformer_For_Masked_LM.ipynb)| ์–‘๋ฐฉํ–ฅ ์…€ํ”„ ์–ดํ…์…˜ ๋ ˆ์ด์–ด๋ฅผ ์ด์šฉํ•ด์„œ Reformer ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๋Š” ๋ฐฉ๋ฒ• | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1tzzh0i8PgDQGV3SMFUGxM7_gGae3K-uW?usp=sharing)| | [Sci-BERT ํ™•์žฅ ๋ฐ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/lordtt13/word-embeddings/blob/master/COVID-19%20Research%20Data/COVID-SciBERT.ipynb)| CORD ๋ฐ์ดํ„ฐ ์„ธํŠธ๋กœ AllenAI์—์„œ ์‚ฌ์ „ํ›ˆ๋ จ๋œ SciBERT ๋ชจ๋ธ์˜ ์–ดํœ˜๋ฅผ ๋Š˜๋ฆฌ๊ณ  ํŒŒ์ดํ”„๋ผ์ธ์„ ๊ตฌ์ถ•ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Tanmay Thakur](https://github.com/lordtt13) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1rqAR40goxbAfez1xvF3hBJphSCsvXmh8)| | [์š”์•ฝ์„ ์œ„ํ•ด Trainer API๋กœ BlenderBotSmall ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/lordtt13/transformers-experiments/blob/master/Custom%20Tasks/fine-tune-blenderbot_small-for-summarization.ipynb)| ์š”์•ฝ์„ ์œ„ํ•ด Trainer API๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์‚ฌ์šฉ์ž ์ง€์ • ๋ฐ์ดํ„ฐ ์„ธํŠธ๋กœ BlenderBotSmall ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ | [Tanmay Thakur](https://github.com/lordtt13) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/19Wmupuls7mykSGyRN_Qo6lPQhgp56ymq?usp=sharing)| | [ํ†ตํ•ฉ ๊ธฐ์šธ๊ธฐ(Integrated Gradient)๋ฅผ ์ด์šฉํ•˜์—ฌ Electra ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ํ•ด์„ํ•˜๊ธฐ](https://github.com/elsanns/xai-nlp-notebooks/blob/master/electra_fine_tune_interpret_captum_ig.ipynb) | ๊ฐ์ • ๋ถ„์„์„ ์œ„ํ•ด Electra๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  Captum ํ†ตํ•ฉ ๊ธฐ์šธ๊ธฐ๋กœ ์˜ˆ์ธก์„ ํ•ด์„ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Eliza Szczechla](https://elsanns.github.io) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/elsanns/xai-nlp-notebooks/blob/master/electra_fine_tune_interpret_captum_ig.ipynb)| | [Trainer ํด๋ž˜์Šค๋กœ ๋น„์˜์–ด๊ถŒ GPT-2 ๋ชจ๋ธ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/philschmid/fine-tune-GPT-2/blob/master/Fine_tune_a_non_English_GPT_2_Model_with_Huggingface.ipynb) | Trainer ํด๋ž˜์Šค๋กœ ๋น„์˜์–ด๊ถŒ GPT-2 ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Philipp Schmid](https://www.philschmid.de) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/philschmid/fine-tune-GPT-2/blob/master/Fine_tune_a_non_English_GPT_2_Model_with_Huggingface.ipynb)| |[๋‹ค์ค‘ ๋ผ๋ฒจ ๋ถ„๋ฅ˜ ์ž‘์—…์„ ์œ„ํ•ด DistilBERT ๋ชจ๋ธ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/DhavalTaunk08/Transformers_scripts/blob/master/Transformers_multilabel_distilbert.ipynb) | ๋‹ค์ค‘ ๋ผ๋ฒจ ๋ถ„๋ฅ˜ ์ž‘์—…์„ ์œ„ํ•ด DistilBERT 
๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Dhaval Taunk](https://github.com/DhavalTaunk08) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/DhavalTaunk08/Transformers_scripts/blob/master/Transformers_multilabel_distilbert.ipynb)| |[๋ฌธ์žฅ์Œ ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•ด ALBERT ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/NadirEM/nlp-notebooks/blob/master/Fine_tune_ALBERT_sentence_pair_classification.ipynb) | ๋ฌธ์žฅ์Œ ๋ถ„๋ฅ˜ ์ž‘์—…์„ ์œ„ํ•ด ALBERT ๋ชจ๋ธ ๋˜๋Š” ๋‹ค๋ฅธ BERT ๊ธฐ๋ฐ˜ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Nadir El Manouzi](https://github.com/NadirEM) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NadirEM/nlp-notebooks/blob/master/Fine_tune_ALBERT_sentence_pair_classification.ipynb)| |[๊ฐ์ • ๋ถ„์„์„ ์œ„ํ•ด Roberta ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/DhavalTaunk08/NLP_scripts/blob/master/sentiment_analysis_using_roberta.ipynb) | ๊ฐ์ • ๋ถ„์„์„ ์œ„ํ•ด Roberta ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Dhaval Taunk](https://github.com/DhavalTaunk08) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/DhavalTaunk08/NLP_scripts/blob/master/sentiment_analysis_using_roberta.ipynb)| |[์งˆ๋ฌธ ์ƒ์„ฑ ๋ชจ๋ธ ํ‰๊ฐ€ํ•˜๊ธฐ](https://github.com/flexudy-pipe/qugeev) | seq2seq ํŠธ๋žœ์Šคํฌ๋จธ ๋ชจ๋ธ์ด ์ƒ์„ฑํ•œ ์งˆ๋ฌธ๊ณผ ์ด์— ๋Œ€ํ•œ ๋‹ต๋ณ€์ด ์–ผ๋งˆ๋‚˜ ์ •ํ™•ํ•œ๊ฐ€์š”? | [Pascal Zoleko](https://github.com/zolekode) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1bpsSqCQU-iw_5nNoRm_crPq6FRuJthq_?usp=sharing)| |[DistilBERT์™€ Tensorflow๋กœ ํ…์ŠคํŠธ ๋ถ„๋ฅ˜ํ•˜๊ธฐ](https://github.com/peterbayerle/huggingface_notebook/blob/main/distilbert_tf.ipynb) | ํ…์ŠคํŠธ ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•ด TensorFlow๋กœ DistilBERT๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Peter Bayerle](https://github.com/peterbayerle) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/peterbayerle/huggingface_notebook/blob/main/distilbert_tf.ipynb)| |[CNN/Dailail ์š”์•ฝ์„ ์œ„ํ•ด ์ธ์ฝ”๋”-๋””์ฝ”๋” ๋ชจ๋ธ์— BERT ํ™œ์šฉํ•˜๊ธฐ](https://github.com/patrickvonplaten/notebooks/blob/master/BERT2BERT_for_CNN_Dailymail.ipynb) | CNN/Dailail ์š”์•ฝ์„ ์œ„ํ•ด *google-bert/bert-base-uncased* ์ฒดํฌํฌ์ธํŠธ๋ฅผ ํ™œ์šฉํ•˜์—ฌ *EncoderDecoderModel*์„ ์›Œ๋ฐ์—…ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/BERT2BERT_for_CNN_Dailymail.ipynb)| |[BBC XSum ์š”์•ฝ์„ ์œ„ํ•ด ์ธ์ฝ”๋”-๋””์ฝ”๋” ๋ชจ๋ธ์— RoBERTa ํ™œ์šฉํ•˜๊ธฐ](https://github.com/patrickvonplaten/notebooks/blob/master/RoBERTaShared_for_BBC_XSum.ipynb) | BBC/XSum ์š”์•ฝ์„ ์œ„ํ•ด *FacebookAI/roberta-base* ์ฒดํฌํฌ์ธํŠธ๋ฅผ ํ™œ์šฉํ•˜์—ฌ ๊ณต์œ  *EncoderDecoderModel*์„ ์›Œ๋ฐ์—…ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/RoBERTaShared_for_BBC_XSum.ipynb)| |[์ˆœ์ฐจ์  ์งˆ๋ฌธ ๋‹ต๋ณ€(SQA)์„ ์œ„ํ•ด TAPAS ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Fine_tuning_TapasForQuestionAnswering_on_SQA.ipynb) | *tapas-base* ์ฒดํฌํฌ์ธํŠธ๋ฅผ ํ™œ์šฉํ•˜์—ฌ ์ˆœ์ฐจ์  ์งˆ๋ฌธ 
๋‹ต๋ณ€(SQA) ๋ฐ์ดํ„ฐ ์„ธํŠธ๋กœ *TapasForQuestionAnswering*์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Niels Rogge](https://github.com/nielsrogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Fine_tuning_TapasForQuestionAnswering_on_SQA.ipynb)| |[ํ‘œ ์‚ฌ์‹ค ๊ฒ€์‚ฌ(TabFact)๋กœ TAPAS ํ‰๊ฐ€ํ•˜๊ธฐ](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Evaluating_TAPAS_on_the_Tabfact_test_set.ipynb) | ๐Ÿค— Datasets์™€ ๐Ÿค— Transformer ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ํ•จ๊ป˜ ์‚ฌ์šฉํ•˜์—ฌ *tapas-base-finetuned-tabfact* ์ฒดํฌํฌ์ธํŠธ๋กœ ๋ฏธ์„ธ ์กฐ์ •๋œ *TapasForSequenceClassification*์„ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Niels Rogge](https://github.com/nielsrogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Evaluating_TAPAS_on_the_Tabfact_test_set.ipynb)| |[๋ฒˆ์—ญ์„ ์œ„ํ•ด mBART ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://colab.research.google.com/github/vasudevgupta7/huggingface-tutorials/blob/main/translation_training.ipynb) | ํžŒ๋””์–ด์—์„œ ์˜์–ด๋กœ ๋ฒˆ์—ญํ•˜๊ธฐ ์œ„ํ•ด Seq2SeqTrainer๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ mBART๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Vasudev Gupta](https://github.com/vasudevgupta7) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/vasudevgupta7/huggingface-tutorials/blob/main/translation_training.ipynb)| |[FUNSD(์–‘์‹ ์ดํ•ด ๋ฐ์ดํ„ฐ ์„ธํŠธ)๋กœ LayoutLM ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForTokenClassification_on_FUNSD.ipynb) | ์Šค์บ”ํ•œ ๋ฌธ์„œ์—์„œ ์ •๋ณด ์ถ”์ถœ์„ ์œ„ํ•ด FUNSD ๋ฐ์ดํ„ฐ ์„ธํŠธ๋กœ *LayoutLMForTokenClassification*์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Niels Rogge](https://github.com/nielsrogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForTokenClassification_on_FUNSD.ipynb)| |[DistilGPT2 ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ๋ฐ ํ…์ŠคํŠธ ์ƒ์„ฑํ•˜๊ธฐ](https://colab.research.google.com/github/tripathiaakash/DistilGPT2-Tutorial/blob/main/distilgpt2_fine_tuning.ipynb) | DistilGPT2๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ํ…์ŠคํŠธ๋ฅผ ์ƒ์„ฑํ•˜๋Š” ๋ฐฉ๋ฒ• | [Aakash Tripathi](https://github.com/tripathiaakash) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tripathiaakash/DistilGPT2-Tutorial/blob/main/distilgpt2_fine_tuning.ipynb)| |[์ตœ๋Œ€ 8K ํ† ํฐ์—์„œ LED ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/patrickvonplaten/notebooks/blob/master/Fine_tune_Longformer_Encoder_Decoder_(LED)_for_Summarization_on_pubmed.ipynb) | ๊ธด ๋ฒ”์œ„๋ฅผ ์š”์•ฝํ•˜๊ธฐ ์œ„ํ•ด PubMed๋กœ LED๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_tune_Longformer_Encoder_Decoder_(LED)_for_Summarization_on_pubmed.ipynb)| |[Arxiv๋กœ LED ํ‰๊ฐ€ํ•˜๊ธฐ](https://github.com/patrickvonplaten/notebooks/blob/master/LED_on_Arxiv.ipynb) | ๊ธด ๋ฒ”์œ„ ์š”์•ฝ์— ๋Œ€ํ•ด LED๋ฅผ ํšจ๊ณผ์ ์œผ๋กœ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In 
Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/LED_on_Arxiv.ipynb)| |[RVL-CDIP(๋ฌธ์„œ ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜ ๋ฐ์ดํ„ฐ ์„ธํŠธ)๋กœ LayoutLM ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ)](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForSequenceClassification_on_RVL_CDIP.ipynb) | ์Šค์บ” ๋ฌธ์„œ ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•ด RVL-CDIP ๋ฐ์ดํ„ฐ ์„ธํŠธ๋กœ *LayoutLMForSequenceClassification*์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Niels Rogge](https://github.com/nielsrogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForSequenceClassification_on_RVL_CDIP.ipynb)| |[GPT2 ์กฐ์ •์„ ํ†ตํ•œ Wav2Vec2 CTC ๋””์ฝ”๋”ฉ](https://github.com/voidful/huggingface_notebook/blob/main/xlsr_gpt.ipynb) | ์–ธ์–ด ๋ชจ๋ธ ์กฐ์ •์„ ํ†ตํ•ด CTC ์‹œํ€€์Šค๋ฅผ ๋””์ฝ”๋”ฉํ•˜๋Š” ๋ฐฉ๋ฒ• | [Eric Lam](https://github.com/voidful) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1e_z5jQHYbO2YKEaUgzb1ww1WwiAyydAj?usp=sharing)| |[Trainer ํด๋ž˜์Šค๋กœ ๋‘ ๊ฐœ ์–ธ์–ด๋กœ ์š”์•ฝํ•˜๊ธฐ ์œ„ํ•ด BART ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/elsanns/xai-nlp-notebooks/blob/master/fine_tune_bart_summarization_two_langs.ipynb) | Trainer ํด๋ž˜์Šค๋กœ ๋‘ ๊ฐœ ์–ธ์–ด๋กœ ์š”์•ฝํ•˜๊ธฐ ์œ„ํ•ด BART ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Eliza Szczechla](https://github.com/elsanns) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/elsanns/xai-nlp-notebooks/blob/master/fine_tune_bart_summarization_two_langs.ipynb)| |[Trivia QA๋กœ Big Bird ํ‰๊ฐ€ํ•˜๊ธฐ](https://github.com/patrickvonplaten/notebooks/blob/master/Evaluating_Big_Bird_on_TriviaQA.ipynb) | Trivia QA๋กœ ๊ธด ๋ฌธ์„œ ์งˆ๋ฌธ์— ๋Œ€ํ•œ ๋‹ต๋ณ€์— ๋Œ€ํ•ด BigBird๋ฅผ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Evaluating_Big_Bird_on_TriviaQA.ipynb)| | [Wav2Vec2๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋™์˜์ƒ ์บก์…˜ ๋งŒ๋“ค๊ธฐ](https://github.com/Muennighoff/ytclipcc/blob/main/wav2vec_youtube_captions.ipynb) | Wav2Vec์œผ๋กœ ์˜ค๋””์˜ค๋ฅผ ํ…์ŠคํŠธ๋กœ ๋ณ€ํ™˜ํ•˜์—ฌ ๋ชจ๋“  ๋™์˜์ƒ์—์„œ YouTube ์บก์…˜ ๋งŒ๋“œ๋Š” ๋ฐฉ๋ฒ• | [Niklas Muennighoff](https://github.com/Muennighoff) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Muennighoff/ytclipcc/blob/main/wav2vec_youtube_captions.ipynb) | | [PyTorch Lightning์„ ์‚ฌ์šฉํ•˜์—ฌ CIFAR-10์œผ๋กœ ๋น„์ „ ํŠธ๋žœ์Šคํฌ๋จธ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_PyTorch_Lightning.ipynb) | HuggingFace Transformers, Datasets, PyTorch Lightning์„ ์‚ฌ์šฉํ•˜์—ฌ CIFAR-10์œผ๋กœ ๋น„์ „ ํŠธ๋žœ์Šคํฌ๋จธ(ViT)๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Niels Rogge](https://github.com/nielsrogge) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_PyTorch_Lightning.ipynb) | | [๐Ÿค— Trainer๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ CIFAR-10์—์„œ ๋น„์ „ ํŠธ๋žœ์Šคํฌ๋จธ ๋ฏธ์„ธ ์กฐ์ 
•ํ•˜๊ธฐ](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_the_%F0%9F%A4%97_Trainer.ipynb) | Datasets, ๐Ÿค— Trainer๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ CIFAR-10์—์„œ ๋น„์ „ ํŠธ๋žœ์Šคํฌ๋จธ(ViT)๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Niels Rogge](https://github.com/nielsrogge) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_the_%F0%9F%A4%97_Trainer.ipynb) | | [๊ฐœ์ฒด ์ž…๋ ฅ ๋ฐ์ดํ„ฐ ์„ธํŠธ์ธ Open Entity๋กœ LUKE ํ‰๊ฐ€ํ•˜๊ธฐ](https://github.com/studio-ousia/luke/blob/master/notebooks/huggingface_open_entity.ipynb) | Open Entity ๋ฐ์ดํ„ฐ ์„ธํŠธ๋กœ *LukeForEntityClassification*์„ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Ikuya Yamada](https://github.com/ikuyamada) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/studio-ousia/luke/blob/master/notebooks/huggingface_open_entity.ipynb) | | [๊ด€๊ณ„ ์ถ”์ถœ ๋ฐ์ดํ„ฐ ์„ธํŠธ์ธ TACRED๋กœ LUKE ํ‰๊ฐ€ํ•˜๊ธฐ](https://github.com/studio-ousia/luke/blob/master/notebooks/huggingface_tacred.ipynb) | TACRED ๋ฐ์ดํ„ฐ ์„ธํŠธ๋กœ *LukeForEntityPairClassification*์„ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Ikuya Yamada](https://github.com/ikuyamada) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/studio-ousia/luke/blob/master/notebooks/huggingface_tacred.ipynb) | | [์ค‘์š” NER ๋ฒค์น˜๋งˆํฌ์ธ CoNLL-2003์œผ๋กœ LUKE ํ‰๊ฐ€ํ•˜๊ธฐ](https://github.com/studio-ousia/luke/blob/master/notebooks/huggingface_conll_2003.ipynb) | CoNLL-2003 ๋ฐ์ดํ„ฐ ์„ธํŠธ๋กœ *LukeForEntitySpanClassification*๋ฅผ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Ikuya Yamada](https://github.com/ikuyamada) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/studio-ousia/luke/blob/master/notebooks/huggingface_conll_2003.ipynb) | | [PubMed ๋ฐ์ดํ„ฐ ์„ธํŠธ๋กœ BigBird-Pegasus ํ‰๊ฐ€ํ•˜๊ธฐ](https://github.com/vasudevgupta7/bigbird/blob/main/notebooks/bigbird_pegasus_evaluation.ipynb) | PubMed ๋ฐ์ดํ„ฐ ์„ธํŠธ๋กœ *BigBirdPegasusForConditionalGeneration*๋ฅผ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Vasudev Gupta](https://github.com/vasudevgupta7) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/vasudevgupta7/bigbird/blob/main/notebooks/bigbird_pegasus_evaluation.ipynb) | | [Wav2Vec2๋ฅผ ์‚ฌ์šฉํ•ด์„œ ์Œ์„ฑ ๊ฐ์ • ๋ถ„๋ฅ˜ํ•˜๊ธฐ](https://github/m3hrdadfi/soxan/blob/main/notebooks/Emotion_recognition_in_Greek_speech_using_Wav2Vec2.ipynb) | ๊ฐ์ • ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•ด ์‚ฌ์ „ํ›ˆ๋ จ๋œ Wav2Vec2 ๋ชจ๋ธ์„ MEGA ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ํ™œ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ• | [Mehrdad Farahani](https://github.com/m3hrdadfi) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/m3hrdadfi/soxan/blob/main/notebooks/Emotion_recognition_in_Greek_speech_using_Wav2Vec2.ipynb) | | [DETR๋กœ ์ด๋ฏธ์ง€์—์„œ ๊ฐ์ฒด ํƒ์ง€ํ•˜๊ธฐ](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DETR/DETR_minimal_example_(with_DetrFeatureExtractor).ipynb) | ํ›ˆ๋ จ๋œ *DetrForObjectDetection* ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜์—ฌ ์ด๋ฏธ์ง€์—์„œ ๊ฐ์ฒด๋ฅผ ํƒ์ง€ํ•˜๊ณ  ์–ดํ…์…˜์„ ์‹œ๊ฐํ™”ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Niels Rogge](https://github.com/NielsRogge) | [![Open In 
Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/DETR/DETR_minimal_example_(with_DetrFeatureExtractor).ipynb) | | [์‚ฌ์šฉ์ž ์ง€์ • ๊ฐ์ฒด ํƒ์ง€ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋กœ DETR ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DETR/Fine_tuning_DetrForObjectDetection_on_custom_dataset_(balloon).ipynb) | ์‚ฌ์šฉ์ž ์ง€์ • ๊ฐ์ฒด ํƒ์ง€ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋กœ *DetrForObjectDetection*์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Niels Rogge](https://github.com/NielsRogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/DETR/Fine_tuning_DetrForObjectDetection_on_custom_dataset_(balloon).ipynb) | | [๊ฐœ์ฒด๋ช… ์ธ์‹์„ ์œ„ํ•ด T5 ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/ToluClassics/Notebooks/blob/main/T5_Ner_Finetuning.ipynb) | ๊ฐœ์ฒด๋ช… ์ธ์‹ ์ž‘์—…์„ ์œ„ํ•ด *T5*๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Ogundepo Odunayo](https://github.com/ToluClassics) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1obr78FY_cBmWY5ODViCmzdY6O1KB65Vc?usp=sharing) |
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/ko/tf_xla.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # TensorFlow ๋ชจ๋ธ์„ ์œ„ํ•œ XLA ํ†ตํ•ฉ [[xla-integration-for-tensorflow-models]] [[open-in-colab]] XLA(Accelerated Linear Algebra)๋Š” TensorFlow ๋ชจ๋ธ์˜ ์‹คํ–‰ ์‹œ๊ฐ„์„ ๊ฐ€์†ํ™”ํ•˜๊ธฐ ์œ„ํ•œ ์ปดํŒŒ์ผ๋Ÿฌ์ž…๋‹ˆ๋‹ค. [๊ณต์‹ ๋ฌธ์„œ](https://www.tensorflow.org/xla)์— ๋”ฐ๋ฅด๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: XLA(Accelerated Linear Algebra)๋Š” ์„ ํ˜• ๋Œ€์ˆ˜๋ฅผ ์œ„ํ•œ ๋„๋ฉ”์ธ ํŠนํ™” ์ปดํŒŒ์ผ๋Ÿฌ๋กœ, TensorFlow ๋ชจ๋ธ์„ ์†Œ์Šค ์ฝ”๋“œ ๋ณ€๊ฒฝ ์—†์ด ๊ฐ€์†ํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. TensorFlow์—์„œ XLA๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์€ ๊ฐ„๋‹จํ•ฉ๋‹ˆ๋‹ค. XLA๋Š” `tensorflow` ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ ๋‚ด์— ํŒจํ‚ค์ง€๋กœ ์ œ๊ณต๋˜๋ฉฐ, [`tf.function`](https://www.tensorflow.org/guide/intro_to_graphs)๊ณผ ๊ฐ™์€ ๊ทธ๋ž˜ํ”„ ์ƒ์„ฑ ํ•จ์ˆ˜์—์„œ `jit_compile` ์ธ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ™œ์„ฑํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `fit()` ๋ฐ `predict()`์™€ ๊ฐ™์€ Keras ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ, `jit_compile` ์ธ์ˆ˜๋ฅผ `model.compile()`์— ์ „๋‹ฌํ•˜์—ฌ XLA๋ฅผ ๊ฐ„๋‹จํ•˜๊ฒŒ ํ™œ์„ฑํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ XLA๋Š” ์ด๋Ÿฌํ•œ ๋ฉ”์†Œ๋“œ์— ๊ตญํ•œ๋˜์ง€ ์•Š๊ณ  ์ž„์˜์˜ `tf.function`์„ ๊ฐ€์†ํ™”ํ•˜๋Š” ๋ฐ์—๋„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๐Ÿค— Transformers์—์„œ๋Š” [GPT2](https://huggingface.co/docs/transformers/model_doc/gpt2), [T5](https://huggingface.co/docs/transformers/model_doc/t5), [OPT](https://huggingface.co/docs/transformers/model_doc/opt)์™€ ๊ฐ™์€ ๋ชจ๋ธ์˜ ํ…์ŠคํŠธ ์ƒ์„ฑ, ๊ทธ๋ฆฌ๊ณ  [Whisper](https://huggingface.co/docs/transformers/model_doc/whisper)์™€ ๊ฐ™์€ ๋ชจ๋ธ์˜ ์Œ์„ฑ ์ฒ˜๋ฆฌ๋ฅผ ํฌํ•จํ•˜์—ฌ ์—ฌ๋Ÿฌ TensorFlow ๋ฉ”์†Œ๋“œ๊ฐ€ XLA์™€ ํ˜ธํ™˜๋˜๋„๋ก ๋‹ค์‹œ ์ž‘์„ฑ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ •ํ™•ํ•œ ์†๋„ ํ–ฅ์ƒ์€ ๋ชจ๋ธ์— ๋”ฐ๋ผ ๋‹ค๋ฅด์ง€๋งŒ, ๐Ÿค— Transformers ๋‚ด์˜ TensorFlow ํ…์ŠคํŠธ ์ƒ์„ฑ ๋ชจ๋ธ์˜ ๊ฒฝ์šฐ ์ตœ๋Œ€ 100๋ฐฐ์˜ ์†๋„ ํ–ฅ์ƒ์„ ํ™•์ธํ–ˆ์Šต๋‹ˆ๋‹ค. ์ด ๋ฌธ์„œ์—์„œ๋Š” ์ด๋Ÿฌํ•œ ๋ชจ๋ธ์— ๋Œ€ํ•ด XLA๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ตœ๋Œ€ ์„ฑ๋Šฅ์„ ์–ป๋Š” ๋ฐฉ๋ฒ•์„ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ XLA ํ†ตํ•ฉ์˜ ๋ฒค์น˜๋งˆํฌ ๋ฐ ๋””์ž์ธ ์ฒ ํ•™์— ๋Œ€ํ•œ ์ถ”๊ฐ€ ์ž๋ฃŒ ๋งํฌ๋„ ์ œ๊ณตํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ## XLA๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ TF ํ•จ์ˆ˜ ์‹คํ–‰ํ•˜๊ธฐ [[running-tf-functions-with-xla]] TensorFlow์—์„œ ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋ชจ๋ธ์„ ๊ณ ๋ คํ•ด ๋ด…์‹œ๋‹ค: ```py import tensorflow as tf model = tf.keras.Sequential( [tf.keras.layers.Dense(10, input_shape=(10,), activation="relu"), tf.keras.layers.Dense(5, activation="softmax")] ) ``` ์œ„ ๋ชจ๋ธ์€ ์ฐจ์›์ด `(10, )`์ธ ์ž…๋ ฅ์„ ๋ฐ›์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ๊ณผ ๊ฐ™์ด ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜์—ฌ ์ˆœ์ „ํŒŒ๋ฅผ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py # ๋ชจ๋ธ์— ๋Œ€ํ•œ ์ž„์˜์˜ ์ž…๋ ฅ์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. batch_size = 16 input_vector_dim = 10 random_inputs = tf.random.normal((batch_size, input_vector_dim)) # ์ˆœ์ „ํŒŒ๋ฅผ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. 
_ = model(random_inputs) ``` XLA๋กœ ์ปดํŒŒ์ผ๋œ ํ•จ์ˆ˜๋กœ ์ˆœ์ „ํŒŒ๋ฅผ ์‹คํ–‰ํ•˜๋ ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์ด ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```py xla_fn = tf.function(model, jit_compile=True) _ = xla_fn(random_inputs) ``` `model`์˜ ๊ธฐ๋ณธ `call()` ํ•จ์ˆ˜๋Š” XLA ๊ทธ๋ž˜ํ”„๋ฅผ ์ปดํŒŒ์ผํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ๋‹ค๋ฅธ ๋ชจ๋ธ ํ•จ์ˆ˜๋ฅผ XLA๋กœ ์ปดํŒŒ์ผํ•˜๋ ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์ด ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: ```py my_xla_fn = tf.function(model.my_xla_fn, jit_compile=True) ``` ## ๐Ÿค— Transformers์—์„œ XLA๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ TF ํ…์ŠคํŠธ ์ƒ์„ฑ ๋ชจ๋ธ ์‹คํ–‰ํ•˜๊ธฐ [[running-a-tf-text-generation-model-with-xla-from-transformers]] ๐Ÿค— Transformers์—์„œ XLA๋กœ ๊ฐ€์†ํ™”๋œ ์ƒ์„ฑ์„ ํ™œ์„ฑํ™”ํ•˜๋ ค๋ฉด ์ตœ์‹  ๋ฒ„์ „์˜ `transformers`๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์„ค์น˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash pip install transformers --upgrade ``` ๊ทธ๋ฆฌ๊ณ  ๋‹ค์Œ ์ฝ”๋“œ๋ฅผ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py import tensorflow as tf from transformers import AutoTokenizer, TFAutoModelForCausalLM # ์ตœ์†Œ ๋ฒ„์ „์˜ Transformers๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ์ง€ ์•Š๋‹ค๋ฉด ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•ฉ๋‹ˆ๋‹ค. from transformers.utils import check_min_version check_min_version("4.21.0") tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2", padding_side="left", pad_token="</s>") model = TFAutoModelForCausalLM.from_pretrained("openai-community/gpt2") input_string = ["TensorFlow is"] # XLA ์ƒ์„ฑ ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ค๊ธฐ ์œ„ํ•œ ํ•œ ์ค„ xla_generate = tf.function(model.generate, jit_compile=True) tokenized_input = tokenizer(input_string, return_tensors="tf") generated_tokens = xla_generate(**tokenized_input, num_beams=2) decoded_text = tokenizer.decode(generated_tokens[0], skip_special_tokens=True) print(f"Generated -- {decoded_text}") # Generated -- TensorFlow is an open-source, open-source, distributed-source application # framework for the ``` ์•Œ ์ˆ˜ ์žˆ๋“ฏ์ด, `generate()`์—์„œ XLA๋ฅผ ํ™œ์„ฑํ™”ํ•˜๋Š” ๊ฒƒ์€ ๋‹จ ํ•œ ์ค„์˜ ์ฝ”๋“œ์ž…๋‹ˆ๋‹ค. ์ฝ”๋“œ์˜ ๋‚˜๋จธ์ง€ ๋ถ€๋ถ„์€ ๋ณ€๊ฒฝ๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์œ„ ์ฝ”๋“œ ์Šค๋‹ˆํŽซ์—์„œ๋Š” XLA์— ํŠน์ •ํ•œ ๋ช‡ ๊ฐ€์ง€ ์ฃผ์˜ํ•  ์ ์ด ์žˆ์Šต๋‹ˆ๋‹ค. XLA๊ฐ€ ๊ฐ€์ ธ๋‹ค์ค„ ์†๋„ ํ–ฅ์ƒ์„ ์‹คํ˜„ํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” ์ด๋ฅผ ์•Œ๊ณ  ์žˆ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ ์„น์…˜์—์„œ ์ด์— ๋Œ€ํ•ด ๋…ผ์˜ํ•ฉ๋‹ˆ๋‹ค. ## ์ฃผ์˜ํ•  ์  [[gotchas-to-be-aware-of]] XLA ํ™œ์„ฑํ™” ํ•จ์ˆ˜(`xla_generate()`์™€ ๊ฐ™์€)๋ฅผ ์ฒ˜์Œ ์‹คํ–‰ํ•  ๋•Œ ๋‚ด๋ถ€์ ์œผ๋กœ ๊ณ„์‚ฐ ๊ทธ๋ž˜ํ”„๋ฅผ ์ถ”๋ก ํ•˜๋ ค๊ณ  ํ•˜๋ฉฐ, ์ด๋Š” ์‹œ๊ฐ„์ด ์†Œ์š”๋ฉ๋‹ˆ๋‹ค. ์ด ๊ณผ์ •์€ [โ€œ์ถ”์ (tracing)โ€](https://www.tensorflow.org/guide/intro_to_graphs#when_is_a_function_tracing)์ด๋ผ๊ณ  ์•Œ๋ ค์ ธ ์žˆ์Šต๋‹ˆ๋‹ค. ์ƒ์„ฑ ์‹œ๊ฐ„์ด ๋น ๋ฅด์ง€ ์•Š๋‹ค๋Š” ๊ฒƒ์„ ์•Œ ์ˆ˜ ์žˆ์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค. `xla_generate()`(๋˜๋Š” ๋‹ค๋ฅธ XLA ํ™œ์„ฑํ™” ํ•จ์ˆ˜)์˜ ์—ฐ์† ํ˜ธ์ถœ์€ ํ•จ์ˆ˜์— ์ „๋‹ฌ๋œ ์ž…๋ ฅ์ด ์ดˆ๊ธฐ์— ๊ตฌ์ถ•๋œ ๊ณ„์‚ฐ ๊ทธ๋ž˜ํ”„์™€ ๋™์ผํ•œ ํ˜•ํƒœ๋ฅผ ๋”ฐ๋ฅธ๋‹ค๋ฉด, ๊ณ„์‚ฐ ๊ทธ๋ž˜ํ”„๋ฅผ ์ถ”๋ก ํ•  ํ•„์š”๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค. ์ด๋Š” ์ž…๋ ฅ ํ˜•ํƒœ๊ฐ€ ๊ณ ์ •๋œ ๋ชจ๋‹ฌ๋ฆฌํ‹ฐ(์˜ˆ: ์ด๋ฏธ์ง€)์—๋Š” ๋ฌธ์ œ๊ฐ€ ๋˜์ง€ ์•Š์ง€๋งŒ, ๊ฐ€๋ณ€ ์ž…๋ ฅ ํ˜•ํƒœ ๋ชจ๋‹ฌ๋ฆฌํ‹ฐ(์˜ˆ: ํ…์ŠคํŠธ)๋ฅผ ์‚ฌ์šฉํ•  ๋•Œ ์ฃผ์˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. `xla_generate()`๊ฐ€ ํ•ญ์ƒ ๋™์ผํ•œ ์ž…๋ ฅ ํ˜•ํƒœ๋กœ ๋™์ž‘ํ•˜๋„๋ก ํ•˜๋ ค๋ฉด, ํ† ํฌ๋‚˜์ด์ €๋ฅผ ํ˜ธ์ถœํ•  ๋•Œ `padding` ์ธ์ˆ˜๋ฅผ ์ง€์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
```py import tensorflow as tf from transformers import AutoTokenizer, TFAutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2", padding_side="left", pad_token="</s>") model = TFAutoModelForCausalLM.from_pretrained("openai-community/gpt2") input_string = ["TensorFlow is"] xla_generate = tf.function(model.generate, jit_compile=True) # ์—ฌ๊ธฐ์„œ, padding ์˜ต์…˜์ด ์žˆ๋Š” ํ† ํฌ๋‚˜์ด์ €๋ฅผ ํ˜ธ์ถœํ•ฉ๋‹ˆ๋‹ค. tokenized_input = tokenizer(input_string, pad_to_multiple_of=8, padding=True, return_tensors="tf") generated_tokens = xla_generate(**tokenized_input, num_beams=2) decoded_text = tokenizer.decode(generated_tokens[0], skip_special_tokens=True) print(f"Generated -- {decoded_text}") ``` ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด `xla_generate()`์— ๋Œ€ํ•œ ์ž…๋ ฅ์ด ํ•ญ์ƒ ์ถ”์ ๋œ ํ˜•ํƒœ๋กœ ์ „๋‹ฌ๋˜์–ด ์ƒ์„ฑ ์‹œ๊ฐ„์ด ๊ฐ€์†ํ™”๋ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ ์ฝ”๋“œ๋กœ ์ด๋ฅผ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py import time import tensorflow as tf from transformers import AutoTokenizer, TFAutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2", padding_side="left", pad_token="</s>") model = TFAutoModelForCausalLM.from_pretrained("openai-community/gpt2") xla_generate = tf.function(model.generate, jit_compile=True) for input_string in ["TensorFlow is", "TensorFlow is a", "TFLite is a"]: tokenized_input = tokenizer(input_string, pad_to_multiple_of=8, padding=True, return_tensors="tf") start = time.time_ns() generated_tokens = xla_generate(**tokenized_input, num_beams=2) end = time.time_ns() print(f"Execution time -- {(end - start) / 1e6:.1f} ms\n") ``` Tesla T4 GPU์—์„œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์€ ์ถœ๋ ฅ์„ ์˜ˆ์ƒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash Execution time -- 30819.6 ms Execution time -- 79.0 ms Execution time -- 78.9 ms ``` `xla_generate()`์˜ ์ฒซ ๋ฒˆ์งธ ํ˜ธ์ถœ์€ ์ถ”์  ๋•Œ๋ฌธ์— ์‹œ๊ฐ„์ด ์˜ค๋ž˜ ๊ฑธ๋ฆฌ์ง€๋งŒ, ์—ฐ์† ํ˜ธ์ถœ์€ ๋ช‡ ๋ฐฐ๋‚˜ ๋น ๋ฆ…๋‹ˆ๋‹ค. ์ƒ์„ฑ ์˜ต์…˜์— ๋Œ€ํ•œ ์–ด๋–ค ๋ณ€๊ฒฝ์ด๋“  ๋‹ค์‹œ ์ถ”์ ์„ ์œ ๋ฐœํ•˜๋ฏ€๋กœ ์ƒ์„ฑ ์‹œ๊ฐ„์ด ๋Š๋ ค์งˆ ์ˆ˜ ์žˆ์Œ์„ ๋ช…์‹ฌํ•˜์„ธ์š”. ์ด ๋ฌธ์„œ์—์„œ๋Š” ๐Ÿค— Transformers์—์„œ ์ œ๊ณตํ•˜๋Š” ๋ชจ๋“  ํ…์ŠคํŠธ ์ƒ์„ฑ ์˜ต์…˜์„ ๋‹ค๋ฃจ์ง€ ์•Š์•˜์Šต๋‹ˆ๋‹ค. ๊ณ ๊ธ‰ ์‚ฌ์šฉ ์‚ฌ๋ก€์— ๋Œ€ํ•ด ๋ฌธ์„œ๋ฅผ ์ฐธ์กฐํ•˜์‹œ๊ธฐ ๋ฐ”๋ž๋‹ˆ๋‹ค. ## ์ถ”๊ฐ€ ์ž๋ฃŒ [[additional-resources]] ์—ฌ๊ธฐ์— ๐Ÿค— Transformers์™€ XLA์— ๋Œ€ํ•ด ๋” ์ž์„ธํžˆ ์•Œ๊ณ  ์‹ถ์€ ๊ฒฝ์šฐ ๋„์›€์ด ๋  ์ˆ˜ ์žˆ๋Š” ๋ช‡ ๊ฐ€์ง€ ์ถ”๊ฐ€ ์ž๋ฃŒ๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. * [์ด Colab ๋…ธํŠธ๋ถ](https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/91_tf_xla_generate.ipynb)์€ XLA์™€ ํ˜ธํ™˜๋˜๋Š” ์ธ์ฝ”๋”-๋””์ฝ”๋”([T5](https://huggingface.co/docs/transformers/model_doc/t5)์™€ ๊ฐ™์€) ๋ฐ ๋””์ฝ”๋” ์ „์šฉ([GPT2](https://huggingface.co/docs/transformers/model_doc/gpt2)์™€ ๊ฐ™์€) ํ…์ŠคํŠธ ์ƒ์„ฑ ๋ชจ๋ธ์„ ์‹คํ—˜ํ•ด ๋ณผ ์ˆ˜ ์žˆ๋Š” ๋Œ€ํ™”ํ˜• ๋ฐ๋ชจ๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. * [์ด ๋ธ”๋กœ๊ทธ ๊ธ€](https://huggingface.co/blog/tf-xla-generate)์€ TensorFlow์—์„œ XLA์— ๋Œ€ํ•œ ์นœ์ ˆํ•œ ์†Œ๊ฐœ์™€ ํ•จ๊ป˜ XLA์™€ ํ˜ธํ™˜๋˜๋Š” ๋ชจ๋ธ์˜ ๋น„๊ต ๋ฒค์น˜๋งˆํฌ์— ๋Œ€ํ•œ ๊ฐœ์š”๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. * [์ด ๋ธ”๋กœ๊ทธ ๊ธ€](https://blog.tensorflow.org/2022/11/how-hugging-face-improved-text-generation-performance-with-xla.html)์€ ๐Ÿค— Transformers์˜ TensorFlow ๋ชจ๋ธ์— XLA ์ง€์›์„ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒƒ์— ๋Œ€ํ•œ ๋””์ž์ธ ์ฒ ํ•™์„ ๋…ผ์˜ํ•ฉ๋‹ˆ๋‹ค. 
* XLA์™€ TensorFlow ๊ทธ๋ž˜ํ”„์— ๋Œ€ํ•ด ๋” ์ž์„ธํžˆ ์•Œ๊ณ  ์‹ถ์€ ๊ฒฝ์šฐ ์ถ”์ฒœํ•˜๋Š” ๊ธ€: * [XLA: ๊ธฐ๊ณ„ ํ•™์Šต์„ ์œ„ํ•œ ์ตœ์ ํ™” ์ปดํŒŒ์ผ๋Ÿฌ](https://www.tensorflow.org/xla) * [๊ทธ๋ž˜ํ”„ ๋ฐ tf.function ์†Œ๊ฐœ](https://www.tensorflow.org/guide/intro_to_graphs) * [tf.function์œผ๋กœ ์„ฑ๋Šฅ ํ–ฅ์ƒํ•˜๊ธฐ](https://www.tensorflow.org/guide/function)
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/ko/attention.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์–ดํ…์…˜ ๋ฉ”์ปค๋‹ˆ์ฆ˜[[attention_mechanisms]] ๋Œ€๋ถ€๋ถ„์˜ ํŠธ๋žœ์Šคํฌ๋จธ ๋ชจ๋ธ์€ ์ •๋ฐฉํ–‰๋ ฌ์ธ ์ „์ฒด ์–ดํ…์…˜์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์ด๋Š” ๊ธด ํ…์ŠคํŠธ๋ฅผ ๋‹ค๋ฃฐ ๋•Œ๋Š” ํฐ ๊ณ„์‚ฐ ๋ณ‘๋ชฉ ํ˜„์ƒ์„ ์œ ๋ฐœํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `Longformer`์™€ `Reformer`๋Š” ํ›ˆ๋ จ ์†๋„๋ฅผ ๋†’์ด๊ธฐ ์œ„ํ•ด ์–ดํ…์…˜ ํ–‰๋ ฌ์˜ ํฌ์†Œ ๋ฒ„์ „์„ ์‚ฌ์šฉํ•˜์—ฌ ํšจ์œจ์„ ๋†’์ด๋ ค๋Š” ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. ## LSH ์–ดํ…์…˜[[lsh_attention]] [Reformer](model_doc/reformer)๋Š” LSH(Locality Sensitive Hashing) ์–ดํ…์…˜์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. softmax(QK^t)์—์„œ๋Š” ํ–‰๋ ฌ QK^t์˜ (softmax ์ฐจ์›์—์„œ) ๊ฐ€์žฅ ํฐ ์š”์†Œ๋“ค๋งŒ ์œ ์šฉํ•œ ๊ธฐ์—ฌ๋ฅผ ํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ๊ฐ๊ฐ์˜ ์ฟผ๋ฆฌ q์— ๋Œ€ํ•ด, q์™€ ๊ฐ€๊นŒ์šด ํ‚ค k๋งŒ ๊ณ ๋ คํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•ด์‹œ ํ•จ์ˆ˜๋Š” q์™€ k๊ฐ€ ๊ฐ€๊นŒ์šด์ง€ ์—ฌ๋ถ€๋ฅผ ๊ฒฐ์ •ํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ์–ดํ…์…˜ ๋งˆ์Šคํฌ๋Š” ํ˜„์žฌ ํ† ํฐ์„ ๋งˆ์Šคํ‚นํ•˜์—ฌ ๋ณ€๊ฒฝ๋ฉ๋‹ˆ๋‹ค. ์ด ๋•Œ ์ฒซ ๋ฒˆ์งธ ์œ„์น˜์˜ ํ† ํฐ์€ ์ œ์™ธํ•ฉ๋‹ˆ๋‹ค. ์™œ๋ƒํ•˜๋ฉด ์ฟผ๋ฆฌ์™€ ํ‚ค๊ฐ€ ๋™์ผํ•œ ๊ฐ’์„ ๊ฐ–๊ฒŒ ๋˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค(์„œ๋กœ ๋งค์šฐ ์œ ์‚ฌํ•จ). ํ•ด์‹œ๋Š” ์•ฝ๊ฐ„์˜ ๋ฌด์ž‘์œ„์„ฑ์„ ๊ฐ€์งˆ ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ, ์‹ค์ œ๋กœ๋Š” ์—ฌ๋Ÿฌ ๊ฐœ์˜ ํ•ด์‹œ ํ•จ์ˆ˜๊ฐ€ ์‚ฌ์šฉ๋˜๊ณ  (`n_rounds` ๋งค๊ฐœ๋ณ€์ˆ˜์— ์˜ํ•ด ๊ฒฐ์ •๋จ) ๊ทธ ํ›„์— ํ‰๊ท ๊ฐ’์„ ์ทจํ•˜๊ฒŒ ๋ฉ๋‹ˆ๋‹ค. ## ์ง€์—ญ ์–ดํ…์…˜[[local_attention]] [Longformer](model_doc/longformer)๋Š” ์ง€์—ญ ์–ดํ…์…˜์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ข…์ข… ํŠน์ • ํ† ํฐ์— ๋Œ€ํ•ด ์ง€์—ญ ์ปจํ…์ŠคํŠธ(์˜ˆ: ์™ผ์ชฝ๊ณผ ์˜ค๋ฅธ์ชฝ์— ์žˆ๋Š” ๋‘ ๊ฐœ์˜ ํ† ํฐ์€ ๋ฌด์—‡์ธ๊ฐ€์š”?)๋งŒ์œผ๋กœ๋„ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•˜๋Š”๋ฐ ์ถฉ๋ถ„ํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ ์ž‘์€ ์ฐฝ(window)์„ ๊ฐ€์ง„ ์–ดํ…์…˜ ๋ ˆ์ด์–ด๋ฅผ ์Œ“์Œ์œผ๋กœ์จ ๋งˆ์ง€๋ง‰ ๋ ˆ์ด์–ด๋Š” ์ฐฝ ๋‚ด์˜ ํ† ํฐ๋ฟ๋งŒ ์•„๋‹ˆ๋ผ ๋” ๋งŽ์€ ์ˆ˜์˜ ํ† ํฐ์— ๋Œ€ํ•œ ์ˆ˜์šฉ ์˜์—ญ(receptive field)์„ ๊ฐ–๊ฒŒ ๋˜์–ด ์ „์ฒด ๋ฌธ์žฅ์˜ ํ‘œํ˜„์„ ๊ตฌ์ถ•ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์‚ฌ์ „์— ์„ ํƒ๋œ ์ผ๋ถ€ ์ž…๋ ฅ ํ† ํฐ๋“ค์€ ์ „์—ญ ์–ดํ…์…˜์„ ๋ฐ›์Šต๋‹ˆ๋‹ค. ์ด ๋ช‡ ๊ฐœ์˜ ํ† ํฐ์— ๋Œ€ํ•ด์„œ๋Š” ์–ดํ…์…˜ ํ–‰๋ ฌ์ด ๋ชจ๋“  ํ† ํฐ์— ์ ‘๊ทผํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ, ์ด ๊ณผ์ •์€ ๋Œ€์นญ์ ์œผ๋กœ ์ด๋ฃจ์–ด์ง‘๋‹ˆ๋‹ค. ๋‹ค๋ฅธ ๋ชจ๋“  ํ† ํฐ๋“ค์€ ๋กœ์ปฌ ์ฐฝ ๋‚ด์˜ ํ† ํฐ๋“ค์— ๋”ํ•ด ํ•ด๋‹น ํŠน์ • ํ† ํฐ๋“ค์—๋„ ์ ‘๊ทผํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Š” ๋…ผ๋ฌธ์˜ Figure 2d์—์„œ ๋‚˜ํƒ€๋‚˜๋ฉฐ, ์•„๋ž˜์— ์ƒ˜ํ”Œ ์–ดํ…์…˜ ๋งˆ์Šคํฌ๊ฐ€ ์ œ์‹œ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค: <div class="flex justify-center"> <img scale="50 %" align="center" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/local_attention_mask.png"/> </div> ์ ์€ ํŒŒ๋ผ๋ฏธํ„ฐ์˜ ์–ดํ…์…˜ ํ–‰๋ ฌ์„ ์‚ฌ์šฉํ•˜๋ฉด ๋ชจ๋ธ์ด ๋” ํฐ ์‹œํ€€์Šค ์ž…๋ ฅ ๊ธธ์ด๋ฅผ ๊ฐ€์งˆ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
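์˜ˆ๋ฅผ ๋“ค์–ด Longformer์—์„œ๋Š” `global_attention_mask` ์ธ์ˆ˜๋กœ ์ „์—ญ ์–ดํ…์…˜์„ ๋ฐ›์„ ํ† ํฐ์„ ์ง€์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ์ฒซ ๋ฒˆ์งธ ํ† ํฐ์—๋งŒ ์ „์—ญ ์–ดํ…์…˜์„ ๋ถ€์—ฌํ•˜๋Š” ๊ฐ„๋‹จํ•œ ์˜ˆ์‹œ์ด๋ฉฐ, ์ฒดํฌํฌ์ธํŠธ์™€ ์ž…๋ ฅ ๋ฌธ์žฅ์€ ์„ค๋ช…์„ ์œ„ํ•ด ์ž„์˜๋กœ ๊ณ ๋ฅธ ๊ฒƒ์ž…๋‹ˆ๋‹ค:

```py
import torch
from transformers import AutoTokenizer, LongformerModel

tokenizer = AutoTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerModel.from_pretrained("allenai/longformer-base-4096")

inputs = tokenizer("Local attention only looks at a small window around each token.", return_tensors="pt")

# 0์€ ์ง€์—ญ(์ฐฝ) ์–ดํ…์…˜, 1์€ ์ „์—ญ ์–ดํ…์…˜์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1  # ์˜ˆ: ์ฒซ ๋ฒˆ์งธ(<s>) ํ† ํฐ์— ์ „์—ญ ์–ดํ…์…˜ ๋ถ€์—ฌ

outputs = model(**inputs, global_attention_mask=global_attention_mask)
print(outputs.last_hidden_state.shape)
```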
## ๋‹ค๋ฅธ ๋ฐฉ๋ฒ•๋“ค[[other_tricks]] ### ์ถ•๋ณ„ ์œ„์น˜ ์ธ์ฝ”๋”ฉ[[axial_positional_encodings]] [Reformer](model_doc/reformer)๋Š” ์ถ•๋ณ„ ์œ„์น˜ ์ธ์ฝ”๋”ฉ(axial positional encodings)์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ๊ธฐ์กด์˜ ํŠธ๋žœ์Šคํฌ๋จธ ๋ชจ๋ธ์—์„œ๋Š” ์œ„์น˜ ์ธ์ฝ”๋”ฉ ํ–‰๋ ฌ E๋Š” ํฌ๊ธฐ๊ฐ€ \\(l \times d\\)์ธ ํ–‰๋ ฌ์ด๋ฉฐ, ์—ฌ๊ธฐ์„œ \\(l\\)์€ ์‹œํ€€์Šค ๊ธธ์ด(sequence length)์ด๊ณ  \\(d\\)๋Š” ์ˆจ๊ฒจ์ง„ ์ƒํƒœ(hidden state)์˜ ์ฐจ์›์ž…๋‹ˆ๋‹ค. ๋งค์šฐ ๊ธด ํ…์ŠคํŠธ์˜ ๊ฒฝ์šฐ, ์ด ํ–‰๋ ฌ์€ ๋งค์šฐ ํฌ๋ฉฐ GPU ์ƒ์—์„œ ๊ณต๊ฐ„์„ ๋งŽ์ด ์ฐจ์ง€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฅผ ์™„ํ™”ํ•˜๊ธฐ ์œ„ํ•ด, ์ถ•๋ณ„ ์œ„์น˜ ์ธ์ฝ”๋”ฉ์€ ํฐ ํ–‰๋ ฌ E๋ฅผ ๋‘ ๊ฐœ์˜ ์ž‘์€ ํ–‰๋ ฌ E1๊ณผ E2๋กœ ๋ถ„ํ•ดํ•ฉ๋‹ˆ๋‹ค. ์ด๋•Œ E1์˜ ํฌ๊ธฐ๋Š” \\(l_{1} \times d_{1}\\)์ด๊ณ , E2์˜ ํฌ๊ธฐ๋Š” \\(l_{2} \times d_{2}\\)์ž…๋‹ˆ๋‹ค. ์ด๋•Œ \\(l_{1} \times l_{2} = l\\)์ด๊ณ  \\(d_{1} + d_{2} = d\\)(๊ธธ์ด์— ๋Œ€ํ•œ ๊ณฑ์…ˆ ์—ฐ์‚ฐ์„ ์‚ฌ์šฉํ•˜๋ฉด ํ›จ์”ฌ ์ž‘์•„์ง‘๋‹ˆ๋‹ค). E์˜ ์‹œ๊ฐ„ ๋‹จ๊ณ„ j์— ๋Œ€ํ•œ ์ž„๋ฒ ๋”ฉ์€ E1์—์„œ ์‹œ๊ฐ„ ๋‹จ๊ณ„ \\(j \% l1\\)์˜ ์ž„๋ฒ ๋”ฉ๊ณผ E2์—์„œ ์‹œ๊ฐ„ ๋‹จ๊ณ„ \\(j // l1\\)์˜ ์ž„๋ฒ ๋”ฉ์„ ์—ฐ๊ฒฐํ•˜์—ฌ ์–ป์Šต๋‹ˆ๋‹ค.
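์•„๋ž˜๋Š” ์ด ์ธ๋ฑ์Šค ๊ณ„์‚ฐ(\\(j \% l_{1}\\)๊ณผ \\(j // l_{1}\\))์„ ์ˆซ์ž๋กœ ํ™•์ธํ•ด ๋ณผ ์ˆ˜ ์žˆ๋Š” ๊ฐ„๋‹จํ•œ ์Šค์ผ€์น˜์ž…๋‹ˆ๋‹ค. ์‹ค์ œ Reformer ๊ตฌํ˜„์ด ์•„๋‹ˆ๋ผ ๊ฐœ๋…๋งŒ ๋ณด์—ฌ์ฃผ๋Š” ์˜ˆ์‹œ์ด๋ฉฐ, \\(l_{1}, l_{2}, d_{1}, d_{2}\\) ๊ฐ’์€ ์ž„์˜๋กœ ์ •ํ•œ ๊ฒƒ์ž…๋‹ˆ๋‹ค:

```py
import torch

l1, l2 = 4, 8   # ์‹œํ€€์Šค ๊ธธ์ด l = l1 * l2 = 32
d1, d2 = 3, 5   # ์€๋‹‰ ์ƒํƒœ ์ฐจ์› d = d1 + d2 = 8

E1 = torch.randn(l1, d1)
E2 = torch.randn(l2, d2)

def axial_position_embedding(j):
    # ์‹œ๊ฐ„ ๋‹จ๊ณ„ j์˜ ์ž„๋ฒ ๋”ฉ = E1[j % l1]๊ณผ E2[j // l1]์˜ ์—ฐ๊ฒฐ(concat)
    return torch.cat([E1[j % l1], E2[j // l1]])

# ํŒŒ๋ผ๋ฏธํ„ฐ ์ˆ˜: l1*d1 + l2*d2 = 52๋กœ, ํ•˜๋‚˜์˜ ํ–‰๋ ฌ E๋ฅผ ์“ธ ๋•Œ์˜ l*d = 256๋ณด๋‹ค ํ›จ์”ฌ ์ž‘์Šต๋‹ˆ๋‹ค.
print(axial_position_embedding(11).shape)  # torch.Size([8])
```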
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/ko/bertology.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # BERTology BERT์™€ ๊ฐ™์€ ๋Œ€๊ทœ๋ชจ ํŠธ๋žœ์Šคํฌ๋จธ์˜ ๋‚ด๋ถ€ ๋™์ž‘์„ ์กฐ์‚ฌํ•˜๋Š” ์—ฐ๊ตฌ ๋ถ„์•ผ๊ฐ€ ์ ์  ๋” ์ค‘์š”ํ•ด์ง€๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ํ˜น์ž๋Š” "BERTology"๋ผ ์นญํ•˜๊ธฐ๋„ ํ•ฉ๋‹ˆ๋‹ค. ์ด ๋ถ„์•ผ์˜ ์ข‹์€ ์˜ˆ์‹œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: - BERT๋Š” ๊ณ ์ „์ ์ธ NLP ํŒŒ์ดํ”„๋ผ์ธ์˜ ์žฌ๋ฐœ๊ฒฌ - Ian Tenney, Dipanjan Das, Ellie Pavlick: https://arxiv.org/abs/1905.05950 - 16๊ฐœ์˜ ํ—ค๋“œ๊ฐ€ ์ •๋ง๋กœ 1๊ฐœ๋ณด๋‹ค ๋‚˜์€๊ฐ€? - Paul Michel, Omer Levy, Graham Neubig: https://arxiv.org/abs/1905.10650 - BERT๋Š” ๋ฌด์—‡์„ ๋ณด๋Š”๊ฐ€? BERT์˜ ์–ดํ…์…˜ ๋ถ„์„ - Kevin Clark, Urvashi Khandelwal, Omer Levy, Christopher D. Manning: https://arxiv.org/abs/1906.04341 - CAT-probing: ํ”„๋กœ๊ทธ๋ž˜๋ฐ ์–ธ์–ด์— ๋Œ€ํ•ด ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์ด ์–ด๋–ป๊ฒŒ ์ฝ”๋“œ ๊ตฌ์กฐ๋ฅผ ๋ณด๋Š”์ง€ ์•Œ์•„๋ณด๊ธฐ ์œ„ํ•œ ๋ฉ”ํŠธ๋ฆญ ๊ธฐ๋ฐ˜ ์ ‘๊ทผ ๋ฐฉ๋ฒ•: https://arxiv.org/abs/2210.04633 ์šฐ๋ฆฌ๋Š” ์ด ์ƒˆ๋กœ์šด ์—ฐ๊ตฌ ๋ถ„์•ผ์˜ ๋ฐœ์ „์„ ๋•๊ธฐ ์œ„ํ•ด, BERT/GPT/GPT-2 ๋ชจ๋ธ์— ๋‚ด๋ถ€ ํ‘œํ˜„์„ ์‚ดํŽด๋ณผ ์ˆ˜ ์žˆ๋Š” ๋ช‡ ๊ฐ€์ง€ ๊ธฐ๋Šฅ์„ ์ถ”๊ฐ€ํ–ˆ์Šต๋‹ˆ๋‹ค. ์ด ๊ธฐ๋Šฅ๋“ค์€ ์ฃผ๋กœ Paul Michel์˜ ํ›Œ๋ฅญํ•œ ์ž‘์—…์„ ์ฐธ๊ณ ํ•˜์—ฌ ๊ฐœ๋ฐœ๋˜์—ˆ์Šต๋‹ˆ๋‹ค (https://arxiv.org/abs/1905.10650): - BERT/GPT/GPT-2์˜ ๋ชจ๋“  ์€๋‹‰ ์ƒํƒœ์— ์ ‘๊ทผํ•˜๊ธฐ, - BERT/GPT/GPT-2์˜ ๊ฐ ํ—ค๋“œ์˜ ๋ชจ๋“  ์–ดํ…์…˜ ๊ฐ€์ค‘์น˜์— ์ ‘๊ทผํ•˜๊ธฐ, - ํ—ค๋“œ์˜ ์ถœ๋ ฅ ๊ฐ’๊ณผ ๊ทธ๋ž˜๋””์–ธํŠธ๋ฅผ ๊ฒ€์ƒ‰ํ•˜์—ฌ ํ—ค๋“œ ์ค‘์š”๋„ ์ ์ˆ˜๋ฅผ ๊ณ„์‚ฐํ•˜๊ณ  https://arxiv.org/abs/1905.10650์—์„œ ์„ค๋ช…๋œ ๋Œ€๋กœ ํ—ค๋“œ๋ฅผ ์ œ๊ฑฐํ•˜๋Š” ๊ธฐ๋Šฅ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๊ธฐ๋Šฅ๋“ค์„ ์ดํ•ดํ•˜๊ณ  ์ง์ ‘ ์‚ฌ์šฉํ•ด๋ณผ ์ˆ˜ ์žˆ๋„๋ก [bertology.py](https://github.com/huggingface/transformers/tree/main/examples/research_projects/bertology/run_bertology.py) ์˜ˆ์ œ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์ถ”๊ฐ€ํ–ˆ์Šต๋‹ˆ๋‹ค. ์ด ์˜ˆ์ œ ์Šคํฌ๋ฆฝํŠธ์—์„œ๋Š” GLUE์— ๋Œ€ํ•ด ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์—์„œ ์ •๋ณด๋ฅผ ์ถ”์ถœํ•˜๊ณ  ๋ชจ๋ธ์„ ๊ฐ€์ง€์น˜๊ธฐ(prune)ํ•ด๋ด…๋‹ˆ๋‹ค.
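์•„๋ž˜๋Š” ์ด ์ค‘ ์€๋‹‰ ์ƒํƒœ์™€ ์–ดํ…์…˜ ๊ฐ€์ค‘์น˜์— ์ ‘๊ทผํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ณด์—ฌ์ฃผ๋Š” ๊ฐ„๋‹จํ•œ ์˜ˆ์‹œ์ž…๋‹ˆ๋‹ค. ์ฒดํฌํฌ์ธํŠธ์™€ ์ž…๋ ฅ ๋ฌธ์žฅ์€ ์„ค๋ช…์„ ์œ„ํ•ด ์ž„์˜๋กœ ๊ณ ๋ฅธ ๊ฒƒ์ž…๋‹ˆ๋‹ค:

```py
import torch
from transformers import AutoTokenizer, BertModel

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
model = BertModel.from_pretrained(
    "google-bert/bert-base-uncased", output_hidden_states=True, output_attentions=True
)

inputs = tokenizer("BERTology studies the inner workings of BERT.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(len(outputs.hidden_states))   # 13 (์ž„๋ฒ ๋”ฉ ์ถœ๋ ฅ + 12๊ฐœ ๋ ˆ์ด์–ด)
print(len(outputs.attentions))      # 12 (๋ ˆ์ด์–ด๋ณ„ ์–ดํ…์…˜ ๊ฐ€์ค‘์น˜)
print(outputs.attentions[0].shape)  # (๋ฐฐ์น˜, ํ—ค๋“œ ์ˆ˜, ์‹œํ€€์Šค ๊ธธ์ด, ์‹œํ€€์Šค ๊ธธ์ด)
```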
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/ko/transformers_agents.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Transformers Agent [[transformers-agent]] <Tip warning={true}> Transformers Agent๋Š” ์‹คํ—˜ ์ค‘์ธ API๋กœ ์–ธ์ œ๋“ ์ง€ ๋ณ€๊ฒฝ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. API ๋˜๋Š” ๊ธฐ๋ฐ˜ ๋ชจ๋ธ์ด ๋ณ€๊ฒฝ๋˜๊ธฐ ์‰ฝ๊ธฐ ๋•Œ๋ฌธ์— ์—์ด์ „ํŠธ๊ฐ€ ๋ฐ˜ํ™˜ํ•˜๋Š” ๊ฒฐ๊ณผ๋„ ๋‹ฌ๋ผ์งˆ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. </Tip> Transformers ๋ฒ„์ „ 4.29.0.์—์„œ *๋„๊ตฌ*์™€ *์—์ด์ „ํŠธ*๋ผ๋Š” ์ปจ์…‰์„ ๋„์ž…ํ–ˆ์Šต๋‹ˆ๋‹ค. [์ด colab](https://colab.research.google.com/drive/1c7MHD-T1forUPGcC_jlwsIptOzpG3hSj)์—์„œ ์‚ฌ์šฉํ•ด๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ฐ„๋‹จํžˆ ๋งํ•˜๋ฉด, Agent๋Š” ํŠธ๋žœ์Šคํฌ๋จธ ์œ„์— ์ž์—ฐ์–ด API๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์—„์„ ๋œ ๋„๊ตฌ ์„ธํŠธ๋ฅผ ์ •์˜ํ•˜๊ณ , ์ž์—ฐ์–ด๋ฅผ ํ•ด์„ํ•˜์—ฌ ์ด๋Ÿฌํ•œ ๋„๊ตฌ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ์—์ด์ „ํŠธ๋ฅผ ์„ค๊ณ„ํ–ˆ์Šต๋‹ˆ๋‹ค. ์ด API๋Š” ํ™•์žฅ์ด ๊ฐ€๋Šฅํ•˜๋„๋ก ์„ค๊ณ„ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ฃผ์š” ๋„๊ตฌ๋ฅผ ์„ ๋ณ„ํ•ด๋‘์—ˆ์ง€๋งŒ, ์ปค๋ฎค๋‹ˆํ‹ฐ์—์„œ ๊ฐœ๋ฐœํ•œ ๋ชจ๋“  ๋„๊ตฌ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก ์‹œ์Šคํ…œ์„ ์‰ฝ๊ฒŒ ํ™•์žฅํ•  ์ˆ˜ ์žˆ๋Š” ๋ฐฉ๋ฒ•๋„ ๋ณด์—ฌ๋“œ๋ฆฌ๊ฒ ์Šต๋‹ˆ๋‹ค. ๋ช‡ ๊ฐ€์ง€ ์˜ˆ๋ฅผ ํ†ตํ•ด ์ƒˆ๋กœ์šด API๋กœ ๋ฌด์—‡์„ ํ•  ์ˆ˜ ์žˆ๋Š”์ง€ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ์ด API๋Š” ํŠนํžˆ ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ ์ž‘์—…์—์„œ ๊ฐ•๋ ฅํ•˜๋ฏ€๋กœ ์ด๋ฏธ์ง€๋ฅผ ์ƒ์„ฑํ•˜๊ณ  ํ…์ŠคํŠธ๋ฅผ ์†Œ๋ฆฌ๋‚ด์–ด ์ฝ์–ด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ```py agent.run("Caption the following image", image=image) ``` | **Input** | **Output** | |-----------------------------------------------------------------------------------------------------------------------------|-----------------------------------| | <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/beaver.png" width=200> | A beaver is swimming in the water | --- ```py agent.run("Read the following text out loud", text=text) ``` | **Input** | **Output** | |-------------------------------------------------------------------------------------------------------------------------|----------------------------------------------| | A beaver is swimming in the water | <audio controls><source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tts_example.wav" type="audio/wav"> your browser does not support the audio element. 
</audio> --- ```py agent.run( "In the following `document`, where will the TRRF Scientific Advisory Council Meeting take place?", document=document, ) ``` | **Input** | **Output** | |-----------------------------------------------------------------------------------------------------------------------------|----------------| | <img src="https://datasets-server.huggingface.co/assets/hf-internal-testing/example-documents/--/hf-internal-testing--example-documents/test/0/image/image.jpg" width=200> | ballroom foyer | ## ๋ฐ”๋กœ ์‹œ์ž‘ํ•˜๊ธฐ [[quickstart]] `agent.run`์„ ์‚ฌ์šฉํ•˜๋ ค๋ฉด ๋จผ์ € ๋Œ€๊ทœ๋ชจ ์–ธ์–ด ๋ชจ๋ธ(LLM)์ธ ์—์ด์ „ํŠธ๋ฅผ ์ธ์Šคํ„ด์Šคํ™”ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ €ํฌ๋Š” openAI ๋ชจ๋ธ๋ฟ๋งŒ ์•„๋‹ˆ๋ผ BigCode ๋ฐ OpenAssistant์˜ ์˜คํ”ˆ์†Œ์Šค ๋Œ€์ฒด ๋ชจ๋ธ๋„ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. openAI ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์ด ๋” ์šฐ์ˆ˜ํ•˜์ง€๋งŒ(๋‹จ, openAI API ํ‚ค๊ฐ€ ํ•„์š”ํ•˜๋ฏ€๋กœ ๋ฌด๋ฃŒ๋กœ ์‚ฌ์šฉํ•  ์ˆ˜ ์—†์Œ), Hugging Face๋Š” BigCode์™€ OpenAssistant ๋ชจ๋ธ์˜ ์—”๋“œํฌ์ธํŠธ์— ๋Œ€ํ•œ ๋ฌด๋ฃŒ ์•ก์„ธ์Šค๋ฅผ ์ œ๊ณตํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ์šฐ์„  ๋ชจ๋“  ๊ธฐ๋ณธ ์ข…์†์„ฑ์„ ์„ค์น˜ํ•˜๋ ค๋ฉด `agents`๋ฅผ ์ถ”๊ฐ€๋กœ ์„ค์น˜ํ•˜์„ธ์š”. ```bash pip install transformers[agents] ``` openAI ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋ ค๋ฉด `openai` ์ข…์†์„ฑ์„ ์„ค์น˜ํ•œ ํ›„ [`OpenAiAgent`]๋ฅผ ์ธ์Šคํ„ด์Šคํ™”ํ•ฉ๋‹ˆ๋‹ค: ```bash pip install openai ``` ```py from transformers import OpenAiAgent agent = OpenAiAgent(model="text-davinci-003", api_key="<your_api_key>") ``` BigCode ๋˜๋Š” OpenAssistant๋ฅผ ์‚ฌ์šฉํ•˜๋ ค๋ฉด ๋จผ์ € ๋กœ๊ทธ์ธํ•˜์—ฌ Inference API์— ์•ก์„ธ์Šคํ•˜์„ธ์š”: ```py from huggingface_hub import login login("<YOUR_TOKEN>") ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์—์ด์ „ํŠธ๋ฅผ ์ธ์Šคํ„ด์Šคํ™”ํ•ฉ๋‹ˆ๋‹ค. ```py from transformers import HfAgent # Starcoder agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder") # StarcoderBase # agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoderbase") # OpenAssistant # agent = HfAgent(url_endpoint="https://api-inference.huggingface.co/models/OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5") ``` ํ˜„์žฌ Hugging Face์—์„œ ๋ฌด๋ฃŒ๋กœ ์ œ๊ณตํ•˜๋Š” ์ถ”๋ก  API๋ฅผ ์‚ฌ์šฉํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๋ชจ๋ธ์— ๋Œ€ํ•œ ์ž์ฒด ์ถ”๋ก  ์—”๋“œํฌ์ธํŠธ๊ฐ€ ์žˆ๋Š” ๊ฒฝ์šฐ(๋˜๋Š” ๋‹ค๋ฅธ ์—”๋“œํฌ์ธํŠธ๊ฐ€ ์žˆ๋Š” ๊ฒฝ์šฐ) ์œ„์˜ URL์„ ํ•ด๋‹น URL ์—”๋“œํฌ์ธํŠธ๋กœ ๋ฐ”๊ฟ€ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. <Tip> StarCoder์™€ OpenAssistant๋Š” ๋ฌด๋ฃŒ๋กœ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ ๊ฐ„๋‹จํ•œ ์ž‘์—…์—์„œ ๋†€๋ผ์šธ ์ •๋„๋กœ ์ž˜ ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ๋” ๋ณต์žกํ•œ ํ”„๋กฌํ”„ํŠธ๋ฅผ ์ฒ˜๋ฆฌํ•  ๋•Œ๋Š” ์ฒดํฌํฌ์ธํŠธ๊ฐ€ ์ž˜ ์ž‘๋™ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋ฌธ์ œ๊ฐ€ ๋ฐœ์ƒํ•˜๋ฉด OpenAI ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ด ๋ณด์‹œ๊ธฐ ๋ฐ”๋ž๋‹ˆ๋‹ค. ์•„์‰ฝ๊ฒŒ๋„ ์˜คํ”ˆ์†Œ์Šค๋Š” ์•„๋‹ˆ์ง€๋งŒ ํ˜„์žฌ๋กœ์„œ๋Š” ๋” ๋‚˜์€ ์„ฑ๋Šฅ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. </Tip> ์ด์ œ ์ค€๋น„๊ฐ€ ์™„๋ฃŒ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! ์ด์ œ ์ž์œ ๋กญ๊ฒŒ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ๋‘ ๊ฐ€์ง€ API์— ๋Œ€ํ•ด ์ž์„ธํžˆ ์•Œ์•„๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ### ๋‹จ์ผ ์‹คํ–‰ (run) [[single-execution-(run)]] ๋‹จ์ผ ์‹คํ–‰ ๋ฐฉ๋ฒ•์€ ์—์ด์ „ํŠธ์˜ [`~Agent.run`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ์ž…๋‹ˆ๋‹ค: ```py agent.run("Draw me a picture of rivers and lakes.") ``` <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes.png" width=200> ์ˆ˜ํ–‰ํ•˜๋ ค๋Š” ์ž‘์—…์— ์ ํ•ฉํ•œ ๋„๊ตฌ๋ฅผ ์ž๋™์œผ๋กœ ์„ ํƒํ•˜์—ฌ ์ ์ ˆํ•˜๊ฒŒ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. ๋™์ผํ•œ ๋ช…๋ น์–ด์—์„œ ํ•˜๋‚˜ ๋˜๋Š” ์—ฌ๋Ÿฌ ๊ฐœ์˜ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค (๋‹ค๋งŒ, ๋ช…๋ น์–ด๊ฐ€ ๋ณต์žกํ• ์ˆ˜๋ก ์—์ด์ „ํŠธ๊ฐ€ ์‹คํŒจํ•  ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์•„์ง‘๋‹ˆ๋‹ค). 
```py agent.run("Draw me a picture of the sea then transform the picture to add an island") ``` <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/sea_and_island.png" width=200> <br/> ๋ชจ๋“  [`~Agent.run`] ์ž‘์—…์€ ๋…๋ฆฝ์ ์ด๋ฏ€๋กœ ๋‹ค๋ฅธ ์ž‘์—…์œผ๋กœ ์—ฌ๋Ÿฌ ๋ฒˆ ์—ฐ์†ํ•ด์„œ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `agent`๋Š” ํฐ ์–ธ์–ด ๋ชจ๋ธ์ผ ๋ฟ์ด๋ฏ€๋กœ ํ”„๋กฌํ”„ํŠธ์— ์•ฝ๊ฐ„์˜ ๋ณ€ํ™”๋ฅผ ์ฃผ๋ฉด ์™„์ „ํžˆ ๋‹ค๋ฅธ ๊ฒฐ๊ณผ๊ฐ€ ๋‚˜์˜ฌ ์ˆ˜ ์žˆ๋‹ค๋Š” ์ ์— ์œ ์˜ํ•˜์„ธ์š”. ์ˆ˜ํ–‰ํ•˜๋ ค๋Š” ์ž‘์—…์„ ์ตœ๋Œ€ํ•œ ๋ช…ํ™•ํ•˜๊ฒŒ ์„ค๋ช…ํ•˜๋Š” ๊ฒƒ์ด ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ์ข‹์€ ํ”„๋กฌํ”„ํŠธ๋ฅผ ์ž‘์„ฑํ•˜๋Š” ๋ฐฉ๋ฒ•์€ [์—ฌ๊ธฐ](custom_tools#writing-good-user-inputs)์—์„œ ์ž์„ธํžˆ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์—ฌ๋Ÿฌ ์‹คํ–‰์— ๊ฑธ์ณ ์ƒํƒœ๋ฅผ ์œ ์ง€ํ•˜๊ฑฐ๋‚˜ ํ…์ŠคํŠธ๊ฐ€ ์•„๋‹Œ ๊ฐœ์ฒด๋ฅผ ์—์ด์ „ํŠธ์—๊ฒŒ ์ „๋‹ฌํ•˜๋ ค๋Š” ๊ฒฝ์šฐ์—๋Š” ์—์ด์ „ํŠธ๊ฐ€ ์‚ฌ์šฉํ•  ๋ณ€์ˆ˜๋ฅผ ์ง€์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๊ฐ•๊ณผ ํ˜ธ์ˆ˜์˜ ์ฒซ ๋ฒˆ์งธ ์ด๋ฏธ์ง€๋ฅผ ์ƒ์„ฑํ•œ ๋’ค, ๋ชจ๋ธ์ด ํ•ด๋‹น ๊ทธ๋ฆผ์— ์„ฌ์„ ์ถ”๊ฐ€ํ•˜๋„๋ก ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์š”์ฒญํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```python picture = agent.run("Generate a picture of rivers and lakes.") updated_picture = agent.run("Transform the image in `picture` to add an island to it.", picture=picture) ``` <Tip> ์ด ๋ฐฉ๋ฒ•์€ ๋ชจ๋ธ์ด ์š”์ฒญ์„ ์ดํ•ดํ•˜์ง€ ๋ชปํ•˜๊ณ  ๋„๊ตฌ๋ฅผ ํ˜ผํ•ฉํ•  ๋•Œ ์œ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```py agent.run("Draw me the picture of a capybara swimming in the sea") ``` ์—ฌ๊ธฐ์„œ ๋ชจ๋ธ์€ ๋‘ ๊ฐ€์ง€ ๋ฐฉ์‹์œผ๋กœ ํ•ด์„ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: - `text-to-image`์ด ๋ฐ”๋‹ค์—์„œ ํ—ค์—„์น˜๋Š” ์นดํ”ผ๋ฐ”๋ผ๋ฅผ ์ƒ์„ฑํ•˜๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. - ๋˜๋Š” `text-to-image`์ด ์นดํ”ผ๋ฐ”๋ผ๋ฅผ ์ƒ์„ฑํ•œ ๋‹ค์Œ `image-transformation` ๋„๊ตฌ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ”๋‹ค์—์„œ ํ—ค์—„์น˜๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. ์ฒซ ๋ฒˆ์งธ ์‹œ๋‚˜๋ฆฌ์˜ค๋ฅผ ๊ฐ•์ œ๋กœ ์‹คํ–‰ํ•˜๋ ค๋ฉด ํ”„๋กฌํ”„ํŠธ๋ฅผ ์ธ์ˆ˜๋กœ ์ „๋‹ฌํ•˜์—ฌ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py agent.run("Draw me a picture of the `prompt`", prompt="a capybara swimming in the sea") ``` </Tip> ### ๋Œ€ํ™” ๊ธฐ๋ฐ˜ ์‹คํ–‰ (chat) [[chat-based-execution-(chat)]] ์—์ด์ „ํŠธ๋Š” [`~Agent.chat`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๋Œ€ํ™” ๊ธฐ๋ฐ˜ ์ ‘๊ทผ ๋ฐฉ์‹๋„ ์žˆ์Šต๋‹ˆ๋‹ค: ```py agent.chat("Generate a picture of rivers and lakes") ``` <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes.png" width=200> ```py agent.chat("Transform the picture so that there is a rock in there") ``` <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes_and_beaver.png" width=200> <br/> ์ด ๋ฐฉ์‹์€ ์—ฌ๋Ÿฌ ๋ช…๋ น์–ด์— ๊ฑธ์ณ ์ƒํƒœ๋ฅผ ์œ ์ง€ํ•˜๊ณ ์ž ํ•  ๋•Œ ํฅ๋ฏธ๋กœ์šด ์ ‘๊ทผ ๋ฐฉ์‹์ž…๋‹ˆ๋‹ค. ์‹คํ—˜์šฉ์œผ๋กœ ๋” ์ข‹์ง€๋งŒ ๋ณต์žกํ•œ ๋ช…๋ น์–ด๋ณด๋‹ค๋Š” ๋‹จ์ผ ๋ช…๋ น์–ด([`~Agent.run`] ๋ฉ”์†Œ๋“œ๊ฐ€ ๋” ์ž˜ ์ฒ˜๋ฆฌํ•˜๋Š” ๋ช…๋ น์–ด)์— ํ›จ์”ฌ ๋” ์ž˜ ์ž‘๋™ํ•˜๋Š” ๊ฒฝํ–ฅ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๋ฉ”์†Œ๋“œ๋Š” ํ…์ŠคํŠธ๊ฐ€ ์•„๋‹Œ ์œ ํ˜•์ด๋‚˜ ํŠน์ • ํ”„๋กฌํ”„ํŠธ๋ฅผ ์ „๋‹ฌํ•˜๋ ค๋Š” ๊ฒฝ์šฐ ์ธ์ˆ˜๋ฅผ ๋ฐ›์„ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ### โš ๏ธ ์›๊ฒฉ ์‹คํ–‰ [[remote-execution]] ๋ฐ๋ชจ ๋ชฉ์ ๊ณผ ๋ชจ๋“  ์„ค์ •์—์„œ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก ์—์ด์ „ํŠธ๊ฐ€ ์ ‘๊ทผํ•  ์ˆ˜ ์žˆ๋Š” ๋ช‡ ๊ฐ€์ง€ ๊ธฐ๋ณธ ๋„๊ตฌ์— ๋Œ€ํ•œ ์›๊ฒฉ ์‹คํ–‰๊ธฐ๋ฅผ ๋งŒ๋“ค์—ˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋„๊ตฌ๋Š” [inference endpoints](https://huggingface.co/inference-endpoints)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋งŒ๋“ค์–ด์กŒ์Šต๋‹ˆ๋‹ค. 
์›๊ฒฉ ์‹คํ–‰๊ธฐ ๋„๊ตฌ๋ฅผ ์ง์ ‘ ์„ค์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ณด๋ ค๋ฉด [์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ ๊ฐ€์ด๋“œ](./custom_tools)๋ฅผ ์ฝ์–ด๋ณด์‹œ๊ธฐ ๋ฐ”๋ž๋‹ˆ๋‹ค. ์›๊ฒฉ ๋„๊ตฌ๋กœ ์‹คํ–‰ํ•˜๋ ค๋ฉด [`~Agent.run`] ๋˜๋Š” [`~Agent.chat`] ์ค‘ ํ•˜๋‚˜์— `remote=True`๋ฅผ ์ง€์ •ํ•˜๊ธฐ๋งŒ ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๋‹ค์Œ ๋ช…๋ น์€ ๋งŽ์€ RAM์ด๋‚˜ GPU ์—†์ด๋„ ๋ชจ๋“  ์žฅ์น˜์—์„œ ํšจ์œจ์ ์œผ๋กœ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py agent.run("Draw me a picture of rivers and lakes", remote=True) ``` [`~Agent.chat`]๋„ ๋งˆ์ฐฌ๊ฐ€์ง€์ž…๋‹ˆ๋‹ค: ```py agent.chat("Draw me a picture of rivers and lakes", remote=True) ``` ### ์—ฌ๊ธฐ์„œ ๋ฌด์Šจ ์ผ์ด ์ผ์–ด๋‚˜๋Š” ๊ฑฐ์ฃ ? ๋„๊ตฌ๋ž€ ๋ฌด์—‡์ด๊ณ , ์—์ด์ „ํŠธ๋ž€ ๋ฌด์—‡์ธ๊ฐ€์š”? [[whats-happening-here-what-are-tools-and-what-are-agents]] <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/diagram.png"> #### ์—์ด์ „ํŠธ [[agents]] ์—ฌ๊ธฐ์„œ "์—์ด์ „ํŠธ"๋Š” ๋Œ€๊ทœ๋ชจ ์–ธ์–ด ๋ชจ๋ธ์ด๋ฉฐ, ํŠน์ • ๋„๊ตฌ ๋ชจ์Œ์— ์ ‘๊ทผํ•  ์ˆ˜ ์žˆ๋„๋ก ํ”„๋กฌํ”„ํŠธํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. LLM์€ ์ž‘์€ ์ฝ”๋“œ ์ƒ˜ํ”Œ์„ ์ƒ์„ฑํ•˜๋Š” ๋ฐ ์ƒ๋‹นํžˆ ๋Šฅ์ˆ™ํ•˜๋ฏ€๋กœ, ์ด ์žฅ์ ์„ ํ™œ์šฉํ•ด ๋„๊ตฌ ๋ชจ์Œ์„ ์‚ฌ์šฉํ•˜์—ฌ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•˜๋Š” ์ž‘์€ ์ฝ”๋“œ ์ƒ˜ํ”Œ์„ ์ œ๊ณตํ•˜๋ผ๋Š” ๋ฉ”์‹œ์ง€๋ฅผ ํ‘œ์‹œํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฐ ๋‹ค์Œ ์—์ด์ „ํŠธ์—๊ฒŒ ์ œ๊ณตํ•˜๋Š” ์ž‘์—…๊ณผ ์ œ๊ณตํ•˜๋Š” ๋„๊ตฌ์— ๋Œ€ํ•œ ์„ค๋ช…์œผ๋กœ ์ด ํ”„๋กฌํ”„ํŠธ๊ฐ€ ์™„๋ฃŒ๋ฉ๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ์‚ฌ์šฉ ์ค‘์ธ ๋„๊ตฌ๋“ค์˜ ๋ฌธ์„œ์— ์ ‘๊ทผํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ, ํ•ด๋‹น ๋„๊ตฌ๋“ค์˜ ์ž…๋ ฅ๊ณผ ์ถœ๋ ฅ์„ ์˜ˆ์ƒํ•˜๊ณ , ๊ด€๋ จ๋œ ์ฝ”๋“œ๋ฅผ ์ƒ์„ฑํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. #### ๋„๊ตฌ [[tools]] ๋„๊ตฌ๋Š” ๋งค์šฐ ๊ฐ„๋‹จํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฆ„๊ณผ ์„ค๋ช…์ด ์žˆ๋Š” ๋‹จ์ผ ๊ธฐ๋Šฅ์œผ๋กœ ๊ตฌ์„ฑ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฐ ๋‹ค์Œ ์ด๋Ÿฌํ•œ ๋„๊ตฌ์˜ ์„ค๋ช…์„ ์‚ฌ์šฉํ•˜์—ฌ ์ƒ๋‹ด์›์—๊ฒŒ ํ”„๋กฌํ”„ํŠธ๋ฅผ ํ‘œ์‹œํ•ฉ๋‹ˆ๋‹ค. ์ด ํ”„๋กฌํ”„ํŠธ๋ฅผ ํ†ตํ•ด ์ƒ๋‹ด์›์—๊ฒŒ ์ฟผ๋ฆฌ์—์„œ ์š”์ฒญ๋œ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•˜๊ธฐ ์œ„ํ•ด ๋„๊ตฌ๋ฅผ ํ™œ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค. ์—์ด์ „ํŠธ๊ฐ€ ๋งค์šฐ ์›์ž์ ์ธ ๋„๊ตฌ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋” ๋‚˜์€ ์ฝ”๋“œ๋ฅผ ์ž‘์„ฑํ•˜๊ธฐ ๋•Œ๋ฌธ์— ํŒŒ์ดํ”„๋ผ์ธ์ด ์•„๋‹Œ ์™„์ „ํžˆ ์ƒˆ๋กœ์šด ๋„๊ตฌ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ํŒŒ์ดํ”„๋ผ์ธ์€ ๋” ๋งŽ์ด ๋ฆฌํŒฉํ„ฐ๋ง๋˜๋ฉฐ ์ข…์ข… ์—ฌ๋Ÿฌ ์ž‘์—…์„ ํ•˜๋‚˜๋กœ ๊ฒฐํ•ฉํ•ฉ๋‹ˆ๋‹ค. ๋„๊ตฌ๋Š” ํ•˜๋‚˜์˜ ๋งค์šฐ ๊ฐ„๋‹จํ•œ ์ž‘์—…์—๋งŒ ์ง‘์ค‘ํ•˜๋„๋ก ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. #### ์ฝ”๋“œ ์‹คํ–‰?! [[code-execution]] ๊ทธ๋Ÿฐ ๋‹ค์Œ ์ด ์ฝ”๋“œ๋Š” ๋„๊ตฌ์™€ ํ•จ๊ป˜ ์ „๋‹ฌ๋œ ์ž…๋ ฅ ์„ธํŠธ์— ๋Œ€ํ•ด ์ž‘์€ Python ์ธํ„ฐํ”„๋ฆฌํ„ฐ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์‹คํ–‰๋ฉ๋‹ˆ๋‹ค. "์ž„์˜ ์ฝ”๋“œ ์‹คํ–‰์ด๋ผ๋‹ˆ!"์ด๋ผ๊ณ  ๋น„๋ช…์„ ์ง€๋ฅด๋Š” ์†Œ๋ฆฌ๊ฐ€ ๋“ค๋ฆฌ๊ฒ ์ง€๋งŒ, ๊ทธ๋ ‡์ง€ ์•Š์€ ์ด์œ ๋ฅผ ์„ค๋ช…ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ํ˜ธ์ถœํ•  ์ˆ˜ ์žˆ๋Š” ํ•จ์ˆ˜๋Š” ์ œ๊ณตํ•œ ๋„๊ตฌ์™€ ์ธ์‡„ ๊ธฐ๋Šฅ๋ฟ์ด๋ฏ€๋กœ ์ด๋ฏธ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ๋Š” ๊ธฐ๋Šฅ์ด ์ œํ•œ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. Hugging Face ๋„๊ตฌ๋กœ ์ œํ•œ๋˜์–ด ์žˆ๋‹ค๋ฉด ์•ˆ์ „ํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ์–ดํŠธ๋ฆฌ๋ทฐํŠธ ์กฐํšŒ๋‚˜ ๊ฐ€์ ธ์˜ค๊ธฐ๋ฅผ ํ—ˆ์šฉํ•˜์ง€ ์•Š์œผ๋ฏ€๋กœ (์–ด์ฐจํ”ผ ์ž‘์€ ํ•จ์ˆ˜ ์ง‘ํ•ฉ์— ์ž…/์ถœ๋ ฅ์„ ์ „๋‹ฌํ•  ๋•Œ๋Š” ํ•„์š”ํ•˜์ง€ ์•Š์•„์•ผ ํ•ฉ๋‹ˆ๋‹ค) ๊ฐ€์žฅ ๋ช…๋ฐฑํ•œ ๊ณต๊ฒฉ(์–ด์ฐจํ”ผ LLM์— ์ถœ๋ ฅํ•˜๋ผ๋Š” ๋ฉ”์‹œ์ง€๋ฅผ ํ‘œ์‹œํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค)์€ ๋ฌธ์ œ๊ฐ€ ๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๋งค์šฐ ์•ˆ์ „ํ•˜๊ฒŒ ํ•˜๊ณ  ์‹ถ๋‹ค๋ฉด ์ถ”๊ฐ€ ์ธ์ˆ˜ return_code=True๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ run() ๋ฉ”์†Œ๋“œ๋ฅผ ์‹คํ–‰ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ์ด ๊ฒฝ์šฐ ์—์ด์ „ํŠธ๊ฐ€ ์‹คํ–‰ํ•  ์ฝ”๋“œ๋ฅผ ๋ฐ˜ํ™˜ํ•˜๊ณ  ์‹คํ–‰ํ• ์ง€ ์—ฌ๋ถ€๋ฅผ ๊ฒฐ์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
๋ถˆ๋ฒ•์ ์ธ ์—ฐ์‚ฐ์„ ์ˆ˜ํ–‰ํ•˜๋ ค๊ณ  ํ•˜๊ฑฐ๋‚˜ ์—์ด์ „ํŠธ๊ฐ€ ์ƒ์„ฑํ•œ ์ฝ”๋“œ์— ์ผ๋ฐ˜์ ์ธ ํŒŒ์ด์ฌ ์˜ค๋ฅ˜๊ฐ€ ์žˆ๋Š” ๊ฒฝ์šฐ ์‹คํ–‰์ด ์ค‘์ง€๋ฉ๋‹ˆ๋‹ค. ### ์—„์„ ๋œ ๋„๊ตฌ ๋ชจ์Œ [[a-curated-set-of-tools]] ์ €ํฌ๋Š” ์ด๋Ÿฌํ•œ ์—์ด์ „ํŠธ๋“ค์˜ ์—ญ๋Ÿ‰์„ ๊ฐ•ํ™”ํ•  ์ˆ˜ ์žˆ๋Š” ์ผ๋ จ์˜ ๋„๊ตฌ๋ฅผ ํ™•์ธํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์€ ์—ฐ๋™๋œ ๋„๊ตฌ์˜ ์ตœ์‹  ๋ชฉ๋ก์ž…๋‹ˆ๋‹ค: - **๋ฌธ์„œ ์งˆ๋ฌธ ๋‹ต๋ณ€**: ์ด๋ฏธ์ง€ ํ˜•์‹์˜ ๋ฌธ์„œ(์˜ˆ: PDF)๊ฐ€ ์ฃผ์–ด์ง€๋ฉด ์ด ๋ฌธ์„œ์— ๋Œ€ํ•œ ์งˆ๋ฌธ์— ๋‹ต๋ณ€ํ•ฉ๋‹ˆ๋‹ค. ([Donut](./model_doc/donut)) - **ํ…์ŠคํŠธ ์งˆ๋ฌธ ๋‹ต๋ณ€**: ๊ธด ํ…์ŠคํŠธ์™€ ์งˆ๋ฌธ์ด ์ฃผ์–ด์ง€๋ฉด ํ…์ŠคํŠธ์—์„œ ์งˆ๋ฌธ์— ๋‹ต๋ณ€ํ•ฉ๋‹ˆ๋‹ค. ([Flan-T5](./model_doc/flan-t5)) - **๋ฌด์กฐ๊ฑด ์ด๋ฏธ์ง€ ์บก์…”๋‹**: ์ด๋ฏธ์ง€์— ์บก์…˜์„ ๋‹ต๋‹ˆ๋‹ค! ([BLIP](./model_doc/blip)) - **์ด๋ฏธ์ง€ ์งˆ๋ฌธ ๋‹ต๋ณ€**: ์ด๋ฏธ์ง€๊ฐ€ ์ฃผ์–ด์ง€๋ฉด ์ด ์ด๋ฏธ์ง€์— ๋Œ€ํ•œ ์งˆ๋ฌธ์— ๋‹ต๋ณ€ํ•˜๊ธฐ. ([VILT](./model_doc/vilt)) - **์ด๋ฏธ์ง€ ๋ถ„ํ• **: ์ด๋ฏธ์ง€์™€ ํ”„๋กฌํ”„ํŠธ๊ฐ€ ์ฃผ์–ด์ง€๋ฉด ํ•ด๋‹น ํ”„๋กฌํ”„ํŠธ์˜ ๋ถ„ํ•  ๋งˆ์Šคํฌ๋ฅผ ์ถœ๋ ฅํ•ฉ๋‹ˆ๋‹ค. ([CLIPSeg](./model_doc/clipseg)) - **์Œ์„ฑ์„ ํ…์ŠคํŠธ๋กœ ๋ณ€ํ™˜**: ์‚ฌ๋žŒ์ด ๋งํ•˜๋Š” ์˜ค๋””์˜ค ๋…น์Œ์ด ์ฃผ์–ด์ง€๋ฉด ์Œ์„ฑ์„ ํ…์ŠคํŠธ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ([Whisper](./model_doc/whisper)) - **ํ…์ŠคํŠธ ์Œ์„ฑ ๋ณ€ํ™˜**: ํ…์ŠคํŠธ๋ฅผ ์Œ์„ฑ์œผ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ([SpeechT5](./model_doc/speecht5)) - **์ œ๋กœ ์ƒท(zero-shot) ํ…์ŠคํŠธ ๋ถ„๋ฅ˜**: ํ…์ŠคํŠธ์™€ ๋ ˆ์ด๋ธ” ๋ชฉ๋ก์ด ์ฃผ์–ด์ง€๋ฉด ํ…์ŠคํŠธ์™€ ๊ฐ€์žฅ ๊ด€๋ จ ์žˆ๋Š” ๋ ˆ์ด๋ธ”์„ ์‹๋ณ„ํ•ฉ๋‹ˆ๋‹ค. ([BART](./model_doc/bart)) - **ํ…์ŠคํŠธ ์š”์•ฝ**: ๊ธด ํ…์ŠคํŠธ๋ฅผ ํ•œ ๋ฌธ์žฅ ๋˜๋Š” ๋ช‡ ๋ฌธ์žฅ์œผ๋กœ ์š”์•ฝํ•ฉ๋‹ˆ๋‹ค. ([BART](./model_doc/bart)) - **๋ฒˆ์—ญ**: ํ…์ŠคํŠธ๋ฅผ ์ง€์ •๋œ ์–ธ์–ด๋กœ ๋ฒˆ์—ญํ•ฉ๋‹ˆ๋‹ค. ([NLLB](./model_doc/nllb)) ์ด๋Ÿฌํ•œ ๋„๊ตฌ๋Š” ํŠธ๋žœ์Šคํฌ๋จธ์— ํ†ตํ•ฉ๋˜์–ด ์žˆ์œผ๋ฉฐ, ์˜ˆ๋ฅผ ๋“ค์–ด ์ˆ˜๋™์œผ๋กœ๋„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py from transformers import load_tool tool = load_tool("text-to-speech") audio = tool("This is a text to speech tool") ``` ### ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ [[custom-tools]] ์—„์„ ๋œ ๋„๊ตฌ ์„ธํŠธ๋„ ์žˆ์ง€๋งŒ, ์ด ๊ตฌํ˜„์ด ์ œ๊ณตํ•˜๋Š” ๊ฐ€์žฅ ํฐ ๊ฐ€์น˜๋Š” ์‚ฌ์šฉ์ž ์ง€์ • ๋„๊ตฌ๋ฅผ ๋น ๋ฅด๊ฒŒ ๋งŒ๋“ค๊ณ  ๊ณต์œ ํ•  ์ˆ˜ ์žˆ๋‹ค๋Š” ์ ์ž…๋‹ˆ๋‹ค. ๋„๊ตฌ์˜ ์ฝ”๋“œ๋ฅผ Hugging Face Space๋‚˜ ๋ชจ๋ธ ์ €์žฅ์†Œ์— ํ‘ธ์‹œํ•˜๋ฉด ์—์ด์ „ํŠธ์—๊ฒŒ ์ง์ ‘ ๋„๊ตฌ๋ฅผ ํ™œ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [`huggingface-tools` organization](https://huggingface.co/huggingface-tools)์— ๋ช‡ ๊ฐ€์ง€ **ํŠธ๋žœ์Šคํฌ๋จธ์— ๊ตฌ์• ๋ฐ›์ง€ ์•Š๋Š”** ํˆด์„ ์ถ”๊ฐ€ํ–ˆ์Šต๋‹ˆ๋‹ค: - **ํ…์ŠคํŠธ ๋‹ค์šด๋กœ๋”**: ์›น URL์—์„œ ํ…์ŠคํŠธ๋ฅผ ๋‹ค์šด๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค. - **ํ…์ŠคํŠธ ์ด๋ฏธ์ง€ ๋ณ€ํ™˜**: ํ”„๋กฌํ”„ํŠธ์— ๋”ฐ๋ผ ์ด๋ฏธ์ง€๋ฅผ ์ƒ์„ฑํ•˜์—ฌ ์•ˆ์ •์ ์ธ ํ™•์‚ฐ์„ ํ™œ์šฉํ•ฉ๋‹ˆ๋‹ค. - **์ด๋ฏธ์ง€ ๋ณ€ํ™˜**: ์ดˆ๊ธฐ ์ด๋ฏธ์ง€์™€ ํ”„๋กฌํ”„ํŠธ๊ฐ€ ์ฃผ์–ด์ง„ ์ด๋ฏธ์ง€๋ฅผ ์ˆ˜์ •ํ•˜๊ณ , ์•ˆ์ •์ ์ธ ํ™•์‚ฐ์„ ํ™œ์šฉํ•˜๋Š” ์ง€์‹œ ํ”ฝ์…€ 2 ํ”ฝ์…€์„ ํ™œ์šฉํ•ฉ๋‹ˆ๋‹ค. - **ํ…์ŠคํŠธ ๋น„๋””์˜ค ๋ณ€ํ™˜**: ํ”„๋กฌํ”„ํŠธ์— ๋”ฐ๋ผ ์ž‘์€ ๋น„๋””์˜ค๋ฅผ ์ƒ์„ฑํ•˜๋ฉฐ, damo-vilab์„ ํ™œ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ €ํฌ๊ฐ€ ์ฒ˜์Œ๋ถ€ํ„ฐ ์‚ฌ์šฉํ•˜๊ณ  ์žˆ๋Š” ํ…์ŠคํŠธ-์ด๋ฏธ์ง€ ๋ณ€ํ™˜ ๋„๊ตฌ๋Š” [*huggingface-tools/text-to-image*](https://huggingface.co/spaces/huggingface-tools/text-to-image)์— ์žˆ๋Š” ์›๊ฒฉ ๋„๊ตฌ์ž…๋‹ˆ๋‹ค! ์ €ํฌ๋Š” ์ด ๋„๊ตฌ์™€ ๋‹ค๋ฅธ ์กฐ์ง์— ์ด๋Ÿฌํ•œ ๋„๊ตฌ๋ฅผ ๊ณ„์† ์ถœ์‹œํ•˜์—ฌ ์ด ๊ตฌํ˜„์„ ๋”์šฑ ๊ฐ•ํ™”ํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์—์ด์ „ํŠธ๋Š” ๊ธฐ๋ณธ์ ์œผ๋กœ [`huggingface-tools`](https://huggingface.co/huggingface-tools)์— ์žˆ๋Š” ๋„๊ตฌ์— ์ ‘๊ทผํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
[๋‹ค์Œ ๊ฐ€์ด๋“œ](custom_tools)์—์„œ ๋„๊ตฌ๋ฅผ ์ž‘์„ฑํ•˜๊ณ  ๊ณต์œ ํ•˜๋Š” ๋ฐฉ๋ฒ•๊ณผ Hub์— ์žˆ๋Š” ์‚ฌ์šฉ์ž ์ง€์ • ๋„๊ตฌ๋ฅผ ํ™œ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. ### ์ฝ”๋“œ ์ƒ์„ฑ[[code-generation]] ์ง€๊ธˆ๊นŒ์ง€ ์—์ด์ „ํŠธ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ณด์—ฌ๋“œ๋ ธ์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์—์ด์ „ํŠธ๋Š” ๋งค์šฐ ์ œํ•œ๋œ Python ์ธํ„ฐํ”„๋ฆฌํ„ฐ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์‹คํ–‰ํ•  ์ฝ”๋“œ๋งŒ ์ƒ์„ฑํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค๋ฅธ ์„ค์ •์—์„œ ์ƒ์„ฑ๋œ ์ฝ”๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๋ ค๋Š” ๊ฒฝ์šฐ ์—์ด์ „ํŠธ์—๊ฒŒ ๋„๊ตฌ ์ •์˜ ๋ฐ ์ •ํ™•ํ•œ ๊ฐ€์ ธ์˜ค๊ธฐ์™€ ํ•จ๊ป˜ ์ฝ”๋“œ๋ฅผ ๋ฐ˜ํ™˜ํ•˜๋ผ๋Š” ๋ฉ”์‹œ์ง€๋ฅผ ํ‘œ์‹œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๋‹ค์Œ ๋ช…๋ น์–ด๋Š” ```python agent.run("Draw me a picture of rivers and lakes", return_code=True) ``` ๋‹ค์Œ ์ฝ”๋“œ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ```python from transformers import load_tool image_generator = load_tool("huggingface-tools/text-to-image") image = image_generator(prompt="rivers and lakes") ``` ์ด ์ฝ”๋“œ๋Š” ์ง์ ‘ ์ˆ˜์ •ํ•˜๊ณ  ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/ko/custom_tools.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ์™€ ํ”„๋กฌํ”„ํŠธ[[custom-tools-and-prompts]] <Tip> Transformers์™€ ๊ด€๋ จํ•˜์—ฌ ์–ด๋–ค ๋„๊ตฌ์™€ ์—์ด์ „ํŠธ๊ฐ€ ์žˆ๋Š”์ง€ ์ž˜ ๋ชจ๋ฅด์‹ ๋‹ค๋ฉด [Transformers Agents](transformers_agents) ํŽ˜์ด์ง€๋ฅผ ๋จผ์ € ์ฝ์–ด๋ณด์‹œ๊ธฐ ๋ฐ”๋ž๋‹ˆ๋‹ค. </Tip> <Tip warning={true}> Transformers Agents๋Š” ์‹คํ—˜ ์ค‘์ธ API๋กœ ์–ธ์ œ๋“ ์ง€ ๋ณ€๊ฒฝ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. API ๋˜๋Š” ๊ธฐ๋ฐ˜ ๋ชจ๋ธ์ด ๋ณ€๊ฒฝ๋˜๊ธฐ ์‰ฝ๊ธฐ ๋•Œ๋ฌธ์— ์—์ด์ „ํŠธ๊ฐ€ ๋ฐ˜ํ™˜ํ•˜๋Š” ๊ฒฐ๊ณผ๋„ ๋‹ฌ๋ผ์งˆ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. </Tip> ์—์ด์ „ํŠธ์—๊ฒŒ ๊ถŒํ•œ์„ ๋ถ€์—ฌํ•˜๊ณ  ์ƒˆ๋กœ์šด ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•˜๊ฒŒ ํ•˜๋ ค๋ฉด ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ์™€ ํ”„๋กฌํ”„ํŠธ๋ฅผ ๋งŒ๋“ค๊ณ  ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ๋ฌด์—‡๋ณด๋‹ค ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋‚ด์šฉ์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: - ํ”„๋กฌํ”„ํŠธ๋ฅผ ์‚ฌ์šฉ์ž ์ •์˜ํ•˜๋Š” ๋ฐฉ๋ฒ• - ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ• - ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ๋ฅผ ๋งŒ๋“œ๋Š” ๋ฐฉ๋ฒ• ## ํ”„๋กฌํ”„ํŠธ๋ฅผ ์‚ฌ์šฉ์ž ์ •์˜ํ•˜๊ธฐ[[customizing-the-prompt]] [Transformers Agents](transformers_agents)์—์„œ ์„ค๋ช…ํ•œ ๊ฒƒ์ฒ˜๋Ÿผ ์—์ด์ „ํŠธ๋Š” [`~Agent.run`] ๋ฐ [`~Agent.chat`] ๋ชจ๋“œ์—์„œ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `run`(์‹คํ–‰) ๋ชจ๋“œ์™€ `chat`(์ฑ„ํŒ…) ๋ชจ๋“œ ๋ชจ๋‘ ๋™์ผํ•œ ๋กœ์ง์„ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•ฉ๋‹ˆ๋‹ค. ์—์ด์ „ํŠธ๋ฅผ ๊ตฌ๋™ํ•˜๋Š” ์–ธ์–ด ๋ชจ๋ธ์€ ๊ธด ํ”„๋กฌํ”„ํŠธ์— ๋”ฐ๋ผ ์กฐ๊ฑด์ด ์ง€์ •๋˜๊ณ , ์ค‘์ง€ ํ† ํฐ์— ๋„๋‹ฌํ•  ๋•Œ๊นŒ์ง€ ๋‹ค์Œ ํ† ํฐ์„ ์ƒ์„ฑํ•˜์—ฌ ํ”„๋กฌํ”„ํŠธ๋ฅผ ์™„์ˆ˜ํ•ฉ๋‹ˆ๋‹ค. `chat` ๋ชจ๋“œ์—์„œ๋Š” ํ”„๋กฌํ”„ํŠธ๊ฐ€ ์ด์ „ ์‚ฌ์šฉ์ž ์ž…๋ ฅ ๋ฐ ๋ชจ๋ธ ์ƒ์„ฑ์œผ๋กœ ์—ฐ์žฅ๋œ๋‹ค๋Š” ์ ์ด ๋‘ ๋ชจ๋“œ์˜ ์œ ์ผํ•œ ์ฐจ์ด์ ์ž…๋‹ˆ๋‹ค. ์ด๋ฅผ ํ†ตํ•ด ์—์ด์ „ํŠธ๊ฐ€ ๊ณผ๊ฑฐ ์ƒํ˜ธ์ž‘์šฉ์— ์ ‘๊ทผํ•  ์ˆ˜ ์žˆ๊ฒŒ ๋˜๋ฏ€๋กœ ์—์ด์ „ํŠธ์—๊ฒŒ ์ผ์ข…์˜ ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ์ œ๊ณตํ•˜๋Š” ์…ˆ์ž…๋‹ˆ๋‹ค. ### ํ”„๋กฌํ”„ํŠธ์˜ ๊ตฌ์กฐ[[structure-of-the-prompt]] ์–ด๋–ป๊ฒŒ ํ”„๋กฌํ”„ํŠธ ์‚ฌ์šฉ์ž ์ •์˜๋ฅผ ์ž˜ ํ•  ์ˆ˜ ์žˆ๋Š”์ง€ ์ดํ•ดํ•˜๊ธฐ ์œ„ํ•ด ํ”„๋กฌํ”„ํŠธ์˜ ๊ตฌ์กฐ๋ฅผ ์ž์„ธํžˆ ์‚ดํŽด๋ด…์‹œ๋‹ค. ํ”„๋กฌํ”„ํŠธ๋Š” ํฌ๊ฒŒ ๋„ค ๋ถ€๋ถ„์œผ๋กœ ๊ตฌ์„ฑ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. - 1. ๋„์ž…: ์—์ด์ „ํŠธ๊ฐ€ ์–ด๋–ป๊ฒŒ ํ–‰๋™ํ•ด์•ผ ํ•˜๋Š”์ง€, ๋„๊ตฌ์˜ ๊ฐœ๋…์— ๋Œ€ํ•œ ์„ค๋ช…. - 2. ๋ชจ๋“  ๋„๊ตฌ์— ๋Œ€ํ•œ ์„ค๋ช…. ์ด๋Š” ๋Ÿฐํƒ€์ž„์— ์‚ฌ์šฉ์ž๊ฐ€ ์ •์˜/์„ ํƒํ•œ ๋„๊ตฌ๋กœ ๋™์ ์œผ๋กœ ๋Œ€์ฒด๋˜๋Š” `<<all_tools>>` ํ† ํฐ์œผ๋กœ ์ •์˜๋ฉ๋‹ˆ๋‹ค. - 3. ์ž‘์—… ์˜ˆ์ œ ๋ฐ ํ•ด๋‹น ์†”๋ฃจ์…˜ ์„ธํŠธ. - 4. ํ˜„์žฌ ์˜ˆ์ œ ๋ฐ ํ•ด๊ฒฐ ์š”์ฒญ. ๊ฐ ๋ถ€๋ถ„์„ ๋” ์ž˜ ์ดํ•ดํ•  ์ˆ˜ ์žˆ๋„๋ก ์งง์€ ๋ฒ„์ „์„ ํ†ตํ•ด `run` ํ”„๋กฌํ”„ํŠธ๊ฐ€ ์–ด๋–ป๊ฒŒ ๋ณด์ด๋Š”์ง€ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ````text I will ask you to perform a task, your job is to come up with a series of simple commands in Python that will perform the task. [...] You can print intermediate results if it makes sense to do so. Tools: - document_qa: This is a tool that answers a question about a document (pdf). It takes an input named `document` which should be the document containing the information, as well as a `question` that is the question about the document. 
It returns a text that contains the answer to the question. - image_captioner: This is a tool that generates a description of an image. It takes an input named `image` which should be the image to the caption and returns a text that contains the description in English. [...] Task: "Answer the question in the variable `question` about the image stored in the variable `image`. The question is in French." I will use the following tools: `translator` to translate the question into English and then `image_qa` to answer the question on the input image. Answer: ```py translated_question = translator(question=question, src_lang="French", tgt_lang="English") print(f"The translated question is {translated_question}.") answer = image_qa(image=image, question=translated_question) print(f"The answer is {answer}") ``` Task: "Identify the oldest person in the `document` and create an image showcasing the result as a banner." I will use the following tools: `document_qa` to find the oldest person in the document, then `image_generator` to generate an image according to the answer. Answer: ```py answer = document_qa(document, question="What is the oldest person?") print(f"The answer is {answer}.") image = image_generator("A banner showing " + answer) ``` [...] Task: "Draw me a picture of rivers and lakes" I will use the following ```` ๋„์ž…(*"๋„๊ตฌ:"* ์•ž์˜ ํ…์ŠคํŠธ)์—์„œ๋Š” ๋ชจ๋ธ์ด ์–ด๋–ป๊ฒŒ ์ž‘๋™ํ•˜๊ณ  ๋ฌด์—‡์„ ํ•ด์•ผ ํ•˜๋Š”์ง€ ์ •ํ™•ํ•˜๊ฒŒ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. ์—์ด์ „ํŠธ๋Š” ํ•ญ์ƒ ๊ฐ™์€ ๋ฐฉ์‹์œผ๋กœ ์ž‘๋™ํ•ด์•ผ ํ•˜๋ฏ€๋กœ ์ด ๋ถ€๋ถ„์€ ์‚ฌ์šฉ์ž ์ •์˜ํ•  ํ•„์š”๊ฐ€ ์—†์„ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์Šต๋‹ˆ๋‹ค. ๋‘ ๋ฒˆ์งธ ๋ถ€๋ถ„(*"๋„๊ตฌ"* ์•„๋ž˜์˜ ๊ธ€๋จธ๋ฆฌ ๊ธฐํ˜ธ)์€ `run` ๋˜๋Š” `chat`์„ ํ˜ธ์ถœํ•  ๋•Œ ๋™์ ์œผ๋กœ ์ถ”๊ฐ€๋ฉ๋‹ˆ๋‹ค. ์ •ํ™•ํžˆ `agent.toolbox`์— ์žˆ๋Š” ๋„๊ตฌ ์ˆ˜๋งŒํผ ๊ธ€๋จธ๋ฆฌ ๊ธฐํ˜ธ๊ฐ€ ์žˆ๊ณ , ๊ฐ ๊ธ€๋จธ๋ฆฌ ๊ธฐํ˜ธ๋Š” ๋„๊ตฌ์˜ ์ด๋ฆ„๊ณผ ์„ค๋ช…์œผ๋กœ ๊ตฌ์„ฑ๋ฉ๋‹ˆ๋‹ค: ```text - <tool.name>: <tool.description> ``` ๋ฌธ์„œ ์งˆ์˜์‘๋‹ต ๋„๊ตฌ๋ฅผ ๊ฐ€์ ธ์˜ค๊ณ  ์ด๋ฆ„๊ณผ ์„ค๋ช…์„ ์ถœ๋ ฅํ•ด์„œ ๋น ๋ฅด๊ฒŒ ํ™•์ธํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ```py from transformers import load_tool document_qa = load_tool("document-question-answering") print(f"- {document_qa.name}: {document_qa.description}") ``` ๊ทธ๋Ÿฌ๋ฉด ๋‹ค์Œ ๊ฒฐ๊ณผ๊ฐ€ ์ถœ๋ ฅ๋ฉ๋‹ˆ๋‹ค: ```text - document_qa: This is a tool that answers a question about a document (pdf). It takes an input named `document` which should be the document containing the information, as well as a `question` that is the question about the document. It returns a text that contains the answer to the question. ``` ์—ฌ๊ธฐ์„œ ๋„๊ตฌ ์ด๋ฆ„์ด ์งง๊ณ  ์ •ํ™•ํ•˜๋‹ค๋Š” ๊ฒƒ์„ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์„ค๋ช…์€ ๋‘ ๋ถ€๋ถ„์œผ๋กœ ๊ตฌ์„ฑ๋˜์–ด ์žˆ๋Š”๋ฐ, ์ฒซ ๋ฒˆ์งธ ๋ถ€๋ถ„์—์„œ๋Š” ๋„๊ตฌ์˜ ๊ธฐ๋Šฅ์„ ์„ค๋ช…ํ•˜๊ณ  ๋‘ ๋ฒˆ์งธ ๋ถ€๋ถ„์—์„œ๋Š” ์˜ˆ์ƒ๋˜๋Š” ์ž…๋ ฅ ์ธ์ˆ˜์™€ ๋ฐ˜ํ™˜ ๊ฐ’์„ ๋ช…์‹œํ•ฉ๋‹ˆ๋‹ค. ์—์ด์ „ํŠธ๊ฐ€ ๋„๊ตฌ๋ฅผ ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ์‚ฌ์šฉํ•˜๋ ค๋ฉด ์ข‹์€ ๋„๊ตฌ ์ด๋ฆ„๊ณผ ๋„๊ตฌ ์„ค๋ช…์ด ๋งค์šฐ ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ์—์ด์ „ํŠธ๊ฐ€ ๋„๊ตฌ์— ๋Œ€ํ•ด ์•Œ ์ˆ˜ ์žˆ๋Š” ์œ ์ผํ•œ ์ •๋ณด๋Š” ์ด๋ฆ„๊ณผ ์„ค๋ช…๋ฟ์ด๋ฏ€๋กœ, ์ด ๋‘ ๊ฐ€์ง€๋ฅผ ์ •ํ™•ํ•˜๊ฒŒ ์ž‘์„ฑํ•˜๊ณ  ๋„๊ตฌ ์ƒ์ž์— ์žˆ๋Š” ๊ธฐ์กด ๋„๊ตฌ์˜ ์Šคํƒ€์ผ๊ณผ ์ผ์น˜ํ•˜๋Š”์ง€ ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ํŠนํžˆ ์ด๋ฆ„์— ๋”ฐ๋ผ ์˜ˆ์ƒ๋˜๋Š” ๋ชจ๋“  ์ธ์ˆ˜๊ฐ€ ์„ค๋ช…์— ์ฝ”๋“œ ์Šคํƒ€์ผ๋กœ ์–ธ๊ธ‰๋˜์–ด ์žˆ๋Š”์ง€, ์˜ˆ์ƒ๋˜๋Š” ์œ ํ˜•๊ณผ ๊ทธ ์œ ํ˜•์ด ๋ฌด์—‡์ธ์ง€์— ๋Œ€ํ•œ ์„ค๋ช…์ด ํฌํ•จ๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”. 
<Tip> ๋„๊ตฌ์— ์–ด๋–ค ์ด๋ฆ„๊ณผ ์„ค๋ช…์ด ์žˆ์–ด์•ผ ํ•˜๋Š”์ง€ ์ดํ•ดํ•˜๋ ค๋ฉด ์—„์„ ๋œ Transformers ๋„๊ตฌ์˜ ์ด๋ฆ„๊ณผ ์„ค๋ช…์„ ํ™•์ธํ•˜์„ธ์š”. [`Agent.toolbox`] ์†์„ฑ์„ ๊ฐ€์ง„ ๋ชจ๋“  ๋„๊ตฌ๋ฅผ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. </Tip> ์„ธ ๋ฒˆ์งธ ๋ถ€๋ถ„์—๋Š” ์—์ด์ „ํŠธ๊ฐ€ ์–ด๋–ค ์ข…๋ฅ˜์˜ ์‚ฌ์šฉ์ž ์š”์ฒญ์— ๋Œ€ํ•ด ์–ด๋–ค ์ฝ”๋“œ๋ฅผ ์ƒ์„ฑํ•ด์•ผ ํ•˜๋Š”์ง€ ์ •ํ™•ํ•˜๊ฒŒ ๋ณด์—ฌ์ฃผ๋Š” ์—„์„ ๋œ ์˜ˆ์ œ ์„ธํŠธ๊ฐ€ ํฌํ•จ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ์—์ด์ „ํŠธ๋ฅผ ์ง€์›ํ•˜๋Š” ๋Œ€๊ทœ๋ชจ ์–ธ์–ด ๋ชจ๋ธ์€ ํ”„๋กฌํ”„ํŠธ์—์„œ ํŒจํ„ด์„ ์ธ์‹ํ•˜๊ณ  ์ƒˆ๋กœ์šด ๋ฐ์ดํ„ฐ๋กœ ํŒจํ„ด์„ ๋ฐ˜๋ณตํ•˜๋Š” ๋ฐ ๋งค์šฐ ๋Šฅ์ˆ™ํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์—์ด์ „ํŠธ๊ฐ€ ์‹ค์ œ๋กœ ์˜ฌ๋ฐ”๋ฅธ ์‹คํ–‰ ๊ฐ€๋Šฅํ•œ ์ฝ”๋“œ๋ฅผ ์ƒ์„ฑํ•  ๊ฐ€๋Šฅ์„ฑ์„ ๊ทน๋Œ€ํ™”ํ•˜๋Š” ๋ฐฉ์‹์œผ๋กœ ์˜ˆ์ œ๋ฅผ ์ž‘์„ฑํ•˜๋Š” ๊ฒƒ์ด ๋งค์šฐ ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ํ•œ ๊ฐ€์ง€ ์˜ˆ๋ฅผ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ````text Task: "Identify the oldest person in the `document` and create an image showcasing the result as a banner." I will use the following tools: `document_qa` to find the oldest person in the document, then `image_generator` to generate an image according to the answer. Answer: ```py answer = document_qa(document, question="What is the oldest person?") print(f"The answer is {answer}.") image = image_generator("A banner showing " + answer) ``` ```` ์ž‘์—… ์„ค๋ช…, ์—์ด์ „ํŠธ๊ฐ€ ์ˆ˜ํ–‰ํ•˜๋ ค๋Š” ์ž‘์—…์— ๋Œ€ํ•œ ์„ค๋ช…, ๋งˆ์ง€๋ง‰์œผ๋กœ ์ƒ์„ฑ๋œ ์ฝ”๋“œ, ์ด ์„ธ ๋ถ€๋ถ„์œผ๋กœ ๊ตฌ์„ฑ๋œ ํ”„๋กฌํ”„ํŠธ๋Š” ๋ชจ๋ธ์— ๋ฐ˜๋ณตํ•˜์—ฌ ์ œ๊ณต๋ฉ๋‹ˆ๋‹ค. ํ”„๋กฌํ”„ํŠธ์˜ ์ผ๋ถ€์ธ ๋ชจ๋“  ์˜ˆ์ œ๋Š” ์ด๋Ÿฌํ•œ ์ •ํ™•ํ•œ ํŒจํ„ด์œผ๋กœ ๋˜์–ด ์žˆ์œผ๋ฏ€๋กœ, ์—์ด์ „ํŠธ๊ฐ€ ์ƒˆ ํ† ํฐ์„ ์ƒ์„ฑํ•  ๋•Œ ์ •ํ™•ํžˆ ๋™์ผํ•œ ํŒจํ„ด์„ ์žฌํ˜„ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ”„๋กฌํ”„ํŠธ ์˜ˆ์ œ๋Š” Transformers ํŒ€์ด ์„ ๋ณ„ํ•˜๊ณ  ์ผ๋ จ์˜ [problem statements](https://github.com/huggingface/transformers/blob/main/src/transformers/tools/evaluate_agent.py)์— ๋”ฐ๋ผ ์—„๊ฒฉํ•˜๊ฒŒ ํ‰๊ฐ€ํ•˜์—ฌ ์—์ด์ „ํŠธ์˜ ํ”„๋กฌํ”„ํŠธ๊ฐ€ ์—์ด์ „ํŠธ์˜ ์‹ค์ œ ์‚ฌ์šฉ ์‚ฌ๋ก€๋ฅผ ์ตœ๋Œ€ํ•œ ์ž˜ ํ•ด๊ฒฐํ•  ์ˆ˜ ์žˆ๋„๋ก ๋ณด์žฅํ•ฉ๋‹ˆ๋‹ค. ํ”„๋กฌํ”„ํŠธ์˜ ๋งˆ์ง€๋ง‰ ๋ถ€๋ถ„์€ ๋‹ค์Œ์— ํ•ด๋‹นํ•ฉ๋‹ˆ๋‹ค: ```text Task: "Draw me a picture of rivers and lakes" I will use the following ``` ์ด๋Š” ์—์ด์ „ํŠธ๊ฐ€ ์™„๋ฃŒํ•ด์•ผ ํ•  ์ตœ์ข…์ ์ธ ๋ฏธ์™„์„ฑ ์˜ˆ์ œ์ž…๋‹ˆ๋‹ค. ๋ฏธ์™„์„ฑ ์˜ˆ์ œ๋Š” ์‹ค์ œ ์‚ฌ์šฉ์ž ์ž…๋ ฅ์— ๋”ฐ๋ผ ๋™์ ์œผ๋กœ ๋งŒ๋“ค์–ด์ง‘๋‹ˆ๋‹ค. ์œ„ ์˜ˆ์‹œ์˜ ๊ฒฝ์šฐ ์‚ฌ์šฉ์ž๊ฐ€ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์‹คํ–‰ํ–ˆ์Šต๋‹ˆ๋‹ค: ```py agent.run("Draw me a picture of rivers and lakes") ``` ์‚ฌ์šฉ์ž ์ž…๋ ฅ - *์ฆ‰* Task: *"Draw me a picture of rivers and lakes"*๊ฐ€ ํ”„๋กฌํ”„ํŠธ ํ…œํ”Œ๋ฆฟ์— ๋งž์ถฐ "Task: <task> \n\n I will use the following"๋กœ ์บ์ŠคํŒ…๋ฉ๋‹ˆ๋‹ค. ์ด ๋ฌธ์žฅ์€ ์—์ด์ „ํŠธ์—๊ฒŒ ์กฐ๊ฑด์ด ์ ์šฉ๋˜๋Š” ํ”„๋กฌํ”„ํŠธ์˜ ๋งˆ์ง€๋ง‰ ์ค„์„ ๊ตฌ์„ฑํ•˜๋ฏ€๋กœ ์—์ด์ „ํŠธ๊ฐ€ ์ด์ „ ์˜ˆ์ œ์—์„œ ์ˆ˜ํ–‰ํ•œ ๊ฒƒ๊ณผ ์ •ํ™•ํžˆ ๋™์ผํ•œ ๋ฐฉ์‹์œผ๋กœ ์˜ˆ์ œ๋ฅผ ์™„๋ฃŒํ•˜๋„๋ก ๊ฐ•๋ ฅํ•˜๊ฒŒ ์˜ํ–ฅ์„ ๋ฏธ์นฉ๋‹ˆ๋‹ค. ๋„ˆ๋ฌด ์ž์„ธํžˆ ์„ค๋ช…ํ•˜์ง€ ์•Š๋”๋ผ๋„ ์ฑ„ํŒ… ํ…œํ”Œ๋ฆฟ์˜ ํ”„๋กฌํ”„ํŠธ ๊ตฌ์กฐ๋Š” ๋™์ผํ•˜์ง€๋งŒ ์˜ˆ์ œ์˜ ์Šคํƒ€์ผ์ด ์•ฝ๊ฐ„ ๋‹ค๋ฆ…๋‹ˆ๋‹ค. *์˜ˆ๋ฅผ ๋“ค๋ฉด*: ````text [...] ===== Human: Answer the question in the variable `question` about the image stored in the variable `image`. Assistant: I will use the tool `image_qa` to answer the question on the input image. ```py answer = image_qa(text=question, image=image) print(f"The answer is {answer}") ``` Human: I tried this code, it worked but didn't give me a good result. The question is in French Assistant: In this case, the question needs to be translated first. 
I will use the tool `translator` to do this. ```py translated_question = translator(question=question, src_lang="French", tgt_lang="English") print(f"The translated question is {translated_question}.") answer = image_qa(text=translated_question, image=image) print(f"The answer is {answer}") ``` ===== [...] ```` `run` ํ”„๋กฌํ”„ํŠธ์˜ ์˜ˆ์™€๋Š” ๋ฐ˜๋Œ€๋กœ, ๊ฐ `chat` ํ”„๋กฌํ”„ํŠธ์˜ ์˜ˆ์—๋Š” *Human(์‚ฌ๋žŒ)*๊ณผ *Assistant(์–ด์‹œ์Šคํ„ดํŠธ)* ๊ฐ„์— ํ•˜๋‚˜ ์ด์ƒ์˜ ๊ตํ™˜์ด ์žˆ์Šต๋‹ˆ๋‹ค. ๋ชจ๋“  ๊ตํ™˜์€ `run` ํ”„๋กฌํ”„ํŠธ์˜ ์˜ˆ์™€ ์œ ์‚ฌํ•œ ๊ตฌ์กฐ๋กœ ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ์‚ฌ์šฉ์ž์˜ ์ž…๋ ฅ์ด *Human:* ๋’ค์— ์ถ”๊ฐ€๋˜๋ฉฐ, ์—์ด์ „ํŠธ์—๊ฒŒ ์ฝ”๋“œ๋ฅผ ์ƒ์„ฑํ•˜๊ธฐ ์ „์— ์ˆ˜ํ–‰ํ•ด์•ผ ํ•  ์ž‘์—…์„ ๋จผ์ € ์ƒ์„ฑํ•˜๋ผ๋Š” ๋ฉ”์‹œ์ง€๊ฐ€ ํ‘œ์‹œ๋ฉ๋‹ˆ๋‹ค. ๊ตํ™˜์€ ์ด์ „ ๊ตํ™˜์„ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•  ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ ์œ„์™€ ๊ฐ™์ด ์‚ฌ์šฉ์ž๊ฐ€ "**์ด** ์ฝ”๋“œ๋ฅผ ์‹œ๋„ํ–ˆ์Šต๋‹ˆ๋‹ค"๋ผ๊ณ  ์ž…๋ ฅํ•˜๋ฉด ์ด์ „์— ์ƒ์„ฑ๋œ ์—์ด์ „ํŠธ์˜ ์ฝ”๋“œ๋ฅผ ์ฐธ์กฐํ•˜์—ฌ ๊ณผ๊ฑฐ ๊ตํ™˜์„ ์ฐธ์กฐํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `.chat`์„ ์‹คํ–‰ํ•˜๋ฉด ์‚ฌ์šฉ์ž์˜ ์ž…๋ ฅ ๋˜๋Š” *์ž‘์—…*์ด ๋ฏธ์™„์„ฑ๋œ ์–‘์‹์˜ ์˜ˆ์‹œ๋กœ ์บ์ŠคํŒ…๋ฉ๋‹ˆ๋‹ค: ```text Human: <user-input>\n\nAssistant: ``` ๊ทธ๋Ÿฌ๋ฉด ์—์ด์ „ํŠธ๊ฐ€ ์ด๋ฅผ ์™„์„ฑํ•ฉ๋‹ˆ๋‹ค. `run` ๋ช…๋ น๊ณผ ๋‹ฌ๋ฆฌ `chat` ๋ช…๋ น์€ ์™„๋ฃŒ๋œ ์˜ˆ์ œ๋ฅผ ํ”„๋กฌํ”„ํŠธ์— ์ถ”๊ฐ€ํ•˜์—ฌ ์—์ด์ „ํŠธ์—๊ฒŒ ๋‹ค์Œ `chat` ์ฐจ๋ก€์— ๋Œ€ํ•œ ๋” ๋งŽ์€ ๋ฌธ๋งฅ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์ด์ œ ํ”„๋กฌํ”„ํŠธ๊ฐ€ ์–ด๋–ป๊ฒŒ ๊ตฌ์„ฑ๋˜์–ด ์žˆ๋Š”์ง€ ์•Œ์•˜์œผ๋‹ˆ ์–ด๋–ป๊ฒŒ ์‚ฌ์šฉ์ž ์ •์˜ํ•  ์ˆ˜ ์žˆ๋Š”์ง€ ์‚ดํŽด๋ด…์‹œ๋‹ค! ### ์ข‹์€ ์‚ฌ์šฉ์ž ์ž…๋ ฅ ์ž‘์„ฑํ•˜๊ธฐ[[writing-good-user-inputs]] ๋Œ€๊ทœ๋ชจ ์–ธ์–ด ๋ชจ๋ธ์ด ์‚ฌ์šฉ์ž์˜ ์˜๋„๋ฅผ ์ดํ•ดํ•˜๋Š” ๋Šฅ๋ ฅ์ด ์ ์  ๋” ํ–ฅ์ƒ๋˜๊ณ  ์žˆ์ง€๋งŒ, ์—์ด์ „ํŠธ๊ฐ€ ์˜ฌ๋ฐ”๋ฅธ ์ž‘์—…์„ ์„ ํƒํ•  ์ˆ˜ ์žˆ๋„๋ก ์ตœ๋Œ€ํ•œ ์ •ํ™•์„ฑ์„ ์œ ์ง€ํ•˜๋Š” ๊ฒƒ์€ ํฐ ๋„์›€์ด ๋ฉ๋‹ˆ๋‹ค. ์ตœ๋Œ€ํ•œ ์ •ํ™•ํ•˜๋‹ค๋Š” ๊ฒƒ์€ ๋ฌด์—‡์„ ์˜๋ฏธํ• ๊นŒ์š”? ์—์ด์ „ํŠธ๋Š” ํ”„๋กฌํ”„ํŠธ์—์„œ ๋„๊ตฌ ์ด๋ฆ„ ๋ชฉ๋ก๊ณผ ํ•ด๋‹น ์„ค๋ช…์„ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋” ๋งŽ์€ ๋„๊ตฌ๊ฐ€ ์ถ”๊ฐ€๋ ์ˆ˜๋ก ์—์ด์ „ํŠธ๊ฐ€ ์˜ฌ๋ฐ”๋ฅธ ๋„๊ตฌ๋ฅผ ์„ ํƒํ•˜๊ธฐ๊ฐ€ ๋” ์–ด๋ ค์›Œ์ง€๊ณ  ์‹คํ–‰ํ•  ๋„๊ตฌ์˜ ์˜ฌ๋ฐ”๋ฅธ ์ˆœ์„œ๋ฅผ ์„ ํƒํ•˜๋Š” ๊ฒƒ์€ ๋”์šฑ ์–ด๋ ค์›Œ์ง‘๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์ธ ์‹คํŒจ ์‚ฌ๋ก€๋ฅผ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ๋Š” ๋ถ„์„ํ•  ์ฝ”๋“œ๋งŒ ๋ฐ˜ํ™˜ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ```py from transformers import HfAgent agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder") agent.run("Show me a tree", return_code=True) ``` ๊ทธ๋Ÿฌ๋ฉด ๋‹ค์Œ ๊ฒฐ๊ณผ๊ฐ€ ์ถœ๋ ฅ๋ฉ๋‹ˆ๋‹ค: ```text ==Explanation from the agent== I will use the following tool: `image_segmenter` to create a segmentation mask for the image. ==Code generated by the agent== mask = image_segmenter(image, prompt="tree") ``` ์šฐ๋ฆฌ๊ฐ€ ์›ํ–ˆ๋˜ ๊ฒฐ๊ณผ๊ฐ€ ์•„๋‹ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๋Œ€์‹  ๋‚˜๋ฌด ์ด๋ฏธ์ง€๊ฐ€ ์ƒ์„ฑ๋˜๊ธฐ๋ฅผ ์›ํ•  ๊ฐ€๋Šฅ์„ฑ์ด ๋” ๋†’์Šต๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์—์ด์ „ํŠธ๊ฐ€ ํŠน์ • ๋„๊ตฌ๋ฅผ ์‚ฌ์šฉํ•˜๋„๋ก ์œ ๋„ํ•˜๋ ค๋ฉด ๋„๊ตฌ์˜ ์ด๋ฆ„๊ณผ ์„ค๋ช…์— ์žˆ๋Š” ์ค‘์š”ํ•œ ํ‚ค์›Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ๋งค์šฐ ์œ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•œ๋ฒˆ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ```py agent.toolbox["image_generator"].description ``` ```text 'This is a tool that creates an image according to a prompt, which is a text description. It takes an input named `prompt` which contains the image description and outputs an image. ``` ์ด๋ฆ„๊ณผ ์„ค๋ช…์€ "image", "prompt", "create" ๋ฐ "generate" ํ‚ค์›Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด ๋‹จ์–ด๋“ค์„ ์‚ฌ์šฉํ•˜๋ฉด ๋” ์ž˜ ์ž‘๋™ํ•  ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์Šต๋‹ˆ๋‹ค. 
ํ”„๋กฌํ”„ํŠธ๋ฅผ ์กฐ๊ธˆ ๋” ๊ตฌ์ฒดํ™”ํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ```py agent.run("Create an image of a tree", return_code=True) ``` ์ด ์ฝ”๋“œ๋Š” ๋‹ค์Œ ํ”„๋กฌํ”„ํŠธ๋ฅผ ๋งŒ๋“ค์–ด๋ƒ…๋‹ˆ๋‹ค: ```text ==Explanation from the agent== I will use the following tool `image_generator` to generate an image of a tree. ==Code generated by the agent== image = image_generator(prompt="tree") ``` ํ›จ์”ฌ ๋‚ซ๋„ค์š”! ์ €ํฌ๊ฐ€ ์›ํ–ˆ๋˜ ๊ฒƒ๊ณผ ๋น„์Šทํ•ด ๋ณด์ž…๋‹ˆ๋‹ค. ์ฆ‰, ์—์ด์ „ํŠธ๊ฐ€ ์ž‘์—…์„ ์˜ฌ๋ฐ”๋ฅธ ๋„๊ตฌ์— ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ๋งคํ•‘ํ•˜๋Š” ๋ฐ ์–ด๋ ค์›€์„ ๊ฒช๊ณ  ์žˆ๋‹ค๋ฉด ๋„๊ตฌ ์ด๋ฆ„๊ณผ ์„ค๋ช…์—์„œ ๊ฐ€์žฅ ๊ด€๋ จ์„ฑ์ด ๋†’์€ ํ‚ค์›Œ๋“œ๋ฅผ ์ฐพ์•„๋ณด๊ณ  ์ด๋ฅผ ํ†ตํ•ด ์ž‘์—… ์š”์ฒญ์„ ๊ตฌ์ฒดํ™”ํ•ด ๋ณด์„ธ์š”. ### ๋„๊ตฌ ์„ค๋ช… ์‚ฌ์šฉ์ž ์ •์˜ํ•˜๊ธฐ[[customizing-the-tool-descriptions]] ์•ž์„œ ์‚ดํŽด๋ณธ ๊ฒƒ์ฒ˜๋Ÿผ ์—์ด์ „ํŠธ๋Š” ๊ฐ ๋„๊ตฌ์˜ ์ด๋ฆ„๊ณผ ์„ค๋ช…์— ์•ก์„ธ์Šคํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ธฐ๋ณธ ๋„๊ตฌ์—๋Š” ๋งค์šฐ ์ •ํ™•ํ•œ ์ด๋ฆ„๊ณผ ์„ค๋ช…์ด ์žˆ์–ด์•ผ ํ•˜์ง€๋งŒ ํŠน์ • ์‚ฌ์šฉ ์‚ฌ๋ก€์— ๋งž๊ฒŒ ๋„๊ตฌ์˜ ์„ค๋ช…์ด๋‚˜ ์ด๋ฆ„์„ ๋ณ€๊ฒฝํ•˜๋Š” ๊ฒƒ์ด ๋„์›€์ด ๋  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Š” ๋งค์šฐ ์œ ์‚ฌํ•œ ์—ฌ๋Ÿฌ ๋„๊ตฌ๋ฅผ ์ถ”๊ฐ€ํ–ˆ๊ฑฐ๋‚˜ ํŠน์ • ๋„๋ฉ”์ธ(*์˜ˆ*: ์ด๋ฏธ์ง€ ์ƒ์„ฑ ๋ฐ ๋ณ€ํ™˜)์—๋งŒ ์—์ด์ „ํŠธ๋ฅผ ์‚ฌ์šฉํ•˜๋ ค๋Š” ๊ฒฝ์šฐ์— ํŠนํžˆ ์ค‘์š”ํ•ด์งˆ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์ธ ๋ฌธ์ œ๋Š” ์ด๋ฏธ์ง€ ์ƒ์„ฑ ์ž‘์—…์— ๋งŽ์ด ์‚ฌ์šฉ๋˜๋Š” ๊ฒฝ์šฐ ์—์ด์ „ํŠธ๊ฐ€ ์ด๋ฏธ์ง€ ์ƒ์„ฑ๊ณผ ์ด๋ฏธ์ง€ ๋ณ€ํ™˜/์ˆ˜์ •์„ ํ˜ผ๋™ํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. *์˜ˆ๋ฅผ ๋“ค์–ด,* ```py agent.run("Make an image of a house and a car", return_code=True) ``` ๊ทธ๋Ÿฌ๋ฉด ๋‹ค์Œ ๊ฒฐ๊ณผ๊ฐ€ ์ถœ๋ ฅ๋ฉ๋‹ˆ๋‹ค: ```text ==Explanation from the agent== I will use the following tools `image_generator` to generate an image of a house and `image_transformer` to transform the image of a car into the image of a house. ==Code generated by the agent== house_image = image_generator(prompt="A house") car_image = image_generator(prompt="A car") house_car_image = image_transformer(image=car_image, prompt="A house") ``` ๊ฒฐ๊ณผ๋ฌผ์ด ์šฐ๋ฆฌ๊ฐ€ ์—ฌ๊ธฐ์„œ ์›ํ•˜๋Š” ๊ฒƒ๊ณผ ์ •ํ™•ํžˆ ์ผ์น˜ํ•˜์ง€ ์•Š์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์—์ด์ „ํŠธ๊ฐ€ `image_generator`์™€ `image_transformer`์˜ ์ฐจ์ด์ ์„ ์ดํ•ดํ•˜๊ธฐ ์–ด๋ ค์›Œ์„œ ๋‘ ๊ฐ€์ง€๋ฅผ ํ•จ๊ป˜ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์€ ๊ฒƒ ๊ฐ™์Šต๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ `image_transformer`์˜ ๋„๊ตฌ ์ด๋ฆ„๊ณผ ์„ค๋ช…์„ ๋ณ€๊ฒฝํ•˜์—ฌ ์—์ด์ „ํŠธ๊ฐ€ ๋„์šธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. "image" ๋ฐ "prompt"์™€ ์•ฝ๊ฐ„ ๋ถ„๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด `modifier`๋ผ๊ณ  ๋Œ€์‹  ๋ถ€๋ฅด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py agent.toolbox["modifier"] = agent.toolbox.pop("image_transformer") agent.toolbox["modifier"].description = agent.toolbox["modifier"].description.replace( "transforms an image according to a prompt", "modifies an image" ) ``` ์ด์ œ "modify"์€ ์ƒˆ ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ์‚ฌ์šฉํ•˜๋ผ๋Š” ๊ฐ•๋ ฅํ•œ ์‹ ํ˜ธ์ด๋ฏ€๋กœ ์œ„์˜ ํ”„๋กฌํ”„ํŠธ์— ๋„์›€์ด ๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋‹ค์‹œ ์‹คํ–‰ํ•ด ๋ด…์‹œ๋‹ค. ```py agent.run("Make an image of a house and a car", return_code=True) ``` ์—ฌ๊ธฐ์„œ ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๊ฒฐ๊ณผ๋ฅผ ์–ป๊ฒŒ ๋ฉ๋‹ˆ๋‹ค: ```text ==Explanation from the agent== I will use the following tools: `image_generator` to generate an image of a house, then `image_generator` to generate an image of a car. ==Code generated by the agent== house_image = image_generator(prompt="A house") car_image = image_generator(prompt="A car") ``` ์šฐ๋ฆฌ๊ฐ€ ์—ผ๋‘์— ๋‘์—ˆ๋˜ ๊ฒƒ๊ณผ ํ™•์‹คํžˆ ๋” ๊ฐ€๊นŒ์›Œ์กŒ์Šต๋‹ˆ๋‹ค! ํ•˜์ง€๋งŒ ์ง‘๊ณผ ์ž๋™์ฐจ๊ฐ€ ๋ชจ๋‘ ๊ฐ™์€ ์ด๋ฏธ์ง€์— ํฌํ•จ๋˜๋ฉด ์ข‹๊ฒ ์Šต๋‹ˆ๋‹ค. 
์ž‘์—…์„ ๋‹จ์ผ ์ด๋ฏธ์ง€ ์ƒ์„ฑ์— ๋” ์ง‘์ค‘ํ•˜๋ฉด ๋„์›€์ด ๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค: ```py agent.run("Create image: 'A house and car'", return_code=True) ``` ```text ==Explanation from the agent== I will use the following tool: `image_generator` to generate an image. ==Code generated by the agent== image = image_generator(prompt="A house and car") ``` <Tip warning={true}> ์—์ด์ „ํŠธ๋Š” ์—ฌ์ „ํžˆ ํŠนํžˆ ์—ฌ๋Ÿฌ ๊ฐœ์ฒด์˜ ์ด๋ฏธ์ง€๋ฅผ ์ƒ์„ฑํ•˜๋Š” ๊ฒƒ๊ณผ ๊ฐ™์ด ์•ฝ๊ฐ„ ๋” ๋ณต์žกํ•œ ์‚ฌ์šฉ ์‚ฌ๋ก€์—์„œ ์ทจ์•ฝํ•œ ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์Šต๋‹ˆ๋‹ค. ์•ž์œผ๋กœ ๋ช‡ ๋‹ฌ ์•ˆ์— ์—์ด์ „ํŠธ ์ž์ฒด์™€ ๊ธฐ๋ณธ ํ”„๋กฌํ”„ํŠธ๊ฐ€ ๋”์šฑ ๊ฐœ์„ ๋˜์–ด ์—์ด์ „ํŠธ๊ฐ€ ๋‹ค์–‘ํ•œ ์‚ฌ์šฉ์ž ์ž…๋ ฅ์— ๋”์šฑ ๊ฐ•๋ ฅํ•˜๊ฒŒ ๋Œ€์‘ํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•  ์˜ˆ์ •์ž…๋‹ˆ๋‹ค. </Tip> ### ์ „์ฒด ํ”„๋กฌํ”„ํŠธ ์‚ฌ์šฉ์ž ์ •์˜ํ•˜๊ธฐ[[customizing-the-whole-prompt]] ์‚ฌ์šฉ์ž์—๊ฒŒ ์ตœ๋Œ€ํ•œ์˜ ์œ ์—ฐ์„ฑ์„ ์ œ๊ณตํ•˜๊ธฐ ์œ„ํ•ด [์œ„](#structure-of-the-prompt)์— ์„ค๋ช…๋œ ์ „์ฒด ํ”„๋กฌํ”„ํŠธ ํ…œํ”Œ๋ฆฟ์„ ์‚ฌ์šฉ์ž๊ฐ€ ๋ฎ์–ด์“ธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๊ฒฝ์šฐ ์‚ฌ์šฉ์ž ์ •์˜ ํ”„๋กฌํ”„ํŠธ์— ์†Œ๊ฐœ ์„น์…˜, ๋„๊ตฌ ์„น์…˜, ์˜ˆ์ œ ์„น์…˜ ๋ฐ ๋ฏธ์™„์„ฑ ์˜ˆ์ œ ์„น์…˜์ด ํฌํ•จ๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”. `run` ํ”„๋กฌํ”„ํŠธ ํ…œํ”Œ๋ฆฟ์„ ๋ฎ์–ด์“ฐ๋ ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์ด ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค: ```py template = """ [...] """ agent = HfAgent(your_endpoint, run_prompt_template=template) ``` <Tip warning={true}> ์—์ด์ „ํŠธ๊ฐ€ ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ ๋„๊ตฌ๋ฅผ ์ธ์‹ํ•˜๊ณ  ์‚ฌ์šฉ์ž์˜ ํ”„๋กฌํ”„ํŠธ๋ฅผ ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ์‚ฝ์ž…ํ•  ์ˆ˜ ์žˆ๋„๋ก `<<all_tools>>` ๋ฌธ์ž์—ด๊ณผ `<<prompt>>`๋ฅผ `template` ์–ด๋”˜๊ฐ€์— ์ •์˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. </Tip> ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ `chat` ํ”„๋กฌํ”„ํŠธ ํ…œํ”Œ๋ฆฟ์„ ๋ฎ์–ด์“ธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `chat` ๋ชจ๋“œ์—์„œ๋Š” ํ•ญ์ƒ ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๊ตํ™˜ ํ˜•์‹์„ ์‚ฌ์šฉํ•œ๋‹ค๋Š” ์ ์— ์œ ์˜ํ•˜์„ธ์š”: ```text Human: <<task>> Assistant: ``` ๋”ฐ๋ผ์„œ ์‚ฌ์šฉ์ž ์ •์˜ `chat` ํ”„๋กฌํ”„ํŠธ ํ…œํ”Œ๋ฆฟ์˜ ์˜ˆ์ œ์—์„œ๋„ ์ด ํ˜•์‹์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ธ์Šคํ„ด์Šคํ™” ํ•  ๋•Œ `chat` ํ…œํ”Œ๋ฆฟ์„ ๋ฎ์–ด์“ธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```python template = """ [...] """ agent = HfAgent(url_endpoint=your_endpoint, chat_prompt_template=template) ``` <Tip warning={true}> ์—์ด์ „ํŠธ๊ฐ€ ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ ๋„๊ตฌ๋ฅผ ์ธ์‹ํ•  ์ˆ˜ ์žˆ๋„๋ก `<<all_tools>>` ๋ฌธ์ž์—ด์„ `template` ์–ด๋”˜๊ฐ€์— ์ •์˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. </Tip> ๋‘ ๊ฒฝ์šฐ ๋ชจ๋‘ ์ปค๋ฎค๋‹ˆํ‹ฐ์˜ ๋ˆ„๊ตฐ๊ฐ€๊ฐ€ ํ˜ธ์ŠคํŒ…ํ•˜๋Š” ํ…œํ”Œ๋ฆฟ์„ ์‚ฌ์šฉํ•˜๋ ค๋Š” ๊ฒฝ์šฐ ํ”„๋กฌํ”„ํŠธ ํ…œํ”Œ๋ฆฟ ๋Œ€์‹  ์ €์žฅ์†Œ ID๋ฅผ ์ „๋‹ฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ธฐ๋ณธ ํ”„๋กฌํ”„ํŠธ๋Š” [์ด ์ €์žฅ์†Œ](https://huggingface.co/datasets/huggingface-tools/default-prompts)๋ฅผ ์˜ˆ๋กœ ๋“ค ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. Hub์˜ ์ €์žฅ์†Œ์— ์‚ฌ์šฉ์ž ์ •์˜ ํ”„๋กฌํ”„ํŠธ๋ฅผ ์—…๋กœ๋“œํ•˜์—ฌ ์ปค๋ฎค๋‹ˆํ‹ฐ์™€ ๊ณต์œ ํ•˜๋ ค๋ฉด ๋‹ค์Œ์„ ํ™•์ธํ•˜์„ธ์š”: - ๋ฐ์ดํ„ฐ ์„ธํŠธ ์ €์žฅ์†Œ๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. - `run` ๋ช…๋ น์— ๋Œ€ํ•œ ํ”„๋กฌํ”„ํŠธ ํ…œํ”Œ๋ฆฟ์„ `run_prompt_template.txt`๋ผ๋Š” ํŒŒ์ผ์— ๋„ฃ์œผ์„ธ์š”. - `chat` ๋ช…๋ น์— ๋Œ€ํ•œ ํ”„๋กฌํ”„ํŠธ ํ…œํ”Œ๋ฆฟ์„ `chat_prompt_template.txt`๋ผ๋Š” ํŒŒ์ผ์— ๋„ฃ์œผ์„ธ์š”. ## ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ ์‚ฌ์šฉํ•˜๊ธฐ[[using-custom-tools]] ์ด ์„น์…˜์—์„œ๋Š” ์ด๋ฏธ์ง€ ์ƒ์„ฑ์— ํŠนํ™”๋œ ๋‘ ๊ฐ€์ง€ ๊ธฐ์กด ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ๋ฅผ ํ™œ์šฉํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค: - ๋” ๋งŽ์€ ์ด๋ฏธ์ง€ ์ˆ˜์ •์„ ํ—ˆ์šฉํ•˜๊ธฐ ์œ„ํ•ด [huggingface-tools/image-transformation](https://huggingface.co/spaces/huggingface-tools/image-transformation)์„ [diffusers/controlnet-canny-tool](https://huggingface.co/spaces/diffusers/controlnet-canny-tool)๋กœ ๋Œ€์ฒดํ•ฉ๋‹ˆ๋‹ค. 
- ๊ธฐ๋ณธ ๋„๊ตฌ ์ƒ์ž์— ์ด๋ฏธ์ง€ ์—…์Šค์ผ€์ผ๋ง์„ ์œ„ํ•œ ์ƒˆ๋กœ์šด ๋„๊ตฌ๊ฐ€ ์ถ”๊ฐ€๋˜์—ˆ์Šต๋‹ˆ๋‹ค: [diffusers/latent-upscaler-tool](https://huggingface.co/spaces/diffusers/latent-upscaler-tool)๊ฐ€ ๊ธฐ์กด ์ด๋ฏธ์ง€ ๋ณ€ํ™˜ ๋„๊ตฌ๋ฅผ ๋Œ€์ฒดํ•ฉ๋‹ˆ๋‹ค. ํŽธ๋ฆฌํ•œ [`load_tool`] ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ๋ฅผ ๊ฐ€์ ธ์˜ค๋Š” ๊ฒƒ์œผ๋กœ ์‹œ์ž‘ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค: ```py from transformers import load_tool controlnet_transformer = load_tool("diffusers/controlnet-canny-tool") upscaler = load_tool("diffusers/latent-upscaler-tool") ``` ์—์ด์ „ํŠธ์—๊ฒŒ ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ๋ฅผ ์ถ”๊ฐ€ํ•˜๋ฉด ๋„๊ตฌ์˜ ์„ค๋ช…๊ณผ ์ด๋ฆ„์ด ์—์ด์ „ํŠธ์˜ ํ”„๋กฌํ”„ํŠธ์— ์ž๋™์œผ๋กœ ํฌํ•จ๋ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์—์ด์ „ํŠธ๊ฐ€ ์‚ฌ์šฉ ๋ฐฉ๋ฒ•์„ ์ดํ•ดํ•  ์ˆ˜ ์žˆ๋„๋ก ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ์˜ ์„ค๋ช…๊ณผ ์ด๋ฆ„์„ ์ž˜ ์ž‘์„ฑํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. `controlnet_transformer`์˜ ์„ค๋ช…๊ณผ ์ด๋ฆ„์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py print(f"Description: '{controlnet_transformer.description}'") print(f"Name: '{controlnet_transformer.name}'") ``` ๊ทธ๋Ÿฌ๋ฉด ๋‹ค์Œ ๊ฒฐ๊ณผ๊ฐ€ ์ถœ๋ ฅ๋ฉ๋‹ˆ๋‹ค: ```text Description: 'This is a tool that transforms an image with ControlNet according to a prompt. It takes two inputs: `image`, which should be the image to transform, and `prompt`, which should be the prompt to use to change it. It returns the modified image.' Name: 'image_transformer' ``` ์ด๋ฆ„๊ณผ ์„ค๋ช…์ด ์ •ํ™•ํ•˜๊ณ  [ํ๋ ˆ์ดํŒ… ๋œ ๋„๊ตฌ ์„ธํŠธ(curated set of tools)](./transformers_agents#a-curated-set-of-tools)์˜ ์Šคํƒ€์ผ์— ๋งž์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์œผ๋กœ, `controlnet_transformer`์™€ `upscaler`๋กœ ์—์ด์ „ํŠธ๋ฅผ ์ธ์Šคํ„ด์Šคํ™”ํ•ด ๋ด…์‹œ๋‹ค: ```py tools = [controlnet_transformer, upscaler] agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder", additional_tools=tools) ``` ์ด ๋ช…๋ น์„ ์‹คํ–‰ํ•˜๋ฉด ๋‹ค์Œ ์ •๋ณด๊ฐ€ ํ‘œ์‹œ๋ฉ๋‹ˆ๋‹ค: ```text image_transformer has been replaced by <transformers_modules.diffusers.controlnet-canny-tool.bd76182c7777eba9612fc03c0 8718a60c0aa6312.image_transformation.ControlNetTransformationTool object at 0x7f1d3bfa3a00> as provided in `additional_tools` ``` ํ๋ ˆ์ดํŒ…๋œ ๋„๊ตฌ ์„ธํŠธ์—๋Š” ์ด๋ฏธ 'image_transformer' ๋„๊ตฌ๊ฐ€ ์žˆ์œผ๋ฉฐ, ์ด ๋„๊ตฌ๋Š” ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ๋กœ ๋Œ€์ฒด๋ฉ๋‹ˆ๋‹ค. <Tip> ๊ธฐ์กด ๋„๊ตฌ์™€ ๋˜‘๊ฐ™์€ ์ž‘์—…์— ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ๋ฅผ ์‚ฌ์šฉํ•˜๋ ค๋Š” ๊ฒฝ์šฐ ๊ธฐ์กด ๋„๊ตฌ๋ฅผ ๋ฎ์–ด์“ฐ๋Š” ๊ฒƒ์ด ์œ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์—์ด์ „ํŠธ๊ฐ€ ํ•ด๋‹น ์ž‘์—…์— ๋Šฅ์ˆ™ํ•˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ์ด ๊ฒฝ์šฐ ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ๊ฐ€ ๋ฎ์–ด์“ด ๋„๊ตฌ์™€ ์ •ํ™•ํžˆ ๋™์ผํ•œ API๋ฅผ ๋”ฐ๋ผ์•ผ ํ•˜๋ฉฐ, ๊ทธ๋ ‡์ง€ ์•Š์œผ๋ฉด ํ•ด๋‹น ๋„๊ตฌ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๋ชจ๋“  ์˜ˆ์ œ๊ฐ€ ์—…๋ฐ์ดํŠธ๋˜๋„๋ก ํ”„๋กฌํ”„ํŠธ ํ…œํ”Œ๋ฆฟ์„ ์กฐ์ •ํ•ด์•ผ ํ•œ๋‹ค๋Š” ์ ์— ์œ ์˜ํ•˜์„ธ์š”. </Tip> ์—…์Šค์ผ€์ผ๋Ÿฌ ๋„๊ตฌ์— ์ง€์ •๋œ 'image_upscaler'๋ผ๋Š” ์ด๋ฆ„ ์•„์ง ๊ธฐ๋ณธ ๋„๊ตฌ ์ƒ์ž์—๋Š” ์กด์žฌํ•˜์ง€ ์•Š๊ธฐ ๋•Œ๋ฌธ์—, ๋„๊ตฌ ๋ชฉ๋ก์— ํ•ด๋‹น ์ด๋ฆ„์ด ๊ฐ„๋‹จํžˆ ์ถ”๊ฐ€๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์—์ด์ „ํŠธ๊ฐ€ ํ˜„์žฌ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ๋„๊ตฌ ์ƒ์ž๋Š” ์–ธ์ œ๋“ ์ง€ `agent.toolbox` ์†์„ฑ์„ ํ†ตํ•ด ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py print("\n".join([f"- {a}" for a in agent.toolbox.keys()])) ``` ```text - document_qa - image_captioner - image_qa - image_segmenter - transcriber - summarizer - text_classifier - text_qa - text_reader - translator - image_transformer - text_downloader - image_generator - video_generator - image_upscaler ``` ์—์ด์ „ํŠธ์˜ ๋„๊ตฌ ์ƒ์ž์— `image_upscaler`๊ฐ€ ์ถ”๊ฐ€๋œ ์ ์„ ์ฃผ๋ชฉํ•˜์„ธ์š”. ์ด์ œ ์ƒˆ๋กœ์šด ๋„๊ตฌ๋ฅผ ์‚ฌ์šฉํ•ด๋ด…์‹œ๋‹ค! 
[Transformers Agents Quickstart](./transformers_agents#single-execution-run)์—์„œ ์ƒ์„ฑํ•œ ์ด๋ฏธ์ง€๋ฅผ ๋‹ค์‹œ ์‚ฌ์šฉํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ```py from diffusers.utils import load_image image = load_image( "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes.png" ) ``` <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes.png" width=200> ์ด๋ฏธ์ง€๋ฅผ ์•„๋ฆ„๋‹ค์šด ๊ฒจ์šธ ํ’๊ฒฝ์œผ๋กœ ๋ฐ”๊ฟ” ๋ด…์‹œ๋‹ค: ```py image = agent.run("Transform the image: 'A frozen lake and snowy forest'", image=image) ``` ```text ==Explanation from the agent== I will use the following tool: `image_transformer` to transform the image. ==Code generated by the agent== image = image_transformer(image, prompt="A frozen lake and snowy forest") ``` <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes_winter.png" width=200> ์ƒˆ๋กœ์šด ์ด๋ฏธ์ง€ ์ฒ˜๋ฆฌ ๋„๊ตฌ๋Š” ์ด๋ฏธ์ง€๋ฅผ ๋งค์šฐ ๊ฐ•๋ ฅํ•˜๊ฒŒ ์ˆ˜์ •ํ•  ์ˆ˜ ์žˆ๋Š” ControlNet์„ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•ฉ๋‹ˆ๋‹ค. ๊ธฐ๋ณธ์ ์œผ๋กœ ์ด๋ฏธ์ง€ ์ฒ˜๋ฆฌ ๋„๊ตฌ๋Š” 512x512 ํ”ฝ์…€ ํฌ๊ธฐ์˜ ์ด๋ฏธ์ง€๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฅผ ์—…์Šค์ผ€์ผ๋งํ•  ์ˆ˜ ์žˆ๋Š”์ง€ ์‚ดํŽด๋ด…์‹œ๋‹ค. ```py image = agent.run("Upscale the image", image) ``` ```text ==Explanation from the agent== I will use the following tool: `image_upscaler` to upscale the image. ==Code generated by the agent== upscaled_image = image_upscaler(image) ``` <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes_winter_upscale.png" width=400> ์—์ด์ „ํŠธ๋Š” ์—…์Šค์ผ€์ผ๋Ÿฌ ๋„๊ตฌ์˜ ์„ค๋ช…๊ณผ ์ด๋ฆ„๋งŒ ๋ณด๊ณ  ๋ฐฉ๊ธˆ ์ถ”๊ฐ€ํ•œ ์—…์Šค์ผ€์ผ๋Ÿฌ ๋„๊ตฌ์— "์ด๋ฏธ์ง€ ์—…์Šค์ผ€์ผ๋ง"์ด๋ผ๋Š” ํ”„๋กฌํ”„ํŠธ๋ฅผ ์ž๋™์œผ๋กœ ๋งคํ•‘ํ•˜์—ฌ ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ์‹คํ–‰ํ–ˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์œผ๋กœ ์ƒˆ ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ๋ฅผ ๋งŒ๋“œ๋Š” ๋ฐฉ๋ฒ•์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ### ์ƒˆ ๋„๊ตฌ ์ถ”๊ฐ€ํ•˜๊ธฐ[[adding-new-tools]] ์ด ์„น์…˜์—์„œ๋Š” ์—์ด์ „ํŠธ์—๊ฒŒ ์ถ”๊ฐ€ํ•  ์ˆ˜ ์žˆ๋Š” ์ƒˆ ๋„๊ตฌ๋ฅผ ๋งŒ๋“œ๋Š” ๋ฐฉ๋ฒ•์„ ๋ณด์—ฌ ๋“œ๋ฆฝ๋‹ˆ๋‹ค. #### ์ƒˆ ๋„๊ตฌ ๋งŒ๋“ค๊ธฐ[[creating-a-new-tool]] ๋จผ์ € ๋„๊ตฌ๋ฅผ ๋งŒ๋“œ๋Š” ๊ฒƒ๋ถ€ํ„ฐ ์‹œ์ž‘ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ํŠน์ • ์ž‘์—…์— ๋Œ€ํ•ด ๊ฐ€์žฅ ๋งŽ์€ ๋‹ค์šด๋กœ๋“œ๋ฅผ ๋ฐ›์€ Hugging Face Hub์˜ ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ค๋Š”, ๊ทธ๋‹ค์ง€ ์œ ์šฉํ•˜์ง€๋Š” ์•Š์ง€๋งŒ ์žฌ๋ฏธ์žˆ๋Š” ์ž‘์—…์„ ์ถ”๊ฐ€ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ ์ฝ”๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค: ```python from huggingface_hub import list_models task = "text-classification" model = next(iter(list_models(filter=task, sort="downloads", direction=-1))) print(model.id) ``` `text-classification`(ํ…์ŠคํŠธ ๋ถ„๋ฅ˜) ์ž‘์—…์˜ ๊ฒฝ์šฐ `'facebook/bart-large-mnli'`๋ฅผ ๋ฐ˜ํ™˜ํ•˜๊ณ , `translation`(๋ฒˆ์—ญ) ์ž‘์—…์˜ ๊ฒฝ์šฐ `'google-t5/t5-base'`๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฅผ ์—์ด์ „ํŠธ๊ฐ€ ํ™œ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ๋„๊ตฌ๋กœ ๋ณ€ํ™˜ํ•˜๋ ค๋ฉด ์–ด๋–ป๊ฒŒ ํ•ด์•ผ ํ• ๊นŒ์š”? ๋ชจ๋“  ๋„๊ตฌ๋Š” ํ•„์š”ํ•œ ์ฃผ์š” ์†์„ฑ์„ ๋ณด์œ ํ•˜๋Š” ์Šˆํผํด๋ž˜์Šค `Tool`์— ์˜์กดํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฅผ ์ƒ์†ํ•˜๋Š” ํด๋ž˜์Šค๋ฅผ ๋งŒ๋“ค์–ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```python from transformers import Tool class HFModelDownloadsTool(Tool): pass ``` ์ด ํด๋ž˜์Šค์—๋Š” ๋ช‡ ๊ฐ€์ง€ ์š”๊ตฌ์‚ฌํ•ญ์ด ์žˆ์Šต๋‹ˆ๋‹ค: - ๋„๊ตฌ ์ž์ฒด์˜ ์ด๋ฆ„์— ํ•ด๋‹นํ•˜๋Š” `name` ์†์„ฑ. ์ˆ˜ํ–‰๋ช…์ด ์žˆ๋Š” ๋‹ค๋ฅธ ๋„๊ตฌ์™€ ํ˜ธํ™˜๋˜๋„๋ก `model_download_counter`๋กœ ์ด๋ฆ„์„ ์ง€์ •ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. - ์—์ด์ „ํŠธ์˜ ํ”„๋กฌํ”„ํŠธ๋ฅผ ์ฑ„์šฐ๋Š” ๋ฐ ์‚ฌ์šฉ๋˜๋Š” ์†์„ฑ `description`. - `inputs` ๋ฐ `outputs` ์†์„ฑ. 
์ด๋ฅผ ์ •์˜ํ•˜๋ฉด Python ์ธํ„ฐํ”„๋ฆฌํ„ฐ๊ฐ€ ์œ ํ˜•์— ๋Œ€ํ•œ ์ •๋ณด์— ์ž…๊ฐํ•œ ์„ ํƒ์„ ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋˜๋ฉฐ, ๋„๊ตฌ๋ฅผ ํ—ˆ๋ธŒ์— ํ‘ธ์‹œํ•  ๋•Œ gradio ๋ฐ๋ชจ๋ฅผ ์ƒ์„ฑํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‘ ์†์„ฑ ๋ชจ๋‘ ๊ฐ’์€ 'ํ…์ŠคํŠธ', '์ด๋ฏธ์ง€' ๋˜๋Š” '์˜ค๋””์˜ค'๊ฐ€ ๋  ์ˆ˜ ์žˆ๋Š” ์˜ˆ์ƒ ๊ฐ’์˜ ๋ฆฌ์ŠคํŠธ์ž…๋‹ˆ๋‹ค. - ์ถ”๋ก  ์ฝ”๋“œ๊ฐ€ ํฌํ•จ๋œ `__call__` ๋ฉ”์†Œ๋“œ. ์ด๊ฒƒ์ด ์šฐ๋ฆฌ๊ฐ€ ์œ„์—์„œ ๋‹ค๋ฃจ์—ˆ๋˜ ์ฝ”๋“œ์ž…๋‹ˆ๋‹ค! ์ด์ œ ํด๋ž˜์Šค์˜ ๋ชจ์Šต์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```python from transformers import Tool from huggingface_hub import list_models class HFModelDownloadsTool(Tool): name = "model_download_counter" description = ( "This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub. " "It takes the name of the category (such as text-classification, depth-estimation, etc), and " "returns the name of the checkpoint." ) inputs = ["text"] outputs = ["text"] def __call__(self, task: str): model = next(iter(list_models(filter=task, sort="downloads", direction=-1))) return model.id ``` ์ด์ œ ๋„๊ตฌ๋ฅผ ์†์‰ฝ๊ฒŒ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๊ฒŒ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ๋„๊ตฌ๋ฅผ ํŒŒ์ผ์— ์ €์žฅํ•˜๊ณ  ๋ฉ”์ธ ์Šคํฌ๋ฆฝํŠธ์—์„œ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. ์ด ํŒŒ์ผ์˜ ์ด๋ฆ„์„ `model_downloads.py`๋กœ ์ง€์ •ํ•˜๋ฉด ๊ฒฐ๊ณผ์ ์œผ๋กœ ๊ฐ€์ ธ์˜ค๊ธฐ ์ฝ”๋“œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```python from model_downloads import HFModelDownloadsTool tool = HFModelDownloadsTool() ``` ๋‹ค๋ฅธ ์‚ฌ๋žŒ๋“ค์ด ์ด ๊ธฐ๋Šฅ์„ ํ™œ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•˜๊ณ  ์ดˆ๊ธฐํ™”๋ฅผ ๋” ๊ฐ„๋‹จํ•˜๊ฒŒ ํ•˜๋ ค๋ฉด ๋„ค์ž„์ŠคํŽ˜์ด์Šค ์•„๋ž˜์˜ Hub๋กœ ํ‘ธ์‹œํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ๊ทธ๋ ‡๊ฒŒ ํ•˜๋ ค๋ฉด `tool` ๋ณ€์ˆ˜์—์„œ `push_to_hub`๋ฅผ ํ˜ธ์ถœํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค: ```python tool.push_to_hub("hf-model-downloads") ``` ์ด์ œ ํ—ˆ๋ธŒ์— ์ฝ”๋“œ๊ฐ€ ์ƒ๊ฒผ์Šต๋‹ˆ๋‹ค! ๋งˆ์ง€๋ง‰ ๋‹จ๊ณ„์ธ ์—์ด์ „ํŠธ๊ฐ€ ์ฝ”๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๋„๋ก ํ•˜๋Š” ๋‹จ๊ณ„๋ฅผ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. #### ์—์ด์ „ํŠธ๊ฐ€ ๋„๊ตฌ๋ฅผ ์‚ฌ์šฉํ•˜๊ฒŒ ํ•˜๊ธฐ[[Having-the-agent-use-the-tool]] ์ด์ œ ์ด๋Ÿฐ ์‹์œผ๋กœ ํ—ˆ๋ธŒ์— ์กด์žฌํ•˜๋Š” ๋„๊ตฌ๋ฅผ ์ธ์Šคํ„ด์Šคํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค(๋„๊ตฌ์˜ ์‚ฌ์šฉ์ž ์ด๋ฆ„์€ ๋ณ€๊ฒฝํ•˜์„ธ์š”): We now have our tool that lives on the Hub which can be instantiated as such (change the user name for your tool): ```python from transformers import load_tool tool = load_tool("lysandre/hf-model-downloads") ``` ์ด ๋„๊ตฌ๋ฅผ ์—์ด์ „ํŠธ์—์„œ ์‚ฌ์šฉํ•˜๋ ค๋ฉด ์—์ด์ „ํŠธ ์ดˆ๊ธฐํ™” ๋ฉ”์†Œ๋“œ์˜ `additional_tools` ๋งค๊ฐœ๋ณ€์ˆ˜์— ์ „๋‹ฌํ•˜๊ธฐ๋งŒ ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค: ```python from transformers import HfAgent agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder", additional_tools=[tool]) agent.run( "Can you read out loud the name of the model that has the most downloads in the 'text-to-video' task on the Hugging Face Hub?" ) ``` ๊ทธ๋Ÿฌ๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๊ฒฐ๊ณผ๊ฐ€ ์ถœ๋ ฅ๋ฉ๋‹ˆ๋‹ค: ```text ==Code generated by the agent== model = model_download_counter(task="text-to-video") print(f"The model with the most downloads is {model}.") audio_model = text_reader(model) ==Result== The model with the most downloads is damo-vilab/text-to-video-ms-1.7b. ``` and generates the following audio. 
| **Audio** | |------------------------------------------------------------------------------------------------------------------------------------------------------| | <audio controls><source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/damo.wav" type="audio/wav"/> | <Tip> LLM์— ๋”ฐ๋ผ ์ผ๋ถ€๋Š” ๋งค์šฐ ์ทจ์•ฝํ•˜๊ธฐ ๋•Œ๋ฌธ์— ์ œ๋Œ€๋กœ ์ž‘๋™ํ•˜๋ ค๋ฉด ๋งค์šฐ ์ •ํ™•ํ•œ ํ”„๋กฌํ”„ํŠธ๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ์—์ด์ „ํŠธ๊ฐ€ ๋„๊ตฌ๋ฅผ ์ž˜ ํ™œ์šฉํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” ๋„๊ตฌ์˜ ์ด๋ฆ„๊ณผ ์„ค๋ช…์„ ์ž˜ ์ •์˜ํ•˜๋Š” ๊ฒƒ์ด ๋ฌด์—‡๋ณด๋‹ค ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. </Tip> ### ๊ธฐ์กด ๋„๊ตฌ ๋Œ€์ฒดํ•˜๊ธฐ[[replacing-existing-tools]] ์—์ด์ „ํŠธ์˜ ๋„๊ตฌ ์ƒ์ž์— ์ƒˆ ํ•ญ๋ชฉ์„ ๋ฐฐ์ •ํ•˜๊ธฐ๋งŒ ํ•˜๋ฉด ๊ธฐ์กด ๋„๊ตฌ๋ฅผ ๋Œ€์ฒดํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฐฉ๋ฒ•์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```python from transformers import HfAgent, load_tool agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder") agent.toolbox["image-transformation"] = load_tool("diffusers/controlnet-canny-tool") ``` <Tip> ๋‹ค๋ฅธ ๋„๊ตฌ๋กœ ๊ต์ฒดํ•  ๋•Œ๋Š” ์ฃผ์˜ํ•˜์„ธ์š”! ์ด ์ž‘์—…์œผ๋กœ ์—์ด์ „ํŠธ์˜ ํ”„๋กฌํ”„ํŠธ๋„ ์กฐ์ •๋ฉ๋‹ˆ๋‹ค. ์ž‘์—…์— ๋” ์ ํ•ฉํ•œ ํ”„๋กฌํ”„ํŠธ๊ฐ€ ์žˆ์œผ๋ฉด ์ข‹์„ ์ˆ˜ ์žˆ์ง€๋งŒ, ๋‹ค๋ฅธ ๋„๊ตฌ๋ณด๋‹ค ๋” ๋งŽ์ด ์„ ํƒ๋˜๊ฑฐ๋‚˜ ์ •์˜ํ•œ ๋„๊ตฌ ๋Œ€์‹  ๋‹ค๋ฅธ ๋„๊ตฌ๊ฐ€ ์„ ํƒ๋  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. </Tip> ## gradio-tools ์‚ฌ์šฉํ•˜๊ธฐ[[leveraging-gradio-tools]] [gradio-tools](https://github.com/freddyaboulton/gradio-tools)๋Š” Hugging Face Spaces๋ฅผ ๋„๊ตฌ๋กœ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ๊ฐ•๋ ฅํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์ž…๋‹ˆ๋‹ค. ๊ธฐ์กด์˜ ๋งŽ์€ Spaces๋ฟ๋งŒ ์•„๋‹ˆ๋ผ ์‚ฌ์šฉ์ž ์ •์˜ Spaces๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋””์ž์ธํ•  ์ˆ˜ ์žˆ๋„๋ก ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. ์šฐ๋ฆฌ๋Š” `Tool.from_gradio` ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ `gradio_tools`์— ๋Œ€ํ•œ ์ง€์›์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ํ”„๋กฌํ”„ํŠธ๋ฅผ ๊ฐœ์„ ํ•˜๊ณ  ๋” ๋‚˜์€ ์ด๋ฏธ์ง€๋ฅผ ์ƒ์„ฑํ•˜๊ธฐ ์œ„ํ•ด `gradio-tools` ํˆดํ‚ท์—์„œ ์ œ๊ณต๋˜๋Š” `StableDiffusionPromptGeneratorTool` ๋„๊ตฌ๋ฅผ ํ™œ์šฉํ•˜๊ณ ์ž ํ•ฉ๋‹ˆ๋‹ค. ๋จผ์ € `gradio_tools`์—์„œ ๋„๊ตฌ๋ฅผ ๊ฐ€์ ธ์™€์„œ ์ธ์Šคํ„ด์Šคํ™”ํ•ฉ๋‹ˆ๋‹ค: ```python from gradio_tools import StableDiffusionPromptGeneratorTool gradio_tool = StableDiffusionPromptGeneratorTool() ``` ํ•ด๋‹น ์ธ์Šคํ„ด์Šค๋ฅผ `Tool.from_gradio` ๋ฉ”์†Œ๋“œ์— ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค: ```python from transformers import Tool tool = Tool.from_gradio(gradio_tool) ``` ์ด์ œ ์ผ๋ฐ˜์ ์ธ ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ์™€ ๋˜‘๊ฐ™์ด ๊ด€๋ฆฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฅผ ํ™œ์šฉํ•˜์—ฌ `a rabbit wearing a space suit'(์šฐ์ฃผ๋ณต์„ ์ž…์€ ํ† ๋ผ)๋ผ๋Š” ํ”„๋กฌํ”„ํŠธ๋ฅผ ๊ฐœ์„ ํ–ˆ์Šต๋‹ˆ๋‹ค: ```python from transformers import HfAgent agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder", additional_tools=[tool]) agent.run("Generate an image of the `prompt` after improving it.", prompt="A rabbit wearing a space suit") ``` ๋ชจ๋ธ์ด ๋„๊ตฌ๋ฅผ ์ ์ ˆํžˆ ํ™œ์šฉํ•ฉ๋‹ˆ๋‹ค: ```text ==Explanation from the agent== I will use the following tools: `StableDiffusionPromptGenerator` to improve the prompt, then `image_generator` to generate an image according to the improved prompt. 
==Code generated by the agent== improved_prompt = StableDiffusionPromptGenerator(prompt) print(f"The improved prompt is {improved_prompt}.") image = image_generator(improved_prompt) ``` ๋งˆ์ง€๋ง‰์œผ๋กœ ์ด๋ฏธ์ง€๋ฅผ ์ƒ์„ฑํ•˜๊ธฐ ์ „์—: <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit.png"> <Tip warning={true}> gradio-tools๋Š” ๋‹ค๋ฅธ ๋ชจ๋‹ฌ๋ฆฌํ‹ฐ๋กœ ์ž‘์—…ํ•  ๋•Œ์—๋„ *ํ…์ŠคํŠธ* ์ž…๋ ฅ ๋ฐ ์ถœ๋ ฅ์„ ํ•„์š”๋กœ ํ•ฉ๋‹ˆ๋‹ค. ์ด ๊ตฌํ˜„์€ ์ด๋ฏธ์ง€ ๋ฐ ์˜ค๋””์˜ค ๊ฐ์ฒด์—์„œ ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค. ํ˜„์žฌ๋Š” ์ด ๋‘ ๊ฐ€์ง€๊ฐ€ ํ˜ธํ™˜๋˜์ง€ ์•Š์ง€๋งŒ ์ง€์› ๊ฐœ์„ ์„ ์œ„ํ•ด ๋…ธ๋ ฅํ•˜๋ฉด์„œ ๋น ๋ฅด๊ฒŒ ํ˜ธํ™˜๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค. </Tip> ## ํ–ฅํ›„ Langchain๊ณผ์˜ ํ˜ธํ™˜์„ฑ[[future-compatibility-with-langchain]] ์ €ํฌ๋Š” Langchain์„ ์ข‹์•„ํ•˜๋ฉฐ ๋งค์šฐ ๋งค๋ ฅ์ ์ธ ๋„๊ตฌ ๋ชจ์Œ์„ ๊ฐ€์ง€๊ณ  ์žˆ๋‹ค๊ณ  ์ƒ๊ฐํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋„๊ตฌ๋ฅผ ์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด Langchain์€ ๋‹ค๋ฅธ ๋ชจ๋‹ฌ๋ฆฌํ‹ฐ์™€ ์ž‘์—…ํ•  ๋•Œ์—๋„ *ํ…์ŠคํŠธ* ์ž…๋ ฅ๊ณผ ์ถœ๋ ฅ์„ ํ•„์š”๋กœ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ์ข…์ข… ๊ฐ์ฒด์˜ ์ง๋ ฌํ™”๋œ(์ฆ‰, ๋””์Šคํฌ์— ์ €์žฅ๋œ) ๋ฒ„์ „์ž…๋‹ˆ๋‹ค. ์ด ์ฐจ์ด๋กœ ์ธํ•ด transformers-agents์™€ Langchain ๊ฐ„์—๋Š” ๋ฉ€ํ‹ฐ ๋ชจ๋‹ฌ๋ฆฌํ‹ฐ๊ฐ€ ์ฒ˜๋ฆฌ๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ํ–ฅํ›„ ๋ฒ„์ „์—์„œ ์ด ์ œํ•œ์ด ํ•ด๊ฒฐ๋˜๊ธฐ๋ฅผ ๋ฐ”๋ผ๋ฉฐ, ์ด ํ˜ธํ™˜์„ฑ์„ ๋‹ฌ์„ฑํ•  ์ˆ˜ ์žˆ๋„๋ก ์—ด๋ ฌํ•œ Langchain ์‚ฌ์šฉ์ž์˜ ๋„์›€์„ ํ™˜์˜ํ•ฉ๋‹ˆ๋‹ค. ์ €ํฌ๋Š” ๋” ๋‚˜์€ ์ง€์›์„ ์ œ๊ณตํ•˜๊ณ ์ž ํ•ฉ๋‹ˆ๋‹ค. ๋„์›€์„ ์ฃผ๊ณ  ์‹ถ์œผ์‹œ๋‹ค๋ฉด, [์ด์Šˆ๋ฅผ ์—ด์–ด](https://github.com/huggingface/transformers/issues/new) ์˜๊ฒฌ์„ ๊ณต์œ ํ•ด ์ฃผ์„ธ์š”.
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/ko/torchscript.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # TorchScript๋กœ ๋‚ด๋ณด๋‚ด๊ธฐ[[export-to-torchscript]] <Tip> TorchScript๋ฅผ ํ™œ์šฉํ•œ ์‹คํ—˜์€ ์•„์ง ์ดˆ๊ธฐ ๋‹จ๊ณ„๋กœ, ๊ฐ€๋ณ€์ ์ธ ์ž…๋ ฅ ํฌ๊ธฐ ๋ชจ๋ธ๋“ค์„ ํ†ตํ•ด ๊ทธ ๊ธฐ๋Šฅ์„ฑ์„ ๊ณ„์† ํƒ๊ตฌํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๊ธฐ๋Šฅ์€ ์ €ํฌ๊ฐ€ ๊ด€์‹ฌ์„ ๋‘๊ณ  ์žˆ๋Š” ๋ถ„์•ผ ์ค‘ ํ•˜๋‚˜์ด๋ฉฐ, ์•ž์œผ๋กœ ์ถœ์‹œ๋  ๋ฒ„์ „์—์„œ ๋” ๋งŽ์€ ์ฝ”๋“œ ์˜ˆ์ œ, ๋” ์œ ์—ฐํ•œ ๊ตฌํ˜„, ๊ทธ๋ฆฌ๊ณ  Python ๊ธฐ๋ฐ˜ ์ฝ”๋“œ์™€ ์ปดํŒŒ์ผ๋œ TorchScript๋ฅผ ๋น„๊ตํ•˜๋Š” ๋ฒค์น˜๋งˆํฌ๋ฅผ ๋“ฑ์„ ํ†ตํ•ด ๋ถ„์„์„ ์‹ฌํ™”ํ•  ์˜ˆ์ •์ž…๋‹ˆ๋‹ค. </Tip> [TorchScript ๋ฌธ์„œ](https://pytorch.org/docs/stable/jit.html)์—์„œ๋Š” ์ด๋ ‡๊ฒŒ ๋งํ•ฉ๋‹ˆ๋‹ค. > TorchScript๋Š” PyTorch ์ฝ”๋“œ์—์„œ ์ง๋ ฌํ™” ๋ฐ ์ตœ์ ํ™” ๊ฐ€๋Šฅํ•œ ๋ชจ๋ธ์„ ์ƒ์„ฑํ•˜๋Š” ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค. [JIT๊ณผ TRACE](https://pytorch.org/docs/stable/jit.html)๋Š” ๊ฐœ๋ฐœ์ž๊ฐ€ ๋ชจ๋ธ์„ ๋‚ด๋ณด๋‚ด์„œ ํšจ์œจ ์ง€ํ–ฅ์ ์ธ C++ ํ”„๋กœ๊ทธ๋žจ๊ณผ ๊ฐ™์€ ๋‹ค๋ฅธ ํ”„๋กœ๊ทธ๋žจ์—์„œ ์žฌ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•˜๋Š” PyTorch ๋ชจ๋“ˆ์ž…๋‹ˆ๋‹ค. PyTorch ๊ธฐ๋ฐ˜ Python ํ”„๋กœ๊ทธ๋žจ๊ณผ ๋‹ค๋ฅธ ํ™˜๊ฒฝ์—์„œ ๋ชจ๋ธ์„ ์žฌ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก, ๐Ÿค— Transformers ๋ชจ๋ธ์„ TorchScript๋กœ ๋‚ด๋ณด๋‚ผ ์ˆ˜ ์žˆ๋Š” ์ธํ„ฐํŽ˜์ด์Šค๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์ด ๋ฌธ์„œ์—์„œ๋Š” TorchScript๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋‚ด๋ณด๋‚ด๊ณ  ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ ๋‚ด๋ณด๋‚ด๋ ค๋ฉด ๋‘ ๊ฐ€์ง€๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค: - `torchscript` ํ”Œ๋ž˜๊ทธ๋กœ ๋ชจ๋ธ ์ธ์Šคํ„ด์Šคํ™” - ๋”๋ฏธ ์ž…๋ ฅ์„ ์‚ฌ์šฉํ•œ ์ˆœ์ „ํŒŒ(forward pass) ์ด ํ•„์ˆ˜ ์กฐ๊ฑด๋“ค์€ ์•„๋ž˜์— ์ž์„ธํžˆ ์„ค๋ช…๋œ ๊ฒƒ์ฒ˜๋Ÿผ ๊ฐœ๋ฐœ์ž๋“ค์ด ์ฃผ์˜ํ•ด์•ผ ํ•  ์—ฌ๋Ÿฌ ์‚ฌํ•ญ๋“ค์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค. ## TorchScript ํ”Œ๋ž˜๊ทธ์™€ ๋ฌถ์ธ ๊ฐ€์ค‘์น˜(tied weights)[[torchscript-flag-and-tied-weights]] `torchscript` ํ”Œ๋ž˜๊ทธ๊ฐ€ ํ•„์š”ํ•œ ์ด์œ ๋Š” ๋Œ€๋ถ€๋ถ„์˜ ๐Ÿค— Transformers ์–ธ์–ด ๋ชจ๋ธ์—์„œ `Embedding` ๋ ˆ์ด์–ด์™€ `Decoding` ๋ ˆ์ด์–ด ๊ฐ„์˜ ๋ฌถ์ธ ๊ฐ€์ค‘์น˜(tied weights)๊ฐ€ ์กด์žฌํ•˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. TorchScript๋Š” ๋ฌถ์ธ ๊ฐ€์ค‘์น˜๋ฅผ ๊ฐ€์ง„ ๋ชจ๋ธ์„ ๋‚ด๋ณด๋‚ผ ์ˆ˜ ์—†์œผ๋ฏ€๋กœ, ๋ฏธ๋ฆฌ ๊ฐ€์ค‘์น˜๋ฅผ ํ’€๊ณ  ๋ณต์ œํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. `torchscript` ํ”Œ๋ž˜๊ทธ๋กœ ์ธ์Šคํ„ด์Šคํ™”๋œ ๋ชจ๋ธ์€ `Embedding` ๋ ˆ์ด์–ด์™€ `Decoding` ๋ ˆ์ด์–ด๊ฐ€ ๋ถ„๋ฆฌ๋˜์–ด ์žˆ์œผ๋ฏ€๋กœ ์ดํ›„์— ํ›ˆ๋ จํ•ด์„œ๋Š” ์•ˆ ๋ฉ๋‹ˆ๋‹ค. ํ›ˆ๋ จ์„ ํ•˜๊ฒŒ ๋˜๋ฉด ๋‘ ๋ ˆ์ด์–ด ๊ฐ„ ๋™๊ธฐํ™”๊ฐ€ ํ•ด์ œ๋˜์–ด ์˜ˆ์ƒ์น˜ ๋ชปํ•œ ๊ฒฐ๊ณผ๊ฐ€ ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์–ธ์–ด ๋ชจ๋ธ ํ—ค๋“œ๋ฅผ ๊ฐ–์ง€ ์•Š์€ ๋ชจ๋ธ์€ ๊ฐ€์ค‘์น˜๊ฐ€ ๋ฌถ์—ฌ ์žˆ์ง€ ์•Š์•„์„œ ์ด ๋ฌธ์ œ๊ฐ€ ๋ฐœ์ƒํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋ชจ๋ธ๋“ค์€ `torchscript` ํ”Œ๋ž˜๊ทธ ์—†์ด ์•ˆ์ „ํ•˜๊ฒŒ ๋‚ด๋ณด๋‚ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ## ๋”๋ฏธ ์ž…๋ ฅ๊ณผ ํ‘œ์ค€ ๊ธธ์ด[[dummy-inputs-and-standard-lengths]] ๋”๋ฏธ ์ž…๋ ฅ(dummy inputs)์€ ๋ชจ๋ธ์˜ ์ˆœ์ „ํŒŒ(forward pass)์— ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ์ž…๋ ฅ ๊ฐ’์ด ๋ ˆ์ด์–ด๋ฅผ ํ†ตํ•ด ์ „ํŒŒ๋˜๋Š” ๋™์•ˆ, PyTorch๋Š” ๊ฐ ํ…์„œ์—์„œ ์‹คํ–‰๋œ ๋‹ค๋ฅธ ์—ฐ์‚ฐ์„ ์ถ”์ ํ•ฉ๋‹ˆ๋‹ค. 
์ด๋Ÿฌํ•œ ๊ธฐ๋ก๋œ ์—ฐ์‚ฐ์€ ๋ชจ๋ธ์˜ *์ถ”์ (trace)*์„ ์ƒ์„ฑํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ์ถ”์ ์€ ์ž…๋ ฅ์˜ ์ฐจ์›์„ ๊ธฐ์ค€์œผ๋กœ ์ƒ์„ฑ๋ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ๋”๋ฏธ ์ž…๋ ฅ์˜ ์ฐจ์›์— ์ œํ•œ๋˜์–ด, ๋‹ค๋ฅธ ์‹œํ€€์Šค ๊ธธ์ด๋‚˜ ๋ฐฐ์น˜ ํฌ๊ธฐ์—์„œ๋Š” ์ž‘๋™ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๋‹ค๋ฅธ ํฌ๊ธฐ๋กœ ์‹œ๋„ํ•  ๊ฒฝ์šฐ ๋‹ค์Œ๊ณผ ๊ฐ™์€ ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•ฉ๋‹ˆ๋‹ค: ``` `The expanded size of the tensor (3) must match the existing size (7) at non-singleton dimension 2` ``` ์ถ”๋ก  ์ค‘ ๋ชจ๋ธ์— ๊ณต๊ธ‰๋  ๊ฐ€์žฅ ํฐ ์ž…๋ ฅ๋งŒํผ ํฐ ๋”๋ฏธ ์ž…๋ ฅ ํฌ๊ธฐ๋กœ ๋ชจ๋ธ์„ ์ถ”์ ํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ํŒจ๋”ฉ์€ ๋ˆ„๋ฝ๋œ ๊ฐ’์„ ์ฑ„์šฐ๋Š” ๋ฐ ๋„์›€์ด ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ๋ชจ๋ธ์ด ๋” ํฐ ์ž…๋ ฅ ํฌ๊ธฐ๋กœ ์ถ”์ ๋˜๊ธฐ ๋•Œ๋ฌธ์—, ํ–‰๋ ฌ์˜ ์ฐจ์›์ด ์ปค์ง€๊ณ  ๊ณ„์‚ฐ๋Ÿ‰์ด ๋งŽ์•„์ง‘๋‹ˆ๋‹ค. ๋‹ค์–‘ํ•œ ์‹œํ€€์Šค ๊ธธ์ด ๋ชจ๋ธ์„ ๋‚ด๋ณด๋‚ผ ๋•Œ๋Š” ๊ฐ ์ž…๋ ฅ์— ๋Œ€ํ•ด ์ˆ˜ํ–‰๋˜๋Š” ์ด ์—ฐ์‚ฐ ํšŸ์ˆ˜์— ์ฃผ์˜ํ•˜๊ณ  ์„ฑ๋Šฅ์„ ์ฃผ์˜ ๊นŠ๊ฒŒ ํ™•์ธํ•˜์„ธ์š”. ## Python์—์„œ TorchScript ์‚ฌ์šฉํ•˜๊ธฐ[[using-torchscript-in-python]] ์ด ์„น์…˜์—์„œ๋Š” ๋ชจ๋ธ์„ ์ €์žฅํ•˜๊ณ  ๊ฐ€์ ธ์˜ค๋Š” ๋ฐฉ๋ฒ•, ์ถ”์ ์„ ์‚ฌ์šฉํ•˜์—ฌ ์ถ”๋ก ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค. ### ๋ชจ๋ธ ์ €์žฅํ•˜๊ธฐ[[saving-a-model]] `BertModel`์„ TorchScript๋กœ ๋‚ด๋ณด๋‚ด๋ ค๋ฉด `BertConfig` ํด๋ž˜์Šค์—์„œ `BertModel`์„ ์ธ์Šคํ„ด์Šคํ™”ํ•œ ๋‹ค์Œ, `traced_bert.pt`๋ผ๋Š” ํŒŒ์ผ๋ช…์œผ๋กœ ๋””์Šคํฌ์— ์ €์žฅํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ```python from transformers import BertModel, BertTokenizer, BertConfig import torch enc = BertTokenizer.from_pretrained("google-bert/bert-base-uncased") # ์ž…๋ ฅ ํ…์ŠคํŠธ ํ† ํฐํ™”ํ•˜๊ธฐ text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]" tokenized_text = enc.tokenize(text) # ์ž…๋ ฅ ํ† ํฐ ์ค‘ ํ•˜๋‚˜๋ฅผ ๋งˆ์Šคํ‚นํ•˜๊ธฐ masked_index = 8 tokenized_text[masked_index] = "[MASK]" indexed_tokens = enc.convert_tokens_to_ids(tokenized_text) segments_ids = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1] # ๋”๋ฏธ ์ž…๋ ฅ ๋งŒ๋“ค๊ธฐ tokens_tensor = torch.tensor([indexed_tokens]) segments_tensors = torch.tensor([segments_ids]) dummy_input = [tokens_tensor, segments_tensors] # torchscript ํ”Œ๋ž˜๊ทธ๋กœ ๋ชจ๋ธ ์ดˆ๊ธฐํ™”ํ•˜๊ธฐ # ์ด ๋ชจ๋ธ์€ LM ํ—ค๋“œ๊ฐ€ ์—†์œผ๋ฏ€๋กœ ํ•„์š”ํ•˜์ง€ ์•Š์ง€๋งŒ, ํ”Œ๋ž˜๊ทธ๋ฅผ True๋กœ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค. config = BertConfig( vocab_size_or_config_json_file=32000, hidden_size=768, num_hidden_layers=12, num_attention_heads=12, intermediate_size=3072, torchscript=True, ) # ๋ชจ๋ธ์„ ์ธ์Šคํ„ดํŠธํ™”ํ•˜๊ธฐ model = BertModel(config) # ๋ชจ๋ธ์„ ํ‰๊ฐ€ ๋ชจ๋“œ๋กœ ๋‘์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. model.eval() # ๋งŒ์•ฝ *from_pretrained*๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๋Š” ๊ฒฝ์šฐ, TorchScript ํ”Œ๋ž˜๊ทธ๋ฅผ ์‰ฝ๊ฒŒ ์„ค์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค model = BertModel.from_pretrained("google-bert/bert-base-uncased", torchscript=True) # ์ถ”์  ์ƒ์„ฑํ•˜๊ธฐ traced_model = torch.jit.trace(model, [tokens_tensor, segments_tensors]) torch.jit.save(traced_model, "traced_bert.pt") ``` ### ๋ชจ๋ธ ๊ฐ€์ ธ์˜ค๊ธฐ[[loading-a-model]] ์ด์ œ ์ด์ „์— ์ €์žฅํ•œ `BertModel`, ์ฆ‰ `traced_bert.pt`๋ฅผ ๋””์Šคํฌ์—์„œ ๊ฐ€์ ธ์˜ค๊ณ , ์ด์ „์— ์ดˆ๊ธฐํ™”ํ•œ `dummy_input`์—์„œ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
```python loaded_model = torch.jit.load("traced_bert.pt") loaded_model.eval() all_encoder_layers, pooled_output = loaded_model(*dummy_input) ``` ### ์ถ”์ ๋œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜์—ฌ ์ถ”๋ก ํ•˜๊ธฐ[[using-a-traced-model-for-inference]] `__call__` ์ด์ค‘ ์–ธ๋”์Šค์ฝ”์–ด(dunder) ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ถ”๋ก ์— ์ถ”์ ๋œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜์„ธ์š”: ```python traced_model(tokens_tensor, segments_tensors) ``` ## Neuron SDK๋กœ Hugging Face TorchScript ๋ชจ๋ธ์„ AWS์— ๋ฐฐํฌํ•˜๊ธฐ[[deploy-hugging-face-torchscript-models-to-aws-with-the-neuron-sdk]] AWS๊ฐ€ ํด๋ผ์šฐ๋“œ์—์„œ ์ €๋น„์šฉ, ๊ณ ์„ฑ๋Šฅ ๋จธ์‹  ๋Ÿฌ๋‹ ์ถ”๋ก ์„ ์œ„ํ•œ [Amazon EC2 Inf1](https://aws.amazon.com/ec2/instance-types/inf1/) ์ธ์Šคํ„ด์Šค ์ œํ’ˆ๊ตฐ์„ ์ถœ์‹œํ–ˆ์Šต๋‹ˆ๋‹ค. Inf1 ์ธ์Šคํ„ด์Šค๋Š” ๋”ฅ๋Ÿฌ๋‹ ์ถ”๋ก  ์›Œํฌ๋กœ๋“œ์— ํŠนํ™”๋œ ๋งž์ถค ํ•˜๋“œ์›จ์–ด ๊ฐ€์†๊ธฐ์ธ AWS Inferentia ์นฉ์œผ๋กœ ๊ตฌ๋™๋ฉ๋‹ˆ๋‹ค. [AWS Neuron](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/#)์€ Inferentia๋ฅผ ์œ„ํ•œ SDK๋กœ, Inf1์— ๋ฐฐํฌํ•˜๊ธฐ ์œ„ํ•œ transformers ๋ชจ๋ธ ์ถ”์  ๋ฐ ์ตœ์ ํ™”๋ฅผ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. Neuron SDK๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๊ธฐ๋Šฅ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค: 1. ์ฝ”๋“œ ํ•œ ์ค„๋งŒ ๋ณ€๊ฒฝํ•˜๋ฉด ํด๋ผ์šฐ๋“œ ์ถ”๋ก ๋ฅผ ์œ„ํ•ด TorchScript ๋ชจ๋ธ์„ ์ถ”์ ํ•˜๊ณ  ์ตœ์ ํ™”ํ•  ์ˆ˜ ์žˆ๋Š” ์‰ฌ์šด API 2. ์ฆ‰์‹œ ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ ์„ฑ๋Šฅ ์ตœ์ ํ™”๋กœ [๋น„์šฉ ํšจ์œจ ํ–ฅ์ƒ](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/benchmark/>) 3. [PyTorch](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/pytorch/bert_tutorial/tutorial_pretrained_bert.html) ๋˜๋Š” [TensorFlow](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/tensorflow/huggingface_bert/huggingface_bert.html)๋กœ ๊ตฌ์ถ•๋œ Hugging Face transformers ๋ชจ๋ธ ์ง€์› ### ์‹œ์‚ฌ์ [[implications]] [BERT (Bidirectional Encoder Representations from Transformers)](https://huggingface.co/docs/transformers/main/model_doc/bert) ์•„ํ‚คํ…์ฒ˜ ๋˜๋Š” ๊ทธ ๋ณ€ํ˜•์ธ [distilBERT](https://huggingface.co/docs/transformers/main/model_doc/distilbert) ๋ฐ [roBERTa](https://huggingface.co/docs/transformers/main/model_doc/roberta)๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•œ Transformers ๋ชจ๋ธ์€ ์ถ”์ถœ ๊ธฐ๋ฐ˜ ์งˆ์˜์‘๋‹ต, ์‹œํ€€์Šค ๋ถ„๋ฅ˜ ๋ฐ ํ† ํฐ ๋ถ„๋ฅ˜์™€ ๊ฐ™์€ ๋น„์ƒ์„ฑ ์ž‘์—… ์‹œ Inf1์—์„œ ์ตœ์ƒ์˜ ์„ฑ๋Šฅ์„ ๋ณด์ž…๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ํ…์ŠคํŠธ ์ƒ์„ฑ ์ž‘์—…๋„ [AWS Neuron MarianMT ํŠœํ† ๋ฆฌ์–ผ](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/pytorch/transformers-marianmt.html)์„ ๋”ฐ๋ผ Inf1์—์„œ ์‹คํ–‰๋˜๋„๋ก ์กฐ์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. Inferentia์—์„œ ๋ฐ”๋กœ ๋ณ€ํ™˜ํ•  ์ˆ˜ ์žˆ๋Š” ๋ชจ๋ธ์— ๋Œ€ํ•œ ์ž์„ธํ•œ ์ •๋ณด๋Š” Neuron ๋ฌธ์„œ์˜ [Model Architecture Fit](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/models/models-inferentia.html#models-inferentia) ์„น์…˜์—์„œ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ### ์ข…์†์„ฑ[[dependencies]] AWS Neuron์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ณ€ํ™˜ํ•˜๋ ค๋ฉด [Neuron SDK ํ™˜๊ฒฝ](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/neuron-frameworks/pytorch-neuron/index.html#installation-guide)์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” [AWS Deep Learning AMI](https://docs.aws.amazon.com/dlami/latest/devguide/tutorial-inferentia-launching.html)์— ๋ฏธ๋ฆฌ ๊ตฌ์„ฑ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ### AWS Neuron์œผ๋กœ ๋ชจ๋ธ ๋ณ€ํ™˜ํ•˜๊ธฐ[[converting-a-model-for-aws-neuron]] `BertModel`์„ ์ถ”์ ํ•˜๋ ค๋ฉด, [Python์—์„œ TorchScript ์‚ฌ์šฉํ•˜๊ธฐ](torchscript#using-torchscript-in-python)์—์„œ์™€ ๋™์ผํ•œ ์ฝ”๋“œ๋ฅผ ์‚ฌ์šฉํ•ด์„œ AWS NEURON์šฉ ๋ชจ๋ธ์„ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค. 
`torch.neuron` ํ”„๋ ˆ์ž„์›Œํฌ ์ต์Šคํ…์…˜์„ ๊ฐ€์ ธ์™€ Python API๋ฅผ ํ†ตํ•ด Neuron SDK์˜ ๊ตฌ์„ฑ ์š”์†Œ์— ์ ‘๊ทผํ•ฉ๋‹ˆ๋‹ค:

```python
from transformers import BertModel, BertTokenizer, BertConfig
import torch
import torch.neuron
```

๋‹ค์Œ ์ค„๋งŒ ์ˆ˜์ •ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค:

```diff
- torch.jit.trace(model, [tokens_tensor, segments_tensors])
+ torch.neuron.trace(model, [tokens_tensor, segments_tensors])
```

์ด๋กœ์จ Neuron SDK๊ฐ€ ๋ชจ๋ธ์„ ์ถ”์ ํ•˜๊ณ  Inf1 ์ธ์Šคํ„ด์Šค์— ์ตœ์ ํ™”ํ•  ์ˆ˜ ์žˆ๊ฒŒ ๋ฉ๋‹ˆ๋‹ค.

AWS Neuron SDK์˜ ๊ธฐ๋Šฅ, ๋„๊ตฌ, ์˜ˆ์ œ ํŠœํ† ๋ฆฌ์–ผ ๋ฐ ์ตœ์‹  ์—…๋ฐ์ดํŠธ์— ๋Œ€ํ•ด ์ž์„ธํžˆ ์•Œ์•„๋ณด๋ ค๋ฉด [AWS NeuronSDK ๋ฌธ์„œ](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/index.html)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”.
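์ฐธ๊ณ ๋กœ, ์œ„ ์ˆ˜์ • ์‚ฌํ•ญ์„ ์•ž์˜ "๋ชจ๋ธ ์ €์žฅํ•˜๊ธฐ" ์˜ˆ์ œ์— ๋ฐ˜์˜ํ•˜๋ฉด ๋Œ€๋žต ์•„๋ž˜์™€ ๊ฐ™์€ ํ˜•ํƒœ๊ฐ€ ๋ฉ๋‹ˆ๋‹ค. Neuron SDK ๋ฒ„์ „์— ๋”ฐ๋ผ ์„ธ๋ถ€ ๋™์ž‘์€ ๋‹ฌ๋ผ์งˆ ์ˆ˜ ์žˆ๋Š” ์Šค์ผ€์น˜์ด๋ฏ€๋กœ, ์‹ค์ œ ์‚ฌ์šฉ ์ „์—๋Š” ์œ„ AWS Neuron ๋ฌธ์„œ๋ฅผ ํ™•์ธํ•˜์„ธ์š”:

```python
import torch
import torch.neuron

# ์•ž์˜ "๋ชจ๋ธ ์ €์žฅํ•˜๊ธฐ" ์˜ˆ์ œ์ฒ˜๋Ÿผ torchscript=True๋กœ ์ธ์Šคํ„ด์Šคํ™”ํ•œ model๊ณผ
# ๋”๋ฏธ ์ž…๋ ฅ(tokens_tensor, segments_tensors)์ด ์ด๋ฏธ ์ค€๋น„๋˜์–ด ์žˆ๋‹ค๊ณ  ๊ฐ€์ •ํ•ฉ๋‹ˆ๋‹ค.
neuron_model = torch.neuron.trace(model, [tokens_tensor, segments_tensors])

# ์ถ”์ ๋œ ๋ชจ๋ธ์€ ์ผ๋ฐ˜ TorchScript ๋ชจ๋ธ์ฒ˜๋Ÿผ ์ €์žฅํ•œ ๋’ค Inf1 ์ธ์Šคํ„ด์Šค์—์„œ ๊ฐ€์ ธ์™€ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
neuron_model.save("traced_bert_neuron.pt")
```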
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/ko/_config.py
# docstyle-ignore INSTALL_CONTENT = """ # Transformers ์„ค์น˜ ๋ฐฉ๋ฒ• ! pip install transformers datasets evaluate accelerate # ๋งˆ์ง€๋ง‰ ๋ฆด๋ฆฌ์Šค ๋Œ€์‹  ์†Œ์Šค์—์„œ ์„ค์น˜ํ•˜๋ ค๋ฉด, ์œ„ ๋ช…๋ น์„ ์ฃผ์„์œผ๋กœ ๋ฐ”๊พธ๊ณ  ์•„๋ž˜ ๋ช…๋ น์„ ํ•ด์ œํ•˜์„ธ์š”. # ! pip install git+https://github.com/huggingface/transformers.git """ notebook_first_cells = [{"type": "code", "content": INSTALL_CONTENT}] black_avoid_patterns = { "{processor_class}": "FakeProcessorClass", "{model_class}": "FakeModelClass", "{object_class}": "FakeObjectClass", }
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/ko/performance.md
<!--- Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์„ฑ๋Šฅ ๋ฐ ํ™•์žฅ์„ฑ [[performance-and-scalability]] ์ ์  ๋” ํฐ ๊ทœ๋ชจ์˜ ํŠธ๋žœ์Šคํฌ๋จธ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๊ณ  ํ”„๋กœ๋•์…˜์— ๋ฐฐํฌํ•˜๋Š” ๋ฐ์—๋Š” ๋‹ค์–‘ํ•œ ์–ด๋ ค์›€์ด ๋”ฐ๋ฆ…๋‹ˆ๋‹ค. ํ›ˆ๋ จ ์ค‘์—๋Š” ๋ชจ๋ธ์ด ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ GPU ๋ฉ”๋ชจ๋ฆฌ๋ณด๋‹ค ๋” ๋งŽ์€ ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ํ•„์š”๋กœ ํ•˜๊ฑฐ๋‚˜ ํ›ˆ๋ จ ์†๋„๊ฐ€ ๋งค์šฐ ๋Š๋ฆด ์ˆ˜ ์žˆ์œผ๋ฉฐ, ์ถ”๋ก ์„ ์œ„ํ•ด ๋ฐฐํฌํ•  ๋•Œ๋Š” ์ œํ’ˆ ํ™˜๊ฒฝ์—์„œ ์š”๊ตฌ๋˜๋Š” ์ฒ˜๋ฆฌ๋Ÿ‰์œผ๋กœ ์ธํ•ด ๊ณผ๋ถ€ํ•˜๊ฐ€ ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๋ฌธ์„œ๋Š” ์ด๋Ÿฌํ•œ ๋ฌธ์ œ๋ฅผ ๊ทน๋ณตํ•˜๊ณ  ์‚ฌ์šฉ ์‚ฌ๋ก€์— ๊ฐ€์žฅ ์ ํ•ฉํ•œ ์„ค์ •์„ ์ฐพ๋„๋ก ๋„์›€์„ ์ฃผ๊ธฐ ์œ„ํ•ด ์„ค๊ณ„๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ํ›ˆ๋ จ๊ณผ ์ถ”๋ก ์œผ๋กœ ๊ฐ€์ด๋“œ๋ฅผ ๋ถ„ํ• ํ–ˆ๋Š”๋ฐ, ์ด๋Š” ๊ฐ๊ฐ ๋‹ค๋ฅธ ๋ฌธ์ œ์™€ ํ•ด๊ฒฐ ๋ฐฉ๋ฒ•์ด ์žˆ๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ๊ฐ ๊ฐ€์ด๋“œ์—๋Š” ๋‹ค์–‘ํ•œ ์ข…๋ฅ˜์˜ ํ•˜๋“œ์›จ์–ด ์„ค์ •์— ๋Œ€ํ•œ ๋ณ„๋„์˜ ๊ฐ€์ด๋“œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค(์˜ˆ: ํ›ˆ๋ จ์„ ์œ„ํ•œ ๋‹จ์ผ GPU vs ๋‹ค์ค‘ GPU ๋˜๋Š” ์ถ”๋ก ์„ ์œ„ํ•œ CPU vs GPU). ![perf_overview](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/perf_overview.png) ์ด ๋ฌธ์„œ๋Š” ์‚ฌ์šฉ์ž์˜ ์ƒํ™ฉ์— ์œ ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ๋ฐฉ๋ฒ•๋“ค์— ๋Œ€ํ•œ ๊ฐœ์š” ๋ฐ ์‹œ์ž‘์  ์—ญํ• ์„ ํ•ฉ๋‹ˆ๋‹ค. ## ํ›ˆ๋ จ [[training]] ํšจ์œจ์ ์ธ ํŠธ๋žœ์Šคํฌ๋จธ ๋ชจ๋ธ ํ›ˆ๋ จ์—๋Š” GPU๋‚˜ TPU์™€ ๊ฐ™์€ ๊ฐ€์†๊ธฐ๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ๊ฐ€์žฅ ์ผ๋ฐ˜์ ์ธ ๊ฒฝ์šฐ๋Š” ๋‹จ์ผ GPU๋งŒ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ์ง€๋งŒ, ๋‹ค์ค‘ GPU ๋ฐ CPU ํ›ˆ๋ จ์— ๋Œ€ํ•œ ์„น์…˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค(๊ณง ๋” ๋งŽ์€ ๋‚ด์šฉ์ด ์ถ”๊ฐ€๋  ์˜ˆ์ •). <Tip> ์ฐธ๊ณ : ๋‹จ์ผ GPU ์„น์…˜์—์„œ ์†Œ๊ฐœ๋œ ๋Œ€๋ถ€๋ถ„์˜ ์ „๋žต(์˜ˆ: ํ˜ผํ•ฉ ์ •๋ฐ€๋„ ํ›ˆ๋ จ ๋˜๋Š” ๊ทธ๋ผ๋””์–ธํŠธ ๋ˆ„์ )์€ ์ผ๋ฐ˜์ ์ธ ๋ชจ๋ธ ํ›ˆ๋ จ์—๋„ ์ ์šฉ๋˜๋ฏ€๋กœ, ๋‹ค์ค‘ GPU๋‚˜ CPU ํ›ˆ๋ จ๊ณผ ๊ฐ™์€ ์„น์…˜์„ ์‚ดํŽด๋ณด๊ธฐ ์ „์— ๊ผญ ์ฐธ๊ณ ํ•˜์‹œ๊ธธ ๋ฐ”๋ž๋‹ˆ๋‹ค. </Tip> ### ๋‹จ์ผ GPU [[single-gpu]] ๋‹จ์ผ GPU์—์„œ ๋Œ€๊ทœ๋ชจ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๋Š” ๊ฒƒ์€ ์–ด๋ ค์šธ ์ˆ˜ ์žˆ์ง€๋งŒ, ์ด๋ฅผ ๊ฐ€๋Šฅํ•˜๊ฒŒ ํ•˜๋Š” ์—ฌ๋Ÿฌ ๊ฐ€์ง€ ๋„๊ตฌ์™€ ๋ฐฉ๋ฒ•์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์„น์…˜์—์„œ๋Š” ํ˜ผํ•ฉ ์ •๋ฐ€๋„ ํ›ˆ๋ จ, ๊ทธ๋ผ๋””์–ธํŠธ ๋ˆ„์  ๋ฐ ์ฒดํฌํฌ์ธํŒ…, ํšจ์œจ์ ์ธ ์˜ตํ‹ฐ๋งˆ์ด์ €, ์ตœ์ ์˜ ๋ฐฐ์น˜ ํฌ๊ธฐ๋ฅผ ๊ฒฐ์ •ํ•˜๊ธฐ ์œ„ํ•œ ์ „๋žต ๋“ฑ์— ๋Œ€ํ•ด ๋…ผ์˜ํ•ฉ๋‹ˆ๋‹ค. [๋‹จ์ผ GPU ํ›ˆ๋ จ ์„น์…˜์œผ๋กœ ์ด๋™](perf_train_gpu_one) ### ๋‹ค์ค‘ GPU [[multigpu]] ๋‹จ์ผ GPU์—์„œ ํ›ˆ๋ จํ•˜๋Š” ๊ฒƒ์ด ๋„ˆ๋ฌด ๋Š๋ฆฌ๊ฑฐ๋‚˜ ๋Œ€๊ทœ๋ชจ ๋ชจ๋ธ์— ์ ํ•ฉํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์ค‘ GPU ์„ค์ •์œผ๋กœ ์ „ํ™˜ํ•˜๋Š” ๊ฒƒ์€ ๋…ผ๋ฆฌ์ ์ธ ๋‹จ๊ณ„์ด์ง€๋งŒ, ์—ฌ๋Ÿฌ GPU์—์„œ ํ•œ ๋ฒˆ์— ํ›ˆ๋ จํ•˜๋ ค๋ฉด ๊ฐ GPU๋งˆ๋‹ค ๋ชจ๋ธ์˜ ์ „์ฒด ์‚ฌ๋ณธ์„ ๋‘˜์ง€, ํ˜น์€ ๋ชจ๋ธ ์ž์ฒด๋„ ์—ฌ๋Ÿฌ GPU์— ๋ถ„์‚ฐํ•˜์—ฌ ๋‘˜์ง€ ๋“ฑ ์ƒˆ๋กœ์šด ๊ฒฐ์ •์„ ๋‚ด๋ ค์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด ์„น์…˜์—์„œ๋Š” ๋ฐ์ดํ„ฐ, ํ…์„œ ๋ฐ ํŒŒ์ดํ”„๋ผ์ธ ๋ณ‘๋ ฌํ™”์— ๋Œ€ํ•ด ์‚ดํŽด๋ด…๋‹ˆ๋‹ค. 
[๋‹ค์ค‘ GPU ํ›ˆ๋ จ ์„น์…˜์œผ๋กœ ์ด๋™](perf_train_gpu_many) ### CPU [[cpu]] [CPU ํ›ˆ๋ จ ์„น์…˜์œผ๋กœ ์ด๋™](perf_train_cpu) ### TPU [[tpu]] [_๊ณง ์ œ๊ณต๋  ์˜ˆ์ •_](perf_train_tpu) ### ํŠน์ˆ˜ํ•œ ํ•˜๋“œ์›จ์–ด [[specialized-hardware]] [_๊ณง ์ œ๊ณต๋  ์˜ˆ์ •_](perf_train_special) ## ์ถ”๋ก  [[inference]] ์ œํ’ˆ ๋ฐ ์„œ๋น„์Šค ํ™˜๊ฒฝ์—์„œ ๋Œ€๊ทœ๋ชจ ๋ชจ๋ธ์„ ํšจ์œจ์ ์œผ๋กœ ์ถ”๋ก ํ•˜๋Š” ๊ฒƒ์€ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๋Š” ๊ฒƒ๋งŒํผ ์–ด๋ ค์šธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด์–ด์ง€๋Š” ์„น์…˜์—์„œ๋Š” CPU ๋ฐ ๋‹จ์ผ/๋‹ค์ค‘ GPU ์„ค์ •์—์„œ ์ถ”๋ก ์„ ์ง„ํ–‰ํ•˜๋Š” ๋‹จ๊ณ„๋ฅผ ์‚ดํŽด๋ด…๋‹ˆ๋‹ค. ### CPU [[cpu]] [CPU ์ถ”๋ก  ์„น์…˜์œผ๋กœ ์ด๋™](perf_infer_cpu) ### ๋‹จ์ผ GPU [[single-gpu]] [๋‹จ์ผ GPU ์ถ”๋ก  ์„น์…˜์œผ๋กœ ์ด๋™](perf_infer_gpu_one) ### ๋‹ค์ค‘ GPU [[multigpu]] [๋‹ค์ค‘ GPU ์ถ”๋ก  ์„น์…˜์œผ๋กœ ์ด๋™](perf_infer_gpu_many) ### ํŠน์ˆ˜ํ•œ ํ•˜๋“œ์›จ์–ด [[specialized-hardware]] [_๊ณง ์ œ๊ณต๋  ์˜ˆ์ •_](perf_infer_special) ## ํ•˜๋“œ์›จ์–ด [[hardware]] ํ•˜๋“œ์›จ์–ด ์„น์…˜์—์„œ๋Š” ์ž์‹ ๋งŒ์˜ ๋”ฅ๋Ÿฌ๋‹ ์žฅ๋น„๋ฅผ ๊ตฌ์ถ•ํ•  ๋•Œ ์œ ์šฉํ•œ ํŒ๊ณผ ์š”๋ น์„ ์‚ดํŽด๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [ํ•˜๋“œ์›จ์–ด ์„น์…˜์œผ๋กœ ์ด๋™](perf_hardware) ## ๊ธฐ์—ฌํ•˜๊ธฐ [[contribute]] ์ด ๋ฌธ์„œ๋Š” ์™„์„ฑ๋˜์ง€ ์•Š์€ ์ƒํƒœ์ด๋ฉฐ, ์ถ”๊ฐ€ํ•ด์•ผ ํ•  ๋‚ด์šฉ์ด๋‚˜ ์ˆ˜์ • ์‚ฌํ•ญ์ด ๋งŽ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์ถ”๊ฐ€ํ•˜๊ฑฐ๋‚˜ ์ˆ˜์ •ํ•  ๋‚ด์šฉ์ด ์žˆ์œผ๋ฉด ์ฃผ์ €ํ•˜์ง€ ๋ง๊ณ  PR์„ ์—ด์–ด ์ฃผ์‹œ๊ฑฐ๋‚˜, ์ž์„ธํ•œ ๋‚ด์šฉ์„ ๋…ผ์˜ํ•˜๊ธฐ ์œ„ํ•ด Issue๋ฅผ ์‹œ์ž‘ํ•ด ์ฃผ์‹œ๊ธฐ ๋ฐ”๋ž๋‹ˆ๋‹ค. A๊ฐ€ B๋ณด๋‹ค ์ข‹๋‹ค๊ณ  ํ•˜๋Š” ๊ธฐ์—ฌ๋ฅผ ํ•  ๋•Œ๋Š”, ์žฌํ˜„ ๊ฐ€๋Šฅํ•œ ๋ฒค์น˜๋งˆํฌ์™€/๋˜๋Š” ํ•ด๋‹น ์ •๋ณด์˜ ์ถœ์ฒ˜ ๋งํฌ๋ฅผ ํฌํ•จํ•ด์ฃผ์„ธ์š”(๋‹น์‹ ์œผ๋กœ๋ถ€ํ„ฐ์˜ ์ง์ ‘์ ์ธ ์ •๋ณด๊ฐ€ ์•„๋‹Œ ๊ฒฝ์šฐ).
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/ko/tflite.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # TFLite๋กœ ๋‚ด๋ณด๋‚ด๊ธฐ[[export-to-tflite]] [TensorFlow Lite](https://www.tensorflow.org/lite/guide)๋Š” ์ž์›์ด ์ œํ•œ๋œ ํœด๋Œ€ํฐ, ์ž„๋ฒ ๋””๋“œ ์‹œ์Šคํ…œ, ์‚ฌ๋ฌผ์ธํ„ฐ๋„ท(IoT) ๊ธฐ๊ธฐ์—์„œ ๊ธฐ๊ณ„ํ•™์Šต ๋ชจ๋ธ์„ ๋ฐฐํฌํ•˜๊ธฐ ์œ„ํ•œ ๊ฒฝ๋Ÿ‰ ํ”„๋ ˆ์ž„์›Œํฌ์ž…๋‹ˆ๋‹ค. TFLite๋Š” ์—ฐ์‚ฐ ๋Šฅ๋ ฅ, ๋ฉ”๋ชจ๋ฆฌ, ์ „๋ ฅ ์†Œ๋น„๊ฐ€ ์ œํ•œ๋œ ๊ธฐ๊ธฐ์—์„œ ๋ชจ๋ธ์„ ํšจ์œจ์ ์œผ๋กœ ์ตœ์ ํ™”ํ•˜๊ณ  ์‹คํ–‰ํ•˜๊ธฐ ์œ„ํ•ด ์„ค๊ณ„๋˜์—ˆ์Šต๋‹ˆ๋‹ค. TensorFlow Lite ๋ชจ๋ธ์€ `.tflite` ํŒŒ์ผ ํ™•์žฅ์ž๋กœ ์‹๋ณ„๋˜๋Š” ํŠน์ˆ˜ํ•˜๊ณ  ํšจ์œจ์ ์ธ ํœด๋Œ€์šฉ ํฌ๋งท์œผ๋กœ ํ‘œํ˜„๋ฉ๋‹ˆ๋‹ค. ๐Ÿค— Optimum์€ `exporters.tflite` ๋ชจ๋“ˆ๋กœ ๐Ÿค— Transformers ๋ชจ๋ธ์„ TFLite๋กœ ๋‚ด๋ณด๋‚ด๋Š” ๊ธฐ๋Šฅ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์ง€์›๋˜๋Š” ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜ ๋ชฉ๋ก์€ [๐Ÿค— Optimum ๋ฌธ์„œ](https://huggingface.co/docs/optimum/exporters/tflite/overview)๋ฅผ ์ฐธ๊ณ ํ•˜์„ธ์š”. ๋ชจ๋ธ์„ TFLite๋กœ ๋‚ด๋ณด๋‚ด๋ ค๋ฉด, ํ•„์š”ํ•œ ์ข…์†์„ฑ์„ ์„ค์น˜ํ•˜์„ธ์š”: ```bash pip install optimum[exporters-tf] ``` ๋ชจ๋“  ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ ์ธ์ˆ˜๋ฅผ ํ™•์ธํ•˜๋ ค๋ฉด, [๐Ÿค— Optimum ๋ฌธ์„œ](https://huggingface.co/docs/optimum/main/en/exporters/tflite/usage_guides/export_a_model)๋ฅผ ์ฐธ๊ณ ํ•˜๊ฑฐ๋‚˜ ํ„ฐ๋ฏธ๋„์—์„œ ๋„์›€๋ง์„ ์‚ดํŽด๋ณด์„ธ์š”: ```bash optimum-cli export tflite --help ``` ์˜ˆ๋ฅผ ๋“ค์–ด ๐Ÿค— Hub์—์„œ์˜ `google-bert/bert-base-uncased` ๋ชจ๋ธ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋‚ด๋ณด๋‚ด๋ ค๋ฉด, ๋‹ค์Œ ๋ช…๋ น์„ ์‹คํ–‰ํ•˜์„ธ์š”: ```bash optimum-cli export tflite --model google-bert/bert-base-uncased --sequence_length 128 bert_tflite/ ``` ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ง„ํ–‰ ์ƒํ™ฉ์„ ๋‚˜ํƒ€๋‚ด๋Š” ๋กœ๊ทธ์™€ ๊ฒฐ๊ณผ๋ฌผ์ธ `model.tflite`๊ฐ€ ์ €์žฅ๋œ ์œ„์น˜๋ฅผ ๋ณด์—ฌ์ฃผ๋Š” ๋กœ๊ทธ๊ฐ€ ํ‘œ์‹œ๋ฉ๋‹ˆ๋‹ค: ```bash Validating TFLite model... -[โœ“] TFLite model output names match reference model (logits) - Validating TFLite Model output "logits": -[โœ“] (1, 128, 30522) matches (1, 128, 30522) -[x] values not close enough, max diff: 5.817413330078125e-05 (atol: 1e-05) The TensorFlow Lite export succeeded with the warning: The maximum absolute difference between the output of the reference model and the TFLite exported model is not within the set tolerance 1e-05: - logits: max diff = 5.817413330078125e-05. The exported model was saved at: bert_tflite ``` ์œ„ ์˜ˆ์ œ๋Š” ๐Ÿค— Hub์—์„œ์˜ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋‚ด๋ณด๋‚ด๋Š” ๋ฐฉ๋ฒ•์„ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค. ๋กœ์ปฌ ๋ชจ๋ธ์„ ๋‚ด๋ณด๋‚ธ๋‹ค๋ฉด, ๋จผ์ € ๋ชจ๋ธ ๊ฐ€์ค‘์น˜์™€ ํ† ํฌ๋‚˜์ด์ € ํŒŒ์ผ์ด ๋ชจ๋‘ ๊ฐ™์€ ๋””๋ ‰ํ„ฐ๋ฆฌ( `local_path` )์— ์ €์žฅ๋๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”. CLI๋ฅผ ์‚ฌ์šฉํ•  ๋•Œ, ๐Ÿค— Hub์—์„œ์˜ ์ฒดํฌํฌ์ธํŠธ ์ด๋ฆ„ ๋Œ€์‹  `model` ์ธ์ˆ˜์— `local_path`๋ฅผ ์ „๋‹ฌํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค.
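๋‚ด๋ณด๋‚ธ `model.tflite` ํŒŒ์ผ์ด ์ •์ƒ์ ์œผ๋กœ ๊ฐ€์ ธ์™€์ง€๋Š”์ง€๋Š” ์ผ๋ฐ˜์ ์ธ TensorFlow Lite ์ธํ„ฐํ”„๋ฆฌํ„ฐ๋กœ ๊ฐ„๋‹จํžˆ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ์œ„ ์˜ˆ์ œ์˜ ์ถœ๋ ฅ ๊ฒฝ๋กœ์ธ `bert_tflite/model.tflite`๋ฅผ ๊ฐ€์ •ํ•œ ๊ฐ„๋‹จํ•œ ์Šค์ผ€์น˜์ž…๋‹ˆ๋‹ค:

```python
import tensorflow as tf

# ๋‚ด๋ณด๋‚ธ TFLite ๋ชจ๋ธ์„ ๊ฐ€์ ธ์™€ ํ…์„œ๋ฅผ ํ• ๋‹นํ•˜๊ณ  ์ž…์ถœ๋ ฅ ์ •๋ณด๋ฅผ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค.
interpreter = tf.lite.Interpreter(model_path="bert_tflite/model.tflite")
interpreter.allocate_tensors()

print(interpreter.get_input_details())
print(interpreter.get_output_details())
```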
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/ko/perf_train_cpu.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # CPU์—์„œ ํšจ์œจ์ ์ธ ํ›ˆ๋ จ [[efficient-training-on-cpu]] ์ด ๊ฐ€์ด๋“œ๋Š” CPU์—์„œ ๋Œ€๊ทœ๋ชจ ๋ชจ๋ธ์„ ํšจ์œจ์ ์œผ๋กœ ํ›ˆ๋ จํ•˜๋Š” ๋ฐ ์ดˆ์ ์„ ๋งž์ถฅ๋‹ˆ๋‹ค. ## IPEX์™€ ํ˜ผํ•ฉ ์ •๋ฐ€๋„ [[mixed-precision-with-ipex]] IPEX๋Š” AVX-512 ์ด์ƒ์„ ์ง€์›ํ•˜๋Š” CPU์— ์ตœ์ ํ™”๋˜์–ด ์žˆ์œผ๋ฉฐ, AVX2๋งŒ ์ง€์›ํ•˜๋Š” CPU์—๋„ ๊ธฐ๋Šฅ์ ์œผ๋กœ ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ AVX-512 ์ด์ƒ์˜ Intel CPU ์„ธ๋Œ€์—์„œ๋Š” ์„ฑ๋Šฅ์ƒ ์ด์ ์ด ์žˆ์„ ๊ฒƒ์œผ๋กœ ์˜ˆ์ƒ๋˜์ง€๋งŒ, AVX2๋งŒ ์ง€์›ํ•˜๋Š” CPU (์˜ˆ: AMD CPU ๋˜๋Š” ์˜ค๋ž˜๋œ Intel CPU)์˜ ๊ฒฝ์šฐ์—๋Š” IPEX ์•„๋ž˜์—์„œ ๋” ๋‚˜์€ ์„ฑ๋Šฅ์„ ๋ณด์ผ ์ˆ˜ ์žˆ์ง€๋งŒ ์ด๋Š” ๋ณด์žฅ๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. IPEX๋Š” Float32์™€ BFloat16๋ฅผ ๋ชจ๋‘ ์‚ฌ์šฉํ•˜์—ฌ CPU ํ›ˆ๋ จ์„ ์œ„ํ•œ ์„ฑ๋Šฅ ์ตœ์ ํ™”๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. BFloat16์˜ ์‚ฌ์šฉ์€ ๋‹ค์Œ ์„น์…˜์˜ ์ฃผ์š” ์ดˆ์ ์ž…๋‹ˆ๋‹ค. ์ €์ •๋ฐ€๋„ ๋ฐ์ดํ„ฐ ํƒ€์ž…์ธ BFloat16์€ 3์„ธ๋Œ€ Xeonยฎ Scalable ํ”„๋กœ์„ธ์„œ (์ฝ”๋“œ๋ช…: Cooper Lake)์—์„œ AVX512 ๋ช…๋ น์–ด ์ง‘ํ•ฉ์„ ๋„ค์ดํ‹ฐ๋ธŒ๋กœ ์ง€์›ํ•ด ์™”์œผ๋ฉฐ, ๋‹ค์Œ ์„ธ๋Œ€์˜ Intelยฎ Xeonยฎ Scalable ํ”„๋กœ์„ธ์„œ์—์„œ Intelยฎ Advanced Matrix Extensions (Intelยฎ AMX) ๋ช…๋ น์–ด ์ง‘ํ•ฉ์„ ์ง€์›ํ•˜์—ฌ ์„ฑ๋Šฅ์„ ํฌ๊ฒŒ ํ–ฅ์ƒ์‹œํ‚ฌ ์˜ˆ์ •์ž…๋‹ˆ๋‹ค. CPU ๋ฐฑ์—”๋“œ์˜ ์ž๋™ ํ˜ผํ•ฉ ์ •๋ฐ€๋„ ๊ธฐ๋Šฅ์€ PyTorch-1.10๋ถ€ํ„ฐ ํ™œ์„ฑํ™”๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ๋™์‹œ์—, Intelยฎ Extension for PyTorch์—์„œ BFloat16์— ๋Œ€ํ•œ CPU์˜ ์ž๋™ ํ˜ผํ•ฉ ์ •๋ฐ€๋„ ๋ฐ ์—ฐ์‚ฐ์ž์˜ BFloat16 ์ตœ์ ํ™”๋ฅผ ๋Œ€๊ทœ๋ชจ๋กœ ํ™œ์„ฑํ™”ํ•˜๊ณ , PyTorch ๋งˆ์Šคํ„ฐ ๋ธŒ๋žœ์น˜๋กœ ๋ถ€๋ถ„์ ์œผ๋กœ ์—…์ŠคํŠธ๋ฆผ์„ ๋ฐ˜์˜ํ–ˆ์Šต๋‹ˆ๋‹ค. ์‚ฌ์šฉ์ž๋“ค์€ IPEX ์ž๋™ ํ˜ผํ•ฉ ์ •๋ฐ€๋„๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋” ๋‚˜์€ ์„ฑ๋Šฅ๊ณผ ์‚ฌ์šฉ์ž ๊ฒฝํ—˜์„ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [์ž๋™ ํ˜ผํ•ฉ ์ •๋ฐ€๋„](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/features/amp.html)์— ๋Œ€ํ•œ ์ž์„ธํ•œ ์ •๋ณด๋ฅผ ํ™•์ธํ•˜์‹ญ์‹œ์˜ค. ### IPEX ์„ค์น˜: [[ipex-installation]] IPEX ๋ฆด๋ฆฌ์Šค๋Š” PyTorch๋ฅผ ๋”ฐ๋ผ๊ฐ‘๋‹ˆ๋‹ค. pip๋ฅผ ํ†ตํ•ด ์„ค์น˜ํ•˜๋ ค๋ฉด: | PyTorch Version | IPEX version | | :---------------: | :----------: | | 1.13 | 1.13.0+cpu | | 1.12 | 1.12.300+cpu | | 1.11 | 1.11.200+cpu | | 1.10 | 1.10.100+cpu | ```bash pip install intel_extension_for_pytorch==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu ``` [IPEX ์„ค์น˜](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/installation.html)์— ๋Œ€ํ•œ ๋” ๋งŽ์€ ์ ‘๊ทผ ๋ฐฉ๋ฒ•์„ ํ™•์ธํ•˜์‹ญ์‹œ์˜ค. ### Trainer์—์„œ์˜ ์‚ฌ์šฉ๋ฒ• [[usage-in-trainer]] Trainer์—์„œ IPEX์˜ ์ž๋™ ํ˜ผํ•ฉ ์ •๋ฐ€๋„๋ฅผ ํ™œ์„ฑํ™”ํ•˜๋ ค๋ฉด ์‚ฌ์šฉ์ž๋Š” ํ›ˆ๋ จ ๋ช…๋ น ์ธ์ˆ˜์— `use_ipex`, `bf16`, `no_cuda`๋ฅผ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. [Transformers ์งˆ๋ฌธ-์‘๋‹ต](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering)์˜ ์‚ฌ์šฉ ์‚ฌ๋ก€๋ฅผ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. 
- CPU์—์„œ BF16 ์ž๋™ ํ˜ผํ•ฉ ์ •๋ฐ€๋„๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ IPEX๋กœ ํ›ˆ๋ จํ•˜๊ธฐ: <pre> python run_qa.py \ --model_name_or_path google-bert/bert-base-uncased \ --dataset_name squad \ --do_train \ --do_eval \ --per_device_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/debug_squad/ \ <b>--use_ipex \</b> <b>--bf16 --no_cuda</b></pre> ### ์‹ค์Šต ์˜ˆ์‹œ [[practice-example]] ๋ธ”๋กœ๊ทธ: [Intel Sapphire Rapids๋กœ PyTorch Transformers ๊ฐ€์†ํ™”](https://huggingface.co/blog/intel-sapphire-rapids)
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/ko/model_summary.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Transformer ๋ชจ๋ธ๊ตฐ[[the-transformer-model-family]] 2017๋…„์— ์†Œ๊ฐœ๋œ [๊ธฐ๋ณธ Transformer](https://arxiv.org/abs/1706.03762) ๋ชจ๋ธ์€ ์ž์—ฐ์–ด ์ฒ˜๋ฆฌ(NLP) ์ž‘์—…์„ ๋„˜์–ด ์ƒˆ๋กญ๊ณ  ํฅ๋ฏธ๋กœ์šด ๋ชจ๋ธ๋“ค์— ์˜๊ฐ์„ ์ฃผ์—ˆ์Šต๋‹ˆ๋‹ค. [๋‹จ๋ฐฑ์งˆ ์ ‘ํž˜ ๊ตฌ์กฐ ์˜ˆ์ธก](https://huggingface.co/blog/deep-learning-with-proteins), [์น˜ํƒ€์˜ ๋‹ฌ๋ฆฌ๊ธฐ ํ›ˆ๋ จ](https://huggingface.co/blog/train-decision-transformers), [์‹œ๊ณ„์—ด ์˜ˆ์ธก](https://huggingface.co/blog/time-series-transformers) ๋“ฑ์„ ์œ„ํ•œ ๋‹ค์–‘ํ•œ ๋ชจ๋ธ์ด ์ƒ๊ฒจ๋‚ฌ์Šต๋‹ˆ๋‹ค. Transformer์˜ ๋ณ€ํ˜•์ด ๋„ˆ๋ฌด ๋งŽ์•„์„œ, ํฐ ๊ทธ๋ฆผ์„ ๋†“์น˜๊ธฐ ์‰ฝ์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์—ฌ๊ธฐ ์žˆ๋Š” ๋ชจ๋“  ๋ชจ๋ธ์˜ ๊ณตํ†ต์ ์€ ๊ธฐ๋ณธ Trasnformer ์•„ํ‚คํ…์ฒ˜๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•œ๋‹ค๋Š” ์ ์ž…๋‹ˆ๋‹ค. ์ผ๋ถ€ ๋ชจ๋ธ์€ ์ธ์ฝ”๋” ๋˜๋Š” ๋””์ฝ”๋”๋งŒ ์‚ฌ์šฉํ•˜๊ณ , ๋‹ค๋ฅธ ๋ชจ๋ธ๋“ค์€ ์ธ์ฝ”๋”์™€ ๋””์ฝ”๋”๋ฅผ ๋ชจ๋‘ ์‚ฌ์šฉํ•˜๊ธฐ๋„ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ Transformer ๋ชจ๋ธ๊ตฐ ๋‚ด ์ƒ์œ„ ๋ ˆ๋ฒจ์—์„œ์˜ ์ฐจ์ด์ ์„ ๋ถ„๋ฅ˜ํ•˜๊ณ  ๊ฒ€ํ† ํ•˜๋ฉด ์œ ์šฉํ•œ ๋ถ„๋ฅ˜ ์ฒด๊ณ„๋ฅผ ์–ป์„ ์ˆ˜ ์žˆ์œผ๋ฉฐ, ์ด์ „์— ์ ‘ํ•ด๋ณด์ง€ ๋ชปํ•œ Transformer ๋ชจ๋ธ๋“ค ๋˜ํ•œ ์ดํ•ดํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๊ธฐ๋ณธ Transformer ๋ชจ๋ธ์— ์ต์ˆ™ํ•˜์ง€ ์•Š๊ฑฐ๋‚˜ ๋ณต์Šต์ด ํ•„์š”ํ•œ ๊ฒฝ์šฐ, Hugging Face ๊ฐ•์˜์˜ [ํŠธ๋žœ์Šคํฌ๋จธ๋Š” ์–ด๋–ป๊ฒŒ ๋™์ž‘ํ•˜๋‚˜์š”?](https://huggingface.co/course/chapter1/4?fw=pt) ์ฑ•ํ„ฐ๋ฅผ ํ™•์ธํ•˜์„ธ์š”. <div align="center"> <iframe width="560" height="315" src="https://www.youtube.com/embed/H39Z_720T5s" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> </div> ## ์ปดํ“จํ„ฐ ๋น„์ „[[computer-vision]] <iframe style="border: 1px solid rgba(0, 0, 0, 0.1);" width="1000" height="450" src="https://www.figma.com/embed?embed_host=share&url=https%3A%2F%2Fwww.figma.com%2Ffile%2FacQBpeFBVvrDUlzFlkejoz%2FModelscape-timeline%3Fnode-id%3D0%253A1%26t%3Dm0zJ7m2BQ9oe0WtO-1" allowfullscreen></iframe> ### ํ•ฉ์„ฑ๊ณฑ ๋„คํŠธ์›Œํฌ[[convolutional-network]] [Vision Transformer](https://arxiv.org/abs/2010.11929)๊ฐ€ ํ™•์žฅ์„ฑ๊ณผ ํšจ์œจ์„ฑ์„ ์ž…์ฆํ•˜๊ธฐ ์ „๊นŒ์ง€ ์˜ค๋žซ๋™์•ˆ ํ•ฉ์„ฑ๊ณฑ ๋„คํŠธ์›Œํฌ(CNN)๊ฐ€ ์ปดํ“จํ„ฐ ๋น„์ „ ์ž‘์—…์˜ ์ง€๋ฐฐ์ ์ธ ํŒจ๋Ÿฌ๋‹ค์ž„์ด์—ˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿผ์—๋„ ๋ถˆ๊ตฌํ•˜๊ณ , ์ด๋™ ๋ถˆ๋ณ€์„ฑ(translation invariance)๊ณผ ๊ฐ™์€ CNN์˜ ์šฐ์ˆ˜ํ•œ ๋ถ€๋ถ„์ด ๋„๋“œ๋ผ์ง€๊ธฐ ๋•Œ๋ฌธ์— ๋ช‡๋ช‡ (ํŠนํžˆ ํŠน์ • ๊ณผ์—…์—์„œ์˜) Transformer ๋ชจ๋ธ์€ ์•„ํ‚คํ…์ฒ˜์— ํ•ฉ์„ฑ๊ณฑ์„ ํ†ตํ•ฉํ•˜๊ธฐ๋„ ํ–ˆ์Šต๋‹ˆ๋‹ค. [ConvNeXt](model_doc/convnext)๋Š” ์ด๋Ÿฐ ๊ด€๋ก€๋ฅผ ๋’ค์ง‘์–ด CNN์„ ํ˜„๋Œ€ํ™”ํ•˜๊ธฐ ์œ„ํ•ด Transformer์˜ ๋””์ž์ธ์„ ์ฐจ์šฉํ•ฉ๋‹ˆ๋‹ค. 
์˜ˆ๋ฅผ ๋“ค๋ฉด ConvNeXt๋Š” ๊ฒน์น˜์ง€ ์•Š๋Š” ์Šฌ๋ผ์ด๋”ฉ ์ฐฝ(sliding window)์„ ์‚ฌ์šฉํ•˜์—ฌ ์ด๋ฏธ์ง€๋ฅผ ํŒจ์น˜ํ™”ํ•˜๊ณ , ๋” ํฐ ์ปค๋„๋กœ ์ „์—ญ ์ˆ˜์šฉ ํ•„๋“œ(global receptive field)๋ฅผ ํ™•์žฅ์‹œํ‚ต๋‹ˆ๋‹ค. ConvNeXt๋Š” ๋˜ํ•œ ๋ฉ”๋ชจ๋ฆฌ ํšจ์œจ์„ ๋†’์ด๊ณ  ์„ฑ๋Šฅ์„ ํ–ฅ์ƒ์‹œํ‚ค๊ธฐ ์œ„ํ•ด ์—ฌ๋Ÿฌ ๋ ˆ์ด์–ด ์„ค๊ณ„๋ฅผ ์„ ํƒํ•˜๊ธฐ ๋•Œ๋ฌธ์— Transformer์™€ ๊ฒฌ์ค„๋งŒํ•ฉ๋‹ˆ๋‹ค! ### ์ธ์ฝ”๋”[[cv-encoder]] [Vision Transformer(ViT)](model_doc/vit)๋Š” ํ•ฉ์„ฑ๊ณฑ ์—†๋Š” ์ปดํ“จํ„ฐ ๋น„์ „ ์ž‘์—…์˜ ๋ง‰์„ ์—ด์—ˆ์Šต๋‹ˆ๋‹ค. ViT๋Š” ํ‘œ์ค€ Transformer ์ธ์ฝ”๋”๋ฅผ ์‚ฌ์šฉํ•˜์ง€๋งŒ, ๊ฐ€์žฅ ํฐ ํ˜์‹ ์€ ์ด๋ฏธ์ง€๋ฅผ ์ฒ˜๋ฆฌํ•˜๋Š” ๋ฐฉ์‹์ด์—ˆ์Šต๋‹ˆ๋‹ค. ๋ฌธ์žฅ์„ ํ† ํฐ์œผ๋กœ ๋ถ„ํ• ํ•˜๋Š” ๊ฒƒ์ฒ˜๋Ÿผ ์ด๋ฏธ์ง€๋ฅผ ๊ณ ์ •๋œ ํฌ๊ธฐ์˜ ํŒจ์น˜๋กœ ๋ถ„ํ• ํ•˜๊ณ , ์ด๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ž„๋ฒ ๋”ฉ์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ViT๋Š” Transformer์˜ ํšจ์œจ์ ์ธ ์•„ํ‚คํ…์ฒ˜๋ฅผ ํ™œ์šฉํ•˜์—ฌ ํ›ˆ๋ จ์— ๋” ์ ์€ ์ž์›์„ ์‚ฌ์šฉํ•˜๋ฉด์„œ๋„ ๋‹น์‹œ CNN์— ๋น„๊ฒฌํ•˜๋Š” ๊ฒฐ๊ณผ๋ฅผ ์ž…์ฆํ–ˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ViT๋ฅผ ๋’ค์ด์–ด ๋ถ„ํ• (segmentation)๊ณผ ๊ฐ™์€ ๊ณ ๋ฐ€๋„ ๋น„์ „ ์ž‘์—…๊ณผ ํƒ์ง€ ์ž‘์—…๋„ ๋‹ค๋ฃฐ ์ˆ˜ ์žˆ๋Š” ๋‹ค๋ฅธ ๋น„์ „ ๋ชจ๋ธ์ด ๋“ฑ์žฅํ–ˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋ชจ๋ธ ์ค‘ ํ•˜๋‚˜๊ฐ€ [Swin](model_doc/swin) Transformer์ž…๋‹ˆ๋‹ค. ์ด ๋ชจ๋ธ์€ ์ž‘์€ ํฌ๊ธฐ์˜ ํŒจ์น˜์—์„œ ๊ณ„์ธต์  ํŠน์ง• ๋งต(CNN ๐Ÿ‘€๊ณผ ๊ฐ™์ง€๋งŒ ViT์™€๋Š” ๋‹ค๋ฆ„)์„ ๋งŒ๋“ค๊ณ  ๋” ๊นŠ์€ ๋ ˆ์ด์–ด์˜ ์ธ์ ‘ ํŒจ์น˜์™€ ๋ณ‘ํ•ฉํ•ฉ๋‹ˆ๋‹ค. ์–ดํ…์…˜(Attention)์€ ์ง€์—ญ ์œˆ๋„์šฐ ๋‚ด์—์„œ๋งŒ ๊ณ„์‚ฐ๋˜๋ฉฐ, ๋ชจ๋ธ์ด ๋” ์ž˜ ํ•™์Šตํ•  ์ˆ˜ ์žˆ๋„๋ก ์–ดํ…์…˜ ๋ ˆ์ด์–ด ๊ฐ„์— ์œˆ๋„์šฐ๋ฅผ ์ด๋™ํ•˜๋ฉฐ ์—ฐ๊ฒฐ์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. Swin Transformer๋Š” ๊ณ„์ธต์  ํŠน์ง• ๋งต์„ ์ƒ์„ฑํ•  ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ, ๋ถ„ํ• (segmentation)๊ณผ ํƒ์ง€์™€ ๊ฐ™์€ ๊ณ ๋ฐ€๋„ ์˜ˆ์ธก ์ž‘์—…์— ์ ํ•ฉํ•ฉ๋‹ˆ๋‹ค. [SegFormer](model_doc/segformer) ์—ญ์‹œ Transformer ์ธ์ฝ”๋”๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๊ณ„์ธต์  ํŠน์ง• ๋งต์„ ๊ตฌ์ถ•ํ•˜์ง€๋งŒ, ์ƒ๋‹จ์— ๊ฐ„๋‹จํ•œ ๋‹ค์ธต ํผ์…‰ํŠธ๋ก (MLP) ๋””์ฝ”๋”๋ฅผ ์ถ”๊ฐ€ํ•˜์—ฌ ๋ชจ๋“  ํŠน์ง• ๋งต์„ ๊ฒฐํ•ฉํ•˜๊ณ  ์˜ˆ์ธก์„ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค. BeIT์™€ ViTMAE์™€ ๊ฐ™์€ ๋‹ค๋ฅธ ๋น„์ „ ๋ชจ๋ธ์€ BERT์˜ ์‚ฌ์ „ํ›ˆ๋ จ ๋ชฉํ‘œ(objective)์—์„œ ์˜๊ฐ์„ ์–ป์—ˆ์Šต๋‹ˆ๋‹ค. [BeIT](model_doc/beit)๋Š” *๋งˆ์Šคํฌ๋“œ ์ด๋ฏธ์ง€ ๋ชจ๋ธ๋ง(MIM)*์œผ๋กœ ์‚ฌ์ „ํ›ˆ๋ จ๋˜๋ฉฐ, ์ด๋ฏธ์ง€ ํŒจ์น˜๋Š” ์ž„์˜๋กœ ๋งˆ์Šคํ‚น๋˜๊ณ  ์ด๋ฏธ์ง€๋„ ์‹œ๊ฐ์  ํ† ํฐ์œผ๋กœ ํ† ํฐํ™”๋ฉ๋‹ˆ๋‹ค. BeIT๋Š” ๋งˆ์Šคํ‚น๋œ ํŒจ์น˜์— ํ•ด๋‹นํ•˜๋Š” ์‹œ๊ฐ์  ํ† ํฐ์„ ์˜ˆ์ธกํ•˜๋„๋ก ํ•™์Šต๋ฉ๋‹ˆ๋‹ค. [ViTMAE](model_doc/vitmae)๋„ ๋น„์Šทํ•œ ์‚ฌ์ „ํ›ˆ๋ จ ๋ชฉํ‘œ๊ฐ€ ์žˆ์ง€๋งŒ, ์‹œ๊ฐ์  ํ† ํฐ ๋Œ€์‹  ํ”ฝ์…€์„ ์˜ˆ์ธกํ•ด์•ผ ํ•œ๋‹ค๋Š” ์ ์ด ๋‹ค๋ฆ…๋‹ˆ๋‹ค. ํŠน์ดํ•œ ์ ์€ ์ด๋ฏธ์ง€ ํŒจ์น˜์˜ 75%๊ฐ€ ๋งˆ์Šคํ‚น๋˜์–ด ์žˆ๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค! ๋””์ฝ”๋”๋Š” ๋งˆ์Šคํ‚น๋œ ํ† ํฐ๊ณผ ์ธ์ฝ”๋”ฉ๋œ ํŒจ์น˜์—์„œ ํ”ฝ์…€์„ ์žฌ๊ตฌ์„ฑํ•ฉ๋‹ˆ๋‹ค. ์‚ฌ์ „ํ›ˆ๋ จ์ด ๋๋‚˜๋ฉด ๋””์ฝ”๋”๋Š” ํ๊ธฐ๋˜๊ณ  ์ธ์ฝ”๋”๋Š” ๋‹ค์šด์ŠคํŠธ๋ฆผ ์ž‘์—…์— ์‚ฌ์šฉํ•  ์ค€๋น„๊ฐ€ ๋ฉ๋‹ˆ๋‹ค. ### ๋””์ฝ”๋”[[cv-decoder]] ๋Œ€๋ถ€๋ถ„์˜ ๋น„์ „ ๋ชจ๋ธ์€ ์ธ์ฝ”๋”์— ์˜์กดํ•˜์—ฌ ์ด๋ฏธ์ง€ ํ‘œํ˜„์„ ํ•™์Šตํ•˜๊ธฐ ๋•Œ๋ฌธ์— ๋””์ฝ”๋” ์ „์šฉ ๋น„์ „ ๋ชจ๋ธ์€ ๋“œ๋ญ…๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์ด๋ฏธ์ง€ ์ƒ์„ฑ ๋“ฑ์˜ ์‚ฌ๋ก€์˜ ๊ฒฝ์šฐ, GPT-2์™€ ๊ฐ™์€ ํ…์ŠคํŠธ ์ƒ์„ฑ ๋ชจ๋ธ์—์„œ ๋ณด์•˜๋“ฏ์ด ๋””์ฝ”๋”๊ฐ€ ๊ฐ€์žฅ ์ ํ•ฉํ•ฉ๋‹ˆ๋‹ค. [ImageGPT](model_doc/imagegpt)๋Š” GPT-2์™€ ๋™์ผํ•œ ์•„ํ‚คํ…์ฒ˜๋ฅผ ์‚ฌ์šฉํ•˜์ง€๋งŒ, ์‹œํ€€์Šค์˜ ๋‹ค์Œ ํ† ํฐ์„ ์˜ˆ์ธกํ•˜๋Š” ๋Œ€์‹  ์ด๋ฏธ์ง€์˜ ๋‹ค์Œ ํ”ฝ์…€์„ ์˜ˆ์ธกํ•ฉ๋‹ˆ๋‹ค. ImageGPT๋Š” ์ด๋ฏธ์ง€ ์ƒ์„ฑ ๋ฟ๋งŒ ์•„๋‹ˆ๋ผ ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•ด ๋ฏธ์„ธ ์กฐ์ •ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. 
### ์ธ์ฝ”๋”-๋””์ฝ”๋”[[cv-encoder-decoder]] ๋น„์ „ ๋ชจ๋ธ์€ ์ผ๋ฐ˜์ ์œผ๋กœ ์ธ์ฝ”๋”(๋ฐฑ๋ณธ์œผ๋กœ๋„ ์•Œ๋ ค์ง)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ค‘์š”ํ•œ ์ด๋ฏธ์ง€ ํŠน์ง•์„ ์ถ”์ถœํ•œ ํ›„, ์ด๋ฅผ Transformer ๋””์ฝ”๋”๋กœ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. [DETR](model_doc/detr)์— ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ฐฑ๋ณธ์ด ์žˆ์ง€๋งŒ, ๊ฐ์ฒด ํƒ์ง€๋ฅผ ์œ„ํ•ด ์™„์ „ํ•œ Transformer ์ธ์ฝ”๋”-๋””์ฝ”๋” ์•„ํ‚คํ…์ฒ˜๋„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ธ์ฝ”๋”๋Š” ์ด๋ฏธ์ง€ ํ‘œํ˜„์„ ํ•™์Šตํ•˜๊ณ  ์ด๋ฅผ ๋””์ฝ”๋”์—์„œ ๊ฐ์ฒด ์ฟผ๋ฆฌ(๊ฐ ๊ฐ์ฒด ์ฟผ๋ฆฌ๋Š” ์ด๋ฏธ์ง€์˜ ์˜์—ญ ๋˜๋Š” ๊ฐ์ฒด์— ์ค‘์ ์„ ๋‘๊ณ  ํ•™์Šต๋œ ์ž„๋ฒ ๋”ฉ)์™€ ๊ฒฐํ•ฉํ•ฉ๋‹ˆ๋‹ค. DETR์€ ๊ฐ ๊ฐ์ฒด ์ฟผ๋ฆฌ์— ๋Œ€ํ•œ ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค ์ขŒํ‘œ์™€ ํด๋ž˜์Šค ๋ ˆ์ด๋ธ”์„ ์˜ˆ์ธกํ•ฉ๋‹ˆ๋‹ค. ## ์ž์—ฐ์–ด์ฒ˜๋ฆฌ[[natural-language-processing]] <iframe style="border: 1px solid rgba(0, 0, 0, 0.1);" width="1000" height="450" src="https://www.figma.com/embed?embed_host=share&url=https%3A%2F%2Fwww.figma.com%2Ffile%2FUhbQAZDlpYW5XEpdFy6GoG%2Fnlp-model-timeline%3Fnode-id%3D0%253A1%26t%3D4mZMr4r1vDEYGJ50-1" allowfullscreen></iframe> ### ์ธ์ฝ”๋”[[nlp-encoder]] [BERT](model_doc/bert)๋Š” ์ธ์ฝ”๋” ์ „์šฉ Transformer๋กœ, ๋‹ค๋ฅธ ํ† ํฐ์„ ๋ณด๊ณ  ์†Œ์œ„ "๋ถ€์ • ํ–‰์œ„"๋ฅผ ์ €์ง€๋ฅด๋Š” ๊ฑธ ๋ง‰๊ธฐ ์œ„ํ•ด ์ž…๋ ฅ์—์„œ ํŠน์ • ํ† ํฐ์„ ์ž„์˜๋กœ ๋งˆ์Šคํ‚นํ•ฉ๋‹ˆ๋‹ค. ์‚ฌ์ „ํ›ˆ๋ จ์˜ ๋ชฉํ‘œ๋Š” ์ปจํ…์ŠคํŠธ๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ๋งˆ์Šคํ‚น๋œ ํ† ํฐ์„ ์˜ˆ์ธกํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด๋ฅผ ํ†ตํ•ด BERT๋Š” ์™ผ์ชฝ๊ณผ ์˜ค๋ฅธ์ชฝ ์ปจํ…์ŠคํŠธ๋ฅผ ์ถฉ๋ถ„ํžˆ ํ™œ์šฉํ•˜์—ฌ ์ž…๋ ฅ์— ๋Œ€ํ•ด ๋” ๊นŠ๊ณ  ํ’๋ถ€ํ•œ ํ‘œํ˜„์„ ํ•™์Šตํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ BERT์˜ ์‚ฌ์ „ํ›ˆ๋ จ ์ „๋žต์—๋Š” ์—ฌ์ „ํžˆ ๊ฐœ์„ ์˜ ์—ฌ์ง€๊ฐ€ ๋‚จ์•„ ์žˆ์—ˆ์Šต๋‹ˆ๋‹ค. [RoBERTa](model_doc/roberta)๋Š” ๋” ๊ธด ์‹œ๊ฐ„ ๋™์•ˆ ๋” ํฐ ๋ฐฐ์น˜์— ๋Œ€ํ•œ ํ›ˆ๋ จ์„ ํฌํ•จํ•˜๊ณ , ์ „์ฒ˜๋ฆฌ ์ค‘์— ํ•œ ๋ฒˆ๋งŒ ๋งˆ์Šคํ‚นํ•˜๋Š” ๊ฒƒ์ด ์•„๋‹ˆ๋ผ ๊ฐ ์—ํญ์—์„œ ํ† ํฐ์„ ์ž„์˜๋กœ ๋งˆ์Šคํ‚นํ•˜๊ณ , ๋‹ค์Œ ๋ฌธ์žฅ ์˜ˆ์ธก ๋ชฉํ‘œ๋ฅผ ์ œ๊ฑฐํ•˜๋Š” ์ƒˆ๋กœ์šด ์‚ฌ์ „ํ›ˆ๋ จ ๋ฐฉ์‹์„ ๋„์ž…ํ•จ์œผ๋กœ์จ ์ด๋ฅผ ๊ฐœ์„ ํ–ˆ์Šต๋‹ˆ๋‹ค. ์„ฑ๋Šฅ ๊ฐœ์„ ์„ ์œ„ํ•œ ์ „๋žต์œผ๋กœ ๋ชจ๋ธ ํฌ๊ธฐ๋ฅผ ํ‚ค์šฐ๋Š” ๊ฒƒ์ด ์ง€๋ฐฐ์ ์ž…๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ํฐ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๋ ค๋ฉด ๊ณ„์‚ฐ ๋น„์šฉ์ด ๋งŽ์ด ๋“ญ๋‹ˆ๋‹ค. ๊ณ„์‚ฐ ๋น„์šฉ์„ ์ค„์ด๋Š” ํ•œ ๊ฐ€์ง€ ๋ฐฉ๋ฒ•์€ [DistilBERT](model_doc/distilbert)์™€ ๊ฐ™์ด ์ž‘์€ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. DistilBERT๋Š” ์••์ถ• ๊ธฐ๋ฒ•์ธ [์ง€์‹ ์ฆ๋ฅ˜(knowledge distillation)](https://arxiv.org/abs/1503.02531)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ, ๊ฑฐ์˜ ๋ชจ๋“  ์–ธ์–ด ์ดํ•ด ๋Šฅ๋ ฅ์„ ์œ ์ง€ํ•˜๋ฉด์„œ ๋” ์ž‘์€ ๋ฒ„์ „์˜ BERT๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ๋Œ€๋ถ€๋ถ„์˜ Transformer ๋ชจ๋ธ์— ๋” ๋งŽ์€ ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝํ–ฅ์ด ์ด์–ด์กŒ๊ณ , ์ด์— ๋”ฐ๋ผ ํ›ˆ๋ จ ํšจ์œจ์„ฑ์„ ๊ฐœ์„ ํ•˜๋Š” ๊ฒƒ์— ์ค‘์ ์„ ๋‘” ์ƒˆ๋กœ์šด ๋ชจ๋ธ์ด ๋“ฑ์žฅํ–ˆ์Šต๋‹ˆ๋‹ค. [ALBERT](model_doc/albert)๋Š” ๋‘ ๊ฐ€์ง€ ๋ฐฉ๋ฒ•์œผ๋กœ ๋งค๊ฐœ๋ณ€์ˆ˜ ์ˆ˜๋ฅผ ์ค„์—ฌ ๋ฉ”๋ชจ๋ฆฌ ์‚ฌ์šฉ๋Ÿ‰์„ ์ค„์˜€์Šต๋‹ˆ๋‹ค. ๋ฐ”๋กœ ํฐ ์–ดํœ˜๋ฅผ ๋‘ ๊ฐœ์˜ ์ž‘์€ ํ–‰๋ ฌ๋กœ ๋ถ„๋ฆฌํ•˜๋Š” ๊ฒƒ๊ณผ ๋ ˆ์ด์–ด๊ฐ€ ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ๊ณต์œ ํ•˜๋„๋ก ํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. [DeBERTa](model_doc/deberta)๋Š” ๋‹จ์–ด์™€ ๊ทธ ์œ„์น˜๋ฅผ ๋‘ ๊ฐœ์˜ ๋ฒกํ„ฐ๋กœ ๊ฐœ๋ณ„์ ์œผ๋กœ ์ธ์ฝ”๋”ฉํ•˜๋Š” ๋ถ„๋ฆฌ๋œ(disentangled) ์–ดํ…์…˜ ๋ฉ”์ปค๋‹ˆ์ฆ˜์„ ์ถ”๊ฐ€ํ–ˆ์Šต๋‹ˆ๋‹ค. ์–ดํ…์…˜์€ ๋‹จ์–ด์™€ ์œ„์น˜ ์ž„๋ฒ ๋”ฉ์„ ํฌํ•จํ•˜๋Š” ๋‹จ์ผ ๋ฒกํ„ฐ ๋Œ€์‹  ์ด ๋ณ„๋„์˜ ๋ฒกํ„ฐ์—์„œ ๊ณ„์‚ฐ๋ฉ๋‹ˆ๋‹ค. [Longformer](model_doc/longformer)๋Š” ํŠนํžˆ ์‹œํ€€์Šค ๊ธธ์ด๊ฐ€ ๊ธด ๋ฌธ์„œ๋ฅผ ์ฒ˜๋ฆฌํ•  ๋•Œ, ์–ดํ…์…˜์„ ๋” ํšจ์œจ์ ์œผ๋กœ ๋งŒ๋“œ๋Š” ๊ฒƒ์— ์ค‘์ ์„ ๋‘์—ˆ์Šต๋‹ˆ๋‹ค. 
์ง€์—ญ(local) ์œˆ๋„์šฐ ์–ดํ…์…˜(๊ฐ ํ† ํฐ ์ฃผ๋ณ€์˜ ๊ณ ์ •๋œ ์œˆ๋„์šฐ ํฌ๊ธฐ์—์„œ๋งŒ ๊ณ„์‚ฐ๋˜๋Š” ์–ดํ…์…˜)๊ณผ ์ „์—ญ(global) ์–ดํ…์…˜(๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•ด `[CLS]`์™€ ๊ฐ™์€ ํŠน์ • ์ž‘์—… ํ† ํฐ์—๋งŒ ํ•ด๋‹น)์˜ ์กฐํ•ฉ์„ ์‚ฌ์šฉํ•˜์—ฌ ์ „์ฒด(full) ์–ดํ…์…˜ ํ–‰๋ ฌ ๋Œ€์‹  ํฌ์†Œ(sparse) ์–ดํ…์…˜ ํ–‰๋ ฌ์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ### ๋””์ฝ”๋”[[nlp-decoder]] [GPT-2](model_doc/gpt2)๋Š” ์‹œํ€€์Šค์—์„œ ๋‹ค์Œ ๋‹จ์–ด๋ฅผ ์˜ˆ์ธกํ•˜๋Š” ๋””์ฝ”๋” ์ „์šฉ Transformer์ž…๋‹ˆ๋‹ค. ํ† ํฐ์„ ์˜ค๋ฅธ์ชฝ์œผ๋กœ ๋งˆ์Šคํ‚นํ•˜์—ฌ ๋ชจ๋ธ์ด ์ด์ „ ํ† ํฐ์„ ๋ณด๊ณ  "๋ถ€์ • ํ–‰์œ„"๋ฅผ ํ•˜์ง€ ๋ชปํ•˜๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. GPT-2๋Š” ๋ฐฉ๋Œ€ํ•œ ํ…์ŠคํŠธ์— ๋Œ€ํ•ด ์‚ฌ์ „ํ›ˆ๋ จํ•˜์—ฌ ํ…์ŠคํŠธ๊ฐ€ ์ผ๋ถ€๋งŒ ์ •ํ™•ํ•˜๊ฑฐ๋‚˜ ์‚ฌ์‹ค์ธ ๊ฒฝ์šฐ์—๋„ ์ƒ๋‹นํžˆ ๋Šฅ์ˆ™ํ•˜๊ฒŒ ํ…์ŠคํŠธ๋ฅผ ์ƒ์„ฑํ•  ์ˆ˜ ์žˆ๊ฒŒ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ GPT-2๋Š” BERT๊ฐ€ ์‚ฌ์ „ํ›ˆ๋ จ์—์„œ ๊ฐ–๋Š” ์–‘๋ฐฉํ–ฅ ์ปจํ…์ŠคํŠธ๊ฐ€ ๋ถ€์กฑํ•˜๊ธฐ ๋•Œ๋ฌธ์— ํŠน์ • ์ž‘์—…์— ์ ํ•ฉํ•˜์ง€ ์•Š์•˜์Šต๋‹ˆ๋‹ค. [XLNET](model_doc/xlnet)์€ ์–‘๋ฐฉํ–ฅ ํ›ˆ๋ จ์ด ๊ฐ€๋Šฅํ•œ permutation language modeling objective(PLM)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ BERT์™€ GPT-2์˜ ์‚ฌ์ „ํ›ˆ๋ จ ๋ชฉํ‘œ์— ๋Œ€ํ•œ ์žฅ์ ์„ ํ•จ๊ป˜ ๊ฐ€์ง€๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. GPT-2 ์ดํ›„, ์–ธ์–ด ๋ชจ๋ธ์€ ๋”์šฑ ๊ฑฐ๋Œ€ํ•ด์กŒ๊ณ  ํ˜„์žฌ๋Š” *๋Œ€๊ทœ๋ชจ ์–ธ์–ด ๋ชจ๋ธ(LLM)*๋กœ ์•Œ๋ ค์ ธ ์žˆ์Šต๋‹ˆ๋‹ค. ์ถฉ๋ถ„ํžˆ ํฐ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋กœ ์‚ฌ์ „ํ›ˆ๋ จ๋œ LLM์€ ํ“จ์ƒท(few-shot) ๋˜๋Š” ์ œ๋กœ์ƒท(zero-shot) ํ•™์Šต์„ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค. [GPT-J](model_doc/gptj)๋Š” 6B ํฌ๊ธฐ์˜ ๋งค๊ฐœ๋ณ€์ˆ˜๊ฐ€ ์žˆ๊ณ  400B ํฌ๊ธฐ์˜ ํ† ํฐ์œผ๋กœ ํ›ˆ๋ จ๋œ LLM์ž…๋‹ˆ๋‹ค. GPT-J์— ์ด์–ด ๋””์ฝ”๋” ์ „์šฉ ๋ชจ๋ธ๊ตฐ์ธ [OPT](model_doc/opt)๊ฐ€ ๋“ฑ์žฅํ–ˆ์œผ๋ฉฐ, ์ด ์ค‘ ๊ฐ€์žฅ ํฐ ๋ชจ๋ธ์€ 175B ํฌ๊ธฐ์ด๊ณ  180B ํฌ๊ธฐ์˜ ํ† ํฐ์œผ๋กœ ํ›ˆ๋ จ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. [BLOOM](model_doc/bloom)์€ ๋น„์Šทํ•œ ์‹œ๊ธฐ์— ์ถœ์‹œ๋˜์—ˆ์œผ๋ฉฐ, ์ด ์ค‘ ๊ฐ€์žฅ ํฐ ๋ชจ๋ธ์€ 176B ํฌ๊ธฐ์˜ ๋งค๊ฐœ๋ณ€์ˆ˜๊ฐ€ ์žˆ๊ณ  46๊ฐœ์˜ ์–ธ์–ด์™€ 13๊ฐœ์˜ ํ”„๋กœ๊ทธ๋ž˜๋ฐ ์–ธ์–ด๋กœ ๋œ 366B ํฌ๊ธฐ์˜ ํ† ํฐ์œผ๋กœ ํ›ˆ๋ จ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ### ์ธ์ฝ”๋”-๋””์ฝ”๋”[[nlp-encoder-decoder]] [BART](model_doc/bart)๋Š” ๊ธฐ๋ณธ Transformer ์•„ํ‚คํ…์ฒ˜๋ฅผ ์œ ์ง€ํ•˜์ง€๋งŒ, ์ผ๋ถ€ ํ…์ŠคํŠธ ์ŠคํŒฌ(span)์ด ๋‹จ์ผ `๋งˆ์Šคํฌ` ํ† ํฐ์œผ๋กœ ๋Œ€์ฒด๋˜๋Š” *text infilling* ๋ณ€ํ˜•์œผ๋กœ ์‚ฌ์ „ํ›ˆ๋ จ ๋ชฉํ‘œ๋ฅผ ์ˆ˜์ •ํ•ฉ๋‹ˆ๋‹ค. ๋””์ฝ”๋”๋Š” ๋ณ€ํ˜•๋˜์ง€ ์•Š์€ ํ† ํฐ(ํ–ฅํ›„ ํ† ํฐ์€ ๋งˆ์Šคํ‚น๋จ)์„ ์˜ˆ์ธกํ•˜๊ณ  ์ธ์ฝ”๋”์˜ ์€๋‹‰ ์ƒํƒœ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ด ์ž‘์—…์„ ๋•์Šต๋‹ˆ๋‹ค. [Pegasus](model_doc/pegasus)๋Š” BART์™€ ์œ ์‚ฌํ•˜์ง€๋งŒ, Pegasus๋Š” ํ…์ŠคํŠธ ์ŠคํŒฌ ๋Œ€์‹  ์ „์ฒด ๋ฌธ์žฅ์„ ๋งˆ์Šคํ‚นํ•ฉ๋‹ˆ๋‹ค. Pegasus๋Š” ๋งˆ์Šคํฌ๋“œ ์–ธ์–ด ๋ชจ๋ธ๋ง ์™ธ์—๋„ gap sentence generation(GSG)๋กœ ์‚ฌ์ „ํ›ˆ๋ จ๋ฉ๋‹ˆ๋‹ค. GSG๋Š” ๋ฌธ์„œ์— ์ค‘์š”ํ•œ ๋ฌธ์žฅ ์ „์ฒด๋ฅผ ๋งˆ์Šคํ‚นํ•˜์—ฌ `๋งˆ์Šคํฌ` ํ† ํฐ์œผ๋กœ ๋Œ€์ฒดํ•˜๋Š” ๊ฒƒ์„ ๋ชฉํ‘œ๋กœ ํ•ฉ๋‹ˆ๋‹ค. ๋””์ฝ”๋”๋Š” ๋‚จ์€ ๋ฌธ์žฅ์—์„œ ์ถœ๋ ฅ์„ ์ƒ์„ฑํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. [T5](model_doc/t5)๋Š” ํŠน์ • ์ ‘๋‘์‚ฌ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋“  NLP ์ž‘์—…์„ ํ…์ŠคํŠธ ํˆฌ ํ…์ŠคํŠธ ๋ฌธ์ œ๋กœ ๋ณ€ํ™˜ํ•˜๋Š” ๋” ํŠน์ˆ˜ํ•œ ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ์ ‘๋‘์‚ฌ `Summarize:`์€ ์š”์•ฝ ์ž‘์—…์„ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. T5๋Š” ์ง€๋„(GLUE ๋ฐ SuperGLUE) ํ›ˆ๋ จ๊ณผ ์ž๊ธฐ์ง€๋„ ํ›ˆ๋ จ(ํ† ํฐ์˜ 15%๋ฅผ ์ž„์˜๋กœ ์ƒ˜ํ”Œ๋งํ•˜์—ฌ ์ œ๊ฑฐ)์œผ๋กœ ์‚ฌ์ „ํ›ˆ๋ จ๋ฉ๋‹ˆ๋‹ค. 
## ์˜ค๋””์˜ค[[audio]] <iframe style="border: 1px solid rgba(0, 0, 0, 0.1);" width="1000" height="450" src="https://www.figma.com/embed?embed_host=share&url=https%3A%2F%2Fwww.figma.com%2Ffile%2Fvrchl8jDV9YwNVPWu2W0kK%2Fspeech-and-audio-model-timeline%3Fnode-id%3D0%253A1%26t%3DmM4H8pPMuK23rClL-1" allowfullscreen></iframe> ### ์ธ์ฝ”๋”[[audio-encoder]] [Wav2Vec2](model_doc/wav2vec2)๋Š” Transformer ์ธ์ฝ”๋”๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์›๋ณธ ์˜ค๋””์˜ค ํŒŒํ˜•(raw audio waveform)์—์„œ ์ง์ ‘ ์Œ์„ฑ ํ‘œํ˜„์„ ํ•™์Šตํ•ฉ๋‹ˆ๋‹ค. ํ—ˆ์œ„ ์Œ์„ฑ ํ‘œํ˜„ ์„ธํŠธ์—์„œ ์‹ค์ œ ์Œ์„ฑ ํ‘œํ˜„์„ ํŒ๋ณ„ํ•˜๋Š” ๋Œ€์กฐ ์ž‘์—…์œผ๋กœ ์‚ฌ์ „ํ›ˆ๋ จ๋ฉ๋‹ˆ๋‹ค. [HuBERT](model_doc/hubert)๋Š” Wav2Vec2์™€ ์œ ์‚ฌํ•˜์ง€๋งŒ ํ›ˆ๋ จ ๊ณผ์ •์ด ๋‹ค๋ฆ…๋‹ˆ๋‹ค. ํƒ€๊ฒŸ ๋ ˆ์ด๋ธ”์ด ์œ ์‚ฌํ•œ ์˜ค๋””์˜ค ์„ธ๊ทธ๋จผํŠธ๊ฐ€ ํด๋Ÿฌ์Šคํ„ฐ์— ํ• ๋‹น๋˜์–ด ์€๋‹‰ ๋‹จ์œ„(unit)๊ฐ€ ๋˜๋Š” ๊ตฐ์ง‘ํ™”(clustering) ๋‹จ๊ณ„์—์„œ ์ƒ์„ฑ๋ฉ๋‹ˆ๋‹ค. ์€๋‹‰ ๋‹จ์œ„๋Š” ์˜ˆ์ธก์„ ์œ„ํ•œ ์ž„๋ฒ ๋”ฉ์— ๋งคํ•‘๋ฉ๋‹ˆ๋‹ค. ### ์ธ์ฝ”๋”-๋””์ฝ”๋”[[audio-encoder-decoder]] [Speech2Text](model_doc/speech_to_text)๋Š” ์ž๋™ ์Œ์„ฑ ์ธ์‹(ASR) ๋ฐ ์Œ์„ฑ ๋ฒˆ์—ญ์„ ์œ„ํ•ด ๊ณ ์•ˆ๋œ ์Œ์„ฑ ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. ์ด ๋ชจ๋ธ์€ ์˜ค๋””์˜ค ํŒŒํ˜•์—์„œ ์ถ”์ถœํ•œ log mel-filter bank ํŠน์ง•์„ ์ฑ„ํƒํ•˜๊ณ  ์ž๊ธฐํšŒ๊ท€ ๋ฐฉ์‹์œผ๋กœ ์‚ฌ์ „ํ›ˆ๋ จํ•˜์—ฌ, ์ „์‚ฌ๋ณธ ๋˜๋Š” ๋ฒˆ์—ญ์„ ๋งŒ๋“ญ๋‹ˆ๋‹ค. [Whisper](model_doc/whisper)์€ ASR ๋ชจ๋ธ์ด์ง€๋งŒ, ๋‹ค๋ฅธ ๋งŽ์€ ์Œ์„ฑ ๋ชจ๋ธ๊ณผ ๋‹ฌ๋ฆฌ ์ œ๋กœ์ƒท ์„ฑ๋Šฅ์„ ์œ„ํ•ด ๋Œ€๋Ÿ‰์˜ โœจ ๋ ˆ์ด๋ธ”์ด ์ง€์ •๋œ โœจ ์˜ค๋””์˜ค ์ „์‚ฌ ๋ฐ์ดํ„ฐ์— ๋Œ€ํ•ด ์‚ฌ์ „ํ›ˆ๋ จ๋ฉ๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ํฐ ๋ฌถ์Œ์—๋Š” ์˜์–ด๊ฐ€ ์•„๋‹Œ ์–ธ์–ด๋„ ํฌํ•จ๋˜์–ด ์žˆ์–ด์„œ ์ž์›์ด ์ ์€ ์–ธ์–ด์—๋„ Whisper๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ตฌ์กฐ์ ์œผ๋กœ, Whisper๋Š” Speech2Text์™€ ์œ ์‚ฌํ•ฉ๋‹ˆ๋‹ค. ์˜ค๋””์˜ค ์‹ ํ˜ธ๋Š” ์ธ์ฝ”๋”์— ์˜ํ•ด ์ธ์ฝ”๋”ฉ๋œ log-mel spectrogram์œผ๋กœ ๋ณ€ํ™˜๋ฉ๋‹ˆ๋‹ค. ๋””์ฝ”๋”๋Š” ์ธ์ฝ”๋”์˜ ์€๋‹‰ ์ƒํƒœ์™€ ์ด์ „ ํ† ํฐ์œผ๋กœ๋ถ€ํ„ฐ ์ž๊ธฐํšŒ๊ท€ ๋ฐฉ์‹์œผ๋กœ ์ „์‚ฌ๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ## ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ[[multimodal]] <iframe style="border: 1px solid rgba(0, 0, 0, 0.1);" width="1000" height="450" src="https://www.figma.com/embed?embed_host=share&url=https%3A%2F%2Fwww.figma.com%2Ffile%2FcX125FQHXJS2gxeICiY93p%2Fmultimodal%3Fnode-id%3D0%253A1%26t%3DhPQwdx3HFPWJWnVf-1" allowfullscreen></iframe> ### ์ธ์ฝ”๋”[[mm-encoder]] [VisualBERT](model_doc/visual_bert)๋Š” BERT ์ดํ›„์— ์ถœ์‹œ๋œ ๋น„์ „ ์–ธ์–ด ์ž‘์—…์„ ์œ„ํ•œ ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. ์ด ๋ชจ๋ธ์€ BERT์™€ ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๊ฐ์ฒด ํƒ์ง€ ์‹œ์Šคํ…œ์„ ๊ฒฐํ•ฉํ•˜์—ฌ ์ด๋ฏธ์ง€ ํŠน์ง•์„ ์‹œ๊ฐ ์ž„๋ฒ ๋”ฉ์œผ๋กœ ์ถ”์ถœํ•˜๊ณ , ํ…์ŠคํŠธ ์ž„๋ฒ ๋”ฉ๊ณผ ํ•จ๊ป˜ BERT๋กœ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. VisualBERT๋Š” ๋งˆ์Šคํ‚น๋˜์ง€ ์•Š์€ ํ…์ŠคํŠธ์™€ ์‹œ๊ฐ ์ž„๋ฒ ๋”ฉ์„ ๊ธฐ๋ฐ˜์œผ๋กœ ๋งˆ์Šคํ‚น๋œ ํ…์ŠคํŠธ๋ฅผ ์˜ˆ์ธกํ•˜๊ณ , ํ…์ŠคํŠธ๊ฐ€ ์ด๋ฏธ์ง€์™€ ์ผ์น˜ํ•˜๋Š”์ง€ ์˜ˆ์ธกํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ViT๊ฐ€ ์ด๋ฏธ์ง€ ์ž„๋ฒ ๋”ฉ์„ ๊ตฌํ•˜๋Š” ๋ฐฉ์‹์ด ๋” ์‰ฌ์› ๊ธฐ ๋•Œ๋ฌธ์—, ViT๊ฐ€ ์ถœ์‹œ๋œ ํ›„ [ViLT](model_doc/vilt)๋Š” ์•„ํ‚คํ…์ฒ˜์— ViT๋ฅผ ์ฑ„ํƒํ–ˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ์ž„๋ฒ ๋”ฉ์€ ํ…์ŠคํŠธ ์ž„๋ฒ ๋”ฉ๊ณผ ํ•จ๊ป˜ ์ฒ˜๋ฆฌ๋ฉ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์—์„œ, ViLT๋Š” ์ด๋ฏธ์ง€ ํ…์ŠคํŠธ ๋งค์นญ, ๋งˆ์Šคํฌ๋“œ ์–ธ์–ด ๋ชจ๋ธ๋ง, ์ „์ฒด ๋‹จ์–ด ๋งˆ์Šคํ‚น์„ ํ†ตํ•ด ์‚ฌ์ „ํ›ˆ๋ จ๋ฉ๋‹ˆ๋‹ค. [CLIP](model_doc/clip)์€ ๋‹ค๋ฅธ ์ ‘๊ทผ ๋ฐฉ์‹์„ ์‚ฌ์šฉํ•˜์—ฌ (`์ด๋ฏธ์ง€`, `ํ…์ŠคํŠธ`)์˜ ์Œ ์˜ˆ์ธก์„ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค. 
(`์ด๋ฏธ์ง€`, `ํ…์ŠคํŠธ`) ์Œ์—์„œ์˜ ์ด๋ฏธ์ง€์™€ ํ…์ŠคํŠธ ์ž„๋ฒ ๋”ฉ ๊ฐ„์˜ ์œ ์‚ฌ๋„๋ฅผ ์ตœ๋Œ€ํ™”ํ•˜๊ธฐ ์œ„ํ•ด 4์–ต ๊ฐœ์˜ (`์ด๋ฏธ์ง€`, `ํ…์ŠคํŠธ`) ์Œ ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ๋Œ€ํ•ด ์ด๋ฏธ์ง€ ์ธ์ฝ”๋”(ViT)์™€ ํ…์ŠคํŠธ ์ธ์ฝ”๋”(Transformer)๋ฅผ ํ•จ๊ป˜ ํ›ˆ๋ จํ•ฉ๋‹ˆ๋‹ค. ์‚ฌ์ „ํ›ˆ๋ จ ํ›„, ์ž์—ฐ์–ด๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ด๋ฏธ์ง€๊ฐ€ ์ฃผ์–ด์ง„ ํ…์ŠคํŠธ๋ฅผ ์˜ˆ์ธกํ•˜๊ฑฐ๋‚˜ ๊ทธ ๋ฐ˜๋Œ€๋กœ ์˜ˆ์ธกํ•˜๋„๋ก CLIP์— ์ง€์‹œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [OWL-ViT](model_doc/owlvit)๋Š” CLIP์„ ์ œ๋กœ์ƒท ๊ฐ์ฒด ํƒ์ง€๋ฅผ ์œ„ํ•œ ๋ฐฑ๋ณธ(backbone)์œผ๋กœ ์‚ฌ์šฉํ•˜์—ฌ CLIP ์ƒ์— ๊ตฌ์ถ•๋ฉ๋‹ˆ๋‹ค. ์‚ฌ์ „ํ›ˆ๋ จ ํ›„, ๊ฐ์ฒด ํƒ์ง€ ํ—ค๋“œ๊ฐ€ ์ถ”๊ฐ€๋˜์–ด (`ํด๋ž˜์Šค`, `๋ฐ”์šด๋”ฉ ๋ฐ•์Šค`) ์Œ์— ๋Œ€ํ•œ ์ง‘ํ•ฉ(set) ์˜ˆ์ธก์„ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค. ### ์ธ์ฝ”๋”-๋””์ฝ”๋”[[mm-encoder-decoder]] ๊ด‘ํ•™ ๋ฌธ์ž ์ธ์‹(OCR)์€ ์ด๋ฏธ์ง€๋ฅผ ์ดํ•ดํ•˜๊ณ  ํ…์ŠคํŠธ๋ฅผ ์ƒ์„ฑํ•˜๊ธฐ ์œ„ํ•ด ๋‹ค์–‘ํ•œ ๊ตฌ์„ฑ ์š”์†Œ๋ฅผ ํ•„์š”๋กœ ํ•˜๋Š” ์ „ํ†ต์ ์ธ ํ…์ŠคํŠธ ์ธ์‹ ์ž‘์—…์ž…๋‹ˆ๋‹ค. [TrOCR](model_doc/trocr)์€ ์ข…๋‹จ๊ฐ„(end-to-end) Transformer๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ด ํ”„๋กœ์„ธ์Šค๋ฅผ ๊ฐ„์†Œํ™”ํ•ฉ๋‹ˆ๋‹ค. ์ธ์ฝ”๋”๋Š” ์ด๋ฏธ์ง€ ์ดํ•ด๋ฅผ ์œ„ํ•œ ViT ๋ฐฉ์‹์˜ ๋ชจ๋ธ์ด๋ฉฐ ์ด๋ฏธ์ง€๋ฅผ ๊ณ ์ •๋œ ํฌ๊ธฐ์˜ ํŒจ์น˜๋กœ ์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค. ๋””์ฝ”๋”๋Š” ์ธ์ฝ”๋”์˜ ์€๋‹‰ ์ƒํƒœ๋ฅผ ๋ฐ›์•„์„œ ์ž๊ธฐํšŒ๊ท€ ๋ฐฉ์‹์œผ๋กœ ํ…์ŠคํŠธ๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. [Donut](model_doc/donut)์€ OCR ๊ธฐ๋ฐ˜ ์ ‘๊ทผ ๋ฐฉ์‹์— ์˜์กดํ•˜์ง€ ์•Š๋Š” ๋” ์ผ๋ฐ˜์ ์ธ ์‹œ๊ฐ ๋ฌธ์„œ ์ดํ•ด ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. ์ด ๋ชจ๋ธ์€ Swin Transformer๋ฅผ ์ธ์ฝ”๋”๋กœ, ๋‹ค๊ตญ์–ด BART๋ฅผ ๋””์ฝ”๋”๋กœ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. Donut์€ ์ด๋ฏธ์ง€์™€ ํ…์ŠคํŠธ ์ฃผ์„์„ ๊ธฐ๋ฐ˜์œผ๋กœ ๋‹ค์Œ ๋‹จ์–ด๋ฅผ ์˜ˆ์ธกํ•˜์—ฌ ํ…์ŠคํŠธ๋ฅผ ์ฝ๋„๋ก ์‚ฌ์ „ํ›ˆ๋ จ๋ฉ๋‹ˆ๋‹ค. ๋””์ฝ”๋”๋Š” ํ”„๋กฌํ”„ํŠธ๊ฐ€ ์ฃผ์–ด์ง€๋ฉด ํ† ํฐ ์‹œํ€€์Šค๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ํ”„๋กฌํ”„ํŠธ๋Š” ๊ฐ ๋‹ค์šด์ŠคํŠธ๋ฆผ ์ž‘์—…์— ๋Œ€ํ•œ ํŠน์ˆ˜ ํ† ํฐ์œผ๋กœ ํ‘œํ˜„๋ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ๋ฌธ์„œ ํŒŒ์‹ฑ(parsing)์—๋Š” ์ธ์ฝ”๋”์˜ ์€๋‹‰ ์ƒํƒœ์™€ ๊ฒฐํ•ฉ๋˜์–ด ๋ฌธ์„œ๋ฅผ ์ •ํ˜• ์ถœ๋ ฅ ํ˜•์‹(JSON)์œผ๋กœ ํŒŒ์‹ฑํ•˜๋Š” ํŠน์ˆ˜ `ํŒŒ์‹ฑ` ํ† ํฐ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ## ๊ฐ•ํ™” ํ•™์Šต[[reinforcement-learning]] <iframe style="border: 1px solid rgba(0, 0, 0, 0.1);" width="1000" height="450" src="https://www.figma.com/embed?embed_host=share&url=https%3A%2F%2Fwww.figma.com%2Ffile%2FiB3Y6RvWYki7ZuKO6tNgZq%2Freinforcement-learning%3Fnode-id%3D0%253A1%26t%3DhPQwdx3HFPWJWnVf-1" allowfullscreen></iframe> ### ๋””์ฝ”๋”[[rl-decoder]] Decision ๋ฐ Trajectory Transformer๋Š” ์ƒํƒœ(state), ํ–‰๋™(action), ๋ณด์ƒ(reward)์„ ์‹œํ€€์Šค ๋ชจ๋ธ๋ง ๋ฌธ์ œ๋กœ ํ‘œํ˜„ํ•ฉ๋‹ˆ๋‹ค. [Decision Transformer](model_doc/decision_transformer)๋Š” ๊ธฐ๋Œ€ ๋ณด์ƒ(returns-to-go), ๊ณผ๊ฑฐ ์ƒํƒœ ๋ฐ ํ–‰๋™์„ ๊ธฐ๋ฐ˜์œผ๋กœ ๋ฏธ๋ž˜์˜ ์›ํ•˜๋Š” ์ˆ˜์ต(return)์œผ๋กœ ์ด์–ด์ง€๋Š” ์ผ๋ จ์˜ ํ–‰๋™์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ๋งˆ์ง€๋ง‰ *K* ์‹œ๊ฐ„ ์Šคํ…(timestep)์— ๋Œ€ํ•ด, ์„ธ ๊ฐ€์ง€ ๋ชจ๋‹ฌ๋ฆฌํ‹ฐ๋Š” ๊ฐ๊ฐ ํ† ํฐ ์ž„๋ฒ ๋”ฉ์œผ๋กœ ๋ณ€ํ™˜๋˜๊ณ  GPT์™€ ๊ฐ™์€ ๋ชจ๋ธ์— ์˜ํ•ด ์ฒ˜๋ฆฌ๋˜์–ด ๋ฏธ๋ž˜์˜ ์•ก์…˜ ํ† ํฐ์„ ์˜ˆ์ธกํ•ฉ๋‹ˆ๋‹ค. [Trajectory Transformer](model_doc/trajectory_transformer)๋„ ์ƒํƒœ, ํ–‰๋™, ๋ณด์ƒ์„ ํ† ํฐํ™”ํ•˜์—ฌ GPT ์•„ํ‚คํ…์ฒ˜๋กœ ์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค. ๋ณด์ƒ ์กฐ๊ฑด์— ์ค‘์ ์„ ๋‘” Decision Transformer์™€ ๋‹ฌ๋ฆฌ Trajectory Transformer๋Š” ๋น” ์„œ์น˜(beam search)๋กœ ๋ฏธ๋ž˜ ํ–‰๋™์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค.
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/ko/autoclass_tutorial.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # AutoClass๋กœ ์‚ฌ์ „ ํ•™์Šต๋œ ์ธ์Šคํ„ด์Šค ๋กœ๋“œ[[load-pretrained-instances-with-an-autoclass]] ํŠธ๋žœ์Šคํฌ๋จธ ์•„ํ‚คํ…์ฒ˜๊ฐ€ ๋งค์šฐ ๋‹ค์–‘ํ•˜๊ธฐ ๋•Œ๋ฌธ์— ์ฒดํฌํฌ์ธํŠธ์— ๋งž๋Š” ์•„ํ‚คํ…์ฒ˜๋ฅผ ์ƒ์„ฑํ•˜๋Š” ๊ฒƒ์ด ์–ด๋ ค์šธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‰ฝ๊ณ  ๊ฐ„๋‹จํ•˜๋ฉฐ ์œ ์—ฐํ•˜๊ฒŒ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•œ Transformer ํ•ต์‹ฌ ์ฒ ํ•™์˜ ์ผํ™˜์œผ๋กœ, `AutoClass`๋Š” ์ฃผ์–ด์ง„ ์ฒดํฌํฌ์ธํŠธ์—์„œ ์˜ฌ๋ฐ”๋ฅธ ์•„ํ‚คํ…์ฒ˜๋ฅผ ์ž๋™์œผ๋กœ ์ถ”๋ก ํ•˜์—ฌ ๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค. `from_pretrained()` ๋ฉ”์„œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ๋ชจ๋“  ์•„ํ‚คํ…์ฒ˜์— ๋Œ€ํ•ด ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์„ ๋น ๋ฅด๊ฒŒ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ ๋ชจ๋ธ์„ ์ฒ˜์Œ๋ถ€ํ„ฐ ํ•™์Šตํ•˜๋Š” ๋ฐ ์‹œ๊ฐ„๊ณผ ๋ฆฌ์†Œ์Šค๋ฅผ ํˆฌ์ž…ํ•  ํ•„์š”๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค. ์ฒดํฌํฌ์ธํŠธ์— ๊ตฌ์• ๋ฐ›์ง€ ์•Š๋Š” ์ฝ”๋“œ๋ฅผ ์ƒ์„ฑํ•œ๋‹ค๋Š” ๊ฒƒ์€ ์ฝ”๋“œ๊ฐ€ ํ•œ ์ฒดํฌํฌ์ธํŠธ์—์„œ ์ž‘๋™ํ•˜๋ฉด ์•„ํ‚คํ…์ฒ˜๊ฐ€ ๋‹ค๋ฅด๋”๋ผ๋„ ๋‹ค๋ฅธ ์ฒดํฌํฌ์ธํŠธ(์œ ์‚ฌํ•œ ์ž‘์—…์— ๋Œ€ํ•ด ํ•™์Šต๋œ ๊ฒฝ์šฐ)์—์„œ๋„ ์ž‘๋™ํ•œ๋‹ค๋Š” ๊ฒƒ์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค. <Tip> ์•„ํ‚คํ…์ฒ˜๋Š” ๋ชจ๋ธ์˜ ๊ณจ๊ฒฉ์„ ์˜๋ฏธํ•˜๋ฉฐ ์ฒดํฌํฌ์ธํŠธ๋Š” ์ฃผ์–ด์ง„ ์•„ํ‚คํ…์ฒ˜์— ๋Œ€ํ•œ ๊ฐ€์ค‘์น˜์ž…๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, [BERT](https://huggingface.co/google-bert/bert-base-uncased)๋Š” ์•„ํ‚คํ…์ฒ˜์ด๊ณ , `google-bert/bert-base-uncased`๋Š” ์ฒดํฌํฌ์ธํŠธ์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ์€ ์•„ํ‚คํ…์ฒ˜ ๋˜๋Š” ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์˜๋ฏธํ•  ์ˆ˜ ์žˆ๋Š” ์ผ๋ฐ˜์ ์ธ ์šฉ์–ด์ž…๋‹ˆ๋‹ค. </Tip> ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ๋Š” ๋‹ค์Œ์„ ํ•™์Šตํ•ฉ๋‹ˆ๋‹ค: * ์‚ฌ์ „ ํ•™์Šต๋œ ํ† ํฌ๋‚˜์ด์ € ๋กœ๋“œํ•˜๊ธฐ. * ์‚ฌ์ „ ํ•™์Šต๋œ ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ ๋กœ๋“œํ•˜๊ธฐ. * ์‚ฌ์ „ ํ•™์Šต๋œ ํŠน์ง• ์ถ”์ถœ๊ธฐ ๋กœ๋“œํ•˜๊ธฐ. * ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ํ”„๋กœ์„ธ์„œ ๋กœ๋“œํ•˜๊ธฐ. * ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ ๋กœ๋“œํ•˜๊ธฐ. ## AutoTokenizer[[autotokenizer]] ๊ฑฐ์˜ ๋ชจ๋“  NLP ์ž‘์—…์€ ํ† ํฌ๋‚˜์ด์ €๋กœ ์‹œ์ž‘๋ฉ๋‹ˆ๋‹ค. ํ† ํฌ๋‚˜์ด์ €๋Š” ์‚ฌ์šฉ์ž์˜ ์ž…๋ ฅ์„ ๋ชจ๋ธ์—์„œ ์ฒ˜๋ฆฌํ•  ์ˆ˜ ์žˆ๋Š” ํ˜•์‹์œผ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค. [`AutoTokenizer.from_pretrained`]๋กœ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased") ``` ๊ทธ๋ฆฌ๊ณ  ์•„๋ž˜์™€ ๊ฐ™์ด ์ž…๋ ฅ์„ ํ† ํฐํ™”ํ•ฉ๋‹ˆ๋‹ค: ```py >>> sequence = "In a hole in the ground there lived a hobbit." >>> print(tokenizer(sequence)) {'input_ids': [101, 1999, 1037, 4920, 1999, 1996, 2598, 2045, 2973, 1037, 7570, 10322, 4183, 1012, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} ``` ## AutoImageProcessor[[autoimageprocessor]] ๋น„์ „ ์ž‘์—…์˜ ๊ฒฝ์šฐ ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๊ฐ€ ์ด๋ฏธ์ง€๋ฅผ ์˜ฌ๋ฐ”๋ฅธ ์ž…๋ ฅ ํ˜•์‹์œผ๋กœ ์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค. 
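[`AutoImageProcessor.from_pretrained`]๋กœ ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค: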
```py >>> from transformers import AutoImageProcessor >>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224") ``` ## AutoFeatureExtractor[[autofeatureextractor]] ์˜ค๋””์˜ค ์ž‘์—…์˜ ๊ฒฝ์šฐ ํŠน์ง• ์ถ”์ถœ๊ธฐ๊ฐ€ ์˜ค๋””์˜ค ์‹ ํ˜ธ๋ฅผ ์˜ฌ๋ฐ”๋ฅธ ์ž…๋ ฅ ํ˜•์‹์œผ๋กœ ์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค. [`AutoFeatureExtractor.from_pretrained`]๋กœ ํŠน์ง• ์ถ”์ถœ๊ธฐ๋ฅผ ๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoFeatureExtractor >>> feature_extractor = AutoFeatureExtractor.from_pretrained( ... "ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition" ... ) ``` ## AutoProcessor[[autoprocessor]] ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ ์ž‘์—…์—๋Š” ๋‘ ๊ฐ€์ง€ ์œ ํ˜•์˜ ์ „์ฒ˜๋ฆฌ ๋„๊ตฌ๋ฅผ ๊ฒฐํ•ฉํ•œ ํ”„๋กœ์„ธ์„œ๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด LayoutLMV2 ๋ชจ๋ธ์—๋Š” ์ด๋ฏธ์ง€๋ฅผ ์ฒ˜๋ฆฌํ•˜๋Š” ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ์™€ ํ…์ŠคํŠธ๋ฅผ ์ฒ˜๋ฆฌํ•˜๋Š” ํ† ํฌ๋‚˜์ด์ €๊ฐ€ ํ•„์š”ํ•˜๋ฉฐ, ํ”„๋กœ์„ธ์„œ๋Š” ์ด ๋‘ ๊ฐ€์ง€๋ฅผ ๊ฒฐํ•ฉํ•ฉ๋‹ˆ๋‹ค. [`AutoProcessor.from_pretrained()`]๋กœ ํ”„๋กœ์„ธ์„œ๋ฅผ ๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoProcessor >>> processor = AutoProcessor.from_pretrained("microsoft/layoutlmv2-base-uncased") ``` ## AutoModel[[automodel]] <frameworkcontent> <pt> ๋งˆ์ง€๋ง‰์œผ๋กœ AutoModelForํด๋ž˜์Šค๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ์ฃผ์–ด์ง„ ์ž‘์—…์— ๋Œ€ํ•ด ๋ฏธ๋ฆฌ ํ•™์Šต๋œ ๋ชจ๋ธ์„ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค (์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ ์ž‘์—…์˜ ์ „์ฒด ๋ชฉ๋ก์€ [์—ฌ๊ธฐ](model_doc/auto)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”). ์˜ˆ๋ฅผ ๋“ค์–ด, [`AutoModelForSequenceClassification.from_pretrained`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์‹œํ€€์Šค ๋ถ„๋ฅ˜์šฉ ๋ชจ๋ธ์„ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased") ``` ๋™์ผํ•œ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์‰ฝ๊ฒŒ ์žฌ์‚ฌ์šฉํ•˜์—ฌ ๋‹ค๋ฅธ ์ž‘์—…์— ์•„ํ‚คํ…์ฒ˜๋ฅผ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForTokenClassification >>> model = AutoModelForTokenClassification.from_pretrained("distilbert/distilbert-base-uncased") ``` <Tip warning={true}> PyTorch๋ชจ๋ธ์˜ ๊ฒฝ์šฐ `from_pretrained()` ๋ฉ”์„œ๋“œ๋Š” ๋‚ด๋ถ€์ ์œผ๋กœ ํ”ผํด์„ ์‚ฌ์šฉํ•˜์—ฌ ์•ˆ์ „ํ•˜์ง€ ์•Š์€ ๊ฒƒ์œผ๋กœ ์•Œ๋ ค์ง„ `torch.load()`๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ ์‹ ๋ขฐํ•  ์ˆ˜ ์—†๋Š” ์†Œ์Šค์—์„œ ๊ฐ€์ ธ์™”๊ฑฐ๋‚˜ ๋ณ€์กฐ๋˜์—ˆ์„ ์ˆ˜ ์žˆ๋Š” ๋ชจ๋ธ์€ ๋กœ๋“œํ•˜์ง€ ๋งˆ์„ธ์š”. ํ—ˆ๊น… ํŽ˜์ด์Šค ํ—ˆ๋ธŒ์—์„œ ํ˜ธ์ŠคํŒ…๋˜๋Š” ๊ณต๊ฐœ ๋ชจ๋ธ์˜ ๊ฒฝ์šฐ ์ด๋Ÿฌํ•œ ๋ณด์•ˆ ์œ„ํ—˜์ด ๋ถ€๋ถ„์ ์œผ๋กœ ์™„ํ™”๋˜๋ฉฐ, ๊ฐ ์ปค๋ฐ‹ ์‹œ ๋ฉ€์›จ์–ด๋ฅผ [๊ฒ€์‚ฌํ•ฉ๋‹ˆ๋‹ค](https://huggingface.co/docs/hub/security-malware). GPG๋ฅผ ์‚ฌ์šฉํ•ด ์„œ๋ช…๋œ [์ปค๋ฐ‹ ๊ฒ€์ฆ](https://huggingface.co/docs/hub/security-gpg#signing-commits-with-gpg)๊ณผ ๊ฐ™์€ ๋ชจ๋ฒ”์‚ฌ๋ก€๋Š” [๋ฌธ์„œ](https://huggingface.co/docs/hub/security)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ํ…์„œํ”Œ๋กœ์šฐ์™€ Flax ์ฒดํฌํฌ์ธํŠธ๋Š” ์˜ํ–ฅ์„ ๋ฐ›์ง€ ์•Š์œผ๋ฉฐ, `from_pretrained`๋ฉ”์„œ๋“œ์— `from_tf` ์™€ `from_flax` ํ‚ค์›Œ๋“œ ๊ฐ€๋ณ€ ์ธ์ž๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ด ๋ฌธ์ œ๋ฅผ ์šฐํšŒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. </Tip> ์ผ๋ฐ˜์ ์œผ๋กœ AutoTokenizer ํด๋ž˜์Šค์™€ AutoModelFor ํด๋ž˜์Šค๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฏธ๋ฆฌ ํ•™์Šต๋œ ๋ชจ๋ธ ์ธ์Šคํ„ด์Šค๋ฅผ ๋กœ๋“œํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ๋งค๋ฒˆ ์˜ฌ๋ฐ”๋ฅธ ์•„ํ‚คํ…์ฒ˜๋ฅผ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ [ํŠœํ† ๋ฆฌ์–ผ](preprocessing)์—์„œ๋Š” ์ƒˆ๋กญ๊ฒŒ ๋กœ๋“œํ•œ ํ† ํฌ๋‚˜์ด์ €, ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ, ํŠน์ง• ์ถ”์ถœ๊ธฐ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฏธ์„ธ ํŠœ๋‹์šฉ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ์•Œ์•„๋ด…๋‹ˆ๋‹ค. 
</pt> <tf> ๋งˆ์ง€๋ง‰์œผ๋กœ `TFAutoModelFor` ํด๋ž˜์Šค๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ์ฃผ์–ด์ง„ ์ž‘์—…์— ๋Œ€ํ•ด ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. (์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ ์ž‘์—…์˜ ์ „์ฒด ๋ชฉ๋ก์€ [์—ฌ๊ธฐ](model_doc/auto)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ์˜ˆ๋ฅผ ๋“ค์–ด, [`TFAutoModelForSequenceClassification.from_pretrained`]๋กœ ์‹œํ€€์Šค ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•œ ๋ชจ๋ธ์„ ๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import TFAutoModelForSequenceClassification >>> model = TFAutoModelForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased") ``` ์‰ฝ๊ฒŒ ๋™์ผํ•œ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์žฌ์‚ฌ์šฉํ•˜์—ฌ ๋‹ค๋ฅธ ์ž‘์—…์— ์•„ํ‚คํ…์ฒ˜๋ฅผ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import TFAutoModelForTokenClassification >>> model = TFAutoModelForTokenClassification.from_pretrained("distilbert/distilbert-base-uncased") ``` ์ผ๋ฐ˜์ ์œผ๋กœ, `AutoTokenizer`ํด๋ž˜์Šค์™€ `TFAutoModelFor` ํด๋ž˜์Šค๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฏธ๋ฆฌ ํ•™์Šต๋œ ๋ชจ๋ธ ์ธ์Šคํ„ด์Šค๋ฅผ ๋กœ๋“œํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ๋งค๋ฒˆ ์˜ฌ๋ฐ”๋ฅธ ์•„ํ‚คํ…์ฒ˜๋ฅผ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ [ํŠœํ† ๋ฆฌ์–ผ](preprocessing)์—์„œ๋Š” ์ƒˆ๋กญ๊ฒŒ ๋กœ๋“œํ•œ ํ† ํฌ๋‚˜์ด์ €, ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ, ํŠน์ง• ์ถ”์ถœ๊ธฐ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฏธ์„ธ ํŠœ๋‹์šฉ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ์•Œ์•„๋ด…๋‹ˆ๋‹ค. </tf> </frameworkcontent>
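๋กœ๋“œํ•œ ํ† ํฌ๋‚˜์ด์ €์™€ ๋ชจ๋ธ์„ ํ•จ๊ป˜ ์‚ฌ์šฉํ•˜๋Š” ์ „์ฒด ํ๋ฆ„์„ ๋ณด์—ฌ์ฃผ๋Š” ๊ฐ„๋‹จํ•œ ์Šค์ผ€์น˜์ž…๋‹ˆ๋‹ค. ์•„๋ž˜ ์˜ˆ์‹œ๋Š” PyTorch๋ฅผ ๊ฐ€์ •ํ•˜๋ฉฐ, `distilbert/distilbert-base-uncased` ์ฒดํฌํฌ์ธํŠธ๋Š” ์‹œํ€€์Šค ๋ถ„๋ฅ˜์šฉ์œผ๋กœ ๋ฏธ์„ธ ์กฐ์ •๋œ ๊ฒƒ์ด ์•„๋‹ˆ๋ฏ€๋กœ ์˜ˆ์ธก๋œ ๋ ˆ์ด๋ธ” ์ž์ฒด๋Š” ์˜๋ฏธ๊ฐ€ ์—†๊ณ , ์‚ฌ์šฉ ๋ฐฉ๋ฒ•๋งŒ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค:

```py
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased")
>>> model = AutoModelForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased")

>>> inputs = tokenizer("In a hole in the ground there lived a hobbit.", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_class_id = logits.argmax(dim=-1).item()
>>> model.config.id2label[predicted_class_id]
```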
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๋‹ค๊ตญ์–ด ๋ชจ๋ธ ์ถ”๋ก ํ•˜๊ธฐ[[multilingual-models-for-inference]] [[open-in-colab]] ๐Ÿค— Transformers์—๋Š” ์—ฌ๋Ÿฌ ์ข…๋ฅ˜์˜ ๋‹ค๊ตญ์–ด(multilingual) ๋ชจ๋ธ์ด ์žˆ์œผ๋ฉฐ, ๋‹จ์ผ ์–ธ์–ด(monolingual) ๋ชจ๋ธ๊ณผ ์ถ”๋ก  ์‹œ ์‚ฌ์šฉ๋ฒ•์ด ๋‹ค๋ฆ…๋‹ˆ๋‹ค. ๊ทธ๋ ‡๋‹ค๊ณ  ํ•ด์„œ *๋ชจ๋“ * ๋‹ค๊ตญ์–ด ๋ชจ๋ธ์˜ ์‚ฌ์šฉ๋ฒ•์ด ๋‹ค๋ฅธ ๊ฒƒ์€ ์•„๋‹™๋‹ˆ๋‹ค. [google-bert/bert-base-multilingual-uncased](https://huggingface.co/google-bert/bert-base-multilingual-uncased)์™€ ๊ฐ™์€ ๋ช‡๋ช‡ ๋ชจ๋ธ์€ ๋‹จ์ผ ์–ธ์–ด ๋ชจ๋ธ์ฒ˜๋Ÿผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฒˆ ๊ฐ€์ด๋“œ์—์„œ ๋‹ค๊ตญ์–ด ๋ชจ๋ธ์˜ ์ถ”๋ก  ์‹œ ์‚ฌ์šฉ ๋ฐฉ๋ฒ•์„ ์•Œ์•„๋ณผ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ## XLM[[xlm]] XLM์—๋Š” 10๊ฐ€์ง€ ์ฒดํฌํฌ์ธํŠธ(checkpoint)๊ฐ€ ์žˆ๋Š”๋ฐ, ์ด ์ค‘ ํ•˜๋‚˜๋งŒ ๋‹จ์ผ ์–ธ์–ด์ž…๋‹ˆ๋‹ค. ๋‚˜๋จธ์ง€ ์ฒดํฌํฌ์ธํŠธ 9๊ฐœ๋Š” ์–ธ์–ด ์ž„๋ฒ ๋”ฉ์„ ์‚ฌ์šฉํ•˜๋Š” ์ฒดํฌํฌ์ธํŠธ์™€ ๊ทธ๋ ‡์ง€ ์•Š์€ ์ฒดํฌํฌ์ธํŠธ์˜ ๋‘ ๊ฐ€์ง€ ๋ฒ”์ฃผ๋กœ ๋‚˜๋ˆŒ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ### ์–ธ์–ด ์ž„๋ฒ ๋”ฉ์„ ์‚ฌ์šฉํ•˜๋Š” XLM[[xlm-with-language-embeddings]] ๋‹ค์Œ XLM ๋ชจ๋ธ์€ ์ถ”๋ก  ์‹œ์— ์–ธ์–ด ์ž„๋ฒ ๋”ฉ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค: - `FacebookAI/xlm-mlm-ende-1024` (๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง, ์˜์–ด-๋…์ผ์–ด) - `FacebookAI/xlm-mlm-enfr-1024` (๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง, ์˜์–ด-ํ”„๋ž‘์Šค์–ด) - `FacebookAI/xlm-mlm-enro-1024` (๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง, ์˜์–ด-๋ฃจ๋งˆ๋‹ˆ์•„์–ด) - `FacebookAI/xlm-mlm-xnli15-1024` (๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง, XNLI ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ ์ œ๊ณตํ•˜๋Š” 15๊ฐœ ๊ตญ์–ด) - `FacebookAI/xlm-mlm-tlm-xnli15-1024` (๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง + ๋ฒˆ์—ญ, XNLI ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ ์ œ๊ณตํ•˜๋Š” 15๊ฐœ ๊ตญ์–ด) - `FacebookAI/xlm-clm-enfr-1024` (Causal language modeling, ์˜์–ด-ํ”„๋ž‘์Šค์–ด) - `FacebookAI/xlm-clm-ende-1024` (Causal language modeling, ์˜์–ด-๋…์ผ์–ด) ์–ธ์–ด ์ž„๋ฒ ๋”ฉ์€ ๋ชจ๋ธ์— ์ „๋‹ฌ๋œ `input_ids`์™€ ๋™์ผํ•œ shape์˜ ํ…์„œ๋กœ ํ‘œํ˜„๋ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ํ…์„œ์˜ ๊ฐ’์€ ์‚ฌ์šฉ๋œ ์–ธ์–ด์— ๋”ฐ๋ผ ๋‹ค๋ฅด๋ฉฐ ํ† ํฌ๋‚˜์ด์ €์˜ `lang2id` ๋ฐ `id2lang` ์†์„ฑ์— ์˜ํ•ด ์‹๋ณ„๋ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ ์˜ˆ์ œ์—์„œ๋Š” `FacebookAI/xlm-clm-enfr-1024` ์ฒดํฌํฌ์ธํŠธ(์ฝ”์ž˜ ์–ธ์–ด ๋ชจ๋ธ๋ง(causal language modeling), ์˜์–ด-ํ”„๋ž‘์Šค์–ด)๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> import torch >>> from transformers import XLMTokenizer, XLMWithLMHeadModel >>> tokenizer = XLMTokenizer.from_pretrained("FacebookAI/xlm-clm-enfr-1024") >>> model = XLMWithLMHeadModel.from_pretrained("FacebookAI/xlm-clm-enfr-1024") ``` ํ† ํฌ๋‚˜์ด์ €์˜ `lang2id` ์†์„ฑ์€ ๋ชจ๋ธ์˜ ์–ธ์–ด์™€ ํ•ด๋‹น ID๋ฅผ ํ‘œ์‹œํ•ฉ๋‹ˆ๋‹ค: ```py >>> print(tokenizer.lang2id) {'en': 0, 'fr': 1} ``` ๋‹ค์Œ์œผ๋กœ, ์˜ˆ์ œ ์ž…๋ ฅ์„ ๋งŒ๋“ญ๋‹ˆ๋‹ค: ```py >>> input_ids = torch.tensor([tokenizer.encode("Wikipedia was used to")]) # ๋ฐฐ์น˜ ํฌ๊ธฐ๋Š” 1์ž…๋‹ˆ๋‹ค ``` ์–ธ์–ด ID๋ฅผ `"en"`์œผ๋กœ ์„ค์ •ํ•ด ์–ธ์–ด ์ž„๋ฒ ๋”ฉ์„ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. 
์–ธ์–ด ์ž„๋ฒ ๋”ฉ์€ ์˜์–ด์˜ ์–ธ์–ด ID์ธ `0`์œผ๋กœ ์ฑ„์›Œ์ง„ ํ…์„œ์ž…๋‹ˆ๋‹ค. ์ด ํ…์„œ๋Š” `input_ids`์™€ ๊ฐ™์€ ํฌ๊ธฐ์—ฌ์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```py >>> language_id = tokenizer.lang2id["en"] # 0 >>> langs = torch.tensor([language_id] * input_ids.shape[1]) # torch.tensor([0, 0, 0, ..., 0]) >>> # (batch_size, sequence_length) shape์˜ ํ…์„œ๊ฐ€ ๋˜๋„๋ก ๋งŒ๋“ญ๋‹ˆ๋‹ค. >>> langs = langs.view(1, -1) # ์ด์ œ [1, sequence_length] shape์ด ๋˜์—ˆ์Šต๋‹ˆ๋‹ค(๋ฐฐ์น˜ ํฌ๊ธฐ๋Š” 1์ž…๋‹ˆ๋‹ค) ``` ์ด์ œ `input_ids`์™€ ์–ธ์–ด ์ž„๋ฒ ๋”ฉ์„ ๋ชจ๋ธ๋กœ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค: ```py >>> outputs = model(input_ids, langs=langs) ``` [run_generation.py](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-generation/run_generation.py) ์Šคํฌ๋ฆฝํŠธ๋กœ `xlm-clm` ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์‚ฌ์šฉํ•ด ํ…์ŠคํŠธ์™€ ์–ธ์–ด ์ž„๋ฒ ๋”ฉ์„ ์ƒ์„ฑํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ### ์–ธ์–ด ์ž„๋ฒ ๋”ฉ์„ ์‚ฌ์šฉํ•˜์ง€ ์•Š๋Š” XLM[[xlm-without-language-embeddings]] ๋‹ค์Œ XLM ๋ชจ๋ธ์€ ์ถ”๋ก  ์‹œ์— ์–ธ์–ด ์ž„๋ฒ ๋”ฉ์ด ํ•„์š”ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค: - `FacebookAI/xlm-mlm-17-1280` (๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง, 17๊ฐœ ๊ตญ์–ด) - `FacebookAI/xlm-mlm-100-1280` (๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง, 100๊ฐœ ๊ตญ์–ด) ์ด์ „์˜ XLM ์ฒดํฌํฌ์ธํŠธ์™€ ๋‹ฌ๋ฆฌ ์ด ๋ชจ๋ธ์€ ์ผ๋ฐ˜ ๋ฌธ์žฅ ํ‘œํ˜„์— ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ## BERT[[bert]] ๋‹ค์Œ BERT ๋ชจ๋ธ์€ ๋‹ค๊ตญ์–ด ํƒœ์Šคํฌ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: - `google-bert/bert-base-multilingual-uncased` (๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง + ๋‹ค์Œ ๋ฌธ์žฅ ์˜ˆ์ธก, 102๊ฐœ ๊ตญ์–ด) - `google-bert/bert-base-multilingual-cased` (๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง + ๋‹ค์Œ ๋ฌธ์žฅ ์˜ˆ์ธก, 104๊ฐœ ๊ตญ์–ด) ์ด๋Ÿฌํ•œ ๋ชจ๋ธ์€ ์ถ”๋ก  ์‹œ์— ์–ธ์–ด ์ž„๋ฒ ๋”ฉ์ด ํ•„์š”ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๋ฌธ๋งฅ์—์„œ ์–ธ์–ด๋ฅผ ์‹๋ณ„ํ•˜๊ณ , ์‹๋ณ„๋œ ์–ธ์–ด๋กœ ์ถ”๋ก ํ•ฉ๋‹ˆ๋‹ค. ## XLM-RoBERTa[[xlmroberta]] ๋‹ค์Œ XLM-RoBERTa ๋˜ํ•œ ๋‹ค๊ตญ์–ด ๋‹ค๊ตญ์–ด ํƒœ์Šคํฌ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: - `FacebookAI/xlm-roberta-base` (๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง, 100๊ฐœ ๊ตญ์–ด) - `FacebookAI/xlm-roberta-large` (๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง, 100๊ฐœ ๊ตญ์–ด) XLM-RoBERTa๋Š” 100๊ฐœ ๊ตญ์–ด์— ๋Œ€ํ•ด ์ƒˆ๋กœ ์ƒ์„ฑ๋˜๊ณ  ์ •์ œ๋œ 2.5TB ๊ทœ๋ชจ์˜ CommonCrawl ๋ฐ์ดํ„ฐ๋กœ ํ•™์Šต๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ด์ „์— ๊ณต๊ฐœ๋œ mBERT๋‚˜ XLM๊ณผ ๊ฐ™์€ ๋‹ค๊ตญ์–ด ๋ชจ๋ธ์— ๋น„ํ•ด ๋ถ„๋ฅ˜, ์‹œํ€€์Šค ๋ผ๋ฒจ๋ง, ์งˆ์˜ ์‘๋‹ต๊ณผ ๊ฐ™์€ ๋‹ค์šด์ŠคํŠธ๋ฆผ(downstream) ์ž‘์—…์—์„œ ์ด์ ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ## M2M100[[m2m100]] ๋‹ค์Œ M2M100 ๋ชจ๋ธ ๋˜ํ•œ ๋‹ค๊ตญ์–ด ๋‹ค๊ตญ์–ด ํƒœ์Šคํฌ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: - `facebook/m2m100_418M` (๋ฒˆ์—ญ) - `facebook/m2m100_1.2B` (๋ฒˆ์—ญ) ์ด ์˜ˆ์ œ์—์„œ๋Š” `facebook/m2m100_418M` ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๊ฐ€์ ธ์™€์„œ ์ค‘๊ตญ์–ด๋ฅผ ์˜์–ด๋กœ ๋ฒˆ์—ญํ•ฉ๋‹ˆ๋‹ค. ํ† ํฌ๋‚˜์ด์ €์—์„œ ๋ฒˆ์—ญ ๋Œ€์ƒ ์–ธ์–ด(source language)๋ฅผ ์„ค์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer >>> en_text = "Do not meddle in the affairs of wizards, for they are subtle and quick to anger." >>> chinese_text = "ไธ่ฆๆ’ๆ‰‹ๅทซๅธซ็š„ไบ‹ๅ‹™, ๅ› ็‚บไป–ๅ€‘ๆ˜ฏๅพฎๅฆ™็š„, ๅพˆๅฟซๅฐฑๆœƒ็™ผๆ€’." >>> tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="zh") >>> model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M") ``` ๋ฌธ์žฅ์„ ํ† ํฐํ™”ํ•ฉ๋‹ˆ๋‹ค: ```py >>> encoded_zh = tokenizer(chinese_text, return_tensors="pt") ``` M2M100์€ ๋ฒˆ์—ญ์„ ์ง„ํ–‰ํ•˜๊ธฐ ์œ„ํ•ด ์ฒซ ๋ฒˆ์งธ๋กœ ์ƒ์„ฑ๋˜๋Š” ํ† ํฐ์€ ๋ฒˆ์—ญํ•  ์–ธ์–ด(target language) ID๋กœ ๊ฐ•์ œ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค. 
์˜์–ด๋กœ ๋ฒˆ์—ญํ•˜๊ธฐ ์œ„ํ•ด `generate` ๋ฉ”์†Œ๋“œ์—์„œ `forced_bos_token_id`๋ฅผ `en`์œผ๋กœ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค: ```py >>> generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en")) >>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) 'Do not interfere with the matters of the witches, because they are delicate and will soon be angry.' ``` ## MBart[[mbart]] ๋‹ค์Œ MBart ๋ชจ๋ธ ๋˜ํ•œ ๋‹ค๊ตญ์–ด ํƒœ์Šคํฌ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: - `facebook/mbart-large-50-one-to-many-mmt` (์ผ๋Œ€๋‹ค ๋‹ค๊ตญ์–ด ๋ฒˆ์—ญ, 50๊ฐœ ๊ตญ์–ด) - `facebook/mbart-large-50-many-to-many-mmt` (๋‹ค๋Œ€๋‹ค ๋‹ค๊ตญ์–ด ๋ฒˆ์—ญ, 50๊ฐœ ๊ตญ์–ด) - `facebook/mbart-large-50-many-to-one-mmt` (๋‹ค๋Œ€์ผ ๋‹ค๊ตญ์–ด ๋ฒˆ์—ญ, 50๊ฐœ ๊ตญ์–ด) - `facebook/mbart-large-50` (๋‹ค๊ตญ์–ด ๋ฒˆ์—ญ, 50๊ฐœ ๊ตญ์–ด) - `facebook/mbart-large-cc25` ์ด ์˜ˆ์ œ์—์„œ๋Š” ํ•€๋ž€๋“œ์–ด๋ฅผ ์˜์–ด๋กœ ๋ฒˆ์—ญํ•˜๊ธฐ ์œ„ํ•ด `facebook/mbart-large-50-many-to-many-mmt` ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. ํ† ํฌ๋‚˜์ด์ €์—์„œ ๋ฒˆ์—ญ ๋Œ€์ƒ ์–ธ์–ด(source language)๋ฅผ ์„ค์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM >>> en_text = "Do not meddle in the affairs of wizards, for they are subtle and quick to anger." >>> fi_text = "ร„lรค sekaannu velhojen asioihin, sillรค ne ovat hienovaraisia ja nopeasti vihaisia." >>> tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-50-many-to-many-mmt", src_lang="fi_FI") >>> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-50-many-to-many-mmt") ``` ๋ฌธ์žฅ์„ ํ† ํฐํ™”ํ•ฉ๋‹ˆ๋‹ค: ```py >>> encoded_en = tokenizer(en_text, return_tensors="pt") ``` MBart๋Š” ๋ฒˆ์—ญ์„ ์ง„ํ–‰ํ•˜๊ธฐ ์œ„ํ•ด ์ฒซ ๋ฒˆ์งธ๋กœ ์ƒ์„ฑ๋˜๋Š” ํ† ํฐ์€ ๋ฒˆ์—ญํ•  ์–ธ์–ด(target language) ID๋กœ ๊ฐ•์ œ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค. ์˜์–ด๋กœ ๋ฒˆ์—ญํ•˜๊ธฐ ์œ„ํ•ด `generate` ๋ฉ”์†Œ๋“œ์—์„œ `forced_bos_token_id`๋ฅผ `en`์œผ๋กœ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค: ```py >>> generated_tokens = model.generate(**encoded_en, forced_bos_token_id=tokenizer.lang_code_to_id("en_XX")) >>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) "Don't interfere with the wizard's affairs, because they are subtle, will soon get angry." ``` `facebook/mbart-large-50-many-to-one-mmt` ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜๊ณ  ์žˆ๋‹ค๋ฉด, ์ฒซ ๋ฒˆ์งธ๋กœ ์ƒ์„ฑ๋˜๋Š” ํ† ํฐ์„ ๋ฒˆ์—ญํ•  ์–ธ์–ด(target language) ID๋กœ ๊ฐ•์ œ ์ง€์ •ํ•  ํ•„์š”๋Š” ์—†์Šต๋‹ˆ๋‹ค.
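์˜ˆ๋ฅผ ๋“ค์–ด, ์•„๋ž˜๋Š” `facebook/mbart-large-50-many-to-one-mmt` ์ฒดํฌํฌ์ธํŠธ๋กœ ์œ„์˜ `fi_text` ๋ฌธ์žฅ์„ ์˜์–ด๋กœ ๋ฒˆ์—ญํ•œ๋‹ค๊ณ  ๊ฐ€์ •ํ•œ ๊ฐ„๋‹จํ•œ ์Šค์ผ€์น˜์ž…๋‹ˆ๋‹ค. ์ด ์ฒดํฌํฌ์ธํŠธ๋Š” ํ•ญ์ƒ ์˜์–ด๋กœ ๋ฒˆ์—ญํ•˜๋ฏ€๋กœ `forced_bos_token_id`๋ฅผ ์ง€์ •ํ•˜์ง€ ์•Š์•„๋„ ๋ฉ๋‹ˆ๋‹ค:

```py
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-50-many-to-one-mmt", src_lang="fi_FI")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-50-many-to-one-mmt")

>>> encoded_fi = tokenizer(fi_text, return_tensors="pt")
>>> generated_tokens = model.generate(**encoded_fi)
>>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
```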
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๋ชจ๋ธ ๊ณต์œ ํ•˜๊ธฐ[[share-a-model]] ์ง€๋‚œ ๋‘ ํŠœํ† ๋ฆฌ์–ผ์—์„œ ๋ถ„์‚ฐ ์„ค์ •์„ ์œ„ํ•ด PyTorch, Keras ๋ฐ ๐Ÿค— Accelerate๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ณด์•˜์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ ๋‹จ๊ณ„๋Š” ๋ชจ๋ธ์„ ์ปค๋ฎค๋‹ˆํ‹ฐ์™€ ๊ณต์œ ํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค! Hugging Face๋Š” ์ธ๊ณต์ง€๋Šฅ์˜ ๋ฏผ์ฃผํ™”๋ฅผ ์œ„ํ•ด ๋ชจ๋‘์—๊ฒŒ ์ง€์‹๊ณผ ์ž์›์„ ๊ณต๊ฐœ์ ์œผ๋กœ ๊ณต์œ ํ•ด์•ผ ํ•œ๋‹ค๊ณ  ๋ฏฟ์Šต๋‹ˆ๋‹ค. ๋‹ค๋ฅธ ์‚ฌ๋žŒ๋“ค์ด ์‹œ๊ฐ„๊ณผ ์ž์›์„ ์ ˆ์•ฝํ•  ์ˆ˜ ์žˆ๋„๋ก ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๋ชจ๋ธ์„ ๊ณต์œ ํ•˜๋Š” ๊ฒƒ์„ ๊ณ ๋ คํ•ด ๋ณด์„ธ์š”. ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ [Model Hub](https://huggingface.co/models)์—์„œ ํ›ˆ๋ จ๋˜๊ฑฐ๋‚˜ ๋ฏธ์„ธ ์กฐ์ • ๋ชจ๋ธ์„ ๊ณต์œ ํ•˜๋Š” ๋‘ ๊ฐ€์ง€ ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ์•Œ์•„๋ด…์‹œ๋‹ค: - API๋ฅผ ํ†ตํ•ด ํŒŒ์ผ์„ Hub์— ํ‘ธ์‹œํ•ฉ๋‹ˆ๋‹ค. - ์›น์‚ฌ์ดํŠธ๋ฅผ ํ†ตํ•ด ํŒŒ์ผ์„ Hub๋กœ ๋Œ์–ด๋‹ค ๋†“์Šต๋‹ˆ๋‹ค. <iframe width="560" height="315" src="https://www.youtube.com/embed/XvSGPZFEjDY" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> <Tip> ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๋ชจ๋ธ์„ ๊ณต์œ ํ•˜๋ ค๋ฉด, [huggingface.co](https://huggingface.co/join)์— ๊ณ„์ •์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ๊ธฐ์กด ์กฐ์ง์— ๊ฐ€์ž…ํ•˜๊ฑฐ๋‚˜ ์ƒˆ๋กœ ๋งŒ๋“ค ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. </Tip> ## ์ €์žฅ์†Œ ํŠน์ง•[[repository-features]] ๋ชจ๋ธ ํ—ˆ๋ธŒ์˜ ๊ฐ ์ €์žฅ์†Œ๋Š” ์ผ๋ฐ˜์ ์ธ GitHub ์ €์žฅ์†Œ์ฒ˜๋Ÿผ ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค. ์ €์žฅ์†Œ๋Š” ๋ฒ„์ „ ๊ด€๋ฆฌ, ์ปค๋ฐ‹ ๊ธฐ๋ก, ์ฐจ์ด์  ์‹œ๊ฐํ™” ๊ธฐ๋Šฅ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ ํ—ˆ๋ธŒ์— ๋‚ด์žฅ๋œ ๋ฒ„์ „ ๊ด€๋ฆฌ๋Š” git ๋ฐ [git-lfs](https://git-lfs.github.com/)๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•ฉ๋‹ˆ๋‹ค. ์ฆ‰, ํ•˜๋‚˜์˜ ๋ชจ๋ธ์„ ํ•˜๋‚˜์˜ ์ €์žฅ์†Œ๋กœ ์ทจ๊ธ‰ํ•˜์—ฌ ์ ‘๊ทผ ์ œ์–ด ๋ฐ ํ™•์žฅ์„ฑ์ด ํ–ฅ์ƒ๋ฉ๋‹ˆ๋‹ค. ๋ฒ„์ „ ์ œ์–ด๋Š” ์ปค๋ฐ‹ ํ•ด์‹œ, ํƒœ๊ทธ ๋˜๋Š” ๋ธŒ๋žœ์น˜๋กœ ๋ชจ๋ธ์˜ ํŠน์ • ๋ฒ„์ „์„ ๊ณ ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์ธ *revision*์„ ํ—ˆ์šฉํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ `revision` ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํŠน์ • ๋ชจ๋ธ ๋ฒ„์ „์„ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> model = AutoModel.from_pretrained( ... "julien-c/EsperBERTo-small", revision="v2.0.1" # tag name, or branch name, or commit hash ... ) ``` ๋˜ํ•œ ์ €์žฅ์†Œ์—์„œ ํŒŒ์ผ์„ ์‰ฝ๊ฒŒ ํŽธ์ง‘ํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ, ์ปค๋ฐ‹ ๊ธฐ๋ก๊ณผ ์ฐจ์ด๋ฅผ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ![vis_diff](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/vis_diff.png) ## ์„ค์ •[[setup]] ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ๊ณต์œ ํ•˜๊ธฐ ์ „์— Hugging Face ์ž๊ฒฉ ์ฆ๋ช…์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ํ„ฐ๋ฏธ๋„์— ์•ก์„ธ์Šคํ•  ์ˆ˜ ์žˆ๋Š” ๊ฒฝ์šฐ, ๐Ÿค— Transformers๊ฐ€ ์„ค์น˜๋œ ๊ฐ€์ƒ ํ™˜๊ฒฝ์—์„œ ๋‹ค์Œ ๋ช…๋ น์„ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. 
๊ทธ๋Ÿฌ๋ฉด Hugging Face ์บ์‹œ ํด๋”(๊ธฐ๋ณธ์ ์œผ๋กœ `~/.cache/`)์— ์•ก์„ธ์Šค ํ† ํฐ์„ ์ €์žฅํ•ฉ๋‹ˆ๋‹ค: ```bash huggingface-cli login ``` Jupyter ๋˜๋Š” Colaboratory์™€ ๊ฐ™์€ ๋…ธํŠธ๋ถ์„ ์‚ฌ์šฉ ์ค‘์ธ ๊ฒฝ์šฐ, [`huggingface_hub`](https://huggingface.co/docs/hub/adding-a-library) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ์„ค์น˜๋˜์—ˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”. ์ด ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด API๋กœ ํ—ˆ๋ธŒ์™€ ์ƒํ˜ธ ์ž‘์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```bash pip install huggingface_hub ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ `notebook_login`๋กœ ํ—ˆ๋ธŒ์— ๋กœ๊ทธ์ธํ•˜๊ณ , [์—ฌ๊ธฐ](https://huggingface.co/settings/token) ๋งํฌ์—์„œ ๋กœ๊ทธ์ธํ•  ํ† ํฐ์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## ํ”„๋ ˆ์ž„์›Œํฌ ๊ฐ„ ๋ชจ๋ธ ๋ณ€ํ™˜ํ•˜๊ธฐ[[convert-a-model-for-all-frameworks]] ๋‹ค๋ฅธ ํ”„๋ ˆ์ž„์›Œํฌ๋กœ ์ž‘์—…ํ•˜๋Š” ์‚ฌ์šฉ์ž๊ฐ€ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•˜๋ ค๋ฉด, PyTorch ๋ฐ TensorFlow ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋ชจ๋‘ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ณ€ํ™˜ํ•˜๊ณ  ์—…๋กœ๋“œํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ์ด ๋‹จ๊ณ„๋ฅผ ๊ฑด๋„ˆ๋›ฐ์–ด๋„ ์‚ฌ์šฉ์ž๋Š” ๋‹ค๋ฅธ ํ”„๋ ˆ์ž„์›Œํฌ์—์„œ ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์ง€๋งŒ, ๐Ÿค— Transformers๊ฐ€ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ฆ‰์„์—์„œ ๋ณ€ํ™˜ํ•ด์•ผ ํ•˜๋ฏ€๋กœ ์†๋„๊ฐ€ ๋Š๋ ค์งˆ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋‹ค๋ฅธ ํ”„๋ ˆ์ž„์›Œํฌ๋กœ ๋ณ€ํ™˜ํ•˜๋Š” ๊ฒƒ์€ ์‰ฝ์Šต๋‹ˆ๋‹ค. PyTorch ๋ฐ TensorFlow๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•œ ๋‹ค์Œ(์„ค์น˜ ์ง€์นจ์€ [์—ฌ๊ธฐ](installation) ์ฐธ์กฐ) ๋‹ค๋ฅธ ํ”„๋ ˆ์ž„์›Œํฌ์—์„œ ์ž‘์—…์— ๋Œ€ํ•œ ํŠน์ • ๋ชจ๋ธ์„ ์ฐพ์Šต๋‹ˆ๋‹ค. <frameworkcontent> <pt> ์ฒดํฌํฌ์ธํŠธ๋ฅผ TensorFlow์—์„œ PyTorch๋กœ ๋ณ€ํ™˜ํ•˜๋ ค๋ฉด `from_tf=True`๋ฅผ ์ง€์ •ํ•˜์„ธ์š”: ```py >>> pt_model = DistilBertForSequenceClassification.from_pretrained("path/to/awesome-name-you-picked", from_tf=True) >>> pt_model.save_pretrained("path/to/awesome-name-you-picked") ``` </pt> <tf> ์ฒดํฌํฌ์ธํŠธ๋ฅผ PyTorch์—์„œ TensorFlow๋กœ ๋ณ€ํ™˜ํ•˜๋ ค๋ฉด `from_pt=True`๋ฅผ ์ง€์ •ํ•˜์„ธ์š”: ```py >>> tf_model = TFDistilBertForSequenceClassification.from_pretrained("path/to/awesome-name-you-picked", from_pt=True) ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์ƒˆ๋กœ์šด ์ฒดํฌํฌ์ธํŠธ์™€ ํ•จ๊ป˜ ์ƒˆ๋กœ์šด TensorFlow ๋ชจ๋ธ์„ ์ €์žฅํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> tf_model.save_pretrained("path/to/awesome-name-you-picked") ``` </tf> <jax> Flax์—์„œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ, PyTorch์—์„œ Flax๋กœ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋ณ€ํ™˜ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> flax_model = FlaxDistilBertForSequenceClassification.from_pretrained( ... "path/to/awesome-name-you-picked", from_pt=True ... ) ``` </jax> </frameworkcontent> ## ํ›ˆ๋ จ ์ค‘ ๋ชจ๋ธ ํ‘ธ์‹œํ•˜๊ธฐ[[push-a-model-during-training]] <frameworkcontent> <pt> <Youtube id="Z1-XMy-GNLQ"/> ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ๊ณต์œ ํ•˜๋Š” ๊ฒƒ์€ ์ถ”๊ฐ€ ๋งค๊ฐœ๋ณ€์ˆ˜๋‚˜ ์ฝœ๋ฐฑ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒƒ๋งŒํผ ๊ฐ„๋‹จํ•ฉ๋‹ˆ๋‹ค. [๋ฏธ์„ธ ์กฐ์ • ํŠœํ† ๋ฆฌ์–ผ](training)์—์„œ [`TrainingArguments`] ํด๋ž˜์Šค๋Š” ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ์™€ ์ถ”๊ฐ€ ํ›ˆ๋ จ ์˜ต์…˜์„ ์ง€์ •ํ•˜๋Š” ๊ณณ์ด๋ผ๋Š” ๊ฒƒ์„ ๊ธฐ์–ตํ•˜์„ธ์š”. ์ด๋Ÿฌํ•œ ํ›ˆ๋ จ ์˜ต์…˜ ์ค‘ ํ•˜๋‚˜๋Š” ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ๋กœ ์ง์ ‘ ํ‘ธ์‹œํ•˜๋Š” ๊ธฐ๋Šฅ์„ ํฌํ•จํ•ฉ๋‹ˆ๋‹ค. [`TrainingArguments`]์—์„œ `push_to_hub=True`๋ฅผ ์„ค์ •ํ•˜์„ธ์š”: ```py >>> training_args = TrainingArguments(output_dir="my-awesome-model", push_to_hub=True) ``` ํ‰์†Œ์™€ ๊ฐ™์ด ํ›ˆ๋ จ ์ธ์ˆ˜๋ฅผ [`Trainer`]์— ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค: ```py >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=small_train_dataset, ... eval_dataset=small_eval_dataset, ... compute_metrics=compute_metrics, ... 
) ``` ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•œ ํ›„, [`Trainer`]์—์„œ [`~transformers.Trainer.push_to_hub`]๋ฅผ ํ˜ธ์ถœํ•˜์—ฌ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ๋กœ ํ‘ธ์‹œํ•˜์„ธ์š”. ๐Ÿค— Transformers๋Š” ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ, ํ›ˆ๋ จ ๊ฒฐ๊ณผ ๋ฐ ํ”„๋ ˆ์ž„์›Œํฌ ๋ฒ„์ „์„ ๋ชจ๋ธ ์นด๋“œ์— ์ž๋™์œผ๋กœ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค! ```py >>> trainer.push_to_hub() ``` </pt> <tf> [`PushToHubCallback`]์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ๊ณต์œ ํ•˜๋ ค๋ฉด, [`PushToHubCallback`]์— ๋‹ค์Œ ์ธ์ˆ˜๋ฅผ ์ •์˜ํ•˜์„ธ์š”: - ์ถœ๋ ฅ๋œ ๋ชจ๋ธ์˜ ํŒŒ์ผ ๊ฒฝ๋กœ - ํ† ํฌ๋‚˜์ด์ € - `{Hub ์‚ฌ์šฉ์ž ์ด๋ฆ„}/{๋ชจ๋ธ ์ด๋ฆ„}` ํ˜•์‹์˜ `hub_model_id` ```py >>> from transformers import PushToHubCallback >>> push_to_hub_callback = PushToHubCallback( ... output_dir="./your_model_save_path", tokenizer=tokenizer, hub_model_id="your-username/my-awesome-model" ... ) ``` [`fit`](https://keras.io/api/models/model_training_apis/)์— ์ฝœ๋ฐฑ์„ ์ถ”๊ฐ€ํ•˜๋ฉด, ๐Ÿค— Transformers๊ฐ€ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ๋กœ ํ‘ธ์‹œํ•ฉ๋‹ˆ๋‹ค: ```py >>> model.fit(tf_train_dataset, validation_data=tf_validation_dataset, epochs=3, callbacks=push_to_hub_callback) ``` </tf> </frameworkcontent> ## `push_to_hub` ํ•จ์ˆ˜ ์‚ฌ์šฉํ•˜๊ธฐ[[use-the-pushtohub-function]] ๋ชจ๋ธ์—์„œ ์ง์ ‘ `push_to_hub`๋ฅผ ํ˜ธ์ถœํ•˜์—ฌ ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. `push_to_hub`์— ๋ชจ๋ธ ์ด๋ฆ„์„ ์ง€์ •ํ•˜์„ธ์š”: ```py >>> pt_model.push_to_hub("my-awesome-model") ``` ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ์‚ฌ์šฉ์ž ์ด๋ฆ„ ์•„๋ž˜์— ๋ชจ๋ธ ์ด๋ฆ„ `my-awesome-model`๋กœ ์ €์žฅ์†Œ๊ฐ€ ์ƒ์„ฑ๋ฉ๋‹ˆ๋‹ค. ์ด์ œ ์‚ฌ์šฉ์ž๋Š” `from_pretrained` ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModel >>> model = AutoModel.from_pretrained("your_username/my-awesome-model") ``` ์กฐ์ง์— ์†ํ•˜๊ณ  ๋ชจ๋ธ์„ ์กฐ์ง ์ด๋ฆ„์œผ๋กœ ๋Œ€์‹  ํ‘ธ์‹œํ•˜๋ ค๋ฉด `repo_id`์— ์ถ”๊ฐ€ํ•˜์„ธ์š”: ```py >>> pt_model.push_to_hub("my-awesome-org/my-awesome-model") ``` `push_to_hub` ํ•จ์ˆ˜๋Š” ๋ชจ๋ธ ์ €์žฅ์†Œ์— ๋‹ค๋ฅธ ํŒŒ์ผ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๋ฐ์—๋„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๋ชจ๋ธ ์ €์žฅ์†Œ์— ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์ถ”๊ฐ€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> tokenizer.push_to_hub("my-awesome-model") ``` ๋˜๋Š” ๋ฏธ์„ธ ์กฐ์ •๋œ PyTorch ๋ชจ๋ธ์˜ TensorFlow ๋ฒ„์ „์„ ์ถ”๊ฐ€ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> tf_model.push_to_hub("my-awesome-model") ``` ์ด์ œ Hugging Face ํ”„๋กœํ•„๋กœ ์ด๋™ํ•˜๋ฉด, ์ƒˆ๋กœ ์ƒ์„ฑํ•œ ๋ชจ๋ธ ์ €์žฅ์†Œ๊ฐ€ ํ‘œ์‹œ๋ฉ๋‹ˆ๋‹ค. **Files** ํƒญ์„ ํด๋ฆญํ•˜๋ฉด ์ €์žฅ์†Œ์— ์—…๋กœ๋“œํ•œ ๋ชจ๋“  ํŒŒ์ผ์ด ํ‘œ์‹œ๋ฉ๋‹ˆ๋‹ค. ์ €์žฅ์†Œ์— ํŒŒ์ผ์„ ๋งŒ๋“ค๊ณ  ์—…๋กœ๋“œํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ ํ—ˆ๋ธŒ ์„ค๋ช…์„œ [์—ฌ๊ธฐ](https://huggingface.co/docs/hub/how-to-upstream)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ## ์›น ์ธํ„ฐํŽ˜์ด์Šค๋กœ ์—…๋กœ๋“œํ•˜๊ธฐ[[upload-with-the-web-interface]] ์ฝ”๋“œ ์—†๋Š” ์ ‘๊ทผ ๋ฐฉ์‹์„ ์„ ํ˜ธํ•˜๋Š” ์‚ฌ์šฉ์ž๋Š” ํ—ˆ๋ธŒ์˜ ์›น ์ธํ„ฐํŽ˜์ด์Šค๋ฅผ ํ†ตํ•ด ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [huggingface.co/new](https://huggingface.co/new)๋ฅผ ๋ฐฉ๋ฌธํ•˜์—ฌ ์ƒˆ๋กœ์šด ์ €์žฅ์†Œ๋ฅผ ์ƒ์„ฑํ•˜์„ธ์š”: ![new_model_repo](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/new_model_repo.png) ์—ฌ๊ธฐ์„œ ๋ชจ๋ธ์— ๋Œ€ํ•œ ๋ช‡ ๊ฐ€์ง€ ์ •๋ณด๋ฅผ ์ถ”๊ฐ€ํ•˜์„ธ์š”: - ์ €์žฅ์†Œ์˜ **์†Œ์œ ์ž**๋ฅผ ์„ ํƒํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ์‚ฌ์šฉ์ž ๋˜๋Š” ์‚ฌ์šฉ์ž๊ฐ€ ์†ํ•œ ์กฐ์ง์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. - ์ €์žฅ์†Œ ์ด๋ฆ„์ด ๋  ๋ชจ๋ธ์˜ ์ด๋ฆ„์„ ์„ ํƒํ•ฉ๋‹ˆ๋‹ค. - ๋ชจ๋ธ์ด ๊ณต๊ฐœ์ธ์ง€ ๋น„๊ณต๊ฐœ์ธ์ง€ ์„ ํƒํ•ฉ๋‹ˆ๋‹ค. - ๋ชจ๋ธ์˜ ๋ผ์ด์„ผ์Šค ์‚ฌ์šฉ์„ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค. 
์ด์ œ **Files** ํƒญ์„ ํด๋ฆญํ•˜๊ณ  **Add file** ๋ฒ„ํŠผ์„ ํด๋ฆญํ•˜์—ฌ ์ƒˆ๋กœ์šด ํŒŒ์ผ์„ ์ €์žฅ์†Œ์— ์—…๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฐ ๋‹ค์Œ ์—…๋กœ๋“œํ•  ํŒŒ์ผ์„ ๋Œ์–ด๋‹ค ๋†“๊ณ  ์ปค๋ฐ‹ ๋ฉ”์‹œ์ง€๋ฅผ ์ถ”๊ฐ€ํ•˜์„ธ์š”. ![upload_file](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/upload_file.png) ## ๋ชจ๋ธ ์นด๋“œ ์ถ”๊ฐ€ํ•˜๊ธฐ[[add-a-model-card]] ์‚ฌ์šฉ์ž๊ฐ€ ๋ชจ๋ธ์˜ ๊ธฐ๋Šฅ, ์ œํ•œ, ์ž ์žฌ์  ํŽธํ–ฅ ๋ฐ ์œค๋ฆฌ์  ๊ณ ๋ ค ์‚ฌํ•ญ์„ ์ดํ•ดํ•  ์ˆ˜ ์žˆ๋„๋ก ์ €์žฅ์†Œ์— ๋ชจ๋ธ ์นด๋“œ๋ฅผ ์ถ”๊ฐ€ํ•˜์„ธ์š”. ๋ชจ๋ธ ์นด๋“œ๋Š” `README.md` ํŒŒ์ผ์— ์ •์˜๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ ๋ฐฉ๋ฒ•์œผ๋กœ ๋ชจ๋ธ ์นด๋“œ๋ฅผ ์ถ”๊ฐ€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: * `README.md` ํŒŒ์ผ์„ ์ˆ˜๋™์œผ๋กœ ์ƒ์„ฑํ•˜์—ฌ ์—…๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค. * ๋ชจ๋ธ ์ €์žฅ์†Œ์—์„œ **Edit model card** ๋ฒ„ํŠผ์„ ํด๋ฆญํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ ์นด๋“œ์— ํฌํ•จํ•  ์ •๋ณด ์œ ํ˜•์— ๋Œ€ํ•œ ์ข‹์€ ์˜ˆ๋Š” DistilBert [๋ชจ๋ธ ์นด๋“œ](https://huggingface.co/distilbert/distilbert-base-uncased)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ๋ชจ๋ธ์˜ ํƒ„์†Œ ๋ฐœ์ž๊ตญ์ด๋‚˜ ์œ„์ ฏ ์˜ˆ์‹œ ๋“ฑ `README.md` ํŒŒ์ผ์—์„œ ์ œ์–ดํ•  ์ˆ˜ ์žˆ๋Š” ๋‹ค๋ฅธ ์˜ต์…˜์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [์—ฌ๊ธฐ](https://huggingface.co/docs/hub/models-cards) ๋ฌธ์„œ๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”.
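`README.md`๋ฅผ ์ˆ˜๋™์œผ๋กœ ๋งŒ๋“œ๋Š” ๋Œ€์‹  ์ฝ”๋“œ๋กœ ๋ชจ๋ธ ์นด๋“œ๋ฅผ ์˜ฌ๋ฆฌ๊ณ  ์‹ถ๋‹ค๋ฉด `huggingface_hub` ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ `ModelCard`๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ๊ฐ€์ƒ์˜ ์ €์žฅ์†Œ ์ด๋ฆ„(`your_username/my-awesome-model`)์„ ๊ฐ€์ •ํ•œ ๊ฐ„๋‹จํ•œ ์Šค์ผ€์น˜์ด๋ฉฐ, ์นด๋“œ ๋‚ด์šฉ์€ ์ž์‹ ์˜ ๋ชจ๋ธ์— ๋งž๊ฒŒ ์ฑ„์›Œ์•ผ ํ•ฉ๋‹ˆ๋‹ค:

```py
from huggingface_hub import ModelCard

content = """---
language: ko
license: apache-2.0
---

# my-awesome-model

๋ชจ๋ธ ์„ค๋ช…, ํ•™์Šต ๋ฐ์ดํ„ฐ, ์ œํ•œ ์‚ฌํ•ญ, ์œค๋ฆฌ์  ๊ณ ๋ ค ์‚ฌํ•ญ ๋“ฑ์„ ์—ฌ๊ธฐ์— ์ž‘์„ฑํ•˜์„ธ์š”.
"""

card = ModelCard(content)
card.push_to_hub("your_username/my-awesome-model")  # ๊ฐ€์ƒ์˜ ์ €์žฅ์†Œ ID ์˜ˆ์‹œ
```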
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์‚ฌ์šฉ์ž ์ •์˜ ๋ชจ๋ธ ๊ณต์œ ํ•˜๊ธฐ[[sharing-custom-models]] ๐Ÿค— Transformers ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋Š” ์‰ฝ๊ฒŒ ํ™•์žฅํ•  ์ˆ˜ ์žˆ๋„๋ก ์„ค๊ณ„๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ๋ชจ๋“  ๋ชจ๋ธ์€ ์ถ”์ƒํ™” ์—†์ด ์ €์žฅ์†Œ์˜ ์ง€์ •๋œ ํ•˜์œ„ ํด๋”์— ์™„์ „ํžˆ ์ฝ”๋”ฉ๋˜์–ด ์žˆ์œผ๋ฏ€๋กœ, ์†์‰ฝ๊ฒŒ ๋ชจ๋ธ๋ง ํŒŒ์ผ์„ ๋ณต์‚ฌํ•˜๊ณ  ํ•„์š”์— ๋”ฐ๋ผ ์กฐ์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์™„์ „ํžˆ ์ƒˆ๋กœ์šด ๋ชจ๋ธ์„ ๋งŒ๋“œ๋Š” ๊ฒฝ์šฐ์—๋Š” ์ฒ˜์Œ๋ถ€ํ„ฐ ์‹œ์ž‘ํ•˜๋Š” ๊ฒƒ์ด ๋” ์‰ฌ์šธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ๋Š” Transformers ๋‚ด์—์„œ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก ์‚ฌ์šฉ์ž ์ •์˜ ๋ชจ๋ธ๊ณผ ๊ตฌ์„ฑ์„ ์ž‘์„ฑํ•˜๋Š” ๋ฐฉ๋ฒ•๊ณผ ๐Ÿค— Transformers ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์— ์—†๋Š” ๊ฒฝ์šฐ์—๋„ ๋ˆ„๊ตฌ๋‚˜ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก (์˜์กด์„ฑ๊ณผ ํ•จ๊ป˜) ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๊ณต์œ ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ฐฐ์šธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [timm ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ](https://github.com/rwightman/pytorch-image-models)์˜ ResNet ํด๋ž˜์Šค๋ฅผ [`PreTrainedModel`]๋กœ ๋ž˜ํ•‘ํ•œ ResNet ๋ชจ๋ธ์„ ์˜ˆ๋กœ ๋ชจ๋“  ๊ฒƒ์„ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. ## ์‚ฌ์šฉ์ž ์ •์˜ ๊ตฌ์„ฑ ์ž‘์„ฑํ•˜๊ธฐ[[writing-a-custom-configuration]] ๋ชจ๋ธ์— ๋“ค์–ด๊ฐ€๊ธฐ ์ „์— ๋จผ์ € ๊ตฌ์„ฑ์„ ์ž‘์„ฑํ•ด๋ณด๋„๋ก ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ๋ชจ๋ธ์˜ `configuration`์€ ๋ชจ๋ธ์„ ๋งŒ๋“ค๊ธฐ ์œ„ํ•ด ํ•„์š”ํ•œ ๋ชจ๋“  ์ค‘์š”ํ•œ ๊ฒƒ๋“ค์„ ํฌํ•จํ•˜๊ณ  ์žˆ๋Š” ๊ฐ์ฒด์ž…๋‹ˆ๋‹ค. ๋‹ค์Œ ์„น์…˜์—์„œ ๋ณผ ์ˆ˜ ์žˆ๋“ฏ์ด, ๋ชจ๋ธ์€ `config`๋ฅผ ์‚ฌ์šฉํ•ด์„œ๋งŒ ์ดˆ๊ธฐํ™”ํ•  ์ˆ˜ ์žˆ๊ธฐ ๋•Œ๋ฌธ์— ์™„๋ฒฝํ•œ ๊ตฌ์„ฑ์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ์•„๋ž˜ ์˜ˆ์‹œ์—์„œ๋Š” ResNet ํด๋ž˜์Šค์˜ ์ธ์ˆ˜(argument)๋ฅผ ์กฐ์ •ํ•ด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ๋‹ค๋ฅธ ๊ตฌ์„ฑ์€ ๊ฐ€๋Šฅํ•œ ResNet ์ค‘ ๋‹ค๋ฅธ ์œ ํ˜•์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฐ ๋‹ค์Œ ๋ช‡ ๊ฐ€์ง€ ์œ ํšจ์„ฑ์„ ํ™•์ธํ•œ ํ›„ ํ•ด๋‹น ์ธ์ˆ˜๋ฅผ ์ €์žฅํ•ฉ๋‹ˆ๋‹ค. ```python from transformers import PretrainedConfig from typing import List class ResnetConfig(PretrainedConfig): model_type = "resnet" def __init__( self, block_type="bottleneck", layers: List[int] = [3, 4, 6, 3], num_classes: int = 1000, input_channels: int = 3, cardinality: int = 1, base_width: int = 64, stem_width: int = 64, stem_type: str = "", avg_down: bool = False, **kwargs, ): if block_type not in ["basic", "bottleneck"]: raise ValueError(f"`block_type` must be 'basic' or bottleneck', got {block_type}.") if stem_type not in ["", "deep", "deep-tiered"]: raise ValueError(f"`stem_type` must be '', 'deep' or 'deep-tiered', got {stem_type}.") self.block_type = block_type self.layers = layers self.num_classes = num_classes self.input_channels = input_channels self.cardinality = cardinality self.base_width = base_width self.stem_width = stem_width self.stem_type = stem_type self.avg_down = avg_down super().__init__(**kwargs) ``` ์‚ฌ์šฉ์ž ์ •์˜ `configuration`์„ ์ž‘์„ฑํ•  ๋•Œ ๊ธฐ์–ตํ•ด์•ผ ํ•  ์„ธ ๊ฐ€์ง€ ์ค‘์š”ํ•œ ์‚ฌํ•ญ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: - `PretrainedConfig`์„ ์ƒ์†ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 
- `PretrainedConfig`์˜ `__init__`์€ ๋ชจ๋“  kwargs๋ฅผ ํ—ˆ์šฉํ•ด์•ผ ํ•˜๊ณ , - ์ด๋Ÿฌํ•œ `kwargs`๋Š” ์ƒ์œ„ ํด๋ž˜์Šค `__init__`์— ์ „๋‹ฌ๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ƒ์†์€ ๐Ÿค— Transformers ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ ๋ชจ๋“  ๊ธฐ๋Šฅ์„ ๊ฐ€์ ธ์˜ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ์ ์œผ๋กœ๋ถ€ํ„ฐ ๋น„๋กฏ๋˜๋Š” ๋‘ ๊ฐ€์ง€ ์ œ์•ฝ ์กฐ๊ฑด์€ `PretrainedConfig`์— ์„ค์ •ํ•˜๋Š” ๊ฒƒ๋ณด๋‹ค ๋” ๋งŽ์€ ํ•„๋“œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. `from_pretrained` ๋ฉ”์„œ๋“œ๋กœ ๊ตฌ์„ฑ์„ ๋‹ค์‹œ ๋กœ๋“œํ•  ๋•Œ ํ•ด๋‹น ํ•„๋“œ๋Š” ๊ตฌ์„ฑ์—์„œ ์ˆ˜๋ฝํ•œ ํ›„ ์ƒ์œ„ ํด๋ž˜์Šค๋กœ ๋ณด๋‚ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ auto ํด๋ž˜์Šค์— ๋“ฑ๋กํ•˜์ง€ ์•Š๋Š” ํ•œ, `configuration`์—์„œ `model_type`์„ ์ •์˜(์—ฌ๊ธฐ์„œ `model_type="resnet"`)ํ•˜๋Š” ๊ฒƒ์€ ํ•„์ˆ˜ ์‚ฌํ•ญ์ด ์•„๋‹™๋‹ˆ๋‹ค (๋งˆ์ง€๋ง‰ ์„น์…˜ ์ฐธ์กฐ). ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ๋‹ค๋ฅธ ๋ชจ๋ธ ๊ตฌ์„ฑ๊ณผ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ ๊ตฌ์„ฑ์„ ์‰ฝ๊ฒŒ ๋งŒ๋“ค๊ณ  ์ €์žฅํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์€ resnet50d ๊ตฌ์„ฑ์„ ์ƒ์„ฑํ•˜๊ณ  ์ €์žฅํ•˜๋Š” ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค: ```py resnet50d_config = ResnetConfig(block_type="bottleneck", stem_width=32, stem_type="deep", avg_down=True) resnet50d_config.save_pretrained("custom-resnet") ``` ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด `custom-resnet` ํด๋” ์•ˆ์— `config.json`์ด๋ผ๋Š” ํŒŒ์ผ์ด ์ €์žฅ๋ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฐ ๋‹ค์Œ `from_pretrained` ๋ฉ”์„œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๊ตฌ์„ฑ์„ ๋‹ค์‹œ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py resnet50d_config = ResnetConfig.from_pretrained("custom-resnet") ``` ๊ตฌ์„ฑ์„ Hub์— ์ง์ ‘ ์—…๋กœ๋“œํ•˜๊ธฐ ์œ„ํ•ด [`PretrainedConfig`] ํด๋ž˜์Šค์˜ [`~PretrainedConfig.push_to_hub`]์™€ ๊ฐ™์€ ๋‹ค๋ฅธ ๋ฉ”์„œ๋“œ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ## ์‚ฌ์šฉ์ž ์ •์˜ ๋ชจ๋ธ ์ž‘์„ฑํ•˜๊ธฐ[[writing-a-custom-model]] ์ด์ œ ResNet ๊ตฌ์„ฑ์ด ์žˆ์œผ๋ฏ€๋กœ ๋ชจ๋ธ์„ ์ž‘์„ฑํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์‹ค์ œ๋กœ๋Š” ๋‘ ๊ฐœ๋ฅผ ์ž‘์„ฑํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ํ•˜๋‚˜๋Š” ์ด๋ฏธ์ง€ ๋ฐฐ์น˜์—์„œ hidden features๋ฅผ ์ถ”์ถœํ•˜๋Š” ๊ฒƒ([`BertModel`]๊ณผ ๊ฐ™์ด), ๋‹ค๋ฅธ ํ•˜๋‚˜๋Š” ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜์— ์ ํ•ฉํ•œ ๊ฒƒ์ž…๋‹ˆ๋‹ค([`BertForSequenceClassification`]๊ณผ ๊ฐ™์ด). ์ด์ „์— ์–ธ๊ธ‰ํ–ˆ๋“ฏ์ด ์ด ์˜ˆ์ œ์—์„œ๋Š” ๋‹จ์ˆœํ•˜๊ฒŒ ํ•˜๊ธฐ ์œ„ํ•ด ๋ชจ๋ธ์˜ ๋Š์Šจํ•œ ๋ž˜ํผ(loose wrapper)๋งŒ ์ž‘์„ฑํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด ํด๋ž˜์Šค๋ฅผ ์ž‘์„ฑํ•˜๊ธฐ ์ „์— ๋ธ”๋ก ์œ ํ˜•๊ณผ ์‹ค์ œ ๋ธ”๋ก ํด๋ž˜์Šค ๊ฐ„์˜ ๋งคํ•‘ ์ž‘์—…๋งŒ ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. 
๊ทธ๋Ÿฐ ๋‹ค์Œ `ResNet` ํด๋ž˜์Šค๋กœ ์ „๋‹ฌ๋˜์–ด `configuration`์„ ํ†ตํ•ด ๋ชจ๋ธ์ด ์„ ์–ธ๋ฉ๋‹ˆ๋‹ค: ```py from transformers import PreTrainedModel from timm.models.resnet import BasicBlock, Bottleneck, ResNet from .configuration_resnet import ResnetConfig BLOCK_MAPPING = {"basic": BasicBlock, "bottleneck": Bottleneck} class ResnetModel(PreTrainedModel): config_class = ResnetConfig def __init__(self, config): super().__init__(config) block_layer = BLOCK_MAPPING[config.block_type] self.model = ResNet( block_layer, config.layers, num_classes=config.num_classes, in_chans=config.input_channels, cardinality=config.cardinality, base_width=config.base_width, stem_width=config.stem_width, stem_type=config.stem_type, avg_down=config.avg_down, ) def forward(self, tensor): return self.model.forward_features(tensor) ``` ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜ ๋ชจ๋ธ์„ ๋งŒ๋“ค๊ธฐ ์œ„ํ•ด์„œ๋Š” forward ๋ฉ”์†Œ๋“œ๋งŒ ๋ณ€๊ฒฝํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค: ```py import torch class ResnetModelForImageClassification(PreTrainedModel): config_class = ResnetConfig def __init__(self, config): super().__init__(config) block_layer = BLOCK_MAPPING[config.block_type] self.model = ResNet( block_layer, config.layers, num_classes=config.num_classes, in_chans=config.input_channels, cardinality=config.cardinality, base_width=config.base_width, stem_width=config.stem_width, stem_type=config.stem_type, avg_down=config.avg_down, ) def forward(self, tensor, labels=None): logits = self.model(tensor) if labels is not None: loss = torch.nn.cross_entropy(logits, labels) return {"loss": loss, "logits": logits} return {"logits": logits} ``` ๋‘ ๊ฒฝ์šฐ ๋ชจ๋‘ `PreTrainedModel`๋ฅผ ์ƒ์†๋ฐ›๊ณ , `config`๋ฅผ ํ†ตํ•ด ์ƒ์œ„ ํด๋ž˜์Šค ์ดˆ๊ธฐํ™”๋ฅผ ํ˜ธ์ถœํ•˜๋‹ค๋Š” ์ ์„ ๊ธฐ์–ตํ•˜์„ธ์š” (์ผ๋ฐ˜์ ์ธ `torch.nn.Module`์„ ์ž‘์„ฑํ•  ๋•Œ์™€ ๋น„์Šทํ•จ). ๋ชจ๋ธ์„ auto ํด๋ž˜์Šค์— ๋“ฑ๋กํ•˜๊ณ  ์‹ถ์€ ๊ฒฝ์šฐ์—๋Š” `config_class`๋ฅผ ์„ค์ •ํ•˜๋Š” ๋ถ€๋ถ„์ด ํ•„์ˆ˜์ž…๋‹ˆ๋‹ค (๋งˆ์ง€๋ง‰ ์„น์…˜ ์ฐธ์กฐ). <Tip> ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์— ์กด์žฌํ•˜๋Š” ๋ชจ๋ธ๊ณผ ๊ต‰์žฅํžˆ ์œ ์‚ฌํ•˜๋‹ค๋ฉด, ๋ชจ๋ธ์„ ์ƒ์„ฑํ•  ๋•Œ ๊ตฌ์„ฑ์„ ์ฐธ์กฐํ•ด ์žฌ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. </Tip> ์›ํ•˜๋Š” ๊ฒƒ์„ ๋ชจ๋ธ์ด ๋ฐ˜ํ™˜ํ•˜๋„๋ก ํ•  ์ˆ˜ ์žˆ์ง€๋งŒ, `ResnetModelForImageClassification`์—์„œ ํ–ˆ๋˜ ๊ฒƒ ์ฒ˜๋Ÿผ ๋ ˆ์ด๋ธ”์„ ํ†ต๊ณผ์‹œ์ผฐ์„ ๋•Œ ์†์‹ค๊ณผ ํ•จ๊ป˜ ์‚ฌ์ „ ํ˜•ํƒœ๋กœ ๋ฐ˜ํ™˜ํ•˜๋Š” ๊ฒƒ์ด [`Trainer`] ํด๋ž˜์Šค ๋‚ด์—์„œ ์ง์ ‘ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๊ธฐ์— ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ž์‹ ๋งŒ์˜ ํ•™์Šต ๋ฃจํ”„ ๋˜๋Š” ๋‹ค๋ฅธ ํ•™์Šต ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•  ๊ณ„ํš์ด๋ผ๋ฉด ๋‹ค๋ฅธ ์ถœ๋ ฅ ํ˜•์‹์„ ์‚ฌ์šฉํ•ด๋„ ์ข‹์Šต๋‹ˆ๋‹ค. ์ด์ œ ๋ชจ๋ธ ํด๋ž˜์Šค๊ฐ€ ์žˆ์œผ๋ฏ€๋กœ ํ•˜๋‚˜ ์ƒ์„ฑํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py resnet50d = ResnetModelForImageClassification(resnet50d_config) ``` ๋‹ค์‹œ ๋งํ•˜์ง€๋งŒ, [`~PreTrainedModel.save_pretrained`]๋˜๋Š” [`~PreTrainedModel.push_to_hub`]์ฒ˜๋Ÿผ [`PreTrainedModel`]์— ์†ํ•˜๋Š” ๋ชจ๋“  ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ ์„น์…˜์—์„œ ๋‘ ๋ฒˆ์งธ ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ด ๋ชจ๋ธ ์ฝ”๋“œ์™€ ๋ชจ๋ธ ๊ฐ€์ค‘์น˜๋ฅผ ์—…๋กœ๋“œํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ๋จผ์ €, ๋ชจ๋ธ ๋‚ด๋ถ€์— ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๊ฐ€์ค‘์น˜๋ฅผ ๋กœ๋“œํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ์ด ์˜ˆ์ œ๋ฅผ ํ™œ์šฉํ•  ๋•Œ๋Š”, ์‚ฌ์šฉ์ž ์ •์˜ ๋ชจ๋ธ์„ ์ž์‹ ๋งŒ์˜ ๋ฐ์ดํ„ฐ๋กœ ํ•™์Šต์‹œํ‚ฌ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ๋Š” ๋น ๋ฅด๊ฒŒ ์ง„ํ–‰ํ•˜๊ธฐ ์œ„ํ•ด ์‚ฌ์ „ ํ›ˆ๋ จ๋œ resnet50d๋ฅผ ์‚ฌ์šฉํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜ ๋ชจ๋ธ์€ resnet50d์˜ ๋ž˜ํผ์ด๊ธฐ ๋•Œ๋ฌธ์—, ๊ฐ€์ค‘์น˜๋ฅผ ์‰ฝ๊ฒŒ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
```py import timm pretrained_model = timm.create_model("resnet50d", pretrained=True) resnet50d.model.load_state_dict(pretrained_model.state_dict()) ``` ์ด์ œ [`~PreTrainedModel.save_pretrained`] ๋˜๋Š” [`~PreTrainedModel.push_to_hub`]๋ฅผ ์‚ฌ์šฉํ•  ๋•Œ ๋ชจ๋ธ ์ฝ”๋“œ๊ฐ€ ์ €์žฅ๋˜๋Š”์ง€ ํ™•์ธํ•ด๋ด…์‹œ๋‹ค. ## Hub๋กœ ์ฝ”๋“œ ์—…๋กœ๋“œํ•˜๊ธฐ[[sending-the-code-to-the-hub]] <Tip warning={true}> ์ด API๋Š” ์‹คํ—˜์ ์ด๋ฉฐ ๋‹ค์Œ ๋ฆด๋ฆฌ์Šค์—์„œ ์•ฝ๊ฐ„์˜ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์ด ์žˆ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. </Tip> ๋จผ์ € ๋ชจ๋ธ์ด `.py` ํŒŒ์ผ์— ์™„์ „ํžˆ ์ •์˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”. ๋ชจ๋“  ํŒŒ์ผ์ด ๋™์ผํ•œ ์ž‘์—… ๊ฒฝ๋กœ์— ์žˆ๊ธฐ ๋•Œ๋ฌธ์— ์ƒ๋Œ€๊ฒฝ๋กœ ์ž„ํฌํŠธ(relative import)์— ์˜์กดํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค (transformers์—์„œ๋Š” ์ด ๊ธฐ๋Šฅ์— ๋Œ€ํ•œ ํ•˜์œ„ ๋ชจ๋“ˆ์„ ์ง€์›ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค). ์ด ์˜ˆ์‹œ์—์„œ๋Š” ์ž‘์—… ๊ฒฝ๋กœ ์•ˆ์˜ `resnet_model`์—์„œ `modeling_resnet.py` ํŒŒ์ผ๊ณผ `configuration_resnet.py` ํŒŒ์ผ์„ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. ๊ตฌ์„ฑ ํŒŒ์ผ์—๋Š” `ResnetConfig`์— ๋Œ€ํ•œ ์ฝ”๋“œ๊ฐ€ ์žˆ๊ณ  ๋ชจ๋ธ๋ง ํŒŒ์ผ์—๋Š” `ResnetModel` ๋ฐ `ResnetModelForImageClassification`์— ๋Œ€ํ•œ ์ฝ”๋“œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ``` . โ””โ”€โ”€ resnet_model โ”œโ”€โ”€ __init__.py โ”œโ”€โ”€ configuration_resnet.py โ””โ”€โ”€ modeling_resnet.py ``` Python์ด `resnet_model`์„ ๋ชจ๋“ˆ๋กœ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก ๊ฐ์ง€ํ•˜๋Š” ๋ชฉ์ ์ด๊ธฐ ๋•Œ๋ฌธ์— `__init__.py`๋Š” ๋น„์–ด ์žˆ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. <Tip warning={true}> ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ ๋ชจ๋ธ๋ง ํŒŒ์ผ์„ ๋ณต์‚ฌํ•˜๋Š” ๊ฒฝ์šฐ, ๋ชจ๋“  ํŒŒ์ผ ์ƒ๋‹จ์— ์žˆ๋Š” ์ƒ๋Œ€ ๊ฒฝ๋กœ ์ž„ํฌํŠธ(relative import) ๋ถ€๋ถ„์„ `transformers` ํŒจํ‚ค์ง€์—์„œ ์ž„ํฌํŠธ ํ•˜๋„๋ก ๋ณ€๊ฒฝํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. </Tip> ๊ธฐ์กด ๊ตฌ์„ฑ์ด๋‚˜ ๋ชจ๋ธ์„ ์žฌ์‚ฌ์šฉ(๋˜๋Š” ์„œ๋ธŒ ํด๋ž˜์Šคํ™”)ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๋ชจ๋ธ์„ ๊ณต์œ ํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” ๋‹ค์Œ ๋‹จ๊ณ„๋ฅผ ๋”ฐ๋ผ์•ผ ํ•ฉ๋‹ˆ๋‹ค: ๋จผ์ €, ์ƒˆ๋กœ ๋งŒ๋“  ํŒŒ์ผ์— ResNet ๋ชจ๋ธ๊ณผ ๊ตฌ์„ฑ์„ ์ž„ํฌํŠธํ•ฉ๋‹ˆ๋‹ค: ```py from resnet_model.configuration_resnet import ResnetConfig from resnet_model.modeling_resnet import ResnetModel, ResnetModelForImageClassification ``` ๋‹ค์Œ์œผ๋กœ `save_pretrained` ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ด ํ•ด๋‹น ๊ฐ์ฒด์˜ ์ฝ”๋“œ ํŒŒ์ผ์„ ๋ณต์‚ฌํ•˜๊ณ , ๋ณต์‚ฌํ•œ ํŒŒ์ผ์„ Auto ํด๋ž˜์Šค๋กœ ๋“ฑ๋กํ•˜๊ณ (๋ชจ๋ธ์ธ ๊ฒฝ์šฐ) ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค: ```py ResnetConfig.register_for_auto_class() ResnetModel.register_for_auto_class("AutoModel") ResnetModelForImageClassification.register_for_auto_class("AutoModelForImageClassification") ``` `configuration`์— ๋Œ€ํ•œ auto ํด๋ž˜์Šค๋ฅผ ์ง€์ •ํ•  ํ•„์š”๋Š” ์—†์ง€๋งŒ(`configuration` ๊ด€๋ จ auto ํด๋ž˜์Šค๋Š” AutoConfig ํด๋ž˜์Šค ํ•˜๋‚˜๋งŒ ์žˆ์Œ), ๋ชจ๋ธ์˜ ๊ฒฝ์šฐ์—๋Š” ์ง€์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์‚ฌ์šฉ์ž ์ง€์ • ๋ชจ๋ธ์€ ๋‹ค์–‘ํ•œ ์ž‘์—…์— ์ ํ•ฉํ•  ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ, ๋ชจ๋ธ์— ๋งž๋Š” auto ํด๋ž˜์Šค๋ฅผ ์ง€์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ์œผ๋กœ, ์ด์ „์— ์ž‘์—…ํ–ˆ๋˜ ๊ฒƒ๊ณผ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ ๊ตฌ์„ฑ๊ณผ ๋ชจ๋ธ์„ ์ž‘์„ฑํ•ฉ๋‹ˆ๋‹ค: ```py resnet50d_config = ResnetConfig(block_type="bottleneck", stem_width=32, stem_type="deep", avg_down=True) resnet50d = ResnetModelForImageClassification(resnet50d_config) pretrained_model = timm.create_model("resnet50d", pretrained=True) resnet50d.model.load_state_dict(pretrained_model.state_dict()) ``` ์ด์ œ ๋ชจ๋ธ์„ Hub๋กœ ์—…๋กœ๋“œํ•˜๊ธฐ ์œ„ํ•ด ๋กœ๊ทธ์ธ ์ƒํƒœ์ธ์ง€ ํ™•์ธํ•˜์„ธ์š”. 
ํ„ฐ๋ฏธ๋„์—์„œ ๋‹ค์Œ ์ฝ”๋“œ๋ฅผ ์‹คํ–‰ํ•ด ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash huggingface-cli login ``` ์ฃผํ”ผํ„ฐ ๋…ธํŠธ๋ถ์˜ ๊ฒฝ์šฐ์—๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```py from huggingface_hub import notebook_login notebook_login() ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์ด๋ ‡๊ฒŒ ์ž์‹ ์˜ ๋„ค์ž„์ŠคํŽ˜์ด์Šค(๋˜๋Š” ์ž์‹ ์ด ์†ํ•œ ์กฐ์ง)์— ์—…๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py resnet50d.push_to_hub("custom-resnet50d") ``` On top of the modeling weights and the configuration in json format, this also copied the modeling and configuration `.py` files in the folder `custom-resnet50d` and uploaded the result to the Hub. You can check the result in this [model repo](https://huggingface.co/sgugger/custom-resnet50d). json ํ˜•์‹์˜ ๋ชจ๋ธ๋ง ๊ฐ€์ค‘์น˜์™€ ๊ตฌ์„ฑ ์™ธ์—๋„ `custom-resnet50d` ํด๋” ์•ˆ์˜ ๋ชจ๋ธ๋ง๊ณผ ๊ตฌ์„ฑ `.py` ํŒŒ์ผ์„ ๋ณต์‚ฌํ•˜ํ•ด Hub์— ์—…๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค. [๋ชจ๋ธ ์ €์žฅ์†Œ](https://huggingface.co/sgugger/custom-resnet50d)์—์„œ ๊ฒฐ๊ณผ๋ฅผ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [sharing tutorial](model_sharing) ๋ฌธ์„œ์˜ `push_to_hub` ๋ฉ”์†Œ๋“œ์—์„œ ์ž์„ธํ•œ ๋‚ด์šฉ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ## ์‚ฌ์šฉ์ž ์ •์˜ ์ฝ”๋“œ๋กœ ๋ชจ๋ธ ์‚ฌ์šฉํ•˜๊ธฐ[[using-a-model-with-custom-code]] auto ํด๋ž˜์Šค์™€ `from_pretrained` ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์‚ฌ์šฉ์ž ์ง€์ • ์ฝ”๋“œ ํŒŒ์ผ๊ณผ ํ•จ๊ป˜ ๋ชจ๋“  ๊ตฌ์„ฑ, ๋ชจ๋ธ, ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. Hub์— ์—…๋กœ๋“œ๋œ ๋ชจ๋“  ํŒŒ์ผ ๋ฐ ์ฝ”๋“œ๋Š” ๋ฉœ์›จ์–ด๊ฐ€ ์žˆ๋Š”์ง€ ๊ฒ€์‚ฌ๋˜์ง€๋งŒ (์ž์„ธํ•œ ๋‚ด์šฉ์€ [Hub ๋ณด์•ˆ](https://huggingface.co/docs/hub/security#malware-scanning) ์„ค๋ช… ์ฐธ์กฐ), ์ž์‹ ์˜ ์ปดํ“จํ„ฐ์—์„œ ๋ชจ๋ธ ์ฝ”๋“œ์™€ ์ž‘์„ฑ์ž๊ฐ€ ์•…์„ฑ ์ฝ”๋“œ๋ฅผ ์‹คํ–‰ํ•˜์ง€ ์•Š๋Š”์ง€ ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์‚ฌ์šฉ์ž ์ •์˜ ์ฝ”๋“œ๋กœ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋ ค๋ฉด `trust_remote_code=True`๋กœ ์„ค์ •ํ•˜์„ธ์š”: ```py from transformers import AutoModelForImageClassification model = AutoModelForImageClassification.from_pretrained("sgugger/custom-resnet50d", trust_remote_code=True) ``` ๋ชจ๋ธ ์ž‘์„ฑ์ž๊ฐ€ ์•…์˜์ ์œผ๋กœ ์ฝ”๋“œ๋ฅผ ์—…๋ฐ์ดํŠธํ•˜์ง€ ์•Š์•˜๋‹ค๋Š” ์ ์„ ํ™•์ธํ•˜๊ธฐ ์œ„ํ•ด, ์ปค๋ฐ‹ ํ•ด์‹œ(commit hash)๋ฅผ `revision`์œผ๋กœ ์ „๋‹ฌํ•˜๋Š” ๊ฒƒ๋„ ๊ฐ•๋ ฅํžˆ ๊ถŒ์žฅ๋ฉ๋‹ˆ๋‹ค (๋ชจ๋ธ ์ž‘์„ฑ์ž๋ฅผ ์™„์ „ํžˆ ์‹ ๋ขฐํ•˜์ง€ ์•Š๋Š” ๊ฒฝ์šฐ). ```py commit_hash = "ed94a7c6247d8aedce4647f00f20de6875b5b292" model = AutoModelForImageClassification.from_pretrained( "sgugger/custom-resnet50d", trust_remote_code=True, revision=commit_hash ) ``` Hub์—์„œ ๋ชจ๋ธ ์ €์žฅ์†Œ์˜ ์ปค๋ฐ‹ ๊ธฐ๋ก์„ ์ฐพ์•„๋ณผ ๋•Œ, ๋ชจ๋“  ์ปค๋ฐ‹์˜ ์ปค๋ฐ‹ ํ•ด์‹œ๋ฅผ ์‰ฝ๊ฒŒ ๋ณต์‚ฌํ•  ์ˆ˜ ์žˆ๋Š” ๋ฒ„ํŠผ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ## ์‚ฌ์šฉ์ž ์ •์˜ ์ฝ”๋“œ๋กœ ๋งŒ๋“  ๋ชจ๋ธ์„ auto ํด๋ž˜์Šค๋กœ ๋“ฑ๋กํ•˜๊ธฐ[[registering-a-model-with-custom-code-to-the-auto-classes]] ๐Ÿค— Transformers๋ฅผ ์ƒ์†ํ•˜๋Š” ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์ž‘์„ฑํ•˜๋Š” ๊ฒฝ์šฐ ์‚ฌ์šฉ์ž ์ •์˜ ๋ชจ๋ธ์„ auto ํด๋ž˜์Šค์— ์ถ”๊ฐ€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์‚ฌ์šฉ์ž ์ •์˜ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด ํ•ด๋‹น ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์ž„ํฌํŠธํ•ด์•ผ ํ•˜๊ธฐ ๋•Œ๋ฌธ์—, ์ด๋Š” Hub๋กœ ์ฝ”๋“œ๋ฅผ ์—…๋กœ๋“œํ•˜๋Š” ๊ฒƒ๊ณผ ๋‹ค๋ฆ…๋‹ˆ๋‹ค (Hub์—์„œ ์ž๋™์ ์œผ๋กœ ๋ชจ๋ธ ์ฝ”๋“œ๋ฅผ ๋‹ค์šด๋กœ๋“œ ํ•˜๋Š” ๊ฒƒ๊ณผ ๋ฐ˜๋Œ€). 
๊ตฌ์„ฑ์— ๊ธฐ์กด ๋ชจ๋ธ ์œ ํ˜•๊ณผ ๋‹ค๋ฅธ `model_type` ์†์„ฑ์ด ์žˆ๊ณ  ๋ชจ๋ธ ํด๋ž˜์Šค์— ์˜ฌ๋ฐ”๋ฅธ `config_class` ์†์„ฑ์ด ์žˆ๋Š” ํ•œ, ๋‹ค์Œ๊ณผ ๊ฐ™์ด auto ํด๋ž˜์Šค์— ์ถ”๊ฐ€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py from transformers import AutoConfig, AutoModel, AutoModelForImageClassification AutoConfig.register("resnet", ResnetConfig) AutoModel.register(ResnetConfig, ResnetModel) AutoModelForImageClassification.register(ResnetConfig, ResnetModelForImageClassification) ``` ์‚ฌ์šฉ์ž ์ •์˜ ๊ตฌ์„ฑ์„ [`AutoConfig`]์— ๋“ฑ๋กํ•  ๋•Œ ์‚ฌ์šฉ๋˜๋Š” ์ฒซ ๋ฒˆ์งธ ์ธ์ˆ˜๋Š” ์‚ฌ์šฉ์ž ์ •์˜ ๊ตฌ์„ฑ์˜ `model_type`๊ณผ ์ผ์น˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ, ์‚ฌ์šฉ์ž ์ •์˜ ๋ชจ๋ธ์„ auto ํด๋ž˜์Šค์— ๋“ฑ๋กํ•  ๋•Œ ์‚ฌ์šฉ๋˜๋Š” ์ฒซ ๋ฒˆ์งธ ์ธ์ˆ˜๋Š” ํ•ด๋‹น ๋ชจ๋ธ์˜ `config_class`์™€ ์ผ์น˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค.
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # CPU์—์„œ ํšจ์œจ์ ์ธ ์ถ”๋ก ํ•˜๊ธฐ [[efficient-inference-on-cpu]] ์ด ๊ฐ€์ด๋“œ๋Š” CPU์—์„œ ๋Œ€๊ทœ๋ชจ ๋ชจ๋ธ์„ ํšจ์œจ์ ์œผ๋กœ ์ถ”๋ก ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ์ค‘์ ์„ ๋‘๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ## ๋” ๋น ๋ฅธ ์ถ”๋ก ์„ ์œ„ํ•œ `BetterTransformer` [[bettertransformer-for-faster-inference]] ์šฐ๋ฆฌ๋Š” ์ตœ๊ทผ CPU์—์„œ ํ…์ŠคํŠธ, ์ด๋ฏธ์ง€ ๋ฐ ์˜ค๋””์˜ค ๋ชจ๋ธ์˜ ๋น ๋ฅธ ์ถ”๋ก ์„ ์œ„ํ•ด `BetterTransformer`๋ฅผ ํ†ตํ•ฉํ–ˆ์Šต๋‹ˆ๋‹ค. ์ด ํ†ตํ•ฉ์— ๋Œ€ํ•œ ๋” ์ž์„ธํ•œ ๋‚ด์šฉ์€ [์ด ๋ฌธ์„œ](https://huggingface.co/docs/optimum/bettertransformer/overview)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ## PyTorch JIT ๋ชจ๋“œ (TorchScript) [[pytorch-jitmode-torchscript]] TorchScript๋Š” PyTorch ์ฝ”๋“œ์—์„œ ์ง๋ ฌํ™”์™€ ์ตœ์ ํ™”๊ฐ€ ๊ฐ€๋Šฅํ•œ ๋ชจ๋ธ์„ ์ƒ์„ฑํ• ๋•Œ ์“ฐ์ž…๋‹ˆ๋‹ค. TorchScript๋กœ ๋งŒ๋“ค์–ด์ง„ ํ”„๋กœ๊ทธ๋žจ์€ ๊ธฐ์กด Python ํ”„๋กœ์„ธ์Šค์—์„œ ์ €์žฅํ•œ ๋’ค, ์ข…์†์„ฑ์ด ์—†๋Š” ์ƒˆ๋กœ์šด ํ”„๋กœ์„ธ์Šค๋กœ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. PyTorch์˜ ๊ธฐ๋ณธ ์„ค์ •์ธ `eager` ๋ชจ๋“œ์™€ ๋น„๊ตํ–ˆ์„๋•Œ, `jit` ๋ชจ๋“œ๋Š” ์—ฐ์‚ฐ์ž ๊ฒฐํ•ฉ๊ณผ ๊ฐ™์€ ์ตœ์ ํ™” ๋ฐฉ๋ฒ•๋ก ์„ ํ†ตํ•ด ๋ชจ๋ธ ์ถ”๋ก ์—์„œ ๋Œ€๋ถ€๋ถ„ ๋” ๋‚˜์€ ์„ฑ๋Šฅ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. TorchScript์— ๋Œ€ํ•œ ์นœ์ ˆํ•œ ์†Œ๊ฐœ๋Š” [PyTorch TorchScript ํŠœํ† ๋ฆฌ์–ผ](https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html#tracing-modules)์„ ์ฐธ์กฐํ•˜์„ธ์š”. ### JIT ๋ชจ๋“œ์™€ ํ•จ๊ป˜ํ•˜๋Š” IPEX ๊ทธ๋ž˜ํ”„ ์ตœ์ ํ™” [[ipex-graph-optimization-with-jitmode]] Intelยฎ Extension for PyTorch(IPEX)๋Š” Transformers ๊ณ„์—ด ๋ชจ๋ธ์˜ jit ๋ชจ๋“œ์—์„œ ์ถ”๊ฐ€์ ์ธ ์ตœ์ ํ™”๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. jit ๋ชจ๋“œ์™€ ๋”๋ถˆ์–ด Intelยฎ Extension for PyTorch(IPEX)๋ฅผ ํ™œ์šฉํ•˜์‹œ๊ธธ ๊ฐ•๋ ฅํžˆ ๊ถŒ์žฅ๋“œ๋ฆฝ๋‹ˆ๋‹ค. Transformers ๋ชจ๋ธ์—์„œ ์ž์ฃผ ์‚ฌ์šฉ๋˜๋Š” ์ผ๋ถ€ ์—ฐ์‚ฐ์ž ํŒจํ„ด์€ ์ด๋ฏธ jit ๋ชจ๋“œ ์—ฐ์‚ฐ์ž ๊ฒฐํ•ฉ(operator fusion)์˜ ํ˜•ํƒœ๋กœ Intelยฎ Extension for PyTorch(IPEX)์—์„œ ์ง€์›๋˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. Multi-head-attention, Concat Linear, Linear+Add, Linear+Gelu, Add+LayerNorm ๊ฒฐํ•ฉ ํŒจํ„ด ๋“ฑ์ด ์ด์šฉ ๊ฐ€๋Šฅํ•˜๋ฉฐ ํ™œ์šฉํ–ˆ์„ ๋•Œ ์„ฑ๋Šฅ์ด ์šฐ์ˆ˜ํ•ฉ๋‹ˆ๋‹ค. ์—ฐ์‚ฐ์ž ๊ฒฐํ•ฉ์˜ ์ด์ ์€ ์‚ฌ์šฉ์ž์—๊ฒŒ ๊ณ ์Šค๋ž€ํžˆ ์ „๋‹ฌ๋ฉ๋‹ˆ๋‹ค. ๋ถ„์„์— ๋”ฐ๋ฅด๋ฉด, ์งˆ์˜ ์‘๋‹ต, ํ…์ŠคํŠธ ๋ถ„๋ฅ˜ ๋ฐ ํ† ํฐ ๋ถ„๋ฅ˜์™€ ๊ฐ™์€ ๊ฐ€์žฅ ์ธ๊ธฐ ์žˆ๋Š” NLP ํƒœ์Šคํฌ ์ค‘ ์•ฝ 70%๊ฐ€ ์ด๋Ÿฌํ•œ ๊ฒฐํ•ฉ ํŒจํ„ด์„ ์‚ฌ์šฉํ•˜์—ฌ Float32 ์ •๋ฐ€๋„์™€ BFloat16 ํ˜ผํ•ฉ ์ •๋ฐ€๋„ ๋ชจ๋‘์—์„œ ์„ฑ๋Šฅ์ƒ์˜ ์ด์ ์„ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [IPEX ๊ทธ๋ž˜ํ”„ ์ตœ์ ํ™”](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/features/graph_optimization.html)์— ๋Œ€ํ•œ ์ž์„ธํ•œ ์ •๋ณด๋ฅผ ํ™•์ธํ•˜์„ธ์š”. #### IPEX ์„ค์น˜: [[ipex-installation]] IPEX ๋ฐฐํฌ ์ฃผ๊ธฐ๋Š” PyTorch๋ฅผ ๋”ฐ๋ผ์„œ ์ด๋ฃจ์–ด์ง‘๋‹ˆ๋‹ค. ์ž์„ธํ•œ ์ •๋ณด๋Š” [IPEX ์„ค์น˜ ๋ฐฉ๋ฒ•](https://intel.github.io/intel-extension-for-pytorch/)์„ ํ™•์ธํ•˜์„ธ์š”. 
### JIT ๋ชจ๋“œ ์‚ฌ์šฉ๋ฒ• [[usage-of-jitmode]] ํ‰๊ฐ€ ๋˜๋Š” ์˜ˆ์ธก์„ ์œ„ํ•ด Trainer์—์„œ JIT ๋ชจ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๋ ค๋ฉด Trainer์˜ ๋ช…๋ น ์ธ์ˆ˜์— `jit_mode_eval`์„ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. <Tip warning={true}> PyTorch์˜ ๋ฒ„์ „์ด 1.14.0 ์ด์ƒ์ด๋ผ๋ฉด, jit ๋ชจ๋“œ๋Š” jit.trace์—์„œ dict ์ž…๋ ฅ์ด ์ง€์›๋˜๋ฏ€๋กœ, ๋ชจ๋“  ๋ชจ๋ธ์˜ ์˜ˆ์ธก๊ณผ ํ‰๊ฐ€๊ฐ€ ๊ฐœ์„ ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. PyTorch์˜ ๋ฒ„์ „์ด 1.14.0 ๋ฏธ๋งŒ์ด๋ผ๋ฉด, ์งˆ์˜ ์‘๋‹ต ๋ชจ๋ธ๊ณผ ๊ฐ™์ด forward ๋งค๊ฐœ๋ณ€์ˆ˜์˜ ์ˆœ์„œ๊ฐ€ jit.trace์˜ ํŠœํ”Œ ์ž…๋ ฅ ์ˆœ์„œ์™€ ์ผ์น˜ํ•˜๋Š” ๋ชจ๋ธ์— ๋“์ด ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ…์ŠคํŠธ ๋ถ„๋ฅ˜ ๋ชจ๋ธ๊ณผ ๊ฐ™์ด forward ๋งค๊ฐœ๋ณ€์ˆ˜ ์ˆœ์„œ๊ฐ€ jit.trace์˜ ํŠœํ”Œ ์ž…๋ ฅ ์ˆœ์„œ์™€ ๋‹ค๋ฅธ ๊ฒฝ์šฐ, jit.trace๊ฐ€ ์‹คํŒจํ•˜๋ฉฐ ์˜ˆ์™ธ๊ฐ€ ๋ฐœ์ƒํ•ฉ๋‹ˆ๋‹ค. ์ด๋•Œ ์˜ˆ์™ธ์ƒํ™ฉ์„ ์‚ฌ์šฉ์ž์—๊ฒŒ ์•Œ๋ฆฌ๊ธฐ ์œ„ํ•ด Logging์ด ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. </Tip> [Transformers ์งˆ์˜ ์‘๋‹ต](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering)์˜ ์‚ฌ์šฉ ์‚ฌ๋ก€ ์˜ˆ์‹œ๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. - CPU์—์„œ jit ๋ชจ๋“œ๋ฅผ ์‚ฌ์šฉํ•œ ์ถ”๋ก : <pre>python run_qa.py \ --model_name_or_path csarron/bert-base-uncased-squad-v1 \ --dataset_name squad \ --do_eval \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/ \ --no_cuda \ <b>--jit_mode_eval </b></pre> - CPU์—์„œ IPEX์™€ ํ•จ๊ป˜ jit ๋ชจ๋“œ๋ฅผ ์‚ฌ์šฉํ•œ ์ถ”๋ก : <pre>python run_qa.py \ --model_name_or_path csarron/bert-base-uncased-squad-v1 \ --dataset_name squad \ --do_eval \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/ \ --no_cuda \ <b>--use_ipex \</b> <b>--jit_mode_eval</b></pre>
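Trainer ์—†์ด ์ง์ ‘ ์ถ”๋ก  ์ฝ”๋“œ๋ฅผ ์ž‘์„ฑํ•˜๋Š” ๊ฒฝ์šฐ์—๋„ IPEX์™€ TorchScript(jit)๋ฅผ ํ•จ๊ป˜ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ์˜ˆ์‹œ๋กœ ๊ฐ€์ •ํ•œ ์ฒดํฌํฌ์ธํŠธ(`distilbert/distilbert-base-uncased-finetuned-sst-2-english`)๋ฅผ ์‚ฌ์šฉํ•œ ๊ฐ„๋‹จํ•œ ์Šค์ผ€์น˜์ด๋ฉฐ, ์‹ค์ œ ์„ฑ๋Šฅ ์ด๋“์€ ํ•˜๋“œ์›จ์–ด์™€ ๋ชจ๋ธ์— ๋”ฐ๋ผ ๋‹ฌ๋ผ์งˆ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค:

```python
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "distilbert/distilbert-base-uncased-finetuned-sst-2-english"  # ์˜ˆ์‹œ๋กœ ๊ฐ€์ •ํ•œ ์ฒดํฌํฌ์ธํŠธ
tokenizer = AutoTokenizer.from_pretrained(model_id)
# torchscript=True๋กœ ์„ค์ •ํ•˜๋ฉด ๋ชจ๋ธ์ด ํŠœํ”Œ์„ ๋ฐ˜ํ™˜ํ•˜๋ฏ€๋กœ jit.trace๊ฐ€ ์‰ฌ์›Œ์ง‘๋‹ˆ๋‹ค.
model = AutoModelForSequenceClassification.from_pretrained(model_id, torchscript=True)
model.eval()

# IPEX ์ตœ์ ํ™” ์ ์šฉ
model = ipex.optimize(model)

inputs = tokenizer("I love using Transformers on CPU!", return_tensors="pt")

with torch.no_grad():
    traced = torch.jit.trace(model, (inputs["input_ids"], inputs["attention_mask"]))
    traced = torch.jit.freeze(traced)
    logits = traced(inputs["input_ids"], inputs["attention_mask"])[0]

print(logits.argmax(dim=-1))
```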
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # ๊ณ ์ • ๊ธธ์ด ๋ชจ๋ธ์˜ ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ(Perplexity)[[perplexity-of-fixedlength-models]] [[open-in-colab]] ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ(Perplexity, PPL)๋Š” ๊ฐ€์žฅ ์ผ๋ฐ˜์ ์ธ ์–ธ์–ด ๋ชจ๋ธ ํ‰๊ฐ€์ง€ํ‘œ ์ค‘ ํ•˜๋‚˜์ž…๋‹ˆ๋‹ค. ์ž์„ธํžˆ ์•Œ์•„๋ณด๊ธฐ ์ „์— ์ด ํ‰๊ฐ€์ง€ํ‘œ๋Š” ๊ณ ์ „์ ์ธ ์–ธ์–ด ๋ชจ๋ธ(์ž๊ธฐํšŒ๊ท€ ๋˜๋Š” ์ธ๊ณผ์  ์–ธ์–ด ๋ชจ๋ธ์ด๋ผ๊ณ ๋„ ํ•จ)์—๋งŒ ์ ์šฉ๋˜๋ฉฐ BERT์™€ ๊ฐ™์€ ๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ์—๋Š” ์ž˜ ์ ์šฉํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค (BERT๋Š” [summary of the models](../en/model_summary) ๋ฌธ์„œ๋ฅผ ์ฐธ๊ณ ํ•˜์„ธ์š”). ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ๋Š” ์‹œํ€€์Šค์˜ ์Œ์˜ ๋กœ๊ทธ ์šฐ๋„(negative log-likelihood, NLL) ๊ฐ’์˜ ํ‰๊ท ์— ์ง€์ˆ˜(exponentiate)๋ฅผ ์ทจํ•œ ๊ฐ’์œผ๋กœ ์ •์˜๋ฉ๋‹ˆ๋‹ค. ํ† ํฐํ™”๋œ ์‹œํ€€์Šค \\(X = (x_0, x_1, \dots, x_t)\\) ๊ฐ€ ์žˆ์„ ๋•Œ, \\(X\\) ์˜ ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ๋Š” ์•„๋ž˜ ์ˆ˜์‹๊ณผ ๊ฐ™์ด ๊ตฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. $$\text{PPL}(X) = \exp \left\{ {-\frac{1}{t}\sum_i^t \log p_\theta (x_i|x_{<i}) } \right\}$$ \\(\log p_\theta (x_i|x_{<i})\\) ๋Š” ๋ชจ๋ธ์— i๋ฒˆ์งธ ์ด์ „๊นŒ์ง€ ํ† ํฐ์ด ์ฃผ์–ด์กŒ์„ ๋•Œ i๋ฒˆ์งธ ํ† ํฐ์˜ ๋กœ๊ทธ ์šฐ๋„๊ฐ’์ž…๋‹ˆ๋‹ค. ์ง๊ด€์ ์œผ๋กœ ๋ง๋ญ‰์น˜์—์„œ ์ง€์ •๋œ ํ† ํฐ ์ง‘ํ•ฉ์„ ๊ท ์ผํ•˜๊ฒŒ ์˜ˆ์ธกํ•˜๋Š” ๋ชจ๋ธ์˜ ๋Šฅ๋ ฅ์— ๋Œ€ํ•œ ํ‰๊ฐ€๋กœ ์ƒ๊ฐํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ค‘์š”ํ•œ ์ ์€ ํ† ํฐํ™” ๊ณผ์ •์ด ๋ชจ๋ธ์˜ ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ์— ์ง์ ‘์ ์ธ ์˜ํ–ฅ์„ ๋ฏธ์น˜๋ฏ€๋กœ ์„œ๋กœ ๋‹ค๋ฅธ ๋ชจ๋ธ์„ ๋น„๊ตํ•  ๋•Œ ํ•ญ์ƒ ์ด๋ฅผ ๊ณ ๋ คํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ๋ฐ์ดํ„ฐ์™€ ๋ชจ๋ธ ์˜ˆ์ธก ๊ฐ„์˜ cross-entropy ๊ฐ’์— ์ง€์ˆ˜๋ฅผ ์ทจํ•œ ๊ฒƒ๊ณผ ๋™์ผํ•ฉ๋‹ˆ๋‹ค. ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ์™€ ๋ฌธ์ž๋‹น ๋น„ํŠธ ์ˆ˜(BPC) ๋ฐ ๋ฐ์ดํ„ฐ ์••์ถ•๊ณผ์˜ ๊ด€๊ณ„์— ๋Œ€ํ•ด ๋” ์ง๊ด€์ ์ธ ์ดํ•ด๋ฅผ ์›ํ•˜์‹ ๋‹ค๋ฉด ๋‹ค์Œ ๊ธ€ [fantastic blog post on The Gradient](https://thegradient.pub/understanding-evaluation-metrics-for-language-models/)์„ ํ™•์ธํ•˜์„ธ์š”. ## ๊ณ ์ • ๊ธธ์ด ๋ชจ๋ธ์˜ ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ(PPL) ๊ณ„์‚ฐํ•˜๊ธฐ[[calculating-ppl-with-fixedlength-models]] ๋ชจ๋ธ์˜ ์ปจํ…์ŠคํŠธ ํฌ๊ธฐ๊ฐ€ ์ •ํ•ด์ ธ์žˆ์ง€ ์•Š๋‹ค๋ฉด, ์•„๋ž˜์™€ ๊ฐ™์ด ์‹œํ€€์Šค๋ฅผ ์ž๋™ ํšŒ๊ท€์ ์œผ๋กœ ๋ถ„ํ•ดํ•˜๊ณ  ๊ฐ ๋‹จ๊ณ„์—์„œ ์„ ํ–‰ ํ•˜๋Š” ์ „์ฒด ์‹œํ€€์Šค๋ฅผ ์กฐ๊ฑด๋ถ€ ํ™•๋ฅ ์— ๋„ฃ์–ด ๋ชจ๋ธ์˜ ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ๋ฅผ ๊ณ„์‚ฐํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. <img width="600" alt="Full decomposition of a sequence with unlimited context length" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/ppl_full.gif"/> ๊ทธ๋Ÿฌ๋‚˜ ๋ชจ๋ธ์˜ ๊ทผ์‚ฌ์น˜๋ฅผ ๊ตฌํ•  ๋•Œ๋Š” ์ผ๋ฐ˜์ ์œผ๋กœ ๋ชจ๋ธ์ด ์ฒ˜๋ฆฌํ•  ์ˆ˜ ์žˆ๋Š” ํ† ํฐ ์ˆ˜์— ์ œํ•œ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ๊ฐ€์žฅ ํฐ ๋ฒ„์ „์˜ [GPT-2](model_doc/gpt2)๋Š” ํ† ํฐ์˜ ๊ธธ์ด๊ฐ€ 1024๋กœ ๊ณ ์ •๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ \\(t\\) ๊ฐ€ 1024๋ณด๋‹ค ํฐ ๊ฒฝ์šฐ์— \\(p_\theta(x_t|x_{<t})\\) ์„ ๊ณ„์‚ฐํ•  ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค. ๋Œ€์‹  ์‹œํ€€์Šค๋Š” ์ผ๋ฐ˜์ ์œผ๋กœ ๋ชจ๋ธ์˜ ์ตœ๋Œ€ ์ž…๋ ฅ ํฌ๊ธฐ์™€ ๋™์ผํ•œ ๊ธธ์ด๋Š” ๊ฐ€์ง€๋Š” ๋ถ€๋ถ„ ์‹œํ€€์Šค๋กœ ์ชผ๊ฐญ๋‹ˆ๋‹ค. 
๋งŒ์•ฝ ๋ชจ๋ธ์˜ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด๊ฐ€ \\(k\\) ๋ผ๋ฉด, ํ† ํฐ \\(x_t\\) ์˜ ์šฐ๋„ ๊ฐ’์„ ๊ณ„์‚ฐํ•  ๋•Œ ์ด์ „ ํ† ํฐ์„ ๋ชจ๋‘ ์‚ฌ์šฉํ•˜์ง€ ์•Š๊ณ , \\(k-1\\) ํ† ํฐ๊นŒ์ง€ ์‚ฌ์šฉํ•ด ๋Œ€๋žต์ ์ธ ์šฐ๋„ ๊ฐ’์„ ์ถ”์ •ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์˜ ์‹œํ€€์Šค์— ๋Œ€ํ•œ ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ๋ฅผ ๊ณ„์‚ฐํ•  ๋•Œ, ์ˆ˜์›”ํ•˜์ง€๋งŒ ์ฐจ์„ ์ฑ…์€ ์‹œํ€€์Šค๋ฅผ ์ฒญํฌ๋กœ ์ชผ๊ฐœ๊ณ  ๋ถ„ํ•ด๋œ ๊ฐ ๋ถ€๋ถ„์˜ ๋กœ๊ทธ ์šฐ๋„ ๊ฐ’์„ ๋…๋ฆฝ์ ์œผ๋กœ ํ•ฉ์‚ฐํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. <img width="600" alt="Suboptimal PPL not taking advantage of full available context" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/ppl_chunked.gif"/> ์ด ๋ฐฉ๋ฒ•์€ ๊ฐ ๋ถ€๋ถ„์˜ ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ๋ฅผ ํ•œ ๋ฒˆ์˜ ํฌ์›Œ๋“œ ํŒจ์Šค๋กœ ๊ณ„์‚ฐํ•  ์ˆ˜ ์žˆ์–ด ๋น ๋ฅด์ง€๋งŒ ์ผ๋ฐ˜์ ์œผ๋กœ ๋” ๋†’์€(๋” ๋‚˜์œ) PPL์„ ์‚ฐ์ถœํ•ฉ๋‹ˆ๋‹ค. ์™œ๋ƒํ•˜๋ฉด ๋Œ€๋ถ€๋ถ„์˜ ์˜ˆ์ธก ๋‹จ๊ณ„์—์„œ ๋ชจ๋ธ์˜ ์ปจํ…์ŠคํŠธ๊ฐ€ ์ ๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ๋Œ€์‹ , ๊ณ ์ • ๊ธธ์ด ๋ชจ๋ธ์˜ PPL์€ ์Šฌ๋ผ์ด๋”ฉ ์œˆ๋„์šฐ ์ „๋žต์œผ๋กœ ํ‰๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด ์ „๋žต์—๋Š” ์ปจํ…์ŠคํŠธ ์œˆ๋„์šฐ์„ ๋ฐ˜๋ณต์ ์œผ๋กœ ์Šฌ๋ผ์ด๋”ฉํ•ด ๋ชจ๋ธ์ด ๊ฐ ์˜ˆ์ธก์„ ์ˆ˜ํ–‰ํ•  ๋•Œ ๋” ๋งŽ์€ ์ปจํ…์ŠคํŠธ๋ฅผ ๊ฐ–๋„๋ก ํ•˜๋Š” ์ž‘์—…์ด ํฌํ•จ๋ฉ๋‹ˆ๋‹ค. <img width="600" alt="Sliding window PPL taking advantage of all available context" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/ppl_sliding.gif"/> ์ด๋Š” ์‹œํ€€์Šค ํ™•๋ฅ ์˜ ์‹ค์ œ ๋ถ„ํ•ด์— ๋” ๊ฐ€๊นŒ์šด ๊ทผ์‚ฌ์น˜์ด๋ฉฐ ์ผ๋ฐ˜์ ์œผ๋กœ ๋” ์œ ๋ฆฌํ•œ ์ ์ˆ˜๋ฅผ ์‚ฐ์ถœํ•ฉ๋‹ˆ๋‹ค. ๋‹จ์ ์€ ๋ง๋ญ‰์น˜์˜ ๊ฐ ํ† ํฐ์— ๋Œ€ํ•ด ๋ณ„๋„์˜ ํฌ์›Œ๋“œ ํŒจ์Šค๊ฐ€ ํ•„์š”ํ•˜๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ํ˜„์‹ค์ ์œผ๋กœ ์ข‹์€ ์ ˆ์ถฉ์•ˆ์€ ํ•œ ๋ฒˆ์— ํ•œ ํ† ํฐ์”ฉ ์Šฌ๋ผ์ด๋”ฉํ•˜๋Š” ๊ฒƒ์ด ์•„๋‹ˆ๋ผ ๋” ํฐ ๊ฐ„๊ฒฉ์œผ๋กœ ์ปจํ…์ŠคํŠธ๋ฅผ ์ด๋™ํ•˜๋Š” ์ŠคํŠธ๋ผ์ด๋“œ๊ฐ€ ์ ์šฉ๋œ ์Šฌ๋ผ์ด๋”ฉ ์œˆ๋„์šฐ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ๊ณ„์‚ฐ์„ ํ›จ์”ฌ ๋” ๋น ๋ฅด๊ฒŒ ์ง„ํ–‰ํ•˜๋ฉด์„œ๋„ ๋ชจ๋ธ์— ๊ฐ ๋‹จ๊ณ„์—์„œ ์˜ˆ์ธก์„ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ๋Š” ๊ธด ์ปจํ…์ŠคํŠธ๋ฅผ ์ œ๊ณตํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ## ์˜ˆ์ œ: ๐Ÿค— Transformers์—์„œ GPT-2๋กœ ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ(perplexity) ๊ณ„์‚ฐํ•˜๊ธฐ[[example-calculating-perplexity-with-gpt2-in-transformers]] ์ด์ œ GPT-2๋กœ ์œ„์˜ ๊ณผ์ •์„ ์‹œ์—ฐํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ```python from transformers import GPT2LMHeadModel, GPT2TokenizerFast device = "cuda" model_id = "openai-community/gpt2-large" model = GPT2LMHeadModel.from_pretrained(model_id).to(device) tokenizer = GPT2TokenizerFast.from_pretrained(model_id) ``` WikiText-2 ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๊ฐ€์ ธ์˜ค๊ณ  ๋ช‡ ๊ฐ€์ง€ ์Šฌ๋ผ์ด๋”ฉ ์œˆ๋„์šฐ ์ „๋žต์„ ์‚ฌ์šฉํ•ด ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ๋ฅผ ๊ณ„์‚ฐํ•ด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ์ด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋Š” ํฌ๊ธฐ๊ฐ€ ์ž‘๊ณ  ํฌ์›Œ๋“œ ํŒจ์Šค ํ•œ ๋ฒˆ๋งŒ ์ˆ˜ํ–‰ํ•˜๊ธฐ ๋•Œ๋ฌธ์— ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋ฉ”๋ชจ๋ฆฌ์— ๊ฐ€์ ธ์˜ค๊ณ  ์ธ์ฝ”๋”ฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```python from datasets import load_dataset test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test") encodings = tokenizer("\n\n".join(test["text"]), return_tensors="pt") ``` ๐Ÿค— Transformers๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ๋ชจ๋ธ์˜ `labels`๋กœ `input_ids`๋ฅผ ์ „๋‹ฌํ•ด ๊ฐ ํ† ํฐ์— ๋Œ€ํ•œ ํ‰๊ท  ์Œ์˜ ์šฐ๋„ ๊ฐ’์„ ์†์‹ค๋กœ ๋ฐ˜ํ™˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์Šฌ๋ผ์ด๋”ฉ ์œˆ๋„์šฐ ๋ฐฉ์‹์„ ์‚ฌ์šฉํ•˜๋ฉด ๊ฐ ๋ฐ˜๋ณต๋งˆ๋‹ค ๋ชจ๋ธ์— ์ „๋‹ฌํ•˜๋Š” ํ† ํฐ์ด ๊ฒน์นฉ๋‹ˆ๋‹ค. ์ปจํ…์ŠคํŠธ๋กœ ์ฒ˜๋ฆฌํ•˜๋Š” ํ† ํฐ์— ๋Œ€ํ•œ ๋กœ๊ทธ ์šฐ๋„ ๊ฐ’์ด ์†์‹ค์— ํฌํ•จ๋˜๋Š” ๊ฒƒ์„ ์›ํ•˜์ง€ ์•Š๊ธฐ ๋•Œ๋ฌธ์— ์ด๋Ÿฌํ•œ ํ† ํฐ์˜ `input_ids`๋ฅผ `-100`์œผ๋กœ ์„ค์ •ํ•˜์—ฌ ๋ฌด์‹œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์€ ์ŠคํŠธ๋ผ์ด๋“œ(stride)๋ฅผ `512`๋กœ ์‚ฌ์šฉํ•œ ์˜ˆ์‹œ์ž…๋‹ˆ๋‹ค. 
์ฆ‰, ๋ชจ๋ธ์ด ํ•œ ํ† ํฐ์˜ ์กฐ๊ฑด๋ถ€ ์šฐ๋„ ๊ฐ’์„ ๊ณ„์‚ฐํ•  ๋•Œ ์ปจํ…์ŠคํŠธ์— ์ตœ์†Œํ•œ 512๊ฐœ์˜ ํ† ํฐ์ด ํฌํ•จ๋˜์–ด์žˆ๋‹ค๋Š” ์˜๋ฏธ์ž…๋‹ˆ๋‹ค (ํ•ด๋‹น ํ† ํฐ ์•ž์— 512๊ฐœ์˜ ํ† ํฐ์ด ์žˆ๋Š” ๊ฒฝ์šฐ). ```python import torch from tqdm import tqdm max_length = model.config.n_positions stride = 512 seq_len = encodings.input_ids.size(1) nlls = [] prev_end_loc = 0 for begin_loc in tqdm(range(0, seq_len, stride)): end_loc = min(begin_loc + max_length, seq_len) trg_len = end_loc - prev_end_loc # ๋งˆ์ง€๋ง‰ ๋ฃจํ”„์˜ ์ŠคํŠธ๋ผ์ด๋“œ ๊ฐ’๊ณผ ๋‹ค๋ฅผ ์ˆ˜ ์žˆ์Œ input_ids = encodings.input_ids[:, begin_loc:end_loc].to(device) target_ids = input_ids.clone() target_ids[:, :-trg_len] = -100 with torch.no_grad(): outputs = model(input_ids, labels=target_ids) # ์†์‹ค์€ ๋ชจ๋“  ์œ ํšจํ•œ ๋ ˆ์ด๋ธ”์— ๋Œ€ํ•œ ํ‰๊ท ๊ฐ’์„ ๊ตฌํ•˜๋Š” ๊ต์ฐจ ์—”ํŠธ๋กœํ”ผ(cross entropy)๋กœ ๊ณ„์‚ฐ๋ฉ๋‹ˆ๋‹ค. # ๋‚˜์ด๋ธŒ ๋ฒ ์ด์ง€์•ˆ ๋ชจ๋ธ์€ ๋‚ด๋ถ€์ ์œผ๋กœ ๋ ˆ์ด๋ธ”์„ ์™ผ์ชฝ์œผ๋กœ 1๊ฐœ์”ฉ ๋ฐ€๊ธฐ ๋•Œ๋ฌธ์—, (ํƒ€์ผ“ - 1)๊ฐœ ๋งŒํผ์˜ ๋ ˆ์ด๋ธ”์— ๋Œ€ํ•ด ์†์‹ค์„ ๊ณ„์‚ฐํ•ฉ๋‹ˆ๋‹ค. neg_log_likelihood = outputs.loss nlls.append(neg_log_likelihood) prev_end_loc = end_loc if end_loc == seq_len: break ppl = torch.exp(torch.stack(nlls).mean()) ``` ์ŠคํŠธ๋ผ์ด๋“œ๋ฅผ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด์™€ ๋™์ผํ•˜๊ฒŒ ์„ค์ •ํ•˜๋ฉด ์œ„์—์„œ ์„ค๋ช…ํ•œ ์ฐจ์„ ์ฑ…์ธ ๋น„์Šฌ๋ผ์ด๋”ฉ ์œˆ๋„์šฐ ์ „๋žต๊ณผ ๋™์ผํ•ฉ๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ ์ŠคํŠธ๋ผ์ด๋“œ๊ฐ€ ์ž‘์„์ˆ˜๋ก ๋ชจ๋ธ์ด ๊ฐ ์˜ˆ์ธก์„ ํ•  ๋•Œ ๋” ๋งŽ์€ ์ปจํ…์ŠคํŠธ๋ฅผ ๋ณผ ์ˆ˜ ์žˆ๊ฒŒ ๋˜์–ด ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ ๊ฐ’์ด ์ข‹์•„์ง‘๋‹ˆ๋‹ค. ์œ„์˜ ๊ณ„์‚ฐ์„ ํ† ํฐ์ด ๊ฒน์น˜์ง€ ์•Š๋„๋ก `stride = 1024`๋กœ ์„ค์ •ํ•˜๋ฉด PPL์€ `19.44`๋กœ GPT-2 ๋…ผ๋ฌธ์—์„œ ๋ณด๊ณ ๋œ `19.93`๊ณผ ๊ฑฐ์˜ ๋™์ผํ•ฉ๋‹ˆ๋‹ค. `stride = 512`๋กœ ์Šฌ๋ผ์ด๋”ฉ ์œˆ๋„์šฐ ์ „๋žต์„ ์‚ฌ์šฉํ•˜๋ฉด PPL์€ `16.45`๋กœ ๋–จ์–ด์ง‘๋‹ˆ๋‹ค. ์ด๋Š” ๋” ์ข‹์€ ์ ์ˆ˜์ผ ๋ฟ๋งŒ ์•„๋‹ˆ๋ผ ์‹œํ€€์Šค ํ™•๋ฅ ์˜ ์‹ค์ œ ์ž๋™ ํšŒ๊ท€ ๋ถ„ํ•ด์— ๋” ๊ฐ€๊นŒ์šด ๋ฐฉ์‹์œผ๋กœ ๊ณ„์‚ฐ๋ฉ๋‹ˆ๋‹ค.
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๋‹ค์ค‘ CPU์—์„œ ํšจ์œจ์ ์œผ๋กœ ํ›ˆ๋ จํ•˜๊ธฐ [[efficient-training-on-multiple-cpus]] ํ•˜๋‚˜์˜ CPU์—์„œ ํ›ˆ๋ จํ•˜๋Š” ๊ฒƒ์ด ๋„ˆ๋ฌด ๋Š๋ฆด ๋•Œ๋Š” ๋‹ค์ค‘ CPU๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ๋Š” PyTorch ๊ธฐ๋ฐ˜์˜ DDP๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ถ„์‚ฐ CPU ํ›ˆ๋ จ์„ ํšจ์œจ์ ์œผ๋กœ ์ˆ˜ํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. ## PyTorch์šฉ Intelยฎ oneCCL ๋ฐ”์ธ๋”ฉ [[intel-oneccl-bindings-for-pytorch]] [Intelยฎ oneCCL](https://github.com/oneapi-src/oneCCL) (collective communications library)์€ allreduce, allgather, alltoall๊ณผ ๊ฐ™์€ ์ง‘ํ•ฉ ํ†ต์‹ (collective communications)์„ ๊ตฌํ˜„ํ•œ ํšจ์œจ์ ์ธ ๋ถ„์‚ฐ ๋”ฅ๋Ÿฌ๋‹ ํ›ˆ๋ จ์„ ์œ„ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์ž…๋‹ˆ๋‹ค. oneCCL์— ๋Œ€ํ•œ ์ž์„ธํ•œ ์ •๋ณด๋Š” [oneCCL ๋ฌธ์„œ](https://spec.oneapi.com/versions/latest/elements/oneCCL/source/index.html)์™€ [oneCCL ์‚ฌ์–‘](https://spec.oneapi.com/versions/latest/elements/oneCCL/source/index.html)์„ ์ฐธ์กฐํ•˜์„ธ์š”. `oneccl_bindings_for_pytorch` ๋ชจ๋“ˆ (`torch_ccl`์€ ๋ฒ„์ „ 1.12 ์ด์ „์— ์‚ฌ์šฉ)์€ PyTorch C10D ProcessGroup API๋ฅผ ๊ตฌํ˜„ํ•˜๋ฉฐ, ์™ธ๋ถ€ ProcessGroup๋กœ ๋™์ ์œผ๋กœ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์œผ๋ฉฐ ํ˜„์žฌ Linux ํ”Œ๋žซํผ์—์„œ๋งŒ ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค. [oneccl_bind_pt](https://github.com/intel/torch-ccl)์—์„œ ๋” ์ž์„ธํ•œ ์ •๋ณด๋ฅผ ํ™•์ธํ•˜์„ธ์š”. ### PyTorch์šฉ Intelยฎ oneCCL ๋ฐ”์ธ๋”ฉ ์„ค์น˜: [[intel-oneccl-bindings-for-pytorch-installation]] ๋‹ค์Œ Python ๋ฒ„์ „์— ๋Œ€ํ•œ Wheel ํŒŒ์ผ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. | Extension Version | Python 3.6 | Python 3.7 | Python 3.8 | Python 3.9 | Python 3.10 | | :---------------: | :--------: | :--------: | :--------: | :--------: | :---------: | | 1.13.0 | | โˆš | โˆš | โˆš | โˆš | | 1.12.100 | | โˆš | โˆš | โˆš | โˆš | | 1.12.0 | | โˆš | โˆš | โˆš | โˆš | | 1.11.0 | | โˆš | โˆš | โˆš | โˆš | | 1.10.0 | โˆš | โˆš | โˆš | โˆš | | ```bash pip install oneccl_bind_pt=={pytorch_version} -f https://developer.intel.com/ipex-whl-stable-cpu ``` `{pytorch_version}`์€ 1.13.0๊ณผ ๊ฐ™์ด PyTorch ๋ฒ„์ „์„ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. [oneccl_bind_pt ์„ค์น˜](https://github.com/intel/torch-ccl)์— ๋Œ€ํ•œ ๋” ๋งŽ์€ ์ ‘๊ทผ ๋ฐฉ๋ฒ•์„ ํ™•์ธํ•ด ๋ณด์„ธ์š”. oneCCL๊ณผ PyTorch์˜ ๋ฒ„์ „์€ ์ผ์น˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. <Tip warning={true}> oneccl_bindings_for_pytorch 1.12.0 ๋ฒ„์ „์˜ ๋ฏธ๋ฆฌ ๋นŒ๋“œ๋œ Wheel ํŒŒ์ผ์€ PyTorch 1.12.1๊ณผ ํ˜ธํ™˜๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค(PyTorch 1.12.0์šฉ์ž…๋‹ˆ๋‹ค). PyTorch 1.12.1์€ oneccl_bindings_for_pytorch 1.12.10 ๋ฒ„์ „๊ณผ ํ•จ๊ป˜ ์‚ฌ์šฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. </Tip> ## Intelยฎ MPI ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ [[intel-mpi-library]] ์ด ํ‘œ์ค€ ๊ธฐ๋ฐ˜ MPI ๊ตฌํ˜„์„ ์‚ฌ์šฉํ•˜์—ฌ Intelยฎ ์•„ํ‚คํ…์ฒ˜์—์„œ ์œ ์—ฐํ•˜๊ณ  ํšจ์œจ์ ์ด๋ฉฐ ํ™•์žฅ ๊ฐ€๋Šฅํ•œ ํด๋Ÿฌ์Šคํ„ฐ ๋ฉ”์‹œ์ง•์„ ์ œ๊ณตํ•˜์„ธ์š”. ์ด ๊ตฌ์„ฑ ์š”์†Œ๋Š” Intelยฎ oneAPI HPC Toolkit์˜ ์ผ๋ถ€์ž…๋‹ˆ๋‹ค. oneccl_bindings_for_pytorch๋Š” MPI ๋„๊ตฌ ์„ธํŠธ์™€ ํ•จ๊ป˜ ์„ค์น˜๋ฉ๋‹ˆ๋‹ค. ์‚ฌ์šฉํ•˜๊ธฐ ์ „์— ํ™˜๊ฒฝ์„ ์†Œ์Šค๋กœ ์ง€์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 
Intelยฎ oneCCL ๋ฒ„์ „ 1.12.0 ์ด์ƒ์ธ ๊ฒฝ์šฐ ```bash oneccl_bindings_for_pytorch_path=$(python -c "from oneccl_bindings_for_pytorch import cwd; print(cwd)") source $oneccl_bindings_for_pytorch_path/env/setvars.sh ``` Intelยฎ oneCCL ๋ฒ„์ „์ด 1.12.0 ๋ฏธ๋งŒ์ธ ๊ฒฝ์šฐ ```bash torch_ccl_path=$(python -c "import torch; import torch_ccl; import os; print(os.path.abspath(os.path.dirname(torch_ccl.__file__)))") source $torch_ccl_path/env/setvars.sh ``` #### IPEX ์„ค์น˜: [[ipex-installation]] IPEX๋Š” Float32์™€ BFloat16์„ ๋ชจ๋‘ ์‚ฌ์šฉํ•˜๋Š” CPU ํ›ˆ๋ จ์„ ์œ„ํ•œ ์„ฑ๋Šฅ ์ตœ์ ํ™”๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. [single CPU section](./perf_train_cpu)์„ ์ฐธ์กฐํ•˜์„ธ์š”. ์ด์–ด์„œ ๋‚˜์˜ค๋Š” "Trainer์—์„œ์˜ ์‚ฌ์šฉ"์€ Intelยฎ MPI ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ mpirun์„ ์˜ˆ๋กœ ๋“ค์—ˆ์Šต๋‹ˆ๋‹ค. ## Trainer์—์„œ์˜ ์‚ฌ์šฉ [[usage-in-trainer]] Trainer์—์„œ ccl ๋ฐฑ์—”๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฉ€ํ‹ฐ CPU ๋ถ„์‚ฐ ํ›ˆ๋ จ์„ ํ™œ์„ฑํ™”ํ•˜๋ ค๋ฉด ๋ช…๋ น ์ธ์ˆ˜์— **`--ddp_backend ccl`**์„ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. [์งˆ์˜ ์‘๋‹ต ์˜ˆ์ œ](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering)๋ฅผ ์‚ฌ์šฉํ•œ ์˜ˆ๋ฅผ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ ๋ช…๋ น์€ ํ•œ Xeon ๋…ธ๋“œ์—์„œ 2๊ฐœ์˜ ํ”„๋กœ์„ธ์Šค๋กœ ํ›ˆ๋ จ์„ ํ™œ์„ฑํ™”ํ•˜๋ฉฐ, ๊ฐ ์†Œ์ผ“๋‹น ํ•˜๋‚˜์˜ ํ”„๋กœ์„ธ์Šค๊ฐ€ ์‹คํ–‰๋ฉ๋‹ˆ๋‹ค. OMP_NUM_THREADS/CCL_WORKER_COUNT ๋ณ€์ˆ˜๋Š” ์ตœ์ ์˜ ์„ฑ๋Šฅ์„ ์œ„ํ•ด ์กฐ์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```shell script export CCL_WORKER_COUNT=1 export MASTER_ADDR=127.0.0.1 mpirun -n 2 -genv OMP_NUM_THREADS=23 \ python3 run_qa.py \ --model_name_or_path google-bert/bert-large-uncased \ --dataset_name squad \ --do_train \ --do_eval \ --per_device_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/debug_squad/ \ --no_cuda \ --ddp_backend ccl \ --use_ipex ``` ๋‹ค์Œ ๋ช…๋ น์€ ๋‘ ๊ฐœ์˜ Xeon(๋…ธ๋“œ0 ๋ฐ ๋…ธ๋“œ1, ์ฃผ ํ”„๋กœ์„ธ์Šค๋กœ ๋…ธ๋“œ0์„ ์‚ฌ์šฉ)์—์„œ ์ด 4๊ฐœ์˜ ํ”„๋กœ์„ธ์Šค๋กœ ํ›ˆ๋ จ์„ ํ™œ์„ฑํ™”ํ•˜๋ฉฐ, ๊ฐ ์†Œ์ผ“๋‹น ํ•˜๋‚˜์˜ ํ”„๋กœ์„ธ์Šค๊ฐ€ ์‹คํ–‰๋ฉ๋‹ˆ๋‹ค. OMP_NUM_THREADS/CCL_WORKER_COUNT ๋ณ€์ˆ˜๋Š” ์ตœ์ ์˜ ์„ฑ๋Šฅ์„ ์œ„ํ•ด ์กฐ์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋…ธ๋“œ0์—์„œ๋Š” ๊ฐ ๋…ธ๋“œ์˜ IP ์ฃผ์†Œ๋ฅผ ํฌํ•จํ•˜๋Š” ๊ตฌ์„ฑ ํŒŒ์ผ(์˜ˆ: hostfile)์„ ์ƒ์„ฑํ•˜๊ณ  ํ•ด๋‹น ๊ตฌ์„ฑ ํŒŒ์ผ ๊ฒฝ๋กœ๋ฅผ ์ธ์ˆ˜๋กœ ์ „๋‹ฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```shell script cat hostfile xxx.xxx.xxx.xxx #node0 ip xxx.xxx.xxx.xxx #node1 ip ``` ์ด์ œ ๋…ธ๋“œ0์—์„œ ๋‹ค์Œ ๋ช…๋ น์„ ์‹คํ–‰ํ•˜๋ฉด **4DDP**๊ฐ€ ๋…ธ๋“œ0 ๋ฐ ๋…ธ๋“œ1์—์„œ BF16 ์ž๋™ ํ˜ผํ•ฉ ์ •๋ฐ€๋„๋กœ ํ™œ์„ฑํ™”๋ฉ๋‹ˆ๋‹ค. ```shell script export CCL_WORKER_COUNT=1 export MASTER_ADDR=xxx.xxx.xxx.xxx #node0 ip mpirun -f hostfile -n 4 -ppn 2 \ -genv OMP_NUM_THREADS=23 \ python3 run_qa.py \ --model_name_or_path google-bert/bert-large-uncased \ --dataset_name squad \ --do_train \ --do_eval \ --per_device_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/debug_squad/ \ --no_cuda \ --ddp_backend ccl \ --use_ipex \ --bf16 ```
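Trainer ๋Œ€์‹  ์ž์ฒด ํ•™์Šต ๋ฃจํ”„๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ์—๋Š” ์•„๋ž˜์™€ ๊ฐ™์ด ccl ๋ฐฑ์—”๋“œ๋กœ ํ”„๋กœ์„ธ์Šค ๊ทธ๋ฃน์„ ์ง์ ‘ ์ดˆ๊ธฐํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” mpirun(Intelยฎ MPI)์ด ์„ค์ •ํ•˜๋Š” `PMI_RANK`/`PMI_SIZE` ํ™˜๊ฒฝ ๋ณ€์ˆ˜๋ฅผ ์‚ฌ์šฉํ•œ๋‹ค๊ณ  ๊ฐ€์ •ํ•œ ์Šค์ผ€์น˜์ด๋ฉฐ, ์‹ค์ œ ํ™˜๊ฒฝ์— ๋”ฐ๋ผ ๋ณ€์ˆ˜ ์ด๋ฆ„๊ณผ ์ฃผ์†Œ/ํฌํŠธ ์„ค์ •์€ ๋‹ค๋ฅผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค:

```python
import os

import torch.distributed as dist
import oneccl_bindings_for_pytorch  # noqa: F401 - ccl ๋ฐฑ์—”๋“œ๋ฅผ ๋“ฑ๋กํ•ฉ๋‹ˆ๋‹ค

# mpirun์ด ์„ค์ •ํ•˜๋Š” ํ™˜๊ฒฝ ๋ณ€์ˆ˜๋ฅผ torch.distributed๊ฐ€ ๊ธฐ๋Œ€ํ•˜๋Š” ์ด๋ฆ„์œผ๋กœ ์˜ฎ๊น๋‹ˆ๋‹ค(๊ฐ€์ •).
os.environ.setdefault("RANK", os.environ.get("PMI_RANK", "0"))
os.environ.setdefault("WORLD_SIZE", os.environ.get("PMI_SIZE", "1"))
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")  # ์‹ค์ œ๋กœ๋Š” ์ฃผ ๋…ธ๋“œ IP๋กœ ์„ค์ •
os.environ.setdefault("MASTER_PORT", "29500")

dist.init_process_group(backend="ccl")
print(f"rank {dist.get_rank()} / world size {dist.get_world_size()}")
```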
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๋‹ค์ค‘ GPU์—์„œ ํšจ์œจ์ ์ธ ํ›ˆ๋ จ [[efficient-training-on-multiple-gpus]] ๋‹จ์ผ GPU์—์„œ์˜ ํ›ˆ๋ จ์ด ๋„ˆ๋ฌด ๋Š๋ฆฌ๊ฑฐ๋‚˜ ๋ชจ๋ธ ๊ฐ€์ค‘์น˜๊ฐ€ ๋‹จ์ผ GPU์˜ ๋ฉ”๋ชจ๋ฆฌ์— ๋งž์ง€ ์•Š๋Š” ๊ฒฝ์šฐ, ๋‹ค์ค‘-GPU ์„ค์ •์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ๋‹จ์ผ GPU์—์„œ ๋‹ค์ค‘ GPU๋กœ ์ „ํ™˜ํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” ์ž‘์—…์„ ๋ถ„์‚ฐํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ, ํ…์„œ ๋˜๋Š” ํŒŒ์ดํ”„๋ผ์ธ๊ณผ ๊ฐ™์€ ๋ณ‘๋ ฌํ™” ๊ธฐ๋ฒ•์„ ์‚ฌ์šฉํ•˜์—ฌ ์ž‘์—…์„ ๋ณ‘๋ ฌ๋กœ ์ฒ˜๋ฆฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ด๋Ÿฌํ•œ ์„ค์ •์„ ๋ชจ๋‘์—๊ฒŒ ์ ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ์™„๋ฒฝํ•œ ํ•ด๊ฒฐ์ฑ…์€ ์—†์œผ๋ฉฐ, ์–ด๋–ค ์„ค์ •์ด ๊ฐ€์žฅ ์ ํ•ฉํ•œ์ง€๋Š” ์‚ฌ์šฉํ•˜๋Š” ํ•˜๋“œ์›จ์–ด์— ๋”ฐ๋ผ ๋‹ฌ๋ผ์ง‘๋‹ˆ๋‹ค. ์ด ๋ฌธ์„œ๋Š” ์ฃผ๋กœ PyTorch ๊ธฐ๋ฐ˜์˜ ๊ตฌํ˜„์„ ์ค‘์‹ฌ์œผ๋กœ ์„ค๋ช…ํ•˜๋ฉฐ, ๋Œ€๋ถ€๋ถ„์˜ ๊ฐœ๋…์€ ๋‹ค๋ฅธ ํ”„๋ ˆ์ž„์›Œํฌ์—๋„ ์ ์šฉ๋  ์ˆ˜ ์žˆ์„ ๊ฒƒ์œผ๋กœ ์˜ˆ์ƒ๋ฉ๋‹ˆ๋‹ค. <Tip> ์ฐธ๊ณ : [๋‹จ์ผ GPU ์„น์…˜](perf_train_gpu_one)์—์„œ ์†Œ๊ฐœ๋œ ์ „๋žต(ํ˜ผํ•ฉ ์ •๋ฐ€๋„ ํ›ˆ๋ จ ๋˜๋Š” ๊ทธ๋ž˜๋””์–ธํŠธ ๋ˆ„์  ๋“ฑ)์€ ์ผ๋ฐ˜์ ์œผ๋กœ ๋ชจ๋ธ ํ›ˆ๋ จ์— ์ ์šฉ๋˜๋ฉฐ, ๋‹ค์ค‘-GPU ๋˜๋Š” CPU ํ›ˆ๋ จ๊ณผ ๊ฐ™์€ ๋‹ค์Œ ์„น์…˜์œผ๋กœ ์ง„์ž…ํ•˜๊ธฐ ์ „์— ํ•ด๋‹น ์„น์…˜์„ ์ฐธ๊ณ ํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. </Tip> ๋จผ์ € 1D ๋ณ‘๋ ฌํ™” ๊ธฐ์ˆ ์— ๋Œ€ํ•ด ์ž์„ธํžˆ ๋…ผ์˜ํ•œ ํ›„, ์ด๋Ÿฌํ•œ ๊ธฐ์ˆ ์„ ๊ฒฐํ•ฉํ•˜์—ฌ 2D ๋ฐ 3D ๋ณ‘๋ ฌํ™”๋ฅผ ๊ตฌํ˜„ํ•˜์—ฌ ๋” ๋น ๋ฅธ ํ›ˆ๋ จ๊ณผ ๋” ํฐ ๋ชจ๋ธ์„ ์ง€์›ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์‚ดํŽด๋ณผ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋˜ํ•œ ๋‹ค๋ฅธ ํšจ๊ณผ์ ์ธ ๋Œ€์•ˆ ๋ฐฉ์‹๋„ ์†Œ๊ฐœ๋  ์˜ˆ์ •์ž…๋‹ˆ๋‹ค. ## ๊ฐœ๋… [[concepts]] ๋‹ค์Œ์€ ์ด ๋ฌธ์„œ์—์„œ ์ž์„ธํžˆ ์„ค๋ช…๋  ์ฃผ์š” ๊ฐœ๋…์— ๋Œ€ํ•œ ๊ฐ„๋‹จํ•œ ์„ค๋ช…์ž…๋‹ˆ๋‹ค. 1. **DataParallel (DP)** - ๋™์ผํ•œ ์„ค์ •์ด ์—ฌ๋Ÿฌ ๋ฒˆ ๋ณต์ œ๋˜๊ณ , ๊ฐ ์„ค์ •์— ๋ฐ์ดํ„ฐ ์ผ๋ถ€๋ฅผ ๋ฐ›์Šต๋‹ˆ๋‹ค. ์ฒ˜๋ฆฌ๋Š” ๋ณ‘๋ ฌ๋กœ ์ˆ˜ํ–‰๋˜๋ฉฐ ๋ชจ๋“  ์„ค์ •์€ ๊ฐ ํ›ˆ๋ จ ๋‹จ๊ณ„์˜ ๋๋‚  ๋•Œ ๋™๊ธฐํ™”๋ฉ๋‹ˆ๋‹ค. 2. **TensorParallel (TP)** - ๊ฐ ํ…์„œ๋Š” ์—ฌ๋Ÿฌ ๊ฐœ์˜ ๋ฌถ์Œ์œผ๋กœ ๋ถ„ํ• ๋˜๊ธฐ์—, ์ „์ฒด ํ…์„œ๊ฐ€ ๋‹จ์ผ GPU์— ์ƒ์ฃผํ•˜๋Š” ๋Œ€์‹  ํ…์„œ์˜ ๊ฐ ์ƒค๋“œ๊ฐ€ ์ง€์ •๋œ GPU์— ์ƒ์ฃผํ•ฉ๋‹ˆ๋‹ค. ์ฒ˜๋ฆฌํ•˜๋Š” ๋™์•ˆ ๊ฐ ์ƒค๋“œ๋Š” ์„œ๋กœ ๋‹ค๋ฅธ GPU์—์„œ ๊ฐœ๋ณ„์ ์œผ๋กœ ๋ณ‘๋ ฌ ์ฒ˜๋ฆฌ๋˜๋ฉฐ ๊ฒฐ๊ณผ๋Š” ๋‹จ๊ณ„๊ฐ€ ๋๋‚  ๋•Œ ๋™๊ธฐํ™”๋ฉ๋‹ˆ๋‹ค. ๋ถ„ํ• ์ด ์ˆ˜ํ‰ ์ˆ˜์ค€์—์„œ ์ด๋ฃจ์–ด์ง€๊ธฐ ๋•Œ๋ฌธ์— ์ด๋ฅผ ์ˆ˜ํ‰ ๋ณ‘๋ ฌ ์ฒ˜๋ฆฌ๋ผ๊ณ  ๋ถ€๋ฅผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 3. **PipelineParallel (PP)** - ๋ชจ๋ธ์ด ์ˆ˜์ง์œผ๋กœ (๋ ˆ์ด์–ด ์ˆ˜์ค€) ์—ฌ๋Ÿฌ GPU์— ๋ถ„ํ• ๋˜์–ด ๋ชจ๋ธ์˜ ๋‹จ์ผ GPU์—๋Š” ํ•˜๋‚˜ ๋˜๋Š” ์—ฌ๋Ÿฌ ๋ ˆ์ด์–ด๊ฐ€ ๋ฐฐ์น˜๋ฉ๋‹ˆ๋‹ค. ๊ฐ GPU๋Š” ํŒŒ์ดํ”„๋ผ์ธ์˜ ์„œ๋กœ ๋‹ค๋ฅธ ๋‹จ๊ณ„๋ฅผ ๋ณ‘๋ ฌ๋กœ ์ฒ˜๋ฆฌํ•˜๋ฉฐ ์ž‘์€ ๋ฐฐ์น˜ ๋ฌถ์Œ์—์„œ ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค. 4. **Zero Redundancy Optimizer (ZeRO)** - TP์™€ ์œ ์‚ฌํ•˜๊ฒŒ ํ…์„œ๋ฅผ ์ƒค๋”ฉํ•˜์ง€๋งŒ, ์ „์ฒด ํ…์„œ๋Š” ์ˆœ๋ฐฉํ–ฅ ๋˜๋Š” ์—ญ๋ฐฉํ–ฅ ๊ณ„์‚ฐ์„ ์œ„ํ•ด ์žฌ๊ตฌ์„ฑ๋˜๋ฏ€๋กœ ๋ชจ๋ธ์„ ์ˆ˜์ •ํ•  ํ•„์š”๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ ์ œํ•œ๋œ GPU ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ๋ณด์™„ํ•˜๊ธฐ ์œ„ํ•ด ๋‹ค์–‘ํ•œ ์˜คํ”„๋กœ๋“œ ๊ธฐ์ˆ ์„ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. 5. 
**Sharded DDP** - ZeRO์˜ ๊ธฐ๋ณธ ๊ฐœ๋…์œผ๋กœ ๋‹ค๋ฅธ ZeRO ๊ตฌํ˜„์—์„œ๋„ ์‚ฌ์šฉ๋˜๋Š” ์šฉ์–ด์ž…๋‹ˆ๋‹ค. ๊ฐ ๊ฐœ๋…์˜ ๊ตฌ์ฒด์ ์ธ ๋‚ด์šฉ์— ๋Œ€ํ•ด ์ž์„ธํžˆ ๋“ค์–ด๊ฐ€๊ธฐ ์ „์— ๋Œ€๊ทœ๋ชจ ์ธํ”„๋ผ์—์„œ ๋Œ€๊ทœ๋ชจ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๋Š” ๊ฒฝ์šฐ์˜ ๋Œ€๋žต์ ์ธ ๊ฒฐ์ • ๊ณผ์ •์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ## ํ™•์žฅ์„ฑ ์ „๋žต [[scalability-strategy]] **โ‡จ ๋‹จ์ผ ๋…ธ๋“œ / ๋‹ค์ค‘-GPU** * ๋ชจ๋ธ์ด ๋‹จ์ผ GPU์— ๋งž๋Š” ๊ฒฝ์šฐ: 1. DDP - ๋ถ„์‚ฐ DP 2. ZeRO - ์ƒํ™ฉ๊ณผ ๊ตฌ์„ฑ์— ๋”ฐ๋ผ ๋” ๋น ๋ฅผ ์ˆ˜๋„ ์žˆ๊ณ  ๊ทธ๋ ‡์ง€ ์•Š์„ ์ˆ˜๋„ ์žˆ์Œ * ๋ชจ๋ธ์ด ๋‹จ์ผ GPU์— ๋งž์ง€ ์•Š๋Š” ๊ฒฝ์šฐ: 1. PP 2. ZeRO 3. TP ๋…ธ๋“œ ๋‚ด ์—ฐ๊ฒฐ ์†๋„๊ฐ€ ๋งค์šฐ ๋น ๋ฅธ NVLINK ๋˜๋Š” NVSwitch์˜ ๊ฒฝ์šฐ ์„ธ ๊ฐ€์ง€ ๋ฐฉ๋ฒ•์€ ๋Œ€๋ถ€๋ถ„ ๋น„์Šทํ•œ ์„ฑ๋Šฅ์„ ๋ณด์—ฌ์•ผ ํ•˜๋ฉฐ, PP๊ฐ€ ์—†๋Š” ๊ฒฝ์šฐ TP ๋˜๋Š” ZeRO๋ณด๋‹ค ๋น ๋ฅผ ๊ฒƒ์ž…๋‹ˆ๋‹ค. TP์˜ ์ •๋„๋„ ์ฐจ์ด๋ฅผ ๋งŒ๋“ค ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํŠน์ • ์„ค์ •์—์„œ ์Šน์ž๋ฅผ ์ฐพ๊ธฐ ์œ„ํ•ด ์‹คํ—˜ํ•˜๋Š” ๊ฒƒ์ด ๊ฐ€์žฅ ์ข‹์Šต๋‹ˆ๋‹ค. TP๋Š” ๊ฑฐ์˜ ํ•ญ์ƒ ๋‹จ์ผ ๋…ธ๋“œ ๋‚ด์—์„œ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ์ฆ‰, TP ํฌ๊ธฐ <= ๋…ธ๋“œ๋‹น GPU ์ˆ˜์ž…๋‹ˆ๋‹ค. * ๊ฐ€์žฅ ํฐ ๋ ˆ์ด์–ด๊ฐ€ ๋‹จ์ผ GPU์— ๋งž์ง€ ์•Š๋Š” ๊ฒฝ์šฐ: 1. ZeRO๋ฅผ ์‚ฌ์šฉํ•˜์ง€ ์•Š๋Š” ๊ฒฝ์šฐ - PP๋งŒ์œผ๋กœ๋Š” ๋งž์ง€ ์•Š์œผ๋ฏ€๋กœ TP๋ฅผ ๋ฐ˜๋“œ์‹œ ์‚ฌ์šฉํ•ด์•ผ ํ•จ 2. ZeRO๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ์—๋Š” ์œ„์˜ "๋‹จ์ผ GPU" ํ•ญ๋ชฉ๊ณผ ๋™์ผ **โ‡จ ๋‹ค์ค‘ ๋…ธ๋“œ / ๋‹ค์ค‘ GPU** * ๋…ธ๋“œ ๊ฐ„ ์—ฐ๊ฒฐ ์†๋„๊ฐ€ ๋น ๋ฅธ ๊ฒฝ์šฐ: 1. ZeRO - ๋ชจ๋ธ์— ๋Œ€๋ถ€๋ถ„์˜ ์ˆ˜์ •์„ ํ•„์š”๋กœ ํ•˜์ง€ ์•Š์Œ 2. PP+TP+DP - ํ†ต์‹ ์ด ์ ์ง€๋งŒ ๋ชจ๋ธ์— ๋Œ€๋Œ€์ ์ธ ๋ณ€๊ฒฝ์ด ํ•„์š”ํ•จ * ๋…ธ๋“œ ๊ฐ„ ์—ฐ๊ฒฐ ์†๋„๊ฐ€ ๋Š๋ฆฌ๋ฉฐ, GPU ๋ฉ”๋ชจ๋ฆฌ๊ฐ€ ์—ฌ์ „ํžˆ ๋ถ€์กฑํ•œ ๊ฒฝ์šฐ: 1. DP+PP+TP+ZeRO-1 ## ๋ฐ์ดํ„ฐ ๋ณ‘๋ ฌํ™” [[data-parallelism]] 2๊ฐœ์˜ GPU๋งŒ์œผ๋กœ๋„ ๋Œ€๋ถ€๋ถ„์˜ ์‚ฌ์šฉ์ž๋“ค์€ `DataParallel` (DP)๊ณผ `DistributedDataParallel` (DDP)์„ ํ†ตํ•ด ํ–ฅ์ƒ๋œ ํ›ˆ๋ จ ์†๋„๋ฅผ ๋ˆ„๋ฆด ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Š” PyTorch์˜ ๋‚ด์žฅ ๊ธฐ๋Šฅ์ž…๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ DDP๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ์ข‹์œผ๋ฉฐ, DP๋Š” ์ผ๋ถ€ ๋ชจ๋ธ์—์„œ ์ž‘๋™ํ•˜์ง€ ์•Š์„ ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ ์ฃผ์˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. [PyTorch ๋ฌธ์„œ](https://pytorch.org/docs/master/generated/torch.nn.DataParallel.html)์—์„œ๋„ DDP์˜ ์‚ฌ์šฉ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ### DP vs DDP [[dp-vs-ddp]] `DistributedDataParallel` (DDP)์€ ์ผ๋ฐ˜์ ์œผ๋กœ `DataParallel` (DP)๋ณด๋‹ค ๋น ๋ฅด์ง€๋งŒ, ํ•ญ์ƒ ๊ทธ๋ ‡์ง€๋Š” ์•Š์Šต๋‹ˆ๋‹ค: * DP๋Š” ํŒŒ์ด์ฌ ์Šค๋ ˆ๋“œ ๊ธฐ๋ฐ˜์ธ ๋ฐ˜๋ฉด, DDP๋Š” ๋‹ค์ค‘ ํ”„๋กœ์„ธ์Šค ๊ธฐ๋ฐ˜์ด๊ธฐ ๋•Œ๋ฌธ์— GIL๊ณผ ๊ฐ™์€ ํŒŒ์ด์ฌ ์Šค๋ ˆ๋“œ ์ œํ•œ์ด ์—†์Šต๋‹ˆ๋‹ค. * ๊ทธ๋Ÿฌ๋‚˜ GPU ์นด๋“œ ๊ฐ„์˜ ๋Š๋ฆฐ ์ƒํ˜ธ ์—ฐ๊ฒฐ์„ฑ์€ DDP๋กœ ์ธํ•ด ์‹ค์ œ๋กœ ๋Š๋ฆฐ ๊ฒฐ๊ณผ๋ฅผ ๋‚ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๋‘ ๋ชจ๋“œ ๊ฐ„์˜ GPU ๊ฐ„ ํ†ต์‹  ์˜ค๋ฒ„ํ—ค๋“œ์˜ ์ฃผ์š” ์ฐจ์ด์ ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: [DDP](https://pytorch.org/docs/master/notes/ddp.html): - ์‹œ์ž‘ํ•  ๋•Œ, ์ฃผ ํ”„๋กœ์„ธ์Šค๊ฐ€ ๋ชจ๋ธ์„ gpu 0์—์„œ ๋‹ค๋ฅธ ๋ชจ๋“  gpu๋กœ ๋ณต์ œํ•ฉ๋‹ˆ๋‹ค. - ๊ทธ๋Ÿฐ ๋‹ค์Œ ๊ฐ ๋ฐฐ์น˜์— ๋Œ€ํ•ด: 1. ๊ฐ gpu๋Š” ์ž์ฒด ๋ฏธ๋‹ˆ ๋ฐฐ์น˜ ๋ฐ์ดํ„ฐ๋ฅผ ์ง์ ‘ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. 2. `backward` ๋™์•ˆ ๋กœ์ปฌ ๊ทธ๋ž˜๋””์–ธํŠธ๊ฐ€ ์ค€๋น„๋˜๋ฉด, ๋ชจ๋“  ํ”„๋กœ์„ธ์Šค์— ํ‰๊ท ํ™”๋ฉ๋‹ˆ๋‹ค. [DP](https://pytorch.org/docs/master/generated/torch.nn.DataParallel.html): ๊ฐ ๋ฐฐ์น˜์— ๋Œ€ํ•ด: 1. gpu 0์€ ๋ฐ์ดํ„ฐ ๋ฐฐ์น˜๋ฅผ ์ฝ๊ณ  ๊ฐ gpu์— ๋ฏธ๋‹ˆ ๋ฐฐ์น˜๋ฅผ ๋ณด๋ƒ…๋‹ˆ๋‹ค. 2. ์—…๋ฐ์ดํŠธ๋œ ๋ชจ๋ธ์„ gpu 0์—์„œ ๊ฐ gpu๋กœ ๋ณต์ œํ•ฉ๋‹ˆ๋‹ค. 3. `forward`๋ฅผ ์‹คํ–‰ํ•˜๊ณ  ๊ฐ gpu์˜ ์ถœ๋ ฅ์„ gpu 0์œผ๋กœ ๋ณด๋‚ด๊ณ  ์†์‹ค์„ ๊ณ„์‚ฐํ•ฉ๋‹ˆ๋‹ค. 4. gpu 0์—์„œ ๋ชจ๋“  gpu๋กœ ์†์‹ค์„ ๋ถ„์‚ฐํ•˜๊ณ  `backward`๋ฅผ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. 5. ๊ฐ gpu์—์„œ ๊ทธ๋ž˜๋””์–ธํŠธ๋ฅผ gpu 0์œผ๋กœ ๋ณด๋‚ด๊ณ  ์ด๋ฅผ ํ‰๊ท ํ™”ํ•ฉ๋‹ˆ๋‹ค. 
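๋‘ ๋ฐฉ์‹์˜ ์ฐจ์ด๋ฅผ ์ฝ”๋“œ ๊ด€์ ์—์„œ ๋ณด๋ฉด ์•„๋ž˜์™€ ๊ฐ™์Šต๋‹ˆ๋‹ค. ์ด๋Š” ์›๋ฌธ์— ์—†๋Š” ์ตœ์†Œ ์Šค์ผ€์น˜๋กœ, ๋ชจ๋ธ์€ ์ž„์˜์˜ `nn.Linear`๋กœ ๋‘๊ณ  DDP ๊ฒฝ๋กœ๋Š” `torchrun`์œผ๋กœ ์‹คํ–‰ํ–ˆ์„ ๋•Œ๋ฅผ ๊ฐ€์ •ํ•ฉ๋‹ˆ๋‹ค.

```python
import os
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

model = nn.Linear(512, 512)  # ์„ค๋ช…์„ ์œ„ํ•œ ์ž„์˜์˜ ๋ชจ๋ธ (๊ฐ€์ •)

if "LOCAL_RANK" in os.environ:
    # DDP: ํ”„๋กœ์„ธ์Šค๋‹น GPU ํ•˜๋‚˜์”ฉ, `torchrun --nproc_per_node=N train.py`๋กœ ์‹คํ–‰ํ•œ๋‹ค๊ณ  ๊ฐ€์ •
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    model = DDP(model.cuda(), device_ids=[local_rank])
else:
    # DP: ๋‹จ์ผ ํ”„๋กœ์„ธ์Šค ์•ˆ์—์„œ ํŒŒ์ด์ฌ ์Šค๋ ˆ๋“œ๋กœ ๊ฐ GPU์— ๋ชจ๋ธ์„ ๋ณต์ œ
    model = nn.DataParallel(model.cuda())
```

์ด์ฒ˜๋Ÿผ DP๋Š” ์‹คํ–‰ ๋ฐฉ์‹์„ ๋ฐ”๊พธ์ง€ ์•Š๊ณ  ๋ชจ๋ธ๋งŒ ๊ฐ์‹ธ๋ฉด ๋˜์ง€๋งŒ, DDP๋Š” ๋ถ„์‚ฐ ๋Ÿฐ์ฒ˜๋กœ ํ”„๋กœ์„ธ์Šค๋ฅผ ์—ฌ๋Ÿฌ ๊ฐœ ๋„์›Œ์•ผ ํ•ฉ๋‹ˆ๋‹ค.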
DDP๋Š” ๊ฐ ๋ฐฐ์น˜๋งˆ๋‹ค ๊ทธ๋ž˜๋””์–ธํŠธ๋ฅผ ๋ณด๋‚ด๋Š” ํ†ต์‹ ๋งŒ์„ ์ˆ˜ํ–‰ํ•˜๋ฉฐ, DP๋Š” ๋ฐฐ์น˜๋งˆ๋‹ค 5๊ฐœ์˜ ๋‹ค๋ฅธ ๋ฐ์ดํ„ฐ ๊ตํ™˜์„ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค. DP๋Š” ํŒŒ์ด์ฌ ์Šค๋ ˆ๋“œ๋ฅผ ํ†ตํ•ด ํ”„๋กœ์„ธ์Šค ๋‚ด์—์„œ ๋ฐ์ดํ„ฐ๋ฅผ ๋ณต์ œํ•˜๋ฉฐ, DDP๋Š” [torch.distributed](https://pytorch.org/docs/master/distributed.html)๋ฅผ ํ†ตํ•ด ๋ฐ์ดํ„ฐ๋ฅผ ๋ณต์ œํ•ฉ๋‹ˆ๋‹ค. DP์—์„œ๋Š” gpu 0์ด ๋‹ค๋ฅธ gpu๋ณด๋‹ค ํ›จ์”ฌ ๋” ๋งŽ์€ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•˜๋ฏ€๋กœ, gpu์˜ ํ™œ์šฉ๋„๊ฐ€ ๋‚ฎ์•„์ง‘๋‹ˆ๋‹ค. DDP๋Š” ์—ฌ๋Ÿฌ ๋Œ€์˜ ์ปดํ“จํ„ฐ์—์„œ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์ง€๋งŒ, DP์˜ ๊ฒฝ์šฐ๋Š” ๊ทธ๋ ‡์ง€ ์•Š์Šต๋‹ˆ๋‹ค. DP์™€ DDP ์‚ฌ์ด์—๋Š” ๋‹ค๋ฅธ ์ฐจ์ด์ ์ด ์žˆ์ง€๋งŒ, ์ด ํ† ๋ก ๊ณผ๋Š” ๊ด€๋ จ์ด ์—†์Šต๋‹ˆ๋‹ค. ์ด 2๊ฐ€์ง€ ๋ชจ๋“œ๋ฅผ ๊นŠ๊ฒŒ ์ดํ•ดํ•˜๊ณ  ์‹ถ๋‹ค๋ฉด, [์ด ๋ฌธ์„œ](https://www.telesens.co/2019/04/04/distributed-data-parallel-training-using-pytorch-on-aws/)๋ฅผ ๊ฐ•๋ ฅํžˆ ์ถ”์ฒœํ•ฉ๋‹ˆ๋‹ค. ์ด ๋ฌธ์„œ๋Š” ๋ฉ‹์ง„ ๋‹ค์ด์–ด๊ทธ๋žจ์„ ํฌํ•จํ•˜๊ณ  ์žˆ์œผ๋ฉฐ, ๋‹ค์–‘ํ•œ ํ•˜๋“œ์›จ์–ด์—์„œ ์—ฌ๋Ÿฌ ๋ฒค์น˜๋งˆํฌ์™€ ํ”„๋กœํŒŒ์ผ๋Ÿฌ ์ถœ๋ ฅ์„ ์„ค๋ช…ํ•˜์—ฌ ํ•„์š”ํ•œ ์„ธ๋ถ€ ์‚ฌํ•ญ์„ ๋ชจ๋‘ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. ์‹ค์ œ ๋ฒค์น˜๋งˆํฌ๋ฅผ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: | Type | NVlink | Time | | :----- | ----- | ---: | | 2:DP | Y | 110s | | 2:DDP | Y | 101s | | 2:DDP | N | 131s | ๋ถ„์„: ์—ฌ๊ธฐ์„œ DP๋Š” NVlink๊ฐ€ ์žˆ๋Š” DDP๋ณด๋‹ค ์•ฝ 10% ๋Š๋ฆฝ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ NVlink๊ฐ€ ์—†๋Š” DDP๋ณด๋‹ค ์•ฝ 15% ๋น ๋ฆ…๋‹ˆ๋‹ค. ์‹ค์ œ ์ฐจ์ด๋Š” ๊ฐ GPU๊ฐ€ ๋‹ค๋ฅธ GPU์™€ ๋™๊ธฐํ™”ํ•ด์•ผ ํ•˜๋Š” ๋ฐ์ดํ„ฐ ์–‘์— ๋”ฐ๋ผ ๋‹ฌ๋ผ์งˆ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋™๊ธฐํ™”ํ•  ๋ฐ์ดํ„ฐ๊ฐ€ ๋งŽ์„์ˆ˜๋ก ๋Š๋ฆฐ ๋งํฌ๊ฐ€ ์ด ์‹คํ–‰ ์‹œ๊ฐ„์„ ๋Šฆ์ถœ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์€ ์ „์ฒด ๋ฒค์น˜๋งˆํฌ ์ฝ”๋“œ์™€ ์ถœ๋ ฅ์ž…๋‹ˆ๋‹ค: ํ•ด๋‹น ๋ฒค์น˜๋งˆํฌ์—์„œ `NCCL_P2P_DISABLE=1`์„ ์‚ฌ์šฉํ•˜์—ฌ NVLink ๊ธฐ๋Šฅ์„ ๋น„ํ™œ์„ฑํ™”ํ–ˆ์Šต๋‹ˆ๋‹ค. ```bash # DP rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 \ python examples/pytorch/language-modeling/run_clm.py \ --model_name_or_path openai-community/gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \ --do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 {'train_runtime': 110.5948, 'train_samples_per_second': 1.808, 'epoch': 0.69} # DDP w/ NVlink rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 \ torchrun --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py \ --model_name_or_path openai-community/gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \ --do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 {'train_runtime': 101.9003, 'train_samples_per_second': 1.963, 'epoch': 0.69} # DDP w/o NVlink rm -r /tmp/test-clm; NCCL_P2P_DISABLE=1 CUDA_VISIBLE_DEVICES=0,1 \ torchrun --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py \ --model_name_or_path openai-community/gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \ --do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 {'train_runtime': 131.4367, 'train_samples_per_second': 1.522, 'epoch': 0.69} ``` ํ•˜๋“œ์›จ์–ด: ๊ฐ๊ฐ 24GB์˜ TITAN RTX 2๊ฐœ + NVlink๊ณผ 2๊ฐœ์˜ NVLink (`nvidia-smi topo -m`์—์„œ `NV2`์ž…๋‹ˆ๋‹ค.) ์†Œํ”„ํŠธ์›จ์–ด: `pytorch-1.8-to-be` + `cuda-11.0` / `transformers==4.3.0.dev0` ## ZeRO ๋ฐ์ดํ„ฐ ๋ณ‘๋ ฌํ™” [[zero-data-parallelism]] ZeRO๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•œ ๋ฐ์ดํ„ฐ ๋ณ‘๋ ฌํ™” (ZeRO-DP)๋Š” ๋‹ค์Œ [๋ธ”๋กœ๊ทธ ๊ธ€](https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/)์˜ ๋‹ค์Œ ๋‹ค์ด์–ด๊ทธ๋žจ์—์„œ ์„ค๋ช…๋˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. 
![DeepSpeed-Image-1](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-zero.png) ์ด ๊ฐœ๋…์€ ์ดํ•ดํ•˜๊ธฐ ์–ด๋ ค์šธ ์ˆ˜ ์žˆ์ง€๋งŒ, ์‹ค์ œ๋กœ๋Š” ๋งค์šฐ ๊ฐ„๋‹จํ•œ ๊ฐœ๋…์ž…๋‹ˆ๋‹ค. ์ด๋Š” ์ผ๋ฐ˜์ ์ธ `DataParallel` (DP)๊ณผ ๋™์ผํ•˜์ง€๋งŒ, ์ „์ฒด ๋ชจ๋ธ ๋งค๊ฐœ๋ณ€์ˆ˜, ๊ทธ๋ž˜๋””์–ธํŠธ ๋ฐ ์˜ตํ‹ฐ๋งˆ์ด์ € ์ƒํƒœ๋ฅผ ๋ณต์ œํ•˜๋Š” ๋Œ€์‹  ๊ฐ GPU๋Š” ๊ทธ ์ค‘ ์ผ๋ถ€๋งŒ ์ €์žฅํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ์‹คํ–‰ ์‹œ๊ฐ„์—๋Š” ์ฃผ์–ด์ง„ ๋ ˆ์ด์–ด์— ๋Œ€ํ•ด ์ „์ฒด ๋ ˆ์ด์–ด ๋งค๊ฐœ๋ณ€์ˆ˜๊ฐ€ ํ•„์š”ํ•  ๋•Œ ๊ฐ GPU๊ฐ€ ์„œ๋กœ์—๊ฒŒ ํ•„์š”ํ•œ ๋ถ€๋ถ„์„ ์ œ๊ณตํ•˜๊ธฐ ์œ„ํ•ด ๋™๊ธฐํ™”๋ฉ๋‹ˆ๋‹ค - ๊ทธ๊ฒŒ ์ „๋ถ€์ž…๋‹ˆ๋‹ค. ๊ฐ๊ฐ 3๊ฐœ์˜ ๋ ˆ์ด์–ด์™€ 3๊ฐœ์˜ ๋งค๊ฐœ๋ณ€์ˆ˜๊ฐ€ ์žˆ๋Š” ๊ฐ„๋‹จํ•œ ๋ชจ๋ธ์„ ์ƒ๊ฐํ•ด ๋ด…์‹œ๋‹ค: ``` La | Lb | Lc ---|----|--- a0 | b0 | c0 a1 | b1 | c1 a2 | b2 | c2 ``` ๋ ˆ์ด์–ด La์—๋Š” ๊ฐ€์ค‘์น˜ a0, a1 ๋ฐ a2๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. 3๊ฐœ์˜ GPU๊ฐ€ ์žˆ๋Š” ๊ฒฝ์šฐ, Sharded DDP (= Zero-DP)๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ๋ชจ๋ธ์„ 3๊ฐœ์˜ GPU์— ๋ถ„ํ• ํ•ฉ๋‹ˆ๋‹ค: ``` GPU0: La | Lb | Lc ---|----|--- a0 | b0 | c0 GPU1: La | Lb | Lc ---|----|--- a1 | b1 | c1 GPU2: La | Lb | Lc ---|----|--- a2 | b2 | c2 ``` ์ผ๋ฐ˜์ ์ธ DNN ๋‹ค์ด์–ด๊ทธ๋žจ์„ ์ƒ์ƒํ•ด๋ณด๋ฉด ์ด๋Š” ํ…์„œ ๋ณ‘๋ ฌ ์ฒ˜๋ฆฌ์™€ ๊ฐ™์€ ์ˆ˜ํ‰ ์Šฌ๋ผ์ด์‹ฑ์ž…๋‹ˆ๋‹ค. ์ˆ˜์ง ์Šฌ๋ผ์ด์‹ฑ์€ ์ „์ฒด ๋ ˆ์ด์–ด ๊ทธ๋ฃน์„ ๋‹ค๋ฅธ GPU์— ๋ฐฐ์น˜ํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด๋Š” ์‹œ์ž‘์— ๋ถˆ๊ณผํ•ฉ๋‹ˆ๋‹ค. ์ด์ œ ์ด๋Ÿฌํ•œ ๊ฐ๊ฐ์˜ GPU๋Š” DP์—์„œ ์ž‘๋™ํ•˜๋Š” ๊ฒƒ๊ณผ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ ์ผ๋ฐ˜์ ์ธ ๋ฏธ๋‹ˆ ๋ฐฐ์น˜๋ฅผ ๋ฐ›์Šต๋‹ˆ๋‹ค: ``` x0 => GPU0 x1 => GPU1 x2 => GPU2 ``` ์ž…๋ ฅ์€ ์ˆ˜์ •๋˜์ง€ ์•Š์€ ์ƒํƒœ๋กœ ์ผ๋ฐ˜ ๋ชจ๋ธ์— ์˜ํ•ด ์ฒ˜๋ฆฌ๋  ๊ฒƒ์œผ๋กœ ๊ฐ„์ฃผํ•ฉ๋‹ˆ๋‹ค. ๋จผ์ €, ์ž…๋ ฅ์€ ๋ ˆ์ด์–ด La์— ๋„๋‹ฌํ•ฉ๋‹ˆ๋‹ค. GPU0์—๋งŒ ์ง‘์ค‘ํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. x0์€ ์ˆœ๋ฐฉํ–ฅ ๊ฒฝ๋กœ๋ฅผ ์ˆ˜ํ–‰ํ•˜๊ธฐ ์œ„ํ•ด a0, a1, a2 ํŒŒ๋ผ๋ฏธํ„ฐ๊ฐ€ ํ•„์š”ํ•˜์ง€๋งŒ GPU0์—๋Š” a0๋งŒ ์žˆ์Šต๋‹ˆ๋‹ค. GPU1์—์„œ a1์„, GPU2์—์„œ a2๋ฅผ ์ „์†ก๋ฐ›์•„ ๋ชจ๋ธ์˜ ๋ชจ๋“  ์กฐ๊ฐ์„ ํ•˜๋‚˜๋กœ ๋ชจ์๋‹ˆ๋‹ค. ๋ณ‘๋ ฌ์ ์œผ๋กœ, GPU1์€ ๋ฏธ๋‹ˆ ๋ฐฐ์น˜ x1์„ ๋ฐ›๊ณ  a1๋งŒ ๊ฐ€์ง€๊ณ  ์žˆ์ง€๋งŒ, a0 ๋ฐ a2 ๋งค๊ฐœ๋ณ€์ˆ˜๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ GPU0 ๋ฐ GPU2์—์„œ ์ด๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. GPU2๋„ ๋™์ผํ•œ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค. ์ž…๋ ฅ x2๋ฅผ ๋ฐ›๊ณ  GPU0 ๋ฐ GPU1์—์„œ ๊ฐ๊ฐ a0๊ณผ a1์„, ๊ทธ๋ฆฌ๊ณ  ์ž์‹ ์˜ a2์™€ ํ•จ๊ป˜ ์ „์ฒด ํ…์„œ๋ฅผ ๋ณต์›ํ•ฉ๋‹ˆ๋‹ค. 3๊ฐœ์˜ GPU๋Š” ๋ณต์›๋œ ์ „์ฒด ํ…์„œ๋ฅผ ๋ฐ›๊ณ  forward๊ฐ€ ์ˆ˜ํ–‰๋ฉ๋‹ˆ๋‹ค. ๊ณ„์‚ฐ์ด ์™„๋ฃŒ๋˜๋ฉด ๋” ์ด์ƒ ํ•„์š”ํ•˜์ง€ ์•Š์€ ๋ฐ์ดํ„ฐ๋Š” ์‚ญ์ œ๋˜๊ณ , ํ•ด๋‹น ๋ฐ์ดํ„ฐ๋Š” ๊ณ„์‚ฐ ์ค‘์—๋งŒ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ๋ณต์›์€ ์‚ฌ์ „ ํŒจ์น˜๋ฅผ ํ†ตํ•ด ํšจ์œจ์ ์œผ๋กœ ์ˆ˜ํ–‰๋ฉ๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ์ „์ฒด ํ”„๋กœ์„ธ์Šค๋Š” ๋ ˆ์ด์–ด Lb์— ๋Œ€ํ•ด ๋ฐ˜๋ณต๋˜๊ณ , ๊ทธ ๋‹ค์Œ Lc๋กœ ์ˆœ๋ฐฉํ–ฅ์œผ๋กœ, ๊ทธ๋‹ค์Œ์€ ์—ญ๋ฐฉํ–ฅ์œผ๋กœ Lc -> Lb -> La๋กœ ๋ฐ˜๋ณต๋ฉ๋‹ˆ๋‹ค. ๊ฐœ์ธ์ ์œผ๋กœ ์ด๊ฒƒ์€ ํšจ์œจ์ ์ธ ๊ทธ๋ฃน ๋ฐฐ๋‚ญ ์—ฌํ–‰์ž์˜ ์ค‘๋Ÿ‰ ๋ถ„๋ฐฐ ์ „๋žต์ฒ˜๋Ÿผ ๋“ค๋ฆฝ๋‹ˆ๋‹ค: 1. ์‚ฌ๋žŒ A๊ฐ€ ํ…ํŠธ๋ฅผ ์šด๋ฐ˜ํ•ฉ๋‹ˆ๋‹ค. 2. ์‚ฌ๋žŒ B๊ฐ€ ๋‚œ๋กœ๋ฅผ ์šด๋ฐ˜ํ•ฉ๋‹ˆ๋‹ค. 3. ์‚ฌ๋žŒ C๊ฐ€ ๋„๋ผ๋ฅผ ์šด๋ฐ˜ํ•ฉ๋‹ˆ๋‹ค. ์ด์ œ ๋งค์ผ ๋ฐค ๊ฐ์ž ๊ฐ€์ง„ ๊ฒƒ์„ ๋‹ค๋ฅธ ์‚ฌ๋žŒ๋“ค๊ณผ ๊ณต์œ ํ•˜๊ณ , ๊ฐ€์ง€์ง€ ์•Š์€ ๊ฒƒ์€ ๋‹ค๋ฅธ ์‚ฌ๋žŒ๋“ค๋กœ๋ถ€ํ„ฐ ๋ฐ›๊ณ , ์•„์นจ์—๋Š” ํ• ๋‹น๋œ ์œ ํ˜•์˜ ์žฅ๋น„๋ฅผ ์‹ธ๊ณ  ๊ณ„์†ํ•ด์„œ ์—ฌํ–‰์„ ์ง„ํ–‰ํ•ฉ๋‹ˆ๋‹ค. ์ด๊ฒƒ์ด Sharded DDP / Zero DP์ž…๋‹ˆ๋‹ค. ์ด ์ „๋žต์„ ๊ฐ๊ฐ ์ž์‹ ์˜ ํ…ํŠธ, ๋‚œ๋กœ ๋ฐ ๋„๋ผ๋ฅผ ๊ฐœ๋ณ„์ ์œผ๋กœ ์šด๋ฐ˜ํ•ด์•ผ ํ•˜๋Š” ๋‹จ์ˆœํ•œ ์ „๋žต๊ณผ ๋น„๊ตํ•ด๋ณด๋ฉด ํ›จ์”ฌ ๋น„ํšจ์œจ์ ์ผ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด๊ฒƒ์ด Pytorch์˜ DataParallel (DP ๋ฐ DDP)์ž…๋‹ˆ๋‹ค. 
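ZeRO-DP๋ฅผ ์‹ค์ œ๋กœ ์‚ฌ์šฉํ•  ๋•Œ๋Š” ๋ณดํ†ต DeepSpeed ์„ค์ •์œผ๋กœ ๋‹จ๊ณ„๋ฅผ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ์›๋ฌธ์— ์—†๋Š” ์ฐธ๊ณ ์šฉ ์ตœ์†Œ ์Šค์ผ€์น˜๋กœ, DeepSpeed๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ๊ณ  ๋ถ„์‚ฐ ๋Ÿฐ์ฒ˜๋กœ ์‹คํ–‰ํ•œ๋‹ค๊ณ  ๊ฐ€์ •ํ•˜๋ฉฐ, ์„ธ๋ถ€ ๊ฐ’์€ ํ™˜๊ฒฝ์— ๋”ฐ๋ผ ๋‹ฌ๋ผ์งˆ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

```python
from transformers import TrainingArguments

# ZeRO ๋‹จ๊ณ„ 3์„ ์ผœ๋Š” ์ตœ์†Œ ์ˆ˜์ค€์˜ DeepSpeed ์„ค์ • (๊ฐ€์ •์— ๊ธฐ๋ฐ˜ํ•œ ์˜ˆ์‹œ)
ds_config = {
    "zero_optimization": {"stage": 3},
    "bf16": {"enabled": True},
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
}

# Trainer์™€ ํ•จ๊ป˜ ์“ธ ๋•Œ๋Š” JSON ํŒŒ์ผ ๊ฒฝ๋กœ ๋Œ€์‹  dict๋ฅผ ๊ทธ๋Œ€๋กœ ๋„˜๊ฒจ๋„ ๋ฉ๋‹ˆ๋‹ค.
training_args = TrainingArguments(
    output_dir="/tmp/zero3_test",
    per_device_train_batch_size=4,
    deepspeed=ds_config,
)
```

๋ชจ๋ธ ์ฝ”๋“œ ์ž์ฒด๋Š” ์ˆ˜์ •ํ•  ํ•„์š”๊ฐ€ ์—†๋‹ค๋Š” ์ ์ด, ์œ„์—์„œ ์„ค๋ช…ํ•œ ZeRO์˜ ์žฅ์ ๊ณผ ๋งž๋‹ฟ์•„ ์žˆ์Šต๋‹ˆ๋‹ค.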
์ด ์ฃผ์ œ์— ๋Œ€ํ•ด ๋…ผ๋ฌธ์„ ์ฝ์„ ๋•Œ ๋‹ค์Œ ๋™์˜์–ด๋ฅผ ๋งŒ๋‚  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: Sharded, Partitioned. ZeRO๊ฐ€ ๋ชจ๋ธ ๊ฐ€์ค‘์น˜๋ฅผ ๋ถ„ํ• ํ•˜๋Š” ๋ฐฉ์‹์„ ์ž์„ธํžˆ ์‚ดํŽด๋ณด๋ฉด, ํ…์„œ ๋ณ‘๋ ฌํ™”์™€ ๋งค์šฐ ์œ ์‚ฌํ•œ ๊ฒƒ์„ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Š” ์ดํ›„์— ์„ค๋ช…๋  ์ˆ˜์ง ๋ชจ๋ธ ๋ณ‘๋ ฌํ™”์™€๋Š” ๋‹ฌ๋ฆฌ ๊ฐ ๋ ˆ์ด์–ด์˜ ๊ฐ€์ค‘์น˜๋ฅผ ๋ถ„ํ• /๋ถ„ํ• ํ•˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ๊ตฌํ˜„: - [DeepSpeed](https://www.deepspeed.ai/tutorials/zero/)๋Š” 1๋‹จ๊ณ„ + 2๋‹จ๊ณ„ + 3๋‹จ๊ณ„์˜ ZeRO-DP๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. - [Fairscale](https://github.com/facebookresearch/fairscale/#optimizer-state-sharding-zero)์€ 1๋‹จ๊ณ„ + 2๋‹จ๊ณ„ + 3๋‹จ๊ณ„์˜ ZeRO-DP๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. - [`transformers` ํ†ตํ•ฉ](main_classes/trainer#trainer-integrations) ## ๋„ค์ดํ‹ฐ๋ธŒ ๋ชจ๋ธ ๋ณ‘๋ ฌ ์ฒ˜๋ฆฌ(์ˆ˜์ง์ ) ๋ฐ ํŒŒ์ดํ”„๋ผ์ธ ๋ณ‘๋ ฌ ์ฒ˜๋ฆฌ[[naive-model-parallelism-vertical-and-pipeline-parallelism]] Naive Model Parallelism (MP)์€ ๋ชจ๋ธ ๋ ˆ์ด์–ด ๊ทธ๋ฃน์„ ๋‹ค์ค‘ GPU์— ๋ถ„์‚ฐํ•˜๋Š” ๋ฐฉ์‹์ž…๋‹ˆ๋‹ค. ๋ฉ”์ปค๋‹ˆ์ฆ˜์€ ์ƒ๋Œ€์ ์œผ๋กœ ๊ฐ„๋‹จํ•ฉ๋‹ˆ๋‹ค. ์›ํ•˜๋Š” ๋ ˆ์ด์–ด๋ฅผ `.to()`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์›ํ•˜๋Š” ์žฅ์น˜๋กœ ์ „ํ™˜ํ•˜๋ฉด ๋ฐ์ดํ„ฐ๊ฐ€ ํ•ด๋‹น ๋ ˆ์ด์–ด๋กœ ๋“ค์–ด์˜ค๊ณ  ๋‚˜๊ฐˆ ๋•Œ ๋ฐ์ดํ„ฐ๋„ ๋ ˆ์ด์–ด์™€ ๋™์ผํ•œ ์žฅ์น˜๋กœ ์ „ํ™˜๋˜๊ณ  ๋‚˜๋จธ์ง€๋Š” ์ˆ˜์ •๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๋Œ€๋ถ€๋ถ„์˜ ๋ชจ๋ธ์ด ๊ทธ๋ ค์ง€๋Š” ๋ฐฉ์‹์ด ๋ ˆ์ด์–ด๋ฅผ ์„ธ๋กœ๋กœ ์Šฌ๋ผ์ด์Šคํ•˜๊ธฐ ๋•Œ๋ฌธ์— ์ด๋ฅผ ์ˆ˜์ง ๋ชจ๋ธ ๋ณ‘๋ ฌํ™”๋ผ๊ณ  ๋ถ€๋ฆ…๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๋‹ค์Œ ๋‹ค์ด์–ด๊ทธ๋žจ์€ 8๋ ˆ์ด์–ด ๋ชจ๋ธ์„ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค: ``` =================== =================== | 0 | 1 | 2 | 3 | | 4 | 5 | 6 | 7 | =================== =================== gpu0 gpu1 ``` ์šฐ๋ฆฌ๋Š” ๋ชจ๋ธ์„ ์ˆ˜์ง์œผ๋กœ 2๊ฐœ๋กœ ๋ถ„ํ• ํ•˜์—ฌ ๋ ˆ์ด์–ด 0-3์„ GPU0์— ๋ฐฐ์น˜ํ•˜๊ณ  ๋ ˆ์ด์–ด 4-7์„ GPU1์— ๋ฐฐ์น˜ํ–ˆ์Šต๋‹ˆ๋‹ค. ์ด์ œ ๋ฐ์ดํ„ฐ๊ฐ€ ๋ ˆ์ด์–ด 0์—์„œ 1๋กœ, 1์—์„œ 2๋กœ, 2์—์„œ 3์œผ๋กœ ์ด๋™ํ•˜๋Š” ๋™์•ˆ์—๋Š” ์ผ๋ฐ˜์ ์ธ ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ๋ฐ์ดํ„ฐ๊ฐ€ ๋ ˆ์ด์–ด 3์—์„œ ๋ ˆ์ด์–ด 4๋กœ ์ „๋‹ฌ๋˜์–ด์•ผ ํ•  ๋•Œ๋Š” GPU0์—์„œ GPU1๋กœ ์ด๋™ํ•ด์•ผ ํ•˜๋ฏ€๋กœ ํ†ต์‹  ์˜ค๋ฒ„ํ—ค๋“œ๊ฐ€ ๋ฐœ์ƒํ•ฉ๋‹ˆ๋‹ค. ์ฐธ์—ฌํ•˜๋Š” GPU๊ฐ€ ๋™์ผํ•œ ์ปดํ“จํŒ… ๋…ธ๋“œ(์˜ˆ: ๋™์ผํ•œ ๋ฌผ๋ฆฌ์ ์ธ ๊ธฐ๊ณ„)์— ์žˆ๋Š” ๊ฒฝ์šฐ ์ด ๋ณต์‚ฌ๋Š” ๋งค์šฐ ๋น ๋ฆ…๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ GPU๊ฐ€ ์„œ๋กœ ๋‹ค๋ฅธ ์ปดํ“จํŒ… ๋…ธ๋“œ(์˜ˆ: ์—ฌ๋Ÿฌ ๊ธฐ๊ณ„)์— ์œ„์น˜ํ•œ ๊ฒฝ์šฐ ํ†ต์‹  ์˜ค๋ฒ„ํ—ค๋“œ๋Š” ์ƒ๋‹นํžˆ ํฌ๊ฒŒ ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฐ ๋‹ค์Œ ๋ ˆ์ด์–ด 4๋ถ€ํ„ฐ 5๋กœ, 6์œผ๋กœ, 7๋กœ ์ง„ํ–‰๋˜๋Š” ๊ฒƒ์€ ์ผ๋ฐ˜์ ์ธ ๋ชจ๋ธ๊ณผ ๋™์ผํ•˜๊ฒŒ ์ง„ํ–‰๋˜๊ณ , 7๋ฒˆ์งธ ๋ ˆ์ด์–ด๊ฐ€ ์™„๋ฃŒ๋˜๋ฉด ๋ฐ์ดํ„ฐ๋ฅผ ๋‹ค์‹œ ๋ ˆ์ด์–ด 0์œผ๋กœ ๋ณด๋‚ด๊ฑฐ๋‚˜ ๋˜๋Š” ๋ ˆ์ด๋ธ”์„ ๋งˆ์ง€๋ง‰ ๋ ˆ์ด์–ด๋กœ ๋ณด๋‚ด์•ผ ํ•  ํ•„์š”๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด์ œ ์†์‹ค์„ ๊ณ„์‚ฐํ•˜๊ณ  ์˜ตํ‹ฐ๋งˆ์ด์ €๊ฐ€ ์ž‘๋™ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฌธ์ œ์ : - ์ด ๋ฐฉ์‹์„ "naive" MP๋ผ๊ณ  ๋ถ€๋ฅด๋Š” ์ด์œ ๋Š” ์ฃผ์–ด์ง„ ์ƒํ™ฉ์— ํ•˜๋‚˜์˜ GPU๋ฅผ ์ œ์™ธํ•œ ๋ชจ๋“  GPU๊ฐ€ ์œ ํœด ์ƒํƒœ๋ผ๋Š” ์ ์ž…๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ 4๊ฐœ์˜ GPU๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ ๋‹จ์ผ GPU์˜ ๋ฉ”๋ชจ๋ฆฌ ์–‘์„ 4๋ฐฐ๋กœ ๋Š˜๋ฆฌ๊ณ  ๋‚˜๋จธ์ง€ ํ•˜๋“œ์›จ์–ด๋Š” ๋ฌด์‹œํ•˜๋Š” ๊ฒƒ๊ณผ ๊ฑฐ์˜ ๋™์ผํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ ์žฅ์น˜ ๊ฐ„ ๋ฐ์ดํ„ฐ ๋ณต์‚ฌ์˜ ์˜ค๋ฒ„ํ—ค๋“œ๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ 4๊ฐœ์˜ 6GB ์นด๋“œ๋Š” naive MP๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ 1๊ฐœ์˜ 24GB ์นด๋“œ์™€ ๋™์ผํ•œ ํฌ๊ธฐ๋ฅผ ์ˆ˜์šฉํ•  ์ˆ˜ ์žˆ์ง€๋งŒ, ํ›„์ž๋Š” ๋ฐ์ดํ„ฐ ๋ณต์‚ฌ์˜ ์˜ค๋ฒ„ํ—ค๋“œ๊ฐ€ ์—†์œผ๋ฏ€๋กœ ํ›ˆ๋ จ์„ ๋” ๋นจ๋ฆฌ ์™„๋ฃŒํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์˜ˆ๋ฅผ ๋“ค์–ด 40GB ์นด๋“œ๊ฐ€ ์žˆ๊ณ  45GB ๋ชจ๋ธ์„ ๋งž์ถ”์–ด์•ผ ํ•  ๊ฒฝ์šฐ 4๊ฐœ์˜ 40GB ์นด๋“œ๋กœ ๋งž์ถœ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค (ํ•˜์ง€๋งŒ ๊ทธ๋ž˜๋””์–ธํŠธ์™€ ์˜ตํ‹ฐ๋งˆ์ด์ € ์ƒํƒœ ๋•Œ๋ฌธ์— ๊ฐ€๊นŒ์Šค๋กœ ๊ฐ€๋Šฅํ•ฉ๋‹ˆ๋‹ค). 
- ๊ณต์œ  ์ž„๋ฒ ๋”ฉ์€ GPU ๊ฐ„์— ๋ณต์‚ฌํ•ด์•ผ ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ํŒŒ์ดํ”„๋ผ์ธ ๋ณ‘๋ ฌํ™” (PP)์€ ๊ฑฐ์˜ naive MP์™€ ๋™์ผํ•˜์ง€๋งŒ GPU ์œ ํœด ์ƒํƒœ ๋ฌธ์ œ๋ฅผ ํ•ด๊ฒฐํ•˜๊ธฐ ์œ„ํ•ด ๋“ค์–ด์˜ค๋Š” ๋ฐฐ์น˜๋ฅผ ๋งˆ์ดํฌ๋กœ ๋ฐฐ์น˜๋กœ ๋‚˜๋ˆ„๊ณ  ์ธ๊ณต์ ์œผ๋กœ ํŒŒ์ดํ”„๋ผ์ธ์„ ์ƒ์„ฑํ•˜์—ฌ ์„œ๋กœ ๋‹ค๋ฅธ GPU๊ฐ€ ๋™์‹œ์— ๊ณ„์‚ฐ์— ์ฐธ์—ฌํ•  ์ˆ˜ ์žˆ๊ฒŒ ํ•ฉ๋‹ˆ๋‹ค. [GPipe ๋…ผ๋ฌธ](https://ai.googleblog.com/2019/03/introducing-gpipe-open-source-library.html)์—์„œ ๊ฐ€์ ธ์˜จ ๊ทธ๋ฆผ์€ ์ƒ๋‹จ์— naive MP๋ฅผ, ํ•˜๋‹จ์—๋Š” PP๋ฅผ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค: ![mp-pp](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-gpipe-bubble.png) ํ•˜๋‹จ ๋‹ค์ด์–ด๊ทธ๋žจ์—์„œ PP๊ฐ€ ์œ ํœด ์˜์—ญ์ด ์ ์€ ๊ฒƒ์„ ์‰ฝ๊ฒŒ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์œ ํœด ๋ถ€๋ถ„์„ "bubble"์ด๋ผ๊ณ  ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์ด์–ด๊ทธ๋žจ์˜ ์–‘์ชฝ ๋ถ€๋ถ„์€ ์ฐธ์—ฌํ•˜๋Š” GPU๊ฐ€ 4๊ฐœ์ธ ๋ณ‘๋ ฌ์„ฑ์„ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค. ์ฆ‰, 4๊ฐœ์˜ GPU๊ฐ€ ํŒŒ์ดํ”„๋ผ์ธ์— ์ฐธ์—ฌํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ 4๊ฐœ์˜ ํŒŒ์ดํ”„ ๋‹จ๊ณ„ F0, F1, F2 ๋ฐ F3์˜ ์ˆœ๋ฐฉํ–ฅ ๊ฒฝ๋กœ์™€ B3, B2, B1 ๋ฐ B0์˜ ์—ญ๋ฐฉํ–ฅ ๊ฒฝ๋กœ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. PP๋Š” ์กฐ์ •ํ•ด์•ผ ํ•  ์ƒˆ๋กœ์šด ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ์ธ `chunks`๋ฅผ ๋„์ž…ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ๋™์ผํ•œ ํŒŒ์ดํ”„ ๋‹จ๊ณ„๋ฅผ ํ†ตํ•ด ์ผ๋ จ์˜ ๋ฐ์ดํ„ฐ๋ฅผ ๋ฌถ์–ด์„œ ๋ณด๋‚ด๋Š” ๋ฐฉ์‹์„ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ์•„๋ž˜ ๋‹ค์ด์–ด๊ทธ๋žจ์—์„œ `chunks=4`๋ฅผ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. GPU0์€ 0, 1, 2 ๋ฐ 3 (F0,0, F0,1, F0,2, F0,3) ๋ฌถ์Œ์—์„œ ๋™์ผํ•œ ์ˆœ๋ฐฉํ–ฅ ๊ฒฝ๋กœ๋ฅผ ์ˆ˜ํ–‰ํ•˜๊ณ , ๋‹ค๋ฅธ GPU๊ฐ€ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•˜๊ธฐ ์‹œ์ž‘ํ•˜๊ณ  ์™„๋ฃŒ๊ฐ€ ์‹œ์ž‘๋  ๋•Œ๋งŒ GPU0์ด ๋ฌถ์Œ์˜ ์—ญ์ˆœ์œผ๋กœ 3, 2, 1 ๋ฐ 0 (B0,3, B0,2, B0,1, B0,0) ๊ฒฝ๋กœ๋ฅผ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค. ๊ฐœ๋…์ ์œผ๋กœ ์ด๋Š” ๊ทธ๋ž˜๋””์–ธํŠธ ๋ˆ„์  ๋‹จ๊ณ„ (GAS)์™€ ๋™์ผํ•œ ๊ฐœ๋…์ž…๋‹ˆ๋‹ค. ํŒŒ์ดํ† ์น˜์—์„œ๋Š” `chunks`๋ฅผ ์‚ฌ์šฉํ•˜๊ณ  DeepSpeed์—์„œ๋Š” ๋™์ผํ•œ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ GAS๋กœ ์ฐธ์กฐํ•ฉ๋‹ˆ๋‹ค. ๋ฌถ์Œ์œผ๋กœ ์ธํ•ด PP๋Š” ๋งˆ์ดํฌ๋กœ ๋ฐฐ์น˜ (MBS)์˜ ๊ฐœ๋…์„ ๋„์ž…ํ•ฉ๋‹ˆ๋‹ค. DP๋Š” ์ „์—ญ ๋ฐ์ดํ„ฐ ๋ฐฐ์น˜ ํฌ๊ธฐ๋ฅผ ๋ฏธ๋‹ˆ ๋ฐฐ์น˜๋กœ ๋‚˜๋ˆ•๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ DP ์ฐจ์ˆ˜๊ฐ€ 4์ด๊ณ  ์ „์—ญ ๋ฐฐ์น˜ ํฌ๊ธฐ๊ฐ€ 1024์ด๋ฉด 256์”ฉ 4๊ฐœ์˜ ๋ฏธ๋‹ˆ ๋ฐฐ์น˜๋กœ ๋ถ„ํ• ๋ฉ๋‹ˆ๋‹ค (1024/4). ๊ทธ๋ฆฌ๊ณ  `chunks` (๋˜๋Š” GAS)์˜ ์ˆ˜๊ฐ€ 32์ด๋ฉด ๋งˆ์ดํฌ๋กœ ๋ฐฐ์น˜ ํฌ๊ธฐ๋Š” 8์ด ๋ฉ๋‹ˆ๋‹ค (256/32). ๊ฐ ํŒŒ์ดํ”„๋ผ์ธ ๋‹จ๊ณ„๋Š” ํ•œ ๋ฒˆ์— ํ•˜๋‚˜์˜ ๋งˆ์ดํฌ๋กœ ๋ฐฐ์น˜์™€ ํ•จ๊ป˜ ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค. DP + PP ์„ค์ •์˜ ์ „์—ญ ๋ฐฐ์น˜ ํฌ๊ธฐ๋ฅผ ๊ณ„์‚ฐํ•˜๋ ค๋ฉด `mbs*chunks*dp_degree` (`8*32*4=1024`)๋ฅผ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์ด์–ด๊ทธ๋žจ์œผ๋กœ ๋Œ์•„๊ฐ€ ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. `chunks=1`๋กœ ์„ค์ •ํ•˜๋ฉด ๋งค์šฐ ๋น„ํšจ์œจ์ ์ธ naive MP๊ฐ€ ์ƒ์„ฑ๋˜๋ฉฐ, ๋งค์šฐ ํฐ `chunks` ๊ฐ’์œผ๋กœ ์„ค์ •ํ•˜๋ฉด ์•„์ฃผ ์ž‘์€ ๋งˆ์ดํฌ๋กœ ๋ฐฐ์น˜ ํฌ๊ธฐ๊ฐ€ ์ƒ์„ฑ๋˜์–ด ํšจ์œจ์ ์ด์ง€ ์•Š์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ๊ฐ€์žฅ ํšจ์œจ์ ์ธ GPU ํ™œ์šฉ์„ ์œ„ํ•ด ์–ด๋–ค ๊ฐ’์ด ๊ฐ€์žฅ ์ ์ ˆํ•œ์ง€ ์‹คํ—˜์„ ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์ด์–ด๊ทธ๋žจ์—์„œ ๋ณด์ด๋Š” ๊ฒƒ์ฒ˜๋Ÿผ "dead" ์‹œ๊ฐ„์˜ ๋ฒ„๋ธ”์ด ์กด์žฌํ•˜์—ฌ ๋งˆ์ง€๋ง‰ `forward` ๋‹จ๊ณ„๊ฐ€ `backward` ๋‹จ๊ณ„๊ฐ€ ํŒŒ์ดํ”„๋ผ์ธ์„ ์™„๋ฃŒํ•˜๊ธฐ๋ฅผ ๊ธฐ๋‹ค๋ ค์•ผ ํ•˜๋Š” ์ƒํ™ฉ์ด ๋ฐœ์ƒํ•˜์ง€๋งŒ, `chunks`์˜ ๊ฐ€์žฅ ์ ์ ˆํ•œ ๊ฐ’์„ ์ฐพ๋Š” ๊ฒƒ์˜ ๋ชฉ์ ์€ ๋ชจ๋“  ์ฐธ์—ฌํ•˜๋Š” GPU์—์„œ ๋™์‹œ์— ๊ณ ๋„๋กœ ํ™œ์šฉ๋˜๋Š” GPU ํ™œ์šฉ์„ ๊ฐ€๋Šฅํ•˜๊ฒŒ ํ•˜์—ฌ ๋ฒ„๋ธ”์˜ ํฌ๊ธฐ๋ฅผ ์ตœ์†Œํ™”ํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ํ•ด๊ฒฐ์ฑ…์€ ์ „ํ†ต์ ์ธ ํŒŒ์ดํ”„๋ผ์ธ API์™€ ๋” ํ˜„๋Œ€์ ์ธ ์†”๋ฃจ์…˜์œผ๋กœ ๋‚˜๋‰ฉ๋‹ˆ๋‹ค. ์ „ํ†ต์ ์ธ ํŒŒ์ดํ”„๋ผ์ธ API ์†”๋ฃจ์…˜๊ณผ ํ˜„๋Œ€์ ์ธ ์†”๋ฃจ์…˜์— ๋Œ€ํ•ด ์•Œ์•„๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. 
์ „ํ†ต์ ์ธ ํŒŒ์ดํ”„๋ผ์ธ API ์†”๋ฃจ์…˜: - ํŒŒ์ดํ† ์น˜ - FairScale - DeepSpeed - Megatron-LM ํ˜„๋Œ€์ ์ธ ์†”๋ฃจ์…˜: - Varuna - Sagemaker ์ „ํ†ต์ ์ธ ํŒŒ์ดํ”„๋ผ์ธ API ์†”๋ฃจ์…˜์˜ ๋ฌธ์ œ์ : - ๋ชจ๋ธ์„ ์ƒ๋‹นํžˆ ์ˆ˜์ •ํ•ด์•ผ ํ•œ๋‹ค๋Š” ์ ์ด ๋ฌธ์ œ์ž…๋‹ˆ๋‹ค. ํŒŒ์ดํ”„๋ผ์ธ์€ ๋ชจ๋“ˆ์˜ ์ •์ƒ์ ์ธ ํ๋ฆ„์„ `nn.Sequential` ์‹œํ€€์Šค๋กœ ๋‹ค์‹œ ์ž‘์„ฑํ•ด์•ผ ํ•˜๋ฏ€๋กœ ๋ชจ๋ธ์˜ ์„ค๊ณ„๋ฅผ ๋ณ€๊ฒฝํ•ด์•ผ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. - ํ˜„์žฌ ํŒŒ์ดํ”„๋ผ์ธ API๋Š” ๋งค์šฐ ์ œํ•œ์ ์ž…๋‹ˆ๋‹ค. ํŒŒ์ดํ”„๋ผ์ธ์˜ ๋งค์šฐ ์ฒซ ๋ฒˆ์งธ ๋‹จ๊ณ„์—์„œ ์ „๋‹ฌ๋˜๋Š” ๋งŽ์€ ํŒŒ์ด์ฌ ๋ณ€์ˆ˜๊ฐ€ ์žˆ๋Š” ๊ฒฝ์šฐ ์ด๋ฅผ ํ•ด๊ฒฐํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ํ˜„์žฌ ํŒŒ์ดํ”„๋ผ์ธ ์ธํ„ฐํŽ˜์ด์Šค๋Š” ํ•˜๋‚˜์˜ ํ…์„œ ๋˜๋Š” ํ…์„œ์˜ ํŠœํ”Œ์„ ์œ ์ผํ•œ ์ž…๋ ฅ ๋ฐ ์ถœ๋ ฅ์œผ๋กœ ์š”๊ตฌํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ํ…์„œ๋Š” ๋งˆ์ดํฌ๋กœ ๋ฐฐ์น˜๋กœ ๋ฏธ๋‹ˆ ๋ฐฐ์น˜๋กœ ๋ฌถ์„ ๊ฒƒ์ด๋ฏ€๋กœ ์ฒซ ๋ฒˆ์งธ ์ฐจ์›์œผ๋กœ ๋ฐฐ์น˜ ํฌ๊ธฐ๊ฐ€ ์žˆ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๊ฐ€๋Šฅํ•œ ๊ฐœ์„  ์‚ฌํ•ญ์€ ์—ฌ๊ธฐ์—์„œ ๋…ผ์˜๋˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. https://github.com/pytorch/pytorch/pull/50693 - ํŒŒ์ดํ”„ ๋‹จ๊ณ„ ์ˆ˜์ค€์—์„œ ์กฐ๊ฑด๋ถ€ ์ œ์–ด ํ๋ฆ„์€ ๋ถˆ๊ฐ€๋Šฅํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, T5์™€ ๊ฐ™์€ ์ธ์ฝ”๋”-๋””์ฝ”๋” ๋ชจ๋ธ์€ ์กฐ๊ฑด๋ถ€ ์ธ์ฝ”๋” ๋‹จ๊ณ„๋ฅผ ์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด ํŠน๋ณ„ํ•œ ํ•ด๊ฒฐ์ฑ…์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. - ๊ฐ ๋ ˆ์ด์–ด๋ฅผ ์ •๋ ฌํ•˜์—ฌ ํ•˜๋‚˜์˜ ๋ชจ๋ธ์˜ ์ถœ๋ ฅ์ด ๋‹ค๋ฅธ ๋ชจ๋ธ์˜ ์ž…๋ ฅ์ด ๋˜๋„๋กํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์šฐ๋ฆฌ๋Š” ์•„์ง Varuna์™€ SageMaker๋กœ ์‹คํ—˜ํ•˜์ง€ ์•Š์•˜์ง€๋งŒ, ํ•ด๋‹น ๋…ผ๋ฌธ๋“ค์€ ์œ„์—์„œ ์–ธ๊ธ‰ํ•œ ๋ฌธ์ œ๋“ค์˜ ๋ชฉ๋ก์„ ๊ทน๋ณตํ–ˆ๊ณ  ์‚ฌ์šฉ์ž์˜ ๋ชจ๋ธ์— ๋Œ€ํ•œ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์ด ํ›จ์”ฌ ์ ๊ฒŒ ํ•„์š”ํ•˜๋‹ค๊ณ  ๋ณด๊ณ ํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๊ตฌํ˜„: - [ํŒŒ์ดํ† ์น˜](https://pytorch.org/docs/stable/pipeline.html) (ํŒŒ์ดํ† ์น˜-1.8์—์„œ ์ดˆ๊ธฐ ์ง€์›, 1.9์—์„œ ์ ์ง„์ ์œผ๋กœ ๊ฐœ์„ ๋˜๊ณ  1.10์—์„œ ๋” ๊ฐœ์„ ๋จ). [์˜ˆ์ œ](https://github.com/pytorch/pytorch/blob/master/benchmarks/distributed/pipeline/pipe.py)๋„ ์ฐธ๊ณ ํ•˜์„ธ์š”. - [FairScale](https://fairscale.readthedocs.io/en/latest/tutorials/pipe.html) - [DeepSpeed](https://www.deepspeed.ai/tutorials/pipeline/) - [Megatron-LM](https://github.com/NVIDIA/Megatron-LM)์€ ๋‚ด๋ถ€ ๊ตฌํ˜„์„ ๊ฐ€์ง€๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค - API ์—†์Œ. - [Varuna](https://github.com/microsoft/varuna) - [SageMaker](https://arxiv.org/abs/2111.05972) - ์ด๋Š” AWS์—์„œ๋งŒ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ์†Œ์œ  ์†”๋ฃจ์…˜์ž…๋‹ˆ๋‹ค. - [OSLO](https://github.com/tunib-ai/oslo) - ์ด๋Š” Hugging Face Transformers๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ๊ตฌํ˜„๋œ ํŒŒ์ดํ”„๋ผ์ธ ๋ณ‘๋ ฌํ™”์ž…๋‹ˆ๋‹ค. ๐Ÿค— Transformers ์ƒํƒœ: ์ด ์ž‘์„ฑ ์‹œ์ ์—์„œ ๋ชจ๋ธ ์ค‘ ์–ด๋Š ๊ฒƒ๋„ ์™„์ „ํ•œ PP๋ฅผ ์ง€์›ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. GPT2์™€ T5 ๋ชจ๋ธ์€ naive MP๋ฅผ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. ์ฃผ์š” ์žฅ์• ๋ฌผ์€ ๋ชจ๋ธ์„ `nn.Sequential`๋กœ ๋ณ€ํ™˜ํ•˜๊ณ  ๋ชจ๋“  ์ž…๋ ฅ์„ ํ…์„œ๋กœ ๊ฐ€์ ธ์™€์•ผ ํ•˜๋Š” ๊ฒƒ์„ ์ฒ˜๋ฆฌํ•  ์ˆ˜ ์—†๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ํ˜„์žฌ ๋ชจ๋ธ์—๋Š” ์ด๋Ÿฌํ•œ ๋ณ€ํ™˜์„ ๋งค์šฐ ๋ณต์žกํ•˜๊ฒŒ ๋งŒ๋“œ๋Š” ๋งŽ์€ ๊ธฐ๋Šฅ์ด ํฌํ•จ๋˜์–ด ์žˆ์–ด ์ œ๊ฑฐํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๊ธฐํƒ€ ์ ‘๊ทผ ๋ฐฉ๋ฒ•: DeepSpeed, Varuna ๋ฐ SageMaker๋Š” [๊ต์ฐจ ํŒŒ์ดํ”„๋ผ์ธ(Interleaved Pipeline)](https://docs.aws.amazon.com/sagemaker/latest/dg/model-parallel-core-features.html) ๊ฐœ๋…์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ![interleaved-pipeline-execution](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-sagemaker-interleaved-pipeline.png) ์—ฌ๊ธฐ์„œ๋Š” ๋ฒ„๋ธ”(์œ ํœด ์‹œ๊ฐ„)์„ ์—ญ๋ฐฉํ–ฅ ํŒจ์Šค์— ์šฐ์„ ์ˆœ์œ„๋ฅผ ๋ถ€์—ฌํ•˜์—ฌ ์ตœ์†Œํ™”ํ•ฉ๋‹ˆ๋‹ค. Varuna๋Š” ๊ฐ€์žฅ ํšจ์œจ์ ์ธ ์Šค์ผ€์ค„๋ง์„ ์ฐพ๊ธฐ ์œ„ํ•ด ์‹œ๋ฎฌ๋ ˆ์ด์…˜์„ ์‚ฌ์šฉํ•˜์—ฌ ์Šค์ผ€์ค„์„ ๊ฐœ์„ ํ•˜๋ ค๊ณ  ํ•ฉ๋‹ˆ๋‹ค. 
OSLO๋Š” `nn.Sequential`๋กœ ๋ณ€ํ™˜ํ•˜์ง€ ์•Š๊ณ  Transformers๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•œ ํŒŒ์ดํ”„๋ผ์ธ ๋ณ‘๋ ฌํ™”๋ฅผ ๊ตฌํ˜„ํ–ˆ์Šต๋‹ˆ๋‹ค. ## ํ…์„œ ๋ณ‘๋ ฌ ์ฒ˜๋ฆฌ [[tensor-parallelism]] ํ…์„œ ๋ณ‘๋ ฌ ์ฒ˜๋ฆฌ์—์„œ๋Š” ๊ฐ GPU๊ฐ€ ํ…์„œ์˜ ์ผ๋ถ€๋ถ„๋งŒ ์ฒ˜๋ฆฌํ•˜๊ณ  ์ „์ฒด ํ…์„œ๊ฐ€ ํ•„์š”ํ•œ ์—ฐ์‚ฐ์— ๋Œ€ํ•ด์„œ๋งŒ ์ „์ฒด ํ…์„œ๋ฅผ ์ง‘๊ณ„ํ•ฉ๋‹ˆ๋‹ค. ์ด ์„น์…˜์—์„œ๋Š” [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) ๋…ผ๋ฌธ์ธ [Efficient Large-Scale Language Model Training on GPU Clusters](https://arxiv.org/abs/2104.04473)์—์„œ์˜ ๊ฐœ๋…๊ณผ ๋‹ค์ด์–ด๊ทธ๋žจ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. Transformer์˜ ์ฃผ์š” ๊ตฌ์„ฑ ์š”์†Œ๋Š” fully connected `nn.Linear`์™€ ๋น„์„ ํ˜• ํ™œ์„ฑํ™” ํ•จ์ˆ˜์ธ `GeLU`์ž…๋‹ˆ๋‹ค. Megatron ๋…ผ๋ฌธ์˜ ํ‘œ๊ธฐ๋ฒ•์„ ๋”ฐ๋ผ ํ–‰๋ ฌ์˜ ์ ๊ณฑ ๋ถ€๋ถ„์„ `Y = GeLU(XA)`๋กœ ํ‘œํ˜„ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ `X`์™€ `Y`๋Š” ์ž…๋ ฅ ๋ฐ ์ถœ๋ ฅ ๋ฒกํ„ฐ์ด๊ณ  `A`๋Š” ๊ฐ€์ค‘์น˜ ํ–‰๋ ฌ์ž…๋‹ˆ๋‹ค. ํ–‰๋ ฌ ํ˜•ํƒœ๋กœ ๊ณ„์‚ฐ์„ ์‚ดํŽด๋ณด๋ฉด, ํ–‰๋ ฌ ๊ณฑ์…ˆ์„ ๋‹ค์ค‘ GPU๋กœ ๋ถ„ํ• ํ•  ์ˆ˜ ์žˆ๋Š” ๋ฐฉ๋ฒ•์„ ์‰ฝ๊ฒŒ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ![Parallel GEMM](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-tp-parallel_gemm.png) ๊ฐ€์ค‘์น˜ ํ–‰๋ ฌ `A`๋ฅผ `N`๊ฐœ์˜ GPU์— ๋Œ€ํ•ด ์—ด๋ณ„๋กœ ๋ถ„ํ• ํ•˜๊ณ  ๋ณ‘๋ ฌ๋กœ ํ–‰๋ ฌ ๊ณฑ์…ˆ `XA_1`์—์„œ `XA_n`๊นŒ์ง€ ์ˆ˜ํ–‰ํ•˜๋ฉด `N`๊ฐœ์˜ ์ถœ๋ ฅ ๋ฒกํ„ฐ `Y_1, Y_2, ..., Y_n`๊ฐ€ ์ƒ์„ฑ๋˜๋ฉฐ ๋…๋ฆฝ์ ์œผ๋กœ `GeLU`์— ์ „๋‹ฌ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ![independent GeLU](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-tp-independent-gelu.png) ์ด ์›๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋™๊ธฐํ™”๊ฐ€ ํ•„์š”ํ•˜์ง€ ์•Š์€ GPU ๊ฐ„์˜ ์ž„์˜ ๊นŠ์ด์˜ MLP๋ฅผ ์—…๋ฐ์ดํŠธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ๊ฒฐ๊ณผ ๋ฒกํ„ฐ๋ฅผ ์ƒค๋“œ๋กœ๋ถ€ํ„ฐ ์žฌ๊ตฌ์„ฑํ•ด์•ผ ํ•˜๋Š” ๋งˆ์ง€๋ง‰ ๋‹จ๊ณ„๊นŒ์ง€๋Š” GPU ๊ฐ„์˜ ๋™๊ธฐํ™”๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. Megatron-LM ๋…ผ๋ฌธ์˜ ์ €์ž๋“ค์€ ์ด์— ๋Œ€ํ•œ ์œ ์šฉํ•œ ๊ทธ๋ฆผ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค: ![parallel shard processing](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-tp-parallel_shard_processing.png) ๋‹ค์ค‘ ํ—ค๋“œ ์–ดํ…์…˜ ๋ ˆ์ด์–ด์˜ ๋ณ‘๋ ฌํ™”๋Š” ๋”์šฑ ๊ฐ„๋‹จํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฏธ ๋…๋ฆฝ์ ์ธ ๋‹ค์ค‘ ํ—ค๋“œ๋ฅผ ๊ฐ€์ง€๊ณ  ์žˆ๊ธฐ ๋•Œ๋ฌธ์— ์ด๋ฏธ ๋ณ‘๋ ฌํ™”๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค! ![parallel self-attention](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-tp-parallel_self_attention.png) ํŠน๋ณ„ ๊ณ ๋ ค์‚ฌํ•ญ: TP๋Š” ๋งค์šฐ ๋น ๋ฅธ ๋„คํŠธ์›Œํฌ๊ฐ€ ํ•„์š”ํ•˜๋ฏ€๋กœ ํ•œ ๊ฐœ ์ด์ƒ์˜ ๋…ธ๋“œ์—์„œ TP๋ฅผ ์ˆ˜ํ–‰ํ•˜๋Š” ๊ฒƒ์€ ๊ถŒ์žฅ๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ์‹ค์ œ๋กœ ๋…ธ๋“œ์— 4๊ฐœ์˜ GPU๊ฐ€ ์žˆ๋Š” ๊ฒฝ์šฐ TP์˜ ์ตœ๋Œ€ ์ฐจ์ˆ˜๋Š” 4์ž…๋‹ˆ๋‹ค. TP ์ฐจ์ˆ˜๊ฐ€ 8์ธ ๊ฒฝ์šฐ ์ตœ์†Œํ•œ 8๊ฐœ์˜ GPU๊ฐ€ ์žˆ๋Š” ๋…ธ๋“œ๋ฅผ ์‚ฌ์šฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด ์„น์…˜์€ ์›๋ž˜์˜ [๋” ์ž์„ธํ•œ TP ๊ฐœ์š”](https://github.com/huggingface/transformers/issues/10321#issuecomment-783543530)๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•ฉ๋‹ˆ๋‹ค. ์ž‘์„ฑ์ž๋Š” [@anton-l](https://github.com/anton-l)์ž…๋‹ˆ๋‹ค. SageMaker๋Š” ๋” ํšจ์œจ์ ์ธ ์ฒ˜๋ฆฌ๋ฅผ ์œ„ํ•ด TP์™€ DP๋ฅผ ๊ฒฐํ•ฉํ•ฉ๋‹ˆ๋‹ค. ๋Œ€์ฒด ์ด๋ฆ„: - DeepSpeed๋Š” ์ด๋ฅผ [ํ…์„œ ์Šฌ๋ผ์ด์‹ฑ](https://www.deepspeed.ai/training/#model-parallelism)์ด๋ผ๊ณ  ๋ถ€๋ฆ…๋‹ˆ๋‹ค. ๊ตฌํ˜„: - [Megatron-LM](https://github.com/NVIDIA/Megatron-LM)์€ ๋‚ด๋ถ€ ๊ตฌํ˜„์„ ๊ฐ€์ง€๊ณ  ์žˆ์œผ๋ฏ€๋กœ ๋ชจ๋ธ์— ๋งค์šฐ ํŠนํ™”๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. - [parallelformers](https://github.com/tunib-ai/parallelformers) (ํ˜„์žฌ๋Š” ์ถ”๋ก ์—๋งŒ ํ•ด๋‹น) - [SageMaker](https://arxiv.org/abs/2111.05972) - ์ด๋Š” AWS์—์„œ๋งŒ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ์†Œ์œ  ์†”๋ฃจ์…˜์ž…๋‹ˆ๋‹ค. 
- [OSLO](https://github.com/tunib-ai/oslo)์€ Transformers๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•œ ํ…์„œ ๋ณ‘๋ ฌ ์ฒ˜๋ฆฌ ๊ตฌํ˜„์„ ๊ฐ€์ง€๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๐Ÿค— Transformers ํ˜„ํ™ฉ: - core: ์•„์ง ํ•ต์‹ฌ ๋ถ€๋ถ„์— ๊ตฌํ˜„๋˜์ง€ ์•Š์Œ - ๊ทธ๋Ÿฌ๋‚˜ ์ถ”๋ก ์„ ํ•˜๋ ค๋ฉด [parallelformers](https://github.com/tunib-ai/parallelformers)๊ฐ€ ๋Œ€๋ถ€๋ถ„์˜ ๋ชจ๋ธ์„ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ํ•ต์‹ฌ ๋ถ€๋ถ„์— ๊ตฌํ˜„๋˜๊ธฐ ์ „๊นŒ์ง€ ๊ทธ๋“ค์˜ ๊ฒƒ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ํ›ˆ๋ จ ๋ชจ๋“œ๋„ ์ง€์›๋  ์˜ˆ์ •์ž…๋‹ˆ๋‹ค. - Deepspeed-Inference๋Š” CUDA ์ปค๋„์„ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•˜๋Š” ๋งค์šฐ ๋น ๋ฅธ ์ถ”๋ก  ๋ชจ๋“œ์—์„œ BERT, GPT-2 ๋ฐ GPT-Neo ๋ชจ๋ธ์„ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. ์ž์„ธํ•œ ๋‚ด์šฉ์€ [์—ฌ๊ธฐ](https://www.deepspeed.ai/tutorials/inference-tutorial/)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ## DP+PP [[dppp]] DeepSpeed [pipeline tutorial](https://www.deepspeed.ai/tutorials/pipeline/)์—์„œ ๋‹ค์Œ ๋‹ค์ด์–ด๊ทธ๋žจ์€ DP์™€ PP๋ฅผ ๊ฒฐํ•ฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค. ![dp-pp-2d](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-zero-dp-pp.png) ์—ฌ๊ธฐ์„œ DP ๋žญํฌ 0์€ GPU2๋ฅผ ๋ณด์ง€ ๋ชปํ•˜๊ณ , DP ๋žญํฌ 1์€ GPU3์„ ๋ณด์ง€ ๋ชปํ•˜๋Š” ๊ฒƒ์ด ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. DP์—๊ฒŒ๋Š” ๋”ฑ 2๊ฐœ์˜ GPU์ธ ๊ฒƒ์ฒ˜๋Ÿผ ๋ฐ์ดํ„ฐ๋ฅผ ๊ณต๊ธ‰ํ•ฉ๋‹ˆ๋‹ค. GPU0์€ PP๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ GPU2์—๊ฒŒ ์ผ๋ถ€ ์ž‘์—…์„ "๋น„๋ฐ€๋ฆฌ์—" ํ• ๋‹นํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  GPU1๋„ GPU3์„ ๋„์›€์œผ๋กœ ์‚ผ์•„ ๊ฐ™์€ ๋ฐฉ์‹์œผ๋กœ ์ž‘์—…ํ•ฉ๋‹ˆ๋‹ค. ๊ฐ ์ฐจ์›๋งˆ๋‹ค ์ ์–ด๋„ 2๊ฐœ์˜ GPU๊ฐ€ ํ•„์š”ํ•˜๋ฏ€๋กœ ์ตœ์†Œํ•œ 4๊ฐœ์˜ GPU๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ๊ตฌํ˜„: - [DeepSpeed](https://github.com/microsoft/DeepSpeed) - [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) - [Varuna](https://github.com/microsoft/varuna) - [SageMaker](https://arxiv.org/abs/2111.05972) - [OSLO](https://github.com/tunib-ai/oslo) ๐Ÿค— Transformers ํ˜„ํ™ฉ: ์•„์ง ๊ตฌํ˜„๋˜์ง€ ์•Š์Œ ## DP+PP+TP [[dppptp]] ๋” ํšจ์œจ์ ์ธ ํ›ˆ๋ จ์„ ์œ„ํ•ด PP์™€ TP ๋ฐ DP๋ฅผ ๊ฒฐํ•ฉํ•˜์—ฌ 3D ๋ณ‘๋ ฌ ์ฒ˜๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ ๋‹ค์ด์–ด๊ทธ๋žจ์—์„œ ์ด๋ฅผ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ![dp-pp-tp-3d](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-deepspeed-3d.png) ์ด ๋‹ค์ด์–ด๊ทธ๋žจ์€ [3D parallelism: Scaling to trillion-parameter models](https://www.microsoft.com/en-us/research/blog/deepspeed-extreme-scale-model-training-for-everyone/)์ด๋ผ๋Š” ๋ธ”๋กœ๊ทธ ๊ธ€์—์„œ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ฐ ์ฐจ์›๋งˆ๋‹ค ์ ์–ด๋„ 2๊ฐœ์˜ GPU๊ฐ€ ํ•„์š”ํ•˜๋ฏ€๋กœ ์ตœ์†Œํ•œ 8๊ฐœ์˜ GPU๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ๊ตฌํ˜„: - [DeepSpeed](https://github.com/microsoft/DeepSpeed) - DeepSpeed๋Š” ๋”์šฑ ํšจ์œจ์ ์ธ DP์ธ ZeRO-DP๋ผ๊ณ ๋„ ๋ถ€๋ฆ…๋‹ˆ๋‹ค. - [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) - [Varuna](https://github.com/microsoft/varuna) - [SageMaker](https://arxiv.org/abs/2111.05972) - [OSLO](https://github.com/tunib-ai/oslo) ๐Ÿค— Transformers ํ˜„ํ™ฉ: ์•„์ง ๊ตฌํ˜„๋˜์ง€ ์•Š์Œ. PP์™€ TP๊ฐ€ ์—†๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ## ZeRO DP+PP+TP [[zero-dppptp]] DeepSpeed์˜ ์ฃผ์š” ๊ธฐ๋Šฅ ์ค‘ ํ•˜๋‚˜๋Š” DP์˜ ํ™•์žฅ์ธ ZeRO์ž…๋‹ˆ๋‹ค. ZeRO-DP์— ๋Œ€ํ•ด ์ด๋ฏธ [ZeRO Data Parallelism](#zero-data-parallelism)์—์„œ ๋…ผ์˜๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ ์ด๋Š” PP๋‚˜ TP๋ฅผ ํ•„์š”๋กœํ•˜์ง€ ์•Š๋Š” ๋…๋ฆฝ์ ์ธ ๊ธฐ๋Šฅ์ž…๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ PP์™€ TP์™€ ๊ฒฐํ•ฉํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ZeRO-DP๊ฐ€ PP์™€ (์„ ํƒ์ ์œผ๋กœ TP์™€) ๊ฒฐํ•ฉ๋˜๋ฉด ์ผ๋ฐ˜์ ์œผ๋กœ ZeRO ๋‹จ๊ณ„ 1(์˜ตํ‹ฐ๋งˆ์ด์ € ๋ถ„ํ• )๋งŒ ํ™œ์„ฑํ™”๋ฉ๋‹ˆ๋‹ค. ์ด๋ก ์ ์œผ๋กœ๋Š” ZeRO ๋‹จ๊ณ„ 2(๊ทธ๋ผ๋””์–ธํŠธ ๋ถ„ํ• )๋ฅผ ํŒŒ์ดํ”„๋ผ์ธ ๋ณ‘๋ ฌ ์ฒ˜๋ฆฌ์™€ ํ•จ๊ป˜ ์‚ฌ์šฉํ•  ์ˆ˜๋„ ์žˆ์ง€๋งŒ, ์ด๋Š” ์„ฑ๋Šฅ์— ๋‚˜์œ ์˜ํ–ฅ์„ ๋ฏธ์น  ๊ฒƒ์ž…๋‹ˆ๋‹ค. 
๊ฐ ๋งˆ์ดํฌ๋กœ ๋ฐฐ์น˜๋งˆ๋‹ค ๊ทธ๋ผ๋””์–ธํŠธ๋ฅผ ์ƒค๋”ฉํ•˜๊ธฐ ์ „์— ์ถ”๊ฐ€์ ์ธ ๋ฆฌ๋“€์Šค-์Šค์บํ„ฐ ์ปฌ๋ ‰ํ‹ฐ๋ธŒ๊ฐ€ ํ•„์š”ํ•˜๋ฉฐ, ์ด๋Š” ์ž ์žฌ์ ์œผ๋กœ ์ƒ๋‹นํ•œ ํ†ต์‹  ์˜ค๋ฒ„ํ—ค๋“œ๋ฅผ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ํŒŒ์ดํ”„๋ผ์ธ ๋ณ‘๋ ฌ ์ฒ˜๋ฆฌ์˜ ํŠน์„ฑ์ƒ ์ž‘์€ ๋งˆ์ดํฌ๋กœ ๋ฐฐ์น˜๊ฐ€ ์‚ฌ์šฉ๋˜๋ฉฐ, ์‚ฐ์ˆ  ์—ฐ์‚ฐ ๊ฐ•๋„(๋งˆ์ดํฌ๋กœ ๋ฐฐ์น˜ ํฌ๊ธฐ)๋ฅผ ๊ท ํ˜• ์žˆ๊ฒŒ ์œ ์ง€ํ•˜๋ฉด์„œ ํŒŒ์ดํ”„๋ผ์ธ ๋ฒ„๋ธ”(๋งˆ์ดํฌ๋กœ ๋ฐฐ์น˜ ์ˆ˜)์„ ์ตœ์†Œํ™”ํ•˜๋Š” ๊ฒƒ์— ์ค‘์ ์„ ๋‘ก๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ํ•ด๋‹น ํ†ต์‹  ๋น„์šฉ์€ ๋ฌธ์ œ๊ฐ€ ๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋˜ํ•œ, PP๋กœ ์ธํ•ด ์ •์ƒ๋ณด๋‹ค ์ ์€ ์ˆ˜์˜ ๋ ˆ์ด์–ด๊ฐ€ ์žˆ์œผ๋ฏ€๋กœ ๋ฉ”๋ชจ๋ฆฌ ์ ˆ์•ฝ์€ ํฌ์ง€ ์•Š์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค. PP๋Š” ์ด๋ฏธ ๊ทธ๋ž˜๋””์–ธํŠธ ํฌ๊ธฐ๋ฅผ ``1/PP``๋กœ ์ค„์ด๊ธฐ ๋•Œ๋ฌธ์— ๊ทธ๋ž˜๋””์–ธํŠธ ์ƒค๋”ฉ์˜ ์ ˆ์•ฝ ํšจ๊ณผ๋Š” ์ˆœ์ˆ˜ DP๋ณด๋‹ค๋Š” ๋ฏธ๋ฏธํ•ฉ๋‹ˆ๋‹ค. ZeRO ๋‹จ๊ณ„ 3๋„ ๊ฐ™์€ ์ด์œ ๋กœ ์ข‹์€ ์„ ํƒ์ด ์•„๋‹™๋‹ˆ๋‹ค - ๋” ๋งŽ์€ ๋…ธ๋“œ ๊ฐ„ ํ†ต์‹ ์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ZeRO๊ฐ€ ์žˆ๊ธฐ ๋•Œ๋ฌธ์— ๋‹ค๋ฅธ ์ด์ ์€ ZeRO-Offload์ž…๋‹ˆ๋‹ค. ์ด๋Š” ๋‹จ๊ณ„ 1์ด๋ฏ€๋กœ ์˜ตํ‹ฐ๋งˆ์ด์ € ์ƒํƒœ๋ฅผ CPU๋กœ ์˜คํ”„๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ตฌํ˜„: - [Megatron-DeepSpeed](https://github.com/microsoft/Megatron-DeepSpeed) ๋ฐ [BigScience์˜ Megatron-Deepspeed](https://github.com/bigscience-workshop/Megatron-DeepSpeed), ์ด์ „ ์ €์žฅ์†Œ์˜ ํฌํฌ์ž…๋‹ˆ๋‹ค. - [OSLO](https://github.com/tunib-ai/oslo) ์ค‘์š”ํ•œ ๋…ผ๋ฌธ: - [Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model]( https://arxiv.org/abs/2201.11990) ๐Ÿค— Transformers ํ˜„ํ™ฉ: ์•„์ง ๊ตฌํ˜„๋˜์ง€ ์•Š์Œ, PP์™€ TP๊ฐ€ ์—†๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ## FlexFlow [[flexflow]] [FlexFlow](https://github.com/flexflow/FlexFlow)๋Š” ์•ฝ๊ฐ„ ๋‹ค๋ฅธ ๋ฐฉ์‹์œผ๋กœ ๋ณ‘๋ ฌํ™” ๋ฌธ์ œ๋ฅผ ํ•ด๊ฒฐํ•ฉ๋‹ˆ๋‹ค. ๋…ผ๋ฌธ: ["Beyond Data and Model Parallelism for Deep Neural Networks" by Zhihao Jia, Matei Zaharia, Alex Aiken](https://arxiv.org/abs/1807.05358) ์ด๋Š” Sample-Operator-Attribute-Parameter๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•˜๋Š” ์ผ์ข…์˜ 4D ๋ณ‘๋ ฌํ™”๋ฅผ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค. 1. Sample = ๋ฐ์ดํ„ฐ ๋ณ‘๋ ฌํ™” (์ƒ˜ํ”Œ๋ณ„ ๋ณ‘๋ ฌ) 2. Operator = ๋‹จ์ผ ์—ฐ์‚ฐ์„ ์—ฌ๋Ÿฌ ํ•˜์œ„ ์—ฐ์‚ฐ์œผ๋กœ ๋ณ‘๋ ฌํ™” 3. Attribute = ๋ฐ์ดํ„ฐ ๋ณ‘๋ ฌํ™” (๊ธธ์ด๋ณ„ ๋ณ‘๋ ฌ) 4. Parameter = ๋ชจ๋ธ ๋ณ‘๋ ฌํ™” (์ˆ˜ํ‰ ๋˜๋Š” ์ˆ˜์ง๊ณผ ๊ด€๊ณ„์—†์ด) ์˜ˆ์‹œ: * Sample 512 ๊ธธ์ด์˜ 10๊ฐœ์˜ ๋ฐฐ์น˜๋ฅผ ๊ฐ€์ •ํ•ด ๋ด…์‹œ๋‹ค. ์ด๋ฅผ sample ์ฐจ์›์œผ๋กœ 2๊ฐœ์˜ ์žฅ์น˜์— ๋ณ‘๋ ฌํ™”ํ•˜๋ฉด, 10 x 512๋Š” 5 x 2 x 512๊ฐ€ ๋ฉ๋‹ˆ๋‹ค. * Operator ๋ ˆ์ด์–ด ์ •๊ทœํ™”๋ฅผ ์ˆ˜ํ–‰ํ•œ๋‹ค๋ฉด, ์šฐ์„  std๋ฅผ ๊ณ„์‚ฐํ•˜๊ณ  ๋‘ ๋ฒˆ์งธ๋กœ mean์„ ๊ณ„์‚ฐํ•œ ๋‹ค์Œ ๋ฐ์ดํ„ฐ๋ฅผ ์ •๊ทœํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. Operator ๋ณ‘๋ ฌํ™”๋Š” std์™€ mean์„ ๋ณ‘๋ ฌ๋กœ ๊ณ„์‚ฐํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ operator ์ฐจ์›์œผ๋กœ 2๊ฐœ์˜ ์žฅ์น˜ (cuda:0, cuda:1)์— ๋ณ‘๋ ฌํ™”ํ•˜๋ฉด, ๋จผ์ € ์ž…๋ ฅ ๋ฐ์ดํ„ฐ๋ฅผ ๋‘ ์žฅ์น˜๋กœ ๋ณต์‚ฌํ•œ ๋‹ค์Œ cuda:0์—์„œ std๋ฅผ ๊ณ„์‚ฐํ•˜๊ณ  cuda:1์—์„œ ๋™์‹œ์— mean์„ ๊ณ„์‚ฐํ•ฉ๋‹ˆ๋‹ค. * Attribute 512 ๊ธธ์ด์˜ 10๊ฐœ์˜ ๋ฐฐ์น˜๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฅผ attribute ์ฐจ์›์œผ๋กœ 2๊ฐœ์˜ ์žฅ์น˜์— ๋ณ‘๋ ฌํ™”ํ•˜๋ฉด, 10 x 512๋Š” 10 x 2 x 256์ด ๋ฉ๋‹ˆ๋‹ค. * Parameter ์ด๋Š” tensor ๋ชจ๋ธ ๋ณ‘๋ ฌํ™” ๋˜๋Š” naive layer-wise ๋ชจ๋ธ ๋ณ‘๋ ฌํ™”์™€ ์œ ์‚ฌํ•ฉ๋‹ˆ๋‹ค. ![flex-flow-soap](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-flexflow.jpeg) ์ด ํ”„๋ ˆ์ž„์›Œํฌ์˜ ์ค‘์š”ํ•œ ์ ์€ (1) GPU/TPU/CPU ๋Œ€ (2) RAM/DRAM ๋Œ€ (3) ๋น ๋ฅธ ์ธํŠธ๋ผ-์ปค๋„ฅํŠธ ๋Œ€ ๋Š๋ฆฐ ์ธํ„ฐ-์ปค๋„ฅํŠธ์™€ ๊ฐ™์€ ๋ฆฌ์†Œ์Šค๋ฅผ ๊ณ ๋ คํ•˜์—ฌ ์–ด๋””์—์„œ ์–ด๋–ค ๋ณ‘๋ ฌํ™”๋ฅผ ์‚ฌ์šฉํ• ์ง€๋ฅผ ์•Œ๊ณ ๋ฆฌ์ฆ˜์ ์œผ๋กœ ์ž๋™์œผ๋กœ ์ตœ์ ํ™”ํ•œ๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. 
ํ•˜๋‚˜ ๋งค์šฐ ์ค‘์š”ํ•œ ์ธก๋ฉด์€ FlexFlow๊ฐ€ ์ •์ ์ด๊ณ  ๊ณ ์ •๋œ ์›Œํฌ๋กœ๋“œ๋ฅผ ๊ฐ€์ง„ ๋ชจ๋ธ์— ๋Œ€ํ•œ DNN ๋ณ‘๋ ฌํ™”๋ฅผ ์ตœ์ ํ™”ํ•˜๊ธฐ ์œ„ํ•ด ์„ค๊ณ„๋˜์—ˆ๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋™์ ์ธ ๋™์ž‘์„ ๊ฐ€์ง„ ๋ชจ๋ธ์€ ๋ฐ˜๋ณต๋งˆ๋‹ค ๋‹ค๋ฅธ ๋ณ‘๋ ฌํ™” ์ „๋žต์„ ์„ ํ˜ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์ด ํ”„๋ ˆ์ž„์›Œํฌ์˜ ์žฅ์ ์€ ์„ ํƒํ•œ ํด๋Ÿฌ์Šคํ„ฐ์—์„œ 30๋ถ„ ๋™์•ˆ ์‹œ๋ฎฌ๋ ˆ์ด์…˜์„ ์‹คํ–‰ํ•˜๊ณ  ์ด ํŠน์ • ํ™˜๊ฒฝ์„ ์ตœ์ ์œผ๋กœ ํ™œ์šฉํ•˜๊ธฐ ์œ„ํ•œ ์ตœ์ƒ์˜ ์ „๋žต์„ ์ œ์•ˆํ•œ๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋ถ€ํ’ˆ์„ ์ถ”๊ฐ€/์ œ๊ฑฐ/๊ต์ฒดํ•˜๋ฉด ์‹คํ–‰ํ•˜๊ณ  ๊ทธ์— ๋Œ€ํ•œ ๊ณ„ํš์„ ๋‹ค์‹œ ์ตœ์ ํ™”ํ•œ ํ›„ ํ›ˆ๋ จํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค๋ฅธ ์„ค์ •์€ ์ž์ฒด์ ์ธ ์‚ฌ์šฉ์ž ์ •์˜ ์ตœ์ ํ™”๋ฅผ ๊ฐ€์งˆ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๐Ÿค— Transformers ํ˜„ํ™ฉ: ์•„์ง ํ†ตํ•ฉ๋˜์ง€ ์•Š์Œ. ์ด๋ฏธ [transformers.utils.fx](https://github.com/huggingface/transformers/blob/master/src/transformers/utils/fx.py)๋ฅผ ํ†ตํ•ด ๋ชจ๋ธ์„ FX-์ถ”์ ํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ, ์ด๋Š” FlexFlow์˜ ์„ ํ–‰ ์กฐ๊ฑด์ž…๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์–ด๋–ค ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•ด์•ผ FlexFlow๊ฐ€ ์šฐ๋ฆฌ์˜ ๋ชจ๋ธ๊ณผ ํ•จ๊ป˜ ์ž‘๋™ํ•  ์ˆ˜ ์žˆ๋Š”์ง€ ํŒŒ์•…ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ## ์–ด๋–ค ์ „๋žต์„ ์‚ฌ์šฉํ•ด์•ผ ํ• ๊นŒ์š”? [[which-strategy-to-use-when]] ๋‹ค์Œ์€ ์–ด๋–ค ๋ณ‘๋ ฌํ™” ์ „๋žต์„ ์–ธ์ œ ์‚ฌ์šฉํ•ด์•ผ ํ•˜๋Š”์ง€์— ๋Œ€ํ•œ ๋งค์šฐ ๋Œ€๋žต์ ์ธ ๊ฐœ์š”์ž…๋‹ˆ๋‹ค. ๊ฐ ๋ชฉ๋ก์˜ ์ฒซ ๋ฒˆ์งธ ์ „๋žต์ด ์ผ๋ฐ˜์ ์œผ๋กœ ๋” ๋น ๋ฆ…๋‹ˆ๋‹ค. **โ‡จ ๋‹จ์ผ GPU** * ๋ชจ๋ธ์ด ๋‹จ์ผ GPU์— ๋งž๋Š” ๊ฒฝ์šฐ: 1. ์ผ๋ฐ˜์ ์ธ ์‚ฌ์šฉ * ๋ชจ๋ธ์ด ๋‹จ์ผ GPU์— ๋งž์ง€ ์•Š๋Š” ๊ฒฝ์šฐ: 1. ZeRO + CPU ๋ฐ ์˜ต์…˜์œผ๋กœ NVMe ์–ธ๋กœ๋“œ 2. ์œ„์™€ ๋™์ผํ•˜๊ฒŒ ์‚ฌ์šฉํ•˜๋˜, ๊ฐ€์žฅ ํฐ ๋ ˆ์ด์–ด๊ฐ€ ๋‹จ์ผ GPU์— ๋งž์ง€ ์•Š๋Š” ๊ฒฝ์šฐ Memory Centric Tiling(์ž์„ธํ•œ ๋‚ด์šฉ์€ ์•„๋ž˜ ์ฐธ์กฐ)์„ ์ถ”๊ฐ€์ ์œผ๋กœ ์‚ฌ์šฉ * ๊ฐ€์žฅ ํฐ ๋ ˆ์ด์–ด๊ฐ€ ๋‹จ์ผ GPU์— ๋งž์ง€ ์•Š๋Š” ๊ฒฝ์šฐ: 1. ZeRO - [Memory Centric Tiling](https://deepspeed.readthedocs.io/en/latest/zero3.html#memory-centric-tiling) (MCT) ํ™œ์„ฑํ™”. ์ด๋ฅผ ํ†ตํ•ด ํฌ๊ธฐ๊ฐ€ ๋งค์šฐ ํฐ ๋ ˆ์ด์–ด๋ฅผ ์ž„์˜๋กœ ๋ถ„ํ• ํ•˜์—ฌ ์ˆœ์ฐจ์ ์œผ๋กœ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. MCT๋Š” GPU์— ํ™œ์„ฑํ™”๋œ ๋งค๊ฐœ๋ณ€์ˆ˜์˜ ์ˆ˜๋ฅผ ์ค„์ด์ง€๋งŒ ํ™œ์„ฑํ™” ๋ฉ”๋ชจ๋ฆฌ์—๋Š” ์˜ํ–ฅ์„ ์ฃผ์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ํ˜„์žฌ ์ž‘์„ฑ ๊ธฐ์ค€์œผ๋กœ ์ด ์š”๊ตฌ์‚ฌํ•ญ์€ ๋งค์šฐ ๋“œ๋ฌผ๊ธฐ ๋•Œ๋ฌธ์— ์‚ฌ์šฉ์ž๊ฐ€ `torch.nn.Linear`๋ฅผ ์ˆ˜๋™์œผ๋กœ ์ˆ˜์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. **โ‡จ ๋‹จ์ผ ๋…ธ๋“œ / ๋‹ค์ค‘ GPU** * ๋ชจ๋ธ์ด ๋‹จ์ผ GPU์— ๋งž๋Š” ๊ฒฝ์šฐ: 1. DDP - ๋ถ„์‚ฐ DP 2. ZeRO - ์ƒํ™ฉ๊ณผ ๊ตฌ์„ฑ์— ๋”ฐ๋ผ ๋น ๋ฅผ ์ˆ˜๋„ ์žˆ๊ณ  ๊ทธ๋ ‡์ง€ ์•Š์„ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. * ๋ชจ๋ธ์ด ๋‹จ์ผ GPU์— ๋งž์ง€ ์•Š๋Š” ๊ฒฝ์šฐ: 1. PP 2. ZeRO 3. TP NVLINK ๋˜๋Š” NVSwitch๋ฅผ ํ†ตํ•œ ๋งค์šฐ ๋น ๋ฅธ ์ธํŠธ๋ผ-๋…ธ๋“œ ์—ฐ๊ฒฐ์ด ์žˆ๋Š” ๊ฒฝ์šฐ ์ด ์„ธ ๊ฐ€์ง€ ๋ฐฉ๋ฒ•์€ ๊ฑฐ์˜ ๋™๋“ฑํ•  ๊ฒƒ์ด๋ฉฐ, ์ด๋Ÿฌํ•œ ์—ฐ๊ฒฐ์ด ์—†๋Š” ๊ฒฝ์šฐ PP๊ฐ€ TP๋‚˜ ZeRO๋ณด๋‹ค ๋น ๋ฅผ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋˜ํ•œ TP์˜ ์ฐจ์ˆ˜๋„ ์˜ํ–ฅ์„ ์ค„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํŠน์ • ์„ค์ •์—์„œ ์šฐ์Šน์ž๋ฅผ ์ฐพ๊ธฐ ์œ„ํ•ด ์‹คํ—˜ํ•˜๋Š” ๊ฒƒ์ด ๊ฐ€์žฅ ์ข‹์Šต๋‹ˆ๋‹ค. TP๋Š” ๊ฑฐ์˜ ํ•ญ์ƒ ๋‹จ์ผ ๋…ธ๋“œ ๋‚ด์—์„œ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ์ฆ‰, TP ํฌ๊ธฐ <= ๋…ธ๋“œ๋‹น GPU ์ˆ˜์ž…๋‹ˆ๋‹ค. * ๊ฐ€์žฅ ํฐ ๋ ˆ์ด์–ด๊ฐ€ ๋‹จ์ผ GPU์— ๋งž์ง€ ์•Š๋Š” ๊ฒฝ์šฐ: 1. ZeRO๋ฅผ ์‚ฌ์šฉํ•˜์ง€ ์•Š์„ ๊ฒฝ์šฐ - PP๋งŒ ์‚ฌ์šฉํ•  ์ˆ˜ ์—†์œผ๋ฏ€๋กœ TP๋ฅผ ์‚ฌ์šฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 2. ZeRO๋ฅผ ์‚ฌ์šฉํ•  ๊ฒฝ์šฐ, "๋‹จ์ผ GPU"์˜ ํ•ญ๋ชฉ๊ณผ ๋™์ผํ•œ ํ•ญ๋ชฉ ์ฐธ์กฐ **โ‡จ ๋‹ค์ค‘ ๋…ธ๋“œ / ๋‹ค์ค‘ GPU** * ๋น ๋ฅธ ๋…ธ๋“œ ๊ฐ„ ์—ฐ๊ฒฐ์ด ์žˆ๋Š” ๊ฒฝ์šฐ: 1. ZeRO - ๋ชจ๋ธ์— ๋Œ€ํ•œ ์ˆ˜์ •์ด ๊ฑฐ์˜ ํ•„์š”ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. 2. PP+TP+DP - ํ†ต์‹ ์ด ์ ์ง€๋งŒ ๋ชจ๋ธ์— ๋Œ€ํ•œ ๋Œ€๊ทœ๋ชจ ๋ณ€๊ฒฝ์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. 
* ๋Š๋ฆฐ ๋…ธ๋“œ ๊ฐ„ ์—ฐ๊ฒฐ ๋ฐ GPU ๋ฉ”๋ชจ๋ฆฌ ๋ถ€์กฑํ•œ ๊ฒฝ์šฐ: 1. DP+PP+TP+ZeRO-1
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Text generation strategies[[text-generation-strategies]] ํ…์ŠคํŠธ ์ƒ์„ฑ์€ ๊ฐœ๋ฐฉํ˜• ํ…์ŠคํŠธ ์ž‘์„ฑ, ์š”์•ฝ, ๋ฒˆ์—ญ ๋“ฑ ๋‹ค์–‘ํ•œ ์ž์—ฐ์–ด ์ฒ˜๋ฆฌ(NLP) ์ž‘์—…์— ํ•„์ˆ˜์ ์ž…๋‹ˆ๋‹ค. ์ด๋Š” ๋˜ํ•œ ์Œ์„ฑ-ํ…์ŠคํŠธ ๋ณ€ํ™˜, ์‹œ๊ฐ-ํ…์ŠคํŠธ ๋ณ€ํ™˜๊ณผ ๊ฐ™์ด ํ…์ŠคํŠธ๋ฅผ ์ถœ๋ ฅ์œผ๋กœ ํ•˜๋Š” ์—ฌ๋Ÿฌ ํ˜ผํ•ฉ ๋ชจ๋‹ฌ๋ฆฌํ‹ฐ ์‘์šฉ ํ”„๋กœ๊ทธ๋žจ์—์„œ๋„ ์ค‘์š”ํ•œ ์—ญํ• ์„ ํ•ฉ๋‹ˆ๋‹ค. ํ…์ŠคํŠธ ์ƒ์„ฑ์„ ๊ฐ€๋Šฅํ•˜๊ฒŒ ํ•˜๋Š” ๋ช‡๋ช‡ ๋ชจ๋ธ๋กœ๋Š” GPT2, XLNet, OpenAI GPT, CTRL, TransformerXL, XLM, Bart, T5, GIT, Whisper ๋“ฑ์ด ์žˆ์Šต๋‹ˆ๋‹ค. [`~generation.GenerationMixin.generate`] ๋ฉ”์„œ๋“œ๋ฅผ ํ™œ์šฉํ•˜์—ฌ ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋‹ค์–‘ํ•œ ์ž‘์—…๋“ค์— ๋Œ€ํ•ด ํ…์ŠคํŠธ ๊ฒฐ๊ณผ๋ฌผ์„ ์ƒ์„ฑํ•˜๋Š” ๋ช‡ ๊ฐ€์ง€ ์˜ˆ์‹œ๋ฅผ ์‚ดํŽด๋ณด์„ธ์š”: * [ํ…์ŠคํŠธ ์š”์•ฝ](./tasks/summarization#inference) * [์ด๋ฏธ์ง€ ์บก์…”๋‹](./model_doc/git#transformers.GitForCausalLM.forward.example) * [์˜ค๋””์˜ค ์ „์‚ฌ](./model_doc/whisper#transformers.WhisperForConditionalGeneration.forward.example) generate ๋ฉ”์†Œ๋“œ์— ์ž…๋ ฅ๋˜๋Š” ๊ฐ’๋“ค์€ ๋ชจ๋ธ์˜ ๋ฐ์ดํ„ฐ ํ˜•ํƒœ์— ๋”ฐ๋ผ ๋‹ฌ๋ผ์ง‘๋‹ˆ๋‹ค. ์ด ๊ฐ’๋“ค์€ AutoTokenizer๋‚˜ AutoProcessor์™€ ๊ฐ™์€ ๋ชจ๋ธ์˜ ์ „์ฒ˜๋ฆฌ ํด๋ž˜์Šค์— ์˜ํ•ด ๋ฐ˜ํ™˜๋ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์˜ ์ „์ฒ˜๋ฆฌ ์žฅ์น˜๊ฐ€ ํ•˜๋‚˜ ์ด์ƒ์˜ ์ž…๋ ฅ ์œ ํ˜•์„ ์ƒ์„ฑํ•˜๋Š” ๊ฒฝ์šฐ, ๋ชจ๋“  ์ž…๋ ฅ์„ generate()์— ์ „๋‹ฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๊ฐ ๋ชจ๋ธ์˜ ์ „์ฒ˜๋ฆฌ ์žฅ์น˜์— ๋Œ€ํ•ด์„œ๋Š” ํ•ด๋‹น ๋ชจ๋ธ์˜ ๋ฌธ์„œ์—์„œ ์ž์„ธํžˆ ์•Œ์•„๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ…์ŠคํŠธ๋ฅผ ์ƒ์„ฑํ•˜๊ธฐ ์œ„ํ•ด ์ถœ๋ ฅ ํ† ํฐ์„ ์„ ํƒํ•˜๋Š” ๊ณผ์ •์„ ๋””์ฝ”๋”ฉ์ด๋ผ๊ณ  ํ•˜๋ฉฐ, `generate()` ๋ฉ”์†Œ๋“œ๊ฐ€ ์‚ฌ์šฉํ•  ๋””์ฝ”๋”ฉ ์ „๋žต์„ ์‚ฌ์šฉ์ž๊ฐ€ ์ปค์Šคํ„ฐ๋งˆ์ด์ง•ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋””์ฝ”๋”ฉ ์ „๋žต์„ ์ˆ˜์ •ํ•˜๋Š” ๊ฒƒ์€ ํ›ˆ๋ จ ๊ฐ€๋Šฅํ•œ ๋งค๊ฐœ๋ณ€์ˆ˜์˜ ๊ฐ’๋“ค์„ ๋ณ€๊ฒฝํ•˜์ง€ ์•Š์ง€๋งŒ, ์ƒ์„ฑ๋œ ์ถœ๋ ฅ์˜ ํ’ˆ์งˆ์— ๋ˆˆ์— ๋„๋Š” ์˜ํ–ฅ์„ ์ค„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Š” ํ…์ŠคํŠธ์—์„œ ๋ฐ˜๋ณต์„ ์ค„์ด๊ณ , ๋” ์ผ๊ด€์„ฑ ์žˆ๊ฒŒ ๋งŒ๋“œ๋Š” ๋ฐ ๋„์›€์„ ์ค„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋‚ด์šฉ์„ ๋‹ค๋ฃน๋‹ˆ๋‹ค: * ๊ธฐ๋ณธ ์ƒ์„ฑ ์„ค์ • * ์ผ๋ฐ˜์ ์ธ ๋””์ฝ”๋”ฉ ์ „๋žต๊ณผ ์ฃผ์š” ํŒŒ๋ผ๋ฏธํ„ฐ * ๐Ÿค— Hub์—์„œ ๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ๊ณผ ํ•จ๊ป˜ ์‚ฌ์šฉ์ž ์ •์˜ ์ƒ์„ฑ ์„ค์ •์„ ์ €์žฅํ•˜๊ณ  ๊ณต์œ ํ•˜๋Š” ๋ฐฉ๋ฒ• ## ๊ธฐ๋ณธ ํ…์ŠคํŠธ ์ƒ์„ฑ ์„ค์ •[[default-text-generation-configuration]] ๋ชจ๋ธ์˜ ๋””์ฝ”๋”ฉ ์ „๋žต์€ ์ƒ์„ฑ ์„ค์ •์—์„œ ์ •์˜๋ฉ๋‹ˆ๋‹ค. ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ [`pipeline`] ๋‚ด์—์„œ ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ๋•Œ, ๋ชจ๋ธ์€ ๋‚ด๋ถ€์ ์œผ๋กœ ๊ธฐ๋ณธ ์ƒ์„ฑ ์„ค์ •์„ ์ ์šฉํ•˜๋Š” `PreTrainedModel.generate()` ๋ฉ”์†Œ๋“œ๋ฅผ ํ˜ธ์ถœํ•ฉ๋‹ˆ๋‹ค. ์‚ฌ์šฉ์ž๊ฐ€ ๋ชจ๋ธ๊ณผ ํ•จ๊ป˜ ์‚ฌ์šฉ์ž ์ •์˜ ์„ค์ •์„ ์ €์žฅํ•˜์ง€ ์•Š์•˜์„ ๊ฒฝ์šฐ์—๋„ ๊ธฐ๋ณธ ์„ค์ •์ด ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ ๋ช…์‹œ์ ์œผ๋กœ ๋กœ๋“œํ•  ๋•Œ, `model.generation_config`์„ ํ†ตํ•ด ์ œ๊ณต๋˜๋Š” ์ƒ์„ฑ ์„ค์ •์„ ๊ฒ€์‚ฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
```python >>> from transformers import AutoModelForCausalLM >>> model = AutoModelForCausalLM.from_pretrained("distilbert/distilgpt2") >>> model.generation_config GenerationConfig { "bos_token_id": 50256, "eos_token_id": 50256, } ``` `model.generation_config`๋ฅผ ์ถœ๋ ฅํ•˜๋ฉด ๊ธฐ๋ณธ ์„ค์ •๊ณผ ๋‹ค๋ฅธ ๊ฐ’๋“ค๋งŒ ํ‘œ์‹œ๋˜๊ณ , ๊ธฐ๋ณธ๊ฐ’๋“ค์€ ๋‚˜์—ด๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๊ธฐ๋ณธ ์ƒ์„ฑ ์„ค์ •์€ ์ž…๋ ฅ ํ”„๋กฌํ”„ํŠธ์™€ ์ถœ๋ ฅ์„ ํ•ฉ์นœ ์ตœ๋Œ€ ํฌ๊ธฐ๋ฅผ 20 ํ† ํฐ์œผ๋กœ ์ œํ•œํ•˜์—ฌ ๋ฆฌ์†Œ์Šค ๋ถ€์กฑ์„ ๋ฐฉ์ง€ํ•ฉ๋‹ˆ๋‹ค. ๊ธฐ๋ณธ ๋””์ฝ”๋”ฉ ์ „๋žต์€ ํƒ์š• ํƒ์ƒ‰(greedy search)์œผ๋กœ, ๋‹ค์Œ ํ† ํฐ์œผ๋กœ ๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์„ ๊ฐ€์ง„ ํ† ํฐ์„ ์„ ํƒํ•˜๋Š” ๊ฐ€์žฅ ๋‹จ์ˆœํ•œ ๋””์ฝ”๋”ฉ ์ „๋žต์ž…๋‹ˆ๋‹ค. ๋งŽ์€ ์ž‘์—…๊ณผ ์ž‘์€ ์ถœ๋ ฅ ํฌ๊ธฐ์— ๋Œ€ํ•ด์„œ๋Š” ์ด ๋ฐฉ๋ฒ•์ด ์ž˜ ์ž‘๋™ํ•˜์ง€๋งŒ, ๋” ๊ธด ์ถœ๋ ฅ์„ ์ƒ์„ฑํ•  ๋•Œ ์‚ฌ์šฉํ•˜๋ฉด ๋งค์šฐ ๋ฐ˜๋ณต์ ์ธ ๊ฒฐ๊ณผ๋ฅผ ์ƒ์„ฑํ•˜๊ฒŒ ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ## ํ…์ŠคํŠธ ์ƒ์„ฑ ์‚ฌ์šฉ์ž ์ •์˜[[customize-text-generation]] ํŒŒ๋ผ๋ฏธํ„ฐ์™€ ํ•ด๋‹น ๊ฐ’์„ [`generate`] ๋ฉ”์†Œ๋“œ์— ์ง์ ‘ ์ „๋‹ฌํ•˜์—ฌ `generation_config`์„ ์žฌ์ •์˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```python >>> my_model.generate(**inputs, num_beams=4, do_sample=True) # doctest: +SKIP ``` ๊ธฐ๋ณธ ๋””์ฝ”๋”ฉ ์ „๋žต์ด ๋Œ€๋ถ€๋ถ„์˜ ์ž‘์—…์— ์ž˜ ์ž‘๋™ํ•œ๋‹ค ํ•˜๋”๋ผ๋„, ์กฐ์ •ํ•  ์ˆ˜ ์žˆ๋Š” ๋ช‡ ๊ฐ€์ง€ ํŒŒ๋ผ๋ฏธํ„ฐ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ ์กฐ์ •๋˜๋Š” ํŒŒ๋ผ๋ฏธํ„ฐ์—๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๊ฒƒ๋“ค์ด ํฌํ•จ๋ฉ๋‹ˆ๋‹ค: - `max_new_tokens`: ์ƒ์„ฑํ•  ์ตœ๋Œ€ ํ† ํฐ ์ˆ˜์ž…๋‹ˆ๋‹ค. ์ฆ‰, ํ”„๋กฌํ”„ํŠธ์— ์žˆ๋Š” ํ† ํฐ์„ ์ œ์™ธํ•œ ์ถœ๋ ฅ ์‹œํ€€์Šค์˜ ํฌ๊ธฐ์ž…๋‹ˆ๋‹ค. ์ถœ๋ ฅ์˜ ๊ธธ์ด๋ฅผ ์ค‘๋‹จ ๊ธฐ์ค€์œผ๋กœ ์‚ฌ์šฉํ•˜๋Š” ๋Œ€์‹ , ์ „์ฒด ์ƒ์„ฑ๋ฌผ์ด ์ผ์ • ์‹œ๊ฐ„์„ ์ดˆ๊ณผํ•  ๋•Œ ์ƒ์„ฑ์„ ์ค‘๋‹จํ•˜๊ธฐ๋กœ ์„ ํƒํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๋” ์•Œ์•„๋ณด๋ ค๋ฉด [`StoppingCriteria`]๋ฅผ ํ™•์ธํ•˜์„ธ์š”. - `num_beams`: 1๋ณด๋‹ค ํฐ ์ˆ˜์˜ ๋น”์„ ์ง€์ •ํ•จ์œผ๋กœ์จ, ํƒ์š• ํƒ์ƒ‰(greedy search)์—์„œ ๋น” ํƒ์ƒ‰(beam search)์œผ๋กœ ์ „ํ™˜ํ•˜๊ฒŒ ๋ฉ๋‹ˆ๋‹ค. ์ด ์ „๋žต์€ ๊ฐ ์‹œ๊ฐ„ ๋‹จ๊ณ„์—์„œ ์—ฌ๋Ÿฌ ๊ฐ€์„ค์„ ํ‰๊ฐ€ํ•˜๊ณ  ๊ฒฐ๊ตญ ์ „์ฒด ์‹œํ€€์Šค์— ๋Œ€ํ•ด ๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์„ ๊ฐ€์ง„ ๊ฐ€์„ค์„ ์„ ํƒํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ์ดˆ๊ธฐ ํ† ํฐ์˜ ํ™•๋ฅ ์ด ๋‚ฎ์•„ ํƒ์š• ํƒ์ƒ‰์— ์˜ํ•ด ๋ฌด์‹œ๋˜์—ˆ์„ ๋†’์€ ํ™•๋ฅ ์˜ ์‹œํ€€์Šค๋ฅผ ์‹๋ณ„ํ•  ์ˆ˜ ์žˆ๋Š” ์žฅ์ ์„ ๊ฐ€์ง‘๋‹ˆ๋‹ค. - `do_sample`: ์ด ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ `True`๋กœ ์„ค์ •ํ•˜๋ฉด, ๋‹คํ•ญ ์ƒ˜ํ”Œ๋ง, ๋น” ํƒ์ƒ‰ ๋‹คํ•ญ ์ƒ˜ํ”Œ๋ง, Top-K ์ƒ˜ํ”Œ๋ง ๋ฐ Top-p ์ƒ˜ํ”Œ๋ง๊ณผ ๊ฐ™์€ ๋””์ฝ”๋”ฉ ์ „๋žต์„ ํ™œ์„ฑํ™”ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ์ „๋žต๋“ค์€ ์ „์ฒด ์–ดํœ˜์— ๋Œ€ํ•œ ํ™•๋ฅ  ๋ถ„ํฌ์—์„œ ๋‹ค์Œ ํ† ํฐ์„ ์„ ํƒํ•˜๋ฉฐ, ์ „๋žต๋ณ„๋กœ ํŠน์ • ์กฐ์ •์ด ์ ์šฉ๋ฉ๋‹ˆ๋‹ค. - `num_return_sequences`: ๊ฐ ์ž…๋ ฅ์— ๋Œ€ํ•ด ๋ฐ˜ํ™˜ํ•  ์‹œํ€€์Šค ํ›„๋ณด์˜ ์ˆ˜์ž…๋‹ˆ๋‹ค. ์ด ์˜ต์…˜์€ ๋น” ํƒ์ƒ‰(beam search)์˜ ๋ณ€ํ˜•๊ณผ ์ƒ˜ํ”Œ๋ง๊ณผ ๊ฐ™์ด ์—ฌ๋Ÿฌ ์‹œํ€€์Šค ํ›„๋ณด๋ฅผ ์ง€์›ํ•˜๋Š” ๋””์ฝ”๋”ฉ ์ „๋žต์—๋งŒ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํƒ์š• ํƒ์ƒ‰(greedy search)๊ณผ ๋Œ€์กฐ ํƒ์ƒ‰(contrastive search) ๊ฐ™์€ ๋””์ฝ”๋”ฉ ์ „๋žต์€ ๋‹จ์ผ ์ถœ๋ ฅ ์‹œํ€€์Šค๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ## ๋ชจ๋ธ์— ์‚ฌ์šฉ์ž ์ •์˜ ๋””์ฝ”๋”ฉ ์ „๋žต ์ €์žฅ[[save-a-custom-decoding-strategy-with-your-model]] ํŠน์ • ์ƒ์„ฑ ์„ค์ •์„ ๊ฐ€์ง„ ๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ์„ ๊ณต์œ ํ•˜๊ณ ์ž ํ•  ๋•Œ, ๋‹ค์Œ ๋‹จ๊ณ„๋ฅผ ๋”ฐ๋ฅผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: * [`GenerationConfig`] ํด๋ž˜์Šค ์ธ์Šคํ„ด์Šค๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. * ๋””์ฝ”๋”ฉ ์ „๋žต ํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค. * ์ƒ์„ฑ ์„ค์ •์„ [`GenerationConfig.save_pretrained`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ €์žฅํ•˜๋ฉฐ, `config_file_name` ์ธ์ž๋Š” ๋น„์›Œ๋‘ก๋‹ˆ๋‹ค. 
* ๋ชจ๋ธ์˜ ์ €์žฅ์†Œ์— ์„ค์ •์„ ์—…๋กœ๋“œํ•˜๊ธฐ ์œ„ํ•ด `push_to_hub`๋ฅผ `True`๋กœ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค. ```python >>> from transformers import AutoModelForCausalLM, GenerationConfig >>> model = AutoModelForCausalLM.from_pretrained("my_account/my_model") # doctest: +SKIP >>> generation_config = GenerationConfig( ... max_new_tokens=50, do_sample=True, top_k=50, eos_token_id=model.config.eos_token_id ... ) >>> generation_config.save_pretrained("my_account/my_model", push_to_hub=True) # doctest: +SKIP ``` ๋‹จ์ผ ๋””๋ ‰ํ† ๋ฆฌ์— ์—ฌ๋Ÿฌ ์ƒ์„ฑ ์„ค์ •์„ ์ €์žฅํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ, ์ด๋•Œ [`GenerationConfig.save_pretrained`]์˜ `config_file_name` ์ธ์ž๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ๋‚˜์ค‘์— [`GenerationConfig.from_pretrained`]๋กœ ์ด๋“ค์„ ์ธ์Šคํ„ด์Šคํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Š” ๋‹จ์ผ ๋ชจ๋ธ์— ๋Œ€ํ•ด ์—ฌ๋Ÿฌ ์ƒ์„ฑ ์„ค์ •์„ ์ €์žฅํ•˜๊ณ  ์‹ถ์„ ๋•Œ ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค(์˜ˆ: ์ƒ˜ํ”Œ๋ง์„ ์ด์šฉํ•œ ์ฐฝ์˜์  ํ…์ŠคํŠธ ์ƒ์„ฑ์„ ์œ„ํ•œ ํ•˜๋‚˜, ๋น” ํƒ์ƒ‰์„ ์ด์šฉํ•œ ์š”์•ฝ์„ ์œ„ํ•œ ๋‹ค๋ฅธ ํ•˜๋‚˜ ๋“ฑ). ๋ชจ๋ธ์— ์„ค์ • ํŒŒ์ผ์„ ์ถ”๊ฐ€ํ•˜๊ธฐ ์œ„ํ•ด ์ ์ ˆํ•œ Hub ๊ถŒํ•œ์„ ๊ฐ€์ง€๊ณ  ์žˆ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```python >>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, GenerationConfig >>> tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small") >>> model = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-small") >>> translation_generation_config = GenerationConfig( ... num_beams=4, ... early_stopping=True, ... decoder_start_token_id=0, ... eos_token_id=model.config.eos_token_id, ... pad_token=model.config.pad_token_id, ... ) >>> # ํŒ: Hub์— pushํ•˜๋ ค๋ฉด `push_to_hub=True`๋ฅผ ์ถ”๊ฐ€ >>> translation_generation_config.save_pretrained("/tmp", "translation_generation_config.json") >>> # ๋ช…๋ช…๋œ ์ƒ์„ฑ ์„ค์ • ํŒŒ์ผ์„ ์‚ฌ์šฉํ•˜์—ฌ ์ƒ์„ฑ์„ ๋งค๊ฐœ๋ณ€์ˆ˜ํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. >>> generation_config = GenerationConfig.from_pretrained("/tmp", "translation_generation_config.json") >>> inputs = tokenizer("translate English to French: Configuration files are easy to use!", return_tensors="pt") >>> outputs = model.generate(**inputs, generation_config=generation_config) >>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)) ['Les fichiers de configuration sont faciles ร  utiliser!'] ``` ## ์ŠคํŠธ๋ฆฌ๋ฐ[[streaming]] `generate()` ๋ฉ”์†Œ๋“œ๋Š” `streamer` ์ž…๋ ฅ์„ ํ†ตํ•ด ์ŠคํŠธ๋ฆฌ๋ฐ์„ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. `streamer` ์ž…๋ ฅ์€ `put()`๊ณผ `end()` ๋ฉ”์†Œ๋“œ๋ฅผ ๊ฐ€์ง„ ํด๋ž˜์Šค์˜ ์ธ์Šคํ„ด์Šค์™€ ํ˜ธํ™˜๋ฉ๋‹ˆ๋‹ค. ๋‚ด๋ถ€์ ์œผ๋กœ, `put()`์€ ์ƒˆ ํ† ํฐ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋˜๋ฉฐ, `end()`๋Š” ํ…์ŠคํŠธ ์ƒ์„ฑ์˜ ๋์„ ํ‘œ์‹œํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. <Tip warning={true}> ์ŠคํŠธ๋ฆฌ๋จธ ํด๋ž˜์Šค์˜ API๋Š” ์•„์ง ๊ฐœ๋ฐœ ์ค‘์ด๋ฉฐ, ํ–ฅํ›„ ๋ณ€๊ฒฝ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. </Tip> ์‹ค์ œ๋กœ ๋‹ค์–‘ํ•œ ๋ชฉ์ ์„ ์œ„ํ•ด ์ž์ฒด ์ŠคํŠธ๋ฆฌ๋ฐ ํด๋ž˜์Šค๋ฅผ ๋งŒ๋“ค ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ๋˜ํ•œ, ๊ธฐ๋ณธ์ ์ธ ์ŠคํŠธ๋ฆฌ๋ฐ ํด๋ž˜์Šค๋“ค๋„ ์ค€๋น„๋˜์–ด ์žˆ์–ด ๋ฐ”๋กœ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, [`TextStreamer`] ํด๋ž˜์Šค๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ `generate()`์˜ ์ถœ๋ ฅ์„ ํ™”๋ฉด์— ํ•œ ๋‹จ์–ด์”ฉ ์ŠคํŠธ๋ฆฌ๋ฐํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer >>> tok = AutoTokenizer.from_pretrained("openai-community/gpt2") >>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2") >>> inputs = tok(["An increasing sequence: one,"], return_tensors="pt") >>> streamer = TextStreamer(tok) >>> # ์ŠคํŠธ๋ฆฌ๋จธ๋Š” ํ‰์†Œ์™€ ๊ฐ™์€ ์ถœ๋ ฅ๊ฐ’์„ ๋ฐ˜ํ™˜ํ•  ๋ฟ๋งŒ ์•„๋‹ˆ๋ผ ์ƒ์„ฑ๋œ ํ…์ŠคํŠธ๋„ ํ‘œ์ค€ ์ถœ๋ ฅ(stdout)์œผ๋กœ ์ถœ๋ ฅํ•ฉ๋‹ˆ๋‹ค. 
>>> _ = model.generate(**inputs, streamer=streamer, max_new_tokens=20) An increasing sequence: one, two, three, four, five, six, seven, eight, nine, ten, eleven, ``` ## ๋””์ฝ”๋”ฉ ์ „๋žต[[decoding-strategies]] `generate()` ๋งค๊ฐœ๋ณ€์ˆ˜์™€ ๊ถ๊ทน์ ์œผ๋กœ `generation_config`์˜ ํŠน์ • ์กฐํ•ฉ์„ ์‚ฌ์šฉํ•˜์—ฌ ํŠน์ • ๋””์ฝ”๋”ฉ ์ „๋žต์„ ํ™œ์„ฑํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๊ฐœ๋…์ด ์ฒ˜์Œ์ด๋ผ๋ฉด, ํ”ํžˆ ์‚ฌ์šฉ๋˜๋Š” ๋””์ฝ”๋”ฉ ์ „๋žต์ด ์–ด๋–ป๊ฒŒ ์ž‘๋™ํ•˜๋Š”์ง€ ์„ค๋ช…ํ•˜๋Š” [์ด ๋ธ”๋กœ๊ทธ ํฌ์ŠคํŠธ](https://huggingface.co/blog/how-to-generate)๋ฅผ ์ฝ์–ด๋ณด๋Š” ๊ฒƒ์„ ์ถ”์ฒœํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ๋Š” ๋””์ฝ”๋”ฉ ์ „๋žต์„ ์ œ์–ดํ•˜๋Š” ๋ช‡ ๊ฐ€์ง€ ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ๋ณด์—ฌ์ฃผ๊ณ , ์ด๋ฅผ ์–ด๋–ป๊ฒŒ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š”์ง€ ์„ค๋ช…ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ### ํƒ์š• ํƒ์ƒ‰(Greedy Search)[[greedy-search]] [`generate`]๋Š” ๊ธฐ๋ณธ์ ์œผ๋กœ ํƒ์š• ํƒ์ƒ‰ ๋””์ฝ”๋”ฉ์„ ์‚ฌ์šฉํ•˜๋ฏ€๋กœ ์ด๋ฅผ ํ™œ์„ฑํ™”ํ•˜๊ธฐ ์œ„ํ•ด ๋ณ„๋„์˜ ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์ง€์ •ํ•  ํ•„์š”๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค. ์ด๋Š” `num_beams`๊ฐ€ 1๋กœ ์„ค์ •๋˜๊ณ  `do_sample=False`๋กœ ๋˜์–ด ์žˆ๋‹ค๋Š” ์˜๋ฏธ์ž…๋‹ˆ๋‹ค." ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> prompt = "I look forward to" >>> checkpoint = "distilbert/distilgpt2" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> inputs = tokenizer(prompt, return_tensors="pt") >>> model = AutoModelForCausalLM.from_pretrained(checkpoint) >>> outputs = model.generate(**inputs) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['I look forward to seeing you all again!\n\n\n\n\n\n\n\n\n\n\n'] ``` ### ๋Œ€์กฐ ํƒ์ƒ‰(Contrastive search)[[contrastive-search]] 2022๋…„ ๋…ผ๋ฌธ [A Contrastive Framework for Neural Text Generation](https://arxiv.org/abs/2202.06417)์—์„œ ์ œ์•ˆ๋œ ๋Œ€์กฐ ํƒ์ƒ‰ ๋””์ฝ”๋”ฉ ์ „๋žต์€ ๋ฐ˜๋ณต๋˜์ง€ ์•Š์œผ๋ฉด์„œ๋„ ์ผ๊ด€๋œ ๊ธด ์ถœ๋ ฅ์„ ์ƒ์„ฑํ•˜๋Š” ๋ฐ ์žˆ์–ด ์šฐ์ˆ˜ํ•œ ๊ฒฐ๊ณผ๋ฅผ ๋ณด์˜€์Šต๋‹ˆ๋‹ค. ๋Œ€์กฐ ํƒ์ƒ‰์ด ์ž‘๋™ํ•˜๋Š” ๋ฐฉ์‹์„ ์•Œ์•„๋ณด๋ ค๋ฉด [์ด ๋ธ”๋กœ๊ทธ ํฌ์ŠคํŠธ](https://huggingface.co/blog/introducing-csearch)๋ฅผ ํ™•์ธํ•˜์„ธ์š”. ๋Œ€์กฐ ํƒ์ƒ‰์˜ ๋™์ž‘์„ ๊ฐ€๋Šฅํ•˜๊ฒŒ ํ•˜๊ณ  ์ œ์–ดํ•˜๋Š” ๋‘ ๊ฐ€์ง€ ์ฃผ์š” ๋งค๊ฐœ๋ณ€์ˆ˜๋Š” `penalty_alpha`์™€ `top_k`์ž…๋‹ˆ๋‹ค: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM >>> checkpoint = "openai-community/gpt2-large" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> model = AutoModelForCausalLM.from_pretrained(checkpoint) >>> prompt = "Hugging Face Company is" >>> inputs = tokenizer(prompt, return_tensors="pt") >>> outputs = model.generate(**inputs, penalty_alpha=0.6, top_k=4, max_new_tokens=100) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['Hugging Face Company is a family owned and operated business. We pride ourselves on being the best in the business and our customer service is second to none.\n\nIf you have any questions about our products or services, feel free to contact us at any time. We look forward to hearing from you!'] ``` ### ๋‹คํ•ญ ์ƒ˜ํ”Œ๋ง(Multinomial sampling)[[multinomial-sampling]] ํƒ์š• ํƒ์ƒ‰(greedy search)์ด ํ•ญ์ƒ ๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์„ ๊ฐ€์ง„ ํ† ํฐ์„ ๋‹ค์Œ ํ† ํฐ์œผ๋กœ ์„ ํƒํ•˜๋Š” ๊ฒƒ๊ณผ ๋‹ฌ๋ฆฌ, ๋‹คํ•ญ ์ƒ˜ํ”Œ๋ง(multinomial sampling, ์กฐ์ƒ ์ƒ˜ํ”Œ๋ง(ancestral sampling)์ด๋ผ๊ณ ๋„ ํ•จ)์€ ๋ชจ๋ธ์ด ์ œ๊ณตํ•˜๋Š” ์ „์ฒด ์–ดํœ˜์— ๋Œ€ํ•œ ํ™•๋ฅ  ๋ถ„ํฌ๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ๋‹ค์Œ ํ† ํฐ์„ ๋ฌด์ž‘์œ„๋กœ ์„ ํƒํ•ฉ๋‹ˆ๋‹ค. 0์ด ์•„๋‹Œ ํ™•๋ฅ ์„ ๊ฐ€์ง„ ๋ชจ๋“  ํ† ํฐ์€ ์„ ํƒ๋  ๊ธฐํšŒ๊ฐ€ ์žˆ์œผ๋ฏ€๋กœ, ๋ฐ˜๋ณต์˜ ์œ„ํ—˜์„ ์ค„์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
๋‹คํ•ญ ์ƒ˜ํ”Œ๋ง์„ ํ™œ์„ฑํ™”ํ•˜๋ ค๋ฉด `do_sample=True` ๋ฐ `num_beams=1`์„ ์„ค์ •ํ•˜์„ธ์š”. ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed >>> set_seed(0) # ์žฌํ˜„์„ฑ์„ ์œ„ํ•ด >>> checkpoint = "openai-community/gpt2-large" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> model = AutoModelForCausalLM.from_pretrained(checkpoint) >>> prompt = "Today was an amazing day because" >>> inputs = tokenizer(prompt, return_tensors="pt") >>> outputs = model.generate(**inputs, do_sample=True, num_beams=1, max_new_tokens=100) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['Today was an amazing day because when you go to the World Cup and you don\'t, or when you don\'t get invited, that\'s a terrible feeling."'] ``` ### ๋น” ํƒ์ƒ‰(Beam-search) ๋””์ฝ”๋”ฉ[[beam-search-decoding]] ํƒ์š• ๊ฒ€์ƒ‰(greedy search)๊ณผ ๋‹ฌ๋ฆฌ, ๋น” ํƒ์ƒ‰(beam search) ๋””์ฝ”๋”ฉ์€ ๊ฐ ์‹œ๊ฐ„ ๋‹จ๊ณ„์—์„œ ์—ฌ๋Ÿฌ ๊ฐ€์„ค์„ ์œ ์ง€ํ•˜๊ณ  ๊ฒฐ๊ตญ ์ „์ฒด ์‹œํ€€์Šค์— ๋Œ€ํ•ด ๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์„ ๊ฐ€์ง„ ๊ฐ€์„ค์„ ์„ ํƒํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ๋‚ฎ์€ ํ™•๋ฅ ์˜ ์ดˆ๊ธฐ ํ† ํฐ์œผ๋กœ ์‹œ์ž‘ํ•˜๊ณ  ๊ทธ๋ฆฌ๋”” ๊ฒ€์ƒ‰์—์„œ ๋ฌด์‹œ๋˜์—ˆ์„ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์€ ์‹œํ€€์Šค๋ฅผ ์‹๋ณ„ํ•˜๋Š” ์ด์ ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๋””์ฝ”๋”ฉ ์ „๋žต์„ ํ™œ์„ฑํ™”ํ•˜๋ ค๋ฉด `num_beams` (์ถ”์ ํ•  ๊ฐ€์„ค ์ˆ˜๋ผ๊ณ ๋„ ํ•จ)๋ฅผ 1๋ณด๋‹ค ํฌ๊ฒŒ ์ง€์ •ํ•˜์„ธ์š”. ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> prompt = "It is astonishing how one can" >>> checkpoint = "openai-community/gpt2-medium" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> inputs = tokenizer(prompt, return_tensors="pt") >>> model = AutoModelForCausalLM.from_pretrained(checkpoint) >>> outputs = model.generate(**inputs, num_beams=5, max_new_tokens=50) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['It is astonishing how one can have such a profound impact on the lives of so many people in such a short period of time."\n\nHe added: "I am very proud of the work I have been able to do in the last few years.\n\n"I have'] ``` ### ๋น” ํƒ์ƒ‰ ๋‹คํ•ญ ์ƒ˜ํ”Œ๋ง(Beam-search multinomial sampling)[[beam-search-multinomial-sampling]] ์ด ๋””์ฝ”๋”ฉ ์ „๋žต์€ ์ด๋ฆ„์—์„œ ์•Œ ์ˆ˜ ์žˆ๋“ฏ์ด ๋น” ํƒ์ƒ‰๊ณผ ๋‹คํ•ญ ์ƒ˜ํ”Œ๋ง์„ ๊ฒฐํ•ฉํ•œ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด ๋””์ฝ”๋”ฉ ์ „๋žต์„ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” `num_beams`๋ฅผ 1๋ณด๋‹ค ํฐ ๊ฐ’์œผ๋กœ ์„ค์ •ํ•˜๊ณ , `do_sample=True`๋กœ ์„ค์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```python >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, set_seed >>> set_seed(0) # ์žฌํ˜„์„ฑ์„ ์œ„ํ•ด >>> prompt = "translate English to German: The house is wonderful." >>> checkpoint = "google-t5/t5-small" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> inputs = tokenizer(prompt, return_tensors="pt") >>> model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint) >>> outputs = model.generate(**inputs, num_beams=5, do_sample=True) >>> tokenizer.decode(outputs[0], skip_special_tokens=True) 'Das Haus ist wunderbar.' ``` ### ๋‹ค์–‘ํ•œ ๋น” ํƒ์ƒ‰ ๋””์ฝ”๋”ฉ(Diverse beam search decoding)[[diverse-beam-search-decoding]] ๋‹ค์–‘ํ•œ ๋น” ํƒ์ƒ‰(Decoding) ์ „๋žต์€ ์„ ํƒํ•  ์ˆ˜ ์žˆ๋Š” ๋” ๋‹ค์–‘ํ•œ ๋น” ์‹œํ€€์Šค ์ง‘ํ•ฉ์„ ์ƒ์„ฑํ•  ์ˆ˜ ์žˆ๊ฒŒ ํ•ด์ฃผ๋Š” ๋น” ํƒ์ƒ‰ ์ „๋žต์˜ ํ™•์žฅ์ž…๋‹ˆ๋‹ค. ์ด ๋ฐฉ๋ฒ•์€ ์–ด๋–ป๊ฒŒ ์ž‘๋™ํ•˜๋Š”์ง€ ์•Œ์•„๋ณด๋ ค๋ฉด, [๋‹ค์–‘ํ•œ ๋น” ํƒ์ƒ‰: ์‹ ๊ฒฝ ์‹œํ€€์Šค ๋ชจ๋ธ์—์„œ ๋‹ค์–‘ํ•œ ์†”๋ฃจ์…˜ ๋””์ฝ”๋”ฉํ•˜๊ธฐ](https://arxiv.org/pdf/1610.02424.pdf)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. 
์ด ์ ‘๊ทผ ๋ฐฉ์‹์€ ์„ธ ๊ฐ€์ง€ ์ฃผ์š” ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ๊ฐ€์ง€๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค: `num_beams`, `num_beam_groups`, ๊ทธ๋ฆฌ๊ณ  `diversity_penalty`. ๋‹ค์–‘์„ฑ ํŒจ๋„ํ‹ฐ๋Š” ๊ทธ๋ฃน ๊ฐ„์— ์ถœ๋ ฅ์ด ์„œ๋กœ ๋‹ค๋ฅด๊ฒŒ ํ•˜๊ธฐ ์œ„ํ•œ ๊ฒƒ์ด๋ฉฐ, ๊ฐ ๊ทธ๋ฃน ๋‚ด์—์„œ ๋น” ํƒ์ƒ‰์ด ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ```python >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM >>> checkpoint = "google/pegasus-xsum" >>> prompt = ( ... "The Permaculture Design Principles are a set of universal design principles " ... "that can be applied to any location, climate and culture, and they allow us to design " ... "the most efficient and sustainable human habitation and food production systems. " ... "Permaculture is a design system that encompasses a wide variety of disciplines, such " ... "as ecology, landscape design, environmental science and energy conservation, and the " ... "Permaculture design principles are drawn from these various disciplines. Each individual " ... "design principle itself embodies a complete conceptual framework based on sound " ... "scientific principles. When we bring all these separate principles together, we can " ... "create a design system that both looks at whole systems, the parts that these systems " ... "consist of, and how those parts interact with each other to create a complex, dynamic, " ... "living system. Each design principle serves as a tool that allows us to integrate all " ... "the separate parts of a design, referred to as elements, into a functional, synergistic, " ... "whole system, where the elements harmoniously interact and work together in the most " ... "efficient way possible." ... ) >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> inputs = tokenizer(prompt, return_tensors="pt") >>> model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint) >>> outputs = model.generate(**inputs, num_beams=5, num_beam_groups=5, max_new_tokens=30, diversity_penalty=1.0) >>> tokenizer.decode(outputs[0], skip_special_tokens=True) 'The Design Principles are a set of universal design principles that can be applied to any location, climate and culture, and they allow us to design the' ``` ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ๋‹ค์–‘ํ•œ ๋””์ฝ”๋”ฉ ์ „๋žต์„ ๊ฐ€๋Šฅํ•˜๊ฒŒ ํ•˜๋Š” ์ฃผ์š” ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค. [`generate`] ๋ฉ”์„œ๋“œ์— ๋Œ€ํ•œ ๊ณ ๊ธ‰ ๋งค๊ฐœ๋ณ€์ˆ˜๊ฐ€ ์กด์žฌํ•˜๋ฏ€๋กœ [`generate`] ๋ฉ”์„œ๋“œ์˜ ๋™์ž‘์„ ๋”์šฑ ์„ธ๋ถ€์ ์œผ๋กœ ์ œ์–ดํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ ๋งค๊ฐœ๋ณ€์ˆ˜์˜ ์ „์ฒด ๋ชฉ๋ก์€ [API ๋ฌธ์„œ](./main_classes/text_generation.md)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ### ์ถ”๋ก  ๋””์ฝ”๋”ฉ(Speculative Decoding)[[speculative-decoding]] ์ถ”๋ก  ๋””์ฝ”๋”ฉ(๋ณด์กฐ ๋””์ฝ”๋”ฉ(assisted decoding)์œผ๋กœ๋„ ์•Œ๋ ค์ง)์€ ๋™์ผํ•œ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ํ›จ์”ฌ ์ž‘์€ ๋ณด์กฐ ๋ชจ๋ธ์„ ํ™œ์šฉํ•˜์—ฌ ๋ช‡ ๊ฐ€์ง€ ํ›„๋ณด ํ† ํฐ์„ ์ƒ์„ฑํ•˜๋Š” ์ƒ์œ„ ๋ชจ๋ธ์˜ ๋””์ฝ”๋”ฉ ์ „๋žต์„ ์ˆ˜์ •ํ•œ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ฃผ ๋ชจ๋ธ์€ ๋‹จ์ผ ์ „๋ฐฉ ํ†ต๊ณผ๋กœ ํ›„๋ณด ํ† ํฐ์„ ๊ฒ€์ฆํ•จ์œผ๋กœ์จ ๋””์ฝ”๋”ฉ ๊ณผ์ •์„ ๊ฐ€์†ํ™”ํ•ฉ๋‹ˆ๋‹ค. `do_sample=True`์ผ ๊ฒฝ์šฐ, [์ถ”๋ก  ๋””์ฝ”๋”ฉ ๋…ผ๋ฌธ](https://arxiv.org/pdf/2211.17192.pdf)์— ์†Œ๊ฐœ๋œ ํ† ํฐ ๊ฒ€์ฆ๊ณผ ์žฌ์ƒ˜ํ”Œ๋ง ๋ฐฉ์‹์ด ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ํ˜„์žฌ, ํƒ์š• ๊ฒ€์ƒ‰(greedy search)๊ณผ ์ƒ˜ํ”Œ๋ง๋งŒ์ด ์ง€์›๋˜๋Š” ๋ณด์กฐ ๋””์ฝ”๋”ฉ(assisted decoding) ๊ธฐ๋Šฅ์„ ํ†ตํ•ด, ๋ณด์กฐ ๋””์ฝ”๋”ฉ์€ ๋ฐฐ์น˜ ์ž…๋ ฅ์„ ์ง€์›ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๋ณด์กฐ ๋””์ฝ”๋”ฉ์— ๋Œ€ํ•ด ๋” ์•Œ๊ณ  ์‹ถ๋‹ค๋ฉด, [์ด ๋ธ”๋กœ๊ทธ ํฌ์ŠคํŠธ](https://huggingface.co/blog/assisted-generation)๋ฅผ ํ™•์ธํ•ด ์ฃผ์„ธ์š”. 
๋ณด์กฐ ๋””์ฝ”๋”ฉ์„ ํ™œ์„ฑํ™”ํ•˜๋ ค๋ฉด ๋ชจ๋ธ๊ณผ ํ•จ๊ป˜ `assistant_model` ์ธ์ˆ˜๋ฅผ ์„ค์ •ํ•˜์„ธ์š”. ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> prompt = "Alice and Bob" >>> checkpoint = "EleutherAI/pythia-1.4b-deduped" >>> assistant_checkpoint = "EleutherAI/pythia-160m-deduped" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> inputs = tokenizer(prompt, return_tensors="pt") >>> model = AutoModelForCausalLM.from_pretrained(checkpoint) >>> assistant_model = AutoModelForCausalLM.from_pretrained(assistant_checkpoint) >>> outputs = model.generate(**inputs, assistant_model=assistant_model) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['Alice and Bob are sitting in a bar. Alice is drinking a beer and Bob is drinking a'] ``` ์ƒ˜ํ”Œ๋ง ๋ฐฉ๋ฒ•๊ณผ ํ•จ๊ป˜ ๋ณด์กฐ ๋””์ฝ”๋”ฉ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ ๋‹คํ•ญ ์ƒ˜ํ”Œ๋ง๊ณผ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ `temperature` ์ธ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฌด์ž‘์œ„์„ฑ์„ ์ œ์–ดํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ๋ณด์กฐ ๋””์ฝ”๋”ฉ์—์„œ๋Š” `temperature`๋ฅผ ๋‚ฎ์ถ”๋ฉด ๋Œ€๊ธฐ ์‹œ๊ฐ„์„ ๊ฐœ์„ ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed >>> set_seed(42) # ์žฌํ˜„์„ฑ์„ ์œ„ํ•ด >>> prompt = "Alice and Bob" >>> checkpoint = "EleutherAI/pythia-1.4b-deduped" >>> assistant_checkpoint = "EleutherAI/pythia-160m-deduped" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> inputs = tokenizer(prompt, return_tensors="pt") >>> model = AutoModelForCausalLM.from_pretrained(checkpoint) >>> assistant_model = AutoModelForCausalLM.from_pretrained(assistant_checkpoint) >>> outputs = model.generate(**inputs, assistant_model=assistant_model, do_sample=True, temperature=0.5) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['Alice and Bob are going to the same party. It is a small party, in a small'] ```
mavonic_private_repos/transformers/docs/source/ko/pipeline_tutorial.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์ถ”๋ก ์„ ์œ„ํ•œ Pipeline[[pipelines-for-inference]] [`pipeline`]์„ ์‚ฌ์šฉํ•˜๋ฉด ์–ธ์–ด, ์ปดํ“จํ„ฐ ๋น„์ „, ์˜ค๋””์˜ค ๋ฐ ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ ํƒœ์Šคํฌ์— ๋Œ€ํ•œ ์ถ”๋ก ์„ ์œ„ํ•ด [Hub](https://huggingface.co/models)์˜ ์–ด๋–ค ๋ชจ๋ธ์ด๋“  ์‰ฝ๊ฒŒ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํŠน์ • ๋ถ„์•ผ์— ๋Œ€ํ•œ ๊ฒฝํ—˜์ด ์—†๊ฑฐ๋‚˜, ๋ชจ๋ธ์„ ์ด๋ฃจ๋Š” ์ฝ”๋“œ๊ฐ€ ์ต์ˆ™ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ์—๋„ [`pipeline`]์„ ์‚ฌ์šฉํ•ด์„œ ์ถ”๋ก ํ•  ์ˆ˜ ์žˆ์–ด์š”! ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ๋Š” ๋‹ค์Œ์„ ๋ฐฐ์›Œ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. * ์ถ”๋ก ์„ ์œ„ํ•ด [`pipeline`]์„ ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ• * ํŠน์ • ํ† ํฌ๋‚˜์ด์ € ๋˜๋Š” ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ• * ์–ธ์–ด, ์ปดํ“จํ„ฐ ๋น„์ „, ์˜ค๋””์˜ค ๋ฐ ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ ํƒœ์Šคํฌ์—์„œ [`pipeline`]์„ ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ• <Tip> ์ง€์›ํ•˜๋Š” ๋ชจ๋“  ํƒœ์Šคํฌ์™€ ์“ธ ์ˆ˜ ์žˆ๋Š” ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ๋‹ด์€ ๋ชฉ๋ก์€ [`pipeline`] ์„ค๋ช…์„œ๋ฅผ ์ฐธ๊ณ ํ•ด์ฃผ์„ธ์š”. </Tip> ## Pipeline ์‚ฌ์šฉํ•˜๊ธฐ[[pipeline-usage]] ๊ฐ ํƒœ์Šคํฌ๋งˆ๋‹ค ๊ณ ์œ ์˜ [`pipeline`]์ด ์žˆ์ง€๋งŒ, ๊ฐœ๋ณ„ ํŒŒ์ดํ”„๋ผ์ธ์„ ๋‹ด๊ณ ์žˆ๋Š” ์ถ”์ƒํ™”๋œ [`pipeline`]๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ์ผ๋ฐ˜์ ์œผ๋กœ ๋” ๊ฐ„๋‹จํ•ฉ๋‹ˆ๋‹ค. [`pipeline`]์€ ํƒœ์Šคํฌ์— ์•Œ๋งž๊ฒŒ ์ถ”๋ก ์ด ๊ฐ€๋Šฅํ•œ ๊ธฐ๋ณธ ๋ชจ๋ธ๊ณผ ์ „์ฒ˜๋ฆฌ ํด๋ž˜์Šค๋ฅผ ์ž๋™์œผ๋กœ ๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค. 1. ๋จผ์ € [`pipeline`]์„ ์ƒ์„ฑํ•˜๊ณ  ํƒœ์Šคํฌ๋ฅผ ์ง€์ •ํ•˜์„ธ์š”. ```py >>> from transformers import pipeline >>> generator = pipeline(task="automatic-speech-recognition") ``` 2. ๊ทธ๋ฆฌ๊ณ  [`pipeline`]์— ์ž…๋ ฅ์„ ๋„ฃ์–ด์ฃผ์„ธ์š”. ```py >>> generator("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac") {'text': 'I HAVE A DREAM BUT ONE DAY THIS NATION WILL RISE UP LIVE UP THE TRUE MEANING OF ITS TREES'} ``` ๊ธฐ๋Œ€ํ–ˆ๋˜ ๊ฒฐ๊ณผ๊ฐ€ ์•„๋‹Œ๊ฐ€์š”? Hub์—์„œ [๊ฐ€์žฅ ๋งŽ์ด ๋‹ค์šด๋กœ๋“œ๋œ ์ž๋™ ์Œ์„ฑ ์ธ์‹ ๋ชจ๋ธ](https://huggingface.co/models?pipeline_tag=automatic-speech-recognition&sort=downloads)๋กœ ๋” ๋‚˜์€ ๊ฒฐ๊ณผ๋ฅผ ์–ป์„ ์ˆ˜ ์žˆ๋Š”์ง€ ํ™•์ธํ•ด๋ณด์„ธ์š”. ๋‹ค์Œ์€ [openai/whisper-large](https://huggingface.co/openai/whisper-large)๋กœ ์‹œ๋„ํ•ด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ```py >>> generator = pipeline(model="openai/whisper-large") >>> generator("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac") {'text': ' I have a dream that one day this nation will rise up and live out the true meaning of its creed.'} ``` ํ›จ์”ฌ ๋” ๋‚˜์•„์กŒ๊ตฐ์š”! Hub์˜ ๋ชจ๋ธ๋“ค์€ ์—ฌ๋Ÿฌ ๋‹ค์–‘ํ•œ ์–ธ์–ด์™€ ์ „๋ฌธ๋ถ„์•ผ๋ฅผ ์•„์šฐ๋ฅด๊ธฐ ๋•Œ๋ฌธ์— ๊ผญ ์ž์‹ ์˜ ์–ธ์–ด๋‚˜ ๋ถ„์•ผ์— ํŠนํ™”๋œ ๋ชจ๋ธ์„ ์ฐพ์•„๋ณด์‹œ๊ธฐ ๋ฐ”๋ž๋‹ˆ๋‹ค. ๋ธŒ๋ผ์šฐ์ €๋ฅผ ๋ฒ—์–ด๋‚  ํ•„์š”์—†์ด Hub์—์„œ ์ง์ ‘ ๋ชจ๋ธ์˜ ์ถœ๋ ฅ์„ ํ™•์ธํ•˜๊ณ  ๋‹ค๋ฅธ ๋ชจ๋ธ๊ณผ ๋น„๊ตํ•ด์„œ ์ž์‹ ์˜ ์ƒํ™ฉ์— ๋” ์ ํ•ฉํ•œ์ง€, ์• ๋งคํ•œ ์ž…๋ ฅ์„ ๋” ์ž˜ ์ฒ˜๋ฆฌํ•˜๋Š”์ง€๋„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
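ํ—ˆ๋ธŒ ์ด๋ฆ„ ๋Œ€์‹ , ์ด๋ฏธ ๋กœ๋“œํ•ด ๋‘” ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ € ๊ฐ์ฒด๋ฅผ [`pipeline`]์— ์ง์ ‘ ์ „๋‹ฌํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ํ…์ŠคํŠธ ๋ถ„๋ฅ˜ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์˜ˆ์‹œ๋กœ ๊ฐ€์ •ํ•œ ๊ฐ„๋‹จํ•œ ์Šค์ผ€์น˜์ž…๋‹ˆ๋‹ค.

```py
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

# ์˜ˆ์‹œ๋กœ ๊ฐ€์ •ํ•œ ํ…์ŠคํŠธ ๋ถ„๋ฅ˜ ์ฒดํฌํฌ์ธํŠธ
model_name = "distilbert-base-uncased-finetuned-sst-2-english"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# ์ง์ ‘ ๋กœ๋“œํ•œ ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ํŒŒ์ดํ”„๋ผ์ธ์— ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค
classifier = pipeline(task="sentiment-analysis", model=model, tokenizer=tokenizer)
classifier("This restaurant is awesome")
```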
๋งŒ์•ฝ ์ƒํ™ฉ์— ์•Œ๋งž๋Š” ๋ชจ๋ธ์„ ์—†๋‹ค๋ฉด ์–ธ์ œ๋‚˜ ์ง์ ‘ [ํ›ˆ๋ จ](training)์‹œํ‚ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ์ž…๋ ฅ์ด ์—ฌ๋Ÿฌ ๊ฐœ ์žˆ๋Š” ๊ฒฝ์šฐ, ๋ฆฌ์ŠคํŠธ ํ˜•ํƒœ๋กœ ์ „๋‹ฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py generator( [ "https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac", "https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/1.flac", ] ) ``` ์ „์ฒด ๋ฐ์ดํ„ฐ์„ธํŠธ์„ ์ˆœํšŒํ•˜๊ฑฐ๋‚˜ ์›น์„œ๋ฒ„์— ์˜ฌ๋ ค๋‘์–ด ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๊ณ  ์‹ถ๋‹ค๋ฉด, ๊ฐ ์ƒ์„ธ ํŽ˜์ด์ง€๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. [๋ฐ์ดํ„ฐ์„ธํŠธ์—์„œ Pipeline ์‚ฌ์šฉํ•˜๊ธฐ](#using-pipelines-on-a-dataset) [์›น์„œ๋ฒ„์—์„œ Pipeline ์‚ฌ์šฉํ•˜๊ธฐ](./pipeline_webserver) ## ๋งค๊ฐœ๋ณ€์ˆ˜[[parameters]] [`pipeline`]์€ ๋งŽ์€ ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. ํŠน์ • ํƒœ์Šคํฌ์šฉ์ธ ๊ฒƒ๋„ ์žˆ๊ณ , ๋ฒ”์šฉ์ธ ๊ฒƒ๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ ์›ํ•˜๋Š” ์œ„์น˜์— ์–ด๋””๋“  ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ๋„ฃ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py generator(model="openai/whisper-large", my_parameter=1) out = generate(...) # This will use `my_parameter=1`. out = generate(..., my_parameter=2) # This will override and use `my_parameter=2`. out = generate(...) # This will go back to using `my_parameter=1`. ``` ์ค‘์š”ํ•œ 3๊ฐ€์ง€ ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ### ๊ธฐ๊ธฐ(device)[[device]] `device=n`์ฒ˜๋Ÿผ ๊ธฐ๊ธฐ๋ฅผ ์ง€์ •ํ•˜๋ฉด ํŒŒ์ดํ”„๋ผ์ธ์ด ์ž๋™์œผ๋กœ ํ•ด๋‹น ๊ธฐ๊ธฐ์— ๋ชจ๋ธ์„ ๋ฐฐ์น˜ํ•ฉ๋‹ˆ๋‹ค. ํŒŒ์ดํ† ์น˜์—์„œ๋‚˜ ํ…์„œํ”Œ๋กœ์šฐ์—์„œ๋„ ๋ชจ๋‘ ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค. ```py generator(model="openai/whisper-large", device=0) ``` ๋ชจ๋ธ์ด GPU ํ•˜๋‚˜์— ๋Œ์•„๊ฐ€๊ธฐ ๋ฒ„๊ฒ๋‹ค๋ฉด, `device_map="auto"`๋ฅผ ์ง€์ •ํ•ด์„œ ๐Ÿค— [Accelerate](https://huggingface.co/docs/accelerate)๊ฐ€ ๋ชจ๋ธ ๊ฐ€์ค‘์น˜๋ฅผ ์–ด๋–ป๊ฒŒ ๋กœ๋“œํ•˜๊ณ  ์ €์žฅํ• ์ง€ ์ž๋™์œผ๋กœ ๊ฒฐ์ •ํ•˜๋„๋ก ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py #!pip install accelerate generator(model="openai/whisper-large", device_map="auto") ``` ### ๋ฐฐ์น˜ ์‚ฌ์ด์ฆˆ[[batch-size]] ๊ธฐ๋ณธ์ ์œผ๋กœ ํŒŒ์ดํ”„๋ผ์ธ์€ [์—ฌ๊ธฐ](https://huggingface.co/docs/transformers/main_classes/pipelines#pipeline-batching)์— ๋‚˜์˜จ ์ด์œ ๋กœ ์ถ”๋ก ์„ ์ผ๊ด„ ์ฒ˜๋ฆฌํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๊ฐ„๋‹จํžˆ ์„ค๋ช…ํ•˜์ž๋ฉด ์ผ๊ด„ ์ฒ˜๋ฆฌ๊ฐ€ ๋ฐ˜๋“œ์‹œ ๋” ๋น ๋ฅด์ง€ ์•Š๊ณ  ์˜คํžˆ๋ ค ๋” ๋Š๋ ค์งˆ ์ˆ˜๋„ ์žˆ๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์ž์‹ ์˜ ์ƒํ™ฉ์— ์ ํ•ฉํ•˜๋‹ค๋ฉด, ์ด๋ ‡๊ฒŒ ์‚ฌ์šฉํ•˜์„ธ์š”. ```py generator(model="openai/whisper-large", device=0, batch_size=2) audio_filenames = [f"audio_{i}.flac" for i in range(10)] texts = generator(audio_filenames) ``` ํŒŒ์ดํ”„๋ผ์ธ ์œ„ ์ œ๊ณต๋œ 10๊ฐœ์˜ ์˜ค๋””์˜ค ํŒŒ์ผ์„ ์ถ”๊ฐ€๋กœ ์ฒ˜๋ฆฌํ•˜๋Š” ์ฝ”๋“œ ์—†์ด (์ผ๊ด„ ์ฒ˜๋ฆฌ์— ๋ณด๋‹ค ํšจ๊ณผ์ ์ธ GPU ์œ„) ๋ชจ๋ธ์— 2๊ฐœ์”ฉ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. ์ถœ๋ ฅ์€ ์ผ๊ด„ ์ฒ˜๋ฆฌํ•˜์ง€ ์•Š์•˜์„ ๋•Œ์™€ ๋˜‘๊ฐ™์•„์•ผ ํ•ฉ๋‹ˆ๋‹ค. ํŒŒ์ดํ”„๋ผ์ธ์—์„œ ์†๋„๋ฅผ ๋” ๋‚ผ ์ˆ˜๋„ ์žˆ๋Š” ๋ฐฉ๋ฒ• ์ค‘ ํ•˜๋‚˜์ผ ๋ฟ์ž…๋‹ˆ๋‹ค. ํŒŒ์ดํ”„๋ผ์ธ์€ ์ผ๊ด„ ์ฒ˜๋ฆฌ์˜ ๋ณต์žกํ•œ ๋ถ€๋ถ„์„ ์ค„์—ฌ์ฃผ๊ธฐ๋„ ํ•ฉ๋‹ˆ๋‹ค. (์˜ˆ๋ฅผ ๋“ค์–ด ๊ธด ์˜ค๋””์˜ค ํŒŒ์ผ์ฒ˜๋Ÿผ) ์—ฌ๋Ÿฌ ๋ถ€๋ถ„์œผ๋กœ ๋‚˜๋ˆ ์•ผ ๋ชจ๋ธ์ด ์ฒ˜๋ฆฌํ•  ์ˆ˜ ์žˆ๋Š” ๊ฒƒ์„ [*chunk batching*](./main_classes/pipelines#pipeline-chunk-batching)์ด๋ผ๊ณ  ํ•˜๋Š”๋ฐ, ํŒŒ์ดํ”„๋ผ์ธ์„ ์‚ฌ์šฉํ•˜๋ฉด ์ž๋™์œผ๋กœ ๋‚˜๋ˆ ์ค๋‹ˆ๋‹ค. ### ํŠน์ • ํƒœ์Šคํฌ์šฉ ๋งค๊ฐœ๋ณ€์ˆ˜[[task-specific-parameters]] ๊ฐ ํƒœ์Šคํฌ๋งˆ๋‹ค ๊ตฌํ˜„ํ•  ๋•Œ ์œ ์—ฐ์„ฑ๊ณผ ์˜ต์…˜์„ ์ œ๊ณตํ•˜๊ธฐ ์œ„ํ•ด ํƒœ์Šคํฌ์šฉ ๋งค๊ฐœ๋ณ€์ˆ˜๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด [`transformers.AutomaticSpeechRecognitionPipeline.__call__`] ๋ฉ”์„œ๋“œ์—๋Š” ๋™์˜์ƒ์˜ ์ž๋ง‰์„ ๋„ฃ์„ ๋•Œ ์œ ์šฉํ•  ๊ฒƒ ๊ฐ™์€ `return_timestamps` ๋งค๊ฐœ๋ณ€์ˆ˜๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. 
```py >>> # Not using whisper, as it cannot provide timestamps. >>> generator = pipeline(model="facebook/wav2vec2-large-960h-lv60-self", return_timestamps="word") >>> generator("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac") {'text': 'I HAVE A DREAM BUT ONE DAY THIS NATION WILL RISE UP AND LIVE OUT THE TRUE MEANING OF ITS CREED', 'chunks': [{'text': 'I', 'timestamp': (1.22, 1.24)}, {'text': 'HAVE', 'timestamp': (1.42, 1.58)}, {'text': 'A', 'timestamp': (1.66, 1.68)}, {'text': 'DREAM', 'timestamp': (1.76, 2.14)}, {'text': 'BUT', 'timestamp': (3.68, 3.8)}, {'text': 'ONE', 'timestamp': (3.94, 4.06)}, {'text': 'DAY', 'timestamp': (4.16, 4.3)}, {'text': 'THIS', 'timestamp': (6.36, 6.54)}, {'text': 'NATION', 'timestamp': (6.68, 7.1)}, {'text': 'WILL', 'timestamp': (7.32, 7.56)}, {'text': 'RISE', 'timestamp': (7.8, 8.26)}, {'text': 'UP', 'timestamp': (8.38, 8.48)}, {'text': 'AND', 'timestamp': (10.08, 10.18)}, {'text': 'LIVE', 'timestamp': (10.26, 10.48)}, {'text': 'OUT', 'timestamp': (10.58, 10.7)}, {'text': 'THE', 'timestamp': (10.82, 10.9)}, {'text': 'TRUE', 'timestamp': (10.98, 11.18)}, {'text': 'MEANING', 'timestamp': (11.26, 11.58)}, {'text': 'OF', 'timestamp': (11.66, 11.7)}, {'text': 'ITS', 'timestamp': (11.76, 11.88)}, {'text': 'CREED', 'timestamp': (12.0, 12.38)}]} ``` ๋ณด์‹œ๋‹ค์‹œํ”ผ ๋ชจ๋ธ์ด ํ…์ŠคํŠธ๋ฅผ ์ถ”๋ก ํ•  ๋ฟ๋งŒ ์•„๋‹ˆ๋ผ ๊ฐ ๋‹จ์–ด๋ฅผ ๋งํ•œ ์‹œ์ ๊นŒ์ง€๋„ ์ถœ๋ ฅํ–ˆ์Šต๋‹ˆ๋‹ค. ํƒœ์Šคํฌ๋งˆ๋‹ค ๋‹ค์–‘ํ•œ ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ๊ฐ€์ง€๊ณ  ์žˆ๋Š”๋ฐ์š”. ์›ํ•˜๋Š” ํƒœ์Šคํฌ์˜ API๋ฅผ ์ฐธ์กฐํ•ด์„œ ๋ฐ”๊ฟ”๋ณผ ์ˆ˜ ์žˆ๋Š” ์—ฌ๋Ÿฌ ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์‚ดํŽด๋ณด์„ธ์š”! ์ง€๊ธˆ๊นŒ์ง€ ๋‹ค๋ค„๋ณธ [`~transformers.AutomaticSpeechRecognitionPipeline`]์—๋Š” `chunk_length_s` ๋งค๊ฐœ๋ณ€์ˆ˜๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ํ™”๋‚˜ 1์‹œ๊ฐ„ ๋ถ„๋Ÿ‰์˜ ๋™์˜์ƒ์˜ ์ž๋ง‰ ์ž‘์—…์„ ํ•  ๋•Œ์ฒ˜๋Ÿผ, ์ผ๋ฐ˜์ ์œผ๋กœ ๋ชจ๋ธ์ด ์ž์ฒด์ ์œผ๋กœ ์ฒ˜๋ฆฌํ•  ์ˆ˜ ์—†๋Š” ๋งค์šฐ ๊ธด ์˜ค๋””์˜ค ํŒŒ์ผ์„ ์ฒ˜๋ฆฌํ•  ๋•Œ ์œ ์šฉํ•˜์ฃ . ๋„์›€์ด ๋  ๋งŒํ•œ ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์ฐพ์ง€ ๋ชปํ–ˆ๋‹ค๋ฉด ์–ธ์ œ๋“ ์ง€ [์š”์ฒญ](https://github.com/huggingface/transformers/issues/new?assignees=&labels=feature&template=feature-request.yml)ํ•ด์ฃผ์„ธ์š”! ## ๋ฐ์ดํ„ฐ์„ธํŠธ์—์„œ Pipeline ์‚ฌ์šฉํ•˜๊ธฐ[[using-pipelines-on-a-dataset]] ํŒŒ์ดํ”„๋ผ์ธ์€ ๋Œ€๊ทœ๋ชจ ๋ฐ์ดํ„ฐ์„ธํŠธ์—์„œ๋„ ์ถ”๋ก  ์ž‘์—…์„ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋•Œ ์ดํ„ฐ๋ ˆ์ดํ„ฐ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฑธ ์ถ”์ฒœ๋“œ๋ฆฝ๋‹ˆ๋‹ค. ```py def data(): for i in range(1000): yield f"My example {i}" pipe = pipe(model="openai-community/gpt2", device=0) generated_characters = 0 for out in pipe(data()): generated_characters += len(out["generated_text"]) ``` ์ดํ„ฐ๋ ˆ์ดํ„ฐ `data()`๋Š” ๊ฐ ๊ฒฐ๊ณผ๋ฅผ ํ˜ธ์ถœ๋งˆ๋‹ค ์ƒ์„ฑํ•˜๊ณ , ํŒŒ์ดํ”„๋ผ์ธ์€ ์ž…๋ ฅ์ด ์ˆœํšŒํ•  ์ˆ˜ ์žˆ๋Š” ์ž๋ฃŒ๊ตฌ์กฐ์ž„์„ ์ž๋™์œผ๋กœ ์ธ์‹ํ•˜์—ฌ GPU์—์„œ ๊ธฐ์กด ๋ฐ์ดํ„ฐ๊ฐ€ ์ฒ˜๋ฆฌ๋˜๋Š” ๋™์•ˆ ์ƒˆ๋กœ์šด ๋ฐ์ดํ„ฐ๋ฅผ ๊ฐ€์ ธ์˜ค๊ธฐ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค.(์ด๋•Œ ๋‚ด๋ถ€์ ์œผ๋กœ [DataLoader](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader)๋ฅผ ์‚ฌ์šฉํ•ด์š”.) ์ด ๊ณผ์ •์€ ์ „์ฒด ๋ฐ์ดํ„ฐ์„ธํŠธ๋ฅผ ๋ฉ”๋ชจ๋ฆฌ์— ์ ์žฌํ•˜์ง€ ์•Š๊ณ ๋„ GPU์— ์ตœ๋Œ€ํ•œ ๋น ๋ฅด๊ฒŒ ์ƒˆ๋กœ์šด ์ž‘์—…์„ ๊ณต๊ธ‰ํ•  ์ˆ˜ ์žˆ๊ธฐ ๋•Œ๋ฌธ์— ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ์ผ๊ด„ ์ฒ˜๋ฆฌ๊ฐ€ ๋” ๋น ๋ฅผ ์ˆ˜ ์žˆ๊ธฐ ๋•Œ๋ฌธ์—, `batch_size` ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์กฐ์ •ํ•ด๋ด๋„ ์ข‹์•„์š”. ๋ฐ์ดํ„ฐ์„ธํŠธ๋ฅผ ์ˆœํšŒํ•˜๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ ๐Ÿค— [Datasets](https://github.com/huggingface/datasets/)๋ฅผ ํ™œ์šฉํ•˜๋Š” ๊ฒƒ์ธ๋ฐ์š”. ```py # KeyDataset is a util that will just output the item we're interested in. 
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset
from datasets import load_dataset

pipe = pipeline(model="hf-internal-testing/tiny-random-wav2vec2", device=0)
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation[:10]")

for out in pipe(KeyDataset(dataset, "audio")):
    print(out)
```

## ์›น์„œ๋ฒ„์—์„œ Pipeline ์‚ฌ์šฉํ•˜๊ธฐ[[using-pipelines-for-a-webserver]]

<Tip>
์ถ”๋ก  ์—”์ง„์„ ๋งŒ๋“œ๋Š” ๊ณผ์ •์€ ๋”ฐ๋กœ ํŽ˜์ด์ง€๋ฅผ ์ž‘์„ฑํ• ๋งŒํ•œ ๋ณต์žกํ•œ ์ฃผ์ œ์ž…๋‹ˆ๋‹ค.
</Tip>

[Link](./pipeline_webserver)

## ๋น„์ „ Pipeline[[vision-pipeline]]

๋น„์ „ ํƒœ์Šคํฌ๋ฅผ ์œ„ํ•ด [`pipeline`]์„ ์‚ฌ์šฉํ•˜๋Š” ์ผ์€ ๊ฑฐ์˜ ๋™์ผํ•ฉ๋‹ˆ๋‹ค.

ํƒœ์Šคํฌ๋ฅผ ์ง€์ •ํ•˜๊ณ  ์ด๋ฏธ์ง€๋ฅผ ๋ถ„๋ฅ˜๊ธฐ์— ์ „๋‹ฌํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€๋Š” ์ธํ„ฐ๋„ท ๋งํฌ ๋˜๋Š” ๋กœ์ปฌ ๊ฒฝ๋กœ์˜ ํ˜•ํƒœ๋กœ ์ „๋‹ฌํ•ด์ฃผ์„ธ์š”. ์˜ˆ๋ฅผ ๋“ค์–ด ์•„๋ž˜์— ํ‘œ์‹œ๋œ ๊ณ ์–‘์ด๋Š” ์–ด๋–ค ์ข…์ธ๊ฐ€์š”?

![pipeline-cat-chonk](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg)

```py
>>> from transformers import pipeline

>>> vision_classifier = pipeline(model="google/vit-base-patch16-224")
>>> preds = vision_classifier(
...     images="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
... )
>>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds]
>>> preds
[{'score': 0.4335, 'label': 'lynx, catamount'}, {'score': 0.0348, 'label': 'cougar, puma, catamount, mountain lion, painter, panther, Felis concolor'}, {'score': 0.0324, 'label': 'snow leopard, ounce, Panthera uncia'}, {'score': 0.0239, 'label': 'Egyptian cat'}, {'score': 0.0229, 'label': 'tiger cat'}]
```

## ํ…์ŠคํŠธ Pipeline[[text-pipeline]]

NLP ํƒœ์Šคํฌ๋ฅผ ์œ„ํ•ด [`pipeline`]์„ ์‚ฌ์šฉํ•˜๋Š” ์ผ๋„ ๊ฑฐ์˜ ๋™์ผํ•ฉ๋‹ˆ๋‹ค.

```py
>>> from transformers import pipeline

>>> # This model is a `zero-shot-classification` model.
>>> # It will classify text, except you are free to choose any label you might imagine
>>> classifier = pipeline(model="facebook/bart-large-mnli")
>>> classifier(
...     "I have a problem with my iphone that needs to be resolved asap!!",
...     candidate_labels=["urgent", "not urgent", "phone", "tablet", "computer"],
... )
{'sequence': 'I have a problem with my iphone that needs to be resolved asap!!', 'labels': ['urgent', 'phone', 'computer', 'not urgent', 'tablet'], 'scores': [0.504, 0.479, 0.013, 0.003, 0.002]}
```

## ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ Pipeline[[multimodal-pipeline]]

[`pipeline`]์€ ์—ฌ๋Ÿฌ ๋ชจ๋‹ฌ๋ฆฌํ‹ฐ(์—ญ์ฃผ: ์˜ค๋””์˜ค, ๋น„๋””์˜ค, ํ…์ŠคํŠธ์™€ ๊ฐ™์€ ๋ฐ์ดํ„ฐ ํ˜•ํƒœ)๋ฅผ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ์‹œ๋กœ ์‹œ๊ฐ์  ์งˆ์˜์‘๋‹ต(VQA; Visual Question Answering) ํƒœ์Šคํฌ๋Š” ํ…์ŠคํŠธ์™€ ์ด๋ฏธ์ง€๋ฅผ ๋ชจ๋‘ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์–ด๋–ค ์ด๋ฏธ์ง€ ๋งํฌ๋‚˜ ๋ฌป๊ณ  ์‹ถ์€ ์งˆ๋ฌธ์ด๋“  ์ž์œ ๋กญ๊ฒŒ ์ „๋‹ฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€๋Š” URL ๋˜๋Š” ๋กœ์ปฌ ๊ฒฝ๋กœ์˜ ํ˜•ํƒœ๋กœ ์ „๋‹ฌํ•ด์ฃผ์„ธ์š”.

์˜ˆ๋ฅผ ๋“ค์–ด ์ด [๊ฑฐ๋ž˜๋ช…์„ธ์„œ ์‚ฌ์ง„](https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png)์—์„œ ๊ฑฐ๋ž˜๋ช…์„ธ์„œ ๋ฒˆํ˜ธ๋ฅผ ๋ฌป๊ณ  ์‹ถ๋‹ค๋ฉด,

```py
>>> from transformers import pipeline

>>> vqa = pipeline(model="impira/layoutlm-document-qa")
>>> vqa(
...     image="https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png",
...     question="What is the invoice number?",
... )
[{'score': 0.42514941096305847, 'answer': 'us-001', 'start': 16, 'end': 16}]
```
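์ฐธ๊ณ ๋กœ ์œ„ ๋ฌธ์„œ ์งˆ์˜์‘๋‹ต ์˜ˆ์ œ๋Š” ์ด๋ฏธ์ง€์—์„œ ๋‹จ์–ด ์œ„์น˜๋ฅผ ์ถ”์ถœํ•˜๊ธฐ ์œ„ํ•ด OCR ๋„๊ตฌ๊ฐ€ ์ถ”๊ฐ€๋กœ ํ•„์š”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” `pytesseract`๋ฅผ ์‚ฌ์šฉํ•œ๋‹ค๊ณ  ๊ฐ€์ •ํ•œ ์„ค์น˜ ์˜ˆ์‹œ์ด๋ฉฐ, ์‹œ์Šคํ…œ์— Tesseract ๋ฐ”์ด๋„ˆ๋ฆฌ๋„ ๋ณ„๋„๋กœ ์„ค์น˜๋˜์–ด ์žˆ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค.

```py
#!pip install pytesseract pillow
```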
mavonic_private_repos/transformers/docs/source/ko/add_new_pipeline.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์–ด๋–ป๊ฒŒ ์‚ฌ์šฉ์ž ์ •์˜ ํŒŒ์ดํ”„๋ผ์ธ์„ ์ƒ์„ฑํ•˜๋‚˜์š”? [[how-to-create-a-custom-pipeline]] ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ์‚ฌ์šฉ์ž ์ •์˜ ํŒŒ์ดํ”„๋ผ์ธ์„ ์–ด๋–ป๊ฒŒ ์ƒ์„ฑํ•˜๊ณ  [ํ—ˆ๋ธŒ](https://hf.co/models)์— ๊ณต์œ ํ•˜๊ฑฐ๋‚˜ ๐Ÿค— Transformers ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์— ์ถ”๊ฐ€ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ๋จผ์ € ํŒŒ์ดํ”„๋ผ์ธ์ด ์ˆ˜์šฉํ•  ์ˆ˜ ์žˆ๋Š” ์›์‹œ ์ž…๋ ฅ์„ ๊ฒฐ์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ฌธ์ž์—ด, ์›์‹œ ๋ฐ”์ดํŠธ, ๋”•์…”๋„ˆ๋ฆฌ ๋˜๋Š” ๊ฐ€์žฅ ์›ํ•˜๋Š” ์ž…๋ ฅ์ผ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์€ ๊ฒƒ์ด๋ฉด ๋ฌด์—‡์ด๋“  ๊ฐ€๋Šฅํ•ฉ๋‹ˆ๋‹ค. ์ด ์ž…๋ ฅ์„ ๊ฐ€๋Šฅํ•œ ํ•œ ์ˆœ์ˆ˜ํ•œ Python ํ˜•์‹์œผ๋กœ ์œ ์ง€ํ•ด์•ผ (JSON์„ ํ†ตํ•ด ๋‹ค๋ฅธ ์–ธ์–ด์™€๋„) ํ˜ธํ™˜์„ฑ์ด ์ข‹์•„์ง‘๋‹ˆ๋‹ค. ์ด๊ฒƒ์ด ์ „์ฒ˜๋ฆฌ(`preprocess`) ํŒŒ์ดํ”„๋ผ์ธ์˜ ์ž…๋ ฅ(`inputs`)์ด ๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๊ทธ๋Ÿฐ ๋‹ค์Œ `outputs`๋ฅผ ์ •์˜ํ•˜์„ธ์š”. `inputs`์™€ ๊ฐ™์€ ์ •์ฑ…์„ ๋”ฐ๋ฅด๊ณ , ๊ฐ„๋‹จํ• ์ˆ˜๋ก ์ข‹์Šต๋‹ˆ๋‹ค. ์ด๊ฒƒ์ด ํ›„์ฒ˜๋ฆฌ(`postprocess`) ๋ฉ”์†Œ๋“œ์˜ ์ถœ๋ ฅ์ด ๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋จผ์ € 4๊ฐœ์˜ ๋ฉ”์†Œ๋“œ(`preprocess`, `_forward`, `postprocess` ๋ฐ `_sanitize_parameters`)๋ฅผ ๊ตฌํ˜„ํ•˜๊ธฐ ์œ„ํ•ด ๊ธฐ๋ณธ ํด๋ž˜์Šค `Pipeline`์„ ์ƒ์†ํ•˜์—ฌ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค. ```python from transformers import Pipeline class MyPipeline(Pipeline): def _sanitize_parameters(self, **kwargs): preprocess_kwargs = {} if "maybe_arg" in kwargs: preprocess_kwargs["maybe_arg"] = kwargs["maybe_arg"] return preprocess_kwargs, {}, {} def preprocess(self, inputs, maybe_arg=2): model_input = Tensor(inputs["input_ids"]) return {"model_input": model_input} def _forward(self, model_inputs): # model_inputs == {"model_input": model_input} outputs = self.model(**model_inputs) # Maybe {"logits": Tensor(...)} return outputs def postprocess(self, model_outputs): best_class = model_outputs["logits"].softmax(-1) return best_class ``` ์ด ๋ถ„ํ•  ๊ตฌ์กฐ๋Š” CPU/GPU์— ๋Œ€ํ•œ ๋น„๊ต์  ์›ํ™œํ•œ ์ง€์›์„ ์ œ๊ณตํ•˜๋Š” ๋™์‹œ์—, ๋‹ค๋ฅธ ์Šค๋ ˆ๋“œ์—์„œ CPU์— ๋Œ€ํ•œ ์‚ฌ์ „/์‚ฌํ›„ ์ฒ˜๋ฆฌ๋ฅผ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ๊ฒŒ ์ง€์›ํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. `preprocess`๋Š” ์›๋ž˜ ์ •์˜๋œ ์ž…๋ ฅ์„ ๊ฐ€์ ธ์™€ ๋ชจ๋ธ์— ๊ณต๊ธ‰ํ•  ์ˆ˜ ์žˆ๋Š” ํ˜•์‹์œผ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ๋” ๋งŽ์€ ์ •๋ณด๋ฅผ ํฌํ•จํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ ์ผ๋ฐ˜์ ์œผ๋กœ `Dict` ํ˜•ํƒœ์ž…๋‹ˆ๋‹ค. `_forward`๋Š” ๊ตฌํ˜„ ์„ธ๋ถ€ ์‚ฌํ•ญ์ด๋ฉฐ ์ง์ ‘ ํ˜ธ์ถœํ•  ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค. `forward`๋Š” ์˜ˆ์ƒ ์žฅ์น˜์—์„œ ๋ชจ๋“  ๊ฒƒ์ด ์ž‘๋™ํ•˜๋Š”์ง€ ํ™•์ธํ•˜๊ธฐ ์œ„ํ•œ ์•ˆ์ „์žฅ์น˜๊ฐ€ ํฌํ•จ๋˜์–ด ์žˆ์–ด ์„ ํ˜ธ๋˜๋Š” ํ˜ธ์ถœ ๋ฉ”์†Œ๋“œ์ž…๋‹ˆ๋‹ค. ์‹ค์ œ ๋ชจ๋ธ๊ณผ ๊ด€๋ จ๋œ ๊ฒƒ์€ `_forward` ๋ฉ”์†Œ๋“œ์— ์†ํ•˜๋ฉฐ, ๋‚˜๋จธ์ง€๋Š” ์ „์ฒ˜๋ฆฌ/ํ›„์ฒ˜๋ฆฌ ๊ณผ์ •์— ์žˆ์Šต๋‹ˆ๋‹ค. `postprocess` ๋ฉ”์†Œ๋“œ๋Š” `_forward`์˜ ์ถœ๋ ฅ์„ ๊ฐ€์ ธ์™€ ์ด์ „์— ๊ฒฐ์ •ํ•œ ์ตœ์ข… ์ถœ๋ ฅ ํ˜•์‹์œผ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค. 
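์ฐธ๊ณ ๋กœ ์ด ๋ฉ”์†Œ๋“œ๋“ค์ด ์–ด๋–ค ์ˆœ์„œ๋กœ ์—ฐ๊ฒฐ๋˜๋Š”์ง€๋ฅผ ๊ฐœ๋…์ ์œผ๋กœ ๋‚˜ํƒ€๋‚ด๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค. ์‹ค์ œ [`Pipeline`] ๊ตฌํ˜„์€ ๋ฐฐ์น˜ ์ฒ˜๋ฆฌ์™€ ๊ธฐ๊ธฐ ๋ฐฐ์น˜ ๋“ฑ๋„ ํ•จ๊ป˜ ์ฒ˜๋ฆฌํ•˜๋ฏ€๋กœ ์ด๋ณด๋‹ค ๋ณต์žกํ•˜๋ฉฐ, ์•„๋ž˜๋Š” ํ˜ธ์ถœ ํ๋ฆ„๋งŒ ๋ณด์—ฌ์ฃผ๋Š” ๋‹จ์ˆœํ™”๋œ ์Šค์ผ€์น˜์ž…๋‹ˆ๋‹ค.

```python
# ๋‹จ์ˆœํ™”๋œ ์Šค์ผ€์น˜: ์‹ค์ œ Pipeline.__call__ ๊ตฌํ˜„์ด ์•„๋‹ˆ๋ผ, ๋ฉ”์†Œ๋“œ๊ฐ€ ์—ฐ๊ฒฐ๋˜๋Š” ์ˆœ์„œ๋งŒ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค
def run_single(pipe, inputs, **kwargs):
    # ํ˜ธ์ถœ ์‹œ ๋„˜์–ด์˜จ ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ๊ฐ ๋‹จ๊ณ„๋ณ„ kwargs๋กœ ๋‚˜๋ˆ•๋‹ˆ๋‹ค
    preprocess_kwargs, forward_kwargs, postprocess_kwargs = pipe._sanitize_parameters(**kwargs)
    model_inputs = pipe.preprocess(inputs, **preprocess_kwargs)    # ์›์‹œ ์ž…๋ ฅ -> ๋ชจ๋ธ ์ž…๋ ฅ
    model_outputs = pipe._forward(model_inputs, **forward_kwargs)  # ๋ชจ๋ธ ์ˆœ์ „ํŒŒ
    return pipe.postprocess(model_outputs, **postprocess_kwargs)   # ๋ชจ๋ธ ์ถœ๋ ฅ -> ์ตœ์ข… ์ถœ๋ ฅ
```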
`_sanitize_parameters`๋Š” ์ดˆ๊ธฐํ™” ์‹œ๊ฐ„์— `pipeline(...., maybe_arg=4)`์ด๋‚˜ ํ˜ธ์ถœ ์‹œ๊ฐ„์— `pipe = pipeline(...); output = pipe(...., maybe_arg=4)`๊ณผ ๊ฐ™์ด, ์‚ฌ์šฉ์ž๊ฐ€ ์›ํ•˜๋Š” ๊ฒฝ์šฐ ์–ธ์ œ๋“ ์ง€ ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์ „๋‹ฌํ•  ์ˆ˜ ์žˆ๋„๋ก ํ—ˆ์šฉํ•ฉ๋‹ˆ๋‹ค. `_sanitize_parameters`์˜ ๋ฐ˜ํ™˜ ๊ฐ’์€ `preprocess`, `_forward`, `postprocess`์— ์ง์ ‘ ์ „๋‹ฌ๋˜๋Š” 3๊ฐœ์˜ kwargs ๋”•์…”๋„ˆ๋ฆฌ์ž…๋‹ˆ๋‹ค. ํ˜ธ์ถœ์ž๊ฐ€ ์ถ”๊ฐ€ ๋งค๊ฐœ๋ณ€์ˆ˜๋กœ ํ˜ธ์ถœํ•˜์ง€ ์•Š์•˜๋‹ค๋ฉด ์•„๋ฌด๊ฒƒ๋„ ์ฑ„์šฐ์ง€ ๋งˆ์‹ญ์‹œ์˜ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ํ•ญ์ƒ ๋” "์ž์—ฐ์Šค๋Ÿฌ์šด" ํ•จ์ˆ˜ ์ •์˜์˜ ๊ธฐ๋ณธ ์ธ์ˆ˜๋ฅผ ์œ ์ง€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ถ„๋ฅ˜ ์ž‘์—…์—์„œ `top_k` ๋งค๊ฐœ๋ณ€์ˆ˜๊ฐ€ ๋Œ€ํ‘œ์ ์ธ ์˜ˆ์ž…๋‹ˆ๋‹ค. ```python >>> pipe = pipeline("my-new-task") >>> pipe("This is a test") [{"label": "1-star", "score": 0.8}, {"label": "2-star", "score": 0.1}, {"label": "3-star", "score": 0.05} {"label": "4-star", "score": 0.025}, {"label": "5-star", "score": 0.025}] >>> pipe("This is a test", top_k=2) [{"label": "1-star", "score": 0.8}, {"label": "2-star", "score": 0.1}] ``` ์ด๋ฅผ ๋‹ฌ์„ฑํ•˜๊ธฐ ์œ„ํ•ด ์šฐ๋ฆฌ๋Š” `postprocess` ๋ฉ”์†Œ๋“œ๋ฅผ ๊ธฐ๋ณธ ๋งค๊ฐœ๋ณ€์ˆ˜์ธ `5`๋กœ ์—…๋ฐ์ดํŠธํ•˜๊ณ  `_sanitize_parameters`๋ฅผ ์ˆ˜์ •ํ•˜์—ฌ ์ด ์ƒˆ ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ํ—ˆ์šฉํ•ฉ๋‹ˆ๋‹ค. ```python def postprocess(self, model_outputs, top_k=5): best_class = model_outputs["logits"].softmax(-1) # top_k๋ฅผ ์ฒ˜๋ฆฌํ•˜๋Š” ๋กœ์ง ์ถ”๊ฐ€ return best_class def _sanitize_parameters(self, **kwargs): preprocess_kwargs = {} if "maybe_arg" in kwargs: preprocess_kwargs["maybe_arg"] = kwargs["maybe_arg"] postprocess_kwargs = {} if "top_k" in kwargs: postprocess_kwargs["top_k"] = kwargs["top_k"] return preprocess_kwargs, {}, postprocess_kwargs ``` ์ž…/์ถœ๋ ฅ์„ ๊ฐ€๋Šฅํ•œ ํ•œ ๊ฐ„๋‹จํ•˜๊ณ  ์™„์ „ํžˆ JSON ์ง๋ ฌํ™” ๊ฐ€๋Šฅํ•œ ํ˜•์‹์œผ๋กœ ์œ ์ง€ํ•˜๋ ค๊ณ  ๋…ธ๋ ฅํ•˜์‹ญ์‹œ์˜ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ์‚ฌ์šฉ์ž๊ฐ€ ์ƒˆ๋กœ์šด ์ข…๋ฅ˜์˜ ๊ฐœ์ฒด๋ฅผ ์ดํ•ดํ•˜์ง€ ์•Š๊ณ ๋„ ํŒŒ์ดํ”„๋ผ์ธ์„ ์‰ฝ๊ฒŒ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ ์‚ฌ์šฉ ์šฉ์ด์„ฑ์„ ์œ„ํ•ด ์—ฌ๋Ÿฌ ๊ฐ€์ง€ ์œ ํ˜•์˜ ์ธ์ˆ˜(์˜ค๋””์˜ค ํŒŒ์ผ์€ ํŒŒ์ผ ์ด๋ฆ„, URL ๋˜๋Š” ์ˆœ์ˆ˜ํ•œ ๋ฐ”์ดํŠธ์ผ ์ˆ˜ ์žˆ์Œ)๋ฅผ ์ง€์›ํ•˜๋Š” ๊ฒƒ์ด ๋น„๊ต์  ์ผ๋ฐ˜์ ์ž…๋‹ˆ๋‹ค. ## ์ง€์›๋˜๋Š” ์ž‘์—… ๋ชฉ๋ก์— ์ถ”๊ฐ€ํ•˜๊ธฐ [[adding-it-to-the-list-of-supported-tasks]] `new-task`๋ฅผ ์ง€์›๋˜๋Š” ์ž‘์—… ๋ชฉ๋ก์— ๋“ฑ๋กํ•˜๋ ค๋ฉด `PIPELINE_REGISTRY`์— ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```python from transformers.pipelines import PIPELINE_REGISTRY PIPELINE_REGISTRY.register_pipeline( "new-task", pipeline_class=MyPipeline, pt_model=AutoModelForSequenceClassification, ) ``` ์›ํ•˜๋Š” ๊ฒฝ์šฐ ๊ธฐ๋ณธ ๋ชจ๋ธ์„ ์ง€์ •ํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ, ์ด ๊ฒฝ์šฐ ํŠน์ • ๊ฐœ์ •(๋ถ„๊ธฐ ์ด๋ฆ„ ๋˜๋Š” ์ปค๋ฐ‹ ํ•ด์‹œ์ผ ์ˆ˜ ์žˆ์Œ, ์—ฌ๊ธฐ์„œ๋Š” "abcdef")๊ณผ ํƒ€์ž…์„ ํ•จ๊ป˜ ๊ฐ€์ ธ์™€์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```python PIPELINE_REGISTRY.register_pipeline( "new-task", pipeline_class=MyPipeline, pt_model=AutoModelForSequenceClassification, default={"pt": ("user/awesome_model", "abcdef")}, type="text", # ํ˜„์žฌ ์ง€์› ์œ ํ˜•: text, audio, image, multimodal ) ``` ## Hub์— ํŒŒ์ดํ”„๋ผ์ธ ๊ณต์œ ํ•˜๊ธฐ [[share-your-pipeline-on-the-hub]] Hub์— ์‚ฌ์šฉ์ž ์ •์˜ ํŒŒ์ดํ”„๋ผ์ธ์„ ๊ณต์œ ํ•˜๋ ค๋ฉด `Pipeline` ํ•˜์œ„ ํด๋ž˜์Šค์˜ ์‚ฌ์šฉ์ž ์ •์˜ ์ฝ”๋“œ๋ฅผ Python ํŒŒ์ผ์— ์ €์žฅํ•˜๊ธฐ๋งŒ ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. 
์˜ˆ๋ฅผ ๋“ค์–ด, ๋‹ค์Œ๊ณผ ๊ฐ™์ด ๋ฌธ์žฅ ์Œ ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•œ ์‚ฌ์šฉ์ž ์ •์˜ ํŒŒ์ดํ”„๋ผ์ธ์„ ์‚ฌ์šฉํ•œ๋‹ค๊ณ  ๊ฐ€์ •ํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py import numpy as np from transformers import Pipeline def softmax(outputs): maxes = np.max(outputs, axis=-1, keepdims=True) shifted_exp = np.exp(outputs - maxes) return shifted_exp / shifted_exp.sum(axis=-1, keepdims=True) class PairClassificationPipeline(Pipeline): def _sanitize_parameters(self, **kwargs): preprocess_kwargs = {} if "second_text" in kwargs: preprocess_kwargs["second_text"] = kwargs["second_text"] return preprocess_kwargs, {}, {} def preprocess(self, text, second_text=None): return self.tokenizer(text, text_pair=second_text, return_tensors=self.framework) def _forward(self, model_inputs): return self.model(**model_inputs) def postprocess(self, model_outputs): logits = model_outputs.logits[0].numpy() probabilities = softmax(logits) best_class = np.argmax(probabilities) label = self.model.config.id2label[best_class] score = probabilities[best_class].item() logits = logits.tolist() return {"label": label, "score": score, "logits": logits} ``` ๊ตฌํ˜„์€ ํ”„๋ ˆ์ž„์›Œํฌ์— ๊ตฌ์• ๋ฐ›์ง€ ์•Š์œผ๋ฉฐ, PyTorch์™€ TensorFlow ๋ชจ๋ธ์— ๋Œ€ํ•ด ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฅผ `pair_classification.py`๋ผ๋Š” ํŒŒ์ผ์— ์ €์žฅํ•œ ๊ฒฝ์šฐ, ๋‹ค์Œ๊ณผ ๊ฐ™์ด ๊ฐ€์ ธ์˜ค๊ณ  ๋“ฑ๋กํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py from pair_classification import PairClassificationPipeline from transformers.pipelines import PIPELINE_REGISTRY from transformers import AutoModelForSequenceClassification, TFAutoModelForSequenceClassification PIPELINE_REGISTRY.register_pipeline( "pair-classification", pipeline_class=PairClassificationPipeline, pt_model=AutoModelForSequenceClassification, tf_model=TFAutoModelForSequenceClassification, ) ``` ์ด ์ž‘์—…์ด ์™„๋ฃŒ๋˜๋ฉด ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ๊ณผ ํ•จ๊ป˜ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, `sgugger/finetuned-bert-mrpc`์€ MRPC ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ ๋ฏธ์„ธ ์กฐ์ •๋˜์–ด ๋ฌธ์žฅ ์Œ์„ ํŒจ๋Ÿฌํ”„๋ ˆ์ด์ฆˆ์ธ์ง€ ์•„๋‹Œ์ง€๋ฅผ ๋ถ„๋ฅ˜ํ•ฉ๋‹ˆ๋‹ค. ```py from transformers import pipeline classifier = pipeline("pair-classification", model="sgugger/finetuned-bert-mrpc") ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ `push_to_hub` ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ—ˆ๋ธŒ์— ๊ณต์œ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py classifier.push_to_hub("test-dynamic-pipeline") ``` ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด "test-dynamic-pipeline" ํด๋” ๋‚ด์— `PairClassificationPipeline`์„ ์ •์˜ํ•œ ํŒŒ์ผ์ด ๋ณต์‚ฌ๋˜๋ฉฐ, ํŒŒ์ดํ”„๋ผ์ธ์˜ ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €๋„ ์ €์žฅํ•œ ํ›„, `{your_username}/test-dynamic-pipeline` ์ €์žฅ์†Œ์— ์žˆ๋Š” ๋ชจ๋“  ๊ฒƒ์„ ํ‘ธ์‹œํ•ฉ๋‹ˆ๋‹ค. ์ดํ›„์—๋Š” `trust_remote_code=True` ์˜ต์…˜๋งŒ ์ œ๊ณตํ•˜๋ฉด ๋ˆ„๊ตฌ๋‚˜ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py from transformers import pipeline classifier = pipeline(model="{your_username}/test-dynamic-pipeline", trust_remote_code=True) ``` ## ๐Ÿค— Transformers์— ํŒŒ์ดํ”„๋ผ์ธ ์ถ”๊ฐ€ํ•˜๊ธฐ [[add-the-pipeline-to-transformers]] ๐Ÿค— Transformers์— ์‚ฌ์šฉ์ž ์ •์˜ ํŒŒ์ดํ”„๋ผ์ธ์„ ๊ธฐ์—ฌํ•˜๋ ค๋ฉด, `pipelines` ํ•˜์œ„ ๋ชจ๋“ˆ์— ์‚ฌ์šฉ์ž ์ •์˜ ํŒŒ์ดํ”„๋ผ์ธ ์ฝ”๋“œ์™€ ํ•จ๊ป˜ ์ƒˆ ๋ชจ๋“ˆ์„ ์ถ”๊ฐ€ํ•œ ๋‹ค์Œ, `pipelines/__init__.py`์—์„œ ์ •์˜๋œ ์ž‘์—… ๋ชฉ๋ก์— ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฐ ๋‹ค์Œ ํ…Œ์ŠคํŠธ๋ฅผ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. `tests/test_pipelines_MY_PIPELINE.py`๋ผ๋Š” ์ƒˆ ํŒŒ์ผ์„ ๋งŒ๋“ค๊ณ  ๋‹ค๋ฅธ ํ…Œ์ŠคํŠธ์™€ ์˜ˆ์ œ๋ฅผ ํ•จ๊ป˜ ์ž‘์„ฑํ•ฉ๋‹ˆ๋‹ค. `run_pipeline_test` ํ•จ์ˆ˜๋Š” ๋งค์šฐ ์ผ๋ฐ˜์ ์ด๋ฉฐ, `model_mapping` ๋ฐ `tf_model_mapping`์—์„œ ์ •์˜๋œ ๊ฐ€๋Šฅํ•œ ๋ชจ๋“  ์•„ํ‚คํ…์ฒ˜์˜ ์ž‘์€ ๋ฌด์ž‘์œ„ ๋ชจ๋ธ์—์„œ ์‹คํ–‰๋ฉ๋‹ˆ๋‹ค. 
์ด๋Š” ํ–ฅํ›„ ํ˜ธํ™˜์„ฑ์„ ํ…Œ์ŠคํŠธํ•˜๋Š” ๋ฐ ๋งค์šฐ ์ค‘์š”ํ•˜๋ฉฐ, ๋ˆ„๊ตฐ๊ฐ€ `XXXForQuestionAnswering`์„ ์œ„ํ•œ ์ƒˆ ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•˜๋ฉด ํŒŒ์ดํ”„๋ผ์ธ ํ…Œ์ŠคํŠธ๊ฐ€ ํ•ด๋‹น ๋ชจ๋ธ์—์„œ ์‹คํ–‰์„ ์‹œ๋„ํ•œ๋‹ค๋Š” ์˜๋ฏธ์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ์ด ๋ฌด์ž‘์œ„์ด๊ธฐ ๋•Œ๋ฌธ์— ์‹ค์ œ ๊ฐ’์„ ํ™•์ธํ•˜๋Š” ๊ฒƒ์€ ๋ถˆ๊ฐ€๋Šฅํ•˜๋ฏ€๋กœ, ๋‹จ์ˆœํžˆ ํŒŒ์ดํ”„๋ผ์ธ ์ถœ๋ ฅ `TYPE`๊ณผ ์ผ์น˜์‹œํ‚ค๊ธฐ ์œ„ํ•œ ๋„์šฐ๋ฏธ `ANY`๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ 2๊ฐœ(์ด์ƒ์ ์œผ๋กœ๋Š” 4๊ฐœ)์˜ ํ…Œ์ŠคํŠธ๋ฅผ ๊ตฌํ˜„ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. - `test_small_model_pt`: ์ด ํŒŒ์ดํ”„๋ผ์ธ์— ๋Œ€ํ•œ ์ž‘์€ ๋ชจ๋ธ 1๊ฐœ๋ฅผ ์ •์˜(๊ฒฐ๊ณผ๊ฐ€ ์˜๋ฏธ ์—†์–ด๋„ ์ƒ๊ด€์—†์Œ)ํ•˜๊ณ  ํŒŒ์ดํ”„๋ผ์ธ ์ถœ๋ ฅ์„ ํ…Œ์ŠคํŠธํ•ฉ๋‹ˆ๋‹ค. ๊ฒฐ๊ณผ๋Š” `test_small_model_tf`์™€ ๋™์ผํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. - `test_small_model_tf`: ์ด ํŒŒ์ดํ”„๋ผ์ธ์— ๋Œ€ํ•œ ์ž‘์€ ๋ชจ๋ธ 1๊ฐœ๋ฅผ ์ •์˜(๊ฒฐ๊ณผ๊ฐ€ ์˜๋ฏธ ์—†์–ด๋„ ์ƒ๊ด€์—†์Œ)ํ•˜๊ณ  ํŒŒ์ดํ”„๋ผ์ธ ์ถœ๋ ฅ์„ ํ…Œ์ŠคํŠธํ•ฉ๋‹ˆ๋‹ค. ๊ฒฐ๊ณผ๋Š” `test_small_model_pt`์™€ ๋™์ผํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. - `test_large_model_pt`(`์„ ํƒ์‚ฌํ•ญ`): ๊ฒฐ๊ณผ๊ฐ€ ์˜๋ฏธ ์žˆ์„ ๊ฒƒ์œผ๋กœ ์˜ˆ์ƒ๋˜๋Š” ์‹ค์ œ ํŒŒ์ดํ”„๋ผ์ธ์—์„œ ํŒŒ์ดํ”„๋ผ์ธ์„ ํ…Œ์ŠคํŠธํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ํ…Œ์ŠคํŠธ๋Š” ์†๋„๊ฐ€ ๋Š๋ฆฌ๋ฏ€๋กœ ์ด๋ฅผ ํ‘œ์‹œํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ์˜ ๋ชฉํ‘œ๋Š” ํŒŒ์ดํ”„๋ผ์ธ์„ ๋ณด์—ฌ์ฃผ๊ณ  ํ–ฅํ›„ ๋ฆด๋ฆฌ์ฆˆ์—์„œ์˜ ๋ณ€ํ™”๊ฐ€ ์—†๋Š”์ง€ ํ™•์ธํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. - `test_large_model_tf`(`์„ ํƒ์‚ฌํ•ญ`): ๊ฒฐ๊ณผ๊ฐ€ ์˜๋ฏธ ์žˆ์„ ๊ฒƒ์œผ๋กœ ์˜ˆ์ƒ๋˜๋Š” ์‹ค์ œ ํŒŒ์ดํ”„๋ผ์ธ์—์„œ ํŒŒ์ดํ”„๋ผ์ธ์„ ํ…Œ์ŠคํŠธํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ํ…Œ์ŠคํŠธ๋Š” ์†๋„๊ฐ€ ๋Š๋ฆฌ๋ฏ€๋กœ ์ด๋ฅผ ํ‘œ์‹œํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ์˜ ๋ชฉํ‘œ๋Š” ํŒŒ์ดํ”„๋ผ์ธ์„ ๋ณด์—ฌ์ฃผ๊ณ  ํ–ฅํ›„ ๋ฆด๋ฆฌ์ฆˆ์—์„œ์˜ ๋ณ€ํ™”๊ฐ€ ์—†๋Š”์ง€ ํ™•์ธํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค.
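์˜ˆ๋ฅผ ๋“ค์–ด, ์œ„์—์„œ ๋งŒ๋“  `pair-classification` ํŒŒ์ดํ”„๋ผ์ธ์— ๋Œ€ํ•œ `test_small_model_pt`๋Š” ๋Œ€๋žต ์•„๋ž˜์™€ ๊ฐ™์€ ํ˜•ํƒœ๊ฐ€ ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ ์“ด ์ž‘์€ ์ฒดํฌํฌ์ธํŠธ(`hf-internal-testing/tiny-random-bert`)์™€ ๋‹จ์–ธ ๋‚ด์šฉ์€ ์„ค๋ช…์„ ์œ„ํ•œ ๊ฐ€์ •์ด๋ฉฐ, ์‹ค์ œ๋กœ๋Š” ๊ธฐ์กด ํŒŒ์ดํ”„๋ผ์ธ ํ…Œ์ŠคํŠธ ํŒŒ์ผ๋“ค์˜ ๊ด€๋ก€๋ฅผ ๋”ฐ๋ฅด๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค.

```python
from transformers import AutoModelForSequenceClassification, pipeline
from transformers.pipelines import PIPELINE_REGISTRY

from pair_classification import PairClassificationPipeline


def test_small_model_pt():
    # ์„ค๋ช…์„ ์œ„ํ•ด ๊ฐ€์ •ํ•œ ์•„์ฃผ ์ž‘์€ ๋ฌด์ž‘์œ„ ์ฒดํฌํฌ์ธํŠธ๋กœ ํŒŒ์ดํ”„๋ผ์ธ์„ ๋งŒ๋“ญ๋‹ˆ๋‹ค
    PIPELINE_REGISTRY.register_pipeline(
        "pair-classification",
        pipeline_class=PairClassificationPipeline,
        pt_model=AutoModelForSequenceClassification,
    )
    classifier = pipeline("pair-classification", model="hf-internal-testing/tiny-random-bert")

    output = classifier("I like cats", second_text="I adore felines")

    # ๋ฌด์ž‘์œ„ ๊ฐ€์ค‘์น˜๋ผ ๊ฐ’ ์ž์ฒด๋Š” ์˜๋ฏธ๊ฐ€ ์—†์œผ๋ฏ€๋กœ ์ถœ๋ ฅ ๊ตฌ์กฐ๋งŒ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค
    assert set(output.keys()) == {"label", "score", "logits"}
    assert isinstance(output["score"], float)
```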
mavonic_private_repos/transformers/docs/source/ko/_toctree.yml
- sections: - local: index title: ๐Ÿค— Transformers - local: quicktour title: ๋‘˜๋Ÿฌ๋ณด๊ธฐ - local: installation title: ์„ค์น˜๋ฐฉ๋ฒ• title: ์‹œ์ž‘ํ•˜๊ธฐ - sections: - local: pipeline_tutorial title: Pipeline์œผ๋กœ ์ถ”๋ก ํ•˜๊ธฐ - local: autoclass_tutorial title: AutoClass๋กœ ์‚ฌ์ „ ํ•™์Šต๋œ ์ธ์Šคํ„ด์Šค ๋กœ๋“œํ•˜๊ธฐ - local: preprocessing title: ๋ฐ์ดํ„ฐ ์ „์ฒ˜๋ฆฌํ•˜๊ธฐ - local: training title: ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ - local: run_scripts title: ์Šคํฌ๋ฆฝํŠธ๋กœ ํ•™์Šตํ•˜๊ธฐ - local: accelerate title: ๐Ÿค— Accelerate๋กœ ๋ถ„์‚ฐ ํ•™์Šต ๊ตฌ์„ฑํ•˜๊ธฐ - local: peft title: ๐Ÿค— PEFT๋กœ ์–ด๋Œ‘ํ„ฐ ๋กœ๋“œ ๋ฐ ํ•™์Šตํ•˜๊ธฐ - local: model_sharing title: ๋งŒ๋“  ๋ชจ๋ธ ๊ณต์œ ํ•˜๊ธฐ - local: transformers_agents title: ์—์ด์ „ํŠธ - local: llm_tutorial title: ๋Œ€๊ทœ๋ชจ ์–ธ์–ด ๋ชจ๋ธ๋กœ ์ƒ์„ฑํ•˜๊ธฐ title: ํŠœํ† ๋ฆฌ์–ผ - sections: - isExpanded: false sections: - local: tasks/sequence_classification title: ํ…์ŠคํŠธ ๋ถ„๋ฅ˜ - local: tasks/token_classification title: ํ† ํฐ ๋ถ„๋ฅ˜ - local: tasks/question_answering title: ์งˆ์˜ ์‘๋‹ต(Question Answering) - local: tasks/language_modeling title: ์ธ๊ณผ์  ์–ธ์–ด ๋ชจ๋ธ๋ง(Causal language modeling) - local: tasks/masked_language_modeling title: ๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง(Masked language modeling) - local: tasks/translation title: ๋ฒˆ์—ญ - local: tasks/summarization title: ์š”์•ฝ - local: tasks/multiple_choice title: ๊ฐ๊ด€์‹ ๋ฌธ์ œ(Multiple Choice) title: ์ž์—ฐ์–ด์ฒ˜๋ฆฌ - isExpanded: false sections: - local: tasks/audio_classification title: ์˜ค๋””์˜ค ๋ถ„๋ฅ˜ - local: tasks/asr title: ์ž๋™ ์Œ์„ฑ ์ธ์‹ title: ์˜ค๋””์˜ค - isExpanded: false sections: - local: tasks/image_classification title: ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜ - local: tasks/semantic_segmentation title: ์˜๋ฏธ์  ๋ถ„ํ• (Semantic segmentation) - local: tasks/video_classification title: ์˜์ƒ ๋ถ„๋ฅ˜ - local: tasks/object_detection title: ๊ฐ์ฒด ํƒ์ง€ - local: tasks/zero_shot_object_detection title: ์ œ๋กœ์ƒท(zero-shot) ๊ฐ์ฒด ํƒ์ง€ - local: tasks/zero_shot_image_classification title: ์ œ๋กœ์ƒท(zero-shot) ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜ - local: tasks/monocular_depth_estimation title: ๋‹จ์ผ ์˜์ƒ ๊ธฐ๋ฐ˜ ๊นŠ์ด ์ถ”์ • - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Image-to-Image - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Image Feature Extraction - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Mask Generation - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Knowledge Distillation for Computer Vision title: ์ปดํ“จํ„ฐ ๋น„์ „ - isExpanded: false sections: - local: tasks/image_captioning title: ์ด๋ฏธ์ง€ ์บก์…”๋‹ - local: tasks/document_question_answering title: ๋ฌธ์„œ ์งˆ์˜ ์‘๋‹ต(Document Question Answering) - local: tasks/visual_question_answering title: ์‹œ๊ฐ์  ์งˆ์˜์‘๋‹ต (Visual Question Answering) - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Text to speech title: ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ - isExpanded: false sections: - local: generation_strategies title: ํ…์ŠคํŠธ ์ƒ์„ฑ ์ „๋žต ์‚ฌ์šฉ์ž ์ •์˜ title: ์ƒ์„ฑ - isExpanded: false sections: - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Image tasks with IDEFICS - local: in_translation title: (๋ฒˆ์—ญ์ค‘) LLM prompting guide title: (๋ฒˆ์—ญ์ค‘) ํ”„๋กฌํ”„ํŒ… title: ํƒœ์Šคํฌ ๊ฐ€์ด๋“œ - sections: - local: fast_tokenizers title: ๐Ÿค— Tokenizers ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ ํ† ํฌ๋‚˜์ด์ € ์‚ฌ์šฉํ•˜๊ธฐ - local: multilingual title: ๋‹ค๊ตญ์–ด ๋ชจ๋ธ ์ถ”๋ก ํ•˜๊ธฐ - local: create_a_model title: ๋ชจ๋ธ๋ณ„ API ์‚ฌ์šฉํ•˜๊ธฐ - local: custom_models title: ์‚ฌ์šฉ์ž ์ •์˜ ๋ชจ๋ธ ๊ณต์œ ํ•˜๊ธฐ - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Templates for chat models - local: in_translation title: 
(๋ฒˆ์—ญ์ค‘) Trainer - local: sagemaker title: Amazon SageMaker์—์„œ ํ•™์Šต ์‹คํ–‰ํ•˜๊ธฐ - local: serialization title: ONNX๋กœ ๋‚ด๋ณด๋‚ด๊ธฐ - local: tflite title: TFLite๋กœ ๋‚ด๋ณด๋‚ด๊ธฐ - local: torchscript title: TorchScript๋กœ ๋‚ด๋ณด๋‚ด๊ธฐ - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Benchmarks - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Notebooks with examples - local: community title: ์ปค๋ฎค๋‹ˆํ‹ฐ ๋ฆฌ์†Œ์Šค - local: custom_tools title: ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ์™€ ํ”„๋กฌํ”„ํŠธ - local: troubleshooting title: ๋ฌธ์ œ ํ•ด๊ฒฐ - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Contribute new quantization method title: (๋ฒˆ์—ญ์ค‘) ๊ฐœ๋ฐœ์ž ๊ฐ€์ด๋“œ - sections: - local: performance title: ์„ฑ๋Šฅ ๋ฐ ํ™•์žฅ์„ฑ - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Quantization - sections: - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Training on one GPU - local: perf_train_gpu_many title: ๋‹ค์ค‘ GPU์—์„œ ํ›ˆ๋ จ ์ง„ํ–‰ํ•˜๊ธฐ - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Fully Sharded Data Parallel - local: in_translation title: (๋ฒˆ์—ญ์ค‘) DeepSpeed - local: perf_train_cpu title: CPU์—์„œ ํ›ˆ๋ จ - local: perf_train_cpu_many title: ๋‹ค์ค‘ CPU์—์„œ ํ›ˆ๋ จํ•˜๊ธฐ - local: perf_train_tpu_tf title: TensorFlow๋กœ TPU์—์„œ ํ›ˆ๋ จํ•˜๊ธฐ - local: in_translation title: (๋ฒˆ์—ญ์ค‘) PyTorch training on Apple silicon - local: perf_hardware title: ํ›ˆ๋ จ์šฉ ์‚ฌ์šฉ์ž ๋งž์ถคํ˜• ํ•˜๋“œ์›จ์–ด - local: hpo_train title: Trainer API๋ฅผ ์‚ฌ์šฉํ•œ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ ํƒ์ƒ‰ title: (๋ฒˆ์—ญ์ค‘) ํšจ์œจ์ ์ธ ํ•™์Šต ๊ธฐ์ˆ ๋“ค - sections: - local: perf_infer_cpu title: CPU๋กœ ์ถ”๋ก ํ•˜๊ธฐ - local: perf_infer_gpu_one title: ํ•˜๋‚˜์˜ GPU๋ฅผ ํ™œ์šฉํ•œ ์ถ”๋ก  title: ์ถ”๋ก  ์ตœ์ ํ™”ํ•˜๊ธฐ - local: big_models title: ๋Œ€ํ˜• ๋ชจ๋ธ์„ ์ธ์Šคํ„ด์Šคํ™” - local: debugging title: ๋””๋ฒ„๊น… - local: tf_xla title: TensorFlow ๋ชจ๋ธ์„ ์œ„ํ•œ XLA ํ†ตํ•ฉ - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Optimize inference using `torch.compile()` title: (๋ฒˆ์—ญ์ค‘) ์„ฑ๋Šฅ ๋ฐ ํ™•์žฅ์„ฑ - sections: - local: contributing title: ๐Ÿค— Transformers์— ๊ธฐ์—ฌํ•˜๋Š” ๋ฐฉ๋ฒ• - local: add_new_model title: ๐Ÿค— Transformers์— ์ƒˆ๋กœ์šด ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๋ฐฉ๋ฒ• - local: add_new_pipeline title: ์–ด๋–ป๊ฒŒ ๐Ÿค— Transformers์— ํŒŒ์ดํ”„๋ผ์ธ์„ ์ถ”๊ฐ€ํ•˜๋‚˜์š”? 
- local: testing title: ํ…Œ์ŠคํŠธ - local: pr_checks title: Pull Request์— ๋Œ€ํ•œ ๊ฒ€์‚ฌ title: (๋ฒˆ์—ญ์ค‘) ๊ธฐ์—ฌํ•˜๊ธฐ - sections: - local: philosophy title: ์ด๋…๊ณผ ๋ชฉํ‘œ - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Glossary - local: task_summary title: ๐Ÿค— Transformers๋กœ ํ•  ์ˆ˜ ์žˆ๋Š” ์ž‘์—… - local: tasks_explained title: ๐Ÿค— Transformers๋กœ ์ž‘์—…์„ ํ•ด๊ฒฐํ•˜๋Š” ๋ฐฉ๋ฒ• - local: model_summary title: Transformer ๋ชจ๋ธ๊ตฐ - local: tokenizer_summary title: ํ† ํฌ๋‚˜์ด์ € ์š”์•ฝ - local: attention title: ์–ดํ…์…˜ ๋งค์ปค๋‹ˆ์ฆ˜ - local: pad_truncation title: ํŒจ๋”ฉ๊ณผ ์ž˜๋ผ๋‚ด๊ธฐ - local: bertology title: BERTology - local: perplexity title: ๊ณ ์ • ๊ธธ์ด ๋ชจ๋ธ์˜ ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ(Perplexity) - local: pipeline_webserver title: ์ถ”๋ก  ์›น ์„œ๋ฒ„๋ฅผ ์œ„ํ•œ ํŒŒ์ดํ”„๋ผ์ธ - local: model_memory_anatomy title: ๋ชจ๋ธ ํ•™์Šต ํ•ด๋ถ€ํ•˜๊ธฐ - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Getting the most out of LLMs title: (๋ฒˆ์—ญ์ค‘) ๊ฐœ๋… ๊ฐ€์ด๋“œ - sections: - sections: - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Agents and Tools - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Auto Classes - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Backbones - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Callbacks - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Configuration - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Data Collator - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Keras callbacks - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Logging - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Models - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Text Generation - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ONNX - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Optimization - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Model outputs - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Pipelines - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Processors - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Quantization - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Tokenizer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Trainer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) DeepSpeed - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Feature Extractor - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Image Processor title: (๋ฒˆ์—ญ์ค‘) ๋ฉ”์ธ ํด๋ž˜์Šค - sections: - isExpanded: false sections: - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ALBERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BART - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BARThez - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BARTpho - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BertGeneration - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BertJapanese - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Bertweet - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BigBird - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BigBirdPegasus - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BioGpt - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Blenderbot - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Blenderbot Small - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BLOOM - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BORT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ByT5 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) CamemBERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) CANINE - local: in_translation title: (๋ฒˆ์—ญ์ค‘) CodeGen - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ConvBERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) CPM - local: in_translation title: (๋ฒˆ์—ญ์ค‘) CPMANT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) CTRL - local: in_translation title: (๋ฒˆ์—ญ์ค‘) DeBERTa - local: in_translation title: (๋ฒˆ์—ญ์ค‘) DeBERTa-v2 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) 
DialoGPT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) DistilBERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) DPR - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ELECTRA - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Encoder Decoder Models - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ERNIE - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ErnieM - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ESM - local: in_translation title: (๋ฒˆ์—ญ์ค‘) FLAN-T5 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) FLAN-UL2 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) FlauBERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) FNet - local: in_translation title: (๋ฒˆ์—ญ์ค‘) FSMT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Funnel Transformer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) GPT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) GPT Neo - local: in_translation title: (๋ฒˆ์—ญ์ค‘) GPT NeoX - local: in_translation title: (๋ฒˆ์—ญ์ค‘) GPT NeoX Japanese - local: in_translation title: (๋ฒˆ์—ญ์ค‘) GPT-J - local: in_translation title: (๋ฒˆ์—ญ์ค‘) GPT2 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) GPTBigCode - local: in_translation title: (๋ฒˆ์—ญ์ค‘) GPTSAN Japanese - local: in_translation title: (๋ฒˆ์—ญ์ค‘) GPTSw3 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) HerBERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) I-BERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Jukebox - local: in_translation title: (๋ฒˆ์—ญ์ค‘) LED - local: model_doc/llama title: LLaMA - local: model_doc/llama2 title: LLaMA2 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Longformer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) LongT5 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) LUKE - local: in_translation title: (๋ฒˆ์—ญ์ค‘) M2M100 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MarianMT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MarkupLM - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MBart and MBart-50 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MEGA - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MegatronBERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MegatronGPT2 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) mLUKE - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MobileBERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MPNet - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MT5 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MVP - local: in_translation title: (๋ฒˆ์—ญ์ค‘) NEZHA - local: in_translation title: (๋ฒˆ์—ญ์ค‘) NLLB - local: in_translation title: (๋ฒˆ์—ญ์ค‘) NLLB-MoE - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Nystrรถmformer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Open-Llama - local: in_translation title: (๋ฒˆ์—ญ์ค‘) OPT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Pegasus - local: in_translation title: (๋ฒˆ์—ญ์ค‘) PEGASUS-X - local: in_translation title: (๋ฒˆ์—ญ์ค‘) PhoBERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) PLBart - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ProphetNet - local: in_translation title: (๋ฒˆ์—ญ์ค‘) QDQBert - local: in_translation title: (๋ฒˆ์—ญ์ค‘) RAG - local: in_translation title: (๋ฒˆ์—ญ์ค‘) REALM - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Reformer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) RemBERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) RetriBERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) RoBERTa - local: in_translation title: (๋ฒˆ์—ญ์ค‘) RoBERTa-PreLayerNorm - local: in_translation title: (๋ฒˆ์—ญ์ค‘) RoCBert - local: in_translation title: (๋ฒˆ์—ญ์ค‘) RoFormer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Splinter - local: in_translation title: (๋ฒˆ์—ญ์ค‘) SqueezeBERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) SwitchTransformers - local: in_translation title: (๋ฒˆ์—ญ์ค‘) T5 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) 
T5v1.1 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) TAPEX - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Transformer XL - local: in_translation title: (๋ฒˆ์—ญ์ค‘) UL2 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) X-MOD - local: in_translation title: (๋ฒˆ์—ญ์ค‘) XGLM - local: in_translation title: (๋ฒˆ์—ญ์ค‘) XLM - local: in_translation title: (๋ฒˆ์—ญ์ค‘) XLM-ProphetNet - local: in_translation title: (๋ฒˆ์—ญ์ค‘) XLM-RoBERTa - local: in_translation title: (๋ฒˆ์—ญ์ค‘) XLM-RoBERTa-XL - local: in_translation title: (๋ฒˆ์—ญ์ค‘) XLM-V - local: in_translation title: (๋ฒˆ์—ญ์ค‘) XLNet - local: in_translation title: (๋ฒˆ์—ญ์ค‘) YOSO title: (๋ฒˆ์—ญ์ค‘) ํ…์ŠคํŠธ ๋ชจ๋ธ - isExpanded: false sections: - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BEiT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BiT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Conditional DETR - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ConvNeXT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ConvNeXTV2 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) CvT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Deformable DETR - local: in_translation title: (๋ฒˆ์—ญ์ค‘) DeiT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) DETA - local: in_translation title: (๋ฒˆ์—ญ์ค‘) DETR - local: in_translation title: (๋ฒˆ์—ญ์ค‘) DiNAT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) DiT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) DPT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) EfficientFormer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) EfficientNet - local: in_translation title: (๋ฒˆ์—ญ์ค‘) FocalNet - local: in_translation title: (๋ฒˆ์—ญ์ค‘) GLPN - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ImageGPT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) LeViT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Mask2Former - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MaskFormer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MobileNetV1 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MobileNetV2 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MobileViT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) NAT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) PoolFormer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) RegNet - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ResNet - local: in_translation title: (๋ฒˆ์—ญ์ค‘) SegFormer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Swin Transformer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Swin Transformer V2 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Swin2SR - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Table Transformer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) TimeSformer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) UperNet - local: in_translation title: (๋ฒˆ์—ญ์ค‘) VAN - local: in_translation title: (๋ฒˆ์—ญ์ค‘) VideoMAE - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Vision Transformer (ViT) - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ViT Hybrid - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ViTMAE - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ViTMSN - local: in_translation title: (๋ฒˆ์—ญ์ค‘) YOLOS title: (๋ฒˆ์—ญ์ค‘) ๋น„์ „ ๋ชจ๋ธ - isExpanded: false sections: - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Audio Spectrogram Transformer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) CLAP - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Hubert - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MCTCT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) SEW - local: in_translation title: (๋ฒˆ์—ญ์ค‘) SEW-D - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Speech2Text - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Speech2Text2 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) SpeechT5 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) UniSpeech - local: in_translation title: (๋ฒˆ์—ญ์ค‘) UniSpeech-SAT - 
local: in_translation title: (๋ฒˆ์—ญ์ค‘) Wav2Vec2 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Wav2Vec2-Conformer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Wav2Vec2Phoneme - local: in_translation title: (๋ฒˆ์—ญ์ค‘) WavLM - local: model_doc/whisper title: Whisper - local: in_translation title: (๋ฒˆ์—ญ์ค‘) XLS-R - local: in_translation title: (๋ฒˆ์—ญ์ค‘) XLSR-Wav2Vec2 title: (๋ฒˆ์—ญ์ค‘) ์˜ค๋””์˜ค ๋ชจ๋ธ - isExpanded: false sections: - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ALIGN - local: in_translation title: (๋ฒˆ์—ญ์ค‘) AltCLIP - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BLIP - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BLIP-2 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BridgeTower - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Chinese-CLIP - local: in_translation title: (๋ฒˆ์—ญ์ค‘) CLIP - local: in_translation title: (๋ฒˆ์—ญ์ค‘) CLIPSeg - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Data2Vec - local: in_translation title: (๋ฒˆ์—ญ์ค‘) DePlot - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Donut - local: in_translation title: (๋ฒˆ์—ญ์ค‘) FLAVA - local: in_translation title: (๋ฒˆ์—ญ์ค‘) GIT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) GroupViT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) LayoutLM - local: in_translation title: (๋ฒˆ์—ญ์ค‘) LayoutLMV2 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) LayoutLMV3 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) LayoutXLM - local: in_translation title: (๋ฒˆ์—ญ์ค‘) LiLT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) LXMERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MatCha - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MGP-STR - local: in_translation title: (๋ฒˆ์—ญ์ค‘) OneFormer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) OWL-ViT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Perceiver - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Pix2Struct - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Segment Anything - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Speech Encoder Decoder Models - local: in_translation title: (๋ฒˆ์—ญ์ค‘) TAPAS - local: in_translation title: (๋ฒˆ์—ญ์ค‘) TrOCR - local: in_translation title: (๋ฒˆ์—ญ์ค‘) TVLT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ViLT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Vision Encoder Decoder Models - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Vision Text Dual Encoder - local: in_translation title: (๋ฒˆ์—ญ์ค‘) VisualBERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) X-CLIP title: (๋ฒˆ์—ญ์ค‘) ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ ๋ชจ๋ธ - isExpanded: false sections: - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Decision Transformer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Trajectory Transformer title: (๋ฒˆ์—ญ์ค‘) ๊ฐ•ํ™”ํ•™์Šต ๋ชจ๋ธ - isExpanded: false sections: - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Informer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Time Series Transformer title: (๋ฒˆ์—ญ์ค‘) ์‹œ๊ณ„์—ด ๋ชจ๋ธ - isExpanded: false sections: - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Graphormer title: (๋ฒˆ์—ญ์ค‘) Graph models title: (๋ฒˆ์—ญ์ค‘) ๋ชจ๋ธ - sections: - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Custom Layers and Utilities - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Utilities for pipelines - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Utilities for Tokenizers - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Utilities for Trainer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Utilities for Generation - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Utilities for Image Processors - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Utilities for Audio processing - local: in_translation title: (๋ฒˆ์—ญ์ค‘) General Utilities - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Utilities for Time Series title: (๋ฒˆ์—ญ์ค‘) 
Internal Helpers title: (๋ฒˆ์—ญ์ค‘) API
mavonic_private_repos/transformers/docs/source/ko/tasks_explained.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๐Ÿค— Transformers๋กœ ์ž‘์—…์„ ํ•ด๊ฒฐํ•˜๋Š” ๋ฐฉ๋ฒ•[[how-transformers-solve-tasks]] [๐Ÿค— Transformers๋กœ ํ•  ์ˆ˜ ์žˆ๋Š” ์ž‘์—…](task_summary)์—์„œ ์ž์—ฐ์–ด ์ฒ˜๋ฆฌ(NLP), ์Œ์„ฑ ๋ฐ ์˜ค๋””์˜ค, ์ปดํ“จํ„ฐ ๋น„์ „ ์ž‘์—… ๋“ฑ์˜ ์ค‘์š”ํ•œ ์‘์šฉ์„ ๋ฐฐ์› ์Šต๋‹ˆ๋‹ค. ์ด ํŽ˜์ด์ง€์—์„œ๋Š” ๋ชจ๋ธ์ด ์ด๋Ÿฌํ•œ ์ž‘์—…์„ ์–ด๋–ป๊ฒŒ ํ•ด๊ฒฐํ•˜๋Š”์ง€ ์ž์„ธํžˆ ์‚ดํŽด๋ณด๊ณ  ๋‚ด๋ถ€์—์„œ ์–ด๋–ค ์ผ์ด ์ผ์–ด๋‚˜๋Š”์ง€ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. ์ฃผ์–ด์ง„ ์ž‘์—…์„ ํ•ด๊ฒฐํ•˜๋Š” ๋งŽ์€ ๋ฐฉ๋ฒ•์ด ์žˆ์œผ๋ฉฐ, ์ผ๋ถ€ ๋ชจ๋ธ์€ ํŠน์ • ๊ธฐ์ˆ ์„ ๊ตฌํ˜„ํ•˜๊ฑฐ๋‚˜ ์‹ฌ์ง€์–ด ์ƒˆ๋กœ์šด ๋ฐฉ์‹์œผ๋กœ ์ž‘์—…์— ์ ‘๊ทผํ•  ์ˆ˜๋„ ์žˆ์ง€๋งŒ, Transformer ๋ชจ๋ธ์˜ ๊ฒฝ์šฐ ์ผ๋ฐ˜์ ์ธ ์•„์ด๋””์–ด๋Š” ๋™์ผํ•ฉ๋‹ˆ๋‹ค. ์œ ์—ฐํ•œ ์•„ํ‚คํ…์ฒ˜ ๋•๋ถ„์— ๋Œ€๋ถ€๋ถ„์˜ ๋ชจ๋ธ์€ ์ธ์ฝ”๋”, ๋””์ฝ”๋” ๋˜๋Š” ์ธ์ฝ”๋”-๋””์ฝ”๋” ๊ตฌ์กฐ์˜ ๋ณ€ํ˜•์ž…๋‹ˆ๋‹ค. Transformer ๋ชจ๋ธ๋ฟ๋งŒ ์•„๋‹ˆ๋ผ ์šฐ๋ฆฌ์˜ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—๋Š” ์˜ค๋Š˜๋‚  ์ปดํ“จํ„ฐ ๋น„์ „ ์ž‘์—…์— ์‚ฌ์šฉ๋˜๋Š” ๋ช‡ ๊ฐ€์ง€ ํ•ฉ์„ฑ๊ณฑ ์‹ ๊ฒฝ๋ง(CNNs)๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ, ์šฐ๋ฆฌ๋Š” ํ˜„๋Œ€ CNN์˜ ์ž‘๋™ ๋ฐฉ์‹์— ๋Œ€ํ•ด ์„ค๋ช…ํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ž‘์—…์ด ์–ด๋–ป๊ฒŒ ํ•ด๊ฒฐ๋˜๋Š”์ง€ ์„ค๋ช…ํ•˜๊ธฐ ์œ„ํ•ด, ์œ ์šฉํ•œ ์˜ˆ์ธก์„ ์ถœ๋ ฅํ•˜๊ณ ์ž ๋ชจ๋ธ ๋‚ด๋ถ€์—์„œ ์–ด๋–ค ์ผ์ด ์ผ์–ด๋‚˜๋Š”์ง€ ์‚ดํŽด๋ด…๋‹ˆ๋‹ค. - ์˜ค๋””์˜ค ๋ถ„๋ฅ˜ ๋ฐ ์ž๋™ ์Œ์„ฑ ์ธ์‹(ASR)์„ ์œ„ํ•œ [Wav2Vec2](model_doc/wav2vec2) - ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•œ [Vision Transformer (ViT)](model_doc/vit) ๋ฐ [ConvNeXT](model_doc/convnext) - ๊ฐ์ฒด ํƒ์ง€๋ฅผ ์œ„ํ•œ [DETR](model_doc/detr) - ์ด๋ฏธ์ง€ ๋ถ„ํ• ์„ ์œ„ํ•œ [Mask2Former](model_doc/mask2former) - ๊นŠ์ด ์ถ”์ •์„ ์œ„ํ•œ [GLPN](model_doc/glpn) - ์ธ์ฝ”๋”๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ํ…์ŠคํŠธ ๋ถ„๋ฅ˜, ํ† ํฐ ๋ถ„๋ฅ˜ ๋ฐ ์งˆ์˜์‘๋‹ต๊ณผ ๊ฐ™์€ NLP ์ž‘์—…์„ ์œ„ํ•œ [BERT](model_doc/bert) - ๋””์ฝ”๋”๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ํ…์ŠคํŠธ ์ƒ์„ฑ๊ณผ ๊ฐ™์€ NLP ์ž‘์—…์„ ์œ„ํ•œ [GPT2](model_doc/gpt2) - ์ธ์ฝ”๋”-๋””์ฝ”๋”๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ์š”์•ฝ ๋ฐ ๋ฒˆ์—ญ๊ณผ ๊ฐ™์€ NLP ์ž‘์—…์„ ์œ„ํ•œ [BART](model_doc/bart) <Tip> ๋” ๋‚˜์•„๊ฐ€๊ธฐ ์ „์—, ๊ธฐ์กด Transformer ์•„ํ‚คํ…์ฒ˜์— ๋Œ€ํ•œ ๊ธฐ๋ณธ์ ์ธ ์ง€์‹์„ ์ˆ™์ง€ํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ์ธ์ฝ”๋”, ๋””์ฝ”๋” ๋ฐ ์–ดํ…์…˜์˜ ์ž‘๋™ ๋ฐฉ์‹์„ ์•Œ๋ฉด ๋‹ค์–‘ํ•œ Transformer ๋ชจ๋ธ์ด ์–ด๋–ป๊ฒŒ ์ž‘๋™ํ•˜๋Š”์ง€ ์ดํ•ดํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋ฉ๋‹ˆ๋‹ค. ์‹œ์ž‘ ๋‹จ๊ณ„๊ฑฐ๋‚˜ ๋ณต์Šต์ด ํ•„์š”ํ•œ ๊ฒฝ์šฐ, ๋” ๋งŽ์€ ์ •๋ณด๋ฅผ ์œ„ํ•ด [์ฝ”์Šค](https://huggingface.co/course/chapter1/4?fw=pt)๋ฅผ ํ™•์ธํ•˜์„ธ์š”! </Tip> ## ์Œ์„ฑ ๋ฐ ์˜ค๋””์˜ค[[speech-and-audio]] [Wav2Vec2](model_doc/wav2vec2)๋Š” ๋ ˆ์ด๋ธ”์ด ์ง€์ •๋˜์ง€ ์•Š์€ ์Œ์„ฑ ๋ฐ์ดํ„ฐ์— ๋Œ€ํ•ด ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ๋กœ, ์˜ค๋””์˜ค ๋ถ„๋ฅ˜ ๋ฐ ์ž๋™ ์Œ์„ฑ ์ธ์‹์„ ์œ„ํ•ด ๋ ˆ์ด๋ธ”์ด ์ง€์ •๋œ ๋ฐ์ดํ„ฐ๋กœ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค. 
<div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/wav2vec2_architecture.png"/> </div> ์ด ๋ชจ๋ธ์—๋Š” 4๊ฐ€์ง€ ์ฃผ์š” ๊ตฌ์„ฑ ์š”์†Œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค: 1. *ํŠน์ง• ์ธ์ฝ”๋”(feature encoder)*๋Š” ์›์‹œ ์˜ค๋””์˜ค ํŒŒํ˜•(raw audio waveform)์„ ๊ฐ€์ ธ์™€์„œ ์ œ๋กœ ํ‰๊ท  ๋ฐ ๋‹จ์œ„ ๋ถ„์‚ฐ์œผ๋กœ ํ‘œ์ค€ํ™”ํ•˜๊ณ , ๊ฐ๊ฐ 20ms ๊ธธ์ด์˜ ํŠน์ง• ๋ฒกํ„ฐ์˜ ์‹œํ€€์Šค๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค. 2. ์˜ค๋””์˜ค ํŒŒํ˜•์€ ๋ณธ์งˆ์ ์œผ๋กœ ์—ฐ์†์ ์ด๊ธฐ ๋•Œ๋ฌธ์—, ํ…์ŠคํŠธ ์‹œํ€€์Šค๋ฅผ ๋‹จ์–ด๋กœ ๋‚˜๋ˆ„๋Š” ๊ฒƒ๊ณผ ๊ฐ™์ด ๋ถ„ํ• ํ•  ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค. ๊ทธ๋ž˜์„œ *์–‘์žํ™” ๋ชจ๋“ˆ(quantization module)*๋กœ ์ „๋‹ฌ๋˜๋Š” ํŠน์ง• ๋ฒกํ„ฐ๋Š” ์ด์‚ฐํ˜• ์Œ์„ฑ ๋‹จ์œ„๋ฅผ ํ•™์Šตํ•˜๊ธฐ ์œ„ํ•œ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์Œ์„ฑ ๋‹จ์œ„๋Š” *์ฝ”๋“œ๋ถ(codebook)*(์–ดํœ˜์ง‘์ด๋ผ๊ณ  ์ƒ๊ฐํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค)์ด๋ผ๋Š” ์ฝ”๋“œ๋‹จ์–ด(codewords) ์ฝœ๋ ‰์…˜์—์„œ ์„ ํƒ๋ฉ๋‹ˆ๋‹ค. ์ฝ”๋“œ๋ถ์—์„œ ์—ฐ์†์ ์ธ ์˜ค๋””์˜ค ์ž…๋ ฅ์„ ๊ฐ€์žฅ ์ž˜ ๋‚˜ํƒ€๋‚ด๋Š” ๋ฒกํ„ฐ ๋˜๋Š” ์Œ์„ฑ ๋‹จ์œ„๊ฐ€ ์„ ํƒ๋˜์–ด ๋ชจ๋ธ์„ ํ†ต๊ณผํ•ฉ๋‹ˆ๋‹ค. 3. ํŠน์ง• ๋ฒกํ„ฐ์˜ ์ ˆ๋ฐ˜์€ ๋ฌด์ž‘์œ„๋กœ ๋งˆ์Šคํฌ๊ฐ€ ์ ์šฉ๋˜๋ฉฐ, ๋งˆ์Šคํฌ๋œ ํŠน์ง• ๋ฒกํ„ฐ๋Š” *์ƒ๋Œ€์  ์œ„์น˜ ์ž„๋ฒ ๋”ฉ*์„ ์ถ”๊ฐ€ํ•˜๋Š” Transformer ์ธ์ฝ”๋”์ธ *๋ฌธ๋งฅ ๋„คํŠธ์›Œํฌ(context network)*๋กœ ์ „๋‹ฌ๋ฉ๋‹ˆ๋‹ค. 4. ๋ฌธ๋งฅ ๋„คํŠธ์›Œํฌ์˜ ์‚ฌ์ „ํ›ˆ๋ จ ๋ชฉํ‘œ๋Š” *๋Œ€์กฐ์  ์ž‘์—…(contrastive task)*์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ์€ ์ž˜๋ชป๋œ ์˜ˆ์ธก ์‹œํ€€์Šค์—์„œ ๋งˆ์Šคํฌ๋œ ์˜ˆ์ธก์˜ ์‹ค์ œ ์–‘์žํ™”๋œ ์Œ์„ฑ ํ‘œํ˜„์„ ์˜ˆ์ธกํ•˜๋ฉฐ, ๋ชจ๋ธ์ด ๊ฐ€์žฅ ์œ ์‚ฌํ•œ ์ปจํ…์ŠคํŠธ ๋ฒกํ„ฐ์™€ ์–‘์žํ™”๋œ ์Œ์„ฑ ๋‹จ์œ„(ํƒ€๊ฒŸ ๋ ˆ์ด๋ธ”)๋ฅผ ์ฐพ๋„๋ก ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ์ด์ œ wav2vec2๊ฐ€ ์‚ฌ์ „ํ›ˆ๋ จ๋˜์—ˆ์œผ๋ฏ€๋กœ, ์˜ค๋””์˜ค ๋ถ„๋ฅ˜ ๋˜๋Š” ์ž๋™ ์Œ์„ฑ ์ธ์‹์„ ์œ„ํ•ด ๋ฐ์ดํ„ฐ์— ๋งž์ถฐ ๋ฏธ์„ธ ์กฐ์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ### ์˜ค๋””์˜ค ๋ถ„๋ฅ˜[[audio-classification]] ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ์˜ค๋””์˜ค ๋ถ„๋ฅ˜์— ์‚ฌ์šฉํ•˜๋ ค๋ฉด, ๊ธฐ๋ณธ Wav2Vec2 ๋ชจ๋ธ ์ƒ๋‹จ์— ์‹œํ€€์Šค ๋ถ„๋ฅ˜ ํ—ค๋“œ๋ฅผ ์ถ”๊ฐ€ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ๋ถ„๋ฅ˜ ํ—ค๋“œ๋Š” ์ธ์ฝ”๋”์˜ ์€๋‹‰ ์ƒํƒœ(hidden states)๋ฅผ ๋ฐ›๋Š” ์„ ํ˜• ๋ ˆ์ด์–ด์ž…๋‹ˆ๋‹ค. ์€๋‹‰ ์ƒํƒœ๋Š” ๊ฐ๊ฐ ๊ธธ์ด๊ฐ€ ๋‹ค๋ฅธ ์˜ค๋””์˜ค ํ”„๋ ˆ์ž„์—์„œ ํ•™์Šต๋œ ํŠน์ง•์„ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. ๊ณ ์ • ๊ธธ์ด์˜ ๋ฒกํ„ฐ ํ•˜๋‚˜๋ฅผ ๋งŒ๋“ค๊ธฐ ์œ„ํ•ด, ์€๋‹‰ ์ƒํƒœ๋Š” ๋จผ์ € ํ’€๋ง๋˜๊ณ , ํด๋ž˜์Šค ๋ ˆ์ด๋ธ”์— ๋Œ€ํ•œ ๋กœ์ง“์œผ๋กœ ๋ณ€ํ™˜๋ฉ๋‹ˆ๋‹ค. ๊ฐ€์žฅ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์€ ํด๋ž˜์Šค๋ฅผ ์ฐพ๊ธฐ ์œ„ํ•ด ๋กœ์ง“๊ณผ ํƒ€๊ฒŸ ์‚ฌ์ด์˜ ๊ต์ฐจ ์—”ํŠธ๋กœํ”ผ ์†์‹ค์ด ๊ณ„์‚ฐ๋ฉ๋‹ˆ๋‹ค. ์˜ค๋””์˜ค ๋ถ„๋ฅ˜์— ์ง์ ‘ ๋„์ „ํ•  ์ค€๋น„๊ฐ€ ๋˜์…จ๋‚˜์š”? ์™„์ „ํ•œ [์˜ค๋””์˜ค ๋ถ„๋ฅ˜ ๊ฐ€์ด๋“œ](tasks/audio_classification)๋ฅผ ํ™•์ธํ•˜์—ฌ Wav2Vec2๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ•™์Šตํ•˜์„ธ์š”! ### ์ž๋™ ์Œ์„ฑ ์ธ์‹[[automatic-speech-recognition]] ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ์ž๋™ ์Œ์„ฑ ์ธ์‹์— ์‚ฌ์šฉํ•˜๋ ค๋ฉด, [์—ฐ๊ฒฐ์ฃผ์˜์  ์‹œ๊ฐ„ ๋ถ„๋ฅ˜(CTC, Connectionist Temporal Classification)](glossary#connectionist-temporal-classification-ctc)๋ฅผ ์œ„ํ•ด ๊ธฐ๋ณธ Wav2Vec2 ๋ชจ๋ธ ์ƒ๋‹จ์— ์–ธ์–ด ๋ชจ๋ธ๋ง ํ—ค๋“œ๋ฅผ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ์–ธ์–ด ๋ชจ๋ธ๋ง ํ—ค๋“œ๋Š” ์ธ์ฝ”๋”์˜ ์€๋‹‰ ์ƒํƒœ๋ฅผ ๋ฐ›์•„์„œ ๋กœ์ง“์œผ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ๊ฐ ๋กœ์ง“์€ ํ† ํฐ ํด๋ž˜์Šค(ํ† ํฐ ์ˆ˜๋Š” ์ž‘์—…์˜ ์–ดํœ˜์—์„œ ๋‚˜ํƒ€๋‚ฉ๋‹ˆ๋‹ค)๋ฅผ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. CTC ์†์‹ค์€ ํ…์ŠคํŠธ๋กœ ๋””์ฝ”๋”ฉ๋œ ํ† ํฐ์—์„œ ๊ฐ€์žฅ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์€ ํ† ํฐ ์‹œํ€€์Šค๋ฅผ ์ฐพ๊ธฐ ์œ„ํ•ด ๋กœ์ง“๊ณผ ํƒ€๊ฒŸ ์‚ฌ์ด์—์„œ ๊ณ„์‚ฐ๋ฉ๋‹ˆ๋‹ค. ์ž๋™ ์Œ์„ฑ ์ธ์‹์— ์ง์ ‘ ๋„์ „ํ•  ์ค€๋น„๊ฐ€ ๋˜์…จ๋‚˜์š”? 
์™„์ „ํ•œ [์ž๋™ ์Œ์„ฑ ์ธ์‹ ๊ฐ€์ด๋“œ](tasks/asr)๋ฅผ ํ™•์ธํ•˜์—ฌ Wav2Vec2๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ•™์Šตํ•˜์„ธ์š”! ## ์ปดํ“จํ„ฐ ๋น„์ „[[computer-vision]] ์ปดํ“จํ„ฐ ๋น„์ „ ์ž‘์—…์— ์ ‘๊ทผํ•˜๋Š” 2๊ฐ€์ง€ ๋ฐฉ๋ฒ•์ด ์žˆ์Šต๋‹ˆ๋‹ค: 1. ์ด๋ฏธ์ง€๋ฅผ ํŒจ์น˜ ์‹œํ€€์Šค๋กœ ๋ถ„๋ฆฌํ•˜๊ณ  Transformer๋กœ ๋ณ‘๋ ฌ ์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค. 2. [ConvNeXT](model_doc/convnext)์™€ ๊ฐ™์€ ํ˜„๋Œ€ CNN์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ํ•ฉ์„ฑ๊ณฑ ๋ ˆ์ด์–ด๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•˜์ง€๋งŒ ํ˜„๋Œ€ ๋„คํŠธ์›Œํฌ ์„ค๊ณ„๋ฅผ ์ ์šฉํ•ฉ๋‹ˆ๋‹ค. <Tip> ์„ธ ๋ฒˆ์งธ ๋ฐฉ๋ฒ•์€ Transformer์™€ ํ•ฉ์„ฑ๊ณฑ(์˜ˆ๋ฅผ ๋“ค์–ด, [Convolutional Vision Transformer](model_doc/cvt) ๋˜๋Š” [LeViT](model_doc/levit))์„ ๊ฒฐํ•ฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์šฐ๋ฆฌ๋Š” ์‚ดํŽด๋ณผ ๋‘ ๊ฐ€์ง€ ๋ฐฉ๋ฒ•๋งŒ ๊ฒฐํ•ฉํ•˜๊ธฐ ๋•Œ๋ฌธ์— ์—ฌ๊ธฐ์„œ ์ด ๋ฐฉ๋ฒ•์„ ๋‹ค๋ฃจ์ง€ ์•Š์Šต๋‹ˆ๋‹ค. </Tip> ViT์™€ ConvNeXT๋Š” ์ผ๋ฐ˜์ ์œผ๋กœ ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜์—์„œ ์‚ฌ์šฉ๋˜์ง€๋งŒ, ๋ฌผ์ฒด ๊ฐ์ง€, ๋ถ„ํ• , ๊นŠ์ด ์ถ”์ •๊ณผ ๊ฐ™์€ ๋‹ค๋ฅธ ๋น„์ „ ์ž‘์—…์—๋Š” ๊ฐ๊ฐ DETR, Mask2Former, GLPN์ด ๋” ์ ํ•ฉํ•˜๋ฏ€๋กœ ์ด๋Ÿฌํ•œ ๋ชจ๋ธ์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ### ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜[[image-classification]] ViT์™€ ConvNeXT ๋ชจ๋‘ ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜์— ์‚ฌ์šฉ๋  ์ˆ˜ ์žˆ์ง€๋งŒ, ViT๋Š” ์–ดํ…์…˜ ๋ฉ”์ปค๋‹ˆ์ฆ˜์„, ConvNeXT๋Š” ํ•ฉ์„ฑ๊ณฑ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ์ฃผ๋œ ์ฐจ์ด์ž…๋‹ˆ๋‹ค. #### Transformer[[transformer]] [ViT](model_doc/vit)์€ ํ•ฉ์„ฑ๊ณฑ์„ ์ „์ ์œผ๋กœ ์ˆœ์ˆ˜ Transformer ์•„ํ‚คํ…์ฒ˜๋กœ ๋Œ€์ฒดํ•ฉ๋‹ˆ๋‹ค. ๊ธฐ์กด Transformer์— ์ต์ˆ™ํ•˜๋‹ค๋ฉด, ViT๋ฅผ ์ดํ•ดํ•˜๋Š” ๋ฐฉ๋ฒ•์˜ ๋Œ€๋ถ€๋ถ„์„ ์ด๋ฏธ ํŒŒ์•…ํ–ˆ๋‹ค๊ณ  ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/vit_architecture.jpg"/> </div> ViT๊ฐ€ ๋„์ž…ํ•œ ์ฃผ์š” ๋ณ€๊ฒฝ ์‚ฌํ•ญ์€ ์ด๋ฏธ์ง€๊ฐ€ Transformer๋กœ ์–ด๋–ป๊ฒŒ ์ „๋‹ฌ๋˜๋Š”์ง€์— ์žˆ์Šต๋‹ˆ๋‹ค: 1. ์ด๋ฏธ์ง€๋Š” ์„œ๋กœ ์ค‘์ฒฉ๋˜์ง€ ์•Š๋Š” ์ •์‚ฌ๊ฐํ˜• ํŒจ์น˜๋กœ ๋ถ„ํ• ๋˜๊ณ , ๊ฐ ํŒจ์น˜๋Š” ๋ฒกํ„ฐ ๋˜๋Š” *ํŒจ์น˜ ์ž„๋ฒ ๋”ฉ(patch embedding)*์œผ๋กœ ๋ณ€ํ™˜๋ฉ๋‹ˆ๋‹ค. ํŒจ์น˜ ์ž„๋ฒ ๋”ฉ์€ ์ ์ ˆํ•œ ์ž…๋ ฅ ์ฐจ์›์„ ๋งŒ๋“œ๋Š” 2D ํ•ฉ์„ฑ๊ณฑ ๊ณ„์ธต์—์„œ ์ƒ์„ฑ๋ฉ๋‹ˆ๋‹ค(๊ธฐ๋ณธ Transformer์˜ ๊ฒฝ์šฐ ๊ฐ ํŒจ์น˜์˜ ์ž„๋ฒ ๋”ฉ๋งˆ๋‹ค 768๊ฐœ์˜ ๊ฐ’์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค). 224x224 ํ”ฝ์…€ ์ด๋ฏธ์ง€๊ฐ€ ์žˆ๋‹ค๋ฉด, 16x16 ์ด๋ฏธ์ง€ ํŒจ์น˜ 196๊ฐœ๋กœ ๋ถ„ํ• ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ…์ŠคํŠธ๊ฐ€ ๋‹จ์–ด๋กœ ํ† ํฐํ™”๋˜๋Š” ๊ฒƒ์ฒ˜๋Ÿผ, ์ด๋ฏธ์ง€๋„ ํŒจ์น˜ ์‹œํ€€์Šค๋กœ "ํ† ํฐํ™”"๋ฉ๋‹ˆ๋‹ค. 2. *ํ•™์Šต ๊ฐ€๋Šฅํ•œ ์ž„๋ฒ ๋”ฉ(learnable embedding)*(ํŠน์ˆ˜ํ•œ `[CLS]` ํ† ํฐ)์ด BERT์™€ ๊ฐ™์ด ํŒจ์น˜ ์ž„๋ฒ ๋”ฉ์˜ ์‹œ์ž‘ ๋ถ€๋ถ„์— ์ถ”๊ฐ€๋ฉ๋‹ˆ๋‹ค. `[CLS]` ํ† ํฐ์˜ ๋งˆ์ง€๋ง‰ ์€๋‹‰ ์ƒํƒœ๋Š” ๋ถ€์ฐฉ๋œ ๋ถ„๋ฅ˜ ํ—ค๋“œ์˜ ์ž…๋ ฅ์œผ๋กœ ์‚ฌ์šฉ๋˜๊ณ , ๋‹ค๋ฅธ ์ถœ๋ ฅ์€ ๋ฌด์‹œ๋ฉ๋‹ˆ๋‹ค. ์ด ํ† ํฐ์€ ๋ชจ๋ธ์ด ์ด๋ฏธ์ง€์˜ ํ‘œํ˜„์„ ์ธ์ฝ”๋”ฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ•™์Šตํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋ฉ๋‹ˆ๋‹ค. 3. ํŒจ์น˜์™€ ํ•™์Šต ๊ฐ€๋Šฅํ•œ ์ž„๋ฒ ๋”ฉ์— ๋งˆ์ง€๋ง‰์œผ๋กœ ์ถ”๊ฐ€ํ•  ๊ฒƒ์€ *์œ„์น˜ ์ž„๋ฒ ๋”ฉ*์ž…๋‹ˆ๋‹ค. ์™œ๋ƒํ•˜๋ฉด ๋ชจ๋ธ์€ ์ด๋ฏธ์ง€ ํŒจ์น˜์˜ ์ˆœ์„œ๋ฅผ ๋ชจ๋ฅด๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ์œ„์น˜ ์ž„๋ฒ ๋”ฉ๋„ ํ•™์Šต ๊ฐ€๋Šฅํ•˜๋ฉฐ, ํŒจ์น˜ ์ž„๋ฒ ๋”ฉ๊ณผ ๋™์ผํ•œ ํฌ๊ธฐ๋ฅผ ๊ฐ€์ง‘๋‹ˆ๋‹ค. ์ตœ์ข…์ ์œผ๋กœ, ๋ชจ๋“  ์ž„๋ฒ ๋”ฉ์ด Transformer ์ธ์ฝ”๋”์— ์ „๋‹ฌ๋ฉ๋‹ˆ๋‹ค. 4. `[CLS]` ํ† ํฐ์„ ํฌํ•จํ•œ ์ถœ๋ ฅ์€ ๋‹ค์ธต ํผ์…‰ํŠธ๋ก  ํ—ค๋“œ(MLP)์— ์ „๋‹ฌ๋ฉ๋‹ˆ๋‹ค. ViT์˜ ์‚ฌ์ „ํ›ˆ๋ จ ๋ชฉํ‘œ๋Š” ๋‹จ์ˆœํžˆ ๋ถ„๋ฅ˜์ž…๋‹ˆ๋‹ค. ๋‹ค๋ฅธ ๋ถ„๋ฅ˜ ํ—ค๋“œ์™€ ๊ฐ™์ด, MLP ํ—ค๋“œ๋Š” ์ถœ๋ ฅ์„ ํด๋ž˜์Šค ๋ ˆ์ด๋ธ”์— ๋Œ€ํ•ด ๋กœ์ง“์œผ๋กœ ๋ณ€ํ™˜ํ•˜๊ณ  ๊ต์ฐจ ์—”ํŠธ๋กœํ”ผ ์†์‹ค์„ ๊ณ„์‚ฐํ•˜์—ฌ ๊ฐ€์žฅ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์€ ํด๋ž˜์Šค๋ฅผ ์ฐพ์Šต๋‹ˆ๋‹ค. 
์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜์— ์ง์ ‘ ๋„์ „ํ•  ์ค€๋น„๊ฐ€ ๋˜์…จ๋‚˜์š”? ์™„์ „ํ•œ [์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜ ๊ฐ€์ด๋“œ](tasks/image_classification)๋ฅผ ํ™•์ธํ•˜์—ฌ ViT๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ•™์Šตํ•˜์„ธ์š”! #### CNN[[cnn]] <Tip> ์ด ์„น์…˜์—์„œ๋Š” ํ•ฉ์„ฑ๊ณฑ์— ๋Œ€ํ•ด ๊ฐ„๋žตํ•˜๊ฒŒ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ด๋ฏธ์ง€์˜ ๋ชจ์–‘๊ณผ ํฌ๊ธฐ๊ฐ€ ์–ด๋–ป๊ฒŒ ๋ณ€ํ™”ํ•˜๋Š”์ง€์— ๋Œ€ํ•œ ์‚ฌ์ „ ์ดํ•ด๊ฐ€ ์žˆ๋‹ค๋ฉด ๋„์›€์ด ๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ํ•ฉ์„ฑ๊ณฑ์— ์ต์ˆ™ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ, fastai book์˜ [ํ•ฉ์„ฑ๊ณฑ ์‹ ๊ฒฝ๋ง ์ฑ•ํ„ฐ](https://github.com/fastai/fastbook/blob/master/13_convolutions.ipynb)๋ฅผ ํ™•์ธํ•˜์„ธ์š”! </Tip> [ConvNeXT](model_doc/convnext)๋Š” ์„ฑ๋Šฅ์„ ๋†’์ด๊ธฐ ์œ„ํ•ด ์ƒˆ๋กœ์šด ํ˜„๋Œ€ ๋„คํŠธ์›Œํฌ ์„ค๊ณ„๋ฅผ ์ ์šฉํ•œ CNN ๊ตฌ์กฐ์ž…๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ํ•ฉ์„ฑ๊ณฑ์€ ์—ฌ์ „ํžˆ ๋ชจ๋ธ์˜ ํ•ต์‹ฌ์ž…๋‹ˆ๋‹ค. ๋†’์€ ์ˆ˜์ค€์˜ ๊ด€์ ์—์„œ ๋ณผ ๋•Œ, [ํ•ฉ์„ฑ๊ณฑ](glossary#convolution)์€ ์ž‘์€ ํ–‰๋ ฌ(*์ปค๋„*)์— ์ด๋ฏธ์ง€ ํ”ฝ์…€์˜ ์ž‘์€ ์œˆ๋„์šฐ๋ฅผ ๊ณฑํ•˜๋Š” ์—ฐ์‚ฐ์ž…๋‹ˆ๋‹ค. ์ด๋Š” ํŠน์ • ํ…์Šค์ณ(texture)์ด๋‚˜ ์„ ์˜ ๊ณก๋ฅ ๊ณผ ๊ฐ™์€ ์ผ๋ถ€ ํŠน์ง•์„ ๊ณ„์‚ฐํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๊ณ  ๋‹ค์Œ ํ”ฝ์…€ ์œˆ๋„์šฐ๋กœ ๋„˜์–ด๊ฐ€๋Š”๋ฐ, ์—ฌ๊ธฐ์„œ ํ•ฉ์„ฑ๊ณฑ์ด ์ด๋™ํ•˜๋Š” ๊ฑฐ๋ฆฌ๋ฅผ *๋ณดํญ(stride)*์ด๋ผ๊ณ  ํ•ฉ๋‹ˆ๋‹ค. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convolution.gif"/> </div> <small>ํŒจ๋”ฉ์ด๋‚˜ ๋ณดํญ์ด ์—†๋Š” ๊ธฐ๋ณธ ํ•ฉ์„ฑ๊ณฑ, <a href="https://arxiv.org/abs/1603.07285">๋”ฅ๋Ÿฌ๋‹์„ ์œ„ํ•œ ํ•ฉ์„ฑ๊ณฑ ์—ฐ์‚ฐ ๊ฐ€์ด๋“œ</a></small> ์ด ์ถœ๋ ฅ์„ ๋‹ค๋ฅธ ํ•ฉ์„ฑ๊ณฑ ๋ ˆ์ด์–ด์— ์ „๋‹ฌํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ, ๊ฐ ์—ฐ์†์ ์ธ ๋ ˆ์ด์–ด๋ฅผ ํ†ตํ•ด ๋„คํŠธ์›Œํฌ๋Š” ํ•ซ๋„๊ทธ๋‚˜ ๋กœ์ผ“๊ณผ ๊ฐ™์ด ๋” ๋ณต์žกํ•˜๊ณ  ์ถ”์ƒ์ ์ธ ๊ฒƒ์„ ํ•™์Šตํ•ฉ๋‹ˆ๋‹ค. ํ•ฉ์„ฑ๊ณฑ ๋ ˆ์ด์–ด ์‚ฌ์ด์— ํ’€๋ง ๋ ˆ์ด์–ด๋ฅผ ์ถ”๊ฐ€ํ•˜์—ฌ ์ฐจ์›์„ ์ค„์ด๊ณ  ํŠน์ง•์˜ ์œ„์น˜ ๋ณ€ํ™”์— ๋Œ€ํ•ด ๋ชจ๋ธ์„ ๋” ๊ฒฌ๊ณ ํ•˜๊ฒŒ ๋งŒ๋“œ๋Š” ๊ฒƒ์ด ์ผ๋ฐ˜์ ์ž…๋‹ˆ๋‹ค. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convnext_architecture.png"/> </div> ConvNeXT๋Š” CNN์„ 5๊ฐ€์ง€ ๋ฐฉ์‹์œผ๋กœ ํ˜„๋Œ€ํ™”ํ•ฉ๋‹ˆ๋‹ค: 1. ๊ฐ ๋‹จ๊ณ„์˜ ๋ธ”๋ก ์ˆ˜๋ฅผ ๋ณ€๊ฒฝํ•˜๊ณ  ๋” ํฐ ๋ณดํญ๊ณผ ๊ทธ์— ๋Œ€์‘ํ•˜๋Š” ์ปค๋„ ํฌ๊ธฐ๋กœ ์ด๋ฏธ์ง€๋ฅผ "ํŒจ์น˜ํ™”(patchify)"ํ•ฉ๋‹ˆ๋‹ค. ๊ฒน์น˜์ง€ ์•Š๋Š” ์Šฌ๋ผ์ด๋”ฉ ์œˆ๋„์šฐ๋Š” ViT๊ฐ€ ์ด๋ฏธ์ง€๋ฅผ ํŒจ์น˜๋กœ ๋ถ„ํ• ํ•˜๋Š” ๋ฐฉ๋ฒ•๊ณผ ์œ ์‚ฌํ•˜๊ฒŒ ์ด ํŒจ์น˜ํ™” ์ „๋žต์„ ๋งŒ๋“ญ๋‹ˆ๋‹ค. 2. *๋ณ‘๋ชฉ(bottleneck)* ๋ ˆ์ด์–ด๋Š” ์ฑ„๋„ ์ˆ˜๋ฅผ ์ค„์˜€๋‹ค๊ฐ€ ๋‹ค์‹œ ๋ณต์›ํ•ฉ๋‹ˆ๋‹ค. ์™œ๋ƒํ•˜๋ฉด 1x1 ํ•ฉ์„ฑ๊ณฑ์„ ์ˆ˜ํ–‰ํ•˜๋Š” ๊ฒƒ์ด ๋” ๋น ๋ฅด๊ณ , ๊นŠ์ด๋ฅผ ๋Š˜๋ฆด ์ˆ˜ ์žˆ๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ์—ญ ๋ณ‘๋ชฉ(inverted bottlenect)์€ ์ฑ„๋„ ์ˆ˜๋ฅผ ํ™•์žฅํ•˜๊ณ  ์ถ•์†Œํ•จ์œผ๋กœ์จ ๊ทธ ๋ฐ˜๋Œ€๋กœ ์ˆ˜ํ–‰ํ•˜๋ฏ€๋กœ, ๋ฉ”๋ชจ๋ฆฌ ํšจ์œจ์ด ๋” ๋†’์Šต๋‹ˆ๋‹ค. 3. ๋ณ‘๋ชฉ ๋ ˆ์ด์–ด์˜ ์ผ๋ฐ˜์ ์ธ 3x3 ํ•ฉ์„ฑ๊ณฑ ๋ ˆ์ด์–ด๋ฅผ ๊ฐ ์ž…๋ ฅ ์ฑ„๋„์— ๊ฐœ๋ณ„์ ์œผ๋กœ ํ•ฉ์„ฑ๊ณฑ์„ ์ ์šฉํ•œ ๋‹ค์Œ ๋งˆ์ง€๋ง‰์— ์Œ“๋Š” *๊นŠ์ด๋ณ„ ํ•ฉ์„ฑ๊ณฑ(depthwise convolution)*์œผ๋กœ ๋Œ€์ฒดํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ๋„คํŠธ์›Œํฌ ํญ์ด ๋„“ํ˜€ ์„ฑ๋Šฅ์ด ํ–ฅ์ƒ๋ฉ๋‹ˆ๋‹ค. 4. ViT๋Š” ์–ดํ…์…˜ ๋ฉ”์ปค๋‹ˆ์ฆ˜ ๋•๋ถ„์— ํ•œ ๋ฒˆ์— ๋” ๋งŽ์€ ์ด๋ฏธ์ง€๋ฅผ ๋ณผ ์ˆ˜ ์žˆ๋Š” ์ „์—ญ ์ˆ˜์‹  ํ•„๋“œ๋ฅผ ๊ฐ€์ง€๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ConvNeXT๋Š” ์ปค๋„ ํฌ๊ธฐ๋ฅผ 7x7๋กœ ๋Š˜๋ ค ์ด ํšจ๊ณผ๋ฅผ ์žฌํ˜„ํ•˜๋ ค๊ณ  ์‹œ๋„ํ•ฉ๋‹ˆ๋‹ค. 5. ๋˜ํ•œ ConvNeXT๋Š” Transformer ๋ชจ๋ธ์„ ๋ชจ๋ฐฉํ•˜๋Š” ๋ช‡ ๊ฐ€์ง€ ๋ ˆ์ด์–ด ์„ค๊ณ„๋ฅผ ๋ณ€๊ฒฝํ•ฉ๋‹ˆ๋‹ค. ํ™œ์„ฑํ™” ๋ฐ ์ •๊ทœํ™” ๋ ˆ์ด์–ด๊ฐ€ ๋” ์ ๊ณ , ํ™œ์„ฑํ™” ํ•จ์ˆ˜๊ฐ€ ReLU ๋Œ€์‹  GELU๋กœ ์ „ํ™˜๋˜๊ณ , BatchNorm ๋Œ€์‹  LayerNorm์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. 
ํ•ฉ์„ฑ๊ณฑ ๋ธ”๋ก์˜ ์ถœ๋ ฅ์€ ๋ถ„๋ฅ˜ ํ—ค๋“œ๋กœ ์ „๋‹ฌ๋˜๋ฉฐ, ๋ถ„๋ฅ˜ ํ—ค๋“œ๋Š” ์ถœ๋ ฅ์„ ๋กœ์ง“์œผ๋กœ ๋ณ€ํ™˜ํ•˜๊ณ  ๊ต์ฐจ ์—”ํŠธ๋กœํ”ผ ์†์‹ค์„ ๊ณ„์‚ฐํ•˜์—ฌ ๊ฐ€์žฅ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์€ ๋ ˆ์ด๋ธ”์„ ์ฐพ์Šต๋‹ˆ๋‹ค. ### ๊ฐ์ฒด ํƒ์ง€[[object-detection]] [DETR](model_doc/detr), *DEtection TRansformer*๋Š” CNN๊ณผ Transformer ์ธ์ฝ”๋”-๋””์ฝ”๋”๋ฅผ ๊ฒฐํ•ฉํ•œ ์ข…๋‹จ๊ฐ„(end-to-end) ๊ฐ์ฒด ํƒ์ง€ ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/detr_architecture.png"/> </div> 1. ์‚ฌ์ „ํ›ˆ๋ จ๋œ CNN *๋ฐฑ๋ณธ(backbone)*์€ ํ”ฝ์…€ ๊ฐ’์œผ๋กœ ๋‚˜ํƒ€๋‚ธ ์ด๋ฏธ์ง€๋ฅผ ๊ฐ€์ ธ์™€ ์ €ํ•ด์ƒ๋„ ํŠน์ง• ๋งต์„ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ํŠน์ง• ๋งต์— ๋Œ€ํ•ด 1x1 ํ•ฉ์„ฑ๊ณฑ์„ ์ ์šฉํ•˜์—ฌ ์ฐจ์›์„ ์ค„์ด๊ณ , ๊ณ ์ˆ˜์ค€ ์ด๋ฏธ์ง€ ํ‘œํ˜„์„ ๊ฐ€์ง„ ์ƒˆ๋กœ์šด ํŠน์ง• ๋งต์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. Transformer๋Š” ์‹œํ€€์Šค ๋ชจ๋ธ์ด๊ธฐ ๋•Œ๋ฌธ์— ํŠน์ง• ๋งต์„ ์œ„์น˜ ์ž„๋ฒ ๋”ฉ๊ณผ ๊ฒฐํ•ฉ๋œ ํŠน์ง• ๋ฒกํ„ฐ์˜ ์‹œํ€€์Šค๋กœ ํ‰ํƒ„ํ™”ํ•ฉ๋‹ˆ๋‹ค. 2. ํŠน์ง• ๋ฒกํ„ฐ๋Š” ์–ดํ…์…˜ ๋ ˆ์ด์–ด๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ด๋ฏธ์ง€ ํ‘œํ˜„์„ ํ•™์Šตํ•˜๋Š” ์ธ์ฝ”๋”์— ์ „๋‹ฌ๋ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ์œผ๋กœ, ์ธ์ฝ”๋”์˜ ์€๋‹‰ ์ƒํƒœ๋Š” ๋””์ฝ”๋”์—์„œ *๊ฐ์ฒด ์ฟผ๋ฆฌ*์™€ ๊ฒฐํ•ฉ๋ฉ๋‹ˆ๋‹ค. ๊ฐ์ฒด ์ฟผ๋ฆฌ๋Š” ์ด๋ฏธ์ง€์˜ ๋‹ค๋ฅธ ์˜์—ญ์— ์ดˆ์ ์„ ๋งž์ถ˜ ํ•™์Šต๋œ ์ž„๋ฒ ๋”ฉ์œผ๋กœ ํ•™์Šต๋˜๊ณ , ๊ฐ ์–ดํ…์…˜ ๋ ˆ์ด์–ด๋ฅผ ์ง„ํ–‰ํ•˜๋ฉด์„œ ๊ฐฑ์‹ ๋ฉ๋‹ˆ๋‹ค. ๋””์ฝ”๋”์˜ ์€๋‹‰ ์ƒํƒœ๋Š” ๊ฐ ๊ฐ์ฒด ์ฟผ๋ฆฌ์— ๋Œ€ํ•œ ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค ์ขŒํ‘œ์™€ ํด๋ž˜์Šค ๋ ˆ์ด๋ธ”์„ ์˜ˆ์ธกํ•˜๋Š” ์ˆœ๋ฐฉํ–ฅ ๋„คํŠธ์›Œํฌ์— ์ „๋‹ฌ๋˜๋ฉฐ, ๊ฐ์ฒด๊ฐ€ ์—†๋Š” ๊ฒฝ์šฐ `no object`๊ฐ€ ์ถœ๋ ฅ๋ฉ๋‹ˆ๋‹ค. DETR์€ ๊ฐ ๊ฐ์ฒด ์ฟผ๋ฆฌ๋ฅผ ๋ณ‘๋ ฌ๋กœ ๋””์ฝ”๋”ฉํ•˜์—ฌ *N* ๊ฐœ์˜ ์ตœ์ข… ์˜ˆ์ธก์„ ์ถœ๋ ฅํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ *N*์€ ์ฟผ๋ฆฌ ์ˆ˜์ž…๋‹ˆ๋‹ค. ํ•œ ๋ฒˆ์— ํ•˜๋‚˜์˜ ์š”์†Œ๋ฅผ ์˜ˆ์ธกํ•˜๋Š” ์ผ๋ฐ˜์ ์ธ ์ž๊ธฐํšŒ๊ท€ ๋ชจ๋ธ๊ณผ ๋‹ฌ๋ฆฌ, ๊ฐ์ฒด ํƒ์ง€๋Š” ํ•œ ๋ฒˆ์— *N* ๊ฐœ์˜ ์˜ˆ์ธก์„ ์ˆ˜ํ–‰ํ•˜๋Š” ์ง‘ํ•ฉ ์˜ˆ์ธก ์ž‘์—…(`๋ฐ”์šด๋”ฉ ๋ฐ•์Šค`, `ํด๋ž˜์Šค ๋ ˆ์ด๋ธ”`)์ž…๋‹ˆ๋‹ค. 3. DETR์€ ํ›ˆ๋ จ ์ค‘ *์ด๋ถ„ ๋งค์นญ ์†์‹ค(bipartite matching loss)*์„ ์‚ฌ์šฉํ•˜์—ฌ ๊ณ ์ •๋œ ์ˆ˜์˜ ์˜ˆ์ธก๊ณผ ๊ณ ์ •๋œ ์‹ค์ œ ์ •๋‹ต ๋ ˆ์ด๋ธ”(ground truth labels) ์„ธํŠธ๋ฅผ ๋น„๊ตํ•ฉ๋‹ˆ๋‹ค. *N*๊ฐœ์˜ ๋ ˆ์ด๋ธ” ์„ธํŠธ์— ์‹ค์ œ ์ •๋‹ต ๋ ˆ์ด๋ธ”๋ณด๋‹ค ์ ์€ ๊ฒฝ์šฐ, `no object` ํด๋ž˜์Šค๋กœ ํŒจ๋”ฉ๋ฉ๋‹ˆ๋‹ค. ์ด ์†์‹ค ํ•จ์ˆ˜๋Š” DETR์ด ์˜ˆ์ธก๊ณผ ์‹ค์ œ ์ •๋‹ต ๋ ˆ์ด๋ธ” ๊ฐ„ 1:1 ๋Œ€์‘์„ ์ฐพ๋„๋ก ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค ๋˜๋Š” ํด๋ž˜์Šค ๋ ˆ์ด๋ธ” ์ค‘ ํ•˜๋‚˜๋ผ๋„ ์ž˜๋ชป๋œ ๊ฒฝ์šฐ, ์†์‹ค์ด ๋ฐœ์ƒํ•ฉ๋‹ˆ๋‹ค. ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ, ์กด์žฌํ•˜์ง€ ์•Š๋Š” ๊ฐ์ฒด๋ฅผ ์˜ˆ์ธกํ•˜๋Š” ๊ฒฝ์šฐ, ํŒจ๋„ํ‹ฐ๋ฅผ ๋ฐ›์Šต๋‹ˆ๋‹ค. ์ด๋กœ ์ธํ•ด DETR์€ ์ด๋ฏธ์ง€์—์„œ ๋ˆˆ์— ์ž˜ ๋„๋Š” ๋ฌผ์ฒด ํ•˜๋‚˜์— ์ง‘์ค‘ํ•˜๋Š” ๋Œ€์‹ , ๋‹ค๋ฅธ ๊ฐ์ฒด๋ฅผ ์ฐพ๋„๋ก ๊ถŒ์žฅ๋ฉ๋‹ˆ๋‹ค. ๊ฐ์ฒด ํƒ์ง€ ํ—ค๋“œ๊ฐ€ DETR ์ƒ๋‹จ์— ์ถ”๊ฐ€๋˜์–ด ํด๋ž˜์Šค ๋ ˆ์ด๋ธ”๊ณผ ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค์˜ ์ขŒํ‘œ๋ฅผ ์ฐพ์Šต๋‹ˆ๋‹ค. ๊ฐ์ฒด ํƒ์ง€ ํ—ค๋“œ์—๋Š” ๋‘ ๊ฐ€์ง€ ๊ตฌ์„ฑ ์š”์†Œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค: ๋””์ฝ”๋” ์€๋‹‰ ์ƒํƒœ๋ฅผ ํด๋ž˜์Šค ๋ ˆ์ด๋ธ”์˜ ๋กœ์ง“์œผ๋กœ ๋ณ€ํ™˜ํ•˜๋Š” ์„ ํ˜• ๋ ˆ์ด์–ด ๋ฐ ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค๋ฅผ ์˜ˆ์ธกํ•˜๋Š” MLP ๊ฐ์ฒด ํƒ์ง€์— ์ง์ ‘ ๋„์ „ํ•  ์ค€๋น„๊ฐ€ ๋˜์…จ๋‚˜์š”? ์™„์ „ํ•œ [๊ฐ์ฒด ํƒ์ง€ ๊ฐ€์ด๋“œ](tasks/object_detection)๋ฅผ ํ™•์ธํ•˜์—ฌ DETR์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ•™์Šตํ•˜์„ธ์š”! ### ์ด๋ฏธ์ง€ ๋ถ„ํ• [[image-segmentation]] [Mask2Former](model_doc/mask2former)๋Š” ๋ชจ๋“  ์œ ํ˜•์˜ ์ด๋ฏธ์ง€ ๋ถ„ํ•  ์ž‘์—…์„ ํ•ด๊ฒฐํ•˜๋Š” ๋ฒ”์šฉ ์•„ํ‚คํ…์ฒ˜์ž…๋‹ˆ๋‹ค. 
์ „ํ†ต์ ์ธ ๋ถ„ํ•  ๋ชจ๋ธ์€ ์ผ๋ฐ˜์ ์œผ๋กœ ์‹œ๋ฉ˜ํ‹ฑ(semantic) ๋˜๋Š” ํŒŒ๋†‰ํ‹ฑ(panoptic) ๋ถ„ํ• ๊ณผ ๊ฐ™์€ ์ด๋ฏธ์ง€ ๋ถ„ํ• ์˜ ํŠน์ • ํ•˜์œ„ ์ž‘์—…์— ๋งž์ถฐ ์กฐ์ •๋ฉ๋‹ˆ๋‹ค. Mask2Former๋Š” ๋ชจ๋“  ์ž‘์—…์„ *๋งˆ์Šคํฌ ๋ถ„๋ฅ˜* ๋ฌธ์ œ๋กœ ๊ตฌ์„ฑํ•ฉ๋‹ˆ๋‹ค. ๋งˆ์Šคํฌ ๋ถ„๋ฅ˜๋Š” ํ”ฝ์…€์„ *N*๊ฐœ ์„ธ๊ทธ๋จผํŠธ๋กœ ๊ทธ๋ฃนํ™”ํ•˜๊ณ , ์ฃผ์–ด์ง„ ์ด๋ฏธ์ง€์— ๋Œ€ํ•ด *N*๊ฐœ์˜ ๋งˆ์Šคํฌ์™€ ๊ทธ์— ๋Œ€์‘ํ•˜๋Š” ํด๋ž˜์Šค ๋ ˆ์ด๋ธ”์„ ์˜ˆ์ธกํ•ฉ๋‹ˆ๋‹ค. ์ด ์„น์…˜์—์„œ Mask2Former์˜ ์ž‘๋™ ๋ฐฉ๋ฒ•์„ ์„ค๋ช…ํ•œ ๋‹ค์Œ, ๋งˆ์ง€๋ง‰์— SegFormer๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•ด๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/mask2former_architecture.png"/> </div> Mask2Former์—๋Š” 3๊ฐ€์ง€ ์ฃผ์š” ๊ตฌ์„ฑ ์š”์†Œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค: 1. [Swin](model_doc/swin) ๋ฐฑ๋ณธ์ด ์ด๋ฏธ์ง€๋ฅผ ๋ฐ›์•„ 3๊ฐœ์˜ ์—ฐ์†๋œ 3x3 ํ•ฉ์„ฑ๊ณฑ์—์„œ ์ €ํ•ด์ƒ๋„ ์ด๋ฏธ์ง€ ํŠน์ง• ๋งต์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. 2. ํŠน์ง• ๋งต์€ *ํ”ฝ์…€ ๋””์ฝ”๋”*์— ์ „๋‹ฌ๋ฉ๋‹ˆ๋‹ค. ์ด ๋””์ฝ”๋”๋Š” ์ €ํ•ด์ƒ๋„ ํŠน์ง•์„ ๊ณ ํ•ด์ƒ๋„ ํ”ฝ์…€ ์ž„๋ฒ ๋”ฉ์œผ๋กœ ์ ์ง„์ ์œผ๋กœ ์—…์ƒ˜ํ”Œ๋งํ•ฉ๋‹ˆ๋‹ค. ํ”ฝ์…€ ๋””์ฝ”๋”๋Š” ์‹ค์ œ๋กœ ์›๋ณธ ์ด๋ฏธ์ง€์˜ 1/32, 1/16, 1/8 ํ•ด์ƒ๋„์˜ ๋‹ค์ค‘ ์Šค์ผ€์ผ ํŠน์ง•(์ €ํ•ด์ƒ๋„ ๋ฐ ๊ณ ํ•ด์ƒ๋„ ํŠน์ง• ๋ชจ๋‘ ํฌํ•จ)์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. 3. ์ด๋Ÿฌํ•œ ์„œ๋กœ ๋‹ค๋ฅธ ํฌ๊ธฐ์˜ ํŠน์ง• ๋งต์€ ๊ณ ํ•ด์ƒ๋„ ํŠน์ง•์—์„œ ์ž‘์€ ๊ฐ์ฒด๋ฅผ ํฌ์ฐฉํ•˜๊ธฐ ์œ„ํ•ด ํ•œ ๋ฒˆ์— ํ•˜๋‚˜์˜ Transformer ๋””์ฝ”๋” ๋ ˆ์ด์–ด์— ์—ฐ์†์ ์œผ๋กœ ๊ณต๊ธ‰๋ฉ๋‹ˆ๋‹ค. Mask2Former์˜ ํ•ต์‹ฌ์€ ๋””์ฝ”๋”์˜ *๋งˆ์Šคํฌ ์–ดํ…์…˜* ๋ฉ”์ปค๋‹ˆ์ฆ˜์ž…๋‹ˆ๋‹ค. ์ „์ฒด ์ด๋ฏธ์ง€๋ฅผ ์ฐธ์กฐํ•  ์ˆ˜ ์žˆ๋Š” ํฌ๋กœ์Šค ์–ดํ…์…˜(cross-attention)๊ณผ ๋‹ฌ๋ฆฌ, ๋งˆ์Šคํฌ ์–ดํ…์…˜์€ ์ด๋ฏธ์ง€์˜ ํŠน์ • ์˜์—ญ์—๋งŒ ์ง‘์ค‘ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ์ด๋ฏธ์ง€์˜ ์ง€์—ญ์  ํŠน์ง•๋งŒ์œผ๋กœ ๋ชจ๋ธ์ด ์ถฉ๋ถ„ํžˆ ํ•™์Šตํ•  ์ˆ˜ ์žˆ๊ธฐ ๋•Œ๋ฌธ์— ๋” ๋น ๋ฅด๊ณ  ์„ฑ๋Šฅ์ด ์šฐ์ˆ˜ํ•ฉ๋‹ˆ๋‹ค. 4. [DETR](tasks_explained#object-detection)๊ณผ ๊ฐ™์ด, Mask2Former๋Š” ํ•™์Šต๋œ ๊ฐ์ฒด ์ฟผ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜๊ณ  ์ด๋ฅผ ํ”ฝ์…€ ๋””์ฝ”๋”์—์„œ์˜ ์ด๋ฏธ์ง€ ํŠน์ง•๊ณผ ๊ฒฐํ•ฉํ•˜์—ฌ ์˜ˆ์ธก ์ง‘ํ•ฉ(`ํด๋ž˜์Šค ๋ ˆ์ด๋ธ”`, `๋งˆ์Šคํฌ ์˜ˆ์ธก`)์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ๋””์ฝ”๋”์˜ ์€๋‹‰ ์ƒํƒœ๋Š” ์„ ํ˜• ๋ ˆ์ด์–ด๋กœ ์ „๋‹ฌ๋˜์–ด ํด๋ž˜์Šค ๋ ˆ์ด๋ธ”์— ๋Œ€ํ•œ ๋กœ์ง“์œผ๋กœ ๋ณ€ํ™˜๋ฉ๋‹ˆ๋‹ค. ๋กœ์ง“๊ณผ ํด๋ž˜์Šค ๋ ˆ์ด๋ธ” ์‚ฌ์ด์˜ ๊ต์ฐจ ์—”ํŠธ๋กœํ”ผ ์†์‹ค์„ ๊ณ„์‚ฐํ•˜์—ฌ ๊ฐ€์žฅ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์€ ๊ฒƒ์„ ์ฐพ์Šต๋‹ˆ๋‹ค. ๋งˆ์Šคํฌ ์˜ˆ์ธก์€ ํ”ฝ์…€ ์ž„๋ฒ ๋”ฉ๊ณผ ์ตœ์ข… ๋””์ฝ”๋” ์€๋‹‰ ์ƒํƒœ๋ฅผ ๊ฒฐํ•ฉํ•˜์—ฌ ์ƒ์„ฑ๋ฉ๋‹ˆ๋‹ค. ์‹œ๊ทธ๋ชจ์ด๋“œ ๊ต์ฐจ ์—”ํŠธ๋กœํ”ผ ๋ฐ Dice ์†์‹ค์€ ๋กœ์ง“๊ณผ ์‹ค์ œ ์ •๋‹ต ๋งˆ์Šคํฌ(ground truth mask) ์‚ฌ์ด์—์„œ ๊ณ„์‚ฐ๋˜์–ด ๊ฐ€์žฅ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์€ ๋งˆ์Šคํฌ๋ฅผ ์ฐพ์Šต๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ๋ถ„ํ• ์— ์ง์ ‘ ๋„์ „ํ•  ์ค€๋น„๊ฐ€ ๋˜์…จ๋‚˜์š”? ์™„์ „ํ•œ [์ด๋ฏธ์ง€ ๋ถ„ํ•  ๊ฐ€์ด๋“œ](tasks/semantic_segmentation)๋ฅผ ํ™•์ธํ•˜์—ฌ SegFormer๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ•™์Šตํ•˜์„ธ์š”! ### ๊นŠ์ด ์ถ”์ •[[depth-estimation]] [GLPN](model_doc/glpn), *Global-Local Path Network*๋Š” [SegFormer](model_doc/segformer) ์ธ์ฝ”๋”์™€ ๊ฒฝ๋Ÿ‰ ๋””์ฝ”๋”๋ฅผ ๊ฒฐํ•ฉํ•œ ๊นŠ์ด ์ถ”์ •์„ ์œ„ํ•œ Transformer์ž…๋‹ˆ๋‹ค. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/glpn_architecture.jpg"/> </div> 1. ViT์™€ ๊ฐ™์ด, ์ด๋ฏธ์ง€๋Š” ํŒจ์น˜ ์‹œํ€€์Šค๋กœ ๋ถ„ํ• ๋˜์ง€๋งŒ, ์ด๋ฏธ์ง€ ํŒจ์น˜๊ฐ€ ๋” ์ž‘๋‹ค๋Š” ์ ์ด ๋‹ค๋ฆ…๋‹ˆ๋‹ค. ์ด๋Š” ์„ธ๊ทธ๋ฉ˜ํ…Œ์ด์…˜์ด๋‚˜ ๊นŠ์ด ์ถ”์ •๊ณผ ๊ฐ™์€ ๋ฐ€๋„ ์˜ˆ์ธก ์ž‘์—…์— ๋” ์ ํ•ฉํ•ฉ๋‹ˆ๋‹ค. 
์ด๋ฏธ์ง€ ํŒจ์น˜๋Š” ํŒจ์น˜ ์ž„๋ฒ ๋”ฉ์œผ๋กœ ๋ณ€ํ™˜๋˜์–ด(ํŒจ์น˜ ์ž„๋ฒ ๋”ฉ์ด ์ƒ์„ฑ๋˜๋Š” ๋ฐฉ๋ฒ•์€ [์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜](#image-classification) ์„น์…˜์„ ์ฐธ์กฐํ•˜์„ธ์š”), ์ธ์ฝ”๋”๋กœ ์ „๋‹ฌ๋ฉ๋‹ˆ๋‹ค. 2. ์ธ์ฝ”๋”๋Š” ํŒจ์น˜ ์ž„๋ฒ ๋”ฉ์„ ๋ฐ›์•„, ์—ฌ๋Ÿฌ ์ธ์ฝ”๋” ๋ธ”๋ก์— ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. ๊ฐ ๋ธ”๋ก์€ ์–ดํ…์…˜ ๋ฐ Mix-FFN ๋ ˆ์ด์–ด๋กœ ๊ตฌ์„ฑ๋ฉ๋‹ˆ๋‹ค. ํ›„์ž์˜ ๋ชฉ์ ์€ ์œ„์น˜ ์ •๋ณด๋ฅผ ์ œ๊ณตํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๊ฐ ์ธ์ฝ”๋” ๋ธ”๋ก์˜ ๋์—๋Š” ๊ณ„์ธต์  ํ‘œํ˜„์„ ์ƒ์„ฑํ•˜๊ธฐ ์œ„ํ•œ *ํŒจ์น˜ ๋ณ‘ํ•ฉ(patch merging)* ๋ ˆ์ด์–ด๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ฐ ์ธ์ ‘ํ•œ ํŒจ์น˜ ๊ทธ๋ฃน์˜ ํŠน์ง•์€ ์—ฐ๊ฒฐ๋˜๊ณ , ์—ฐ๊ฒฐ๋œ ํŠน์ง•์— ์„ ํ˜• ๋ ˆ์ด์–ด๊ฐ€ ์ ์šฉ๋˜์–ด ํŒจ์น˜ ์ˆ˜๋ฅผ 1/4์˜ ํ•ด์ƒ๋„๋กœ ์ค„์ž…๋‹ˆ๋‹ค. ์ด๋Š” ๋‹ค์Œ ์ธ์ฝ”๋” ๋ธ”๋ก์˜ ์ž…๋ ฅ์ด ๋˜๋ฉฐ, ์ด๋Ÿฌํ•œ ์ „์ฒด ํ”„๋กœ์„ธ์Šค๋Š” 1/8, 1/16, 1/32 ํ•ด์ƒ๋„์˜ ์ด๋ฏธ์ง€ ํŠน์ง•์„ ๊ฐ€์งˆ ๋•Œ๊นŒ์ง€ ๋ฐ˜๋ณต๋ฉ๋‹ˆ๋‹ค. 3. ๊ฒฝ๋Ÿ‰ ๋””์ฝ”๋”๋Š” ์ธ์ฝ”๋”์—์„œ ๋งˆ์ง€๋ง‰ ํŠน์ง• ๋งต(1/32 ํฌ๊ธฐ)์„ ๊ฐ€์ ธ์™€ 1/16 ํฌ๊ธฐ๋กœ ์—…์ƒ˜ํ”Œ๋งํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ, ํŠน์ง•์€ *์„ ํƒ์  ํŠน์ง• ์œตํ•ฉ(SFF, Selective Feature Fusion)* ๋ชจ๋“ˆ๋กœ ์ „๋‹ฌ๋ฉ๋‹ˆ๋‹ค. ์ด ๋ชจ๋“ˆ์€ ๊ฐ ํŠน์ง•์— ๋Œ€ํ•ด ์–ดํ…์…˜ ๋งต์—์„œ ๋กœ์ปฌ ๋ฐ ์ „์—ญ ํŠน์ง•์„ ์„ ํƒํ•˜๊ณ  ๊ฒฐํ•ฉํ•œ ๋‹ค์Œ, 1/8๋กœ ์—…์ƒ˜ํ”Œ๋งํ•ฉ๋‹ˆ๋‹ค. ์ด ํ”„๋กœ์„ธ์Šค๋Š” ๋””์ฝ”๋”ฉ๋œ ํŠน์„ฑ์ด ์›๋ณธ ์ด๋ฏธ์ง€์™€ ๋™์ผํ•œ ํฌ๊ธฐ๊ฐ€ ๋  ๋•Œ๊นŒ์ง€ ๋ฐ˜๋ณต๋ฉ๋‹ˆ๋‹ค. ์ถœ๋ ฅ์€ ๋‘ ๊ฐœ์˜ ํ•ฉ์„ฑ๊ณฑ ๋ ˆ์ด์–ด๋ฅผ ๊ฑฐ์นœ ๋‹ค์Œ, ์‹œ๊ทธ๋ชจ์ด๋“œ ํ™œ์„ฑํ™”๊ฐ€ ์ ์šฉ๋˜์–ด ๊ฐ ํ”ฝ์…€์˜ ๊นŠ์ด๋ฅผ ์˜ˆ์ธกํ•ฉ๋‹ˆ๋‹ค. ## ์ž์—ฐ์–ด์ฒ˜๋ฆฌ[[natural-language-processing]] Transformer๋Š” ์ดˆ๊ธฐ์— ๊ธฐ๊ณ„ ๋ฒˆ์—ญ์„ ์œ„ํ•ด ์„ค๊ณ„๋˜์—ˆ๊ณ , ๊ทธ ์ดํ›„๋กœ๋Š” ์‚ฌ์‹ค์ƒ ๋ชจ๋“  NLP ์ž‘์—…์„ ํ•ด๊ฒฐํ•˜๊ธฐ ์œ„ํ•œ ๊ธฐ๋ณธ ์•„ํ‚คํ…์ฒ˜๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์–ด๋–ค ์ž‘์—…์€ Transformer์˜ ์ธ์ฝ”๋” ๊ตฌ์กฐ์— ์ ํ•ฉํ•˜๋ฉฐ, ๋‹ค๋ฅธ ์ž‘์—…์€ ๋””์ฝ”๋”์— ๋” ์ ํ•ฉํ•ฉ๋‹ˆ๋‹ค. ๋˜ ๋‹ค๋ฅธ ์ž‘์—…์€ Transformer์˜ ์ธ์ฝ”๋”-๋””์ฝ”๋” ๊ตฌ์กฐ๋ฅผ ๋ชจ๋‘ ํ™œ์šฉํ•ฉ๋‹ˆ๋‹ค. ### ํ…์ŠคํŠธ ๋ถ„๋ฅ˜[[text-classification]] [BERT](model_doc/bert)๋Š” ์ธ์ฝ”๋” ์ „์šฉ ๋ชจ๋ธ์ด๋ฉฐ, ํ…์ŠคํŠธ์˜ ํ’๋ถ€ํ•œ ํ‘œํ˜„์„ ํ•™์Šตํ•˜๊ธฐ ์œ„ํ•ด ์–‘๋ฐฉํ–ฅ์˜ ๋‹จ์–ด์— ์ฃผ๋ชฉํ•จ์œผ๋กœ์จ ์‹ฌ์ธต ์–‘๋ฐฉํ–ฅ์„ฑ(deep bidirectionality)์„ ํšจ๊ณผ์ ์œผ๋กœ ๊ตฌํ˜„ํ•œ ์ตœ์ดˆ์˜ ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. 1. BERT๋Š” [WordPiece](tokenizer_summary#wordpiece) ํ† ํฐํ™”๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฌธ์žฅ์˜ ํ† ํฐ ์ž„๋ฒ ๋”ฉ์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ๋‹จ์ผ ๋ฌธ์žฅ๊ณผ ํ•œ ์Œ์˜ ๋ฌธ์žฅ์„ ๊ตฌ๋ถ„ํ•˜๊ธฐ ์œ„ํ•ด ํŠน์ˆ˜ํ•œ `[SEP]` ํ† ํฐ์ด ์ถ”๊ฐ€๋ฉ๋‹ˆ๋‹ค. ๋ชจ๋“  ํ…์ŠคํŠธ ์‹œํ€€์Šค์˜ ์‹œ์ž‘ ๋ถ€๋ถ„์—๋Š” ํŠน์ˆ˜ํ•œ `[CLS]` ํ† ํฐ์ด ์ถ”๊ฐ€๋ฉ๋‹ˆ๋‹ค. `[CLS]` ํ† ํฐ์ด ์žˆ๋Š” ์ตœ์ข… ์ถœ๋ ฅ์€ ๋ถ„๋ฅ˜ ์ž‘์—…์„ ์œ„ํ•œ ๋ถ„๋ฅ˜ ํ—ค๋“œ๋กœ ์ž…๋ ฅ์— ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. BERT๋Š” ๋˜ํ•œ ํ•œ ์Œ์˜ ๋ฌธ์žฅ์—์„œ ๊ฐ ํ† ํฐ์ด ์ฒซ ๋ฒˆ์งธ ๋ฌธ์žฅ์ธ์ง€ ๋‘ ๋ฒˆ์งธ ๋ฌธ์žฅ์— ์†ํ•˜๋Š”์ง€ ๋‚˜ํƒ€๋‚ด๋Š” ์„ธ๊ทธ๋จผํŠธ ์ž„๋ฒ ๋”ฉ(segment embedding)์„ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. 2. BERT๋Š” ๋งˆ์Šคํฌ๋“œ ์–ธ์–ด ๋ชจ๋ธ๋ง๊ณผ ๋‹ค์Œ ๋ฌธ์žฅ ์˜ˆ์ธก, ๋‘ ๊ฐ€์ง€ ๋ชฉ์ ์œผ๋กœ ์‚ฌ์ „ํ›ˆ๋ จ๋ฉ๋‹ˆ๋‹ค. ๋งˆ์Šคํฌ๋“œ ์–ธ์–ด ๋ชจ๋ธ๋ง์—์„œ๋Š” ์ž…๋ ฅ ํ† ํฐ์˜ ์ผ๋ถ€๊ฐ€ ๋ฌด์ž‘์œ„๋กœ ๋งˆ์Šคํ‚น๋˜๊ณ , ๋ชจ๋ธ์€ ์ด๋ฅผ ์˜ˆ์ธกํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ๋ชจ๋ธ์ด ๋ชจ๋“  ๋‹จ์–ด๋ฅผ ๋ณด๊ณ  ๋‹ค์Œ ๋‹จ์–ด๋ฅผ "์˜ˆ์ธก"ํ•  ์ˆ˜ ์žˆ๋Š” ์–‘๋ฐฉํ–ฅ์„ฑ ๋ฌธ์ œ๋ฅผ ํ•ด๊ฒฐํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ์ธก๋œ ๋งˆ์Šคํฌ ํ† ํฐ์˜ ์ตœ์ข… ์€๋‹‰ ์ƒํƒœ๋Š” ์–ดํœ˜์— ๋Œ€ํ•œ ์†Œํ”„ํŠธ๋งฅ์Šค๊ฐ€ ์žˆ๋Š” ์ˆœ๋ฐฉํ–ฅ ๋„คํŠธ์›Œํฌ๋กœ ์ „๋‹ฌ๋˜์–ด ๋งˆ์Šคํฌ๋œ ๋‹จ์–ด๋ฅผ ์˜ˆ์ธกํ•ฉ๋‹ˆ๋‹ค. ๋‘ ๋ฒˆ์งธ ์‚ฌ์ „ํ›ˆ๋ จ ๋Œ€์ƒ์€ ๋‹ค์Œ ๋ฌธ์žฅ ์˜ˆ์ธก์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ์€ ๋ฌธ์žฅ B๊ฐ€ ๋ฌธ์žฅ A ๋‹ค์Œ์— ์˜ค๋Š”์ง€ ์˜ˆ์ธกํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 
๋ฌธ์žฅ B๊ฐ€ ๋‹ค์Œ ๋ฌธ์žฅ์ธ ๊ฒฝ์šฐ์™€ ๋ฌด์ž‘์œ„ ๋ฌธ์žฅ์ธ ๊ฒฝ์šฐ ๊ฐ๊ฐ 50%์˜ ํ™•๋ฅ ๋กœ ๋ฐœ์ƒํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ ๋ฌธ์žฅ์ธ์ง€ ์•„๋‹Œ์ง€์— ๋Œ€ํ•œ ์˜ˆ์ธก์€ ๋‘ ๊ฐœ์˜ ํด๋ž˜์Šค(`IsNext` ๋ฐ `NotNext`)์— ๋Œ€ํ•œ ์†Œํ”„ํŠธ๋งฅ์Šค๊ฐ€ ์žˆ๋Š” ์ˆœ๋ฐฉํ–ฅ ๋„คํŠธ์›Œํฌ๋กœ ์ „๋‹ฌ๋ฉ๋‹ˆ๋‹ค. 3. ์ž…๋ ฅ ์ž„๋ฒ ๋”ฉ์€ ์—ฌ๋Ÿฌ ์ธ์ฝ”๋” ๋ ˆ์ด์–ด๋ฅผ ๊ฑฐ์ณ์„œ ์ตœ์ข… ์€๋‹‰ ์ƒํƒœ๋ฅผ ์ถœ๋ ฅํ•ฉ๋‹ˆ๋‹ค. ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ํ…์ŠคํŠธ ๋ถ„๋ฅ˜์— ์‚ฌ์šฉํ•˜๋ ค๋ฉด, ๊ธฐ๋ณธ BERT ๋ชจ๋ธ ์ƒ๋‹จ์— ์‹œํ€€์Šค ๋ถ„๋ฅ˜ ํ—ค๋“œ๋ฅผ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ์‹œํ€€์Šค ๋ถ„๋ฅ˜ ํ—ค๋“œ๋Š” ์ตœ์ข… ์€๋‹‰ ์ƒํƒœ๋ฅผ ๋ฐ›๋Š” ์„ ํ˜• ๋ ˆ์ด์–ด์ด๋ฉฐ, ๋กœ์ง“์œผ๋กœ ๋ณ€ํ™˜ํ•˜๊ธฐ ์œ„ํ•ด ์„ ํ˜• ๋ณ€ํ™˜์„ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค. ๊ต์ฐจ ์—”ํŠธ๋กœํ”ผ ์†์‹ค์€ ๋กœ์ง“๊ณผ ํƒ€๊ฒŸ ๊ฐ„์— ๊ณ„์‚ฐ๋˜์–ด ๊ฐ€์žฅ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์€ ๋ ˆ์ด๋ธ”์„ ์ฐพ์Šต๋‹ˆ๋‹ค. ํ…์ŠคํŠธ ๋ถ„๋ฅ˜์— ์ง์ ‘ ๋„์ „ํ•  ์ค€๋น„๊ฐ€ ๋˜์…จ๋‚˜์š”? ์™„์ „ํ•œ [ํ…์ŠคํŠธ ๋ถ„๋ฅ˜ ๊ฐ€์ด๋“œ](tasks/sequence_classification)๋ฅผ ํ™•์ธํ•˜์—ฌ DistilBERT๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ•™์Šตํ•˜์„ธ์š”! ### ํ† ํฐ ๋ถ„๋ฅ˜[[token-classification]] ๊ฐœ์ฒด๋ช… ์ธ์‹(Named Entity Recognition, NER)๊ณผ ๊ฐ™์€ ํ† ํฐ ๋ถ„๋ฅ˜ ์ž‘์—…์— BERT๋ฅผ ์‚ฌ์šฉํ•˜๋ ค๋ฉด, ๊ธฐ๋ณธ BERT ๋ชจ๋ธ ์ƒ๋‹จ์— ํ† ํฐ ๋ถ„๋ฅ˜ ํ—ค๋“œ๋ฅผ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ํ† ํฐ ๋ถ„๋ฅ˜ ํ—ค๋“œ๋Š” ์ตœ์ข… ์€๋‹‰ ์ƒํƒœ๋ฅผ ๋ฐ›๋Š” ์„ ํ˜• ๋ ˆ์ด์–ด์ด๋ฉฐ, ๋กœ์ง“์œผ๋กœ ๋ณ€ํ™˜ํ•˜๊ธฐ ์œ„ํ•ด ์„ ํ˜• ๋ณ€ํ™˜์„ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค. ๊ต์ฐจ ์—”ํŠธ๋กœํ”ผ ์†์‹ค์€ ๋กœ์ง“๊ณผ ๊ฐ ํ† ํฐ ๊ฐ„์— ๊ณ„์‚ฐ๋˜์–ด ๊ฐ€์žฅ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์€ ๋ ˆ์ด๋ธ”์„ ์ฐพ์Šต๋‹ˆ๋‹ค. ํ† ํฐ ๋ถ„๋ฅ˜์— ์ง์ ‘ ๋„์ „ํ•  ์ค€๋น„๊ฐ€ ๋˜์…จ๋‚˜์š”? ์™„์ „ํ•œ [ํ† ํฐ ๋ถ„๋ฅ˜ ๊ฐ€์ด๋“œ](tasks/token_classification)๋ฅผ ํ™•์ธํ•˜์—ฌ DistilBERT๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ•™์Šตํ•˜์„ธ์š”! ### ์งˆ์˜์‘๋‹ต[[question-answering]] ์งˆ์˜์‘๋‹ต์— BERT๋ฅผ ์‚ฌ์šฉํ•˜๋ ค๋ฉด, ๊ธฐ๋ณธ BERT ๋ชจ๋ธ ์œ„์— ์ŠคํŒฌ(span) ๋ถ„๋ฅ˜ ํ—ค๋“œ๋ฅผ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ์ด ์„ ํ˜• ๋ ˆ์ด์–ด๋Š” ์ตœ์ข… ์€๋‹‰ ์ƒํƒœ๋ฅผ ๋ฐ›๊ณ , ๋‹ต๋ณ€์— ๋Œ€์‘ํ•˜๋Š” `์ŠคํŒฌ`์˜ ์‹œ์ž‘๊ณผ ๋ ๋กœ๊ทธ๋ฅผ ๊ณ„์‚ฐํ•˜๊ธฐ ์œ„ํ•ด ์„ ํ˜• ๋ณ€ํ™˜์„ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค. ๊ต์ฐจ ์—”ํŠธ๋กœํ”ผ ์†์‹ค์€ ๋กœ์ง“๊ณผ ๊ฐ ๋ ˆ์ด๋ธ” ์œ„์น˜ ๊ฐ„์— ๊ณ„์‚ฐ๋˜์–ด ๋‹ต๋ณ€์— ๋Œ€์‘ํ•˜๋Š” ๊ฐ€์žฅ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์€ ํ…์ŠคํŠธ์˜ ์ŠคํŒฌ์„ ์ฐพ์Šต๋‹ˆ๋‹ค. ์งˆ์˜์‘๋‹ต์— ์ง์ ‘ ๋„์ „ํ•  ์ค€๋น„๊ฐ€ ๋˜์…จ๋‚˜์š”? ์™„์ „ํ•œ [์งˆ์˜์‘๋‹ต ๊ฐ€์ด๋“œ](tasks/question_answering)๋ฅผ ํ™•์ธํ•˜์—ฌ DistilBERT๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ•™์Šตํ•˜์„ธ์š”! <Tip> ๐Ÿ’ก ์‚ฌ์ „ํ›ˆ๋ จ๋œ BERT๋ฅผ ๋‹ค์–‘ํ•œ ์ž‘์—…์— ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ์–ผ๋งˆ๋‚˜ ์‰ฌ์šด์ง€ ์ฃผ๋ชฉํ•˜์„ธ์š”. ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์— ํŠน์ • ํ—ค๋“œ๋ฅผ ์ถ”๊ฐ€ํ•˜๊ธฐ๋งŒ ํ•˜๋ฉด ์€๋‹‰ ์ƒํƒœ๋ฅผ ์›ํ•˜๋Š” ์ถœ๋ ฅ์œผ๋กœ ์กฐ์ž‘ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! </Tip> ### ํ…์ŠคํŠธ ์ƒ์„ฑ[[text-generation]] [GPT-2](model_doc/gpt2)๋Š” ๋Œ€๋Ÿ‰์˜ ํ…์ŠคํŠธ์— ๋Œ€ํ•ด ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋””์ฝ”๋”ฉ ์ „์šฉ ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. ํ”„๋กฌํ”„ํŠธ๋ฅผ ์ฃผ์–ด์ง€๋ฉด ์„ค๋“๋ ฅ ์žˆ๋Š” (ํ•ญ์ƒ ์‚ฌ์‹ค์€ ์•„๋‹ˆ์ง€๋งŒ!) ํ…์ŠคํŠธ๋ฅผ ์ƒ์„ฑํ•˜๊ณ  ๋ช…์‹œ์ ์œผ๋กœ ํ›ˆ๋ จ๋˜์ง€ ์•Š์•˜์Œ์—๋„ ๋ถˆ๊ตฌํ•˜๊ณ  ์งˆ์˜์‘๋‹ต๊ณผ ๊ฐ™์€ ๋‹ค๋ฅธ NLP ์ž‘์—…์„ ์™„์ˆ˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gpt2_architecture.png"/> </div> 1. GPT-2๋Š” ๋‹จ์–ด๋ฅผ ํ† ํฐํ™”ํ•˜๊ณ  ํ† ํฐ ์ž„๋ฒ ๋”ฉ์„ ์ƒ์„ฑํ•˜๊ธฐ ์œ„ํ•ด [๋ฐ”์ดํŠธ ํŽ˜์–ด ์ธ์ฝ”๋”ฉ(BPE, byte pair encoding)](tokenizer_summary#bytepair-encoding-bpe)์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์œ„์น˜ ์ธ์ฝ”๋”ฉ์€ ์‹œํ€€์Šค์—์„œ ๊ฐ ํ† ํฐ์˜ ์œ„์น˜๋ฅผ ๋‚˜ํƒ€๋‚ด๊ธฐ ์œ„ํ•ด ํ† ํฐ ์ž„๋ฒ ๋”ฉ์— ์ถ”๊ฐ€๋ฉ๋‹ˆ๋‹ค. 
์ž…๋ ฅ ์ž„๋ฒ ๋”ฉ์€ ์—ฌ๋Ÿฌ ๋””์ฝ”๋” ๋ธ”๋ก์„ ๊ฑฐ์ณ ์ผ๋ถ€ ์ตœ์ข… ์€๋‹‰ ์ƒํƒœ๋ฅผ ์ถœ๋ ฅํ•ฉ๋‹ˆ๋‹ค. ๊ฐ ๋””์ฝ”๋” ๋ธ”๋ก ๋‚ด์—์„œ GPT-2๋Š” *๋งˆ์Šคํฌ๋“œ ์…€ํ”„ ์–ดํ…์…˜(masked self-attention)* ๋ ˆ์ด์–ด๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” GPT-2๊ฐ€ ์ดํ›„ ํ† ํฐ(future tokens)์— ์ฃผ์˜๋ฅผ ๊ธฐ์šธ์ผ ์ˆ˜ ์—†๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. ์™ผ์ชฝ์— ์žˆ๋Š” ํ† ํฐ์—๋งŒ ์ฃผ์˜๋ฅผ ๊ธฐ์šธ์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋งˆ์Šคํฌ๋“œ ์…€ํ”„ ์–ดํ…์…˜์—์„œ๋Š” ์–ดํ…์…˜ ๋งˆ์Šคํฌ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ดํ›„ ํ† ํฐ์— ๋Œ€ํ•œ ์ ์ˆ˜(score)๋ฅผ `0`์œผ๋กœ ์„ค์ •ํ•˜๊ธฐ ๋•Œ๋ฌธ์— BERT์˜ [`mask`] ํ† ํฐ๊ณผ ๋‹ค๋ฆ…๋‹ˆ๋‹ค. 2. ๋””์ฝ”๋”์˜ ์ถœ๋ ฅ์€ ์–ธ์–ด ๋ชจ๋ธ๋ง ํ—ค๋“œ์— ์ „๋‹ฌ๋˜๋ฉฐ, ์–ธ์–ด ๋ชจ๋ธ๋ง ํ—ค๋“œ๋Š” ์€๋‹‰ ์ƒํƒœ๋ฅผ ๋กœ์ง“์œผ๋กœ ์„ ํ˜• ๋ณ€ํ™˜์„ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค. ๋ ˆ์ด๋ธ”์€ ์‹œํ€€์Šค์˜ ๋‹ค์Œ ํ† ํฐ์œผ๋กœ, ๋กœ์ง“์„ ์˜ค๋ฅธ์ชฝ์œผ๋กœ ํ•˜๋‚˜์”ฉ ์ด๋™ํ•˜์—ฌ ์ƒ์„ฑ๋ฉ๋‹ˆ๋‹ค. ๊ต์ฐจ ์—”ํŠธ๋กœํ”ผ ์†์‹ค์€ ์ด๋™๋œ ๋กœ์ง“๊ณผ ๋ ˆ์ด๋ธ” ๊ฐ„์— ๊ณ„์‚ฐ๋˜์–ด ๊ฐ€์žฅ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์€ ๋‹ค์Œ ํ† ํฐ์„ ์ถœ๋ ฅํ•ฉ๋‹ˆ๋‹ค. GPT-2์˜ ์‚ฌ์ „ํ›ˆ๋ จ ๋ชฉ์ ์€ ์ „์ ์œผ๋กœ [์ธ๊ณผ์  ์–ธ์–ด ๋ชจ๋ธ๋ง](glossary#causal-language-modeling)์— ๊ธฐ๋ฐ˜ํ•˜์—ฌ, ์‹œํ€€์Šค์—์„œ ๋‹ค์Œ ๋‹จ์–ด๋ฅผ ์˜ˆ์ธกํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด๋Š” GPT-2๊ฐ€ ํ…์ŠคํŠธ ์ƒ์„ฑ์— ๊ด€๋ จ๋œ ์ž‘์—…์— ํŠนํžˆ ์šฐ์ˆ˜ํ•˜๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. ํ…์ŠคํŠธ ์ƒ์„ฑ์— ์ง์ ‘ ๋„์ „ํ•  ์ค€๋น„๊ฐ€ ๋˜์…จ๋‚˜์š”? ์™„์ „ํ•œ [์ธ๊ณผ์  ์–ธ์–ด ๋ชจ๋ธ๋ง ๊ฐ€์ด๋“œ](tasks/language_modeling#causal-language-modeling)๋ฅผ ํ™•์ธํ•˜์—ฌ DistilGPT-2๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ•™์Šตํ•˜์„ธ์š”! <Tip> ํ…์ŠคํŠธ ์ƒ์„ฑ์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [ํ…์ŠคํŠธ ์ƒ์„ฑ ์ „๋žต](generation_strategies) ๊ฐ€์ด๋“œ๋ฅผ ํ™•์ธํ•˜์„ธ์š”! </Tip> ### ์š”์•ฝ[[summarization]] [BART](model_doc/bart) ๋ฐ [T5](model_doc/t5)์™€ ๊ฐ™์€ ์ธ์ฝ”๋”-๋””์ฝ”๋” ๋ชจ๋ธ์€ ์š”์•ฝ ์ž‘์—…์˜ ์‹œํ€€์Šค-ํˆฌ-์‹œํ€€์Šค ํŒจํ„ด์„ ์œ„ํ•ด ์„ค๊ณ„๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ด ์„น์…˜์—์„œ BART์˜ ์ž‘๋™ ๋ฐฉ๋ฒ•์„ ์„ค๋ช…ํ•œ ๋‹ค์Œ, ๋งˆ์ง€๋ง‰์— T5๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•ด๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bart_architecture.png"/> </div> 1. BART์˜ ์ธ์ฝ”๋” ์•„ํ‚คํ…์ฒ˜๋Š” BERT์™€ ๋งค์šฐ ์œ ์‚ฌํ•˜๋ฉฐ ํ…์ŠคํŠธ์˜ ํ† ํฐ ๋ฐ ์œ„์น˜ ์ž„๋ฒ ๋”ฉ์„ ๋ฐ›์Šต๋‹ˆ๋‹ค. BART๋Š” ์ž…๋ ฅ์„ ๋ณ€ํ˜•์‹œํ‚ค๊ณ  ๋””์ฝ”๋”๋กœ ์žฌ๊ตฌ์„ฑํ•˜์—ฌ ์‚ฌ์ „ํ›ˆ๋ จ๋ฉ๋‹ˆ๋‹ค. ํŠน์ • ๋ณ€ํ˜• ๊ธฐ๋ฒ•์ด ์žˆ๋Š” ๋‹ค๋ฅธ ์ธ์ฝ”๋”์™€๋Š” ๋‹ฌ๋ฆฌ, BART๋Š” ๋ชจ๋“  ์œ ํ˜•์˜ ๋ณ€ํ˜•์„ ์ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ *text infilling* ๋ณ€ํ˜• ๊ธฐ๋ฒ•์ด ๊ฐ€์žฅ ์ž˜ ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค. Text Infiling์—์„œ๋Š” ์—ฌ๋Ÿฌ ํ…์ŠคํŠธ ์ŠคํŒฌ์„ **๋‹จ์ผ** [`mask`] ํ† ํฐ์œผ๋กœ ๋Œ€์ฒดํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ๋ชจ๋ธ์ด ๋งˆ์Šคํฌ๋œ ํ† ํฐ์„ ์˜ˆ์ธกํ•ด์•ผ ํ•˜๊ณ , ๋ชจ๋ธ์— ๋ˆ„๋ฝ๋œ ํ† ํฐ์˜ ์ˆ˜๋ฅผ ์˜ˆ์ธกํ•˜๋„๋ก ๊ฐ€๋ฅด์น˜๊ธฐ ๋•Œ๋ฌธ์— ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ์ž…๋ ฅ ์ž„๋ฒ ๋”ฉ๊ณผ ๋งˆ์Šคํฌ๋œ ์ŠคํŒฌ์ด ์ธ์ฝ”๋”๋ฅผ ๊ฑฐ์ณ ์ตœ์ข… ์€๋‹‰ ์ƒํƒœ๋ฅผ ์ถœ๋ ฅํ•˜์ง€๋งŒ, BERT์™€ ๋‹ฌ๋ฆฌ BART๋Š” ๋งˆ์ง€๋ง‰์— ๋‹จ์–ด๋ฅผ ์˜ˆ์ธกํ•˜๋Š” ์ˆœ๋ฐฉํ–ฅ ๋„คํŠธ์›Œํฌ๋ฅผ ์ถ”๊ฐ€ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. 2. ์ธ์ฝ”๋”์˜ ์ถœ๋ ฅ์€ ๋””์ฝ”๋”๋กœ ์ „๋‹ฌ๋˜๋ฉฐ, ๋””์ฝ”๋”๋Š” ์ธ์ฝ”๋”์˜ ์ถœ๋ ฅ์—์„œ ๋งˆ์Šคํฌ ํ† ํฐ๊ณผ ๋ณ€ํ˜•๋˜์ง€ ์•Š์€ ํ† ํฐ์„ ์˜ˆ์ธกํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ๋””์ฝ”๋”๊ฐ€ ์›๋ณธ ํ…์ŠคํŠธ๋ฅผ ๋ณต์›ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋˜๋Š” ์ถ”๊ฐ€์ ์ธ ๋ฌธ๋งฅ์„ ์–ป๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. ๋””์ฝ”๋”์˜ ์ถœ๋ ฅ์€ ์–ธ์–ด ๋ชจ๋ธ๋ง ํ—ค๋“œ์— ์ „๋‹ฌ๋˜๋ฉฐ, ์–ธ์–ด ๋ชจ๋ธ๋ง ํ—ค๋“œ๋Š” ์€๋‹‰ ์ƒํƒœ๋ฅผ ๋กœ์ง“์œผ๋กœ ์„ ํ˜• ๋ณ€ํ™˜์„ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค. 
๊ต์ฐจ ์—”ํŠธ๋กœํ”ผ ์†์‹ค์€ ๋กœ์ง“๊ณผ ํ† ํฐ์ด ์˜ค๋ฅธ์ชฝ์œผ๋กœ ์ด๋™๋œ ๋ ˆ์ด๋ธ” ๊ฐ„์— ๊ณ„์‚ฐ๋ฉ๋‹ˆ๋‹ค. ์š”์•ฝ์— ์ง์ ‘ ๋„์ „ํ•  ์ค€๋น„๊ฐ€ ๋˜์…จ๋‚˜์š”? ์™„์ „ํ•œ [์š”์•ฝ ๊ฐ€์ด๋“œ](tasks/summarization)๋ฅผ ํ™•์ธํ•˜์—ฌ T5๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ•™์Šตํ•˜์„ธ์š”! <Tip> ํ…์ŠคํŠธ ์ƒ์„ฑ์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [ํ…์ŠคํŠธ ์ƒ์„ฑ ์ „๋žต](generation_strategies) ๊ฐ€์ด๋“œ๋ฅผ ํ™•์ธํ•˜์„ธ์š”! </Tip> ### ๋ฒˆ์—ญ[[translation]] ๋ฒˆ์—ญ์€ ์‹œํ€€์Šค-ํˆฌ-์‹œํ€€์Šค ์ž‘์—…์˜ ๋˜ ๋‹ค๋ฅธ ์˜ˆ๋กœ, [BART](model_doc/bart) ๋˜๋Š” [T5](model_doc/t5)์™€ ๊ฐ™์€ ์ธ์ฝ”๋”-๋””์ฝ”๋” ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์„น์…˜์—์„œ BART์˜ ์ž‘๋™ ๋ฐฉ๋ฒ•์„ ์„ค๋ช…ํ•œ ๋‹ค์Œ, ๋งˆ์ง€๋ง‰์— T5๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•ด๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. BART๋Š” ์›์ฒœ ์–ธ์–ด๋ฅผ ํƒ€๊ฒŸ ์–ธ์–ด๋กœ ๋””์ฝ”๋”ฉํ•  ์ˆ˜ ์žˆ๋Š” ์ž…๋ ฅ์— ๋งคํ•‘ํ•˜๊ธฐ ์œ„ํ•ด ๋ฌด์ž‘์œ„๋กœ ์ดˆ๊ธฐํ™”๋œ ๋ณ„๋„์˜ ์ธ์ฝ”๋”๋ฅผ ์ถ”๊ฐ€ํ•˜์—ฌ ๋ฒˆ์—ญ์— ์ ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด ์ƒˆ๋กœ์šด ์ธ์ฝ”๋”์˜ ์ž„๋ฒ ๋”ฉ์€ ์›๋ณธ ๋‹จ์–ด ์ž„๋ฒ ๋”ฉ ๋Œ€์‹  ์‚ฌ์ „ํ›ˆ๋ จ๋œ ์ธ์ฝ”๋”๋กœ ์ „๋‹ฌ๋ฉ๋‹ˆ๋‹ค. ์›์ฒœ ์ธ์ฝ”๋”๋Š” ๋ชจ๋ธ ์ถœ๋ ฅ์˜ ๊ต์ฐจ ์—”ํŠธ๋กœํ”ผ ์†์‹ค๋กœ๋ถ€ํ„ฐ ์›์ฒœ ์ธ์ฝ”๋”, ์œ„์น˜ ์ž„๋ฒ ๋”ฉ, ์ž…๋ ฅ ์ž„๋ฒ ๋”ฉ์„ ๊ฐฑ์‹ ํ•˜์—ฌ ํ›ˆ๋ จ๋ฉ๋‹ˆ๋‹ค. ์ฒซ ๋ฒˆ์งธ ๋‹จ๊ณ„์—์„œ๋Š” ๋ชจ๋ธ ํŒŒ๋ผ๋ฏธํ„ฐ๊ฐ€ ๊ณ ์ •๋˜๊ณ , ๋‘ ๋ฒˆ์งธ ๋‹จ๊ณ„์—์„œ๋Š” ๋ชจ๋“  ๋ชจ๋ธ ํŒŒ๋ผ๋ฏธํ„ฐ๊ฐ€ ํ•จ๊ป˜ ํ›ˆ๋ จ๋ฉ๋‹ˆ๋‹ค. BART๋Š” ์ดํ›„ ๋ฒˆ์—ญ์„ ์œ„ํ•ด ๋‹ค์–‘ํ•œ ์–ธ์–ด๋กœ ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋‹ค๊ตญ์–ด ๋ฒ„์ „์˜ mBART๋กœ ํ™•์žฅ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ๋ฒˆ์—ญ์— ์ง์ ‘ ๋„์ „ํ•  ์ค€๋น„๊ฐ€ ๋˜์…จ๋‚˜์š”? ์™„์ „ํ•œ [๋ฒˆ์—ญ ๊ฐ€์ด๋“œ](tasks/summarization)๋ฅผ ํ™•์ธํ•˜์—ฌ T5๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ•™์Šตํ•˜์„ธ์š”! <Tip> ํ…์ŠคํŠธ ์ƒ์„ฑ์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [ํ…์ŠคํŠธ ์ƒ์„ฑ ์ „๋žต](generation_strategies) ๊ฐ€์ด๋“œ๋ฅผ ํ™•์ธํ•˜์„ธ์š”! </Tip>
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/ko/perf_infer_gpu_one.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๋‹จ์ผ GPU์—์„œ ํšจ์œจ์ ์ธ ์ถ”๋ก  [[efficient-inference-on-a-single-gpu]] ์ด ๊ฐ€์ด๋“œ ์™ธ์—๋„, [๋‹จ์ผ GPU์—์„œ์˜ ํ›ˆ๋ จ ๊ฐ€์ด๋“œ](perf_train_gpu_one)์™€ [CPU์—์„œ์˜ ์ถ”๋ก  ๊ฐ€์ด๋“œ](perf_infer_cpu)์—์„œ๋„ ๊ด€๋ จ ์ •๋ณด๋ฅผ ์ฐพ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ## Better Transformer: PyTorch ๋„ค์ดํ‹ฐ๋ธŒ Transformer ํŒจ์ŠคํŠธํŒจ์Šค [[better-transformer-pytorchnative-transformer-fastpath]] PyTorch ๋„ค์ดํ‹ฐ๋ธŒ [`nn.MultiHeadAttention`](https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/) ์–ดํ…์…˜ ํŒจ์ŠคํŠธํŒจ์Šค์ธ BetterTransformer๋Š” [๐Ÿค— Optimum ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ](https://huggingface.co/docs/optimum/bettertransformer/overview)์˜ ํ†ตํ•ฉ์„ ํ†ตํ•ด Transformers์™€ ํ•จ๊ป˜ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. PyTorch์˜ ์–ดํ…์…˜ ํŒจ์ŠคํŠธํŒจ์Šค๋Š” ์ปค๋„ ํ“จ์ „๊ณผ [์ค‘์ฒฉ๋œ ํ…์„œ](https://pytorch.org/docs/stable/nested.html)์˜ ์‚ฌ์šฉ์„ ํ†ตํ•ด ์ถ”๋ก  ์†๋„๋ฅผ ๋†’์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ž์„ธํ•œ ๋ฒค์น˜๋งˆํฌ๋Š” [์ด ๋ธ”๋กœ๊ทธ ๊ธ€](https://medium.com/pytorch/bettertransformer-out-of-the-box-performance-for-huggingface-transformers-3fbe27d50ab2)์—์„œ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [`optimum`](https://github.com/huggingface/optimum) ํŒจํ‚ค์ง€๋ฅผ ์„ค์น˜ํ•œ ํ›„์—๋Š” ์ถ”๋ก  ์ค‘ Better Transformer๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก [`~PreTrainedModel.to_bettertransformer`]๋ฅผ ํ˜ธ์ถœํ•˜์—ฌ ๊ด€๋ จ ๋‚ด๋ถ€ ๋ชจ๋“ˆ์„ ๋Œ€์ฒดํ•ฉ๋‹ˆ๋‹ค: ```python model = model.to_bettertransformer() ``` [`~PreTrainedModel.reverse_bettertransformer`] ๋ฉ”์†Œ๋“œ๋Š” ์ •๊ทœํ™”๋œ transformers ๋ชจ๋ธ๋ง์„ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด ๋ชจ๋ธ์„ ์ €์žฅํ•˜๊ธฐ ์ „ ์›๋ž˜์˜ ๋ชจ๋ธ๋ง์œผ๋กœ ๋Œ์•„๊ฐˆ ์ˆ˜ ์žˆ๋„๋ก ํ•ด์ค๋‹ˆ๋‹ค: ```python model = model.reverse_bettertransformer() model.save_pretrained("saved_model") ``` PyTorch 2.0๋ถ€ํ„ฐ๋Š” ์–ดํ…์…˜ ํŒจ์ŠคํŠธํŒจ์Šค๊ฐ€ ์ธ์ฝ”๋”์™€ ๋””์ฝ”๋” ๋ชจ๋‘์—์„œ ์ง€์›๋ฉ๋‹ˆ๋‹ค. ์ง€์›๋˜๋Š” ์•„ํ‚คํ…์ฒ˜ ๋ชฉ๋ก์€ [์—ฌ๊ธฐ](https://huggingface.co/docs/optimum/bettertransformer/overview#supported-models)์—์„œ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ## FP4 ํ˜ผํ•ฉ ์ •๋ฐ€๋„ ์ถ”๋ก ์„ ์œ„ํ•œ `bitsandbytes` ํ†ตํ•ฉ [[bitsandbytes-integration-for-fp4-mixedprecision-inference]] `bitsandbytes`๋ฅผ ์„ค์น˜ํ•˜๋ฉด GPU์—์„œ ์†์‰ฝ๊ฒŒ ๋ชจ๋ธ์„ ์••์ถ•ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. FP4 ์–‘์žํ™”๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ์›๋ž˜์˜ ์ „์ฒด ์ •๋ฐ€๋„ ๋ฒ„์ „๊ณผ ๋น„๊ตํ•˜์—ฌ ๋ชจ๋ธ ํฌ๊ธฐ๋ฅผ ์ตœ๋Œ€ 8๋ฐฐ ์ค„์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜์—์„œ ์‹œ์ž‘ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ™•์ธํ•˜์„ธ์š”. <Tip> ์ด ๊ธฐ๋Šฅ์€ ๋‹ค์ค‘ GPU ์„ค์ •์—์„œ๋„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
</Tip> ### ์š”๊ตฌ ์‚ฌํ•ญ [[requirements-for-fp4-mixedprecision-inference]] - ์ตœ์‹  `bitsandbytes` ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ `pip install bitsandbytes>=0.39.0` - ์ตœ์‹  `accelerate`๋ฅผ ์†Œ์Šค์—์„œ ์„ค์น˜ `pip install git+https://github.com/huggingface/accelerate.git` - ์ตœ์‹  `transformers`๋ฅผ ์†Œ์Šค์—์„œ ์„ค์น˜ `pip install git+https://github.com/huggingface/transformers.git` ### FP4 ๋ชจ๋ธ ์‹คํ–‰ - ๋‹จ์ผ GPU ์„ค์ • - ๋น ๋ฅธ ์‹œ์ž‘ [[running-fp4-models-single-gpu-setup-quickstart]] ๋‹ค์Œ ์ฝ”๋“œ๋ฅผ ์‹คํ–‰ํ•˜์—ฌ ๋‹จ์ผ GPU์—์„œ ๋น ๋ฅด๊ฒŒ FP4 ๋ชจ๋ธ์„ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py from transformers import AutoModelForCausalLM model_name = "bigscience/bloom-2b5" model_4bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_4bit=True) ``` `device_map`์€ ์„ ํƒ ์‚ฌํ•ญ์ž…๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ `device_map = 'auto'`๋กœ ์„ค์ •ํ•˜๋Š” ๊ฒƒ์ด ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ ๋ฆฌ์†Œ์Šค๋ฅผ ํšจ์œจ์ ์œผ๋กœ ๋””์ŠคํŒจ์น˜ํ•˜๊ธฐ ๋•Œ๋ฌธ์— ์ถ”๋ก ์— ์žˆ์–ด ๊ถŒ์žฅ๋ฉ๋‹ˆ๋‹ค. ### FP4 ๋ชจ๋ธ ์‹คํ–‰ - ๋‹ค์ค‘ GPU ์„ค์ • [[running-fp4-models-multi-gpu-setup]] ๋‹ค์ค‘ GPU์—์„œ ํ˜ผํ•ฉ 4๋น„ํŠธ ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ค๋Š” ๋ฐฉ๋ฒ•์€ ๋‹จ์ผ GPU ์„ค์ •๊ณผ ๋™์ผํ•ฉ๋‹ˆ๋‹ค(๋™์ผํ•œ ๋ช…๋ น์–ด ์‚ฌ์šฉ): ```py model_name = "bigscience/bloom-2b5" model_4bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_4bit=True) ``` ํ•˜์ง€๋งŒ `accelerate`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๊ฐ GPU์— ํ• ๋‹นํ•  GPU RAM์„ ์ œ์–ดํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ๊ณผ ๊ฐ™์ด `max_memory` ์ธ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”: ```py max_memory_mapping = {0: "600MB", 1: "1GB"} model_name = "bigscience/bloom-3b" model_4bit = AutoModelForCausalLM.from_pretrained( model_name, device_map="auto", load_in_4bit=True, max_memory=max_memory_mapping ) ``` ์ด ์˜ˆ์—์„œ๋Š” ์ฒซ ๋ฒˆ์งธ GPU๊ฐ€ 600MB์˜ ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜๊ณ  ๋‘ ๋ฒˆ์งธ GPU๊ฐ€ 1GB๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ### ๊ณ ๊ธ‰ ์‚ฌ์šฉ๋ฒ• [[advanced-usage]] ์ด ๋ฐฉ๋ฒ•์˜ ๋” ๊ณ ๊ธ‰ ์‚ฌ์šฉ๋ฒ•์— ๋Œ€ํ•ด์„œ๋Š” [์–‘์žํ™”](main_classes/quantization) ๋ฌธ์„œ ํŽ˜์ด์ง€๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ## Int8 ํ˜ผํ•ฉ ์ •๋ฐ€๋„ ํ–‰๋ ฌ ๋ถ„ํ•ด๋ฅผ ์œ„ํ•œ `bitsandbytes` ํ†ตํ•ฉ [[bitsandbytes-integration-for-int8-mixedprecision-matrix-decomposition]] <Tip> ์ด ๊ธฐ๋Šฅ์€ ๋‹ค์ค‘ GPU ์„ค์ •์—์„œ๋„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. </Tip> [`LLM.int8() : 8-bit Matrix Multiplication for Transformers at Scale`](https://arxiv.org/abs/2208.07339) ๋…ผ๋ฌธ์—์„œ ์šฐ๋ฆฌ๋Š” ๋ช‡ ์ค„์˜ ์ฝ”๋“œ๋กœ Hub์˜ ๋ชจ๋“  ๋ชจ๋ธ์— ๋Œ€ํ•œ Hugging Face ํ†ตํ•ฉ์„ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. ์ด ๋ฐฉ๋ฒ•์€ `float16` ๋ฐ `bfloat16` ๊ฐ€์ค‘์น˜์— ๋Œ€ํ•ด `nn.Linear` ํฌ๊ธฐ๋ฅผ 2๋ฐฐ๋กœ ์ค„์ด๊ณ , `float32` ๊ฐ€์ค‘์น˜์— ๋Œ€ํ•ด 4๋ฐฐ๋กœ ์ค„์ž…๋‹ˆ๋‹ค. ์ด๋Š” ์ ˆ๋ฐ˜ ์ •๋ฐ€๋„์—์„œ ์ด์ƒ์น˜๋ฅผ ์ฒ˜๋ฆฌํ•จ์œผ๋กœ์จ ํ’ˆ์งˆ์— ๊ฑฐ์˜ ์˜ํ–ฅ์„ ๋ฏธ์น˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ![HFxbitsandbytes.png](https://cdn-uploads.huggingface.co/production/uploads/1659861207959-62441d1d9fdefb55a0b7d12c.png) Int8 ํ˜ผํ•ฉ ์ •๋ฐ€๋„ ํ–‰๋ ฌ ๋ถ„ํ•ด๋Š” ํ–‰๋ ฌ ๊ณฑ์…ˆ์„ ๋‘ ๊ฐœ์˜ ์ŠคํŠธ๋ฆผ์œผ๋กœ ๋ถ„๋ฆฌํ•ฉ๋‹ˆ๋‹ค: (1) fp16๋กœ ๊ณฑํ•ด์ง€๋Š” ์ฒด๊ณ„์ ์ธ ํŠน์ด๊ฐ’ ์ด์ƒ์น˜ ์ŠคํŠธ๋ฆผ ํ–‰๋ ฌ(0.01%) ๋ฐ (2) int8 ํ–‰๋ ฌ ๊ณฑ์…ˆ์˜ ์ผ๋ฐ˜์ ์ธ ์ŠคํŠธ๋ฆผ(99.9%). ์ด ๋ฐฉ๋ฒ•์„ ์‚ฌ์šฉํ•˜๋ฉด ๋งค์šฐ ํฐ ๋ชจ๋ธ์— ๋Œ€ํ•ด ์˜ˆ์ธก ์ €ํ•˜ ์—†์ด int8 ์ถ”๋ก ์ด ๊ฐ€๋Šฅํ•ฉ๋‹ˆ๋‹ค. ์ด ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [๋…ผ๋ฌธ](https://arxiv.org/abs/2208.07339)์ด๋‚˜ [ํ†ตํ•ฉ์— ๊ด€ํ•œ ๋ธ”๋กœ๊ทธ ๊ธ€](https://huggingface.co/blog/hf-bitsandbytes-integration)์—์„œ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
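์œ„์—์„œ ์„ค๋ช…ํ•œ ์ด์ƒ์น˜ ๋ถ„๋ฆฌ ๊ธฐ์ค€์„ ์ง์ ‘ ์กฐ์ •ํ•ด ๋ณด๊ณ  ์‹ถ๋‹ค๋ฉด `BitsAndBytesConfig`์˜ `llm_int8_threshold` ์ธ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ๊ฐœ๋žต์ ์ธ ์˜ˆ์‹œ์ด๋ฉฐ, ํ•ด๋‹น ์ธ์ˆ˜์˜ ์ง€์› ์—ฌ๋ถ€๋Š” ์„ค์น˜๋œ `transformers`/`bitsandbytes` ๋ฒ„์ „์— ๋”ฐ๋ผ ๋‹ค๋ฅผ ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ [์–‘์žํ™”](main_classes/quantization) ๋ฌธ์„œ๋ฅผ ํ•จ๊ป˜ ์ฐธ๊ณ ํ•˜์„ธ์š”.

```py
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# ๊ฐ€์ •: ์ด์ƒ์น˜ ํŒ๋ณ„ ์ž„๊ณ—๊ฐ’(๊ธฐ๋ณธ๊ฐ’ 6.0)์„ ๋ช…์‹œ์ ์œผ๋กœ ์ง€์ •ํ•˜๋Š” ์˜ˆ์‹œ
quantization_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,  # ์ด ๊ฐ’์„ ๋„˜๋Š” ์€๋‹‰ ์ƒํƒœ๋Š” fp16 ์ŠคํŠธ๋ฆผ์œผ๋กœ ๋”ฐ๋กœ ์ฒ˜๋ฆฌ๋ฉ๋‹ˆ๋‹ค
)

model_8bit = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-2b5",
    device_map="auto",
    quantization_config=quantization_config,
)
```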
![MixedInt8.gif](https://cdn-uploads.huggingface.co/production/uploads/1660567469965-62441d1d9fdefb55a0b7d12c.gif)

์ปค๋„์€ GPU ์ „์šฉ์œผ๋กœ ์ปดํŒŒ์ผ๋˜์–ด ์žˆ๊ธฐ ๋•Œ๋ฌธ์— ํ˜ผํ•ฉ 8๋น„ํŠธ ๋ชจ๋ธ์„ ์‹คํ–‰ํ•˜๋ ค๋ฉด GPU๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ์ด ๊ธฐ๋Šฅ์„ ์‚ฌ์šฉํ•˜๊ธฐ ์ „์— ๋ชจ๋ธ์˜ 1/4(๋˜๋Š” ๋ชจ๋ธ ๊ฐ€์ค‘์น˜๊ฐ€ ์ ˆ๋ฐ˜ ์ •๋ฐ€๋„์ธ ๊ฒฝ์šฐ ์ ˆ๋ฐ˜)์„ ์ €์žฅํ•  ์ถฉ๋ถ„ํ•œ GPU ๋ฉ”๋ชจ๋ฆฌ๊ฐ€ ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”. ์ด ๋ชจ๋“ˆ์„ ์‚ฌ์šฉํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋˜๋Š” ๋ช‡ ๊ฐ€์ง€ ์ฐธ๊ณ  ์‚ฌํ•ญ์ด ์•„๋ž˜์— ๋‚˜์™€ ์žˆ์Šต๋‹ˆ๋‹ค. ๋˜๋Š” [Google colab](#colab-demos)์—์„œ ๋ฐ๋ชจ๋ฅผ ๋”ฐ๋ผํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค.

### ์š”๊ตฌ ์‚ฌํ•ญ [[requirements-for-int8-mixedprecision-matrix-decomposition]]

- `bitsandbytes<0.37.0`์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ, 8๋น„ํŠธ ํ…์„œ ์ฝ”์–ด(Turing, Ampere ๋˜๋Š” ์ดํ›„ ์•„ํ‚คํ…์ฒ˜ - ์˜ˆ: T4, RTX20s, RTX30s, A40-A100)๋ฅผ ์ง€์›ํ•˜๋Š” NVIDIA GPU์—์„œ ์‹คํ–‰ํ•˜๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”. `bitsandbytes>=0.37.0`์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ, ๋ชจ๋“  GPU๊ฐ€ ์ง€์›๋ฉ๋‹ˆ๋‹ค.
- ์˜ฌ๋ฐ”๋ฅธ ๋ฒ„์ „์˜ `bitsandbytes`๋ฅผ ๋‹ค์Œ ๋ช…๋ น์œผ๋กœ ์„ค์น˜ํ•˜์„ธ์š”: `pip install bitsandbytes>=0.31.5`
- `accelerate`๋ฅผ ์„ค์น˜ํ•˜์„ธ์š”: `pip install accelerate>=0.12.0`

### ํ˜ผํ•ฉ Int8 ๋ชจ๋ธ ์‹คํ–‰ - ๋‹จ์ผ GPU ์„ค์ • [[running-mixedint8-models-single-gpu-setup]]

ํ•„์š”ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์„ค์น˜ํ•œ ํ›„ ํ˜ผํ•ฉ 8๋น„ํŠธ ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ค๋Š” ๋ฐฉ๋ฒ•์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค:

```py
from transformers import AutoModelForCausalLM

model_name = "bigscience/bloom-2b5"
model_8bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True)
```

ํ…์ŠคํŠธ ์ƒ์„ฑ์˜ ๊ฒฝ์šฐ:

* `pipeline()` ํ•จ์ˆ˜ ๋Œ€์‹  ๋ชจ๋ธ์˜ `generate()` ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. `pipeline()` ํ•จ์ˆ˜๋กœ๋Š” ์ถ”๋ก ์ด ๊ฐ€๋Šฅํ•˜์ง€๋งŒ, ํ˜ผํ•ฉ 8๋น„ํŠธ ๋ชจ๋ธ์— ์ตœ์ ํ™”๋˜์ง€ ์•Š์•˜๊ธฐ ๋•Œ๋ฌธ์— `generate()` ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ๋ณด๋‹ค ๋Š๋ฆด ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ, nucleus ์ƒ˜ํ”Œ๋ง๊ณผ ๊ฐ™์€ ์ผ๋ถ€ ์ƒ˜ํ”Œ๋ง ์ „๋žต์€ ํ˜ผํ•ฉ 8๋น„ํŠธ ๋ชจ๋ธ์— ๋Œ€ํ•ด `pipeline()` ํ•จ์ˆ˜์—์„œ ์ง€์›๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค.
* ์ž…๋ ฅ์„ ๋ชจ๋ธ๊ณผ ๋™์ผํ•œ GPU์— ๋ฐฐ์น˜ํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค.

๋‹ค์Œ์€ ๊ฐ„๋‹จํ•œ ์˜ˆ์ž…๋‹ˆ๋‹ค:

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-2b5"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model_8bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True)

prompt = "Hello, my llama is cute"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
generated_ids = model_8bit.generate(**inputs)
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
```

### ํ˜ผํ•ฉ Int8 ๋ชจ๋ธ ์‹คํ–‰ - ๋‹ค์ค‘ GPU ์„ค์ • [[running-mixedint8-models-multi-gpu-setup]]

๋‹ค์ค‘ GPU์—์„œ ํ˜ผํ•ฉ 8๋น„ํŠธ ๋ชจ๋ธ์„ ๋กœ๋“œํ•˜๋Š” ๋ฐฉ๋ฒ•์€ ๋‹จ์ผ GPU ์„ค์ •๊ณผ ๋™์ผํ•ฉ๋‹ˆ๋‹ค(๋™์ผํ•œ ๋ช…๋ น์–ด ์‚ฌ์šฉ):

```py
model_name = "bigscience/bloom-2b5"
model_8bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True)
```

ํ•˜์ง€๋งŒ `accelerate`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๊ฐ GPU์— ํ• ๋‹นํ•  GPU RAM์„ ์ œ์–ดํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ๊ณผ ๊ฐ™์ด `max_memory` ์ธ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”:

```py
max_memory_mapping = {0: "1GB", 1: "2GB"}
model_name = "bigscience/bloom-3b"
model_8bit = AutoModelForCausalLM.from_pretrained(
    model_name, device_map="auto", load_in_8bit=True, max_memory=max_memory_mapping
)
```

์ด ์˜ˆ์‹œ์—์„œ๋Š” ์ฒซ ๋ฒˆ์งธ GPU๊ฐ€ 1GB์˜ ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜๊ณ  ๋‘ ๋ฒˆ์งธ GPU๊ฐ€ 2GB๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค.
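8๋น„ํŠธ๋กœ ๊ฐ€์ ธ์˜จ ๋ชจ๋ธ์ด ์‹ค์ œ๋กœ ์–ผ๋งˆ๋‚˜ ์ž‘์•„์กŒ๋Š”์ง€ ํ™•์ธํ•˜๊ณ  ์‹ถ๋‹ค๋ฉด `get_memory_footprint()`๋ฅผ ์‚ฌ์šฉํ•ด ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ๋ฐ˜์ •๋ฐ€๋„ ๋ชจ๋ธ๊ณผ ๋น„๊ตํ•˜๋Š” ๊ฐœ๋žต์ ์ธ ์˜ˆ์‹œ๋กœ, ๋‘ ๋ชจ๋ธ์„ ๋™์‹œ์— ์˜ฌ๋ฆด ๋งŒํผ์˜ ๋ฉ”๋ชจ๋ฆฌ๊ฐ€ ์žˆ๋‹ค๊ณ  ๊ฐ€์ •ํ•ฉ๋‹ˆ๋‹ค.

```py
import torch
from transformers import AutoModelForCausalLM

model_name = "bigscience/bloom-2b5"

model_fp16 = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)
model_8bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True)

# ํŒŒ๋ผ๋ฏธํ„ฐ์™€ ๋ฒ„ํผ๊ฐ€ ์ฐจ์ง€ํ•˜๋Š” ๋ฉ”๋ชจ๋ฆฌ(๋ฐ”์ดํŠธ)๋ฅผ ๋น„๊ตํ•ฉ๋‹ˆ๋‹ค
print(f"fp16 ๋ชจ๋ธ: {model_fp16.get_memory_footprint() / 1e9:.2f} GB")
print(f"8๋น„ํŠธ ๋ชจ๋ธ: {model_8bit.get_memory_footprint() / 1e9:.2f} GB")
```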
### Colab ๋ฐ๋ชจ [[colab-demos]] ์ด ๋ฐฉ๋ฒ•์„ ์‚ฌ์šฉํ•˜๋ฉด ์ด์ „์— Google Colab์—์„œ ์ถ”๋ก ํ•  ์ˆ˜ ์—†์—ˆ๋˜ ๋ชจ๋ธ์— ๋Œ€ํ•ด ์ถ”๋ก ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. Google Colab์—์„œ 8๋น„ํŠธ ์–‘์žํ™”๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ T5-11b(42GB in fp32)๋ฅผ ์‹คํ–‰ํ•˜๋Š” ๋ฐ๋ชจ๋ฅผ ํ™•์ธํ•˜์„ธ์š”: [![Open In Colab: T5-11b demo](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1YORPWx4okIHXnjW7MSAidXN29mPVNT7F?usp=sharing) ๋˜๋Š” BLOOM-3B์— ๋Œ€ํ•œ ๋ฐ๋ชจ๋ฅผ ํ™•์ธํ•˜์„ธ์š”: [![Open In Colab: BLOOM-3b demo](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1qOjXfQIAULfKvZqwCen8-MoWKGdSatZ4?usp=sharing)
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/ko/debugging.md
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๋””๋ฒ„๊น… [[debugging]] ## Multi-GPU ๋„คํŠธ์›Œํฌ ๋ฌธ์ œ ๋””๋ฒ„๊ทธ [[multigpu-network-issues-debug]] `DistributedDataParallel` ๋ฐ ๋‹ค์ค‘ GPU๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ›ˆ๋ จํ•˜๊ฑฐ๋‚˜ ์ถ”๋ก ํ•  ๋•Œ, ํ”„๋กœ์„ธ์Šค ๋ฐ/๋˜๋Š” ๋…ธ๋“œ ๊ฐ„์˜ ์ƒํ˜ธ ํ†ต์‹  ๋ฌธ์ œ๊ฐ€ ๋ฐœ์ƒํ•˜๋Š” ๊ฒฝ์šฐ, ๋‹ค์Œ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋„คํŠธ์›Œํฌ ๋ฌธ์ œ๋ฅผ ์ง„๋‹จํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```bash wget https://raw.githubusercontent.com/huggingface/transformers/main/scripts/distributed/torch-distributed-gpu-test.py ``` ์˜ˆ๋ฅผ ๋“ค์–ด, 2๊ฐœ์˜ GPU๊ฐ€ ์ƒํ˜ธ ์ž‘์šฉํ•˜๋Š” ๋ฐฉ์‹์„ ํ…Œ์ŠคํŠธํ•˜๋ ค๋ฉด ๋‹ค์Œ์„ ์‹คํ–‰ํ•˜์„ธ์š”: ```bash python -m torch.distributed.run --nproc_per_node 2 --nnodes 1 torch-distributed-gpu-test.py ``` ๋‘ ํ”„๋กœ์„ธ์Šค๊ฐ€ ์„œ๋กœ ํ†ต์‹ ํ•˜๊ณ  GPU ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ํ• ๋‹นํ•˜๋Š” ๊ฒฝ์šฐ, ๊ฐ๊ฐ "OK" ์ƒํƒœ๋ฅผ ์ถœ๋ ฅํ•ฉ๋‹ˆ๋‹ค. ๋” ๋งŽ์€ GPU ๋˜๋Š” ๋…ธ๋“œ์˜ ๊ฒฝ์šฐ ์Šคํฌ๋ฆฝํŠธ์˜ ์ธ์ˆ˜๋ฅผ ์กฐ์ •ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ์ง„๋‹จ ์Šคํฌ๋ฆฝํŠธ ๋‚ด์—์„œ ๋” ๋งŽ์€ ์„ธ๋ถ€ ์ •๋ณด์™€ SLURM ํ™˜๊ฒฝ์—์„œ ์‹คํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ๋ ˆ์‹œํ”ผ๋ฅผ ์ฐพ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ถ”๊ฐ€์ ์ธ ๋””๋ฒ„๊ทธ ์ˆ˜์ค€์€ ๋‹ค์Œ๊ณผ ๊ฐ™์ด `NCCL_DEBUG=INFO` ํ™˜๊ฒฝ ๋ณ€์ˆ˜๋ฅผ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค: ```bash NCCL_DEBUG=INFO python -m torch.distributed.run --nproc_per_node 2 --nnodes 1 torch-distributed-gpu-test.py ``` ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด NCCL ๊ด€๋ จ ๋””๋ฒ„๊ทธ ์ •๋ณด๊ฐ€ ๋งŽ์ด ์ถœ๋ ฅ๋˜๋ฉฐ, ๋ฌธ์ œ๊ฐ€ ๋ณด๊ณ ๋œ ๊ฒฝ์šฐ์—๋Š” ์ธํ„ฐ๋„ท์—์„œ ๊ฒ€์ƒ‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋˜๋Š” ์ถœ๋ ฅ์„ ํ•ด์„ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์ž˜ ๋ชจ๋ฅด๋Š” ๊ฒฝ์šฐ ๋กœ๊ทธ ํŒŒ์ผ์„ ์ด์Šˆ์— ๊ณต์œ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ## ์–ธ๋”ํ”Œ๋กœ ๋ฐ ์˜ค๋ฒ„ํ”Œ๋กœ ๊ฐ์ง€ [[underflow-and-overflow-detection]] <Tip> ์ด ๊ธฐ๋Šฅ์€ ํ˜„์žฌ PyTorch์—์„œ๋งŒ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. </Tip> <Tip> ๋‹ค์ค‘ GPU ํ›ˆ๋ จ์„ ์œ„ํ•ด์„œ๋Š” DDP (`torch.distributed.launch`)๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. </Tip> <Tip> ์ด ๊ธฐ๋Šฅ์€ `nn.Module`์„ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•˜๋Š” ๋ชจ๋ธ๊ณผ ํ•จ๊ป˜ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. </Tip> `loss=NaN`์ด ๋‚˜ํƒ€๋‚˜๊ฑฐ๋‚˜ ๋ชจ๋ธ์ด `inf` ๋˜๋Š” `nan`์œผ๋กœ ์ธํ•ด ๋‹ค๋ฅธ ์ด์ƒํ•œ ๋™์ž‘์„ ํ•˜๋Š” ๊ฒฝ์šฐ, ์–ธ๋”ํ”Œ๋กœ ๋˜๋Š” ์˜ค๋ฒ„ํ”Œ๋กœ์˜ ์ฒซ ๋ฒˆ์งธ ๋ฐœ์ƒ ์œ„์น˜์™€ ๊ทธ ์›์ธ์„ ํŒŒ์•…ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋‹คํ–‰ํžˆ๋„ ์ด๋ฅผ ์ž๋™์œผ๋กœ ๊ฐ์ง€ํ•˜๋Š” ํŠน์ˆ˜ ๋ชจ๋“ˆ์„ ํ™œ์„ฑํ™”ํ•˜์—ฌ ์‰ฝ๊ฒŒ ์•Œ์•„๋‚ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [`Trainer`]๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ, ๋‹ค์Œ์„ ๊ธฐ์กด์˜ ๋ช…๋ น์ค„ ์ธ์ˆ˜์— ์ถ”๊ฐ€ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ```bash --debug underflow_overflow ``` ๋˜๋Š” [`TrainingArguments`] ๊ฐ์ฒด๋ฅผ ์ƒ์„ฑํ•  ๋•Œ `debug="underflow_overflow"`๋ฅผ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. ์ž์ฒด ํ›ˆ๋ จ ๋ฃจํ”„๋‚˜ ๋‹ค๋ฅธ Trainer๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ, ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
```python from transformers.debug_utils import DebugUnderflowOverflow debug_overflow = DebugUnderflowOverflow(model) ``` [`~debug_utils.DebugUnderflowOverflow`]๋Š” ๋ชจ๋ธ์— ํ›„ํฌ๋ฅผ ์‚ฝ์ž…ํ•˜์—ฌ ๊ฐ forward ํ˜ธ์ถœ ์งํ›„์— ์ž…๋ ฅ ๋ฐ ์ถœ๋ ฅ ๋ณ€์ˆ˜ ๋ฐ ํ•ด๋‹น ๋ชจ๋“ˆ์˜ ๊ฐ€์ค‘์น˜๋ฅผ ํ…Œ์ŠคํŠธํ•ฉ๋‹ˆ๋‹ค. ํ™œ์„ฑํ™”๋‚˜ ๊ฐ€์ค‘์น˜์˜ ์ตœ์†Œํ•œ ํ•˜๋‚˜์˜ ์š”์†Œ์—์„œ `inf` ๋˜๋Š” `nan`์ด ๊ฐ์ง€๋˜๋ฉด ํ”„๋กœ๊ทธ๋žจ์ด ์–ด์„คํŠธ๋˜๊ณ  ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋ณด๊ณ ์„œ๊ฐ€ ์ถœ๋ ฅ๋ฉ๋‹ˆ๋‹ค. (์ด ์˜ˆ์ œ๋Š” fp16 ํ˜ผํ•ฉ ์ •๋ฐ€๋„์—์„œ `google/mt5-small`์—์„œ ์บก์ฒ˜๋œ ๊ฒƒ์ž…๋‹ˆ๋‹ค): ``` Detected inf/nan during batch_number=0 Last 21 forward frames: abs min abs max metadata encoder.block.1.layer.1.DenseReluDense.dropout Dropout 0.00e+00 2.57e+02 input[0] 0.00e+00 2.85e+02 output [...] encoder.block.2.layer.0 T5LayerSelfAttention 6.78e-04 3.15e+03 input[0] 2.65e-04 3.42e+03 output[0] None output[1] 2.25e-01 1.00e+04 output[2] encoder.block.2.layer.1.layer_norm T5LayerNorm 8.69e-02 4.18e-01 weight 2.65e-04 3.42e+03 input[0] 1.79e-06 4.65e+00 output encoder.block.2.layer.1.DenseReluDense.wi_0 Linear 2.17e-07 4.50e+00 weight 1.79e-06 4.65e+00 input[0] 2.68e-06 3.70e+01 output encoder.block.2.layer.1.DenseReluDense.wi_1 Linear 8.08e-07 2.66e+01 weight 1.79e-06 4.65e+00 input[0] 1.27e-04 2.37e+02 output encoder.block.2.layer.1.DenseReluDense.dropout Dropout 0.00e+00 8.76e+03 input[0] 0.00e+00 9.74e+03 output encoder.block.2.layer.1.DenseReluDense.wo Linear 1.01e-06 6.44e+00 weight 0.00e+00 9.74e+03 input[0] 3.18e-04 6.27e+04 output encoder.block.2.layer.1.DenseReluDense T5DenseGatedGeluDense 1.79e-06 4.65e+00 input[0] 3.18e-04 6.27e+04 output encoder.block.2.layer.1.dropout Dropout 3.18e-04 6.27e+04 input[0] 0.00e+00 inf output ``` ์˜ˆ์ œ ์ถœ๋ ฅ์€ ๊ฐ„๋žต์„ฑ์„ ์œ„ํ•ด ์ค‘๊ฐ„ ๋ถ€๋ถ„์ด ์ž˜๋ ค ์žˆ์Šต๋‹ˆ๋‹ค. ๋‘ ๋ฒˆ์งธ ์—ด์€ ์ ˆ๋Œ€์ ์œผ๋กœ ๊ฐ€์žฅ ํฐ ์š”์†Œ์˜ ๊ฐ’์ด๋ฉฐ, ๋”ฐ๋ผ์„œ ๋งˆ์ง€๋ง‰ ๋ช‡ ๊ฐœ์˜ ํ”„๋ ˆ์ž„์„ ์ž์„ธํžˆ ์‚ดํŽด๋ณด๋ฉด ์ž…๋ ฅ๊ณผ ์ถœ๋ ฅ์ด `1e4` ๋ฒ”์œ„์— ์žˆ์Œ์„ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์ด ํ›ˆ๋ จ์€ `fp16` ํ˜ผํ•ฉ ์ •๋ฐ€๋„๋กœ ์ˆ˜ํ–‰๋  ๋•Œ ๊ฐ€์žฅ ๋งˆ์ง€๋ง‰ ๋‹จ๊ณ„์—์„œ ์˜ค๋ฒ„ํ”Œ๋กœ์šฐ๊ฐ€ ๋ฐœ์ƒํ–ˆ์Šต๋‹ˆ๋‹ค (`fp16`์—์„œ `inf` ์ด์ „์˜ ๊ฐ€์žฅ ํฐ ์ˆซ์ž๋Š” `64e3`์ž…๋‹ˆ๋‹ค). `fp16` ์•„๋ž˜์—์„œ ์˜ค๋ฒ„ํ”Œ๋กœ์šฐ๋ฅผ ํ”ผํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” ํ™œ์„ฑํ™”๋Š” `1e4`๋ณด๋‹ค ํ›จ์”ฌ ์ž‘์•„์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์™œ๋ƒํ•˜๋ฉด `1e4 * 1e4 = 1e8`์ด๊ธฐ ๋•Œ๋ฌธ์— ํฐ ํ™œ์„ฑํ™”์™€์˜ ํ–‰๋ ฌ ๊ณฑ์€ ์ˆ˜์น˜์ ์ธ ์˜ค๋ฒ„ํ”Œ๋กœ์šฐ ์กฐ๊ฑด์œผ๋กœ ์ด์–ด์งˆ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ถ”์ ์˜ ๋งจ ์ฒ˜์Œ์—์„œ ์–ด๋Š ๋ฐฐ์น˜ ๋ฒˆํ˜ธ์—์„œ ๋ฌธ์ œ๊ฐ€ ๋ฐœ์ƒํ–ˆ๋Š”์ง€ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค (์—ฌ๊ธฐ์„œ `Detected inf/nan during batch_number=0`์€ ๋ฌธ์ œ๊ฐ€ ์ฒซ ๋ฒˆ์งธ ๋ฐฐ์น˜์—์„œ ๋ฐœ์ƒํ–ˆ์Œ์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค). ๊ฐ ๋ณด๊ณ ๋œ ํ”„๋ ˆ์ž„์€ ํ•ด๋‹น ํ”„๋ ˆ์ž„์ด ๋ณด๊ณ ํ•˜๋Š” ํ•ด๋‹น ๋ชจ๋“ˆ์— ๋Œ€ํ•œ ์™„์ „ํ•œ ํ•ญ๋ชฉ์„ ์„ ์–ธํ•˜๋ฉฐ, ์ด ํ”„๋ ˆ์ž„๋งŒ ์‚ดํŽด๋ณด๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค. ``` encoder.block.2.layer.1.layer_norm T5LayerNorm 8.69e-02 4.18e-01 weight 2.65e-04 3.42e+03 input[0] 1.79e-06 4.65e+00 output ``` ์—ฌ๊ธฐ์„œ `encoder.block.2.layer.1.layer_norm`์€ ์ธ์ฝ”๋”์˜ ๋‘ ๋ฒˆ์งธ ๋ธ”๋ก์˜ ์ฒซ ๋ฒˆ์งธ ๋ ˆ์ด์–ด์— ๋Œ€ํ•œ ๋ ˆ์ด์–ด ์ •๊ทœํ™”๋ฅผ ์˜๋ฏธํ•˜๋ฉฐ, `forward`์˜ ํŠน์ • ํ˜ธ์ถœ์€ `T5LayerNorm`์ž…๋‹ˆ๋‹ค. ์ด ๋ณด๊ณ ์„œ์˜ ๋งˆ์ง€๋ง‰ ๋ช‡ ๊ฐœ ํ”„๋ ˆ์ž„์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ``` Detected inf/nan during batch_number=0 Last 21 forward frames: abs min abs max metadata [...] 
encoder.block.2.layer.1.DenseReluDense.wi_0 Linear 2.17e-07 4.50e+00 weight 1.79e-06 4.65e+00 input[0] 2.68e-06 3.70e+01 output encoder.block.2.layer.1.DenseReluDense.wi_1 Linear 8.08e-07 2.66e+01 weight 1.79e-06 4.65e+00 input[0] 1.27e-04 2.37e+02 output encoder.block.2.layer.1.DenseReluDense.wo Linear 1.01e-06 6.44e+00 weight 0.00e+00 9.74e+03 input[0] 3.18e-04 6.27e+04 output encoder.block.2.layer.1.DenseReluDense T5DenseGatedGeluDense 1.79e-06 4.65e+00 input[0] 3.18e-04 6.27e+04 output encoder.block.2.layer.1.dropout Dropout 3.18e-04 6.27e+04 input[0] 0.00e+00 inf output ``` ๋งˆ์ง€๋ง‰ ํ”„๋ ˆ์ž„์€ `Dropout.forward` ํ•จ์ˆ˜์— ๋Œ€ํ•œ ๋ณด๊ณ ์ž…๋‹ˆ๋‹ค. ์ฒซ ๋ฒˆ์งธ ํ•ญ๋ชฉ์€ ์œ ์ผํ•œ ์ž…๋ ฅ์„ ๋‚˜ํƒ€๋‚ด๊ณ  ๋‘ ๋ฒˆ์งธ ํ•ญ๋ชฉ์€ ์œ ์ผํ•œ ์ถœ๋ ฅ์„ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. ์ด ํ•จ์ˆ˜๊ฐ€ `DenseReluDense` ํด๋ž˜์Šค ๋‚ด๋ถ€์˜ `dropout` ์†์„ฑ์—์„œ ํ˜ธ์ถœ๋œ ๊ฒƒ์„ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Š” ์ฒซ ๋ฒˆ์งธ ๋ ˆ์ด์–ด์˜ ๋‘ ๋ฒˆ์งธ ๋ธ”๋ก์—์„œ ์ฒซ ๋ฒˆ์งธ ๋ฐฐ์น˜ ์ค‘์— ๋ฐœ์ƒํ–ˆ๋‹ค๋Š” ๊ฒƒ์„ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋งˆ์ง€๋ง‰์œผ๋กœ, ์ ˆ๋Œ€์ ์œผ๋กœ ๊ฐ€์žฅ ํฐ ์ž…๋ ฅ ์š”์†Œ๋Š” `6.27e+04`์ด๊ณ  ์ถœ๋ ฅ๋„ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ `inf`์ž…๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์—์„œ๋Š” `T5DenseGatedGeluDense.forward`๊ฐ€ ์ถœ๋ ฅ ํ™œ์„ฑํ™”๋ฅผ ์ƒ์„ฑํ•˜๋Š”๋ฐ, ์ ˆ๋Œ€์ ์œผ๋กœ ๊ฐ€์žฅ ํฐ ๊ฐ’์ด ์•ฝ 62.7K์ธ ๊ฒƒ์„ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๊ฐ’์€ fp16์˜ ์ตœ๋Œ€ ์ œํ•œ์ธ 64K์— ๋งค์šฐ ๊ทผ์ ‘ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ ํ”„๋ ˆ์ž„์—์„œ๋Š” ์ผ๋ถ€ ์š”์†Œ๋ฅผ 0์œผ๋กœ ๋งŒ๋“  ํ›„ ๊ฐ€์ค‘์น˜๋ฅผ ์žฌ์ •๊ทœํ™”ํ•˜๋Š” `Dropout`์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋กœ ์ธํ•ด ์ ˆ๋Œ€ ์ตœ๋Œ€๊ฐ’์ด 64K๋ฅผ ์ดˆ๊ณผํ•˜๊ณ  ์˜ค๋ฒ„ํ”Œ๋กœ์šฐ(`inf`)๊ฐ€ ๋ฐœ์ƒํ•ฉ๋‹ˆ๋‹ค. ๋ณด์‹œ๋‹ค์‹œํ”ผ, fp16 ์ˆซ์ž์˜ ๊ฒฝ์šฐ ์ˆซ์ž๊ฐ€ ๋งค์šฐ ์ปค์งˆ ๋•Œ ์ด์ „ ํ”„๋ ˆ์ž„์„ ์‚ดํŽด๋ณด์•„์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ณด๊ณ ์„œ๋ฅผ `models/t5/modeling_t5.py`์˜ ์ฝ”๋“œ์™€ ์ผ์น˜์‹œ์ผœ ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ```python class T5DenseGatedGeluDense(nn.Module): def __init__(self, config): super().__init__() self.wi_0 = nn.Linear(config.d_model, config.d_ff, bias=False) self.wi_1 = nn.Linear(config.d_model, config.d_ff, bias=False) self.wo = nn.Linear(config.d_ff, config.d_model, bias=False) self.dropout = nn.Dropout(config.dropout_rate) self.gelu_act = ACT2FN["gelu_new"] def forward(self, hidden_states): hidden_gelu = self.gelu_act(self.wi_0(hidden_states)) hidden_linear = self.wi_1(hidden_states) hidden_states = hidden_gelu * hidden_linear hidden_states = self.dropout(hidden_states) hidden_states = self.wo(hidden_states) return hidden_states ``` ์ด์ œ `dropout` ํ˜ธ์ถœ๊ณผ ์ด์ „์˜ ๋ชจ๋“  ํ˜ธ์ถœ์„ ์‰ฝ๊ฒŒ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ฐ์ง€๋Š” `forward` ํ›„ํฌ์—์„œ ๋ฐœ์ƒํ•˜๋ฏ€๋กœ, ์ด๋Ÿฌํ•œ ๋ณด๊ณ ์„œ๋Š” ๊ฐ `forward`๊ฐ€ ๋ฐ˜ํ™˜๋œ ์งํ›„์— ์ฆ‰์‹œ ์ถœ๋ ฅ๋ฉ๋‹ˆ๋‹ค. ์ „์ฒด ๋ณด๊ณ ์„œ๋กœ ๋Œ์•„๊ฐ€์„œ ๋ฌธ์ œ์— ๋Œ€ํ•œ ์กฐ์น˜ ๋ฐ ์ˆ˜์ •์„ ํ•˜๋ ค๋ฉด, ์ˆซ์ž๊ฐ€ ์ฆ๊ฐ€ํ•˜๊ธฐ ์‹œ์ž‘ํ•œ ๋ช‡ ๊ฐœ์˜ ํ”„๋ ˆ์ž„ ์œ„๋กœ ์ด๋™ํ•ด์„œ ์—ฌ๊ธฐ์„œ `fp32` ๋ชจ๋“œ๋กœ ์ „ํ™˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•ด์•ผ ์ˆซ์ž๊ฐ€ ๊ณฑํ•ด์ง€๊ฑฐ๋‚˜ ํ•ฉ์ณ์งˆ ๋•Œ ์˜ค๋ฒ„ํ”Œ๋กœ์šฐ๋˜์ง€ ์•Š์„ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์Šต๋‹ˆ๋‹ค. ๋ฌผ๋ก  ๋‹ค๋ฅธ ํ•ด๊ฒฐ์ฑ…๋„ ์žˆ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
์˜ˆ๋ฅผ ๋“ค์–ด, `amp`๊ฐ€ ํ™œ์„ฑํ™”๋œ ๊ฒฝ์šฐ ์ผ์‹œ์ ์œผ๋กœ ๋„๊ณ  ์›๋ž˜์˜ `forward`๋ฅผ ๋„์šฐ๋ฏธ ๋ž˜ํผ๋กœ ์ด๋™ํ•œ ํ›„ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```python def _forward(self, hidden_states): hidden_gelu = self.gelu_act(self.wi_0(hidden_states)) hidden_linear = self.wi_1(hidden_states) hidden_states = hidden_gelu * hidden_linear hidden_states = self.dropout(hidden_states) hidden_states = self.wo(hidden_states) return hidden_states import torch def forward(self, hidden_states): if torch.is_autocast_enabled(): with torch.cuda.amp.autocast(enabled=False): return self._forward(hidden_states) else: return self._forward(hidden_states) ``` ์ž๋™ ๊ฐ์ง€๊ธฐ๋Š” ์ „์ฒด ํ”„๋ ˆ์ž„์˜ ์ž…๋ ฅ๊ณผ ์ถœ๋ ฅ์— ๋Œ€ํ•ด์„œ๋งŒ ๋ณด๊ณ ํ•˜๋ฏ€๋กœ, ์–ด๋””๋ฅผ ์‚ดํŽด๋ด์•ผ ํ•˜๋Š”์ง€ ์•Œ๋ฉด ํŠน์ • `forward` ํ•จ์ˆ˜์˜ ์ค‘๊ฐ„ ๋‹จ๊ณ„๋„ ๋ถ„์„ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๊ฒฝ์šฐ์—๋Š” `detect_overflow` ๋„์šฐ๋ฏธ ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์›ํ•˜๋Š” ์œ„์น˜์— ๊ฐ์ง€๊ธฐ๋ฅผ ์‚ฝ์ž…ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด: ```python from debug_utils import detect_overflow class T5LayerFF(nn.Module): [...] def forward(self, hidden_states): forwarded_states = self.layer_norm(hidden_states) detect_overflow(forwarded_states, "after layer_norm") forwarded_states = self.DenseReluDense(forwarded_states) detect_overflow(forwarded_states, "after DenseReluDense") return hidden_states + self.dropout(forwarded_states) ``` ์—ฌ๊ธฐ์„œ๋Š” ์ด๋ฅผ ์ถ”๊ฐ€ํ•˜์—ฌ 2๊ฐœ์˜ ๊ฒƒ์„ ์ถ”์ ํ•˜๊ณ  ์ด์ œ `forwarded_states`์˜ `inf` ๋˜๋Š” `nan`์ด ์ค‘๊ฐ„์— ๊ฐ์ง€๋˜์—ˆ๋Š”์ง€๋ฅผ ์ถ”์ ํ•ฉ๋‹ˆ๋‹ค. ์‹ค์ œ๋กœ ์œ„์˜ ์˜ˆ์ œ์—์„œ ๊ฐ ํ˜ธ์ถœ์ด `nn.Module`์ด๊ธฐ ๋•Œ๋ฌธ์— ํƒ์ง€๊ธฐ๊ฐ€ ์ด๋ฏธ ์ด๋ฅผ ๋ณด๊ณ ํ•ฉ๋‹ˆ๋‹ค. ๋กœ์ปฌ์—์„œ ์ง์ ‘ ๊ณ„์‚ฐํ•˜๋Š” ๊ฒฝ์šฐ ์ด๋ ‡๊ฒŒ ์ˆ˜ํ–‰ํ•œ๋‹ค๊ณ  ๊ฐ€์ •ํ•ด ๋ด…์‹œ๋‹ค. ๋˜ํ•œ, ์ž์ฒด ์ฝ”๋“œ์—์„œ ๋””๋ฒ„๊ฑฐ๋ฅผ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๋Š” ๊ฒฝ์šฐ ๊ธฐ๋ณธ๊ฐ’์—์„œ ์ถœ๋ ฅ๋˜๋Š” ํ”„๋ ˆ์ž„ ์ˆ˜๋ฅผ ์กฐ์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด: ```python from transformers.debug_utils import DebugUnderflowOverflow debug_overflow = DebugUnderflowOverflow(model, max_frames_to_save=100) ``` ### ํŠน์ • ๋ฐฐ์น˜์˜ ์ ˆ๋Œ“๊ฐ’ ์ตœ์†Œ ๋ฐ ์ตœ๋Œ€ ๊ฐ’ ์ถ”์  [[specific-batch-absolute-min-and-max-value-tracing]] ๋™์ผํ•œ ๋””๋ฒ„๊น… ํด๋ž˜์Šค๋Š” ์–ธ๋”ํ”Œ๋กœ์šฐ/์˜ค๋ฒ„ํ”Œ๋กœ์šฐ ๊ฐ์ง€ ๊ธฐ๋Šฅ์ด ๊บผ์ง„ ์ƒํƒœ์—์„œ ๋ฐฐ์น˜๋ณ„ ์ถ”์ ์—๋„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ํŠน์ • ๋ฐฐ์น˜์˜ ๊ฐ `forward` ํ˜ธ์ถœ์˜ ๋ชจ๋“  ๊ตฌ์„ฑ ์„ฑ๋ถ„์— ๋Œ€ํ•œ ์ ˆ๋Œ€ ์ตœ์†Ÿ๊ฐ’๊ณผ ์ตœ๋Œ“๊ฐ’์„ ํ™•์ธํ•˜๊ณ , ์ด๋ฅผ ๋ฐฐ์น˜ 1๊ณผ 3์— ๋Œ€ํ•ด์„œ๋งŒ ์ˆ˜ํ–‰ํ•˜๋ ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ด ํด๋ž˜์Šค๋ฅผ ์ธ์Šคํ„ด์Šคํ™”ํ•ฉ๋‹ˆ๋‹ค: ```python debug_overflow = DebugUnderflowOverflow(model, trace_batch_nums=[1, 3]) ``` ๊ทธ๋Ÿฌ๋ฉด ์ด์ œ ๋ฐฐ์น˜ 1๊ณผ 3 ์ „์ฒด๊ฐ€ ์–ธ๋”ํ”Œ๋กœ์šฐ/์˜ค๋ฒ„ํ”Œ๋กœ์šฐ ๊ฐ์ง€๊ธฐ์™€ ๋™์ผํ•œ ํ˜•์‹์œผ๋กœ ์ถ”์ ๋ฉ๋‹ˆ๋‹ค. ๋ฐฐ์น˜๋Š” 0๋ถ€ํ„ฐ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ํ”„๋กœ๊ทธ๋žจ์ด ํŠน์ • ๋ฐฐ์น˜ ๋ฒˆํ˜ธ ์ดํ›„์— ์˜ค์ž‘๋™ํ•˜๊ธฐ ์‹œ์ž‘ํ•˜๋Š” ๊ฒƒ์„ ์•Œ๊ณ  ์žˆ๋Š” ๊ฒฝ์šฐ์— ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋ ‡๊ธฐ ๋•Œ๋ฌธ์— ํ•ด๋‹น ์˜์—ญ์œผ๋กœ ๋ฐ”๋กœ ์ด๋™ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฐ ๊ตฌ์„ฑ์— ๋Œ€ํ•œ ์ƒ˜ํ”Œ ์ถ•์†Œ๋œ ์ถœ๋ ฅ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค. ``` *** Starting batch number=1 *** abs min abs max metadata shared Embedding 1.01e-06 7.92e+02 weight 0.00e+00 2.47e+04 input[0] 5.36e-05 7.92e+02 output [...] 
decoder.dropout Dropout 1.60e-07 2.27e+01 input[0] 0.00e+00 2.52e+01 output decoder T5Stack not a tensor output lm_head Linear 1.01e-06 7.92e+02 weight 0.00e+00 1.11e+00 input[0] 6.06e-02 8.39e+01 output T5ForConditionalGeneration not a tensor output *** Starting batch number=3 *** abs min abs max metadata shared Embedding 1.01e-06 7.92e+02 weight 0.00e+00 2.78e+04 input[0] 5.36e-05 7.92e+02 output [...] ``` ์—ฌ๊ธฐ์—์„œ๋Š” ๋ชจ๋ธ์˜ forward ํ˜ธ์ถœ ์ˆ˜์™€ ๋™์ผํ•œ ์ˆ˜์˜ ํ”„๋ ˆ์ž„์ด ๋คํ”„๋˜๋ฏ€๋กœ ๋งŽ์€ ์ˆ˜์˜ ํ”„๋ ˆ์ž„์ด ์ƒ์„ฑ๋ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์›ํ•˜๋Š” ๊ฒƒ์ผ ์ˆ˜๋„ ์žˆ๊ณ  ์•„๋‹ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ๋•Œ๋กœ๋Š” ์ผ๋ฐ˜ ๋””๋ฒ„๊ฑฐ๋ณด๋‹ค ๋””๋ฒ„๊น… ๋ชฉ์ ์œผ๋กœ ๋” ์‰ฝ๊ฒŒ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ๋ฌธ์ œ๊ฐ€ ๋ฐฐ์น˜ ๋ฒˆํ˜ธ 150์—์„œ ์‹œ์ž‘ํ•˜๋Š” ๊ฒฝ์šฐ 149์™€ 150์˜ ์ถ”์ ์„ ๋คํ”„ํ•˜๊ณ  ์ˆซ์ž๊ฐ€ ์–ด๋””์„œ๋ถ€ํ„ฐ ๋‹ค๋ฅด๊ฒŒ ๋˜์—ˆ๋Š”์ง€ ๋น„๊ตํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ, ํ›ˆ๋ จ์„ ์ค‘์ง€ํ•  ๋ฐฐ์น˜ ๋ฒˆํ˜ธ๋ฅผ ์ง€์ •ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ง€์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```python debug_overflow = DebugUnderflowOverflow(model, trace_batch_nums=[1, 3], abort_after_batch_num=3) ```
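์ฐธ๊ณ ๋กœ, ์ž์ฒด ํ›ˆ๋ จ ๋ฃจํ”„์—์„œ ์ด ๋””๋ฒ„๊ฑฐ๊ฐ€ ์–ด๋Š ์œ„์น˜์— ๋“ค์–ด๊ฐ€๋Š”์ง€ ๊ฐ์„ ์žก์„ ์ˆ˜ ์žˆ๋„๋ก ์ „์ฒด ํ๋ฆ„์„ ๋‹จ์ˆœํ™”ํ•œ ์Šค์ผ€์น˜์ž…๋‹ˆ๋‹ค. ์•„๋ž˜์˜ ๋ชจ๋ธ๊ณผ ๋”๋ฏธ ๋ฐฐ์น˜๋Š” ์„ค๋ช…์„ ์œ„ํ•œ ๊ฐ€์ •์ด๋ฉฐ, ๋””๋ฒ„๊ฑฐ๋Š” forward ํ›„ํฌ๋กœ ๋™์ž‘ํ•˜๋ฏ€๋กœ ํ›ˆ๋ จ ๋ฃจํ”„ ์ž์ฒด๋Š” ์ˆ˜์ •ํ•  ํ•„์š”๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค.

```python
import torch
from transformers import AutoModelForSequenceClassification
from transformers.debug_utils import DebugUnderflowOverflow

# ๊ฐ€์ •: ์ž„์˜์˜ ๋ถ„๋ฅ˜ ๋ชจ๋ธ๊ณผ ๋”๋ฏธ ๋ฐฐ์น˜๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ์ž์ฒด ํ›ˆ๋ จ ๋ฃจํ”„
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
debug_overflow = DebugUnderflowOverflow(model)  # ๋ชจ๋ธ์— forward ํ›„ํฌ๊ฐ€ ๋“ฑ๋ก๋ฉ๋‹ˆ๋‹ค

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

batch = {
    "input_ids": torch.randint(0, 1000, (8, 32)),
    "attention_mask": torch.ones(8, 32, dtype=torch.long),
    "labels": torch.randint(0, 2, (8,)),
}

for step in range(10):
    outputs = model(**batch)  # inf/nan์ด ๊ฐ์ง€๋˜๋ฉด ์—ฌ๊ธฐ์„œ ๋ณด๊ณ ์„œ๋ฅผ ์ถœ๋ ฅํ•˜๊ณ  ์ค‘๋‹จ๋ฉ๋‹ˆ๋‹ค
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```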
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/ko/index.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๐Ÿค— Transformers [PyTorch](https://pytorch.org/), [TensorFlow](https://www.tensorflow.org/), [JAX](https://jax.readthedocs.io/en/latest/)๋ฅผ ์œ„ํ•œ ์ตœ์ฒจ๋‹จ ๋จธ์‹ ๋Ÿฌ๋‹ ๐Ÿค— Transformers๋Š” ์‚ฌ์ „ํ•™์Šต๋œ ์ตœ์ฒจ๋‹จ ๋ชจ๋ธ๋“ค์„ ์‰ฝ๊ฒŒ ๋‹ค์šด๋กœ๋“œํ•˜๊ณ  ํ›ˆ๋ จ์‹œํ‚ฌ ์ˆ˜ ์žˆ๋Š” API์™€ ๋„๊ตฌ๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์‚ฌ์ „ํ•™์Šต๋œ ๋ชจ๋ธ์„ ์“ฐ๋ฉด ์ปดํ“จํŒ… ๋น„์šฉ๊ณผ ํƒ„์†Œ ๋ฐฐ์ถœ๋Ÿ‰์ด ์ค„๊ณ , ๋ชจ๋ธ์„ ์ฒ˜์Œ๋ถ€ํ„ฐ ํ›ˆ๋ จ์‹œํ‚ค๋Š” ๋ฐ ํ•„์š”ํ•œ ์‹œ๊ฐ„๊ณผ ๋ฆฌ์†Œ์Šค๋ฅผ ์ ˆ์•ฝํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ €ํฌ ๋ชจ๋ธ๋“ค์€ ๋‹ค์–‘ํ•œ ๋ถ„์•ผ์˜ ํƒœ์Šคํฌ๋ฅผ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. ๐Ÿ“ **์ž์—ฐ์–ด ์ฒ˜๋ฆฌ**: ํ…์ŠคํŠธ ๋ถ„๋ฅ˜, ๊ฐœ์ฒด๋ช… ์ธ์‹, ์งˆ์˜์‘๋‹ต, ์–ธ์–ด ๋ชจ๋ธ๋ง, ์š”์•ฝ, ๋ฒˆ์—ญ, ๊ฐ๊ด€์‹ ์งˆ์˜์‘๋‹ต, ํ…์ŠคํŠธ ์ƒ์„ฑ<br> ๐Ÿ–ผ๏ธ **์ปดํ“จํ„ฐ ๋น„์ „**: ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜, ๊ฐ์ฒด ํƒ์ง€, ๊ฐ์ฒด ๋ถ„ํ• <br> ๐Ÿ—ฃ๏ธ **์˜ค๋””์˜ค**: ์ž๋™์Œ์„ฑ์ธ์‹, ์˜ค๋””์˜ค ๋ถ„๋ฅ˜<br> ๐Ÿ™ **๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ**: ํ‘œ ์งˆ์˜์‘๋‹ต, ๊ด‘ํ•™ ๋ฌธ์ž ์ธ์‹ (OCR), ์Šค์บ”ํ•œ ๋ฌธ์„œ์—์„œ ์ •๋ณด ์ถ”์ถœ, ๋น„๋””์˜ค ๋ถ„๋ฅ˜, ์‹œ๊ฐ ์งˆ์˜์‘๋‹ต ๐Ÿค— Transformers๋Š” PyTorch, TensorFlow์™€ JAX ๊ฐ„์˜ ์ƒํ˜ธ์šด์šฉ์„ฑ์„ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. ์œ ์—ฐํ•˜๊ฒŒ ๋ชจ๋ธ์˜ ๊ฐ ๋‹จ๊ณ„๋งˆ๋‹ค ๋‹ค๋ฅธ ํ”„๋ ˆ์ž„์›Œํฌ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ์ฝ”๋“œ 3์ค„๋งŒ ์จ์„œ ๋ชจ๋ธ์„ ํ›ˆ๋ จ์‹œํ‚จ ๋‹ค์Œ, ๋‹ค๋ฅธ ํ”„๋ ˆ์ž„์›Œํฌ ์ƒ์—์„œ ์ถ”๋ก ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ ์šด์˜ ํ™˜๊ฒฝ์— ๋ฐฐํฌํ•˜๊ธฐ ์œ„ํ•ด ONNX๋‚˜ TorchScript ํ˜•์‹์œผ๋กœ ๋‚ด๋ณด๋‚ผ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์ปค๋ฎค๋‹ˆํ‹ฐ์— ์ฐธ์—ฌํ•˜์‹œ๋ ค๋ฉด [Hub](https://huggingface.co/models), [ํฌ๋Ÿผ](https://discuss.huggingface.co/), [๋””์Šค์ฝ”๋“œ](https://discord.com/invite/JfAtkvEtRb)๋ฅผ ๋ฐฉ๋ฌธํ•ด์ฃผ์„ธ์š”! ## Hugging Face ํŒ€๊ณผ ์ง์ ‘ ๋Œ€ํ™”ํ•˜๊ณ  ์‹ถ์œผ์‹ ๊ฐ€์š”?[[hugging-face-team]] <a target="_blank" href="https://huggingface.co/support"> <img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="width: 100%; max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);"> </a> ## ์ฝ˜ํ…์ธ [[contents]] ์ €ํฌ ๊ธฐ์ˆ ๋ฌธ์„œ๋Š” ํฌ๊ฒŒ 5๊ฐœ ์„น์…˜์œผ๋กœ ๋‚˜๋ˆŒ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: - **์‹œ์ž‘ํ•˜๊ธฐ**์—์„œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ๊ฐ„๋‹จํžˆ ํ›‘์–ด๋ณด๊ณ , ๋ณธ๊ฒฉ์ ์œผ๋กœ ๋›ฐ์–ด๋“ค ์ˆ˜ ์žˆ๊ฒŒ ์„ค์น˜ ๋ฐฉ๋ฒ•์„ ์•ˆ๋‚ดํ•ฉ๋‹ˆ๋‹ค. - **ํŠœํ† ๋ฆฌ์–ผ**์—์„œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์— ์ต์ˆ™ํ•ด์งˆ ์ˆ˜ ์žˆ๋„๋ก ์ž์„ธํ•˜๊ณ ๋„ ์‰ฝ๊ฒŒ ๊ธฐ๋ณธ์ ์ธ ๋ถ€๋ถ„์„ ์•ˆ๋‚ดํ•ฉ๋‹ˆ๋‹ค. - **How-to ๊ฐ€์ด๋“œ**์—์„œ ์–ธ์–ด ๋ชจ๋ธ๋ง์„ ์œ„ํ•ด ์‚ฌ์ „ํ•™์Šต๋œ ๋ชจ๋ธ์„ ํŒŒ์ธ ํŠœ๋‹ํ•˜๋Š” ๋ฐฉ๋ฒ•์ด๋‚˜, ์ง์ ‘ ๋ชจ๋ธ์„ ์ž‘์„ฑํ•˜๊ณ  ๊ณต์œ ํ•˜๋Š” ๋ฐฉ๋ฒ•๊ณผ ๊ฐ™์ด ํŠน์ • ๋ชฉํ‘œ๋ฅผ ๋‹ฌ์„ฑํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์•ˆ๋‚ดํ•ฉ๋‹ˆ๋‹ค. 
- **๊ฐœ๋… ๊ฐ€์ด๋“œ**์—์„œ ๐Ÿค— Transformers์˜ ์„ค๊ณ„ ์ฒ ํ•™๊ณผ ํ•จ๊ป˜ ๋ชจ๋ธ์ด๋‚˜ ํƒœ์Šคํฌ ๋’ค์— ์ˆจ๊ฒจ์ง„ ๊ฐœ๋…๋“ค๊ณผ ์•„์ด๋””์–ด๋ฅผ ํƒ๊ตฌํ•˜๊ณ  ์„ค๋ช…์„ ๋ง๋ถ™์ž…๋‹ˆ๋‹ค. - **API**์—์„œ ๋ชจ๋“  ํด๋ž˜์Šค์™€ ํ•จ์ˆ˜๋ฅผ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. - **๋ฉ”์ธ ํด๋ž˜์Šค**์—์„œ configuration, model, tokenizer, pipeline๊ณผ ๊ฐ™์ด ์ œ์ผ ์ค‘์š”ํ•œ ํด๋ž˜์Šค๋“ค์„ ์ž์„ธํžˆ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. - **๋ชจ๋ธ**์—์„œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ ์† ๊ตฌํ˜„๋œ ๊ฐ ๋ชจ๋ธ๊ณผ ์—ฐ๊ด€๋œ ํด๋ž˜์Šค์™€ ํ•จ์ˆ˜๋ฅผ ์ž์„ธํžˆ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. - **๋‚ด๋ถ€ ์œ ํ‹ธ๋ฆฌํ‹ฐ**์—์„œ ๋‚ด๋ถ€์ ์œผ๋กœ ์‚ฌ์šฉ๋˜๋Š” ์œ ํ‹ธ๋ฆฌํ‹ฐ ํด๋ž˜์Šค์™€ ํ•จ์ˆ˜๋ฅผ ์ž์„ธํžˆ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. ### ์ง€์› ๋ชจ๋ธ[[supported-models]] <!--This list is updated automatically from the README with _make fix-copies_. Do not update manually! --> 1. **[ALBERT](model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut. 1. **[BART](model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer. 1. **[BARThez](model_doc/barthez)** (from ร‰cole polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis. 1. **[BARTpho](model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen. 1. **[BEiT](model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei. 1. **[BERT](model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. 1. **[BERT For Sequence Generation](model_doc/bert-generation)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 1. **[BERTweet](model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen. 1. **[BigBird-Pegasus](model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 1. 
**[BigBird-RoBERTa](model_doc/big_bird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 1. **[Blenderbot](model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 1. **[BlenderbotSmall](model_doc/blenderbot-small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 1. **[BLOOM](model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/). 1. **[BORT](model_doc/bort)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry. 1. **[ByT5](model_doc/byt5)** (from Google Research) released with the paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel. 1. **[CamemBERT](model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suรกrez*, Yoann Dupont, Laurent Romary, ร‰ric Villemonte de la Clergerie, Djamรฉ Seddah and Benoรฎt Sagot. 1. **[CANINE](model_doc/canine)** (from Google Research) released with the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting. 1. **[CLIP](model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever. 1. **[CLIPSeg](model_doc/clipseg)** (from University of Gรถttingen) released with the paper [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) by Timo Lรผddecke and Alexander Ecker. 1. **[CodeGen](model_doc/codegen)** (from Salesforce) released with the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. 1. **[Conditional DETR](model_doc/conditional_detr)** (from Microsoft Research Asia) released with the paper [Conditional DETR for Fast Training Convergence](https://arxiv.org/abs/2108.06152) by Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang. 1. 
**[ConvBERT](model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan. 1. **[ConvNeXT](model_doc/convnext)** (from Facebook AI) released with the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie. 1. **[ConvNeXTV2](model_doc/convnextv2)** (from Facebook AI) released with the paper [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) by Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie. 1. **[CPM](model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun. 1. **[CTRL](model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher. 1. **[CvT](model_doc/cvt)** (from Microsoft) released with the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang. 1. **[Data2Vec](model_doc/data2vec)** (from Facebook) released with the paper [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli. 1. **[DeBERTa](model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 1. **[DeBERTa-v2](model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 1. **[Decision Transformer](model_doc/decision_transformer)** (from Berkeley/Facebook/Google) released with the paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch. 1. **[Deformable DETR](model_doc/deformable_detr)** (from SenseTime Research) released with the paper [Deformable DETR: Deformable Transformers for End-to-End Object Detection](https://arxiv.org/abs/2010.04159) by Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai. 1. **[DeiT](model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervรฉ Jรฉgou. 1. 
**[DETR](model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko. 1. **[DialoGPT](model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan. 1. **[DistilBERT](model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) and a German version of DistilBERT. 1. **[DiT](model_doc/dit)** (from Microsoft Research) released with the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei. 1. **[Donut](model_doc/donut)** (from NAVER), released together with the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park. 1. **[DPR](model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas OฤŸuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 1. **[DPT](master/model_doc/dpt)** (from Intel Labs) released with the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by Renรฉ Ranftl, Alexey Bochkovskiy, Vladlen Koltun. 1. **[EfficientNet](model_doc/efficientnet)** (from Google Research) released with the paper [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan and Quoc V. Le. 1. **[ELECTRA](model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning. 1. **[EncoderDecoder](model_doc/encoder-decoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 1. **[ERNIE](model_doc/ernie)** (from Baidu) released with the paper [ERNIE: Enhanced Representation through Knowledge Integration](https://arxiv.org/abs/1904.09223) by Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, Hua Wu. 1. **[ESM](model_doc/esm)** (from Meta AI) are transformer protein language models. 
**ESM-1b** was released with the paper [Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences](https://www.pnas.org/content/118/15/e2016239118) by Alexander Rives, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, Myle Ott, C. Lawrence Zitnick, Jerry Ma, and Rob Fergus. **ESM-1v** was released with the paper [Language models enable zero-shot prediction of the effects of mutations on protein function](https://doi.org/10.1101/2021.07.09.450648) by Joshua Meier, Roshan Rao, Robert Verkuil, Jason Liu, Tom Sercu and Alexander Rives. **ESM-2 and ESMFold** were released with the paper [Language models of protein sequences at the scale of evolution enable accurate structure prediction](https://doi.org/10.1101/2022.07.20.500902) by Zeming Lin, Halil Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Allan dos Santos Costa, Maryam Fazel-Zarandi, Tom Sercu, Sal Candido, Alexander Rives. 1. **[FLAN-T5](model_doc/flan-t5)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei 1. **[FlauBERT](model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loรฏc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoรฎt Crabbรฉ, Laurent Besacier, Didier Schwab. 1. **[FLAVA](model_doc/flava)** (from Facebook AI) released with the paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela. 1. **[FNet](model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon. 1. **[Funnel Transformer](model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le. 1. **[GLPN](model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim. 1. **[GPT](model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://openai.com/research/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever. 1. **[GPT Neo](model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy. 1. 
**[GPT NeoX](model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach 1. **[GPT NeoX Japanese](model_doc/gpt_neox_japanese)** (from ABEJA) released by Shinya Otani, Takayoshi Makabe, Anuj Arora, and Kyo Hattori. 1. **[GPT-2](model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://openai.com/research/better-language-models/) by Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei and Ilya Sutskever. 1. **[GPT-J](model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki. 1. **[GPTSAN-japanese](model_doc/gptsan-japanese)** released in the repository [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/model.md) by Toshiyuki Sakamoto(tanreinama). 1. **[GroupViT](model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang. 1. **[Hubert](model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed. 1. **[I-BERT](model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer. 1. **[ImageGPT](model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever. 1. **[Jukebox](model_doc/jukebox)** (from OpenAI) released with the paper [Jukebox: A Generative Model for Music](https://arxiv.org/pdf/2005.00341.pdf) by Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, Ilya Sutskever. 1. **[LayoutLM](model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou. 1. **[LayoutLMv2](model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou. 1. **[LayoutLMv3](model_doc/layoutlmv3)** (from Microsoft Research Asia) released with the paper [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei. 1. 
**[LayoutXLM](model_doc/layoutxlm)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei. 1. **[LED](model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan. 1. **[LeViT](model_doc/levit)** (from Meta AI) released with the paper [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervรฉ Jรฉgou, Matthijs Douze. 1. **[LiLT](model_doc/lilt)** (from South China University of Technology) released with the paper [LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding](https://arxiv.org/abs/2202.13669) by Jiapeng Wang, Lianwen Jin, Kai Ding. 1. **[Longformer](model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan. 1. **[LongT5](model_doc/longt5)** (from Google AI) released with the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang. 1. **[LUKE](model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto. 1. **[LXMERT](model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal. 1. **[M-CTC-T](model_doc/mctct)** (from Facebook) released with the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert. 1. **[M2M100](model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin. 1. **[MarianMT](model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jรถrg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team. 1. **[MarkupLM](model_doc/markuplm)** (from Microsoft Research Asia) released with the paper [MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518) by Junlong Li, Yiheng Xu, Lei Cui, Furu Wei. 1. **[Mask2Former](model_doc/mask2former)** (from FAIR and UIUC) released with the paper [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) by Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, Rohit Girdhar. 1. 
**[MaskFormer](model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov. 1. **[mBART](model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer. 1. **[mBART-50](model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan. 1. **[Megatron-BERT](model_doc/megatron-bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro. 1. **[Megatron-GPT2](model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro. 1. **[mLUKE](model_doc/mluke)** (from Studio Ousia) released with the paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka. 1. **[MobileBERT](model_doc/mobilebert)** (from CMU/Google Brain) released with the paper [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. 1. **[MobileViT](model_doc/mobilevit)** (from Apple) released with the paper [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari. 1. **[MPNet](model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu. 1. **[MT5](model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel. 1. **[MVP](model_doc/mvp)** (from RUC AI Box) released with the paper [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen. 1. **[Nezha](model_doc/nezha)** (from Huawei Noahโ€™s Ark Lab) released with the paper [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) by Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu. 1. **[NLLB](model_doc/nllb)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team. 1. 
**[Nystrรถmformer](model_doc/nystromformer)** (from the University of Wisconsin - Madison) released with the paper [Nystrรถmformer: A Nystrรถm-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh. 1. **[OneFormer](model_doc/oneformer)** (from SHI Labs) released with the paper [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) by Jitesh Jain, Jiachen Li, MangTik Chiu, Ali Hassani, Nikita Orlov, Humphrey Shi. 1. **[OPT](master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al. 1. **[OWL-ViT](model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby. 1. **[Pegasus](model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu. 1. **[PEGASUS-X](model_doc/pegasus_x)** (from Google) released with the paper [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) by Jason Phang, Yao Zhao, and Peter J. Liu. 1. **[Perceiver IO](model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hรฉnaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, Joรฃo Carreira. 1. **[PhoBERT](model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen. 1. **[PLBart](model_doc/plbart)** (from UCLA NLP) released with the paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang. 1. **[PoolFormer](model_doc/poolformer)** (from Sea AI Labs) released with the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng. 1. **[ProphetNet](model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou. 1. **[QDQBert](model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius. 1. 
**[RAG](model_doc/rag)** (from Facebook) released with the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) by Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Kรผttler, Mike Lewis, Wen-tau Yih, Tim Rocktรคschel, Sebastian Riedel, Douwe Kiela. 1. **[REALM](model_doc/realm.html)** (from Google Research) released with the paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang. 1. **[Reformer](model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, ลukasz Kaiser, Anselm Levskaya. 1. **[RegNet](model_doc/regnet)** (from META Platforms) released with the paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollรกr. 1. **[RemBERT](model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/abs/2010.12821) by Hyung Won Chung, Thibault Fรฉvry, Henry Tsai, M. Johnson, Sebastian Ruder. 1. **[ResNet](model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 1. **[RoBERTa](model_doc/roberta)** (from Facebook), released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov. 1. **[RoCBert](model_doc/roc_bert)** (from WeChatAI) released with the paper [RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining](https://aclanthology.org/2022.acl-long.65.pdf) by HuiSu, WeiweiShi, XiaoyuShen, XiaoZhou, TuoJi, JiaruiFang, JieZhou. 1. **[RoFormer](model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/abs/2104.09864) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu. 1. **[SegFormer](model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo. 1. **[SEW](model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 1. **[SEW-D](model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 1. **[SpeechToTextTransformer](model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino. 1. 
**[SpeechToTextTransformer2](model_doc/speech_to_text_2)** (from Facebook), released together with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau. 1. **[Splinter](model_doc/splinter)** (from Tel Aviv University), released together with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy. 1. **[SqueezeBERT](model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer. 1. **[Swin Transformer](model_doc/swin)** (from Microsoft) released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo. 1. **[Swin Transformer V2](model_doc/swinv2)** (from Microsoft) released with the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo. 1. **[T5](model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu. 1. **[T5v1.1](model_doc/t5v1.1)** (from Google AI) released in the repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu. 1. **[Table Transformer](model_doc/table-transformer)** (from Microsoft Research) released with the paper [PubTables-1M: Towards Comprehensive Table Extraction From Unstructured Documents](https://arxiv.org/abs/2110.00061) by Brandon Smock, Rohith Pesala, Robin Abraham. 1. **[TAPAS](model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweล‚ Krzysztof Nowak, Thomas Mรผller, Francesco Piccinno and Julian Martin Eisenschlos. 1. **[TAPEX](model_doc/tapex)** (from Microsoft Research) released with the paper [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. 1. **[Time Series Transformer](model_doc/time_series_transformer)** (from HuggingFace). 1. **[Trajectory Transformer](model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine 1. 
**[Transformer-XL](model_doc/transfo-xl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov. 1. **[TrOCR](model_doc/trocr)** (from Microsoft), released together with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei. 1. **[UL2](model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler 1. **[UniSpeech](model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang. 1. **[UniSpeechSat](model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu. 1. **[VAN](model_doc/van)** (from Tsinghua University and Nankai University) released with the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu. 1. **[VideoMAE](model_doc/videomae)** (from Multimedia Computing Group, Nanjing University) released with the paper [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Zhan Tong, Yibing Song, Jue Wang, Limin Wang. 1. **[ViLT](model_doc/vilt)** (from NAVER AI Lab/Kakao Enterprise/Kakao Brain) released with the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Wonjae Kim, Bokyung Son, Ildoo Kim. 1. **[Vision Transformer (ViT)](model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. 1. **[VisualBERT](model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang. 1. **[ViTMAE](model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollรกr, Ross Girshick. 1. **[ViTMSN](model_doc/vit_msn)** (from Meta AI) released with the paper [Masked Siamese Networks for Label-Efficient Learning](https://arxiv.org/abs/2204.07141) by Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas. 1. 
**[Wav2Vec2](model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli. 1. **[Wav2Vec2-Conformer](model_doc/wav2vec2-conformer)** (from Facebook AI) released with the paper [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino. 1. **[Wav2Vec2Phoneme](model_doc/wav2vec2_phoneme)** (from Facebook AI) released with the paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli. 1. **[WavLM](model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei. 1. **[Whisper](model_doc/whisper)** (from OpenAI) released with the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://cdn.openai.com/papers/whisper.pdf) by Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever. 1. **[X-CLIP](model_doc/xclip)** (from Microsoft Research) released with the paper [Expanding Language-Image Pretrained Models for General Video Recognition](https://arxiv.org/abs/2208.02816) by Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, Haibin Ling. 1. **[XGLM](model_doc/xglm)** (From Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li. 1. **[XLM](model_doc/xlm)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau. 1. **[XLM-ProphetNet](model_doc/xlm-prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou. 1. **[XLM-RoBERTa](model_doc/xlm-roberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmรกn, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. 1. **[XLM-RoBERTa-XL](model_doc/xlm-roberta-xl)** (from Facebook AI), released together with the paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau. 1. 
**[XLNet](model_doc/xlnet)** (from Google/CMU) released with the paper [โ€‹XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le. 1. **[XLS-R](model_doc/xls_r)** (from Facebook AI) released with the paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli. 1. **[XLSR-Wav2Vec2](model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli. 1. **[YOLOS](model_doc/yolos)** (from Huazhong University of Science & Technology) released with the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu. 1. **[YOSO](model_doc/yoso)** (from the University of Wisconsin - Madison) released with the paper [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh. ### ์ง€์› ํ”„๋ ˆ์ž„์›Œํฌ[[supported-framework]] ์•„๋ž˜ ํ‘œ๋Š” ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ ์† ๊ฐ ๋ชจ๋ธ์˜ ์ง€์› ํ˜„ํ™ฉ์„ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. ํ† ํฐํ™”๋ฅผ ํŒŒ์ด์ฌ (๋ณ„์นญ "slow") ๋˜๋Š” ๐Ÿค— Tokenizers (๋ณ„์นญ "fast") ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋กœ ํ•˜๋Š”์ง€; (Flax๋ฅผ ํ†ตํ•œ) Jax, PyTorch, TensorFlow ์ค‘ ์–ด๋–ค ํ”„๋ ˆ์ž„์›Œํฌ๋ฅผ ์ง€์›ํ•˜๋Š”์ง€ ํ‘œ์‹œ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. <!--This table is updated automatically from the auto modules with _make fix-copies_. 
Do not update manually!--> | Model | Tokenizer slow | Tokenizer fast | PyTorch support | TensorFlow support | Flax Support | |:---------------------------:|:--------------:|:--------------:|:---------------:|:------------------:|:------------:| | ALBERT | โœ… | โœ… | โœ… | โœ… | โœ… | | BART | โœ… | โœ… | โœ… | โœ… | โœ… | | BEiT | โŒ | โŒ | โœ… | โŒ | โœ… | | BERT | โœ… | โœ… | โœ… | โœ… | โœ… | | Bert Generation | โœ… | โŒ | โœ… | โŒ | โŒ | | BigBird | โœ… | โœ… | โœ… | โŒ | โœ… | | BigBird-Pegasus | โŒ | โŒ | โœ… | โŒ | โŒ | | Blenderbot | โœ… | โœ… | โœ… | โœ… | โœ… | | BlenderbotSmall | โœ… | โœ… | โœ… | โœ… | โœ… | | BLOOM | โŒ | โœ… | โœ… | โŒ | โŒ | | CamemBERT | โœ… | โœ… | โœ… | โœ… | โŒ | | CANINE | โœ… | โŒ | โœ… | โŒ | โŒ | | CLIP | โœ… | โœ… | โœ… | โœ… | โœ… | | CLIPSeg | โŒ | โŒ | โœ… | โŒ | โŒ | | CodeGen | โœ… | โœ… | โœ… | โŒ | โŒ | | Conditional DETR | โŒ | โŒ | โœ… | โŒ | โŒ | | ConvBERT | โœ… | โœ… | โœ… | โœ… | โŒ | | ConvNeXT | โŒ | โŒ | โœ… | โœ… | โŒ | | CTRL | โœ… | โŒ | โœ… | โœ… | โŒ | | CvT | โŒ | โŒ | โœ… | โœ… | โŒ | | Data2VecAudio | โŒ | โŒ | โœ… | โŒ | โŒ | | Data2VecText | โŒ | โŒ | โœ… | โŒ | โŒ | | Data2VecVision | โŒ | โŒ | โœ… | โœ… | โŒ | | DeBERTa | โœ… | โœ… | โœ… | โœ… | โŒ | | DeBERTa-v2 | โœ… | โœ… | โœ… | โœ… | โŒ | | Decision Transformer | โŒ | โŒ | โœ… | โŒ | โŒ | | Deformable DETR | โŒ | โŒ | โœ… | โŒ | โŒ | | DeiT | โŒ | โŒ | โœ… | โœ… | โŒ | | DETR | โŒ | โŒ | โœ… | โŒ | โŒ | | DistilBERT | โœ… | โœ… | โœ… | โœ… | โœ… | | DonutSwin | โŒ | โŒ | โœ… | โŒ | โŒ | | DPR | โœ… | โœ… | โœ… | โœ… | โŒ | | DPT | โŒ | โŒ | โœ… | โŒ | โŒ | | ELECTRA | โœ… | โœ… | โœ… | โœ… | โœ… | | Encoder decoder | โŒ | โŒ | โœ… | โœ… | โœ… | | ERNIE | โŒ | โŒ | โœ… | โŒ | โŒ | | ESM | โœ… | โŒ | โœ… | โœ… | โŒ | | FairSeq Machine-Translation | โœ… | โŒ | โœ… | โŒ | โŒ | | FlauBERT | โœ… | โŒ | โœ… | โœ… | โŒ | | FLAVA | โŒ | โŒ | โœ… | โŒ | โŒ | | FNet | โœ… | โœ… | โœ… | โŒ | โŒ | | Funnel Transformer | โœ… | โœ… | โœ… | โœ… | โŒ | | GLPN | โŒ | โŒ | โœ… | โŒ | โŒ | | GPT Neo | โŒ | โŒ | โœ… | โŒ | โœ… | | GPT NeoX | โŒ | โœ… | โœ… | โŒ | โŒ | | GPT NeoX Japanese | โœ… | โŒ | โœ… | โŒ | โŒ | | GPT-J | โŒ | โŒ | โœ… | โœ… | โœ… | | GroupViT | โŒ | โŒ | โœ… | โœ… | โŒ | | Hubert | โŒ | โŒ | โœ… | โœ… | โŒ | | I-BERT | โŒ | โŒ | โœ… | โŒ | โŒ | | ImageGPT | โŒ | โŒ | โœ… | โŒ | โŒ | | Jukebox | โœ… | โŒ | โœ… | โŒ | โŒ | | LayoutLM | โœ… | โœ… | โœ… | โœ… | โŒ | | LayoutLMv2 | โœ… | โœ… | โœ… | โŒ | โŒ | | LayoutLMv3 | โœ… | โœ… | โœ… | โœ… | โŒ | | LED | โœ… | โœ… | โœ… | โœ… | โŒ | | LeViT | โŒ | โŒ | โœ… | โŒ | โŒ | | LiLT | โŒ | โŒ | โœ… | โŒ | โŒ | | Longformer | โœ… | โœ… | โœ… | โœ… | โŒ | | LongT5 | โŒ | โŒ | โœ… | โŒ | โœ… | | LUKE | โœ… | โŒ | โœ… | โŒ | โŒ | | LXMERT | โœ… | โœ… | โœ… | โœ… | โŒ | | M-CTC-T | โŒ | โŒ | โœ… | โŒ | โŒ | | M2M100 | โœ… | โŒ | โœ… | โŒ | โŒ | | Marian | โœ… | โŒ | โœ… | โœ… | โœ… | | MarkupLM | โœ… | โœ… | โœ… | โŒ | โŒ | | MaskFormer | โŒ | โŒ | โœ… | โŒ | โŒ | | mBART | โœ… | โœ… | โœ… | โœ… | โœ… | | Megatron-BERT | โŒ | โŒ | โœ… | โŒ | โŒ | | MobileBERT | โœ… | โœ… | โœ… | โœ… | โŒ | | MobileViT | โŒ | โŒ | โœ… | โœ… | โŒ | | MPNet | โœ… | โœ… | โœ… | โœ… | โŒ | | MT5 | โœ… | โœ… | โœ… | โœ… | โœ… | | MVP | โœ… | โœ… | โœ… | โŒ | โŒ | | Nezha | โŒ | โŒ | โœ… | โŒ | โŒ | | Nystrรถmformer | โŒ | โŒ | โœ… | โŒ | โŒ | | OpenAI GPT | โœ… | โœ… | โœ… 
| โœ… | โŒ | | OpenAI GPT-2 | โœ… | โœ… | โœ… | โœ… | โœ… | | OPT | โŒ | โŒ | โœ… | โœ… | โœ… | | OWL-ViT | โŒ | โŒ | โœ… | โŒ | โŒ | | Pegasus | โœ… | โœ… | โœ… | โœ… | โœ… | | PEGASUS-X | โŒ | โŒ | โœ… | โŒ | โŒ | | Perceiver | โœ… | โŒ | โœ… | โŒ | โŒ | | PLBart | โœ… | โŒ | โœ… | โŒ | โŒ | | PoolFormer | โŒ | โŒ | โœ… | โŒ | โŒ | | ProphetNet | โœ… | โŒ | โœ… | โŒ | โŒ | | QDQBert | โŒ | โŒ | โœ… | โŒ | โŒ | | RAG | โœ… | โŒ | โœ… | โœ… | โŒ | | REALM | โœ… | โœ… | โœ… | โŒ | โŒ | | Reformer | โœ… | โœ… | โœ… | โŒ | โŒ | | RegNet | โŒ | โŒ | โœ… | โœ… | โœ… | | RemBERT | โœ… | โœ… | โœ… | โœ… | โŒ | | ResNet | โŒ | โŒ | โœ… | โœ… | โœ… | | RetriBERT | โœ… | โœ… | โœ… | โŒ | โŒ | | RoBERTa | โœ… | โœ… | โœ… | โœ… | โœ… | | RoCBert | โœ… | โŒ | โœ… | โŒ | โŒ | | RoFormer | โœ… | โœ… | โœ… | โœ… | โœ… | | SegFormer | โŒ | โŒ | โœ… | โœ… | โŒ | | SEW | โŒ | โŒ | โœ… | โŒ | โŒ | | SEW-D | โŒ | โŒ | โœ… | โŒ | โŒ | | Speech Encoder decoder | โŒ | โŒ | โœ… | โŒ | โœ… | | Speech2Text | โœ… | โŒ | โœ… | โœ… | โŒ | | Speech2Text2 | โœ… | โŒ | โŒ | โŒ | โŒ | | Splinter | โœ… | โœ… | โœ… | โŒ | โŒ | | SqueezeBERT | โœ… | โœ… | โœ… | โŒ | โŒ | | Swin Transformer | โŒ | โŒ | โœ… | โœ… | โŒ | | Swin Transformer V2 | โŒ | โŒ | โœ… | โŒ | โŒ | | T5 | โœ… | โœ… | โœ… | โœ… | โœ… | | Table Transformer | โŒ | โŒ | โœ… | โŒ | โŒ | | TAPAS | โœ… | โŒ | โœ… | โœ… | โŒ | | Time Series Transformer | โŒ | โŒ | โœ… | โŒ | โŒ | | Trajectory Transformer | โŒ | โŒ | โœ… | โŒ | โŒ | | Transformer-XL | โœ… | โŒ | โœ… | โœ… | โŒ | | TrOCR | โŒ | โŒ | โœ… | โŒ | โŒ | | UniSpeech | โŒ | โŒ | โœ… | โŒ | โŒ | | UniSpeechSat | โŒ | โŒ | โœ… | โŒ | โŒ | | VAN | โŒ | โŒ | โœ… | โŒ | โŒ | | VideoMAE | โŒ | โŒ | โœ… | โŒ | โŒ | | ViLT | โŒ | โŒ | โœ… | โŒ | โŒ | | Vision Encoder decoder | โŒ | โŒ | โœ… | โœ… | โœ… | | VisionTextDualEncoder | โŒ | โŒ | โœ… | โŒ | โœ… | | VisualBERT | โŒ | โŒ | โœ… | โŒ | โŒ | | ViT | โŒ | โŒ | โœ… | โœ… | โœ… | | ViTMAE | โŒ | โŒ | โœ… | โœ… | โŒ | | ViTMSN | โŒ | โŒ | โœ… | โŒ | โŒ | | Wav2Vec2 | โœ… | โŒ | โœ… | โœ… | โœ… | | Wav2Vec2-Conformer | โŒ | โŒ | โœ… | โŒ | โŒ | | WavLM | โŒ | โŒ | โœ… | โŒ | โŒ | | Whisper | โœ… | โŒ | โœ… | โœ… | โŒ | | X-CLIP | โŒ | โŒ | โœ… | โŒ | โŒ | | XGLM | โœ… | โœ… | โœ… | โœ… | โœ… | | XLM | โœ… | โŒ | โœ… | โœ… | โŒ | | XLM-ProphetNet | โœ… | โŒ | โœ… | โŒ | โŒ | | XLM-RoBERTa | โœ… | โœ… | โœ… | โœ… | โœ… | | XLM-RoBERTa-XL | โŒ | โŒ | โœ… | โŒ | โŒ | | XLNet | โœ… | โœ… | โœ… | โœ… | โŒ | | YOLOS | โŒ | โŒ | โœ… | โŒ | โŒ | | YOSO | โŒ | โŒ | โœ… | โŒ | โŒ | <!-- End table-->
mavonic_private_repos/transformers/docs/source/ko/big_models.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ํฐ ๋ชจ๋ธ ์ธ์Šคํ„ด์Šคํ™” [[instantiating-a-big-model]] ๋งค์šฐ ํฐ ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋ ค๋ฉด, RAM ์‚ฌ์šฉ์„ ์ตœ์†Œํ™”ํ•ด์•ผ ํ•˜๋Š” ๊ณผ์ œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์ธ PyTorch ์›Œํฌํ”Œ๋กœ์šฐ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: 1. ๋ฌด์ž‘์œ„ ๊ฐ€์ค‘์น˜๋กœ ๋ชจ๋ธ์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. 2. ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๊ฐ€์ค‘์น˜๋ฅผ ๋ถˆ๋Ÿฌ์˜ต๋‹ˆ๋‹ค. 3. ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๊ฐ€์ค‘์น˜๋ฅผ ๋ฌด์ž‘์œ„ ๋ชจ๋ธ์— ์ ์šฉํ•ฉ๋‹ˆ๋‹ค. 1๋‹จ๊ณ„์™€ 2๋‹จ๊ณ„ ๋ชจ๋‘ ๋ชจ๋ธ์˜ ์ „์ฒด ๋ฒ„์ „์„ ๋ฉ”๋ชจ๋ฆฌ์— ์ ์žฌํ•ด์•ผ ํ•˜๋ฉฐ, ๋Œ€๋ถ€๋ถ„ ๋ฌธ์ œ๊ฐ€ ์—†์ง€๋งŒ ๋ชจ๋ธ์ด ๊ธฐ๊ฐ€๋ฐ”์ดํŠธ๊ธ‰์˜ ์šฉ๋Ÿ‰์„ ์ฐจ์ง€ํ•˜๊ธฐ ์‹œ์ž‘ํ•˜๋ฉด ๋ณต์‚ฌ๋ณธ 2๊ฐœ๊ฐ€ RAM์„ ์ดˆ๊ณผํ•˜์—ฌ ๋ฉ”๋ชจ๋ฆฌ ๋ถ€์กฑ ์ด์Šˆ๋ฅผ ์•ผ๊ธฐํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋” ์‹ฌ๊ฐํ•œ ๋ฌธ์ œ๋Š” ๋ถ„์‚ฐ ํ•™์Šต์„ ์œ„ํ•ด `torch.distributed`๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ, ํ”„๋กœ์„ธ์Šค๋งˆ๋‹ค ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ๋กœ๋“œํ•˜๊ณ  ๋ณต์‚ฌ๋ณธ์„ 2๊ฐœ์”ฉ RAM์— ์ €์žฅํ•œ๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. <Tip> ๋ฌด์ž‘์œ„๋กœ ์ƒ์„ฑ๋œ ๋ชจ๋ธ์€ "๋น„์–ด ์žˆ๋Š”" (์ฆ‰ ๊ทธ๋•Œ ๋ฉ”๋ชจ๋ฆฌ์— ์žˆ๋˜ ๊ฒƒ์œผ๋กœ ์ด๋ค„์ง„) ํ…์„œ๋กœ ์ดˆ๊ธฐํ™”๋˜๋ฉฐ ๋ฉ”๋ชจ๋ฆฌ ๊ณต๊ฐ„์„ ์ฐจ์ง€ํ•ฉ๋‹ˆ๋‹ค. ์ดˆ๊ธฐํ™”๋œ ๋ชจ๋ธ/ํŒŒ๋ผ๋ฏธํ„ฐ์˜ ์ข…๋ฅ˜์— ์ ํ•ฉํ•œ ๋ถ„ํฌ(์˜ˆ: ์ •๊ทœ ๋ถ„ํฌ)์— ๋”ฐ๋ฅธ ๋ฌด์ž‘์œ„ ์ดˆ๊ธฐํ™”๋Š” ๊ฐ€๋Šฅํ•œ ํ•œ ๋น ๋ฅด๊ฒŒ ํ•˜๊ธฐ ์œ„ํ•ด ์ดˆ๊ธฐํ™”๋˜์ง€ ์•Š์€ ๊ฐ€์ค‘์น˜์— ๋Œ€ํ•ด 3๋‹จ๊ณ„ ์ดํ›„์—๋งŒ ์ˆ˜ํ–‰๋ฉ๋‹ˆ๋‹ค! </Tip> ์ด ์•ˆ๋‚ด์„œ์—์„œ๋Š” Transformers๊ฐ€ ์ด ๋ฌธ์ œ๋ฅผ ํ•ด๊ฒฐํ•˜๊ธฐ ์œ„ํ•ด ์ œ๊ณตํ•˜๋Š” ์†”๋ฃจ์…˜์„ ์‚ดํŽด๋ด…๋‹ˆ๋‹ค. ์ฃผ์˜ํ•  ์ ์€ ์•„์ง ํ™œ๋ฐœํžˆ ๊ฐœ๋ฐœ ์ค‘์ธ ๋ถ„์•ผ์ด๋ฏ€๋กœ ์—ฌ๊ธฐ์„œ ์„ค๋ช…ํ•˜๋Š” API๊ฐ€ ์•ž์œผ๋กœ ์•ฝ๊ฐ„ ๋ณ€๊ฒฝ๋  ์ˆ˜ ์žˆ๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ## ์ƒค๋”ฉ๋œ ์ฒดํฌํฌ์ธํŠธ [[sharded-checkpoints]] 4.18.0 ๋ฒ„์ „ ์ดํ›„, 10GB ์ด์ƒ์˜ ๊ณต๊ฐ„์„ ์ฐจ์ง€ํ•˜๋Š” ๋ชจ๋ธ ์ฒดํฌํฌ์ธํŠธ๋Š” ์ž๋™์œผ๋กœ ์ž‘์€ ์กฐ๊ฐ๋“ค๋กœ ์ƒค๋”ฉ๋ฉ๋‹ˆ๋‹ค. `model.save_pretrained(save_dir)`๋ฅผ ์‹คํ–‰ํ•  ๋•Œ ํ•˜๋‚˜์˜ ๋‹จ์ผ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๊ฐ€์ง€๊ฒŒ ๋  ๋Œ€์‹ , ์—ฌ๋Ÿฌ ๋ถ€๋ถ„ ์ฒดํฌํฌ์ธํŠธ(๊ฐ๊ฐ์˜ ํฌ๊ธฐ๋Š” 10GB ๋ฏธ๋งŒ)์™€ ๋งค๊ฐœ๋ณ€์ˆ˜ ์ด๋ฆ„์„ ํ•ด๋‹น ํŒŒ์ผ์— ๋งคํ•‘ํ•˜๋Š” ์ธ๋ฑ์Šค๊ฐ€ ์ƒ์„ฑ๋ฉ๋‹ˆ๋‹ค. `max_shard_size` ๋งค๊ฐœ๋ณ€์ˆ˜๋กœ ์ƒค๋”ฉ ์ „ ์ตœ๋Œ€ ํฌ๊ธฐ๋ฅผ ์ œ์–ดํ•  ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ, ์ด ์˜ˆ์ œ๋ฅผ ์œ„ํ•ด ์ƒค๋“œ ํฌ๊ธฐ๊ฐ€ ์ž‘์€ ์ผ๋ฐ˜ ํฌ๊ธฐ์˜ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค: ์ „ํ†ต์ ์ธ BERT ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ด ๋ด…์‹œ๋‹ค. ```py from transformers import AutoModel model = AutoModel.from_pretrained("google-bert/bert-base-cased") ``` [`~PreTrainedModel.save_pretrained`]์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ์ €์žฅํ•˜๋ฉด, ๋ชจ๋ธ์˜ ๊ตฌ์„ฑ๊ณผ ๊ฐ€์ค‘์น˜๊ฐ€ ๋“ค์–ด์žˆ๋Š” ๋‘ ๊ฐœ์˜ ํŒŒ์ผ์ด ์žˆ๋Š” ์ƒˆ ํด๋”๊ฐ€ ์ƒ์„ฑ๋ฉ๋‹ˆ๋‹ค: ```py >>> import os >>> import tempfile >>> with tempfile.TemporaryDirectory() as tmp_dir: ... model.save_pretrained(tmp_dir) ... 
print(sorted(os.listdir(tmp_dir))) ['config.json', 'pytorch_model.bin'] ``` ์ด์ œ ์ตœ๋Œ€ ์ƒค๋“œ ํฌ๊ธฐ๋ฅผ 200MB๋กœ ์‚ฌ์šฉํ•ด ๋ด…์‹œ๋‹ค: ```py >>> with tempfile.TemporaryDirectory() as tmp_dir: ... model.save_pretrained(tmp_dir, max_shard_size="200MB") ... print(sorted(os.listdir(tmp_dir))) ['config.json', 'pytorch_model-00001-of-00003.bin', 'pytorch_model-00002-of-00003.bin', 'pytorch_model-00003-of-00003.bin', 'pytorch_model.bin.index.json'] ``` ๋ชจ๋ธ์˜ ๊ตฌ์„ฑ์— ๋”ํ•ด, ์„ธ ๊ฐœ์˜ ๋‹ค๋ฅธ ๊ฐ€์ค‘์น˜ ํŒŒ์ผ๊ณผ ํŒŒ๋ผ๋ฏธํ„ฐ ์ด๋ฆ„๊ณผ ํ•ด๋‹น ํŒŒ์ผ์˜ ๋งคํ•‘์ด ํฌํ•จ๋œ `index.json` ํŒŒ์ผ์„ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ์ฒดํฌํฌ์ธํŠธ๋Š” [`~PreTrainedModel.from_pretrained`] ๋ฉ”์„œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์™„์ „ํžˆ ๋‹ค์‹œ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> with tempfile.TemporaryDirectory() as tmp_dir: ... model.save_pretrained(tmp_dir, max_shard_size="200MB") ... new_model = AutoModel.from_pretrained(tmp_dir) ``` ํฐ ๋ชจ๋ธ์˜ ๊ฒฝ์šฐ ์ด๋Ÿฌํ•œ ๋ฐฉ์‹์œผ๋กœ ์ฒ˜๋ฆฌํ•˜๋Š” ์ฃผ๋œ ์žฅ์ ์€ ์œ„์—์„œ ๋ณด์—ฌ์ค€ ํ๋ฆ„์˜ 2๋‹จ๊ณ„์—์„œ, ๊ฐ ์ƒค๋“œ๊ฐ€ ์ด์ „ ์ƒค๋“œ ๋‹ค์Œ์— ๋กœ๋“œ๋˜๋ฏ€๋กœ ๋ฉ”๋ชจ๋ฆฌ ์‚ฌ์šฉ๋Ÿ‰์ด ๋ชจ๋ธ ํฌ๊ธฐ์™€ ๊ฐ€์žฅ ํฐ ์ƒค๋“œ์˜ ํฌ๊ธฐ๋ฅผ ์ดˆ๊ณผํ•˜์ง€ ์•Š๋Š”๋‹ค๋Š” ์ ์ž…๋‹ˆ๋‹ค. ์ด ์ธ๋ฑ์Šค ํŒŒ์ผ์€ ํ‚ค๊ฐ€ ์ฒดํฌํฌ์ธํŠธ์— ์žˆ๋Š”์ง€, ๊ทธ๋ฆฌ๊ณ  ํ•ด๋‹น ๊ฐ€์ค‘์น˜๊ฐ€ ์–ด๋””์— ์ €์žฅ๋˜์–ด ์žˆ๋Š”์ง€๋ฅผ ๊ฒฐ์ •ํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ์ด ์ธ๋ฑ์Šค๋ฅผ json๊ณผ ๊ฐ™์ด ๋กœ๋“œํ•˜๊ณ  ๋”•์…”๋„ˆ๋ฆฌ๋ฅผ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> import json >>> with tempfile.TemporaryDirectory() as tmp_dir: ... model.save_pretrained(tmp_dir, max_shard_size="200MB") ... with open(os.path.join(tmp_dir, "pytorch_model.bin.index.json"), "r") as f: ... index = json.load(f) >>> print(index.keys()) dict_keys(['metadata', 'weight_map']) ``` ๋ฉ”ํƒ€๋ฐ์ดํ„ฐ๋Š” ํ˜„์žฌ ๋ชจ๋ธ์˜ ์ด ํฌ๊ธฐ๋งŒ ํฌํ•จ๋ฉ๋‹ˆ๋‹ค. ์•ž์œผ๋กœ ๋‹ค๋ฅธ ์ •๋ณด๋ฅผ ์ถ”๊ฐ€ํ•  ๊ณ„ํš์ž…๋‹ˆ๋‹ค: ```py >>> index["metadata"] {'total_size': 433245184} ``` ๊ฐ€์ค‘์น˜ ๋งต์€ ์ด ์ธ๋ฑ์Šค์˜ ์ฃผ์š” ๋ถ€๋ถ„์œผ๋กœ, ๊ฐ ๋งค๊ฐœ๋ณ€์ˆ˜ ์ด๋ฆ„(PyTorch ๋ชจ๋ธ `state_dict`์—์„œ ๋ณดํ†ต ์ฐพ์„ ์ˆ˜ ์žˆ๋Š”)์„ ํ•ด๋‹น ํŒŒ์ผ์— ๋งคํ•‘ํ•ฉ๋‹ˆ๋‹ค: ```py >>> index["weight_map"] {'embeddings.LayerNorm.bias': 'pytorch_model-00001-of-00003.bin', 'embeddings.LayerNorm.weight': 'pytorch_model-00001-of-00003.bin', ... ``` ๋งŒ์•ฝ [`~PreTrainedModel.from_pretrained`]๋ฅผ ์‚ฌ์šฉํ•˜์ง€ ์•Š๊ณ  ๋ชจ๋ธ ๋‚ด์—์„œ ์ด๋Ÿฌํ•œ ์ƒค๋”ฉ๋œ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ง์ ‘ ๊ฐ€์ ธ์˜ค๋ ค๋ฉด (์ „์ฒด ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์œ„ํ•ด `model.load_state_dict()`๋ฅผ ์ˆ˜ํ–‰ํ•˜๋Š” ๊ฒƒ์ฒ˜๋Ÿผ), [`~modeling_utils.load_sharded_checkpoint`]๋ฅผ ์‚ฌ์šฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```py >>> from transformers.modeling_utils import load_sharded_checkpoint >>> with tempfile.TemporaryDirectory() as tmp_dir: ... model.save_pretrained(tmp_dir, max_shard_size="200MB") ... load_sharded_checkpoint(model, tmp_dir) ``` ## ์ €(ไฝŽ)๋ฉ”๋ชจ๋ฆฌ ๋กœ๋”ฉ [[low-memory-loading]] ์ƒค๋”ฉ๋œ ์ฒดํฌํฌ์ธํŠธ๋Š” ์œ„์—์„œ ์–ธ๊ธ‰ํ•œ ์ž‘์—… ํ๋ฆ„์˜ 2๋‹จ๊ณ„์—์„œ ๋ฉ”๋ชจ๋ฆฌ ์‚ฌ์šฉ๋Ÿ‰์„ ์ค„์ด์ง€๋งŒ, ์ €(ไฝŽ)๋ฉ”๋ชจ๋ฆฌ ์„ค์ •์—์„œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด ์šฐ๋ฆฌ์˜ Accelerate ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•œ ๋„๊ตฌ๋ฅผ ํ™œ์šฉํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ์ž์„ธํ•œ ์‚ฌํ•ญ์€ ๋‹ค์Œ ๊ฐ€์ด๋“œ๋ฅผ ์ฐธ์กฐํ•ด์ฃผ์„ธ์š”: [Accelerate๋กœ ๋Œ€๊ทœ๋ชจ ๋ชจ๋ธ ๊ฐ€์ ธ์˜ค๊ธฐ (์˜๋ฌธ)](../en/main_classes/model#large-model-loading)
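์˜ˆ๋ฅผ ๋“ค์–ด Accelerate๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ๋‹ค๊ณ  ๊ฐ€์ •ํ•˜๋ฉด, [`~PreTrainedModel.from_pretrained`]์˜ `low_cpu_mem_usage`์™€ `device_map` ์˜ต์…˜์„ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์‚ฌ์šฉํ•ด ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฒ„์ „์— ๋”ฐ๋ผ ์„ธ๋ถ€ ๋™์ž‘์ด ๋‹ค๋ฅผ ์ˆ˜ ์žˆ๋Š” ๊ฐ„๋‹จํ•œ ์Šค์ผ€์น˜์ž…๋‹ˆ๋‹ค:

```py
from transformers import AutoModel

# low_cpu_mem_usage=True๋Š” ๋นˆ(meta) ๊ฐ€์ค‘์น˜๋กœ ๋ชจ๋ธ ๊ตฌ์กฐ๋ฅผ ๋จผ์ € ๋งŒ๋“  ๋’ค
# ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ฝ์œผ๋ฉด์„œ ๊ฐ€์ค‘์น˜๋ฅผ ์ฑ„์›Œ ๋„ฃ์œผ๋ฏ€๋กœ, ๋ชจ๋ธ ํ•œ ๋ฒŒ ๋ถ„๋Ÿ‰ ์ •๋„์˜ RAM๋งŒ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค.
model = AutoModel.from_pretrained("google-bert/bert-base-cased", low_cpu_mem_usage=True)

# device_map="auto"๋ฅผ ์ง€์ •ํ•˜๋ฉด Accelerate๊ฐ€ ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ GPU/CPU์— ๊ฐ€์ค‘์น˜๋ฅผ ์ž๋™์œผ๋กœ ๋ฐฐ์น˜ํ•ฉ๋‹ˆ๋‹ค.
model = AutoModel.from_pretrained("google-bert/bert-base-cased", device_map="auto")
```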
mavonic_private_repos/transformers/docs/source/ko/in_translation.md
<!--โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์—ด์‹ฌํžˆ ๋ฒˆ์—ญ ์ค‘์ž…๋‹ˆ๋‹ค. ์กฐ๊ธˆ ์ด๋”ฐ ๋งŒ๋‚˜์š”!
mavonic_private_repos/transformers/docs/source/ko/training.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ ๋ฏธ์„ธ ํŠœ๋‹ํ•˜๊ธฐ[[finetune-a-pretrained-model]] [[open-in-colab]] ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋ฉด ์ƒ๋‹นํ•œ ์ด์ ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ๊ณ„์‚ฐ ๋น„์šฉ๊ณผ ํƒ„์†Œ๋ฐœ์ž๊ตญ์„ ์ค„์ด๊ณ , ์ฒ˜์Œ๋ถ€ํ„ฐ ๋ชจ๋ธ์„ ํ•™์Šต์‹œํ‚ฌ ํ•„์š” ์—†์ด ์ตœ์‹  ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๐Ÿค— Transformers๋Š” ๋‹ค์–‘ํ•œ ์ž‘์—…์„ ์œ„ํ•ด ์‚ฌ์ „ ํ•™์Šต๋œ ์ˆ˜์ฒœ ๊ฐœ์˜ ๋ชจ๋ธ์— ์•ก์„ธ์Šคํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ, ์ž์‹ ์˜ ์ž‘์—…๊ณผ ๊ด€๋ จ๋œ ๋ฐ์ดํ„ฐ์…‹์„ ์‚ฌ์šฉํ•ด ํ•™์Šตํ•ฉ๋‹ˆ๋‹ค. ์ด๊ฒƒ์€ ๋ฏธ์„ธ ํŠœ๋‹์ด๋ผ๊ณ  ํ•˜๋Š” ๋งค์šฐ ๊ฐ•๋ ฅํ•œ ํ›ˆ๋ จ ๊ธฐ๋ฒ•์ž…๋‹ˆ๋‹ค. ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ๋Š” ๋‹น์‹ ์ด ์„ ํƒํ•œ ๋”ฅ๋Ÿฌ๋‹ ํ”„๋ ˆ์ž„์›Œํฌ๋กœ ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ํŠœ๋‹ํ•ฉ๋‹ˆ๋‹ค: * ๐Ÿค— Transformers๋กœ ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ ๋ฏธ์„ธ ํŠœ๋‹ํ•˜๊ธฐ [`Trainer`]. * Keras๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ TensorFlow์—์„œ ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ํŠœ๋‹ํ•˜๊ธฐ. * ๊ธฐ๋ณธ PyTorch์—์„œ ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ํŠœ๋‹ํ•˜๊ธฐ. <a id='data-processing'></a> ## ๋ฐ์ดํ„ฐ์…‹ ์ค€๋น„[[prepare-a-dataset]] <Youtube id="_BZearw7f0w"/> ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ํŠœ๋‹ํ•˜๊ธฐ ์œ„ํ•ด์„œ ๋ฐ์ดํ„ฐ์…‹์„ ๋‹ค์šด๋กœ๋“œํ•˜๊ณ  ํ›ˆ๋ จํ•  ์ˆ˜ ์žˆ๋„๋ก ์ค€๋น„ํ•˜์„ธ์š”. ์ด์ „ ํŠœํ† ๋ฆฌ์–ผ์—์„œ ํ›ˆ๋ จ์„ ์œ„ํ•ด ๋ฐ์ดํ„ฐ๋ฅผ ์ฒ˜๋ฆฌํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ณด์—ฌ๋“œ๋ ธ๋Š”๋ฐ, ์ง€๊ธˆ์ด ๋ฐฐ์šธ ๊ฑธ ๋˜์งš์„ ๊ธฐํšŒ์ž…๋‹ˆ๋‹ค! ๋จผ์ € [Yelp ๋ฆฌ๋ทฐ](https://huggingface.co/datasets/yelp_review_full) ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค: ```py >>> from datasets import load_dataset >>> dataset = load_dataset("yelp_review_full") >>> dataset["train"][100] {'label': 0, 'text': 'My expectations for McDonalds are t rarely high. But for one to still fail so spectacularly...that takes something special!\\nThe cashier took my friends\'s order, then promptly ignored me. I had to force myself in front of a cashier who opened his register to wait on the person BEHIND me. I waited over five minutes for a gigantic order that included precisely one kid\'s meal. After watching two people who ordered after me be handed their food, I asked where mine was. The manager started yelling at the cashiers for \\"serving off their orders\\" when they didn\'t have their food. But neither cashier was anywhere near those controls, and the manager was the one serving food to customers and clearing the boards.\\nThe manager was rude when giving me my order. She didn\'t make sure that I had everything ON MY RECEIPT, and never even had the decency to apologize that I felt I was getting poor service.\\nI\'ve eaten at various McDonalds restaurants for over 30 years. I\'ve worked at more than one location. I expect bad days, bad moods, and the occasional mistake. But I have yet to have a decent experience at this store. 
It will remain a place I avoid unless someone in my party needs to avoid illness from low blood sugar. Perhaps I should go back to the racially biased service of Steak n Shake instead!'} ``` ํ…์ŠคํŠธ๋ฅผ ์ฒ˜๋ฆฌํ•˜๊ณ  ์„œ๋กœ ๋‹ค๋ฅธ ๊ธธ์ด์˜ ์‹œํ€€์Šค ํŒจ๋”ฉ ๋ฐ ์ž˜๋ผ๋‚ด๊ธฐ ์ „๋žต์„ ํฌํ•จํ•˜๋ ค๋ฉด ํ† ํฌ๋‚˜์ด์ €๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ์…‹์„ ํ•œ ๋ฒˆ์— ์ฒ˜๋ฆฌํ•˜๋ ค๋ฉด ๐Ÿค— Dataset [`map`](https://huggingface.co/docs/datasets/process#map) ๋ฉ”์„œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ „์ฒด ๋ฐ์ดํ„ฐ์…‹์— ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•˜์„ธ์š”: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased") >>> def tokenize_function(examples): ... return tokenizer(examples["text"], padding="max_length", truncation=True) >>> tokenized_datasets = dataset.map(tokenize_function, batched=True) ``` ํ•„์š”ํ•œ ๊ฒฝ์šฐ ๋ฏธ์„ธ ํŠœ๋‹์„ ์œ„ํ•ด ๋ฐ์ดํ„ฐ์…‹์˜ ์ž‘์€ ๋ถ€๋ถ„ ์ง‘ํ•ฉ์„ ๋งŒ๋“ค์–ด ๋ฏธ์„ธ ํŠœ๋‹ ์ž‘์—… ์‹œ๊ฐ„์„ ์ค„์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000)) >>> small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000)) ``` <a id='trainer'></a> ## Train ์—ฌ๊ธฐ์„œ๋ถ€ํ„ฐ๋Š” ์‚ฌ์šฉํ•˜๋ ค๋Š” ํ”„๋ ˆ์ž„์›Œํฌ์— ํ•ด๋‹นํ•˜๋Š” ์„น์…˜์„ ๋”ฐ๋ผ์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์˜ค๋ฅธ์ชฝ ์‚ฌ์ด๋“œ๋ฐ”์˜ ๋งํฌ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์›ํ•˜๋Š” ํ”„๋ ˆ์ž„์›Œํฌ๋กœ ์ด๋™ํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ, ํŠน์ • ํ”„๋ ˆ์ž„์›Œํฌ์˜ ๋ชจ๋“  ์ฝ˜ํ…์ธ ๋ฅผ ์ˆจ๊ธฐ๋ ค๋ฉด ํ•ด๋‹น ํ”„๋ ˆ์ž„์›Œํฌ ๋ธ”๋ก์˜ ์˜ค๋ฅธ์ชฝ ์ƒ๋‹จ์— ์žˆ๋Š” ๋ฒ„ํŠผ์„ ์‚ฌ์šฉํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค! <frameworkcontent> <pt> <Youtube id="nvBXf7s7vTI"/> ## ํŒŒ์ดํ† ์น˜ Trainer๋กœ ํ›ˆ๋ จํ•˜๊ธฐ[[train-with-pytorch-trainer]] ๐Ÿค— Transformers๋Š” ๐Ÿค— Transformers ๋ชจ๋ธ ํ›ˆ๋ จ์— ์ตœ์ ํ™”๋œ [`Trainer`] ํด๋ž˜์Šค๋ฅผ ์ œ๊ณตํ•˜์—ฌ ํ›ˆ๋ จ ๋ฃจํ”„๋ฅผ ์ง์ ‘ ์ž‘์„ฑํ•˜์ง€ ์•Š๊ณ ๋„ ์‰ฝ๊ฒŒ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [`Trainer`] API๋Š” ๋กœ๊น…(logging), ๊ฒฝ์‚ฌ ๋ˆ„์ (gradient accumulation), ํ˜ผํ•ฉ ์ •๋ฐ€๋„(mixed precision) ๋“ฑ ๋‹ค์–‘ํ•œ ํ›ˆ๋ จ ์˜ต์…˜๊ณผ ๊ธฐ๋Šฅ์„ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. ๋จผ์ € ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ค๊ณ  ์˜ˆ์ƒ๋˜๋Š” ๋ ˆ์ด๋ธ” ์ˆ˜๋ฅผ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค. Yelp ๋ฆฌ๋ทฐ [๋ฐ์ดํ„ฐ์…‹ ์นด๋“œ](https://huggingface.co/datasets/yelp_review_full#data-fields)์—์„œ 5๊ฐœ์˜ ๋ ˆ์ด๋ธ”์ด ์žˆ์Œ์„ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-cased", num_labels=5) ``` <Tip> ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๊ฐ€์ค‘์น˜ ์ค‘ ์ผ๋ถ€๊ฐ€ ์‚ฌ์šฉ๋˜์ง€ ์•Š๊ณ  ์ผ๋ถ€ ๊ฐ€์ค‘์น˜๊ฐ€ ๋ฌด์ž‘์œ„๋กœ ํ‘œ์‹œ๋œ๋‹ค๋Š” ๊ฒฝ๊ณ ๊ฐ€ ํ‘œ์‹œ๋ฉ๋‹ˆ๋‹ค. ๊ฑฑ์ •๋งˆ์„ธ์š”. ์ด๊ฒƒ์€ ์˜ฌ๋ฐ”๋ฅธ ๋™์ž‘์ž…๋‹ˆ๋‹ค! ์‚ฌ์ „ ํ•™์Šต๋œ BERT ๋ชจ๋ธ์˜ ํ—ค๋“œ๋Š” ํ๊ธฐ๋˜๊ณ  ๋ฌด์ž‘์œ„๋กœ ์ดˆ๊ธฐํ™”๋œ ๋ถ„๋ฅ˜ ํ—ค๋“œ๋กœ ๋Œ€์ฒด๋ฉ๋‹ˆ๋‹ค. ์ด์ œ ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์˜ ์ง€์‹์œผ๋กœ ์‹œํ€€์Šค ๋ถ„๋ฅ˜ ์ž‘์—…์„ ์œ„ํ•œ ์ƒˆ๋กœ์šด ๋ชจ๋ธ ํ—ค๋“œ๋ฅผ ๋ฏธ์„ธ ํŠœ๋‹ ํ•ฉ๋‹ˆ๋‹ค. </Tip> ### ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ ํ›ˆ๋ จ[[training-hyperparameters]] ๋‹ค์Œ์œผ๋กœ ์ •ํ•  ์ˆ˜ ์žˆ๋Š” ๋ชจ๋“  ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ์™€ ๋‹ค์–‘ํ•œ ํ›ˆ๋ จ ์˜ต์…˜์„ ํ™œ์„ฑํ™”ํ•˜๊ธฐ ์œ„ํ•œ ํ”Œ๋ž˜๊ทธ๋ฅผ ํฌํ•จํ•˜๋Š” [`TrainingArguments`] ํด๋ž˜์Šค๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ๋Š” ๊ธฐ๋ณธ ํ›ˆ๋ จ [ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments)๋กœ ์‹œ์ž‘ํ•˜์ง€๋งŒ, ์ž์œ ๋กญ๊ฒŒ ์‹คํ—˜ํ•˜์—ฌ ์—ฌ๋Ÿฌ๋ถ„๋“ค์—๊ฒŒ ๋งž๋Š” ์ตœ์ ์˜ ์„ค์ •์„ ์ฐพ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
ํ›ˆ๋ จ์—์„œ ์ฒดํฌํฌ์ธํŠธ(checkpoints)๋ฅผ ์ €์žฅํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import TrainingArguments >>> training_args = TrainingArguments(output_dir="test_trainer") ``` ### ํ‰๊ฐ€ ํ•˜๊ธฐ[[evaluate]] [`Trainer`]๋Š” ํ›ˆ๋ จ ์ค‘์— ๋ชจ๋ธ ์„ฑ๋Šฅ์„ ์ž๋™์œผ๋กœ ํ‰๊ฐ€ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ๊ณ„์‚ฐํ•˜๊ณ  ๋ณด๊ณ ํ•  ํ•จ์ˆ˜๋ฅผ [`Trainer`]์— ์ „๋‹ฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. [๐Ÿค— Evaluate](https://huggingface.co/docs/evaluate/index) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋Š” [`evaluate.load`](https://huggingface.co/spaces/evaluate-metric/accuracy) ํ•จ์ˆ˜๋กœ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ๋Š” ๊ฐ„๋‹จํ•œ [`accuracy`]ํ•จ์ˆ˜๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค (์ž์„ธํ•œ ๋‚ด์šฉ์€ [๋‘˜๋Ÿฌ๋ณด๊ธฐ](https://huggingface.co/docs/evaluate/a_quick_tour)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”): ```py >>> import numpy as np >>> import evaluate >>> metric = evaluate.load("accuracy") ``` `metric`์—์„œ [`~evaluate.compute`]๋ฅผ ํ˜ธ์ถœํ•˜์—ฌ ์˜ˆ์ธก์˜ ์ •ํ™•๋„๋ฅผ ๊ณ„์‚ฐํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ์ธก์„ `compute`์— ์ „๋‹ฌํ•˜๊ธฐ ์ „์— ์˜ˆ์ธก์„ ๋กœ์ง“์œผ๋กœ ๋ณ€ํ™˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค(๋ชจ๋“  ๐Ÿค— Transformers ๋ชจ๋ธ์€ ๋กœ์ง“์œผ๋กœ ๋ฐ˜ํ™˜ํ•œ๋‹ค๋Š” ์ ์„ ๊ธฐ์–ตํ•˜์„ธ์š”): ```py >>> def compute_metrics(eval_pred): ... logits, labels = eval_pred ... predictions = np.argmax(logits, axis=-1) ... return metric.compute(predictions=predictions, references=labels) ``` ๋ฏธ์„ธ ํŠœ๋‹ ์ค‘์— ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ๋ชจ๋‹ˆํ„ฐ๋งํ•˜๋ ค๋ฉด ํ›ˆ๋ จ ์ธ์ˆ˜์— `eval_strategy` ํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ง€์ •ํ•˜์—ฌ ๊ฐ ์—ํญ์ด ๋๋‚  ๋•Œ ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import TrainingArguments, Trainer >>> training_args = TrainingArguments(output_dir="test_trainer", eval_strategy="epoch") ``` ### ํ›ˆ๋ จ ํ•˜๊ธฐ[[trainer]] ๋ชจ๋ธ, ํ›ˆ๋ จ ์ธ์ˆ˜, ํ›ˆ๋ จ ๋ฐ ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ์…‹, ํ‰๊ฐ€ ํ•จ์ˆ˜๊ฐ€ ํฌํ•จ๋œ [`Trainer`] ๊ฐ์ฒด๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค: ```py >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=small_train_dataset, ... eval_dataset=small_eval_dataset, ... compute_metrics=compute_metrics, ... ) ``` ๊ทธ๋ฆฌ๊ณ  [`~transformers.Trainer.train`]์„ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ํŠœ๋‹ํ•ฉ๋‹ˆ๋‹ค: ```py >>> trainer.train() ``` </pt> <tf> <a id='keras'></a> <Youtube id="rnTGBy2ax1c"/> ## Keras๋กœ ํ…์„œํ”Œ๋กœ์šฐ ๋ชจ๋ธ ํ›ˆ๋ จํ•˜๊ธฐ[[train-a-tensorflow-model-with-keras]] Keras API๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ…์„œํ”Œ๋กœ์šฐ์—์„œ ๐Ÿค— Transformers ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค! ### Keras์šฉ ๋ฐ์ดํ„ฐ ๋กœ๋“œ[[loading-data-for-keras]] Keras API๋กœ ๐Ÿค— Transformers ๋ชจ๋ธ์„ ํ•™์Šต์‹œํ‚ค๋ ค๋ฉด ๋ฐ์ดํ„ฐ์…‹์„ Keras๊ฐ€ ์ดํ•ดํ•  ์ˆ˜ ์žˆ๋Š” ํ˜•์‹์œผ๋กœ ๋ณ€ํ™˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ ์„ธํŠธ๊ฐ€ ์ž‘์€ ๊ฒฝ์šฐ, ์ „์ฒด๋ฅผ NumPy ๋ฐฐ์—ด๋กœ ๋ณ€ํ™˜ํ•˜์—ฌ Keras๋กœ ์ „๋‹ฌํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ๋” ๋ณต์žกํ•œ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•˜๊ธฐ ์ „์— ๋จผ์ € ์ด ์ž‘์—…์„ ์‹œ๋„ํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ๋จผ์ € ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค. [GLUE ๋ฒค์น˜๋งˆํฌ](https://huggingface.co/datasets/glue)์˜ CoLA ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ๊ฐ„๋‹จํ•œ ๋ฐ”์ด๋„ˆ๋ฆฌ ํ…์ŠคํŠธ ๋ถ„๋ฅ˜ ์ž‘์—…์ด๋ฏ€๋กœ ์ง€๊ธˆ์€ ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ ๋ถ„ํ• ๋งŒ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ```py from datasets import load_dataset dataset = load_dataset("glue", "cola") dataset = dataset["train"] # Just take the training split for now ``` ๋‹ค์Œ์œผ๋กœ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๋กœ๋“œํ•˜๊ณ  ๋ฐ์ดํ„ฐ๋ฅผ NumPy ๋ฐฐ์—ด๋กœ ํ† ํฐํ™”ํ•ฉ๋‹ˆ๋‹ค. ๋ ˆ์ด๋ธ”์€ ์ด๋ฏธ 0๊ณผ 1๋กœ ๋œ ๋ฆฌ์ŠคํŠธ์ด๊ธฐ ๋•Œ๋ฌธ์— ํ† ํฐํ™”ํ•˜์ง€ ์•Š๊ณ  ๋ฐ”๋กœ NumPy ๋ฐฐ์—ด๋กœ ๋ณ€ํ™˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! 
```py
from transformers import AutoTokenizer
import numpy as np

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")
tokenized_data = tokenizer(dataset["sentence"], return_tensors="np", padding=True)
# Tokenizer returns a BatchEncoding, but we convert that to a dict for Keras
tokenized_data = dict(tokenized_data)

labels = np.array(dataset["label"])  # Label is already an array of 0 and 1
```

๋งˆ์ง€๋ง‰์œผ๋กœ ๋ชจ๋ธ์„ ๋กœ๋“œํ•˜๊ณ  [`compile`](https://keras.io/api/models/model_training_apis/#compile-method), [`fit`](https://keras.io/api/models/model_training_apis/#fit-method)ํ•ฉ๋‹ˆ๋‹ค:

```py
from transformers import TFAutoModelForSequenceClassification
from tensorflow.keras.optimizers import Adam

# Load and compile our model
model = TFAutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-cased")
# Lower learning rates are often better for fine-tuning transformers
model.compile(optimizer=Adam(3e-5))

model.fit(tokenized_data, labels)
```

<Tip>

๋ชจ๋ธ์„ `compile()`ํ•  ๋•Œ ์†์‹ค ์ธ์ˆ˜๋ฅผ ๋ชจ๋ธ์— ์ „๋‹ฌํ•  ํ•„์š”๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค! ์ด ์ธ์ˆ˜๋ฅผ ๋น„์›Œ๋‘๋ฉด ํ—ˆ๊น… ํŽ˜์ด์Šค ๋ชจ๋ธ์€ ์ž‘์—…๊ณผ ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜์— ์ ํ•ฉํ•œ ์†์‹ค์„ ์ž๋™์œผ๋กœ ์„ ํƒํ•ฉ๋‹ˆ๋‹ค. ์›ํ•œ๋‹ค๋ฉด ์–ธ์ œ๋“ ์ง€ ์ง์ ‘ ์†์‹ค์„ ์ง€์ •ํ•˜์—ฌ ์ด๋ฅผ ์žฌ์ •์˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค!

</Tip>

์ด ์ ‘๊ทผ ๋ฐฉ์‹์€ ์†Œ๊ทœ๋ชจ ๋ฐ์ดํ„ฐ ์ง‘ํ•ฉ์—์„œ๋Š” ์ž˜ ์ž‘๋™ํ•˜์ง€๋งŒ, ๋Œ€๊ทœ๋ชจ ๋ฐ์ดํ„ฐ ์ง‘ํ•ฉ์—์„œ๋Š” ๋ฌธ์ œ๊ฐ€ ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์™œ ๊ทธ๋Ÿด๊นŒ์š”? ํ† ํฐํ™”๋œ ๋ฐฐ์—ด๊ณผ ๋ ˆ์ด๋ธ”์„ ๋ฉ”๋ชจ๋ฆฌ์— ์™„์ „ํžˆ ๋กœ๋“œํ•ด์•ผ ํ•˜๊ณ , NumPy๋Š” "๋“ค์ญ‰๋‚ ์ญ‰ํ•œ" ๋ฐฐ์—ด์„ ์ฒ˜๋ฆฌํ•˜์ง€ ์•Š๊ธฐ ๋•Œ๋ฌธ์—, ๋ชจ๋“  ํ† ํฐํ™”๋œ ์ƒ˜ํ”Œ์„ ์ „์ฒด ๋ฐ์ดํ„ฐ์…‹์—์„œ ๊ฐ€์žฅ ๊ธด ์ƒ˜ํ”Œ์˜ ๊ธธ์ด๋งŒํผ ํŒจ๋”ฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ๋ฐฐ์—ด์ด ํ›จ์”ฌ ๋” ์ปค์ง€๊ณ  ์ด ํŒจ๋”ฉ ํ† ํฐ์œผ๋กœ ์ธํ•ด ํ•™์Šต ์†๋„๋„ ๋Š๋ ค์ง‘๋‹ˆ๋‹ค!

### ๋ฐ์ดํ„ฐ๋ฅผ tf.data.Dataset์œผ๋กœ ๋กœ๋“œํ•˜๊ธฐ[[loading-data-as-a-tfdatadataset]]

ํ•™์Šต ์†๋„๊ฐ€ ๋Š๋ ค์ง€๋Š” ๊ฒƒ์„ ํ”ผํ•˜๋ ค๋ฉด ๋ฐ์ดํ„ฐ๋ฅผ `tf.data.Dataset`์œผ๋กœ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์›ํ•œ๋‹ค๋ฉด `tf.data` ํŒŒ์ดํ”„๋ผ์ธ์„ ์ง์ ‘ ์ž‘์„ฑํ•  ์ˆ˜๋„ ์žˆ์ง€๋งŒ, ์ด ์ž‘์—…์„ ๊ฐ„ํŽธํ•˜๊ฒŒ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ๋Š” ๋‘ ๊ฐ€์ง€ ๋ฐฉ๋ฒ•์ด ์žˆ์Šต๋‹ˆ๋‹ค:

- [`~TFPreTrainedModel.prepare_tf_dataset`]: ๋Œ€๋ถ€๋ถ„์˜ ๊ฒฝ์šฐ ์ด ๋ฐฉ๋ฒ•์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์˜ ๋ฉ”์„œ๋“œ์ด๊ธฐ ๋•Œ๋ฌธ์— ๋ชจ๋ธ์„ ๊ฒ€์‚ฌํ•˜์—ฌ ๋ชจ๋ธ ์ž…๋ ฅ์œผ๋กœ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ์—ด์„ ์ž๋™์œผ๋กœ ํŒŒ์•…ํ•˜๊ณ  ๋‚˜๋จธ์ง€๋Š” ๋ฒ„๋ ค์„œ ๋” ๋‹จ์ˆœํ•˜๊ณ  ์„ฑ๋Šฅ์ด ์ข‹์€ ๋ฐ์ดํ„ฐ ์ง‘ํ•ฉ์„ ๋งŒ๋“ค ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
- [`~datasets.Dataset.to_tf_dataset`]: ์ด ๋ฐฉ๋ฒ•์€ ์ข€ ๋” ๋‚ฎ์€ ์ˆ˜์ค€์œผ๋กœ, ๋ฐ์ดํ„ฐ์…‹์ด ์ƒ์„ฑ๋˜๋Š” ๋ฐฉ์‹์„ ์„ธ๋ฐ€ํ•˜๊ฒŒ ์ œ์–ดํ•˜๊ณ  ์‹ถ์„ ๋•Œ ์œ ์šฉํ•˜๋ฉฐ, ํฌํ•จํ•  `columns`์™€ `label_cols`์„ ์ •ํ™•ํžˆ ์ง€์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

[`~TFPreTrainedModel.prepare_tf_dataset`]์„ ์‚ฌ์šฉํ•˜๋ ค๋ฉด ๋จผ์ € ๋‹ค์Œ ์ฝ”๋“œ ์ƒ˜ํ”Œ๊ณผ ๊ฐ™์ด ํ† ํฌ๋‚˜์ด์ € ์ถœ๋ ฅ์„ ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์—ด๋กœ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค(์•ž์„œ ๋กœ๋“œํ•œ CoLA ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ํ…์ŠคํŠธ ์—ด ์ด๋ฆ„์€ `sentence`์ž…๋‹ˆ๋‹ค):

```py
def tokenize_dataset(data):
    # Keys of the returned dictionary will be added to the dataset as columns
    return tokenizer(data["sentence"])


dataset = dataset.map(tokenize_dataset)
```

ํ—ˆ๊น… ํŽ˜์ด์Šค ๋ฐ์ดํ„ฐ์…‹์€ ๊ธฐ๋ณธ์ ์œผ๋กœ ๋””์Šคํฌ์— ์ €์žฅ๋˜๋ฏ€๋กœ ๋ฉ”๋ชจ๋ฆฌ ์‚ฌ์šฉ๋Ÿ‰์„ ๋Š˜๋ฆฌ์ง€ ์•Š๋Š”๋‹ค๋Š” ์ ์„ ๊ธฐ์–ตํ•˜์„ธ์š”!
์—ด์ด ์ถ”๊ฐ€๋˜๋ฉด ๋ฐ์ดํ„ฐ์…‹์—์„œ ๋ฐฐ์น˜๋ฅผ ์ŠคํŠธ๋ฆฌ๋ฐํ•˜๊ณ  ๊ฐ ๋ฐฐ์น˜์— ํŒจ๋”ฉ์„ ์ถ”๊ฐ€ํ•  ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ ์ „์ฒด ๋ฐ์ดํ„ฐ์…‹์— ํŒจ๋”ฉ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒƒ๋ณด๋‹ค ํŒจ๋”ฉ ํ† ํฐ์˜ ์ˆ˜๋ฅผ ํฌ๊ฒŒ ์ค„์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> tf_dataset = model.prepare_tf_dataset(dataset, batch_size=16, shuffle=True, tokenizer=tokenizer) ``` ์œ„์˜ ์ฝ”๋“œ ์ƒ˜ํ”Œ์—์„œ๋Š” ๋ฐฐ์น˜๊ฐ€ ๋กœ๋“œ๋  ๋•Œ ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ํŒจ๋”ฉํ•  ์ˆ˜ ์žˆ๋„๋ก `prepare_tf_dataset`์— ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์ „๋‹ฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ์…‹์˜ ๋ชจ๋“  ์ƒ˜ํ”Œ ๊ธธ์ด๊ฐ€ ๊ฐ™๊ณ  ํŒจ๋”ฉ์ด ํ•„์š”ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ ์ด ์ธ์ˆ˜๋ฅผ ๊ฑด๋„ˆ๋›ธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ƒ˜ํ”Œ์„ ์ฑ„์šฐ๋Š” ๊ฒƒ๋ณด๋‹ค ๋” ๋ณต์žกํ•œ ์ž‘์—…(์˜ˆ: ๋งˆ์Šคํ‚น๋œ ์–ธ์–ด์˜ ํ† ํฐ ์†์ƒ ๋ชจ๋ธ๋ง)์„ ์ˆ˜ํ–‰ํ•˜๊ธฐ ์œ„ํ•ด ํ† ํฐ์„ ์†์ƒ์‹œ์ผœ์•ผ ํ•˜๋Š” ๊ฒฝ์šฐ, `collate_fn` ์ธ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ƒ˜ํ”Œ ๋ชฉ๋ก์„ ๋ฐฐ์น˜๋กœ ๋ณ€ํ™˜ํ•˜๊ณ  ์›ํ•˜๋Š” ์ „์ฒ˜๋ฆฌ๋ฅผ ์ ์šฉํ•  ํ•จ์ˆ˜๋ฅผ ์ „๋‹ฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [์˜ˆ์‹œ](https://github.com/huggingface/transformers/tree/main/examples) ๋˜๋Š” [๋…ธํŠธ๋ถ](https://huggingface.co/docs/transformers/notebooks)์„ ์ฐธ์กฐํ•˜์—ฌ ์ด ์ ‘๊ทผ ๋ฐฉ์‹์ด ์‹ค์ œ๋กœ ์ž‘๋™ํ•˜๋Š” ๋ชจ์Šต์„ ํ™•์ธํ•˜์„ธ์š”. `tf.data.Dataset`์„ ์ƒ์„ฑํ•œ ํ›„์—๋Š” ์ด์ „๊ณผ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ ๋ชจ๋ธ์„ ์ปดํŒŒ์ผํ•˜๊ณ  ํ›ˆ๋ จ(fit)ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py model.compile(optimizer=Adam(3e-5)) model.fit(tf_dataset) ``` </tf> </frameworkcontent> <a id='pytorch_native'></a> ## ๊ธฐ๋ณธ ํŒŒ์ดํ† ์น˜๋กœ ํ›ˆ๋ จํ•˜๊ธฐ[[train-in-native-pytorch]] <frameworkcontent> <pt> <Youtube id="Dh9CL8fyG80"/> [`Trainer`]๋Š” ํ›ˆ๋ จ ๋ฃจํ”„๋ฅผ ์ฒ˜๋ฆฌํ•˜๋ฉฐ ํ•œ ์ค„์˜ ์ฝ”๋“œ๋กœ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ง์ ‘ ํ›ˆ๋ จ ๋ฃจํ”„๋ฅผ ์ž‘์„ฑํ•˜๋Š” ๊ฒƒ์„ ์„ ํ˜ธํ•˜๋Š” ์‚ฌ์šฉ์ž์˜ ๊ฒฝ์šฐ, ๊ธฐ๋ณธ PyTorch์—์„œ ๐Ÿค— Transformers ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์‹œ์ ์—์„œ ๋…ธํŠธ๋ถ์„ ๋‹ค์‹œ ์‹œ์ž‘ํ•˜๊ฑฐ๋‚˜ ๋‹ค์Œ ์ฝ”๋“œ๋ฅผ ์‹คํ–‰ํ•ด ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ํ™•๋ณดํ•ด์•ผ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py del model del trainer torch.cuda.empty_cache() ``` ๋‹ค์Œ์œผ๋กœ, 'ํ† ํฐํ™”๋œ ๋ฐ์ดํ„ฐ์…‹'์„ ์ˆ˜๋™์œผ๋กœ ํ›„์ฒ˜๋ฆฌํ•˜์—ฌ ํ›ˆ๋ จ๋ จ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก ์ค€๋น„ํ•ฉ๋‹ˆ๋‹ค. 1. ๋ชจ๋ธ์ด ์›์‹œ ํ…์ŠคํŠธ๋ฅผ ์ž…๋ ฅ์œผ๋กœ ํ—ˆ์šฉํ•˜์ง€ ์•Š์œผ๋ฏ€๋กœ `text` ์—ด์„ ์ œ๊ฑฐํ•ฉ๋‹ˆ๋‹ค: ```py >>> tokenized_datasets = tokenized_datasets.remove_columns(["text"]) ``` 2. ๋ชจ๋ธ์—์„œ ์ธ์ˆ˜์˜ ์ด๋ฆ„์ด `labels`๋กœ ์ง€์ •๋  ๊ฒƒ์œผ๋กœ ์˜ˆ์ƒํ•˜๋ฏ€๋กœ `label` ์—ด์˜ ์ด๋ฆ„์„ `labels`๋กœ ๋ณ€๊ฒฝํ•ฉ๋‹ˆ๋‹ค: ```py >>> tokenized_datasets = tokenized_datasets.rename_column("label", "labels") ``` 3. 
๋ฐ์ดํ„ฐ์…‹์˜ ํ˜•์‹์„ List ๋Œ€์‹  PyTorch ํ…์„œ๋ฅผ ๋ฐ˜ํ™˜ํ•˜๋„๋ก ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค: ```py >>> tokenized_datasets.set_format("torch") ``` ๊ทธ๋ฆฌ๊ณ  ์•ž์„œ ํ‘œ์‹œ๋œ ๋Œ€๋กœ ๋ฐ์ดํ„ฐ์…‹์˜ ๋” ์ž‘์€ ํ•˜์œ„ ์ง‘ํ•ฉ์„ ์ƒ์„ฑํ•˜์—ฌ ๋ฏธ์„ธ ์กฐ์ • ์†๋„๋ฅผ ๋†’์ž…๋‹ˆ๋‹ค: ```py >>> small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000)) >>> small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000)) ``` ### DataLoader[[dataloader]] ํ›ˆ๋ จ ๋ฐ ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ์…‹์— ๋Œ€ํ•œ 'DataLoader'๋ฅผ ์ƒ์„ฑํ•˜์—ฌ ๋ฐ์ดํ„ฐ ๋ฐฐ์น˜๋ฅผ ๋ฐ˜๋ณตํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> from torch.utils.data import DataLoader >>> train_dataloader = DataLoader(small_train_dataset, shuffle=True, batch_size=8) >>> eval_dataloader = DataLoader(small_eval_dataset, batch_size=8) ``` ์˜ˆ์ธก์„ ์œ„ํ•œ ๋ ˆ์ด๋ธ” ๊ฐœ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-cased", num_labels=5) ``` ### ์˜ตํ‹ฐ๋งˆ์ด์ € ๋ฐ ํ•™์Šต ์†๋„ ์Šค์ผ€์ค„๋Ÿฌ[[optimizer-and-learning-rate-scheduler]] ์˜ตํ‹ฐ๋งˆ์ด์ €์™€ ํ•™์Šต ์†๋„ ์Šค์ผ€์ค„๋Ÿฌ๋ฅผ ์ƒ์„ฑํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค. ํŒŒ์ดํ† ์น˜์—์„œ ์ œ๊ณตํ•˜๋Š” [`AdamW`](https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html) ์˜ตํ‹ฐ๋งˆ์ด์ €๋ฅผ ์‚ฌ์šฉํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> from torch.optim import AdamW >>> optimizer = AdamW(model.parameters(), lr=5e-5) ``` [`Trainer`]์—์„œ ๊ธฐ๋ณธ ํ•™์Šต ์†๋„ ์Šค์ผ€์ค„๋Ÿฌ๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import get_scheduler >>> num_epochs = 3 >>> num_training_steps = num_epochs * len(train_dataloader) >>> lr_scheduler = get_scheduler( ... name="linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps ... ) ``` ๋งˆ์ง€๋ง‰์œผ๋กœ, GPU์— ์•ก์„ธ์Šคํ•  ์ˆ˜ ์žˆ๋Š” ๊ฒฝ์šฐ 'device'๋ฅผ ์ง€์ •ํ•˜์—ฌ GPU๋ฅผ ์‚ฌ์šฉํ•˜๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋ ‡์ง€ ์•Š์œผ๋ฉด CPU์—์„œ ํ›ˆ๋ จํ•˜๋ฉฐ ๋ช‡ ๋ถ„์ด ์•„๋‹Œ ๋ช‡ ์‹œ๊ฐ„์ด ๊ฑธ๋ฆด ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> import torch >>> device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") >>> model.to(device) ``` <Tip> [Colaboratory](https://colab.research.google.com/) ๋˜๋Š” [SageMaker StudioLab](https://studiolab.sagemaker.aws/)๊ณผ ๊ฐ™์€ ํ˜ธ์ŠคํŒ… ๋…ธํŠธ๋ถ์ด ์—†๋Š” ๊ฒฝ์šฐ ํด๋ผ์šฐ๋“œ GPU์— ๋ฌด๋ฃŒ๋กœ ์•ก์„ธ์Šคํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. </Tip> ์ด์ œ ํ›ˆ๋ จํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! ๐Ÿฅณ ### ํ›ˆ๋ จ ๋ฃจํ”„[[training-loop]] ํ›ˆ๋ จ ์ง„ํ–‰ ์ƒํ™ฉ์„ ์ถ”์ ํ•˜๋ ค๋ฉด [tqdm](https://tqdm.github.io/) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํŠธ๋ ˆ์ด๋‹ ๋‹จ๊ณ„ ์ˆ˜์— ์ง„ํ–‰๋ฅ  ํ‘œ์‹œ์ค„์„ ์ถ”๊ฐ€ํ•˜์„ธ์š”: ```py >>> from tqdm.auto import tqdm >>> progress_bar = tqdm(range(num_training_steps)) >>> model.train() >>> for epoch in range(num_epochs): ... for batch in train_dataloader: ... batch = {k: v.to(device) for k, v in batch.items()} ... outputs = model(**batch) ... loss = outputs.loss ... loss.backward() ... optimizer.step() ... lr_scheduler.step() ... optimizer.zero_grad() ... progress_bar.update(1) ``` ### ํ‰๊ฐ€ ํ•˜๊ธฐ[[evaluate]] [`Trainer`]์— ํ‰๊ฐ€ ํ•จ์ˆ˜๋ฅผ ์ถ”๊ฐ€ํ•œ ๋ฐฉ๋ฒ•๊ณผ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ, ํ›ˆ๋ จ ๋ฃจํ”„๋ฅผ ์ง์ ‘ ์ž‘์„ฑํ•  ๋•Œ๋„ ๋™์ผํ•œ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์ด๋ฒˆ์—๋Š” ๊ฐ ์—ํฌํฌ๊ฐ€ ๋๋‚  ๋•Œ๋งˆ๋‹ค ํ‰๊ฐ€์ง€ํ‘œ๋ฅผ ๊ณ„์‚ฐํ•˜์—ฌ ๋ณด๊ณ ํ•˜๋Š” ๋Œ€์‹ , [`~evaluate.add_batch`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋“  ๋ฐฐ์น˜๋ฅผ ๋ˆ„์ ํ•˜๊ณ  ๋งจ ๋งˆ์ง€๋ง‰์— ํ‰๊ฐ€์ง€ํ‘œ๋ฅผ ๊ณ„์‚ฐํ•ฉ๋‹ˆ๋‹ค. 
```py
>>> import evaluate

>>> metric = evaluate.load("accuracy")
>>> model.eval()
>>> for batch in eval_dataloader:
...     batch = {k: v.to(device) for k, v in batch.items()}
...     with torch.no_grad():
...         outputs = model(**batch)

...     logits = outputs.logits
...     predictions = torch.argmax(logits, dim=-1)
...     metric.add_batch(predictions=predictions, references=batch["labels"])

>>> metric.compute()
```

</pt>
</frameworkcontent>

<a id='additional-resources'></a>

## ์ถ”๊ฐ€ ์ž๋ฃŒ[[additional-resources]]

๋” ๋งŽ์€ ๋ฏธ์„ธ ํŠœ๋‹ ์˜ˆ์ œ๋Š” ๋‹ค์Œ์„ ์ฐธ์กฐํ•˜์„ธ์š”:

- [๐Ÿค— Transformers ์˜ˆ์ œ](https://github.com/huggingface/transformers/tree/main/examples)์—๋Š” PyTorch ๋ฐ ํ…์„œํ”Œ๋กœ์šฐ์—์„œ ์ผ๋ฐ˜์ ์ธ NLP ์ž‘์—…์„ ํ›ˆ๋ จํ•  ์ˆ˜ ์žˆ๋Š” ์Šคํฌ๋ฆฝํŠธ๊ฐ€ ํฌํ•จ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค.

- [๐Ÿค— Transformers ๋…ธํŠธ๋ถ](notebooks)์—๋Š” PyTorch ๋ฐ ํ…์„œํ”Œ๋กœ์šฐ์—์„œ ํŠน์ • ์ž‘์—…์„ ์œ„ํ•ด ๋ชจ๋ธ์„ ๋ฏธ์„ธ ํŠœ๋‹ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ๋‹ค์–‘ํ•œ ๋…ธํŠธ๋ถ์ด ํฌํ•จ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค.
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/ko/serialization.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ONNX๋กœ ๋‚ด๋ณด๋‚ด๊ธฐ [[export-to-onnx]] ๐Ÿค— Transformers ๋ชจ๋ธ์„ ์ œํ’ˆ ํ™˜๊ฒฝ์—์„œ ๋ฐฐํฌํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” ๋ชจ๋ธ์„ ์ง๋ ฌํ™”๋œ ํ˜•์‹์œผ๋กœ ๋‚ด๋ณด๋‚ด๊ณ  ํŠน์ • ๋Ÿฐํƒ€์ž„๊ณผ ํ•˜๋“œ์›จ์–ด์—์„œ ๋กœ๋“œํ•˜๊ณ  ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์œผ๋ฉด ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. ๐Ÿค— Optimum์€ Transformers์˜ ํ™•์žฅ์œผ๋กœ, PyTorch ๋˜๋Š” TensorFlow์—์„œ ๋ชจ๋ธ์„ ONNX์™€ TFLite์™€ ๊ฐ™์€ ์ง๋ ฌํ™”๋œ ํ˜•์‹์œผ๋กœ ๋‚ด๋ณด๋‚ผ ์ˆ˜ ์žˆ๋„๋ก ํ•˜๋Š” `exporters` ๋ชจ๋“ˆ์„ ํ†ตํ•ด ์ œ๊ณต๋ฉ๋‹ˆ๋‹ค. ๐Ÿค— Optimum์€ ๋˜ํ•œ ์„ฑ๋Šฅ ์ตœ์ ํ™” ๋„๊ตฌ ์„ธํŠธ๋ฅผ ์ œ๊ณตํ•˜์—ฌ ํŠน์ • ํ•˜๋“œ์›จ์–ด์—์„œ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๊ณ  ์‹คํ–‰ํ•  ๋•Œ ์ตœ๋Œ€ ํšจ์œจ์„ฑ์„ ๋‹ฌ์„ฑํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์•ˆ๋‚ด์„œ๋Š” ๐Ÿค— Optimum์„ ์‚ฌ์šฉํ•˜์—ฌ ๐Ÿค— Transformers ๋ชจ๋ธ์„ ONNX๋กœ ๋‚ด๋ณด๋‚ด๋Š” ๋ฐฉ๋ฒ•์„ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค. TFLite๋กœ ๋ชจ๋ธ์„ ๋‚ด๋ณด๋‚ด๋Š” ์•ˆ๋‚ด์„œ๋Š” [TFLite๋กœ ๋‚ด๋ณด๋‚ด๊ธฐ ํŽ˜์ด์ง€](tflite)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ## ONNX๋กœ ๋‚ด๋ณด๋‚ด๊ธฐ [[export-to-onnx]] [ONNX (Open Neural Network eXchange)](http://onnx.ai)๋Š” PyTorch์™€ TensorFlow๋ฅผ ํฌํ•จํ•œ ๋‹ค์–‘ํ•œ ํ”„๋ ˆ์ž„์›Œํฌ์—์„œ ์‹ฌ์ธต ํ•™์Šต ๋ชจ๋ธ์„ ๋‚˜ํƒ€๋‚ด๋Š” ๋ฐ ์‚ฌ์šฉ๋˜๋Š” ๊ณตํ†ต ์—ฐ์‚ฐ์ž ์„ธํŠธ์™€ ๊ณตํ†ต ํŒŒ์ผ ํ˜•์‹์„ ์ •์˜ํ•˜๋Š” ์˜คํ”ˆ ํ‘œ์ค€์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ์ด ONNX ํ˜•์‹์œผ๋กœ ๋‚ด๋ณด๋‚ด์ง€๋ฉด ์ด๋Ÿฌํ•œ ์—ฐ์‚ฐ์ž๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์‹ ๊ฒฝ๋ง์„ ํ†ตํ•ด ๋ฐ์ดํ„ฐ๊ฐ€ ํ๋ฅด๋Š” ํ๋ฆ„์„ ๋‚˜ํƒ€๋‚ด๋Š” ๊ณ„์‚ฐ ๊ทธ๋ž˜ํ”„(์ผ๋ฐ˜์ ์œผ๋กœ _์ค‘๊ฐ„ ํ‘œํ˜„_์ด๋ผ๊ณ  ํ•จ)๊ฐ€ ๊ตฌ์„ฑ๋ฉ๋‹ˆ๋‹ค. ํ‘œ์ค€ํ™”๋œ ์—ฐ์‚ฐ์ž์™€ ๋ฐ์ดํ„ฐ ์œ ํ˜•์„ ๊ฐ€์ง„ ๊ทธ๋ž˜ํ”„๋ฅผ ๋…ธ์ถœํ•จ์œผ๋กœ์จ, ONNX๋Š” ํ”„๋ ˆ์ž„์›Œํฌ ๊ฐ„์— ์‰ฝ๊ฒŒ ์ „ํ™˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, PyTorch์—์„œ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ONNX ํ˜•์‹์œผ๋กœ ๋‚ด๋ณด๋‚ด๊ณ  TensorFlow์—์„œ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค(๊ทธ ๋ฐ˜๋Œ€๋„ ๊ฐ€๋Šฅํ•ฉ๋‹ˆ๋‹ค). ONNX ํ˜•์‹์œผ๋กœ ๋‚ด๋ณด๋‚ธ ๋ชจ๋ธ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: - [๊ทธ๋ž˜ํ”„ ์ตœ์ ํ™”](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/optimization) ๋ฐ [์–‘์žํ™”](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/quantization)์™€ ๊ฐ™์€ ๊ธฐ๋ฒ•์„ ์‚ฌ์šฉํ•˜์—ฌ ์ถ”๋ก ์„ ์œ„ํ•ด ์ตœ์ ํ™”๋ฉ๋‹ˆ๋‹ค. - ONNX Runtime์„ ํ†ตํ•ด ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [`ORTModelForXXX` ํด๋ž˜์Šค๋“ค](https://huggingface.co/docs/optimum/onnxruntime/package_reference/modeling_ort)์„ ํ†ตํ•ด ๋™์ผํ•œ `AutoModel` API๋ฅผ ๋”ฐ๋ฆ…๋‹ˆ๋‹ค. ์ด API๋Š” ๐Ÿค— Transformers์—์„œ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ๊ณผ ๋™์ผํ•ฉ๋‹ˆ๋‹ค. - [์ตœ์ ํ™”๋œ ์ถ”๋ก  ํŒŒ์ดํ”„๋ผ์ธ](https://huggingface.co/docs/optimum/main/en/onnxruntime/usage_guides/pipelines)์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Š” ๐Ÿค— Transformers์˜ [`pipeline`] ํ•จ์ˆ˜์™€ ๋™์ผํ•œ API๋ฅผ ๊ฐ€์ง€๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๐Ÿค— Optimum์€ ๊ตฌ์„ฑ ๊ฐ์ฒด๋ฅผ ํ™œ์šฉํ•˜์—ฌ ONNX ๋‚ด๋ณด๋‚ด๊ธฐ๋ฅผ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. 
์ด๋Ÿฌํ•œ ๊ตฌ์„ฑ ๊ฐ์ฒด๋Š” ์—ฌ๋Ÿฌ ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜์— ๋Œ€ํ•ด ๋ฏธ๋ฆฌ ์ค€๋น„๋˜์–ด ์žˆ์œผ๋ฉฐ ๋‹ค๋ฅธ ์•„ํ‚คํ…์ฒ˜์— ์‰ฝ๊ฒŒ ํ™•์žฅํ•  ์ˆ˜ ์žˆ๋„๋ก ์„ค๊ณ„๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ๋ฏธ๋ฆฌ ์ค€๋น„๋œ ๊ตฌ์„ฑ ๋ชฉ๋ก์€ [๐Ÿค— Optimum ๋ฌธ์„œ](https://huggingface.co/docs/optimum/exporters/onnx/overview)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ๐Ÿค— Transformers ๋ชจ๋ธ์„ ONNX๋กœ ๋‚ด๋ณด๋‚ด๋Š” ๋‘ ๊ฐ€์ง€ ๋ฐฉ๋ฒ•์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์—์„œ ๋‘ ๊ฐ€์ง€ ๋ฐฉ๋ฒ•์„ ๋ชจ๋‘ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค: - ๐Ÿค— Optimum์„ ์‚ฌ์šฉํ•˜์—ฌ CLI๋กœ ๋‚ด๋ณด๋‚ด๊ธฐ - `optimum.onnxruntime`์„ ์‚ฌ์šฉํ•˜์—ฌ ๐Ÿค— Optimum์œผ๋กœ ONNX๋กœ ๋‚ด๋ณด๋‚ด๊ธฐ ### CLI๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๐Ÿค— Transformers ๋ชจ๋ธ์„ ONNX๋กœ ๋‚ด๋ณด๋‚ด๊ธฐ [[exporting-a-transformers-model-to-onnx-with-cli]] ๐Ÿค— Transformers ๋ชจ๋ธ์„ ONNX๋กœ ๋‚ด๋ณด๋‚ด๋ ค๋ฉด ๋จผ์ € ์ถ”๊ฐ€ ์ข…์†์„ฑ์„ ์„ค์น˜ํ•˜์„ธ์š”: ```bash pip install optimum[exporters] ``` ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ ๋ชจ๋“  ์ธ์ˆ˜๋ฅผ ํ™•์ธํ•˜๋ ค๋ฉด [๐Ÿค— Optimum ๋ฌธ์„œ](https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/export_a_model#exporting-a-model-to-onnx-using-the-cli)๋ฅผ ์ฐธ์กฐํ•˜๊ฑฐ๋‚˜ ๋ช…๋ น์ค„์—์„œ ๋„์›€๋ง์„ ๋ณด์„ธ์š”. ```bash optimum-cli export onnx --help ``` ์˜ˆ๋ฅผ ๋“ค์–ด, ๐Ÿค— Hub์—์„œ `distilbert/distilbert-base-uncased-distilled-squad`์™€ ๊ฐ™์€ ๋ชจ๋ธ์˜ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋‚ด๋ณด๋‚ด๋ ค๋ฉด ๋‹ค์Œ ๋ช…๋ น์„ ์‹คํ–‰ํ•˜์„ธ์š”: ```bash optimum-cli export onnx --model distilbert/distilbert-base-uncased-distilled-squad distilbert_base_uncased_squad_onnx/ ``` ์œ„์™€ ๊ฐ™์ด ์ง„ํ–‰ ์ƒํ™ฉ์„ ๋‚˜ํƒ€๋‚ด๋Š” ๋กœ๊ทธ๊ฐ€ ํ‘œ์‹œ๋˜๊ณ  ๊ฒฐ๊ณผ์ธ `model.onnx`๊ฐ€ ์ €์žฅ๋œ ์œ„์น˜๊ฐ€ ํ‘œ์‹œ๋ฉ๋‹ˆ๋‹ค. ```bash Validating ONNX model distilbert_base_uncased_squad_onnx/model.onnx... -[โœ“] ONNX model output names match reference model (start_logits, end_logits) - Validating ONNX Model output "start_logits": -[โœ“] (2, 16) matches (2, 16) -[โœ“] all values close (atol: 0.0001) - Validating ONNX Model output "end_logits": -[โœ“] (2, 16) matches (2, 16) -[โœ“] all values close (atol: 0.0001) The ONNX export succeeded and the exported model was saved at: distilbert_base_uncased_squad_onnx ``` ์œ„์˜ ์˜ˆ์ œ๋Š” ๐Ÿค— Hub์—์„œ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋‚ด๋ณด๋‚ด๋Š” ๊ฒƒ์„ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. ๋กœ์ปฌ ๋ชจ๋ธ์„ ๋‚ด๋ณด๋‚ผ ๋•Œ์—๋Š” ๋ชจ๋ธ์˜ ๊ฐ€์ค‘์น˜์™€ ํ† ํฌ๋‚˜์ด์ € ํŒŒ์ผ์„ ๋™์ผํ•œ ๋””๋ ‰ํ† ๋ฆฌ(`local_path`)์— ์ €์žฅํ–ˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”. CLI๋ฅผ ์‚ฌ์šฉํ•  ๋•Œ์—๋Š” ๐Ÿค— Hub์˜ ์ฒดํฌํฌ์ธํŠธ ์ด๋ฆ„ ๋Œ€์‹  `model` ์ธ์ˆ˜์— `local_path`๋ฅผ ์ „๋‹ฌํ•˜๊ณ  `--task` ์ธ์ˆ˜๋ฅผ ์ œ๊ณตํ•˜์„ธ์š”. ์ง€์›๋˜๋Š” ์ž‘์—…์˜ ๋ชฉ๋ก์€ [๐Ÿค— Optimum ๋ฌธ์„œ](https://huggingface.co/docs/optimum/exporters/task_manager)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. `task` ์ธ์ˆ˜๊ฐ€ ์ œ๊ณต๋˜์ง€ ์•Š์œผ๋ฉด ์ž‘์—…์— ํŠนํ™”๋œ ํ—ค๋“œ ์—†์ด ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜๋กœ ๊ธฐ๋ณธ ์„ค์ •๋ฉ๋‹ˆ๋‹ค. ```bash optimum-cli export onnx --model local_path --task question-answering distilbert_base_uncased_squad_onnx/ ``` ๊ทธ ๊ฒฐ๊ณผ๋กœ ์ƒ์„ฑ๋œ `model.onnx` ํŒŒ์ผ์€ ONNX ํ‘œ์ค€์„ ์ง€์›ํ•˜๋Š” ๋งŽ์€ [๊ฐ€์†๊ธฐ](https://onnx.ai/supported-tools.html#deployModel) ์ค‘ ํ•˜๋‚˜์—์„œ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
์˜ˆ๋ฅผ ๋“ค์–ด, [ONNX Runtime](https://onnxruntime.ai/)์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋กœ๋“œํ•˜๊ณ  ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```python >>> from transformers import AutoTokenizer >>> from optimum.onnxruntime import ORTModelForQuestionAnswering >>> tokenizer = AutoTokenizer.from_pretrained("distilbert_base_uncased_squad_onnx") >>> model = ORTModelForQuestionAnswering.from_pretrained("distilbert_base_uncased_squad_onnx") >>> inputs = tokenizer("What am I using?", "Using DistilBERT with ONNX Runtime!", return_tensors="pt") >>> outputs = model(**inputs) ``` Hub์˜ TensorFlow ์ฒดํฌํฌ์ธํŠธ์— ๋Œ€ํ•ด์„œ๋„ ๋™์ผํ•œ ํ”„๋กœ์„ธ์Šค๊ฐ€ ์ ์šฉ๋ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, [Keras organization](https://huggingface.co/keras-io)์—์„œ ์ˆœ์ˆ˜ํ•œ TensorFlow ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋‚ด๋ณด๋‚ด๋Š” ๋ฐฉ๋ฒ•์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```bash optimum-cli export onnx --model keras-io/transformers-qa distilbert_base_cased_squad_onnx/ ``` ### `optimum.onnxruntime`์„ ์‚ฌ์šฉํ•˜์—ฌ ๐Ÿค— Transformers ๋ชจ๋ธ์„ ONNX๋กœ ๋‚ด๋ณด๋‚ด๊ธฐ [[exporting-a-transformers-model-to-onnx-with-optimumonnxruntime]] CLI ๋Œ€์‹ ์— `optimum.onnxruntime`์„ ์‚ฌ์šฉํ•˜์—ฌ ํ”„๋กœ๊ทธ๋ž˜๋ฐ ๋ฐฉ์‹์œผ๋กœ ๐Ÿค— Transformers ๋ชจ๋ธ์„ ONNX๋กœ ๋‚ด๋ณด๋‚ผ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ง„ํ–‰ํ•˜์„ธ์š”: ```python >>> from optimum.onnxruntime import ORTModelForSequenceClassification >>> from transformers import AutoTokenizer >>> model_checkpoint = "distilbert_base_uncased_squad" >>> save_directory = "onnx/" >>> # Load a model from transformers and export it to ONNX >>> ort_model = ORTModelForSequenceClassification.from_pretrained(model_checkpoint, export=True) >>> tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) >>> # Save the onnx model and tokenizer >>> ort_model.save_pretrained(save_directory) >>> tokenizer.save_pretrained(save_directory) ``` ### ์ง€์›๋˜์ง€ ์•Š๋Š” ์•„ํ‚คํ…์ฒ˜์˜ ๋ชจ๋ธ ๋‚ด๋ณด๋‚ด๊ธฐ [[exporting-a-model-for-an-unsupported-architecture]] ํ˜„์žฌ ๋‚ด๋ณด๋‚ผ ์ˆ˜ ์—†๋Š” ๋ชจ๋ธ์„ ์ง€์›ํ•˜๊ธฐ ์œ„ํ•ด ๊ธฐ์—ฌํ•˜๋ ค๋ฉด, ๋จผ์ € [`optimum.exporters.onnx`](https://huggingface.co/docs/optimum/exporters/onnx/overview)์—์„œ ์ง€์›๋˜๋Š”์ง€ ํ™•์ธํ•œ ํ›„ ์ง€์›๋˜์ง€ ์•Š๋Š” ๊ฒฝ์šฐ์—๋Š” [๐Ÿค— Optimum์— ๊ธฐ์—ฌ](https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/contribute)ํ•˜์„ธ์š”. ### `transformers.onnx`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ ๋‚ด๋ณด๋‚ด๊ธฐ [[exporting-a-model-with-transformersonnx]] <Tip warning={true}> `tranformers.onnx`๋Š” ๋” ์ด์ƒ ์œ ์ง€๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ์œ„์—์„œ ์„ค๋ช…ํ•œ ๋Œ€๋กœ ๐Ÿค— Optimum์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋‚ด๋ณด๋‚ด์„ธ์š”. ์ด ์„น์…˜์€ ํ–ฅํ›„ ๋ฒ„์ „์—์„œ ์ œ๊ฑฐ๋  ์˜ˆ์ •์ž…๋‹ˆ๋‹ค. </Tip> ๐Ÿค— Transformers ๋ชจ๋ธ์„ ONNX๋กœ ๋‚ด๋ณด๋‚ด๋ ค๋ฉด ์ถ”๊ฐ€ ์ข…์†์„ฑ์„ ์„ค์น˜ํ•˜์„ธ์š”: ```bash pip install transformers[onnx] ``` `transformers.onnx` ํŒจํ‚ค์ง€๋ฅผ Python ๋ชจ๋“ˆ๋กœ ์‚ฌ์šฉํ•˜์—ฌ ์ค€๋น„๋œ ๊ตฌ์„ฑ์„ ์‚ฌ์šฉํ•˜์—ฌ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋‚ด๋ณด๋ƒ…๋‹ˆ๋‹ค: ```bash python -m transformers.onnx --model=distilbert/distilbert-base-uncased onnx/ ``` ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด `--model` ์ธ์ˆ˜์— ์ •์˜๋œ ์ฒดํฌํฌ์ธํŠธ์˜ ONNX ๊ทธ๋ž˜ํ”„๊ฐ€ ๋‚ด๋ณด๋‚ด์ง‘๋‹ˆ๋‹ค. ๐Ÿค— Hub์—์„œ ์ œ๊ณตํ•˜๋Š” ์ฒดํฌํฌ์ธํŠธ๋‚˜ ๋กœ์ปฌ์— ์ €์žฅ๋œ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ „๋‹ฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ฒฐ๊ณผ๋กœ ์ƒ์„ฑ๋œ `model.onnx` ํŒŒ์ผ์€ ONNX ํ‘œ์ค€์„ ์ง€์›ํ•˜๋Š” ๋งŽ์€ ๊ฐ€์†๊ธฐ ์ค‘ ํ•˜๋‚˜์—์„œ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
์˜ˆ๋ฅผ ๋“ค์–ด, ๋‹ค์Œ๊ณผ ๊ฐ™์ด ONNX Runtime์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋กœ๋“œํ•˜๊ณ  ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```python >>> from transformers import AutoTokenizer >>> from onnxruntime import InferenceSession >>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased") >>> session = InferenceSession("onnx/model.onnx") >>> # ONNX Runtime expects NumPy arrays as input >>> inputs = tokenizer("Using DistilBERT with ONNX Runtime!", return_tensors="np") >>> outputs = session.run(output_names=["last_hidden_state"], input_feed=dict(inputs)) ``` ํ•„์š”ํ•œ ์ถœ๋ ฅ ์ด๋ฆ„(์˜ˆ: `["last_hidden_state"]`)์€ ๊ฐ ๋ชจ๋ธ์˜ ONNX ๊ตฌ์„ฑ์„ ํ™•์ธํ•˜์—ฌ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, DistilBERT์˜ ๊ฒฝ์šฐ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```python >>> from transformers.models.distilbert import DistilBertConfig, DistilBertOnnxConfig >>> config = DistilBertConfig() >>> onnx_config = DistilBertOnnxConfig(config) >>> print(list(onnx_config.outputs.keys())) ["last_hidden_state"] ``` Hub์˜ TensorFlow ์ฒดํฌํฌ์ธํŠธ์— ๋Œ€ํ•ด์„œ๋„ ๋™์ผํ•œ ํ”„๋กœ์„ธ์Šค๊ฐ€ ์ ์šฉ๋ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ˆœ์ˆ˜ํ•œ TensorFlow ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋‚ด๋ณด๋ƒ…๋‹ˆ๋‹ค: ```bash python -m transformers.onnx --model=keras-io/transformers-qa onnx/ ``` ๋กœ์ปฌ์— ์ €์žฅ๋œ ๋ชจ๋ธ์„ ๋‚ด๋ณด๋‚ด๋ ค๋ฉด ๋ชจ๋ธ์˜ ๊ฐ€์ค‘์น˜ ํŒŒ์ผ๊ณผ ํ† ํฌ๋‚˜์ด์ € ํŒŒ์ผ์„ ๋™์ผํ•œ ๋””๋ ‰ํ† ๋ฆฌ์— ์ €์žฅํ•œ ๋‹ค์Œ, transformers.onnx ํŒจํ‚ค์ง€์˜ --model ์ธ์ˆ˜๋ฅผ ์›ํ•˜๋Š” ๋””๋ ‰ํ† ๋ฆฌ๋กœ ์ง€์ •ํ•˜์—ฌ ONNX๋กœ ๋‚ด๋ณด๋ƒ…๋‹ˆ๋‹ค: ```bash python -m transformers.onnx --model=local-pt-checkpoint onnx/ ```
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/ko/accelerate.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๐Ÿค— Accelerate๋ฅผ ํ™œ์šฉํ•œ ๋ถ„์‚ฐ ํ•™์Šต[[distributed-training-with-accelerate]] ๋ชจ๋ธ์ด ์ปค์ง€๋ฉด์„œ ๋ณ‘๋ ฌ ์ฒ˜๋ฆฌ๋Š” ์ œํ•œ๋œ ํ•˜๋“œ์›จ์–ด์—์„œ ๋” ํฐ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๊ณ  ํ›ˆ๋ จ ์†๋„๋ฅผ ๋ช‡ ๋ฐฐ๋กœ ๊ฐ€์†ํ™”ํ•˜๊ธฐ ์œ„ํ•œ ์ „๋žต์œผ๋กœ ๋“ฑ์žฅํ–ˆ์Šต๋‹ˆ๋‹ค. Hugging Face์—์„œ๋Š” ์‚ฌ์šฉ์ž๊ฐ€ ํ•˜๋‚˜์˜ ๋จธ์‹ ์— ์—ฌ๋Ÿฌ ๊ฐœ์˜ GPU๋ฅผ ์‚ฌ์šฉํ•˜๋“  ์—ฌ๋Ÿฌ ๋จธ์‹ ์— ์—ฌ๋Ÿฌ ๊ฐœ์˜ GPU๋ฅผ ์‚ฌ์šฉํ•˜๋“  ๋ชจ๋“  ์œ ํ˜•์˜ ๋ถ„์‚ฐ ์„ค์ •์—์„œ ๐Ÿค— Transformers ๋ชจ๋ธ์„ ์‰ฝ๊ฒŒ ํ›ˆ๋ จํ•  ์ˆ˜ ์žˆ๋„๋ก ๋•๊ธฐ ์œ„ํ•ด [๐Ÿค— Accelerate](https://huggingface.co/docs/accelerate) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ๋งŒ๋“ค์—ˆ์Šต๋‹ˆ๋‹ค. ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ๋Š” ๋ถ„์‚ฐ ํ™˜๊ฒฝ์—์„œ ํ›ˆ๋ จํ•  ์ˆ˜ ์žˆ๋„๋ก ๊ธฐ๋ณธ PyTorch ํ›ˆ๋ จ ๋ฃจํ”„๋ฅผ ์ปค์Šคํ„ฐ๋งˆ์ด์ฆˆํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์•Œ์•„๋ด…์‹œ๋‹ค. ## ์„ค์ •[[setup]] ๐Ÿค— Accelerate ์„ค์น˜ ์‹œ์ž‘ํ•˜๊ธฐ: ```bash pip install accelerate ``` ๊ทธ ๋‹ค์Œ, [`~accelerate.Accelerator`] ๊ฐ์ฒด๋ฅผ ๋ถˆ๋Ÿฌ์˜ค๊ณ  ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. [`~accelerate.Accelerator`]๋Š” ์ž๋™์œผ๋กœ ๋ถ„์‚ฐ ์„ค์ • ์œ ํ˜•์„ ๊ฐ์ง€ํ•˜๊ณ  ํ›ˆ๋ จ์— ํ•„์š”ํ•œ ๋ชจ๋“  ๊ตฌ์„ฑ ์š”์†Œ๋ฅผ ์ดˆ๊ธฐํ™”ํ•ฉ๋‹ˆ๋‹ค. ์žฅ์น˜์— ๋ชจ๋ธ์„ ๋ช…์‹œ์ ์œผ๋กœ ๋ฐฐ์น˜ํ•  ํ•„์š”๋Š” ์—†์Šต๋‹ˆ๋‹ค. ```py >>> from accelerate import Accelerator >>> accelerator = Accelerator() ``` ## ๊ฐ€์†ํ™”๋ฅผ ์œ„ํ•œ ์ค€๋น„[[prepare-to-accelerate]] ๋‹ค์Œ ๋‹จ๊ณ„๋Š” ๊ด€๋ จ๋œ ๋ชจ๋“  ํ›ˆ๋ จ ๊ฐ์ฒด๋ฅผ [`~accelerate.Accelerator.prepare`] ๋ฉ”์†Œ๋“œ์— ์ „๋‹ฌํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์—๋Š” ํ›ˆ๋ จ ๋ฐ ํ‰๊ฐ€ ๋ฐ์ดํ„ฐ๋กœ๋”, ๋ชจ๋ธ ๋ฐ ์˜ตํ‹ฐ๋งˆ์ด์ €๊ฐ€ ํฌํ•จ๋ฉ๋‹ˆ๋‹ค: ```py >>> train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare( ... train_dataloader, eval_dataloader, model, optimizer ... ) ``` ## ๋ฐฑ์›Œ๋“œ(Backward)[[backward]] ๋งˆ์ง€๋ง‰์œผ๋กœ ํ›ˆ๋ จ ๋ฃจํ”„์˜ ์ผ๋ฐ˜์ ์ธ `loss.backward()`๋ฅผ ๐Ÿค— Accelerate์˜ [`~accelerate.Accelerator.backward`] ๋ฉ”์†Œ๋“œ๋กœ ๋Œ€์ฒดํ•˜๊ธฐ๋งŒ ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค: ```py >>> for epoch in range(num_epochs): ... for batch in train_dataloader: ... outputs = model(**batch) ... loss = outputs.loss ... accelerator.backward(loss) ... optimizer.step() ... lr_scheduler.step() ... optimizer.zero_grad() ... progress_bar.update(1) ``` ๋‹ค์Œ ์ฝ”๋“œ์—์„œ ๋ณผ ์ˆ˜ ์žˆ๋“ฏ์ด, ํ›ˆ๋ จ ๋ฃจํ”„์— ์ฝ”๋“œ ๋„ค ์ค„๋งŒ ์ถ”๊ฐ€ํ•˜๋ฉด ๋ถ„์‚ฐ ํ•™์Šต์„ ํ™œ์„ฑํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! 
```diff
+ from accelerate import Accelerator
  from transformers import AdamW, AutoModelForSequenceClassification, get_scheduler

+ accelerator = Accelerator()

  model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
  optimizer = AdamW(model.parameters(), lr=3e-5)

- device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
- model.to(device)

+ train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare(
+     train_dataloader, eval_dataloader, model, optimizer
+ )

  num_epochs = 3
  num_training_steps = num_epochs * len(train_dataloader)
  lr_scheduler = get_scheduler(
      "linear",
      optimizer=optimizer,
      num_warmup_steps=0,
      num_training_steps=num_training_steps
  )

  progress_bar = tqdm(range(num_training_steps))

  model.train()
  for epoch in range(num_epochs):
      for batch in train_dataloader:
-         batch = {k: v.to(device) for k, v in batch.items()}
          outputs = model(**batch)
          loss = outputs.loss
-         loss.backward()
+         accelerator.backward(loss)

          optimizer.step()
          lr_scheduler.step()
          optimizer.zero_grad()
          progress_bar.update(1)
```

## ํ•™์Šต[[train]]

๊ด€๋ จ ์ฝ”๋“œ๋ฅผ ์ถ”๊ฐ€ํ•œ ํ›„์—๋Š” ์Šคํฌ๋ฆฝํŠธ๋‚˜ Colaboratory์™€ ๊ฐ™์€ ๋…ธํŠธ๋ถ์—์„œ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•˜์„ธ์š”.

### ์Šคํฌ๋ฆฝํŠธ๋กœ ํ•™์Šตํ•˜๊ธฐ[[train-with-a-script]]

์Šคํฌ๋ฆฝํŠธ์—์„œ ํ›ˆ๋ จ์„ ์‹คํ–‰ํ•˜๋Š” ๊ฒฝ์šฐ, ๋‹ค์Œ ๋ช…๋ น์„ ์‹คํ–‰ํ•˜์—ฌ ๊ตฌ์„ฑ ํŒŒ์ผ์„ ์ƒ์„ฑํ•˜๊ณ  ์ €์žฅํ•ฉ๋‹ˆ๋‹ค:

```bash
accelerate config
```

๊ทธ๋Ÿฐ ๋‹ค์Œ, ์•„๋ž˜ ๋ช…๋ น์œผ๋กœ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•˜์„ธ์š”:

```bash
accelerate launch train.py
```

### ๋…ธํŠธ๋ถ์œผ๋กœ ํ•™์Šตํ•˜๊ธฐ[[train-with-a-notebook]]

Colaboratory์˜ TPU๋ฅผ ์‚ฌ์šฉํ•˜๋ ค๋Š” ๊ฒฝ์šฐ, ๋…ธํŠธ๋ถ์—์„œ๋„ ๐Ÿค— Accelerate๋ฅผ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ›ˆ๋ จ์„ ๋‹ด๋‹นํ•˜๋Š” ๋ชจ๋“  ์ฝ”๋“œ๋ฅผ ํ•จ์ˆ˜๋กœ ๊ฐ์‹ธ์„œ [`~accelerate.notebook_launcher`]์— ์ „๋‹ฌํ•˜์„ธ์š”(ํ•จ์ˆ˜ ํ˜•ํƒœ์˜ ๊ฐ„๋‹จํ•œ ์Šค์ผ€์น˜๋Š” ์ด ๋ฌธ์„œ ๋์— ์žˆ์Šต๋‹ˆ๋‹ค):

```py
>>> from accelerate import notebook_launcher

>>> notebook_launcher(training_function)
```

๐Ÿค— Accelerate ๋ฐ ๋‹ค์–‘ํ•œ ๊ธฐ๋Šฅ์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [๋ฌธ์„œ](https://huggingface.co/docs/accelerate)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”.
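์ฐธ๊ณ ๋กœ, ์œ„ ๋…ธํŠธ๋ถ ์˜ˆ์‹œ์—์„œ [`~accelerate.notebook_launcher`]์— ์ „๋‹ฌํ•œ `training_function`์€ ์•ž์„œ ๋ณธ ํ›ˆ๋ จ ๋ฃจํ”„๋ฅผ ํ•˜๋‚˜์˜ ํ•จ์ˆ˜๋กœ ๊ฐ์‹ผ ๊ฒƒ์ด๋ฉด ์ถฉ๋ถ„ํ•ฉ๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ์ด๋ฅผ ๋Œ€๋žต์ ์œผ๋กœ ๋ณด์—ฌ์ฃผ๋Š” ์Šค์ผ€์น˜์ด๋ฉฐ, `get_dataloaders()`์™€ `build_model_and_optimizer()`๋Š” ์„ค๋ช…์„ ์œ„ํ•ด ๊ฐ€์ •ํ•œ ๊ฐ€์ƒ์˜ ํ—ฌํผ ํ•จ์ˆ˜์ž…๋‹ˆ๋‹ค.

```py
from accelerate import Accelerator


def training_function():
    accelerator = Accelerator()

    # ๊ฐ€์ •: ๋ฐ์ดํ„ฐ๋กœ๋”์™€ ๋ชจ๋ธ/์˜ตํ‹ฐ๋งˆ์ด์ €๋ฅผ ๋งŒ๋“œ๋Š” ๊ฐ€์ƒ์˜ ํ—ฌํผ ํ•จ์ˆ˜์ž…๋‹ˆ๋‹ค.
    train_dataloader, eval_dataloader = get_dataloaders()
    model, optimizer = build_model_and_optimizer()

    # ๋ชจ๋“  ํ›ˆ๋ จ ๊ฐ์ฒด๋ฅผ prepare()์— ์ „๋‹ฌํ•˜๋ฉด ์žฅ์น˜ ๋ฐฐ์น˜๊ฐ€ ์ž๋™์œผ๋กœ ์ฒ˜๋ฆฌ๋ฉ๋‹ˆ๋‹ค.
    train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare(
        train_dataloader, eval_dataloader, model, optimizer
    )

    model.train()
    for batch in train_dataloader:
        outputs = model(**batch)
        loss = outputs.loss
        accelerator.backward(loss)  # loss.backward() ๋Œ€์‹  ์‚ฌ์šฉ
        optimizer.step()
        optimizer.zero_grad()
```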
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/ko/llm_tutorial.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๋Œ€๊ทœ๋ชจ ์–ธ์–ด ๋ชจ๋ธ๋กœ ์ƒ์„ฑํ•˜๊ธฐ [[generation-with-llms]] [[open-in-colab]] LLM ๋˜๋Š” ๋Œ€๊ทœ๋ชจ ์–ธ์–ด ๋ชจ๋ธ์€ ํ…์ŠคํŠธ ์ƒ์„ฑ์˜ ํ•ต์‹ฌ ๊ตฌ์„ฑ ์š”์†Œ์ž…๋‹ˆ๋‹ค. ๊ฐ„๋‹จํžˆ ๋งํ•˜๋ฉด, ์ฃผ์–ด์ง„ ์ž…๋ ฅ ํ…์ŠคํŠธ์— ๋Œ€ํ•œ ๋‹ค์Œ ๋‹จ์–ด(์ •ํ™•ํ•˜๊ฒŒ๋Š” ํ† ํฐ)๋ฅผ ์˜ˆ์ธกํ•˜๊ธฐ ์œ„ํ•ด ํ›ˆ๋ จ๋œ ๋Œ€๊ทœ๋ชจ ์‚ฌ์ „ ํ›ˆ๋ จ ๋ณ€ํ™˜๊ธฐ ๋ชจ๋ธ๋กœ ๊ตฌ์„ฑ๋ฉ๋‹ˆ๋‹ค. ํ† ํฐ์„ ํ•œ ๋ฒˆ์— ํ•˜๋‚˜์”ฉ ์˜ˆ์ธกํ•˜๊ธฐ ๋•Œ๋ฌธ์— ์ƒˆ๋กœ์šด ๋ฌธ์žฅ์„ ์ƒ์„ฑํ•˜๋ ค๋ฉด ๋ชจ๋ธ์„ ํ˜ธ์ถœํ•˜๋Š” ๊ฒƒ ์™ธ์— ๋” ๋ณต์žกํ•œ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ฆ‰, ์ž๊ธฐํšŒ๊ท€ ์ƒ์„ฑ์„ ์ˆ˜ํ–‰ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ž๊ธฐํšŒ๊ท€ ์ƒ์„ฑ์€ ๋ช‡ ๊ฐœ์˜ ์ดˆ๊ธฐ ์ž…๋ ฅ๊ฐ’์„ ์ œ๊ณตํ•œ ํ›„, ๊ทธ ์ถœ๋ ฅ์„ ๋‹ค์‹œ ๋ชจ๋ธ์— ์ž…๋ ฅ์œผ๋กœ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ˜๋ณต์ ์œผ๋กœ ํ˜ธ์ถœํ•˜๋Š” ์ถ”๋ก  ๊ณผ์ •์ž…๋‹ˆ๋‹ค. ๐Ÿค— Transformers์—์„œ๋Š” [`~generation.GenerationMixin.generate`] ๋ฉ”์†Œ๋“œ๊ฐ€ ์ด ์—ญํ• ์„ ํ•˜๋ฉฐ, ์ด๋Š” ์ƒ์„ฑ ๊ธฐ๋Šฅ์„ ๊ฐ€์ง„ ๋ชจ๋“  ๋ชจ๋ธ์—์„œ ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•ฉ๋‹ˆ๋‹ค. ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ๋Š” ๋‹ค์Œ ๋‚ด์šฉ์„ ๋‹ค๋ฃจ๊ฒŒ ๋ฉ๋‹ˆ๋‹ค: * LLM์œผ๋กœ ํ…์ŠคํŠธ ์ƒ์„ฑ * ์ผ๋ฐ˜์ ์œผ๋กœ ๋ฐœ์ƒํ•˜๋Š” ๋ฌธ์ œ ํ•ด๊ฒฐ * LLM์„ ์ตœ๋Œ€ํ•œ ํ™œ์šฉํ•˜๊ธฐ ์œ„ํ•œ ๋‹ค์Œ ๋‹จ๊ณ„ ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ชจ๋“  ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install transformers bitsandbytes>=0.39.0 -q ``` ## ํ…์ŠคํŠธ ์ƒ์„ฑ [[generate-text]] [์ธ๊ณผ์  ์–ธ์–ด ๋ชจ๋ธ๋ง(causal language modeling)](tasks/language_modeling)์„ ๋ชฉ์ ์œผ๋กœ ํ•™์Šต๋œ ์–ธ์–ด ๋ชจ๋ธ์€ ์ผ๋ จ์˜ ํ…์ŠคํŠธ ํ† ํฐ์„ ์ž…๋ ฅ์œผ๋กœ ์‚ฌ์šฉํ•˜๊ณ , ๊ทธ ๊ฒฐ๊ณผ๋กœ ๋‹ค์Œ ํ† ํฐ์ด ๋‚˜์˜ฌ ํ™•๋ฅ  ๋ถ„ํฌ๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. <!-- [GIF 1 -- FWD PASS] --> <figure class="image table text-center m-0 w-full"> <video style="max-width: 90%; margin: auto;" autoplay loop muted playsinline src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/assisted-generation/gif_1_1080p.mov" ></video> <figcaption>"LLM์˜ ์ „๋ฐฉ ํŒจ์Šค"</figcaption> </figure> LLM๊ณผ ์ž๊ธฐํšŒ๊ท€ ์ƒ์„ฑ์„ ํ•จ๊ป˜ ์‚ฌ์šฉํ•  ๋•Œ ํ•ต์‹ฌ์ ์ธ ๋ถ€๋ถ„์€ ์ด ํ™•๋ฅ  ๋ถ„ํฌ๋กœ๋ถ€ํ„ฐ ๋‹ค์Œ ํ† ํฐ์„ ์–ด๋–ป๊ฒŒ ๊ณ ๋ฅผ ๊ฒƒ์ธ์ง€์ž…๋‹ˆ๋‹ค. ๋‹ค์Œ ๋ฐ˜๋ณต ๊ณผ์ •์— ์‚ฌ์šฉ๋  ํ† ํฐ์„ ๊ฒฐ์ •ํ•˜๋Š” ํ•œ, ์–ด๋– ํ•œ ๋ฐฉ๋ฒ•๋„ ๊ฐ€๋Šฅํ•ฉ๋‹ˆ๋‹ค. ํ™•๋ฅ  ๋ถ„ํฌ์—์„œ ๊ฐ€์žฅ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์€ ํ† ํฐ์„ ์„ ํƒํ•˜๋Š” ๊ฒƒ์ฒ˜๋Ÿผ ๊ฐ„๋‹จํ•  ์ˆ˜๋„ ์žˆ๊ณ , ๊ฒฐ๊ณผ ๋ถ„ํฌ์—์„œ ์ƒ˜ํ”Œ๋งํ•˜๊ธฐ ์ „์— ์ˆ˜์‹ญ ๊ฐ€์ง€ ๋ณ€ํ™˜์„ ์ ์šฉํ•˜๋Š” ๊ฒƒ์ฒ˜๋Ÿผ ๋ณต์žกํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. 
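์•„๋ž˜๋Š” ์ด "๋‹ค์Œ ํ† ํฐ ๊ณ ๋ฅด๊ธฐ" ๊ณผ์ •์˜ ๊ฐ๋ฅผ ์žก๊ธฐ ์œ„ํ•œ ์ตœ์†Œํ•œ์˜ ์Šค์ผ€์น˜์ž…๋‹ˆ๋‹ค. ์‹ค์ œ [`~generation.GenerationMixin.generate`]๊ฐ€ ์ˆ˜ํ–‰ํ•˜๋Š” ์ž‘์—…์„ ํฌ๊ฒŒ ๋‹จ์ˆœํ™”ํ•œ ๊ฒƒ์ด๋ฉฐ, `logits` ๊ฐ’์€ ์„ค๋ช…์„ ์œ„ํ•ด ์ž„์˜๋กœ ๋งŒ๋“  ์˜ˆ์‹œ์ž…๋‹ˆ๋‹ค.

```python
import torch

# ๊ฐ€์ •: ๋ชจ๋ธ์ด ๋งˆ์ง€๋ง‰ ์œ„์น˜์—์„œ ๋ฐ˜ํ™˜ํ•œ ์–ดํœ˜ ์ „์ฒด์— ๋Œ€ํ•œ ์ ์ˆ˜(๋กœ์ง“)๋ผ๊ณ  ๊ฐ€์ •ํ•œ ์ž„์˜์˜ ๊ฐ’์ž…๋‹ˆ๋‹ค.
logits = torch.tensor([2.0, 1.0, 0.5, -1.0])

# ๊ทธ๋ฆฌ๋”” ์„ ํƒ: ํ™•๋ฅ ์ด ๊ฐ€์žฅ ๋†’์€ ํ† ํฐ์„ ๊ทธ๋Œ€๋กœ ๊ณ ๋ฆ…๋‹ˆ๋‹ค.
greedy_token = torch.argmax(logits, dim=-1)

# ์ƒ˜ํ”Œ๋ง: ์˜จ๋„(temperature) ๊ฐ™์€ ๋ณ€ํ™˜์„ ์ ์šฉํ•œ ๋’ค ํ™•๋ฅ  ๋ถ„ํฌ์—์„œ ํ† ํฐ์„ ๋ฝ‘์Šต๋‹ˆ๋‹ค.
probs = torch.softmax(logits / 0.7, dim=-1)
sampled_token = torch.multinomial(probs, num_samples=1)

print(greedy_token.item(), sampled_token.item())
```

์ด๋ ‡๊ฒŒ ์„ ํƒ๋œ ํ† ํฐ์„ ์ž…๋ ฅ ๋์— ๋ถ™์—ฌ ๋‹ค์‹œ ๋ชจ๋ธ์— ๋„ฃ๋Š” ๊ณผ์ •์„ ๋ฐ˜๋ณตํ•˜๋Š” ๊ฒƒ์ด ๋ฐ”๋กœ ์ž๊ธฐํšŒ๊ท€ ์ƒ์„ฑ์ž…๋‹ˆ๋‹ค.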
<!-- [GIF 2 -- TEXT GENERATION] --> <figure class="image table text-center m-0 w-full"> <video style="max-width: 90%; margin: auto;" autoplay loop muted playsinline src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/assisted-generation/gif_2_1080p.mov" ></video> <figcaption>"์ž๊ธฐํšŒ๊ท€ ์ƒ์„ฑ์€ ํ™•๋ฅ  ๋ถ„ํฌ์—์„œ ๋‹ค์Œ ํ† ํฐ์„ ๋ฐ˜๋ณต์ ์œผ๋กœ ์„ ํƒํ•˜์—ฌ ํ…์ŠคํŠธ๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค."</figcaption> </figure> ์œ„์—์„œ ์„ค๋ช…ํ•œ ๊ณผ์ •์€ ์–ด๋–ค ์ข…๋ฃŒ ์กฐ๊ฑด์ด ์ถฉ์กฑ๋  ๋•Œ๊นŒ์ง€ ๋ฐ˜๋ณต์ ์œผ๋กœ ์ˆ˜ํ–‰๋ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์ด ์‹œํ€€์Šค์˜ ๋(EOS ํ† ํฐ)์„ ์ถœ๋ ฅํ•  ๋•Œ๊นŒ์ง€๋ฅผ ์ข…๋ฃŒ ์กฐ๊ฑด์œผ๋กœ ํ•˜๋Š” ๊ฒƒ์ด ์ด์ƒ์ ์ž…๋‹ˆ๋‹ค. ๊ทธ๋ ‡์ง€ ์•Š์€ ๊ฒฝ์šฐ์—๋Š” ๋ฏธ๋ฆฌ ์ •์˜๋œ ์ตœ๋Œ€ ๊ธธ์ด์— ๋„๋‹ฌํ–ˆ์„ ๋•Œ ์ƒ์„ฑ์ด ์ค‘๋‹จ๋ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์ด ์˜ˆ์ƒ๋Œ€๋กœ ๋™์ž‘ํ•˜๊ธฐ ์œ„ํ•ด์„  ํ† ํฐ ์„ ํƒ ๋‹จ๊ณ„์™€ ์ •์ง€ ์กฐ๊ฑด์„ ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ์„ค์ •ํ•˜๋Š” ๊ฒƒ์ด ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ์ด์œ ๋กœ, ๊ฐ ๋ชจ๋ธ์—๋Š” ๊ธฐ๋ณธ ์ƒ์„ฑ ์„ค์ •์ด ์ž˜ ์ •์˜๋œ [`~generation.GenerationConfig`] ํŒŒ์ผ์ด ํ•จ๊ป˜ ์ œ๊ณต๋ฉ๋‹ˆ๋‹ค. ์ฝ”๋“œ๋ฅผ ํ™•์ธํ•ด๋ด…์‹œ๋‹ค! <Tip> ๊ธฐ๋ณธ LLM ์‚ฌ์šฉ์— ๊ด€์‹ฌ์ด ์žˆ๋‹ค๋ฉด, ์šฐ๋ฆฌ์˜ [`Pipeline`](pipeline_tutorial) ์ธํ„ฐํŽ˜์ด์Šค๋กœ ์‹œ์ž‘ํ•˜๋Š” ๊ฒƒ์„ ์ถ”์ฒœํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ LLM์€ ์–‘์žํ™”๋‚˜ ํ† ํฐ ์„ ํƒ ๋‹จ๊ณ„์—์„œ์˜ ๋ฏธ์„ธํ•œ ์ œ์–ด์™€ ๊ฐ™์€ ๊ณ ๊ธ‰ ๊ธฐ๋Šฅ๋“ค์„ ์ข…์ข… ํ•„์š”๋กœ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ์ž‘์—…์€ [`~generation.GenerationMixin.generate`]๋ฅผ ํ†ตํ•ด ๊ฐ€์žฅ ์ž˜ ์ˆ˜ํ–‰๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. LLM์„ ์ด์šฉํ•œ ์ž๊ธฐํšŒ๊ท€ ์ƒ์„ฑ์€ ์ž์›์„ ๋งŽ์ด ์†Œ๋ชจํ•˜๋ฏ€๋กœ, ์ ์ ˆํ•œ ์ฒ˜๋ฆฌ๋Ÿ‰์„ ์œ„ํ•ด GPU์—์„œ ์‹คํ–‰๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. </Tip> ๋จผ์ €, ๋ชจ๋ธ์„ ๋ถˆ๋Ÿฌ์˜ค์„ธ์š”. ```python >>> from transformers import AutoModelForCausalLM >>> model = AutoModelForCausalLM.from_pretrained( ... "mistralai/Mistral-7B-v0.1", device_map="auto", load_in_4bit=True ... ) ``` `from_pretrained` ํ•จ์ˆ˜๋ฅผ ํ˜ธ์ถœํ•  ๋•Œ 2๊ฐœ์˜ ํ”Œ๋ž˜๊ทธ๋ฅผ ์ฃผ๋ชฉํ•˜์„ธ์š”: - `device_map`์€ ๋ชจ๋ธ์ด GPU๋กœ ์ด๋™๋˜๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. - `load_in_4bit`๋Š” ๋ฆฌ์†Œ์Šค ์š”๊ตฌ ์‚ฌํ•ญ์„ ํฌ๊ฒŒ ์ค„์ด๊ธฐ ์œ„ํ•ด [4๋น„ํŠธ ๋™์  ์–‘์žํ™”](main_classes/quantization)๋ฅผ ์ ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด ์™ธ์—๋„ ๋ชจ๋ธ์„ ์ดˆ๊ธฐํ™”ํ•˜๋Š” ๋‹ค์–‘ํ•œ ๋ฐฉ๋ฒ•์ด ์žˆ์ง€๋งŒ, LLM์„ ์ฒ˜์Œ ์‹œ์ž‘ํ•  ๋•Œ ์ด ์„ค์ •์„ ์ถ”์ฒœํ•ฉ๋‹ˆ๋‹ค. ์ด์–ด์„œ ํ…์ŠคํŠธ ์ž…๋ ฅ์„ [ํ† ํฌ๋‚˜์ด์ €](tokenizer_summary)์œผ๋กœ ์ „์ฒ˜๋ฆฌํ•˜์„ธ์š”. ```python >>> from transformers import AutoTokenizer >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1") >>> device = "cuda" if torch.cuda.is_available() else "cpu" >>> model_inputs = tokenizer(["A list of colors: red, blue"], return_tensors="pt").to(device) ``` `model_inputs` ๋ณ€์ˆ˜์—๋Š” ํ† ํฐํ™”๋œ ํ…์ŠคํŠธ ์ž…๋ ฅ๊ณผ ํ•จ๊ป˜ ์–ดํ…์…˜ ๋งˆ์Šคํฌ๊ฐ€ ๋“ค์–ด ์žˆ์Šต๋‹ˆ๋‹ค. [`~generation.GenerationMixin.generate`]๋Š” ์–ดํ…์…˜ ๋งˆ์Šคํฌ๊ฐ€ ์ œ๊ณต๋˜์ง€ ์•Š์•˜์„ ๊ฒฝ์šฐ์—๋„ ์ด๋ฅผ ์ถ”๋ก ํ•˜๋ ค๊ณ  ๋…ธ๋ ฅํ•˜์ง€๋งŒ, ์ตœ์ƒ์˜ ์„ฑ๋Šฅ์„ ์œ„ํ•ด์„œ๋Š” ๊ฐ€๋Šฅํ•˜๋ฉด ์–ดํ…์…˜ ๋งˆ์Šคํฌ๋ฅผ ์ „๋‹ฌํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ๋งˆ์ง€๋ง‰์œผ๋กœ [`~generation.GenerationMixin.generate`] ๋ฉ”์†Œ๋“œ๋ฅผ ํ˜ธ์ถœํ•ด ์ƒ์„ฑ๋œ ํ† ํฐ์„ ์–ป์€ ํ›„, ์ด๋ฅผ ์ถœ๋ ฅํ•˜๊ธฐ ์ „์— ํ…์ŠคํŠธ ํ˜•ํƒœ๋กœ ๋ณ€ํ™˜ํ•˜์„ธ์š”. ```python >>> generated_ids = model.generate(**model_inputs) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] 'A list of colors: red, blue, green, yellow, black, white, and brown' ``` ์ด๊ฒŒ ์ „๋ถ€์ž…๋‹ˆ๋‹ค! ๋ช‡ ์ค„์˜ ์ฝ”๋“œ๋งŒ์œผ๋กœ LLM์˜ ๋Šฅ๋ ฅ์„ ํ™œ์šฉํ•  ์ˆ˜ ์žˆ๊ฒŒ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. 
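์•ž์„œ Tip์—์„œ ์–ธ๊ธ‰ํ•œ [`pipeline`](pipeline_tutorial) ์ธํ„ฐํŽ˜์ด์Šค๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ์œ„ ๊ณผ์ •์„ ๋” ์งง๊ฒŒ ์ค„์ผ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ๊ฐ€๋ฒผ์šด ๊ณต๊ฐœ ๋ชจ๋ธ(`gpt2`)์„ ์˜ˆ์‹œ๋กœ ๊ฐ€์ •ํ•œ ๊ฐ„๋‹จํ•œ ์Šค์ผ€์น˜์ด๋ฉฐ, ์–‘์žํ™”๋‚˜ ํ† ํฐ ์„ ํƒ ์ œ์–ด ๊ฐ™์€ ๊ณ ๊ธ‰ ๊ธฐ๋Šฅ์ด ํ•„์š”ํ•˜๋‹ค๋ฉด ๋ณธ๋ฌธ์ฒ˜๋Ÿผ [`~generation.GenerationMixin.generate`]๋ฅผ ์ง์ ‘ ์‚ฌ์šฉํ•˜๋Š” ํŽธ์ด ์ข‹์Šต๋‹ˆ๋‹ค.

```python
from transformers import pipeline

# ๊ฐ€์ •: ์˜ˆ์‹œ๋ฅผ ๊ฐ€๋ณ๊ฒŒ ์œ ์ง€ํ•˜๊ธฐ ์œ„ํ•ด ์ž‘์€ ๋ชจ๋ธ(gpt2)์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์›ํ•˜๋Š” LLM ์ฒดํฌํฌ์ธํŠธ๋กœ ๋ฐ”๊ฟ€ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
generator = pipeline("text-generation", model="gpt2")

result = generator("A list of colors: red, blue", max_new_tokens=10)
print(result[0]["generated_text"])
```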
## ์ผ๋ฐ˜์ ์œผ๋กœ ๋ฐœ์ƒํ•˜๋Š” ๋ฌธ์ œ [[common-pitfalls]] [์ƒ์„ฑ ์ „๋žต](generation_strategies)์ด ๋งŽ๊ณ , ๊ธฐ๋ณธ๊ฐ’์ด ํ•ญ์ƒ ์‚ฌ์šฉ ์‚ฌ๋ก€์— ์ ํ•ฉํ•˜์ง€ ์•Š์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ถœ๋ ฅ์ด ์˜ˆ์ƒ๊ณผ ๋‹ค๋ฅผ ๋•Œ ํ”ํžˆ ๋ฐœ์ƒํ•˜๋Š” ๋ฌธ์ œ์™€ ์ด๋ฅผ ํ•ด๊ฒฐํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ๋ชฉ๋ก์„ ๋งŒ๋“ค์—ˆ์Šต๋‹ˆ๋‹ค. ```py >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1") >>> tokenizer.pad_token = tokenizer.eos_token # Mistral has no pad token by default >>> model = AutoModelForCausalLM.from_pretrained( ... "mistralai/Mistral-7B-v0.1", device_map="auto", load_in_4bit=True ... ) ``` ### ์ƒ์„ฑ๋œ ์ถœ๋ ฅ์ด ๋„ˆ๋ฌด ์งง๊ฑฐ๋‚˜ ๊ธธ๋‹ค [[generated-output-is-too-shortlong]] [`~generation.GenerationConfig`] ํŒŒ์ผ์—์„œ ๋ณ„๋„๋กœ ์ง€์ •ํ•˜์ง€ ์•Š์œผ๋ฉด, `generate`๋Š” ๊ธฐ๋ณธ์ ์œผ๋กœ ์ตœ๋Œ€ 20๊ฐœ์˜ ํ† ํฐ์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. `generate` ํ˜ธ์ถœ์—์„œ `max_new_tokens`์„ ์ˆ˜๋™์œผ๋กœ ์„ค์ •ํ•˜์—ฌ ๋ฐ˜ํ™˜ํ•  ์ˆ˜ ์žˆ๋Š” ์ƒˆ ํ† ํฐ์˜ ์ตœ๋Œ€ ์ˆ˜๋ฅผ ์„ค์ •ํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. LLM(์ •ํ™•ํ•˜๊ฒŒ๋Š” [๋””์ฝ”๋” ์ „์šฉ ๋ชจ๋ธ](https://huggingface.co/learn/nlp-course/chapter1/6?fw=pt))์€ ์ž…๋ ฅ ํ”„๋กฌํ”„ํŠธ๋„ ์ถœ๋ ฅ์˜ ์ผ๋ถ€๋กœ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ```py >>> model_inputs = tokenizer(["A sequence of numbers: 1, 2"], return_tensors="pt").to("cuda") >>> # By default, the output will contain up to 20 tokens >>> generated_ids = model.generate(**model_inputs, pad_token_id=tokenizer.eos_token_id) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] 'A sequence of numbers: 1, 2, 3, 4, 5' >>> # Setting `max_new_tokens` allows you to control the maximum length >>> generated_ids = model.generate(**model_inputs, pad_token_id=tokenizer.eos_token_id, max_new_tokens=50) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] 'A sequence of numbers: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,' ``` ### ์ž˜๋ชป๋œ ์ƒ์„ฑ ๋ชจ๋“œ [[incorrect-generation-mode]] ๊ธฐ๋ณธ์ ์œผ๋กœ [`~generation.GenerationConfig`] ํŒŒ์ผ์—์„œ ๋ณ„๋„๋กœ ์ง€์ •ํ•˜์ง€ ์•Š์œผ๋ฉด, `generate`๋Š” ๊ฐ ๋ฐ˜๋ณต์—์„œ ๊ฐ€์žฅ ํ™•๋ฅ ์ด ๋†’์€ ํ† ํฐ์„ ์„ ํƒํ•ฉ๋‹ˆ๋‹ค(๊ทธ๋ฆฌ๋”” ๋””์ฝ”๋”ฉ). ํ•˜๋ ค๋Š” ์ž‘์—…์— ๋”ฐ๋ผ ์ด ๋ฐฉ๋ฒ•์€ ๋ฐ”๋žŒ์งํ•˜์ง€ ์•Š์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ์ฑ—๋ด‡์ด๋‚˜ ์—์„ธ์ด ์ž‘์„ฑ๊ณผ ๊ฐ™์€ ์ฐฝ์˜์ ์ธ ์ž‘์—…์€ ์ƒ˜ํ”Œ๋ง์ด ์ ํ•ฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฐ˜๋ฉด, ์˜ค๋””์˜ค๋ฅผ ํ…์ŠคํŠธ๋กœ ๋ณ€ํ™˜ํ•˜๊ฑฐ๋‚˜ ๋ฒˆ์—ญ๊ณผ ๊ฐ™์€ ์ž…๋ ฅ ๊ธฐ๋ฐ˜ ์ž‘์—…์€ ๊ทธ๋ฆฌ๋”” ๋””์ฝ”๋”ฉ์ด ๋” ์ ํ•ฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `do_sample=True`๋กœ ์ƒ˜ํ”Œ๋ง์„ ํ™œ์„ฑํ™”ํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ, ์ด ์ฃผ์ œ์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ ์ด [๋ธ”๋กœ๊ทธ ํฌ์ŠคํŠธ](https://huggingface.co/blog/how-to-generate)์—์„œ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```python >>> # Set seed or reproducibility -- you don't need this unless you want full reproducibility >>> from transformers import set_seed >>> set_seed(0) >>> model_inputs = tokenizer(["I am a cat."], return_tensors="pt").to("cuda") >>> # LLM + greedy decoding = repetitive, boring output >>> generated_ids = model.generate(**model_inputs) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] 'I am a cat. I am a cat. I am a cat. I am a cat' >>> # With sampling, the output becomes more creative! >>> generated_ids = model.generate(**model_inputs, do_sample=True) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] 'I am a cat.\nI just need to be. 
I am always.\nEvery time' ``` ### ์ž˜๋ชป๋œ ํŒจ๋”ฉ [[wrong-padding-side]] LLM์€ [๋””์ฝ”๋” ์ „์šฉ](https://huggingface.co/learn/nlp-course/chapter1/6?fw=pt) ๊ตฌ์กฐ๋ฅผ ๊ฐ€์ง€๊ณ  ์žˆ์–ด, ์ž…๋ ฅ ํ”„๋กฌํ”„ํŠธ์— ๋Œ€ํ•ด ์ง€์†์ ์œผ๋กœ ๋ฐ˜๋ณต ์ฒ˜๋ฆฌ๋ฅผ ํ•ฉ๋‹ˆ๋‹ค. ์ž…๋ ฅ ๋ฐ์ดํ„ฐ์˜ ๊ธธ์ด๊ฐ€ ๋‹ค๋ฅด๋ฉด ํŒจ๋”ฉ ์ž‘์—…์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. LLM์€ ํŒจ๋”ฉ ํ† ํฐ์—์„œ ์ž‘๋™์„ ์ด์–ด๊ฐ€๋„๋ก ์„ค๊ณ„๋˜์ง€ ์•Š์•˜๊ธฐ ๋•Œ๋ฌธ์—, ์ž…๋ ฅ ์™ผ์ชฝ์— ํŒจ๋”ฉ์ด ์ถ”๊ฐ€ ๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ์–ดํ…์…˜ ๋งˆ์Šคํฌ๋„ ๊ผญ `generate` ํ•จ์ˆ˜์— ์ „๋‹ฌ๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค! ```python >>> # The tokenizer initialized above has right-padding active by default: the 1st sequence, >>> # which is shorter, has padding on the right side. Generation fails. >>> model_inputs = tokenizer( ... ["1, 2, 3", "A, B, C, D, E"], padding=True, return_tensors="pt" ... ).to("cuda") >>> generated_ids = model.generate(**model_inputs) >>> tokenizer.batch_decode(generated_ids[0], skip_special_tokens=True)[0] '' >>> # With left-padding, it works as expected! >>> tokenizer = AutoTokenizer.from_pretrained("openlm-research/open_llama_7b", padding_side="left") >>> tokenizer.pad_token = tokenizer.eos_token # Llama has no pad token by default >>> model_inputs = tokenizer( ... ["1, 2, 3", "A, B, C, D, E"], padding=True, return_tensors="pt" ... ).to("cuda") >>> generated_ids = model.generate(**model_inputs) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] '1, 2, 3, 4, 5, 6,' ``` <!-- TODO: when the prompting guide is ready, mention the importance of setting the right prompt in this section --> ## ์ถ”๊ฐ€ ์ž๋ฃŒ [[further-resources]] ์ž๊ธฐํšŒ๊ท€ ์ƒ์„ฑ ํ”„๋กœ์„ธ์Šค๋Š” ์ƒ๋Œ€์ ์œผ๋กœ ๋‹จ์ˆœํ•œ ํŽธ์ด์ง€๋งŒ, LLM์„ ์ตœ๋Œ€ํ•œ ํ™œ์šฉํ•˜๋ ค๋ฉด ์—ฌ๋Ÿฌ ๊ฐ€์ง€ ์š”์†Œ๋ฅผ ๊ณ ๋ คํ•ด์•ผ ํ•˜๋ฏ€๋กœ ์‰ฝ์ง€ ์•Š์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. LLM์— ๋Œ€ํ•œ ๋” ๊นŠ์€ ์ดํ•ด์™€ ํ™œ์šฉ์„ ์œ„ํ•œ ๋‹ค์Œ ๋‹จ๊ณ„๋Š” ์•„๋ž˜์™€ ๊ฐ™์Šต๋‹ˆ๋‹ค: <!-- TODO: complete with new guides --> ### ๊ณ ๊ธ‰ ์ƒ์„ฑ ์‚ฌ์šฉ [[advanced-generate-usage]] 1. [๊ฐ€์ด๋“œ](generation_strategies)๋Š” ๋‹ค์–‘ํ•œ ์ƒ์„ฑ ๋ฐฉ๋ฒ•์„ ์ œ์–ดํ•˜๋Š” ๋ฐฉ๋ฒ•, ์ƒ์„ฑ ์„ค์ • ํŒŒ์ผ์„ ์„ค์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•, ์ถœ๋ ฅ์„ ์ŠคํŠธ๋ฆฌ๋ฐํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. 2. [`~generation.GenerationConfig`]์™€ [`~generation.GenerationMixin.generate`], [generate-related classes](internal/generation_utils)๋ฅผ ์ฐธ์กฐํ•ด๋ณด์„ธ์š”. ### LLM ๋ฆฌ๋”๋ณด๋“œ [[llm-leaderboards]] 1. [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)๋Š” ์˜คํ”ˆ ์†Œ์Šค ๋ชจ๋ธ์˜ ํ’ˆ์งˆ์— ์ค‘์ ์„ ๋‘ก๋‹ˆ๋‹ค. 2. [Open LLM-Perf Leaderboard](https://huggingface.co/spaces/optimum/llm-perf-leaderboard)๋Š” LLM ์ฒ˜๋ฆฌ๋Ÿ‰์— ์ค‘์ ์„ ๋‘ก๋‹ˆ๋‹ค. ### ์ง€์—ฐ ์‹œ๊ฐ„ ๋ฐ ์ฒ˜๋ฆฌ๋Ÿ‰ [[latency-and-throughput]] 1. ๋ฉ”๋ชจ๋ฆฌ ์š”๊ตฌ ์‚ฌํ•ญ์„ ์ค„์ด๋ ค๋ฉด, ๋™์  ์–‘์žํ™”์— ๋Œ€ํ•œ [๊ฐ€์ด๋“œ](main_classes/quantization)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ### ๊ด€๋ จ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ [[related-libraries]] 1. [`text-generation-inference`](https://github.com/huggingface/text-generation-inference)๋Š” LLM์„ ์œ„ํ•œ ์‹ค์ œ ์šด์˜ ํ™˜๊ฒฝ์— ์ ํ•ฉํ•œ ์„œ๋ฒ„์ž…๋‹ˆ๋‹ค. 2. [`optimum`](https://github.com/huggingface/optimum)์€ ํŠน์ • ํ•˜๋“œ์›จ์–ด ์žฅ์น˜์—์„œ LLM์„ ์ตœ์ ํ™”ํ•˜๊ธฐ ์œ„ํ•ด ๐Ÿค— Transformers๋ฅผ ํ™•์žฅํ•œ ๊ฒƒ์ž…๋‹ˆ๋‹ค.
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/ko/perf_train_tpu_tf.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # TensorFlow๋กœ TPU์—์„œ ํ›ˆ๋ จํ•˜๊ธฐ[[training-on-tpu-with-tensorflow]] <Tip> ์ž์„ธํ•œ ์„ค๋ช…์ด ํ•„์š”ํ•˜์ง€ ์•Š๊ณ  ๋ฐ”๋กœ TPU ์ƒ˜ํ”Œ ์ฝ”๋“œ๋ฅผ ์‹œ์ž‘ํ•˜๊ณ  ์‹ถ๋‹ค๋ฉด [์šฐ๋ฆฌ์˜ TPU ์˜ˆ์ œ ๋…ธํŠธ๋ถ!](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb)์„ ํ™•์ธํ•˜์„ธ์š”. </Tip> ### TPU๊ฐ€ ๋ฌด์—‡์ธ๊ฐ€์š”?[[what-is-a-tpu]] TPU๋Š” **ํ…์„œ ์ฒ˜๋ฆฌ ์žฅ์น˜**์ž…๋‹ˆ๋‹ค. Google์—์„œ ์„ค๊ณ„ํ•œ ํ•˜๋“œ์›จ์–ด๋กœ, GPU์ฒ˜๋Ÿผ ์‹ ๊ฒฝ๋ง ๋‚ด์—์„œ ํ…์„œ ์—ฐ์‚ฐ์„ ๋”์šฑ ๋น ๋ฅด๊ฒŒ ์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ๋„คํŠธ์›Œํฌ ํ›ˆ๋ จ๊ณผ ์ถ”๋ก  ๋ชจ๋‘์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ Google์˜ ํด๋ผ์šฐ๋“œ ์„œ๋น„์Šค๋ฅผ ํ†ตํ•ด ์ด์šฉํ•  ์ˆ˜ ์žˆ์ง€๋งŒ, Google Colab๊ณผ Kaggle Kernel์„ ํ†ตํ•ด ์†Œ๊ทœ๋ชจ TPU๋ฅผ ๋ฌด๋ฃŒ๋กœ ์ง์ ‘ ์ด์šฉํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. [๐Ÿค— Transformers์˜ ๋ชจ๋“  Tensorflow ๋ชจ๋ธ์€ Keras ๋ชจ๋ธ](https://huggingface.co/blog/tensorflow-philosophy)์ด๊ธฐ ๋•Œ๋ฌธ์—, ์ด ๋ฌธ์„œ์—์„œ ๋‹ค๋ฃจ๋Š” ๋Œ€๋ถ€๋ถ„์˜ ๋ฉ”์†Œ๋“œ๋Š” ๋Œ€์ฒด๋กœ ๋ชจ๋“  Keras ๋ชจ๋ธ์„ ์œ„ํ•œ TPU ํ›ˆ๋ จ์— ์ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ํ•˜์ง€๋งŒ Transformer์™€ ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ HuggingFace ์ƒํƒœ๊ณ„(hug-o-system?)์— ํŠนํ™”๋œ ๋ช‡ ๊ฐ€์ง€ ์‚ฌํ•ญ์ด ์žˆ์œผ๋ฉฐ, ํ•ด๋‹น ์‚ฌํ•ญ์— ๋Œ€ํ•ด ์„ค๋ช…ํ•  ๋•Œ ๋ฐ˜๋“œ์‹œ ์–ธ๊ธ‰ํ•˜๋„๋ก ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ### ์–ด๋–ค ์ข…๋ฅ˜์˜ TPU๊ฐ€ ์žˆ๋‚˜์š”?[[what-kinds-of-tpu-are-available]] ์‹ ๊ทœ ์‚ฌ์šฉ์ž๋Š” TPU์˜ ๋ฒ”์œ„์™€ ๋‹ค์–‘ํ•œ ์ด์šฉ ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ๋งค์šฐ ํ˜ผ๋ž€์Šค๋Ÿฌ์›Œํ•˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์Šต๋‹ˆ๋‹ค. **TPU ๋…ธ๋“œ**์™€ **TPU VM**์˜ ์ฐจ์ด์ ์€ ๊ฐ€์žฅ ๋จผ์ € ์ดํ•ดํ•ด์•ผ ํ•  ํ•ต์‹ฌ์ ์ธ ๊ตฌ๋ถ„ ์‚ฌํ•ญ์ž…๋‹ˆ๋‹ค. **TPU ๋…ธ๋“œ**๋ฅผ ์‚ฌ์šฉํ•œ๋‹ค๋ฉด, ์‹ค์ œ๋กœ๋Š” ์›๊ฒฉ TPU๋ฅผ ๊ฐ„์ ‘์ ์œผ๋กœ ์ด์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋„คํŠธ์›Œํฌ์™€ ๋ฐ์ดํ„ฐ ํŒŒ์ดํ”„๋ผ์ธ์„ ์ดˆ๊ธฐํ™”ํ•œ ๋‹ค์Œ, ์ด๋ฅผ ์›๊ฒฉ ๋…ธ๋“œ๋กœ ์ „๋‹ฌํ•  ๋ณ„๋„์˜ VM์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. Google Colab์—์„œ TPU๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ, **TPU ๋…ธ๋“œ** ๋ฐฉ์‹์œผ๋กœ ์ด์šฉํ•˜๊ฒŒ ๋ฉ๋‹ˆ๋‹ค. TPU ๋…ธ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์€ ์ด๋ฅผ ์‚ฌ์šฉํ•˜์ง€ ์•Š๋Š” ์‚ฌ์šฉ์ž์—๊ฒŒ ์˜ˆ๊ธฐ์น˜ ์•Š์€ ํ˜„์ƒ์ด ๋ฐœ์ƒํ•˜๊ธฐ๋„ ํ•ฉ๋‹ˆ๋‹ค! ํŠนํžˆ, TPU๋Š” ํŒŒ์ด์ฌ ์ฝ”๋“œ๋ฅผ ์‹คํ–‰ํ•˜๋Š” ๊ธฐ๊ธฐ(machine)์™€ ๋ฌผ๋ฆฌ์ ์œผ๋กœ ๋‹ค๋ฅธ ์‹œ์Šคํ…œ์— ์žˆ๊ธฐ ๋•Œ๋ฌธ์— ๋กœ์ปฌ ๊ธฐ๊ธฐ์— ๋ฐ์ดํ„ฐ๋ฅผ ์ €์žฅํ•  ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค. ์ฆ‰, ์ปดํ“จํ„ฐ์˜ ๋‚ด๋ถ€ ์ €์žฅ์†Œ์—์„œ ๊ฐ€์ ธ์˜ค๋Š” ๋ฐ์ดํ„ฐ ํŒŒ์ดํ”„๋ผ์ธ์€ ์ ˆ๋Œ€ ์ž‘๋™ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค! ๋กœ์ปฌ ๊ธฐ๊ธฐ์— ๋ฐ์ดํ„ฐ๋ฅผ ์ €์žฅํ•˜๋Š” ๋Œ€์‹ ์—, ๋ฐ์ดํ„ฐ ํŒŒ์ดํ”„๋ผ์ธ์ด ์›๊ฒฉ TPU ๋…ธ๋“œ์—์„œ ์‹คํ–‰ ์ค‘์ผ ๋•Œ์—๋„ ๋ฐ์ดํ„ฐ ํŒŒ์ดํ”„๋ผ์ธ์ด ๊ณ„์† ์ด์šฉํ•  ์ˆ˜ ์žˆ๋Š” Google Cloud Storage์— ๋ฐ์ดํ„ฐ๋ฅผ ์ €์žฅํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. <Tip> ๋ฉ”๋ชจ๋ฆฌ์— ์žˆ๋Š” ๋ชจ๋“  ๋ฐ์ดํ„ฐ๋ฅผ `np.ndarray` ๋˜๋Š” `tf.Tensor`๋กœ ๋งž์ถœ ์ˆ˜ ์žˆ๋‹ค๋ฉด, Google Cloud Storage์— ์—…๋กœ๋“œํ•  ํ•„์š” ์—†์ด, Colab ๋˜๋Š” TPU ๋…ธ๋“œ๋ฅผ ์‚ฌ์šฉํ•ด์„œ ํ•ด๋‹น ๋ฐ์ดํ„ฐ์— `fit()` ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
</Tip> <Tip> **๐Ÿค—ํŠน์ˆ˜ํ•œ Hugging Face ํŒ๐Ÿค—:** TF ์ฝ”๋“œ ์˜ˆ์ œ์—์„œ ๋ณผ ์ˆ˜ ์žˆ๋Š” `Dataset.to_tf_dataset()` ๋ฉ”์†Œ๋“œ์™€ ๊ทธ ์ƒ์œ„ ๋ž˜ํผ(wrapper)์ธ `model.prepare_tf_dataset()`๋Š” ๋ชจ๋‘ TPU ๋…ธ๋“œ์—์„œ ์ž‘๋™ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๊ทธ ์ด์œ ๋Š” `tf.data.Dataset`์„ ์ƒ์„ฑํ•˜๋”๋ผ๋„ โ€œ์ˆœ์ˆ˜ํ•œโ€ `tf.data` ํŒŒ์ดํ”„๋ผ์ธ์ด ์•„๋‹ˆ๋ฉฐ `tf.numpy_function` ๋˜๋Š” `Dataset.from_generator()`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๊ธฐ๋ณธ HuggingFace `Dataset`์—์„œ ๋ฐ์ดํ„ฐ๋ฅผ ์ „์†กํ•˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ์ด HuggingFace `Dataset`๋Š” ๋กœ์ปฌ ๋””์Šคํฌ์— ์žˆ๋Š” ๋ฐ์ดํ„ฐ๋กœ ์ง€์›๋˜๋ฉฐ ์›๊ฒฉ TPU ๋…ธ๋“œ๊ฐ€ ์ฝ์„ ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค. </Tip> TPU๋ฅผ ์ด์šฉํ•˜๋Š” ๋‘ ๋ฒˆ์งธ ๋ฐฉ๋ฒ•์€ **TPU VM**์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. TPU VM์„ ์‚ฌ์šฉํ•  ๋•Œ, GPU VM์—์„œ ํ›ˆ๋ จํ•˜๋Š” ๊ฒƒ๊ณผ ๊ฐ™์ด TPU๊ฐ€ ์žฅ์ฐฉ๋œ ๊ธฐ๊ธฐ์— ์ง์ ‘ ์—ฐ๊ฒฐํ•ฉ๋‹ˆ๋‹ค. ํŠนํžˆ ๋ฐ์ดํ„ฐ ํŒŒ์ดํ”„๋ผ์ธ๊ณผ ๊ด€๋ จํ•˜์—ฌ, TPU VM์€ ๋Œ€์ฒด๋กœ ์ž‘์—…ํ•˜๊ธฐ ๋” ์‰ฝ์Šต๋‹ˆ๋‹ค. ์œ„์˜ ๋ชจ๋“  ๊ฒฝ๊ณ ๋Š” TPU VM์—๋Š” ํ•ด๋‹น๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค! ์ด ๋ฌธ์„œ๋Š” ์˜๊ฒฌ์ด ํฌํ•จ๋œ ๋ฌธ์„œ์ด๋ฉฐ, ์ €ํฌ์˜ ์˜๊ฒฌ์ด ์—ฌ๊ธฐ์— ์žˆ์Šต๋‹ˆ๋‹ค: **๊ฐ€๋Šฅํ•˜๋ฉด TPU ๋…ธ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์ง€ ๋งˆ์„ธ์š”.** TPU ๋…ธ๋“œ๋Š” TPU VM๋ณด๋‹ค ๋” ๋ณต์žกํ•˜๊ณ  ๋””๋ฒ„๊น…ํ•˜๊ธฐ๊ฐ€ ๋” ์–ด๋ ต์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ ํ–ฅํ›„์—๋Š” ์ง€์›๋˜์ง€ ์•Š์„ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์Šต๋‹ˆ๋‹ค. Google์˜ ์ตœ์‹  TPU์ธ TPUv4๋Š” TPU VM์œผ๋กœ๋งŒ ์ด์šฉํ•  ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ, TPU ๋…ธ๋“œ๋Š” ์ ์  ๋” "๊ตฌ์‹" ์ด์šฉ ๋ฐฉ๋ฒ•์ด ๋  ๊ฒƒ์œผ๋กœ ์ „๋ง๋ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ TPU ๋…ธ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” Colab๊ณผ Kaggle Kernel์—์„œ๋งŒ ๋ฌด๋ฃŒ TPU ์ด์šฉ์ด ๊ฐ€๋Šฅํ•œ ๊ฒƒ์œผ๋กœ ํ™•์ธ๋˜์–ด, ํ•„์š”ํ•œ ๊ฒฝ์šฐ ์ด๋ฅผ ๋‹ค๋ฃจ๋Š” ๋ฐฉ๋ฒ•์„ ์„ค๋ช…ํ•ด ๋“œ๋ฆฌ๊ฒ ์Šต๋‹ˆ๋‹ค! ์ด์— ๋Œ€ํ•œ ์ž์„ธํ•œ ์„ค๋ช…์ด ๋‹ด๊ธด ์ฝ”๋“œ ์ƒ˜ํ”Œ์€ [TPU ์˜ˆ์ œ ๋…ธํŠธ๋ถ](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb)์—์„œ ํ™•์ธํ•˜์‹œ๊ธฐ ๋ฐ”๋ž๋‹ˆ๋‹ค. ### ์–ด๋–ค ํฌ๊ธฐ์˜ TPU๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋‚˜์š”?[[what-sizes-of-tpu-are-available]] ๋‹จ์ผ TPU(v2-8/v3-8/v4-8)๋Š” 8๊ฐœ์˜ ๋ณต์ œ๋ณธ(replicas)์„ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. TPU๋Š” ์ˆ˜๋ฐฑ ๋˜๋Š” ์ˆ˜์ฒœ ๊ฐœ์˜ ๋ณต์ œ๋ณธ์„ ๋™์‹œ์— ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ๋Š” **pod**๋กœ ์กด์žฌํ•ฉ๋‹ˆ๋‹ค. ๋‹จ์ผ TPU๋ฅผ ํ•˜๋‚˜ ์ด์ƒ ์‚ฌ์šฉํ•˜์ง€๋งŒ ์ „์ฒด Pod๋ณด๋‹ค ์ ๊ฒŒ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ(์˜ˆ๋ฅผ ๋“ค๋ฉด, v3-32), TPU ๊ตฌ์„ฑ์„ **pod ์Šฌ๋ผ์ด์Šค**๋ผ๊ณ  ํ•ฉ๋‹ˆ๋‹ค. Colab์„ ํ†ตํ•ด ๋ฌด๋ฃŒ TPU์— ์ด์šฉํ•˜๋Š” ๊ฒฝ์šฐ, ๊ธฐ๋ณธ์ ์œผ๋กœ ๋‹จ์ผ v2-8 TPU๋ฅผ ์ œ๊ณต๋ฐ›์Šต๋‹ˆ๋‹ค. ### XLA์— ๋Œ€ํ•ด ๋“ค์–ด๋ณธ ์ ์ด ์žˆ์Šต๋‹ˆ๋‹ค. XLA๋ž€ ๋ฌด์—‡์ด๊ณ  TPU์™€ ์–ด๋–ค ๊ด€๋ จ์ด ์žˆ๋‚˜์š”?[[i-keep-hearing-about-this-xla-thing-whats-xla-and-how-does-it-relate-to-tpus]] XLA๋Š” ์ตœ์ ํ™” ์ปดํŒŒ์ผ๋Ÿฌ๋กœ, TensorFlow์™€ JAX์—์„œ ๋ชจ๋‘ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. JAX์—์„œ๋Š” ์œ ์ผํ•œ ์ปดํŒŒ์ผ๋Ÿฌ์ด์ง€๋งŒ, TensorFlow์—์„œ๋Š” ์„ ํƒ ์‚ฌํ•ญ์ž…๋‹ˆ๋‹ค(ํ•˜์ง€๋งŒ TPU์—์„œ๋Š” ํ•„์ˆ˜์ž…๋‹ˆ๋‹ค!). Keras ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•  ๋•Œ ์ด๋ฅผ ํ™œ์„ฑํ™”ํ•˜๋Š” ๊ฐ€์žฅ ์‰ฌ์šด ๋ฐฉ๋ฒ•์€ `jit_compile=True` ์ธ์ˆ˜๋ฅผ `model.compile()`์— ์ „๋‹ฌํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์˜ค๋ฅ˜๊ฐ€ ์—†๊ณ  ์„ฑ๋Šฅ์ด ์–‘ํ˜ธํ•˜๋‹ค๋ฉด, TPU๋กœ ์ „ํ™˜ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ๋‹ค๋Š” ์ข‹์€ ์‹ ํ˜ธ์ž…๋‹ˆ๋‹ค! TPU์—์„œ ๋””๋ฒ„๊น…ํ•˜๋Š” ๊ฒƒ์€ ๋Œ€๊ฐœ CPU/GPU๋ณด๋‹ค ์กฐ๊ธˆ ๋” ์–ด๋ ต๊ธฐ ๋•Œ๋ฌธ์—, TPU์—์„œ ์‹œ๋„ํ•˜๊ธฐ ์ „์— ๋จผ์ € XLA๋กœ CPU/GPU์—์„œ ์ฝ”๋“œ๋ฅผ ์‹คํ–‰ํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ๋ฌผ๋ก  ์˜ค๋ž˜ ํ•™์Šตํ•  ํ•„์š”๋Š” ์—†์Šต๋‹ˆ๋‹ค. ์ฆ‰, ๋ชจ๋ธ๊ณผ ๋ฐ์ดํ„ฐ ํŒŒ์ดํ”„๋ผ์ธ์ด ์˜ˆ์ƒ๋Œ€๋กœ ์ž‘๋™ํ•˜๋Š”์ง€ ํ™•์ธํ•˜๊ธฐ ์œ„ํ•ด ๋ช‡ ๋‹จ๊ณ„๋งŒ ๊ฑฐ์น˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. <Tip> XLA๋กœ ์ปดํŒŒ์ผ๋œ ์ฝ”๋“œ๋Š” ๋Œ€์ฒด๋กœ ๋” ๋น ๋ฆ…๋‹ˆ๋‹ค. 
๋”ฐ๋ผ์„œ TPU์—์„œ ์‹คํ–‰ํ•  ๊ณ„ํš์ด ์—†๋”๋ผ๋„, `jit_compile=True`๋ฅผ ์ถ”๊ฐ€ํ•˜๋ฉด ์„ฑ๋Šฅ์ด ํ–ฅ์ƒ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ XLA ํ˜ธํ™˜์„ฑ์— ๋Œ€ํ•œ ์•„๋ž˜ ์ฃผ์˜ ์‚ฌํ•ญ์„ ๋ฐ˜๋“œ์‹œ ํ™•์ธํ•˜์„ธ์š”! </Tip> <Tip warning={true}> **๋ผˆ์•„ํ”ˆ ๊ฒฝํ—˜์—์„œ ์–ป์€ ํŒ:** `jit_compile=True`๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ์†๋„๋ฅผ ๋†’์ด๊ณ  CPU/GPU ์ฝ”๋“œ๊ฐ€ XLA์™€ ํ˜ธํ™˜๋˜๋Š”์ง€ ๊ฒ€์ฆํ•  ์ˆ˜ ์žˆ๋Š” ์ข‹์€ ๋ฐฉ๋ฒ•์ด์ง€๋งŒ, ์‹ค์ œ TPU์—์„œ ํ›ˆ๋ จํ•  ๋•Œ ๊ทธ๋Œ€๋กœ ๋‚จ๊ฒจ๋‘๋ฉด ๋งŽ์€ ๋ฌธ์ œ๋ฅผ ์ดˆ๋ž˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. XLA ์ปดํŒŒ์ผ์€ TPU์—์„œ ์•”์‹œ์ ์œผ๋กœ ์ด๋ค„์ง€๋ฏ€๋กœ, ์‹ค์ œ TPU์—์„œ ์ฝ”๋“œ๋ฅผ ์‹คํ–‰ํ•˜๊ธฐ ์ „์— ํ•ด๋‹น ์ค„์„ ์ œ๊ฑฐํ•˜๋Š” ๊ฒƒ์„ ์žŠ์ง€ ๋งˆ์„ธ์š”! </Tip> ### ์ œ XLA ๋ชจ๋ธ๊ณผ ํ˜ธํ™˜ํ•˜๋ ค๋ฉด ์–ด๋–ป๊ฒŒ ํ•ด์•ผ ํ•˜๋‚˜์š”?[[how-do-i-make-my-model-xla-compatible]] ๋Œ€๋ถ€๋ถ„์˜ ๊ฒฝ์šฐ, ์—ฌ๋Ÿฌ๋ถ„์˜ ์ฝ”๋“œ๋Š” ์ด๋ฏธ XLA์™€ ํ˜ธํ™˜๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค! ๊ทธ๋Ÿฌ๋‚˜ ํ‘œ์ค€ TensorFlow์—์„œ ์ž‘๋™ํ•˜์ง€๋งŒ, XLA์—์„œ๋Š” ์ž‘๋™ํ•˜์ง€ ์•Š๋Š” ๋ช‡ ๊ฐ€์ง€ ์‚ฌํ•ญ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฅผ ์•„๋ž˜ ์„ธ ๊ฐ€์ง€ ํ•ต์‹ฌ ๊ทœ์น™์œผ๋กœ ๊ฐ„์ถ”๋ ธ์Šต๋‹ˆ๋‹ค: <Tip> **ํŠน์ˆ˜ํ•œ HuggingFace ํŒ๐Ÿค—:** ์ €ํฌ๋Š” TensorFlow ๋ชจ๋ธ๊ณผ ์†์‹ค ํ•จ์ˆ˜๋ฅผ XLA์™€ ํ˜ธํ™˜๋˜๋„๋ก ์žฌ์ž‘์„ฑํ•˜๋Š” ๋ฐ ๋งŽ์€ ๋…ธ๋ ฅ์„ ๊ธฐ์šธ์˜€์Šต๋‹ˆ๋‹ค. ์ €ํฌ์˜ ๋ชจ๋ธ๊ณผ ์†์‹ค ํ•จ์ˆ˜๋Š” ๋Œ€๊ฐœ ๊ธฐ๋ณธ์ ์œผ๋กœ ๊ทœ์น™ #1๊ณผ #2๋ฅผ ๋”ฐ๋ฅด๋ฏ€๋กœ `transformers` ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ, ์ด๋ฅผ ๊ฑด๋„ˆ๋›ธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์ž์ฒด ๋ชจ๋ธ๊ณผ ์†์‹ค ํ•จ์ˆ˜๋ฅผ ์ž‘์„ฑํ•  ๋•Œ๋Š” ์ด๋Ÿฌํ•œ ๊ทœ์น™์„ ์žŠ์ง€ ๋งˆ์„ธ์š”! </Tip> #### XLA ๊ทœ์น™ #1: ์ฝ”๋“œ์—์„œ โ€œ๋ฐ์ดํ„ฐ ์ข…์† ์กฐ๊ฑด๋ฌธโ€์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค[[xla-rule-1-your-code-cannot-have-datadependent-conditionals]] ์–ด๋–ค `if`๋ฌธ๋„ `tf.Tensor` ๋‚ด๋ถ€์˜ ๊ฐ’์— ์ข…์†๋  ์ˆ˜ ์—†๋‹ค๋Š” ๊ฒƒ์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ์ด ์ฝ”๋“œ ๋ธ”๋ก์€ XLA๋กœ ์ปดํŒŒ์ผํ•  ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค! ```python if tf.reduce_sum(tensor) > 10: tensor = tensor / 2.0 ``` ์ฒ˜์Œ์—๋Š” ๋งค์šฐ ์ œํ•œ์ ์œผ๋กœ ๋ณด์ผ ์ˆ˜ ์žˆ์ง€๋งŒ, ๋Œ€๋ถ€๋ถ„์˜ ์‹ ๊ฒฝ๋ง ์ฝ”๋“œ์—์„œ๋Š” ์ด๋ฅผ ์ˆ˜ํ–‰ํ•  ํ•„์š”๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค. `tf.cond`๋ฅผ ์‚ฌ์šฉํ•˜๊ฑฐ๋‚˜([์—ฌ๊ธฐ](https://www.tensorflow.org/api_docs/python/tf/cond) ๋ฌธ์„œ๋ฅผ ์ฐธ์กฐ), ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์กฐ๊ฑด๋ฌธ์„ ์ œ๊ฑฐํ•˜๊ณ  ๋Œ€์‹  ์ง€ํ‘œ ๋ณ€์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ์˜๋ฆฌํ•œ ์ˆ˜ํ•™ ํŠธ๋ฆญ์„ ์ฐพ์•„๋‚ด์–ด ์ด ์ œํ•œ์„ ์šฐํšŒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```python sum_over_10 = tf.cast(tf.reduce_sum(tensor) > 10, tf.float32) tensor = tensor / (1.0 + sum_over_10) ``` ์ด ์ฝ”๋“œ๋Š” ์œ„์˜ ์ฝ”๋“œ์™€ ์ •ํ™•ํžˆ ๋™์ผํ•œ ํšจ๊ณผ๋ฅผ ๊ตฌํ˜„ํ•˜์ง€๋งŒ, ์กฐ๊ฑด๋ฌธ์„ ์ œ๊ฑฐํ•˜์—ฌ ๋ฌธ์ œ ์—†์ด XLA๋กœ ์ปดํŒŒ์ผ๋˜๋„๋ก ํ•ฉ๋‹ˆ๋‹ค! #### XLA ๊ทœ์น™ #2: ์ฝ”๋“œ์—์„œ "๋ฐ์ดํ„ฐ ์ข…์† ํฌ๊ธฐ"๋ฅผ ๊ฐ€์งˆ ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค[[xla-rule-2-your-code-cannot-have-datadependent-shapes]] ์ฝ”๋“œ์—์„œ ๋ชจ๋“  `tf.Tensor` ๊ฐ์ฒด์˜ ํฌ๊ธฐ๊ฐ€ ํ•ด๋‹น ๊ฐ’์— ์ข…์†๋  ์ˆ˜ ์—†๋‹ค๋Š” ๊ฒƒ์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, `tf.unique` ํ•จ์ˆ˜๋Š” ์ž…๋ ฅ์—์„œ ๊ฐ ๊ณ ์œ  ๊ฐ’์˜ ์ธ์Šคํ„ด์Šค ํ•˜๋‚˜๋ฅผ ํฌํ•จํ•˜๋Š” `tensor`๋ฅผ ๋ฐ˜ํ™˜ํ•˜๊ธฐ ๋•Œ๋ฌธ์— XLA๋กœ ์ปดํŒŒ์ผํ•  ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค. ์ด ์ถœ๋ ฅ์˜ ํฌ๊ธฐ๋Š” ์ž…๋ ฅ `Tensor`๊ฐ€ ์–ผ๋งˆ๋‚˜ ๋ฐ˜๋ณต์ ์ธ์ง€์— ๋”ฐ๋ผ ๋ถ„๋ช…ํžˆ ๋‹ฌ๋ผ์งˆ ๊ฒƒ์ด๋ฏ€๋กœ, XLA๋Š” ์ด๋ฅผ ์ฒ˜๋ฆฌํ•˜์ง€ ๋ชปํ•ฉ๋‹ˆ๋‹ค! ์ผ๋ฐ˜์ ์œผ๋กœ, ๋Œ€๋ถ€๋ถ„์˜ ์‹ ๊ฒฝ๋ง ์ฝ”๋“œ๋Š” ๊ธฐ๋ณธ๊ฐ’์œผ๋กœ ๊ทœ์น™ 2๋ฅผ ๋”ฐ๋ฆ…๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ๋ฌธ์ œ๊ฐ€ ๋˜๋Š” ๋ช‡ ๊ฐ€์ง€ ๋Œ€ํ‘œ์ ์ธ ์‚ฌ๋ก€๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ฐ€์žฅ ํ”ํ•œ ์‚ฌ๋ก€ ์ค‘ ํ•˜๋‚˜๋Š” **๋ ˆ์ด๋ธ” ๋งˆ์Šคํ‚น**์„ ์‚ฌ์šฉํ•˜์—ฌ ์†์‹ค(loss)์„ ๊ณ„์‚ฐํ•  ๋•Œ, ํ•ด๋‹น ์œ„์น˜๋ฅผ ๋ฌด์‹œํ•˜๋„๋ก ๋‚˜ํƒ€๋‚ด๊ธฐ ์œ„ํ•ด ๋ ˆ์ด๋ธ”์„ ์Œ์ˆ˜ ๊ฐ’์œผ๋กœ ์„ค์ •ํ•˜๋Š” ๊ฒฝ์šฐ์ž…๋‹ˆ๋‹ค. 
๋ ˆ์ด๋ธ” ๋งˆ์Šคํ‚น์„ ์ง€์›ํ•˜๋Š” NumPy๋‚˜ PyTorch ์†์‹ค ํ•จ์ˆ˜๋ฅผ ๋ณด๋ฉด [๋ถˆ ์ธ๋ฑ์‹ฑ](https://numpy.org/doc/stable/user/basics.indexing.html#boolean-array-indexing)์„ ์‚ฌ์šฉํ•˜๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์€ ์ฝ”๋“œ๋ฅผ ์ž์ฃผ ์ ‘ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```python label_mask = labels >= 0 masked_outputs = outputs[label_mask] masked_labels = labels[label_mask] loss = compute_loss(masked_outputs, masked_labels) mean_loss = torch.mean(loss) ``` ์ด ์ฝ”๋“œ๋Š” NumPy๋‚˜ PyTorch์—์„œ๋Š” ๋ฌธ์ œ ์—†์ด ์ž‘๋™ํ•˜์ง€๋งŒ, XLA์—์„œ๋Š” ์†์ƒ๋ฉ๋‹ˆ๋‹ค! ์™œ ๊ทธ๋Ÿด๊นŒ์š”? ์–ผ๋งˆ๋‚˜ ๋งŽ์€ ์œ„์น˜๊ฐ€ ๋งˆ์Šคํ‚น๋˜๋Š”์ง€์— ๋”ฐ๋ผ `masked_outputs`์™€ `masked_labels`์˜ ํฌ๊ธฐ๊ฐ€ ๋‹ฌ๋ผ์ ธ์„œ, **๋ฐ์ดํ„ฐ ์ข…์† ํฌ๊ธฐ**๊ฐ€ ๋˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ๊ทœ์น™ #1๊ณผ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ, ์ด ์ฝ”๋“œ๋ฅผ ๋‹ค์‹œ ์ž‘์„ฑํ•˜๋ฉด ๋ฐ์ดํ„ฐ ์ข…์†์  ๋ชจ์–‘ ํฌ๊ธฐ๊ฐ€ ์ •ํ™•ํžˆ ๋™์ผํ•œ ์ถœ๋ ฅ์„ ์‚ฐ์ถœํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```python label_mask = tf.cast(labels >= 0, tf.float32) loss = compute_loss(outputs, labels) loss = loss * label_mask # Set negative label positions to 0 mean_loss = tf.reduce_sum(loss) / tf.reduce_sum(label_mask) ``` ์—ฌ๊ธฐ์„œ, ๋ชจ๋“  ์œ„์น˜์— ๋Œ€ํ•œ ์†์‹ค์„ ๊ณ„์‚ฐํ•˜์ง€๋งŒ, ํ‰๊ท ์„ ๊ณ„์‚ฐํ•  ๋•Œ ๋ถ„์ž์™€ ๋ถ„๋ชจ ๋ชจ๋‘์—์„œ ๋งˆ์Šคํฌ๋œ ์œ„์น˜๋ฅผ 0์œผ๋กœ ์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ๋ฐ์ดํ„ฐ ์ข…์† ํฌ๊ธฐ๋ฅผ ๋ฐฉ์ง€ํ•˜๊ณ  XLA ํ˜ธํ™˜์„ฑ์„ ์œ ์ง€ํ•˜๋ฉด์„œ ์ฒซ ๋ฒˆ์งธ ๋ธ”๋ก๊ณผ ์ •ํ™•ํžˆ ๋™์ผํ•œ ๊ฒฐ๊ณผ๋ฅผ ์‚ฐ์ถœํ•ฉ๋‹ˆ๋‹ค. ๊ทœ์น™ #1์—์„œ์™€ ๋™์ผํ•œ ํŠธ๋ฆญ์„ ์‚ฌ์šฉํ•˜์—ฌ `tf.bool`์„ `tf.float32`๋กœ ๋ณ€ํ™˜ํ•˜๊ณ  ์ด๋ฅผ ์ง€ํ‘œ ๋ณ€์ˆ˜๋กœ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ํ•ด๋‹น ํŠธ๋ฆญ์€ ๋งค์šฐ ์œ ์šฉํ•˜๋ฉฐ, ์ž์ฒด ์ฝ”๋“œ๋ฅผ XLA๋กœ ๋ณ€ํ™˜ํ•ด์•ผ ํ•  ๊ฒฝ์šฐ ๊ธฐ์–ตํ•ด ๋‘์„ธ์š”! #### XLA ๊ทœ์น™ #3: XLA๋Š” ๊ฐ๊ธฐ ๋‹ค๋ฅธ ์ž…๋ ฅ ํฌ๊ธฐ๊ฐ€ ๋‚˜ํƒ€๋‚  ๋•Œ๋งˆ๋‹ค ๋ชจ๋ธ์„ ๋‹ค์‹œ ์ปดํŒŒ์ผํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค[[xla-rule-3-xla-will-need-to-recompile-your-model-for-every-different-input-shape-it-sees]] ์ด๊ฒƒ์€ ๊ฐ€์žฅ ํฐ ๋ฌธ์ œ์ž…๋‹ˆ๋‹ค. ์ž…๋ ฅ ํฌ๊ธฐ๊ฐ€ ๋งค์šฐ ๊ฐ€๋ณ€์ ์ธ ๊ฒฝ์šฐ, XLA๋Š” ๋ชจ๋ธ์„ ๋ฐ˜๋ณตํ•ด์„œ ๋‹ค์‹œ ์ปดํŒŒ์ผํ•ด์•ผ ํ•˜๋ฏ€๋กœ ์„ฑ๋Šฅ์— ํฐ ๋ฌธ์ œ๊ฐ€ ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๋ฌธ์ œ๋Š” ํ† ํฐํ™” ํ›„ ์ž…๋ ฅ ํ…์ŠคํŠธ์˜ ๊ธธ์ด๊ฐ€ ๊ฐ€๋ณ€์ ์ธ NLP ๋ชจ๋ธ์—์„œ ์ฃผ๋กœ ๋ฐœ์ƒํ•ฉ๋‹ˆ๋‹ค. ๋‹ค๋ฅธ ๋ชจ๋‹ฌ๋ฆฌํ‹ฐ์—์„œ๋Š” ์ •์  ํฌ๊ธฐ๊ฐ€ ๋” ํ”ํ•˜๋ฉฐ, ํ•ด๋‹น ๊ทœ์น™์ด ํ›จ์”ฌ ๋œ ๋ฌธ์ œ์‹œ ๋ฉ๋‹ˆ๋‹ค. ๊ทœ์น™ #3์„ ์–ด๋–ป๊ฒŒ ์šฐํšŒํ•  ์ˆ˜ ์žˆ์„๊นŒ์š”? ํ•ต์‹ฌ์€ **ํŒจ๋”ฉ**์ž…๋‹ˆ๋‹ค. ๋ชจ๋“  ์ž…๋ ฅ์„ ๋™์ผํ•œ ๊ธธ์ด๋กœ ํŒจ๋”ฉํ•œ ๋‹ค์Œ, `attention_mask`๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ์–ด๋–ค XLA ๋ฌธ์ œ๋„ ์—†์ด ๊ฐ€๋ณ€ ํฌ๊ธฐ์—์„œ ๊ฐ€์ ธ์˜จ ๊ฒƒ๊ณผ ๋™์ผํ•œ ๊ฒฐ๊ณผ๋ฅผ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ๊ณผ๋„ํ•œ ํŒจ๋”ฉ์€ ์‹ฌ๊ฐํ•œ ์†๋„ ์ €ํ•˜๋ฅผ ์•ผ๊ธฐํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ชจ๋“  ์ƒ˜ํ”Œ์„ ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ํŒจ๋”ฉํ•˜๋ฉด, ๋ฌดํ•œํ•œ ํŒจ๋”ฉ ํ† ํฐ์œผ๋กœ ๊ตฌ์„ฑ๋œ ๋ฐฐ์น˜๊ฐ€ ์ƒ์„ฑ๋˜์–ด ๋งŽ์€ ์—ฐ์‚ฐ๊ณผ ๋ฉ”๋ชจ๋ฆฌ๊ฐ€ ๋‚ญ๋น„๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ์ด ๋ฌธ์ œ์— ๋Œ€ํ•œ ์™„๋ฒฝํ•œ ํ•ด๊ฒฐ์ฑ…์€ ์—†์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ, ๋ช‡ ๊ฐ€์ง€ ํŠธ๋ฆญ์„ ์‹œ๋„ํ•ด๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•œ ๊ฐ€์ง€ ์œ ์šฉํ•œ ํŠธ๋ฆญ์€ **์ƒ˜ํ”Œ ๋ฐฐ์น˜๋ฅผ 32 ๋˜๋Š” 64 ํ† ํฐ๊ณผ ๊ฐ™์€ ์ˆซ์ž์˜ ๋ฐฐ์ˆ˜๊นŒ์ง€ ํŒจ๋”ฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค.** ์ด๋Š” ํ† ํฐ ์ˆ˜๊ฐ€ ์†Œํญ ์ฆ๊ฐ€ํ•˜์ง€๋งŒ, ๋ชจ๋“  ์ž…๋ ฅ ํฌ๊ธฐ๊ฐ€ 32 ๋˜๋Š” 64์˜ ๋ฐฐ์ˆ˜์—ฌ์•ผ ํ•˜๊ธฐ ๋•Œ๋ฌธ์— ๊ณ ์œ ํ•œ ์ž…๋ ฅ ํฌ๊ธฐ์˜ ์ˆ˜๊ฐ€ ๋Œ€ํญ ์ค„์–ด๋“ญ๋‹ˆ๋‹ค. ๊ณ ์œ ํ•œ ์ž…๋ ฅ ํฌ๊ธฐ๊ฐ€ ์ ๋‹ค๋Š” ๊ฒƒ์€ XLA ์ปดํŒŒ์ผ ํšŸ์ˆ˜๊ฐ€ ์ ์–ด์ง„๋‹ค๋Š” ๊ฒƒ์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค! <Tip> **๐Ÿค—ํŠน์ˆ˜ํ•œ HuggingFace ํŒ๐Ÿค—:** ํ† ํฌ๋‚˜์ด์ €์™€ ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ์— ๋„์›€์ด ๋  ์ˆ˜ ์žˆ๋Š” ๋ฉ”์†Œ๋“œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. 
ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๋ถˆ๋Ÿฌ์˜ฌ ๋•Œ `padding="max_length"` ๋˜๋Š” `padding="longest"`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํŒจ๋”ฉ๋œ ๋ฐ์ดํ„ฐ๋ฅผ ์ถœ๋ ฅํ•˜๋„๋ก ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ† ํฌ๋‚˜์ด์ €์™€ ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ๋Š” ๋‚˜ํƒ€๋‚˜๋Š” ๊ณ ์œ ํ•œ ์ž…๋ ฅ ํฌ๊ธฐ์˜ ์ˆ˜๋ฅผ ์ค„์ด๊ธฐ ์œ„ํ•ด ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” `pad_to_multiple_of` ์ธ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค! </Tip> ### ์‹ค์ œ TPU๋กœ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๋ ค๋ฉด ์–ด๋–ป๊ฒŒ ํ•ด์•ผ ํ•˜๋‚˜์š”?[[how-do-i-actually-train-my-model-on-tpu]] ํ›ˆ๋ จ์ด XLA์™€ ํ˜ธํ™˜๋˜๊ณ  (TPU ๋…ธ๋“œ/Colab์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ) ๋ฐ์ดํ„ฐ ์„ธํŠธ๊ฐ€ ์ ์ ˆํ•˜๊ฒŒ ์ค€๋น„๋˜์—ˆ๋‹ค๋ฉด, TPU์—์„œ ์‹คํ–‰ํ•˜๋Š” ๊ฒƒ์€ ๋†€๋ž๋„๋ก ์‰ฝ์Šต๋‹ˆ๋‹ค! ์ฝ”๋“œ์—์„œ ๋ช‡ ์ค„๋งŒ ์ถ”๊ฐ€ํ•˜์—ฌ, TPU๋ฅผ ์ดˆ๊ธฐํ™”ํ•˜๊ณ  ๋ชจ๋ธ๊ณผ ๋ฐ์ดํ„ฐ ์„ธํŠธ๊ฐ€ `TPUStrategy` ๋ฒ”์œ„ ๋‚ด์— ์ƒ์„ฑ๋˜๋„๋ก ๋ณ€๊ฒฝํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. [์šฐ๋ฆฌ์˜ TPU ์˜ˆ์ œ ๋…ธํŠธ๋ถ](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb)์„ ์ฐธ์กฐํ•˜์—ฌ ์‹ค์ œ๋กœ ์ž‘๋™ํ•˜๋Š” ๋ชจ์Šต์„ ํ™•์ธํ•ด ๋ณด์„ธ์š”! ### ์š”์•ฝ[[summary]] ์—ฌ๊ธฐ์— ๋งŽ์€ ๋‚ด์šฉ์ด ํฌํ•จ๋˜์–ด ์žˆ์œผ๋ฏ€๋กœ, TPU ํ›ˆ๋ จ์„ ์œ„ํ•œ ๋ชจ๋ธ์„ ์ค€๋น„ํ•  ๋•Œ ๋”ฐ๋ฅผ ์ˆ˜ ์žˆ๋Š” ๊ฐ„๋žตํ•œ ์ฒดํฌ๋ฆฌ์ŠคํŠธ๋กœ ์š”์•ฝํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: - ์ฝ”๋“œ๊ฐ€ XLA์˜ ์„ธ ๊ฐ€์ง€ ๊ทœ์น™์„ ๋”ฐ๋ฅด๋Š”์ง€ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค. - CPU/GPU์—์„œ `jit_compile=True`๋กœ ๋ชจ๋ธ์„ ์ปดํŒŒ์ผํ•˜๊ณ  XLA๋กœ ํ›ˆ๋ จํ•  ์ˆ˜ ์žˆ๋Š”์ง€ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค. - ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋ฉ”๋ชจ๋ฆฌ์— ๊ฐ€์ ธ์˜ค๊ฑฐ๋‚˜ TPU ํ˜ธํ™˜ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๊ฐ€์ ธ์˜ค๋Š” ๋ฐฉ์‹์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค([๋…ธํŠธ๋ถ](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb) ์ฐธ์กฐ) - ์ฝ”๋“œ๋ฅผ Colab(accelerator๊ฐ€ โ€œTPUโ€๋กœ ์„ค์ •๋จ) ๋˜๋Š” Google Cloud์˜ TPU VM์œผ๋กœ ๋งˆ์ด๊ทธ๋ ˆ์ด์…˜ํ•ฉ๋‹ˆ๋‹ค. - TPU ์ดˆ๊ธฐํ™” ์ฝ”๋“œ๋ฅผ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค([๋…ธํŠธ๋ถ](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb) ์ฐธ์กฐ) - `TPUStrategy`๋ฅผ ์ƒ์„ฑํ•˜๊ณ  ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๊ฐ€์ ธ์˜ค๋Š” ๊ฒƒ๊ณผ ๋ชจ๋ธ ์ƒ์„ฑ์ด `strategy.scope()` ๋‚ด์— ์žˆ๋Š”์ง€ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค([๋…ธํŠธ๋ถ](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb) ์ฐธ์กฐ) - TPU๋กœ ์ด๋™ํ•  ๋•Œ `jit_compile=True`๋ฅผ ๋‹ค์‹œ ์„ค์ •ํ•˜๋Š” ๊ฒƒ์„ ์žŠ์ง€ ๋งˆ์„ธ์š”! - ๐Ÿ™๐Ÿ™๐Ÿ™๐Ÿฅบ๐Ÿฅบ๐Ÿฅบ - model.fit()์„ ๋ถˆ๋Ÿฌ์˜ต๋‹ˆ๋‹ค. - ์—ฌ๋Ÿฌ๋ถ„์ด ํ•ด๋ƒˆ์Šต๋‹ˆ๋‹ค!
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/ko/perf_hardware.md
<!--- Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ํ›ˆ๋ จ์šฉ ์‚ฌ์šฉ์ž ๋งž์ถคํ˜• ํ•˜๋“œ์›จ์–ด [[custom-hardware-for-training]] ๋ชจ๋ธ ํ›ˆ๋ จ๊ณผ ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ํ•˜๋“œ์›จ์–ด๋Š” ์„ฑ๋Šฅ์— ํฐ ์˜ํ–ฅ์„ ๋ฏธ์น  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. GPU์— ๋Œ€ํ•ด ์ž์„ธํžˆ ์•Œ์•„๋ณด๋ ค๋ฉด, Tim Dettmer์˜ ํ›Œ๋ฅญํ•œ ๋ธ”๋กœ๊ทธ ํฌ์ŠคํŠธ๋ฅผ ํ™•์ธํ•ด๋ณด์„ธ์š”. [๋ธ”๋กœ๊ทธ ํฌ์ŠคํŠธ ๋งํฌ](https://timdettmers.com/2020/09/07/which-gpu-for-deep-learning/) (์˜์–ด๋กœ ์ž‘์„ฑ๋จ). GPU ์„ค์ •์— ๋Œ€ํ•œ ์‹ค์šฉ์ ์ธ ์กฐ์–ธ์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ## GPU [[gpu]] ๋” ํฐ ๋ชจ๋ธ์„ ํ›ˆ๋ จ์‹œํ‚ฌ ๋•Œ๋Š” ๊ธฐ๋ณธ์ ์œผ๋กœ ์„ธ ๊ฐ€์ง€ ์˜ต์…˜์ด ์žˆ์Šต๋‹ˆ๋‹ค: - ๋” ํฐ GPU - ๋” ๋งŽ์€ GPU - ๋” ๋งŽ์€ CPU ๋ฐ NVMe ([DeepSpeed-Infinity](../en/main_classes/deepspeed#nvme-support)๋ฅผ ํ†ตํ•œ ์˜คํ”„๋กœ๋“œ(offload)) ์šฐ์„ , ํ•˜๋‚˜์˜ GPU๋งŒ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ๋ถ€ํ„ฐ ์‹œ์ž‘ํ•ด๋ด…์‹œ๋‹ค. ### ์ „์› ๊ณต๊ธ‰๊ณผ ๋ƒ‰๊ฐ [[power-and-cooling]] ๋น„์‹ผ ๊ณ ์„ฑ๋Šฅ GPU๋ฅผ ๊ตฌ๋งคํ•œ ๊ฒฝ์šฐ, ์˜ฌ๋ฐ”๋ฅธ ์ „์› ๊ณต๊ธ‰๊ณผ ์ถฉ๋ถ„ํ•œ ๋ƒ‰๊ฐ์„ ์ œ๊ณตํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. **์ „์› ๊ณต๊ธ‰**: ์ผ๋ถ€ ๊ณ ์„ฑ๋Šฅ ์†Œ๋น„์ž์šฉ GPU๋Š” 2๊ฐœ ํ˜น์€ ๊ฐ€๋”๊ฐ€๋‹ค 3๊ฐœ์˜ PCI-E 8ํ•€ ์ „์› ์†Œ์ผ“์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์นด๋“œ์— ์žˆ๋Š” ์†Œ์ผ“ ์ˆ˜๋งŒํผ ๋…๋ฆฝ์ ์ธ 12V PCI-E 8ํ•€ ์ผ€์ด๋ธ”์ด ์—ฐ๊ฒฐ๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”. ๊ฐ™์€ ์ผ€์ด๋ธ”์˜ ํ•œ์ชฝ ๋์— ์žˆ๋Š” 2๊ฐœ์˜ ์Šคํ”Œ๋ฆฟ(๋˜๋Š” ํ”ผ๊ทธํ…Œ์ผ(pigtail) ์ผ€์ด๋ธ”)์„ ์‚ฌ์šฉํ•˜์ง€ ๋งˆ์„ธ์š”. ์ฆ‰, GPU์— 2๊ฐœ์˜ ์†Œ์ผ“์ด ์žˆ๋‹ค๋ฉด, PSU(์ „์› ๊ณต๊ธ‰ ์žฅ์น˜)์—์„œ ์นด๋“œ๋กœ ์—ฐ๊ฒฐ๋˜๋Š” 2๊ฐœ์˜ PCI-E 8ํ•€ ์ผ€์ด๋ธ”์ด ํ•„์š”ํ•˜๋ฉฐ, ๋์— 2๊ฐœ์˜ PCI-E 8ํ•€ ์ปค๋„ฅํ„ฐ๊ฐ€ ์žˆ๋Š” ์ผ€์ด๋ธ”์ด ํ•„์š”ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค! ๊ทธ๋ ‡์ง€ ์•Š์œผ๋ฉด ์นด๋“œ์˜ ์ „์ฒด ์„ฑ๋Šฅ์„ ์ œ๋Œ€๋กœ ๋ฐœํœ˜ํ•˜์ง€ ๋ชปํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ฐ๊ฐ์˜ PCI-E 8ํ•€ ์ „์› ์ผ€์ด๋ธ”์€ PSU ์ชฝ์˜ 12V ๋ ˆ์ผ์— ์—ฐ๊ฒฐ๋˜์–ด์•ผ ํ•˜๋ฉฐ ์ตœ๋Œ€ 150W์˜ ์ „๋ ฅ์„ ๊ณต๊ธ‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ผ๋ถ€ ๋‹ค๋ฅธ GPU๋Š” PCI-E 12ํ•€ ์ปค๋„ฅํ„ฐ๋ฅผ ์‚ฌ์šฉํ•˜๋ฉฐ, ์ด๋Ÿฌํ•œ ์ปค๋„ฅํ„ฐ๋Š” ์ตœ๋Œ€ 500W-600W์˜ ์ „๋ ฅ์„ ๊ณต๊ธ‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ €๊ฐ€ํ˜• GPU๋Š” 6ํ•€ ์ปค๋„ฅํ„ฐ๋ฅผ ์‚ฌ์šฉํ•˜๋ฉฐ, ์ตœ๋Œ€ 75W์˜ ์ „๋ ฅ์„ ๊ณต๊ธ‰ํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ GPU๊ฐ€ ์•ˆ์ •์ ์ธ ์ „์••์„ ๋ฐ›์„ ์ˆ˜ ์žˆ๋„๋ก ๊ณ ๊ธ‰ PSU๋ฅผ ์„ ํƒํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ผ๋ถ€ ์ €ํ’ˆ์งˆ์˜ PSU๋Š” GPU๊ฐ€ ์ตœ๊ณ  ์„ฑ๋Šฅ์œผ๋กœ ๋™์ž‘ํ•˜๊ธฐ ์œ„ํ•ด ํ•„์š”ํ•œ ์ „์••์„ ์•ˆ์ •์ ์œผ๋กœ ๊ณต๊ธ‰ํ•˜์ง€ ๋ชปํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฌผ๋ก , PSU๋Š” GPU์— ์ „์›์„ ๊ณต๊ธ‰ํ•˜๊ธฐ์— ์ถฉ๋ถ„ํ•œ ์—ฌ๋ถ„์˜ ์ „๋ ฅ ์šฉ๋Ÿ‰์„ ๊ฐ€์ ธ์•ผ ํ•ฉ๋‹ˆ๋‹ค. **๋ƒ‰๊ฐ**: GPU๊ฐ€ ๊ณผ์—ด๋˜๋ฉด ์„ฑ๋Šฅ์ด ์ €ํ•˜๋˜๊ณ  ์ตœ๋Œ€ ์„ฑ๋Šฅ์„ ๋ฐœํœ˜ํ•˜์ง€ ๋ชปํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ, ๋„ˆ๋ฌด ๋œจ๊ฑฐ์›Œ์ง€๋ฉด ์ค‘์ง€๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. GPU๊ฐ€ ๊ณผ์—ด๋  ๋•Œ ์ •ํ™•ํ•œ ์ ์ • ์˜จ๋„๋ฅผ ์•Œ๊ธฐ ์–ด๋ ค์šฐ๋‚˜, ์•„๋งˆ๋„ +80โ„ƒ ๋ฏธ๋งŒ์ด๋ฉด ์ข‹์ง€๋งŒ ๋” ๋‚ฎ์„์ˆ˜๋ก ์ข‹์Šต๋‹ˆ๋‹ค. 70โ„ƒ-75โ„ƒ ์ •๋„๊ฐ€ ํ›Œ๋ฅญํ•œ ์˜จ๋„ ๋ฒ”์œ„์ž…๋‹ˆ๋‹ค. 
์„ฑ๋Šฅ ์ €ํ•˜๊ฐ€ ๋ฐœ์ƒํ•˜๊ธฐ ์‹œ์ž‘ํ•˜๋Š” ์˜จ๋„๋Š” ๋Œ€๋žต 84โ„ƒ-90โ„ƒ ์ •๋„์ผ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์„ฑ๋Šฅ ์ €ํ•˜ ์ด์™ธ์—๋„ ์ง€์†์ ์œผ๋กœ ๋งค์šฐ ๋†’์€ ์˜จ๋„๋Š” GPU ์ˆ˜๋ช…์„ ๋‹จ์ถ•์‹œํ‚ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด์–ด์„œ, ์—ฌ๋Ÿฌ ๊ฐœ์˜ GPU๋ฅผ ์‚ฌ์šฉํ•  ๋•Œ ๊ฐ€์žฅ ์ค‘์š”ํ•œ ์ธก๋ฉด ์ค‘ ํ•˜๋‚˜์ธ GPU ๊ฐ„ ์—ฐ๊ฒฐ ๋ฐฉ์‹์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ### ๋‹ค์ค‘ GPU ์—ฐ๊ฒฐ ๋ฐฉ์‹ [[multigpu-connectivity]] ๋‹ค์ค‘ GPU๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ GPU ๊ฐ„์˜ ์—ฐ๊ฒฐ ๋ฐฉ์‹์€ ์ „์ฒด ํ›ˆ๋ จ ์‹œ๊ฐ„์— ํฐ ์˜ํ–ฅ์„ ๋ฏธ์น  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋งŒ์•ฝ GPU๊ฐ€ ๋™์ผํ•œ ๋ฌผ๋ฆฌ์  ๋…ธ๋“œ์— ์žˆ์„ ๊ฒฝ์šฐ, ๋‹ค์Œ๊ณผ ๊ฐ™์ด ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash nvidia-smi topo -m ``` ๋งŒ์•ฝ NVLink๋กœ ์—ฐ๊ฒฐ๋œ ๋“€์–ผ GPU ํ™˜๊ฒฝ์ด๋ผ๋ฉด, ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๊ฒฐ๊ณผ๋ฅผ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ``` GPU0 GPU1 CPU Affinity NUMA Affinity GPU0 X NV2 0-23 N/A GPU1 NV2 X 0-23 N/A ``` NVLink๋ฅผ ์ง€์›ํ•˜์ง€ ์•Š๋Š” ๋‹ค๋ฅธ ํ™˜๊ฒฝ์˜ ๊ฒฝ์šฐ์—๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๊ฒฐ๊ณผ๋ฅผ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ``` GPU0 GPU1 CPU Affinity NUMA Affinity GPU0 X PHB 0-11 N/A GPU1 PHB X 0-11 N/A ``` ์ด ๊ฒฐ๊ณผ์—๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋ฒ”๋ก€๊ฐ€ ํฌํ•จ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค: ``` X = Self SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI) NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU) PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge) PIX = Connection traversing at most a single PCIe bridge NV# = Connection traversing a bonded set of # NVLinks ``` ๋”ฐ๋ผ์„œ ์ฒซ ๋ฒˆ์งธ ๊ฒฐ๊ณผ์˜ `NV2`๋Š” GPU๊ฐ€ 2๊ฐœ์˜ NVLink๋กœ ์—ฐ๊ฒฐ๋˜์–ด ์žˆ๋‹ค๋Š” ๊ฒƒ์„ ๋‚˜ํƒ€๋‚ด๊ณ , ๋‘ ๋ฒˆ์งธ ๊ฒฐ๊ณผ์˜ `PHB`๋Š” ์ผ๋ฐ˜์ ์ธ ์†Œ๋น„์ž์šฉ PCIe+๋ธŒ๋ฆฟ์ง€ ์„ค์ •์„ ๊ฐ€์ง€๊ณ  ์žˆ๋‹ค๋Š” ๊ฒƒ์„ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. ์„ค์ •์—์„œ ์–ด๋–ค ์œ ํ˜•์˜ ์—ฐ๊ฒฐ ๋ฐฉ์‹์„ ๊ฐ€์ง€๊ณ  ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”. ์ผ๋ถ€ ์—ฐ๊ฒฐ ๋ฐฉ์‹์€ GPU ๊ฐ„ ํ†ต์‹ ์„ ๋” ๋น ๋ฅด๊ฒŒ ๋งŒ๋“ค ์ˆ˜ ์žˆ์œผ๋ฉฐ(NVLink์™€ ๊ฐ™์ด), ์–ด๋–ค ์—ฐ๊ฒฐ ๋ฐฉ์‹์€ ๋” ๋Š๋ฆฌ๊ฒŒ ๋งŒ๋“ค ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค(PHB์™€ ๊ฐ™์ด). ์‚ฌ์šฉํ•˜๋Š” ํ™•์žฅ์„ฑ ์†”๋ฃจ์…˜์˜ ์ข…๋ฅ˜์— ๋”ฐ๋ผ ์—ฐ๊ฒฐ ์†๋„๊ฐ€ ์ฃผ์š”ํ•œ ์˜ํ–ฅ์„ ๋ฏธ์น  ์ˆ˜๋„ ์žˆ๊ณ  ๋ฏธ๋ฏธํ•œ ์˜ํ–ฅ์„ ๋ฏธ์น  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. DDP์™€ ๊ฐ™์ด GPU๊ฐ€ ๊ฑฐ์˜ ๋™๊ธฐํ™”ํ•˜์ง€ ์•Š์•„๋„ ๋˜๋Š” ๊ฒฝ์šฐ, ์—ฐ๊ฒฐ ์†๋„๊ฐ€ ๋Š๋ ค๋„ ํฐ ์˜ํ–ฅ์„ ๋ฐ›์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๋ฐ˜๋ฉด ZeRO-DP์™€ ๊ฐ™์ด GPU๊ฐ„ ํ†ต์‹ ์ด ๋งŽ์ด ํ•„์š”ํ•œ ๊ฒฝ์šฐ, ๋” ๋น ๋ฅธ ํ›ˆ๋ จ์„ ์œ„ํ•ด์„œ๋Š” ๋” ๋น ๋ฅธ ์—ฐ๊ฒฐ ์†๋„๊ฐ€ ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. #### NVLink [[nvlink]] [NVLink](https://en.wikipedia.org/wiki/NVLink)๋Š” Nvidia์—์„œ ๊ฐœ๋ฐœํ•œ ์œ ์„  ๊ธฐ๋ฐ˜์˜ ์ง๋ ฌ ๋‹ค์ค‘ ๋ ˆ์ธ ๊ทผ๊ฑฐ๋ฆฌ ํ†ต์‹  ๋งํฌ์ž…๋‹ˆ๋‹ค. ์ƒˆ๋กœ์šด ์„ธ๋Œ€์˜ NVLink๋Š” ๋” ๋น ๋ฅธ ๋Œ€์—ญํญ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. [Nvidia Ampere GA102 GPU Architecture](https://www.nvidia.com/content/dam/en-zz/Solutions/geforce/ampere/pdf/NVIDIA-ampere-GA102-GPU-Architecture-Whitepaper-V1.pdf)์—์„œ ์•„๋ž˜์™€ ๊ฐ™์€ ์ •๋ณด๋ฅผ ํ™•์ธํ•˜์‹ค ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: > 3์„ธ๋Œ€ NVLinkยฎ > GA102 GPU๋Š” 4๊ฐœ์˜ x4 ๋งํฌ๋ฅผ ํฌํ•จํ•˜๋Š” NVIDIA์˜ 3์„ธ๋Œ€ NVLink ์ธํ„ฐํŽ˜์ด์Šค๋ฅผ ํ™œ์šฉํ•˜๋ฉฐ, > ๊ฐ ๋งํฌ๋Š” ๋‘ ๊ฐœ์˜ GPU ๊ฐ„์— ๊ฐ ๋ฐฉํ–ฅ์œผ๋กœ ์ดˆ๋‹น 14.0625GB์˜ ๋Œ€์—ญํญ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. > 4๊ฐœ์˜ ๋งํฌ๋Š” ๊ฐ ๋ฐฉํ–ฅ์— ์ดˆ๋‹น 56.25GB์˜ ๋Œ€์—ญํญ์„ ์ œ๊ณตํ•˜๋ฉฐ, ๋‘ ๊ฐœ์˜ GPU ๊ฐ„์—๋Š” ์ดˆ๋‹น 112.5GB์˜ ์ด ๋Œ€์—ญํญ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. > ๋‘ ๊ฐœ์˜ RTX 3090 GPU๋ฅผ NVLink๋ฅผ ์‚ฌ์šฉํ•ด SLI๋กœ ์—ฐ๊ฒฐํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
> (3-Way ๋ฐ 4-Way SLI ๊ตฌ์„ฑ์€ ์ง€์›๋˜์ง€ ์•Š์Œ์— ์œ ์˜ํ•˜์„ธ์š”.) ๋”ฐ๋ผ์„œ `nvidia-smi topo -m`์˜ ๊ฒฐ๊ณผ์—์„œ `NVX`์˜ ๊ฐ’์ด ๋†’์„์ˆ˜๋ก ๋” ์ข‹์Šต๋‹ˆ๋‹ค. ์„ธ๋Œ€๋Š” GPU ์•„ํ‚คํ…์ฒ˜์— ๋”ฐ๋ผ ๋‹ค๋ฅผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋ ‡๋‹ค๋ฉด, openai-community/gpt2๋ฅผ ์ž‘์€ wikitext ์ƒ˜ํ”Œ๋กœ ํ•™์Šต์‹œํ‚ค๋Š” ์˜ˆ์ œ๋ฅผ ํ†ตํ•ด, NVLink๊ฐ€ ํ›ˆ๋ จ์— ์–ด๋–ค ์˜ํ–ฅ์„ ๋ฏธ์น˜๋Š”์ง€ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ๊ฒฐ๊ณผ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: | NVlink | Time | | ----- | ---: | | Y | 101s | | N | 131s | NVLink ์‚ฌ์šฉ ์‹œ ํ›ˆ๋ จ์ด ์•ฝ 23% ๋” ๋น ๋ฅด๊ฒŒ ์™„๋ฃŒ๋จ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‘ ๋ฒˆ์งธ ๋ฒค์น˜๋งˆํฌ์—์„œ๋Š” `NCCL_P2P_DISABLE=1`์„ ์‚ฌ์šฉํ•˜์—ฌ NVLink๋ฅผ ์‚ฌ์šฉํ•˜์ง€ ์•Š๋„๋ก ์„ค์ •ํ–ˆ์Šต๋‹ˆ๋‹ค. ์ „์ฒด ๋ฒค์น˜๋งˆํฌ ์ฝ”๋“œ์™€ ๊ฒฐ๊ณผ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```bash # DDP w/ NVLink rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 torchrun \ --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py --model_name_or_path openai-community/gpt2 \ --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train \ --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 {'train_runtime': 101.9003, 'train_samples_per_second': 1.963, 'epoch': 0.69} # DDP w/o NVLink rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 NCCL_P2P_DISABLE=1 torchrun \ --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py --model_name_or_path openai-community/gpt2 \ --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 {'train_runtime': 131.4367, 'train_samples_per_second': 1.522, 'epoch': 0.69} ``` ํ•˜๋“œ์›จ์–ด: ๊ฐ๊ฐ 2๊ฐœ์˜ TITAN RTX 24GB + 2๊ฐœ์˜ NVLink (`NV2` in `nvidia-smi topo -m`) ์†Œํ”„ํŠธ์›จ์–ด: `pytorch-1.8-to-be` + `cuda-11.0` / `transformers==4.3.0.dev0`
0
mavonic_private_repos/transformers/docs/source
mavonic_private_repos/transformers/docs/source/ko/contributing.md
<!--- Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # ๐Ÿค— Transformers์— ๊ธฐ์—ฌํ•˜๊ธฐ [[contribute-to-transformers]] ๋ˆ„๊ตฌ๋‚˜ ๐Ÿค— Transformers์— ๊ธฐ์—ฌํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ, ์šฐ๋ฆฌ๋Š” ๋ชจ๋“  ์‚ฌ๋žŒ์˜ ๊ธฐ์—ฌ๋ฅผ ์†Œ์ค‘ํžˆ ์ƒ๊ฐํ•ฉ๋‹ˆ๋‹ค. ์ฝ”๋“œ ๊ธฐ์—ฌ๋Š” ์ปค๋ฎค๋‹ˆํ‹ฐ๋ฅผ ๋•๋Š” ์œ ์ผํ•œ ๋ฐฉ๋ฒ•์ด ์•„๋‹™๋‹ˆ๋‹ค. ์งˆ๋ฌธ์— ๋‹ตํ•˜๊ฑฐ๋‚˜ ๋‹ค๋ฅธ ์‚ฌ๋žŒ์„ ๋„์™€ ๋ฌธ์„œ๋ฅผ ๊ฐœ์„ ํ•˜๋Š” ๊ฒƒ๋„ ๋งค์šฐ ๊ฐ€์น˜๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ๐Ÿค— Transformers๋ฅผ ๋„๋ฆฌ ์•Œ๋ฆฌ๋Š” ๊ฒƒ๋„ ํฐ ๋„์›€์ด ๋ฉ๋‹ˆ๋‹ค! ๋ฉ‹์ง„ ํ”„๋กœ์ ํŠธ๋“ค์„ ๊ฐ€๋Šฅํ•˜๊ฒŒ ํ•œ ๐Ÿค— Transformers ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์— ๋Œ€ํ•ด ๋ธ”๋กœ๊ทธ ๊ฒŒ์‹œ๊ธ€์— ์–ธ๊ธ‰ํ•˜๊ฑฐ๋‚˜, ๋„์›€์ด ๋˜์—ˆ์„ ๋•Œ๋งˆ๋‹ค Twitter์— ์•Œ๋ฆฌ๊ฑฐ๋‚˜, ์ €์žฅ์†Œ์— โญ๏ธ ๋ฅผ ํ‘œ์‹œํ•˜์—ฌ ๊ฐ์‚ฌ ์ธ์‚ฌ๋ฅผ ์ „ํ•ด์ฃผ์„ธ์š”. ์–ด๋–ค ๋ฐฉ์‹์œผ๋กœ ๊ธฐ์—ฌํ•˜๋“  [ํ–‰๋™ ๊ทœ์น™](https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md)์„ ์ˆ™์ง€ํ•˜๊ณ  ์กด์ค‘ํ•ด์ฃผ์„ธ์š”. **์ด ์•ˆ๋‚ด์„œ๋Š” ๋ฉ‹์ง„ [scikit-learn ๊ธฐ์—ฌ ์•ˆ๋‚ด์„œ](https://github.com/scikit-learn/scikit-learn/blob/main/CONTRIBUTING.md)์—์„œ ํฐ ์˜๊ฐ์„ ๋ฐ›์•˜์Šต๋‹ˆ๋‹ค.** ## ๊ธฐ์—ฌํ•˜๋Š” ๋ฐฉ๋ฒ• [[ways-to-contribute]] ์—ฌ๋Ÿฌ ๊ฐ€์ง€ ๋ฐฉ๋ฒ•์œผ๋กœ ๐Ÿค— Transformers์— ๊ธฐ์—ฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: * ๊ธฐ์กด ์ฝ”๋“œ์˜ ๋ฏธํ•ด๊ฒฐ๋œ ๋ฌธ์ œ๋ฅผ ์ˆ˜์ •ํ•ฉ๋‹ˆ๋‹ค. * ๋ฒ„๊ทธ ๋˜๋Š” ์ƒˆ๋กœ ์ถ”๊ฐ€๋˜๊ธธ ์›ํ•˜๋Š” ๊ธฐ๋Šฅ๊ณผ ๊ด€๋ จ๋œ ์ด์Šˆ๋ฅผ ์ œ์ถœํ•ฉ๋‹ˆ๋‹ค. * ์ƒˆ๋กœ์šด ๋ชจ๋ธ์„ ๊ตฌํ˜„ํ•ฉ๋‹ˆ๋‹ค. * ์˜ˆ์ œ๋‚˜ ๋ฌธ์„œ์— ๊ธฐ์—ฌํ•ฉ๋‹ˆ๋‹ค. ์–ด๋””์„œ๋ถ€ํ„ฐ ์‹œ์ž‘ํ• ์ง€ ๋ชจ๋ฅด๊ฒ ๋‹ค๋ฉด, [Good First Issue](https://github.com/huggingface/transformers/contribute) ๋ชฉ๋ก์„ ํ™•์ธํ•ด๋ณด์„ธ์š”. ์ด ๋ชฉ๋ก์€ ์ดˆ๋ณด์ž๋„ ์ฐธ์—ฌํ•˜๊ธฐ ์‰ฌ์šด ์˜คํ”ˆ ์ด์Šˆ ๋ชฉ๋ก์„ ์ œ๊ณตํ•˜๋ฉฐ, ๋‹น์‹ ์ด ์˜คํ”ˆ์†Œ์Šค์— ์ฒ˜์Œ์œผ๋กœ ๊ธฐ์—ฌํ•˜๋Š” ๋ฐ ํฐ ๋„์›€์ด ๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๊ทธ์ € ์ž‘์—…ํ•˜๊ณ  ์‹ถ์€ ์ด์Šˆ์— ๋Œ“๊ธ€๋งŒ ๋‹ฌ์•„์ฃผ๋ฉด ๋ฉ๋‹ˆ๋‹ค. ์กฐ๊ธˆ ๋” ๋„์ „์ ์ธ ์ž‘์—…์„ ์›ํ•œ๋‹ค๋ฉด, [Good Second Issue](https://github.com/huggingface/transformers/labels/Good%20Second%20Issue) ๋ชฉ๋ก๋„ ํ™•์ธํ•ด๋ณด์„ธ์š”. ์ด๋ฏธ ๋‹น์‹ ์ด ์ž˜ ํ•˜๊ณ  ์žˆ๋‹ค๊ณ  ์ƒ๊ฐ๋˜๋”๋ผ๋„, ํ•œ ๋ฒˆ ์‹œ๋„ํ•ด๋ณด์„ธ์š”! ์šฐ๋ฆฌ๋„ ์—ฌ๋Ÿฌ๋ถ„์„ ๋„์šธ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๐Ÿš€ > ์ปค๋ฎค๋‹ˆํ‹ฐ์— ์ด๋ฃจ์–ด์ง€๋Š” ๋ชจ๋“  ๊ธฐ์—ฌ๋Š” ๋˜‘๊ฐ™์ด ์†Œ์ค‘ํ•ฉ๋‹ˆ๋‹ค. ๐Ÿฅฐ ## ๋ฏธํ•ด๊ฒฐ๋œ ๋ฌธ์ œ ์ˆ˜์ •ํ•˜๊ธฐ [[fixing-outstanding-issues]] ๊ธฐ์กด ์ฝ”๋“œ์—์„œ ๋ฐœ๊ฒฌํ•œ ๋ฌธ์ œ์ ์— ๋Œ€ํ•œ ํ•ด๊ฒฐ์ฑ…์ด ๋– ์˜ค๋ฅธ ๊ฒฝ์šฐ, ์–ธ์ œ๋“ ์ง€ [๊ธฐ์—ฌ๋ฅผ ์‹œ์ž‘](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md/#create-a-pull-request)ํ•˜๊ณ  Pull Request๋ฅผ ์ƒ์„ฑํ•ด์ฃผ์„ธ์š”! ## ๋ฒ„๊ทธ ๊ด€๋ จ ์ด์Šˆ๋ฅผ ์ œ๊ธฐํ•˜๊ฑฐ๋‚˜ ์ƒˆ๋กœ์šด ๊ธฐ๋Šฅ ์š”์ฒญํ•˜๊ธฐ [[submitting-a-bugrelated-issue-or-feature-request]] ๋ฒ„๊ทธ ๊ด€๋ จ ์ด์Šˆ๋ฅผ ์ œ๊ธฐํ•˜๊ฑฐ๋‚˜ ์ƒˆ๋กœ์šด ๊ธฐ๋Šฅ์„ ์š”์ฒญํ•  ๋•Œ๋Š” ๋‹ค์Œ ๊ฐ€์ด๋“œ๋ผ์ธ์„ ์ตœ๋Œ€ํ•œ ์ค€์ˆ˜ํ•ด์ฃผ์„ธ์š”. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ์ข‹์€ ํ”ผ๋“œ๋ฐฑ๊ณผ ํ•จ๊ป˜ ๋น ๋ฅด๊ฒŒ ๋‹ต๋ณ€ํ•ด ๋“œ๋ฆด ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ### ๋ฒ„๊ทธ๋ฅผ ๋ฐœ๊ฒฌํ•˜์…จ๋‚˜์š”? 
[[did-you-find-a-bug]] ๐Ÿค— Transformers ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋Š” ์‚ฌ์šฉ ์ค‘์— ๊ฒช๋Š” ๋ฌธ์ œ๋ฅผ ๋ณด๊ณ ํ•ด์ฃผ๋Š” ์‚ฌ์šฉ์ž๋“ค ๋•๋ถ„์— ๋”์šฑ ๊ฒฌ๊ณ ํ•ด์ง€๊ณ  ์‹ ๋ขฐํ•  ์ˆ˜ ์žˆ๊ฒŒ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ด์Šˆ๋ฅผ ๋ณด๊ณ ํ•˜๊ธฐ ์ „์—, ๋ฒ„๊ทธ๊ฐ€ ์ด๋ฏธ **๋ณด๊ณ ๋˜์ง€ ์•Š์•˜๋Š”์ง€** ํ™•์ธํ•ด์ฃผ์„ธ์š”. (GitHub์˜ ์ด์Šˆ ํƒญ ์•„๋ž˜์˜ ๊ฒ€์ƒ‰ ๋ฐ”๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”). ์ด์Šˆ๋Š” ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ ์ž์ฒด์—์„œ ๋ฐœ์ƒํ•œ ๋ฒ„๊ทธ์–ด์•ผ ํ•˜๋ฉฐ, ์ฝ”๋“œ์˜ ๋‹ค๋ฅธ ๋ถ€๋ถ„๊ณผ ๊ด€๋ จ๋œ ๊ฒƒ์ด ์•„๋‹ˆ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ฒ„๊ทธ๊ฐ€ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ๋ฌธ์ œ๋กœ ๋ฐœ์ƒํ•˜์˜€๋Š”์ง€ ํ™•์‹คํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ ๋จผ์ € [ํฌ๋Ÿผ](https://discuss.huggingface.co/)์—์„œ ์งˆ๋ฌธํ•ด ์ฃผ์„ธ์š”. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ์ผ๋ฐ˜์ ์ธ ์งˆ๋ฌธ๋ณด๋‹ค ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์™€ ๊ด€๋ จ๋œ ๋ฌธ์ œ๋ฅผ ๋” ๋น ๋ฅด๊ฒŒ ํ•ด๊ฒฐํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฒ„๊ทธ๊ฐ€ ์ด๋ฏธ ๋ณด๊ณ ๋˜์ง€ ์•Š์•˜๋‹ค๋Š” ๊ฒƒ์„ ํ™•์ธํ–ˆ๋‹ค๋ฉด, ๋‹ค์Œ ์ •๋ณด๋ฅผ ํฌํ•จํ•˜์—ฌ ์ด์Šˆ๋ฅผ ์ œ์ถœํ•ด ์ฃผ์„ธ์š”. ๊ทธ๋Ÿฌ๋ฉด ์šฐ๋ฆฌ๊ฐ€ ๋น ๋ฅด๊ฒŒ ํ•ด๊ฒฐํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: * ์‚ฌ์šฉ ์ค‘์ธ **์šด์˜์ฒด์ œ ์ข…๋ฅ˜์™€ ๋ฒ„์ „**, ๊ทธ๋ฆฌ๊ณ  **Python**, **PyTorch** ๋˜๋Š” **TensorFlow** ๋ฒ„์ „. * ๋ฒ„๊ทธ๋ฅผ 30์ดˆ ์ด๋‚ด๋กœ ์žฌํ˜„ํ•  ์ˆ˜ ์žˆ๋Š” ๊ฐ„๋‹จํ•˜๊ณ  ๋…๋ฆฝ์ ์ธ ์ฝ”๋“œ ์Šค๋‹ˆํŽซ. * ์˜ˆ์™ธ๊ฐ€ ๋ฐœ์ƒํ•œ ๊ฒฝ์šฐ *์ „์ฒด* ํŠธ๋ ˆ์ด์Šค๋ฐฑ. * ์Šคํฌ๋ฆฐ์ƒท๊ณผ ๊ฐ™์ด ๋„์›€์ด ๋  ๊ฒƒ์œผ๋กœ ์ƒ๊ฐ๋˜๋Š” ์ถ”๊ฐ€ ์ •๋ณด๋ฅผ ์ฒจ๋ถ€ํ•ด ์ฃผ์„ธ์š”. ์šด์˜์ฒด์ œ์™€ ์†Œํ”„ํŠธ์›จ์–ด ๋ฒ„์ „์„ ์ž๋™์œผ๋กœ ๊ฐ€์ ธ์˜ค๋ ค๋ฉด ๋‹ค์Œ ๋ช…๋ น์„ ์‹คํ–‰ํ•˜์„ธ์š”: ```bash transformers-cli env ``` ์ €์žฅ์†Œ์˜ ๋ฃจํŠธ ๋””๋ ‰ํ„ฐ๋ฆฌ์—์„œ๋„ ๊ฐ™์€ ๋ช…๋ น์„ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash python src/transformers/commands/transformers_cli.py env ``` ### ์ƒˆ๋กœ์šด ๊ธฐ๋Šฅ์„ ์›ํ•˜์‹œ๋‚˜์š”? [[do-you-want-a-new-feature]] ๐Ÿค— Transformers์—์„œ ์‚ฌ์šฉํ•˜๊ณ  ์‹ถ์€ ์ƒˆ๋กœ์šด ๊ธฐ๋Šฅ์ด ์žˆ๋‹ค๋ฉด, ๋‹ค์Œ ๋‚ด์šฉ์„ ํฌํ•จํ•˜์—ฌ ์ด์Šˆ๋ฅผ ์ œ์ถœํ•ด ์ฃผ์„ธ์š”: 1. ์ด ๊ธฐ๋Šฅ์ด ํ•„์š”ํ•œ *์ด์œ *๋Š” ๋ฌด์—‡์ธ๊ฐ€์š”? ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์— ๋Œ€ํ•œ ๋ฌธ์ œ๋‚˜ ๋ถˆ๋งŒ๊ณผ ๊ด€๋ จ์ด ์žˆ๋‚˜์š”? ํ”„๋กœ์ ํŠธ์— ํ•„์š”ํ•œ ๊ธฐ๋Šฅ์ธ๊ฐ€์š”? ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๋„์›€์ด ๋  ๋งŒํ•œ ๊ธฐ๋Šฅ์ธ๊ฐ€์š”? ์–ด๋–ค ๋‚ด์šฉ์ด๋“  ์—ฌ๋Ÿฌ๋ถ„์˜ ์ด์•ผ๊ธฐ๋ฅผ ๋“ฃ๊ณ  ์‹ถ์Šต๋‹ˆ๋‹ค! 2. ์š”์ฒญํ•˜๋Š” ๊ธฐ๋Šฅ์„ ์ตœ๋Œ€ํ•œ ์ž์„ธํžˆ ์„ค๋ช…ํ•ด ์ฃผ์„ธ์š”. ๋” ๋งŽ์€ ์ •๋ณด๋ฅผ ์ œ๊ณตํ• ์ˆ˜๋ก ๋” ๋‚˜์€ ๋„์›€์„ ๋“œ๋ฆด ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 3. ํ•ด๋‹น ๊ธฐ๋Šฅ์˜ ์‚ฌ์šฉ๋ฒ•์„ ๋ณด์—ฌ์ฃผ๋Š” *์ฝ”๋“œ ์Šค๋‹ˆํŽซ*์„ ์ œ๊ณตํ•ด ์ฃผ์„ธ์š”. 4. ๊ธฐ๋Šฅ๊ณผ ๊ด€๋ จ๋œ ๋…ผ๋ฌธ์ด ์žˆ๋Š” ๊ฒฝ์šฐ ๋งํฌ๋ฅผ ํฌํ•จํ•ด ์ฃผ์„ธ์š”. ์ด์Šˆ๊ฐ€ ์ž˜ ์ž‘์„ฑ๋˜์—ˆ๋‹ค๋ฉด ์ด์Šˆ๊ฐ€ ์ƒ์„ฑ๋œ ์ˆœ๊ฐ„, ์ด๋ฏธ 80% ์ •๋„์˜ ์ž‘์—…์ด ์™„๋ฃŒ๋œ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด์Šˆ๋ฅผ ์ œ๊ธฐํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋  ๋งŒํ•œ [ํ…œํ”Œ๋ฆฟ](https://github.com/huggingface/transformers/tree/main/templates)๋„ ์ค€๋น„๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ## ์ƒˆ๋กœ์šด ๋ชจ๋ธ์„ ๊ตฌํ˜„ํ•˜๊ณ  ์‹ถ์œผ์‹ ๊ฐ€์š”? [[do-you-want-to-implement-a-new-model]] ์ƒˆ๋กœ์šด ๋ชจ๋ธ์€ ๊ณ„์†ํ•ด์„œ ์ถœ์‹œ๋ฉ๋‹ˆ๋‹ค. ๋งŒ์•ฝ ์—ฌ๋Ÿฌ๋ถ„์ด ์ƒˆ๋กœ์šด ๋ชจ๋ธ์„ ๊ตฌํ˜„ํ•˜๊ณ  ์‹ถ๋‹ค๋ฉด ๋‹ค์Œ ์ •๋ณด๋ฅผ ์ œ๊ณตํ•ด ์ฃผ์„ธ์š”: * ๋ชจ๋ธ์— ๋Œ€ํ•œ ๊ฐ„๋‹จํ•œ ์„ค๋ช…๊ณผ ๋…ผ๋ฌธ ๋งํฌ. * ๊ตฌํ˜„์ด ๊ณต๊ฐœ๋˜์–ด ์žˆ๋‹ค๋ฉด ๊ตฌํ˜„ ๋งํฌ. * ๋ชจ๋ธ ๊ฐ€์ค‘์น˜๊ฐ€ ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•˜๋‹ค๋ฉด ๊ฐ€์ค‘์น˜ ๋งํฌ. ๋งŒ์•ฝ ๋ชจ๋ธ์„ ์ง์ ‘ ๊ธฐ์—ฌํ•˜๊ณ  ์‹ถ์œผ์‹œ๋‹ค๋ฉด, ์•Œ๋ ค์ฃผ์„ธ์š”. ๐Ÿค— Transformers์— ์ถ”๊ฐ€ํ•  ์ˆ˜ ์žˆ๋„๋ก ๋„์™€๋“œ๋ฆฌ๊ฒ ์Šต๋‹ˆ๋‹ค! [๐Ÿค— Transformers์— ์ƒˆ๋กœ์šด ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๋ฐฉ๋ฒ•](https://huggingface.co/docs/transformers/add_new_model)์— ๋Œ€ํ•œ ๊ธฐ์ˆ ์ ์ธ ์•ˆ๋‚ด์„œ๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ## ๋ฌธ์„œ๋ฅผ ์ถ”๊ฐ€ํ•˜๊ณ  ์‹ถ์œผ์‹ ๊ฐ€์š”? 
[[do-you-want-to-add-documentation]] ์šฐ๋ฆฌ๋Š” ์–ธ์ œ๋‚˜ ๋” ๋ช…ํ™•ํ•˜๊ณ  ์ •ํ™•ํ•œ ๋ฌธ์„œ๋ฅผ ์ œ๊ณตํ•˜๊ธฐ ์œ„ํ•˜์—ฌ ๊ฐœ์„ ์ ์„ ์ฐพ๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ์˜คํƒˆ์ž๋‚˜ ๋ถ€์กฑํ•œ ๋‚ด์šฉ, ๋ถ„๋ช…ํ•˜์ง€ ์•Š๊ฑฐ๋‚˜ ๋ถ€์ •ํ™•ํ•œ ๋‚ด์šฉ ๋“ฑ์„ ์•Œ๋ ค์ฃผ์‹œ๋ฉด ๊ฐœ์„ ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋ฉ๋‹ˆ๋‹ค. ๊ด€์‹ฌ์ด ์žˆ์œผ์‹œ๋‹ค๋ฉด ๋ณ€๊ฒฝํ•˜๊ฑฐ๋‚˜ ๊ธฐ์—ฌํ•˜์‹ค ์ˆ˜ ์žˆ๋„๋ก ๋„์™€๋“œ๋ฆฌ๊ฒ ์Šต๋‹ˆ๋‹ค! ๋ฌธ์„œ๋ฅผ ์ƒ์„ฑ, ๋นŒ๋“œ ๋ฐ ์ž‘์„ฑํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [README](https://github.com/huggingface/transformers/tree/main/docs) ๋ฌธ์„œ๋ฅผ ํ™•์ธํ•ด ์ฃผ์„ธ์š”. ## ํ’€ ๋ฆฌํ€˜์ŠคํŠธ(Pull Request) ์ƒ์„ฑํ•˜๊ธฐ [[create-a-pull-request]] ์ฝ”๋“œ๋ฅผ ์ž‘์„ฑํ•˜๊ธฐ ์ „์— ๊ธฐ์กด์˜ Pull Request๋‚˜ ์ด์Šˆ๋ฅผ ๊ฒ€์ƒ‰ํ•˜์—ฌ ๋ˆ„๊ตฐ๊ฐ€ ์ด๋ฏธ ๋™์ผํ•œ ์ž‘์—…์„ ํ•˜๊ณ  ์žˆ๋Š”์ง€ ํ™•์ธํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ํ™•์‹คํ•˜์ง€ ์•Š๋‹ค๋ฉด ํ”ผ๋“œ๋ฐฑ์„ ๋ฐ›๊ธฐ ์œ„ํ•ด ์ด์Šˆ๋ฅผ ์—ด์–ด๋ณด๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ๐Ÿค— Transformers์— ๊ธฐ์—ฌํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” ๊ธฐ๋ณธ์ ์ธ `git` ์‚ฌ์šฉ ๋Šฅ๋ ฅ์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. `git`์€ ์‚ฌ์šฉํ•˜๊ธฐ ์‰ฌ์šด ๋„๊ตฌ๋Š” ์•„๋‹ˆ์ง€๋งŒ, ๋งค์šฐ ํ›Œ๋ฅญํ•œ ๋งค๋‰ด์–ผ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์‰˜(shell)์—์„œ `git --help`์„ ์ž…๋ ฅํ•˜์—ฌ ํ™•์ธํ•ด๋ณด์„ธ์š”! ๋งŒ์•ฝ ์ฑ…์„ ์„ ํ˜ธํ•œ๋‹ค๋ฉด, [Pro Git](https://git-scm.com/book/en/v2)์€ ๋งค์šฐ ์ข‹์€ ์ฐธ๊ณ  ์ž๋ฃŒ๊ฐ€ ๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๐Ÿค— Transformers์— ๊ธฐ์—ฌํ•˜๋ ค๋ฉด **[Python 3.8](https://github.com/huggingface/transformers/blob/main/setup.py#L426)** ์ด์ƒ์˜ ๋ฒ„์ „์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ๊ธฐ์—ฌ๋ฅผ ์‹œ์ž‘ํ•˜๋ ค๋ฉด ๋‹ค์Œ ๋‹จ๊ณ„๋ฅผ ๋”ฐ๋ฅด์„ธ์š”: 1. ์ €์žฅ์†Œ ํŽ˜์ด์ง€์—์„œ **[Fork](https://github.com/huggingface/transformers/fork)** ๋ฒ„ํŠผ์„ ํด๋ฆญํ•˜์—ฌ ์ €์žฅ์†Œ๋ฅผ ํฌํฌํ•˜์„ธ์š”. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ์ฝ”๋“œ์˜ ๋ณต์‚ฌ๋ณธ์ด ์—ฌ๋Ÿฌ๋ถ„์˜ GitHub ์‚ฌ์šฉ์ž ๊ณ„์ • ์•„๋ž˜์— ์ƒ์„ฑ๋ฉ๋‹ˆ๋‹ค. 2. ํฌํฌํ•œ ์ €์žฅ์†Œ๋ฅผ ๋กœ์ปฌ ๋””์Šคํฌ๋กœ ํด๋ก ํ•˜๊ณ , ๊ธฐ๋ณธ ์ €์žฅ์†Œ๋ฅผ ์›๊ฒฉ(remote)์œผ๋กœ ์ถ”๊ฐ€ํ•˜์„ธ์š”: ```bash git clone git@github.com:<your Github handle>/transformers.git cd transformers git remote add upstream https://github.com/huggingface/transformers.git ``` 3. ๊ฐœ๋ฐœ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์„ ์ €์žฅํ•  ์ƒˆ ๋ธŒ๋žœ์น˜๋ฅผ ์ƒ์„ฑํ•˜์„ธ์š”: ```bash git checkout -b a-descriptive-name-for-my-changes ``` ๐Ÿšจ ์ ˆ๋Œ€ `main` ๋ธŒ๋žœ์น˜์—์„œ ์ž‘์—…ํ•˜์ง€ **๋งˆ์„ธ์š”!** 4. ๊ฐ€์ƒ ํ™˜๊ฒฝ์—์„œ ๋‹ค์Œ ๋ช…๋ น์„ ์‹คํ–‰ํ•˜์—ฌ ๊ฐœ๋ฐœ ํ™˜๊ฒฝ์„ ์„ค์ •ํ•˜์„ธ์š”: ```bash pip install -e ".[dev]" ``` ๋งŒ์•ฝ ์ด๋ฏธ ๊ฐ€์ƒ ํ™˜๊ฒฝ์— ๐Ÿค— Transformers๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ๋‹ค๋ฉด, `-e` ํ”Œ๋ž˜๊ทธ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์„ค์น˜ํ•˜๊ธฐ ์ „์— `pip uninstall transformers`๋กœ ์ œ๊ฑฐํ•ด์ฃผ์„ธ์š”. ์—ฌ๋Ÿฌ๋ถ„์˜ ์šด์˜์ฒด์ œ์— ๋”ฐ๋ผ์„œ, ๊ทธ๋ฆฌ๊ณ  ๐Ÿค— Transformers์˜ ์„ ํƒ์  ์˜์กด์„ฑ์˜ ์ˆ˜๊ฐ€ ์ฆ๊ฐ€ํ•˜๋ฉด์„œ, ์ด ๋ช…๋ น์ด ์‹คํŒจํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿด ๊ฒฝ์šฐ ์‚ฌ์šฉํ•˜๋ ค๋Š” ๋”ฅ๋Ÿฌ๋‹ ํ”„๋ ˆ์ž„์›Œํฌ(PyTorch, TensorFlow, ๊ทธ๋ฆฌ๊ณ /๋˜๋Š” Flax)๋ฅผ ์„ค์น˜ํ•œ ํ›„ ์•„๋ž˜ ๋ช…๋ น์„ ์‹คํ–‰ํ•ด์ฃผ์„ธ์š”: ```bash pip install -e ".[quality]" ``` ๋Œ€๋ถ€๋ถ„์˜ ๊ฒฝ์šฐ ์ด๊ฒƒ์œผ๋กœ ์ถฉ๋ถ„ํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. 5. ๋ธŒ๋žœ์น˜์—์„œ ๊ธฐ๋Šฅ์„ ๊ฐœ๋ฐœํ•˜์„ธ์š”. ์ฝ”๋“œ๋ฅผ ์ž‘์—…ํ•˜๋Š” ๋™์•ˆ ํ…Œ์ŠคํŠธ ์Šค์œ„ํŠธ(test suite)๊ฐ€ ํ†ต๊ณผํ•˜๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”. ๋‹ค์Œ๊ณผ ๊ฐ™์ด ๋ณ€๊ฒฝ ์‚ฌํ•ญ์— ์˜ํ–ฅ์„ ๋ฐ›๋Š” ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜์„ธ์š”: ```bash pytest tests/<TEST_TO_RUN>.py ``` ํ…Œ์ŠคํŠธ์— ๋Œ€ํ•œ ๋” ๋งŽ์€ ์ •๋ณด๋Š” [ํ…Œ์ŠคํŠธ](https://huggingface.co/docs/transformers/testing) ๊ฐ€์ด๋“œ๋ฅผ ํ™•์ธํ•˜์„ธ์š”. ๐Ÿค— Transformers๋Š” `black`๊ณผ `ruff`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์†Œ์Šค ์ฝ”๋“œ์˜ ํ˜•์‹์„ ์ผ๊ด€๋˜๊ฒŒ ์œ ์ง€ํ•ฉ๋‹ˆ๋‹ค. 
๋ณ€๊ฒฝ ์‚ฌํ•ญ์„ ์ ์šฉํ•œ ํ›„์—๋Š” ๋‹ค์Œ ๋ช…๋ น์œผ๋กœ ์ž๋™์œผ๋กœ ์Šคํƒ€์ผ ๊ต์ • ๋ฐ ์ฝ”๋“œ ๊ฒ€์ฆ์„ ์ˆ˜ํ–‰ํ•˜์„ธ์š”: ```bash make fixup ``` ์ด๊ฒƒ์€ ๋˜ํ•œ ์ž‘์—… ์ค‘์ธ PR์—์„œ ์ˆ˜์ •ํ•œ ํŒŒ์ผ์—์„œ๋งŒ ์ž‘๋™ํ•˜๋„๋ก ์ตœ์ ํ™”๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ๊ฒ€์‚ฌ๋ฅผ ํ•˜๋‚˜์”ฉ ์‹คํ–‰ํ•˜๋ ค๋Š” ๊ฒฝ์šฐ, ๋‹ค์Œ ๋ช…๋ น์œผ๋กœ ์Šคํƒ€์ผ ๊ต์ •์„ ์ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash make style ``` ๐Ÿค— Transformers๋Š” ๋˜ํ•œ `ruff`์™€ ๋ช‡ ๊ฐ€์ง€ ์‚ฌ์šฉ์ž ์ •์˜ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ฝ”๋”ฉ ์‹ค์ˆ˜๋ฅผ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค. CI๋ฅผ ํ†ตํ•ด ํ’ˆ์งˆ ๊ด€๋ฆฌ๊ฐ€ ์ˆ˜ํ–‰๋˜์ง€๋งŒ, ๋‹ค์Œ ๋ช…๋ น์œผ๋กœ ๋™์ผํ•œ ๊ฒ€์‚ฌ๋ฅผ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash make quality ``` ๋งˆ์ง€๋ง‰์œผ๋กœ, ์ƒˆ ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•  ๋•Œ ์ผ๋ถ€ ํŒŒ์ผ์„ ์—…๋ฐ์ดํŠธํ•˜๋Š” ๊ฒƒ์„ ์žŠ์ง€ ์•Š๋„๋ก ํ•˜๊ธฐ ์œ„ํ•œ ๋งŽ์€ ์Šคํฌ๋ฆฝํŠธ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ ๋ช…๋ น์œผ๋กœ ์ด๋Ÿฌํ•œ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash make repo-consistency ``` ์ด๋Ÿฌํ•œ ๊ฒ€์‚ฌ์— ๋Œ€ํ•ด ์ž์„ธํžˆ ์•Œ์•„๋ณด๊ณ  ๊ด€๋ จ ๋ฌธ์ œ๋ฅผ ํ•ด๊ฒฐํ•˜๋Š” ๋ฐฉ๋ฒ•์€ [Pull Request์— ๋Œ€ํ•œ ๊ฒ€์‚ฌ](https://huggingface.co/docs/transformers/pr_checks) ๊ฐ€์ด๋“œ๋ฅผ ํ™•์ธํ•˜์„ธ์š”. ๋งŒ์•ฝ `docs/source` ๋””๋ ‰ํ„ฐ๋ฆฌ ์•„๋ž˜์˜ ๋ฌธ์„œ๋ฅผ ์ˆ˜์ •ํ•˜๋Š” ๊ฒฝ์šฐ, ๋ฌธ์„œ๊ฐ€ ๋นŒ๋“œ๋  ์ˆ˜ ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”. ์ด ๊ฒ€์‚ฌ๋Š” Pull Request๋ฅผ ์—ด ๋•Œ๋„ CI์—์„œ ์‹คํ–‰๋ฉ๋‹ˆ๋‹ค. ๋กœ์ปฌ ๊ฒ€์‚ฌ๋ฅผ ์‹คํ–‰ํ•˜๋ ค๋ฉด ๋ฌธ์„œ ๋นŒ๋”๋ฅผ ์„ค์น˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```bash pip install ".[docs]" ``` ์ €์žฅ์†Œ์˜ ๋ฃจํŠธ ๋””๋ ‰ํ„ฐ๋ฆฌ์—์„œ ๋‹ค์Œ ๋ช…๋ น์„ ์‹คํ–‰ํ•˜์„ธ์š”: ```bash doc-builder build transformers docs/source/en --build_dir ~/tmp/test-build ``` ์ด ๋ช…๋ น์€ `~/tmp/test-build` ํด๋”์— ๋ฌธ์„œ๋ฅผ ๋นŒ๋“œํ•˜๋ฉฐ, ์ƒ์„ฑ๋œ Markdown ํŒŒ์ผ์„ ์„ ํ˜ธํ•˜๋Š” ํŽธ์ง‘๊ธฐ๋กœ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. Pull Request๋ฅผ ์—ด ๋•Œ GitHub์—์„œ ๋ฌธ์„œ๋ฅผ ๋ฏธ๋ฆฌ ๋ณผ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ณ€๊ฒฝ ์‚ฌํ•ญ์— ๋งŒ์กฑํ•˜๋ฉด `git add`๋กœ ๋ณ€๊ฒฝ๋œ ํŒŒ์ผ์„ ์ถ”๊ฐ€ํ•˜๊ณ , `git commit`์œผ๋กœ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์„ ๋กœ์ปฌ์— ๊ธฐ๋กํ•˜์„ธ์š”: ```bash git add modified_file.py git commit ``` [์ข‹์€ ์ปค๋ฐ‹ ๋ฉ”์‹œ์ง€](https://chris.beams.io/posts/git-commit/)๋ฅผ ์ž‘์„ฑํ•˜์—ฌ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์„ ๋ช…ํ™•ํ•˜๊ฒŒ ์ „๋‹ฌํ•˜์„ธ์š”! ๋ณ€๊ฒฝ ์‚ฌํ•ญ์„ ํ”„๋กœ์ ํŠธ ์›๋ณธ ์ €์žฅ์†Œ์™€ ๋™๊ธฐํ™”ํ•˜๋ ค๋ฉด, PR์„ *์—ด๊ธฐ ์ „์—* ๋ธŒ๋žœ์น˜๋ฅผ `upstream/branch`๋กœ ๋ฆฌ๋ฒ ์ด์Šค(rebase)ํ•˜์„ธ์š”. ๋˜๋Š” ๊ด€๋ฆฌ์ž์˜ ์š”์ฒญ์— ์ด ์ž‘์—…์ด ํ•„์š”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash git fetch upstream git rebase upstream/main ``` ๋ณ€๊ฒฝ ์‚ฌํ•ญ์„ ๋ธŒ๋žœ์น˜์— ํ‘ธ์‹œํ•˜์„ธ์š”: ```bash git push -u origin a-descriptive-name-for-my-changes ``` ์ด๋ฏธ PR์„ ์—ด์—ˆ๋‹ค๋ฉด, `--force` ํ”Œ๋ž˜๊ทธ์™€ ํ•จ๊ป˜ ๊ฐ•์ œ ํ‘ธ์‹œํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์•„์ง PR์ด ์—ด๋ฆฌ์ง€ ์•Š์•˜๋‹ค๋ฉด ์ •์ƒ์ ์œผ๋กœ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์„ ํ‘ธ์‹œํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. 6. ์ด์ œ GitHub์—์„œ ํฌํฌํ•œ ์ €์žฅ์†Œ๋กœ ์ด๋™ํ•˜๊ณ  **Pull request(ํ’€ ๋ฆฌํ€˜์ŠคํŠธ)**๋ฅผ ํด๋ฆญํ•˜์—ฌ Pull Request๋ฅผ ์—ด ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜์˜ [์ฒดํฌ๋ฆฌ์ŠคํŠธ](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md/#pull-request-checklist)์—์„œ ๋ชจ๋“  ํ•ญ๋ชฉ์— ์ฒดํฌ ํ‘œ์‹œ๋ฅผ ํ•˜์„ธ์š”. ์ค€๋น„๊ฐ€ ์™„๋ฃŒ๋˜๋ฉด ํ”„๋กœ์ ํŠธ ๊ด€๋ฆฌ์ž์—๊ฒŒ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์„ ๋ณด๋‚ด ๊ฒ€ํ† ๋ฅผ ์š”์ฒญํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 7. ๊ด€๋ฆฌ์ž๊ฐ€ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์„ ์š”์ฒญํ•ด๋„ ๊ดœ์ฐฎ์Šต๋‹ˆ๋‹ค. ํ•ต์‹ฌ ๊ธฐ์—ฌ์ž๋“ค๋„ ๋™์ผํ•œ ์ƒํ™ฉ์„ ๊ฒช์Šต๋‹ˆ๋‹ค! ๋ชจ๋‘๊ฐ€ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์„ Pull Request์—์„œ ๋ณผ ์ˆ˜ ์žˆ๋„๋ก, ๋กœ์ปฌ ๋ธŒ๋žœ์น˜์—์„œ ์ž‘์—…ํ•˜๊ณ  ๋ณ€๊ฒฝ ์‚ฌํ•ญ์„ ํฌํฌํ•œ ์ €์žฅ์†Œ๋กœ ํ‘ธ์‹œํ•˜์„ธ์š”. ๊ทธ๋Ÿฌ๋ฉด ๋ณ€๊ฒฝ ์‚ฌํ•ญ์ด ์ž๋™์œผ๋กœ Pull Request์— ๋‚˜ํƒ€๋‚ฉ๋‹ˆ๋‹ค. 
### Pull Request ์ฒดํฌ๋ฆฌ์ŠคํŠธ [[pull-request-checklist]] โ˜ Pull Request ์ œ๋ชฉ์€ ๊ธฐ์—ฌ ๋‚ด์šฉ์„ ์š”์•ฝํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค.<br> โ˜ Pull Request๊ฐ€ ์ด์Šˆ๋ฅผ ํ•ด๊ฒฐํ•˜๋Š” ๊ฒฝ์šฐ, Pull Request ์„ค๋ช…์— ์ด์Šˆ ๋ฒˆํ˜ธ๋ฅผ ์–ธ๊ธ‰ํ•˜์—ฌ ์—ฐ๊ด€๋˜์–ด ์žˆ์Œ์„ ์•Œ๋ ค์ฃผ์„ธ์š”. (์ด์Šˆ๋ฅผ ํ™•์ธํ•˜๋Š” ์‚ฌ๋žŒ๋“ค์ด ํ•ด๋‹น ์ด์Šˆ์— ๋Œ€ํ•œ ์ž‘์—…์ด ์ง„ํ–‰ ์ค‘์ž„์„ ์•Œ ์ˆ˜ ์žˆ๊ฒŒ ํ•ฉ๋‹ˆ๋‹ค).<br> โ˜ ์ž‘์—…์ด ์ง„ํ–‰์ค‘์ด๋ผ๋ฉด ์ œ๋ชฉ ์•ž์— `[WIP]`๋ฅผ ๋ถ™์—ฌ์ฃผ์„ธ์š”. ์ค‘๋ณต ์ž‘์—…์„ ํ”ผํ•˜๊ณ  ๋ณ‘ํ•ฉํ•  ์ค€๋น„๊ฐ€ ๋œ PR๊ณผ ๊ตฌ๋ถ„ํ•˜๊ธฐ์— ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค.<br> โ˜ ๊ธฐ์กด ํ…Œ์ŠคํŠธ๋ฅผ ํ†ต๊ณผํ•˜๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”.<br> โ˜ ์ƒˆ๋กœ์šด ๊ธฐ๋Šฅ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒฝ์šฐ, ํ•ด๋‹น ๊ธฐ๋Šฅ์— ๋Œ€ํ•œ ํ…Œ์ŠคํŠธ๋„ ์ถ”๊ฐ€ํ•˜์„ธ์š”.<br> - ์ƒˆ ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒฝ์šฐ, `ModelTester.all_model_classes = (MyModel, MyModelWithLMHead,...)`์„ ์‚ฌ์šฉํ•˜์—ฌ ์ผ๋ฐ˜์ ์ธ ํ…Œ์ŠคํŠธ๋ฅผ ํ™œ์„ฑํ™”ํ•˜์„ธ์š”. - ์ƒˆ `@slow` ํ…Œ์ŠคํŠธ๋ฅผ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒฝ์šฐ, ๋‹ค์Œ ๋ช…๋ น์œผ๋กœ ํ…Œ์ŠคํŠธ๋ฅผ ํ†ต๊ณผํ•˜๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: `RUN_SLOW=1 python -m pytest tests/models/my_new_model/test_my_new_model.py`. - ์ƒˆ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒฝ์šฐ, ํ…Œ์ŠคํŠธ๋ฅผ ์ž‘์„ฑํ•˜๊ณ  ๋‹ค์Œ ๋ช…๋ น์œผ๋กœ ํ…Œ์ŠคํŠธ๋ฅผ ํ†ต๊ณผํ•˜๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: `RUN_SLOW=1 python -m pytest tests/models/{your_model_name}/test_tokenization_{your_model_name}.py`. - CircleCI์—์„œ๋Š” ๋Š๋ฆฐ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜์ง€ ์•Š์ง€๋งŒ, GitHub Actions์—์„œ๋Š” ๋งค์ผ ๋ฐค ์‹คํ–‰๋ฉ๋‹ˆ๋‹ค!<br> โ˜ ๋ชจ๋“  ๊ณต๊ฐœ ๋ฉ”์†Œ๋“œ๋Š” ์œ ์šฉํ•œ ๊ธฐ์ˆ ๋ฌธ์„œ๋ฅผ ๊ฐ€์ ธ์•ผ ํ•ฉ๋‹ˆ๋‹ค (์˜ˆ๋ฅผ ๋“ค์–ด [`modeling_bert.py`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/modeling_bert.py) ์ฐธ์กฐ).<br> โ˜ ์ €์žฅ์†Œ๊ฐ€ ๋น ๋ฅด๊ฒŒ ์„ฑ์žฅํ•˜๊ณ  ์žˆ์œผ๋ฏ€๋กœ ์ €์žฅ์†Œ์— ์ƒ๋‹นํ•œ ๋ถ€๋‹ด์„ ์ฃผ๋Š” ์ด๋ฏธ์ง€, ๋™์˜์ƒ ๋ฐ ๊ธฐํƒ€ ํ…์ŠคํŠธ๊ฐ€ ์•„๋‹Œ ํŒŒ์ผ์€ ์ถ”๊ฐ€ํ•˜์ง€ ๋งˆ์„ธ์š”. ๋Œ€์‹  [`hf-internal-testing`](https://huggingface.co/hf-internal-testing)๊ณผ ๊ฐ™์€ Hub ์ €์žฅ์†Œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ด๋Ÿฌํ•œ ํŒŒ์ผ์„ ํ˜ธ์ŠคํŒ…ํ•˜๊ณ  URL๋กœ ์ฐธ์กฐํ•˜์„ธ์š”. ๋ฌธ์„œ์™€ ๊ด€๋ จ๋œ ์ด๋ฏธ์ง€๋Š” ๋‹ค์Œ ์ €์žฅ์†Œ์— ๋ฐฐ์น˜ํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค: [huggingface/documentation-images](https://huggingface.co/datasets/huggingface/documentation-images). ์ด ๋ฐ์ดํ„ฐ์…‹ ์ €์žฅ์†Œ์—์„œ PR์„ ์—ด์–ด์„œ Hugging Face ๋ฉค๋ฒ„์—๊ฒŒ ๋ณ‘ํ•ฉ์„ ์š”์ฒญํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. Pull Request์—์„œ ์‹คํ–‰๋˜๋Š” ๊ฒ€์‚ฌ์— ๋Œ€ํ•œ ์ž์„ธํ•œ ์ •๋ณด๋Š” [Pull Request์— ๋Œ€ํ•œ ๊ฒ€์‚ฌ](https://huggingface.co/docs/transformers/pr_checks) ๊ฐ€์ด๋“œ๋ฅผ ํ™•์ธํ•˜์„ธ์š”. ### ํ…Œ์ŠคํŠธ [[tests]] ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ ๋™์ž‘๊ณผ ์—ฌ๋Ÿฌ ์˜ˆ์ œ๋ฅผ ํ…Œ์ŠคํŠธํ•  ์ˆ˜ ์žˆ๋Š” ๊ด‘๋ฒ”์œ„ํ•œ ํ…Œ์ŠคํŠธ ์Šค์œ„ํŠธ๊ฐ€ ํฌํ•จ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ ํ…Œ์ŠคํŠธ๋Š” [tests](https://github.com/huggingface/transformers/tree/main/tests) ํด๋”์—, ์˜ˆ์ œ ํ…Œ์ŠคํŠธ๋Š” [examples](https://github.com/huggingface/transformers/tree/main/examples) ํด๋”์— ์žˆ์Šต๋‹ˆ๋‹ค. ์†๋„๊ฐ€ ๋น ๋ฅธ `pytest`์™€ `pytest-xdist`๋ฅผ ์„ ํ˜ธํ•ฉ๋‹ˆ๋‹ค. ์ €์žฅ์†Œ์˜ ๋ฃจํŠธ ๋””๋ ‰ํ„ฐ๋ฆฌ์—์„œ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•  *ํ•˜์œ„ ํด๋” ๊ฒฝ๋กœ ๋˜๋Š” ํ…Œ์ŠคํŠธ ํŒŒ์ผ ๊ฒฝ๋กœ*๋ฅผ ์ง€์ •ํ•˜์„ธ์š”: ```bash python -m pytest -n auto --dist=loadfile -s -v ./tests/models/my_new_model ``` ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ `examples` ๋””๋ ‰ํ„ฐ๋ฆฌ์—์„œ๋„ *ํ•˜์œ„ ํด๋” ๊ฒฝ๋กœ ๋˜๋Š” ํ…Œ์ŠคํŠธ ํŒŒ์ผ ๊ฒฝ๋กœ*๋ฅผ ์ง€์ •ํ•˜์„ธ์š”. 
์˜ˆ๋ฅผ ๋“ค์–ด, ๋‹ค์Œ ๋ช…๋ น์€ PyTorch `examples` ๋””๋ ‰ํ„ฐ๋ฆฌ์˜ ํ…์ŠคํŠธ ๋ถ„๋ฅ˜ ํ•˜์œ„ ํด๋”๋ฅผ ํ…Œ์ŠคํŠธํ•ฉ๋‹ˆ๋‹ค: ```bash pip install -r examples/xxx/requirements.txt # only needed the first time python -m pytest -n auto --dist=loadfile -s -v ./examples/pytorch/text-classification ``` ์ด๊ฒƒ์ด ์‹ค์ œ๋กœ `make test` ๋ฐ `make test-examples` ๋ช…๋ น์ด ๊ตฌํ˜„๋˜๋Š” ๋ฐฉ์‹์ž…๋‹ˆ๋‹ค (`pip install`์€ ์ œ์™ธํ•ฉ๋‹ˆ๋‹ค)! ๋˜ํ•œ ํŠน์ • ๊ธฐ๋Šฅ๋งŒ ํ…Œ์ŠคํŠธํ•˜๊ธฐ ์œ„ํ•œ ๋” ์ž‘์€ ํ…Œ์ŠคํŠธ๋ฅผ ์ง€์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ธฐ๋ณธ์ ์œผ๋กœ ๋Š๋ฆฐ ํ…Œ์ŠคํŠธ๋Š” ๊ฑด๋„ˆ๋›ฐ์ง€๋งŒ `RUN_SLOW` ํ™˜๊ฒฝ ๋ณ€์ˆ˜๋ฅผ `yes`๋กœ ์„ค์ •ํ•˜์—ฌ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ๋งŽ์€ ๊ธฐ๊ฐ€๋ฐ”์ดํŠธ ๋‹จ์œ„์˜ ๋ชจ๋ธ์ด ๋‹ค์šด๋กœ๋“œ๋˜๋ฏ€๋กœ ์ถฉ๋ถ„ํ•œ ๋””์Šคํฌ ๊ณต๊ฐ„, ์ข‹์€ ์ธํ„ฐ๋„ท ์—ฐ๊ฒฐ๊ณผ ๋งŽ์€ ์ธ๋‚ด๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค! <Tip warning={true}> ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋ ค๋ฉด *ํ•˜์œ„ ํด๋” ๊ฒฝ๋กœ ๋˜๋Š” ํ…Œ์ŠคํŠธ ํŒŒ์ผ ๊ฒฝ๋กœ*๋ฅผ ์ง€์ •ํ•˜์„ธ์š”. ๊ทธ๋ ‡์ง€ ์•Š์œผ๋ฉด `tests` ๋˜๋Š” `examples` ํด๋”์˜ ๋ชจ๋“  ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๊ฒŒ ๋˜์–ด ๋งค์šฐ ๊ธด ์‹œ๊ฐ„์ด ๊ฑธ๋ฆฝ๋‹ˆ๋‹ค! </Tip> ```bash RUN_SLOW=yes python -m pytest -n auto --dist=loadfile -s -v ./tests/models/my_new_model RUN_SLOW=yes python -m pytest -n auto --dist=loadfile -s -v ./examples/pytorch/text-classification ``` ๋Š๋ฆฐ ํ…Œ์ŠคํŠธ์™€ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ, ๋‹ค์Œ๊ณผ ๊ฐ™์ด ํ…Œ์ŠคํŠธ ์ค‘์— ๊ธฐ๋ณธ์ ์œผ๋กœ ํ™œ์„ฑํ™”๋˜์ง€ ์•Š๋Š” ๋‹ค๋ฅธ ํ™˜๊ฒฝ ๋ณ€์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: - `RUN_CUSTOM_TOKENIZERS`: ์‚ฌ์šฉ์ž ์ •์˜ ํ† ํฌ๋‚˜์ด์ € ํ…Œ์ŠคํŠธ๋ฅผ ํ™œ์„ฑํ™”ํ•ฉ๋‹ˆ๋‹ค. - `RUN_PT_FLAX_CROSS_TESTS`: PyTorch + Flax ํ†ตํ•ฉ ํ…Œ์ŠคํŠธ๋ฅผ ํ™œ์„ฑํ™”ํ•ฉ๋‹ˆ๋‹ค. - `RUN_PT_TF_CROSS_TESTS`: TensorFlow + PyTorch ํ†ตํ•ฉ ํ…Œ์ŠคํŠธ๋ฅผ ํ™œ์„ฑํ™”ํ•ฉ๋‹ˆ๋‹ค. ๋” ๋งŽ์€ ํ™˜๊ฒฝ ๋ณ€์ˆ˜์™€ ์ถ”๊ฐ€ ์ •๋ณด๋Š” [testing_utils.py](src/transformers/testing_utils.py)์—์„œ ์ฐพ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๐Ÿค— Transformers๋Š” ํ…Œ์ŠคํŠธ ์‹คํ–‰๊ธฐ๋กœ `pytest`๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ํ…Œ์ŠคํŠธ ์Šค์œ„ํŠธ ์ž์ฒด์—์„œ๋Š” `pytest` ๊ด€๋ จ ๊ธฐ๋Šฅ์„ ์‚ฌ์šฉํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ์ด๊ฒƒ์€ `unittest`๊ฐ€ ์™„์ „ํžˆ ์ง€์›๋œ๋‹ค๋Š” ๊ฒƒ์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ์€ `unittest`๋กœ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค: ```bash python -m unittest discover -s tests -t . -v python -m unittest discover -s examples -t examples -v ``` ### ์Šคํƒ€์ผ ๊ฐ€์ด๋“œ [[style-guide]] ๋ฌธ์„œ๋Š” [Google Python ์Šคํƒ€์ผ ๊ฐ€์ด๋“œ](https://google.github.io/styleguide/pyguide.html)๋ฅผ ๋”ฐ๋ฆ…๋‹ˆ๋‹ค. ์ž์„ธํ•œ ์ •๋ณด๋Š” [๋ฌธ์„œ ์ž‘์„ฑ ๊ฐ€์ด๋“œ](https://github.com/huggingface/transformers/tree/main/docs#writing-documentation---specification)๋ฅผ ํ™•์ธํ•˜์„ธ์š”. ### Windows์—์„œ ๊ฐœ๋ฐœ [[develop-on-windows]] Windows์—์„œ ๊ฐœ๋ฐœํ•  ๊ฒฝ์šฐ([Windows Subsystem for Linux](https://learn.microsoft.com/en-us/windows/wsl/) ๋˜๋Š” WSL์—์„œ ์ž‘์—…ํ•˜์ง€ ์•Š๋Š” ํ•œ) Windows `CRLF` ์ค„ ๋ฐ”๊ฟˆ์„ Linux `LF` ์ค„ ๋ฐ”๊ฟˆ์œผ๋กœ ๋ณ€ํ™˜ํ•˜๋„๋ก git์„ ๊ตฌ์„ฑํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```bash git config core.autocrlf input ``` Windows์—์„œ `make` ๋ช…๋ น์„ ์‹คํ–‰ํ•˜๋Š” ํ•œ ๊ฐ€์ง€ ๋ฐฉ๋ฒ•์€ MSYS2๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค: 1. [MSYS2](https://www.msys2.org/)๋ฅผ ๋‹ค์šด๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค. `C:\msys64`์— ์„ค์น˜๋˜์—ˆ๋‹ค๊ณ  ๊ฐ€์ •ํ•ฉ๋‹ˆ๋‹ค. 2. CLI์—์„œ `C:\msys64\msys2.exe`๋ฅผ ์—ฝ๋‹ˆ๋‹ค (์‹œ์ž‘ ๋ฉ”๋‰ด์—์„œ ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•ด์•ผ ํ•จ). 3. ์‰˜์—์„œ ๋‹ค์Œ์„ ์‹คํ–‰ํ•˜์—ฌ: `pacman -Syu` ๋ฐ `pacman -S make`๋กœ `make`๋ฅผ ์„ค์น˜ํ•ฉ๋‹ˆ๋‹ค. 4. ํ™˜๊ฒฝ ๋ณ€์ˆ˜ PATH์— `C:\msys64\usr\bin`์„ ์ถ”๊ฐ€ํ•˜์„ธ์š”. ์ด์ œ ๋ชจ๋“  ํ„ฐ๋ฏธ๋„ (PowerShell, cmd.exe ๋“ฑ)์—์„œ `make`๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! 
๐ŸŽ‰ ### ํฌํฌํ•œ ์ €์žฅ์†Œ๋ฅผ ์ƒ์œ„ ์›๋ณธ ๋ธŒ๋žœ์น˜(main)๊ณผ ๋™๊ธฐํ™”ํ•˜๊ธฐ (Hugging Face ์ €์žฅ์†Œ) [[sync-a-forked-repository-with-upstream-main-the-hugging-face-repository]] ํฌํฌํ•œ ์ €์žฅ์†Œ์˜ main ๋ธŒ๋žœ์น˜๋ฅผ ์—…๋ฐ์ดํŠธํ•  ๋•Œ, ๋‹ค์Œ ๋‹จ๊ณ„๋ฅผ ๋”ฐ๋ผ ์ˆ˜ํ–‰ํ•ด์ฃผ์„ธ์š”. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ๊ฐ upstream PR์— ์ฐธ์กฐ ๋…ธํŠธ๊ฐ€ ์ถ”๊ฐ€๋˜๋Š” ๊ฒƒ์„ ํ”ผํ•˜๊ณ  ์ด๋Ÿฌํ•œ PR์— ๊ด€์—ฌํ•˜๋Š” ๊ฐœ๋ฐœ์ž๋“ค์—๊ฒŒ ๋ถˆํ•„์š”ํ•œ ์•Œ๋ฆผ์ด ์ „์†ก๋˜๋Š” ๊ฒƒ์„ ๋ฐฉ์ง€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 1. ๊ฐ€๋Šฅํ•˜๋ฉด ํฌํฌ๋œ ์ €์žฅ์†Œ์˜ ๋ธŒ๋žœ์น˜ ๋ฐ PR์„ ์‚ฌ์šฉํ•˜์—ฌ upstream๊ณผ ๋™๊ธฐํ™”ํ•˜์ง€ ๋งˆ์„ธ์š”. ๋Œ€์‹  ํฌํฌ๋œ main ์ €์žฅ์†Œ์— ์ง์ ‘ ๋ณ‘ํ•ฉํ•˜์„ธ์š”. 2. PR์ด ๋ฐ˜๋“œ์‹œ ํ•„์š”ํ•œ ๊ฒฝ์šฐ, ๋ธŒ๋žœ์น˜๋ฅผ ํ™•์ธํ•œ ํ›„ ๋‹ค์Œ ๋‹จ๊ณ„๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”: ```bash git checkout -b your-branch-for-syncing git pull --squash --no-commit upstream main git commit -m '<your message without GitHub references>' git push --set-upstream origin your-branch-for-syncing ```
0