NLP Course documentation

Putting it all together


In the last few sections, we've been trying our best to do most of the work by hand. We've explored how tokenizers work and looked at tokenization, conversion to input IDs, padding, truncation, and attention masks.

However, as we saw in section 2, the 🤗 Transformers API can handle all of this for us with a high-level function that we'll dive into here. When you call your tokenizer directly on the sentence, you get back inputs that are ready to pass through your model:

from transformers import AutoTokenizer

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

sequence = "I've been waiting for a HuggingFace course my whole life."

model_inputs = tokenizer(sequence)

Here, the model_inputs variable contains everything that's necessary for a model to operate well. For DistilBERT, that includes the input IDs as well as the attention mask. Other models that accept additional inputs will also have those output by the tokenizer object.
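To see exactly what the tokenizer hands back, you can inspect its keys directly; a minimal sketch using the same checkpoint as above:

```python
from transformers import AutoTokenizer

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# The tokenizer output behaves like a dict
model_inputs = tokenizer("I've been waiting for a HuggingFace course my whole life.")
print(list(model_inputs.keys()))  # ['input_ids', 'attention_mask']
```

For a model with segment embeddings, such as BERT, you would also see a `token_type_ids` entry here.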

As we'll see in some examples below, this method is very powerful. First, it can tokenize a single sequence:

sequence = "I've been waiting for a HuggingFace course my whole life."

model_inputs = tokenizer(sequence)

It also handles multiple sequences at a time, with no change in the API:

sequences = ["I've been waiting for a HuggingFace course my whole life.", "So have I!"]

model_inputs = tokenizer(sequences)

It can pad according to several objectives:

# Will pad the sequences up to the maximum sequence length
model_inputs = tokenizer(sequences, padding="longest")

# Will pad the sequences up to the model max length
# (512 for BERT or DistilBERT)
model_inputs = tokenizer(sequences, padding="max_length")

# Will pad the sequences up to the specified max length
model_inputs = tokenizer(sequences, padding="max_length", max_length=8)
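To see what each strategy actually does, one can compare the lengths of the resulting input IDs. Note one subtlety: padding only ever adds tokens, so a sequence that is already longer than `max_length` is left untouched. A sketch, assuming the same checkpoint and sequences as above:

```python
from transformers import AutoTokenizer

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
sequences = ["I've been waiting for a HuggingFace course my whole life.", "So have I!"]

longest = tokenizer(sequences, padding="longest")
model_max = tokenizer(sequences, padding="max_length")
fixed = tokenizer(sequences, padding="max_length", max_length=8)

print([len(ids) for ids in longest["input_ids"]])    # pad to the longest sequence: [16, 16]
print([len(ids) for ids in model_max["input_ids"]])  # pad to the model max: [512, 512]
print([len(ids) for ids in fixed["input_ids"]])      # padding never truncates: [16, 8]
```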

It can also truncate sequences:

sequences = ["I've been waiting for a HuggingFace course my whole life.", "So have I!"]

# Will truncate the sequences that are longer than the model max length
# (512 for BERT or DistilBERT)
model_inputs = tokenizer(sequences, truncation=True)

# Will truncate the sequences that are longer than the specified max length
model_inputs = tokenizer(sequences, max_length=8, truncation=True)
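Here again, checking the lengths makes the behavior concrete: truncation cuts down only the sequences that exceed the limit, special tokens included. A sketch under the same setup as above:

```python
from transformers import AutoTokenizer

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
sequences = ["I've been waiting for a HuggingFace course my whole life.", "So have I!"]

truncated = tokenizer(sequences, max_length=8, truncation=True)
# The first sequence (16 tokens) is cut down to 8;
# the second (6 tokens) is short enough and left as-is
print([len(ids) for ids in truncated["input_ids"]])  # [8, 6]
```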

The tokenizer object can handle the conversion to specific framework tensors, which can then be sent directly to the model. For example, in the following code sample we prompt the tokenizer to return tensors from the different frameworks: "pt" returns PyTorch tensors, "tf" returns TensorFlow tensors, and "np" returns NumPy arrays:

sequences = ["I've been waiting for a HuggingFace course my whole life.", "So have I!"]

# Returns PyTorch tensors
model_inputs = tokenizer(sequences, padding=True, return_tensors="pt")

# Returns TensorFlow tensors
model_inputs = tokenizer(sequences, padding=True, return_tensors="tf")

# Returns NumPy arrays
model_inputs = tokenizer(sequences, padding=True, return_tensors="np")
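Whichever framework you pick, the result is a single 2D tensor of shape (batch size, padded sequence length), which is why `padding=True` is needed here. A sketch with the NumPy backend, assuming the same sequences as above:

```python
from transformers import AutoTokenizer

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
sequences = ["I've been waiting for a HuggingFace course my whole life.", "So have I!"]

model_inputs = tokenizer(sequences, padding=True, return_tensors="np")
# Two sequences, padded to the length of the longest (16 tokens)
print(model_inputs["input_ids"].shape)  # (2, 16)
```

Without `padding=True`, the two sequences would have different lengths and could not be stacked into one rectangular tensor.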

Special tokens

If we take a look at the input IDs returned by the tokenizer, we will see they are a tiny bit different from what we had earlier:

sequence = "I've been waiting for a HuggingFace course my whole life."

model_inputs = tokenizer(sequence)
print(model_inputs["input_ids"])

tokens = tokenizer.tokenize(sequence)
ids = tokenizer.convert_tokens_to_ids(tokens)
print(ids)
[101, 1045, 1005, 2310, 2042, 3403, 2005, 1037, 17662, 12172, 2607, 2026, 2878, 2166, 1012, 102]
[1045, 1005, 2310, 2042, 3403, 2005, 1037, 17662, 12172, 2607, 2026, 2878, 2166, 1012]

One token ID was added at the beginning, and one at the end. Let's decode the two sequences of IDs above to see what this is about:

print(tokenizer.decode(model_inputs["input_ids"]))
print(tokenizer.decode(ids))
"[CLS] i've been waiting for a huggingface course my whole life. [SEP]"
"i've been waiting for a huggingface course my whole life."

The tokenizer added the special word [CLS] at the beginning and the special word [SEP] at the end. This is because the model was pretrained with those, so to get the same results for inference we need to add them as well. Note that some models don't add special words, or add different ones; models may also add these special words only at the beginning, or only at the end. In any case, the tokenizer knows which ones are expected and will deal with this for you.
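If you ever need the IDs without the special tokens, you can pass `add_special_tokens=False` when calling the tokenizer; a sketch showing that this reproduces the manual tokenize-and-convert path from above:

```python
from transformers import AutoTokenizer

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
sequence = "I've been waiting for a HuggingFace course my whole life."

with_special = tokenizer(sequence)["input_ids"]
without_special = tokenizer(sequence, add_special_tokens=False)["input_ids"]

# Stripping [CLS] (ID 101) and [SEP] (ID 102) recovers the shorter list
print(with_special[1:-1] == without_special)  # True
```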

Wrapping up: From tokenizer to model

Now that we've seen all the individual steps the tokenizer object uses when applied to texts, let's see one final time how it can handle multiple sequences (padding!), very long sequences (truncation!), and multiple types of tensors with its main API:

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
sequences = ["I've been waiting for a HuggingFace course my whole life.", "So have I!"]

tokens = tokenizer(sequences, padding=True, truncation=True, return_tensors="pt")
output = model(**tokens)
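From there, `output.logits` holds one row of raw scores per input sequence; as in section 2, a softmax turns them into probabilities. A sketch continuing the pipeline above:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
sequences = ["I've been waiting for a HuggingFace course my whole life.", "So have I!"]

tokens = tokenizer(sequences, padding=True, truncation=True, return_tensors="pt")
output = model(**tokens)

# One row per sequence, one column per label (2 for this SST-2 checkpoint)
print(output.logits.shape)  # torch.Size([2, 2])

# Softmax over the label dimension gives per-class probabilities
predictions = torch.nn.functional.softmax(output.logits, dim=-1)
print(predictions)
```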