The provided example fails with an error when run. Do I need to set any special parameters?

#12
by fukai - opened

[screenshot attached]
Python 3.8.10 / CUDA 12.1 / transformers 4.40.1 / flash-attn 2.5.6 / GPU: V100

Alibaba-NLP org

Hi, could you share the complete code of gte_test.py? In my experiments the code runs without problems.

I ran into the same problem. The code is exactly the transformers snippet from the README.

The embedding layer is initialized with the vocab_size from the config file, which is 151936:

self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)

However, the token ids produced by the tokenizer include values greater than 151936:

{"input_ids": 
   [[360,36716,12,77878,19,10,1467,33938,41,1294,4,456,3996,3134,29191,46692,7,450,35166,70,41,1294,58836,53,12,3642,5045,21308,5608,10,117776,73203,250002],[250002,250002,250002,250002,250002,360,36716,12,77878,19,10,1467,33938,41,1294,4,456,3996,3134,29191,46692,7,450,35166,70,41,1294,58836,53,12,184679,61924,250002]],
"attention_mask":
[[1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1],[0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1]]}

This causes: IndexError: index out of range in self

I see the config file specifies the id of "&lt;|endoftext|&gt;" as 151643, but for some reason that does not take effect; it actually maps to the wrong id 250002. I also have no idea where the id 184679 comes from.
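
For reference, here is a minimal sanity check that surfaces the mismatch before the forward pass (model_path is a placeholder for a local copy of the model):

from transformers import AutoConfig, AutoTokenizer

model_path = "/path/to/gte_Qwen1.5-7B-instruct"  # hypothetical local path

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)

batch = tokenizer(["summit define"], return_tensors="pt")
max_id = batch["input_ids"].max().item()

# nn.Embedding(config.vocab_size, ...) only accepts ids in [0, vocab_size - 1],
# so any id >= vocab_size raises "IndexError: index out of range in self".
print(f"max token id: {max_id}, vocab_size: {config.vocab_size}")
assert max_id < config.vocab_size, "tokenizer and model vocab are out of sync"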

Alibaba-NLP org

Also, could you share the exact text that was passed to the tokenizer in your example?

Environment

transformers              4.39.3
torch                     2.2.2

Code used

import torch
import torch.nn.functional as F

from torch import Tensor
from transformers import AutoTokenizer, AutoModel


# Downloaded from https://www.modelscope.cn/iic/gte_Qwen1.5-7B-instruct.git
# commit 5b1a8dc (grafted, HEAD -> master, origin/master, origin/HEAD) Update README.md
model_path = "/home/robin/github/gte_Qwen1.5-7B-instruct"


def last_token_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor:
    # If the last position is attended in every row, the batch is left-padded,
    # so the final hidden state already holds each row's last real token.
    left_padding = attention_mask[:, -1].sum() == attention_mask.shape[0]
    if left_padding:
        return last_hidden_states[:, -1]
    else:
        # Right padding: gather each row's hidden state at its last real token.
        sequence_lengths = attention_mask.sum(dim=1) - 1
        batch_size = last_hidden_states.shape[0]
        return last_hidden_states[
            torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths
        ]


def get_detailed_instruct(task_description: str, query: str) -> str:
    return f"Instruct: {task_description}\nQuery: {query}"


# Each query must come with a one-sentence instruction that describes the task
task = "Given a web search query, retrieve relevant passages that answer the query"
queries = [
    get_detailed_instruct(task, "how much protein should a female eat"),
    get_detailed_instruct(task, "summit define"),
]
# No need to add instruction for retrieval documents
documents = [
    "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
    "Definition of summit for English Language Learners. : 1  the highest point of a mountain : the top of a mountain. : 2  the highest level. : 3  a meeting or series of meetings between the leaders of two or more governments.",
]
input_texts = queries + documents

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
max_length = 8192

# Tokenize the input texts
batch_dict = tokenizer(
    input_texts,
    max_length=max_length,
    padding=True,
    truncation=True,
    return_tensors="pt",
)


model = AutoModel.from_pretrained(
    model_path, trust_remote_code=True, torch_dtype=torch.bfloat16
)

outputs = model(**batch_dict)
embeddings = last_token_pool(outputs.last_hidden_state, batch_dict["attention_mask"])

# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())

Error message

Traceback (most recent call last):
  File "/home/robin/github/gte_Qwen1.5-7B-instruct/main.py", line 56, in <module>
    outputs = model(**batch_dict)
  File "/home/robin/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/robin/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/robin/.cache/huggingface/modules/transformers_modules/gte_Qwen1.5-7B-instruct/modeling_qwen.py", line 1013, in forward
    inputs_embeds = self.embed_tokens(input_ids)
  File "/home/robin/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/robin/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/robin/.local/lib/python3.10/site-packages/torch/nn/modules/sparse.py", line 163, in forward
    return F.embedding(
  File "/home/robin/.local/lib/python3.10/site-packages/torch/nn/functional.py", line 2237, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
IndexError: index out of range in self

Generated token ids

{'input_ids': tensor([[250002, 250002, 250002, 250002, 250002, 250002, 250002, 250002, 250002,
         250002, 250002, 250002, 250002, 250002, 250002, 250002, 250002, 250002,
         250002, 250002, 250002, 250002, 250002, 250002, 250002, 250002, 250002,
         250002, 250002, 250002, 250002, 250002, 250002, 250002, 250002, 250002,
         250002, 250002, 250002, 250002, 250002, 250002, 250002, 250002,    360,
          36716,     12,  77878,     19,     10,   1467,  33938,     41,   1294,
              4,    456,   3996,   3134,  29191,  46692,      7,    450,  35166,
             70,     41,   1294,  58836,     53,     12,   3642,   5045,  21308,
           5608,     10, 117776,  73203, 250002],
        [250002, 250002, 250002, 250002, 250002, 250002, 250002, 250002, 250002,
         250002, 250002, 250002, 250002, 250002, 250002, 250002, 250002, 250002,
         250002, 250002, 250002, 250002, 250002, 250002, 250002, 250002, 250002,
         250002, 250002, 250002, 250002, 250002, 250002, 250002, 250002, 250002,
         250002, 250002, 250002, 250002, 250002, 250002, 250002, 250002, 250002,
         250002, 250002, 250002, 250002,    360,  36716,     12,  77878,     19,
             10,   1467,  33938,     41,   1294,      4,    456,   3996,   3134,
          29191,  46692,      7,    450,  35166,     70,     41,   1294,  58836,
             53,     12, 184679,  61924, 250002],
        [  1301,     10,   4537,  17997,   2256,      4,     70,   7915,    441,
             25,      7,  83080,  64209,    674,    111,  21308,    100,  24793,
             10,   4188,    953,     47,   2358,     83,   7621,  16190,      7,
            117,   5155,      5,   4966,      4,    237,    398,    831,   1957,
           1295,    903, 116287,      4,    398,     25,   1181,   3871,     47,
          51312,    450,   2174,    398,     25,    107,  41206,    214,    707,
          23189,    100,     10, 179365,      5,  38679,   1810,     70, 116287,
          35064,     47,   1957,   3642,   5045,  21308,    398,   5608,    186,
         118992,  12638,   5155,      5, 250002],
        [250002, 250002, 250002, 250002, 250002, 250002, 250002, 250002, 250002,
         250002, 250002, 250002, 250002, 250002, 250002, 250002, 250002, 250002,
         250002, 250002, 250002, 250002, 250002, 250002, 250002, 250002, 250002,
         155455,    111, 184679,    100,  14941,  83658, 134031,   1314,      5,
            152,    106,     70, 167375,   6275,    111,     10, 101120,    152,
             70,   2663,    111,     10, 101120,      5,    152,    116,     70,
         167375,  17366,      5,    152,    138,     10,  41714,    707,  36549,
            111,  41714,      7,  17721,     70, 119893,    111,   6626,    707,
           1286,  27759,      7,      5, 250002]]), 'attention_mask': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
         0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1,
         1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
         1, 1, 1, 1, 1],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
         0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
         0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
         1, 1, 1, 1, 1],
        [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
         1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
         1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
         1, 1, 1, 1, 1],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
         0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
         1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
         1, 1, 1, 1, 1]])}
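
(The padding id 250002 in the dump above is simply whatever pad token the loaded tokenizer.json defines; a quick way to inspect this, reusing the tokenizer object from the script, is:)

# Inspect the special tokens the loaded tokenizer actually uses; with the
# broken tokenizer.json these presumably disagree with the config's 151643.
print(tokenizer.special_tokens_map)
print("pad_token_id:", tokenizer.pad_token_id)
print("eos_token_id:", tokenizer.eos_token_id)
print('"<|endoftext|>" ->', tokenizer.convert_tokens_to_ids("<|endoftext|>"))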
Alibaba-NLP org

Thanks for providing the example 🙏. After checking, it turns out the tokenizer.json in the modelscope version was broken; it has been fixed now. Please update and try again.
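
After updating the local clone (e.g. with git pull), a minimal verification along these lines should pass; model_path is the same placeholder as in the script above:

from transformers import AutoConfig, AutoTokenizer

model_path = "/path/to/gte_Qwen1.5-7B-instruct"  # local clone, after updating

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)

batch = tokenizer(["summit define"], padding=True, return_tensors="pt")
# With the repaired tokenizer.json, every id fits inside the embedding table
# and "<|endoftext|>" maps back to the id declared in the config (151643).
assert batch["input_ids"].max().item() < config.vocab_size
assert tokenizer.convert_tokens_to_ids("<|endoftext|>") == 151643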

It works now!

zyznull changed discussion status to closed
