Got this error: AttributeError: 'NoneType' object has no attribute 'size'

#43 opened by latuan

When I run this command to dense fine-tune the model:

torchrun --nproc_per_node {number of gpus} \
-m FlagEmbedding.BGE_M3.run \
--output_dir {path to save model} \
--model_name_or_path BAAI/bge-m3 \
--train_data ./toy_train_data \
--learning_rate 1e-5 \
--fp16 \
--num_train_epochs 5 \
--per_device_train_batch_size {large batch size; set 1 for toy data} \
--dataloader_drop_last True \
--normlized True \
--temperature 0.02 \
--query_max_len 64 \
--passage_max_len 256 \
--train_group_size 2 \
--logging_steps 10 \
--same_task_within_batch True \
--unified_finetuning False \
--use_self_distill False
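
(For reference, every file under ./toy_train_data is a jsonl file in the query/pos/neg format that the FlagEmbedding fine-tuning examples describe. This is just a hypothetical sketch of how I build a toy file, with made-up content:)

import json

# One JSON object per line: a query, its positive passages, and negative passages.
toy_examples = [
    {
        "query": "what is the capital of france",
        "pos": ["Paris is the capital and largest city of France."],
        "neg": ["Berlin is the capital of Germany."],
    },
]

with open("./toy_train_data/toy.jsonl", "w", encoding="utf-8") as f:
    for example in toy_examples:
        f.write(json.dumps(example) + "\n")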

I set unified_finetuning and use_self_distill to False and got this error:

File ". /modeling.py", line 262, in forward.
    targets = idxs * (p_sparse_vecs.size(0) // q_sparse_vecs.size(0))
AttributeError: 'NoneType' object has no attribute 'size'
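
(From the traceback it looks like the forward pass still builds the sparse-retrieval targets even though, with unified_finetuning set to False, the sparse vectors are never computed and stay None. A rough sketch of the kind of guard I would expect, using hypothetical names rather than the actual FlagEmbedding code:)

import torch
import torch.nn.functional as F

def contrastive_losses(q_dense, p_dense, q_sparse, p_sparse, unified_finetuning):
    """Return (dense_loss, sparse_loss); sparse_loss is None on dense-only runs."""
    idxs = torch.arange(q_dense.size(0), device=q_dense.device)

    # Dense contrastive loss: each query's positive sits at a fixed stride in the passages.
    dense_targets = idxs * (p_dense.size(0) // q_dense.size(0))
    dense_loss = F.cross_entropy(q_dense @ p_dense.T, dense_targets)

    sparse_loss = None
    # Guard the line that crashed: only touch the sparse vectors when they exist.
    if unified_finetuning and q_sparse is not None and p_sparse is not None:
        sparse_targets = idxs * (p_sparse.size(0) // q_sparse.size(0))
        sparse_loss = F.cross_entropy(q_sparse @ p_sparse.T, sparse_targets)

    return dense_loss, sparse_loss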

How can I fix it? Thanks a lot =w=

Beijing Academy of Artificial Intelligence org

You can install the latest FlagEmbedding and try again.

Thank you for finding this bug. We have fixed it in this commit.

Thanks for the early support!!
