Difference between the 2 models pytorch_model-00001-of-00002.bin and pytorch_model-00002-of-00002.bin

#3 opened by nicoleds

Hi,

What's the difference between the 2 models pytorch_model-00001-of-00002.bin and pytorch_model-00002-of-00002.bin? And which model is used for inference when we run the code below:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("TheBloke/wizardLM-7B-HF")
model = AutoModelForCausalLM.from_pretrained("TheBloke/wizardLM-7B-HF")
```

Thanks

Both are used. It's called sharding: larger models get split up into multiple smaller files. It makes downloads easier (multiple threads can grab multiple files at once, which is often faster), and it used to help with loading models onto multiple GPUs (one shard per GPU). That's no longer needed, since models can now be split across GPUs on a per-layer basis, as in the sketch below. But sharding is still commonplace for unquantised models like this one.
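For example, here's a minimal sketch (not from this thread) of per-layer splitting: passing device_map="auto" lets the accelerate package spread the layers of the sharded checkpoint across whatever GPUs and CPU RAM you have. The device_map and torch.float16 arguments are my own additions for illustration, not part of the question's code.

```python
# Minimal sketch: load a sharded fp16 checkpoint split per-layer across devices.
# Requires the accelerate package for device_map="auto".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TheBloke/wizardLM-7B-HF")
model = AutoModelForCausalLM.from_pretrained(
    "TheBloke/wizardLM-7B-HF",
    torch_dtype=torch.float16,   # fp16 weights, matching the checkpoint
    device_map="auto",           # accelerate places layers across GPUs/CPU
)
```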

Expect to see:

  • Two files for 7B fp16 models
  • Three files for 13B fp16 models
  • Seven files for 33B fp16 models
  • Fourteen files for 65B fp16 models

It is possible to specify an arbitrary shard size, so some models have smaller shards and therefore many more files.
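If you want to see that for yourself, here's a rough, hypothetical example of re-saving a model with a smaller max_shard_size so the same weights land in more, smaller files (the output directory name is made up for this example):

```python
# Illustrative only: re-shard a model by saving it with a smaller shard size.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("TheBloke/wizardLM-7B-HF")
model.save_pretrained("wizardlm-resharded", max_shard_size="2GB")
```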

There's a JSON file called pytorch_model.bin.index.json that tells the code which shard files exist and which weights live in each file.
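Purely as an illustration (and assuming you already have a local copy of the repo; the path below is a placeholder), you can open that index yourself and count how many tensors each shard holds:

```python
# Sketch: inspect pytorch_model.bin.index.json from a locally downloaded repo.
import json
from collections import Counter

with open("wizardLM-7B-HF/pytorch_model.bin.index.json") as f:
    index = json.load(f)

# "weight_map" maps each parameter name to the shard file that contains it.
shard_counts = Counter(index["weight_map"].values())
for shard, n_tensors in sorted(shard_counts.items()):
    print(f"{shard}: {n_tensors} tensors")
```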

TL;DR: Always download the whole repo and then don't worry about how many files there are. Transformers sorts it all out automatically when you call model = AutoModelForCausalLM.from_pretrained(...), loading every shard that model needs.
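One optional way to grab the whole repo up front is huggingface_hub's snapshot_download; from_pretrained() will also download and cache everything on first use, so this is just a convenience sketch:

```python
# Sketch: download every file in the repo (both shards plus the index) in one go.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="TheBloke/wizardLM-7B-HF")
print(local_dir)  # path to the cached copy of the repo
```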
