Dataset columns: `response` (string, 1 to 33.1k characters) and `instruction` (string, 22 to 582k characters). Each row pairs a function docstring (`instruction`) with the corresponding function source (`response`).
Creates a set of `DataLoader`s for the `glue` dataset, using "bert-base-cased" as the tokenizer. Args: accelerator (`Accelerator`): An `Accelerator` object batch_size (`int`, *optional*): The batch size for the train and validation DataLoaders.
def get_dataloaders(accelerator: Accelerator, batch_size: int = 16): """ Creates a set of `DataLoader`s for the `glue` dataset, using "bert-base-cased" as the tokenizer. Args: accelerator (`Accelerator`): An `Accelerator` object batch_size (`int`, *optional*): The batch size for the train and validation DataLoaders. """ tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") datasets = load_dataset("glue", "mrpc") def tokenize_function(examples): # max_length=None => use the model max length (it's actually the default) outputs = tokenizer(examples["sentence1"], examples["sentence2"], truncation=True, max_length=None) return outputs # Apply the method we just defined to all the examples in all the splits of the dataset # starting with the main process first: with accelerator.main_process_first(): tokenized_datasets = datasets.map( tokenize_function, batched=True, remove_columns=["idx", "sentence1", "sentence2"], ) # We also rename the 'label' column to 'labels' which is the expected name for labels by the models of the # transformers library tokenized_datasets = tokenized_datasets.rename_column("label", "labels") def collate_fn(examples): # For Torchxla, it's best to pad everything to the same length or training will be very slow. max_length = 128 if accelerator.distributed_type == DistributedType.XLA else None # When using mixed precision we want round multiples of 8/16 if accelerator.mixed_precision == "fp8": pad_to_multiple_of = 16 elif accelerator.mixed_precision != "no": pad_to_multiple_of = 8 else: pad_to_multiple_of = None return tokenizer.pad( examples, padding="longest", max_length=max_length, pad_to_multiple_of=pad_to_multiple_of, return_tensors="pt", ) # Instantiate dataloaders. train_dataloader = DataLoader( tokenized_datasets["train"], shuffle=True, collate_fn=collate_fn, batch_size=batch_size, drop_last=True ) eval_dataloader = DataLoader( tokenized_datasets["validation"], shuffle=False, collate_fn=collate_fn, batch_size=EVAL_BATCH_SIZE, drop_last=(accelerator.mixed_precision == "fp8"), ) return train_dataloader, eval_dataloader
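For context, a minimal sketch of how these dataloaders are typically consumed in the surrounding training script. The model, optimizer, learning rate, and the module-level `EVAL_BATCH_SIZE` constant shown here are illustrative assumptions, not part of the function above.

```python
import torch
from accelerate import Accelerator
from transformers import AutoModelForSequenceClassification

EVAL_BATCH_SIZE = 32  # assumed module-level constant referenced by get_dataloaders

accelerator = Accelerator(mixed_precision="fp16")
train_dataloader, eval_dataloader = get_dataloaders(accelerator, batch_size=16)

model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# prepare() wraps/places everything for the current device, distributed setup and mixed precision.
model, optimizer, train_dataloader, eval_dataloader = accelerator.prepare(
    model, optimizer, train_dataloader, eval_dataloader
)

model.train()
for batch in train_dataloader:
    outputs = model(**batch)
    accelerator.backward(outputs.loss)
    optimizer.step()
    optimizer.zero_grad()
```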
Creates a set of `DataLoader`s for the `glue` dataset, using "bert-base-cased" as the tokenizer. Args: accelerator (`Accelerator`): An `Accelerator` object batch_size (`int`, *optional*): The batch size for the train and validation DataLoaders.
def get_dataloaders(accelerator: Accelerator, batch_size: int = 16): """ Creates a set of `DataLoader`s for the `glue` dataset, using "bert-base-cased" as the tokenizer. Args: accelerator (`Accelerator`): An `Accelerator` object batch_size (`int`, *optional*): The batch size for the train and validation DataLoaders. """ tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") datasets = load_dataset("glue", "mrpc") def tokenize_function(examples): # max_length=None => use the model max length (it's actually the default) outputs = tokenizer(examples["sentence1"], examples["sentence2"], truncation=True, max_length=None) return outputs # Apply the method we just defined to all the examples in all the splits of the dataset # starting with the main process first: with accelerator.main_process_first(): tokenized_datasets = datasets.map( tokenize_function, batched=True, remove_columns=["idx", "sentence1", "sentence2"], ) # We also rename the 'label' column to 'labels' which is the expected name for labels by the models of the # transformers library tokenized_datasets = tokenized_datasets.rename_column("label", "labels") def collate_fn(examples): # When using mixed precision we want round multiples of 8/16 if accelerator.mixed_precision == "fp8": pad_to_multiple_of = 16 elif accelerator.mixed_precision != "no": pad_to_multiple_of = 8 else: pad_to_multiple_of = None return tokenizer.pad( examples, padding="longest", pad_to_multiple_of=pad_to_multiple_of, return_tensors="pt", ) # Instantiate dataloaders. train_dataloader = DataLoader( tokenized_datasets["train"], shuffle=True, collate_fn=collate_fn, batch_size=batch_size ) eval_dataloader = DataLoader( tokenized_datasets["validation"], shuffle=False, collate_fn=collate_fn, batch_size=EVAL_BATCH_SIZE ) return train_dataloader, eval_dataloader
Creates a set of `DataLoader`s for the `glue` dataset, using "bert-base-cased" as the tokenizer. Args: accelerator (`Accelerator`): An `Accelerator` object batch_size (`int`, *optional*): The batch size for the train and validation DataLoaders.
def get_dataloaders(accelerator: Accelerator, batch_size: int = 16): """ Creates a set of `DataLoader`s for the `glue` dataset, using "bert-base-cased" as the tokenizer. Args: accelerator (`Accelerator`): An `Accelerator` object batch_size (`int`, *optional*): The batch size for the train and validation DataLoaders. """ tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") datasets = load_dataset("glue", "mrpc") def tokenize_function(examples): # max_length=None => use the model max length (it's actually the default) outputs = tokenizer(examples["sentence1"], examples["sentence2"], truncation=True, max_length=None) return outputs # Apply the method we just defined to all the examples in all the splits of the dataset # starting with the main process first: with accelerator.main_process_first(): tokenized_datasets = datasets.map( tokenize_function, batched=True, remove_columns=["idx", "sentence1", "sentence2"], ) # We also rename the 'label' column to 'labels' which is the expected name for labels by the models of the # transformers library tokenized_datasets = tokenized_datasets.rename_column("label", "labels") def collate_fn(examples): # On TPU it's best to pad everything to the same length or training will be very slow. max_length = 128 if accelerator.distributed_type == DistributedType.XLA else None # When using mixed precision we want round multiples of 8/16 if accelerator.mixed_precision == "fp8": pad_to_multiple_of = 16 elif accelerator.mixed_precision != "no": pad_to_multiple_of = 8 else: pad_to_multiple_of = None return tokenizer.pad( examples, padding="longest", max_length=max_length, pad_to_multiple_of=pad_to_multiple_of, return_tensors="pt", ) # Instantiate dataloaders. train_dataloader = DataLoader( tokenized_datasets["train"], shuffle=True, collate_fn=collate_fn, batch_size=batch_size ) eval_dataloader = DataLoader( tokenized_datasets["validation"], shuffle=False, collate_fn=collate_fn, batch_size=EVAL_BATCH_SIZE ) return train_dataloader, eval_dataloader
Gets a set of train, valid, and test dataloaders for a particular fold Args: accelerator (`Accelerator`): The main `Accelerator` object train_idxs (list of `int`): The split indices for the training dataset valid_idxs (list of `int`): The split indices for the validation dataset batch_size (`int`): The size of the minibatch. Default is 16
def get_fold_dataloaders( accelerator: Accelerator, dataset: DatasetDict, train_idxs: List[int], valid_idxs: List[int], batch_size: int = 16 ): """ Gets a set of train, valid, and test dataloaders for a particular fold Args: accelerator (`Accelerator`): The main `Accelerator` object train_idxs (list of `int`): The split indices for the training dataset valid_idxs (list of `int`): The split indices for the validation dataset batch_size (`int`): The size of the minibatch. Default is 16 """ tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") datasets = DatasetDict( { "train": dataset["train"].select(train_idxs), "validation": dataset["train"].select(valid_idxs), "test": dataset["validation"], } ) def tokenize_function(examples): # max_length=None => use the model max length (it's actually the default) outputs = tokenizer(examples["sentence1"], examples["sentence2"], truncation=True, max_length=None) return outputs # Apply the method we just defined to all the examples in all the splits of the dataset # starting with the main process first: with accelerator.main_process_first(): tokenized_datasets = datasets.map( tokenize_function, batched=True, remove_columns=["idx", "sentence1", "sentence2"], ) # We also rename the 'label' column to 'labels' which is the expected name for labels by the models of the # transformers library tokenized_datasets = tokenized_datasets.rename_column("label", "labels") def collate_fn(examples): # On TPU it's best to pad everything to the same length or training will be very slow. max_length = 128 if accelerator.distributed_type == DistributedType.XLA else None # When using mixed precision we want round multiples of 8/16 if accelerator.mixed_precision == "fp8": pad_to_multiple_of = 16 elif accelerator.mixed_precision != "no": pad_to_multiple_of = 8 else: pad_to_multiple_of = None return tokenizer.pad( examples, padding="longest", max_length=max_length, pad_to_multiple_of=pad_to_multiple_of, return_tensors="pt", ) # Instantiate dataloaders. train_dataloader = DataLoader( tokenized_datasets["train"], shuffle=True, collate_fn=collate_fn, batch_size=batch_size ) eval_dataloader = DataLoader( tokenized_datasets["validation"], shuffle=False, collate_fn=collate_fn, batch_size=EVAL_BATCH_SIZE ) test_dataloader = DataLoader( tokenized_datasets["test"], shuffle=False, collate_fn=collate_fn, batch_size=EVAL_BATCH_SIZE ) return train_dataloader, eval_dataloader, test_dataloader
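An illustrative cross-validation driver for this helper follows; the `StratifiedKFold` strategy and the number of folds are assumptions rather than something `get_fold_dataloaders` prescribes, and `EVAL_BATCH_SIZE` is again assumed to be defined at module level.

```python
from accelerate import Accelerator
from datasets import load_dataset
from sklearn.model_selection import StratifiedKFold

accelerator = Accelerator()
datasets = load_dataset("glue", "mrpc")
labels = datasets["train"]["label"]

# Each fold re-splits the original training set; the original validation split serves as the test set.
for train_idxs, valid_idxs in StratifiedKFold(n_splits=3).split(labels, labels):
    train_dl, eval_dl, test_dl = get_fold_dataloaders(
        accelerator, datasets, list(train_idxs), list(valid_idxs), batch_size=16
    )
    # ... fine-tune on train_dl, pick the best epoch on eval_dl, report the fold score on test_dl ...
```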
Creates a set of `DataLoader`s for the `glue` dataset, using "bert-base-cased" as the tokenizer. Args: accelerator (`Accelerator`): An `Accelerator` object batch_size (`int`, *optional*): The batch size for the train and validation DataLoaders.
def get_dataloaders(accelerator: Accelerator, batch_size: int = 16): """ Creates a set of `DataLoader`s for the `glue` dataset, using "bert-base-cased" as the tokenizer. Args: accelerator (`Accelerator`): An `Accelerator` object batch_size (`int`, *optional*): The batch size for the train and validation DataLoaders. """ tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") datasets = load_dataset("glue", "mrpc") def tokenize_function(examples): # max_length=None => use the model max length (it's actually the default) outputs = tokenizer(examples["sentence1"], examples["sentence2"], truncation=True, max_length=None) return outputs # Apply the method we just defined to all the examples in all the splits of the dataset # starting with the main process first: with accelerator.main_process_first(): tokenized_datasets = datasets.map( tokenize_function, batched=True, remove_columns=["idx", "sentence1", "sentence2"], ) # We also rename the 'label' column to 'labels' which is the expected name for labels by the models of the # transformers library tokenized_datasets = tokenized_datasets.rename_column("label", "labels") def collate_fn(examples): # On TPU it's best to pad everything to the same length or training will be very slow. max_length = 128 if accelerator.distributed_type == DistributedType.XLA else None # When using mixed precision we want round multiples of 8/16 if accelerator.mixed_precision == "fp8": pad_to_multiple_of = 16 elif accelerator.mixed_precision != "no": pad_to_multiple_of = 8 else: pad_to_multiple_of = None return tokenizer.pad( examples, padding="longest", max_length=max_length, pad_to_multiple_of=pad_to_multiple_of, return_tensors="pt", ) # Instantiate dataloaders. train_dataloader = DataLoader( tokenized_datasets["train"], shuffle=True, collate_fn=collate_fn, batch_size=batch_size, drop_last=True ) eval_dataloader = DataLoader( tokenized_datasets["validation"], shuffle=False, collate_fn=collate_fn, batch_size=EVAL_BATCH_SIZE, drop_last=(accelerator.mixed_precision == "fp8"), ) return train_dataloader, eval_dataloader
A context manager under which models are initialized with all parameters on the meta device, therefore creating an empty model. Useful when just initializing the model would blow the available RAM. Args: include_buffers (`bool`, *optional*): Whether or not to also put all buffers on the meta device while initializing. Example: ```python import torch.nn as nn from accelerate import init_empty_weights # Initialize a model with 100 billions parameters in no time and without using any RAM. with init_empty_weights(): tst = nn.Sequential(*[nn.Linear(10000, 10000) for _ in range(1000)]) ``` <Tip warning={true}> Any model created under this context manager has no weights. As such you can't do something like `model.to(some_device)` with it. To load weights inside your empty model, see [`load_checkpoint_and_dispatch`]. Make sure to overwrite the default device_map param for [`load_checkpoint_and_dispatch`], otherwise dispatch is not called. </Tip>
def init_empty_weights(include_buffers: bool = None): """ A context manager under which models are initialized with all parameters on the meta device, therefore creating an empty model. Useful when just initializing the model would blow the available RAM. Args: include_buffers (`bool`, *optional*): Whether or not to also put all buffers on the meta device while initializing. Example: ```python import torch.nn as nn from accelerate import init_empty_weights # Initialize a model with 100 billions parameters in no time and without using any RAM. with init_empty_weights(): tst = nn.Sequential(*[nn.Linear(10000, 10000) for _ in range(1000)]) ``` <Tip warning={true}> Any model created under this context manager has no weights. As such you can't do something like `model.to(some_device)` with it. To load weights inside your empty model, see [`load_checkpoint_and_dispatch`]. Make sure to overwrite the default device_map param for [`load_checkpoint_and_dispatch`], otherwise dispatch is not called. </Tip> """ if include_buffers is None: include_buffers = parse_flag_from_env("ACCELERATE_INIT_INCLUDE_BUFFERS", False) with init_on_device(torch.device("meta"), include_buffers=include_buffers) as f: yield f
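As a quick sanity check of what the context manager does, the sketch below (assuming default settings, where buffers are not included) verifies that parameters land on the meta device and therefore hold no data:

```python
import torch.nn as nn
from accelerate import init_empty_weights

with init_empty_weights():
    model = nn.Sequential(nn.Linear(10000, 10000), nn.Linear(10000, 10000))

# Parameters have shapes and dtypes but no storage behind them.
print(next(model.parameters()).device)  # meta
# Neither model.to("cpu") nor a forward pass will work until real weights are loaded,
# e.g. via load_checkpoint_and_dispatch.
```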
A context manager under which models are initialized with all parameters on the specified device. Args: device (`torch.device`): Device to initialize all parameters on. include_buffers (`bool`, *optional*): Whether or not to also put all buffers on the specified device while initializing. Example: ```python import torch.nn as nn from accelerate import init_on_device with init_on_device(device=torch.device("cuda")): tst = nn.Linear(100, 100) # on `cuda` device ```
def init_on_device(device: torch.device, include_buffers: bool = None): """ A context manager under which models are initialized with all parameters on the specified device. Args: device (`torch.device`): Device to initialize all parameters on. include_buffers (`bool`, *optional*): Whether or not to also put all buffers on the specified device while initializing. Example: ```python import torch.nn as nn from accelerate import init_on_device with init_on_device(device=torch.device("cuda")): tst = nn.Linear(100, 100) # on `cuda` device ``` """ if include_buffers is None: include_buffers = parse_flag_from_env("ACCELERATE_INIT_INCLUDE_BUFFERS", False) # TODO(shingjan): remove the torch version check once older versions are deprecated if is_torch_version(">=", "2.0") and include_buffers: with device: yield return old_register_parameter = nn.Module.register_parameter if include_buffers: old_register_buffer = nn.Module.register_buffer def register_empty_parameter(module, name, param): old_register_parameter(module, name, param) if param is not None: param_cls = type(module._parameters[name]) kwargs = module._parameters[name].__dict__ kwargs["requires_grad"] = param.requires_grad module._parameters[name] = param_cls(module._parameters[name].to(device), **kwargs) def register_empty_buffer(module, name, buffer, persistent=True): old_register_buffer(module, name, buffer, persistent=persistent) if buffer is not None: module._buffers[name] = module._buffers[name].to(device) # Patch tensor creation if include_buffers: tensor_constructors_to_patch = { torch_function_name: getattr(torch, torch_function_name) for torch_function_name in ["empty", "zeros", "ones", "full"] } else: tensor_constructors_to_patch = {} def patch_tensor_constructor(fn): def wrapper(*args, **kwargs): kwargs["device"] = device return fn(*args, **kwargs) return wrapper try: nn.Module.register_parameter = register_empty_parameter if include_buffers: nn.Module.register_buffer = register_empty_buffer for torch_function_name in tensor_constructors_to_patch.keys(): setattr(torch, torch_function_name, patch_tensor_constructor(getattr(torch, torch_function_name))) yield finally: nn.Module.register_parameter = old_register_parameter if include_buffers: nn.Module.register_buffer = old_register_buffer for torch_function_name, old_torch_function in tensor_constructors_to_patch.items(): setattr(torch, torch_function_name, old_torch_function)
Activates full CPU offload for a model. As a result, all parameters of the model will be offloaded and only one copy of the state dict of the model will be kept. During the forward pass, parameters will be extracted from that state dict and put on the execution device passed as they are needed, then offloaded again. Args: model (`torch.nn.Module`): The model to offload. execution_device (`torch.device`, *optional*): The device on which the forward pass of the model will be executed (should be a GPU). Will default to the model first parameter device. offload_buffers (`bool`, *optional*, defaults to `False`): Whether or not to offload the buffers with the model parameters. state_dict (`Dict[str, torch.Tensor]`, *optional*): The state dict of the model that will be kept on CPU. preload_module_classes (`List[str]`, *optional*): A list of classes whose instances should load all their weights (even in the submodules) at the beginning of the forward. This should only be used for classes that have submodules which are registered but not called directly during the forward, for instance if a `dense` linear layer is registered, but at forward, `dense.weight` and `dense.bias` are used in some operations instead of calling `dense` directly.
def cpu_offload( model: nn.Module, execution_device: Optional[torch.device] = None, offload_buffers: bool = False, state_dict: Optional[Dict[str, torch.Tensor]] = None, preload_module_classes: Optional[List[str]] = None, ): """ Activates full CPU offload for a model. As a result, all parameters of the model will be offloaded and only one copy of the state dict of the model will be kept. During the forward pass, parameters will be extracted from that state dict and put on the execution device passed as they are needed, then offloaded again. Args: model (`torch.nn.Module`): The model to offload. execution_device (`torch.device`, *optional*): The device on which the forward pass of the model will be executed (should be a GPU). Will default to the model first parameter device. offload_buffers (`bool`, *optional*, defaults to `False`): Whether or not to offload the buffers with the model parameters. state_dict (`Dict[str, torch.Tensor]`, *optional*): The state dict of the model that will be kept on CPU. preload_module_classes (`List[str]`, *optional*): A list of classes whose instances should load all their weights (even in the submodules) at the beginning of the forward. This should only be used for classes that have submodules which are registered but not called directly during the forward, for instance if a `dense` linear layer is registered, but at forward, `dense.weight` and `dense.bias` are used in some operations instead of calling `dense` directly. """ if execution_device is None: execution_device = next(iter(model.parameters())).device if state_dict is None: state_dict = {n: p.to("cpu") for n, p in model.state_dict().items()} add_hook_to_module(model, AlignDevicesHook(io_same_device=True), append=True) attach_align_device_hook( model, execution_device=execution_device, offload=True, offload_buffers=offload_buffers, weights_map=state_dict, preload_module_classes=preload_module_classes, ) return model
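A hypothetical end-to-end usage sketch (the toy model and the availability of a CUDA device are assumptions):

```python
import torch
from torch import nn
from accelerate import cpu_offload

# Small stand-in for a model that would not fit in GPU memory.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 2))

# Keeps a single CPU copy of the weights; each submodule is moved to the GPU only for its own forward.
model = cpu_offload(model, execution_device=torch.device("cuda:0"))

with torch.no_grad():
    out = model(torch.randn(4, 512))  # the attached hooks handle device placement of inputs and weights
```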
Offloads a model to the CPU and puts it back on the execution device when executed. The difference with [`cpu_offload`] is that the model stays on the execution device after the forward and is only offloaded again when the `offload` method of the returned `hook` is called. Useful for pipelines running a model in a loop. Args: model (`torch.nn.Module`): The model to offload. execution_device (`str`, `int` or `torch.device`, *optional*): The device on which the model should be executed. Will default to the MPS device if it's available, then GPU 0 if there is a GPU, and finally to the CPU. prev_module_hook (`UserCpuOffloadHook`, *optional*): The hook sent back by this function for a previous model in the pipeline you are running. If passed, its offload method will be called just before the forward of the model to which this hook is attached. Example: ```py model_1, hook_1 = cpu_offload_with_hook(model_1, cuda_device) model_2, hook_2 = cpu_offload_with_hook(model_2, cuda_device, prev_module_hook=hook_1) model_3, hook_3 = cpu_offload_with_hook(model_3, cuda_device, prev_module_hook=hook_2) hid_1 = model_1(input) for i in range(50): # model1 is offloaded on the CPU at the first iteration, model 2 stays on the GPU for this whole loop. hid_2 = model_2(hid_1) # model2 is offloaded to the CPU just before this forward. hid_3 = model_3(hid_2) # For model3, you need to manually call the hook offload method. hook_3.offload() ```
def cpu_offload_with_hook( model: torch.nn.Module, execution_device: Optional[Union[int, str, torch.device]] = None, prev_module_hook: Optional[UserCpuOffloadHook] = None, ): """ Offloads a model to the CPU and puts it back on the execution device when executed. The difference with [`cpu_offload`] is that the model stays on the execution device after the forward and is only offloaded again when the `offload` method of the returned `hook` is called. Useful for pipelines running a model in a loop. Args: model (`torch.nn.Module`): The model to offload. execution_device (`str`, `int` or `torch.device`, *optional*): The device on which the model should be executed. Will default to the MPS device if it's available, then GPU 0 if there is a GPU, and finally to the CPU. prev_module_hook (`UserCpuOffloadHook`, *optional*): The hook sent back by this function for a previous model in the pipeline you are running. If passed, its offload method will be called just before the forward of the model to which this hook is attached. Example: ```py model_1, hook_1 = cpu_offload_with_hook(model_1, cuda_device) model_2, hook_2 = cpu_offload_with_hook(model_2, cuda_device, prev_module_hook=hook_1) model_3, hook_3 = cpu_offload_with_hook(model_3, cuda_device, prev_module_hook=hook_2) hid_1 = model_1(input) for i in range(50): # model1 is offloaded on the CPU at the first iteration, model 2 stays on the GPU for this whole loop. hid_2 = model_2(hid_1) # model2 is offloaded to the CPU just before this forward. hid_3 = model_3(hid_2) # For model3, you need to manually call the hook offload method. hook_3.offload() ``` """ hook = CpuOffload(execution_device=execution_device, prev_module_hook=prev_module_hook) add_hook_to_module(model, hook, append=True) user_hook = UserCpuOffloadHook(model, hook) return model, user_hook
Activates full disk offload for a model. As a result, all parameters of the model will be offloaded as memory-mapped array in a given folder. During the forward pass, parameters will be accessed from that folder and put on the execution device passed as they are needed, then offloaded again. Args: model (`torch.nn.Module`): The model to offload. offload_dir (`str` or `os.PathLike`): The folder in which to offload the model weights (or where the model weights are already offloaded). execution_device (`torch.device`, *optional*): The device on which the forward pass of the model will be executed (should be a GPU). Will default to the model's first parameter device. offload_buffers (`bool`, *optional*, defaults to `False`): Whether or not to offload the buffers with the model parameters. preload_module_classes (`List[str]`, *optional*): A list of classes whose instances should load all their weights (even in the submodules) at the beginning of the forward. This should only be used for classes that have submodules which are registered but not called directly during the forward, for instance if a `dense` linear layer is registered, but at forward, `dense.weight` and `dense.bias` are used in some operations instead of calling `dense` directly.
def disk_offload( model: nn.Module, offload_dir: Union[str, os.PathLike], execution_device: Optional[torch.device] = None, offload_buffers: bool = False, preload_module_classes: Optional[List[str]] = None, ): """ Activates full disk offload for a model. As a result, all parameters of the model will be offloaded as memory-mapped array in a given folder. During the forward pass, parameters will be accessed from that folder and put on the execution device passed as they are needed, then offloaded again. Args: model (`torch.nn.Module`): The model to offload. offload_dir (`str` or `os.PathLike`): The folder in which to offload the model weights (or where the model weights are already offloaded). execution_device (`torch.device`, *optional*): The device on which the forward pass of the model will be executed (should be a GPU). Will default to the model's first parameter device. offload_buffers (`bool`, *optional*, defaults to `False`): Whether or not to offload the buffers with the model parameters. preload_module_classes (`List[str]`, *optional*): A list of classes whose instances should load all their weights (even in the submodules) at the beginning of the forward. This should only be used for classes that have submodules which are registered but not called directly during the forward, for instance if a `dense` linear layer is registered, but at forward, `dense.weight` and `dense.bias` are used in some operations instead of calling `dense` directly. """ if not os.path.isdir(offload_dir) or not os.path.isfile(os.path.join(offload_dir, "index.json")): offload_state_dict(offload_dir, model.state_dict()) if execution_device is None: execution_device = next(iter(model.parameters())).device weights_map = OffloadedWeightsLoader(save_folder=offload_dir) add_hook_to_module(model, AlignDevicesHook(io_same_device=True), append=True) attach_align_device_hook( model, execution_device=execution_device, offload=True, offload_buffers=offload_buffers, weights_map=weights_map, preload_module_classes=preload_module_classes, ) return model
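A hypothetical usage sketch along the same lines (the toy model, folder name, and CUDA device are assumptions):

```python
import torch
from torch import nn
from accelerate import disk_offload

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 2))

# Weights are written once as memory-mapped files under the offload folder,
# then read back layer by layer during each forward pass.
model = disk_offload(model, offload_dir="offloaded_weights", execution_device=torch.device("cuda:0"))

with torch.no_grad():
    out = model(torch.randn(4, 512))
```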
Dispatches a model according to a given device map. Layers of the model might be spread across GPUs, offloaded on the CPU or even the disk. Args: model (`torch.nn.Module`): The model to dispatch. device_map (`Dict[str, Union[str, int, torch.device]]`): A dictionary mapping module names in the models `state_dict` to the device they should go to. Note that `"disk"` is accepted even if it's not a proper value for `torch.device`. main_device (`str`, `int` or `torch.device`, *optional*): The main execution device. Will default to the first device in the `device_map` different from `"cpu"` or `"disk"`. state_dict (`Dict[str, torch.Tensor]`, *optional*): The state dict of the part of the model that will be kept on CPU. offload_dir (`str` or `os.PathLike`): The folder in which to offload the model weights (or where the model weights are already offloaded). offload_index (`Dict`, *optional*): A dictionary from weight name to their information (`dtype`/ `shape` or safetensors filename). Will default to the index saved in `save_folder`. offload_buffers (`bool`, *optional*, defaults to `False`): Whether or not to offload the buffers with the model parameters. skip_keys (`str` or `List[str]`, *optional*): A list of keys to ignore when moving inputs or outputs between devices. preload_module_classes (`List[str]`, *optional*): A list of classes whose instances should load all their weights (even in the submodules) at the beginning of the forward. This should only be used for classes that have submodules which are registered but not called directly during the forward, for instance if a `dense` linear layer is registered, but at forward, `dense.weight` and `dense.bias` are used in some operations instead of calling `dense` directly. force_hooks (`bool`, *optional*, defaults to `False`): Whether or not to force device hooks to be attached to the model even if all layers are dispatched to a single device.
def dispatch_model( model: nn.Module, device_map: Dict[str, Union[str, int, torch.device]], main_device: Optional[torch.device] = None, state_dict: Optional[Dict[str, torch.Tensor]] = None, offload_dir: Optional[Union[str, os.PathLike]] = None, offload_index: Optional[Dict[str, str]] = None, offload_buffers: bool = False, skip_keys: Optional[Union[str, List[str]]] = None, preload_module_classes: Optional[List[str]] = None, force_hooks: bool = False, ): """ Dispatches a model according to a given device map. Layers of the model might be spread across GPUs, offloaded on the CPU or even the disk. Args: model (`torch.nn.Module`): The model to dispatch. device_map (`Dict[str, Union[str, int, torch.device]]`): A dictionary mapping module names in the models `state_dict` to the device they should go to. Note that `"disk"` is accepted even if it's not a proper value for `torch.device`. main_device (`str`, `int` or `torch.device`, *optional*): The main execution device. Will default to the first device in the `device_map` different from `"cpu"` or `"disk"`. state_dict (`Dict[str, torch.Tensor]`, *optional*): The state dict of the part of the model that will be kept on CPU. offload_dir (`str` or `os.PathLike`): The folder in which to offload the model weights (or where the model weights are already offloaded). offload_index (`Dict`, *optional*): A dictionary from weight name to their information (`dtype`/ `shape` or safetensors filename). Will default to the index saved in `save_folder`. offload_buffers (`bool`, *optional*, defaults to `False`): Whether or not to offload the buffers with the model parameters. skip_keys (`str` or `List[str]`, *optional*): A list of keys to ignore when moving inputs or outputs between devices. preload_module_classes (`List[str]`, *optional*): A list of classes whose instances should load all their weights (even in the submodules) at the beginning of the forward. This should only be used for classes that have submodules which are registered but not called directly during the forward, for instance if a `dense` linear layer is registered, but at forward, `dense.weight` and `dense.bias` are used in some operations instead of calling `dense` directly. force_hooks (`bool`, *optional*, defaults to `False`): Whether or not to force device hooks to be attached to the model even if all layers are dispatched to a single device. """ # Error early if the device map is incomplete. check_device_map(model, device_map) # for backward compatibility is_bnb_quantized = ( getattr(model, "is_quantized", False) or getattr(model, "is_loaded_in_8bit", False) ) and getattr(model, "quantization_method", "bitsandbytes") == "bitsandbytes" # We attach hooks if the device_map has at least 2 different devices or if # force_hooks is set to `True`. Otherwise, the model in already loaded # in the unique device and the user can decide where to dispatch the model. 
# If the model is quantized, we always force-dispatch the model if (len(set(device_map.values())) > 1) or is_bnb_quantized or force_hooks: if main_device is None: if set(device_map.values()) == {"cpu"} or set(device_map.values()) == {"cpu", "disk"}: main_device = "cpu" else: main_device = [d for d in device_map.values() if d not in ["cpu", "disk"]][0] if main_device != "cpu": cpu_modules = [name for name, device in device_map.items() if device == "cpu"] if state_dict is None and len(cpu_modules) > 0: state_dict = extract_submodules_state_dict(model.state_dict(), cpu_modules) disk_modules = [name for name, device in device_map.items() if device == "disk"] if offload_dir is None and offload_index is None and len(disk_modules) > 0: raise ValueError( "We need an `offload_dir` to dispatch this model according to this `device_map`, the following submodules " f"need to be offloaded: {', '.join(disk_modules)}." ) if ( len(disk_modules) > 0 and offload_index is None and (not os.path.isdir(offload_dir) or not os.path.isfile(os.path.join(offload_dir, "index.json"))) ): disk_state_dict = extract_submodules_state_dict(model.state_dict(), disk_modules) offload_state_dict(offload_dir, disk_state_dict) execution_device = { name: main_device if device in ["cpu", "disk"] else device for name, device in device_map.items() } execution_device[""] = main_device offloaded_devices = ["disk"] if main_device == "cpu" or main_device == "mps" else ["cpu", "disk"] offload = {name: device in offloaded_devices for name, device in device_map.items()} save_folder = offload_dir if len(disk_modules) > 0 else None if state_dict is not None or save_folder is not None or offload_index is not None: device = main_device if offload_index is not None else None weights_map = OffloadedWeightsLoader( state_dict=state_dict, save_folder=save_folder, index=offload_index, device=device ) else: weights_map = None # When dispatching the model's parameters to the devices specified in device_map, we want to avoid allocating memory several times for the # tied parameters. The dictionary tied_params_map keeps track of the already allocated data for a given tied parameter (represented by its # original pointer) on each devices. tied_params = find_tied_parameters(model) tied_params_map = {} for group in tied_params: for param_name in group: # data_ptr() is enough here, as `find_tied_parameters` finds tied params simply by comparing `param1 is param2`, so we don't need # to care about views of tensors through storage_offset. data_ptr = recursive_getattr(model, param_name).data_ptr() tied_params_map[data_ptr] = {} # Note: To handle the disk offloading case, we can not simply use weights_map[param_name].data_ptr() as the reference pointer, # as we have no guarantee that safetensors' `file.get_tensor()` will always give the same pointer. attach_align_device_hook_on_blocks( model, execution_device=execution_device, offload=offload, offload_buffers=offload_buffers, weights_map=weights_map, skip_keys=skip_keys, preload_module_classes=preload_module_classes, tied_params_map=tied_params_map, ) # warn if there is any params on the meta device offloaded_devices_str = " and ".join( [device for device in set(device_map.values()) if device in ("cpu", "disk")] ) if len(offloaded_devices_str) > 0: logging.warning( f"Some parameters are on the meta device device because they were offloaded to the {offloaded_devices_str}." 
) # Attaching the hook may break tied weights, so we retie them retie_parameters(model, tied_params) # add warning to cuda and to method def add_warning(fn, model): @wraps(fn) def wrapper(*args, **kwargs): warning_msg = "You shouldn't move a model that is dispatched using accelerate hooks." if str(fn.__name__) == "to": to_device = torch._C._nn._parse_to(*args, **kwargs)[0] if to_device is not None: logger.warning(warning_msg) else: logger.warning(warning_msg) for param in model.parameters(): if param.device == torch.device("meta"): raise RuntimeError("You can't move a model that has some modules offloaded to cpu or disk.") return fn(*args, **kwargs) return wrapper model.to = add_warning(model.to, model) if is_npu_available(): model.npu = add_warning(model.npu, model) elif is_mlu_available(): model.mlu = add_warning(model.mlu, model) elif is_xpu_available(): model.xpu = add_warning(model.xpu, model) else: model.cuda = add_warning(model.cuda, model) # Check if we are using multi-gpus with RTX 4000 series use_multi_gpu = len([device for device in set(device_map.values()) if device not in ("cpu", "disk")]) > 1 if use_multi_gpu and not check_cuda_p2p_ib_support(): logger.warning( "We've detected an older driver with an RTX 4000 series GPU. These drivers have issues with P2P. " "This can affect the multi-gpu inference when using accelerate device_map." "Please make sure to update your driver to the latest version which resolves this." ) else: device = list(device_map.values())[0] # `torch.Tensor.to(<int num>)` is not supported by `torch_npu` (see this [issue](https://github.com/Ascend/pytorch/issues/16)). if is_npu_available() and isinstance(device, int): device = f"npu:{device}" elif is_mlu_available() and isinstance(device, int): device = f"mlu:{device}" elif is_xpu_available() and isinstance(device, int): device = f"xpu:{device}" if device != "disk": model.to(device) else: raise ValueError( "You are trying to offload the whole model to the disk. Please use the `disk_offload` function instead." ) # Convert OrderedDict back to dict for easier usage model.hf_device_map = dict(device_map) return model
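A hypothetical sketch with a hand-written `device_map` (the toy model and the particular split are assumptions; keys follow `model.named_modules()` naming):

```python
import torch
from torch import nn
from accelerate import dispatch_model

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))

# Submodule "0" lives on GPU 0; "1" and "2" are kept on CPU and streamed to the GPU when they run.
device_map = {"0": 0, "1": "cpu", "2": "cpu"}
model = dispatch_model(model, device_map=device_map)

with torch.no_grad():
    out = model(torch.randn(4, 512))
print(model.hf_device_map)  # {'0': 0, '1': 'cpu', '2': 'cpu'}
```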
Loads a (potentially sharded) checkpoint inside a model, potentially sending weights to a given device as they are loaded and adds the various hooks that will make this model run properly (even if split across devices). Args: model (`torch.nn.Module`): The model in which we want to load a checkpoint. checkpoint (`str` or `os.PathLike`): The folder checkpoint to load. It can be: - a path to a file containing a whole model state dict - a path to a `.json` file containing the index to a sharded checkpoint - a path to a folder containing a unique `.index.json` file and the shards of a checkpoint. device_map (`Dict[str, Union[int, str, torch.device]]`, *optional*): A map that specifies where each submodule should go. It doesn't need to be refined to each parameter/buffer name, once a given module name is inside, every submodule of it will be sent to the same device. To have Accelerate compute the most optimized `device_map` automatically, set `device_map="auto"`. For more information about each option see [here](../concept_guides/big_model_inference#designing-a-device-map). Defaults to None, which means [`dispatch_model`] will not be called. max_memory (`Dict`, *optional*): A dictionary device identifier to maximum memory. Will default to the maximum memory available for each GPU and the available CPU RAM if unset. no_split_module_classes (`List[str]`, *optional*): A list of layer class names that should never be split across device (for instance any layer that has a residual connection). offload_folder (`str` or `os.PathLike`, *optional*): If the `device_map` contains any value `"disk"`, the folder where we will offload weights. offload_buffers (`bool`, *optional*, defaults to `False`): In the layers that are offloaded on the CPU or the hard drive, whether or not to offload the buffers as well as the parameters. dtype (`str` or `torch.dtype`, *optional*): If provided, the weights will be converted to that type when loaded. offload_state_dict (`bool`, *optional*): If `True`, will temporarily offload the CPU state dict on the hard drive to avoid getting out of CPU RAM if the weight of the CPU state dict + the biggest shard does not fit. Will default to `True` if the device map picked contains `"disk"` values. skip_keys (`str` or `List[str]`, *optional*): A list of keys to ignore when moving inputs or outputs between devices. preload_module_classes (`List[str]`, *optional*): A list of classes whose instances should load all their weights (even in the submodules) at the beginning of the forward. This should only be used for classes that have submodules which are registered but not called directly during the forward, for instance if a `dense` linear layer is registered, but at forward, `dense.weight` and `dense.bias` are used in some operations instead of calling `dense` directly. force_hooks (`bool`, *optional*, defaults to `False`): Whether or not to force device hooks to be attached to the model even if all layers are dispatched to a single device. strict (`bool`, *optional*, defaults to `False`): Whether to strictly enforce that the keys in the checkpoint state_dict match the keys of the model's state_dict. 
Example: ```python >>> from accelerate import init_empty_weights, load_checkpoint_and_dispatch >>> from huggingface_hub import hf_hub_download >>> from transformers import AutoConfig, AutoModelForCausalLM >>> # Download the Weights >>> checkpoint = "EleutherAI/gpt-j-6B" >>> weights_location = hf_hub_download(checkpoint, "pytorch_model.bin") >>> # Create a model and initialize it with empty weights >>> config = AutoConfig.from_pretrained(checkpoint) >>> with init_empty_weights(): ... model = AutoModelForCausalLM.from_config(config) >>> # Load the checkpoint and dispatch it to the right devices >>> model = load_checkpoint_and_dispatch( ... model, weights_location, device_map="auto", no_split_module_classes=["GPTJBlock"] ... ) ```
def load_checkpoint_and_dispatch( model: nn.Module, checkpoint: Union[str, os.PathLike], device_map: Optional[Union[str, Dict[str, Union[int, str, torch.device]]]] = None, max_memory: Optional[Dict[Union[int, str], Union[int, str]]] = None, no_split_module_classes: Optional[List[str]] = None, offload_folder: Optional[Union[str, os.PathLike]] = None, offload_buffers: bool = False, dtype: Optional[Union[str, torch.dtype]] = None, offload_state_dict: Optional[bool] = None, skip_keys: Optional[Union[str, List[str]]] = None, preload_module_classes: Optional[List[str]] = None, force_hooks: bool = False, strict: bool = False, ): """ Loads a (potentially sharded) checkpoint inside a model, potentially sending weights to a given device as they are loaded and adds the various hooks that will make this model run properly (even if split across devices). Args: model (`torch.nn.Module`): The model in which we want to load a checkpoint. checkpoint (`str` or `os.PathLike`): The folder checkpoint to load. It can be: - a path to a file containing a whole model state dict - a path to a `.json` file containing the index to a sharded checkpoint - a path to a folder containing a unique `.index.json` file and the shards of a checkpoint. device_map (`Dict[str, Union[int, str, torch.device]]`, *optional*): A map that specifies where each submodule should go. It doesn't need to be refined to each parameter/buffer name, once a given module name is inside, every submodule of it will be sent to the same device. To have Accelerate compute the most optimized `device_map` automatically, set `device_map="auto"`. For more information about each option see [here](../concept_guides/big_model_inference#designing-a-device-map). Defaults to None, which means [`dispatch_model`] will not be called. max_memory (`Dict`, *optional*): A dictionary device identifier to maximum memory. Will default to the maximum memory available for each GPU and the available CPU RAM if unset. no_split_module_classes (`List[str]`, *optional*): A list of layer class names that should never be split across device (for instance any layer that has a residual connection). offload_folder (`str` or `os.PathLike`, *optional*): If the `device_map` contains any value `"disk"`, the folder where we will offload weights. offload_buffers (`bool`, *optional*, defaults to `False`): In the layers that are offloaded on the CPU or the hard drive, whether or not to offload the buffers as well as the parameters. dtype (`str` or `torch.dtype`, *optional*): If provided, the weights will be converted to that type when loaded. offload_state_dict (`bool`, *optional*): If `True`, will temporarily offload the CPU state dict on the hard drive to avoid getting out of CPU RAM if the weight of the CPU state dict + the biggest shard does not fit. Will default to `True` if the device map picked contains `"disk"` values. skip_keys (`str` or `List[str]`, *optional*): A list of keys to ignore when moving inputs or outputs between devices. preload_module_classes (`List[str]`, *optional*): A list of classes whose instances should load all their weights (even in the submodules) at the beginning of the forward. This should only be used for classes that have submodules which are registered but not called directly during the forward, for instance if a `dense` linear layer is registered, but at forward, `dense.weight` and `dense.bias` are used in some operations instead of calling `dense` directly. 
force_hooks (`bool`, *optional*, defaults to `False`): Whether or not to force device hooks to be attached to the model even if all layers are dispatched to a single device. strict (`bool`, *optional*, defaults to `False`): Whether to strictly enforce that the keys in the checkpoint state_dict match the keys of the model's state_dict. Example: ```python >>> from accelerate import init_empty_weights, load_checkpoint_and_dispatch >>> from huggingface_hub import hf_hub_download >>> from transformers import AutoConfig, AutoModelForCausalLM >>> # Download the Weights >>> checkpoint = "EleutherAI/gpt-j-6B" >>> weights_location = hf_hub_download(checkpoint, "pytorch_model.bin") >>> # Create a model and initialize it with empty weights >>> config = AutoConfig.from_pretrained(checkpoint) >>> with init_empty_weights(): ... model = AutoModelForCausalLM.from_config(config) >>> # Load the checkpoint and dispatch it to the right devices >>> model = load_checkpoint_and_dispatch( ... model, weights_location, device_map="auto", no_split_module_classes=["GPTJBlock"] ... ) ``` """ if isinstance(device_map, str) and device_map not in ["auto", "balanced", "balanced_low_0", "sequential"]: raise ValueError( "If passing a string for `device_map`, please choose 'auto', 'balanced', 'balanced_low_0' or " "'sequential'." ) if isinstance(device_map, str): if device_map != "sequential": max_memory = get_balanced_memory( model, max_memory=max_memory, no_split_module_classes=no_split_module_classes, dtype=dtype, low_zero=(device_map == "balanced_low_0"), ) device_map = infer_auto_device_map( model, max_memory=max_memory, no_split_module_classes=no_split_module_classes, dtype=dtype, offload_buffers=offload_buffers, ) if offload_state_dict is None and device_map is not None and "disk" in device_map.values(): offload_state_dict = True load_checkpoint_in_model( model, checkpoint, device_map=device_map, offload_folder=offload_folder, dtype=dtype, offload_state_dict=offload_state_dict, offload_buffers=offload_buffers, strict=strict, ) if device_map is None: return model return dispatch_model( model, device_map=device_map, offload_dir=offload_folder, offload_buffers=offload_buffers, skip_keys=skip_keys, preload_module_classes=preload_module_classes, force_hooks=force_hooks, )
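Building on the example in the docstring above, here is a hedged variant that caps per-device memory and spills the remainder to CPU RAM and disk. The memory figures, offload folder name, and `dtype` choice are illustrative assumptions, not recommendations; `model` and `weights_location` are reused from the example above.

```python
import torch

# Sketch only: any weights that do not fit under the (hypothetical) 10GiB GPU /
# 30GiB CPU budget are mapped to "disk" and stored in `offload_folder`.
model = load_checkpoint_and_dispatch(
    model,
    weights_location,
    device_map="auto",
    max_memory={0: "10GiB", "cpu": "30GiB"},
    no_split_module_classes=["GPTJBlock"],
    offload_folder="offload_dir",
    dtype=torch.float16,
)
```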
Saves the current states of the models, optimizers, scaler, and RNG generators to a given directory.

<Tip>

If `safe_serialization` is `True`, models will be saved with `safetensors` while the rest are saved using native
`pickle`.

</Tip>

Args:
    output_dir (`str` or `os.PathLike`):
        The name of the folder to save all relevant weights and states.
    model_states (`List[dict]`):
        A list of model state dicts
    optimizers (`List[torch.optim.Optimizer]`):
        A list of optimizer instances
    schedulers (`List[torch.optim.lr_scheduler._LRScheduler]`):
        A list of learning rate schedulers
    dataloaders (`List[torch.utils.data.DataLoader]`):
        A list of dataloader instances to save their sampler states
    process_index (`int`):
        The current process index in the Accelerator state
    scaler (`torch.cuda.amp.GradScaler`, *optional*):
        An optional gradient scaler instance to save
    save_on_each_node (`bool`, *optional*):
        Whether to save on every node, or only the main node.
    safe_serialization (`bool`, *optional*, defaults to `True`):
        Whether to save the model using `safetensors` or the traditional PyTorch way (that uses `pickle`).
def save_accelerator_state( output_dir: str, model_states: List[dict], optimizers: list, schedulers: list, dataloaders: list, process_index: int, scaler: GradScaler = None, save_on_each_node: bool = False, safe_serialization: bool = True, ): """ Saves the current states of the models, optimizers, scaler, and RNG generators to a given directory. <Tip> If `safe_serialization` is `True`, models will be saved with `safetensors` while the rest are saved using native `pickle`. </Tip> Args: output_dir (`str` or `os.PathLike`): The name of the folder to save all relevant weights and states. model_states (`List[torch.nn.Module]`): A list of model states optimizers (`List[torch.optim.Optimizer]`): A list of optimizer instances schedulers (`List[torch.optim.lr_scheduler._LRScheduler]`): A list of learning rate schedulers dataloaders (`List[torch.utils.data.DataLoader]`): A list of dataloader instances to save their sampler states process_index (`int`): The current process index in the Accelerator state scaler (`torch.cuda.amp.GradScaler`, *optional*): An optional gradient scaler instance to save save_on_each_node (`bool`, *optional*): Whether to save on every node, or only the main node. safe_serialization (`bool`, *optional*, defaults to `True`): Whether to save the model using `safetensors` or the traditional PyTorch way (that uses `pickle`). """ output_dir = Path(output_dir) # Model states for i, state in enumerate(model_states): weights_name = WEIGHTS_NAME if not safe_serialization else SAFE_WEIGHTS_NAME if i > 0: weights_name = weights_name.replace(".", f"_{i}.") output_model_file = output_dir.joinpath(weights_name) save(state, output_model_file, save_on_each_node=save_on_each_node, safe_serialization=safe_serialization) logger.info(f"Model weights saved in {output_model_file}") # Optimizer states for i, opt in enumerate(optimizers): state = opt.state_dict() optimizer_name = f"{OPTIMIZER_NAME}.bin" if i == 0 else f"{OPTIMIZER_NAME}_{i}.bin" output_optimizer_file = output_dir.joinpath(optimizer_name) save(state, output_optimizer_file, save_on_each_node=save_on_each_node, safe_serialization=False) logger.info(f"Optimizer state saved in {output_optimizer_file}") # Scheduler states for i, scheduler in enumerate(schedulers): state = scheduler.state_dict() scheduler_name = f"{SCHEDULER_NAME}.bin" if i == 0 else f"{SCHEDULER_NAME}_{i}.bin" output_scheduler_file = output_dir.joinpath(scheduler_name) save(state, output_scheduler_file, save_on_each_node=save_on_each_node, safe_serialization=False) logger.info(f"Scheduler state saved in {output_scheduler_file}") # DataLoader states for i, dataloader in enumerate(dataloaders): sampler_name = f"{SAMPLER_NAME}.bin" if i == 0 else f"{SAMPLER_NAME}_{i}.bin" output_sampler_file = output_dir.joinpath(sampler_name) # Only save if we have our custom sampler from .data_loader import IterableDatasetShard, SeedableRandomSampler if isinstance(dataloader.dataset, IterableDatasetShard): sampler = dataloader.get_sampler() if isinstance(sampler, SeedableRandomSampler): save(sampler, output_sampler_file, save_on_each_node=save_on_each_node, safe_serialization=False) logger.info(f"Sampler state for dataloader {i} saved in {output_sampler_file}") # GradScaler state if scaler is not None: state = scaler.state_dict() output_scaler_file = output_dir.joinpath(SCALER_NAME) torch.save(state, output_scaler_file) logger.info(f"Gradient scaler state saved in {output_scaler_file}") # Random number generator states states = {} states_name = f"{RNG_STATE_NAME}_{process_index}.pkl" 
states["random_state"] = random.getstate() states["numpy_random_seed"] = np.random.get_state() states["torch_manual_seed"] = torch.get_rng_state() if is_xpu_available(): states["torch_xpu_manual_seed"] = torch.xpu.get_rng_state_all() else: states["torch_cuda_manual_seed"] = torch.cuda.get_rng_state_all() if is_torch_xla_available(): states["xm_seed"] = xm.get_rng_state() output_states_file = output_dir.joinpath(states_name) torch.save(states, output_states_file) logger.info(f"Random states saved in {output_states_file}") return output_dir
Loads states of the models, optimizers, scaler, and RNG generators from a given directory. Args: input_dir (`str` or `os.PathLike`): The name of the folder to load all relevant weights and states. models (`List[torch.nn.Module]`): A list of model instances optimizers (`List[torch.optim.Optimizer]`): A list of optimizer instances schedulers (`List[torch.optim.lr_scheduler._LRScheduler]`): A list of learning rate schedulers process_index (`int`): The current process index in the Accelerator state scaler (`torch.cuda.amp.GradScaler`, *optional*): An optional *GradScaler* instance to load map_location (`str`, *optional*): What device to load the optimizer state onto. Should be one of either "cpu" or "on_device". load_model_func_kwargs (`dict`, *optional*): Additional arguments that can be passed to the model's `load_state_dict` method.
def load_accelerator_state( input_dir, models, optimizers, schedulers, dataloaders, process_index, scaler=None, map_location=None, **load_model_func_kwargs, ): """ Loads states of the models, optimizers, scaler, and RNG generators from a given directory. Args: input_dir (`str` or `os.PathLike`): The name of the folder to load all relevant weights and states. models (`List[torch.nn.Module]`): A list of model instances optimizers (`List[torch.optim.Optimizer]`): A list of optimizer instances schedulers (`List[torch.optim.lr_scheduler._LRScheduler]`): A list of learning rate schedulers process_index (`int`): The current process index in the Accelerator state scaler (`torch.cuda.amp.GradScaler`, *optional*): An optional *GradScaler* instance to load map_location (`str`, *optional*): What device to load the optimizer state onto. Should be one of either "cpu" or "on_device". load_model_func_kwargs (`dict`, *optional*): Additional arguments that can be passed to the model's `load_state_dict` method. """ if map_location not in [None, "cpu", "on_device"]: raise TypeError( "Unsupported optimizer map location passed, please choose one of `None`, `'cpu'`, or `'on_device'`" ) if map_location is None: map_location = "cpu" elif map_location == "on_device": map_location = PartialState().device input_dir = Path(input_dir) # Model states for i, model in enumerate(models): ending = f"_{i}" if i > 0 else "" input_model_file = input_dir.joinpath(f"{SAFE_MODEL_NAME}{ending}.safetensors") if input_model_file.exists(): state_dict = load_file(input_model_file, device=str(map_location)) else: # Load with torch input_model_file = input_dir.joinpath(f"{MODEL_NAME}{ending}.bin") state_dict = torch.load(input_model_file, map_location=map_location) models[i].load_state_dict(state_dict, **load_model_func_kwargs) logger.info("All model weights loaded successfully") # Optimizer states for i, opt in enumerate(optimizers): optimizer_name = f"{OPTIMIZER_NAME}.bin" if i == 0 else f"{OPTIMIZER_NAME}_{i}.bin" input_optimizer_file = input_dir.joinpath(optimizer_name) optimizer_state = torch.load(input_optimizer_file, map_location=map_location) optimizers[i].load_state_dict(optimizer_state) logger.info("All optimizer states loaded successfully") # Scheduler states for i, scheduler in enumerate(schedulers): scheduler_name = f"{SCHEDULER_NAME}.bin" if i == 0 else f"{SCHEDULER_NAME}_{i}.bin" input_scheduler_file = input_dir.joinpath(scheduler_name) scheduler.load_state_dict(torch.load(input_scheduler_file)) logger.info("All scheduler states loaded successfully") for i, dataloader in enumerate(dataloaders): sampler_name = f"{SAMPLER_NAME}.bin" if i == 0 else f"{SAMPLER_NAME}_{i}.bin" input_sampler_file = input_dir.joinpath(sampler_name) # Only load if we have our custom sampler from .data_loader import IterableDatasetShard, SeedableRandomSampler if isinstance(dataloader.dataset, IterableDatasetShard): sampler = dataloader.get_sampler() if isinstance(sampler, SeedableRandomSampler): sampler = dataloader.set_sampler(torch.load(input_sampler_file)) logger.info("All dataloader sampler states loaded successfully") # GradScaler state if scaler is not None: input_scaler_file = input_dir.joinpath(SCALER_NAME) scaler.load_state_dict(torch.load(input_scaler_file)) logger.info("GradScaler state loaded successfully") # Random states try: states = torch.load(input_dir.joinpath(f"{RNG_STATE_NAME}_{process_index}.pkl")) random.setstate(states["random_state"]) np.random.set_state(states["numpy_random_seed"]) 
torch.set_rng_state(states["torch_manual_seed"]) if is_xpu_available(): torch.xpu.set_rng_state_all(states["torch_xpu_manual_seed"]) else: torch.cuda.set_rng_state_all(states["torch_cuda_manual_seed"]) if is_torch_xla_available(): xm.set_rng_state(states["xm_seed"]) logger.info("All random states loaded successfully") except Exception: logger.info("Could not load random states")
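The loading path mirrors the saving one and is normally driven by `Accelerator.load_state`. A short sketch, assuming a checkpoint was previously written with the same set of prepared objects (directory name again arbitrary):

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()
model = torch.nn.Linear(8, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
model, optimizer = accelerator.prepare(model, optimizer)

# Restores the model/optimizer states plus this process's RNG states saved earlier,
# so training resumes with the exact same randomness it would otherwise have had.
accelerator.load_state("my_checkpoint")
```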
Saves the state of `obj` to `{path}/custom_checkpoint_{index}.pkl`
def save_custom_state(obj, path, index: int = 0, save_on_each_node: bool = False): """ Saves the state of `obj` to `{path}/custom_checkpoint_{index}.pkl` """ # Should this be the right way to get a qual_name type value from `obj`? save_location = Path(path) / f"custom_checkpoint_{index}.pkl" logger.info(f"Saving the state of {get_pretty_name(obj)} to {save_location}") save(obj.state_dict(), save_location, save_on_each_node=save_on_each_node)
Loads the state of `obj` at `{path}/custom_checkpoint_{index}.pkl`
def load_custom_state(obj, path, index: int = 0): """ Loads the state of `obj` at `{path}/custom_checkpoint_{index}.pkl` """ load_location = f"{path}/custom_checkpoint_{index}.pkl" logger.info(f"Loading the state of {get_pretty_name(obj)} from {load_location}") obj.load_state_dict(torch.load(load_location, map_location="cpu"))
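Any object exposing the `state_dict`/`load_state_dict` protocol can be routed through these two helpers by registering it with the `Accelerator`. A toy sketch; the `EMATracker` class below is made up purely for illustration:

```python
import torch
from accelerate import Accelerator

class EMATracker:
    """Toy stateful object implementing the state_dict/load_state_dict protocol."""

    def __init__(self):
        self.value = torch.tensor(0.0)

    def state_dict(self):
        return {"value": self.value}

    def load_state_dict(self, state):
        self.value = state["value"]

accelerator = Accelerator()
ema = EMATracker()
# Registered objects are written as custom_checkpoint_0.pkl, custom_checkpoint_1.pkl, ...
# by save_custom_state and restored by load_custom_state during save_state/load_state.
accelerator.register_for_checkpointing(ema)
accelerator.save_state("my_checkpoint")
accelerator.load_state("my_checkpoint")
```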
Get the sampler associated with the dataloader

Args:
    dataloader (`torch.utils.data.dataloader.DataLoader`):
        The data loader to split across several devices.

Returns:
    `torch.utils.data.Sampler`: The sampler associated with the dataloader
def get_sampler(dataloader): """ Get the sampler associated to the dataloader Args: dataloader (`torch.utils.data.dataloader.DataLoader`): The data loader to split across several devices. Returns: `torch.utils.data.Sampler`: The sampler associated to the dataloader """ sampler_is_batch_sampler = isinstance(dataloader.sampler, BatchSampler) if sampler_is_batch_sampler: sampler = getattr(dataloader.sampler, "sampler", None) else: sampler = getattr(dataloader.batch_sampler, "sampler", None) return sampler
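For a vanilla `DataLoader`, the sampler lives on the batch sampler, which is the second branch above. A quick sketch; the import path is an assumption based on where the helper is defined alongside the other dataloader utilities:

```python
import torch
from torch.utils.data import DataLoader, RandomSampler, TensorDataset
from accelerate.data_loader import get_sampler  # assumed import path

dataset = TensorDataset(torch.arange(10).float())
dataloader = DataLoader(dataset, batch_size=2, shuffle=True)

# dataloader.sampler is a RandomSampler (not a BatchSampler), so the helper
# reads dataloader.batch_sampler.sampler and returns that RandomSampler.
sampler = get_sampler(dataloader)
assert isinstance(sampler, RandomSampler)
```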
Wraps a PyTorch `DataLoader` to generate batches for one of the processes only.

Depending on the value of the `drop_last` attribute of the `dataloader` passed, it will either stop the iteration
at the first batch that would be too small / not present on all processes or loop with indices from the beginning.

Args:
    dataloader (`torch.utils.data.dataloader.DataLoader`):
        The data loader to split across several devices.
    device (`torch.device`):
        The target device for the returned `DataLoader`.
    num_processes (`int`, *optional*):
        The number of processes running concurrently. Will default to the value given by
        [`~state.AcceleratorState`].
    process_index (`int`, *optional*):
        The index of the current process. Will default to the value given by [`~state.AcceleratorState`].
    split_batches (`bool`, *optional*, defaults to `False`):
        Whether the resulting `DataLoader` should split the batches of the original data loader across devices or
        yield full batches (in which case it will yield batches starting at the `process_index`-th and advancing by
        `num_processes` batches at each iteration). Another way to see this is that the observed batch size will be
        the same as the initial `dataloader` if this option is set to `True`, the batch size of the initial
        `dataloader` multiplied by `num_processes` otherwise. Setting this option to `True` requires that the batch
        size of the `dataloader` is a round multiple of the number of processes.
    put_on_device (`bool`, *optional*, defaults to `False`):
        Whether or not to put the batches on `device` (only works if the batches are nested list, tuples or
        dictionaries of tensors).
    rng_types (list of `str` or [`~utils.RNGType`]):
        The list of random number generators to synchronize at the beginning of each iteration. Should be one or
        several of:

        - `"torch"`: the base torch random number generator
        - `"cuda"`: the CUDA random number generator (GPU only)
        - `"xla"`: the XLA random number generator (TPU only)
        - `"generator"`: the `torch.Generator` of the sampler (or batch sampler if there is no sampler in your
          dataloader) or of the iterable dataset (if it exists) if the underlying dataset is of that type.

    dispatch_batches (`bool`, *optional*):
        If set to `True`, the dataloader prepared is only iterated through on the main process and then the batches
        are split and broadcast to each process. Will default to `True` when the underlying dataset is an
        `IterableDataset`, `False` otherwise.
    even_batches (`bool`, *optional*, defaults to `True`):
        If set to `True`, in cases where the total batch size across all processes does not exactly divide the
        dataset, samples at the start of the dataset will be duplicated so the batch can be divided equally among
        all workers.
    slice_fn_for_dispatch (`Callable`, *optional*):
        If passed, this function will be used to slice tensors across `num_processes`. Will default to
        [`~utils.slice_tensors`]. This argument is used only when `dispatch_batches` is set to `True` and will be
        ignored otherwise.
    use_seedable_sampler (`bool`, *optional*, defaults to `False`):
        Whether to use the [`~data_loader.SeedableRandomSampler`] instead of a `RandomSampler` for better
        reproducibility. Comes at a cost of potentially different performance due to different shuffling algorithms
        but ensures results will be the *exact* same. Should be paired with `set_seed()` at every `self.set_epoch`
        call.
    non_blocking (`bool`, *optional*, defaults to `False`):
        If set to `True`, dataloader will utilize non-blocking host-to-device transfers. If the dataloader has
        `pin_memory` set to `True`, this will help to increase overlap between data transfer and computations.

Returns:
    `torch.utils.data.dataloader.DataLoader`: A new data loader that will yield the portion of the batches assigned
    to the current process

<Tip warning={true}>

`BatchSampler`s with varying batch sizes are not enabled by default. To enable this behaviour, set `even_batches`
equal to `False`

</Tip>
def prepare_data_loader( dataloader: DataLoader, device: Optional[torch.device] = None, num_processes: Optional[int] = None, process_index: Optional[int] = None, split_batches: bool = False, put_on_device: bool = False, rng_types: Optional[List[Union[str, RNGType]]] = None, dispatch_batches: Optional[bool] = None, even_batches: bool = True, slice_fn_for_dispatch: Optional[Callable] = None, use_seedable_sampler: bool = False, non_blocking: bool = False, ) -> DataLoader: """ Wraps a PyTorch `DataLoader` to generate batches for one of the processes only. Depending on the value of the `drop_last` attribute of the `dataloader` passed, it will either stop the iteration at the first batch that would be too small / not present on all processes or loop with indices from the beginning. Args: dataloader (`torch.utils.data.dataloader.DataLoader`): The data loader to split across several devices. device (`torch.device`): The target device for the returned `DataLoader`. num_processes (`int`, *optional*): The number of processes running concurrently. Will default to the value given by [`~state.AcceleratorState`]. process_index (`int`, *optional*): The index of the current process. Will default to the value given by [`~state.AcceleratorState`]. split_batches (`bool`, *optional*, defaults to `False`): Whether the resulting `DataLoader` should split the batches of the original data loader across devices or yield full batches (in which case it will yield batches starting at the `process_index`-th and advancing of `num_processes` batches at each iteration). Another way to see this is that the observed batch size will be the same as the initial `dataloader` if this option is set to `True`, the batch size of the initial `dataloader` multiplied by `num_processes` otherwise. Setting this option to `True` requires that the batch size of the `dataloader` is a round multiple of `batch_size`. put_on_device (`bool`, *optional*, defaults to `False`): Whether or not to put the batches on `device` (only works if the batches are nested list, tuples or dictionaries of tensors). rng_types (list of `str` or [`~utils.RNGType`]): The list of random number generators to synchronize at the beginning of each iteration. Should be one or several of: - `"torch"`: the base torch random number generator - `"cuda"`: the CUDA random number generator (GPU only) - `"xla"`: the XLA random number generator (TPU only) - `"generator"`: the `torch.Generator` of the sampler (or batch sampler if there is no sampler in your dataloader) or of the iterable dataset (if it exists) if the underlying dataset is of that type. dispatch_batches (`bool`, *optional*): If set to `True`, the datalaoder prepared is only iterated through on the main process and then the batches are split and broadcast to each process. Will default to `True` when the underlying dataset is an `IterableDataset`, `False` otherwise. even_batches (`bool`, *optional*, defaults to `True`): If set to `True`, in cases where the total batch size across all processes does not exactly divide the dataset, samples at the start of the dataset will be duplicated so the batch can be divided equally among all workers. slice_fn_for_dispatch (`Callable`, *optional*`): If passed, this function will be used to slice tensors across `num_processes`. Will default to [`~utils.slice_tensors`]. This argument is used only when `dispatch_batches` is set to `True` and will be ignored otherwise. 
use_seedable_sampler (`bool`, *optional*, defaults to `False`): Whether to use the [`~data_loader.SeedableRandomSampler`] instead of a `RandomSampler` for better reproducability. Comes at a cost of potentially different performances due to different shuffling algorithms but ensures results will be the *exact* same. Should be paired with `set_seed()` at every `self.set_epoch` non_blocking (`bool`, *optional*, defaults to `False`): If set to `True`, dataloader will utilize non-blocking host-to-device transfers. If the dataloader has `pin_memory` set to `True`, this will help to increase overlap between data transfer and computations. Returns: `torch.utils.data.dataloader.DataLoader`: A new data loader that will yield the portion of the batches <Tip warning={true}> `BatchSampler`s with varying batch sizes are not enabled by default. To enable this behaviour, set `even_batches` equal to `False` </Tip> """ if dispatch_batches is None: if not put_on_device: dispatch_batches = False else: dispatch_batches = isinstance(dataloader.dataset, IterableDataset) if dispatch_batches and not put_on_device: raise ValueError("Using `dispatch_batches=True` requires `put_on_device=True`.") # Grab defaults from AcceleratorState state = AcceleratorState() if num_processes is None: num_processes = state.num_processes if process_index is None: process_index = state.process_index # Sanity check if split_batches: if dataloader.batch_size is not None: batch_size_for_check = dataloader.batch_size else: # For custom batch_sampler if hasattr(dataloader.batch_sampler, "batch_size"): batch_size_for_check = dataloader.batch_sampler.batch_size else: raise ValueError( "In order to use `split_batches==True` you must have a `batch_size` attribute either in the passed " "`dataloader` or `dataloader.batch_sampler` objects, and it has to return a natural number. " "Your `dataloader.batch_size` is None and `dataloader.batch_sampler` " f"(`{type(dataloader.batch_sampler)}`) does not have the `batch_size` attribute set." ) if batch_size_for_check > 1 and batch_size_for_check % num_processes != 0: raise ValueError( f"To use a `DataLoader` in `split_batches` mode, the batch size ({dataloader.batch_size}) " f"needs to be a round multiple of the number of processes ({num_processes})." ) new_dataset = dataloader.dataset # Iterable dataset doesn't like batch_sampler, but data_loader creates a default one for it new_batch_sampler = dataloader.batch_sampler if not isinstance(new_dataset, IterableDataset) else None sampler_is_batch_sampler = isinstance(dataloader.sampler, BatchSampler) synchronized_generator = None sampler = get_sampler(dataloader) if isinstance(sampler, RandomSampler) and use_seedable_sampler: # When iterating through the dataloader during distributed processes # we want to ensure that on each process we are iterating through the same # samples in the same order if a seed is set. This requires a tweak # to the `torch.utils.data.RandomSampler` class (if used). sampler = SeedableRandomSampler( data_source=sampler.data_source, replacement=sampler.replacement, num_samples=sampler._num_samples, generator=getattr(sampler, "generator", torch.Generator()), ) if isinstance(dataloader.sampler, RandomSampler) and state.distributed_type == DistributedType.XLA: # isinstance(dataloader.sampler, RandomSampler) indicates the original dataloader has `shuffle` enabled. 
generator = torch.Generator().manual_seed(42) dataloader.generator = generator dataloader.sampler.generator = generator # No change if no multiprocess if (num_processes != 1 or state.distributed_type == DistributedType.MEGATRON_LM) and not dispatch_batches: if isinstance(new_dataset, IterableDataset): if getattr(dataloader.dataset, "generator", None) is not None: synchronized_generator = dataloader.dataset.generator new_dataset = IterableDatasetShard( new_dataset, batch_size=dataloader.batch_size, drop_last=dataloader.drop_last, num_processes=num_processes, process_index=process_index, split_batches=split_batches, ) else: if not use_seedable_sampler and hasattr(sampler, "generator"): if sampler.generator is None: sampler.generator = torch.Generator() synchronized_generator = sampler.generator batch_sampler = dataloader.sampler if sampler_is_batch_sampler else dataloader.batch_sampler new_batch_sampler = BatchSamplerShard( batch_sampler, num_processes=num_processes, process_index=process_index, split_batches=split_batches, even_batches=even_batches, ) # We ignore all of those since they are all dealt with by our new_batch_sampler ignore_kwargs = [ "batch_size", "shuffle", "sampler", "batch_sampler", "drop_last", ] if rng_types is not None and synchronized_generator is None and "generator" in rng_types: rng_types.remove("generator") kwargs = { k: getattr(dataloader, k, _PYTORCH_DATALOADER_KWARGS[k]) for k in _PYTORCH_DATALOADER_KWARGS if k not in ignore_kwargs } # Need to provide batch_size as batch_sampler is None for Iterable dataset if new_batch_sampler is None: kwargs["drop_last"] = dataloader.drop_last kwargs["batch_size"] = ( dataloader.batch_size // num_processes if split_batches and not dispatch_batches else dataloader.batch_size ) if dispatch_batches: kwargs.pop("generator") dataloader = DataLoaderDispatcher( new_dataset, split_batches=split_batches, batch_sampler=new_batch_sampler, _drop_last=dataloader.drop_last, _non_blocking=non_blocking, slice_fn=slice_fn_for_dispatch, **kwargs, ) elif sampler_is_batch_sampler: dataloader = DataLoaderShard( new_dataset, device=device if put_on_device and state.distributed_type != DistributedType.XLA else None, sampler=new_batch_sampler, batch_size=dataloader.batch_size, rng_types=rng_types, _drop_last=dataloader.drop_last, _non_blocking=non_blocking, synchronized_generator=synchronized_generator, **kwargs, ) else: dataloader = DataLoaderShard( new_dataset, device=device if put_on_device and state.distributed_type != DistributedType.XLA else None, batch_sampler=new_batch_sampler, rng_types=rng_types, synchronized_generator=synchronized_generator, _drop_last=dataloader.drop_last, _non_blocking=non_blocking, **kwargs, ) if isinstance(sampler, SeedableRandomSampler) and use_seedable_sampler: dataloader.set_sampler(sampler) if state.distributed_type == DistributedType.XLA: return MpDeviceLoaderWrapper(dataloader, device) return dataloader
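`Accelerator.prepare` is the usual entry point, but the function can also be exercised directly. A minimal single-machine sketch that shards a dataloader as if it were process 0 of 2; the numbers are arbitrary and the import path is taken from the dataloader utilities module:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate.data_loader import prepare_data_loader

dataset = TensorDataset(torch.arange(16).float())
dataloader = DataLoader(dataset, batch_size=4)

# With split_batches=False each process keeps the observed batch size of 4, and
# this process receives batches 0, 2, ... while a second process would get 1, 3, ...
sharded = prepare_data_loader(
    dataloader,
    device=torch.device("cpu"),
    num_processes=2,
    process_index=0,
    put_on_device=True,
)
print([batch[0].tolist() for batch in sharded])
```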
Creates a `torch.utils.data.DataLoader` that will efficiently skip the first `num_batches`.
def skip_first_batches(dataloader, num_batches=0): """ Creates a `torch.utils.data.DataLoader` that will efficiently skip the first `num_batches`. """ dataset = dataloader.dataset sampler_is_batch_sampler = False if isinstance(dataset, IterableDataset): new_batch_sampler = None else: sampler_is_batch_sampler = isinstance(dataloader.sampler, BatchSampler) batch_sampler = dataloader.sampler if sampler_is_batch_sampler else dataloader.batch_sampler new_batch_sampler = SkipBatchSampler(batch_sampler, skip_batches=num_batches) # We ignore all of those since they are all dealt with by our new_batch_sampler ignore_kwargs = [ "batch_size", "shuffle", "sampler", "batch_sampler", "drop_last", ] kwargs = { k: getattr(dataloader, k, _PYTORCH_DATALOADER_KWARGS[k]) for k in _PYTORCH_DATALOADER_KWARGS if k not in ignore_kwargs } # Need to provide batch_size as batch_sampler is None for Iterable dataset if new_batch_sampler is None: kwargs["drop_last"] = dataloader.drop_last kwargs["batch_size"] = dataloader.batch_size if isinstance(dataloader, DataLoaderDispatcher): if new_batch_sampler is None: # Need to manually skip batches in the dataloader kwargs["skip_batches"] = num_batches dataloader = DataLoaderDispatcher( dataset, split_batches=dataloader.split_batches, batch_sampler=new_batch_sampler, _drop_last=dataloader._drop_last, **kwargs, ) elif isinstance(dataloader, DataLoaderShard): if new_batch_sampler is None: # Need to manually skip batches in the dataloader kwargs["skip_batches"] = num_batches elif sampler_is_batch_sampler: kwargs["sampler"] = new_batch_sampler kwargs["batch_size"] = dataloader.batch_size else: kwargs["batch_sampler"] = new_batch_sampler dataloader = DataLoaderShard( dataset, device=dataloader.device, rng_types=dataloader.rng_types, synchronized_generator=dataloader.synchronized_generator, **kwargs, ) else: if new_batch_sampler is None: # Need to manually skip batches in the dataloader dataloader = SkipDataLoader(dataset, skip_batches=num_batches, **kwargs) else: dataloader = DataLoader(dataset, batch_sampler=new_batch_sampler, **kwargs) return dataloader
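The typical use is resuming training mid-epoch after restoring a checkpoint. A small self-contained sketch:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate.data_loader import skip_first_batches

dataset = TensorDataset(torch.arange(20).float())
dataloader = DataLoader(dataset, batch_size=2)

# Skip the 3 batches already consumed before the checkpoint; iteration
# resumes at the 4th batch of the epoch without re-yielding the first ones.
resumed = skip_first_batches(dataloader, num_batches=3)
print(next(iter(resumed)))  # batch containing 6.0 and 7.0
```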
Adds a hook to a given module. This will rewrite the `forward` method of the module to include the hook. To remove
this behavior and restore the original `forward` method, use `remove_hook_from_module`.

<Tip warning={true}>

If the module already contains a hook, this will replace it with the new hook passed by default. To chain two hooks
together, pass `append=True`, so it chains the current and new hook into an instance of the `SequentialHook` class.

</Tip>

Args:
    module (`torch.nn.Module`):
        The module to attach a hook to.
    hook (`ModelHook`):
        The hook to attach.
    append (`bool`, *optional*, defaults to `False`):
        Whether the hook should be chained with an existing one (if module already contains a hook) or not.

Returns:
    `torch.nn.Module`: The same module, with the hook attached (the module is modified in place, so the result can
    be discarded).
def add_hook_to_module(module: nn.Module, hook: ModelHook, append: bool = False): """ Adds a hook to a given module. This will rewrite the `forward` method of the module to include the hook, to remove this behavior and restore the original `forward` method, use `remove_hook_from_module`. <Tip warning={true}> If the module already contains a hook, this will replace it with the new hook passed by default. To chain two hooks together, pass `append=True`, so it chains the current and new hook into an instance of the `SequentialHook` class. </Tip> Args: module (`torch.nn.Module`): The module to attach a hook to. hook (`ModelHook`): The hook to attach. append (`bool`, *optional*, defaults to `False`): Whether the hook should be chained with an existing one (if module already contains a hook) or not. Returns: `torch.nn.Module`: The same module, with the hook attached (the module is modified in place, so the result can be discarded). """ if append and (getattr(module, "_hf_hook", None) is not None): old_hook = module._hf_hook remove_hook_from_module(module) hook = SequentialHook(old_hook, hook) if hasattr(module, "_hf_hook") and hasattr(module, "_old_forward"): # If we already put some hook on this module, we replace it with the new one. old_forward = module._old_forward else: old_forward = module.forward module._old_forward = old_forward module = hook.init_hook(module) module._hf_hook = hook def new_forward(module, *args, **kwargs): args, kwargs = module._hf_hook.pre_forward(module, *args, **kwargs) if module._hf_hook.no_grad: with torch.no_grad(): output = module._old_forward(*args, **kwargs) else: output = module._old_forward(*args, **kwargs) return module._hf_hook.post_forward(module, output) # Overriding a GraphModuleImpl forward freezes the forward call and later modifications on the graph will fail. # Reference: https://pytorch.slack.com/archives/C3PDTEV8E/p1705929610405409 if "GraphModuleImpl" in str(type(module)): module.__class__.forward = functools.update_wrapper(functools.partial(new_forward, module), old_forward) else: module.forward = functools.update_wrapper(functools.partial(new_forward, module), old_forward) return module
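A minimal custom hook makes the mechanics concrete. `LoggingHook` below is a made-up example subclassing the library's `ModelHook` base class, which provides default `init_hook`/`pre_forward`/`detach_hook` implementations:

```python
import torch
from accelerate.hooks import ModelHook, add_hook_to_module, remove_hook_from_module

class LoggingHook(ModelHook):
    """Toy hook: report the output shape after every forward pass."""

    def post_forward(self, module, output):
        print(f"{module.__class__.__name__} produced {tuple(output.shape)}")
        return output

layer = torch.nn.Linear(4, 2)
add_hook_to_module(layer, LoggingHook())
layer(torch.randn(3, 4))        # prints "Linear produced (3, 2)"
remove_hook_from_module(layer)  # restores the original forward
```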
Removes any hook attached to a module via `add_hook_to_module`.

Args:
    module (`torch.nn.Module`):
        The module from which to detach the hook.
    recurse (`bool`, *optional*):
        Whether to remove the hooks recursively

Returns:
    `torch.nn.Module`: The same module, with the hook detached (the module is modified in place, so the result can
    be discarded).
def remove_hook_from_module(module: nn.Module, recurse=False): """ Removes any hook attached to a module via `add_hook_to_module`. Args: module (`torch.nn.Module`): The module to attach a hook to. recurse (`bool`, **optional**): Whether to remove the hooks recursively Returns: `torch.nn.Module`: The same module, with the hook detached (the module is modified in place, so the result can be discarded). """ if hasattr(module, "_hf_hook"): module._hf_hook.detach_hook(module) delattr(module, "_hf_hook") if hasattr(module, "_old_forward"): # Overriding a GraphModuleImpl forward freezes the forward call and later modifications on the graph will fail. # Reference: https://pytorch.slack.com/archives/C3PDTEV8E/p1705929610405409 if "GraphModuleImpl" in str(type(module)): module.__class__.forward = module._old_forward else: module.forward = module._old_forward delattr(module, "_old_forward") if recurse: for child in module.children(): remove_hook_from_module(child, recurse) return module
Recursively attaches `AlignDevicesHook` to all submodules of a given model to make sure they have the right execution device Args: module (`torch.nn.Module`): The module where we want to attach the hooks. execution_device (`int`, `str` or `torch.device`): The device on which inputs and model weights should be placed before the forward pass. skip_keys (`str` or `List[str]`, *optional*): A list of keys to ignore when moving inputs or outputs between devices. preload_module_classes (`List[str]`, *optional*): A list of classes whose instances should load all their weights (even in the submodules) at the beginning of the forward. This should only be used for classes that have submodules which are registered but not called directly during the forward, for instance if a `dense` linear layer is registered, but at forward, `dense.weight` and `dense.bias` are used in some operations instead of calling `dense` directly. tied_params_map (Optional[Dict[int, Dict[torch.device, torch.Tensor]]], *optional*, defaults to `None`): A map of data pointers to dictionaries of devices to already dispatched tied weights. For a given execution device, this parameter is useful to reuse the first available pointer of a shared weight for all others, instead of duplicating memory.
def attach_execution_device_hook( module: torch.nn.Module, execution_device: Union[int, str, torch.device], skip_keys: Optional[Union[str, List[str]]] = None, preload_module_classes: Optional[List[str]] = None, tied_params_map: Optional[Dict[int, Dict[torch.device, torch.Tensor]]] = None, ): """ Recursively attaches `AlignDevicesHook` to all submodules of a given model to make sure they have the right execution device Args: module (`torch.nn.Module`): The module where we want to attach the hooks. execution_device (`int`, `str` or `torch.device`): The device on which inputs and model weights should be placed before the forward pass. skip_keys (`str` or `List[str]`, *optional*): A list of keys to ignore when moving inputs or outputs between devices. preload_module_classes (`List[str]`, *optional*): A list of classes whose instances should load all their weights (even in the submodules) at the beginning of the forward. This should only be used for classes that have submodules which are registered but not called directly during the forward, for instance if a `dense` linear layer is registered, but at forward, `dense.weight` and `dense.bias` are used in some operations instead of calling `dense` directly. tied_params_map (Optional[Dict[int, Dict[torch.device, torch.Tensor]]], *optional*, defaults to `None`): A map of data pointers to dictionaries of devices to already dispatched tied weights. For a given execution device, this parameter is useful to reuse the first available pointer of a shared weight for all others, instead of duplicating memory. """ if not hasattr(module, "_hf_hook") and len(module.state_dict()) > 0: add_hook_to_module( module, AlignDevicesHook(execution_device, skip_keys=skip_keys, tied_params_map=tied_params_map), ) # Break the recursion if we get to a preload module. if preload_module_classes is not None and module.__class__.__name__ in preload_module_classes: return for child in module.children(): attach_execution_device_hook(child, execution_device, tied_params_map=tied_params_map)
Recursively attaches `AlignDevicesHook` to all submodules of a given model that have direct parameters and/or buffers. Args: module (`torch.nn.Module`): The module where we want to attach the hooks. execution_device (`torch.device`, *optional*): The device on which inputs and model weights should be placed before the forward pass. offload (`bool`, *optional*, defaults to `False`): Whether or not the weights should be offloaded after the forward pass. weights_map (`Mapping[str, torch.Tensor]`, *optional*): When the model weights are offloaded, a (potentially lazy) map from param names to the tensor values. offload_buffers (`bool`, *optional*, defaults to `False`): Whether or not to include the associated module's buffers when offloading. module_name (`str`, *optional*, defaults to `""`): The name of the module. skip_keys (`str` or `List[str]`, *optional*): A list of keys to ignore when moving inputs or outputs between devices. preload_module_classes (`List[str]`, *optional*): A list of classes whose instances should load all their weights (even in the submodules) at the beginning of the forward. This should only be used for classes that have submodules which are registered but not called directly during the forward, for instance if a `dense` linear layer is registered, but at forward, `dense.weight` and `dense.bias` are used in some operations instead of calling `dense` directly. tied_params_map (Optional[Dict[int, Dict[torch.device, torch.Tensor]]], *optional*, defaults to `None`): A map of data pointers to dictionaries of devices to already dispatched tied weights. For a given execution device, this parameter is useful to reuse the first available pointer of a shared weight for all others, instead of duplicating memory.
def attach_align_device_hook( module: torch.nn.Module, execution_device: Optional[torch.device] = None, offload: bool = False, weights_map: Optional[Mapping] = None, offload_buffers: bool = False, module_name: str = "", skip_keys: Optional[Union[str, List[str]]] = None, preload_module_classes: Optional[List[str]] = None, tied_params_map: Optional[Dict[int, Dict[torch.device, torch.Tensor]]] = None, ): """ Recursively attaches `AlignDevicesHook` to all submodules of a given model that have direct parameters and/or buffers. Args: module (`torch.nn.Module`): The module where we want to attach the hooks. execution_device (`torch.device`, *optional*): The device on which inputs and model weights should be placed before the forward pass. offload (`bool`, *optional*, defaults to `False`): Whether or not the weights should be offloaded after the forward pass. weights_map (`Mapping[str, torch.Tensor]`, *optional*): When the model weights are offloaded, a (potentially lazy) map from param names to the tensor values. offload_buffers (`bool`, *optional*, defaults to `False`): Whether or not to include the associated module's buffers when offloading. module_name (`str`, *optional*, defaults to `""`): The name of the module. skip_keys (`str` or `List[str]`, *optional*): A list of keys to ignore when moving inputs or outputs between devices. preload_module_classes (`List[str]`, *optional*): A list of classes whose instances should load all their weights (even in the submodules) at the beginning of the forward. This should only be used for classes that have submodules which are registered but not called directly during the forward, for instance if a `dense` linear layer is registered, but at forward, `dense.weight` and `dense.bias` are used in some operations instead of calling `dense` directly. tied_params_map (Optional[Dict[int, Dict[torch.device, torch.Tensor]]], *optional*, defaults to `None`): A map of data pointers to dictionaries of devices to already dispatched tied weights. For a given execution device, this parameter is useful to reuse the first available pointer of a shared weight for all others, instead of duplicating memory. """ # Attach the hook on this module if it has any direct tensor. directs = named_module_tensors(module) full_offload = ( offload and preload_module_classes is not None and module.__class__.__name__ in preload_module_classes ) if len(list(directs)) > 0 or full_offload: if weights_map is not None: prefix = f"{module_name}." if len(module_name) > 0 else "" prefixed_weights_map = PrefixedDataset(weights_map, prefix) else: prefixed_weights_map = None hook = AlignDevicesHook( execution_device=execution_device, offload=offload, weights_map=prefixed_weights_map, offload_buffers=offload_buffers, place_submodules=full_offload, skip_keys=skip_keys, tied_params_map=tied_params_map, ) add_hook_to_module(module, hook, append=True) # We stop the recursion in case we hit the full offload. if full_offload: return # Recurse on all children of the module. for child_name, child in module.named_children(): child_name = f"{module_name}.{child_name}" if len(module_name) > 0 else child_name attach_align_device_hook( child, execution_device=execution_device, offload=offload, weights_map=weights_map, offload_buffers=offload_buffers, module_name=child_name, preload_module_classes=preload_module_classes, skip_keys=skip_keys, tied_params_map=tied_params_map, )
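These hooks are usually attached indirectly through `cpu_offload` or `dispatch_model`, but calling the function directly shows the effect. A CPU-only sketch so it runs anywhere; with a GPU you would pass `torch.device("cuda")` instead:

```python
import torch
from accelerate.hooks import attach_align_device_hook, remove_hook_from_submodules

model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.ReLU(), torch.nn.Linear(4, 2))

# Every submodule with direct parameters/buffers gets an AlignDevicesHook that moves
# its weights and inputs to the execution device right before its forward pass.
attach_align_device_hook(model, execution_device=torch.device("cpu"))
model(torch.randn(2, 4))

remove_hook_from_submodules(model)  # detach all hooks again
```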
Recursively removes all hooks attached on the submodules of a given model. Args: module (`torch.nn.Module`): The module on which to remove all hooks.
def remove_hook_from_submodules(module: nn.Module): """ Recursively removes all hooks attached on the submodules of a given model. Args: module (`torch.nn.Module`): The module on which to remove all hooks. """ remove_hook_from_module(module) for child in module.children(): remove_hook_from_submodules(child)
Attaches `AlignDevicesHook` to all blocks of a given model as needed. Args: module (`torch.nn.Module`): The module where we want to attach the hooks. execution_device (`torch.device` or `Dict[str, torch.device]`, *optional*): The device on which inputs and model weights should be placed before the forward pass. It can be one device for the whole module, or a dictionary mapping module name to device. offload (`bool`, *optional*, defaults to `False`): Whether or not the weights should be offloaded after the forward pass. It can be one boolean for the whole module, or a dictionary mapping module name to boolean. weights_map (`Mapping[str, torch.Tensor]`, *optional*): When the model weights are offloaded, a (potentially lazy) map from param names to the tensor values. offload_buffers (`bool`, *optional*, defaults to `False`): Whether or not to include the associated module's buffers when offloading. module_name (`str`, *optional*, defaults to `""`): The name of the module. skip_keys (`str` or `List[str]`, *optional*): A list of keys to ignore when moving inputs or outputs between devices. preload_module_classes (`List[str]`, *optional*): A list of classes whose instances should load all their weights (even in the submodules) at the beginning of the forward. This should only be used for classes that have submodules which are registered but not called directly during the forward, for instance if a `dense` linear layer is registered, but at forward, `dense.weight` and `dense.bias` are used in some operations instead of calling `dense` directly. tied_params_map (Optional[Dict[int, Dict[torch.device, torch.Tensor]]], *optional*, defaults to `None`): A map of data pointers to dictionaries of devices to already dispatched tied weights. For a given execution device, this parameter is useful to reuse the first available pointer of a shared weight for all others, instead of duplicating memory.
def attach_align_device_hook_on_blocks( module: nn.Module, execution_device: Optional[Union[torch.device, Dict[str, torch.device]]] = None, offload: Union[bool, Dict[str, bool]] = False, weights_map: Mapping = None, offload_buffers: bool = False, module_name: str = "", skip_keys: Optional[Union[str, List[str]]] = None, preload_module_classes: Optional[List[str]] = None, tied_params_map: Optional[Dict[int, Dict[torch.device, torch.Tensor]]] = None, ): """ Attaches `AlignDevicesHook` to all blocks of a given model as needed. Args: module (`torch.nn.Module`): The module where we want to attach the hooks. execution_device (`torch.device` or `Dict[str, torch.device]`, *optional*): The device on which inputs and model weights should be placed before the forward pass. It can be one device for the whole module, or a dictionary mapping module name to device. offload (`bool`, *optional*, defaults to `False`): Whether or not the weights should be offloaded after the forward pass. It can be one boolean for the whole module, or a dictionary mapping module name to boolean. weights_map (`Mapping[str, torch.Tensor]`, *optional*): When the model weights are offloaded, a (potentially lazy) map from param names to the tensor values. offload_buffers (`bool`, *optional*, defaults to `False`): Whether or not to include the associated module's buffers when offloading. module_name (`str`, *optional*, defaults to `""`): The name of the module. skip_keys (`str` or `List[str]`, *optional*): A list of keys to ignore when moving inputs or outputs between devices. preload_module_classes (`List[str]`, *optional*): A list of classes whose instances should load all their weights (even in the submodules) at the beginning of the forward. This should only be used for classes that have submodules which are registered but not called directly during the forward, for instance if a `dense` linear layer is registered, but at forward, `dense.weight` and `dense.bias` are used in some operations instead of calling `dense` directly. tied_params_map (Optional[Dict[int, Dict[torch.device, torch.Tensor]]], *optional*, defaults to `None`): A map of data pointers to dictionaries of devices to already dispatched tied weights. For a given execution device, this parameter is useful to reuse the first available pointer of a shared weight for all others, instead of duplicating memory. """ # If one device and one offload, we've got one hook. 
if not isinstance(execution_device, Mapping) and not isinstance(offload, dict): if not offload: hook = AlignDevicesHook( execution_device=execution_device, io_same_device=True, skip_keys=skip_keys, place_submodules=True, tied_params_map=tied_params_map, ) add_hook_to_module(module, hook) else: attach_align_device_hook( module, execution_device=execution_device, offload=True, weights_map=weights_map, offload_buffers=offload_buffers, module_name=module_name, skip_keys=skip_keys, tied_params_map=tied_params_map, ) return if not isinstance(execution_device, Mapping): execution_device = {key: execution_device for key in offload.keys()} if not isinstance(offload, Mapping): offload = {key: offload for key in execution_device.keys()} if module_name in execution_device and module_name in offload and not offload[module_name]: hook = AlignDevicesHook( execution_device=execution_device[module_name], offload_buffers=offload_buffers, io_same_device=(module_name == ""), place_submodules=True, skip_keys=skip_keys, tied_params_map=tied_params_map, ) add_hook_to_module(module, hook) attach_execution_device_hook(module, execution_device[module_name], tied_params_map=tied_params_map) elif module_name in execution_device and module_name in offload: attach_align_device_hook( module, execution_device=execution_device[module_name], offload=True, weights_map=weights_map, offload_buffers=offload_buffers, module_name=module_name, skip_keys=skip_keys, preload_module_classes=preload_module_classes, tied_params_map=tied_params_map, ) if not hasattr(module, "_hf_hook"): hook = AlignDevicesHook( execution_device=execution_device[module_name], io_same_device=(module_name == ""), skip_keys=skip_keys, tied_params_map=tied_params_map, ) add_hook_to_module(module, hook) attach_execution_device_hook( module, execution_device[module_name], preload_module_classes=preload_module_classes, skip_keys=skip_keys, tied_params_map=tied_params_map, ) elif module_name == "": hook = AlignDevicesHook( execution_device=execution_device.get(""), io_same_device=True, skip_keys=skip_keys, tied_params_map=tied_params_map, ) add_hook_to_module(module, hook) for child_name, child in module.named_children(): child_name = f"{module_name}.{child_name}" if len(module_name) > 0 else child_name attach_align_device_hook_on_blocks( child, execution_device=execution_device, offload=offload, weights_map=weights_map, offload_buffers=offload_buffers, module_name=child_name, preload_module_classes=preload_module_classes, skip_keys=skip_keys, tied_params_map=tied_params_map, )
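This function is the workhorse behind `dispatch_model`. The sketch below forces the hook path with an explicit, CPU-only `device_map` purely to illustrate the shape of the mapping; real use would map block names to GPU indices, and `force_hooks=True` is only needed here because everything sits on a single device:

```python
import torch
from accelerate import dispatch_model

model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.ReLU(), torch.nn.Linear(4, 2))

# One entry per top-level block; with GPUs you would use values like 0, 1 or "cuda:0".
device_map = {"0": "cpu", "1": "cpu", "2": "cpu"}
model = dispatch_model(model, device_map=device_map, force_hooks=True)
model(torch.randn(2, 4))
```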
Calculates the device map for `model` with an offset for PiPPy
def generate_device_map(model, num_processes: int = 1, no_split_module_classes=None, max_memory: dict = None): """ Calculates the device map for `model` with an offset for PiPPy """ if num_processes == 1: return infer_auto_device_map(model, no_split_module_classes=no_split_module_classes, clean_result=False) if max_memory is None: model_size, shared = calculate_maximum_sizes(model) # Split into `n` chunks for each GPU memory = (model_size + shared[0]) / num_processes memory = convert_bytes(memory) value, ending = memory.split(" ") # Add a chunk to deal with potential extra shared memory instances memory = math.ceil(float(value)) * 1.1 memory = f"{memory} {ending}" max_memory = {i: memory for i in range(num_processes)} device_map = infer_auto_device_map( model, max_memory=max_memory, no_split_module_classes=no_split_module_classes, clean_result=False, ) return device_map
Attaches the split points to the model based on `self.device_map` and generates a `PipelineStage`. Requires passing
in the `args` and `kwargs` the model expects, placed on the CPU.

Users can pass in a custom `num_chunks` as an optional hyper-parameter. By default it will use
`AcceleratorState.num_processes`.
def build_pipeline(model, split_points, args, kwargs, num_chunks): """ Attaches the split points to the model based on `self.device_map` and generates a `PipelineStage`. Requires passing in needed `args` and `kwargs` as the model needs on the CPU. Users can pass in custom `num_chunks` as an optional hyper-parameter. By default will use `AcceleratorState.num_processes` """ # We need to annotate the split points in the model for PiPPy state = PartialState() annotate_split_points(model, {split_point: PipeSplitWrapper.SplitPoint.BEGINNING for split_point in split_points}) found_batch_size = find_pippy_batch_size(args, kwargs) if found_batch_size != num_chunks: if args is not None: args = pad_input_tensors(args, found_batch_size, num_chunks) if kwargs is not None: kwargs = pad_input_tensors(kwargs, found_batch_size, num_chunks) pipe = Pipe.from_tracing(model, num_chunks=num_chunks, example_args=args, example_kwargs=kwargs) stage = PipelineStage(pipe, state.local_process_index, device=state.device) return stage
Wraps `model` for pipeline parallel inference.

Args:
    model (`torch.nn.Module`):
        A model we want to split for pipeline-parallel inference
    split_points (`str` or `List[str]`, defaults to 'auto'):
        How to generate the split points and chunk the model across each GPU. 'auto' will find the best balanced
        split given any model. Otherwise, should be a list of layer names in the model to split by.
    no_split_module_classes (`List[str]`):
        A list of class names for layers we don't want to be split.
    example_args (tuple of model inputs):
        The expected inputs for the model that uses order-based inputs. Recommended to use this method if possible.
    example_kwargs (dict of model inputs):
        The expected inputs for the model that uses dictionary-based inputs. This is a *highly* limiting structure
        that requires the same keys be present at *all* inference calls. Not recommended unless the prior condition
        is true for all cases.
    num_chunks (`int`, defaults to the number of available GPUs):
        The number of different stages the Pipeline will have. By default it will assign one chunk per GPU, but
        this can be tuned and played with. In general one should have num_chunks >= num_gpus.
    gather_output (`bool`, defaults to `False`):
        If `True`, the output from the last GPU (which holds the true outputs) is sent across to all GPUs.
def prepare_pippy( model, split_points: Optional[Union[str, List[str]]] = "auto", no_split_module_classes: Optional[List[str]] = None, example_args: Optional[Tuple[Any]] = (), example_kwargs: Optional[Dict[str, Any]] = None, num_chunks: Optional[int] = None, gather_output: Optional[bool] = False, ): """ Wraps `model` for pipeline parallel inference. Args: model (`torch.nn.Module`): A model we want to split for pipeline-parallel inference split_points (`str` or `List[str]`, defaults to 'auto'): How to generate the split points and chunk the model across each GPU. 'auto' will find the best balanced split given any model. Should be a list of layer names in the model to split by otherwise. no_split_module_classes (`List[str]`): A list of class names for layers we don't want to be split. example_args (tuple of model inputs): The expected inputs for the model that uses order-based inputs. Recommended to use this method if possible. example_kwargs (dict of model inputs) The expected inputs for the model that uses dictionary-based inputs. This is a *highly* limiting structure that requires the same keys be present at *all* inference calls. Not recommended unless the prior condition is true for all cases. num_chunks (`int`, defaults to the number of available GPUs): The number of different stages the Pipeline will have. By default it will assign one chunk per GPU, but this can be tuned and played with. In general one should have num_chunks >= num_gpus. gather_output (`bool`, defaults to `False`): If `True`, the output from the last GPU (which holds the true outputs) is sent across to all GPUs. """ if not is_pippy_available(): raise ImportError( "`pippy` was not found to be installed on your system. Please " "install using `pip install torchpippy` or ensure you have at least version 0.2.0" ) state = PartialState() example_args = send_to_device(example_args, "cpu") example_kwargs = send_to_device(example_kwargs, "cpu") if num_chunks is None: num_chunks = state.num_processes if split_points == "auto": device_map = generate_device_map(model, num_chunks, no_split_module_classes=no_split_module_classes) split_points = [] for i in range(1, num_chunks): split_points.append(next(k for k, v in device_map.items() if v == i)) model.hf_split_points = split_points stage = build_pipeline(model, split_points, example_args, example_kwargs, num_chunks) model._original_forward = model.forward model._original_call = model.__call__ model.pippy_stage = stage model.hf_split_points = split_points def forward(*args, **kwargs): return pippy_forward(stage.forward, num_chunks, gather_output, *args, **kwargs) # To act like a decorator so that it can be popped when doing `extract_model_from_parallel` # Note: creates an infinite recursion loop with `generate` model_forward = MethodType(forward, model) forward.__wrapped__ = model_forward model.forward = forward return model
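A hedged end-to-end sketch modeled on the library's GPT-2 pipeline-inference example. It assumes a multi-GPU machine, must be launched with `accelerate launch` so that one process per GPU exists, and the model choice, input shape, and import path are illustrative assumptions:

```python
import torch
from transformers import AutoModelForCausalLM
from accelerate.inference import prepare_pippy  # assumed import path

model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# Dummy inputs used only for tracing/splitting; shape is (batch, sequence_length).
input_ids = torch.randint(0, model.config.vocab_size, (2, 16), dtype=torch.int64)

# One pipeline stage per process; gather_output broadcasts the final logits to all ranks.
model = prepare_pippy(model, split_points="auto", example_args=(input_ids,), gather_output=True)
with torch.no_grad():
    output = model(input_ids)
```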
Verify a `PartialState` can be initialized.
def test_launch(): "Verify a `PartialState` can be initialized." _ = PartialState()
Launches a training function, using several processes or multiple nodes if it's possible in the current environment (TPU with multiple cores for instance). <Tip warning={true}> To use this function absolutely zero calls to a CUDA device must be made in the notebook session before calling. If any have been made, you will need to restart the notebook and make sure no cells use any CUDA capability. Setting `ACCELERATE_DEBUG_MODE="1"` in your environment will run a test before truly launching to ensure that none of those calls have been made. </Tip> Args: function (`Callable`): The training function to execute. If it accepts arguments, the first argument should be the index of the process run. args (`Tuple`): Tuple of arguments to pass to the function (it will receive `*args`). num_processes (`int`, *optional*): The number of processes to use for training. Will default to 8 in Colab/Kaggle if a TPU is available, to the number of GPUs available otherwise. mixed_precision (`str`, *optional*, defaults to `"no"`): If `fp16` or `bf16`, will use mixed precision training on multi-GPU. use_port (`str`, *optional*, defaults to `"29500"`): The port to use to communicate between processes when launching a multi-GPU training. master_addr (`str`, *optional*, defaults to `"127.0.0.1"`): The address to use for communication between processes. node_rank (`int`, *optional*, defaults to 0): The rank of the current node. num_nodes (`int`, *optional*, defaults to 1): The number of nodes to use for training. Example: ```python # Assume this is defined in a Jupyter Notebook on an instance with two GPUs from accelerate import notebook_launcher def train(*args): # Your training function here ... notebook_launcher(train, args=(arg1, arg2), num_processes=2, mixed_precision="fp16") ```
def notebook_launcher( function, args=(), num_processes=None, mixed_precision="no", use_port="29500", master_addr="127.0.0.1", node_rank=0, num_nodes=1, ): """ Launches a training function, using several processes or multiple nodes if it's possible in the current environment (TPU with multiple cores for instance). <Tip warning={true}> To use this function absolutely zero calls to a CUDA device must be made in the notebook session before calling. If any have been made, you will need to restart the notebook and make sure no cells use any CUDA capability. Setting `ACCELERATE_DEBUG_MODE="1"` in your environment will run a test before truly launching to ensure that none of those calls have been made. </Tip> Args: function (`Callable`): The training function to execute. If it accepts arguments, the first argument should be the index of the process run. args (`Tuple`): Tuple of arguments to pass to the function (it will receive `*args`). num_processes (`int`, *optional*): The number of processes to use for training. Will default to 8 in Colab/Kaggle if a TPU is available, to the number of GPUs available otherwise. mixed_precision (`str`, *optional*, defaults to `"no"`): If `fp16` or `bf16`, will use mixed precision training on multi-GPU. use_port (`str`, *optional*, defaults to `"29500"`): The port to use to communicate between processes when launching a multi-GPU training. master_addr (`str`, *optional*, defaults to `"127.0.0.1"`): The address to use for communication between processes. node_rank (`int`, *optional*, defaults to 0): The rank of the current node. num_nodes (`int`, *optional*, defaults to 1): The number of nodes to use for training. Example: ```python # Assume this is defined in a Jupyter Notebook on an instance with two GPUs from accelerate import notebook_launcher def train(*args): # Your training function here ... notebook_launcher(train, args=(arg1, arg2), num_processes=2, mixed_precision="fp16") ``` """ # Are we in a google colab or a Kaggle Kernel? in_colab = False in_kaggle = False if any(key.startswith("KAGGLE") for key in os.environ.keys()): in_kaggle = True elif "IPython" in sys.modules: in_colab = "google.colab" in str(sys.modules["IPython"].get_ipython()) try: mixed_precision = PrecisionType(mixed_precision.lower()) except ValueError: raise ValueError( f"Unknown mixed_precision mode: {args.mixed_precision.lower()}. Choose between {PrecisionType.list()}." ) if (in_colab or in_kaggle) and (os.environ.get("TPU_NAME", None) is not None): # TPU launch import torch_xla.distributed.xla_multiprocessing as xmp if len(AcceleratorState._shared_state) > 0: raise ValueError( "To train on TPU in Colab or Kaggle Kernel, the `Accelerator` should only be initialized inside " "your training function. Restart your notebook and make sure no cells initializes an " "`Accelerator`." ) if num_processes is None: num_processes = 8 launcher = PrepareForLaunch(function, distributed_type="TPU") print(f"Launching a training on {num_processes} TPU cores.") xmp.spawn(launcher, args=args, nprocs=num_processes, start_method="fork") elif in_colab and get_gpu_info()[1] < 2: # No need for a distributed launch otherwise as it's either CPU or one GPU. if torch.cuda.is_available(): print("Launching training on one GPU.") else: print("Launching training on one CPU.") function(*args) else: if num_processes is None: raise ValueError( "You have to specify the number of GPUs you would like to use, add `num_processes=...` to your call." 
) if node_rank >= num_nodes: raise ValueError("The node_rank must be less than the number of nodes.") if num_processes > 1: # Multi-GPU launch from torch.multiprocessing import start_processes from torch.multiprocessing.spawn import ProcessRaisedException if len(AcceleratorState._shared_state) > 0: raise ValueError( "To launch a multi-GPU training from your notebook, the `Accelerator` should only be initialized " "inside your training function. Restart your notebook and make sure no cells initializes an " "`Accelerator`." ) # Check for specific libraries known to initialize CUDA that users constantly use problematic_imports = are_libraries_initialized("bitsandbytes") if len(problematic_imports) > 0: err = ( "Could not start distributed process. Libraries known to initialize CUDA upon import have been " "imported already. Please keep these imports inside your training function to try and help with this:" ) for lib_name in problematic_imports: err += f"\n\t* `{lib_name}`" raise RuntimeError(err) patched_env = dict( nproc=num_processes, node_rank=node_rank, world_size=num_nodes * num_processes, master_addr=master_addr, master_port=use_port, mixed_precision=mixed_precision, ) # Check for CUDA P2P and IB issues if not check_cuda_p2p_ib_support(): patched_env["nccl_p2p_disable"] = "1" patched_env["nccl_ib_disable"] = "1" # torch.distributed will expect a few environment variable to be here. We set the ones common to each # process here (the other ones will be set be the launcher). with patch_environment(**patched_env): # First dummy launch if os.environ.get("ACCELERATE_DEBUG_MODE", "false").lower() == "true": launcher = PrepareForLaunch(test_launch, distributed_type="MULTI_GPU") try: start_processes(launcher, args=(), nprocs=num_processes, start_method="fork") except ProcessRaisedException as e: err = "An issue was found when verifying a stable environment for the notebook launcher." if "Cannot re-initialize CUDA in forked subprocess" in e.args[0]: raise RuntimeError( f"{err}" "This likely stems from an outside import causing issues once the `notebook_launcher()` is called. " "Please review your imports and test them when running the `notebook_launcher()` to identify " "which one is problematic and causing CUDA to be initialized." ) from e else: raise RuntimeError(f"{err} The following error was raised: {e}") from e # Now the actual launch launcher = PrepareForLaunch(function, distributed_type="MULTI_GPU") print(f"Launching training on {num_processes} GPUs.") try: start_processes(launcher, args=args, nprocs=num_processes, start_method="fork") except ProcessRaisedException as e: if "Cannot re-initialize CUDA in forked subprocess" in e.args[0]: raise RuntimeError( "CUDA has been initialized before the `notebook_launcher` could create a forked subprocess. " "This likely stems from an outside import causing issues once the `notebook_launcher()` is called. " "Please review your imports and test them when running the `notebook_launcher()` to identify " "which one is problematic and causing CUDA to be initialized." ) from e else: raise RuntimeError(f"An issue was found when launching the training: {e}") from e else: # No need for a distributed launch otherwise as it's either CPU, GPU or MPS. if is_mps_available(): os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1" print("Launching training on MPS.") elif torch.cuda.is_available(): print("Launching training on one GPU.") else: print("Launching training on CPU.") function(*args)
Launches a training function using several processes on CPU for debugging purposes. <Tip warning={true}> This function is provided for internal testing and debugging, but it's not intended for real trainings. It will only use the CPU. </Tip> Args: function (`Callable`): The training function to execute. args (`Tuple`): Tuple of arguments to pass to the function (it will receive `*args`). num_processes (`int`, *optional*, defaults to 2): The number of processes to use for training.
def debug_launcher(function, args=(), num_processes=2): """ Launches a training function using several processes on CPU for debugging purposes. <Tip warning={true}> This function is provided for internal testing and debugging, but it's not intended for real trainings. It will only use the CPU. </Tip> Args: function (`Callable`): The training function to execute. args (`Tuple`): Tuple of arguments to pass to the function (it will receive `*args`). num_processes (`int`, *optional*, defaults to 2): The number of processes to use for training. """ from torch.multiprocessing import start_processes with tempfile.NamedTemporaryFile() as tmp_file: # torch.distributed will expect a few environment variable to be here. We set the ones common to each # process here (the other ones will be set be the launcher). with patch_environment( world_size=num_processes, master_addr="127.0.0.1", master_port="29500", accelerate_mixed_precision="no", accelerate_debug_rdv_file=tmp_file.name, accelerate_use_cpu="yes", ): launcher = PrepareForLaunch(function, debug=True) start_processes(launcher, args=args, nprocs=num_processes, start_method="fork")
Returns a `logging.Logger` for `name` that can handle multiprocessing. If a log should be called on all processes, pass `main_process_only=False` If a log should be called on all processes and in order, also pass `in_order=True` Args: name (`str`): The name for the logger, such as `__file__` log_level (`str`, *optional*): The log level to use. If not passed, will default to the `LOG_LEVEL` environment variable, or `INFO` if not Example: ```python >>> from accelerate.logging import get_logger >>> from accelerate import Accelerator >>> logger = get_logger(__name__) >>> accelerator = Accelerator() >>> logger.info("My log", main_process_only=False) >>> logger.debug("My log", main_process_only=True) >>> logger = get_logger(__name__, log_level="DEBUG") >>> logger.info("My log") >>> logger.debug("My second log") >>> array = ["a", "b", "c", "d"] >>> letter_at_rank = array[accelerator.process_index] >>> logger.info(letter_at_rank, in_order=True) ```
def get_logger(name: str, log_level: str = None): """ Returns a `logging.Logger` for `name` that can handle multiprocessing. If a log should be called on all processes, pass `main_process_only=False` If a log should be called on all processes and in order, also pass `in_order=True` Args: name (`str`): The name for the logger, such as `__file__` log_level (`str`, *optional*): The log level to use. If not passed, will default to the `LOG_LEVEL` environment variable, or `INFO` if not Example: ```python >>> from accelerate.logging import get_logger >>> from accelerate import Accelerator >>> logger = get_logger(__name__) >>> accelerator = Accelerator() >>> logger.info("My log", main_process_only=False) >>> logger.debug("My log", main_process_only=True) >>> logger = get_logger(__name__, log_level="DEBUG") >>> logger.info("My log") >>> logger.debug("My second log") >>> array = ["a", "b", "c", "d"] >>> letter_at_rank = array[accelerator.process_index] >>> logger.info(letter_at_rank, in_order=True) ``` """ if log_level is None: log_level = os.environ.get("ACCELERATE_LOG_LEVEL", None) logger = logging.getLogger(name) if log_level is not None: logger.setLevel(log_level.upper()) logger.root.setLevel(log_level.upper()) return MultiProcessAdapter(logger, {})
Checks if the `AcceleratorState` has been initialized from `Accelerator`. Same as `AcceleratorState.initialized`, but works as a module method.
def is_initialized() -> bool: """ Checks if the `AcceleratorState` has been initialized from `Accelerator`. Same as `AcceleratorState.initialized`, but works as a module method. """ return AcceleratorState._shared_state != {}
Decorator to selectively run the decorated function on the main process only based on the `main_process_only` attribute in a class. Checks at function execution rather than initialization time, not triggering the initialization of the `PartialState`.
def on_main_process(function): """ Decorator to selectively run the decorated function on the main process only based on the `main_process_only` attribute in a class. Checks at function execution rather than initialization time, not triggering the initialization of the `PartialState`. """ @wraps(function) def execute_on_main_process(self, *args, **kwargs): if getattr(self, "main_process_only", False): return PartialState().on_main_process(function)(self, *args, **kwargs) else: return function(self, *args, **kwargs) return execute_on_main_process
Returns a list of all supported available trackers in the system
def get_available_trackers(): "Returns a list of all supported available trackers in the system" return _available_trackers
Takes in a list of potential tracker types and checks that: - The tracker wanted is available in that environment - Filters out repeats of tracker types - If `all` is in `log_with`, will return all trackers in the environment - If a tracker requires a `logging_dir`, ensures that `logging_dir` is not `None` Args: log_with (list of `str`, [`~utils.LoggerType`] or [`~tracking.GeneralTracker`], *optional*): A list of loggers to be setup for experiment tracking. Should be one or several of: - `"all"` - `"tensorboard"` - `"wandb"` - `"comet_ml"` - `"mlflow"` - `"dvclive"` If `"all"` is selected, will pick up all available trackers in the environment and initialize them. Can also accept implementations of `GeneralTracker` for custom trackers, and can be combined with `"all"`. logging_dir (`str`, `os.PathLike`, *optional*): A path to a directory for storing logs of locally-compatible loggers.
def filter_trackers( log_with: List[Union[str, LoggerType, GeneralTracker]], logging_dir: Union[str, os.PathLike] = None, ): """ Takes in a list of potential tracker types and checks that: - The tracker wanted is available in that environment - Filters out repeats of tracker types - If `all` is in `log_with`, will return all trackers in the environment - If a tracker requires a `logging_dir`, ensures that `logging_dir` is not `None` Args: log_with (list of `str`, [`~utils.LoggerType`] or [`~tracking.GeneralTracker`], *optional*): A list of loggers to be setup for experiment tracking. Should be one or several of: - `"all"` - `"tensorboard"` - `"wandb"` - `"comet_ml"` - `"mlflow"` - `"dvclive"` If `"all"` is selected, will pick up all available trackers in the environment and initialize them. Can also accept implementations of `GeneralTracker` for custom trackers, and can be combined with `"all"`. logging_dir (`str`, `os.PathLike`, *optional*): A path to a directory for storing logs of locally-compatible loggers. """ loggers = [] if log_with is not None: if not isinstance(log_with, (list, tuple)): log_with = [log_with] if "all" in log_with or LoggerType.ALL in log_with: loggers = [o for o in log_with if issubclass(type(o), GeneralTracker)] + get_available_trackers() else: for log_type in log_with: if log_type not in LoggerType and not issubclass(type(log_type), GeneralTracker): raise ValueError(f"Unsupported logging capability: {log_type}. Choose between {LoggerType.list()}") if issubclass(type(log_type), GeneralTracker): loggers.append(log_type) else: log_type = LoggerType(log_type) if log_type not in loggers: if log_type in get_available_trackers(): tracker_init = LOGGER_TYPE_TO_CLASS[str(log_type)] if tracker_init.requires_logging_directory: if logging_dir is None: raise ValueError( f"Logging with `{log_type}` requires a `logging_dir` to be passed in." ) loggers.append(log_type) else: logger.debug(f"Tried adding logger {log_type}, but package is unavailable in the system.") return loggers
Verifies that the model is on the hub and returns the model info.
def verify_on_hub(repo: str, token: str = None): "Verifies that the model is on the hub and returns the model info." try: return model_info(repo, token=token) except GatedRepoError: return "gated" except RepositoryNotFoundError: return "repo"
Checks what library spawned `error` when a model is not found
def check_has_model(error): """ Checks what library spawned `error` when a model is not found """ if is_timm_available() and isinstance(error, RuntimeError) and "Unknown model" in error.args[0]: return "timm" elif ( is_transformers_available() and isinstance(error, OSError) and "does not appear to have a file named" in error.args[0] ): return "transformers" else: return "unknown"
Creates an empty model from its parent library on the `Hub` to calculate the overall memory consumption. Args: model_name (`str`): The model name on the Hub library_name (`str`): The library the model has an integration with, such as `transformers`. Will be used if `model_name` has no metadata on the Hub to determine the library. trust_remote_code (`bool`, `optional`, defaults to `False`): Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. access_token (`str`, `optional`, defaults to `None`): The access token to use to access private or gated models on the Hub. (for use on the Gradio app) Returns: `torch.nn.Module`: The torch model that has been initialized on the `meta` device.
def create_empty_model(model_name: str, library_name: str, trust_remote_code: bool = False, access_token: str = None): """ Creates an empty model from its parent library on the `Hub` to calculate the overall memory consumption. Args: model_name (`str`): The model name on the Hub library_name (`str`): The library the model has an integration with, such as `transformers`. Will be used if `model_name` has no metadata on the Hub to determine the library. trust_remote_code (`bool`, `optional`, defaults to `False`): Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. access_token (`str`, `optional`, defaults to `None`): The access token to use to access private or gated models on the Hub. (for use on the Gradio app) Returns: `torch.nn.Module`: The torch model that has been initialized on the `meta` device. """ model_info = verify_on_hub(model_name, access_token) # Simplified errors if model_info == "gated": raise GatedRepoError( f"Repo for model `{model_name}` is gated. You must be authenticated to access it. Please run `huggingface-cli login`." ) elif model_info == "repo": raise RepositoryNotFoundError( f"Repo for model `{model_name}` does not exist on the Hub. If you are trying to access a private repo," " make sure you are authenticated via `huggingface-cli login` and have access." ) if library_name is None: library_name = getattr(model_info, "library_name", False) if not library_name: raise ValueError( f"Model `{model_name}` does not have any library metadata on the Hub, please manually pass in a `--library_name` to use (such as `transformers`)" ) if library_name == "transformers": if not is_transformers_available(): raise ImportError( f"To check `{model_name}`, `transformers` must be installed. Please install it via `pip install transformers`" ) print(f"Loading pretrained config for `{model_name}` from `transformers`...") if model_info.config is None: raise RuntimeError(f"Tried to load `{model_name}` with `transformers` but it does not have any metadata.") auto_map = model_info.config.get("auto_map", False) config = AutoConfig.from_pretrained(model_name, trust_remote_code=trust_remote_code, token=access_token) with init_empty_weights(): # remote code could specify a specific `AutoModel` class in the `auto_map` constructor = AutoModel if isinstance(auto_map, dict): value = None for key in auto_map.keys(): if key.startswith("AutoModelFor"): value = key break if value is not None: constructor = getattr(transformers, value) model = constructor.from_config(config, trust_remote_code=trust_remote_code) elif library_name == "timm": if not is_timm_available(): raise ImportError( f"To check `{model_name}`, `timm` must be installed. Please install it via `pip install timm`" ) print(f"Loading pretrained config for `{model_name}` from `timm`...") with init_empty_weights(): model = timm.create_model(model_name, pretrained=False) else: raise ValueError( f"Library `{library_name}` is not supported yet, please open an issue on GitHub for us to add support." ) return model
Creates a pretty table from a list of rows, minimal version of `tabulate`.
def create_ascii_table(headers: list, rows: list, title: str): "Creates a pretty table from a list of rows, minimal version of `tabulate`." sep_char, in_between = "│", "─" column_widths = [] for i in range(len(headers)): column_values = [row[i] for row in rows] + [headers[i]] max_column_width = max(len(value) for value in column_values) column_widths.append(max_column_width) formats = [f"%{column_widths[i]}s" for i in range(len(rows[0]))] pattern = f"{sep_char}{sep_char.join(formats)}{sep_char}" diff = 0 def make_row(left_char, middle_char, right_char): return f"{left_char}{middle_char.join([in_between * n for n in column_widths])}{in_between * diff}{right_char}" separator = make_row("├", "┼", "┤") if len(title) > sum(column_widths): diff = abs(len(title) - len(separator)) column_widths[-1] += diff # Update with diff separator = make_row("├", "┼", "┤") initial_rows = [ make_row("┌", in_between, "┐"), f"{sep_char}{title.center(len(separator) - 2)}{sep_char}", make_row("├", "┬", "┤"), ] table = "\n".join(initial_rows) + "\n" column_widths[-1] += diff centered_line = [text.center(column_widths[i]) for i, text in enumerate(headers)] table += f"{pattern % tuple(centered_line)}\n{separator}\n" for i, line in enumerate(rows): centered_line = [t.center(column_widths[i]) for i, t in enumerate(line)] table += f"{pattern % tuple(centered_line)}\n" table += f'└{"┴".join([in_between * n for n in column_widths])}┘' return table
Given an amount of `bytes` and `mixed_precision`, calculates how much training memory is needed for a batch size of 1. Args: bytes (`int`): The size of the model being trained. mixed_precision (`str`): The mixed precision that would be ran. msamp_config (`str`): The msamp config to estimate the training memory for if `mixed_precision` is set to `"fp8"`.
def estimate_training_usage(bytes: int, mixed_precision: str, msamp_config: str = None) -> dict: """ Given an amount of `bytes` and `mixed_precision`, calculates how much training memory is needed for a batch size of 1. Args: bytes (`int`): The size of the model being trained. mixed_precision (`str`): The mixed precision that would be ran. msamp_config (`str`): The msamp config to estimate the training memory for if `mixed_precision` is set to `"fp8"`. """ memory_sizes = {"model": -1, "optimizer": -1, "gradients": -1, "step": -1} fp32_size = bytes fp16_size = bytes // 2 if mixed_precision == "float32": memory_sizes["model"] = fp32_size memory_sizes["gradients"] = fp32_size memory_sizes["optimizer"] = fp32_size * 2 memory_sizes["step"] = fp32_size * 4 elif mixed_precision in ("float16", "bfloat16") or (mixed_precision == "fp8" and msamp_config is None): # With native `TransformersEngine`, there is no memory savings with FP8 # With mixed precision training, the model has weights stored # in FP16 and FP32 memory_sizes["model"] = fp32_size # 1.5 from weight gradient + computation (GEMM) memory_sizes["gradients"] = fp32_size + fp16_size # 2x from optimizer states memory_sizes["optimizer"] = fp32_size * 2 # Optimizer states memory_sizes["step"] = memory_sizes["optimizer"] return memory_sizes
Creates an empty model and gathers the data for the sizes
def gather_data(args): "Creates an empty model and gathers the data for the sizes" try: model = create_empty_model( args.model_name, library_name=args.library_name, trust_remote_code=args.trust_remote_code ) except (RuntimeError, OSError) as e: library = check_has_model(e) if library != "unknown": raise RuntimeError( f"Tried to load `{args.model_name}` with `{library}` but a possible model to load was not found inside the repo." ) raise e total_size, largest_layer = calculate_maximum_sizes(model) data = [] for dtype in args.dtypes: dtype_total_size = total_size dtype_largest_layer = largest_layer[0] dtype_training_size = estimate_training_usage(dtype_total_size, dtype) if dtype == "float16": dtype_total_size /= 2 dtype_largest_layer /= 2 elif dtype == "int8": dtype_total_size /= 4 dtype_largest_layer /= 4 elif dtype == "int4": dtype_total_size /= 8 dtype_largest_layer /= 8 data.append([dtype, dtype_largest_layer, dtype_total_size, dtype_training_size]) return data
Finds all cases of - after the first two characters and changes them to _
def clean_option(option): "Finds all cases of - after the first two characters and changes them to _" if option.startswith("--"): return option[2:].replace("-", "_")
Creates and saves a basic cluster config to be used on a local machine with potentially multiple GPUs. Will also set CPU if it is a CPU-only machine. Args: mixed_precision (`str`, *optional*, defaults to "no"): Mixed Precision to use. Should be one of "no", "fp16", or "bf16" save_location (`str`, *optional*, defaults to `default_json_config_file`): Optional custom save location. Should be passed to `--config_file` when using `accelerate launch`. Default location is inside the huggingface cache folder (`~/.cache/huggingface`) but can be overriden by setting the `HF_HOME` environmental variable, followed by `accelerate/default_config.yaml`. use_xpu (`bool`, *optional*, defaults to `False`): Whether to use XPU if available.
def write_basic_config(mixed_precision="no", save_location: str = default_json_config_file, use_xpu: bool = False): """ Creates and saves a basic cluster config to be used on a local machine with potentially multiple GPUs. Will also set CPU if it is a CPU-only machine. Args: mixed_precision (`str`, *optional*, defaults to "no"): Mixed Precision to use. Should be one of "no", "fp16", or "bf16" save_location (`str`, *optional*, defaults to `default_json_config_file`): Optional custom save location. Should be passed to `--config_file` when using `accelerate launch`. Default location is inside the huggingface cache folder (`~/.cache/huggingface`) but can be overriden by setting the `HF_HOME` environmental variable, followed by `accelerate/default_config.yaml`. use_xpu (`bool`, *optional*, defaults to `False`): Whether to use XPU if available. """ path = Path(save_location) path.parent.mkdir(parents=True, exist_ok=True) if path.exists(): print( f"Configuration already exists at {save_location}, will not override. Run `accelerate config` manually or pass a different `save_location`." ) return False mixed_precision = mixed_precision.lower() if mixed_precision not in ["no", "fp16", "bf16", "fp8"]: raise ValueError( f"`mixed_precision` should be one of 'no', 'fp16', 'bf16', or 'fp8'. Received {mixed_precision}" ) config = { "compute_environment": "LOCAL_MACHINE", "mixed_precision": mixed_precision, } if is_mlu_available(): num_mlus = torch.mlu.device_count() config["num_processes"] = num_mlus config["use_cpu"] = False if num_mlus > 1: config["distributed_type"] = "MULTI_MLU" else: config["distributed_type"] = "NO" elif torch.cuda.is_available(): num_gpus = torch.cuda.device_count() config["num_processes"] = num_gpus config["use_cpu"] = False if num_gpus > 1: config["distributed_type"] = "MULTI_GPU" else: config["distributed_type"] = "NO" elif is_xpu_available() and use_xpu: num_xpus = torch.xpu.device_count() config["num_processes"] = num_xpus config["use_cpu"] = False if num_xpus > 1: config["distributed_type"] = "MULTI_XPU" else: config["distributed_type"] = "NO" elif is_npu_available(): num_npus = torch.npu.device_count() config["num_processes"] = num_npus config["use_cpu"] = False if num_npus > 1: config["distributed_type"] = "MULTI_NPU" else: config["distributed_type"] = "NO" else: num_xpus = 0 config["use_cpu"] = True config["num_processes"] = 1 config["distributed_type"] = "NO" config["debug"] = False config = ClusterConfig(**config) config.to_json_file(path) return path
Update an existing config file with the latest defaults while maintaining the old configuration.
def update_config(args): """ Update an existing config file with the latest defaults while maintaining the old configuration. """ config_file = args.config_file if config_file is None and Path(default_config_file).exists(): config_file = default_config_file elif not Path(config_file).exists(): raise ValueError(f"The passed config file located at {config_file} doesn't exist.") config = load_config_from_file(config_file) if config_file.endswith(".json"): config.to_json_file(config_file) else: config.to_yaml_file(config_file) return config_file
Context manager to hide the terminal cursor
def hide(): "Context manager to hide the terminal cursor" try: hide_cursor() yield finally: show_cursor()
Mark the function with the key code so it can be handled in the register
def mark(key: str): """ Mark the function with the key code so it can be handled in the register """ def decorator(func): handle = getattr(func, "handle_key", []) handle += [key] func.handle_key = handle return func return decorator
Mark the function with the key codes so it can be handled in the register
def mark_multiple(*keys: List[str]): """ Mark the function with the key codes so it can be handled in the register """ def decorator(func): handle = getattr(func, "handle_key", []) handle += keys func.handle_key = handle return func return decorator
Adds KeyHandler metaclass to the class
def register(cls): """Adds KeyHandler metaclass to the class""" return KeyHandler(cls.__name__, cls.__bases__, cls.__dict__.copy())
Gets raw characters from inputs
def get_raw_chars(): "Gets raw characters from inputs" if os.name == "nt": import msvcrt encoding = "mbcs" # Flush the keyboard buffer while msvcrt.kbhit(): msvcrt.getch() if len(WIN_CH_BUFFER) == 0: # Read the keystroke ch = msvcrt.getch() # If it is a prefix char, get second part if ch in (b"\x00", b"\xe0"): ch2 = ch + msvcrt.getch() # Translate actual Win chars to bullet char types try: chx = chr(WIN_KEYMAP[ch2]) WIN_CH_BUFFER.append(chr(KEYMAP["mod_int"])) WIN_CH_BUFFER.append(chx) if ord(chx) in ( KEYMAP["insert"] - 1 << 9, KEYMAP["delete"] - 1 << 9, KEYMAP["pg_up"] - 1 << 9, KEYMAP["pg_down"] - 1 << 9, ): WIN_CH_BUFFER.append(chr(126)) ch = chr(KEYMAP["esc"]) except KeyError: ch = ch2[1] else: ch = ch.decode(encoding) else: ch = WIN_CH_BUFFER.pop(0) elif os.name == "posix": import termios import tty fd = sys.stdin.fileno() old_settings = termios.tcgetattr(fd) try: tty.setraw(fd) ch = sys.stdin.read(1) finally: termios.tcsetattr(fd, termios.TCSADRAIN, old_settings) return ch
Gets a character from the keyboard and returns the key code
def get_character(): "Gets a character from the keyboard and returns the key code" char = get_raw_chars() if ord(char) in [KEYMAP["interrupt"], KEYMAP["newline"]]: return char elif ord(char) == KEYMAP["esc"]: combo = get_raw_chars() if ord(combo) == KEYMAP["mod_int"]: key = get_raw_chars() if ord(key) >= KEYMAP["arrow_begin"] - ARROW_KEY_FLAG and ord(key) <= KEYMAP["arrow_end"] - ARROW_KEY_FLAG: return chr(ord(key) + ARROW_KEY_FLAG) else: return KEYMAP["undefined"] else: return get_raw_chars() else: if char in string.printable: return char else: return KEYMAP["undefined"]
Extracts a function from `lines` of segmented source code with the name `name`. Args: lines (`List[str]`): Source code of a script seperated by line. name (`str`): The name of the function to extract. Should be either `training_function` or `main`
def get_function_contents_by_name(lines: List[str], name: str): """ Extracts a function from `lines` of segmented source code with the name `name`. Args: lines (`List[str]`): Source code of a script seperated by line. name (`str`): The name of the function to extract. Should be either `training_function` or `main` """ if name != "training_function" and name != "main": raise ValueError(f"Incorrect function name passed: {name}, choose either 'main' or 'training_function'") good_lines, found_start = [], False for line in lines: if not found_start and f"def {name}" in line: found_start = True good_lines.append(line) continue if found_start: if name == "training_function" and "def main" in line: return good_lines if name == "main" and "if __name__" in line: return good_lines good_lines.append(line)
Filters `lines` and removes any entries that start with a comment ('#') or is just a newline (' ') Args: lines (`List[str]`): Source code of a script seperated by line.
def clean_lines(lines: List[str]): """ Filters `lines` and removes any entries that start with a comment ('#') or is just a newline ('\n') Args: lines (`List[str]`): Source code of a script seperated by line. """ return [line for line in lines if not line.lstrip().startswith("#") and line != "\n"]
Tests whether the additional code inside of `feature_filename` was implemented in `base_filename`. This should be used when testing to see if `complete_*_.py` examples have all of the implementations from each of the `examples/by_feature/*` scripts. It utilizes `nlp_example.py` to extract out all of the repeated training code, so that only the new additional code is examined and checked. If something *other* than `nlp_example.py` should be used, such as `cv_example.py` for the `complete_cv_example.py` script, it should be passed in for the `secondary_filename` parameter. Args: base_filename (`str` or `os.PathLike`): The filepath of a single "complete" example script to test, such as `examples/complete_cv_example.py` feature_filename (`str` or `os.PathLike`): The filepath of a single feature example script. The contents of this script are checked to see if they exist in `base_filename` parser_only (`bool`): Whether to compare only the `main()` sections in both files, or to compare the contents of `training_loop()` secondary_filename (`str`, *optional*): A potential secondary filepath that should be included in the check. This function extracts the base functionalities off of "examples/nlp_example.py", so if `base_filename` is a script other than `complete_nlp_example.py`, the template script should be included here. Such as `examples/cv_example.py`
def compare_against_test(base_filename: str, feature_filename: str, parser_only: bool, secondary_filename: str = None): """ Tests whether the additional code inside of `feature_filename` was implemented in `base_filename`. This should be used when testing to see if `complete_*_.py` examples have all of the implementations from each of the `examples/by_feature/*` scripts. It utilizes `nlp_example.py` to extract out all of the repeated training code, so that only the new additional code is examined and checked. If something *other* than `nlp_example.py` should be used, such as `cv_example.py` for the `complete_cv_example.py` script, it should be passed in for the `secondary_filename` parameter. Args: base_filename (`str` or `os.PathLike`): The filepath of a single "complete" example script to test, such as `examples/complete_cv_example.py` feature_filename (`str` or `os.PathLike`): The filepath of a single feature example script. The contents of this script are checked to see if they exist in `base_filename` parser_only (`bool`): Whether to compare only the `main()` sections in both files, or to compare the contents of `training_loop()` secondary_filename (`str`, *optional*): A potential secondary filepath that should be included in the check. This function extracts the base functionalities off of "examples/nlp_example.py", so if `base_filename` is a script other than `complete_nlp_example.py`, the template script should be included here. Such as `examples/cv_example.py` """ with open(base_filename) as f: base_file_contents = f.readlines() with open(os.path.abspath(os.path.join("examples", "nlp_example.py"))) as f: full_file_contents = f.readlines() with open(feature_filename) as f: feature_file_contents = f.readlines() if secondary_filename is not None: with open(secondary_filename) as f: secondary_file_contents = f.readlines() # This is our base, we remove all the code from here in our `full_filename` and `feature_filename` to find the new content if parser_only: base_file_func = clean_lines(get_function_contents_by_name(base_file_contents, "main")) full_file_func = clean_lines(get_function_contents_by_name(full_file_contents, "main")) feature_file_func = clean_lines(get_function_contents_by_name(feature_file_contents, "main")) if secondary_filename is not None: secondary_file_func = clean_lines(get_function_contents_by_name(secondary_file_contents, "main")) else: base_file_func = clean_lines(get_function_contents_by_name(base_file_contents, "training_function")) full_file_func = clean_lines(get_function_contents_by_name(full_file_contents, "training_function")) feature_file_func = clean_lines(get_function_contents_by_name(feature_file_contents, "training_function")) if secondary_filename is not None: secondary_file_func = clean_lines( get_function_contents_by_name(secondary_file_contents, "training_function") ) _dl_line = "train_dataloader, eval_dataloader = get_dataloaders(accelerator, batch_size)\n" # Specific code in our script that differs from the full version, aka what is new new_feature_code = [] passed_idxs = [] # We keep track of the idxs just in case it's a repeated statement it = iter(feature_file_func) for i in range(len(feature_file_func) - 1): if i not in passed_idxs: line = next(it) if (line not in full_file_func) and (line.lstrip() != _dl_line): if "TESTING_MOCKED_DATALOADERS" not in line: new_feature_code.append(line) passed_idxs.append(i) else: # Skip over the `config['num_epochs'] = 2` statement _ = next(it) # Extract out just the new parts from the 
full_file_training_func new_full_example_parts = [] passed_idxs = [] # We keep track of the idxs just in case it's a repeated statement for i, line in enumerate(base_file_func): if i not in passed_idxs: if (line not in full_file_func) and (line.lstrip() != _dl_line): if "TESTING_MOCKED_DATALOADERS" not in line: new_full_example_parts.append(line) passed_idxs.append(i) # Finally, get the overall diff diff_from_example = [line for line in new_feature_code if line not in new_full_example_parts] if secondary_filename is not None: diff_from_two = [line for line in full_file_contents if line not in secondary_file_func] diff_from_example = [line for line in diff_from_example if line not in diff_from_two] return diff_from_example
Wraps around `kwargs` to help simplify launching from `subprocess`. Example: ```python # returns ['accelerate', 'launch', '--num_processes=2', '--device_count=2'] get_launch_command(num_processes=2, device_count=2) ```
def get_launch_command(**kwargs) -> list: """ Wraps around `kwargs` to help simplify launching from `subprocess`. Example: ```python # returns ['accelerate', 'launch', '--num_processes=2', '--device_count=2'] get_launch_command(num_processes=2, device_count=2) ``` """ command = ["accelerate", "launch"] for k, v in kwargs.items(): if isinstance(v, bool) and v: command.append(f"--{k}") elif v is not None: command.append(f"--{k}={v}") return command
Decorator that skips a test unconditionally
def skip(test_case): "Decorator that skips a test unconditionally" return unittest.skip("Test was skipped")(test_case)
Decorator marking a test as slow. Slow tests are skipped by default. Set the RUN_SLOW environment variable to a truthy value to run them.
def slow(test_case): """ Decorator marking a test as slow. Slow tests are skipped by default. Set the RUN_SLOW environment variable to a truthy value to run them. """ return unittest.skipUnless(_run_slow_tests, "test is slow")(test_case)
Decorator marking a test that must be only ran on the CPU. These tests are skipped when a GPU is available.
def require_cpu(test_case): """ Decorator marking a test that must be only ran on the CPU. These tests are skipped when a GPU is available. """ return unittest.skipUnless(torch_device == "cpu", "test requires only a CPU")(test_case)
Decorator marking a test that requires a hardware accelerator backend. These tests are skipped when there are no hardware accelerator available.
def require_non_cpu(test_case): """ Decorator marking a test that requires a hardware accelerator backend. These tests are skipped when there are no hardware accelerator available. """ return unittest.skipUnless(torch_device != "cpu", "test requires a GPU")(test_case)
Decorator marking a test that requires CUDA. These tests are skipped when there are no GPU available or when TorchXLA is available.
def require_cuda(test_case): """ Decorator marking a test that requires CUDA. These tests are skipped when there are no GPU available or when TorchXLA is available. """ return unittest.skipUnless(is_cuda_available() and not is_torch_xla_available(), "test requires a GPU")(test_case)
Decorator marking a test that requires XPU. These tests are skipped when there are no XPU available.
def require_xpu(test_case): """ Decorator marking a test that requires XPU. These tests are skipped when there are no XPU available. """ return unittest.skipUnless(is_xpu_available(), "test requires a XPU")(test_case)
Decorator marking a test that should be skipped for XPU.
def require_non_xpu(test_case): """ Decorator marking a test that should be skipped for XPU. """ return unittest.skipUnless(torch_device != "xpu", "test requires a non-XPU")(test_case)
Decorator marking a test that requires MLU. These tests are skipped when there are no MLU available.
def require_mlu(test_case): """ Decorator marking a test that requires MLU. These tests are skipped when there are no MLU available. """ return unittest.skipUnless(is_mlu_available(), "test require a MLU")(test_case)
Decorator marking a test that requires NPU. These tests are skipped when there are no NPU available.
def require_npu(test_case): """ Decorator marking a test that requires NPU. These tests are skipped when there are no NPU available. """ return unittest.skipUnless(is_npu_available(), "test require a NPU")(test_case)
Decorator marking a test that requires MPS backend. These tests are skipped when torch doesn't support `mps` backend.
def require_mps(test_case): """ Decorator marking a test that requires MPS backend. These tests are skipped when torch doesn't support `mps` backend. """ return unittest.skipUnless(is_mps_available(), "test requires a `mps` backend support in `torch`")(test_case)
Decorator marking a test that requires transformers and datasets. These tests are skipped when they are not.
def require_huggingface_suite(test_case): """ Decorator marking a test that requires transformers and datasets. These tests are skipped when they are not. """ return unittest.skipUnless( is_transformers_available() and is_datasets_available(), "test requires the Hugging Face suite", )(test_case)
Decorator marking a test that requires transformers. These tests are skipped when they are not.
def require_transformers(test_case): """ Decorator marking a test that requires transformers. These tests are skipped when they are not. """ return unittest.skipUnless(is_transformers_available(), "test requires the transformers library")(test_case)

DocuMint Dataset

The DocuMint Dataset is a collection of 100,000 Python functions and their corresponding docstrings, extracted from popular open-source repositories in the free and open-source software (FLOSS) ecosystem. It was created to train the DocuMint model, a fine-tuned variant of Google's CodeGemma-2B that generates high-quality docstrings for Python functions. For more information on the model and its training procedure, please refer to the model card.

Dataset Description

The dataset consists of JSON-formatted entries, each containing a Python function definition (as the instruction) and its associated docstring (as the response). The functions were sourced from well-established and actively maintained projects, filtered based on metrics such as the number of contributors (> 50), commits (> 5k), stars (> 35k), and forks (> 10k).
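
As a rough sketch of how such a filter could look, the selection criteria above translate into a simple predicate over repository metadata. The function and field names below are illustrative assumptions, not the project's actual curation code.

def is_candidate_repository(metadata: dict) -> bool:
    # Thresholds taken from the description above; the metadata keys are hypothetical.
    return (
        metadata.get("contributors", 0) > 50
        and metadata.get("commits", 0) > 5_000
        and metadata.get("stars", 0) > 35_000
        and metadata.get("forks", 0) > 10_000
    )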

Data Sources

Dataset Structure

Each entry in the dataset follows this structure:

{
  "instruction": "def get_dataloaders(accelerator: Accelerator, batch_size: int = 16):\n    \"\"\"\n    Creates a set of `DataLoader`s for the `glue` dataset,\n    using \"bert-base-cased\" as the tokenizer.\n\n    Args:\n        accelerator (`Accelerator`):\n            An `Accelerator` object\n        batch_size (`int`, *optional*):\n            The batch size for the train and validation DataLoaders.\n    \"\"\"\n    tokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased\")\n    datasets = load_dataset(\"glue\", \"mrpc\")\n\n    def tokenize_function(examples):\n        # max_length=None => use the model max length (it's actually the default)\n        outputs = tokenizer(examples[\"sentence1\"], examples[\"sentence2\"], truncation=True, max_length=None)\n        return outputs\n\n    # Apply the method we just defined to all the examples in all the splits of the dataset\n    # starting with the main process first:\n    with accelerator.main_process_first():\n        tokenized_datasets = datasets.map(\n            tokenize_function,\n            batched=True,\n            remove_columns=[\"idx\", \"sentence1\", \"sentence2\"],\n        )\n\n    # We also rename the 'label' column to 'labels' which is the expected name for labels by the models of the\n    # transformers library\n    tokenized_datasets = tokenized_datasets.rename_column(\"label\", \"labels\")\n\n    def collate_fn(examples):\n        # For Torchxla, it's best to pad everything to the same length or training will be very slow.\n        max_length = 128 if accelerator.distributed_type == DistributedType.XLA else None\n        # When using mixed precision we want round multiples of 8/16\n        if accelerator.mixed_precision == \"fp8\":\n            pad_to_multiple_of = 16\n\t        elif accelerator.mixed_precision != \"no\":\n            pad_to_multiple_of = 8\n\t\t        else:\n            pad_to_multiple_of = None\n\n        return tokenizer.pad(\n            examples,\n            padding=\"longest\",\n            max_length=max_length,\n            pad_to_multiple_of=pad_to_multiple_of,\n            return_tensors=\"pt\",\n        )\n\n    # Instantiate dataloaders.\n    train_dataloader = DataLoader(\n        tokenized_datasets[\"train\"], shuffle=True, collate_fn=collate_fn, batch_size=batch_size, drop_last=True\n    )\n    eval_dataloader = DataLoader(\n        tokenized_datasets[\"validation\"],\n        shuffle=False,\n        collate_fn=collate_fn,\n        batch_size=EVAL_BATCH_SIZE,\n        drop_last=(accelerator.mixed_precision == \"fp8\"),\n    )\n\n    return train_dataloader, eval_dataloader",
  "response": "Creates a set of `DataLoader`s for the `glue` dataset,\nusing \"bert-base-cased\" as the tokenizer.\n\nArgs:\n    accelerator (`Accelerator`):\n        An `Accelerator` object\n    batch_size (`int`, *optional*):\n        The batch size for the train and validation DataLoaders."
}
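
A minimal sketch of loading the dataset and inspecting these two fields with the Hugging Face datasets library is shown below. The Hub identifier used here is a placeholder assumption and should be replaced with this dataset's actual repository name.

from datasets import load_dataset

# "documint/documint-dataset" is a placeholder identifier, not necessarily the real repository name.
ds = load_dataset("documint/documint-dataset", split="train")

example = ds[0]
print(example["instruction"][:300])  # the Python function definition (truncated for display)
print(example["response"])           # the reference docstring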

Dataset Use Cases

The DocuMint dataset can be used for various purposes related to code documentation and natural language processing tasks. Some potential use cases include:

  • Training and evaluating models for automatic docstring generation
  • Studying the characteristics and patterns of high-quality docstrings
  • Analyzing the relationship between code structure and its corresponding documentation
  • Developing tools for assisting developers in writing effective docstrings
  • Conducting research on the challenges and best practices in code documentation

Researchers, developers, and organizations interested in improving code documentation quality and automating the process of docstring generation can benefit from this dataset.
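
For the docstring-generation use case in particular, each entry maps naturally onto a supervised fine-tuning pair. The template below is only one possible formatting, shown as a sketch; it is not the prompt format used to train DocuMint.

def to_training_pair(entry: dict) -> dict:
    # Note: as the structure example above shows, the "instruction" field may already
    # contain the docstring, so a real pipeline would typically strip it first.
    prompt = (
        "Write a concise docstring for the following Python function:\n\n"
        + entry["instruction"]
    )
    return {"prompt": prompt, "completion": entry["response"]}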

Citation

BibTeX:

@article{poudel2024documint,
  title={DocuMint: Docstring Generation for Python using Small Language Models},
  author={Poudel, Bibek and Cook, Adam and Traore, Sekou and Ameli, Shelah},
  journal={arXiv preprint arXiv:2405.10243},
  year={2024}
}

Dataset Card Contact

  • For questions or more information, please contact: {bpoudel3,acook46,staore1,oameli}@vols.utk.edu