Adam (Adaptive Moment Estimation) is an adaptive learning rate optimizer that combines ideas from SGD with momentum and RMSprop to automatically scale the learning rate:

- a weighted average of the past gradients to provide direction (first moment)
- a weighted average of the squared past gradients to adapt the learning rate to each parameter (second moment)
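These two running averages drive the parameter update. As a reference, here is a minimal sketch of the textbook Adam step for a single parameter tensor (for illustration only, not the fused bitsandbytes kernel):

```python
import torch

def adam_step(param, grad, m, v, step, lr=1e-3, betas=(0.9, 0.999), eps=1e-8, weight_decay=0.0):
    """One textbook Adam update for a single parameter tensor (in-place). step starts at 1."""
    beta1, beta2 = betas
    if weight_decay != 0.0:
        grad = grad + weight_decay * param                  # classic (L2-style) weight decay
    m.mul_(beta1).add_(grad, alpha=1 - beta1)               # first moment: EMA of gradients
    v.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)     # second moment: EMA of squared gradients
    m_hat = m / (1 - beta1 ** step)                         # bias correction
    v_hat = v / (1 - beta2 ** step)
    param.add_(m_hat / (v_hat.sqrt() + eps), alpha=-lr)     # per-parameter scaled step
```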
bitsandbytes also supports paged optimizers, which take advantage of CUDA's unified memory to transfer optimizer state from the GPU to the CPU when GPU memory is exhausted.
class bitsandbytes.optim.Adam

( params, lr = 0.001, betas = (0.9, 0.999), eps = 1e-08, weight_decay = 0, amsgrad = False, optim_bits = 32, args = None, min_8bit_size = 4096, percentile_clipping = 100, block_wise = True, is_paged = False )

Parameters

- params (torch.tensor) — The input parameters to optimize.
- lr (float, defaults to 1e-3) — The learning rate.
- betas (tuple(float, float), defaults to (0.9, 0.999)) — The beta values are the decay rates of the first- and second-order moments of the optimizer.
- eps (float, defaults to 1e-8) — The epsilon value prevents division by zero in the optimizer.
- weight_decay (float, defaults to 0.0) — The weight decay value for the optimizer.
- amsgrad (bool, defaults to False) — Whether to use the AMSGrad variant of Adam, which uses the maximum of past squared gradients instead.
- optim_bits (int, defaults to 32) — The number of bits of the optimizer state.
- args (dict, defaults to None) — A dictionary with additional arguments.
- min_8bit_size (int, defaults to 4096) — The minimum number of elements of the parameter tensors for 8-bit optimization.
- percentile_clipping (int, defaults to 100) — Adapts the clipping threshold automatically by tracking the last 100 gradient norms and clipping the gradient at a certain percentile to improve stability.
- block_wise (bool, defaults to True) — Whether to independently quantize each block of tensors to reduce outlier effects and improve stability.
- is_paged (bool, defaults to False) — Whether the optimizer is a paged optimizer or not.

Base Adam optimizer.
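It can be used much like torch.optim.Adam. A minimal sketch of a training step (the model and data below are placeholders):

```python
import torch
import bitsandbytes as bnb

model = torch.nn.Linear(1024, 1024).cuda()
optimizer = bnb.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))

x = torch.randn(16, 1024, device="cuda")
loss = model(x).pow(2).mean()   # placeholder loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
```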
class bitsandbytes.optim.Adam8bit

( params, lr = 0.001, betas = (0.9, 0.999), eps = 1e-08, weight_decay = 0, amsgrad = False, optim_bits = 32, args = None, min_8bit_size = 4096, percentile_clipping = 100, block_wise = True, is_paged = False )

Parameters

- params (torch.tensor) — The input parameters to optimize.
- lr (float, defaults to 1e-3) — The learning rate.
- betas (tuple(float, float), defaults to (0.9, 0.999)) — The beta values are the decay rates of the first- and second-order moments of the optimizer.
- eps (float, defaults to 1e-8) — The epsilon value prevents division by zero in the optimizer.
- weight_decay (float, defaults to 0.0) — The weight decay value for the optimizer.
- amsgrad (bool, defaults to False) — Whether to use the AMSGrad variant of Adam, which uses the maximum of past squared gradients instead.
- optim_bits (int, defaults to 32) — The number of bits of the optimizer state.
- args (dict, defaults to None) — A dictionary with additional arguments.
- min_8bit_size (int, defaults to 4096) — The minimum number of elements of the parameter tensors for 8-bit optimization.
- percentile_clipping (int, defaults to 100) — Adapts the clipping threshold automatically by tracking the last 100 gradient norms and clipping the gradient at a certain percentile to improve stability.
- block_wise (bool, defaults to True) — Whether to independently quantize each block of tensors to reduce outlier effects and improve stability.
- is_paged (bool, defaults to False) — Whether the optimizer is a paged optimizer or not.

8-bit Adam optimizer.
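A minimal sketch of Adam8bit, which keeps the two moment buffers in 8-bit; per the parameters above, tensors with fewer than min_8bit_size elements (e.g. small biases) keep 32-bit state:

```python
import torch
import bitsandbytes as bnb

model = torch.nn.Linear(4096, 4096).cuda()
optimizer = bnb.optim.Adam8bit(
    model.parameters(),
    lr=1e-3,
    min_8bit_size=4096,   # smaller tensors keep 32-bit optimizer state
    block_wise=True,      # quantize the state block by block for stability
)
```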
class bitsandbytes.optim.Adam32bit

( params, lr = 0.001, betas = (0.9, 0.999), eps = 1e-08, weight_decay = 0, amsgrad = False, optim_bits = 32, args = None, min_8bit_size = 4096, percentile_clipping = 100, block_wise = True, is_paged = False )

Parameters

- params (torch.tensor) — The input parameters to optimize.
- lr (float, defaults to 1e-3) — The learning rate.
- betas (tuple(float, float), defaults to (0.9, 0.999)) — The beta values are the decay rates of the first- and second-order moments of the optimizer.
- eps (float, defaults to 1e-8) — The epsilon value prevents division by zero in the optimizer.
- weight_decay (float, defaults to 0.0) — The weight decay value for the optimizer.
- amsgrad (bool, defaults to False) — Whether to use the AMSGrad variant of Adam, which uses the maximum of past squared gradients instead.
- optim_bits (int, defaults to 32) — The number of bits of the optimizer state.
- args (dict, defaults to None) — A dictionary with additional arguments.
- min_8bit_size (int, defaults to 4096) — The minimum number of elements of the parameter tensors for 8-bit optimization.
- percentile_clipping (int, defaults to 100) — Adapts the clipping threshold automatically by tracking the last 100 gradient norms and clipping the gradient at a certain percentile to improve stability.
- block_wise (bool, defaults to True) — Whether to independently quantize each block of tensors to reduce outlier effects and improve stability.
- is_paged (bool, defaults to False) — Whether the optimizer is a paged optimizer or not.

32-bit Adam optimizer.
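Adam32bit keeps the optimizer state in 32-bit; given the shared signature above, it can reasonably be read as Adam with the state pinned to 32 bits. A minimal sketch:

```python
import torch
import bitsandbytes as bnb

model = torch.nn.Linear(1024, 1024).cuda()
# Explicit 32-bit optimizer state.
optimizer = bnb.optim.Adam32bit(model.parameters(), lr=1e-3)
```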
class bitsandbytes.optim.PagedAdam

( params, lr = 0.001, betas = (0.9, 0.999), eps = 1e-08, weight_decay = 0, amsgrad = False, optim_bits = 32, args = None, min_8bit_size = 4096, percentile_clipping = 100, block_wise = True, is_paged = False )

Parameters

- params (torch.tensor) — The input parameters to optimize.
- lr (float, defaults to 1e-3) — The learning rate.
- betas (tuple(float, float), defaults to (0.9, 0.999)) — The beta values are the decay rates of the first- and second-order moments of the optimizer.
- eps (float, defaults to 1e-8) — The epsilon value prevents division by zero in the optimizer.
- weight_decay (float, defaults to 0.0) — The weight decay value for the optimizer.
- amsgrad (bool, defaults to False) — Whether to use the AMSGrad variant of Adam, which uses the maximum of past squared gradients instead.
- optim_bits (int, defaults to 32) — The number of bits of the optimizer state.
- args (dict, defaults to None) — A dictionary with additional arguments.
- min_8bit_size (int, defaults to 4096) — The minimum number of elements of the parameter tensors for 8-bit optimization.
- percentile_clipping (int, defaults to 100) — Adapts the clipping threshold automatically by tracking the last 100 gradient norms and clipping the gradient at a certain percentile to improve stability.
- block_wise (bool, defaults to True) — Whether to independently quantize each block of tensors to reduce outlier effects and improve stability.
- is_paged (bool, defaults to False) — Whether the optimizer is a paged optimizer or not.

Paged Adam optimizer.
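A minimal sketch of PagedAdam: the optimizer state lives in CUDA unified memory, so pages of state can be evicted to CPU RAM when GPU memory runs out, with no change to the training loop beyond swapping the class:

```python
import torch
import bitsandbytes as bnb

model = torch.nn.Linear(8192, 8192).cuda()
optimizer = bnb.optim.PagedAdam(model.parameters(), lr=1e-3)

x = torch.randn(4, 8192, device="cuda")
model(x).mean().backward()
optimizer.step()        # state paging to CPU happens transparently under memory pressure
optimizer.zero_grad()
```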
class bitsandbytes.optim.PagedAdam8bit

( params, lr = 0.001, betas = (0.9, 0.999), eps = 1e-08, weight_decay = 0, amsgrad = False, optim_bits = 32, args = None, min_8bit_size = 4096, percentile_clipping = 100, block_wise = True, is_paged = False )

Parameters

- params (torch.tensor) — The input parameters to optimize.
- lr (float, defaults to 1e-3) — The learning rate.
- betas (tuple(float, float), defaults to (0.9, 0.999)) — The beta values are the decay rates of the first- and second-order moments of the optimizer.
- eps (float, defaults to 1e-8) — The epsilon value prevents division by zero in the optimizer.
- weight_decay (float, defaults to 0.0) — The weight decay value for the optimizer.
- amsgrad (bool, defaults to False) — Whether to use the AMSGrad variant of Adam, which uses the maximum of past squared gradients instead.
- optim_bits (int, defaults to 32) — The number of bits of the optimizer state.
- args (dict, defaults to None) — A dictionary with additional arguments.
- min_8bit_size (int, defaults to 4096) — The minimum number of elements of the parameter tensors for 8-bit optimization.
- percentile_clipping (int, defaults to 100) — Adapts the clipping threshold automatically by tracking the last 100 gradient norms and clipping the gradient at a certain percentile to improve stability.
- block_wise (bool, defaults to True) — Whether to independently quantize each block of tensors to reduce outlier effects and improve stability.
- is_paged (bool, defaults to False) — Whether the optimizer is a paged optimizer or not.

8-bit paged Adam optimizer.
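PagedAdam8bit combines both memory savers: 8-bit optimizer state that is also paged between GPU and CPU under memory pressure, a common choice for memory-constrained fine-tuning. A minimal sketch:

```python
import torch
import bitsandbytes as bnb

model = torch.nn.Linear(8192, 8192).cuda()
optimizer = bnb.optim.PagedAdam8bit(model.parameters(), lr=1e-3, betas=(0.9, 0.999))
```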
class bitsandbytes.optim.PagedAdam32bit

( params, lr = 0.001, betas = (0.9, 0.999), eps = 1e-08, weight_decay = 0, amsgrad = False, optim_bits = 32, args = None, min_8bit_size = 4096, percentile_clipping = 100, block_wise = True, is_paged = False )

Parameters

- params (torch.tensor) — The input parameters to optimize.
- lr (float, defaults to 1e-3) — The learning rate.
- betas (tuple(float, float), defaults to (0.9, 0.999)) — The beta values are the decay rates of the first- and second-order moments of the optimizer.
- eps (float, defaults to 1e-8) — The epsilon value prevents division by zero in the optimizer.
- weight_decay (float, defaults to 0.0) — The weight decay value for the optimizer.
- amsgrad (bool, defaults to False) — Whether to use the AMSGrad variant of Adam, which uses the maximum of past squared gradients instead.
- optim_bits (int, defaults to 32) — The number of bits of the optimizer state.
- args (dict, defaults to None) — A dictionary with additional arguments.
- min_8bit_size (int, defaults to 4096) — The minimum number of elements of the parameter tensors for 8-bit optimization.
- percentile_clipping (int, defaults to 100) — Adapts the clipping threshold automatically by tracking the last 100 gradient norms and clipping the gradient at a certain percentile to improve stability.
- block_wise (bool, defaults to True) — Whether to independently quantize each block of tensors to reduce outlier effects and improve stability.
- is_paged (bool, defaults to False) — Whether the optimizer is a paged optimizer or not.

Paged 32-bit Adam optimizer.