Stochastic gradient descent (SGD) is a basic optimizer that minimizes a loss by updating the model parameters in the direction opposite to the gradient. Each update is computed on a mini-batch of data randomly sampled from the dataset.
bitsandbytes also supports momentum and Nesterov momentum to accelerate SGD by adding a weighted average of past gradients to the current gradient.
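For reference, a minimal sketch of the momentum update in the PyTorch-style convention (the bitsandbytes kernels follow the same idea, though implementation details may differ): with momentum μ, dampening τ, learning rate lr, and mini-batch gradient g_t,

$$
b_t = \mu\, b_{t-1} + (1 - \tau)\, g_t,
\qquad
\theta_t = \theta_{t-1} - \mathrm{lr} \cdot
\begin{cases}
g_t + \mu\, b_t & \text{with Nesterov momentum,} \\
b_t & \text{otherwise.}
\end{cases}
$$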
class bitsandbytes.optim.SGD

( params, lr, momentum = 0, dampening = 0, weight_decay = 0, nesterov = False, optim_bits = 32, args = None, min_8bit_size = 4096, percentile_clipping = 100, block_wise = True )

Base SGD optimizer.

Parameters

- params (torch.tensor) — The input parameters to optimize.
- lr (float) — The learning rate.
- momentum (float, defaults to 0) — The momentum value speeds up the optimizer by taking bigger steps.
- dampening (float, defaults to 0) — The dampening value reduces the momentum of the optimizer.
- weight_decay (float, defaults to 0.0) — The weight decay value for the optimizer.
- nesterov (bool, defaults to False) — Whether to use Nesterov momentum.
- optim_bits (int, defaults to 32) — The number of bits of the optimizer state.
- args (object, defaults to None) — An object with additional arguments.
- min_8bit_size (int, defaults to 4096) — The minimum number of elements of the parameter tensors for 8-bit optimization.
- percentile_clipping (int, defaults to 100) — Adapts the clipping threshold automatically by tracking the last 100 gradient norms and clipping the gradient at a certain percentile to improve stability.
- block_wise (bool, defaults to True) — Whether to independently quantize each block of tensors to reduce outlier effects and improve stability.
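A minimal usage sketch of the base class (the layer size here is an arbitrary example, and a CUDA device is assumed), showing how `optim_bits` selects the precision of the optimizer state:

```py
import torch
import bitsandbytes as bnb

model = torch.nn.Linear(4096, 4096).cuda()

# optim_bits=8 stores the momentum buffer in 8-bit; parameter tensors with
# fewer than min_8bit_size elements keep a 32-bit state. The momentum buffer
# is the state that gets quantized, so a non-zero momentum is used here.
optimizer = bnb.optim.SGD(
    model.parameters(),
    lr=0.01,
    momentum=0.9,
    optim_bits=8,
    min_8bit_size=4096,
)
```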
class bitsandbytes.optim.SGD8bit

( params, lr, momentum = 0, dampening = 0, weight_decay = 0, nesterov = False, args = None, min_8bit_size = 4096, percentile_clipping = 100, block_wise = True )

8-bit SGD optimizer.

Parameters

- params (torch.tensor) — The input parameters to optimize.
- lr (float) — The learning rate.
- momentum (float, defaults to 0) — The momentum value speeds up the optimizer by taking bigger steps.
- dampening (float, defaults to 0) — The dampening value reduces the momentum of the optimizer.
- weight_decay (float, defaults to 0.0) — The weight decay value for the optimizer.
- nesterov (bool, defaults to False) — Whether to use Nesterov momentum.
- args (object, defaults to None) — An object with additional arguments.
- min_8bit_size (int, defaults to 4096) — The minimum number of elements of the parameter tensors for 8-bit optimization.
- percentile_clipping (int, defaults to 100) — Adapts the clipping threshold automatically by tracking the last 100 gradient norms and clipping the gradient at a certain percentile to improve stability.
- block_wise (bool, defaults to True) — Whether to independently quantize each block of tensors to reduce outlier effects and improve stability.
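A sketch of a single training step with the 8-bit variant (the model, data, and loss are placeholders chosen for illustration; a CUDA device is assumed). Only the optimizer class changes compared to a standard PyTorch loop:

```py
import torch
import bitsandbytes as bnb

model = torch.nn.Linear(4096, 4096).cuda()
optimizer = bnb.optim.SGD8bit(model.parameters(), lr=0.01, momentum=0.9)

x = torch.randn(16, 4096, device="cuda")
target = torch.randn(16, 4096, device="cuda")

# Standard forward/backward/step; the momentum state is kept in 8-bit.
loss = torch.nn.functional.mse_loss(model(x), target)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```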
class bitsandbytes.optim.SGD32bit

( params, lr, momentum = 0, dampening = 0, weight_decay = 0, nesterov = False, args = None, min_8bit_size = 4096, percentile_clipping = 100, block_wise = True )

32-bit SGD optimizer.

Parameters

- params (torch.tensor) — The input parameters to optimize.
- lr (float) — The learning rate.
- momentum (float, defaults to 0) — The momentum value speeds up the optimizer by taking bigger steps.
- dampening (float, defaults to 0) — The dampening value reduces the momentum of the optimizer.
- weight_decay (float, defaults to 0.0) — The weight decay value for the optimizer.
- nesterov (bool, defaults to False) — Whether to use Nesterov momentum.
- args (object, defaults to None) — An object with additional arguments.
- min_8bit_size (int, defaults to 4096) — The minimum number of elements of the parameter tensors for 8-bit optimization.
- percentile_clipping (int, defaults to 100) — Adapts the clipping threshold automatically by tracking the last 100 gradient norms and clipping the gradient at a certain percentile to improve stability.
- block_wise (bool, defaults to True) — Whether to independently quantize each block of tensors to reduce outlier effects and improve stability.
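A short sketch of the 32-bit variant, which keeps the full-precision optimizer state while still exposing the stability options above (the clipping value here is an arbitrary example, not a recommended setting):

```py
import torch
import bitsandbytes as bnb

model = torch.nn.Linear(1024, 1024).cuda()

# percentile_clipping=95 clips gradients above the 95th percentile of the
# last 100 gradient norms instead of using a fixed clipping value.
optimizer = bnb.optim.SGD32bit(
    model.parameters(),
    lr=0.01,
    momentum=0.9,
    percentile_clipping=95,
)
```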