Bitsandbytes documentation

AdaGrad

AdaGrad (Adaptive Gradient) is an adaptive learning rate optimizer. AdaGrad stores a running sum of the squared past gradients for each parameter and uses it to scale that parameter's learning rate. As a result, parameters with large accumulated gradients receive smaller effective learning rates while parameters with small or infrequent gradients receive larger ones, reducing the need to manually tune the learning rate.
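
For reference, the textbook AdaGrad update for a parameter θ with gradient g_t at step t can be sketched as follows (standard formulation only; the implementation may differ in details such as weight decay and lr_decay handling):

G_t = G_{t-1} + g_t^2
\theta_{t+1} = \theta_t - \frac{lr}{\sqrt{G_t} + eps} \, g_t

Here G_t is the accumulated sum of squared past gradients, lr is the learning rate, and eps is a small constant that prevents division by zero.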

Adagrad

class bitsandbytes.optim.Adagrad

( params, lr = 0.01, lr_decay = 0, weight_decay = 0, initial_accumulator_value = 0, eps = 1e-10, optim_bits = 32, args = None, min_8bit_size = 4096, percentile_clipping = 100, block_wise = True )

__init__

( params, lr = 0.01, lr_decay = 0, weight_decay = 0, initial_accumulator_value = 0, eps = 1e-10, optim_bits = 32, args = None, min_8bit_size = 4096, percentile_clipping = 100, block_wise = True )

Parameters

  • params (torch.Tensor) — The input parameters to optimize.
  • lr (float, defaults to 1e-2) — The learning rate.
  • lr_decay (int, defaults to 0) — The learning rate decay.
  • weight_decay (float, defaults to 0.0) — The weight decay value for the optimizer.
  • initial_accumulator_value (int, defaults to 0) — The initial value of the squared-gradient accumulator.
  • eps (float, defaults to 1e-10) — A small epsilon value added to the denominator to prevent division by zero.
  • optim_bits (int, defaults to 32) — The number of bits of the optimizer state.
  • args (object, defaults to None) — An object with additional arguments.
  • min_8bit_size (int, defaults to 4096) — The minimum number of elements a parameter tensor must have to be eligible for 8-bit optimization.
  • percentile_clipping (int, defaults to 100) — Automatically adapts the clipping threshold by tracking the last 100 gradient norms and clipping the gradient at the given percentile to improve stability.
  • block_wise (bool, defaults to True) — Whether to independently quantize each block of tensors to reduce outlier effects and improve stability.

Base Adagrad optimizer.
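
A minimal usage sketch (not taken from the official examples; the toy model, tensor shapes, and CUDA placement are assumptions) showing the class as a drop-in replacement for torch.optim.Adagrad:

import torch
import bitsandbytes as bnb

# Toy model and data; bitsandbytes optimizers are intended for CUDA tensors.
model = torch.nn.Linear(1024, 1024).cuda()
optimizer = bnb.optim.Adagrad(model.parameters(), lr=0.01, eps=1e-10)

x = torch.randn(16, 1024, device="cuda")
loss = model(x).pow(2).mean()  # dummy loss for illustration
loss.backward()
optimizer.step()
optimizer.zero_grad()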

Adagrad8bit

class bitsandbytes.optim.Adagrad8bit

( params, lr = 0.01, lr_decay = 0, weight_decay = 0, initial_accumulator_value = 0, eps = 1e-10, optim_bits = 8, args = None, min_8bit_size = 4096, percentile_clipping = 100, block_wise = True )

__init__

( params, lr = 0.01, lr_decay = 0, weight_decay = 0, initial_accumulator_value = 0, eps = 1e-10, optim_bits = 8, args = None, min_8bit_size = 4096, percentile_clipping = 100, block_wise = True )

Parameters

  • params (torch.Tensor) — The input parameters to optimize.
  • lr (float, defaults to 1e-2) — The learning rate.
  • lr_decay (int, defaults to 0) — The learning rate decay.
  • weight_decay (float, defaults to 0.0) — The weight decay value for the optimizer.
  • initial_accumulator_value (int, defaults to 0) — The initial value of the squared-gradient accumulator.
  • eps (float, defaults to 1e-10) — A small epsilon value added to the denominator to prevent division by zero.
  • optim_bits (int, defaults to 8) — The number of bits of the optimizer state.
  • args (object, defaults to None) — An object with additional arguments.
  • min_8bit_size (int, defaults to 4096) — The minimum number of elements a parameter tensor must have to be eligible for 8-bit optimization.
  • percentile_clipping (int, defaults to 100) — Automatically adapts the clipping threshold by tracking the last 100 gradient norms and clipping the gradient at the given percentile to improve stability.
  • block_wise (bool, defaults to True) — Whether to independently quantize each block of tensors to reduce outlier effects and improve stability.

8-bit Adagrad optimizer.
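
A sketch of how the 8-bit variant might be configured (the model size and parameter values below are illustrative assumptions, not recommendations from the library):

import torch
import bitsandbytes as bnb

# The 4096x4096 weight matrix exceeds min_8bit_size, so its optimizer state is
# kept in 8-bit; smaller tensors such as the bias fall back to 32-bit state.
model = torch.nn.Linear(4096, 4096).cuda()
optimizer = bnb.optim.Adagrad8bit(
    model.parameters(),
    lr=0.01,
    min_8bit_size=4096,       # default: minimum tensor size eligible for 8-bit state
    percentile_clipping=100,  # default: 100 effectively disables clipping; smaller values clip more aggressively
    block_wise=True,          # default: quantize the optimizer state block by block
)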

Adagrad32bit

class bitsandbytes.optim.Adagrad32bit

( params, lr = 0.01, lr_decay = 0, weight_decay = 0, initial_accumulator_value = 0, eps = 1e-10, optim_bits = 32, args = None, min_8bit_size = 4096, percentile_clipping = 100, block_wise = True )

__init__

( params, lr = 0.01, lr_decay = 0, weight_decay = 0, initial_accumulator_value = 0, eps = 1e-10, optim_bits = 32, args = None, min_8bit_size = 4096, percentile_clipping = 100, block_wise = True )

Parameters

  • params (torch.Tensor) — The input parameters to optimize.
  • lr (float, defaults to 1e-2) — The learning rate.
  • lr_decay (int, defaults to 0) — The learning rate decay.
  • weight_decay (float, defaults to 0.0) — The weight decay value for the optimizer.
  • initial_accumulator_value (int, defaults to 0) — The initial value of the squared-gradient accumulator.
  • eps (float, defaults to 1e-10) — A small epsilon value added to the denominator to prevent division by zero.
  • optim_bits (int, defaults to 32) — The number of bits of the optimizer state.
  • args (object, defaults to None) — An object with additional arguments.
  • min_8bit_size (int, defaults to 4096) — The minimum number of elements a parameter tensor must have to be eligible for 8-bit optimization.
  • percentile_clipping (int, defaults to 100) — Automatically adapts the clipping threshold by tracking the last 100 gradient norms and clipping the gradient at the given percentile to improve stability.
  • block_wise (bool, defaults to True) — Whether to independently quantize each block of tensors to reduce outlier effects and improve stability.

32-bit Adagrad optimizer.
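
Given the defaults above, Adagrad32bit reads as an explicit spelling of the base class with optim_bits = 32. A short sketch (the equivalence is inferred from the documented defaults, not verified against the source):

import torch
import bitsandbytes as bnb

model = torch.nn.Linear(1024, 1024).cuda()
# These two constructions should behave the same, since the base Adagrad
# already defaults to optim_bits=32.
opt_explicit = bnb.optim.Adagrad32bit(model.parameters(), lr=0.01)
opt_default = bnb.optim.Adagrad(model.parameters(), lr=0.01)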
