alexmarques committed
Commit 620445a
1 Parent(s): f83d577

Update README.md

Files changed (1)
README.md +1 -1
README.md CHANGED
@@ -33,7 +33,7 @@ Weight quantization also reduces disk size requirements by approximately 50%.
 Only weights and activations of the linear operators within transformer blocks are quantized.
 Weights are quantized with a symmetric static per-channel scheme, where a fixed linear scaling factor is applied between INT8 and floating point representations for each output channel dimension.
 Activations are quantized with a symmetric dynamic per-token scheme, computing a linear scaling factor at runtime for each token between INT8 and floating point representations.
-Linear scaling factors are computed via by minimizong the mean squarred error (MSE).
+Linear scaling factors are computed by minimizing the mean squared error (MSE).
 The [SmoothQuant](https://arxiv.org/abs/2211.10438) algorithm is used to alleviate outliers in the activations, whereas the [GPTQ](https://arxiv.org/abs/2210.17323) algorithm is applied for quantization.
 Both algorithms are implemented in the [llm-compressor](https://github.com/vllm-project/llm-compressor) library.
 GPTQ used a 1% damping factor and 512 sequences taken from Neural Magic's [LLM compression calibration dataset](https://huggingface.co/datasets/neuralmagic/LLM_compression_calibration).
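
For readers less familiar with the scheme the README describes, the following is a minimal PyTorch sketch of the W8A8 arithmetic: symmetric static per-channel scales for weights, selected by an MSE grid search (one common way to minimize quantization MSE), and symmetric dynamic per-token scales for activations. The function names and the search grid are illustrative assumptions; this is not llm-compressor's implementation.

```python
# Illustrative sketch only -- not llm-compressor's code. Shows symmetric
# per-channel (static) weight scales chosen by MSE minimization and
# symmetric per-token (dynamic) activation scales, both mapping to INT8.
import torch

def mse_weight_scales(w: torch.Tensor, grid: int = 100) -> torch.Tensor:
    # w: [out_channels, in_channels]; one scale per output channel.
    absmax = w.abs().amax(dim=1, keepdim=True).clamp(min=1e-8)
    best_scale = absmax / 127.0
    best_err = torch.full_like(absmax, float("inf"))
    # Grid search over shrunken clipping ranges, keeping the scale that
    # minimizes the mean squared reconstruction error for each channel.
    for step in range(1, grid + 1):
        scale = (absmax * step / grid) / 127.0
        q = torch.clamp(torch.round(w / scale), -127, 127)
        err = ((q * scale - w) ** 2).mean(dim=1, keepdim=True)
        better = err < best_err
        best_err = torch.where(better, err, best_err)
        best_scale = torch.where(better, scale, best_scale)
    return best_scale  # fixed at compression time ("static")

def quantize_per_token(x: torch.Tensor):
    # x: [tokens, hidden]; one scale per token, computed at runtime ("dynamic").
    scale = x.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / 127.0
    q = torch.clamp(torch.round(x / scale), -127, 127).to(torch.int8)
    return q, scale
```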
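And a sketch of how the SmoothQuant-plus-GPTQ one-shot flow mentioned above might be expressed with llm-compressor. Exact module paths and modifier arguments vary across llm-compressor versions, the model id is a placeholder (the commit does not name it here), and the smoothing strength and `lm_head` exclusion are assumptions; the damping factor and calibration sample count come from the README text.

```python
# Hedged sketch of a one-shot SmoothQuant + GPTQ W8A8 run with llm-compressor.
# Module paths and argument names vary by version; MODEL_ID is a placeholder.
from llmcompressor.transformers import oneshot
from llmcompressor.modifiers.smoothquant import SmoothQuantModifier
from llmcompressor.modifiers.quantization import GPTQModifier

MODEL_ID = "<base-model-id>"  # placeholder: not specified in this commit

recipe = [
    SmoothQuantModifier(smoothing_strength=0.8),  # strength value is illustrative
    GPTQModifier(
        targets="Linear",          # linear ops within transformer blocks
        scheme="W8A8",             # INT8 weights and activations
        ignore=["lm_head"],        # commonly excluded; an assumption here
        dampening_frac=0.01,       # the 1% damping factor mentioned above
    ),
]

oneshot(
    model=MODEL_ID,
    dataset="neuralmagic/LLM_compression_calibration",
    recipe=recipe,
    num_calibration_samples=512,   # the 512 calibration sequences mentioned above
)
```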