## Llama-3-70B-Instruct-FP8-v1
* Weights and activations are per-tensor quantized to float8_e4m3.
* Quantized with AutoFP8 (see the sketch after this list).
* Calibration dataset: Ultrachat (mgoin/ultrachat_2k).
* Calibration samples: 1024.
* Calibration sequence length: 4096.
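
The exact quantization script is not published here; the following is a minimal sketch of how a checkpoint like this could be produced with AutoFP8's Python API (`AutoFP8ForCausalLM`, `BaseQuantizeConfig`). The base model path, the `train_sft` split name, and the chat-template preprocessing are assumptions, not taken from this card.

```python
from datasets import load_dataset
from transformers import AutoTokenizer
from auto_fp8 import AutoFP8ForCausalLM, BaseQuantizeConfig

# Assumed base model and output directory (not stated in this card).
pretrained_model_dir = "meta-llama/Meta-Llama-3-70B-Instruct"
quantized_model_dir = "Llama-3-70B-Instruct-FP8-v1"

tokenizer = AutoTokenizer.from_pretrained(pretrained_model_dir, use_fast=True)
tokenizer.pad_token = tokenizer.eos_token

# Build 1024 calibration samples, padded/truncated to 4096 tokens each.
# Split name "train_sft" is an assumption about mgoin/ultrachat_2k.
ds = load_dataset("mgoin/ultrachat_2k", split="train_sft").select(range(1024))
texts = [tokenizer.apply_chat_template(row["messages"], tokenize=False) for row in ds]
examples = tokenizer(
    texts, padding=True, truncation=True, max_length=4096, return_tensors="pt"
).to("cuda")

# Static scheme: fixed per-tensor float8_e4m3 scales for both weights
# and activations, computed from the calibration pass.
quantize_config = BaseQuantizeConfig(quant_method="fp8", activation_scheme="static")

model = AutoFP8ForCausalLM.from_pretrained(pretrained_model_dir, quantize_config=quantize_config)
model.quantize(examples)
model.save_quantized(quantized_model_dir)
```

The `static` activation scheme matches the per-tensor weight and activation quantization described above: activation scales are fixed at calibration time rather than recomputed per batch at inference.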
## Evaluation
TBA