---
license: llama3
language:
- en
base_model:
- meta-llama/Meta-Llama-3-8B
pipeline_tag: text-generation
tags:
- llama-3
- llama
- meta
- facebook
- amd
- onnx
---

# meta-llama/Meta-Llama-3-8B

## Introduction

This model was created by applying [Quark](https://quark.docs.amd.com/latest/index.html) with calibration samples from the Pile dataset, then applying the [onnxruntime-genai model builder](https://github.com/microsoft/onnxruntime-genai/tree/main/src/python/py/models) to convert the result to ONNX.

## Quantization Strategy

- ***Quantized Layers***: TBD
- ***Weight***: TBD

## Quick Start

For a quick start, refer to AMD [RyzenAI-SW-EA](https://account.amd.com/en/member/ryzenai-sw-ea.html).

#### Evaluation scores

Perplexity is measured on the wikitext-2-raw-v1 (raw data) dataset provided by Hugging Face. The perplexity score measured at a prompt length of 2k is 6.753044.

#### License

Modifications copyright (c) 2024 Advanced Micro Devices, Inc. All rights reserved.

license: llama3
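For reference, the perplexity score reported above is the exponential of the mean negative per-token log-likelihood over the evaluation tokens. A minimal, self-contained sketch of that computation (the log-probabilities below are illustrative placeholders, not outputs of this model):

```python
import math

def perplexity(token_log_probs):
    """Perplexity = exp of the mean negative log-likelihood per token."""
    nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(nll)

# Illustrative per-token log-probabilities for a short sequence (placeholders).
log_probs = [-1.2, -0.4, -2.3, -0.9]
print(perplexity(log_probs))
```

A lower perplexity means the model assigns higher probability to the held-out text; in practice the log-probabilities come from the model's logits over each next token in the dataset.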