---
base_model:
- meta-llama/Meta-Llama-3-8B
- nvidia/Llama3-ChatQA-1.5-8B
library_name: transformers
tags:
- mergekit
- peft
- nvidia
- chatqa-1.5
- chatqa
- llama-3
- pytorch
license: llama3
language:
- en
pipeline_tag: text-generation
---

# Llama3-ChatQA-1.5-8B-lora

This is a LoRA adapter extracted from a language model using [mergekit](https://github.com/arcee-ai/mergekit).

## LoRA Details

This LoRA adapter was extracted from [nvidia/Llama3-ChatQA-1.5-8B](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-8B) and uses [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) as a base.

### Parameters

The following command was used to extract this LoRA adapter:

```sh
mergekit-extract-lora meta-llama/Meta-Llama-3-8B nvidia/Llama3-ChatQA-1.5-8B OUTPUT_PATH --no-lazy-unpickle --rank=64
```
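
## Usage

A minimal sketch of applying this adapter on top of the base model with `transformers` and `peft`. The adapter path below is a placeholder for wherever this adapter is stored (local directory or Hub repo id), and the prompt is purely illustrative; it does not follow the ChatQA prompt format documented in the original nvidia/Llama3-ChatQA-1.5-8B card.

```python
# Sketch: load the base model and attach this extracted LoRA adapter with PEFT.
# "PATH_TO_THIS_ADAPTER" is a placeholder, not the actual repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B"
adapter_path = "PATH_TO_THIS_ADAPTER"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Attach the rank-64 LoRA weights on top of the base model.
model = PeftModel.from_pretrained(base_model, adapter_path)

# Optionally fold the adapter into the base weights for faster inference:
# model = model.merge_and_unload()

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```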