This is the merged Llama 3 8B 1M base model, obtained by merging the Llama 3 8B base model with the LoRA extracted from Gradient AI's 1M-context-length Instruct model: https://huggingface.co/gradientai/Llama-3-8B-Instruct-Gradient-1048k

The LoRA adapter is available at https://huggingface.co/winglian/llama-3-1m-context-gradient-lora
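Merging a LoRA folds the adapter's low-rank update back into each affected base weight matrix, so the merged model needs no adapter at inference time. A minimal sketch of the underlying arithmetic, using toy dimensions chosen purely for illustration (real Llama 3 8B layers are far larger):

```python
import numpy as np

# Toy shapes for illustration only; alpha and r are hypothetical LoRA hyperparameters.
d_out, d_in, r, alpha = 8, 8, 2, 16

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))  # frozen base weight
A = rng.standard_normal((r, d_in))      # LoRA down-projection
B = rng.standard_normal((d_out, r))     # LoRA up-projection

# Merging folds the scaled low-rank update into the base weight:
#   W_merged = W + (alpha / r) * B @ A
W_merged = W + (alpha / r) * (B @ A)

# The merged matrix produces the same output as applying the base weight
# and the adapter separately and summing the results.
x = rng.standard_normal(d_in)
assert np.allclose(W_merged @ x, W @ x + (alpha / r) * (B @ (A @ x)))
```

In practice, the equivalent operation for a real checkpoint is performed by tooling such as `merge_and_unload()` in the `peft` library, applied layer by layer across the model.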