This is the same model as Yukang's Llama-2-70b-longlora-32k, except that the extra pad token has been stripped from the tokenizer to match the base Llama tokenizer, and the LoRA has been merged into the base model. Please refer to that page for more details.

It was created by merging LongAlpaca-70B-lora into Llama-2-70b, replacing the embed and norm layers as described in the LongLoRA repo, and removing the extra embedding row and pad token.

This is not an instruct-tuned model, but a base model intended for further fine-tuning. It supports 32K tokens of context via linear RoPE scaling with a factor of 8.
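Linear RoPE scaling stretches the context window by dividing each position index by the scaling factor before computing the rotary angles, so position 32768 with factor 8 produces the same angles the base model saw at position 4096. A minimal sketch of that relationship (the function name and dimension defaults here are illustrative, not from this repo):

```python
import math

def rope_angles(pos, dim=128, base=10000.0, scaling_factor=1.0):
    # Linear RoPE scaling: divide the position index by the factor,
    # stretching the native 4096-token Llama-2 window by that factor.
    pos = pos / scaling_factor
    # Standard rotary frequencies: base^(-2i/dim) for each pair of dims.
    return [pos * base ** (-2 * i / dim) for i in range(dim // 2)]

# With factor 8, position 32768 lands where unscaled position 4096 did,
# which is why the model covers 32K tokens with the original 4K rotary range.
assert rope_angles(32768, scaling_factor=8.0) == rope_angles(4096)
```

When loading with Hugging Face transformers, the equivalent behavior is typically requested via the `rope_scaling={"type": "linear", "factor": 8.0}` config option, which this repo's config should already set.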

Model size: 69B params
Tensor type: FP16
Model: grimulkan/llama2_70b_longlora_fp16_32k_ROPE8
