Tags: Text Generation · Transformers · Safetensors · llama · llama2 · text-generation-inference · Inference Endpoints

Model Card for abacusai/Giraffe-13b-32k-v3

Llama 2 13B, finetuned to process longer (32K-token) contexts using position interpolation, and then further instruct-finetuned on ShareGPT and Orca-Chat.

Model Details

Model Description

This is the next version of the model trained and evaluated as part of the experiments described in the repo http://github.com/abacusai/Long-Context. This version was trained with a scaling factor of 8 and shows better reasoning and math abilities on state-of-the-art benchmarks.
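The core idea behind the scaling factor can be sketched in a few lines. The snippet below is a minimal illustration of linear RoPE position interpolation, assuming the standard rotary-embedding formulation; with a factor of 8, the 32K context is compressed into the 4K positional range the base Llama 2 model was pretrained on. The function name and defaults are illustrative, not taken from this repo.

```python
import numpy as np

def rope_angles(positions, dim=128, base=10000.0, scale=1.0):
    # Rotary-embedding angles for each position.  Setting scale = 1/8
    # compresses positions so a 32K context maps into the 4K range seen
    # during pretraining (linear position interpolation; illustrative only).
    inv_freq = 1.0 / (base ** (np.arange(0, dim, 2) / dim))
    return np.outer(np.asarray(positions, dtype=np.float64) * scale, inv_freq)

# With scaling factor 8, position 32760 gets the same angles that
# position 4095 had at the original context length.
scaled = rope_angles(np.arange(32768), scale=1.0 / 8)
original = rope_angles([4095])
print(np.allclose(scaled[32760], original[0]))  # → True
```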

Direct Use

Since the model is instruct-finetuned, it can also be used directly for various prompted tasks. We have tested it on open-book question answering, using the long context to supply search results.
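For the open-book setting, the retrieved search results can simply be packed into the prompt ahead of the question, letting the 32K window hold many documents. The template below is only an assumption for illustration; it is not the exact format used during finetuning.

```python
def build_open_book_prompt(question, documents):
    # Place retrieved search results before the question so the long
    # context window supplies the evidence.  Hypothetical template,
    # not the one used in training.
    context = "\n\n".join(
        f"[Document {i + 1}]\n{doc.strip()}" for i, doc in enumerate(documents)
    )
    return (
        "Answer the question using only the documents below.\n\n"
        f"{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_open_book_prompt(
    "What scaling factor was used?",
    ["Search result: The model was trained with a scaling factor of 8."],
)
```

At generation time, the resulting string would be tokenized and passed to the model as usual.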

Bias, Risks, and Limitations

The model has not been evaluated for safety and is only intended for research and experiments.

Downloads last month: 813
Model size: 13B parameters, FP16 tensors, Safetensors format.

Model tree for abacusai/Giraffe-13b-32k-v3

Merges: 1 model
Quantizations: 2 models

Datasets used to train abacusai/Giraffe-13b-32k-v3

Spaces using abacusai/Giraffe-13b-32k-v3: 6