Is LLaMA-2-7B-32K already fine-tuned for answering questions from long text?

#18 opened by MathewOpt

Is "togethercomputer/LLaMA-2-7B-32K" already trained for answering questions from long text? I see that it was fine-tuned on "20% Natural Instructions (NI), 20% Public Pool of Prompts (P3)". Please confirm. I tried the model as it is for question-answering using one of the samples in the Long Data Collections dataset, but it doesn't seem to answer correctly.


@MathewOpt We now have an instruct version that is fine-tuned on QA: https://huggingface.co/togethercomputer/Llama-2-7B-32K-Instruct

More details here: https://together.ai/blog/llama-2-7b-32k-instruct

Let us know if this works!
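
For reference, here is a minimal sketch of querying the Instruct checkpoint for long-document QA with `transformers`. The `[INST] ... [/INST]` prompt template, the generation settings, and the use of `trust_remote_code` are assumptions, not taken from this thread; check the model card and the blog post above for the exact recommended format.

```python
# Minimal sketch (assumptions noted above), not an official recipe.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "togethercomputer/Llama-2-7B-32K-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",      # keep the checkpoint's native precision
    device_map="auto",       # requires `accelerate`; places layers on available GPUs
    trust_remote_code=True,  # assumed: the 32K models may ship custom attention code
)

long_document = "..."  # your long context, up to roughly 32K tokens
question = "What is the main finding of the document?"

# Assumed Llama-2-style instruct template; verify against the model card.
prompt = f"[INST]\n{long_document}\n\n{question}\n[/INST]\n\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
)

# Print only the newly generated tokens, not the echoed prompt.
answer = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:],
    skip_special_tokens=True,
)
print(answer)
```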
