
Awesome, thanks

#3
by gsaivinay - opened

Thanks for this model with 8k context 🤯

base model: meta-llama/Llama-2-7b

Is this a typo in the Readme?

Also, I'd like to ask a few things:

  • Is this model trained with any coding data? Given that it has 8k context, it would be an excellent use case for a coding assistant.
  • Is there a prompt format for this model to answer questions based on context, for example when using it for document chat with a vector DB?
  • Did you happen to benchmark this model? While I can wait for the Open LLM Leaderboard, it may take a few days for this model to be evaluated there.
OpenAssistant org

base model: "meta-llama/Llama-2-7b" - is this a typo in the Readme?

Yes, good catch! It is of course based on Llama-2-13b; this should be fixed now.

Is this model trained with any coding data? Given that it has 8k context, it would be an excellent use case for a coding assistant.

From benchmarks we know that it is currently not performing well on coding tasks. We are further fine-tuning the model with more code-related instruction data.

Is there a prompt format for this model to answer questions based on context, for example when using it for document chat with a vector DB?

No, the context would probably best be placed in the <|prompter|> message.
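A minimal sketch of that suggestion, for a retrieval/document-chat setup: the retrieved context is simply prepended inside the user turn. The exact special tokens and end-of-turn marker here (`<|prompter|>`, `<|assistant|>`, `</s>`) are assumptions based on the usual OpenAssistant chat markup; verify them against the model's tokenizer config before relying on this.

```python
def build_prompt(context: str, question: str) -> str:
    """Embed retrieved context inside the <|prompter|> turn.

    Assumed OpenAssistant-style markup: <|prompter|>...</s><|assistant|>.
    Check the model's tokenizer_config for the actual special tokens.
    """
    user_turn = (
        "Use the following context to answer the question.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
    return f"<|prompter|>{user_turn}</s><|assistant|>"

# Example: a chunk retrieved from a vector DB plus the user's question.
prompt = build_prompt(
    "Llama 2 was released by Meta AI in July 2023.",
    "Who released Llama 2?",
)
```

The resulting string would then be passed to the model as-is (generation stops at the end-of-turn token), with the answer expected after `<|assistant|>`.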

Did you happen to benchmark this model? While I can wait for the Open LLM Leaderboard, it may take a few days for this model to be evaluated there.

Some benchmark results can be found here: https://tju01.github.io/ilm-eval/#?branch=oa-orca

gsaivinay changed discussion status to closed

Evaluation available now.

