GREAT MODEL

#4
by doberst

Dear TinyLlama team, congratulations on an excellent project. We just fine-tuned the 2.5T-token checkpoint for an ongoing series of RAG-optimized, fact-based question-answering models, with very good results, especially for a 1.1B-parameter model. You can see the results, measured on a RAG benchmark test, in bling-tiny-llama-v0; you may be interested to compare against the pythia-1b base, which we fine-tuned as bling-1b-0.1. I look forward to the 3T checkpoint and will pursue additional experiments and projects with TinyLlama. All the best, Darren