Feedback - Chat template missing
Hey, I am looking for a small model fine-tuned for summarization that I can do some prompt engineering experiments with, so I tried your model with llama.cpp commit 3e0ba0e.
Here is a screenshot:
Having the <|im_end|> and <|im_start|> control tokens show up in the output where the user can see them in the UI looks to me like the quantization was done without a proper chat template in the tokenizer_config.json file.
What do you think?
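For reference, here is roughly what I mean. This is only a sketch: the model path is a placeholder and the template is the generic ChatML one, not something specific to your model; it just shows where the "chat_template" field that ends up in tokenizer_config.json comes into play:

```python
# Minimal check with transformers: if the tokenizer carries no chat_template,
# frontends fall back to their own formatting and the control tokens can leak
# through into the visible output.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("path/to/your/model")  # placeholder path

# Generic ChatML-style template (illustrative only); this Jinja string is what
# the "chat_template" entry in tokenizer_config.json would contain.
tok.chat_template = (
    "{% for message in messages %}"
    "{{ '<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>\n' }}"
    "{% endfor %}"
    "{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}"
)

# Render a prompt the way a chat UI would; the special tokens are part of the
# prompt format and should not be echoed back to the user by the frontend.
print(tok.apply_chat_template(
    [{"role": "user", "content": "Summarize this conversation: ..."}],
    tokenize=False,
    add_generation_prompt=True,
))
```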
Thank you for your feedback.
Actually, I fine-tuned and built this model only for a conversation summarization task as part of my project, where I needed to generate a summary of the conversation the user had with SurvUday: The mental health chatbot (another model of mine) for an internal use case. Because of that, I did not use any conversational chat template, and my dataset contained only the conversations and their corresponding summaries. But I will try to improve the prompt template in further versions of this model.
That said, you have given great advice: fine-tuning the LLM with a conversational chat template so that it can act as a full-fledged summarizer and also hold a conversation. I will consider fine-tuning and building such a model.
Here is a screenshot of the conversation summarization task:
I am sorry. I think I was too hasty in messaging you. As it happens, you haven't provided a template, but I also tried a few other GGUFs of llama-3.2-1b that had similar symptoms, and then I noticed an error message in the logs. It turns out llama.cpp doesn't currently support vision/multimodal models, and the llama-3.2 models fall into that category, so it wasn't on you! (Looks like I have to run those in Ollama or another app, at least for now.)
Thanks for your response though :-)
When it comes to summarization, here is why I am interested in it: at JabRef (a literature management app), we have a feature that is supposed to summarize scientific papers, and we support local models that are compatible with OpenAI API servers. As long as you have a good model and the server app that goes along with it, you can connect it to JabRef. I am still fiddling with a good system prompt, though. The system prompt used when we chunk could probably be optimized too: instead of putting the whole paper into context, we start chunking once the context window would become too large. So I was looking for a tiny model that supports really long context (65536 - 131072 tokens) to experiment a little, because doing long-context experiments with larger models takes so long...
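In case it helps, here is the chunking idea as a minimal Python sketch against an OpenAI-compatible local server. The base URL, model name, chunk size, and prompts are placeholders of mine, not JabRef's actual implementation:

```python
# Sketch of chunk-then-summarize against any OpenAI-compatible server
# (llama.cpp server, Ollama, etc.). All names below are placeholders/assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

SYSTEM_PROMPT = "You summarize scientific text faithfully and concisely."  # placeholder prompt

def summarize(text: str, model: str = "local-summarizer") -> str:
    """One summarization call; 'local-summarizer' is a placeholder model name."""
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Summarize the following text:\n\n{text}"},
        ],
    )
    return resp.choices[0].message.content

def summarize_long(paper: str, chunk_chars: int = 12000) -> str:
    """If the paper fits, summarize it in one shot; otherwise summarize chunks
    and then summarize the concatenation of the partial summaries."""
    if len(paper) <= chunk_chars:
        return summarize(paper)
    chunks = [paper[i:i + chunk_chars] for i in range(0, len(paper), chunk_chars)]
    partial = [summarize(c) for c in chunks]
    return summarize("\n\n".join(partial))
```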
Maybe you can use it as a pipeline for dataset creation ;-)
For any scientific workflows out there, I strongly recommend reading the paper first, then doing the summarization, and then fixing any errors in the summary. Confabulations and hallucinations are the worst :(
The good thing about the summarizer is that it speeds everything up: it helps with brainstorming, and some of the sentences may even be good enough to use as is.
Thank you for your valuable advice. I look forward to it.
In my future projects, I will definitely focus on building summarization models that can handle really long contexts.