Evaluation of long conversation sequences

#1
by cooee-ashutosh - opened

Hi, thanks for the amazing paper.
(I know this is different from what the paper is about, but still) Is this novel approach just for remembering a large input context, or can it also work on multi-turn inputs? For example, is there any benchmark on remembering context across more than 100 back-and-forth conversation turns?
Although increasing the context size does somewhat help in remembering context over a longer time frame, there is no clear evidence of this as far as I know.
I would love to know more about this because I am struggling to find a solution for it. Any input is appreciated.
Thanks.

Hi,

Thanks for your good question. I think the current version cannot support multi-turn conversation well, because during supervised fine-tuning we used a prompt format like this:

Below is a paper. Memorize the content and answer my question after the paper. {paper_content} \n Now the material ends. {question}

The model follows instructions well when asked in this prompt format, but multi-turn conversation might not work well.
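
For concreteness, here is a minimal sketch (not the authors' actual inference script) of how a query is assembled with this single-turn prompt format; the file name and question are placeholders:

```python
# Minimal sketch: assembling a query with the single-turn SFT prompt format
# quoted above. "paper.txt" and the question are placeholders.
paper_content = open("paper.txt").read()
question = "What method does the paper propose for long-context fine-tuning?"

prompt = (
    "Below is a paper. Memorize the content and answer my question after the paper. "
    f"{paper_content} \n Now the material ends. {question}"
)

# The prompt is fed to the fine-tuned model as one long single-turn input.
# Note there is no slot for previous dialogue turns, which is why multi-round
# conversation is not handled well by this format.
```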

Regards,
Yukang Chen

Adding to Yukang's context here:

The SFT model we trained here is somewhat like a "long-context version" of Alpaca, fine-tuned with QA pairs from books and papers. In our experience, it does remember some detailed content, such as the characters and the plot.

We also observed that the data used in SFT influences the model's behavior significantly. For the case you mentioned, we believe SFT data such as extremely long conversations are really needed if we want the chat model to remember the history of so many rounds of conversation. For instance, if there were a high-quality long-conversation version of GPT4All or similar, we believe training a chatbot with good memory would not be too difficult. However, we are still looking into how to collect such long, multi-round chat data, which is quite challenging.
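
As a rough illustration only (an assumption about how such data could look, not the authors' pipeline), a long multi-round conversation might be serialized into a single long-context SFT example along these lines; the prompt wording, field names, and separators are made up:

```python
# Sketch of serializing a many-round conversation into one long training
# example, so the model sees the full history in a single context window.
# The prompt wording, field names, and separators are illustrative only.
def conversation_to_example(turns, final_answer):
    """turns: list of (speaker, text) pairs; final_answer: target response."""
    history = "\n".join(f"{speaker}: {text}" for speaker, text in turns)
    prompt = (
        "Below is a long conversation. Memorize the history and answer "
        f"the last user message.\n{history}\nAssistant:"
    )
    return {"input": prompt, "output": final_answer}

example = conversation_to_example(
    [("User", "Hi, my name is Ada."),
     ("Assistant", "Nice to meet you, Ada!"),
     ("User", "What is my name?")],
    "Your name is Ada.",
)
```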

Do you plan on releasing the LongQA dataset, Yukang? Thanks for your contributions!

Yes. Releasing it takes a few steps. We are busy with the ICLR submission deadline (Sep 28) and plan to release it one or two weeks after the deadline. Thanks for your patience.

Hi,

We have released our data for long instruction following, LongAlpaca-12k, and the updated models, LongAlpaca-7B/13B/70B. They are available at the following links. These models should be much better than the original SFT models. We now use the Alpaca prompt format, which is more general than the one we used previously.

https://huggingface.co/datasets/Yukang/LongAlpaca-12k
https://huggingface.co/Yukang/LongAlpaca-7B
https://huggingface.co/Yukang/LongAlpaca-13B
https://huggingface.co/Yukang/LongAlpaca-70B-lora
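
For reference, here is a minimal usage sketch, assuming the standard Hugging Face transformers API and the common Alpaca-style instruction template (please check the LongLoRA repository for the exact template and recommended generation settings):

```python
# Minimal usage sketch for LongAlpaca-7B with an Alpaca-style prompt.
# The template and generation settings below are common defaults, not
# necessarily the exact ones recommended by the authors.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Yukang/LongAlpaca-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)

# The long document goes inside the instruction field.
instruction = (
    "Summarize the key findings of the following paper.\n"
    + open("paper.txt").read()  # placeholder path
)
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{instruction}\n\n### Response:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```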

Regards,
Yukang Chen
