This is a very interesting model, and I'm very curious how you did this.

#10 opened by AARon99

What are you doing to the base Llama 2 model? I have never used a model quite like this:

  1. Fine-tunes work extremely well: after fine-tuning, the model responds very well even to inputs that are generalizations of the fine-tuning dataset (a hypothetical LoRA setup is sketched after this list).
  2. I've given the model thousands of tokens' worth of code and it will lucidly edit portions while explaining how the code works in context. I even did this on an 8-bit ExLlamaV2 quantization, using RoPE scaling to reach an 8,192-token context (see the loading sketch below).
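On point 1, here is a minimal sketch of what such a fine-tune might look like, assuming a Hugging Face transformers + PEFT (LoRA) stack; the checkpoint name and hyperparameters are placeholders I picked for illustration, not anything the Xwin-LM authors have confirmed:

```python
# Hypothetical LoRA fine-tuning setup (assumed stack: transformers + peft).
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model_id = "Xwin-LM/Xwin-LM-70B-V0.1"  # placeholder checkpoint

model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Attach low-rank adapters to the attention projections; only these small
# matrices are trained, so the fine-tune touches a tiny fraction of weights.
lora_config = LoraConfig(
    r=16,                    # adapter rank (placeholder value)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # shows how few parameters will train

# From here, any standard causal-LM Trainer loop over the fine-tuning
# dataset applies.
```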
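And on point 2, a sketch of an 8-bit load with an 8,192-token context, assuming transformers + bitsandbytes as a stand-in for the ExLlamaV2 loader actually used above; the `rope_scaling` dict is the standard linear-scaling knob for doubling Llama 2's 4,096-token window:

```python
# Hypothetical 8-bit load with linear RoPE scaling (4,096 -> 8,192 tokens).
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Xwin-LM/Xwin-LM-70B-V0.1"  # placeholder checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # 8-bit weights
    rope_scaling={"type": "linear", "factor": 2.0},  # stretch positions 2x
    max_position_embeddings=8192,
    device_map="auto",
)

# Feed a long code file and ask for a targeted edit, as described above.
prompt = "Here is a module; please refactor the parsing function:\n# ...code..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```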

I've checked out your GitHub trying to find out more, and I've seen others asking similar questions about how you are doing this. I guess I'm just adding my voice to the chorus.

Looking forward to Xwin-LM-70B-v0.2!! I hope the questions don't come off the wrong way; there is just so little information on how the model achieves this level of quality. I appreciate the work done and the sharing of the model.

Xwin-LM org

Many thanks for your interest. We are working hard on better models and will release new models and details ASAP.

Looking forward to your feedback~

@nbl97 When can we expect new models or publication of the dataset?
