Really appreciate the work put into this! I have noticed a change in the model output since the first release.

#3 opened by AARon99

Hello, I saw your Discord message in the Mistral server when you first uploaded the converted files. I snagged them right away and did some testing in oobabooga's text-generation-webui. I made a post on Reddit discussing how interesting the model was: https://www.reddit.com/r/Oobabooga/comments/18e5wi7/mixtral7b8expert_working_in_oobabooga_unquantized/

The next day I wanted to mess around with it more, but noticed the spark I had experienced the previous day was gone. It failed the riddle question, giving answers very similar to other LLMs, and the code it produced was both badly formatted and incorrect. I spent a while trying to pinpoint what happened and how to resolve it. The .py files you updated appear to have significantly changed the model's output. I am now using the original .py files directly, forcing the model to load those local files instead of the cached copies that had been overwritten with the updated .py code.

The files and an explanation are here: https://github.com/RandomInternetPreson/MiscFiles/blob/main/DiscoResearch/mixtral-7b-8expert/info.md
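In case it helps anyone else, the rough idea of the workaround is to point transformers at a local folder that contains the original .py modeling files, so the overwritten cache copies never get picked up. Something like this (untested sketch; the path is a placeholder for wherever you keep the weights and original files):

```python
# Rough sketch of the workaround: load from a local folder containing the
# original .py modeling files, so the overwritten cache copies aren't used.
from transformers import AutoModelForCausalLM, AutoTokenizer

local_path = "/path/to/mixtral-7b-8expert"  # local copy with the original .py files

tokenizer = AutoTokenizer.from_pretrained(local_path)
model = AutoModelForCausalLM.from_pretrained(
    local_path,
    trust_remote_code=True,  # use the modeling code in this folder
    device_map="auto",
)
```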

Anyway, I know the model is still a bit of an enigma and running it involves some guesswork, but I wanted to reach out and let you know: I think the model ran better with the original .py files.

Disco Research org
edited Dec 10, 2023

Yeah, it seems the change we made (following the equations from some other MoE papers) is not in line with the original implementation. It's interesting that you noticed this in normal use; benchmarks for the current method were a few percent higher. For now, I'll revert these changes in the form of a PR, and then you can get the reverted version by passing revision="refs/pr/5" when loading.
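Something like this should pin the load to that PR revision (untested sketch; dtype and device settings are up to you):

```python
# Sketch: pin the download to the revert PR instead of main.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DiscoResearch/mixtral-7b-8expert"
revision = "refs/pr/5"  # the PR with the reverted changes

tokenizer = AutoTokenizer.from_pretrained(model_id, revision=revision)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    revision=revision,
    trust_remote_code=True,   # the repo ships custom modeling .py files
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```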

Oh wow, that is extremely interesting!! It seemed there really was something different about Mixtral when I was using it initially, and it took me a while to figure out what happened. It is such a relief to know that my troubleshooting was accurate. I figured that's why the change was made, since it scored higher; I wish I understood the complexity of the issue better. I have personal questions I use with deterministic settings (as deterministic as possible) to decide whether a model is worth spending time on, and the change to the .py files was like night and day. I had some initial trepidation about making the post because I couldn't point to a score to demonstrate the difference. Thank you again; I'm really interested to see where this model goes and how it changes the open-source LLM landscape. I'm also really looking forward to trying out your fine-tuned version!!!
