Yi 34b 200k context update, will there be an updated version of this one?

#9
by Olafangensan - opened

The Yi 34b 200k has just been updated and is now passing the needle-in-a-haystack test at 99.8%.
Will you be retraining any models? I personally use this model for all my RP stuff and I think it could benefit quite a bit from improved character/world information recall.

Olafangensan changed discussion status to closed
Olafangensan changed discussion status to open

Reopened, dumb misclick.


brucethemoose may need some funding before retraining his model; that kind of thing can be expensive.

Makes sense, that was my thought as well. I was wondering how much more he would need for this, too :)

I didn't train anything! Lol, this is just a merge.

I have an idea of exactly how I'd want to continue training a model (datasets, library), but I certainly don't want to ask for funds without more testing and a block of time set aside for it. In fact I've been pretty busy, apologies for not replying to this sooner.

BUT, theoretically one could apply the LoRAs from the constituent models in this merge to the new Yi base. I intend to try this, either by asking the authors for the original LoRAs or just extracting them.
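For anyone curious what "extracting" a LoRA means mechanically: per weight matrix, you take the delta between the finetuned and base weights, keep a low-rank approximation of it (e.g. via truncated SVD), and then add that low-rank update onto a different base's weights. This is only a toy numpy sketch of the idea for a single matrix, not any particular tool's implementation; real extraction works layer by layer over the whole checkpoint, and the matrix sizes and rank here are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 64, 4  # hypothetical hidden size and LoRA rank

# Simulate a base weight and a finetune whose delta happens to be rank-r
# (exactly what LoRA training produces for the adapted matrices).
W_base = rng.standard_normal((d, d))
B_true = rng.standard_normal((d, r))
A_true = rng.standard_normal((r, d))
W_ft = W_base + B_true @ A_true

# "Extract" the LoRA: SVD the delta and keep the top-r components.
delta = W_ft - W_base
U, S, Vt = np.linalg.svd(delta, full_matrices=False)
B_hat = U[:, :r] * S[:r]   # down-projection absorbed the singular values
A_hat = Vt[:r, :]

# Apply the extracted adapter to a *different* base (e.g. an updated model).
W_new_base = rng.standard_normal((d, d))
W_new = W_new_base + B_hat @ A_hat

err = np.linalg.norm(delta - B_hat @ A_hat) / np.linalg.norm(delta)
print(f"rank-{r} reconstruction error: {err:.2e}")
```

Since the simulated delta is exactly rank r here, the reconstruction is essentially exact; on a real finetune the delta isn't exactly low-rank, so the extracted LoRA is an approximation whose quality depends on the rank you keep.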
