Private Models

#1
by OVAWARE - opened

Private models help nobody. Open-source models do have dangers and problems, but releasing them is for the greater good. I urge you to release both your 200B and 600B models to the public.

Hey @OVAWARE ,

That is something we have under active discussion, especially after all the emails we received.

We'll get back on that shortly.

Thanks.

deepnight-research changed discussion status to closed

Just an update!
We'll be making it available before Christmas.

100B is no longer gated either.

When should we expect the 600B model to be open?

Before the end of January, 2024
We're currently working with the 220B model. Only after opening that (before Christmas) will we get to the 600B.

Thanks

@deepnight-research really cool!

I converted the 100B model to run with llama.cpp: https://huggingface.co/imi2/saily-100b-gguf

The model at Q2_K (~3.4 bpw) will load in 32 GB of RAM plus 24 GB of VRAM on a desktop. I get about 1.4 t/s with it; most of the layers (~60) are loaded into GPU VRAM, and the rest stay in RAM.
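For anyone wanting to reproduce this split-offload setup, a minimal llama.cpp invocation might look like the sketch below. The model filename, context size, and prompt are assumptions (not from this thread); `-ngl 60` matches the ~60 offloaded layers mentioned above, and you should lower it if you run out of VRAM.

```shell
# Sketch: run the Q2_K GGUF with partial GPU offload (filename assumed).
# -ngl sets how many layers go to GPU VRAM; the remainder stays in system RAM.
./main \
  -m saily-100b.Q2_K.gguf \
  -ngl 60 \
  -c 2048 \
  -p "Hello, world"
```

This assumes llama.cpp was built with GPU support (e.g. CUDA); a CPU-only build will ignore the offload and run entirely from RAM.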

Thanks @imi2 for creating the GGUF version of the model.

We appreciate your efforts.
