Can this model run on a 3060 12GB?

#11
by cyx123 - opened

Does anyone here own a 3060 12GB? Could you share your experience: how long does it take to generate a reply, and do you need to offload layers to the CPU, or anything beyond that?

For my part, I have the same graphics card as you, but with an i7 processor. The time between the message I send and the response I receive is around 2 s; it also depends on the number of tokens.
On the other hand, I'm not a native English speaker, and when I ask the model to speak to me in another language it doesn't manage to respond correctly. I guess the model must have been trained on this dataset: https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered

I'm not an expert either.

Thanks very much! I'm actually using a 1660 Super right now, and I'm thinking of upgrading to a 3060 or a 3080 within the year =w=

I have a 3060 12GB. This model fits into video memory with the maximum context length and runs at an acceptable speed. I'm using the occam fork of KoboldAI, with TavernAI as the GUI.
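As a rough sanity check on whether a model fits in 12 GB of VRAM, here is a back-of-the-envelope estimate. The 13B parameter count, 4-bit quantization, and the flat 2 GB overhead allowance for context cache and framework buffers are all illustrative assumptions, not measurements for this specific model:

```python
# Rough VRAM estimate for running a quantized LLM on a 12 GB card.
# Parameter count, bits per weight, and overhead are assumptions
# for illustration, not measured values.

def vram_needed_gb(params_billion, bits_per_weight, overhead_gb=2.0):
    """Estimate VRAM in GB: quantized weights plus a flat allowance
    for context cache, activations, and framework overhead."""
    weights_gb = params_billion * 1e9 * bits_per_weight / 8 / 1024**3
    return weights_gb + overhead_gb

# A hypothetical 13B model quantized to 4 bits per weight:
need = vram_needed_gb(13, 4)
print(f"~{need:.1f} GB needed")          # → ~8.1 GB needed
print("fits in 12 GB:", need <= 12.0)    # → fits in 12 GB: True
```

By the same estimate, the unquantized fp16 weights of a 13B model would need roughly 24 GB, which is why quantization (or offloading layers to CPU RAM) matters on a 12 GB card.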
