Request for Colab Version

#25
by zaeaz - opened

Can we get a Colab version? This isn't working on my local machine. I have 6 GB of VRAM and a decent CPU, and I tried all the suggested flags (--pre_layer, --gpu-memory 5, and so on); everything launches but then just errors out. Please just give us a Colab.

Or fix it for low-end users. Most people have low-end PCs; not everyone has a 24 GB card and fancy Elon Musk cars.

CPU-only, maybe?
Please optimize it. I want to probe the model's unfiltered behavior past ChatGPT's guardrails, for scientific purposes.
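For picking a `--pre_layer` value on a small card, a rough back-of-envelope calculation helps. The figures below are assumptions (a 13B 4-bit GPTQ model with 40 layers at roughly 165 MiB each, plus about 1.5 GiB of CUDA/activation overhead), purely illustrative, not measured:

```shell
# Back-of-envelope estimate of how many transformer layers fit in VRAM,
# to help choose text-generation-webui's --pre_layer value.
# ASSUMED figures: 13B 4-bit GPTQ, 40 layers, ~165 MiB per layer,
# ~1.5 GiB overhead for CUDA context and activations.
vram_gib=6          # change to your card's VRAM (e.g. 4 for a 1050 Ti)
n_layers=40
layer_mib=165
overhead_mib=1500

free_mib=$(( vram_gib * 1024 - overhead_mib ))
if [ "$free_mib" -le 0 ]; then
    fit=0
else
    fit=$(( free_mib / layer_mib ))
    if [ "$fit" -gt "$n_layers" ]; then fit=$n_layers; fi
fi
echo "try --pre_layer $fit"
```

With these assumed numbers a 6 GB card would offload around 28 layers; if the model still OOMs, lower the number and retry, since real per-layer sizes vary by model.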

Same here, man. I wasted days getting it to work, and when it finally ran it was first running out of RAM; now it's running out of GPU memory. A few weeks ago I thought I had a high-end PC because it runs Cyberpunk 2077 on the highest settings with decent FPS, but apparently not.
Please optimize it or publish a Google Colab version 🙏 I want to test its potential and limitations.

I can run it on my CPU using llama.cpp, but it takes around 15 minutes to start generating a response even for a small prompt 🥲 and setting parameters there is a disaster. I also tried Alpaca Electron as a GUI for the CPU version; it was a little faster but couldn't hold a continuous conversation.
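For reference, a typical llama.cpp command line looks something like this. The model filename is a placeholder for whatever quantized file you converted, and the thread count is an assumption you should match to your physical cores; treat it as a sketch, not the one true incantation:

```shell
# Hypothetical llama.cpp CPU-only chat session.
#   -m   path to a 4-bit quantized model (filename is a placeholder)
#   -t   threads; match your physical core count (an i7-8750H has 6)
#   -n   max tokens to generate per turn
#   -i/-r  interactive mode that stops whenever the model emits the
#          reverse prompt "User:", giving a simple chat loop
./main -m ./models/vicuna-7b-q4_0.bin \
    -t 6 -n 256 --repeat_penalty 1.1 \
    -i -r "User:" -p "Transcript of a chat. User: hi Assistant:"
```

Interactive mode with a reverse prompt is what gives llama.cpp a continuous conversation, which may be what Alpaca Electron was failing to do.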

NO!!! THINK OF THE CHILDREN!!!

...just kidding. I can't seem to get the chat part to work properly, since it acts very strangely in chat mode, but you can still play with the model in notebook mode using this notebook I made.

notebook: I got this from someone on the internet. It isn't working with Alpaca, but Vicuna works fine.

I have:
GPU: GTX 1050 Ti with 4 GB VRAM
CPU: i7-8750H
RAM: 16 GB DDR4
using the oobabooga UI

As expected, it wouldn't even load on my PC at first; after changing some arguments I got it running (super slow text generation). It still needs more tweaking, but for now I use these arguments in the .bat file:
"call python server.py --load-in-8bit --auto-devices --no-cache --gpu-memory 3800MiB --pre_layer 2 --chat --groupsize 128 --listen-port 6935"
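Those flags do a lot of heavy lifting on a 4 GB card. Here is the same line broken out, with what each flag does as I understand the text-generation-webui options (annotations are mine, not quoted from the docs):

```shell
# Same flags as the .bat line above, annotated (shell line continuations
# added for readability; in the .bat file it is one line).
#   --load-in-8bit    quantize weights to 8-bit at load time
#   --auto-devices    split the model across GPU and CPU automatically
#   --no-cache        disable the attention cache: slower, but less memory
#   --gpu-memory      cap VRAM use, leaving headroom for activations
#   --pre_layer 2     keep only the first 2 layers on the GPU (GPTQ offload)
#   --groupsize 128   GPTQ group size the checkpoint was quantized with
#   --listen-port     port for the web UI
python server.py --load-in-8bit --auto-devices --no-cache \
    --gpu-memory 3800MiB --pre_layer 2 --chat --groupsize 128 \
    --listen-port 6935
```

With `--pre_layer 2` nearly everything runs on the CPU, which is why generation is so slow; raising it until you hit OOM is the usual tuning loop.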
