Can anyone help me with detailed instructions on how to use this?

#20
by RextonT - opened

Greetings!

Really interested in trying this program out.

The only problem is that I suck at computers!!!

I know how to browse the internet, build spreadsheets and edit pdfs!

When it comes to anything beyond those tasks, I might as well be a chimpanzee with a keyboard, since most fundamental computer tasks make no sense to me.

Do I download every single file on the "files and versions" tab?
Do I use this thing in my Chrome browser?
Or do I download it like iTunes and use it offline?

Believe it or not, I'm 29, but I need someone to explain this to me like I am 65. This page: https://huggingface.co/ehartford/WizardLM-7B-Uncensored

looks like another language and I don't know what to do!

Any help would be greatly appreciated.
Thanks

It depends on what you want to do with it. You can run it with Python using the transformers library, but I am assuming you don't want to do any programming, so what I would do if I were you is look at this repo: https://github.com/oobabooga/text-generation-webui. Look up a tutorial on how to use the oobabooga text-generation-webui, and if you have a decent enough PC you should be able to get it running. If you don't have a very good PC, you can look for a Google Colab notebook with this model or try to find a Hugging Face Space with this model. Or you can navigate to https://chatgpt-clone-e5f.pages.dev/; in a few weeks I will have an option set up on the site to use this very model and a few other uncensored models in a UI that looks just like ChatGPT. Right now, though, the official GPT-3.5 (censored) is the only model available.
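For anyone who does want the Python route, here is a minimal sketch of running the model with the transformers library. The exact prompt template is an assumption based on WizardLM's usual instruction format, so check the model card if replies look off; also note that a 7B model needs roughly 14 GB of VRAM in fp16, so this is not a laptop-friendly option.

```python
# Minimal sketch of running ehartford/WizardLM-7B-Uncensored with the
# Hugging Face transformers library. The first call downloads ~13 GB of
# weights and needs a capable GPU (or a lot of patience on CPU).

MODEL_ID = "ehartford/WizardLM-7B-Uncensored"


def build_prompt(instruction: str) -> str:
    # WizardLM-style instruction template; this exact format is an
    # assumption, so check the model card if outputs look wrong.
    return f"{instruction}\n\n### Response:"


def generate(instruction: str, max_new_tokens: int = 128) -> str:
    # Imported lazily so the prompt helper above works even without
    # transformers/torch installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype="auto",   # use the checkpoint's native precision
        device_map="auto",    # place weights on GPU/CPU (needs accelerate)
    )
    inputs = tokenizer(build_prompt(instruction), return_tensors="pt")
    inputs = inputs.to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

Then `generate("Write a haiku about computers.")` returns the prompt plus the model's continuation as a single string.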

https://chatgpt-clone-e5f.pages.dev/ is now fully working with an uncensored model running off my PC, if you are interested. It also has mobile support now, so check that out.

Hi, I wanted to use it, but when I went there and tested one of the examples and then my own prompt, it was seriously way too slow.
I feel like I could go to bed and wake up 8 hours later to find the response half finished and still going. That is how slow it feels, in my opinion.

If you want me to try it out more and actually use it without donating, then make the responses go faster... please.

Sorry if this was negative in any way, I just didn't like the response speed one bit.

I apologize. I wish I could make it go faster, but as of right now I am just running it off my PC, which means everyone that uses the site is connecting to my PC to use the model. That is highly inefficient, since I am only running a single gaming GPU as opposed to a bunch of dedicated A100s or something.

Also, my daily tasks are running off the same PC, which slows it down; as I write this to you, the inference is running in the background using all my VRAM and almost all the shared memory. I am trying to work with some friends to run the inference on their hardware as well, but I don't know yet how I am going to set it up across multiple different machines.

Also, I'm 15, dawg. I don't have the kind of money to really support this on my own; I'll probably stop hosting it on my PC in a week or two until I get enough money to find somewhere else to host it.

Send me some Monero if you want to help, though: 49bstF1DChsWawifqDS1YcjRTEuh7dM7VjH2oMtwy1zWRThweVFU8MKCmXQx6d5oFZ5gdBA3iBJLsWHHyMsiTi6gFjrmRZx

Or if you want to host it on your own machine, that would help too. Hit me up on Discord (@bear_cooper) or email me at byteevangelist@librecheats.net.

Here is the code if you want to run it on your own device; just run each cell.
https://colab.research.google.com/drive/1r6D6Gmp9O_luoEJ3dvTwHYJKH2BHdw4y?usp=sharing

If you don't want to do all the code, check out the Space I made:
https://huggingface.co/spaces/johnwick123forevr/WizardLM7b
