aao331 committed
Commit 44827e0
1 Parent(s): fdce146

Update README.md


Added documentation and removed typos.

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -28,7 +28,7 @@ Additionally, a test chatbot based on this neural network is running on the twit
  - **Model type:** 13B LLM
  - **Language(s):** (NLP): English and colloquial Argentine Spanish
  - **License:** Free for non-commercial use
- - **Finetuned from model [optional]: https://huggingface.co/decapoda-research/llama-13b-hf
+ - **Finetuned from model:** https://huggingface.co/decapoda-research/llama-13b-hf
 
  ### Model Sources [optional]
 
@@ -47,7 +47,6 @@ This is a generic LLM chatbot that can be used to interact directly with humans.
  <!-- This section is meant to convey both technical and sociotechnical limitations. -->
  This bot is uncensored and may provide shocking answers. Also it contains bias present in the training material.
 
- [More Information Needed]
 
  ### Recommendations
 
@@ -57,8 +56,9 @@ Users (both direct and downstream) should be made aware of the risks, biases and
 
  ## How to Get Started with the Model
 
- The easiest way is to download the text-generation-webui application and place the model inside the 'models' directory.
- Then launch the web interface and run the model as a regular LLama-13B model.
+ The easiest way is to download the text-generation-webui application (https://github.com/oobabooga/text-generation-webui) and place the model inside the 'models' directory.
+ Then launch the web interface and run the model as a regular LLama-13B model. LoRA models don't require additional installation, but 4-bit mode (which uses only 25% of GPU VRAM) needs
+ additional installation steps detailed at https://github.com/oobabooga/text-generation-webui/blob/main/docs/GPTQ-models-(4-bit-mode).md
 
  ## Model Card Contact
 
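For readers who would rather call the model from Python than through text-generation-webui, the sketch below shows one way the setup described in the updated README could look with the `transformers` and `peft` libraries. It is an illustrative assumption, not part of this commit: the adapter location `path/to/this-lora-adapter` is a hypothetical placeholder and the sampling settings are arbitrary; only the base model `decapoda-research/llama-13b-hf` comes from the model card.

```python
# Hedged sketch, not taken from the commit: loading a LoRA adapter on top of
# LLaMA-13B directly with transformers + peft instead of text-generation-webui.
import torch
from transformers import AutoModelForCausalLM, LlamaTokenizer
from peft import PeftModel

BASE_MODEL = "decapoda-research/llama-13b-hf"   # base model named in the card
ADAPTER_PATH = "path/to/this-lora-adapter"      # placeholder: local dir or HF repo id

# LlamaTokenizer is used directly; AutoTokenizer can trip over this repo's old tokenizer_config.
tokenizer = LlamaTokenizer.from_pretrained(BASE_MODEL)

# Load the base LLaMA-13B weights in fp16 and spread them across available devices.
base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, torch_dtype=torch.float16, device_map="auto"
)

# Apply the LoRA adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base, ADAPTER_PATH)
model.eval()

prompt = "Hola, ¿cómo estás?"
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

For the 4-bit GPTQ route mentioned in the diff, the linked text-generation-webui document remains the authoritative set of installation steps.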