Commit 5c789b2 by 8bit-coder (parent: 51d42b1): updated readme
README.md
@@ -22,7 +22,9 @@ Step 3. Navigate over to one of its model folders and clone this repository:

Step 4. Launch the webui and replace the default instruction prompt with:

> You are an AI language model designed to assist the User by answering their questions, offering advice, and engaging in casual conversation in a friendly, helpful, and informative manner. You respond clearly, coherently, and with consideration of the conversation history.

User: Hey, how's it going?

Assistant: Hey there! I'm doing great, thank you. What can I help you with today? Let's have a fun chat!

Step 5. Change the settings to match this screenshot:
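If you want to drive the model outside the webui, the Step 4 instruction prompt can be assembled as a plain string. This is a minimal sketch; the `build_prompt` helper and its turn format are illustrative assumptions, not part of any webui API:

```python
# System text and example turns are copied from Step 4 of this README.
# build_prompt is a hypothetical helper, not a webui function.
SYSTEM_PROMPT = (
    "You are an AI language model designed to assist the User by answering "
    "their questions, offering advice, and engaging in casual conversation "
    "in a friendly, helpful, and informative manner. You respond clearly, "
    "coherently, and with consideration of the conversation history."
)

def build_prompt(turns):
    """Join the system prompt and (speaker, text) turns into one prompt string."""
    lines = [SYSTEM_PROMPT, ""]
    for speaker, text in turns:
        lines.append(f"{speaker}: {text}")
        lines.append("")
    lines.append("Assistant:")  # cue the model to continue as the Assistant
    return "\n".join(lines)

prompt = build_prompt([
    ("User", "Hey, how's it going?"),
    ("Assistant", "Hey there! I'm doing great, thank you. What can I help "
                  "you with today? Let's have a fun chat!"),
])
```

The trailing `Assistant:` line mirrors how the webui cues the model for its next reply.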
@@ -72,3 +74,5 @@ We had an issue with the latest AlpacaDataCleaned dataset where at around 90k li

## 👨‍💻 Credits

Credits go to [Meta](https://github.com/facebookresearch/llama) for creating the foundational LLaMA models and [Stanford](https://github.com/tatsu-lab/stanford_alpaca) for the instructions on how to train. For the dataset, credits go to [AlpacaDataCleaned](https://github.com/gururise/AlpacaDataCleaned) and [codealpaca](https://github.com/sahil280114/codealpaca). Credits also go to [chavinlo](https://huggingface.co/chavinlo/alpaca-native) for creating the original Alpaca 7B Native model, the inspiration behind this model.
Lastly, credits go to the homies that stayed up all night again and again: 8bit, π, chug, Taddy, yoyodapro, Symax, and most importantly: stablediffusion for the beautiful artwork.