winglian committed on
Commit
44a2c68
1 Parent(s): 80c7d2e
Files changed (2)
  1. config.yml +2 -0
  2. tabbed.py +1 -0
config.yml CHANGED
@@ -1,4 +1,6 @@
 ---
+#repo: TheBloke/wizard-mega-13B-GGML
+#file: wizard-mega-13B.ggml.q5_1.bin
 repo: TheBloke/wizard-vicuna-13B-GGML
 file: wizard-vicuna-13B.ggml.q5_1.bin
 llama_cpp:
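
The change only leaves an alternate model commented out in config.yml: uncommenting the wizard-mega pair (and commenting out the wizard-vicuna pair) switches which GGML file the Space loads. As a rough sketch of how a config like this is typically consumed (the loading code in tabbed.py is not part of this diff, so treat the snippet below as illustrative, assuming PyYAML, huggingface_hub, and llama-cpp-python are installed):

# Sketch only: load config.yml and fetch the GGML model it points to.
# Variable names are illustrative, not taken from tabbed.py.
import yaml
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

with open("config.yml") as f:
    config = yaml.safe_load(f)

# Download the quantized model file from the configured Hugging Face repo.
model_path = hf_hub_download(repo_id=config["repo"], filename=config["file"])

# Forward any llama_cpp options from the config to the constructor
# (assumes the keys under llama_cpp are valid Llama keyword arguments).
llm = Llama(model_path=model_path, **(config.get("llama_cpp") or {}))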
tabbed.py CHANGED
@@ -87,6 +87,7 @@ with gr.Blocks() as demo:
     - [Duplicate the Space](https://huggingface.co/spaces/openaccess-ai-collective/ggml-ui?duplicate=true) to skip the queue and run in a private space or to use your own GGML models.
     - When using your own models, simply update the [config.yml](https://huggingface.co/spaces/openaccess-ai-collective/ggml-ui/blob/main/config.yml)
     - Contribute at [https://github.com/OpenAccess-AI-Collective/ggml-webui](https://github.com/OpenAccess-AI-Collective/ggml-webui)
+    - Many thanks to [TheBloke](https://huggingface.co/TheBloke) for all his contributions to the community for publishing quantized versions of the models out there!
     """)
     with gr.Tab("Instruct"):
         gr.Markdown("# GGML Spaces Instruct Demo")
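
For orientation, the hunk header @@ -87,6 +87,7 @@ with gr.Blocks() as demo: places the new bullet inside the Gradio Blocks layout, just before the Instruct tab. A minimal sketch of that structure, reconstructed only from the context lines shown here (the real tabbed.py contains more tabs and the chat wiring):

# Minimal sketch of the layout implied by the hunk header; it only shows
# where the new Markdown bullet sits, not the full app.
import gradio as gr

with gr.Blocks() as demo:
    gr.Markdown("""
    - Contribute at https://github.com/OpenAccess-AI-Collective/ggml-webui
    - Many thanks to [TheBloke](https://huggingface.co/TheBloke) for publishing quantized versions of the models!
    """)
    with gr.Tab("Instruct"):
        gr.Markdown("# GGML Spaces Instruct Demo")

demo.launch()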