TheBloke committed commit 36911f2 (1 parent: d2630c0)

Update README.md

Files changed (1): README.md (+11 -0)

README.md CHANGED
@@ -26,12 +26,23 @@ GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/gger
 
  * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
  * [ctransformers](https://github.com/marella/ctransformers)

+ **NOTE**: This is not a regular LLM. It is designed to allow LLMs to use tools by invoking APIs.
+
+ "Gorilla enables LLMs to use tools by invoking APIs. Given a natural language query, Gorilla can write a semantically- and syntactically-correct API to invoke. With Gorilla, we are the first to demonstrate how to use LLMs to invoke 1,600+ (and growing) API calls accurately while reducing hallucination."
+
  ## Other repositories available

  * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/gorilla-7B-GPTQ)
  * [4-bit, 5-bit, and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/gorilla-7B-GGML)
  * [Merged, unquantised fp16 model in HF format](https://huggingface.co/TheBloke/gorilla-7B-fp16)

+ ## Prompt template
+
+ ```
+ ###USER: find me an API to generate cute cat images
+ ###ASSISTANT:
+ ```
+
  ## THE FILES IN MAIN BRANCH REQUIRE THE LATEST LLAMA.CPP (May 19th 2023 - commit 2d5db48)!

  llama.cpp recently made another breaking change to its quantisation methods: https://github.com/ggerganov/llama.cpp/pull/1508
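The prompt template added in this commit can be exercised from Python with llama-cpp-python, one of the clients listed above. The sketch below is an assumption-laden illustration, not part of the README: the model filename, sampling parameters, and the `format_gorilla_prompt` helper are hypothetical; only the `###USER:`/`###ASSISTANT:` template itself comes from the diff.

```python
def format_gorilla_prompt(query: str) -> str:
    """Wrap a natural-language query in the ###USER/###ASSISTANT
    template shown in the README's 'Prompt template' section."""
    return f"###USER: {query}\n###ASSISTANT:"

prompt = format_gorilla_prompt("find me an API to generate cute cat images")

# With llama-cpp-python installed and a GGML file from this repo downloaded,
# generation would look roughly like this (the model path is a placeholder):
#
# from llama_cpp import Llama
# llm = Llama(model_path="gorilla-7b.ggmlv3.q4_0.bin")
# out = llm(prompt, max_tokens=256, stop=["###USER:"])
# print(out["choices"][0]["text"])
```

Stopping on `###USER:` keeps the model from generating a follow-up turn itself, which is a common pattern for turn-delimited templates like this one.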