Added Customising `bot` section
README.md
CHANGED
@@ -69,16 +69,28 @@ You should then be able to visit `http://127.0.0.1:7860` to see the API document
 ```python
 from megabots import bot, create_interface
 
-demo = create_interface(
+demo = create_interface(bot("qna-over-docs"))
 ```
 
-
+# Customising bot
+
+The `bot` function should serve as the starting point for creating and customising your bot. Below is a list of the available arguments in `bot`.
+
+| Argument         | Description |
+| ---------------- | ----------- |
+| task             | The type of bot to create. Available options: `qna-over-docs`. More coming soon. |
+| index            | Specifies the index to use for the bot. It can be either a saved index file (e.g., `index.pkl`) or a directory of documents (e.g., `./index`). If a directory is given, the index is created automatically. If no index is specified, `bot` will look for `index.pkl` or `./index`. |
+| model            | The name of the model to use for the bot. You can specify a different model by providing its name, like `text-davinci-003`. Supported models: `gpt-3.5-turbo` (default), `text-davinci-003`. More coming soon. |
+| prompt_template  | A string template for the prompt, which defines the format of the question and context passed to the model. The template should include placeholders for the variables specified in `prompt_variables`. |
+| prompt_variables | A list of variables to be used in the prompt template. These variables are replaced with actual values when the bot processes a query. |
+
+### How QnA bot works
 
 Large language models (LLMs) are powerful, but they can't answer questions about documents they haven't seen. If you want to use an LLM to answer questions about documents it was not trained on, you have to give it information about those documents. To solve this, we use "retrieval augmented generation."
 
 In simple terms, when you have a question, you first search for relevant documents. Then, you give the documents and the question to the language model to generate an answer. To make this work, you need your documents in a searchable format (an index). This process involves two main steps: (1) preparing your documents for easy querying, and (2) using the retrieval augmented generation method.
 
-`
+`qna-over-docs` uses FAISS to create an index of documents and GPT to generate answers.
 
 ```mermaid
 sequenceDiagram
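
Read together with the argument table, the added section implies usage along the following lines. This is a sketch rather than confirmed API: the keyword names are taken from the table above, while the `{context}`/`{question}` placeholders and the final `launch()` call are assumptions to be checked against the megabots documentation.

```python
# Sketch of a customised bot based on the argument table above.
# Assumptions: keyword names match the table, and the qna-over-docs task
# expects {context} and {question} placeholders in the template.
from megabots import bot, create_interface

template = """Use the following context to answer the question at the end.

{context}

Question: {question}
Answer:"""

qnabot = bot(
    "qna-over-docs",                # task
    index="./index",                # directory of documents; the index is built automatically
    model="gpt-3.5-turbo",          # default model per the table
    prompt_template=template,
    prompt_variables=["context", "question"],
)

demo = create_interface(qnabot)
demo.launch()  # assumes create_interface returns a Gradio app (the README points to port 7860)
```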
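
The "How QnA bot works" paragraphs describe standard retrieval augmented generation. The sketch below illustrates the two steps generically (it is not megabots internals): build a FAISS index over document embeddings, then retrieve the nearest chunks and hand them to the model together with the question. `embed` and `complete` are toy stand-ins for a real embedding model and LLM call.

```python
# Generic retrieval-augmented-generation sketch; illustrative only, not megabots internals.
import faiss
import numpy as np


def embed(texts):
    """Toy stand-in for a real embedding model: hashed bag-of-words vectors."""
    dim = 64
    vecs = np.zeros((len(texts), dim), dtype="float32")
    for i, text in enumerate(texts):
        for token in text.lower().split():
            vecs[i, hash(token) % dim] += 1.0
    return vecs


def complete(prompt):
    """Stand-in for the LLM call (e.g. GPT); a real bot would send the prompt to the model."""
    return "<answer generated by the model>"


# Step 1: prepare the documents for querying (build the index once).
docs = ["chunk one of your documents", "chunk two", "chunk three"]
vectors = embed(docs)
index = faiss.IndexFlatL2(vectors.shape[1])  # exact L2 search over the embedding space
index.add(vectors)

# Step 2: retrieve relevant chunks and generate the answer.
question = "What does chunk two say?"
_, ids = index.search(embed([question]), 2)  # indices of the 2 nearest chunks
context = "\n".join(docs[i] for i in ids[0])
answer = complete(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")
```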