writinwaters committed
Commit 4cb6b27 · 1 parent: a92e785

Updated HTTP API Reference (document, chat assistant, session, chat) (#2994)

### What problem does this PR solve?

### Type of change

- [x] Documentation Update

Files changed:
- api/http_api_reference.md +250 −321
- api/python_api_reference.md +36 −36
api/http_api_reference.md
CHANGED
@@ -46,8 +46,6 @@ curl --request POST \
--header 'Authorization: Bearer {YOUR_API_KEY}' \
--data '{
"name": "test",
-
"chunk_count": 0,
-
"document_count": 0,
"chunk_method": "naive"
}'
```
@@ -105,7 +103,7 @@ curl --request POST \
|
|
105 |
|
106 |
### Response
|
107 |
|
108 |
-
|
109 |
|
110 |
```json
|
111 |
{
|
@@ -143,7 +141,7 @@ A successful response includes a JSON object like the following:
|
|
143 |
}
|
144 |
```
|
145 |
|
146 |
-
|
147 |
|
148 |
```json
|
149 |
{
|
@@ -191,7 +189,7 @@ curl --request DELETE \
|
|
191 |
|
192 |
### Response
|
193 |
|
194 |
-
|
195 |
|
196 |
```json
|
197 |
{
|
@@ -199,7 +197,7 @@ A successful response includes a JSON object like the following:
|
|
199 |
}
|
200 |
```
|
201 |
|
202 |
-
|
203 |
|
204 |
```json
|
205 |
{
|
@@ -268,7 +266,7 @@ curl --request PUT \
|
|
268 |
|
269 |
### Response
|
270 |
|
271 |
-
|
272 |
|
273 |
```json
|
274 |
{
|
@@ -276,7 +274,7 @@ A successful response includes a JSON object like the following:
|
|
276 |
}
|
277 |
```
|
278 |
|
279 |
-
|
280 |
|
281 |
```json
|
282 |
{
|
@@ -332,7 +330,7 @@ curl --request GET \
|
|
332 |
|
333 |
### Response
|
334 |
|
335 |
-
|
336 |
|
337 |
```json
|
338 |
{
|
@@ -375,7 +373,7 @@ A successful response includes a JSON object like the following:
|
|
375 |
}
|
376 |
```
|
377 |
|
378 |
-
|
379 |
|
380 |
```json
|
381 |
{
|
@@ -428,7 +426,7 @@ curl --request POST \
|
|
428 |
|
429 |
### Response
|
430 |
|
431 |
-
|
432 |
|
433 |
```json
|
434 |
{
|
@@ -436,7 +434,7 @@ A successful response includes a JSON object like the following:
|
|
436 |
}
|
437 |
```
|
438 |
|
439 |
-
|
440 |
|
441 |
```json
|
442 |
{
|
@@ -463,7 +461,7 @@ Updates configurations for a specified document.
- Body:
- `"name"`:`string`
- `"chunk_method"`:`string`
-
- `"parser_config"`:`

#### Request example

@@ -497,7 +495,7 @@ curl --request PUT \
- `"one"`: One
- `"knowledge_graph"`: Knowledge Graph
- `"email"`: Email
-
- `"parser_config"`: (*Body parameter*), `
The parsing configuration for the document:
- `"chunk_token_count"`: Defaults to `128`.
- `"layout_recognize"`: Defaults to `True`.
@@ -506,7 +504,7 @@ curl --request PUT \
|
|
506 |
|
507 |
### Response
|
508 |
|
509 |
-
|
510 |
|
511 |
```json
|
512 |
{
|
@@ -514,7 +512,7 @@ A successful response includes a JSON object like the following:
|
|
514 |
}
|
515 |
```
|
516 |
|
517 |
-
|
518 |
|
519 |
```json
|
520 |
{
|
@@ -538,7 +536,7 @@ Downloads a document from a specified dataset.
- Headers:
- `'Authorization: Bearer {YOUR_API_KEY}'`
- Output:
-
- `'{FILE_NAME}'

#### Request example

@@ -558,13 +556,13 @@ curl --request GET \
|
|
558 |
|
559 |
### Response
|
560 |
|
561 |
-
|
562 |
|
563 |
```text
|
564 |
test_2.
|
565 |
-
|
566 |
|
567 |
-
|
568 |
|
569 |
```json
|
570 |
{
|
@@ -611,14 +609,14 @@ curl --request GET \
The field by which documents should be sorted. Available options:
- `"create_time"` (default)
- `"update_time"`
-
- `"desc"`: (*Filter parameter*), `
Indicates whether the retrieved documents should be sorted in descending order. Defaults to `True`.
- `"document_id"`: (*Filter parameter*)
The ID of the document to retrieve. Defaults to `None`.

### Response

-

```json
{
@@ -661,7 +659,7 @@ A successful response includes a JSON object like the following:
|
|
661 |
}
|
662 |
```
|
663 |
|
664 |
-
|
665 |
|
666 |
```json
|
667 |
{
|
@@ -703,11 +701,11 @@ curl --request DELETE \
|
|
703 |
#### Request parameters
|
704 |
|
705 |
- `"ids"`: (*Body parameter*), `list[string]`
|
706 |
-
The IDs of the documents to delete.
|
707 |
|
708 |
### Response
|
709 |
|
710 |
-
|
711 |
|
712 |
```json
|
713 |
{
|
@@ -715,7 +713,7 @@ A successful response includes a JSON object like the following:
|
|
715 |
}.
|
716 |
```
|
717 |
|
718 |
-
|
719 |
|
720 |
```json
|
721 |
{
|
@@ -754,13 +752,14 @@ curl --request POST \

#### Request parameters

-
- `"dataset_id"`: (*Path parameter*)
-
-

### Response

-

```json
{
@@ -768,7 +767,7 @@ A successful response includes a JSON object like the following:
|
|
768 |
}
|
769 |
```
|
770 |
|
771 |
-
|
772 |
|
773 |
```json
|
774 |
{
|
@@ -807,13 +806,14 @@ curl --request DELETE \

#### Request parameters

-
- `"dataset_id"`: (*Path parameter*)
- `"document_ids"`: (*Body parameter*)
-
The IDs of the documents

### Response

-

```json
{
@@ -821,7 +821,7 @@ A successful response includes a JSON object like the following:
|
|
821 |
}
|
822 |
```
|
823 |
|
824 |
-
|
825 |
|
826 |
```json
|
827 |
{
|
@@ -847,7 +847,7 @@ Adds a chunk to a specified document in a specified dataset.
|
|
847 |
- `'content-Type: application/json'`
|
848 |
- `'Authorization: Bearer {YOUR_API_KEY}'`
|
849 |
- Body:
|
850 |
-
- `"content"`: string
|
851 |
- `"important_keywords"`: `list[string]`
|
852 |
|
853 |
#### Request example
|
@@ -858,20 +858,20 @@ curl --request POST \
|
|
858 |
--header 'Content-Type: application/json' \
|
859 |
--header 'Authorization: Bearer {YOUR_API_KEY}' \
|
860 |
--data '{
|
861 |
-
"content": "
|
862 |
}'
|
863 |
```
|
864 |
|
865 |
#### Request parameters
|
866 |
|
867 |
-
- `"content"`: (*Body parameter*)
|
868 |
-
|
869 |
- `"important_keywords`(*Body parameter*)
|
870 |
-
|
871 |
|
872 |
### Response
|
873 |
|
874 |
-
|
875 |
|
876 |
```json
|
877 |
{
|
@@ -892,7 +892,7 @@ A successful response includes a JSON object like the following:
|
|
892 |
}
|
893 |
```
|
894 |
|
895 |
-
|
896 |
|
897 |
```json
|
898 |
{
|
@@ -926,20 +926,22 @@ curl --request GET \
|
|
926 |
|
927 |
#### Request parameters
|
928 |
|
929 |
-
- `"dataset_id"`: (*Path parameter*)
|
930 |
-
|
931 |
-
- `"
|
932 |
-
The
|
933 |
-
- `"keywords"`(*Filter parameter*)
|
934 |
-
|
935 |
-
- `"
|
936 |
-
|
937 |
-
- `"
|
938 |
-
The
|
|
|
|
|
939 |
|
940 |
### Response
|
941 |
|
942 |
-
|
943 |
|
944 |
```json
|
945 |
{
|
@@ -983,7 +985,7 @@ A successful response includes a JSON object like the following:
|
|
983 |
}
|
984 |
```
|
985 |
|
986 |
-
|
987 |
|
988 |
```json
|
989 |
{
|
@@ -1025,11 +1027,11 @@ curl --request DELETE \
|
|
1025 |
#### Request parameters
|
1026 |
|
1027 |
- `"chunk_ids"`: (*Body parameter*)
|
1028 |
-
The
|
1029 |
|
1030 |
### Response
|
1031 |
|
1032 |
-
|
1033 |
|
1034 |
```json
|
1035 |
{
|
@@ -1037,7 +1039,7 @@ A successful response includes a JSON object like the following:
|
|
1037 |
}
|
1038 |
```
|
1039 |
|
1040 |
-
|
1041 |
|
1042 |
```json
|
1043 |
{
|
@@ -1081,16 +1083,18 @@ curl --request PUT \
|
|
1081 |
|
1082 |
#### Request parameters
|
1083 |
|
1084 |
-
- `"content"`: (*Body parameter*)
|
1085 |
-
|
1086 |
-
- `"important_keywords"`: (*Body parameter*)
|
1087 |
-
|
1088 |
-
- `"available"`: (*Body parameter*)
|
1089 |
-
|
|
|
|
|
1090 |
|
1091 |
### Response
|
1092 |
|
1093 |
-
|
1094 |
|
1095 |
```json
|
1096 |
{
|
@@ -1098,7 +1102,7 @@ A successful response includes a JSON object like the following:
|
|
1098 |
}
|
1099 |
```
|
1100 |
|
1101 |
-
|
1102 |
|
1103 |
```json
|
1104 |
{
|
@@ -1126,14 +1130,14 @@ Retrieves chunks from specified datasets.
|
|
1126 |
- `"question"`: `string`
|
1127 |
- `"datasets"`: `list[string]`
|
1128 |
- `"documents"`: `list[string]`
|
1129 |
-
- `"offset"`:
|
1130 |
-
- `"limit"`:
|
1131 |
-
- `"similarity_threshold"`: float
|
1132 |
-
- `"vector_similarity_weight"`: float
|
1133 |
-
- `"top_k"`:
|
1134 |
-
- `"rerank_id"`: string
|
1135 |
-
- `"keyword"`:
|
1136 |
-
- `"highlight"`:
|
1137 |
|
1138 |
#### Request example
|
1139 |
|
@@ -1155,50 +1159,36 @@ curl --request POST \
|
|
1155 |
|
1156 |
#### Request parameter
|
1157 |
|
1158 |
-
- `"question"`: (*Body parameter*)
|
1159 |
-
|
1160 |
-
|
1161 |
-
|
1162 |
-
|
1163 |
-
`None
|
1164 |
-
- `"
|
1165 |
-
The
|
1166 |
-
`None`
|
1167 |
-
- `"offset"`: (*Body parameter*)
|
1168 |
-
The beginning point of retrieved records
|
1169 |
-
`1`
|
1170 |
-
|
1171 |
- `"limit"`: (*Body parameter*)
|
1172 |
-
The maximum number of
|
1173 |
-
`30`
|
1174 |
-
|
1175 |
- `"similarity_threshold"`: (*Body parameter*)
|
1176 |
-
The minimum similarity score
|
1177 |
-
`0.2`
|
1178 |
-
|
1179 |
- `"vector_similarity_weight"`: (*Body parameter*)
|
1180 |
-
The weight of vector cosine similarity,
|
1181 |
-
`0.3`
|
1182 |
-
|
1183 |
- `"top_k"`: (*Body parameter*)
|
1184 |
-
|
1185 |
-
`1024`
|
1186 |
-
|
1187 |
- `"rerank_id"`: (*Body parameter*)
|
1188 |
-
ID of the rerank model
|
1189 |
-
|
1190 |
-
|
1191 |
-
- `
|
1192 |
-
|
1193 |
-
|
1194 |
-
|
1195 |
-
- `
|
1196 |
-
|
1197 |
-
`False`
|
1198 |
|
1199 |
### Response
|
1200 |
|
1201 |
-
|
1202 |
|
1203 |
```json
|
1204 |
{
|
@@ -1237,7 +1227,7 @@ A successful response includes a JSON object like the following:
|
|
1237 |
}
|
1238 |
```
|
1239 |
|
1240 |
-
|
1241 |
|
1242 |
```json
|
1243 |
{
|
@@ -1270,10 +1260,9 @@ Creates a chat assistant.
|
|
1270 |
- Body:
|
1271 |
- `"name"`: `string`
|
1272 |
- `"avatar"`: `string`
|
1273 |
-
- `"knowledgebases"`: `list[
|
1274 |
-
- `"
|
1275 |
-
- `"
|
1276 |
-
- `"prompt"`: `Prompt`
|
1277 |
|
1278 |
#### Request example
|
1279 |
|
@@ -1312,101 +1301,47 @@ curl --request POST \
|
|
1312 |
|
1313 |
#### Request parameters
|
1314 |
|
1315 |
-
- `"name"`: (*Body parameter*)
|
1316 |
-
The name of the
|
1317 |
-
- `"assistant"`
|
1318 |
-
|
1319 |
- `"avatar"`: (*Body parameter*)
|
1320 |
-
|
1321 |
-
- `"path"`
|
1322 |
-
|
1323 |
- `"knowledgebases"`: (*Body parameter*)
|
1324 |
-
|
1325 |
-
|
1326 |
-
|
1327 |
-
- `"
|
1328 |
-
|
1329 |
-
- `""`
|
1330 |
-
|
1331 |
-
- `"
|
1332 |
-
|
1333 |
-
-
|
1334 |
-
|
1335 |
-
- `"
|
1336 |
-
|
1337 |
-
-
|
1338 |
-
|
1339 |
-
|
1340 |
-
|
1341 |
-
|
1342 |
-
|
1343 |
-
- `"
|
1344 |
-
|
1345 |
-
|
1346 |
-
|
1347 |
-
- `"
|
1348 |
-
|
1349 |
-
- `
|
1350 |
-
|
1351 |
-
- `"
|
1352 |
-
|
1353 |
-
|
1354 |
-
|
1355 |
-
|
1356 |
-
Discourages the model from repeating the same information by penalizing repeated content.
|
1357 |
-
- `0.4`
|
1358 |
-
|
1359 |
-
- `"frequency_penalty"`: (*Body parameter*)
|
1360 |
-
Reduces the model’s tendency to repeat words frequently.
|
1361 |
-
- `0.7`
|
1362 |
-
|
1363 |
-
- `"max_tokens"`: (*Body parameter*)
|
1364 |
-
Sets the maximum length of the model’s output, measured in tokens (words or pieces of words).
|
1365 |
-
- `512`
|
1366 |
-
|
1367 |
-
---
|
1368 |
-
|
1369 |
-
##### Chat.Prompt parameters
|
1370 |
-
|
1371 |
-
- `"similarity_threshold"`: (*Body parameter*)
|
1372 |
-
Filters out chunks with similarity below this threshold.
|
1373 |
-
- `0.2`
|
1374 |
-
|
1375 |
-
- `"keywords_similarity_weight"`: (*Body parameter*)
|
1376 |
-
Weighted keywords similarity and vector cosine similarity; the sum of weights is 1.0.
|
1377 |
-
- `0.7`
|
1378 |
-
|
1379 |
-
- `"top_n"`: (*Body parameter*)
|
1380 |
-
Only the top N chunks above the similarity threshold will be fed to LLMs.
|
1381 |
-
- `8`
|
1382 |
-
|
1383 |
-
- `"variables"`: (*Body parameter*)
|
1384 |
-
Variables help with different chat strategies by filling in the 'System' part of the prompt.
|
1385 |
-
- `[{"key": "knowledge", "optional": True}]`
|
1386 |
-
|
1387 |
-
- `"rerank_model"`: (*Body parameter*)
|
1388 |
-
If empty, it uses vector cosine similarity; otherwise, it uses rerank score.
|
1389 |
-
- `""`
|
1390 |
-
|
1391 |
-
- `"empty_response"`: (*Body parameter*)
|
1392 |
-
If nothing is retrieved, this will be used as the response. Leave blank if LLM should provide its own opinion.
|
1393 |
-
- `None`
|
1394 |
-
|
1395 |
-
- `"opener"`: (*Body parameter*)
|
1396 |
-
The welcome message for clients.
|
1397 |
-
- `"Hi! I'm your assistant, what can I do for you?"`
|
1398 |
-
|
1399 |
-
- `"show_quote"`: (*Body parameter*)
|
1400 |
-
Indicates whether the source of the original text should be displayed.
|
1401 |
-
- `True`
|
1402 |
-
|
1403 |
-
- `"prompt"`: (*Body parameter*)
|
1404 |
-
Instructions for LLM to follow when answering questions, such as character design or answer length.
|
1405 |
-
- `"You are an intelligent assistant. Please summarize the content of the knowledge base to answer the question. Please list the data in the knowledge base and answer in detail. When all knowledge base content is irrelevant to the question, your answer must include the sentence 'The answer you are looking for is not found in the knowledge base!' Answers need to consider chat history. Here is the knowledge base: {knowledge} The above is the knowledge base."`
|
1406 |
|
1407 |
### Response
|
1408 |
|
1409 |
-
|
1410 |
|
1411 |
```json
|
1412 |
{
|
@@ -1476,7 +1411,7 @@ A successful response includes a JSON object like the following:
|
|
1476 |
}
|
1477 |
```
|
1478 |
|
1479 |
-
|
1480 |
|
1481 |
```json
|
1482 |
{
|
@@ -1500,7 +1435,12 @@ Updates configurations for a specified chat assistant.
|
|
1500 |
- Headers:
|
1501 |
- `'content-Type: application/json'`
|
1502 |
- `'Authorization: Bearer {YOUR_API_KEY}'`
|
1503 |
-
- Body:
1504 |
|
1505 |
#### Request example
|
1506 |
|
@@ -1516,11 +1456,47 @@ curl --request PUT \
|
|
1516 |
|
1517 |
#### Parameters
|
1518 |
|
1519 |
-
1520 |
|
1521 |
### Response
|
1522 |
|
1523 |
-
|
1524 |
|
1525 |
```json
|
1526 |
{
|
@@ -1528,7 +1504,7 @@ A successful response includes a JSON object like the following:
|
|
1528 |
}
|
1529 |
```
|
1530 |
|
1531 |
-
|
1532 |
|
1533 |
```json
|
1534 |
{
|
@@ -1571,13 +1547,12 @@ curl --request DELETE \
|
|
1571 |
|
1572 |
#### Request parameters
|
1573 |
|
1574 |
-
- `"ids"`: (*Body parameter*)
|
1575 |
-
IDs of the
|
1576 |
-
- `None`
|
1577 |
|
1578 |
### Response
|
1579 |
|
1580 |
-
|
1581 |
|
1582 |
```json
|
1583 |
{
|
@@ -1585,7 +1560,7 @@ A successful response includes a JSON object like the following:
|
|
1585 |
}
|
1586 |
```
|
1587 |
|
1588 |
-
|
1589 |
|
1590 |
```json
|
1591 |
{
|
@@ -1619,33 +1594,24 @@ curl --request GET \
|
|
1619 |
|
1620 |
#### Request parameters
|
1621 |
|
1622 |
-
- `"page"`: (*Path parameter*)
|
1623 |
-
|
1624 |
-
|
1625 |
-
|
1626 |
-
- `"
|
1627 |
-
The
|
1628 |
-
- `
|
1629 |
-
|
1630 |
-
- `"
|
1631 |
-
|
1632 |
-
|
1633 |
-
|
1634 |
-
- `"
|
1635 |
-
|
1636 |
-
- `True`
|
1637 |
-
|
1638 |
-
- `"id"`: (*Path parameter*)
|
1639 |
-
The ID of the chat to retrieve.
|
1640 |
-
- `None`
|
1641 |
-
|
1642 |
-
- `"name"`: (*Path parameter*)
|
1643 |
-
The name of the chat to retrieve.
|
1644 |
-
- `None`
|
1645 |
|
1646 |
### Response
|
1647 |
|
1648 |
-
|
1649 |
|
1650 |
```json
|
1651 |
{
|
@@ -1724,7 +1690,7 @@ A successful response includes a JSON object like the following:
|
|
1724 |
}
|
1725 |
```
|
1726 |
|
1727 |
-
|
1728 |
|
1729 |
```json
|
1730 |
{
|
@@ -1733,11 +1699,11 @@ An error response includes a JSON object like the following:
|
|
1733 |
}
|
1734 |
```
|
1735 |
|
1736 |
-
## Create
|
1737 |
|
1738 |
**POST** `/api/v1/chat/{chat_id}/session`
|
1739 |
|
1740 |
-
|
1741 |
|
1742 |
### Request
|
1743 |
|
@@ -1763,29 +1729,13 @@ curl --request POST \

#### Request parameters

-
- `"
-
The
-
- `None`
-
- `id` cannot be provided when creating.
-
-
- `"name"`: (*Body parameter*)
-
The name of the created session.
-
- `"New session"`
-
-
- `"messages"`: (*Body parameter*)
-
The messages of the created session.
-
- `[{"role": "assistant", "content": "Hi! I am your assistant, can I help you?"}]`
-
- `messages` cannot be provided when creating.
-
-
- `"chat_id"`: (*Path parameter*)
-
The ID of the associated chat.
-
- `""`
-
- `chat_id` cannot be changed.


### Response

-

```json
{
@@ -1808,7 +1758,7 @@ A successful response includes a JSON object like the following:
|
|
1808 |
}
|
1809 |
```
|
1810 |
|
1811 |
-
|
1812 |
|
1813 |
```json
|
1814 |
{
|
@@ -1819,16 +1769,7 @@ An error response includes a JSON object like the following:

---

-
-
Chat Session APIs
-
:::
-
-
---
-
-
=========MISSING CREATE SESSION API!==============
-
-
---
-
## Update a chat session

**PUT** `/api/v1/chat/{chat_id}/session/{session_id}`

@@ -1858,13 +1799,12 @@ curl --request PUT \
|
|
1858 |
|
1859 |
#### Request Parameter
|
1860 |
|
1861 |
-
- `"name`: (*Body Parameter)
|
1862 |
-
The name of the
|
1863 |
-
- `None`
|
1864 |
|
1865 |
### Response
|
1866 |
|
1867 |
-
|
1868 |
|
1869 |
```json
|
1870 |
{
|
@@ -1872,7 +1812,7 @@ A successful response includes a JSON object like the following:
|
|
1872 |
}
|
1873 |
```
|
1874 |
|
1875 |
-
|
1876 |
|
1877 |
```json
|
1878 |
{
|
@@ -1885,9 +1825,9 @@ An error response includes a JSON object like the following:
|
|
1885 |
|
1886 |
## List sessions
|
1887 |
|
1888 |
-
**GET** `/api/v1/chat/{chat_id}/session?page={page}&page_size={page_size}&orderby={orderby}&desc={desc}&name={
|
1889 |
|
1890 |
-
Lists sessions associated with a specified
|
1891 |
|
1892 |
### Request
|
1893 |
|
@@ -1906,33 +1846,24 @@ curl --request GET \
|
|
1906 |
|
1907 |
#### Request Parameters
|
1908 |
|
1909 |
-
- `"page"`: (*Path parameter*)
|
1910 |
-
|
1911 |
-
|
1912 |
-
|
1913 |
-
- `"
|
1914 |
-
The
|
1915 |
-
- `
|
1916 |
-
|
1917 |
-
- `"
|
1918 |
-
|
1919 |
-
|
1920 |
-
|
1921 |
-
- `"
|
1922 |
-
|
1923 |
-
- `True`
|
1924 |
-
|
1925 |
-
- `"id"`: (*Path parameter*)
|
1926 |
-
The ID of the session to retrieve.
|
1927 |
-
- `None`
|
1928 |
-
|
1929 |
-
- `"name"`: (*Path parameter*)
|
1930 |
-
The name of the session to retrieve.
|
1931 |
-
- `None`
|
1932 |
|
1933 |
### Response
|
1934 |
|
1935 |
-
|
1936 |
|
1937 |
```json
|
1938 |
{
|
@@ -1957,7 +1888,7 @@ A successful response includes a JSON object like the following:
|
|
1957 |
}
|
1958 |
```
|
1959 |
|
1960 |
-
|
1961 |
|
1962 |
```json
|
1963 |
{
|
@@ -1999,13 +1930,12 @@ curl --request DELETE \
|
|
1999 |
|
2000 |
#### Request Parameters
|
2001 |
|
2002 |
-
- `"ids"`: (*Body Parameter*)
|
2003 |
-
IDs of the sessions to delete.
|
2004 |
-
- `None`
|
2005 |
|
2006 |
### Response
|
2007 |
|
2008 |
-
|
2009 |
|
2010 |
```json
|
2011 |
{
|
@@ -2013,7 +1943,7 @@ A successful response includes a JSON object like the following:
|
|
2013 |
}
|
2014 |
```
|
2015 |
|
2016 |
-
|
2017 |
|
2018 |
```json
|
2019 |
{
|
@@ -2024,7 +1954,7 @@ An error response includes a JSON object like the following:
|
|
2024 |
|
2025 |
---
|
2026 |
|
2027 |
-
## Chat
|
2028 |
|
2029 |
**POST** `/api/v1/chat/{chat_id}/completion`
|
2030 |
|
@@ -2039,7 +1969,7 @@ Asks a question to start a conversation.
|
|
2039 |
- `'Authorization: Bearer {YOUR_API_KEY}'`
|
2040 |
- Body:
|
2041 |
- `"question"`: `string`
|
2042 |
-
- `"stream"`: `
|
2043 |
- `"session_id"`: `string`
|
2044 |
|
2045 |
#### Request example
|
@@ -2050,26 +1980,25 @@ curl --request POST \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer {YOUR_API_KEY}' \
--data-binary '{
-
"question": "
"stream": true
}'
```

#### Request Parameters

-
- `"question"`: (*Body Parameter*)
-
The question
-
-
-
- `
-
-
`False`
- `"session_id"`: (*Body Parameter*)
-
The ID of session. If not provided, a new session will be generated

### Response

-

```json
data: {
@@ -2171,7 +2100,7 @@ data:{
|
|
2171 |
}
|
2172 |
```
|
2173 |
|
2174 |
-
|
2175 |
|
2176 |
```json
|
2177 |
{
|
|
|
46 |
--header 'Authorization: Bearer {YOUR_API_KEY}' \
|
47 |
--data '{
|
48 |
"name": "test",
|
|
|
|
|
49 |
"chunk_method": "naive"
|
50 |
}'
|
51 |
```
|
|
|
103 |
|
104 |
### Response
|
105 |
|
106 |
+
Success:
|
107 |
|
108 |
```json
|
109 |
{
|
|
|
141 |
}
|
142 |
```
|
143 |
|
144 |
+
Failure:
|
145 |
|
146 |
```json
|
147 |
{
|
|
|
189 |
|
190 |
### Response
|
191 |
|
192 |
+
Success:
|
193 |
|
194 |
```json
|
195 |
{
|
|
|
197 |
}
|
198 |
```
|
199 |
|
200 |
+
Failure:
|
201 |
|
202 |
```json
|
203 |
{
|
|
|
266 |
|
267 |
### Response
|
268 |
|
269 |
+
Success:
|
270 |
|
271 |
```json
|
272 |
{
|
|
|
274 |
}
|
275 |
```
|
276 |
|
277 |
+
Failure:
|
278 |
|
279 |
```json
|
280 |
{
|
|
|
330 |
|
331 |
### Response
|
332 |
|
333 |
+
Success:
|
334 |
|
335 |
```json
|
336 |
{
|
|
|
373 |
}
|
374 |
```
|
375 |
|
376 |
+
Failure:
|
377 |
|
378 |
```json
|
379 |
{
|
|
|
426 |
|
427 |
### Response
|
428 |
|
429 |
+
Success:
|
430 |
|
431 |
```json
|
432 |
{
|
|
|
434 |
}
|
435 |
```
|
436 |
|
437 |
+
Failure:
|
438 |
|
439 |
```json
|
440 |
{
|
|
|
461 |
- Body:
|
462 |
- `"name"`:`string`
|
463 |
- `"chunk_method"`:`string`
|
464 |
+
- `"parser_config"`:`object`
|
465 |
|
466 |
#### Request example
|
467 |
|
|
|
495 |
- `"one"`: One
|
496 |
- `"knowledge_graph"`: Knowledge Graph
|
497 |
- `"email"`: Email
|
498 |
+
- `"parser_config"`: (*Body parameter*), `object`
|
499 |
The parsing configuration for the document:
|
500 |
- `"chunk_token_count"`: Defaults to `128`.
|
501 |
- `"layout_recognize"`: Defaults to `True`.
|
|
|
504 |
|
505 |
### Response
|
506 |
|
507 |
+
Success:
|
508 |
|
509 |
```json
|
510 |
{
|
|
|
512 |
}
|
513 |
```
|
514 |
|
515 |
+
Failure:
|
516 |
|
517 |
```json
|
518 |
{
|
|
|
536 |
- Headers:
|
537 |
- `'Authorization: Bearer {YOUR_API_KEY}'`
|
538 |
- Output:
|
539 |
+
- `'{FILE_NAME}'`
|
540 |
|
541 |
#### Request example
|
542 |
|
|
|
556 |
|
557 |
### Response
|
558 |
|
559 |
+
A successful response includes a text object like the following:
|
560 |
|
561 |
```text
|
562 |
test_2.
|
563 |
+
```
|
564 |
|
565 |
+
Failure:
|
566 |
|
567 |
```json
|
568 |
{
|
|
|
609 |
The field by which documents should be sorted. Available options:
|
610 |
- `"create_time"` (default)
|
611 |
- `"update_time"`
|
612 |
+
- `"desc"`: (*Filter parameter*), `boolean`
|
613 |
Indicates whether the retrieved documents should be sorted in descending order. Defaults to `True`.
|
614 |
- `"document_id"`: (*Filter parameter*)
|
615 |
The ID of the document to retrieve. Defaults to `None`.
|
616 |
|
617 |
### Response
|
618 |
|
619 |
+
Success:
|
620 |
|
621 |
```json
|
622 |
{
|
|
|
659 |
}
|
660 |
```
|
661 |
|
662 |
+
Failure:
|
663 |
|
664 |
```json
|
665 |
{
|
|
|
701 |
#### Request parameters
|
702 |
|
703 |
- `"ids"`: (*Body parameter*), `list[string]`
|
704 |
+
The IDs of the documents to delete. Defaults to `None`. If not specified, all documents in the dataset will be deleted.
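As an illustration only, a request body that deletes two specific documents might look like this (both IDs are placeholders):

```json
{
     "ids": ["{DOCUMENT_ID_1}", "{DOCUMENT_ID_2}"]
}
```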
|
705 |
|
706 |
### Response
|
707 |
|
708 |
+
Success:
|
709 |
|
710 |
```json
|
711 |
{
|
|
|
713 |
}
|
714 |
```
|
715 |
|
716 |
+
Failure:
|
717 |
|
718 |
```json
|
719 |
{
|
|
|
752 |
|
753 |
#### Request parameters
|
754 |
|
755 |
+
- `"dataset_id"`: (*Path parameter*)
|
756 |
+
The dataset ID.
|
757 |
+
- `"document_ids"`: (*Body parameter*), `list[string]`
|
758 |
+
The IDs of the documents to parse.
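For illustration, a body that queues two documents for parsing might look like this (the IDs are placeholders):

```json
{
     "document_ids": ["{DOCUMENT_ID_1}", "{DOCUMENT_ID_2}"]
}
```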
|
759 |
|
760 |
### Response
|
761 |
|
762 |
+
Success:
|
763 |
|
764 |
```json
|
765 |
{
|
|
|
767 |
}
|
768 |
```
|
769 |
|
770 |
+
Failure:
|
771 |
|
772 |
```json
|
773 |
{
|
|
|
806 |
|
807 |
#### Request parameters
|
808 |
|
809 |
+
- `"dataset_id"`: (*Path parameter*)
|
810 |
+
The dataset ID.
|
811 |
- `"document_ids"`: (*Body parameter*)
|
812 |
+
The IDs of the documents for which the parsing should be stopped.
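As a sketch, a body that stops parsing for a single document might be (the ID is a placeholder):

```json
{
     "document_ids": ["{DOCUMENT_ID}"]
}
```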
|
813 |
|
814 |
### Response
|
815 |
|
816 |
+
Success:
|
817 |
|
818 |
```json
|
819 |
{
|
|
|
821 |
}
|
822 |
```
|
823 |
|
824 |
+
Failure:
|
825 |
|
826 |
```json
|
827 |
{
|
|
|
847 |
- `'content-Type: application/json'`
|
848 |
- `'Authorization: Bearer {YOUR_API_KEY}'`
|
849 |
- Body:
|
850 |
+
- `"content"`: `string`
|
851 |
- `"important_keywords"`: `list[string]`
|
852 |
|
853 |
#### Request example
|
|
|
858 |
--header 'Content-Type: application/json' \
|
859 |
--header 'Authorization: Bearer {YOUR_API_KEY}' \
|
860 |
--data '{
|
861 |
+
"content": "<SOME_CHUNK_CONTENT_HERE>"
|
862 |
}'
|
863 |
```
|
864 |
|
865 |
#### Request parameters
|
866 |
|
867 |
+
- `"content"`: (*Body parameter*), `string`, *Required*
|
868 |
+
The text content of the chunk.
|
869 |
- `"important_keywords`(*Body parameter*)
|
870 |
+
The key terms or phrases to tag with the chunk.
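For example, a body that adds a chunk and tags it with keywords might look like this (the content and keywords are illustrative values, not part of the API):

```json
{
     "content": "RAGFlow is an open-source RAG engine based on deep document understanding.",
     "important_keywords": ["RAGFlow", "RAG engine"]
}
```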
|
871 |
|
872 |
### Response
|
873 |
|
874 |
+
Success:
|
875 |
|
876 |
```json
|
877 |
{
|
|
|
892 |
}
|
893 |
```
|
894 |
|
895 |
+
Failure:
|
896 |
|
897 |
```json
|
898 |
{
|
|
|
926 |
|
927 |
#### Request parameters
|
928 |
|
929 |
+
- `"dataset_id"`: (*Path parameter*)
|
930 |
+
The dataset ID.
|
931 |
+
- `"document_id"`: (*Path parameter*)
|
932 |
+
The document ID.
|
933 |
+
- `"keywords"`(*Filter parameter*), `string`
|
934 |
+
The keywords used to match chunk content. Defaults to `None`
|
935 |
+
- `"offset"`(*Filter parameter*), `string`
|
936 |
+
The starting index for the chunks to retrieve. Defaults to `1`.
|
937 |
+
- `"limit"`(*Filter parameter*), `integer`
|
938 |
+
The maximum number of chunks to retrieve. Default: `1024`
|
939 |
+
- `"id"`(*Filter parameter*), `string`
|
940 |
+
The ID of the chunk to retrieve. Default: `None`
|
941 |
|
942 |
### Response
|
943 |
|
944 |
+
Success:
|
945 |
|
946 |
```json
|
947 |
{
|
|
|
985 |
}
|
986 |
```
|
987 |
|
988 |
+
Failure:
|
989 |
|
990 |
```json
|
991 |
{
|
|
|
1027 |
#### Request parameters
|
1028 |
|
1029 |
- `"chunk_ids"`: (*Body parameter*)
|
1030 |
+
The IDs of the chunks to delete. Defaults to `None`. If not specified, all chunks of the current document will be deleted.
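For illustration, a body that deletes two specific chunks might be (the IDs are placeholders):

```json
{
     "chunk_ids": ["{CHUNK_ID_1}", "{CHUNK_ID_2}"]
}
```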
|
1031 |
|
1032 |
### Response
|
1033 |
|
1034 |
+
Success:
|
1035 |
|
1036 |
```json
|
1037 |
{
|
|
|
1039 |
}
|
1040 |
```
|
1041 |
|
1042 |
+
Failure:
|
1043 |
|
1044 |
```json
|
1045 |
{
|
|
|
1083 |
|
1084 |
#### Request parameters
|
1085 |
|
1086 |
+
- `"content"`: (*Body parameter*), `string`
|
1087 |
+
The text content of the chunk.
|
1088 |
+
- `"important_keywords"`: (*Body parameter*), `list[string]`
|
1089 |
+
A list of key terms or phrases to tag with the chunk.
|
1090 |
+
- `"available"`: (*Body parameter*) `boolean`
|
1091 |
+
The chunk's availability status in the dataset. Value options:
|
1092 |
+
- `False`: Unavailable
|
1093 |
+
- `True`: Available
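For example, a body that rewrites a chunk's content and marks it as unavailable might look like this (all values are illustrative):

```json
{
     "content": "Updated chunk text.",
     "important_keywords": ["updated"],
     "available": false
}
```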
|
1094 |
|
1095 |
### Response
|
1096 |
|
1097 |
+
Success:
|
1098 |
|
1099 |
```json
|
1100 |
{
|
|
|
1102 |
}
|
1103 |
```
|
1104 |
|
1105 |
+
Failure:
|
1106 |
|
1107 |
```json
|
1108 |
{
|
|
|
1130 |
- `"question"`: `string`
|
1131 |
- `"datasets"`: `list[string]`
|
1132 |
- `"documents"`: `list[string]`
|
1133 |
+
- `"offset"`: `integer`
|
1134 |
+
- `"limit"`: `integer`
|
1135 |
+
- `"similarity_threshold"`: `float`
|
1136 |
+
- `"vector_similarity_weight"`: `float`
|
1137 |
+
- `"top_k"`: `integer`
|
1138 |
+
- `"rerank_id"`: `string`
|
1139 |
+
- `"keyword"`: `boolean`
|
1140 |
+
- `"highlight"`: `boolean`
|
1141 |
|
1142 |
#### Request example
|
1143 |
|
|
|
1159 |
|
1160 |
#### Request parameter
|
1161 |
|
1162 |
+
- `"question"`: (*Body parameter*), `string`, *Required*
|
1163 |
+
The user query or query keywords. Defaults to `""`.
|
1164 |
+
- `"datasets"`: (*Body parameter*) `list[string]`, *Required*
|
1165 |
+
The IDs of the datasets to search from.
|
1166 |
+
- `"documents"`: (*Body parameter*), `list[string]`
|
1167 |
+
The IDs of the documents to search from. Defaults to `None`.
|
1168 |
+
- `"offset"`: (*Body parameter*), `integer`
|
1169 |
+
The starting index for the documents to retrieve. Defaults to `1`.
|
|
|
|
|
|
|
|
|
|
|
1170 |
- `"limit"`: (*Body parameter*)
|
1171 |
+
The maximum number of chunks to retrieve. Defaults to `1024`.
|
|
|
|
|
1172 |
- `"similarity_threshold"`: (*Body parameter*)
|
1173 |
+
The minimum similarity score. Defaults to `0.2`.
|
|
|
|
|
1174 |
- `"vector_similarity_weight"`: (*Body parameter*)
|
1175 |
+
The weight of vector cosine similarity. Defaults to `0.3`. If x represents the vector cosine similarity, then (1 - x) is the term similarity weight.
|
|
|
|
|
1176 |
- `"top_k"`: (*Body parameter*)
|
1177 |
+
The number of chunks engaged in vector cosine computation. Defaults to `1024`.
|
|
|
|
|
1178 |
- `"rerank_id"`: (*Body parameter*)
|
1179 |
+
The ID of the rerank model. Defaults to `None`.
|
1180 |
+
- `"keyword"`: (*Body parameter*), `boolean`
|
1181 |
+
Indicates whether to enable keyword-based matching:
|
1182 |
+
- `True`: Enable keyword-based matching.
|
1183 |
+
- `False`: Disable keyword-based matching (default).
|
1184 |
+
- `"highlight"`: (*Body parameter*), `boolean`
|
1185 |
+
Specifies whether to enable highlighting of matched terms in the results:
|
1186 |
+
- `True`: Enable highlighting of matched terms.
|
1187 |
+
- `False`: Disable highlighting of matched terms (default).
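Putting several of these parameters together, an illustrative retrieval request body might look like the following (the dataset ID and question are placeholders, and the numeric values are only examples):

```json
{
     "question": "What is RAGFlow?",
     "datasets": ["{DATASET_ID}"],
     "offset": 1,
     "limit": 30,
     "similarity_threshold": 0.2,
     "vector_similarity_weight": 0.3,
     "top_k": 1024,
     "keyword": true,
     "highlight": true
}
```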
|
|
|
1188 |
|
1189 |
### Response
|
1190 |
|
1191 |
+
Success:
|
1192 |
|
1193 |
```json
|
1194 |
{
|
|
|
1227 |
}
|
1228 |
```
|
1229 |
|
1230 |
+
Failure:
|
1231 |
|
1232 |
```json
|
1233 |
{
|
|
|
1260 |
- Body:
|
1261 |
- `"name"`: `string`
|
1262 |
- `"avatar"`: `string`
|
1263 |
+
- `"knowledgebases"`: `list[string]`
|
1264 |
+
- `"llm"`: `object`
|
1265 |
+
- `"prompt"`: `object`
|
|
|
1266 |
|
1267 |
#### Request example
|
1268 |
|
|
|
1301 |
|
1302 |
#### Request parameters
|
1303 |
|
1304 |
+
- `"name"`: (*Body parameter*), `string`, *Required*
|
1305 |
+
The name of the chat assistant.
|
|
|
|
|
1306 |
- `"avatar"`: (*Body parameter*)
|
1307 |
+
Base64 encoding of the avatar. Defaults to `""`.
|
|
|
|
|
1308 |
- `"knowledgebases"`: (*Body parameter*)
|
1309 |
+
The IDs of the associated datasets. Defaults to `[""]`.
|
1310 |
+
- `"llm"`: (*Body parameter*), `object`
|
1311 |
+
The LLM settings for the chat assistant to create. Defaults to `None`. When the value is `None`, a dictionary with the following values will be generated as the default. An `llm` object contains the following attributes:
|
1312 |
+
- `"model_name"`, `string`
|
1313 |
+
The chat model name. If it is `None`, the user's default chat model will be returned.
|
1314 |
+
- `"temperature"`: `float`
|
1315 |
+
Controls the randomness of the model's predictions. A lower temperature increases the model's confidence in its responses; a higher temperature increases creativity and diversity. Defaults to `0.1`.
|
1316 |
+
- `"top_p"`: `float`
|
1317 |
+
Also known as “nucleus sampling”, this parameter sets a threshold to select a smaller set of words to sample from. It focuses on the most likely words, cutting off the less probable ones. Defaults to `0.3`
|
1318 |
+
- `"presence_penalty"`: `float`
|
1319 |
+
This discourages the model from repeating the same information by penalizing words that have already appeared in the conversation. Defaults to `0.2`.
|
1320 |
+
- `"frequency penalty"`: `float`
|
1321 |
+
Similar to the presence penalty, this reduces the model’s tendency to repeat the same words frequently. Defaults to `0.7`.
|
1322 |
+
- `"max_token"`: `integer`
|
1323 |
+
The maximum length of the model’s output, measured in the number of tokens (words or pieces of words). Defaults to `512`.
|
1324 |
+
- `"prompt"`: (*Body parameter*), `object`
|
1325 |
+
Instructions for the LLM to follow. A `prompt` object contains the following attributes:
|
1326 |
+
- `"similarity_threshold"`: `float` RAGFlow uses a hybrid of weighted keyword similarity and vector cosine similarity during retrieval. This argument sets the threshold for similarities between the user query and chunks. If a similarity score falls below this threshold, the corresponding chunk will be excluded from the results. The default value is `0.2`.
|
1327 |
+
- `"keywords_similarity_weight"`: `float` This argument sets the weight of keyword similarity in the hybrid similarity score with vector cosine similarity or reranking model similarity. By adjusting this weight, you can control the influence of keyword similarity in relation to other similarity measures. The default value is `0.7`.
|
1328 |
+
- `"top_n"`: `int` This argument specifies the number of top chunks with similarity scores above the `similarity_threshold` that are fed to the LLM. The LLM will *only* access these 'top N' chunks. The default value is `8`.
|
1329 |
+
- `"variables"`: `object[]` This argument lists the variables to use in the 'System' field of **Chat Configurations**. Note that:
|
1330 |
+
- `"knowledge"` is a reserved variable, which will be replaced with the retrieved chunks.
|
1331 |
+
- All the variables in 'System' should be curly bracketed.
|
1332 |
+
- The default value is `[{"key": "knowledge", "optional": True}]`
|
1333 |
+
- `"rerank_model"`: `string` If it is not specified, vector cosine similarity will be used; otherwise, reranking score will be used. Defaults to `""`.
|
1334 |
+
- `"empty_response"`: `string` If nothing is retrieved in the dataset for the user's question, this will be used as the response. To allow the LLM to improvise when nothing is found, leave this blank. Defaults to `None`.
|
1335 |
+
- `"opener"`: `string` The opening greeting for the user. Defaults to `"Hi! I am your assistant, can I help you?"`.
|
1336 |
+
- `"show_quote`: `boolean` Indicates whether the source of text should be displayed. Defaults to `True`.
|
1337 |
+
- `"prompt"`: `string` The prompt content. Defaults to `You are an intelligent assistant. Please summarize the content of the dataset to answer the question. Please list the data in the knowledge base and answer in detail. When all knowledge base content is irrelevant to the question, your answer must include the sentence "The answer you are looking for is not found in the knowledge base!" Answers need to consider chat history.
|
1338 |
+
Here is the knowledge base:
|
1339 |
+
{knowledge}
|
1340 |
+
The above is the knowledge base.`
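Putting the pieces above together, a fuller, purely illustrative request body might look like the following; the dataset ID and model name are placeholders, and any field left out falls back to the defaults described above:

```json
{
     "name": "test_assistant",
     "avatar": "",
     "knowledgebases": ["{DATASET_ID}"],
     "llm": {
          "model_name": "{MODEL_NAME}",
          "temperature": 0.1,
          "top_p": 0.3,
          "presence_penalty": 0.2,
          "max_token": 512
     },
     "prompt": {
          "similarity_threshold": 0.2,
          "keywords_similarity_weight": 0.7,
          "top_n": 8,
          "show_quote": true,
          "opener": "Hi! I am your assistant, can I help you?"
     }
}
```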
1341 |
|
1342 |
### Response
|
1343 |
|
1344 |
+
Success:
|
1345 |
|
1346 |
```json
|
1347 |
{
|
|
|
1411 |
}
|
1412 |
```
|
1413 |
|
1414 |
+
Failure:
|
1415 |
|
1416 |
```json
|
1417 |
{
|
|
|
1435 |
- Headers:
|
1436 |
- `'content-Type: application/json'`
|
1437 |
- `'Authorization: Bearer {YOUR_API_KEY}'`
|
1438 |
+
- Body:
|
1439 |
+
- `"name"`: `string`
|
1440 |
+
- `"avatar"`: `string`
|
1441 |
+
- `"knowledgebases"`: `list[string]`
|
1442 |
+
- `"llm"`: `object`
|
1443 |
+
- `"prompt"`: `object`
|
1444 |
|
1445 |
#### Request example
|
1446 |
|
|
|
1456 |
|
1457 |
#### Parameters
|
1458 |
|
1459 |
+
- `"name"`: (*Body parameter*), `string`, *Required*
|
1460 |
+
The name of the chat assistant.
|
1461 |
+
- `"avatar"`: (*Body parameter*)
|
1462 |
+
Base64 encoding of the avatar. Defaults to `""`.
|
1463 |
+
- `"knowledgebases"`: (*Body parameter*)
|
1464 |
+
The IDs of the associated datasets. Defaults to `[""]`.
|
1465 |
+
- `"llm"`: (*Body parameter*), `object`
|
1466 |
+
The LLM settings for the chat assistant to create. Defaults to `None`. When the value is `None`, a dictionary with the following values will be generated as the default. An `llm` object contains the following attributes:
|
1467 |
+
- `"model_name"`, `string`
|
1468 |
+
The chat model name. If it is `None`, the user's default chat model will be returned.
|
1469 |
+
- `"temperature"`: `float`
|
1470 |
+
Controls the randomness of the model's predictions. A lower temperature increases the model's confidence in its responses; a higher temperature increases creativity and diversity. Defaults to `0.1`.
|
1471 |
+
- `"top_p"`: `float`
|
1472 |
+
Also known as “nucleus sampling”, this parameter sets a threshold to select a smaller set of words to sample from. It focuses on the most likely words, cutting off the less probable ones. Defaults to `0.3`
|
1473 |
+
- `"presence_penalty"`: `float`
|
1474 |
+
This discourages the model from repeating the same information by penalizing words that have already appeared in the conversation. Defaults to `0.2`.
|
1475 |
+
- `"frequency penalty"`: `float`
|
1476 |
+
Similar to the presence penalty, this reduces the model’s tendency to repeat the same words frequently. Defaults to `0.7`.
|
1477 |
+
- `"max_token"`: `integer`
|
1478 |
+
The maximum length of the model’s output, measured in the number of tokens (words or pieces of words). Defaults to `512`.
|
1479 |
+
- `"prompt"`: (*Body parameter*), `object`
|
1480 |
+
Instructions for the LLM to follow. A `prompt` object contains the following attributes:
|
1481 |
+
- `"similarity_threshold"`: `float` RAGFlow uses a hybrid of weighted keyword similarity and vector cosine similarity during retrieval. This argument sets the threshold for similarities between the user query and chunks. If a similarity score falls below this threshold, the corresponding chunk will be excluded from the results. The default value is `0.2`.
|
1482 |
+
- `"keywords_similarity_weight"`: `float` This argument sets the weight of keyword similarity in the hybrid similarity score with vector cosine similarity or reranking model similarity. By adjusting this weight, you can control the influence of keyword similarity in relation to other similarity measures. The default value is `0.7`.
|
1483 |
+
- `"top_n"`: `int` This argument specifies the number of top chunks with similarity scores above the `similarity_threshold` that are fed to the LLM. The LLM will *only* access these 'top N' chunks. The default value is `8`.
|
1484 |
+
- `"variables"`: `object[]` This argument lists the variables to use in the 'System' field of **Chat Configurations**. Note that:
|
1485 |
+
- `"knowledge"` is a reserved variable, which will be replaced with the retrieved chunks.
|
1486 |
+
- All the variables in 'System' should be curly bracketed.
|
1487 |
+
- The default value is `[{"key": "knowledge", "optional": True}]`
|
1488 |
+
- `"rerank_model"`: `string` If it is not specified, vector cosine similarity will be used; otherwise, reranking score will be used. Defaults to `""`.
|
1489 |
+
- `"empty_response"`: `string` If nothing is retrieved in the dataset for the user's question, this will be used as the response. To allow the LLM to improvise when nothing is found, leave this blank. Defaults to `None`.
|
1490 |
+
- `"opener"`: `string` The opening greeting for the user. Defaults to `"Hi! I am your assistant, can I help you?"`.
|
1491 |
+
- `"show_quote`: `boolean` Indicates whether the source of text should be displayed. Defaults to `True`.
|
1492 |
+
- `"prompt"`: `string` The prompt content. Defaults to `You are an intelligent assistant. Please summarize the content of the dataset to answer the question. Please list the data in the knowledge base and answer in detail. When all knowledge base content is irrelevant to the question, your answer must include the sentence "The answer you are looking for is not found in the knowledge base!" Answers need to consider chat history.
|
1493 |
+
Here is the knowledge base:
|
1494 |
+
{knowledge}
|
1495 |
+
The above is the knowledge base.`
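As a sketch only, an update that just renames the assistant and adjusts the model temperature might send a body like this (that omitted fields keep their current values is an assumption here, not something this reference states):

```json
{
     "name": "renamed_assistant",
     "llm": {
          "temperature": 0.5
     }
}
```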
|
1496 |
|
1497 |
### Response
|
1498 |
|
1499 |
+
Success:
|
1500 |
|
1501 |
```json
|
1502 |
{
|
|
|
1504 |
}
|
1505 |
```
|
1506 |
|
1507 |
+
Failure:
|
1508 |
|
1509 |
```json
|
1510 |
{
|
|
|
1547 |
|
1548 |
#### Request parameters
|
1549 |
|
1550 |
+
- `"ids"`: (*Body parameter*), `list[string]`
|
1551 |
+
The IDs of the chat assistants to delete. Defaults to `None`. If not specified, all chat assistants in the system will be deleted.
|
|
|
1552 |
|
1553 |
### Response
|
1554 |
|
1555 |
+
Success:
|
1556 |
|
1557 |
```json
|
1558 |
{
|
|
|
1560 |
}
|
1561 |
```
|
1562 |
|
1563 |
+
Failure:
|
1564 |
|
1565 |
```json
|
1566 |
{
|
|
|
1594 |
|
1595 |
#### Request parameters
|
1596 |
|
1597 |
+
- `"page"`: (*Path parameter*), `integer`
|
1598 |
+
Specifies the page on which the chat assistants will be displayed. Defaults to `1`.
|
1599 |
+
- `"page_size"`: (*Path parameter*), `integer`
|
1600 |
+
The number of chat assistants on each page. Defaults to `1024`.
|
1601 |
+
- `"orderby"`: (*Path parameter*), `string`
|
1602 |
+
The attribute by which the results are sorted. Available options:
|
1603 |
+
- `"create_time"` (default)
|
1604 |
+
- `"update_time"`
|
1605 |
+
- `"desc"`: (*Path parameter*), `boolean`
|
1606 |
+
Indicates whether the retrieved chat assistants should be sorted in descending order. Defaults to `True`.
|
1607 |
+
- `"id"`: (*Path parameter*), `string`
|
1608 |
+
The ID of the chat assistant to retrieve. Defaults to `None`.
|
1609 |
+
- `"name"`: (*Path parameter*), `string`
|
1610 |
+
The name of the chat assistant to retrieve. Defaults to `None`.
1611 |
|
1612 |
### Response
|
1613 |
|
1614 |
+
Success:
|
1615 |
|
1616 |
```json
|
1617 |
{
|
|
|
1690 |
}
|
1691 |
```
|
1692 |
|
1693 |
+
Failure:
|
1694 |
|
1695 |
```json
|
1696 |
{
|
|
|
1699 |
}
|
1700 |
```
|
1701 |
|
1702 |
+
## Create session
|
1703 |
|
1704 |
**POST** `/api/v1/chat/{chat_id}/session`
|
1705 |
|
1706 |
+
Creates a chat session.
|
1707 |
|
1708 |
### Request
|
1709 |
|
|
|
1729 |
|
1730 |
#### Request parameters
|
1731 |
|
1732 |
+
- `"name"`: (*Body parameter*), `string`
|
1733 |
+
The name of the chat session to create.
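For example, an illustrative request body:

```json
{
     "name": "New session"
}
```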
1734 |
|
1735 |
|
1736 |
### Response
|
1737 |
|
1738 |
+
Success:
|
1739 |
|
1740 |
```json
|
1741 |
{
|
|
|
1758 |
}
|
1759 |
```
|
1760 |
|
1761 |
+
Failure:
|
1762 |
|
1763 |
```json
|
1764 |
{
|
|
|
1769 |
|
1770 |
---
|
1771 |
|
1772 |
+
## Update session
1773 |
|
1774 |
**PUT** `/api/v1/chat/{chat_id}/session/{session_id}`
|
1775 |
|
|
|
1799 |
|
1800 |
#### Request Parameter
|
1801 |
|
1802 |
+
- `"name`: (*Body Parameter), `string`
|
1803 |
+
The name of the session to update.
|
|
|
1804 |
|
1805 |
### Response
|
1806 |
|
1807 |
+
Success:
|
1808 |
|
1809 |
```json
|
1810 |
{
|
|
|
1812 |
}
|
1813 |
```
|
1814 |
|
1815 |
+
Failure:
|
1816 |
|
1817 |
```json
|
1818 |
{
|
|
|
1825 |
|
1826 |
## List sessions
|
1827 |
|
1828 |
+
**GET** `/api/v1/chat/{chat_id}/session?page={page}&page_size={page_size}&orderby={orderby}&desc={desc}&name={session_name}&id={session_id}`
|
1829 |
|
1830 |
+
Lists sessions associated with a specified chat assistant.
|
1831 |
|
1832 |
### Request
|
1833 |
|
|
|
1846 |
|
1847 |
#### Request Parameters
|
1848 |
|
1849 |
+
- `"page"`: (*Path parameter*), `integer`
|
1850 |
+
Specifies the page on which the sessions will be displayed. Defaults to `1`.
|
1851 |
+
- `"page_size"`: (*Path parameter*), `integer`
|
1852 |
+
The number of sessions on each page. Defaults to `1024`.
|
1853 |
+
- `"orderby"`: (*Path parameter*), `string`
|
1854 |
+
The field by which sessions should be sorted. Available options:
|
1855 |
+
- `"create_time"` (default)
|
1856 |
+
- `"update_time"`
|
1857 |
+
- `"desc"`: (*Path parameter*), `boolean`
|
1858 |
+
Indicates whether the retrieved sessions should be sorted in descending order. Defaults to `True`.
|
1859 |
+
- `"id"`: (*Path parameter*), `string`
|
1860 |
+
The ID of the chat session to retrieve. Defaults to `None`.
|
1861 |
+
- `"name"`: (*Path parameter*) `string`
|
1862 |
+
The name of the chat session to retrieve. Defaults to `None`.
1863 |
|
1864 |
### Response
|
1865 |
|
1866 |
+
Success:
|
1867 |
|
1868 |
```json
|
1869 |
{
|
|
|
1888 |
}
|
1889 |
```
|
1890 |
|
1891 |
+
Failure:
|
1892 |
|
1893 |
```json
|
1894 |
{
|
|
|
1930 |
|
1931 |
#### Request Parameters
|
1932 |
|
1933 |
+
- `"ids"`: (*Body Parameter*), `list[string]`
|
1934 |
+
The IDs of the sessions to delete. Defaults to `None`. If not specified, all sessions associated with the current chat assistant will be deleted.
|
|
|
1935 |
|
1936 |
### Response
|
1937 |
|
1938 |
+
Success:
|
1939 |
|
1940 |
```json
|
1941 |
{
|
|
|
1943 |
}
|
1944 |
```
|
1945 |
|
1946 |
+
Failure:
|
1947 |
|
1948 |
```json
|
1949 |
{
|
|
|
1954 |
|
1955 |
---
|
1956 |
|
1957 |
+
## Chat
|
1958 |
|
1959 |
**POST** `/api/v1/chat/{chat_id}/completion`
|
1960 |
|
|
|
1969 |
- `'Authorization: Bearer {YOUR_API_KEY}'`
|
1970 |
- Body:
|
1971 |
- `"question"`: `string`
|
1972 |
+
- `"stream"`: `boolean`
|
1973 |
- `"session_id"`: `string`
|
1974 |
|
1975 |
#### Request example
|
|
|
1980 |
--header 'Content-Type: application/json' \
|
1981 |
--header 'Authorization: Bearer {YOUR_API_KEY}' \
|
1982 |
--data-binary '{
|
1983 |
+
"question": "Hello!",
|
1984 |
"stream": true
|
1985 |
}'
|
1986 |
```
|
1987 |
|
1988 |
#### Request Parameters
|
1989 |
|
1990 |
+
- `"question"`: (*Body Parameter*), `string` *Required*
|
1991 |
+
The question to start an AI chat.
|
1992 |
+
- `"stream"`: (*Body Parameter*), `string`
|
1993 |
+
Indicates whether to output responses in a streaming way:
|
1994 |
+
- `True`: Enable streaming.
|
1995 |
+
- `False`: (Default) Disable streaming.
|
|
|
1996 |
- `"session_id"`: (*Body Parameter*)
|
1997 |
+
The ID of the session. If not provided, a new session will be generated.
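For instance, a follow-up question in an existing, non-streaming conversation might use a body like this (the session ID is a placeholder):

```json
{
     "question": "Can you summarize your previous answer?",
     "stream": false,
     "session_id": "{SESSION_ID}"
}
```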
|
1998 |
|
1999 |
### Response
|
2000 |
|
2001 |
+
Success:
|
2002 |
|
2003 |
```json
|
2004 |
data: {
|
|
|
2100 |
}
|
2101 |
```
|
2102 |
|
2103 |
+
Failure:
|
2104 |
|
2105 |
```json
|
2106 |
{
|
api/python_api_reference.md
CHANGED
@@ -587,9 +587,9 @@ The key terms or phrases to tag with the chunk.
|
|
587 |
|
588 |
A `Chunk` object contains the following attributes:
|
589 |
|
590 |
-
- `id`: `str
|
591 |
-
- `content`: `str`
|
592 |
-
- `important_keywords`: `list[str]` A list of key terms or phrases
|
593 |
- `create_time`: `str` The time when the chunk was created (added to the document).
|
594 |
- `create_timestamp`: `float` The timestamp representing the creation time of the chunk, expressed in seconds since January 1, 1970.
|
595 |
- `knowledgebase_id`: `str` The ID of the associated dataset.
|
@@ -710,7 +710,7 @@ Updates content or configurations for the current chunk.
|
|
710 |
|
711 |
A dictionary representing the attributes to update, with the following keys:
|
712 |
|
713 |
-
- `"content"`: `str`
|
714 |
- `"important_keywords"`: `list[str]` A list of key terms or phrases to tag with the chunk.
|
715 |
- `"available"`: `bool` The chunk's availability status in the dataset. Value options:
|
716 |
- `False`: Unavailable
|
@@ -753,11 +753,11 @@ The user query or query keywords. Defaults to `""`.
|
|
753 |
|
754 |
#### datasets: `list[str]`, *Required*
|
755 |
|
756 |
-
The datasets to search from.
|
757 |
|
758 |
#### document: `list[str]`
|
759 |
|
760 |
-
The documents to search from. Defaults to `None`.
|
761 |
|
762 |
#### offset: `int`
|
763 |
|
@@ -771,7 +771,7 @@ The maximum number of chunks to retrieve. Defaults to `1024`.
|
|
771 |
|
772 |
The minimum similarity score. Defaults to `0.2`.
|
773 |
|
774 |
-
####
|
775 |
|
776 |
The weight of vector cosine similarity. Defaults to `0.3`. If x represents the vector cosine similarity, then (1 - x) is the term similarity weight.
|
777 |
|
@@ -792,7 +792,7 @@ Indicates whether to enable keyword-based matching:
|
|
792 |
|
793 |
#### highlight: `bool`
|
794 |
|
795 |
-
|
796 |
|
797 |
- `True`: Enable highlighting of matched terms.
|
798 |
- `False`: Disable highlighting of matched terms (default).
|
@@ -849,11 +849,9 @@ Creates a chat assistant.
|
|
849 |
|
850 |
### Parameters
|
851 |
|
852 |
-
The following shows the attributes of a `Chat` object:
|
853 |
-
|
854 |
#### name: `str`, *Required*
|
855 |
|
856 |
-
The name of the chat assistant
|
857 |
|
858 |
#### avatar: `str`
|
859 |
|
@@ -865,39 +863,41 @@ The IDs of the associated datasets. Defaults to `[""]`.
|
|
865 |
|
866 |
#### llm: `Chat.LLM`
|
867 |
|
868 |
-
The
|
869 |
-
|
870 |
-
An `LLM` object contains the following attributes:
|
871 |
|
872 |
-
- `model_name
|
873 |
The chat model name. If it is `None`, the user's default chat model will be returned.
|
874 |
-
- `temperature
|
875 |
-
Controls the randomness of the model's predictions. A lower temperature increases the model's
|
876 |
-
- `top_p
|
877 |
Also known as “nucleus sampling”, this parameter sets a threshold to select a smaller set of words to sample from. It focuses on the most likely words, cutting off the less probable ones. Defaults to `0.3`
|
878 |
-
- `presence_penalty
|
879 |
This discourages the model from repeating the same information by penalizing words that have already appeared in the conversation. Defaults to `0.2`.
|
880 |
-
- `frequency penalty
|
881 |
Similar to the presence penalty, this reduces the model’s tendency to repeat the same words frequently. Defaults to `0.7`.
|
882 |
-
- `max_token
|
883 |
-
|
884 |
|
885 |
#### prompt: `Chat.Prompt`
|
886 |
|
887 |
Instructions for the LLM to follow. A `Prompt` object contains the following attributes:
|
888 |
|
889 |
-
- `
|
890 |
-
- `
|
891 |
-
- `
|
892 |
-
- `
|
893 |
-
- `
|
894 |
-
-
|
895 |
-
-
|
896 |
-
|
897 |
-
- `
|
|
|
|
|
|
|
|
|
898 |
Here is the knowledge base:
|
899 |
{knowledge}
|
900 |
-
The above is the knowledge base
|
901 |
|
902 |
### Returns
|
903 |
|
@@ -942,11 +942,11 @@ A dictionary representing the attributes to update, with the following keys:
|
|
942 |
- `"top_p"`, `float` Also known as “nucleus sampling”, this parameter sets a threshold to select a smaller set of words to sample from.
|
943 |
- `"presence_penalty"`, `float` This discourages the model from repeating the same information by penalizing words that have appeared in the conversation.
|
944 |
- `"frequency penalty"`, `float` Similar to presence penalty, this reduces the model’s tendency to repeat the same words.
|
945 |
-
- `"max_token"`, `int`
|
946 |
- `"prompt"` : Instructions for the LLM to follow.
|
947 |
-
- `"similarity_threshold"`: `float`
|
948 |
-
- `"keywords_similarity_weight"`: `float`
|
949 |
-
- `"top_n"`: `int`
|
950 |
- `"variables"`: `list[dict[]]` If you use dialog APIs, the variables might help you chat with your clients with different strategies. The variables are used to fill in the 'System' part in prompt in order to give LLM a hint. The 'knowledge' is a very special variable which will be filled-in with the retrieved chunks. All the variables in 'System' should be curly bracketed. Defaults to `[{"key": "knowledge", "optional": True}]`
|
951 |
- `"rerank_model"`: `str` If it is not specified, vector cosine similarity will be used; otherwise, reranking score will be used. Defaults to `""`.
|
952 |
- `"empty_response"`: `str` If nothing is retrieved in the dataset for the user's question, this will be used as the response. To allow the LLM to improvise when nothing is retrieved, leave this blank. Defaults to `None`.
|
|
|
587 |
|
588 |
A `Chunk` object contains the following attributes:
|
589 |
|
590 |
+
- `id`: `str` The chunk ID.
|
591 |
+
- `content`: `str` The text content of the chunk.
|
592 |
+
- `important_keywords`: `list[str]` A list of key terms or phrases tagged with the chunk.
|
593 |
- `create_time`: `str` The time when the chunk was created (added to the document).
|
594 |
- `create_timestamp`: `float` The timestamp representing the creation time of the chunk, expressed in seconds since January 1, 1970.
|
595 |
- `knowledgebase_id`: `str` The ID of the associated dataset.
|
|
|
710 |
|
711 |
A dictionary representing the attributes to update, with the following keys:
|
712 |
|
713 |
+
- `"content"`: `str` The text content of the chunk.
|
714 |
- `"important_keywords"`: `list[str]` A list of key terms or phrases to tag with the chunk.
|
715 |
- `"available"`: `bool` The chunk's availability status in the dataset. Value options:
|
716 |
- `False`: Unavailable
|
|
|
753 |
|
754 |
#### datasets: `list[str]`, *Required*
|
755 |
|
756 |
+
The IDs of the datasets to search from.
|
757 |
|
758 |
#### document: `list[str]`
|
759 |
|
760 |
+
The IDs of the documents to search from. Defaults to `None`.
|
761 |
|
762 |
#### offset: `int`
|
763 |
|
|
|
771 |
|
772 |
The minimum similarity score. Defaults to `0.2`.
|
773 |
|
774 |
+
#### vector_similarity_weight: `float`
|
775 |
|
776 |
The weight of vector cosine similarity. Defaults to `0.3`. If x represents the vector cosine similarity, then (1 - x) is the term similarity weight.
|
777 |
|
|
|
792 |
|
793 |
#### highlight: `bool`
|
794 |
|
795 |
+
Specifies whether to enable highlighting of matched terms in the results:
|
796 |
|
797 |
- `True`: Enable highlighting of matched terms.
|
798 |
- `False`: Disable highlighting of matched terms (default).
|
|
|
849 |
|
850 |
### Parameters
|
851 |
|
|
|
|
|
852 |
#### name: `str`, *Required*
|
853 |
|
854 |
+
The name of the chat assistant.
|
855 |
|
856 |
#### avatar: `str`
|
857 |
|
|
|
863 |
|
864 |
#### llm: `Chat.LLM`
|
865 |
|
866 |
+
The LLM settings for the chat assistant to create. Defaults to `None`. When the value is `None`, a dictionary with the following values will be generated as the default. An `LLM` object contains the following attributes:
|
|
|
|
|
867 |
|
868 |
+
- `model_name`: `str`
|
869 |
The chat model name. If it is `None`, the user's default chat model will be returned.
|
870 |
+
- `temperature`: `float`
|
871 |
+
Controls the randomness of the model's predictions. A lower temperature increases the model's confidence in its responses; a higher temperature increases creativity and diversity. Defaults to `0.1`.
|
872 |
+
- `top_p`: `float`
|
873 |
Also known as “nucleus sampling”, this parameter sets a threshold to select a smaller set of words to sample from. It focuses on the most likely words, cutting off the less probable ones. Defaults to `0.3`
|
874 |
+
- `presence_penalty`: `float`
|
875 |
This discourages the model from repeating the same information by penalizing words that have already appeared in the conversation. Defaults to `0.2`.
|
876 |
+
- `frequency penalty`: `float`
|
877 |
Similar to the presence penalty, this reduces the model’s tendency to repeat the same words frequently. Defaults to `0.7`.
|
878 |
+
- `max_token`: `int`
|
879 |
+
The maximum length of the model’s output, measured in the number of tokens (words or pieces of words). Defaults to `512`.
|
880 |
|
881 |
#### prompt: `Chat.Prompt`
|
882 |
|
883 |
Instructions for the LLM to follow. A `Prompt` object contains the following attributes:
|
884 |
|
885 |
+
- `similarity_threshold`: `float` RAGFlow uses a hybrid of weighted keyword similarity and vector cosine similarity during retrieval. This argument sets the threshold for similarities between the user query and chunks. If a similarity score falls below this threshold, the corresponding chunk will be excluded from the results. The default value is `0.2`.
|
886 |
+
- `keywords_similarity_weight`: `float` This argument sets the weight of keyword similarity in the hybrid similarity score with vector cosine similarity or reranking model similarity. By adjusting this weight, you can control the influence of keyword similarity in relation to other similarity measures. The default value is `0.7`.
|
887 |
+
- `top_n`: `int` This argument specifies the number of top chunks with similarity scores above the `similarity_threshold` that are fed to the LLM. The LLM will *only* access these 'top N' chunks. The default value is `8`.
|
888 |
+
- `variables`: `list[dict[]]` This argument lists the variables to use in the 'System' field of **Chat Configurations**. Note that:
|
889 |
+
- `knowledge` is a reserved variable, which will be replaced with the retrieved chunks.
|
890 |
+
- All the variables in 'System' should be curly bracketed.
|
891 |
+
- The default value is `[{"key": "knowledge", "optional": True}]`
|
892 |
+
|
893 |
+
- `rerank_model`: `str` If it is not specified, vector cosine similarity will be used; otherwise, reranking score will be used. Defaults to `""`.
|
894 |
+
- `empty_response`: `str` If nothing is retrieved in the dataset for the user's question, this will be used as the response. To allow the LLM to improvise when nothing is found, leave this blank. Defaults to `None`.
|
895 |
+
- `opener`: `str` The opening greeting for the user. Defaults to `"Hi! I am your assistant, can I help you?"`.
|
896 |
+
- `show_quote`: `bool` Indicates whether the source of text should be displayed. Defaults to `True`.
|
897 |
+
- `prompt`: `str` The prompt content. Defaults to `You are an intelligent assistant. Please summarize the content of the dataset to answer the question. Please list the data in the knowledge base and answer in detail. When all knowledge base content is irrelevant to the question, your answer must include the sentence "The answer you are looking for is not found in the knowledge base!" Answers need to consider chat history.
|
898 |
Here is the knowledge base:
|
899 |
{knowledge}
|
900 |
+
The above is the knowledge base.`
|
901 |
|
902 |
### Returns
|
903 |
|
|
|
942 |
- `"top_p"`, `float` Also known as “nucleus sampling”, this parameter sets a threshold to select a smaller set of words to sample from.
|
943 |
- `"presence_penalty"`, `float` This discourages the model from repeating the same information by penalizing words that have appeared in the conversation.
|
944 |
- `"frequency penalty"`, `float` Similar to presence penalty, this reduces the model’s tendency to repeat the same words.
|
945 |
+
- `"max_token"`, `int` The maximum length of the model’s output, measured in the number of tokens (words or pieces of words).
|
946 |
- `"prompt"` : Instructions for the LLM to follow.
|
947 |
+
- `"similarity_threshold"`: `float` RAGFlow uses a hybrid of weighted keyword similarity and vector cosine similarity during retrieval. This argument sets the threshold for similarities between the user query and chunks. If a similarity score falls below this threshold, the corresponding chunk will be excluded from the results. The default value is `0.2`.
|
948 |
+
- `"keywords_similarity_weight"`: `float` This argument sets the weight of keyword similarity in the hybrid similarity score with vector cosine similarity or reranking model similarity. By adjusting this weight, you can control the influence of keyword similarity in relation to other similarity measures. The default value is `0.7`.
|
949 |
+
- `"top_n"`: `int` This argument specifies the number of top chunks with similarity scores above the `similarity_threshold` that are fed to the LLM. The LLM will *only* access these 'top N' chunks. The default value is `8`.
|
950 |
- `"variables"`: `list[dict[]]` If you use dialog APIs, the variables might help you chat with your clients with different strategies. The variables are used to fill in the 'System' part in prompt in order to give LLM a hint. The 'knowledge' is a very special variable which will be filled-in with the retrieved chunks. All the variables in 'System' should be curly bracketed. Defaults to `[{"key": "knowledge", "optional": True}]`
|
951 |
- `"rerank_model"`: `str` If it is not specified, vector cosine similarity will be used; otherwise, reranking score will be used. Defaults to `""`.
|
952 |
- `"empty_response"`: `str` If nothing is retrieved in the dataset for the user's question, this will be used as the response. To allow the LLM to improvise when nothing is retrieved, leave this blank. Defaults to `None`.
|