jinhai-2012 committed on
Commit c8c0e6e · 1 Parent(s): 95f8bbb

Remove defaults to 'None' (#2996)


### What problem does this PR solve?

Removes the "Defaults to `None`." statements from `api/http_api_reference.md`. Optional parameters that are simply omitted from a request have no meaningful default, so documenting a `None` default for them was redundant.

### Type of change

- [x] Documentation Update

Signed-off-by: Jin Hai <haijin.chn@gmail.com>

Files changed (1): api/http_api_reference.md (+18 -18)
@@ -324,9 +324,9 @@ curl --request GET \
  - `"desc"`: (*Path parameter*)
  Indicates whether the retrieved datasets should be sorted in descending order. Defaults to `True`.
  - `"id"`: (*Path parameter*)
- The ID of the dataset to retrieve. Defaults to `None`.
+ The ID of the dataset to retrieve.
  - `"name"`: (*Path parameter*)
- The name of the dataset to retrieve. Defaults to `None`.
+ The name of the dataset to retrieve.

  ### Response

@@ -600,7 +600,7 @@ curl --request GET \
  - `"dataset_id"`: (*Path parameter*)
  The dataset ID.
  - `"keywords"`: (*Filter parameter*), `string`
- The keywords used to match document titles. Defaults to `None`.
+ The keywords used to match document titles.
  - `"offset"`: (*Filter parameter*), `integer`
  The starting index for the documents to retrieve. Typically used in conjunction with `limit`. Defaults to `1`.
  - `"limit"`: (*Filter parameter*), `integer`
@@ -612,7 +612,7 @@ curl --request GET \
  - `"desc"`: (*Filter parameter*), `boolean`
  Indicates whether the retrieved documents should be sorted in descending order. Defaults to `True`.
  - `"document_id"`: (*Filter parameter*)
- The ID of the document to retrieve. Defaults to `None`.
+ The ID of the document to retrieve.

  ### Response

@@ -701,7 +701,7 @@ curl --request DELETE \
  #### Request parameters

  - `"ids"`: (*Body parameter*), `list[string]`
- The IDs of the documents to delete. Defaults to `None`. If not specified, all documents in the dataset will be deleted.
+ The IDs of the documents to delete. If not specified, all documents in the dataset will be deleted.

  ### Response

@@ -1027,7 +1027,7 @@ curl --request DELETE \
  #### Request parameters

  - `"chunk_ids"`: (*Body parameter*)
- The IDs of the chunks to delete. Defaults to `None`. If not specified, all chunks of the current document will be deleted.
+ The IDs of the chunks to delete. If not specified, all chunks of the current document will be deleted.

  ### Response

@@ -1164,7 +1164,7 @@ curl --request POST \
  - `"datasets"`: (*Body parameter*) `list[string]`, *Required*
  The IDs of the datasets to search from.
  - `"documents"`: (*Body parameter*), `list[string]`
- The IDs of the documents to search from. Defaults to `None`.
+ The IDs of the documents to search from.
  - `"offset"`: (*Body parameter*), `integer`
  The starting index for the documents to retrieve. Defaults to `1`.
  - `"limit"`: (*Body parameter*)
@@ -1176,7 +1176,7 @@ curl --request POST \
  - `"top_k"`: (*Body parameter*)
  The number of chunks engaged in vector cosine computation. Defaults to `1024`.
  - `"rerank_id"`: (*Body parameter*)
- The ID of the rerank model. Defaults to `None`.
+ The ID of the rerank model.
  - `"keyword"`: (*Body parameter*), `boolean`
  Indicates whether to enable keyword-based matching:
  - `True`: Enable keyword-based matching.
@@ -1308,7 +1308,7 @@ curl --request POST \
  - `"knowledgebases"`: (*Body parameter*)
  The IDs of the associated datasets. Defaults to `[""]`.
  - `"llm"`: (*Body parameter*), `object`
- The LLM settings for the chat assistant to create. Defaults to `None`. When the value is `None`, a dictionary with the following values will be generated as the default. An `llm` object contains the following attributes:
+ The LLM settings for the chat assistant to create. When the value is `None`, a dictionary with the following values will be generated as the default. An `llm` object contains the following attributes:
  - `"model_name"`, `string`
  The chat model name. If it is `None`, the user's default chat model will be returned.
  - `"temperature"`: `float`
@@ -1331,7 +1331,7 @@ curl --request POST \
  - All the variables in 'System' should be curly bracketed.
  - The default value is `[{"key": "knowledge", "optional": True}]`
  - `"rerank_model"`: `string` If it is not specified, vector cosine similarity will be used; otherwise, reranking score will be used. Defaults to `""`.
- - `"empty_response"`: `string` If nothing is retrieved in the dataset for the user's question, this will be used as the response. To allow the LLM to improvise when nothing is found, leave this blank. Defaults to `None`.
+ - `"empty_response"`: `string` If nothing is retrieved in the dataset for the user's question, this will be used as the response. To allow the LLM to improvise when nothing is found, leave this blank.
  - `"opener"`: `string` The opening greeting for the user. Defaults to `"Hi! I am your assistant, can I help you?"`.
  - `"show_quote"`: `boolean` Indicates whether the source of text should be displayed. Defaults to `True`.
  - `"prompt"`: `string` The prompt content. Defaults to `You are an intelligent assistant. Please summarize the content of the dataset to answer the question. Please list the data in the knowledge base and answer in detail. When all knowledge base content is irrelevant to the question, your answer must include the sentence "The answer you are looking for is not found in the knowledge base!" Answers need to consider chat history.
@@ -1463,7 +1463,7 @@ curl --request PUT \
  - `"knowledgebases"`: (*Body parameter*)
  The IDs of the associated datasets. Defaults to `[""]`.
  - `"llm"`: (*Body parameter*), `object`
- The LLM settings for the chat assistant to create. Defaults to `None`. When the value is `None`, a dictionary with the following values will be generated as the default. An `llm` object contains the following attributes:
+ The LLM settings for the chat assistant to create. When the value is `None`, a dictionary with the following values will be generated as the default. An `llm` object contains the following attributes:
  - `"model_name"`, `string`
  The chat model name. If it is `None`, the user's default chat model will be returned.
  - `"temperature"`: `float`
@@ -1486,7 +1486,7 @@ curl --request PUT \
  - All the variables in 'System' should be curly bracketed.
  - The default value is `[{"key": "knowledge", "optional": True}]`
  - `"rerank_model"`: `string` If it is not specified, vector cosine similarity will be used; otherwise, reranking score will be used. Defaults to `""`.
- - `"empty_response"`: `string` If nothing is retrieved in the dataset for the user's question, this will be used as the response. To allow the LLM to improvise when nothing is found, leave this blank. Defaults to `None`.
+ - `"empty_response"`: `string` If nothing is retrieved in the dataset for the user's question, this will be used as the response. To allow the LLM to improvise when nothing is found, leave this blank.
  - `"opener"`: `string` The opening greeting for the user. Defaults to `"Hi! I am your assistant, can I help you?"`.
  - `"show_quote"`: `boolean` Indicates whether the source of text should be displayed. Defaults to `True`.
  - `"prompt"`: `string` The prompt content. Defaults to `You are an intelligent assistant. Please summarize the content of the dataset to answer the question. Please list the data in the knowledge base and answer in detail. When all knowledge base content is irrelevant to the question, your answer must include the sentence "The answer you are looking for is not found in the knowledge base!" Answers need to consider chat history.
@@ -1548,7 +1548,7 @@ curl --request DELETE \
  #### Request parameters

  - `"ids"`: (*Body parameter*), `list[string]`
- The IDs of the chat assistants to delete. Defaults to `None`. If not specified, all chat assistants in the system will be deleted.
+ The IDs of the chat assistants to delete. If not specified, all chat assistants in the system will be deleted.

  ### Response

@@ -1605,9 +1605,9 @@ curl --request GET \
  - `"desc"`: (*Path parameter*), `boolean`
  Indicates whether the retrieved chat assistants should be sorted in descending order. Defaults to `True`.
  - `"id"`: (*Path parameter*), `string`
- The ID of the chat assistant to retrieve. Defaults to `None`.
+ The ID of the chat assistant to retrieve.
  - `"name"`: (*Path parameter*), `string`
- The name of the chat assistant to retrieve. Defaults to `None`.
+ The name of the chat assistant to retrieve.

  ### Response

@@ -1857,9 +1857,9 @@ curl --request GET \
  - `"desc"`: (*Path parameter*), `boolean`
  Indicates whether the retrieved sessions should be sorted in descending order. Defaults to `True`.
  - `"id"`: (*Path parameter*), `string`
- The ID of the chat session to retrieve. Defaults to `None`.
+ The ID of the chat session to retrieve.
  - `"name"`: (*Path parameter*) `string`
- The name of the chat session to retrieve. Defaults to `None`.
+ The name of the chat session to retrieve.

  ### Response

@@ -1931,7 +1931,7 @@ curl --request DELETE \
  #### Request Parameters

  - `"ids"`: (*Body Parameter*), `list[string]`
- The IDs of the sessions to delete. Defaults to `None`. If not specified, all sessions associated with the current chat assistant will be deleted.
+ The IDs of the sessions to delete. If not specified, all sessions associated with the current chat assistant will be deleted.

  ### Response
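
For reviewers, a minimal sketch of how a list call reads once an optional filter such as `"id"` or `"name"` is simply omitted rather than passed as `None`. The host, API path, and token are illustrative placeholders, not taken from this diff:

```bash
# List datasets; the optional "id"/"name" filters are simply omitted.
# (Placeholder host, path, and token -- adjust to your deployment.)
curl --request GET \
     --url 'http://localhost:9380/api/v1/datasets?desc=true' \
     --header 'Authorization: Bearer <YOUR_API_KEY>'

# Pass "name" only when retrieving a single dataset by name.
curl --request GET \
     --url 'http://localhost:9380/api/v1/datasets?desc=true&name=test_dataset' \
     --header 'Authorization: Bearer <YOUR_API_KEY>'
```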
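A hedged sketch of a retrieval request that leaves out the optional `documents` and `rerank_id` fields instead of sending them as `null`. The URL, the `question` field, and the IDs are assumptions, not part of this diff; `datasets`, `offset`, `limit`, and `keyword` are the parameters documented above:

```bash
# Retrieval using only the fields this request needs; optional "documents"
# and "rerank_id" are omitted entirely. (Placeholder host, path, IDs, token.)
curl --request POST \
     --url 'http://localhost:9380/api/v1/retrieval' \
     --header 'Authorization: Bearer <YOUR_API_KEY>' \
     --header 'Content-Type: application/json' \
     --data '{
       "question": "What is RAGFlow?",
       "datasets": ["<DATASET_ID>"],
       "offset": 1,
       "limit": 30,
       "keyword": false
     }'
```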
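A sketch of creating a chat assistant with no explicit `llm` object, in which case, per the reference text above, the default LLM settings are generated server-side. The URL and the `name` field are placeholders not shown in this diff:

```bash
# Create a chat assistant; "llm" is omitted, so the server fills in the
# default LLM settings described in the reference. (Placeholder host, path,
# field values, and token.)
curl --request POST \
     --url 'http://localhost:9380/api/v1/chats' \
     --header 'Authorization: Bearer <YOUR_API_KEY>' \
     --header 'Content-Type: application/json' \
     --data '{
       "name": "docs_assistant",
       "knowledgebases": ["<DATASET_ID>"]
     }'
```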