writinwaters committed on
Commit
10bca92
·
1 Parent(s): 1a43942

Final touches to HTTP and Python API references (#3019)


### What problem does this PR solve?


### Type of change


- [x] Documentation Update

api/http_api_reference.md CHANGED
@@ -5,7 +5,7 @@
5
 
6
  ---
7
 
8
- :::tip NOTE
9
  Dataset Management
10
  :::
11
 
@@ -32,7 +32,7 @@ Creates a dataset.
32
  - `"embedding_model"`: `string`
33
  - `"permission"`: `string`
34
  - `"chunk_method"`: `string`
35
- - `"parser_config"`: `Dataset.ParserConfig`
36
 
37
  #### Request example
38
 
@@ -86,11 +86,11 @@ curl --request POST \
86
  - `"laws"`: Laws
87
  - `"presentation"`: Presentation
88
  - `"picture"`: Picture
89
- - `"one"`:One
90
  - `"knowledge_graph"`: Knowledge Graph
91
  - `"email"`: Email
92
 
93
- - `"parser_config"`: (*Body parameter*)
94
  The configuration settings for the dataset parser. A `ParserConfig` object contains the following attributes:
95
  - `"chunk_token_count"`: Defaults to `128`.
96
  - `"layout_recognize"`: Defaults to `true`.
@@ -237,8 +237,8 @@ curl --request PUT \
237
  - `dataset_id`: (*Path parameter*)
238
  The ID of the dataset to update.
239
  - `"name"`: `string`
240
- The name of the dataset to update.
241
- - `"embedding_model"`: `string` The embedding model name to update.
242
  - Ensure that `"chunk_count"` is `0` before updating `"embedding_model"`.
243
  - `"chunk_method"`: `enum<string>` The chunking method for the dataset. Available options:
244
  - `"naive"`: General
@@ -572,7 +572,7 @@ curl --request GET \
572
  Success:
573
 
574
  ```text
575
- This is a test to verify the file download functionality.
576
  ```
577
 
578
  Failure:
@@ -938,7 +938,7 @@ Lists chunks in a specified document.
938
  ### Request
939
 
940
  - Method: GET
941
- - URL: `/api/v1/dataset/{dataset_id}/document/{document_id}/chunk?keywords={keywords}&offset={offset}&limit={limit}&id={id}`
942
  - Headers:
943
  - `'Authorization: Bearer {YOUR_API_KEY}'`
944
 
@@ -946,7 +946,7 @@ Lists chunks in a specified document.
946
 
947
  ```bash
948
  curl --request GET \
949
- --url http://{address}/api/v1/dataset/{dataset_id}/document/{document_id}/chunk?keywords={keywords}&offset={offset}&limit={limit}&id={id} \
950
  --header 'Authorization: Bearer {YOUR_API_KEY}'
951
  ```
952
 
@@ -956,13 +956,13 @@ curl --request GET \
956
  The associated dataset ID.
957
  - `document_ids`: (*Path parameter*)
958
  The associated document ID.
959
- - `"keywords"`(*Filter parameter*), `string`
960
  The keywords used to match chunk content.
961
- - `"offset"`(*Filter parameter*), `string`
962
  The starting index for the chunks to retrieve. Defaults to `1`.
963
- - `"limit"`(*Filter parameter*), `integer`
964
 The maximum number of chunks to retrieve. Defaults to `1024`.
965
- - `"id"`(*Filter parameter*), `string`
966
  The ID of the chunk to retrieve.
967
 
968
  ### Response
@@ -1210,21 +1210,21 @@ curl --request POST \
1210
 
1211
  - `"question"`: (*Body parameter*), `string`, *Required*
1212
  The user query or query keywords.
1213
- - `"dataset_ids"`: (*Body parameter*) `list[string]`, *Required*
1214
- The IDs of the datasets to search from.
1215
  - `"document_ids"`: (*Body parameter*), `list[string]`
1216
- The IDs of the documents to search from.
1217
  - `"offset"`: (*Body parameter*), `integer`
1218
  The starting index for the documents to retrieve. Defaults to `1`.
1219
 - `"limit"`: (*Body parameter*), `integer`
1220
  The maximum number of chunks to retrieve. Defaults to `1024`.
1221
 - `"similarity_threshold"`: (*Body parameter*), `float`
1222
  The minimum similarity score. Defaults to `0.2`.
1223
- - `"vector_similarity_weight"`: (*Body parameter*)
1224
  The weight of vector cosine similarity. Defaults to `0.3`. If x represents the vector cosine similarity, then (1 - x) is the term similarity weight.
1225
- - `"top_k"`: (*Body parameter*)
1226
 The number of chunks engaged in vector cosine computation. Defaults to `1024`.
1227
- - `"rerank_id"`: (*Body parameter*)
1228
  The ID of the rerank model.
1229
  - `"keyword"`: (*Body parameter*), `boolean`
1230
  Indicates whether to enable keyword-based matching:
@@ -1335,7 +1335,7 @@ curl --request POST \
1335
  - `"dataset_ids"`: (*Body parameter*), `list[string]`
1336
  The IDs of the associated datasets.
1337
  - `"llm"`: (*Body parameter*), `object`
1338
- The LLM settings for the chat assistant to create. If it is not explicitly set, a dictionary with the following values will be generated as the default. An `llm` object contains the following attributes:
1339
  - `"model_name"`, `string`
1340
  The chat model name. If not set, the user's default chat model will be used.
1341
  - `"temperature"`: `float`
@@ -1349,7 +1349,7 @@ curl --request POST \
1349
  - `"max_token"`: `integer`
1350
  The maximum length of the model’s output, measured in the number of tokens (words or pieces of words). Defaults to `512`.
1351
  - `"prompt"`: (*Body parameter*), `object`
1352
- Instructions for the LLM to follow. A `prompt` object contains the following attributes:
1353
  - `"similarity_threshold"`: `float` RAGFlow uses a hybrid of weighted keyword similarity and vector cosine similarity during retrieval. This argument sets the threshold for similarities between the user query and chunks. If a similarity score falls below this threshold, the corresponding chunk will be excluded from the results. The default value is `0.2`.
1354
  - `"keywords_similarity_weight"`: `float` This argument sets the weight of keyword similarity in the hybrid similarity score with vector cosine similarity or reranking model similarity. By adjusting this weight, you can control the influence of keyword similarity in relation to other similarity measures. The default value is `0.7`.
1355
  - `"top_n"`: `int` This argument specifies the number of top chunks with similarity scores above the `similarity_threshold` that are fed to the LLM. The LLM will *only* access these 'top N' chunks. The default value is `8`.
@@ -1467,7 +1467,7 @@ curl --request PUT \
1467
  - `chat_id`: (*Path parameter*)
1468
  The ID of the chat assistant to update.
1469
  - `"name"`: (*Body parameter*), `string`, *Required*
1470
- The name of the chat assistant.
1471
  - `"avatar"`: (*Body parameter*), `string`
1472
  Base64 encoding of the avatar.
1473
  - `"dataset_ids"`: (*Body parameter*), `list[string]`
@@ -1603,19 +1603,19 @@ curl --request GET \
1603
 
1604
  #### Request parameters
1605
 
1606
- - `page`: (*Path parameter*), `integer`
1607
  Specifies the page on which the chat assistants will be displayed. Defaults to `1`.
1608
- - `page_size`: (*Path parameter*), `integer`
1609
  The number of chat assistants on each page. Defaults to `1024`.
1610
- - `orderby`: (*Path parameter*), `string`
1611
  The attribute by which the results are sorted. Available options:
1612
  - `create_time` (default)
1613
  - `update_time`
1614
- - `"desc"`: (*Path parameter*), `boolean`
1615
  Indicates whether the retrieved chat assistants should be sorted in descending order. Defaults to `true`.
1616
- - `id`: (*Path parameter*), `string`
1617
  The ID of the chat assistant to retrieve.
1618
- - `name`: (*Path parameter*), `string`
1619
  The name of the chat assistant to retrieve.
1620
 
1621
  ### Response
@@ -1775,7 +1775,7 @@ curl --request PUT \
1775
  --header 'Authorization: Bearer {YOUR_API_KEY}' \
1776
  --data '
1777
  {
1778
- "name": "Updated session"
1779
  }'
1780
  ```
1781
 
@@ -1786,7 +1786,7 @@ curl --request PUT \
1786
  - `session_id`: (*Path parameter*)
1787
  The ID of the session to update.
1788
 - `"name"`: (*Body Parameter*), `string`
1789
- The name of the session to update.
1790
 
1791
  ### Response
1792
 
@@ -1818,7 +1818,7 @@ Lists sessions associated with a specified chat assistant.
1818
  ### Request
1819
 
1820
  - Method: GET
1821
- - URL: `/api/v1/chat/{chat_id}/session?page={page}&page_size={page_size}&orderby={orderby}&desc={desc}&name={dataset_name}&id={dataset_id}`
1822
  - Headers:
1823
  - `'Authorization: Bearer {YOUR_API_KEY}'`
1824
 
@@ -1949,7 +1949,7 @@ Failure:
1949
 
1950
  **POST** `/api/v1/chat/{chat_id}/completion`
1951
 
1952
- Asks a question to start a conversation.
1953
 
1954
  ### Request
1955
 
@@ -1972,7 +1972,7 @@ curl --request POST \
1972
  --header 'Authorization: Bearer {YOUR_API_KEY}' \
1973
  --data-binary '
1974
  {
1975
- "question": "Hello!",
1976
  "stream": true
1977
  }'
1978
  ```
@@ -1982,11 +1982,11 @@ curl --request POST \
1982
  - `chat_id`: (*Path parameter*)
1983
  The ID of the associated chat assistant.
1984
  - `"question"`: (*Body Parameter*), `string` *Required*
1985
- The question to start an AI chat.
1986
  - `"stream"`: (*Body Parameter*), `boolean`
1987
  Indicates whether to output responses in a streaming way:
1988
  - `true`: Enable streaming.
1989
- - `false`: (Default) Disable streaming.
1990
  - `"session_id"`: (*Body Parameter*)
1991
 The ID of the session. If it is not provided, a new session will be generated.
1992
 
 
5
 
6
  ---
7
 
8
+ :::tip API GROUPING
9
  Dataset Management
10
  :::
11
 
 
32
  - `"embedding_model"`: `string`
33
  - `"permission"`: `string`
34
  - `"chunk_method"`: `string`
35
+ - `"parser_config"`: `object`
36
 
37
  #### Request example
38
 
 
86
  - `"laws"`: Laws
87
  - `"presentation"`: Presentation
88
  - `"picture"`: Picture
89
+ - `"one"`: One
90
  - `"knowledge_graph"`: Knowledge Graph
91
  - `"email"`: Email
92
 
93
+ - `"parser_config"`: (*Body parameter*), `object`
94
  The configuration settings for the dataset parser. A `ParserConfig` object contains the following attributes:
95
  - `"chunk_token_count"`: Defaults to `128`.
96
  - `"layout_recognize"`: Defaults to `true`.
 
237
  - `dataset_id`: (*Path parameter*)
238
  The ID of the dataset to update.
239
  - `"name"`: `string`
240
+ The revised name of the dataset.
241
+ - `"embedding_model"`: `string` The updated embedding model name.
242
  - Ensure that `"chunk_count"` is `0` before updating `"embedding_model"`.
243
  - `"chunk_method"`: `enum<string>` The chunking method for the dataset. Available options:
244
  - `"naive"`: General
 
572
  Success:
573
 
574
  ```text
575
+ This is a test to verify the file download feature.
576
  ```
577
 
578
  Failure:
 
938
  ### Request
939
 
940
  - Method: GET
941
+ - URL: `/api/v1/dataset/{dataset_id}/document/{document_id}/chunk?keywords={keywords}&offset={offset}&limit={limit}&id={chunk_id}`
942
  - Headers:
943
  - `'Authorization: Bearer {YOUR_API_KEY}'`
944
 
 
946
 
947
  ```bash
948
  curl --request GET \
949
+ --url http://{address}/api/v1/dataset/{dataset_id}/document/{document_id}/chunk?keywords={keywords}&offset={offset}&limit={limit}&id={chunk_id} \
950
  --header 'Authorization: Bearer {YOUR_API_KEY}'
951
  ```
952
 
 
956
  The associated dataset ID.
957
  - `document_ids`: (*Path parameter*)
958
  The associated document ID.
959
+ - `keywords` (*Filter parameter*), `string`
960
  The keywords used to match chunk content.
961
+ - `offset` (*Filter parameter*), `integer`
962
  The starting index for the chunks to retrieve. Defaults to `1`.
963
+ - `limit` (*Filter parameter*), `integer`
964
 The maximum number of chunks to retrieve. Defaults to `1024`.
965
+ - `id` (*Filter parameter*), `string`
966
  The ID of the chunk to retrieve.
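The filter parameters above can be assembled into the request URL with standard tooling. A minimal sketch, assuming placeholder values for the address and IDs:

```python
from urllib.parse import urlencode

# Illustrative values only; substitute real IDs from your deployment.
dataset_id, document_id = "d1", "doc1"
base = f"http://127.0.0.1/api/v1/dataset/{dataset_id}/document/{document_id}/chunk"
params = {"keywords": "test", "offset": 1, "limit": 1024, "id": "c1"}
url = f"{base}?{urlencode(params)}"
```

Any filter parameter left out of `params` is simply omitted from the query string.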
967
 
968
  ### Response
 
1210
 
1211
  - `"question"`: (*Body parameter*), `string`, *Required*
1212
  The user query or query keywords.
1213
+ - `"dataset_ids"`: (*Body parameter*), `list[string]`
1214
+ The IDs of the datasets to search. If you do not set this argument, ensure that you set `"document_ids"`.
1215
  - `"document_ids"`: (*Body parameter*), `list[string]`
1216
+ The IDs of the documents to search. Ensure that all selected documents use the same embedding model. Otherwise, an error will occur. If you do not set this argument, ensure that you set `"dataset_ids"`.
1217
  - `"offset"`: (*Body parameter*), `integer`
1218
  The starting index for the documents to retrieve. Defaults to `1`.
1219
 - `"limit"`: (*Body parameter*), `integer`
1220
  The maximum number of chunks to retrieve. Defaults to `1024`.
1221
 - `"similarity_threshold"`: (*Body parameter*), `float`
1222
  The minimum similarity score. Defaults to `0.2`.
1223
+ - `"vector_similarity_weight"`: (*Body parameter*), `float`
1224
  The weight of vector cosine similarity. Defaults to `0.3`. If x represents the vector cosine similarity, then (1 - x) is the term similarity weight.
1225
+ - `"top_k"`: (*Body parameter*), `integer`
1226
 The number of chunks engaged in vector cosine computation. Defaults to `1024`.
1227
+ - `"rerank_id"`: (*Body parameter*), `string`
1228
  The ID of the rerank model.
1229
  - `"keyword"`: (*Body parameter*), `boolean`
1230
  Indicates whether to enable keyword-based matching:
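How `vector_similarity_weight` blends the two signals can be sketched in a few lines; `hybrid_score` is an illustrative helper, not part of the API:

```python
def hybrid_score(vector_sim, term_sim, vector_similarity_weight=0.3):
    # x * vector cosine similarity + (1 - x) * term similarity,
    # where x is vector_similarity_weight.
    x = vector_similarity_weight
    return x * vector_sim + (1 - x) * term_sim
```

With the default weight of `0.3`, a vector similarity of `0.9` and a term similarity of `0.5` blend to `0.62`.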
 
1335
  - `"dataset_ids"`: (*Body parameter*), `list[string]`
1336
  The IDs of the associated datasets.
1337
  - `"llm"`: (*Body parameter*), `object`
1338
+ The LLM settings for the chat assistant to create. If it is not explicitly set, a JSON object with the following values will be generated as the default. An `llm` JSON object contains the following attributes:
1339
  - `"model_name"`, `string`
1340
  The chat model name. If not set, the user's default chat model will be used.
1341
  - `"temperature"`: `float`
 
1349
  - `"max_token"`: `integer`
1350
  The maximum length of the model’s output, measured in the number of tokens (words or pieces of words). Defaults to `512`.
1351
  - `"prompt"`: (*Body parameter*), `object`
1352
+ Instructions for the LLM to follow. If it is not explicitly set, a JSON object with the following values will be generated as the default. A `prompt` JSON object contains the following attributes:
1353
  - `"similarity_threshold"`: `float` RAGFlow uses a hybrid of weighted keyword similarity and vector cosine similarity during retrieval. This argument sets the threshold for similarities between the user query and chunks. If a similarity score falls below this threshold, the corresponding chunk will be excluded from the results. The default value is `0.2`.
1354
  - `"keywords_similarity_weight"`: `float` This argument sets the weight of keyword similarity in the hybrid similarity score with vector cosine similarity or reranking model similarity. By adjusting this weight, you can control the influence of keyword similarity in relation to other similarity measures. The default value is `0.7`.
1355
  - `"top_n"`: `int` This argument specifies the number of top chunks with similarity scores above the `similarity_threshold` that are fed to the LLM. The LLM will *only* access these 'top N' chunks. The default value is `8`.
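The interplay of `"similarity_threshold"` and `"top_n"` described above can be sketched as follows (a hypothetical helper, not RAGFlow code; chunks are modeled as `(text, score)` pairs):

```python
def select_chunks(scored_chunks, similarity_threshold=0.2, top_n=8):
    # Drop chunks whose hybrid score falls below the threshold, then
    # feed only the highest-scoring top_n chunks to the LLM.
    kept = [(chunk, score) for chunk, score in scored_chunks
            if score >= similarity_threshold]
    kept.sort(key=lambda pair: pair[1], reverse=True)
    return kept[:top_n]
```

Raising the threshold and lowering `top_n` both shrink the context handed to the LLM, trading recall for precision.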
 
1467
  - `chat_id`: (*Path parameter*)
1468
  The ID of the chat assistant to update.
1469
  - `"name"`: (*Body parameter*), `string`, *Required*
1470
+ The revised name of the chat assistant.
1471
  - `"avatar"`: (*Body parameter*), `string`
1472
  Base64 encoding of the avatar.
1473
  - `"dataset_ids"`: (*Body parameter*), `list[string]`
 
1603
 
1604
  #### Request parameters
1605
 
1606
+ - `page`: (*Filter parameter*), `integer`
1607
  Specifies the page on which the chat assistants will be displayed. Defaults to `1`.
1608
+ - `page_size`: (*Filter parameter*), `integer`
1609
  The number of chat assistants on each page. Defaults to `1024`.
1610
+ - `orderby`: (*Filter parameter*), `string`
1611
  The attribute by which the results are sorted. Available options:
1612
  - `create_time` (default)
1613
  - `update_time`
1614
+ - `desc`: (*Filter parameter*), `boolean`
1615
  Indicates whether the retrieved chat assistants should be sorted in descending order. Defaults to `true`.
1616
+ - `id`: (*Filter parameter*), `string`
1617
  The ID of the chat assistant to retrieve.
1618
+ - `name`: (*Filter parameter*), `string`
1619
  The name of the chat assistant to retrieve.
1620
 
1621
  ### Response
 
1775
  --header 'Authorization: Bearer {YOUR_API_KEY}' \
1776
  --data '
1777
  {
1778
+ "name": "<REVISED_SESSION_NAME_HERE>"
1779
  }'
1780
  ```
1781
 
 
1786
  - `session_id`: (*Path parameter*)
1787
  The ID of the session to update.
1788
 - `"name"`: (*Body Parameter*), `string`
1789
+ The revised name of the session.
1790
 
1791
  ### Response
1792
 
 
1818
  ### Request
1819
 
1820
  - Method: GET
1821
+ - URL: `/api/v1/chat/{chat_id}/session?page={page}&page_size={page_size}&orderby={orderby}&desc={desc}&name={session_name}&id={session_id}`
1822
  - Headers:
1823
  - `'Authorization: Bearer {YOUR_API_KEY}'`
1824
 
 
1949
 
1950
  **POST** `/api/v1/chat/{chat_id}/completion`
1951
 
1952
+ Asks a question to start an AI-powered conversation.
1953
 
1954
  ### Request
1955
 
 
1972
  --header 'Authorization: Bearer {YOUR_API_KEY}' \
1973
  --data-binary '
1974
  {
1975
+ "question": "What is RAGFlow?",
1976
  "stream": true
1977
  }'
1978
  ```
 
1982
  - `chat_id`: (*Path parameter*)
1983
  The ID of the associated chat assistant.
1984
  - `"question"`: (*Body Parameter*), `string` *Required*
1985
+ The question to start an AI-powered conversation.
1986
  - `"stream"`: (*Body Parameter*), `boolean`
1987
  Indicates whether to output responses in a streaming way:
1988
  - `true`: Enable streaming.
1989
+ - `false`: Disable streaming (default).
1990
  - `"session_id"`: (*Body Parameter*)
1991
 The ID of the session. If it is not provided, a new session will be generated.
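A request body matching the parameters above can be built as follows; `completion_body` is a hypothetical helper, and only `question` is required:

```python
import json

def completion_body(question, stream=False, session_id=None):
    # session_id is optional; omit it and the server creates a new session.
    body = {"question": question, "stream": stream}
    if session_id is not None:
        body["session_id"] = session_id
    return json.dumps(body)
```

Passing the result to `curl --data-binary` (or any HTTP client) reproduces the request example above.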
1992
 
api/python_api_reference.md CHANGED
@@ -73,7 +73,7 @@ The chunking method of the dataset to create. Available options:
73
  - `"laws"`: Laws
74
  - `"presentation"`: Presentation
75
  - `"picture"`: Picture
76
- - `"one"`:One
77
  - `"knowledge_graph"`: Knowledge Graph
78
  - `"email"`: Email
79
 
@@ -210,8 +210,8 @@ Updates configurations for the current dataset.
210
 
211
  A dictionary representing the attributes to update, with the following keys:
212
 
213
- - `"name"`: `str` The name of the dataset to update.
214
- - `"embedding_model"`: `str` The embedding model name to update.
215
  - Ensure that `"chunk_count"` is `0` before updating `"embedding_model"`.
216
  - `"chunk_method"`: `str` The chunking method for the dataset. Available options:
217
  - `"naive"`: General
@@ -223,7 +223,7 @@ A dictionary representing the attributes to update, with the following keys:
223
  - `"laws"`: Laws
224
  - `"presentation"`: Presentation
225
  - `"picture"`: Picture
226
- - `"one"`:One
227
  - `"knowledge_graph"`: Knowledge Graph
228
  - `"email"`: Email
229
 
@@ -753,11 +753,11 @@ The user query or query keywords. Defaults to `""`.
753
 
754
  #### dataset_ids: `list[str]`, *Required*
755
 
756
- The IDs of the datasets to search from.
757
 
758
  #### document_ids: `list[str]`
759
 
760
- The IDs of the documents to search from. Defaults to `None`.
761
 
762
  #### offset: `int`
763
 
@@ -932,7 +932,7 @@ Updates configurations for the current chat assistant.
932
 
933
  A dictionary representing the attributes to update, with the following keys:
934
 
935
- - `"name"`: `str` The name of the chat assistant to update.
936
 - `"avatar"`: `str` Base64 encoding of the avatar. Defaults to `""`.
937
  - `"dataset_ids"`: `list[str]` The datasets to update.
938
  - `"llm"`: `dict` The LLM settings:
@@ -1117,7 +1117,7 @@ session = assistant.create_session()
1117
  Session.update(update_message: dict)
1118
  ```
1119
 
1120
- Updates the current session name.
1121
 
1122
  ### Parameters
1123
 
@@ -1125,7 +1125,7 @@ Updates the current session name.
1125
 
1126
  A dictionary representing the attributes to update, with only one key:
1127
 
1128
- - `"name"`: `str` The name of the session to update.
1129
 
1130
  ### Returns
1131
 
@@ -1247,7 +1247,7 @@ assistant.delete_sessions(ids=["id_1","id_2"])
1247
  Session.ask(question: str, stream: bool = False) -> Optional[Message, iter[Message]]
1248
  ```
1249
 
1250
- Asks a question to start a conversation.
1251
 
1252
  ### Parameters
1253
 
@@ -1260,7 +1260,7 @@ The question to start an AI chat.
1260
  Indicates whether to output responses in a streaming way:
1261
 
1262
  - `True`: Enable streaming.
1263
- - `False`: (Default) Disable streaming.
1264
 
1265
  ### Returns
1266
 
@@ -1324,4 +1324,4 @@ while True:
1324
 for answer in session.ask(question, stream=True):
1325
  print(answer.content[len(cont):], end='', flush=True)
1326
  cont = answer.content
1327
- ```
 
73
  - `"laws"`: Laws
74
  - `"presentation"`: Presentation
75
  - `"picture"`: Picture
76
+ - `"one"`: One
77
  - `"knowledge_graph"`: Knowledge Graph
78
  - `"email"`: Email
79
 
 
210
 
211
  A dictionary representing the attributes to update, with the following keys:
212
 
213
+ - `"name"`: `str` The revised name of the dataset.
214
+ - `"embedding_model"`: `str` The updated embedding model name.
215
  - Ensure that `"chunk_count"` is `0` before updating `"embedding_model"`.
216
  - `"chunk_method"`: `str` The chunking method for the dataset. Available options:
217
  - `"naive"`: General
 
223
  - `"laws"`: Laws
224
  - `"presentation"`: Presentation
225
  - `"picture"`: Picture
226
+ - `"one"`: One
227
  - `"knowledge_graph"`: Knowledge Graph
228
  - `"email"`: Email
229
 
 
753
 
754
  #### dataset_ids: `list[str]`, *Required*
755
 
756
+ The IDs of the datasets to search. Defaults to `None`. If you do not set this argument, ensure that you set `document_ids`.
757
 
758
  #### document_ids: `list[str]`
759
 
760
+ The IDs of the documents to search. Defaults to `None`. You must ensure all selected documents use the same embedding model. Otherwise, an error will occur. If you do not set this argument, ensure that you set `dataset_ids`.
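The mutual requirement between the two arguments can be expressed as a small guard (a hypothetical helper, not part of the SDK):

```python
def check_retrieve_args(dataset_ids=None, document_ids=None):
    # Retrieval needs at least one of dataset_ids or document_ids;
    # raise early rather than sending an invalid request.
    if not dataset_ids and not document_ids:
        raise ValueError("Set dataset_ids, document_ids, or both.")
```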
761
 
762
  #### offset: `int`
763
 
 
932
 
933
  A dictionary representing the attributes to update, with the following keys:
934
 
935
+ - `"name"`: `str` The revised name of the chat assistant.
936
 - `"avatar"`: `str` Base64 encoding of the avatar. Defaults to `""`.
937
  - `"dataset_ids"`: `list[str]` The datasets to update.
938
  - `"llm"`: `dict` The LLM settings:
 
1117
  Session.update(update_message: dict)
1118
  ```
1119
 
1120
+ Updates the current session.
1121
 
1122
  ### Parameters
1123
 
 
1125
 
1126
  A dictionary representing the attributes to update, with only one key:
1127
 
1128
+ - `"name"`: `str` The revised name of the session.
1129
 
1130
  ### Returns
1131
 
 
1247
  Session.ask(question: str, stream: bool = False) -> Optional[Message, iter[Message]]
1248
  ```
1249
 
1250
+ Asks a question to start an AI-powered conversation.
1251
 
1252
  ### Parameters
1253
 
 
1260
  Indicates whether to output responses in a streaming way:
1261
 
1262
  - `True`: Enable streaming.
1263
+ - `False`: Disable streaming (default).
1264
 
1265
  ### Returns
1266
 
 
1324
 for answer in session.ask(question, stream=True):
1325
  print(answer.content[len(cont):], end='', flush=True)
1326
  cont = answer.content
1327
+ ```