{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "1962abb8",
   "metadata": {},
   "source": [
    "## Solr Retriever\n",
    "This notebook demonstrates how to retrieve data from a Solr instance."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f59aabc8",
   "metadata": {},
   "source": [
    "### Importing library"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e42d4b4d",
   "metadata": {},
   "outputs": [],
   "source": [
    "import requests"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d90826f8",
   "metadata": {},
   "source": [
    "<span style=\"color:blueviolet\">Step 1. Example Search query for search documnet in solr</span>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "id": "2a5c78ee",
   "metadata": {},
   "outputs": [],
   "source": [
    "query = \"what is watson assistant\""
   ]
  },
  {
   "cell_type": "markdown",
   "id": "af42787c",
   "metadata": {},
   "source": [
    "## Search documents\n",
    "<span style=\"color:blueviolet\">Step 2. Please provide here Solr instance for query</span>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "id": "08c15148",
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_answer_solr(query):\n",
    "    response = requests.get(f'http://150.239.171.68:8983/solr/redbooks/select?q='+query+'&q.op=AND&defType=dismax&wt=json')\n",
    "    json_data = response.json()\n",
    "    print(json_data['response']['numFound'], \"documents found.\")\n",
    "    for document in json_data['response']['docs']:\n",
    "        print(\"Name =\", document['content'])\n",
    "   "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "id": "94bd71d0",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "260 documents found.\n",
      "Name = [' \\n \\n stream_size 14179  \\n X-Parsed-By org.apache.tika.parser.DefaultParser  \\n X-Parsed-By org.apache.tika.parser.html.HtmlParser  \\n stream_content_type application/html  \\n Content-Encoding UTF-8  \\n resourceName /root/ibm_cloud_docs_process2/watson-assistant/understand-questions.html  \\n Content-Type text/html; charset=UTF-8  \\n  \\n \\n   \\n\\n copyright:\\n  years: 2021, 2023\\nlastupdated: \"2023-04-25\" \\n\\n subcollection: watson-assistant \\n\\n  \\n\\n {{site.data.keyword.attribute-definition-list}} \\n\\n Understanding your users\\' questions or requests \\n\\n {: #understand-questions} \\n\\n Actions represent the tasks or questions that your assistant can help customers with. Each action has a beginning and an end, making up a conversation between the assistant and a customer. Learn how to begin an action, where it understands and recognizes a goal based on the words a customer uses to ask a question or make a request. \\n\\n Beginning an action \\n\\n {: #understand-questions-start} \\n\\n Each assistant can include as many actions as you need to have conversations with your users. You design each individual action to recognize a specific question or request, and when it does, the action starts. \\n\\n When you create a new action, your first task is to enter one phrase that a customer types or says to start the conversation about a specific topic. This phrase determines the problem that your customer has or the question your user asks. \\n\\n To get going, you need to enter only one phrase, for example:  What are your store hours? . \\n\\n After you enter the phrase, it is stored in  Customer starts with , at the start of the action. \\n\\n  Customer starts with images/customer-starts-with.png   \\n\\n Testing your phrase \\n\\n {: #understand-questions-testing} \\n\\n Before even doing anything else with your action, you can already start checking that your assistant recognizes the starting phrase. \\n\\n \\t Click the  Preview  button. \\n\\t Enter your first phrase, for example:  What are your store hours? . \\n\\t \\n If you see  There are no additional steps for this action , that means the action recognizes the phrase. (And it\\'s because you haven\\'t added anything else to your action.) \\n\\n  Preview images/new-action-preview.png   \\n\\n \\n\\t \\n If the assistant doesn\\'t understand the phrase, you\\'ll see the built-in action  No action matches . For more information, see  rect https://test.cloud.ibm.com/docs/watson-assistant?topic=watson-assistant-handle-errors#no-action-matches When the assistant can\\'t understand your customer\\'s request . \\n\\n  No action matches images/new-action-preview-no-match.png   \\n\\n \\n \\n\\n Adding more examples \\n\\n {: #understand-questions-adding-more-examples} \\n\\n When you\\'re creating a new action, one example phrase is enough to start with. You can build the rest of your action with steps before adding more example phrases. Then, return to  Customer starts with  and add 10 or more variations of the same question or request, using words that your customers commonly use. For example: \\n\\n \\t  Are you open on the weekend?  \\n\\t  How late are you open today?  \\n\\t  Get store hours  \\n\\t  What time do you open?  \\n\\t  Are you open now?  \\n \\n\\n Each phrase can be up to 1,024 characters in length. \\n\\n By adding these phrases, your assistant learns what is the right action for what a customer wants. The additional examples build the training data that the machine learning engine of Watson Assistant uses to create a natural language processing model. The model is customized to understand your uniquely defined actions. \\n\\n Uploading phrases \\n\\n {: #understand-questions-uploading-examples} \\n\\n If you have many example phrases, you can upload them from a comma-separated value (CSV) file than to define them one by one. If you are migrating intent information from the classic {{site.data.keyword.conversationshort}} experience to example phrases in the new {{site.data.keyword.conversationshort}} experience, see  rect /docs/watson-assistant?topic=watson-assistant-migrate-intents-entities Migrating intents and entities . \\n\\n \\t \\n Collect the phrases into a CSV file. Save the CSV file with UTF-8 encoding and no byte order mark (BOM). \\n\\n \\t \\n If you are creating a new CSV file to upload phrases, the format for each line in the file is as follows:\\n     text\\n    <phrase> \\n    Where  <phrase>  is the text of a user example phrase. If youâ€™re using a spreadsheet to create a CSV file, put all your phrases into column 1, as shown in the following example: \\n\\n  Example spreadsheet to upload phrases images/uploading-phrases-spreadsheet.png   \\n\\n \\n\\t \\n If you  rect /docs/watson-assistant?topic=watson-assistant-migrate-intents-entities#migrate-intents-download downloaded intents from the classic experience , the format for each line in the file is as follows:\\n     text\\n    <phrase>,<intent> \\n    Where  <phrase>  is the text of a user example phrase, and  <intent>  is the name of the intent. For example:\\n     text\\n    Tell me the current weather conditions.,weather_conditions\\n    Is it raining?,weather_conditions\\n    What\\'s the temperature?,weather_conditions  \\n\\n Only one intent can be uploaded per action, so the  <intent>  information listed in the second column of the CSV file must be the same.\\n{: important} \\n\\n \\n \\n\\n \\n\\t \\n Go to  Customer starts with  at the start of the action. \\n\\n \\n\\t \\n Click the  Upload  icon  Upload icon images/upload-icon.png  . \\n\\n \\n\\t \\n Select a file from your computer. \\n\\n The file is validated and uploaded, and the system trains itself on the new data. \\n\\n \\n \\n\\n Downloading phrases \\n\\n {: #understand-questions-downloading-examples} \\n\\n You can download your example phrases to a CSV file, so you can then upload and reuse them in another {{site.data.keyword.conversationshort}} application. \\n\\n \\t \\n Go to  Customer starts with  at the start of the action. \\n\\n \\n\\t \\n Click the  Download  icon  Download icon images/download-icon.png  . \\n\\n Your example phrases are downloaded to a CSV file. \\n\\n \\n \\n\\n Asking clarifying questions \\n\\n {: #understand-questions-ask-clarifying-question} \\n\\n When your assistant finds that more than one action might fulfill a customer\\'s request, it can automatically ask for clarification. Instead of guessing which action to take, your assistant shows a list of the possible actions to the customer, and asks the customer to pick the right one. \\n\\n  Shows a sample conversation between a user and the assistant, where the assistant asks for clarification from the user. images/disambig-demo.png   \\n\\n Any  Created by you  action that might match the customer\\'s input can be included in the choices that are listed by a clarifying question. The  Set by assistant  actions are never included. \\n\\n In the assistant output, the possible actions are listed by name. The default name for an action is the text of the first example message that you add to it (such as  I want to open an account ), but you can change this name to something more descriptive. \\n\\n The order in which the actions are listed might change. In fact, the actions themselves that are included in the list might change. This behavior is intended. As part of development that is in progress to help the assistant learn automatically from user choices, the actions that are included and their order in the list is randomized on purpose. Randomizing the order helps to prevent bias that can be introduced by a percentage of people who always pick the first option without carefully reviewing all of their choices beforehand. \\n\\n Customizing clarification \\n\\n {: #understand-questions-disambiguation-config} \\n\\n To customize clarification, you can:\\n- Choose a default  action response mode , which modifies the assistant\\'s behavior when asking questions. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-action-response-modes Action response modes .\\n- Change settings like the wording your assistant uses to introduce the clarification list. \\n\\n To change settings, complete the following steps: \\n\\n \\t \\n From the  Actions  page of the assistant, click  Global settings   Gear icon ../../icons/settings.svg  . \\n\\n \\n\\t \\n On the  Action response modes  tab, you can make the following changes in the  Ask clarifying question  section: \\n\\n \\n \\n\\n | Field | Default text | Description |\\n   |---|---|---|\\n   |  Assistant says  |  Did you mean:  | The text that is displayed before the list of clarification choices. You can change it to something else, such as  What do you want to do?  or  Pick what to do next . |\\n   |  Connection to support  |  Connect to support  | The assistant can include a choice to connect to other support in the list of clarifying questions. If the customer picks this choice, the assistant uses your  Fallback  action. You can change it to something else, such as  Talk to a live agent  or  Search for the answer . |\\n   |  No action matches  |  None of the above  | The choice that customers can click when none of the other choices are right. If the customer picks this choice, the assistant uses your  No action matches  action. You can change it to something else, such as  I need something else  or  These aren\\'t what I want . Or, you can remove the text to omit offering this choice.\\n   |  One action matches  |  Something else  | If an assistant prioritizes one action that it thinks matches the customer need, it can clarify the match by asking the customer to confirm. This choice accompanies the single action in case the customer needs something else. You can change it to something else, such as  I need something else  or  This isn\\'t what I want . |\\n   {: caption=\"Ask clarifying question settings\" caption-side=\"top\"} \\n\\n \\t \\n Click  Save , and then click  Close . \\n\\n \\n\\t \\n Publish a new version of your assistant to the live environment to apply the customizations. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-publish Publishing your content . \\n\\n \\n \\n\\n Disabling clarifying questions \\n\\n {: #understand-questions-disambiguation-disable} \\n\\n You can disable clarifying questions for all actions. \\n\\n To disable clarification for all actions: \\n\\n \\t From the  Actions  page of the assistant, click  Global settings   Gear icon ../../icons/settings.svg  . \\n\\t On the  Action response modes  tab, in the  Customize modes  section, ensure that the Response modes switch is set to  Off . \\n\\t In the  Ask clarifying question  section, set the  Enable disambiguation  switch to  Off . \\n\\t Click  Save , and then click  Close . \\n\\t Publish a new version of your assistant to the live environment to disable clarification. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-publish Publishing your content . \\n \\n\\n Excluding an action from clarifying questions \\n\\n {: #understand-questions-exclude-from-clarifying} \\n\\n You can also prevent a single action from being used in a clarifying question. The effect of this choice depends on the confidence score for the action that you exclude. \\n\\n If the action has the highest confidence score for a customer\\'s question, no clarifying question is asked, and the action is triggered. \\n\\n If the action doesn\\'t have the highest confidence score, the action is excluded from the list of choices in the clarifying question. \\n\\n For more information about confidence scores, see  rect #understand-questions-confidence-scoring Confidence scoring . \\n\\n To exclude an action from clarification: \\n\\n \\t \\n From the action editor, click the  Action settings  icon  Gear icon ../../icons/settings.svg  . \\n\\n \\n\\t \\n In Action Settings, toggle the  Ask clarifying question  switch to  Off . \\n\\n \\n \\n\\n Coordinating how multiple actions start \\n\\n {: #understand-questions-multiple-actions} \\n\\n As you work on your assistant, it\\'s a good idea to coordinate customer phrase examples across multiple actions. It\\'s important to distinguish how each action is triggered. When a user enters a question or request, the phrase is evaluated across all the  Customer starts with  examples in every action. If two actions have similar phrase examples, then the wrong action might get triggered by your user\\'s question. \\n\\n Confidence scoring \\n\\n {: #understand-questions-confidence-scoring} \\n\\n Behind the scenes, Watson Assistant determines a confidence score for each phrase. The score is absolute, meaning that a confidence score is assigned based on a predetermined scale, and not relative to other customer phrases. This approach adds flexibility in case multiple questions or requests are detected in a single user input. It also means that the system might not trigger an action at all, if a phrase has a low confidence score. As confidence scores change, your action examples might need restructuring. \\n\\n To learn more about review and testing confidence scores, see  rect /docs/watson-assistant?topic=watson-assistant-review#review-debug-confidence Action confidence score  in  rect /docs/watson-assistant?topic=watson-assistant-review Reviewing and debugging your work . \\n  ']\n",
      "Name = [' \\n \\n stream_size 233529  \\n X-Parsed-By org.apache.tika.parser.DefaultParser  \\n X-Parsed-By org.apache.tika.parser.html.HtmlParser  \\n stream_content_type application/html  \\n Content-Encoding UTF-8  \\n resourceName /root/ibm_cloud_docs_process2/watson-assistant/release-notes.html  \\n Content-Type text/html; charset=UTF-8  \\n  \\n \\n   \\n\\n copyright:\\n  years: 2015, 2023\\nlastupdated: \"2023-04-25\" \\n\\n keywords: Watson Assistant release notes \\n\\n subcollection: watson-assistant \\n\\n content-type: release-note \\n\\n  \\n\\n {{site.data.keyword.attribute-definition-list}} \\n\\n Release notes for Watson Assistant \\n\\n {: #watson-assistant-release-notes} \\n\\n Find out what\\'s new in {{site.data.keyword.conversationfull}}.\\n{: shortdesc} \\n\\n This topic describes the new features, changes, and bug fixes in each release of the product. For more information about changes in the web chat integration, see the  rect /docs/watson-assistant?topic=watson-assistant-release-notes-chat Web chat release notes . \\n\\n 24 April 2023 \\n\\n {: #watson-assistant-apr242023}\\n{: release-note}\\nAction response modes randomization behavior \\n:   The action response modes beta now uses the same randomization behavior during clarification that your actions have without response modes enabled. Previous to this change, when action response modes were enabled, the clarification feature no longer periodically modified the options for clarification. Randomizing the clarification helps prevent bias that can be introduced by a percentage of people who always pick the first option without carefully reviewing all of their choices beforehand. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-action-response-modes Action response modes  or  rect /docs/watson-assistant?topic=watson-assistant-understand-questions#understand-questions-ask-clarifying-question Asking clarifying questions . \\n\\n 21 April 2023 \\n\\n {: #watson-assistant-apr212023}\\n{: release-note}\\nCollections\\n:   You can use a  collection  to organize your actions. You can put actions into folder-style groups based on whatever categorization you need at your organization, such as by use case, internal team, or status. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-collections Organizing actions in collections . \\n\\n 18 April 2023 \\n\\n {: #watson-assistant-apr182023}\\n{: release-note}\\nActivity log\\n:   The  Activity log  is a beta feature that is available for evaluation and testing. Use the activity log to track changes. It gives you visibility into the modifications that are made to your assistant. It is available for Plus plans and higher. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-activity-log Activity log . \\n\\n 16 April 2023 \\n\\n {: #watson-assistant-apr162023}\\n{: release-note}\\nAllow changing topics in free text and regex responses\\n:   By default, customers can\\'t change topics when the assistant is asking for a free text response or when an utterance matches the pattern in a regex response. Now free text and regex customer response types have a setting to allow a user to digress and change topics. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-change-topic#change-topic-free-text-regex Enabling changing the topic for free text and regex customer responses . \\n\\n 7 April 2023 \\n\\n {: #watson-assistant-apr072023}\\n{: release-note}\\nNever return choice when customer changes topics\\n:   If a customer changes a topic during a conversation, there might be some situations when you might not want them to return to the previous action. If you need to do this, a new  Never return  choice is available in  Action settings . For more information, see  rect /docs/watson-assistant?topic=watson-assistant-change-topic#change-topic-never-return Disabling returning to the original topic . \\n\\n 22 March 2023 \\n\\n {: #watson-assistant-mar222023}\\n{: release-note} \\n\\n Autolearning\\n:    Autolearning  is a beta feature that is available for evaluation and testing purposes in English-language assistants only. Use autolearning to enable your assistant to learn from interactions with your customers and improve responses. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-autolearn Using autolearning to improve assistant responses . \\n\\n 21 March 2023 \\n\\n {: #watson-assistant-mar212023}\\n{: release-note} \\n\\n Action response modes\\n:    Action response modes  is a beta feature that is available for evaluation and testing purposes. Choose a default  action response mode , which modifies the assistant\\'s behavior when asking clarifying questions. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-action-response-modes Action response modes . \\n\\n 20 March 2023 \\n\\n {: #watson-assistant-mar202023}\\n{: release-note} \\n\\n Auto-save setting removed from Global Settings\\n:  The  Auto-save  setting was removed from Global Settings. It was a user preference that allowed you to disable automatic saving of actions and applied to all assistants in your instance. Changing the setting applied only to you and didn\\'t affect other users. Actions continue to be automatically saved as you work. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-save-actions Saving your actions . \\n\\n 16 March 2023 \\n\\n {: #watson-assistant-mar162023}\\n{: release-note} \\n\\n New algorithm version  Latest (20 Dec 2022)  provides improved irrelevance detection \\n:   A new algorithm version is available. The  Latest (20 Dec 2022)  version includes a new irrelevance detection implementation to improve off-topic detection accuracy. \\n\\n Improvements include:\\n   - Relevant user inputs are expected to get higher confidence, so they are less likely to be considered irrelevant or require clarification\\n   - Irrelevance detection is improved in the presence of direct entity references\\n   - Irrelevance detection is more stable across small changes to input\\n   - Intent detection is more stable regarding occurrence of numerics, such as postal codes\\n   - For German-language assistants, intent detection is more robust in the presence of umlauts  \\n\\n This algorithm was first introduced as the  Beta  version in June 2022. Since then, support for more languages has been added. This algorithm version was stabilized in December 2022 with minor enhancements since that time. \\n\\n With this new release, the June 1, 2022 version is now labeled as  Previous (01 Jun 2022) . The oldest release labeled as  01 Jan 2022  is no longer available for training. As of now, the new  Beta  version has the same behavior as the  Latest (20 Dec 2022)  version. Updates to the  Beta  version will be released soon. \\n\\n For more information, see  rect /docs/watson-assistant?topic=watson-assistant-algorithm-version Algorithm version and training . \\n\\n 10 March 2023 \\n\\n {: #watson-assistant-mar102023}\\n{: release-note} \\n\\n Dialog session variables now available in Preview\\n:   If you are using dialog in the new experience, you can now see session variables for dialog when debugging in Preview. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-review#review-variable-values Variable values in Preview . \\n\\n 6 March 2023 \\n\\n {: #watson-assistant-mar062023}\\n{: release-note} \\n\\n Improvements to algorithm version beta\\n:   Improvements to the current  Beta  algorithm version include:\\n   - Relevant examples are expected to get higher confidence\\n   - For Spanish-language assistants, intent detection is improved in the presence of direct entity references\\n   - Intent detection is more stable regarding occurrence of numerics, such as postal codes\\n   - Intent detection now accounts for fuzzy closed entity mentions\\n   - For German-language assistants, intent detection is more robust in the presence of umlauts  \\n\\n For more information, see  rect /docs/watson-assistant?topic=watson-assistant-algorithm-version Algorithm version and training . \\n\\n 3 March 2023 \\n\\n {: #watson-assistant-mar032023}\\n{: release-note} \\n\\n Adding and using multiple environments\\n:   Each assistant has a draft and live environment. For Enterprise plans, you can now add up to three environments as a staging area to test your assistant before deployment. You can build content in the draft environment and test versions of your content in the extra environments. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-multiple-environments Adding and using multiple environments . \\n\\n Confirmation to return to previous action\\n:   If a customer digresses and changes to a new topic, assistants now ask a \"yes or no\" confirmation question that the customers want to return to the previous action. Previously, the assistant returned to the previous action without asking. New assistants are set to use this confirmation by default. For more information, see  rect https://test.cloud.ibm.com/docs/watson-assistant?topic=watson-assistant-change-topic#change-topic-confirmation Confirmation to return to previous topic . \\n\\n 1 March 2023 \\n\\n {: #watson-assistant-mar012023}\\n{: release-note} \\n\\n Unrecognized requests group names\\n:   This change improves the group names for unrecognized requests. For groups with examples phrased in the form of question, the group name can be more indicative of a question rather than a request. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-analytics-recognition Use unrecognized requests to get action recommendations . \\n\\n 23 February 2023 \\n\\n {: #watson-assistant-feb232023}\\n{: release-note} \\n\\n Private variables excluded from logs\\n:   Private context variables are no longer saved in logs or sent to external services using log webhooks. Private variables are any values stored inside the following objects: \\n\\n  - `context.integrations.*.private` (accessible from actions as `system_integrations.*.private`)\\n- `context.integrations.*.$private`\\n- `context.skills.*.user_defined.private`\\n- `context.skills.*.user_defined.$private`\\n- `context.private`\\n- `context.$private`\\n  \\n\\n 16 February 2023 \\n\\n {: #watson-assistant-feb162023}\\n{: release-note} \\n\\n Improvements to setting variable values\\n:   When you use  Set variable values  on an action step: \\n\\n \\t \\n The available choices now match by type. For example, if you want to set a date variable, the choices are limited to other date variables. Previously, all variables of all types were listed as choices.  \\n\\n \\n\\t \\n You can set a scalar value for each variable type. For example, you can set a specific date for a date variable or set a specific number for a number variable. \\n\\n \\n \\n\\n For more information, see  rect https://cloud.ibm.com/docs/watson-assistant?topic=watson-assistant-manage-info#store-session-variable Storing a value in a session variable . \\n\\n Confirmation and free text response types setting default \\n:   The  Confirmation  and  Free text  response types are now set to  Always ask for this information  by default. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-collect-info#collect-info-skip-step Skipping steps, always asking steps, or never asking steps . \\n\\n 13 February 2023 \\n\\n {: #watson-assistant-feb132023}\\n{: release-note} \\n\\n Response variations\\n:   In actions, you can add  response variations  so that your assistant can respond to the same request in different ways. You can choose to rotate through the response variations sequentially or in random order. For more information, see  rect /docs/watson-assistant/watson-assistant?topic=watson-assistant-respond#respond-variations Adding variations . \\n\\n Microsoft Teams integration\\n:   A  rect /docs/watson-assistant?topic=watson-assistant-deploy-microsoft-teams Microsoft Teams integration  is now available to connect your assistant with the people, content, and tools that your business or community needs to chat, call, and collaborate.  \\n\\n 3 February 2023 \\n\\n {: #watson-assistant-feb032023}\\n{: release-note} \\n\\n Action conditions (beta)\\n:   An action condition is a boolean test, based on some runtime value; the action executes only if the test evaluates as true. This test can be applied to any variable. By defining action conditions, you can do things such as control user access to actions or create date-specific actions. This is a beta feature that is available for evaluation and testing purposes. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-action-conditions Adding conditions to an action . \\n\\n 2 February 2023 \\n\\n {: #watson-assistant-feb022023}\\n{: release-note} \\n\\n Detect trigger words (beta)\\n:   Use the  Trigger word detected  action to add words or phrases to two separate groups. The first group connects customers with an agent, when itâ€™s important for a customer to speak with a live agent rather than activate any further actions. The second group shows customers a customizable warning message, used to discourage customers from interacting with your assistant in unacceptable ways, such as using profanity. This action is included with all new assistants created as of this date. This is a beta feature that is available for evaluation and testing purposes. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-trigger-phrases Detecting trigger words . \\n\\n Changes to unrecognized requests algorithm\\n:   In  Analyze , the  Recognition  page lets you view groups of similar unrecognized requests. You can use the requests as example phrases in new or existing actions to address questions and issues that aren\\'t being answered by your assistant. With this release, the criteria for grouping the requests is relaxed for customers with lesser amounts of data. Also, the group names have been improved with better grammar and to be more representative of the requests. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-analytics-recognition Use unrecognized requests to get action recommendations . \\n\\n 1 February 2023 \\n\\n {: #watson-assistant-feb012023}\\n{: release-note} \\n\\n Actions templates updated with new design and new choices\\n:   The actions template catalog has a new design that lets you select multiple templates at the same time. It also has new and updated templates, including starter kits you can use with external services such as Google and HubSpot. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-actions-templates Building actions from a template . \\n\\n 26 January 2023 \\n\\n {: #watson-assistant-jan262023}\\n{: release-note} \\n\\n Display formats for variables\\n:   In  Global settings  for actions,  Display formats  lets you specify the display formats for variables that use date, time, numbers, currency, or percentages. You can also choose a default locale to use if one isn\\'t provided by the client application. This lets you make sure that the format of a variable that\\'s displayed in the web chat is what you want for your assistant. For example, you can choose to have the output of a time variable appear in HH:MM format instead of HH:MM:SS. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-actions-global-settings#actions-global-settings-display-formats Display formats  \\n\\n 18 January 2023 \\n\\n {: #watson-assistant-jan182023}\\n{: release-note} \\n\\n Algorithm version stability improvement\\n:   As of this date, the  Latest (01 Jun 2022)  and  Beta  algorithm versions now have more stable behavior across retrained models, in the presence of overlapping entities (the same entity value belonging to more than one entity type). Previously, when there were overlapping entities definitions, confidences could differ across different retraining. With this improvement, you can expect to see similar confidences. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-algorithm-version Algorithm version and training . \\n\\n 12 January 2023 \\n\\n {: #watson-assistant-jan122023}\\n{: release-note} \\n\\n Autocorrection setting for actions\\n:   The  Global settings  for actions now include an  Autocorrection  setting.  Autocorrection  fixes misspellings that users make in their requests. The corrected words are used to match to an action.  \\n\\n While the setting is new, the autocorrection feature was already used automatically by all English-language assistants. Autocorrection is also available in French-language assistants, but is disabled by default. The autocorrection setting isn\\'t available for any other languages. The new setting lets you disable or enable autocorrection if necessary. \\n\\n For more information, see  rect /docs/watson-assistant?topic=watson-assistant-autocorrection Autocorrecting user input . \\n\\n Improved experience when setting a variable value\\n:   The dropdown list for setting a variable value within an action step has a new organization. The new list is intended to provide an improved experience. \\n\\n 11 January 2023 \\n\\n {: #watson-assistant-jan112023}\\n{: release-note} \\n\\n Algorithm version 01-Jun-2022 uses enhanced intent detection by default\\n:   As of this date, the algorithm version  Latest (01-Jun-2022)  now uses enhanced intent detection by default. Before this change, some skills that did not include a specific algorithm version selection inadvertently used  Previous (01-Jan-2022) . You can notice small changes in intent detection behavior when changes are made to an assistant that previously didn\\'t have enhanced intent detection enabled. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-algorithm-version Algorithm version and training . \\n\\n 6 December 2022 \\n\\n {: #watson-assistant-dec062022}\\n{: release-note} \\n\\n Updated expression methods\\n:   The following new and updated methods are available in expressions:\\n   - The  rect /docs/watson-assistant?topic=watson-assistant-expression-methods-actions#expression-methods-actions-arrays-join-to-array  Array.joinToArray()   method now supports a new boolean parameter you can use to specify that the data type of values from the input array should be preserved in the returned array.\\n   - The new  rect /docs/watson-assistant?topic=watson-assistant-expression-methods-actions#expression-methods-actions-strings-toJson  String.toJson()   method parses a string containing JSON data and returns a JSON object or array. This method is supported in both actions and dialog. \\n\\n 5 December 2022 \\n\\n {: #watson-assistant-dec052022}\\n{: release-note} \\n\\n Live integrations deleted in assistants created before June 24, 2022\\n:   For assistants created before June 24, 2022, using the new {{site.data.keyword.conversationshort}} user experience, the live integrations for these assistants were mistakenly deleted during a software upgrade. These integrations should now be restored. If you are still experiencing issues, please contact IBM support. \\n\\n Unsupported HTML removed from text responses in channel integrations\\n:   HTML tags (except for links) are now automatically removed from text responses that are sent to the Facebook, WhatsApp, and Slack integrations, because those channels do not support HTML formatting. HTML tags are still handled appropriately in channels that support them (such as the web chat) and stored in the session history. \\n\\n 2 December 2022 \\n\\n {: #watson-assistant-dec022022}\\n{: release-note} \\n\\n Pause response type\\n:   Use a  Pause  response to have your assistant wait for a specified interval before displaying the next response. This pause might be to allow time for a request to complete, or simply to mimic the appearance of a live agent who might pause between responses. The pause can be of any duration from 1 to 10 seconds. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-respond#respond-pause-response Pause response . \\n\\n 17 November 2022 \\n\\n {: #watson-assistant-nov172022}\\n{: release-note} \\n\\n Use unrecognized requests to get action recommendations\\n:   In  Analyze , the new  Recognition  page lets you view groups of similar unrecognized requests. You can use the requests as example phrases in new or existing actions to address questions and issues that aren\\'t being answered by your assistant. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-analytics-recognition Use unrecognized requests to get action recommendations . \\n\\n 15 November 2022 \\n\\n {: #watson-assistant-nov152022}\\n{: release-note} \\n\\n Journeys\\n:   Beginning with web chat version 6.9.0, you can now create  journeys  to guide your customers through tasks they can already complete on your website. A journey is an interactive, multipart response that can combine text, video, and images, presented in sequence in a small window superimposed over your website. \\n\\n  Journeys are available as a beta feature. For more information, see [Guiding customers with journeys](/docs/watson-assistant?topic=watson-assistant-journeys).\\n  \\n\\n 10 November 2022 \\n\\n {: #watson-assistant-nov102022}\\n{: release-note} \\n\\n Dynamic options\\n:   Within the options customer response, you can use the  dynamic  setting to generate the list when you need to ask questions that are potentially different each time and for each customer. You need to set up a list variable as the source of the options. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-dynamic-options Dynamic options . \\n\\n Extension inspector\\n:   You can use the new extension inspector in the action editor  Preview  pane to debug problems with custom extensions. The extension inspector shows detailed information about what data is being sent to and returned from an external API. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-call-extension#extension-debug Debugging failures . \\n\\n 3 November 2022 \\n\\n {: #watson-assistant-nov032022}\\n{: release-note} \\n\\n Never ask a step\\n:   There may be some situations where you need a step to never ask a question because you anticipate there might be redundant questions in the conversation. A new setting,  Never ask , is now available for any step that expects a customer response. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-collect-info#collect-info-skip-step Skipping steps, always asking steps, or never asking steps . \\n\\n Action notes\\n:   You can now add free-form notes to each action. Within each action, you can use  Action notes  to add a description, documentation, comments, or any other annotations to help you keep track of your work as you build an action. For more information, see  rect https://cloud.ibm.com/docs/watson-assistant?topic=watson-assistant-build-actions-overview#build-actions-overview-use Using the action editor . \\n\\n Variable values in Preview\\n:  Viewing action variables in Preview has been improved. Now you can see the history of all action variables, rather than one action at a time. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-review#review-variable-values Variable values . \\n\\n 21 October 2022 \\n\\n {: #watson-assistant-oct212022}\\n{: release-note} \\n\\n Algorithm version updates\\n:   The algorithm version setting for both actions and dialog now includes three choices: beta, latest, and previous. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-algorithm-version Algorithm version and training . \\n\\n 12 October 2022 \\n\\n {: #watson-assistant-oct122022}\\n{: release-note} \\n\\n  now(String timezone)  method output includes time zone offset\\n:   The string returned from the  now(String timezone)  method now includes the time zone offset (such as  -05:00 ). The new format is  yyyy-MM-dd HH:mm:ss \\'GMT\\'XXX  (where  XXX  represents the time zone offset). This change enables accurate time zone computations when used with other date and time methods such as  before ,  after , and  reformatDateTime . \\n\\n  If you have an existing action or dialog that depends on the previous format, you can adapt it by reformatting the output using `now(timezone).reformatDateTime(\\'yyyy-MM-dd HH:mm:ss\\')`.\\n\\nFor more information, see [Expression language methods for actions](https://cloud.ibm.com/docs/watson-assistant?topic=watson-assistant-expression-methods-actions#expression-methods-actions-now).\\n  \\n\\n 23 September 2022 \\n\\n {: #watson-assistant-sep232022}\\n{: release-note} \\n\\n Upload an image as a preview background\\n:   On the  Preview  page, you can now upload an image of your organization\\'s website as a background. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-preview-share Previewing and sharing your assistant . \\n\\n 16 September 2022 \\n\\n {: #watson-assistant-sep162022}\\n{: release-note} \\n\\n Session ID information on Analyze page\\n:   Session ID information for conversations is now displayed on the Conversations tab of the Analyze page. You can also filter customer conversation data by the session ID. From the Conversations tab of the Analyze page, use the Keyword filter to search by session ID. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-analytics-conversations#analytics-conversations-filtering Filtering conversations . \\n\\n The ability to filter on session ID has limited support for conversations that occurred before this feature release. For all conversations that occurred before 16 September 2022, you can filter only by a single session ID at a time. \\n\\n 9 September 2022 \\n\\n {: #watson-assistant-sep092022}\\n{: release-note} \\n\\n New operators available for building conditions\\n:   Several new operators are available for building conditions in your actions. The options response type now has the  is any of  and  is none of  operators available. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-step-conditions#operators Operators . \\n\\n Copy actions to other assistants\\n:   You can copy an action from one assistant to another. When you copy an action, references to other actions, variables, and saved responses are also copied. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-copy-action Copying an action to another assistant . \\n\\n Filter variables and saved responses by name\\n:   You can now find variables and saved responses more easily. On the Actions page, you can filter variables you created or saved responses you added. Click the search icon, then enter a search string. Your list of variable or saved responses filters to match what you enter. \\n\\n 1 September 2022 \\n\\n {: #watson-assistant-sep012022}\\n{: release-note} \\n\\n Conditioning on days of the week\\n:   You can now condition a step on days of the week. This feature is available with the  date  response type and the  Current date  built-in variable. \\n\\n  For example, you might [define a customer response](/docs/watson-assistant?topic=watson-assistant-collect-info#choose-type) in step 1 with the date response type. When the customer responds to that step, they choose a date. You can then condition a later step on whether the date that the customer chose is Wednesday.\\n  \\n\\n New operators available for building conditions\\n:   Several new operators are available for building conditions in your actions. The free text response type now has the  contains ,  does not contain ,  matches , and  does not match  operators available. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-step-conditions#operators Operators . \\n\\n Extensions support for arrays\\n:   Custom extensions now support passing arrays as parameters and accessing arrays in response variables. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-call-extension Calling a custom extension . \\n\\n 26 August 2022 \\n\\n {: #watson-assistant-aug262022}\\n{: release-note} \\n\\n New filter on the Analyze page\\n:   You can now filter customer conversation data by the  Greet customer  system action. From the Conversations tab of the Analyze page, open the Actions filter and select  Greet customer . For more information, see  rect /docs/watson-assistant?topic=watson-assistant-analytics-conversations#analytics-conversations-filtering Filtering conversations . \\n\\n Filter actions by name\\n:   You can now find actions more easily. On the Actions page, you can filter actions by name. Click the search icon, then enter a search string. Your list of actions filters to match what you enter. \\n\\n 12 August 2022 \\n\\n {: #watson-assistant-aug122022}\\n{: release-note} \\n\\n Actions templates\\n:   When creating actions, you can choose a template that relates to the problem youâ€™re trying to solve. Templates help tailor your actions to include items specific to your business need. The examples in each template can also help you to learn how actions work. Actions templates include features such as intents, entities, condition-based responses, synonyms, response validations, and agent fallback. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-actions-templates Building actions from a template . \\n\\n Channel name variable\\n:   The  Channel name  integration variable lets you add step conditions using these channels: web chat, phone, SMS, WhatsApp, Slack, or Facebook Messenger. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-step-conditions Adding conditions to a step . \\n\\n 11 August 2022 \\n\\n {: #watson-assistant-aug112022}\\n{: release-note} \\n\\n Algorithm version options available in more languages\\n:   Algorithm version options are now available in Arabic, Czech, and Dutch. This allows you to choose which {{site.data.keyword.conversationshort}} algorithm to apply to your future trainings. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-algorithm-version Algorithm version and training . \\n\\n 9 August 2022 \\n\\n {: #watson-assistant-aug092022}\\n{: release-note} \\n\\n New API methods\\n:   The v2 API now supports new  Environments  and  Releases  methods: \\n\\n  - **Environments**: Retrieve information about the environments associated with an assistant.\\n\\n- **Releases**: Retrieve information about the releases (versions) that have been published for an assistant, and assign an available release to an environment.\\n  \\n\\n For more information, see the v2  rect https://cloud.ibm.com/apidocs/assistant/assistant-v2 API Reference {: external}. \\n\\n 5 August 2022 \\n\\n {: #watson-assistant-aug052022}\\n{: release-note} \\n\\n Initial value of session variables\\n:   You can now set the initial value of a session variable to an expression. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-manage-info#create-session-variable Creating a session variable . \\n\\n Uploading intents\\n:   If you created intents in the classic {{site.data.keyword.conversationshort}} experience, you can migrate your intents to actions in the new {{site.data.keyword.conversationshort}} experience. This can provide a helpful starting point when you are ready to start building actions in the new experience. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-upload-download-actions#upload-download-actions-upload-intents Uploading intents as actions . \\n\\n 19 July 2022 \\n\\n {: #watson-assistant-jul192022}\\n{: release-note} \\n\\n Changes to publishing and environments\\n:   You can now publish versions of your content without assigning to the live environment, allowing you to make continuous updates before customers see it in production. Also, the formerly separate pages for your draft and live environments now appear as tabs on a single  Environments  page, from which you can set up unique configurations for building and testing in the draft environment, and for your customers in the live environment. For more information, see the  rect /docs/watson-assistant?topic=watson-assistant-publish-overview Publishing overview . \\n\\n Logs reader role\\n:   Identity and Access Management now includes a new service role,  Logs Reader , which lets you grant access to Analytics without assigning the Manager role. Use Logs Reader in combination with the Reader or Writer role to provide access to the Analytics page. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-access-control Managing access . \\n\\n 15 July 2022 \\n\\n {: #watson-assistant-jul152022}\\n{: release-note} \\n\\n Segment extension\\n:   The Segment extension is now available for Enterprise plans. With this extension, you can use  rect https://segment.com/ Segment {: external} to capture and centralize data about your customers\\' behavior, including their interactions with your assistant. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-segment-add Sending events to Segment . \\n\\n New expression property\\n:   You can now use the  .literal  property to return the exact response that a customer specifies. This property is helpful if a customer uses a synonym of an option, and you want your assistant to respond with the exact phrase they specified. To set this property, click  Set variable values  and assign a session variable to the step variable. Add the  .literal  property to the step variable. Use the session variable in the assistant\\'s response to display the customer\\'s input. \\n\\n  For example, suppose you have an option called `plant` that has `fern` as a synonym. A customer might say `buy a fern`. In this case, you can use the `.literal` property so the assistant\\'s response uses the customer\\'s input. Your assistant might respond, `Great! I see you want to buy a fern.`\\n  \\n\\n 11 July 2022 \\n\\n {: #watson-assistant-jul112022}\\n{: release-note} \\n\\n Ability to duplicate an action\\n:   You can duplicate an action to reuse information in a new action. When you duplicate an action, the new action includes everything except example phrases. Click the overflow menu on the action you want and select  Duplicate . \\n\\n New demo site\\n:   Explore our  rect https://www.ibm.com/products/watson-assistant/demos/lendyr/demo.html interactive demo site {: external} to learn how {{site.data.keyword.conversationshort}} can be used to build powerful, scalable experiences for your users. \\n\\n 24 June 2022 \\n\\n {: #watson-assistant-jun242022}\\n{: release-note} \\n\\n Algorithm version options available in more languages\\n:   Algorithm version options are now available in Chinese (Traditional), Japanese, and Korean. This allows you to choose which {{site.data.keyword.conversationshort}} algorithm to apply to your future trainings. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-algorithm-version Algorithm version and training . \\n\\n 16 June 2022 \\n\\n {: #watson-assistant-jun162022}\\n{: release-note} \\n\\n Algorithm version\\n:   Algorithm version allows you to choose which {{site.data.keyword.conversationshort}} algorithm to apply to your future trainings. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-algorithm-version Algorithm version and training . \\n\\n Algorithm beta version (2022-06-10)\\n:   Algorithm beta version (2022-06-10) includes a new irrelevance detection algorithm to improve off-topic detection accuracy. Utterances with similar meanings are expected to have more similar confidences in comparison to previous irrelevance detection algorithms. For example, the training utterance  please suggest route from times square  has 100% confidence at runtime. Currently in IBM Cloud, the utterance  please suggest route from central park  gets a low confidence and could be flagged as irrelevant. With beta version (2022-06-10), the same utterance is expected to be predicted correctly with a ~46% confidence. \\n\\n 3 June 2022 \\n\\n {: #watson-assistant-jun032022}\\n{: release-note} \\n\\n Extensions support for importing API document with unsupported methods\\n:   When building a custom extension, you can now import an API document even if it contains operations with required array parameters, which are not supported. The unsupported operations are automatically disabled, but this does not affect other operations. (Previously, the entire API document was rejected if it contained unsupported operations.) For more information, see  rect /docs/watson-assistant?topic=watson-assistant-build-custom-extension Building a custom extension . \\n\\n 27 May 2022 \\n\\n {: #watson-assistant-may272022}\\n{: release-note} \\n\\n Support for custom extensions and dialog in Actions preview panel\\n:   You can now view your entire assistant from the  Actions preview  panel, including custom extensions and dialog. This allows you to have a complete view of how an action is working. For more information about previewing actions, see  rect /docs/watson-assistant?topic=watson-assistant-review Reviewing and debugging your actions . \\n\\n 19 May 2022 \\n\\n {: #watson-assistant-may192022}\\n{: release-note} \\n\\n Sign out due to inactivity setting\\n:   {{site.data.keyword.conversationshort}} now uses the  Sign out due to inactivity setting  from Identity & Access Management (IAM). {{site.data.keyword.cloud_notm}} account owners can select the time it takes before an inactive user is signed out and their credentials are required again. The default is 2 hours. \\n\\n An inactive user will see two messages. The first message alerts them about an upcoming session expiration and provides a choice to renew. If they remain inactive, a second session expiration message appears and they will need to log in again. \\n\\n For more information, see  rect /docs/account?topic=account-iam-work-sessions&interface=ui#sessions-inactivity Setting the sign out due to inactivity duration {: external}. \\n\\n 12 May 2022 \\n\\n {: #watson-assistant-may122022}\\n{: release-note} \\n\\n Ability to upload and download example phrases and upload saved customer responses\\n:   You can now upload and download example phrases from  Customer starts with  at the start of an action. This can be useful if you have a large number of example phrases and don\\'t want to define them one by one. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-understand-questions#understand-questions-adding-more-examples Adding more examples . \\n\\n You can also now upload saved customer responses from the  Saved responses  page. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-collect-info#uploading-saved-customer-response Uploading saved customer responses . \\n\\n The ability to upload example phrases and saved customer responses is also helpful if you\\'re using the classic {{site.data.keyword.conversationshort}} and want to migrate your intents and entities to the new {{site.data.keyword.conversationshort}}. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-migrate-intents-entities Migrating intents and entities . \\n\\n 5 May 2022 \\n\\n {: #watson-assistant-may052022}\\n{: release-note} \\n\\n Success/failure variable for extensions\\n:   Each call to a custom extension now returns a  Ran successfully  response variable, which you can use to check the success or failure of the call. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-call-extension#extension-check-success Checking success or failure . \\n\\n 28 April 2022 \\n\\n {: #watson-assistant-apr282022}\\n{: release-note} \\n\\n Definitive calculation for abandoned actions\\n:   Abandonment is now definitively calculated for your actions. On the  Analytics  page, actions are no longer considered  Ongoing  in the action completion analysis. An action is considered abandoned if it was not completed after 1 hour of inactivity and doesn\\'t meet the criteria for any other incompletion reason (escalated to agent, started a new action, or stuck on a step). This change applies only to actions data after April 26, 2022. For more information about action incompletion, see  rect /docs/watson-assistant?topic=watson-assistant-analytics-action-completion#incomplete-reasons Reasons for incompletion . \\n\\n Managing operations in extensions\\n:   When you add a custom extension to an assistant, you can now choose which operations and response properties will be available to actions. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-add-custom-extension Adding an extension to your assistant . \\n\\n 21 April 2022 \\n\\n {: #watson-assistant-apr212022}\\n{: release-note} \\n\\n Ability to duplicate a step\\n:   You can now duplicate a step so you don\\'t have to re-create variable settings and customizations. Duplicating a step is helpful when you need to add a step similar to a previous step, but with minor modifications. For more information about how to duplicate a step, see  rect /docs/watson-assistant?topic=watson-assistant-build-actions-overview#build-actions-overview-duplicate-step Duplicating a step . \\n\\n Markdown supported in action editor\\n:   The action editor now supports basic Markdown syntax. As you type, the action editor renders the Markdown so you can see the content as your customers will when they interact with the assistant. \\n\\n 5 April 2022 \\n\\n {: #watson-assistant-apr052022}\\n{: release-note} \\n\\n Dialog feature available\\n:   The dialog feature is available. If you have a dialog-based assistant that was built using the classic {{site.data.keyword.conversationshort}}, you can now migrate your dialog skill to the new {{site.data.keyword.conversationshort}} experience. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-migrate-overview Migrating to the new experience . \\n\\n 28 March 2022 \\n\\n {: #watson-assistant-mar282022}\\n{: release-note} \\n\\n New service desk support reference implementation\\n:   You can use the reference implementation details to integrate the web chat with the Kustomer service desk. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-deploy-web-chat#deploy-web-chat-haa Adding service desk support . \\n\\n 18 March 2022 \\n\\n {: #watson-assistant-mar182022}\\n{: release-note} \\n\\n Custom extensions\\n:   If you need to integrate your assistant with an external service that has a REST API, you can now build a custom extension by importing an OpenAPI document. Your assistant can then send requests to the external service and receive response data it can use in the conversation. For example, you might use an extension to interact with a ticketing or customer relationship management (CRM) system, or to retrieve real-time data such as mortgage rates or weather conditions. \\n\\n For more information about custom extensions, see  rect /docs/watson-assistant?topic=watson-assistant-build-custom-extension Building a custom extension  and  rect /docs/watson-assistant?topic=watson-assistant-call-extension Calling a custom extension . \\n\\n Confirmation customer response type\\n:   The confirmation customer response type is now available. Use this response type when a customer\\'s response must be either Yes or No. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-collect-info#customer-response-type-confirmation Confirmation . \\n\\n Search integration highlights text in browser\\n:   Search results in {{site.data.keyword.conversationshort}} include a link. Now, when a customer clicks the link, search results are highlighted in their browser so itâ€™s easier for them to see the relevant content. This feature is supported on Chromium browsers, including Google Chrome and Microsoft Edge. \\n\\n 24 February 2022 \\n\\n {: #watson-assistant-feb242022}\\n{: release-note} \\n\\n Regex customer response type\\n:   The regex customer response type is now available. Use this response type to capture a value that must conform to a particular pattern or format, such as an email address or telephone number. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-collect-info#customer-response-type-regex Regex . \\n\\n 17 February 2022 \\n\\n {: #watson-assistant-feb172022}\\n{: release-note} \\n\\n Adding users from the Manage menu\\n:   If you want to collaborate with others on your assistants, you can now quickly add users with Administrator and Manager access from the  Manage  menu in your assistant. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-access-control#access-control-add-users Adding users from the Manage menu . \\n\\n Preview page share link\\n:   When you use the  Copy link to share  button to share your assistant, the shared assistant now mirrors the Preview page. If you share the assistant with a colleague, they are able to see the assistant with any customizations that you made on the Preview page. \\n\\n 10 February 2022 \\n\\n {: #watson-assistant-feb102022}\\n{: release-note} \\n\\n Links in assistant responses can be configured to open in a new tab\\n:   When you build an action, your assistant responses can include links. If you\\'re using web chat, you can now control whether the link opens in a new tab. To enable a link to open in a new tab, select  Open link in new tab  from the  Insert link  configuration window. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-respond Adding assistant responses . \\n\\n 9 February 2022 \\n\\n {: #watson-assistant-feb092022}\\n{: release-note} \\n\\n All instances now default to new experience\\n:    All new instances of {{site.data.keyword.conversationshort}} now direct users to the new product experience by default. \\n\\n  {{site.data.keyword.conversationshort}} has been completely overhauled to simplify the end-to-end process of building and deploying a virtual assistant, reducing time to launch and enabling nontechnical authors to create virtual assistants without involving developers. For more information about the new {{site.data.keyword.conversationshort}}, and instructions for switching between the new and old experiences, see [Welcome to the new {{site.data.keyword.conversationshort}}](/docs/watson-assistant?topic=watson-assistant-welcome-new-assistant).\\n\\nIf you would like to send us feedback on the new experience, please use [this form](https://form.asana.com/?k=vvRdQAmGMFAeEGRryhTA2w&d=8612789739828){: external}.\\n  \\n\\n 3 February 2022 \\n\\n {: #watson-assistant-feb032022}\\n{: release-note} \\n\\n Customize the Preview page background\\n:   You can now change the background of the  Preview  page to one of your organization\\'s web pages so you can preview and test your assistant from a customer\\'s perspective. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-preview-share Previewing and sharing your assistant . \\n\\n Add a type to session variables\\n:   When you create a session variable, you can now assign a type to the variable. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-manage-info#create-session-variable Creating a session variable . After a type is assigned to a variable, you can set more explicit conditions on that variable. Previously, you were able to check only whether session variables were  defined  or  not defined . With variable types, you can create conditions based on the type of the variable (for example,  account balance < 100  or  departure date is after today ). For more information, see  rect /docs/watson-assistant?topic=watson-assistant-step-conditions#operators Operators . \\n\\n Create saved customer responses\\n:   You can now create saved customer responses. There might be some questions that your assistant needs to ask in different steps and actions. For example, a banking assistant might have different actions that ask for a customer\\'s account number. Instead of building the same response over and over, you can create a saved customer response and reuse it across steps in multiple actions. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-collect-info#saved-customer-responses Saving and reusing customer responses . \\n\\n 13 January 2022 \\n\\n {: #watson-assistant-jan132022}\\n{: release-note} \\n\\n New setting for options customer response type\\n:    In actions, a new  List options  setting allows you to enable or disable the options customer response from appearing in a list. This can be useful to prevent a phone integration from reading a long list of options to the customer. As part of this change, all customer response types now have a  Settings  icon.  Allow skipping  has moved from  Edit Response  and is now found in the new settings. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-collect-info Collecting information from your customers . \\n\\n 24 December 2021 \\n\\n {: #watson-assistant-dec242021}\\n{: release-note} \\n\\n Apache Log4j security vulnerability updates\\n:   {{site.data.keyword.conversationshort}} upgraded to using Log4j version 2.17.0, which addresses all of the Critical severity and High severity Log4j CVEs, specifically CVE-2021-45105, CVE-2021-45046, and CVE-2021-44228. \\n\\n 3 December 2021 \\n\\n {: #watson-assistant-dec032021}\\n{: release-note} \\n\\n Configure webhook timeout\\n:   From the  Pre-message webhook  and  Post-message webhook  configuration pages, you can configure the webhook timeout length from a minimum of 1 second to a maximum of 30 seconds. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-webhook-overview Extending your assistant with webhooks . \\n\\n User-based switching between {{site.data.keyword.conversationshort}} experiences\\n:   Previously, switching between the new {{site.data.keyword.conversationshort}} experience and the classic {{site.data.keyword.conversationshort}} experience was instance-based. For example, if a user switched from the classic experience to the new experience, all users of that {{site.data.keyword.conversationshort}} instance were switched to the new experience. Now, switching between the experiences is user-based. So, any user of a {{site.data.keyword.conversationshort}} instance can switch between the new and classic experiences, and other users of that {{site.data.keyword.conversationshort}} instance are not affected. \\n\\n 27 November 2021 \\n\\n {: #watson-assistant-nov272021}\\n{: release-note} \\n\\n New API version\\n:   The current API version is now  2021-11-27 . This version introduces the following changes: \\n\\n  - The `output.text` object is no longer returned in `message` responses. All responses, including text responses, are returned only in the `output.generic` array.\\n  \\n\\n 12 November 2021 \\n\\n {: #watson-assistant-nov122021}\\n{: release-note} \\n\\n Completion analytic information\\n:   On the  Analyze  page, the  How often  chart can now also show the percentage of complete actions. Use the icon in the upper right of the chart to toggle between a line chart that shows the percentage of complete actions and a bar chart that shows the number of complete and incomplete actions. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-analytics-action-completion#analytics-improving-completion Improving completion . \\n\\n Preview page update\\n:   The  Test integrations  panel no longer exists on the  Preview  page. You can manage your draft web chat channel from the  Preview  page. However, all other draft environment integrations are managed from the  Draft environment  page. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-preview-share Previewing and sharing your assistant . \\n\\n 4 November 2021 \\n\\n {: #watson-assistant-nov042021}\\n{: release-note} \\n\\n Draft and Live Environment pages\\n:   Two pages,  Draft environment  and  Live environment , help you to see how your channels and resolution methods are connected, both for testing/preview and for live deployment. The Draft environment page is new as of this release. The Live environment page was previously named Connect. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-publish-overview Overview: Publishing and deploying your assistant . \\n\\n Add variables to links\\n:   When including a link in an assistant response, you can now access and use variables. In the URL field for a link, type a dollar sign ( $ ) character to see a list of variables to choose from. \\n\\n 25 October 2021 \\n\\n {: #watson-assistant-oct252021}\\n{: release-note} \\n\\n Facebook and Slack integrations now available\\n:   The new Watson Assistant now includes integrations for  rect /docs/watson-assistant?topic=watson-assistant-deploy-facebook Facebook Messenger  and  rect /docs/watson-assistant?topic=watson-assistant-deploy-slack Slack . \\n\\n Analytics for draft and live environments\\n:   The  Analyze  page now lets you see analytics data for either the draft or live environments. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-analytics-overview Use analytics to review your entire assistant at a glance . \\n\\n 7 October 2021 \\n\\n {: #watson-assistant-oct072021}\\n{: release-note} \\n\\n The new Watson Assistant\\n:   The new Watson Assistant is now available! This new experience, focused on using  actions  to build customer conversations, is designed to make it simple enough for  anyone  to build a virtual assistant. Building, testing, publishing, and analyzing your assistant can all now be done in one simple and intuitive interface. \\n\\n  - New **navigation** provides a workflow for building, previewing, publishing, and analyzing your assistant.\\n- Each assistant has a **home page** with a task list to help you get started.\\n- Build conversations with **actions**, which represent the tasks you want your assistant to help your customers with. Each action contains a series of steps that represent individual exchanges with a customer.\\n- A new way to **publish** lets you review and debug your work in a draft environment before going live to your customers.\\n- Use a new suite of **analytics** to improve your assistant. Review which actions are being completed to see what your customers want help with, determine if your assistant understands and addresses customer needs, and decide how can you make your assistant better.\\n- New **[documentation](/docs/watson-assistant)** focuses on the workflow of building, deploying, and improving your assistant.\\n  \\n\\n Older release notes might link to the  rect /docs/assistant documentation for the classic Watson Assistant experience {: external}. \\n\\n 21 September 2021 \\n\\n {: #watson-assistant-sep212021}\\n{: release-note} \\n\\n Analytics Overview change\\n:   To improve reliability, the Values column has been removed from  Top entities  on the  Analytics Overview  page. Top Entities continues to provide counts of entity types. For more information, see  rect /docs/assistant?topic=assistant-logs-overview#logs-overview-tops Top intents and top entities {: external} \\n\\n 16 September 2021 \\n\\n {: #watson-assistant-sep162021}\\n{: release-note} \\n\\n Enhanced intent detection for French, Italian, and Spanish dialog skills\\n:   The new intent detection model improves your assistant\\'s ability to understand what customers want. This model is now available in dialog skills using French, Italian, and Spanish. For more information, see  rect /docs/assistant?topic=assistant-intent-detection Improved intent recognition {: external}. \\n\\n Change to the irrelevance detection option\\n:   As of this release, new English dialog skills no longer include the option to choose between the  Enhanced  or  Existing  irrelevance detection. By default, intent detection and irrelevance detection are paired like this: \\n\\n  - If you use the dialog skill options to choose enhanced intent detection, it is automatically paired with enhanced irrelevance detection.\\n- If you use the dialog skill options to choose existing intent detection, it is automatically paired with existing irrelevance detection.\\n\\nFor more information, see [Defining what\\'s irrelevant](/docs/assistant?topic=assistant-irrelevance-detection){: external} and [Improved intent recognition](/docs/assistant?topic=assistant-intent-detection){: external}.\\n\\nIf necessary, you can use the [Update workspace API](/apidocs/assistant/assistant-v1?curl=#updateworkspace){: external} to set your English-language assistant to one of the four combinations of intent and irrelevance detection:\\n\\n- Enhanced intent recognition and enhanced irrelevance detection\\n- Enhanced intent recognition and existing irrelevance detection\\n- Existing intent recognition and enhanced irrelevance detection\\n- Existing intent recognition and existing irrelevance detection\\n\\nFor French, Italian, and Spanish, you can use the API to set your assistant to these combinations:\\n- Enhanced intent recognition and enhanced irrelevance detection\\n- Existing intent recognition and existing irrelevance detection\\n  \\n\\n 15 September 2021 \\n\\n {: #watson-assistant-sep152021}\\n{: release-note} \\n\\n Dialog skill \"Try it out\" improvements\\n:   The  Try it out  pane now includes these changes: \\n\\n  - It now includes runtime warnings in addition to runtime errors.\\n\\n- For dialog skills, the **Try it out** pane now uses the [React](https://reactjs.org/){: external} UI framework similar to the rest of the {{site.data.keyword.conversationshort}} user interface. You shouldn\\'t see any change in behavior or functionality. As a part of the update, dialog skill error handling has been improved within the \"Try it out\" pane. This update was enabled on these dates:\\n\\n    - September 9, 2021 in the Tokyo and Seoul data centers\\n    - September 13, 2021 in the London, Sydney, and Washington, D.C. data centers\\n    - September 15, 2021 in the Dallas and Frankfurt data centers\\n  \\n\\n 13 September 2021 \\n\\n {: #watson-assistant-sep132021}\\n{: release-note} \\n\\n Dialog skill \"Try it out\" improvements\\n:   For dialog skills, the  Try it out  pane now uses the  rect https://reactjs.org/ React {: external} UI framework similar to the rest of the {{site.data.keyword.conversationshort}} user interface. You shouldn\\'t see any change in behavior or functionality. As a part of the update, dialog skill error handling has been improved within the \"Try it out\" pane. This update was enabled on September 9, 2021 in the Tokyo and Seoul data centers. On September 13, 2021, the update was enabled in the London, Sydney, and Washington, D.C. data centers. \\n\\n Disambiguation feature updates\\n:   The dialog skill disambiguation feature now includes improved features: \\n\\n  - **Increased control**: The frequency and depth of disambiguation can now be controlled by using the **sensitivity** parameter in the [workspace API](/apidocs/assistant/assistant-v1#updateworkspace){: external}. There are 5 levels of sensitivity:\\n    - `high`\\n    - `medium_high`\\n    - `medium`\\n    - `medium_low`\\n    - `low`\\n\\n    The default (`auto`) is `medium_high` if this option is not set.\\n\\n- **More predictable**: The new disambiguation feature is more stable and predictable. The choices shown may sometimes vary slightly to enable learning and analytics, but the order and depth of disambiguation is largely stable.\\n\\nThese new features may affect various metrics, such as disambiguation rate and click rates, as well as influence conversation-level key performance indicators such as containment.\\n\\nIf the new disambiguation algorithm works differently than expected for your assistant, you can adjust it using the sensitivity parameter in the update workspace API. For more information, see [Update workspace](/apidocs/assistant/assistant-v1#updateworkspace){: external}.\\n  \\n\\n 9 September 2021 \\n\\n {: #watson-assistant-sep092021}\\n{: release-note} \\n\\n Actions skill improvements\\n:   Actions skills now include these new features: \\n\\n  - **Change conversation topic**: In general, an action is designed to lead a customer through a particular process without any interruptions. In real life, however, conversations almost never follow such a simple flow. In the middle of a conversation, customers might get distracted, ask questions about related issues, misunderstand something, or just change their minds about what they want to do. The **Change conversation topic** feature enables your assistant to handle these digressions, dynamically responding to the user by changing the conversation topic as needed. For more information, see [Changing the topic of the conversation](/docs/assistant?topic=assistant-actions#actions-change-topic){: external}.\\n\\n- **Fallback action**: The built-in action, *Fallback*, provides a way to automatically connect customers to a human agent if they need more help. This action helps you to handle errors in the conversation, and is triggered by these conditions:\\n    - Step validation failed: The customer repeatedly gave answers that were not valid for the expected customer response type.\\n    - Agent requested: The customer directly asked to be connected to a human agent.\\n    - No action matches: The customer repeatedly made requests or asked questions that the assistant did not understand.\\n\\n    For more information, see [Set by assistant actions](/docs/assistant?topic=assistant-actions#actions-builtin){: external}.\\n  \\n\\n Dialog skill \"Try it out\" improvements\\n:   For dialog skills, the  Try it out  pane now uses the  rect https://reactjs.org/ React {: external} UI framework similar to the rest of the {{site.data.keyword.conversationshort}} user interface. You shouldn\\'t see any change in behavior or functionality. As a part of the update, dialog skill error handling has been improved within the \"Try it out\" pane. This update will be implemented incrementally, starting with service instances in the Tokyo and Seoul data centers. \\n\\n 2 September 2021 \\n\\n {: #watson-assistant-sep022021}\\n{: release-note} \\n\\n Deploy your assistant on the phone in minutes\\n:   We have partnered with  rect https://intelepeer.com/ IntelePeer {: external} to enable you to generate a phone number for free within the phone integration. Simply choose to generate a free number when following the prompts to create a phone integration, finish the setup, and a number is assigned to your assistant. These numbers are robust and ready for production. \\n\\n Connect to your existing service desks\\n:   We have added step-by-step documentation for connecting to  rect /docs/assistant?topic=assistant-deploy-phone-genesys Genesys {: external} and  rect /docs/assistant?topic=assistant-deploy-phone-flex Twilio Flex {: external} over the phone. Easily hand off to your live agents when your customers require telephony support from your service team. {{site.data.keyword.conversationshort}} deploys on the phone via SIP, so most phone based service desks can easily be integrated via SIP trunking standards. \\n\\n 23 August 2021 \\n\\n {: #watson-assistant-aug232021}\\n{: release-note} \\n\\n Intent detection updates\\n:   Intent detection for the English language has been updated with the addition of new word-piece algorithms. These algorithms improve tolerance for out-of-vocabulary words and misspelling. This change affects only English-language assistants, and only if the enhanced intent recognition model is enabled. (For more information about the enhanced intent recognition model, and how to determine whether it is enabled, see  rect /docs/assistant?topic=assistant-intent-detection Improved intent recognition {: external}.) \\n\\n Automatic retraining of old skills and workspaces\\n:   As of August 23, 2021, {{site.data.keyword.conversationshort}} enabled automatic retraining of existing skills in order to take advantage of updated algorithms. The {{site.data.keyword.conversationshort}} service will continually monitor all ML models, and will automatically retrain those models that have not been retrained within the previous 6 months. For more information, see  rect /docs/assistant?topic=assistant-skill-auto-retrain Automatic retraining of old skills and workspaces {: external}. \\n\\n 19 August 2021 \\n\\n {: #watson-assistant-aug192021}\\n{: release-note} \\n\\n Actions preview now includes debug mode and variable values\\n:   When previewing your actions, you can use  debug mode  and  variable values  to ensure your assistant is working the way you expect. \\n\\n  **Debug mode** allows you to go to the corresponding step by clicking on a step locator next to each message. It shows you the confidence score of top three possible action when the input triggers an action. You can also follow the step in the action editor along with the conversation flow.\\n\\n**Variable values** shows you a list of the variables and their values of current action and the session variables. You can check and edit variables during the conversation flow.\\n  \\n\\n 17 August 2021 \\n\\n {: #watson-assistant-aug172021}\\n{: release-note} \\n\\n New service desk support reference implementation\\n:   You can use the reference implementation details to integrate the web chat with the Oracle B2C Service service desk. For more information, see  rect /docs/assistant?topic=assistant-deploy-web-chat#deploy-web-chat-haa Adding service desk support {: external}. \\n\\n 29 July 2021 \\n\\n {: #watson-assistant-jul292021}\\n{: release-note} \\n\\n Salesforce and Zendesk deployment changes\\n:   The Salesforce and Zendesk integrations have been updated to use the  rect /docs/assistant?topic=assistant-release-notes-chat#4.5.0 new chat history widget {: external}. The updated deployment process applies to all new deployments, including any redeployments of existing Salesforce and Zendesk connections. However, existing deployments are not affected and do not need to be modified or redeployed at this time. \\n\\n Fallback value for session variables\\n:   In action skills, you can now set a fallback value for session variables. This feature lets you to define a value for a session variable if a user-defined value isn\\'t found. To learn more, see  rect /docs/assistant?topic=assistant-actions#actions-variables-global Defining session variables {: external}. \\n\\n 16 July 2021 \\n\\n {: #watson-assistant-jul162021}\\n{: release-note} \\n\\n Logging API changes\\n:   The internal storage and processing of logs has changed. Some undocumented fields or filters might no longer be available. (Undocumented features are not officially supported and might change without notice.) \\n\\n New API version\\n:   The current API version (v1 and v2) is now  2021-06-14 . The following changes were made with this version: \\n\\n  - The `metadata` property of entities detected at run time is deprecated. For detailed information about detected system entities, see the `interpretation` property.\\n- The data types of certain entity mentions are no longer automatically converted:\\n    - Numbers in scientific notation (such as `1E10`), which were previously converted to numbers\\n    - Boolean values (such as `false`), which were previously converted to booleans\\n\\nThese values are now returned as strings.\\n  \\n\\n 17 June 2021 \\n\\n {: #watson-assistant-jun172021}\\n{: release-note} \\n\\n Actions skill now generally available\\n:   As of this release, the beta program has ended, and actions skills are available for general use. \\n\\n  An actions skill contains actions that represent the tasks you want your assistant to help your customers with. Each action contains a series of steps that represent individual exchanges with a customer. Building the conversation that your assistant has with your customers is fundamentally about deciding which steps, or which user interactions, are required to complete an action. After you identify the list of steps, you can then focus on writing engaging content to turn each interaction into a positive experience for your customer. For more information, see [Actions skill overview](/docs/assistant?topic=assistant-actions-overview){: external}.\\n  \\n\\n Date and time response types\\n:   New to action skills, these response types allow you to collect date and time information from customers as they answer questions or make requests. For more information, see  rect /docs/assistant?topic=assistant-actions#actions-response-types Response types {: external}. \\n\\n New built-in variables\\n:   Two kinds of built-in variables are now available for action skills. \\n\\n  - **Set by assistant** variables include the common and essential variables `Now`, `Current time`, and `Current date`.\\n- **Set by integration** variables are `Timezone` and `Locale` and are available to use when connected to a webhook or integration.\\n\\nFor more information, see [Adding and referencing variables](/docs/assistant?topic=assistant-actions#actions-variables){: external}.\\n  \\n\\n Universal language model now generally available\\n:   You now can build an assistant in any language you want to support. If a dedicated language model is not available for your target language, create a skill that uses the universal language model. The universal model applies a set of shared linguistic characteristics and rules from multiple languages as a starting point. It then learns from training data written in the target language that you add to it. For more information, see  rect /docs/assistant?topic=assistant-assistant-language#assistant-language-universal Understanding the universal language model {: external}. \\n\\n 3 June 2021 \\n\\n {: #watson-assistant-jun032021}\\n{: release-note} \\n\\n Log webhook support for actions and search skills\\n:   The log webhook now supports messages exchanged with actions skills and search skills, in addition to dialog skills. For more information, see  rect /docs/assistant?topic=assistant-webhook-log Logging activity with a webhook {: external}. \\n\\n 27 May 2021 \\n\\n {: #watson-assistant-may272021}\\n{: release-note} \\n\\n Change to conversation skill choices\\n:   When adding skills to new or existing assistant, the conversation skill choices have been combined, so that you pick from either an actions skill or a dialog skill. \\n\\n  With this change:\\n- New assistants can use up to two skills, either actions and search or dialog and search. Previously, new assistants could use up to three skills: actions, dialog, and search.\\n- Existing assistants that already use an actions skill and a dialog skill together can continue to use both.\\n- The ability to use actions and dialog skills together in a new assistant is planned for 2H 2021.\\n  \\n\\n 20 May 2021 \\n\\n {: #watson-assistant-may202021}\\n{: release-note} \\n\\n Actions skill improvement\\n: Actions now include a new choice,  Go to another action , for what to do next in a step. This feature lets you can call one action from another action, to switch the conversation flow to another action to perform a certain task. If you have a portion of an action that can be applied across multiple use cases you can build it once and call to it from each action. This new option is available in the  And then  section of each step. For more information, see  rect /docs/assistant?topic=assistant-actions#actions-what-next Deciding what to do next {: external}. \\n\\n 21 April 2021 \\n\\n {: #watson-assistant-apr212021}\\n{: release-note} \\n\\n Preview button for testing your assistant\\n:   For testing your assistant, the new Preview button replaces the previous Preview tile in Integrations. \\n\\n New checklist with steps to go live\\n:   Each assistant includes a checklist that you can use to ensure you\\'re ready to go live. \\n\\n Actions skill improvement\\n:   Actions now include currency and percentage response types. \\n\\n Learn what\\'s new\\n:   The  What\\'s new  choice on the help menu opens a list of highlighting recent features. \\n\\n 14 April 2021 \\n\\n {: #watson-assistant-apr142021}\\n{: release-note} \\n\\n Actions skill improvement\\n:   Actions now include a free text response type, allowing you to capture special instructions or requests that a customer wants to pass along. \\n\\n 8 April 2021 \\n\\n {: #watson-assistant-apr082021}\\n{: release-note} \\n\\n Deploy your assistant to WhatsApp - now generally available\\n:   Make your assistant available through WhatsApp messaging so it can exchange messages with your customers where they are. This integration, which is now generally available, creates a connection between your assistant and WhatsApp by using Twilio as a provider. For more information, see  rect /docs/assistant?topic=assistant-deploy-whatsapp Integrating with WhatsApp {: external}. \\n\\n Web chat home screen now generally available\\n:   Ease your customers into the conversation by adding a home screen to your web chat window. The home screen greets your customers and shows conversation starter messages that customers can click to easily start chatting with the assistant. For more information about the home screen feature, see  rect /docs/assistant?topic=assistant-deploy-web-chat#deploy-web-chat-home-screen Configuring the home screen {: external}. The home screen feature is now enabled by default for all new web chat deployments. Also, you can now access context variables from the home screen. Note that initial context must be set using a  conversation_start  node. For more information, see  rect /docs/assistant?topic=assistant-dialog-start#dialog-start-welcome Starting the conversation {: external}. \\n\\n Connect to human agent response type allows more text\\n:   In a dialog skill, the response type  Connect to human agent  now allows 320 characters in the  Response when agents are online  and  Response when no agents are online  fields. The previous limit was 100 characters. \\n\\n Legacy system entities deprecated\\n:   In January 2020, a new version of the system entities was introduced. As of April 2021, only the new version of the system entities is supported for all languages. The option to switch to using the legacy version is no longer available.     \\n\\n 6 April 2021 \\n\\n {: #watson-assistant-apr062021}\\n{: release-note} \\n\\n Service API endpoint change\\n:   As explained in  rect #watson-assistant-dec122019 December 2019 , as part of work done to fully support IAM authentication, the endpoint you use to access your {{site.data.keyword.conversationshort}} service programmatically is changing. The old endpoint URLs are deprecated and  will be retired on 26 May 2021 . Update your API calls to use the new URLs. \\n\\n  The pattern for the endpoint URL changes from `gateway-{location}.watsonplatform.net/assistant/api/` to `api.{location}.assistant.watson.cloud.ibm.com/`. The domain, location, and offering identifier are different in the new endpoint. For more information, see [Updating endpoint URLs from watsonplatform.net](/docs/watson?topic=watson-endpoint-change){: external}.\\n\\n- If your service instance API credentials show the old endpoint, create a new credential and start using it today. After you update your custom applications to use the new credential, you can delete the old one.\\n\\n- For a web chat integration, you might need to take action depending on when and how you created your integration.\\n\\n    - If you tied your deployment to a specific web chat version by using the `clientVersion` parameter and specified a version earlier than version 3.3.0, update the parameter value to use version 3.3.0 or later. Web chat integrations that use the latest or 3.3.0 and later versions will not be impacted by the endpoint deprecation.\\n\\n    - If you created your web chat integration before May 2020, check the code snippet that you embedded in your web page to see if it refers to `watsonplatform.net`. If so, you must edit the code snippet to use the new URL syntax. For example, change the following URL:\\n\\n        ```html\\n         <script src=\"https://assistant-web.watsonplatform.net/loadWatsonAssistantChat.js\"></script>\\n        ```\\n\\n        The correct syntax to use for the source service URL looks like this:\\n\\n        ```code\\n        src=\"https://web-chat.global.assistant.watson.appdomain.cloud/loadWatsonAssistantChat.js\"\\n        ```\\n\\n- If your web chat integration connects to a Salesforce service desk, then you must edit the API call that is included in the code snippet that you added to the Visualforce Page that you created in Salesforce. From Salesforce, search for *Visualforce Pages*, and find your page. In the `<iframe>` snippet that you pasted into the page, make the following change:\\n\\n  Replace: `src=â€œhttps://assistant-integrations-{location}.watsonplatform.net/public/salesforcewebâ€�` with a url with this syntax:\\n\\n    ```code\\n    src=\"https://integrations.{location}.assistant.watson.appdomain.cloud/public/salesforceweb/{integration-id}/agent_application?version=2020-09-24\"\\n    ```\\n    {: codeblock}\\n\\n  From the Web chat integration Salesforce live agent setup page, find the *Visualforce page markup* field. Look for the `src` parameter in the `<iframe>` element. It contains the full URL to use, including the appropriate `{location}` and `{integration-id}` values for your instance.\\n\\n- For a Slack integration that is over 7 months old, make sure the Request URL is using the proper endpoint.\\n\\n    - Go to the [Slack API](https://api.slack.com/){: external} web page. Click *Your Apps* to find your assistant app. Click *Event Subscriptions* from the navigation pane.\\n    - Edit the Request URL.\\n\\n  For example, if the URL has the syntax: `https://assistant-slack-{location}.watsonplatform.net/public/message`, change it to have this syntax:\\n\\n  ```code\\n  https://integrations.{location}.assistant.watson.appdomain.cloud/public/slack/{integration-id}/message?version=2020-09-24\\n  ```\\n  {: codeblock}\\n\\n  Check the *Generated request URL* field in the Slack integration setup page for the full URL to use, which includes the appropriate `{location}` and `{integration-id}` values for your instance.\\n\\n- For a Facebook Messenger integration that is over 7 months old, make sure the Callback URL is using the proper endpoint.\\n\\n    - Go to the [Facebook for Developers](https://developers.facebook.com/apps/){: external} web page.\\n    - Open your app, and then select *Messenger>Settings* from the navigation pane.\\n    - Scroll down to the *Webhooks* section and edit the *Callback URL* field.\\n\\n      For example, if the URL has the syntax: `https://assistant-facebook-{location}.watsonplatform.net/public/message/`, change it to have this syntax:\\n\\n      ```code\\n      https://integrations.{location}.assistant.watson.appdomain.cloud/public/facebook/{integration-id}/message?version=2020-09-24\\n      ```\\n      {: codeblock}\\n\\n      Check the *Generated callback URL* field in the Facebook Messenger integration setup page for the full URL to use, which includes the appropriate `{location}` and `{integration-id}` values for your instance.\\n\\n- For a Phone integration, if you connect to existing speech service instances, make sure those speech services use credentials that were generated with the latest endpoint syntax (a URL that starts with `https://api.{location}.speech-to-text.watson.cloud.ibm.com/`).\\n\\n- For a search skill, if you connect to an existing {{site.data.keyword.discoveryshort}} service instance, make sure the {{site.data.keyword.discoveryshort}} service uses credentials that were generated with the supported syntax (a URL that starts with `https://api.{location}.discovery.watson.cloud.ibm.com/`).\\n\\n- If you are using [Jupyter notebooks](/docs/assistant?topic=assistant-logs-resources#logs-resources-jupyter-logs){: external} to do advanced analytics, check your Jupyter notebook files to make sure they don\\'t specify URLs with the old `watsonplatform.net` syntax. If so, update your files.\\n\\n- No action is required for the following integration types:\\n\\n    - Intercom\\n    - SMS with Twilio\\n    - WhatsApp with Twilio\\n    - Zendesk service desk connection from web chat\\n  \\n\\n 23 March 2021 \\n\\n {: #watson-assistant-mar232021}\\n{: release-note} \\n\\n Actions skill improvement\\n:   Actions have a new toolbar making it easier to send feedback, access settings, save, and close. \\n\\n 17 March 2021 \\n\\n {: #watson-assistant-mar172021}\\n{: release-note} \\n\\n Channel transfer response type\\n:   Dialog skills now include a channel transfer response type. If your assistant uses multiple integrations to support different channels for interaction with users, there might be some situations when a customer begins a conversation in one channel but then needs to transfer to a different channel. The most common such situation is transferring a conversation to the web chat integration, to take advantage of web chat features such as service desk integration. For more information, see  rect /docs/assistant?topic=assistant-dialog-overview#dialog-overview-add-channel-transfer Adding a Channel transfer response type {: external}. \\n\\n Intercom and WhatsApp integrations now available in Lite plan\\n:   The integrations for Intercom and WhatsApp are now available in the Lite plan for {{site.data.keyword.conversationshort}}. For more information, see  rect /docs/assistant?topic=assistant-deploy-intercom Integrating with Intercom  and  rect /docs/assistant?topic=assistant-deploy-whatsapp Integrating with WhatsApp {: external}. \\n\\n 16 March 2021 \\n\\n {: #watson-assistant-mar162021}\\n{: release-note} \\n\\n Session history now generally available\\n:   Session history allows your web chats to maintain conversation history and context when users refresh a page or change to a different page on the same website. It is enabled by default. For more information about this feature, see  rect https://web-chat.global.assistant.watson.cloud.ibm.com/docs.html?to=tutorials-session-history Session history {: external}. \\n\\n  Session history persists within only one browser tab, not across multiple tabs. The dialog provides an option for links to open in a new tab or the same tab. See [this example](https://web-chat.global.assistant.watson.cloud.ibm.com/docs.html?to=tutorials-session-history#Tutorial1){: external} for more information on how to format links to open in the same tab.\\n\\nSession history saves changes that are made to messages with the [pre:receive event](https://web-chat.global.assistant.watson.cloud.ibm.com/docs.html?to=api-events#prereceive){: external} so that messages still look the same on rerender. This data is only saved for the length of the session. If you prefer to discard the data, set `event.updateHistory = false;` so the message is rerendered without the changes that were made in the pre:receive event.\\n\\n[instance.updateHistoryUserDefined()](https://web-chat.global.assistant.watson.cloud.ibm.com/docs.html?to=api-instance-methods#updateHistoryUserDefined){: external} provides a way to save state for any message response. With the state saved, a response can be rerendered with the same state. This saved state is available in the `history.user_defined` section of the message response on reload. The data is saved during the user session. When the session expires, the data is discarded.\\n\\nTwo new history events, [history:begin](https://web-chat.global.assistant.watson.cloud.ibm.com/docs.html?to=api-events#historybegin){: external} and [history:end](https://web-chat.global.assistant.watson.cloud.ibm.com/docs.html?to=api-events#historyend){: external} announce the beginning and end of the history of a reloaded session. These events can be used to view the messages that are being reloaded. The history:begin event allows you to edit the messages before they are displayed.\\n\\nSee this example for more information on saving the state of [customResponse](https://web-chat.global.assistant.watson.cloud.ibm.com/testfest.html?to=api-events#customresponse){: external} types in session history.\\n  \\n\\n Channel switching\\n:   You can now create a dialog response type to functionally generate a connect-to-agent response within channels other than web chat. If a user is in a channel such as Slack or Facebook, they can trigger a channel transfer response type. The user receives a link that forwards them to your organization\\'s website where a connection to an agent response can be started within web chat. For more information, see  rect /docs/assistant?topic=assistant-dialog-overview#dialog-overview-add-channel-transfer Adding a Channel transfer response type {: external}. \\n\\n 11 March 2021 \\n\\n {: #watson-assistant-mar112021}\\n{: release-note} \\n\\n Actions skill improvement\\n:   Updated the page where you configure a step with an  Options  reply constraint. Now it\\'s clearer that you have a choice to make about whether to always ask for the option value or to skip asking. For more information, see  rect /docs/assistant?topic=assistant-actions#actions-response-types Apply reply constraints {: external}. \\n\\n 4 March 2021 \\n\\n {: #watson-assistant-mar042021}\\n{: release-note} \\n\\n Support for every language!\\n:   You now can build an assistant in any language you want to support. If a dedicated language model is not available for your target language, create a skill that uses the universal language model. The universal model applies a set of shared linguistic characteristics and rules from multiple languages as a starting point. It then learns from training data written in the target language that you add to it. \\n\\n  The universal model is available as a beta feature. For more information, see [Understanding the universal language model](/docs/assistant?topic=assistant-assistant-language#watson-assistant-language-universal){: external}.\\n  \\n\\n Actions skill improvement\\n:   Now you can indicate whether or not to ask for a number when you apply a number reply constraint to a step. Test how changes to this setting might help speed up a customer\\'s interaction. Under the right circumstances, it can be useful to let a number mention be recognized and stored without having to explicitly ask the customer for it. For more information, see  rect /docs/assistant?topic=assistant-actions#actions-response-types Applying reply constraints {: external}. \\n\\n 1 March 2021 \\n\\n {: #watson-assistant-mar012021}\\n{: release-note} \\n\\n Introducing the  Enterprise  plan!\\n:   The Enterprise plan includes all of the market differentiating features of the Plus plan, but with higher capacity limits, additional security features, custom onboarding support to get you going, and a lower overall cost at higher volumes. \\n\\n  To have a dedicated environment provisioned for your business, request the *Enterprise with Data Isolation* plan. To submit a request online, go to [http://ibm.biz/contact-wa-enterprise](http://ibm.biz/contact-wa-enterprise){: external}.\\n\\nThe Enterprise plan is replacing the Premium plan. The Premium plan is being retired today. Existing Premium plan users are not impacted. They can continue to work in their Premium instances and create instances up to the 30-instance limit. New users do not see the Premium plan as an option when they create a service instance.\\n\\nFor more information, see the [Pricing](https://www.ibm.com/cloud/watson-assistant/pricing/){: external} page.\\n  \\n\\n Other plan changes\\n:   Our pricing has been revised to reflect the features we\\'ve added that help you build an assistant that functions as a powerful omnichannel SaaS application. \\n\\n  Starting on 1 March 2021, the Plus plan starts at $140 per month and includes your first 1,000 monthly users. You pay $14 for each additional 100 active users per month. Use of the voice capabilities that are provided by the *Phone* integration are available for an additional $9 per 100 users per month.\\n\\nThe Plus Trial plan was renamed to Trial.\\n  \\n\\n SOC 2 compliance\\n:   {{site.data.keyword.conversationshort}} is SOC 2 Type 2 compliant, so you know your data is secure. \\n\\n  The System and Organization Controls framework, developed by the American Institute of Certified Public Accountants (AICPA), is a standard for controls that protect information stored in the cloud. SOC 2 reports provide details about the nature of internal controls that are implemented to protect customer-owned data. For more information, see [IBM Cloud compliance programs](https://www.ibm.com/cloud/compliance/global){: external}.\\n  \\n\\n 25 February 2021 \\n\\n {: #watson-assistant-feb252021}\\n{: release-note} \\n\\n Search skill can emphasize the answer\\n:   You can configure the search skill to highlight text in the search result passage that {{site.data.keyword.discoveryshort}} determines to be the exact answer to the customer\\'s question. For more information, see  rect /docs/assistant?topic=assistant-skill-search-add Creating a search skill {: external}. \\n\\n Integration changes\\n:   The following changes were made to the integrations: \\n\\n  - The name of *Preview link* integration changed to *Preview*.\\n- The *Web chat* and *Preview* integrations are no longer added automatically to every new assistant.\\n\\n    The integrations continue to be added to the *My first assistant* that is generated for you automatically when you first create a new service instance.\\n  \\n\\n Message and log webhooks are generally available\\n:   The premessage, postmessage, and log webhooks are now generally available. For more information about them, see  rect /docs/assistant?topic=assistant-webhook-overview Webhook overview {: external}. \\n\\n 11 February 2021 \\n\\n {: #watson-assistant-feb112021}\\n{: release-note} \\n\\n The  user_id  value is easier to access\\n:   The  user_id  property is used for billing purposes. Previously, it was available from the context object as follows: \\n\\n  - v2: `context.global.system.user_id`\\n- v1: `context.metadata.user_id`\\n\\nThe property is now specified at the root of the `/message` request in addition to the context object. The built-in integrations typically set this property for you. If you\\'re using a custom application and don\\'t specify a `user_id`, the `user_id` is set to the `session_id` (v2) or `conversation_id`(v1) value.\\n  \\n\\n Digression bug fix\\n:   Fixed a bug where digression setting changes that were made to a node with slots were not being saved. \\n\\n 5 February 2021 \\n\\n {: #watson-assistant-feb052021}\\n{: release-note} \\n\\n Documentation update\\n:   The phone and  SMS with Twilio  deployment documentation was updated to include instructions for migrating from {{site.data.keyword.iva_short}}. For more information, see  rect /docs/assistant?topic=assistant-deploy-phone#deploy-phone-migrate-from-va Integrating with phone {: external} and  rect /docs/assistant?topic=assistant-deploy-sms#deploy-sms-migrate-from-va Integrating with  SMS with Twilio  {: external}. \\n\\n 27 January 2021 \\n\\n {: #watson-assistant-jan272021}\\n{: release-note} \\n\\n German language improvements\\n:   A word decomposition function was added to the intent and entity recognition models for German-language dialog skills. \\n\\n  A characteristic of the German language is that some words are formed by concatenating separate words to form a single compound word. For example, \"festnetznummer\" (landline number) concatenates the words \"festnetz\" (landline) and \"nummer\" (number). When your customers chat with your assistant, they might write a compound word as a single word, as hyphenated words, or as separate words. Previously, the variants resulted in different intent confidence scores and different entity mention counts based on your training data. With the addition of the word decomposition function, the models now treat all compound word variants as equivalent. This update means you no longer need to add examples of every variant of the compound words to your training data.\\n  \\n\\n 19 January 2021 \\n\\n {: #watson-assistant-jan192021}\\n{: release-note} \\n\\n The  Phone  and  SMS with Twilio  integrations are now generally available!\\n:   For more information, see: \\n\\n  - [Integrating with phone](/docs/assistant?topic=assistant-deploy-phone){: external}\\n- [Integrating with *SMS with Twilio*](/docs/assistant?topic=assistant-deploy-sms){: external}\\n  \\n\\n  Preview link  change\\n:   When you create a preview link, you can now test your skill from a chat window that is embedded in the page. You can also copy the URL that is provided, and open it in a web browser to see an IBM-branded web page with the web chat embedded in it. You can share the URL to the public IBM web page with others to get help with testing or for demoing purposes. For more information, see  rect /docs/assistant?topic=assistant-deploy-web-link Testing your assistant {: external}. \\n\\n Import and export UI changes\\n:   The label on buttons for importing skills changed from  Import  to  Upload , and the label on buttons for exporting skills changed from  Export  to  Download . \\n\\n Coverage metric change\\n:   The coverage metric now looks for nodes that were processed with a node condition that includes the  anything_else  special condition instead of nodes that are named  Anything else . For more information, see  rect /docs/assistant?topic=assistant-dialog-start Starting and ending the dialog {: external}. \\n\\n 15 January 2021 \\n\\n {: #watson-assistant-jan152021}\\n{: release-note} \\n\\n Use new webhooks to process messages!\\n:   A set of new webhooks is available as a beta feature. You can use the webhooks to perform preprocessing tasks on incoming messages and postprocessing tasks on the corresponding responses. You can use the new log webhook to log each message with an external service. For more information, see  rect /docs/assistant?topic=assistant-webhook-overview Webhook overview {: external}. \\n\\n New service desk support reference implementation\\n:   You can use the reference implementation details to integrate the web chat with the NICE inContact service desk. For more information, see  rect /docs/assistant?topic=assistant-deploy-web-chat#deploy-web-chat-haa Adding service desk support {: external}. \\n\\n Phone and  SMS with Twilio  integration updates\\n:   The phone integration now enables you to specify more than one phone number, and the numbers can be imported from a comma-separated values (CSV) file. The  SMS with Twilio  integration no longer requires you to add your SMS phone number to the setup page. \\n\\n 6 January 2021 \\n\\n {: #watson-assistant-jan062021}\\n{: release-note} \\n\\n Import and export UI changes\\n:   The label on buttons for importing intents and entities changed from  Import  to  Upload . The label on buttons for exporting intents and entities changed from  Export  to  Download . \\n\\n 4 January 2021 \\n\\n {: #watson-assistant-jan042021}\\n{: release-note} \\n\\n Dialog methods updates\\n:   Documentation and examples were added for the following supported dialog methods: \\n\\n  - `JSONArray.addAll(JSONArray)`\\n- `JSONArray.containsIgnoreCase(value)`\\n- `String.equals(String)`\\n- `String.equalsIgnoreCase(String)`\\n\\nFor more information, see [Expression language methods](/docs/assistant?topic=assistant-dialog-methods){: external}.\\n  \\n\\n 17 December 2020 \\n\\n {: #watson-assistant-dec172020}\\n{: release-note} \\n\\n Accessibility improvements\\n:   The product was updated to provide enhanced accessibility features. \\n\\n 14 December 2020 \\n\\n {: #watson-assistant-dec142020}\\n{: release-note} \\n\\n Increased Phone and SMS with Twilio integrations availability\\n:   These beta SMS and voice capabilities are now available from service instances that are hosted in Seoul, Tokyo, London, and Sydney. \\n\\n Improved JSON editor\\n:   The JSON editor in the dialog skill was updated. The editor now uses JSON syntax highlighting and allows you to expand and collapse objects. \\n\\n Connect to agent from actions skill\\n:   The actions skill now supports transferring a customer to an agent from within an action step. For more information, see  rect /docs/assistant?topic=assistant-actions#actions-what-next Deciding what to do next {: external}. \\n\\n 4 December 2020 \\n\\n {: #watson-assistant-dec042020}\\n{: release-note} \\n\\n Introducing more service desk options for web chat\\n:   When you deploy your assistant by using the web chat integration, there are now reference implementations that you can use for the following service desks: \\n\\n  - Twilio Flex\\n- Genesys Cloud\\n\\nAlternatively, you can bring your own service desk by using the service desk extension starter kit.\\n\\nFor more information, see [Adding service desk support](/docs/assistant?topic=assistant-deploy-web-chat#deploy-web-chat-haa){: external}.\\n  \\n\\n Autolearning has been moved and improved\\n:   Go to the  Analytics>Autolearning  page to enable the feature and see visualizations that illustrate how autolearning impacts your assistant\\'s performance over time. For more information, see  rect /docs/assistant?topic=assistant-autolearn Empower your skill to learn automatically {: external}. \\n\\n Search from actions skill\\n:   The actions skill now supports triggering a search that uses your associated search skill from within an action step. For more information, see  rect /docs/assistant?topic=assistant-actions#actions-what-next Deciding what to do next {: external}. \\n\\n System entities language support change\\n:   The new system entities are now used by all skills except Korean-language dialog skills. If you have a Korean skill that uses the older version of the system entities, update it. The legacy version will stop being supported for Korean skills in March 2021. For more information, see  rect /docs/assistant?topic=assistant-legacy-system-entities Legacy system entities {: external}. \\n\\n Disambiguation selection enhancement\\n:   When a customer chooses an option from a disambiguation list, the corresponding intent is submitted. With this latest release, a confidence score of 1.0 is assigned to the intent. Previously, the original confidence score of the option was used. \\n\\n Skill import improvements\\n:   Importing of large skills from JSON data is now processed in the background. When you import a JSON file to create a skill, the new skill tile appears immediately. However, depending on the size of the skill, it might not be available for several minutes while the import is being processed. During this time, the skill cannot be opened for editing or added to an assistant, and the skill tile shows the text  Processing . \\n\\n 23 November 2020 \\n\\n {: #watson-assistant-nov232020}\\n{: release-note} \\n\\n Deploy your assistant to WhatsApp!\\n:   Make your assistant available through WhatsApp messaging so it can exchange messages with your customers where they are. This beta integration creates a connection between your assistant and WhatsApp by using Twilio as a provider. For more information, see  rect /docs/assistant?topic=assistant-deploy-whatsapp Integrating with WhatsApp {: external}. \\n\\n 13 November 2020 \\n\\n {: #watson-assistant-nov132020}\\n{: release-note} \\n\\n New coverage metric and enhanced intent detection model\\n:   The following features are available in service instances hosted in all data center locations except Dallas. \\n\\n Introducing the coverage metric!\\n:   Want a quick way to see how your dialog is doing at responding to customer queries? Enable the new coverage metric to find out. The coverage metric measures the rate at which your dialog is confident that it can address a customer\\'s request per message. For conversations that are not covered, you can review the logs to learn more about what the customer wanted. For the metric to work, you must design your dialog to include an  Anything else  node that is processed when no other dialog node intents are matched. For more information, see  rect /docs/assistant?topic=assistant-logs-overview#logs-overview-graphs Graphs and statistics {: external}. \\n\\n Try out the enhanced intent detection model\\n:   The new model, which is being offered as a beta feature in English-language dialog and actions skills, is faster and more accurate. It combines traditional machine learning, transfer learning, and deep learning techniques in a cohesive model that is highly responsive at run time. For more information, see  rect /docs/assistant?topic=assistant-intent-detection Improved intent recognition {: external}. \\n\\n 3 November 2020 \\n\\n {: #watson-assistant-nov032020}\\n{: release-note} \\n\\n Suggestions are now generally available\\n:   The Suggestions feature that is available for the web chat integration is generally available and is enabled by default when you create a new web chat integration. For more information, see  rect /docs/assistant?topic=assistant-deploy-web-chat#deploy-web-chat-alternate Showing more suggestions {: external}. \\n\\n New languages supported by the dialog analysis notebook\\n:   The  Dialog skill analysis notebook  was updated with language support for French, German, Spanish, Czech, Italian, and Portuguese. For more information, see  rect /docs/assistant?topic=assistant-logs-resources#logs-resources-jupyter-logs Analysis notebooks {: external}. \\n\\n Visit the learning center!\\n:   Click the  Learning center  link that is displayed in the header of the skill pages to find helpful product tours. The tours guide you through the steps to follow to complete a range of tasks, from adding your first intent to a dialog skill to enhancing the conversation in an actions skill. The  Additional resources  page has links to relevant documentation topics and how-to videos. You can search the resource link titles to find what you\\'re looking for quickly. \\n\\n 29 October 2020 \\n\\n {: #watson-assistant-oct292020}\\n{: release-note} \\n\\n System entity support changes\\n:   For English, Brazilian Portuguese, Czech, Dutch, French, German, Italian, and Spanish dialog skills only the new system entities API version is supported. For backward compatibility, both the  interpretation  and  metadata  attributes are included with the recognized entity object. The new system entity version is enabled automatically for dialog skills in the Arabic, Chinese, Korean, and Japanese languages. You can choose to use the legacy version of the system entities API by switching to it from the  Options>System Entities  page. This settings page is not displayed in English, Brazilian Portuguese, Czech, Dutch, French, German, Italian, and Spanish dialog skills because use of the legacy version of the API is no longer supported for those languages. For more information about the new system entities, see  rect /docs/assistant?topic=assistant-system-entities System entities {: external}. \\n\\n 28 October 2020 \\n\\n {: #watson-assistant-oct282020}\\n{: release-note} \\n\\n Introducing the  actions skill !\\n:   The actions skill is the latest step in the continuing evolution of {{site.data.keyword.conversationshort}} as a software as a service application. The actions skill is designed to make it simple enough for  anyone  to build a virtual assistant. We\\'ve removed the need to navigate between intents, entities, and dialog to create conversational flows. Building can all now be done in one simple and intuitive interface. \\n\\n  The actions skill is available as a beta feature. For more information, see [Adding an actions skill](/docs/assistant?topic=assistant-skill-actions-add){: external}.\\n  \\n\\n Web chat integration is created automatically\\n:   When you create a new assistant, a web chat integration is created for you automatically (in addition to the preview link integration, which was created previously). These integrations are added also to the assistant that is auto-generated (named  My first assistant ) when you create a new service instance. For more information, see  rect /docs/assistant?topic=assistant-deploy-web-chat Integrating the web chat with your website {: external}. \\n\\n Text messaging integration was renamed\\n:   The  Twilio messaging  integration was renamed to  SMS with Twilio . \\n\\n 9 October 2020 \\n\\n {: #watson-assistant-oct092020}\\n{: release-note} \\n\\n Search skill update\\n:   Support was added for a new version of the {{site.data.keyword.discoveryshort}} API which adds the following capabilities: \\n\\n  - The search skill can now connect to existing Premium {{site.data.keyword.discoveryshort}} service instances.\\n\\n- When you connect to a Box, Sharepoint, or Web crawl data collection, the result content fields are automatically populated for you. The **Title** now uses the `title` field from the source document instead of the `extracted_metadata.title` field, which provides better results.\\n  \\n\\n 1 October 2020 \\n\\n {: #watson-assistant-oct012020}\\n{: release-note} \\n\\n Introducing the  Phone  integration!\\n:   Your customers are calling; now your assistant can answer. Add a phone integration to enable your assistant to answer customer support calls. The integration connects to your existing Session Initiation Protocol (SIP) trunk, which routes incoming calls to your assistant. For more information, see  rect /docs/assistant?topic=assistant-deploy-phone Integrating with phone {: external}. \\n\\n Introducing the  Twilio messaging  integration!\\n:   Enable your assistant to receive and respond to questions that customers submit by using SMS text messaging. When you enable both new integrations, your assistant can send text messages to a customer in the context of an ongoing phone conversation. For more information, see  rect /docs/assistant?topic=assistant-deploy-sms Integrating with Twilio messaging {: external}. \\n\\n  The *Phone* and *Twilio messaging* integrations are available as beta features in {{site.data.keyword.conversationshort}} service instances that are hosted in Dallas, Frankfurt, and Washington, DC.\\n  \\n\\n The web chat integration is added to new assistants automatically\\n:   Much like the  Preview link  integration, the  Web chat  integration now is added to the  My first assistant  assistant that is created for new users automatically. \\n\\n 24 September 2020 \\n\\n {: #watson-assistant-sep242020}\\n{: release-note} \\n\\n Introducing the containment metric!\\n:   Want a quick way to see how often your assistant has to ask for help? Enable the new containment metric to find out. The containment metric measures the rate at which your assistant is able to address a customer\\'s goal without human intervention. For conversations that are not contained, you can review the logs to understand what led customers to seek help outside of the assistant. For the metric to work, you must design your dialog to flag requests for additional support when they occur. For more information, see  rect /docs/assistant?topic=assistant-logs-overview#logs-overview-graphs Graphs and statistics {: external}. \\n\\n Chat transfer improvements\\n:   When you add the  Connect to human agent  response type to a dialog node, you can now define messages to show to your customers during the transfer, and can specify service desk agent routing preferences. For more information, see  rect /docs/assistant?topic=assistant-dialog-overview#dialog-overview-add-connect-to-human-agent Adding a  Connect to human agent  response type {: external}. \\n\\n 22 September 2020 \\n\\n {: #watson-assistant-sep222020}\\n{: release-note} \\n\\n New API version\\n:   The current v2 API version is now  2020-09-24 . In this version, the structure of the  search  response type has changed. The  results  property has been removed and replaced with two new properties: \\n\\n  - `primary_results` property includes the search results that should be displayed in the initial response to a user query.\\n- `additional_results` property includes search results that can be displayed if the user wants to see more.\\n\\nThe search skill configuration determines how many search results are included in the `primary_results` and `additional_results` properties.\\n  \\n\\n Search skill improvements\\n:   The following improvements were made to the search skill: \\n\\n  - **Control the number of search results**: You can now customize the number of search results that are shown in a response from the search skill. For more information, see [Configure the search](/docs/assistant?topic=assistant-skill-search-add#skill-search-add-configure){: external}.\\n\\n- **FAQ extraction is available for web crawl data collections**: When you create a web crawl data collection type, you can now enable the FAQ extraction beta feature. FAQ extraction allows the {{site.data.keyword.discoveryshort}} service to identify question and answer pairs that it finds as it crawls the website. For more information, see [Create a data collection](/docs/assistant?topic=assistant-skill-search-add#skill-search-add-create-discovery-collection){: external}.\\n  \\n\\n 16 September 2020 \\n\\n {: #watson-assistant-sep162020}\\n{: release-note} \\n\\n Search skill refinement change\\n:   The search refinement beta feature that was added in  rect #24jun2020 June  now is disabled by default. Enable the feature to refine the search results that are returned from the {{site.data.keyword.discoveryshort}} service. For more information, see  rect /docs/assistant?topic=assistant-skill-search-add#skill-search-add-configure Configure the search {: external}. \\n\\n 25 August 2020 \\n\\n {: #watson-assistant-aug252020}\\n{: release-note} \\n\\n Give the web chat integration a try!\\n:   You can now use the web chat integration with a Lite plan. Previously, the web chat was available to Plus or higher plans only. For more information, see  rect /docs/assistant?topic=assistant-deploy-web-chat Integrating the web chat with your website {: external}. \\n\\n 12 August 2020 \\n\\n {: #watson-assistant-aug122020}\\n{: release-note} \\n\\n v2 Logs API is available\\n:   If you have a Premium plan, you can use the v2 API  logs  method to list log events for an assistant. For more information, see the  rect https://cloud.ibm.com/apidocs/assistant/assistant-v2#listlogs API reference {: external} documentation. \\n\\n 5 August 2020 \\n\\n {: #watson-assistant-aug052020}\\n{: release-note} \\n\\n Enable your skill to improve itself\\n:   Try the new  autolearning  beta feature to empower your skill to improve itself automatically over time. Your skill observes customer choices to understand which choices are most often the best. As its confidence grows, your skill presents better options to get the right answers to your customers with fewer clicks. For more information, see  rect /docs/assistant?topic=assistant-autolearn Empower your skill to learn over time {: external}. \\n\\n Show more of search results\\n:   When search results are returned from the search skill, the customer can now click a twistie to expand the search result card to see more of the returned text. \\n\\n 29 July 2020 \\n\\n {: #watson-assistant-jul292020}\\n{: release-note} \\n\\n The @sys-location and @sys-person system entities were removed\\n:   The  @sys-location  and  @sys-person  system entities are no longer listed on the  System entities  page. If your dialog uses one of these entities, a red  Entity not created  notification is displayed to inform you that the entity is not recognized. \\n\\n Skill menu actions moved\\n:   The menu that was displayed in the header of the skill while you were working with a skill was removed. The actions that were available from the menu, such as import and export, are still available. Go to the Skills page, and click the menu on the skill tile. \\n\\n  The import skill process was updated to support overwriting an existing skill on import. For more information, see [Overwriting a skill](/docs/assistant?topic=assistant-skill-tasks#skill-tasks-overwrite){: external}.\\n  \\n\\n Dialog issues were addressed\\n:   These dialog issues were addressed:\\n    - Fixed an issue with adding a jump-to from a conditional response in one node to a conditional response in another node.\\n    - The page now responds better when you scroll horizontally to see multiple levels of child nodes. \\n\\n 15 July 2020 \\n\\n {: #watson-assistant-jul152020}\\n{: release-note} \\n\\n Support ended for @sys-location and @sys-person\\n:   The person and location system entities, which were available as a beta feature in English dialog skills only, are no longer supported. You cannot enable them. If your dialog uses them, they are ignored by the service. \\n\\n  Use contextual entities to teach your skill to recognize the context in which such names are used. For more information about contextual entities, see [Annotation-based method](/docs/assistant?topic=assistant-entities#entities-annotations-overview){: external}.\\n\\nFor more information about how to use contextual entites to identify names of people, see the [Detecting Names And Locations With {{site.data.keyword.conversationshort}}](https://medium.com/ibm-watson/detecting-names-and-locations-with-watson-assistant-e3e1fa2a8427){: external} blog post on Medium.\\n  \\n\\n How legacy numeric system entities are processed has changed\\n:   All new dialog skills use the new system entities automatically. \\n\\n  For existing skills that use legacy numeric system entities, how the entities are processed now differs based on the skill language.\\n\\n- Arabic, Chinese, Korean, and Japanese dialog skills that use legacy numeric system entities function the same as before.\\n- If you choose to continue to use the legacy system entities in European-language dialog skills, a new legacy API format is used. The new legacy API format simulates the legacy system entities behavior. In particular, it returns a `metadata` object and does not stop the service from idenfifying multiple system entities for the same input string. In addition, it returns an `interpretation` object, which was introduced with the new version of system entities. Review the `interpretation` object to see the useful information that is returned by the new version.\\n\\nUpdate your skills to use the new system entities from the **Options>System Entities** page.\\n  \\n\\n Web chat security is generally available\\n:   Enable the security feature of web chat so that you can verify that messages sent to your assistant come from only your customers and can pass sensitive information to your assistant. \\n\\n  When configuring the JWT, you no longer need to specify the Authentication Context Class Reference (acr) claim.\\n  \\n\\n 1 July 2020 \\n\\n {: #watson-assistant-jul012020}\\n{: release-note} \\n\\n Salesforce support is generally available\\n:   Integrate your web chat with Salesforce so your assistant can transfer customers who asks to speak to a person to a Salesforce agent who can answer their questions. For more information, see  rect /docs/assistant?topic=assistant-deploy-salesforce Integrating with Salesforce {: external}. \\n\\n 24 June 2020 \\n\\n {: #watson-assistant-jun242020}\\n{: release-note} \\n\\n Get better answers from search skill\\n:   The search skill now has a beta feature that limits the search results that are returned to include only those for which {{site.data.keyword.discoveryshort}} has calculated a 20% or higher confidence score. You can toggle the feature on or off from the  Refine results to return more selective answers  switch on the configuration page. You cannot change the confidence score threshold from 0.2. This beta feature is enabled by default. For more information, see  rect /docs/assistant?topic=assistant-skill-search-add Creating a search skill {: external}. \\n\\n 3 June 2020 \\n\\n {: #watson-assistant-jun032020}\\n{: release-note} \\n\\n Zendesk support is generally available\\n:   Integrate your web chat with Zendesk so your assistant can transfer customers who asks to speak to a person to a Zendesk agent who can answer their questions. And now you can secure the connection to Zendesk. For more information, see  rect /docs/assistant?topic=assistant-deploy-zendesk Adding support for transfers {: external}. \\n\\n Pricing plan changes\\n:   We continue to revamp the overall service plan structure for {{site.data.keyword.conversationshort}}. In April, we announced  rect #watson-assistant-apr012020 a new low cost entry point  for the Plus plan. Today, the Standard plan is being retired. Existing Standard plan users are not impacted; they can continue to work in their Standard instances. New users do not see the Standard plan as an option when they create a service instance. For more information, see the  rect https://www.ibm.com/cloud/watson-assistant/pricing/ Pricing {: external} page. \\n\\n 27 May 2020 \\n\\n {: #watson-assistant-may272020}\\n{: release-note} \\n\\n Full language support for new system entities\\n:   The new version of the system entities is generally available in dialog skills of all languages, including Arabic, Chinese (Simplified), Chinese (Traditional), Korean, and Japanese. For more information, see  rect /docs/assistant?topic=assistant-language-support Supported languages {: external}. \\n\\n New system entities are enabled automatically\\n:   All new dialog skills use the new version of the system entities automatically. For more information, see  rect /docs/assistant?topic=assistant-system-entities New system entities {: external}. \\n\\n 22 May 2020 \\n\\n {: #watson-assistant-may222020}\\n{: release-note} \\n\\n Spelling correction in v2 API\\n:   The v2  message  API now supports spelling correction options. For more information see the  rect https://cloud.ibm.com/apidocs/assistant/assistant-v2#message API Reference {: external}. \\n\\n 21 May 2020 \\n\\n {: #watson-assistant-may212020}\\n{: release-note} \\n\\n Preview link URL change\\n:   The URL for the preview link was changed. If you previously shared the link with teammates, provide them with the new URL. \\n\\n 15 May 2020 \\n\\n {: #watson-assistant-may152020}\\n{: release-note} \\n\\n Private endpoints support is available in Plus plan\\n:   You can use private endpoints to route services over the {{site.data.keyword.cloud_notm}} private network instead of the public network. For more information, see  rect /docs/assistant?topic=assistant-security#security-private-endpoints Private network endpoints {: external}. This feature was previously available to users of Premium plans only. \\n\\n 14 May 2020 \\n\\n {: #watson-assistant-may142020}\\n{: release-note} \\n\\n Get skill owner information\\n:   The email address of the person who owns the service instance that you are using is displayed from the User account menu. This information is especially helpful if you want to contact the instance owner to request access changes. For more information about access control, see  rect /docs/assistant?topic=assistant-access-control Managing access to resources {: external}. \\n\\n System entity deprecation\\n:   As stated in the  rect #watson-assistant-mar012020 March deprecation notice , the  @sys-location  and  @sys-person  system entities that were available as a beta feature are deprecated. If you are using one of these system entities in your dialog, a toggle is displayed for the entity on the  System entities  page. You can  rect /docs/assistant?topic=assistant-dialog-tasks#dialog-tasks-search search your dialog {: external} to find out where you are currently using the entity, and remove it. Consider using a contextual entity to identify references to locations and people instead. After removing the entity from your dialog, disable the entity from the  System entities  page. \\n\\n 13 May 2020 \\n\\n {: #watson-assistant-may132020}\\n{: release-note} \\n\\n Stateless v2 message API\\n:   The v2 runtime API now supports a new stateless  message  method. If you have a client application that manages its own state, you can use this new method to take advantage of  rect https://medium.com/ibm-watson/the-new-watson-assistant-v2-stateless-api-unlock-enterprise-features-today-2c02a4bbdef5 many of the benefits {: external} of the v2 API without the overhead of creating sessions. For more information, see the  rect https://cloud.ibm.com/apidocs/assistant/assistant-v2#message-stateless API Reference {: external}. \\n\\n 30 April 2020 \\n\\n {: #watson-assistant-apr302020}\\n{: release-note} \\n\\n Web chat is generally available!\\n:   Add your assistant to your company website as a web chat widget that can help your customers with common questions and tasks. Service desk transfer support continues to be a beta feature. For more information, see  rect /docs/assistant?topic=assistant-deploy-web-chat Integrating with your own website {: external}. \\n\\n Secure your web chat\\n:   Enable the beta security feature of web chat so that you can verify that messages sent to your assistant come from only your customers and can pass sensitive information to your assistant. \\n\\n 27 April 2020 \\n\\n {: #watson-assistant-apr272020}\\n{: release-note} \\n\\n Add personality to your assistant in web chat\\n:   You can add an assistant image to the web chat header to brand the window. You can add an avatar image that represents your assistant or a brand logo, for example. For more information, see  rect /docs/assistant?topic=assistant-deploy-web-chat Integrating with your own web site {: external}. \\n\\n Know your plan\\n:   Now your service plan is displayed in the page header. And if you have a Plus Trial plan, you can see how many days are left in the trial. \\n\\n 21 April 2020 \\n\\n {: #watson-assistant-apr212020}\\n{: release-note} \\n\\n Fuzzy matching support was expanded\\n:   Added support for stemming and misspelling in French, German, and Czech dialog skills. This enhancement means that the assistant can recognize an entity value that is defined in its singular form but mentioned in its plural form in user input. It also can recognize conjugated forms of a verb that is specified as an entity value. \\n\\n  For example, if your French-language dialog skill has an entity value of `animal`, it recognizes the plural form of the word (`animaux`) when it is mentioned in user input. If your German-language dialog skill has the root verb `haben` as an entity value, it recognizes conjugated forms of the verb (`hast`) in user input as mentions of the entity.\\n  \\n\\n 2 April 2020 \\n\\n {: #watson-assistant-apr022020}\\n{: release-note} \\n\\n New and improved access control\\n:   Now, when you give other people access to your {{site.data.keyword.conversationshort}} resources, you have more control over the level of access they have to individual skills and assistants. You can give one person read-only access to a production skill and manager-level access to a development skill, for example. For more information, see  rect /docs/assistant?topic=assistant-access-control Managing access to resources {: external}. \\n\\n  Can\\'t see Analytics anymore? If you cannot do things that you could do before, you might not have appropriate access. Ask the service instance owner to change your service access role. For more information, see [How to keep your access](/docs/assistant?topic=assistant-access-control#access-control-prep){: external}.\\n\\nIf you can\\'t access the API Details for a skill or assistant anymore, you might not have the access role that is required to use the instance-level API credentials. You can use a personal API key instead. For more information, see [Getting API information](/docs/assistant?topic=assistant-assistant-settings#assistant-settings-api-details){: external}.\\n  \\n\\n 1 April 2020 \\n\\n {: #watson-assistant-apr012020}\\n{: release-note} \\n\\n Plus plan changes\\n:   The Plus plan is now available starting at $120/month for 1,000 users on pay-as-you-go or subscription IBM Cloud accounts. And you can subscribe without contacting Sales. \\n\\n French language beta support added for contextual entities\\n:   You can add contextual entities to French-language dialog skills. For more information about contextual entities, see  rect /docs/assistant?topic=assistant-entities#entities-annotations-overview Creating entities {: external}. \\n\\n New API version\\n:   The current API version is now  2020-04-01 . The following change was made with this version: \\n\\n  - An `integrations` property was added to the V2 `/message` context. The service now expects the `context.integrations` property to conform to a specific schema in which the allowed values are as follows:\\n\\n    - `chat`\\n    - `facebook`\\n    - `intercom`\\n    - `liveengage`\\n    - `salesforce`\\n    - `slack`\\n    - `service_desk`\\n    - `text_messaging`\\n    - `voice_telephony`\\n    - `zendesk`\\n\\nIf your app uses a `context.integrations` property that does not conform to the schema, a 400 error code will be returned.\\n  \\n\\n 31 March 2020 \\n\\n {: #watson-assistant-mar312020}\\n{: release-note} \\n\\n The web chat integration was updated\\n:   The update adds an  isTrackingEnabled  parameter. You can add this parameter and set it to  false  to add the  X-Watson-Learning-Opt-Out  header to each  /message  request that originates from the web chat. For more information about the header, see  rect https://cloud.ibm.com/apidocs/assistant/assistant-v2#data-collection Data collection {: external}. For more information about the parameter, see  rect https://integrations.us-south.assistant.watson.cloud.ibm.com/web/developer-documentation/api-configuration Configuration {: external}. \\n\\n 26 March 2020 \\n\\n {: #watson-assistant-mar262020}\\n{: release-note} \\n\\n The Covid-19 content catalog is available in Brazilian Portuguese, French, and Spanish\\n:   The content catalog defines a group of intents that recognize the common types of questions people ask about the novel coronavirus. You can use the catalog to jump-start development of chatbots that can answer questions about the virus and help to minimize the anxiety and misinformation associated with it. For more information about how to add a content catalog to your skill, see  rect /docs/assistant?topic=assistant-catalog Using content catalogs {: external}. \\n\\n 19 March 2020 \\n\\n {: #watson-assistant-mar192020}\\n{: release-note} \\n\\n A Covid-19 content catalog is available\\n:   The English-only content catalog defines a group of intents that recognize the common types of questions people ask about the novel coronavirus. The World Health Organization characterized COVID-19 as a pandemic on 11 March 2020. You can use the catalog to jump-start development of chatbots that can answer questions about the virus and help to minimize the anxiety and misinformation associated with it. For more information about how to add a content catalog to your skill, see  rect /docs/assistant?topic=assistant-catalog Using content catalogs {: external}. \\n\\n Fixed a problem with missing User Conversation data\\n:   A recent change resulted in no logs being shown in the User Conversations page unless you had a skill as the chosen data source. And the chosen skill had to be the same skill (with same skill ID) that was connected to the assistant when the user messages were submitted. \\n\\n 18 March 2020 \\n\\n {: #watson-assistant-mar182020}\\n{: release-note} \\n\\n Technology preview is discontinued\\n:   The technology preview user interface was replaced with the {{site.data.keyword.conversationshort}} standard user interface. If you used an Actions page to create actions and steps for your skill previously, you cannot access the Actions page anymore. Instead, use the Intents and Dialog pages to work with your skill. \\n\\n 16 March 2020 \\n\\n {: #watson-assistant-mar162020}\\n{: release-note} \\n\\n Instructions updated for Slack integrations\\n:   The steps required to set up a Slack integration have changed to reflect permission assignment changes that were made by Slack. For more information, see  rect /docs/assistant?topic=assistant-deploy-slack Integrating with Slack {: external}. \\n\\n Order of response types is preserved\\n:   Previously, if you included a response type of  Search skill  in a list of response types for a dialog node, the search results were displayed last despite its placement in the list. This behavior was changed to show the search results in the appropriate order, namely in the sequence in which the search skill response type is listed for the dialog node. \\n\\n 10 March 2020 \\n\\n {: #watson-assistant-mar102020}\\n{: release-note} \\n\\n Contextual entity support is generally available\\n:   You can add contextual entities to English-language dialog skills. For more information about contextual entities, see  rect /docs/assistant?topic=assistant-entities#entities-annotations-overview Creating entities {: external}. \\n\\n French language support added for autocorrection\\n:   Autocorrection helps your assistant understand what your customers want. It corrects misspellings in the input that customers submit before the input is evaluated. With more precise input, your assistant can more easily recognize entity mentions and understand the customer\\'s intent. See  rect /docs/assistant?topic=assistant-dialog-runtime-spell-check Correcting user input {: external} for more details. \\n\\n The new system entities are used by new skills\\n:   For new English, Brazilian Portuguese, Czech, Dutch, French, German, Italian, and Spanish dialog skills, the new system entities are enabled automatically. If you decide to turn on a system entity and add it to your dialog, it\\'s the new and improved version of the system entity that is used. For more information, see  rect /docs/assistant?topic=assistant-system-entities New system entities {: external}. \\n\\n 6 March 2020 \\n\\n {: #watson-assistant-mar062020}\\n{: release-note} \\n\\n Transfer a web chat conversation to a human agent\\n:   Delight your customers with 360-degree support by integrating your web chat with a third-party service desk solution. When a customer asks to speak to a person, you can connect them to an agent through a service desk solution, such as Zendesk or Salesforce. Service desk support is a beta feature. For more information, see  rect /docs/assistant?topic=assistant-deploy-web-chat#deploy-web-chat-haa Adding support for transfers {: external}. \\n\\n 2 March 2020 \\n\\n {: #watson-assistant-mar022020}\\n{: release-note} \\n\\n Known issue accessing logs\\n:   If you cannot access user logs from the Analytics page, ask the owner of the service instance for the skill to change your service level access to make you a Manager of the instance. For more information about access control, see  rect /docs/assistant?topic=assistant-access-control Managing access to resources {: external}. \\n\\n 1 March 2020 deprecation notice \\n\\n {: #watson-assistant-mar012020}\\n{: release-note} \\n\\n March 2020 deprecation notice\\n:   To help us continue to improve and expand the capabilities of the assistants you build with {{site.data.keyword.conversationshort}}, we are deprecating some of the older technologies. Support for the older technologies will end in June 2020. Take action now to test and adopt the new technologies, so your skills and assistants will be ready when the old technologies stop being supported. \\n\\n  The following technologies are being deprecated:\\n\\n- **Legacy version of numeric system entities**\\n\\n    We released a whole new infrastructure for our numeric system entities across all languages except Chinese, Korean, Japanese and Arabic. The updated `@sys-number`, `@sys-date`, `@sys-time`, `@sys-currency`, and `@sys-percentage` entities provide superior number recognition with higher precision. For more information about the new system entities, see [System entity details](/docs/assistant?topic=assistant-system-entities){: external}.\\n\\n    The old version of the numeric system entities will stop being supported in June 2020 for English, Brazilian Portuguese, Czech, Dutch, French, German, Italian, and Spanish dialog skills.\\n\\n    ***Action***: In each dialog skill where you use numeric system entities, go to the **Options>System entities** page and turn on the new system entities. Take some time to test the new version of system entities with your own dialogs to make sure they continue to work as expected. As you adopt the new system entities, share your feedback about your experience with the new technology.\\n\\n- **Person and location system entities**\\n\\n    The `@sys-person` and `@sys-location` system entities, which were available in English as a beta only, are being deprecated. Consider using contextual entities as a way to capture these types of proper nouns. Instead of trying to add a dictionary-based entity that covers every permutation of the names for people or cities, for example, you can teach your skill to recognize the context in which such names are used. For more information about contextual entities, see [Annotation-based method](/docs/assistant?topic=assistant-entities#entities-annotations-overview){: external}.\\n\\n    ***Action***: Remove references to `@sys-person` and `@sys-location` from your dialogs. Turn off the `@sys-person` and `@sys-location` system entities to prevent yourself or others from adding them to a dialog inadvertently.\\n\\n- **Irrelevance detection**\\n\\n    We revised the irrelevance detection classification algorithm to make it even smarter out of the box. Now, even before you begin to teach the system about irrelevant requests, it is able to recognize user input that your skill is not designed to address. For more information, see [Irrelevance detection](/docs/assistant?topic=assistant-irrelevance-detection){: external}.\\n\\n    ***Action***: In each dialog skill, go to the **Options>Irrelevance detection** page and turn on the new classification model. Make sure everything works as well, if not better, than it did before. Share your feedback.\\n\\n- **Old API version dates**\\n\\n    v1 API versions that are dated on or before `2017-02-03` are being deprecated. When you send calls to the service with earlier API version dates, they will receive properly formatted and valid responses for a time, so you can gracefully transition to using the later API versions. However, the confidence scores and other results that are sent in the response will reflect those generated by a more recent version of the API.\\n\\n    ***Action***: Do some testing of calls with the latest version to verify that things work as expected. Some functionality has changed over the last few years. After testing, change the version date on any API calls that you make from your applications.\\n  \\n\\n 28 February 2020 \\n\\n {: #watson-assistant-feb282020}\\n{: release-note} \\n\\n {{site.data.keyword.conversationfull}} is available in {{site.data.keyword.icp4dfull_notm}}\\n:   The service can be installed on-premises in environments where {{site.data.keyword.icp4dfull_notm}} 2.5 is installed on OpenShift or standalone. See the  rect https://www.ibm.com/support/knowledgecenter/SSQNUZ_2.5.0/cpd/svc/watson/assistant-overview.html {{site.data.keyword.icp4dfull_notm}} documentation {: external} for more information. \\n\\n 26 February 2020 \\n\\n {: #watson-assistant-feb262020}\\n{: release-note} \\n\\n Slot  Save it as  field retains your edits\\n:   When you edit what gets saved for a slot by using the JSON editor to edit the value of the context variable to be something other than what is specified in the  Check for  field, your changes are kept even if someone subsequently clicks the  Save it as  field. \\n\\n 20 February 2020 \\n\\n {: #watson-assistant-feb202020}\\n{: release-note} \\n\\n Access control changes are coming\\n:   Notifications are displayed in the user interface for anyone with Reader and Writer level access to a service instance. The notification explains that access control is going to change soon, and that what they can do in the instance will change unless they are given Manager service access beforehand. For more information, see  rect /docs/assistant?topic=assistant-access-control Preventing loss of access {: external}. \\n\\n 14 February 2020 \\n\\n {: #watson-assistant-feb142020}\\n{: release-note} \\n\\n More web chat color settings\\n:   You can now specify the color of more elements of the web chat integration. For example, you can define one color for the web chat window header. You can define a different color for the user message bubble. And another color for interactive components, such as the launcher button for the chat. \\n\\n 13 February 2020 \\n\\n {: #watson-assistant-feb132020}\\n{: release-note} \\n\\n Track API events\\n:   Premium plan users can now use the Activity Tracker service to track how users and applications interact with {{site.data.keyword.conversationfull}} in {{site.data.keyword.cloud}}. See  rect /docs/assistant?topic=assistant-at-events Activity Tracker events {: external}. \\n\\n 5 February 2020 \\n\\n {: #watson-assistant-feb052020}\\n{: release-note} \\n\\n New API version\\n:   The current API version is now  2020-02-05 . The following changes were made with this version: \\n\\n  - When a dialog node\\'s response type is `connect-to-agent`, the node\\'sÂ\\xa0`title` is used as the `topic` value. Previously, `user_label` was used.\\n\\n- The `alternate_intents` property is stored as a Boolean value instead of a String.\\n  \\n\\n 4 February 2020 \\n\\n {: #watson-assistant-feb042020}\\n{: release-note} \\n\\n Product user interface makeover\\n:   The UI has been updated to be more intuitive, responsive, and consistent across its pages. While the look and feel of the UI elements has changed, their function has not. \\n\\n Requesting early access\\n:   The button you click to request participation in the early access program has moved from the Skills page to the user account menu. For more information, see  rect /docs/assistant?topic=assistant-feedback#feedback-beta Feedback {: external}. \\n\\n 24 January 2020 \\n\\n {: #watson-assistant-jan242020}\\n{: release-note} \\n\\n New system entities are now generally available in multiple languages\\n:   The new and improved numeric system entities are now generally available in all supported languages, except Arabic, Chinese, Japanese, and Korean, where they are available as a beta feature. They are not used by your dialog skill unless you enable them from the  Options>System entities  page. For more information, see  rect /docs/assistant?topic=assistant-system-entities New system entities {: external}. \\n\\n 14 January 2020 \\n\\n {: #watson-assistant-jan142020}\\n{: release-note} \\n\\n Fixed an error message that was displayed when opening an instance\\n:   An error that was displayed when you launched {{site.data.keyword.conversationshort}} from the {{site.data.keyword.cloud}} dashboard has been fixed. Previously, an error message that said,  Module \\'ui-router\\' is not available! You either misspelled the module name or forgot to load it  would sometimes be displayed. \\n\\n 12 December 2019 \\n\\n {: #watson-assistant-dec122019}\\n{: release-note} \\n\\n Support for private network endpoints\\n:   Users of Premium plans can create private network endpoints to connect to {{site.data.keyword.conversationshort}} over a private network. Connections to private network endpoints do not require public internet access. For more information, see  rect /docs/assistant?topic=assistant-security Protecting sensitive information {: external}. \\n\\n Full support for IBM Cloud IAM\\n:   {{site.data.keyword.conversationshort}} now supports the full implementation of {{site.data.keyword.cloud_notm}} Identity and Access Management (IAM). API keys for Watson services are no longer limited to a single service instance. You can create access policies and API keys that apply to more than one service, and you can grant access between services. \\n\\n  - To support this change, the API service endpoints use a different domain and include the service instance ID. The pattern is `api.{location}.{offering}.watson.cloud.ibm.com/instances/{instance_id}`.\\n\\n    Example URL for an instance hosted in the Dallas location: `api.us-south.assistant.watson.cloud.ibm.com/instances/6bbda3b3-d572-45e1-8c54-22d6ed9e52c2`\\n\\n    The previous public endpoint domain was `watsonplatform.net`.\\n\\n    For more information, see the [API reference](https://cloud.ibm.com/apidocs/assistant/assistant-v2#service-endpoint){: external}.\\n\\n    These URLs do not introduce a breaking change. The new URLs work both for your existing service instances and for new instances. The original URLs continue to work on your existing service instances for at least one year (until December 2020).\\n\\n- For more information, see [Authenticating to Watson services](/docs/watson?topic=watson-iam){: external}.\\n  \\n\\n 26 November 2019 \\n\\n {: #watson-assistant-nov262019}\\n{: release-note} \\n\\n Disambiguation is available to everyone\\n:   Disambiguation is now available to users of every plan type. \\n\\n  The following changes were made to how it functions:\\n\\n- The text that you add to the dialog **node name** field now matters.\\n\\n- The text in the node name field might be shown to customers. The disambiguation feature shows it to customers if the assistant needs to ask them to clarify their meaning. The text you add as the node name must identify the purpose of the node clearly and succinctly, such as *Place an order* or *Get plan information*.\\n\\n    If the *External node name* field exists and contains a summary of the node\\'s purpose, then its summary is shown in the disambiguation list instead. Otherwise, the dialog node name content is shown.\\n\\n  - Disambiguation is enabled automatically for all nodes. You can disable it for the entire dialog or for individual dialog nodes.\\n\\n  - When testing, you might notice that the order of the options in the disambiguation list changes from one test run to the next. Don\\'t worry; this new behavior is intended. As part of work being done to help the assistant learn automatically from user choices, the order of the options in the disambiguation list is being randomized on purpose. Changing the order helps to avoid bias that can be introduced by a percentage of people who always pick the first option without first reviewing their choices.\\n  \\n\\n 12 November 2019 \\n\\n {: #watson-assistant-nov122019}\\n{: release-note} \\n\\n Slot prompt JSON editor\\n:   You can now use the context or JSON editors for the slot response field where you define the question that your assistant asks to get information it needs from the customer. For more information about slots, see  rect /docs/assistant?topic=assistant-dialog-slots Gathering information with slots {: external}. \\n\\n New South Korea location\\n:   You can now create {{site.data.keyword.conversationshort}} instances in the Seoul location. As with other locations, the {{site.data.keyword.cloud_notm}} Seoul location uses token-based Identity and Access Management (IAM) authentication. \\n\\n Technology preview\\n:   A technology preview experience was released. A select set of new users are being presented with a new user interface that takes a different approach to building an assistant. \\n\\n 7 November 2019 \\n\\n {: #watson-assistant-nov072019}\\n{: release-note} \\n\\n Irrelevance detection has been added\\n:   When enabled, a supplemental model is used to help identify utterances that are irrelevant and should not be answered by the dialog skill. This new model is especially beneficial for skills that have not been trained on what subjects to ignore. This feature is available for English skills only. For more information, see  rect /docs/assistant?topic=assistant-irrelevance-detection Irrelevance detection {: external}. \\n\\n Time zone support for now() method\\n:   You can now specify the time zone for the date and time that is returned by the  now()  method. See  rect /docs/assistant?topic=assistant-dialog-methods#dialog-methods-dates-now Now() {: external}. \\n\\n 24 October 2019 \\n\\n {: #watson-assistant-oct242019}\\n{: release-note} \\n\\n Testing improvement\\n:   You can now see the top three intents that were recognized in a test user input from the \"Try it out\" pane. For more details, see  rect /docs/assistant?topic=assistant-dialog-tasks#dialog-tasks-test Testing your dialog {: external}. \\n\\n Error message when opening an instance\\n:   When you launch {{site.data.keyword.conversationshort}} from the {{site.data.keyword.cloud}} dashboard, you might see an error message that says,  Module \\'ui-router\\' is not available! You either misspelled the module name or forgot to load it.  You can ignore the message. Refresh the web browser page to close the notification. \\n\\n 16 October 2019 \\n\\n {: #watson-assistant-oct162019}\\n{: release-note} \\n\\n The changes from 14 October are now available in Dallas. \\n\\n 14 October 2019 \\n\\n {: #watson-assistant-oct142019}\\n{: release-note} \\n\\n Deploy your assistant in minutes\\n:   Create a web chat integration to embed your assistant into a page on your website as a chat widget. See  rect /docs/assistant?topic=assistant-deploy-web-chat Integrating with your own website {: external}. \\n\\n UI changes\\n:   The main menu options of  Assistants  and  Skills  have moved from being displayed in the page header to being shown as icons on the side of the page. The tabbed pages for the tools you use to develop a dialog skill were moved to a secondary navigation bar that is displayed when you open the skill. \\n\\n Rich response types are supported in a dialog node with slots\\n:   You can display a list of options for a user to choose from as the prompt for a slot, for example. \\n\\n Change to switching service instances\\n:   Where you go to switch service instances has changed. See  rect /docs/assistant?topic=assistant-assistant-settings#assistant-settings-switch-instance Switching the service instance {: external}. \\n\\n Known issue: Cannot rename search skills\\n:   You currently cannot rename a search skill after you create it. \\n\\n 9 October 2019 \\n\\n {: #watson-assistant-oct092019}\\n{: release-note} \\n\\n New system entities changes\\n:   The following updates have been made: \\n\\n  - In addition to English and German, the new numeric system entities are now available in these languages: Brazilian Portuguese, Czech, French, Italian, and Spanish.\\n\\n- The `part_of_day` property of the `@sys-time` entity now returns a time range instead of a single time value.\\n  \\n\\n 23 September 2019 \\n\\n {: #watson-assistant-sep232019}\\n{: release-note} \\n\\n Dallas updates\\n:   The updates from 20 September are now available to service instances hosted in Dallas. \\n\\n 20 September 2019 \\n\\n {: #watson-assistant-sep202019}\\n{: release-note} \\n\\n Inactivity timeout increase\\n:   The maximum inactivity timeout can now be extended to up to 7 days for Premium plans. See  rect /docs/assistant?topic=assistant-assistant-settings Changing the inactivity timeout setting {: external}. \\n\\n Pattern entity fix\\n:   A change that was introduced in the previous release which changed all alphabetic characters to lowercase at the time an entity value was added has been fixed. The case of any alphabetic characters that are part of a pattern entity value are no longer changed when the value is added. \\n\\n Dialog text response syntax fix\\n:   Fixed a bug in which the format of a dialog response reverted to an earlier version of the JSON syntax. Standard text responses were being saved as  output.text  instead of  output.generic . For more information about the  output  object, see  rect /docs/assistant?topic=assistant-dialog-runtime-context Anatomy of a dialog call {: external}. \\n\\n 13 September 2019 \\n\\n {: #watson-assistant-sep132019}\\n{: release-note} \\n\\n Improved Entities and Intents page responsiveness\\n:   The Entities and Intents pages were updated to use a new JavaScript library that increases the page responsiveness. As a result, the look of some graphical user interface elements, such as buttons, changed slightly, but the function did not. \\n\\n Creating contextual entities got easier\\n:   The process you use to annotate entity mentions from intent user examples was improved. You can now put the intent page into annotation mode to more easily select and label mentions. See  rect /docs/assistant?topic=assistant-entities#entities-create-annotation-based Adding contextual entities {: external}. \\n\\n 6 September 2019 \\n\\n {: #watson-assistant-sep062019}\\n{: release-note} \\n\\n Label character limit increase\\n:   The limit to the number of characters allowed for a label that you define for an option response type changed from 64 characters to 2,048 characters. \\n\\n 12 August 2019 \\n\\n {: #watson-assistant-aug122019}\\n{: release-note} \\n\\n New dialog method\\n:   The  getMatch  method was added. You can use this method to extract a specific occurrence of a regular expression pattern that recurs in user input. For more details, see the  rect /docs/assistant?topic=assistant-dialog-methods#dialog-methods-strings-getMatch dialog methods {: external} topic. \\n\\n 9 August 2019 \\n\\n {: #watson-assistant-aug092019}\\n{: release-note} \\n\\n Introductory product tour\\n:   For some first-time users, a new introductory product tour is shown that the user can choose to follow to perform the initial steps of creating an assistant. \\n\\n 6 August 2019 \\n\\n {: #watson-assistant-aug062019}\\n{: release-note} \\n\\n \\t Webhook callouts and Dialog page improvements are available in Dallas. \\n \\n\\n 1 August 2019 \\n\\n {: #watson-assistant-aug012019}\\n{: release-note} \\n\\n Webhook callouts are available\\n:   Add webhooks to dialog nodes to make programmatic calls to an external application as part of the conversational flow. The new Webhook support simplifies the callout implementation process. (No more  action  JSON objects required.) For more information, see  rect /docs/assistant?topic=assistant-dialog-webhooks Making a programmatic call from a dialog node {: external}. \\n\\n Improved dialog page responsiveness\\n:   In all service instances, the user interface of the Dialog page was updated to use a new JavaScript library that increases the page responsiveness. As a result, the look of some graphical user interface elements, such as buttons, changed slightly, but the function did not. \\n\\n 31 July 2019 \\n\\n {: #watson-assistant-jul312019}\\n{: release-note} \\n\\n Search skill and autocorrection are generally available\\n:   The search skill and spelling autocorrection features, which were previously available as beta features, are now generally available. \\n\\n  - Search skills can be created by users of Plus or Premium plans only.\\n\\n- You can enable autocorrection for English-language dialog skills only. It is enabled automatically for new English-language dialog skills.\\n  \\n\\n 26 July 2019 \\n\\n {: #watson-assistant-jul262019}\\n{: release-note} \\n\\n Missing skills issue is resolved\\n:   In some cases, workspaces that were created through the API only were not being displayed when you opened the {{site.data.keyword.conversationshort}} user interface. This issue has been addressed. All workspaces that you create by using the API are displayed as dialog skills when you open the user interface. \\n\\n 23 July 2019 \\n\\n {: #watson-assistant-ju23l2019}\\n{: release-note} \\n\\n Dialog search is fixed\\n:   In some skills, the search function was not working in the Dialog page. The issue is now fixed. \\n\\n 17 July 2019 \\n\\n {: #watson-assistant-jul172019}\\n{: release-note} \\n\\n Disambiguation choice limit\\n:   You can now set the maximum number of options to show to users when the assistant asks them to clarify what they want to do. For more information about disambiguation, see  rect /docs/assistant?topic=assistant-dialog-runtime#dialog-runtime-disambiguation Disambiguation {: external}. \\n\\n Dialog search issue\\n:   In some skills, the search function is not working in the Dialog page. A new user interface library, which increases the page responsiveness, is being rolled out to existing service instances in phases. This search issue affects only dialog skills for which the new library is not yet enabled. \\n\\n Missing skills issue\\n:   In some cases, workspaces that were created through the API only are not being displayed when you open the {{site.data.keyword.conversationshort}} user interface. Normally, these workspaces are displayed as dialog skills. If you do not see your skills from the UI, don\\'t worry; they are not gone. Contact support to report the issue, so the team can enable the workspaces to be displayed properly. \\n\\n 15 July 2019 \\n\\n {: #watson-assistant-jul152019}\\n{: release-note} \\n\\n Numeric system entities upgrade available in Dallas\\n:   The new system entities are now also available as a beta feature for instances that are hosted in Dallas. See  rect /docs/assistant?topic=assistant-system-entities New system entities {: external}. \\n\\n 12 June 2019 \\n\\n {: #watson-assistant-jun122019}\\n{: release-note} \\n\\n Numeric system entities upgrade\\n:   New system entities are available as a beta feature that you can enable in dialog skills that are written in English or German. The revised system entities offer better date and time understanding. They can recognize date and number spans, national holiday references, and classify mentions with more precision. For example, a date such as  May 15  is recognized as a date mention( @sys-date:2019-05-15 ), and is  not  also identified as a number mention ( @sys-number:15 ). See  rect /docs/assistant?topic=assistant-system-entities New system entities {: external}. \\n\\n A Plus Trial plan is available\\n:   You can use the free Plus Trial plan to try out the features of the Plus plan as you make a purchasing decision. The trial lasts for 30 days. After the trial period ends, if you do not upgrade to a Plus plan, your Plus Trial instance is converted to a Lite plan instance. \\n\\n 23 May 2019 \\n\\n {: #watson-assistant-may232019}\\n{: release-note} \\n\\n Updated navigation\\n:   The home page was removed, and the order of the Assistants and Skills tabs was reversed. The new tab order encourages you to start your development work by creating an assistant, and then a skill. \\n\\n Disambiguation settings have moved\\n:   The toggle to enable disamibugation, which is a feature that is available to Plus and Premium plan users only, has moved. The  Settings  button was removed from the  Dialog  page. You can now enable disambiguation and configure it from the skill\\'s  Options  tab. \\n\\n An introductory tour is now available\\n:   A short product tour is now displayed when a new service instance is created. Brand new users are also given help as they start development. A new assistant is created for them automatically. Informational popups are displayed to introduce the product user interface features, and guide the new user toward taking the key first step of creating a dialog skill. \\n\\n 10 April 2019 \\n\\n {: #watson-assistant-apr102019}\\n{: release-note} \\n\\n Autocorrection is now available\\n:   Autocorrection is a beta feature that helps your assistant understand what your customers want. It corrects misspellings in the input that customers submit before the input is evaluated. With more precise input, your assistant can more easily recognize entity mentions and understand the customer\\'s intent. See  rect /docs/assistant?topic=assistant-dialog-runtime-spell-check Correcting user input {: external} for more details. \\n\\n 22 March 2019 \\n\\n {: #watson-assistant-mar222019}\\n{: release-note} \\n\\n Introducing search skill\\n:   A search skill helps you to make your assistant useful to customers faster. Customer inquiries that you did not anticipate and so have not built dialog logic to handle can be met with useful responses. Instead of saying it can\\'t help, the assistant can query an external data source to find relevant information to share in its response. Over time, you can build dialog responses to answer customer queries that require follow-up questions to clarify the user\\'s meaning or for which a short and clear response is suitable. And you can use search skill responses to address more open-ended customer queries that require a longer explanation. This beta feature is available to users of Premium and Plus service plans only. \\n\\n  See [Building a search skill](/docs/assistant?topic=assistant-skill-search-add){: external} for more details.\\n  \\n\\n 14 March 2019 \\n\\n {: #watson-assistant-mar142019}\\n{: release-note} \\n\\n 4 March 2019 \\n\\n {: #watson-assistant-mar042019}\\n{: release-note} \\n\\n Simplified navigation\\n:   The sidebar navigation with separate  Build ,   Improve , and  Deploy  tabs has been removed. Now, you can get to all the tools you need to build a dialog skill from the main skill page. \\n\\n Improve page is now called Analytics\\n:   The informational metrics that Watson generates from conversations between your users and your assistant moved from the  Improve  tab of the sidebar to a new tab on the main skill page called  Analytics . \\n\\n 1 March 2019 \\n\\n {: #watson-assistant-mar012019}\\n{: release-note} \\n\\n 28 February 2019 \\n\\n {: #watson-assistant-feb282019}\\n{: release-note} \\n\\n New API version\\n:   The current API version is now  2019-02-28 . The following changes were made with this version: \\n\\n  - The order in which conditions are evaluated in nodes with slots has changed. Previously, if you had a node with slots that allowed for digressions away, the `anything_else` root node was triggered before any of the slot level Not found conditions could be evaluated. The order of operations has been changed to address this behavior. Now, when a user digresses away from a node with slots, all the root nodes except the `anything_else` node are processed. Next, the slot level Not found conditions are evaluated. And, finally, the root level `anything_else` node is processed. To better understand the full order of operations for a node with slots, see [Slot usage tips](/docs/assistant?topic=assistant-dialog-slots#dialog-slots-node-level-handler){: external}.\\n\\n- Strings that begin with a number sign (#) in the `context` or `output` objects of a message are no longer treated as intent references.\\n\\n  Previously, these strings were treated as intents automatically. For example, if you specified a context variable, such as `\"color\":\"#FFFFFF\"`, then the hex color code (#FFFFFF) would be treated as an intent. Your assistant would check whether an intent named #FFFFFF was detected in the user\\'s input, and if not, would replace #FFFFFF with `false`. This replacement no longer occurs.\\n\\n  Similarly, if you included a number sign (#) in the text string in a node response, you used to have to escape it by preceding it with a back slash (`\\\\`). For example, `We are the \\\\#1 seller of lobster rolls in Maine.` You no longer need to escape the `#` symbol in a text response.\\n\\n  This change does not apply to node or condtional response conditions. Any strings that begin with a number sign (#) which are specified in conditions continue to be treated as intent references. Also, you can use SpEL expression syntax to force the system to treat a string in the `context` or `output` objects of a message as an intent. For example, specify the intent as `<? #intent-name ?>`.\\n  \\n\\n 25 February 2019 \\n\\n {: #watson-assistant-feb252019}\\n{: release-note} \\n\\n Slack integration enhancement\\n:   You can now choose the type of event that triggers your assistant in a Slack channel. Previously, when you integrated your assistant with Slack, the assistant interacted with users through a direct message channel. Now, you can configure the assistant to listen for mentions, and respond when it is mentioned in other channels. You can choose to use one or both event types as the mechanism through which your assistant interacts with users. \\n\\n 11 February 2019 \\n\\n {: #watson-assistant-feb112019}\\n{: release-note} \\n\\n Integrate with Intercom\\n:   Intercom, a leading customer service messaging platform, has partnered with IBM to add a new agent to the team, a virtual {{site.data.keyword.conversationshort}}. You can integrate your assistant with an Intercom application to enable the app to seamlessly pass user conversations between your assistant and human support agents. This integration is available to Plus and Premium plan users only. See  rect /docs/assistant?topic=assistant-deploy-intercom Integrating with Intercom {: external} for more details. \\n\\n 8 February 2019 \\n\\n {: #watson-assistant-feb082019}\\n{: release-note} \\n\\n Version your skills\\n:   You can now capture a snapshot of the the intents, entities, dialog, and configuration settings for a skill at key points during the development process. With versions, it\\'s safe to get creative. You can deploy new design approaches in a test environment to validate them before you apply any updates to a production deployment of your assistant. See  rect /docs/assistant?topic=assistant-versions Creating skill versions {: external} for more details. \\n\\n Arabic content catalog\\n:   Users of Arabic-language skills can now add prebuilt intents to their dialogs. See  rect /docs/assistant?topic=assistant-catalog Using content catalogs {: external} for more information. \\n\\n 17 January 2019 \\n\\n {: #watson-assistant-jan172019}\\n{: release-note} \\n\\n Czech language support is generally available\\n:   Support for the Czech language is no longer classified as beta; it is now generally available. See  rect /docs/assistant?topic=assistant-language-support Supported languages {: external} for more information. \\n\\n Language support improvements\\n:   The language understanding components were updated to improve the following features: \\n\\n  - German and Korean system entities\\n\\n- Intent classification tokenization for Arabic, Dutch, French, Italian, Japanese, Portuguese, and Spanish\\n  \\n\\n 4 January 2019 \\n\\n {: #watson-assistant-jan042019}\\n{: release-note} \\n\\n IBM Cloud Functions in DC and London locations\\n:   You can now make programmatic calls to IBM Cloud Functions from the dialog of an assistant in a service instance that is hosted in the London and Washington, DC data centers. See  rect /docs/assistant?topic=assistant-dialog-actions-client Making programmatic calls from a dialog node {: external}. \\n\\n New methods for working with arrays\\n:   The following SpEL expression methods are available that make it easier to work with array values in your dialog: \\n\\n  - **JSONArray.filter**: Filters an array by comparing each value in the array to a value that can vary based on user input.\\n- **JSONArray.includesIntent**: Checks whether an `intents` array contains a particular intent.\\n- **JSONArray.indexOf**: Gets the index number of a specific value in an array.\\n- **JSONArray.joinToArray**: Applies formatting to values that are returned from an array.\\n  \\n\\n See the  rect /docs/assistant?topic=assistant-dialog-methods#dialog-methods-arrays array method documentation {: external} for more details. \\n\\n 13 December 2018 \\n\\n {: #watson-assistant-dec132018}\\n{: release-note} \\n\\n London data center\\n:   You can now create {{site.data.keyword.conversationshort}} service instances that are hosted in the London data center without syndication. See  rect /docs/assistant?topic=assistant-services-information#services-information-regions Data centers {: external} for more details. \\n\\n Dialog node limit changes\\n:   The dialog node limit was temporarily changed from 100,000 to 500 for new Standard plan instances. This limit change was later reversed. If you created a Standard plan instance during the time frame in which the limit was in effect, your dialogs might be impacted. The limit was in effect for skills created between 10 December and 12 December 2018. The lower limits will be removed from all impacted instances in January. If you need to have the lower limit lifted before then, open a support ticket. \\n\\n 1 December 2018 \\n\\n {: #watson-assistant-dec012018}\\n{: release-note} \\n\\n Determine the number of dialog nodes\\n:   To determine the number of dialog nodes in a dialog skill, do one of the following things: \\n\\n   - From the tool, if it is not associated with an assistant already, add the dialog skill to an assistant, and then view the skill tile from the main page of the assistant. The *trained data* section lists the number of dialog nodes.\\n\\n - Send a GET request to the /dialog_nodes API endpoint, and include the `include_count=true` parameter. For example:\\n\\n    ```curl\\n    curl -u \"apikey:{apikey}\" \"https://{service-hostname}/assistant/api/v1/workspaces/{workspace_id}/dialog_nodes?version=2018-09-20&include_count=true\"\\n    ```\\n    {: codeblock}\\n\\n    where {service-hostname} is the appropriate URL for your instance. For more details, see [Service endpoint](https://cloud.ibm.com/apidocs/assistant/assistant-v1#service-endpoint){: external}.\\n\\n    In the response, the `total` attribute in the `pagination` object contains the number of dialog nodes.\\n\\n    See [Troubleshooting skill import issues](/docs/assistant?topic=assistant-skill-dialog-add#skill-dialog-add-import-errors){: external} for information about how to edit skills that you want to continue using.\\n  \\n\\n 27 November 2018 \\n\\n {: #watson-assistant-nov272018}\\n{: release-note} \\n\\n A new service plan, the Plus plan, is available\\n:   The new plan offers premium-level features at a lower price point. Unlike previous plans, the Plus plan is a user-based billing plan. It measures usage by the number of unique users that interact with your assistant over a given time period. To get the most from the plan, if you build your own client application, design your app such that it defines a unique ID for each user, and passes the user ID with each /message API call. For the built-in integrations, the session ID is used to identify user interactions with the assistant. See  rect /docs/assistant?topic=assistant-services-information#services-information-user-based-plans User-based plans {: external} for more information. \\n\\n  | Artifact | Limit |\\n| --- | --- |\\n| Assistants | 100 |\\n| Contextual entities | 20 |\\n| Contextual entity annotations | 2,000 |\\n| Dialog nodes | 100,000 |\\n| Entities | 1,000 |\\n| Entity synonyms | 100,000 |\\n| Entity values | 100,000 |\\n| Intents | 2,000 |\\n| Intent user examples | 25,000 |\\n| Integrations | 100 |\\n| Logs | 30 days |\\n| Skills | 50 |\\n{: caption=\"Plus plan limits\" caption-side=\"top\"}\\n  \\n\\n User-based Premium plan\\n:   The Premium plan now bases its billing on the number of active unique users. If you choose to use this plan, design any custom applications that you build to properly identify the users who generate /message API calls. See  rect /docs/assistant?topic=assistant-services-information#services-information-user-based-plans User-based plans {: external} for more information. \\n\\n  Existing Premium plan service instances are not impacted by this change; they continue to use API-based billing methods. Only existing Premium plan users will see the API-based plan listed as the *Premium (API)* plan option.\\n\\nSee {{site.data.keyword.conversationshort}} [service plan options](https://www.ibm.com/cloud/watson-assistant/pricing/){: external} for more information about all available service plans.\\n  \\n\\n 20 November 2018 \\n\\n {: #watson-assistant-nov202018}\\n{: release-note} \\n\\n **Recommendations are discontinued\\n:   The Recomendations section on the Improve tab was removed. Recommendations was a beta feature available to Premium plan users only. It recommended actions that users could take to improve their training data. Instead of consolidating recommendations in one place, recommendations are now being made available from the parts of the tool where you make actual training data changes. For example, while adding entity synonyms, you can now opt to see a list of synonymous terms that are recommended by Watson. If you are looking for other ways to analyze your user conversation logs in more detail, consider using Jupyter notebooks. See  rect /docs/assistant?topic=assistant-logs-resources Advanced tasks {: external} for more details. \\n\\n 9 November 2018 \\n\\n {: #watson-assistant-nov092018}\\n{: release-note} \\n\\n Major user interface revision\\n:   The {{site.data.keyword.conversationshort}} service has a new look and added features. \\n\\n  This version of the tool was evaluated by beta program participants over the past several months.\\n\\n- **Skills**: What you think of as a *workspace* is now called a *skill*. A *dialog skill* is a container for the natural language processing training data and artifacts that enable your assistant to understand user questions, and respond to them.\\n\\n**Where are my workspaces?** Any workspaces that you created previously are now listed in your service instance as skills. Click the **Skills** tab to see them. For more information, see [Adding skills to your assistant](/docs/assistant?topic=assistant-skill-add){: external}.\\n\\n- **Assistants**: You can now publish your skill in just two steps. Add your skill to an assistant, and then set up one or more integrations with which to deploy your skill. The assistant adds a layer of function to your skill that enables {{site.data.keyword.conversationshort}} to orchestrate and manage the flow of information for you. See [Assistants](/docs/assistant?topic=assistant-assistants){: external}.\\n\\n- **Built-in integrations**: Instead of going to the **Deploy** tab to deploy your workspace, you add your dialog skill to an assistant, and add integrations to the assistant through which the skill is made available to your users. You do not need to build a custom front-end application and manage the conversation state from one call to the next. However, you can still do so if you want to. See [Adding integrations](/docs/assistant?topic=assistant-deploy-integration-add){: external} for more information.\\n\\n- **New major API version**: A V2 version of the API is available. This version provides access to methods you can use to interact with an assistant at run time. No more passing context with each API call; the session state is managed for you as part of the assistant layer.\\n\\nWhat is presented in the tooling as a dialog skill is effectively a wrapper for a V1 workspace. There are currently no API methods for authoring skills and assistants with the V2 API. However, you can continue to use the V1 API for authoring workspaces. See [API Overview](/docs/assistant?topic=assistant-api-overview){: external} for more details.\\n\\n- **Switching data sources**: It is now easier to improve the model in one skill with user conversation logs from a different skill. You do not need to rely on deployment IDs, but can simply pick the name of the assistant to which a skill was added and deployed to use its data. See [Improving across assistants](/docs/assistant?topic=assistant-logs#logs-deploy-id){: external}.\\n\\n- **Preview links from London instances**: If your service instance is hosted in London, then you must edit the preview link URL. The URL includes a region code for the region where the instance is hosted. Because instances in London are syndicated to Dallas, you must replace the `eu-gb` reference in the URL with `us-south` for the preview web page to render properly.\\n  \\n\\n 8 November 2018 \\n\\n {: #watson-assistant-nov082018}\\n{: release-note} \\n\\n Japanese data center\\n:   You can now create {{site.data.keyword.conversationshort}} service instances that are hosted in the Tokyo data center. See  rect /docs/assistant?topic=assistant-services-information#services-information-regions Data centers {: external} for more details. \\n\\n 30 October 2018 \\n\\n {: #watson-assistant-oct302018}\\n{: release-note} \\n\\n New API authentication process\\n:   The {{site.data.keyword.conversationshort}} service transitioned from using Cloud Foundry to using token-based Identity and Access Management (IAM) authentication in the following regions: \\n\\n  - Dallas (us-south)\\n- Frankfurt (eu-de)\\n\\nFor new service instances, you use IAM for authentication. You can pass either a bearer token or an API key. Tokens support authenticated requests without embedding service credentials in every call. API keys use basic authentication.\\n\\nFor all existing service instances, you continue to use service credentials (`{username}:{password}`) for authentication.\\n  \\n\\n 25 October 2018 \\n\\n {: #watson-assistant-oct252018}\\n{: release-note} \\n\\n Entity synonym recommendations are available in more languages\\n:   Synonym recommendation support was added for the French, Japanese, and Spanish languages. \\n\\n 26 September 2018 \\n\\n {: #watson-assistant-sep262018}\\n{: release-note} \\n\\n {{site.data.keyword.conversationfull}} is available in {{site.data.keyword.icpfull}}\\n:   {{site.data.keyword.conversationfull}} is available in {{site.data.keyword.icpfull}} \\n\\n 21 September 2018 \\n\\n {: #watson-assistant-sep212018}\\n{: release-note} \\n\\n New API version\\n:   The current API version is now  2018-09-20 . In this version, the  errors[].path  attribute of the error object that is returned by the API is expressed as a  rect https://tools.ietf.org/html/rfc6901 JSON Pointer {: external} instead of in dot notation form. \\n\\n Web actions support\\n:   You can now call {{site.data.keyword.openwhisk_short}} web actions from a dialog node. See  rect /docs/assistant?topic=assistant-dialog-actions-client Making programmatic calls from a dialog node {: external} for more details. \\n\\n 15 August 2018 \\n\\n {: #watson-assistant-aug152018}\\n{: release-note} \\n\\n Entity fuzzy matching support improvements\\n:   Fuzzy matching is fully supported for English entities, and the misspelling feature is no longer a Beta-only feature for many other languages. See  rect /docs/assistant?topic=assistant-language-support Supported languages {: external} for details. \\n\\n 6 August 2018 \\n\\n {: #watson-assistant-aug062018}\\n{: release-note} \\n\\n Intent conflict resolution\\n:   The tool can now help you to resolve conflicts when two or more user examples in separate intents are similar to one another. Non-distinct user examples can weaken the training data and make it harder for your assistant to map user input to the appropriate intent at run time. See  rect /docs/assistant?topic=assistant-intents#intents-resolve-conflicts Resolving intent conflicts {: external} for details. \\n\\n Disambiguation\\n:   Enable disambiguation to allow your assistant to ask the user for help when it needs to decide between two or more viable dialog nodes to process for a response. See  rect /docs/assistant?topic=assistant-dialog-runtime#dialog-runtime-disambiguation Disambiguation {: external} for more details. \\n\\n Jump-to fix\\n:   Fixed a bug in the Dialogs tool which prevented you from being able to configure a jump-to that targets the response of a node with the  anything_else  special condition. \\n\\n Digression return message\\n:   You can now specify text to display when the user returns to a node after a digression. The user will have seen the prompt for the node already. You can change the message slightly to let users know they are returning to where they left off. For example, specify a response like,  Where were we? Oh, yes...  See  rect /docs/assistant?topic=assistant-dialog-runtime#dialog-runtime-digressions Digressions {: external} for more details. \\n\\n 12 July 2018 \\n\\n {: #watson-assistant-jul122018}\\n{: release-note} \\n\\n Rich response types\\n:   You can now add rich responses that include elements such as images or buttons in addition to text, to your dialog. See  rect /docs/assistant?topic=assistant-dialog-overview#dialog-overview-multimedia Rich responses {: external} for more information. \\n\\n Contextual entities (Beta)\\n:   Contextual entities are entities that you define by labeling mentions of the entity type that occur in intent user examples. These entity types teach your assistant not only terms of interest, but also the context in which terms of interest typically appear in user utterances, enabling your assistant to recognize never-seen-before entity mentions based solely on how they are referenced in user input. For example, if you annotate the intent user example,  I want a flight to Boston  by labeling  Boston  as a  @destination  entity, then your assistant can recognize  Chicago  as a  @destination  mention in a user input that says,  I want a flight to Chicago.  This feature is currently available for English only. See  rect /docs/assistant?topic=assistant-entities#entities-create-annotation-based Adding contextual entities {: external} for more information. \\n\\n  When you access the tool with an Internet Explorer web browser, you cannot label entity mentions in intent user examples nor edit user example text.\\n  \\n\\n Entity recommendations\\n:   Watson can now recommend synonyms for your entity values. The recommender finds related synonyms based on contextual similarity extracted from a vast body of existing information, including large sources of written text, and uses natural language processing techniques to identify words similar to the existing synonyms in your entity value. For more information see  rect /docs/assistant?topic=assistant-entities#entities-create-dictionary-based Synonyms {: external}. \\n\\n New API version\\n:   The current API version is now  2018-07-10 . This version introduces the following changes: \\n\\n  - The content of the /message `output` object changed from being a `text` JSON object to being a `generic` array that supports multiple rich response types, including `image`, `option`, `pause`, and `text`.\\n- Support for contextual entities was added.\\n- You can no longer add user-defined properties in `context.metadata`. However, you can add them directly to `context`.\\n  \\n\\n Overview page date filter\\n:   Use the new date filters to choose the period for which data is displayed. These filters affect all data shown on the page: not just the number of conversations displayed in the graph, but also the statistics displayed along with the graph, and the lists of top intents and entities. See  rect /docs/assistant?topic=assistant-logs-overview#logs-overview-controls Controls {: external} for more information. \\n\\n Pattern limit expanded\\n:   When using the  Patterns  field to  rect /docs/assistant?topic=assistant-entities#entities-patterns define specific patterns for an entity value {: external}, the pattern (regular expression) is now limited to 512 characters. \\n\\n 2 July 2018 \\n\\n {: #watson-assistant-jul022018}\\n{: release-note} \\n\\n Jump-tos from conditional responses\\n:   You can now configure a conditional response to jump directly to another node. See  rect /docs/assistant?topic=assistant-dialog-overview#dialog-overview-multiple Conditional responses {: external} for more details. \\n\\n 21 June 2018 \\n\\n {: #watson-assistant-jun212018}\\n{: release-note} \\n\\n Language updates for system entities\\n:   Dutch and Simplified Chinese language support are now generally available. Dutch language support includes fuzzy matching for misspellings. Traditional Chinese language support includes the availability of  rect /docs/assistant?topic=assistant-system-entities system entities {: external} in beta release. See  rect /docs/assistant?topic=assistant-language-support Supported languages {: external} for details. \\n\\n 14 June 2018 \\n\\n {: #watson-assistant-jun142018}\\n{: release-note} \\n\\n Washington, DC data center opens\\n:   You can now create {{site.data.keyword.conversationshort}} service instances that are hosted in the Washington, DC data center. See  rect /docs/assistant?topic=assistant-services-information#services-information-regions Data centers {: external} for more details. \\n\\n New API authentication process\\n:   The {{site.data.keyword.conversationshort}} service has a new API authentication process for service instances that are hosted in the following regions: \\n\\n  - Washington, DC (us-east) as of 14 June 2018\\n- Sydney, Australia (au-syd) as of 7 May 2018\\n\\n{{site.data.keyword.cloud}} is migrating to token-based Identity and Access Management (IAM) authentication.\\n\\nFor new service instances in the regions listed, you use IAM for authentication. You can pass either a bearer token or an API key. Tokens support authenticated requests without embedding service credentials in every call. API keys use basic authentication.\\n\\nFor all new and existing service instances in other regions, you continue to use service credentials (`{username}:{password}`) for authentication.\\n\\nWhen you use any of the Watson SDKs, you can pass the API key and let the SDK manage the lifecycle of the tokens. For more information and examples, see [Authentication](https://cloud.ibm.com/apidocs/assistant/assistant-v2#authentication){: external} in the API reference.\\n\\nIf you are not sure which type of authentication to use, view the {{site.data.keyword.conversationshort}} credentials by clicking the service instance from the Services section of the [{{site.data.keyword.Bluemix_notm}} Resource List](https://cloud.ibm.com){: external}.\\n  \\n\\n 25 May 2018 \\n\\n {: #watson-assistant-may252018}\\n{: release-note} \\n\\n New sample workspace\\n:   The sample workspace that is provided for you to explore or to use as a starting point for your own workspace has changed. The  Car Dashboard  sample was replaced by a  Customer Service  sample. The new sample showcases how to use content catalog intents and other newer features to build a bot. It can answer common questions, such as inquiries about store hours and locations, and illustrates how to use a node with slots to schedule in-store appointments. \\n\\n HTML rendering was added to Try it out\\n:   The \"Try it out\" pane now renders HTML formatting that is included in response text. Previously, if you included a hypertext link as an HTML anchor tag in a text response, you would see the HTML source in the \"Try it out\" pane during testing. It used to look like this: \\n\\n  `Contact us at <a href=\"https://www.ibm.com\">ibm.com</a>.`\\n\\nNow, the hypertext link is rendered as if on a web page. It is displayed like this:\\n\\n`Contact us at` [ibm.com](https://www.ibm.com){: external}.\\n\\nRemember, you must use the appropriate type of syntax in your responses for the client application to which you will deploy the conversation. Only use HTML syntax if your client application can interpret it properly. Other integration channels might expect other formats.\\n  \\n\\n Deployment changes\\n:   The  Test in Slack  option was removed. \\n\\n 11 May 2018 \\n\\n {: #watson-assistant-may112018}\\n{: release-note} \\n\\n Information security\\n:   The documentation includes some new details about data privacy. Read more in  rect /docs/assistant?topic=assistant-information-security Information security {: external}. \\n\\n 7 May 2018 \\n\\n {: #watson-assistant-may072018}\\n{: release-note} \\n\\n Sydney, Australia data center opens\\n:   You can now create {{site.data.keyword.conversationshort}} service instances that are hosted in the Sydney, Australia data center. See  rect https://www.ibm.com/cloud/data-centers/ IBM Cloud global data centers {: external} for more details. \\n\\n 4 April 2018 \\n\\n {: #watson-assistant-apr042018}\\n{: release-note} \\n\\n Search dialogs\\n:   You can now  rect /docs/assistant?topic=assistant-dialog-tasks#dialog-tasks-search search dialog nodes {: external} for a given word or phrase. \\n\\n 15 March 2018 \\n\\n {: #watson-assistant-mar152018}\\n{: release-note} \\n\\n Introducing {{site.data.keyword.conversationfull}}\\n:   {{site.data.keyword.ibmwatson}} Conversation has been renamed. It is now called {{site.data.keyword.conversationfull}}. The name change reflects the fact that {{site.data.keyword.conversationshort}} is expanding to provide prebuilt content and tools that help you more easily share the virtual assistants you build. Read  rect https://www.ibm.com/blogs/watson/2018/03/the-future-of-watson-conversation-watson-assistant/ this blog post {: external} for more details. \\n\\n New REST APIs and SDKs are available for {{site.data.keyword.conversationshort}}\\n:   The new APIs are functionally identical to the existing Conversation APIs, which continue to be supported. For more information about the {{site.data.keyword.conversationshort}} APIs, see the  rect https://cloud.ibm.com/apidocs/assistant/assistant-v1 API Reference {: external}. \\n\\n Dialog enhancements\\n:   The following features were added to the dialog tool: \\n\\n  - Simple variable name and value fields are now available that you can use to add context variables or update context variable values. You do not need to open the JSON editor unless you want to. See [Defining a context variable](/docs/assistant?topic=assistant-dialog-runtime-context#dialog-runtime-context-var-define){: external} for more details.\\n- Organize your dialog by using folders to group together related dialog nodes. See [Organizing the dialog with folders](/docs/assistant?topic=assistant-dialog-tasks#dialog-tasks-folders){: external} for more details.\\n- Support was added for customizing how each dialog node participates in user-initiated digressions away from the designated dialog flow. See [Digressions](/docs/assistant?topic=assistant-dialog-runtime#dialog-runtime-digressions){: external} for more details.\\n  \\n\\n Search intents and entities\\n:   A new search feature has been added that allows you to  rect /docs/assistant?topic=assistant-intents#intents-search search intents {: external} for user examples, intent names, or descriptions, or to  rect /docs/assistant?topic=assistant-entities#entities-search search entity {: external} values and synonyms. \\n\\n Content catalogs\\n:   The new  rect /docs/assistant?topic=assistant-catalog#catalog-add content catalogs {: external} contain a single category of prebuilt common intents and entities that you can add to your application. For example, most applications require a general #greeting-type intent that starts a dialog with the user. You can add it from the content catalog rather than building your own. \\n\\n Enhanced user metrics\\n:   The Improve component has been enhanced with additional user metrics and logging statistics. For example, the Overview page includes several new, detailed graphs that summarize interactions between users and your application, the amount of traffic for a given time period, and the intents and entities that were recognized most often in user conversations. \\n\\n 12 March 2018 \\n\\n {: #watson-assistant-mar122018}\\n{: release-note} \\n\\n New date and time methods\\n:   Methods were added that make it easier to perform date calculations from the dialog. See  rect /docs/assistant?topic=assistant-dialog-methods#dialog-methods-date-time-calculations Date and time calculations {: external} for more details. \\n\\n 16 February 2018 \\n\\n {: #watson-assistant-feb162018}\\n{: release-note} \\n\\n Dialog node tracing\\n:   When you use the \"Try it out\" pane to test a dialog, a location icon is displayed next to each response. You can click the icon to highlight the path that your assistant traversed through the dialog tree to arrive at the response. See  rect /docs/assistant?topic=assistant-dialog-tasks#dialog-tasks-test Building a dialog {: external} for details. \\n\\n New API version\\n:   The current API version is now  2018-02-16 . This version introduces the following changes: \\n\\n  - A new `include_audit` parameter is now supported on most GET requests. This is an optional boolean parameter that specifies whether the response should include the audit properties (`created` and `updated` timestamps). The default value is `false`. (If you are using an API version earlier than `2018-02-16`, the default value is `true`.) For more information, see the [API Reference](https://cloud.ibm.com/apidocs/assistant/assistant-v1){: external}.\\n\\n- Responses from API calls using the new version include only properties with non-`null` values.\\n\\n- The `output.nodes_visited` and `output.nodes_visited_details` properties of message responses now include nodes with the following types, which were previously omitted:\\n\\n- Nodes with `type`=`response_condition`\\n- Nodes with `type`=`event_handler` and `event_name`=`input`\\n  \\n\\n 9 February 2018 \\n\\n {: #watson-assistant-feb092018}\\n{: release-note} \\n\\n Dutch system entities (Beta)\\n:   Dutch language support has been enhanced to include the availability of  rect /docs/assistant?topic=assistant-system-entities System entities {: external} in beta release. See  rect /docs/assistant?topic=assistant-language-support Supported languages {: external} for details. \\n\\n 29 January 2018 \\n\\n {: #watson-assistant-jan292018}\\n{: release-note} \\n\\n \\t The {{site.data.keyword.conversationshort}} REST API now supports new request parameters: \\t Use the  append  parameter when updating a workspace to indicate whether the new workspace data should be added to the existing data, rather than replacing it. For more information, see  rect https://cloud.ibm.com/apidocs/assistant/assistant-v1?curl=#update-workspace Update workspace {: external}. \\n\\t Use the  nodes_visited_details  parameter when sending a message to indicate whether the response should include additional diagnostic information about the nodes that were visited during processing of the message. For more information, see  rect https://cloud.ibm.com/apidocs/assistant/assistant-v1?curl=#message Send message {: external}. \\n \\n\\n \\n \\n\\n 23 January 2018 \\n\\n {: #watson-assistant-jan232018}\\n{: release-note} \\n\\n Unable to retrieve list of workspaces\\n:   If you see this or similar error messages when working in the tooling, it might mean that your session has expired. Log out by choosing  Log out  from the  User information  icon, and then log back in. \\n\\n 8 December 2017 \\n\\n {: #watson-assistant-dec082017}\\n{: release-note} \\n\\n Log data access across instances (Premium users only)\\n:   If you are a {{site.data.keyword.conversationshort}} Premium user, your premium instances can optionally be configured to allow access to log data from workspaces across your different premium instances. \\n\\n Copy nodes\\n:   You can now duplicate a node to make a copy of it and its children. This feature is helpful if you build a node with useful logic that you want to reuse elsewhere in your dialog. See  rect /docs/assistant?topic=assistant-dialog-tasks#dialog-tasks-copy-node Copying a dialog node {: external} for more information. \\n\\n Capture groups in pattern entities\\n:   You can identify groups in the regular expression pattern that you define for an entity. Identifying groups is useful if you want to be able to refer to a subsection of the pattern later. For example, your entity might have a regex pattern that captures US phone numbers. If you identify the area code segment of the number pattern as a group, then you can subsequently refer to that group to access just the area code segment of a phone number. See  rect /docs/assistant?topic=assistant-entities#entities-creating-task Defining entities {: external} for more information. \\n\\n 6 December 2017 \\n\\n {: #watson-assistant-dec062017}\\n{: release-note} \\n\\n {{site.data.keyword.openwhisk}} integration (Beta)\\n:   Call {{site.data.keyword.openwhisk}} (formerly IBM OpenWhisk) actions directly from a dialog node. This feature enables you to, for example, call an action to retrieve weather information from within a dialog node, and then condition on the returned information in the dialog response. Currently, you can call an action from a {{site.data.keyword.openwhisk_short}} instance that is hosted in the US South region from {{site.data.keyword.conversationshort}} instances that are hosted in the US South region. See  rect /docs/assistant?topic=assistant-dialog-actions-client Making programmatic calls from a dialog node {: external} for more details. \\n\\n 5 December 2017 \\n\\n {: #watson-assistant-dec052017}\\n{: release-note} \\n\\n Redesigned UI for Intents and Entities\\n:   The  Intents  and  Entities  tabs have been redesigned to provide an easier, more efficient workflow when creating and editing entities and intents. See  rect /docs/assistant?topic=assistant-intents-create-task Defining intents {: external} and  rect /docs/assistant?topic=assistant-entities#entities-creating-task Defining entities {: external} for information about working with these tabs. \\n\\n 30 November 2017 \\n\\n {: #watson-assistant-nov302017}\\n{: release-note} \\n\\n Eastern Arabic numeral support\\n:   Eastern Arabic numerals are now supported in Arabic system entities. \\n\\n 29 November 2017 \\n\\n {: #watson-assistant-nov292017}\\n{: release-note} \\n\\n Improving understanding of user input across workspaces\\n:   You can now improve a workspace with utterances that were sent to other workspaces within your instance. For example, you might have multiple versions of production workspaces and development workspaces; you can use the same utterance data to improve any of these workspaces. See  rect /docs/assistant?topic=assistant-logs#logs-deploy-id Improving across workspaces {: external}. \\n\\n 20 November 2017 \\n\\n {: #watson-assistant-nov202017}\\n{: release-note} \\n\\n GB18030 compliance\\n:   GB18030 is a Chinese standard that specifies an extended code page for use in the Chinese market. This code page standard is important for the software industry because the China National Information Technology Standardization Technical Committee has mandated that any software application that is released for the Chinese market after September 1, 2001, be enabled for GB18030. The {{site.data.keyword.conversationshort}} service supports this encoding, and is certified GB18030-compliant. \\n\\n 9 November 2017 \\n\\n {: #watson-assistant-nov092017}\\n{: release-note} \\n\\n Intent examples can directly reference entities\\n:   You can now specify an entity reference directly in an intent example. That entity reference, along with all its values or synonyms, is used by the {{site.data.keyword.conversationshort}} service classifier for training the intent. For more information, see  rect /docs/assistant?topic=assistant-intents#intents-entity-as-example  Entity as example  {: external} in the  rect /docs/assistant?topic=assistant-intents Intents {: external} topic. \\n\\n  Currently, you can only directly reference closed entities that you define. You cannot directly reference [pattern entities](/docs/assistant?topic=assistant-entities#entities-patterns) or [system entities](/docs/assistant?topic=assistant-system-entities){: external}.\\n  \\n\\n 8 November 2017 \\n\\n {: #watson-assistant-nov082017}\\n{: release-note} \\n\\n {{site.data.keyword.conversationshort}} connector\\n:   You can use the new {{site.data.keyword.conversationshort}} connector tool to connect your workspace to a Slack or Facebook Messenger app that you own, making it available as a chatbot that Slack or Facebook Messenger users can interact with. This tool is available only for the {{site.data.keyword.Bluemix_notm}} US South region. \\n\\n 3 November 2017 \\n\\n {: #watson-assistant-nov032017}\\n{: release-note} \\n\\n Dialog updates\\n:   The following updates make is easier for you to build a dialog. (See  rect /docs/assistant?topic=assistant-dialog-build Building a dialog {: external} for details.) \\n\\n  - You can add a condition to a slot to make it required only if certain conditions are met. For example, you can make a slot that asks for the name of a spouse required only if a previous (required) slot that asks for marital status indicates that the user is married.\\n\\n- You can now choose **Skip user input** as the next step for a node. When you choose this option, after processing the current node, your assistant jumps directly to the first child node of the current node. This option is similar to the existing *Jump to* next step option, except that it allows for more flexibility. You do not need to specify the exact node to jump to. At run time, your assistant always jumps to whichever node is the first child node, even if the child nodes are reordered or new nodes are added after the next step behavior is defined.\\n\\n- You can add conditional responses for slots. For both Found and Not found responses, you can customize how your assistant responds based on whether certain conditions are met. This feature enables you to check for possible misinterpretations and correct them before saving the value provided by the user in the slot\\'s context variable. For example, if the slot saves the user\\'s age, and uses `@sys-number` in the *Check for* field to capture it, you can add a condition that checks for numbers over 100, and responds with something like, *Please provide a valid age in years.* See [Adding conditions to Found and Not found responses](/docs/assistant?topic=assistant-dialog-slots#dialog-slots-handler-next-steps){: external} for more details.\\n\\n- The interface you use to add conditional responses to a node has been redesigned to make it easier to list each condition and its response. To add node-level conditional responses, click **Customize**, and then enable the **Multiple responses** option.\\n\\n The **Multiple responses** toggle sets the feature on or off for the node-level response only. It does not control the ability to define conditional responses for a slot. The slot multiple response setting is controlled separately.\\n\\n- To keep the page where you edit a slot simple, you now select menu options to a.) add a condition that must be met for the slot to be processed, and b.) add conditional responses for the Found and Not found conditions for a slot. Unless you choose to add this extra functionality, the slot condition and multiple responses fields are not displayed, which declutters the page and makes it easier to use.\\n  \\n\\n 25 October 2017 \\n\\n {: #watson-assistant-oct252017}\\n{: release-note} \\n\\n Updates to Simplified Chinese\\n:   Language support has been enhanced for Simplified Chinese. This includes intent classification improvements using character-level word embeddings, and the availability of system entities. Note that the {{site.data.keyword.conversationshort}} service learning models may have been updated as part of this enhancement, and when you retrain your model any changes will be applied. \\n\\n Updates to Spanish\\n:   Improvements have been made to Spanish intent classification, for very large datasets. \\n\\n 11 October 2017 \\n\\n {: #watson-assistant-oct112017}\\n{: release-note} \\n\\n Updates to Korean\\n:   Language support has been enhanced for Korean. Note that the {{site.data.keyword.conversationshort}} service learning models may have been updated as part of this enhancement, and when you retrain your model any changes will be applied. \\n\\n 3 October 2017 \\n\\n {: #watson-assistant-oct032017}\\n{: release-note} \\n\\n Pattern-defined entities (Beta)\\n:   You can now define specific patterns for an entity, using regular expressions. This can help you identify entities that follow a defined pattern, for example SKU or part numbers, phone numbers, or email addresses. See  rect /docs/assistant?topic=assistant-entities#entities-patterns Pattern-defined entities {: external} for additional details. \\n\\n  - You can add either synonyms or patterns for a single entity value; you cannot add both.\\n- For each entity value, there can be a maximum of up to 5 patterns.\\n- Each pattern (regular expression) is limited to 128 characters.\\n- Importing or exporting via a CSV file does not currently support patterns.\\n- The REST API does not support direct access to patterns, but you can retrieve or modify patterns using the `/values` endpoint.\\n  \\n\\n Fuzzy matching filtered by dictionary (English only)\\n:   An improved version of fuzzy matching for entities is now available, for English. This improvement prevents the capturing of some common, valid English words as fuzzy matches for a given entity. For example, fuzzy matching will not match the entity value  like  to  hike  or  bike , which are valid English words, but will continue to match examples such as  lkie  or  oike . \\n\\n 27 September 2017 \\n\\n {: #watson-assistant-sep272017}\\n{: release-note} \\n\\n Condition builder updates\\n:   The control that is displayed to help you define a condition in a dialog node has been updated. Enhancements include support for listing available context variable names after you enter the $ to begin adding a context variable. \\n\\n 31 August 2017 \\n\\n {: #watson-assistant-aug312017}\\n{: release-note} \\n\\n Improve section rollback\\n:   The median conversation time metric, and corresponding filters, are being temporarily removed from the Overview page of the Improve section. This removal will prevent the calculation of certain metrics from causing the median conversation time metric, and the conversations over time graph, to display inaccurate information. IBM regrets removing functionality from the tool, but is committed to ensuring that we are communicating accurate information to users. \\n\\n Dialog node names\\n:   You can now assign any name to a dialog node; it does not need to be unique. And you can subsequently change the node name without impacting how the node is referenced internally. The name you specify is saved as a title attribute of the node in the workspace JSON file and the system uses a unique ID that is stored in the name attribute to reference the node. \\n\\n 23 August 2017 \\n\\n {: #watson-assistant-aug232017}\\n{: release-note} \\n\\n Updates to Korean, Japanese, and Italian\\n:   Language support has been enhanced for Korean, Japanese, and Italian. Note that the {{site.data.keyword.conversationshort}} service learning models may have been updated as part of this enhancement, and when you retrain your model any changes will be applied. \\n\\n 10 August 2017 \\n\\n {: #watson-assistant-aug102017}\\n{: release-note} \\n\\n Accent normalization\\n:   In a conversational setting, users may or may not use accents while interacting with the {{site.data.keyword.conversationshort}} service. As such, an update has been made to the algorithm so that accented and non-accented versions of words are treated the same for intent detection and entity recognition. \\n\\n  However, for some languages like Spanish, some accents can alter the meaning of the entity. Thus, for entity detection, although the original entity may implicitly have an accent, your assistant can also match the non-accented version of the same entity, but with a slightly lower confidence score.\\n\\nFor example, for the word `barriÃ³`, which has an accent and corresponds to the past tense of the verb `barrer` (to sweep), your assistant can also match the word `barrio` (neighborhood), but with a slightly lower confidence.\\n\\nThe system will provide the highest confidence scores in entities with exact matches. For example, `barrio` will not be detected if `barriÃ³` is in the training set; and `barriÃ³` will not be detected if `barrio` is in the training set.\\n\\nYou are expected to train the system with the proper characters and accents. For example, if you are expecting `barriÃ³` as a response, then you should put `barriÃ³` into the training set.\\n\\nAlthough not an accent mark, the same applies to words using, for example, the Spanish letter `Ã±` vs. the letter `n`, such as `uÃ±a` vs. `una`. In this case the letter `Ã±` is not simply an `n` with an accent; it is a unique, Spanish-specific letter.\\n\\nYou can enable fuzzy matching if you think your customers will not use the appropriate accents, or misspell words (including, for example, putting a `n` instead of a `Ã±`), or you can explicitly include them in the training examples.\\n\\n**Note:** Accent normalization is enabled for Portuguese, Spanish, French, and Czech.\\n  \\n\\n Workspace opt-out flag\\n:   The {{site.data.keyword.conversationshort}} REST API now supports an opt-out flag for workspaces. This flag indicates that workspace training data such as intents and entities are not to be used by IBM for general service improvements. For more information, see the  rect https://cloud.ibm.com/apidocs/assistant/assistant-v1?curl=#data-collection API Reference {: external} \\n\\n 7 August 2017 \\n\\n {: #watson-assistant-aug072017}\\n{: release-note} \\n\\n  Next  and  last  date interpretation\\n:   The {{site.data.keyword.conversationshort}} service treats  last  and  next  dates as referring to the most immediate last or next day referenced, which may be in either the same or a previous week. See the  rect /docs/assistant?topic=assistant-system-entities#system-entities-sys-date-time system entities {: external} topic for additional information. \\n\\n 3 August 2017 \\n\\n {: #watson-assistant-aug032017}\\n{: release-note} \\n\\n Fuzzy matching for additional languages (Beta)\\n:   Fuzzy matching for entities is now available for additional languages, as noted in the  rect /docs/assistant?topic=assistant-language-support Supported languages {: external} topic. \\n\\n Partial match (Beta - English only)\\n:   Fuzzy matching will now automatically suggest substring-based synonyms present in user-defined entities, and assign a lower confidence score as compared to the exact entity match. See  rect /docs/assistant?topic=assistant-entities#entities-fuzzy-matching Fuzzy matching {: external} for details. \\n\\n 28 July 2017 \\n\\n {: #watson-assistant-jul282017}\\n{: release-note} \\n\\n Updates\\n:   This release includes the following updates: \\n\\n  - When you set bidirectional preferences for the tooling, you can now specify the graphical user interface direction.\\n- The color scheme of the tooling was updated to be consistent with other Watson services and products.\\n  \\n\\n 19 July 2017 \\n\\n {: #watson-assistant-jul192017}\\n{: release-note} \\n\\n REST API now supports access to dialog nodes\\n:   The {{site.data.keyword.conversationshort}} REST API now supports access to dialog nodes. For more information, see the  rect https://cloud.ibm.com/apidocs/assistant/assistant-v1?curl=#listdialognodes API Reference {: external}. \\n\\n 14 July 2017 \\n\\n {: #watson-assistant-jul142017}\\n{: release-note} \\n\\n Slots enhancement\\n:   The slots functionality of dialogs was enhanced. For example, a  slot_in_focus  property was added that you can use to define a condition that applies to a single slot only. See  rect /docs/assistant?topic=assistant-dialog-slots Gathering information with slots {: external} for details. \\n\\n 12 July 2017 \\n\\n {: #watson-assistant-jul122017}\\n{: release-note} \\n\\n Support for Czech\\n:   Czech language support has been introduced; please see the  rect /docs/assistant?topic=assistant-language-support Supported languages {: external} topic for additional details. \\n\\n 11 July 2017 \\n\\n {: #watson-assistant-jul112017}\\n{: release-note} \\n\\n Test in Slack\\n:   You can use the new  Test in Slack  tool to quickly deploy your workspace as a Slack bot user for testing purposes. This tool is available only for the {{site.data.keyword.Bluemix_notm}} US South region. \\n\\n Updates to Arabic\\n:   Arabic language support has been enhanced to include absolute scoring per intent, and the ability to mark intents as irrelevant; please see the  rect /docs/assistant?topic=assistant-language-support Supported languages {: external} topic for additional details. Note that the {{site.data.keyword.conversationshort}} service learning models may have been updated as part of this enhancement, and when you retrain your model any changes will be applied. \\n\\n 23 June 2017 \\n\\n {: #watson-assistant-jun232017}\\n{: release-note} \\n\\n Updates to Korean\\n:   Korean language support has been enhanced; please see the  rect /docs/assistant?topic=assistant-language-support Supported languages {: external} topic for additional details. Note that the {{site.data.keyword.conversationshort}} service learning models may have been updated as part of this enhancement, and when you retrain your model any changes will be applied. \\n\\n 22 June 2017 \\n\\n {: #watson-assistant-jun222017}\\n{: release-note} \\n\\n Introducing slots\\n:   It is now easier to collect multiple pieces of information from a user in a single node by adding slots. Previously, you had to create several dialog nodes to cover all the possible combinations of ways that users might provide the information. With slots, you can configure a single node that saves any information that the user provides, and prompts for any required details that the user does not. See  rect /docs/assistant?topic=assistant-dialog-slots Gathering information with slots {: external} for more details. \\n\\n Simplified dialog tree\\n:   The dialog tree has been redesigned to improve its usability. The tree view is more compact so it is easier to see where you are within it. And the links between nodes are represented in a way that makes it easier to understand the relationships between the nodes. \\n\\n 21 June 2017 \\n\\n {: #watson-assistant-jun212017}\\n{: release-note} \\n\\n Arabic support\\n:   Language support for Arabic is now generally available. For details, see  rect /docs/assistant?topic=assistant-language-support#language-support-configure-bidirectional Configuring bidirectional languages {: external}. \\n\\n Language updates\\n:   The {{site.data.keyword.conversationshort}} service algorithms have been updated to improve overall language support. See the  rect /docs/assistant?topic=assistant-language-support Supported languages {: external} topic for details. \\n\\n 16 June 2017 \\n\\n {: #watson-assistant-jun162017}\\n{: release-note} \\n\\n Recommendations (Beta - Premium users only)\\n:   The Improve panel also includes a  Recommendations  page that recommends ways to improve your system by analyzing the conversations that users have with your chatbot, and taking into account your system\\'s current training data and response certainty. \\n\\n 14 June 2017 \\n\\n {: #watson-assistant-jun142017}\\n{: release-note} \\n\\n Fuzzy matching for additional languages (Beta)\\n:   Fuzzy matching for entities is now available for additional languages, as noted in the  rect /docs/assistant?topic=assistant-language-support Supported languages {: external} topic. You can turn on fuzzy matching per entity to improve the ability of your assistant to recognize terms in user input with syntax that is similar to the entity, without requiring an exact match. The feature is able to map user input to the appropriate corresponding entity despite the presence of misspellings or slight syntactical differences. For example, if you define giraffe as a synonym for an animal entity, and the user input contains the terms giraffes or girafe, the fuzzy match is able to map the term to the animal entity correctly. See  rect /docs/assistant?topic=assistant-entities#entities-fuzzy-matching Fuzzy matching {: external} for details. \\n\\n 13 June 2017 \\n\\n {: #watson-assistant-jun132017}\\n{: release-note} \\n\\n User conversations\\n:   The Improve panel now includes a  User conversations  page, which provides a list of user interactions with your chatbot that can be filtered by keyword, intent, entity, or number of days. You can open individual conversations to correct intents, or to add entity values or synonyms. \\n\\n Regex change\\n:   The regular expressions that are supported by SpEL functions like find, matches, extract, replaceFirst, replaceAll and split have changed. A group of regular expression constructs are no longer allowed, including look-ahead, look-behind, possessive repetition and backreference constructs. This change was necessary to avoid a security exposure in the original regular expression library. \\n\\n 12 June 2017 \\n\\n {: #watson-assistant-jun122017}\\n{: release-note} \\n\\n Updates\\n:   This release includes the following updates:\\n    - The maximum number of workspaces that you can create with the  Lite  plan (formerly named the Free plan) changed from 3 to 5.\\n    - You can now assign any name to a dialog node; it does not need to be unique. And you can subsequently change the node name without impacting how the node is referenced internally. The name you specify is treated as an alias and the system uses its own internal identifier to reference the node.\\n    - You can no longer change the language of a workspace after you create it by editing the workspace details. If you need to change the language, you can export the workspace as a JSON file, update the language property, and then import the JSON file as a new workspace. \\n\\n 6 June 2017 \\n\\n {: #watson-assistant-jun062017}\\n{: release-note} \\n\\n Learn\\n:   A new  Learn about {{site.data.keyword.conversationfull}}  page is available that provides getting started information and links to service documentation and other useful resources. To open the page, click the icon in the page header. \\n\\n Bulk export and delete\\n:   You can now simultaneously export a number of intents or entities to a CSV file, so you can then import and reuse them for another {{site.data.keyword.conversationshort}} application. You can also simultaneously select a number of entities or intents for deletion in bulk. \\n\\n Updates to Korean\\n:   Korean tokenizers have been updated to address informal language support. IBM continues to work on improvements to entity recognition and classification. \\n\\n Emoji support\\n:   Emojis added to intent examples, or as entity values, will now be correctly classified/extracted. \\n\\n  Only emojis that are included in your training data will be correctly and consistently identified; emoji support may not correctly classify similar emojis with different color tones or other variations.\\n  \\n\\n Entity stemming (Beta - English only)\\n:   The fuzzy matching beta feature recognizes entities and matches them based on the stem form of the entity value. For example, this feature correctly recognizes  bananas  as being similar to  banana , and  run  being similar to  running  as they share a common stem form. For more information, see  rect /docs/assistant?topic=assistant-entities#entities-fuzzy-matching Fuzzy matching {: external}. \\n\\n Workspace import progress\\n:   When you import a workspace from a JSON file, a tile for the workspace is displayed immediately, in which information about the progress of the import is displayed. \\n\\n Reduced training time\\n:   Multiple models are now trained in parallel, which noticeably reduces the training time for large workspaces. \\n\\n 26 May 2017 \\n\\n {: #watson-assistant-may262017}\\n{: release-note} \\n\\n New API version\\n:   The current API version is now  2017-05-26 . This version introduces the following changes: \\n\\n  - The schema of ErrorResponse objects has changed. This change affects all endpoints and methods. For more information, see the [API Reference](https://cloud.ibm.com/apidocs/assistant){: external}.\\n- The internal schema used to represent dialog nodes in exported workspace JSON has changed. If you use the `2017-05-26` API to import a workspace that was exported using an earlier version, some dialog nodes might not import correctly. For best results, always import a workspace using the same version that was used to export it.\\n  \\n\\n 25 May 2017 \\n\\n {: #watson-assistant-may252017}\\n{: release-note} \\n\\n Manage context variables\\n:   You can now manage context variables in the \"Try it out\" pane. Click the  Manage context  link to open a new pane where you can set and check the values of context variables as you test the dialog. See  rect /docs/assistant?topic=assistant-dialog-tasks#dialog-tasks-test Testing your dialog {: external} for more information. \\n\\n 16 May 2017 \\n\\n {: #watson-assistant-may162017}\\n{: release-note} \\n\\n Updates\\n:   This release includes the following updates: \\n\\n  - A **Car Dashboard** sample workspace is now available when you open the tool. To use the sample as a starting point for your own workspace, edit the workspace. If you want to use it for multiple workspaces, then duplicate it instead. The sample workspace does not count toward your subscription workspace total unless you use it.\\n- It is now easier to navigate the tool. The navigation menu options are available from the side of the main page instead of the top. In the page header, breadcrumb links display that show you where you are. You can now switch between service instances from the Workspaces page. To get there quickly, click **Back to workspaces** from the navigation menu. If you have multiple service instances, the name of the current instance is displayed. You can click the **Change** link beside it to choose another instance.\\n- When you create a dialog, two nodes are now added to it for you: 1) a **Welcome** node at the start the dialog tree that contains the greeting to display to the user and 2) an **Anything else** node at the end of the tree that catches any user inquiries that are not recognized by other nodes in the dialog and responds to them. See [Creating a dialog](/docs/assistant?topic=assistant-dialog-build){: external} for more details.\\n- When you are testing a dialog in the \"Try it out\" pane, you can now find and resubmit a recent test utterance by pressing the Up key to cycle through your previous inputs.\\n- Experimental Korean language support for 5 system entities (`@sys-date`, `@sys-time`, `@sys-currency`, `@sys-number`, `@sys-percentage`) is now available. There are known issues for some of the numeric entities, and limited support for informal language input.\\n- An Overview page is available from the Improve tab. The page provides a summary of interactions with your bot. You can view the amount of traffic for a given time period, as well as the intents and entities that were recognized most often in user conversations. For additional information, see [Using the Overview page](/docs/assistant?topic=assistant-logs-overview){: external}.\\n  \\n\\n 27 April 2017 \\n\\n {: #watson-assistant-apr272017}\\n{: release-note} \\n\\n System entities\\n:   The following system entities are now available as beta features in English only: \\n\\n  - sys-location: Recognizes references to locations, such as towns, cities, and countries, in user utterances.\\n- sys-person: Recognizes references to people\\'s names, first and last, in user utterances.\\n\\nFor more information, see the [System entities reference](/docs/assistant?topic=assistant-system-entities){: external}.\\n  \\n\\n Fuzzy matching for entities\\n:   Fuzzy matching for entities is a beta feature that is now available in English. You can turn on fuzzy matching per entity to improve the ability of your assistant to recognize terms in user input with syntax that is similar to the entity, without requiring an exact match. The feature is able to map user input to the appropriate corresponding entity despite the presence of misspellings or slight syntactical differences. For examples, if you define  giraffe  as a synonym for an animal entity, and the user input contains the terms  giraffes  or  girafe , the fuzzy match is able to map the term to the animal entity correctly. See  rect /docs/assistant?topic=assistant-entities#entities-fuzzy-matching Defining entities {: external} and search for  Fuzzy Matching  for details. \\n\\n 18 April 2017 \\n\\n {: #watson-assistant-apr182017}\\n{: release-note} \\n\\n Updates\\n:   This release includes the following updates: \\n\\n  - The {{site.data.keyword.conversationshort}} REST API now supports access to the following resources:\\n    - entities\\n    - entity values\\n    - entity value synonyms\\n    - logs\\n\\n    For more information, see the [API Reference](https://cloud.ibm.com/apidocs/assistant/assistant-v1){: external}.\\n\\n- The behavior of the /messages `POST` method has changed the handling of entities and intents specified as part of the message input:\\n    - If you specify intents on input, your assistant uses the intents you specify, but uses natural language processing to detect entities in the user input.\\n    - If you specify entities on input, your assistant uses the entities you specify, but uses natural language processing to detect intents in the user input.\\n\\n    The behavior has not changed for messages that specify both intents and entities, or for messages that specify neither.\\n\\n- The option to mark user input as irrelevant is now available for all supported languages. This is a beta feature.\\n\\n- A new Credentials tab provides a single place where you can find all of the information you need for connecting your application to a workspace (such as the {{site.data.keyword.conversationshort}} credentials and workspace ID), as well as other deployment options. To access the Credentials tab for your workspace, click the icon and select **Credentials**.\\n  \\n\\n 9 March 2017 \\n\\n {: #watson-assistant-mar092017}\\n{: release-note} \\n\\n REST API updates\\n:   The {{site.data.keyword.conversationshort}} REST API now supports access to the following resources: \\n\\n  - workspaces\\n- intents\\n- examples\\n- counterexamples\\n\\nFor more information, see the [API Reference](https://cloud.ibm.com/apidocs/assistant/assistant-v1){: external}.\\n  \\n\\n 7 March 2017 \\n\\n {: #watson-assistant-mar072017}\\n{: release-note} \\n\\n Intent name restrictions\\n:   The use of  .  or  ..  as an intent name causes problems and is no longer supported. You cannot rename or delete an intent with this name; to change the name, export your intents to a file, rename the intent in the file, and import the updated file into your workspace. Paying customers can contact support for a database change. \\n\\n 1 March 2017 \\n\\n {: #watson-assistant-mar012017}\\n{: release-note} \\n\\n System entities are now enabled in German\\n:   System entities are now enabled in German. \\n\\n 22 February 2017 \\n\\n {: #watson-assistant-feb222017}\\n{: release-note} \\n\\n Messages are now limited to 2,048 characters\\n: Messages are now limited to 2,048 characters. \\n\\n 3 February 2017 \\n\\n {: #watson-assistant-feb032017}\\n{: release-note} \\n\\n Updates\\n:   This release includes the following updates: \\n\\n  - We changed how intents are scored and added the ability to mark input as irrelevant to your application. For details, see [Defining intents](/docs/assistant?topic=assistant-intents){: external} and search for `Mark as irrelevant`.\\n\\n- This release introduced a major change to the workspace. To benefit from the changes, you must manually upgrade your workspace.\\n\\n- The processing of **Jump to** actions changed to prevent loops that can occur under certain conditions. Previously, if you jumped to the condition of a node and neither that node nor any of its peer nodes had a condition that was evaluated as true, the system would jump to the root-level node and look for a node whose condition matched the input. In some situations this processing created a loop, which prevented the dialog from progressing.\\n\\n Under the new process, if neither the target node nor its peers is evaluated as true, the dialog turn is ended. To reimplement the old model, add a final peer node with a condition of `true`. In the response, use a **Jump to** action that targets the condition of the first node at the root level of your dialog tree.\\n  \\n\\n 11 January 2017 \\n\\n {: #watson-assistant-jan112017}\\n{: release-note} \\n\\n Customize node titles\\n:   In this release, you can customize node titles in dialog. \\n\\n 22 December 2016 \\n\\n {: #watson-assistant-dec222016}\\n{: release-note} \\n\\n Node title section\\n:   In this release, dialog nodes display a new section for  node title . The ability to customize the  node title  is not available. When collapsed, the  node title  displays the  node condition  of the dialog node. If there is not a  node condition , \"Untitled Node\" is displayed as the title. \\n\\n 19 December 2016 \\n\\n {: #watson-assistant-dec192016}\\n{: release-note} \\n\\n Dialog editor UI changes\\n:   Several changes make the dialog editor easier and more intuitive to use: \\n\\n  - A larger editing view makes it easier to view all the details of a node as you work on it.\\n- A node can contain multiple responses, each triggered by a separate condition. For more information see [Multiple responses](/docs/assistant?topic=assistant-dialog-overview#dialog-overview-responses){: external}.\\n  \\n\\n 5 December 2016 \\n\\n {: #watson-assistant-dec052016}\\n{: release-note} \\n\\n Updates\\n:   This release includes the following updates: \\n\\n  - New languages are supported, all in Experimental mode: German, Traditional Chinese, Simplified Chinese, and Dutch.\\n- Two new system entities are available: @sys-date and @sys-time. For details, see [System entities](/docs/assistant?topic=assistant-system-entities){: external}.\\n  \\n\\n 21 October 2016 \\n\\n {: #watson-assistant-oct212016}\\n{: release-note} \\n\\n Updates\\n:   This release includes the following updates: \\n\\n  - The {{site.data.keyword.conversationshort}} service now provides system entities, which are common entities that can be used across any use case. For details, see [Defining entities](/docs/assistant?topic=assistant-entities){: external} and search for `Enabling system entities`.\\n- You can now view a history of conversations with users on the Improve page. You can use this to understand your bot\\'s behavior. For details, see [Improving your skill](/docs/assistant?topic=assistant-logs){: external}.\\n- You can now import entities from a comma-separated value (CSV) file, which helps with when you have a large number of entities. For details, see [Defining entities](/docs/assistant?topic=assistant-entities){: external} and search for `Importing entities`.\\n  \\n\\n 20 September 2016 \\n\\n {: #watson-assistant-sep202016}\\n{: release-note} \\n\\n New version 2016-09-20 \\n\\n :   To take advantage of the changes in a new version, change the value of the  version  parameter to the new date. If you\\'re not ready to update to this version, don\\'t change your version date. \\n\\n  - version **2016-09-20**: `dialog_stack` changed from an array of strings to an array of JSON objects.\\n  \\n\\n 29 August 2016 \\n\\n {: #watson-assistant-aug292016}\\n{: release-note} \\n\\n Updates\\n:   This release includes the following updates: \\n\\n  - You can move dialog nodes from one branch to another, as siblings or peers. For details, see [Moving a dialog node](/docs/assistant?topic=assistant-dialog-tasks#dialog-tasks-move-node){: external}.\\n- You can expand the JSON editor window.\\n- You can view chat logs of your bot\\'s conversations to help you understand it\\'s behavior. You can filter by intents, entities, date, and time. For details, see [Improving your skill](/docs/assistant?topic=assistant-logs){: external}.\\n  \\n\\n 11 July 2016 \\n\\n {: #watson-assistant-jul212016}\\n{: release-note} \\n\\n General Availability\\n:    This General Availability release enables you to work with entities and dialogs to create a fully functioning bot. \\n\\n 18 May 2016 \\n\\n {: #watson-assistant-may182016}\\n{: release-note} \\n\\n Experimental release\\n:   This Experimental release of the {{site.data.keyword.conversationshort}} introduces the user interface and enables you to work with workspaces, intents, and examples. \\n  ']\n",
      "Name = [' \\n \\n stream_size 9721  \\n X-Parsed-By org.apache.tika.parser.DefaultParser  \\n X-Parsed-By org.apache.tika.parser.html.HtmlParser  \\n stream_content_type application/html  \\n Content-Encoding ISO-8859-1  \\n resourceName /root/ibm_cloud_docs_process2/watson-assistant/faqs.html  \\n Content-Type text/html; charset=ISO-8859-1  \\n  \\n \\n   \\n\\n copyright:\\n  years: 2015, 2023\\nlastupdated: \"2023-04-07\" \\n\\n keywords: Watson Assistant frequently asked questions \\n\\n subcollection: watson-assistant \\n\\n content-type: faq \\n\\n  \\n\\n {{site.data.keyword.attribute-definition-list}} \\n\\n FAQs for {{site.data.keyword.conversationfull}} \\n\\n {: #watson-assistant-faqs} \\n\\n Find answers to frequently-asked questions and quick fixes for common problems.\\n{: shortdesc} \\n\\n FAQs about the new {{site.data.keyword.conversationfull}} experience \\n\\n {: #faqs-new-experience}\\n{: faq} \\n\\n What is the new {{site.data.keyword.conversationfull}} experience? \\n\\n {: faq-what-is-the-new-experience}\\n{: faq} \\n\\n The new {{site.data.keyword.conversationshort}} is an improved way to build, publish, and improve virtual assistants. In the new experience, you use actions to build conversations. Actions are a simple way for anyone, developer or not, to create assistants. For more information, see the  rect https://www.ibm.com/blogs/watson/2021/12/getting-started-with-the-new-watson-assistant-part-i-the-build-guide/ Getting Started guide {: external} or the  rect /docs/watson-assistant documentation  for the new experience. \\n\\n Why can\\'t I see the assistants I made with classic {{site.data.keyword.conversationshort}} in the new experience? \\n\\n {: #faq-classic-assistants}\\n{: faq} \\n\\n The new {{site.data.keyword.conversationshort}} is a clean slate in the same IBM Cloud instance as your classic experience. Assistants you created in one experience don\\'t appear in the other. However, you can switch back and forth between experiences without losing any work. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-welcome-new-assistant#welcome-new-assistant-switch-experience Switching the experience . \\n\\n What happens when I switch between the classic and new {{site.data.keyword.conversationshort}} experiences? \\n\\n {: #faq-switching}\\n{: faq} \\n\\n The assistants you create in one experience don\\'t transfer to the other. However, you can switch experiences, return to your work, and create or use assistants. You won\\'t lose anything by switching. Changing experiences doesn\\'t affect other users working in the same instance. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-welcome-new-assistant#welcome-new-assistant-switch-experience Switching the experience . \\n\\n Is the classic {{site.data.keyword.conversationshort}} experience going away? \\n\\n {: #faq-classic-lifecycle}\\n{: faq} \\n\\n IBM has no plans to discontinue the classic {{site.data.keyword.conversationshort}} experience. However, we encourage you to explore the benefits and capabilities in the new {{site.data.keyword.conversationshort}}. For more information, see the  rect https://www.ibm.com/blogs/watson/2021/12/getting-started-with-the-new-watson-assistant-part-i-the-build-guide/ Getting Started guide {: external} or the  rect /docs/watson-assistant documentation  for the new experience. \\n\\n Where are the search skill and channel integrations in the new {{site.data.keyword.conversationshort}} experience? \\n\\n {: #faq-integrations}\\n{: faq} \\n\\n In the left navigation, click  Integrations   Integrations images/integrations-icon.png  . On the Integrations page, you can add search, channel, and extension integrations to your assistant. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-deploy-integration-add Adding integrations . \\n\\n Where is the Assistant ID found in the new product experience? \\n\\n {: #faq-assistant-id}\\n{: faq} \\n\\n The assistant ID can be found in  Assistant settings . \\n\\n In  Assistant settings , the assistant ID is in the  Assistant IDs and API details  section. \\n\\n What do the draft and live tags mean? \\n\\n {: #faqs-draft-live-tags}\\n{: faq} \\n\\n A  Draft  tag indicates that the information is linked to your draft environment, which means that you can preview these updates but they are not visible to your users. A  Live  tag indicates that the information is linked to your live environment, which means that the content is available to your users to interact with. \\n\\n For more information on environments, see  rect /docs/watson-assistant?topic=watson-assistant-publish-overview#environments Environments . \\n\\n Why can\\'t I log in? \\n\\n {: #faqs-cannot-login}\\n{: faq} \\n\\n If you are having trouble logging in to a service instance or see messages about tokens, such as  unable to fetch access token  or  400 bad request - header or cookie too large , it might mean that you need to clear your browser cache. Open a private browser window, and then try again. \\n\\n \\t If accessing the page by using a private browsing window fixes the issue, then consider always using a private window or clear the cache of your browser. You can typically find an option for clearing the cache or deleting cookies in the browser\\'s privacy and security settings. \\n\\t If accessing the page by using a private browsing window doesn\\'t fix the issue, then try deleting the API key for the instance and creating a new one. \\n \\n\\n Why am I being asked to log in repeatedly? \\n\\n {: #faqs-login-repeatedly}\\n{: faq} \\n\\n If you keep getting messages, such as  you are getting redirected to login , it might be due to one of the following things: \\n\\n \\t The Lite plan you were using has expired. Lite plans expire if they are not used within a 30-day span. To begin again, log in to IBM Cloud and create a new service instance of {{site.data.keyword.conversationshort}}. \\n\\t An instance is locked when you exceed the plan limits for the month. To log in successfully, wait until the start of the next month when the plan limit totals are reset. \\n \\n\\n Why don\\'t I see the Analytics page? \\n\\n {: #faqs-view-analytics}\\n{: faq} \\n\\n To view the  Analytics  page, you must have a service role of Manager and a platform role of at least Viewer. For more information about access roles and how to request an access role change, see  rect /docs/watson-assistant?topic=watson-assistant-access-control Managing access to resources . \\n\\n Why am I unable to view the API details, API key, or service credentials? \\n\\n {: #faqs-view-api-details}\\n{: faq} \\n\\n If you cannot view the API details or service credentials, it is likely that you do not have Manager access to the service instance in which the resource was created. Only people with Manager access to the instance can use the service credentials. \\n\\n Can I export the user conversations from the Analytics page? \\n\\n {: #faqs-export-conversation}\\n{: faq} \\n\\n You cannot directly export conversations from the conversation page. You can, however, use the  /logs  API to list events from the transcripts of conversations that occurred between your users and your assistant. For more information, see the  rect https://cloud.ibm.com/apidocs/assistant/assistant-v2#listlogs API reference {: external}. \\n\\n Can I change my plan to a Lite plan? \\n\\n {: #faqs-downgrade-plan}\\n{: faq} \\n\\n No, you cannot change from a Trial, Plus, or Standard plan to a Lite plan. And you cannot upgrade from a Trial to a Standard plan. \\n\\n How do I create a webhook? \\n\\n {: #faqs-webhook-how}\\n{: faq} \\n\\n To define a webhook and add its details, go to the  Live environment  page and open the  Environment settings  page. From the  Environment settings  page, click  Webhooks > Pre-message webhook . From this page, you can add details about your webhook. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-webhook-pre Making a call before processing a message . \\n\\n Can I have more than one entry in the URL field for a webhook? \\n\\n {: #faqs-webhook-url}\\n{: faq} \\n\\n No, you can define only one webhook URL for an action. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-webhook-pre#webhook-pre-create Defining the webhook . \\n\\n Is there a range of IP addresses that are being used by a webhook? \\n\\n {: #faqs-webhook-ip}\\n{: faq} \\n\\n Unfortunately, the IP address ranges from which {{site.data.keyword.conversationshort}} may call a webhook URL are subject to change, which in turn prevent using them in any static firewall configuration. Please use the https transport and specify an authorization header to control access to the webhook. \\n\\n What do I do if the training process seems stuck? \\n\\n {: #faqs-stuck-training}\\n{: faq} \\n\\n If the training process gets stuck, first check whether there is an outage for the service by going to the  rect https://cloud.ibm.com/status Cloud status page {: external}. You can start a new training process to stop the current process and start over. \\n\\n How do I see my monthly active users in {{site.data.keyword.conversationshort}}? \\n\\n {: #faqs-see-mau}\\n{: faq} \\n\\n To see your monthly active users (MAU) do the following:\\n1.  Sign in to https://cloud.ibm.com\\n1.  Click on the  Manage  menu, then choose  Billing and usage .\\n1.  Click on  Usage .\\n1.  For {{site.data.keyword.conversationshort}}, select  View Plans .\\n1.  Under Time Frame, select the month you need.\\n1.  Select your Plus plans or Plus Trial plans to see monthly active users and the API calls. \\n  ']\n",
      "Name = [' \\n \\n stream_size 11338  \\n X-Parsed-By org.apache.tika.parser.DefaultParser  \\n X-Parsed-By org.apache.tika.parser.csv.TextAndCSVParser  \\n stream_content_type application/txt  \\n Content-Encoding UTF-8  \\n resourceName /root/IBM Medium blog/Watson Assistant API: From V1 to\\xa0V2.txt  \\n Content-Type text/plain; charset=UTF-8  \\n  \\n \\n   Watson Assistant API: From V1 to\\xa0V2 The new Watson Assistant experience is powered by a redeveloped… Watson Assistant is leaning hard into the new experience, and a key element of the new information architecture is the V2 API. Lots of exciting new features to manage your assistant’s lifecycle, and bring value to your clients, are available through the new experience and the V2 API. \\n If you find yourself stuck using the V1 API because you fear what changes lie lurking in the new experience, I’m going to, hopefully, illustrate how to leverage the V2 API in your application and give you the confidence to make the switch! \\n Let’s review how to get up and running with the V1 API. We will step through, in painstaking detail, what requests and responses are involved in creating your workspace, ensuring the natural language understanding (NLU) models have been successfully trained, and leveraging the models to classify utterances provided by a user. That journey looks like the following: \\n A workspace is the fundamental building block of the V1 API. All training data used in intent classification and entity detection is provided as part of a workspace. \\n To get started, a user must create a workspace with relevant data. \\n Once a workspace is created, the system will begin to train a set of machine learning (ML) and NLU models based on the included training data. Before the /message endpoint can be used, we want to ensure the NLU models are Available. \\n Once the NLU models areAvailable, we are ready to invoke the /message API. A V1 /message API request is composed of three types of information: \\n input determines how the system processes the request. It contains the “utterance” of the user (input.text) plus optional flags that influence the processing or response of the API call. \\n context is additional information that can be associated with the multi-turn conversation throughout its duration and be leveraged by client applications. It does NOT influence the behavior of Watson Assistant when processing a request. \\n user_id is a unique identifier for a given user interacting with your assistant. While the system provides a randomly generated string if this value is not specified, it is a good idea for your system to identify users. The majority of Watson Assistant plans are billed based on “users” as identified by this attribute — so relying on randomly generated identifiers can lead to unnecessary charges to your IBM Cloud account! \\n For an initial /message API request, it’s likely context is not very interesting. \\n The response for the above request contains a lot more information because of the system processing the message. \\n To help understand what is being returned, let’s break the response into two groups: everything except context and context. \\n First, the non-context information: \\n Now, the context information: \\n Based on the example above, any subsequent /message API request needs to “carry forward” the context. While user defined attributes can be modified, conversation_id, system, and metadata should be preserved. \\n This propagation of context into a subsequent /message API request is what we refer to as V1 /message being “stateless”. The system does not remember any stateful information from one /message API request to the next, meaning all the necessary information must be provided by the client to advance a multi-turn conversation. \\n The response for the above request contains information similar to what we saw on the initial request, but relevant to the specific point of the conversation based on the dialog_nodes configuration. \\n Again, we will first look at the non-context information: \\n And now, context. \\n Congratulations, you are now intimately familiar with the structure of the V1 /message API! \\n Let’s “up the ante” and take a similarly analytical approach to the V2 API. \\n Building on the prior section, a comparable V2 workflow has an additional step. First, an assistant is created (which automatically creates a draft and liveenvironment), then an actions skill. \\n Then, we are on more familiar footing: the training status needs to be checked to confirm various NLU models are available before user utterances can be analyzed. Again, for the “picture people” out there, the following diagram illustrates what we are going to work through. \\n While in V1 a workspace was the fundamental building block of the API, the V2 API has been designed to handle more complex use cases. The V2 API works seamlessly over a variety of channels, integrations, and customer-defined webhooks. With extensibility a design point, the API can more easily adapt to future functional enhancements with minimal effort required for customers to adopt. \\n In V2, the fundamental building blocks are skills and environments. \\n The assistant is then the container for skills, environments, and other resources that help create and manage a production assistant. Again, the assistant does not orchestrate messages — that role is served by an environment. \\n Assistants define the scope in which a skill may be accessed. For skills to be invoked via /message, a skill must be assigned to an environment. This relationship between a skill and an environment is referred to in the API as a skill reference. \\n To get started in V2, a user need only create an assistant. Once the assistant is created, the system will automatically : \\n While all this might seem like information overload, getting up and running has never been easier through the New Watson Assistant experience! As we dive into what this looks like from an API experience to compare and contrast with the V1 perspective, please note not all the APIs I reference here are publicly available (yet). For those that are not publicly available, the Watson Assistant Tooling serves those use cases well. \\n ℹ️ The following use case, for simplicity, will only leverage the draft environment. Working amongst different environments within an assistant will be covered in a later article. \\n First, we create an assistant: \\n With an assistant created, we can now add a dialog skill to do a more apples-to-apples comparison to the V1 scenario above. \\n While the New Watson Assistant experience prefers the actions skill as the training data container, a dialog skill can be added alongside the actions skill to help support established systems. When a dialog skill is active on the assistant, it acts the primary skill and can delegate to the actions skill where appropriate. \\n As with V1, you want to make sure your skill has finished training before invoking the V2 /message API. While the Watson Assistant Tooling provides a beautiful banner to keep you informed, a peek behind the curtain from an API standpoint looks like this: \\n With our assistant now configured with a properly trained dialog skill, we are ready to invoke the V2 /message API! To appreciate the differences in V2, we will replicate the same scenario we played out for the V1 API analysis. \\n Before we start to to dive into JSON payloads again, I want to briefly call out another important difference in the V2 API. The /message API in V2 comes in 2 different varieties: \\n Please note the following examples will leverage the stateless variant of the V2 /message API. Now, lets starts to see what this looks like in practice by sending our initial call! \\n ⚠️ One last explanatory interlude: \\n In the following screenshots, a discerning reader may notice and wonder why we are providing the environment_id value in the v2/assistants/{id}/message endpoint. \\n This is supported in order to allow existing customers to adopt the the new Watson Assistant experience with limited changes. In time, we will be rolling out a POST v2/assistants/{assistant_id}/environments/{environment_id}/message endpoint that more naturally maps to the new data model. \\n Let’s dive into the response. Again, as we did with V1, let’s break the response into two groups: everything except context and context. \\n With the V1 response in mind, there are a few key differences to call out here: \\n Next, the context information: \\n This is a significant departure from the V1 context but enables greater extensibility and coherency. Let’s look into what is going on here: \\n As with the V1 API, any subsequent /message API request needs to “carry forward” the context. While context.skills.* skill.user_defined attributes can be modified, context.global.system and context.skills.* skill.system values should be preserved. \\n This example, in order to align with what was outlined in the V1 API, leverages the V2 stateless /message API. As such, propagation of context into a subsequent /message API request is still necessary. Out of scope to this article, there is also a stateful variant of the /message API available in V2. \\n Once again, we will first look at the non-context information of the response: \\n The explanation of the above attributes is identical to the analysis of initial V2 /message API invocation. As with V1, the actions attribute present in the output attribute of the response is due to a Dialog Callout I have configured on the dialog_node of the dialog skill that handled this request. \\n And, lastly, the context information. \\n No real surprises here. The explanation of the above is identical to the analysis of initial V2 /message API invocation. \\n Pat yourself on the back. That was an intensive API overview. \\n Let’s summarize everything we have discussed and highlight some additional differences not covered in the trivial example above. \\n Following the Pareto Principle, it is not unreasonable to assume this review of the differences between V1 and V2 API behavior covers 80% of the use cases a client would be interested in. That said, failure to appreciate differences on (API) paths less travelled can still result in a very frustrating debugging experience for a Watson Assistant customer, and potentially an unhappy user of that customer’s assistant. \\n In the interest of complete disclosure, the following tables outline all the differences you can expect on request and response payloads: \\n Until now, we have focused on the major API version differences. However, there is an additional piece to this puzzle: the minor API version. While minor versions are “tweaks” (whereas major versions are typically complete overhauls), failure to account for minor API version differences can still break your application! \\n It’s not far-fetched to think, for an established customer that has been happy with Watson Assistant, that a given client application is not using the latest supported minor API version. While minor API version releases are always documented in the IBM Watson Assistant Release Notes, there is a lot of other information unrelated to the API that is documented. This makes it sometimes tricky to identify what you should be aware of when upgrading from an old minor version. \\n Lucky for you, I have done all the necessary scrolling and squinting required to summarize the relevant API changes below! Changes come in a couple different flavors: \\n I have tried to explicitly call out what type of changes occur in each API version in the table below.- The Release Notes column is a link to the official Watson Assistant Release Notes for the given API version. \\n \\n  ']\n",
      "Name = [' \\n \\n stream_size 10967  \\n X-Parsed-By org.apache.tika.parser.DefaultParser  \\n X-Parsed-By org.apache.tika.parser.csv.TextAndCSVParser  \\n stream_content_type application/txt  \\n Content-Encoding UTF-8  \\n resourceName /root/IBM Medium blog/Refining and Improving the Watson Assistant User Experience.txt  \\n Content-Type text/plain; charset=UTF-8  \\n  \\n \\n   Refining and Improving the Watson Assistant User Experience We’ve made big design and UX changes\\xa0to… The balance between functionality and usability can be a struggle for enterprise products. As Watson Assistant has grown and progressed over the last decade, the product has become more capable, but the refinement of the end-to-end experience hasn’t always kept pace. \\n Feedback from our users helped us recognize that we were in need of a more streamlined, user-friendly, user-focused experience that would make implementing and using Watson Assistant not just approachable but delightful. This was particularly important for new users who may have been intimidated by the complex, technical nature of artificial intelligence, machine learning, and natural language processing. \\n We also recognized that, as designers, we wouldn’t be able to do this on our own. Design thinking and the Agile development process encourage communication and collaboration, and we saw that these approaches were key to delivering an outstanding experience for our users alongside a remarkably capable product. \\n The first step was to align with our product management counterparts to develop design roadmaps and research plans. Because our product managers are focused on the users first and work regularly to nurture those relationships, we felt like we had an ongoing open line of communication that made getting feedback from our users easier and faster. \\n Likewise, we partnered with our developers and engineers to completely revamp Watson Assistant’s interaction models. We made a point to work in a more iterative way (we call it The Loop) that encouraged constant feedback and reflection. \\n Until recently, setting up Watson Assistant typically meant that a team of people from a variety of roles would need to play a part. Subject matter experts were needed to provide the material from which Watson Assistant would retrieve its answers. Designers weighed in on branding, interaction patterns, and the overall look and feel. Engineers would take the content and program it as a dialog tree with choices and dead ends, and front-end developers would plug it all into a website’s HTML, CSS, Javascript, and backend architecture. \\n As we were planning the recent improvements, we heard from the people on these teams. They were mostly happy with the end product, but getting to the finish line was typically fraught with problems. If changes needed to be made, they’d have to start the process from the beginning. And for the teams that didn’t have the resources or the staffing to dedicate to every role, things were more complicated and time-consuming. \\n According to our UX Architect Tom Roach, “We needed to find a way to help [users] get to that point radically faster and radically cheaper. They need to be able to build an assistant in a week, not a year.” \\n We made it a priority to design the product so that any person with the subject matter knowledge could build directly in Watson Assistant. The fact is that the people building Watson Assistant and working on it every day are coders and non-coders alike, and we wanted to make it welcoming to users of all disciplines and skill levels. \\n “[Users] need to be able to build an assistant in a week, not a year. “ \\n — UX Architect Tom Roach \\n These people were at the center of our research, our planning, and our designing over the past year plus. From the research, we created personas that represented our stakeholders at every level, and those personas were at the heart of every design decision we made. Every change and every tweak had to be in the best interest of “Tanya”, the subject matter matter expert (and the person building out the content within Watson Assistant), because she had to deliver results for “Cade”, the end user (the person interacting with the chatbot on an organization’s website). Tanya also had to make sure she was able to clearly communicate her successes to the rest of her team: “Paula”, the product manager; “Dinesh”, the data scientist; and “Deb”, the front-end developer. \\n Our user research lead Ashwini Kamath and the rest of the research team sought to build on the work done by teams in Watson Assistant’s past: “We recognized that these [users] were not working by themselves. They were working in partnership with these other people [on their team]. The subject matter expert is the one that has the knowledge … so we should be enabling them. \\n “We did a lot of work around understanding what Tanya’s needs were, what her workflow looked like, what did the collaboration between her and her developer look like. I think, at the heart of it, they realized that the functionality was powerful. We just needed to package it in a way that it would be understandable and usable for a wider audience.” \\n “The subject matter expert is the one that has the knowledge … so we should be enabling them.” \\n — UX research lead Ashwini Kamath \\n With all of these personas in mind and with a great deal of user research and feedback along the way, we’ve made big design and user experience (UX) changes to Watson Assistant so that it’s more user friendly for all types of users; less complicated to get started and to maintain; and easier to extend, integrate, understand, and improve. \\n We want teams and individuals to feel empowered, enabled, and capable when it comes to creating their new assistant. To help with that, we’ve made it easier for users of all experience levels to design and create an assistant in a way that meets their users’ needs as quickly as possible. \\n Users are presented with a dashboard screen that’s been redesigned so that’s it’s easy to track the design and deployment process. It’s now much easier to see what you’ve done and what remains, and it follows the order of the steps we recommend to get your assistant up and running. \\n Watson Assistant now speaks in actions and steps rather than in a tree-based “if-then” system. The process is now akin to writing dialog or scripting a real-life conversation. It’s less complicated and more straightforward for subject matter experts to understand if they choose to (or have to) create their assistant without relying on the help of a developer. \\n We did this because we understand how intimidating topics like machine learning and conditional logic can be for both new and experienced subject matter experts. By making the creation flow feel more like a conversation and less like a logic tree, we hope that assistant creators feel more comfortable and capable when creating the actions and steps their users need. \\n One of the existing features of Watson Assistant that we’ve kept intact is the ability to customize and personalize the look and feel of your assistant without having to write or edit code. \\n We know that creating and launching an assistant from scratch takes time and money. That’s why we’ve included this template built with the same Carbon design system we use internally at IBM. If you can devote fewer resources to coding and deploying your assistant, you’re free to invest more resources in the things that make it your own like the content and the external channel integrations. \\n We heard from users that they wanted the option of seeing their assistant in action without having to go through the process of submitting pull requests and reviewing a litany of small changes. In addition, we wanted subject matter experts to have the freedom to fine tune their content and conversations alongside their assistant instead of having to navigate a codebase or a complex set of if-then statements. \\n At least 85% of users we tested had difficulty telling the difference between the old preview page and the preview panel in our new Actions workflow, so to show Watson Assistant as a true preview, we’ve superimposed the web chat portion of the assistant on a webpage instead of a blank screen. \\n This new preview page takes the assistant you’ve designed and lets you see how it would look and perform. This is where you can easily test and review your conversations, actions, and steps. Once you’re satisfied, you can share your work with others quickly via a public URL. In the coming weeks, we’ll be improving this feature even more: you’ll be able to see what your assistant looks like on your website. \\n For most users, Watson Assistant is more than just a chatbot on a website. Most implementations are connected to other channels like the phone integration, WhatsApp, and Facebook Messenger. Our users told us that these essential connections weren’t always easy to find, so we added a link directly to the integration catalog if you want to connect (and then preview) additional assistant channels. \\n Early designs that we tested with our users indicated that when an assistant’s live and draft environments were both on the same page in the workflow, users were unclear how the two differed. This drove us to design two separate pages so that users can see clearly where the live environment lives and where the draft environment lives. We tested this and found that all users were able to find both pages and understood that they were separate. This means that once you’re ready to deploy or make changes, you can do so without fear of breaking your assistant. \\n In addition, the new publishing model incorporates versioning, so you can review (or deploy) your changes in both draft and live environments with the option to revert if necessary. We’ve added change tracking as well, and there’s an option to save in-progress changes in case you need to take a break and return later. \\n We’ve also redesigned the analytics dashboard in Watson Assistant to make it much easier to find and fix the gaps in your new assistant. In addition to being able to see how your assistant is performing at a glance, you can now dig deeper into specific conversations and actions to see what’s working and, more importantly, where your users are getting lost or frustrated. \\n The potential complexity of data and analytics can be overwhelming. We sought to make these elements as approachable as possible in order to help subject matter experts make rapid yet effective improvements for their users. A more approachable set of analytics that’s easier to understand and act upon means more opportunities to improve your assistant’s performance. \\n We’ve spent more than a year working on these improvements, and we hope that you’ll both enjoy what we’ve done and help us continue to improve. Give the new Watson Assistant a try and let us know what you think. We’d love your feedback. \\n Thank you to all of the designers (and I would be remiss if I did not also thank those in product management, content, development, marketing and more!) that worked so hard to make this happen! \\n Thanks to Will Fanguy for help with the content and design of this post. \\n \\n  ']\n",
      "Name = [' \\n \\n stream_size 13126  \\n X-Parsed-By org.apache.tika.parser.DefaultParser  \\n X-Parsed-By org.apache.tika.parser.csv.TextAndCSVParser  \\n stream_content_type application/txt  \\n Content-Encoding UTF-8  \\n resourceName /root/ibm_developer/Implement voice over web chat using Watson Assistant.txt  \\n Content-Type text/plain; charset=UTF-8  \\n  \\n \\n  updated_date: 2023-02-13T00:00:00\\npublish_date: 2023-02-13T00:00:00\\ntitle: Implement voice over web chat using Watson Assistant\\nupdated_date: 2023-02-13T00:00:00\\nsub_title: Connect Watson Assistant with Wikipedia and use your voice to communicate questions and responses\\ncategories: [\\'embeddable-ai\\', \\'watson-assistant\\', \\'watson-apis\\', \\'natural-language-processing\\']\\nContent: <p xmlns=\"http://www.w3.org/1999/xhtml\">A voice chatbot is a virtual assistant that hears, perceives, and responds to your voice input. Voice chatbots read your voice input, analyze the task at hand, and respond with relevant answers. These conversational interfaces understand natural language because they use natural language processing and natural language understanding and are trained to respond in natural language.</p>\\n<p>IBM Watson Assistant provides a question and answer system that can answer questions that are posed in natural language. You can prepare the questions per your business requirements, using different intents. You can then configure Watson Assistant with voice functions. For that, you use three IBM services.</p>\\n<ol>\\n<li><a href=\"https://www.ibm.com/products/watson-assistant\" rel=\"noopener noreferrer\" target=\"_blank\">Watson Assistant</a></li>\\n<li><a href=\"https://www.ibm.com/cloud/watson-speech-to-text\" rel=\"noopener noreferrer\" target=\"_blank\">Watson Speech to Text</a></li>\\n<li><a href=\"https://www.ibm.com/cloud/watson-text-to-speech\" rel=\"noopener noreferrer\" target=\"_blank\">Watson Text to Speech</a></li>\\n</ol>\\n<p>After configuring and starting the required services, you can integrate the voice solution with Watson Assistant.</p>\\n<p>This tutorial explains how to implement voice over web chat using Watson Assistant, which means interacting with a chatbot with your voice instead of typing. The tutorial demonstrates how to connect Watson Assistant with Wikipedia for question/answer responses by using voice to communicate questions and responses.</p>\\n<p>For this solution, there is a <a href=\"https://github.com/ibm-build-lab/Watson-Assistant/tree/main/voice-over-watson-assistant\" rel=\"noopener noreferrer\" target=\"_blank\">GitHub repository</a> that contains HTML code for a voice UI microphone button along with a Watson Assistant chatbot and JavaScript code for sending the request to the Watson Speech to Text and Watson Text to Speech Services.</p>\\n<h2 id=\"prerequisites\">Prerequisites</h2>\\n<ul>\\n<li>An <a href=\"https://cloud.ibm.com/login?cm_sp=ibmdev-_-developer-tutorials-_-cloudreg\" rel=\"noopener noreferrer\" target=\"_blank\">IBM Cloud</a> account</li>\\n<li>A Watson Assistant instance</li>\\n<li>A Watson Speech to Text instance</li>\\n<li>A Watson Text to Speech instance</li>\\n</ul>\\n<h2 id=\"estimated-time\">Estimated time</h2>\\n<p>It should take you approximately 30 minutes to complete the tutorial.</p>\\n<h2 id=\"steps\">Steps</h2>\\n<p>Now, let\\'s set up the voice chatbot on your local system. You first download all of the configuration files, including the HTML file for the voice bot, JavaScript files for calling the APIs, and other dependencies from the parent directory. You configure all API keys and the path as described in this tutorial. Then, you launch the HTML file in the browser.</p>\\n<p>This Voice-over-Watson configuration with Wikipedia APIs uses cloud functions that are known as webhooks. You can ask a question that uses keywords such as \"tell me about\" or \"what is,\" and the chatbot tries to find a short summary from Wikipedia. The best thing about Voice-over-Watson is that you can get answers in speech.</p>\\n<h3 id=\"voice-function-integration-with-watson-assistant\">Voice function integration with Watson Assistant</h3>\\n<p>Connect Watson Assistant with the voice UI so that you can communicate with the chatbot by using the microphone button.</p>\\n<h4 id=\"step-1-download-files\">Step 1. Download files</h4>\\n<ol>\\n<li><a href=\"https://github.com/ibm-build-lab/Watson-Assistant/tree/main/voice-over-watson-assistant\" rel=\"noopener noreferrer\" target=\"_blank\">Download</a> all of the configuration files, including the HTML file for the voice bot, the JavaScript files for calling the APIs, and other dependencies from the parent directory.</li>\\n</ol>\\n<h4 id=\"step-2-make-configuration-changes-to-html-file\">Step 2. Make configuration changes to HTML file</h4>\\n<p>There are two JavaScript files in the HTML code. The first file is recoder.js, which contains functions for recording the voice. The second file is voice.js, which sends the voice file to the Watson Speech to Text Service and gets the result from the Watson Text to Speech Service. The file also contains a microphone icon with Watson Assistant.</p>\\n<ol>\\n<li><p>Paste the following code into the body section of the main HTML file.</p>\\n<div class=\"bx--snippet bx--snippet--multi\" data-code-snippet=\"\"><div aria-label=\"Code Snippet Text\" class=\"bx--snippet-container\"><pre><code>  &lt;img class=\"voice\" id=\"recordVoice\" src=\"./img/mic.png\"&gt;\\n  &lt;div id=\"log\"/&gt;\\n  &lt;script src=\"scripts/recorder.js\"&gt;&lt;/script&gt;\\n  &lt;script src=\"scripts/voice.js\"&gt;&lt;/script&gt;\\n</code></pre></div><button aria-label=\"Copy code\" class=\"bx--snippet-button\" data-copy-btn=\"\" tabindex=\"0\" type=\"button\"><svg class=\"bx--snippet__icon\" height=\"16\" viewbox=\"0 0 16 16\" width=\"16\" xmlns=\"http://www.w3.org/2000/svg\"><path d=\"M1 10H0V2C0 .9.9 0 2 0h8v1H2c-.6 0-1 .5-1 1v8z\"></path><path d=\"M11 4.2V8h3.8L11 4.2zM15 9h-4c-.6 0-1-.4-1-1V4H4.5c-.3 0-.5.2-.5.5v10c0 .3.2.5.5.5h10c.3 0 .5-.2.5-.5V9zm-4-6c.1 0 .3.1.4.1l4.5 4.5c0 .1.1.3.1.4v6.5c0 .8-.7 1.5-1.5 1.5h-10c-.8 0-1.5-.7-1.5-1.5v-10C3 3.7 3.7 3 4.5 3H11z\"></path></svg><div class=\"bx--btn--copy__feedback\" data-feedback=\"Copied!\"></div></button><button class=\"bx--btn bx--btn--ghost bx--btn--sm bx--snippet-btn--expand\" type=\"button\"><span class=\"bx--snippet-btn--text\" data-show-less-text=\"Show less\" data-show-more-text=\"Show more\">Show more</span><svg aria-label=\"Show more icon\" class=\"bx--icon-chevron--down\" height=\"7\" viewbox=\"0 0 12 7\" width=\"12\"><title>Show more icon</title><path d=\"M6.002 5.55L11.27 0l.726.685L6.003 7 0 .685.726 0z\" fill-rule=\"nonzero\"></path></svg></button></div>\\n<p xmlns=\"http://www.w3.org/1999/xhtml\"> <img alt=\"Pasting code into HTML\" src=\"https://developer.ibm.com/developer/default/tutorials/implement-voice-over-webchat-using-watson-assistant/images/adding-html.png\"/></p>\\n<p> <strong>Note:</strong> Ensure that all required files are available under the specified path.</p>\\n</li>\\n<li><p>Paste the following code into the HTML file so that you can set the design and style according to the requirements.</p>\\n<div class=\"bx--snippet bx--snippet--multi\" data-code-snippet=\"\"><div aria-label=\"Code Snippet Text\" class=\"bx--snippet-container\"><pre><code>&lt;head&gt;\\n      &lt;title&gt;Home Lending pal&lt;/title&gt;        \\n      &lt;link rel=\"stylesheet\" href=\"./css/style.css\"&gt;\\n&lt;/head&gt;\\n</code></pre></div><button aria-label=\"Copy code\" class=\"bx--snippet-button\" data-copy-btn=\"\" tabindex=\"0\" type=\"button\"><svg class=\"bx--snippet__icon\" height=\"16\" viewbox=\"0 0 16 16\" width=\"16\" xmlns=\"http://www.w3.org/2000/svg\"><path d=\"M1 10H0V2C0 .9.9 0 2 0h8v1H2c-.6 0-1 .5-1 1v8z\"></path><path d=\"M11 4.2V8h3.8L11 4.2zM15 9h-4c-.6 0-1-.4-1-1V4H4.5c-.3 0-.5.2-.5.5v10c0 .3.2.5.5.5h10c.3 0 .5-.2.5-.5V9zm-4-6c.1 0 .3.1.4.1l4.5 4.5c0 .1.1.3.1.4v6.5c0 .8-.7 1.5-1.5 1.5h-10c-.8 0-1.5-.7-1.5-1.5v-10C3 3.7 3.7 3 4.5 3H11z\"></path></svg><div class=\"bx--btn--copy__feedback\" data-feedback=\"Copied!\"></div></button><button class=\"bx--btn bx--btn--ghost bx--btn--sm bx--snippet-btn--expand\" type=\"button\"><span class=\"bx--snippet-btn--text\" data-show-less-text=\"Show less\" data-show-more-text=\"Show more\">Show more</span><svg aria-label=\"Show more icon\" class=\"bx--icon-chevron--down\" height=\"7\" viewbox=\"0 0 12 7\" width=\"12\"><title>Show more icon</title><path d=\"M6.002 5.55L11.27 0l.726.685L6.003 7 0 .685.726 0z\" fill-rule=\"nonzero\"></path></svg></button></div>\\n<p xmlns=\"http://www.w3.org/1999/xhtml\"> <img alt=\"Including CSS file\" src=\"https://developer.ibm.com/developer/default/tutorials/implement-voice-over-webchat-using-watson-assistant/images/css-html.png\"/></p>\\n</li>\\n<li><p>Launch the HTML file so that the microphone icon appears near Watson Assistant. When you click the microphone button, you can use your voice to communicate with the chatbot.</p>\\n</li>\\n</ol>\\n<h4 id=\"step-3-set-up-all-required-services-with-the-voice-chatbot\">Step 3. Set up all required services with the voice chatbot</h4>\\n<p>To set up Watson Assistant, you add details to the voice.js file. The following examples show the services and keys. You replace the example services and keys with your own Service URL and keys.</p>\\n<ol>\\n<li><p>Set the <code style=\"font-family:monospace;font-size:1rem\">IntegrationID</code>, <code style=\"font-family:monospace;font-size:1rem\">Region</code>, and <code style=\"font-family:monospace;font-size:1rem\">ServiceInstanceID</code> of Watson Assistant. All of the required details are available in the embedded Watson Assistant code.</p>\\n</li>\\n<li><p>Enter your API key (base64 encoded).</p>\\n<p> <img alt=\"Adding details to voice.js file\" src=\"https://developer.ibm.com/developer/default/tutorials/implement-voice-over-webchat-using-watson-assistant/images/add-details-voice-js.png\"/></p>\\n</li>\\n<li><p>Set up the URL of Watson Speech to Text in the <code style=\"font-family:monospace;font-size:1rem\">SSTInvoke</code> function, and set up the URL of Watson Text to Speech in the <code style=\"font-family:monospace;font-size:1rem\">PlayAudio</code> function in the Voice.js file. You also set up the API key (base64 encoded).</p>\\n<p> <img alt=\"Set URL of Speech to Text\" src=\"https://developer.ibm.com/developer/default/tutorials/implement-voice-over-webchat-using-watson-assistant/images/speech-to-text-url.png\"/></p>\\n</li>\\n</ol>\\n<p>After setting up all of the required APIs and keys, all of the services are integrated into the main HTML file. No additional embedded Watson Assistant code is required. Watson Assistant can be embedded directly with JavaScript and deployed by using HTML.</p>\\n<h3 id=\"watson-assistant-voice-chatbot-with-wikipedia-api-integration\">Watson Assistant voice chatbot with Wikipedia API integration</h3>\\n<p>Now, let\\'s connect the voice chatbot to Wikipedia. This way, you can ask the Watson Assistant voice chatbot any question, and the chatbot tries to find the corresponding answer from Wikipedia using the webhook API function. You can ask anything with the tag words “What is“ or \"tell me about.” After following the previous configuration steps, you should have an HTML file that you can open.</p>\\n<ol>\\n<li><p>Prepare the user input parameters for the Watson Assistant dialog box so that the webhook function fetches the text that is entered by the user in the chatbot. Enter <code style=\"font-family:monospace;font-size:1rem\">\"&lt;?input_text?&gt;\"</code> in the <code style=\"font-family:monospace;font-size:1rem\">user_input</code> parameter field so that the user input is passed to the webhook function.</p>\\n<p> <img alt=\"Entering Key/Value parameters\" src=\"https://developer.ibm.com/developer/default/tutorials/implement-voice-over-webchat-using-watson-assistant/images/chatbot-dialog.png\"/></p>\\n</li>\\n<li><p><a href=\"https://github.com/IBM/IBMDeveloper-recipes/blob/main/connect-watson-assistant-with-wikipedia-api-via-cloud-functions/index.md\" rel=\"noopener noreferrer\" target=\"_blank\">Prepare the webhook function</a> that can access the Watson Assistant user input and pass it to the Wikipedia API. The webhook function takes the user input and removes some unwanted words like \"What is\" and \"Tell me about it,\" extracts important words that the user wants to know about, and passes them to the Wikipedia API.</p>\\n<p> <img alt=\"Preparing the webhook function\" src=\"https://developer.ibm.com/developer/default/tutorials/implement-voice-over-webchat-using-watson-assistant/images/wiki-webhook.png\"/></p>\\n</li>\\n</ol>\\n<p>The webhook Wikipedia API helps to search for important terms on Wikipedia and gives a short summary about it. The short summary is the short description about the topic that the user is looking for. The following image shows the short summaries from Wikipedia for a couple of \"what is\" questions.</p>\\n<p><img alt=\"Watson Assistant chatbot screen\" src=\"https://developer.ibm.com/developer/default/tutorials/implement-voice-over-webchat-using-watson-assistant/images/watson-assistant.png\"/></p>\\n<h2 id=\"summary\">Summary</h2>\\n<p>This tutorial explained how to how implement voice over web chat by using Watson Assistant. The tutorial demonstrated how to connect Watson Assistant with Wikipedia for question and answer responses by using your voice to communicate questions and responses.</p>\\n<p>To see how this solution works, look at the <a href=\"https://htmlpreview.github.io/?https://github.com/ibm-build-lab/Watson-Assistant/blob/main/voice-over-watson-assistant/Voice_bot.html\" rel=\"noopener noreferrer\" target=\"_blank\">demo</a>. You can get more information about <a href=\"/technologies/embeddable-ai\">Embeddable AI technology</a> on the IBM Developer site.</p> \\n  ']\n",
      "Name = [' \\n \\n stream_size 2917  \\n X-Parsed-By org.apache.tika.parser.DefaultParser  \\n X-Parsed-By org.apache.tika.parser.html.HtmlParser  \\n stream_content_type application/html  \\n Content-Encoding ISO-8859-1  \\n resourceName /root/ibm_cloud_docs_process2/watson-assistant/manage-flow-overview.html  \\n Content-Type text/html; charset=ISO-8859-1  \\n  \\n \\n   \\n\\n copyright:\\n  years: 2018, 2021\\nlastupdated: \"2021-08-31\" \\n\\n subcollection: watson-assistant \\n\\n  \\n\\n {:shortdesc: .shortdesc}\\n{:new_window: target=\"_blank\"}\\n{:external: target=\"_blank\" .external}\\n{:deprecated: .deprecated}\\n{:important: .important}\\n{:note: .note}\\n{:tip: .tip}\\n{:pre: .pre}\\n{:codeblock: .codeblock}\\n{:screen: .screen}\\n{:javascript: .ph data-hd-programlang=\\'javascript\\'}\\n{:java: .ph data-hd-programlang=\\'java\\'}\\n{:python: .ph data-hd-programlang=\\'python\\'}\\n{:swift: .ph data-hd-programlang=\\'swift\\'} \\n\\n {{site.data.content.classiclink}} \\n\\n Overview: Managing conversation flow \\n\\n {: #manage-flow} \\n\\n A conversation with your assistant might take many different paths, depending on what information your customers provide and what decisions they make. You can build your assistant so it guides each customer along the fastest path to the right solution.\\n{: shortdesc} \\n\\n After your assistant\\'s greeting, each conversation begins with a customer question or request. The first step in guiding your customer to the right solution is for the assistant to recognize what the customer is asking for (based on the example input you have specified), triggering the appropriate action. \\n\\n There are several ways in which the assistant can adapt the conversation to guide the customer down the right path: \\n\\n \\t \\n  rect /docs/watson-assistant?topic=watson-assistant-step-conditions  Step conditions  : Each step in an action can be configured to execute only under certain conditions. Therefore, one action can handle multiple variations of the same issue, depending on conditions at run time. \\n\\n \\n\\t \\n  rect /docs/watson-assistant?topic=watson-assistant-skip-steps  Skipping steps  : Each step in an action can be skipped, if the information it asks for has already been provided. \\n\\n \\n\\t \\n  rect /docs/watson-assistant?topic=watson-assistant-step-what-next  Choosing what to do next  : Instead of executing the steps in an action in sequential order, you can modify the sequence of steps by defining what happens when a step is complete. \\n\\n \\n\\t \\n  rect /docs/watson-assistant?topic=watson-assistant-change-topic  Changing the conversation topic  : Your customer might change the topic of the conversation in the middle of an action. Your assistant can respond to this when appropriate. \\n\\n \\n\\t \\n  rect /docs/watson-assistant?topic=handle-errors  Handling errors  : Sometimes customers can run into problems getting the assistant to do what they want. You can configure how the assistant recovers from these situations to get back on track. \\n\\n \\n \\n\\n By using these mechanisms, you can build an assistant that is flexible and responds dynamically to quickly get your customers to the solutions they need. \\n  ']\n",
      "Name = [' \\n \\n stream_size 17166  \\n X-Parsed-By org.apache.tika.parser.DefaultParser  \\n X-Parsed-By org.apache.tika.parser.csv.TextAndCSVParser  \\n stream_content_type application/txt  \\n Content-Encoding UTF-8  \\n resourceName /root/IBM Medium blog/Build an AI Personal Trainer with IBM Watson Assistant - Part\\xa01.txt  \\n Content-Type text/plain; charset=UTF-8  \\n  \\n \\n   Build an AI Personal Trainer with IBM Watson Assistant - Part\\xa01 Quarantine got you feeling the gym\\xa0rat… Staying healthy and fit is a critical habit to build (especially in the midst of a global pandemic). Unfortunately, without the amenities of our everyday fitness routines— lavish community gyms, expert personal trainers, even that one buddy who spends way too much time working out— staying in shape can be a struggle for many. \\n But what if you could have 24/7 access to expert-level, on-demand personal training advice, as quickly and easily as sending a text message? Thanks to increasingly sophisticated conversational AI technologies, it’s now possible to build your very own virtual workout advisor in just minutes (even if you have no clue how to code). \\n In this tutorial, we’re going to walk through the process of creating an AI personal trainer using IBM’s Watson Assistant. With a graphical user interface and intuitive mechanisms for mapping dialogue flows, Watson Assistant is one of the most accessible conversational AI platforms on the market, whether you’re a tech-savvy programmer or an ordinary gym-goer. \\n To ensure our bot only gives its users high-quality exercise tips, we’re going to be leveraging the expertise of my friend Stephon Moise, a Florida-based doctor of physical therapy at the helm of WISE Fitness. Stephon has created a repository of home workout advice that will serve as the basis for our bot’s Q&A dialogue flow. \\n Watson Assistant is one of the many services available on IBM’s public cloud platform, so the first step in creating our chatbot is to sign up and log in. \\n This tab will eventually show us the various cloud services we’ve provisioned (including Watson Assistant). Click Create Resource in the upper-right corner. \\n IBM Cloud boasts 170+ products & services, with dozens of data management, AI, and analytics applications, so be sure to check out everything the platform has to offer. \\n Search the IBM Cloud Catalog for Watson Assistant, and create an instance free-of-charge with the Lite plan. \\n You have the ability to create several assistants within a single instance. After creating your first assistant, provide it with a name and an optional description. Finally, click Create Assistant. \\n Assistants interact with users through a collection of behaviors called Skills. Watson Assistant comes with a library of pre-configured skills for a variety of common enterprise uses cases (customer care, for example). In this tutorial, however, we’ll be creating our own. \\n To begin designing our bot’s conversational abilities, click Add dialog skill. \\n Watson Assistant’s dialog skill is comprised of 3 main components: \\n #Intents: An intent is a collection of similar statements expressed by users, typically representing a desired goal or function. For a retail bot, for example, a customer might be seeking #product-info. \\n @Entities: An entity represents a more specific piece of information in a user request, providing detail and context to the overarching intent (perhaps a particular @product in the retail bot example). \\n Dialog: The dialog element provides a graphical outline of how intents and entities interact, mapping each combination to a customized, automated response. \\n Most users will begin interacting with your bot the same way they would interact with a human— by greeting them! Click Create intent and give it the name #Greetings. \\n To help our bot recognize what a typical greeting might look like, we’ll give Watson some examples to train on. Provide at least 5 to 7 variations of common greetings. \\n After analyzing these examples, our bot will not only be able to recognize those specific phrases as #Greetings — thanks to Watson’s core Natural Language Processing capabilities, it will also be able to more readily identify other variations we haven’t trained on. \\n Now that we’ve created our first Intent, let’s integrate it into our bot’s conversational flow by creating a dialog node. Click Dialog on the menu, then click Add Node. Name the node “Greetings,” and under the subheading “If assistant recognizes,” select your #Greetings Intent. \\n Under the “Assistant responds” subheading, script some replies to be triggered whenever your bot detects a greeting. It could seem a little too robotic if your workout assistant responded the same way every time, so give your bot a few different greetings to cycle through! \\n Your customized dialogue updates in real-time, and we can continuously test out our assistant’s conversational capabilities by clicking the Try it button in the upper-right corner. Open up the chat window and say hello! \\n You’ll notice that, below your chat entries, Watson displays the #Greetings classification. If Watson does not recognize your greeting’s Intent, however, the entry will be marked as “Irrelevant.” \\n Luckily, we can train Watson to recognize these unfamiliar greetings right here in the chat window. Try entering a greeting that you did not include in your initial list of examples. If Watson fails to correctly classify it, simply click to open up the classification’s drop-down menu in the chat, and select the #Greetings Intent. Watson will now recognize this phrase in future instances! \\n Note that after some changes, Watson will need time to train. Watson is still training if the following purple message appears at the top of the chat window. Once this message disappears, the dialog results will reflect your latest edits. \\n Now that we’re familiar with how Intents and Dialog nodes work, let’s teach our assistant about fitness! Navigate back to the Intents window, then click Create intent. \\n Choose a common exercise topic that users might want to ask your bot about (for example, “How long should I rest between sets?”). As was the case with our #Greetings Intent, you’ll also need to provide Watson with 3–5 examples of how this type of question might vary. Our example question, with a few variations, might look something like this: \\n How long should I rest between sets? \\n Should I rest before doing another execise? \\n Rest period? \\n Do I need to rest between each set? \\n Once you’re finished building out your first Intent, go ahead and add a few more (check out the WISE Fitness FAQ Page for a full list of ideas). The more Intents and topics your Workout Advisor is able to recognize, the more useful it will be for your users! \\n After you’ve created a collection of new workout-related Intents for your bot to advise users on, return to the Dialog page to integrate them into your conversational flow. Follow the same process from when you creating your Greetings dialog— selecting Add node, identifying the desired Intent, and providing a scripted response. \\n Feel free to pull answers from the WISE Fitness FAQ, or do a bit of research and customize your own answers. The more details your assistant can provide, the better! As you build out each node, continue to test and experiment with your bot in the Try it window, double-checking to ensure that your Q&A’s are working as expected. \\n By now, your Exercise Advisor is capable of fielding quite a few questions, but your Dialog map is probably looking pretty crowded. As the number of Intents grows, it can become difficult to visualize how a conversation with your bot might flow. Let’s clean things up a bit! \\n Click the Add folder button at the top of the Dialog page. \\n After creating the folder and giving it a name (i.e. “General Workout Questions”), move each of your new exercise-related Dialog nodes by clicking the Actions icon (⋮) and selecting Move. Then, click on your folder and select the To Folder option to place your Dialog node inside. \\n Now you can easily reveal/hide this group of Intents by clicking on the folder icon. Much better! \\n So far, we’ve learned how to create basic Q&A dialog flows using some pretty straightforward Intents. But what if our conversation gets a little more complex? \\n Let’s say our user is looking for some personalized workout recommendations. Inherently, this kind of request can’t be answered with a simple Q&A structure. There’s more context that you need to understand— what kind of equipment do they have available? What muscle group are they targeting? Without getting a little more background, your bot won’t be able to offer a high-quality response. \\n This is where context variables come in. Watson uses context variables to store this kind of situational input (provided by either the user or an external plugin) in order to create a more specific, customized response. \\n Before we create our context variables, however, we’ll need to build out the corresponding Intent and Entities. Begin by creating a “workout recommendations” Intent (i.e. #WorkoutRecs). Follow the same process as before, and include a few variations with alternative phrasings. \\n Sample utterances of this Intent might include: \\n Can you recommend some exercises? \\n What workouts can I do? \\n Any exercise recommendations for a home workout? \\n I need recommendations for my workouts! \\n Next, we’ll add an Entity to help further specify the context of our workout recommendations. In this case, we’ll categorize each exercise by the targeted muscle group. For the sake of simplicity, in this tutorial, we’ll only define 2 broad muscle groups (upper and lower body). Feel free to return to this step later on to create additional muscle groups (and other contextual categories like equipment) to help fine-tune your bot’s recommendation engine. \\n To begin, click Create entity and provide it with the name @musclegroup. \\n Then, name your first @musclegroup value “upper body.” Under the synonyms field, list a series of muscles that would fall into the upper body category (arms, chest, shoulders, etc.). Also include alternative ways that users might phrase the “upper body” category (upper, torso, top half). \\n Once you’re finished listing possible ways a user might request workout recommendations for the “upper body” category, click Add value. Then, repeat this process for the “lower body” value. Provide both muscle names (quads, glutes, calves) and alternative category names (lower, legs, bottom half). \\n Navigate back to the Dialog page and click Add node. Under “If assistant recognizes,” select your #WorkoutRecs Intent. \\n Because we want to differentiate our recommendations based on the desired muscle group, we’ll need to customize this node a little further. Click Customize in the upper right corner of the node, then turn on the Slots and Multiple conditioned responses options. \\n Slots give your bot the ability to collect and store user input. In this case, our slot will store our target muscle group. \\n Multiple conditioned responses will allow you to use the input you’ve captured in your slots to offer different responses, depending on the context. \\n Remember context variables? We’re now going to use our slot to capture the user’s muscle group input (in the form of an Entity) and store it in one of these variables. Context variables are denoted with the $ prefix. \\n Under the Then check for section, fill out the text fields as follows: \\n Check for: @musclegroup \\n Save it as: $musclegroup \\n If not present, ask: What muscle group would you like to work out? \\n Now, the bot should be able to prompt the user for a target muscle group (if not initially provided), then store this information as a context variable for later use in the conversation. \\n You can see your context variable in action by clicking the Try it window and chatting with your bot. After requesting a workout recommendation for “arms,” click Manage Context in the upper right corner. Your bot has stored this input as $musclegroup = upper body! \\n It’s finally time to script exercise recommendations! Scroll down to the Assistant Responds section of your node and click Add response to bring the total number of potential responses to 2. Then, fill out each field as follows: \\n If assistant recognizes: $musclegroup:(upper body) \\n Respond with: For upper body exercises at home, here are a few options you can try: <br/>- Push-Ups (Chest) <br/>- Dips (Triceps) <br/>- Planks (Shoulders) \\n AND \\n If assistant recognizes: $musclegroup:(lower body) \\n Respond with: For lower body exercises at home, here are a few options you can try: <br/> - Body Weight Squat (Hamstrings/Quads) <br/> - Lunges (Hamstrings/Quads) <br/> - Toe Raises (Calves) \\n The <br/> symbol included in the two examples above is what is known as a “break” element in HTML. This particular element allows you to separate your text into sequential lines. There are many other ways you can format your bot’s responses using HTML tags, for example: \\n Feel free to get creative with the look and feel of your bot’s responses— unique formatting can be a great way of injecting a little more personality into the conversation! \\n While testing your bot, one thing you may notice is that context variables are stored for the entire conversation, by default. This can present some issues. For example, if your user asks for upper body workout recommendations, then follows up to ask about recommendations for a different muscle group, your bot will assume they’re still requesting information from the upper body category. \\n Bot: Hello. How can I help you? \\n User: Can you provide me with some workout recommendations? \\n Bot: What muscle group would you like to work out? \\n User: Upper body. \\n Bot: For upper body exercises at home, here are a few options you can try… \\n User: Any recommended exercises for other muscle groups? \\n Bot: For upper body exercises at home, here are a few options you can try… \\n What the bot should do is make another request for a target muscle group. Because the “upper body” context variable is still being stored, however, the bot is unable to recognize the new scope of this question. \\n To address this concern, we’ll construct a Child node to clear our conversation’s context after exercise recommendations are provided. \\n Click the Actions icon (⋮) on your “Workout Recommendations” node, then click Add child node. Name this new node “Clear Context.” Under “If assistant recognizes,” select your $musclegroup context variable. Then, click the Actions icon (⋮) next to the “Assistant responds” section, and open up the Context Editor. \\n After the Context Editor is opened, enter the following values into the fields: \\n Variable: musclegroup \\n Value: null \\n This will clear the $musclegroup variable of any stored values once the node is activated. \\n Finally, under “Assistant responds,” add a friendly follow-up message (“Let me know if you have any other questions!”), inviting the user to make additional queries. \\n In order to trigger our “Clear Context” actions, we’ll have to connect it to our “Workout Recommendations” node. \\n Click back over to the “Workout Recommendations” node, and scroll down to the bottom. Underneath the “Then assistant should” section, select Jump to from the dropdown menu. Finally, click the “Clear Context” node, and select Respond. \\n Your bot’s workout recommendation capabilities should be ready to do some heavy lifting! But one last thing… \\n The most important rule in building a chatbot? Make sure people know they’re interacting with one. From the very beginning of the conversation, users should know there’s not a human on the other end of the line. This sets reasonable expectations for what sort of questions your bot can answer, and helps users feel you’re being transparent about your use of an automated system. \\n Click the Welcome node to begin editing your intro message. Here’s a sample greeting to get you started: \\n Hi there! My name is Flex. I’m a chatbot, and your very own personal workout assistant! I can help answer basic questions about fitness and provide exercise recommendations. What can I do for you today? \\n Congrats— your virtual workout advisor should now be ready for the big leagues! Navigate back to the Assistants page to access your bot’s preview link, and give it a spin. The preview URL is publicly accessible, so be sure to share with your most fitness-savvy friends! \\n We’ve added a lot of cool features today, but don’t stop now! There’s plenty of ways you can continue strengthening your bot’s training capabilities. Now that you’ve got a grip on the basics, here are a few ideas for improvements: \\n Get creative with personalizing your virtual workout advisor, and stay tuned for Part 2 of the tutorial, where we’ll integrate our chatbot with Watson Discovery to extract exercise instructions from the WISE Fitness training manual! \\n Enjoy this tutorial? Learn more about Watson Assistant by checking out our collection of Demos, engaging with us on our developer community, or messaging me directly at parker.merritt@ibm.com. \\n Feel free to connect with me on Twitter and LinkedIn, and be sure to check out other AI-focused articles I’ve written: \\n medium.com \\n medium.com \\n medium.com \\n \\n  ']\n",
      "Name = [' \\n \\n stream_size 14607  \\n X-Parsed-By org.apache.tika.parser.DefaultParser  \\n X-Parsed-By org.apache.tika.parser.csv.TextAndCSVParser  \\n stream_content_type application/txt  \\n Content-Encoding ISO-8859-1  \\n resourceName /root/ibm_developer/Create a retail customer service chatbot.txt  \\n Content-Type text/plain; charset=ISO-8859-1  \\n  \\n \\n  updated_date: 2021-10-04T00:00:00\\npublish_date: 2021-09-22T00:00:00\\ntitle: Create a retail customer service chatbot\\nupdated_date: 2021-10-04T00:00:00\\nsub_title: Add Assistant actions and variables\\ncategories: [\\'conversation\\', \\'watson-assistant\\']\\nContent: <p xmlns=\"http://www.w3.org/1999/xhtml\">Watson Assistant can help you solve a problem by providing an intelligent interface using natural language. You can use the tools provided by the Assistant service with skills that will directly help your customers. The flexibility of the GUI tools and APIs combine to allow you to power applications and tools using AI in simple and powerful ways.</p>\\n<h2 id=\"what-you-re-going-to-learn\">What you\\'re going to learn</h2>\\n<ol>\\n<li><a href=\"#create-the-assistant-service-and-first-assistant\">Create the Assistant service and first Assistant</a></li>\\n<li><a href=\"#create-an-action\">Create an action</a></li>\\n<li><a href=\"#add-actions-with-conditions\">Add actions with conditions</a></li>\\n<li><a href=\"#add-actions-with-variables\">Add actions with variables</a></li>\\n<li><a href=\"#publish-the-changes\">Publish the changes</a></li>\\n<li><a href=\"#conclusion\">Conclusion</a></li>\\n</ol>\\n<h2 id=\"create-the-assistant-service-and-first-assistant\">Create the Assistant service and first Assistant</h2>\\n<ol>\\n<li><p>The first step in using Watson Assistant is creating an instance of the service. You\\'ll do this using <a href=\"https://cloud.ibm.com/catalog/services/watson-assistant?cm_sp=ibmdev-_-developer-tutorials-_-cloudreg\">IBM Cloud</a>. Give your instance a meaningful name. Choose the resource group you wish to belong to, add tags as desired, and click <strong>Create</strong>.</p>\\n<p> <img alt=\"Create assistant instance\" src=\"https://developer.ibm.com/developer/default/tutorials/create-your-first-assistant-powered-chatbot/images/create-assistant-instance.png\"/></p>\\n</li>\\n<li><p>Click <strong>Launch Watson Assistant</strong>.</p>\\n<p> <img alt=\"Launch Watson Assistant\" src=\"https://developer.ibm.com/developer/default/tutorials/create-your-first-assistant-powered-chatbot/images/launch-watson-assistant.png\"/></p>\\n</li>\\n<li><p>From the top drop-down menu, click <strong>Create new +</strong>.</p>\\n<p> <img alt=\"Click create new\" src=\"https://developer.ibm.com/developer/default/tutorials/create-your-first-assistant-powered-chatbot/images/click-create-new.png\"/></p>\\n</li>\\n<li><p>Give the instance a name and optional description, and click <strong>Create Assistant</strong>.</p>\\n<p> <img alt=\"Name and create Assistant\" src=\"https://developer.ibm.com/developer/default/tutorials/create-your-first-assistant-powered-chatbot/images/name-and-create-assistant.png\"/></p>\\n</li>\\n</ol>\\n<h2 id=\"create-an-action\">Create an action</h2>\\n<ol>\\n<li><p>On your new home page, you can follow the navigation steps and click <strong>Learn about Watson Assistant</strong> to watch a 1-minute video. You can either continue and click <strong>Create your first action</strong>, or use the navigation panel on the left and choose the icon for <em>Actions</em>.</p>\\n<p> <img alt=\"Create your first action\" src=\"https://developer.ibm.com/developer/default/tutorials/create-your-first-assistant-powered-chatbot/images/create-first-action.png\"/></p>\\n</li>\\n<li><p>You are asked <em>\"What does your customer say to start this interaction?\"</em>. For this tutorial, enter <code style=\"font-family:monospace;font-size:1rem\">What are your store hours?</code>, and click <strong>Save</strong>.</p>\\n<p> <img alt=\"Store hours action\" src=\"https://developer.ibm.com/developer/default/tutorials/create-your-first-assistant-powered-chatbot/images/store-hours-action.png\"/></p>\\n</li>\\n<li><p>Add a resonse such as \"We are open from 8:00 AM until 9:00 PM every day.\" Because the customer doesn\\'t have to add any input, and the question has been answered, you can leave the <em>Define customer response</em> section empty, and leave the default <em>Continue to next step</em>. </p>\\n<p> <img alt=\"Assitant says store hours\" src=\"https://developer.ibm.com/developer/default/tutorials/create-your-first-assistant-powered-chatbot/images/asst-says-store-hours.png\"/></p>\\n</li>\\n<li><p>Now let\\'s test what you have so far. Click <strong>Preview</strong> in the lower-right corner. The chatbot begins with \"Welcome, how can I assist you?\" Enter the text <code style=\"font-family:monospace;font-size:1rem\">What are your store hours?</code>, and click the arrow or press <strong>Enter/Return</strong> on your keyboard.. You should get the response that you entered, \"We are open from 8:00 AM until 9:00 PM every day.\"</p>\\n<p> <img alt=\"First test for store hours\" src=\"https://developer.ibm.com/developer/default/tutorials/create-your-first-assistant-powered-chatbot/images/first-test-store-hours.png\"/></p>\\n</li>\\n<li><p>Click the counter-clockwise arrow to reset the bot. This time, enter <code style=\"font-family:monospace;font-size:1rem\">When are you open?</code> This time, the bot responds with \"I\\'m afraid I don\\'t understand. Please rephrase your question.\" You need to add some alternates to the customer question to help the bot understand. Click the <strong>Customer starts with</strong> box in the upper-left to go back to this and add some alternate ways to phrase the question.</p>\\n<p> <img alt=\"Alternate phrasing for question\" src=\"https://developer.ibm.com/developer/default/tutorials/create-your-first-assistant-powered-chatbot/images/alternate-phrases-for-hours.png\"/></p>\\n</li>\\n<li><p>Click the Save icon in the upper right because this must be saved to use the Preview widget. If you click on the gear icon for the Assistant settings, you see that auto-save is on, but it performs this save when you switch between steps. Now, test again in the preview window by typing something like <code style=\"font-family:monospace;font-size:1rem\">open</code> that is present in your list of phrases. You should get a correct response.</p>\\n</li>\\n<li><p>Notice in the preview window that the bot ends with \"There are no additional steps for this action. Add a new step or end the action.\" Let\\'s end the action. Go back to the Conversation step by clicking on it in the upper-left corner, and change the <em>And then</em> dropdown to <strong>End the action</strong>.</p>\\n<p> <img alt=\"Change to end the action\" src=\"https://developer.ibm.com/developer/default/tutorials/create-your-first-assistant-powered-chatbot/images/change-to-end-the-action.png\"/></p>\\n</li>\\n<li><p>Save your changes, and reset the Preview. Now, the action should complete after the store hours are given.</p>\\n</li>\\n</ol>\\n<h2 id=\"add-actions-with-conditions\">Add actions with conditions</h2>\\n<ol>\\n<li><p>Now, add another action to answer the question \"Where are you located?\" From the home page, click <strong>New action +</strong>.</p>\\n<p> <img alt=\"Create New action\" src=\"https://developer.ibm.com/developer/default/tutorials/create-your-first-assistant-powered-chatbot/images/new-action.png\"/></p>\\n</li>\\n<li><p>In response to \"What does your customer say to start this interaction?\", enter <code style=\"font-family:monospace;font-size:1rem\">Where are you located?</code></p>\\n</li>\\n<li>For the section <em>Assistant says</em>, enter <code style=\"font-family:monospace;font-size:1rem\">We have 2 locations, Downtown and Riverside. Which one are you closest to?</code></li>\\n<li>Under <em>Customer response</em>, choose <strong>Options</strong>, and enter <code style=\"font-family:monospace;font-size:1rem\">Downtown</code> and <code style=\"font-family:monospace;font-size:1rem\">Riverside</code>. You can leave the default for <em>Allow skipping or always ask?</em> to be <em>Skip if the customer already gave this information</em> because this information is saved from that previous step. You might choose <em>Always ask for this information, regardless of earlier messages</em> for something like a confirmation before a purchase. </li>\\n<li><p>Click <strong>Apply</strong>.</p>\\n<p> <img alt=\"Add location options\" src=\"https://developer.ibm.com/developer/default/tutorials/create-your-first-assistant-powered-chatbot/images/location-choose-options.png\"/></p>\\n</li>\\n<li><p>Add another step, and this time you use the pull-down menu to change <em>Step 2 is taken</em> to be <em>with conditions</em>. The conditions should prepopulate with <em>\"1. All of this is true: If </em>We have 2 locations...\"<em> is <code style=\"font-family:monospace;font-size:1rem\">Downtown</code>. You can click around and see that you can change the logic in various ways, from </em>is<em> to </em>is not<em> or from </em>All<em> of this is true to </em>Any<em>. You can add more conditions or an entire group of conditions. Now, you can add an answer under </em>Assistant says* such as \"Our Downtown location is at 3210 Main St., and the phone number is 303-867-5309\".</p>\\n<p> <img alt=\"Add location conditions\" src=\"https://developer.ibm.com/developer/default/tutorials/create-your-first-assistant-powered-chatbot/images/location-conditions.png\"/></p>\\n</li>\\n<li><p>You can add a similar condition for If <code style=\"font-family:monospace;font-size:1rem\">We have 2 locations</code> is <code style=\"font-family:monospace;font-size:1rem\">Riverside</code>. After saving, test the bot in the Preview widget to make sure that it works as expected. In the following example, I ask \"what are your locations?\", but I haven\\'t added any alternatives. The Watson Assistant disambiguation feature displays a question for clarification, \"Did you mean?\" along with options for my actions. I can click \"Where are you located?\" or enter the text and get the wanted response. I also learned that I need to add some alternate phrases to my original question.</p>\\n<p> <img alt=\"Clarifications for ambiguous question\" src=\"https://developer.ibm.com/developer/default/tutorials/create-your-first-assistant-powered-chatbot/images/location-did-you-mean.png\"/></p>\\n</li>\\n</ol>\\n<h2 id=\"add-actions-with-variables\">Add actions with variables</h2>\\n<p>Now, take a look at how variables work. There are variables set by Watson Assistant such as <em>Now</em> (the current time and date), <em>Current time</em>, and Current date*.</p>\\n<p><img alt=\"Variables set by Assistant\" src=\"https://developer.ibm.com/developer/default/tutorials/create-your-first-assistant-powered-chatbot/images/variables-set-by-assistant.png\"/></p>\\n<ol>\\n<li><p>Click <strong>Variables created by you</strong> and <strong>New variable +</strong>. Enter the name <code style=\"font-family:monospace;font-size:1rem\">username</code>, and notice that the Variable ID is prepopulated with the same name. This is the name that can be read or set by the API and could be different if you want. Give an optional description, and click <strong>Save</strong>.</p>\\n<p> <img alt=\"Create session variable\" src=\"https://developer.ibm.com/developer/default/tutorials/create-your-first-assistant-powered-chatbot/images/create-session-variable.png\"/></p>\\n</li>\\n<li><p>Click <strong>Actions -&gt; Created by you</strong> and <strong>New action +</strong>, and enter <code style=\"font-family:monospace;font-size:1rem\">I have a question about my account</code> for the start of the interaction. For <em>Assistant says</em>, ask \"What is your username?\", and for customer response, choose <em>Free text</em>.</p>\\n</li>\\n<li><p>Click <strong>New step +</strong>, and click the <strong>Fx</strong> icon to add a variable. Click <strong>Set new value +</strong>, choose <strong>Session variables -&gt; username</strong>, and for \"to\" choose <strong>1. What is your username?</strong>. </p>\\n<p> <img alt=\"Set variable for username\" src=\"https://developer.ibm.com/developer/default/tutorials/create-your-first-assistant-powered-chatbot/images/set-username-variable.png\"/></p>\\n</li>\\n<li><p>Edit the <em>Assistant says</em> response by entering <code style=\"font-family:monospace;font-size:1rem\">Hello. Welcome back</code>, then click the <strong>01-0</strong> icon to insert a variable. From the drop-down menu, choose <strong>Session variables -&gt; username</strong>.</p>\\n<p> <img alt=\"Insert a variable\" src=\"https://developer.ibm.com/developer/default/tutorials/create-your-first-assistant-powered-chatbot/images/insert-a-variable.png\"/></p>\\n</li>\\n<li><p>When you test this, you see that the username is inserted into the Assistant response.</p>\\n<p> <img alt=\"Using variable in response\" src=\"https://developer.ibm.com/developer/default/tutorials/create-your-first-assistant-powered-chatbot/images/test-username-variable.png\"/></p>\\n</li>\\n<li><p>If you navigate back to the Action for \"Where are you located?\" and click the last (third) step, then <strong>New step +</strong>, you can use an Action variable. Inside the <em>Assistant says</em> box, add the test <code style=\"font-family:monospace;font-size:1rem\">We hope to see you soon at our</code>, and then click the <strong>01-0</strong> icon to insert a variable. Under <em>Action variables</em>, choose <strong>\"1. We have 2 locations...\"</strong>, and it is inserted.</p>\\n<p> <img alt=\"Insert action variable for location\" src=\"https://developer.ibm.com/developer/default/tutorials/create-your-first-assistant-powered-chatbot/images/insert-action-variable-for-location.png\"/></p>\\n</li>\\n<li><p>Add the text <code style=\"font-family:monospace;font-size:1rem\">location.</code> to make the response smooth, and change the <em>And then</em> section to <strong>End the action</strong>. Now, when you test you see the location used in the dialog.</p>\\n<p> <img alt=\"Test action variable for location\" src=\"https://developer.ibm.com/developer/default/tutorials/create-your-first-assistant-powered-chatbot/images/test-action-variable-for-location.png\"/></p>\\n</li>\\n</ol>\\n<h2 id=\"publish-the-changes\">Publish the changes</h2>\\n<p>You can finalize the changes by publishing them. </p>\\n<ol>\\n<li><p>Click the rocket\"icon on the navigation on the left to see your updates. Then, click <strong>Publish</strong>.</p>\\n<p> <img alt=\"Publish changes\" src=\"https://developer.ibm.com/developer/default/tutorials/create-your-first-assistant-powered-chatbot/images/publish-changes.png\"/></p>\\n</li>\\n<li><p>Add a description, and click <strong>Publish</strong>. The changes are live as a new version and show up in any deployed chatbot.</p>\\n</li>\\n</ol>\\n<h2 id=\"conclusion\">Conclusion</h2>\\n<p>This tutorial walked you through the process of creating your first Assistant-powered chatbot. It covered creating the Assistant service and adding several Actions, using conditional logic and variables in the response. There are other Watson Assistant features to explore and use and these are covered in other tutorials on the Watson Assistant learning path.</p> \\n  ']\n",
      "Name = [' \\n \\n stream_size 23139  \\n X-Parsed-By org.apache.tika.parser.DefaultParser  \\n X-Parsed-By org.apache.tika.parser.html.HtmlParser  \\n stream_content_type application/html  \\n Content-Encoding ISO-8859-1  \\n resourceName /root/ibm_cloud_docs_process2/watson-assistant/respond.html  \\n Content-Type text/html; charset=ISO-8859-1  \\n  \\n \\n   \\n\\n copyright:\\n  years: 2021, 2023\\nlastupdated: \"2023-02-13\" \\n\\n subcollection: watson-assistant \\n\\n  \\n\\n {:shortdesc: .shortdesc}\\n{:new_window: target=\"_blank\"}\\n{:external: target=\"_blank\" .external}\\n{:deprecated: .deprecated}\\n{:important: .important}\\n{:note: .note}\\n{:tip: .tip}\\n{:pre: .pre}\\n{:codeblock: .codeblock}\\n{:screen: .screen}\\n{:javascript: .ph data-hd-programlang=\\'javascript\\'}\\n{:java: .ph data-hd-programlang=\\'java\\'}\\n{:python: .ph data-hd-programlang=\\'python\\'}\\n{:swift: .ph data-hd-programlang=\\'swift\\'} \\n\\n {{site.data.content.classiclink}} \\n\\n Adding assistant responses \\n\\n {: #respond} \\n\\n Once an action is activated, the body of the action is composed of multiple  steps  that make up the conversation between your assistant and your users. One part of each step is what the assistant says to the customer when the step is processed. \\n\\n To create your assistant\\'s response in a step, you use the  Assistant says  section. This represents the text or speech the assistant delivers to a user at a particular step. Depending on the step, you can add a complete answer to a user\\'s question or ask a follow-up question. \\n\\n You can enter a simple text response just by entering the text that you want your assistant to display to the user. You can also add formatting and web content, and you can reference user information using  variables . \\n\\n Formatting responses \\n\\n {: #respond-formatting} \\n\\n Use the text editor tools to apply font styling, such as bold or italic, to the text or to add links. \\n\\n Behind the scenes, font styling and link syntax are stored in Markdown format. If you are using the web chat integration, HTML and Markdown tagging are supported (for more information, see  rect /docs/watson-assistant?topic=watson-assistant-web-chat-architecture#web-chat-architecture-markdown Markdown formatting ). \\n\\n HTML tags (except for links) are automatically removed from text responses that are sent to the Facebook, WhatsApp, and Slack integrations, because those channels do not support HTML formatting. HTML tags are still handled appropriately in channels that support them (such as the web chat) and stored in the session history. \\n\\n\\n\\n If you\\'re using a custom client application that does not support Markdown, don\\'t apply text styling to your text responses.\\n{: note} \\n\\n Adding and referencing variables \\n\\n {: #respond-variables} \\n\\n During the conversation, your assistant stores information as  variables . Variables are containers for data values that become available at run time; the value of a variable can change over time. Variables include  action variables , which persist only during a particular action, and  session variables , which are available to any action. For more information about variables, see  rect /docs/watson-assistant?topic=watson-assistant-manage-info Managing information during the conversation . \\n\\n In your assistant\\'s output, you can reference variables in order to personalize the conversation or include information that is available at run time. For more information about referencing variables in what your assistant says, see  rect /docs/watson-assistant?topic=watson-assistant-manage-info#reference-variables Using variables to customize the conversation . \\n\\n Testing responses \\n\\n {: #respond-testing} \\n\\n To see if the assistant responses are formatted correctly, you can use  Preview . \\n\\n \\t Click the  Preview  button. \\n\\t To start the action, enter your first phrase, for example:  What are your store hours? . \\n\\t When the assistant responds, check that the message displays as you intended with formatting, use of variables, and so on. \\n \\n\\n Tips for adding responses \\n\\n {: #respond-tips-responses} \\n\\n \\t Keep answers short and useful. \\n\\t Reflect the user\\'s intent in the response. Doing so assures users that the bot is understanding them, or if it is not, gives users a chance to correct a misunderstanding immediately. \\n\\t Only include links to external sites in responses if the answer depends on data that changes frequently. \\n\\t Word your responses carefully. You can change how someone reacts to your system based simply on how you phrase a response. Changing one line of text can prevent you from having to write multiple lines of code to implement a complex programmatic solution. \\n \\n\\n Adding variations \\n\\n {: #respond-variations} \\n\\n If your users return to your assistant frequently, they might be bored to see the same greetings and responses every time. You can add  response variations  so that your assistant can respond to the same request in different ways. \\n\\n You can choose to rotate through the response variations sequentially or in random order. By default, responses are rotated sequentially, as if they were chosen from an ordered list. \\n\\n To add response variations: \\n\\n \\t \\n In  Assistant says , click the  Add response variations  icon  Add response variations images/response-variations-icon.png  . \\n\\n \\n\\t \\n For  Response variation type , choose whether to rotate through the response variations sequentially or in random order. For more information see  rect #respond-variations-sequential-random Sequential or random . \\n\\n \\n \\n\\n  Response variations images/response-variations-modal.png  {: caption=\"Response variations\" caption-side=\"bottom\"} \\n\\n \\t Add each variation into its own field. For example: \\n \\n\\n | Response number | Variation |\\n   | -- | -- |\\n   | Response 1 | How can I help you? |\\n   | Response 2 | What can I do for you today? |\\n   | Response 3 | Tell me what I can help with. |\\n   | Response 4 | Can I help you? |\\n{: caption=\"Response variation examples\" caption-side=\"bottom\"} \\n\\n \\t When you\\'re finished, click  Apply . The variations appear as a block inside  Assistant says . You can click the  Edit  icon to update the variations, or click the  Delete  icon to remove all the variations. Also, you can add multiple sets of response variations to a step. \\n \\n\\n  Response variations in Assistant says images/response-variations-assistant-says.png  {: caption=\"Response variations in Assistant says\" caption-side=\"bottom\"} \\n\\n Sequential or random \\n\\n {: #respond-variations-sequential-random} \\n\\n For  Response variation type , you can choose  Sequential  or  Random . \\n\\n  Sequential  returns the first response variation the first time the action is triggered, the second response variation the second time the action is triggered, and so on, in the same order as you entered the variations. This results in responses being returned in the following order when the node is processed: \\n\\n \\t First time: \\n \\n\\n  screen\\n   How can I help you?  \\n\\n \\t Second time: \\n \\n\\n  screen\\n   What can I do for you today?  \\n\\n \\t Third time: \\n \\n\\n  screen\\n   Tell me what I can help with.  \\n\\n \\t Fourth time: \\n \\n\\n  screen\\n   Can I help you?  \\n\\n  Random  selects variation the first time the action is triggered, and randomly selects another variation the next time, but without repeating the same variation consecutively. Here\\'s an example of the order that responses might appear: \\n\\n \\t First time: \\n \\n\\n  screen\\n   Tell me what I can help with.  \\n\\n \\t Second time: \\n \\n\\n  screen\\n   Can I help you?  \\n\\n \\t Third time: \\n \\n\\n  screen\\n   How can I help you?  \\n\\n \\t Fourth time: \\n \\n\\n  screen\\n   What can I do for you today?  \\n\\n Media responses \\n\\n {: #respond-response-types} \\n\\n In addition to text responses, you can use other  response types  to send responses that include multimedia or interactive elements.  \\n\\n The action editor supports the following media response types: \\n\\n \\t  Image : Embeds an image into the response. The source image file must be hosted somewhere and have a URL that you can use to reference it. It cannot be a file that is stored in a directory that is not publicly accessible. \\n\\t  Video : Embeds a video player into the response. The source video must be hosted somewhere, either as a playable video on a supported video streaming service or as a video file with a URL that you can use to reference it. It cannot be a file that is stored in a directory that is not publicly accessible. \\n\\t  Audio : Embeds an audio clip into the response. The source audio file must be hosted somewhere and have a URL that you can use to reference it. It cannot be a file that is stored in a directory that is not publicly accessible. \\n\\t  iframe : Embeds content from an external website, such as a form or other interactive component, directly within the chat. The source content must be publicly accessible using HTTP, and must be embeddable as an HTML  iframe  element. \\n \\n\\n Different channel integrations have different capabilities for displaying media responses. To see which channel integrations support which response types, see  rect /docs/watson-assistant?topic=watson-assistant-assistant-responses-json-integration-support Channel integration support for response types . \\n\\n If you want to define different responses that are customized for different channels, you can do so by editing the response using the JSON editor. For more information, see  rect /docs/watson-assistant?topic=watson-assistant-assistant-responses-json#assistant-responses-json-target-integrations Targeting specific integrations . \\n\\n By editing your responses using the JSON editor, you can also access additional response types for handling channel-specific interactions. \\n\\n For more information about how to edit responses using the JSON editor, see  rect /docs/watson-assistant?topic=watson-assistant-assistant-responses-json Defining responses using the JSON editor .\\n{: note} \\n\\n Adding an  Image  response \\n\\n {: #respond-add-image} \\n\\n Add an  Image  response to display an image to the customer. \\n\\n The  Image  response type is supported by the following channel integrations:\\n- Web chat\\n- SMS\\n- Slack\\n- Microsoft Teams\\n- Facebook\\n- WhatsApp \\n\\n To add an  Image  response, complete the following steps: \\n\\n \\t \\n In the  Assistant says  field, click the  Image images/image-response.png    Image  icon. \\n\\n \\n\\t \\n In the  Source URL  field, type the full URL to the hosted image. \\n\\n The image must be in  .jpg ,  .gif , or  .png  format. The image file must be stored in a location that is publicly addressable by an  https:  URL (such as  https://www.example.com/assets/common/logo.png ). \\n\\n To access an image that is stored in {{site.data.keyword.cloud}} {{site.data.keyword.cos_short}}, enable public access to the individual image storage object, and then reference it by specifying the image source with syntax like this:  https://s3.eu.cloud-object-storage.appdomain.cloud/your-bucket-name/image-name.png . \\n\\n \\n\\t \\n Optionally specify an image title, description, and alt text in the fields provided. In the web chat integration, the title and description are displayed along with the image. \\n\\n References to variables are not supported. Some integration channels ignore titles or descriptions.\\n{: note} \\n\\n \\n\\t \\n Click  Apply . \\n\\n \\n \\n\\n Adding an  Audio  response \\n\\n {: #respond-add-audio} \\n\\n Add an  Audio  response to include spoken-word or other audible content. In the web chat, an audio response renders as an embedded audio player. In the phone integration, an audio response plays over the phone. \\n\\n The  Audio  response type is supported by the following channel integrations:\\n- Web chat\\n- Phone\\n- SMS\\n- Slack\\n- Facebook\\n- WhatsApp \\n\\n To add an  Audio  response, complete the following steps: \\n\\n \\t \\n In the  Assistant says  field, click the  Audio images/audio-response.png    Audio  icon. \\n\\n \\n\\t \\n In the  Source URL  field, type the full URL to the hosted audio clip: \\n\\n \\t \\n To link directly to an audio file, specify the URL to a file in any standard format such as MP3 or WAV. In the web chat, the linked audio clip will render as an embedded audio player. \\n\\n \\n\\t \\n To link to an audio clip on a supported audio hosting service, specify the URL to the audio clip. In the web chat, the linked audio clip will render using the embeddable player for the hosting service. \\n\\n \\n \\n\\n Specify the URL you would use to access the audio file in your browser (for example,  https://soundcloud.com/ibmresearch/fallen-star-amped ). You do not need to convert the URL to an embeddable form; the web chat will do this automatically.\\n  {: note} \\n\\n You can embed audio hosted on the following services:\\n  -  rect https://soundcloud.com SoundCloud {: external}\\n  -  rect https://mixcloud.com Mixcloud {: external} \\n\\n \\n\\t \\n Optionally specify a title, description, and alt text in the fields provided. In the web chat integration, the title and description are displayed along with the audio player. \\n\\n References to variables are not supported. Some integration channels ignore titles or descriptions.\\n{: note} \\n\\n \\n \\n\\n Adding a  Video  response \\n\\n {: #respond-add-video} \\n\\n Add a  Video  response to display a how-to demonstration, promotional clip, or other video content. In the web chat, a video response renders as an embedded video player. \\n\\n The  Video  response type is supported by the following channel integrations:\\n- Web chat\\n- SMS\\n- Slack\\n- Facebook\\n- WhatsApp \\n\\n To add a  Video  response, complete the following steps: \\n\\n \\t \\n In the  Assistant says  field, click the  Video images/video-response.png    Video  icon. \\n\\n \\n\\t \\n In the  Source URL  field, type the full URL to the hosted video: \\n\\n \\t To link directly to a video file, specify the URL to a file in any standard format such as MPEG or AVI. In the web chat, the linked video will render as an embedded video player. \\n \\n\\n HLS ( .m3u8 ) and DASH (MPD) streaming videos are not supported.\\n  {: note} \\n\\n \\t To link to a video hosted on a supported video hosting service, specify the URL to the video. In the web chat, the linked video will render using the embeddable player for the hosting service. \\n \\n\\n Specify the URL you would use to view the video in your browser (for example,  https://www.youtube.com/watch?v=52bpMKVigGU ). You do not need to convert the URL to an embeddable form; the web chat will do this automatically.\\n  {: note} \\n\\n You can embed videos hosted on the following services:\\n  -  rect https://youtube.com YouTube {: external}\\n  -  rect https://facebook.com Facebook {: external}\\n  -  rect https://vimeo.com Vimeo {: external}\\n  -  rect https://twitch.tv Twitch {: external}\\n  -  rect https://streamable.com Streamable {: external}\\n  -  rect https://wistia.com Wistia {: external}\\n  -  rect https://vidyard.com Vidyard {: external} \\n\\n \\n\\t \\n Optionally specify a video title, description, and alt text in the fields provided. In the web chat integration, the title and description are displayed along with the video player. \\n\\n References to variables are not supported. Some integration channels ignore titles or descriptions.\\n{: note} \\n\\n \\n\\t \\n If you want to scale the video to a specific display size, specify a number in the  Base height  field. \\n\\n \\n \\n\\n Adding an  iframe  response \\n\\n {: #respond-add-iframe} \\n\\n Add an  iframe  response to embed content from another website directly inside the chat window as an HTML  iframe  element. An iframe response is useful if you want to enable customers to perform some interaction with an external service without leaving the chat. For example, you might use an  iframe  response to display the following within the web chat: \\n\\n \\t An interactive map on  rect https://www.google.com/maps Google Maps {: external} \\n\\t A survey using  rect https://www.surveymonkey.com/ SurveyMonkey {: external} \\n\\t A form for making reservations through  rect https://www.opentable.com/ OpenTable {: external} \\n\\t A scheduling form using  rect https://calendly.com/ Calendly {: external} \\n \\n\\n In the web chat, an iframe response renders as a preview card that describes the embedded content. Customers can click this card to display the frame and interact with the content. \\n\\n The  iframe  response type is supported by the following channel integrations:\\n- Web chat\\n- Facebook \\n\\n To add an  iframe  response type, complete the following steps: \\n\\n \\t \\n In the  Assistant says  field, click the  iframe images/iframe-response.png    iframe  icon. \\n\\n \\n\\t \\n Add the full URL to the external content in the  iframe source  field. \\n\\n The URL must specify content that is embeddable in an HTML  iframe  element. Different sites have different restrictions for embedding content, and different processes for generating embeddable URLs. An embeddable URL is one that can be specified as the value of the  src  attribute of the  iframe  element. \\n\\n For example, to embed an interactive map using Google Maps, you can use the Google Maps Embed API. (For more information, see  rect https://developers.google.com/maps/documentation/embed/get-started The Maps Embed API overview {: external}.) Other sites have different processes for creating embeddable content. \\n\\n For technical details about using  Content-Security-Policy: frame-src  to allow embedding of your website content, see  rect https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/frame-src CSP: frame-src {: external}. \\n\\n \\n\\t \\n Optionally add a descriptive title in the  Title  field. \\n\\n In the web chat, this title will be displayed in the preview card that the customer clicks to render the external content. (If you do not specify a title, the web chat will attempt to retrieve metadata from the specified URL and display the title of the content as specified at the source.) \\n\\n References to variables are not supported.\\n{: note} \\n\\n \\n \\n\\n Technical details: <iframe> sandboxing \\n\\n Content loaded in an iframe by the web chat is  sandboxed , meaning that it has restricted permissions that reduce security vulnerabilities. The web chat uses the  sandbox  attribute of the  iframe  element to grant only the following permissions: \\n\\n | Permission          | Description |\\n|---------------------|-------------|\\n|  allow-downloads    | Allows downloading files from the network, if the download is initiated by the user. |\\n|  allow-forms        | Allows submitting forms. |\\n|  allow-scripts      | Allows running scripts, but  not  opening pop-up windows. |\\n|  allow-same-origin  | Allows the content to access its own data storage (such as cookies), and allows only very limited access to JavaScript APIs. | \\n\\n A script running inside a sandboxed iframe cannot make changes to any content content outside the iframe,  if  the outer page and the iframe have different origins. Be careful if you use an  iframe  response to embed content that has the same origin as the the page where your web chat widget is hosted; in this situation the embedded content can defeat the sandboxing and gain access to content outside the frame. For more information about this potential vulnerability, see the  sandbox  attribute  rect https://developer.mozilla.org/en-US/docs/Web/HTML/Element/iframe#attr-sandbox documentation {: external}.\\n{: note} \\n\\n Pause response \\n\\n {: #respond-pause-response} \\n\\n Use a  Pause  response to have your assistant wait for a specified interval before displaying the next response. This pause might be to allow time for a request to complete, or simply to mimic the appearance of a live agent who might pause between responses. The pause can be of any duration from 1 to 10 seconds. \\n\\n A  Pause  response is typically used in combination with other responses. By default, a typing indicator animation appears during the pause in order to simulate a live agent. \\n\\n The  Pause  response type is supported by the following channel integrations:\\n- Web chat\\n- Facebook\\n- WhatsApp \\n\\n With the phone channel, you can add a pause by including the SSML  break  element in the assistant output. For more information, see the  rect /docs/text-to-speech?topic=text-to-speech-elements#break_element {{site.data.keyword.texttospeechshort}} documentation {: external}.\\n{: note} \\n\\n To add a  Pause  response: \\n\\n \\t \\n In the  Assistant says  field, click the  Pause images/pause.png    Pause  icon.  \\n\\n \\n\\t \\n In the   Duration  field, enter the length of time for the pause to last as a number of seconds. \\n\\n The duration can\\'t exceed 10 seconds. Customers are typically willing to wait about 8 seconds for someone to enter a response.  \\n\\n \\n\\t \\n The  Typing indicator  is set to  On  by default. You can set this to  Off  if you want. \\n\\n Add another response type, such as a text response type, after the pause to clearly denote that the pause is over.\\n{: tip} \\n\\n \\n \\n  ']\n"
     ]
    }
   ],
   "source": [
    "get_answer_solr(query)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7a8a47b7",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.13"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
