---

copyright:

lastupdated: "2024-06-07"

keywords: building a dialog, condition, response, options, jump, jump-to, multiline, response variations

subcollection: watson-assistant

---

{{site.data.keyword.attribute-definition-list}}
# Dialog overview
{: #dialog-overview}
The dialog defines what your assistant says in response to customers.
{: shortdesc}
The following nodes are created for you automatically:
- Welcome: The first node. It contains a greeting that is displayed to your users when they first engage with your assistant. You can edit the greeting.

  This node is not triggered in dialog flows that are initiated by users. For example, dialogs used in integrations with channels such as Facebook or Slack skip nodes with the `welcome` special condition.
  {: note}

- Anything else: The final node. It contains phrases that are used to reply to users when their input is not recognized. You can replace the responses that are provided or add more responses with a similar meaning to add variety to the conversation. You can also choose whether you want your assistant to return each response that is defined in turn or return them in random order.
For more information about these built-in nodes, see Starting and ending the dialog.
To build out the dialog, complete the following steps:

1.  To add more nodes to the dialog tree, click Add node.

    Your new node is added after the Welcome node and before the Anything else node.
2.  Add a name to the node.

    Use a short, customer-friendly description of what the node does as its name. For example, `Open an account`, `Get policy information`, or `Get a weather forecast`. The name can be up to 512 characters in length.

    This node name is shown to customers or service desk personnel to express the purpose of this branch of the dialog, so take some time to add a name that is concise and descriptive.
    {: tip}
3.  In the If assistant recognizes field, enter a condition that, when met, triggers your assistant to process the node.

    To start off, you typically want to add an intent as the condition. For example, if you add `#open_account` here, it means that you want the response that you specify in this node to be returned to the user if the user input indicates that the user wants to open an account.

    As you begin to define a condition, a box is displayed that shows you your options. You can enter one of the following characters, and then pick a value from the list of options that is displayed.

    | Character | Lists defined values for these artifact types |
    |---|---|
    | `#` | Intents |
    | `@` | Entities |
    | `@{entity-name}:` | `{entity-name}` values |
    | `$` | Context variables that you defined or referenced elsewhere in the dialog |
    {: caption="Condition builder syntax" caption-side="bottom"}

    You can create a new intent, entity, entity value, or context variable by defining a new condition that uses it. If you create an artifact this way, be sure to go back and complete any other steps that are necessary for the artifact to be created completely, such as defining sample utterances for an intent.

    To define a node that triggers based on more than one condition, enter one condition, and then click the plus sign (+) icon next to it. If you want to apply an `OR` operator to the multiple conditions instead of `AND`, click the `and` that is displayed between the fields to change the operator type. AND operations are executed before OR operations, but you can change the order by using parentheses. For example: `$isMember:true AND ($memberlevel:silver OR $memberlevel:gold)`.

    The condition that you define must be less than 2,048 characters in length.

    For more information about how to test for values in conditions, see Conditions.
4.  Optional: If you want to collect multiple pieces of information from the user in this node, click Customize and enable Slots. See Gathering information with slots for more details.
5.  Enter a response.

    - Add the text or multimedia elements that you want your assistant to display to the user as a response.
    - If you want to define different responses based on certain conditions, click Customize and enable Multiple responses.
    - For information about conditional responses, rich responses, or how to add variety to responses, see Responses.
6.  Specify what to do after the current node is processed. You can choose from the following options:

    - Wait for user input: Your assistant pauses until new input is provided by the user.
    - Skip user input: Your assistant jumps directly to the first child node. This option is available only if the current node has at least one child node.
    - Jump to: Your assistant continues the dialog by processing the node that you specify. You can choose whether your assistant evaluates the target node's condition or skips directly to the target node's response. See Configuring the Jump to action for more details.
7.  Optional: If you want this node to be considered when users are shown a set of node choices at run time and asked to pick the one that best matches their goal, add a short description of the user goal that this node handles to the external node name field. For example, `Open an account`.

    [Plus]{: tag-green} The external node name field is displayed only to users of paid plans. See Disambiguation for more details.
8.  To add more nodes, select a node in the tree, and then click the More icon.

    - To create a peer node that is checked next if the condition for the existing node is not met, select Add node below.
    - To create a peer node that is checked before the condition for the existing node is checked, select Add node above.
    - To create a child node of the selected node, select Add child node. A child node is processed after its parent node.
    - To copy the current node, select Duplicate.

    For more information about the order in which dialog nodes are processed, see Dialog overview.
9.  Test the dialog as you build it.

    See Testing your dialog for more information.
## Conditions
{: #dialog-overview-conditions}
A node condition determines whether that node is used in the conversation. Response conditions determine which response to return to a user.
For tips on performing more advanced tasks in conditions, see Condition usage tips.
### Condition artifacts
{: #dialog-overview-condition-artifacts}
You can use one or more of the following artifacts in any combination to define a condition:
- Context variable: The node is used if the context variable expression that you specify is true. Use the syntax `$variable_name:value` or `$variable_name == 'value'`.

  For node conditions, this artifact type is typically used with an `AND` or `OR` operator and another condition value. That's because something in the user input must trigger the node; matching the context variable value alone is not enough to trigger it. If the user input object sets the context variable value somehow, for example, then the node is triggered.

  Do not define a node condition based on the value of a context variable in the same dialog node in which you set the context variable value.
  {: tip}

  For response conditions, this artifact type can be used alone. You can change the response based on a specific context variable value. For example, `$city:Boston` checks whether the `$city` context variable contains the value `Boston`. If so, the response is returned.

  For more information about context variables, see Context variables.
- Entity: The node is used when any value or synonym for the entity is recognized in the user input. Use the syntax `@entity_name`. For example, `@city` checks whether any of the city names that are defined for the `@city` entity were detected in the user input. If so, the node or response is processed.

  Consider creating a peer node to handle the case where none of the entity's values or synonyms are recognized.
  {: tip}

  For more information about entities, see Defining entities.
- Entity value: The node is used if the entity value is detected in the user input. Use the syntax `@entity_name:value` and specify a defined value for the entity, not a synonym. For example, `@city:Boston` checks whether the specific city name `Boston` was detected in the user input.

  If a peer node checks for the presence of the entity without specifying a particular value for it, be sure to position this node (which checks for a particular entity value) before the peer node that checks only for the presence of the entity. Otherwise, this node is never evaluated.
  {: tip}

  If the entity is a pattern entity with capture groups, you can check for a certain group value match. For example, you can use the syntax `@us_phone.groups[1] == '617'`.

  See Storing and recognizing pattern entity groups in input for more information.
- Intent: The simplest condition is a single intent. The node is used if, after your assistant's natural language processing evaluates the user's input, it determines that the purpose of the user's input maps to the pre-defined intent. Use the syntax `#intent_name`. For example, `#weather` checks if the user input is asking for a weather forecast. If so, the node with the `#weather` intent condition is processed.

  For more information about intents, see Defining intents.
- Special condition: Conditions that are provided with the product that you can use to perform common dialog functions. See the Special conditions table in the next section for details.
### Special conditions
{: #dialog-overview-special-conditions}
| Condition syntax | Description |
|---|---|
| `anything_else` | You can use this condition at the end of a dialog, to be processed when the user input does not match any other dialog nodes. The Anything else node is triggered by this condition. If you add search to your assistant, a root node with this condition can be configured to trigger a search. |
| `conversation_start` | Like `welcome`, this condition is evaluated as true during the first dialog turn. Unlike `welcome`, it is true whether or not the initial request from the application contains user input. |
| `false` | This condition is always evaluated to false. You might use this at the start of a branch that is under development, to prevent it from being used, or as the condition for a node that provides a common function and is used only as the target of a Jump to action. |
| `irrelevant` | This condition evaluates to true if the user's input is determined to be irrelevant by the {{site.data.keyword.conversationshort}} service. |
| `true` | This condition is always evaluated to true. You can use it at the end of a list of nodes or responses to catch any responses that did not match any of the previous conditions. |
| `welcome` | This condition is evaluated as true during the first dialog turn (when the conversation starts), only if the initial request from the application does not contain any user input. It is evaluated as false in all subsequent dialog turns. The Welcome node is triggered by this condition. Typically, a node with this condition is used to greet the user, for example, to display a message such as `Welcome to our Pizza ordering app.` This node is never processed during interactions that occur through channels such as Facebook or Slack. |
{: caption="Special conditions" caption-side="top"}
### Condition syntax
{: #dialog-overview-condition-syntax}
Use one of these syntax options to create valid expressions in conditions:

- Shorthand notations to refer to intents, entities, and context variables. See Accessing and evaluating objects.

- Spring Expression Language (SpEL), an expression language that supports querying and manipulating an object graph at run time. See Spring Expression Language (SpEL){: external} for more information.

You can use regular expressions to check for values to condition against. To find a matching string, for example, you can use the `String.find` method. See Methods for more details.
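For example, a node condition like the following (a minimal sketch; the five-digit pattern is only an illustration) evaluates to true whenever the raw user input contains a five-digit number, such as a US ZIP code:

```
input.text.find('[0-9]{5}')
```
{: codeblock}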
## Responses
{: #dialog-overview-responses}
The dialog response defines how to reply to the user.
You can reply in the following ways:
### Simple text response
{: #dialog-overview-simple-text}
If you want to provide a text response, simply enter the text that you want your assistant to display to the user.
{caption="Simple response" caption-side="bottom"}
To include a context variable value in the response, use the syntax `$variable_name` to specify it. See Context variables for more information. For example, if you know that the `$user` context variable is set to the current user's name before a node is processed, then you can refer to it in the text response of the node like this:

```
Hello $user
```
{: screen}

If the current user's name is `Norman`, then the response that is displayed to Norman is `Hello Norman`.
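If you edit the node in the JSON editor instead, the same response is expressed as a `text` response type in the node's `output.generic` array. The following snippet is a minimal sketch of that structure:

```json
{
  "output": {
    "generic": [
      {
        "response_type": "text",
        "values": [
          {
            "text": "Hello $user"
          }
        ]
      }
    ]
  }
}
```
{: codeblock}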
If you include one of these special characters in a text response, escape it by adding a backslash (`\`) in front of it. If you are using the JSON editor, you need to use two backslashes (`\\`) to escape it. Escaping the character prevents your assistant from misinterpreting it as being one of the following artifact types:
| Special character | Artifact | Example |
|---|---|---|
| `$` | Context variable | The transaction fee is \$2. |
| `@` | Entity | Send us your feedback at feedback\@example.com. |
| `#` | Intent | We are the \#1 seller of lobster rolls in Maine. |
{: caption="Special characters to escape in responses" caption-side="top"}
The built-in integrations support the following Markdown syntax elements:
| Format | Syntax | Example |
|---|---|---|
| Italics | `We're talking about *practice*.` | We're talking about *practice*. |
| Bold | `There's **no** crying in baseball.` | There's **no** crying in baseball. |
| Hypertext link | `Contact us at [ibm.com](https://www.ibm.com).` | Contact us at [ibm.com](https://www.ibm.com). |
{: caption="Supported Markdown syntax" caption-side="top"}
If you specify a phone number in a text response without coding a link, it is converted to a telephone link only in a web chat integration that is accessed from a mobile device.
The "Try it out" pane does not support Markdown syntax currently. For testing purposes, you can use the assistant Preview to see how the Markdown syntax is rendered.
The "Try it out" pane, assistant Preview, and web chat integration support HTML syntax. The Slack and Facebook integrations do not.
### Response variations
{: #dialog-overview-variety}
#### Multiline responses
{: #dialog-overview-multiline}
If you want a single text response to include multiple lines separated by carriage returns, then follow these steps:
1.  Add each line that you want to show to the user as a separate sentence into its own response variation field. For example:

    | Response variations |
    |---|
    | Hi. |
    | How are you today? |
    {: caption="Multiple line response" caption-side="bottom"}

2.  For the response variation setting, choose multiline.
When the response is shown to the user, both response variations are displayed, one on each line, like this:

```
Hi.
How are you today?
```
{: screen}
#### Adding variety to responses
{: #dialog-overview-add-variety}
If your users return to your conversation service frequently, they might get bored hearing the same greetings and responses every time. You can add variations to your responses so that your conversation can respond to the same condition in different ways.
In this example, the answer that your assistant provides in response to questions about store locations differs from one interaction to the next:
{caption="Response variations" caption-side="bottom"}
You can choose to rotate through the response variations sequentially or in random order. By default, responses are rotated sequentially, as if they were chosen from an ordered list.
To change the sequence in which individual text responses are returned, complete the following steps:
1.  Add each variation of the response into its own response variation field. For example:

    | Response variations |
    |---|
    | Hello. |
    | Hi. |
    | Howdy! |
    {: caption="Varying responses" caption-side="bottom"}

2.  For the response variation setting, choose one of the following settings:
    - sequential: The system returns the first response variation the first time the dialog node is triggered, the second response variation the second time the node is triggered, and so on, in the same order as you define the variations in the node.

      Responses are returned in the following order when the node is processed:

      - First time: `Hello.`
      - Second time: `Hi.`
      - Third time: `Howdy!`

    - random: The system randomly selects a text string from the variations list the first time the dialog node is triggered, and randomly selects another variation the next time, but without repeating the same text string consecutively.

      Responses might be returned in the following order when the node is processed:

      - First time: `Howdy!`
      - Second time: `Hi.`
      - Third time: `Hello.`
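In the JSON editor, the response variation setting corresponds to the `selection_policy` property of a `text` response type. The following sketch shows the greeting variations set to rotate in random order; the other accepted values are `sequential` and `multiline`:

```json
{
  "output": {
    "generic": [
      {
        "response_type": "text",
        "values": [
          { "text": "Hello." },
          { "text": "Hi." },
          { "text": "Howdy!" }
        ],
        "selection_policy": "random"
      }
    ]
  }
}
```
{: codeblock}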
### Rich responses
{: #dialog-overview-multimedia}
You can return responses with multimedia or interactive elements such as images or clickable buttons to simplify the interaction model of your application and enhance the user experience.
In addition to the default response type of Text, for which you specify the text to return to the user as a response, the following response types are supported:
- Connect to human agent: The dialog calls a service that you designate, typically a service that manages human agent support ticket queues, to transfer the conversation to a person. You can optionally include a message that summarizes the user's issue to be provided to the human agent.
- Channel transfer: The dialog requests that the conversation be transferred to a different channel (for example, from the Slack integration to the web chat integration).
- Image: Embeds an image into the response. The source image file must be hosted somewhere and have a URL that you can use to reference it. It cannot be a file that is stored in a directory that is not publicly accessible.
- Video: Embeds a video player into the response. The source video must be hosted somewhere, either as a playable video on a supported video streaming service or as a video file with a URL that you can use to reference it. It cannot be a file that is stored in a directory that is not publicly accessible.
- Audio: Embeds an audio clip into the response. The source audio file must be hosted somewhere and have a URL that you can use to reference it. It cannot be a file that is stored in a directory that is not publicly accessible.
- iframe: Embeds content from an external website, such as a form or other interactive component, directly within the chat. The source content must be publicly accessible using HTTP, and must be embeddable as an HTML `iframe` element.
- Option: Adds a list of one or more options. When a user clicks one of the options, an associated user input value is sent to your assistant. How options are rendered can differ depending on the number of options and where you deploy the dialog.
- Pause: Forces the application to wait for a specified number of milliseconds before continuing with processing. You can choose to show an indicator that the assistant is working on typing a response. Use this response type if you need to perform an action that might take some time.
- Search skill: [Plus]{: tag-green} Searches an external data source for relevant information to return to the user. The data source that is searched is a {{site.data.keyword.discoveryshort}} service data collection that you configure when you add search to the assistant that uses this dialog.
- User-defined: If you use the JSON editor to define the response, you can create your own user-defined response type. For more information, see Defining responses using the JSON editor.
Different integrations have different capabilities for displaying rich responses. If you want to define different responses that are customized for different channels, you can do so by editing the response using the JSON editor. For more information, see Targeting specific integrations.
To add a rich response, complete the following steps:
1.  Click the dropdown menu in the Assistant responds field to choose a response type, and then provide any required information.

    For more information, see the following sections:

    - Connect to human agent
    - Channel transfer
    - Image
    - Option
    - Pause
    - Search skill [Plus]{: tag-green}
    - Text
2.  To add another response type to the current response, click Add response type.

    You might want to add multiple response types to a single response to provide a richer answer to a user query. For example, if a user asks for store locations, you could show a map and display a button for each store location that the user can click to get address details. To build that type of response, you can use a combination of image, options, and text response types. Another example is using a text response type before a pause response type so you can warn users before pausing the dialog.

    You cannot add more than 5 response types to a single response. This means that if you define three conditioned responses for a dialog node, each conditioned response can have no more than 5 response types added to it.
    {: note}

    You cannot add more than one Connect to human agent or more than one Search skill response type to a single dialog node.
    {: note}

    Do not add more than one option response type to a single dialog node because both lists are displayed at once, but the customer can choose an option from only one of them.
    {: note}
3.  If you added more than one response type, you can click the Move up or down arrows to arrange the response types in the order you want your assistant to process them.
#### Connect to human agent
{: #dialog-overview-add-connect-to-human-agent}
If your client application is able to transfer a conversation to a person, such as a customer support agent, then you can add a Connect to human agent response type to initiate the transfer. Some of the built-in integrations, such as web chat and Intercom, support making transfers to service desk agents. If you are using a custom application, you must program the application to recognize when this response type is triggered.
If you want to take advantage of the containment metric to track your assistant's success rate, add this response type to your dialog or use an alternate method to identify when customers are directed to outside support. For more information, see Measuring containment.
{: tip}
To add a Connect to human agent response type, complete the following steps:
1.  From the dialog node where you want to add the response type, click the dropdown menu in the Assistant responds field, and then choose Connect to human agent.

2.  Optional: Add a message to share with the human agent to whom the conversation is transferred in the Message to human agent field.

3.  Add a message to show to the customer to explain that they are being transferred.

    You can add a message to show when agents are available and a message to show when agents are unavailable. Each message can be up to 320 characters in length.

    Web chat built-in service desk integrations only: The text you add to the Response when agents are online and Response when no agents are online fields is used for transfers in web chat version 3 and later. If you don't add your own messages, the hint text (the grayed-out text that is displayed as the example messages) is used.

    If you use this response type in multiple nodes and want to use the same custom text each time, but don't want to edit each node individually, you can change the default text that is used by the web chat. To change the default messages, edit the language source file{: external}. Look for the `default_agent_availableMessage` and `default_agent_unavailableMessage` values. For more information about how to change web chat text, see Languages{: external}.
    {: tip}

4.  Optional: If the channel where you deploy the assistant is integrated with a service desk, you can add initial routing information to pass with the transfer request.

    1.  Pick the integration type from the Service desk routing field.

    2.  Add routing information that is meaningful to the service desk you are using.

        | Service desk type | Routing information | Description |
        |---|---|---|
        | Salesforce | Button ID | Specify a valid button ID from your Salesforce deployment. |
        | Zendesk | Department | Specify a valid department name from your Zendesk account. |
        {: caption="Service desk routing options" caption-side="bottom"}
The dialog transfer does not occur when you test dialog nodes with this response type in the "Try it out" pane of the dialog. You must access a node that uses this response type from the Preview button for the assistant to see how your users will experience it.
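In the JSON editor, this response type appears as `connect_to_agent`. The following snippet is a rough sketch: the overall shape follows the generic response schema, but the routing target (a hypothetical Zendesk department here) and the message values are examples that you should verify against your service desk integration's documentation:

```json
{
  "output": {
    "generic": [
      {
        "response_type": "connect_to_agent",
        "message_to_human_agent": "Customer wants to change a flight reservation.",
        "agent_available": {
          "message": "Ok, I'm transferring you to an agent now."
        },
        "agent_unavailable": {
          "message": "I'm sorry, but no agents are online at the moment."
        },
        "transfer_info": {
          "target": {
            "zendesk": {
              "department": "Reservations"
            }
          }
        }
      }
    ]
  }
}
```
{: codeblock}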
#### Channel transfer
{: #dialog-overview-add-channel-transfer}
If your assistant uses multiple integrations to support different channels for interaction with users, there might be some situations when a customer begins a conversation in one channel but then needs to transfer to a different channel.
The most common such situation is transferring a conversation to the web chat integration, in order to take advantage of web chat features such as service desk integration.
Currently, the web chat is the only supported target for a channel transfer.
{: note}
Only the following integrations can initiate a channel transfer:
- Slack
- Facebook Messenger
Other integrations ignore the Channel transfer response type.
To add a Channel transfer response type, complete the following steps:
-
From the dialog node where you want to add the response type, click the dropdown menu in the Assistant responds field, and then choose Channel transfer.
-
Optional. In the Message before link to web chat field, edit the introductory message to display to the user (in the originating channel) before the link that initiates the transfer. By default, this message is
OK, click this link for additional help. Chat will continue on a new web page.
-
In the URL to web chat field, type the URL for your website where the web chat widget is embedded.
In the integration that processes the Channel transfer response, the introductory message is displayed, followed by a link to the URL you specify. The user must then click the link to initiate the transfer.
When a conversation is transferred from one channel to another, the session history and context are preserved, so the destination channel can continue the conversation from where it left off. Note that the message output that contains the Channel transfer response is processed first by the channel that initiates the transfer, and then by the target channel. If the output contains multiple responses (perhaps using different response types), these will be processed by both channels (before and after the transfer). If you want to target individual responses to specific channels, you can do so by editing the response using the JSON editor. For more information, see Targeting specific integrations.
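In the JSON editor, the transfer appears as a `channel_transfer` response type. The following is a sketch that assumes the web chat is embedded at the hypothetical URL `https://www.example.com/support`:

```json
{
  "output": {
    "generic": [
      {
        "response_type": "channel_transfer",
        "message_to_user": "Click the link to continue this chat on our website.",
        "transfer_info": {
          "target": {
            "chat": {
              "url": "https://www.example.com/support"
            }
          }
        }
      }
    ]
  }
}
```
{: codeblock}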
#### Image
{: #dialog-overview-add-image}
Sometimes a picture is worth a thousand words. Include images in your response to do things like illustrate a concept, show off merchandise for sale, or maybe to show a map of your store location.
To add an Image response type, complete the following steps:
1.  Choose Image.

2.  Add the full URL to the hosted image file into the Image source field.

    The image must be in .jpg, .gif, or .png format. The image file must be stored in a location that is publicly addressable by an `https:` URL. For example: `https://www.example.com/assets/common/logo.png`.

    If you want to display an image title and description above the embedded image in the response, then add them in the fields provided.

    To access an image that is stored in {{site.data.keyword.cloud}} {{site.data.keyword.cos_short}}, enable public access to the individual image storage object, and then reference it by specifying the image source with syntax like this: `https://s3.eu.cloud-object-storage.appdomain.cloud/your-bucket-name/image-name.png`.

    Some integration channels ignore titles or descriptions.
    {: note}
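In the JSON editor, an image response looks like the following minimal sketch (the source URL, title, and description are placeholders):

```json
{
  "output": {
    "generic": [
      {
        "response_type": "image",
        "source": "https://www.example.com/assets/common/logo.png",
        "title": "Example logo",
        "description": "Our company logo"
      }
    ]
  }
}
```
{: codeblock}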
#### Video
{: #dialog-overview-add-video}
Include videos in your response to share how-to demonstrations, promotional clips, and so forth. In the web chat, a video response renders as an embedded video player.
To add a Video response type, complete the following steps:
1.  Choose Video.

2.  Add the full URL to the hosted video into the Video source field:

    - To link directly to a video file, specify the URL to a file in any standard format such as MPEG or AVI. In the web chat, the linked video renders as an embedded video player.

      HLS (`.m3u8`) and DASH (MPD) streaming videos are not supported.
      {: note}

    - To link to a video that is hosted on a supported video hosting service, specify the URL to the video. In the web chat, the linked video renders using the embeddable player for the hosting service.

      Specify the URL you would use to view the video in your browser (for example, `https://www.youtube.com/watch?v=52bpMKVigGU`). You do not need to convert the URL to an embeddable form; the web chat does this automatically.
      {: note}

      You can embed videos that are hosted on the following services:

3.  If you want to display a video title and description above the embedded video in the response, add them in the fields provided.

    Some integration channels ignore titles or descriptions.
    {: note}

4.  If you want to scale the video to a specific display size, specify a number in the Base height field.
The Video response type is supported in the web chat, Facebook, WhatsApp, Slack, and SMS integrations.
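In the JSON editor, a video response follows the same pattern as an image response. The following is a sketch; the title and description are placeholders, and the Base height setting is easiest to configure in the UI:

```json
{
  "output": {
    "generic": [
      {
        "response_type": "video",
        "source": "https://www.youtube.com/watch?v=52bpMKVigGU",
        "title": "Product demo",
        "description": "A short demonstration video"
      }
    ]
  }
}
```
{: codeblock}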
#### Audio
{: #dialog-overview-add-audio}
Include audio clips in your response to share spoken-word or other audible content. In the web chat, an audio response renders as an embedded audio player. In the phone integration, an audio response plays over the phone.
To add an Audio response type, complete the following steps:
1.  Choose Audio.

2.  Add the full URL to the hosted audio clip into the Audio source field:

    - To link directly to an audio file, specify the URL to a file in any standard format such as MP3 or WAV. In the web chat, the linked audio clip renders as an embedded audio player.

    - To link to an audio clip on a supported audio hosting service, specify the URL to the audio clip. In the web chat, the linked audio clip renders using the embeddable player for the hosting service.

      Specify the URL you would use to access the audio file in your browser (for example, `https://soundcloud.com/ibmresearch/fallen-star-amped`). You do not need to convert the URL to an embeddable form; the web chat does this automatically.
      {: note}

      You can embed audio that is hosted on the following services:

      - SoundCloud{: external}
      - Mixcloud{: external}

3.  If you want to display a title and description above the embedded audio player in the response, add them in the fields provided.

    Some integration channels ignore titles or descriptions.
    {: note}

4.  If you want the audio clip to loop indefinitely, select On in the Loop field. For example, you might want to use this option to play music while a user waits on the phone. (By default, the audio plays only once and then stops.)

    The Loop option is currently supported only by the phone integration. This option has no effect if you are using the web chat integration or any other channel.
    {: note}
The Audio response type is supported in the web chat, Facebook, WhatsApp, Slack, SMS, and phone integrations.
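In the JSON editor, an audio response is structured like the following sketch. The title and description are placeholders, and the Loop setting adds channel-specific properties that are easiest to configure in the UI:

```json
{
  "output": {
    "generic": [
      {
        "response_type": "audio",
        "source": "https://soundcloud.com/ibmresearch/fallen-star-amped",
        "title": "Hold music",
        "description": "An audio clip hosted on SoundCloud"
      }
    ]
  }
}
```
{: codeblock}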
#### iframe
{: #respond-add-iframe}
Add an iframe response to embed content from another website directly inside the chat window as an HTML `iframe` element. An iframe response is useful if you want to enable customers to perform some interaction with an external service without leaving the chat. For example, you might use an iframe response to display the following examples within the web chat:
- An interactive map on Google Maps{: external}
- A survey that uses SurveyMonkey{: external}
- A form for making reservations through OpenTable{: external}
- A scheduling form that uses Calendly{: external}
In the web chat, there are two ways the iframe can be included:
- Like a preview card that describes the embedded content. Customers can click this card to display the frame and interact with the content.
The iframe response type is supported by the following channel integrations:
- Web chat
To add an iframe response type, complete the following steps:
-
Add the full URL to the external content in the iframe source field.
The URL must specify content that is embeddable in an HTML
iframe
element. Different sites have different restrictions for embedding content, and different processes for generating embeddable URLs. An embeddable URL is one that can be specified as the value of thesrc
attribute of theiframe
element.For example, to embed an interactive map that uses Google Maps, you can use the Google Maps Embed API. For more information, see The Maps Embed API overview{: external}. Other sites have different processes for creating embeddable content.
For the technical details of using
Content-Security-Policy: frame-src
that gives you permission to embed the website content in your assistant, see CSP: frame-src{: external}. -
Optionally add a descriptive title in the Title field.
In the web chat, the title that you add is displayed in the preview card. The customer clicks the preview card to render the external content.
If you do not specify a title, the web chat attempts to retrieve metadata from the specified URL and displays the content title per the specification in the source.{: note}
References to variables are not supported.
{: note}
Content that is loaded in an iframe by the web chat is sandboxed, which restricts permissions in order to reduce security vulnerabilities. The web chat uses the `sandbox` attribute of the `iframe` element to grant only the following permissions:

| Permission | Description |
|---|---|
| `allow-downloads` | Allows downloading files from the network, if the download is initiated by the user. |
| `allow-forms` | Allows submitting forms. |
| `allow-scripts` | Allows running scripts, but not opening pop-up windows. |
| `allow-same-origin` | Allows the content to access its own data storage (such as cookies), and allows limited access to JavaScript APIs. |
{: caption="Sandbox permissions" caption-side="top"}

A script that runs inside a sandboxed iframe cannot change any content outside the iframe, if the outer page and the iframe have different origins. Be careful if you use an iframe response to embed content that has the same origin as the page where your web chat widget is hosted. In this situation, the embedded content can defeat the sandboxing and gain access to content outside the frame. For more information about this potential vulnerability, see the `sandbox` attribute documentation{: external}.
{: note}
The `iframe` response type in the web chat displays a preview card, which includes an image, title, and description of the webpage that the user visits in the web chat.

To display an image, title, and description in the preview card, the webpage needs the following `<meta>` tags inside the `<head>` tag:
<meta property="og:image" content="https://.../image.jpg" />
<meta property="og:image:url" content="https://.../image.jpg" />
<meta property="og:title" content="The webpage title" />
<meta property="og:description" content="The webpage description" />
These metadata properties come from The Open Graph Protocol.

The metadata is optional. The web chat displays a preview card with the webpage URL and whatever metadata it fetched successfully.
{: tip}
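In the JSON editor, an iframe response is a sketch like the following (the source URL and title are placeholders):

```json
{
  "output": {
    "generic": [
      {
        "response_type": "iframe",
        "source": "https://www.example.com/reservations",
        "title": "Make a reservation"
      }
    ]
  }
}
```
{: codeblock}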
#### Option
{: #dialog-overview-add-option}
Add an option response type when you want to give the customer a set of options to choose from. For example, you can construct a response like this:
| List title | List description | Option label | User input submitted when clicked |
|---|---|---|---|
| Insurance types | Which of these items do you want to insure? | | |
| | | Boat | I want to buy boat insurance |
| | | Car | I want to buy car insurance |
| | | Home | I want to buy home insurance |
{: caption="Response options" caption-side="bottom"}
Most integrations display the options as buttons if there are only a few items (4 or fewer, for example).
{caption="Option buttons" caption-side="bottom"}
Otherwise, the options are displayed as a list.
To add an Option response type, complete the following steps:
1.  From the dialog node where you want to add the response type, click the dropdown menu in the Assistant responds field, and then choose Option.

2.  Click Add option.

3.  In the List label field, enter the option to display in the list.

    The label must be less than 2,048 characters in length.

4.  In the corresponding Value field, enter the user input to pass to your assistant when this option is selected.

    The value must be less than 2,048 characters in length.

    For Slack integrations where the options are displayed as a list, each value must be 75 characters or less in length.
    {: important}

    Specify a value that you know will trigger the correct intent when it is submitted. For example, it might be a user example from the training data for the intent.

5.  Repeat the previous steps to add more options to the list.

    You can add up to 20 options.

6.  Add a list introduction in the Title field. The title can ask the user to pick from the list of options.

    Some integration channels do not display the title.
    {: note}

7.  Optional: Add additional information in the Description field. If specified, the description is displayed after the title and before the option list.

    Some integration channels do not display the description.
    {: note}

8.  Optional: If you want to indicate a preference for how the options are displayed, as buttons or in a list, you can add a `preference` property to the response.

    To do so, open the JSON editor for the response, and then add a `preference` name and value pair before the `response_type` name and value pair. You can set the preference to `dropdown` or `button`.

    ```json
    {
      "output": {
        "generic": [
          {
            "title": "Insurance types",
            "options": [
              {
                "label": "Boat",
                "value": {
                  "input": {
                    "text": "I want to buy boat insurance."
                  }
                }
              },
              {
                "label": "Car",
                "value": {
                  "input": {
                    "text": "I want to buy car insurance."
                  }
                }
              },
              {
                "label": "House",
                "value": {
                  "input": {
                    "text": "I want to buy house insurance."
                  }
                }
              }
            ],
            "preference": "dropdown", // add this name and value pair
            "description": "Which of these items do you want to insure?",
            "response_type": "option"
          }
        ]
      }
    }
    ```
    {: codeblock}
When you define an options list with only 3 items, the options are typically displayed as buttons. When you add a preference property that indicates `dropdown` as the preference, for example, you can see in the "Try it out" pane that the list is displayed as a drop-down list instead.

[Image: Options list]
Some integration types, such as the web chat, reflect your preference. Other integration types, such as Slack, do not reflect your preference when they render the options.
Do not add more than one option response type to a single dialog node because both lists are displayed at once, but the customer can choose an option from only one of them.
{: important}
If you need to be able to populate the list of options with different values based on some other factors, you can design a dynamic options list. For more information, see the How to Dynamically Add Response Options to Dialog Nodes{: external} blog post.
#### Pause
{: #dialog-overview-add-pause}
Add a pause response type to give the assistant time to respond. For example, you might add a pause response type to a node that calls a webhook. The pause indicates that the assistant is working on an answer, which gives the assistant time to make the webhook call and get a response. Then, you can jump to a child node to show the result.
To add a Pause response type, complete the following steps:
1.  From the dialog node where you want to add the response type, click the dropdown menu in the Assistant responds field, and then choose Pause.

2.  Add the length of time for the pause to last, as a number of milliseconds (ms), to the Duration field.

    The value cannot exceed 10,000 ms. Users are typically willing to wait about 8 seconds (8,000 ms) for someone to enter a response.

3.  To prevent a typing indicator from being displayed during the pause, choose Off.

    Add another response type, such as a text response type, after the pause to clearly denote that the pause is over.
    {: tip}
This response type does not render in the "Try it out" pane. You must access a node that uses this response type from a test deployment to see how your users will experience it.
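In the JSON editor, a pause is represented by a `time` value in milliseconds and an optional `typing` flag that controls the typing indicator. A minimal sketch of a 5-second pause that shows the indicator:

```json
{
  "output": {
    "generic": [
      {
        "response_type": "pause",
        "time": 5000,
        "typing": true
      }
    ]
  }
}
```
{: codeblock}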
#### Search skill
{: #dialog-overview-add-search-skill}
[Plus]{: tag-green}
If you have existing customer-facing material, such as an FAQ, a product catalog, or sales material that can answer questions that customers often ask, put that information to use. You can trigger a search of the existing material in real time to get the latest and most up-to-date answer for your customers.
To use the search skill response type, you must add search to the same assistant that uses this dialog. For more information, see {{site.data.keyword.discoveryfull}} search integration setup.
To add a Search skill response type, complete the following steps:
1.  From the dialog node where you want to add the response type, click the dropdown menu in the Assistant responds field, and then choose Search skill.

    This response type indicates that you want to search an external data source for a relevant response.

2.  To edit the search query to pass to the {{site.data.keyword.discoveryshort}} service, click Customize, and then fill in the following fields:

    - Query: Optional. You can specify a query in natural language to pass to {{site.data.keyword.discoveryshort}}. If you do not add a query, the customer's exact input text is passed as the query.

      For example, you can specify `What cities do you fly to?`. This query value is passed to {{site.data.keyword.discoveryshort}} as a search query. {{site.data.keyword.discoveryshort}} uses natural language understanding to understand the query and to find an answer or relevant information about the subject in the data collection that is configured for the search.

      You can include specific information provided by the user by referencing entities that were detected in the user's input as part of the query. For example, `Tell me about @product`. Or you can reference a context variable, such as `Do you have flights to $destination?`. Just be sure to design your dialog such that the search is not triggered unless any entities or context variables that you reference in the query are set to valid values.

      This field is equivalent to the {{site.data.keyword.discoveryshort}} `natural_language_query` parameter. For more information, see Query parameters{: external}.

    - Filter: Optional. Specify a text string that defines information that must be present in any of the search results that are returned.

      - To return only documents with positive sentiment detected, for example, specify `enriched_text.sentiment.document.label:positive`.
      - To filter results to include only documents that the ingestion process identified as containing the entity `Boston, MA`, specify `enriched_text.entities.text:"Boston, MA"`.
      - To filter results to include only documents that the ingestion process identified as containing a product name provided by the customer, you can specify `enriched_text.entities.text:@product`.
      - To filter results to include only documents that the ingestion process identified as containing a city name that you saved in a context variable named `$destination`, you can specify `enriched_text.entities.text:$destination`.

      This field is equivalent to the {{site.data.keyword.discoveryshort}} `filter` parameter. For more information, see Query parameters{: external}.

    If you add both a query and a filter value, the filter parameter is applied first to filter the data collection documents and cache the results. The query parameter then ranks the cached results.

3.  Optional: Change the query type that is used for the search.

    The search sends a natural language query to {{site.data.keyword.discoveryshort}} automatically. If you want to use the {{site.data.keyword.discoveryshort}} query language instead, you can specify it. To do so, open the JSON editor for the node response.

    Edit the JSON code snippet to replace `natural_language` with `discovery_query_language`. For example:

    ```json
    {
      "output": {
        "generic": [
          {
            "query": "",
            "filter": "enriched_text.sentiment.document.label:positive",
            "query_type": "discovery_query_language",
            "response_type": "search_skill"
          }
        ]
      }
    }
    ```
    {: codeblock}
Test this response type from the assistant Preview. You cannot test it from the "Try it out" pane.
### Conditional responses
{: #dialog-overview-multiple}
A single dialog node can provide different responses, each one triggered by a different condition. Use this approach to address multiple scenarios in a single node.
The node still has a main condition, which is the condition for using the node and processing the conditions and responses that it contains.
In this example, your assistant uses information that it collected earlier about the user's location to tailor its response, and provide information about the store nearest the user. See Context variables for more information about how to store information collected from the user.
{caption="Conditional responses" caption-side="bottom"}
This single node now provides the equivalent function of four separate nodes.
To add conditional responses to a node, complete the following steps:
1.  Click Customize, and then set the Multiple conditioned responses switch to On.

    The node response section changes to show a pair of condition and response fields. You can add a condition and a response into them.

2.  To customize a response further, click the Customize response icon next to the response.

    You must open the response for editing to complete the following tasks:

    - Update context. To change the value of a context variable when the response is triggered, specify the context value in the context editor. You update context for each individual conditional response; there is no common context editor or JSON editor for all conditional responses.

    - Add rich responses. To add more than one text response, or to add response types other than text to a single conditional response, you must open the edit response view.

    - Configure a jump. To instruct your assistant to jump to a different node after this conditional response is processed, select Jump to from the And finally section of the response edit view. Identify the node that you want your assistant to process next. See Configuring the Jump to action for more information.

      A Jump to action that is configured for the node is not processed until all of the conditional responses are processed. Therefore, if a conditional response is configured to jump to another node, and that conditional response is triggered, the jump that is configured for the node is never processed, and so does not occur.

3.  Click Add response to add another conditional response.

    The conditions within a node are evaluated in order, just as nodes are. Be sure that your conditional responses are listed in the correct order. If you need to change the order, select a condition and response pair and move it up or down in the list by using the arrows that are displayed.
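If you manage your dialog through the v1 workspaces API rather than the UI, each conditional response is stored as a child dialog node with type `response_condition`. The following sketch uses hypothetical node IDs (`store_hours`, `store_hours_boston`) and a hypothetical `$city` check:

```json
{
  "dialog_node": "store_hours_boston",
  "type": "response_condition",
  "parent": "store_hours",
  "conditions": "$city == 'Boston'",
  "output": {
    "generic": [
      {
        "response_type": "text",
        "values": [
          { "text": "The Boston store is open until 9 PM." }
        ]
      }
    ]
  }
}
```
{: codeblock}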
### Specifying what to do next
{: #dialog-overview-jump-to}
After making the specified response, you can instruct your assistant to do one of the following things:

- Wait for user input: Your assistant waits for the user to provide new input that the response elicits. For example, the response might ask the user a yes or no question. The dialog does not progress until the user provides more input.

- Skip user input: Use this option when you want to bypass waiting for user input and go directly to the first child node of the current node instead.

  The current node must have at least one child node for this option to be available.
  {: note}

- Jump to another dialog node: Use this option when you want the conversation to go directly to an entirely different dialog node. You can use a Jump to action to route the flow to a common dialog node from multiple locations in the tree, for example.

  The target node that you want to jump to must exist before you can configure the Jump to action to use it.
  {: note}
#### Configuring the Jump to action
{: #dialog-overview-jump-to-config}
If you choose to jump to another node, specify when the target node is processed by choosing one of the following options:

- Condition: The jump targets the condition section of the selected dialog node, so your assistant first checks whether the condition of the targeted node evaluates to true.

  - If the condition evaluates to true, the system processes the target node immediately.

  - If the condition does not evaluate to true, the system moves to the next sibling node of the target node to evaluate its condition, and repeats this process until it finds a dialog node with a condition that evaluates to true.

  - If the system processes all the siblings and none of the conditions evaluate to true, the basic fallback strategy is used, and the dialog evaluates the nodes at the base level of the dialog tree.

  Targeting the condition is useful for chaining the conditions of dialog nodes. For example, you might want to first check whether the input contains an intent, such as `#turn_on`, and if it does, you might want to check whether the input contains entities, such as `@lights`, `@radio`, or `@wipers`. Chaining conditions helps to structure larger dialog trees.

  Avoid choosing this option when you configure a jump from a conditional response to a node that sits above the current node in the dialog tree. Otherwise, you can create an infinite loop. If your assistant jumps to the earlier node and checks its condition, the condition is likely to return false because the same user input that triggered the current node is being evaluated again. Your assistant then goes to the next sibling or back to the root to check the conditions on those nodes, and is likely to end up triggering this node again, so the process repeats itself.
  {: note}

- Response: The jump targets the response section of the selected dialog node, which is run immediately. That is, the system does not evaluate the condition of the selected dialog node; it processes the response of the selected dialog node immediately.

  Targeting the response is useful for chaining several dialog nodes together. The response is processed as if the condition of this dialog node is true. If the selected dialog node has another Jump to action, that action is run immediately, too.

- Wait for user input: Waits for new input from the user, and then begins to process it from the node that you jump to. This option is useful if the source node asks a question, for example, and you want to jump to a separate node to process the user's answer to the question.
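In the dialog node JSON that the v1 workspaces API exposes, these choices map to the `selector` property of the node's `next_step` object: `condition` targets the condition, `body` targets the response, and `user_input` waits for new input. A sketch with hypothetical node IDs:

```json
{
  "dialog_node": "handle_order",
  "conditions": "#place_order",
  "next_step": {
    "behavior": "jump_to",
    "selector": "condition",
    "dialog_node": "confirm_order"
  }
}
```
{: codeblock}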
## What to do next
{: #dialog-overview-next}
- Be sure to test your dialog as you build it. For more information, see Testing the dialog.
- For more information about ways to address common use cases, see Dialog building tips.
- For more information about the expression language that you can use to improve your dialog, such as methods that reformat dates or text, see Expression language methods.
You can also use the API to add nodes or otherwise edit a dialog. See Modifying a dialog using the API for more information.