Add section about AI connector #3906

Merged 2 commits on May 30, 2024

68 changes: 65 additions & 3 deletions docs/en/observability/observability-ai-assistant.asciidoc
@@ -167,7 +167,7 @@ Your feedback helps us improve the AI Assistant!

[discrete]
[[obs-ai-chat]]
-=== AI Assistant chat
+=== Chat with the assistant

Click *AI Assistant* in the upper-right corner of any {observability} application to start the chat:

@@ -181,7 +181,7 @@ image::images/obs-ai-chat.png[Observability AI assistant chat, 60%]

[discrete]
[[obs-ai-functions]]
-=== AI Assistant functions
+=== Suggest functions

beta::[]

@@ -209,7 +209,7 @@ Additional functions are available when your cluster has APM data:

[discrete]
[[obs-ai-prompts]]
-=== AI Assistant contextual prompts
+=== Use contextual prompts

AI Assistant contextual prompts throughout {observability} provide the following information:

@@ -231,6 +231,66 @@ image::images/obs-ai-logs.gif[Observability AI assistant example, 75%]

You can continue a conversation from a contextual prompt by clicking *Start chat* to open the AI Assistant chat.

[discrete]
[[obs-ai-connector]]
=== Add the AI Assistant connector to alerting workflows

//TODO: After https://github.com/elastic/kibana/pull/183792 is merged, make "configure the Observability AI Assistant connector" an active link to the published docs.

IMPORTANT: To use the Observability AI Assistant connector,
you must have the `api:observabilityAIAssistant` and `app:observabilityAIAssistant` privileges.
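
If you need to grant these privileges to a user, one option is a custom role.
Below is a minimal sketch using the {kib} create-role API; the role name, credentials,
and the assumption that the `observabilityAIAssistant` feature privilege maps to these
`api:` and `app:` privileges are all illustrative, so verify them against your deployment:

[source,sh]
----
# Hypothetical example: create a role granting "all" on the Observability
# AI Assistant feature. The feature ID and the claim that it covers
# api:observabilityAIAssistant and app:observabilityAIAssistant are
# assumptions, not confirmed by this document.
curl -X PUT "https://localhost:5601/api/security/role/obs_ai_assistant_user" \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  -u elastic:your_password \
  -d '
{
  "elasticsearch": {},
  "kibana": [
    {
      "base": [],
      "feature": { "observabilityAIAssistant": ["all"] },
      "spaces": ["*"]
    }
  ]
}'
----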

You can use the Observability AI Assistant connector to add AI-generated insights and custom actions to your alerting workflows.
To do this:

. <<create-alerts-rules,Create (or edit) an alerting rule>> and specify the conditions that must be met for the alert to fire.
. Under **Actions**, select the **Observability AI Assistant** connector type.
. In the **Connector** list, select the AI connector you created when you set up the assistant.
. In the **Message** field, specify the message to send to the assistant:
+
[role="screenshot"]
image::images/obs-ai-assistant-action-high-cpu.png[Add an Observability AI assistant action while creating a rule in the Observability UI]

You can ask the assistant to generate a report of the alert that fired,
recall information about past occurrences and their resolutions stored in the knowledge base,
provide troubleshooting guidance and resolution steps,
and include any other active alerts that may be related.
As a last step, you can ask the assistant to trigger an action,
such as sending the report (or any other message) to a Slack webhook.
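
For example, a message like the following covers each of these tasks
(the wording and the Slack connector name are illustrative, not prescriptive):

[source,text]
----
Execute the following steps:
1. Create a report about the alert that fired, including when it fired,
   the impacted service or host, and the threshold that was breached.
2. Recall similar past occurrences from the knowledge base and list any
   resolutions that worked.
3. Find other active alerts that may be related and add them to the report.
4. Send the report to the "ops-alerts" Slack connector.
----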

NOTE: Currently you can only send messages to Slack, email, Jira, PagerDuty, or a webhook.
Additional actions will be added in the future.

When the alert fires, contextual details about the event—such as when the alert fired,
the service or host impacted, and the threshold breached—are sent to the AI Assistant,
along with the message provided during configuration.
The AI Assistant runs the tasks requested in the message and creates a conversation you can use to chat with the assistant:

[role="screenshot"]
image::images/obs-ai-assistant-output.png[AI Assistant conversation created in response to an alert]

IMPORTANT: Conversations created by the AI Assistant are public and accessible to every user with permissions to use the assistant.

It might take a minute or two for the AI Assistant to process the message and create the conversation.

Note that overly broad prompts may result in the request exceeding token limits.
For more information, refer to <<obs-ai-token-limits>>.
Also, attempting to analyze several alerts in a single connector execution may cause you to exceed the function call limit.
If this happens, modify the message specified in the connector configuration to avoid exceeding limits.
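
For example, if a broad prompt such as "Analyze all active alerts in detail" hits these limits,
a narrower variant (illustrative wording) keeps the request within bounds:

[source,text]
----
Summarize only the alert that triggered this action.
Do not analyze other active alerts.
Recall at most the three most relevant knowledge base entries.
----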

When asked to send a message to another connector, such as Slack,
the AI Assistant attempts to include a link to the generated conversation.

TIP: The `server.publicBaseUrl` setting must be correctly specified in your {kib} settings,
or the AI Assistant cannot generate this link.
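
For example, in `kibana.yml` (the URL is a placeholder; use the address your users
reach {kib} at):

[source,yaml]
----
# Publicly accessible URL for this Kibana instance, used when the
# AI Assistant builds links back to generated conversations.
server.publicBaseUrl: "https://kibana.example.com:5601"
----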

[role="screenshot"]
image::images/obs-ai-assistant-slack-message.png[Message sent to Slack by the AI Assistant includes a link to the conversation]

The Observability AI Assistant connector is called when the alert fires and when it recovers.

To learn more about alerting, actions, and connectors, refer to <<create-alerts>>.

[discrete]
[[obs-ai-known-issues]]
== Known issues
@@ -242,3 +302,5 @@ You can continue a conversation from a contextual prompt by clicking *Start chat
Most LLMs have a set number of tokens they can manage in a single conversation.
When you reach the token limit, the LLM will throw an error, and Elastic will display a "Token limit reached" error in Kibana.
The exact number of tokens that the LLM can support depends on the LLM provider and model you're using.
If you are using an OpenAI connector, you can monitor token usage in the **OpenAI Token Usage** dashboard.
For more information, refer to the {kibana-ref}/openai-action-type.html#openai-connector-token-dashboard[OpenAI Connector documentation].