Support setting the agent log level to Trace in Fleet #2212

Closed · Tracked by #3640
cmacknz opened this issue Jan 31, 2023 · 22 comments
Labels
Team:Elastic-Agent (Agent team), Team:Elastic-Agent-Control-Plane (Agent Control Plane team)

Comments

@cmacknz
Member

cmacknz commented Jan 31, 2023

The Fleet UI does not know about the trace level that was introduced in #1955

[Screenshot: Screen Shot 2023-01-26 at 11 50 27 AM]

We will need to ensure the agent settings action handler understands the new Trace log level:

lvl := logp.InfoLevel
err := lvl.Unpack(action.LogLevel)
if err != nil {
    return fmt.Errorf("failed to unpack log level: %w", err)
}
if err := h.agentInfo.SetLogLevel(action.LogLevel); err != nil {
    return fmt.Errorf("failed to update log level: %w", err)
}

In the Fleet UI the following two definitions need to be changed to allow setting the Trace level.

https://github.com/elastic/kibana/blob/f78236a2e4c1532a9a135444bea95a0f89d1047e/x-pack/plugins/fleet/server/types/models/agent.ts#L32-L42

  schema.object({
    type: schema.oneOf([schema.literal('SETTINGS')]),
    data: schema.object({
      log_level: schema.oneOf([
        schema.literal('debug'),
        schema.literal('info'),
        schema.literal('warning'),
        schema.literal('error'),
      ]),
    }),
  })

https://github.com/elastic/kibana/blob/f78236a2e4c1532a9a135444bea95a0f89d1047e/x-pack/plugins/fleet/public/applications/fleet/sections/agents/agent_details_page/components/agent_logs/constants.tsx#L48-L53

export const AGENT_LOG_LEVELS = {
  ERROR: 'error',
  WARNING: 'warning',
  INFO: 'info',
  DEBUG: 'debug',
};
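
For illustration, a minimal sketch of what the two updated definitions might look like, assuming the UI passes the level string through to the agent unchanged and that the agent accepts 'trace':

  schema.object({
    type: schema.oneOf([schema.literal('SETTINGS')]),
    data: schema.object({
      log_level: schema.oneOf([
        // 'trace' added alongside the existing levels
        schema.literal('trace'),
        schema.literal('debug'),
        schema.literal('info'),
        schema.literal('warning'),
        schema.literal('error'),
      ]),
    }),
  })

export const AGENT_LOG_LEVELS = {
  ERROR: 'error',
  WARNING: 'warning',
  INFO: 'info',
  DEBUG: 'debug',
  TRACE: 'trace', // new entry so the log level dropdown can offer Trace
};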
@sjoukedv

In addition to this can we support overriding the default log level per policy? I can only change the log level per agent, but the agents come and go in an autoscaling Kubernetes cluster.

Additionally, could we support setting the log level from the environment in the configuration, like the elasticsearch output configuration does: https://github.com/elastic/elastic-agent/blob/main/elastic-agent.docker.yml#L7-L9? This would allow me to just run the docker.elastic.co/beats/elastic-agent container with the log level taken from the environment.
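
For illustration, a sketch of what that could look like in the container configuration, using the same ${VAR:default} substitution syntax the Elasticsearch output settings use; AGENT_LOG_LEVEL is a hypothetical variable name, not an existing option:

# Hypothetical: read the agent log level from the environment, defaulting to info.
agent.logging.level: '${AGENT_LOG_LEVEL:info}'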

@cmacknz changed the title from "Support configuring setting the agent log level to Trace in Fleet" to "Support setting the agent log level to Trace in Fleet" on Feb 21, 2023
@defensivedepth

Would also like to see the ability to override the default log level per policy or globally.

@cmacknz
Member Author

cmacknz commented Jun 1, 2023

Changing the log level per policy needs to be supported in Fleet; it should already be possible today for a standalone agent. I've opened elastic/kibana#158861 to track this.

As for the trace log level in this issue: it was used to hide some messages that were generated for every event and were flooding the log files, making them useless for debugging. There isn't any value in turning it on right now.

We can eventually add the ability to change the log level per input or integration; we are unlikely to expose the trace level until we do that, as it's the only way to control the volume of logs enough to keep it useful.

@jlind23
Contributor

jlind23 commented Mar 26, 2024

Depends on elastic/kibana#158861

@pierrehilbert added the Team:Elastic-Agent-Control-Plane label and removed the v8.6.0 label on Jun 3, 2024
@elasticmachine
Contributor

Pinging @elastic/elastic-agent-control-plane (Team:Elastic-Agent-Control-Plane)

@jlind23
Contributor

jlind23 commented Jun 5, 2024

@cmacknz now that elastic/kibana#158861 has landed, are we good with closing this one, or do we really want to add the trace option in the Fleet UI?

@cmacknz
Member Author

cmacknz commented Jun 5, 2024

The need for this is going away now that we can write noisy per-event logs to a separate file. Closing.

@cmacknz closed this as completed on Jun 5, 2024
@taylor-swanson
Contributor

> The need for this is going away now that we can write noisy per-event logs to a separate file. Closing.

@cmacknz, is this documented somewhere? See this comment in elastic/sdh-beats#5005.

@cmacknz
Member Author

cmacknz commented Aug 9, 2024

@belimawr where is the event log file for the agent documented?

@belimawr
Contributor

belimawr commented Aug 9, 2024

> @belimawr where is the event log file for the agent documented?

It's documented alongside the already existing logging documentation:

  • https://www.elastic.co/guide/en/fleet/current/elastic-agent-standalone-logging-config.html (look for agent.logging.event_data)
  • https://github.com/elastic/elastic-agent/blob/main/docs/elastic-agent-logging.md (look for agent.logging.event_data)
  • The Events Logging section of the reference configuration (look for agent.logging.event_data):

    #=============================== Events Logging ===============================
    # Some outputs will log raw events on errors like indexing errors in the
    # Elasticsearch output, to prevent logging raw events (that may contain
    # sensitive information) together with other log messages, a different
    # log file, only for log entries containing raw events, is used. It will
    # use the same level, selectors and all other configurations from the
    # default logger, but it will have its own file configuration.
    #
    # Having a different log file for raw events also prevents event data
    # from drowning out the regular log files.
    #
    # IMPORTANT: No matter the default logger output configuration, raw events
    # will **always** be logged to a file configured by `agent.logging.event_data.files`.
    # agent.logging.event_data:

    # Logging to rotating files. Set agent.logging.to_files to false to disable logging to
    # files.
    #agent.logging.event_data.to_files: true

    #agent.logging.event_data:
      # Configure the path where the logs are written. The default is the logs directory
      # under the home path (the binary location).
      #path: /var/log/filebeat

      # The name of the files where the logs are written to.
      #name: filebeat-event-data

      # Configure log file size limit. If the limit is reached, log file will be
      # automatically rotated.
      #rotateeverybytes: 5242880 # = 5MB

      # Number of rotated log files to keep. The oldest files will be deleted first.
      #keepfiles: 2

      # The permissions mask to apply when rotating log files. The default value is 0600.
      # Must be a valid Unix-style file permissions mask expressed in octal notation.
      #permissions: 0600

      # Enable log file rotation on time intervals in addition to the size-based rotation.
      # Intervals must be at least 1s. Values of 1m, 1h, 24h, 7*24h, 30*24h, and 365*24h
      # are boundary-aligned with minutes, hours, days, weeks, months, and years as
      # reported by the local system clock. All other intervals are calculated from the
      # Unix epoch. Defaults to disabled.
      #interval: 0

      # Rotate existing logs on startup rather than appending them to the existing
      # file. Defaults to false.
      #rotateonstartup: false
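
Distilled from the commented reference above, a minimal sketch of explicitly enabling and tuning the event log file; the values are illustrative, and the exact nesting of the file options should be checked against the reference configuration:

agent.logging.event_data.to_files: true
agent.logging.event_data.files:
  # Keep two rotated files of up to 5 MB each before the oldest is deleted.
  keepfiles: 2
  rotateeverybytes: 5242880
  permissions: 0600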

@defensivedepth

defensivedepth commented Aug 15, 2024

So is trace no longer an option? I'm trying to view the raw logs that Elastic Agent (Filebeat) is sending. In the past with straight Filebeat we would just enable the trace log level and could see this kind of logging.

@belimawr
Contributor

> So is trace no longer an option? I'm trying to view the raw logs that Elastic Agent (Filebeat) is sending. In the past with straight Filebeat we would just enable the trace log level and could see this kind of logging.

@defensivedepth what logs are you looking for? Is it the raw events Filebeat publishes, or something else?

Do you need to be able to see them in Kibana, or is collecting an Agent diagnostics bundle an option?

@defensivedepth

@belimawr Yes, the raw events Filebeat publishes. Collecting an Agent diagnostics bundle is an option - does that exist already?

@belimawr
Contributor

Yes, it is!

You can run elastic-agent diagnostics collect; this will generate a zip file with the diagnostics bundle, and there will be a folder in there with all the logs.

Another option is via the Fleet UI: Fleet -> Agents -> (select the agent) -> Diagnostics -> Request diagnostics.zip
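
For example (the zip file name below is illustrative; the command prints the actual path it writes):

# Collect a diagnostics bundle from the local agent (may need elevated privileges).
elastic-agent diagnostics collect
# Unpack it and look under logs/ for the agent log files.
unzip elastic-agent-diagnostics-*.zip -d agent-diag
ls agent-diag/logs/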

@defensivedepth

Thanks @belimawr !

@reyesj2

reyesj2 commented Aug 23, 2024

> Yes, it is!
>
> You can run elastic-agent diagnostics collect; this will generate a zip file with the diagnostics bundle, and there will be a folder in there with all the logs.
>
> Another option is via the Fleet UI: Fleet -> Agents -> (select the agent) -> Diagnostics -> Request diagnostics.zip

@belimawr I tried this out and it seems I get the same level of logs seen in the Fleet UI? I am still unable to view the raw event logs.

@belimawr
Contributor

The event logs do not go to the UI. They can be extremely verbose and can also contain sensitive data that should not be shipped alongside the monitoring logs/data.

They can still be collected through the UI if you request the diagnostics; then, if you want to visualise/analyse the logs in Kibana, you can upload them there.

@reyesj2

reyesj2 commented Aug 29, 2024

Thanks for the assistance. I am running elastic-agent diagnostics collect and reviewing the log files within the generated zip under logs/elastic-agent-*/*.ndjson. I do not see the raw events being logged there.

I have the Elastic Agent shipping to Logstash, so the workaround I use is to configure an additional Logstash pipeline to write events to a file.
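
For anyone using the same workaround, a sketch of that kind of Logstash output; the path is illustrative and this would normally sit alongside the pipeline's existing outputs:

# Illustrative: also write incoming events to a local NDJSON file for inspection.
output {
  file {
    path => "/var/log/logstash/agent-raw-events-%{+YYYY-MM-dd}.ndjson"
    codec => json_lines
  }
}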

@belimawr
Contributor

> Thanks for the assistance. I am running elastic-agent diagnostics collect and reviewing the log files within the generated zip under logs/elastic-agent-*/*.ndjson. I do not see the raw events being logged there.
>
> I have the Elastic Agent shipping to Logstash, so the workaround I use is to configure an additional Logstash pipeline to write events to a file.

The raw events go to a different folder: instead of logs/elastic-agent-*/*.ndjson, they are in logs/elastic-agent-*/events/elastic-agent-event-log-*.ndjson

@reyesj2

reyesj2 commented Aug 30, 2024

Is there an associated configuration option for logging to the events directory?

Here is what is under ./logs in the unzipped diagnostics

./logs:
total 4
drwxr-xr-x. 2 root root 4096 Aug 30 17:30 elastic-agent-2df2c1

./logs/elastic-agent-2df2c1:
total 81576
-rw-r--r--. 1 root root 10485851 Aug 30 16:40 elastic-agent-20240830-145.ndjson
-rw-r--r--. 1 root root 10486076 Aug 30 16:49 elastic-agent-20240830-146.ndjson
-rw-r--r--. 1 root root 10486000 Aug 30 16:54 elastic-agent-20240830-147.ndjson
-rw-r--r--. 1 root root 10485614 Aug 30 17:02 elastic-agent-20240830-148.ndjson
-rw-r--r--. 1 root root 10486416 Aug 30 17:08 elastic-agent-20240830-149.ndjson
-rw-r--r--. 1 root root 10486091 Aug 30 17:16 elastic-agent-20240830-150.ndjson
-rw-r--r--. 1 root root 10486297 Aug 30 17:23 elastic-agent-20240830-151.ndjson
-rw-r--r--. 1 root root 10106104 Aug 30 17:30 elastic-agent-20240830-152.ndjson

@belimawr
Contributor

@reyesj2 which version of the Elastic-Agent are you running? The event log file was released in v8.15.0

@reyesj2
Copy link

reyesj2 commented Aug 30, 2024

That's it, I am running version 8.14.3. Thanks for your time on this
