
[RLlib] Cleanup examples folder vol. 23: Add example script for custom metrics on EnvRunners (using MetricsLogger API). #47969

Open
wants to merge 12 commits into base: master
Conversation

@sven1977 (Contributor) commented Oct 10, 2024

Cleanup examples folder vol. 23: Add example script for custom metrics on EnvRunners (using MetricsLogger API).

  • Activated in CI
  • The example creates a per-episode 2D heatmap of Ms Pacman's position, logs a custom max and mean metric (per episode, over a sliding window), and logs the number of lives as an EMA-smoothed value (see the sketch after this list for the general logging pattern).
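Below is a minimal, hedged sketch of the general pattern, NOT the PR's actual script (the metric keys and the exact callback signature are illustrative assumptions; the heatmap part is omitted):

```python
# Sketch only: assumes the new API stack, where episode callbacks receive a
# per-EnvRunner `metrics_logger`.
from ray.rllib.algorithms.callbacks import DefaultCallbacks


class CustomEnvRunnerMetrics(DefaultCallbacks):
    def on_episode_end(self, *, episode, metrics_logger, **kwargs):
        ret = episode.get_return()

        # Max and mean of the per-episode return over a sliding window of the
        # last 100 episodes (metric keys here are made up).
        metrics_logger.log_value("max_return", ret, reduce="max", window=100)
        metrics_logger.log_value("mean_return", ret, reduce="mean", window=100)

        # Number of remaining lives, EMA-smoothed over time (window stays None).
        metrics_logger.log_value(
            "num_lives",
            episode.get_infos(-1)["lives"],
            reduce="mean",
            ema_coeff=0.01,
        )
```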

Why are these changes needed?

Related issue number

Checks

  • I've signed off every commit (by using the -s flag, i.e., git commit -s) in this PR.
  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
    • I've added any new APIs to the API Reference. For example, if I added a
      method in Tune, I've added it in doc/source/tune/api/ under the
      corresponding .rst file.
  • I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(

@simonsays1980 (Collaborator) left a comment

LGTM. Awesome example. Documenting the `MetricsLogger` and showing users how to use it correctly is so important. I still think this is one of the most complex parts of RLlib, so I gave some suggestions for further cases.

- the mean distance travelled by MsPacman per episode (over an infinite window).
- the number of lives of MsPacman EMA-smoothed over time.

This callback can be set up to only log stats on certain EnvRunner indices through
Collaborator

What happens if an EnvRunner crashes?

Contributor Author

Good question. The index first goes out of commission (other EnvRunners will NOT change their indices because of another EnvRunner's crash), but only until the actor is automatically restarted. The latter only happens if `config.recreate_failed_env_runners=True`, of course.
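For readers following along, a hedged config sketch of how that restart behavior is switched on. The kwarg was called `recreate_failed_env_runners` around the time of this PR and has been renamed in newer Ray versions, so treat the exact name as an assumption to verify against your installed version:

```python
from ray.rllib.algorithms.ppo import PPOConfig

config = (
    PPOConfig()
    .environment("CartPole-v1")
    .env_runners(num_env_runners=4)
    # Automatically restart crashed EnvRunner actors; the surviving EnvRunners
    # keep their indices while the failed one is recreated.
    .fault_tolerance(recreate_failed_env_runners=True)
)
```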

custom `Algorithm.training_step()` methods, custom loss functions, custom callbacks,
and custom EnvRunners.

This example:
Collaborator

Awesome example!! This shows really well how to use the MetricsLogger.

To increase complexity, we could:

  1. Run this only in evaluation and run evaluation every nth training step (see the config sketch after this list).
  2. Reset some metrics in between.
  3. Log two groups of metrics (for two groups of env-runners).
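For point 1, a hedged sketch of what the config side might look like (parameter names as in the new API stack around the time of this PR; double-check them against your Ray version):

```python
from ray.rllib.algorithms.ppo import PPOConfig

config = (
    PPOConfig()
    .environment("CartPole-v1")
    # Run evaluation only every 5th training iteration, on 2 dedicated
    # evaluation EnvRunners (which get their own callbacks and metrics).
    .evaluation(evaluation_interval=5, evaluation_num_env_runners=2)
)
```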

Contributor Author

Good point! There are two more PRs in flight: MetricsLogger on algorithm.training_step and MetricsLogger inside loss function. A third one could be: MetricsLogger only on eval env runners 🙌

How to run this script
----------------------
`python [script file name].py --enable-new-api-stack --wandb-key [your WandB key]
--wandb-projecy [some project name]`
Collaborator

"--wandb-projecy" -> "--wandb-project"

Collaborator

The same options occur below in "For logging ..."

Contributor Author

Yeah, that's ok. The statement below is the generic enable-logging statement; the ones above are the recommended args for this particular script.

episode.get_infos(-1)["lives"],
reduce="mean", # <- default (must be "mean" for EMA smoothing)
ema_coeff=0.01, # <- default EMA coefficient (`window` must be None)
)
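For readers skimming the thread, a hedged, standalone sketch of the two logging modes touched on in this snippet. The keys and values are made up, the standalone construction is for illustration only (inside RLlib you use the logger passed into your callbacks), and the import path is an assumption to verify:

```python
from ray.rllib.utils.metrics.metrics_logger import MetricsLogger

logger = MetricsLogger()

# EMA-smoothed mean: reduce="mean" plus an `ema_coeff` (window must stay None).
logger.log_value("num_lives", 3, reduce="mean", ema_coeff=0.01)

# Plain mean over a sliding window of the last 100 logged values instead.
logger.log_value("mean_dist_travelled", 12.5, reduce="mean", window=100)
```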
Collaborator

Maybe add a comment regarding `metrics_logger.reduce`: when to use it, and why not here.

self.stats = tree.map_structure_with_path(_reduce, stats_to_return)
# Provide proper error message if reduction fails due to bad data.
except Exception as e:
raise ValueError(
Collaborator

Here I wonder what happens when a user resets a key in between two training steps.

Contributor Author

Great question. Users should, ideally, never use this API and let RLlib determine when to call .reduce() on a MetricsLogger object. They should only call .peek() to see individual reduced stats at the moment, without actually altering/reducing the underlying stats object.
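A hedged illustration of that distinction, continuing the standalone-logger assumption from the earlier sketch (key name made up):

```python
from ray.rllib.utils.metrics.metrics_logger import MetricsLogger

logger = MetricsLogger()
logger.log_value("num_lives", 3, reduce="mean", ema_coeff=0.01)
logger.log_value("num_lives", 2, reduce="mean", ema_coeff=0.01)

# Read the current EMA-smoothed value without altering the underlying Stats.
print(logger.peek("num_lives"))

# In contrast, `logger.reduce()` compiles (and resets) the logged stats; RLlib
# calls it internally at the right points, so user code normally shouldn't.
```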

Contributor Author

Maybe I need to make this more clear in the docstring ...

Contributor Author

done

@github-actions github-actions bot added the go add ONLY when ready to merge, run all tests label Oct 10, 2024
@sven1977 sven1977 added rllib RLlib related issues rllib-docs-or-examples Issues related to RLlib documentation or rllib/examples rllib-newstack rllib-envrunners Issues around the sampling backend of RLlib labels Oct 10, 2024
@sven1977 sven1977 enabled auto-merge (squash) October 11, 2024 10:36