
Add benchmark tests #973

Open · sloretz wants to merge 3 commits into rolling from sloretz__add_benchmark_tests
Conversation

@sloretz (Contributor) commented Jul 19, 2022

This PR adds benchmarks that measure how quickly different types of entities are handled. This would be useful for testing performance improvements.

The benchmark tests are skipped unless AMENT_RUN_PERFORMANCE_TESTS is set, which is the same behavior as ament_add_google_benchmark_test.

WIP because this needs a new rosdep key, python3-pytest-benchmark, for https://pypi.org/project/pytest-benchmark/.
Requires ros/rosdistro#33918
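
For reviewers unfamiliar with pytest-benchmark, a minimal sketch of the general pattern is below. It is illustrative only: the test body and node name are hypothetical, and this PR gates the tests at the CMake level (see the review thread below) rather than with a pytest marker.

```python
import os

import pytest

import rclpy


@pytest.mark.skipif(
    'AMENT_RUN_PERFORMANCE_TESTS' not in os.environ,
    reason='Performance tests only run when AMENT_RUN_PERFORMANCE_TESTS is set')
def test_create_destroy_node(benchmark):
    # Hypothetical benchmark: how quickly can a node be created and destroyed?
    rclpy.init()
    try:
        def bm():
            node = rclpy.create_node('benchmark_node')
            node.destroy_node()

        # pytest-benchmark calls bm() repeatedly and records timing statistics.
        benchmark(bm)
    finally:
        rclpy.shutdown()
```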

@fujitatomoya (Collaborator) left a comment

Benchmark tests would be really useful. Do we have any plan to integrate this kind of benchmark into CI/CD so that it can detect performance regressions?

sloretz marked this pull request as ready for review on July 25, 2022 17:59
@sloretz (Contributor, Author) commented Jul 25, 2022

> Do we have any plan to integrate this kind of benchmark into CI/CD so that it can detect performance regressions?

There are benchmark jobs run for each release on build.ros2.org. Here's the one for Rolling: https://build.ros2.org/view/Rci/job/Rci__benchmark_ubuntu_jammy_amd64/. It looks like it runs nightly, but I think it wouldn't show an issue until the package got released, and I don't think there's an automatic comparison with past jobs.

@sloretz (Contributor, Author) commented Jul 25, 2022

@ros-pull-request-builder retest this please

@sloretz (Contributor, Author) commented Jul 25, 2022

CI (build: --packages-up-to rclpy test: --packages-select rclpy)

  • Linux Build Status
  • Linux-aarch64 Build Status
  • Windows Build Status

And one with the benchmark tests enabled (build: -DAMENT_RUN_PERFORMANCE_TESTS=ON --packages-up-to rclpy test: --packages-select rclpy)

  • Linux Build Status

@sloretz (Contributor, Author) commented Jul 27, 2022

Benchmark tests one more time with ros2/ci#674

  • Linux: Build Status

@sloretz (Contributor, Author) commented Jul 27, 2022

CI LGTM!

@jacobperron (Member) left a comment

One minor comment and a question. Otherwise, LGTM

APPEND_ENV AMENT_PREFIX_PATH=${ament_index_build_path}
PYTHONPATH=${CMAKE_CURRENT_BINARY_DIR}
TIMEOUT 120
WERROR ON
)
# Skip benchmark tests by default
Member

This comment looks like it's in the wrong place. Maybe at L209?

Contributor Author

Moved the comment to the correct spot in 5a0f651.

executor.spin_until_future_complete(result_fut)
assert GoalStatus.STATUS_SUCCEEDED == result_fut.result().status

benchmark(bm)
Member

In the other tests we are asserting the result of benchmark; should we do that here?

Contributor Author

Looking at the assertions now, I don't think they were adding value. I removed them in 76fd188.

executor.spin_until_future_complete(fut)
assert fut.result() # Raises if there was an error

benchmark(bm)
Member

Same here, should we assert?

Contributor Author

Removed assertions in 76fd188.
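
For context on the assertion question above: pytest-benchmark returns the benchmarked callable's return value from `benchmark(...)`, so a test can still check correctness without asserting inside the timed function. A minimal sketch of that pattern (the function and value below are hypothetical, not taken from this PR):

```python
def test_add(benchmark):
    # Hypothetical function under test; pytest-benchmark times repeated calls to it.
    def bm():
        return 1 + 2

    # benchmark() hands back the return value of the benchmarked call,
    # so correctness can still be asserted outside the timed code.
    result = benchmark(bm)
    assert result == 3
```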

Signed-off-by: Shane Loretz <[email protected]>
Signed-off-by: Shane Loretz <[email protected]>
Signed-off-by: Shane Loretz <[email protected]>
sloretz force-pushed the sloretz__add_benchmark_tests branch from 1527a9e to 76fd188 on October 14, 2022 01:51
@sloretz (Contributor, Author) commented Oct 14, 2022

CI re-run

  • Linux Build Status
  • Linux-aarch64 Build Status
  • Windows Build Status

Benchmark job: Build Status

Labels: enhancement (New feature or request)
3 participants