From c2cc2c6505b5f0ad42b5be46ace74efad0b77148 Mon Sep 17 00:00:00 2001
From: maverickg00se <164596138+maverick-g00se@users.noreply.github.com>
Date: Mon, 1 Apr 2024 20:11:54 +0800
Subject: [PATCH 1/3] Update README.md with lifecycle node info and fix minor typos.

Signed-off-by: maverick-g00se <mavg00se@gmail.com>
Signed-off-by: maverick-g00se
---
 README.md | 26 +++++++++++++++++++-------
 1 file changed, 19 insertions(+), 7 deletions(-)

diff --git a/README.md b/README.md
index c93e630c..89f000ac 100644
--- a/README.md
+++ b/README.md
@@ -14,9 +14,21 @@ Packages that infer and react to the presence of obstacles from sensor inputs.
A ROS 2 node that subscribes to `LaserScan` messages and publishes obstacles to `/rmf_obstacles`.
-The node is implemented as an`rclcpp::lifecycle_node` node where upon activation, it calibrates the surroundings based on the range values in the initial few `LaserScan` messages.
+The node is implemented as an `rclcpp::lifecycle_node` node where upon activation, it calibrates the surroundings based on the range values in the initial few `LaserScan` messages.
This essentially becomes the "obstacle-free" configuration.
-Subsequently, any changes to the surroundings is detected as an obstacle.
+Subsequently, any changes to the surroundings are detected as an obstacle.
+
+Since this node is implemented as a `rclcpp::lifecycle_node`, before running, it is important to configure and activate the respective node as follows:
+```
+#to configure the node
+ros2 lifecycle set /laserscan_obstacle_detector configure
+```
+
+```
+#to activate the node
+ros2 lifecycle set /laserscan_obstacle_detector activate
+```
+you can read more about `rclcpp::lifecycle_node` here: [ROS2 Lifecycle Node](https://design.ros2.org/articles/node_lifecycle.html)
To run
```
@@ -28,7 +40,7 @@ The node accepts the following parameters
| Parameter Name | Description | Default Value |
| --- | --- | --- |
| `scan_topic_name` | The topic over which `LaserScan` messages are published. It is strongly recommended to [filter the scan](http://wiki.ros.org/laser_filters) to remove out-of-range rays before passing it to this node. | `/lidar/scan` |
-| `range_threshold` | For each ray, if the difference in range measurement between the latest `LaserScan` and the calibrated one is grater than this value, a new obstacle is assumed. | `1.0` meter |
+| `range_threshold` | For each ray, if the difference in range measurement between the latest `LaserScan` and the calibrated one is greater than this value, a new obstacle is assumed. | `1.0` meter |
| `min_obstacle_size` | The minimum perceived size of an object for it to be considered an obstacle. | `0.75` meter |
| `level_name` | The level(floor) name on which the scanner exists. | `L1` |
| `calibration_sample_count` | The number of initial `LaserScan` messages to use for calibrating the "obstacle-free" configuration. | `10` |
@@ -47,7 +59,7 @@ To launch as a standalone node,
ros2 launch rmf_human_detector_oakd human_detection.launch.xml blob_path:=
```
-THe node has several configurable parameters documented in the [launch file](rmf_human_detector_oakd/launch/human_detector.launch.xml).
+The node has several configurable parameters documented in the [launch file](rmf_human_detector_oakd/launch/human_detector.launch.xml).
The most important of which is `blob_path` as it holds the absolute path to the NN model for inference.
See `depthai` documentation for more information on how the various pre-trained models can be obtained.
It is recommended to use the [blobconverter](https://github.com/luxonis/blobconverter/) tool to obtain the `mobilenet-ssd_openvino_2021.4_6shave.blob` blob for inference.
@@ -59,7 +71,7 @@ For more information on setup and troubleshooting see [here](rmf_human_detector_
## rmf_human_detector
![](../media/rmf_human_detector.gif)
-A ROS 2 node that subscribes to `sensor_msgs::Image` messages published by a monocular camera and runs `Yolo-V4` to detect the presence of humans. The relative pose of the humans with respect to the camera frame is estimated based on heuristics that can be configured through ROS 2 params.
+A ROS 2 node that subscribes to `sensor_msgs::Image` messages published by a monocular camera and runs `Yolo-V4` to detect the presence of humans. The relative pose of the humans with respect to the camera frame is estimated based on heuristics that can be configured through ROS 2 parameters.
The use case for this node would be to detect crowds from existing CCTV cameras or vision sensors in the facility.
@@ -72,8 +84,8 @@ ros2 launch rmf_human_detector human_detector_launch.py
## rmf_obstacle_ros2
The `rmf_obstacle_ros2` package contains ROS 2 nodes that react to the presence of obstacles.
-At present the `lane_blocker_node` is available which subscribes to `/rmf_obstacles`, checks whether
-any of the obstacles intersects with any of the lanes across fleet navigation graphs.
+At present the `lane_blocker_node` is available which subscribes to `/rmf_obstacles`, and checks whether
+any of the obstacles intersect with any of the lanes across fleet navigation graphs.
If a new intersection is determined, the lanes for the corresponding fleets are closed.
Previously closed lanes are also opened once the obstacles no longer intersect with the lanes.

From 83c40c85ae799cd1f542f2e9d47dde96f28dadce Mon Sep 17 00:00:00 2001
From: maverick-g00se
Date: Tue, 2 Apr 2024 01:12:06 +0800
Subject: [PATCH 2/3] Update README.md

Signed-off-by: maverick-g00se
---
 README.md | 36 +++++++++++++++++++++++-------------
 1 file changed, 23 insertions(+), 13 deletions(-)

diff --git a/README.md b/README.md
index 89f000ac..13622b84 100644
--- a/README.md
+++ b/README.md
@@ -1,16 +1,18 @@
# rmf_obstacle_detectors
+
![](https://github.com/open-rmf/rmf_obstacle_detectors/workflows/build/badge.svg)
![](https://github.com/open-rmf/rmf_obstacle_detectors/workflows/style/badge.svg)

Packages that infer and react to the presence of obstacles from sensor inputs.
- - [rmf_human_detector](#rmf_human_detector)
- - [rmf_human_detector_oakd](#rmf_human_detector_oakd)
- - [rmf_obstacle_detector_laserscan](#rmf_obstacle_detector_laserscan)
- - [rmf_obstacle_ros2](#rmf_obstacle_ros2)
+
+- [rmf_human_detector](#rmf_human_detector)
+- [rmf_human_detector_oakd](#rmf_human_detector_oakd)
+- [rmf_obstacle_detector_laserscan](#rmf_obstacle_detector_laserscan)
+- [rmf_obstacle_ros2](#rmf_obstacle_ros2)

## rmf_obstacle_detector_laserscan
-![](../media/rmf_obstacle_detector_laserscan.gif)
+![](../media/rmf_obstacle_detector_laserscan.gif)

A ROS 2 node that subscribes to `LaserScan` messages and publishes obstacles to `/rmf_obstacles`.
@@ -18,7 +20,16 @@ The node is implemented as an `rclcpp::lifecycle_node` node where upon activatio
This essentially becomes the "obstacle-free" configuration.
Subsequently, any changes to the surroundings are detected as an obstacle.
-Since this node is implemented as a `rclcpp::lifecycle_node`, before running, it is important to configure and activate the respective node as follows:
+To run
+
+```
+ros2 run rmf_obstacle_detector_laserscan laserscan_detector
+```
+
+> Note: The node can also be loaded into a ROS 2 component container as a plugin (`LaserscanDetector`)
+
+After running, since this node is an `rclcpp::lifecycle_node`, you will need to configure and then activate it as shown below:
+
```
#to configure the node
ros2 lifecycle set /laserscan_obstacle_detector configure
@@ -28,13 +39,8 @@ ros2 lifecycle set /laserscan_obstacle_detector configure
```

```
#to activate the node
ros2 lifecycle set /laserscan_obstacle_detector activate
```
-you can read more about `rclcpp::lifecycle_node` here: [ROS2 Lifecycle Node](https://design.ros2.org/articles/node_lifecycle.html)
-To run
-```
-ros2 run rmf_obstacle_detector_laserscan laserscan_detector
-```
->Note: The node can also be loaded into a ROS 2 component container as a plugin (`LaserscanDetector`)
+You can read more about `rclcpp::lifecycle_node` here: [ROS2 Lifecycle Node Design](https://design.ros2.org/articles/node_lifecycle.html)

The node accepts the following parameters
| Parameter Name | Description | Default Value |
@@ -46,6 +52,7 @@ The node accepts the following parameters
| `calibration_sample_count` | The number of initial `LaserScan` messages to use for calibrating the "obstacle-free" configuration. | `10` |

## rmf_human_detector_oakd
+
![](../media/rmf_human_detector_oakd.gif)

A ROS 2 node that detects humans via on-chip-inference on `OAK-D` cameras and publishes the detections over `/rmf_obstacles` as `rmf_obstacle_msgs::Obstacles` message.
@@ -55,6 +62,7 @@ The node can be run either as a component within a ROS 2 container or as a stand
The component plugin is `rmf_human_detector_oakd::HumanDetector` and can be loaded into a `ComponentManager` via `ros2 component load rmf_human_detector_oakd::HumanDetector`.

To launch as a standalone node,
+
```bash
ros2 launch rmf_human_detector_oakd human_detection.launch.xml blob_path:=
```
@@ -63,12 +71,12 @@ The node has several configurable parameters documented in the [launch file](rmf
The most important of which is `blob_path` as it holds the absolute path to the NN model for inference.
See `depthai` documentation for more information on how the various pre-trained models can be obtained.
It is recommended to use the [blobconverter](https://github.com/luxonis/blobconverter/) tool to obtain the `mobilenet-ssd_openvino_2021.4_6shave.blob` blob for inference.
-
To visualize the detection frames, set `debug:=True`. Note: Frames will only be visualized when a human is actively detected.

For more information on setup and troubleshooting see [here](rmf_human_detector_oakd/README.md)

## rmf_human_detector
+
![](../media/rmf_human_detector.gif)

A ROS 2 node that subscribes to `sensor_msgs::Image` messages published by a monocular camera and runs `Yolo-V4` to detect the presence of humans. The relative pose of the humans with respect to the camera frame is estimated based on heuristics that can be configured through ROS 2 parameters.

The use case for this node would be to detect crowds from existing CCTV cameras or vision sensors in the facility.
@@ -82,6 +90,7 @@ ros2 launch rmf_human_detector human_detector_launch.py
```
## rmf_obstacle_ros2
+
The `rmf_obstacle_ros2` package contains ROS 2 nodes that react to the presence of obstacles.
At present the `lane_blocker_node` is available which subscribes to `/rmf_obstacles`, and checks whether
any of the obstacles intersect with any of the lanes across fleet navigation graphs.
If a new intersection is determined, the lanes for the corresponding fleets are closed.
Previously closed lanes are also opened once the obstacles no longer intersect with the lanes.

To run:
+
```bash
ros2 run rmf_obstacle_ros2 lane_blocker_node
```

From ba0e3dda15104bf7e168b4fac4823d0355ae7e2d Mon Sep 17 00:00:00 2001
From: maverick-g00se
Date: Thu, 25 Jul 2024 15:29:25 +0800
Subject: [PATCH 3/3] Fix indentation

---
 README.md | 11 -----------
 1 file changed, 11 deletions(-)

diff --git a/README.md b/README.md
index 59d4b8c6..86e03189 100644
--- a/README.md
+++ b/README.md
@@ -1,17 +1,14 @@
# rmf_obstacle_detectors
-
![](https://github.com/open-rmf/rmf_obstacle_detectors/workflows/build/badge.svg)
![](https://github.com/open-rmf/rmf_obstacle_detectors/workflows/style/badge.svg)

Packages that infer and react to the presence of obstacles from sensor inputs.
-
- [rmf_human_detector](#rmf_human_detector)
- [rmf_human_detector_oakd](#rmf_human_detector_oakd)
- [rmf_obstacle_detector_laserscan](#rmf_obstacle_detector_laserscan)
- [rmf_obstacle_ros2](#rmf_obstacle_ros2)

## rmf_obstacle_detector_laserscan
-
![](../media/rmf_obstacle_detector_laserscan.gif)

A ROS 2 node that subscribes to `LaserScan` messages and publishes obstacles to `/rmf_obstacles`.
@@ -21,13 +18,11 @@ The node is implemented as an `rclcpp::lifecycle_node` node where upon activatio
This essentially becomes the "obstacle-free" configuration.
Subsequently, any changes to the surroundings are detected as an obstacle.

To run
-
```
ros2 run rmf_obstacle_detector_laserscan laserscan_detector
```

To configure and Activate
-
```
#to configure
ros2 lifecycle set /laserscan_obstacle_detector configure
@@ -50,7 +45,6 @@ The node accepts the following parameters
| `calibration_sample_count` | The number of initial `LaserScan` messages to calibrate the "obstacle-free" configuration. | `10` |

## rmf_human_detector_oakd
-
![](../media/rmf_human_detector_oakd.gif)

A ROS 2 node that detects humans via on-chip-inference on `OAK-D` cameras and publishes the detections over `/rmf_obstacles` as `rmf_obstacle_msgs::Obstacles` message.
@@ -60,7 +54,6 @@ The node can be run either as a component within a ROS 2 container or as a stand
The component plugin is `rmf_human_detector_oakd::HumanDetector` and can be loaded into a `ComponentManager` via `ros2 component load rmf_human_detector_oakd::HumanDetector`.

To launch as a standalone node,
-
```bash
ros2 launch rmf_human_detector_oakd human_detection.launch.xml blob_path:=
```

To visualize the detection frames, set `debug:=True`. Note: Frames will only be visualized when a human is actively detected.

For more information on setup and troubleshooting see [here](rmf_human_detector_oakd/README.md)

## rmf_human_detector
-
![](../media/rmf_human_detector.gif)

A ROS 2 node that subscribes to `sensor_msgs::Image` messages published by a monocular camera and runs `Yolo-V4` to detect the presence of humans. The relative pose of the humans with respect to the camera frame is estimated based on heuristics that can be configured through ROS 2 parameters.

The use case for this node would be to detect crowds from existing CCTV cameras or vision sensors in the facility.

To run:
-
```bash
ros2 launch rmf_human_detector human_detector_launch.py
```

## rmf_obstacle_ros2
-
The `rmf_obstacle_ros2` package contains ROS 2 nodes that react to the presence of obstacles.
At present the `lane_blocker_node` is available which subscribes to `/rmf_obstacles`, and checks whether
any of the obstacles intersect with any of the lanes across fleet navigation graphs.
If a new intersection is determined, the lanes for the corresponding fleets are closed.
Previously closed lanes are also opened once the obstacles no longer intersect with the lanes.

To run:
-
```bash
ros2 run rmf_obstacle_ros2 lane_blocker_node
```
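For quick reference, the commands below are a minimal sketch of the end-to-end workflow that the patched README describes: start the laserscan detector, drive it through its lifecycle transitions, and run the lane blocker so that detected obstacles close intersecting lanes. The node names, topic, and parameters are the ones stated in the README above; the parameter values shown are simply the documented defaults and the lifecycle node is assumed to keep its default name `/laserscan_obstacle_detector`, so adjust these for your setup.

```bash
# Terminal 1: start the laserscan-based detector
# (overrides shown are the documented defaults)
ros2 run rmf_obstacle_detector_laserscan laserscan_detector --ros-args \
  -p scan_topic_name:=/lidar/scan -p range_threshold:=1.0 -p level_name:=L1

# Terminal 2: configure and then activate the lifecycle node so it calibrates
# the "obstacle-free" configuration and starts publishing to /rmf_obstacles
ros2 lifecycle set /laserscan_obstacle_detector configure
ros2 lifecycle set /laserscan_obstacle_detector activate
ros2 lifecycle get /laserscan_obstacle_detector

# Optionally inspect the published detections
ros2 topic echo /rmf_obstacles

# Terminal 3: react to the detections by closing lanes that intersect obstacles
ros2 run rmf_obstacle_ros2 lane_blocker_node
```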