Commit: init planning
Signed-off-by: ismetatabay <[email protected]>
ismetatabay committed Nov 8, 2023
1 parent 2efb9d3 commit f6377e2
Showing 8 changed files with 261 additions and 75 deletions.
@@ -2,3 +2,4 @@ nav:
- index.md
- Tuning localization: localization-tuning
- Tuning perception: perception-tuning
- Tuning planning: planning-tuning
@@ -63,23 +63,25 @@ ensuring optimal performance in the specific conditions of the YTU campus.
please bear in mind that since this change enlarges the NDT input point cloud,
it will require additional processing resources.

!!! note "[`crop_box_filter_measurement_range.param.yaml`](https://github.com/autowarefoundation/autoware_launch/blob/main/autoware_launch/config/localization/crop_box_filter_measurement_range.param.yaml) parameter file:"

```diff
/**:
ros__parameters:
input_frame: "base_link"
output_frame: "base_link"
- min_x: -60.0
+ min_x: -150.0
- max_x: 60.0
+ max_x: 150.0
- min_y: -60.0
+ min_y: -150.0
- max_y: 60.0
+ max_y: 150.0
min_z: -30.0
max_z: 50.0
negative: False
```
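To make the effect of these bounds concrete, here is a minimal numpy sketch of what a crop box filter computes (an illustrative stand-in, not the actual C++ preprocessing node):

```python
import numpy as np

def crop_box_filter(points, min_x=-150.0, max_x=150.0,
                    min_y=-150.0, max_y=150.0,
                    min_z=-30.0, max_z=50.0, negative=False):
    """Keep points inside the box (or outside it when negative is True).

    points: (N, 3) array of x, y, z coordinates in the base_link frame.
    """
    inside = ((points[:, 0] >= min_x) & (points[:, 0] <= max_x) &
              (points[:, 1] >= min_y) & (points[:, 1] <= max_y) &
              (points[:, 2] >= min_z) & (points[:, 2] <= max_z))
    return points[~inside] if negative else points[inside]

cloud = np.array([[10.0, 5.0, 0.0],    # kept: well inside the box
                  [200.0, 0.0, 0.0]])  # dropped: beyond max_x
filtered = crop_box_filter(cloud)
```

Widening `min_x`/`max_x` and `min_y`/`max_y` from 60 to 150 meters keeps more distant points in `filtered`, which is exactly why the NDT input grows.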

<figure markdown>
![ytu-campus-pcd-range](images/ndt-range-150m.png){ align=center }
@@ -124,17 +126,19 @@ ensuring optimal performance in the specific conditions of the YTU campus.
so we will reduce the voxel size to improve localization accuracy.
Feel free to experiment with tuning the voxel size for your own computer setup.

!!! note "[voxel_grid_filter.param.yaml](https://github.com/autowarefoundation/autoware_launch/blob/main/autoware_launch/config/localization/voxel_grid_filter.param.yaml) parameter file:"

```diff
/**:
ros__parameters:
- voxel_size_x: 3.0
+ voxel_size_x: 1.0
- voxel_size_y: 3.0
+ voxel_size_y: 1.0
- voxel_size_z: 3.0
+ voxel_size_z: 1.0
```
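A rough numpy sketch of what voxel grid downsampling does with this parameter (illustrative only; the real filter is implemented in C++):

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Rough voxel grid filter: merge all points falling into the same
    cubic voxel into their centroid. A smaller voxel_size keeps more
    detail for NDT matching, at the cost of more CPU time."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = np.asarray(inverse).ravel()
    counts = np.bincount(inverse).astype(float)
    centroids = np.stack(
        [np.bincount(inverse, weights=points[:, d]) / counts for d in range(3)],
        axis=1)
    return centroids

cloud = np.array([[0.2, 0.2, 0.2],
                  [2.5, 2.5, 2.5],
                  [5.0, 5.0, 5.0]])
coarse = voxel_downsample(cloud, voxel_size=3.0)  # first two points merge
fine = voxel_downsample(cloud, voxel_size=1.0)    # all three points survive
```

With the old 3.0 m voxels the first two points collapse into one centroid; at 1.0 m each lands in its own voxel, so the downsampled cloud is denser.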

<figure markdown>
![voxel-size-1.0](images/voxel-size-1.0.png){ align=center }
@@ -55,24 +55,28 @@ file:
you can set these camera offsets to "0" as the initial value.
Please be careful with the offset array size; it must be equal to your camera count.

!!! note "[roi_sync.param.yaml](https://github.com/autowarefoundation/autoware_launch/blob/main/autoware_launch/config/perception/object_recognition/detection/image_projection_based_fusion/roi_sync.param.yaml) parameter file:"

```diff
- input_offset_ms: [61.67, 111.67, 45.0, 28.33, 78.33, 95.0] # 6 cameras
+ input_offset_ms: [0.0, 0.0, 0.0, 0.0] # 4 cameras
```
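To see why the array length must match the camera count, here is a hypothetical sketch of how per-camera offsets could be applied when pairing ROI messages with a lidar scan (the camera id indexes the array; this is not the fusion node's actual code):

```python
# Hypothetical sketch: one offset entry per camera, indexed by camera id.
input_offset_ms = [0.0, 0.0, 0.0, 0.0]  # 4 cameras -> 4 entries

def roi_matches_cloud(camera_id, roi_stamp_ms, cloud_stamp_ms,
                      tolerance_ms=10.0):
    """Shift the ROI timestamp by that camera's trigger offset, then check
    whether it lands close enough to the point cloud timestamp."""
    corrected = roi_stamp_ms - input_offset_ms[camera_id]
    return abs(corrected - cloud_stamp_ms) <= tolerance_ms
```

An offset array shorter than the camera count would make the lookup for the extra cameras fail outright, which is why the sizes must agree.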

- If you have used different namespaces for your camera and ROI topics,
you will need to add the input topics for camera_info,
image_raw,
and rois messages in the `tier4_perception_component.launch.xml` launch file.

!!! note "[`tier4_perception_component.launch.xml`](https://github.com/autowarefoundation/autoware_launch/blob/main/autoware_launch/launch/components/tier4_perception_component.launch.xml) launch file:"

```diff
- <arg name="image_raw0" default="/sensing/camera/camera0/image_rect_color" description="image raw topic name"/>
+ <arg name="image_raw0" default="<YOUR-CAMERA-TOPIC-NAME>" description="image raw topic name"/>
- <arg name="camera_info0" default="/sensing/camera/camera0/camera_info" description="camera info topic name"/>
+ <arg name="camera_info0" default="<YOUR-CAMERA-INFO-TOPIC-NAME>" description="camera info topic name"/>
- <arg name="detection_rois0" default="/perception/object_recognition/detection/rois0" description="detection rois output topic name"/>
+ <arg name="detection_rois0" default="<YOUR-ROIS-TOPIC-NAME>" description="detection rois output topic name"/>
```

### Tuning ground segmentation

@@ -109,29 +113,35 @@ points on the high-slope roads with default configurations.
However, be cautious when increasing the threshold,
as it may lead to an increase in the number of false negatives.

!!! note "[`ground_segmentation.param.yaml`](https://github.com/autowarefoundation/autoware_launch/blob/main/autoware_launch/config/perception/obstacle_segmentation/ground_segmentation/ground_segmentation.param.yaml) parameter file:"

```diff
- global_slope_max_angle_deg: 10.0
+ global_slope_max_angle_deg: 30.0
```
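A simplified sketch of what a global slope limit means geometrically (inspired by, but not identical to, the scan ground filter's logic):

```python
import math

def is_ground_by_slope(point, global_slope_max_angle_deg):
    """Treat a point as possible ground only if its elevation angle,
    measured from the sensor footprint on the ground plane, stays
    within the global slope limit."""
    x, y, z = point
    slope_deg = math.degrees(math.atan2(z, math.hypot(x, y)))
    return abs(slope_deg) <= global_slope_max_angle_deg

# A road point 10 m ahead and 3 m higher sits at about 16.7 degrees:
# rejected as ground by the old 10-degree limit, accepted at 30 degrees.
steep_road_point = (10.0, 0.0, 3.0)
```

This is also why raising the limit increases false negatives: genuinely elevated obstacle points under 30 degrees can now be mistaken for steep ground.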

- Then we will update the split_height_distance parameter from 0.2 to 0.35 meters.
This adjustment will help in reducing false positive non-ground points,
especially on step-like road surfaces or in cases of misaligned multi-lidar configurations.

!!! note "[`ground_segmentation.param.yaml`](https://github.com/autowarefoundation/autoware_launch/blob/main/autoware_launch/config/perception/obstacle_segmentation/ground_segmentation/ground_segmentation.param.yaml) parameter file:"

```diff
- split_height_distance: 0.2
+ split_height_distance: 0.35
```

- Now, we will change the non_ground_height_threshold value from 0.2 to 0.3 meters.
This will help us in reducing false positive non-ground points,
but it may also decrease the number of true positive non-ground points
that are below this threshold value.

!!! note "[`ground_segmentation.param.yaml`](https://github.com/autowarefoundation/autoware_launch/blob/main/autoware_launch/config/perception/obstacle_segmentation/ground_segmentation/ground_segmentation.param.yaml) parameter file:"

```diff
- non_ground_height_threshold: 0.2
+ non_ground_height_threshold: 0.3
```
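Conceptually, this threshold acts as a height gate on points relative to the locally estimated ground surface, something like the following sketch (illustrative, not the package's actual code):

```python
def is_non_ground(height_above_ground_m, non_ground_height_threshold=0.3):
    """A point counts as non-ground (a potential obstacle) only when it
    rises above the estimated ground surface by more than the threshold.
    Raising it from 0.2 to 0.3 m suppresses more false positives, but
    anything shorter than 0.3 m is now classified as ground."""
    return height_above_ground_m > non_ground_height_threshold

# A 25 cm object is detected with the old 0.2 m threshold, missed at 0.3 m.
```

This makes the trade-off explicit: the same gate that rejects noisy near-ground points also rejects real low obstacles.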

- The following image illustrates the results after these fine-tunings with the ground remover package.

@@ -167,10 +177,12 @@ the false positive points will disappear from the same location.
- Firstly, we will change our object filter method from lanelet_filter to position_filter
to detect objects that are outside the lanelet boundaries in the [`tier4_perception_component.launch.xml`](https://github.com/autowarefoundation/autoware_launch/blob/main/autoware_launch/launch/components/tier4_perception_component.launch.xml) launch file.

!!! note "[`tier4_perception_component.launch.xml`](https://github.com/autowarefoundation/autoware_launch/blob/main/autoware_launch/launch/components/tier4_perception_component.launch.xml) launch file:"

```diff
- <arg name="detected_objects_filter_method" default="lanelet_filter" description="options: lanelet_filter, position_filter"/>
+ <arg name="detected_objects_filter_method" default="position_filter" description="options: lanelet_filter, position_filter"/>
```

- After changing the filter method for objects,
the output of our perception pipeline looks like the image below:
@@ -186,34 +198,40 @@ the false positive points will disappear from the same location.
but we still need to update the filter range
or disable the filter for unknown objects in the [`object_position_filter.param.yaml`](https://github.com/autowarefoundation/autoware_launch/blob/main/autoware_launch/config/perception/object_recognition/detection/object_filter/object_position_filter.param.yaml) file.

!!! note "[`object_position_filter.param.yaml`](https://github.com/autowarefoundation/autoware_launch/blob/main/autoware_launch/config/perception/object_recognition/detection/object_filter/object_position_filter.param.yaml) parameter file:"

```diff
upper_bound_x: 100.0
- lower_bound_x: 0.0
+ lower_bound_x: -100.0
- upper_bound_y: 10.0
+ upper_bound_y: 100.0
- lower_bound_y: -10.0
+ lower_bound_y: -100.0
```
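A minimal sketch of what the position filter does with these bounds (an illustrative stand-in for the real node, using a plain dict per detected object):

```python
def position_filter(objects,
                    upper_bound_x=100.0, lower_bound_x=-100.0,
                    upper_bound_y=100.0, lower_bound_y=-100.0):
    """Keep detected objects whose (x, y) position relative to the ego
    vehicle lies inside the configured rectangle."""
    return [obj for obj in objects
            if lower_bound_x <= obj["x"] <= upper_bound_x
            and lower_bound_y <= obj["y"] <= upper_bound_y]

detections = [{"x": -50.0, "y": 40.0},   # behind-left: kept with new bounds
              {"x": 120.0, "y": 0.0}]    # beyond 100 m ahead: dropped
kept = position_filter(detections)
```

With the default bounds (`lower_bound_x: 0.0`, `upper_bound_y: 10.0`) the first detection would have been discarded, which is why the ranges are widened here.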

- Also, you can simply disable the filter for objects labeled `UNKNOWN`.

!!! note "[`object_position_filter.param.yaml`](https://github.com/autowarefoundation/autoware_launch/blob/main/autoware_launch/config/perception/object_recognition/detection/object_filter/object_position_filter.param.yaml) parameter file:"

```diff
- UNKNOWN : true
+ UNKNOWN : false
```

- After that,
we can update our clustering parameters,
since objects are no longer filtered out with the lanelet2 map.
As we mentioned earlier, we want to detect small objects.
Therefore,
we will decrease the minimum cluster size to 1 in the `voxel_grid_based_euclidean_cluster.param.yaml` file.

!!! note "[`voxel_grid_based_euclidean_cluster.param.yaml`](https://github.com/autowarefoundation/autoware_launch/blob/main/autoware_launch/config/perception/object_recognition/detection/clustering/voxel_grid_based_euclidean_cluster.param.yaml) parameter file:"

```diff
- min_cluster_size: 10
+ min_cluster_size: 1
```
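The effect of the minimum cluster size can be sketched as a simple size gate on the clusters produced by euclidean clustering (illustrative only; the `max_cluster_size` value here is an assumed limit, not taken from the file):

```python
def filter_clusters(clusters, min_cluster_size=1, max_cluster_size=3000):
    """Keep only clusters whose point count lies within the configured
    bounds. With min_cluster_size at 1, even a single-point cluster
    (a small or distant object) survives."""
    return [c for c in clusters
            if min_cluster_size <= len(c) <= max_cluster_size]

clusters = [[(0.0, 0.0, 0.0)],           # 1 point: a tiny object
            [(5.0, 0.0, 0.0)] * 12]      # 12 points: a larger object
small_objects_kept = filter_clusters(clusters, min_cluster_size=1)
```

With the old `min_cluster_size: 10`, the single-point cluster would be discarded before it ever reached shape estimation.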

- After making these changes, our perception output is shown in the following image:

(The remaining changed files are binary and are not displayed.)
