The lane-line sensor extracts and publishes data describing the position and curvature of road lines on the lane which the EGO vehicle currently occupies. It uses ground truth data from map annotations, which can optionally be corrected. You can read more about how annotation data is used in the Input Data section.
Data format
The lane-line sensor currently only supports the Cyber bridge, and publishes data in a format compatible with Apollo 5.0.
Each message follows the perception_lane format and, aside from the header, contains data about one or more lines in a format compatible with CameraLaneLine. Fields populated and published by the simulator are described in the table below.
|Field|Description|
|:--|:--|
|`type`|Describes the color and shape of the line (white/yellow, solid/dotted)|
|`pos_type`|Describes the position of the line in relation to the EGO vehicle (right/left, ego/adjacent/third etc.)|
|`curve_camera_coord`|The line curve in sensor space, defined as a third-degree polynomial *|
* See the Curve Definition section for details.
Please note that even though the lane-line sensor visualization in the simulator shows detected lines as an overlay on a color image, the image itself is not part of the published data and is only shown as a visual aid.
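As a sketch of the payload shape, the fields in the table above can be mirrored with plain dataclasses. The field names follow Apollo's `CameraLaneLine` and `LaneLineCubicCurve` protos; the concrete enum strings and coefficient values below are illustrative only:

```python
from dataclasses import dataclass

@dataclass
class LaneLineCubicCurve:
    a: float              # constant term
    b: float              # linear term
    c: float              # quadratic term
    d: float              # cubic term
    longitude_min: float  # start of the curve's valid x range (meters)
    longitude_max: float  # end of the curve's valid x range (meters)

@dataclass
class CameraLaneLine:
    type: str      # color/shape, e.g. "WHITE_SOLID", "YELLOW_DASHED" (illustrative)
    pos_type: str  # position relative to EGO, e.g. "EGO_LEFT" (illustrative)
    curve_camera_coord: LaneLineCubicCurve  # curve in sensor space

# One published line: a solid white line about 1.8 m to the EGO's left.
line = CameraLaneLine(
    type="WHITE_SOLID",
    pos_type="EGO_LEFT",
    curve_camera_coord=LaneLineCubicCurve(1.8, 0.01, 0.0, 0.0, 0.0, 40.0),
)
```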
Curve Definition
Each lane curve is described as a third-degree polynomial in a coordinate space centered on the sensor, with the x axis pointing towards its front and the y axis pointing towards its right side. This coordinate space uses only two dimensions, ignoring altitude. The image below shows the described coordinate space.
Each curve is defined by six values (a, b, c, d, longitude_min, longitude_max) as defined in LaneLineCubicCurve. On the referenced image, longitude_min and longitude_max mark the start and end of the curve's longitudinal range.
Given these parameters, the curve function f(x) is defined as:

f(x) = a + b * x + c * x^2 + d * x^3 for x ∈ [longitude_min, longitude_max]
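Evaluating the curve from its six values is straightforward. A minimal sketch, with coefficient values made up for illustration:

```python
def eval_lane_curve(a, b, c, d, x, longitude_min, longitude_max):
    """Evaluate f(x) = a + b*x + c*x^2 + d*x^3 within the curve's valid range."""
    if not (longitude_min <= x <= longitude_max):
        raise ValueError("x is outside the curve's longitudinal range")
    return a + b * x + c * x ** 2 + d * x ** 3

# Illustrative coefficients: a line ~1.8 m to the right, bending gently left.
y = eval_lane_curve(a=1.8, b=0.0, c=-0.001, d=0.0,
                    x=10.0, longitude_min=0.0, longitude_max=50.0)
print(y)  # 1.7
```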
It's important to note that the polynomial coefficients are calculated via polynomial regression, which means the final function is an approximation and might not match the input data (red dots) perfectly. This is most noticeable on sharp curves.
Input data for the polynomial regression is sampled directly from map annotations for the given environment, trimmed to the sensor's field of view (
FOV on the referenced image) and defined visibility range (see JSON parameters for more information). Details about how the points are sampled can be found in the input data section.
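The regression step described above can be sketched as an ordinary least-squares cubic fit over the sampled points. This is a stdlib-only illustration of the technique, not the simulator's actual implementation:

```python
def fit_cubic(points):
    """Least-squares cubic fit y ≈ a + b*x + c*x^2 + d*x^3.

    Solves the 4x4 normal equations (A^T A w = A^T y for the Vandermonde
    matrix A) with Gaussian elimination. `points` is a list of (x, y)
    samples, e.g. points resampled from lane annotations.
    """
    n = 4
    ata = [[sum(x ** (i + j) for x, _ in points) for j in range(n)]
           for i in range(n)]
    aty = [sum(y * x ** i for x, y in points) for i in range(n)]
    # Forward elimination with partial pivoting.
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(ata[r][col]))
        ata[col], ata[pivot] = ata[pivot], ata[col]
        aty[col], aty[pivot] = aty[pivot], aty[col]
        for row in range(col + 1, n):
            f = ata[row][col] / ata[col][col]
            for k in range(col, n):
                ata[row][k] -= f * ata[col][k]
            aty[row] -= f * aty[col]
    # Back substitution.
    coeffs = [0.0] * n
    for row in range(n - 1, -1, -1):
        s = sum(ata[row][k] * coeffs[k] for k in range(row + 1, n))
        coeffs[row] = (aty[row] - s) / ata[row][row]
    return coeffs  # [a, b, c, d]

# Samples taken from y = 1 + 0.5*x - 0.02*x^2; the fit recovers the
# coefficients closely (with d ≈ 0).
samples = [(x, 1 + 0.5 * x - 0.02 * x ** 2) for x in range(0, 11)]
a, b, c, d = fit_cubic(samples)
```

With noisy or sharply curving input, the recovered cubic is only an approximation of the sampled points, which is the behavior noted above.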
Input Data
By default, the lane-line sensor will use lines defined in the map's annotation data. All of the spatial data and metadata for published lines will be based on this, which eliminates issues related to image processing, but requires precise annotations. Since annotation keypoints can be relatively sparse, each segment is resampled along its length before the polynomial regression step.
Sampling density depends on the SampleDelta parameter, which can be defined in the JSON parameters. An example of how the annotation data is resampled before final processing is shown in the image below. Points used for curve approximation (red dots) are based on white and yellow annotation lines.
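The resampling described above can be sketched as walking the annotation polyline and emitting a point every SampleDelta meters. This is a simplified stdlib illustration, not the simulator's code:

```python
import math

def resample_polyline(points, sample_delta):
    """Return points spaced `sample_delta` meters apart along a 2D polyline.

    `points` are (x, y) annotation keypoints; the endpoints are kept so no
    part of the annotated segment is lost.
    """
    out = [points[0]]
    since_last = 0.0  # arc length covered since the last emitted point
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        pos = 0.0
        while seg - pos > sample_delta - since_last:
            pos += sample_delta - since_last
            since_last = 0.0
            t = pos / seg
            out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
        since_last += seg - pos  # leftover rolls into the next segment
    if out[-1] != points[-1]:
        out.append(points[-1])  # keep the final annotation keypoint
    return out

# A sparse 10 m segment resampled at 2.5 m spacing yields 5 evenly
# spaced points for the regression step.
dense = resample_polyline([(0.0, 0.0), (10.0, 0.0)], sample_delta=2.5)
```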
In some cases, annotation data imported from external sources might not match the environment perfectly. If you have no option to improve the alignment, you might want to try using the automated correction option. This uses image processing and will attempt to align annotation keypoints with lines detected on road intensity maps. Please note that the results may vary based on the intensity map's quality.
To use automatic line correction, open the lane-line detector tool and run it with the `Generate Line Sensor Data` option enabled. An example of how offset map annotation data can look before (blue and yellow lines) and after automated correction (green lines) is shown below. Note that this correction only affects what the sensor sees and does not change the annotation data itself.
Testing with Apollo
The lane-line sensor currently only supports CyberRT message types. If you want to verify that lane-line data is properly detected and received by Apollo, follow the steps below.
- Follow the instructions for running Apollo 5.0 with SVL Simulator. Don't start any simulation yet, but make sure that the bridge is running.
- Using the Web User Interface, add the map that you want to use for testing to your library. You can either choose a map from the Store, or create and upload your own.
- Note: your map can use raw or corrected annotation data. See input data section for more details.
- Add the `Lane Line Sensor` plugin to your library.
- Add any vehicle to your library.
- Create a new sensor configuration for your vehicle.
- In the configuration options, add the lane-line sensor to the list of used sensors. Make sure that the `Frame` properties are not empty.
- Select `CyberRT` as the bridge used by this configuration.
- Create a new simulation using the map, vehicle and sensor configuration that you just created. Select `Apollo 5.0` as the autopilot and provide the bridge IP address.
- Start the simulation.
- In Apollo's Docker environment, run `cyber_monitor`.
- The lane-line sensor topic should report received data. You can inspect the details by selecting the topic name.