2021.1 Release Features
With the 2021.1 release, we introduce a new online web user interface. You can now log in to your account, link a local simulation cluster, and run a simulation. This means you’ll be able to directly upload maps and vehicle assets that you use for your simulations to your account, without needing to host them online somewhere else. You will also be able to easily share content with other registered users both publicly and privately.
Set up a simulation with one of our four runtime templates, then trigger a simulation to run automatically from your browser.
Please note: In order to migrate to the 2021.1 release of the simulator, you will have to create a new account at our new website.
To enable easy scenario generation, we introduce a simple-to-use Visual Scenario Editor with support for traffic vehicle and pedestrian waypoints. The VSE allows you to build a scenario in a GUI editor by placing ego vehicle(s), traffic agents, and objects, then setting behaviors for those agents and objects. You can preview the scenario and save it to run as a simulation.
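Conceptually, a VSE scenario reduces to agents that each follow a list of waypoints with target speeds and optional pauses. The sketch below is only an illustrative model of that idea, not the VSE's actual save format or the simulator's API; all class and field names here are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Waypoint:
    # Position in the map frame (meters) and target speed (m/s).
    x: float
    y: float
    z: float
    speed: float
    # Optional pause at this waypoint, in seconds.
    idle: float = 0.0

@dataclass
class Agent:
    kind: str    # e.g. "npc_vehicle" or "pedestrian" (illustrative labels)
    model: str
    waypoints: list = field(default_factory=list)

# A traffic vehicle that drives through three waypoints,
# slowing down and pausing for two seconds at the second one.
npc = Agent(kind="npc_vehicle", model="Sedan")
npc.waypoints = [
    Waypoint(10.0, 0.0, 20.0, speed=8.0),
    Waypoint(30.0, 0.0, 25.0, speed=5.0, idle=2.0),
    Waypoint(55.0, 0.0, 25.0, speed=8.0),
]
```

In the VSE you express the same structure by clicking waypoints onto the map; the editor handles serialization for you.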
With use of our new end-to-end automated runtime templates, you can execute repeatable simulated scenario test cases on your autonomous driving stack. To help you analyze the results of such test cases, we have added a new test results section with detailed test reports.
You can analyze many types of detected events in the reports. These include stop sign violations, speed limit violations, red light violations, sudden braking, sudden steering, sudden jerk, ego collisions, ego stuck, and low simulator performance. You can also see a video playback of the scenario in the test report.
You can use Add to Library to copy our example public simulations and quickly try these features, or create your own simulations using the different runtime templates for your scenarios.
You can now create environments directly from imported map annotations. This means that after importing an OpenDrive, Lanelet2, Autoware, or Apollo HD map/vector map, you will be able to procedurally generate an environment in Developer Mode, which you can then use for your simulations, just like any other environment.
All sensors used in SVL Simulator are now sensor plugins. Many new sensor plugins are now available on our Asset Store. We have also added the ability to add certain types of noise to sensor data using post-processing effects, such as sun-flare, raindrops, greyscale, and video artifacts.
The LiDAR sensor performance has also been greatly improved by moving calculations to compute shaders.
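Sensor plugins are configured per vehicle through JSON parameter blocks. The fragment below is only an illustrative sketch of what a camera sensor with post-processing noise effects might look like; the specific type names and parameter keys shown are assumptions, so consult each plugin's documentation on the Asset Store for its actual schema.

```json
[
  {
    "type": "Color Camera",
    "name": "Main Camera",
    "params": {
      "Width": 1920,
      "Height": 1080,
      "Frequency": 15,
      "Topic": "/simulator/main_camera",
      "postprocessing": [
        { "type": "SunFlare" },
        { "type": "Rain" }
      ]
    }
  }
]
```

Because sensors are plugins, adding or swapping one is a configuration change rather than a simulator rebuild.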
With our simulator publishing ground truth information for non-ego agents and obstacles, you can test Apollo by module, including prediction and planning. This means that, if you work on just the planning subsystem, you can quickly create scenarios and test your algorithms in isolation, but still as part of an end-to-end test.
For a full list of bug fixes, improvements, and all other changes since the last release, please see the release notes.