Cloud Simulation

Cloud simulation lets you scale up scenario simulation for broader testing and verification.

You can run simulations on cloud-based GPU instances and analyze the results under Test Results. Simulations on local clusters run on your workstation or laptop, letting you debug and run simulations interactively; simulations on cloud clusters run fully automatically, headless and non-interactively, requiring only a browser and an internet connection, which makes it possible to scale up scenario testing. Additionally, test reports for cloud simulations support visualization and playback of the sensor data recorded during the scenario.

Video of how to set up and run Cloud Simulations


Running a Cloud Simulation

In this section we will create a cloud cluster and run an example "Python API" runtime template simulation in the cloud.

  1. Add the Straight2LaneOpposing map, the Lincoln2017MKZ vehicle, and all sensors needed for the Apollo 5.0 (full analysis) sensor configuration on this vehicle from the Store to My Library (more details on how to do this are here).

  2. Create a cloud cluster by following the instructions here. Name the cluster "cloud1".

  3. Add the Cloud-Test: NHTSA - EOV_S_25_20.py simulation, available under the "Available from Others" view in the Simulations tab:

    add-to-my-simulations

  4. Select the cloud1 cluster on the General pane:

    select-cluster

  5. Click Next. On the Test case pane, confirm that the Map and Vehicle settings selected in the web user interface form match those shown in the Python Script:

    test-case-pane

    Note: If your Python script uses the environment variable LGSVL__VEHICLE_0 to set the vehicle, you can choose any Apollo 5.0-compatible vehicle sensor configuration from My Library. We recommend using Lincoln2017MKZ with the Apollo 5.0 (full analysis) sensor configuration for a better analytics report and visualization of sensor data in test results.
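The environment-variable fallback described in the note can be sketched with the standard library alone. The actual test scripts read LGSVL__VEHICLE_0 through the third-party `environs` package (`env.str`); the `resolve_vehicle` helper below is a hypothetical stdlib equivalent for illustration only:

```python
import os

# Sketch: resolve the ego vehicle sensor configuration from the
# LGSVL__VEHICLE_0 environment variable, falling back to a default name.
# The real scripts use environs' env.str() for the same purpose.
def resolve_vehicle(default="Lincoln2017MKZ (Apollo 5.0)"):
    return os.environ.get("LGSVL__VEHICLE_0", default)

print(resolve_vehicle())
```

When the variable is unset, the default sensor configuration name is used; the web UI overrides it by exporting the variable into the script's environment.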

    IMPORTANT: The Python script must define the location of the starting point (spawn point) of the ego vehicle as well as the location of the destination the ego vehicle is intended to drive to.

    The start position is provided as an AgentState object passed as an argument to sim.add_agent. For the NHTSA - EOV_S_25_20.py test case, the spawn point is defined as follows:

       # spawn EGO in the 2nd to right lane
       egoState = lgsvl.AgentState()
       # A point close to the desired lane was found in the Unity Editor;
       # map_point_on_lane returns the position and orientation of the
       # closest lane to that point.
       egoState.transform = sim.map_point_on_lane(lgsvl.Vector(-1.6, 0, -65))
       ego = sim.add_agent(env.str("LGSVL__VEHICLE_0", "Lincoln2017MKZ (Apollo 5.0)"), lgsvl.AgentType.EGO, egoState)
    

    To change the starting point, provide a different transform for the AgentState. The coordinates of the point are expected to be in the world coordinates of the Unity scene.

    The destination point is defined as follows:

       destination = egoState.position + 135 * forward
       dv.setup_apollo(destination.x, destination.z, modules)
    

    The destination is a point 135 meters ahead of the starting position. It is passed to Apollo through the Dreamview API using the dv.setup_apollo() function, which takes the top-down coordinates of the destination point in Apollo's coordinate system. Note that the x-axes in Unity and Apollo are aligned, but the y-axis in Apollo maps to the z-axis in Unity. To change the destination, simply pass another set of coordinates; however, there must be a valid route on the road between the start and destination points, otherwise Apollo will disregard the destination and the vehicle will not move.
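The axis mapping and the 135-meter offset can be sketched in plain Python. `Vec3` and `apollo_destination` below are hypothetical names for illustration; the real script uses lgsvl.Vector and derives `forward` from the spawn transform's rotation:

```python
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float  # up axis in Unity
    z: float

def apollo_destination(start, forward, distance):
    """Compute the (x, y) pair expected by dv.setup_apollo().

    Unity's x axis maps to Apollo's x axis; Unity's z axis maps
    to Apollo's y axis (both are top-down ground-plane coordinates).
    """
    dest = Vec3(start.x + distance * forward.x,
                start.y + distance * forward.y,
                start.z + distance * forward.z)
    return (dest.x, dest.z)  # Apollo (x, y) = Unity (x, z)

# Example: ego spawned near (-1.6, 0, -65), facing straight down Unity's +z axis
print(apollo_destination(Vec3(-1.6, 0.0, -65.0), Vec3(0.0, 0.0, 1.0), 135.0))
# -> (-1.6, 70.0)
```

The returned pair is what would be handed to dv.setup_apollo(destination.x, destination.z, modules) in the test script.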

  6. Click Next. On the Autopilot pane:

    autopilot-pane

    Select Apollo 5.0 for the Autopilot.

    Leave the Docker Image unchanged to use the default image. If you want to add your own modifications to Apollo 5.0 and use a custom image, follow the instructions for creating a registry and your image here.

  7. Click Next. On the Publish pane, click Make private to save (and publish) the simulation to your library. The simulation will be private and only accessible from your account.

    publish-pane

  8. You should now see the Simulations view, with the newly added Cloud-Test: NHTSA - EOV_S_25_20.py simulation:

    simulation-added

    The simulation status should indicate Offline as shown.

    To run the simulation you just created, click Run Simulation.

    You should see the status change to Scheduled:

    scheduled

    Once cloud resources are available to run the simulation, the status will change to Connecting and then Starting. The first time you run a simulation on a cloud cluster, it can take several minutes for the cloud resources to become available. Please be patient.

    starting

    Once the images have finished loading and the simulation has started to run, the status will change to Running. The first time you run a simulation on a cloud cluster, it can take several minutes for the containers to load. Again, please be patient.

    running

    When the simulation has completed, the status will change to Stopping, then to Idle and finally to Offline when the post-processing of the simulation test results starts.

Visualizing Test Results of Cloud Simulation

  1. Once the simulation has finished, you can view its results by clicking Test Results, and then View on the entry for the simulation:

    test-results-list

  2. Since this was a cloud simulation, more detailed data is available under the Visualization button:

    test-results

  3. Click the Visualization button to bring up an interactive visualization of the sensor data recorded during the simulation:

    visualization

    See here for more details on using this visualization tool.

  4. You need not wait for post-processing to complete before viewing the results of the simulation, but the video from the Video Recording Sensor and the Visualization displays will not be available until processing has fully finished. This may take several minutes depending on your simulation. Please be patient.

GPU Instance Specifications

The GPU instances used on AWS for cloud simulations have the following specifications:

Simulator: g4dn.xlarge

  • 2 Hyperthreaded cores (4 virtual CPUs); 2.5 GHz sustained clock
  • 16 GiB memory
  • 10 GiB SSD root volume
  • 125 GiB NVMe volume: used for caching downloaded assets
  • NVIDIA T4 GPU (roughly equivalent to a GeForce RTX 2070 Super, except it has 16GiB of GPU memory instead of 8GiB)

Apollo 5.0: g3.4xlarge

  • 8 Hyperthreaded cores (16 virtual CPUs); 2.7 GHz sustained clock
  • 122 GiB memory: 61 GiB for Apollo + 61 GiB RAM disk for CyberRT bag storage
  • 22 GiB SSD root volume
  • NVIDIA M60 GPU (roughly equivalent to a GeForce GTX 980, except it has 8GiB of GPU memory instead of 4GiB and twice as many CUDA cores)

Limitations on Cloud Simulation

  • Only Apollo 5.0 can be used as the autopilot.
  • Only one simulator instance is allowed per cloud cluster.
  • Only one autopilot can be used per simulation.
  • Simulations may not run for more than 10 minutes.
  • Limited cloud resources are available globally; only two simulations can be running at the same time.
  • Your test case script must explicitly enable the Recorder module to record the sensor data for the Visualization display.
  • Parallel cloud simulations are not directly supported. However, you can work around this by creating multiple cloud clusters and attaching a different cluster to each cloud simulation.
  • Distributed cloud simulation is not supported currently.
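The Recorder requirement above means the list of Dreamview modules handed to dv.setup_apollo() must include "Recorder". The sketch below illustrates the pattern; module names other than "Recorder" are typical of an Apollo 5.0 setup but may differ in your configuration, and `check_recorder_enabled` is a hypothetical helper, not part of the LGSVL API:

```python
# Sketch: the Dreamview module list passed to dv.setup_apollo() must
# include "Recorder" for sensor data to be recorded for Visualization.
modules = [
    "Localization",
    "Transform",
    "Routing",
    "Prediction",
    "Planning",
    "Control",
    "Recorder",  # required for the Visualization display in test results
]

def check_recorder_enabled(module_list):
    """Fail fast if a test script forgot to enable the Recorder module."""
    if "Recorder" not in module_list:
        raise ValueError('Enable the "Recorder" module to record sensor data')
    return True

print(check_recorder_enabled(modules))  # -> True
```

Adding such a guard near the top of a test script surfaces a missing Recorder module before cloud resources are spent on a run whose Visualization data would be empty.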