Autonomous vehicle development is one of the top trends in the automotive industry, and the technology behind it keeps evolving to make these vehicles safer. Engineers therefore face new challenges, especially on the way toward Society of Automotive Engineers (SAE) levels 4 and 5. To put autonomous vehicles (AVs) on the road and evaluate the reliability of their technologies, they would have to be driven billions of miles, which takes far too long to achieve without the help of simulation. Moreover, given past real-world AV crashes, a high-fidelity simulator has become an efficient alternative for providing diverse testing scenarios for vehicle control, and it enables safety validation before real road driving. High-resolution virtual environments can be built from the real world using cameras or lidars, so that the simulated scenarios match reality as closely as possible. Virtual environment development also lets us customize and create various urban backgrounds for testing the vehicle. Creating a virtual copy of an existing intelligent system is a common approach nowadays, called a digital twin. In the following, we focus on the use of simulation for an AV shuttle at Tallinn University of Technology.
The iseAuto project is a cooperation between industry and university, with a range of objectives on both sides as well as a very practical outcome. The project started in June 2017, when TalTech and Silberauto Estonia agreed to jointly develop a self-driving car, which would have its public demonstration in September 2018. From the company's side, the purpose was to get involved with self-driving technology in order to stay aware of the future of the automotive industry, and also to gain experience in manufacturing a special-purpose car body, as that is one of the main activities of one of the company's branches.
Vehicle specifications
Measures:
Sensors:
Software:
Simulation has been widely used in vehicle manufacturing, particularly for mechanical behavior and dynamic analysis. AVs, however, need more than that due to their nature: simulating various complex environments and scenarios that include other road users, with different sensor combinations and configurations, enables us to verify their decision-making algorithms. One of the most popular robotic simulation platforms is Gazebo. It integrates with ROS and provides physics engines and various sensor modules suitable for autonomous systems. Nevertheless, Gazebo lacks the features of modern game engines such as Unreal and Unity, which give the power to create complex virtual environments with realistic rendering.
On the other hand, CARLA and SVL are modern open-source simulators built on game engines, Unreal and Unity respectively, and both have good compatibility with our AV stack, Autoware. Although comparing the two is beyond the scope of this discussion, we selected SVL as our simulator because of its compatibility with our terrain generator, which is Unity.
The above figure shows a full map of the simulation workflow and the relation between Autoware and the simulator. The vehicle 3D models and the virtual environment, created inside Unity, are imported into the simulator. The simulator allows customizing the environment to create different scenarios, such as adding or removing other road users, placing traffic systems, and adjusting the time of day and the weather of the scene. Virtual sensors then provide information for perceiving the environment. This information is transferred via a ROS bridge to our control software platform, where our perception algorithms use it for localization and detection. The perception results feed the Autoware planning section, which produces the control commands for the shuttle. These control commands are sent back to the simulator via the ROS bridge to navigate the vehicle inside the simulator.
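This closed loop can be sketched schematically in plain Python. The message types, thresholds, and toy detection logic below are simplified stand-ins for the actual ROS messages and Autoware perception/planning modules, chosen only to illustrate the perceive → plan → command cycle:

```python
from dataclasses import dataclass
from typing import List, Tuple

# Stand-in message types; in the real system these are ROS messages
# exchanged with the simulator over the ROS bridge.
@dataclass
class SensorFrame:
    points: List[Tuple[float, float, float]]  # simplified lidar returns (x, y, z)

@dataclass
class Obstacle:
    distance: float  # metres ahead of the shuttle

@dataclass
class ControlCommand:
    linear_velocity: float  # m/s
    steering_angle: float   # rad

def perceive(frame: SensorFrame) -> List[Obstacle]:
    """Toy 'detection': any point in a 10 m forward corridor is an obstacle."""
    return [Obstacle(distance=x)
            for x, y, z in frame.points
            if 0.0 < x < 10.0 and abs(y) < 1.5]

def plan(obstacles: List[Obstacle], cruise_speed: float = 5.0) -> ControlCommand:
    """Toy 'planning': stop if an obstacle is closer than 5 m, else cruise."""
    if any(o.distance < 5.0 for o in obstacles):
        return ControlCommand(linear_velocity=0.0, steering_angle=0.0)
    return ControlCommand(linear_velocity=cruise_speed, steering_angle=0.0)

# One tick of the loop: the simulator would apply this command and
# return the next SensorFrame over the ROS bridge.
frame = SensorFrame(points=[(3.0, 0.2, 0.1), (40.0, 0.0, 0.0)])
command = plan(perceive(frame))
```

In the sketch, the nearby point triggers a stop command, while the far point alone would leave the shuttle cruising.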
The iseAuto 3D model and its lidar sensors are illustrated in the figure below. A Velodyne VLP-16 and a VLP-32 are installed at the top back and top front of the vehicle, respectively. Furthermore, two Robosense Bpearl units are installed on the left and right sides of the vehicle. Finally, one Robosense LiDAR-16 is installed at the front bottom of the vehicle to cover the front blind zone. This lidar configuration provides good point cloud coverage around the vehicle for perception purposes.
The 3D mesh model of the vehicle is imported into Unity to define physics components such as the collider and wheel actuation, and to assign other features such as lights and materials for appearance. The built vehicle exported from Unity is then used in the simulator. All the sensor configurations are defined via a JSON file inside the simulator. Terrain generation is discussed in the next section.
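As an illustration, a sensor entry in the SVL configuration file might look like the following fragment. The field names follow the SVL JSON convention; the numeric values, topic, and transform here are placeholders for illustration, not the iseAuto's actual calibration:

```json
[
  {
    "type": "Lidar",
    "name": "Lidar-TopFront",
    "params": {
      "LaserCount": 32,
      "MinDistance": 0.5,
      "MaxDistance": 100.0,
      "RotationFrequency": 10,
      "Topic": "/points_raw",
      "Frame": "velodyne"
    },
    "transform": {
      "x": 0.0, "y": 2.3, "z": 1.0,
      "pitch": 0.0, "yaw": 0.0, "roll": 0.0
    }
  }
]
```

Each lidar on the shuttle gets one such entry, with its own mounting transform relative to the vehicle origin.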
In order to create a virtual environment as a testbed for simulating the shuttle, we selected an aerial mapping approach. A drone is used to capture images of the real environment, and several software steps then convert them into a Unity terrain. The images are captured along a grid-based flight path, which ensures that they cover a subject from different sides. Taking the aerial photos is one of the most important steps in the mapping process, as it significantly affects both the outcome and the amount of work needed to process the images. The images are georeferenced by the drone: the onboard IMU records the orientation of each picture so that they can later be stitched together and used for photogrammetric processing. Third-party software aligns the captured pictures and creates a dense point cloud from them. Once the dense point cloud is created, the points must be segmented and classified in order to separate unwanted objects and vegetation from the point cloud data. The figure below shows the three main steps in generating the Unity terrain from geospatial data.
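The classification step can be illustrated with a toy height-threshold filter in Python. Production pipelines use far more robust ground-filtering algorithms (for example cloth-simulation or morphological filters); the thresholds and point format below are assumptions made purely for illustration:

```python
from typing import List, Tuple

Point = Tuple[float, float, float]  # (x, y, z) in metres

def classify_points(
    points: List[Point],
    ground_max_z: float = 0.2,      # assumed: at or below this height -> ground
    vegetation_min_z: float = 0.5,  # assumed: at or above this height -> vegetation/objects
) -> Tuple[List[Point], List[Point], List[Point]]:
    """Split a point cloud into ground, above-ground, and uncertain points
    using a naive per-point height threshold."""
    ground, above, uncertain = [], [], []
    for p in points:
        z = p[2]
        if z <= ground_max_z:
            ground.append(p)
        elif z >= vegetation_min_z:
            above.append(p)
        else:
            uncertain.append(p)
    return ground, above, uncertain

ground, above, uncertain = classify_points(
    [(0.0, 0.0, 0.05), (1.0, 2.0, 1.8), (3.0, 1.0, 0.3)]
)
```

After this kind of separation, the ground points can be meshed into the terrain while vegetation and other unwanted objects are removed or replaced with dedicated assets.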
The final map built in Unity is now ready to be loaded into the SVL simulator.
Using a high-fidelity simulator, we can now simulate different scenarios close to reality to evaluate the performance and safety of the control algorithm. To define these scenarios, the SVL simulator provides a Python API for spawning objects such as NPC (non-player character) cars and pedestrians inside the virtual environment, each with its own motion plan.
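A minimal scenario script using the SVL Python API might look like the sketch below. The map and agent names are placeholders, not the actual iseAuto assets, and `run_scenario` is only defined rather than executed, since it requires the `lgsvl` package and a running simulator; the waypoint helper is plain Python:

```python
def straight_waypoints(start_x, y, z, speed, count=5, spacing=5.0):
    """Build (position, speed) pairs along a straight line, e.g. for
    feeding an NPC a simple waypoint-following motion plan."""
    return [((start_x + i * spacing, y, z), speed) for i in range(count)]

def run_scenario(host="127.0.0.1", port=8181):
    """Sketch of an SVL scenario: connect, load a map, spawn the ego
    shuttle and one NPC car, then run for a fixed time. Requires a
    running SVL simulator; asset names below are placeholders."""
    import lgsvl  # imported here so the helper above works without SVL installed

    sim = lgsvl.Simulator(host, port)
    sim.load("ExampleMap")  # placeholder map name

    ego_state = lgsvl.AgentState()
    ego_state.transform = sim.get_spawn()[0]
    sim.add_agent("ExampleEgoVehicle", lgsvl.AgentType.EGO, ego_state)  # placeholder

    npc_state = lgsvl.AgentState()
    npc_state.transform = sim.get_spawn()[0]
    npc = sim.add_agent("Sedan", lgsvl.AgentType.NPC, npc_state)
    npc.follow_closest_lane(True, 5.0)  # drive along the nearest lane at 5 m/s

    sim.run(15.0)  # simulate 15 seconds

waypoints = straight_waypoints(0.0, 0.0, 0.0, speed=3.0)
```

With the simulator running and the ROS bridge connected to Autoware, such a script reproduces situations like the stopped-NPC scenario shown in the next figure.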
The above-left figure shows the shuttle inside the simulator stopped behind an NPC car, while the right figure shows, at the same moment, the point cloud of all lidars derived from the current scene in the environment.