====== Self-driving Vehicle ======
Self-driving vehicles can fall into many different categories and levels of automation, as described in previous chapters. One of the higher-level self-driving vehicles available nowadays is a Level 4 self-driving robot bus or automated vehicle (AV shuttle), also called a last-mile autonomous vehicle. The following describes an AV shuttle designed and developed at Tallinn University of Technology in cooperation with industrial partners in Estonia.

===== Vehicle Description =====

The iseAuto project was a cooperation project between industry and university with a range of objectives on both sides as well as a very practical outcome. The project started in June 2017, when TalTech and Silberauto Estonia agreed to jointly develop a self-driving bus that had its public demonstration in September 2018. The purpose from the company's side was to get involved with self-driving technology, to stay aware of the future of the automotive industry, and to gain experience in manufacturing a special-purpose car body, as that is one of the main activities of one of the company's branches.

{{ :en:ros:simulations:iseautod.jpg?600 |}}

**Vehicle specifications:**
  * Main engine power 47 kW
  * Battery pack 16 kWh

**Measures:**
  * Height 2.4 m
  * Length 3.5 m
  * Width 1.5 m

**Sensors:**
  * Front-top LiDAR Velodyne VLP-32
  * Back-top LiDAR Velodyne VLP-16
  * Side LiDAR RoboSense Bpearl x 2
  * Front-bottom LiDAR RoboSense LiDAR-16
  * Cameras

{{ :en:ros:simulations:screenshot_from_2021-03-31_11-19-05.png?800 |}}

The iseAuto 3D model and its lidar sensors are illustrated in the figure above. A Velodyne VLP-16 and a VLP-32 are installed at the top back and top front of the vehicle, respectively. Furthermore, two RoboSense Bpearl lidars are installed on the left and right sides of the vehicle. Finally, one RoboSense LiDAR-16 is installed at the front bottom of the vehicle to cover the front blind zone. This lidar configuration provides good point cloud coverage around the vehicle for perception purposes.

**Software:**
  * Autoware self-driving software stack (based on ROS)

===== Simulating the Self-driving Vehicle =====
TalTech iseAuto has a simulation model that can be run in the Gazebo environment, making use of Gazebo's physics engine and sensor plugins. Nevertheless, Gazebo lacks the features of modern game engines such as Unreal and Unity, which provide the ability to create complex virtual environments with realistic rendering.

{{ :et:ros:simulations:iseauto.gif |Gazebo simulation for iseAuto}}

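A quick way to verify that a Gazebo run of the model is producing sensor data is to subscribe to the simulated lidar output and count the points arriving per scan. The following is a minimal sketch, assuming the simulation publishes sensor_msgs/PointCloud2 messages on a topic named /points_raw; the actual topic name depends on how the iseAuto Gazebo model is configured.

<code python>
#!/usr/bin/env python
# Minimal sanity check for the simulated lidar output in Gazebo.
# The topic name "/points_raw" is an assumption; change it to the topic
# published by the iseAuto Gazebo model in your setup.
import rospy
from sensor_msgs.msg import PointCloud2

def on_cloud(msg):
    # width * height gives the number of points in one scan
    rospy.loginfo("Received %d points", msg.width * msg.height)

if __name__ == "__main__":
    rospy.init_node("lidar_check")
    rospy.Subscriber("/points_raw", PointCloud2, on_cloud)
    rospy.spin()
</code>
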
On the other hand, CARLA and SVL are modern open-source simulators based on the game engines Unreal and Unity, respectively, and both have good compatibility with the Autoware AV stack. Although a detailed comparison of the two is beyond the scope of this discussion, SVL was selected here because it is built on Unity, the same engine used for terrain generation. You can find detailed information on the SVL simulator and how to install the latest version here: https://www.svlsimulator.com/.

{{ :en:ros:simulations:screenshot_from_2021-03-31_11-03-59.png?800 |}}

The above figure shows a full map of the simulation workflow and the relation between Autoware and the simulator. The vehicle 3D models and the virtual environment, which are created inside Unity, are imported into the simulator. The simulator allows customizing the environment to create different scenarios, such as adding or removing other road users, placing traffic systems, and adjusting the time of day and the weather of the scene. Virtual sensors then provide information for the perception of the environment. This information is transferred via a ROS bridge to the control software platform, where it is used by the perception algorithms for localization and detection. The perception results are used in the Autoware planning section, which produces the control commands for the vehicle. These control commands are sent back to the simulator via the ROS bridge to navigate the vehicle inside the simulator.

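To make the data flow over the ROS bridge concrete, the sketch below stands in for the Autoware side of the loop: it subscribes to the simulated lidar and publishes a constant velocity command back towards the simulator. The topic names and the use of a plain geometry_msgs/TwistStamped command are assumptions for illustration only; in the real setup Autoware's perception and planning nodes produce the actual vehicle commands.

<code python>
#!/usr/bin/env python
# Illustrative stand-in for the control loop that runs over the ROS bridge.
# Topic names and message types are assumptions for this sketch; the real
# pipeline uses Autoware's perception and planning nodes instead.
import rospy
from sensor_msgs.msg import PointCloud2
from geometry_msgs.msg import TwistStamped

class DummyController(object):
    def __init__(self):
        self.cmd_pub = rospy.Publisher("/vehicle_cmd_velocity",
                                       TwistStamped, queue_size=1)
        rospy.Subscriber("/points_raw", PointCloud2, self.on_cloud)

    def on_cloud(self, cloud):
        # A real controller would run localization, detection and planning
        # here; this sketch just commands a slow constant forward speed.
        cmd = TwistStamped()
        cmd.header.stamp = rospy.Time.now()
        cmd.twist.linear.x = 1.0  # m/s
        self.cmd_pub.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("dummy_controller")
    DummyController()
    rospy.spin()
</code>
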
The 3D mesh model of the vehicle should be imported into Unity to define physics components such as the collider and wheel actuation, and to assign other features such as lights and materials for appearance. The vehicle built and exported from Unity is then used in the simulator. Finally, all the sensor configurations are defined via a JSON file inside the simulator.

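In that JSON file, each sensor is typically described by a type, a set of parameters and a mounting transform on the vehicle. The sketch below composes one such entry in Python; the key names, parameter set and mounting values are placeholders, so check the SVL documentation for the exact schema of your simulator version.

<code python>
# Illustrative sketch of an SVL-style sensor configuration entry.
# Key names, parameters and the mounting pose are placeholders; consult
# the SVL documentation for the exact schema of your simulator version.
import json

front_top_lidar = {
    "type": "Lidar",
    "name": "FrontTopLidar",      # Velodyne VLP-32 on the front top
    "params": {
        "LaserCount": 32,
        "MinDistance": 0.5,
        "MaxDistance": 100.0,
        "RotationFrequency": 10,
        "Topic": "/points_raw",   # assumed ROS topic name
    },
    # Mounting pose relative to the vehicle origin (metres / degrees)
    "transform": {"x": 0.0, "y": 2.3, "z": 1.2,
                  "pitch": 0.0, "yaw": 0.0, "roll": 0.0},
}

with open("iseauto_sensors.json", "w") as f:
    json.dump([front_top_lidar], f, indent=2)
</code>
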
==== Virtual Environment Creation ====
To create a virtual environment as a testbed for simulating the autonomous vehicle, an aerial mapping approach is used to map the desired environment. A drone captures images of the real environment, and multiple software tools are then used to convert them into a Unity terrain. The images are captured along a grid-based flight path, as shown in the image below, which ensures that the captured images cover the subject from different sides.

{{ :en:ros:simulations:droondeploy.png?600 |}}

Taking the aerial photos is one of the most important steps in the mapping process, as it significantly affects the outcome and the amount of work needed to process the images. The captured images are georeferenced by the drone: the onboard IMU records the orientation of each picture so that the pictures can later be stitched together and used for photogrammetric processing. Third-party software aligns the pictures and creates a dense point cloud from them. Once the dense point cloud is created, segmentation and classification of the points are needed to separate unwanted objects and vegetation from the point cloud data. The figure below shows the three main steps to generate the Unity terrain from geospatial data.

{{ :en:ros:simulations:screenshot_from_2021-03-31_11-56-35.png?800 |}}

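The point cloud cleaning step can also be reproduced with open-source tools. The sketch below uses the Open3D library purely as an example (not necessarily the tool used in the project) to downsample the dense cloud, remove stray points, and split the dominant ground plane from vegetation and other objects; the file names and thresholds are placeholders.

<code python>
# Hedged sketch of cleaning a photogrammetry point cloud with Open3D.
# Open3D is used only as an example tool; file names and thresholds are
# placeholders to be tuned for the actual data.
import open3d as o3d

# Load the dense point cloud exported by the photogrammetry software
pcd = o3d.io.read_point_cloud("dense_cloud.ply")

# Reduce the point count to something a game engine can handle
pcd = pcd.voxel_down_sample(voxel_size=0.1)

# Drop isolated stray points left over from image matching
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Separate the dominant ground plane from vegetation and other objects
_, inliers = pcd.segment_plane(distance_threshold=0.2,
                               ransac_n=3, num_iterations=1000)
ground = pcd.select_by_index(inliers)
objects = pcd.select_by_index(inliers, invert=True)

o3d.io.write_point_cloud("ground.ply", ground)
o3d.io.write_point_cloud("objects.ply", objects)
</code>
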
The final map built in Unity is now ready to be loaded into the SVL simulator.

==== Run the Simulation ====
{{ :en:ros:simulations:screenshot_from_2021-03-31_12-37-54.png?800 |}}

The left part of the figure above shows the vehicle inside the simulator, stopped behind an NPC car, while the right part shows the point cloud of all the lidars derived from the current scene in the environment.

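Scenario runs like this one can also be scripted through the simulator's Python API instead of being set up by hand in the web interface. The following is a minimal sketch, assuming the simulator listens on its default API port, the ROS bridge runs on its default port 9090, and the placeholder asset names "iseAuto_map" and "iseAuto" correspond to a map and a vehicle registered in your SVL library.

<code python>
# Minimal scenario sketch using the SVL (LGSVL) Python API.
# "iseAuto_map" and "iseAuto" are placeholder asset names; replace them
# with the map and vehicle configurations available in your simulator.
import lgsvl

sim = lgsvl.Simulator("127.0.0.1", 8181)

# Load the Unity-built map, or reset it if it is already loaded
if sim.current_scene == "iseAuto_map":
    sim.reset()
else:
    sim.load("iseAuto_map")

# Spawn the ego vehicle at the first predefined spawn point
state = lgsvl.AgentState()
state.transform = sim.get_spawn()[0]
ego = sim.add_agent("iseAuto", lgsvl.AgentType.EGO, state)

# Connect the vehicle to the ROS bridge used by Autoware
ego.connect_bridge("127.0.0.1", 9090)

# Run the simulation for 30 seconds of simulated time
sim.run(30)
</code>
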
{{ :en:ros:simulations:iseauto5.gif |}}

==== Summary ====
Simulation is an increasingly important aspect of developing robots and intelligent vehicles such as self-driving cars. Its importance grows in parallel with the complexity of the control algorithms and the integration of mechatronic systems in the vehicle. Simulation helps to drastically reduce development costs and to increase the overall safety of the end result. Early prototypes may easily cause accidents if they are tested in real traffic, but if thousands of miles are first driven in simulation and most of the software bugs and electromechanical incompatibilities are eliminated, it is much safer to start open-road tests. Therefore, simulation has become an almost mandatory part of today's self-driving vehicle development process.