This page provides the iseAuto dataset created specifically for this work. The dataset consists of camera images and LiDAR-projection images, recorded by the iseAuto shuttle's primary camera (FLIR Grasshopper3) and LiDAR (Velodyne VLP-32) sensors, respectively.
We provide two versions of the dataset. The second version contains refinements of the manually labeled annotations in the daytime subsets. The left and right images below show the same frame from v1 and v2, respectively.
The refinement focuses on the consistency of object labeling: the manual labeling was done by several annotators who applied different standards to some small objects, an inconsistency that should be avoided in a manually labeled dataset used for model training. Please note that all the testing results in the paper are based on the v1 version of the iseAuto dataset.
We also provide all raw bag files here. The iseAuto shuttle runs on ROS, so all sensor data were collected as ROS messages and stored in bag files. You can extract the camera and LiDAR data from the bag files and process them to fit the input requirements of your own models; a minimal extraction sketch is shown below.
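As a starting point, the following Python sketch dumps camera frames and LiDAR point clouds from a bag file using the standard rosbag, cv_bridge, and sensor_msgs APIs. The bag path and topic names are placeholders (list the actual topics in a bag with rosbag info), and the output formats are only one possible choice.

# Minimal extraction sketch; topic names are assumptions, check them with `rosbag info`.
import os
import cv2
import numpy as np
import rosbag
import sensor_msgs.point_cloud2 as pc2
from cv_bridge import CvBridge

BAG_PATH = "iseauto.bag"              # path to a downloaded bag file
CAMERA_TOPIC = "/camera/image_raw"    # assumed camera topic
LIDAR_TOPIC = "/velodyne_points"      # assumed LiDAR topic
OUT_DIR = "extracted"

os.makedirs(OUT_DIR, exist_ok=True)
bridge = CvBridge()

with rosbag.Bag(BAG_PATH) as bag:
    for topic, msg, stamp in bag.read_messages(topics=[CAMERA_TOPIC, LIDAR_TOPIC]):
        if topic == CAMERA_TOPIC:
            # Convert the ROS Image message to an OpenCV BGR array and save it as PNG.
            img = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
            cv2.imwrite(os.path.join(OUT_DIR, "cam_%d.png" % stamp.to_nsec()), img)
        else:
            # Read x, y, z, intensity fields from the PointCloud2 message and save as .npy.
            points = np.array(list(pc2.read_points(
                msg, field_names=("x", "y", "z", "intensity"), skip_nans=True)),
                dtype=np.float32)
            np.save(os.path.join(OUT_DIR, "lidar_%d.npy" % stamp.to_nsec()), points)

From the saved images and point clouds you can then apply your own calibration and projection steps to produce model inputs.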
There are md5sum files that contain the checksums of all large files. Once you have downloaded a file, you can check its integrity by:
md5sum -c FILENAME.md5
Make sure the checksum file and the downloaded file stay in the same directory.
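For example, with a hypothetical archive name, a successful check prints:

md5sum -c day_fair.zip.md5
day_fair.zip: OK

If md5sum reports FAILED instead, the download is corrupted or incomplete and the file should be downloaded again.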