Description
Congratulations on your great work, and thanks for sharing the code.
In the paper "Rethinking the Open-Loop Evaluation of End-to-End Autonomous Driving in nuScenes", the authors design an MLP-based method that takes only the ego vehicle's states (e.g., past trajectory, velocity) as input and directly outputs the ego vehicle's future trajectory, without using any perception or prediction information such as camera images or LiDAR.
Surprisingly, such a simple method achieves state-of-the-art end-to-end planning performance on the nuScenes dataset, reducing the average L2 error by about 30%.
They conclude that the current open-loop evaluation scheme for end-to-end autonomous driving in nuScenes may need to be rethought.
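For context, my rough understanding of their setup is something like the minimal PyTorch sketch below; the exact feature set, horizons, and layer sizes are my own assumptions for illustration, not taken from their released code:

```python
import torch
import torch.nn as nn


class EgoStateMLP(nn.Module):
    """Toy MLP planner: ego states in, future ego trajectory out.

    Dimensions are illustrative assumptions, not the authors' exact
    configuration: 4 features per past step (x, y, speed, acceleration)
    plus a 3-dim one-hot high-level command (left / straight / right).
    """

    def __init__(self, past_steps=4, future_steps=6, hidden=512):
        super().__init__()
        self.future_steps = future_steps
        in_dim = past_steps * 4 + 3
        out_dim = future_steps * 2  # (x, y) waypoints
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, ego_states, command):
        # ego_states: (B, past_steps * 4), command: (B, 3)
        x = torch.cat([ego_states, command], dim=-1)
        return self.net(x).view(-1, self.future_steps, 2)
```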
What do you think of this experiment? https://github.com/E2E-AD/AD-MLP
Is there a problem with their experimental results, or do we indeed need a new open-loop/closed-loop evaluation framework?