# Lane Detection Methods

## Overview

This document describes some of the most common lane detection methods used in the autonomous driving industry.
Lane detection is a crucial task in autonomous driving, as it is used to determine the boundaries of the road and the
vehicle's position within the lane.

## Methods

The methods are grouped into two categories: lane detection methods and multitask detection methods.

!!! note

    The results below were obtained with pre-trained models. Training a model on your own data will yield better
    results.

### Lane Detection Methods

#### CLRerNet

This work introduces LaneIoU, which improves confidence-score accuracy by taking local lane angles into account, and
CLRerNet, a novel detector leveraging LaneIoU.

- **Paper**: [CLRerNet: Improving Confidence of Lane Detection with LaneIoU](https://arxiv.org/abs/2305.08366)
- **Code**: [GitHub](https://github.com/hirotomusiker/CLRerNet)

| Method   | Backbone | Dataset | Confidence | Campus Video                                                | Road Video                                                |
| -------- | -------- | ------- | ---------- | ----------------------------------------------------------- | --------------------------------------------------------- |
| CLRerNet | dla34    | culane  | 0.4        |      |      |
| CLRerNet | dla34    | culane  | 0.1        |      |      |
| CLRerNet | dla34    | culane  | 0.01       |    |    |

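The key idea behind LaneIoU can be sketched in a few lines. The snippet below is an illustrative approximation, not the repository's implementation: it compares two lanes sampled at the same image rows and widens each lane segment by the secant of its local tilt angle, so slanted lanes are matched by their actual on-image footprint. The `half_width` default is an arbitrary illustrative value.

```python
import numpy as np

def lane_iou(xs_a, xs_b, half_width=7.5):
    """Angle-aware IoU between two lanes given as x-coordinates
    sampled at the same evenly spaced image rows (illustrative sketch)."""
    def widths(xs):
        # local slope dx/dy between adjacent rows (unit row spacing);
        # scale half-width by sec(theta) = sqrt(1 + slope^2)
        dx = np.gradient(xs)
        return half_width * np.sqrt(1.0 + dx ** 2)

    wa, wb = widths(xs_a), widths(xs_b)
    lo = np.maximum(xs_a - wa, xs_b - wb)   # left edge of overlap
    hi = np.minimum(xs_a + wa, xs_b + wb)   # right edge of overlap
    inter = np.clip(hi - lo, 0.0, None)     # per-row intersection
    union = 2 * wa + 2 * wb - inter         # per-row union
    return float(inter.sum() / union.sum())
```

Identical lanes score 1.0 and distant lanes score 0.0; a vertical lane uses the base width, while a tilted lane is compared with a proportionally wider footprint.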
#### CLRNet

This work introduces the Cross Layer Refinement Network (CLRNet) to fully utilize both high-level semantic and
low-level detailed features in lane detection.
CLRNet first detects lanes with high-level features and then refines them with low-level details.
Additionally, the ROIGather technique and the Line IoU loss significantly enhance localization accuracy,
outperforming state-of-the-art methods.

- **Paper**: [CLRNet: Cross Layer Refinement Network for Lane Detection](https://arxiv.org/abs/2203.10350)
- **Code**: [GitHub](https://github.com/Turoad/CLRNet)

| Method | Backbone  | Dataset  | Confidence | Campus Video                                                     | Road Video                                                     |
| ------ | --------- | -------- | ---------- | ---------------------------------------------------------------- | -------------------------------------------------------------- |
| CLRNet | dla34     | culane   | 0.2        |            |            |
| CLRNet | dla34     | culane   | 0.1        |            |            |
| CLRNet | dla34     | culane   | 0.01       |          |          |
| CLRNet | dla34     | llamas   | 0.4        |            |            |
| CLRNet | dla34     | llamas   | 0.2        |            |            |
| CLRNet | dla34     | llamas   | 0.1        |            |            |
| CLRNet | resnet18  | llamas   | 0.4        |         |         |
| CLRNet | resnet18  | llamas   | 0.2        |         |         |
| CLRNet | resnet18  | llamas   | 0.1        |         |         |
| CLRNet | resnet18  | tusimple | 0.2        |       |       |
| CLRNet | resnet18  | tusimple | 0.1        |       |       |
| CLRNet | resnet34  | culane   | 0.1        |         |         |
| CLRNet | resnet34  | culane   | 0.05       |       |       |
| CLRNet | resnet101 | culane   | 0.2        |        |        |
| CLRNet | resnet101 | culane   | 0.1        |        |        |

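The Line IoU loss mentioned above treats each lane as a series of thin horizontal segments of fixed half-width and penalizes 1 minus their IoU. The sketch below is a simplified illustration under that reading, not the repository's code: notably, the per-row "intersection" is left unclipped, so non-overlapping predictions still receive a useful (negative) signal.

```python
import numpy as np

def line_iou_loss(pred_xs, gt_xs, half_width=7.5):
    """1 - LineIoU between predicted and ground-truth lane x-coordinates
    sampled at the same rows (illustrative sketch)."""
    lo = np.maximum(pred_xs - half_width, gt_xs - half_width)
    hi = np.minimum(pred_xs + half_width, gt_xs + half_width)
    inter = hi - lo                      # may go negative: no overlap
    union = 4 * half_width - inter       # union of the two segments
    liou = inter.sum() / union.sum()
    return 1.0 - liou
```

A perfect prediction gives a loss of 0; a prediction offset by exactly twice the half-width (segments just touching) gives a loss of 1, and larger offsets push the loss above 1, which keeps gradients informative even for far-off lanes.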
#### FENet

This research introduces Focusing Sampling, Partial Field of View Evaluation, an enhanced FPN architecture,
and a Directional IoU Loss, addressing challenges in precise lane detection for autonomous driving.
Experiments show that Focusing Sampling, which emphasizes the distant details crucial for safety,
significantly improves both benchmark and practical curved/distant lane recognition accuracy over uniform sampling.

- **Paper**: [FENet: Focusing Enhanced Network for Lane Detection](https://arxiv.org/abs/2312.17163)
- **Code**: [GitHub](https://github.com/HanyangZhong/FENet)

| Method   | Backbone | Dataset | Confidence | Campus Video                                                  | Road Video                                                  |
| -------- | -------- | ------- | ---------- | ------------------------------------------------------------- | ----------------------------------------------------------- |
| FENet v1 | dla34    | culane  | 0.2        |        |        |
| FENet v1 | dla34    | culane  | 0.1        |        |        |
| FENet v1 | dla34    | culane  | 0.05       |      |      |
| FENet v2 | dla34    | culane  | 0.2        |        |        |
| FENet v2 | dla34    | culane  | 0.1        |        |        |
| FENet v2 | dla34    | culane  | 0.05       |      |      |
| FENet v2 | dla34    | llamas  | 0.4        |        |        |
| FENet v2 | dla34    | llamas  | 0.2        |        |        |
| FENet v2 | dla34    | llamas  | 0.1        |        |        |
| FENet v2 | dla34    | llamas  | 0.05       |      |      |

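To make the Focusing Sampling idea concrete, here is a minimal sketch (an assumption about the general scheme, not FENet's actual sampling function): instead of placing row anchors uniformly down the image, rows are drawn densely near the top of the frame, where distant lanes occupy only a few pixels, and sparsely near the bottom. The power-law skew and its default exponent are illustrative choices.

```python
import numpy as np

def focusing_sample_rows(img_h, n_rows, power=2.0):
    """Non-uniform row positions in [0, img_h - 1]: dense near the top
    (distant road), sparse near the bottom. power=1.0 is uniform."""
    t = np.linspace(0.0, 1.0, n_rows)
    return (img_h - 1) * t ** power
```

With `power=2.0` on a 320-pixel-high image, the gap between consecutive sampled rows grows steadily from top to bottom, so the distant region receives proportionally more supervision than under uniform sampling.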
### Multitask Detection Methods

#### YOLOPv2

This work proposes an efficient multi-task learning network for autonomous driving that combines traffic object
detection, drivable road area segmentation, and lane detection.
The YOLOPv2 model achieves new state-of-the-art performance in accuracy and speed on the BDD100K dataset,
halving the inference time compared to previous benchmarks.

- **Paper**: [YOLOPv2: Better, Faster, Stronger for Panoptic Driving Perception](https://arxiv.org/abs/2208.11434)
- **Code**: [GitHub](https://github.com/CAIC-AD/YOLOPv2)

| Method  | Campus Video                                     | Road Video                                     |
| ------- | ------------------------------------------------ | ---------------------------------------------- |
| YOLOPv2 |  |  |

#### HybridNets

This work introduces HybridNets, an end-to-end perception network for autonomous driving.
It optimizes segmentation heads and box/class prediction networks using a weighted bidirectional feature network.
HybridNets achieves strong performance on the BDD100K (Berkeley DeepDrive) dataset, outperforming state-of-the-art
methods.

- **Paper**: [HybridNets: End-to-End Perception Network](https://arxiv.org/abs/2203.09035)
- **Code**: [GitHub](https://github.com/datvuthanh/HybridNets)

| Method     | Campus Video                                           | Road Video                                           |
| ---------- | ------------------------------------------------------ | ---------------------------------------------------- |
| HybridNets |  |  |

#### TwinLiteNet

This work introduces TwinLiteNet, a lightweight model designed for drivable area and lane line segmentation in
autonomous driving.

- **Paper**: [TwinLiteNet: An Efficient and Lightweight Model for Driveable Area and Lane Segmentation in Self-Driving Cars](https://arxiv.org/abs/2307.10705)
- **Code**: [GitHub](https://github.com/chequanghuy/TwinLiteNet)

| Method      | Campus Video                                             | Road Video                                             |
| ----------- | -------------------------------------------------------- | ------------------------------------------------------ |
| TwinLiteNet |  |  |

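The multitask models above all emit (at least) a drivable-area mask and a lane mask per frame, and the videos in the tables are produced by blending those masks onto the input. The helper below is a generic visualization sketch with hypothetical names and colors, independent of any of the three repositories:

```python
import numpy as np

def overlay_multitask(image, da_mask, lane_mask, alpha=0.4):
    """Blend a drivable-area mask (green) and a lane mask (red)
    onto an RGB uint8 frame; alpha controls tint strength."""
    out = image.astype(np.float32).copy()
    green = np.array([0.0, 255.0, 0.0])
    red = np.array([255.0, 0.0, 0.0])
    da = da_mask.astype(bool)
    lane = lane_mask.astype(bool)
    out[da] = (1 - alpha) * out[da] + alpha * green      # tint drivable area
    out[lane] = (1 - alpha) * out[lane] + alpha * red    # tint lane pixels
    return out.astype(np.uint8)
```

Lane pixels are tinted after the drivable area so lane markings stay visible where the two masks overlap.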
## Citation

```bibtex
@article{honda2023clrernet,
  title={CLRerNet: Improving Confidence of Lane Detection with LaneIoU},
  author={Hiroto Honda and Yusuke Uchida},
  journal={arXiv preprint arXiv:2305.08366},
  year={2023}
}
```

```bibtex
@InProceedings{Zheng_2022_CVPR,
  author    = {Zheng, Tu and Huang, Yifei and Liu, Yang and Tang, Wenjian and Yang, Zheng and Cai, Deng and He, Xiaofei},
  title     = {CLRNet: Cross Layer Refinement Network for Lane Detection},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2022},
  pages     = {898-907}
}
```

```bibtex
@article{wang&zhong_2024fenet,
  title={FENet: Focusing Enhanced Network for Lane Detection},
  author={Liman Wang and Hanyang Zhong},
  year={2024},
  eprint={2312.17163},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```

```bibtex
@misc{vu2022hybridnets,
  title={HybridNets: End-to-End Perception Network},
  author={Dat Vu and Bao Ngo and Hung Phan},
  year={2022},
  eprint={2203.09035},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```

```bibtex
@INPROCEEDINGS{10288646,
  author={Che, Quang-Huy and Nguyen, Dinh-Phuc and Pham, Minh-Quan and Lam, Duc-Khai},
  booktitle={2023 International Conference on Multimedia Analysis and Pattern Recognition (MAPR)},
  title={TwinLiteNet: An Efficient and Lightweight Model for Driveable Area and Lane Segmentation in Self-Driving Cars},
  year={2023},
  pages={1-6},
  doi={10.1109/MAPR59823.2023.10288646}
}
```