Commit 19888fc (1 parent: 68ee545)

feat: add informations on lane detection models (#560)

Signed-off-by: Barış Zeren <bzeren1819@gmail.com>

2 files changed: 184 additions & 0 deletions
File tree

docs/how-to-guides/index.md (1 addition & 0 deletions)

@@ -19,6 +19,7 @@
- [Applying Clang-Tidy to ROS packages](others/applying-clang-tidy-to-ros-packages.md)
- [Defining temporal performance metrics on components](others/defining-temporal-performance-metrics.md)
- [An example procedure for adding and evaluating a new node](others/an-example-procedure-for-adding-and-evaluating-a-new-node.md)
+ - [Lane Detection Methods](others/lane-detection-methods.md)

TODO: Write the following contents.
docs/how-to-guides/others/lane-detection-methods.md (new file, 183 additions & 0 deletions)
# Lane Detection Methods

## Overview

This document describes some of the most common lane detection methods used in the autonomous driving industry.
Lane detection is a crucial task in autonomous driving, as it is used to determine the boundaries of the road and the
vehicle's position within the lane.

## Methods

This document covers the methods in two categories: lane detection methods and multitask detection methods.
!!! note

    The results were obtained using pre-trained models. Training the model with your own data will yield better
    results.
### Lane Detection Methods

#### CLRerNet

This work introduces LaneIoU, which improves confidence score accuracy by considering local lane angles, and CLRerNet,
a novel detector leveraging LaneIoU.

- **Paper**: [CLRerNet: Improving Confidence of Lane Detection with LaneIoU](https://arxiv.org/abs/2305.08366)
- **Code**: [GitHub](https://github.com/hirotomusiker/CLRerNet)

| Method   | Backbone | Dataset | Confidence | Campus Video                                             | Road Video                                               |
| -------- | -------- | ------- | ---------- | -------------------------------------------------------- | -------------------------------------------------------- |
| CLRerNet | dla34    | culane  | 0.4        | ![type:video](https://www.youtube.com/embed/bfuHuoembGg) | ![type:video](https://www.youtube.com/embed/9r_IEg_IkJ8) |
| CLRerNet | dla34    | culane  | 0.1        | ![type:video](https://www.youtube.com/embed/XVonhGmvt8Q) | ![type:video](https://www.youtube.com/embed/5P6-yqCPAns) |
| CLRerNet | dla34    | culane  | 0.01       | ![type:video](https://www.youtube.com/embed/Sp599_HyegU) | ![type:video](https://www.youtube.com/embed/2tz9gXNIjqs) |
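The Confidence column above is the score threshold applied to the detector's lane proposals: lowering it keeps more lane candidates at the cost of more false positives. A minimal sketch of that filtering step (the `proposals` structure here is hypothetical, not CLRerNet's actual output format):

```python
# Minimal sketch of confidence-threshold filtering for lane proposals.
# `proposals` is a hypothetical list of (confidence, lane) pairs; a real
# detector such as CLRerNet produces richer structures.

def filter_lanes(proposals, threshold):
    """Keep only lane proposals whose confidence meets the threshold."""
    return [lane for lane in proposals if lane[0] >= threshold]

proposals = [
    (0.95, "ego-left lane"),
    (0.42, "ego-right lane"),
    (0.08, "spurious curb reflection"),
]

# A high threshold keeps only confident lanes; a low one keeps almost everything.
print(len(filter_lanes(proposals, 0.4)))   # strict
print(len(filter_lanes(proposals, 0.01)))  # permissive
```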
#### CLRNet

This work introduces the Cross Layer Refinement Network (CLRNet) to fully utilize high-level semantic and low-level
detailed features in lane detection.
CLRNet detects lanes with high-level features and refines them with low-level details.
Additionally, the ROIGather technique and Line IoU loss significantly enhance localization accuracy,
outperforming state-of-the-art methods.

- **Paper**: [CLRNet: Cross Layer Refinement Network for Lane Detection](https://arxiv.org/abs/2203.10350)
- **Code**: [GitHub](https://github.com/Turoad/CLRNet)

| Method | Backbone  | Dataset  | Confidence | Campus Video                                             | Road Video                                               |
| ------ | --------- | -------- | ---------- | -------------------------------------------------------- | -------------------------------------------------------- |
| CLRNet | dla34     | culane   | 0.2        | ![type:video](https://www.youtube.com/embed/n2HpKlOKGvc) | ![type:video](https://www.youtube.com/embed/K6-AHSraopc) |
| CLRNet | dla34     | culane   | 0.1        | ![type:video](https://www.youtube.com/embed/BWdCEFy6k3w) | ![type:video](https://www.youtube.com/embed/dHrzsIotVWA) |
| CLRNet | dla34     | culane   | 0.01       | ![type:video](https://www.youtube.com/embed/5iNo2VMD9os) | ![type:video](https://www.youtube.com/embed/nl4Lthr1mT8) |
| CLRNet | dla34     | llamas   | 0.4        | ![type:video](https://www.youtube.com/embed/IxzJ7TfaSrk) | ![type:video](https://www.youtube.com/embed/IxDdOI5efl0) |
| CLRNet | dla34     | llamas   | 0.2        | ![type:video](https://www.youtube.com/embed/vaR8tAgB1Ew) | ![type:video](https://www.youtube.com/embed/vc8-kslVi34) |
| CLRNet | dla34     | llamas   | 0.1        | ![type:video](https://www.youtube.com/embed/LkDeZQOItqw) | ![type:video](https://www.youtube.com/embed/r-O92vxSXuw) |
| CLRNet | resnet18  | llamas   | 0.4        | ![type:video](https://www.youtube.com/embed/nqSupblM89o) | ![type:video](https://www.youtube.com/embed/py1S5fDIC5E) |
| CLRNet | resnet18  | llamas   | 0.2        | ![type:video](https://www.youtube.com/embed/rrNoXck6YLc) | ![type:video](https://www.youtube.com/embed/KHaS9GXueJg) |
| CLRNet | resnet18  | llamas   | 0.1        | ![type:video](https://www.youtube.com/embed/J-gU1xbba28) | ![type:video](https://www.youtube.com/embed/5U3O0iaUWF4) |
| CLRNet | resnet18  | tusimple | 0.2        | ![type:video](https://www.youtube.com/embed/HfZYdADQsPM) | ![type:video](https://www.youtube.com/embed/syse16SpafY) |
| CLRNet | resnet18  | tusimple | 0.1        | ![type:video](https://www.youtube.com/embed/o3w3wL_f-GY) | ![type:video](https://www.youtube.com/embed/O2HwNfTJvSQ) |
| CLRNet | resnet34  | culane   | 0.1        | ![type:video](https://www.youtube.com/embed/6IgkfJsCjWA) | ![type:video](https://www.youtube.com/embed/LgU3mQniP8c) |
| CLRNet | resnet34  | culane   | 0.05       | ![type:video](https://www.youtube.com/embed/eLLcPrEpy84) | ![type:video](https://www.youtube.com/embed/fPoP3uFpzRw) |
| CLRNet | resnet101 | culane   | 0.2        | ![type:video](https://www.youtube.com/embed/FODj_M-RRC4) | ![type:video](https://www.youtube.com/embed/5fG8ApvTFD4) |
| CLRNet | resnet101 | culane   | 0.1        | ![type:video](https://www.youtube.com/embed/i0Bu2-Ef8T8) | ![type:video](https://www.youtube.com/embed/DWy4HeHyZYQ) |
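The Line IoU idea can be illustrated in isolation: each lane is a list of x-coordinates sampled at the same image rows, every point is widened into a horizontal segment of fixed half-width, and the segment IoU is averaged over rows. The sketch below is a simplified, non-differentiable illustration of that idea under those assumptions, not the loss implementation from the CLRNet codebase:

```python
# Simplified illustration of a row-wise line IoU between two lanes.
# Each lane is a list of x-coordinates sampled at the same image rows;
# every point becomes a horizontal segment [x - radius, x + radius].

def line_iou(xs_a, xs_b, radius=7.5):
    """Average per-row IoU of the widened lane segments."""
    ious = []
    for xa, xb in zip(xs_a, xs_b):
        inter = max(0.0, 2 * radius - abs(xa - xb))  # segment overlap
        union = 4 * radius - inter                   # len_a + len_b - inter
        ious.append(inter / union)
    return sum(ious) / len(ious)

# Identical lanes score 1.0; lanes farther apart than the segment width score 0.
print(line_iou([100, 110], [100, 110]))  # → 1.0
print(line_iou([100], [200]))            # → 0.0
```

With the default half-width of 7.5 px, two lanes offset by 5 px per row score 0.5, so the measure degrades smoothly with lateral error rather than dropping to zero.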
#### FENet

This research introduces Focusing Sampling, Partial Field of View Evaluation, Enhanced FPN architecture,
and Directional IoU Loss, addressing challenges in precise lane detection for autonomous driving.
Experiments show that Focusing Sampling, which emphasizes distant details crucial for safety,
significantly improves both benchmark and practical curved/distant lane recognition accuracy over uniform approaches.

- **Paper**: [FENet: Focusing Enhanced Network for Lane Detection](https://arxiv.org/abs/2312.17163)
- **Code**: [GitHub](https://github.com/HanyangZhong/FENet)

| Method   | Backbone | Dataset | Confidence | Campus Video                                             | Road Video                                               |
| -------- | -------- | ------- | ---------- | -------------------------------------------------------- | -------------------------------------------------------- |
| FENet v1 | dla34    | culane  | 0.2        | ![type:video](https://www.youtube.com/embed/eGHgxf-8mcg) | ![type:video](https://www.youtube.com/embed/YMKCWLWq2Ww) |
| FENet v1 | dla34    | culane  | 0.1        | ![type:video](https://www.youtube.com/embed/em3eaZ6RKZM) | ![type:video](https://www.youtube.com/embed/bCjEUtoIYac) |
| FENet v1 | dla34    | culane  | 0.05       | ![type:video](https://www.youtube.com/embed/_3gwLW54aHw) | ![type:video](https://www.youtube.com/embed/24hjuNlZBIQ) |
| FENet v2 | dla34    | culane  | 0.2        | ![type:video](https://www.youtube.com/embed/Z4WPJ9Cop2w) | ![type:video](https://www.youtube.com/embed/d3bovsjF2tE) |
| FENet v2 | dla34    | culane  | 0.1        | ![type:video](https://www.youtube.com/embed/vbE1wNIc1Js) | ![type:video](https://www.youtube.com/embed/ezWGPTSbBAw) |
| FENet v2 | dla34    | culane  | 0.05       | ![type:video](https://www.youtube.com/embed/sJvyR6jrlpY) | ![type:video](https://www.youtube.com/embed/XKuJ-YoVusY) |
| FENet v2 | dla34    | llamas  | 0.4        | ![type:video](https://www.youtube.com/embed/_GrUe6phC7U) | ![type:video](https://www.youtube.com/embed/_YLDS-gTg2w) |
| FENet v2 | dla34    | llamas  | 0.2        | ![type:video](https://www.youtube.com/embed/G59KpXE-2OI) | ![type:video](https://www.youtube.com/embed/3MaNauiPAxQ) |
| FENet v2 | dla34    | llamas  | 0.1        | ![type:video](https://www.youtube.com/embed/cre7XhUF7IM) | ![type:video](https://www.youtube.com/embed/vGKDraGFamM) |
| FENet v2 | dla34    | llamas  | 0.05       | ![type:video](https://www.youtube.com/embed/TNpBmidhChQ) | ![type:video](https://www.youtube.com/embed/Z67DTfoppVo) |
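The intuition behind Focusing Sampling, spending more sample rows on the distant (upper) part of the image than uniform sampling does, can be sketched as follows; the power-law schedule used here is an illustrative assumption, not the paper's exact formula:

```python
# Illustrative sketch of non-uniform ("focused") row sampling.
# Uniform sampling spaces rows evenly over the image height; the focused
# variant squashes samples toward row 0 (the distant part of the scene),
# so far-away lane geometry gets proportionally more sample rows.

def uniform_rows(img_height, n):
    return [round(i * (img_height - 1) / (n - 1)) for i in range(n)]

def focused_rows(img_height, n, power=2.0):
    # t**power < t for t in (0, 1), biasing samples toward small row indices.
    return [round(((i / (n - 1)) ** power) * (img_height - 1)) for i in range(n)]

uni = uniform_rows(320, 8)
foc = focused_rows(320, 8)
# More of the focused rows fall in the top (distant) half of the image.
print(sum(r < 160 for r in uni), sum(r < 160 for r in foc))
```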
### Multitask Detection Methods

#### YOLOPv2

This work proposes an efficient multi-task learning network for autonomous driving,
combining traffic object detection, drivable road area segmentation, and lane detection.
The YOLOPv2 model achieves new state-of-the-art performance in accuracy and speed on the BDD100K dataset,
halving the inference time compared to previous benchmarks.

- **Paper**: [YOLOPv2: Better, Faster, Stronger for Panoptic Driving Perception](https://arxiv.org/abs/2208.11434)
- **Code**: [GitHub](https://github.com/CAIC-AD/YOLOPv2)

| Method  | Campus Video                                             | Road Video                                               |
| ------- | -------------------------------------------------------- | -------------------------------------------------------- |
| YOLOPv2 | ![type:video](https://www.youtube.com/embed/iovwTg3cisA) | ![type:video](https://www.youtube.com/embed/UzkCnI0Sx7c) |
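The defining property of such multitask networks is that one shared backbone pass feeds all three task heads. The toy stand-in below only mimics that interface to show the shape of the idea; the real YOLOPv2 head names, outputs, and tensor shapes differ:

```python
# Hypothetical stand-in for a panoptic driving perception model:
# one shared "backbone" pass feeds three task heads, so detection,
# drivable-area segmentation, and lane segmentation share computation.

class ToyMultitaskModel:
    def backbone(self, image):
        # Placeholder shared feature extractor (just records input size).
        return {"n": len(image)}

    def forward(self, image):
        feats = self.backbone(image)          # computed once, used by all heads
        detections = [("car", 0.9)]           # head 1: traffic objects
        drivable_mask = [1] * feats["n"]      # head 2: drivable area (per pixel)
        lane_mask = [0] * feats["n"]          # head 3: lane lines (per pixel)
        return detections, drivable_mask, lane_mask

model = ToyMultitaskModel()
det, da, ll = model.forward([0.0, 0.1, 0.2])
print(len(det), len(da), len(ll))
```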
#### HybridNets

This work introduces HybridNets, an end-to-end perception network for autonomous driving.
It optimizes segmentation heads and box/class prediction networks using a weighted bidirectional feature network.
HybridNets achieves strong performance on the BDD100K (Berkeley DeepDrive) dataset, outperforming state-of-the-art methods.

- **Paper**: [HybridNets: End-to-End Perception Network](https://arxiv.org/abs/2203.09035)
- **Code**: [GitHub](https://github.com/datvuthanh/HybridNets)

| Method     | Campus Video                                             | Road Video                                               |
| ---------- | -------------------------------------------------------- | -------------------------------------------------------- |
| HybridNets | ![type:video](https://www.youtube.com/embed/ph9TKSiWvd4) | ![type:video](https://www.youtube.com/embed/aNsm4Uj1gcA) |
#### TwinLiteNet

This work introduces TwinLiteNet, a lightweight model designed for driveable area and lane line segmentation in
autonomous driving.

- **Paper**: [TwinLiteNet: An Efficient and Lightweight Model for Driveable Area and Lane Segmentation in Self-Driving Cars](https://arxiv.org/abs/2307.10705)
- **Code**: [GitHub](https://github.com/chequanghuy/TwinLiteNet)

| Method      | Campus Video                                             | Road Video                                               |
| ----------- | -------------------------------------------------------- | -------------------------------------------------------- |
| TwinLiteNet | ![type:video](https://www.youtube.com/embed/hDIcbBup7ww) | ![type:video](https://www.youtube.com/embed/4J9zSoVxw-Q) |
## Citation

```bibtex
@article{honda2023clrernet,
  title={CLRerNet: Improving Confidence of Lane Detection with LaneIoU},
  author={Hiroto Honda and Yusuke Uchida},
  journal={arXiv preprint arXiv:2305.08366},
  year={2023},
}
```

```bibtex
@InProceedings{Zheng_2022_CVPR,
  author    = {Zheng, Tu and Huang, Yifei and Liu, Yang and Tang, Wenjian and Yang, Zheng and Cai, Deng and He, Xiaofei},
  title     = {CLRNet: Cross Layer Refinement Network for Lane Detection},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2022},
  pages     = {898-907}
}
```

```bibtex
@article{wang&zhong_2024fenet,
  title={FENet: Focusing Enhanced Network for Lane Detection},
  author={Liman Wang and Hanyang Zhong},
  year={2024},
  eprint={2312.17163},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```

```bibtex
@misc{vu2022hybridnets,
  title={HybridNets: End-to-End Perception Network},
  author={Dat Vu and Bao Ngo and Hung Phan},
  year={2022},
  eprint={2203.09035},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```

```bibtex
@INPROCEEDINGS{10288646,
  author={Che, Quang-Huy and Nguyen, Dinh-Phuc and Pham, Minh-Quan and Lam, Duc-Khai},
  booktitle={2023 International Conference on Multimedia Analysis and Pattern Recognition (MAPR)},
  title={TwinLiteNet: An Efficient and Lightweight Model for Driveable Area and Lane Segmentation in Self-Driving Cars},
  year={2023},
  volume={},
  number={},
  pages={1-6},
  doi={10.1109/MAPR59823.2023.10288646}
}
```
