In this project, we build a software pipeline to detect lane lines in videos using advanced techniques such as color transforms, gradients, perspective transforms, and polynomial fitting.
- Camera Calibration
- Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.
- Pipeline (test images)
- Apply a distortion correction to raw images.
- Use color transforms, gradients, etc., to create a thresholded binary image.
- Apply a perspective transform to rectify binary image ("birds-eye view").
- Detect lane pixels and fit to find the lane boundary.
- Determine the curvature of the lane and vehicle position with respect to center.
- Warp the detected lane boundaries back onto the original image.
- Pipeline (video)
- Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.
- Discussion
- Discuss any problems / issues we faced in our implementation of this project. Where will the pipeline likely fail? What could be done to make it more robust?
- Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.
- Apply a distortion correction to raw images.
- Use color transforms, gradients, etc., to create a thresholded binary image.
- Apply a perspective transform to rectify binary image ("birds-eye view").
- Detect lane pixels and fit to find the lane boundary.
- Determine the curvature of the lane and vehicle position with respect to center.
- Warp the detected lane boundaries back onto the original image.
- Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.
- Apply the image pipeline to the project video.
- Reflection
| Project Video | Challenge Video | Harder Challenge Video |
|---|---|---|
| ![]() | ![]() | ![]() |
- ./notebook/CarND-Advanced-Lane-Lines.ipynb (notebook containing the run of the implementation)
- README.md (a report writeup markdown file)
- project_video_output.mp4 (a video output with advanced lane line detection applied)
- /output_images (folder containing all the output and writeup images)
- /output_videos (folder containing all the video and GIF outputs)
- Camera calibration matrix and points Ref In [3]
- The chessboard size is 9x6; however, with a fixed 9x6 pattern alone, corner detection fails on 3 of the calibration images.
- Trying a range of pattern sizes (e.g., from 5,6 up to 6,7,8,9) shows how the functions below behave on those images.
- cv2.findChessboardCorners: to find corners
- cv2.drawChessboardCorners: to draw the corners
- cv2.calibrateCamera: to calibrate the camera
- result can be found in ./output_images/corner_x
- Chess Undistort Ref In [7]
- cv2.undistort: to undistort the image using the calibrated camera
- result can be found in ./output_images/undistorted_calibrationX
- cv2.undistort: to undistort the image using the calibrated camera
- results can be found in ./output_images: test1, test2, test3, test4, test5, test6, straight1, straight2
- Result
- The undistort_and_threshold() function applies a color transformation and the Sobel operator to generate the thresholded binary image.
- Color channel selection: the Saturation channel of the HLS color space is used because it detects lane lines better.
- Gradients: the Sobel x threshold is used as it identifies lanes better.
- Directional and magnitude thresholds had minimal to no effect on the thresholded binary image.
- Color channel selection and gradients are combined to obtain the thresholded binary image.

- Result: warp and unwarp perspective transform.

- All image Result
- using cv2.getPerspectiveTransform(src, dst) for the perspective transform
- using cv2.getPerspectiveTransform(dst, src) for the inverse transform [DST/SRC swapped]
- using cv2.warpPerspective(img, M, (w,h), flags=cv2.INTER_LINEAR) to warp the perspective
- Define 4 source and 4 destination points (done by trial and error, checking which values work better)
- Result: warp and unwarp perspective transform.

- Polyfit functions such as sliding_window_poly_fit() and polyfit_from_previous() identify lane-line pixels and fit their positions with a polynomial.
- Histogram peaks of the bottom half of the binary thresholded image are used to find the base of the left and right lines (as shown above).
- A sliding window, placed around the line centers, follows the lines up to the top of the frame to identify lane-line pixels, and then a second-degree polynomial is fit.
- Reference
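The histogram-peak base finding and the polynomial fit can be sketched as below. This is a simplified version of what `sliding_window_poly_fit()` does; the full sliding-window search over the frame is omitted, and the function names are illustrative.

```python
import numpy as np

def find_lane_bases(binary_warped):
    """Histogram of the bottom half of the warped binary image:
    the two peaks mark the base of the left and right lane lines."""
    histogram = np.sum(binary_warped[binary_warped.shape[0] // 2:, :], axis=0)
    midpoint = histogram.shape[0] // 2
    left_base = np.argmax(histogram[:midpoint])
    right_base = np.argmax(histogram[midpoint:]) + midpoint
    return left_base, right_base

def fit_lane(ys, xs):
    """Fit x = f(y) with a second-degree polynomial,
    since lane lines are near-vertical in the image."""
    return np.polyfit(ys, xs, 2)
```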
- The curvature() function calculates the radius of curvature and the vehicle offset from center.
- Whether the road curves left or right affects the vehicle position estimate, which is evaluated at the top and then the bottom of the image; hence the values 720 and 900 for calculating the polynomial intercepts.
- Fit new polynomials to x, y in world space.
- Calculate the radii of curvature.
- Calculate car_pos, lane_center, and the vehicle offset.
- Assumption: the camera is mounted at the center of the car, and the offset is the deviation of the lane midpoint from the center of the image.
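A sketch of the curvature and offset computation described above. The meters-per-pixel constants are typical values for this project setup, assumed here rather than taken from the notebook, and the function names are illustrative.

```python
import numpy as np

# Assumed pixel-to-meter conversions (typical for this project's warped view)
YM_PER_PIX = 30 / 720   # meters per pixel in the y dimension
XM_PER_PIX = 3.7 / 700  # meters per pixel in the x dimension

def radius_of_curvature(ys, xs, y_eval):
    """Refit the lane pixels in world space, then evaluate
    R = (1 + (2*A*y + B)^2)^1.5 / |2*A| at y_eval."""
    fit = np.polyfit(ys * YM_PER_PIX, xs * XM_PER_PIX, 2)
    y = y_eval * YM_PER_PIX
    return (1 + (2 * fit[0] * y + fit[1]) ** 2) ** 1.5 / abs(2 * fit[0])

def vehicle_offset(left_x, right_x, img_width):
    """Assuming the camera is mounted at the car's center, the offset is
    the deviation of the lane midpoint from the image center, in meters."""
    lane_center = (left_x + right_x) / 2
    return (img_width / 2 - lane_center) * XM_PER_PIX
```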
- The process_image() function detects the lane lines for the image above.
- Lane boundaries are indicated on the original image.
- It shows the lanes, the curvature, and the position from center (in meters).
- RESULT
- Project Original Video
- Project Output Video
- The image processing pipeline successfully processes the video.
- Outputs are generated for the radius of curvature of the lane and the vehicle position within the lane.
- The pipeline correctly maps out curved lines and does not fail on shadows or pavement color changes.
- There was no single good solution; different combinations of thresholds had to be tried.
- Finding the right combination of color channel and gradient.
- Perspective transform values for src and dst also had to be tuned by trying different values.
- Warped images are considered for only 2 lines.
- The challenge video fails as its histogram shows 3 peaks (lines).
- Normalizing the image for lighting conditions, shadows, and discoloration
- Narrowing down the area or region of interest
- Adding denoising
- Handling thresholds with more than 2 lines, probably by discarding pixels outside the region of interest or far from the polynomial fit
- Handling illumination and color shades as in the challenge video
As shown in the Results above, the pipeline has a hard time with the challenge and harder challenge videos :)
The radius of curvature seems to be consistent with the strength of the curve. Well done.
However the position estimate is not accurate. When the car is in its left-most position around 0:31 (around 0.5m left of the center), the estimate shows near 0 offset. This is because the perspective transformation shifts the whole scene (see below, on the original image, the center of the lane is close to the center of the image, however after the perspective transformation the center of the lane is much more to the left). This can be fixed in two ways:
either you change the perspective transformation by changing src & dst so that the scene is not shifted,
or you simply add a correction term to the result.
- RESULT as expected
- Radius of curvature before: 1193.27, after: 1165.53
- Position estimate before: 0.14, after: 0.44
- Snapshot as below

- Project Video Before Review
- Project Video After Review
- Change made:
  - updated src and dst as below:

```python
src = np.float32([[30, img.shape[0]], [555, 460], [700, 460], [1000, img.shape[0]]])
dst = np.float32([[75, img.shape[0]], [75, 0], [1050, 0], [960, img.shape[0]]])
```





