The final score consists of three components, for a total of 100 points:
- Project Completeness (40 pts)
  - The submitted HLS project must compile successfully and generate all evaluation metrics (mIoU, latency, and resource usage).
  - If the flow cannot be executed to completion, this portion of the score will not be awarded.
- HLS Performance Metrics (40 pts)
  Submissions will be evaluated on semantic accuracy, latency, and compliance with FPGA resource constraints:
  - Semantic Accuracy
    - mIoU: Mean Intersection over Union, a measure of semantic segmentation accuracy.
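As a reference for the accuracy metric, here is a minimal NumPy sketch of mIoU. The function name and the choice to skip classes absent from both maps are my own; the official evaluator may differ in those details.

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection over Union over all classes.

    pred, target: integer label maps of identical shape.
    Classes absent from both prediction and target are skipped,
    so they neither reward nor penalize the score.
    """
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:  # class appears nowhere: skip it
            continue
        inter = np.logical_and(pred_c, target_c).sum()
        ious.append(inter / union)
    return float(np.mean(ious))
```

Averaging per-class IoU (rather than pooling all pixels) keeps rare classes such as pedestrians from being drowned out by large ones such as road.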
  - Latency
    - Latency = Max Latency from the `csynth` report.
    - Unlike the qualifying round, RTL simulation is skipped since it takes too long.
    - The `csynth` report is located at: `myproject_prj/solution1/syn/report/myproject_csynth.rpt`
    - Be sure to set both the cosim flag and the validation FPGA flag to 0 to skip RTL simulation.
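The latency number has to be read out of the text report. Below is a hedged sketch of pulling the max-latency cycle count from a csynth report; the table pattern is an assumption about how the report lays out its latency summary (min cycles, then max cycles), so adjust it to your Vitis HLS version.

```python
import re

def max_latency_cycles(report_path):
    """Extract the worst-case latency (in cycles) from a csynth report.

    Assumes the latency summary table contains a row of the rough form
    |  <min cycles>|  <max cycles>| ...
    and returns the second (max) figure. The layout is tool-version
    dependent; verify the pattern against your own report.
    """
    with open(report_path) as f:
        text = f.read()
    m = re.search(r"\|\s*(\d+)\|\s*(\d+)\|", text)
    if m is None:
        raise ValueError("latency table not found in " + report_path)
    return int(m.group(2))  # max latency in clock cycles
```

Multiply the cycle count by the achieved clock period to get latency in seconds before computing FPS.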
  - FPGA Resource Constraints
    - All four resources reported by `vsynth` must be ≤ 75% of the target FPGA's capacity, identical to the qualifying round requirements.
    - Target FPGA: Xilinx Alveo U55C.
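The 75% budget check is easy to automate. A small sketch, assuming you have already collected the four post-synthesis utilization numbers into a dict; the resource names and capacities in the test are illustrative placeholders, not official U55C figures.

```python
def within_budget(used, capacity, limit=0.75):
    """Check every reported resource against `limit` of device capacity.

    used:     dict mapping resource name -> units consumed
    capacity: dict mapping resource name -> units available on target
    Returns True only if all resources are at or under the limit.
    """
    return all(used[r] <= limit * capacity[r] for r in capacity)
```

Running this after every synthesis iteration catches an over-budget design before you spend time measuring accuracy.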
  Final Scoring Rule:
  - If FPS (1/latency) ≥ 60: Score = mIoU
  - If FPS < 60: Score = mIoU × (FPS / 60)
  - This ensures real-time performance is prioritized while rewarding higher segmentation accuracy; submissions below 60 FPS are penalized proportionally.
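The scoring rule translates directly into code. A minimal sketch, with latency in seconds and mIoU as a fraction (the function name is mine):

```python
def final_score(miou, latency_seconds):
    """Apply the rubric: full mIoU credit at >= 60 FPS,
    credit scaled linearly by FPS/60 below that."""
    fps = 1.0 / latency_seconds
    if fps >= 60:
        return miou
    return miou * (fps / 60.0)
```

Note the penalty is linear in FPS, so halving your frame rate below the threshold halves your score; latency optimization dominates until you clear 60 FPS.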
- Poster & Presentation (20 pts)
  - Each team must deliver both a poster and an oral presentation on the final day.
  - Poster size and presentation time limits follow the official website guidelines.
  - Judges will award points based on presentation quality and overall project completeness.
Cityscapes is a popular benchmark dataset for semantic segmentation in autonomous driving applications.
- Training set: 2,975 images
- Validation set: 500 images
- The validation set is used for performance evaluation.
- Each image is annotated with 20 semantic classes (e.g., building, road, car, pedestrian, etc.)
To accommodate FPGA limitations, testing images are resized to 64×64. However, we provide training images at multiple resolutions (2×, 4×) and encourage participants to leverage these high-resolution variants to enhance model performance.
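For label maps in particular, downsampling to 64×64 should use nearest-neighbor sampling so class indices are never blended the way bilinear filtering would blend them. A minimal NumPy sketch; the helper is illustrative, not part of the official preprocessing.

```python
import numpy as np

def nearest_resize(label_map, out_h, out_w):
    """Nearest-neighbor resize for an integer label map.

    Picks one source pixel per output pixel via integer index
    arithmetic, so output values are always valid class indices.
    """
    in_h, in_w = label_map.shape
    rows = (np.arange(out_h) * in_h) // out_h
    cols = (np.arange(out_w) * in_w) // out_w
    return label_map[rows[:, None], cols[None, :]]
```

For the RGB inputs themselves, a smoothing filter (e.g. bilinear) is fine; only the annotation maps need this index-preserving treatment.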

- The dataset and evaluation system are hosted on the Kaggle Competition Page, the same setup used in the qualifying round.
- You can import and run the Jupyter notebook on Kaggle to see an end-to-end workflow example.
