
🏁 Final Round


πŸ† Evaluation Criteria

The final score consists of three components, for a total of 100 points:

  1. Project Completeness (40 pts)

    • The submitted HLS project must compile successfully and generate all evaluation metrics (mIoU, latency, and resource usage).
    • If the flow cannot be executed to completion, this portion of the score will not be awarded.
  2. HLS Performance Metrics (40 pts)
    Submissions will be evaluated based on semantic accuracy, latency, and compliance with FPGA resource constraints:

    • Semantic Accuracy
      • mIoU: Mean Intersection over Union, a measure of semantic segmentation accuracy (a computation sketch appears under Accuracy Scoring System & Dataset below).
    • Latency
      • Latency = Max Latency from the csynth report.
      • Unlike the qualifying round, RTL simulation is skipped because it takes too long.
      • The csynth report is located at:
        myproject_prj/solution1/syn/report/myproject_csynth.rpt
      • Be sure to set both the cosim and validation flags to 0 to skip RTL simulation (see the build sketch after this list).
    • FPGA Resource Constraints
      • All four resources reported by vsynth must be ≤ 75% of the target FPGA’s capacity, identical to the qualifying round requirements.
      • Target FPGA: Xilinx Alveo U55C.

    Final Scoring Rule:

    • If FPS (1/latency) β‰₯ 60:
      Score = mIoU
    • If FPS < 60:
      Score = mIoU Γ— (FPS / 60)
    • This rule prioritizes real-time performance while rewarding higher segmentation accuracy: submissions below 60 FPS are penalized proportionally (a worked example follows this list).
  3. Poster & Presentation (20 pts)

    • Each team must deliver both a poster and an oral presentation on the final day.
    • Poster size and presentation time limits follow the official website guidelines.
    • Judges will award points based on presentation quality and overall project completeness.
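
Assuming the hls4ml-style flow from the qualifying round, here is a minimal sketch of a build call with co-simulation and validation disabled. The `hls_model` object and the exact flag names follow hls4ml's `build()` API and are assumptions; adapt them if your project is driven differently.

```python
# Minimal sketch, assuming an hls4ml flow where `hls_model` was created
# earlier (e.g. with hls4ml.converters.convert_from_keras_model).
report = hls_model.build(
    csim=False,        # optional: skip C simulation
    synth=True,        # C synthesis -> produces myproject_csynth.rpt
    cosim=False,       # cosim flag = 0: no RTL co-simulation
    validation=False,  # validation flag = 0: no RTL/C validation run
    vsynth=True,       # logic synthesis for the four resource numbers
)
```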

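To make the scoring rule concrete, here is a small sketch in Python. The numbers are made up, and if your csynth report gives max latency in clock cycles you first need to convert with your own clock period; both values below are assumed examples, not contest-provided figures.

```python
def final_score(miou: float, latency_s: float) -> float:
    """Final scoring rule: Score = mIoU if FPS >= 60, else mIoU * (FPS / 60)."""
    fps = 1.0 / latency_s
    return miou if fps >= 60.0 else miou * (fps / 60.0)

# If the report gives max latency in cycles, convert first (the clock
# period here is an assumed example value):
cycles, clock_period_s = 2_500_000, 10e-9   # 10 ns clock -> 25 ms latency
latency_s = cycles * clock_period_s

print(final_score(0.60, latency_s))  # 40 FPS < 60 -> 0.60 * (40/60) = 0.40
```
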
πŸ“‚ Dataset Introduction

Cityscapes is a popular benchmark dataset for semantic segmentation in autonomous driving applications.

  • Training set: 2,975 images
  • Validation set: 500 images
  • The validation set is used for performance evaluation.
  • Each image is annotated with 20 semantic classes (e.g., building, road, car, pedestrian).

To accommodate FPGA limitations, testing images are resized to 64×64. However, we provide training images at multiple resolutions (2×, 4×) and encourage participants to leverage these high-resolution variants to enhance model performance.
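
If you want your preprocessing to match the 64×64 evaluation resolution, here is a minimal sketch with Pillow. The file names are placeholders; nearest-neighbor resampling for the label map is the usual choice so class IDs are never blended:

```python
from PIL import Image

# Placeholder file names: substitute your own Cityscapes image/label pair.
img = Image.open("frankfurt_000000_000294_leftImg8bit.png")
lbl = Image.open("frankfurt_000000_000294_gtFine_labelIds.png")

# Bilinear for the RGB image; NEAREST for the label map so integer
# class IDs stay valid after resizing.
img_64 = img.resize((64, 64), Image.BILINEAR)
lbl_64 = lbl.resize((64, 64), Image.NEAREST)
```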


Accuracy Scoring System & Dataset
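
As a reference for how the accuracy metric behaves, here is a minimal mIoU sketch over the 20 classes. This is an illustration, not the official scoring script; skipping classes that appear in neither the prediction nor the ground truth is an assumption about how absent classes are handled:

```python
import numpy as np

def miou(pred: np.ndarray, target: np.ndarray, num_classes: int = 20) -> float:
    """Mean IoU over integer label maps of identical shape."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:          # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```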
