
Towards High-Performance Flexible FPGA-Based Accelerators for CNN Inference: General Hardware Architecture and End-to-End Deploying Toolflow

License

This project is dual-licensed under the following terms:

Open Source Use

  • Non-commercial use is permitted under the AGPL-3.0 license (see the Patent Notice below).

Commercial Licensing

  • Commercial use (including but not limited to SaaS offerings, proprietary integrations, or resale) requires a paid commercial license.
  • The commercial license grants rights to:
    • Use the code in closed-source products.
    • Modify the code without open-sourcing derivative works.
    • Access priority technical support and updates.
  • To obtain a commercial license, contact us at wugang@mail.neu.edu.cn.

Patent Notice

This code is protected by patents (Patent No. ZL202310288488.1).

  • Non-commercial use is permitted under AGPL-3.0.
  • Commercial use requires explicit patent authorization.

Folder Description

/FPGA_linux : Hardware design code (Verilog and SystemVerilog) and driver code (C/C++).

/Python : Compiler code.

How to reproduce the results

The following steps use the latest version as an example.

  1. Generate the bitstream files and program them onto the FPGA.

For example, place a bitstream file in the ./FPGA_linux/bitstreams/M64P64Q16R16S8/ folder.

  2. Enter the ./FPGA_linux/linux-driver/run/ folder and run the scripts there to collect data.

run_model_e2e_perf.sh: Collects the end-to-end inference latency of various models.

run_model_ins_perf.sh: Collects the latency of each instruction for each model.

run_model_build_db.sh: Builds the convolution latency database by collecting the latency of each Conv instruction in both static and dynamic states.

run_test_*: Tests the various functional components.

All collected raw data is saved in ./FPGA_linux/linux-driver/extvres/.

  3. Enter the ./Python/ana/ folder and analyze the results.

First, copy all the raw experimental results into this folder and run the script merge_files.py to merge them.

Second, run the script run_ana.py to analyze each model; the results are saved in the res_* files under the accelerator folder corresponding to each model.

Finally, run the script plot.py to plot the performance and inference latency breakdown of each model.
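The merge-and-analyze step above could be sketched as follows. This is only a minimal illustration: the CSV layout (a `model` column and a `latency_ms` column) and the function names are hypothetical assumptions, since the actual file formats produced by the run_* scripts and consumed by merge_files.py/run_ana.py are not documented here.

```python
# Hypothetical sketch of merging raw result files and averaging per-model
# latency; the real merge_files.py / run_ana.py may use different formats.
import csv
import io
from collections import defaultdict

def merge_results(files):
    """Concatenate the rows of several CSV result files into one list."""
    rows = []
    for f in files:
        rows.extend(csv.DictReader(f))
    return rows

def mean_latency_per_model(rows):
    """Average the (assumed) latency_ms column per model name."""
    acc = defaultdict(list)
    for r in rows:
        acc[r["model"]].append(float(r["latency_ms"]))
    return {m: sum(v) / len(v) for m, v in acc.items()}

if __name__ == "__main__":
    # Two in-memory files stand in for raw results copied from extvres/.
    run1 = io.StringIO("model,latency_ms\nresnet18,4.2\nresnet18,4.4\n")
    run2 = io.StringIO("model,latency_ms\nyolov3,11.0\n")
    rows = merge_results([run1, run2])
    print(mean_latency_per_model(rows))
```

In the real flow the inputs would be the files under ./FPGA_linux/linux-driver/extvres/ rather than in-memory strings.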
