Commit b907a60

Merge pull request #90 from fastmachinelearning/vitis

Add information about Vivado for part 7

2 parents 433a32b + d23d40a

File tree

2 files changed: +5 −3 lines changed

README.md
part7a_bitstream.ipynb
README.md

Lines changed: 2 additions & 0 deletions

@@ -27,6 +27,8 @@ conda activate hls4ml-tutorial
 source /path/to/your/installtion/Xilinx/Vitis_HLS/202X.X/settings64.(c)sh
 ```
 
+Note that part 7 of the tutorial makes use of the `VivadoAccelerator` backend of hls4ml, for which no Vitis equivalent is available yet. For this part of the tutorial it is therefore necessary to install and source Vivado HLS version 2019.2 or 2020.1, which can be obtained [here](https://www.xilinx.com/support/download/index.html/content/xilinx/en/downloadNav/vivado-design-tools/archive.html).
+
 ## Companion material
 We have prepared a set of slides with some introduction and more details on each of the exercises.
 Please find them [here](https://docs.google.com/presentation/d/1c4LvEc6yMByx2HJs8zUP5oxLtY6ACSizQdKvw5cg5Ck/edit?usp=sharing).
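
For readers setting this up, a minimal sketch of sourcing Vivado HLS before running part 7; the installation prefix below is an assumption, so adjust it to wherever Vivado HLS 2019.2 or 2020.1 is installed on your system:

```bash
# Assumed installation prefix; replace with your actual Vivado install path.
source /opt/Xilinx/Vivado/2019.2/settings64.sh

# Sanity check: settings64.sh exports XILINX_VIVADO and puts the tools on PATH.
echo "$XILINX_VIVADO"
command -v vivado_hls
```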

part7a_bitstream.ipynb

Lines changed: 3 additions & 3 deletions

@@ -7,7 +7,7 @@
 "source": [
 "# Part 7a: Bitstream Generation\n",
 "\n",
-"In the previous sections we've seen how to train a Neural Network with a small resource footprint using QKeras, then to convert it to `hls4ml` and create an IP. That IP can be interfaced into a larger design to deploy on an FPGA device. In this section, we introduce the `VivadoAccelerator` backend of `hls4ml`, where we can easily target some supported devices to get up and running quickly. Specifically, we'll deploy the model on a [pynq-z2 board](http://www.pynq.io/)."
+"In the previous sections we've seen how to train a Neural Network with a small resource footprint using QKeras, then to convert it to `hls4ml` and create an IP. That IP can be interfaced into a larger design to deploy on an FPGA device. In this section, we introduce the `VivadoAccelerator` backend of `hls4ml`, where we can easily target some supported devices to get up and running quickly. Specifically, we'll deploy the model on a [pynq-z2 board](http://www.pynq.io/). NOTE: This tutorial requires Vivado HLS instead of Vitis."
 ]
 },
 {
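
The `VivadoAccelerator` flow this cell introduces looks roughly as follows; a minimal sketch, assuming `model` is the trained QKeras model from the earlier parts and that the output directory name is illustrative:

```python
import hls4ml

# Build an hls4ml configuration for the trained Keras/QKeras model.
config = hls4ml.utils.config_from_keras_model(model, granularity='name')

# Target a supported board directly via the VivadoAccelerator backend;
# 'model_3/hls4ml_prj' is an illustrative output directory.
hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    backend='VivadoAccelerator',
    board='pynq-z2',
    output_dir='model_3/hls4ml_prj',
)
```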
@@ -26,7 +26,7 @@
 "_add_supported_quantized_objects(co)\n",
 "import os\n",
 "\n",
-"os.environ['PATH'] = os.environ['XILINX_Vivado'] + '/bin:' + os.environ['PATH']"
+"os.environ['PATH'] = os.environ['XILINX_VIVADO'] + '/bin:' + os.environ['PATH']"
 ]
 },
 {
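
The fix above corrects the environment variable name: Vivado's settings64.sh exports `XILINX_VIVADO`, not `XILINX_Vivado`. A slightly more defensive variant of that cell (illustrative, not from the notebook) fails early when Vivado has not been sourced:

```python
import os

# XILINX_VIVADO is exported by Vivado's settings64.sh; without it the
# bitstream build cannot find the vivado executables.
xilinx_vivado = os.environ.get('XILINX_VIVADO')
if xilinx_vivado is None:
    raise RuntimeError(
        'XILINX_VIVADO is not set; source settings64.sh from '
        'Vivado HLS 2019.2 or 2020.1 before running this notebook.'
    )
os.environ['PATH'] = os.path.join(xilinx_vivado, 'bin') + ':' + os.environ['PATH']
```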
@@ -74,7 +74,7 @@
 "import hls4ml\n",
 "import plotting\n",
 "\n",
-"config = hls4ml.utils.config_from_keras_model(model, granularity='name', backend='Vitis')\n",
+"config = hls4ml.utils.config_from_keras_model(model, granularity='name')\n",
 "config['LayerName']['softmax']['exp_table_t'] = 'ap_fixed<18,8>'\n",
 "config['LayerName']['softmax']['inv_table_t'] = 'ap_fixed<18,4>'\n",
 "for layer in ['fc1', 'fc2', 'fc3', 'output']:\n",
