# Tiny Tapeout Verilog Project Template
- After reading Elon Musk's discussion of the efficiency trade-offs between integer and floating-point computation for Tesla's AI5/AI6 chips, I decided to try it myself with the free ASIC tools here on TinyTapeout.
- Since I already have a fully-fledged [TPU](https://github.com/WilliamZhang20/ECE298A-TPU) that does 8-bit integer (INT8) arithmetic, the natural next step is FP8.
- In that previous project, I had tried FP8 with BF16 accumulation, but the chip area blew up. In retrospect, this was likely because converting the BF16 results back to FP8 took extra area.
- Later on in that project, I dropped the 8-bit outputs and kept every output as a 16-bit number, eliminating the back-conversion waste, so it's time to try FP8 again.
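The idea behind FP8 multiplication with wide accumulation can be sketched in software. Below is a minimal Python model, assuming the OCP E4M3 encoding (1 sign bit, 4 exponent bits, 3 mantissa bits, bias 7); the actual datapath is Verilog and its format choices may differ, and `fp8_e4m3_to_float`/`dot_fp8` are hypothetical helper names for illustration only.

```python
def fp8_e4m3_to_float(byte: int) -> float:
    """Decode one FP8 E4M3 byte (1 sign, 4 exponent, 3 mantissa bits, bias 7)."""
    sign = -1.0 if (byte >> 7) & 1 else 1.0
    exp = (byte >> 3) & 0xF
    man = byte & 0x7
    if exp == 0:
        # Subnormal: no implicit leading 1, scale by 2^(1 - bias - 3)
        return sign * man * 2.0 ** -9
    if exp == 0xF and man == 0x7:
        # E4M3 reserves only this pattern for NaN (no infinities)
        return float("nan")
    return sign * (1 + man / 8.0) * 2.0 ** (exp - 7)

def dot_fp8(a_bytes, b_bytes) -> float:
    # Multiply FP8 inputs but accumulate in wide precision (a stand-in for
    # BF16 accumulation), returning the wide result directly with no
    # back-conversion to FP8 -- the point of keeping 16-bit outputs.
    return sum(fp8_e4m3_to_float(a) * fp8_e4m3_to_float(b)
               for a, b in zip(a_bytes, b_bytes))
```

For example, `0x38` decodes to 1.0 and `0x40` to 2.0, so `dot_fp8([0x38, 0x40], [0x40, 0x40])` computes 1·2 + 2·2 = 6.0 without ever rounding the accumulator back to 8 bits.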
[Read the documentation for this project](docs/info.md)