# Tiny matrix multiplication ASIC for 1.58 bit aka TERNARY weight LLMs
This work is inspired by the paper [The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits](https://arxiv.org/pdf/2402.17764.pdf), which reduces the weights of a [Large Language Model](https://en.wikipedia.org/wiki/Large_language_model) to the ternary representation `{-1, 0, 1}`.
Preliminary **performance** results based on simulations:
* eFabless 130nm ASIC - **1 GigaOPS** per 0.2 square millimeter of chip area @ 50 MHz
* $99 FPGA - **0.6 TeraOPS** @ 500 MHz (thanks to [@samsoniuk](https://github.com/samsoniuk) for quick synthesis!)
Observation: _**doubling** the chip area leads to a **50%** increase in performance given a constant memory bandwidth and clock frequency._
## Intent & ASIC
This implementation is an exploration of the design space: the intent is to measure how chip area, precision, and memory bandwidth affect the performance of systolic arrays and AI accelerators.
This ASIC will be fabricated using the eFabless 130 nm process via [Tiny Tapeout](https://tinytapeout.com).
## Considerations
This implementation takes the following considerations into account:
* Extremely low memory bandwidth, limited by the 16 IO pins available in Tiny Tapeout: roughly 100 MB/s (16 bits per clock at 50 MHz).
* The ability to increase compute regardless of memory bandwidth.
## Implementation
**Ternary weights.** Currently a pretty basic approach is used to decode 5 ternary values from every 8 bits (3^5 = 243 combinations fit into the 256 possible byte values). Each 8-bit value is decoded with a huge case statement. Surprisingly, this produces fairly compact logic, but I am sure it can be done better!
```verilog
always @(*) begin
    case(packed_weights)
        // ... one case entry per packed 8-bit value (full table omitted here)
    endcase
end
```
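For comparison, below is a minimal sketch of an arithmetic decoder that assumes a plain base-3 packing (the byte equals `w4*81 + w3*27 + w2*9 + w1*3 + w0`, with each digit in `{0, 1, 2}` standing for `{-1, 0, +1}`). The module name and the packing format are assumptions for illustration; the actual encoding behind the case table may differ.

```verilog
// Hypothetical arithmetic decoder (assumes base-3 packing of 5 ternary digits).
// Digit values {0,1,2} encode the ternary weights {-1,0,+1}.
module ternary_unpack (
    input  wire [7:0]        packed_weights,
    output wire signed [1:0] w0, w1, w2, w3, w4
);
    // extract the five base-3 digits with constant divide/modulo
    wire [7:0] d0 =  packed_weights       % 3;
    wire [7:0] d1 = (packed_weights /  3) % 3;
    wire [7:0] d2 = (packed_weights /  9) % 3;
    wire [7:0] d3 = (packed_weights / 27) % 3;
    wire [7:0] d4 = (packed_weights / 81) % 3;

    // digit - 1 maps {0,1,2} onto {-1,0,+1} in two's complement
    assign w0 = d0[1:0] - 2'd1;
    assign w1 = d1[1:0] - 2'd1;
    assign w2 = d2[1:0] - 2'd1;
    assign w3 = d3[1:0] - 2'd1;
    assign w4 = d4[1:0] - 2'd1;
endmodule
```

Whether the constant dividers end up smaller than the flat case table depends entirely on the synthesis tool, so the case statement may well remain the better choice.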
**Systolic array.** The matrix multiplication is implemented as an activation-stationary "pseudo" systolic array. It is "pseudo" because the inputs (weights & activations) are directly connected to all elements in the array; only the results (new activations) are shifted out of the array in a systolic manner. Such an implementation is closer to Tesla's FSD chip than to Google's TPU.
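To make the ternary trick concrete, here is a minimal sketch of what one processing element could look like, assuming signed 8-bit activations and a 16-bit accumulator; the module name, widths, and exact dataflow are illustrative, not the actual implementation. The key point is that a ternary weight turns every multiply into an add, a subtract, or a no-op, so no hardware multipliers are needed.

```verilog
// Illustrative processing element: accumulates +activation, -activation, or
// nothing, depending on the ternary weight. Widths and names are assumptions.
module ternary_pe (
    input  wire               clk,
    input  wire               rst,
    input  wire signed [7:0]  activation,  // broadcast to every element
    input  wire signed [1:0]  weight,      // ternary: -1, 0, +1
    output reg  signed [15:0] acc          // shifted out systolically when done
);
    always @(posedge clk) begin
        if (rst)
            acc <= 16'sd0;
        else if (weight == 2'sd1)
            acc <= acc + activation;       // weight = +1: add
        else if (weight == -2'sd1)
            acc <= acc - activation;       // weight = -1: subtract
        // weight = 0: keep the accumulator unchanged
    end
endmodule
```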
**Compute slices.** The systolic array is split into compute slices. Slicing makes it possible to grow the systolic array, and with it the compute power, even if the memory bandwidth stays the same.
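As a sketch of how slicing keeps bandwidth constant: a slice just wires a number of processing elements to the same activation/weight stream, so adding slices multiplies the peak OPS while the bytes per second flowing into the chip stay the same. The example below reuses the hypothetical `ternary_pe` from above; `NUM_PE`, `ternary_slice`, and the port names are illustrative, not the actual design.

```verilog
// Illustrative slice: NUM_PE elements share one activation and weight stream,
// so compute scales with NUM_PE while the input bandwidth does not change.
module ternary_slice #(
    parameter NUM_PE = 16
) (
    input  wire                  clk,
    input  wire                  rst,
    input  wire signed [7:0]     activation,    // one shared input stream
    input  wire [2*NUM_PE-1:0]   weights,       // one ternary weight per element
    output wire [16*NUM_PE-1:0]  accumulators   // results, shifted out in practice
);
    genvar i;
    generate
        for (i = 0; i < NUM_PE; i = i + 1) begin : pe
            ternary_pe u_pe (
                .clk        (clk),
                .rst        (rst),
                .activation (activation),
                .weight     (weights[2*i+1 : 2*i]),
                .acc        (accumulators[16*i+15 : 16*i])
            );
        end
    endgenerate
endmodule
```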