Commit d6f2676: Merge pull request #3 from Iainmon/add-readme ("Add an initial README file")
2 parents: 9727617 + b23b2a7. 1 file changed: README.md (+95, -0)
# ChAI: Chapel Artificial Intelligence

ChAI is a library for AI/ML in [Chapel](https://github.com/chapel-lang/chapel).
Chapel's highly parallel nature makes it well suited to AI/ML tasks;
the goal of the library is to provide a foundation for such tasks, enabling
local, distributed, and GPU-enabled computations.

Please note that ChAI was developed as part of a summer internship project
by [Iain Moncrief](https://github.com/Iainmon), and as such is at a relatively
early stage of development.

## Overall Design

ChAI intends to provide a PyTorch-like API that is familiar to newcomers from
other languages. To this end, it provides a number of tensor primitives,
including pure-Chapel implementations of operations such as matrix-matrix
multiplication and convolution (in fact, ChAI provides _two_ tensor data types;
see [Static and Dynamic Tensors](#static-and-dynamic-tensors) below). On top
of this low-level API, ChAI defines a layer system, which makes it possible
to compose pieces such as `Conv2D`, `Linear`, and `Flatten` into a feed-forward
neural network.

ChAI's tensors keep track of the computational graph, making them usable for
both feed-forward and back-propagation tasks; however, the feed-forward
components have received more attention at this time.
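ChAI's actual Chapel API is not reproduced here; as a language-neutral illustration of the layer-composition idea described above, the following Python sketch (all class names and signatures are hypothetical, not ChAI's API) chains layer objects into a small feed-forward pipeline:

```python
# Minimal sketch of a feed-forward "layer system": each layer is a
# callable that transforms its input, and a network is a left-to-right
# composition of layers. All names are hypothetical illustrations of
# the idea, not ChAI's actual API.

class Flatten:
    def __call__(self, x):
        # Collapse a nested list-of-lists into a flat list.
        return [v for row in x for v in row]

class Linear:
    def __init__(self, weight, bias):
        self.weight = weight  # list of weight rows, one per output
        self.bias = bias      # one bias per output

    def __call__(self, x):
        # y_i = sum_j w_ij * x_j + b_i
        return [sum(w * v for w, v in zip(row, x)) + b
                for row, b in zip(self.weight, self.bias)]

class Sequential:
    def __init__(self, *layers):
        self.layers = layers

    def __call__(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

net = Sequential(
    Flatten(),
    Linear(weight=[[1.0, 0.0, 0.0, 1.0]], bias=[0.5]),
)
print(net([[1.0, 2.0], [3.0, 4.0]]))  # flatten a 2x2 input, then apply a 1-output linear layer -> [5.5]
```

The design point is the same one the paragraph above makes for ChAI: the individual pieces know nothing about each other, and the network is just their composition.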

## Examples

The [examples](https://github.com/chapel-lang/ChAI/tree/main/examples) folder contains
various sample programs written using ChAI.

Thus far, the concrete test for ChAI has been the MNIST dataset; specifically,
ChAI's PyTorch interop has been used to load a pre-trained
convolutional MNIST classifier and execute it on multiple locales.
The [`MultiLocaleInference.chpl`](https://github.com/chapel-lang/ChAI/blob/main/examples/MultiLocaleInference.chpl)
file demonstrates this.

## Getting Started

To use ChAI, you need to have Chapel installed; you can follow the installation
instructions [on this page](https://chapel-lang.org/download.html) to do so.

Once you have Chapel installed, you can use the following command to clone ChAI:

```bash
git clone https://github.com/chapel-lang/ChAI.git
```

You can then compile and run one of the example ChAI programs using the following
commands:

```bash
chpl examples/ConvLayerTest.chpl -M lib
./ConvLayerTest
```

The above should produce the following output:

```
Tensor([ 0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0]
[ 8.0 9.0 10.0 11.0 12.0 13.0 14.0 15.0]
[16.0 17.0 18.0 19.0 20.0 21.0 22.0 23.0]
[24.0 25.0 26.0 27.0 28.0 29.0 30.0 31.0]
[32.0 33.0 34.0 35.0 36.0 37.0 38.0 39.0]
[40.0 41.0 42.0 43.0 44.0 45.0 46.0 47.0]
[48.0 49.0 50.0 51.0 52.0 53.0 54.0 55.0]
[56.0 57.0 58.0 59.0 60.0 61.0 62.0 63.0],
shape = (1, 8, 8),
rank = 3)

Tensor([474.0 510.0 546.0 582.0 618.0 654.0]
[762.0 798.0 834.0 870.0 906.0 942.0]
[1050.0 1086.0 1122.0 1158.0 1194.0 1230.0]
[1338.0 1374.0 1410.0 1446.0 1482.0 1518.0]
[1626.0 1662.0 1698.0 1734.0 1770.0 1806.0]
[1914.0 1950.0 1986.0 2022.0 2058.0 2094.0],
shape = (1, 6, 6),
rank = 3)
```

## Static and Dynamic Tensors

Chapel's type system is static and relatively strict; to iterate over tensors
-- and thus implement various mathematical operations -- the dimensions of
the tensors need to be known at compile time. However, this does not mesh
well with the ability to dynamically load models from files on disk (since
the contents of the files can be arbitrary).

To mediate between these two requirements, ChAI provides two tensor types:
`StaticTensor` and `DynamicTensor`. The `StaticTensor` includes the rank
of the tensor; this makes it possible to iterate over it and perform the "usual"
operations. The `DynamicTensor` is a rank-erased version of `StaticTensor`;
it cannot be iterated over, but it can be dynamically cast back to a
`StaticTensor` when needed. Both `StaticTensor` and `DynamicTensor` support
the same operations; `DynamicTensor` performs a dynamic cast to `StaticTensor`
under the hood.
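The rank-erasure pattern described above is not Chapel-specific. As a minimal sketch of the idea (hypothetical names, in Python for illustration; not ChAI's API), a dynamic wrapper records the rank at runtime and casts back to a rank-aware representation before doing real work:

```python
# Sketch of the rank-erasure pattern: StaticT carries its rank as part of
# its structure, while DynamicT hides the rank and casts back to StaticT
# on demand. All names are hypothetical illustrations, not ChAI's API.

class StaticT:
    def __init__(self, rank, shape, data):
        assert len(shape) == rank  # rank and shape must agree
        self.rank, self.shape, self.data = rank, shape, data

    def total(self):
        # A "usual" operation, easy to write when the rank is known.
        return sum(self.data)

class DynamicT:
    """Rank-erased wrapper: stores the payload, hides the rank."""
    def __init__(self, static_tensor):
        self._inner = static_tensor

    def to_static(self, rank):
        # Dynamic cast back: only succeeds if the requested rank matches.
        if self._inner.rank != rank:
            raise ValueError(f"expected rank {rank}, got {self._inner.rank}")
        return self._inner

    def total(self):
        # Same operation as StaticT, via a cast "under the hood".
        return self.to_static(self._inner.rank).total()

t = StaticT(rank=2, shape=(2, 3), data=[1, 2, 3, 4, 5, 6])
d = DynamicT(t)
print(d.total())             # 21
print(d.to_static(2).shape)  # (2, 3)
```

In ChAI the cast is checked against the rank stored in `DynamicTensor` in the same spirit: operations on the dynamic type succeed by recovering the static, rank-aware type first.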

0 commit comments

Comments
 (0)