|
1 | | -# FIT: Flexible and Interpretable Training Library |
| 1 | +# FIT |
2 | 2 |
|
3 | | -[](https://github.com/Klus3kk/fit/actions/workflows/ci.yml) |
4 | | -[](https://codecov.io/gh/Klus3kk/fit) |
5 | | -[](https://opensource.org/licenses/MIT) |
| 3 | +A PyTorch-like machine learning library built from scratch on NumPy. Train neural networks with automatic differentiation; the only dependency is NumPy.
6 | 4 |
|
7 | | -FIT is a lightweight machine learning library built from scratch in Python with NumPy. It provides a PyTorch-like API with automatic differentiation and neural network components. |
| 5 | +[Documentation](https://fit.readthedocs.io/) | [Examples](examples/) |
8 | 6 |
|
9 | | -## Features |
| 7 | +## Why FIT? |
10 | 8 |
|
11 | | -- **Automatic Differentiation**: Build and train neural networks with automatic differentiation |
12 | | -- **Neural Network Components**: Linear layers, activations (ReLU, Softmax), BatchNorm, Dropout |
13 | | -- **Optimizers**: SGD, SGD with momentum, Adam |
14 | | -- **Training Utilities**: Training loop, learning rate schedulers, gradient clipping |
15 | | -- **Monitoring**: Training metrics tracking and visualization |
16 | | -- **Model I/O**: Save and load your trained models |
| 9 | +- **Lightweight**: Only requires NumPy |
| 10 | +- **Educational**: Understand ML from first principles |
| 11 | +- **Familiar API**: PyTorch-like interface |
| 12 | +- **Production-ready**: Type hints, logging, proper error handling
17 | 13 |
|
18 | 14 | ## Installation |
19 | 15 |
|
20 | | -### Using pip (recommended) |
21 | | - |
22 | 16 | ```bash |
23 | 17 | pip install git+https://github.com/Klus3kk/fit.git |
24 | 18 | ``` |
25 | 19 |
|
26 | | -### From source |
| 20 | +## Example |
27 | 21 |
|
28 | | -```bash |
29 | | -git clone https://github.com/Klus3kk/fit.git |
30 | | -cd fit |
31 | | -pip install -e . |
| 22 | +Solve the XOR problem:
| 23 | + |
| 24 | +```python |
| 25 | +from fit.core.tensor import Tensor |
| 26 | +from fit.nn.modules.container import Sequential |
| 27 | +from fit.nn.modules.linear import Linear |
| 28 | +from fit.nn.modules.activation import ReLU |
| 29 | +from fit.loss.regression import MSELoss |
| 30 | +from fit.optim.adam import Adam |
| 31 | + |
| 32 | +# XOR dataset |
| 33 | +X = Tensor([[0, 0], [0, 1], [1, 0], [1, 1]]) |
| 34 | +y = Tensor([[0], [1], [1], [0]]) |
| 35 | + |
| 36 | +# Model |
| 37 | +model = Sequential( |
| 38 | + Linear(2, 8), |
| 39 | + ReLU(), |
| 40 | + Linear(8, 1) |
| 41 | +) |
| 42 | + |
| 43 | +# Training |
| 44 | +loss_fn = MSELoss() |
| 45 | +optimizer = Adam(model.parameters(), lr=0.01) |
| 46 | + |
| 47 | +for epoch in range(1000): |
| 48 | + pred = model(X) |
| 49 | + loss = loss_fn(pred, y) |
| 50 | + |
| 51 | + optimizer.zero_grad() |
| 52 | + loss.backward() |
| 53 | + optimizer.step() |
| 54 | + |
| 55 | + if epoch % 200 == 0: |
| 56 | + print(f"Loss: {loss.data:.4f}") |
| 57 | + |
| 58 | +# Test |
| 59 | +print(f"Predictions: {model(X).data}") |
32 | 60 | ``` |
33 | 61 |
|
34 | | -### Using Docker |
| 62 | +## What's included |
35 | 63 |
|
36 | | -```bash |
37 | | -docker-compose up fit-ml |
| 64 | +**Core**: Tensors with autograd, just like PyTorch |
| 65 | +```python |
| 66 | +x = Tensor([1, 2, 3], requires_grad=True) |
| 67 | +y = x.sum() |
| 68 | +y.backward() # x.grad is now [1, 1, 1] |
38 | 69 | ``` |
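Under the hood, `backward()` is just the chain rule applied graph-node by graph-node. A self-contained NumPy sketch (independent of FIT) that checks an analytic gradient against central finite differences, the standard way to sanity-check any autograd engine:

```python
import numpy as np

def f(x):
    # y = sum(x^2); the analytic gradient is 2x
    return (x ** 2).sum()

x = np.array([1.0, 2.0, 3.0])
analytic = 2 * x

# Central finite differences as a ground-truth check
eps = 1e-5
numeric = np.array([
    (f(x + eps * e) - f(x - eps * e)) / (2 * eps)
    for e in np.eye(len(x))
])

print(analytic, numeric)  # both ≈ [2, 4, 6]
```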
39 | 70 |
|
40 | | -## Quick Start |
| 71 | +**Layers**: Linear, activations, normalization, attention |
| 72 | +```python |
| 73 | +from fit.nn.modules.activation import ReLU, GELU |
| 74 | +from fit.nn.modules.normalization import BatchNorm1d |
| 75 | +``` |
| 76 | + |
| 77 | +**Optimizers**: SGD, Adam, and advanced ones like SAM |
| 78 | +```python |
| 79 | +from fit.optim.adam import Adam |
| 80 | +from fit.optim.experimental.sam import SAM |
| 81 | +``` |
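SAM (sharpness-aware minimization) takes two steps per update: first climb to a nearby worst-case point, then update the original weights using the gradient measured there. A toy NumPy illustration of that two-step rule on a 1-D quadratic; this is a conceptual sketch of the algorithm, not FIT's `SAM` API:

```python
import numpy as np

def loss(w):   # toy objective with minimum at w = 3
    return (w - 3.0) ** 2

def grad(w):   # its gradient
    return 2.0 * (w - 3.0)

w, lr, rho = 0.0, 0.1, 0.05
for _ in range(100):
    g = grad(w)
    # Step 1: ascend to the worst-case neighbor within radius rho
    w_adv = w + rho * g / (abs(g) + 1e-12)
    # Step 2: update the *original* weights with the gradient at w_adv
    w -= lr * grad(w_adv)

print(w)  # settles near the minimum at w = 3
```

In practice this is why SAM implementations need two forward/backward passes per step.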
| 82 | + |
| 83 | +**Simple API**: For quick experiments |
| 84 | +```python |
| 85 | +from fit.simple.models import Classifier |
| 86 | +model = Classifier(input_size=784, num_classes=10) |
| 87 | +``` |
| 88 | + |
| 89 | +## MNIST example |
41 | 90 |
|
42 | 91 | ```python |
| 92 | +import numpy as np |
43 | 93 | from fit.core.tensor import Tensor |
44 | | -from fit.nn.sequential import Sequential |
45 | | -from fit.nn.linear import Linear |
46 | | -from fit.nn.activations import ReLU, Softmax |
47 | | -from fit.train.loss import CrossEntropyLoss |
48 | | -from fit.train.optim import Adam |
49 | | -from fit.train.trainer import Trainer |
50 | | - |
51 | | -# Create a simple model for binary classification |
| 94 | +from fit.nn.modules.container import Sequential |
| 95 | +from fit.nn.modules.linear import Linear |
| 96 | +from fit.nn.modules.activation import ReLU |
| 97 | +from fit.optim.adam import Adam |
| 98 | +from fit.loss.classification import CrossEntropyLoss |
| 99 | + |
| 100 | +# Create MNIST classifier |
52 | 101 | model = Sequential( |
53 | | - Linear(2, 4), |
| 102 | + Linear(784, 128), |
| 103 | + ReLU(), |
| 104 | + Linear(128, 64), |
54 | 105 | ReLU(), |
55 | | - Linear(4, 2), |
56 | | - Softmax() |
| 106 | + Linear(64, 10) |
57 | 107 | ) |
58 | 108 |
|
59 | | -# Prepare data |
60 | | -X = Tensor([[0, 0], [0, 1], [1, 0], [1, 1]]) # XOR problem |
61 | | -y = Tensor([0, 1, 1, 0]) |
| 109 | +# Generate some dummy data |
| 110 | +X_train = Tensor(np.random.randn(100, 784)) |
| 111 | +y_train = Tensor(np.random.randint(0, 10, (100,))) |
62 | 112 |
|
63 | | -# Define loss function and optimizer |
| 113 | +optimizer = Adam(model.parameters(), lr=0.001) |
64 | 114 | loss_fn = CrossEntropyLoss() |
65 | | -optimizer = Adam(model.parameters(), lr=0.01) |
66 | | - |
67 | | -# Create trainer and train |
68 | | -trainer = Trainer(model, loss_fn, optimizer) |
69 | | -trainer.fit(X, y, epochs=100) |
70 | 115 |
|
71 | | -# Make predictions |
72 | | -predictions = model(X) |
| 116 | +# Training loop |
| 117 | +for epoch in range(50): |
| 118 | + pred = model(X_train) |
| 119 | + loss = loss_fn(pred, y_train) |
| 120 | + |
| 121 | + optimizer.zero_grad() |
| 122 | + loss.backward() |
| 123 | + optimizer.step() |
| 124 | + |
| 125 | + if epoch % 10 == 0: |
| 126 | + print(f"Epoch {epoch}, Loss: {loss.data:.4f}") |
73 | 127 | ``` |
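For reference, the gradient that a softmax cross-entropy loss backpropagates into the logits is simply `softmax(logits)` minus the one-hot targets. A NumPy sketch of that identity (independent of FIT's `CrossEntropyLoss` internals):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 10))
targets = np.array([3, 1, 7, 0])

probs = softmax(logits)
onehot = np.eye(10)[targets]

# Mean cross-entropy and its gradient w.r.t. the logits
loss = -np.log(probs[np.arange(4), targets]).mean()
grad = (probs - onehot) / 4
```

Each row of `grad` sums to zero, since both `probs` and `onehot` sum to one per sample.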
| 128 | + |
| 129 | +Perfect for learning how neural networks work under the hood, or when you need a lightweight ML library without the complexity of PyTorch. |