
Commit 4e234ea

Merge pull request #7 from Klus3kk/dev
Improved README, preparing readthedocs
2 parents: 84ce9ef + 301d8e8

File tree

3 files changed: +125 −152 lines


.readthedocs.yaml

Lines changed: 24 additions & 0 deletions
```yaml
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

# Required
version: 2

# Set the OS, Python version, and other tools you might need
build:
  os: ubuntu-24.04
  tools:
    python: "3.13"

# Build documentation in the "docs/" directory with Sphinx
sphinx:
  configuration: docs/conf.py

# Optionally, but recommended,
# declare the Python requirements required to build your documentation
# See https://docs.readthedocs.io/en/stable/guides/reproducible-builds.html
# python:
#   install:
#     - requirements: docs/requirements.txt
```
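The `sphinx` section above points at `docs/conf.py`, which is not part of this commit. For orientation only, a minimal `conf.py` might look like the following — the project metadata, extension list, and theme here are assumptions, not taken from the repository:

```python
# docs/conf.py -- minimal Sphinx configuration sketch.
# Illustrative only: the real file is not shown in this commit,
# so every value below is an assumption.
project = "FIT"
author = "Klus3kk"
release = "0.1.0"  # hypothetical version string

extensions = [
    "sphinx.ext.autodoc",   # pull API docs from docstrings
    "sphinx.ext.napoleon",  # Google/NumPy-style docstring support
]

html_theme = "alabaster"  # Sphinx's default theme
```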

CHANGELOG.md

Lines changed: 0 additions & 107 deletions
This file was deleted.

README.md

Lines changed: 101 additions & 45 deletions
````diff
@@ -1,73 +1,129 @@
-# FIT: Flexible and Interpretable Training Library
+# FIT
 
-[![Tests](https://github.com/Klus3kk/fit/actions/workflows/ci.yml/badge.svg)](https://github.com/Klus3kk/fit/actions/workflows/ci.yml)
-[![codecov](https://codecov.io/gh/Klus3kk/fit/branch/main/graph/badge.svg)](https://codecov.io/gh/Klus3kk/fit)
-[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
+A PyTorch-like machine learning library built from scratch with NumPy. Train neural networks with automatic differentiation, no dependencies beyond NumPy.
 
-FIT is a lightweight machine learning library built from scratch in Python with NumPy. It provides a PyTorch-like API with automatic differentiation and neural network components.
+[Documentation](https://fit.readthedocs.io/) | [Examples](examples/)
 
-## Features
+## Why FIT?
 
-- **Automatic Differentiation**: Build and train neural networks with automatic differentiation
-- **Neural Network Components**: Linear layers, activations (ReLU, Softmax), BatchNorm, Dropout
-- **Optimizers**: SGD, SGD with momentum, Adam
-- **Training Utilities**: Training loop, learning rate schedulers, gradient clipping
-- **Monitoring**: Training metrics tracking and visualization
-- **Model I/O**: Save and load your trained models
+- **Lightweight**: Only requires NumPy
+- **Educational**: Understand ML from first principles
+- **Familiar API**: PyTorch-like interface
+- **Production ready**: Type hints, logging, proper error handling
````

````diff
 ## Installation
 
-### Using pip (recommended)
-
 ```bash
 pip install git+https://github.com/Klus3kk/fit.git
 ```
 
````
````diff
-### From source
+## Example
 
-```bash
-git clone https://github.com/Klus3kk/fit.git
-cd fit
-pip install -e .
+Solve XOR problem:
+
+```python
+from fit.core.tensor import Tensor
+from fit.nn.modules.container import Sequential
+from fit.nn.modules.linear import Linear
+from fit.nn.modules.activation import ReLU
+from fit.loss.regression import MSELoss
+from fit.optim.adam import Adam
+
+# XOR dataset
+X = Tensor([[0, 0], [0, 1], [1, 0], [1, 1]])
+y = Tensor([[0], [1], [1], [0]])
+
+# Model
+model = Sequential(
+    Linear(2, 8),
+    ReLU(),
+    Linear(8, 1)
+)
+
+# Training
+loss_fn = MSELoss()
+optimizer = Adam(model.parameters(), lr=0.01)
+
+for epoch in range(1000):
+    pred = model(X)
+    loss = loss_fn(pred, y)
+
+    optimizer.zero_grad()
+    loss.backward()
+    optimizer.step()
+
+    if epoch % 200 == 0:
+        print(f"Loss: {loss.data:.4f}")
+
+# Test
+print(f"Predictions: {model(X).data}")
 ```
````
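An editorial aside on the XOR example: the hidden `Linear(2, 8)` layer is essential, because XOR is not linearly separable. A quick pure-NumPy check (independent of FIT) makes this concrete — the best possible linear fit, found by least squares with a bias term, leaves every prediction stuck at 0.5:

```python
import numpy as np

# XOR inputs with a bias column appended
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

# Best linear fit: w minimizing ||Xw - y||^2
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# All four predictions come out at 0.5 -- no linear model separates XOR
print(X @ w)
```

This is why the README's model sandwiches a ReLU between two linear layers: the nonlinearity is what lets the network carve the plane into regions a single line cannot.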

````diff
-### Using Docker
+## What's included
 
-```bash
-docker-compose up fit-ml
+**Core**: Tensors with autograd, just like PyTorch
+```python
+x = Tensor([1, 2, 3], requires_grad=True)
+y = x.sum()
+y.backward()  # x.grad is now [1, 1, 1]
 ```
 
````
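How can `backward()` fill `x.grad` with ones, as the snippet above shows? Here is a minimal pure-NumPy sketch of reverse-mode autograd for a `sum` node — an illustration of the mechanism, not FIT's actual implementation:

```python
import numpy as np

class TinyTensor:
    """Minimal reverse-mode autograd tensor (illustrative, not FIT's code)."""

    def __init__(self, data, requires_grad=False):
        self.data = np.asarray(data, dtype=float)
        self.grad = None
        self.requires_grad = requires_grad
        self._backward = lambda grad: None  # hook filled in by the op that made us

    def sum(self):
        out = TinyTensor(self.data.sum(), requires_grad=self.requires_grad)

        def _backward(grad):
            # d(sum)/dx_i = 1 for every element, so the upstream gradient
            # simply broadcasts to the input's shape
            self._accumulate(np.ones_like(self.data) * grad)

        out._backward = _backward
        return out

    def _accumulate(self, grad):
        if self.requires_grad:
            self.grad = grad if self.grad is None else self.grad + grad
        self._backward(grad)  # keep propagating toward the leaves

    def backward(self):
        # Seed the backward pass with a gradient of ones
        self._accumulate(np.ones_like(self.data))

x = TinyTensor([1, 2, 3], requires_grad=True)
y = x.sum()
y.backward()
print(x.grad)  # [1. 1. 1.]
```

Each operation records a closure that knows how to push gradients back to its inputs; calling `backward()` on the output just triggers that chain — the same idea PyTorch-style libraries implement with a full computation graph.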
````diff
-## Quick Start
+**Layers**: Linear, activations, normalization, attention
+```python
+from fit.nn.modules.activation import ReLU, GELU
+from fit.nn.modules.normalization import BatchNorm1d
+```
+
+**Optimizers**: SGD, Adam, and advanced ones like SAM
+```python
+from fit.optim.adam import Adam
+from fit.optim.experimental.sam import SAM
+```
+
````
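For context on the SAM import above: SAM (Sharpness-Aware Minimization) perturbs the weights toward the locally worst-case point within a small L2 ball before taking the descent step, which biases training toward flat minima. A self-contained NumPy sketch of that two-step update on a toy quadratic — illustrative only, not FIT's SAM API:

```python
import numpy as np

def sam_step(w, grad_fn, base_lr=0.1, rho=0.05):
    """One SAM update on parameter vector w (a sketch, not FIT's interface).

    SAM first climbs to the worst-case point within an L2 ball of radius rho,
    evaluates the gradient there, then applies an ordinary SGD step from the
    original weights using that "sharpness-aware" gradient.
    """
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # ascent direction, radius rho
    g_sharp = grad_fn(w + eps)                   # gradient at the perturbed point
    return w - base_lr * g_sharp                 # descend from the original w

# Toy objective: f(w) = ||w||^2 / 2, whose gradient is w itself
grad_fn = lambda w: w
w = np.array([1.0, -2.0])
for _ in range(100):
    w = sam_step(w, grad_fn)
print(w)  # close to [0, 0]
```

The cost of this robustness is two gradient evaluations per step, which is why SAM typically lives in an "experimental" or advanced-optimizer namespace.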
````diff
+**Simple API**: For quick experiments
+```python
+from fit.simple.models import Classifier
+model = Classifier(input_size=784, num_classes=10)
+```
+
````
````diff
+## MNIST example
 
 ```python
+import numpy as np
 from fit.core.tensor import Tensor
-from fit.nn.sequential import Sequential
-from fit.nn.linear import Linear
-from fit.nn.activations import ReLU, Softmax
-from fit.train.loss import CrossEntropyLoss
-from fit.train.optim import Adam
-from fit.train.trainer import Trainer
-
-# Create a simple model for binary classification
+from fit.nn.modules.container import Sequential
+from fit.nn.modules.linear import Linear
+from fit.nn.modules.activation import ReLU
+from fit.optim.adam import Adam
+from fit.loss.classification import CrossEntropyLoss
+
+# Create MNIST classifier
 model = Sequential(
-    Linear(2, 4),
+    Linear(784, 128),
+    ReLU(),
+    Linear(128, 64),
     ReLU(),
-    Linear(4, 2),
-    Softmax()
+    Linear(64, 10)
 )
 
-# Prepare data
-X = Tensor([[0, 0], [0, 1], [1, 0], [1, 1]])  # XOR problem
-y = Tensor([0, 1, 1, 0])
+# Generate some dummy data
+X_train = Tensor(np.random.randn(100, 784))
+y_train = Tensor(np.random.randint(0, 10, (100,)))
 
-# Define loss function and optimizer
+optimizer = Adam(model.parameters(), lr=0.001)
 loss_fn = CrossEntropyLoss()
-optimizer = Adam(model.parameters(), lr=0.01)
-
-# Create trainer and train
-trainer = Trainer(model, loss_fn, optimizer)
-trainer.fit(X, y, epochs=100)
 
-# Make predictions
-predictions = model(X)
+# Training loop
+for epoch in range(50):
+    pred = model(X_train)
+    loss = loss_fn(pred, y_train)
+
+    optimizer.zero_grad()
+    loss.backward()
+    optimizer.step()
+
+    if epoch % 10 == 0:
+        print(f"Epoch {epoch}, Loss: {loss.data:.4f}")
 ```
+
+Perfect for learning how neural networks work under the hood, or when you need a lightweight ML library without the complexity of PyTorch.
````
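One note on the MNIST example above: the new model ends in raw logits with no `Softmax` layer (the old version's trailing `Softmax()` was removed), so a cross-entropy loss of this kind typically applies a numerically stable log-softmax internally. An illustrative NumPy version of that computation — not FIT's implementation:

```python
import numpy as np

def cross_entropy_from_logits(logits, targets):
    """Mean cross-entropy over a batch, computed directly from raw logits.

    logits:  (batch, num_classes) float array
    targets: (batch,) integer class indices
    """
    # Stable log-softmax: subtract each row's max before exponentiating
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # Negative log-probability of each sample's true class, averaged
    return -log_probs[np.arange(len(targets)), targets].mean()

logits = np.array([[2.0, 0.5, -1.0],
                   [0.1, 3.0, 0.2]])
targets = np.array([0, 1])
print(cross_entropy_from_logits(logits, targets))
```

Folding the softmax into the loss this way avoids computing `log(softmax(x))` in two steps, which underflows for large negative logits — the same reason PyTorch's `CrossEntropyLoss` also expects raw logits.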

0 commit comments