A lightweight neural network implementation built from scratch using only NumPy. Fawern-NN provides a simple, intuitive interface for building, training, and evaluating neural networks with minimal dependencies.
- Pure Python implementation with minimal dependencies (NumPy, Matplotlib, scikit-learn)
- Keras-inspired API for easy model building and training
- Support for various activation functions:
  - Sigmoid
  - Tanh
  - ReLU
  - Leaky ReLU
  - Softmax
  - Linear
- Customizable network architecture with flexible layer definitions
- Support for batch training
- Built-in evaluation metrics and visualization tools
- Extensible design for adding custom activation functions
```bash
pip install fawern-nn
```
A quick-start example that learns the XOR function:

```python
import numpy as np
from fawern_nn.nn import Layers, NInput, NLayer

# XOR problem
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])

# Create model
model = Layers()

# Add layers
model.add(NInput(2))
model.add(NLayer(4, activation='tanh'))
model.add(NLayer(4, activation='tanh'))
model.add(NLayer(1, activation='sigmoid'))

# Train model
model.train_model(X, y, loss_type='categorical', iterations=10000, learning_rate=0.1, batch_size=4)

# Evaluate model
accuracy, conf_matrix = model.evaluate_trained_model()
print(f"Accuracy: {accuracy}")
print(f"Confusion Matrix:\n{conf_matrix}")

# Visualize training progress
model.show_loss_graph()
```
`Layers` is the main model container for building and training neural networks.

```python
model = Layers()
```
Methods:

- `add(layer)`: Add a layer to the model
- `train_model(x, y, loss_type, iterations, learning_rate, batch_size)`: Train the model
  - `x`: Input data (NumPy array)
  - `y`: Target data (NumPy array)
  - `loss_type`: Type of loss function (`'categorical'`, `'mse'`, `'mae'`)
  - `iterations`: Number of training iterations
  - `learning_rate`: Learning rate for weight updates
  - `batch_size`: Size of batches for training
- `evaluate_trained_model()`: Evaluate model performance
- `show_loss_graph()`: Visualize training loss over iterations
- `predict_input()`: Get model predictions
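A hypothetical call sketch for `predict_input`; the source does not document its signature, so the assumption here is that it takes a NumPy array of input rows:

```python
# Assumption: predict_input accepts a NumPy array of input rows and
# returns the network's output for each row.
preds = model.predict_input(np.array([[0, 1], [1, 1]]))
print(preds)  # for the XOR model above, values near [[1], [0]]
```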
`NInput` is the input layer specification.

```python
input_layer = NInput(input_shape)
```

Parameters:

- `input_shape`: Number of input features
`NLayer` is the standard neural network layer.

```python
layer = NLayer(num_neurons, activation='linear', use_bias=True)
```

Parameters:

- `num_neurons`: Number of neurons in the layer
- `activation`: Activation function (default: `'linear'`)
- `use_bias`: Whether to use bias (default: `True`)
- `function_name`: Optional name for a custom activation function
- `function_formula`: Optional formula for a custom activation function
Methods:

- `set_weights(output_shape, new_weights)`: Set layer weights
- `get_weights()`: Get layer weights
- `set_activation(activation)`: Set activation function
- `get_activation()`: Get activation function
- `forward(input_data)`: Perform forward propagation
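A small sketch of the accessor methods, assuming `set_activation` accepts the same string names as the constructor and `get_activation` returns the current setting:

```python
from fawern_nn.nn import NLayer

# Assumption: set_activation takes the same string names as the
# constructor; get_activation returns the current activation setting.
layer = NLayer(8)               # defaults to 'linear'
layer.set_activation('relu')    # switch to ReLU
print(layer.get_activation())
```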
`FlattenLayer` flattens multi-dimensional input.

```python
flatten = FlattenLayer()
```

Methods:

- `forward(input_data)`: Flatten input data
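A minimal usage sketch, assuming `FlattenLayer` is importable from `fawern_nn.nn` alongside the other layers and that `forward` collapses all non-sample dimensions into one:

```python
import numpy as np
from fawern_nn.nn import FlattenLayer

flatten = FlattenLayer()
batch = np.random.randn(4, 28, 28)  # 4 images of 28x28
flat = flatten.forward(batch)       # expected shape: (4, 784)
```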
The `ActivationFunctions` class provides various activation functions:

- `sigmoid`: Sigmoid activation (outputs in 0 to 1)
- `tanh`: Hyperbolic tangent (outputs in -1 to 1)
- `relu`: Rectified Linear Unit (`max(0, x)`)
- `leaky_relu`: Leaky ReLU (small slope for negative inputs)
- `linear`: Linear/identity function
- `softmax`: Softmax function for multi-class classification
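For reference, the standard NumPy forms of these functions are sketched below; the library's own implementations may differ in detail (e.g. the leaky slope or softmax stabilization). `tanh` is `np.tanh` and `linear` is the identity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))                # squashes to (0, 1)

def relu(x):
    return np.maximum(0, x)                        # max(0, x)

def leaky_relu(x, alpha=0.01):                     # alpha=0.01 is an assumed default
    return np.where(x > 0, x, alpha * x)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))  # subtract max for numerical stability
    return e / e.sum(axis=-1, keepdims=True)
```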
Custom activation functions can be registered at runtime:

```python
from fawern_nn.nn import ActivationFunctions

# Create activation functions instance
activations = ActivationFunctions()

# Define custom function
def custom_activation(x):
    return x**2

# Define its derivative
def custom_activation_derivative(x):
    return 2*x

# Add to available functions
activations.add_activation_function('custom', custom_activation)
activations.add_activation_function('custom_derivative', custom_activation_derivative)
```
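The source does not show how a registered name is consumed; one plausible pattern, assuming the custom name can then be passed as a layer's `activation` like the built-in names, is:

```python
from fawern_nn.nn import Layers, NInput, NLayer

# Assumption: a registered custom name works wherever built-in
# activation names like 'relu' are accepted.
model = Layers()
model.add(NInput(2))
model.add(NLayer(4, activation='custom'))
model.add(NLayer(1, activation='sigmoid'))
```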
Example: binary classification on a synthetic dataset.

```python
import numpy as np
from fawern_nn.nn import Layers, NInput, NLayer

# Create binary classification dataset
X = np.random.randn(100, 2)
y = np.array([(1 if x[0] + x[1] > 0 else 0) for x in X]).reshape(-1, 1)

# Create model
model = Layers()
model.add(NInput(2))
model.add(NLayer(4, activation='relu'))
model.add(NLayer(1, activation='sigmoid'))

# Train model
model.train_model(X, y, loss_type='categorical', iterations=1000, learning_rate=0.01)

# Evaluate and visualize
accuracy, conf_matrix = model.evaluate_trained_model()
print(f"Accuracy: {accuracy}")
model.show_loss_graph()
```
Example: regression on noisy sine data.

```python
import numpy as np
from fawern_nn.nn import Layers, NInput, NLayer

# Create regression dataset
X = np.linspace(-5, 5, 100).reshape(-1, 1)
y = np.sin(X) + 0.1 * np.random.randn(100, 1)

# Create model
model = Layers()
model.add(NInput(1))
model.add(NLayer(10, activation='tanh'))
model.add(NLayer(1, activation='linear'))

# Train model
model.train_model(X, y, loss_type='mse', iterations=2000, learning_rate=0.005)

# Evaluate
mse = model.evaluate_trained_model()
print(f"Mean Squared Error: {mse}")
model.show_loss_graph()
```
Example: multi-class classification on random MNIST-sized data.

```python
import numpy as np
from fawern_nn.nn import Layers, NInput, NLayer

# Create multi-class dataset (simplified MNIST-like)
X = np.random.randn(500, 28*28)                     # 28x28 flattened images
y = np.eye(10)[np.random.randint(0, 10, size=500)]  # One-hot encoded labels

# Create model
model = Layers()
model.add(NInput(28*28))
model.add(NLayer(128, activation='relu'))
model.add(NLayer(64, activation='relu'))
model.add(NLayer(10, activation='softmax'))

# Train model
model.train_model(X, y, loss_type='categorical', iterations=50, learning_rate=0.001, batch_size=32)

# Evaluate model
accuracy, conf_matrix = model.evaluate_trained_model()
print(f"Accuracy: {accuracy}")
model.show_loss_graph()
```
Fawern-NN implements traditional backpropagation for training neural networks:
- Forward pass through all layers
- Calculate error at output layer
- Propagate error backward through the network
- Update weights based on calculated gradients
The implementation supports mini-batch training for better performance on larger datasets.
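As a compact illustration of those four steps for a single sigmoid output layer trained under MSE (standard gradient-descent math, not the library's exact code):

```python
import numpy as np

def sgd_step(W, b, X, y, lr=0.1):
    z = X @ W + b                      # 1. forward pass through the layer
    a = 1.0 / (1.0 + np.exp(-z))       #    ... and its sigmoid activation
    err = a - y                        # 2. error at the output layer
    grad_z = err * a * (1 - a)         # 3. propagate error back through the activation
    W -= lr * X.T @ grad_z / len(X)    # 4. update weights from the gradients
    b -= lr * grad_z.mean(axis=0)
    return W, b
```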
Supported loss types:

- `categorical`: For classification problems (uses accuracy and a confusion matrix for evaluation)
- `mse`: Mean Squared Error, for regression problems
- `mae`: Mean Absolute Error, for regression problems
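For reference, the two regression losses have the standard definitions below; the exact form of the `categorical` loss is not stated in the source (cross-entropy is typical):

```python
import numpy as np

def mse(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)   # mean squared error

def mae(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))  # mean absolute error
```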
- Python 3.7+
- NumPy >= 1.19.0
- Matplotlib >= 3.3.0
- scikit-learn >= 0.24.0
MIT License
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
Fawern - GitHub