
Commit 318efc4

Merge pull request #393 from apphp/392-convert-layers-to-numpower
392 convert layers to numpower
2 parents 05ea6b4 + 0ce683c

File tree

46 files changed: +5374 additions, −68 deletions

Lines changed: 4 additions & 4 deletions

@@ -1,4 +1,4 @@
-<span style="float:right;"><a href="https://github.com/RubixML/ML/blob/master/src/NeuralNet/Layers/Activation.php">[source]</a></span>
+<span style="float:right;"><a href="https://github.com/RubixML/ML/blob/master/src/NeuralNet/Layers/Activation/Activation.php">[source]</a></span>
 
 # Activation
 Activation layers apply a user-defined non-linear activation function to their inputs. They often work in conjunction with [Dense](dense.md) layers as a way to transform their output.
@@ -10,8 +10,8 @@ Activation layers apply a user-defined non-linear activation function to their i
 
 ## Example
 ```php
-use Rubix\ML\NeuralNet\Layers\Activation;
-use Rubix\ML\NeuralNet\ActivationFunctions\ReLU;
+use Rubix\ML\NeuralNet\Layers\Activation\Activation;
+use Rubix\ML\NeuralNet\ActivationFunctions\ReLU\ReLU;
 
 $layer = new Activation(new ReLU());
-```
+```
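For intuition, the ReLU activation wired into the example above clamps negative inputs to zero and passes positive inputs through unchanged. A minimal Python sketch of that element-wise transform (an illustration, not Rubix ML code):

```python
def relu(inputs):
    """Element-wise ReLU: max(0, x), as applied by an Activation layer."""
    return [max(0.0, x) for x in inputs]

out = relu([-2.0, -0.5, 0.0, 1.5, 3.0])  # negatives clamped to 0.0
```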

docs/neural-network/hidden-layers/batch-norm.md

Lines changed: 5 additions & 5 deletions

@@ -1,4 +1,4 @@
-<span style="float:right;"><a href="https://github.com/RubixML/ML/blob/master/src/NeuralNet/Layers/BatchNorm.php">[source]</a></span>
+<span style="float:right;"><a href="https://github.com/RubixML/ML/blob/master/src/NeuralNet/Layers/BatchNorm/BatchNorm.php">[source]</a></span>
 
 # Batch Norm
 Batch Norm layers normalize the activations of the previous layer such that the mean activation is *close* to 0 and the standard deviation is *close* to 1. Adding Batch Norm reduces the amount of covariate shift within the network which makes it possible to use higher learning rates and thus converge faster under some circumstances.
@@ -12,12 +12,12 @@ Batch Norm layers normalize the activations of the previous layer such that the
 
 ## Example
 ```php
-use Rubix\ML\NeuralNet\Layers\BatchNorm;
-use Rubix\ML\NeuralNet\Initializers\Constant;
-use Rubix\ML\NeuralNet\Initializers\Normal;
+use Rubix\ML\NeuralNet\Layers\BatchNorm\BatchNorm;
+use Rubix\ML\NeuralNet\Initializers\Constant\Constant;
+use Rubix\ML\NeuralNet\Initializers\Normal\Normal;
 
 $layer = new BatchNorm(0.7, new Constant(0.), new Normal(1.));
 ```
 
 ## References
-[^1]: S. Ioffe et al. (2015). Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.
+[^1]: S. Ioffe et al. (2015). Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.
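The normalization this page describes can be sketched in a few lines of Python: center a batch to zero mean and unit variance, then apply the learnable scale (gamma) and shift (beta). A simplified single-feature illustration, not the library's implementation:

```python
import math

def batch_norm(batch, gamma=1.0, beta=0.0, epsilon=1e-8):
    """Normalize a batch of activations to ~zero mean and ~unit variance,
    then scale by gamma and shift by beta (the learnable parameters)."""
    mean = sum(batch) / len(batch)
    var = sum((x - mean) ** 2 for x in batch) / len(batch)
    return [gamma * (x - mean) / math.sqrt(var + epsilon) + beta for x in batch]

out = batch_norm([1.0, 2.0, 3.0, 4.0])  # mean ~0, variance ~1
```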

docs/neural-network/hidden-layers/dense.md

Lines changed: 3 additions & 3 deletions

@@ -1,4 +1,4 @@
-<span style="float:right;"><a href="https://github.com/RubixML/ML/blob/master/src/NeuralNet/Layers/Dense.php">[source]</a></span>
+<span style="float:right;"><a href="https://github.com/RubixML/ML/blob/master/src/NeuralNet/Layers/Dense/Dense.php">[source]</a></span>
 
 # Dense
 Dense (or *fully connected*) hidden layers are layers of neurons that connect to each node in the previous layer by a parameterized synapse. They perform a linear transformation on their input and are usually followed by an [Activation](activation.md) layer. The majority of the trainable parameters in a standard feed-forward neural network are contained within Dense hidden layers.
@@ -14,9 +14,9 @@ Dense (or *fully connected*) hidden layers are layers of neurons that connect to
 
 ## Example
 ```php
-use Rubix\ML\NeuralNet\Layers\Dense;
+use Rubix\ML\NeuralNet\Layers\Dense\Dense;
 use Rubix\ML\NeuralNet\Initializers\He;
 use Rubix\ML\NeuralNet\Initializers\Constant;
 
 $layer = new Dense(100, 1e-4, true, new He(), new Constant(0.0));
-```
+```
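The linear transformation a Dense layer performs is y = Wx + b for each output node. A tiny Python sketch of the forward pass (illustrative only, not the library's vectorized implementation):

```python
def dense_forward(weights, biases, inputs):
    """y_i = sum_j W[i][j] * x[j] + b[i] -- the linear transformation
    a Dense layer applies to its input vector."""
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

out = dense_forward([[1.0, 2.0], [0.5, -1.0]], [0.0, 1.0], [3.0, 4.0])
# out[0] = 1*3 + 2*4 + 0 = 11.0 ; out[1] = 0.5*3 - 1*4 + 1 = -1.5
```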

docs/neural-network/hidden-layers/dropout.md

Lines changed: 3 additions & 3 deletions

@@ -1,4 +1,4 @@
-<span style="float:right;"><a href="https://github.com/RubixML/ML/blob/master/src/NeuralNet/Layers/Dropout.php">[source]</a></span>
+<span style="float:right;"><a href="https://github.com/RubixML/ML/blob/master/src/NeuralNet/Layers/Dropout/Dropout.php">[source]</a></span>
 
 # Dropout
 Dropout is a regularization technique to reduce overfitting in neural networks by preventing complex co-adaptations on training data. It works by temporarily disabling output nodes during each training pass. It also acts as an efficient way of performing model averaging with the parameters of neural networks.
@@ -10,10 +10,10 @@ Dropout is a regularization technique to reduce overfitting in neural networks b
 
 ## Example
 ```php
-use Rubix\ML\NeuralNet\Layers\Dropout;
+use Rubix\ML\NeuralNet\Layers\Dropout\Dropout;
 
 $layer = new Dropout(0.2);
 ```
 
 ## References
-[^1]: N. Srivastava et al. (2014). Dropout: A Simple Way to Prevent Neural Networks from Overfitting.
+[^1]: N. Srivastava et al. (2014). Dropout: A Simple Way to Prevent Neural Networks from Overfitting.
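The temporary disabling of nodes described above is commonly implemented as *inverted dropout*: each activation is zeroed with probability equal to the ratio, and survivors are scaled by 1/(1-ratio) so the expected sum is unchanged. A hedged Python sketch (the scaling convention is the standard one, not necessarily Rubix ML's exact scheme):

```python
import random

def dropout(inputs, ratio=0.2, rng=random.Random(42)):
    """Zero each activation with probability `ratio`; scale survivors
    by 1/(1-ratio) so the expected magnitude is preserved."""
    scale = 1.0 / (1.0 - ratio)
    return [x * scale if rng.random() >= ratio else 0.0 for x in inputs]

out = dropout([1.0] * 10, ratio=0.2)  # each entry is 0.0 or 1.25
```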

docs/neural-network/hidden-layers/noise.md

Lines changed: 3 additions & 3 deletions

@@ -1,4 +1,4 @@
-<span style="float:right;"><a href="https://github.com/RubixML/ML/blob/master/src/NeuralNet/Layers/Noise.php">[source]</a></span>
+<span style="float:right;"><a href="https://github.com/RubixML/ML/blob/master/src/NeuralNet/Layers/Noise/Noise.php">[source]</a></span>
 
 # Noise
 This layer adds random Gaussian noise to the inputs with a user-defined standard deviation. Noise added to neural network activations acts as a regularizer by indirectly adding a penalty to the weights through the cost function in the output layer.
@@ -10,10 +10,10 @@ This layer adds random Gaussian noise to the inputs with a user-defined standard
 
 ## Example
 ```php
-use Rubix\ML\NeuralNet\Layers\Noise;
+use Rubix\ML\NeuralNet\Layers\Noise\Noise;
 
 $layer = new Noise(1e-3);
 ```
 
 ## References
-[^1]: C. Gulcehre et al. (2016). Noisy Activation Functions.
+[^1]: C. Gulcehre et al. (2016). Noisy Activation Functions.
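The layer's forward pass amounts to adding a zero-mean Gaussian draw to each input, with the standard deviation given by the constructor argument (1e-3 in the example). A minimal Python sketch, not Rubix ML code:

```python
import random

def gaussian_noise(inputs, std_dev=1e-3, rng=random.Random(0)):
    """Add zero-mean Gaussian noise with the given standard deviation
    to each activation, leaving the expected value unchanged."""
    return [x + rng.gauss(0.0, std_dev) for x in inputs]

out = gaussian_noise([0.5, -1.0, 2.0])  # each value perturbed slightly
```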
Lines changed: 17 additions & 0 deletions

@@ -0,0 +1,17 @@
+<span style="float:right;"><a href="https://github.com/RubixML/ML/blob/master/src/NeuralNet/Layers/Placeholder1D/Placeholder1D.php">[source]</a></span>
+
+# Placeholder 1D
+
+The Placeholder 1D input layer represents the future input values of a mini-batch (matrix) of one-dimensional tensors (vectors) to the neural network. It performs shape validation on the input and then forwards it unchanged to the next layer.
+
+## Parameters
+| # | Name | Default | Type | Description |
+|---|---|---|---|---|
+| 1 | inputs | | int | The number of input nodes (features). |
+
+## Example
+```php
+use Rubix\ML\NeuralNet\Layers\Placeholder1D\Placeholder1D;
+
+$layer = new Placeholder1D(10);
+```
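The validate-then-forward behavior described above can be sketched in Python (the function name and error message are illustrative, not the library's API):

```python
def placeholder_1d(batch, inputs):
    """Check that every sample in the mini-batch has exactly `inputs`
    features, then forward the batch unchanged to the next layer."""
    for sample in batch:
        if len(sample) != inputs:
            raise ValueError(f"expected {inputs} features, got {len(sample)}")
    return batch

out = placeholder_1d([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]], inputs=3)
```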

docs/neural-network/hidden-layers/prelu.md

Lines changed: 3 additions & 3 deletions

@@ -1,4 +1,4 @@
-<span style="float:right;"><a href="https://github.com/RubixML/ML/blob/master/src/NeuralNet/Layers/PReLU.php">[source]</a></span>
+<span style="float:right;"><a href="https://github.com/RubixML/ML/blob/master/src/NeuralNet/Layers/PReLU/PReLU.php">[source]</a></span>
 
 # PReLU
 Parametric Rectified Linear Units are leaky rectifiers whose *leakage* coefficient is learned during training. Unlike standard [Leaky ReLUs](../activation-functions/leaky-relu.md) whose leakage remains constant, PReLU layers can adjust the leakage to better suit the model on a per-node basis.
@@ -14,8 +14,8 @@ $$
 
 ## Example
 ```php
-use Rubix\ML\NeuralNet\Layers\PReLU;
-use Rubix\ML\NeuralNet\Initializers\Normal;
+use Rubix\ML\NeuralNet\Layers\PReLU\PReLU;
+use Rubix\ML\NeuralNet\Initializers\Normal\Normal;
 
 $layer = new PReLU(new Normal(0.5));
 ```
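The leaky rectifier itself is simple: positive inputs pass through, negative inputs are scaled by the learned coefficient alpha. A Python sketch with alpha fixed for illustration (in the layer it is a trainable per-node parameter):

```python
def prelu(inputs, alpha=0.25):
    """PReLU: x if x > 0, else alpha * x. In the layer, alpha is
    learned during training rather than held constant."""
    return [x if x > 0.0 else alpha * x for x in inputs]

out = prelu([-4.0, 0.0, 2.0], alpha=0.25)  # negatives leak at rate alpha
```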

docs/neural-network/hidden-layers/swish.md

Lines changed: 3 additions & 3 deletions

@@ -1,4 +1,4 @@
-<span style="float:right;"><a href="https://github.com/RubixML/ML/blob/master/src/NeuralNet/Layers/Swish.php">[source]</a></span>
+<span style="float:right;"><a href="https://github.com/RubixML/ML/blob/master/src/NeuralNet/Layers/Swish/Swish.php">[source]</a></span>
 
 # Swish
 Swish is a parametric activation layer that utilizes smooth rectified activation functions. The trainable *beta* parameter allows each activation function in the layer to tailor its output to the training set by interpolating between the linear function and ReLU.
@@ -10,8 +10,8 @@ Swish is a parametric activation layer that utilizes smooth rectified activation
 
 ## Example
 ```php
-use Rubix\ML\NeuralNet\Layers\Swish;
-use Rubix\ML\NeuralNet\Initializers\Constant;
+use Rubix\ML\NeuralNet\Layers\Swish\Swish;
+use Rubix\ML\NeuralNet\Initializers\Constant\Constant;
 
 $layer = new Swish(new Constant(1.0));
 ```
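The interpolation the page describes comes from the Swish formula x * sigmoid(beta * x): at beta = 0 the output is the scaled linear function x/2, and as beta grows it approaches ReLU. A Python sketch with beta fixed (in the layer, beta is trainable):

```python
import math

def swish(inputs, beta=1.0):
    """Swish: x * sigmoid(beta * x). Small beta behaves near-linear;
    large beta approaches ReLU."""
    return [x / (1.0 + math.exp(-beta * x)) for x in inputs]

out = swish([3.0], beta=0.0)  # sigmoid(0) = 0.5, so output is x/2
```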

phpunit.xml

Lines changed: 1 addition & 0 deletions

@@ -83,5 +83,6 @@
   </testsuites>
   <php>
     <env name="ENV" value="testing"/>
+    <ini name="memory_limit" value="256M"/>
   </php>
 </phpunit>

src/NeuralNet/Initializers/He/HeNormal.php

Lines changed: 1 addition & 1 deletion

@@ -35,7 +35,7 @@ public function initialize(int $fanIn, int $fanOut) : NDArray
 
     $stdDev = sqrt(2 / $fanOut);
 
-    return NumPower::truncatedNormal(size: [$fanOut, $fanIn], scale: $stdDev);
+    return NumPower::truncatedNormal(size: [$fanOut, $fanIn], loc: 0.0, scale: $stdDev);
 }
 
 /**
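The fix above makes the distribution's mean explicit by passing loc: 0.0. For intuition, a pure-Python sketch of what this initializer computes: a weight matrix drawn from a truncated normal with standard deviation sqrt(2 / fanOut), mirroring the code. The two-standard-deviation truncation bound is an assumption for illustration, not taken from NumPower:

```python
import math
import random

def he_normal(fan_in, fan_out, rng=random.Random(7)):
    """He normal initialization sketch: draw each weight from a normal
    with mean 0 and std dev sqrt(2 / fanOut), re-sampling any draw
    beyond two standard deviations (assumed truncation bound)."""
    std_dev = math.sqrt(2.0 / fan_out)

    def draw():
        while True:
            v = rng.gauss(0.0, std_dev)
            if abs(v) <= 2.0 * std_dev:
                return v

    return [[draw() for _ in range(fan_in)] for _ in range(fan_out)]

weights = he_normal(4, 8)  # fanOut x fanIn matrix, std dev sqrt(2/8) = 0.5
```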
