# Neural Demo - Using neural.slang

This demo showcases how to use Slang's `neural.slang` standard module to build a neural network for image reconstruction. The network learns to map UV coordinates to RGB colors, reconstructing a reference image through gradient-based optimization.
It is a re-creation of the texture example from the neural-shading-s25 course (https://github.com/shader-slang/neural-shading-s25).

## neural.slang Types Used

| Type | Description |
|------|-------------|
| `InlineVector<T, N>` | Fixed-size vector type with compile-time `.Size` constant |
| `StructuredBufferStorage<T>` | GPU buffer storage implementing the `IStorage<T>` interface |
| `FFLayer<T, InVec, OutVec, Storage, Activation, HasBias>` | Feed-forward neural network layer |
| `IdentityActivation<T>` | Pass-through activation (no transformation) |
| `NoParam()` | Empty parameter for activations that don't need configuration |

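Taken together, these types compose into a single-layer evaluation like the following sketch. The constructor argument order and `eval` signature are inferred from the examples later in this README, so treat this as illustrative rather than authoritative:

```slang
import neural;

typealias Vec4 = InlineVector<float, 4>;
typealias Vec32 = InlineVector<float, 32>;
typealias Storage = StructuredBufferStorage<float>;
typealias Act = IdentityActivation<float>;
typealias Layer0Type = FFLayer<float, Vec4, Vec32, Storage, Act, true>;

// Wrap a raw parameter buffer and evaluate one feed-forward layer.
// `params` is assumed to be bound by the host (e.g. via slangpy).
Vec32 evalLayer0(RWStructuredBuffer<float> params, Vec4 input)
{
    let storage = Storage(params);
    // Weights start at offset 0; biases follow the Vec4.Size * Vec32.Size weights.
    let layer = Layer0Type(storage, 0u, Vec4.Size * Vec32.Size);
    return layer.eval(NoParam(), input);
}
```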
## Before/After Comparison

This section compares the original hand-written implementation with the version built on the neural.slang APIs.

### Lines of Code

Approximate line counts, excluding comments.

| File Type | Original | neural.slang |
|-----------|----------|--------------|
| Slang | 171 | 104 |
| Python | 103 | 66 |

### Vector Types

| Before (Manual) | After (neural.slang) |
|-----------------|---------------------|
| `float[4]` / `float4` | `InlineVector<float, 4>` |
| `float[32]` | `InlineVector<float, 32>` |
| `float[3]` / `float3` | `InlineVector<float, 3>` |
| Manual size tracking | `Vec4.Size` compile-time constant |

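For example, the compile-time `.Size` constant lets layer dimensions be derived from the vector aliases instead of being tracked by hand (a sketch using the aliases this demo defines):

```slang
typealias Vec4 = InlineVector<float, 4>;
typealias Vec32 = InlineVector<float, 32>;

// Dimensions derived from the types, not hardcoded numbers:
static const uint INPUT_SIZE = Vec4.Size;    // 4
static const uint HIDDEN_SIZE = Vec32.Size;  // 32
static const uint LAYER0_WEIGHTS = INPUT_SIZE * HIDDEN_SIZE; // 128
```

Changing a vector alias then updates every dependent size automatically, which is the point of the "manual size tracking" row above.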
### Parameter Storage

| Before (Manual) | After (neural.slang) |
|-----------------|---------------------|
| Separate weight/bias buffers | `StructuredBufferStorage<T>` wrapper |
| Manual offset calculation | `Storage.getOffset()` method |
| Manual parameter count | `FFLayer.ParameterCount` constant |

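A minimal sketch of how `ParameterCount` replaces hand-summed buffer sizes; the exact member names are taken from the table above, so this is an assumption about the API shape rather than verified usage:

```slang
typealias Storage = StructuredBufferStorage<float>;
typealias Layer0Type =
    FFLayer<float, InlineVector<float, 4>, InlineVector<float, 32>,
            Storage, IdentityActivation<float>, true>;

// One constant covers weights + biases (4*32 + 32 here);
// the host allocates layer0_params with at least this many floats.
static const uint LAYER0_PARAMS = Layer0Type.ParameterCount;
```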
### Layer Forward Pass

| Before (Manual) | After (neural.slang) |
|-----------------|---------------------|
| Manual matrix multiply | `FFLayer.eval()` using `linearTransform` |
| Explicit loops | Optimized internal implementation |
| Manual bias addition | Handled by `FFLayer` |

**Before:**
```slang
[Differentiable]
float[Outputs] forward(float[Inputs] x)
{
    float[Outputs] y;
    [MaxIters(Outputs)]
    for (int row = 0; row < Outputs; ++row)
    {
        var sum = get_bias(row);
        [ForceUnroll]
        for (int col = 0; col < Inputs; ++col)
            sum += get_weight(row, col) * x[col];
        y[row] = sum;
    }
    return y;
}
```

**After:**
```slang
[Differentiable]
OutputVec mlp_forward(Storage storage, InputVec input)
{
    uint addr = 0u;
    let h0 = Layer0(addr, addr + INPUT_DIM * HIDDEN_DIM, LeakyReLU<float>(LEAKY_RELU_SLOPE)).eval<Storage>(storage, input);
    addr = Layer0.nextAddress(addr);
    let h1 = Layer1(addr, addr + HIDDEN_DIM * HIDDEN_DIM, LeakyReLU<float>(LEAKY_RELU_SLOPE)).eval<Storage>(storage, h0);
    addr = Layer1.nextAddress(addr);
    return Layer2(addr, addr + HIDDEN_DIM * OUTPUT_DIM, ExpActivation<float>()).eval<Storage>(storage, h1);
}
```

### Network Definition

| Before (Manual) | After (neural.slang) |
|-----------------|---------------------|
| Custom struct with manual layout | Type aliases for layers |
| Hardcoded dimensions | Dimensions from vector types |
| Manual weight indexing | Automatic address calculation |

**Before:**
```slang
struct Network
{
    RWStructuredBuffer<float> layer0_weights; // 4*32 floats
    RWStructuredBuffer<float> layer0_biases;  // 32 floats
    RWStructuredBuffer<float> layer1_weights; // 32*32 floats
    RWStructuredBuffer<float> layer1_biases;  // 32 floats
    RWStructuredBuffer<float> layer2_weights; // 32*3 floats
    RWStructuredBuffer<float> layer2_biases;  // 3 floats

    [Differentiable]
    float3 forward(float4 input) { /* manual implementation */ }
}
```

**After:**
```slang
import neural;

// Type definitions using neural.slang
typealias Vec4 = InlineVector<float, 4>;
typealias Vec32 = InlineVector<float, 32>;
typealias Vec3 = InlineVector<float, 3>;
typealias Storage = StructuredBufferStorage<float>;
typealias Act = IdentityActivation<float>;

typealias Layer0Type = FFLayer<float, Vec4, Vec32, Storage, Act, true>;
typealias Layer1Type = FFLayer<float, Vec32, Vec32, Storage, Act, true>;
typealias Layer2Type = FFLayer<float, Vec32, Vec3, Storage, Act, true>;

struct MLPNetwork
{
    // One buffer per layer: [weights, biases] contiguous
    RWStructuredBuffer<float> layer0_params;
    RWStructuredBuffer<float> layer1_params;
    RWStructuredBuffer<float> layer2_params;

    Vec3 forward(Vec4 input)
    {
        let storage0 = Storage(layer0_params);
        let ff0 = Layer0Type(storage0, 0u, INPUT_SIZE * HIDDEN_SIZE);
        Vec32 h0 = ff0.eval(NoParam(), input);
        // ...
    }
}
```

## Running the Demo

```bash
cd slangpy-samples/examples/neural-demo
python neural-demo.py
```

The demo displays three panels:
1. **Reference image** - Target to reconstruct
2. **Network output** - Current reconstruction using the FFLayer-based network
3. **Loss visualization** - Per-pixel error

Loss values are printed to the console and should decrease over time as the network learns.