Instant NGP just got 5x faster at rendering and 1.5x faster at training!
The speedup comes from a new just-in-time (JIT) compiler in tiny-cuda-nn, the neural network library that Instant NGP is based on; see tiny-cuda-nn's 2.0 release notes.
Using this new JIT compiler, Instant NGP fuses the entire NeRF ray marcher, as well as several other core components, into single CUDA kernels. That is to say: the 5x speedup comes from optimization through fusion rather than algorithmic changes. Therefore, all existing snapshots and configurations still work unmodified.
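To build intuition for why fusion helps, here is a toy sketch of the general idea, written in Python/NumPy rather than CUDA. It is not tiny-cuda-nn's actual implementation; it only illustrates that an unfused pipeline writes each intermediate result to memory and reads it back (like separate GPU kernels doing global-memory round-trips), while a fused version applies all steps per element in one pass, keeping intermediates in registers.

```python
import numpy as np

def unfused(x):
    # Three separate passes: each produces a full intermediate array,
    # analogous to launching three GPU kernels with memory round-trips.
    a = np.sin(x)
    b = a * 2.0
    c = b + 1.0
    return c

def fused(x):
    # One pass: all steps are applied per element before moving on,
    # analogous to a single fused kernel keeping values in registers.
    out = np.empty_like(x)
    for i in range(x.size):
        out[i] = np.sin(x[i]) * 2.0 + 1.0
    return out

x = np.linspace(0.0, 1.0, 8)
assert np.allclose(unfused(x), fused(x))
```

Both functions compute the same result; the difference is purely in memory traffic, which is what dominates the cost of a GPU ray marcher that invokes a small neural network at every sample.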
How to use the new Instant NGP 2.0
If you are on Windows, simply download the latest release.
On Linux, pull the latest source code, make sure submodules are up-to-date, and compile.
$ git pull
$ git submodule sync --recursive
$ git submodule update --init --recursive
$ <compile per instructions in README.md>

To confirm that you are running Instant NGP 2.0, make sure that the GUI's title bar says "instant-ngp v2.0dev" and that there is a "JIT fusion" checkbox. See the screenshot below.

Toggle the "JIT fusion" checkbox to compare the new, faster implementation with the old one. On RTX 4000 cards (Ada generation) or newer, the rendering speedup should be over 5x.
Other additions and changes since the last release
- Added more interpolation modes for camera paths, as well as live recording of keyframes
- Added Python bindings for camera path editing
- Added support for orthographic camera lenses
- All camera lens types except for FTheta now support DLSS
- All camera lens types can now be used in SDF, Image, and Volume modes (not just NeRF)
- Fixed various miscellaneous bugs in the build system and Instant NGP itself
- Glow has been removed