Note: We have made our best effort to package all of the most helpful and representative images and videos in the zip file, but some less critical ones were too large to include, so we have uploaded them to GitHub (timestamped and version-controlled) and serve those assets via GitHub links here.
Should you wish to review without internet-downloaded images and videos, feel free to disable your internet connection, but be aware that some videos may then not display.
Here are some examples of raw images taken by the hyperspectral cameras.
Our HS-NeRF approach (Ours-Hyper), which uses all 128 channels, clearly outperforms NeRFs that use only RGB.
Depth maps can reveal issues with the 3D structures of NeRFs that are not immediately apparent in the rendered images.
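The depth maps shown here can be understood as the standard expected ray-termination depth computed from a NeRF's volume-rendering weights. A minimal sketch of that computation (the sampling and variable names are illustrative, not taken from our code):

```python
import numpy as np

def expected_depth(sigmas, ts):
    """Expected ray-termination depth from per-sample densities.

    sigmas: (N,) volume densities at sample points along one ray
    ts:     (N,) distances of those sample points from the camera
    """
    deltas = np.diff(ts, append=ts[-1] + (ts[-1] - ts[-2]))         # sample spacing
    alphas = 1.0 - np.exp(-sigmas * deltas)                         # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas]))[:-1]  # transmittance
    weights = trans * alphas                                        # volume-rendering weights
    return float(np.sum(weights * ts))                              # expected depth

# A ray that hits an opaque surface near t = 2.0:
ts = np.linspace(0.1, 4.0, 128)
sigmas = np.where(np.abs(ts - 2.0) < 0.05, 50.0, 0.0)
print(expected_depth(sigmas, ts))  # close to 2.0
```

A floater or a hollow surface shows up in this quantity as a depth value far from the true geometry, even when the rendered color at that pixel looks plausible.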
Qualitatively, ours is noticeably better than the RGB baselines and is among the best of the ablations.
Finally, we can also export the NeRFs to point clouds to verify that their 3D structure is reasonable. Shown below are screenshots of the point clouds for Anacampseros (left) and Caladium (right).
Qualitatively, our full approach is among the best, and slightly different architectures are not significantly worse.
This video is the easiest way to compare performance across the different methods.
This video shows the camera moving, demonstrating that the NeRF's performance is consistently good across novel viewpoints.
Ch 15 (477nm)
Ch 35 (576nm)
Ch 55 (675nm)
Ch 75 (772nm)
Ch 95 (869nm)
Ch 105 (918nm)
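The channel labels above are consistent with a roughly linear channel-to-wavelength calibration of about 4.9 nm per channel. A quick illustrative fit using only the pairs listed here (the camera's true calibration may differ slightly):

```python
import numpy as np

# (channel, wavelength in nm) pairs from the labels above
channels = np.array([15, 35, 55, 75, 95, 105])
wavelengths = np.array([477, 576, 675, 772, 869, 918])

# Least-squares linear fit: wavelength ~ slope * channel + intercept
slope, intercept = np.polyfit(channels, wavelengths, 1)
print(f"~{slope:.2f} nm/channel, ~{intercept:.0f} nm at channel 0")

# The linear model reproduces the listed labels to within ~2 nm
pred = slope * channels + intercept
print(np.max(np.abs(pred - wavelengths)))
```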
This video sweeps the wavelength while the camera moves, demonstrating that the NeRF's performance is consistently good across novel viewpoints at all wavelengths.
Our NeRF loses almost no accuracy in predicting the full hyperspectral image, even when trained on only 1/8th of the wavelengths.
Our NeRFs can accurately predict an unseen viewpoint consistently across all wavelengths.
Furthermore, even withholding 87.5% of wavelengths from the training set has marginal impact on accuracy.
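The withheld-wavelength experiment can be pictured as a stride-8 split over the 128 channels (an illustrative sketch; the paper's exact split may differ):

```python
import numpy as np

def split_wavelengths(n_channels=128, keep_every=8):
    """Keep every `keep_every`-th channel for training; hold out the rest."""
    all_ch = np.arange(n_channels)
    train = all_ch[::keep_every]             # 1/8th of the channels
    held_out = np.setdiff1d(all_ch, train)   # the remaining 7/8ths
    return train, held_out

train, held_out = split_wavelengths()
print(len(train), len(held_out))                         # 16 112
print(f"{len(held_out) / 128:.1%} of wavelengths withheld")  # 87.5%
```

With this split, the NeRF only ever sees 16 of the 128 wavelengths during training, yet it is evaluated on all 128.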
Watch this rotating video to see that the wavelength interpolation performance is consistently good across novel viewpoints.
Inspect the individual rendered images by wavelength and amount of interpolation.
This is the same as the video above, but in image form to allow for closer inspection.
Hyperspectral NeRFs allow us to simulate arbitrary camera image sensors from a single reference image.
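Simulating a sensor amounts to integrating the rendered hyperspectral radiance against that sensor's spectral response curves. A minimal sketch with made-up Gaussian RGB response curves (a real sensor would use its measured curves; all names here are illustrative):

```python
import numpy as np

def simulate_sensor(hs_image, responses):
    """Integrate a hyperspectral cube against sensor spectral response curves.

    hs_image:  (H, W, C) hyperspectral image, one channel per wavelength
    responses: (C, B) response of each of B output bands at each wavelength
    returns:   (H, W, B) simulated sensor image
    """
    # Normalize each band's response so the output is a weighted average
    r = responses / responses.sum(axis=0, keepdims=True)
    return hs_image @ r

# Made-up Gaussian response curves for an RGB-like sensor (illustrative only)
wl = np.linspace(400, 1000, 128)             # assumed wavelength grid, in nm
centers = np.array([610.0, 540.0, 465.0])    # assumed R, G, B peak wavelengths
responses = np.exp(-0.5 * ((wl[:, None] - centers[None, :]) / 30.0) ** 2)

hs = np.random.rand(4, 4, 128)               # stand-in hyperspectral cube
rgb = simulate_sensor(hs, responses)
print(rgb.shape)  # (4, 4, 3)
```

Swapping in a different `responses` matrix simulates a different sensor from the same hyperspectral reconstruction, which is what makes this capability possible from a single reference image.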