Non-line-of-sight (NLOS) imaging infers invisible scenes from indirect light reflected off visible surfaces. The neural transient field (NeTF) was proposed to represent NLOS scenes as neural radiance fields. We propose NLOS neural implicit surface (NLOS-NeuS), which extends the NeTF to a neural implicit surface with a signed distance function (SDF) for reconstructing three-dimensional surfaces in NLOS scenes. We introduce two constraints as loss functions for correctly learning an SDF and avoiding spurious non-zero level-set surfaces. We also introduce a lower-bound constraint on the SDF derived from the geometry of first-returning photons. The experimental results indicate that these constraints are essential for learning a correct SDF in NLOS scenes. Compared with previous methods based on discretized representations, NLOS-NeuS with its continuous neural representation reconstructs smooth surfaces while preserving fine details in NLOS scenes. To the best of our knowledge, this is the first study on neural implicit surfaces with volume rendering in NLOS scenes.
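The lower-bound constraint mentioned above follows from a simple geometric argument: in a confocal setup, if the first photon returning to a relay-wall scan point p arrives after travel distance 2·d1, then no surface lies within distance d1 of p, so by the triangle inequality the signed distance at any query point x satisfies SDF(x) ≥ d1 − ‖x − p‖. The snippet below is a minimal sketch of how such a constraint could be penalized as a hinge loss; the function name, array shapes, and the squared-hinge form are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def first_photon_lower_bound_loss(sdf_values, query_points, scan_points,
                                  first_photon_dists):
    """Hypothetical hinge penalty for the first-returning-photon SDF bound.

    For a confocal scan point p whose first photon arrives after one-way
    distance d1, no surface lies closer than d1 to p, so the SDF at a
    query point x must satisfy SDF(x) >= d1 - ||x - p||.
    Violations of this lower bound are penalized quadratically.
    """
    # Distance from each query point to its associated relay-wall scan point.
    dist_to_wall = np.linalg.norm(query_points - scan_points, axis=-1)
    # Lower bound on the SDF implied by the first-returning photon.
    lower_bound = first_photon_dists - dist_to_wall
    # Hinge: only penalize predictions that fall below the bound.
    violation = np.maximum(lower_bound - sdf_values, 0.0)
    return float(np.mean(violation ** 2))
```

For example, a query point 0.5 units from the wall with a first-photon distance of 1.0 has a lower bound of 0.5, so a predicted SDF of 0.5 incurs no penalty, while a prediction of 0.0 is penalized.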
We show the results with synthetic data from the ZNLOS dataset. Due to the lack of a geometric representation, the NeTF mesh contains large errors. In contrast, NLOS-NeuS enables high-quality 3D surface reconstruction.
We show the results with real data captured by a SPAD from the f-k dataset. The DLCT can reconstruct the object structures, but the quality of its reconstructed geometry is limited by its discrete representation. In contrast, our method reconstructs fine details and smooth surfaces thanks to the continuous representation of the neural implicit surface.
@InProceedings{Fujimura_2023_ICCV,
author = {Fujimura, Yuki and Kushida, Takahiro and Funatomi, Takuya and Mukaigawa, Yasuhiro},
title = {NLOS-NeuS: Non-line-of-sight Neural Implicit Surface},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2023},
pages = {10532-10541}
}