Quality issues #26

Open
bennyguo opened this issue Dec 5, 2022 · 0 comments

Comments

bennyguo commented Dec 5, 2022

Here are some findings about improving the reconstruction quality.

Bias terms in the geometry MLP matter

In my original implementation, I omitted the bias terms in the geometry MLP for simplicity, since they're initialized to 0. However, I found that these bias terms are important for producing high-quality surfaces, especially in detailed regions. A possible reason is that the "shifting" introduced by these biases acts as a form of normalization, making high-frequency signals easier to learn. Thanks to @TerryRyu for mentioning this problem in #22. Fixed in the latest commits.
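
For reference, a minimal PyTorch sketch of the idea (the layer sizes and initialization below are illustrative assumptions, not the exact network used in this repo):

```python
import torch.nn as nn

class GeometryMLP(nn.Module):
    def __init__(self, dim_in=32, dim_hidden=64, n_layers=2):
        super().__init__()
        layers, d = [], dim_in
        for _ in range(n_layers):
            # bias=True matters: even though the biases start at 0, letting them
            # shift the pre-activations helps fit high-frequency details.
            layer = nn.Linear(d, dim_hidden, bias=True)
            nn.init.zeros_(layer.bias)  # still initialized to 0
            layers += [layer, nn.ReLU(inplace=True)]
            d = dim_hidden
        layers.append(nn.Linear(d, 1 + dim_hidden, bias=True))  # SDF value + feature
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)
```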

MSE vs. L1

Although the original NeuS paper adopts L1 as the photometric loss for its "robustness to outliers", we found that L1 can lead to suboptimal results in certain cases, like the Lego bulldozer:

[Image comparison: L1, 20k iters (left) vs. MSE, 20k iters (right)]

Therefore, we adopt both the L1 and MSE losses simultaneously in NeuS training.
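
A minimal sketch of how the two losses can be combined (the weights here are illustrative assumptions, not the values used in the configs):

```python
import torch.nn.functional as F

def photometric_loss(pred_rgb, gt_rgb, lambda_l1=1.0, lambda_mse=1.0):
    # Use both losses: L1 for robustness to outliers, MSE to penalize
    # larger errors more strongly (helps cases like the Lego bulldozer).
    loss_l1 = F.l1_loss(pred_rgb, gt_rgb)
    loss_mse = F.mse_loss(pred_rgb, gt_rgb)
    return lambda_l1 * loss_l1 + lambda_mse * loss_mse
```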

Floater problem

Training NeuS without a background model can lead to floaters (uncontrolled surfaces) in free space. This is because floaters that take on the background color do no harm to rendering quality, and therefore cannot be optimized away when training with only a photometric loss. We alleviate this problem with random background augmentation (masks needed) and the sparsity loss proposed in SparseNeuS (no masks needed):
[Image: sparsity loss formula from SparseNeuS]
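
For reference, a minimal sketch of both tricks; the scale factor `tau`, the loss weight, and the exact compositing details are assumptions on my part, not necessarily what the configs use:

```python
import torch

def sparsity_loss(sdf_samples, tau=20.0):
    # SparseNeuS-style sparsity loss: push the |SDF| of randomly sampled
    # points away from zero, penalizing spurious near-zero-SDF regions
    # (floaters) in free space.
    return torch.exp(-tau * sdf_samples.abs()).mean()

def random_background(pred_rgb, gt_rgb, opacity, mask):
    # Random background augmentation (requires masks): composite the ground
    # truth over a random background color and add the same color behind the
    # prediction, so floaters that merely imitate the background color
    # no longer go unpenalized.
    bg = torch.rand_like(gt_rgb)
    gt = gt_rgb * mask + bg * (1.0 - mask)
    pred = pred_rgb + bg * (1.0 - opacity)
    return pred, gt
```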
For NeRF, we adopt the distortion loss proposed by MipNeRF 360 to alleviate the floater problem when training unbounded 360 scenes.
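
A minimal sketch of the distortion loss on a single ray, following the MipNeRF 360 formulation (the rendering weights `w` and normalized interval endpoints `s` are assumed to come from the renderer):

```python
import torch

def distortion_loss(w, s):
    """w: (N,) rendering weights; s: (N+1,) normalized interval endpoints in [0, 1]."""
    mids = 0.5 * (s[1:] + s[:-1])  # interval midpoints
    # Pairwise term: pulls the weight mass together along the ray.
    pairwise = (w[:, None] * w[None, :] * (mids[:, None] - mids[None, :]).abs()).sum()
    # Self term: shrinks the weight within each interval.
    self_term = (w.pow(2) * (s[1:] - s[:-1])).sum() / 3.0
    return pairwise + self_term
```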

Complex cases require long training

In the given config files, the number of training steps is set to 20000 by default. This works well for objects with simple geometry, like the chair scene in the NeRF-Synthetic dataset. However, for more complicated cases with many thin structures, more training iterations are needed to get high-quality results. This can be done simply by setting trainer.max_steps to a higher value, like 50000 or 100000.
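
For example, a sketch of the config change, assuming the trainer settings are exposed under a trainer: key in the YAML config (as trainer.max_steps suggests):

```yaml
trainer:
  max_steps: 100000   # raise from the default 20000 for scenes with many thin structures
```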

Cases where training does not converge

A simple way to tell whether training is converging is to check the value of inv_s (shown in the progress bar by default). If inv_s is steadily increasing (it often ends up above 1000), then we are good. If training diverges, inv_s typically drops below its initialized value and gets stuck. There are many possible causes of divergence. To alleviate divergence caused by unstable optimization, we adopt a learning rate warm-up strategy following the original NeuS in the latest commits.
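
A minimal sketch of a linear learning-rate warm-up in PyTorch (the warm-up length, learning rate, and optimizer below are illustrative assumptions, not the exact schedule used in this repo):

```python
import torch

def make_optimizer_with_warmup(params, lr=1e-2, warmup_steps=500):
    optimizer = torch.optim.Adam(params, lr=lr)
    # Linearly ramp the learning rate from ~0 to lr over the first
    # `warmup_steps` iterations, then keep it constant; this stabilizes the
    # early optimization and helps avoid divergence (stuck inv_s).
    warmup = lambda step: min(1.0, (step + 1) / warmup_steps)
    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=warmup)
    return optimizer, scheduler
```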
