
AssertionError when using ns-export #3586

Open
gldanoob opened this issue Feb 6, 2025 · 6 comments
gldanoob commented Feb 6, 2025

Describe the bug
Here is the log when I attempt to export a pointcloud or mesh:

nerfstudio ❯ ns-export pointcloud --load-config outputs/stump/nerfacto/2024-11-11_022940/config.yml --output-dir exports/mesh/ --num-points 1000000 --remove-outliers True --normal-method open3d --obb_center -0.1661160403 0.0640090148 -0.4701777494 --obb_rotation -0.0694655468 0.0561346174 -0.0039036355 --obb_scale 1.0000000000 1.0000000000 1.0000000000
[03:07:26] Auto image downscale factor of 2                                                 nerfstudio_dataparser.py:484
Variable resolution, using variable_res_collate
Loading latest checkpoint from load_dir
✅ Done loading checkpoint from outputs/stump/nerfacto/2024-11-11_022940/nerfstudio_models/step-000029999.ckpt
Traceback (most recent call last):
  File "/home/gldanoob/miniconda3/envs/nerfstudio/bin/ns-export", line 8, in <module>
    sys.exit(entrypoint())
  File "/home/gldanoob/dev/nerfstudio_dev/nerfstudio/scripts/exporter.py", line 671, in entrypoint
    tyro.cli(Commands).main()
  File "/home/gldanoob/dev/nerfstudio_dev/nerfstudio/scripts/exporter.py", line 144, in main
    assert pipeline.datamanager.train_pixel_sampler is not None
AssertionError
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "/home/gldanoob/miniconda3/envs/nerfstudio/lib/python3.8/multiprocessing/popen_fork.py", line 27, in poll
    pid, sts = os.waitpid(self.pid, flag)
  File "/home/gldanoob/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
    _error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 19392) is killed by signal: Terminated.

However, the export works as expected when nerfstudio is installed via pip (version 1.1.5).

To Reproduce
Steps to reproduce the behavior:

  1. Install nerfstudio via git (Commit 4a3e3e6)
  2. Run ns-export on any nerfacto model

Expected behavior
Nerfstudio exports the point cloud or mesh without errors.

@MontaEllis

same issue

@gradeeterna

Same issue here

@SharkWipf
Contributor

This is most likely caused by the dataloader changes in #3216, which change/remove train_pixel_sampler. The exporter was likely not updated to match.
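The line the traceback points at is a bare `assert pipeline.datamanager.train_pixel_sampler is not None`, which turns a missing attribute into an opaque `AssertionError`. A more defensive pattern (a hypothetical sketch, not the actual nerfstudio exporter code; the names `datamanager` and `train_pixel_sampler` are taken from the traceback) would fail with an explanatory error instead:

```python
# Hypothetical sketch: look up an optional attribute and raise a descriptive
# error instead of asserting it exists. Not the actual nerfstudio code.
def get_pixel_sampler(datamanager):
    sampler = getattr(datamanager, "train_pixel_sampler", None)
    if sampler is None:
        raise RuntimeError(
            "train_pixel_sampler is not available on this datamanager; "
            "it may have been removed by the dataloader refactor (#3216). "
            "Try a datamanager that still provides a pixel sampler."
        )
    return sampler
```

This does not fix the underlying incompatibility, but it would at least tell users why the export aborted.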

@tripp528

tripp528 commented Feb 14, 2025

Potentially related issue, also only happening on main (not 1.1.5). Running ns-export cameras completes successfully, but prints the following:

✅ Saved poses to resources/vase/scan_nerfstudio/nerfstudio/transforms_train.json
✅ Saved poses to resources/vase/scan_nerfstudio/nerfstudio/transforms_eval.json
Exception ignored in atexit callback: <function _exit_function at 0x7a251d9932e0>
Traceback (most recent call last):
  File "/home/tripp/.local/share/uv/python/cpython-3.12.6-linux-x86_64-gnu/lib/python3.12/multiprocessing/util.py", line 360, in _exit_function
    p.join()
  File "/home/tripp/.local/share/uv/python/cpython-3.12.6-linux-x86_64-gnu/lib/python3.12/multiprocessing/process.py", line 149, in join
    res = self._popen.wait(timeout)
          ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tripp/.local/share/uv/python/cpython-3.12.6-linux-x86_64-gnu/lib/python3.12/multiprocessing/popen_fork.py", line 43, in wait
    return self.poll(os.WNOHANG if timeout == 0.0 else 0)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tripp/.local/share/uv/python/cpython-3.12.6-linux-x86_64-gnu/lib/python3.12/multiprocessing/popen_fork.py", line 27, in poll
    pid, sts = os.waitpid(self.pid, flag)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tripp/sbx_engine/.venv/lib/python3.12/site-packages/torch/utils/data/_utils/signal_handling.py", line 67, in handler
    _error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 1042352) is killed by signal: Terminated. 

Note: this was exporting poses from a Gaussian splat, if that matters.
Note 2: This seems to occur only with cache_images: disk in the config; setting the value to gpu or cpu makes the error disappear.
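Based on the note above, a possible workaround (a sketch only; assuming the key sits under the datamanager section of the exported config.yml, as the `cache_images: disk` reference suggests) is to change the cache policy in the config before re-running the export:

```yaml
# Hypothetical config.yml fragment: switch image caching off disk.
# The exact nesting depends on your trained model's config.
pipeline:
  datamanager:
    cache_images: cpu   # was: disk
```

This trades disk caching for host-memory caching, so it may not be viable for very large datasets.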

@f-dy
Contributor

f-dy commented Feb 20, 2025

Shouldn't this issue be fixed by #3587?

@tripp528

No, I suspect a similar fix is needed for camera export.

6 participants