
Commit

… into rejection_mask_sampling
anc2001 committed Jan 18, 2024
2 parents 900f7f2 + 368c9ec commit a8cc4e0
Showing 100 changed files with 814 additions and 722 deletions.
8 changes: 4 additions & 4 deletions .github/workflows/core_code_checks.yml
@@ -32,10 +32,10 @@ jobs:
- name: Check notebook cell metadata
run: |
python ./nerfstudio/scripts/docs/add_nb_tags.py --check
-      - name: Run Ruff
-        run: ruff docs/ nerfstudio/ tests/
-      - name: Run Black
-        run: black docs/ nerfstudio/ tests/ --check
+      - name: Run Ruff Linter
+        run: ruff check docs/ nerfstudio/ tests/
+      - name: Run Ruff Formatter
+        run: ruff format docs/ nerfstudio/ tests/ --check
- name: Run Pyright
run: |
pyright
13 changes: 6 additions & 7 deletions .pre-commit-config.yaml
@@ -12,16 +12,15 @@ repos:
files: '.*'
pass_filenames: false
- repo: https://github.com/pre-commit/pre-commit-hooks
-    rev: v3.2.0
+    rev: v4.5.0
hooks:
- id: trailing-whitespace
- id: end-of-file-fixer
- repo: https://github.com/charliermarsh/ruff-pre-commit
-    # Ruff version.
-    rev: 'v0.0.267'
+    rev: v0.1.13
hooks:
- id: ruff
-  - repo: https://github.com/psf/black
-    rev: '23.3.0'
-    hooks:
-      - id: black
+        types_or: [ python, pyi, jupyter ]
+        args: [ --fix ]
+      - id: ruff-format
+        types_or: [ python, pyi, jupyter ]
8 changes: 4 additions & 4 deletions .vscode/settings.json
@@ -24,16 +24,16 @@
"typescript.suggestionActions.enabled": false,
"javascript.suggestionActions.enabled": false,
"[python]": {
-    "editor.defaultFormatter": "ms-python.black-formatter",
+    "editor.defaultFormatter": "charliermarsh.ruff",
"editor.codeActionsOnSave": {
-      "source.organizeImports": true,
-      "source.fixAll": true
+      "source.organizeImports": "explicit",
+      "source.fixAll": "explicit"
}
},
"editor.formatOnSave": true,
"editor.rulers": [120],
"python.envFile": "${workspaceFolder}/.env",
"python.formatting.provider": "none",
-  "black-formatter.args": ["--line-length=120"],
"python.linting.pylintEnabled": false,
"python.linting.flake8Enabled": false,
"python.linting.enabled": true,
18 changes: 9 additions & 9 deletions Dockerfile
@@ -119,44 +119,44 @@ ENV PATH="${PATH}:/home/user/.local/bin"
SHELL ["/bin/bash", "-c"]

# Upgrade pip and install packages.
-RUN python3.10 -m pip install --upgrade pip setuptools pathtools promise pybind11
+RUN python3.10 -m pip install --no-cache-dir --upgrade pip setuptools pathtools promise pybind11
# Install pytorch and submodules
-RUN CUDA_VER=${CUDA_VERSION%.*} && CUDA_VER=${CUDA_VER//./} && python3.10 -m pip install \
+RUN CUDA_VER=${CUDA_VERSION%.*} && CUDA_VER=${CUDA_VER//./} && python3.10 -m pip install --no-cache-dir \
torch==2.0.1+cu${CUDA_VER} \
torchvision==0.15.2+cu${CUDA_VER} \
--extra-index-url https://download.pytorch.org/whl/cu${CUDA_VER}
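The two shell parameter expansions in the `RUN` command above turn the full CUDA version into the PyTorch wheel suffix (for example `11.8.0` becomes `cu118`). A minimal Python sketch of the same transformation, using an assumed example value (in the real build, `CUDA_VERSION` comes from the CUDA base image):

```python
# Assumed example value; the real CUDA_VERSION is set by the CUDA base image.
cuda_version = "11.8.0"

# ${CUDA_VERSION%.*} strips the shortest trailing ".<suffix>" -> "11.8"
cuda_ver = cuda_version.rsplit(".", 1)[0]
# ${CUDA_VER//./} removes every remaining dot -> "118"
cuda_ver = cuda_ver.replace(".", "")

print(f"cu{cuda_ver}")  # -> cu118
```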
# Install tiny-cuda-nn (we need to set the target architectures as an environment variable first).
ENV TCNN_CUDA_ARCHITECTURES=${CUDA_ARCHITECTURES}
-RUN python3.10 -m pip install git+https://github.com/NVlabs/[email protected]#subdirectory=bindings/torch
+RUN python3.10 -m pip install --no-cache-dir git+https://github.com/NVlabs/[email protected]#subdirectory=bindings/torch

# Install pycolmap, required by hloc.
RUN git clone --branch v0.4.0 --recursive https://github.com/colmap/pycolmap.git && \
cd pycolmap && \
-    python3.10 -m pip install . && \
+    python3.10 -m pip install --no-cache-dir . && \
cd ..

# Install hloc 1.4 as alternative feature detector and matcher option for nerfstudio.
RUN git clone --branch master --recursive https://github.com/cvg/Hierarchical-Localization.git && \
cd Hierarchical-Localization && \
git checkout v1.4 && \
-    python3.10 -m pip install -e . && \
+    python3.10 -m pip install --no-cache-dir -e . && \
cd ..

# Install pyceres from source
RUN git clone --branch v1.0 --recursive https://github.com/cvg/pyceres.git && \
cd pyceres && \
-    python3.10 -m pip install -e . && \
+    python3.10 -m pip install --no-cache-dir -e . && \
cd ..

# Install pixel perfect sfm.
RUN git clone --recursive https://github.com/cvg/pixel-perfect-sfm.git && \
cd pixel-perfect-sfm && \
git reset --hard 40f7c1339328b2a0c7cf71f76623fb848e0c0357 && \
git clean -df && \
-    python3.10 -m pip install -e . && \
+    python3.10 -m pip install --no-cache-dir -e . && \
cd ..

-RUN python3.10 -m pip install omegaconf
+RUN python3.10 -m pip install --no-cache-dir omegaconf
# Copy nerfstudio folder and give ownership to user.
ADD . /home/user/nerfstudio
USER root
@@ -165,7 +165,7 @@ USER ${USER_ID}

# Install nerfstudio dependencies.
RUN cd nerfstudio && \
-    python3.10 -m pip install -e . && \
+    python3.10 -m pip install --no-cache-dir -e . && \
cd ..

# Change working directory
11 changes: 1 addition & 10 deletions README.md
@@ -124,19 +124,10 @@ pip install --upgrade pip
Install PyTorch with CUDA (this repo has been tested with CUDA 11.7 and CUDA 11.8) and [tiny-cuda-nn](https://github.com/NVlabs/tiny-cuda-nn).
`cuda-toolkit` is required for building `tiny-cuda-nn`.

-For CUDA 11.7:
-
-```bash
-pip install torch==2.0.1+cu117 torchvision==0.15.2+cu117 --extra-index-url https://download.pytorch.org/whl/cu117
-
-conda install -c "nvidia/label/cuda-11.7.1" cuda-toolkit
-pip install ninja git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
-```

For CUDA 11.8:

```bash
-pip install torch==2.0.1+cu118 torchvision==0.15.2+cu118 --extra-index-url https://download.pytorch.org/whl/cu118
+pip install torch==2.1.2+cu118 torchvision==0.16.2+cu118 --extra-index-url https://download.pytorch.org/whl/cu118

conda install -c "nvidia/label/cuda-11.8.0" cuda-toolkit
pip install ninja git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
4 changes: 2 additions & 2 deletions docs/extensions/blender_addon.md
@@ -6,7 +6,7 @@

## Overview

-This Blender add-on allows for compositing with a Nerfstudio render as a background layer by generating a camera path JSON file from the Blender camera path, as well as a way to import Nerfstudio JSON files as a Blender camera baked with the Nerfstudio camera path. This add-on also allows compositing multiple NeRF objects into a NeRF scene. This is achieved by importing a mesh or point-cloud representation of the NeRF scene from Nerfstudio into Blender and getting the camera coordinates relative to the transformations of the NeRF representation. Dynamic FOV from the Blender camera is supported and will match the Nerfstudio render. Perspective, equirectangular, VR180, and omnidirectional stereo (VR 360) cameras are supported.
+This Blender add-on allows for compositing with a Nerfstudio render as a background layer by generating a camera path JSON file from the Blender camera path, as well as a way to import Nerfstudio JSON files as a Blender camera baked with the Nerfstudio camera path. This add-on also allows compositing multiple NeRF objects into a NeRF scene. This is achieved by importing a mesh or point-cloud representation of the NeRF scene from Nerfstudio into Blender and getting the camera coordinates relative to the transformations of the NeRF representation. Dynamic FOV from the Blender camera is supported and will match the Nerfstudio render. Perspective, equirectangular, VR180, and omnidirectional stereo (VR 360) cameras are supported. This add-on also supports Gaussian Splatting scenes; however, equirectangular and VR video rendering are not currently supported.

<center>
<img width="800" alt="image" src="https://user-images.githubusercontent.com/9502341/211442247-99d1ebc7-3ef9-46f7-9bcc-0e18553f19b7.PNG">
@@ -30,7 +30,7 @@

## Scene Setup

-1. Export the mesh or point cloud representation of the NeRF from Nerfstudio, which will be used as reference for the actual NeRF in the Blender scene. Mesh export at a good quality is preferred, however, if the export is not clear or the NeRF is large, a detailed point cloud export will also work.
+1. Export the mesh or point cloud representation of the NeRF from Nerfstudio, which will be used as reference for the actual NeRF in the Blender scene. Mesh export at a good quality is preferred; however, if the export is not clear or the NeRF is large, a detailed point cloud export will also work. Keep the `save_world_frame` flag set to False, or de-select the "Save in world frame" checkbox in the viewer, to keep the correct coordinate system for the add-on.

2. Import the mesh or point cloud representation of the NeRF into the scene. You may need to crop the mesh further. Since it is used as a reference and won't be visible in the final render, only the parts that the blender animation will interact with may be necessary to import.

2 changes: 2 additions & 0 deletions docs/index.md
@@ -140,6 +140,7 @@ This documentation is organized into 3 parts:
- [NeRF](nerfology/methods/nerf.md): OG Neural Radiance Fields
- [Mip-NeRF](nerfology/methods/mipnerf.md): A Multiscale Representation for Anti-Aliasing Neural Radiance Fields
- [TensoRF](nerfology/methods/tensorf.md): Tensorial Radiance Fields
+- [Gaussian Splatting](nerfology/methods/splat.md): 3D Gaussian Splatting

(third_party_methods)=

@@ -152,6 +153,7 @@
- [NeRFPlayer](nerfology/methods/nerfplayer.md): 4D Radiance Fields by Streaming Feature Channels
- [Tetra-NeRF](nerfology/methods/tetranerf.md): Representing Neural Radiance Fields Using Tetrahedra
- [Instruct-GS2GS](nerfology/methods/igs2gs.md): Editing 3DGS Scenes with Instructions
+- [PyNeRF](nerfology/methods/pynerf.md): Pyramidal Neural Radiance Fields

**Eager to contribute a method?** We'd love to see you use nerfstudio in implementing new (or even existing) methods! Please view our {ref}`guide<own_method_docs>` for more details about how to add to this list!

2 changes: 2 additions & 0 deletions docs/nerfology/methods/index.md
@@ -27,6 +27,7 @@ The following methods are supported in nerfstudio:
```{toctree}
:maxdepth: 1
Instant-NGP<instant_ngp.md>
+3D Gaussian Splatting<splat.md>
Instruct-NeRF2NeRF<in2n.md>
K-Planes<kplanes.md>
LERF<lerf.md>
@@ -39,6 +40,7 @@
TensoRF<tensorf.md>
Generfacto<generfacto.md>
Instruct-GS2GS<igs2gs.md>
+PyNeRF<pynerf.md>
```

(own_method_docs)=
92 changes: 92 additions & 0 deletions docs/nerfology/methods/pynerf.md
@@ -0,0 +1,92 @@
# PyNeRF

<h4>Pyramidal Neural Radiance Fields</h4>


```{button-link} https://haithemturki.com/pynerf/
:color: primary
:outline:
Paper Website
```

```{button-link} https://github.com/hturki/pynerf
:color: primary
:outline:
Code
```

<video id="teaser" muted autoplay playsinline loop controls width="100%">
<source id="mp4" src="https://haithemturki.com/pynerf/vids/ficus.mp4" type="video/mp4">
</video>

**A fast NeRF anti-aliasing strategy.**


## Installation

First, install Nerfstudio and its dependencies. Then install the PyNeRF extension and [torch-scatter](https://github.com/rusty1s/pytorch_scatter):
```
pip install git+https://github.com/hturki/pynerf
pip install torch-scatter -f https://data.pyg.org/whl/torch-${TORCH_VERSION}+${CUDA}.html
```
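`${TORCH_VERSION}` and `${CUDA}` in the command above are placeholders you fill in to match your installed PyTorch and CUDA builds. As an illustration (the values below are assumed examples, not the only valid combination), the resolved wheel index URL looks like this:

```python
# Assumed example values; match them to your environment, e.g. check with
# `python -c "import torch; print(torch.__version__)"`.
torch_version = "2.1.2"
cuda = "cu118"

wheel_index = f"https://data.pyg.org/whl/torch-{torch_version}+{cuda}.html"
print(wheel_index)  # -> https://data.pyg.org/whl/torch-2.1.2+cu118.html
```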

## Running PyNeRF

There are three default configurations provided, which use the MipNeRF 360 and Multicam dataparsers by default. You can easily use other dataparsers via the ``ns-train`` command (e.g., ``ns-train pynerf nerfstudio-data --data <your data dir>`` to use the Nerfstudio data parser).

The default configurations provided are:

| Method | Description | Scene type | Memory |
| ----------------------- |---------------------------------------------------| ------------------------------ |--------|
| `pynerf`                | Tuned for outdoor scenes, uses proposal network    | outdoors                       | ~5GB   |
| `pynerf-synthetic` | Tuned for synthetic scenes, uses proposal network | synthetic | ~5GB |
| `pynerf-occupancy-grid` | Tuned for Multiscale blender, uses occupancy grid | synthetic | ~5GB |


The main differences between them are whether they are suited to synthetic/indoor or real-world unbounded scenes (in which case appearance embeddings and scene contraction are enabled), and whether sampling is done with a proposal network (usually better for real-world scenes) or an occupancy grid (usually better for single-object synthetic scenes like Blender).

## Method

Most NeRF methods assume that training and test-time cameras capture scene content from a roughly constant distance:

<table>
<tbody>
<tr>
<td style="width: 48%;">
<div style="display: flex; justify-content: center; align-items: center;">
<img src="https://haithemturki.com/pynerf/images/ficus-cameras.jpg">
</div>
</td>
<td style="width: 4%;"><img src="https://haithemturki.com/pynerf/images/arrow-right-white.png" style="width: 100%;"></td>
<td style="width: 48%;">
<video width="100%" autoplay loop controls>
<source src="https://haithemturki.com/pynerf/vids/ficus-rotation.mp4" type="video/mp4" poster="https://haithemturki.com/pynerf/images/ficus-rotation.jpg">
</video>
</td>
</tr>
</tbody>
</table>

They degrade and render blurry views in less constrained settings:

<table>
<tbody>
<tr>
<td style="width: 48%;">
<div style="display: flex; justify-content: center; align-items: center;">
<img src="https://haithemturki.com/pynerf//images/ficus-cameras-different.jpg">
</div>
</td>
<td style="width: 4%;"><img src="https://haithemturki.com/pynerf/images/arrow-right-white.png" style="width: 100%;"></td>
<td style="width: 48%;">
<video width="100%" autoplay loop controls>
<source src="https://haithemturki.com/pynerf/vids/ficus-zoom-nerf.mp4" type="video/mp4" poster="https://haithemturki.com/pynerf/images/ficus-zoom-nerf.jpg">
</video>
</td>
</tr>
</tbody>
</table>

This is due to NeRF being scale-unaware, as it reasons about point samples instead of volumes. We address this by training a pyramid of NeRFs that divides the scene at different resolutions. We use "coarse" NeRFs for far-away samples, and finer NeRFs for close-up samples:

<img src="https://haithemturki.com/pynerf/images/model.jpg" width="70%" style="display: block; margin-left: auto; margin-right: auto">
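The routing idea above can be sketched as follows. The mapping below is illustrative only (PyNeRF derives its level assignment from the sample's volume, and the function name and constants here are assumptions, not the paper's exact formula): a sample far along the ray covers a large world-space footprint and is sent to a coarse NeRF, while a close-up sample goes to a finer one.

```python
import math

def pyramid_level(distance: float, pixel_size: float, num_levels: int = 8) -> int:
    """Illustrative sketch: route a ray sample to a pyramid level by footprint."""
    footprint = distance * pixel_size          # approximate world-space size covered
    level = -math.log2(max(footprint, 1e-9))   # smaller footprint -> finer level
    return min(max(int(level), 0), num_levels - 1)

# A close-up sample lands on a finer level than a far-away one.
print(pyramid_level(0.5, 0.001), pyramid_level(50.0, 0.001))  # -> 7 4
```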
29 changes: 29 additions & 0 deletions docs/nerfology/methods/splat.md
@@ -0,0 +1,29 @@
# Gaussian Splatting
[3D Gaussian Splatting](https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/) was proposed in SIGGRAPH 2023 from INRIA, and is a completely
different method of representing radiance fields by explicitly storing a collection of 3D volumetric gaussians. These can be "splatted", or projected, onto a 2D image
provided a camera pose, and rasterized to obtain per-pixel colors. Because rasterization is very fast on GPUs, this method can render much faster than neural representations
of radiance fields.
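The projection step described above can be sketched with the usual affine approximation: project the gaussian's center through a pinhole camera, then push its 3D covariance through the Jacobian of that projection to get a 2D covariance. This is a textbook EWA-style sketch with assumed example intrinsics, not gsplat's actual CUDA implementation:

```python
import numpy as np

# One 3D Gaussian in camera coordinates (assumed example values).
mean = np.array([0.2, -0.1, 4.0])   # center (x, y, z); z is depth
cov = np.diag([0.01, 0.01, 0.01])   # 3x3 covariance of the gaussian
fx, fy = 500.0, 500.0               # assumed pinhole focal lengths, in pixels

x, y, z = mean
# Perspective projection of the center onto the image plane.
center_2d = np.array([fx * x / z, fy * y / z])

# Jacobian of the perspective map at the mean: linearizing the projection
# lets us push the 3D covariance through it as a 2x2 image-space covariance.
J = np.array([[fx / z, 0.0, -fx * x / z**2],
              [0.0, fy / z, -fy * y / z**2]])
cov_2d = J @ cov @ J.T              # covariance of the projected 2D splat

print(center_2d[0], center_2d[1])   # -> 25.0 -12.5
```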

### Installation
Nerfstudio uses [gsplat](https://github.com/nerfstudio-project/gsplat) as its gaussian rasterization backend, an in-house re-implementation designed to be more developer-friendly. It can be installed with `pip install gsplat`. The associated CUDA code will be compiled the first time gaussian splatting is executed. Some users with PyTorch 2.0 have experienced issues with this, which can be resolved by either installing gsplat from source or upgrading torch to 2.1.

### Data
Gaussian Splatting works much better if you initialize it from pre-existing geometry, such as SfM points from COLMAP. COLMAP datasets or datasets from `ns-process-data` will automatically save these points and initialize gaussians on them. Other datasets currently do not support initialization, and will initialize gaussians randomly. Initializing from other data inputs (e.g., depth from phone scanning apps) may be supported in the future.

Because gaussian splatting trains on *full images* instead of bundles of rays, there is a new datamanager in `full_images_datamanager.py` which undistorts input images, caches them, and provides single images at each train step.
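The cache-then-sample idea can be sketched as below; the class and method names are illustrative stand-ins, not nerfstudio's actual datamanager API:

```python
import random

class FullImageCacheSketch:
    """Illustrative sketch: cache whole (undistorted) images, serve one per step."""

    def __init__(self, images):
        # In the real datamanager, each input image is undistorted once here,
        # then kept in memory so the work is never repeated during training.
        self._cache = list(images)

    def next_train(self):
        # One full image per train step, rather than a bundle of random rays.
        return random.choice(self._cache)

manager = FullImageCacheSketch(["img0", "img1", "img2"])
sample = manager.next_train()
print(sample in {"img0", "img1", "img2"})  # -> True
```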


### Running the Method
To run gaussian splatting, run `ns-train gaussian-splatting --data <data>`. Just like NeRF methods, the splat can be interactively viewed in the web-viewer, rendered, and exported.

### Details
For more details on the method, see the [original paper](https://arxiv.org/abs/2308.04079). Additionally, for a detailed derivation of the gradients used in the gsplat library, see [here](https://arxiv.org/abs/2312.02121).

### Exporting splats
Gaussian splats can be exported as a `.ply` file, which can be ingested by a variety of online web viewers. You can do this via the viewer or with `ns-export gaussian-splat`. Currently, splats can only be exported from trained splat models, not from nerfacto.

### FAQ
- Can I export a mesh or pointcloud?
Currently these export options are not supported, but they may be in the future; contributions are always welcome!
- Can I render fisheye, equirectangular, orthographic images?
Currently, no. Gaussian splatting assumes a perspective camera for its rasterization pipeline. Implementing other camera models is of interest but not currently planned.
6 changes: 3 additions & 3 deletions docs/quickstart/installation.md
@@ -42,12 +42,12 @@ pip uninstall torch torchvision functorch tinycudann
```

::::{tab-set}
-:::{tab-item} Torch 2.0.1 with CUDA 11.8
+:::{tab-item} Torch 2.1.2 with CUDA 11.8 (recommended)

-Install PyTorch 2.0.1 with CUDA 11.8:
+Install PyTorch 2.1.2 with CUDA 11.8:

```bash
-pip install torch==2.0.1+cu118 torchvision==0.15.2+cu118 --extra-index-url https://download.pytorch.org/whl/cu118
+pip install torch==2.1.2+cu118 torchvision==0.16.2+cu118 --extra-index-url https://download.pytorch.org/whl/cu118
```

To build the necessary CUDA extensions, `cuda-toolkit` is also required. We
17 changes: 8 additions & 9 deletions docs/reference/contributing.md
@@ -14,15 +14,14 @@ In addition to code contributions, we also encourage contributors to add their own methods.

Below are the various tooling features our team uses to maintain this codebase.

-| Tooling         | Support                                                    |
-| --------------- | ---------------------------------------------------------- |
-| Formatting      | [Black](https://black.readthedocs.io/en/stable/)           |
-| Linter          | [Ruff](https://beta.ruff.rs/docs/)                         |
-| Type checking   | [Pyright](https://github.com/microsoft/pyright)            |
-| Testing         | [pytest](https://docs.pytest.org/en/7.1.x/)                |
-| Docs            | [Sphinx](https://www.sphinx-doc.org/en/master/)            |
-| Docstring style | [Google](https://google.github.io/styleguide/pyguide.html) |
-| JS Linting      | [eslint](https://eslint.org/)                              |
+| Tooling              | Support                                                    |
+| -------------------- | ---------------------------------------------------------- |
+| Formatting & Linting | [Ruff](https://beta.ruff.rs/docs/)                         |
+| Type checking        | [Pyright](https://github.com/microsoft/pyright)            |
+| Testing              | [pytest](https://docs.pytest.org/en/7.1.x/)                |
+| Docs                 | [Sphinx](https://www.sphinx-doc.org/en/master/)            |
+| Docstring style      | [Google](https://google.github.io/styleguide/pyguide.html) |
+| JS Linting           | [eslint](https://eslint.org/)                              |

## Requirements
