Self-Supervised Representation Learning on Point Clouds
- Python 3.10.4
- CUDA 11.6
- cuDNN 8.4.0
- GCC >= 6 and <= 11.2.1
pip install -U pip wheel
pip install torch torchvision -c requirements.txt --extra-index-url https://download.pytorch.org/whl/cu116
pip install -r requirements.txt
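Before downloading the datasets, it can help to confirm that the installed PyTorch build matches the versions listed above; a minimal sanity check:

```python
# Sanity check: PyTorch build, CUDA, and cuDNN versions (expected: CUDA 11.6, cuDNN 8.4.0).
import torch

print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("CUDA version:", torch.version.cuda)                # should report 11.6
print("cuDNN version:", torch.backends.cudnn.version())   # e.g. 8400 for cuDNN 8.4.x
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```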
See DATASETS.md for download instructions.
python -m point2vec.datasets.process.check # check if datasets are complete
./scripts/test.sh # check if training works
| Type | Dataset | Evaluation | Config | Checkpoint |
| --- | --- | --- | --- | --- |
| Point2vec pre-trained | ShapeNet | - | config | checkpoint |
| Classification fine-tuned | ModelNet40 | 94.65 / 94.77 (OA / Voting) | A & B | checkpoint |
| Classification fine-tuned | ScanObjectNN | 87.47 (OA) | A & B | checkpoint |
| Part segmentation fine-tuned | ShapeNetPart | 84.59 (Cat. mIoU) | config | checkpoint |
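These checkpoints use the standard PyTorch Lightning .ckpt format. A quick, optional way to inspect one before fine-tuning (the filename is the pre-training checkpoint referenced in the commands below):

```python
# Peek inside a downloaded checkpoint (Lightning-style .ckpt; loads on CPU).
import torch

ckpt = torch.load("epoch=799-step=64800.ckpt", map_location="cpu")
print(list(ckpt.keys()))                   # typically: epoch, global_step, state_dict, ...
state_dict = ckpt["state_dict"]
print("tensors:", len(state_dict))
for name in list(state_dict)[:5]:          # preview a few parameter names and shapes
    print(name, tuple(state_dict[name].shape))
```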
The scripts in this section use Weights & Biases for logging, so it's important to log in once with wandb login
before running them.
Checkpoints will be saved to the artifacts
directory.
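To see which checkpoints a run has produced, you can glob that directory; the layout matches the artifacts/... paths used in the commands below:

```python
# List all saved checkpoints under the artifacts directory.
from pathlib import Path

for ckpt in sorted(Path("artifacts").rglob("*.ckpt")):
    print(ckpt)
```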
A note on reproducibility:
While reproducing our results on most datasets is straightforward, achieving the same test accuracy on ModelNet40 is more complicated due to the high variance between runs (see also Pang-Yatian/Point-MAE#5 (comment), ma-xu/pointMLP-pytorch#1 (comment), CVMI-Lab/PAConv#9 (comment)).
To obtain comparable results on ModelNet40, you will likely need to experiment with a few different seeds.
However, if you can precisely replicate our test environment, including installing CUDA 11.6, cuDNN 8.4.0, Python 3.10.4, and the dependencies listed in the requirements.txt
file, as well as using a Volta GPU (e.g. Nvidia V100), you should be able to replicate our experiments exactly.
Using our exact environment is necessary to ensure that you obtain the same random state during training, as a seed alone does not guarantee reproducibility across different environments.
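For reference, this is roughly what seeding and deterministic cuDNN settings look like in code; a generic PyTorch Lightning sketch mirroring the --seed_everything flag used below, not a substitute for matching the environment itself:

```python
# A fixed seed pins the random state only within one fixed environment;
# different CUDA/cuDNN/driver/GPU combinations can still diverge.
import torch
from pytorch_lightning import seed_everything

seed_everything(1, workers=True)           # same effect as passing --seed_everything 1
torch.backends.cudnn.deterministic = True  # prefer deterministic cuDNN kernels
torch.backends.cudnn.benchmark = False     # disable non-deterministic autotuning
```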
./scripts/pretraining_shapenet.bash --data.in_memory true
Replace XXXXXXXX
with the WANDB_RUN_ID
from the pre-training run, or use the checkpoint from the model zoo.
./scripts/classification_scanobjectnn.bash --config configs/classification/_pretrained.yaml --model.pretrained_ckpt_path artifacts/point2vec-Pretraining-ShapeNet/XXXXXXXX/checkpoints/epoch=799-step=64800.ckpt
Replace XXXXXXXX
with the WANDB_RUN_ID
from the pre-training run, or use the checkpoint from the model zoo.
./scripts/classification_modelnet40.bash --config configs/classification/_pretrained.yaml --model.pretrained_ckpt_path artifacts/point2vec-Pretraining-ShapeNet/XXXXXXXX/checkpoints/epoch=799-step=64800.ckpt --seed_everything 1
Replace XXXXXXXX
with the WANDB_RUN_ID
from the fine-tuning run, and epoch=XXX-step=XXXXX-val_acc=0.XXXX.ckpt
with the best checkpoint from that run, or use the checkpoint from the model zoo.
./scripts/voting_modelnet40.bash --finetuned_ckpt_path artifacts/point2vec-Pretraining-ShapeNet/XXXXXXXX/checkpoints/epoch=XXX-step=XXXXX-val_acc=0.XXXX.ckpt
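For intuition, "voting" is test-time augmentation: the fine-tuned model is evaluated on several augmented copies of each point cloud and the logits are aggregated. A repo-agnostic sketch of the idea (the model and the random-scaling augmentation are placeholders, not the exact implementation in scripts/voting_modelnet40.bash):

```python
# Test-time "voting": average logits over several randomly augmented copies.
import torch

@torch.no_grad()
def vote_predict(model, points, num_votes=10):
    # points: (B, N, 3) point clouds; model(points) returns (B, num_classes) logits.
    logits_sum = 0
    for _ in range(num_votes):
        scale = torch.empty(points.shape[0], 1, 1, device=points.device).uniform_(0.8, 1.2)
        logits_sum = logits_sum + model(points * scale)
    return (logits_sum / num_votes).argmax(dim=-1)
```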
Replace XXXXXXXX
with the WANDB_RUN_ID
from the pre-training run, or use the checkpoint from the model zoo.
You may also pass e.g. --data.way 5
or --data.shot 20
to select the desired m-way–n-shot setting.
for i in $(seq 0 9);
do
SLURM_ARRAY_TASK_ID=$i ./scripts/classification_modelnet_fewshot.bash --model.pretrained_ckpt_path artifacts/point2vec-Pretraining-ShapeNet/XXXXXXXX/checkpoints/epoch=799-step=64800.ckpt
done
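The loop runs ten independent few-shot experiments, indexed by SLURM_ARRAY_TASK_ID. For context, an m-way n-shot episode samples m classes with n labeled training examples each; a small illustrative sketch, independent of the repository's dataset code:

```python
# Sample an m-way n-shot support set from an array of integer class labels (illustrative only).
import numpy as np

def sample_episode(labels, way=5, shot=20, seed=0):
    rng = np.random.default_rng(seed)
    classes = rng.choice(np.unique(labels), size=way, replace=False)
    support = [rng.choice(np.flatnonzero(labels == c), size=shot, replace=False) for c in classes]
    return classes, np.concatenate(support)  # chosen classes and support-set indices
```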
Replace XXXXXXXX
with the WANDB_RUN_ID
from the pre-training run, or use the checkpoint from the model zoo.
./scripts/part_segmentation_shapenetpart.bash --model.pretrained_ckpt_path artifacts/point2vec-Pretraining-ShapeNet/XXXXXXXX/checkpoints/epoch=799-step=64800.ckpt
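The metric reported in the model zoo is category mIoU: per-shape part IoUs are averaged within each object category, and those category means are then averaged. A generic sketch of that aggregation (shape_ious and shape_categories are placeholder inputs):

```python
# Category mIoU: mean over categories of the average per-shape IoU in each category.
from collections import defaultdict

def category_miou(shape_ious, shape_categories):
    per_cat = defaultdict(list)
    for iou, cat in zip(shape_ious, shape_categories):
        per_cat[cat].append(iou)
    return sum(sum(v) / len(v) for v in per_cat.values()) / len(per_cat)
```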
To pre-train the data2vec–pc baseline instead (point2vec without the decoder), replace the pre-training step with:
./scripts/pretraining_shapenet.bash --data.in_memory true --model.learning_rate 2e-3 --model.decoder false --trainer.devices 2 --data.batch_size 1024 --model.fix_estimated_stepping_batches 16000
If you only have a single GPU (and enough VRAM), you may replace --trainer.devices 2 --data.batch_size 1024 --model.fix_estimated_stepping_batches 16000
with --data.batch_size 2048, which keeps the effective batch size at 2048.
To train from scratch instead, skip the pre-training step and omit all occurrences of --config configs/classification/_pretrained.yaml
and --model.pretrained_ckpt_path ...
from the fine-tuning commands.
We use PCA to project the learned representations into RGB space. Both a random initialization and data2vec–pc pre-training show a fairly strong positional bias, whereas point2vec exhibits a stronger semantic grouping without being trained on downstream dense prediction tasks.
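A minimal sketch of that projection, assuming per-point features of shape (N, D) from the encoder; scikit-learn is used here for illustration and is not necessarily what the repository uses:

```python
# Project per-point features (N, D) to RGB colors (N, 3) via PCA for visualization.
import numpy as np
from sklearn.decomposition import PCA

def features_to_rgb(features: np.ndarray) -> np.ndarray:
    rgb = PCA(n_components=3).fit_transform(features)                           # (N, 3)
    rgb = (rgb - rgb.min(axis=0)) / (rgb.max(axis=0) - rgb.min(axis=0) + 1e-8)  # per-channel min-max
    return rgb                                                                  # values in [0, 1]
```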
If you use point2vec in your research, please use the following BibTeX entry.
@inproceedings{abouzeid2023point2vec,
  title={Point2Vec for Self-Supervised Representation Learning on Point Clouds},
  author={Abou Zeid, Karim and Schult, Jonas and Hermans, Alexander and Leibe, Bastian},
  booktitle={German Conference on Pattern Recognition (GCPR)},
  year={2023},
}