I found this repo through Twitter and found it really nicely written and performant, so I wanted to use it for 3D segmentation, using a method from a GitHub repo called sam3d. To do that, I combined parts of your repo with torch ops in Python.
Specifically, I used your way of loading LAS data:
```cpp
std::vector<at::Tensor> loadLasFile(std::vector<std::string> paths) {
    initCuda();
    initCudaProgram();

    auto cpu = getCpuData();
    int numThreads = 1;
    printfmt("cpu.numProcessors: {} \n", cpu.numProcessors);
    printfmt("launching {} loader threads \n", numThreads);

    pinnedMemPool.reserveSlots(PINNED_MEM_POOL_SIZE);
    printfmt("pinnedMemPool.pool.size(): {} \n", pinnedMemPool.pool.size());

    mtx_loader.reserve(numThreads);
    for (int i = 0; i < numThreads; i++) {
        mtx_loader.push_back(make_unique<mutex>());
        spawnLoader(i);
    }
    spawnUploader();

    reload();
    readFiles(paths);

    // Wait until every batch has been uploaded to the GPU
    while (batchStreamUploadIndex != numBatchesTotal) {
        std::this_thread::sleep_for(10ms);
        printfmt("batchStreamUploadIndex: {} (total: {}) \n", batchStreamUploadIndex, numBatchesTotal);
    }

    // Create tensors to store data from the Point arrays, for the GPU
    std::vector<at::Tensor> tensorsCUDA = createTensorsCUDA(targetCountVector);

    freeCUDAMemory(targetCountVector);
    resetCUDA();

    return tensorsCUDA;
}
```
And I generated a transform that should be equivalent to your `uniform.transform` with:
This code also creates the NDC tensor in a similar way to `draw_points` in your kernel (which I modified so I could check whether my `uniform.transform` is equivalent to yours).
However, the result I get differs from what your SimLOD renders (see the attached screenshots). Is there some transform being applied to the points that I am missing? I really like that your software is so fast and self-contained, to the extent that's possible of course, so I'd really like to use it, but this has me stumped. And since you're the developer, I thought you might know what I am doing wrong.
I'm not certain what's going on here, but could it be that the resolution of your render target is too large? It looks a bit like what you get when you significantly increase the resolution without increasing the point sizes accordingly.