
Foggification lost points #44

Open
ptoews opened this issue May 16, 2021 · 0 comments

ptoews commented May 16, 2021

Hi,

To better understand the proposed foggification, I had a look at the code but couldn't find where the lost points are actually discarded. The loss probabilities are computed, but then only used for selecting which points are scattered.
To be precise, I'm talking about this method:

```python
def haze_point_cloud(pts_3D, Radomized_beta, args):
    #print 'minmax_values', max(pts_3D[:, 0]), max(pts_3D[:, 1]), min(pts_3D[:, 1]), max(pts_3D[:, 2]), min(pts_3D[:, 2])
    n = []
    # foggyfication should be applied to sequences to ensure time correlation inbetween frames
    # vectorze calculation
    # print pts_3D.shape
    if args.sensor_type=='VelodyneHDLS3D':
        # Velodyne HDLS643D
        n = 0.04
        g = 0.45
        dmin = 2 # Minimal detectable distance
    elif args.sensor_type=='VelodyneHDLS2':
        #Velodyne HDL64S2
        n = 0.05
        g = 0.35
        dmin = 2
    d = np.sqrt(pts_3D[:,0] * pts_3D[:,0] + pts_3D[:,1] * pts_3D[:,1] + pts_3D[:,2] * pts_3D[:,2])
    detectable_points = np.where(d>dmin)
    d = d[detectable_points]
    pts_3D = pts_3D[detectable_points]
    beta_usefull = Radomized_beta.get_beta(pts_3D[:,0], pts_3D[:, 1], pts_3D[:, 2])
    dmax = -np.divide(np.log(np.divide(n,(pts_3D[:,3] + g))),(2 * beta_usefull))
    dnew = -np.log(1 - 0.5) / (beta_usefull)
    probability_lost = 1 - np.exp(-beta_usefull*dmax)
    lost = np.random.uniform(0, 1, size=probability_lost.shape) < probability_lost
    if Radomized_beta.beta == 0.0:
        dist_pts_3d = np.zeros((pts_3D.shape[0], 5))
        dist_pts_3d[:, 0:4] = pts_3D
        dist_pts_3d[:, 4] = np.zeros(np.shape(pts_3D[:, 3]))
        return dist_pts_3d, []
    cloud_scatter = np.logical_and(dnew < d, np.logical_not(lost))
    random_scatter = np.logical_and(np.logical_not(cloud_scatter), np.logical_not(lost))
    idx_stable = np.where(d<dmax)[0]
    old_points = np.zeros((len(idx_stable), 5))
    old_points[:,0:4] = pts_3D[idx_stable,:]
    old_points[:,3] = old_points[:,3]*np.exp(-beta_usefull[idx_stable]*d[idx_stable])
    old_points[:, 4] = np.zeros(np.shape(old_points[:,3]))
    cloud_scatter_idx = np.where(np.logical_and(dmax<d, cloud_scatter))[0]
    cloud_scatter = np.zeros((len(cloud_scatter_idx), 5))
    cloud_scatter[:,0:4] = pts_3D[cloud_scatter_idx,:]
    cloud_scatter[:,0:3] = np.transpose(np.multiply(np.transpose(cloud_scatter[:,0:3]), np.transpose(np.divide(dnew[cloud_scatter_idx],d[cloud_scatter_idx]))))
    cloud_scatter[:,3] = cloud_scatter[:,3]*np.exp(-beta_usefull[cloud_scatter_idx]*dnew[cloud_scatter_idx])
    cloud_scatter[:, 4] = np.ones(np.shape(cloud_scatter[:, 3]))
    # Subsample random scatter abhaengig vom noise im Lidar
    random_scatter_idx = np.where(random_scatter)[0]
    scatter_max = np.min(np.vstack((dmax, d)).transpose(), axis=1)
    drand = np.random.uniform(high=scatter_max[random_scatter_idx])
    # scatter outside min detection range and do some subsampling. Not all points are randomly scattered.
    # Fraction of 0.05 is found empirically.
    drand_idx = np.where(drand>dmin)
    drand = drand[drand_idx]
    random_scatter_idx = random_scatter_idx[drand_idx]
    # Subsample random scattered points to 0.05%
    print(len(random_scatter_idx), args.fraction_random)
    subsampled_idx = np.random.choice(len(random_scatter_idx), int(args.fraction_random*len(random_scatter_idx)), replace=False)
    drand = drand[subsampled_idx]
    random_scatter_idx = random_scatter_idx[subsampled_idx]
    random_scatter = np.zeros((len(random_scatter_idx), 5))
    random_scatter[:,0:4] = pts_3D[random_scatter_idx,:]
    random_scatter[:,0:3] = np.transpose(np.multiply(np.transpose(random_scatter[:,0:3]), np.transpose(drand/d[random_scatter_idx])))
    random_scatter[:,3] = random_scatter[:,3]*np.exp(-beta_usefull[random_scatter_idx]*drand)
    random_scatter[:, 4] = 2*np.ones(np.shape(random_scatter[:, 3]))
    dist_pts_3d = np.concatenate((old_points, cloud_scatter,random_scatter), axis=0)
    color = []
    return dist_pts_3d, color
```

In the end, `old_points` is returned, even though only the points with a distance larger than `dmax` were removed; the `lost` mask never enters that selection. Or am I misunderstanding the algorithm?
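To illustrate what I mean, here is a minimal standalone sketch (with made-up values for the extinction coefficient, distances, and `dmax`, not the repo's actual inputs) contrasting the selection `np.where(d < dmax)` as written with the selection I would have expected, which additionally drops the points flagged by `lost`:

```python
import numpy as np

np.random.seed(0)

# Hypothetical stand-ins for the quantities in haze_point_cloud
beta = 0.05                                # assumed fog extinction coefficient
d = np.random.uniform(2, 80, 1000)         # point distances
dmax = np.random.uniform(40, 120, 1000)    # per-point max detectable distance

# Same construction of the loss mask as in the method
probability_lost = 1 - np.exp(-beta * dmax)
lost = np.random.uniform(0, 1, size=probability_lost.shape) < probability_lost

# What the code does: keep every point closer than dmax, ignoring `lost`
idx_stable = np.where(d < dmax)[0]

# What I would have expected: also discard the points flagged as lost
idx_expected = np.where((d < dmax) & ~lost)[0]

print(len(idx_stable), len(idx_expected))
```

With these placeholder values, `idx_expected` is a subset of `idx_stable`, and the difference between the two is exactly the set of "lost" points that the current code still keeps.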

I also found this related issue: #21
