
Question about the visualization of the result #1

Open
Ajeosi opened this issue Aug 12, 2017 · 5 comments

Comments


Ajeosi commented Aug 12, 2017

Thank you for your CamVid segmentation code.
I ran the code unchanged (10 epochs, Adam optimizer).

My question is: how can I properly visualize the predicted image, with the 12 classes shown in 12 colors? All I get is a strange image full of horizontal lines.

After running the code, imgs_mask_test.npy is a (233, 172800, 12) array. To visualize the first sample of the test set, I reshaped it to (360, 480, 12) and applied np.argmax(reshaped_image, axis=-1), so that each pixel of the resulting (360, 480) map holds the class with the highest probability. But when I plot that array with pyplot.imshow, I get a strange image with multiple horizontal lines.

Is there any good way to make a picture made of 12 colors representing each object, without horizontal lines?
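(For reference, here is a minimal sketch of the reshape-plus-argmax procedure I describe above; the random array stands in for one sample of imgs_mask_test.npy, and the 12-color palette is an arbitrary placeholder:)

```python
import numpy as np

# Stand-in for one sample of imgs_mask_test.npy: (172800, 12) class probabilities.
pred = np.random.rand(360 * 480, 12)

# Reshape back to the image grid and take the most probable class per pixel.
labels = np.argmax(pred.reshape(360, 480, 12), axis=-1)  # (360, 480), values 0..11

# Map each class index to an RGB color via a 12-entry palette (arbitrary here),
# giving a (360, 480, 3) uint8 image that pyplot.imshow can display directly.
palette = np.random.randint(0, 256, size=(12, 3), dtype=np.uint8)
rgb = palette[labels]
```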

(original image of test set)

(annotated image of test set)

(result with horizontal lines by prediction)

Thank you.


SteveIb commented Jul 17, 2018

Hi @Ajeosi, did you find a solution for this? I'm facing the same problem with 3 classes.


YJonmo commented Aug 22, 2018

Any solution?

@shabtayor

I was able to solve this issue by changing the Reshape arguments in the last layers of the model. Instead of the current model definition in unet.py, use the following (note the input_shape argument of the Keras Reshape, and that the Permute is commented out):

```python
reshape = Reshape((self.img_rows * self.img_cols, 12), input_shape=(self.img_rows, self.img_cols, 12))(conv9)
print("reshape shape:", reshape.shape)

# permute = Permute((2, 1))(reshape)
# print("permute shape:", permute.shape)

activation = Activation('softmax')(reshape)
```


After 10 epochs I got this result:
(result image)
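(A quick way to see why the now-commented-out Permute caused the horizontal stripes: once the pixel and class axes are swapped, a naive reshape back to (height, width, classes) interleaves pixel and class values. A minimal numpy sketch with assumed toy dimensions:)

```python
import numpy as np

h, w, c = 6, 8, 12
x = np.arange(h * w * c).reshape(h, w, c)        # per-pixel class scores, correct layout

flat = x.reshape(h * w, c)                       # like Reshape((h*w, c)): order preserved
assert np.array_equal(flat.reshape(h, w, c), x)  # round-trips cleanly

permuted = flat.T                                # like Permute((2, 1)): now (c, h*w)
scrambled = permuted.reshape(h, w, c)            # naive reshape mixes pixels and classes
assert not np.array_equal(scrambled, x)
```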


Ajeosi commented Aug 26, 2018

Thank you very much!!


hsinshihlun commented Feb 3, 2019

I trained for about 150 epochs and defined the test-visualization function below, but the results seem wrong, even though my colors are the same as the author's. @Ajeosi @shabtayor, how do you visualize your results?
```python
import os
import cv2
import numpy as np

Sky = [128,128,128]
Building = [128,0,0]
Pole = [192,192,128]
Road = [128,64,128]
Pavement = [60,40,222]
Tree = [128,128,0]
SignSymbol = [192,128,128]
Fence = [64,64,128]
Car = [64,0,128]
Pedestrian = [64,64,0]
Bicyclist = [0,128,192]
Unlabelled = [0,0,0]

COLOR_DICT = np.array([Sky, Building, Pole, Road, Pavement, Tree,
                       SignSymbol, Fence, Car, Pedestrian, Bicyclist, Unlabelled])

def test_multi_images(test_path):
    dirs = os.listdir(test_path)
    for filename in dirs:
        print(filename)  # was print(frame), but frame is undefined here
        img_path = os.path.join(test_path, filename)
        img = cv2.imread(img_path)
        img_color = img.copy()
        dims = img.shape
        img = img / 255
        img = np.reshape(img, (1, 360, 480, 3))
        y_prob = model.predict(img)
        y_classes = y_prob.argmax(axis=-1)
        output = np.reshape(y_classes, (360, 480))
        for j in range(dims[0]):  # height
            for i in range(dims[1]):  # width
                img_color[j, i] = COLOR_DICT[output[j, i]]
```

```
Epoch 147/220
323/323 [==============================] - 54s 167ms/step - loss: 0.0371 - acc: 0.9854
```
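(Side note: the per-pixel double loop above can be replaced by a single numpy fancy-indexing step, which is much faster and avoids indexing mistakes. A sketch using the same COLOR_DICT, with a random stand-in for the argmax output:)

```python
import numpy as np

# The 12-class palette from the comment above.
COLOR_DICT = np.array([
    [128,128,128], [128,0,0], [192,192,128], [128,64,128],
    [60,40,222], [128,128,0], [192,128,128], [64,64,128],
    [64,0,128], [64,64,0], [0,128,192], [0,0,0],
], dtype=np.uint8)

output = np.random.randint(0, 12, size=(360, 480))  # stand-in for the argmax result
img_color = COLOR_DICT[output]                       # (360, 480, 3) in one indexing step
```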

(result image)
