imrestore.py (1537 lines, 1340 loc, 70.8 KB)
#!/usr/local/bin/python
# -*- coding: utf-8 -*-
# (C) 2015-2017 David Toro <[email protected]>
'''
imrestore (oriented to retinal images):
Restore images by merging and stitching techniques.
Optimization techniques:
resize to smaller versions*
memoization*:
-persistence
-serialization and de-serialization
-caching
multitasking*:
-multiprocessing
-multithreading
lazy evaluations:
-load on demand
-use of weak references
Memory mapped files*
STEPS:
(1) Local features: Key-points and descriptors:
-(1.1) SIFT, SURF, ORB, etc
-ASIFT*
(2) Select main or base image from set for merging:
-First image, Most keypoints, Sorting, User input
(3) Matching (spatial):
-filter matches whose best distance is below 0.7 of the second best
-key-point classification
(4) selection in matching set: (pre selection of good matches)
(4.1) Best matches: for general purpose
(4.2) Entropy: used when the set is ensured to be of the same object
(the program checks this when it is not guaranteed).
(4.3) Histogram comparison: use if the set contains unwanted
perspectives or images that do not correspond to the target image.
(4.4) Custom function
(5) Calculate Homography
(6) Probability tests: (ensure that the matched images
correspond to each other)
(7) Stitching and Merging
(7.1) Histogram matching* (color)
(7.2) Segmentation*
(7.3) Alpha mask calculation*
(7.4) Overlay
(8) Overall filtering*:
Bilateral filtering
(9) Lens simulation for retinal photos*
* optional
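The distance-ratio filtering of steps (3)-(4.1) can be sketched as follows; this is a minimal, cv2-free illustration, where the (best, second-best) distance pairs are hypothetical stand-ins for real knnMatch results:

```python
# Lowe's ratio test: a match is kept only when its best distance is
# clearly below the second-best one, i.e. the match is unambiguous.
def ratio_filter(knn_matches, thresh=0.7):
    """knn_matches: list of (best_dist, second_best_dist) pairs."""
    return [i for i, (d1, d2) in enumerate(knn_matches) if d1 < d2 * thresh]

matches = [(10.0, 40.0), (30.0, 31.0), (5.0, 50.0)]
print(ratio_filter(matches))  # the ambiguous middle pair is dropped
```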
Notes:
Optimization techniques:
Resize to smaller versions: process smaller versions of the
inputs and convert the results back to the original versions.
This reduces processing time, standardizes how data is
processed (with fixed sizes), works within limited memory, and
lets algorithms that would otherwise break down on big images
be applied to them.
Memoization:
Persistence: save data to disk for later use.
Serialization and de-serialization: (serialization, in
Python, is referred to as pickling) convert live objects into
a format that can be recorded; (de-serialization, in Python
referred to as unpickling) restores serialized data into
"live" objects again, as if the object had been created in
the program, conserving its data from previous sessions.
Caching: saves the result of a function (depending or not on
its inputs) so that data is computed once and then retrieved
from the cached values when requested.
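A minimal sketch of these memoization ideas using only the standard library; the length computation is a hypothetical stand-in for an expensive feature computation:

```python
import os
import pickle
import tempfile
from functools import lru_cache

# caching: compute once, retrieve from the cache afterwards
@lru_cache(maxsize=None)
def descriptors(path):
    return len(path)  # stand-in for an expensive computation

# persistence: serialize (pickle) results to disk for later sessions
data = {"img1.png": descriptors("img1.png")}
fn = os.path.join(tempfile.mkdtemp(), "cache.pkl")
with open(fn, "wb") as f:
    pickle.dump(data, f)        # serialization (pickling)
with open(fn, "rb") as f:
    restored = pickle.load(f)   # de-serialization (unpickling)
print(restored)
```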
Multitasking:
Multiprocessing: pass tasks to several processes using the
computer's cores to achieve concurrency.
Multithreading: pass tasks to threads to use "clock-slicing"
of a processor to achieve "concurrency".
Lazy evaluations:
Load on demand: if data comes from an external local file, it is
loaded only when it needs to be computed; otherwise it is
deleted from memory, or cached in cases where it is extensively
used. For remote images (e.g. from a server or URL) or images in
an inadequate format, the data is downloaded and converted to a
numpy format in a temporary local place.
Use of weak references: in cases where the data is cached or
has not been garbage collected, data is retrieved through
weak references; if it is needed but has been garbage
collected, it is loaded again and assigned to the weak reference.
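A minimal sketch of the weak-reference pattern with the stdlib `weakref` module; the `Image` class is a hypothetical stand-in for cached image data:

```python
import weakref

class Image(object):
    def __init__(self, data):
        self.data = data

img = Image("pixels")
ref = weakref.ref(img)      # referencing img does not keep it alive
alive = ref() is not None   # True while img exists
del img                     # collected immediately under CPython refcounting
dead = ref() is None        # now the data would have to be loaded again
print(alive, dead)
```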
Memory mapped files:
Instantiate an object and keep it not in memory but in a file and
access it directly there. Used when memory is limited or data is
too big to fit in memory. Slowdowns are negligible for read-only
memory-mapped files (i.e. "r") considering the gain in free memory, but
they are a real drawback for write operations (i.e. "w", "r+", "w+").
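A minimal numpy sketch of a memory-mapped array; the file name and shape are illustrative:

```python
import os
import tempfile
import numpy as np

fn = os.path.join(tempfile.mkdtemp(), "big.dat")
# "w+": create a disk-backed array; the data lives in the file, not in RAM
m = np.memmap(fn, dtype=np.uint8, mode="w+", shape=(4, 4))
m[:] = 7
m.flush()          # make sure the data reaches the file
# "r": reopen read-only, the cheap case described above
r = np.memmap(fn, dtype=np.uint8, mode="r", shape=(4, 4))
print(int(r.sum()))
```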
Selection algorithms:
Histogram comparison - used to quickly identify the images that
most resemble a target
Entropy - used to select the best-focused images of the same
perspective of an object
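A minimal sketch of the entropy criterion (Shannon entropy of the gray-level histogram; well-focused images tend to score higher):

```python
import numpy as np

def entropy(gray):
    """Shannon entropy (bits) of an 8-bit grayscale image."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / float(hist.sum())
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

flat = np.zeros((8, 8), np.uint8)                      # one gray level
varied = np.arange(64, dtype=np.uint8).reshape(8, 8)   # 64 gray levels
print(entropy(flat), entropy(varied))  # 0.0 and log2(64) = 6.0
```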
Local features: Key-points and descriptors:
ASIFT: used to add a layer of robustness onto other local
feature methods to cover all affine transformations. ASIFT
was conceived to complete the invariance to transformations
offered by SIFT, which simulates zoom invariance using gaussian
blurring and normalizes rotation and translation. It does this by
simulating a set of views from the initial image, varying the
two camera-axis orientations (latitude and longitude angles),
hence the acronym Affine-SIFT; SIFT itself stands for Scale
Invariant Feature Transform.
Matching (spatial):
Calculate Homography: Used to find the transformation matrix
to overlay a foreground onto a background image.
Filtering:
Bilateral filtering: used to filter noise and make the image
colors more uniform (in some cases more cartoon-like)
Histogram matching (color): used to approximate the colors from the
foreground to the background image.
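A minimal numpy sketch of histogram matching via CDF mapping; the 2x2 arrays are toy data:

```python
import numpy as np

def hist_match(source, template):
    """Remap source intensities so their distribution follows the template's."""
    s_vals, s_idx, s_cnt = np.unique(source.ravel(), return_inverse=True,
                                     return_counts=True)
    t_vals, t_cnt = np.unique(template.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_cnt) / float(source.size)    # source CDF
    t_cdf = np.cumsum(t_cnt) / float(template.size)  # template CDF
    # for each source quantile, take the template value at the same quantile
    mapped = np.interp(s_cdf, t_cdf, t_vals)
    return mapped[s_idx].reshape(source.shape)

src = np.array([[0, 64], [128, 255]], np.uint8)
tpl = np.array([[10, 50], [90, 200]], np.uint8)
print(hist_match(src, tpl))
```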
Segmentation: detect and individualize the target objects (e.g. optic
disk, flares) to further process them or prevent them from being altered.
Alpha mask calculation: uses alpha transparency obtained with sigmoid
filters and binary masks from the segmentation to specify where an
algorithm should have more effect or no effect at all
(i.e. intensity driven).
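A minimal numpy sketch of an intensity-driven sigmoid alpha mask and the corresponding overlay; the center/spread values are illustrative:

```python
import numpy as np

def sigmoid_alpha(gray, center=128.0, spread=20.0):
    """Smooth 0..1 transparency: bright pixels get alpha near 1,
    dark pixels get alpha near 0."""
    return 1.0 / (1.0 + np.exp(-(gray.astype(float) - center) / spread))

def blend(back, fore, alpha):
    # overlay fore onto back, weighted pixel-wise by the alpha mask
    return alpha * fore + (1.0 - alpha) * back

gray = np.array([[0, 128], [128, 255]], np.uint8)
alpha = sigmoid_alpha(gray)
out = blend(np.full((2, 2), 50.0), np.full((2, 2), 200.0), alpha)
print(out.round(1))
```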
Stitching and Merging:
This is an application point where all the previous algorithms are
combined to stitch images so as to construct a scene from the
parts and merge them where they overlap, or even take advantage of
the overlaps to restore images by completing missing information or
enhancing poorly illuminated parts of the image. A drawback is that
if imprecise information is given or calculated, the result can be
as bad as, or worse than, the initial images.
Lens simulation for retinal photos: As its name implies, it is a
post-processing method applied for better appeal of the image
depending on the tastes of the user.
'''
from __future__ import division
from __future__ import print_function
from __future__ import absolute_import
# TODO install openCV 2.4.12 as described in http://stackoverflow.com/a/37283690/5288758
# to solve the error Process finished with exit code 139
# UPDATE: openCV 2.4.12 does not solve the error Process finished with exit code 139
from future import standard_library
standard_library.install_aliases()
from builtins import zip
from past.builtins import basestring
from builtins import object
__author__ = 'David Toro'
# needed for installing executable
import six ### DO NOT DELETE used for executable
import packaging ### DO NOT DELETE used for executable
import packaging.specifiers ### DO NOT DELETE used for executable
import packaging.requirements ### DO NOT DELETE used for executable
import appdirs ### DO NOT DELETE used for executable
import tkinter.filedialog ### DO NOT DELETE used for executable
# program imports
import os
import cv2
import warnings
import numpy as np
from RRtoolbox.tools.lens import simulateLens
from RRtoolbox.lib.config import MANAGER, FLOAT
from RRtoolbox.lib.image import hist_match
from RRtoolbox.lib.directory import getData, getPath, mkPath, increment_if_exits
from RRtoolbox.lib.cache import MemoizedDict, LazyDict
from RRtoolbox.lib.image import loadFunc, ImCoors
from RRtoolbox.lib.arrayops.mask import brightness, foreground, thresh_biggestCnt
from multiprocessing.pool import ThreadPool as Pool
from RRtoolbox.tools.selectors import hist_map, hist_comp, entropy
from RRtoolbox.tools.segmentation import get_bright_alpha, retinal_mask, get_layered_alpha
from RRtoolbox.lib.root import (TimeCode, glob, lookinglob, Profiler, VariableNotSettable,
NameSpace)
from RRtoolbox.lib.descriptors import Feature, inlineRatio
from RRtoolbox.lib.plotter import MatchExplorer, Plotim, fastplt
from RRtoolbox.lib.arrayops.filters import getBilateralParameters
from RRtoolbox.lib.arrayops.convert import getSOpointRelation, dict2keyPoint
from RRtoolbox.lib.arrayops.basic import (getTransformedCorners, transformPoint,
im2shapeFormat, contours2mask, pad_to_fit_H, overlay)
from RRtoolbox.shell import tuple_creator, string_interpreter
def check_valid(fn):
"""
Checks that a file is valid for loading.
:param fn: filename
:return: True for valid, False for invalid.
"""
test = os.path.isfile(fn)
if test and getData(fn)[-2].startswith("_"):
return False
return test
class ImRestore(object):
"""
Restore images by merging and stitching techniques.
:param filenames: list of images or string to path which uses glob filter in path.
Loads image array from path, url, server, string
or directly from numpy array (supports databases)
:param debug: (0) flag to print messages and debug data.
0 -> do not print messages.
1 -> print normal messages.
2 -> print normal and debug messages.
3 -> print all messages and show main results.
(consumes significantly more memory).
4 -> print all messages and show all stage results.
(consumes significantly more memory).
5 -> print all messages, show all results and additional data.
(consumes significantly more memory).
:param feature: (None) feature instance. It contains the configured
detector and matcher.
:param pool: (None) use pool Ex: 4 to use 4 CPUs.
:param cachePath: (None) saves memoization to specified path. This is
useful to save some computations and use them in next executions.
If True it creates the cache in current path.
.. warning:: Cached data is not guaranteed to work between different
configurations and this can lead to unexpected program
behaviour. If a different configuration will be used it
is recommended to clear the cache to recompute values.
:param clearCache: (0) clear cache flag.
* 0 do not clear.
* 1 check data integrity of previous session before use
* 2 re-compute data but other cache data is left intact.
* 3 All CachePath is cleared before use.
Notes: using cache can result in unexpected behaviour
if some configurations do not match the cached data.
:param loader: (None) custom loader function used to load images.
If None it loads the original images in color.
:param process_shape: (400,400) process shape, used to load pseudo images
to process features; the results are then converted back to the
original images. The smaller the image, the more memory and speed
are gained. If None it loads the original images to process the
features, but this can incur performance penalties if images are
too big and RAM memory is scarce.
:param load_shape: (None) custom shape used to load the images which are being merged.
:param baseImage: (None) First image to merge to.
* None -> takes first image from raw list.
* True -> selects image with most features.
* Image Name.
:param selectMethod: (None) Method to sort images when matching. This
way the merging order can be controlled.
* (None) Best matches.
* Histogram Comparison: Correlation, Chi-squared,
Intersection, Hellinger or any method found in hist_map
* Entropy.
* custom function of the form: rating,fn <-- selectMethod(fns)
:param distanceThresh: (0.75) filter matches by distance ratio.
:param inlineThresh: (0.2) filter homography by inlineratio.
:param rectangularityThresh: (0.5) filter homography by rectangularity.
:param ransacReprojThreshold: (5.0) maximum allowed reprojection error
to treat a point pair as an inlier.
:param centric: (False) tries to attach as many images as possible to
each matching. It is quicker since it does not have to process
too many match computations.
:param hist_match: (False) apply histogram matching to foreground
image with merge image as template
:param grow_scene: If True, allow the restored image to grow in shape if
necessary at the merging process.
:param expert: Path to an expert database. If provided it will use this data
to generate the mask used when merging to the restored image.
:param maskforeground: (False)
* True, limit features area using foreground mask of input images.
This mask is calculated to threshold a well defined object.
* Callable, Custom function to produce the foreground image which
receives the input gray image and must return the mask image
where the keypoints will be processed.
:param noisefunc: True to process noisy images or provide function.
:param save: (False)
* True, saves in path with name _restored_{base_image}
* False, does not save
* Image name used to save the restored image.
:param overwrite: If True and the destination filename for saving already
exists then it is replaced, else a new filename is generated
with an index "(unknown)_{index}.{extension}"
"""
def __init__(self, filenames, **opts):
self.profiler = opts.pop("profiler",None)
if self.profiler is None:
self.profiler = Profiler("ImRestore init")
self.log_saved = [] # keeps track of last saved file.
# for debug
self.verbosity = opts.pop("verbosity", 1)
################################## GET IMAGES ####################################
if filenames is None or len(filenames)==0: # if images is empty use demonstration
#test = MANAGER["TESTPATH"]
#if self.verbosity: print("Looking in DEMO path '{}'".format(test))
#fns = glob(test + "*",check=check_valid)
raise Exception("List of filenames is Empty")
elif isinstance(filenames, basestring):
# if string assume it is a path
if self.verbosity: print("Looking as '{}'".format(filenames))
fns = glob(filenames, check=check_valid)
elif not isinstance(filenames, basestring) and \
len(filenames) == 1 and "*" in filenames[0]:
filenames = filenames[0] # get string
if self.verbosity: print("Looking as '{}'".format(filenames))
fns = glob(filenames, check=check_valid)
else: # iterator containing data
fns = filenames # list file names
# check images
if not len(fns)>1:
raise Exception("list of images must be "
"greater than 1, got '{}'".format(len(fns)))
# for multiprocessing
self.pool = opts.pop("pool", None)
if self.pool is not None: # convert pool count to pool class
NO_CPU = cv2.getNumberOfCPUs()
if self.pool <= NO_CPU:
self.pool = Pool(processes = self.pool)
else:
raise Exception("pool of {} exceeds the "
"number of {} processors".format(self.pool, NO_CPU))
# for features
self.feature = opts.pop("feature", None)
# init detector and matcher to compute descriptors
if self.feature is None:
self.feature = Feature(pool=self.pool, debug=self.verbosity)
self.feature.config(name='a-sift-flann')
else:
self.feature.pool = self.pool
self.feature.debug = self.verbosity
# select method to order images to feed in superposition
self.selectMethod = opts.pop("selectMethod", None)
best_match_list = ("bestmatches", "best matches")
entropy_list = ("entropy",)
if callable(self.selectMethod):
self._orderValue = 3
elif self.selectMethod in hist_map:
self._orderValue = 2
elif self.selectMethod in entropy_list:
self._orderValue = 1
elif self.selectMethod in best_match_list or self.selectMethod is None:
self._orderValue = 0
else:
raise Exception("selectMethod '{}' not recognized".format(self.selectMethod))
# distance threshold to filter best matches
self.distanceThresh = opts.pop("distanceThresh", 0.75) # filter ratio
# threshold for inlineRatio
self.inlineThresh = opts.pop("inlineThresh", 0.2) # filter ratio
# ensures adequate value [0,1]
assert self.inlineThresh<=1 and self.inlineThresh>=0
# threshold for rectangularity
self.rectangularityThresh = opts.pop("rectangularityThresh", 0.5) # filter ratio
# ensures adequate value [0,1]
assert self.rectangularityThresh<=1 and self.rectangularityThresh>=0
# threshold for RANSAC reprojection
self.ransacReprojThreshold = opts.pop("ransacReprojThreshold", 5.0)
self.centric = opts.pop("centric", False) # tries to attach as many images as possible
# it is not memory efficient to compute descriptors from big images
self.process_shape = opts.pop("process_shape", (400, 400)) # use processing shape
self.load_shape = opts.pop("load_shape", None) # shape to load images for merging
self.minKps = 4 # minimum len of key-points to find Homography
self.histMatch = opts.pop("hist_match", False)
self.denoise=opts.pop("denoise", None)
############################## OPTIMIZATION MEMOIZEDIC ###########################
self.cachePath = opts.pop("cachePath", None)
self.clearCache = opts.pop("clearCache", 0)
self.expert = opts.pop("expert", None)
if self.expert is not None:
self.expert = MemoizedDict(self.expert) # convert path
# to select base image ahead of any process
baseImage = opts.pop("baseImage", None)
if isinstance(baseImage, basestring):
if baseImage not in fns:
base, path, name, ext = getData(baseImage)
if not path: # if name is incomplete look for it
base, path, name, ext = getData(fns[0])
try: # tries user input
# selected image must be in fns
baseImage = lookinglob(baseImage,
path= "".join((base, path)),
filelist=fns, raiseErr=True)#,ext=".*"
except Exception as e: # tries to find image based in user input
# generate informative error for the user
try:
# look in the file pattern path
baseImage = lookinglob(baseImage, raiseErr=True)#,ext=".*"
# append new image
fns.append(baseImage)
except Exception as e2:
e.args = e.args + e2.args + \
("A pattern could be '{}'".format("".join((name, ".*"))),)
raise e
self.baseImage = baseImage
if self.verbosity: print("No. images '{}'...".format(len(fns)))
# assign filenames
self.filenames = fns
# make loader
self.loader = opts.pop("loader", None) # BGR loader
# TODO replace loadFunc for class object so that it can provide load_shape and drop that argument
if self.loader is None: self.loader = loadFunc(1)
self._loader_cache = None # keeps last image reference
self._loader_params = None # keeps last track of last loading options to reload
self.save = opts.pop("save", False)
self.grow_scene = opts.pop("grow_scene", True)
self.maskforeground = opts.pop("maskforeground", False)
self.overwrite = opts.pop("overwrite", False)
# do a check of the options
if opts:
raise Exception("Unknown keyword(s) '{}'".format(list(opts.keys())))
# processing variables
self._feature_list = None
self._feature_dict = None
self.used = None # register used images
self.failed = None # register failed images
self.transformations = None # register transformations for each used image (back, fore)
self.comparison = None # used to order matches
self.restored = None # restored image
self.kps_base, self.desc_base = None, None # list of keypoints,
@property
def denoise(self):
return self._noisefunc
@denoise.setter
def denoise(self, value):
if value is False:
value = None
if value is True:
value = "mild"
if value in ("mild", "heavy", "normal", None) or callable(value):
self._noisefunc = value
else:
raise Exception("denoise '{}' not recognised".format(value))
@denoise.deleter
def denoise(self):
del self._noisefunc
@property
def feature_list(self):
if self._feature_list is None:
return self.compute_keypoints()
return self._feature_list
@feature_list.setter
def feature_list(self, value):
raise VariableNotSettable("feature_list is not settable")
@feature_list.deleter
def feature_list(self):
self._feature_list = None
@property
def feature_dict(self):
if self._feature_dict is None:
if self.cachePath is not None:
if self.cachePath is True:
self.cachePath = os.path.abspath(".") # MANAGER["TEMPPATH"]
if self.cachePath == "{temp}":
self.cachePath = self.cachePath.format(temp=MANAGER["TEMPPATH"])
memoized = MemoizedDict(os.path.join(self.cachePath, "descriptors"))
if self.verbosity: print("Cache path is in '{}'".format(memoized._path))
self._feature_dict = LazyDict(getter=self.compute_keypoint,
dictionary=memoized)
if self.clearCache==3: # All CachePath is cleared
self._feature_dict.clear()
if self.verbosity: print("Cache path cleared")
else:
self._feature_dict = LazyDict(getter=self.compute_keypoint)
if self.clearCache==2: # check data = 1, recompute = 2
# tell LazyDict to recompute data if key is requested
self._feature_dict.cached = False
else:
self._feature_dict.cached = True
return self._feature_dict
@feature_dict.setter
def feature_dict(self, value):
self._feature_dict = value
@feature_dict.deleter
def feature_dict(self):
self._feature_dict = None
def load_image(self, path=None, shape=None):
"""
load image from source
:param path: filename, url, .npy, server, image in string
:param shape: shape to convert image
:return: BGR image
"""
params = (path, shape)
if self._loader_cache is None or params != self._loader_params:
# load new image and cache it
img = self.loader(path) # load image
if shape is not None:
img = cv2.resize(img, shape)
self._loader_cache = img # this keeps a reference
self._loader_params = params
return img
else: # return cached image
return self._loader_cache
def compute_keypoint(self, path):
img = self.load_image(path, self.load_shape)
lshape = img.shape[:2]
try:
if self.cachePath is None:
point = Profiler(msg=path, tag="cached")
else:
point = Profiler(msg=path, tag="memoized")
# compare safely if path is in dictionary; this works for LazyDict,
# MemoizedDict or normal dictionaries
if self.feature_dict.cached is False or \
path not in self.feature_dict or \
self.clearCache==2 and path in self.feature_dict:
raise KeyError # clears entry from cache
kps, desc, pshape = self.feature_dict[path] # thread safe
if self.verbosity: print("'{}' is cached...".format(path))
if pshape is None:
if self.verbosity: print("Cache is of different format...")
raise ValueError
if self.clearCache==1 and pshape != self.process_shape: # check integrity
if self.verbosity: print("Cache check not passed...")
raise ValueError
else:
if self.verbosity: print("Cache checked...")
except (KeyError, ValueError) as e: # not memorized
point = Profiler(msg=path, tag="processed")
if self.verbosity: print("Processing features for '{}'...".format(path))
if lshape != self.process_shape:
img = cv2.resize(img, self.process_shape)
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# get features
if self.maskforeground is None:
kps, desc = self.feature.detectAndCompute(img)
else:
mask = None
if callable(self.maskforeground):
mask = self.maskforeground(img)
if self.maskforeground is True:
mask = foreground(img)
if self.verbosity > 4:
try:
if mask is None: # simulate a complete mask if None
mask2 = np.ones(img.shape[:2])*255
mask2[0,0] = 0 # add lowest value
fastplt(mask2, block=True,
title="Mask to detect features for '{}'".format(path))
else:
fastplt(overlay(img.copy(), mask*255, alpha=mask*0.8), block=True,
title="Mask to detect features for '{}'".format(path))
except Exception:
pass
kps, desc = self.feature.detectAndCompute(img, mask)
pshape = img.shape[:2] # get process shape
# to memoize
self.feature_dict[path] = kps, desc, pshape
# re-scale keypoints to original image
if lshape != pshape:
# this does not necessarily produce the same result
"""
# METHOD 1: using Transformation Matrix
H = getSOpointRelation(process_shape, lshape, True)
for kp in kps:
kp["pt"]=tuple(cv2.perspectiveTransform(
np.array([[kp["pt"]]]), H).reshape(-1, 2)[0])
"""
# METHOD 2:
rx, ry = getSOpointRelation(pshape, lshape)
for kp in kps:
x, y = kp["pt"]
kp["pt"] = x*rx, y*ry
kp["path"] = path
else:
for kp in kps: # be very careful, this should not appear in self.feature_dict
kp["path"] = path # add paths to key-points
# for profiling individual processing times
if self.profiler is not None: self.profiler._close_point(point)
return kps, desc, pshape
def compute_keypoints(self):
"""
Computes key-points from file names.
:return: self.feature_list
"""
#################### Local features: Key-points and descriptors #################
with TimeCode("Computing features...\n", profiler=self.profiler,
profile_point=("Computing features",),
endmsg="Computed feature time was {time}\n",
enableMsg=self.verbosity) as timerK:
fns = self.filenames
self._feature_list = [] # list of key points and descriptors
for index, path in enumerate(fns):
if self.verbosity: print("\rFeatures {}/{}...".format(index + 1, len(fns)), end=' ')
kps, desc, pshape = self.compute_keypoint(path)
# number of key-points, index, path, key-points, descriptors
self._feature_list.append((len(kps), index, path, kps, desc))
return self._feature_list
def pre_selection(self):
"""
This method selects the first restored image so that self.restored is initialized
with a numpy array and self.used should specify the used image preferably in
self.feature_list.
:return: None
"""
########################### Pre-selection from a set ############################
baseImage = self.baseImage # baseImage option should not be updated
# initialization and base image selection
if baseImage is None: # select first image as baseImage
_, _, baseImage, self.kps_base, self.desc_base = self.feature_list[0]
elif isinstance(baseImage, basestring):
self.kps_base, self.desc_base, _ = self.feature_dict[baseImage]
elif baseImage is True: # sort images
self.feature_list.sort(reverse=True) # descending: from most to fewest key-points
# select first for most probable
_, _, baseImage, self.kps_base, self.desc_base = self.feature_list[0]
else:
raise Exception("baseImage must be None, True or String")
if self.verbosity: print("baseImage is", baseImage)
self.used = [baseImage] # select first image path
# load first image for merged image
self.restored = self.load_image(baseImage, self.load_shape)
self.transformations = {baseImage:(None, None)}
def matching(self, exclude=None):
"""
Process matching.
:param exclude: list of excluded paths
:return: list of (number, path), dictionary of matches.
Where number = len(dictionary[path])
"""
if exclude is None:
exclude = self.used
comparison = self.comparison
if comparison is None:
fns = self.filenames # in this process fns should not be changed
comparison = True
########################## Order set initialization #############################
if self._orderValue: # obtain comparison with structure (value, path)
if self._orderValue == 1: # entropy
comparison = list(zip(*entropy(fns, loadfunc=loadFunc(1, self.process_shape),
invert=False)[:2]))
if self.verbosity: print("Configured to sort by entropy...")
elif self._orderValue == 2: # histogram comparison
comparison = hist_comp(fns, loadfunc=loadFunc(1, self.process_shape),
method=self.selectMethod)
if self.verbosity:
print("Configured to sort by '{}'...".format(self.selectMethod))
elif self._orderValue == 3:
comparison = self.selectMethod(fns)
if self.verbosity: print("Configured to sort by Custom Function...")
else:
raise Exception("DEBUG: orderValue {} does "
"not correspond to {}".format(self._orderValue, self.selectMethod))
elif self.verbosity: print("Configured to sort by best matches")
self.comparison = comparison
with TimeCode("Matching ...\n", profiler=self.profiler,
profile_point=("Matching",),
endmsg= "Matching overall time was {time}\n",
enableMsg= self.verbosity) as timerM:
################### remaining keypoints to match ####################
# initialize key-point and descriptor base list
kps_remain, desc_remain = [], []
for _, _, path, kps, desc in self.feature_list:
# append only those which are not in the base image
if path not in exclude:
kps_remain.extend(kps)
desc_remain.extend(desc)
if not kps_remain: # if no images remain to stitch, stop
return None, None
desc_remain = np.array(desc_remain) # convert descriptors to array
############################ Matching ###############################
# select only those with good distance (hamming, L1, L2)
raw_matches = self.feature.matcher.knnMatch(queryDescriptors = desc_remain,
trainDescriptors = self.desc_base, k = 2) #2
# with k=2, the two nearest matches are returned for each descriptor.
classified = {}
for m in raw_matches:
# Apply ratio test as D Lowe suggestion, L1 or L2 distance
try:
if m[0].distance < m[1].distance * self.distanceThresh:
m = m[0]
kp1 = kps_remain[m.queryIdx] # keypoint in query image
kp2 = self.kps_base[m.trainIdx] # keypoint in train image
key = kp1["path"] # ensured that key is not in used
if key in classified:
classified[key].append((kp1, kp2))
else:
classified[key] = [(kp1, kp2)]
except IndexError:
pass # if not raw_matches pass
########################## Order set ################################
# use only those in classified of histogram or entropy comparison
if self._orderValue:
ordered = [(val, path) for val, path
in comparison if path in classified]
else: # order with best matches
ordered = sorted([(len(kps), path)
for path, kps in list(classified.items())], reverse=True)
return ordered, classified
def restore(self):
"""
Restore using file names (self.filenames) with base image (self.baseImage
calculated from self.pre_selection()) and other configurations.
:return: self.restored
"""
self.pre_selection()
self.failed = [] # registry for failed images
self.comparison = None # comparison set to None to force recalculation
with TimeCode("Restoring ...\n", profiler=self.profiler,
profile_point=("Restoring",),
endmsg= "Restoring overall time was {time}\n",
enableMsg= self.verbosity) as timerR:
while True:
# matching
ordered, classified = self.matching()
# stop when there are no more matches
if ordered is None:
if self.verbosity:
if self.failed:
print("No image remains to merge...")
else:
print("All images have been merged...")
break
with TimeCode("Merging ...\n", profiler=self.profiler,
profile_point=("Merging",),
endmsg= "Merging overall time was {time}\n",
enableMsg= self.verbosity) as timerH:
# feed key-points in order according to order set
for rank, path in ordered:
point = Profiler(msg=path) # profiling point
######################### Calculate Homography ###################
mkp1, mkp2 = list(zip(*classified[path])) # probably good matches
if len(mkp1)>self.minKps and len(mkp2)>self.minKps:
# get only key-points
p1 = np.float32([kp["pt"] for kp in mkp1])
p2 = np.float32([kp["pt"] for kp in mkp2])
if self.verbosity > 4:
print("Calculating Homography for '{}'...".format(path))
# Calculate homography of fore over back
H, status = cv2.findHomography(p1, p2,
cv2.RANSAC, self.ransacReprojThreshold)
else: # not enough key-points
if self.verbosity > 1:
print("Not enough key-points for '{}'...".format(path))
H = None
# test that there is homography
if H is not None: # first test
# load fore image
fore = self.load_image(path, self.load_shape)
h, w = fore.shape[:2] # image shape
# get corners of fore projection over back
projection = getTransformedCorners((h, w), H)
c = ImCoors(projection) # class to calculate statistical data
lines, inlines = len(status), np.sum(status)
# ratio to determine how good fore is in back
inlineratio = inlineRatio(inlines, lines)
Test = inlineratio > self.inlineThresh \
and c.rotatedRectangularity > self.rectangularityThresh
text = "inlines/lines: {}/{}={}, " \
"rectangularity: {}, test: {}".format(
inlines, lines, inlineratio, c.rotatedRectangularity,
("failed", "succeeded")[Test])
if self.verbosity>1: print(text)
if self.verbosity > 3: # show matches
MatchExplorer("Match " + text, fore,
self.restored, classified[path], status, H)
####################### probability test #####################
if Test: # second test
if self.verbosity>1: print("Test succeeded...")
while path in self.failed: # clean path in fail registry
try: # safe against race conditions
self.failed.remove(path)
except ValueError:
pass
################### merging and stitching ################
self.merge(path, H)
# used for profiling
if self.profiler is not None:
self.profiler._close_point(point)
if not self.centric:
break
else:
self.failed.append(path)
else:
self.failed.append(path)
# if every classified image has failed then end the restoration
if set(classified.keys()) == set(self.failed):
# keys filtered out before the distance test are counted as failed too
self.failed.extend(list(set(self.used) ^
set(self.failed) ^
set(self.filenames)))
if self.verbosity:
print("Restoration finished, these images do not fit: ")
for p in self.failed:
print(p)
break
with TimeCode("Post-processing ...\n", profiler=self.profiler,
profile_point=("Post-processing",),
endmsg= "Post-processing overall time was {time}\n",
enableMsg= self.verbosity) as timerP:
processed = self.post_process_restoration(self.restored)
if processed is not None:
self.restored = processed
# profiling post-processing
self.time_postprocessing = timerP.time_end
#################################### Save image ##################################
if self.save:
self.save_image()
return self.restored # return merged image
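The two-part acceptance test inside restore() (the inlier ratio taken from RANSAC's status vector, plus the rectangularity of the projected corners) can be sketched as below. The function name and default thresholds are illustrative, and `inlineRatio` is assumed to reduce to the plain inlier fraction:

```python
def passes_geometry_test(status, rectangularity,
                         inline_thresh=0.2, rect_thresh=0.5):
    """Accept a homography only if both the RANSAC inlier ratio and the
    rectangularity of the projected corners exceed their thresholds.
    Sketch only: the name and default thresholds are assumptions."""
    lines = len(status)                       # total matches fed to RANSAC
    inlines = sum(int(s) for s in status)     # matches kept as inliers
    inlineratio = float(inlines) / lines if lines else 0.0
    return inlineratio > inline_thresh and rectangularity > rect_thresh
```

A homography that passes both checks is merged; one that fails either check sends the image to the failed registry for a later retry.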
def save_image(self, path = None, overwrite = None):
"""
Save the restored image to path.
:param path: filename, format string or path in which to save the image.
if path is not a string it is replaced with the string
"{path}_restored_{name}{ext}" and formatted with "{path}",
"{name}" and "{ext}" taken from the baseImage variable.
:param overwrite: If True and the destination filename already
exists it is replaced, else a new filename is generated
with an index "(unknown)_{index}.{extension}"
:return: saved path, status (True for success and False for failure)
"""
if path is None:
path = self.save
if overwrite is None:
overwrite = self.overwrite
bbase, bpath, bname, bext = getData(self.used[0])
if isinstance(path, basestring):
# format path if user has specified so
data = getData(self.save.format(path="".join((bbase, bpath)),
name=bname, ext=bext))
# complete any data lacking in path
for i, (n, b) in enumerate(zip(data, (bbase, bpath, bname, bext))):
if not n: data[i] = b
else:
data = bbase, bpath, "_restored_", bname, bext
# join parts to get the final string
fn = "".join(data)
mkPath(getPath(fn))
if not overwrite:
fn = increment_if_exits(fn)
if cv2.imwrite(fn, self.restored):
if self.verbosity: print("Saved: '{}'".format(fn))
self.log_saved.append(fn)
return fn, True
else:
if self.verbosity: print("'{}' could not be saved".format(fn))
return fn, False
def merge(self, path, H, shape = None):
"""
Merge image to main restored image.
:param path: file name to load image
:param H: Transformation matrix of image in path over restored image.
:param shape: custom shape to load image in path
:return: self.restored
"""
alpha = None
if shape is None:
shape = self.load_shape
fore = self.load_image(path, shape) # load fore image
if self.histMatch: # apply histogram matching
fore = hist_match(fore, self.restored)
if self.verbosity > 1: print("Merging...")
# process expert alpha mask if alpha was not provided by the user
if self.expert is not None:
# compute _restored_mask if it does not exist yet
if not hasattr(self, "_restored_mask"):
# from path/name.ext get only name.ext
bname = "".join(getData(self.used[-1])[-2:])
try:
bdata = self.expert[bname]
bsh = bdata["shape"]
bm_retina = contours2mask(bdata["coors_retina"], bsh)
bm_optic_disc = contours2mask(bdata["coors_optic_disc"], bsh)
bm_defects = contours2mask(bdata["coors_defects"], bsh)
self._restored_mask = np.logical_and(
np.logical_or(np.logical_not(bm_retina), bm_defects),
np.logical_not(bm_optic_disc))
except Exception as e:
#exc_type, exc_value, exc_traceback = sys.exc_info()
#lines = traceback.format_exception(exc_type, exc_value, exc_traceback)
warnings.warn("Error using expert {} to create self._restored_mask:"
" {}{}".format(bname, type(e), e.args))
# only if there is a _restored_mask
if hasattr(self, "_restored_mask"):
fname = "".join(getData(path)[-2:])
try:
fdata = self.expert[fname]
fsh = fdata["shape"]
fm_retina = contours2mask(fdata["coors_retina"], fsh)
#fm_otic_disc = contours2mask(fdata["coors_optic_disc"], fsh)
fm_defects = contours2mask(fdata["coors_defects"], fsh)
fmask = np.logical_and(fm_retina, np.logical_not(fm_defects))
self._restored_mask = maskm = np.logical_and(self._restored_mask, fmask)