diff --git a/README.md b/README.md
index b228ca1..8204a19 100644
--- a/README.md
+++ b/README.md
@@ -15,6 +15,28 @@ numpy - Python Math
[imgaug](https://github.com/aleju/imgaug) - Data Augmentation.
Panda3D - Rendering (only if you use PandaRender.py/PandaRender)
+## Run on Docker
+I have created a Docker image with the dependencies needed for this code. It needs a
+functioning CUDA 9 on the host PC and nvidia-docker installed. Realistically you will want
+to share a folder containing your data and code from the host to the Docker container. This can be
+done with the `-v` option of `docker run`. You might also want GUIs enabled for Docker.
+Have a look at [my blog on Docker usage](https://kusemanohar.wordpress.com/2018/10/03/docker-for-computer-vision-researchers/)
+from a computer-vision researcher/developer perspective.
+
+### Core Docker Usage
+```
+$(host) docker run --runtime=nvidia -it mpkuse/kusevisionkit:nvidia-cuda9-tf1.11-torch0.4 bash
+```
+
+### A More Realistic Usage
+```
+$(host) cd $HOME/docker_ws
+$(host) git clone
+$(host) cd ; # put your training data here
+$(host) docker run --runtime=nvidia -it -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix -v $HOME/docker_ws:/app -v /media/mpkuse/Bulk_Data/:/Bulk_Data mpkuse/kusevisionkit:nvidia-cuda9-tf1.11-torch0.4 bash
+$(docker) python noveou_train_netvlad_v3.py
+```
+
## How to train?
The main code lies in `noveou_train_netvlad_v3.py`. It mainly depends on `CustomNets.py` (contains network definitions, NetVLADLayer, data loading, data augmenters); on `CustomLosses.py` (contains loss functions
@@ -82,12 +104,18 @@ my [blog pose](https://kusemanohar.wordpress.com/2019/05/25/hands-on-tensorrt-on
for more details in this regard.
The following script in this repo will help you convert hdf5 Keras models
-to .uff. Beware, that this is a rapidly changing/evolving thing.
+to .uff. Beware that this is a rapidly changing/evolving area; look at NVIDIA's DevTalk forum under TX2 for the latest updates on this.
This info is accurate for May 2019.
```
python util_keras-h5-model_to-tensorflow-pb_to-nvinfer-uff.py --kerasmodel_h5file
```
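+
+For example, a hypothetical invocation (the filename is a placeholder; the flag
+expects the path to your Keras .h5 model):
+```
+python util_keras-h5-model_to-tensorflow-pb_to-nvinfer-uff.py --kerasmodel_h5file model.keras.h5
+```
+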
+I have also created a script (demo_tensorrt_uffparser.py) to quickly test the UFFParser on your desktop (x86). For this you need the TensorRT Python bindings. You may use my Docker image for a quick test:
+
+```
+docker run --runtime=nvidia -it -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix -v $HOME/docker_ws:/app -v /media/mpkuse/Bulk_Data/:/Bulk_Data mpkuse/kusevisionkit:tfgpu-1.12-tensorrt-5.1 bash
+```
+
## References
If you use my data/code or if you compare with my results, please do cite. Also cite
the NetVLAD paper whenever appropriate.
diff --git a/demo_keras_hdf5_model.py b/demo_keras_hdf5_model.py
index 6529318..6696855 100644
--- a/demo_keras_hdf5_model.py
+++ b/demo_keras_hdf5_model.py
@@ -37,7 +37,7 @@
#-----
-# Replace Input Layer's Dimensions
+# Replace Input Layer's Dimensions (optional)
im_rows = 480
im_cols = 752
im_chnls = 3
@@ -52,7 +52,7 @@
# test new model on a random input image. Be sure to check the input range of the model, for example [-1,1] or [-0.5,0.5] or [0,255] etc.
X = np.random.rand(new_input_shape[0], new_input_shape[1], new_input_shape[2], new_input_shape[3] )
-# --- You might want to do any of these normalizations depending on which model files you use.
+# --- You might want to do any of these normalizations depending on which model files you use.
# i__image = np.expand_dims( cv_image.astype('float32'), 0 )
# i__image = (np.expand_dims( cv_image.astype('float32'), 0 ) - 128.)/255. [-0.5,0.5]
#i__image = (np.expand_dims( cv_image.astype('float32'), 0 ) - 128.)*2.0/255. #[-1,1]
diff --git a/test_tensorrt_uffparser.py b/demo_tensorrt_uffparser.py
similarity index 77%
rename from test_tensorrt_uffparser.py
rename to demo_tensorrt_uffparser.py
index 8184fea..de928de 100644
--- a/test_tensorrt_uffparser.py
+++ b/demo_tensorrt_uffparser.py
@@ -7,8 +7,16 @@
uff_fname = 'output_nvinfer.uff'
with trt.Builder( TRT_LOGGER) as builder, builder.create_network() as network, trt.UffParser() as parser:
+ # Set the input and output node names correctly, as per your model's .uff
parser.register_input("input_1", (3,240,320) )
# parser.register_output( "conv_pw_5_relu/Relu6" )
parser.register_output( "net_vlad_layer_1/l2_normalize_1" )
parser.parse( LOG_DIR+'/'+uff_fname, network )
pass
+
+
+# TODO
+# you need pycuda for this
+# 1. Load Image to GPU with cudamemcpy HtoD.
+# 2. Execute
+# 3. cudamemcpy DtoH
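+
+# A minimal sketch of the above TODO (an assumption-laden sketch, not a
+# definitive implementation): it needs pycuda and numpy, and it rebuilds the
+# engine in its own `with` block, since builder/network above are freed on exit.
+import numpy as np
+import pycuda.autoinit   # creates a CUDA context on import
+import pycuda.driver as cuda
+
+with trt.Builder( TRT_LOGGER ) as builder, builder.create_network() as network, trt.UffParser() as parser:
+    parser.register_input( "input_1", (3,240,320) )
+    parser.register_output( "net_vlad_layer_1/l2_normalize_1" )
+    parser.parse( LOG_DIR+'/'+uff_fname, network )
+    builder.max_workspace_size = 1 << 28
+    with builder.build_cuda_engine( network ) as engine, engine.create_execution_context() as context:
+        # Host buffers: a random input of the registered shape; output sized from the engine binding.
+        h_input  = np.random.rand( 3,240,320 ).astype( np.float32 )
+        h_output = np.empty( trt.volume( engine.get_binding_shape(1) ), dtype=np.float32 )
+        # Device buffers
+        d_input  = cuda.mem_alloc( h_input.nbytes )
+        d_output = cuda.mem_alloc( h_output.nbytes )
+        # 1. Load image to GPU with cudamemcpy HtoD
+        cuda.memcpy_htod( d_input, h_input )
+        # 2. Execute
+        context.execute( batch_size=1, bindings=[ int(d_input), int(d_output) ] )
+        # 3. cudamemcpy DtoH
+        cuda.memcpy_dtoh( h_output, d_output )
+        print( 'output shape: ' + str( h_output.shape ) )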