tensorrt on desktop pc and test script that works on desktop for uffparser
mpkuse committed May 31, 2019
1 parent 0d8bbfc commit b4125b2
Showing 3 changed files with 39 additions and 3 deletions.
30 changes: 29 additions & 1 deletion README.md
@@ -15,6 +15,28 @@ numpy - Python Math <br/>
[imgaug](https://github.com/aleju/imgaug) - Data Augmentation. <br/>
Panda3D - Rendering (only if you use PandaRender.py/PandaRender)<br/>

## Run on Docker
I have created a docker image with the dependencies needed for this code. It needs a
working CUDA 9 installation on the host PC and nvidia-docker installed. Realistically, you will want
to share a folder containing your data and code from the host to the docker container; this can be
done with the `-v` option of docker. You might also want to have GUIs enabled for docker.
Have a look at [my blog for docker usage](https://kusemanohar.wordpress.com/2018/10/03/docker-for-computer-vision-researchers/)
from a computer-vision researcher/developer perspective.

### Core Docker Usage
```
$(host) docker run --runtime=nvidia -it mpkuse/kusevisionkit:nvidia-cuda9-tf1.11-torch0.4 bash
```

### A more realistic Usage
```
$(host) cd $HOME/docker_ws
$(host) git clone <this repo>
$(host) cd <your data dir>   # put your training data here
$(host) docker run --runtime=nvidia -it -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix -v $HOME/docker_ws:/app -v /media/mpkuse/Bulk_Data/:/Bulk_Data mpkuse/kusevisionkit:nvidia-cuda9-tf1.11-torch0.4 bash
$(docker) python noveou_train_netvlad_v3.py
```


## How to train?
The main code lies in `noveou_train_netvlad_v3.py`. It mainly depends on `CustomNets.py` (contains network definitions, the NetVLADLayer, data loading, and data augmenters); on `CustomLosses.py` (contains loss functions
@@ -82,12 +104,18 @@ my [blog post](https://kusemanohar.wordpress.com/2019/05/25/hands-on-tensorrt-on
for more details in this regard.

The following script in this repo will help you convert Keras hdf5 models
to .uff. Beware that this is a rapidly changing/evolving thing; look at Nvidia's DevTalk forum under TX2 for the latest updates on this.
This info is accurate as of May 2019.
```
python util_keras-h5-model_to-tensorflow-pb_to-nvinfer-uff.py --kerasmodel_h5file <path to hdf5 file>
```
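
For orientation, the hdf5 → frozen TensorFlow graph → .uff pipeline looks roughly like the following. This is a minimal sketch, not the repo's script: it assumes standalone Keras on a TensorFlow 1.x backend and the `uff` Python package that ships with TensorRT; the model path and output node below are placeholders, and a model with custom layers (e.g. the NetVLADLayer) would additionally need `custom_objects` when loading.

```
# Minimal sketch (not the repo's script) of the hdf5 -> frozen .pb -> .uff conversion.
# Assumes standalone Keras on a TensorFlow 1.x backend and the `uff` package from TensorRT.
import tensorflow as tf
import keras.backend as K
from keras.models import load_model
import uff

K.set_learning_phase(0)                        # inference mode (freezes dropout/batchnorm)
model = load_model( 'model.keras.h5' )         # placeholder path; custom layers need custom_objects=...
output_node = model.output.op.name             # e.g. 'net_vlad_layer_1/l2_normalize_1'

sess = K.get_session()
frozen = tf.graph_util.convert_variables_to_constants( sess, sess.graph_def, [output_node] )
frozen = tf.graph_util.remove_training_nodes( frozen )

# Serialize the frozen graph to UFF, the format trt.UffParser understands.
uff.from_tensorflow( frozen, [output_node], output_filename='output_nvinfer.uff' )
```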

I have also created a script (`test_tensorrt_uffparser.py`) to quickly test the UFFParser on your desktop (x86). For this you need the TensorRT Python bindings. You may use my docker image for a quick test:

```
docker run --runtime=nvidia -it -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix -v $HOME/docker_ws:/app -v /media/mpkuse/Bulk_Data/:/Bulk_Data mpkuse/kusevisionkit:tfgpu-1.12-tensorrt-5.1 bash
```

## References
If you use my data/code or if you compare with my results, please do cite. Also cite
the NetVLAD paper whenever appropriate.
4 changes: 2 additions & 2 deletions demo_keras_hdf5_model.py
@@ -37,7 +37,7 @@


#-----
# Replace Input Layer's Dimensions (optional)
im_rows = 480
im_cols = 752
im_chnls = 3
@@ -52,7 +52,7 @@
# test new model on a random input image. Be sure to check the input range of the model, for example [-1,1] or [-0.5,0.5] or [0,255] etc.
X = np.random.rand(new_input_shape[0], new_input_shape[1], new_input_shape[2], new_input_shape[3] )

# --- You might want to do any of these normalizations depending on which model files you use.
# i__image = np.expand_dims( cv_image.astype('float32'), 0 )
# i__image = (np.expand_dims( cv_image.astype('float32'), 0 ) - 128.)/255. [-0.5,0.5]
#i__image = (np.expand_dims( cv_image.astype('float32'), 0 ) - 128.)*2.0/255. #[-1,1]
8 changes: 8 additions & 0 deletions test_tensorrt_uffparser.py → demo_tensorrt_uffparser.py
@@ -7,8 +7,16 @@
uff_fname = 'output_nvinfer.uff'

with trt.Builder( TRT_LOGGER) as builder, builder.create_network() as network, trt.UffParser() as parser:
    # Set inputs and outputs correctly as per the model-uff
    parser.register_input("input_1", (3,240,320) )
    # parser.register_output( "conv_pw_5_relu/Relu6" )
    parser.register_output( "net_vlad_layer_1/l2_normalize_1" )
    parser.parse( LOG_DIR+'/'+uff_fname, network )
    pass


# TODO
# you need pycuda for this
# 1. Load Image to GPU with cudamemcpy HtoD.
# 2. Execute
# 3. cudamemcpy DtoH
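
The TODO above would look roughly as follows with the TensorRT 5.x Python bindings and pycuda. This is a hedged sketch, not code from this repo: it assumes one input and one output binding (indices 0 and 1), batch size 1, and that the lines run inside the `with` block above so that `builder` and `network` are still alive.

```
# Rough sketch of the TODO steps (TensorRT 5.x + pycuda); not part of this repo.
# Meant to run inside the `with` block above, after parser.parse() succeeds.
import numpy as np
import pycuda.autoinit   # creates and activates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt   # already imported earlier in the script

builder.max_workspace_size = 1 << 28                  # ~256 MB scratch space (arbitrary choice)
engine = builder.build_cuda_engine( network )

with engine.create_execution_context() as context:
    # Host buffers. Input shape matches register_input("input_1", (3,240,320)).
    h_input  = np.random.rand( 3, 240, 320 ).astype( np.float32 )
    h_output = np.empty( trt.volume( engine.get_binding_shape(1) ), dtype=np.float32 )

    # Device buffers.
    d_input  = cuda.mem_alloc( h_input.nbytes )
    d_output = cuda.mem_alloc( h_output.nbytes )

    cuda.memcpy_htod( d_input, h_input )                  # 1. Load image to GPU (HtoD)
    context.execute( 1, [int(d_input), int(d_output)] )   # 2. Execute with batch size 1
    cuda.memcpy_dtoh( h_output, d_output )                # 3. Copy result back (DtoH)

    print( 'output descriptor shape:', h_output.shape )
```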
