Can you please give me some suggestions for my use case #63
Comments
Hi @varun-tangoit, I think the following repository is helpful for real-time face detection. If you really have to use mtcnn, you need to write code to parse and split the mtcnn graph. However, splitting may slow things down in complicated models.
Hi, thanks for the response. I haven't seen any issues with detection alone, but when I try to run both the mtcnn model and recognition it gets very slow. I think the facenet recognition model is the issue, so I need to split the graph and multi-thread both models. Can you please give me more detail for the recognition part? I don't know how to apply the same procedure to it.
Hi @varun-tangoit, please refer to the following URL for splitting the graph. Before: The principle is straightforward, but in complex models you will be fighting a lot of errors due to the unclear computation order of the nodes. For threading, you can pass the 1st graph's outputs to the 2nd graph's inputs through a thread-safe queue (see the sketch below). BTW,
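A minimal sketch of that threading pattern, assuming TF 1.x sessions for the two split graphs; the tensor names (`boxes:0`, `embeddings:0`, `input:0`) and the `crop_faces` helper are placeholders, not names from this repo:

```python
import queue
import threading

frame_q = queue.Queue(maxsize=5)   # camera frames -> 1st graph (detection)
face_q = queue.Queue(maxsize=5)    # 1st graph outputs -> 2nd graph inputs

def detection_worker(sess1):
    while True:
        frame = frame_q.get()
        if frame is None:                       # sentinel: shut the pipeline down
            face_q.put(None)
            break
        boxes = sess1.run('boxes:0', feed_dict={'input:0': frame})
        face_q.put((frame, boxes))

def recognition_worker(sess2, crop_faces):
    while True:
        item = face_q.get()
        if item is None:
            break
        frame, boxes = item
        faces = crop_faces(frame, boxes)        # placeholder: crop/align detected faces
        sess2.run('embeddings:0', feed_dict={'input:0': faces})

# threading.Thread(target=detection_worker, args=(sess1,), daemon=True).start()
# threading.Thread(target=recognition_worker, args=(sess2, crop_faces), daemon=True).start()
```

The bounded queues (`maxsize=5`) apply back-pressure, so the detection thread cannot run far ahead of the slower recognition stage.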
Yes @naisy, when I run a sample video with a group of people in it, it recognizes only one person per frame and each frame takes too much time; I get only around 2-3 FPS with both mtcnn and facenet. I'm referring to the GitHub repo below.
Hi @varun-tangoit, I don't use facenet, but facenet seems to be slow because it uses ResNet. I think it would be better to change this to a smaller CNN in order to increase the speed. The input size of facenet seems to be 160x160; how about changing this to 80x80 and training?
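Purely as an illustration of the "smaller CNN at a smaller input size" idea (this is not the FaceNet architecture, and the layer sizes are arbitrary), a compact Keras embedding network for 80x80 crops might look like the following; it would still need to be trained on your own face data with a triplet or classification loss:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def small_embedding_net(input_shape=(80, 80, 3), embedding_dim=128):
    inputs = layers.Input(shape=input_shape)
    x = inputs
    for filters in (32, 64, 128, 256):                    # four light conv blocks
        x = layers.Conv2D(filters, 3, padding='same', activation='relu')(x)
        x = layers.MaxPooling2D()(x)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dense(embedding_dim)(x)
    # L2-normalize so embeddings can be compared with cosine/Euclidean distance
    outputs = layers.Lambda(lambda t: tf.math.l2_normalize(t, axis=1))(x)
    return models.Model(inputs, outputs)

model = small_embedding_net()
model.summary()
```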
Hi @varun-tangoit, when I did facial expression classification with a Keras Xception model instead of facenet, I set the input size to 48x48.
Thanks, but sorry, we have already completed our code with the facenet and mtcnn models; everything works fine on a local CPU and on an Alienware 1080 Ti desktop GPU. Only when we try to integrate it on the Jetson TX2 do we have an issue, so somehow we need to optimize it for the edge device.
Are there any other approaches to improve the performance? I found another solution, TensorRT, but I can't find any examples of face detection/recognition with a Python implementation.
Can you reduce the facenet input size and retrain it? I have been trying to run DeepLab v3 on an RC car recently; by reducing the input size at training time from 513x513 to 160x120, I was able to speed it up from 40 FPS to 230 FPS on a GTX 1060.
Sorry, we are using a pretrained model. In the same way, is darkflow object detection also taking too much time?
Indeed, I understand the situation. This is the way I am currently running Mask R-CNN. I tried implementing this method in ssd_mobilenet_v1 as well, but because it turned out slower than the split version, I dropped it. Since the CPU/GPU usage cannot be balanced well, it will be slower than a perfectly split model, but it is still faster than doing nothing. About ssd_mobilenet_v1: you can try multi-detection ssd_mobilenet_v1. Add the following files in my repo:
lib/mtload_graph_nms_v2.py
run_stream.py
config.yml
And run it. The Mask R-CNN code works in this way. I think it seems laggy because the frame intervals are not uniform, but this is a simple way to implement parallel processing.
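A minimal sketch of this kind of parallel processing (not the author's Mask R-CNN code; the queue sizes, worker count, and the `run_inference` callable are assumptions): several worker threads pull frames from one queue and push results whenever they finish, which is why the output frame intervals come out uneven.

```python
import queue
import threading

in_q = queue.Queue(maxsize=8)      # (frame_index, frame) pairs from the capture loop
out_q = queue.Queue()              # (frame_index, result) pairs, in completion order

def worker(run_inference):         # run_inference: placeholder callable (one frame -> result)
    while True:
        idx, frame = in_q.get()
        if frame is None:          # sentinel: stop this worker
            break
        out_q.put((idx, run_inference(frame)))   # completion order != capture order

# workers = [threading.Thread(target=worker, args=(run_inference,), daemon=True)
#            for _ in range(2)]
# for w in workers:
#     w.start()
```

Because the workers finish at different times, results arrive in bursts; keeping the frame index lets the display loop reorder or drop late frames if needed.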
Another possibility is to incorporate tracking, as in the sketch below.
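One hedged way to do that (my own sketch, not something described in this thread): run the expensive detector only on every Nth frame and update cheap OpenCV trackers in between. This assumes `opencv-contrib-python` for `cv2.TrackerKCF_create`, and `detect_faces` is a placeholder for your detector.

```python
import cv2

DETECT_EVERY = 10          # run the heavy detector on every 10th frame only
trackers = []

def process(frame, frame_idx, detect_faces):
    """Return face boxes: fresh detections on keyframes, tracker updates otherwise."""
    global trackers
    boxes = []
    if frame_idx % DETECT_EVERY == 0:
        trackers = []
        for (x, y, w, h) in detect_faces(frame):      # expensive detection
            tracker = cv2.TrackerKCF_create()
            tracker.init(frame, (x, y, w, h))
            trackers.append(tracker)
            boxes.append((x, y, w, h))
    else:
        for tracker in trackers:                      # cheap tracker update
            ok, box = tracker.update(frame)
            if ok:
                boxes.append(tuple(int(v) for v in box))
    return boxes
```

Recognition (facenet) would then only need to run on the keyframes, since the tracked boxes keep their identities between detections.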
Sorry, I've been busy catching up with work. Yes, we are working on optimizing our code for the Jetson. Is it possible to achieve a higher FPS on the Jetson for YOLOv2 darkflow? I'm referring to this GitHub repo: https://github.com/thtrieu/darkflow.
In my experience, tiny-yolo is fast. For example, if you need to exceed 40 FPS, you would use ssd_mobilenet_v1 (split-model or TensorRT/C++) and discard mtcnn/facenet (of course, this needs training on your own datasets). I am aware that the current problem lies in the slow execution speed of facenet.
@naisy, yes, tiny-yolo would improve performance, but its accuracy won't be as good, right?
Yes. The accuracy of tiny-yolo does not seem good.
Hi,
I'm very impressed with this approach, and I plan to do the same multi-threading with a split model for my use case. I'm currently working on face detection with recognition, using both the mtcnn model and the facenet model; when I run them both on a Jetson TX2, the performance is around 7-8 FPS. In the same way, I have been trying YOLO object detection and it's around 3-4 FPS. Can you please suggest how I should proceed with these problems? I'm stuck on this, and any help would be much appreciated.
Thanks