High CPU usage during evaluation #29

Open
HanaRo opened this issue Mar 18, 2023 · 8 comments

Comments

@HanaRo

HanaRo commented Mar 18, 2023

First, thank you for sharing the source code and datasets. It is really excellent work.

However, when I followed the manual and ran the evaluation with sh leaderboard/scripts/run_evaluation.sh, I found that the Python process had extremely high CPU usage (roughly 6000%).
I also checked the GPU usage (a single 4090): only 2.3G of memory is used and the volatile GPU-util averages about 5%, so the GPU is basically idle. Does that mean the model is running on the CPU only?
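
For reference, a rough sketch of how I checked the numbers above, using standard Linux/NVIDIA tools rather than anything from this repo:

```sh
# Per-process CPU usage; a multithreaded process can report far more than 100%
top -b -n 1 | grep -E "python|CarlaUE4"

# GPU memory usage and volatile GPU-util per process
nvidia-smi
```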

So I wonder whether this is normal, and whether there are any suggestions on how to deal with it? Thank you!

@sqb2145

sqb2145 commented Mar 18, 2023

Hi,

Could you please help me with running the run_evaluation.sh file, e.g., how you edited the file and where you placed the ckpt file?

Thank You!

@penghao-wu
Collaborator

Hi, your 2.3G GPU memory usage is expected. In my case, the Python script has ~180% CPU usage and CarlaUE4 has ~200% CPU usage.

@HanaRo
Author

HanaRo commented Mar 19, 2023

> Hi, your 2.3G GPU memory usage is expected. In my case, the Python script has ~180% CPU usage and CarlaUE4 has ~200% CPU usage.

@WPH-commit
Thanks for the information!
But I'm still confused about why it uses so much CPU rather than the GPU in my case. Since my evaluation is running on a shared server, it's kind of unaffordable for a single process to use up the CPU.
Please let me know if I need to provide more information. Thanks!

@HanaRo
Author

HanaRo commented Mar 19, 2023

> Hi,
>
> Could you please help me with running the run_evaluation.sh file, e.g., how you edited the file and where you placed the ckpt file?
>
> Thank You!

@sqb2145
In my case, I edited 'CARLA_ROOT' and 'TEAM_CONFIG' in the script.
For the ckpt file, I put it at '${TCP_ROOT}/xxx/xx.ckpt', so 'TEAM_CONFIG' is set to 'xxx/xx.ckpt'.
Hope this helps :)
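
For illustration, a minimal sketch of what those edits might look like; the paths are placeholders, and I'm assuming the variables are plain exports near the top of run_evaluation.sh:

```sh
# leaderboard/scripts/run_evaluation.sh (placeholder paths, adjust to your setup)
export CARLA_ROOT=/path/to/your/CARLA   # your local CARLA installation
export TEAM_CONFIG=xxx/xx.ckpt          # ckpt path, relative to the TCP root
```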

@penghao-wu
Collaborator

penghao-wu commented Mar 19, 2023

> But I'm still confused about why it uses so much CPU rather than the GPU in my case. Since my evaluation is running on a shared server, it's kind of unaffordable for a single process to use up the CPU.

Could you check the GPU usage for the Carla server? It should be around 1G.

@HanaRo
Author

HanaRo commented Mar 19, 2023

> But I'm still confused about why it uses so much CPU rather than the GPU in my case. Since my evaluation is running on a shared server, it's kind of unaffordable for a single process to use up the CPU.
>
> Could you check the GPU usage for the Carla server? It should be around 1G.

@WPH-commit
Yes, the CARLA server basically runs as you said, with about 1G of memory, ~200% CPU, and about 30% GPU utilization.

@sqb2145

sqb2145 commented Mar 19, 2023

> Hi,
> Could you please help me with running the run_evaluation.sh file, e.g., how you edited the file and where you placed the ckpt file?
> Thank You!

> @sqb2145 In my case, I edited 'CARLA_ROOT' and 'TEAM_CONFIG' in the script. For the ckpt file, I put it at '${TCP_ROOT}/xxx/xx.ckpt', so 'TEAM_CONFIG' is set to 'xxx/xx.ckpt'. Hope this helps :)

Hi,
Thank you for your reply. I think the issue I'm having isn't because of placing the ckpt file in the wrong directory. I'll try to figure it out. Thanks a lot! :)

@Naive-Bayes

> Hi, your 2.3G GPU memory usage is expected. In my case, the Python script has ~180% CPU usage and CarlaUE4 has ~200% CPU usage.

> @WPH-commit Thanks for the information! But I'm still confused about why it uses so much CPU rather than the GPU in my case. Since my evaluation is running on a shared server, it's kind of unaffordable for a single process to use up the CPU. Please let me know if I need to provide more information. Thanks!

I think this problem is caused not by the model but by the Carla Simulator. Open-loop training is indeed much faster than evaluation, but when we do closed-loop evaluation it is very slow. Unfortunately, I don't have a solution for accelerating the simulation process either, sad.
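
As a general aside (not a fix anyone suggested above): if the concern is only the CPU footprint on a shared server, a common mitigation is to cap how many cores/threads the run may use. This bounds the footprint but does not speed anything up. A minimal sketch, assuming a Linux host:

```sh
# Cap BLAS/OpenMP worker threads used by the Python process (generic, not repo-specific)
export OMP_NUM_THREADS=8

# Pin the whole evaluation (and any child processes it launches) to cores 0-7
taskset -c 0-7 sh leaderboard/scripts/run_evaluation.sh
```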
