Ninja load average -l option needs documenting #630

Open
rgeary1 opened this issue Aug 4, 2013 · 10 comments

Comments

@rgeary1
Contributor

rgeary1 commented Aug 4, 2013

Low-priority issue, but there's no mention in the ninja manual of the load average -l feature. It's listed in the ninja --help output, but there's no indication of what the range of the value N is.

@nyh

nyh commented Jul 31, 2016

I second this request. I just tried this option, and it's not clear what it does, whether it does anything at all, or how it relates to the "-j" option when both are used together. In particular, I used "-j 30 -l 4" and saw the load increase to 10. My only guess is that jobs are started so quickly that we can't measure any load increase (if we use the 5-minute average) until more than 10 jobs have started.

If you're curious why I'm using "-l", it is an attempt to solve a problem when compiling with ccache&distcc: With ccache&distcc, we start 30 jobs in parallel but most of them will end up running on other machines (with 30 being the sum of the number of cpus on all machines). The problem is that all these 30 start by doing something (run the C preprocessor) on the local machine, and when we start them all in parallel, the machine is overloaded. I wanted to start these 30 jobs gently - so that eventually we do have 30 of them running in parallel (but most not consuming CPU because they are compiling on another machine) but we don't start all of them at once. I was hoping that "-l" will help me do it, but it's not quite helping as I hoped.

@iulian3144

iulian3144 commented Apr 3, 2020

Amazing that this is still open after 7 years :). I've also stumbled upon this and unfortunately only found an explanation in the code:
build.cc@RealCommandRunner::CanRunMore()
Looking at the GetLoadAverage() function, it seems it should return a value anywhere between 0 and the number of CPU threads. So on an 8-thread system, a value of 8 would mean a 100% load average, while e.g. 1 means a 12.5% load average (you get the point). Hope this helps someone.

EDIT: The commit that introduced this change (-l option) describes it pretty well.
b50f7d1 Add -l N option to limit the load average.
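
The check described above can be sketched as a small shell function. This is a hypothetical illustration of the comparison ninja makes (its actual implementation is C++ in build.cc), assuming that, as in CanRunMore(), a limit of 0 or less means "no limit":

```shell
# Hypothetical sketch of the -l check: a new job may start while the
# current load average is below the limit; a limit of 0 disables the cap.
# can_run_more LOAD LIMIT -> exit status 0 if another job may start
can_run_more() {
  awk -v load="$1" -v limit="$2" 'BEGIN { exit !(limit <= 0 || load < limit) }'
}

can_run_more 3.5 4 && echo "load 3.5, limit 4: start another job"
can_run_more 8.2 4 || echo "load 8.2, limit 4: hold back"
```

By this measure, on an 8-thread machine `-l 8` corresponds to roughly 100% utilization.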

@Makogan

Makogan commented Jul 17, 2021

I would also like this feature to be better documented.

@null77

null77 commented Aug 17, 2021

+1

@bevinhex

If you build a skyscraper but don't provide an elevator, what was the point of building it? If this feature is low priority, shouldn't the whole thing have been delayed so the implementation and documentation could come along together?
Oh, I forgot: there was actually a building whose engineer forgot to put an elevator in. Life is crazy.

@hadrielk

This issue is almost 9 years old, but if anyone happens upon it and sees this:

If you're curious why I'm using "-l", it is an attempt to solve a problem when compiling with ccache&distcc: With ccache&distcc, we start 30 jobs in parallel but most of them will end up running on other machines (with 30 being the sum of the number of cpus on all machines). The problem is that all these 30 start by doing something (run the C preprocessor) on the local machine, and when we start them all in parallel, the machine is overloaded. I wanted to start these 30 jobs gently - so that eventually we do have 30 of them running in parallel (but most not consuming CPU because they are compiling on another machine) but we don't start all of them at once. I was hoping that "-l" will help me do it, but it's not quite helping as I hoped.

There's a much better way to solve that: distcc's --localslots_cpp option. (well, calling it an "option" is a bit misleading, as distcc uses an environment variable or a file, not command-line switches)

But anyway, that controls how many parallel C/C++ preprocessing jobs distcc will run, separately from the remote compilation. So you can set ninja to use a very large number of jobs, while setting distcc to allow only a few preprocessing ones.

For example I set ninja to 128 jobs or more, but limit --localslots_cpp to just the number of actual CPU cores.
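
As a sketch of that setup (host names and slot counts here are made up; distcc reads --localslots_cpp from the DISTCC_HOSTS environment variable or its hosts file, not from the command line):

```shell
# Hypothetical DISTCC_HOSTS: cap local preprocessing at 8 slots while
# allowing far more remote compile slots; host names are placeholders.
export DISTCC_HOSTS="--localslots_cpp=8 localhost/8 buildhost1/32,lzo buildhost2/32,lzo"

# Then run ninja with a job count sized for the whole cluster, e.g.:
# CXX="distcc g++" ninja -j128
echo "$DISTCC_HOSTS"
```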

You'll also want to use the job pools feature of ninja to constrain the number of simultaneous linkers, separately from the huge number of compilation jobs. (for obvious reasons)
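
The pool setup can be sketched like this (the pool name, depth, and link rule are illustrative, not from any real project):

```shell
# Write an illustrative build.ninja fragment: a pool of depth 2 caps
# concurrent link jobs at 2, regardless of the global -j value.
cat > build.ninja <<'EOF'
pool link_pool
  depth = 2

rule link
  command = g++ -o $out $in
  pool = link_pool
EOF
```

Rules (or individual build statements) assigned to the pool then share its depth as their concurrency limit.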

It works fairly well, in scenarios that are constrained by both CPU and RAM. It's not perfect, but build scheduling is a non-trivial problem.

@aviallon

Busy doing preparations for the tenth birthday of this issue.

@navid-zamani

Using ninja -j4 -l4 resulted in single-threaded building. Have you ever tried building webkit-gtk on a single thread on a Ryzen 2200G? 😬

Not only is the --help output incomplete, the damn thing does not even have a man page! (“asciidoc” is so dumbed down, it might as well be an Apple spawn. The digital equivalent of delivering your manual in crayon. With the R reversed.)

It is not only that everything Google does eventually becomes abandoned. Why do we let software like this infect “open source” software anyway? I’m going to check out samurai as a drop-in replacement now. Until I can trash WebKit (and Mozilla's Google “monopoly alibi” Firefox as well) for Ladybird and 9P.

@navid-zamani

Oh, I forgot: there was actually a building whose engineer forgot to put an elevator in. Life is crazy.

That’s called architecture! Not engineering. :D
Just throw a few cantilevered beams, crooked angles and lots of glass in there.
#RCErepresent

10 participants