
SpinLocks suffer from false sharing #55942

Open · kuszmaul opened this issue Sep 30, 2024 · 8 comments · May be fixed by #55944
Labels: parallelism (Parallel or distributed computation)

Comments

@kuszmaul

SpinLocks can suffer from false sharing.

@vtjnash
Member

vtjnash commented Sep 30, 2024

Yes, and many other issues. Don't use them.

@vtjnash added the "invalid" label (indicates that an issue or pull request is no longer relevant) on Sep 30, 2024
@kuszmaul
Author

I disagree about kind:invalid. SpinLocks are part of the language, and there are cases where they solve a problem the other locks cannot. It would be easy to provide an option that doesn't suffer from false sharing. (I have a proposed patch that I'll share.)

@oscardssmith removed the "invalid" label on Sep 30, 2024
@gbaraldi
Member

We have a plan to remove them from the language. Their only use should be in places where we can't reschedule tasks, i.e. the compiler or the scheduler itself.

@nsajko added the "parallelism" label (Parallel or distributed computation) on Sep 30, 2024
@oscardssmith
Member

What's wrong with a SpinLock? For sufficiently narrow locks (e.g., to implement atomics), I thought they were relatively close to optimal.

@kuszmaul
Author

kuszmaul commented Oct 1, 2024

Since SpinLock is just 8 bytes long, you can end up with two or more of them on the same cache line.

Suppose lk1 and lk2 are both on the same cache line and two threads are spin-waiting on the two locks, respectively. The cache line will then bounce back and forth between the two cores' caches, and that coherence traffic can eat into memory bandwidth.
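
A quick way to see how small the lock is, and to check whether two freshly allocated locks happen to share a line (a sketch; it assumes a 64-byte cache line, and heap layout varies by run and Julia version, so the adjacency is probabilistic rather than guaranteed):

```julia
using Base.Threads

# The lock state is a single word, so the object is tiny.
@show sizeof(SpinLock())  # 8 on a 64-bit machine

# Two locks allocated back to back may land in the same 64-byte cache line.
lk1 = SpinLock()
lk2 = SpinLock()
a1 = UInt(pointer_from_objref(lk1))
a2 = UInt(pointer_from_objref(lk2))
@show (a1 >> 6) == (a2 >> 6)  # true means they share a 64-byte block
```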

Suppose that the two threads aren't even waiting on the two locks, but just acquire them:

Thread A:

```julia
@lock lk1 foo1()
bar1()
@lock lk1 zot1()
```

and Thread B:

```julia
@lock lk2 foo2()
bar2()
@lock lk2 zot2()
```

Here the two threads don't contend at all, logically, but the lock instructions will run slowly because the cache line must bounce back and forth.

This kind of contention is an example of false sharing.
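
A minimal sketch of the kind of fix an opt-in padded lock could provide (this `PaddedSpinLock` is illustrative only, not necessarily the patch in #55944; it assumes a 64-byte cache line):

```julia
using Base.Threads

# Hypothetical padded spin lock (a sketch). The padding inflates the object
# to a full cache line, so two adjacently allocated locks cannot share one.
mutable struct PaddedSpinLock <: Base.AbstractLock
    @atomic owned::Int
    _pad::NTuple{7,Int}   # 8 (lock word) + 56 (padding) = 64 bytes
    PaddedSpinLock() = new(0)
end

function Base.lock(l::PaddedSpinLock)
    while true
        # Try to grab the lock with an atomic exchange.
        (@atomicswap :acquire l.owned = 1) == 0 && return
        # Spin on a plain read until the lock looks free, then retry the swap.
        while (@atomic :monotonic l.owned) != 0
            # A real implementation would also issue a CPU pause hint here.
            GC.safepoint()
        end
    end
end

Base.unlock(l::PaddedSpinLock) = (@atomic :release l.owned = 0; nothing)
Base.trylock(l::PaddedSpinLock) = (@atomicswap :acquire l.owned = 1) == 0
```

Since each object now occupies at least 64 bytes, two lock words can never land in the same 64-byte block, and the Thread A / Thread B pattern above stops generating coherence traffic.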

@kuszmaul
Author

kuszmaul commented Oct 1, 2024

> We have a plan to remove them from the language. Their only use should be in places where we can't reschedule tasks, i.e. the compiler or the scheduler itself.

In my experience, sometimes you need a spin lock. Is there somewhere I can read about the plan to remove SpinLock?

@NHDaly
Member

NHDaly commented Oct 7, 2024

@gbaraldi after our discussion on Friday I'm still not clear on this. Can you reply to Oscar's and Bradley's questions here?

@gbaraldi
Member

gbaraldi commented Oct 7, 2024

There isn't a plan to remove it. It's just that spin locks in general are quite bad as locks: they are only good when uncontended and held for a very short time.
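
To make that distinction concrete (an illustrative sketch, not code from this thread): a SpinLock suits a few-instruction critical section that can never yield, while anything that can block belongs under a ReentrantLock.

```julia
using Base.Threads

# Reasonable SpinLock use: a tiny critical section that never yields.
const count_lock = SpinLock()
const counter = Ref(0)
bump!() = @lock count_lock (counter[] += 1)

# Poor SpinLock use: the critical section does I/O, which may yield to the
# scheduler; waiters would burn CPU, and yielding while holding a SpinLock
# is unsafe. A ReentrantLock parks blocked tasks instead.
const io_lock = ReentrantLock()
log_line(io, msg) = @lock io_lock println(io, msg)
```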
