This repository has been archived by the owner on Nov 1, 2021. It is now read-only.

Question about AllReduceEA #17

Open
sidps opened this issue Jan 20, 2018 · 0 comments
sidps commented Jan 20, 2018

From the code and the algorithm presented (https://github.com/twitter/torch-distlearn/blob/master/lua/AllReduceEA.md), it seems like the all-reduce step involves synchronization between workers.

The algorithm published in the original paper does not require such synchronization (sec. 3.1: each worker maintains its own clock, t_i). Is AllReduceEA then an implementation of synchronous EASGD, for which only the formulation was presented in the paper (sec. 3)?
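For reference, here is a toy sketch of the synchronous formulation I mean, where every worker updates in lockstep and the center variable moves toward the all-reduced average. All names and parameter values are illustrative only; this is not the torch-distlearn code.

```python
import numpy as np

def sync_easgd_step(workers, center, grads, lr=0.1, rho=0.01):
    """One synchronous elastic-averaging step (toy sketch, assuming a
    shared center variable and lockstep workers)."""
    alpha = lr * rho
    diffs = [w - center for w in workers]
    # Each worker takes a gradient step and is pulled toward the center...
    new_workers = [w - lr * g - alpha * d
                   for w, g, d in zip(workers, grads, diffs)]
    # ...while the center is pulled toward the average of the workers
    # (the sum of diffs is what an all-reduce would compute).
    new_center = center + alpha * sum(diffs)
    return new_workers, new_center

# With zero gradients, workers drift toward the center; the center is
# unchanged here because the worker deviations cancel out.
workers = [np.array([1.0]), np.array([3.0])]
center = np.array([2.0])
grads = [np.zeros(1), np.zeros(1)]
workers, center = sync_easgd_step(workers, center, grads)
```

In the asynchronous version each worker would instead exchange with the center on its own clock, with no barrier across workers, which is the distinction I am asking about.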

If so, are there any comparisons between synchronous and asynchronous EASGD?

Apologies if I have misunderstood this.
