# AsySPA

An exact asynchronous algorithm for distributed optimization over digraphs, based on the paper of the same name.

The code has been tested on Linux (Ubuntu 16.04) with OpenMPI and Python 3.6.

## Examples

- To run a demo on a least-squares problem, execute:

```shell
mpiexec -np $(num_nodes) -bind-to core -allow-run-as-root python ./asyspa/asy_gradient_push.py
```

The `-bind-to core` flag is optional; it tells MPI to bind each process to a core, so `num_nodes` should not exceed the number of cores on your machine.
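For intuition, gradient-push-style methods such as AsySPA build on the push-sum protocol for averaging over a directed graph. Below is a minimal synchronous, single-machine sketch of push-sum; the function and graph names are illustrative, not the repository's API:

```python
# Minimal sketch of the push-sum protocol underlying gradient-push methods.
# Each node keeps a value sum x_i and a weight sum w_i (initialized to 1);
# the ratio x_i / w_i converges to the average of the initial values on a
# strongly connected digraph, even though the mixing is only column-stochastic.

def push_sum(values, out_neighbors, iterations=100):
    """Run synchronous push-sum and return each node's estimate x_i / w_i."""
    n = len(values)
    x = list(values)      # value sums
    w = [1.0] * n         # weight sums
    for _ in range(iterations):
        new_x = [0.0] * n
        new_w = [0.0] * n
        for i in range(n):
            targets = out_neighbors[i] + [i]   # send to out-neighbors and self
            share = 1.0 / len(targets)         # split mass evenly
            for j in targets:
                new_x[j] += share * x[i]
                new_w[j] += share * w[i]
        x, w = new_x, new_w
    return [xi / wi for xi, wi in zip(x, w)]

# A 3-node directed ring: 0 -> 1 -> 2 -> 0
estimates = push_sum([3.0, 6.0, 9.0], {0: [1], 1: [2], 2: [0]})
print(estimates)  # each entry approaches the average, 6.0
```

The key point is that push-sum needs only out-neighbor communication with column-stochastic weights, which is why it works over digraphs where doubly-stochastic mixing is unavailable.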

The command above uses AsySPA to solve the following least-squares problem in a distributed fashion:

```tex
\min_{x} \sum_{i=1}^{n} \|A_i x - b_i\|^2
```

where the data is randomly generated.
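In a distributed least-squares setup, each node $i$ holds a local term $f_i(x) = \tfrac{1}{2}\|A_i x - b_i\|^2$ and only ever evaluates its own gradient $\nabla f_i(x) = A_i^\top (A_i x - b_i)$. A plain-Python sketch of that local computation (illustrative names, not the repository's code):

```python
# Local least-squares gradient a node would evaluate:
#   grad f_i(x) = A^T (A x - b)   for f_i(x) = 0.5 * ||A x - b||^2

def local_gradient(A, b, x):
    """Return A^T (A x - b) for a small dense matrix A given as nested lists."""
    # residual r = A x - b
    residual = [sum(a * xi for a, xi in zip(row, x)) - bi
                for row, bi in zip(A, b)]
    # gradient = A^T r
    return [sum(A[r][c] * residual[r] for r in range(len(A)))
            for c in range(len(x))]

A = [[1.0, 0.0], [0.0, 2.0]]
b = [1.0, 4.0]
print(local_gradient(A, b, [1.0, 2.0]))  # → [0.0, 0.0] at the minimizer
```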

- To train a multi-class logistic regression classifier on the Covertype dataset in a distributed fashion, as in the paper, first run:

```shell
python ./asyspa/preprocess_dataset_covtype.py --n $(num_nodes) --file ./dataset_covtype/covtype.csv
```

This preprocesses the data and divides it into `num_nodes` parts. Then start distributed training with:

```shell
mpiexec -np $(num_nodes) -bind-to core -allow-run-as-root python ./asyspa/distributed_asy_logistic_regression.py --data_dir ./dataset_covtype/data_partition_$(num_nodes) --save_dir ./result/core_$(num_nodes)
```

By default this trains the classifier for 300 seconds; the training time can be changed via command-line arguments. See the script for details.
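The preprocessing step splits the dataset into `num_nodes` parts, one per process. A hypothetical sketch of such an even, contiguous split (the actual script may partition differently; `partition` is an illustrative name, not the repository's API):

```python
# Illustrative even split of a dataset into num_nodes contiguous parts.
# When len(rows) is not divisible by num_nodes, the first `extra` parts
# receive one additional row each.

def partition(rows, num_nodes):
    """Split `rows` into num_nodes nearly equal contiguous parts."""
    base, extra = divmod(len(rows), num_nodes)
    parts, start = [], 0
    for k in range(num_nodes):
        size = base + (1 if k < extra else 0)
        parts.append(rows[start:start + size])
        start += size
    return parts

parts = partition(list(range(10)), 3)
print([len(p) for p in parts])  # → [4, 3, 3]
```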
