SPARCED is a simple and efficient pipeline for construction, merging, expansion, and simulation of large-scale, single-cell mechanistic models. With minimal set-up, users can configure the model for parallel runs on a Kubernetes cluster (SPARCED-nf), or small-scale experiments on their local machine (SPARCED-jupyter). More information on the model itself can be found here.
- Docker
- Nextflow (optional: required for SPARCED-nf only)
- kubectl (optional: required for SPARCED-nf only)
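A quick way to confirm the dependencies above are installed is to check that each tool is on your `PATH`. The helper below is a convenience sketch, not a script shipped with this repository:

```shell
#!/usr/bin/env sh
# check_tools: report whether each listed prerequisite is on PATH.
# (Convenience sketch only; not part of the SPARCED repository.)
check_tools() {
  for tool in "$@"; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "$tool: found"
    else
      echo "$tool: MISSING"
    fi
  done
}

check_tools docker nextflow kubectl
```

If you only plan to use SPARCED-jupyter, a `MISSING` result for `nextflow` or `kubectl` is fine.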
- Clone this repository from the command line using `git clone --recursive https://github.com/birtwistlelab/SPARCED.git`
- Make sure the dependencies listed above are installed
- (SPARCED-nf only) Ensure you have a Kubernetes config file for your chosen cluster located in your `~/.kube` folder
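To verify that the config file in `~/.kube` actually points at your cluster, the standard kubectl commands below can help (a sketch; it assumes a reachable cluster and cannot run without one):

```shell
# Sanity-check the Kubernetes setup before launching SPARCED-nf.
kubectl config current-context   # should print the context for your cluster
kubectl get pvc                  # the PVC you plan to use should be listed
```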
- Edit the files in the `input_files` folder as needed. These values will be built into the creation of the model. For editing the values present for the model's simulation, see the directions accompanying the configuration step.
- Use `./kube-runner/kube-load.sh <pvc-name> input_files` to load your input data to the PVC of the kube cluster. `kube-load.sh` assumes a `/workspaces` folder as the base of the PVC, and saves this input data at the path `/workspaces/$USER/input_files/`. Important: before every run where you plan on uploading new data to the PVC, run `./kube-runner/kube-login.sh <pvc-name>` and delete the currently existing `/workspaces/$USER/input_files/` folder. If not, you run the risk of your model selecting the wrong files.
- Edit the values in the `kube-nextflow.config` file in the `configs` folder (for help, see the config README here)
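Put together, a typical pre-run upload cycle looks like the sketch below. `<pvc-name>` is a placeholder for your cluster's PVC, and the clean-up step happens interactively inside the shell that `kube-login.sh` opens:

```shell
# Sketch of the upload cycle described above; <pvc-name> is a placeholder.
./kube-runner/kube-login.sh <pvc-name>            # opens a shell on the PVC
# ...inside that shell, remove stale inputs, then exit:
#     rm -rf /workspaces/$USER/input_files
./kube-runner/kube-load.sh <pvc-name> input_files # re-upload fresh inputs
```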
- Starting the workflow: `nextflow kuberun birtwistlelab/SPARCED -v <PVC-name> -c configs/kube-nextflow.config`
- Retrieving data
  - After the run is finished, save your data from the PVC down to your local machine with `./kube-runner/kube-save.sh <pvc-name> <work-directory>` (`kube-save.sh` resolves the work-directory path relative to the `/workspaces/$USER` directory in the PVC, so with the default configurations it should just be `work`)
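For instance, with the default configuration the retrieval step reduces to the one-liner below (`<pvc-name>` is again a placeholder for your PVC):

```shell
# Pull results down after a run; 'work' is the default work directory
# relative to /workspaces/$USER on the PVC.
./kube-runner/kube-save.sh <pvc-name> work
```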
- For problems, feel free to consult our troubleshooting guide or open an issue.
- For brave souls with a lot of local compute power but no Kubernetes cluster, SPARCED-nf is also built to run locally. Simply launch with `nextflow run` instead of `kuberun` and work from the `configs/local-nextflow.config` template.
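A local launch might then look like the sketch below; the `-c` flag pointing at the local config mirrors the `kuberun` invocation above and is an assumption, not a documented invocation:

```shell
# Run the workflow locally instead of on a Kubernetes cluster.
nextflow run birtwistlelab/SPARCED -c configs/local-nextflow.config
```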
Once you have Docker installed, follow the simple steps below.
- In a terminal window, use the command `docker login` with your account credentials. If you don't have an account yet, head over to hub.docker.com to set one up.
- Use `docker pull birtwistlelab/sparced-notebook:latest` to download the latest version of the Docker image.
- Once the download is complete, use `docker run -p 8888:8888 --name testnb1 -i -t birtwistlelab/sparced-notebook:latest`, and in your browser, go to the last URL produced in your terminal from this command.
  - N.B. `testnb1` is just a sample name for the container you're creating with this command. If you try to create another container with this name, delete the old one first with `docker rm testnb1`.
- Voila! Once this command finishes, you should see a URL in your terminal that looks similar to `http://127.0.0.1:8888/?token=4a7c71c7a3b0080a4f331b256ae435fbc70` -- paste this into your browser. You can now begin stepping through the commands in each of the files in the `jupyter_notebooks` folder to learn more about the model and perform small runs.
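If the URL scrolls out of view, it can be recovered from the container's output. The sketch below shows the idea; the log text is simulated with a shell variable so the example is self-contained, but in practice you would pipe in `docker logs testnb1` instead:

```shell
# Recover the notebook URL from log text. The sample log below stands in
# for real `docker logs testnb1` output so this snippet runs anywhere.
logs='[I 12:00:00 NotebookApp] Jupyter Notebook is running at:
[I 12:00:00 NotebookApp]  http://127.0.0.1:8888/?token=4a7c71c7a3b0080a4f331b256ae435fbc70'

# Keep the last matching URL, as the terminal instruction above suggests.
url=$(printf '%s\n' "$logs" | grep -o 'http://127\.0\.0\.1:8888/[^ ]*' | tail -n 1)
echo "$url"
```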
To use custom data in the SPARCED-jupyter workflow, start a container with the commands above and look at the `input_files` directory. Delete the current files and run `docker cp <datafile> <container-name (e.g. testnb1)>:/app/input_files/<datafile>` to replace them.
Original source for the model.
SPARCED is a product of the Birtwistle Lab, and we greatly appreciate the help from multiple collaborators, including the Feltus Lab, the Hasenauer Lab, and Robert C. Blake from LLNL.
The acronym SPARCED is composed of the following elements, based on the sub-models in the large-scale mechanistic model.