
Ensemble Tractography

Hiromasa Takemura edited this page Feb 29, 2016 · 32 revisions

This page describes how to use Ensemble Tractography (ET) to identify white matter fascicles from diffusion MRI data. ET is a method proposed by Takemura, Caiafa, Wandell & Pestilli (2016), PLoS Comput Biol.

This page is still under construction. The final goal of this wiki is to provide detailed instructions, with a sample dataset and scripts, for performing the ET analysis.

If you have any questions, please feel free to contact [Hiromasa Takemura](http://researchmap.jp/hiromasatakemura/?lang=english).

System Requirements

ET is an extension of LiFE (Pestilli et al., 2014, Nat Methods). Recent LiFE releases include specific functions for performing ET analyses.

For the ET project, we developed most of the code under Ubuntu 14.04 LTS or Ubuntu 12.04. We have confirmed that the basic LiFE code works under Mac OS X and Windows 8, but it has not been extensively tested in other OS environments.

Required software packages

  • MATLAB (2012b or later; not extensively tested with MATLAB 2015b)
  • Vistasoft
  • LiFE

Optional packages

Before starting

The ET analysis requires preprocessed diffusion MRI data in NIfTI format, with b-values and b-vectors reoriented during the corrections. Please see [the page](http://web.stanford.edu/group/vista/cgi-bin/wiki/index.php/DTI_Preprocessing) describing the preprocessing methods in vistasoft.

To use MRTrix for tractography, you must convert the NIfTI files into MRTrix (.mif) format. Please see [this page](https://github.com/vistalab/vistasoft/wiki/Use%20MrTrix%20with%20vistasoft) for instructions.

Performing Tractography using multiple parameter settings

To perform ET, we run tractography multiple times using different parameter settings.

[feTrack](https://github.com/francopestilli/life/blob/master/track/feTrack.m) includes options to set different parameters, such as the curvature threshold and the stopping criterion.

Example MATLAB script to create five different connectomes using different curvature thresholds:

  dtFile = '/data/humandata/diffusion/S1/dti64trilin/dt6.mat';
  fibersFolder = '/data/humandata/diffusion/S1/dti64trilin/fibers/ET_candidate';
  lmax = 8;                     % Maximum harmonic order (lmax) for CSD
  nSeeds = 500000;              % Number of streamlines to generate
  wmMask = '/data/humandata/diffusion/S1/dti64trilin/ROIs/t1_class_twovalued.mif';
  curvature = [0.25 0.5 1 2 4]; % Curvature threshold values (one connectome per value)
  for i = 1:length(curvature)
    feTrack('prob', dtFile, fibersFolder, lmax, nSeeds, wmMask, curvature(i))
  end

For controlling other MRTrix tractography parameters, see the instructions in the [MRTrix wiki](http://jdtournier.github.io/mrtrix-0.2/commands/streamtrack.html).

Converting tractography results into vistasoft format

MRTrix produces streamlines in .tck format. The first step is to convert the .tck files into the .pdb format used in vistasoft.

To convert a .tck file, please use mrtrix_tck2pdb.m in vistasoft:

mrtrix_tck2pdb('S1_SDPROB_lmax8_500000.tck', 'S1_SDPROB_lmax8_500000.pdb')
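If the tracking loop above produced one .tck file per curvature threshold, the conversion can also be scripted. The .tck file names below are hypothetical and should be adapted to the output names your feTrack run actually produced:

```matlab
% Sketch: convert each curvature-specific tractography result to .pdb.
% File names are illustrative; match them to your feTrack output.
curvLabels = {'0p25', '0p5', '1', '2', '4'};
for i = 1:length(curvLabels)
    tckFile = sprintf('S1_SDPROB_lmax8_500000_curv%s.tck', curvLabels{i});
    pdbFile = sprintf('S1_curv%sSPC_cand.pdb', curvLabels{i});
    mrtrix_tck2pdb(tckFile, pdbFile);
end
```

The resulting .pdb files can then be concatenated in the next step.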

Creating candidate Ensemble Tractography Connectome

Once you have converted each .tck file into a .pdb file, the next step is to concatenate the connectomes to create a candidate Ensemble Tractography Connectome (ETC).

To do so, we can use [et_concatenateconnectomes.m](https://github.com/francopestilli/life/blob/master/utility/ET/et_concatenateconnectomes.m) from the LiFE distribution.

Example usage:

fginput = {'S1_curv0p25SPC_cand.pdb','S1_curv0p5SPC_cand.pdb',...
'S1_curv1SPC_cand.pdb','S1_curv2SPC_cand.pdb',...
'S1_curv4SPC_cand.pdb'};
fname = 'S1_ETC_cand.mat'; % File name of the candidate connectome
et_concatenateconnectomes(fginput, fname)

This function combines the connectome files (.pdb files derived from different tractography parameters) into a single connectome file (here in .mat format, but .pdb format is also possible).

Run LiFE

Finally, you can run LiFE on the candidate ETC file generated in the previous step.

See [life_demo](https://github.com/francopestilli/life/blob/master/scripts/life_demo.m) for example code for running LiFE on the connectome file.

Example usage:

dwiFile       = 'S1_DWI_run1_preprocessed.nii.gz'; % Preprocessed DWI used for tractography
dwiFileRepeat = 'S1_DWI_run2_preprocessed.nii.gz'; % Preprocessed DWI for cross-validation
t1File        = 't1_acpc.nii.gz'; % T1-weighted anatomy (NIfTI format)
fgFileName    = 'S1_ETC_cand.mat'; % Candidate connectome generated in the previous step
feFileName    = 'S1_ETC_optimized.mat'; % Final connectome file as output
fe = feConnectomeInit(dwiFile,fgFileName,feFileName,[],dwiFileRepeat,t1File);
fe = feSet(fe,'fit',feFitModel(feGet(fe,'mfiber'),feGet(fe,'dsigdemeaned'),'bbnnls'));
feConnectomeSave(fe, '-v7.3');
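After the fit, the nonzero weights identify the streamlines that LiFE retained. As a sketch (the feGet parameter strings follow those used in life_demo; please verify them against your LiFE version), the optimized connectome could be extracted like this:

```matlab
% Sketch: keep only streamlines with nonzero weight after the LiFE fit.
% feGet/fgExtract usage follows life_demo.m; verify against your release.
w  = feGet(fe, 'fiber weights');          % one weight per candidate streamline
fg = feGet(fe, 'fibers acpc');            % candidate streamlines in ACPC space
fgOpt = fgExtract(fg, find(w > 0), 'keep'); % optimized ET connectome
fprintf('Retained %d of %d streamlines\n', sum(w > 0), length(w));
```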

ET-preselection model

Depending on the computing environment and the matrix size of the LiFE model, the ET-preselection method has an advantage. Briefly, the ET-preselection method first selects the streamlines contributing to the diffusion signal prediction in each individual Single Parameter Connectome. The preselected streamlines are then combined into a new candidate connectome, and finally LiFE is run on that candidate connectome.

The advantage is that we can reduce the size of the LiFE matrix. For example, if we combine 5 connectomes, each of which includes 2 million streamlines, the resulting connectome includes 10 million streamlines. In some computational environments, the large matrix size may cause memory swapping. In that case, the ET-preselection model works better because we select fewer streamlines from each original connectome model, keeping the matrix size of the ET candidate connectome smaller.

To perform the ET-preselection model, we can use [et_createETprecandidateconnectome.m](https://github.com/francopestilli/life/blob/master/utility/ET/et_createETprecandidateconnectome.m). In this function, we can set the number of streamlines chosen from each individual Single Parameter Connectome with the variable "numconcatenate".

Example Usage:

fginput = {'S1_curv0p25SPC_cand.pdb','S1_curv0p5SPC_cand.pdb',...
'S1_curv1SPC_cand.pdb','S1_curv2SPC_cand.pdb',...
'S1_curv4SPC_cand.pdb'};
numconcatenate = [50000 50000 50000 50000 50000];
fname = 'S1_ETCpreselect_cand.mat'; % File name of the candidate connectome
et_createETprecandidateconnectome(fginput, numconcatenate, fname)

Computational load on ET

The computational load of ET depends on various factors, such as the computing environment and the matrix size of the LiFE model. The matrix size of the LiFE model depends on the number of voxels, diffusion directions, and candidate streamlines in the model. In principle, most of the computational load in ET comes from the computational load of LiFE.
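As a rough, hypothetical illustration of how these factors multiply (the numbers below are made up for the example, not measurements):

```matlab
% Back-of-envelope size of the (dense-equivalent) LiFE design matrix:
% rows = voxels x diffusion directions, columns = candidate streamlines.
nVoxels      = 200000;  % white-matter voxels (hypothetical)
nDirections  = 96;      % e.g., the STN96 acquisition
nStreamlines = 10e6;    % 5 connectomes x 2 million streamlines
nEntries = nVoxels * nDirections * nStreamlines;
fprintf('Dense-equivalent matrix entries: %.2g\n', nEntries);
% LiFE stores the model in a compact form, but the streamline count still
% scales the memory use, which is what motivates ET-preselection.
```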

For example, using LiFE, processing one whole-brain connectome model (STN96 dataset; 2 mm isotropic, 96 directions) with 2 million streamlines requires 28.4 hours on a computer with 16 processing cores and 32 GB of RAM.

There are several ways to reduce the computational demand in LiFE model.

The first idea is to reduce the number of voxels. If you are interested in a particular tract, it is helpful to exclude voxels outside the white matter region of interest. For example, we sometimes select only the voxels in occipital white matter to address questions related to visual cortex. The second idea is to use ET-preselection (see above).

We are working on upgraded code that makes the LiFE computation faster than the current release. We will announce the new version of the code once it is ready to be made public.

Sample dataset and script

Hiromasa Takemura is planning to prepare a sample dataset for running an example Ensemble Tractography analysis.

The Stanford Digital Repository hosts [a webpage](https://searchworks.stanford.edu/view/qw092zb0881) making one sample STN96 dataset publicly available.

Once it is ready, Hiromasa will make a public GitHub repository hosting the example scripts to reproduce the ET analysis on this dataset. To be announced.

Reference for Ensemble Tractography

Takemura H, Caiafa CF, Wandell BA, Pestilli F (2016) Ensemble Tractography. PLoS Comput Biol 12(2): e1004692. doi:10.1371/journal.pcbi.1004692
