Diffusion Processing Pipeline

From VrlWiki
Jadrian Miles (talk | contribs)
Revision as of 01:51, 11 February 2009

The diffusion processing pipeline is a collection of scripts and small programs that we use to turn diffusion-weighted MRI data from our collaborators into more sophisticated models, typically streamline "tracts" along which we can compute various metrics. We can visualize and interact with the output tractogram in Brainapp. At the moment, our pipeline is built around DTI (Diffusion Tensor Imaging), but this is not the only thing you can do with diffusion-weighted MRI data, and in the future we may want to turn this pipeline into something with a few more branches.

Right now little of the code used in the processing pipeline is checked into $G, and those programs that are checked in are usually very small and scattered around the source tree. After Brad's new system is in place, he and Jadrian will work on getting all the scripts we use fixed up and checked into a single location. In the text below, the prefix $dbin means /map/gfx0/common0/diffusion/Interface/bin/data, where a lot of scripts currently live.

Pipeline Steps

Step 1: Convert from Original Format

We receive imaging data in a variety of different formats, but we standardize on MRIimage for all of the programs we write. Therefore we have a few converter programs. We assume that every pipeline starts from this step with DWIs from our collaborators. They may give us files that came more or less directly from the scanner, or they may have done some pre-processing like smoothing, distortion correction, or registration. For example, Ron Cohen's group has started using FLIRT to register all the DWIs together.

  • $dbin/processNiftiDwis.py --- convert from NIfTI to MRIimage
  • $dbin/processMosaics.m (with $dbin/matlabLauncher.py) --- convert from DICOM Mosaic to MRIimage
  • $dbin/rotateBvecs.py --- rotate an original list of b vectors to correspond to registration transformations
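The b-vector rotation in the last step follows the standard rule: when a registration transform (e.g. from FLIRT) is applied to a DWI volume, only the rotational component of that transform should be applied to the corresponding gradient direction. A minimal sketch of that idea; this is not the actual rotateBvecs.py code, and the one-4x4-affine-per-volume input format is an assumption:

```python
import numpy as np

def rotation_from_affine(affine):
    """Extract the pure rotation from a 4x4 affine via SVD (polar decomposition),
    discarding scale and shear."""
    u, _, vt = np.linalg.svd(affine[:3, :3])
    return u @ vt

def rotate_bvecs(bvecs, affines):
    """Rotate each unit b-vector by the rotation part of its volume's registration
    affine. bvecs: (N, 3) array of gradient directions; affines: N 4x4 matrices."""
    out = np.empty((len(bvecs), 3))
    for i, (v, a) in enumerate(zip(bvecs, affines)):
        out[i] = rotation_from_affine(a) @ v
    return out
```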

Step 2: Process DWIs

Here our scripts take MRIimage DWIs as input and create modified MRIimage DWIs as output. We may upsample, smooth, crop, or pad the images. If we have prior knowledge about orientation issues, we may rotate or flip the images in this step before anything else, so that these operations are as inexpensive as possible in a fully automated process.

  • $G/bin/mricrop --- crop or pad images to match a specified voxel volume
  • $G/bin/mrifilt3 --- resample images to match a specified voxel volume
  • $dbin/diffusionResize.py --- by calling the previous two, automatically resample, crop, and pad images to a specified spatial volume in mm, sampled by a specified voxel volume
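A sketch of the sizing arithmetic behind the crop/pad step, per axis: given a target extent in mm and a voxel size, how many voxels to add (pad) or remove (crop) on each side. The helper name and the even lo/hi split are assumptions for illustration, not the actual mricrop or diffusionResize.py logic:

```python
def crop_pad_amounts(n_voxels, voxel_mm, target_mm):
    """Return (lo, hi) voxel counts to add on each side of one axis so that
    n_voxels * voxel_mm matches target_mm. Positive values mean pad, negative
    mean crop; the difference is split as evenly as possible."""
    target_n = round(target_mm / voxel_mm)
    delta = target_n - n_voxels
    lo = delta // 2
    hi = delta - lo
    return lo, hi
```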

Step 3: Generate (and Possibly Process) Tensors

We fit tensors to the DWIs using a nonlinear sequential quadratic programming method; the citation for this process is Ahrens MRM '98. In manual runs, we may discover a tensor orientation issue after the tensors are generated, and correct it at the end of this step.

  • $dbin/run_mridfit.py (which calls $G/bin/mridfit) --- fit tensors
  • $G/bin/mritensormult --- multiply tensors by a specified transformation matrix
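mridfit uses a nonlinear SQP fit, but the underlying single-tensor model S_i = S0 exp(-b_i g_i^T D g_i) is easier to see in the simpler log-linear least-squares formulation. A sketch of that simplified fit, for intuition only; it is not the pipeline's actual method:

```python
import numpy as np

def fit_tensor_loglinear(signals, s0, bvals, bvecs):
    """Log-linear least-squares fit of a single diffusion tensor (a simplified
    stand-in for mridfit's nonlinear SQP fit). Returns the symmetric 3x3 tensor D.

    Model: S_i = S0 * exp(-b_i * g_i^T D g_i), linearized by taking logs.
    """
    g = np.asarray(bvecs, dtype=float)
    b = np.asarray(bvals, dtype=float)
    # Design matrix over the 6 unique tensor elements (Dxx, Dyy, Dzz, Dxy, Dxz, Dyz).
    A = -b[:, None] * np.column_stack([
        g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
        2 * g[:, 0] * g[:, 1], 2 * g[:, 0] * g[:, 2], 2 * g[:, 1] * g[:, 2],
    ])
    y = np.log(np.asarray(signals, dtype=float) / s0)
    d, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.array([[d[0], d[3], d[4]],
                     [d[3], d[1], d[5]],
                     [d[4], d[5], d[2]]])
```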

Step 4: Generate Tensor Metrics

We can generate a plethora of scalar metrics from the tensors, including FA (fractional anisotropy), trace, and similar measures.

  • $G/bin/mritensor_v0 --- has a huge menu of options; give it the -- argument to see a list
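For reference, the most common of these metrics come straight from the tensor's eigenvalues. The sketch below is illustrative only and independent of how mritensor_v0 actually computes them:

```python
import numpy as np

def tensor_metrics(D):
    """Compute (trace, mean diffusivity, fractional anisotropy) of a symmetric
    3x3 diffusion tensor from its eigenvalues."""
    ev = np.linalg.eigvalsh(D)
    tr = ev.sum()
    md = tr / 3.0                         # mean diffusivity
    num = np.sqrt(((ev - md) ** 2).sum()) # spread of eigenvalues about the mean
    den = np.sqrt((ev ** 2).sum())
    fa = np.sqrt(1.5) * num / den if den > 0 else 0.0
    return tr, md, fa
```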

Step 5: Generate Streamtubes

See also: Tubegen

Most of our studies involve interacting with and computing measures over "streamtubes": integral paths in the field of principal eigenvectors of the diffusion tensors, which we use as a proxy for axon tracts in the brain's white matter. The streamtube process (including termination conditions, culling, and terminology) is described in Zhang-2003-VDT. The tubegen program generates a *.sm file that Brainapp uses to visualize the streamtubes.

  • $G/bin/tubegen
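For intuition about what tubegen does, here is a minimal streamline tracker using midpoint (2nd-order Runge-Kutta) integration, which is the integration scheme tubegen uses. It takes an abstract direction(p) callable so the interpolation scheme is left open; tubegen itself interpolates the principal eigenvectors with tricubic b-splines. The function names, step size, and stopping rule here are assumptions for illustration:

```python
import numpy as np

def track_rk2(seed, direction, step=0.5, max_steps=1000):
    """Trace one streamline through a principal-eigenvector field with midpoint
    (2nd-order Runge-Kutta) integration. `direction(p)` returns the unit
    eigenvector at point p, or None outside the tracking mask."""
    path = [np.asarray(seed, dtype=float)]
    prev = None
    for _ in range(max_steps):
        p = path[-1]
        v1 = direction(p)
        if v1 is None:
            break
        # Eigenvectors are sign-ambiguous: keep heading consistent with the last step.
        if prev is not None and np.dot(v1, prev) < 0:
            v1 = -v1
        v2 = direction(p + 0.5 * step * v1)  # midpoint evaluation
        if v2 is None:
            break
        if np.dot(v2, v1) < 0:
            v2 = -v2
        path.append(p + step * v2)
        prev = v2
    return np.array(path)
```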

Step 6: Brainapp Compatibility

Brainapp, which we use to visualize and interact with the streamtubes, assumes a particular directory structure so it can find the data. Unfortunately, in opposition to the "for humans to read, not computers" principle, this directory structure is somewhat unintuitive for the naive human user. As such, the latest versions of the pipeline generate output in a nested directory hierarchy using simple, plain-English names so that people can navigate it easily. We must map this structure onto Brainapp's expected structure for the program to work correctly.

  • No self-contained script exists right now, just a sequence of commands in a Makefile. Check out $G/data/diffusion/brown3t/cohen_hiv_study.2007.02.07/Makefile.patients.

This step also generates a custom Brainapp config file to open the particular file, so that users don't have to manually edit a config file for each new brain they want to view. In the code below, <BRAIN> is the path to the root directory where the pipeline was run for this particular brain, e.g. $G/data/diffusion/brown3t/cohen_hiv_study.2007.02.07/patient003/tubes.2008.07.27.

> cd $G/src/brainapp
> obj/brainapp-gcc3-d -f <BRAIN>/brainapp/settings.cfg -f settings-desktop.cfg
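No general-purpose mapping script exists yet (see the Makefile above), but the mechanics amount to symlinking the plain-English hierarchy into the layout Brainapp expects. A sketch of those mechanics only; the mapping paths shown in the test are entirely hypothetical, and the real correspondence lives in the Makefile:

```python
import os

def link_layout(root, mapping):
    """Create symlinks exposing pipeline output under a second layout.
    `mapping` is {expected_relative_path: pipeline_relative_path}; the actual
    path correspondence is NOT encoded here -- it must be supplied."""
    for dst_rel, src_rel in mapping.items():
        dst = os.path.join(root, dst_rel)
        os.makedirs(os.path.dirname(dst), exist_ok=True)
        if not os.path.lexists(dst):
            os.symlink(os.path.join(root, src_rel), dst)
```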

Tips

  • Run jobs on a fast machine rather than your workstation or a CS lab machine.
  • Especially when logged in remotely, fork off jobs so that you can log out and keep them running. Remember to use the notify target to get an email sent to you once it's done. In csh:
    make all notify >& make.out &

See Also