User:Jadrian Miles/Paper list

Curve noise statistics

An upgraded version of my ISMRM abstract.

  • Present original findings (expanded to range over FA + MD)
    • Potentially also investigate multiple algorithms: nonlinear DT fitting, multi-DT, Q-ball, other tractography algorithms, etc.
  • Explain results with first-order model
  • Validate the model (repeat experiment 2) on more "realistic" data with ground truth. Some options:
    • Brain-based computational phantom (constructed like so: T1+HARDI scan, 2-tensor fit, tractography, filter curves for T1 termination criteria, lowpass curves, synthesize DWIs from curves)
    • Popular phantom (tractography cup?)
    • HARDI atlas
    • Generate synthetic HARDI scans from 2-tensor fit of averaged LARDI scans for tons of healthy normals
  • Prescribe a simple pipeline for deriving the noise equation for any combination of algorithms
  • Maybe also consider writing up the application to PICo: the probability of connection to a cortical ROI is the integral of the uncertainty PDF over that subcortical surface (see the sketch below). But the benefit of this technique (the gain in precision) may be minimal: symmetric PDFs on either side of an ROI boundary would tend to contribute equally, so results might be about the same without the PDF spreading, simply because you've got a lot of curves.
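
A minimal sketch of that integral, assuming an isotropic Gaussian endpoint-uncertainty PDF and a voxelized ROI mask (both illustrative stand-ins, not the actual PICo formulation): Monte Carlo samples of the endpoint PDF are simply counted against the ROI.

```python
# Sketch only: approximate P(connect) by Monte Carlo integration of a
# per-endpoint uncertainty PDF over an ROI mask. The isotropic Gaussian and
# the voxel grid are illustrative assumptions.
import numpy as np

def connection_probability(endpoint, sigma, roi_mask, voxel_size, n=10_000,
                           rng=None):
    """P(connect) ~= fraction of PDF samples that land inside the ROI.

    endpoint   -- (3,) curve endpoint in world coordinates (mm)
    sigma      -- std. dev. (mm) of the assumed isotropic Gaussian uncertainty
    roi_mask   -- 3D boolean array marking ROI voxels
    voxel_size -- (3,) voxel dimensions in mm
    """
    rng = np.random.default_rng() if rng is None else rng
    samples = rng.normal(endpoint, sigma, size=(n, 3))        # draw from PDF
    idx = np.floor(samples / voxel_size).astype(int)          # world -> voxel
    ok = np.all((idx >= 0) & (idx < roi_mask.shape), axis=1)  # clip to volume
    hits = roi_mask[idx[ok, 0], idx[ok, 1], idx[ok, 2]]
    return hits.sum() / n

# Toy usage: a 10-voxel cubic ROI near the endpoint.
mask = np.zeros((64, 64, 64), dtype=bool)
mask[30:40, 30:40, 30:40] = True
print(connection_probability(np.array([35.0, 35.0, 29.0]), sigma=2.0,
                             roi_mask=mask, voxel_size=np.ones(3)))
```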


Diffusion model parsimony

Compare multiple diffusion models (single-tensor, multi-tensor, higher-order tensor, higher-order ODF [e.g. QBI], multi-compartment biological [e.g. CHARMED]) in terms of their "parsimony" on real data. Quantify parsimony with chi-squared goodness-of-fit on top of the probability integral transform for Rician noise. This will hopefully demonstrate that most models under- or over-fit, and that the multi-compartment models are juuuuuust right. Come to think of it, a Goldilocks reference would make for a snappy title. See dhl/cad/jadrian email from 2011-12-15 "a quickie Dagstuhl thought experiment".

Could potentially extend this to a "constructive" algorithm: something that picks the degrees of freedom for an RBF or other higher-order diffusion model.
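
A minimal sketch of the proposed parsimony test, assuming a known noise level sigma and a per-measurement model prediction (any of the models above could supply `predicted`, which is a hypothetical stand-in here): transform each intensity through the Rician CDF (the probability integral transform), then chi-squared-test the transformed values for uniformity. A well-specified model yields approximately uniform PIT values.

```python
# Sketch only: PIT through the Rician CDF, then a chi-squared goodness-of-fit
# test for uniformity of the transformed intensities.
import numpy as np
from scipy import stats

def pit_chi2(measured, predicted, sigma, n_bins=20):
    """Chi-squared uniformity test of PIT values u_i = F_Rice(x_i; nu_i, sigma).

    measured  -- observed DWI intensities (1D array)
    predicted -- model-predicted noise-free intensities nu_i (1D array)
    sigma     -- noise standard deviation (assumed known or estimated)
    """
    u = stats.rice.cdf(measured, b=predicted / sigma, scale=sigma)
    observed, _ = np.histogram(u, bins=n_bins, range=(0.0, 1.0))
    expected = np.full(n_bins, len(u) / n_bins)   # uniform expectation
    return stats.chisquare(observed, expected)    # (statistic, p-value)

# Toy check: data actually generated from the "model" should pass the test.
rng = np.random.default_rng(0)
nu = rng.uniform(50, 200, size=5000)                          # true signal
noisy = np.hypot(nu + rng.normal(0, 10, 5000), rng.normal(0, 10, 5000))
print(pit_chi2(noisy, predicted=nu, sigma=10.0))              # large p-value
```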


Improved Hough Peak-Finding by Topological Persistence

Put the toy-problem initialization technique on a solid theoretical basis. Rather than blurring the histogram, use topological persistence to disregard "sub-peaks".
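
A minimal sketch of 1D persistence-based peak selection on a histogram, as a stand-in for the blur step: bins are processed from highest to lowest, growing connected components; a peak "dies" when its component merges into a taller one, and its persistence is the gap between its birth height and that merge height. The threshold and the quadratic relabeling are simplifications for illustration; a real Hough accumulator would need the 2D/ND analogue.

```python
# Sketch only: keep histogram peaks whose topological persistence exceeds a
# threshold, instead of blurring the histogram.
import numpy as np

def persistent_peaks(h, min_persistence):
    """Return [(bin_index, persistence)] for peaks of 1D histogram h."""
    order = np.argsort(h)[::-1]          # visit bins from highest to lowest
    comp = {}                            # bin -> root peak of its component
    peaks = {}                           # root peak -> birth height
    out = []
    for i in order:
        left, right = comp.get(i - 1), comp.get(i + 1)
        if left is None and right is None:        # a new local max is born
            comp[i] = i
            peaks[i] = h[i]
        elif left is not None and right is not None:
            # two components merge: the shorter-lived peak dies here
            lo, hi = sorted((left, right), key=lambda r: peaks[r])
            out.append((lo, peaks[lo] - h[i]))
            for k in list(comp):                  # relabel (fine for a sketch)
                if comp[k] == lo:
                    comp[k] = hi
            comp[i] = hi
        else:
            comp[i] = left if left is not None else right
    root = max(peaks, key=lambda r: peaks[r])     # the global max never dies
    out.append((root, float("inf")))
    return [(i, p) for i, p in out if p >= min_persistence]

hist = np.array([0, 2, 1, 6, 5, 6, 1, 9, 0], dtype=float)
print(persistent_peaks(hist, min_persistence=2.0))   # [(5, 5.0), (7, inf)]
```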


Feature Identification in Multispectral Images Using the Hough Transform

Think about a "support" function for prospective line segments, with Gaussian falloff on the image plane. Finding an application area for this one might be hard.


Toy Problem / Probability Integral Transform

Write up the toy problem as an example use of the PIT in model testing for DWIs. Or maybe piggyback on Leemans/Sijbers MRM'05 synthetic DWIs from tractography, using PIT to get p-values for synthetic images. This may not be worth its own paper; maybe better to hold off on PIT stuff until it can be put in context of my own "real-problem" work.
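
A minimal sketch of the p-value computation, under the same Rician-PIT assumptions as the parsimony sketch above (names illustrative): if a synthetic image really follows the assumed model, the PIT values are uniform on [0, 1], which a Kolmogorov-Smirnov test scores directly.

```python
# Sketch only: one p-value per synthetic image via PIT + KS test.
from scipy import stats

def image_p_value(measured, predicted, sigma):
    """KS p-value that PIT values of a synthetic DWI are uniform on [0, 1]."""
    u = stats.rice.cdf(measured, b=predicted / sigma, scale=sigma)
    return stats.kstest(u.ravel(), "uniform").pvalue
```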


Curve clustering based on chi-squared reconstruction error from a boundary model

I've already written up an outline of this paper. Clustering proceeds by building a boundary model of clusters/bundles, and the objective function is chi-squared goodness-of-fit derived from the tractography PDFs.
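
A minimal sketch of that objective, assuming isotropic per-point positional variances extracted from the tractography PDFs and a hypothetical reconstruct() callable standing in for the boundary model; the model's own degrees of freedom are ignored for simplicity.

```python
# Sketch only: chi-squared goodness-of-fit of a cluster's reconstruction
# residuals against the per-point tractography uncertainty.
import numpy as np
from scipy import stats

def cluster_gof(points, sigmas, reconstruct):
    """p-value that residuals are consistent with the per-point PDFs.

    points      -- (N, 3) curve points assigned to the cluster
    sigmas      -- (N,) isotropic positional std. dev. per point
    reconstruct -- callable mapping (N, 3) points to their (N, 3)
                   boundary-model reconstructions (hypothetical placeholder)
    """
    resid = points - reconstruct(points)
    # each 3D residual contributes 3 unit-variance terms after normalization
    chi2_stat = np.sum((resid / sigmas[:, None]) ** 2)
    dof = 3 * len(points)
    return stats.chi2.sf(chi2_stat, dof)   # high p-value = good fit
```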


Automatic Shape-Sensitive Curve Clustering (§4.1.2)

Distance measure definition, distributed clustering algorithm, spectral clustering refinement, comparison to other techniques. Base on q-ball and DTI tractography from DTK; use Mori's atlas for ground truth? Moberts et al.'s ground-truth clusterings? [Moberts/van Wijk/Vilanova might have a ground truth. Song Zhang had a clustering ground-truth paper. Cagatay played with this at some point. The bang isn't huge; how big is the buck? But I could be convinced. Not sure this is a Vis paper, but it could be. Application of curve similarity to other areas (bat flight trajectories) would be convincing at Vis too.]
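
A minimal sketch of the pipeline with placeholder choices: the symmetric mean-closest-point (MCP) distance as a baseline (the shape-sensitive measure to be defined would replace it), feeding off-the-shelf spectral clustering on a Gaussian affinity.

```python
# Sketch only: pairwise MCP curve distances -> Gaussian affinity -> spectral
# clustering. sigma and the distance itself are placeholder choices.
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import SpectralClustering

def mcp_distance(a, b):
    """Symmetric mean closest-point distance between polylines (N,3), (M,3)."""
    d = cdist(a, b)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

def cluster_curves(curves, n_clusters, sigma=5.0):
    n = len(curves)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = mcp_distance(curves[i], curves[j])
    affinity = np.exp(-dist**2 / (2 * sigma**2))   # Gaussian kernel on MCP
    model = SpectralClustering(n_clusters=n_clusters, affinity="precomputed")
    return model.fit_predict(affinity)
```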


An Orientation-Aware Boundary Surface Representation of Space Curve Clusters

  • Contributions:
    1. "Natural" representation of clusters of curves, useful for higher-level operations. [I think that "natural" is not descriptive enough. I would try to refine that. Is the orientation a tangency constraint? There's something about it that seems important, but "orientation aware" doesn't quite seem like what it is... . I would give an example or two of the kind of higher-level operations you mean.
    2. Superior results versus naive algorithms (and previously published work to solve the same problem? "Competitors" to consider include Gordon's crease surfaces, flow critical surfaces, etc. [dhl mentions Ken Joy's work...] [define "superior". Faster? more accurate? If you mean more "natural" then this may be the same thing as above]).
  • Proofs:
    1. Prose argument that contrasts cross-section-based boundary surfaces to other representations of curve clusters. A bunch of curves is a mess and does not lend itself to operations on the cluster volume (smoothing, joining, etc.). Median curves have no width. Rasterization creates surface artifacts and loses orientation information. Alpha shapes lose orientation information. [nice!]
    2. Alpha shapes are the main "competitor". Expected results show topological defects resulting from a global choice of alpha. Run both algorithms on phantom and real data and discuss features. [I suspect that there are some other shrink-wrap approaches that might be competitors, unless alpha shapes are always superior.]
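
A minimal sketch of the per-slice construction the prose argument contrasts against alpha shapes, assuming a cluster of at least three curves: resample all curves to a shared parameterization, then at each station compute a 2D hull of the cross-section in the plane normal to the local mean tangent. Convex hulls stand in for per-slice alpha shapes to keep the sketch short; the per-slice frame is exactly the orientation information a global alpha shape would lose.

```python
# Sketch only: cross-section boundary of a curve cluster via per-slice hulls.
import numpy as np
from scipy.spatial import ConvexHull

def resample(curve, n):
    """Resample a polyline (M, 3) to n points, uniform in arc length."""
    seg = np.linalg.norm(np.diff(curve, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    t = np.linspace(0.0, s[-1], n)
    return np.stack([np.interp(t, s, curve[:, k]) for k in range(3)], axis=1)

def cross_section_boundary(curves, n_stations=50):
    """Yield (center, tangent, hull_points_2d) for each cross-section."""
    pts = np.stack([resample(c, n_stations) for c in curves])  # (C, n, 3)
    for k in range(n_stations):
        slice_pts = pts[:, k, :]                  # one point per curve
        center = slice_pts.mean(axis=0)
        # mean tangent from finite differences along the bundle
        k0, k1 = max(k - 1, 0), min(k + 1, n_stations - 1)
        tangent = (pts[:, k1, :] - pts[:, k0, :]).mean(axis=0)
        tangent /= np.linalg.norm(tangent)
        # build an orthonormal frame (u, v) spanning the slice plane
        u = np.cross(tangent, [0.0, 0.0, 1.0])
        if np.linalg.norm(u) < 1e-8:              # tangent ~ z axis
            u = np.cross(tangent, [0.0, 1.0, 0.0])
        u /= np.linalg.norm(u)
        v = np.cross(tangent, u)
        local = (slice_pts - center) @ np.stack([u, v], axis=1)  # (C, 2)
        hull = ConvexHull(local)
        yield center, tangent, local[hull.vertices]
```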


A Sparse, Volumetric Representation of Space Curve Clusters (§4.1.4)

The benefit of the initial form of the macrostructure model is its ability to reconstruct its input curves. The evaluation on this should be relatively easy, as there is no comparison to other techniques. A manual clustering is acceptable but automatic clustering that implies some bound on reconstruction error would probably be better. Good for Vis, SIGGRAPH, EG, EV, I3D, ISMRM. [but why does anyone care? This could be a way of establishing the minimal information required to represent brain datasets... but again, does anyone care about that? Must find related work. dhl suggests that this may be "importance filtering" for curves, but what's the benefit?]


Automatic Tractography Repair / "Healing a Broken Tractogram"

ISMRM poster / Neuroimage paper / TVCG paper? Demonstrate the utility of the cluster boundary surface algorithm for repairing broken curves (§4.1.5).

  • Title, Authors: "Healing a Broken DTI Tractogram with Curve Cluster Boundary Surfaces". Jadrian Miles and David H. Laidlaw.
  • Contributions
    1. A slicing-based (or alpha-shape-contraction-based) algorithm for generating a cluster boundary surface with orientation, spreading, and medial-axis metadata.
    2. An algorithm for "sampling" novel unbroken curves from the cluster boundary surface model ("sparsifying").
    3. An algorithm for extrapolating the curve cluster along its axis ("lengthening"; a minimal spline-extrapolation sketch follows this outline).
    4. An algorithm for joining axially aligned curve clusters based on extrapolation and refinement against underlying DWIs ("bridging").
    5. Maybe others: "fattening", "smoothing".
  • Proofs of Contributions
    1. Demonstration (with figures) of the boundary surface algorithm on synthetic and real-world tractography data.
    2. Compare the proposed algorithm (which uses only the boundary surface, not tractography curves) against one that uses barycentric coordinates of the tractography curves that form a triangle about the seed point to propagate an interpolated curve. This comparison should result in an error measure.
    3. Demonstration (with figures) of the boundary extrapolation algorithm. I'm currently unaware of any competitors for this.
    4. Compare against local extrapolation models: tensor deflection, linear extrapolation, cubic spline extrapolation, smoothed Bezier extrapolation. Also compare QBI tractography on HARDI data against our algorithm on angular-subsampled DTI data.
  • Figures
    1. Fig
    2. Fig
    3. Fig
  • Related work
  • Abstract
  • Conclusions
  • Methods
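
A minimal sketch of the "lengthening" step referenced in Contributions item 3, assuming the boundary-surface algorithm supplies a medial-axis polyline (a hypothetical input): fit a cubic spline to the axis tail and evaluate it past the end. scipy's CubicSpline extrapolates its end polynomial by default, which is also one of the baselines in Proofs item 4.

```python
# Sketch only: extrapolate a cluster's medial axis by spline continuation.
import numpy as np
from scipy.interpolate import CubicSpline

def lengthen(axis_pts, extra_len, step=1.0, tail=8):
    """Extrapolate a medial axis (N, 3) by extra_len mm beyond its last point."""
    tail_pts = axis_pts[-tail:]
    seg = np.linalg.norm(np.diff(tail_pts, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])        # arc length on the tail
    spline = CubicSpline(s, tail_pts, axis=0)          # extrapolates past s[-1]
    s_new = np.arange(s[-1] + step, s[-1] + extra_len + step, step)
    return np.vstack([axis_pts, spline(s_new)])
```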


Automatic Tractography-Based DW-MRI Segmentation

Using DTI/QBI, automatic clustering, and simple macrostructure adjustment (dilation, splitting, merging, bridging gaps, §4.1.5), segment the WM and compare to some ground truth. Mori's atlas?

[The above two could each be ISMRM posters or talks, and should be quickly followed up by an MRM paper.]


A ??? (§4.2.1)

Generating synthetic images from the macrostructure model. Who would care about this? Possible improvement over Leemans et al. due to gap-filling? [Use data matching to support that the model is good, and wave hands about the usefulness of the higher-level model; this has a bigger-bang feel than the clustering one.]

[Check out Ken Joy's multi-material volume representation stuff (last author); TVCG, I believe, or maybe TOG.]