User:Jadrian Miles/Streamline clustering


Tubegen generates an easy-to-parse .nocr file specifying points on streamlines.

  1. Pick a good dataset (Diffusion_MRI#Collaboration_Table).
  2. Run tubegen on it with modified parameters so that it doesn't cull anything; this will result in ~100k curves, with an average of ~70 points per curve.
  3. Write a Python script to divide the computation of the curve-to-curve distance matrix among many computers.
    • Try the max and the mean of the minimum point-to-curve distances in the overlapping region as inter-curve distance measures (see the distance sketch after this list).
    • The per-curve script should return its assigned row of the matrix as well as a list of the other curves sorted by distance and annotated with that distance, for fast clustering (see the worker sketch after this list).
    • After computing the upper triangle of the matrix, fill in the lower triangle in a distributed procedure and compute an ordered/annotated list of curves, sorted by their smallest distance to any other curve.
  4. Build up clusters until some termination condition is met: a satisfactory number of non-singleton clusters, a satisfactory median size of non-singleton clusters, etc. Or just run until everything merges into one huge cluster, but store the binary cluster tree along the way. The tree may end up badly skewed, but a tree-rebalancing algorithm might help in post-processing. (See the clustering sketch after this list.)
    • Initialization: each curve is a singleton cluster.
    • A curve's distance to a cluster is the minimum distance to any curve in that cluster.
    • In each iteration, take the singleton cluster whose minimum curve-to-curve distance to any other cluster is smallest and merge it into that closest cluster.
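
Below is a minimal sketch of the two candidate inter-curve distance measures, assuming each curve is an (N, 3) NumPy array of streamline points parsed from the .nocr file. How the "overlapping region" should be delimited isn't pinned down above, so this version uses all points of each curve; restricting to the overlap would be a refinement on top of it. The function names here are illustrative, not part of tubegen.

import numpy as np

def point_to_curve_dists(a, b):
    """For every point of curve a, the distance to the closest point of curve b."""
    diffs = a[:, None, :] - b[None, :, :]          # (Na, Nb, 3) pairwise differences
    return np.sqrt((diffs ** 2).sum(axis=2)).min(axis=1)

def curve_distance(a, b, mode="mean"):
    """Symmetrized max or mean of the minimum point-to-curve distances."""
    d_ab = point_to_curve_dists(a, b)
    d_ba = point_to_curve_dists(b, a)
    if mode == "max":                              # Hausdorff-like worst case
        return max(d_ab.max(), d_ba.max())
    return 0.5 * (d_ab.mean() + d_ba.mean())       # mean closest-point distance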
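
A sketch of the per-curve worker, reusing the hypothetical curve_distance() above and assuming the driver hands each worker a row index i plus the full list of curves. Only the upper-triangle entries (j > i) are computed here; the driver is then responsible for mirroring rows into the lower triangle and building the global nearest-neighbor ordering.

def compute_row(i, curves, mode="mean"):
    """One worker's share: row i of the distance matrix plus a sorted neighbor list."""
    row = {j: curve_distance(curves[i], curves[j], mode)
           for j in range(i + 1, len(curves))}
    # Distance-annotated neighbors, nearest first, for fast clustering later.
    neighbors = sorted(row.items(), key=lambda item: item[1])
    return row, neighbors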
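
Finally, a sketch of the agglomeration loop from step 4, assuming a full symmetric distance matrix D is available as a NumPy array. It merges only singleton clusters, as specified above, and stops when no singletons remain or when a target number of non-singleton clusters is reached; the binary merge tree is recorded as (parent, left child, right child) triples. This is illustrative only; with ~100k curves the inner loops would need the precomputed sorted neighbor lists to be practical.

def cluster(D, target_nonsingletons=None):
    """Merge singleton clusters into their nearest clusters, recording the merge tree."""
    n = D.shape[0]
    clusters = {i: {i} for i in range(n)}   # cluster id -> set of curve indices
    tree, next_id = [], n
    while True:
        singletons = [c for c, m in clusters.items() if len(m) == 1]
        nonsingleton_count = sum(1 for m in clusters.values() if len(m) > 1)
        if not singletons or (target_nonsingletons is not None
                              and nonsingleton_count >= target_nonsingletons):
            break
        # A singleton's distance to another cluster is the minimum curve-to-curve
        # distance to any member of that cluster.
        best = None  # (distance, singleton id, other cluster id)
        for s in singletons:
            (i,) = clusters[s]
            for c, members in clusters.items():
                if c == s:
                    continue
                d = min(D[i, j] for j in members)
                if best is None or d < best[0]:
                    best = (d, s, c)
        if best is None:                     # nothing left to merge with
            break
        _, s, c = best
        clusters[next_id] = clusters.pop(s) | clusters.pop(c)
        tree.append((next_id, s, c))
        next_id += 1
    return clusters, tree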