User:Jadrian Miles/Streamline clustering
Tubegen generates an easy-to-parse .nocr file specifying points on streamlines.
- Pick a good dataset (Diffusion_MRI#Collaboration_Table).
- Run tubegen on it with modified parameters so that it culls nothing; this will result in ~100k curves, with an average of ~70 points per curve.
- Write a Python script to divide the computation of the curve-to-curve distance matrix among many computers.
- As the inter-curve distance measure, try both the maximum and the mean of the minimum point-to-curve distances over the region where the curves overlap.
- The per-curve script should return its assigned matrix row, as well as a list of the other curves sorted by distance and annotated with the distance, to enable fast clustering.
- After computing the upper triangle of the matrix, fill in the lower triangle in a distributed procedure, and compute a list of curves ordered and annotated by their smallest distance to any other curve.
- Build up clusters until some termination condition is met: a satisfactory number of non-singleton clusters, a satisfactory median size of non-singleton clusters, etc. Alternatively, just run until one huge cluster remains, but store the binary cluster tree; it may end up badly skewed, but a tree-rebalancing algorithm might help in post-processing.
- Initialization: each curve is a singleton cluster.
- A curve's distance to a cluster is the minimum distance to any curve in that cluster.
- In each iteration, take the singleton cluster with the lowest minimum curve-to-curve distance and merge it into its closest cluster.
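The distance measure described above can be sketched as follows. The names `point_to_curve` and `curve_distance` are hypothetical, and for simplicity this sketch reduces over all sample points of each curve rather than restricting to the overlapping region; it symmetrizes by taking the larger of the two directed values.

```python
import numpy as np

def point_to_curve(p, curve):
    """Minimum Euclidean distance from point p (shape (3,)) to the
    sample points of a curve (shape (n, 3))."""
    return np.min(np.linalg.norm(curve - p, axis=1))

def curve_distance(a, b, reduce="mean"):
    """Symmetric inter-curve distance: reduce the per-point minimum
    distances with either the mean ("mean of minima") or the max
    (a Hausdorff-style "max of minima"), then take the larger of the
    two directed values so the measure is symmetric."""
    red = np.mean if reduce == "mean" else np.max
    d_ab = red([point_to_curve(p, b) for p in a])
    d_ba = red([point_to_curve(q, a) for q in b])
    return max(d_ab, d_ba)
```

With ~100k curves of ~70 points each, a vectorized all-pairs point-distance computation per curve pair would likely be needed in practice; the list comprehensions here are just for clarity.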
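The distributed matrix computation could be divided as sketched below, under assumptions not in the notes: `assign_rows` and `upper_triangle_rows` are hypothetical names, and `dist` stands in for whatever inter-curve distance measure is chosen.

```python
def assign_rows(n_curves, n_workers):
    """Round-robin assignment of distance-matrix rows to workers.

    Row i of the upper triangle holds n_curves - i - 1 entries, so
    interleaving the rows keeps per-worker workloads roughly balanced.
    """
    return [list(range(w, n_curves, n_workers)) for w in range(n_workers)]

def upper_triangle_rows(rows, curves, dist):
    """Compute the assigned upper-triangle rows of the distance matrix.

    Returns {i: [(dist to curve j, j) for j > i]}, with each row sorted
    by distance so it is already annotated; the full sorted neighbor
    list per curve can be assembled once the lower triangle is filled in.
    """
    return {i: sorted((dist(curves[i], curves[j]), j)
                      for j in range(i + 1, len(curves)))
            for i in rows}
```

Round-robin is one simple balancing choice; blocked assignments would also work but leave the last worker with the shortest rows.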
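The clustering loop above (singleton initialization, minimum-distance curve-to-cluster measure, merge-closest-singleton iteration) can be sketched as a small in-memory prototype. `agglomerate` is a hypothetical name, and the sketch uses a precomputed distance matrix rather than the distributed per-curve lists; note that because only singletons initiate merges, the loop can halt with several non-singleton clusters remaining, so the run-to-one-huge-cluster variant would need an additional rule for merging non-singleton clusters.

```python
def agglomerate(D, max_nonsingleton=None):
    """Agglomerative clustering per the rules in the notes.

    D: symmetric n-by-n distance matrix (diagonal ignored).
    Returns the remaining clusters and a list of merge records
    (new_id, id_a, id_b, distance) encoding the binary cluster tree.
    """
    n = len(D)
    clusters = {i: [i] for i in range(n)}   # cluster id -> member curves
    singletons = set(range(n))
    tree = []
    next_id = n
    while singletons and len(clusters) > 1:
        best = None
        for s in singletons:
            for c, members in clusters.items():
                if c == s:
                    continue
                # curve-to-cluster distance: min over the cluster's curves
                d = min(D[clusters[s][0]][m] for m in members)
                if best is None or d < best[0]:
                    best = (d, s, c)
        d, s, c = best
        clusters[next_id] = clusters.pop(s) + clusters.pop(c)
        singletons.discard(s)
        singletons.discard(c)
        tree.append((next_id, s, c, d))
        next_id += 1
        if max_nonsingleton is not None:
            if sum(1 for m in clusters.values() if len(m) > 1) >= max_nonsingleton:
                break
    return clusters, tree
```

At ~100k curves the quadratic scan over candidate merges would be too slow; the sorted, annotated per-curve lists from the distributed step are what would make each merge lookup fast.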