User:Jadrian Miles/Thesis manifesto: probabilistic worldview
My dissertation involves solving three problems:
- Automatically clustering tractography curves together so that the resulting clusters are neither too small (low reconstruction error, high model complexity) nor too big (high reconstruction error, low model complexity)
- Automatically adjusting macrostructure elements to match input DWIs so that the elements' surfaces are neither too bumpy (low reconstruction error, high model complexity) nor too smooth (high reconstruction error, low model complexity)
- Automatically adjusting microstructure properties within a given region in space to match input DWIs so that the spatial frequency of the microstructure parameters is neither too high (low reconstruction error, high model complexity) nor too low (high reconstruction error, low model complexity)
In each case, our goal is to balance the tradeoff between reconstruction error and model complexity. In order to do so in a principled fashion, we must define, for each case, what is meant by reconstruction error and model complexity. What probabilistic tools are available to help us do this?
Fitting
One apparent option is $\chi^2$ fitting. Given:
- $N$ observations $y_i$ taken at independent variables $x_i$
- The knowledge that the measurement error on each observation is Gaussian with standard deviation $\sigma_i$
- A model instance with $M$ parameters $\mathbf{a}$, which gives a reconstructed "observation" for a given independent variable as $\hat{y}(x; \mathbf{a})$
we define the $\chi^2$ statistic as follows:
$$\chi^2 = \sum_{i=1}^{N} \left( \frac{y_i - \hat{y}(x_i; \mathbf{a})}{\sigma_i} \right)^2$$
This is essentially a normalized error score, and in itself tells us nothing about goodness of fit. The interesting thing that we can do here, though, is compute the probability of seeing an error as bad or worse than the observed error score under a different set of observations drawn from the same underlying distribution, given the degrees of freedom $\nu = N - M$. This probability is called $Q$, and is defined as:
$$Q(\chi^2 \mid \nu) = 1 - F_{\chi^2}(\chi^2; \nu)$$
where $F_{\chi^2}(\cdot\,; \nu)$ is the CDF of the $\chi^2$ distribution with $\nu$ degrees of freedom. Here's how to interpret $Q$:
- Really low $Q$ ($Q \approx 0$) means that you can't do a much worse job describing the data than you did with your model instance. This means either that your fit is bad ($\chi^2$ is really high) or that your model has so many free parameters (and thus $\nu = N - M$ is so small) that even a low $\chi^2$ isn't convincing.
- Really high $Q$ ($Q \approx 1$) means that you can't do much better, even given the number of degrees of freedom. This is a suspicious result and may indicate data-fudging or just overestimation of the $\sigma_i$'s, so it shouldn't come up in an automated system.
This means that maximizing $Q$ over all values of $\nu$ (from very simple to very complex models) should find the "happy medium".
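To make the bookkeeping concrete, here is a minimal, self-contained Python sketch of the $\chi^2$ and $Q$ computations. A toy polynomial family stands in for the actual tractography/macrostructure/microstructure models; neither the polynomials nor the SciPy calls are part of the thesis pipeline, and all names are mine.

```python
# Minimal sketch of the chi^2 / Q machinery (illustrative names only).
import numpy as np
from scipy.stats import chi2


def chi_square(y, y_hat, sigma):
    """Normalized error score: sum of sigma-scaled squared residuals."""
    return np.sum(((y - y_hat) / sigma) ** 2)


def goodness_q(chi_sq, n_obs, n_params):
    """Q(chi^2 | nu): probability of an error this bad or worse, nu = N - M."""
    return chi2.sf(chi_sq, df=n_obs - n_params)  # survival function = 1 - CDF


# Toy data: noisy samples of a quadratic with known Gaussian sigma.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
sigma = 0.05
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0.0, sigma, x.size)

# Sweep model complexity (here: polynomial degree) and keep the max-Q model.
scores = []
for degree in range(8):
    coeffs = np.polyfit(x, y, degree)  # best fit at this complexity
    q = goodness_q(chi_square(y, np.polyval(coeffs, x), sigma), x.size, degree + 1)
    scores.append((q, degree))
best_q, best_degree = max(scores)
print(f"best degree: {best_degree}, Q = {best_q:.3f}")
```

Underfit models are punished through a huge $\chi^2$ (so $Q \approx 0$), while past the "true" complexity each extra parameter buys little reduction in $\chi^2$ but costs a degree of freedom; that is the tradeoff the $Q$ maximization is meant to balance.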
Let's consider the meaning of all these variables for each of the problems defined above:
Curve Clustering
- $N$ is the number of vertices in all the curves in the tractography set. Each $x_i$ therefore represents the selection of a single vertex (by two parameters: the index of the curve in the curve set, and the index or arc-length distance of the desired vertex along this curve). $y_i$ isn't particularly well-defined in this case (see below), but amounts to the description of the curve near $x_i$: the vertex's position and the angle between the consecutive segments, maybe.
- $M$ is the total number of curves that form the skeletons of the shrink-wrap polyhedra for the model, plus the total number of bundles/polyhedra. Note that even in the most "complicated" model, $M$ is smaller than $N$ by a factor of half the average number of $x_i$'s per curve. $\hat{y}(x_i; \mathbf{a})$ is the description of the reconstructed curve specified by $x_i$ at the specified position along it, given the association of curves with bundles specified by the model parameter values $\mathbf{a}$.
- $\sigma_i$ is defined according to the definition of the difference $y_i - \hat{y}(x_i; \mathbf{a})$ that we choose (see below).
Let's assume that for comparing the reconstruction ($\hat{y}(x_i; \mathbf{a})$, which we will just call $\hat{y}_i$) with the observation ($y_i$), we want to incorporate both a position and an angle reconstruction error. Then we must define some sort of scalar pseudo-difference function
$$d(y_i, \hat{y}_i) = d_{\mathrm{pos}} + \lambda\, d_{\mathrm{ang}}$$
where $d_{\mathrm{pos}}$ is the difference in positions (Euclidean distance) between the vertices, and $d_{\mathrm{ang}}$ is some difference between angles, perhaps one minus the dot product. $\lambda$ may either be hand-tuned or solved for in the course of the optimization; in this latter case, $M$ must be increased by one.
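A minimal sketch of this pseudo-difference, assuming the weighted-sum form above and representing the "angle" at each vertex by a unit direction vector; the function and argument names are hypothetical:

```python
import numpy as np


def pseudo_difference(p_obs, t_obs, p_rec, t_rec, lam):
    """Scalar pseudo-difference between an observed and a reconstructed vertex.

    p_obs, p_rec : 3-vectors, observed and reconstructed vertex positions
    t_obs, t_rec : unit direction vectors describing the curve's bend at the vertex
    lam          : weight trading position error against angle error (hand-tuned,
                   or fit alongside the model, in which case M grows by one)
    """
    d_pos = np.linalg.norm(np.asarray(p_obs) - np.asarray(p_rec))  # Euclidean distance
    d_ang = 1.0 - float(np.dot(t_obs, t_rec))                      # one minus the dot product
    return d_pos + lam * d_ang
```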
The $\sigma_i$'s must be determined experimentally (with synthetic phantoms, for example). It remains for these experiments to demonstrate that the errors are normally distributed. If not, some wacky transform might be necessary to shoehorn it into the $\chi^2$ regime, or Monte-Carlo simulation of the error distribution might allow for a different but equivalent approach (see §15.6 of NR).
Bundle Adjustment
Microstructure Fitting
- $N$ is the number of voxels in the relevant region multiplied by the number of observations (diffusion weightings) in each. Each individual configuration (voxel and diffusion weighting) is an $x_i$. The observed intensity for each $x_i$ is $y_i$.
- $M$ is the number of control parameters in the space-filling microstructure model. Assuming that the microstructure instance for each voxel is determined by some sort of spline with sparse control points, $M$ is 3 parameters for position plus the number of parameters actually controlling the microstructure model, multiplied by the number of control points. $\hat{y}(x_i; \mathbf{a})$ is the reconstructed signal intensity in the voxel and diffusion weighting specified by $x_i$, given the model parameter values $\mathbf{a}$.
- $\sigma_i$ is known for all $i$ by estimation of the Rician noise parameters from the DWIs. But...
Unfortunately, we know that the noise on the $y_i$'s is not Gaussian but rather Rician. Therefore $y_i - \hat{y}_i$ is not normally distributed, and so the $\chi^2$ statistic is invalid in the form above. We can, however, create a "fake" observation $\tilde{y}_i$ from $y_i$ for which $\tilde{y}_i - \hat{y}_i$ is normally distributed. We do this by solving for the $\tilde{y}_i$ that satisfies the following equality:
$$F_{\mathrm{Rice}}(y_i \mid A, \sigma) = F_{\mathrm{Gauss}}(\tilde{y}_i \mid A, \sigma)$$
The two sides of the equality are values of the cumulative distribution functions of, respectively, a Rician random variable and a Gaussian random variable:
$$1 - Q_1\!\left(\frac{A}{\sigma}, \frac{y_i}{\sigma}\right) = \frac{1}{2}\left[1 + \operatorname{erf}\!\left(\frac{\tilde{y}_i - A}{\sigma\sqrt{2}}\right)\right]$$
where $Q_1$ is the first-order Marcum Q-function, and $\operatorname{erf}$ is the Gauss error function. Solving, we find:
$$\tilde{y}_i = A + \sigma\sqrt{2}\, \operatorname{erf}^{-1}\!\left(1 - 2\, Q_1\!\left(\frac{A}{\sigma}, \frac{y_i}{\sigma}\right)\right)$$
This function re-maps a Rician random variable with center $A$ and noise $\sigma$ into a Gaussian random variable with the same $\sigma$ and central value $A$, and so we may use $\tilde{y}_i$ in place of $y_i$ in the summation for the $\chi^2$ statistic.
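The same remapping can be done numerically by matching CDFs with SciPy's `rice` and `norm` distributions (the Rice CDF is exactly $1 - Q_1(A/\sigma, y/\sigma)$). The function name below is mine, and taking the current reconstruction $\hat{y}_i$ as the center $A$ in the example is my assumption, not something fixed above:

```python
# Sketch of the Rician-to-Gaussian remapping by CDF matching.
import numpy as np
from scipy.stats import norm, rice


def gaussianize_rician(y, A, sigma):
    """Map a Rician intensity y (center A, noise sigma) to the value with the
    same cumulative probability under a Gaussian N(A, sigma^2)."""
    p = rice.cdf(y, b=A / sigma, scale=sigma)  # = 1 - Q_1(A/sigma, y/sigma)
    return norm.ppf(p, loc=A, scale=sigma)     # = A + sigma*sqrt(2)*erfinv(2p - 1)


# Example (assumption): remap measured intensities around the current model
# reconstruction y_hat before forming the chi^2 sum.
y_obs = np.array([110.0, 95.0, 102.0])
y_hat = np.array([100.0, 100.0, 100.0])
sigma = 10.0
y_tilde = gaussianize_rician(y_obs, y_hat, sigma)
```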
The inner loop of our algorithm is given a model with $\nu$ degrees of freedom, and fits the parameters $\mathbf{a}$ to maximize $Q$ (or equivalently, minimize $\chi^2$) for that model (see §15.5 of NR). The outer loop is a search over all models (some of which may have the same $\nu$) to find a global maximum in $Q$. As mentioned above, we should never get an unrealistically high $Q$ value, as that's really only achievable by either fudging the data or overestimating the $\sigma_i$'s.
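A structural sketch of this inner/outer loop, assuming a hypothetical model object that exposes `n_params`, an `initial_params()` guess, and a `predict(params, x)` method (none of these names come from the thesis; they just make the nesting explicit):

```python
# Skeleton of the nested search: the inner loop fits parameters, the outer
# loop searches over model structures for the maximum-Q instance.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2


def fit_and_score(model, x, y, sigma):
    """Inner loop: minimize chi^2 over this model's parameters, then report Q."""
    def chi_sq(params):
        return np.sum(((y - model.predict(params, x)) / sigma) ** 2)

    result = minimize(chi_sq, model.initial_params())
    q = chi2.sf(result.fun, df=len(y) - model.n_params)
    return q, result.x


def select_model(candidate_models, x, y, sigma):
    """Outer loop: keep the candidate structure whose best fit maximizes Q."""
    best = None
    for model in candidate_models:
        q, params = fit_and_score(model, x, y, sigma)
        if best is None or q > best[0]:
            best = (q, model, params)
    return best
```

The exhaustive outer loop here is only a placeholder; in practice the search over model structures (bundle assignments, control-point counts, and so on) would need its own strategy.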