16.1.2 Linear Chi Squared Minimization

For least squares minimization we assumed that one of the variables, $y$, carried the error that accounts for the deviation of the data from the model we want to fit.

This error was not quantified by the measurement; furthermore, we gave each error term equal weight in the total error to be minimized.

What if we had a measurement of the uncertainty of each of our $y$ values? Let's characterize these uncertainties using the standard deviation of each $y_i$ measurement: $\sigma_i$.

We now want to weight the contribution that each error value $\epsilon_i$ makes to the total error by the uncertainty $\sigma_i$. Ideally we want the model to fit within the uncertainties of the data points (or at least within the fraction of the data points given by the confidence level of the uncertainties). This means that we want to prioritize minimizing the error from points with low uncertainty, or conversely to suppress the contribution of points with high uncertainty. To accomplish this we will minimize the $\chi^2$ value of the data:

$$\chi^2 = \sum_{i=1}^N \left(\frac{\epsilon_i}{\sigma_i}\right)^2$$

where each error value is weighted by dividing it by its uncertainty. Note that if all of the $\sigma_i$ were constant, we'd be back to least squares (the constant multiplicative factor drops out in the minimization).
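As a quick numerical illustration, the $\chi^2$ sum can be evaluated directly for a candidate model. The data set and the model $y = 1 + 2x$ below are hypothetical, chosen only to show the weighting:

```python
import numpy as np

# Hypothetical data: x_i, y_i measurements with per-point uncertainties sigma_i
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 2.9, 5.2, 7.0])
sigma = np.array([0.1, 0.2, 0.1, 0.3])

# Residuals epsilon_i = model - data, for the candidate model y = 1 + 2x
eps = (1.0 + 2.0 * x) - y

# Each residual is weighted by its uncertainty before squaring and summing
chi2 = np.sum((eps / sigma) ** 2)
print(chi2)  # ≈ 5.25 for this data
```

Points with small $\sigma_i$ (here the first and third) dominate the sum, exactly the prioritization described above.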

With 1 Independent Variable

Returning to our scenario with two variables $x$ and $y$, modeled by the functional relation:

$$y = a_0 + a_1 x$$

with a data set of measured $x_i$ and $y_i$ values, with $\sigma_i$ as the uncertainty of the $y_i$ values for $i = 1, \ldots, N$, $\chi^2$ can now be written as:

$$\chi^2 = \sum_{i=1}^N \left(\frac{\epsilon_i}{\sigma_i}\right)^2 = \sum_{i=1}^N \left(\frac{a_0 + a_1 x_i - y_i}{\sigma_i}\right)^2$$

Minimizing $\chi^2$ with respect to $a_0$ and $a_1$ yields:

$$a_0 = \left(\left\langle \frac{y}{\sigma^2} \right\rangle \left\langle \frac{x^2}{\sigma^2} \right\rangle - \left\langle \frac{x}{\sigma^2} \right\rangle \left\langle \frac{xy}{\sigma^2} \right\rangle\right) \Big/ D$$

$$a_1 = \left(\left\langle \frac{1}{\sigma^2} \right\rangle \left\langle \frac{xy}{\sigma^2} \right\rangle - \left\langle \frac{x}{\sigma^2} \right\rangle \left\langle \frac{y}{\sigma^2} \right\rangle\right) \Big/ D$$

where

$$D = \left\langle \frac{1}{\sigma^2} \right\rangle \left\langle \frac{x^2}{\sigma^2} \right\rangle - \left\langle \frac{x}{\sigma^2} \right\rangle^2$$

Note that in practice the $1/N$ factors of the expectation values cancel out.
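The closed-form solution above can be sketched directly with weighted sums in place of the expectation values (the $1/N$ factors cancel, so plain sums suffice). The data set here is hypothetical, used only to exercise the formulas:

```python
import numpy as np

# Hypothetical data: x_i, y_i measurements with uncertainties sigma_i
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 7.0, 8.8])
sigma = np.array([0.1, 0.2, 0.1, 0.3, 0.2])

w = 1.0 / sigma**2  # weights 1/sigma_i^2

# Weighted sums standing in for the expectation values <./sigma^2>
# (the 1/N factors cancel between numerator and denominator)
S = np.sum(w)           # sum of 1/sigma^2
Sx = np.sum(w * x)      # sum of x/sigma^2
Sy = np.sum(w * y)      # sum of y/sigma^2
Sxx = np.sum(w * x**2)  # sum of x^2/sigma^2
Sxy = np.sum(w * x * y) # sum of xy/sigma^2

D = S * Sxx - Sx**2
a0 = (Sy * Sxx - Sx * Sxy) / D  # intercept
a1 = (S * Sxy - Sx * Sy) / D    # slope

print(a0, a1)
```

The result can be cross-checked against a weighted polynomial fit, e.g. `np.polyfit(x, y, 1, w=1/sigma)`, which minimizes the same weighted sum of squared residuals.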