Overview of Least Squares Estimation


The least squares method determines the coefficients of a function, called the "objective function", that give the "best fit" between that function and a data sample. The best fit is determined on the basis of the sum of squared differences, or residuals, between the sample points and the function. Each sample point consists of one or more coordinates, an observed value at those coordinates and, optionally, a weight assigned to the point to control its effect on determining the best fit.
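
In other words, the fit minimizes S = sum over i of w[i] * (y[i] - f(x[i]))^2. The following Python sketch (illustrative only; it does not use the CLsqFit API) shows the quantity being minimized:

    import numpy as np

    # Illustrative only: the weighted sum of squared residuals that
    # a least squares fit minimizes.
    x = np.array([0.0, 1.0, 2.0, 3.0])   # coordinates of the sample points
    y = np.array([1.1, 2.9, 5.2, 6.8])   # observed values at those coordinates
    w = np.array([1.0, 1.0, 1.0, 1.0])   # optional per-point weights

    def sum_sq_residuals(f, x, y, w):
        """Weighted sum of squared differences between data and function."""
        r = y - f(x)                     # residual at each sample point
        return np.sum(w * r * r)

    # Evaluate the criterion for a trial line y = 1 + 2x.
    print(sum_sq_residuals(lambda t: 1.0 + 2.0 * t, x, y, w))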

This class uses the linear least squares method to estimate the coefficients of the objective function. Linear least squares requires that the coefficients and independent variables be separable into the dot product of two vectors: a coefficient vector and a basis vector. The coefficient vector contains the parameters of the function being fit; the basis vector contains the value of each function term's independent-variable part, evaluated at the point's coordinates. Therefore, the number of coefficients being fit equals the number of basis terms.

For example, y = a[1] + a[2] x is the equation of a line and is "linear" in the coefficients a: it separates into the dot product of the coefficient vector (a[1], a[2]) and the basis vector (1, x). In contrast, y = a[1] exp(a[2] x) is not linear, because both coefficients appear in the term a[1] exp(a[2] x), which cannot be separated; since that equation is not linear, it cannot be fit directly using linear least squares. For the line, the least squares estimate determines the coefficients (a[1], a[2]) which minimize the squared error between the y values predicted using (a[1], a[2]) and the y value observed at each coordinate point. In the CLsqFit class, x may have up to 10 dimensions (or "coordinates"). For example, fitting a function to a 2-dimensional surface, such as the intensity of a 2-d image, involves the two coordinates x and y, hence the least squares basis would have 2 dimensions.
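
As an illustration of the separation into coefficient and basis vectors, the following Python sketch builds one basis vector per sample point and solves for the coefficients, first for a line and then for a plane over 2 coordinates. It uses numpy's general-purpose solver as a stand-in and is not the CLsqFit API:

    import numpy as np

    # Fit the line y = a[1] + a[2]*x: each row of the design matrix is
    # the basis vector (1, x) evaluated at one sample point.
    x = np.array([0.0, 1.0, 2.0, 3.0])
    y = np.array([1.1, 2.9, 5.2, 6.8])
    A = np.column_stack([np.ones_like(x), x])       # one basis vector per row
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)  # estimates of (a[1], a[2])
    print(coeffs)

    # With 2 coordinates, e.g. fitting the plane z = a[1] + a[2]*x + a[3]*y
    # to a surface, the basis vector per point becomes (1, x, y).
    xs = np.array([0.0, 1.0, 0.0, 1.0, 2.0])
    ys = np.array([0.0, 0.0, 1.0, 1.0, 2.0])
    zs = np.array([0.5, 1.6, 2.4, 3.5, 6.4])
    A2 = np.column_stack([np.ones_like(xs), xs, ys])
    coeffs2, *_ = np.linalg.lstsq(A2, zs, rcond=None)
    print(coeffs2)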

Oftentimes, the data sample contains outlying values that do not follow the same objective function as the majority of the data. Statistically speaking, the outliers are not drawn from the same population as the majority of the data. These "bad" points may be automatically excluded from the fit by enabling data rejection. You also may adjust the weights of sample points or manually delete outliers from the fitting process. Thus there are three ways to exclude a point from the fitting process: 1) set its weight to zero, 2) delete it from the fit, or 3) enable data rejection and allow it to be rejected automatically. A deleted point can later be reinstated by undeleting it. For further discussion, see Rejecting Outliers from the Fit.
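
The sketch below illustrates one common automatic rejection scheme, iterative sigma-clipping, in which a point whose residual is large compared to the fit standard deviation has its weight set to zero and the fit is repeated. This is a plain Python illustration of the idea; the rejection rule actually used by CLsqFit may differ:

    import numpy as np

    def fit_with_rejection(A, y, w, nsigma=3.0, max_iter=5):
        """Repeatedly fit, then zero the weight of any point whose residual
        exceeds nsigma times the fit standard deviation (sigma-clipping)."""
        w = w.astype(float).copy()
        coeffs = None
        for _ in range(max_iter):
            sw = np.sqrt(w)
            coeffs, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
            r = y - A @ coeffs                  # residual of every point
            used = w > 0
            dof = used.sum() - A.shape[1]
            sigma = np.sqrt(np.sum(w * r * r) / dof)
            bad = used & (np.abs(r) > nsigma * sigma)
            if not bad.any():
                break
            w[bad] = 0.0                        # reject: weight set to zero
        return coeffs, w

    rng = np.random.default_rng(1)
    x = np.arange(20.0)
    y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5, size=20)
    y[10] += 50.0                               # plant one obvious outlier
    A = np.column_stack([np.ones_like(x), x])
    coeffs, w = fit_with_rejection(A, y, np.ones_like(x))
    print(coeffs)                               # close to (1, 2)
    print(w)                                    # weight of point 10 is now 0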

After a fit is computed, any number of operations may be performed with the results. For example, you may wish to examine the coefficient values and their uncertainties, or statistical errors. You might also wish to evaluate the fit to predict the function value at new coordinates. The independence of the calculated coefficients can be assessed by examining the covariance matrix. The overall RMS uncertainty of the fit is given by the fit standard deviation. The difference between the observed and predicted value at each sample point, known as the residual, also may be obtained to evaluate the quality of each sample point.
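
The following Python sketch shows how these post-fit quantities are conventionally computed; the formulas are standard linear least squares results, not CLsqFit methods:

    import numpy as np

    # Standard post-fit quantities for a linear least squares line fit.
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([1.2, 2.8, 5.1, 7.2, 8.9])
    A = np.column_stack([np.ones_like(x), x])

    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coeffs                    # residual of each sample point
    dof = len(y) - A.shape[1]                 # degrees of freedom
    sigma = np.sqrt(resid @ resid / dof)      # fit standard deviation (RMS)

    cov = sigma**2 * np.linalg.inv(A.T @ A)   # covariance of the coefficients
    errs = np.sqrt(np.diag(cov))              # 1-sigma coefficient uncertainties
    corr = cov[0, 1] / (errs[0] * errs[1])    # correlation between a[1], a[2]

    # Evaluate the fit to predict values at new coordinates.
    x_new = np.array([5.0, 6.0])
    y_new = np.column_stack([np.ones_like(x_new), x_new]) @ coeffs
    print(coeffs, errs, sigma, corr, y_new)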

Related Topics

Using CLsqFit with Image Pixels

CLsqFit class