We have talked about the Numpy and Matplotlib libraries, but there is a third library that is invaluable for scientific analysis: Scipy. Scipy is essentially a very large collection of functions that you can use for scientific computing. A good place to start exploring the top-level scientific functionality in Scipy is the documentation.
Examples of the functionality include:

- Integration (scipy.integrate)
- Optimization and fitting (scipy.optimize)
- Interpolation (scipy.interpolate)
- Fourier transforms (scipy.fft)
- Signal processing (scipy.signal)
- Linear algebra (scipy.linalg)
- Statistics (scipy.stats)

and so on.
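For instance, here is a quick taste (a minimal sketch, separate from the fitting example below) using scipy.integrate.quad to evaluate a definite integral:

import numpy as np
from scipy.integrate import quad

# integrate sin(x) from 0 to pi; quad returns the integral and an error estimate
result, error = quad(np.sin, 0., np.pi)
print(result)  # should be very close to 2.0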
In this section, we will take a look at how to fit models to data. When analyzing scientific data, fitting models to data allows us to determine the parameters of a physical system (assuming the model is correct).
There are a number of routines in Scipy to help with fitting, but we will use the simplest one, curve_fit(), which is imported as follows:
import numpy as np
from scipy.optimize import curve_fit
The full documentation for curve_fit() is available here. We will look at a simple example: fitting a straight line to a dataset.
We first create a fake/mock dataset with some random noise:
import matplotlib.pyplot as plt
%matplotlib inline

# 100 random x values between 0 and 100
x = np.random.uniform(0., 100., 100)

# a line with slope 3 and intercept 2, plus Gaussian noise with standard deviation 10
y = 3. * x + 2. + np.random.normal(0., 10., 100)

plt.plot(x, y, '.')
Let's now imagine that this is real data, and that we want to determine the slope and intercept of the best-fit line to the data. We start off by defining a function representing the model:
def line(x, a, b):
    return a * x + b
The arguments to the function should be x, followed by the parameters. We can now call curve_fit() to find the best-fit parameters using a least-squares fit:
popt, pcov = curve_fit(line, x, y)
The curve_fit() function returns two items, which we call popt and pcov. The popt variable contains the best-fit parameters for a and b:
popt
These values are close to the true values of 3 and 2 used in the definition of y.
The reason the values are not exact is that there are only a limited number of random samples, so the best-fit slope and intercept will not exactly match the values used in the definition of y. The pcov variable contains the covariance matrix, which indicates the uncertainties on, and correlations between, the parameters. This is mostly useful when the data points have known uncertainties.
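For example (a minimal sketch, reusing the popt and pcov from the fit above), the one-sigma parameter uncertainties are the square roots of the diagonal elements of the covariance matrix:

# one-sigma uncertainties on a and b from the diagonal of the covariance matrix
perr = np.sqrt(np.diag(pcov))
print(perr)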
Important Note: by default, curve_fit() uses sigma only as relative weights, i.e., as hints about how important each point should be in the least-squares computation. To compute the covariance matrix, it then estimates the actual errors from the scatter in the data. If, however, your data points come with reliable error estimates, this throws away a lot of information, and the covariance matrix is less meaningful than it could be. To make curve_fit() treat its sigma argument as actual error estimates, and use their absolute (rather than relative) values in the pcov calculation, you need to pass absolute_sigma=True. This is what we do below.
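To see the difference concretely, here is a minimal sketch (using the x and y arrays defined above, with an assumed constant error of 10 on each point) comparing the covariance matrices returned in the two modes:

err = np.repeat(10., 100)  # assumed constant error bar on each point

# default: sigma treated as relative weights, errors re-estimated from the scatter
popt_rel, pcov_rel = curve_fit(line, x, y, sigma=err)

# sigma treated as absolute one-sigma errors
popt_abs, pcov_abs = curve_fit(line, x, y, sigma=err, absolute_sigma=True)

print(np.sqrt(np.diag(pcov_rel)))  # uncertainties inferred from the scatter
print(np.sqrt(np.diag(pcov_abs)))  # uncertainties propagated from err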
Let's now try to fit the data, assuming each point has a vertical error (standard deviation) of +/-10:
e = np.repeat(10., 100)  # the same error of 10 on every point
plt.errorbar(x, y, yerr=e, fmt="none")
popt, pcov = curve_fit(line, x, y, sigma=e, absolute_sigma=True)
popt
Now pcov contains the true variances and covariances of the parameters, so the best-fit parameters and their uncertainties are:
print("a =", popt[0], "+/-", pcov[0,0]**0.5)
print("b =", popt[1], "+/-", pcov[1,1]**0.5)
We can now plot the best-fit line:
plt.errorbar(x, y, yerr=e, fmt="none")
xfine = np.linspace(0., 100., 100) # define values to plot the function for
plt.plot(xfine, line(xfine, popt[0], popt[1]), 'r-')
You should now be able to fit simple models to datasets! Note that for more complex models, more sophisticated techniques may be required for fitting, but curve_fit() will be good enough for most simple cases.
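For example, fitting a nonlinear model often works better with a reasonable starting guess for the parameters, which can be passed to curve_fit() via the p0 argument. Here is a minimal sketch with an illustrative exponential model (the model and values are assumptions, not part of the example above):

def exponential(x, amplitude, rate):
    return amplitude * np.exp(-rate * x)

# illustrative mock data: amplitude 4, decay rate 1.5, plus a little noise
xdata = np.linspace(0., 5., 50)
ydata = 4. * np.exp(-1.5 * xdata) + np.random.normal(0., 0.1, 50)

# p0 gives the optimizer a sensible starting point for the search
popt_exp, pcov_exp = curve_fit(exponential, xdata, ydata, p0=[1., 1.])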
Note that there is a way to simplify the call to the function with the best-fit parameters, which is:
line(x, *popt)
The * notation expands a sequence of values into separate arguments of the function. This is useful if your function has more than one or two parameters. Hence, you can do:
plt.errorbar(x, y, yerr=e, fmt="none")
plt.plot(xfine, line(xfine, *popt), 'r-')
In the following code, we generate some random data points:
x = np.random.uniform(0., 10., 100)
# np.polyval evaluates the polynomial x**2 + 2*x - 3 at the values in x
y = np.polyval([1, 2, -3], x) + np.random.normal(0., 10., 100)
e = np.random.uniform(5, 10, 100)
Fit a line and a parabola to it and overplot the two models on top of the data:
# your solution here
As before, we use the data/munich_temperatures_average_with_bad_data.txt file, which gives the temperature in Munich every day for several years:
# The following code reads in the file and removes bad values
import numpy as np
date, temperature = np.loadtxt('data/munich_temperatures_average_with_bad_data.txt', unpack=True)
keep = np.abs(temperature) < 90
date = date[keep]
temperature = temperature[keep]
plt.plot(date,temperature)
Fit the following function to the data:
$$f(t) = a~\cos{(2\pi t + b)} + c$$

where $t$ is the time in years.
Make a plot of the data and the best-fit model in the range 2008 to 2012.
What are the best-fit values of the parameters? What is the overall average temperature in Munich, and what are the typical daily average values predicted by the model for the coldest and hottest time of year?
What is the meaning of the b parameter, and does its value make sense?
# your solution here