Okay, I’ll grant you that 2 + 2 = 4, but I’m not so sanguine about 2.016 + 1.975 = 3.992.
If we measured the diameter of a pinhead with a yardstick, we would likely get a different number than if we used a micrometer. We measurement wizards know that it is not only the number that matters, but also how it was obtained. Knowing that, we can determine the quality of the measurement.
We express this as the “uncertainty” of a measurement. This is sometimes referred to as the “error,” but that term implies that there is a mistake. Nearly every measurement we make has an uncertainty.
The first thing we need to do is define the measurement conditions. For this example, we are going to measure a voltage. The expected “nominal” value is 3.0 Volts DC. The nominal matters because the uncertainty, as we will see, depends on the magnitude of the measurement.
For the purpose of this treatise, we’ll say we’re going to use a Keithley Model 2000, a high-quality, mid-range, system-capable multimeter. For the sake of simplicity, we are only going to consider the uncertainty of the instrument, so our result will be based solely on the accuracy of the device.
The specs tell us that a 3.0 V measurement falls into the 10 V range of the device. The resolution on that range is 10 µV. The 1-year accuracy spec at that point is ±45 ppm (parts per million) of reading plus 6 ppm of range. After a few clicks on Mr. Gates’ calculator application:
(3.0 V × 0.000045 = 0.000135 V = 135 µV) + (10 V × 0.000006 = 0.000060 V = 60 µV) = 195 µV
So, our total uncertainty is ±195 µV. If our multimeter reads 3.00000 V, we know that the “actual” voltage falls somewhere between 2.999805 and 3.000195 VDC.
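The arithmetic above is easy to script. Here is a minimal Python sketch of the same calculation, assuming the 10 V range specs quoted earlier (45 ppm of reading plus 6 ppm of range); the variable names are my own, not anything from the instrument:

```python
# Uncertainty from a "ppm of reading + ppm of range" accuracy spec,
# using the assumed Keithley 2000 10 V DC range, 1-year figures.
reading = 3.0        # measured value, volts
vrange = 10.0        # selected range, volts
ppm_reading = 45e-6  # 45 ppm of reading
ppm_range = 6e-6     # plus 6 ppm of range

# Total uncertainty in volts, then the bounds on the "actual" voltage.
uncertainty = reading * ppm_reading + vrange * ppm_range
low = reading - uncertainty
high = reading + uncertainty

print(f"uncertainty = ±{uncertainty * 1e6:.0f} µV")   # ±195 µV
print(f"actual value between {low:.6f} and {high:.6f} V")
```

Swap in the reading, range, and spec figures for your own instrument and range to get its bounds the same way.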
The total uncertainty is much larger than the instrument resolution. This is often the case. Isn’t it good to know what the uncertainty is?
In the next installment, we’ll look at how to deal with uncertainties based on multiple measured parameters. In the future, we’ll discuss how to measure the uncertainty of a measurement system.