Accuracy, precision, and resolution may all sound like different words for the same thing, but there are important distinctions.
We’ll start with the difference between precision and accuracy. The easiest way to show the difference is with an analogy. I’ll use the same one I was taught in the laboratory: imagine data points as projectiles hitting a bullseye. The first thing that matters to a shooter is how close their shots are to the centre. This is accuracy. The second thing that matters is how consistent a group of shots is. Shooters refer to this as their group, and it is a measure of precision. The difference is much easier to grasp when it is visualized:
As is hopefully clear now, accuracy is a measure of “trueness”, while precision is a measure of variability. The two are not necessarily well correlated. Low precision is easily spotted; your data is simply scattered all over the place. Low accuracy isn’t always so easy to spot. Imagine a scenario in which you didn’t have the rings of a bullseye to visualize your data; would you be able to tell that the lower left image had poor accuracy? To be sure, you’d need to already have a good idea of the value you should be reading, and recognize that the results were not what you expected. This is, in effect, measuring a known standard and using it to calibrate your sensor so that it provides meaningful real-world values. Luckily, we already have resources on calibration.
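The bullseye analogy translates directly into two numbers. As a rough sketch (the point names and the choice of spread measure are my own, not a standard definition): accuracy is how far the centre of your group sits from the target, and precision is the average radius of the group around its own centre.

```python
import statistics

def accuracy_and_precision(shots, target=(0.0, 0.0)):
    """Summarize a group of 2D 'shots' relative to a target point.

    Accuracy error: distance from the group's centre (mean point)
    to the target. Smaller is more accurate.
    Precision spread: average distance of the shots from their own
    centre. Smaller is more precise.
    """
    cx = statistics.mean(x for x, _ in shots)
    cy = statistics.mean(y for _, y in shots)
    accuracy_error = ((cx - target[0]) ** 2 + (cy - target[1]) ** 2) ** 0.5
    precision_spread = statistics.mean(
        ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 for x, y in shots
    )
    return accuracy_error, precision_spread

# A tight group far from the centre: precise but not accurate.
biased = [(5.0, 5.1), (5.1, 5.0), (4.9, 5.0), (5.0, 4.9)]
# A scattered group centred on the bullseye: accurate but not precise.
scattered = [(3.0, -2.0), (-3.0, 2.0), (2.0, 3.0), (-2.0, -3.0)]
```

Running both groups through the function shows the decoupling: the biased group scores badly on accuracy but very well on precision, and the scattered group does the opposite.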
Resolution is easily mistaken for precision, but high resolution does not guarantee high precision. Even with many decimal places in the values you are getting from a sensor, you may still find a lot of variability in the data; in other words, low precision despite high resolution. We’ve already taken a look at some of the reasons you may have noise in your data for load cells, and many of those concepts generalize easily. One way that resolution and precision are always related is that resolution determines the upper limit of precision: the precision in your data cannot exceed your resolution. To visualize this, we’ll turn to a real-world application that exposes people to the concept of resolution every day: pictures!
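Before moving to pictures, the relationship can be sketched numerically. The snippet below is a toy model, not any particular sensor: readings snap to the nearest multiple of a step size (the resolution). A coarse step hides detail no matter how steady the input is, while a fine step still produces scattered readings if the input itself is noisy, which is exactly high resolution with low precision.

```python
import statistics

def quantize(value, step):
    """Simulate a sensor with a fixed resolution: every reading
    snaps to the nearest multiple of `step`."""
    return round(value / step) * step

true_value = 2.137

# A perfectly steady input, read at two resolutions.
coarse = [quantize(true_value, 0.5) for _ in range(10)]    # low resolution
fine = [quantize(true_value, 0.001) for _ in range(10)]    # high resolution

# A noisy input read at high resolution: plenty of decimal places,
# yet the readings still scatter -> high resolution, low precision.
noise = (-0.21, 0.14, -0.05, 0.18, -0.09)
noisy = [quantize(true_value + n, 0.001) for n in noise]
```

The coarse readings all collapse to 2.0 (no amount of repetition recovers the lost digits), and the spread of the noisy readings dwarfs the 0.001 step, showing that resolution only caps precision; it does not deliver it.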
|Original image by Robert Sharp: http://www.flickr.com/photos/14772187@N00/4446920696|
A picture, specifically of the digital variety, consists of a matrix of pixels. The resolution of an image is basically the number of pixels in the image, with one caveat: the effective resolution of an image is the smallest number of pixels the data has ever been. In the case of the two images above, both are 600×900 pixels, but the right-hand one was scaled down to 50×75 pixels before being scaled back up to 600×900, so it has an effective resolution of 50×75. If we define the precision of the image as the level of detail that can be distinguished, then it is readily evident that the precision of the second image is very low compared to the first. Details like the 10 on the door, the pattern of the brickwork, and the ornate lamp are unresolvable. By upscaling the low-resolution version to match the original, we’ve created a case in which the actual image has a high resolution but a low precision. It’s a constructed example, of course, but there are real-world equivalents, like when a photo is poorly focused. As we’ve seen with load cells, these problems don’t just occur for images; they can occur in any data retrieved with a sensor.
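The same downscale-then-upscale trick is easy to reproduce in one dimension; think of it as a single row of pixels. This is a sketch of the idea, not actual image-processing code: shrinking averages detail away, and stretching back only repeats samples, so the distinct values never come back.

```python
def downscale(samples, factor):
    """Shrink by averaging each block of `factor` samples into one."""
    return [sum(samples[i:i + factor]) / factor
            for i in range(0, len(samples), factor)]

def upscale(samples, factor):
    """Stretch back by repeating each sample `factor` times."""
    return [s for s in samples for _ in range(factor)]

original = [0, 1, 2, 3, 4, 5, 6, 7]           # 8 distinct values
blurred = upscale(downscale(original, 4), 4)  # back to 8 samples
# blurred is the same length as the original, but only two distinct
# values survive: its effective resolution is 2, not 8.
```

Just as with the photo, `blurred` has the same nominal resolution as `original`, yet most of the detail is gone for good.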
An interesting Phidgets-specific example of this occurred because of an ambiguously named API function we used to have called sensitivity. It was not, in fact, controlling the sensitivity, but setting a sensor value change threshold that needed to be exceeded before the sensor change event would occur. Our clients thought that setting the sensitivity to 1000 would give them the maximum resolution, when in reality it set the event trigger resolution to a minimum. Unless the value changed from 0 to 1000 or vice versa, no change would be recorded. In effect, they had set the resolution to be so low it was binary! We have since fixed this ambiguity in our API, but not before getting a good example of why resolution can really matter.
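A small simulation makes the failure mode concrete. This is a sketch of how such a change trigger behaves, not the actual Phidgets implementation: an event fires only when the value has moved at least `change_trigger` away from the value reported by the last event.

```python
def change_events(readings, change_trigger):
    """Simulate a change-trigger threshold (the setting that was
    misleadingly named 'sensitivity'): a change event fires only when
    the value has moved at least `change_trigger` since the last event."""
    events = [readings[0]]  # the initial value is always reported
    last = readings[0]
    for value in readings[1:]:
        if abs(value - last) >= change_trigger:
            events.append(value)
            last = value
    return events

# A sensor sweeping across its 0-1000 range and back.
readings = [0, 120, 250, 400, 980, 1000, 480, 0]
```

With the trigger set to 1000, only full-scale swings produce events, so the output is effectively binary; with the trigger at 1, essentially every change is reported.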
Hopefully that helps clear up the differences between the related but distinct concepts of accuracy, precision, and resolution.