When it comes to measuring forces, we here at Phidgets are of the opinion that load cells are best. We do sell thin film force sensors, and understand that they are required for certain applications that lack the room for a load cell and do not require a precise force or weight value. However, load cells are usually better. A few of the issues that thin film force sensors have, and load cells do not, are:
- Thin film sensors do not typically have a resolution better than 2% of their full scale (for example, a sensor that measures up to 2kg of force is only accurate to within +/-0.04kg)
- Measurements made with thin film sensors are not reliably repeatable
- Thin film sensors experience high levels of drift while under a load.
Load cells, on the other hand, consist of a precisely machined block of aluminum or steel which has been designed to respond to a specific class of forces while having no response to others. For example, many beam style load cells are made to react when a shear force is applied, while bending forces don't register. They do this by having strain gauges attached in specific positions to respond to the small distortions in the metal structure. The result is that they can reliably resolve even very small forces.
There are a lot of options for configuring data collection from the load cell once it has been connected to a Phidget Bridge. Two of the most important are the gain and the data rate. In general the gain should be set to 128, the highest setting. This maximizes the resolution of the setup, so you will detect the smallest changes possible. The data rate, on the other hand, will affect the amount of noise in your data. At slower data rates the Phidgets software effectively takes an average for you, which is handy if you plan to use the data in real time and aren't just logging the data for later processing and statistical analysis like we are.
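As a rough sketch, the two settings discussed above map onto two calls in the Phidget22 Python library. The class and method names below are from our recollection of that API and should be checked against the official documentation before use; this is a hardware configuration fragment, not a runnable standalone program.

```python
# Sketch: configuring a Phidget Bridge channel for a load cell.
# Names (VoltageRatioInput, setBridgeGain, setDataInterval) are assumed
# from the Phidget22 Python API -- verify against the official docs.
from Phidget22.Devices.VoltageRatioInput import VoltageRatioInput
from Phidget22.BridgeGain import BridgeGain

ch = VoltageRatioInput()
ch.openWaitForAttachment(5000)                 # wait up to 5s for the bridge

ch.setBridgeGain(BridgeGain.BRIDGE_GAIN_128)   # maximum gain, maximum resolution
ch.setDataInterval(1000)                       # one averaged data point per second

print(ch.getVoltageRatio())                    # raw ratio; scale to grams via calibration
```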
Let’s take a look at the effect of data rate on the noise. We’ve selected a 0-5kg load cell, and set it up for some data collection. First we have a chart with data points at the maximum rate of a Phidget Bridge; every 8ms.
Next we have a chart where the data rate is every second. The Phidget Bridge is still taking data samples every 8ms, but since we’ve told it we only want data every second it averages the 125 sample points for that second into one easy and convenient data point.
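The averaging the bridge performs when you slow the data rate can be sketched in a few lines of plain Python. The noise level and reading below are synthetic stand-ins, chosen only to illustrate the effect; the grouping arithmetic (1000ms / 8ms = 125 samples per point) matches the text.

```python
import random

def downsample(samples, n=125):
    """Average consecutive groups of n samples into one data point,
    mimicking a bridge reporting 1000ms points built from 8ms samples."""
    return [sum(samples[i:i + n]) / n
            for i in range((len(samples) // n) * n // n * 0, len(samples) - n + 1, n)]

random.seed(0)
true_value = 239.35  # grams (hypothetical steady load)
raw = [true_value + random.gauss(0, 0.3) for _ in range(1250)]  # 10s of 8ms samples
averaged = downsample(raw)

print(len(averaged))                                # 10 one-second points
print(max(abs(x - true_value) for x in raw))        # raw scatter
print(max(abs(x - true_value) for x in averaged))   # noticeably tighter
```

Averaging n independent noisy samples shrinks the scatter by roughly the square root of n, which is why the 1000ms chart looks so much cleaner than the 8ms one.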
It’s immediately obvious that the data resulting from the slower data rate is much less scattered. This means that if you were trying to resolve a very small signal, you’d have much better luck with the second graph. Of course, the signal is still present in the first graph; you’ll just need to perform more statistical analysis (in software) to find it.
Last, but not least, we have the amount that each data point in our 1000ms sample rate data set deviates from the average value of the entire data set. This value has then been converted to grams to show us the minimum difference in weight that can be resolved with the load cell. In the simplest and least statistically rigorous case, the data points all appear to be within +/-0.05g of the average value. This means that if you were to compare only two data points in the absence of statistics, you wouldn’t really be able to determine if they were different unless the difference was larger than 0.1g at the 1000ms data rate. One tenth of one gram is pretty precise given that the full range of our chosen load cell is 5kg. That means the minimum theoretical resolution of the 5kg load cell with data points taken every 1000ms, as a percentage of the full range, is a mere 0.002%!
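The 0.002% figure is straightforward to reproduce; the numbers below are taken directly from the text, and the helper function name is ours.

```python
def resolution_percent(resolvable_difference_g, full_scale_g):
    """Minimum resolvable difference expressed as a percentage of full scale."""
    return 100.0 * resolvable_difference_g / full_scale_g

# +/-0.05g of scatter means two lone points must differ by 0.1g
# before we can call them different.
print(resolution_percent(0.1, 5000))  # 0.002 (percent of the 5kg range)
```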
Of course, if you’re really looking for precision the advice of taking longer to make a measurement still stands; you just won’t be able to get a data interval longer than 1000ms from the Phidget directly. You’ll need to perform some statistics in your code, which provides the opportunity to use more advanced and rigorous methods as well.
For illustrative purposes, the data used in the chart above has been analyzed further using Gnumeric (the spreadsheet software the chart was created in). Using the functions built into the program, the average of all the above data points was found to be 239.350 once converted to grams, with a standard deviation of 0.023g. Finally, a 95% confidence interval was calculated to be within +/-0.006g of that average. This means that if you had the time to collect data for a full minute in each state you wished to compare, you could potentially resolve a difference of an incredibly low 0.012g on objects weighing as much as 5kg.
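The same analysis can be done in code instead of a spreadsheet. The sketch below uses Python's standard statistics module with a normal-approximation confidence interval (1.96 standard errors); the data set is synthetic, generated to resemble the mean and standard deviation reported above, since the original samples aren't reproduced here.

```python
import math
import random
import statistics

def confidence_interval_95(samples):
    """Half-width of a 95% confidence interval on the mean,
    using the normal approximation (1.96 standard errors)."""
    sd = statistics.stdev(samples)
    return 1.96 * sd / math.sqrt(len(samples))

# Synthetic stand-in for a minute of 1000ms data points
# (the real data had mean 239.350g, standard deviation 0.023g).
random.seed(1)
data = [random.gauss(239.350, 0.023) for _ in range(60)]

mean = statistics.mean(data)
half_width = confidence_interval_95(data)
print(f"{mean:.3f}g +/- {half_width:.3f}g")
```

The half-width shrinks with the square root of the sample count, so collecting for four minutes instead of one would roughly halve the resolvable difference again.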
The other method to achieve a smaller resolvable difference in weight is to use a load cell rated to a lower maximum load. This only works if you can sacrifice the ability to measure heavier loads, but if you knew you were never going to measure anything over 100g, it would be silly to use a 5kg load cell when you could use a 100g cell and get much more precise data. This shouldn’t come as a surprise, since 0.002% of 100g is obviously going to be less than 0.002% of 5kg.
If you can’t sacrifice the ability to measure heavier loads, and taking more time to resolve differences isn’t an option, then you have no choice but to acquire a more advanced load cell. The main downside to this is that the vendors who sell them, like Omega, will often charge in excess of $1000 for these products.
Another issue to keep an eye out for is drift. Drift is a little more difficult to spot and deal with than noise, because it systematically affects all of our sample points in the same way. In effect, it makes the value we read from the sensor drift away from the true value.
One of the biggest causes of drift in load cells is changes in temperature. To illustrate this dramatically, we blasted a load cell with the liquid from an inverted compressed air canister to rapidly freeze it to some temperature below 0˚C. You can see a sudden spike downwards while the load cell is hit by the freezing fluid, followed by an even larger spike upwards to above the original value. It then takes a substantial amount of time (longer than the 800s shown in this chart) before the load cell recovers to a value similar to where it started.
Another aspect to consider when looking at load cells is something called creep, and creep-recovery. These are, in effect, the time it takes for a load cell to settle onto the true value after large changes are made in the load. To illustrate this, here are two charts of a heavily loaded (overloaded, in fact) 5kg load cell. In the first we have the full scale chart. It looks pretty straightforward: the reading shoots almost straight up when the force is applied, then falls straight back to baseline when it’s removed. But what happens if we look closer at the baseline?
It’s really quite fascinating, and somewhat unexpected. As the force is removed, the reading shoots down below baseline, then slowly creeps back up to the ordinary unloaded reading over a span of about 1 minute. The exact reason for the readings to rebound to a point below baseline is unclear to us, and likely has something to do with the fact that we had overloaded the cell by about two and a half times. There may also be some odd behaviour originating from our makeshift mounting setup, which consisted of an M5 bolt and nut holding it to a piece of particle board. As interesting as the fine scale effects are, it’s important to note that they are very small, and probably won’t have any really meaningful impact on measurements unless you’re looking to resolve very small differences in forces that approach or exceed the full rating of the load cell.