High precision analog
Revision as of 01:26, 2 November 2012
Problem statement
We have a problem with the current SR_DF_ANALOG packets: the analog numbers are passed along as floats -- platform-specific versions of the C float type. The following summarizes what we need to solve:
Some numbers are just plain wrong
A floating point type is notoriously unable to store exact representations of many decimal numbers. This hits us hard. In one driver, the analog value is extracted from a device packet by dividing the raw value by 10, to account for the single digit after the decimal point that is always implied. A value of "354" turns into not 35.4, but 35.400002; 353 turns into 35.299999. Note that these numbers are representations of the floats with printf()'s default precision of 6 digits after the decimal point; using "%.20f" produces 35.39999999999999857891.
The numbers need to be stored and transmitted exactly as they were input by the driver.
Resolution
In the example above, the device always outputs a single digit after the decimal point; there is no more information beyond that digit. But we have no way of communicating that fact to the frontend. The float type doesn't store this information along with the value, and indeed changes the input value in such a way that it can't even be truncated so as to make the resolution obvious (or irrelevant). There is simply no way that a frontend can receive 35.400002 and conclude that 35.4 is the proper number to display.
While a value such as "35.4000" might reasonably be displayed as "35.4", it is nevertheless helpful (and common, in test & measurement gear) to display numbers at their full resolution; this gives a visual indication of that resolution. A frontend shouldn't have to assume or truncate anything; it should have exact information about what the device provided.
We need precision information transmitted along with the value.
Accuracy
The resolution of the value supplied by a device -- it may supply 35.4, but have a resolution that would allow 35.400 -- can in turn differ from how accurately the device can make a measurement. This accuracy is rarely supplied by the device along with the measurement, but is commonly stated in the device's specifications. For example, the Fluke 187 has a DC voltage measurement specification stated as
Accuracy ± (0.025%+5) Max. Resolution 1 µV
This defines the accuracy of the measurement as 0.025% of the value, increased by 5 "counts" (least significant digits on the display), up or down from the given value. For example, a measurement of 1.3259 has an accuracy of ±(0.00025 × 1.3259 + 5 × 0.0001) = ±0.000831475, making it somewhere between roughly 1.32507 and 1.32673 -- so the last displayed digit lies largely within the error band. The maximum resolution, however, is 0.000001 (1 µV); note that the displayed value is still limited to four digits after the decimal point!
Hardware accuracy is known to the driver through knowledge of which device it's communicating with, and should be transmitted to the frontend -- including "extra counts" accuracy specifications.
Displayed value
The value displayed on the device may be limited by the number of digits available on a 7-segment display, and may well have less resolution than the device can handle. What libsigrok receives from the device may thus be a number with extra digits tacked on, compared to what's on the display. This may or may not be beyond the resolution or accuracy of the device. Nevertheless, libsigrok receives the number and will transmit it to the frontend. However, it may be useful for a frontend to know how many digits the device actually displays, if it chooses to mimic that display in some way.
Should the displayed value resolution be transmitted to the frontend?