<div style="background-color:lime">
'''Status''': The '''[[High precision analog#Exact representation of raw data and metadata|Exact representation of raw data and metadata]]''' proposal below has been decided upon and is currently being implemented in [[libsigrok]] as of 12/2014.
</div>

== Problem statement ==

We have a problem with the current SR_DF_ANALOG packets: the analog numbers are passed along as floats -- platform-specific versions of the C <code>float</code> type. The following summarizes what we need to solve:

=== Some numbers are just plain wrong ===

A floating-point type is well known to be unable to store exact representations of some numbers, as explained [http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems here]. This hits us hard. In one driver, the analog value is extracted from a device packet by dividing the derived value by 10, to account for the single implied digit after the decimal point. A value of "354" turns into not 35.4, but 35.400002; 353 turns into 35.299999. Note that these numbers are representations of the floats with <code>printf()</code>'s default precision of 6 digits after the decimal point; using <code>".20f"</code> produces 35.39999999999999857891.
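For illustration, here is a small self-contained C snippet (plain C, not sigrok code) that reproduces the numbers above:

 #include <stdio.h>
 
 int main(void)
 {
     // The driver receives "354" and divides by 10 to place the implied
     // decimal point -- but 35.4 has no exact binary representation.
     float f = 354 / 10.0f;
     printf("%f\n", f);             // prints 35.400002
     printf("%f\n", 353 / 10.0f);   // prints 35.299999
 
     // Even with double precision the value is only an approximation:
     printf("%.20f\n", 354 / 10.0); // prints 35.39999999999999857891
     return 0;
 }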
'''The numbers need to be stored and transmitted exactly as they were input by the driver.'''

=== Resolution ===

==== Understanding precision ====

Precision is casually defined as how close together a series of measurements of the same quantity are. However, this definition is confusing when one speaks of the precision of a single measurement. A meter stick with markings 1 cm apart can only produce a precision of 1 cm. For example, a measurement of 23 cm implies that the value is anywhere between 22.5 cm and 23.5 cm. On the other hand, a stick with markings 1 mm apart will produce a more precise measurement: 23.0 cm implies that the value is anywhere between 22.95 cm and 23.05 cm.

It might be tempting to think of these measurements as:

 1 cm precision: 23 cm ± 0.5 cm   (WRONG!)
 1 mm precision: 23 cm ± 0.05 cm  (WRONG!)

The correct way to denote precision is via the number of significant figures presented with the measurement:

 1 cm precision: 23 cm    (CORRECT - implies the measurement is between 22.5 cm and 23.5 cm)
 1 mm precision: 23.0 cm  (CORRECT - implies the measurement is between 22.95 cm and 23.05 cm)

It is extremely important to note that '''trailing zeroes are significant'''. The precision is inferred from the number of significant digits.

==== The sigrok dilemma ====

Returning to our problem, a floating-point data type has no notion whatsoever of the precision of the measurement. It has an intrinsic precision related to the size (number of bits) of its mantissa, which has nothing to do with the precision of the measurement.

Consider a device which always outputs a single digit after the decimal point; there is no more information beyond that digit. We have no way of communicating that fact to the frontend. The <code>float</code> type doesn't store this information along with the value, and indeed changes the input value in such a way that it can't even be truncated so as to make the resolution obvious (or irrelevant). There is simply no way for a frontend to receive 35.400002 and conclude that 35.4 is the proper number to display. Removing trailing zeroes is not a solution either, as a value of 35.0 would incorrectly be truncated to 35.

While a value such as "35.4000" might reasonably be displayed as "35.4", it is nevertheless helpful (and common in test & measurement gear) to display numbers at their full resolution; this gives a visual indication of that resolution. A frontend shouldn't have to assume or truncate anything; it should have exact information about what the device provided.
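To make this concrete, here is a tiny sketch (plain C, hypothetical -- it simply assumes the driver also reported "one digit after the decimal point") of what a frontend could do with that information:

 #include <stdio.h>
 
 int main(void)
 {
     int digits = 1; // resolution reported by the driver: one digit after the decimal point
 
     printf("%.*f\n", digits, 35.400002); // prints 35.4 -- the float artifact disappears
     printf("%.*f\n", digits, 35.0);      // prints 35.0 -- the significant trailing zero is kept
     return 0;
 }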
'''We need precision information transmitted along with the value.'''

=== Accuracy ===

'''Do not confuse accuracy with precision.'''

==== Understanding accuracy ====

Accuracy refers to how close a measurement is to the true value of the quantity being measured. A very common (and correct) way to describe accuracy is the uncertainty of a measurement.

===== Offset error =====

Let's return to our meter stick example. The meter stick with 1 mm markings might be missing a 2 cm chunk. It cannot be accurate; the measurement in that case would have to be presented as:

 23.0 cm ± 2.0 cm

Note that although the precision is 0.05 cm, the accuracy is completely different.

===== Linearity error =====

A meter stick might, for example, be calibrated at 25 °C, but specified to be accurate from -10 °C to 60 °C. Due to thermal contraction and expansion, at -10 °C the 23.0 mark might be closer to 22.8 cm, while at 60 °C it might be closer to 23.3 cm. In this example, temperature is an external factor which affects the final measurement. It is impractical or impossible to estimate this error exactly, especially since it grows linearly with the size of the measurement. It is therefore usually covered by the device's specifications as an unknown. In this case it would be reasonable for the accuracy to be specified as

 1.5 % of measurement

so that

 at 23.0 cm, the value is 23.0 ± 0.3 cm
 at 86.0 cm, the value is 86.0 ± 1.2 cm

Again, the accuracy and precision are not the same.

===== Nonlinear error =====

Nonlinear errors are errors which are neither offset errors nor linearity errors. They are beyond the scope of this article; a device's stated accuracy usually accounts for this type of error.

===== Calibration =====

Generally, the accuracy specified by the manufacturer is warranted only as long as the device is calibrated (compared against a known source, though not necessarily adjusted) regularly. Some high-end multimeters store the last calibration date and possibly other parameters relevant for high-performance measurements, e.g. temperature, power-on time, or number of power cycles. It would probably be best to add such information to the data, if available.
==== The sigrok dilemma ====

The value supplied by a device may differ from its resolution -- it may supply 35.4, but have a resolution that makes this 35.400 -- and this can still differ from how accurately the device can make a measurement. This accuracy is rarely supplied by the device along with the measurement, but is commonly stated in the device's specifications. For example, the [[Fluke 187/189|Fluke 187]] has a DC voltage measurement specification stated as

 Accuracy ± (0.025%+5)
 Max. resolution 1 µV

This defines the accuracy of the measurement as 0.025% of the value plus 5 "counts" (least significant digits on the display). The true value can be anywhere up or down by this amount from the displayed value.

The given '''maximum resolution is not the same for every range'''. The [[Fluke 187/189|Fluke 187]] has a resolution of 50000 counts, so in order to get the maximum resolution of 1 µV, the device must be set to the 50 mV (50000 µV) range. Since the device can only distinguish 50000 values, a measurement of 1.3259 V, for example, will have a resolution of 0.1 mV; the device cannot distinguish voltage differences below that value. If it could, the measurement would read 1.325900, and the device would have to be a 5000000-count (5 million count) device.

Expanding on our previous example, a measurement of 1.3259 V has an uncertainty of ±0.000831475 V -- placing it somewhere between 1.325068525 V and 1.326731475 V. The resolution, however, is 0.0001 V (0.1 mV), so the extra digits in that uncertainty are meaningless. Note that the displayed value is still limited to four digits after the decimal point! The correct way to communicate this measurement is:

 1.3259 V ± 0.0009 V

The obtained value for the uncertainty is almost always rounded up.

'''Hardware accuracy is known to the driver through knowledge of which device it's communicating with, and should be transmitted to the frontend -- including "extra counts" accuracy specifications.'''
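As an illustrative sketch (hypothetical helper, not existing libsigrok API) of how a driver could turn such a "±(percent + counts)" specification into an absolute uncertainty for a single reading:

 #include <math.h>
 #include <stdio.h>
 
 // Uncertainty = gain percentage of the reading, plus the extra counts
 // expressed at the current range's resolution.
 static double dmm_uncertainty(double reading, double gain_pct, int counts,
                               double range, int full_scale_counts)
 {
     double resolution = range / full_scale_counts; // e.g. 5 V / 50000 = 0.0001 V
     return fabs(reading) * gain_pct / 100.0 + counts * resolution;
 }
 
 int main(void)
 {
     // Fluke 187 DC voltage example from above: ±(0.025% + 5), 5 V range, 50000 counts.
     printf("1.3259 V ± %.9f V\n", dmm_uncertainty(1.3259, 0.025, 5, 5.0, 50000));
     // Prints "1.3259 V ± 0.000831475 V"; rounding up at the display
     // resolution gives the ±0.0009 V quoted above.
     return 0;
 }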
=== Displayed value ===

The value displayed on the device may be limited by the number of digits available on a 7-segment display, and may well have less resolution than the device can handle. What libsigrok receives from the device may thus be a number with extra digits tacked on, compared to what's on the display. This may or may not be beyond the resolution or accuracy of the device. Nevertheless, libsigrok receives the number and will transmit it to the frontend. It may be useful for a frontend to know how many digits are actually displayed on the device, however, if it chooses to mimic that display in some way.

'''Should the displayed value resolution be transmitted to the frontend?'''

The displayed value should always be transmitted to the frontend unaltered, unless the device datasheet or manual specifies that the precision of the measurement is higher than the one displayed and the full precision is communicated through the PC interface. sigrok should never assume that the precision is higher than that of the displayed value unless explicitly documented by the manufacturer.
== Proposed solutions ==

=== Exact decimal representation ===

 struct sr_decimal {
     int64_t value;
     int32_t pow_ten;
 };

The <code>sr_decimal</code> is an exact representation of a measurement. The significand (mantissa) is an integer scaled by a power of ten, which makes it immune to the issues a base-2 representation has. Its value is <code>value * pow(10, pow_ten)</code>.

See https://gitorious.org/~mrnuke/sigrok/libsigrok/commits/acc for details of the implementation proposal.
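For illustration only -- these helpers are hypothetical and not part of the patch set linked above -- converting such a value to <code>double</code> for math, or printing it with the resolution implied by the exponent, might look like this:

 #include <math.h>
 #include <stdio.h>
 #include <stdint.h>
 
 struct sr_decimal {
     int64_t value;
     int32_t pow_ten;
 };
 
 // Lossy conversion, only needed when a frontend wants to do floating-point math.
 static double sr_decimal_to_double(const struct sr_decimal *d)
 {
     return (double)d->value * pow(10.0, d->pow_ten);
 }
 
 int main(void)
 {
     struct sr_decimal d = { .value = 354, .pow_ten = -1 }; // exactly 35.4
 
     // The exponent tells us how many digits after the decimal point are
     // significant, so trailing zeroes survive (e.g. 350 * 10^-1 -> "35.0").
     printf("%.*f\n", d.pow_ten < 0 ? -d.pow_ten : 0, sr_decimal_to_double(&d));
     return 0;
 }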
'''Advantages:'''
* Exact representation of the measurement
* Immune to floating-point inconsistencies and rounding issues

'''Disadvantages:'''
* Very difficult to work with
* Not really usable as a math type (usually requires conversion to double first)
* Requires a large amount of memory -- usually more than twice the size of the equivalent ASCII representation
* Requires complex printf logic
* Signaling conditions such as NAN or INF requires helper functions
* Wastes storage space -- most of the bytes will usually be 0
* Suffers from the "reinventing the wheel" problem -- requires helper functions for almost everything
* Requires thoughtful and intrusive refactoring of libsigrok
* Leading zeros are not handled correctly
=== Inexact floating-point representation with precision metadata ===

 struct sr_measurement {
     double value;
     uint8_t sigfig;
 };

The <code>sr_measurement</code> is an inexact representation of a measurement. It relies on the fact that a <code>double</code> can reliably store about 15 significant decimal digits before rounding issues appear -- more digits than any real device produces -- so it can carry any realistic measurement reliably. <code>sr_measurement</code> contains a field specifying the number of significant digits -- that is, the number of digits that should be printed for a correct representation of the measurement.

See https://gitorious.org/~mrnuke/sigrok/libsigrok/commits/prec for details of the implementation proposal.
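A minimal rendering sketch (hypothetical; it only assumes the struct above) showing how the significant-figure count drives the output:

 #include <stdio.h>
 #include <stdint.h>
 
 struct sr_measurement {
     double value;
     uint8_t sigfig;
 };
 
 int main(void)
 {
     struct sr_measurement m = { .value = 35.4, .sigfig = 3 }; // device sent "35.4"
 
     // "%#.*g" prints exactly 'sigfig' significant digits; the '#' flag keeps
     // trailing zeroes that plain %g would strip.
     printf("%#.*g\n", m.sigfig, m.value); // prints "35.4"
 
     m.value = 35.0;
     printf("%#.*g\n", m.sigfig, m.value); // prints "35.0", not "35"
     return 0;
 }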
'''Advantages:'''
* More natural to work with -- the value can be extracted as a double with minimal overhead
* Signaling of conditions such as NAN or INF is identical to that of a plain <code>double</code>
* Makes more efficient use of storage space
* Requires only a few helper functions -- no "reinventing the wheel"
* Requires minimal refactoring of libsigrok

'''Disadvantages:'''
* Inexact representation of the measurement
* May suffer from floating-point inconsistencies and rounding issues -- should not be an issue for most devices
* printf logic requires use of the log10 function
* Leading zeros are not handled correctly
=== Exact representation of raw data and metadata ===

This is a somewhat different proposal. Rather than defining a datatype to represent any single value with its own metadata, the concept is to use an array of data in a primitive type (such as may come directly from a device as raw data), along with metadata that describes the encoding and precision of those values.
 struct sr_analog {
     // Raw values
     void *data;
     uint64_t count;
     // Encoding of raw values
     uint8_t unitsize;
     gboolean is_signed;
     gboolean is_float;
     gboolean bigendian;
     // Precision of raw values
     uint8_t precision;
     gboolean decimal;
     // Relationship to true values
     struct sr_rational scale;
     struct sr_rational offset;
 };
The fields are as follows:
* '''data''' - pointer to the raw, encoded values.
* '''count''' - number of values.
* '''unitsize''' - number of bytes used in the encoding of each individual value.
* '''is_signed''' - whether the encoding is signed or unsigned (two's complement format is to be used; drivers have to convert the data if it's not already in two's complement format).
* '''is_float''' - whether the encoding is floating-point or integer.
* '''bigendian''' - whether the encoding is big-endian or little-endian (mixed endianness is not supported; drivers have to convert such data).
* '''precision''' - the precision of the raw values, in decimal or binary digits, or 0 if unknown.
* '''decimal''' - whether precision is specified in decimal or binary digits.
* '''scale''' - scale factor that must be applied to the raw values to obtain true units.
* '''offset''' - offset that must be applied, after scaling, to obtain true values.
==== Examples ====

* A digital multimeter sends values in ASCII strings such as "-0.345". These have 4 digits of decimal precision, so they can be stored in 16-bit signed integers. The single value "-0.345" could be stored with metadata as follows:

 int16_t value = -345;
 struct sr_analog metadata = {
     .data = &value,
     .count = 1,
     .unitsize = 2,
     .is_signed = TRUE,
     .is_float = FALSE,
     .bigendian = BIG_ENDIAN, // defined TRUE or FALSE according to platform.
     .precision = 4,
     .decimal = TRUE,
     .scale = {1, 1000},
     .offset = {0, 1},
 };
* An oscilloscope sends a block of raw 14-bit ADC values encoded as 16-bit unsigned integers. The meaning of these depends on the scope settings. The raw values have 14 binary digits of precision. They can be stored as follows:

 struct sr_analog metadata = {
     .data = values,    // read directly from device.
     .count = XXX,      // driver provides this information.
     .unitsize = 2,
     .is_signed = FALSE,
     .is_float = FALSE,
     .bigendian = XXX,  // driver provides this information.
     .precision = 14,
     .decimal = FALSE,
     .scale = XXX,      // driver provides this information.
     .offset = XXX,     // driver provides this information.
 };
* An input format contains blocks of analog data as 32-bit little-endian floats. The file format does not define the precision.

 struct sr_analog metadata = {
     .data = values,    // read directly from file.
     .count = XXX,      // input module provides this information.
     .unitsize = 4,
     .is_float = TRUE,
     .bigendian = FALSE,
     .precision = 0,    // unknown.
     .scale = XXX,      // input module provides this from metadata in the file.
     .offset = {0, 1},
 };
'''Advantages'''
* Exact representation of the measurement.
* Efficient storage for arrays of values.
* Precision can be represented correctly for both decimal and binary data. Unknown precision can also be represented.
* Where e.g. raw ADC values are the input, precision applies correctly to the raw values, independent of implicit scaling and offset.
* Allows direct use of raw data from a device in many scenarios, minimizing the work to be done in the driver at capture time.
* Math operations can be done efficiently on the raw values, with the metadata for the result updated separately.

'''Disadvantages'''
* Non-trivial for client code to process directly -- helper functions would have to be provided, e.g. sr_analog_to_double().
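To give an idea of what such a helper might look like, here is a rough sketch (hypothetical, not the actual libsigrok API; it handles only little-endian integer encodings on a little-endian host, and assumes <code>sr_rational</code> is a numerator/denominator pair <code>p</code>/<code>q</code>):

 #include <stdint.h>
 #include <string.h>
 
 static int sr_analog_to_double(const struct sr_analog *a, uint64_t i, double *out)
 {
     const uint8_t *p = (const uint8_t *)a->data + i * a->unitsize;
     uint64_t u = 0;
     double raw;
 
     if (i >= a->count || a->is_float || a->bigendian ||
         a->unitsize < 1 || a->unitsize > 8)
         return -1; // not handled in this sketch.
 
     memcpy(&u, p, a->unitsize); // assemble the value (little-endian host assumed).
 
     if (a->is_signed) {
         // Sign-extend from unitsize * 8 bits up to 64 bits.
         int shift = 64 - 8 * a->unitsize;
         raw = (double)((int64_t)(u << shift) >> shift);
     } else {
         raw = (double)u;
     }
 
     // Apply the rational scale factor, then the offset, to obtain true units.
     *out = raw * a->scale.p / a->scale.q + (double)a->offset.p / a->offset.q;
     return 0;
 }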
== Resources ==

* [https://en.wikipedia.org/wiki/Accuracy_and_precision Wikipedia: Accuracy and precision]
* Fluke: Understanding specifications for precision multimeters (PDF)
* National Instruments: Digital Multimeter Measurement Fundamentals
* Keithley: Low-level measurements handbook (PDF)
* Agilent: How to select a Handheld DMM that is RIGHT for you (short info on "resolution, digit, and accuracy") (PDF)