Float rounding as seen in Pelion Device Details

A couple of comments on float rounding in Pelion Device Management.

I built a magnetometer that reads out at nanotesla (1e-9 T) resolution.

As per the LwM2M registry, the reported sensor value should live at something like /3314/1/5702 for the X axis, as a float in Tesla units. A C++ float provides about 7 decimal digits of precision, so that should work. Earth's magnetic field is about 50000 nT around here, so when the sensor itself reports 51423 nT I expected to see values like -5.1423E-5. I have a nice USB serial interface to the magnetometer, so I can read the actual result off the sensor, which is stored in a C++ double and reported as above. In other words, this is a very roundabout way of saying that I passed a C++ double containing -5.1423e-5 to set_value_float() on an M2M resource.
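
For reference, the device-side update path looks roughly like the sketch below. This is a minimal reconstruction of what I'm doing rather than a copy-paste of my firmware, and the header paths, create_resource argument list and object-list plumbing are from memory, so treat the details as approximate:

```cpp
#include "mbed.h"
#include "MbedCloudClient.h"   // pulls in M2MInterfaceFactory, M2MResource, etc.

static M2MObjectList m2m_obj_list;
static M2MResource  *mag_x_res;

// Declare /3314/0/5702 (X Value) as a FLOAT resource.
void setup_magnetometer_resources() {
    mag_x_res = M2MInterfaceFactory::create_resource(
        m2m_obj_list, 3314, 0, 5702,
        M2MResourceInstance::FLOAT, M2MBase::GET_ALLOWED);
}

// Called whenever the sensor produces a new reading, e.g. tesla = -5.1423e-5.
void report_x_axis(double tesla) {
    printf("X = %.9f T\n", tesla);                          // what I see on the USB serial link
    mag_x_res->set_value_float(static_cast<float>(tesla));  // what gets reported to Pelion
}
```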

The rounding I see in the web interface is a little confusing.

The Pelion web interface for /3314/100/5702 reports -0.000052 instead of the actual value. It seems to round to six decimal places, effectively decimal parts per million, rather than reporting the actual C++ float (or double) value.

If I pull multiple values by examining /3314/100, I get a list of items including 5702 as above. However, instead of the ppm-rounded value above, or the floating point number I would expect, I get the value rounded to ppm and then shown with the usual binary approximation of a float: something like -0.00005199999941396527. That is a perfectly good floating point number, it just appears to have been rounded to ppm before display.
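
Out of curiosity I checked what the nearest single-precision float to the ppm-rounded value looks like when printed at higher precision. This is plain desktop C++, nothing Pelion-specific, just to show where digits like those could come from:

```cpp
#include <cstdio>

int main() {
    double ppm_rounded = -0.000052;                        // value after rounding to ppm
    float  as_float    = static_cast<float>(ppm_rounded);  // nearest single-precision float
    printf("%.20f\n", static_cast<double>(as_float));
    // On an IEEE 754 machine this prints -0.00005199999941396527, the same
    // digits the list view shows, which is consistent with "round to ppm,
    // store as a float, then print the float exactly".
    return 0;
}
```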

Another unusual problem relates to LwM2M resource IDs such as the OMA-defined 5508 "Min X Value", which according to the standard is a float.

In the web interface, if I examine device details at /3314/100/5508, I see a float rounded to fixed decimal parts per million, as above. So the minimum X-axis magnetometer reading shows as -0.000033 T, about thirty-three thousand nanotesla (the actual floating point value passed from the sensor was -0.000033075 Tesla, but close enough).

If I examine device details at /3314/100, I get the full list of magnetometer results including resource 5508, but for some reason it displays as -1207277189, which is probably the float's bits being displayed as a long int. Then again, resource 5510, which is supposed to be the OMA-assigned resource for the Y-axis minimum strength, displays as some kind of UTF-8 string, which is probably yet another interpretation of the floating point value actually delivered.
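
As a quick sanity check on the long-int theory (again plain desktop C++, not my firmware), reinterpreting the bits of the float nearest to the ppm-rounded value reproduces the number the list view shows:

```cpp
#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
    float rounded = -0.000033f;             // the ppm-rounded Min X Value
    int32_t bits;
    memcpy(&bits, &rounded, sizeof bits);   // copy out the raw IEEE 754 bit pattern
    printf("%ld\n", (long)bits);            // prints -1207277189 here
    return 0;
}
```

Interestingly it's the ppm-rounded -0.000033 that produces -1207277189, not the original -0.000033075, which also fits the "rounded to ppm before display" theory.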

Officially, per the OMA LwM2M spec, I should be reporting magnetometer data as floating point Tesla, but it's easy enough to work around this by abandoning the standard and reserved IDs. I'll probably report magnetic field strength in nT instead of T to avoid floats being rounded to parts per million. Or, since my hardware sensor only resolves to the nT level anyway, I'll skip floats entirely and report integer nT values. The workaround is really not a big deal, although the way floats are handled and displayed was very unexpected.
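
A minimal sketch of the integer-nT variant, reusing the same object list and create_resource call as the earlier sketch; the resource ID 26241 is just a made-up vendor-specific placeholder, and the int64_t set_value() overload is from memory:

```cpp
#include <cmath>
#include <cstdint>

static M2MResource *mag_x_nt_res;

// Vendor-specific /3314/0/26241: X value in integer nanotesla (placeholder ID).
void setup_nt_resource() {
    mag_x_nt_res = M2MInterfaceFactory::create_resource(
        m2m_obj_list, 3314, 0, 26241,
        M2MResourceInstance::INTEGER, M2MBase::GET_ALLOWED);
}

void report_x_axis_nt(double tesla) {
    // -5.1423e-5 T -> -51423 nT; an integer, so nothing is lost to ppm rounding
    const int64_t nanotesla = (int64_t)std::llround(tesla * 1e9);
    mag_x_nt_res->set_value(nanotesla);
}
```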

Anyway, I just thought this rounding of floats was interesting and noteworthy, and perhaps there's something obvious I'm missing when I do an M2MInterfaceFactory::create_resource of a M2MResourceInstance::FLOAT where it's somehow implied to be decimal ppm or something.

I'm literally passing the same variable to ->set_value_float() as I am to printf in my debugging, so I'm clearly not rounding on my side…

Everything else works perfectly; this is really the only problem I've run into!