### ESP8266 Arduino - float or double?


Moderator: igrr

### ESP8266 Arduino - float or double? #83057

By llrjt100
#83057 After searching the web I'm still unclear whether I should use float or double in my code. I understand float is 4 bytes and double is 8 bytes, but I can't find reliable information about the range and accuracy of these data types. I don't want to get into fixed-point integer arithmetic. Any guidance would be appreciated.

### Re: ESP8266 Arduino - float or double? #83058

By btidey

http://www.mathcs.emory.edu/~cheung/Cou ... float.html

Double operations take significantly longer than float operations on the ESP8266, which has no hardware floating-point unit, so only use doubles if you need the extra precision or range.

Try to avoid divide operations as they take a lot longer than multiplies.

Using integer arithmetic is best of all and is not that difficult if you just scale things appropriately.

### Re: ESP8266 Arduino - float or double? #83061

By llrjt100
#83061 Thanks for your link - I'd found similar and was curious as to the meaning of 'about' in the reference to accuracy.

I've found another https://en.wikipedia.org/wiki/Floating-point_arithmetic#Accuracy_problems which explains the precision aspect. What I've been trying to get a handle on is the actual effect in terms of accuracy of using float or double in my calculations.

### Re: ESP8266 Arduino - float or double? #83064

By picstart1
#83064 Mathematics can be arbitrarily accurate: pi has been computed to 100 million digits. Reality is less accurate. Our most accurately measured quantities, like the fine-structure constant, are known to only about 12 decimal places, which is like seeing a human hair on the surface of the moon. The issue is not a single Arduino calculation but the cumulative effect that occurs when errors add to errors.
Anyway, most things around us don't need more than 4 significant digits of precision to be correct enough.
Base conversion between binary and decimal often confuses programmers, who will incorrectly call the discrepancies rounding errors. They are perfectly correct notational artifacts of converting between bases.
0.1 is exact in decimal notation but only approximate in binary notation.