http://www.mathcs.emory.edu/~cheung/Cou ... float.html
Doubles will take significantly more time than floats on many targets, so only use them if you need the extra precision or range.
Try to avoid divide operations, as they take a lot longer than multiplies.
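A common way to act on that tip is to hoist a loop-invariant divide out and multiply by its reciprocal instead. A minimal sketch (the divisor 7.5 and the value list are made-up examples, not from the original post):

```python
values = [1.0, 2.0, 3.0]
divisor = 7.5  # hypothetical loop-invariant divisor

# One divide per element:
slow = [v / divisor for v in values]

# One divide total, then cheap multiplies:
reciprocal = 1.0 / divisor
fast = [v * reciprocal for v in values]

# The two results agree to within floating-point rounding.
assert all(abs(a - b) < 1e-12 for a, b in zip(slow, fast))
```

Note the trade-off: the reciprocal is itself rounded, so the results can differ in the last bit or two; for most code that's acceptable, but it is not a bit-exact transformation.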
Using integer arithmetic is best of all and is not that difficult if you just scale things appropriately.
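The scaling idea can be sketched with the classic currency example: pick a fixed scale factor, do all arithmetic on scaled integers, and convert back only for display. The prices below are made-up values for illustration:

```python
# Fixed-point sketch: store money as integer cents (scale = 100)
# so additions stay exact, with no binary-fraction rounding.
price_cents = 1999   # $19.99 scaled by 100
tax_cents = 160      # $1.60 scaled by 100

total_cents = price_cents + tax_cents  # exact integer arithmetic

# Convert back to dollars only at the output boundary.
dollars, cents = divmod(total_cents, 100)
print(f"${dollars}.{cents:02d}")  # → $21.59
```

The same pattern works for sensor readings, coordinates, and so on: choose a scale that covers the precision you actually need, and keep everything as integers in between.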
I've found another reference, https://en.wikipedia.org/wiki/Floating-point_arithmetic#Accuracy_problems, which explains the precision aspect. What I've been trying to get a handle on is the actual effect on accuracy of using float or double in my calculations.
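One way to see that effect directly is to round-trip a value through IEEE-754 single precision and compare it with the double-precision original. A sketch using only the standard library (Python's own `float` is a 64-bit double, so packing with format `"f"` simulates a 32-bit float):

```python
import struct

def to_float32(x):
    """Round x to the nearest 32-bit float, returned as a Python double."""
    return struct.unpack("f", struct.pack("f", x))[0]

x = 1.0 / 3.0
print(f"double: {x:.17f}")              # roughly 15-16 significant digits
print(f"float:  {to_float32(x):.17f}")  # roughly 7 significant digits
```

Run on a value like 1/3, the two printouts diverge after about the seventh significant digit, which matches the usual rule of thumb: ~7 decimal digits for float, ~15-16 for double.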
Anyway, most things around us don't need more than 4 significant digits of precision to be correct enough.
Now, base conversion from binary to decimal often confuses programmers, and coders will incorrectly call the resulting discrepancies rounding errors. They are perfectly correct notational artifacts of any base conversion.
0.1 is exact in decimal notation but only approximate in binary notation.
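You can see the binary approximation of 0.1 directly, since `Decimal(float)` prints the stored double exactly:

```python
from decimal import Decimal

# Decimal(0.1) shows the exact value of the nearest 64-bit double to 0.1.
print(Decimal(0.1))
# → 0.1000000000000000055511151231257827021181583404541015625
```

The stored value is slightly above 0.1, not because the hardware rounded wrongly, but because 0.1 has no finite representation in base 2, just as 1/3 has none in base 10.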