Using Floating-Point Numbers to Represent Geographic Coordinates
Using 32-bit floats, also known as
float (in C-like languages), or more precisely IEEE 754
binary32 format, your worst-case precision is approximately 1.7 meters when using a normal latitude/longitude coordinate system. Using the 64-bit floating point format (
binary64), worst-case precision is approximately 3.16 nanometers (or
3.16×10^(-9) meters). For context, this is the scale at which the size of transistors on modern processors is measured.
We’re going to assume you’re using a geographic coordinate system where longitude ranges from -180° (the east side of the anti-meridian, i.e. reached by going west from the prime meridian) to 180° (the west side of the anti-meridian, reached by going east), and latitude ranges from -90° (south pole) to 90° (north pole). This applies, for example, to WGS 84, which is what people normally think of as “coordinates on earth” (it’s used in GPS and most mass-market positioning systems).
The earth's circumference is almost exactly 40000 km, so on the equator, one degree of longitude is
40000/360=1000/9 km (or approximately 111.1 km).
In a nutshell, the IEEE 754 binary floating point representation stores numbers in “scientific notation”: a sign bit, a power-of-two exponent, and a significand between
1 (inclusive) and
2 (exclusive). So a number like 2.125 is represented as
+2^(1)×1.0625. (For 32-bit floats, the exponent p is encoded with 8 bits and an offset of 127 (so you store
p+127 in straight unsigned binary; here p=1, so you’d store
0b10000000); the sign bit takes 1 bit, and the significand gets the rest, 23 bits (you encode the stuff after
1. as an unsigned binary number, in this case
2^(-4)×1=.0625, so this part becomes
0b0001 followed by 19 zeros). This in binary would therefore be
0b01000000000010000000000000000000). There are some special cases including a signed zero (if everything is zero but the sign, then this is a zero with sign indicated by the sign bit), infinity (exponent of all
1s, zero significand, and a sign),
NaNs (exponent of all
1s, non-zero significand), as well as subnormal numbers (zero exponent but non-zero significand).
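The field layout above is easy to verify by pulling the raw bits apart. A minimal Python sketch (the helper name `fields` is mine, not a standard API):

```python
import struct

def fields(x: float):
    """Split a number's IEEE 754 binary32 encoding into its three fields."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31                  # 1 bit
    exponent = (bits >> 23) & 0xFF     # 8 bits, stored as p + 127
    fraction = bits & 0x7FFFFF         # 23 bits after the implicit "1."
    return sign, exponent, fraction

sign, exponent, fraction = fields(2.125)
print(sign, bin(exponent), bin(fraction))
# → 0 0b10000000 0b10000000000000000000
```

The stored exponent is 128 (= 1 + 127) and the fraction is `0b0001` followed by 19 zeros (leading zeros are dropped when printed), exactly as worked out above.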
Floating point precision therefore gets worse as the exponent gets larger: the significand gives exactly 23 bits of precision (or about
log10(2^23)=23×log10(2)≈6.92 decimal digits), and the only thing that makes precision worse is using larger exponents.
The worst case for WGS 84 coordinates therefore happens when the coordinate is near the anti-meridian, with longitude close to 180° (or -180°, but the only difference is the sign bit). This is encoded with exponent
p=7 (a scale factor of 2^(7)=128), meaning that the precision (the difference between two successive representable values) is
2^(7)×2^(-23)×111111≈1.7 m.
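You can sanity-check this figure by measuring the gap between consecutive 32-bit floats near 180°, found by bumping the raw bit pattern by one (a sketch using only the standard library; the helper name `f32_ulp` is mine):

```python
import struct

def f32_ulp(x: float) -> float:
    # Gap to the next representable binary32 value above x, found by
    # incrementing the raw bit pattern by one.
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    nxt = struct.unpack(">f", struct.pack(">I", bits + 1))[0]
    return nxt - x

spacing_deg = f32_ulp(179.0)   # same for any value in [128, 256)
print(spacing_deg)             # → 1.52587890625e-05 (= 2^-16 degrees)
print(spacing_deg * 111_111)   # ≈ 1.7 meters at the equator
```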
Repeating this computation for 64-bit floating points, everything is the same except that we have 52 bits for the significand, giving
2^(7)×2^(-52)×111111=3.16×10^(-9) m=3.16 nm (nanometers).
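For 64-bit floats Python can do the same measurement directly, since `math.ulp` (available since Python 3.9) returns the spacing between consecutive doubles:

```python
import math

spacing_deg = math.ulp(180.0)      # gap between consecutive doubles at 180°
print(spacing_deg == 2 ** -45)     # → True
print(spacing_deg * 111_111)       # ≈ 3.16e-9 m, i.e. about 3.16 nm
```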
Note on decimal digit precision
If you represent coordinates as degrees without decimals, your worst case is the aforementioned 111 km at the equator. With
n digits, you move this
n digits to the right. So with 1 digit you get 11.1 km, with 5 you get 0.00111 km, or 1.11 meters, and so on.
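The pattern is just a division by ten per extra digit; a quick sketch (using the equatorial 111,111 m/degree figure from earlier):

```python
METERS_PER_DEGREE = 111_111  # ~40,000 km circumference / 360°

for digits in range(6):
    worst_case_m = METERS_PER_DEGREE / 10 ** digits
    print(f"{digits} decimal digits -> {worst_case_m:,.2f} m")
```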
Whether 1.7 meters suffices for your application is of course not something I can comment on. However, there are alternative ways of storing coordinates that do not require crazy custom floating point sizes, nor 128 bits per coordinate, and still get you pretty good precision.
One trick is to instead just use unsigned integers
u to represent a coordinate as
-180+360×(u/2^32) (for 32-bit ints). This gets you
111111×360×2^(-32)≈0.0093 m of precision, plenty for basically any application, though it comes at the cost of converting to and from degrees.
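A minimal sketch of that fixed-point scheme (the function names are mine, not from any particular library):

```python
def encode_lon(lon_deg: float) -> int:
    """Map a longitude in [-180, 180) onto an unsigned 32-bit integer."""
    return int(round((lon_deg + 180.0) / 360.0 * 2 ** 32)) % 2 ** 32

def decode_lon(u: int) -> float:
    """Invert encode_lon: -180 + 360 * (u / 2^32), as in the text."""
    return -180.0 + 360.0 * (u / 2 ** 32)

step_deg = 360.0 / 2 ** 32             # uniform quantization step, in degrees
print(step_deg * 111_111)              # ≈ 0.0093 m at the equator
print(decode_lon(encode_lon(12.345)))  # round-trips to within half a step
```

Unlike floats, this spends its bits uniformly over the whole range, so precision is the same everywhere rather than degrading near ±180°.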
OpenStreetMap has a good article on coordinate precision.