# math - A little diversion into floating point (im)precision, part 1

Most mathematicians agree that:

e^(πi) + 1 = 0

However, most floating point implementations disagree. How well can we settle this dispute?

I'm keen to hear about different languages and implementations, and various methods to make the result as close to zero as possible. Be creative!

---

It's not that most floating point implementations disagree; it's that they cannot achieve the accuracy necessary to produce an exact answer. And the correct answer is that they can't.

π is an infinite, non-repeating sequence of digits that nobody has been able to denote by anything other than a symbolic representation, and e^x is in the same situation, so the only way to get 100% accuracy is to go symbolic.

---

Here's a short list of implementations and languages I've tried. It's sorted by closeness to zero:

- Scheme: `(+ 1 (make-polar 1 (atan 0 -1)))`
  - `0.0+1.2246063538223773e-16i` (Chez Scheme, MIT Scheme)
  - `0.0+1.22460635382238e-16i` (Guile)
  - `0.0+1.22464679914735e-16i` (Chicken with `numbers` egg)
  - `0.0+1.2246467991473532e-16i` (MzScheme, SISC, Gauche, Gambit)
  - `0.0+1.2246467991473533e-16i` (SCM)
- Common Lisp: `(1+ (exp (complex 0 pi)))`
  - `#C(0.0L0 -5.0165576136843360246L-20)` (CLISP)
  - `#C(0.0d0 1.2246063538223773d-16)` (CMUCL)
  - `#C(0.0d0 1.2246467991473532d-16)` (SBCL)
- Perl: `use Math::Complex; Math::Complex->emake(1, pi) + 1`
  - `1.22464679914735e-16i`
- Python: `from cmath import exp, pi; exp(complex(0, pi)) + 1`
  - `1.2246467991473532e-16j` (CPython)
- Ruby: `require 'complex'; Complex::polar(1, Math::PI) + 1`
  - `Complex(0.0, 1.22464679914735e-16)` (MRI)
  - `Complex(0.0, 1.2246467991473532e-16)` (JRuby)
- R: `complex(argument = pi) + 1`
  - `0+1.224606353822377e-16i`
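As a sketch of where that tiny imaginary part comes from (using CPython, one of the implementations above): the residual is essentially `sin` of the double-precision π constant, i.e. the rounding error of the constant itself.

```python
import cmath
import math

# e^(i*pi) + 1 with the double-precision approximation of pi.
z = cmath.exp(complex(0, math.pi)) + 1
print(z)                  # imaginary part ≈ 1.2246e-16

# The leftover imaginary part is sin(pi_float): pi_float differs
# from the true pi by about 1.22e-16, and sin(pi - d) = sin(d) ≈ d,
# so the "error" is really the rounding error in the pi constant.
print(math.sin(math.pi))  # same magnitude
```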

---

Is it possible to settle this dispute?

My first thought is to look to a symbolic language, like Maple. I don't think that counts as floating point though.

In fact, how does one represent i (or j for the engineers) in a conventional programming language?

Perhaps a better example is sin(π) = 0? (Or have I missed the point again?)

---

I agree with Ryan: you would need to move to another number-representation system. The solution is outside the realm of floating-point math, because you need π to be represented as an infinitely long decimal, so any limited-precision scheme just isn't going to work (at least not without employing some kind of fudge factor to make up for the lost precision).

---

Your question seems a little odd to me, as you seem to be suggesting that floating-point math is implemented by the language. That's generally not true: FP math is done by a floating-point unit in hardware. But software or hardware, floating point will always be inexact. That's just how floats work.

If you need better precision, you need to use a different number representation, just as you would for integer math on numbers that don't fit in an int or a long. Some languages have libraries for that built in (Java has BigInteger and BigDecimal), but you have to explicitly use those libraries instead of native types, and the performance can be (sometimes significantly) worse than with floats.
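In Python, for instance, the standard `decimal` module plays the BigDecimal role. A minimal sketch, which also shows that even arbitrary precision doesn't make repeating expansions exact:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50           # 50 significant digits
third = Decimal(1) / Decimal(3)
print(third)                     # 0.33333... out to 50 digits
print(third * 3)                 # 0.99999... -- still not exactly 1
print(1 / 3)                     # plain double, only ~16 digits
```

More digits, but any finite precision still truncates 1/3, just as every float truncates π.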

---

@Ryan Fox

> In fact, how does one represent i (or j for the engineers) in a conventional programming language?

Native complex data types are far from unknown: Fortran had them by the mid-sixties, and the OP exhibits a variety of other languages that support them in his follow-up.

And complex numbers can be added to other languages as libraries (with operator overloading, they can even look just like native types in the code).

But unless you provide a special case for this problem, the "non-agreement" is just an expression of imprecise machine arithmetic, no? It's like complaining that

```c
float r = 2.0f / 3.0f;  /* 2/3 is not exactly representable in binary */
float s = 3 * r;
float t = s - 2;
```

can end with `t != 0` (at least if you use a dumb enough compiler)...
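The same effect shows up regardless of compiler smarts. A Python sketch of the classic case, where none of 0.1, 0.2, or 0.3 is exactly representable in binary:

```python
# 0.1, 0.2 and 0.3 all round to nearby binary fractions,
# so the "obvious" identity fails by about one ulp.
t = (0.1 + 0.2) - 0.3
print(t)         # a tiny nonzero residue, ~5.55e-17
print(t != 0)    # True
```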

---

I had looooong coffee chats with my best pal about irrational numbers and the difference between them and other numbers. Well, both of us agree on this point of view:

Irrational numbers are relations, like functions, in a way. What way? Well, think "if you want a perfect circle, give me a perfect pi". A circle is different from the other figures (4 sides, 5, 6... 100, 200), but the more sides you have, the more it looks like a circle. If you have followed me so far, connecting all these ideas, here is the pi formula:

So pi is a function, but one that never ends, because of the ∞ parameter. But I like to think that you can have an "instance" of pi: if you swap the ∞ parameter for a very big int, you will have a very big pi instance.

Same with e: give me a huge parameter, and I will give you a huge e.

Putting all the ideas together: since we have memory limitations, the language and libraries provide us with huge instances of irrational numbers, in this case pi and e. As a final result, you will have a long approach toward 0, like the examples provided by @Chris Jester-Young.
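The "instance of pi" idea can be sketched by truncating an infinite series at a finite number of terms. The formula in the post above didn't survive, so this sketch assumes the Leibniz series purely for illustration:

```python
import math

def pi_instance(n_terms):
    """Partial sum of the Leibniz series: pi = 4 * sum_k (-1)^k / (2k + 1).

    Swapping the infinity for a finite n_terms gives an "instance" of pi.
    """
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(n_terms))

print(pi_instance(10))         # crude instance
print(pi_instance(1_000_000))  # much closer to math.pi
print(math.pi)
```

The bigger the parameter, the better the instance; but any finite parameter leaves a gap, which is the post's point.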

---

> In fact, how does one represent i (or j for the engineers) in a conventional programming language?

In a language that doesn't have a native representation, it is usually added using OOP to create a `Complex` class to represent `i` and `j`, with operator overloading to properly deal with operations involving other `Complex` numbers and/or the language's native numeric primitives.
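A minimal sketch of that approach in Python (a hypothetical `Complex` class for illustration; real Python code would just use the built-in `complex` type):

```python
class Complex:
    """Toy complex number with operator overloading."""

    def __init__(self, re, im=0.0):
        self.re, self.im = float(re), float(im)

    def __add__(self, other):
        other = other if isinstance(other, Complex) else Complex(other)
        return Complex(self.re + other.re, self.im + other.im)

    __radd__ = __add__  # lets native numbers appear on the left

    def __mul__(self, other):
        other = other if isinstance(other, Complex) else Complex(other)
        return Complex(self.re * other.re - self.im * other.im,
                       self.re * other.im + self.im * other.re)

    def __repr__(self):
        return f"({self.re}+{self.im}i)"

i = Complex(0, 1)
print(i * i)      # (-1.0+0.0i)
print(i * i + 1)  # (0.0+0.0i)
```

With the overloads in place, `i * i + 1` reads just like the math, which is the appeal of the technique.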

---

Numerical Analysis teaches us that you can't rely on the precise value of small differences between large numbers.

This doesn't just affect the equation in question here, but can bring instability to everything from solving a near-singular set of simultaneous equations, through finding the zeros of polynomials, to evaluating log(~1) or exp(~0) (I have even seen special functions for evaluating log(x+1) and (exp(x)-1) to get round this).
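Those special functions survive in today's standard libraries as `log1p` and `expm1`. A Python sketch of why they matter, using an `x` so small that `1 + x` rounds to exactly 1.0 in double precision:

```python
import math

x = 1e-20                 # far below double-precision epsilon (~2.2e-16)
print(math.log(1 + x))    # 0.0 -- all information lost in the 1 + x step
print(math.exp(x) - 1)    # 0.0 -- same cancellation on the way out
print(math.log1p(x))      # ~1e-20, full precision retained
print(math.expm1(x))      # ~1e-20
```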

I would encourage you not to think in terms of zeroing the difference -- you can't -- but rather in doing the associated calculations in such a way as to ensure the minimum error.

I'm sorry, it's 43 years since I had this drummed into me at uni, and even if I could remember the references, I'm sure there's better stuff around now. I suggest this as a starting point.

If that sounds a bit patronising, I apologise. My "Numerical Analysis 101" was part of my Chemistry course, as there wasn't much CS in those days. I don't really have a feel for the place/importance numerical analysis has in a modern CS course.

---

It's a limitation of our current floating-point computational architectures. Floating-point arithmetic is only an approximation of constants like e or pi (or of anything beyond the precision your bits allow). I really enjoy these numbers because they defy classification, and appear to have greater entropy(?) than even the primes, which form a canonical series. A ratio defies numerical representation; sometimes simple things like that can blow a person's mind (I love it).

Luckily, entire languages and libraries can be dedicated to precision trigonometric functions by using notational concepts (similar to those described by Lasse V. Karlsen).

Consider a library/language that describes concepts like e and pi in a form that a machine can understand. Does a machine have any notion of what a perfect circle is? Probably not, but we can create an object — a circle — that satisfies all the known features we attribute to it (constant radius, and the relationship of radius to circumference, 2*pi*r = C). An object like pi is then described only by the aforementioned ratio; r and C can be numeric objects with whatever precision you want to give them. And e can be defined as "the unique real number such that the value of the derivative (slope of the tangent line) of the function f(x) = e^x at the point x = 0 is exactly 1" (from Wikipedia).

Fun question.