*But this is different.*

This is a proof of concept, and only a proof of concept, for an approach that allows **exact** precision arithmetic for any computable number. Want the square root of three to three decimal places? Realize later on that the user wants twenty decimal places instead, or that the number of decimal places is dynamic? No need to refactor the original calculation; just ask the stored square root of three for twenty decimal places, or however many the user supplies. Have algorithms to calculate *e* and π? Add *e* and π together, and don't worry until later about how many decimal places you want for *e* + π. Numbers are stored with exact precision, and decimal approximations are print-on-demand.

The approach is outlined in the original email:

I was thinking about a way to try to have integer-clean arithmetic on algebraic numbers, and a brief Google search for “integer-clean arithmetic algebraic numbers” did not turn up obvious software tools for integer-clean handling of algebraic numbers.

However, I think I may have found a way to use objects to circumvent the corruption that comes from naive use of floating-point numbers, where the error can grow exponentially.

Let’s say that every number is an object that is either:

- Something that can primitively return an arbitrarily specified number of digits' accuracy. This will eventually include π and *e*, but what I originally have in mind is just integers, which simply return more zeroes after the decimal point if you keep asking for more digits of accuracy. **OR**
- An object storing a first number, a second number, and an operation between them (addition, multiplication, or exponentiation, or one of their inverses).

Let us furthermore suppose that each number object has a method to evaluate its contents to a specified accuracy.
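The two kinds of object might be sketched in Python like this. This is a minimal sketch under stated assumptions, not the proof of concept's actual code: the class names, the fixed two digits of padding, and the restriction to the four basic operations are all illustrative choices of mine.

```python
from decimal import Decimal, getcontext

class Number:
    """Base class: any number can report itself to a requested accuracy."""
    def evaluate(self, places):
        raise NotImplementedError

class Integer(Number):
    """A primitive number: asking for more digits just yields more zeroes."""
    def __init__(self, value):
        self.value = value
    def evaluate(self, places):
        # Quantize to exactly `places` digits after the decimal point.
        return Decimal(self.value).quantize(Decimal(1).scaleb(-places))

class Operation(Number):
    """A first number, a second number, and an operation between them."""
    def __init__(self, left, op, right):
        self.left, self.op, self.right = left, op, right
    def evaluate(self, places):
        padding = 2  # illustrative: extra digits keep the error below the request
        getcontext().prec = places + padding + 10
        a = self.left.evaluate(places + padding)
        b = self.right.evaluate(places + padding)
        results = {'+': a + b, '-': a - b, '*': a * b, '/': a / b}
        return results[self.op].quantize(Decimal(1).scaleb(-places))
```

For example, `Operation(Integer(31), '/', Integer(10)).evaluate(3)` yields `Decimal('3.100')`, and the same stored object can later be asked for twenty places instead.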

Numbers in the second class should be calculable by querying both stored numbers with enough additional places of accuracy that, when the operation is performed on them, the error is orders of magnitude smaller than the requested accuracy. (Note that this leaves the door open to some question of rounding error; if a certain number of digits' accuracy has rounding error that overlaps the rounding for the requested accuracy, more digits can be requested. Rounding error may be a fly in the ointment, although an epsilon-delta style argument would seem to establish that there are no corner cases that cannot be met by specifying enough digits.)
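The retry idea in that parenthesis can be made concrete: round only once the padded digits show the result is safely away from a rounding boundary, and otherwise ask for more digits. A sketch, where the helper name and the doubling schedule for padding are my assumptions:

```python
from decimal import Decimal, getcontext

def round_safely(compute, places, padding=2, max_padding=64):
    """`compute` returns an approximation good to the current context precision.
    Round it to `places` decimal places only when the padded digits show we are
    not sitting on a rounding boundary; otherwise request more digits."""
    while True:
        getcontext().prec = places + padding + 5
        approx = compute().quantize(Decimal(1).scaleb(-(places + padding)))
        tail = str(approx).split('.')[1][places:]
        # All zeroes or all nines means the true value could round either way.
        if tail not in ('0' * padding, '9' * padding) or padding >= max_padding:
            return approx.quantize(Decimal(1).scaleb(-places))
        padding *= 2
```

Note that an exactly representable value such as the integer 3 always ends in zeroes, so this sketch exhausts `max_padding` before rounding it; a real implementation would want primitive numbers to report exactness.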

So if I request (31 / 10 + the square root of 2), accurate to three decimal places, and we give two digits of padding, the first term resolves to 31.00000 / 10.00000, rounded to 3.10000, and the second resolves to 2.00000 ^ 0.500000, which resolves to 1.41421. I add them, getting 4.51421, which I round to three decimal places, getting 4.514. Nothing in this calculation has been integer-clean arithmetic in the usual sense, but the number has been evaluated accurately to three places, and it could just as well have been evaluated accurately to twenty places.
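That worked example is a few lines of Python with the standard-library `decimal` module (the function name and the choice of two digits of padding are illustrative assumptions):

```python
from decimal import Decimal, getcontext

def example(places, padding=2):
    # Query each operand with `padding` extra digits of accuracy.
    getcontext().prec = places + padding + 5
    a = Decimal(31) / Decimal(10)   # 3.10000
    b = Decimal(2).sqrt()           # 1.41421  (i.e. 2 ^ 0.5)
    # Perform the operation, then round to the requested accuracy.
    return (a + b).quantize(Decimal(1).scaleb(-places))

print(example(3))    # three decimal places: 4.514
print(example(20))   # the same calculation, now to twenty places
```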

Now this abandons one feature that comes with standard floating-point arithmetic, namely that any number takes O(1) memory. These numbers look more like O(n log n) memory, maybe more, though it seems subquadratic at least. On a machine with 16 gigs of RAM, there may be some calculations where you want, and can afford, the memory these objects consume. For that matter, even 16 megs of RAM might be enough that you don't absolutely need O(1) floating-point numbers.

I think I’ll see about a Python package tomorrow.

Thoughts?

P.S. to [Name]: I'm interviewing with Google.

The proof of concept is intended only as a proof of concept, not a production release, and not necessarily something that will handle every corner case. It is, however, intended to outline clearly how one would go about such things and what concept is being proven.

This is implemented in Python written to be executable pseudo-code (*perhaps apart from the laborious parser that takes integer, float, string, and Decimal values and converts them to Numbers built from integers*). The code is meant to be not-clever, and to serve programmers in other languages as pseudo-code demonstrating how one might implement this approach to arithmetic and numbers in the language of one's choice.

*Download* Proof of Concept in Python.

*View Source* for Proof of Concept in Python.

The proof of concept, and this page, are licensed to you under your choice of the terms of the MIT and GPLv2 licenses.

This page is link-ware. If you like it, please consider a link to CJSHayward.com.