Nice story. An even more powerful way to express numbers is as a continued fraction (https://en.wikipedia.org/wiki/Continued_fraction). You can express both real and rational numbers efficiently using a continued fraction representation.
As a fun fact, I have a not-that-old math textbook (from a famous number theorist) that says that it is most likely that algorithms for adding/multiplying continued fractions do not exist. Then in 1972 Bill Gosper came along and proved that (in his own words) "Continued fractions are not only perfectly amenable to arithmetic, they are amenable to perfect arithmetic.", see https://perl.plover.com/yak/cftalk/INFO/gosper.txt.
I have been working on a Python library called reals (https://github.com/rubenvannieuwpoort/reals). The idea is that you should be able to use it as a drop-in replacement for the Decimal or Fraction type, and it should "just work" (it's very much a work-in-progress, though). It works by using the techniques described by Bill Gosper to manipulate continued fractions. I ran into the problems described on this page, and a lot more. Fun times.
> You can express both real and rational numbers efficiently using a continued fraction representation.
No, all finite continued fractions express a rational number (for... obvious reasons), which is honestly kind of a disappointment, since arbitrary sequences of integers can, as a matter of principle, represent arbitrary computable numbers if you want them to. They're more powerful than finite positional representations, but fundamentally equivalent to simple fractions.
They are occasionally convenient for certain problem structures but, as I'm sure you've already discovered, somewhat less convenient for a wide range of common problems.
> No, all finite continued fractions express a rational number
Any real number x has an infinite continued fraction representation. By efficient I mean that the continued fraction coefficients give an efficient way to compute rational upper and lower bounds that approximate x well (they are the best rational approximations to x).
> They are occasionally convenient for certain problem structures but, as I'm sure you've already discovered, somewhat less convenient for a wide range of common problems.
I'm curious what you mean exactly. I've found them to be very convenient for evaluating arithmetic expressions (involving both rational and irrational numbers) to fairly high accuracy. They are not the most efficient solution for this, but their simplicity, and not having to do error analysis, beats any other purely numerical system I know of.
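The "best rational approximations" property is easy to see numerically. A minimal sketch in Python (function names are mine, not from the reals library):

```python
from fractions import Fraction
import math

def cf_coefficients(x, n):
    """First n continued-fraction coefficients of x. Uses ordinary
    floating point, so only the first dozen or so terms are reliable."""
    coeffs = []
    for _ in range(n):
        a = math.floor(x)
        coeffs.append(a)
        if x == a:
            break
        x = 1 / (x - a)
    return coeffs

def convergents(coeffs):
    """Convergents p_k/q_k via p_k = a_k*p_(k-1) + p_(k-2), same for q."""
    p_prev, q_prev, p, q = 1, 0, coeffs[0], 1
    result = [Fraction(p, q)]
    for a in coeffs[1:]:
        p_prev, q_prev, p, q = p, q, a * p + p_prev, a * q + q_prev
        result.append(Fraction(p, q))
    return result

print(cf_coefficients(math.sqrt(2), 5))   # [1, 2, 2, 2, 2]
print(convergents([1, 2, 2, 2, 2]))       # 1, 3/2, 7/5, 17/12, 41/29
```

Each convergent is closer to sqrt(2) than any fraction with a smaller denominator, which is the "best approximation" property in question.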
> fundamentally equivalent to simple fractions.
This feels a bit too reductionist. I can come up with a lot of examples, but one: it's quite hard to find the best rational approximations of a number with plain fractions, while it's trivial with continued fractions. Likewise, a number like the golden ratio, e, or any quadratic irrational has a simple description in terms of continued fractions, while this is certainly not the case for normal fractions.
That continued fractions can easily be converted to normal fractions and vice versa is a strength of continued fractions, not a weakness.
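The conversion in both directions really is short. A sketch (illustrative, not from any particular library):

```python
from fractions import Fraction

def fraction_to_cf(f):
    """Continued-fraction coefficients of a rational number; this is
    exactly the Euclidean algorithm, so it always terminates."""
    p, q = f.numerator, f.denominator
    coeffs = []
    while q:
        a, r = divmod(p, q)
        coeffs.append(a)
        p, q = q, r
    return coeffs

def cf_to_fraction(coeffs):
    """Fold the coefficients back into a single fraction."""
    result = Fraction(coeffs[-1])
    for a in reversed(coeffs[:-1]):
        result = a + 1 / result
    return result

print(fraction_to_cf(Fraction(355, 113)))  # [3, 7, 16]
print(cf_to_fraction([3, 7, 16]))          # 355/113
```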
That's the issue, no? If you go infinite you can express any real number - but you can only actually represent those whose coefficient sequence is a computable function.
Continued fractions are very cool. I saw in a CTF competition once a question about breaking an RSA variant that relied on the fact that a certain ratio was a term in sequence of continued fraction convergents.
Naturally the person pursuing a PhD in number theory (whom I recruited to our team for specifically this reason) was unable to solve the problem and we finished in third place.
Why unnecessarily air this grievance in a public forum. If this person reads it they will be unhappy and I'm sure they have already suffered enough from this failure.
I have been working on a new definition of real numbers which I think is a better foundation for real numbers and seems to be a theoretical version of what you are doing practically. I am currently calling them rational betweenness relations. Namely, it is the set of all rational intervals that contain the real number. Since this is circular, it is really about properties that a family of intervals must satisfy. Since real numbers are messy, this idealized form is supplemented with a fuzzy procedure for figuring out whether an interval contains the number or not. The work is hosted at (https://github.com/jostylr/Reals-as-Oracles) with the first paper in the readme being the most recent version of this idea.
The older and longer paper of Defining Real Numbers as Oracles contains some exploration of these ideas in terms of continued fractions. In section 6, I explore the use of mediants to compute continued fractions, as inspired by the old paper Continued Fractions without Tears ( https://www.jstor.org/stable/2689627 ). I also explore a bit of Bill Gosper's arithmetic in Section 7.9.2. In there, I square the square root of 2 and the procedure, as far as I can tell, never settles down to give a result as you seem to indicate in another comment.
For fun, I am hoping to implement a version of some of these ideas in Julia at some point. I am glad to see a version in Python and I will no doubt draw inspiration from it and look forward to using it as a check on my work.
How do you work out an answer for x - y when eg x = sqrt(2) and y = sqrt(2) - epsilon for arbitrarily small epsilon? How do you differentiate that from x - x?
In a purely numerical setting, you can only distinguish these two cases by evaluating the expression with enough accuracy. This may feel like a weakness, but if you think about it, it is a much more "honest" way of handling inaccuracy than just rounding like you would with floating-point arithmetic.
A good way to think about the framework is that for any expression you can compute a rational lower and upper bound for the "true" real solution. With enough computation you can get them arbitrarily close, but when an intermediate result is not rational, you will never be able to compute the true solution (even if it happens to be rational; a good example is that for sqrt(2) * sqrt(2) you will only ever be able to get a solution of the form 2 ± ϵ for some arbitrarily small ϵ).
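The sqrt(2) example can be made concrete with exact rational bisection, using nothing beyond the standard library:

```python
from fractions import Fraction

def sqrt2_bounds(iterations):
    """Rational lower/upper bounds for sqrt(2) by bisection. No rational
    squares to exactly 2, so lo*lo < 2 < hi*hi holds strictly forever."""
    lo, hi = Fraction(1), Fraction(2)
    for _ in range(iterations):
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    return lo, hi

lo, hi = sqrt2_bounds(50)
assert lo * lo < 2 < hi * hi          # the bounds bracket 2...
assert hi - lo == Fraction(1, 2**50)  # ...and can be made arbitrarily tight
```

However many iterations you run, the squared bounds only ever give you 2 ± ϵ, never exactly 2.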
The link at the end is both shortened (for tracking purposes?) and unclickable… so that’s unfortunate. Here is the real link to the paper, in a clickable format: https://dl.acm.org/doi/pdf/10.1145/3385412.3386037
Thanks for pointing that out. It should be fixed now. The shortening was done by the editor I was using ("Buffer") to draft the tweets - I wasn't intending to track anyone, but it probably does provide some means of seeing how many people clicked the link.
Unrelated to the article, but this reminds me of being an intrepid but naive 12-year-old trying to learn programming. I had already taught myself a bit using books, including following a tutorial to make a simple calculator complete with a GUI in C++. However I wasn't sure how to improve further without help, so my mom found me an IT school.
The sales lady gave us a hard sell on their "complete package", which had basic C programming but also included a bunch of unnecessary topics like Microsoft Excel, etc. When I asked if I could skip all that and move straight to more advanced programming topics, she was adamant that this wasn't an option; she downplayed my achievements, insisting I basically knew nothing and needed to start from the beginning.
Most of all, I recall her saying something like "So what, you made a calculator? That's so simple, anybody could make that!"
In the end I was naive, she was good at sales, and I was desperate for knowledge, so we signed up. Sure enough, the curriculum was mostly focused on basic Microsoft Office products, and the programming sections barely scratched the surface of computer science; in retrospect, I doubt there was anybody there qualified to teach it at all. The only real lesson I learned was not to trust salespeople.
Thank god it's a lot easier for kids to just teach themselves programming these days online.
Nice story. Thanks for sharing. For years, I struggled with the idea of "message passing" for GUIs. Later, I learned it was nothing more than the window procedure (WNDPROC) in the Win32 API. <sad face>
> However I wasn't sure how to improve further without help, so my mom found me an IT school.
This sounds interesting. What is an "IT school"? (What country? They didn't have these in mine.)
Probably institutes teaching IT stuff. They used to be popular (still?) in my country (India) in the past. That said, there are plenty of places which train in reasonable breadth in programming, embedded etc. now (think less intense bootcamps).
> Most of all, I recall her saying something like "So what, you made a calculator? That's so simple, anybody could make that!"
This literally brings rage to the fore. Downplaying a kid's accomplishments is the worst thing an educator could do, and marks her as evil.
I've often looked for examples of time travel, hints it is happening. I've looked at pictures of movie stars, to see if anyone today has traveled back in time to try to woo them. I've looked at markets, to see if someone is manipulating them in weird, unconventional ways.
I wonder how many cases of "random person punched another person in the head" and then "couldn't be found" is someone traveling back in time to slap this lady in the head.
So yeah, a kid well-versed in Office. My birthday invites were bad-ass, though. I remember I had one row in Excel per invited person with their data, and placeholders in the Word document; when printing, it would generate a unique page per Excel row, so everyone got customized invites with their names. Probably spent longer setting it up than it would've taken to edit their names + print 10 times separately, but it felt cool.
Luckily a teacher understood what I really wanted, and sent me home with a floppy disk with a template web page with some small code I could edit in Notepad and see come to life.
As soon as I read the title, I chuckled, because coming from a computational mathematics background I already knew roughly what it was going to be about. IEEE 754 is like democracy in the sense that it is the worst, except for all the others. Immediately when I saw the example I thought: this is going to be either Kahan summation or a full-scale computer algebra system. It turned out to be some subset of the latter, and I have to admit I had never heard of Recursive Real Arithmetic (I knew of Real Analysis, though).
If anything that was a great insight about one of my early C++ heroes, and what they did in their professional life outside of the things they are known for. But most importantly it was a reminder how deep seemingly simple things can be.
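For anyone who hasn't seen the Kahan summation mentioned above: it is only a few lines. A standard sketch:

```python
def kahan_sum(values):
    """Kahan compensated summation: c carries the low-order bits lost
    in each addition and feeds them back into the next one."""
    total = 0.0
    c = 0.0
    for x in values:
        y = x - c
        t = total + y
        c = (t - total) - y  # (t - total) is what was actually added
        total = t
    return total

print(sum([0.1] * 10))        # 0.9999999999999999
print(kahan_sum([0.1] * 10))  # 1.0
```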
IEEE 754 is what you get when you want numbers to have huge dynamic range, equal precision across the range, and fixed bit width. It balances speed and accuracy, and produces a result that is very close to the expected result 99.9999999% of the time. A competent numerical analyst can take something you want to do on paper and build a sequence of operations in floating point that compute that result almost exactly.
I don't think anyone who worked on IEEE 754 (and certainly nobody who currently works on it) contemplated calculators as an application, because a calculator is solving a fundamentally different problem. In a calculator, you can spend 10-100 ms doing one operation and people won't mind. In the applications for which IEEE 754 is made, you are expecting to do billions or trillions of operations per second.
William Kahan worked on both IEEE 754 and HP calculators. The speed gap between something like an 8087 and a calculator was not that big back then, either.
IEEE 754 is what you get if you started with the idea of sign, exponent, and fraction and made the most efficient hardware implementation of it possible. It's not "beautiful", but it falls out pretty straightforwardly from those starting assumptions, even the seemingly weirder parts like -0, subnormals and all the rounding modes. It was not really democratically designed, but done by numerical computing experts coupled with hardware design experts. Every "simplified" implementation of floating point that has appeared (e.g. auto-FTZ mode in vector units) has eventually been dragged kicking and screaming back to the IEEE standard.
Another way to see it is that floating point is the logical extension of fixed-point math to log space to deal with numbers across many orders of magnitude. I don't know if "beautiful" is exactly the right word, but it's an incredibly solid bit of engineering.
I feel like your description comes across as more negative on the design of IEEE-754 floats than you intend. Is there something else you think would have been better? Maybe I’m misreading it.
Maybe the hardware focus can be blamed for the large exponents and small mantissas.
The only reasonable non-IEEE things that come to mind for me are:
- bfloat16 which just works with the most significant half of a float32.
- log8 which is almost all exponent.
I guess in both cases they are about getting more out of available memory bandwidth and the main operation is f32 + x * y -> f32 (ie multiply and accumulate into f32 result).
Maybe they will be (or already are) incorporated into IEEE standards though
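As an illustration of how little machinery bfloat16 needs on top of float32, here is a rounding sketch (my own toy code; NaN/infinity corner cases are ignored):

```python
import struct

def to_bfloat16_bits(x):
    """Keep the top 16 bits of the float32 encoding, applying
    round-to-nearest-even to the 16 bits being dropped."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    rounding_bias = 0x7FFF + ((bits >> 16) & 1)
    return ((bits + rounding_bias) >> 16) & 0xFFFF

def from_bfloat16_bits(b):
    """Widen back to float32 by padding with zero bits."""
    (x,) = struct.unpack("<f", struct.pack("<I", (b & 0xFFFF) << 16))
    return x

print(from_bfloat16_bits(to_bfloat16_bits(1.5)))      # 1.5 (exactly representable)
print(from_bfloat16_bits(to_bfloat16_bits(3.14159)))  # 3.140625 (8-bit mantissa)
```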
> what you get if you started with the idea of sign, exponent, and fraction and made the most efficient hardware implementation of it possible. It's not "beautiful", but it falls out pretty straightforwardly from those starting assumptions
This implies a strange way of defining what "beautiful" means in this context.
IEEE754 is not great for pure maths, however, it is fine for real life.
In real life, no instrument is going to give you a measurement with the 52 bits of precision a double can offer, and you are probably never going to get quantities in the 10^1000 range. No actuator is precise enough either. Even single precision is usually above what physical devices can work with. When drawing a pixel on screen, you don't need to know its position down to the subatomic level.
For these real life situations, improving on the usual IEEE 754 arithmetic would probably be better served with interval arithmetic. It would fail at maths, but in exchange you get support for measurement errors.
Of course, in a calculator, precision is important because you don't know if the user is working with real life quantities or is doing abstract maths.
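The interval arithmetic suggested above is simple to sketch. A toy version (a real implementation would also direct the rounding mode so the bounds stay conservative; this one ignores that):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """A value known only to lie somewhere in [lo, hi]."""
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        corners = [self.lo * other.lo, self.lo * other.hi,
                   self.hi * other.lo, self.hi * other.hi]
        return Interval(min(corners), max(corners))

# a length measured as 2.0 m with a ±0.05 m instrument
length = Interval(1.95, 2.05)
area = length * length
print(area)  # roughly [3.8025, 4.2025] -- the measurement error is explicit
```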
> IEEE754 is not great for pure maths, however, it is fine for real life.
Partially. It can be fine for pretty much any real-life use case. But many naive implementations of formulae involve some gnarly intermediates despite having fairly mundane inputs and outputs.
> IEEE 754 is like democracy in a sense that it is the worst, except for all the others.
I can't see what would be worse. The entire raison d'etre for computers is to give accurate results. Introducing a math system which is inherently inaccurate to computers cuts against the whole reason they exist! Literally any other math solution seems like it would be better, so long as it produces accurate results.
"Accurate" is doing a lot of work there. IEEE 754 does very well in terms of error versus representation size.
What system has accurate results? I don't know of any number system in use at all that 1) represents numbers with a fixed size, 2) can represent 1/n accurately for reasonable integers, and 3) can do exponents accurately.
Electronic computers were created to be faster and cheaper than a pool of human computers (who may have had slide rules or mechanical adding machines). Human computers were basically doing decimal floating point with limited precision.
It's ideal for engineering calculations which is a common use of computers. There, nobody cares if 1-1=0 exactly or not because you could never have measured those values exactly in the first place. Single precision is good enough for just about any real-world measurement or result while double precision is good for intermediate results without losing accuracy that's visible in the single precision input/output as long as you're not using a numerically unstable algorithm.
The NYC subway fare is $2.90. I was using PCalc on iOS to step through remaining MetroCard values per swipe and discovered that AC, 8.7, m+, 2.9, m-, m-, m- evaluates to -8.881784197E-16 instead of zero. This doesn't happen when using Apple's calculator. I wrote to the developer and he replied, "Apple has now got their own private maths library which isn't available to developers, which they're using in their own calculator. What I need to do is replace the Apple libraries with something else - that's on my list!"
I wrote the calculator for the original BlackBerry. Floating point won't do. I implemented decimal-based floating point functions to avoid these rounding problems. This sounds harder than it was: basically, the "exponent" part wasn't how many bits to shift, but what power of ten to divide by, so that 0.1, 0.001, etc. can be represented exactly. Not sure if I had two or three digits of precision beyond what's on the display. One digit is pretty standard for 5-function calculators; scientific ones typically have two.
It was only a 5 function calculator, so not that hard, plus there was no floating point library by default so doing any floating point really ballooned the size of an app with the floating point library.
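The scheme the parent describes can be sketched with plain integers. A toy illustrating the idea (not the BlackBerry code, which also had to round to a fixed number of digits):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Dec:
    """Toy decimal float: value = mantissa * 10**exponent, both integers,
    so 0.1 is stored exactly as Dec(1, -1)."""
    mantissa: int
    exponent: int

    def __add__(self, other):
        # align to the smaller exponent, then add mantissas exactly
        e = min(self.exponent, other.exponent)
        m = (self.mantissa * 10 ** (self.exponent - e)
             + other.mantissa * 10 ** (other.exponent - e))
        return Dec(m, e)

    def __sub__(self, other):
        return self + Dec(-other.mantissa, other.exponent)

# 0.1 + 0.2 is exactly 0.3 -- no binary rounding error
assert Dec(1, -1) + Dec(2, -1) == Dec(3, -1)
```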
Sounds like he's just using stock math functions. Both JavaScript and Python act the same way when you subtract the numbers one at a time, saving the intermediate result, rather than computing 8.7 - (2.9*3).
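Easy to reproduce with Python's stock floats, and to fix with the decimal module:

```python
from decimal import Decimal

# binary floating point: three separate subtractions leave a residue
balance = 8.7
for _ in range(3):
    balance -= 2.9          # each 2.9 is really 2.899999999999999911...
print(balance)              # -8.881784197001252e-16, not 0.0

# decimal arithmetic reproduces what the MetroCard user expects
balance = Decimal("8.7")
for _ in range(3):
    balance -= Decimal("2.9")
print(balance)              # 0.0
```

The residue is exactly -2^-50, which matches the -8.881784197E-16 PCalc displayed.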
It's not even about features. Calculators are mostly useful for napkin math - if I can't afford an error, I'll take some full-fledged math software/package and write a program that will be debuggable, testable, and have version control.
But for some reason the authors of calculator apps never optimize them for the number of keypresses, unlike Casio/TI/HP. It's a lost art. Even simple operator repetition is a completely alien concept to new apps. Even the devs of apps that are supposed to be snappy, like SpeedCrunch, seem to completely misunderstand the niche of a calculator. Are they not using it themselves? A calculator is neither a CAS nor a REPL.
For Android in particular, I've only found two non-emulated calculators worth using for that, HiPER Calc and 10BA by Segitiga.Pro. And I'm not sure I can trust the correctness.
I find that much of the time I want WolframAlpha even for basic arithmetic, because I like the way it tracks and converts units. It's such a simple way to check that my calculation isn't completely off base. If I forget to square something or I multiply when I meant to divide, I get an obviously wrong answer.
Plus of course not having to do even more arithmetic when one site gives me kilograms and another gives me ounces.
If you're willing to learn to work with RPN calculators (which I think is a good idea), I can recommend RealCalc for Android. It has an RPN mode that is very economic in keypresses and it's clear the developers understand how touchscreens work and how that ties into the kind of work pocket calculators are useful for.
My only gripe with it is that it doesn't solve compounding return equations, but for that one can use an emulated HP-12c.
Proper ones are certainly usable for more than napkin math. I deal with fairly simple definite integrals and linear algebra occasionally. It's easier for me to plug this into a programmable calculator than it is to scratch in the dirt on Maxima or Mathematica most of the time if I just need an answer.
This relates to what I wrote in reply to the original tweet thread.
Performing arithmetic on arbitrarily complex mathematical functions is an interesting area of research but not useful to 99% of calculator users. People who want that functionality will use Wolfram Alpha/Mathematica, Matlab, some software library, or similar.
Most people using calculators are probably using them for budgeting, tax returns, DIY projects ("how much paint do I need?", etc), homework, calorie tracking, etc.
If I was building a calculator app -- especially if I had the resources of Google -- I would start with trying to get inside the mind of the average calculator user and figuring out their actual problems. E.g., perhaps most people just use standard 'napkin math', but struggle a bit with multi-step calculations.
> But for some reason the authors of calculator apps never optimize them for the number of keypresses, unlike Casio/TI/HP. It's a lost art. Even a simple operator repetition is a completely alien concept for new apps.
Yes, there's probably a lot of low-hanging fruit here.
The Android calculator story sounded like many products that came out of Google -- brilliant technical work, but some sort of weird disconnect with the needs of actual users.
(It's not like the researchers ignored users -- they did discuss UI needs in the paper. But everything was distant and theoretical -- at no point did I see any mention of the actual workflow of calculator users, the problems they solve, or the particular UI snags they struggle with.)
I'm the developer of an Android calculator called Algeo [1], and I wonder which part of it makes it feel slow/not snappy? I'm trying to constantly improve it, though UX is a hard problem.
Another app nobody has made is a simple random music player. I tried VLC on Android, and adding 5000+ songs from the SD card into a playlist for shuffling simply crashes the app. Why do we need a playlist anyway? Just play the folder! Is it trying to load the whole list into memory at once? VLC always works, but not on this task. I found another player that doesn't require building a playlist, but when the app is restarted it starts from the same song, following the same random seed. Either save the last one or let me set the seed!
pkg install mplayer
cd /sdcard/Music
find -type f | shuf | head -1 | xargs mplayer
(Or whatever command-line player you already have installed. I just tested with espeak that audio in Termux works for me out of the box and saw someone else mentioning mplayer as working for them in Termux: https://android.stackexchange.com/a/258228)
- It generates a list of all files in the current directory, one per line
- Shuffles the list
- Takes the top entry
- Gives it to mplayer as an argument/parameter
Repeat the last command to play another random song. For infinite play:
while true; do !!; done
(Where !! substitutes the last command, so run this after the find...mplayer line)
You can also stick these lines in a shell script, and I seem to remember you can have scripts as icons on your homescreen but I'm not super deep into Termux; it just seemed like a trivial problem to me, as in, small enough that piping like 3 commands does what you want for any size library with no specialised software needed
> Another app nobody has made is a simple random music player.
Marvis on iOS is pretty good at this. I use it to shuffle music with some rules ("low skip %, not added recently, not listened to recently")[0] and it always does a good job.
[0] Because "create playlist" is still broken in iOS Shortcuts, incredibly.
I'm pretty sure the paid version of PowerAmp for Android will do what you want, with or without explicitly creating a playlist.
I have many thousands of mp3s on my phone in nested folders. PowerAmp has a "shuffle all" mode that handles them just fine, as well as other shuffle modes. I've never noticed it repeating a track before I do something to interrupt the shuffle.
Earlier versions (>~ 5 years ago) seemed to have trouble indexing over a few thousand tracks across the phone as a whole, but AFAIK that's been fixed for awhile now.
Anything that just shuffles on the filesystem/folder level works for this. Even my Honda Civic's stereo does it. Then you have iTunes, which uses playlists, and doesn't work. It starts repeating songs before it exhausts the playlist.
I haven't used it in a while (now using streaming...), But Musicolet (https://krosbits.in/musicolet/) should be able to do this. Offline-only and lightweight.
I'd love to hear more about this. What was the other one you found? I wrote Tiny Player for iOS and another one for Mac, and as more of an "album listener" myself I always struggled to keep the shuffle functionality up to other people's expectations.
Mediamonkey allows me to just go to tracks and hit shuffle and then it randomly adds all my tracks to a queue with no repeats. You can do it at any level of hierarchy, allmusic, playlist, album, artist, genre etc.
Edit: I checked I can also shuffle a folder without adding it to the library.
I've been working on it for what will be a decade later this year. It tries to take all the features you had on these physical calculators, but present them in a modern way. It works on macOS, iOS, and iPad OS
With regards to the article, I wasn't quite as sophisticated as that. I do track rationals, exponents, square roots, and multiples of pi; then fall back to decimal when needed. This part is open source, though!
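That "track a few exact forms, fall back to decimal" approach can be sketched like this (names and scope are mine, for illustration, not the app's actual open-source code):

```python
from dataclasses import dataclass
from fractions import Fraction
import math

@dataclass(frozen=True)
class Exact:
    """Toy exact value: coeff * sqrt(root) * pi**pi_pow."""
    coeff: Fraction
    root: int = 1
    pi_pow: int = 0

    def __mul__(self, other):
        coeff = self.coeff * other.coeff
        root = self.root * other.root
        s = math.isqrt(root)
        if s * s == root:  # pull perfect squares out of the radical
            coeff, root = coeff * s, 1
        return Exact(coeff, root, self.pi_pow + other.pi_pow)

    def __float__(self):  # the "fall back to decimal" escape hatch
        return float(self.coeff) * math.sqrt(self.root) * math.pi ** self.pi_pow

sqrt2 = Exact(Fraction(1), 2)
print(sqrt2 * sqrt2 == Exact(Fraction(2)))  # True: sqrt(2)*sqrt(2) is exactly 2
```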
I am seriously curious when it became not a violation of the principle of least surprise that a calculator app uses the network to communicate information from my device (which definitionally belongs to me) to the developer.
Where I am standing, that never happened, but that would require that a simply staggering number of people be classified as unreasonable.
Well the 89 is a CAS in disguise most of the time which is mentioned in passing in the article.
But, I agree I almost never want the full power of Mathematica/sage initially but quickly become annoyed with calc apps. The 89 and hp prime//50 have just enough to solve anything where I wouldn’t rather just use a full programming language.
HiPER Calc Pro looks and works like a "physical" calculator; I've used it for years to great effect. I also have Wabbitemu but hardly ever use it - the former works fine for nearly everything.
Can you tell me which emulator you're using? I loved using the open source Wabbitemu on previous Android phones, but it seems to have been removed from the app store, so I can't install it on newer devices :-/
> And almost all numbers cannot be expressed in IEEE floating points.
It is a bit stronger than that. Almost all numbers cannot be practically expressed, and it may even be that the probability of a random number being theoretically describable is essentially 0%, depending on what you count as a number.
> Some problems can be avoided if you use bignums.
Or that. My momentary existential angst has been assuaged. Thanks bignums.
Wait. Could we in principle find more ways to express some of those uncomputable numbers, or have we conclusively proven we just can't reach them - can't identify any of them in any way we could express?
EDIT: let me guess - there is a proof, and it's probably a flavor of the diagonal argument, right?
> Almost all numbers cannot be practically expressed
That's certainly true, but all numbers that can be entered on a calculator can be expressed (for example, by the button sequence entered in the calculator). The calculator app can't help with the numbers that can't be practically expressed, it just needs to accurately approximate the ones that can.
This behaviour is what you get in say a cheap 1980s digital calculator, but it's not what we actually want. We want correct answers and to do that you need to be smarter. Ideally impossibly smart, but if the calculator is smarter than the person operating it that's a good start.
You're correct that the use of the calculator means we're talking about computable numbers, so that's nice - almost all Reals are non-computable but we ruled those out because we're using a calculator. However just because our results are Computable doesn't let us off the hook. There's a difference between knowing the answer is exactly 40 and knowing only that you've computed a few hundred decimal places after 40 and so far they're all zero, maybe the next one won't be.
Does it matter that some numbers are inexpressible (i.e., cannot be computed)?
I don't think it matters on a practical level--it's not like the cure for cancer is embedded in an inexpressible number (because the cure to cancer has to be a computable number, otherwise, we couldn't actually cure cancer).
But does it matter from a theoretical/math perspective? Are there some theorems or proofs that we cannot access because of inexpressible numbers?
[Forgive my ignorance--I'm just a dumb programmer.]
Well some classical techniques in standard undergraduate real analysis could lead to numbers outside the set of computable numbers, so if you don't allow non-computable numbers you will need to be more careful in the theorems you derive in real analysis. I do not believe that is important however; it's much simpler to just work with the set of real numbers rather than the set of computable numbers.
We know of at least one uncomputable number - Chaitin's constant, the probability that any given Turing machine halts.
Personally, I do wonder sometimes if real-world physical processes can involve uncomputable numbers. Can an object be placed X units away from some point, where X is an uncomputable number? The implications would be really interesting, no matter whether the answer is yes or no.
> Almost all numbers cannot be practically expressed and it may even be that the probability of a random number being theoretically indescribable is about 100%. Depending on what a number is.
A common rebuke is that the construction of the 'real numbers' is so overwrought that most of them have no real claim to 'existing' at all.
That's pretty cool, but the downsides of switching to RRA are not only about user experience. When the result is 0.0000000..., the calculator cannot decide whether it's fine to compute the inverse of that number.
For instance, 1/(atan(1/5)-atan(1/239)-pi/4) outputs "Can't calculate".
Well alright, this is a division by zero. But then you can try 1/(atan(1/5)-atan(1/239)-pi/4+10^(-100000)), and the output is still "Can't calculate" even though it should really be 10^100000.
You missed a 4. You are trying to say 1/(4atan(1/5)-atan(1/239)-pi/4) is a division by zero.
On the other hand 1/(atan(1/5)-atan(1/239)-pi/4) is just -1.68866...
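For anyone checking at home: Machin's formula is pi/4 = 4·atan(1/5) - atan(1/239), so the 4 matters. In plain double precision:

```python
import math

# with the 4: Machin's formula, zero up to double-precision rounding
print(4 * math.atan(1/5) - math.atan(1/239) - math.pi / 4)   # on the order of 1e-16

# without the 4: nowhere near zero
print(1 / (math.atan(1/5) - math.atan(1/239) - math.pi / 4))  # about -1.68866
```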
As a fun fact, I have a not-that-old math textbook (from a famous number theorist) that says that it is most likely that algorithms for adding/multiplying continued fractions do not exist. Then in 1972 Bill Gosper came along and proved that (in his own words) "Continued fractions are not only perfectly amenable to arithmetic, they are amenable to perfect arithmetic.", see https://perl.plover.com/yak/cftalk/INFO/gosper.txt.
I have been working on a Python library called reals (https://github.com/rubenvannieuwpoort/reals). The idea is that you should be able to use it as a drop-in replacement for the Decimal or Fraction type, and it should "just work" (it's very much a work-in-progress, though). It works by using the techniques described by Bill Gosper to manipulate continued fractions. I ran into the problems described on this page, and a lot more. Fun times.
No, all finite continued fractions express a rational number (for... obvious reasons), which is honestly kind of a disappointment, since arbitrary sequences of integers can, as a matter of principle, represent arbitrary computable numbers if you want them to. They're powerful than finite positional representations, but fundamentally equivalent to simple fractions.
They are occasionally convenient for certain problem structures but, as I'm sure you've already discovered, somewhat less convenient for a wide range of common problems.
Any real number x has an infinite continued fraction representation. By efficient I mean that the information of the continued fraction coefficients is an efficient way to compute rational upper and lower bounds that approximate x well (they are the best rational approximations to x).
> They are occasionally convenient for certain problem structures but, as I'm sure you've already discovered, somewhat less convenient for a wide range of common problems.
I'm curious what you mean exactly. I've found them to be very convenient for evaluating arithmetic expressions (involving both rational and irrational numbers) to fairly high accuracy. They are not the most efficient solution for this, but their simplicity and not having to do error analysis is far better than any other purely numerical system.
> fundamentally equivalent to simple fractions.
This feels like it is a bit too reductionist. I can come up with a lot of examples, but it's quite hard to find the best rational approximations of a number with just fractions, while it's trivial with continued fractions. Likewise, a number like the golden ratio, e, or any algebraic number has a simple description in terms of continued fractions, while this is certainly not the case for normal fractions.
That continued fractions can be easily converted to normal fractions and vice versa, is a strength of continued fractions, not a weakness.
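The golden ratio is the canonical example of that simplicity: its continued fraction is just [1; 1, 1, 1, ...], and truncating it gives ratios of consecutive Fibonacci numbers. A tiny sketch:

```python
from fractions import Fraction

def phi_convergent(n):
    """n-fold truncation of the continued fraction [1; 1, 1, ...] for the golden ratio."""
    x = Fraction(1)
    for _ in range(n):
        x = 1 + 1 / x
    return x

print(phi_convergent(10))          # 144/89, a ratio of consecutive Fibonacci numbers
print(float(phi_convergent(30)))   # ≈ 1.618033988749...
```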
That's the issue, no? If you go infinite, you can express any real number. In practice you can represent exactly those reals whose coefficient sequence is a computable function.
Naturally the person pursuing a PhD in number theory (whom I recruited to our team for specifically this reason) was unable to solve the problem and we finished in third place.
(It's not a good article when it comes to the attack details, unfortunately.)
The older and longer paper of Defining Real Numbers as Oracles contains some exploration of these ideas in terms of continued fractions. In section 6, I explore the use of mediants to compute continued fractions, as inspired by the old paper Continued Fractions without Tears ( https://www.jstor.org/stable/2689627 ). I also explore a bit of Bill Gosper's arithmetic in Section 7.9.2. In there, I square the square root of 2 and the procedure, as far as I can tell, never settles down to give a result as you seem to indicate in another comment.
For fun, I am hoping to implement a version of some of these ideas in Julia at some point. I am glad to see a version in Python and I will no doubt draw inspiration from it and look forward to using it as a check on my work.
A good way to think about the framework, is that for any expression you can compute a rational lower and upper bound for the "true" real solution. With enough computation you can get them arbitrarily close, but when an intermediate result is not rational, you will never be able to compute the true solution (even if it happens to be rational; a good example is that for sqrt(2) * sqrt(2) you will only be able to get a solution of the form 2 ± ϵ for some arbitrarily small ϵ).
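A small illustration of that framework, using exact rationals and bisection (not the reals library's actual mechanism, just the bounds idea): the interval around sqrt(2)·sqrt(2) shrinks as far as you like, but 2 always stays strictly inside it.

```python
from fractions import Fraction

# Rational lower/upper bounds for sqrt(2) via bisection.
lo, hi = Fraction(1), Fraction(2)
for _ in range(50):
    mid = (lo + hi) / 2
    if mid * mid < 2:
        lo = mid
    else:
        hi = mid

# Bounds for sqrt(2) * sqrt(2): an interval of the form 2 ± eps.
prod_lo, prod_hi = lo * lo, hi * hi
print(float(prod_hi - prod_lo))   # tiny, but never exactly zero
print(prod_lo < 2 < prod_hi)      # True: the bounds never collapse onto 2
```

No rational endpoint can ever square to exactly 2, so no amount of computation turns the interval into the single point 2.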
The sales lady gave us a hard sell on their "complete package" which had basic C programming but also included a bunch of unnecessary topics like Microsoft Excel, etc. When I tried to ask if I could skip all that and just skip to more advanced programming topics, she was adamant that this wasn't an option; she downplayed my achievements trying to say I basically knew nothing and needed to start from the beginning.
Most of all, I recall her saying something like "So what, you made a calculator? That's so simple, anybody could make that!"
However in the end I was naive, she was good at sales, and I was desperate for knowledge, so we signed up. However sure enough the curriculum was mostly focused on learning basic Microsoft Office products, and the programming sections barely scraped the surface of computer science; in retrospect, I doubt there was anybody there qualified to teach it at all. The only real lesson I learned was not to trust salespeople.
Thank god it's a lot easier for kids to just teach themselves programming these days online.
Fortunately I think these days there are a lot more options for kids to learn programming, but back then the options were pretty limited.
This literally brings rage to the fore. Downplaying a kid's accomplishments is the worst thing an educator could do, and marks her as evil.
I've often looked for examples of time travel, hints it is happening. I've looked at pictures of movie stars, to see if anyone today has traveled back in time to try to woo them. I've looked at markets, to see if someone is manipulating them in weird, unconventional ways.
I wonder how many cases of "random person punched another person in the head" and then "couldn't be found" is someone traveling back in time to slap this lady in the head.
So yeah, a kid well-versed in Office. My birthday invites were bad-ass, though. Remember I had one row in Excel per invited person with data, and in the Word document placeholders, and when printing it would make a unique page per row in Excel, so everyone got customized invites with their names. Probably spent longer setting it up than it would've taken to edit their names + print 10 times separately, but felt cool..
Luckily a teacher understood what I really wanted, and sent me home with a floppy disk with some template web-page with some small code I could edit in Notepad and see come to life.
If anything that was a great insight about one of my early C++ heroes, and what they did in their professional life outside of the things they are known for. But most importantly it was a reminder how deep seemingly simple things can be.
I don't think anyone who worked on IEEE 754 (and certainly nobody who currently works on it) contemplated calculators as an application, because a calculator is solving a fundamentally different problem. In a calculator, you can spend 10-100 ms doing one operation and people won't mind. In the applications for which IEEE 754 is made, you are expecting to do billions or trillions of operations per second.
Deleted Comment
What? Pretty sure there's more precision in [0-1] than there is in really big numbers.
Maybe the hardware focus can be blamed for the large exponents and small mantissas.
The only reasonable non-IEEE things that come to mind for me are:
- bfloat16 which just works with the most significant half of a float32.
- log8 which is almost all exponent.
I guess in both cases they are about getting more out of available memory bandwidth, and the main operation is f32 + x * y -> f32 (i.e., multiply and accumulate into an f32 result).
Maybe they will be (or already are) incorporated into IEEE standards though
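Since bfloat16 is literally the top half of a float32, it can be sketched in a few lines (assuming round-toward-zero truncation; real hardware typically rounds to nearest):

```python
import struct

def to_bfloat16(x):
    """Truncate a float32 to its most significant 16 bits
    (sign, 8-bit exponent, 7-bit mantissa), then widen back to float."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return struct.unpack(">f", struct.pack(">I", bits & 0xFFFF0000))[0]

print(to_bfloat16(3.14159))  # 3.140625: only ~2-3 decimal digits survive
print(to_bfloat16(1.0))      # 1.0: powers of two are exact
```

The 8-bit exponent means the dynamic range matches float32 exactly; only precision is sacrificed, which is exactly the trade-off wanted for accumulating into f32.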
This implies a strange way of defining what "beautiful" means in this context.
In real life, no instrument is going to give you a measurement with the 52 bits of precision a double can offer, and you are probably never going to get quantities in the 10^1000 range. No actuator is precise enough either. Even single precision is usually above what physical devices can work with. When drawing a pixel on screen, you don't need to know its position down to the subatomic level.
For these real life situations, improving on the usual IEEE 754 arithmetic would probably be better served with interval arithmetic. It would fail at maths, but in exchange you get support for measurement errors.
Of course, in a calculator, precision is important because you don't know if the user is working with real life quantities or is doing abstract maths.
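A minimal interval-arithmetic sketch (hypothetical, not any particular library) of how measurement errors would propagate through a calculation:

```python
class Interval:
    """Closed interval [lo, hi]; operations produce an interval
    guaranteed to contain every possible exact result."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# A length measured as 2.0 ± 0.1 times a width of 3.0 ± 0.1:
area = Interval(1.9, 2.1) * Interval(2.9, 3.1)
print(area)  # ≈ [5.51, 6.51]: the measurement error is carried through
```

The width of the result interval is the honest error bar; IEEE doubles would happily print 6.0 to 15 decimal places as if all of them meant something.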
Partially. It can be fine for pretty much any real-life use case. But many naive implementations of formulae involve some gnarly intermediates despite having fairly mundane inputs and outputs.
The issue isn't so much that a single calculation is slightly off, it's that many calculations together will be off by a lot at the end.
Is this stupid or..?
I can't see what would be worse. The entire raison d'etre for computers is to give accurate results. Introducing a math system which is inherently inaccurate to computers cuts against the whole reason they exist! Literally any other math solution seems like it would be better, so long as it produces accurate results.
You’re going to have a hard time doing better than floats with those constraints.
That's doing a lot of work. IEEE 754 does very well in terms of error vs representation size.
What system has accurate results? I don't know any number system in use that 1) represents numbers with a fixed size, 2) can represent 1/n accurately for reasonable integers, and 3) can do exponents accurately.
You can only have a result that's exact enough in your desired precision
You mean what power of ten to divide by?
I can see why you wouldn't necessarily just want to use it, but I thought the RIM pager had a JVM with floating point?
I mostly just used mine for email.
Deleted Comment
I am using, on Android, an emulator for the TI-89 calculator.
Because no Android app has half the features or works as well.
But for some reason the authors of calculator apps never optimize them for the number of keypresses, unlike Casio/TI/HP. It's a lost art. Even a simple operator repetition is a completely alien concept for new apps. Even the devs of the apps that are supposed to be snappy, like SpeedCrunch, seem to completely misunderstand the niche of a calculator; are they not using it themselves? A calculator is neither a CAS nor a REPL.
For Android in particular, I've only found two non-emulated calculators worth using for that, HiPER Calc and 10BA by Segitiga.Pro. And I'm not sure I can trust the correctness.
Plus of course not having to do even more arithmetic when one site gives me kilograms and another gives me ounces.
My only gripe with it is that it doesn't solve compounding return equations, but for that one can use an emulated HP-12c.
Performing arithmetic on arbitrarily complex mathematical functions is an interesting area of research but not useful to 99% of calculator users. People who want that functionality with use Wolfram Alpha/Mathematica, Matlab, some software library, or similar.
Most people using calculators are probably using them for budgeting, tax returns, DIY projects ("how much paint do I need?", etc), homework, calorie tracking, etc.
If I was building a calculator app -- especially if I had the resources of Google -- I would start with trying to get inside the mind of the average calculator user and figuring out their actual problems. E.g., perhaps most people just use standard 'napkin math', but struggle a bit with multi-step calculations.
> But for some reason the authors of calculator apps never optimize them for the number of keypresses, unlike Casio/TI/HP. It's a lost art. Even a simple operator repetition is a completely alien concept for new apps.
Yes, there's probably a lot of low-hanging fruit here.
The Android calculator story sounded like many products that came out of Google -- brilliant technical work, but some sort of weird disconnect with the needs of actual users.
(It's not like the researchers ignored users -- they did discuss UI needs in the paper. But everything was distant and theoretical -- at no point did I see any mention of the actual workflow of calculator users, the problems they solve, or the particular UI snags they struggle with.)
[1] - https://play.google.com/store/apps/details?id=com.algeo.alge...
- It generates a list of all files in the current directory, one per line
- Shuffles the list
- Takes the top entry
- Gives it to mplayer as an argument/parameter
Repeat the last command to play another random song. For infinite play:
(Where !! substitutes the last command, so run this after the find...mplayer line.)

You can also stick these lines in a shell script, and I seem to remember you can have scripts as icons on your homescreen, but I'm not super deep into Termux; it just seemed like a trivial problem to me, as in, small enough that piping like 3 commands does what you want for any size library with no specialised software needed.
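For reference, the steps above can be sketched as a pipeline like this (assuming GNU coreutils' shuf, as in Termux; mplayer left commented out):

```shell
# List files, shuffle the list, take the top entry:
song=$(find . -type f | shuf | head -n 1)
echo "playing: $song"
# mplayer "$song"                   # hand the pick to mplayer

# Infinite play: wrap the same pipeline in a loop.
# while true; do mplayer "$(find . -type f | shuf -n 1)"; done
```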
Marvis on iOS is pretty good at this. I use it to shuffle music with some rules ("low skip %, not added recently, not listened to recently")[0] and it always does a good job.
[0] Because "create playlist" is still broken in iOS Shortcuts, incredibly.
I have many thousands of mp3s on my phone in nested folders. PowerAmp has a "shuffle all" mode that handles them just fine, as well as other shuffle modes. I've never noticed it repeating a track before I do something to interrupt the shuffle.
Earlier versions (>~ 5 years ago) seemed to have trouble indexing over a few thousand tracks across the phone as a whole, but AFAIK that's been fixed for awhile now.
I tried to make a joystick controller for a particular use case on one platform (Linux) and I gave up.
VLC solves a hard problem. Supporting lots of different libs, versions, platforms, hardware and on top of that licensing issues.
Edit: I checked I can also shuffle a folder without adding it to the library.
Deleted Comment
I've been working on it for what will be a decade later this year. It tries to take all the features you had on these physical calculators, but present them in a modern way. It works on macOS, iOS, and iPadOS.
With regards to the article, I wasn't quite as sophisticated as that. I do track rationals, exponents, square roots, and multiples of pi; then fall back to decimal when needed. This part is open source, though!
Marketing page - https://jacobdoescode.com/technicalc
AppStore Link - https://apps.apple.com/gb/app/technicalc-calculator/id150496...
Open source components - https://github.com/jacobp100/technicalc-core
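The "track exact forms, fall back when needed" approach can be sketched in a few lines. This is a toy illustration of the general idea, not TechniCalc's actual code: keep a value as rational × symbol, where the symbol is 1, pi, or sqrt(n), and drop to floating point when the result no longer fits.

```python
from fractions import Fraction
import math

class Exact:
    """Value = rational * symbol, where symbol is ("one", 1), ("pi", 1), or ("sqrt", n)."""
    def __init__(self, rational, symbol=("one", 1)):
        self.rational = Fraction(rational)
        self.symbol = symbol

    def __mul__(self, other):
        if other.symbol == ("one", 1):
            return Exact(self.rational * other.rational, self.symbol)
        if self.symbol == ("one", 1):
            return Exact(self.rational * other.rational, other.symbol)
        if self.symbol == other.symbol and self.symbol[0] == "sqrt":
            # sqrt(n) * sqrt(n) collapses exactly to n
            return Exact(self.rational * other.rational * self.symbol[1])
        return float(self) * float(other)  # fall back to approximate

    def __float__(self):
        kind, n = self.symbol
        base = {"one": 1.0, "pi": math.pi, "sqrt": math.sqrt(n)}[kind]
        return float(self.rational) * base

r2 = Exact(1, ("sqrt", 2))
print(float(r2 * r2))  # 2.0 exactly: no 1.999... artifacts
```

The payoff is that common classroom results (sqrt(2)·sqrt(2), fractions of pi) stay exact, while everything else degrades gracefully to decimal.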
Where I am standing, that never happened, but that would require that a simply staggering number of people be classified as unreasonable.
https://imgur.com/a/TH14QZn
Built-in Android calculator does.
They are incomparable. TI-89 has tons of features, but can't take a square root to high accuracy.
But I agree: I almost never want the full power of Mathematica/Sage initially, but quickly become annoyed with calc apps. The 89 and HP Prime/50 have just enough to solve anything where I wouldn't rather just use a full programming language.
Thanks for the heads up, I will be testing it for a few months, to see if it can replace the TI-89 emulator as my main calculator.
Edit: that calculator gives a result of 0 on this test
https://en.wikipedia.org/wiki/Derive_(computer_algebra_syste...
Deleted Comment
Edit: and Maxima as well on the mac (to back up another user's comment)
It is a bit stronger than that. Almost all numbers cannot be practically expressed, and it may even be that the probability of a random number being theoretically indescribable is essentially 100%, depending on what you count as a number.
> Some problems can be avoided if you use bignums.
Or that. My momentary existential angst has been assuaged. Thanks bignums.
The best (and most educational) expression of that angst that I know: https://mathwithbaddrawings.com/2016/12/28/why-the-number-li....
EDIT: let me guess - there is a proof, and it's probably a flavor of the diagonal argument, right?
That's certainly true, but all numbers that can be entered on a calculator can be expressed (for example, by the button sequence entered in the calculator). The calculator app can't help with the numbers that can't be practically expressed, it just needs to accurately approximate the ones that can.
You're correct that the use of the calculator means we're talking about computable numbers, so that's nice - almost all Reals are non-computable but we ruled those out because we're using a calculator. However just because our results are Computable doesn't let us off the hook. There's a difference between knowing the answer is exactly 40 and knowing only that you've computed a few hundred decimal places after 40 and so far they're all zero, maybe the next one won't be.
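The "a few hundred zeros is not a proof" point can be made concrete with exact decimal arithmetic: square a 50-digit approximation of sqrt(2) and you get something provably close to, but provably not, 2.

```python
from decimal import Decimal, getcontext

getcontext().prec = 50
x = Decimal(2).sqrt()      # 50 significant digits of sqrt(2): a rational number
getcontext().prec = 120    # enough precision that squaring x is exact
y = x * x

print(y == Decimal(2))     # False: x is rational, so x*x cannot be exactly 2
print(y)                   # 1.999...9 or 2.000...0<something>, never a bare 2
```

No matter how many digits you carry, the approximation is rational, and no rational squares to 2; only symbolic reasoning, not digit inspection, gets you to "exactly 2".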
I don't think it matters on a practical level--it's not like the cure for cancer is embedded in an inexpressible number (because the cure to cancer has to be a computable number, otherwise, we couldn't actually cure cancer).
But does it matter from a theoretical/math perspective? Are there some theorems or proofs that we cannot access because of inexpressible numbers?
[Forgive my ignorance--I'm just a dumb programmer.]
Personally, I do wonder sometimes if real-world physical processes can involve uncomputable numbers. Can an object be placed X units away from some point, where X is an uncomputable number? The implications would be really interesting, no matter whether the answer is yes or no.
Non-discrete real-number-based Fractals are a beautiful visual version of this.
A common rebuke is that the construction of the 'real numbers' is so overwrought that most of them have no real claim to 'existing' at all.
For instance, 1/(atan(1/5)-atan(1/239)-pi/4) outputs "Can't calculate".