This is the kind of weird (ab)use of floating point instructions that I can imagine some DRM solution using as a means to obfuscate a VM of some kind.
The next step would be to use these properties to write a compiler to run normal source code as floating point integers, maybe with some kind of FFI to call regular OS APIs.
One day as we were getting coffee a coworker just casually drops that his buddy from school made some programs called learnfun and playfun for a YouTube video (which is my favorite Tom7 video). Tech is a surprisingly small world.
This is a proof by construction that the Intel MMU's fault handling mechanism is Turing complete. We have constructed an assembler that translates 'Move, Branch if Zero, Decrement' instructions to C source that sets up various processor control tables. After this code has executed, the CPU computes by attempting to fault without ever executing a single instruction. Optionally, the assembler can also generate X86 instructions that will display variables in the VGA frame buffer and will cause control to be transferred between the native (display) instructions and 'weird machine' trap instructions.
That whole channel (suckerpinch / Tom 7) is incredible. Extremely nerdy, thoughtful and funny content, and I just love his delivery. I cannot recommend it enough, especially to the HN crowd.
That implements conversion from an IEEE-754 double to a pair of doubles whose values are the integer values of the low and high 32 bits of the bitwise representation of the argument double, implemented only with double add/sub/mul.
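For reference, here is a minimal sketch of the behavior being reproduced, written with ordinary bit operations rather than the add/sub/mul-only arithmetic (the function name is mine; both halves are below 2^53, so they are exactly representable as doubles):

fn split_bits(x: f64) -> (f64, f64) {
    let bits = x.to_bits();               // raw IEEE-754 representation as u64
    let lo = (bits & 0xFFFF_FFFF) as f64; // low 32 bits, exact as a double
    let hi = (bits >> 32) as f64;         // high 32 bits, exact as a double
    (lo, hi)
}

fn main() {
    // 1.5 is 0x3FF8000000000000 in IEEE-754
    assert_eq!(split_bits(1.5), (0.0, 0x3FF8_0000u32 as f64));
}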
To be entirely precise, it is functionally complete in combination with access to the constant false (-0.0). If not given access to this constant it is not functionally complete, unlike e.g. NAND which can produce false from any value. I shall clarify that in the article.
The point of the article was more to illustrate that using nothing but signed zeros and floating point subtraction you can simulate arbitrary circuits, and 'functionally complete' was the most concise term for that I could think of, even if it is bending the rules a little bit when strictly looking at the truth table.
It's a matter of definitions, but it always bothered me that functional completeness is defined so that it only requires the ability to produce any non-nullary function, not any function including nullary ones. That is, the set { NAND } is considered functionally complete, even though it can only produce a function that maps any input x to TRUE, and can't produce the value TRUE itself. As soon as you care about constants, which I think you should, { NAND, FALSE } or whatever isn't any more magical than { AND, NOT } or { XOR, TRUE }.
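To make that concrete, here is a toy Rust check (the helper is mine): NAND maps any input to TRUE, but there is no zero-argument NAND expression whose value is TRUE.

fn nand(a: bool, b: bool) -> bool { !(a && b) }

fn main() {
    // NAND(x, NAND(x, x)) = NAND(x, NOT x) = TRUE for either input value,
    // but you still need some input x to feed it.
    for x in [false, true] {
        assert!(nand(x, nand(x, x)));
    }
}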
Hey, not related to the post, but since you're here: your domain name has an AAAA IPv6 record but the server doesn't respond to IPv6 requests. The problem is most likely that the web server is not bound to the IPv6 address of the system.
I don't really know what you mean by truth-preserving here, but maybe a hint is that it's not ONLY subtraction which is functionally complete, it's subtraction plus the constant 0. From subtraction and 0 he makes false (as -0.0), and then he has the functionally complete set found on Wikipedia [1]: { →, ⊥ }.

[1] https://en.wikipedia.org/wiki/Functional_completeness
Truth-preserving means that it maps T to T. In fact, the Wikipedia article you link to states Post's theorem about the five Post classes of Boolean functions, with all of their definitions: monotonic, affine (which has a funny definition in that article; I was taught the definition via ANF: "is just a XOR of some of the variables"), self-dual, truth-preserving, and false-preserving. They are all closed but functionally incomplete. In fact they are functionally pre-complete: adding any single function from outside a class to that class produces a functionally complete set, and there are no other pre-complete classes.
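Here is a small brute-force checker for the five classes, restricted to binary connectives (my own sketch, following the definitions in the linked article). Post's criterion then says a lone connective is functionally complete iff it escapes all five classes; NAND does, while the article's reversed implication is truth-preserving and so cannot be complete by itself.

type F2 = fn(bool, bool) -> bool;

fn truth_preserving(f: F2) -> bool { f(true, true) }
fn false_preserving(f: F2) -> bool { !f(false, false) }

fn monotone(f: F2) -> bool {
    // raising any input from false to true must never lower the output
    let le = |x: bool, y: bool| !x || y;
    let vals = [false, true];
    vals.iter().all(|&a| vals.iter().all(|&b|
        vals.iter().all(|&a2| vals.iter().all(|&b2|
            !(le(a, a2) && le(b, b2)) || le(f(a, b), f(a2, b2))))))
}

fn self_dual(f: F2) -> bool {
    let vals = [false, true];
    vals.iter().all(|&a| vals.iter().all(|&b| f(!a, !b) == !f(a, b)))
}

fn affine(f: F2) -> bool {
    // affine = c0 XOR (c1 AND a) XOR (c2 AND b) for some constants
    let vals = [false, true];
    vals.iter().any(|&c0| vals.iter().any(|&c1| vals.iter().any(|&c2|
        vals.iter().all(|&a| vals.iter().all(|&b|
            f(a, b) == c0 ^ (c1 && a) ^ (c2 && b))))))
}

fn main() {
    let nand: F2 = |a, b| !(a && b);
    let imply_rev: F2 = |a, b| a || !b; // the article's "x - y" truth table
    assert!(!truth_preserving(nand) && !false_preserving(nand)
        && !monotone(nand) && !self_dual(nand) && !affine(nand));
    assert!(truth_preserving(imply_rev)); // stuck inside a pre-complete class
}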
Subtraction is truth preserving on the sign bit. It's not truth-preserving in the actual subtractive bits.
(I disagree with their claim that the subtractive bit is functionally complete on its own - you're right, since it's truth-preserving, it clearly is not functionally complete)
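You can check the sign-bit behavior on real doubles. With the article's encoding (+0.0 as true, -0.0 as false; the decoding helper is mine), the four cases of x - y come out as "x OR NOT y", which indeed maps (true, true) to true:

fn main() {
    let tv = |x: f64| !x.is_sign_negative(); // +0.0 => true, -0.0 => false
    for x in [0.0_f64, -0.0] {
        for y in [0.0_f64, -0.0] {
            let r = x - y;
            // the sign of the difference behaves as "x OR NOT y"
            assert_eq!(tv(r), tv(x) || !tv(y));
        }
    }
    println!("x - y acts as \"x OR NOT y\" on signed zeros");
}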
Below the truth table for implication (with arguments reversed) they claim
> It turns out this truth table is functionally complete [1]
yet the linked Wikipedia article clearly states that
> every two-element set of connectives containing NOT and one of { AND, OR, IMPLY } is a minimal functionally complete subset of { NOT, AND, OR, IMPLY, IFF }

I.e. IMPLY by itself is not functionally complete.

[1] https://en.wikipedia.org/wiki/Functional_completeness
Why would truth preservation prevent it from being functionally complete? How can you even tell a truth table is truth preserving? A truth table is not a logical argument.
Truth-preserving in this context means that the function value is true whenever all function arguments are true. Since a composition of truth-preserving functions is again truth-preserving, if you only have truth-preserving functions available you can never output false when all inputs are true, hence you can not build all possible functions. An analogous argument applies to falsehood-preserving functions.
If functionally complete means that any logic circuit can be constructed, doesn't this imply that IEEE-754 floating point subtraction is effectively Turing complete? (Or not?)
No, functionally complete is missing the looping functionality needed to be Turing complete.
Turing complete is often misused to mean functionally complete, either because people mistake the two or because it makes for a more appealing blog post / article:

- mov is in fact not turing complete: it needs a jmp instruction (https://harrisonwl.github.io/assets/courses/malware/spring20...)

- homomorphic encryption systems are functionally complete but not Turing complete (since looping would leak the number of operations performed, breaking the encryption)

https://github.com/xoreaxeaxeax/movfuscator#notes
You also need an infinite memory to be called "Turing complete". It doesn't make sense to say that a bunch of operations are or are not Turing complete. It's a property of a whole machine, not just a set of operations. And no real machine can possibly be Turing complete, because they don't have infinite memory or time.
To quote someone from reddit (substitute NAND gates with subtraction)
> You can build a turing-complete machine out of NAND-gates, but to say that a NAND-gate is turing-complete is like saying that you can live in a brick. You can't, but you can build a house out of bricks and live in that.
I posted this on the thread in /r/programming a while ago, but I might as well post this here too. It's possible to implement the adder in "only" 11 subtractions:
type Bit = f64; // bits encoded as IEEE-754 zeros: +0.0 is true, -0.0 is false
const ZERO: Bit = -0.0; // the constant false

// Full adder: returns (sum, carry). On this encoding, each
// subtraction `x - y` computes "x OR NOT y".
fn adder(a: Bit, b: Bit, c: Bit) -> (Bit, Bit) {
    let r0 = c - b;
    let r1 = c - r0;     // b OR c
    let r2 = ZERO - r0;  // NOT r0
    let r3 = b - r1;
    let r4 = r2 - r3;    // b XOR c
    let r5 = a - r4;
    let r6 = r4 - a;
    let r7 = ZERO - r5;  // NOT r5
    let r8 = r7 - r1;
    let r9 = r7 - r6;    // sum = a XOR b XOR c
    let r10 = ZERO - r8; // carry = majority(a, b, c)
    (r9, r10)
}
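If you want to sanity-check it, here is a small harness (my addition, assuming the same +0.0/-0.0 encoding as in the article and the `adder` above) comparing it against a boolean full adder on all eight inputs:

fn main() {
    let tv = |x: f64| !x.is_sign_negative();        // decode: +0.0 => true, -0.0 => false
    let bit = |v: bool| if v { 0.0 } else { -0.0 }; // encode
    for a in [false, true] {
        for b in [false, true] {
            for c in [false, true] {
                let (sum, carry) = adder(bit(a), bit(b), bit(c));
                let total = a as u8 + b as u8 + c as u8;
                assert_eq!(tv(sum), (total & 1) == 1); // sum = a XOR b XOR c
                assert_eq!(tv(carry), total >= 2);     // carry = majority(a, b, c)
            }
        }
    }
    println!("matches a boolean full adder on all 8 inputs");
}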
> The next step would be to use these properties to write a compiler to run normal source code as floating point integers, maybe with some kind of FFI to call regular OS APIs.
- Using IEEE floating-point error for ML transfer functions http://tom7.org/grad/
- Using IEEE NaN and infinity to build logic gates and a whole CPU http://tom7.org/nand/
The mov trick is akin to OISC, the "one instruction set computer" (https://en.wikipedia.org/wiki/One-instruction_set_computer). Here instead it's building logic circuits.
https://www.teamten.com/lawrence/writings/coding-machines/
So basically any attempt to use JavaScript's Number as an int.