AWS with Nitro v3+ iirc supports TPM, meaning I can attest my VM state via an Amazon CA. I know ARM has been working a lot with Rust, and it shows - binfmt with qemu-user means I often forget which architecture I'm building/running/testing on, as the binaries seem to work the same everywhere.
I don't think so. A Jony Ive will not be in a position to solve the actual problem - what use is a non-universal payment mechanism to consumers and to retailers?
I read the linked page and don't see answers to the main adoption problem: how is the purchaser supposed to pay?
1. The purchaser has to download the app? Okay, but the purchaser already has a few equivalents on their phone (Pix, etc.) - added friction!
2. How does the app get money to make a payment? The purchaser has to fund a new account? Okay, but that is more friction!
3. How does the merchant accept the payment? Do they need a new payment terminal? Must their payment terminal be updated with new software? Even more friction!
I've worked in the EMV space, even quite recently, and merchants do not want to update and will only do so when forced to. Any new payment system (QR codes, etc) needs around 5 years (maybe more) before it is universally accepted.
The best way, where I am, to roll out a new payment terminal is to pitch it to the banks, who then offer it to the merchants who have accounts with them.
Adding new functionality to EMV terminals is a lot easier these days, since most of the new terminals are Android, and the vendors have app stores for third parties to write software for these terminals (Pax has Maxstore, etc).
Now, maybe I missed it, but I did not see this application on Maxstore, or some of the other stores. I could have missed it, because these stores have literally thousands of payment applications.
The long and short of it is, you came up with a non-universal payment method, and predictably it did not take off.
If you want to do account-to-account payments you can show the customer the account/routing number, amount & invoice ID - but obviously that's high friction: the customer needs to log in to their account and send a payment with lots of manual data entry.
Making yet another app, adding a financial intermediary, requiring you to link your bank account - these aren't solving the friction points.
We already have bank apps; when I scan a QR code in an industry-wide format, my phone should ask me to choose or confirm which bank app to open and pre-fill all the payment information.
So from my perspective, the problem is that FedNow in the US and Open Banking in the UK could have simply dictated "banks must support EPC QR or EMV QR code scanning and deep links", and QR code payments would have happened very quickly - even with NFC/RFID you can do passive scanning to achieve the same thing.
* Choose account
* Confirm details
* Press send
That's about as easy as you can get for push payments, with a real industry-wide standard for communicating payment intents via NFC/QR. But both FedNow and UK Open Banking are structured in a way that imposes friction and onerous regulation through their clunky APIs - meaning you can't actually solve that problem on your own.
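To make that concrete, the EPC QR (SCT) payload is just a short newline-separated text block that a bank app can parse and pre-fill a transfer from. A rough Python sketch from memory - field order and limits should be checked against the actual EPC069-12 spec, and the name, IBAN and reference below are made up:

```python
# Rough sketch of an EPC QR ("SCT") payload. Values are illustrative only.
def epc_qr_payload(name: str, iban: str, amount_eur: str, reference: str, bic: str = "") -> str:
    lines = [
        "BCD",               # service tag
        "002",               # version (BIC optional from v002)
        "1",                 # character set: UTF-8
        "SCT",               # identification: SEPA Credit Transfer
        bic,                 # BIC (may be empty)
        name,                # beneficiary name
        iban,                # beneficiary IBAN
        f"EUR{amount_eur}",  # amount
        "",                  # purpose code (optional)
        "",                  # structured remittance (optional)
        reference,           # unstructured remittance / invoice ID
    ]
    return "\n".join(lines)

print(epc_qr_payload("Example Shop", "DE89370400440532013000", "12.50", "Invoice 2024-0042"))
# A compliant bank app scanning this QR could pre-fill everything;
# the customer just picks the account, confirms, and presses send.
```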
* The generator isn't selected deterministically
* The BLAKE3(seed) constant in the OpenFrogget code doesn't match what I get with the Python & Javascript implementations of BLAKE3 (see the sketch after this list); the index & seed aren't specified in the paper
* The paper doesn't provide a reference for why `a=-7` was chosen (presumably because of the GLV endomorphism)
* The various parameters differ between the reference implementation, the paper, and the spec...
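For what it's worth, the check itself is trivial with the `blake3` Python package - the problem is that the seed and index aren't published, so the values below are placeholders rather than OpenFrogget's actual inputs:

```python
# Sketch of the reproducibility check. The seed string and expected digest
# are placeholders - the paper doesn't specify the actual index/seed,
# which is exactly the problem.
import blake3

seed = b"OpenFrogget-curve-seed"   # hypothetical seed
claimed = "0" * 64                 # placeholder for the constant in their code

digest = blake3.blake3(seed).hexdigest()
print("BLAKE3(seed) =", digest)
print("matches claimed constant:", digest == claimed)
```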
There are enough holes in this that I wouldn't touch it yet; even a very quick glance at the spec & the code leaves me wondering why their claims of reproducibility & determinism re: the constants don't hold up, and why the documentation & code don't match what I can reproduce locally.
So uhh yea... No
I'm surprised they didn't include the constant in the paper, along with at least a short justification for this approach, given they state "This ensures reproducibility and verifiable integrity" in section 3.2 - several other curves instead take the approach of 'smallest valid value that meets all constraints'.
Really they should answer the question of "Why can't `b` be zero... or 1" if they're going for efficiency, given they're already using GLV endomorphisms.
Likewise with the generator, I see no code or mention in the paper about how they selected it.
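For comparison, a deterministic "hash a public seed, take the first valid x-coordinate" selection is easy to specify and reproduce. A sketch using secp256k1's parameters purely as a stand-in - neither the seed nor the curve here are OpenFrogget's:

```python
# Sketch of deterministic generator selection: hash a public seed,
# interpret it as a candidate x-coordinate, and take the first x that
# lies on the curve. secp256k1 is used only as a stand-in.
import hashlib

p = 2**256 - 2**32 - 977      # secp256k1 field prime (p % 4 == 3)
a, b = 0, 7

def first_valid_point(seed: bytes):
    x = int.from_bytes(hashlib.sha256(seed).digest(), "big") % p
    while True:
        rhs = (x**3 + a * x + b) % p
        y = pow(rhs, (p + 1) // 4, p)   # square root works since p % 4 == 3
        if y * y % p == rhs:
            return x, min(y, p - y)     # pick the smaller root, deterministically
        x = (x + 1) % p

print(first_valid_point(b"nothing-up-my-sleeve seed"))
```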
The corpse of OpenVMS, on the other hand, is being reanimated and tinkered with, presumably paid for by whatever remaining support contracts exist, and also presumably to keep the core engineers occupied with inevitably fruitless busywork while occasionally performing the contractually required on-call technomancy on the few remaining Alpha systems.
VMS is dead... and buried, deep.
It's a shame it can't be open-sourced (just like NetWare won't be), and it probably has less chance of being used for new projects than RISC OS or AmigaOS.
For example, yesterday I wanted to make a 'simple' time format, tracking Earth's orbits of the Sun, the Moon's orbits of Earth, and rotations of Earth from a specific given point in time (the most recent great conjunction, in 2020) - without directly using any hard-coded constants other than the orbital mechanics and my atomic clock source. This would be in the format of `S4.7.... L52... R1293...` for sols, luns & rotations.
I keep having to remind it to go back to first principles: we want actual rotations, real day lengths, etc., rather than hard-coded constants that approximate the mean over the year.
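Just to show the shape of the format, here's a minimal sketch that does use mean-period constants - exactly the hard-coded approximations I want to eventually replace with values derived from the clock source and the orbital mechanics themselves:

```python
# Rough first cut of the S/L/R format using *mean* period constants.
# Epoch is the 2020-12-21 great conjunction (taken here as roughly 18:20 UTC).
from datetime import datetime, timezone

EPOCH = datetime(2020, 12, 21, 18, 20, tzinfo=timezone.utc)

TROPICAL_YEAR = 365.24219 * 86400   # seconds, mean value
SYNODIC_MONTH = 29.530589 * 86400   # seconds, mean value
SIDEREAL_DAY  = 86164.0905          # seconds, mean value

def slr(now=None):
    t = ((now or datetime.now(timezone.utc)) - EPOCH).total_seconds()
    return (f"S{t / TROPICAL_YEAR:.4f} "
            f"L{t / SYNODIC_MONTH:.2f} "
            f"R{t / SIDEREAL_DAY:.1f}")

print(slr())   # sols, luns & rotations since the epoch; output depends on when you run it
```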
In that case, I'd probably choose first aid & the basics of emergency medicine, via a couple of half-day courses or a full-day course per year.
Presumably your OS could trap attempts to read the CSR and allow it; if not, it's a fatal error and your program shits the bed, and you have to rely on some OS-specific way of getting that info at runtime instead.