Instead, the entire paradigm of centralized generation may need to be called into question, and we should be focusing on a hybrid of a centralized baseline plus local generation and storage. Places like China do fine promoting residential solar: nearly half of their solar was on residential rooftops (2023) [1].
[1] https://globalenergymonitor.org/report/china-continues-to-le...
| Policymakers are now attempting to come up with solutions. “You can make solar play nice with the grids,” ...
| Yet the best solution would be for energy firms to respond to the competition and sort themselves out.
The article is talking about:
* how solar is disrupting the traditional utility model
* how, in countries where the utilities provide a poor service, wealthy people are doing their own thing and producing their own power with PV
* how this leads to fewer customers for the utility, which means more expensive power for the people who cannot afford to generate their own
* how solutions like grid-tied home PV, instead of independent systems, provide a better outcome for everyone in the area.
I don't think it is too much of a stretch to say that the article is advocating for, as you say, "a hybrid centralized baseline + local generation and storage."
This is sunlight falling on a roof. If you convert it into electricity but then don't use that electricity, is it really a waste? It's like saying that the overflow from my water tank that collects rain water off the roof is 'wasting' water.
It could be argued that it's a waste in the sense that the generated electricity could have gone to someone else if there were a grid connection. But if the grid operator isn't allowing excess power to be fed back into the grid (e.g. because there's no demand at that time, since it's sunny and everyone is running on solar), then the grid operator needs to solve that with some form of energy storage (e.g. batteries).
This is not so straightforward when drivers are paid by the kilometer. In such cases the government could mandate that waiting time be paid at no less than the driver's average hourly earnings from driving hours in the same shift (e.g. a driver who earned $30/hour while carrying passengers would get $30/hour while waiting), or something along those lines.
That's why IT sec all around banking is just the bare minimum required by regulations.
Those sec-specs are also usually at least a decade behind the state of the art, and they get updated only very rarely, because any change would mean "a lot of paper work" at the banks, so the banks are always against changes to those regulations. And when something finally does change, it takes the banks at least another half decade to adapt; they can get away with that because the windows to comply are usually set to be very long, because, you know, it's really a lot of paper work…
They have successfully shifted liability for the problem to banks and merchants.
Instead, the innovation has gone into things like Paywave, which reduces payment friction.
Add two factor authentication, if you want, but fix the underlying giant issue first.
For the card to sign the transaction, you need to add some kind of card interface to the user's device. Maybe this is what happens with chip cards when you use them at a shop with a card terminal.
The three-digit CVV code should be a one-time passcode (OTP). Banks have been using these for online logins since the 1990s.
Using 90s technology, the card issuer would issue one of these OTP fobs along with the card. It has the card number printed on it, a button, and an LCD screen where the OTP is displayed. The CVV is already sent through to the computer that authorises the transaction; only the software that checks the CVV would need to change.
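As a rough sketch of what the authorising side might look like, assuming an HOTP-style scheme (RFC 4226) truncated to three digits (the per-card secret, stored counter, and look-ahead window here are illustrative assumptions, not how any particular issuer does it):

    # Hypothetical dynamic-CVV check, HOTP-style (RFC 4226), truncated to three digits.
    # The per-card secret and stored counter are assumptions for illustration only.
    import hmac, hashlib, struct

    def dynamic_cvv(secret: bytes, counter: int) -> str:
        msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
        digest = hmac.new(secret, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                       # RFC 4226 dynamic truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return f"{code % 1000:03d}"                      # keep only three digits, CVV-sized

    def verify_cvv(secret: bytes, last_counter: int, presented: str, window: int = 10):
        # Accept a small look-ahead window in case the fob's counter has advanced
        # (button pressed without a purchase). Returns the new counter, or None.
        for c in range(last_counter + 1, last_counter + 1 + window):
            if hmac.compare_digest(dynamic_cvv(secret, c), presented):
                return c
        return None

Three digits obviously allows far fewer possible codes than a normal six-digit OTP, so the issuer would still rely on attempt limits, but static CVVs already depend on that.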
So the trade-off is that the user has to carry a separate, thicker card (to fit the battery) for online use.
I just googled: you can get batteries that are 0.4mm x 22mm x 29mm, while a credit card is 0.76mm thick. E-ink is old technology now, with the right performance characteristics. I suspect that, in volume, you could integrate the OTP device into the standard card form factor for less than a couple of dollars a card.
So with a bit of innovation, the payment-friction/fraud trade-off goes away.
This all strikes me as fairly obvious to someone designing these things; is there another trade-off going on here?
It worked fine as the system was read-heavy and write-light. SQLite serialises writes, so it does not perform well with multiple writers, particularly if the write transactions are long-running. Reads were blazingly fast as there were no round-trips across the network to a separate database tier. The plan for dealing with performance problems, if and when they arrived, was to shard the servers into groups of customers.
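For anyone who hasn't run SQLite in this role, a minimal sketch of the pattern using Python's sqlite3 module; the table, pragmas, and values are illustrative, not taken from the system described above:

    # Illustrative only: embedded SQLite tuned for a read-heavy, write-light workload.
    import sqlite3

    conn = sqlite3.connect("app.db", timeout=5.0)    # wait up to 5s if another writer holds the lock
    conn.execute("PRAGMA journal_mode=WAL")          # readers no longer block behind the single writer
    conn.execute("PRAGMA synchronous=NORMAL")        # common durability/throughput trade-off with WAL
    conn.execute("CREATE TABLE IF NOT EXISTS customers (id INTEGER PRIMARY KEY, name TEXT, region TEXT)")

    # Reads are just local file / page-cache access: no network round-trip to a database tier.
    rows = conn.execute("SELECT id, name FROM customers WHERE region = ?", ("nz",)).fetchall()

    # Writes are still serialised: one write transaction at a time for the whole database file,
    # so keep write transactions short or other writers will queue on the lock (or time out).
    with conn:  # commits on success, rolls back on exception
        conn.execute("INSERT INTO customers (name, region) VALUES (?, ?)", ("Acme Ltd", "nz"))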
I moved on and the next developer ripped it out and replaced it with Postgres because it was such an oddball system. I came back six months later to fix the mess, as the new developer had messed up transaction handling in the new database code.
Technically, using SQLite with replication tacked on works fine. Superficially it is all the same because it is SQL. However, the performance characteristics are very different from those of a conventional multi-version concurrency control (MVCC) database such as Postgres.
This is where the problem lies with this kind of database: developers see SQL and assume they can develop exactly the same way they would with any other SQL database. That said, I love approaches that get away from the database architectures of last century.
Data(set) Definition. But that name does not make any sense whatsoever by itself in this context, neither for the tool (it hardly "defines" anything), nor for UNIX in general (there are no "datasets" in UNIX).
Instead, it's specifically a reference to the DD statement in the JCL, the job control language, of many of IBM's mainframe operating systems of yore (let's not get into the specifics of which ones, because that's a whole other can of complexity).
And even then the relation between the DD statement and the dd command in UNIX is rather tenuous. To simplify a lot, DD in JCL does something akin to "opening a file", or rather "describing to the system a file that will later be opened". The UNIX tool dd, on the other hand, was designed to be useful for exchanging files/datasets with mainframes. Of course, that's not at all what it is used for today, and possibly that was true even back then.
This also explains dd's weird syntax, which consists of specifying "key=value" or "key=flag1,flag2,..." parameters. That is entirely alien to UNIX, but is how the DD and other JCL (again, of the right kind) statements work.
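For comparison, here is a dd invocation next to a JCL DD statement (the file names, dataset name, and options are just made-up illustrations):

    # UNIX: dd takes key=value operands instead of the usual -x / --long-option flags
    dd if=tape.bin of=out.txt bs=1024 conv=ascii

    //* JCL: a DD statement describing a dataset to the job step
    //INPUT    DD  DSN=MY.DATA.SET,DISP=SHR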