richardc323 commented on Programmers and software developers lost the plot on naming their tools   larr.net/p/namings.html... · Posted by u/todsacerdoti
anyfoo · 2 days ago
> I couldn't for the life of me tell you what dd stands for.

Data(set) Definition. But that name does not make any sense whatsoever by itself in this context, neither for the tool (it hardly "defines" anything), nor for UNIX in general (there are no "datasets" in UNIX).

Instead, it's specifically a reference to the DD statement in the JCL, the job control language, of many of IBM's mainframe operating systems of yore (let's not get into the specifics of which ones, because that's a whole other can of complexity).

And even then the relation between the DD statement and the dd command in UNIX is rather tenuous. To simplify a lot, DD in JCL does something akin to "opening a file", or rather "describing to the system a file that will later be opened". The UNIX tool dd, on the other hand, was designed to be useful for exchanging files/datasets with mainframes. Of course, that's not at all what it is used for today, and possibly that was true even back then.

This also explains dd's weird syntax, which consists of specifying "key=value" or "key=flag1,flag2,..." parameters. That is entirely alien to UNIX, but is how the DD and other JCL (again, of the right kind) statements work.
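Purely as an illustration (the operands below are my own choice, GNU dd shown, and the JCL line is a generic example rather than anything from the original comment), compare a typical dd invocation with the kind of DD statement it imitates:

    dd if=/dev/zero of=disk.img bs=1M count=100 conv=sync,noerror

    //OUTFILE  DD  DSN=MY.DATA.SET,DISP=(NEW,CATLG),UNIT=SYSDA

Same pattern in both: positional keywords with key=value operands and comma-separated sub-options, rather than the dash-prefixed flags used by essentially every other UNIX tool.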

richardc323 · 2 days ago
Ha, for the last 30 years I have been convinced it was Disk Direct.
richardc323 commented on Show HN: Unregistry – “docker push” directly to servers without a registry   github.com/psviderski/unr... · Posted by u/psviderski
richardc323 · 6 months ago
I naively sent the Docker developers a PR[1] to add this functionality into mainline Docker back in 2015. I was rapidly redirected into helping out in other areas - not having to use a registry undermined their business model too much I guess.

[1]: https://github.com/richardcrichardc/docker2docker

richardc323 commented on Cheap solar power is sending electrical grids into a death spiral   economist.com/finance-and... · Posted by u/blackhawkC17
r00fus · 10 months ago
The hubris in this article is unreal: it's positing that privately owned utilities are a good thing and that bypassing them is some kind of crime committed by ratepayers. I hope that business model dies in a fire.

Instead, the entire paradigm of centralized generation may need to be called into question; we should be focusing on a hybrid centralized baseline + local generation and storage. Places like China do fine promoting residential solar: nearly half of its solar was on residential rooftops as of 2023 [1].

[1] https://globalenergymonitor.org/report/china-continues-to-le...

richardc323 · 10 months ago
Wow. Did you read the next three paragraphs to the end of the article?

| Policymakers are now attempting to come up with solutions. “You can make solar play nice with the grids,” ...

| Yet the best solution would be for energy firms to respond to the competition and sort themselves out.

The article is talking about:

* how solar is disrupting the traditional utility model

* how, in countries where the utilities provide a poor service, wealthy people are doing their own thing, producing their own power with PV

* how this leads to fewer customers for the utility, which leads to more expensive power for people who cannot afford to generate their own

* how solutions like grid-tied home PV, instead of independent systems, provide a better outcome for everyone in the area.

I don't think it is too much of a stretch to say that the article is advocating for, as you say, "a hybrid centralized baseline + local generation and storage."

richardc323 commented on Cheap solar power is sending electrical grids into a death spiral   economist.com/finance-and... · Posted by u/blackhawkC17
PlunderBunny · 10 months ago
> "lots of self-generated power will ultimately be wasted."

This is sunlight falling on a roof. If you convert it into electricity but then don't use that electricity, is it really a waste? It's like saying that the overflow from my water tank that collects rain water off the roof is 'wasting' water.

It could be argued that it's a waste in the sense that the generated electricity could have gone to someone else if there was a grid, but if the grid operator isn't allowing excess to be put back into the grid (e.g. because there's no demand at that time because it's sunny and everyone is using solar), then the grid operator needs to solve that with some form of energy storage (e.g. batteries).

richardc323 · 10 months ago
You are reading that very narrowly. The paragraph is simply pointing out that solar power is cheaper when built out in a centralised way because of:

* economies of scale for construction and maintenance

* higher utilisation.

They don't spell it out exactly, but it is pretty clear from the context that "lots of self-generated power will ultimately be wasted" is alluding to a wider geographic area needing more panels to satisfy all demand when each house has an independent system rather than being grid-tied.
richardc323 commented on Video shows ‘ghost co-driver’ added to trucker’s ELD to skirt HOS rules   freightwaves.com/news/vid... · Posted by u/6177c40f
mistermegabyte · 3 years ago
They're not burning drive hours, they're burning on-duty hours which are also limited. You are limited to 14 hours of on-duty time. You are also limited to 11 hours of actual driving time out of those 14 hours. So, I guess if you are delayed more than 3 hours (minus breaks) you are kinda eating into drive hours because of the on-duty limit of 14.
richardc323 · 3 years ago
Sounds like, as well as mandating limited on-duty time, government needs to mandate that pay for waiting time is the same as pay for driving. This would reduce the incentive for drivers to drive more hours.

This is not so straightforward when drivers are paid by the kilometer. In such cases, government could mandate that the minimum waiting pay rate be the average equivalent hourly pay rate of the driving hours from the same shift, or suchlike.
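A concrete way to state that rule, with the function name and numbers made up purely for illustration:

    def minimum_waiting_rate(km_driven, rate_per_km, driving_hours):
        """Hourly floor for waiting time: the shift's average hourly earnings while driving."""
        return (km_driven * rate_per_km) / driving_hours

    # e.g. 800 km at $0.50/km over 10 driving hours -> waiting time paid at >= $40/hour
    print(minimum_waiting_rate(800, 0.50, 10))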

richardc323 commented on The optimal amount of fraud is non-zero   bam.kalzumeus.com/archive... · Posted by u/piinbinary
still_grokking · 3 years ago
Banks don't have much incentive to invest in IT security. They have insurance.

That's why IT security all around banking is just the bare minimum required by regulations.

Those security specs are also usually at least a decade behind the state of the art… And they get updated only very seldom, as that would cause "a lot of paperwork" at the banks, so the banks are always against any changes to those regulations; and when something finally does change, it takes the banks at least another half a decade to adapt. They can get away with that because the windows to comply are usually set to be very long, because, you know, it's really a lot of paperwork…

richardc323 · 3 years ago
I suspect it is the credit card companies rather than the banks that have the power to fix this, but yes, the incentives seem wrong.

They have successfully shifted liability for the problem to banks and merchants.

Instead, the innovation has gone into things like Paywave, which reduces payment friction.

richardc323 commented on The optimal amount of fraud is non-zero   bam.kalzumeus.com/archive... · Posted by u/piinbinary
jokethrowaway · 3 years ago
If each card were a public/private keypair, you could sign a message authorising a payment of X amount at the current time, in zero knowledge, without leaking your secret (the credit card number) in every transaction.

Add two factor authentication, if you want, but fix the underlying giant issue first.

richardc323 · 3 years ago
This would be more secure than what I proposed, but requires changes that are out of the control of the credit card companies.

For the card to sign the transaction, you need to add some kind of card interface to the user's device. Maybe this is what happens with chip cards when you use them at a shop with a card terminal.
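A rough sketch of the keypair idea, assuming an Ed25519 key held on the card and a copy of the public key held by the issuer; the message fields, names, and use of the Python `cryptography` package are my own illustration, not anything the card schemes actually specify:

    import json, time
    from cryptography.hazmat.primitives.asymmetric import ed25519

    card_key = ed25519.Ed25519PrivateKey.generate()   # would live inside the card's chip
    issuer_pubkey = card_key.public_key()             # issuer keeps this from issuance

    authorisation = json.dumps({
        "amount": "49.99", "currency": "USD",
        "merchant": "example-shop", "ts": int(time.time()),
    }).encode()
    signature = card_key.sign(authorisation)

    # The issuer approves only if the signature verifies; no reusable secret
    # (card number + CVV) ever has to be revealed to the merchant.
    issuer_pubkey.verify(signature, authorisation)    # raises InvalidSignature if tampered with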

richardc323 commented on The optimal amount of fraud is non-zero   bam.kalzumeus.com/archive... · Posted by u/piinbinary
richardc323 · 3 years ago
Sure, there is a trade off, but they have it wrong for online fraud from stolen credit cards.

The three-digit CVV code should be a one-time passcode (OTP). Banks have been using these since the 1990s for online logins.

Using 90s technology, the card issuer would issue one of these OTP fobs along with the card. It has the card number printed on it, a button, and an LCD screen where the OTP is displayed. The CVV is already sent through to the computer that authorises the transaction; the software that checks the CVV would need to be changed.

So we have a trade-off: the user has to carry a separate, thicker card (to fit the battery) for online use.

I just googled: you can get batteries that are 0.4mm x 22mm x 29mm, and a credit card is 0.76mm thick. E-ink is old technology now with the right performance characteristics. I suspect that in volume, using this technology, you could integrate the OTP device into the standard card form factor for less than a couple of dollars a card.
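As a sketch of what the fob and the authorisation computer could share, assuming a standard RFC 6238-style time-based OTP truncated to three digits (nothing here is what the card networks actually specify; it just shows the moving parts):

    import hmac, hashlib, struct, time

    def dynamic_cvv(card_secret, t=None, step=30, digits=3):
        """Derive a short one-time code from a per-card secret (TOTP-style)."""
        counter = int((time.time() if t is None else t) // step)
        mac = hmac.new(card_secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # The fob and the issuer's authorisation system compute this independently
    # from the shared per-card secret; merchants just forward whatever was typed.
    secret = b"per-card secret provisioned at issuance"
    print(dynamic_cvv(secret))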

So with a bit of innovation, the payment-friction vs. fraud trade-off goes away.

This all strikes me as fairly obvious to someone designing these things; is there another trade-off going on here?

richardc323 commented on SQLite may become foundational for digital progress   venturebeat.com/2022/05/2... · Posted by u/alexrustic
richardc323 · 4 years ago
For reasons I won't go into here, I built a system with a similar approach 10 years ago. The system was horizontally scalable. There was no database tier; instead, each server had a local replica of the database, which was used for reads. The servers discovered each other and nominated one server as the master, to which writes were sent. Replication was done by having the master send the DML queries to a writer process on each server. A new server was sent a copy of the entire database and a stream of catch-up queries before it joined the cluster. There were other tricks to make sure reads from replicas waited until the replicas were sufficiently up to date.
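A rough sketch of the writer-process side of that replication scheme; the wire format, port, and names below are my own invention for illustration, not the actual system:

    import sqlite3, socketserver

    DB_PATH = "replica.db"  # each server's local copy, used for reads

    class DMLApplier(socketserver.StreamRequestHandler):
        """Applies DML statements streamed from the elected master, one per line."""
        def handle(self):
            conn = sqlite3.connect(DB_PATH)
            try:
                for raw in self.rfile:
                    stmt = raw.decode("utf-8").strip()
                    if stmt:
                        conn.execute(stmt)
                        conn.commit()   # SQLite serialises these writes
            finally:
                conn.close()

    if __name__ == "__main__":
        # The master connects here and streams INSERT/UPDATE/DELETE statements.
        with socketserver.TCPServer(("0.0.0.0", 9999), DMLApplier) as srv:
            srv.serve_forever()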

It worked fine, as the system was read-heavy and write-light. SQLite serialises writes, so it does not perform well with multiple writers, particularly if the write transactions are long-running. Reads were blazingly fast, as there were no round-trips across the network to a separate database tier. The plan for dealing with performance problems, if/when they arrived, was to shard the servers into groups of customers.

I moved on, and the next developer ripped it out and replaced it with Postgres because it was such an oddball system. I came back six months later to fix the mess, as the new developer had messed up transactions in the new database code.

Technically, using SQLite with replication tacked on works fine. Superficially it is all the same, because it is SQL. However, the performance characteristics are very different from those of a conventional multi-version concurrency control (MVCC) database such as Postgres.

This is where the problem lies with this kind of database: developers see SQL and assume they can develop exactly the same way they would with other SQL databases. That said, I love approaches that get away from the database architectures of last century.
