cletus commented on At Least 13 People Died by Suicide Amid U.K. Post Office Scandal, Report Says   nytimes.com/2025/07/10/wo... · Posted by u/xbryanx
cletus · 2 months ago
People should go to jail for this.

Anyone who has worked on a large migration eventually lands on a pattern that goes something like this (sketched in code after the list):

1. Double-write to the old system and the new system. Nothing uses the new system;

2. Verify the output in the new system vs the old system with appropriate scripts. If there are issues, which there will be for a while, go back to (1);

3. Start reading from the new system with a small group of users and then an increasingly large group. Still use the old system as the source of truth. Log whenever the output differs. Keep making changes until it always matches;

4. Once you're at 100% rollout you can start decommissioning the old system.
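Here's a minimal Go sketch of steps 1-3, with a hypothetical Store interface standing in for the old and new systems (names and details are illustrative, not any particular product's API):

```go
package migration

import (
	"bytes"
	"context"
	"log"
	"math/rand"
)

// Store is a hypothetical interface implemented by both the old and new systems.
type Store interface {
	Write(ctx context.Context, key string, value []byte) error
	Read(ctx context.Context, key string) ([]byte, error)
}

// DualStore double-writes to both systems (step 1) and ramps reads onto the
// new system (step 3) while the old system remains the source of truth.
type DualStore struct {
	Old, New       Store
	ReadRolloutPct int // 0..100: percentage of reads that also hit the new system
}

func (d *DualStore) Write(ctx context.Context, key string, value []byte) error {
	if err := d.Old.Write(ctx, key, value); err != nil {
		return err // the old system is authoritative; its failure fails the request
	}
	if err := d.New.Write(ctx, key, value); err != nil {
		// A new-system failure must not break callers: log it and reconcile later.
		log.Printf("new-system write failed for %q: %v", key, err)
	}
	return nil
}

func (d *DualStore) Read(ctx context.Context, key string) ([]byte, error) {
	oldVal, err := d.Old.Read(ctx, key)
	if err != nil {
		return nil, err
	}
	if rand.Intn(100) < d.ReadRolloutPct {
		newVal, newErr := d.New.Read(ctx, key)
		if newErr != nil || !bytes.Equal(oldVal, newVal) {
			// Step 2/3: log every mismatch; the rollout only advances
			// once this stops firing.
			log.Printf("mismatch for %q: %v", key, newErr)
		}
	}
	return oldVal, nil // always serve the old system's answer during the transition
}
```

The key properties: the old system stays authoritative the whole way through, and every divergence is logged rather than silently tolerated.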

This approach is incremental, verifiable and reversible. You need all of these things. If you engage in a massive rewrite in a silo for a year or two you're going to have a bad time. If you have no way of verifying your new system's output, you're going to have a bad time. In fact, people are going to die, as is the case here.

If you're going to accuse someone of a criminal act, a system just saying it happened should NEVER be sufficient. It should be able to show its work. The person or people who are ultimately responsible for turning a fraud detection into a criminal complaint should themselves be criminally liable if they make a false complaint.

We had a famous example of this with Hertz mistakenly reporting cars stolen, something they ultimately had to pay for in a lawsuit [1] but that's woefully insufficient. It is expensive, stressful and time-consuming to have to criminally defend yourself against a felony charge. People will often be forced to take a plea because absolutely everything is stacked in the prosecution's favor despite the theoretical presumption of innocence.

As such, an erroneous or false criminal complaint by a company should itself be a criminal charge.

In Hertz's case, a human should eyeball the alleged theft and look for records like "do we have the car?", "do we know where it is?" and "is there a record of them checking it in?"

In the UK post office scandal, a detection of fraud from accounting records should be verified by comparison to the existing system in a transition period AND, more so in the beginning, by double-checking results with forensic accountants (actual humans) before any criminal complaint is filed.

[1]: https://www.npr.org/2022/12/06/1140998674/hertz-false-accusa...

cletus commented on Model Once, Represent Everywhere: UDA (Unified Data Architecture) at Netflix   netflixtechblog.com/uda-u... · Posted by u/Bogdanp
cletus · 3 months ago
I realize scale makes everything more difficult but at the end of the day, Netflix is encoding and serving several thousand videos via a CDN. It can't be this hard. There are a few statements in this that gave me pause.

The core problem seems to be development in isolation. Put another way: microservices. This post hints at microservices having complete autonomy over their data storage and developing their own GraphQL models. The first is normal for microservices (but an indictment at the same time). The second is... weird.

The whole point of GraphQL is to create a unified view of something, not to have 23 different versions of "Movie". Attributes are optional. Pull what you need. Common subsets of data can be organized in fragments. If you're not doing that, why are you using GraphQL?
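For illustration only (the field names here are invented), this is what one canonical Movie type with a shared fragment looks like, embedded as a Go string constant the way a client might declare its query:

```go
package queries

// Hypothetical schema: one canonical Movie type, one shared fragment, and
// each client adds only the extra fields it needs. Nobody defines a second
// "Movie".
const movieCore = `
fragment MovieCore on Movie {
  id
  title
  runtimeMinutes
}
`

const playerPageQuery = movieCore + `
query PlayerPage($id: ID!) {
  movie(id: $id) {
    ...MovieCore
    streams { resolution url }
  }
}
`
```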

So I worked at Facebook and may be a bit biased here because I encountered a couple of ex-Netflix engineers in my time who basically wanted to throw away FB's internal infrastructure and reinvent Netflix microservices.

Anyway, at FB there is a Video GraphQL object. There aren't 23 or 7 or even 2.

Data storage for most things was via a write-through in-memory graph database called TAO that persisted things to sharded MySQL servers. On top of this, you'd use EntQL to add a bunch of behavior to TAO like permissions, privacy policies, observers and such. And again, there was one Video entity. There were offline data pipelines that would generally process logging data (i.e. outside TAO).

Maybe someone more experienced with microservices can speak to this: does UDA make sense? Is it solving an actual problem? Or just a self-created problem?

cletus commented on The world could run on older hardware if software optimization was a priority   twitter.com/ID_AA_Carmack... · Posted by u/turrini
mike_hearn · 4 months ago
I worked there too and you're talking about performance in terms of optimal usage of CPU on a per-project basis.

Google DID put a ton of effort into two other aspects of performance: latency, and overall machine utilization. Both of these were top-down directives that absorbed a lot of time and attention from thousands of engineers. The salary costs were huge. But, if you're machine constrained you really don't want a lot of cores idling for no reason even if they're individually cheap (because the opportunity cost of waiting on new DC builds is high). And if your usage is very sensitive to latency then it makes sense to shave milliseconds off because of business metrics, not hardware $ savings.

cletus · 4 months ago
The key part here is "machine utilization" and absolutely there was a ton of effort put into this. I think before my time servers were allocated to projects, but even early on in my time at Google, Borg had already adopted shared machine usage and there was a whole system of resource quotas implemented via cgroups.

Likewise there have been many optimization projects and they used to call these out at TGIF. No idea if they still do. One I remember was reducing the UDP health checks for Stubby; given that every single Google product uses Stubby extensively, even a small (5%? I forget) reduction in UDP traffic amounted to 50,000+ cores, which is (and was) absolutely worth doing.

I wouldn't even put latency in the same category as "performance optimization" because often you decrease latency by increasing resource usage. For example, you may send duplicate RPCs and wait for the fastest to reply. That can double or triple the effort.
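A rough Go sketch of that trade-off, hedging a request across replicas and taking the first reply (the replica callbacks are placeholders, not a real RPC client):

```go
package hedge

import "context"

// result pairs a reply with its error so a single channel can carry both.
type result struct {
	val string
	err error
}

// fastest issues the same request against several replicas and returns
// whichever reply arrives first, trading duplicated work for lower latency.
func fastest(ctx context.Context, replicas []func(context.Context) (string, error)) (string, error) {
	ctx, cancel := context.WithCancel(ctx)
	defer cancel() // signal the losing calls to give up

	ch := make(chan result, len(replicas)) // buffered so losers never block

	for _, call := range replicas {
		go func(call func(context.Context) (string, error)) {
			v, err := call(ctx)
			ch <- result{v, err}
		}(call)
	}

	r := <-ch // first reply wins; the duplicated work is the cost
	return r.val, r.err
}
```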

cletus commented on The world could run on older hardware if software optimization was a priority   twitter.com/ID_AA_Carmack... · Posted by u/turrini
cletus · 4 months ago
So I've worked for Google (and Facebook) and it really drives home just how cheap hardware is and how rarely optimizing code is worth it.

More than a decade ago Google had to start managing their resource usage in data centers. Every project has a budget. CPU cores, hard disk space, flash storage, hard disk spindles, memory, etc. And these are generally convertible to each other so you can see the relative cost.

Fun fact: even though at the time flash storage was ~20x the cost of hard disk storage, it was often cheaper net because of the spindle bottleneck.

Anyway, all of these things can be turned into software engineer hours, often called "milli-SWEs", meaning a thousandth of the effort of 1 SWE for 1 year. So projects could save on hardware and hire more people, or hire fewer people but get more hardware, within their current budgets.

I don't remember the exact number of CPU cores that amounted to a single SWE but IIRC it was in the thousands. So if you spend 1 SWE year working on optimization across your project and you're not saving 5000 CPU cores, it's a net loss.
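Roughly, the break-even math looks like this (the numbers are the approximate ones above, not real internal figures):

```go
package main

import "fmt"

func main() {
	// Illustrative only: the real conversion rate is internal and recalled
	// here only as "in the thousands".
	const coresPerSWEYear = 5000.0 // hypothetical cores equivalent to one SWE-year

	sweYearsSpent := 1.0 // effort spent on the optimization
	coresSaved := 3000.0 // measured steady-state CPU saving

	if coresSaved >= sweYearsSpent*coresPerSWEYear {
		fmt.Println("net win: the saving pays for the engineering time")
	} else {
		fmt.Println("net loss: the hardware was cheaper than the engineer")
	}
}
```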

Some projects were incredibly large and used much more than that so optimization made sense. But so often it didn't, particularly when whatever code you wrote would probably get replaced at some point anyway.

The other side of this is that there is (IMHO) a general usability problem with the Web in that it simply shouldn't take the resources it does. If you know people who had to or still do data entry for their jobs, you'll know that the mouse is pretty inefficient. The old terminals from 30-40+ years ago that were text-based had some incredibly efficient interfaces at a tiny fraction of the resource usage.

I had expected that at some point the Web would be "solved" in the sense that there'd be a generally expected technology stack and we'd move on to other problems but it simply hasn't happened. There's still a "framework of the week" and we're still doing dumb things like reimplementing scroll bars in user code that don't work right with the mouse wheel.

I don't know how to solve that problem or even if it will ever be "solved".

cletus commented on Leaving Google   airs.com/blog/archives/67... · Posted by u/todsacerdoti
eikenberry · 4 months ago
> Channels were a nice idea but I've become convinced that cooperative async-await is a superior programming model.

Curious as to your reasoning around this? I've never heard this opinion before from someone not biased by their programming language preferences.

cletus · 4 months ago
Sure. First you need to separate buffered and unbuffered channels.

Unbuffered channels basically operate like cooperative async/await but without the explicitness. In cooperative multitasking, putting something on an unbuffered channel is essentially a yield().
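A tiny Go example of that: the producer parks at every send on the unbuffered channel until the consumer is ready, much like an explicit yield point.

```go
package main

import "fmt"

// produce blocks on each send until the consumer is ready to receive,
// so every send on the unbuffered channel is effectively a yield.
func produce(out chan<- int) {
	for i := 0; i < 3; i++ {
		out <- i // parks this goroutine until main receives
	}
	close(out)
}

func main() {
	ch := make(chan int) // unbuffered: send and receive rendezvous
	go produce(ch)
	for v := range ch {
		fmt.Println("got", v)
	}
}
```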

An awful lot of day-to-day programming is servicing requests. That could be HTTP, an RPC (eg gRPC, Thrift) or otherwise. For this kind of model IMHO you almost never want to be dealing with thread primitives in application code. It's a recipe for disaster. It's so easy to make mistakes. Plus, you often need to make expensive calls of your own (eg reading from or writing to a data store of some kind) so there's not really a performance benefit.

That's what makes cooperative async/await so good for application code. The system should provide compatible APIs for doing network requests (etc). You never have to worry about out-of-order processing, mutexes, thread pool starvation or a million other issues.

Which brings me to the more complicated case of buffered channels. IME buffered channels are almost always a premature optimization that is often hiding concurrency issues. If that buffered channel fills up, you may deadlock where you otherwise wouldn't have while the buffer still had room. That can be hard to test for or find until it happens in production.
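A contrived Go example of the failure mode: while the buffer has room everything appears to work, and once it fills the program deadlocks.

```go
package main

import "fmt"

func main() {
	results := make(chan int, 2) // "optimization": the buffer hides the missing consumer

	for i := 0; i < 5; i++ {
		results <- i // sends of 0 and 1 succeed; the third send blocks forever
	}

	// The consumer only starts after all sends finish, which never happens
	// once the buffer is full: the Go runtime aborts with
	// "fatal error: all goroutines are asleep - deadlock!".
	for i := 0; i < 5; i++ {
		fmt.Println(<-results)
	}
}
```

With a buffer of 5 (or a small enough test workload) this "works", which is exactly why the bug tends to surface only in production.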

But let's revisit why you're optimizing this with a buffered channel. It's rare that you're CPU-bound. If the channel consumer talks to the network any perceived benefit of concurrency is automatically gone.

So async/await doesn't let you buffer (and create bugs) for little benefit, and otherwise acts like unbuffered channels. That's why I think it's a superior programming model for most applications.

cletus commented on Leaving Google   airs.com/blog/archives/67... · Posted by u/todsacerdoti
cletus · 4 months ago
Google has over the years tried to get several new languages off the ground. Go is by far the most successful.

What I find fascinating is that all of them that come to mind were conceived by people who didn't really understand the space they were operating in and/or had no clear idea of what problem the language solved.

There was Dart, which was originally intended to be shipped as a VM in Chrome until the Chrome team said no.

But Go was originally designed as a systems programming language. There's a lot of historical revisionism around this now but I guarantee you it was. And what's surprising about that is that having GC makes that an immediate non-starter. Yet it happened anyway.

The other big surprise for me was that Go launched without external dependency management as a first-class citizen of the Go ecosystem. For the longest time there were two methods of declaring dependencies: either with URLs (usually GitHub) in the import statements or with badly supported manifests. Like, just copy what Maven did for Java. Not the bloated XML of course.

But Go has done many things right like having a fairly simple (and thus fast to compile) syntax, shipping with gofmt from the start and favoring error return types over exceptions, even though it's kind of verbose (and Rust's matching is IMHO superior).
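The error-return style in a nutshell (a small, self-contained example, not from any real codebase): every fallible call hands back an error value the caller must check and propagate, which is explicit but verbose.

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// readPort shows the explicit-but-verbose style: each step returns an error
// that the caller checks and wraps before passing it up.
func readPort() (int, error) {
	raw, ok := os.LookupEnv("PORT")
	if !ok {
		return 0, fmt.Errorf("PORT is not set")
	}
	port, err := strconv.Atoi(raw)
	if err != nil {
		return 0, fmt.Errorf("parsing PORT: %w", err)
	}
	return port, nil
}

func main() {
	port, err := readPort()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("listening on", port)
}
```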

Channels were a nice idea but I've become convinced that cooperative async-await is a superior programming model.

Anyway, Go never became the C replacement the team set out to make. If anything, it's a better Python in many ways.

Good luck to Ian in whatever comes next. I certainly understand the issues he faced, which is essentially managing political infighting and fiefdoms.

Disclaimer: Xoogler.

cletus commented on Why Companies Don't Fix Bugs   idiallo.com/blog/companie... · Posted by u/foxfired
cletus · 5 months ago
A lot of the time, a lack of bugfixes comes from the incentive structure management has created. Specifically, you rarely get rewarded for fixing things. You get rewarded for shipping new things. In effect, you're punished for fixing things because that's time you're not shipping new things.

Ownership is another one. For example, product teams are responsible for shipping new things, but support for existing things gets increasingly pushed onto support teams. This is really a consequence of the same incentive structure.

This is partially why I don't think that all subscription software is bad. The Adobe end of the spectrum is bad. The Jetbrains end is good. There is value in creating good, reliable software. If your only source of revenue is new sales then bugs are even less of a priority until it's so bad it makes your software virtually unusable. And usually it took a long while to get there, with many ignored warnings.

cletus commented on Comparing Fuchsia components and Linux containers [video]   fosdem.org/2025/schedule/... · Posted by u/bestorworse
cletus · 6 months ago
Xoogler here. I never worked on Fuchsia (or Android) but I knew a bunch of people who did and in other ways I was kinda adjacent to them and platforms in general.

Some have suggested Fuchsia was never intended to replace Android. That's either a much later pivot (after I left Google) or it's historical revisionism. It absolutely was intended to replace Android and a bunch of ex-Android people were involved with it from the start. The basic premise was:

1. Linux's driver situation for Android is fundamentally broken and (in the opinion of the Fuchsia team) cannot be fixed. Windows, for example, put a lot of work into isolating faults within drivers to avoid kernel panics. Also, Microsoft created a relatively stable ABI for drivers. Linux doesn't do that. The process of upstreaming drivers is tedious and (IIRC) it often doesn't happen; and

2. (Again, in the opinion of the Fuchsia team) Android needed an ecosystem reset. I think this was a little more vague and, from what I could gather, meant different things to different people. But Android has a strange architecture. Certain parts are in the AOSP but an increasing amount was in what was then called Google Play Services. IIRC, an example was an SSL library. AOSP had one. Play had one.

Fuchsia, at least at the time, pretty much moved everything (including drivers) from kernel space into user space. More broadly, Fuchsia can be viewed in a similar way to, say, Plan9 and micro-kernel architectures as a whole. Some think this can work. Some people who are way more knowledgeable and experienced in OS design are pretty vocal that it can't because of the context switching. You can find such treatises online.

Fuchsia always struck me as one of those greenfield vanity projects meant to keep very senior engineers around. Put another way: it was a solution in search of a problem. You can argue the flaws in Android's architecture are real but remember, Google doesn't control the hardware. At that time at least, it was Samsung. It probably still is. Samsung doesn't like being beholden to Google. They've tried (and failed) to create their own OS. Why would they abandon one ecosystem they don't control for another they don't control? If you can't answer that, then you shouldn't be investing billions (quite literally) into the project.

Stepping back a bit, Eric Schmidt when he was CEO seemed to hold the view that ChromeOS and Android could coexist. They could compete with one another. There was no need to "unify" them. So often, such efforts to unify different projects just lead to billions of dollars spent, years of stagnation and a product that is the lowest common denominator of the things it "unified". I personally thought it was smart not to bother but I also suspect at some point someone would because that's always what happens. Microsoft completely missed the mobile revolution by trying to unify everything under Windows OS. Apple were smart to leave iOS and MacOS separate.

The only fruit of this investment and a decade of effort by now is Nest devices. I believe they tried (and failed) to embed themselves with Chromecast.

But I imagine a whole bunch of people got promoted and isn't that the real point?

cletus commented on Intel delays $28B Ohio chip fabs to 2030   reuters.com/technology/in... · Posted by u/alephnerd
Diggsey · 6 months ago
> Now imagine that corporate tax rate was 40% instead. It completely changes the decision-making process.

Seems more like a question of degree. Dividends are also taxed as income so ~36% is already paid in tax depending on the income of the shareholder. Increasing the corporate tax rate to 40% brings the effective tax rate to ~52%.
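Those figures are roughly what you get by compounding the two layers of tax; a sketch of the arithmetic (the specific rates are illustrative, not the parent's exact assumptions):

```go
package tax

// combinedRate is the effective tax on a dollar of corporate profit paid out
// as a dividend: taxed once at the corporate level, then again at the
// shareholder's dividend rate on what remains.
func combinedRate(corporate, dividend float64) float64 {
	return 1 - (1-corporate)*(1-dividend)
}

// combinedRate(0.21, 0.20) ≈ 0.37 and combinedRate(0.40, 0.20) = 0.52,
// which is roughly where the ~36% and ~52% figures come from.
```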

In my experience there's a more fundamental problem with large companies. In a small company, the best way to succeed as an individual (whatever position you have) is for the company as a whole to succeed. At a very large company, the best way to succeed is to be promoted up the ladder, whatever the cost. This effect is the worst at the levels just below the top: you have everything to lose and nothing to gain by the company being successful. It's far more effective to sabotage your peers and elevate yourself rather than work hard and increase the value of the company by a couple of percentage points.

The thing is, the people that have been there since the beginning still have the mindset of helping the company as a whole succeed, but after enough time and enough people have been rotated out, you're left with people at the top who only care about the politics. To them the company is simply a fixture - it existed before them and will continue to exist regardless of what they do.

cletus · 6 months ago
You're alluding to the double taxation problem with dividends. This is a problem and has had a bunch of bad solutions (eg the passthrough tax break from 2017) when in fact the solution is incredibly simple.

In Australia, dividends come with what are called "franking credits". Imagine a company has a $1 billion profit and wants to pay that out as a dividend. The corporate tax rate is 30%. $700M is paid to shareholders. It comes with $300M (30%) in franking credits.

Let's say you own 1% of this company. When you do your taxes, you've made $10M in gross income (1% of $1B), been paid $7M and have $3M in tax credits. If your tax rate is 40% then you owe $4M on that $10M, but you have already effectively paid $3M of it.

The point is, the net tax rate on your $10M gross payout is still whatever your marginal tax rate is. There is no double taxation.
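That worked example as a tiny Go program (same numbers as above):

```go
package main

import "fmt"

func main() {
	const (
		profit        = 1_000_000_000.0 // company profit paid out as a dividend
		corporateRate = 0.30            // Australian corporate tax rate
		holding       = 0.01            // you own 1% of the company
		marginalRate  = 0.40            // your personal marginal tax rate
	)

	grossIncome := profit * holding                       // $10M attributed to you
	frankingCredit := grossIncome * corporateRate         // $3M already paid by the company
	cashReceived := grossIncome - frankingCredit          // $7M actually paid to you
	topUpTax := grossIncome*marginalRate - frankingCredit // $4M liability less $3M credit = $1M

	fmt.Printf("gross %.0f, cash %.0f, credit %.0f, top-up tax %.0f\n",
		grossIncome, cashReceived, frankingCredit, topUpTax)
	// Total tax on the gross payout is topUpTax + frankingCredit = $4M = 40%,
	// i.e. exactly your marginal rate: no double taxation.
}
```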

That being said, dividends have largely fallen out of favor in favor of share buybacks. Some of those reasons are:

1. It's discretionary. Not every shareholder wants the income. Selling on the open market lets you choose if you want money or not;

2. Share buybacks are capital gains and generally enjoy lower tax rates than income;

3. Reducing the pool of available shares puts upward pressure on the share price; and

4. Double taxation of dividends.

There are some who demonize share buybacks specifically. I'm not one of them. It's simply a vehicle for returning money to shareholders, functionally very similar to dividends. My problem is doing either to the point of destroying the business.

cletus commented on Intel delays $28B Ohio chip fabs to 2030   reuters.com/technology/in... · Posted by u/alephnerd
marcosdumay · 6 months ago
On #6, that's an individual income tax (or capital gains tax, depending on how you define things). Corporate income tax is the one that is applied independently of the money being invested in the corporation or distributed.

I don't think you should subsidize reinvesting in huge companies anyway. What do you expect to gain from them becoming larger?

It's much better (for society) to let them send the money back to shareholders so they can invest in something else.

cletus · 6 months ago
Reinvesting in the company is the one thing we should absolutely subsidize. That goes to wages, capital expenditure and other measures to sustain and grow the company.

Paying out dividends and doing share buybacks just strips the company for cash until there's nothing of value left. It's why enshittification is a thing.

u/cletus

Karma: 35131 · Cake day: July 2, 2009
About
I am a Java, C++, JavaScript and Python software engineer.

I am from Perth, Western Australia but currently live in New York City. Xoogler and Ex-Facebooker.

I am a contributor on Stackoverflow as user Cletus.

I can be contacted on wcshields at the big G's service.
