Sure, no doubt. My point wasn't really about the particularities. It was about the mistaken idea I sometimes see, where people believe that TrueTime allows for synchronized global writes without any need for consensus.
But, yes, TrueTime will not magically allow data to propagate at faster-than-light speeds.
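To make the distinction concrete, here is a rough Python sketch of the Spanner-style commit-wait idea (TTInterval, tt_now and replicate_via_paxos are made-up names, and the uncertainty bound is assumed): TrueTime only bounds how wrong the clocks can be; the write itself still has to go through consensus to be replicated.

    import time
    from dataclasses import dataclass

    # Hypothetical sketch of Spanner-style commit-wait; TTInterval, tt_now and
    # replicate_via_paxos are made-up names, and EPSILON stands in for the
    # clock-uncertainty bound that TrueTime actually provides.

    @dataclass
    class TTInterval:
        earliest: float
        latest: float

    EPSILON = 0.007  # assumed uncertainty bound, on the order of a few ms

    def tt_now() -> TTInterval:
        t = time.time()
        return TTInterval(t - EPSILON, t + EPSILON)

    def commit(replicate_via_paxos) -> float:
        # Pick a commit timestamp that is not earlier than any clock anywhere.
        s = tt_now().latest
        # The write still has to be agreed on and replicated via consensus
        # (Paxos in Spanner's case); TrueTime does not remove this step.
        replicate_via_paxos(s)
        # Commit-wait: do not make the write visible until s is definitely in
        # the past everywhere. This is what buys external consistency.
        while tt_now().earliest <= s:
            time.sleep(0.001)
        return s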
Does it sting when I get a "No"? Yes, a little, but I did my best and (presumably) someone else did better. So, I take solace in that I did not have to make a (relatively large) decision.
But it is hard to say how much LeetCode-style interviews helped. Post-2015 Google already had quite a bit of momentum.
I did a fair number of coding interviews (for SRE positions) at Google (I don't actually know how many in total, but 25-30 is probably a safe guess) and, yes, each started with a small problem that should be solvable in well under half the interview time. I only used problems that passed my "is this fair to the candidate" screening (look at the problem, try to write a working solution; if that takes more than 7 minutes, the problem is not fair).
The value in the interview comes from (a) listening to the candidate narrating their thought process during the coding, (b) discussing the solution with the candidate, sometimes in terms of complexity, sometimes in terms of correctness, depending on what is on the board, (c) refining the code written (add more functionality, change functionality).
For a language I knew, I tended to overlook small syntactic mistakes (a whiteboard does not have any syntax highlighting), and if there were any questions about library functions, I would give an answer (and note down what I said, so that would be the standard used when judging). The bulk of the score came from the discussion about the code, not the code itself.
If that matches what you mean by "LeetCode-style interview", that's certainly been in place since at least 2011. If it does not, things may have changed, but at least the aim with a coding interview back when I was still at Google was less "get some code, judge the code" and more "get some code, judge the discussion about the code". It is also entirely possible that interviewing for SWE positions was different from hiring for SRE positions.
It is probably realistic to expect someone to write something useful (although possibly small) in three days. It is less realistic to expect someone to write a useful component integrated in a large system that they have to learn, in three days.
That (probably) means that the system for dealing with planned maintenances (well, usually, "approving them") needs to have a sufficiently good understanding of which humans care about which changes.
At a previous job, the planned change tracking system was REALLY good at tracking what specific compute facility was going to be impacted by any specific change taking place in that facility, and it had a really good way of allowing you to filter for "only places I have stuff running" (and, I think, even some breakdown by general change type as well).
It was, however, not easy to get notified of "there will be maintenance on submarine cable C, taking it off-line for 4 hours" or "there will be maintenance at cable station CS, taking cables C1, C2, and C3 down for 3h". And as one of the things "we" (the team I worked in at the time) were doing was world-wide, low-latency replication of data, we did actually care that cable C was going to be down. But the only way we could find out was to read all upcoming changes and stick them in the team calendar.
Was it good? Eh, it worked. Was it the best process I've seen? Probably, yes.
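For illustration, here is a small hypothetical sketch (all names invented) of the kind of resource-level model that would have helped: changes declare which resources they touch, and teams subscribe per resource instead of only per facility.

    from dataclasses import dataclass

    # Hypothetical sketch (all names invented): changes declare the resources
    # they touch, and teams subscribe per resource, so a cable maintenance
    # reaches the team that cares about that cable.

    @dataclass
    class Change:
        description: str
        affected_resources: set  # e.g. {"facility/F1"} or {"cable/C", "station/CS"}

    @dataclass
    class Subscription:
        team: str
        interesting_resources: set

    def notify(changes, subscriptions):
        """Return, per team, the changes that touch something the team cares about."""
        out = {sub.team: [] for sub in subscriptions}
        for change in changes:
            for sub in subscriptions:
                if change.affected_resources & sub.interesting_resources:
                    out[sub.team].append(change.description)
        return out

    changes = [
        Change("Maintenance on submarine cable C, off-line for 4 hours", {"cable/C"}),
        Change("Power work in compute facility F1", {"facility/F1"}),
    ]
    subscriptions = [Subscription("replication-team", {"cable/C", "cable/C2"})]
    print(notify(changes, subscriptions))  # replication-team sees the cable C change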
As for that semantic difference, if we expect the light source to have one of exactly two states (that is, "not a dimmable light"), we probably want to express that as "lightsource: on" rather than "lightsource: true".
And that is where the friction between "human-friendly" and "computer-friendly" starts being problematic. Computer-to-computer protocols should be painfully strict and unambiguous; human-to-computer protocols should be as adapted to humans as they can be, erring on the side of "expressive" rather than "strict".
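A toy illustration of the difference (made-up names, Python only for convenience): the internal state is a strict two-valued enum, the machine-facing parser is painfully strict, and the human-facing one is more expressive but resolves to the same state.

    from enum import Enum

    # Toy illustration (made-up names): model the two-state light source as an
    # explicit on/off state, with a strict parser for machine input and a more
    # forgiving one for human input.

    class LightState(Enum):
        ON = "on"
        OFF = "off"

    def parse_strict(value: str) -> LightState:
        # Computer-to-computer: exactly "on" or "off", nothing else.
        try:
            return LightState(value)
        except ValueError:
            raise ValueError(f"lightsource must be 'on' or 'off', got {value!r}")

    def parse_lenient(value: str) -> LightState:
        # Human-facing: more expressive, but it still resolves to the same
        # strict internal state.
        synonyms = {
            "on": LightState.ON, "yes": LightState.ON, "true": LightState.ON,
            "off": LightState.OFF, "no": LightState.OFF, "false": LightState.OFF,
        }
        return synonyms[value.strip().lower()]

    print(parse_strict("on"))      # LightState.ON
    print(parse_lenient("Yes"))    # LightState.ON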
I am also not sure if I am happy or sad that the set of configuration languages in the original article didn't include Flabbergast[1], which was heavily inspired by what may be simultaneously the best and worst configuration language I have seen, BCL (a language that I once was very relieved to never have to see again, and nine months later missed so INCREDIBLY much, because all the other ones are sadly more horrible).
These all seem like good reasons to make them functions (taking a timestamp as an argument) rather than mostly-correct constants.
I swear, the more I learn about calendars and timekeeping, the more I realise I never ever want to deal with it.
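A minimal sketch of the "function, not constant" idea, using Python's standard calendar module (everything else is just for illustration): the answer depends on which year or month you ask about, so a function keeps the special cases in one place.

    import calendar

    # Sketch of the "function, not constant" idea using Python's standard
    # calendar module: the answer depends on which year/month you ask about.

    def days_in_year(year: int) -> int:
        return 366 if calendar.isleap(year) else 365

    def days_in_month(year: int, month: int) -> int:
        # monthrange() returns (weekday of the 1st, number of days in the month)
        return calendar.monthrange(year, month)[1]

    print(days_in_year(2024))      # 366
    print(days_in_month(2025, 2))  # 28
    print(days_in_month(2024, 2))  # 29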
e.g. https://www.old-computers.com/museum/photos/sinclair_zx-spec...
The low cost and quirky masochism were mainly why we loved it.
If entire countries can run on the other squares, why can't the US?
Not unusual in the finance industry, somewhat unusual (but I have heard of it) in "more pure tech". Probably also more common the further up you get in the corporate hierarchy.