That’s probably why milk floats could go so far at the time despite obviously inferior battery technology; milk floats didn’t go very fast.
> The hypothesis underlying the new model, called STAMP (Systems-Theoretic Accident Model and Processes) is that system theory is a useful way to analyze accidents, particularly system accidents. In this conception of safety, accidents occur when external disturbances, component failures, or dysfunctional interactions among system components are not adequately handled by the control system, that is, they result from inadequate control or enforcement of safety-related constraints on the development, design, and operation of the system[0]
[0] - http://sunnyday.mit.edu/accidents/safetyscience-single.pdf
bravo.
Love him or hate him, I think his time in office has kicked off a much-needed moment of self-reflection and course correction.
Getting closer to my dream install:
* WSL 2
* VS Code
* .NET 5
* Windows Terminal
* Package Manager
* Edge
All that's missing is Edge on Linux and the ability to write cross-platform apps that use Edge as a (headless) common runtime.
Maybe if he bought and renovated an apartment building. Held events in a top floor space. Transformed the lobby into a community business and tech center.
Maybe to someone who could make sense of the DDL and read the language the column names are written in, and who understands all the implicit units, the rules around nulls and empties, the magic strings (SSN, SKU), the special numbers (-1), and so on. For that you need something like RDF and a proper data model.
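To make the point concrete, here is a minimal sketch in plain Python (hypothetical column names and prefixes, not any real schema) contrasting a relational row full of implicit conventions with explicit, RDF-style subject/predicate/object triples:

```python
# Implicit: what does "wt" mean? What unit? What does -1 signal?
# What does a placeholder SSN of all zeros mean? Only tribal knowledge says.
row = {"id": 42, "wt": -1, "ssn": "000-00-0000"}

# Explicit triples: the magic values become stated facts.
# (Prefixes like "schema:" and "units:" are illustrative, not a real vocabulary.)
triples = [
    ("item:42", "schema:weight", None),           # -1 actually meant "unknown"
    ("item:42", "units:weightUnit", "kilogram"),  # the unit is now on record
    ("item:42", "schema:taxID", None),            # the zero SSN meant "missing"
]

for subject, predicate, obj in triples:
    print(subject, predicate, obj)
```

A real system would use an RDF library and a published vocabulary rather than bare tuples, but the shape of the fix is the same: every convention the DBA carried in their head becomes a queryable statement.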
Sorry, but your response sounds snarky and reminds me of all the ego hurdles I had to overcome when leaving/loving databases and set theory. Please remember that your comment could be someone's first introduction or an early step in learning.
To oversimplify things massively, "how careful do we want to be to avoid testing failures" is a parameter that SpaceX management gets to control. At the extremely careful end, everything costs billions of dollars because you're paralyzed analyzing things and double/triple/quadruple/quintuple-checking everything. At the not-at-all-careful end, you keep building "complete rockets", pointing them at Mars without any testing, and blowing them up. Obviously neither is rational.
They've settled on some parameters that are basically "don't worry that much about test failures, but keep the failures really cheap". They seem to be doing pretty well by doing that. They're experimenting with manufacturing techniques, hiring and training a workforce, building facilities. The prototypes that they are building keep failing, but that looks to be a relatively minor cost considering that the current primary goal is (according to them) to build out the manufacturing processes and make the design easy to manufacture.
Being slightly more careful would undoubtedly reduce the number of prototype failures, but would it actually be worth the cost of slowing other things down? Remember that the main cost of this program to SpaceX is engineering salaries; the faster it goes, the cheaper it is.
So of course this test didn't go as planned, of course it would be better if it had worked, but would it have really been worth it to management to reduce the probability of this test failing? Like I said, maybe they aren't being careful enough, but I don't think we have any real evidence for that right now and I personally doubt it.
* Fail fast.
* Red. Green. Refactor.
It’s interesting to think that the information being passed is something like: “Heads up, this dude does a lot of exercise, which means it must be crucial to survival wherever we are.”