Your best bet is a $500 GDC Vault that offers relative scraps of a schematic, and making your own from those experiences.
I guess the trick would be finding a way to attach the black box securely while still ensuring its release in a catastrophic disaster.
[0]: https://en.wikipedia.org/wiki/Rescue_buoy_(submarine)
[1]: https://www.quora.com/Don%E2%80%99t-submarines-have-communic...
What was particularly beneficial/unique is that the P320 was kept in the holster when it was given to the FBI to investigate, and only removed after their forensic team X-rayed it, giving us a pretty solid case study of how it happens.
This guy does a great job going through the report: https://youtu.be/LfnhTYeVHHE
[0]: https://drive.google.com/file/d/1L7RXrneHlzfjrewMFIeeyc-nel3...
I'd like to learn more about how to implement this.
It was pretty clear, even 20 years ago, that OOP had major problems in terms of what Casey Muratori now calls "hierarchical encapsulation" of problems.
One thing that really jumped out at me was his quote [0]:
> I think when you're designing new things, you should focus on the hardest stuff. ... we can always then take that and scale it down ... but it's almost impossible to take something that solves simple problems and scale it up into something that solves hard [problems]
I understand the context, but this, in general, is abysmally bad advice. I'm not sure about language design or system architecture, but this is almost universally not true for any mathematical or algorithmic pursuit.
I don't agree. While starting with the simplest case and expanding out is a valid problem-solving technique, it is also often the case in mathematics that we approach a problem by solving a more general one and getting our solution as a special case. It's a bit paradoxical, but a problem that would be completely intractable if attacked directly can be trivial if approached with a sufficiently powerful abstraction. And our problem-solving abilities grow with our toolbox of ever more powerful and general abstractions.
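To make that concrete (a standard textbook example of "generalize to solve", not something from the comment above): the integral \(\int_0^1 \frac{x-1}{\ln x}\,dx\) is hard to attack directly, but embedding it in a one-parameter family and differentiating under the integral sign makes it nearly trivial:

\[
I(a) = \int_0^1 \frac{x^a - 1}{\ln x}\,dx, \qquad
I'(a) = \int_0^1 x^a\,dx = \frac{1}{a+1}, \qquad
I(0) = 0 \;\Rightarrow\; I(a) = \ln(a+1),
\]

so the original integral is just \(I(1) = \ln 2\).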
Also, it's a general principle in engineering that the initial design decisions, the assumptions underlying everything else, are in themselves the least expensive part of the process but have an outsized influence on the entire rest of the project. The civil engineer who, halfway through the construction of his bridge, discovers a flaw in his design is having a very bad day (and likely year). With software, things are more flexible, so we can build our solution incrementally from a simpler case and swap bits out as our understanding of the problem changes; but even there, if we discover something wrong with our fundamental architectural decisions, with how we model the problem domain, we can't fix it just by rewriting some modules. That's something that can only be fixed by a complete rewrite, possibly even in a different language.
So while I don't agree with your absolute statement in general, I think it is especially wrong given the context of language design and system architecture. Those are precisely the kind of areas where it's really important that you consider all the possible things you might want to do, and make sure you're not making some false assumption that will massively screw you over at some later date.
In fact, as I have said before and emphatically believe, if you had to explain the Nazis to somebody who had never heard of WWII but was an Oracle customer, there's a very good chance that you could actually explain the Nazis in an Oracle allegory.
So, it's like: "Really, wow, a whole country?"; "Yes, Larry Ellison has an entire country"; "Oh my god, the humanity! The License Audits!"; "Yeah, you should talk to Poland about it, it was bad. Bad, it was a blitzkrieg license audit."
[1]:
> When NTDS was eventually acclaimed not only a success, but also one of the most successful projects in the Navy, it amazed people. Especially because it had stayed within budget and schedule. A number of studies were commissioned to analyze the NTDS project to find why it had been so successful in spite of the odds against it. Sometimes it seems there was as much money spent on studying NTDS as was spent on NTDS development.
[2]:
> ...the Office of the Chief of Naval Operations authorized development of the Naval Tactical Data System in April 1956, and assigned the Bureau of Ships as lead developing agency. The Bureau, in turn, assigned Commander Irvin McNally as NTDS project “coordinator” with Cdr. Edward Svendsen as his assistant. Over a period of two years the coordinating office would evolve to one of the Navy’s first true project offices having complete technical, management, and funds control over all life cycle aspects of the Naval Tactical Data System including research and development, production procurement, shipboard installation, lifetime maintenance and system improvement.
[1]:
> The Freedom to Fail: McNally and Svendsen had an agreement with their seniors in the Bureau of Ships and in OPNAV that, if they wanted them to do in five years what normally took 14, they would have to forego the time-consuming rounds of formal project reviews and just let them keep on working. This was reasonable because the two commanders were the ones who had defined the new system and they knew better than any senior reviewing official whether they were on the right track or not. It was agreed, when the project officers needed help, they would ask for it, otherwise the seniors would stand clear and settle for informal progress briefings.
The key takeaway is that NTDS was set up as a siloed project office, with Commanders McNally and Svendsen having responsibility for the ultimate success of the project but otherwise being completely unaccountable. There were many other things the NTDS project did well, but I believe that fundamental aspect of its organization was the critical necessary condition for its success. Lack of accountability can be bad; in other circumstances it can be useful; but diffusion of responsibility is always the enemy.
How many trillions of dollars are wasted on projects that go over budget, get delayed, and/or ultimately fail, and to what extent could that pernicious trend be remedied if such projects were led from inception to completion by one or two people with responsibility for their ultimate success, who shield the project from accountability?
[0]: https://ethw.org/First-Hand:No_Damned_Computer_is_Going_to_T...
[1]: https://ethw.org/First-Hand:Legacy_of_NTDS_-_Chapter_9_of_th...
[2]: https://ethw.org/First-Hand:Building_the_U.S._Navy%27s_First...
Jell-O is a good example; nothing tastes like it naturally. Why aren't there tasty foods that are original in terms of taste and texture but good for health and the environment? I suppose part of the struggle is that food is so entrenched in culture. Burgers and BBQ are inextricable from July 4th and Memorial Day, for example.
Do you mean processing with the goal of taking cheap ingredients and making a product as hyper-palatable as possible? That would generally be called "ultra-processed food"; you're not going to find a Doritos chip in nature.
Do you mean developing completely new flavors via chemical synthesis? I don't think there's much possibility there. Our senses have evolved to detect compounds found in nature, so it's unlikely a synthetic compound can produce a flavor completely unlike anything found in nature.
Also, I think you're overestimating jelly. Gelatine is just a breakdown product of collagen: boil animal connective tissue, purify the gelatine, add sugar and flavoring, and set it into a gel. It's really only a few techniques removed from nature. If you want to say it's not found in nature, then fair enough, but neither is a medium-rare steak.