I'm sure ATC systems were properly tested, including the drivers. Don't compare that with cheap consumer PCs that we had.
Writing your own minidump uploader in the unhandled exception filter is/was a very common practice in games, while obviously not ideal.
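For anyone who hasn't seen it, the pattern was roughly this (a minimal sketch, not any particular engine's code; the dump path and the upload step are placeholders):

```cpp
// Minimal sketch of the classic "write a minidump from the unhandled
// exception filter" pattern. Paths and the upload step are illustrative only.
#include <windows.h>
#include <dbghelp.h>   // link with dbghelp.lib

static LONG WINAPI CrashFilter(EXCEPTION_POINTERS* info)
{
    HANDLE file = CreateFileW(L"crash.dmp", GENERIC_WRITE, 0, nullptr,
                              CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, nullptr);
    if (file != INVALID_HANDLE_VALUE)
    {
        MINIDUMP_EXCEPTION_INFORMATION mei = {};
        mei.ThreadId          = GetCurrentThreadId();
        mei.ExceptionPointers = info;
        mei.ClientPointers    = FALSE;

        MiniDumpWriteDump(GetCurrentProcess(), GetCurrentProcessId(), file,
                          MiniDumpNormal, &mei, nullptr, nullptr);
        CloseHandle(file);
        // Typically the dump would then be queued for upload here, or
        // picked up and uploaded on the next launch of the game.
    }
    return EXCEPTION_EXECUTE_HANDLER;   // let the process die
}

int main()
{
    SetUnhandledExceptionFilter(CrashFilter);
    // ... run the game ...
}
```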
I think Unreal Engine might still do that. So I think that the claim that Direct3D captures exceptions is suspect.
It may trap them and return EXCEPTION_CONTINUE_SEARCH to pass them on to the next handler, but I have a hard time coming up with a reason why it would trap them in the first place. I have personally never seen Direct3D trap an exception in my long career.
Maybe you were expecting C++ exceptions to be caught, but these APIs are only for SEH.
Now Flash, I have no experience with.
Yes, I know it's a 16-year-old post. But I must stop myths.
> So I think that the claim that Direct3D captures exceptions is suspect.
I would think that too - but I based my claims on a stack trace captured at the time in the overridden SetUnhandledExceptionFilter. Now, computers were the Wild West then, and who knows where those DLLs actually originated, and any further details are lost to time.
> Maybe you were expecting C++ exceptions to be caught, but these APIs are only for SEH.
The distinction was clear then. And very well-documented by Microsoft. We caught all C++ exceptions before SEH.
> Yes, I know it's a 16-year-old post. But I must stop myths.
Your goal is laudable but I don’t love comments that discount a concrete history that I lived (and documented!). I call this out mostly because it’s happened before in discussions of old Windows APIs. I wish it were easier to get a snapshot of MSDN circa Windows XP, etc.
The use case they had is saving minidumps when the app crashes. The Windows Error Reporting OS component is flexible enough to support that feature without hacks; they just need to write a couple of values to the registry in the installer of their software. See the details on per-application settings in that article: https://learn.microsoft.com/en-us/windows/win32/wer/collecti...
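Roughly, the per-application setup from that article looks like this as a sketch of what an installer might do (the exe name, dump folder, and counts below are placeholders):

```cpp
// Sketch: enable WER "LocalDumps" for one application, per the
// "Collecting User-Mode Dumps" docs. Requires admin (writes HKLM),
// which an installer normally has.
#include <windows.h>

void EnableLocalDumpsForMainApp()
{
    HKEY key = nullptr;
    const wchar_t* subkey =
        L"SOFTWARE\\Microsoft\\Windows\\Windows Error Reporting\\LocalDumps\\MainApp.exe";

    if (RegCreateKeyExW(HKEY_LOCAL_MACHINE, subkey, 0, nullptr, 0,
                        KEY_SET_VALUE, nullptr, &key, nullptr) != ERROR_SUCCESS)
        return;

    const wchar_t folder[] = L"%LOCALAPPDATA%\\MainApp\\CrashDumps";  // placeholder
    DWORD dumpType  = 1;   // 1 = minidump, 2 = full dump
    DWORD dumpCount = 10;  // keep at most 10 dumps around

    RegSetValueExW(key, L"DumpFolder", 0, REG_EXPAND_SZ,
                   reinterpret_cast<const BYTE*>(folder), sizeof(folder));
    RegSetValueExW(key, L"DumpType", 0, REG_DWORD,
                   reinterpret_cast<const BYTE*>(&dumpType), sizeof(dumpType));
    RegSetValueExW(key, L"DumpCount", 0, REG_DWORD,
                   reinterpret_cast<const BYTE*>(&dumpCount), sizeof(dumpCount));
    RegCloseKey(key);
}
```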
If they want better UX and/or to compress & upload the dump as soon as the app crashes (as opposed to on the next launch of the app), I would solve it by making a simple supervisor app which launches the main binary with CreateProcess, waits for it to exit, then looks for the MainApp.exe.{ProcessID}.dmp file created by that WER OS component.
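Something like this, as a sketch (names and paths are placeholders, and the upload step is stubbed out):

```cpp
// Supervisor sketch: launch the real app, wait for it to exit, then check
// whether WER left a dump behind for that process id.
#include <windows.h>
#include <string>

int wmain()
{
    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};
    wchar_t cmdLine[] = L"MainApp.exe";   // CreateProcess may modify this buffer

    if (!CreateProcessW(nullptr, cmdLine, nullptr, nullptr, FALSE,
                        0, nullptr, nullptr, &si, &pi))
        return 1;

    WaitForSingleObject(pi.hProcess, INFINITE);

    // Build the dump name WER would have used for this process id
    // (dump folder is a placeholder matching whatever LocalDumps was set to).
    std::wstring dump = L"C:\\ProgramData\\MainApp\\CrashDumps\\MainApp.exe."
                        + std::to_wstring(pi.dwProcessId) + L".dmp";

    if (GetFileAttributesW(dump.c_str()) != INVALID_FILE_ATTRIBUTES)
    {
        // Compress and upload the dump here, then delete it.
    }

    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return 0;
}
```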
That said, we did have a bunch of hand-rolled state capturing (including Python thread stacks) so maybe WER wouldn't have been as useful anyway.
It made everything feel real!
Later, the company switched new employees to 3x 10,000 RPM SATA drives. Not quite as grindy, but still loud.
I remain a bit mystified about why it would be a hard maximum, though. Did such motherboards prevent the user from installing 4x256MiB for a cool 1GiB of DRAM? Was the OS having trouble addressing or utilizing it all? 640MiB is not a mathematical sort of maximum I was familiar with from the late 1990s. 4GiB is obviously your upper limit, with a 32-bit address bus... and again, if 640MiB were installed, that's only 2 free bits on that bus.
So I'm still a little curious about this number being dropped in the article. More info would be enlightening! And thank you for speaking up to correct me! No wonder it was down-voted!
That was a weird time in computing. Things were getting fast and big quickly (not that many years later I built a dual-socket Xeon at 2.8 GHz, and before that my brother had a dual-socket P3 at 700 MHz), but all the expansion boards were so special-purpose. I remember going out of my way to pick a board with something like seven expansion slots.
But I think your question about why the author said 640 is fair! Maybe they had a machine like mine around then. Or maybe it’s something NVIDIA was designing around?
PC memory was nearly always sold in powers of two, so you could have SIMMs in capacities of 1, 2, 4, 8, or 16 MiB. You could usually mix and match these memory modules, and some PCs had 2 slots, some had 4, some had a different number of slots.
So if you think about 4 slots, each holding some maximum, 64MiB was a very common ceiling for a consumer PC, whether as 2x32MiB or 4x16MiB. Lots of people ran up against that limit for sure.
640MiB is an absurd number if you think mathematically. How do you divide that up? If 4 SIMMs are installed, then their capacity is 160MiB each? No such hardware ever existed. IIRC, individual SIMMs were commonly maxed at 64MiB, and it was not physically possible to make a "monster memory module" larger than that.
Furthermore, while 64MiB requires 26 bits to address, 640MiB requires 30 address bits on the bus. If a hypothetical PC had 640MiB in use by the OS, then only 2 pins would be unused on the address bus! That is clearly at odds with their narrative that they were able to "borrow" several more!
This is clearly a typo and I would infer that the author meant to write "64 megabytes" and tacked on an extra zero, out of habit or hyperbole.
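For what it's worth, the bit-count arithmetic above is easy to sanity-check:

```cpp
// Quick sanity check of the address-bit arithmetic: smallest n with 2^n >= size.
#include <cstdio>

int main() {
    auto bits_needed = [](unsigned long long bytes) {
        unsigned bits = 0;
        while ((1ULL << bits) < bytes) ++bits;
        return bits;
    };
    std::printf("64 MiB  -> %u address bits\n", bits_needed(64ULL << 20));   // 26
    std::printf("640 MiB -> %u address bits\n", bits_needed(640ULL << 20));  // 30
}
```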
I can’t find the purchase receipts or specific board brand but it had four SDRAM slots, and I had it populated with 2x64 and 2x256.
Edit: Found it in some old files of mine:
I was wrong! Not four DIMM slots... three! One must have been 128 and the other two 256.
Pentium II 400, 512k cache
Abit BF6 motherboard
640 MB PC100 SDRAM
21" Sony CPD-G500 (19.8" viewable, .24 dot pitch)
17" ViewSonic monitor (16" viewable, .27 dot pitch)
RivaTNT PCI video card with 16 MB VRAM
Creative SB Live!
Creative 5x DVD, 32x CD drive
Sony CD-RW (2, 4, 24)
80 GB Western Digital ATA/100
40 GB Western Digital ATA/100
17.2 GB Maxtor UltraDMA/33 HDD
10.0 GB Maxtor UltraDMA/33 HDD
Cambridge SoundWorks FourPointSurround FPS2000 Digital
3Com OfficeConnect 10/100 Ethernet card
3 Microsoft SideWinder Gamepads
Labtec AM-252 Microphone
Promise IDE Controller card
Hauppauge WinTV-Theatre Tuner Card
We got it moved outside, but it took about 24 hours before I realized that I should call County Health. By that point the bat was gone, and County Health suggested I receive rabies treatment but told me to call my doctor. The bat could have bitten or scratched someone without us realizing it.
The doctor concurred. Rabies treatment must be done at the ER. They strongly recommended everyone in the house receive treatment if we could not 100% rule out physical contact. (We couldn't.)
Me, my wife, my kids, EACH receiving the immunoglobulin and four rounds of vaccines at the ER. We ran the first ER out of the treatment so the kids had to go somewhere else. Also, those are big needles.
The treatment ended up billing insurance over $100,000. (Almost all of that is the immunoglobulin.) We also had to return to both ERs, three times each, with the last time being on Christmas morning.
There is research that says immunoglobulin is _likely_ not necessary if you have no visible bites, but it's current health policy in the USA, and no doctor wants to be the first to undertreat.
Most expensive Christmas tree ever.
Here’s the issue: waiting_for_elements is a Vec<Waker>. The channel cannot know how many tasks are blocked, so we can’t use a fixed-size array. Using a Vec means we allocate memory every time we queue a waker. And that allocation is taken and released every time we have to wake.
Why isn't a structure that does amortized allocation an option here? I appreciate the design goal was "no allocations in steady-state", but that's what you'd expect if you were using C++'s std::vector: After a while the reserved space for the vector gets "big enough".
And my response: https://www.reddit.com/r/rust/comments/1gbqy6c/comment/ltpv0...
One typical approach is double-buffering the allocation but it doesn't work here because you need to pull out the waker list to call `wake()` outside of the mutex. You could try to put the allocation back, but you have to acquire the lock again.
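For readers following along, here's the double-buffering idea as a rough C++ sketch (not the actual Rust code from the post): swap the filled list out under the lock, wake outside it, and note that handing the emptied buffer back for reuse is exactly where the second lock acquisition comes in.

```cpp
// Double-buffering sketch: swap the waiter list out under the lock, wake
// outside the lock, then re-lock to return the (now empty) buffer so its
// capacity can be reused. That second acquisition is the cost discussed.
#include <functional>
#include <mutex>
#include <utility>
#include <vector>

struct WakeList {
    std::mutex mutex;
    std::vector<std::function<void()>> waiters;  // stand-in for Vec<Waker>
    std::vector<std::function<void()>> spare;    // kept around for its capacity

    void add_waiter(std::function<void()> w) {
        std::lock_guard<std::mutex> lock(mutex);
        waiters.push_back(std::move(w));
    }

    void wake_all() {
        std::vector<std::function<void()>> local;
        {
            std::lock_guard<std::mutex> lock(mutex);
            local = std::move(waiters);
            waiters = std::move(spare);          // reuse the spare's capacity
        }
        for (auto& w : local) w();               // wake with the lock released

        local.clear();                           // keeps its allocation
        std::lock_guard<std::mutex> lock(mutex); // <- the extra acquisition
        spare = std::move(local);
    }
};
```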
I had an implementation that kept a lock-free waker pool around https://docs.rs/wakerpool/latest/wakerpool/ but now you're paying for atomics too, and it felt like this was all a workaround for a deficiency in the language.
Intrusive lists are the "correct" data structure, so I kept pushing.
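For anyone unfamiliar with the term, the appeal is that each waiter embeds its own list node, so linking and unlinking it never allocates. A toy sketch of the shape (in C++ for brevity; the real Rust version also has to deal with pinning and unsafe code):

```cpp
// Toy intrusive doubly-linked list: the node lives inside the waiter itself,
// so pushing and popping is pointer surgery with no heap allocation.
struct WaiterNode {
    WaiterNode* prev = nullptr;
    WaiterNode* next = nullptr;
    // In the real thing this would hold the task's Waker.
};

struct WaitList {
    WaiterNode head;                      // sentinel node; list is circular
    WaitList() { head.prev = head.next = &head; }

    void push_back(WaiterNode* n) {       // caller owns n; nothing is allocated
        n->prev = head.prev;
        n->next = &head;
        head.prev->next = n;
        head.prev = n;
    }

    WaiterNode* pop_front() {             // detach the oldest waiter, if any
        WaiterNode* n = head.next;
        if (n == &head) return nullptr;   // empty
        head.next = n->next;
        n->next->prev = &head;
        n->prev = n->next = nullptr;
        return n;
    }
};
```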
Swift has (had?) the same issue and I had to write a program to illustrate that Swift is (was?) perfectly happy to segfault under shared access to data structures.
Go has never been memory-safe (in the Rust and Java sense) and it's wild to me that it got branded as such.