They’ve been acting like a cartel for a long time now, and somehow they never match demand even after 18 months of straight price increases. They already have the fabs, the processes, and everything else, so stop acting like they’d have to set up a brand-new fab just to increase throughput.
Demand right now is so high that they'd make more net profit if they could make more DRAM - they could still be charging insane prices at higher volume. They're literally shutting down consumer sales; that's profit left entirely on the table.
Let’s check their books and manufacturing schedules to see whether they’re artificially constraining supply to jack up prices.
For example: https://chipsandwafers.substack.com/p/mainstream-recovery
"Sequentially, DRAM revenue increased 15% with bit shipments increasing over 20% and prices decreasing in the low single-digit percentage range, primarily due to a higher consumer-oriented revenue mix"
(from June of this year).
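Those figures are roughly self-consistent, by the way: revenue growth is just bit growth times the change in price per bit. A quick sanity check in Rust (the -3% price figure is my illustrative stand-in for "low single digits"):

    // Sanity check: revenue growth ~= bit growth * price-per-bit change.
    // The -3% price figure is an illustrative guess for "low single digits".
    fn main() {
        let bit_growth = 1.20;   // bit shipments up "over 20%"
        let price_change = 0.97; // prices down ~3%
        let revenue_growth = bit_growth * price_change;
        // Prints ~16.4% - in line with the reported 15% sequential increase.
        println!("revenue growth: {:.1}%", (revenue_growth - 1.0) * 100.0);
    }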
The problem is that the DRAM market is tightly balanced - supply and demand are both fairly inelastic in the short term, so a shock on either side produces big price swings. And right now we're seeing both an expected supply shock (the transition to new processes/products) and a very sudden demand shock.
So much for open markets; somebody should check their books and manufacturing schedules.
It's dangerous for them in both directions: Overbuilding capacity if the boom busts vs. leaving themselves vulnerable to a competitor who builds out if the boom is sustained. Glad I don't have to make that decision. :)
I'm happy that it works out for you. This is probably a reflection of the kind of work that I do, but I wouldn't know how to begin to solve a problem like designing a braille wheel or a windmill using AI tools, even though there is plenty of coding along the way. Maybe I could use it to make me faster at using OpenSCAD, but I am never limited by my typing speed; much more so by thinking about what it is that I actually want to make.
I've been around long enough that I have seen four hype cycles around AI-like coding environments. If you think this is new, you should have been there in the '80s (Mimer, anybody?), when the 'fourth generation' languages were going to solve all of our coding problems. Or in the '60s (which I did not personally witness, on account of being a toddler), when COBOL, the language for managers, was all the rage.
In between there was LISP, the AI language (and a couple of others).
I've done a bit more than looking at this and saying 'huh, that's interesting'. It is interesting. It is mostly interesting in the same way that when you hand an expert a very sharp tool, they can probably carve wood better than with a blunt one. But that's not what is happening. Experts are already pretty productive, and they might become a little more productive, but the AI has its own envelope of expertise, and the closer you are to the top of the field, the smaller your returns in that particular setting will be.
In the hands of a beginner there will be blood all over the workshop and it will take an expert to sort it all out again, quite possibly resulting in a net negative ROI.
Where I do get use out of it: to quickly look up some verifiable fact, to tell me what a particular acronym stands for in some context, to be slightly more functional than Wikipedia for a quick overview of some subfield (but you had better check that for gross errors). So yes, it is useful. But it is not so useful that competent engineers who are not using AI are failing at their jobs, and it is at best - for me - a very mild accelerator in some use cases. I've seen enough AI-driven coding projects run hopelessly aground by now to know that there are downsides to that golden acorn you are seeing.
The few times that I challenged the likes of ChatGPT with an actual engineering problem to which I already knew the answer, by way of verification, the answers were so laughably incorrect that it was embarrassing.
And for the better. I've honestly not had this much fun programming applications (as opposed to student stuff and inner loops) in years.
Remember that as the filter starts to get dirty, its filtration effectiveness actually increases, though the airflow rate drops. CADR (clean air delivery rate) will drop, but by less than watching airflow alone would predict.
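To put rough numbers on that (illustrative only; real efficiency and pressure-drop curves depend on the filter), CADR is just airflow times single-pass efficiency:

    // CADR (clean air delivery rate) = airflow * single-pass efficiency.
    // Numbers are illustrative, not from any datasheet.
    fn cadr(airflow_cfm: f64, efficiency: f64) -> f64 {
        airflow_cfm * efficiency
    }

    fn main() {
        let clean = cadr(300.0, 0.50); // new filter: more flow, lower capture
        let dirty = cadr(240.0, 0.60); // loaded: 20% less flow, better capture
        // Prints 150 vs 144: CADR fell ~4% while raw airflow fell 20%.
        println!("clean: {clean:.0}, dirty: {dirty:.0} effective CFM");
    }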
Recently a daughter moved into a really nice apartment close to a major university/freeway, where she will live for the number of years it takes to get a PhD. I got concerned about tire dust, so I am about to start building a really nice DIY air filter using eight Noctua NF-P14s (about 1000 CFM). Xmas present.
I really wanted to use MERV-13, but got quite worried about airflow restrictions, plus the cost to replace the filters (assume monthly). Instead I went with two 12x24 Carter reusable electrostatic MERV-8 filters. I use Carter filters on my house blower and really like them (just washed them... scary how much junk is in household air). Also, I got the 12x24s direct from Carter for a very low price, as they were returns. Note: this is NOT a low-cost project, but I just got scared re: MERV-13, so went with what I know.
Anyway, the final product will NOT be like this guy's DIY. I will use my somewhat decent woodworking skills to fashion a good-looking standing "lamp-like" appliance that should look good in most living rooms. I am thinking of going with knotless cedar, as I really like working with cedar, and there are some mills here in NW WA where one can go to get such wood (not a Home Depot specialty).
My question is whether an electrostatic MERV-8 filter would do well with tire dust. I am not looking to create "clean room" conditions in the apartment, just to get rid of some of the bad stuff. I am very weak re: understanding filters, MERV ratings, etc. APPRECIATE any insights. Thx, RF
I would stick with MERV-13, because you'll get solid performance across a lot of things you might want to remove, from viruses to general PM2.5 and things like volatilized cooking oil. Clean air is awesome, and tire dust isn't the only thing that's annoying.
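The CADR arithmetic (airflow times single-pass efficiency) makes the case. The efficiencies below are my rough illustrative guesses for PM2.5-sized particles, not datasheet values - check the specs of the actual filters:

    // MERV-8 vs MERV-13 through the CADR lens. Efficiencies are rough
    // illustrative guesses for fine particles, not measured values.
    fn main() {
        let merv8 = 1000.0 * 0.20;  // full airflow, weak fine-particle capture
        let merv13 = 750.0 * 0.60;  // restricted airflow, much better capture
        println!("MERV-8:  ~{merv8:.0} effective CFM");
        println!("MERV-13: ~{merv13:.0} effective CFM");
        // Even with a 25% airflow penalty, MERV-13 delivers roughly twice
        // the clean air at the sizes where tire dust and PM2.5 live.
    }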
It would need to be written in the Safe Rust subset - no unsafe blocks - to give those safety assurances. It's an important distinction.
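For what it's worth, this is mechanically checkable: a minimal sketch, assuming you control the crate in question, is to forbid unsafe code at the crate root so the compiler rejects any escape from the safe subset (dependencies are a separate question):

    // Crate-level attribute: any `unsafe` block or fn is now a compile error.
    #![forbid(unsafe_code)]

    fn main() {
        let v = vec![1, 2, 3];
        println!("{}", v[1]); // bounds-checked; no undefined behavior possible
        // This would fail to compile under forbid(unsafe_code):
        // let first = unsafe { *v.as_ptr() };
    }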
- EDAC (error detection and correction) is a term that encompasses anything used to detect and correct errors. While this almost always involves redundancy of some sort, _how_ it is done is unspecified.
- The term ECC used stand-alone refers specifically to adding redundancy to data in the form of an error-correcting code. But it is not a single algorithm - there are many ECC / FEC codes, from Hamming codes used on small chunks of data such as data stored in RAM (see the toy example at the end of this comment), to block codes like Reed-Solomon more commonly used on file-storage data.
- The term ECC memory could really just mean "EDAC" memory, but in practice error-correcting codes are _the_ way you'd do this from a cost perspective, so it works out. I don't think most systems would do triple redundancy on just the RAM -- at that point you'd run an independent microcontroller with its RAM to get higher-level TMR (triple modular redundancy).
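To make the "many codes" point concrete, here's a toy Hamming(7,4) encoder/corrector: 4 data bits, 3 parity bits, and any single flipped bit is located by the syndrome and flipped back. A sketch for illustration only - real ECC DIMMs use wider SECDED-style codes over 64-bit words, but the principle is the same:

    // Toy Hamming(7,4). Positions 1..=7 hold: p1 p2 d1 p3 d2 d3 d4.
    fn encode(d: [u8; 4]) -> [u8; 7] {
        let mut c = [0, 0, d[0], 0, d[1], d[2], d[3]];
        c[0] = c[2] ^ c[4] ^ c[6]; // p1 covers positions 1,3,5,7
        c[1] = c[2] ^ c[5] ^ c[6]; // p2 covers positions 2,3,6,7
        c[3] = c[4] ^ c[5] ^ c[6]; // p3 covers positions 4,5,6,7
        c
    }

    // The syndrome is the 1-indexed position of the flipped bit (0 = clean).
    fn correct(mut c: [u8; 7]) -> [u8; 7] {
        let s1 = c[0] ^ c[2] ^ c[4] ^ c[6];
        let s2 = c[1] ^ c[2] ^ c[5] ^ c[6];
        let s3 = c[3] ^ c[4] ^ c[5] ^ c[6];
        let pos = ((s3 << 2) | (s2 << 1) | s1) as usize;
        if pos != 0 {
            c[pos - 1] ^= 1; // flip the offending bit back
        }
        c
    }

    fn main() {
        let code = encode([1, 0, 1, 1]);
        let mut received = code;
        received[4] ^= 1; // a single bit flips in a RAM cell
        assert_eq!(correct(received), code);
        println!("single-bit error corrected");
    }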