Readit News
dgacmu commented on GPT-5.2   openai.com/index/introduc... · Posted by u/atgctg
JAlexoid · a day ago
I mean... That is exactly how our memory works. So in a sense, the factually incorrect information coming from an LLM is as reliable as someone telling you things from memory.
dgacmu · a day ago
But not really? If you ask me a question about Thai grammar or how to build a jet turbine, I'm going to tell you that I don't have a clue. I have more of a meta-cognitive map of my own manifold of knowledge than an LLM does.
dgacmu commented on RAM is so expensive, Samsung won't even sell it to Samsung   pcworld.com/article/29989... · Posted by u/sethops1
itopaloglu83 · 9 days ago
So it’s also the perfect time to constrain the product flow to jack up the prices.

They've been acting like a cartel for a long time now, and somehow they never match the demand even after 18 straight months of price increases. They already have the fab, the procedures, and everything, so stop acting like they're setting up a brand new fab just to increase throughput.

dgacmu · 9 days ago
This seems like a weird subject on which to be so aggressive, or at least I'm interpreting your tone that way. DRAM manufacturers absolutely have engaged in illegal price fixing in the past (1998-2002 in particular). But they've also overbuilt and underbuilt in fairly regular cycles, resulting in large swings in DRAM price and profitability. And they've had natural disasters reduce production capacity (e.g., Micron in 2021). But there's no evidence right now that this is anything except finding themselves in the nice (but nervous) position of making a product that just had a major demand spike, combined with some clever contract work by OpenAI.

Demand right now is so high that they'd make more net profit if they could make more DRAM. They could still be charging insane prices. They're literally shutting down consumer sales - that's completely lost profit.

dgacmu commented on RAM is so expensive, Samsung won't even sell it to Samsung   pcworld.com/article/29989... · Posted by u/sethops1
itopaloglu83 · 10 days ago
I don't think they're working at 100% capacity, or that they don't have any other fab they can utilize for other low-profit stuff.

Let’s check their books and manufacturing schedule to see if they’re artificially constraining the supply to jack up the prices on purpose.

dgacmu · 10 days ago
I'd take the opposite bet on this. They're diverting wafer capacity from lower-profit items to things like HBM, but all indications are that wafer starts are up a bit. Just not up enough.

For example: https://chipsandwafers.substack.com/p/mainstream-recovery

"Sequentially, DRAM revenue increased 15% with bit shipments increasing over 20% and prices decreasing in the low single-digit percentage range, primarily due to a higher consumer-oriented revenue mix"

(from June of this year).
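
As a back-of-the-envelope sanity check on those figures (the exact price decline below is an assumption, since the quote only says "low single-digit"): revenue change is roughly bit growth times price change.

    # Sanity check on the quoted figures; the ~4% price decline is an assumption.
    bit_growth = 1.20       # "bit shipments increasing over 20%"
    price_change = 0.96     # assumed ~4% average price decline
    print(f"{bit_growth * price_change - 1:.0%}")   # ~15%, matching the quoted revenue increase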

The problem is that the DRAM market is pretty tight - supply or demand shocks tend to produce big swings. And right now we're seeing both an expected supply shock (transition to new processes/products) as well as a very sudden demand shock.

dgacmu commented on RAM is so expensive, Samsung won't even sell it to Samsung   pcworld.com/article/29989... · Posted by u/sethops1
itopaloglu83 · 10 days ago
The manufacturers are willing to quadruple the prices for the foreseeable future but not change their manufacturing quotas a bit.

So much for open markets, somebody must check their books and manufacturing schedules.

dgacmu · 10 days ago
In their defense, how many $20 billion fabs do you want to build in response to the AI ... (revolution|bubble|other words)? It seems very, very difficult to predict how long DRAM demand will remain this elevated.

It's dangerous for them in both directions: Overbuilding capacity if the boom busts vs. leaving themselves vulnerable to a competitor who builds out if the boom is sustained. Glad I don't have to make that decision. :)

dgacmu commented on Everyone in Seattle hates AI   jonready.com/blog/posts/e... · Posted by u/mips_avatar
jacquesm · 10 days ago
> but it does quite well with boring stuff that's still a substantial amount of programming.

I'm happy that it works out for you. This is probably a reflection of the kind of work that I do, but I wouldn't know how to begin to solve a problem like designing a braille wheel or a windmill using AI tools, even though there is plenty of coding along the way. Maybe I could use it to make me faster at using OpenSCAD, but I am never limited by my typing speed, much more so by thinking about what it is that I actually want to make.

dgacmu · 10 days ago
I've used it a little for OpenSCAD with mixed results - sometimes it worked. But I'm a beginner at OpenSCAD and suspect that if I were better it would have been faster to just code it. It took a lot of English to describe the shape - quite possibly more than it would have taken to just write it in OpenSCAD. Saying "a cube 3cm wide by 5cm high by 2cm deep" vs cube([5, 3, 2]) ... and as you say, the hard part is before the OpenSCAD anyway.
dgacmu commented on Everyone in Seattle hates AI   jonready.com/blog/posts/e... · Posted by u/mips_avatar
jacquesm · 10 days ago
Punishment eh? Serves them right for being skeptical.

I've been around long enough that I have seen four hype cycles around AI-like coding environments. If you think this is new you should have been there in the 80's (Mimer, anybody?), when the 'fourth generation' languages were going to solve all of our coding problems. Or in the 60's (which I did not personally witness on account of being a toddler), when COBOL, the language for managers, was all the rage.

In between there was LISP, the AI language (and a couple of others).

I've done a bit more than looking at this and saying 'huh, that's interesting'. It is interesting. It is mostly interesting in the same way that when you hand an expert a very sharp tool they can probably carve wood better than with a blunt one. But that's not what is happening. Experts are already pretty productive, and they might be a little bit more productive, but the AI has its own envelope of expertise, and the closer you are to the top of the field the smaller your returns in that particular setting will be.

In the hands of a beginner there will be blood all over the workshop and it will take an expert to sort it all out again, quite possibly resulting in a net negative ROI.

Where I do get use out of it: to quickly look up some verifiable fact, to tell me what a particular acronym stands for in some context, to be slightly more functional than Wikipedia for a quick overview of some subfield (but you better check that for gross errors). So yes, it is useful. But it is not so useful that competent engineers who are not using AI are failing at their job, and it is at best - for me - a very mild accelerator in some use cases. I've seen enough AI-driven coding projects run hopelessly aground by now to know that there are downsides to that golden acorn that you are seeing.

The few times that I challenged the likes of ChatGPT with an actual engineering problem to which I already knew the answer by way of verification the answers were so laughably incorrect that it was embarrassing.

dgacmu · 10 days ago
I'm not a big LLM booster, but I will say that they're really good for proof of concepts, for turning detailed pseudocode into code, and sometimes for getting debugging ideas. I'm a decade younger than you, but I've programmed in 4GLs (yuck), lived through a few attempts at visual programming (ugh), and ... LLM assistance is different. It's not magic and it does really poorly at the things I'm truly expert at, but it does quite well with boring stuff that's still a substantial amount of programming.

And for the better. I've honestly not had this much fun programming applications (as opposed to student stuff and inner loops) in years.

dgacmu commented on I made a quieter air purifier   chillphysicsenjoyer.subst... · Posted by u/crescit_eundo
ruralfam · 12 days ago
I agree completely re: merv-13 == optimal solution. But the word "pragmatic" hits me hard. Merv-13 filters, when new/clean, start out with pretty restrictive flow. They catch a lot of particles, so restriction increases rapidly. At some point the CFM loss means the filter is much less optimal. All the studies I read used new filters, smoke-filled rooms, a day's treatment. It is obviously impractical and very, very expensive to replace a merv-13 filter every few days. There are no reusable merv-13 filters that I could find. If there is a study about merv-13 effectiveness over 30 days vs. merv-8 I would love to see it. I would love to use merv-13, but just cannot get my head around how it is a practical, affordable solution across years and years of use, with let's say a month between filter renewals. Let me know if you have good insights as I am pretty worn out researching this. Thx, RF
dgacmu · 12 days ago
I ran a Corsi-Rosenthal box 24/7 for a year during covid with 4 merv-13 filters and the airflow stayed pretty good. It depends of course - we don't have pets and we were running the HVAC filter full time also, so it was a pretty clean environment. I would bet that the lack of a pre-filter would kill things fast if you had pet hair or lots of dust. But I suspect 6 months is totally reasonable from a "provides effective filtration" perspective.

Remember that as the filter starts to get dirty, its filtration effectiveness actually increases, though the airflow rate drops. CADR will drop but less than just watching airflow would predict.
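
A rough back-of-the-envelope of that effect (the numbers here are made up purely to illustrate, not measurements):

    # CADR is roughly airflow times single-pass capture efficiency.
    # All numbers below are illustrative, not measured values.
    def cadr(cfm, efficiency):
        return cfm * efficiency

    clean = cadr(cfm=300, efficiency=0.85)   # new filter: full airflow, lower capture
    dirty = cadr(cfm=240, efficiency=0.95)   # loaded filter: 20% less airflow, better capture
    print(clean, dirty)                      # 255.0 vs 228.0: ~11% CADR loss despite 20% less airflow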

dgacmu commented on I made a quieter air purifier   chillphysicsenjoyer.subst... · Posted by u/crescit_eundo
ruralfam · 13 days ago
Appreciate any thoughts you all have re: this post. For years I have been using Noctua NF-P14 fans to circulate air in house to distribute heat in the winter from our wood stove. E.g. cut holes in the walls, and circulate remote rooms using the fans. Has worked great, and the Noctuas have been rock solid.

Recently a daughter moved into a really nice apartment close to a major university/freeway where she will live for the number of years it takes to get a PhD. I got concerned about tire dust. So I am about to start building a really nice DIY air filter using eight Noctua NF-P14s (about 1000 cfm). XMas present.

I really wanted to use merv-13, but got quite worried about air flow restrictions, plus cost to replace (assume monthly). Instead I went with two 12x24 Carter reusable electrostatic merv-8 filters. I use Carter filters on my house blower, and really like them (just washed them... scary how much junk is in household air). Also, I got the 12x24 direct from Carter for a very low price as they were returns. Note: This is NOT a low cost project, but I just got scared re: merv-13 so went with what I know.

Anyway, the final product will NOT be like this guy's DIY. I will use my somewhat decent woodworking skills to fashion a good looking standing "lamp like" appliance that should look good in most living rooms. I am thinking of going with knotless cedar as I really like working with cedar, and there are some mills here in NW WA where one can go to get such wood (not a HomeDepot specialty).

My question is whether an electrostatic merv-8 filter would do well with tire dust. I am not looking to create "clean room" conditions in the apartment. Just get rid of some of the bad stuff. I am very weak re: understanding filters, mervs, etc. APPRECIATE any insights. Thx, RF

dgacmu · 12 days ago
A deep merv-13 with a lot of pleats can have a very reasonable pressure drop - you just have to shop a little more carefully.

I would stick with merv-13 because you'll get solid performance across a lot of things you might want to remove, from viruses to general pm2.5 and things like volatilized cooking oil. Clean air is awesome and tire dust isn't the only thing that's annoying.

dgacmu commented on Google unkills JPEG XL?   tonisagrista.com/blog/202... · Posted by u/speckx
MaxBarraclough · 12 days ago
> I think both Mozilla and Google are OK with this - if it is written in Rust in order to avoid that situation.

It would need to be written in the Safe Rust subset to give safety assurances. It's an important distinction.

dgacmu · 12 days ago
99% safe with 1% unsafe mixed in is far, far better than 100k LOC of C++ -- look at Google's experience with Rust in Android. It's not perfect, and they had one "almost vulnerability", but the rate of vulnerabilities is much, much lower even with a bit of unsafe mixed in.
dgacmu commented on Airbus A320 – intense solar radiation may corrupt data critical for flight   airbus.com/en/newsroom/pr... · Posted by u/pyrophoenix
RealityVoid · 15 days ago
That was my initial confusion as well. It means exactly what you guessed, "Error detection and correction". The term is also spelled out in the report. I asked Claude about it (caveat emptor) and it said EDAC is the correct name for the circuitry and implementation itself whereas ECC is the algorithm. Gemini said that EDAC is the general technique and ECC is one implementation variant. So, at this point, I'm not sure. They are used interchangeably (maybe wrongly so), and in this case, we're referring to, essentially, the same thing, with maybe some small differences in the details. In my professional life, almost always I referred to ECC. In the report, they were only using EDAC. I thought I'd maintain consistency with the report so I tried using EDAC as well.
dgacmu · 15 days ago
The more correct and general answer is that:

- EDAC is a term that encompasses anything used to detect and correct errors. While this almost always involves redundancy of some sort, _how_ it is done is unspecified.

- The term ECC used stand-alone refers specifically to adding redundancy to data in the form of an error correcting code. But it is not a single algorithm - there are many ECC / FEC codes, from Hamming codes used on small chunks of data such as data stored in RAM, to block codes like Reed-Solomon more commonly used on file storage data. (A toy Hamming example is sketched below.)

- The term ECC memory could really just mean "EDAC" memory, but in practice, error correcting codes are _the_ way you'd do this from a cost perspective, so it works out. I don't think most systems would do triple redundancy on just the RAM -- at that point you'd run an independent microcontroller with the RAM to get higher-level TMR.
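
For a concrete toy example of the "error correcting code" point above: a Hamming(7,4) code stores 4 data bits in 7 bits and can correct any single flipped bit. Real ECC DRAM commonly uses wider SEC-DED codes such as (72,64); this sketch is just to illustrate the principle.

    # Toy Hamming(7,4): 4 data bits -> 7-bit codeword, corrects any single bit flip.
    def encode(d1, d2, d3, d4):
        p1 = d1 ^ d2 ^ d4                       # parity over codeword positions 1,3,5,7
        p2 = d1 ^ d3 ^ d4                       # parity over positions 2,3,6,7
        p3 = d2 ^ d3 ^ d4                       # parity over positions 4,5,6,7
        return [p1, p2, d1, p3, d2, d3, d4]     # codeword positions 1..7

    def correct(c):
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]          # recompute the three parity checks
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
        syndrome = s1 + 2 * s2 + 4 * s3         # 0 = clean; otherwise the 1-based error position
        if syndrome:
            c[syndrome - 1] ^= 1                # flip the bad bit back
        return [c[2], c[4], c[5], c[6]]         # recovered data bits

    word = encode(1, 0, 1, 1)
    word[5] ^= 1                                # simulate a single-bit upset
    assert correct(word) == [1, 0, 1, 1]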

u/dgacmu

Karma: 6897 · Cake day: February 12, 2013
About
CTO and founder, Enriched Ag. Computer science professor at Carnegie Mellon University. Ex-University of Utah, MIT, Google Brain.

http://www.cs.cmu.edu/~dga/

Of the firm belief that distributed systems is the coolest area ever, followed by computer science in general. Yes, I'm a bit biased.
