Readit News
seunosewa · a year ago
The article doesn't properly explain how DRAM is different from SRAM. DRAM has to constantly refresh itself in order not to 'forget' its contents.
kruador · a year ago
Indeed - the 'dynamic' comes from 'dynamic logic'. Wikipedia: "It is distinguished from the so-called static logic by exploiting temporary storage of information in stray and gate capacitances." What Dennard realised was that you don't actually need to have a separate capacitor to hold the bit value - the bit value is just held on the stray and gate capacitance of the transistor that switches on when that bit's row and column are selected, causing the stray capacitance to discharge through the output line.

Because of that, the act of reading the bit's value means that the data is destroyed. Therefore one of the jobs of the sense amplifier circuit - which converts the tiny voltage from the bit cell to the external voltage - is to recharge the bit.

But that stray capacitance is so small that it naturally discharges through the high, but not infinite, resistance of the transistor when it's 'off'. Hence you have to refresh DRAM by reading every bit often enough that it hasn't discharged before you get to it. In practice you only need to read every row that often, because there's actually a sense amplifier for each column, reading all the bit values in the selected row, with the column address strobe just selecting which column's bit gets output.
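That destructive-read-plus-writeback cycle can be sketched as a toy model (hypothetical Python; the class and method names are illustrative, not real hardware behaviour):

```python
# Toy model of a DRAM bit cell: reading drains the stray capacitance,
# so the sense amplifier must write the value back after every read.

class DramCell:
    def __init__(self, value: int):
        self.charge = value  # 1 = charged, 0 = discharged

    def raw_read(self) -> int:
        """Destructive read: the capacitance discharges into the bit line."""
        value = self.charge
        self.charge = 0  # the charge is gone after the read
        return value

    def sensed_read(self) -> int:
        """What the sense amplifier does: read, then recharge (write back)."""
        value = self.raw_read()
        self.charge = value  # restore the bit
        return value

cell = DramCell(1)
assert cell.sensed_read() == 1  # the read works...
assert cell.sensed_read() == 1  # ...and the write-back preserved the bit

bare = DramCell(1)
assert bare.raw_read() == 1
assert bare.raw_read() == 0     # without write-back, the bit is lost
```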

praptak · a year ago
Yes, it totally misses the crucial and non-obvious trade-off that unlocked the benefits: the rest of the system has to take care of periodically rewriting every memory cell so that the charge doesn't dissipate.

In fact it took a while before CPUs or memory controllers did this automatically, i.e. without the programmer having to explicitly code the refresh.

dehrmann · a year ago
It isn't the point of the article, but this is true of every storage medium. It's just a question of milliseconds or years.
asdefghyk · a year ago
Static RAM (given how it is used) never needs to be refreshed within typical power-on times (hours or days). Current DRAM must be refreshed at much faster rates to remain useful.
kstrauser · a year ago
Why would we use DRAM, then? It seems better not to have to refresh it all the time.

(I think I more or less know, but I’d rather talk about it than look it up this morning.)

myself248 · a year ago
Because SRAM is essentially a flip-flop. It takes at least four transistors to store a single bit in SRAM; typical designs use six. And current must continuously flow to keep the transistors in their state, so it's rather power-hungry.

One bit of DRAM is just one transistor and one capacitor. Massive density improvements; all the complexity is in the row/column circuitry at the edges of the array. And it only burns power during accesses or refreshes. If you don't need to refresh very often, you can get the power very low. If the array isn't being accessed, the refresh time can be double-digit milliseconds, perhaps triple-digit.

Which of course leads to problems like rowhammer, where rows affected by adjacent accesses don't get additional refreshes like they should (because this has a performance cost -- any cycle spent refreshing is a cycle not spent accessing), and you end up with the RAM reading out different bits than were put in. Which is the most fundamental defect conceivable for a storage device, but the industry is too addicted to performance to tap the brakes and address correctness. Every DDR3/DDR4 chip ever manufactured is defective by design.
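The density arithmetic from the first paragraph (6T SRAM vs 1T1C DRAM) is easy to check; the transistor counts are the comment's, the gibibyte sizing is illustrative:

```python
# Back-of-the-envelope transistor counts per gibibyte (illustrative figures).
BITS = 8 * 2**30                # 1 GiB = 8 Gibit

sram_transistors = 6 * BITS     # common 6T SRAM cell
dram_transistors = 1 * BITS     # 1T1C DRAM cell (plus one capacitor per bit)

print(f"SRAM: {sram_transistors / 1e9:.1f}B transistors")
print(f"DRAM: {dram_transistors / 1e9:.1f}B transistors")
assert sram_transistors == 6 * dram_transistors  # the 6x density gap
```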

praptak · a year ago
The key point is that the refreshes do not need to happen very often. Something like once per 20 ms for each row was doable even by an explicit loop that the CPU had to periodically execute.

And this task soon moved to memory controllers, or at least got done by CPUs automatically without the need for explicit coding.
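The arithmetic behind that 20 ms figure can be sketched; the row count and per-row timing below are assumed, era-plausible numbers, not from the thread:

```python
# How much CPU time would an explicit software refresh loop cost?
rows = 128                   # rows in an early 16K-bit DRAM array (assumption)
refresh_interval_s = 0.020   # every row must be touched within 20 ms
row_read_s = 500e-9          # ~500 ns per row access incl. loop overhead (assumption)

refresh_time_s = rows * row_read_s               # time to walk every row once
overhead = refresh_time_s / refresh_interval_s   # fraction of CPU time consumed

assert overhead < 0.01  # well under 1% of CPU time
print(f"refresh overhead: {overhead:.2%}")
```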

zoeysmithe · a year ago
The inventor of DRAM, Robert Heath Dennard, just died a few months ago and I was reading his obit and his history.

I think the long and short of it is that DRAM is cheap. DRAM needs one transistor per data bit. Competing technologies needed far more. SRAM needed six transistors per bit for example.

Dennard figured out how to vastly cut down complexity, thus costs.

MichaelZuo · a year ago
Dennard scaling for SRAM has certainly halted, as demonstrated by TSMC’s 3nm process vs 5 nm.

What’s the likely ETA for DRAM?

hajile · a year ago
Years ago.

DRAM uses a capacitor. With our traditional materials, those capacitors hit a hard limit at around 400MHz a very long time ago. This means that if you need to sequentially read random locations from RAM, you can't do it faster than 400MHz. Our only answer here is better AI prefetchers and less random memory patterns in our software (the penalty for not prefetching is so great that theoretically less efficient algorithms can become more efficient in practice simply by being more predictable).
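The predictability point can be illustrated with a sketch; both loops below do identical work, but only the first has the regular access pattern a prefetcher can exploit (the code doesn't measure timing, it just shows the two patterns):

```python
import random

N = 1 << 20
data = list(range(N))

# Sequential traversal: the next address is always predictable,
# so a hardware prefetcher can fetch ahead and hide DRAM latency.
seq_sum = 0
for i in range(N):
    seq_sum += data[i]

# Random traversal: the same total work, but every access is a
# surprise, so each one can pay the full DRAM access latency.
order = list(range(N))
random.shuffle(order)
rand_sum = 0
for i in order:
    rand_sum += data[i]

assert seq_sum == rand_sum  # identical results, very different memory behaviour
```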

As to capacitor sizes, we've been at the volume limit for quite a while. When the capacitor is discharged, we must amplify the charge. That gets harder as the charge gets weaker and there's a fundamental limit to how small you can go. Right now, each capacitor has somewhere in the range of a mere 40,000 electrons holding the charge. Going lower dramatically increases the complexity of trying to tell the signal from the noise and dealing with ever-increasing quantum effects.
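To put that 40,000-electron figure in perspective, here's the charge and voltage arithmetic (the ~25 fF cell capacitance is an assumed, typical value, not from the comment):

```python
ELECTRON_CHARGE = 1.602e-19   # coulombs
electrons = 40_000
cell_capacitance = 25e-15     # ~25 fF (assumed typical DRAM cell value)

charge = electrons * ELECTRON_CHARGE   # Q = N * e  -> ~6.4 fC
voltage = charge / cell_capacitance    # V = Q / C  -> ~0.26 V

assert 6e-15 < charge < 7e-15
assert 0.2 < voltage < 0.3  # a couple hundred millivolts for the sense amp to detect
print(f"stored charge: {charge * 1e15:.1f} fC, cell voltage: {voltage * 1000:.0f} mV")
```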

Packing the capacitors closer together means a smaller diameter, but keeping the same volume then means making each cylinder taller. You quickly reach a point where even dramatic increases in height (something very complicated to do in silicon) buy only minuscule decreases in diameter.

kstrauser · a year ago
What does “faster than 400MHz” mean in this context? Does that mean you can’t ask for a unit of memory from it more than 400M times a second? If so, what’s the basic unit there, a bit? A word?

I built a little CPU in undergrad but never got around to building RAM and admit it’s still kind of a black box to me.

Bonus question: When I had an Amiga, we’d buy 50 or 60ns RAM. Any idea what that number meant, or what today’s equivalent would be?

aidenn0 · a year ago
If that's the case, why haven't we switched to SRAM? Isn't it only about 4x the price at any given process node?
Salgat · a year ago
5nm can hold roughly a gigabyte of SRAM on a CPU-sized die; that's around $130/GB, I believe. At some point 5nm will be cheap enough that we can start considering replacing DRAM with SRAM directly on the chip (aka L4 cache). I wonder how big of a latency and bandwidth bonus that'd be. You could even go for a larger node size without losing much capacity, for half the price.
crest · a year ago
SRAM also requires more power than DRAM. The simple, regular structure of SRAM arrays (compared to other logic) makes it possible to get good yields through redundancy and error-correction codes, so you could build giant monolithic dies. But signals can't exceed the speed of light in a medium: there just isn't enough time for them to propagate across a big die containing gigabytes of SRAM to get the latency you expect of an L3 cache. And moving all that data around to perform computations without caching would be terribly wasteful, given how much energy is needed just to move the data. Instead you would probably end up with something closer to the compute-in-memory concept, mapping computation to ALUs close to the data, with an at least two-tier network (on-die, inter-die) to support reductions.

bgnn · a year ago
5nm will never be that cheap. The performance benefit would be easily 2x or more though.
wmf · a year ago
Now? Prices have been flat for 15 years and DRAM has been stuck on 10 nm for a while.
philipkglass · a year ago
That's overstating the flatness of prices. In 2009, the best price recorded here was 10 dollars per gigabyte:

https://jcmit.net/memoryprice.htm

Recently DDR4 RAM is available at well under $2/GB, some closer to $1/GB.

cubefox · a year ago
> Dennard scaling for SRAM has certainly halted, as demonstrated by TSMC’s 3nm process vs 5 nm.

I don't think the latter (SRAM capacity remaining the same per area?) has anything to do with Dennard scaling.

ksec · a year ago
Not soon as DRAM is mostly on older node. But overall cost reduction of DRAM is moving very very slowly.
asdefghyk · a year ago
I have a recollection of a design where microprocessor reads were used to refresh DRAM contents, late 1970s. I thought it was in an early Motorola 6800 book, but I can't find it now, nor any mention of the technique. It would slow down program operation for sure. Maybe my recollection is wrong, not sure.
WarOnPrivacy · a year ago
> updated June 2024

> Update: Today, marking the 56th anniversary...1966

Please forgive my pedantry but 58th. It was a busy year.

neom · a year ago
I miss RAM. I feel like if you lived through that 90s RAM frenzy, you probably miss RAM too. It was crazy how quickly we moved through SDRAM/DDR; prices dropped and you could make real increases in performance year over year for not much money. I'm sure some of it was the software being able to capture the hardware improvements, but that was certainly my favorite period in tech so far.
gregmac · a year ago
I am confused by this comment. You said "RAM" (contrast to "DRAM" in the article title) but I think you are talking about DRAM sticks? But those have not gone away (other than with some laptops where it's soldered on and not upgradable).

Going from 8MB to 32MB in the 90s is still comparable to going from 8GB to 32GB today.

One difference is just that the price isn't dropping at the same rate anymore [1], so it doesn't make as much sense to buy small and re-buy next year when bigger chips are cheaper (they won't be much cheaper).

Another is that DRAM speed is at the top of an S-curve [2], so there's not that same increase in speed year-over-year, though arguably the early 2000's were when speeds most dramatically increased.

[1] https://aiimpacts.org/trends-in-dram-price-per-gigabyte/

[2] http://blog.logicalincrements.com/2016/03/ultimate-guide-com...

emptiestplace · a year ago
> Going from 8MB to 32MB in the 90s is still comparable to going from 8GB to 32GB today.

This statement makes it difficult to believe you were there.

tomnipotent · a year ago
Most RAM found in consumer PCs during the 90s was still DRAM, including SDRAM, EDO, and Rambus. I believe OP is just being nostalgic about the period when RAM upgrades were a very exciting thing, as hardware was changing very quickly in that era and each year felt considerably more capable than the prior.
thr0w · a year ago
Getting a new stick of RAM was so damn exciting in the 90s.
myself248 · a year ago
And putting the old ones in a SIMM stack to keep using them on a new motherboard, because no one would be so crazy as to throw away good DRAM.
sva_ · a year ago
Sad indeed. All that was taken away once it became possible to download more ram[0].

0. https://downloadmoreram.com/

bloedsinnig · a year ago
I started late, but I remember when I upgraded my system with an additional 64 MB stick, I was able to reduce the GTA 3 load time between islands from 20 seconds to 1.

And at that time I also learned how critical it is to check your RAM for errors. I reinstalled Win98 and Windows 2000 over and over until I figured this out.

IshKebab · a year ago
Nah, the biggest jump in performance by far was SSDs. It was such a huge step that software had no chance to "catch up" initially.
szundi · a year ago
It's happening, Windows gets slower every year to adapt to SSDs.
UltraSane · a year ago
RAM speeds are still improving pretty fast. I'm running DDR5-6000, and DDR5-8300 is available. GDDR7 uses PAM3 to get 40 Gbps.
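Those transfer rates translate to peak bandwidth with simple arithmetic (assuming a standard 64-bit DDR channel):

```python
def peak_bandwidth_gb_s(mega_transfers: int, bus_bits: int = 64) -> float:
    """Peak bandwidth = transfers/s x bytes per transfer."""
    return mega_transfers * 1e6 * (bus_bits / 8) / 1e9

# DDR5-6000 -> 48 GB/s per channel; DDR5-8300 -> 66.4 GB/s per channel
assert abs(peak_bandwidth_gb_s(6000) - 48.0) < 1e-9
assert abs(peak_bandwidth_gb_s(8300) - 66.4) < 1e-9
```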
malfist · a year ago
How does that contrast with the increased CAS latency, in real-world terms? (Actually asking, not being combative; I don't know.)
sulandor · a year ago
can relate

though I guess the 90s are _the_ best tech era, by far and for some time to come, because that's when capable and modular computing machines became a real commodity.

metta2uall · a year ago
"8K video recording" - does anyone really need this? Seems like for negligible gain in quality people are pushed to sacrifice their storage & battery, and so upgrade their hardware sooner...
szundi · a year ago
Yes, they record at higher resolutions so the director and the camera operator have greater flexibility later when they realize they need a different framing, or to fix the cameraman's errors by cropping parts of the picture out. They need the extra pixels/captured area to be able to do this.
scheme271 · a year ago
I think studios and anyone doing video production would probably use an 8K toolchain if possible. As others have pointed out, this lets you crop and modify video while still being able to output 4K without having to upscale.
Pet_Ant · a year ago
Well for starters 8k video lets you zoom in and crop and still get 4k in the end.
metta2uall · a year ago
I think 4K is also too much in the vast majority of cases.
AlexDragusin · a year ago
You are thinking from a consumer point of view, consumer as in Jane taking videos of her cats, for which 8K, even 4K, would be overkill. You can set your recording device to record in 720p or 1080p and so on to suit the purpose.

For commercial purposes it's another story and it makes sense to consider shooting in 8K if possible, thus the option should exist.

bloedsinnig · a year ago
Yes, why not?

Different use cases exist:

Record 8K text and you can zoom in and read things. Record in 8K and crop without quality loss, or 'zoom' in.

Does everyone need this? Probably not, but we're on HN, not at a coffee party.

lightedman · a year ago
I need more than 8K. I'm working at microscopic levels when I study minerals, and I need as much resolution as I can possibly get, up to the limit of optical diffraction.
kstrauser · a year ago
Are you actually recording movies of them though?

Honest question. I hope I learn something about studying minerals!

HPsquared · a year ago
8K is important for VR video; otherwise, not so much. There's a really noticeable step up from 4K in that area.

On a large TV, though, it's probably an improvement over 4K for sports, where you need to track a small item moving fast.

DrillShopper · a year ago
Yes, it makes post-production SO MUCH EASIER