Indeed - the 'dynamic' comes from 'dynamic logic'. Wikipedia: "It is distinguished from the so-called static logic by exploiting temporary storage of information in stray and gate capacitances." What Dennard realised was that you don't actually need to have a separate capacitor to hold the bit value - the bit value is just held on the stray and gate capacitance of the transistor that switches on when that bit's row and column are selected, causing the stray capacitance to discharge through the output line.
Because of that, the act of reading the bit's value means that the data is destroyed. Therefore one of the jobs of the sense amplifier circuit - which converts the tiny voltage from the bit cell to the external voltage - is to recharge the bit.
But that stray capacitance is so small that it naturally discharges through the high, but not infinite, resistance when the transistor is 'off'. Hence you have to refresh DRAM, by reading every bit often enough that it hasn't discharged before you get to it. In practice you only need to read every row that often, because there's actually a sense amplifier for each column, reading all the bit values in that row, with the column address strobe just selecting which column's bit gets output.
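To make that concrete, here's a toy model - none of the numbers are real device parameters, the decay constant, threshold and array size are all made up: each cell is just a decaying charge, a read senses the residual charge against a threshold and rewrites the whole row, and a refresh is nothing more than reading every row in time.

    /* Toy model of leaky DRAM cells with destructive, self-restoring reads.
       All constants are illustrative, not real device parameters. */
    #include <stdio.h>
    #include <math.h>

    #define ROWS         8
    #define COLS         8
    #define LEAK_TAU_MS  50.0   /* assumed charge decay time constant      */
    #define SENSE_THRESH 0.5    /* sense amp decides 1 vs 0 at half charge */

    static double cell[ROWS][COLS];               /* stored charge, 0.0 .. 1.0 */

    static void leak(double ms) {                 /* charge decays over time */
        double k = exp(-ms / LEAK_TAU_MS);
        for (int r = 0; r < ROWS; r++)
            for (int c = 0; c < COLS; c++)
                cell[r][c] *= k;
    }

    static int read_bit(int r, int c) {
        int bit = cell[r][c] > SENSE_THRESH;      /* sense the residual charge */
        for (int i = 0; i < COLS; i++)            /* whole row is sensed and   */
            cell[r][i] = cell[r][i] > SENSE_THRESH ? 1.0 : 0.0;  /* rewritten */
        return bit;                               /* column mux picks one bit  */
    }

    static void refresh_all(void) {               /* refresh = read every row */
        for (int r = 0; r < ROWS; r++)
            (void)read_bit(r, 0);
    }

    int main(void) {
        cell[3][5] = 1.0;                         /* write a 1 */
        leak(20.0); refresh_all();                /* refreshed in time */
        printf("with refresh:    %d\n", read_bit(3, 5));

        cell[3][5] = 1.0;
        leak(60.0);                               /* waited too long: bit fades */
        printf("without refresh: %d\n", read_bit(3, 5));
        return 0;
    }

Run it and the refreshed bit survives while the neglected one fades to zero - the whole trade-off in one screen of C.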
Yes, it totally misses the crucial and non-obvious trade-off which unlocked the benefits: the rest of the system has to take care of periodically rewriting every memory cell so that the charge doesn't dissipate.
In fact it took a bit of time for the CPUs or memory controllers to do it automatically, i.e. without the programmer having to explicitly code the refresh.
Static RAM (in the way it is used) never needs to be refreshed over typical computer power-on times (hours or days). Current DRAM must be refreshed at much faster rates to be useful.
Because SRAM is essentially a flip-flop. It takes at least four transistors to store a single bit in SRAM; some designs use six. And current must flow continuously to keep the transistors in their state, so it's rather power hungry.
One bit of DRAM is just one transistor and one capacitor. Massive density improvements; all the complexity is in the row/column circuitry at the edges of the array. And it only burns power during accesses or refreshes. If you don't need to refresh very often, you can get the power very low. If the array isn't being accessed, the refresh time can be double-digit milliseconds, perhaps triple-digit.
Which of course leads to problems like rowhammer, where rows affected by adjacent accesses don't get additional refreshes like they should (because this has a performance cost -- any cycle spent refreshing is a cycle not spent accessing), and you end up with the RAM reading out different bits than were put in. Which is the most fundamental defect conceivable for a storage device, but the industry is too addicted to performance to tap the brakes and address correctness. Every DDR3/DDR4 chip ever manufactured is defective by design.
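The access pattern behind rowhammer is almost embarrassingly small. Here's a sketch of the classic double-sided hammer on x86 - the hard part (not shown) is choosing addr_a and addr_b so they land in the rows adjacent to a victim row, which means reverse-engineering the controller's address mapping, and whether bits actually flip depends on the particular module:

    #include <emmintrin.h>   /* _mm_clflush */
    #include <stdint.h>

    /* Repeatedly activate two "aggressor" rows. The clflush forces each load
       to miss the CPU caches and go out to DRAM, so every iteration re-opens
       the rows. addr_a/addr_b must map to rows adjacent to the victim row
       (the address-mapping reverse engineering is not shown here). */
    static void hammer(volatile uint8_t *addr_a, volatile uint8_t *addr_b,
                       long iterations) {
        for (long i = 0; i < iterations; i++) {
            (void)*addr_a;                        /* activate row A */
            (void)*addr_b;                        /* activate row B */
            _mm_clflush((const void *)addr_a);
            _mm_clflush((const void *)addr_b);
        }
    }

The clflush is what makes it work: without it the loads would be served from cache and the DRAM rows would never be re-activated.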
The key point is that the refreshes do not need to happen very often. Something like once per 20 ms for each row was doable even by an explicit loop that the CPU had to periodically execute.
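For a feel of what "an explicit loop" meant, here's a hedged sketch for a hypothetical memory-mapped part - the base address, row count, and layout are invented, but the shape is right: touch one location in every row, from a periodic timer, comfortably inside the retention time.

    #include <stdint.h>

    /* Hypothetical memory-mapped DRAM: ROWS rows, row index in the upper
       address bits. Reading any location in a row refreshes the whole row,
       because the sense amplifiers rewrite every cell on that row. */
    #define DRAM_BASE ((volatile uint8_t *)0x8000)  /* made-up base address */
    #define ROWS      128                           /* made-up row count    */
    #define COLS      128

    void refresh_all_rows(void) {
        for (unsigned row = 0; row < ROWS; row++)
            (void)DRAM_BASE[row * COLS];            /* touch one byte per row */
    }

    /* Call refresh_all_rows() from a timer interrupt often enough that every
       row is hit well inside the retention time (e.g. every few ms). */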
And this task soon moved to memory controllers, or at least got done by CPUs automatically without need for explicit coding.
The inventor of DRAM, Robert Heath Dennard, just died a few months ago and I was reading his obit and his history.
I think the long and short of it is that DRAM is cheap. DRAM needs one transistor per data bit. Competing technologies needed far more. SRAM needed six transistors per bit for example.
Dennard figured out how to vastly cut down complexity, thus costs.
DRAM uses a capacitor. Those capacitors essentially hit a hard limit at around 400MHz for our traditional materials a very long time ago. This means that if you need to sequentially read random locations from RAM, you can't do it faster than 400MHz. Our only answer here is better AI prefetchers and less-random memory patterns in our software (the penalty for not prefetching is so great that theoretically less efficient algorithms can suddenly become more efficient if they are simply more predictable).
As to capacitor sizes, we've been at the volume limit for quite a while. When the capacitor is discharged, we must amplify the charge. That gets harder as the charge gets weaker and there's a fundamental limit to how small you can go. Right now, each capacitor has somewhere in the range of a mere 40,000 electrons holding the charge. Going lower dramatically increases the complexity of trying to tell the signal from the noise and dealing with ever-increasing quantum effects.
Getting more capacitors closer means a smaller diameter, but keeping the same volume means making the cylinder longer. You quickly reach a point where even dramatic increases in height (something very complicated to do in silicon) give only minuscule decreases in diameter.
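Back-of-the-envelope on those 40,000 electrons - the 15 fF cell capacitance below is my assumption, just to get an order of magnitude:

    /* Back-of-the-envelope: charge and voltage on a DRAM cell.
       The 15 fF cell capacitance is an assumption for illustration. */
    #include <stdio.h>

    int main(void) {
        const double e      = 1.602e-19;  /* elementary charge, coulombs */
        const double n      = 40e3;       /* electrons per cell (figure above) */
        const double c_cell = 15e-15;     /* assumed cell capacitance, farads */

        double q = n * e;                 /* stored charge  ~6.4e-15 C */
        double v = q / c_cell;            /* cell voltage   ~0.43 V    */

        printf("charge  = %.2e C\n", q);
        printf("voltage = %.2f V across %.0f fF\n", v, c_cell * 1e15);
        return 0;
    }

So the sense amplifier gets a few femtocoulombs and a few hundred millivolts to work with - and that charge then gets shared onto a bitline with much more capacitance than the cell itself, which shrinks the signal further.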
What does “faster than 400MHz” mean in this context? Does that mean you can’t ask for a unit of memory from it more than 400M times a second? If so, what’s the basic unit there, a bit? A word?
I built a little CPU in undergrad but never got around to building RAM and admit it’s still kind of a black box to me.
Bonus question: When I had an Amiga, we’d buy 50 or 60ns RAM. Any idea what that number meant, or what today’s equivalent would be?
5nm can hold roughly a gigabyte of SRAM on a CPU-sized die; that's around $130/GB, I believe. At some point 5nm will be cheap enough that we can start considering replacing DRAM with SRAM directly on the chip (i.e. an L4 cache). I wonder how big of a latency and bandwidth bonus that'd be. You could even go for a larger node size without losing much capacity, for half the price.
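Rough sanity check on "roughly a gigabyte on a CPU-sized die" - the bit-cell area is the number I'm least sure of (~0.021 µm² is the commonly quoted high-density 6T figure for a 5nm-class process), and this ignores all the peripheral circuitry, redundancy, and wiring:

    /* Rough die area for 1 GiB of SRAM bit cells, ignoring all overhead.
       Bit-cell area is an assumed figure (~0.021 um^2 for HD 6T at "5nm"). */
    #include <stdio.h>

    int main(void) {
        const double bitcell_um2 = 0.021;                     /* assumed */
        const double bits        = 8.0 * 1024 * 1024 * 1024;  /* 1 GiB   */

        double area_mm2 = bits * bitcell_um2 / 1e6;           /* um^2 -> mm^2 */
        printf("~%.0f mm^2 of raw bit cells for 1 GiB\n", area_mm2);
        return 0;
    }

Call it ~180 mm² of raw bit cells, so with overhead on top you're indeed in big-but-buildable die territory.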
SRAM also requires more power than DRAM. The simple, regular structure of SRAM arrays (compared to other logic) makes it possible to get good yields through redundancy and error-correction codes, so you could have giant monolithic dies, but information can't exceed the speed of light in a medium. There just isn't enough time for the signals to propagate to get the latency you expect of an L3 cache out of big, (relatively) far-away dies containing gigabytes of SRAM. Also, moving all that data around to perform computations without caching would be terribly wasteful, given how much energy is needed just to move the data. Instead you would probably end up with something closer to the compute-in-memory concept, mapping computation to ALUs close to the data, with an at-least-two-tier network (on-die, inter-die) to support reductions.
I have a recollection of a design where microprocessor reads were used to refresh DRAM contents. Late 1970s. I thought it was in an early Motorola 6800 book. I can't find it now, nor any mention of the technique. It would slow down program operation for sure. Maybe my recollection is wrong, not sure.
I miss RAM. I feel like if you lived through that 90s RAM frenzy, you probably miss RAM too. It was crazy how quickly we moved through SDRAM/DDR; prices dropped and you could make real increases in performance year over year for not much money. I'm sure some of it was the software being able to capture the hardware improvements, but that certainly was my favourite period in tech so far.
I am confused by this comment. You said "RAM" (contrast to "DRAM" in the article title) but I think you are talking about DRAM sticks? But those have not gone away (other than with some laptops where it's soldered on and not upgradable).
Going from 8MB to 32MB in the 90s is still comparable to going from 8GB to 32GB today.
One difference is just that the price isn't dropping at the same rate anymore [1], so it doesn't make as much sense to buy small and re-buy next year when bigger chips are cheaper (they won't be much cheaper).
Another is that DRAM speed is at the top of an S-curve [2], so there's not that same increase in speed year-over-year, though arguably the early 2000's were when speeds most dramatically increased.
Most RAM found in consumer PCs during the 90s was still DRAM, including SDRAM, EDO, and Rambus. I believe OP is just being nostalgic over the period when RAM upgrades were a very exciting thing, as hardware was changing very quickly in that era and each year felt considerably more capable than the last.
I started late, but I remember when I upgraded my system with an additional 64 MB stick I was able to reduce the GTA 3 load time from one island to another from 20 seconds to 1.
And at that time I also learned how critical it was to check your RAM for errors. I reinstalled Win98 and Windows 2000 so many times before I figured this out.
Though I guess the 90s are _the_ best tech era by far, and will be for some time to come, because that's when capable and modular computing machines became a real commodity.
"8K video recording" - does anyone really need this? Seems like for negligible gain in quality people are pushed to sacrifice their storage & battery, and so upgrade their hardware sooner...
Yes, they record at higher resolutions so that the director and the camera operator have greater flexibility later when they realize they need a different framing - or just to fix the camera operator's errors by cropping parts of the picture out. They need the extra pixels/captured area to be able to do this.
I think the studios, and anyone doing video production, would probably use an 8K toolchain if possible. As others have pointed out, this lets you crop and modify video while still being able to output 4K without having to upscale.
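The arithmetic behind that is pleasingly exact: 8K UHD is precisely twice the width and height of 4K UHD, so any quarter of the frame is already a native 4K image.

    /* 8K UHD vs 4K UHD: how far you can crop and still deliver native 4K. */
    #include <stdio.h>

    int main(void) {
        int w8 = 7680, h8 = 4320;   /* 8K UHD */
        int w4 = 3840, h4 = 2160;   /* 4K UHD */

        printf("crop factor: %dx horizontally, %dx vertically\n",
               w8 / w4, h8 / h4);
        printf("a 2x punch-in (or reframing anywhere in the frame) "
               "still yields a full %dx%d image\n", w4, h4);
        return 0;
    }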
You are thinking from a consumer point of view - consumer as in Jane taking videos of her cats, for which 8K, or even 4K, would be overkill. You can set your recording device to record in 720p or 1080p and so on to suit the purpose.
For commercial purposes it's another story and it makes sense to consider shooting in 8K if possible, thus the option should exist.
I need more than 8K. I'm working at microscopic levels when I study minerals; I need as much resolution as I can possibly get, up to the limit of optical diffraction.
(I think I more or less know, but I’d rather talk about it than look it up this morning.)
What’s the likely ETA for DRAM?
https://jcmit.net/memoryprice.htm
Recently DDR4 RAM is available at well under $2/GB, some closer to $1/GB.
I don't think the latter (SRAM capacity remaining the same per area?) has anything to do with Dennard scaling.
Update: Today, marking the 56th anniversary...1966
Please forgive my pedantry but 58th. It was a busy year.
[1] https://aiimpacts.org/trends-in-dram-price-per-gigabyte/
[2] http://blog.logicalincrements.com/2016/03/ultimate-guide-com...
This statement makes it difficult to believe you were there.
Different use cases exist:
Record 8K text and you can zoom in and read things. Record 8K and crop, or 'zoom' in, without quality loss.
Does everyone need this? Probably not, but we are on HN, not at a coffee party.
Honest question. I hope I learn something about studying minerals!
On a large TV though, it's probably an improvement over 4K for sports where you need to track a small item moving fast.