Incredibly "cool"! The physical-chip-swapping version of the cold-boot attack is extremely general, but it's always been very labor intensive—that's why automation like this could be a game changer.
It may require support from the motherboard.
EFI firmware exposes a variable (under /sys/firmware/efi/efivars/) named MemoryOverwriteRequestControl that, when set, is supposed to instruct the firmware to overwrite memory on the next boot. If you set it and can somehow trigger a reboot, memory should not be recoverable.
You can also set init_on_free=1 in the Linux cmdline; the kernel will then overwrite freed memory. That way, even if a reboot is physically interrupted, or the EFI firmware does not honor the variable mentioned above, encryption keys and any other private data that have already been freed will have been overwritten.
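For anyone curious what setting that variable actually involves, here is a hedged sketch of the efivarfs write. The GUID is the one from the TCG Platform Reset Attack Mitigation specification, so double-check it on your platform; writing requires root, and efivarfs may mark the file immutable (clear that with `chattr -i` first):

```python
import struct

# efivarfs files begin with a 4-byte little-endian attributes field,
# followed by the variable data.
EFI_VARIABLE_NON_VOLATILE = 0x1
EFI_VARIABLE_BOOTSERVICE_ACCESS = 0x2
EFI_VARIABLE_RUNTIME_ACCESS = 0x4

# Assumed path/GUID (from the TCG Platform Reset Attack Mitigation
# spec); verify against your own firmware before relying on it.
MOR_VAR = ("/sys/firmware/efi/efivars/"
           "MemoryOverwriteRequestControl-e20939be-32d4-41be-a887-5f4a8912c992")

def mor_payload(request_overwrite: bool = True) -> bytes:
    """Build the efivarfs payload: 4 attribute bytes + one data byte."""
    attrs = (EFI_VARIABLE_NON_VOLATILE
             | EFI_VARIABLE_BOOTSERVICE_ACCESS
             | EFI_VARIABLE_RUNTIME_ACCESS)
    data = b"\x01" if request_overwrite else b"\x00"
    return struct.pack("<I", attrs) + data

# Actual write (needs root; may need `chattr -i` on the file first):
# with open(MOR_VAR, "wb") as f:
#     f.write(mor_payload())
```

The data byte's bit 0 is the overwrite request; the firmware is supposed to clear it again after honoring it on the next boot.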
> You can also set init_on_free=1 in linux cmdline, the kernel will overwrite freed memory.
Does this only work with kernel buffers? I’m wondering how glibc handles freed memory. I feel like there’s a good chance it doesn’t always notify the kernel that the memory has been freed.
Non-Pro Ryzen CPUs can use Transparent SME (TSME) if the motherboard includes support for it (and many of them do). Like ECC on non-Pro Ryzen chips, it isn't "officially supported," but the option is there and it works.
Almost none of the critical embedded infrastructure we depend on addresses this kind of attack.
Because it's beyond impractical and to attempt to "defend" against such things is effectively fighting against the base principles of (physical) property ownership?
The direction that "security" research has headed in is really disturbing.
I think this makes sense, if only because physical property ownership has always been an illusion.
You don’t really “own” anything as long as someone with a badge and a gun can break down your door and take it from you. I don’t care if you live in a democracy or a dictatorship, it happens in both.
Encrypted information, on the other hand, can theoretically be “owned” if it’s implemented correctly. No police, no court or law, no government, nothing can take your encrypted digital property from you. It’s probably the most real manifestation of property humans have ever come up with.
> Encrypted information, on the other hand, can theoretically be “owned” if it’s implemented correctly. No police, no court or law, no government, nothing can take your encrypted digital property from you.
Exactly! You can always access it from your jail cell.
Oh wait
If I am forced to surrender my phone at a border crossing, for example, I would like to know that the fact I powered it off is sufficient to deter these kinds of attacks.
It's relatively easy to thwart this attack--the CPU only sends encrypted data to external memory chips. The devil is in the implementation--encryption adds latency to memory access.
They can still overwrite your OS/kernel with a patched version that will upload the keys somewhere on first boot+unlock. This + a dump done at a border will let them get all your data.
Must be 25 years ago now, I was chatting with a family friend (my friend's dad) and he told me roughly what it was that he did:
"If I have physical access to your computer, I can probably get whatever data you have off of it". It was the first time I had even heard of computer security as a concept.
I don't know precisely what methods he was using back then (it was the 90s) but he was dead serious.
The point he made has become more and more true over time: whoever can touch the machine should be presumed to have better access than the admin.
I do not think that holds up today. While a wide variety of unsecured vectors still exists, they are closing rapidly. Look at any of the closed hardware platforms (Xbox, PlayStation, iPhone, TPM, etc.). People with strong desires to exfiltrate data are stymied.
I thought ram chips had volatile memory. As soon as it loses power it effectively wipes all data, no? How does the physical removal process preserve the contents? Are modern RAM chips not volatile anymore?
It doesn't lose content as soon as you lose power.
One fun way I saw this was when working on LCD support in a bootloader. You do a quick power cycle and if the framebuffer location is fixed in memory and you don't initialize the contents, you may see faded traces of the picture from previous boot.
In my experience that's more of a side effect from TFT displays. Each pixel has its own gate and can hold a charge for quite a long time.
It can also be a side effect of older LCDs, when power is lost the crystal can take a long amount of time to depolarize. If you apply backlight you'll see an afterimage.
LCDs are driven by the controller and have no knowledge of the framebuffer location, or even access to the host address and data busses. So if you see an afterimage, that's in the pixel.
No, physical attacks have been possible for many years now. They just require a very high level of manual skill. RAM chips must be frozen immediately (as fast as possible after power goes down, in case you don't have control over when the computer loses power) with liquefied gas and then moved into a RAM-reading jig.
Or, you know, a can of compressed air held upside down and a bootable USB will do it, unless the EFI is locked down and USB boot is disabled; in that case you also need to move the RAM over to your own laptop.
I remember switching my Commodore 64 off and on rapidly while it was displaying something on the bitmap graphic screen. The system would reset, but you could enable graphic mode and see 90% of what was there.
At normal DRAM module operating temperatures, and assuming you want to stick to JEDEC specifications, retention times may typically be 50 µs[1]. This low number is due to the standard's extremely low tolerance for errors, combined with the inevitable problem that some MOS transistor/capacitor cells are defective and leak at a much higher rate than typical cells. Hence the standard has to cater to the weakest cell.
In [2], the authors used readily available compressed gas[3] to chill (probably to approximately -50 °C) DDR1 and DDR2 DRAM modules during operation, then, whilst the DRAM modules remained chilled, cut power to the computer, waited a period of time and then read out the DRAM module data for comparison. With power cut for 10 minutes, a read error rate of just 36 bytes per megabyte was observed. This increased to an error rate of 1700 bytes per megabyte at 60 minutes when chilling to a lower temperature using LN2.
The authors of [2] also tested DDR1 and DDR2 DRAM retention without cooling. Results varied between DRAM modules (different sense amplifiers, different MOS transistor/capacitor fabrication methods, different heatsink designs, etc.), but generally 5 minutes unpowered would erase all data, though some remnants could still remain (e.g., enough to make out a faint outline of a photo stored in memory).
Note that like the 2008 paper[2], this paper also uses DDR1 and DDR2 DRAM chips. The main reason for this choice is DDR3+ specifications and modern memory controllers implement "data scrambling"[4], originally not for security reasons (that's just a bonus side effect), but for electrical reasons to reduce di/dt noise on the data bus. "Data scrambling" means that data is XOR'd with a pseudorandom function, thus if you write 11111... or 00000... you'd expect on average the data bus to have the same average electrical characteristics. Since row hammer, the pseudorandom function has been improved to provide security against cold boot attacks. It's possible for the memory controller to use low-latency strong encryption such as ChaCha8. AMD "Memory Guard" uses AES128/NIST SP 800-90[5] and Intel's "Total Memory Encryption" uses AES128/256-XTS[6].
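The "data scrambling" idea above can be illustrated with a toy XOR scrambler. This is a hedged sketch only: real memory controllers derive per-address seeds and, as noted, may use actual ciphers; the 16-bit LFSR and the function names here are purely illustrative.

```python
def lfsr_stream(seed: int, nbytes: int) -> bytes:
    """Toy 16-bit Fibonacci LFSR keystream (maximal polynomial
    x^16 + x^14 + x^13 + x^11 + 1). Illustrative only."""
    state = (seed & 0xFFFF) or 1  # state must be nonzero
    out = bytearray()
    for _ in range(nbytes):
        byte = 0
        for _ in range(8):
            # feedback from taps 16, 14, 13, 11
            bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
            state = (state >> 1) | (bit << 15)
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

def scramble(data: bytes, seed: int) -> bytes:
    """XOR data with the pseudorandom stream; scrambling and
    descrambling are the same operation."""
    ks = lfsr_stream(seed, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))
```

Note the electrical motivation: writing all-ones through the scrambler produces a roughly balanced bit pattern on the bus, while applying the same function again recovers the original data.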
> They also conducted a similarly successful attack on DDR3 DRAM in a Cisco IP Phone 8800 series to access the runtime ARM TrustZone memory.
Generic Pattern/Idea expressed in above statement:
a) IF a given CPU implements a security processor (i.e., Intel Management Engine, AMD PSP, Apple Secure Enclave, an ARM security coprocessor, etc.) then:
b) IF that security processor uses a small part of physical RAM of the computer it is in for its activities, and that physical RAM, during those activities is inaccessible to programs, OS'es and Hypervisors alike, then:
c) It may still be possible to get a memory dump of that protected subsection of RAM during those activities by one or more methods, including (but not limited to!) the physical freezing/robotic extraction of the RAM...
I would suggest that if point-in-time RAM extraction is the desired goal (and it apparently is), it might be far simpler to connect an FPGA to appropriate RAM and have that device emulate, i.e. proxy (via the signal paths), the RAM of the device whose data is to be extracted...
Also... prediction: Future RAM will be built such that it can be easily proxied/tapped/tee'd/traced/interposed by third party hardware devices... That is, future RAM interconnect signal paths will be intentionally engineered for that purpose...
In future computers, CPU, memory and all devices will be modular; where all interlinking signal paths could be proxied/tapped/tee'd/traced/interposed by third party hardware devices...
Think of it this way; what the original IBM-PC did for computing of the era (open the bus, open the backplane to accept 3rd party expansion cards) -- should be done between every possible set of electronic components in future PC's/computing devices...
Why not build a thermocouple into the RAM package and program the operating system to overwrite memory with random data on detection of a sudden temperature drop?
Using the OS to do this is unlikely to be effective; the attacker with this access could usually halt the CPU before they remove RAM. As in the article, the best mitigation against this kind of attack is memory encryption, either in hardware or software (although in software, you have to figure out how to keep your keys out of RAM).
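For what it's worth, the detection half of the parent's idea is the easy part; reading the sensor fast enough and completing the wipe in time are the hard, platform-specific parts. A sketch of the pure detection logic (the `sudden_drop` helper and its thresholds are invented for illustration):

```python
def sudden_drop(samples, window=5, threshold=-2.0):
    """Return True if temperature falls faster than `threshold`
    (degrees C per sample) averaged over the trailing window.
    Pure logic only; sensor polling and memory zeroization would
    have to be wired up per-platform."""
    if len(samples) < window:
        return False
    recent = samples[-window:]
    rate = (recent[-1] - recent[0]) / (window - 1)
    return rate < threshold
```

Even granting the sketch, the reply above stands: an attacker who can chill and pull the DIMMs can usually also halt the CPU before the wipe runs.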
There are security-oriented ICs of various types (flash, RAM, secure elements) which do clear data when a tripwire fires, in a lot of ways; some even have onboard energy storage of some kind which allows them to clear all data as a "dying breath" when the package is tampered with. But COTS DDR RAM needs to be cheap and doesn't usually have this kind of threat model.
You can also pot the PCB, and/or enclose it in a physically secure enclave where if it detects ambient light, actuation of a switch, breach of a conductive trace embedded in the case around it, etc, it wipes the RAM and, depending on your paranoia levels, blows the hardware up. Some key management Hardware Security Modules do this.
That's still a reactive defense. A safer approach is to encrypt the data and keep the keys in the CPU, which is what TME (intel) and SME (amd) do. The downside is some added latency, power consumption and die area.
Here are some photos from our original experiments 15 years ago, to help you picture what's involved in doing these attacks manually:
https://web.archive.org/web/20080225131822/http://citp.princ...
Needless to say, our setup (and approach to lab safety) was rather primitive by today's standards:
https://web.archive.org/web/20100616201843/http://citp.princ...
Do I need to link to the XKCD comic?
Power off if the cover is opened. ESD is a bitch, right? Gotta protect against that...
The "natural law of security" -- If you can access it legally, you can access it illegally -- remains true.
[1] Page 19 (PDF), https://www.egr.msu.edu/classes/ece410/mason/files/Ch13.pdf
[2] https://www.usenix.org/legacy/event/sec08/tech/full_papers/h...
[3] https://en.wikipedia.org/wiki/Freeze_spray
[4] https://web.archive.org/web/20190616183914/https://www.eecs....
[5] https://www.amd.com/system/files/documents/amd-memory-guard-...
[6] https://cdrdv2-public.intel.com/679154/multi-key-total-memor...
https://wootconference.org/slides/3-Cryo-Mechanical_Memory_E...
The whole thing hinges on the elastomeric sockets and I had no idea what they were talking about until you referenced the paper.
Those sockets are mega cool. I need to go down the rabbit hole of those for a while.
I think you can do this with modern “trusted computing” modules.
Also this: https://en.wikipedia.org/wiki/Harvey's_Resort_Hotel_bombing
The designer could probably have made more cash patenting his systems!