Posted by u/zaptheimpaler 4 years ago
Ask HN: What can I do with 48GB of RAM?
Hi HN,

After a mobo upgrade, I have ended up with an ungodly 48GB of 3200MHz DDR4 RAM. This is a ridiculous amount for me to have on a personal machine. What are some cool things I can do with this much RAM?

All ideas are welcome. Video/audio editing? Databases? Running an OS off a ramdisk?? Anything.

cehrlich · 4 years ago
Run Slack and MS Teams at the same time
treis · 4 years ago
2 chats at the same time? I've always wanted to do that man
arghwhat · 4 years ago
With only 48 GB of RAM? Unlikely.

kcplate · 4 years ago
I do it all day long in 16GB with zero issues, of course it’s a Mac so…
shmoe · 4 years ago
I hope you still feel dirty at least! :)
nousermane · 4 years ago
Doesn't macOS compress memory on the fly, or something like that?
antisthenes · 4 years ago
But what if I wanted to browse the internet at the same time?
thunkshift1 · 4 years ago
Lol.. good one! Also add chrome to that list
brailsafe · 4 years ago
This is psychopath behavior though. I think the feds have a watchlist waiting for you.
sillysaurusx · 4 years ago
I see you mentioned running an OS off a ramdisk. I recommend this, just to see how incredibly fast it can be.

And also how incredibly not-fast. The fact is that most applications are memory-bandwidth bound, not CPU bound, once you eliminate the disk as a bottleneck. So when you run off a ramdisk, it doesn't actually help as much as I thought it would.

But! One really neat thing you can do is to save VM checkpoints, so that backing up your computer is as simple as checkpointing the VM. So there are other advantages.
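
If you go the VM route on Linux, the libvirt bindings make checkpointing scriptable. A minimal sketch (this assumes the libvirt Python bindings and an existing domain named "myvm"; the details vary by hypervisor):

    # Hedged sketch: create a named snapshot of a libvirt/QEMU VM so "backup"
    # is just a checkpoint. Assumes a domain called "myvm" already exists.
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("myvm")

    snapshot_xml = """
    <domainsnapshot>
      <name>nightly-checkpoint</name>
      <description>Full-system checkpoint for backup</description>
    </domainsnapshot>
    """
    dom.snapshotCreateXML(snapshot_xml)
    print([s.getName() for s in dom.listAllSnapshots()])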

Doing some video editing is fun too, and 3D modeling. Ever want to dabble with ZBrush? Now's your chance. Get yourself a nice big monitor and Wacom tablet. Yum.

(And then, y'know, set the hobby down and never touch it again, just like the rest of us. But it's fun while it lasts.)

zaptheimpaler · 4 years ago
Hey sillysaurus, thanks for the ideas. Incidentally, I did recently go on a little Procreate drawing kick, so ZBrush sounds perfect. BTW, I really appreciate your writing and community building online.
gargarplex · 4 years ago
Can you please describe an example of where an application might be memory bandwidth bound, and what engineering techniques might be used to circumvent this restriction?
sillysaurusx · 4 years ago
In modern times, it's hard to describe an example where an application isn't memory bandwidth bound. It's basically the primary bottleneck.

Most programs spend little time doing computation, or reading I/O. Everyone knows I/O is expensive, so it's minimized.

But there's no getting around the fact that every time you want to do anything at all, you have to shuffle around memory. There's no choice.
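
You can see this with a toy benchmark: the exact same arithmetic gets far slower per element once the working set no longer fits in cache and you're just waiting on RAM. A rough numpy sketch (the sizes are arbitrary and the numbers will vary wildly by machine):

    # Rough sketch: effective bandwidth of the same elementwise add when the
    # arrays fit in cache vs. when they only fit in RAM. Assumes numpy.
    import time
    import numpy as np

    def gb_per_s(n, iters):
        a = np.ones(n, dtype=np.float64)
        b = np.ones(n, dtype=np.float64)
        out = np.empty_like(a)
        start = time.perf_counter()
        for _ in range(iters):
            np.add(a, b, out=out)   # trivial compute; the time goes to moving bytes
        elapsed = time.perf_counter() - start
        # three 8-byte arrays are touched per element, per iteration
        return (3 * 8 * n * iters) / elapsed / 1e9

    print("cache-resident:", round(gb_per_s(100_000, 10_000), 1), "GB/s")
    print("RAM-resident:  ", round(gb_per_s(200_000_000, 5), 1), "GB/s")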

One way to circumvent this restriction is to make memory faster. This is difficult with traditional approaches.

I was going to point to Memristors as a possible way forward, but honestly I don't know enough about the subject.

We're getting to the point where we're speed-of-light bound, I believe. I.e. running up against fundamental limits.

Still, there's a lot of room. One interesting thing is to read Feynman's lectures on computation: https://theswissbay.ch/pdf/Gentoomen%20Library/Extra/Richard...

He points out that a reversible computer is actually the most efficient, from an energy perspective. But the tradeoff is that things take more time. If you want to take less time, it generates more heat. And more heat means inevitable delay.

georgia_peach · 4 years ago
In linux-land, sometimes I'll use a 2-3G ramdisk (tmpfs) for `$HOME/.cache` just to reduce wear-and-tear on my SSD. The web browsers put a ton of junk there, and I rarely reboot my machine.
sbierwagen · 4 years ago
I would be surprised if managing disk cache by hand beats the Linux page cache. RAM is never wasted: every byte not being used by an application is used by the kernel for disk cache.

If you dedicate 2 gigabytes of it to the .cache folder, then either it's going to be mostly empty and you'll be causing more thrashing as the kernel unloads stuff it didn't need to, or it fills up and your system falls over when something tries to put a big temporary file in that folder.

inshadows · 4 years ago
Then your cache is lost on every reboot... Evolution (the mail client) by default stores fetched email in there (I moved it elsewhere for backup purposes). It'd suck to have your mail client re-fetch your email every time.
georgia_peach · 4 years ago
I use webmail, so haven't had any problems. Seems like something under `$HOME/.local`, or maybe its own dotdir, would be a better place for downloaded messages.
anamax · 4 years ago
How often do you reboot linux? I go months between reboots, and that's with crappy hardware.
silisili · 4 years ago
Same. Also for when compiling packages from source and whatnot.
gargarplex · 4 years ago
Whoa. Super interesting! Maybe even useful for compiling during development ... develop in workspace, use unison to copy over to the ramdisk, then do all builds from the ramdisk dir?

Would you agree with this article's recommendations regarding ramdisk setup? https://www.linuxbabe.com/command-line/create-ramdisk-linux There seems to be controversy in the comments as to whether tmpfs is a proper ramdisk - although no clear tutorial as to a better method. Interested to learn more!
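
In the meantime, a crude way to see the gap without editing fstab: /dev/shm is already a tmpfs mount on most distros, so you can compare a sequential write there against your home directory. A quick, unscientific Python sketch (assumes /dev/shm exists and you have a spare GB or so):

    # Unscientific sketch: sequential write throughput on tmpfs vs. the home filesystem.
    # Assumes /dev/shm is a tmpfs mount (it is on most Linux distros).
    import os
    import time

    def write_mb_per_s(path, size_mb=512):
        buf = os.urandom(1024 * 1024)            # 1 MiB of incompressible data
        start = time.perf_counter()
        with open(path, "wb") as f:
            for _ in range(size_mb):
                f.write(buf)
            f.flush()
            os.fsync(f.fileno())
        elapsed = time.perf_counter() - start
        os.remove(path)
        return size_mb / elapsed

    print("tmpfs:", round(write_mb_per_s("/dev/shm/bench.tmp")), "MB/s")
    print("home: ", round(write_mb_per_s(os.path.expanduser("~/bench.tmp"))), "MB/s")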

zaptheimpaler · 4 years ago
Good idea, I'll try that, thanks.
charcircuit · 4 years ago
>just to reduce wear-and-tear on my SSD

SSDs can handle a lot of writes; that isn't necessary.

willis936 · 4 years ago
It's not necessary, but it is beneficial. Why not increase throughput and decrease latency of storage?
tanelpoder · 4 years ago
Have fun with in-memory columnar databases or SQL engines and see how fast they are (the ones that use CPU-efficient data structures and data element access, plus SIMD processing). For example Apache Arrow datafusion [1]

Edit: Also, run a cluster of anything (in VMs or containers) and muck around killing individual cluster nodes or just suspending them/throttling them to be extremely slow to simulate a brownout/straggling cluster node.

[1] https://github.com/apache/arrow-datafusion
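
Even without spinning up DataFusion, you can get a feel for the columnar + SIMD idea with pyarrow's compute kernels. A toy sketch (not DataFusion itself, just the same in-memory columnar style; assumes pyarrow and numpy are installed):

    # Toy sketch of in-memory columnar analytics with pyarrow compute kernels:
    # contiguous column buffers plus vectorized filter/aggregate kernels.
    import numpy as np
    import pyarrow as pa
    import pyarrow.compute as pc

    n = 100_000_000                      # ~1.2 GB of data; trivial with 48 GB of RAM
    table = pa.table({
        "region": pa.array(np.random.randint(0, 10, n, dtype=np.int32)),
        "amount": pa.array(np.random.rand(n)),
    })

    # The filter and the aggregate each scan one contiguous buffer per column.
    mask = pc.greater(table["amount"], 0.99)
    total = pc.sum(table["amount"].filter(mask))
    print(total)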

karmakaze · 4 years ago
Make a relatively simple application, say an async video chat app. Build it with 'micro'services for everything (e.g. thumbnail generator, contacts, groups, sending, receiving, email/sms notifications). Deploy all of them in containers with redundancy and use a distributed datastore in VMs (to simulate separate machines, run some of them in different timezones).

Alternatively, try running Elasticsearch to index something.
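
The Python client makes it quick to kick the tires on that last one. A minimal sketch (assumes an Elasticsearch node already listening on localhost:9200 and the 8.x elasticsearch package):

    # Minimal sketch: index a document and search it back with the official
    # Python client. Assumes Elasticsearch 8.x is running on localhost:9200.
    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    es.index(index="videos", id="1",
             document={"title": "cat conference", "duration_s": 42})
    es.indices.refresh(index="videos")          # make the doc searchable right away

    hits = es.search(index="videos", query={"match": {"title": "cat"}})
    print(hits["hits"]["total"])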

1MachineElf · 4 years ago
You can devote 1/3 of it to CISA's Malcolm, which has a minimum requirement of 16GB: https://github.com/cisagov/Malcolm

As for the other 2/3... ZFS, Google Chrome, or Electron apps maybe?

mellosouls · 4 years ago
Looking at that repo I would say it would take several gigs just to load the README...

bravetraveler · 4 years ago
If you like playing with different things (Operating systems, misc software) - virtual machines are fun.

I allocate 32GB of my 128GB to 'hugepages' - basically reserved areas of memory for the VMs (or other capable software) to use. It helps performance a bit.
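
If you want to sanity-check what's actually reserved, the kernel exposes the counters in /proc/meminfo. A read-only sketch (Linux only; the reservation itself is set via vm.nr_hugepages or kernel boot parameters):

    # Read-only sketch: show the current hugepage reservation on Linux.
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith(("HugePages_Total", "HugePages_Free", "Hugepagesize")):
                print(line.rstrip())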

Aside from that, I make pretty liberal usage of tmpfs (memory backed storage) for various things. When building software with many files it can make a big difference.

Fedpkg/mock will chew through 40-50GB depending on what I'm building and the chroot involved.