I don't think one needs to view the memory contents of their own program to know, roughly, what those contents are, or to know how to use memory efficiently.
Debuggers and profilers already exist for the developers of applications to know these things.
This tool seems much more useful for the reverse engineer who is watching the memory of a target application visually while stepping through it in a debugger. It wouldn't even be for reading specific values from RAM (again, the debugger is usually quite good at that), but for seeing how things change as execution continues.
> Debuggers and profilers already exist for the developers of applications to know these things.
One big difference between an intermediate and an expert programmer is that an expert develops an intuition for how the programs they write will compile and run. Can you guess correctly how fast or slow each function will be? Or what the optimizer will do a good or a bad job of optimizing? Can you tell, before you've written your code, when avoiding allocations will speed things up and when it won't matter?
Debuggers and profilers honestly aren't very good at giving you a "zoomed out" view of what's going on in your program. Each tool shows you a specific aspect of your program and hides everything else. For profilers, that's usually what the CPU spends its time on. For debuggers, the execution path of a single function. Godbolt shows how the optimizer works. And so on.
But from my perspective, having more tools which show different aspects of my code is almost always a win. I never know ahead of time which perspective will let me double my program's performance, or halve memory usage. Writing code is easy. Understanding code is much more complex.
So yeah, from a software development point of view I think this is neat! I want to give it a try on some of my programs because I expect to see my mental model animated back at me, and I anticipate being surprised. This looks cool!
To be fair, memory profilers and analysers are probably much easier and more accessible than raw memory dumps. Modern tools, ranging from Valgrind to the web browser's heap analyser, are a lot easier to master than scrolling through megabytes of hex trying to find memory that isn't necessary.
Even if I were to debug memory using raw hex, I'd probably take a snapshot and open that in a good hex editor instead of just watching some blocks blink.
One of the first things I did as a C programmer on VMS (1987) was dereference a pointer and look at my app's memory map (I didn't know about virtual memory, so I thought I was reading physical RAM).
Even before that I'd scan through various parts of Apple IIe memory using the machine language lister. I love to trawl through RAM.
I think the confusing part is the live view of UTF-8-encoded memory scrolling by, as opposed to samples or profiles, which are more evidently useful to those who aren't doing systems programming regularly.
First off, Justine is a better programmer than I am. But, and I don't mean this as a humblebrag, the use of global variables for all state makes me uncomfortable. If this is good C code (I'm not saying it isn't), then maybe that's why I'm not a C programmer. Whatever the case, Justine is great.
That's a good instinct to have. 99% of the time we're writing something like an object library that's part of a much larger program. Using global variables in such code would impose difficulties on the application as a whole, with regard to things like threading, pollution of the linker symbol table, etc. But when you're writing small main.c programs like this one, which don't use threads, globals can be a real advantage. In gdb, you can easily inspect their values. You can look at the linker's output manifest to see how they're arranged in your binary. If you look at a lot of the original UNIX programs written back in the day, many of them looked very similar to this. So it's a great style. It's just not one that scales to the large monolithic programs most companies prefer to create, so over the years a cultural aversion to it developed in many style guides.
If you mean the straightforward nature of the code, I agree.
I think we over-complicate code today because we are promised ease of maintenance, or high-level declaration, or something else, and I don't think those promises have ever come true, except in very small textbook-type examples.
I wrote a similar thing with a Qt based GUI that, I think, exposes a little more information (more of the kernel's page flags). It reaches a quite respectable update rate for what it's doing (>=40 fps or so?) and it's fun to watch, though I haven't found particularly useful, err, uses.
https://github.com/KDAB/QMemstat
Please put screenshots in your README file because I'd love to see your work! Especially if they're GIFs. Contact me if you want to know the ffmpeg commands I used for memzoom.
This reminds me of the days when I was a teenager in the '80s and one of my hobbies was ripping the music from video games, which basically meant identifying and isolating the code and data responsible for the audio. I remember taking a hex monitor and browsing/scrolling through all 64K of memory on the Commodore 64, and I could tell you, just by looking at visually repeating data patterns in the raw hex dump, where the song data was located.
Feels really old school. Looks like something from people who used to write DOS programs.