Readit News
an-unknown commented on Running Clojure in WASM with GraalVM   romanliutikov.com/blog/ru... · Posted by u/roman01la
kokada · a year ago
Not really, but one thing that bothers me is how unreproducible GraalVM is. AFAIK every distro that has binaries for it just repacks the binaries released by Oracle, and the last time I searched I couldn't find instructions on how to build it from scratch (I was the maintainer of GraalVM in nixpkgs; not anymore, because I just got fed up with it).
an-unknown · a year ago
Not sure why people always say it's so hard to build GraalVM ... all you need is roughly two prerequisites and one build command. The prerequisites are a "Labs JDK", which is essentially a slightly modified OpenJDK with a more up-to-date JVMCI (the JIT interface used by Graal), and the build tool "mx".

Since you want to build completely from source, you start by installing OpenJDK. Then you clone the Labs JDK repo [0] and build it just like you would build any other OpenJDK. Once you have the Labs JDK, you don't need the OpenJDK anymore; it's only necessary to build the Labs JDK. If you use a normal OpenJDK instead of the Labs JDK for Graal, the Graal build will most likely complain about a "too old JVMCI" and fail. Don't do that.

Next you clone mx [1] and graal [2] into some folder and add the mx folder to PATH. You also need Python and Ninja installed, and maybe something else which I can't remember anymore (but you'd quickly figure it out if the build fails). Once you have that, you go to graal/vm and run the relevant "mx build" command. You specify the path to the Labs JDK via the "--java-home" CLI option, and you have to decide which components to include by adding them to the build command line. I can't remember what exactly happens with just "mx build", but chances are this only gives you a bare GraalVM without anything else, which means no SubstrateVM ("native-image") either. By adding projects on the command line, you can include whatever languages/features are available. And that's it. After some time (depending on how beefy your computer is), you get the final GraalVM distribution in some folder, with a nice symlink to find it.

It's not exactly documented in a good way, but you can figure it out from the CI scripts which are in the git repos of Graal and Labs JDK. The "mx build" command is where you decide which languages and features to include; if you want to include languages from external repositories, you have to clone them next to the graal and mx folder and add the relevant projects to the mx build command.
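The steps above can be sketched roughly like this. It's a hedged sketch, not a verified recipe: the boot-JDK and Labs JDK paths (`$OPENJDK_HOME`, `$LABSJDK_HOME`) are placeholders you have to fill in, and the exact mx options and component/env names vary between Graal versions, so check the CI scripts in the repos for the current incantation:

```shell
# 1. Build the Labs JDK from source, using a stock OpenJDK as boot JDK
#    ($OPENJDK_HOME is a placeholder for your OpenJDK install path)
git clone https://github.com/graalvm/labs-openjdk
(cd labs-openjdk && bash configure --with-boot-jdk="$OPENJDK_HOME" && make images)

# 2. Clone mx and graal side by side and put mx on PATH
git clone https://github.com/graalvm/mx
git clone https://github.com/oracle/graal
export PATH="$PWD/mx:$PATH"

# 3. Build a GraalVM; pass the Labs JDK via --java-home and pick the
#    components to include (here via one of the env files shipped in
#    graal/vm, which pulls in SubstrateVM/native-image; names may differ
#    between versions)
cd graal/vm
mx --java-home="$LABSJDK_HOME" --env ni-ce build

# mx prints where the resulting GraalVM distribution ends up; a
# "latest_graalvm_home"-style symlink points at the newest build
```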

[0] https://github.com/graalvm/labs-openjdk

[1] https://github.com/graalvm/mx

[2] https://github.com/oracle/graal

an-unknown commented on Users don't care about your tech stack   empathetic.dev/users-dont... · Posted by u/merkmoi
gizmo · a year ago
This argument always feels like a motte and bailey to me. Users don't literally care what tech is used to build a product. Of course not, why would they?

But that's not how the argument is used in practice. In practice this argument is used to justify bloated apps, bad engineering, and corner-cutting. When people say “users don’t care about your tech stack,” what they really mean is that product quality doesn’t matter.

Yesterday File Pilot (no affiliation) hit the HN frontpage. File Pilot is written from scratch and it has a ton of functionality packed in a 1.8 MB download. As somebody on Twitter pointed out, a debug build of "hello world!" in Rust clocks in at 3.7 MB. (No shade on Rust)

Users don't care what language or libraries you use. Users care only about functionality, right? But guess what? These two things are not independent. If you want to make something that starts instantly you can't use Electron or Java. You can't use bloated libraries. Because users do notice. All else equal, users will absolutely choose the zippiest products.

an-unknown · a year ago
> Yesterday File Pilot (no affiliation) hit the HN frontpage. File Pilot is written from scratch and it has a ton of functionality packed in a 1.8 MB download. As somebody on Twitter pointed out, a debug build of "hello world!" in Rust clocks in at 3.7 MB. (No shade on Rust)

While the difference is huge in your example, it doesn't sound too bad at first glance, because that hello world just includes some Rust standard libraries, so it's a bit bigger, right? But I remember a post here on HN about some fancy "terminal emulator" with GPU acceleration, written in Rust. Its binary size was over 100 MB ... for a terminal emulator which didn't pass vttest and couldn't even do half of the things xterm can. Meanwhile, xterm takes about 12 MB including all its dependencies, which are shared by many programs; the xterm binary itself is just about 850 kB of these 12 MB. That is where binary size starts to hurt, especially if you have multiple such insanely bloated programs installed on your system.

> If you want to make something that starts instantly you can't use Electron or Java.

Of course you can make something that starts instantly and is written in Java. That's why AOT compilation for Java is a thing now, with SubstrateVM (aka "GraalVM native-image"), precisely to eliminate startup overhead.
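As a rough illustration of that workflow (assuming a GraalVM install with the native-image component on PATH; the output binary name and startup behavior depend on the GraalVM version):

```shell
# Write a trivial Java program (hypothetical example file)
cat > Hello.java <<'EOF'
public class Hello {
    public static void main(String[] args) {
        System.out.println("hello");
    }
}
EOF

# Compile to bytecode, then AOT-compile to a standalone native executable
javac Hello.java
native-image Hello

# The resulting binary runs without a JVM and without warmup,
# so startup overhead is essentially gone
./hello
```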

an-unknown commented on RT64: N64 graphics renderer in emulators and native ports   github.com/rt64/rt64... · Posted by u/klaussilveira
mouse_ · a year ago
> Uses ubershaders to guarantee no stutters due to pipeline compilation.

I may sound way out of the loop here, but... How come this was never a problem for older dx9/dx11/GL games and emulators?

an-unknown · a year ago
I think there is some confusion about "ubershaders" in the context of emulators in particular. Old Nintendo consoles like the N64 or the GameCube/Wii didn't have programmable shaders. Instead, they had a mostly fixed-function pipeline, but you could configure some stages of it to somewhat fake "programmable" shaders, at least to some degree. Now the problem is, you have no idea what any particular game is going to do, right until the moment it writes a specific configuration value into a specific GPU register, which instantly configures the GPU to do whatever the game wants from that very moment onwards. There literally is no "shader" stored in the ROM; it's just code configuring (parts of) the GPU directly.

That's not how any modern GPU works, though. Instead, you have to emulate this semi-fixed-function pipeline with shaders. Emulators try to generate shader code for the current GPU configuration and compile it, but that takes time and can only be done after the configuration has been observed for the first time. This is where "ubershaders" enter the scene: an ubershader is a single huge shader which implements the complete configurable semi-fixed-function pipeline, so you pass the configuration registers to the shader and it acts accordingly. Unfortunately, such shaders are huge and slow, so you don't want to use them unless necessary. The idea is to keep an ubershader around as a fallback, use it whenever you see a new configuration, compile the real shader and cache it, and switch to the compiled shader once it's available, to regain performance. A few years ago, the developers of the Dolphin emulator (GameCube/Wii) wrote an extensive blog post about how this works: https://de.dolphin-emu.org/blog/2017/07/30/ubershaders/

Only with the 3DS/Wii U did Nintendo consoles finally get "real" programmable shaders, in which case you "just" have to translate them to whatever you need on your host system. You still won't know which shaders you'll see until you observe the transfer of the compiled shader code to the emulated GPU. After all, the shader code is compiled ahead of time to GPU instructions, usually during the build process of the game itself; at least for Nintendo consoles, there are SDK tools to do this. This, of course, means there is no compilation happening on the console itself, so there is no stutter caused by shader compilation either, unlike in an emulation of such a console, which has to translate and recompile such shaders on the fly.

> How come this was never a problem for older [...] emulators?

Older emulators had highly inaccurate and/or slow GPU emulation, so this was not really a problem for a long time. Only once GPU emulation became accurate enough, with dynamically generated shaders for high performance, did shader compilation stutters become a real problem.

an-unknown commented on F8 – an 8 bit architecture designed for C and memory efficiency [video]   fosdem.org/2025/schedule/... · Posted by u/mpweiher
stevefan1999 · a year ago
C is undeniably a legendary programming language, but it's time to move beyond the constraints of the C abstract machine, which was heavily shaped by the PDP-11 due to Unix's origins on that architecture. C feels outdated for modern computing needs.

It lacks features like lambda calculus, closures, and coroutines—powerful and proven paradigms that are essential in modern programming languages. These limitations make it harder to fully embrace contemporary programming practices.

The dominance of C and its descendants has forced our systems to remain tied to its design, holding back progress. Intel tried to introduce hardware assisted garbage collection, which unfortunately failed miserably because C doesn't need it, and we are still having to cope with garbage collection entirely in software.

While I’m not suggesting we abandon C entirely (I still use it, like when writing C FFI for some projects), we need to explore new possibilities and move beyond C to better align with modern architectures and programming needs.

an-unknown · a year ago
> It lacks features like lambda calculus, closures, and coroutines—powerful and proven paradigms that are essential in modern programming languages. These limitations make it harder to fully embrace contemporary programming practices.

And what features exactly would you propose for a future CPU to have to support such language constructs? It's not like a CPU is necessarily built to "support C", since a lot of code these days is written in Java/JavaScript/Python/..., but as it turns out, roughly any sane CPU can be used as a target for a C compiler. Many extensions of current CPUs are not necessarily used by an average C compiler. Think of various audio/video/AI/vector/... extensions. Yet, all of them can be used from C code, as well as from any software designed to make use of them. If there is a useful CPU extension which benefits, let's say, the JVM or V8, you can be sure these VMs will use it, regardless of whether or not it is useful for C.

> Intel tried to introduce hardware assisted garbage collection, which unfortunately failed miserably because C doesn't need it, and we are still having to cope with garbage collection entirely in software.

Meanwhile, IBM did in fact successfully add hardware-assisted GC for the JVM on their Z series mainframes. IBM can do that, since they literally sell CPUs purely for Java workloads. With a "normal" general-purpose CPU, such an "only useful for Java" GC extension would be completely useless if you plan to run, let's say, only JavaScript or PHP code on it. The problem with such extensions is that every language needs ever so slightly different semantics for a GC, and as a result it's an active research topic how to generalize this into a "GC assist" instruction that is useful for many different language VMs. Right now such extensions are being prototyped for RISC-V, in case you missed it. IIRC for GC in particular, research was going in the direction of a generalized graph traversal extension, since that's the one thing most language VMs can use somehow.

C is in no way "holding back" CPU designs, but being able to efficiently run C code on any CPU architecture which hopes to become relevant is certainly a requirement, since a lot of software today is (still) written in C (and C++), including the OS and browser you used to write your comment.

Just to be clear: this topic here is about tiny microcontrollers. The only relevant languages for such microcontrollers are C/C++/assembly. Nobody cares whether it can do hardware-assisted GC or closures/coroutines/... or anything like that.

an-unknown commented on 1972 Unix V2 "Beta" Resurrected   tuhs.org/pipermail/tuhs/2... · Posted by u/henry_flower
cbm-vic-20 · a year ago
There is firmware available online for some terminals; you could potentially get a lot more accuracy in emulating the actual firmware, but I'm sure a lot of that code gets into the guts of timing CRT cycles and other "real-world" difficulties. I'm not suggesting this would be easy to build out, just pointing out that it's available. While I haven't searched for the VT240 firmware, the firmware for the 8031AH CPU inside the VT420 (and a few other DEC terminals) is available on bitsavers. The VT240 has a T-11 processor, which is actually a PDP-11-on-a-chip.
an-unknown · a year ago
Actually I have the VT240 firmware ROM dumps, that's where I got the original font from. The problem is, at least the VT240 is a rather sophisticated thing, with a T-11 CPU, some additional MCU, and a graphics accelerator chip. There is an extensive service manual available, with schematics and everything, but properly emulating the whole firmware + all relevant peripherals is non-trivial and a significant amount of work. The result is then a rather slow virtual terminal.

There is a basic and totally incomplete version of a VT240 in MAME though, which is good enough to test certain behavior, but it completely lacks the graphics part, so you can't use it to check graphics behavior like DRCS and so on.

EDIT: I also know for sure that there is a firmware emulation of the VT102 available somewhere.

an-unknown commented on 1972 Unix V2 "Beta" Resurrected   tuhs.org/pipermail/tuhs/2... · Posted by u/henry_flower
yjftsjthsd-h · a year ago
I mean... Sure? Go buy an actual VT* unit ( maybe https://www.ebay.com/itm/176698465415?_skw=vt+terminal&itmme... ?), get the necessary adaptors to plug into a computer, and run simh on it running your choice of *nix. I recommend https://jstn.tumblr.com/post/8692501831 as a reference. Once you have it working, shove the host machine behind a desk or otherwise out of sight, and you can live like it's 1980.
an-unknown · a year ago
The only problem with real VTs is you have to be careful not to get one where the CRT has severe burn-in, like in the ebay listing. Sure, some VTs (like the VT240 or VT525) are a separate main box + CRT, but then you're missing the "VT aesthetics". The VT525 is probably the easiest one to get which also uses (old) standard interfaces like VGA for the monitor and PS/2 for the keyboard, so you don't need an original keyboard / CRT. At least for me, severe burn-in, insane prices, and general decay of some of the devices offered on ebay are the reason why I don't have a real VT (yet).

The alternative is to use a decent VT emulator attached to roughly any monitor. By "decent" I certainly don't mean projects like cool-retro-term, but rather something like this, which I started to develop some time ago and which I'm using as my main terminal emulator now: https://github.com/unknown-technologies/vt240

an-unknown commented on 1972 Unix V2 "Beta" Resurrected   tuhs.org/pipermail/tuhs/2... · Posted by u/henry_flower
thequux · a year ago
Compiling an emulator is quite easy: have a look at simh. It's very portable and should just work out of the box.

Once you've got that working, try installing a 2.11BSD distribution. It's well-documented and came after a lot of the churn in early Unix. After that, I've had great fun playing with RT-11, to the point that I've actually written some small apps on it.

an-unknown · a year ago
> After that, I've had great fun playing with RT-11 [...]

If you want to play around with RT-11 again, I made a small PDP-11/03 emulator + VT240 terminal emulator running in the browser. It's still incomplete, but you can play around with it here: https://lsi-11.unknown-tech.eu/ (source code: https://github.com/unknown-technologies/weblsi-11)

The PDP-11/03 emulator itself is good enough that it can run the RT-11 installer to create the disk image you see in the browser version. The VT240 emulator is good enough that the standalone Linux version can be used as terminal emulator for daily work. Once I have time, I plan to make a proper blog post describing how it all works / what the challenges were and post it as Show HN eventually.

an-unknown commented on Majora's Mask decompilation project reaches 100% completion   gbatemp.net/threads/major... · Posted by u/blastersyndrome
dcow · a year ago
Clean room is a legal defense that argues “I implemented similar functionality without consulting the copyrighted work” and sometimes with the addition of “or ever having viewed the original copyrighted work”.

For video games the binary is the covered work, not its disassembly. And disassembly is covered by the DMCA and explicitly allowed for this type of purpose, so that’s not illegal at least. I can see the argument if the clean agent is just operating off the disassembly and a different person is evaluating success.

After all, how does one communicate the goal of a “traditional” source code clean room operation? Well someone has to disassemble the target functionality into a natural language description of the algorithm and communicate the requirements and expected outcomes, the functions if you will, to the agents as well as check success. Why does using a computer to aid that process make things questionable?

IMO the whole notion of a clean room for copyright purposes is weird across the board. Not just when punks do it.

an-unknown · a year ago
> For video games the binary is the covered work, not its disassembly.

For the law it doesn't matter much whether you look at binary code in a hex editor or at a disassembly, since a disassembly is just a 1:1 translation of the binary code. Otherwise it would be sufficient to, let's say, gzip-compress the binary and distribute that without fearing any copyright claims, since the result would be different and thus no longer covered, which is obviously not the case. The same applies to decompilation results. This means: for clean-room implementations, you cannot look at any of it.

> And disassembly is covered by the DMCA and explicitly allowed for this type of purpose, so that’s not illegal at least.

You can, under certain circumstances, disassemble code for interoperability purposes. This explicitly does NOT cover the case of "I want to make a 1:1 clone of the whole software" which is what these decompilation projects are about. After all, the point of the "matching decompilation" projects is that if you compile the source code, you get the exact same binary again. And for the non-matching decompilation projects, you get at least very similar code after compilation.

For the DMCA exception, think about it more like you are working on GIMP and you want to add support for reading/writing PhotoShop files. You can look at PhotoShop code to understand how these files are read/written to then derive the file structures and implement the relevant I/O code for GIMP. You can NOT look at any PhotoShop code that is not absolutely required for this task, nor can you look at PhotoShop code and build your own clone of PhotoShop using that knowledge and call the result GIMP, which would still not be a "decompilation project" comparable to these game decompilation projects. I hope you can see how these "clean room" claims for such game decompilation projects are pure nonsense.

> Why does using a computer to aid that process make things questionable?

This essentially boils down to "if I didn't see the original code anyway, how am I supposed to have 'copied' it?" which you can't easily do if the computer did in fact see the original code.

Obligatory EFF link for completeness: https://www.eff.org/issues/coders/reverse-engineering-faq

an-unknown commented on My MEGA65 is finally here   lyonsden.net/my-mega65-is... · Posted by u/harel
reaperducer · a year ago
I've never understood why retro computer enthusiasts go through such effort to replace their floppy drives with CF cards, when Sony had a solution almost 25 years ago.

My DSC-30 came with a metal floppy disk that has no moving parts. But you could insert a Memory Stick into it, and then stick it in any 3.5" floppy drive and read the stick as FAT.

Every time I see someone on the VCF forums struggling with the latest floppy drive replacement board I wonder what ever happened to that technology.

an-unknown · a year ago
Simple: a real 3.5" floppy disk drive has moving parts and various things that age and eventually break. For example, I have an old device with a broken floppy disk drive which can't even read a real floppy anymore. With the metal floppy "emulator disk" you mentioned, the FDD itself still has to be fully functional in order to read this "emulator disk".

A floppy emulator board which reads SD/CF cards or USB sticks doesn't have that problem at all, since it's purely solid-state electronics connected directly to the FDD's electronic interface in place of the real drive. Usually you can put thousands of floppy disk images onto such a memory card/stick and select which image is "inserted" into the emulated floppy disk drive ⇒ there is simply no need for the "emulator disk" technology you mentioned anymore.

an-unknown commented on Veles: Open-source tool for binary data analysis   codisec.com/veles/... · Posted by u/LorenDB
tatref · 2 years ago
You should add some screenshots!
an-unknown · 2 years ago
I just added a few screenshots of different files.

u/an-unknown · karma 157 · cake day October 13, 2021