Readit News
TehShrike · 2 years ago
Readers may prefer the original post by the programmer behind the rewrite: https://phoboslab.org/log/2023/08/rewriting-wipeout
tomcam · 2 years ago
Everything about that article is delightful. Author is way too modest to point out that it’s a stunning achievement.
_a_a_a_ · 2 years ago
I'm sorry, 6,000 FPS? How is that possible?
squeaky-clean · 2 years ago
It means it's calculating 6000 frames per second, but not all of them need to make it to the screen. On a 60 Hz screen, only every 100th calculated frame would actually be displayed (if we assume a steady 6000 fps; the video shows fluctuations between 4000 and 7000 fps).

If you just mean: how can it be so high in general? Old, well-optimized games run really well on faster modern hardware. I remember getting over 1,000 fps in Guild Wars 1 on a GTX 1060 when looking at an area with no monsters/NPCs.

edit (this paragraph doesn't apply here): ~~The PS1 also doesn't have floating-point math; mostly everything is done in fixed point with integer math, which is obscenely fast compared to emulated floating point (it could also emulate an FPU if the precision is absolutely necessary, but that's not suitable for realtime).~~

Just read further down the article that they converted it to float. I guess the game is just super optimized; I would have thought fixed-point savings factored in here.

lunchdetail · 2 years ago
They're talking about the speed of the internal engine, not the display. So the display could still only be showing 60 frames to the user each second (or 120, or anything) but the internal engine is running at 6000fps.

Plenty (maybe nearly all?) of games do this because modern engines decouple the engine speed from the display speed. In older systems where you knew the engine was only going to run for a specific game on specific hardware (e.g. a SNES or GameCube or PlayStation), and you knew you were always going to be targeting 30fps, no more, no less, you could pretty safely assume the game would _always_ run at 30fps and could use a "frame" as a unit of time. So if you want some in-game action like a melee attack to take 1 second, you could just count 30 frames and you would know it was 1 second long. But if somehow this game was later run at 60fps, that same attack would now only take 0.5 seconds, since there are twice as many frames in a second.

So if you took a game like this meant for 30fps and ran it at 60, everything would just run twice as fast. You wouldn't actually be able to play the original game at anything higher than the original frame rate.

What they're saying here is that they decoupled the two, where originally they were coupled. So now the game can run at high fps and feel smoother than the original lower fps rate, but the gameplay is still at the original intended speed.

mmastrac · 2 years ago
The code was probably hyper-optimized for decade+ old hardware.
efields · 2 years ago
> The 5000 lines of if else that handles the menu state is a striking witness to this insanity.

> He takes the code from 40,699 lines to 7,731 and notably loved an excuse to work in C. “I had an absolute blast cleaning up this mess!”

It's just marvelous to learn what kind of crap coding makes it into production. It's a huge boost to my self-esteem.

rgoulter · 2 years ago
The risk of 'bad' code is that it might be hard to make the change you want. I would believe that spaghetti code is more likely to have bugs than 'well designed' code.

When code is fresh in your mind, there's more tolerance for how disorganised the code can be and still be easy to change.

If your shipping method is "one shot", time spent cleaning the code has a chance to provide value, but time spent adding features very likely adds value. -- Probably, if the code is very clean, at least some of that time would have been better spent adding features.

taeric · 2 years ago
You aren't wrong, but a lot of the risk of "good code" nowadays is that there is more of it. An added risk is a lot of "good code" is leaning heavily on practices that are not good for performance.

I don't want to push for lax testing standards. And I generally prefer modern build practices for programming. It is hard to take a lot of criticism of gaming code seriously, though, as games were shipped at what feels like a higher success throughput than most business software.

fidotron · 2 years ago
In games it happens quite a bit.

The great debate is always "will I want to change this tiny thing here without impacting everything else?" vs DRY. Studio cultures differ, but I would say at least half resort to hard coding things like positions of every item manually given the excuse they might want to shift something by a bit later (in a hurry, crunching etc.) without side effects.

Thankfully the emergence of more standard tooling and engines has pushed this to being more of an art resource concern, but it does lead to things like being told that taking a game that assumes a 16:9 1080P display and making it more flexible will take multiple years of person time.

Personally I cannot stand this tendency, but do get why they do it.

hinkley · 2 years ago
You can get pretty close to the end of a project before you really understand what it was you were trying to do. Especially if management is allowed to keep moving things on you. I think this is substantially the instinct that leads to Waterfall. I just want to know what I'm supposed to be building for a while before you 'ruin' it.
maccard · 2 years ago
I think the days of that happening are mostly gone. In the days where games were one-and-done, gameplay code was like this, but in the age of sequels & live service games, game code is as good (or bad) as any other code.
lunaticlabs · 2 years ago
To frame this, I actually worked at Sony as a software engineer on the Playstation when Wipeout came out, so I have some firsthand insight into developing games around this time. When it comes to older games like this, there were a bunch of compromises that we had to make that introduced additional complexity, and that are entirely overlooked here. You're looking at this from the perspective of how a computer/processor works now, and what it's like to develop software without having to actually take into account the limitations of the hardware and processor itself as part of your code design.

For example, the Playstation 1 had a MIPS R3000 CPU and a single instruction pipeline, so it's basically only doing one thing at a time. Multithreading doesn't exist. Our only parallel processing was that we could do all the game logic plus all the math to transform all the triangles into screen space, while simultaneously the GPU drew the last frame we submitted. We had 4MB of memory total, so when we were working on games then, we would have actual discussions about whether it was worth the overhead of including a malloc/free, or just hard-coding all your memory addresses because of the space you'd save. We would compile out our abstracted, "nice" versions of functions, count the instructions, and compare sizes to see where we could optimize the code to reduce the compiled output size. In return, we might be able to draw an extra triangle or two.

The instruction and datacache were tiny, and loaded based on address, so sometimes we would add code or instructions that didn't do anything just to make a loop not cross a cache boundary.

So we were working under strict time pressure, with unknown hardware and badly translated Japanese print manuals (and sometimes not even translated), in small teams, without any real ability to communicate with anyone else in the industry about it (since we were all under NDA).

I'm not saying that we didn't write bad code; we wrote plenty of it. But a number of the decisions on HOW to write the code itself that we were weighing then aren't even visible to people reading the code now and making judgements on it. I knew I was writing spaghetti code some of the time, because I didn't have the memory budget to load it as data. Is there a cleaner way to do their UX in a data-driven fashion? Sure. But for me to get that data, I would have to write a function to load one or more blocks of data from a 1x CD that had a super low transfer rate and appallingly long seek times. Making a data-driven UI is possible, but not practical: when someone hits the start button, they want the menu to come up immediately.

In many cases, we knew that we were writing bad code, but didn't have the capacity to write anything better. It literally didn't fit our budget for memory or CPU. Then you have to make decisions about where you CAN afford an abstraction, and those decisions can be quite painful.

I haven't violently disagreed with anyone in any of these threads, but did want to offer a bit of my first hand perspective on things that are often overlooked.

holoduke · 2 years ago
Always impressed how some people possess the skill to plough through these kinds of codebases. I wouldn't even know where to start, let alone find the time to execute a project like this. Although maybe I should remember my life before kids: coding till 4 am every day. Huge respect for this guy.
ars · 2 years ago
It goes like this: You read a function and understand what it does. You scream at the original programmer (who was probably yourself) for being an utter idiot and doing it badly. Then you rewrite it better.

Then you go to the next function, and realize it doesn't even need the previous function at all, so you get mad at yourself and delete all that wonderful code you just wrote.

Then you try a new way: instead of inside out (start at the internal functions) you start at the top - you find where the code handles some particular task, and you drill down deep into it, throwing away useless code left and right and rewrite the thing better. (Your advantage being you know exactly what it needs to do, unlike your previous self who did not know that since he was still developing it.)

You also use a tool that finds dead code - in particular functions that are never called by anything.

bityard · 2 years ago
Breaking big problems down into small ones is part of the process, yes, but only once you are well-versed in the problem space. The guy who rewrote this:

1. Is proficient in C.

2. Understands Playstation architecture and development better than the original programmers of the game. (Although Psygnosis had far less time and hindsight on their side.)

3. Is apparently quite familiar with 3D game programming and techniques in general, and how/when to use them.

4. Already had experience reverse-engineering parts of the game previously.

5. Had the free time to undertake a project of this scale.

So it's not something any random dev can just snap their fingers and decide to do. It's a case study on the intersection of experience, preparation, and luck.

nlunbeck · 2 years ago
I know, right? I've spent the last 15 or so minutes just marveling at how nicely cleaned up it all is, I can only imagine how overwhelming it must've been to see the mess it was. Really shows what some extra time/budget could do for major studios releasing remasters.
hinkley · 2 years ago
If we could get this kind of energy invested into shared libraries...
wredue · 2 years ago
Look at it as a whole bunch of little problems instead of one big problem.
mjevans · 2 years ago
"Either let it be, or shut this thing down and get a real remaster going."

Solidly agree, copyright / IP shouldn't be about holding the public hostage. It should be about maximizing the mutual benefit for both the creator and, very importantly, the public.

Culture deserves love and respect and must be 'accessible' (able to buy); or it should be set free (public domain).

mbStavola · 2 years ago
Wipeout was great, but I always preferred playing Extreme-G on the N64. I know the source for the third game in the series was leaked, but I'm still waiting for the original!
ChrisArchitect · 2 years ago
Previous discussion from a few weeks ago: https://news.ycombinator.com/item?id=37082304
bityard · 2 years ago
This is the third time it has made the front page in as many weeks. :)
pengaru · 2 years ago
The numbers are somewhat artificially exaggerated since it sounds like the source leak contains what are effectively multiple platform-specific copies of the same game.

That's a pretty common pattern when you're porting to a substantially different system from a bespoke base like a small launch title that was never intended to run on anything else. Especially in an era before everyone used VCS tools like git with cheap branches; we used to work on diverging whole-copy forks constantly.

Going through this mess and cleaning it all up must have felt incredibly satisfying because of all the low-hanging fruit. It's like a long overdue spring cleaning.

blueboo · 2 years ago
Is the gameplay of Wipeout actually any good? Always admired the aesthetics, and I love F-Zero GX, but…

The bonking against the sides and the short view distance induced by excessively curved tracks make it feel like the NES game RC Pro-Am, but instead of a tightly-cropped overhead view of the track, you see the next fifty feet of track ahead (at 1,000 scifimeters/sec).

The game felt brute-forceable through its brittle, unforgiving driving but it was so frustrating I never gelled with it. Was I missing anything?

Jare · 2 years ago
It's very good for its time, but also very harsh. It was polished to perfection in Wipeout 2/XL.