nritchie · a year ago
When I was a freshman (or so) in high-school, our computer lab had just graduated from a time-share terminal to the next-door university to the Apple II. A kid (Ray Tobey) in the next grade up started to code a project to submit to a Byte Magazine game contest. The due date came and went, but he carried on in every moment of his free time. Long story short, this game became SkyFox of which Woz said "consider this flight simulator as the finest Apple game ever done." From Ray, I learned the value of using continued fraction approximations to trig functions using only integer math. Later, this became useful when I had to implement image rotation in a scan generator for a scanning electron microscope.
01HNNWZ0MV43FF · a year ago
Oh cool! I think I played that on my family's Apple II. I think it was mislabeled as "Star Fox" and probably pirated. Sorry Ray...
BearOso · a year ago
Multiplying without having a larger intermediate is much more complex than the article states. You have to use the distributive property of multiplication and split out the whole and fractional parts of each number; otherwise you're stuck with single-digit whole numbers or only multiplying fractions.

So you'd take

  A.a * B.b and split it into A*B + A*b + a*B + a*b
Or

    // A, B are the full fixed-point values; a, b are just their fractional
    // parts, e.g. a = A & ((1 << fixedbits) - 1).
    out = ((A >> fixedbits) * (B >> fixedbits) << fixedbits)
        + ((A >> fixedbits) * b)
        + ((B >> fixedbits) * a)
        + ((a * b) >> fixedbits);
If you can get away with a little less precision and smaller whole numbers, you can avoid some of the multiplications by just doing this, which is quite common:

    (A.a >> (fixedbits / 2)) * (B.b >> (fixedbits / 2))
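
To make that concrete, here is a minimal standalone version of the split multiply in C, assuming unsigned 32-bit values in 16.16 format (the function name and format are my choice, not from the article):

    #include <stdint.h>

    #define FIXEDBITS 16
    #define FRAC_MASK ((1u << FIXEDBITS) - 1)

    // 16.16 * 16.16 -> 16.16 without ever needing a 64-bit intermediate.
    uint32_t fxmul_split(uint32_t A, uint32_t B) {
        uint32_t Ah = A >> FIXEDBITS, a = A & FRAC_MASK;  // whole and fractional parts
        uint32_t Bh = B >> FIXEDBITS, b = B & FRAC_MASK;
        return ((Ah * Bh) << FIXEDBITS)   // whole * whole
             + (Ah * b)                   // whole * fraction
             + (Bh * a)                   // fraction * whole
             + ((a * b) >> FIXEDBITS);    // fraction * fraction, rescaled
    }

Each partial product fits in 32 bits whenever the true result fits the 16.16 format, which is the point of splitting it this way.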

Joker_vD · a year ago
It's such a shame that multiplication in C (or most other languages, really) doesn't have its natural type (intM, intN) -> int{M+N}. Instead, you have to recover the higher half of the result either by doing additional narrow multiplication yourself, or by using some compiler intrinsic.
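
In standard C the usual workaround, when a wider type exists, is to cast up before multiplying and let the compiler lower it to a widening multiply; a rough sketch (names are mine):

    #include <stdint.h>

    // 32 x 32 -> 64: widen first, multiply once. Compilers typically emit a
    // single widening multiply instruction for this pattern.
    uint64_t mul_wide_u32(uint32_t a, uint32_t b) {
        return (uint64_t)a * b;
    }

    // Recovering just the high half of the product.
    uint32_t mul_high_u32(uint32_t a, uint32_t b) {
        return (uint32_t)(((uint64_t)a * b) >> 32);
    }

For 64 x 64 -> 128 there is no wider standard type, which is where compiler-specific routes such as GCC/Clang's unsigned __int128 (on 64-bit targets) come in.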
jnwatson · a year ago
Gcc can frequently tell what you're trying to accomplish and emit the correct instructions. Still I agree it would be ideal if this were explicit.
wk_end · a year ago
I always felt when learning about this stuff that people - pedagogically - make fixed point seem more complicated than it is.

Since this article is talking about more precisely positioning sprites in a 2D world, it could practically be a one-liner: "instead of tracking positions/velocities in pixels, track them in half pixels". Everything falls out of that intuition.
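
For instance (my own illustration, not from the article), tracking a position in 1/16-pixel units is just ordinary integer code:

    #include <stdint.h>

    #define SUBPIXELS 16                 // 16 sub-units per pixel (4 fractional bits)

    int32_t x  = 10 * SUBPIXELS;         // start at pixel 10
    int32_t vx = 3;                      // move 3/16 of a pixel per frame

    void tick(void)     { x += vx; }               // plain integer add
    int  screen_x(void) { return x / SUBPIXELS; }  // whole pixels for drawing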

pistoleer · a year ago
I wonder how many people have reinvented the concept of fixed point when they calculated using "cents" instead of "dollars".
city41 · a year ago
I'm the author of the blog post. I just used sprite positioning as a simple example. Things like collision detection and physics can't be done with half pixels.
wk_end · a year ago
Not sure what you mean - sure you can.

Trying to read between the lines here, if your objection is to half-pixels because they’re not precise enough for (good) physics, then I apologize for being unclear - I mean half-pixels, or quarter-pixels, or eighth-pixels, or whatever.

Another way of wording my comment is that I think it’s easier - especially for beginners - to think in terms of smaller units (represented as integers) than in terms of a new number format for representing fixed-size fractional parts of larger units. But the two concepts are ultimately the same.

01HNNWZ0MV43FF · a year ago
They can't?
fidotron · a year ago
It's worth saying the original Playstation was entirely fixed point. You can go surprisingly far with it.

https://en.wikipedia.org/wiki/Fixed-point_arithmetic#Softwar...

I spent so much of the early stage of my career doing early mobile stuff that I practically still think in fixed point and always have to adjust to floats. For example, fixed-point results can be compared exactly, while with floats that is not a great idea. TeX uses fixed point entirely because it was reproducible across machines in an era when floating point was not.
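
A quick illustration of the comparison point (my example, using a 16.16 format):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        printf("%d\n", 0.1 + 0.2 == 0.3);   // 0: doubles don't compare cleanly

        // The same values as 16.16 fixed point are plain integers and compare
        // exactly, as long as every machine converts the inputs the same way.
        int32_t a = (int32_t)(0.1 * 65536);
        int32_t b = (int32_t)(0.2 * 65536);
        int32_t c = (int32_t)(0.3 * 65536);
        printf("%d\n", a + b == c);         // 1
        return 0;
    }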

hansvm · a year ago
> fixed point results can be compared exactly

You gain the ability to get stable results across machines, but there is still necessarily a loss of precision, and different implementations of the same algorithm will get different results.

When would you want to compare fixed-point results bitwise though?

__s · a year ago
I think fixed point would be used a lot more with proper support in programming languages
fidotron · a year ago
What you would need is for the language to track the expected range of the numbers. You often end up with multiple different multiply/divide implementations (shifting by different amounts before/after) based on whether you can safely guarantee you are within an expected range or not.
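
Roughly what that ends up looking like in C, sketched for a 16.16 format (names mine):

    #include <stdint.h>

    // General case: widen so the intermediate product can't overflow.
    static int32_t fxmul(int32_t a, int32_t b) {
        return (int32_t)(((int64_t)a * b) >> 16);
    }

    // If you can guarantee the operands/result stay in range, pre-shifting
    // keeps everything in 32 bits, at the cost of 8 fractional bits per operand.
    static int32_t fxmul_small(int32_t a, int32_t b) {
        return (a >> 8) * (b >> 8);
    }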
SideQuark · a year ago
I doubt it. It fails in far too many useful programming situations and would cause more problems than floating point.

You cannot use it efficiently for much of anything: finance software, science software, engineering software, high-quality graphics software, 3D software, pretty much anything that needs real dynamic range or the ability to keep errors down while accumulating results.

This is exactly why floating point was invented and standardized: fixed point is a failure for almost any program and only works, with much effort, in certain situations.

(I've written tons of fixed point code, numerical libraries across the spectrum from high performance, high quality, tunable quality, arbitrary precision libs, posits and unums, IEEE half-float software implementations, and more, so I do know what I'm talking about).

capitainenemo · a year ago
Hedgewars uses fixed point due to inconsistencies in floating-point implementations breaking deterministic lockstep. 0AD and Spring RTS have had similar issues, although I think both use streflop now.
mmaniac · a year ago
This sort of thing is the norm for consoles without floating-point hardware. It's easy to do in software with integers, but sometimes hardware acceleration will use it too.

The SNES and GBA both supported affine transformations where the elements of the transformation matrix are fixed-point numbers. The Playstation's geometry coprocessor (GTE) used fixed-point matrices with quite low 16-bit precision. An emulator feature called PGXP is able to perform these calculations at higher precision.
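
In spirit (not exact register-level behaviour), applying such a matrix looks something like this, assuming 8.8 matrix elements:

    #include <stdint.h>

    // Map a screen position through a 2x2 matrix whose elements carry
    // 8 fractional bits, producing integer source-texture coordinates.
    void affine_map(int32_t pa, int32_t pb, int32_t pc, int32_t pd,
                    int x, int y, int *tx, int *ty) {
        *tx = (pa * x + pb * y) >> 8;
        *ty = (pc * x + pd * y) >> 8;
    }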

01HNNWZ0MV43FF · a year ago
And just to head off the discussion that happens every single thread:

- Yes, the PS1 had "jittery" vertices because 16 bits is not enough precision

- No, it was not because of using integers; you can use integers (AKA fixed-point) to do 3D just fine. If it had been 16.16 (32 bits total) it would probably look fine.

- No, this isn't the cause of the texture warping; that's because, unlike the N64, the PS1 only supported affine texture mapping, not perspective-correct texture mapping. The PS1 saw every 3D triangle as just a 2D triangle, and texture mapping in 2D differs from texture mapping in 3D (see the sketch below).

- Yes, the texture warping is why many of the best-looking PS1 games were basically built on a grid or found other ways to use a large number of small triangles instead of a small number of large triangles.
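
To make the affine vs. perspective-correct distinction concrete, here is a rough sketch (floats for clarity; the actual hardware computed neither exactly this way):

    // Interpolating a texture coordinate u along a screen-space span,
    // with t running from 0 to 1 in screen space.

    // Affine: linear in screen space; cheap, but wrong whenever the
    // triangle isn't parallel to the screen.
    float affine_u(float u0, float u1, float t) {
        return u0 + (u1 - u0) * t;
    }

    // Perspective-correct: interpolate u/w and 1/w linearly, then divide
    // per pixel to recover u.
    float persp_u(float u0, float w0, float u1, float w1, float t) {
        float uw = u0 / w0 + (u1 / w1 - u0 / w0) * t;
        float iw = 1.0f / w0 + (1.0f / w1 - 1.0f / w0) * t;
        return uw / iw;
    }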

vardump · a year ago
On the PC side, most developers largely stopped using fixed point for high-performance code somewhere in the Pentium 1-3 era. For 486-class systems it was still pretty useful.

Other than on retro systems, fixed point is still useful in smaller microcontrollers.

VyseofArcadia · a year ago
A buddy of mine and I are working on a weekend project together. We recently realized that we don't need all that much precision, just a little, and switching from doubles or floats to 16-bit fixed point in our main data structure actually makes it small enough to fit an instance in a typical cache line (< 64 bytes).

Completely unnecessary for our target platform but deeply satisfying.
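
Purely as an illustration of the kind of shrinkage involved (the actual structure isn't described here), swapping doubles for 8.8 values:

    #include <stdint.h>

    typedef int16_t fx8_8;   // 8.8 fixed point: value = raw / 256.0

    struct node_f64 { double pos[3], vel[3]; int32_t id; };  // 56 bytes on a typical 64-bit ABI
    struct node_fx  { fx8_8  pos[3], vel[3]; int32_t id; };  // 16 bytes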

vardump · a year ago
For performance-sensitive code, memory bandwidth is very often the limiting factor, so compressing values tends to make a lot of sense. The number of CPU cores is increasing much faster than memory bandwidth.

So not necessarily completely unnecessary.

pjmlp · a year ago
Michael Abrash's books contain information on these kinds of optimizations; their timeframe falls exactly on this transition point.
twoodfin · a year ago
A few years before that era, I was having a lot of fun with

https://en.wikipedia.org/wiki/Fractint

tralarpa · a year ago
On the Amiga and Atari, based on the Motorola 68000 like the NeoGeo, all 3D games used fixed-point arithmetic.

At that time, such games were written in assembler, and you had to be very careful to place the instructions for scaling and descaling in the right spots, not only to get the final result in the right units (i.e., screen coordinates) but also to preserve precision in intermediate calculations.
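
A rough C rendering of that concern, using an 8.8 format (my example, not from any particular game):

    #include <stdint.h>

    #define FRAC 8   // 8.8 fixed point

    // Rescaling each product immediately throws fractional bits away twice.
    int32_t dot_early_shift(int16_t ax, int16_t ay, int16_t bx, int16_t by) {
        return ((ax * bx) >> FRAC) + ((ay * by) >> FRAC);
    }

    // Accumulating at full precision and rescaling once at the end keeps
    // the extra bits through the intermediate sum.
    int32_t dot_late_shift(int16_t ax, int16_t ay, int16_t bx, int16_t by) {
        int32_t acc = (int32_t)ax * bx + (int32_t)ay * by;
        return acc >> FRAC;
    }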