Readit News
giovannibajo1 commented on Bill Atkinson has died   daringfireball.net/linked... · Posted by u/romanhn
duskwuff · 3 months ago
It's a bit more than that. The list of X coordinates is cumulative - once an X coordinate has been marked as an inversion, it continues to be treated as an inversion on all Y coordinates below that, not just until the next Y coordinate shows up. (This manifests in the code as D3 never being reset within the NOTRECT loop.) This makes it easier to perform operations like taking the union of two disjoint regions - the sets of points are simply sorted and combined.
giovannibajo1 · 3 months ago
Hmm, can you explain that better? I don’t get it. D3 doesn’t get reset because it’s guaranteed to be 0 at the beginning of each scanline, and the code needs to go through all the “scanline blocks” until it finds the one whose Y range contains the Y specified as argument. It seems to me that each scanline is still self-contained and logically begins at X=0 in the “outside” state?

giovannibajo1 commented on Bill Atkinson has died   daringfireball.net/linked... · Posted by u/romanhn
duskwuff · 3 months ago
> But how was the region implemented?

The source code describes it as "an unpacked array of sorted inversion points". If you can read 68k assembly, here's the implementation of PtInRgn:

https://github.com/historicalsource/supermario/blob/9dd3c4be...

giovannibajo1 · 3 months ago
Yeah those are the horizontal spans I was referring to.

It’s a sorted list of X coordinates (left to right). If you group them in pairs, they are begin/end intervals of pixels within the region (the visible ones), but it’s actually more useful to manipulate them as a flat array, as I described.

I studied the code a bit: each scanline is prefixed by its Y coordinate, and the list uses an out-of-bounds terminator (32767).
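
Putting the pieces together, here is a toy Python sketch of how a point lookup could work over that layout. The representation (a list of `(y_top, inversion points)` blocks, each self-contained) and all names are my guesses based on the description above, not the actual QuickDraw format:

```python
END = 32767  # out-of-bounds terminator closing each scanline block

def pt_in_region(scanlines, x, y):
    """scanlines: list of (y_top, [sorted X inversion points, ..., END])
    blocks, sorted by y_top. Each block applies from its y_top up to the
    next block's y_top. Returns True if (x, y) is inside the region."""
    xs = None
    for y_top, points in scanlines:
        if y_top > y:
            break
        xs = points  # remember the last block whose y_top <= y
    if xs is None:
        return False  # above the region entirely
    # Parity test: each inversion point at or left of x flips the state.
    inside = False
    for px in xs:
        if px == END or px > x:
            break
        inside = not inside
    return inside

# A rectangle covering X in [1, 5) on rows 0 and 1, empty from row 2 on.
rect = [(0, [1, 5, END]), (2, [END])]
```

Grouping `[1, 5]` in pairs gives back the visible span [1, 5), matching the flat-array view described above.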

giovannibajo1 commented on Bill Atkinson has died   daringfireball.net/linked... · Posted by u/romanhn
pducks32 · 3 months ago
Would someone mind explaining the technical aspect here? I feel that with modern compute and OS paradigms I can’t appreciate this. But even now I know that feeling when you crack it, and the thrill of getting the impossible to work.

It’s on all of us to keep the history of this field alive and honor the people who made it all possible. So if anyone would nerd out on this, I’d love to be able to remember him that way.

(I did read this https://www.folklore.org/I_Still_Remember_Regions.html but I might not be understanding it fully)

giovannibajo1 · 3 months ago
There were far fewer abstraction layers than today. Today, when your desktop application draws something, it draws into a context (a "buffer") which holds the picture of the whole window. Then the window manager / compositor simply paints all the windows on the screen, one on top of the other, in the correct priority (I'm simplifying a lot, but that's the idea). So when you are programming your application, you don't care about other applications on the screen; you just draw the contents of your window and you're done.

Back then, there wasn't enough memory to hold a copy of the full contents of all possible windows. In fact, there were actually zero abstraction layers: each application was responsible for drawing itself directly into the framebuffer (an array of pixels), at its correct position. So how do you handle overlapping windows? How could each application draw itself on the screen, but only on the pixels not covered by other windows?

QuickDraw (the graphics API written by Atkinson) contained a data structure called "region" which basically represents a "set of pixels", like a mask. And QuickDraw drawing primitives (e.g. text) supported clipping to a region. So each application had a region instance representing all the visible pixels of its window at any given time; the application would then clip all its drawing to that region, so that only the visible pixels would get updated.
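
To illustrate just the clipping idea, here is a toy Python sketch where a region is literally a set of pixels. This is only the concept; it has nothing to do with how QuickDraw actually stored regions:

```python
def fill_rect(framebuffer, region, x0, y0, x1, y1, color):
    """Fill a rectangle into the framebuffer, but only touch the pixels
    that belong to `region` (a set of (x, y) tuples). Pixels covered by
    other windows simply aren't in the region, so they are left alone."""
    for y in range(y0, y1):
        for x in range(x0, x1):
            if (x, y) in region:
                framebuffer[(x, y)] = color

# Window A owns a 3x3 area, but a 2x2 corner is covered by window B.
visible = {(x, y) for x in range(3) for y in range(3)} \
        - {(x, y) for x in (1, 2) for y in (1, 2)}
fb = {}
fill_rect(fb, visible, 0, 0, 3, 3, "red")  # only 5 pixels get painted
```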

But how was the region implemented? Obviously it could not have been a mask of pixels (as in, a bitmask), as that would use too much RAM and would be slow to update. In fact, the region data structure also had to be fast at operations like intersections, unions, etc., since the operating system had to update the regions of every window as windows got dragged around with the mouse.

So the region was implemented as a bounding box plus a list of visible horizontal spans (I think; I don't know the exact details). When you represent a list of spans, a common trick is to simply use a list of coordinates at which the "state" switches between "inside a span" and "outside a span". This representation makes for some nice tricks when doing operations like intersections.
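
For example, with that "switch point" encoding, union and intersection of two span lists on one scanline reduce to a merge of two sorted lists. A rough Python sketch of the trick (my own illustration, not QuickDraw's actual algorithm):

```python
def combine(a, b, op):
    """Combine two sorted lists of inversion points on one scanline.
    `op` combines the two inside/outside states (or = union, and =
    intersection). A point is emitted wherever the combined state flips."""
    out = []
    in_a = in_b = prev = False
    for x in sorted(set(a) | set(b)):
        if x in a:
            in_a = not in_a   # crossing a switch point of A
        if x in b:
            in_b = not in_b   # crossing a switch point of B
        cur = op(in_a, in_b)
        if cur != prev:       # combined state flipped: emit the point
            out.append(x)
            prev = cur
    return out

# Span [2, 6) combined with span [4, 9):
union = combine([2, 6], [4, 9], lambda p, q: p or q)   # -> [2, 9]
inter = combine([2, 6], [4, 9], lambda p, q: p and q)  # -> [4, 6]
```

Note how adjacent or overlapping spans merge automatically: the output is always a canonical sorted list of switch points again.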

Hope this answers the question. I'm fuzzy on many details, so there might be several mistakes in this comment (for which I apologize in advance), but the overall answer should be good enough to highlight the differences compared to what computers do today.

giovannibajo1 commented on Libogc (Wii homebrew library) discovered to contain code stolen from RTEMS   github.com/fail0verflow/h... · Posted by u/dropbear3
ranger_danger · 4 months ago
How could one ever prove that a solution was clean-room? For example I would consider the oman leak to taint all development of N64 in existence. Even if someone didn't personally look at it, they most certainly got information from someone else that did.
giovannibajo1 · 4 months ago
I can’t tell if this question is legal or moral/technical. I will answer the latter, from the point of view of a prospective user of the library who wants to make up their own mind about this.

It’s quite easy to show that libdragon was fully clean-roomed. There are thousands of pieces of evidence, like the git history showing incremental evolution and discovery, the various hardware test suites developed in parallel with it, and the Ares emulator improving its accuracy as things were discovered over the past 4-5 years. At the same time, the n64brew wiki has also evolved to provide a source of independently verified, trustworthy hardware details.

Plus there are tens of thousands of Discord log messages where development has happened incrementally.

This is completely different from e.g. romhack-related efforts like Nintendo microcode evolutions, where the authors explicitly acknowledge having used the leaks to study and understand the original commented source code.

Instead, libdragon's microcode has evolved from scratch, as is clearly visible from the git history: discovering things a bit at a time, writing fuzzy tests to observe corner-case behaviors, down to even creating a custom RSP programming language.

I believe all of this will be apparent to anybody approaching the codebase and studying it.

giovannibajo1 commented on Libogc (Wii homebrew library) discovered to contain code stolen from RTEMS   github.com/fail0verflow/h... · Posted by u/dropbear3
TuxSH · 4 months ago
> How much "reverse engineering" these days really is clean room and how much of it is just ripping off proprietary software?

In Nintendo console hacking scenes? None at all; there is no point to it, since going through the hassle of doing clean-room work as an individual is wasted effort.

Though, the spectrum between copy-pasting HexRays output verbatim and rewriting things yourself is fairly large.

giovannibajo1 · 4 months ago
The Nintendo 64 homebrew scene uses libdragon, which is 100% clean-room, 100% based on reverse engineering, fully open source, and allows creating ROMs with no proprietary libraries.
giovannibajo1 commented on Libogc (Wii homebrew library) discovered to contain code stolen from RTEMS   github.com/fail0verflow/h... · Posted by u/dropbear3
eqvinox · 4 months ago
Copyright laws (in sane countries) have (varying amounts of) exceptions for reverse engineering pieces that are required for compatibility/interoperability.

Whether this applies to the Nintendo SDK… no clue, ask your lawyer ;). (i.e.: was there an alternative option to using RE'd pieces of the Nintendo SDK?)

It makes sense from a perspective/perception of: with the Nintendo SDK, [if] there wasn't really a choice or an alternative. With the RTEMS code there was.

giovannibajo1 · 4 months ago
Of course there was. You can clean-room reverse-engineer the hardware. This is what Libdragon maintainers do daily to supply an open-source SDK for the Nintendo 64 with zero proprietary code in it.
giovannibajo1 commented on Gemini can't be disabled on Google Docs   twitter.com/kevinbankston... · Posted by u/h1fra
manuelmoreale · a year ago
I get what you’re saying. But at a fundamental level there’s a difference between “using” content to do the thing I’m asking you to do (display it in a browser) and running it through an AI to do processing on that content.

I personally think the two situations are quite different.

I agree that if you don’t want Google to sniff on your content you shouldn’t put it on their servers to begin with.

That said, stating that Gemini won’t remember is dubious, because given the track record of these companies, I have my doubts that they don’t log everything they can get their hands on.

giovannibajo1 · a year ago
Google Docs runs a lot of algorithms over the data you put in. For instance, it paginates your documents and shows a page count. That is an algorithm processing your data, exactly like Gemini does. There is no option in Google Docs to prevent the pagination algorithm from reading and processing my data.

Another example: Google Docs indexes the contents of your documents. That is, it stores all the words in a big database that you don't see and don't have access to, so that you can search for "tax" in the Google Docs search bar and bring up all the documents that contain that word. There is no option in Google Docs to stop it from indexing the contents of a document for search.
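
To make the indexing point concrete, this is the kind of structure (an inverted index) a search feature builds behind the scenes: every word of every document gets read and stored. A purely illustrative Python sketch; I obviously have no idea how Google's actual indexer works:

```python
from collections import defaultdict

def build_index(docs):
    """Map every word to the set of documents containing it. Building
    this requires reading the full contents of every document."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

docs = {"letter.doc": "about my tax refund", "notes.doc": "meeting notes"}
index = build_index(docs)
index["tax"]  # -> {"letter.doc"}
```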

When you decide to put your data into Google Docs, you accept that Google will process your data in several ways (which should hopefully be documented). Being upset that one specific algorithm is processing your data just because it has the "AI" buzzword attached to it seems like an overreaction prompted by the general panic we're living in.

I agree Google should be clear (and it is clear) about whether Gemini is being trained on your data, because that is something that can have side effects you have the right to be informed about. But Gemini merely processing your data to provide feature N+1 among the other 2 billion available is really not noteworthy.
