Aachen · 6 months ago
Reminds me of the 100 millionth OpenStreetMap changeset (commit). A few people, myself included, were casually trying for it, but in the end it went to someone who wasn't trying and was just busy mapping Africa! Much more wholesome, in hindsight. This person had also previously been nominated for an OSM award. I guess it helps that OpenStreetMap doesn't really allow for creating crap, because it's all live in production, which makes the Nth commit way more likely to land on someone's spontaneous, genuine edit. Either way, a fun achievement for GitHub :)

In case anyone cares to read more about the OSM milestone, the official blog entry: https://blog.openstreetmap.org/2021/02/25/100-million-edits-... My write-up of changeset activity around the event: https://www.openstreetmap.org/user/LucGommans/diary/395954

ash_091 · 6 months ago
A friend of mine spent an entire workday figuring out how to ensure he created the millionth ticket in our help desk. Not sure how he cracked it in the end but we had a little team party to celebrate the achievement.

This was probably fifteen years ago. I feel like working in tech was more fun back then.

deruta · 6 months ago
I was involved in the 99,999th and the 100,000th one in my FQA days.

We were being onboarded; they were just for demo purposes and were promptly deleted. No one cared about the Cool Numbers.

darkwater · 6 months ago
I wonder which is the latest ID today then...
chneu · 6 months ago
Your kind of comment is exactly why HN still rules. What a fun story. Thanks for sharing
Aachen · 6 months ago
Aww, thanks! I wasn't sure if I should go off-topic this much so I'm happy to hear this!
caleblloyd · 6 months ago
Awesome! Only a little over a billion more to go before GitHub’s very own OpenAPI Spec can start overflowing int32 on repositories too, just like it already does for workflow run IDs!

https://github.com/github/rest-api-description/issues/4511

bartread · 6 months ago
At the company where I did my stint as CTO, I turned up and noticed they were using 32-bit integers as primary keys on one of their key tables. It already had 1.3 billion rows and, at the rate rows were being added, the primary key values would overflow within months, so we ran a fairly urgent project to upgrade the IDs to 64-bit and avoid the total meltdown that would otherwise have ensued.
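
For a sense of the urgency, the back-of-the-envelope math is simple (a minimal sketch; the insert rate is a made-up figure, since the comment only says "months"):

    # Headroom left in a signed 32-bit primary key.
    INT32_MAX = 2**31 - 1            # 2,147,483,647
    current_max_id = 1_300_000_000   # ~1.3 billion rows, per the comment
    inserts_per_day = 5_000_000      # hypothetical rate for illustration

    days_left = (INT32_MAX - current_max_id) / inserts_per_day
    print(f"days until overflow: {days_left:.0f}")  # ~169 days at this rate
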
hobs · 6 months ago
heh, that's happened at at least 5 companies I have worked at. Go to check the database and find currency stored as floats, hilarious indexes, integers about to overflow, gigantic types with nothing in them.
gchamonlive · 6 months ago
What are the challenges of such projects? How many people are usually involved? Does it incur downtimes or significant technical challenges for either the infrastructure or the codebase?
darkwater · 6 months ago
Lived through that with a MySQL table. The best thing is that the table was eventually decommissioned (long after the migration) because the whole data model around it was basically wrong.
cyberax · 6 months ago
The same story happened inside Amazon.
neomantra · 6 months ago
A couple of weeks ago there were some Lua community issues when LuaRocks surpassed 65,535 packages.

There was a conflict between this and the LuaRocks implementation under LuaJIT [1] [2], inflicting pain on a narrow set of users as their CI/CD pipelines and personal workflows failed.

It was resolved pretty quickly, but interesting!

[1] https://github.com/luarocks/luarocks/issues/1797

[2] https://github.com/openresty/docker-openresty/issues/276

JKCalhoun · 6 months ago
I wish I were still at Apple. Probably most people here know that Apple has used an internal tool called "Radar" since, well, forever. Each "Radar" has an ID (bug #) associated with it.

Radars that landed on bug #1,000,000 and the like were kind of special. Unless someone screwed up (and let down the whole team), they were usually faux-Radars with lots of inside jokes, etc.

Pulling one up was enough, since a Radar could reference other Radars ... and generally you would go down the rabbit hole from there, enjoying the ride.

I was a dumbass not to capture (heck, even print) a few of those when I had the opportunity.

bjackman · 6 months ago
At Google, the monorepo VCS has monotonic IDs like this for changes. Unfortunately, a few years ago, as it approached some round number, the system was DoS'd by people running scripts trying to snag the ID. So now it skips IDs in the vicinity of big round numbers :(
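
Presumably the fix looks something like this (a toy sketch of the skip-the-round-numbers idea, not Google's actual implementation):

    def next_change_id(raw_next: int, window: int = 100) -> int:
        """Skip IDs that fall within `window` of a big round number."""
        for magnitude in (10**6, 10**7, 10**8, 10**9):
            round_mark = round(raw_next / magnitude) * magnitude
            if round_mark and abs(raw_next - round_mark) < window:
                return round_mark + window  # jump past the hot zone
        return raw_next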

I think there's probably a lesson in there about schema design...

xmprt · 6 months ago
> I was a dumbass not to capture (heck, even print) a few of those when I had the opportunity.

On the other hand, given how Apple deals with confidential data, you probably wouldn't want to be caught exfiltrating internal documents however benign they are.

msarnoff · 6 months ago
#SnakesOnARadar
8organicbits · 6 months ago
While we're on the subject of cool GitHub repo IDs, the first one is here:

https://api.github.com/repositories/1

https://github.com/mojombo/grit

mkagenius · 6 months ago
The millionth one is "vim-scripts/nexus.vim"

The 1000th is missing.

CGamesPlay · 6 months ago
Probably created via a script that just repeatedly checked https://api.github.com/repositories/999999999 until it showed up, and then created a new repository. Since repositories can be modified afterwards, he could even have given himself some buffer: create a bunch of repos and just delete the ones that don't get the right number. [append] Looking at the author's other repo created yesterday, I'm betting "yep" was supposed to be the magic number, and "shit" was an admission of missing the mark.
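
If anyone wants to picture it, a minimal sketch of that polling approach (the endpoints are real; the token is a placeholder and the timing/rate-limit handling is hand-waved):

    import time, requests

    H = {"Authorization": "Bearer <token>"}  # placeholder token

    # Wait for the ID just below the target to exist, then race to
    # create a repo and hope it lands on 1,000,000,000.
    while requests.get("https://api.github.com/repositories/999999999",
                       headers=H).status_code == 404:
        time.sleep(1)  # authenticated calls are capped at 5,000/hour

    requests.post("https://api.github.com/user/repos",
                  headers=H, json={"name": "yep"})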

Does anyone remember D666666 from Facebook? It was a massive codemod; the author used a technique similar to this one to get that particular number.

notfed · 6 months ago
Or...not. Why are you assuming this guy purposely grabbed the repo?
CGamesPlay · 6 months ago
Mostly just to share an approach to solving the "problem" of getting memorable numbers from a pool of sequential IDs.

But given that this user doesn't have activity very often, and created two repositories as the number was getting close, it feels likely that it was deliberate. I could be wrong!

topherPedersen · 6 months ago
You solved the mystery!
ramon156 · 6 months ago
Empty repo, so yes
umanwizard · 6 months ago
On a serious note, I'm a bit surprised that GitHub makes it trivial to compute the rate at which new repositories are created. Isn't that kind of information usually a corporate secret?
cheschire · 6 months ago
When your moat is a billion wide, you tend to walk around in your underwear a bit more I guess.
90s_dev · 6 months ago
Excellent Diogenes quote reference.
NooneAtAll3 · 6 months ago
unless you're YouTube?
raincole · 6 months ago
Is there any reason for GitHub to hide this information though? How could it be used against them?

(I understand many companies default to not expose any information unless forced otherwise.)

xboxnolifes · 6 months ago
Companies usually hide this type of information so competitors have a harder time determining if they are growing/shrinking/neutral.
toast0 · 6 months ago
The rate of creation is, like, meh, but being able to enumerate all of the repos might be problematic; following new repos and scanning them for leaked credentials could be a negative... but GitHub may have a feed of new repos anyway?

Also, having a sequence implies at least a global lock on that sequence during repo creation, whereas repo creation could otherwise use a narrower, scoped lock. OTOH, it's not necessarily handled that way: they could hand out ranges of the sequence to different servers/regions, in which case the repo IDs wouldn't actually be sequential.
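
The range-handout idea in code, roughly (a toy sketch; the names and block size are invented for illustration):

    import threading

    class RangeAllocator:
        """Each server grabs a block of IDs up front, so only the block
        handoff touches the global sequence; IDs stay unique, but repos
        created on different servers won't be strictly ordered."""

        def __init__(self, reserve_block, block_size=1000):
            self._reserve = reserve_block  # e.g. an atomic DB increment
            self._size = block_size
            self._lock = threading.Lock()
            self._next = self._end = 0

        def next_id(self):
            with self._lock:
                if self._next >= self._end:                 # block exhausted:
                    self._next = self._reserve(self._size)  # one global step
                    self._end = self._next + self._size
                self._next += 1
                return self._next - 1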

beaugunderson · 6 months ago
and you can find the latest ID incredibly quickly using binary search! (I used to track a bunch of websites' growth this way)
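
Roughly like this (a sketch; deleted repos leave gaps, so a 404 isn't proof you've passed the end, and a robust version would probe a few neighboring IDs):

    import requests

    def exists(i):
        return requests.get(f"https://api.github.com/repositories/{i}").status_code == 200

    hi = 1
    while exists(hi):   # exponential search for an upper bound
        hi *= 2
    lo = hi // 2        # invariant: lo exists, hi doesn't
    while lo + 1 < hi:  # then plain bisection
        mid = (lo + hi) // 2
        if exists(mid):
            lo = mid
        else:
            hi = mid
    print("latest repo ID is roughly", lo)
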
paulddraper · 6 months ago
You can see the rate of creation of new users too.

Which is arguably even more interesting…

Cyphase · 6 months ago
I'm wondering if AasishPokhrel created this repo for the purpose of being the billionth.
maniacalhack0r · 6 months ago
AasishPokhrel made 2 repos yesterday, "shit" and "yep". No activity between May 17th and June 10th.

I have no idea if it's possible to calculate the rate at which repos are being created and time your repo creation to hit vanity numbers.

paxys · 6 months ago
It's pretty easy to game this: just keep creating repos till you hit number one billion and delete the misses. Their API makes it trivial. The only issues are rate limits and other people simultaneously creating repos, so it's partly a matter of luck.
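
Something like this, say (a sketch; the endpoints are real GitHub REST calls, while the token and repo names are placeholders, and deletion needs the delete_repo scope):

    import itertools, requests

    H = {"Authorization": "Bearer <token>"}

    for n in itertools.count():
        r = requests.post("https://api.github.com/user/repos",
                          headers=H, json={"name": f"attempt-{n}"}).json()
        if r["id"] >= 1_000_000_000:
            break  # hit it, or overshot because someone else got there first
        # Miss: clean up and try again (rate limits will slow this down)
        requests.delete(f"https://api.github.com/repos/{r['full_name']}",
                        headers=H)
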
GodelNumbering · 6 months ago
There was a guy who got fired from Meta for creating excessive automated diffs in pursuit of a certain magic number
recursive · 6 months ago
I don't believe they will renumber the old ones. Also, it can't be trivial, since two people can try this, and only one can win.
kylehotchkiss · 6 months ago
I think he’s in university for software development in Nepal, and it’s really touching that a milestone like this could reach so far across the world. Hopefully he gets a big spot for this on his resume and can land a great career in development!
netsharc · 6 months ago
I don't get why this needs a big spot on his resume, or why it should lead to a great career. A company/hiring manager who thinks that being lucky enough to hit a magic number on some system has any relevance to the work, I'd rate as very insane...
notfed · 6 months ago
I find a bit of humor in the fact that this is completely unrequited attention. There's even a chance the guy is oblivious.
joshdavham · 6 months ago
I highly doubt it, but that does sound possible.
Sohcahtoa82 · 6 months ago
The repo seems to have gotten renamed and now redirects to https://github.com/AasishPokhrel/repository/

Lame. :-(

Sohcahtoa82 · 6 months ago
It was renamed back! :-D