The package with the most versions still listed on PyPI is spanishconjugator [2], which consistently published ~240 releases per month between 2020 and 2024.
[1] https://console.cloud.google.com/bigquery?p=bigquery-public-...
[2] https://pypi.org/project/spanishconjugator/#history
Regarding spanishconjugator, commit ec4cb98 has the description "Remove automatic bumping of version".
Prior to that commit, a cron job would run the 'bumpVersion.yml' workflow four times a day, which in turn executed the bump2version Python module to bump the patch level. [0]
Edit: discussed here: https://github.com/Benedict-Carling/spanish-conjugator/issue...
[0] https://github.com/Benedict-Carling/spanish-conjugator/commi...
The underlying dataset is hosted at sql.clickhouse.com, e.g. https://sql.clickhouse.com/?query=U0VMRUNUIGNvdW50KCkgICBGUk...
disclaimer: built this a while ago, but we maintain it at ClickHouse
oh, and RubyGems data is also there.
[0] https://sql.clickhouse.com?query=U0VMRUNUIHByb2plY3QsIE1BWCh...
[1] Quota read limit exceeded. Results may be incomplete.
Tangential, but I've only heard about BigQuery from people being surprised by gargantuan bills for running one query on a public dataset. Is there a "safe" way to use it with a cost limit, for example?
Yes, you can set price caps. The cost of a query is knowable ahead of time with the default pricing model ($6 per TB of data processed in a query). People usually get caught out by running expensive queries repeatedly. BigQuery is very cost effective and can be used safely.
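For a concrete guardrail, here's a minimal sketch with the Node.js client (@google-cloud/bigquery), assuming I'm remembering the job option names correctly; the table name is invented. A dry run reports how many bytes a query would scan without billing anything, and maximumBytesBilled makes the job fail up front rather than run past the cap:

    const { BigQuery } = require('@google-cloud/bigquery');
    const bigquery = new BigQuery();

    async function main() {
      const query = 'SELECT name, version FROM `my-project.registry.versions`'; // invented table

      // Dry run: estimate bytes scanned before paying for anything.
      const [dryRun] = await bigquery.createQueryJob({ query, dryRun: true });
      console.log('would scan %s bytes', dryRun.metadata.statistics.totalBytesProcessed);

      // Real run, but refuse to bill more than ~1 GB of scanned data.
      const [job] = await bigquery.createQueryJob({
        query,
        maximumBytesBilled: String(1024 ** 3),
      });
      const [rows] = await job.getQueryResults();
      console.log(rows.length, 'rows');
    }

    main().catch(console.error);

As far as I know, the account-wide "price caps" mentioned above are the custom query quotas (bytes scanned per day, per project or per user) set in the Cloud Console.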
I decided my life could not possibly go on until I knew what "elvisgogo" does, so I downloaded the tarball and poked around. It's a pretty ordinary numpy + pandas + matplotlib project that makes graphs from CSV. One line jumped out at me:
str_0 = ['refractive_index','Na','Mg','Al','Si','K','Ca','Ba','Fe','Type']
The University of St Andrews has a laser named "elvis" that goes on a remote-controlled submarine: https://www.st-andrews.ac.uk/~bds2/elvislaser.htm
I was hoping it'd be about go-go dancing to elvis music, but physics experiments on light in seawater is pretty cool too.
> spanishconjugator [2], which consistently published ~240 releases per month between 2020 and 2024
They also stopped updating major and minor versions after hitting 2.3 in Sept 2020. Would be interesting to hear the rationale behind the versioning strategy. Feels like you might as well use a datetimestamp for the version.
The author has run into the same problem as anyone who wants to do analysis on the NPM registry: there's just no good first-party API for this stuff anymore.
It seems this was their first time going down this rabbit hole, so for them and anyone else, I'd urge you to use the deps.dev Google BigQuery dataset [0] for this kind of analysis. It does indeed include NPM and would have made the author's work trivial.
Here's a gist with the query and the results: https://gist.github.com/jonchurch/9f9283e77b4937c8879448582b...
[0] https://docs.deps.dev/bigquery/v1/
This is insane
I hate to deride the entire community, but many of the collective community decisions are smells. I think that the low barrier to entry means that the community has many inexperienced influential people.
A lot of these decisions were made by a small number of corporations, not necessarily the community, after JavaScript went "enterprise" to make it seem like a more "serious" programming language to SV entrepreneurs.
The bar for entry was always low with JavaScript, but it also used to be a lot more sane when it was a publicly driven language.
The Julia General registry is locally stored as a tar.gz and has version info for all registered packages, so I tried this out for Julia packages. The top 5 are:
So, no crazy numbers or random unknown packages: all are major packages that have just had a lot of work and history to them. Out of the top 10, pretty much half were from the SciML ecosystem.
Caveats/constraints: Like the post, this ignores non-SemVer packages (which mostly used date-based versions) and also jll (binary wrapper) packages which just use their underlying C libraries' versions. Among jlls, the largest that isn't a date afaict is NEO_jll with 25.31.34666+0 as its version.
There are many that are, but I feel like your comment is based on the same faulty assumption as your sibling comment: that this is an ordering of version numbers as a whole. It's not; the ordering is on the same basis as in the post, the largest single number within the major.minor.patch trio.
Incidentally I once ran into a mature package that had lived in the 0.0.x lane forever and treated every release as a patch, racking up a huge version number, and I had to remind the maintainer that users depending with caret ranges won't get those updates automatically. (In semver caret ranges never change the leftmost non-zero digit; in 0.0.x that digit is the patch version, so ^0.0.123 is just a hard pin to 0.0.123). There may occasionally be valid reasons to stay on 0.0.x though (e.g. @types/web).
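To make the caret behaviour concrete, here's a quick check with the npm semver package (version numbers invented for the example):

    const semver = require('semver');

    // At 1.x and above, a caret range picks up new patches and minors:
    console.log(semver.satisfies('1.2.4', '^1.2.3')); // true

    // In the 0.0.x lane the patch is the leftmost non-zero element,
    // so a caret range degenerates into an exact pin:
    console.log(semver.satisfies('0.0.124', '^0.0.123')); // false
    console.log(semver.satisfies('0.0.123', '^0.0.123')); // true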
It's the type definitions for developing Chrome extensions. They'd been incrementing in the 0.0.x lane for almost a decade and bumped it to 0.1.0 after I raised the issue, so I doubt it was intentional:
https://www.npmjs.com/package/@types/chrome?activeTab=versio...
Anthony Fu’s epoch versioning scheme (to differentiate breaking change majors from "marketing" majors) could yield easy winners here, at least on the raw version number alone (not the number of sequential versions released):
https://antfu.me/posts/epoch-semver
I wonder why. Conventions that are being broken, maybe.
I don't know if this is the origin, but the semver spec says 0.x.y is unstable. Sure, not everybody uses semver, but it is popular enough for people to make incorrect assumptions.
https://semver.org/#spec-item-4
If the guy writing and maintaining the software is stating "this software is not stable yet" then who am I to disagree?
The "winner" just had its 3000th release on GitHub, already a few patch versions past the version referenced in this article (which was published today): https://github.com/wppconnect-team/wa-version
I made a fairly significant (dumb) mistake in the logic for extracting valid semver versions. I was doing a falsy check, so if any of major/minor/patch in the version was a 0, the whole package was ignored.
The post has been updated to reflect this.
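For illustration, the failure mode is roughly this (a guess at the shape of the bug, not the author's actual code):

    const [major, minor, patch] = '1.0.3'.split('.').map(Number);

    // Buggy: 0 is falsy, so any version with a zero component gets dropped.
    const keptBuggy = Boolean(major && minor && patch);                           // false for 1.0.3

    // Fixed: only reject components that aren't non-negative integers.
    const kept = [major, minor, patch].every(n => Number.isInteger(n) && n >= 0); // true for 1.0.3

    console.log({ keptBuggy, kept });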
Brief reminder/clarification that these tools are used to circumvent WhatsApp ToS, and that they are used to:
1- Spam
2- Scam
3- Avoid paying for the WhatsApp API (which is the only form of monetization)
And that the reason this thing gets so many updates is probably a cat-and-mouse game where Meta continuously updates their software to defeat these kinds of hacks and the maintainers update theirs as well, whether in an automated or manual fashion.
Considering the $18 billion price tag and the current mixing of user data between Meta and WhatsApp, I believe Meta now has more revenue streams in mind than just the API pricing.
Hmm yeah, I decided that one counts because the new packages have (slightly) different content, although it might be the case that the changes are junk/pointless anyway.
> Time to fetch version data for each one of those packages: ~12 hours (yikes)
The author could improve the batching in fetchAllPackageData by not waiting for all 50 (BATCH_SIZE) promises to resolve at once. I just published a package for proper promise batching last week: https://www.npmjs.com/package/promises-batched
Just spin up a loop of 50 call chains. When one completes you just do the next on next tick. It's like 3 lines of code. No libraries needed. Then you're always doing 50 at a time. You can still use await.
async function work() { await thing(); process.nextTick(work); }
for (let i = 0; i < 50; i++) work();
then maybe a separate timer to check how many tasks are active I guess.
The implementation is rather simple, but more than 3 LoC: https://github.com/whilenot-dev/promises-batched/blob/main/s...
Promise.all waits for all 50 promises to resolve, so if one of these promises takes 3s while the other 49 take 0.5s, you're wasting 2.5s waiting on each batch.
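A rough sketch of that fixed-concurrency idea, for what it's worth: fetchPackage is a stand-in for whatever per-package request fetchAllPackageData makes, and 50 mirrors the BATCH_SIZE mentioned above (this is not the promises-batched implementation):

    async function fetchAllLimited(names, limit = 50) {
      const results = [];
      let next = 0;

      async function worker() {
        // Each worker keeps claiming the next unprocessed package name,
        // so one slow request only stalls one slot, not a whole batch.
        while (next < names.length) {
          const i = next++;
          results[i] = await fetchPackage(names[i]); // stand-in for the real request
        }
      }

      // Start `limit` workers and let them drain the list together.
      await Promise.all(Array.from({ length: limit }, () => worker()));
      return results;
    }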
Couldn't find any specific rate limit numbers besides the one mentioned here[0] from 2019:
> Up to five million requests to the registry per month are considered acceptable at this time
[0]: https://blog.npmjs.org/post/187698412060/acceptible-use.html