Readit News
0x500x79 commented on Which NPM package has the largest version number?   adamhl.dev/blog/largest-n... · Posted by u/genshii
stabbles · 3 months ago
For Python (or PyPI) this is easier, since their data is available on Google BigQuery [1]; you can just run

    SELECT * FROM `bigquery-public-data.pypi.distribution_metadata` ORDER BY length(version) DESC LIMIT 10
The winner is: https://pypi.org/project/elvisgogo/#history

The package with the most versions still listed on PyPI is spanishconjugator [2], which consistently published ~240 releases per month between 2020 and 2024.
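
That ranking can be sanity-checked from the same table by counting versions per package instead of sorting by version length — a rough sketch, assuming the table exposes a `name` column alongside `version` (per-month cadence would need a further group-by on an upload timestamp):

    SELECT name, COUNT(DISTINCT version) AS n_versions
    FROM `bigquery-public-data.pypi.distribution_metadata`
    GROUP BY name
    ORDER BY n_versions DESC
    LIMIT 10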

[1] https://console.cloud.google.com/bigquery?p=bigquery-public-...

[2] https://pypi.org/project/spanishconjugator/#history

0x500x79 · 3 months ago
deps.dev has a similar BigQuery dataset covering a couple more languages, if someone wanted to do this analysis across the other ecosystems they support (rough sketch below).
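
A sketch only — the dataset, table, and column names here (`bigquery-public-data.deps_dev_v1.PackageVersions` with `System`/`Name`/`Version`) are from memory of the deps.dev docs and may differ:

    SELECT System, Name, Version
    FROM `bigquery-public-data.deps_dev_v1.PackageVersions`
    ORDER BY LENGTH(Version) DESC
    LIMIT 10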
0x500x79 commented on Intel Arc Pro B50 GPU Launched at $349 for Compact Workstations   guru3d.com/story/intel-ar... · Posted by u/qwytw
mythz · 3 months ago
Really confused why Intel and AMD both continue to struggle and yet still refuse to offer what Nvidia won't, i.e. high-RAM consumer GPUs. I'd much prefer paying 3x cost for 3x VRAM (48GB/$1047), 6x cost for 6x VRAM (96GB/$2094), 12x cost for 12x VRAM (192GB/$4188), etc. They'd sell like hotcakes and software support would quickly improve.

At 16GB I'd still prefer to pay a premium for Nvidia GPUs given their superior ecosystem. I really want to get off Nvidia, but Intel/AMD aren't giving me any reason to.

0x500x79 · 3 months ago
I think it's a bit of planned obsolescence as well. The 1080 Ti has been a monster with its 11GB of VRAM up until this generation. A lot of enthusiasts basically call out that Nvidia won't make that mistake again, since it led to longer upgrade cycles.
0x500x79 commented on Vibe code is legacy code   blog.val.town/vibe-code... · Posted by u/simonw
pyman · 5 months ago
Something interesting is happening. A false narrative is spreading online, pushed by people who know little about engineering, and others who should know better.

They claim junior devs are now 10x more productive, and project managers are shipping code themselves. Now, close your eyes for five seconds and try to picture what that code looks like. It's 100% legacy, disposable code.

The problem isn't AI, or PMs turning Figma into code, or junior devs prompting like mad. The real problem is the disconnect between expectations and outcomes. And that disconnect exists because people are mixing up terminology that took engineers years to define properly.

- A lean prototype is not the same as a disposable prototype

- An MVP is not the same as a lean prototype

- And a product is not the same as an MVP

A lean prototype is a starting point, a rough model used to test and refine an idea. If it works, it might evolve into an MVP. An MVP becomes a product once it proves the core assumptions and shows there's a real need in the market. And a disposable prototype is exactly that, something you throw away after initial use.

Vibing tools are great for building disposable prototypes, and LLM-assisted IDEs are better for creating actual products. Right now, only engineers are able to create lean prototypes using LLM prompts outside the IDE. Everyone else is just building simple (and working?) software on top of disposable code.

0x500x79 · 5 months ago
I had a PM at my company (with an engineering background) post AI generated slop in a ticket this week. It was very frustrating.

We asked them: "Where is xyz code?" It didn't exist; it was a hallucination. We asked them: "Did you validate abc use cases?" No, they did not.

So we had a PM push a narrative to executives that this feature was simple and that he could do it with AI-generated code, and it didn't cover even 5% of the use cases that would need to be solved in order to ship this feature.

This is the state of things right now: all talk, little results, and other non-technical people being fed the same bullshit from multiple angles.

0x500x79 commented on Vibe code is legacy code   blog.val.town/vibe-code... · Posted by u/simonw
code_runner · 5 months ago
I’m also surprised at the progress but don’t quite share the “AI is doing a good job” perspective.

It’s fine. Some things it’s awful at. The more you know about what you’re asking for, the worse the result, in my opinion.

That said, a lot of my complaints are about out-of-date APIs being referenced and other little nuisances. If AI is writing the code, why did we even need an ergonomic API update in the first place? Maybe APIs stabilize and AI just goes nuts.

0x500x79 · 5 months ago
LLMs are doing a great job at generating syntactically correct output related to the prompt or task at hand. The semantics, hierarchy, architecture, abstraction, security, and maintainability of a code base are not being handled by LLMs generating code.

So far, the syntax has gotten better in LLMs, and more tooling allows for validating it even further, but all those other things are still missing.

I feel like my job is still safe, but that of less experienced developers is in jeopardy. We will see what the future brings.

0x500x79 commented on Two narratives about AI   calnewport.com/no-one-kno... · Posted by u/RickJWagner
wslh · 5 months ago
I think part of the solution is to start discussing the specific limitations of LLMs, rather than speaking broadly about AI/AGI. For example, many people assume these models can understand arbitrarily long inputs, but LLMs have strict token limits. Even when large inputs fit within the model's context window, it may not reason effectively over the entire content. This happens because the model's attention is spread across all tokens, and its ability to maintain coherence or focus can degrade with length. These constraints, along with hardware limitations like those in NPUs, are not always obvious to everyday users.
0x500x79 · 5 months ago
I agree, but unfortunately it falls flat IME. The hype is too strong, and it's being pushed by the Fab Five, which creates an unbearable wall in these conversations.

I have these conversations on a day-to-day basis, and you are labeled a hater or stupid because XYZ CEO says that AI should be in everything / is making things 100x easier.

There is a constant stream of "What if we use an LLM/AI for this?" even when it's a terrible tool for the job.

0x500x79 commented on Two narratives about AI   calnewport.com/no-one-kno... · Posted by u/RickJWagner
PaulDavisThe1st · 5 months ago
> you have [ ... ]. then you have [ ... ]

Those are groups defined by something other than actual LLM usage, which makes them both not particularly interesting. What is interesting:

You have people who've tried using LLMs to generate code and found it utterly useless.

Then you have people who've tried using LLMs to generate code and believe that it has worked very well for them.

0x500x79 · 5 months ago
I think this is an easy thing to wrap my mind around (since I have been in both camps):

AI can generate lots of code very quickly.

AI does not generate code that follows taste or best practices.

So in cases where the task is small, easily plannable, within the training corpus, or for a project that doesn't have high stakes, it can produce something workable quickly.

In larger projects, or for something that needs to stay maintainable in the future, code generation can fall apart or produce subpar results.

0x500x79 commented on Two narratives about AI   calnewport.com/no-one-kno... · Posted by u/RickJWagner
0x500x79 · 5 months ago
There is a great article floating around on the economics of AI and how parasitic the current market is among the Fab Five.

We are 27-ish months past the claim by some of these CEOs that all software engineers would be replaced within six months. It is their job to analyze the market and determine what the next big thing is, but they can be wrong - no one has a crystal ball here.

The difficulty for me is how disconnected (or even flat-out manipulative) a lot of the takes being pushed out are. I am an early adopter of AI tools. I utilize them on a day-to-day basis, but there is no way that I see AI taking SW jobs right now.

You have others claiming that these tools will just get exponentially better now. Time will tell, but as of right now there is still too much value in human coders, and anyone that is actively pushing to replace SWEs with "Agents" is either betting big on a future that is unproven or attempting to entice/manipulate the larger market.

0x500x79 commented on MCP is eating the world   stainless.com/blog/mcp-is... · Posted by u/emschwartz
pydry · 6 months ago
It's not the solution to every problem, but it's a great substitute for an infrequently used app with mediocre UX, and most of the world's apps probably do fall into that category actually.
0x500x79 · 6 months ago
Agree, but I think we should hold those apps to a higher bar. Chat interfaces are not a replacement for good UX.
0x500x79 commented on MCP is eating the world   stainless.com/blog/mcp-is... · Posted by u/emschwartz
tempodox · 6 months ago
> … agents you don't control. It's awesome …

What have we come to when losing control in software development is called “awesome”.

0x500x79 · 6 months ago
I don't think that MCP is aimed at software developers.

MCP is great for: "I would like Claude Desktop/VSCode/Cursor to know about my JIRA tickets." AFAIK most of the tools used by AI coding tools are not delivered through MCP.

u/0x500x79

Karma: 396 · Cake day: July 1, 2020