webprofusion · 4 months ago
Looks interesting.

I see they also contributed a fix to the OnlyFans notification robot. Clearly doing the important work that the internet needs.

brokencode · 4 months ago
This is what I want to do when I retire. Maybe not OnlyFans fixes specifically, but just go around fixing random stuff.

Like if Batman turned out to be bad at fighting criminals, so he had to fight null pointer exceptions instead.

nurettin · 4 months ago
Maybe hack into facilities, optimize their scripts and deployment, then leave without a trace, confusing the IT department.
gnarlouse · 4 months ago
bugman
OrderlyTiamat · 4 months ago
"Fear not the bugs citizen! For in my utility belt, I have REGEX and VIM!"
anticensor · 4 months ago
That notification robot codebase is actually generic, Zara Darcy just used OnlyFans branding to boost her follower base.
hyperhello · 4 months ago
> Real-world impact: With 50+ view parts (text, cursors, minimap, scrollbar, widgets, decorations, etc.), this wastes 1-2ms per frame

Good thing to find...

blharr · 4 months ago
How does it possibly take 1-2ms to sort... 50 items? I'd expect that to happen on the order of microseconds.
klodolph · 4 months ago
It’s being sorted not once per frame, but once per item.

If you have 50 items in the list, then the list gets sorted 50 times. If you have 200 items in the list, the list is sorted 200 times.

This is unnecessary. The obvious alternative is a binary heap… which is what the fix does. Although it would also be obvious to reuse an existing binary heap implementation, rather than inventing your own.
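
Roughly the shape of the pattern, in illustrative TypeScript (the names are made up; this isn't the actual VS Code source):

```typescript
interface Task {
  priority: number;
  execute(): void;
}

// The problem pattern: the whole queue is re-sorted on every trip through
// the drain loop, so draining n items costs n sorts (plus an O(n) shift()
// per item) instead of a single sort or a heap pop.
function drainQueue(queue: Task[]): void {
  while (queue.length > 0) {
    queue.sort((a, b) => a.priority - b.priority); // once per item, not once per frame
    const top = queue.shift()!;
    top.execute();
  }
}
```

A binary heap swaps the per-item sort for an O(log n) pop, and keeps the ordering correct even if execute() pushes new work onto the queue.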

retsibsi · 4 months ago
The issue seems to be a direct copy-paste from an LLM response, so I suspect "this wastes 1-2ms per frame" is estimated/made up.
hdjfjkremmr · 4 months ago
it's sorting, 50 times over, a list that shrinks from 50 items down to 0.
adwn · 4 months ago
I'm confused: Does top.execute() modify currentQueue in some way, like pushing new elements to it? If it doesn't, then why not simply move the sort out of the loop? This is simpler and faster than maintaining a binary heap.
nateb2022 · 4 months ago
> If it doesn't, then why not simply move the sort out of the loop?

Yup, they should definitely move the sort outside of the loop. shift() is O(N), so the overall complexity would still be O(N^2), but they could avoid the shifting by reverse-sorting outside the loop and then consuming the array from the end with pop().
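
Something like this, as a sketch (made-up names, and it assumes execute() doesn't enqueue new work mid-loop, which is the open question above):

```typescript
interface Task {
  priority: number;
  execute(): void;
}

function drainSortedOnce(queue: Task[]): void {
  // Sort once, in reverse order, so the next item to process sits at the
  // end of the array.
  queue.sort((a, b) => b.priority - a.priority);

  // pop() removes from the end in O(1), so the whole drain is one O(n log n)
  // sort plus an O(n) loop; no per-iteration sort, no O(n) shift().
  while (queue.length > 0) {
    queue.pop()!.execute();
  }
}
```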

adwn · 4 months ago
One more thing: Nowadays sort() functions are usually heavily optimized and recognize already-sorted subsequences. If currentQueue isn't modified during the loop, then the sort() call should run in O(n) after the first iteration, instead of O(n * log n). Still worse than not having it inside the loop at all, of course.
geokon · 4 months ago
I feel like stuff like this is very easy to zero in on with Valgrind (in C++-land) or VisualVM (in JVM-land).

I don't work in JS-land.. but are Electron apps difficult to do performance profiling on?

nawgz · 4 months ago
No. The browser dev tools are available, and they make it pretty easy to do performance profiling, get a flamegraph, etc.

Just seems like the reality is that the number of extensions or widgets or whatever has remained low enough that this extra sorting isn't actually that punitive in most real-world use cases. As a long-time developer working mainly in VSCode, I notice no difference in performance/snappiness between VSCode and JetBrains Rider, which is the main other IDE I have meaningful experience with these days.
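
If you want to check a specific code path yourself, something like this shows up right in the Performance panel timeline (just a sketch using the standard User Timing API; renderViewParts is a made-up stand-in, not a real VS Code function):

```typescript
// Sketch only: timing a suspect code path with the User Timing API.
function renderViewParts(): void {
  // ... the code path under test ...
}

performance.mark('render-start');
renderViewParts();
performance.mark('render-end');
performance.measure('render', 'render-start', 'render-end');

// Measures appear in the DevTools Performance panel, or can be read back directly:
for (const entry of performance.getEntriesByName('render')) {
  console.log(`${entry.name}: ${entry.duration.toFixed(2)}ms`);
}
```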

muglug · 4 months ago
Given that the issue already gives a before-and-after metric, it's extremely odd there's no POC PR attached.

This just seems like an AI slop GitHub issue from beginning to end.

And I'd be very surprised if VS Code performance could be boosted that much by a supposedly trivial fix.

ollin · 4 months ago
Yeah the issue reads as if someone asked Claude Code "find the most serious performance issue in the VSCode rendering loop" and then copied the response directly into GitHub (without profiling or testing anything).
duskwuff · 4 months ago
Even if it is a real performance issue, the reasonable fix would be to move the sort call out of the loop - implementing a new data structure in JS is absolutely not the way to fix this.
oe · 4 months ago
Adding a new data structure just for this feels like such an AI thing. I've added a rule to our agents.md to prefer existing libraries and types; otherwise Gemini will just happily generate things like this.
muglug · 4 months ago
Right, and also this would show up in the profiler if it were a time sink — and I'm 100% certain this code has been profiled in the 10 years it's been in the codebase.
nneonneo · 4 months ago
There’s clearly functionality to push more work to the current window’s queue, so I would not be surprised if the data structure needs to be continually kept sorted.

(Somewhere in the pile of VSCode dependencies you’d think there’d be a generic heap data structure though)

sillythrowawy9 · 4 months ago
OP’s account also seems automated. This certainly feels like an automated post to social media for PR clout.
anticensor · 4 months ago
Not really, I read HN more than I post to it, but I found this one interesting.
a-dub · 4 months ago
i see emojis in the comments.

also no discussion of measured runtimes for the rendering code. (if it saves ~1.3ms that sounds cool, but how many ms away from the supposed 16ms budget was the frame to begin with?)

gigatexal · 4 months ago
I’ve already moved from VSCode to Zed. It’s native. Faster. Has most of the functionality I had before. I’m a huge fan.
nawgz · 4 months ago
A bit sloppy but easily resolved - surprised it took so long to notice, or maybe it was new?
minitech · 4 months ago
It’s been around since the root commit in 2015: https://github.com/microsoft/vscode/blob/8f35cc4768393b25468...
anticensor · 4 months ago
Yeah, it was a bit surprising to me as well.
rockorager · 4 months ago
If you work with LLM agents, you will immediately be able to tell this issue is written by one. The time cost of this sort is almost certainly not real, as others have pointed out.

I’ve had agents find similar “performance bottlenecks” that are indeed BS.

znpy · 4 months ago
as one of the commenters to the issue wrote, arguing whether the text is ai-generated or not is essentially useless.

the important question is: is this an actual performance bug?

cgriswald · 4 months ago
The question is whether credence, and therefore time, should be given to the claim that the performance bug exists.
ec109685 · 4 months ago
I hate ai sometimes — an AI-generated pull request (really, some rando found a way of shaving 12% off the run loop?) responded to by an AI comment bot:

> This feature request is now a candidate for our backlog. The community has 60 days to upvote the issue. If it receives 20 upvotes we will move it to our backlog. If not, we will close it. To learn more about how we handle feature requests, please see our documentation.

dkdcio · 4 months ago
reasonably confident that’s just an automated response bot, not AI…

also it’s an issue, not a PR

ec109685 · 4 months ago
Even worse (issue part).