Readit News
notsylver commented on Node.js is able to execute TypeScript files without additional configuration   nodejs.org/en/blog/releas... · Posted by u/steren
rmonvfer · 7 days ago
I’m not a heavy JS/TS dev, so here’s an honest question: why not use Bun and forget about Node? Sure, I understand that not every project is evergreen, but isn’t Bun a much better runtime in general? It supports TS execution from day 1, has much faster dependency resolution, better ergonomics… and I could keep going.

I know I’m just a single data point, but I’ve had a lot of success migrating old Node projects to Bun (in fact, I haven’t used Node itself since Bun was made public).

Again, I might be saying something terribly stupid because JS/TS isn’t really my turf so please let me know if I’m missing something.
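(For context on the headline feature, a minimal sketch; the version details come from the Node release notes, not this thread:)

```ts
// greet.ts — runs as-is with `node greet.ts` on recent Node: type
// stripping is on by default from 23.6, and 22.6+ can use
// --experimental-strip-types. With Bun it has been `bun greet.ts`
// since its first release.
const name: string = "world";
console.log(`hello, ${name}`);
```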

notsylver · 7 days ago
I have tried fully switching to Bun repeatedly since it came out, and every time I got 90% of the way there only to hit a problem that couldn't be worked around. Last time I tried, I was still stuck on some libraries requiring N-API functions that weren't implemented in Bun yet, as well as an issue I forget, but it was vaguely something like `opendir` silently ignoring the `recursive` option, causing a huge headache.
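A minimal sketch of the kind of call that bit me (the exact failing API is from memory, so treat this as illustrative):

```ts
// Count entries under a directory using fs.opendir's `recursive` option
// (added in Node 20.1). A runtime that silently ignores `recursive`
// returns only the immediate children, so the counts quietly diverge.
import { opendir } from "node:fs/promises";

async function countEntries(root: string): Promise<number> {
  let count = 0;
  const dir = await opendir(root, { recursive: true });
  for await (const _entry of dir) count++;
  return count;
}

console.log(await countEntries("."));
```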

I'm waiting patiently for Bun to catch up because I would love to switch, but I don't think it's ready for production use in larger projects yet. Even when things work, a lot of the Bun-specific functionality sounds nice at first but feels like an afterthought in practice, and the documentation is far from the quality of Node.js's.

notsylver commented on I want everything local – Building my offline AI workspace   instavm.io/blog/building-... · Posted by u/mkagenius
andylizf · 15 days ago
Yeah, that's a fair point at first glance. 50GB might not sound like a huge burden for a modern SSD.

However, the 50GB figure was just a starting point for emails. A true "local Jarvis" would need to index everything: all your code repositories, documents, notes, and chat histories. That raw data can easily be hundreds of gigabytes.

For a 200GB text corpus, a traditional vector index can swell to >500GB. At that point, it's no longer a "meager" requirement. It becomes a heavy "tax" on your primary drive, which is often non-upgradable on modern laptops.
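As a rough sketch of where that figure comes from (every constant below is an illustrative assumption, not a measurement):

```ts
// Back-of-envelope index-size arithmetic for a 200GB text corpus.
const corpusBytes = 200e9;                    // 200 GB of raw text
const bytesPerChunk = 2_000;                  // ~500 tokens per chunk (assumed)
const chunks = corpusBytes / bytesPerChunk;   // 100 million chunks
const dims = 1024;                            // embedding dimensions (assumed)
const bytesPerVector = dims * 4;              // float32 = 4 bytes per dim
const rawGB = (chunks * bytesPerVector) / 1e9;
console.log(`${rawGB} GB of raw vectors`);    // ~410 GB, before the graph
// links and metadata an HNSW-style index adds on top.
```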

The goal for practical local AI shouldn't just be that it's possible, but that it's also lightweight and sustainable. That's the problem we focused on: making a comprehensive local knowledge base feasible without forcing users to dedicate half their SSD to a single index.

notsylver · 15 days ago
You already need very high-end hardware to run useful local LLMs; I don't know if a 200GB vector database would be the dealbreaker in that scenario. But I wonder how small you could get it with compression and quantization on top.
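For example, plain scalar (int8) quantization alone would cut a float32 index to roughly a quarter of its size (binary quantization goes much further, at more quality cost). A sketch of the idea:

```ts
// Map each float32 dimension to one signed byte: ~4x smaller vectors.
function quantizeInt8(v: Float32Array): { q: Int8Array; scale: number } {
  let maxAbs = 0;
  for (const x of v) maxAbs = Math.max(maxAbs, Math.abs(x));
  const scale = maxAbs / 127 || 1;     // avoid divide-by-zero on all-zero input
  const q = new Int8Array(v.length);
  for (let i = 0; i < v.length; i++) q[i] = Math.round(v[i] / scale);
  return { q, scale };
}

// Dequantize with x ≈ q[i] * scale. Distances on int8 are usually close
// enough for approximate search; re-rank the top hits with full floats.
```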
notsylver commented on OpenFront: Realtime Risk-like multiplayer game in the browser   openfront.io/... · Posted by u/thombles
notsylver · a month ago
It's fun. I think it needs queues for different game modes, because with 150 players you almost always get mobbed by neighbours. Being able to queue for a team game would make it a bit easier to learn, I think.
notsylver commented on GitHub Copilot Coding Agent   github.blog/changelog/202... · Posted by u/net01
nodja · 3 months ago
I wish they'd optimize things before adding more crap that will slow things down even more. The only thing that's fast with Copilot is the autocomplete; it sometimes takes several minutes to make edits on a 100-line file, regardless of the model I pick (some are faster than others). If these models had close to a 100% hit rate this would be somewhat fine, but going back and forth with something that takes this long is not productive. It's literally faster to open Claude/ChatGPT in a new tab, paste the question and code there, and paste the result back into VS Code than to use their ask/edit/agent tools.

I cancelled my Copilot subscription last week, and when it expires in two weeks I'll most likely shift to local models for autocomplete and simple stuff.

notsylver · 3 months ago
I've had this too, especially it getting stuck at the very end and just… never finishing. Once usage-based billing comes into effect I think I'll try Cursor again. What local models are you using? The local models I tried for autocomplete were unusable, though based on aider's benchmark I never really tried the larger models for chat. If I could, I would love to go local-only instead.
notsylver commented on Viral ChatGPT trend is doing 'reverse location search' from photos   techcrunch.com/2025/04/17... · Posted by u/jnord
imposterr · 4 months ago
Hmm, not sure I understand how you made use of OpenAI to guess the location of a photo. Could you expand on that a bit? Thanks!
notsylver · 4 months ago
I showed the model a picture and any text written on that picture, and asked it to guess a latitude/longitude using the tool use API for structured outputs. That was in addition to having it transcribe the handwritten text and extract location names, which was my original goal until I saw how good it was at guessing exact coordinates. It would guess within ~200km on average, even on pictures with no information written on them.
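Roughly like this (model name, tool name, and schema fields here are placeholders, not my exact code):

```ts
import OpenAI from "openai";

const client = new OpenAI();

const res = await client.chat.completions.create({
  model: "gpt-4o", // assumed; any vision-capable model works the same way
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "Transcribe any writing on this photo and guess where it was taken." },
        { type: "image_url", image_url: { url: "data:image/jpeg;base64,..." } }, // scanned photo
      ],
    },
  ],
  tools: [
    {
      type: "function",
      function: {
        name: "guess_location", // hypothetical tool name
        description: "Report the transcription and a best-guess coordinate.",
        parameters: {
          type: "object",
          properties: {
            transcription: { type: "string" },
            latitude: { type: "number" },
            longitude: { type: "number" },
          },
          required: ["transcription", "latitude", "longitude"],
        },
      },
    },
  ],
  // Forcing the tool call guarantees structured, parseable output.
  tool_choice: { type: "function", function: { name: "guess_location" } },
});

const call = res.choices[0].message.tool_calls?.[0];
if (call) console.log(JSON.parse(call.function.arguments));
```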
notsylver commented on Viral ChatGPT trend is doing 'reverse location search' from photos   techcrunch.com/2025/04/17... · Posted by u/jnord
notsylver · 4 months ago
I've been digitising family photos using this. I scanned the photo itself and the text on it, then passed that to an LLM for OCR, using tools to get the caption verbatim, the location mentioned, and the date in a standard format. That was going to be the end of it, but the OpenAI docs https://platform.openai.com/docs/guides/function-calling?lan... suggest letting the model guess coordinates instead of just grabbing names, so I did both, and it was impressive. My favourite was a picture looking out to sea from a pier: it pinpointed the exact pier.
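The tool schema was along these lines (field names and descriptions are illustrative, not the exact ones I used):

```ts
// JSON Schema for the extraction tool: caption verbatim, mentioned
// location, normalized date, plus the coordinate guess the docs suggest.
const photoSchema = {
  type: "object",
  properties: {
    caption: { type: "string", description: "Handwritten caption, verbatim" },
    location: { type: "string", description: "Place name mentioned, if any" },
    date: { type: "string", description: "ISO 8601 date, e.g. 1972-06-01" },
    latitude: { type: "number" },
    longitude: { type: "number" },
  },
  required: ["caption"],
} as const;
```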

notsylver commented on Tailscale is pretty useful   blog.6nok.org/tailscale-i... · Posted by u/thm
jdolak · 6 months ago
On your first point, I've been using Tailscale for a bit, and its ACL feature addresses most of my concerns there. My laptop can SSH into any of my servers but not the other way around, and my servers can't talk to each other unless I set them to.
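A minimal policy along those lines might look like this (the tag name is illustrative; once any ACL is defined, everything not explicitly allowed is denied, which is what keeps servers from reaching the laptop or each other):

```jsonc
// Tailnet policy file (HuJSON) — a sketch, not my literal config.
{
  "tagOwners": {
    "tag:server": ["autogroup:admin"]
  },
  "acls": [
    // Personal devices (the laptop) can reach servers; servers have no
    // rule of their own, so server->laptop and server->server are denied.
    { "action": "accept", "src": ["autogroup:member"], "dst": ["tag:server:*"] }
  ],
  "ssh": [
    { "action": "accept", "src": ["autogroup:member"], "dst": ["tag:server"], "users": ["autogroup:nonroot"] }
  ]
}
```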
notsylver · 6 months ago
Could you share your ACL setup? I haven't had time to look at it much, but this sounds like exactly what I want to do.
notsylver commented on Microsoft begins turning off uBlock Origin and other extensions in Edge   neowin.net/news/microsoft... · Posted by u/thombles
somenameforme · 6 months ago
#1 existed for literally 24 hours and was clearly a stupid idea.

#2 opt-in only.

#3 agreed. annoying. can opt out.

#4 opt-in only.

#5 disagreed.

notsylver · 6 months ago
Opt-in, but AFAIK they still show up in places unless you disable them.
notsylver commented on Zed now predicts your next edit with Zeta, our new open model   zed.dev/blog/edit-predict... · Posted by u/ahamez
coder543 · 6 months ago
Based on the blog post, this appears to be hosted remotely on Baseten. The model just happens to be released openly, so you can also download it, but the blog post doesn't talk about any intention to help you run it locally within the editor. (I agree that would be cool, I'm just commenting on what I see in the article.)

On the other hand, network latency itself isn't really that big of a deal... a more powerful GPU server in the cloud can typically run so much faster that it can make up for the added network latency and then some. Running locally is really about privacy and offline use cases, not performance, in my opinion.

If you want to try local tab completions, the Continue plugin for VSCode is a good way to do that, but the Zeta model is the first open model that I'm aware of that is more advanced than just FIM.
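(FIM here means fill-in-the-middle: the editor asks the model to fill the gap at the cursor. A sketch using StarCoder-style markers, though each model family uses its own tokens:)

```ts
// The editor splits the buffer at the cursor and prompts the model to
// generate the span between prefix and suffix.
const before = "function add(a: number, b: number) {\n  return ";
const after = ";\n}";
const prompt = `<fim_prefix>${before}<fim_suffix>${after}<fim_middle>`;
// The model generates the middle (here, ideally "a + b"). Zeta-style
// edit prediction goes beyond this by conditioning on recent edits
// rather than just the gap.
```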

notsylver · 6 months ago
I'm stuck using somewhat unreliable Starlink to a datacenter ~90ms away, but I can run 7B models fine locally. I agree though, cloud completions aren't unusably slow/unreliable for me; it's mostly about privacy, and it being really fun.

I tried Continue a few times but could never get consistent results; the models were just too dumb. That's why I'm excited about this model: it seems like a better approach to inline completion and might be the first okay enough™ model for me. Either way, I don't think I can replace Copilot until a model can automatically fine-tune itself in the background on the code I've written.

u/notsylver · Karma: 134 · Cake day: November 2, 2019