notsylver commented on Gemini 2.5 Flash Image   developers.googleblog.com... · Posted by u/meetpateltech
Barbing · 2 days ago
Hope it works well for you!

In my eyes, one specific example they show (“Prompt: Restore photo”) deeply AI-ifies the woman’s face. Sure, it’ll improve over time, of course.

notsylver · 2 days ago
I tried a dozen or so images. For some it definitely failed (altering details, leaving damage behind, needing a second attempt to get a better result) but on others it did great. With a human in the loop approving the AI version or marking it for manual correction I think it would save a lot of time.

This is the first image I tried:

https://i.imgur.com/MXgthty.jpeg (before)

https://i.imgur.com/Y5lGcnx.png (after)

Sure, I could manually correct that quite easily and would do a better job, but that image is not important to us; it would just be nicer to have it than not.

I'll probably wait for the next version of this model before committing to doing it, but it's exciting that we're almost there.

notsylver commented on Gemini 2.5 Flash Image   developers.googleblog.com... · Posted by u/meetpateltech
zwog · 2 days ago
Do you happen to know of some software to repair/improve video files? I'm in the process of digitising a couple of Video 2000 and VHS cassettes of childhood memories for my mom, who is starting to suffer from dementia. I have a pretty streamlined setup for digitising the videos, but I'd like to improve the quality a bit.
notsylver · 2 days ago
I didn't do any videos, just pictures, but considering how little I found for pictures, I doubt you'll find much.
notsylver commented on Gemini 2.5 Flash Image   developers.googleblog.com... · Posted by u/meetpateltech
Almondsetat · 2 days ago
All of the defects you have listed can be automatically fixed by using a film scanner with ICE and software that performs the scan and restoration automatically, like VueScan. Feeding hundreds (thousands?) of photos to an experimental proprietary cloud AI that gives you back subpar compressed pictures with who knows how many strange artifacts seems unnecessary.
notsylver · 2 days ago
I scanned everything into 48-bit RAW and treat those as the originals, including the IR scan for ICE and a lower-quality scan of the metadata. The problem is sharing them: important images I repair manually and export as JPEG, which is time-consuming (15-30 minutes per image, and there are about 14,000 total), so for "generic family gathering picture #8228" I would rather let AI repair it, assuming it doesn't butcher faces and other important details. Until then I made a script that exports the raws with basic cropping and colour correction, but it can't fix the colours, which is the biggest issue.
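Something along these lines (a minimal sketch, not the actual script; it assumes 16-bit RGB TIFF scans, and the paths, crop margin, and percentile cutoffs are illustrative):

```python
# Rough sketch of a batch export: fixed crop plus per-channel levels stretch,
# then save as shareable JPEGs. Not the author's actual script; paths and
# numbers are placeholders.
from pathlib import Path

import numpy as np
import tifffile
from PIL import Image

SRC = Path("scans/raw")    # hypothetical input directory of 48-bit TIFF scans
DST = Path("scans/jpeg")   # hypothetical output directory
CROP = 40                  # pixels of scanner border to trim on each side

def stretch_channel(ch: np.ndarray, lo_pct=0.5, hi_pct=99.5) -> np.ndarray:
    """Per-channel percentile stretch: a crude levels/colour correction."""
    lo, hi = np.percentile(ch, [lo_pct, hi_pct])
    ch = np.clip((ch.astype(np.float32) - lo) / max(hi - lo, 1.0), 0.0, 1.0)
    return (ch * 255).astype(np.uint8)

DST.mkdir(parents=True, exist_ok=True)
for tif in sorted(SRC.glob("*.tif")):
    img = tifffile.imread(tif)            # (H, W, 3) uint16
    img = img[CROP:-CROP, CROP:-CROP]     # basic fixed crop
    rgb8 = np.dstack([stretch_channel(img[..., c]) for c in range(3)])
    Image.fromarray(rgb8).save(DST / f"{tif.stem}.jpg", quality=92)
```

A per-channel stretch like this helps with faded contrast, but as noted above it can't undo the kind of colour shifts that still need manual (or AI) correction.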
notsylver commented on Gemini 2.5 Flash Image   developers.googleblog.com... · Posted by u/meetpateltech
notsylver · 2 days ago
I digitised our family photos, but a lot of them had damage (shifted colours, spills, fingerprints on the film, spots) that is difficult to correct across so many images. I've been waiting for image gen to catch up enough to repair them all in bulk without changing details, especially faces. This looks very good at restoring images without altering details or adding them where they are missing, so it might finally be time.
notsylver commented on Vanguard hits new 'bans-per-second' record   playvalorant.com/en-us/ne... · Posted by u/Wingy
notsylver · 3 days ago
If CV cheats are good enough that people are using them (and then getting banned), and other people are willing to pay >$1000 for "undetected" cheats (that still get them banned)... wouldn't it work to build custom hardware that is just a capture card and a USB keyboard+mouse emulator, running one of those CV models and sending the inputs back through a "real" keyboard?
notsylver commented on Node.js is able to execute TypeScript files without additional configuration   nodejs.org/en/blog/releas... · Posted by u/steren
rmonvfer · 11 days ago
I’m not a heavy JS/TS dev, so here’s an honest question: why not use Bun and forget about node? Sure, I understand that not every project is evergreen, but isn’t Bun a much better runtime in general? It supports TS execution from day 1, has much faster dependency resolution, better ergonomics… and I could keep going.

I know I’m just a single data point but I’ve had a lot of success migrating old node projects to bun (in fact I haven’t used node itself since Bun was made public)

Again, I might be saying something terribly stupid because JS/TS isn’t really my turf so please let me know if I’m missing something.

notsylver · 11 days ago
I have tried fully switching to bun repeatedly since it came out, and every time I got 90% of the way there only to hit a problem that couldn't be worked around. Last I tried, I was still stuck on some libraries requiring napi functions that weren't implemented in bun yet, as well as an issue I forget the details of, but it was vaguely something like `opendir` silently ignoring the `recursive` option, which caused a huge headache.

I'm waiting patiently for bun to catch up because I would love to switch, but I don't think it's ready for production use in larger projects yet. Even when things work, a lot of the bun-specific functionality sounds nice at first but feels like an afterthought in practice, and the documentation is far from the quality of node.js's.

notsylver commented on I want everything local – Building my offline AI workspace   instavm.io/blog/building-... · Posted by u/mkagenius
andylizf · 19 days ago
Yeah, that's a fair point at first glance. 50GB might not sound like a huge burden for a modern SSD.

However, the 50GB figure was just a starting point for emails. A true "local Jarvis" would need to index everything: all your code repositories, documents, notes, and chat histories. That raw data can easily be hundreds of gigabytes.

For a 200GB text corpus, a traditional vector index can swell to >500GB. At that point, it's no longer a "meager" requirement. It becomes a heavy "tax" on your primary drive, which is often non-upgradable on modern laptops.

The goal for practical local AI shouldn't just be that it's possible, but that it's also lightweight and sustainable. That's the problem we focused on: making a comprehensive local knowledge base feasible without forcing users to dedicate half their SSD to a single index.

notsylver · 19 days ago
You already need very high-end hardware to run useful local LLMs, so I don't know if a 200GB vector database would be the dealbreaker in that scenario. But I wonder how small you could get it with compression and quantization on top.
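Back-of-envelope (the chunk count and embedding dimension below are assumptions, not numbers from this thread), plain int8 scalar quantization alone would cut the vector portion roughly 4x:

```python
# Rough illustration of how quantization shrinks a vector index.
# Assumed numbers: ~130M chunks from a ~200GB corpus, 1024-dim float32 vectors.
import numpy as np

n_chunks, dim = 130_000_000, 1024
float32_gb = n_chunks * dim * 4 / 1e9   # ~532 GB of raw float32 vectors
int8_gb = n_chunks * dim * 1 / 1e9      # ~133 GB after int8 scalar quantization
print(f"float32: {float32_gb:.0f} GB, int8: {int8_gb:.0f} GB")

def quantize_int8(vecs: np.ndarray):
    """Symmetric per-vector scalar quantization: float32 -> int8 + a scale."""
    scale = np.abs(vecs).max(axis=1, keepdims=True) / 127.0
    return np.round(vecs / scale).astype(np.int8), scale.astype(np.float32)

def dequantize(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    return q.astype(np.float32) * scale

vecs = np.random.randn(1000, dim).astype(np.float32)
q, scale = quantize_int8(vecs)
err = np.abs(dequantize(q, scale) - vecs).mean()
print(f"mean abs reconstruction error: {err:.4f}")
```

Techniques like product quantization push the size down much further, at the cost of some recall.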
notsylver commented on OpenFront: Realtime Risk-like multiplayer game in the browser   openfront.io/... · Posted by u/thombles
notsylver · 2 months ago
It's fun. I think it needs queues for different game modes, because with 150 players you almost always get swarmed by neighbours. Being able to queue for a team game would make it a bit easier to learn, I think.
notsylver commented on GitHub Copilot Coding Agent   github.blog/changelog/202... · Posted by u/net01
nodja · 3 months ago
I wish they optimized things before adding more crap that will slow things down even more. The only thing that's fast with copilot is the autocomplete; it sometimes takes several minutes to make edits on a 100-line file regardless of the model I pick (some are faster than others). If these models had a close to 100% hit rate this would be somewhat fine, but going back and forth with something that takes this long is not productive. It's literally faster to open claude/chatgpt in a new tab, paste the question and code there, and paste the result back into vscode than to use their ask/edit/agent tools.

I cancelled my copilot subscription last week, and when it expires in two weeks I'll most likely shift to local models for autocomplete/simple stuff.

notsylver · 3 months ago
I've had this too, especially it getting stuck at the very end and just... never finishing. Once the usage-based billing comes into effect I think I'll try cursor again. What local models are you using? The local models I tried for autocomplete were unusable, though, based on aider's benchmark, I never really tried with larger models for chat. If I could, I would love to go local-only instead.
notsylver commented on Viral ChatGPT trend is doing 'reverse location search' from photos   techcrunch.com/2025/04/17... · Posted by u/jnord
imposterr · 4 months ago
Hmm, not sure I understand how you made use of OpenAI to guess the location of a photo. Could you expand on that a bit? Thanks!
notsylver · 4 months ago
I showed the model a picture and any text written on that picture and asked it to guess a latitude/longitude using the tool use API for structured outputs. That was in addition to having it transcribe the handwritten text and extract location names, which was my original goal until I saw how good it was at guessing exact coordinates. It would guess within ~200km on average, even on pictures with no information written on them.
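Roughly like this (a minimal sketch, not the original code; the function schema, prompt, and model name are assumptions):

```python
# Sketch of guessing coordinates from a photo via tool use / structured output.
# The schema, prompt, and model name are illustrative assumptions.
import base64, json
from openai import OpenAI

client = OpenAI()

tool = {
    "type": "function",
    "function": {
        "name": "report_location_guess",
        "description": "Report the best-guess location for the photo.",
        "parameters": {
            "type": "object",
            "properties": {
                "latitude": {"type": "number"},
                "longitude": {"type": "number"},
                "transcription": {"type": "string"},
                "place_names": {"type": "array", "items": {"type": "string"}},
            },
            "required": ["latitude", "longitude"],
        },
    },
}

def guess_location(image_path: str) -> dict:
    b64 = base64.b64encode(open(image_path, "rb").read()).decode()
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Transcribe any writing on this photo and guess where it was taken."},
                {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
        tools=[tool],
        tool_choice={"type": "function", "function": {"name": "report_location_guess"}},
    )
    # Forcing the tool call means the reply is always machine-readable JSON.
    return json.loads(resp.choices[0].message.tool_calls[0].function.arguments)

print(guess_location("scan_0001.jpg"))
```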
