Readit News
notsylver commented on AirPods libreated from Apple's ecosystem   github.com/kavishdevar/li... · Posted by u/moonleay
notsylver · a month ago
It seems to let you access head tracking data, so now I'm really curious whether it would be accurate enough to use with games (e.g. Microsoft Flight Simulator/Arma 3/Euro Truck Simulator 2 head tracking). There are probably a lot of other interesting use cases for it too, but I'm stuck with Windows for now, so :(
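For reference, the usual way to get this into those games would be something like opentrack; here's a rough sketch of forwarding the angles, assuming opentrack's "UDP over network" input (six little-endian doubles: x, y, z, yaw, pitch, roll, on port 4242 by default) and with `readHeadOrientation()` standing in for however the library actually exposes the data:

```ts
// Rough sketch: forward AirPods head-tracking angles to opentrack's
// "UDP over network" input, which expects six little-endian doubles
// (x, y, z, yaw, pitch, roll) per packet, on port 4242 by default.
import dgram from "node:dgram";

const socket = dgram.createSocket("udp4");

function sendPose(yaw: number, pitch: number, roll: number): void {
  const packet = Buffer.alloc(6 * 8);
  packet.writeDoubleLE(0, 0);      // x (cm), no positional data here
  packet.writeDoubleLE(0, 8);      // y (cm)
  packet.writeDoubleLE(0, 16);     // z (cm)
  packet.writeDoubleLE(yaw, 24);   // yaw (degrees)
  packet.writeDoubleLE(pitch, 32); // pitch (degrees)
  packet.writeDoubleLE(roll, 40);  // roll (degrees)
  socket.send(packet, 4242, "127.0.0.1");
}

// Hypothetical polling loop; readHeadOrientation() is a placeholder for
// whatever API the project ends up exposing.
// setInterval(() => {
//   const { yaw, pitch, roll } = readHeadOrientation();
//   sendPose(yaw, pitch, roll);
// }, 16);
```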
notsylver commented on BERT is just a single text diffusion step   nathan.rs/posts/roberta-d... · Posted by u/nathan-barry
notsylver · 2 months ago
I've really wanted to fine-tune an inline code completion model to see if I could get at all close to Cursor (I can't, but it would be fun), but as far as I know there are no open text diffusion models to use as a base, and especially none that would be good as one. Hopefully something viable for it comes out soon.
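In the meantime, the post's single-step framing is easy to poke at locally; here's a minimal sketch assuming Transformers.js and its fill-mask pipeline (the model name is just a placeholder), where one masked-denoising pass is the "single diffusion step":

```ts
// Minimal sketch of the "one diffusion step" idea: a BERT-style model
// fills in a masked token in a single denoising pass.
// Assumes the Transformers.js fill-mask pipeline; model name is an example.
import { pipeline } from "@xenova/transformers";

const unmask = await pipeline("fill-mask", "Xenova/bert-base-uncased");

// One denoising step: predict the masked token from bidirectional context.
const predictions = await unmask("function add(a, b) { return a [MASK] b; }");

for (const p of predictions) {
  console.log(p.token_str, p.score.toFixed(3));
}
```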
notsylver commented on Gemini 2.5 Flash Image   developers.googleblog.com... · Posted by u/meetpateltech
Barbing · 4 months ago
Hope it works well for you!

In my eyes, one specific example they show (“Prompt: Restore photo”) deeply AI-ifies the woman’s face. Sure it’ll improve over time of course.

notsylver · 4 months ago
I tried a dozen or so images. For some it definitely failed (altering details, leaving damage behind, needing a second attempt to get a better result), but on others it did great. With a human in the loop approving the AI version or marking it for manual correction, I think it would save a lot of time.

This is the first image I tried:

https://i.imgur.com/MXgthty.jpeg (before)

https://i.imgur.com/Y5lGcnx.png (after)

Sure, I could manually correct that quite easily and would do a better job, but that image is not important to us; it would just be nicer to have it than not.

I'll probably wait for the next version of this model before committing to doing it, but it's exciting that we're almost there.

notsylver commented on Gemini 2.5 Flash Image   developers.googleblog.com... · Posted by u/meetpateltech
zwog · 4 months ago
Do you happen to know of any software to repair/improve video files? I'm in the process of digitizing a couple of Video 2000 and VHS cassettes of childhood memories of my mom, who is starting to suffer from dementia. I have a pretty streamlined setup for digitizing the videos, but I'd like to improve the quality a bit.
notsylver · 4 months ago
I didn't do any videos, just pictures, but considering how little I found for pictures, I doubt you'll find much.
notsylver commented on Gemini 2.5 Flash Image   developers.googleblog.com... · Posted by u/meetpateltech
Almondsetat · 4 months ago
All of the defects you have listed can be automatically fixed by using a film scanner with ICE and software that automatically performs the scan and the restoration, like VueScan. Feeding hundreds (thousands?) of photos to an experimental proprietary cloud AI that will give you back subpar compressed pictures with who knows how many strange artifacts seems unnecessary.
notsylver · 4 months ago
I scanned everything into 48-bit RAW and treat those as the originals, including the IR scan for ICE and a lower-quality scan of the metadata. The problem is sharing them: important images I repair manually and export as JPEG, which is time-consuming (15-30 minutes per image, and there are about 14,000 total), so for "generic family gathering picture #8228" I would rather let AI repair it, assuming it doesn't butcher faces and other important details. Until then I've made a script that exports the RAWs with basic cropping and colour correction, but it can't fix the colours, which is the biggest issue.
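The export script is roughly this shape, if it's useful to anyone; a simplified sketch assuming ImageMagick is installed and the scans are in a format it can read directly (e.g. 16-bit-per-channel TIFF), with the crop geometry and paths as placeholders:

```ts
// Simplified sketch of the batch export: rough auto level/gamma correction
// and a fixed crop, then JPEG output. Assumes ImageMagick 7 ("magick") is
// installed; crop geometry and paths are placeholders.
import { execFile } from "node:child_process";
import { promisify } from "node:util";
import { readdir } from "node:fs/promises";
import path from "node:path";

const run = promisify(execFile);

async function exportScan(input: string, output: string): Promise<void> {
  await run("magick", [
    input,
    "-auto-level",           // stretch each channel to full range
    "-auto-gamma",           // rough midtone correction
    "-gravity", "center",
    "-crop", "95%x95%+0+0",  // trim scanner borders (placeholder)
    "+repage",
    "-quality", "92",
    output,
  ]);
}

const srcDir = "scans";   // placeholder paths
const outDir = "exports";

for (const file of await readdir(srcDir)) {
  if (!/\.tiff?$/i.test(file)) continue;
  const out = path.join(outDir, file.replace(/\.tiff?$/i, ".jpg"));
  await exportScan(path.join(srcDir, file), out);
}
```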
notsylver commented on Gemini 2.5 Flash Image   developers.googleblog.com... · Posted by u/meetpateltech
notsylver · 4 months ago
I digitised our family photos, but a lot of them had damage (shifted colours, spills, fingerprints on the film, spots) that is difficult to correct across so many images. I've been waiting for image gen to catch up enough to repair them all in bulk without changing details, especially faces. This looks very good at restoring images without altering details or adding them where they are missing, so it might finally be time.
notsylver commented on Vanguard hits new 'bans-per-second' record   playvalorant.com/en-us/ne... · Posted by u/Wingy
notsylver · 4 months ago
If CV cheats are good enough that people are using them (and then getting banned), and other people are willing to pay >$1000 for "undetected" cheats (that still get them banned)... wouldn't custom hardware that is just a capture card plus a USB keyboard+mouse emulator, running one of those CV models and sending the inputs back over a "real" keyboard, work?
notsylver commented on Node.js is able to execute TypeScript files without additional configuration   nodejs.org/en/blog/releas... · Posted by u/steren
rmonvfer · 4 months ago
I’m not a heavy JS/TS dev, so here’s an honest question: why not use Bun and forget about Node? Sure, I understand that not every project is evergreen, but isn’t Bun a much better runtime in general? It supports TS execution from day 1, has much faster dependency resolution, better ergonomics… and I could keep going.

I know I’m just a single data point, but I’ve had a lot of success migrating old Node projects to Bun (in fact, I haven’t used Node itself since Bun was made public).

Again, I might be saying something terribly stupid because JS/TS isn’t really my turf so please let me know if I’m missing something.

notsylver · 4 months ago
I have tried fully switching to Bun repeatedly since it came out, and every time I got 90% of the way there only to hit a problem that couldn't be worked around. Last time I tried, I was still stuck on some libraries requiring napi functions that weren't implemented in Bun yet, as well as an issue I forget the details of, but it was vaguely something like `opendir` silently ignoring the `recursive` option, which caused a huge headache (small example of the API below).

I'm waiting patiently for Bun to catch up because I would love to switch, but I don't think it's ready for production use in larger projects yet. Even when things work, a lot of the Bun-specific functionality sounds nice at first but feels like an afterthought in practice, and the documentation is far from the quality of Node.js's.
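For context, the `opendir` pattern in question is just this (the `recursive` option landed in Node 20.1; the path is a placeholder):

```ts
// The Node API in question: fs.promises.opendir with { recursive: true }
// should yield entries from nested directories too; the issue was the
// option being accepted but apparently ignored, so only top-level
// entries came back.
import { opendir } from "node:fs/promises";

const dir = await opendir("./src", { recursive: true }); // placeholder path

for await (const entry of dir) {
  console.log(entry.name);
}
```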

notsylver commented on I want everything local – Building my offline AI workspace   instavm.io/blog/building-... · Posted by u/mkagenius
andylizf · 5 months ago
Yeah, that's a fair point at first glance. 50GB might not sound like a huge burden for a modern SSD.

However, the 50GB figure was just a starting point for emails. A true "local Jarvis" would need to index everything: all your code repositories, documents, notes, and chat histories. That raw data can easily be hundreds of gigabytes.

For a 200GB text corpus, a traditional vector index can swell to >500GB. At that point, it's no longer a "meager" requirement. It becomes a heavy "tax" on your primary drive, which is often non-upgradable on modern laptops.

The goal for practical local AI shouldn't just be that it's possible, but that it's also lightweight and sustainable. That's the problem we focused on: making a comprehensive local knowledge base feasible without forcing users to dedicate half their SSD to a single index.

notsylver · 5 months ago
You already need very high-end hardware to run useful local LLMs, so I don't know if a 200GB vector database would be the dealbreaker in that scenario. But I wonder how small you could get it with compression and quantization on top.
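Rough numbers, just to see where quantization could land; every figure here (chunk size, embedding dimensions, bytes per code) is an assumption:

```ts
// Back-of-the-envelope index sizing; all numbers are assumptions.
const corpusBytes = 200 * 1024 ** 3;     // 200 GB of raw text
const chunkBytes = 1024;                 // ~1 KB per chunk
const chunks = corpusBytes / chunkBytes; // ~210M vectors

const dims = 1024;                       // embedding dimensions

const float32 = chunks * dims * 4;       // full-precision embeddings
const int8 = chunks * dims;              // scalar quantization, 1 byte/dim
const pq64 = chunks * 64;                // product quantization, 64 bytes/vector

const gb = (b: number) => (b / 1024 ** 3).toFixed(0) + " GB";
console.log({ float32: gb(float32), int8: gb(int8), pq64: gb(pq64) });
// Roughly 800 GB, 200 GB, and 13 GB respectively, before any
// graph/metadata overhead from the index structure itself.
```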
