Readit News
jeremyscanvic commented on SSH Secret Menu   twitter.com/rebane2001/st... · Posted by u/piccirello
halapro · 3 days ago
I was never able to properly parse large man pages, so I'm happy that LLMs can now prepare a usable command without me spending an hour reading a man page that doesn't contain a single usage example.
jeremyscanvic · 3 days ago
What I usually do when I have to read large man pages like bash(1) is I read them as PDFs:

man -Tpdf bash | zathura -

Replace zathura with any PDF viewer reading from stdin or just save the PDF. Hope that can be useful to someone!

jeremyscanvic commented on Seed of Might color correction process (2023) [pdf]   andrewvanner.github.io/so... · Posted by u/haunter
virtualritz · 12 days ago
I just skimmed this, but if you try any sort of color correction in a non-linear color space, e.g. display-transformed sRGB, you're in for a world of pain. What R, G & B mean must be known exactly, otherwise you may as well be rolling dice.

G = 1.0 has a completely different meaning in sRGB, ACEScg or Adobe ProPhoto. And even what 'white' means depends on the color space you work in.

I started in commercials and VFX in the '90s, when almost all the places I worked at had a poor, at best incomplete, understanding of color science.

Almost all rendering, grading, etc. was done in the aforementioned (display-transformed) sRGB space.

So while there is the aspect of how something should look which I understand is a huge part of what this PDF is about, there is also the part of how to attain that look, once you know what it should be.

jeremyscanvic · 11 days ago
For those interested, you can also look up opto-electronic transfer functions (OETF) and electro-optical transfer functions (EOTF).
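As a concrete example of such a pair, here is a sketch of the sRGB transfer functions (the piecewise curves from IEC 61966-2-1): the EOTF decodes an encoded value to linear light, and its inverse encodes linear light back.

```python
def srgb_eotf(v: float) -> float:
    """EOTF: decode an sRGB-encoded value in [0, 1] to linear light."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

def srgb_oetf(l: float) -> float:
    """Inverse: encode linear light in [0, 1] to an sRGB value."""
    if l <= 0.0031308:
        return l * 12.92
    return 1.055 * l ** (1 / 2.4) - 0.055
```

The small linear segment near zero is what distinguishes sRGB from a plain 2.2 gamma curve; doing color math on encoded values instead of linear ones is exactly the "world of pain" described above.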
jeremyscanvic commented on Use the Mikado Method to do safe changes in a complex codebase   understandlegacycode.com/... · Posted by u/foenix
jeremyscanvic · 12 days ago
Is it possible in practice to control the side effects of making changes in a huge legacy code base?

Maybe the software crashes when you write 42 in some field and you're able to tell it's due to a missing division-by-zero check deep down in the code base. Your gut tells you you should add the check but who knows if something relies on this bug somehow, plus you've never heard of anyone having issues with values other than 42.

At this point you decide to hard code the behavior you want for the value 42 specifically. It's nasty and it only makes the code base more complex, but at least you're not breaking anything.

Does anyone have experience with this mindset of embracing the mess?
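A minimal sketch of the workaround described above, with made-up names: the legacy function divides by something that hits zero for the input 42, and instead of fixing it (which other callers might rely on), the caller special-cases that one value.

```python
def legacy_scale(value: int) -> float:
    # Deep in the legacy code base: divides with no zero check.
    # Crashes exactly when value == 42.
    return 100.0 / (value - 42)

def save_field(value: int) -> float:
    # Workaround: hard-code the behavior for the one crashing input
    # rather than add a check inside legacy_scale that something
    # else might depend on. The fallback value 0.0 is arbitrary.
    if value == 42:
        return 0.0
    return legacy_scale(value)
```

It keeps the blast radius at one call site, at the cost of one more special case future readers have to puzzle over.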

jeremyscanvic commented on HackMyClaw   hackmyclaw.com/... · Posted by u/hentrep
eric-burel · 25 days ago
I've been working on making the "lethal trifecta" concept more popular in France. We should dedicate a statue to Simon Willison: this security vulnerability is kinda obvious if you know a bit about AI agents, but actually naming it is incredibly helpful for spreading knowledge. Reading the sentence "// indirect prompt injection via email" makes me so happy here, people may finally get it for good.
jeremyscanvic · 24 days ago
How would you refer to it in French out of genuine curiosity?
jeremyscanvic commented on It's all a blur   lcamtuf.substack.com/p/it... · Posted by u/zdw
dangond · a month ago
I believe diffusion image models learn to model a reverse-noising function, rather than reverse-blurring.
jeremyscanvic · a month ago
Most of them do, but it's not mandatory, and deblurring can be used instead [1].

[1] Cold Diffusion: Inverting Arbitrary Image Transforms Without Noise, Bansal et al., NeurIPS 2023

jeremyscanvic commented on It's all a blur   lcamtuf.substack.com/p/it... · Posted by u/zdw
criddell · a month ago
Isn't that roughly (ok, very roughly) how generative diffusion AIs work when you ask them to make an image?
jeremyscanvic · a month ago
You're absolutely right! Diffusion models basically invert noise (random Gaussian samples that you add independently to every pixel) but they can also work with blur instead of noise.

Generally, when you're dealing with a blurry image, you can reduce the strength of the blur up to a point, but there's always some amount of information that's impossible to recover. At that point you have two choices: either you leave it a bit blurry and call it a day, or you introduce (hallucinate) information that's not actually in the image. Diffusion models generate images by hallucinating information at every stage so that the result is crisp at the end, but in many deblurring applications you'd rather stay faithful to what's actually there and leave the tiny amount of remaining blur.

jeremyscanvic commented on It's all a blur   lcamtuf.substack.com/p/it... · Posted by u/zdw
dsego · a month ago
Can this be applied to camera shutter/motion blur? At low shutter speeds, the slight shake of the camera produces this type of blur. This is usually resolved with IBIS, which stabilizes the sensor.
jeremyscanvic · a month ago
The missing piece of the puzzle is determining the blur kernel from the blurry image itself. There's a whole body of literature on that, called blind deblurring.

For instance: https://deepinv.github.io/deepinv/auto_examples/blind-invers...

jeremyscanvic commented on It's all a blur   lcamtuf.substack.com/p/it... · Posted by u/zdw
jeremyscanvic · a month ago
Blur is, perhaps surprisingly, one of the degradations we know best how to undo. It's been studied extensively because there are just so many applications: microscopes, telescopes, digital cameras. The usual tricks revolve around inverting blur kernels, and making educated guesses about what the blur kernel and the underlying image might look like. My advisors and I were even able to train deep neural networks using only blurry images, under a really mild assumption of approximate scale-invariance at the training dataset level [1].

[1] https://ieeexplore.ieee.org/document/11370202
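To make "inverting the blur kernel" concrete, here is a hypothetical sketch (not the method from the paper) of a Wiener-style regularized inverse in the Fourier domain, on a 1-D signal with a known Gaussian kernel:

```python
import numpy as np

n = 64
x = np.zeros(n)
x[20] = 1.0  # sharp spike to be blurred

# Circular Gaussian blur kernel, normalized and centered at index 0.
k = np.exp(-0.5 * (np.arange(n) - n // 2) ** 2 / 2.0)
k /= k.sum()
k = np.roll(k, -(n // 2))

H = np.fft.fft(k)
y = np.real(np.fft.ifft(np.fft.fft(x) * H))  # blurry observation

# Regularized inverse filter: conj(H) / (|H|^2 + eps).
# The eps term keeps frequencies where H is tiny from blowing up.
eps = 1e-6
x_hat = np.real(np.fft.ifft(np.fft.fft(y) * np.conj(H) / (np.abs(H) ** 2 + eps)))
```

The blur kills high frequencies almost completely, so the naive inverse 1/H would amplify anything living there (noise, rounding error) enormously; the regularizer simply gives up on those frequencies, which is the "information that's impossible to recover" mentioned in the other thread. Blind deblurring adds the harder step of estimating `H` too.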

jeremyscanvic commented on Iterative image reconstruction using random cubic bézier strokes   tangled.org/luthenwald.tn... · Posted by u/luthenwald
ronsor · 2 months ago
Oklab is a great color space that does what you expect [0] much better than HSL/HSV. The better a color space matches human perception, the easier it is to perform certain processing operations, such as converting to grayscale while preserving the perceived brightness.

[0] https://bottosson.github.io/posts/oklab/
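As an illustration of that grayscale point, here is a sketch computing Oklab's L (perceived lightness) from linear sRGB, using the matrices from the reference post linked above:

```python
def oklab_lightness(r: float, g: float, b: float) -> float:
    """Perceived lightness L of a linear-sRGB color, per the Oklab post."""
    # Linear sRGB -> LMS-like cone response
    l = 0.4122214708 * r + 0.5363325363 * g + 0.0514459929 * b
    m = 0.2119034982 * r + 0.6806995451 * g + 0.1073969566 * b
    s = 0.0883024619 * r + 0.2817188376 * g + 0.6299787005 * b
    # Non-linearity (cube root), then the L row of the second matrix
    l_, m_, s_ = l ** (1 / 3), m ** (1 / 3), s ** (1 / 3)
    return 0.2104542553 * l_ + 0.7936177850 * m_ - 0.0040720468 * s_
```

Unlike the L of HSL, which treats pure green and pure blue as equally light, this L reflects that green looks much brighter to us than blue.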

jeremyscanvic · 2 months ago
Thanks!
jeremyscanvic commented on Iterative image reconstruction using random cubic bézier strokes   tangled.org/luthenwald.tn... · Posted by u/luthenwald
jeremyscanvic · 2 months ago
Really cool! Any specific reason for the choice of Oklab instead of, say, HSL/HSV?
