Readit News
sheepshear commented on WebP is so great except it's not (2021)   eng.aurelienpierre.com/20... · Posted by u/enz
urbandw311er · 2 years ago
So here’s what I don’t get about this post:

> this is WebP re-encoding of an already lossy compressed JPEG

Author is clearly passionate about imagery and quality, so why are they not re-encoding using the original file rather than a lossy copy?

sheepshear · 2 years ago
> So, I wondered how bad it was for actual raw photos encoded straight in darktable. Meaning just one step of encoding.
sheepshear commented on An interactive guide to the Fourier transform (2012)   betterexplained.com/artic... · Posted by u/uticus
chasil · 2 years ago
It would also be nice to have an understanding of its relation to the Laplace transform, something more than saying that the real component goes to zero.
sheepshear · 2 years ago
Maybe this? https://youtu.be/iP4fckfDNK8

It shows how the FT is a 2D slice of the 3D LT in the s-domain.
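For reference, that slice can be written down directly (a standard identity, not from the video itself, assuming a causal signal whose region of convergence includes the imaginary axis):

```latex
\mathcal{L}\{f\}(s) = \int_{0}^{\infty} f(t)\, e^{-st}\, dt, \qquad s = \sigma + i\omega
```

```latex
F(\omega) = \mathcal{L}\{f\}(s)\big|_{\sigma = 0} = \int_{0}^{\infty} f(t)\, e^{-i\omega t}\, dt
```

That is, the Fourier transform is the σ = 0 slice of the Laplace transform surface over the s-plane, which is exactly what "the real component goes to zero" refers to.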

sheepshear commented on Cruise slashes 24% of self-driving car workforce in sweeping layoffs   techcrunch.com/2023/12/14... · Posted by u/plasticchris
londons_explore · 2 years ago
Let's imagine there is a stop sign that is half hidden by a bush and pretty faded, so the stop sign detection logic says "only 5% chance it's a stop sign". That in turn isn't enough to make the car stop.

The hardcoding approach says "This is a sign. 100% sure.".

The vector approach says "There's a hard-to-see stop sign around here, boost up the probability of anything stop-sign-ish a bunch".

The difference functionally is that nothing in the real world is ever 100% certain. So you should never tell any bayesian machine (which a neural network effectively is) that anything is 100% true.

The vector approach I outlined is far more general than that, though: it allows any behaviour of the car to be tweaked automatically or manually. Location-specific vectors can be learned from data and/or put in by operatives. The way the neural net trains, the meaning of a vector could 'evolve' too. For example, whenever a human puts in that there is a hidden stop sign, the neural net might learn that this also means other human drivers might occasionally fail to see the sign and stop in those locations. Even though it has never witnessed a human failing to stop in this specific location, it has learned that this is part of the meaning of the vector.
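A minimal sketch of the "boost the probability" idea above, as a plain Bayesian update with a location-specific prior (all numbers and names here are invented for illustration, not anyone's actual pipeline):

```python
# Hypothetical sketch (numbers invented): combine a detector's score with a
# location-specific prior via Bayes' rule, instead of hard-coding "100% sure".

def posterior_stop_sign(detector_score: float, location_prior: float,
                        false_positive_rate: float = 0.02) -> float:
    """Posterior P(sign present | detection evidence).

    detector_score:       P(evidence | sign present), in [0, 1]
    location_prior:       P(sign present here), e.g. from map data, in [0, 1]
    false_positive_rate:  P(evidence | no sign), an assumed constant
    """
    num = detector_score * location_prior
    den = num + false_positive_rate * (1.0 - location_prior)
    return num / den if den > 0.0 else 0.0

# A faded sign the detector rates at only 5%:
weak = posterior_stop_sign(0.05, 0.01)     # generic prior: posterior stays low
boosted = posterior_stop_sign(0.05, 0.60)  # "hidden sign here" prior: boosted
```

With the generic prior the same 5% detection yields a posterior of only a few percent, while the "hidden sign here" prior lifts it to roughly 79% — and no value is ever pinned to exactly 100%, which is the point of the comment above.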

sheepshear · 2 years ago
I'm saying it's very likely that the business has exactly zero tolerance for the detector missing those signs while there are known, unresolved issues. They can't have something happen when they "knew it could happen".

After "resolving" the issue, then they can resume tolerating the normal probabilities.

In the US, it might be more of a precaution. In Europe, it could be more of a legal obligation. Either way, I doubt engineering has a say in the matter.

sheepshear commented on Cruise slashes 24% of self-driving car workforce in sweeping layoffs   techcrunch.com/2023/12/14... · Posted by u/plasticchris
londons_explore · 2 years ago
I also see questionable engineering decisions. For example, some reverse engineering showed that there was an overlay map of problematic stop signs and road signals that was downloaded per-road on all navigation routes. Surprise surprise, as that data got out of date, the car would randomly slam the brakes on as the overlay data told it there must be a stop sign, when in fact it was a temporary sign from construction months ago.

If you're aiming for end-to-end neural networks, you shouldn't have hard overlay facts like this. Ideally you get all the behaviour you need from curating training data, but if you can't do that the most you should have location specific info-vectors that nudge the network into making the right decision if needed. Such info-vectors can be used for all kinds of things, like "this city has aggressive drivers", "this street has a blind corner", "there's a dog at this house who loves to run under wheels", "this country drives on the other side of the road", etc.
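The info-vector idea could be sketched as a lookup-plus-concatenation step feeding the network; everything here (names, table, vector values) is hypothetical, not Cruise's actual architecture:

```python
# Hypothetical sketch: a per-location "info-vector" (learned from data or
# entered by an operator) appended to perception features, nudging the
# downstream network instead of hard-coding overlay facts.

def policy_input(perception_features, location_id, location_table,
                 neutral=(0.0, 0.0)):
    """Append the location's info-vector; unknown locations get a neutral one."""
    info_vector = location_table.get(location_id, neutral)
    return list(perception_features) + list(info_vector)

# Operator-entered vector for a street with a hidden stop sign (made-up values):
table = {"main_st": [1.0, 0.0]}

known = policy_input([0.1, 0.9], "main_st", table)   # nudged by the vector
unknown = policy_input([0.1, 0.9], "elm_st", table)  # falls back to neutral
```

The neutral fallback matters: a stale or missing entry degrades to ordinary behaviour rather than slamming the brakes, which is the failure mode described above for hard overlay facts.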

sheepshear · 2 years ago
If the list is of objects they can't yet reliably detect, then how would they implement your suggestion to detect those objects? I'm sure they have a lower tolerance for false negatives, so of course the known problems must be hard-coded.
sheepshear commented on Stop Hiding the Sharp Knives: The WebAssembly Linux Interface   arxiv.org/abs/2312.03858... · Posted by u/yurivish
pxeger1 · 2 years ago
It’s not clear to me how this is especially novel. Why is it a paper and not just like a GitHub repo or something?
sheepshear · 2 years ago
Papers are for resumes and funding. Novelty is something the journal or investor could define and require, if they wanted it.
sheepshear commented on Oldest Fortresses in the World Discovered   phys.org/news/2023-12-old... · Posted by u/wglb
drewcoo · 2 years ago
> The Siberian findings, along with other global examples like Gobekli Tepe in Anatolia, contribute to a broader reassessment of evolutionist notions that suggest a linear development of societies from simple to complex.

WARNING: Crazy train off the tracks!

Evolution doesn't pretend to tell us about the social sciences.

The creationists who call other people "evolutionists" would like us to believe that strawman so that evolution can be wrong.

sheepshear · 2 years ago
It should be obvious what "evolutionist" means considering that it's explained in the sentence you quoted starting immediately after the word. If anything, it seems like you might actually want creationists to own the word. Crazy warning, indeed.
sheepshear commented on Low-frequency sound can reveal that a tornado is on its way   bbc.com/future/article/20... · Posted by u/billybuckwheat
dylan604 · 2 years ago
"They rarely have much warning, but it is often enough to save lives."

I really wish this trope would go away. If you live in an area prone to tornadoes and you "have no warning", then you're just not paying attention. We know tornadoes exist. We know where they tend to frequently occur. The local weather stations in those areas are pretty damn good with warnings. We know days ahead of time that the conditions will be right for potential activity. We can now see potential tornadoes before they are formed. We can track their paths with neighborhood cross street precision.

Nevermind the fact that there's a pretty good indicator when the sky turns dark and the weather changes. Thunder and lightning and wind are essentially the knocking on the door. It's not like it's a sunny day and a tornado just pops out of the sky to say hello.

To say no warning just means they are not paying attention. I don't know what the tornado activity is like where the BBC is from, but it is woefully out of date.

sheepshear · 2 years ago
Even though the article used a trope as a line to set up the story, it doesn't propagate that trope.
sheepshear commented on Microsoft is looking at next-generation nuclear reactors   theverge.com/2023/9/26/23... · Posted by u/moneil971
pfdietz · 2 years ago
Nuclear fans grossly overstate the cost of dealing with renewable intermittency, especially if the load has dispatchability. Servers don't necessarily have to run 24/7.
sheepshear · 2 years ago
What do you think that will accomplish? We already schedule industrial loads for times of low demand from residential and commercial. Rescheduling those loads to coincide with high intermittent generation usually requires more transmission and distribution capacity.

Debating total cost depends almost entirely on the particular grid in question. For all we know, those "nuclear fans'" estimates could be spot-on but completely irrelevant to you.

sheepshear commented on NASA says SpaceX’s next Starship flight could test refueling tech   arstechnica.com/space/202... · Posted by u/_Microft
gpm · 2 years ago
I don't have any problem with them adopting a test flight heavy methodology, however it is extremely misleading to suggest that this is the methodology which gave them the Falcon 9.

The Falcon 9 succeeded on its first launch; it was not a test-flight-heavy program.

The Falcon 1, SpaceX's only orbital rocket prior to the Falcon 9, did fail on its first three flights; however, it was never intended to, and that nearly bankrupted the company. They destroyed third-party payloads on flights 1 and 3, because those were intended to work, not be test flights.

Long after it was in production, the Falcon 9 did begin a test-flight-heavy program, but that was for developing the ability to recover boosters after they had successfully delivered a customer payload on an operational flight, not for developing the rocket itself.

sheepshear · 2 years ago
All of the Falcon 1 payloads were expendable (which includes low-priority). It was a test vehicle conducting test flights in all but name. Had they continued developing it instead of pivoting to Falcon 9, it would have been its own test vehicle.

Suppose that the eventual production vehicle has a fairly different design, a different name, and it succeeds on its first flight. Would it then have used the same methodology that resulted in Falcon 9?

I'm only asking rhetorically. The Ship of Theseus is an interesting philosophical question, but the way you define "methodology" shouldn't depend on your answer.

u/sheepshear

Karma: 149 · Cake day: July 24, 2023