woolion commented on Achieving 10,000x training data reduction with high-fidelity labels   research.google/blog/achi... · Posted by u/badmonster
NooneAtAll3 · 18 days ago
didn't the parent comment cite the sentence about clickbait?

why did you change the subject to scams?

woolion · 18 days ago
Parent says it's an outlandish claim that they can reliably tell whether ads are clickbait.

I believe that detecting whether an ad is clickbait is a similar problem -- not exactly the same, but it suffers from the same issues:

- it's not well defined at all.

- any heuristic is constantly gamed by bad actors

- it requires a deeper, contextual analysis of the content that is served

- content analysis requires a notion of what is reputable or reasonable

If I take an LLM's definition of "clickbait", I get "sensationalized, misleading, or exaggerated headlines"; so scams would be a subset of it (it is misleading content that you need to click through). They do not provide their definition though.

So you have Google products (both the Products search and the general search) that recommend scams at an incredible rate, where the stakes are much higher. Is it reasonable to believe that they're able to solve the general problem? How can anyone verify such a claim, or trust it?

woolion commented on Achieving 10,000x training data reduction with high-fidelity labels   research.google/blog/achi... · Posted by u/badmonster
ericyd · 19 days ago
> in production traffic only very few (<1%) ads are actually clickbait

That's a fascinating claim, and it does not align with my anecdotal experience using the web for many years.

woolion · 18 days ago
In the last 6 months, I've had to buy a few things that 'normal people' tend to buy (a coffee machine, fuel, ...), for which we didn't already have trusted sellers, and so checked Google.

For fuel, Google results were 90% scams; for coffee machines, closer to 75%. The scams are fairly elaborate: they clone some legitimate-looking sites, then offer prices that are very competitive -- between 50% and 75% of market prices -- which puts them at the top of the search rankings. It's only by looking in detail at the contact information that some things look off (one common tell is that they may encourage bank transfers, since there's no buyer protection there, but that's not always the case).

75% of market rate is not a crazy "too good to be true" offer; it's in the realm of what a legitimate business could do, and with the prices of the items being in the 1000s, any hooked victim is a good catch. A particular example was a website copying that of a massive discount appliance store chain in the Netherlands. They had a close domain name, so even though the website looked different, any Google search tied it to the legitimate business.

You really have to apply a high level of scrutiny, or understand that Google is basically a scam registry.

woolion commented on Does the Bitter Lesson Have Limits?   dbreunig.com/2025/08/01/d... · Posted by u/dbreunig
woolion · 24 days ago
I believe the main problem could be reframed as an improper use of analogies. People are pushing the analogy between "artificial intelligence" and the "brain", etc., creating a confusion that leads to such "laws". What we have is a situation similar to birds and planes: they do not operate under the same principles at all.

Looking at the original claim, we could take from birds a number of optimizations regarding air flow that are far beyond what any plane can do. But the impact that could be transferred to planes would be minimal compared to a boost in engine technology, which is not surprising, since the ways the two systems achieve "flight" are completely different.

I don't believe such discourse would happen at all if it were just considered a collection of techniques, of different categories with their own strengths and weaknesses, used to tackle problems.

Like all fake "laws", it is based on a general idea that is devoid of any time-frame prediction that would make it falsifiable. "In the short term" is beaten by "in the long run". But how far away is "the long run"? This is like the "mean reversion law", which says that prices will "eventually" go back to their equilibrium; will you survive until "eventually" without going bankrupt?

woolion commented on I know when you're vibe coding   alexkondov.com/i-know-whe... · Posted by u/thunderbong
palata · a month ago
To echo the article, I don't want to know it was written with an AI. Just like I don't want to see that it was obviously copy-pasted from StackOverflow.

The developer can do whatever they want, but at the end, what I review is their code. If that code is bad, it is the developer's responsibility. No amount of "the agent did it" matters to me. If the code written by the agent requires heavy refactoring, then the developer has to do it, period.

woolion · a month ago
100% agree.

However, you'll probably get an angry answer that management, or something of the sort, is to blame (because there isn't enough time). Responsibility would have to be taken earlier, by pushing back if some objectives truly are not reasonable.

woolion commented on I know when you're vibe coding   alexkondov.com/i-know-whe... · Posted by u/thunderbong
philipp-gayret · a month ago
All major AI assistants already come with ways to not have any of these issues.

Claude Code has /init, Cursor comes with /Generate Cursor Rules, and so on. It's not even context engineering: there are out-of-the-box tools you can use to keep this from happening. And even if it does happen, you can make it never happen again, with these same tools, for your entire organization, if you have invested the time to learn how to use them.

It is interesting how these tools split up the development community.

woolion · a month ago
Serious question: I'm currently re-evaluating whether Cursor can speed up my daily work. Currently it doesn't, because of the many subtle errors (like switching a ":" for a ","). But the main problem I face is that the code base is big, with entirely outdated parts and poorly coded ones. So the AI favors the most common patterns, which are the bad ones. Even with explicit instructions like "take inspiration from <part of the code that is very similar and well-written>", it still mostly imitates the overall codebase (which, by the way, was worsened by a big chunk of vibe-coded output that was hastily merged). My understanding is that a rule should do essentially the same thing as putting it in the prompt directly. Is there a solution to that?
woolion commented on I know when you're vibe coding   alexkondov.com/i-know-whe... · Posted by u/thunderbong
palata · a month ago
A risk with vibe coding is that it may make a good developer slightly faster, but it will make bad developers waaaay faster. Resulting in more bad code being produced.

The question then is: do the bad developers improve by vibe coding, or are they stuck in a local optimum?

woolion · a month ago
So, I was wondering when I would see that... from my experience, I would say it also turns mediocre developers into bad ones very fast. Partly because of a false sense of confidence, but mostly because of the sheer volume of code that is produced.

If we want to be more precise, I think the main issue is that AI-generated code lacks a clear architecture. It has no (or very little) respect for overall information flow or the single-responsibility principle.

The AI wants you to have "safe" code, so it will catch errors and return non-results instead. In practice, that means the calling code has to inspect the result to see whether it's a placeholder, instead of being confident that an exception would have been raised otherwise.
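A minimal sketch of the pattern, in Python with hypothetical names (fetch_user and the users dict are mine, not from any real codebase):

    users = {1: {"name": "Ada"}}

    # AI-suggested "safe" version: swallows the failure and returns a placeholder,
    # so every caller now has to check for an empty dict.
    def fetch_user_safe(user_id):
        user = users.get(user_id)
        if user is None:
            return {}
        return user

    # What I'd rather see: fail loudly, so a returned value can always be trusted.
    def fetch_user(user_id):
        return users[user_id]  # raises KeyError if the user does not exist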

Similarly, to avoid problems the AI might tweak some parameter. If, for example, you were to design a program to process something with AI, you might structure it as gather_parameters -> call -> process_results. Call should not try to do funky things with parameters, because any problem should have been fixed at the gathering step. But locally the AI is always going to suggest a bunch of "if this parameter is not good, swap it silently so the call can go through anyway".
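To make it concrete, a small sketch (again hypothetical; the stage names are just the ones from the paragraph above):

    def gather_parameters(raw):
        # Validation belongs here, where a bad input can be rejected loudly.
        if raw["temperature"] < 0:
            raise ValueError("temperature must be >= 0")
        return raw

    def call(params):
        # The AI's typical local "fix" would be something like:
        #   if params["temperature"] < 0:
        #       params["temperature"] = 0   # swapped silently, the bug is hidden
        # Instead, call should trust that gathering already validated the input.
        return {"temperature_used": params["temperature"]}

    def process_results(results):
        return results["temperature_used"]

    print(process_results(call(gather_parameters({"temperature": 0.7}))))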

Then tests are such a problem it would require an even longer explanation...

woolion commented on The new literalism plaguing today’s movies   newyorker.com/culture/cri... · Posted by u/frogulis
Duanemclemore · a month ago
I don't know if calling it a "New Literalism" is helpful. I just don't know that a penchant for literalism ever went away.

Now, what IS relatively new is the "ruined punchline" phenomenon that they identify (without naming it) on the movie recap podcast Kill James Bond, which is that contemporary movies always ruin jokes by telling one, say "x", and then having another character chime in with "Did you just say 'x'!?"

I think there's a fear of losing the audience's attention, because inviting them to -think- about a movie means asking them to focus on something other than the eyewash happening right in front of them.

Anyway, to close: "No one in this world ... has ever lost money by underestimating the intelligence of the great masses of the plain people..."

- HL Mencken

woolion · a month ago
>No one in this world ... has ever lost money by underestimating the intelligence of the great masses of the plain people...

I think you're disproving your own point. If you look at the major flops in all industries (video games, movies, ...), the general trend is contempt for the audience. This generally results in some form of uproar from the most involved fans, which is disregarded on the assumption that the general public won't pick up on it. At the very least, I would say that for this to be true you need a very specific definition of intelligence, one that would exclude a lot of crowd behaviors.

woolion commented on Replicube: 3D shader puzzle game, online demo   replicube.xyz/staging/... · Posted by u/inktype
woolion · a month ago
The developers are also behind JellyCar Worlds, which I found to be a wonderfully creative set of physics-based "platforming" (there's a twist!) challenges/puzzles. It's a ton of fun to play with a kid, yet there are a lot of really complex setups to challenge yourself with if you want to. A real gem!
woolion commented on The Death of the Middle-Class Musician   thewalrus.ca/the-death-of... · Posted by u/pseudolus
tptacek · 2 months ago
The lede of this article, about Rollie Pemberton, is about a "360" deal where the label gets a cut of all revenue related to the act (Pemberton's "Cadence Weapon"). Unusually, in Pemberton's case, it appears that most of his revenue came from prizes and grants, not from recording sales or touring. The structure of his deal thus earned Upper Class Records an outsized return. The deal seems pretty exploitative.

The problem with this as a framing device is that it doesn't describe very many working musical acts. 360 deals are probably generally gross? But Pemberton's situation is weird. In most cases, labels are in fact going to lose money from midlist acts.

The more you look at these kinds of businesses the more striking the pattern is. It's true of most media, it's true of startups, it's true for pharmaceuticals. The winners pay for the losers; in fact, the winners are usually the only thing that matter, the high-order bit of returns.

What's challenging about this is that you can't squeeze blood from a stone. The package offered to a midlist act might in fact be a loss leader; incentive to improve dealflow and optionality for the label, to get a better shot at the tiny number of acts whose returns will keep the label afloat. There may not be much more to offer to acts that aren't going to generate revenue.

David Lowery (a mathematician and the founder/lead vocalist of Camper Van Beethoven and Cracker) had an article about this years ago:

https://news.ycombinator.com/item?id=3850935

It's worth a read (though things have probably changed in a number of ways since then). It's an interesting counterpoint to the automatic cite to Albini's piece that comes up in these discussions. Not that you should have sympathy for labels, just it's useful to have a clearer idea of what the deal was. The classic label deal with a mid-sized advance that never recouped (and which the labels never came back looking for when it didn't) was basically the driver for "middle-class" rock lifestyles; it's dead now.

woolion · 2 months ago
This is a very sensible analysis of the problems. On the one hand, people tend to ignore how many bands fail, and how much money and effort is spent on the process. On the other hand, labels have a deathgrip on the industry, using payola and other practices that they can afford thanks to their financial (and accounting) abilities.

One thing that could help is transparency, but in a way the lack of transparency is a good part of what keeps the system going. Most people would not agree if they knew how little they would keep if they were successful; "what do you mean I have to pay for the losers?". They would just want to pay for what was necessary for their own success, dismissing every expense that didn't pan out as a "stupid label decision". The thing is that nobody has a true recipe for success: you can only get reasonable estimates on your bets, and each bet will always be a biased coin flip.

woolion commented on AGI is Mathematically Impossible 2: When Entropy Returns   philarchive.org/archive/S... · Posted by u/ICBTheory
anal_reactor · 2 months ago
What's wrong with that? Most likely, the discussion coming from various people has more value than any single article, unless it's something truly phenomenal.
woolion · 2 months ago
I never said it was wrong, nor right. In fact, you might even read that as an excuse for "counter-hypists", as it's a pretty bad look to upvote such a low-quality submission. And I've made my own fun of AGI hype, but with knowledge of the fact that brevity is the fool of wit.

u/woolion

Karma: 1207 · Cake day: October 14, 2021
About
Programmer who drawz. Working on the open collaborative project Cosmoose. Come say hi!

Homepage: https://woolion.art/
