Readit News
latexr commented on Measuring the environmental impact of AI inference   arstechnica.com/ai/2025/0... · Posted by u/ksec
raincole · 17 hours ago
It's not what the report says.

> It's very concerning that this marketing puff piece is being eaten up by HN of all places as evidenced by the other thread.

It's very concerning that you can just make shit up on HN and be the top comment as long as it's to bash Google.

> Never mind that Gemini 2.5 Pro, which is what everyone here would actually be using, may well consume >100x much

Yes, exactly, never mind that. The report compares against a data point from May 2024, before Gemini 2.5 Pro became a thing.

latexr · 12 hours ago
> make shit up on HN and be the top comment as long as it's to bash Google.

I don’t think that’s fair. Same would’ve happened if it were Microsoft, or Apple, or Amazon. By now we’re all used to (and tired of) these tech giants lying to us and being generally shitty. Additionally, for decades we haven’t been able to trust reports from big companies which say “everything is fine, really” when they publish them themselves, about themselves, contradicting the general wisdom about something bad they’ve been doing. Put those together and you have the perfect combination: we’re primed to believe they’re trying to deceive us again, because that’s what happens most of the time. It has nothing to do with it being Google; they just happened to be the target this time.

latexr commented on AI crawlers, fetchers are blowing up websites; Meta, OpenAI are worst offenders   theregister.com/2025/08/2... · Posted by u/rntn
Nextgrid · 2 days ago
But what's the difference between one user making 900k hits and 900k different users making one hit? In both cases you have made a resource available and people are requesting it, some more than others.

If serving traffic for free is a problem, don't. If you are only able to serve N requests per second/minute/day/etc, do that. But don't complain if you give out something for free and people take it.

(also, a lot of the numbers people quote during these AI scraper "attacks" are very tame and the fact they are branded as problematic makes me suspect there's substantial incompetence in the solutions deployed to serve them)

latexr · 2 days ago
> But what's the difference between one user making 900k hits and 900k different users making one hit?

What’s the difference between giving 900K meals to one person and feeding 900K people? The former is abusive and wasteful, and deprives almost 900K other people of food. They are also being deceitful by pretending to be 900K different people.

Resources are finite. Web requests aren’t food, but you still pay for them. A spike in traffic may mean your service is down for the rest of the month, which is more acceptable if you helped a bunch of people who have now learned about, can talk about, and can share what you provided, versus having wasted all your traffic on a single bad actor who didn’t even care because they were just a robot.

> makes me suspect there's substantial incompetence in the solutions deployed to serve them

So you see bots scraping the Wikipedia webpages instead of downloading their organised dump, or scraping every git service webpage instead of cloning a repo, and think the incompetence is with the website instead of the scraper wasting time and resources to do a worse job?
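
For contrast, getting the bulk data directly is roughly this (a Python sketch; the dump URL and repository are illustrative placeholders, check dumps.wikimedia.org for the current paths):

    # Sketch: one bulk download and one shallow clone instead of millions of page hits.
    import subprocess
    import urllib.request

    # Wikimedia publishes full database dumps precisely so nobody has to crawl pages.
    DUMP_URL = "https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2"
    urllib.request.urlretrieve(DUMP_URL, "enwiki-pages-articles.xml.bz2")

    # One shallow clone replaces scraping every blob/commit page of a repository.
    subprocess.run(
        ["git", "clone", "--depth", "1", "https://github.com/example/repo.git"],
        check=True,
    )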

latexr commented on My AI had fixed the code before I saw it   every.to/source-code/my-a... · Posted by u/Garbage
latexr · 2 days ago
> I launched GitHub expecting to dive into my usual routine—flag poorly named variables, trim excessive tests, and suggest simpler ways to handle errors.

If these are routine, in what kind of state is the repository? All of those can easily be, and should have been, done properly at write/merge time in any half-decent code base. Sure, sometimes one case slips by, but if these are routine fixes, there is something deeply wrong with the process.

> I can't write a function anymore without thinking about whether I'm teaching the system or just solving today's problem.

This isn’t a positive thing.

> When you're done reading this, you'll have the same affliction.

No, not at all. What you have described is a deeply broken system which will lead to worse developers and even worse software. I hope your method propagates as little as possible.

> But AI outputs aren't deterministic—a prompt that works once might fail the next time.

> So I have Claude run the test 10 times. When it only identifies frustration in four out of 10 passes, Claude analyzes why it failed the other six times.

All of this is wasteful and insane. You don’t learn anything or understand your own system; you just throw random crap at it a bunch of times and keep whatever sticks. No wonder it’s part of your routine to have to fix basic things. It’s bonkers that this is lauded as an improvement over doing things right.
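
Concretely, that workflow amounts to something like this (a hypothetical Python sketch; ask_claude stands in for whatever model call is actually being made, and the prompt is made up):

    import random

    def ask_claude(prompt: str) -> str:
        # Stand-in for the real, non-deterministic model call.
        return random.choice(["Yes, the user sounds frustrated.", "No clear frustration."])

    def frustration_pass_rate(transcript: str, runs: int = 10) -> float:
        passes = 0
        for _ in range(runs):
            answer = ask_claude("Does this transcript show user frustration?\n" + transcript)
            if answer.lower().startswith("yes"):
                passes += 1
        # e.g. 4 passes out of 10 -> 0.4; the failing runs then get fed back for "analysis".
        return passes / runs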

latexr commented on AI tooling must be disclosed for contributions   github.com/ghostty-org/gh... · Posted by u/freetonik
ants_everywhere · 2 days ago
If you don't disclose the use of

- books

- search engines

- stack overflow

- talking to a coworker

then it's not clear why you would have to disclose talking to an AI.

Generally speaking, when someone uses the word "slop" when talking about AI it's a signal to me that they've been sucked into a culture war and to discount what they say about AI.

It's of course the maintainer's right to take part in a culture war, but it's a useful way to filter out who's paying attention vs who's playing for a team. Like when you meet someone at a party and they bring up some politician you've barely heard of but who their team has vilified.

latexr · 2 days ago
> then it's not clear why you would have to disclose talking to an AI.

It’s explained right there in the PR:

> The disclosure is to help maintainers assess how much attention to give a PR. While we aren't obligated to in any way, I try to assist inexperienced contributors and coach them to the finish line, because getting a PR accepted is an achievement to be proud of. But if it's just an AI on the other side, I don't need to put in this effort, and it's rude to trick me into doing so.

That is not true of books, search engines, stack overflow, or talking to a coworker, because in all those cases you still had to do the work of comprehending, preparing, and submitting the patch yourself. This is also why they ask for a disclosure of “the extent to which AI assistance was used”. What about that isn’t clear to you?

latexr commented on The AI Job Title Decoder Ring   dbreunig.com/2025/08/21/a... · Posted by u/dbreunig
latexr · 2 days ago
> Even when you live and breathe AI, the job titles can feel like a moving target. I can only imagine how mystifying they must be to everyone else.

> Because the field is actively evolving, the language we use keeps changing. Brand new titles appear overnight or, worse, one term means three different things at three different companies.

How can you write that and not realise “maybe this is all made up bullshit and everyone is pulling titles out of their asses to make themselves look more important and knowledgeable than they really are, thus I shouldn’t really be wasting my time giving the subject any credence”? If you’re all in on the field and can’t keep up, why should anyone else care?

latexr commented on AI crawlers, fetchers are blowing up websites; Meta, OpenAI are worst offenders   theregister.com/2025/08/2... · Posted by u/rntn
renewiltord · 2 days ago
Websites aren't people. They don't have desires. Machines have communication protocols. You can set your machine to blackhole the traffic or TCP RST or whatever you want. It's just network traffic. Do what you want with it.

People send me spam. I don't whine about it. I block it.

latexr · 2 days ago
> Websites aren't people. They don't have desires.

Obviously I’m talking about the people behind them, and I very much doubt you lack the minimal mental acuity to understand that when I used “website owners” in the preceding sentence. If you don’t want to engage in a good faith discussion you can just say so, no need to waste our time with fake pedantry. But alright, I edited that section.

> You can set your machine to blackhole the traffic or TCP RST or whatever you want. It's just network traffic.

And then you spend all your time in a game of cat and mouse, while these scrapers bring your website down and cost you huge amounts of money. Are you incapable of understanding how that is a problem?
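
The “just block it” approach boils down to a list like this (a toy Python sketch; the addresses are placeholders from the documentation ranges), which a scraper rotating through cloud or residential IPs walks around immediately:

    # Yesterday's offenders; a scraper that rotates IPs is never on this list.
    BLOCKED_IPS = {"203.0.113.7", "198.51.100.22"}

    def allow_request(client_ip: str) -> bool:
        return client_ip not in BLOCKED_IPS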

> People send me spam. I don't whine about it. I block it.

Is the amount of spam you get so overwhelming that it swamps your inbox every day to a level you’re unable to find the real messages? Do those spammers routinely circumvent your rules and filters after you’ve blocked them? Is every spam message you get costing you money? Are they increasing every day? No? Then it’s not the same thing at all.

latexr commented on Show HN: OS X Mavericks Forever   mavericksforever.com/... · Posted by u/Wowfunhappy
Wowfunhappy · 2 days ago
(I know I already replied in a different comment, but just thinking about this more.)

> Both can also be true of your elderly relative, or your partner, or your cousin, or your friend who doesn’t want to fiddle with the damn machine, they just want to get their shit done without having to worry about screwing up anything. Your other friend will want the freedom to do everything and ask you for help.

...you know, this is also why, as much as I love the hackability of Mavericks, I also kind of liked the way Apple initially implemented System Integrity Protection in El Capitan.

It was easy to turn off! Just boot into recovery mode, open the Terminal, type in a short command, and boom, SIP will never bother you again for the entire life of that computer! The process wasn't onerous, or even difficult as long as you knew how to open a Terminal in recovery mode, or were willing to learn. And if you couldn't do those things, well, you probably shouldn't turn off SIP!

Where I get annoyed is with the signed system volume stuff, because that consistently gets in your way! It is impossible for any type of user to "unlock" modern macOS.

Although then again, even going back to the original SIP without SSV... well, we did already have a system for this before SIP, didn't we? It was called UNIX permissions! If you didn't know what you were doing, or didn't want to learn, why were you using an administrator account? Why did your elderly relative ever have superuser privileges in the first place?

...the answer is kind of obvious, actually. Administrator accounts are the default, and even if you went out of your way to avoid one, you'd be unable to, for example, install Photoshop.

I wish that were the problem Apple had solved! Instead of introducing an entirely new layer on top of the UNIX security model, make non-admin accounts the default for new users, and then make those accounts a tad more capable (and lean on Adobe to stop being awful).

latexr · 2 days ago
There is also another layer: when SIP was introduced, there were tons of articles and videos teaching people to turn it off when they shouldn’t. These ranged from uninformed social media “developers” who confidently spewed dangerously bad advice, to outright bad actors trying to compromise your machine. Non-savvy users could still easily break their own systems by disabling these features.

But largely I agree with you. I wish Apple had taken longer to fully develop a robust solution from the ground up instead of the status quo of piling on year after year to a semi-broken system.

latexr commented on AI crawlers, fetchers are blowing up websites; Meta, OpenAI are worst offenders   theregister.com/2025/08/2... · Posted by u/rntn
renewiltord · 2 days ago
If you don't want to receive data, don't. If you don't want to send data, don't. No one is asking you to receive traffic from my IPs or send to my IPs. You've just configured your server one way.

Or to use a common HN aphorism “your business model is not my problem”. Disconnect from me if you don’t want my traffic.

latexr · 2 days ago
> Disconnect from me if you don’t want my traffic.

The problem is precisely that that is not possible. It is very well known that these scrapers aren’t respecting the wishes of website owners and even circumvent blocks any way they can. If these companies respected the website owners’ desires for them to disconnect, we wouldn’t be having this conversation.

u/latexr

Karma: 19759 · Cake day: December 12, 2017