Readit News
tomku commented on Something Big Is Happening   twitter.com/mattshumer_/s... · Posted by u/mhb
njoyablpnting · 3 days ago
> I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done. Done well, done better than I would have done it myself, with no corrections needed.

I use Opus 4.6 all day long and this is not my experience at all. Maybe if you're writing standard CRUD apps or other projects well-represented in the training data. Anyone who has written "real" software knows that it's lots of iterating, ambiguity and shifting/opposing requirements.

The article seems to be written to feed into some combination of hype and anxiety. If the author wants to make a more compelling case for their stance, I would suggest they build and deploy some of this software they're supposedly getting the LLM to perfectly create.

Yes, it's a very useful tool, but these sorts of technically light puff pieces are pretty tiresome and reflect poorly on the people who author and promote them. Also, didn't this guy previously make up some benchmarks that turned out to be bogus? https://www.reddit.com/r/LocalLLaMA/comments/1fd75nm/out_of_...

tomku · 3 days ago
I went down a bit of a rabbit-hole trying to figure out exactly who Matt Shumer is and why anyone should care what he thinks. The best information I found came from this article, which was from before he pivoted to being an AI startup bro:

https://www.newsweek.com/i-couldnt-play-rules-so-i-became-en...

It's kind of a sad read. He would benefit a lot from getting outside the startup bubble and talking to some people who do useful work for a living instead of riding internet fads and growthmaxxing via viral social media posts.

tomku commented on Something Big Is Happening   twitter.com/mattshumer_/s... · Posted by u/mhb
tomku · 3 days ago
Thought this name sounded familiar... Matt Shumer was one of the people responsible for the "Reflection 70b" hoax a few years ago. There is no reason to take anything he writes seriously; he has a history of flat-out lying to go viral.

Edit: Summary for anyone who didn't follow this saga at the time: https://www.ignorance.ai/p/the-fable-of-reflection-70b

Shumer is at best a fool and at worst a con artist.

tomku commented on Ask HN: Any real OpenClaw (Clawd Bot/Molt Bot) users? What's your experience?    · Posted by u/cvhc
azinman2 · 14 days ago
If it’s a local model, why would you care if it sees your messages or notes?
tomku · 14 days ago
https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/

Note that nothing about that depends on it being a local or remote model; it was just less of a concern for local models in the past because most of them did not have tool calling. OpenClaw, for all its cool and flashy uses, is also basically an infinite generator for lethal trifecta problems because its whole pitch is combining your data with tools that can both read from and write to the public internet.
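
Concretely, the check is just "does the agent's toolset combine all three legs at once" - here's a rough Python sketch of that idea (tool names and capability flags are made up for illustration, not OpenClaw's actual API):

    # Minimal sketch of the "lethal trifecta" combination described above.
    # Tool names and capability flags are hypothetical, not OpenClaw's real API.
    from dataclasses import dataclass

    @dataclass
    class Tool:
        name: str
        reads_private_data: bool = False       # e.g. notes, messages, files
        reads_untrusted_content: bool = False  # e.g. fetching arbitrary web pages
        can_exfiltrate: bool = False           # e.g. posting or sending data out

    def has_lethal_trifecta(tools: list[Tool]) -> bool:
        # The danger comes from the combination across the agent's whole
        # toolset, not from any single tool in isolation.
        return (
            any(t.reads_private_data for t in tools)
            and any(t.reads_untrusted_content for t in tools)
            and any(t.can_exfiltrate for t in tools)
        )

    agent_tools = [
        Tool("read_notes", reads_private_data=True),
        Tool("browse_web", reads_untrusted_content=True),
        Tool("post_comment", can_exfiltrate=True),
    ]
    print(has_lethal_trifecta(agent_tools))  # True: prompt-injection exfiltration risk

Any one or two of those capabilities can be fine; it's the full combination that turns prompt injection into data theft.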

tomku commented on I have to give Fortnite my passport to use Bluesky   spitfirenews.com/p/why-i-... · Posted by u/malshe
charcircuit · 2 months ago
Is this just clickbait or is the passport verification really done by Fortnite? The article doesn't explain it and I am very skeptical of the claim.
tomku · 2 months ago
It's using "Fortnite" as a synecdoche for Epic Games, because "I have to give an age verification company owned by Epic Games my passport to use Bluesky" isn't quite as effective at revving the outrage engines, even if it has the benefit of being true. Personally, I don't think people who are willing to do that are showing themselves to be trustworthy, but you might feel differently.
tomku commented on Using AI Generated Code Will Make You a Bad Programmer   unsolicited-opinions.rudi... · Posted by u/speckx
threethirtytwo · 2 months ago
Exactly 2 years ago, I remember people calling AI stochastic parrots with no actual intellectual capability, and people on HN weren't remotely worried that AI would take over their jobs.

I mean, in 2 years the entire mentality shifted. Most people on HN were just completely and utterly wrong (also quite embarrassing if you read how self-assured these people were; this was like 70 percent of HN at the time).

First, AI is clearly not a stochastic parrot, and second, it hasn't taken our jobs yet, but we can all see that potential up ahead.

Now we get articles like this saying your skills will atrophy with AI because the entire industry is using it now.

I think it's clear. Everyone's skills will atrophy. This is the future. I fully expect that in the coming decades the generation after zoomers will never have coded without the assistance of AI, and they will have an even harder time finding jobs in software.

Also: because the change happened so fast, you see tons of pockets of people who aren't caught up yet, people who don't realize that the above is the overarching reality. You'll know you're one of these people if AI hasn't basically taken over your workplace and you and your coworkers aren't going all in on Claude or Codex. Give it another 2 years and everyone will flip here too.

tomku · 2 months ago
Two years ago there were also hundreds of people constantly panic-posting here about how our jobs would be gone in a month, that learning anything about programming was now a waste of time and the entire profession was already dead, with all other knowledge work guaranteed to follow. People were posting about how they were considering giving up on CS degrees because AI would make them pointless. The people who used language like "stochastic parrots" were regularly mocked by AI enthusiasts, and the AI enthusiasts were then mocked in return for their absurd claims about fast take-off and imminent AGI. It was a cesspool of bad takes coming from basically every angle, strengthening in certainty as they bounced off each other's idiocy.

Your memory of the discourse of that era has apparently been filtered by your brain in order to support the point you want to make. Nobody who thoughtlessly adopted an extreme position at a hinge point where the future was genuinely uncertain came out of that looking particularly good.

tomku commented on GNU Octave Meets JupyterLite: Compute Anywhere, Anytime   blog.jupyter.org/gnu-octa... · Posted by u/bauta-steen
imploded_sub · 4 months ago
I did that with Octave too. I didn't mind the language much, but it wasn't great. I had significant experience with both coding and simple models when doing it, so I wasn't a beginner; I can see it being an additional hurdle for some people. What are they using now? Python?
tomku · 4 months ago
Believe Andrew Ng's new course is all Python now, yeah. Amusingly enough another class that I took (Linear Algebra: Foundations to Frontiers) kinda did the opposite move - when I took it, it was all Python, but shortly after they transitioned to full-powered MATLAB with limited student licenses. Guess it makes sense given that LAFF was primarily about the math.
tomku commented on GNU Octave Meets JupyterLite: Compute Anywhere, Anytime   blog.jupyter.org/gnu-octa... · Posted by u/bauta-steen
kjgkjhfkjf · 4 months ago
Early versions of Andrew Ng's ML MOOC used Octave, if you are looking for examples and exercises.

YouTube playlist: https://www.youtube.com/playlist?list=PLiPvV5TNogxIS4bHQVW4p...

tomku · 4 months ago
I was in one of those early cohorts that used Octave. One of the things the course had to deal with was that, at the time (I don't know about now), Octave did not ship with an optimization function suitable for the coursework, so we ended up using an implementation of `fmincg` provided along with the homework by the course staff. If you're following along with the lectures, you might need to track down that file; it's probably available somewhere.

Using Octave for a beginning ML class felt like the worst of both worlds - you got the awkward, ugly language of MATLAB without any of the upsides of MATLAB-the-product, because it didn't have the GUI environment or the huge pile of toolbox functions. None of that is meant as criticism of Octave as a project; it's fine for what it is. It just ended up being more of a stumbling block for beginners than a booster in that specific context.

tomku commented on Ofcom fines 4chan £20K and counting for violating UK's Online Safety Act   theregister.com/2025/10/1... · Posted by u/klez
kijin · 4 months ago
Durov's plane wasn't redirected to France, nor were the French planning to extradite him anywhere else for all we know. He willingly landed his own private jet in Paris.

I understand what point you're trying to make, but Protasevich would have been a better example. Beware of whose airspace you fly over.

tomku · 4 months ago
Durov is also, relevantly, a naturalized French citizen in addition to his various other passports. It's not just "some jurisdiction", it's one he opted into!
tomku commented on A recent chess controversy   chicagobooth.edu/review/d... · Posted by u/indigodaddy
mft_ · 5 months ago
> It's not that good for top-level chess because a Magnus or Hikaru or basically anyone in the top few hundred players can bang out a series of extremely accurate moves in a critical spot - that's why they're top chess players, they're extremely good.

Interesting; I thought I'd read that even the very best players only average ~90% accuracy, whereas the best engines average 99.something%?

tomku · 5 months ago
Top-level players are regularly in the 90-95% range aggregated over many games, with spikes up to 98-99%. If you have 98 or 99% accuracy over the course of an entire game (which happens sometimes!), it's either very short or you had significant sequences where you were 100% accurate. If that happened in one of my games, it'd be clear evidence I was cheating; if it happens in a Magnus game, it's him correctly calculating a complex line and executing it, which he does pretty often.

Edit: Even lower-level cheated games are rarely 100% accurate for the whole game; cheaters usually mix in some bad or natural moves knowing that the engine will let them win anyway. That's why analysis usually focuses on critical sections: if someone normally plays at a 900 rating but spikes to 100% accuracy every time there's a critical move where other options lose, that's a strong suggestion they're cheating. One of the skills of a strong GM is sniffing out situations like that and being able to calculate a line of 'only moves' under pressure, so it's not nearly as surprising when they pull it off.
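
As a toy illustration of that critical-section idea (this is a made-up sketch, not how any site actually implements detection, and the baseline/threshold numbers are invented):

    # Toy sketch of critical-move analysis; thresholds are illustrative only.
    # A "critical" position here means only the engine's best move avoids losing.

    def critical_hit_rate(moves):
        # moves: list of dicts like
        #   {"is_critical": bool, "matched_engine_best": bool}
        critical = [m for m in moves if m["is_critical"]]
        if not critical:
            return None
        return sum(m["matched_engine_best"] for m in critical) / len(critical)

    def expected_hit_rate(rating):
        # Hypothetical baseline: stronger players find "only moves" more often.
        return min(0.95, 0.30 + rating / 4000)

    def suspicious(moves, rating, margin=0.30):
        observed = critical_hit_rate(moves)
        if observed is None:
            return False
        # A 900-rated player hitting ~100% of critical moves stands out;
        # a 2800-rated player doing the same is expected behaviour.
        return observed - expected_hit_rate(rating) > margin

The point is that the same observation (perfect play in critical spots) means very different things depending on who's producing it.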

tomku commented on A recent chess controversy   chicagobooth.edu/review/d... · Posted by u/indigodaddy
giancarlostoro · 5 months ago
I'm just trying to figure out how you even cheat at chess. The only thing that comes to mind is moving pieces and sneaking new ones onto the board, but if there are enough cameras, how do you get away with it? Eventually someone WILL notice, highlight it, point it out, and you will be shamed.
tomku · 5 months ago
The vast, overwhelming majority of chess games are not played in front of cameras or even in person. The accusation in the article was about online play, and specifically blitz, which is played online even more commonly than slower formats of chess because moving quickly is easier for many people with a mouse than with a physical board.

The way people cheat online is by running a chess engine that analyzes the state of the board in their web browser/app and suggests moves and/or gives a +/- evaluation reflecting the balance of the game. Sometimes people run it on another device like their phone to evade detection, but the low-effort ways are a browser extension or a background app that monitors the screen. The major online chess platforms are banning significant numbers of people trying to cheat in this way every day.

Chess.com and Lichess catch these cheaters using a variety of methods, some of which are kept secret to make it harder for cheaters to circumvent them. One obvious way is to automatically compare people's moves to the top few engine moves and look for correlations, which is quite effective for, say, catching people who are low-rated but pull out the engine to help them win games occasionally. It's not that good for top-level chess because a Magnus or Hikaru or basically anyone in the top few hundred players can bang out a series of extremely accurate moves in a critical spot - that's why they're top chess players, they're extremely good. Engine analysis can still catch high-level cheaters, but it often takes manual effort to isolate moves that even a world-champion-class human would not have come up with, and offers grounds for suspicion and further investigation rather than certainty.
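
Very roughly, that correlation screening amounts to something like the sketch below (again hypothetical - the baseline rates and threshold are invented for illustration, not real detection parameters):

    # Rough sketch of engine-correlation screening over many games.
    # Baseline numbers and threshold are invented, not real detection parameters.

    def top_n_match_rate(games, n=3):
        # games: list of games; each game is a list of ranks per move,
        # where rank is the played move's position in the engine's list
        # (1 = engine's first choice), or None if it wasn't in the top choices.
        total = matched = 0
        for game in games:
            for rank in game:
                total += 1
                if rank is not None and rank <= n:
                    matched += 1
        return matched / total if total else 0.0

    # Hypothetical baselines: typical top-3 match rates by rating band.
    BASELINE = {900: 0.45, 1500: 0.55, 2200: 0.70, 2800: 0.85}

    def flag_for_review(games, rating_band, threshold=0.15):
        # A flag is grounds for closer (often manual) review, not proof of cheating.
        return top_n_match_rate(games) - BASELINE[rating_band] > threshold

The higher the baseline for a rating band, the less signal there is in "played like the engine", which is exactly why top-level cases need manual analysis on top of the statistics.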

For titled events and tournaments, Chess.com has what's effectively a custom browser (Proctor) that surveils players during their games, capturing their screen and recording the mics and cameras that Chess.com requires high-level players to make available to show their environment while they play. This is obviously extremely onerous for players, but there's often money on the line and players do not want to play against cheaters either, so they largely put up with the inconvenience and privacy loss.

Despite all of the above, high-level online cheating still happens and some of it is likely not caught.

Edit: More information on Proctor here: https://www.chess.com/proctor
