Readit News
yeahwhatever10 commented on Claude Opus 4 and 4.1 can now end a rare subset of conversations   anthropic.com/research/en... · Posted by u/virgildotcodes
rogerkirkness · 14 days ago
It seems like Anthropic is increasingly confused into thinking that these non-deterministic magic 8-balls are actually intelligent entities.

The biggest enemy of AI safety may end up being deeply confused AI safety researchers...

yeahwhatever10 · 14 days ago
Is it confusion, or job security?
yeahwhatever10 commented on Tao on “blue team” vs. “red team” LLMs   mathstodon.xyz/@tao/11491... · Posted by u/qsort
ants_everywhere · a month ago
I have a couple of thoughts here:

(a) AI on both the "red" and "blue" teams is useful. Blue team is basically brainstorming.

(b) AlphaEvolve is an example of an explicit "red/blue team" approach in his sense, although they don't use those terms [0]. Tao was an advisor to that paper.

(c) This is also reminiscent of the "verifier/falsifier" division of labor in game semantics. This may be the way he's actually thinking about it, since he has previously said publicly that he thinks in these terms [1]. The "blue/red" wording may be adapting it for an audience of programmers.

(d) Nitpicking: a security system is not only as strong as its weakest link. It depends on whether the elements are layered in series or exposed in parallel. A corridor of strong doors and weak doors (in series) is as strong as its strongest door. A fraud detection algorithm made by aggregating weak classifiers is often much better than its weakest classifier.
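A toy probability model makes the series/parallel distinction in (d) concrete. This is my own illustration (independent layers, each with some probability of being defeated), not something from the comment:

```python
from math import prod

def breach_in_series(p_defeat):
    """Defense in depth: the attacker must defeat every layer in turn,
    so the overall breach probability is the product of the per-layer
    probabilities -- the chain is at least as strong as its strongest layer."""
    return prod(p_defeat)

def breach_in_parallel(p_defeat):
    """Any single element being defeated compromises the system:
    the classic 'weakest link' situation."""
    return 1 - prod(1 - p for p in p_defeat)

# Two weak doors (50% each) and one strong door (1%).
layers = [0.5, 0.5, 0.01]
print(breach_in_series(layers))    # 0.0025  -- better than the strong door alone
print(breach_in_parallel(layers))  # 0.7525  -- worse than either weak door alone
```

The same arithmetic is why "weakest link" intuition only applies to the parallel case: in series, adding even a mediocre extra layer can only reduce the breach probability.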

[0] https://storage.googleapis.com/deepmind-media/DeepMind.com/B...

[1] https://mathoverflow.net/questions/38639/thinking-and-explai...

yeahwhatever10 · a month ago
How is the LLM in AlphaEvolve red team? All the LLM does is generate new code when prompted with examples. It doesn’t evaluate the code.
yeahwhatever10 commented on All Good Editors Are Pirates: In Memory of Lewis H. Lapham   laphamsquarterly.org/roun... · Posted by u/Caiero
Aachen · 2 months ago
The title is explained way down:

> Lewis was fond of saying that “all good editors are pirates”—they steal from everyone

yeahwhatever10 · 2 months ago
A contradictory quote for publishers and editors in the AI era.
yeahwhatever10 commented on Dull Men’s Club   theguardian.com/society/2... · Posted by u/herbertl
guicen · 2 months ago
There’s something oddly comforting about this. In a world where everyone’s trying to stand out, some people find peace in just noticing ordinary things. Maybe being "boring" is underrated. You don’t always need a big story to feel connected. Sometimes it's enough to care about small details nobody else pays attention to.
yeahwhatever10 commented on My AI skeptic friends are all nuts   fly.io/blog/youre-all-nut... · Posted by u/tabletcorry
ChrisMarshallNY · 3 months ago
I use AI every day, basically as a "pair coder."

I used it about 15 minutes ago, to help me diagnose a UI issue I was having. It gave me, in about 30 seconds, an answer that would have taken me about 30 minutes to figure out on my own. My coding style (large files, with multiple classes, well-documented) works well for AI. I can literally dump the entire file into the prompt, and it can scan it in milliseconds.

I also use it to help me learn about new stuff, and the "proper" way to do things.

Basically, what I used to use StackOverflow for, but without the sneering, and with much faster turnaround. I'm not afraid to ask "stupid" questions; that is critical.

Like SO, I have to take what it gives me with a grain of salt. It's usually too verbose and doesn't always match my style, so I end up doing a lot of refactoring. It can also give rather "naive" answers that I can refine. The important thing is that I usually get something that works, so I can walk it back and figure out a better way.

I also won't add code to my project that I don't understand, and the refactoring helps me there.

I have found the best help comes from ChatGPT. I heard that Claude was supposed to be better, but I haven't seen that.

I don't use agents. I've never really found automated pipelines useful in my case, and that's sort of what agents would do for me. I may change my mind on that, as I learn more.

yeahwhatever10 · 3 months ago
I use it as a SO stand in as well.

What I like about Chatbots vs SO is the ability to keep a running conversation instead of 3+ tabs and tuning the specificity toward my problem.

I've also noticed that if I look up my same question on SO, I often find the source code the LLM copied. My fear is that if chatbots kill SO, where will the LLMs' copied code come from in the future?

yeahwhatever10 commented on OpenAI to buy AI startup from Jony Ive   bloomberg.com/news/articl... · Posted by u/minimaxir
philosophty · 3 months ago
This is a sign of OpenAI's weakness.

Altman is desperately trying to use OpenAI's inflated valuation to buy some kind of advantage. Which is why he's buying ads, paying $6.5 billion in stock to Jony Ive, and $3 billion for a VSCode fork created in a few months.

Almost anything makes sense when you see your valuation going to zero unless you can figure something out.

yeahwhatever10 · 3 months ago
Agreed. This doesn't look like 4D chess, this stinks of desperation.
yeahwhatever10 commented on Show HN: I built a hardware processor that runs Python   runpyxl.com/gpio... · Posted by u/hwpythonner
yeahwhatever10 · 4 months ago
How are you simulating the designs for the FPGA? Are you paying for ModelSim?
yeahwhatever10 commented on Jagged AGI: o3, Gemini 2.5, and everything after   oneusefulthing.org/p/on-j... · Posted by u/ctoth
mellosouls · 4 months ago
The capabilities of AI post gpt3 have become extraordinary and clearly in many cases superhuman.

However (as the article admits) there is still no general agreement of what AGI is, or how we (or even if we can) get there from here.

What there is is a growing and often naïve excitement that anticipates it as coming into view, and unfortunately that will be accompanied by the hype-merchants desperate to be first to "call it".

This article seems reasonable in some ways but unfortunately falls into the latter category with its title and sloganeering.

"AGI" in the title of any article should be seen as a cautionary flag. On HN - if anywhere - we need to be on the alert for this.

yeahwhatever10 · 4 months ago
This is the forum that fell hardest for the superconductor hoax a few years ago. HN has no leg to stand on when it comes to superiority.
yeahwhatever10 commented on Ironwood: The first Google TPU for the age of inference   blog.google/products/goog... · Posted by u/meetpateltech
_hark · 5 months ago
Can anyone comment on where efficiency gains come from these days at the arch level? I.e. not process-node improvements.

Are there a few big things, many small things...? I'm curious what fruit are left hanging for fast SIMD matrix multiplication.

yeahwhatever10 · 5 months ago
Specialization, i.e. hardware specialized for inference.
yeahwhatever10 commented on US Administration announces 34% tariffs on China, 20% on EU   bbc.com/news/live/c1dr7vy... · Posted by u/belter
pyrale · 5 months ago
> Tunisian consumers don't have the ability to afford products from the US.

They do use products from the US, just not physical ones. It's weird to read such takes on HN of all sites.

yeahwhatever10 · 5 months ago
There is this group-think on HN today that services are intentionally left out of the US trade balance. That confusion likely comes from tax and corporate structures: those profits are locked into sub-corps, such as Apple-Cayman Islands or Google-Ireland (corporate tax havens), which is why they don't show up on the balance sheet as "trade" into the US (typically those sub-corps buy financial assets with the profits). Read the first chapters of Trade Wars Are Class Wars for more depth.

u/yeahwhatever10

Karma: 248 · Cake day: July 5, 2016