Readit News


kouteiheika commented on X offices raided in France as UK opens fresh investigation into Grok   bbc.com/news/articles/ce3... · Posted by u/vikaveri
kouteiheika · 5 days ago
No. I'm just saying that people should be consistent: if they apply a certain standard to Grok, then they should apply the same standard to everything else.

Meanwhile, what I commonly see is people dunking on anything Musk-related because they dislike him, while giving a free pass to similar things that aren't related to him.

kouteiheika commented on X offices raided in France as UK opens fresh investigation into Grok   bbc.com/news/articles/ce3... · Posted by u/vikaveri
JumpCrisscross · 5 days ago
> everything that Grok is accused of is something which you can also trigger in currently available open-weight models (if you know what you're doing)

Well, yes. You can make child pornography with any video-editing software. How is this exoneration?

kouteiheika · 5 days ago
I'm not talking about video-editing software; that's a different class of software. I'm talking about other generative AI models, which you can download onto your computer today and have do the same things Grok does.

> How is this exoneration?

I don't know; you tell me where I said it was. I'm just stating the fact that Grok isn't unique here, and if you want to ban Grok because of this, then you also need to ban open-weight models that can do exactly the same thing.

kouteiheika commented on X offices raided in France as UK opens fresh investigation into Grok   bbc.com/news/articles/ce3... · Posted by u/vikaveri
kouteiheika · 5 days ago
This is really amusing to watch, because everything that Grok is accused of is something which you can also trigger in currently available open-weight models (if you know what you're doing).

There's nothing special about Grok in this regard. It wasn't trained to be a MechaHitler, nor to generate CSAM. It's just relatively uncensored[1] compared to the competition, which means it can easily be manipulated into doing whatever users tell it to, and that is biting Musk in the ass here.

And just to be clear, since apparently people love to jump to conclusions: I'm not excusing what is happening. I'm just pointing out that the only special thing about Grok is that it's both relatively uncensored and easily available to a mainstream audience.

[1] - see the Uncensored General Intelligence leaderboard, where Grok is currently #1: https://huggingface.co/spaces/DontPlanToEnd/UGI-Leaderboard

kouteiheika commented on MaliciousCorgi: AI Extensions send your code to China   koi.ai/blog/maliciouscorg... · Posted by u/tatersolid
ecshafer · 6 days ago
I don't really get how VSCode got so popular. You can use a language server perfectly easily with Vim, Emacs, Helix, Sublime, etc. You can customize basically everything in those editors (syntax and so on). If you routinely need more complex build commands, you can just alias console commands for all of your build tools with some custom scripts. The git terminal tool works better than any VSCode option. And VSCode is slower than all of those.

We already have so many good, fast, secure, polyglot, customizable text editors. Why run one through Chrome and fill it with extensions for everything, each with arbitrary access to everything?
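The aliasing approach described above can be sketched in a few lines of shell config; the cargo invocations here are just placeholder examples for whatever build tools a project actually uses:

```shell
# ~/.bashrc (or ~/.zshrc) -- short aliases for routine build commands
alias b='cargo build'
alias t='cargo test'

# anything more involved than a one-liner fits better in a small function
rebuild() {
    cargo clean && cargo build --release "$@"
}
```

Aliases only expand in interactive shells, which is why the more complex command is written as a function.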

kouteiheika · 6 days ago
> I don't really get how VSCode got so popular. You can use a language server perfectly easily with Vim, Emacs, Helix, Sublime, etc.

You open it. It just works. And the learning curve is smooth.

Compare this to Vim where, the first time you open it, you're forced to kill the process because you don't even know how to quit it, never mind do any productive work.

kouteiheika commented on Retiring GPT-4o, GPT-4.1, GPT-4.1 mini, and OpenAI o4-mini in ChatGPT   openai.com/index/retiring... · Posted by u/rd
Hard_Space · 9 days ago
> you respond in 1-3 sentences" becomes long bulleted lists and multiple paragraphs very quickly

This is why my heart sank this morning. I have spent over a year training 4.0 to just about be helpful enough to get me an extra 1-2 hours a day of productivity. From experimentation, I can see no hope of reproducing that with 5x, and even 5x admits as much to me, when I discussed it with them today:

> Prolixity is a side effect of optimization goals, not billing strategy. Newer models are trained to maximize helpfulness, coverage, and safety, which biases toward explanation, hedging, and context expansion. GPT-4 was less aggressively optimized in those directions, so it felt terser by default.

Share and enjoy!

kouteiheika · 9 days ago
> This is why my heart sank this morning. I have spent over a year training 4.0 to just about be helpful enough to get me an extra 1-2 hours a day of productivity.

Maybe you should consider basing your workflows on open-weight models instead? Unlike with proprietary, API-only models, no one can take these away from you.

kouteiheika commented on Ask HN: What's the Point Anymore?    · Posted by u/fnoef
worldsavior · 12 days ago
People still appreciate human art. People don't appreciate AI art because it's fake. If you enjoy AI art, you're probably fake and have no appreciation. That's my take.

Just remember that AI cannot create art; it can only remember art. AI is not a human; AI is a probabilistic function.

kouteiheika · 12 days ago
> If you enjoy AI art, you're probably fake and have no appreciation. That's my take.

I enjoy AI art. I don't enjoy AI slop. There's a fundamental difference between the two. It's true that the Internet is flooded with low-effort AI slop, but AI is just a tool like any other, and you can create real art with it. It just takes skill.

Here's an experiment: try visiting CivitAI's featured images page[1] and then tell me with a straight face that you'd classify none of those images as art.

[1] - https://civitai.com/images

kouteiheika commented on Qwen3-Max-Thinking   qwen.ai/blog?id=qwen3-max... · Posted by u/vinhnx
behnamoh · 13 days ago
> Why is this surprising?

Because the promise of "open-source" (which this isn't; it's not even open-weight) is that you get something that proprietary models don't offer.

If I wanted censored models I'd just use Claude (heavily censored).

kouteiheika · 13 days ago
> Because the promise of "open-source" (which this isn't; it's not even open-weight) is that you get something that proprietary models don't offer. If I wanted censored models I'd just use Claude (heavily censored).

You're saying it's surprising that a proprietary model is censored because the promise of open-source is that you get something that proprietary models don't offer, but you yourself admit that this model is neither open-source nor even open-weight?

kouteiheika commented on TSMC Risk   stratechery.com/2026/tsmc... · Posted by u/swolpers
kouteiheika · 13 days ago
> Anthropic Chief Executive Officer Dario Amodei said selling advanced artificial intelligence chips to China is a blunder with “incredible national security implications” [...] “I think this is crazy. It’s a bit like selling nuclear weapons to North Korea.”

This is all a smokescreen. He knows very well that China can and will develop its own hardware to train AI models (and in fact, it is successfully doing just that; e.g. the recently released GLM-Image was trained on domestic silicon). His only objective here is to slow them down enough that they don't eat Anthropic/Claude's lunch by releasing increasingly competitive open-weight models. But he can't just openly say "hey, we don't like that they release open-weight models for free", so he's engaging in the AI version of the "think of the children" argument.

Anthropic's whole modus operandi has always pretty much been "we should control this technology, and no one else should". It's no coincidence that they're the only major lab that has not released any open-weight models, that they publish no research useful for training models, and that they actively lobby lawmakers to restrict people's access to open-weight models. It's incredibly ironic that Dario is worried about (I quote) "1984 scenarios" when that's exactly what his company is aiming for (e.g. giving Palantir access to those models is not "unsafe", but an average Joe having unrestricted local access is an immediate 1984-style dystopia).

kouteiheika commented on Claude's new constitution   anthropic.com/news/claude... · Posted by u/meetpateltech
jychang · 17 days ago
Yeah, that was tried. It was called GPT-4.5, and it sucked despite being 5-10T params in size. All the AI labs gave up on pretrain-only after that debacle.

GPT-4.5 is still good at rote memorization, but that's not surprising. In the same way, GPT-3 at 175B knows far more facts than Qwen3 4B, but the latter is smarter in every other way. GPT-4.5 had a few advantages over other SOTA models at the time of release, but it quickly lost them. Claude Opus 4.5 nowadays handily beats it at writing, philosophy, etc., and Claude Opus 4.5 is merely a ~160B active param model.

kouteiheika · 17 days ago
> and Claude Opus 4.5 is merely a ~160B active param model

Do you have a source for this?

u/kouteiheika

Karma: 3107 · Cake day: June 20, 2016