Readit News
beering commented on The Enterprise Experience   churchofturing.github.io/... · Posted by u/Improvement
BrenBarn · 9 days ago
Always worth keeping in mind Remy's Law of Enterprise Software (https://thedailywtf.com/articles/graceful-depredations): if a piece of software is in any way described as being “enterprise”, it’s a piece of garbage.

Joking aside, I was intrigued by the list of good things at the end of the post. Some I could understand, but some seemed to fall into that strange category of things that people say are good but really seem only to lead to more of the things they say are bad. In this list we have:

> There are actual opportunities for career development.

Does "career development" just mean "more money"? If so, why not just say "there are opportunities to make more money"? If not, what is "career development" that is not just becoming more deeply buried in an organization with the various dysfunctions described in the rest of the post?

> It's satisfying to write software used by millions of people.

Is it still satisfying if that software is bad, or harms many of those people?

beering · 9 days ago
> Does "career development" just mean "more money"?

Big companies mean more opportunities to lead bigger projects. At a big company, it’s not uncommon to build in-house what would’ve been an entire startup’s product. And depending on the environment, you may work on several of those projects over the course of a few years. Or if you want to try your hand at leading bigger teams, that’s usually easier to find at a big company.

> Is it still satisfying if that software is bad, or harms many of those people?

There’s nothing inherently good about startups and small companies. The good or bad is case-by-case.

beering commented on OpenAI Progress   progress.openai.com... · Posted by u/vinhnx
ComplexSystems · 10 days ago
Why would they leave out GPT-3 or the original ChatGPT? Bold move doing that.
beering · 10 days ago
I think text-davinci-001 is GPT-3, and the original ChatGPT was GPT-3.5, which was left out.
beering commented on OpenAI Progress   progress.openai.com... · Posted by u/vinhnx
shubhamjain · 10 days ago
Geez! When it comes to answering questions, GPT-5 almost always starts by glazing about what a great question it is, whereas GPT-4 directly addresses the answer without the fluff. In a blind test, I would probably pick GPT-4 as the superior model, so I am not surprised that people feel so let down by GPT-5.
beering · 10 days ago
GPT-4 is very different from the latest GPT-4o in tone. Users are not asking for the direct no-fluff GPT-4. They want the GPT-4o that praises you for being brilliant, then claims it will be “brutally honest” before stating some mundane take.
beering commented on Model intelligence is no longer the constraint for automation   latentintent.substack.com... · Posted by u/drivian
mrlongroots · 11 days ago
I very much disagree. To attempt a proof by contradiction:

Let us assume that the author's premise is correct, and LLMs are plenty powerful given the right context. Can an LLM recognize the context deficit and frame the right questions to ask?

They cannot: LLMs have no ability to understand when to stop and ask for directions. They routinely produce contradictions, fail simple tasks like counting the letters in a word, and so on. They cannot even reliably follow my "ok modify this text in canvas" vs. "leave canvas alone, provide suggestions in chat, apply an edit once approved" instructions.

beering · 11 days ago
It feels crazy to keep arguing about whether LLMs can do this or that without mentioning the specific model. The post author only mentions the IMO gold-medal model, and your post could be about anything. Am I to believe that the two of you are talking about the same thing? This discussion isn’t useful if that’s not the case.
beering commented on ChatGPT 5 is slow and no better than 4    · Posted by u/iwontberude
binarymax · 17 days ago
My primary use case for LLMs is running jobs at scale over an API, not chat. Yes it's very slow, and it is annoying. Getting a response from GPT-5-mini for <Classify these 50 tokens as true or false> takes 5 seconds, compared to GPT-4o which takes about a second.
beering · 17 days ago
The 5-second delay is probably due to reasoning. Maybe try setting the reasoning effort to minimal? If your use case isn’t complex, reasoning may be overkill and gpt-4.1 would suffice.
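A minimal-effort request for that kind of true/false classification could look like the sketch below. It builds the payload as a plain dict rather than making a live client call, so it can be inspected without an API key; the `reasoning_effort` parameter name follows OpenAI's Chat Completions API for reasoning-capable models, and the system prompt is an illustrative assumption.

```python
def build_classification_request(text: str) -> dict:
    """Build a Chat Completions payload for a tiny true/false classification."""
    return {
        "model": "gpt-5-mini",
        # Reasoning-capable models accept a reasoning-effort setting;
        # "minimal" skips long chain-of-thought for easy inputs.
        "reasoning_effort": "minimal",
        "messages": [
            {"role": "system",
             "content": "Classify the user text as true or false. Reply with one word."},
            {"role": "user", "content": text},
        ],
    }

payload = build_classification_request("The sky is green.")
print(payload["reasoning_effort"])  # minimal
```

For trivially simple inputs like this, dropping to a non-reasoning model (e.g. swapping `"model"` to `"gpt-4.1"` and removing `reasoning_effort`) is the other lever worth benchmarking.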
beering commented on GPT-5   openai.com/gpt-5/... · Posted by u/rd
charlie0 · 19 days ago
Not so sure about the behind-the-scenes "automatic router". What's to stop OpenAI from slowly gimping GPT-5 over time, or during times of high demand? It seems ripe for delivering inconsistent results while not changing the price.
beering · 19 days ago
Because people will switch. It’s trivial to go to old conversations in your history, try those prompts again, and see if ChatGPT used to be smarter.
beering commented on GPT-5   openai.com/gpt-5/... · Posted by u/rd
jjani · 19 days ago
Yup! Nice play to get a picture of every API user's legal ID - deprecating all models that aren't locked behind submitting one. And yep, GPT-5 does require this.
beering · 19 days ago
I think you got some different things mixed up. The deprecation is for ChatGPT (but I think Pro users can still use the old models).
beering commented on Ask HN: What do you dislike about ChatGPT and what needs improving?    · Posted by u/zyruh
ComplexSystems · 20 days ago
It makes too many mistakes and is just way too sloppy with math. It shouldn't be this hard to do pair-theorem-proving with it. It cannot tell the difference between a conjecture that sounds kind of vaguely plausible and something that is actually true, and literally the entire point of math is to successfully differentiate between those two situations. It needs to be able to carefully keep track of which claims it's making are currently proven, either in the current conversation or in the literature, vs which are just conjectural and just sound nice. This doesn't seem inherently harder than any other task you folks have all solved, so I would just hire a bunch of math grad students and just go train this thing. It would be much better.
beering · 20 days ago
Curious to know how the different models compare for you for doing math. Heard o4-mini is really good at math but haven’t tried o3-pro much.
beering commented on OpenAI raises $8.3B at $300B valuation   nytimes.com/2025/08/01/bu... · Posted by u/mfiguiere
disgruntledphd2 · a month ago
They're currently "worth" 3.2 Stripes, which seems pretty absurd to me. (I'm now using 1 Stripe as a metric to measure the valuation of AI companies).
beering · 25 days ago
Do you think that is absurd because OpenAI is overvalued? Or because Stripe is overvalued? Or one of them is undervalued?
beering commented on OpenAI claims gold-medal performance at IMO 2025   twitter.com/alexwei_/stat... · Posted by u/Davidzheng
Dilettante_ · a month ago
>Why is that less exciting?

Because if I have to throw 10000 rocks to get one in the bucket, I am not as good/useful of a rock-into-bucket-thrower as someone who gets it in one shot.

People would probably not be as excited about the prospect of employing me to throw rocks for them.

beering · a month ago
It’s exciting because nearly all humans have a 0% chance of throwing the rock into the bucket, and most people believed a rock-into-bucket-thrower machine was impossible. So even an inefficient rock-into-bucket-thrower is impressive.

But the bar has been getting raised very rapidly. What was impressive six months ago is awful and unexciting today.

u/beering

Karma: 2756 · Cake day: May 18, 2012