> The way work gets done has changed, and enterprises are starting to feel it in big ways.
Why do they say all of this fluff when everyone knows it's not exactly true yet? It just makes me cynical about the rest.
When can we say we have enough AI, even for enterprise? I would guess that for the majority of power users you could stop now and people would be generally okay with it; maybe push a bit further for medical research or other things that are actually important.
For Sam Altman and microslop, though, it seems to be a numbers game: just get everyone in and own everything. It doesn't even feel like it's about AGI anymore.
For classic engineering it's been a boon. This is in a pretty similar vein to the gains mathematicians have been making with AI.
These models can pretty reliably bang out what used to be long mathematical derivations for hypothetical systems in incredibly short periods of time. They also make second- and third-order approximations way easier: what was a first-order approach that would take a day is now a second-order approach that takes an hour.
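To make the first- vs. second-order point concrete, here is a toy, purely illustrative sketch (a generic Taylor expansion, not any specific system from the comment): adding the next-order term is a small change but cuts the error by an order of magnitude.

```python
# Toy illustration: first- vs. second-order Taylor approximations of e^x near 0.
import math

def first_order(x):
    return 1 + x              # e^x ≈ 1 + x

def second_order(x):
    return 1 + x + x**2 / 2   # e^x ≈ 1 + x + x^2/2

x = 0.3
exact = math.exp(x)
print(f"exact        = {exact:.6f}")
print(f"first order  = {first_order(x):.6f}  (error {abs(exact - first_order(x)):.6f})")
print(f"second order = {second_order(x):.6f}  (error {abs(exact - second_order(x)):.6f})")
```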
And to top it off, they're also pretty damn competent at pointing you in the right direction (if nothing else) when you need information about adjacent areas.
I've been doing an electro-optical project recently as an electronics guy, and LLMs have been infinitely useful in helping with the optics portion (on top of speeding up the electronics math).
It's still "trust, but verify" for sure, but damn, it's powerful.
"And for this cause God shall send them strong delusion, that they should believe a lie:
That they all might be damned who believed not the truth, but had pleasure in unrighteousness."
For a more modern take, paraphrasing Hannah Arendt:
“The ideal subject of totalitarian rule is not the convinced Nazi or the convinced Communist, but people for whom the distinction between fact and fiction, true and false, no longer exists.”
We live in an age where, for many, media is reality: uncritical, unchecked. Press releases are about creating reality, not reporting it; they are about psychological manipulation, not information.
> As we've seen at a national scale: if you just lie lie lie enough it starts being treated like the truth.
This actually happened in reverse with the spread of social media dynamics to politics and major media. Twitter made Trump president, not the other way around.
* No LLMs were harmed in the making of this comment.
I disagree with your sentiment and genuinely think something big is coming. It doesn't need to be perfect now, but it could be good enough to disrupt the SaaS market.
> say all of this fluff when everyone knows it’s not exactly true yet
How do you know it's not exactly true? I am already seeing that employees in enterprises are heavily reliant on LLMs instead of other SaaS vendors.
* Want to draft an email and fix your grammar -> LLMs -> Grammarly is dying
* Want to design something -> Lovable -> No need to wait for a designer or get access to Figma; let designers design and present, and for everything else use Lovable or an alternative
* Want to code -> obviously LLMs -> I sometimes feel like JetBrains is probably in code red at the moment, because I am barely opening it (saying this as a former heavy user)
To keep this message shorter, I will share my vision in the reply.
Let's imagine AI is not there yet and won't get to 100% accuracy, but you still need accountability; you can't run everything on autopilot and hope you will make $10B ARR.
How do you overcome this limitation?
By making a human accountable. Imagine you come to work in the morning and your only task is "Approve / Request improvement / Reject"; you just press three buttons all day long (a rough sketch of this loop follows the examples below):
* Customer is requesting pricing for X. Based on the requirements, I found that CustomerA had similar requirements and we offered them $100/piece last month. What should I do? Approve / Reject / "Ask for $110"
* Customer (or their agent) is not happy with your $110 proposal. Using historical data, and based on X, Y, Z, the minimum we can offer is $104 to keep ARR growing 15% year-over-year. What should I do? Approve / Reject / Your input
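A minimal sketch of what that approve/reject loop could look like; the names and structure here are hypothetical, just to illustrate the human-as-accountability-gate idea, not anything OpenAI has announced:

```python
# Hypothetical human-in-the-loop gate: an agent proposes an action,
# and a human approves, rejects, or sends feedback before anything executes.
from dataclasses import dataclass

@dataclass
class Proposal:
    summary: str            # e.g. "CustomerA got $100/piece last month for similar requirements"
    suggested_action: str   # e.g. "Send a $100/piece quote"

def execute(action: str) -> None:
    print(f"Executing: {action}")

def revise(proposal: Proposal, feedback: str) -> None:
    print(f"Returning to the agent with feedback: {feedback}")

def review(proposal: Proposal) -> None:
    print(proposal.summary)
    decision = input("Approve / Reject / Your input: ").strip()
    if decision.lower() == "approve":
        execute(proposal.suggested_action)   # runs only with explicit human sign-off
    elif decision.lower() == "reject":
        print("Proposal dropped.")
    else:
        revise(proposal, feedback=decision)  # e.g. "Ask for $110" goes back to the agent

review(Proposal(
    summary="Customer is requesting pricing for X; a similar deal closed at $100/piece.",
    suggested_action="Send a $100/piece quote",
))
```

The point of the sketch is that accountability stays with the person pressing the button, while the agent only ever proposes.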
It sure reads like it... These days, unfortunately, so many things do; there is a real "impersonality" (if that's the right word) to this whole new style of communication.
> Why do they say all of this fluff when everyone knows it’s not exactly true yet.
There isn’t an incentive not to lie when people will read a lie, understand it to be a lie, and then characterize the lie as “not true yet”. Like if your audience has already invented a term to excuse your lies before you start talking, you categorically do not need to tell the truth.
If people judged OpenAI/Sam Altman's statements under the premise that they are either true or untrue, and that there's no third thing, I imagine we wouldn't hear as much about OpenAI.
To be frank, I don't think your worldview is directionally accurate. OpenAI is certainly trying to sell something, but with every incremental update to these models, more avenues of value generation are unlocked. For sure it's not what all the talking heads in the industry hyped it up to be, but there are a lot of interesting ways to use these tools, and not just for generating slop.
I am already tired of the disaster that is social media. Hilariously, we’ve gotten to the point that multiple countries are banning social media for under 18s.
The costs of AI slop are going to be paid by everyone, social media will ironically become far less useful, and the degree of fraud we will see will be… well, cyber fraud is already terrifying; what's the value of infinity added to infinity?
I would say that tech firms are definitely running around setting society on fire at this point.
God, they built all of this on absurd amounts of piracy, and while I am happy to dance on the grave of the MPAA and RIAA, the farming of content from people who have no desire to be harvested is indefensible. I believe Wikipedia has already started seeing a drop in traffic, which will lead to a reduction in donations. Smaller sites are going to have an even worse time.
> At a major semiconductor manufacturer, agents reduced chip optimization work from six weeks to one day.
I call BS right there. If you can actually do that, you'd spin up a "chip optimization" consultancy and pocket the massive efficiency gain, not sell model access at a couple of bucks per million tokens.
There should be a massive “caveats and terms apply” on that quote.
So far the AI productivity gains have been all bark and no bite. I'll believe it when I see faster product development, higher quality, or lower prices (which did happen with other technological breakthroughs, whether the printing press or the loom). If anything, software quality is going down, suggesting we aren't there yet.
I'm willing to bet "chip optimization work" doesn't mean "the work required to optimize a chip" but "some work tasks performed as part of chip optimization". Basically, they sped up some unknown subset of the work from six weeks to one day, which could be big or could be negligible.
> ChatGPT, we have been optimizing production work for six weeks. <uploads some random documents that the management team has put on SharePoint, most of them generated by LLMs>. Finalize this optimization work.
> <ChatGPT spits out another document and claims that production work is now optimal>
I have a hard time believing that the right move for most organizations that aren't already bought into an OpenAI enterprise plan is to build their entire business around something like this. It ties you to one model provider that has been having trouble keeping up with the other big labs, and it provides what superficially look like extremely useful tools, but with an unclear amount of rigor. I don't think I would want to build my business on this if I were an AI-native company just starting out right now, unless they figure out how to make it much more legible and transparent to people.
This is a crowded solution space with participation from cloud, SaaS and data infrastructure vendors. All of these players and their customers have been trying to operationalize LLMs in enterprise workflows for 2+ years. Two big challenges are business ontology and fitting probabilistic tools into processes that require deterministic outcomes. Overcoming these problems requires significant systems integration and process engineering work. What does OpenAI have that makes them specifically capable of solving these problems over Azure, Databricks, Snowflake, etc., who have all been working on them for quite a while? I don't know if the press release really addresses any of this, which makes it seem more like marketing copy than anything else.
The question of lock-in is also a major one. Why tether your workflow automation platform to your LLM vendor when that may just be one component of the platform, especially when the pace of change in LLMs specifically is so rapid in almost every conceivable way? I think you'd far rather have an LLM-vendor-neutral control plane and disaggregate the lock-in risk somewhat.
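For what it's worth, the vendor-neutral control plane the parent describes is easy to sketch; the class and method names below are purely illustrative, not any real SDK:

```python
# Sketch of a vendor-neutral control plane: workflow code depends on a small
# interface, and each model provider is just a pluggable backend behind it.
from typing import Protocol

class LLMBackend(Protocol):
    def complete(self, prompt: str) -> str: ...

class OpenAIBackend:
    def complete(self, prompt: str) -> str:
        return f"[openai] response to: {prompt}"      # a real call would go through the vendor SDK

class AnthropicBackend:
    def complete(self, prompt: str) -> str:
        return f"[anthropic] response to: {prompt}"

def run_workflow_step(backend: LLMBackend, prompt: str) -> str:
    # The workflow layer never imports a vendor SDK directly,
    # so swapping providers is a change at the call site, not a rewrite.
    return backend.complete(prompt)

print(run_workflow_step(OpenAIBackend(), "Summarize this support ticket"))
```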
As someone who would be in a position to advise enterprises on whether to adopt Frontier, there is simply not enough information for me to follow the "Contact Sales" CTA.
We need technical details, example workflows, case studies, social proof and documentation. Especially when it's so trivial to roll your own agent.
I'm imagining this is like an AI-native Slack, which would be a super useful thing. But I'm with you, who knows? I had a CEO sign up; I'm curious to see one of my companies try it out.
> "75% of enterprise workers say AI helped them do tasks they couldn’t do before."
> "At OpenAI alone, something new ships roughly every three days, and that pace is getting faster."
- We're seeing all these productivity improvements, and it seems as though devs/"workers" are being pushed to output much more. Are they now being paid proportionally for this output? Enterprise workers now have to move at the pace of their agents and essentially manage 3-4 workers at all times (we've seen this in dev work). Where are the salary bumps to reflect this?
- Why do AI companies struggle to make their products visually distinct? OpenAI Frontier looks exactly the same as the OpenAI Codex app, which looks exactly the same as GPT.
- OpenAI is going for the agent management market share (Dust, n8n, CrewAI).
> Why do AI companies struggle to make their products visually distinct? OpenAI Frontier looks exactly the same as the OpenAI Codex app, which looks exactly the same as GPT.
Because that requires human thought, and it might take a couple of weeks more to design and develop. "Do something fast" is the mantra, not "do something good".
> "At OpenAI alone, something new ships roughly every three days, and that pace is getting faster."
This is a weird flex. Organizations have long strived to ship multiple times per day; it's even one of the main business metrics for "high"-performing orgs in DORA.
The fact that the premier "AI" company is barely able to deliver at a rate considered "high" rather than "medium" (the line is at shipping once per week) tells me that even at OpenAI, writing the code is not the bottleneck.
Organizational inefficiency is, as usual, the real culprit.
I imagine the salary bumps occur when the individuals who have developed these productivity boosting skills apply for jobs at other companies, and either get those jobs or use the offer to negotiate a pay increase with their current employer.
Over the past few months, mentions of AI in job listings have gone from "Comfortable using AI-assisted programming - Cursor, Windsurf" to "Proficient in agentic development", and "Claude Code" even shows up in desired-skills sections. Yet the salary range has remained exactly the same.
Companies are literally expecting junior/mid-level devs to have management skills (for those even hiring juniors). They expect you to come in and perform at the level of a lead architect: not just understand the codebase but also the data and the integrations, build pipelines to ingest the entire company's documentation into your agentic platform of choice, and then begin delegating to your subordinates (agents). Does this responsibility shift not warrant an immediate compensation shift?
Ahh, but it's not 2022 anymore; even senior devs are struggling to change companies. The only companies that are hiring are knee-deep in AI wrappers and have no possibility of becoming sustainable.
I doubt there is any real risk here. For one, this is probably a minor aspect of the business for the companies adopting these AIs. The AI deployments I've seen so far for real work fit best in side business offerings where you can tolerate a high false-positive / false-negative rate, but which also aren't price-sensitive enough that building a fully automated pipeline the classical way is worth it, or even possible.
There is a reason Apple chose Gemini for Apple Intelligence, despite Google being in many ways a foe, and despite OpenAI and Anthropic both having way more "Apple flavor" to them.
"Year of X" is so cringe. They said it was all about Agents last year... yawn. Wake me up when they have something to show that makes people go "wow this is amazing" and has real economic consequences.
I genuinely feel AI makes people worse at coming up with approaches in software dev.
I think two things:
1. Not everyone knows.
2. As we've seen at a national scale: if you just lie lie lie enough it starts being treated like the truth.
"And for this cause God shall send them strong delusion, that they should believe a lie: That they all might be damned who believed not the truth, but had pleasure in unrighteousness."
For a more modern take, paraphrasing Hannah Arendt.
“The ideal subject of totalitarian rule is not the convinced Nazi or the convinced Communist, but people for whom the distinction between fact and fiction, true and false, no longer exists.”
We live in a an age where for many media is reality, uncritical, unchecked. Press releases are about creating reality, not reporting it, they are about psychological manipulation, not information.
> As we've seen at a national scale: if you just lie lie lie enough it starts being treated like the truth.
This actually happened in reverse with the spread of social media dynamics to politics and major media. Twitter made Trump president, not the other way around.
* No LLMs were harmed in the making of this comment.
Because it's easy to paraphrase in a myriad of ways without having any real information.
A renowned scientist coined the term "bullshitting" for it, I think.
Why stop, though? Google didn't say "AltaVista and Yahoo are good enough for the majority of power users, let's not create something better."
When you have something good in hand and you see other possibilities, would you say "let's stop, this is enough"?
I’m good for now.
They're desperate?
> "At OpenAI alone, something new ships roughly every three days, and that pace is getting faster."
- We're seeing all these productivity improvements and it seems as though devs/"workers" are being forced to output so much more, are they now being paid proportionally for this output? Enterprise workers now have to move at the pace of their agents and manage essentially 3-4 workers at all times (we've seen this in dev work). Where are the salary bumps to reflect this?
- Why do AI companies struggle to make their products visually distinct OpenAI Frontier looks the exact same as OpenAI Codex App which looks the exact same as GPT
- OpenAI going for the agent management market share (Dust, n8n, crewai)
Because that requires human thought and it might take couple weeks more to design and develop. Do something fast is the mantra, not doing something good.
Increased efficiency benefits capital, not labor; it's always good to remember which side you'd prefer to be on.
Revenue bumps and ROI bumps both gotta come first. Iirc, there's a struggle with the first one.
"Let me increase all my employees' salaries 2x, because productivity is 4x'ed now," said no capitalist ever.
OpenAI might burn through all their money, and end up dropping support for these features and/or being sold off for parts altogether.