fudged71 · 9 months ago
OpenAI is racing against two clocks: the commoditization clock (how quickly open-source alternatives catch up) and the monetization clock (their need to generate substantial revenue to justify their valuation).

The ultimate success of this strategy depends on what we might call the enterprise AI adoption curve - whether large organizations will prioritize the kind of integrated, reliable, and "safe" AI solutions OpenAI is positioning itself to provide over cheaper but potentially less polished alternatives.

This is strikingly similar to IBM's historical bet on enterprise computing - sacrificing the low-end market to focus on high-value enterprise customers who would pay premium prices for reliability and integration. The key question is whether AI will follow a similar maturation pattern or if the open-source nature of the technology will force a different evolutionary path.

danpalmer · 9 months ago
The problem is that OpenAI don't really have the enterprise market at all. Their APIs are closer in that many companies are using them to power features in other software, primarily Microsoft, but they're not the ones providing end user value to enterprises with APIs.

As for ChatGPT, it's a consumer tool, not an enterprise tool. It's not really integrated into an enterprise's existing toolset, it's not integrated into their authentication, it's not integrated into their internal permissions model, and the IT department can't enforce any policies on how it's used. In almost all ways it doesn't look like enterprise IT.

informal007 · 9 months ago
This reminds me of why enterprises don't integrate OpenAI products into their existing toolsets: trust is the root reason.

It's hard to trust that OpenAI won't use enterprise data to train its next model, in a market where content is the most valuable asset, compared to office suites, cloud databases, etc.

apugoneappu · 9 months ago
ChatGPT does have an enterprise version.

I've seen the enterprise version with a top-5 consulting company, and it answers from their global knowledgebase, cites references, and doesn't train on their data.

lolive · 9 months ago
Won’t name my company, but we rely on Palantir Foundry for our data lake. And the only thing everybody wants [including Palantir itself] is to deploy at scale AI capabilities tied properly to the rest of the toolset/datasets.

The issues at the moment are a mix of IP on the data, insurance on the security of private clouds infrastructures, deals between Amazon and Microsoft/OpenAI for the proper integration of ChatGPT on AWS, all these kind of things.

But discarding the enterprise needs is in my opinion a [very] wrong assumption.

CreRecombinase · 9 months ago
This is what's so brilliant about the Microsoft "partnership". OpenAI gets the Microsoft enterprise legitimacy, meanwhile Microsoft can build interfaces on top of ChatGPT that they can swap out later for whatever they want when it suits them
Rastonbury · 9 months ago
OAI probably does through their API, but I agree that ChatGPT is not really an enterprise product. For the company, the API is the platform play: their enterprise customers are going to be the likes of MSFT, Salesforce, Zendesk, or say Apple powering Siri. These are the ones doing the heavy lifting of selling and building LLM products that provide value to their enterprise customers. A bit like Stripe/AWS. Whether OAI can form a durable platform (vs their competitors or in-house LLMs) is the question here, or whether they can offer models at a cost that justifies the upsell of the AI features their customers offer.
benterix · 9 months ago
That's why Microsoft included OpenAI access in Azure. However, their current offering is quite immature, so companies are using several pieces of infra to make it usable (for rate limiting, better authentication, etc.).
dartharva · 9 months ago
> As for ChatGPT, it's a consumer tool, not an enterprise tool. It's not really integrated into an enterprise's existing toolset, it's not integrated into their authentication, it's not integrated into their internal permissions model, and the IT department can't enforce any policies on how it's used. In almost all ways it doesn't look like enterprise IT.

What according to you is the bare minimum of what it will take for it to be an enterprise tool?

magic_hamster · 9 months ago
OpenAI's enterprise access is probably mostly happening through Azure. Azure has AI Services with access to OpenAI.
outside415 · 9 months ago
I've used it at two different enterprises internally; the issue is price more than anything. Enterprises definitely do want to self-host, but for frontier tech they want frontier models, for solving complicated unsolved problems or building efficiencies into complicated workflows. One company had to rip it out for a time due to price. I no longer work there, so I can't speak to whether it was reintegrated.
cess11 · 9 months ago
Decision making in enterprise procurement is more about whether it makes the corporation money and whether there is immediate and effective support when it stops making money.
osigurdson · 9 months ago
>> internal permissions model

This isn't that big of a deal any more. A company just needs to add the application to Azure AD (now called Entra for some reason).

gorgoiler · 9 months ago
Is their valuation self-fulfilling: the more people pipe their queries to OpenAI, the more training data they have to get better?
devjab · 9 months ago
I wonder if OpenAI can break into enterprise. I don’t see much of a path for them, at least here in the EU, even if they do manage to build some sort of trust as far as data safety goes, and I’m not sure they’ll have much more luck with that than Facebook had trying to sell that corporate thing they did (still do?). But even if they did, they would still face the very real issue of having to compete with Microsoft.

I view that competition a bit like Teams vs anything else. Teams wasn’t better, but it was good enough and it’s “sort of free”. It’s the same with the Azure AI tools: they aren’t free, but since you don’t exactly pay list pricing in enterprise they can be fairly cheap. Copilot is obviously horrible compared to ChatGPT, but a lot of the Azure AI tooling works perfectly well, and much of it integrates seamlessly with what you already have running in Azure. We recently “lost” our OCR for a document flow, and since it wasn’t recoverable we needed to do something fast. Well, Azure Document Intelligence was so easy to hook up to the flow it was ridiculous.

I don’t want to sound like a Microsoft commercial. I think they are a good IT business partner, but the products are also sort of a trap where all those tiny things create the perfect vendor lock-in. Which is bad, but it’s also where European enterprise is at, since the “monopoly” Microsoft has on its suite of products makes it very hard to not use them. Teams again being the perfect example, since it “won” by basically being a 0 in the budget even though it isn’t actually free.

CobrastanJorji · 9 months ago
Man, if they can solve that "trust" problem, OpenAI could really have an big advantage. Imagine if they were nonprofit, open source, documented all of the data that their training was being done with, or published all of their boardroom documents. That'd be a real distinguishing advantage. Somebody should start an organization like that.
btown · 9 months ago
Sometimes I wish Apple did more for business use cases. The same https://security.apple.com/blog/private-cloud-compute/ tech that will provide auditable isolation for consumer user sessions would be incredibly welcome in a world where every other company has proven a desire to monetize your data.
navane · 9 months ago
Teams winning on price instead of quality is very telling of the state of business: your #1/#2 communication tool being regarded as a cost to be saved on.
raverbashing · 9 months ago
> I don’t see much of a path for them, at least here in the EU. Even if they do manage to build some sort of trust as far as data safety goes

They are already selling (API) plans, well, them and MS Azure, with higher trust guarantees. And companies are using it

Yes if they deploy a datacenter in the EU or close it will be a no-brainer (kinda pun intended)

wkat4242 · 9 months ago
> I wonder if OpenAI can break into enterprise. I don’t see much of a path for them, at least here in the EU.

Uhh they're already here. Under the name CoPilot which is really just ChatGPT under the hood.

Microsoft launders the missing trust in OpenAI :)

But why do you think copilot is worse? It's really just the same engine (gpt-4o right now) with some RAG grounding based on your SharePoint documents. Speaking about copilot for M365 here.

I don't think it's a great service yet, it's still very early and flawed. But so is ChatGPT.

vessenes · 9 months ago
Agreed on the strategy questions. It's interesting to tie back to IBM; my first reaction was that openai has more consumer connectivity than IBM did in the desktop era, but I'm not sure that's true. I guess what is true is that IBM passed over the "IBM Compatible" -> "MS DOS Compatible" business quite quickly in the mid 80s; seemingly overnight we had the death of all minicomputer companies and the rise of PC desktop companies.

I agree that if you're sure you have a commodity product, then you should make sure you're in the driver seat with those that will pay more, and also try and grind less effective players out. (As a strategy assessment, not a moral one).

You could think of Apple under JLG and then being handed back to Jobs as precisely being two perspectives on the answer to "does Apple have a commodity product?" Gassée thought it did, and we had the era of Apple OEMs, system integrators, other boxes running Apple software, and Jobs thought it did not; essentially his first act was to kill those deals.

fudged71 · 9 months ago
The new pricing tier suggests they're taking the Jobs approach - betting that their technology integration and reliability will justify premium positioning. But they face more intense commoditization pressure than either IBM or Apple did, given the rapid advancement of open-source models.

The critical question is timing - if they wait too long to establish their enterprise position, they risk being overtaken by commoditization as IBM was. Move too aggressively, and they might prematurely abandon advantages in the broader market, as Apple nearly did under Gassée.

Threading the needle. I don't envy their position here. Especially with Musk in the Trump administration.

downrightmike · 9 months ago
IBM was legally compelled to spin that off
Balgair · 9 months ago
One of the main issues that enterprise AI has is the data in large corporations. It's typically a nightmare of fiefdoms and filesystems. I'm sure that a lot of companies would love to use AI more, both internally and commercially. But first they'd have to wrangle their own systems so that OpenAI can ingest the data at all.

Unfortunately, those are 5+ year projects for a lot of F500 companies. And they'll have to burn a lot of political capital to get the internal systems under control. Meaning that the CXO who does get the SQL server up and running, and who has the clout to do something about non-compliance, is going to be hated internally. And then if it's ever finished? That whole team is gonna be let go too. And it'll all just rot, if not implode.

The AI boom for corporations is really going to let people know who is swimming naked when it comes to internal data orderliness.

Like, you want to be the person that sells shovels in the AI boom here for enterprise? Be the 'Cleaning Lady' for company data and non-compliance. Go in, kick butts, clean it all up, be hated, leave with a fat check.

CabSauce · 9 months ago
You just hit the chatGPT api for every row of data. Obviously. (Only 70% joking.)
m3kw9 · 9 months ago
ChatGPT's analogy is more like Google. Once people use Google enough, they ain’t gonna switch unless it's a quantum leap better, plus at scale. On the API side things could get commoditized, but it’s about more than just having a slightly better LLM in the benchmarks.
freediver · 9 months ago
I would say this differently.

There exists no future where OpenAI both sells models through API and has its own consumer product. They will have to pick one of these things to bet the company on.

a1j9o94 · 9 months ago
That's not necessarily true. There are many companies that have both end-user products and B2B products they sell. There are a million specific use cases that OpenAI won't build specific products for.

Think Amazon that has both AWS and the retail business. There's a lot of value in providing both.

trod1234 · 9 months ago
There is no real future in AI long term.

Its use caustically destroys more than it creates. It is a worthy successor to Pandora's box.

rmbyrro · 9 months ago
In the early days of ChatGPT, I'd get constantly capped, every single day, even on the paid plan. At the time I was sending them messages, begging to charge me $200 to let me use it unlimited.

Finally!..

skeeter2020 · 9 months ago
The enterprise surface area that OpenAI seems to be targeting is very small. The cost curve looks similar to classic cloud providers, but gets very steep much faster. We started on their API and then moved out of the OpenAI ecosystem within ~2 years as costs grew fast, and we see equivalent or better performance with much cheaper and/or open-source models, combined with pretty modest hardware. Unless they can pull a bunch of Netflix-style deals, the economics here will not work out.
FooBarWidget · 9 months ago
The "open source nature" this time is different. "Open source" models are not actually open source, in the sense that the community can't contribute to their development. At best they're just proprietary freeware. Thus, the continuity of "open source" models depends purely on how long their sponsors sustain funding. If Meta or Alibaba or Tencent decide tomorrow that they're no longer going to fund this stuff, then we're in real trouble, much more than when Red Hat drops the ball.

I'd say Meta is the most important player here. Pretty much all the "open source" models are built on Llama in one way or another. The only reason Llama exists is because Meta wants to commoditize AI in order to prevent the likes of OpenAI from overtaking them later. If Meta one day no longer believes in this strategy, for whatever reason, then everybody is in serious trouble.

dragonwriter · 9 months ago
> OpenAI is racing against two clocks: the commoditization clock (how quickly open-source alternatives catch up) and the monetization clock (their need to generate substantial revenue to justify their valuation).

Also important to recognize that those clocks aren’t entirely separated. Monetization timeline is shorter if investors perceive that commodification makes future monetization less certain, whereas if investors perceive a strong moat against commodification, new financing without profitable monetization is practical as long as the market perceives a strong enough moat that investment in growth now means a sufficient increase in monetization down the road.

jijji · 9 months ago
Whatever happened to IBM Watson? IBM wishes it had taken off like ChatGPT.
datameta · 9 months ago
Has anyone heard of or seen it used anywhere? I was in-house when it launched to big fanfare by upper management, and the vast majority of the company was tasked to create team projects utilizing Watson.
dtagames · 9 months ago
Watson was a pre-LLM technology, an evolution of IBM's experience with the expert systems which they believed would rule the roost in AI -- until transformers blew all that away.
epigramx · 9 months ago
And the logic catch-up clock: how fast people catch on that we won't have Skynet within 2 years, just a glorified Google search for the next 20.
ezst · 9 months ago
Am I the only one who's getting annoyed at seeing LLMs marketed as competent search engines? That's not what they were designed for, and they have been repeatedly bad at it.
EagnaIonat · 9 months ago
> the commoditization clock (how quickly open-source alternatives catch up)

I believe we are already there at least for the average person.

Using Ollama I can run different LLMs locally that are good enough for what I want to do. That's on a 32GB M1 laptop. No more having to pay someone to get results.
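That local workflow can be scripted too: Ollama exposes a plain HTTP API on localhost. A minimal sketch, assuming Ollama is running with the model already pulled (the model name is a placeholder):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model: str, prompt: str) -> dict:
    # stream=False asks Ollama for a single JSON response instead of a token stream
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local(model: str, prompt: str) -> str:
    data = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(OLLAMA_URL, data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Calling `ask_local("llama3", "...")` then costs nothing per query beyond local compute.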

For development Pycharm Pro latest LLM autocomplete is just short of writing everything for me.

I agree with you in relation to the enterprise.

gmerc · 9 months ago
Claude has much better enterprise momentum and sits inside AWS with its support, while OpenAI is fighting its own supplier / Big Tech investor.

TZubiri · 9 months ago
"whether large organizations will prioritize the kind of integrated, reliable, and "safe" AI solutions"

While "safe" in terms of output quality control, SaaS is not safe in terms of data control. Meta's Llama is the winner in any scenario where it would be ridiculous to send user data to a third party.

bambax · 9 months ago
Yes, but how can this strategy work, and who would choose ChatGPT at this point, when there are so many alternatives, some better (Anthropic), some just as good but way cheaper (Amazon Nova) and some excellent and open-source?
ec109685 · 9 months ago
Microsoft is their path into the enterprise. You can use their so-so enterprise support directly or have all the enterprise features you could want via Azure.

They also are still leading in the enterprise space: https://www.linkedin.com/posts/maggax_market-share-of-openai...

anticensor · 9 months ago
They have a third clock: schools and employers that try to forbid its use.
interludead · 9 months ago
AI's utility isn't fully locked into large enterprises
hackernewds · 9 months ago
There really aren't a lot of open-source large language models with that capability. The only game changer so far has been Meta open-sourcing Llama, and that's about it for models of that caliber.

submeta · 9 months ago
I actually pay 166 euros a month for Claude Teams. Five seats, and I only use one, for myself. Why do I pay so much? Because the normal paid version (20 USD a month) interrupts the chats after a dozen questions and wants me to wait a few hours until I can use it again. But the Teams plan gives me way more questions.

But why do I pay that much? Because Claude in combination with the Projects feature, where I can upload two dozen or more files, PDFs, text, give it a context, and then ask questions in that specific context over a period of a week or longer, coming back to continue the inquiry, gives me superpowers. It feels like having a handful of researchers at my fingertips that I can brainstorm with, that I can ask to review the documents and come up with answers to my questions. All of this is unbelievably powerful.

I'd be OK with 40 or 50 USD a month for one user, but alas, Claude won't offer it. So I pay 166 euros for five seats and use one, because it saves me a ton of work.

mtlynch · 9 months ago
Kagi Ultimate (US$25/mo) includes unlimited use of all the Anthropic models.

Full disclosure: I participated in Kagi's crowdfund, so I have some financial stake in the company, but I mainly participated because I'm an enthusiastic customer.

__MatrixMan__ · 9 months ago
I'm uninformed about this, and it may just be superstition, but my feeling while using Kagi in this way is that after a few hours of use it gets a bit more forgetful. I come back the next day and it's smart again, for a while. It's as if there's some kind of soft throttling going on in the background.

I'm an enthusiastic customer nonetheless, but it is curious.

gandalfgreybeer · 9 months ago
> Kagi Ultimate (US$25/mo) includes unlimited use of all the Anthropic models.

What am I losing here if I switch over to this from my current Claude subscription?

baobabKoodaa · 9 months ago
Thanks for the tip! Now I'm a Kagi user too.
antihero · 9 months ago
How would you rate Kagi Ultimate vs Arc Search? I.e., is it scraping relevant websites live and summarising them? Or is it just access to ChatGPT and other models (with their old data)?

At some point I'm going to subscribe to Kagi again (once I have a job), so I'd be interested to see how it rates.

rumblefrog · 9 months ago
I presume no access to Anthropic project?
ryandvm · 9 months ago
I bet you never get tired of being told LLMs are just statistical computational curiosities.
ganzuul · 9 months ago
There are people like that. We don't know what's up with them.
e1g · 9 months ago
Unironically, thank you for sharing this strategy. I get throttled a lot, and I'm happy to pay to remove those frustrating limits.
arcastroe · 9 months ago
Sounds like you two could split the cost of the family plan-- ahem the team plan.
ipsum2 · 9 months ago
Pay as you go using the anthropic API and an open source UI frontend like librechat would be a lot cheaper I suspect.
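A back-of-envelope calculation supports that suspicion. A sketch, where the per-million-token rates are illustrative placeholders rather than quoted prices:

```python
def monthly_api_cost(chats_per_day: int, in_tokens: int, out_tokens: int,
                     usd_per_m_in: float = 3.0, usd_per_m_out: float = 15.0,
                     days: int = 30) -> float:
    """Estimate pay-as-you-go spend; rates are placeholder $/1M tokens."""
    total_in = chats_per_day * in_tokens * days
    total_out = chats_per_day * out_tokens * days
    return total_in / 1e6 * usd_per_m_in + total_out / 1e6 * usd_per_m_out

# 50 chats/day at 2k input + 1k output tokens each:
# 3.0M input and 1.5M output tokens per month
cost = monthly_api_cost(50, 2000, 1000)  # $31.50, versus 166 EUR for five Teams seats
```

Of course, Projects-style usage that reloads a large context with every message inflates the input-token term quickly.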
handfuloflight · 9 months ago
Depends on how much context he loads up into the chat. The web version is quite generous when compared to the API, from my estimations.
esafak · 9 months ago
You.com (search engine and LLM aggregator) has a team plan for $25/month.

https://you.com/plans

carbine · 9 months ago
I have ChatGPT ($20/month tier) and Claude and I absolutely see this use case. Claude is great but I love long threads where I can have it help me with a series of related problems over the course of a day. I'm rarely doing a one-shot. Hitting the limits is super frustrating.

So I understand the unlimited use case and honestly am considering shelling out for the o1 unlimited tier, if o1 is useful enough.

A theoretical app subscription for $200/month feels expensive. Having the equivalent of a smart employee working beside me all day for $200/month feels like a deal.

CosmicShadow · 9 months ago
Yep, I have 2 accounts I use because I kept hitting limits. I was going to do the Teams to get the 5x window, but I got instantly banned when clicking the teams button on a new account, so I ended up sticking with 2 separate accounts. It's a bit of a pain, but I'm used to it. My other account has since been unbanned, but I haven't needed it lately as I finished most of my coding.
archon810 · 9 months ago
Have you tried NotebookLM for something like this?
bn-l · 9 months ago
Isn’t that Google’s garbage models only?
jakubtomanik · 9 months ago
Have you tried LibreChat https://www.librechat.ai/ and just use it with your own API keys? You pay for what you use and can use and switch between all major model providers
archerx · 9 months ago
Why not use the API? You can ask as many questions as you can pay for.
jjfoooo4 · 9 months ago
I haven’t implemented this yet, but I’m planning on doing a fallback to other Claude models when hitting API limits, IIUC they rate limit per model
fragmede · 9 months ago
Do you not have any friends to share that with? Or share a family cell phone plan or Netflix with?
strunz · 9 months ago
They're probably an adult, so I would guess not.
umeshunni · 9 months ago
Out of curiosity, why don't you use NotebookLM for the same functionality?
klntsky · 9 months ago
Are the limits applied to the org or to each individual user?
submeta · 9 months ago
Individual users
dbbk · 9 months ago
And how often is it wrong?
js212 · 9 months ago
Try typingmind.com with the API
interludead · 9 months ago
A great middle ground
pentagrama · 9 months ago
The argument of more compute power for this plan can be true, but this is also a pricing tactic known as the decoy effect or anchoring. Here's how it works:

1. A company introduces a high-priced option (the "decoy"), often not intended to be the best value for most customers.

2. This premium option makes the other plans seem like better deals in comparison, nudging customers toward the one the company actually wants to sell.

In ChatGPT's case that's:

Option A: Basic Plan - Free

Option B: Plus Plan - $20/month

Option C: Pro Plan - $200/month

Even if the company has no intention of selling the Pro Plan, its presence makes the Plus Plan seem more reasonably priced and valuable.

While not inherently unethical, the decoy effect can be seen as manipulative if it exploits customers’ biases or lacks transparency about the true value of each plan.

TeMPOraL · 9 months ago
Of course this breaks down once you have a competitor like Anthropic, serving similarly-priced Plan A and B for their equivalently powerful models; adding a more expensive decoy plan C doesn't help OpenAI when their plan B pricing is primarily compared against Anthropic's plan B.
thomassmith65 · 9 months ago
Leadership at this crop of tech companies is more like followership. Whether it's 'no politics', or sudden layoffs, or 'founder mode', or 'work from home'... one CEO has an idea and three dozen other CEOs unthinkingly adopt it.

Several comments in this thread have used Anthropic's lower pricing as a criticism, but it's probably moot: a month from now Anthropic will release its own $200 model.

wrsh07 · 9 months ago
As Nvidia's CEO likes to say, the price is set by the second best.

From an API standpoint, it seems like enterprises are currently split between anthropic and ChatGPT and most are willing to use substitutes. For the consumer, ChatGPT is the clear favorite (better branding, better iPhone app)

willy_k · 9 months ago
It might not affect whether people decide to use ChatGPT over Claude, but it could get more people to upgrade from their free plan.
gist · 9 months ago
An example of this is something I learned from a former employee who went to work for Encyclopedia Britannica 'back in the day'. I actually invited him back to our office so I could understand and learn exactly what he had been taught (noting, of course, that this was before the internet, when info like that was not as available...).

So they charged (as I recall from what he told me; I could be off) something like $450 for shipping the books, which seemed high at the time.

The salesman is taught to open the sales pitch with a set of encyclopedias costing, at the time, let's say $40,000: some 'gold-plated version'.

The potential buyer laughs, and the salesman then says 'plus $450 for shipping!!!'.

They then move on to the more reasonable versions costing, let's say, $1,000 or whatever.

As a result of that first high-priced example (in addition to the positioning you are talking about), the customer is set up to accept the relatively high shipping charge.

omega3 · 9 months ago
This is called price anchoring.
josters · 9 months ago
This is also known as the Door-in-the-face technique[1] in social psychology.

[1]: https://en.m.wikipedia.org/wiki/Door-in-the-face_technique

kortilla · 9 months ago
That’s a really basic sales technique much older than the 1975 study. I wonder if it went under a different name or this was a case of studying and then publishing something that was already well-known outside of academia.
sethd · 9 months ago
Wouldn’t this be an example of anchoring?

https://en.wikipedia.org/wiki/Anchoring_effect

riazrizvi · 9 months ago
I use GPT-4 because 4o is inferior. I keep trying 4o but it consistently underperforms. GPT-4 is not working as hard anymore compared to a few months ago. If this release said it allows GPT-4 more processing time to find more answers and filter them, I’d then see transparency of service and happily pay the money. As it is I’ll still give it a try and figure it out, but I’d like to live in a world where companies can be honest about their missteps. As it is I have to live in this constructed reality that makes sense to me given the evidence despite what people claim. Am I fooling/gaslighting myself?? Who knows?
blharr · 9 months ago
Glad I'm not the only one. I see 4o as a lot more of a sidegrade. At this point I mix them up and I legitimately can't tell, sometimes I get bad responses from 4, sometimes 4o.

Responses from gpt-4 sound more like AI, but I haven't had seemingly as many issues as with 4o.

Also the feature of 4o where it just spits out a ton of information, or rewrites the entire code is frustrating

m3kw9 · 9 months ago
But you are not getting nothing; there is actual value if you are able to use that much and are consistently hitting limits on the $20 plan.
Someone1234 · 9 months ago
Why doesn't Pro include longer context windows?

I'm a Plus member, and the biggest limitation I am running into by far is the maximum length of the context window. I'm having context fall out of scope throughout the conversation, or not being able to give it a large document that I can then interrogate.

So if I go from paying $20/month for 32,000 tokens, to $200/month for Pro, I expect something more akin to Enterprise's 128,000 tokens or MORE. But they don't even discuss the context window AT ALL.

For anyone else out there looking to build a competitor, I STRONGLY recommend you consider the context window as a major differentiator. Let me give you an example of a usage which ChatGPT simply cannot do very well today: dump an XML file into it, then ask it questions about that file. You can attach files to ChatGPT, but it is basically pointless because it isn't able to view the entire file at once due to, again, limited context windows.
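A rough fit-check for that workflow can be automated. This sketch uses a crude ~4 characters-per-token heuristic (an assumption; a real tokenizer such as tiktoken gives exact counts):

```python
def rough_token_count(text: str) -> int:
    # crude heuristic: ~4 characters per token for English text and markup
    return max(1, len(text) // 4)

def fits_in_window(text: str, window_tokens: int, reply_budget: int = 4096) -> bool:
    # leave headroom for the system prompt and the model's reply
    return rough_token_count(text) + reply_budget <= window_tokens

xml = "<row/>" * 50_000                       # ~300k chars, roughly 75k tokens
small_window = fits_in_window(xml, 32_000)    # overflows a 32k window
large_window = fits_in_window(xml, 128_000)   # fits in a 128k window
```

If the document doesn't fit, the options are splitting it, summarizing it, or retrieval, each of which loses some whole-file context.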

carbocation · 9 months ago
Pro does have longer context windows, specifically 128k. Take a look at the pricing page for this information: https://openai.com/chatgpt/pricing/
nstj · 9 months ago
Thanks for this. I’m surprised they haven’t made this more obvious in their release and other documentation
mattwallace · 9 months ago
o1 pro failed to accept 121,903 tokens of input into the chat (Claude took it just fine).
katamari-damacy · 9 months ago
ChatGPT and GPT4o APIs have 128K window as well. The 32K is from the days of GPT4.
thomasahle · 9 months ago
It's disappointing because the o1-preview had 128k context length. At least on the API. So they nerfed it and made the original product $200/month.
dudus · 9 months ago
The longer the context the more backtracking it needs to do. It gets exponentially more expensive. You can increase it a little, but not enough to solve the problem.

Instead you need to chunk your data and store it in a vector database so you can do semantic search and include only the bits that are most relevant in the context.

LLM is a cool tool. You need to build around it. OpenAI should start shipping these other components so people can build their solutions and make their money selling shovels.

Instead they want end user to pay them to use the LLM without any custom tooling around. I don't think that's a winning strategy.
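The chunk-then-retrieve loop described above can be sketched end to end. Here a bag-of-words vector stands in for a real embedding model, and the chunk sizes are illustrative assumptions:

```python
import math
from collections import Counter

def chunk(text: str, size: int = 500, overlap: int = 100) -> list[str]:
    # fixed-size character chunks with overlap so sentences aren't cut cold
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(text: str) -> Counter:
    # stand-in for a real embedding model: a bag-of-words vector
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 3) -> list[str]:
    # rank chunks by similarity to the query, keep the top k for the prompt
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

doc = "billing uses stripe. auth uses oauth. search uses elasticsearch."
top = retrieve("how does auth work", chunk(doc, size=30, overlap=0), k=1)
```

Only `top` goes into the LLM context, which is what keeps the cost flat as the corpus grows.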

gcr · 9 months ago
This isn't true.

Transformer architectures generally take quadratic time wrt sequence length, not exponential. Architectural innovations like flash attention also mitigate this somewhat.

Backtracking isn't involved, transformers are feedforward.

Google advertises support for 128k tokens, with 2M-token sequences available to folks who pay the big bucks: https://blog.google/technology/ai/google-gemini-next-generat...

solarkraft · 9 months ago
> you need to chunk your data and store it in a vector database so you can do semantic search and include only the bits that are most relevant in the context

Be aware that this tends to give bad results. Once RAG is involved you essentially only do slightly better than a traditional search, a lot of nuance gets lost.

tom1337 · 9 months ago
> Instead you need to chunk your data and store it in a vector database so you can do semantic search and include only the bits that are most relevant in the context.

Isn't that kind of what Anthropic is offering with projects? Where you can upload information and PDF files and stuff which are then always available in the chat?

hackernewds · 9 months ago
I don't know whether you're using "exponential" in the loose, everyday sense of the word, but it does not get exponentially more expensive.
Melatonic · 9 months ago
Seems like a good candidate for a "dumb" AI you can run locally to grab data you need and filter it down before giving to OpenAI
danpalmer · 9 months ago
Because they can't do long context windows. That's the only explanation. What you can do with a 1m token context window is quite a substantial improvement, particularly as you said for enterprise usage.
KTibow · 9 months ago
In my experience OpenAI models perform worse on long contexts than Anthropic/Google's, even when using the cheaper ones.
kranke155 · 9 months ago
Claude is clearly the superior product, I'd say.

The only reason I open ChatGPT now is that Claude will refuse to answer questions on a variety of topics, including, for example, medication side effects.

visarga · 9 months ago
When I tested o1 a few hours ago, it seemed like it was losing context. After I asked it to use a specific writing style and pasted a large reference text, it forgot my request. I reminded it, and it kept the rule for a few more messages; after another long paste it forgot again.
j45 · 9 months ago
If a $200/month pro tier is successful, it could open the door to a $2,000/month segment, then a $20,000/month segment will appear, and the stratification of who can afford to get ahead with AI will begin.

Deleted Comment

johnisgood · 9 months ago
Agreed. Where may I read about how to set up a local LLM comparable to Claude, with at least Claude's context window length, and what are the hardware requirements? I found Claude incredibly useful.
m3kw9 · 9 months ago
Full-blown agents, but they have to really be able to replace a semi-competent human. Harder than it sounds, especially for edge cases that a human can easily get past.
dr_kiszonka · 9 months ago
This is a significant concern for me too.
itissid · 9 months ago
I've been concatenating my source code of ~3,300 lines and 123,979 bytes (so likely < 128K context window) into the chat to get better answers. Uploading files is hopeless in the web interface.
fragmede · 9 months ago
why not use aider/similar and upload via API?
frakt0x90 · 9 months ago
Have you considered RAG instead of using the entire document? It's more complex but would at least allow you to query the document with your API of choice.
mark_l_watson · 9 months ago
Switch to Gemini Pro just when you need huge context size. That is what I do.
8n4vidtmkvmk · 9 months ago
Just? You don't think the model is as capable when the context does fit?
domysee · 9 months ago
When talking about context windows I'm surprised no one mentions https://poe.com/. Switched over from ChatGPT about a year ago, and it's amazing. Can use all models and the full context window of them, for the same price as a ChatGPT subscription.
EVa5I7bHFq9mnYK · 9 months ago
Poe.com goes straight to a login page and doesn't want to divulge ANY information before I sign up. No About Us, no product description, no pricing. Strange behavior, but I'm seeing it more and more with modern websites.
WhitneyLand · 9 months ago
What don’t you like about Claude? I believe the context is larger.

Coincidentally I’ve been using it with xml files recently (iOS storyboard files), and it seems to do pretty well manipulating and refactoring elements as I interact with it.

rmbyrro · 9 months ago
Google models have huge contexts, but are terrible...
bn-l · 9 months ago
Agreed. The new 1121 is better, but still relatively garbage.
A_D_E_P_T · 9 months ago
I just bought a pro subscription.

First impressions: The new o1-Pro model is an insanely good writer. Aside from favoring the long em-dash (—) which isn't on most keyboards, it has none of the quirks and tells of old GPT-4/4o/o1. It managed to totally fool every "AI writing detector" I ran it through.

It can handle unusually long prompts.

It appears to be very good at complex data analysis. I need to put it through its paces a bit more, though.

Mordisquitos · 9 months ago
> Aside from favoring the long em-dash (—) which isn't on most keyboards

Interesting! I intentionally edit my keyboard layout to include the em-dash, as I enjoy using it out of sheer pomposity—I should undoubtedly delve into the extent to which my own comments have been used to train GPT models!

gen220 · 9 months ago
On my keyboard (en-us) it's ALT+"-" to get an em-dash.

I use it all the time because it's the "correct" one to use, but it's often more "correct" to just rewrite the sentence in a way that doesn't call for one. :)

ValentinA23 · 9 months ago
–: alt+shift+minus on my azerty(fr) mac keyboard. I use it constantly. "Stylometry" hazard though !
paulddraper · 9 months ago
Word processors -- MS Word, Google Docs -- will generally convert three hyphens to em dash.

(And two hyphens to en dash.)

Filligree · 9 months ago
I just use it because it's grammatically correct—admittedly I should use it less, for example here.
creesch · 9 months ago
Just so you know, text using the em-dash like that combined with a few other "tells" makes me double check if it might be LLM written.

Other things are the overuse of transition words (e.g., "however," "furthermore," "moreover," "in summary," "in conclusion,") as well as some other stuff.

It might not be fair to people who write like that naturally, but it is what it is in the current situation we find ourselves in.

bambax · 9 months ago
On Windows the em dash is ALT+0151; the section sign (§) is ALT+0167. Once you know them (and a couple of others, for instance accented capitals) they become second nature, and they work on all keyboards, everywhere.
jgalt212 · 9 months ago
delve?

Did ChatGPT write this comment for you?

Atotalnoob · 9 months ago
AI writing detectors are snake oil
CharlieDigital · 9 months ago
Startup I'm at has generated a LOT of content using LLMs and once you've reviewed enough of the output, you can easily see specific patterns in the output.

Some words/phrases that, by default, it overuses: "dive into", "delve into", "the world of", and others.

You correct it with instructions, but it will then find synonyms so there is also a structural pattern to the output that it favors by default. For example, if we tell it "Don't start your writing with 'dive into'", it will just switch to "delve into" or another synonym.

Yes, all of this can be corrected if you put enough effort into the prompt and enough iterations to fix all of these tells.

daemonologist · 9 months ago
They're not very accurate, but I think snake oil is a bit too far - they're better than guessing at least for the specific model(s) they're trained on. OpenAI's classifier [0] was at 26% recall, 91% precision when it launched, though I don't know what models created the positives in their test set. (Of course they later withdrew that classifier due to its low accuracy, which I think was the right move. When a company offers both an AI Writer and an AI Writing detector people are going to take its predictions as gospel and _that_ is definitely a problem.)

All that aside, most models have had a fairly distinctive writing style, particularly when fed no or the same system prompt every time. If o1-Pro blends in more with human writing that's certainly... interesting.

[0] https://openai.com/index/new-ai-classifier-for-indicating-ai...
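To make those two numbers concrete, a quick back-of-the-envelope (the 1,000-essay batch is an assumption for illustration, not OpenAI's actual test set):

```python
# Hypothetical batch: 1,000 AI-written essays scored by a detector
# with the reported 26% recall and 91% precision.
ai_essays = 1000
recall, precision = 0.26, 0.91

true_positives = ai_essays * recall                # AI essays correctly flagged
flagged_total = true_positives / precision        # everything the detector flags
false_positives = flagged_total - true_positives  # human essays wrongly flagged
missed = ai_essays - true_positives               # AI essays that slip through

print(round(true_positives), round(false_positives), round(missed))  # 260 26 740
```

So even at 91% precision, the detector misses roughly three quarters of AI text while still wrongly flagging some humans, which is exactly why treating its verdicts as gospel is a problem.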

mirrorlake · 9 months ago
Anecdotally, English/History/Communications professors are confirming cheaters with them because they find it easy to identify false information. The red flags are so obvious that the checker tools are just a formality: student papers now have fake URLs and fake citations. Students will boldly submit college papers which have paragraphs about nonexistent characters, or make false claims about what characters did in a story.

The e-mail correspondence goes like this: "Hello Professor, I'd like to meet to discuss my failing grade. I didn't know that using ChatGPT was bad, can I have some points back or rewrite my essay?"

A_D_E_P_T · 9 months ago
Yeah but they "detect" the characteristic AI style: The limited way it structures sentences, the way it lays out arguments, the way it tends to close with an "in conclusion" paragraph, certain word choices, etc. o1-Pro doesn't do any of that. It writes like a human.

Damnit. It's too good. It just saved me ~6 hours in drafting a complicated and bespoke legal document. Before you ask: I know what I'm doing, and it did a better job in five minutes than I could have done over those six hours. Homework is over. Journalism is over. A large slice of the legal profession is over. For real this time.

dangdetector · 9 months ago
Doubtful. AI writing is obvious as hell.
efficax · 9 months ago
Of course they are. It's simple: if they worked, they would be incorporated into the loss function of the models, and then they would no longer work.
karaterobot · 9 months ago
I use the emdash a lot. Maybe too much. On MacOS, it's so easy to type—just press shift-option-minus—that I don't even think about it anymore!
bhtru · 9 months ago
Or type '-' twice, and many apps will auto-transform the two dashes into an em-dash. However, the method you're describing is far more reliable, thanks!
vessenes · 9 months ago
I noticed a writing style difference, too, and I prefer it. More concise. On the coding side, it's done very well on large (well as large as it can manage) codebase assessment, bug finding, etc. I will reach for it rather than o1-preview for sure.
imgabe · 9 months ago
Writers love the em-dash though. It's a thing.
thomasfromcdnjs · 9 months ago
I love using it in my creative writing, I use it for an abrupt change. Find it kinda weird that it's so controversial.
carabiner · 9 months ago
My 10th grade english teacher (2002, just as blogging was taking off) called it sloppy and I gotta agree with her. These days I see it as youtube punctuation, like jump cut editing for text.
dougb5 · 9 months ago
That's encouraging to hear that it's a better writer, but I wonder if "quirks and tells" can only be seen in hindsight. o1-pro's quirks may only become apparent after enough people have flooded the internet with its output.
heyjamesknight · 9 months ago
> Aside from favoring the long em-dash (—)

This is a huge improvement over previous GPT and Claude, which use the terrible "space, hyphen, space" construct. I always have to manually change them to em-dashes.

layer8 · 9 months ago
> which isn't on most keyboards

This shouldn’t really be a serious issue nowadays. On macOS it’s Option+Shift+'-', on Windows it’s Ctrl+Alt+Num- or (more cryptic) Alt+0151.

The Swiss army knife solution is to configure yourself a Compose key, and then it’s an easy mnemonic like for example Compose 3 - (and Compose 2 - for en dash).

_cs2017_ · 9 months ago
No internet access makes it very hard to benefit from o1 pro. Most of the complex questions I would ask require google search for research papers, language or library docs, etc. Not sure why o1 pro is banned from the internet, was it caught downloading too much porn or something?
ilt · 9 months ago
Or worse still, referencing papers it shouldn't be referencing because of paywalls, maybe.
veidr · 9 months ago
Macs have always been able to type the em dash — the key combination is ⌥⇧- (Option-Shift-hyphen). I often use them in my own writing. (Hope it doesn't make somebody think I'm phoning it in with AI!)
davidmurphy · 9 months ago
Anyone who read "The Mac is not a typewriter" — a fantastic book of the early computer age — likely uses em dashes.
jwpapi · 9 months ago
Wait, how did you buy it? I'm just getting forwarded to the Team plan I already have. Sitting in Germany; tried a US VPN as well.
apstls · 9 months ago
The endpoint for upgrading for the normal web interface was returning 500s for me. Upgrading through the iOS app worked though.
cableshaft · 9 months ago
Some autocorrect software automatically converts two hyphens in a row into an emdash. I know that's how it worked in Microsoft Word and just verified it's doing that with Google Docs. So it's not like it's hard to include an emdash in your writing.

Could be a tell for emails, though.

galleywest200 · 9 months ago
This is interesting, because at my job I have to manually edit registration addresses that use the long em-dash as our vendor only supports ASCII. I think Windows automatically converts two dashes to the long em-dash.
aucisson_masque · 9 months ago
> It managed to totally fool every "AI writing detector" I ran it through.

For now. As AI power increases, AI-powered writing-detection tools also get better.

bigfudge · 9 months ago
I’m less sure. This seems like an asymmetrical battle with a lot more money flowing to develop the models that write than detect.
onlyrealcuzzo · 9 months ago
It's also because it's brand new.

Give it a few weeks for them to classify its outputs, and they won't have a problem.

pests · 9 months ago
> the long em-dash (—) which isn't on most keyboards

On Windows it's Windows Key + . to get the emoji picker; it's in the Symbols tab, or find it in recents.

Wolfenstein98k · 9 months ago
Well, not for me it's not; that's a zoom function.

En dash is Alt+0150 and Em dash is Alt+0151

pjs_ · 9 months ago
Long emdash is the way -- possible proof of AGI here
rahimnathwani · 9 months ago
Would you mind sharing any favourite example chats?
A_D_E_P_T · 9 months ago
Give me a prompt and I'll share the result.
the_clarence · 9 months ago
You can use the emdash by writing dash twice -- it works in a surprising number of editors and rendering engines

Deleted Comment

ed_elliott_asc · 9 months ago
Does it still hallucinate? This, for me, is key: if it does, it will be limited.
yCombLinks · 9 months ago
The current architecture of LLMs will always "hallucinate".
az226 · 9 months ago
What’s the context window?
rahimnathwani · 9 months ago
128k tokens
griomnib · 9 months ago
I consistently get significantly better performance from Anthropic at a literal order of magnitude less cost.

I am incredibly doubtful that this new GPT is 10x Claude unless it is embracing some breakthrough, secret, architecture nobody has heard of.

MP_1729 · 9 months ago
That's not how pricing works.

If o1-pro is 10% better than Claude, but you are a guy who makes $300,000 per year, but now can make $330,000 because o1-pro makes you more productive, then it makes sense to give Sam $2,400.
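Spelling out that break-even argument (both the 10% figure and the assumption that productivity converts one-for-one into income are the argument's premises, not established facts):

```python
salary = 300_000
productivity_gain = 0.10     # the assumed 10% improvement over Claude
annual_cost = 200 * 12       # $2,400/year for the Pro plan

# Assumed: productivity gains convert directly into income.
extra_income = salary * productivity_gain

print(extra_income, annual_cost, extra_income > annual_cost)  # 30000.0 2400 True
```

Under those assumptions the subscription pays for itself more than tenfold; the follow-up replies below question exactly those assumptions.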

echoangle · 9 months ago
Having a tool that’s 10% better doesn’t make your whole work 10% better though.
015a · 9 months ago
The math is never this clean, and no one has ever experienced this (though I'm sure its a justification that was floated at OAI HQ at least once).
jnsaff2 · 9 months ago
It would be a worthy deal if you started making $302,401 per year.
truetraveller · 9 months ago
Yes. But also from the perspective of saving time. If it saves an additional 2 hours/month, and you make six figures, it's worth it.

And the perspective of frustration as well.

Business class is 4x the price of economy, and definitely not 4x better. But it saves time + frustration.

pvarangot · 9 months ago
That's also not how pricing works; it's about perceived incremental increases in how useful it is (marginal utility), not the actual extra money you make.
richardhowes · 9 months ago
Yeah, the $200 seems excessive and annoying, until you realise it depends on how much it saves you. For me it needs to save me about 6 hours per month to pay for itself.

Funny enough I've told people that baulk at the $20 that I would pay $200 for the productivity gains of the 4o class models. I already pay $40 to OpenAI, $20 to Anthropic, and $40 to cursor.sh.

pie420 · 9 months ago
ah yes, you must work at the company where you get paid per line of code. There's no way productivity is measured this accurately and you are rewarded directly in any job unless you are self-employed and get paid per website or something
bloppe · 9 months ago
I love it when AI bros quantify AI's helpfulness like this
vessenes · 9 months ago
I think of them as different people -- I'll say that I use them in "ensemble mode" for coding, the workflow is Claude 3.5 by default -- when Claude is spinning, o1-preview to discuss, Claude to implement. Worst case o1-preview to implement, although I think its natural coding style is slightly better than Claude's. The speed difference isn't worth it.

The intersection of problems I have where both have trouble is pretty small. If this closes the gap even more, that's great. That said, I'm curious to try this out -- the ways in which o1-preview fails are a bit different than prior gpt-line LLMs, and I'm curious how it will feel on the ground.

vessenes · 9 months ago
Okay, tried it out. Early indications - it feels a bit more concise, thank god, certainly more concise than 4o -- and it's s l o w. Getting over one-minute times to parse codebases. There's some sort of caching going on, though; follow-up queries are a bit faster (30-50s). I note that this is still superhuman speed, but it's not writing at the speed Groqchat can output Llama 3.1 8b, that is for sure.

Code looks really clean. I'm not instantly canceling my subscription.

404mm · 9 months ago
I pay for both GPT and Claude and use them both extensively. Claude is my go-to for technical questions, GPT (4o) for simple questions, internet searches and validation of Claude answers. GPT o1-preview is great for more complex solutions and work on larger projects with multiple steps leading to finish. There’s really nothing like it that Anthropic provides. But $200/mo is way above what I’m willing to pay.
griomnib · 9 months ago
I have several local models I hit up first (Mixtral, Llama), if I don’t like the results then I’ll give same prompt to Claude and GPT.

Overall though it’s really just for reference and/or telling me about some standard library function I didn’t know of.

Somewhat counterintuitively I spend way more time reading language documentation than I used to, as the LLM is mainly useful in pointing me to language features.

After a few very bad experiences I never let LLM write more than a couple lines of boilerplate for me, but as a well-read assistant they are useful.

But none of them are sufficient alone, you do need a “team” of them - which is why I also don’t see the value is spending this much on one model. I’d spend that much on a system that polled 5 models concurrently and came up with a summary of sorts.

aliasxneo · 9 months ago
I haven't used ChatGPT in a few weeks now. I still maintain subscriptions to both ChatGPT and Claude, but I'm very close to dropping ChatGPT entirely. The only useful thing it provides over Claude is a decent mobile voice mode and web search.
asterix_pano · 9 months ago
If you don't want to necessarily have to pick between one or the other, there are services like this one that let you basically access all the major LLMs and only pay per use: https://nano-gpt.com/
HanClinto · 9 months ago
I'm in the same boat — I maintain subscriptions to both.

The main thing I like OpenAI for is that when I'm on a long drive, I like to have conversations with OpenAI's voice mode.

If Claude had a voice mode, I could see dropping OpenAI entirely, but for now it feels like the subscriptions to both is a near-negligible cost relative to the benefits I get from staying near the front of the AI wave.

bluedays · 9 months ago
I’ve been considering dropping ChatGPT for the same reason. Now that the app is out the only thing I actually care about is search.
xixixao · 9 months ago
Which ChatGPT model have you been using? In my experience nothing beats 4. (Not claude, not 4o)
cryptoegorophy · 9 months ago
I've heard so much about Claude and decided to give it a try and it has been rather a major disappointment. I ended up using chatgpt as an assistant for claude's code writing because it just couldn't get things right. Had to cancel my subscription, no idea why people still promote it everywhere like it is 100x times better than chatgpt.
sumedh · 9 months ago
> Had to cancel my subscription, no idea why people still promote it everywhere like it is 100x times better than chatgpt.

You need to learn how to ask it the right questions.

acchow · 9 months ago
I find o1 much better for having discussions or solving problems, then usually switch to Claude for code generation.
rmbyrro · 9 months ago
Sonnet isn't good at debugging, or even architecting. o1 shines, it feels like magic. The kinds of bugs it helped me nail were incredible to me.
superfrank · 9 months ago
I've heard this a lot and so I switched to Claude for a month and was super disappointed. What are you mainly using ChatGPT for?

Personally, I found Claude marginally better for coding, but far, far worse for just general purpose questions (e.g. I'm a new home owner and I need to winterize my house before our weather drops below freezing. What are some steps I should take or things I should look into?)

BoorishBears · 9 months ago
It's ironic because I never want to ask an LLM for something like your example general purpose question, where I can't just cheaply and directly test the correctness of the answer

But we're hurtling towards all the internet's answers to general purpose questions being SEO spam that was generated by an LLM anyways.

Since OpenAI probably isn't hiring as many HVAC technicians to answer queries as they are programmers, it feels like we're headed towards a death spiral where either having the LLM do actual research from non-SEO affected primary sources, or finding a human who's done that research will be the only options for generic knowledge questions that are off the beaten path

-

Actually to test my hypothesis I just tried this with ChatGPT with internet access.

The list of winterization tips cited an article that felt pretty "delvey". I search the author's name and their LinkedIn profile is about how they professionally write marketing content (nothing about HVAC), one of their accomplishments is Generative AI, and their like feed is full of AI mentions for writing content.

So ChatGPT is already at a place where when it searches for "citations", it's just spitting back out its own uncited answers above answers by actual experts (since the expert sources aren't as SEO-driven)

nurettin · 9 months ago
Or Anthropic will follow suit.
MuffinFlavored · 9 months ago
Am I wrong that Anthropic doesn't really have a match yet for ChatGPT's o1 (a "reasoning" model)?
VeejayRampay · 9 months ago
Claude is so much better
moralestapia · 9 months ago
I mean ... anecdata for anecdata.

I use LLMs for many projects and 4o is the sweet spot for me.

>literal order of magnitude less cost

This is just not true. If your use case can be solved with 4o-mini (I know, not all do) OpenAI is the one which is an order of magnitude cheaper.

bhouston · 9 months ago
Yeah, I've switched to Anthropic fully as well for personal usage. It seems better to me and/or equivalent in all use cases.
replwoacause · 9 months ago
Same. Honestly if they released a $200 a month plan I’d probably bite, but OpenAI hasn’t earned that level of confidence from me yet. They have some catching up to do.
minimaxir · 9 months ago
The main difficulty when pricing a monthly subscription for "unlimited" usage of a product is the 1% of power users whose extreme use of the product can kill any profit margin for the product as a whole.

Pricing ChatGPT Pro at $200/mo filters it to only power users/enterprise, and given the cost of the GPT-o1 API, it wouldn't surprise me if those power users burn through $200 worth of compute very, very quickly.

thih9 · 9 months ago
They are ready for this, there is a policy against automation, sharing or reselling access; it looks like there are some unspecified quotas as well:

> We have guardrails in place to help prevent misuse and are always working to improve our systems. This may occasionally involve a temporary restriction on your usage. We will inform you when this happens, and if you think this might be a mistake, please don’t hesitate to reach out to our support team at help.openai.com using the widget at the bottom-right of this page. If policy-violating behavior is not found, your access will be restored.

Source: https://help.openai.com/en/articles/9793128-what-is-chatgpt-...

lm28469 · 9 months ago
> can kill any profit margins for the product as a whole.

Especially when the baseline profit margin is negative to begin with

sebzim4500 · 9 months ago
Is there any evidence to suggest this is true? IIRC there was leaked information that OpenAI's revenue was significantly higher than their compute spending, but it wasn't broken down between API and subscriptions so maybe that's just due to people who subscribe and then use it a few times a month.
nine_k · 9 months ago
Is compute that expensive? An H100 rents at about $2.50/hour, so $200 is 80 hours of pure compute. Against 720 hours in a month, that's a 1/9 duty cycle around the clock, or 1/3 if we assume an 8-hour work day. That's really intense, constant use. And I bet OpenAI spends less operating their own infra than the rate at which cloud providers rent it out.
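The arithmetic behind this estimate, spelled out (the $2.50/hour rate is the comment's assumption; OpenAI's actual internal costs, and whether a single H100 even serves an o1 query, are unknown, as the replies note):

```python
subscription = 200.00    # $/month for the Pro plan
h100_rate = 2.50         # $/hour, assumed H100 rental price
hours_in_month = 720

pure_compute_hours = subscription / h100_rate
print(pure_compute_hours)                           # 80.0 hours of H100 time

# Fraction of the month a subscriber could keep one H100 fully busy.
print(round(hours_in_month / pure_compute_hours))   # 9 -> roughly a 1/9 duty cycle
```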
drdrey · 9 months ago
are you assuming that you can do o1 inference on a single h100?
ssl-3 · 9 months ago
Does an o1 query run on a singular H100, or on a plurality of H100s?
peab · 9 months ago
I was testing out a chat app that supported images. Long conversations with multiple images in them can be like $0.10 per message after a certain point. It sure does add up quickly.
londons_explore · 9 months ago
I wouldn't be surprised if the "unlimited" product is unlimited requests, but the quality of the responses drop if you ask millions of questions...
rrr_oh_man · 9 months ago
like throttled unlimited data
paxys · 9 months ago
$200 is a lot of compute. Amortized over say 3 years, that's a dedicated A100 GPU per user, or an H100 for every 3 users.
wkat4242 · 9 months ago
Not counting power or servers etc. But yeah it does put it into perspective.
rubymamis · 9 months ago
I believe they have many data points to back up this decision. They surely know how people are using their products.
ta_1138 · 9 months ago
There are many use cases for which the price can go even higher. Look at recent interactions I had with people working at an interview mill: multiple people in a boiler room interviewing for companies all day long, with a computer set up so that our audio was being piped to o1. They had a reasonable prompt to remove many chatbot-isms and make it provide answers that seem people-like: we were 100% interviewing the o1 model. The operator said basically nothing, in both technical and behavioral interviews.

A company making money off of this kind of scheme would be happy to pay $200 a seat for an unlimited license. And I would not be surprised if there were many other very profitable use cases that make $200 per month seem like a bargain.

yosito · 9 months ago
So, wait a minute, when interviewing candidates, you're making them invest their valuable time talking to an AI interviewer, and not even disclosing to them that they aren't even talking to a real human? That seems highly unethical to me, yet not even slightly surprising. My question is, what variables are being optimized for here? It's certainly not about efficiently matching people with jobs, it seems to be more about increasing the number of interviews, which I'm sure benefits the people who get rewarded for the number of interviews, but seems like entirely the wrong metric.
vundercind · 9 months ago
Scams and other antisocial use cases are basically the only ones for which the damn things are actually the kind of productivity rocket-fuel people want them to be, so far.

We better hope that changes sharply, or these things will be a net-negative development.

lcnPylGDnU4H9OF · 9 months ago
It sounds like a setup where applicants hire some third-party company to "represent the client" in the interview, and that company hired a bunch of people to be the interviewee on their clients' behalf. Presumably neither the company nor the applicant discloses this arrangement to the hiring manager.

Deleted Comment

YeGoblynQueenne · 9 months ago
>> My question is, what variables are being optimized for here?

The ones that start with a "$".

interludead · 9 months ago
Yep, deceptive practices like this undermine trust in the hiring process
fschuett · 9 months ago
If any company wants me to be interviewed by AI to represent the client, I'll consider it ethical to let an AI represent me. Then AIs can interview AIs, maybe that'll get me the job. I have strong flashbacks to the movie "Surrogates" for some reason.