chambers · 4 months ago
Hats off to Statsig. They built a stellar product, superior to many of their industry competitors like Optimizely. Back when I was on an internal Experimentation platform team, we were impressed by how they balanced dev velocity & stat rigor: https://www.statsig.com/updates These guys ship.

Business-wise, I think getting acquired was the right choice. Experimentation is too small & treacherous a space to build a great business in, and the broader Product Analytics space is also overcrowded. Amplitude (YC 2012), to date, only has a 1.4B market cap.

Joining the hottest name next door gives Statsig a lot more room to explore. I look forward to their evolution.

stavros · 4 months ago
It gives Statsig a lot more room to explore how our cursor movements and keystrokes can train LLMs to emulate humans browsing the web, you mean?

debarshri · 4 months ago
At peak, Amplitude's market cap was 10B
utyop22 · 4 months ago
Amplitude is on track to be delisted lol

pjmlp · 4 months ago
Really? I never heard of them.

Meanwhile Optimizely is a new partner in our agency portfolio.

mxstbr · 4 months ago
Initial Show HN four years ago: https://news.ycombinator.com/item?id=26629429

Congrats to the Statsig team!

morkalork · 4 months ago
1 comment - 13 points. Guess you should never feel bad if your own Show HN doesn't take off; it's not the end of the world!
utyop22 · 4 months ago
TBH this is looking more like an acqui-hire (I'm sure they don't want the key people of Statsig to go away...), similar to Windsurf. Consider the fact that the CEO of Apps at OAI worked closely with the CEO of Statsig at Meta.
apetresc · 4 months ago
Can someone ELI5 what Statsig actually is? Their landing page is full of gems like "Turn action into insights and insights into action" and "Scale your experimentation culture with the world's leading experimentation platform" so I have no clue. It appears to be another analytics + A/B testing platform, but surely that can't be worth $1.1B to OpenAI?
chambers · 4 months ago
Statsig's core value is their experimentation platform: the automation of Data Science.

Big Tech teams want to ship features fast, but measuring impact is messy. It usually requires experiments, and traditionally every experiment needed a Data Scientist (DS) to ensure statistical validity, i.e., "can we trust these numbers?" Ensuring validity means the DS has to perform multiple repetitive but specialized tasks throughout the experiment process: debugging bad experiment setups, navigating legacy infra, generating & emailing graphs, compensating for errors and biases in post-analysis, etc. It's a slog for everyone involved. Even then, cases still arise where Team A reports wonderful results & ships their feature while unknowingly tanking Team B's revenue, a situation discovered only months later when a DS is tasked with tracing the cause.

Experimentation platforms like Statsig exist to lower the high cost of experimenting: to show a feature's potential impact before shipping, while reducing frustrations along the way. Most platforms eliminate common statistical errors or issues at each stage of the experiment process, with appropriate controls for each user role. Engineers set up experiments via SDK/UI, with nudges and warnings for misconfigurations. DS can focus on higher-value work like metric design. PMs view shared dashboards and get automatic coordination emails with other teams if their feature appears to be breaking something. People still fight, but earlier on and in the same "room", with fewer questions about what's real versus what's noise.

Separating real results from random noise is the meaning of "statsig" / "statistically significant". The division of labor is roughly that companies define their own metrics (their sense of reality) while the platform manages the underlying statistical and data complexity. The ideal outcome is fewer DS hours needed, less crufty tooling to work around, less statistics learning required, and crucially, more trust & shared oversight. But it comes at a considerable, unsaid cost as well.
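(For concreteness: in its simplest form, the "is this real or noise?" check is a two-proportion z-test. A minimal illustrative sketch, with made-up numbers; real platforms layer things like sequential testing and variance reduction on top of this.)

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test: is the difference between two conversion rates
    larger than random noise would explain?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis of "no real difference"
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Control: 200/10,000 conversions; treatment: 250/10,000
z, p = two_proportion_ztest(200, 10_000, 250, 10_000)
print(f"z={z:.2f}, p={p:.4f}")  # p < 0.05 here, so the lift is "statsig" at the 5% level
```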

Is Statsig worth $1B to OpenAI? Maybe. There's an art & science to product development, and Facebook's experimentation platform was central to their science. But it could be premature. I personally think experimentation as an ideology best fits optimization spaces that previously achieved strong product-market fit ages ago. However, it's been years since I've worked in the "Experimentation" domain. I've glossed over a few key details in my answer and anyone is welcome to correct me.

siva7 · 4 months ago
If what Facebook is today is the result of such platforms, it's not exactly an advertisement for these products.
jijapiopq · 4 months ago
A buzzword-driven company with a product meant to track users, their mouse movements and keyboard usage, across the internet. Of course, to help make the world a better place... for shoving advertisements.
laichzeit0 · 4 months ago
Tell me you've never used Statsig without telling me you've never used Statsig. It's an online controlled experiment platform. You know like when you want to figure out if a blue button gets more clicks than a red button or a green button, and you want to avoid the situation of some tech-bro calculating the average clicks on all 3 groups and going "this one is higher, so it must be better" because they have zero background in statistics and don't know how to correct for multiple comparisons or what power analysis is, etc.
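To make the multiple-comparisons point concrete: with two challenger buttons each tested against the control, the naive 0.05 threshold has to be tightened or the false-positive rate inflates. A minimal sketch using Bonferroni, the bluntest such correction (the p-values are made up):

```python
def bonferroni_significant(p_values, alpha=0.05):
    """Flag which of several comparisons survive a family-wise
    error rate of alpha, using the Bonferroni correction."""
    threshold = alpha / len(p_values)  # each test must clear a stricter bar
    return [p < threshold for p in p_values]

# Blue-vs-red and green-vs-red p-values from the button experiment.
# 0.03 looks "significant" on its own, but not after correcting
# for the fact that two comparisons were run.
print(bonferroni_significant([0.03, 0.20]))  # [False, False]
```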
Brajeshwar · 4 months ago
I'm of the opinion that the marketing gimmicks we see on some products (the ones that end up either getting acquired big or landing those big elusive contracts) are done on purpose, to steer the general onlooker toward something else. Their internals, or what the founders narrate and show when a customer actually talks to them, are 100x better than what we see in the open.
ygouzerh · 4 months ago
It seems like a mix of analytics + session replay (e.g. MixPanel) and a feature-flag platform (e.g. Growthbook)
iamleppert · 4 months ago
The problem with A/B testing, as anyone who has ever done it at scale can tell you, is that beyond the banal basic stuff people are already aware of, like making things accessible and discoverable, once you get to a certain point people just have no opinion. The default opinion is no opinion.

It's why every mass consumer product devolves into a feed or a list of content delivered by an algorithm. Once you reach a certain point, you come full circle and even that doesn't matter anymore: users will happily consume whatever you give them, within reason.

A/B testing platforms are mostly used by an odd collection of marketers and "data driven" people who love to run experiments and drag out every little change in the name of optimization. In the end, none of it matters, and it doesn't tell you anything more than just talking to an average user will.

But, boy, are they sure a great way to look busy and dress up an underperforming product!

beeon · 4 months ago
Maybe the industry you work in is relevant here. In e-commerce, A/B testing the position and color of "add to cart" button can yield legitimate revenue multipliers. People's opinions are irrelevant in that kind of A/B test, all that matters is the likelihood they continue down the funnel.
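A rough power calculation shows why this really only plays out at e-commerce scale. A sketch of the standard two-proportion sample-size formula (the baseline rate and lift are illustrative):

```python
import math

def sample_size_per_arm(base_rate, rel_lift, z_alpha=1.96, z_beta=0.84):
    """Approximate users needed per arm to detect a relative lift in a
    conversion rate (two-sided alpha=0.05, power=0.80 by default)."""
    p1 = base_rate
    p2 = base_rate * (1 + rel_lift)
    p_bar = (p1 + p2) / 2
    n = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
         + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / (p2 - p1) ** 2
    return math.ceil(n)

# Detecting a 5% relative lift on a 3% add-to-cart rate takes ~200k users per arm
print(sample_size_per_arm(base_rate=0.03, rel_lift=0.05))
```

A small shop never reaches significance on changes this size; a large funnel can.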
iamleppert · 4 months ago
This is always the straw man A/B testing people reach for. In reality, this only works on the worst possible designs or bizarre layouts that no one really uses. It's a myth that you can somehow wring out an extra few % by making tiny changes to fonts and colors; in every case, they simply stop the experiment when the results tip in their favor.

The ONLY time I've ever seen it used successfully was to make changes to the layout of ads, making them look more like organic content or more likely to be clicked by accident. You can see this at work if you've ever clicked on an ad by mistake, or maybe you were trying to close the ad and noticed someone had "optimized" the close button placement using one of these A/B tools.

There it is, that's the market for these tools. That represents the vast majority of these companies' use cases, revenue, and usage. Anyone who implies an innocent intent is either ignorant or inexperienced.

behnamoh · 4 months ago
Are we supposed to post every blog/news post of OpenAI and keep fueling the AI hype? I think at this point people should know that OpenAI is just like any other company.
tibbar · 4 months ago
Statsig is big enough that its acquisition is interesting in its own right. Or maybe I'm biased because I spent quite some time setting up a Statsig integration at $FORMER_EMPLOYER. But if I've done that, odds are that a lot of the other people here have too...
MontgomeryPy · 4 months ago
This seems like a big shift for OpenAI into an enterprise applications vendor to me.
rchaud · 4 months ago
What else could they have been? Microsoft didn't give them $10bn to build out their B2C homework autocomplete service.
babelfish · 4 months ago
It was almost certainly purchased just for internal usage. See: Rockset
drewda · 4 months ago
Agreed. There are dozens of startups and established companies providing analytics-y software. The fact that this one is being acquired by OpenAI doesn't make it any more newsworthy to anyone other than the people who are getting some OpenAI equity...
gchadwick · 4 months ago
The CTO of Applications reporting to the CEO of Applications (who reports to the actual CEO) is kinda weird? I figure you're either the actual CTO or you're not a C-level exec and should have another title. Just more title inflation, I guess. Maybe, the same way you see VPs of X everywhere in some organizations, we'll start seeing CEOs/CTOs of X lower and lower down the org chart.
citizenpaul · 4 months ago
Sounds to me like another phase in the growth of the "Unaccountability Machine"

https://www.amazon.com/Unaccountability-Machine-Systems-Terr...

Oh, the CTO approved it, so we should blame them. No, not that CTO, the other CTO. So who decided on the final outcome? The CTO! So who's on first again?

nerdsniper · 4 months ago
“CTO” makes sense as a signal that “the buck stops here” for technical issues. They are the highest-ranking authority on technical decisions for their silo, with no one above them (but two CEOs above them for business decisions).

If Mira Murati (CTO of OpenAI) has authority over their technical decisions, then it’s an odd title. If I was talking with a CTO, I wouldn't expect another CTO to outrank or be able to overrule them.

atty · 4 months ago
It would be quite strange indeed for Mira Murati to have a say over their technical decisions, considering she does not work for OpenAI :)
neom · 4 months ago
It's signaling P&L responsibilities. It's not that weird, at least not unheard of; it's just typically done through "EVP" titles - EVP of Applications, VP of Applications Engineering, etc. I'm guessing the line items those "C"s who are not Sam are responsible for are bigger than most F500 executives', and they're using titles to reflect that reality.
swyx · 4 months ago
just gonna point out that Google has done this as well, and it's not so much title inflation as it is acknowledging the fact that if the unit they command were a standalone business, it would well be worth the CEO/CTO title.
geodel · 4 months ago
This C-level thing has been happening for decades. There are many with the CTO title who manage groups sometimes as small as 5-10 people. And these are not startups but large corporates.
prasadjoglekar · 4 months ago
Just look at any media agency (OMG, for example). There are CEOs up the wazoo: one for North America, one for the EU, etc.

In practice, these are just internal P&Ls.

giancarlostoro · 4 months ago
I'm just sitting here wondering what in the world "Applications" is. Is that a subsidiary or what?
kridsdale1 · 4 months ago
A thing that uses a model to have customers.
therealbilliam · 4 months ago
Apparently that's what they call ChatGPT, Codex, etc ¯\_(ツ)_/¯
kevinastone · 4 months ago
Looks like Fidji is reconstituting her Org from Meta
kridsdale1 · 4 months ago
It was extremely effective while it was running. I was there.