Almost every parent comment on this is negative. Why is there such an anti-OpenAI bias on a forum run by YCombinator, basically the pseudo-parent of OpenAI?
It seems there is a constant impulse on this forum to view any decision made by any big AI company with, at best, extreme cynicism and, at worst, virulent hatred. It seems unwise for a forum focused on technology and building the future to be so opposed to the companies doing the most to advance the most rapidly evolving technological domain of the moment.
I'd expect to see a balance though, at least on the notion that people would be attracted to posting on a YC forum over other forums due to them supporting or having an interest in YC.
Well, in a way they are endorsed. They actively censor things they don’t like. Since there’s no moderation log, nobody prevents them from removing things just because they don’t like them.
When dealing with organizations that hold a disproportionate amount of power over your life, it's essential to view them in a somewhat cynical light.
This is true for governments, corporations, unions, and even non-profits. Large organizations, even well-intentioned ones, are "slow AI"[1]. They don't care about you as an individual, and if you don't treat everything they do and say with a healthy amount of skepticism and mistrust, they will trample all over you.
It's not that being openly hostile towards OpenAI on a message board will change their behavior. Only Slow AI can defeat other Slow AI. But it's our collective duty to at least voice our disapproval when a company behaves unethically or builds problematic technology.
I personally enjoy using LLMs. I'm a pretty heavy user of both ChatGPT and Claude, especially for augmenting web search and writing code. But I also believe building these tools was an act of enclosure of the commons at an unprecedented scale, for which LLM vendors must be punished. I believe LLMs are a risk to people who are not properly trained in how to make the best use of them.
It's possible to hold both these ideas in your head at the same time: LLMs are useful, but the organizations building them must be reined in before they cause irreparable damage to society.
My takeaway is actually the opposite: major props to YC for allowing this free speech unfettered. I can't think of any other organization or country on the planet where such a free setup exists.
Unfettered? Have you ever seen how many posts disappear from being flagged for the most dubious reasons imaginable? Have you been on other sites on the internet? Hell, Reddit is more unfettered and that’s terrible.
I don't want to be glib - but perhaps it is because our "context window lengths" extend back a bit further than yours?
Big tech (not just AI companies) has been viewed with some degree of suspicion ever since Google's mantra of "Don't be evil" became a meme over a decade ago.
Regardless of where you stand on the concept of copyright law, it is an indisputable fact that in order to get where they are today, these companies deliberately HOOVERED up terabytes of copyrighted material without the consent, or even knowledge, of the original authors.
These guys are pursuing what they believe to be the biggest prize ever in the history of capitalism. Given that, viewing their decisions as a cynic, by default, seems like a rational place to start.
I’ll bite, but not in the way you’re expecting. I’ll turn the question back on you and ask why you think they need defending?
Their messaging is just more drivel in a long line of corporate drivel, puffing themselves up to their investors, because that’s who their customers are first and foremost.
I’d do some self-reflection and ask yourself why you need to carry water for them.
I support them because I like their products and find the work they've done interesting, and whether good or bad, extremely impactful and worth at least a neutral consideration.
I don't do a calculation in my head over whether any firm or individual I support "needs" my support before providing or rescinding it.
I would call it skepticism, not cynicism. And there is a long list of reasons that big tech and big AI companies are met with skepticism when they trot out nice sounding ideas that require everyone to just trust in their sincerity despite prior evidence.
This. I’ve been on HN for a while. I am barely hanging on to this community. It is near constant negativity and the questioning of every potential motive.
Sadly, yes, a lot of people want to be entrepreneurs for prestige/wealth. In their imagination they skip ahead to a fantastical ending: being rich and respected.
I find this disturbing. How can someone be useful to others without an idea of what that even means? How can one provide a novel offering without even caring about it? It's an expression of missing craft and bad taste. These aspirations are reactive, not generated by something beautiful (like kindness, or optimism).
Fortunately it is not hopeless; aspiring entrepreneurs can find deeper motivation if they look for it.
(I like to give the following advice: it is easier to first be useful to others and become rich than it is to be rich and then become useful to others. This almost certainly requires sufficient empathy and care to have a hypothesis and be "post-idea".)
Hey, "missing craft and bad taste?" Perhaps this hiring technique actually makes sense for OpenAI.
From my firsthand observations of the startup world, there are already plenty of pre-idea rich guys having expensive "conferences" where they talk about nothing and feel very good about themselves because of it. That OpenAI feels the need to write a blog about their shiny new cohort of useless trust fund boys is peculiar, but plenty of companies do this sort of thing.
I cannot imagine not having far more ideas than I could possibly ever do. Today I was describing one to my partner and she told me the only reason I shouldn't do it is that I have too many other things to do.
The thing that makes me continually have ideas is the same thing that makes me not want to dedicate my life to implementing just one of them. It would be like picking a favourite child if I were producing offspring like a queen bee.
I think there is value in the effort to develop something; frequently, implementing something well is worth as much as, and sometimes much more than, a simple proof of concept. Someone has to build the things, and it should be the people who are good at that and who feel rewarded by a job done well more than by a job done differently.
I do think there isn't enough perspective on the lives that other people lead, which can cause odd side effects. Some people keep their ideas secret, or overvalue an idea because it was the one they had. This is a perspective I find hard to relate to. Most of the creative people I know are much happier when someone knows about their creations. Ideas are like grains of sand, each with its own details, and each can be evaluated in many different ways. A lot of intellectual property feels like watching a man jealously protect his grain of sand while standing on a beach.
I believe that is why the intent of things like copyright is not to protect ideas themselves. You cannot copyright an idea, and as an ideas person (a rather horrid term) that feels appropriate. The thing you have built around the idea is the valuable thing you have contributed to the world. I think that is why copyrightable items are referred to as works. The value you bring comes from the work you did, not the idea you had; ideas just come to you (often at inconvenient times).
Mass media causes a bit of an aberration because of this. What makes someone wealthy from a popular work is not proportional to the work done to produce it, or even to its quality. Works that can be easily reproduced and distributed receive rewards disproportionate to their quality: a median-quality work in many fields can receive next to nothing, while the most popular works receive massive rewards. The mechanism that uses control of supply to reward work ends up shaping a supply-demand curve that gives enormous rewards to a very few and very little to the majority. There is still an element of merit to the successes; popular things are popular for a reason, and some of them really are the best. The question is whether they would still have been the best if everyone who worked to create things were rewarded more linearly with quality: would that support enough development of ability and opportunity that the pool from which the best are selected becomes much larger?
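The winner-take-most dynamic described above can be sketched with a toy simulation. All numbers here are hypothetical, and the exponent is just a crude stand-in for hit-driven concentration, not a model of any real market:

```python
import random

random.seed(0)

# Hypothetical: 1000 creators, quality drawn uniformly from [0, 1),
# and a fixed pool of reward to distribute among them.
qualities = [random.random() for _ in range(1000)]
total_reward = 1_000_000.0

# Linear regime: reward proportional to quality.
qsum = sum(qualities)
linear = [total_reward * q / qsum for q in qualities]

# Hit-driven regime: reward proportional to quality^8, a crude
# stand-in for easy reproduction amplifying small quality edges.
powered = [q ** 8 for q in qualities]
psum = sum(powered)
hits = [total_reward * p / psum for p in powered]

def top1_share(rewards):
    # Fraction of the total pool captured by the top 1% of creators.
    top = sorted(rewards, reverse=True)[:10]
    return sum(top) / sum(rewards)

print(f"top 1% share, linear regime:     {top1_share(linear):.1%}")
print(f"top 1% share, hit-driven regime: {top1_share(hits):.1%}")
```

Under the linear rule the top 1% capture roughly their proportional share; under the hit-driven rule a small quality edge compounds into a much larger slice of the same pool, which is the aberration the paragraph describes.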
[this might have gone off topic, but obviously my brain has things that have to come out]
Can someone give the counterargument to my initial cynical read of this? That read being: OpenAI has more money than it can invest productively within its own company and is casting a net to find new product ideas via an incubator.
I can't imagine SoftBank or Microsoft are happy about their money being funneled into something like this, and it implies OpenAI has run out of ideas internally. But I think I'm probably being too reflexively cynical.
I think that MIT study finding 95% of internal AI projects fail has scared a lot of corporations off risking time on it. I think they also see that they are hitting a limit on profitable intelligence from their services (with the growth in intelligence over the past 6–8 months being more realistic, rather than the unbelievable leaps of the past few years).
I think everyone is starting to see this as a middleman problem to solve. Look at ERP systems, for instance: when they popped up, the industry had some growing pains. (Or even early Windows/Microsoft and its 'developers, developers, developers' target audience.)
I think OpenAI sees it will take a lot of third-party devs to take what OpenAI has and run with it. So they want to build a good developer and startup network, to make sure there's a good, solid ecosystem of AI options that corporations and people can use.
I think it’s more that OpenAI has the name to throw around and a lot of credibility, but not products that are profitable. They are burning cash and need to show a curve on which they can reach profitability. Getting 15 people with 15 ideas they can throw their weight behind is worth a lot.
Yeah, more or less. Being in the application space as well as the inference space hedges a variety of risks, that inference margins will squeeze, that competition will continue to increase, etc etc.
Without putting my weight behind them, here are some counterarguments:
- OpenAI needs talent, and it's generally hard to find. Money will buy you smart PhDs who want to be on the conveyor belt, but not people who want to be at the centre of a project of their own. This at least puts them in the orbit of OpenAI: some will fly away, some will set up something to be acqui-hired, some will just give up and try to join OpenAI anyway.
- the amount of cash they will put into this is likely minuscule compared to their mammoth raises. It doesn't fundamentally change their funding needs
- OpenAI's biggest danger is that someone out there finds a better way to do AI. Right now they have a moat made of cash - to replicate them, you generally need a lot of hardware and cash for the electricity bill. Remember the blind panic when DeepSeek came out? So, anything they can do to stop that sprouting elsewhere is worth the money. Sprouting within OpenAI would be a nice-to-have.
Thanks! I think these are strong points, especially about the reaction to DeepSeek. I did have an assumption I didn't put in my original message: that they would probably be making investment offers to founders who walked into this with something like DeepSeek, and that would balloon the costs well beyond office space and engineer time. But even having advance knowledge of the next big idea from this would be worth the cost of entry, yep.
I don't think it's about money, they don't invest anything. They gather data about "technical talent" working on AI related ideas. They will connect with 15 of these people to see if they can build it together.
It seems almost like... an internship program for would-be AI founders?
My guess is this is as much about talent acquisition as it is about talent retention. Give the bored, overpaid top talent outside problems to mentor for/collaborate on that will still have strong ties to OpenAI, so they don't have the urge to just quit and start such companies on their own.
Softbank or Microsoft can’t be happy or sad. CEOs only care about the share price going up while they’re holding the wheel. If Sam wants to start the idea incubator, why would they want to shut it down?
My thinking was that both of these large investors specifically want OpenAI to produce something like AGI or, failing that, something so popular and useful that they make enough money not to care. And they want results this year or early next year. SoftBank's latest investment round is partially tied up in OpenAI resolving its non-profit status by the end of this year. Training random founding engineers, with no expectation of them even using GPT-5, instead of doing traditional hiring feels like either a lack of focus or naivety at this critical juncture.
But having said that, I do see the wisdom in the comments that the costs of running a five-week course/workshop are low, and that having a view into what people are making outside the OpenAI bubble is a decent return all its own.
Yeah, my thoughts were along the same lines. Seems like they want to be another Y Combinator, but more focused on AI. (Although TBF, I guess AI would also get the most traction at Y Combinator these days, given the hype wave.)
We don't invest in ideas, we invest in founders. That's why OpenAI partnered with Y Combinator to bring you investments at the pre-founder stage.
We'll invest in your baby even before it's born! Simply accept our $10,000 now, and we'll own 30% of what your child makes in its lifetime. The womb is a hostile environment where the fetus needs to fight for survival, and a baby that actually manages to be born has the kind of can-do attitude and fierce determination and grit we're looking for in a founder.
Feels like the next logical move to me: they need to build and grow the demand for their product and API.
What better than companies whose central purpose is putting their API to use creatively? Rather than just waiting and hoping every F500 can implement AI improvements that aren't cut during budget crunches.
...no one thinks it's weird that the supposedly most transformational digital technology ever invented needs manufactured demand?? None of us think it's strange that a startup currently vying for a half-trillion-dollar valuation is looking to "pre-idea founders" to help it find PMF??
Would this have been viewed with skepticism if any other startup from like 5+ years ago selling an API did this? If so, then how is it not even worse when a startup that is supposed to be providing access to what is pushed as a technical marvel of a panacea or something does it?
Sometimes I feel like I'm taking crazy pills...
I literally help companies implement AI systems, so I'm not denying there's any value... just... I don't understand how we can say with a straight face that they need to "build and grow demand for their product and API" while the same company was just reported to be inking a $300B deal with Oracle for infra... like, come on... the demand isn't there yet?!
There’s a difference between having product ideas rooted in compelling hypotheses on the one hand, and, on the other, random ideas you throw against a wall to see what sticks.
I suspect, but could be wrong, that in OpenAI’s case it is because they believed they would reach AGI imminently and then “all problems are solved”: in other words, the ultimate product. However, since that isn’t going to happen, they now have to think of more concrete products that are hard to copy and that people are willing to pay for.
> Thank you for your application. We will contact a select group of applicants in the coming weeks. If you are not contacted, we’d love to have you apply for the next cohort.
They can't even be bothered to ask ChatGPT to send a "no" email. Incredible.
If you are pre-idea today, does OpenAI believe your startup will still be relevant in the face of the AGI progress they forecast to make in the time it takes you to ship?
I ask questions like that in my head all the time. My metric is once their AI is smart enough to make their website not throw up an error half the time, I'll have to more deeply consider any AGI claims
Isn't that a good thing? The comments here are neither sponsored nor endorsed by YC.
[1]: https://www.antipope.org/charlie/blog-static/2018/01/dude-yo...
Because our views are our own and not reflective of the feelings of the company that hosts the forum?
Skepticism is healthy. Cynicism is exhausting.
Thank you for posting this.
OpenAI had a lot of goodwill and the leadership set fire to it in exchange for money. That's how we got to this state of affairs.
First time I am hearing this term. It is a euphemism like pre-owned cars (instead of used cars).
What does this mean? People who do not yet have any idea? Weird.
Spoiler: it didn't go anywhere. The story on HN is still here:
https://news.ycombinator.com/item?id=3700712
but the link is 404
https://www.ycombinator.com/noidea.html
The gap was that workers were using their own implementation instead of the company's implementation.
Imagining one negative spin doesn’t an imagination make. Imagine harder.
I don't think there is any money given, except travel costs for the first and last weeks.
https://x.com/paulg/status/1796107666265108940
Exactly what I read between the lines on this.
Next up, we're funding prenatal individuals.
This feels like a program to see what sticks.
Isn't that how we got (and eventually lost) most Google products?