* Software R&D Amortization - taxes on make-believe profits
* Patent law - protect small businesses from patent trolls
* Automate government-driven compliance standards - enable small businesses to sell into large companies/government entities, automatic certification when using pre-approved cloud solutions.
* Healthcare insurance - employees of SMBs automatically get access to medicare
> * Automate government-driven compliance standards - enable small businesses to sell into large companies/government entities, automatic certification when using pre-approved cloud solutions.
I don't see how this will end well. I appreciate the reasoning behind it, but this is not a good solution.
I'd prefer to see more "startup friendly" compliance frameworks that don't require tens to hundreds of thousands of dollars and make both the startup and their customers satisfied with the outcome. Something like a SOC2-lite that isn't so onerous but still provides a decent snapshot of their current situation from a third party's perspective.
I'd also prefer to see these standards go away. I haven't seen any proof they provide meaningful security at any company I've been at, and several of them have had massive hacks despite being SOC2-compliant on paper. They also eat up InfoSec time that could go toward meaningful work like "Hey, are we patching everything?"
Most of these compliance regimes just seem like barber licenses: a way for existing entities to entrench themselves.
Try ISO 27001. Everyone says it's more onerous, but for startups it's actually a lighter lift: it's a lot worse than SOC2 for big companies, but a lot easier for startups.
Yeah, the only thing worse than the current status quo would be giving some SV startups a privileged position as gatekeepers for regulatory compliance (the Watershed strategy).
Couldn't agree more. In the SMEs that I've been involved with, this has had a huge chilling effect on both hiring and innovation. I think that the change is a primary contributing factor to the layoffs and offshoring that have seized the market ever since.
I'm not convinced that this wasn't the intent of the change in the first place.
Could someone elaborate on this for the uninformed like me? Does it mean that if you (a company) pay $1M in salary this year, only $0.2M can be treated as a cost?
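A rough sketch of the arithmetic behind that question, assuming the commonly described post-2022 rule: domestic software R&D costs must be amortized over five years with a half-year convention, so only about 10% is deductible in year one (even less than the $0.2M guessed above). The figures below are hypothetical:

```python
def year1_taxable_income(revenue: float, rd_salaries: float, amortize: bool) -> float:
    """Taxable income in year one, with or without Section 174-style amortization."""
    if amortize:
        # Post-2022 treatment (as commonly described): a 5-year schedule with a
        # half-year convention means only 10% of R&D cost is deductible in year 1.
        deduction = rd_salaries * 0.10
    else:
        # Pre-2022 treatment: software R&D salaries were fully expensed when paid.
        deduction = rd_salaries
    return revenue - deduction

# A hypothetical company that breaks even in cash terms:
# $1M revenue, $1M of developer salaries.
print(year1_taxable_income(1_000_000, 1_000_000, amortize=False))  # prints 0
print(year1_taxable_income(1_000_000, 1_000_000, amortize=True))   # prints 900000.0
```

So a company with zero cash profit can owe tax on $900K of paper income in its first year; the remaining deductions only arrive in later years (and over 15 years for foreign R&D under the same commonly cited rule). That is the "taxes on make-believe profits" complaint.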
The House passed a bill, but the Senate is working on their own version. I haven't looked too deeply at either proposal; I just hope it doesn't make things even worse. I am waiting till the final proposal gets voted on.
How about health insurance not being tied to profits? Startups bear the full brunt of health insurance: since they don't have real profits, they have nothing to write off. Meanwhile, large orgs get to write off a ton of profit as healthcare costs for employees.
So startups tend to have real garbage insurance. As someone older with kids, startups are getting more and more prohibitive because I need that healthcare. Maybe startups should be a young man's game. Maybe not.
There is no particular linkage between health insurance and profits. Small businesses such as early-stage startups tend to have worse and more expensive employee health plans because they lack the scale to go self-insured or to negotiate lower premiums with payers for fully-insured plans. Profitable small businesses face the same problem.
Can you remind me what the issue is with software R&D amortization (or point me to something that explains it)? I remember reading about the issue in the past and thinking it was a problem, but I've forgotten all the details.
> Automate government-driven compliance standards - enable small businesses to sell into large companies/government entities, automatic certification when using pre-approved cloud solutions.
This is something the market can solve. You can't lobby it into existence.
> * Healthcare insurance - employees of SMBs automatically get access to medicare
Perhaps better decouple healthcare insurance from employment status? (Perhaps remove the tax dodge where companies can buy health insurance cheaper than individuals can?)
While you're there, try to help the Army National Guard Incentive Management System, or GIMS, not have multi-year downtimes, while the corporate sector wrings their hands over minutes. It's funny, yet hurtful to read. They exist in different universes.
> the system crashed in late 2018 and was inoperable for about 10 months; another 10-month outage occurred in 2021. While the system was down, bonuses had to be filed through a complicated manual process, creating a backlog that states are still trying to fix. (2023 story) [1]
> Two adjutants general, top commanders in their respective states, described discovering their staff tracking enlistment bonuses on dry-erase boards or through email traffic and handwritten notes. [1]
Sorry, your bonus is going around on somebody's handwritten note somewhere. Also, see if you can maybe do something about that VA medical data system; last I heard, they still hate it.
> * Healthcare insurance - employees of SMBs automatically get access to medicare
This puts you in the same company as Walmart, the nation's biggest welfare queen, in abusing the system.
Employers should just have to give health benefits. You want workers, you pay benefits. Period. Maybe then you all will get on board with a single-payer system. It's what you want, but only in fits and starts. Quit fucking around already.
I really don't want my at-will employment status to be the arbiter of whether an unforeseen health issue will bankrupt me. Tying either private insurance or public insurance eligibility to your employer seems like a bad pattern we should be trying to get away from.
Fuck that. Employers should be legally barred from offering health benefits. Combining the two might have been one of the worst things to happen to the health system in this country.
I’ve never understood the idea that employers like Walmart are “abusing the system” or “welfare queens”. If Walmart employees were capable of getting jobs that paid enough that they weren’t eligible for public assistance, they wouldn’t work for Walmart. Conversely, if Walmart didn’t employ those people, they would be an even greater burden on the welfare state.
> [...] history shows that once we assign power to governments, they're loathe to subsequently give that power back to the people. Policy is a ratchet and things tend to accrete over time. That means whatever power we assign governments today represents the floor of their power in the future - so we should be extremely cautious in assigning them power because I guarantee we will not be able to take it back.
I'm curious what history Jack Clark is referring to here.
If I think of the last thirty years of policy in most of Europe and the US I'm thinking of a strong trend of deregulation and giving more powers to markets, removing international trade barriers and so on.
That seems to be a dynamic opposite to the one the quoted article is suggesting.
> If I think of the last thirty years of policy in most of Europe and the US I'm thinking of a strong trend of deregulation and giving more powers to markets
That's been the PR spin, but it's not actually true. It's a smoke screen to help governments avoid actual accountability.
For example, the crash of 2008 was blamed on too much market and not enough regulation, but in fact it was the opposite: regulatory thumbs on the scale (for example, the US government wanting to encourage home ownership, skewing the mortgage market and the money supply, and requiring lenders to accept more default risk), and governments implicitly giving a "too big to fail" guarantee to large financial institutions and then being extremely arbitrary about when that implicit guarantee was broken. A true free market would never have produced such a thing.
> removing international trade barriers and so on.
Globalization of trade has been going on for much longer than the last 30 years. If anything, the last 30 years have seen more of things like trade wars (for example between the US and China) and other disruptions to smooth international trade.
Right, and many of those deregulation moves turned out poorly in hindsight. Governments have ratcheted up control in some cases, but stating that it is a universal law is patently false, although it sounds good as a quip.
Regulations are kind of like security practices. When done well they are often taken for granted, but poor ones get a lot of negative attention. I'm glad that I don't have to wonder if the cereal I buy at a store is filled with rat poison. I'm fine if the government never relinquishes the power to oversee that.
Unfortunately the current leaders in the latest AI craze have not inspired much confidence that they will act responsibly in the future. Maybe if different people were running these companies it would make sense for the government to keep out of it, but in this world we're going to need some reasonable regulation.
Deregulation is not returning power to the people, it's bestowing carte blanche privatization of profits to corpos, in the wake of near complete regulatory capture, while they dump the negative externalities on the public.
At least for Europe this is wrong. I mean, yes, there are new trade treaties, but internally EU regulations, and at least for Germany its national regulations, increase by the day. Just this year the personal tax declaration form got 10 or so additional pages. And the new supply chain law requires medium to large companies to prove that their purchases meet human-rights and environmental standards. And I don't even follow new laws closely.
I don’t disagree with this assessment but it’s also a narrow view[0] that allows the problem to persist in the first place.
Rather, I’d like to see what positive oversight would look like, but that has not been put forth by any of these organizations thus far. It all comes down to “trust us”, which is also hard to stomach.
[0]: most often but not exclusively held by Americans (of which I am one). We collectively fail to imagine government being a positive force and what that would look like.
This is a relatively new and carefully cultivated state of things.
I mean, the early phases of this era are a half-century old at this point, but it’s not like it’s a law of nature that at least half the population of the US and about half the politicians must regard government as rarely-useful. It didn’t used to be that way. It’s not an American trait in some holistic historical sense.
Well said, also applies to many human dynamics - friendships, relationships, work relationships, etc. Ceding power is a ratchet and you either put your foot down to start with or it grinds away.
Healthy relationships include negotiating when potential boundaries are in question, or if things change that require re-aligning boundaries.
It's reasonable to give more to the other party from time-to-time, and reasonable to discuss with the other party if it becomes a point where it feels unfair.
Instead we (Americans) take an unnecessarily adversarial stance against what our government could do, ensuring it is perpetually unprepared.
Experience.
It is true that sometimes you learn something at time T2 that invalidates something you learned at time T1 (T2 > T1), and thus you do need a de-ratcheting system of some sort.
But what actually drives the ratchet is experience with current policy (or lack thereof). "Oh, we had no plans to deal with X, and we got screwed, so lets add policy for X".
The ratcheting aspect of policy reflects the ratcheting aspect of societal experience accumulation.
Largely because they feared that large language models would be useful for large scale disinformation, election interference, impersonation, automated phishing, etc.
> First, let’s prioritize open source models and more tailored AI applications to shape the competitive landscape and create real opportunities for startups.
How does YC square this statement with the fact that their ex-president closed their models with the explicit intention of slowing down competitors [1]? Or is the argument "we want politicians to discourage people like us from doing what we did"?
"How does YC square this statement about the organization's current stances with the fact that someone who doesn't work there anymore but who had previously communicated similar stances has now apparently changed his mind?"
Why would they need to square anything there? There's nothing contradictory about a former exec not matching the current values of a company.
How can YC be responsible for something their former executive does?
And more importantly, doing one thing while advocating a policy that prohibits that exact thing isn't necessarily wrong. If the tax rate is 20% and I advocate a 25% tax while not paying the extra 5% until the law is passed, I would say there isn't any contradiction in my actions.
Late is better than never in my view, and in spite of whatever happened in the past I will support the people doing the right thing today.
I’m likewise a little wary given some of the history, so maybe a little “wait and see” is in order, but this sounds like a really positive thing to be doing.
I mean, they parted ways long ago, and pg released a statement recently as to the nature of that breakup. People and organizations can change their mind.
It's well and good to advocate for change within the industry (large and small), but realistically, all of this is missing the forest for the trees, mistaking the symptom for the cause.
The only way out of this long term is to take money out of politics: repeal Citizens United, close the revolving doors, and end the other methods of lining politicians' pockets.
While I agree toward the harm it's done, Citizens United v. Federal Election Commission is a Supreme Court ruling; you can't "repeal" those. It must either be reversed/overturned by the Supreme Court or a constitutional amendment be made by the states. Those are both extremely hard and rare.
So you think it should be illegal to make documentaries critical of Hillary Clinton? Because that's what Citizens United was about, but most people who are against Citizens United don't seem to understand what the case was actually about.
I'm pretty unsure about CU, but the context is the film was created by a political action committee to get standing so they could challenge election law, it's not like they were a bona-fide commercial film-maker (which is why the FEC blocked the film in the first place)
Well, I think even that may be missing a bit of forest. How about: institute far more redistributive tax policies to prevent individuals or companies from gaining undue market power by becoming extremely wealthy.
It is frankly challenging to reconcile 1st amendment protections and a CU repeal. I'm not sure what the solution is, but think recent events should show that govt can abuse campaign finance law to pick winners and losers in the town square.
> It is frankly challenging to reconcile 1st amendment protections and a CU repeal. I'm not sure what the solution is, [...]
Reconciliation: Companies aren't human entities.
Non-human entities aren't entitled to 1st amendment protections.
Campaign finance is equally simple: run your campaign on public funding. Give all candidates who meet a threshold equal amounts of money.
I have yet to hear a convincing argument about what benefit a democracy receives from campaigns having different amounts of funding. That feels like the tail wagging the dog (your supporters fund you, so you can spend that money to buy more supporters).
After some research I realized that since 2010 at least (probably true further back too, but I didn't check), EVERY SINGLE PRESIDENTIAL CAMPAIGN has been fined for major violations of campaign finance laws (usually related to in-kind contributions, or expenses the FEC decides to classify as election expenses later on).
In other words, it seems impossible to run for president without breaking the law.
This is not okay. One of the issues here is that by making candidates break a law, you basically now have some kind of weird leverage over them. You can threaten to prosecute or fine them further and thus have them by the proverbial balls. You also naturally push away people who want to follow the law, which I'd argue is sorely needed in Washington DC.
This opens the door to a lot of bad bad bad blackmail opportunities.
If no party and no candidate is able to stage a presidential campaign without being fined, ranging from Trump's chaotic, high energy campaigns to Clinton's 'proper' campaign with lots of decorum to Biden's bring-back-normal campaign, then I think something is seriously the matter with campaign finance laws.
It was funny how the closing statement admired rugged individualism, too, when that philosophy encourages a lot of damaging effects, similar to how social media has been harming the world.
I'm not sure what I was expecting, but I had a surprisingly positive reaction to this piece. It will be hard to avoid, but I think that preventing GenAI from turning into yet another tech oligopoly is hugely important.
For one, it absolutely will stifle innovation if one or a few companies can control the market. Just look at what Google has done with their money-printing monopoly over the past decade.
Competition will be doubly important if modern AI can fulfill much of the current hype. That kind of power in the hands of a sophisticated used-car salesman like Sam Altman will be bad in so many ways.
What’s ironic is that Michael Seibel has discussed many times on the YC podcast that you should avoid building whatever’s hot for VCs because their attention tends to change every year but you’ll be stuck building for a decade.
2020 was remote work, 2021 was web3, now we have the big LLM boom.
Honestly it seems there’s a lot of advantages to “riding a wave” and a lot of advantages to being contrarian. But if raising money is your priority I do think you should ride the wave. Being contrarian sounds romantic, but don’t expect funding from people who disagree with you.
The most charitable thing I’d say about YCs AI focus is it’s hard to think of a startup idea that couldn’t benefit from AI in some way.
Tbf, I feel remote work really has improved significantly in the past few years, though I don't know whether the contribution from those startups matters or not. Web3 and blockchain are moot; there's little to no practical reason to have them.
AI, though, will be very useful, at least a good one will. Theoretically, AI can swim in an ocean of company documentation and save time searching. It can help doctors diagnose a CT scan faster (if it isn't doing so already).
Definitely not, there are several other topical areas that currently have significant VC appetite.
If anything, the enormous amount of social media attention on AI has made it easier to raise VC in other trending areas because all of the low-quality "me too" startups have gotten pulled into the fashionable AI orbit. This has significantly improved the signal-to-noise ratio in these non-AI areas: because the legions of trend-jumping founders are all doing AI, the startups that remain tend to be founded by people with substantial investment and expertise in their domain, without regard for fashion. This is good for VCs and for founders.
> made it easier to raise VC in other trending areas because all of the low-quality "me too" startups have gotten pulled into the fashionable AI orbit
You are trying to say "adding AI to the pitch won't improve a founder's chances", but what you are saying is "granting money to founders that didn't add AI to their pitch improves a smart VC's chances".
Filling out the form wasn't too hard. It's probably the kind of thing every founder should know and be able to answer. It's easier than most job applications. Definitely easier than college applications... and if you wanted to compare it to college, getting financial aid has a tiny acceptance rate as well. YC is bundling the education and the financial aid.
Rejection isn't hard either. If you're a founder, you'll be rejected by VCs all the time, and many early stage investors offer far lower amounts and far worse advice. You'll be rejected by your product - MVPs often have to be reworked and pivoted. And then you'll face some more rejection when doing sales and product interviews.
I think it's important to be transparent about the rate though, but also important to make it clear that the low rate doesn't mean they're trying to push people away.
It's the same as selective universities telling every student that they should apply regardless of their chance. It makes sense they would do this since it's completely open.
It's not really a lottery, more like a messy matching algorithm for supply and demand. They give everyone a shot since they look at everything, but getting in is not evenly distributed :)
> I love the constant flex of their tiny acceptance rate. "We only accept 1% of applicants. Btw everyone should apply!"
I've seen this mindset among people promoting open positions at desirable companies that don't hire often. In most cases, applying is a waste of time because the company already made its decision to fill the position with an internal candidate.
For us, the application process was a big help in getting focus and better defining what we propose to build. And not getting in has led us to find some other, potentially much better programs which we will apply to. We don’t regret the effort at all.
My only complaint is that keeping everyone hanging on until May 29, ready to clear our plates in June, and then giving absolutely zero feedback for the rejection, was the sort of blatantly self-interested and founder-unfriendly move that, I suppose, it’s good to remember happens a lot in VC Land.
It makes sense that startup folks will gravitate to where there's VC interest as they always have (including the fakers and scammers). But regardless of the current hype cycle the same rules still apply: the reasons why you're doing the startup, if you're an expert in the space, whether you can build a 10x solution, your access to early adopters, your unique point of view on the market. AI is just a new set of tools to generate more value for your specific users, if applied in a unique way that makes a significant difference.
Season 9 Erlich: "Oh my god. It's an AI play. That's the frothiest space in the Valley right now. Nobody understands it but everyone wants in. Any idiot could walk into a fucking room, utter the letters A and I, and VCs would hurl bricks of cash at them.”.
The current tech environment is very different from 6 months ago.
Now most VCs are actually somewhat cautious about AI, largely because they over-invested in companies at ridiculously high valuations and aren't seeing the ROI, especially with many companies simply not seeing the promised benefits from deploying AI.
YC is actually the one that is out of touch with reality.
> Open source AI models allow for greater transparency, collaboration, and innovation by making the underlying code publicly accessible and modifiable.
I feel like open models do virtually nothing for transparency, collaboration, or innovation, and are only modifiable in that they can be fine-tuned. It's "open source" training processes and data that will lead to "transparency, collaboration, and innovation", and I'm unaware of any large company that does this.
Correct. Worse, there are models being touted as "open source" that don't allow a bunch of different uses and specify their own custom licensing (look at what Falcon originally had, Meta's models with specific commercial carveouts, etc.). We need an rms of the new age to call out these fake OSS approaches, as they feel more like they are being done for the OSS marketing shine without actually being free and open.
Your "source" is not open nor is it transparent if training code, original dataset, model architecture details, and training methodology are not all there.
Closest to this would be https://www.eleuther.ai whose training data is largely public and training processes are openly discussed, planned, and evaluated on their Discord server. Much of their training dataset is available at https://the-eye.eu (their onion link is considered "primary", however, due to copyright concerns)
You're correct if you're focused exclusively on the work surrounding building foundation models to begin with. But if you take a broader view, having open models that we can legally fine tune and hack with locally has created a large and ever-growing community of builders and innovators that could not exist without these open models. Just take a look at projects like InvokeAI [0] in the image space or especially llama.cpp [1] in the text generation space. These projects are large, have lots of contributors, move very fast, and drive a lot of innovation and collaboration in applying AI to various domains in a way that simply wouldn't be possible without the open models.
Taking the broader view of this nature feels like an attempt to change the narrative.
The entire point of having transparency is around building those foundations so they don’t inherit the biases of humans, for starters. Right now, we have zero introspection into this and no ability to improve upon it with the widely deployed models being used today, and that has already created problematic situations, let alone situations that are problematic and not known yet.
Transparency around this is a very good thing to prevent AI from inheriting negative human ideas and biases, and broadens access to improve training data that benefits everyone
> I'm not convinced that this wasn't the intent of the change in the first place.
When combined with:
- Pressure to make use of office spaces again, away from remote work
- The AI bubble
- The layoffs that started before Section 174, which demonstrated how inflated headcounts had become
- The collapse of Silicon Valley Bank last year
... it is not looking good for software engineers in the US.
Bootstrapping a tech company in a post Section 174 world doesn’t even seem feasible. I can’t believe this issue isn’t being taken more seriously.
> So startups tend to have real garbage insurance. As someone older with kids, startups are getting more and more prohibitive because I need that healthcare. Maybe startups should be a young man's game. Maybe not.
No. Anybody of any age should be able to take part in labor, otherwise you're arguing for ageism.
[1] https://www.military.com/daily-news/2023/10/27/soldiers-unpa...
Experience.
It is true that sometimes you learn something at time T2 that invalidates something you learned at time T1 (T2 > T1), and thus you do need a de-ratcheting system of some sort.
But what actually drives the ratchet is experience with current policy (or lack thereof). "Oh, we had no plans to deal with X, and we got screwed, so lets add policy for X".
The ratcheting aspect of policy reflects the ratcheting aspect of societal experience accumulation.
https://openai.com/index/better-language-models/ See "Policy Implications" and down.
> First, let’s prioritize open source models and more tailored AI applications to shape the competitive landscape and create real opportunities for startups.
How does YC square this statement with the fact that their ex-president closed their models with the explicit intention of slowing down competitors [1]? Or is the argument "we want politicians to discourage people like us from doing what we did"?
[1] https://www.theverge.com/2023/3/15/23640180/openai-gpt-4-lau...
Why would they need to square anything there? There's nothing contradictory about a former exec not matching the current values of a company.
And more importantly, doing one thing while advocating a policy prohibiting that exact thing isn't necessarily wrong. If the tax rate is 20% and I advocate a 25% tax while not paying the extra 5% until the law is passed, I would say there isn't any contradiction in my actions.
I’m likewise a little wary given some of the history, so maybe a little “wait and see” is in order, but this sounds like a really positive thing to be doing.
I cannot say whether YC changed their mind because I don't know what their mind is. Therefore I commented with the hope of an official answer.
They still have strong connections to OpenAI.
The only way out of this long term, is to take money out of politics, repeal citizens united, revolving doors and other methods of lining politicians' pockets.
So you think it should be illegal to make documentaries critical of Hillary Clinton? Because that's what Citizens United was about, but most people who are against Citizens United don't seem to understand what the case was actually about.
Reconciliation: Companies aren't human entities.
Non-human entities aren't entitled to 1st amendment protections.
Campaign finance is equally simple: run your campaign on public funding. Give all candidates who meet a threshold equal amounts of money.
I have yet to hear a convincing argument about what benefit a democracy receives from campaigns having different amounts of funding. That feels like the tail wagging the dog (your supporters fund you, so you can spend that money to buy more supporters).
In other words, it seems impossible to run for president without breaking the law.
This is not okay. One of the issues here is that by making candidates break a law, you basically now have some kind of weird leverage over them. You can make threats to prosecute or fine further and thus have them by the proverbial balls. You also naturally push away people who would be wanting to follow the law, which I argue is sorely needed in Washington DC.
This opens the door to a lot of bad bad bad blackmail opportunities.
If no party and no candidate is able to stage a presidential campaign without being fined, ranging from Trump's chaotic, high energy campaigns to Clinton's 'proper' campaign with lots of decorum to Biden's bring-back-normal campaign, then I think something is seriously the matter with campaign finance laws.
Ideally, fines are a rare occurrence.
Sources:
1. https://www.cnn.com/2022/03/30/politics/clinton-dnc-steele-d...
2. https://www.politico.com/story/2013/01/obama-2008-campaign-f...
3. https://www.cnn.com/2010/POLITICS/07/17/biden.campaign.fine/... (Biden's 2010 campaign... 2020 campaigns seem to be under investigation)
4. Of course, everyone knows about Trump
> There are many reasons to be optimistic about AI
Without a modicum of awareness.
Paul G [0] and Sam Altman both have recognized the potential dangers.[1]
[0] https://x.com/paulg/status/1651613807779667968
[1] https://blog.samaltman.com/machine-intelligence-part-1
For one, it absolutely will stifle innovation if one or a few companies can control the market. Just look at what Google has done with their money-printing monopoly over the past decade.
Competition will be doubly important if modern AI can fulfill much of the current hype. That kind of power in the hands of a sophisticated used-car salesman like Sam Altman will be bad in so many ways.
So what this means is that in 2024, if you want to get VC capital, your startup must be related to AI.
2020 was remote work, 2021 was web3, now we have the big LLM boom.
Honestly it seems there’s a lot of advantages to “riding a wave” and a lot of advantages to being contrarian. But if raising money is your priority I do think you should ride the wave. Being contrarian sounds romantic, but don’t expect funding from people who disagree with you.
The most charitable thing I’d say about YC's AI focus is that it’s hard to think of a startup idea that couldn’t benefit from AI in some way.
Basically starting with the technology and then finding problems.
The very opposite of what YC has been promoting all these years.
https://www.youtube.com/watch?v=KxjPgGLVJSg
A good AI, though, will be very useful. Theoretically, AI can swim through an ocean of company documentation and save time searching. It can help doctors diagnose a CT scan faster (if it isn't already).
Unfortunately Michael is no longer running things, and it shows in the lack of long-term vision vs. hypecasting.
Garry has had this rep for a long time; disappointing to see this changing of the guard.
If anything, the enormous amount of social media attention on AI has made it easier to raise VC in other trending areas, because all of the low-quality "me too" startups have gotten pulled into the fashionable AI orbit. This has significantly improved the signal-to-noise ratio in these non-AI areas: since the legions of trend-jumping founders are all doing AI, the startups that remain tend to be founded by people with substantial investment and expertise in their domain, without regard for fashion. This is good for VCs and for founders.
You are trying to say "adding AI to the pitch won't improve a founder's chances", but what you are actually saying is "granting money to founders that didn't add AI to their pitch improves a smart VC's chances".
The collective man-hours wasted on applications every year for what is essentially a lottery is insane.
Rejection isn't hard either. If you're a founder, you'll be rejected by VCs all the time, and many early stage investors offer far lower amounts and far worse advice. You'll be rejected by your product - MVPs often have to be reworked and pivoted. And then you'll face some more rejection when doing sales and product interviews.
I think it's important to be transparent about the rate though, but also important to make it clear that the low rate doesn't mean they're trying to push people away.
It's not really a lottery, more like a messy matching algorithm for supply and demand. They give everyone a shot since they look at everything, but getting in is not evenly distributed :)
My guess would be: 90% of applicants are foreigners, plus maybe a fair bit of spam.
The remaining 10% are domestic, and among those that were chosen, there are a lot of Stanford alumni.
I've seen this mindset among people promoting open positions at desirable companies that don't hire often. In most cases, applying is a waste of time because the company already made its decision to fill the position with an internal candidate.
My only complaint is that keeping everyone hanging on until May 29, ready to clear our plates in June, and then giving absolutely zero feedback for the rejection, was the sort of blatantly self-interested and founder-unfriendly move that, I suppose, it’s good to remember happens a lot in VC Land.
I was looking for a job in the summer of 2018 and that's what all the ads were for. Ended up working for an ISP though, which was nice.
AI will.
Now most VCs are actually somewhat cautious about AI, largely because they over-invested in companies at ridiculously high valuations and aren't seeing the ROI, especially with many companies simply not seeing the promised benefits from deploying AI.
YC is actually the one that is out of touch with reality.
I feel like open models do virtually nothing for transparency, collaboration, or innovation, and are only modifiable in that they can be fine-tuned. It's "open source" training processes and data that will lead to "transparency, collaboration, and innovation", and I'm unaware of any large company that does this.
Am I wrong?
Your "source" is not open nor is it transparent if training code, original dataset, model architecture details, and training methodology are not all there.
Where do you go under that link to get it?
E.g. https://the-eye.eu/public/AI/pile/readme.txt says it’s gone (and "old news"? I disagree).
The entire point of having transparency is to build foundations that don't inherit the biases of humans, for starters. Right now we have zero introspection into this and no ability to improve on it with the widely deployed models in use today, and that has already created problematic situations, to say nothing of problems not yet known.
Transparency around this is a very good thing: it helps prevent AI from inheriting negative human ideas and biases, and it broadens access to improving training data, which benefits everyone.