Doesn't matter. We must keep building more and more technology no matter the cost. Have an idea for a business? Build it. Does your business make the lives of people worse? Doesn't matter, keep pushing. Could some new technology ruin the lives and relationships that people have? Doesn't matter, just build it. We always need more, need to do more. Every experiment is valid, every impulse must be followed. More complexity, more control, more distraction, more outrage, more engagement. Just keep building forever no matter the cost.
> You better maximize engagement or you will lose engagement this is a red queen’s race we can’t afford to lose! Burn all the social capital, burn all your values, FEED IT ALL TO MOLOCH!
That's an uncharitable take. Those VCs care very deeply about society, that's why they're funding so much research into the Torment Singularity (and giving so many talks about it) and making sure that the Right People get the Torment Nexus first so "we" can decide how it gets used.
Has any big-tech company ever used "Torment Nexus" unironically as a new project name, or any startup used it as their company/product name? I feel it's close if it hasn't happened already. I mean the unironic use.
Eric Weinstein refers to this as an Embedded Growth Obligation (EGO), whereby organizations and economies at large assume perpetual growth, and things really start to unravel when that growth inevitably slows. It is pretty mindblowing how we have basically accepted growth as the default state; it is not at all a given that things always grow and get better.
> It is pretty mindblowing how we have basically accepted growth as the default state
It is completely to be expected, exactly because it is not new.
It's been scarcely a generation since the peak in net change of the global human population, and will likely be at least another two generations before that population reaches its maximum value. It rose faster than exponentially for a few centuries before that (https://en.wikipedia.org/wiki/World_population#/media/File:P...). And across that time, for all our modern complaints, quality of life has improved immensely.
Of all the different experiences of various cultures worldwide and across recent history, "growth" has been quite probably the most stable.
Culture matters. People's actions are informed by how they are socialized, not just by what they can observe in the moment.
We will achieve essentially zero-cost infinite exponential scalability! The cloud has no limits! InfiniDum enterprises will operate in billions of markets across time, space, and dimensions of probability!
This is ignoring the Marketing-to-Engineering ratio. For most of recent history, technology companies have had to spend at least as much on marketing as on engineering in order to survive, and spending two to ten times as much on marketing as on engineering is common for successful companies. "Who is going to buy the thing?" is the most important question, and without solid answers there is nothing, no matter how much technology was engineered.
Now this formula has been complicated by technological engineering taking over aspects of marketing. This may seem to be simplifying and solving problems, but in some ways it actually makes everything more difficult. Traditional marketing, which focused on convincing people of solutions to problems, is declining in importance. What is becoming most critical now is convincing people they can trust providers with potential solutions, and this trust is a more slippery fish than belief in the solutions themselves. That is partly because the breakdown of trust in communication channels means discussion of solutions is likely never to be heard.
One of the potential upsides to this is that people just might start taking time to engage in a bit of critical thinking before reacting. Is this real? How likely is this AI nonsense? What is the source? Is this the full picture? etc.
Neil deGrasse Tyson has a quote expressing concern about the future impact of AI on information credibility.
The exact quote is:
"I foresee the day where AI become so good at making a deep fake that the people who believed fake news as true will no longer think their fake news is true because they'll think their fake news was faked by AI."
Yuval Noah Harari, of Sapiens fame [0], has a great quote (paraphrasing):
Interviewer: How will humans deal with the avalanche of fake information that AI could bring?
YNH: The way humans have always dealt with fake information: by building institutions we trust to provide accurate information. This is not a new phenomenon btw.
In democracies, this is often either the government (e.g. the Bureau of Labor Statistics) or newspapers (e.g. the New York Times) or even individuals (e.g. Walter Cronkite).
In other forms of government, it becomes trust networks built on familial ties e.g. "Uncle/Aunt is the source for any good info on what's happening in the company" etc

0 - https://amzn.to/4nFuG7C
The problem is that too many people just don't know how to weigh different probabilities of correctness against each other. The NYT is wrong 5% of the time - I'll believe this random person I just saw on TikTok because I've never heard of them ever being wrong; I've heard many stories about doctors being wrong - I'll listen to RFK; scientific models could be wrong, so I'll bet on climate change being not real etc.
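As a rough back-of-the-envelope illustration of that calibration failure (my own sketch, with made-up sample sizes, assuming a uniform prior over each source's true error rate), a source you've only seen be right ten times is still worse in expectation than one with a documented 5% error rate:

```python
# Hypothetical sketch: comparing a source with a long, documented track record
# (~5% wrong) against one you've only seen be right 10 times, under a uniform
# Beta(1, 1) prior over each source's true error rate. Sample sizes are made up.

def posterior_mean_error(wrong, total, prior_a=1.0, prior_b=1.0):
    """Mean of the Beta posterior over the error rate after seeing
    `wrong` mistakes out of `total` checked claims."""
    return (prior_a + wrong) / (prior_a + prior_b + total)

def prob_error_above(threshold, wrong, total, prior_a=1.0, prior_b=1.0, steps=20000):
    """P(true error rate > threshold) under the Beta posterior, via grid integration."""
    a, b = prior_a + wrong, prior_b + (total - wrong)
    xs = [(i + 0.5) / steps for i in range(steps)]
    density = [x ** (a - 1) * (1 - x) ** (b - 1) for x in xs]
    norm = sum(density)
    return sum(d for x, d in zip(xs, density) if x > threshold) / norm

# Established outlet: 50 errors found in 1000 checked claims (~5% wrong).
print(posterior_mean_error(50, 1000))    # ~0.051
print(prob_error_above(0.10, 50, 1000))  # effectively 0 - a 10%+ error rate is essentially ruled out

# Random account: 10 claims seen, none known to be wrong.
print(posterior_mean_error(0, 10))       # ~0.083 - still worse in expectation
print(prob_error_above(0.10, 0, 10))     # ~0.31 - nearly a 1-in-3 chance it's worse than 10%
```

"Never heard of them being wrong" over ten claims still leaves enormous uncertainty; the boring, documented 5% track record is the better bet.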
Trust is much more nuanced than N% wrong. You have to consider circumstantial factors as well: who runs The NY Times, who gives them money, what the reason was when they were wrong, and even when they’re not wrong, what information they are leaving out. The list goes on. No single metric can capture this effectively.
Moreover, the more political a topic, the more likely the author is trying to influence your thoughts (but not me I promise!). I forget who, but a historian was asked why they wouldn’t cover civil war history, and responded with something to the effect of “there’s no way to do serious work there because it’s too political right now”.
It’s also why things like calling your opponents dumb are so harmful. Nobody can fully evaluate the truthfulness of your claims (due to time, intellect, etc.), but if you signal “I don’t like you” they’re rightfully going to ignore you, because you’re signaling that you’re unlikely to be trustworthy.
5% wrong is an extremely charitable take on the NYT.
I once went to a school that had complimentary subscriptions. The first time I sat down to read one, there was an article excoriating President Bush over Hurricane Katrina. The entire article was a glib expansion of an “expert” opinion from someone who was just a history teacher, who said that it was “worse than the Battle of Antietam” for America. No expertise in climate. No expertise in disaster response. No discussion of facts. “Area man says Bush sucks!” would have been just as intellectually rigorous. I put the paper back on the shelf and have never looked at one since.
COVID ended my trust in media. I went from healthy skepticism to assuming everything is wrong/a lie. There was no accountability for this so this will never change for me. I am like the people who lived through the Great Depression not trusting banks 60 years later and keeping their money under the mattress.
The problem isn't "The NYT is wrong 5% of the time". It's that institutions are systematically wrong in predictable ways that happen to benefit their point of view. It's not random. It's planned.
This was done intentionally, over decades, to try to push ‘trust’ closer to where it can be controlled. Religion, family ties (through propaganda), etc.
For sure. He then goes on to mention "in democracies", but a lot of democracies are now failing, in part because institutions like the free press are being directed by their billionaire owners, or suppressed. And family ties are being impacted heavily by mass misinformation and propaganda campaigns online, which the platforms themselves are actively pushing (major worldwide social networks are now state-influenced, and/or their top brass have subjugated themselves to the reigning parties, and/or any countermeasures have been removed).
How very inconvenient it is, then, that at the same time intentional efforts to spread uncertainty and to erode trust in traditional institutions are at an all-time high! Must be a coincidence.
It's a feedback loop; you need things like freedom of speech and press to get a functional and free democracy, but you need a functional and free democracy to have freedom of speech and press. Infringe on one and you take down the other. But you need to strip down the judicial branch of a free democracy first, because democracy and freedom of speech/press are protected by a constitution in most cases.
Our familial ties have been corrupted, supposing they were ever anything a sane person should've relied upon. And if humans can build institutions they trust, what happens when AI can build fake, simulated institutions that hit all the right buttons for humans to trust just as if they were of the human-created variety? Do those AIs lock in those pseudo-institution followers forever? Walter Crondeepfake can't not be trusted, just listen to his gravitas!
Trusting institutions is fine, but you have to trust people or institutions for the right things; blind trust is harmful.
I'll trust my doctor to give me sound medical advice and my lawyer for better insights into law. I won't trust my doctor's inputs on the matters of law or at least be skeptical and verify thoroughly if they are interested in giving that advice.
Newspapers are a special case. They like to act as the authoritative source on all matters under the sun, but they aren't. Their advice is only as good as the sources they choose, and those sources tend to vary wildly for reasons ranging from incompetence all the way to malice, on both sides.
I trust BBC to be accurate on reporting news related to UK, and NYT on news about US. I wouldn't place much trust on BBC's opinion about matters related to the US or happenings in Africa or any other international subjects.
Transferring or extending trust earned in one area to another unrelated area is a dangerous but common mistake.
The thing is, building such institutions and maintaining trust is expensive. Exploiting trust is lucrative (fraud, etc.). It's also expensive to not trust - all sorts of opportunities don't happen in that scenario if, say, you can't get a friend or relative in the right place.
There are many equilibrium points possible as a result. Some have more trust than others. The "west" has benefited hugely from being a high trust society. The sort of place where, in the Prisoner's Dilemma matrix, both parties can get the "cooperate" payoff. It's just that right now that is changing as people exploit that trust to win by playing "defect", over and over again without consequence.
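To make the Prisoner's Dilemma framing concrete, here is a minimal sketch (my own illustration, using the textbook payoff values rather than anything from this thread) of why a high-trust population is so lucrative to exploit when defection carries no consequence, and how the advantage evaporates once trust is withdrawn:

```python
# Repeated Prisoner's Dilemma sketch with the standard 3/5/1/0 payoffs.
PAYOFFS = {  # (my move, their move) -> (my payoff, their payoff)
    ("C", "C"): (3, 3),   # mutual cooperation: the "high-trust" payoff
    ("C", "D"): (0, 5),   # I trust, they exploit
    ("D", "C"): (5, 0),   # I exploit their trust
    ("D", "D"): (1, 1),   # mutual defection: the low-trust equilibrium
}

def play(strategy_a, strategy_b, rounds=100):
    """Run a repeated game and return cumulative payoffs for both players."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each strategy sees the opponent's past moves
        move_b = strategy_b(history_a)
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a += pa
        score_b += pb
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

always_cooperate = lambda opp: "C"
always_defect = lambda opp: "D"
# Tit-for-tat: cooperate first, then mirror the opponent's last move,
# i.e. trust that is withdrawn after it is exploited.
tit_for_tat = lambda opp: "C" if not opp else opp[-1]

print(play(always_cooperate, always_cooperate))  # (300, 300): the high-trust society
print(play(always_cooperate, always_defect))     # (0, 500): trust exploited without consequence
print(play(tit_for_tat, always_defect))          # (99, 104): exploitation barely pays once trust is withdrawn
```

The third matchup is the missing "consequence": once the cooperator starts mirroring defection, the defector's edge mostly evaporates, and everyone is stuck with the low-trust payoff.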
> YNH: The way humans have always dealt with fake information: by building institutions we trust to provide accurate information. This is not a new phenomenon btw.
Funny that he doesn’t say that the institutions have to provide accurate information, but just that we have to trust them to provide accurate information.
That's not how they did it though. Trusted institutions are only really needed in a trustless society and reliance on them as a source of truth is a really new trend. Society used to be trustful.
Unfortunately that's not what happens.
BBC, Al-Jazeera, RT, CBC are all propaganda sources and are not sources of information.
The other family members will get their information from those sources, so family can't be trusted either.
And as for the sources I consider trustworthy, my opinion of them is most likely skewed by my own bias, and others will consider them propaganda as well.
CBC and BBC aren't perfect, but I trust them leagues over any billionaire-owned media, like anything from Post Media, the Murdochs, or Bezos. Really, any for-profit news isn't to be trusted.
> Will you still be here in 12 months when I’ve integrated your tool into my workflow?
This is the biggie; especially with B2B. It's really 3 months, these days. Many companies have the lifespan of a mayfly.
AI isn't the new reason for this. It's been getting worse and worse in the last few years, as people have been selling companies, not products; but AI will accelerate the race to the bottom. One of the things that AI has afforded is that the lowest-tier, bottom-feeding scammer can now look every bit as polished and professional as a Fortune 50 company (often, even more).
So that means that not only is the SNR dropping, the "noise" is now a lot riskier and uglier.
> One of the things that AI has afforded is that the lowest-tier, bottom-feeding scammer can now look every bit as polished and professional as a Fortune 50 company (often, even more).
I needed to get some builder quotes for my home. It did not enter my mind to go online to search for any.
I just reached out to my family for any trustworthy builders they've had, and struck up conversations with some of my fancier neighbors for any recommendations.
(I came to the conclusion that all builders are cowboys, and I might as well just try doing some of this myself via youtube videos)
Using the internet to buy products is not a problem for me; I know roughly the quality of what I expect to get and can return anything not up to standard. Using the internet to buy services, though? Not a chance. How can you refund a service?
When we needed some work done, we asked family and friends too, and ended up with a cowboy. When the work needed to be re-done, we looked up local reviews for contractors, and ended up with someone who was more expensive but also much more competent, and the work was done to a higher standard.
> ended up with someone who was more expensive but also much more competent, and the work was done to a higher standard.
How do you know that? Or is it just that your bias is that cowboys are bad, and so you assume someone who dresses and acts better is better?
Now step back; I'm not asking you personally, but the general person. It is possible that you have the knowledge and skills to do the job, and so you know how to inspect it to ensure it was done right. However, the average person doesn't have those skills, and so won't be able to tell the well-dressed person who does a bad job that looks good from the poorly dressed person who does a good job but doesn't look as good.
Every house is different, so every job is custom. Whatever standards you think the builder is following to get the job done at an agreeable price are likely an ad-hoc solution that you yourself could have done as an amateur if you had the tools.
For every standard to be met, you compromise either on cash or time.
Maybe this is regional. "Cowboy" in reference to a tradie means someone who is untrustworthy and out to rip you off or cheap out on work. They'll do the work but often to a low standard or with major defects they try to hide.
AI-esque blog post about how infinite AI content is awful, from "a co-founder at Paid, which is the first and only monetization and billing system for AI Agents".
I'm not saying it's actually written with AI (and indeed, I don't think that's the case; hence my calling it "AI-esque" rather than actually AI generated). It's just that it's a particular style of businessy blog writing that, though originated by humans, AI is now often used to crank out. Lots of bullet points, sudden emphases, etc.
It's just funny, even by hand, to be writing in the infinite AI content style while lamenting the awfulness of infinite AI content while co-founding a monetization and billing system for AI agents.
I think this basically proves your point. There were things about it that made me think it may have been at least "AI-assisted", until I saw your "guaranteed bot-free" thing at the bottom. Anyone doing entirely hand-written things from now on is going to be facing a headwind of skepticism.
this is a funny phenomenon that I keep seeing. I think people are going through the reactionary “YoU mUsT hAvE wRiTtEn ThIs oN a CuRsEd TyPwRiTeR instead of handwriting your letter!1!!” phase.
hopefully soon we move on to judging content by its quality, not whether AI was used. banning digital advertising would also help align incentives against mass-producing slop (which has been happening since long before ChatGPT was released)
This didn’t seem AI-generated to me, although it follows the LinkedIn pattern of “single punchy sentence per paragraph”. LinkedIn people wrote like this long before LLMs.
I do love the irony of someone building a tool for AI sales bots complaining that their inbox is full of AI sales slop. But I actually agree with the article’s main idea, and I think if they followed it to its logical conclusion they might decide to do something else with their time. Seems like a great time to do something that doesn’t require me to ever buy or sell SaaS products, honestly.
I wish we were talking about what's next versus what's increasingly here.
How can infinite AI content be strictly awful if it forces us to fix issues with our trust and reward systems? Short term, sure. But infinite (also) implies long term.
I wish I had a really smart game theorist friend who could help me project forward into time if for nothing other than just fun.
Don't get me wrong, I'm not trying to reduce the value of "ouch, it hurts right now" stories and responses.
But damned if we don't have an interesting and engaging problem on our hands right now. There's got to be some people out there who love digging in to complicated problems.
What's next after trust collapses? All of us just give up? What if that collapse is sooner than we thought; can we think about the fun problem now?
From a game-theory perspective, if players rush the field with AI-generated content because it's where all the advantages are this year, then there's going to be room on the margins for trust-signaling players to advance themselves with more obviously handspun stuff. Basically, a firm handshake and an office right down the street. Lunches and golf.
The real question to ask in this gold rush might be what kind of shovels we can sell to this corner of hand shakers and lunchers. A human-verifiable reputation market? Like Yelp but for "these are real people and I was able to talk to an actual human." Or diners and golf carts, if you're not into abstractions.
That gets my brain moving, thanks. What do you think those who are poor/rich in a trust economy look like? How much of a transformation to a trust economy do you think we'll make?
> How can infinite AI content be strictly awful if it forces us to fix issues with our trust and reward systems?
You're assuming they can be fixed.
> But damned if we don't have an interesting and engaging problem on our hands right now. There's got to be some people out there who love digging in to complicated problems.
I'm sure the peasants during the Holodomor also thought: "wow, what an interesting problem to solve".
I don't have the time to read all four stories that ChatGPT turned up right this minute, but I now have cause to believe that at least some minority of those peasants you refer to did find fun in solving their problems.
I'm with that group of people. What was your point in bringing this up?
We need a PageRank-like algorithm for "Trust / Human Content" that is applied directly to the source of such content. E.g. the following three channels are all AI-made, but all of their content can be likened to an advanced AI version of audio-based videos of Wiki articles. If a video is providing just a summary based on established historical facts, even though it is AI-based, how is it different from referring to a thesaurus or dictionary? Aren't such videos making "knowledge" accessible?
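For concreteness, here is a minimal sketch of what such a PageRank-style trust score could look like, assuming a hypothetical endorsement graph where an edge means one source cites or vouches for another; the graph, damping factor, and iteration count are illustrative assumptions, not anything specified above:

```python
# Hypothetical sketch of a PageRank-style "trust score" over content sources.
# Nodes are sources; an edge A -> B means "A endorses or cites B".
# The graph, damping factor, and iteration count are illustrative assumptions.

def trust_rank(endorsements, damping=0.85, iterations=50):
    """Power-iteration PageRank where trust flows along endorsement edges."""
    nodes = set(endorsements) | {t for targets in endorsements.values() for t in targets}
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        new_rank = {node: (1.0 - damping) / n for node in nodes}
        # Rank held by sources that endorse nothing (dangling nodes) is spread evenly.
        dangling = sum(rank[node] for node in nodes if not endorsements.get(node))
        for node in nodes:
            new_rank[node] += damping * dangling / n
        for source, targets in endorsements.items():
            if targets:
                share = damping * rank[source] / len(targets)
                for target in targets:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Toy endorsement graph: two outlets and a wiki cite each other; a content farm
# cites the wiki to look legitimate, but nothing endorses the farm back.
graph = {
    "outlet_a": ["wiki", "outlet_b"],
    "outlet_b": ["wiki", "outlet_a"],
    "wiki": ["outlet_a", "outlet_b"],
    "content_farm": ["wiki"],
}

for source, score in sorted(trust_rank(graph).items(), key=lambda kv: -kv[1]):
    print(f"{source:16s} {score:.3f}")
```

The toy graph shows the intended behavior: trust flows toward sources that others endorse, while a source nothing links back to keeps only the baseline share. Closed rings of sources endorsing each other would still need spam detection on top, just as with PageRank itself.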
What I get from the article is that proving a company will stick around for a while after you’ve subscribed is hard now, because anybody can AI-generate the general vibe of the marketing department of a big established player. This seems like it’ll be devastating for companies whose business model requires signing new users up for ongoing subscriptions.
Maybe it could lead to a resurgence of the business model where you buy a program and don’t have to get married to the company that supports it, though?
I’d love it if the business model of “buy our buggy product now, we’ll maybe patch it later” died.
you need to prove beyond a doubt that YOU are the right one to buy from, because it's so easy for 3 Stanford dropouts in a trenchcoat to make a seemingly successful business in just a few days of vibecoding.
I think the point is that nobody will give companies money unless the product already works. No more "but in a month this'll get a really cool update that'll add all these features". If you can't trust that a company will continue to exist, you have to be confident that what you're buying is acceptable in its current state.
The modern software market seems like a total inversion of normal human bartering and trade relationships, actually…
In Ye Olden Days, you go to the blacksmith, and buy some horseshoes. You expect the things to work, they are simple enough that you can do a cursory check and at least see if they are plausibly shaped, and then you put them on your horse and they either work or they don’t. Later you sell him some carrots, buy a pot: you have an ongoing relationship checkpointed by ongoing completed tasks. There were shitty blacksmiths and scummy farmers, but at some point you get a model of how shitty the blacksmith is and adjust your expectations appropriately (and maybe try to find somebody better when you need nails).
Ongoing contracts were the domain of specialists and somewhat fraught with risk. Big trust (and associated mechanics, reputation and prestige). Now we’re negotiating ongoing contracts for our everyday tools; it is totally bizarre.
Wow. A new profile text for my Tinder account!
If you are not building the next paperclip optimizer, the competition already is!
nevermind if the things are people or their lives!!
> nevermind if the things are people or their lives!!
Breaking things is ok. If people are things then it's ok to break them, right? Got it. Gotta get back to my startup to apply that insight.
Larry Fink and The Money Owners.
"Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should."
Perhaps I am too optimistic...
Trust is hard earned and easily lost.
Don’t get emotionally attached to content farms.
It’s simply reality, or else propaganda wouldn’t work so well.
Except those institutions have long lost all credibility themselves.
https://en.wikipedia.org/wiki/High-trust_and_low-trust_socie...
Wall Street- and financier-centric, and biased in general. Very pro-oligarchy.
The worst was their cheerleading for the Iraq war, and swallowing obvious misinformation from Colin Powell at face value.
Made my day. So true.
because you know the brands and trust them, to a degree
you have prior experience with them
Also, this is entirely hand-written ;)
This is just how I've written for the last few years.
Wait, was I just trolled? If so, lol. Got me!
Would this truly be a move back? I've met people outside my social class and disposition who seem to rely quite heavily on networking this way.
You can't regress back to being a kid just because the problems you face as an adult are too much to handle.
However this is resolved, it will not be anything like "before". Accept that fact up front.
If you try to “go back” you’ll just end up recreating the same structure but with different people in charge.
Meet the new boss, same as the old boss - biological humans cannot escape this state because it’s a limit of the species.
FINAL Financial hours of U.S.A. just before the 1929 crash
https://www.youtube.com/watch?v=dxiSOlvKUlA&t=1008s
The Volcker Shock: When the Fed Broke the Economy to Save the Dollar (1980)
https://www.youtube.com/watch?v=cTvgL2XtHsw
How Inflation Makes the Rich Richer
https://www.youtube.com/watch?v=WDnlYQsbQ_c
I'm using this