Hah! My first thought was, "Oh, is their plan this time to give the playbook to Microsoft?"
I confess that the current ChatGPT hype wave leaves me feeling cold. A lot of people are obviously excited about it. But then, a lot of people have been excited about VR and blockchains, and I'm still waiting to see significant uptake. I have to wonder if this is another area where it's more sci-fi driven novelty hypnosis than the beginning of a revolution.
Most people have heard of blockchain (through crypto), but non-techies are _actually using_ ChatGPT for daily tasks. Departments of education and universities moved fast to integrate anti-ChatGPT measures into their programs.
I suppose I should have been more specific. I can believe people will be using this. I'm just not yet seeing how it will make things net better. So yes, academic cheaters will surely be trying to use it, although we'll see whether they actually succeed.
But continuing with your analogy, I'll note that the people who fight financial crimes and fraud have absolutely had to integrate responses to cryptocurrencies. That didn't prove that there was lasting value to the cryptocurrency hype wave.
To my shock, she started explaining to me how awesome it is and how all of her teacher friends are using it to offload a lot of the boring work, like writing comments for students' report cards. They are required to write numerous paragraphs for each student several times.
Something that, in the past, would take my wife an entire week each time.
These are people that need help setting up Wi-Fi on a laptop. And they are in love with ChatGPT.
I agree. However, the ChatGPT situation is the first hype wave that's really been harmful rather than just disruptive. It can be, and is being, used to generate low-accuracy noise which degrades quality information. I suspect our demise isn't going to be a big bang but a slow fizzle of information decay. And this is the start of it. At best it's going to make us stupider.
Information decay started around the year 2000. I'm not a hipster; I experienced the pre-2000 internet, which was a semi-exclusive club of educated and intelligent people, with accurate search results.
After the masses poured onto it, the results became more half-truths and opinions.
And now you have "AI" feeding on those half truths and opinions.
Whatever could go wrong?
However imagine if it was fed with facts only.
Recently I tried to search Google for CO2 consumption (as in usage as an industrial gas). It didn't return a single useful hit; everything was about emissions.
So: we are already there, all without ChatGPT... And ChatGPT (with its cool ways of being nondeterministic, like replacing tokens with synonyms and raising the temperature, or retraining it...) will definitely make things worse.
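Raising the temperature, as mentioned above, is the standard knob for trading determinism against variety: the token logits are divided by a temperature T before sampling, so T < 1 sharpens the distribution and T > 1 flattens it. A minimal sketch with toy logits (the numbers are made up; this is not a real model):

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """Sample a token index from logits after temperature scaling."""
    # Dividing by the temperature sharpens (T < 1) or flattens (T > 1)
    # the distribution before the softmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling from the resulting distribution.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# With a very low temperature the highest-logit token wins essentially
# every time; with a high one, lower-ranked tokens get picked too.
print(sample_with_temperature([2.0, 1.0, 0.1], temperature=0.01))  # prints 0
```

At temperature near zero the sampler is effectively greedy and deterministic; the synonym-swapping the comment mentions is a separate trick, not shown here.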
Probably the way to go is for {academia, sects, orgs with a sense for humanity, ...} to build stores of attributable information and then start shunning the branches which produce bullshit (this is what scientific publishing should do already... it doesn't... even without ChatGPT there are already templated articles strewn around the corpus and no one cares). The main issue there is that this means they will also have to stop using a lot of technology (and buy a 32nm foundry...), because many orgs are probably already poisoned.
Information decay started with forums for the masses. It has become increasingly hard to actually find useful advice, because lots of the "answers" you find are actually partly wrong, mostly because people without know-how feel like they need to answer questions to boost their karma...
Many of my coworkers feel the same. And this started way before GPT was conceived.
Besides, your post somehow feels like generic technology-angst to me. It seems like whenever we invent something new, there are those people that predict this is finally our demise...
GPT is a tool, like every other technology. Use it wisely, and you will likely benefit. Use it without engaging your brain, and you will likely see negative effects.
Noise? We've been suffering from it for quite some time now. Once complete control (of information) wasn't possible, the next choice was to baffle them with bullshit (and repetition). This is happening on all fronts, across all ideologies. Sure, some may be more subtle than others, but it's everywhere. To believe otherwise is simply naive.
ChatGPT is a symptom that perhaps finally raises discussion of the root problem above the noise. Yeah, ironic.
The solution is already predicted by sci-fi. We will start building new sub-internets with stricter, formalised rules to reduce spam and noise. The outside internet will become the wild, noisy internet where few people reside. It's like in Cyberpunk 2077, where there is the "wall" that keeps all the AI out of the mainstream internet.
> It can and is being used to generate low accuracy noise which degrades quality information.
I would argue this is a problem stemming from the way "AI" is presented rather than anything to do with "AI" itself.
That is to say, fucking nobody trusts autocorrect to be correct and "AI" is really just a more complex form of autocorrect.
But people hear "AI" and think AI as in Transformers, Terminator, R2-D2, etc. They're intelligent, right? And that's where the presentation is doing significant harm. There is nothing intelligent, let alone sentient, about "AI".
I'm not sure if it's that chatGPT is that good, or that traversing today's attention-weaponized web is that bad, but I think I'm getting about a 30% productivity boost by replacing:
Google -> Stack Overflow
With...
chatGPT --maybe--> Google --> Stack Overflow
I won't make predictions about the future, but it's been pretty transformative for today.
Same here, except I dropped Google as my search engine of choice a long time ago.
ChatGPT is great for doing some of the heavy lifting. I pasted in a list of raw, unprocessed data and asked it to give me unique values sorted alphabetically. I know I could have done it in code, but just being able to paste it in, give a plain English command and get the same results was fantastic.
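For comparison, the code version of that task really is just a couple of lines; a sketch in Python with made-up sample data:

```python
# Deduplicate a pasted list of raw values and sort them alphabetically.
# The data here is invented for illustration.
raw = """banana
apple
cherry
apple
banana"""

values = [line.strip() for line in raw.splitlines() if line.strip()]
unique_sorted = sorted(set(values))
print(unique_sorted)  # prints ['apple', 'banana', 'cherry']
```

The appeal of the chat version, of course, is skipping even this much ceremony.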
100%, this is a tool that is going to be on every person's phone in the next 5 years, whether it's ChatGPT or via search engines or another interface. This has massive implications for the world.
The recent South Park episode where the boys are using ChatGPT to write the perfect romantic replies to their girlfriends really opened my eyes to how impactful this is going to be, not just business copywriting/education essays/etc but even for social interactions like dating (Tinder is going to get interesting).
It's like an autocomplete for your day-to-day life.
> ChatGPT has over 100 million users 2 months in. It's not even remotely comparable to VR or blockchain.
First, that just means ChatGPT had 100m people enter their email into the registration page. Second, none of those 100m users had to invest any money, unlike VR or blockchain.
> But then, a lot of people have been excited about VR and blockchains
I don't know about blockchains, but I'm sure there aren't 100 million VR units sold. Plus, you don't need to pay upfront to try ChatGPT. And everyone who has tried it has been amazed, or at least impressed, at how good the responses are.
I'm not saying that the language-model and AI hype should be believed, but there's substance there. If you are an investor, you should be wary of missing out. And if you're not an investor, you should be wary of losing your human advantage in the economy.
> And everyone who has tried it has been amazed, or at least, impressed at how good the responses are.
This is almost the definition of a novelty effect. Consider, for example, 3D.
Stereoscopic 3D has gone through at least 5 waves of people trying it, saying, "Amazing! This will change everything!" But so far every time everything has not been changed. (The 5 waves I count: Brewster stereoscopes in the 1850s, 3D still viewers in the 1940s, 3D movies in the 1950s, VR in the 1990s, and 3D movies/TV starting in 2009. Assuming that we see the current wave of VR/metaverse hype as still open, of course.)
The question for me is what happens once something gets past "amazing". For example, consider Shazam's music recognition technology. It was widely considered amazing at the time. But in very short order it became boring, as it had basically one use case, one that did not matter much to most people.
It is a common and uninformed HN fallacy to equate AI and Blockchain.
While blockchain only has hype, AI has hype and real applications. While there are plenty of bottom-feeder AI bros on Twitter and Discord, AI solves real problems and makes people real money.
You need to shift your focus from the AGI and LLM hype to real application areas.
I have shipped an end-user-facing real-world AI product myself. It saved each client (non-tech companies) $2-3 million USD per year, and made our company money.
I can say with authority that if you think AI and blockchain are the same, you are plain WRONG.
I think at the very least this is going to lead to some major changes in UI design, but first they need to reel it in and get it under control instead of training it on Reddit posts and then making a surprise-Pikachu face when it turns out to be an arrogant cunt.
But it's definitely not going to go nearly as far as people are predicting. The more I use it, the more apparent it becomes that there's no real intelligence behind it, just large-scale pattern recognition that makes it seem real because I'm not intelligent enough to comprehend the patterns.
> apparent to me that there's no real intelligence behind it
Does it even need to be to be useful?
A ChatGPT that is legitimately "intelligent" would basically be the El Dorado of tech. There's plenty of value between here and there.
The big issue, IMO, is that people are going to ChatGPT's website explicitly and having certain high expectations, or using it via Bing expecting a search-engine answer machine. Once such an interface is integrated into our lives more directly, in a clean way where people understand what it is and what its limitations are, it could easily be an app that basically everyone has installed; more like a super-useful tool like Google Maps than a replacement for Google. That's still a big business and a major development for the industry.
And even if the daily-use consumer mobile app or some high-visibility Google integration turns out to be less interesting/useful than predicted, there are still a million niche business and hobby use cases.
I think ChatGPT will get to us whether we want it or not. It has more obvious consumer applications, and I'm sure it has applications which have not been realized yet. The most obvious applications I can think of off the top of my head:
- Customer reviews on sites like Amazon can be gamed with significantly better accuracy, at significantly higher numbers. Presumably there will be a period where this is successful for marketers, and then eventually people will come to wholly distrust customer reviews altogether. This will also likely be the death of the "reddit" trick, where you can just query on reddit for real, live customer experiences with products.
- Significantly increased ads on anything like a social network, where a "user" might be talking about a product.
- Generally cheaper ability to write advertisements in a broad category of platforms.
- Sponsored misinformation (government, corporate, etc.) at scale.
To be clear, I think that ChatGPT is going to be a disaster for the internet.
Have you had a chance to use Bing Search with AI/ChatGPT? I have been using it for over two weeks and I am pleasantly surprised how practical and generally useful it is.
I pay for a premium OpenAI ChatGPT account, but except for explaining what code in unfamiliar programming languages does (including libraries it is calling), and occasionally having it write an email for me I don't get a lot of value out of it.
I do get a lot of value from the OpenAI APIs (as well as the offerings from Hugging Face).
It depends on what you expect of ChatGPT. A reliable human-like assistant? No. A reliable natural-language API to every IoT capable device? Absolutely.
For search, ChatGPT is the best out there. If I have a question, I don't have to read the whole Wikipedia entry; ChatGPT just gives me the relevant part.
Oh, the horrors! Reading a whole Wikipedia entry! (Which is not what people trying to answer a question do anyhow; they look at headings and skim for the bit they want.)
I believe that gives an answer. But the right answer? Even Wikipedia doesn't guarantee that.
In general, it seems like the things that have actually ended up changing the world significantly in my lifetime seemed insignificant when they were first introduced. The first time I was on the internet, it seemed novel but unimportant. When I created my first social media account, it was interesting, playful, and not going to change the world. The iPhone was a joke; who would want a phone that big?
The simple fact that people are predicting that GPT will somehow change the world makes me doubt its actual value.
Those are your personal reactions, but those were not common reactions at the time. In the early 1990s I was telling people the Internet was amazing and showing it off to them. Social media was a huge deal starting with SixDegrees. The iPhone was incredibly hyped at the time.
I think ChatGPT is worth doubting, but I think your examples need some work.
The FPS effect is tangible. The visual effect is subtle in a game made in 2016 before anyone in the mainstream was talking about this sort of thing, but imagine the possibilities for an indie developer without much funding. They can make their assets at a resolution that's reasonable for their budget, but that would look bad scaled up with traditional methods. They can offer higher resolutions at AAA quality without an AAA performance tuning and asset budget.
See, I got that answer a lot with VR. And like you, they never asked whether I had used it, generally assuming that the only possible way I could be skeptical is through ignorance.
But as with both of those and plenty of other novel technology, whether or not I like it doesn't answer most of my questions, which are about what other people will use things for. E.g., I did not like YouTube, but I am just not a video person. So me using YouTube and saying "meh" wasn't informative, because I wasn't the target audience. Similarly, I was wowed by the modern VR experience, but being wowed didn't make me a long-term user, something apparently true for a lot of people.
I've never been excited about VR and "blockchains". I was, and still am, excited about Bitcoin, which in my opinion is the only useful application of blockchain. I have used Bitcoin on a daily basis for over a decade.
As for ChatGPT, I'm actually using it and it enhances my productivity by a lot. So, I believe this isn't just hype.
Microsoft's implementation of ChatGPT in Edge (Dev) is weak. They have really neutered its ability and the length/number of queries you can ask. It also tries to sprinkle in some web suggestions as well.
This is worrisome because I think mega corporations like Microsoft and Google want AI to just be a fun little tool for drafting emails and writing jokes and recipes. Any more powerful language constructions are too risky to their existing business models or too risky in terms of PR. People will just screenshot something the AI says philosophically and it will get retweeted and people will lose their mind.
I really hope an open-source workable competitor to ChatGPT emerges soon.
We said the same thing, in one way or another, about indexing and searching the web: privacy/anonymity, spam/SEO, algorithms, bias, corporations... So today one can build their own crawler and search engine with great open-source software, yet the vast majority queries a web index through third parties, since the computing power required (well, mostly storage and networking) remains brutal, and then there are all those nit-picky details about crawling one needs to painstakingly figure out.
Even if by Moore's law we get that kind of power in our own computers, enough to run a clone of today's ChatGPT, there will always be a commercial version that is 100x more awesome out there, built by actors with immense compute power backed by rivers of corporate money.
Would love it, too. But at this stage locking it up behind a corporate paywall is probably better IMO - at least until governments around the world are prepared to address a tsunami of misinformation, fraud, and abuse that these systems enable.
This really does not sound like you would love it to be an open source workable competitor at all.
IMO it's not the task of governments to be prepared to address misinformation. It's an individual responsibility. This requires an educated civil society with the media literacy necessary to make a democracy work in the 21st century. It's an ongoing, aspirational project. ...and we can't wait until we think we're ready.
I know that some of the features were available later in some other social networks too, but the original premise of making groups (friends, family, coworkers, random people I met I don't know where,...) and posting selective content into selective groups was great.
Some good ideas and horrible execution. First of all, the UI wasn't great; then you were forced to use your (back then) "real-name" Google account; and there were no third-party clients.
Maybe it was different for people who live and breathe GMail and the google ecosystem but for me it was a walled garden from the start and I kinda hated the UI.
It actually was a great idea and had a good implementation, but Google seems incapable of promoting/marketing/maintaining/growing a product, especially not something targeting the younger side of the B2C market like a social network. And then the monetization was pretty bad, I guess, so it was orphaned and then killed.
Now it's TikTok, and these "first-gen social networks" (like Facebook, Twitter, ...) are mostly a boomer playground; even Instagram is slowly joining that list. Aside from the tech or the medium (video vs photo vs text vs ...), I think this will happen every few years: when the youth receive friend requests from their parents on platform X, that's the peak/decline moment, no matter how great it is.
Most of those social networks are responsible for their own demise...
Facebook was for looking at content from friends, and then it moved to mainstream media recommended content, ads, and every 20 or so posts, something from a person you actually knew.
People then switched to Instagram, where it was based around photos (and not "statuses"), and it was great... now it's 20 photos/videos of "suggested content" and one of someone you actually know.
Social networks should stay social, not become another cable TV provider with "recommended content" and ads.
G+ wasn't designed to be very usable, even ignoring the part where it had 0 users. It was a nerdy design, seemingly trying to bring the fun of ACL systems to the public.
Btw, one lasting disconnect between most Google SWEs and the general public: many SWEs consider YouTube a social network today.
To be fair, Google copied this playbook whole-cloth from how Microsoft fought Netscape and survived its transition to the Internet - I was there when billg sent the famous memo. LLMs are obviously a sea-change and existential threat to search, and Google will be looking to survive the transition, even while its core ad serving/matching business has a moat, YouTube has a moat, Android and the App Store have moats, ...
I have my frustrations with PMs like everyone else, and this plan does seem over corrective and destined for failure. But I don't mind having PMs, all in all. I'm not an idea guy, I'm an implementation guy.
"A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects." - Heinlein
Or they may decide to rewrite the whole thing in Rust, and before that write a new framework to make sure that productivity is optimized, and before that write a project management tool for it, because the current one sucks.
Yes, all software engineers are actually complete idiots who, if left unsupervised, will walk around with their trousers on their heads, bumping into walls.
I worked for Google when the Google+ announcement went out and they tied our bonuses and other successes to social. I worked in Cloud, building a non-social product, and it was absolutely demoralizing. Guess what? Google+ crashed and burned, but now Google is doing cloud in a sort of semi-serious way, but still chasing after ML and not rewarding the hard-working cloud engineers. Really a messed up incentivization system.
> I confess that the current ChatGPT hype wave leaves me feeling cold.

The hype is real.
Blockchain? All that pointless CO2 for nothing.
ChatGPT is being used by common people in practical ways, not just going "wow." It's already proven itself.
> I can say with authority that if you think AI and blockchain are the same, you are plain WRONG.
This is attacking a straw man that you built all by yourself.
I think it's the beginning of a revolution. The ground has shifted under our feet.
It'll be really interesting to see how resistant it is to these problems.
Once upon a time Google Search was a great product.
https://www.nvidia.com/en-us/geforce/news/may-2021-rtx-dlss-...
Also, I think that once GPT understands images, designers' and front-end devs' jobs will start evaporating.
The American system, as designed, is diametrically opposed to waiting for governments around the world to protect you from misinformation.
Turns out that misinformation is already incredibly cheap to create and the bottleneck lies elsewhere.
Would have been actually great without the "love of my life", and if the "creeps" were just "friends" or something.
disclaimer: I'm long GOOG after the selloff.
https://www.wired.com/2010/05/0526bill-gates-internet-memo/ "I want every product plan to try and go overboard on Internet features."
Top level goals make sense for focusing, but for generally just improving products significantly, getting rid of blockers is what's important.
They do not generate value through ideas, and they do not generate value through implementations.
They solicit other people's ideas, count how many votes each idea gets, and pass them to the grunts.
They are middle management distilled.
Devs, especially good ones, can make good products without a PM. A PM can be a nice tool to interface with the rest of the company, sure.
Yeah, with abysmal user interfaces.
IME PMs are no better at that stuff than engineers.
It's unfortunate; this statement will probably never be true due to the privatization of research and corporatization of these technologies.
AI will more likely have a profoundly negative effect than a positive one.