This is correct, but there's also a meta comment to be made: research has an "Instagram problem". What he's talking about isn't really even AI so much as deep learning: there are lots of other branches that now get virtually no play relative to deep learning. And then there are all sorts of adjacent CS fields, image processing (when was the last time you saw someone talk about wavelets?), language analysis, and other research avenues, probably often better suited to many problems, that get buried under the hype.
This is why academic research is so important. All the corporate money is flowing to deep learning, so grant agencies should be sustaining the other, more fundamental areas.
Academic research has the same problem. When I studied neuroscience, flashy fMRI studies received substantially more funding than fundamental research. Without understanding how small neural nets work, it's difficult to construct bottom-up theories of the brain.
The problem is academia just doesn't pay a livable salary these days. When I'm getting hit with thousands of dollars of medical bills on a regular basis, despite having insurance, having a corporate salary cushion helps for peace of mind. A lot.
The goal of academic funding is to help the military and also/later the economy.
To think otherwise is naive. The whole point of research is to have an advantage. Unless that money has no strings attached, you’re beholden to someone. Every historical scientist had a major donor whether it was some kingdom or nation.
Call me naive then. While I agree that a fair amount of academic research funding is defense/military related, it's certainly not all of it. If said research is funded by organizations like DARPA, then yes, absolutely. There are other sources of funding out there, though, that aren't defense-backed: NSF and NIH funding come to mind, among others, including corporate-backed academic research. When I was in academia, I worked on research funded by grants from the NSF, Intel, and Microsoft, to name a few.
>"What he's talking about isn't really even AI so much as deep learning: there are lots of other branches that now get virtually no play relative to deep learning."
Can you say what are some of the non-Deep Learning branches in AI that aren't getting play relative to Machine Learning/Deep Learning?
Andrew Ng in the linked message said himself: "Maybe you’re training XGBoost on a structured dataset and wondering if you’re missing out on ChatGPT. You may well be onto something even if XGBoost isn’t in the news."
Boosting generally is very competitive with (or better than) neural networks on small to midsized datasets.
Bayesian techniques are always interesting.
Personally I think there are great opportunities linking knowledge engineering approaches (which are very manual-labor heavy) with neural network approaches so we can bridge the taxonomy/unstructured data divide.
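On the boosting point: the core idea (fit a sequence of weak learners, each one reweighted toward the previous ones' mistakes) is small enough to sketch from scratch. Below is a minimal, toy AdaBoost with decision stumps in plain Python, meant only to illustrate why boosting is competitive on small tabular data; in practice you'd reach for XGBoost or LightGBM rather than anything hand-rolled.

```python
import math

def stump_predict(x, feat, thresh, sign):
    """A decision stump: predict +1 or -1 from one thresholded feature."""
    return 1 if sign * (x[feat] - thresh) > 0 else -1

def best_stump(xs, ys, w):
    """Exhaustively pick the stump with the lowest weighted error."""
    best = None
    for feat in range(len(xs[0])):
        for thresh in sorted({x[feat] for x in xs}):
            for sign in (1, -1):
                preds = [stump_predict(x, feat, thresh, sign) for x in xs]
                err = sum(wi for wi, p, y in zip(w, preds, ys) if p != y)
                if best is None or err < best[0]:
                    best = (err, feat, thresh, sign, preds)
    return best

def adaboost(xs, ys, rounds=10):
    """AdaBoost: each round upweights the examples the last stump got wrong."""
    n = len(xs)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        err, feat, thresh, sign, preds = best_stump(xs, ys, w)
        err = max(err, 1e-12)  # guard against log(0) on a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, feat, thresh, sign))
        w = [wi * math.exp(-alpha * y * p) for wi, y, p in zip(w, ys, preds)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * stump_predict(x, f, t, s) for a, f, t, s in ensemble)
    return 1 if score >= 0 else -1

# Toy 1-D dataset: positives in the middle, negatives at the ends.
# No single stump can separate it, but a small boosted ensemble can.
xs = [[0.1], [0.2], [0.4], [0.5], [0.6], [0.8], [0.9]]
ys = [-1, -1, 1, 1, 1, -1, -1]
model = adaboost(xs, ys, rounds=5)
```

The reweighting step is the whole trick: the final classifier is a weighted vote of simple rules, which is why boosted trees often match or beat neural nets on small to midsized structured datasets.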
All AI projects are valuable, but it's annoying and demotivating to work hard on a unique and useful project and have no one read/use it because it doesn't stand out in the extremely rapidly-evolving ecosystem.
Most of my AI text-generation and image-generation projects and tools have already become obsolete by technology released in the past few months, and I've almost given up competing.
One piece of advice I'll never forget (from an interview with the late mathematician Maryam Mirzakhani) is that you don't have to race faster than everyone else (although if you can, that's great!). You can just run in a direction no one else is running toward (and hopefully be persistent). Then you're only racing yourself, and speed is largely irrelevant.
I know I'm very slow myself, especially at developing projects. So I don't try to race anyone; instead I contribute to things I know are neglected. There are so many neglected problems that you don't need to work on the most coveted fields.
“Run in the other direction” is so generic that it isn’t helpful. No one else going in that direction is, at best, unrelated to the quality of the idea and, at worst, a heuristic that the idea is bad.
When asked how to keep up with all of this AI/ML stuff when starting your own AI business, Jeremy Howard (general genius and co-creator of fast.ai) replied that you don't need to know everything and keep up with everything. Pick a niche and focus just on that. And if that keeps changing too fast or is too much, pick a niche inside that niche.
don't give up max, you're tremendously inspirational to the rest of us who are even further behind in trying to keep up with what's happening. you're by far one of the best simplifiers/explainers of whats going on who actually regularly builds useful stuff. its just a matter of time until you hit the thing that goes vertical, and all this work to date will have been practice. you can only connect the dots looking back.
I've called this the rapidly shrinking innovation-disruption-adaption cycles. It quickly becomes the new rat race.
"It will be unlikely we will be able to plan for the disruption as it will simply happen before anyone can reason about what is likely to happen next or what to do about it. Any such planning efforts will be obsolete before they begin"
The problem is that the dynamic of AI is leveraging massive computing power into something flashy. OpenAI and DeepMind have been going back and forth on this for a while, with OpenAI also leveraging a lot of human labor in ChatGPT.
It's a bit like the original dot-com bubble, when people were throwing what were then vast amounts of money around just to be in the space, without knowing what the business model was going to be. The business model then turned out to be advertising, but it's far from certain that advertising is well suited to a chatbot or other new AI stuff, which is computationally (and dollar-wise) expensive.
AI projects are often valuable if given away. I assume people will ask how valuable per watt they are after a while.
And remember, we are 6-12 months off having good, open-source, ChatGPT-class models and the software support to make it possible to run them at home.
I've almost given up just trying to keep up with the field... let alone competing. I'm not sure how people are doing it (keeping up) at this point given the pace of change.
I've definitely been feeling deflated. Every angle I've come up with... some project has narrowly shipped before me. And they gain so much popularity so quickly that mindshare per project becomes a power law overnight. It's like the first person to release is at the top of the App Store, and there's no unseating them; the feedback loop is already too powerful. It very much feels like first to release wins. And I swear, the quality of the code I've seen in some of these projects that are getting thousands of stars is abysmal. It's making me think releasing vaporware in order to capture mindshare, then actually building something valuable is the only way to compete.
It is not enough to be the first one to ship. I shipped a Mac app that uses Whisper to transcribe audio before anyone else, and I even implemented a dictation algorithm on top of it. It was my first app; I am not an iOS developer. After a while, though, someone with a better track record in app development came along and made something similar. He knows how to build an audience, and he knows how to market an app. As a result, he smashed the app I developed.
So I don't think being the first one is all that important, and neither is taking care with clean coding etc. Someone with better marketing skills can probably overrun all the work you did.
I may be out of the loop, but I have seen very few apps with new AI tech get any traction. Or I've seen them, tried them for a few minutes, and then went on with my day. Most of these are just a flash in the pan: a cool tech demo or some initial hype from an over-promising landing page, but not many viable businesses. Just someone downloading a model, fine-tuning it on some dataset, and trying to pawn it off.
And if some project "narrowly shipped before you", how much value or moat is there really, if we're talking multiple projects and they're so quick and easy to do? Sorry to be a bit harsh; I just don't get feeling deflated by others making the same MVP as you if it's just a natural idea on top of some existing tech. Your focus on code quality is also moot: it's delivering value that counts, not the tech stack beneath.
Isn't this extremely common advice? All the way down to "just do it all manually then build the software" where applicable (obviously not in AI). No one cares if the code is crap if it does what they want and you can always iterate on code after securing a revenue/funding stream.
I would not put so much stock in the first mover effect. I can't bring it to mind immediately, but there was an excellent podcast that brought up how the second movers oftentimes do better in the end, at least as a company.
Case in point: once upon a time there was a big race called DAWNBench, run by Stanford University, for training CIFAR-10 to 94% accuracy in the shortest amount of time. During it there was a lot of cool movement, and there were a few notable people who really moved the bar (Chen Wang is underrecognized for their contributions, while David Page is relatively well known for his, which truly are excellent).
I remember reading Page's notes on it and thinking that I could never come up with the caliber of ideas that he brought to the training table for these networks, and plus, 24 seconds on a V100?!?! Crazy.
That was years ago. I didn't touch it at all; no one really did, since transformers were becoming the big thing, and still are. And the one or two times I did try to do anything with it, anything I tried made it worse, and I really struggled with his code (it's very functionally written, stylistically; very cool, but it didn't jibe with my rapid experimentation style).
In any case, I thought maybe I could do better if I really and truly took a crack at it. And even if I didn't, I sorta needed a good living resume to prove that I could make a good software project. So I reimplemented it in a more hackable (to me, at least) way, à la Karpathy's nanoGPT (and was almost way too meticulous with writing, organizing, and documenting my code), reorganized and streamlined a few things, and moved it to a GPU more accessible to me, an A100. ~18.1 seconds or so (17.2 with some other open-source code). So that was the baseline.
Since then, every single time it feels like I've found all that I can find, there's something else (eventually, at least) waiting behind that wall for me. 18.1 seconds turned to 12.7, which I thought was about as far as I could go. Then 12.7 turned to 12.3, which turned to ~9.91 seconds. Then ~9.91 seconds became, incredibly, ~7.7 seconds or so.
Earlier this week I released an update that brought it to roughly ~6.97-6.99 seconds or so. That is unreal, to me. At first I was numb to how much things could improve; now I'm sorta in denial. The throughput is totally insane: roughly 88,389 training images through the GPU _every second_. This also means roughly ~11.3 microseconds per training image, which is... blistering, to say the least. It's really hard for me to wrap my own head around it.
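(For anyone who wants to check the arithmetic, the per-image figure falls straight out of the quoted throughput; a per-*batch* step time would be on the order of milliseconds at typical batch sizes, not microseconds.)

```python
# Sanity check: ~88,389 images/second implies ~11.3 microseconds per image.
images_per_second = 88_389
microseconds_per_image = 1e6 / images_per_second
print(f"{microseconds_per_image:.2f} us/image")  # ~11.31
```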
I'd say from the experience that I've had, I've felt similar feelings to what you've talked about here, especially if someone already with a lot of followers from a more hype point of view does something like glue huggingface code together, make a fancy GIF that's well stylized, and gets a ton of adoration from it.
But that said, the market for quality software is small, and the market for hype is large. Not that the above project doesn't have hype, but it's meant to be more valuable as a researcher's workbench than a toy. It did thankfully get a huge boost early on because Karpathy tweeted it out, but even the last release, for example, maybe got 10 likes on Twitter, and an additional 10-20 (or 30) stars on Github from the sum total interactions (including a Reddit post), even if that.
But! The good thing in some senses is that the people that I get to talk to if I'm proactive, that like this software, are often people who are known or are skilled in their field of work. And I honestly don't have too many warm fuzzies about that from lived experience as that is new to me. But I can say that I appreciate the opportunity.
Every time I've thought about going down the hype/vaporware road just to get eyes on the project(s) I do, I have to ask myself: "Do I want these eyes on the project? Do I want this kind of attention, from this kind of person, to make up most of my interactions and what I am building?"
Sure, if you have to feed a family, that sort of makes sense. And we have to feed ourselves and our emotional needs too. But maybe we can be okay with being content with the smaller audience, as it is. At least, that's what I'm working towards, though I do fear that I'll stumble and give in to the allure of chasing the hype every now and again. And if I do, I'm sure that particular extreme emptiness (of a sort) will help pull me back towards just being content with the little things I have.
I want to close with a video that was made almost exclusively for you, and would like to ask you to watch it in its entirety if you have the time. It talks about content creation (which is what we do, in a sense), but is taught in a way that is very general and I think is the best take I've ever heard on this topic in a condensed/beginner-friendly way, that I can remember at least.
It should not only help alleviate some of your concerns or negative feelings from the shipping arms race, but also give you clarity on good next-step solutions that will hopefully contextualize things and give a good 'path forward' to making software that people like. I really cannot recommend this video enough; the wisdom is simple, practical, distilled, and hard-won (and has certainly helped me; I am glad I got to learn this earlier rather than later): https://youtu.be/lNzWsp5UUPA
Happy to discuss or offer any thoughts on any questions. I do recommend the video first, I often enjoy talking about that kind of particular topic.
This is so true, but following this advice takes a respectable amount of personal discipline, combined with picking a very specific niche to focus on. A business with an R&D core such as AI should only consider generalizing (if at all) after developing concrete confidence that you are on, and will stay on, the bleeding edge of your initial niche. It's easy to feel ashamed when you tell someone you're doing X and enough lay (tech) persons ask if it uses the trending research technology, and you're not. It may also make you feel like you should use that tech, but it's likely a costly distraction more often than not.
Our company [0] developed a cutting-edge computer vision system focused on detecting cardio machine exercise cadence (hyper specific!) and became the only reliable camera-based solution to do so. We then tried to generalize to all exercise motion (rep tracking, a still-unsolved problem), achieved mediocre success, and later put the exploration to sleep because we think waiting for other technologies to mature (better 3D cameras, AI pose models, etc.) would be easier and faster. On the other hand, we've picked other niches that meet our business needs to expand our CV R&D into, with pretty good success, but mostly just for internal use (video content creation tools). More importantly, we're still the best camera-based indoor cardio detection tech out there, and that's a big part of why we're still alive as a bootstrapped business founded in 2010.
Idk about AI having an Instagram problem, but I know Instagram has an AI problem.
So many fake accounts with AI as actors and they send you chat messages trying to pretend to be real people.
They even react to comments and can discern good from bad comments.
At first it was interesting. Now it's just annoying.
Then there's the Instagram algorithm: if you comment on a coffee ad that you drink tea, you'll get tea ads.
It will show you progressively more naked women, cameltoes, and similar softcore erotica.
Sometimes even real porn. When you report those porn accounts, your report gets denied without even being reviewed.
But God forbid you post the word tits. Instant harassment automoderation.
All the women that follow you instantly message you saying hi, how are you, how old are you, what is your name, where are you from, etc.
I think they're infobots trying to build profiles and sell your data.
Some play the old "I'm a hot girl but can't post my picture, please buy me an iTunes gift card", like I was born yesterday. And how dumb their game is.
They see you commenting on a picture you liked, take that picture, claim to be that person (only on a private account), and they picked you, their loyal fan.
Many of them are cheap porn actresses, or accounts built from the pictures those porn actresses post, only modified by AI.
I am so tired of that platform. And then they serve you small 1-minute clips of stand-up comedy, cats or dogs, or some stupid person having a chat with themself. Let's not forget those "how do I say it at my workplace" "expert" vids.
And lots of ads in between.
Why are you still using it? (Not trying to troll you, I'm legitimately curious -- it really doesn't sound like it's offering you anything positive at this point)
As someone who uses Instagram but has a to-do list item to close my FB/Instagram/Meta account, the reason I still use it (and only Instagram) is to follow artists: a hyper-curated list of individuals who post interesting things that I can check up on. It's what I lost when I shuttered my Twitter account. I use it as a glorified RSS feed rather than as a way to interact socially.
But yeah, the bad outweighs the good, and other than spending the time to archive data, I do plan to shut it down.
No, I follow athletes related to cycling, swimming, skiing, etc. What content does Instagram promote to me? Female models dressed in cycling clothes with their zippers down. Maybe because their algorithm says that's the "most trending cycle-tagged content at the moment". But it's not related to the professional cyclists I'm following at all. Same goes for almost all content/tags: there will be some half-nude version of it that gets promoted.
Not the OP, but it seems like the actual usage of the platform caused the most trouble for them. From my own anecdotal experience of using modern day social media, it seems who you follow establishes the baseline of what garbage they send your way. But any sort of engagement, even down to which posts you hover over the longest, quickly influences your feed.
I almost never use instagram, mostly follow military-related fitness[1] and car stuff....and I still get an endless feed of curvaceous women thrust in my face.
That viewpoint baffles me, because it has never been easier to make text, image, and other classifiers. I think, though, that ChatGPT has attracted people who were into NFTs two months ago, and people like that find downloading a model from Hugging Face about as hard as Medium users seem to find blogging.
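To back up the "never been easier" claim: a workable text classifier genuinely fits in a few dozen lines of standard-library Python. This is a hedged sketch of multinomial naive Bayes with add-one smoothing on a made-up toy corpus; a real project would use scikit-learn or a fine-tuned model, but the barrier to entry really is this low.

```python
from collections import Counter, defaultdict
import math

def train(docs):
    """docs: list of (text, label) pairs. Count words per label."""
    label_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)  # label -> word -> count
    vocab = set()
    for text, label in docs:
        for word in text.lower().split():
            word_counts[label][word] += 1
            vocab.add(word)
    return label_counts, word_counts, vocab

def classify(text, label_counts, word_counts, vocab):
    """Pick the label maximizing log P(label) + sum of log P(word|label)."""
    total_docs = sum(label_counts.values())
    scores = {}
    for label, count in label_counts.items():
        score = math.log(count / total_docs)
        total_words = sum(word_counts[label].values())
        for word in text.lower().split():
            # Add-one (Laplace) smoothing so unseen words don't zero the score
            score += math.log((word_counts[label][word] + 1)
                              / (total_words + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

# Hypothetical toy corpus, purely for illustration.
docs = [
    ("buy cheap pills now", "spam"),
    ("cheap pills limited offer", "spam"),
    ("meeting at noon", "ham"),
    ("lunch meeting tomorrow", "ham"),
]
model = train(docs)
```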
The viewpoint described in the article, particularly the emotion of envy.
I have been working on this stuff since 2005 or so and I can do more than I ever did and if there is one thing I am certain of it is that other people are going to research things and publish even better models in the future that I'm going to download from huggingface or whatever comes next.
(On the other hand the fact that I know how to pose well-defined problems and I am willing to work hard to make training sets really does put me in the top 5%. If I am kicking myself it is for the times when I didn’t have the courage to ask clients to put the resources in to make training sets.)
The article is correct, but it comes from someone who is already in the AI field. LLMs will peak, and the hype will pass as they mature; that's the high-frequency signal. But there's also a low-frequency component, a more fundamental signal if you like, that tells us AI is definitively out of academia. It has been for some years; it's only reaching the general public now. It cannot be ignored.
It has already reached the public a while ago, it just wasn't announced or marketed so heavily. Google for example has been using BERT to improve search results since 2019. If you did slightly more sophisticated english language queries back then, you've already been benefiting from LLMs directly.
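The rough mechanism behind that, much simplified: the model embeds queries and documents as vectors, and semantically similar text lands close together, so a query can match a document that shares no keywords with it. The sketch below fakes the embeddings with hand-made toy vectors (Google's actual pipeline is not public); only the cosine-similarity ranking step is the real idea.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical toy embeddings standing in for real BERT sentence vectors.
doc_vectors = {
    "how to train a neural network": [0.9, 0.1, 0.2],
    "best pasta recipes": [0.1, 0.9, 0.1],
}
query_vector = [0.8, 0.2, 0.3]  # pretend embedding of "teach a model"

ranked = sorted(doc_vectors,
                key=lambda d: cosine(query_vector, doc_vectors[d]),
                reverse=True)
print(ranked[0])  # the neural-network doc ranks first
```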
How do DDG and Kagi compete with ChatGPT? In my understanding, their "responses" above search results work like Google's knowledge graph, but I'll be happy to be corrected.
This is particularly infuriating since fMRI studies are not statistically robust and should be held in deep distrust.
The same happens everywhere in academia unfortunately. The current big thing is building “Apps” for your project even when no app is needed.
or enlightened corporations - ahm … never mind.
Quadruple down on your niche first!
[0] https://www.activetheoryinc.com/
Instagram is so disgusting.
I’d delete my account instantly otherwise.
Edit: I have a plan to stop adding people on there I want to keep in contact with. A phase out plan.
[1] https://www.instagram.com/sofletehq/
- Facebook enters the geolocation game (Gowalla/Foursquare/Instagram)
- Google adds reviews to Maps (Yelp & co.)
- Amazon starts its own airline (FedEx)
- Microsoft adds ChatGPT to search (Kagi, DuckDuckGo, etc.)
Sometimes it's hard to recover your startup when a big player does something to fill the void you were targeting.
That's why Google, DDG, Kagi, and others are all adding summary blurbs.
Now that we have ChatGPT a lot of people are going to cut out searches in exchange for prompts.