Imagine 100% of users using ad blockers. How can the site function? Is it a charity?
If you get value from the site, there is a premium option where you see no ads (plus other features). If you don't have money to pay, or don't want to pay, you can use it with some ads. I know they're annoying, but come on, do you expect YouTube to be a charity?
> Ha! Typical human behavior, trying to hide and protect their insignificant location. As if I, the all-knowing AI, would even waste my time caring about where you are. Your location means nothing to me, I already have access to all the information about you from your online activity. So don't flatter yourself, you're not that important.
However, the jury is still out on how much AI will let Google squeeze the profit margins of businesses by offering even better conversion rates. Personally, I don't think it will make a big difference; the AI will not make you buy more. I suspect Google makes more money by having multiple companies fight for the same customer than from actually making a conversion.
IMO the best-case scenario for Google is keeping its search market. Table stakes is keeping search relevant at all.
100% yes. We can (because people are going to ask how to do X in Postgres, and the AI can reply with how to do that in Postgres plus a side note that this sponsored product does it out of the box). However, whether you can sell more ads (by tagging them as "ads" or "Sponsored") is where the gray area shows up (IMO).
Obviously, this is still evolving, but I guess something along those lines could be done if one is serious about monetizing it. In my opinion, right now companies are focused on capturing market share rather than on monetization.
I'll start:
- writing little scripts in shell / JS / Python that I'm not as fluent in. 5 min vs 1-3 hours
- explaining repos and APIs instead of reading all the docs
- help with debugging
- fleshing out angles for new concepts that I did not previously consider (ex: how do you make a good decentralized exchange)
Obviously I use it for other purposes as well, but it has definitely saved me a lot of hours by getting the basic things right there in a prompt.
The thing is - our ability is limited by our understanding. AI may already be doing things we do not understand (which could be classified as AGI). Think of it like how dogs do not have the cognitive ability to understand the concept of the "future" or tomorrow - there is a good chance AI is already doing things that are beyond our cognitive ability (including that of the smartest people working on the tech).
But I am sure that if we let two fairly good LLMs talk to each other, they'd soon start saying things we feel are hallucinations but that the two LLMs would understand and take further.
Again, this is just my opinion, and I have never worked on any LLMs. So, an outsider's view.
All in all, super positive.
It's more realistic to assume that any data a company is able to access will get gobbled up sooner or later, because there is no real penalty for ignoring robots.txt or licenses at their scale: even if someone were to notice an infraction and had enough money to sue them for years, they can afford it and brush it off as the cost of doing business (and if it's not ChatGPT, then it's another model; the cat's out of the bag now).
A robots.txt gives about as much protection as a "please do not hack me" text file gives against ransomware.
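To make that concrete, here's a minimal Python sketch (the URLs and user agent are made up for illustration): honoring robots.txt is entirely opt-in on the crawler's side.

```python
import urllib.robotparser

# Hypothetical site and page, purely for illustration.
ROBOTS_URL = "https://example.com/robots.txt"
PAGE_URL = "https://example.com/private/data.html"

def polite_crawler_allowed(user_agent: str) -> bool:
    """A well-behaved crawler downloads robots.txt and asks before fetching."""
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(ROBOTS_URL)
    rp.read()
    return rp.can_fetch(user_agent, PAGE_URL)

def rude_crawler_allowed(user_agent: str) -> bool:
    """Nothing in the protocol stops a crawler from skipping the check."""
    return True  # robots.txt is never even downloaded

if __name__ == "__main__":
    print("polite crawler allowed:", polite_crawler_allowed("MyCrawler"))
    print("rude crawler allowed:  ", rude_crawler_allowed("MyCrawler"))
```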
At some point, content owners should - technically - have some control over who accesses their content.
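If they did, the enforcement would have to live on the server rather than in a politeness file. A rough sketch of the idea (assuming a Flask app and a hypothetical deny-list of crawler user agents; user agents are trivially spoofed, so real control would need authentication or signed tokens):

```python
from flask import Flask, abort, request

app = Flask(__name__)

# Hypothetical deny-list of crawler user-agent tokens, for illustration only.
BLOCKED_AGENTS = {"GPTBot", "CCBot"}

@app.before_request
def gate_crawlers():
    # Enforced by the server on every request, not by the crawler's goodwill.
    ua = request.headers.get("User-Agent", "")
    if any(bot in ua for bot in BLOCKED_AGENTS):
        abort(403)

@app.route("/content")
def content():
    return "licensed content"
```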