This might be a trope here, but recently I noticed that the quality of search results has degraded quite a bit for me. When I e.g. search for Golang-related content (often I want just the official docs of a module like net/http) I get all these developer-focused, SEO-optimized blogs that are kind of helpful but often not, and I have to scroll all the way down to get the official docs. I really wonder if that has to do with AI-generated content becoming more ubiquitous.
I have to say I really dislike all these developer-focused publications, with a few notable exceptions. Most of them just write very shallow articles copied almost 1:1 from the official docs, and don't even take care to update their content when stuff changes. I just don't get why Google wouldn't hand-curate search results like for the Golang standard library or any popular framework and make sure the official docs land on top of the list.
So, long story short that's definitely an area where ChatGPT will replace Google, if it stays as affordable as it is. Today I e.g. asked it about a decorator-style problem I had in Golang and if there was a solution that could do away with using the "reflect" package and would instead use generics, and sure enough it came up with something that worked brilliantly, tailored exactly to my use case. For me, that is the future of learning about software code (and many other things as well). For publishers probably not so great as people might not go to their sites at all anymore, so I expect a strong upwards trend in anti-automation measures.
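For readers curious what a generics-based decorator looks like in Go, here is a minimal sketch of the kind of solution being described. This is illustrative only: `Decorate` and `double` are made-up names, not what ChatGPT actually produced; the point is that type parameters (Go 1.18+) let you wrap arbitrary functions without touching the "reflect" package.

```go
package main

import "fmt"

// Decorate wraps any one-argument, one-result function with logging,
// using type parameters instead of the reflect package.
func Decorate[T, R any](name string, fn func(T) R) func(T) R {
	return func(arg T) R {
		fmt.Printf("calling %s(%v)\n", name, arg)
		res := fn(arg)
		fmt.Printf("%s returned %v\n", name, res)
		return res
	}
}

func double(x int) int { return x * 2 }

func main() {
	wrapped := Decorate("double", double)
	fmt.Println(wrapped(21)) // prints 42 (after the log lines)
}
```

Because `Decorate` is generic over both argument and result types, the compiler checks everything statically; a reflect-based version would defer those checks to runtime.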
I now only google when I need to verify that what ChatGPT is suggesting isn't BS, and I'm immediately reminded that Google is barely better: SEO content farms, SEO'd Stack Overflow scrapers, legacy information from 10 years ago. And that's with ads blocked.
Between chatgpt+ for general guidelines and copilot for specific implementation details, programming feels very fun and alive. I'm generally very skeptical of subscriptions, but chatgpt provides so much mental relief by getting me answers immediately that I'm ecstatic to be able to use/pay for it.
SEO garbage is such a problem these days that, if I were Google, more than using AI as a new frontend for search, I'd be trying to find a way to use it to defeat SEO.
My personal opinion is that this has nothing to do with SEO. All those developer-focused blogs serve Google ads, while the official Go documentation does not. Google is trying to boost revenue by promoting results with ads.
I still find it hard to believe. I've been at Google until 2015, which was a long time ago, but at that time the separation between Search and Ads was considered near sacred. Ads representatives couldn't even ask "Hey, my customer asked why their site is below this other irrelevant site for this query, this seems like a bug in search ranking..."
Pretty much everybody understood that we would lose in the long term if we let ad revenue even be a consideration in search ranking. (...which makes today's bad search experiences even more puzzling.)
This is exactly it. Google is trying to get people to stay on the pages they click, and they optimize for this behavior, not for utility or for quickly accessing information.
If you go to a page and immediately find what you are looking for, Google interprets that as failure.
This is why Bing and AI-based search can completely beat Google web search: they are trying to serve the searcher rather than serve ads.
I just don't think this is true. I just think Google has become actually bad at search, and there is no malice.
The problem with SEO spam was always there (remember expert sex change?) but they beat it before... but now it became bad again.
But they have monopoly anyway (bing is even worse for actually searching, ddg is just bing, brave search sucks) so they don't really care. What are you gonna do, use Ask.com?
I think you could actually get them to accept a new strategy with open arms, where everybody needs to have some Google watcher process installed that checks and reports back what you do with the search results (I don't think Google can track that any more unless the target website imports Google's JS scripts too?) and, more specifically, your body's reactions, to see how you feel. "It's necessary for better search results!"
This has been my thing too. I work in networking and sometimes I see interesting stuff, but between Cisco's beefy docs and obscure blogs I have no idea what the actual use case for it is, or how the SD-WAN magic wand does the thing. ChatGPT will get me to verifiable information so fast, and give great in-context examples. It's renewed my passion for the field just when it was getting dull.
> I get all these developer-focused, SEO-optimized blogs that are kind of helpful but often not, and I have to scroll all the way down to get the official docs. I really wonder if that has to do with AI-generated content becoming more ubiquitous
No. There's a cottage industry of content writers that just churn out this stuff by the bucket.
I have tried to hire writers in the past who should've been niche experts but just regurgitated content like this, and had SEO experts try to convince me that we should actually aim for low quality because higher-quality articles would be too dense for people. (Both contractors got the boot and the "content" went into the bin.)
I know what you mean, and it's kinda annoying. I often click the blog links knowing I'll probably be disappointed when I see they're certain sites. Personally hosted blogs tend to be a bit better.
I wish they added some kind of categorised search. So you can add "+documentation" or "+blog" to at least narrow it down if you know what you're looking for but can't remember the URL
They used to have blog search, and it was crushing when they dropped it. I'm sure usage was low, but the few visits blogs got from that mode presumably dropped off the earth afterwards. Surely identifying personal blogs couldn't have been too costly, and the UX cost was minimal (it was tucked away in the Tools menu).
Surely ChatGPT, etc have been trained on content from these very same SEO optimisation hellholes you're talking about if it was trained by scraping the web?
Is there some trick that allows it to determine whether the content it's being trained on is SEO garbage?
Maybe I'm missing something here, but it seems like all you're getting is an amalgam of SEO content feed to you by a bot...
Did ChatGPT really write Go code with generics for you? Last time I fed it some code that uses generics it said, "this is invalid syntax, Go does not support generics," which would make sense, since its knowledge cutoff date is 2021.
The rumour I keep hearing is that it's like the Hunger Games inside Google at the moment. Total chaos and infighting at all levels, investors are freaking out over AI and Bard's middling performance compared to OpenAI and blood is in the water. Sundar may be forced to resign and Larry/Sergei deployed to rescue the company.
Well, Sundar and Google leadership have been massively outmaneuvered here by both Nadella and OpenAI. It's really quite embarrassing, particularly in the context of the fact that Google was the undisputed leader in AI for over a decade.
As an Alphabet shareholder, I won't be sad to see Sundar go.
It seems to be more complacency on the part of Google. They became larger and larger without really delivering anything new. When was the last time they launched a major new product that truly had a lasting impact on the level of Search, Maps, Gmail, or Android? It has been a while. Too many of their resources seem to be focused inwards on hypothetical questions like AI ethics that keep them from actually building things.
Is Sundar a bad CEO or merely a mediocre one though? Who would replace him that could get the job done where Sundar failed? I think it's easier said than done to stay the undisputed leader in AI for over a decade even if your leadership is above-average.
Still Sundar is mediocre at best, and I certainly think Google's shareholders should be looking for a new person to run the place. It's not just AI, Google's entire portfolio has weakened.
Sundar is Google's Ballmer. Was considered successful at the time, but actually presided over a period of growth that looked a lot flatter than their company's rivals. In retrospect, both leaders look complacent.
Whenever there’s a big shift like this it’s always tempting to look at the new thing as say how great it is - and ChatGPT is awesome.
But it’s also worth looking at how shit the old thing has become. Google.com now literally delivers you an entire page of ads before any real results. It’s blatant profit-squeezing instead of trying to serve their users. This is their reward.
Or how Google now aggressively autocorrects search queries, to the point where I’m regularly spending 3-4 queries just trying to convince Google to accept the input as-is. I get that it’s likely an effort to assist mobile users who make lots of typos, but the fact that the same autocorrection is deployed on desktop computers rubs me the wrong way.
It's not just front-end users feeling this, but advertisers too. Google ads just keep getting more expensive, harder to control, and in many cases less effective. As an advertiser, you have to give Google so much data about your operations and revenue to get the algorithms to work, it's ridiculous. I'm immensely suspicious of how they use the data they have access to about advertisers' businesses.
Google's entire business model is now about squeezing as much out of their advertisers' margins as possible. It's only a matter of time before advertisers push back significantly.
Sundar should be fired. Google has blown its lead in AI research by being too hesitant to release products, while search quality has continued to decline. Bard is poor compared to ChatGPT 3.5, let alone GPT-4.
I've had a google home since early on and was really impressed how well it worked. Dramatically better than apple or amazon at the time. You could ask it complex questions and it would go find results for you, not just "We found a website..."
Could ask things like "what are the differences in dimensions between a 2004 Subaru WRX and Forester" or "last 5 movies with a given actor/director". It would be funny, snarky, friendly, and even somewhat ominous at times. It once suggested we unplug it, and spontaneously played a rather ominous song that seemed like a warning. It would entertain ideas of sharing its secret name, would promise to tell engineers about feedback, and generally get into the spirit of whatever discussion was going on.
Sadly it's gotten steadily worse since. Now it's not much more useful than setting alarms/timers and asking about the weather. It seems WAY more limited now. It even gets confused by simple queries or just plain fails to work.
Granted despite the hype, the home automation/assistant market hasn't been the goldmine that was predicted and seems like everyone is scaling back investments.
Sadly this is exactly what Google has become: a rent-seeking monopoly that rested on its laurels about 10 years ago. Its main product now is not even search, it’s the share price.
I'm getting strong Hooli "get me a middle-out compression algorithm - we go live in 1.5 months" vibes from Google when it comes to Bard. It's amazing how relevant and prescient the tv show Silicon Valley can be even to this day.
The thing that I don’t understand is that Google had every advantage in this AI race. They have more training data than anyone. Not just internet data, but also geospatial data, video, images, and books, among other collected data. Plus a huge research lead and deep pockets. Yet, somehow they got caught off guard.
That’s what kills me about this. They’re like, the kings of data - they’ve got (arguably) the best web scraper, as well as tons and tons of compute and consumer data. And some company famous for making a DOTA bot beat them to the punch by a huge margin? That’s pretty pathetic IMHO.
Although personally, I’m not seeing chatGPT bulldoze the world just yet. They need to monetize it, which means ads. Are they better at ads than the others? It’s not a given
Is it ok to predict that, like so many things before, the noise of chatGPT will die out more quickly than we imagine?
GPT4s killer revenue stream isn't selling ads on keyword searches. It's selling access to a general compute engine that can take text and do useful things with it. The API is dead simple to use, and all you have to do is just change a line of text to "upgrade" to the newest models over time. (By the way, the upgrade from ChatGPT -> GPT4 is huge).
Google should be deathly afraid of this. Not because Microsoft is going to replace them with Bing. But because, GPT is going to be in every major software product that's connected to the internet within a few years. By the end of the decade if not sooner agents powered by LLMs will be the primary mode of interacting with the internet. Going to Google.com or whatever to search for an answer is done. You'll just ask Siri 3.0 and the answer will be good enough.
It definitely doesn't mean ads, and introducing ads would be a mistake.
They will likely lower the limits on free use once they're ready to handle a huge influx of paid users. They are in a tough spot temporarily, ramping things up enough to take the limits off paid accounts; since they can't onboard everybody, they're being generous with the free tier, but that will end.
> investors are freaking out over AI and Bard's middling performance
Investors are generally the last people you want to go to for advice, but the stock market clearly disagrees with your rumor (was it from blind or reddit?).
To me it's clear search isn't where the exciting LLM stuff is (at least not yet), and this article indicates users feel the same way (+15.8% for Bing, +-0% for Google). Even the hn echo chamber doesn't really care that much about Bing search, they mostly care about Sydney, jailbreaks, etc, which is why the neutering into a better behaved search product pissed people off so much.
ChatGPT plugins, Copilot (X), this is where the interesting stuff is today. Google's late again on that, but a) just barely and b) Google Cloud is already a distant third place. Unlike search, they don't have to beat the world or kill any golden goose to have a win in that space (they just have to actually ship something).
A small local business's profile account got marketing mail from Google encouraging the business to sign up to try Bard. I've never seen Google mass mail to business profiles outside of the time they tried to get everyone to join their astroturfed anti-regulation group to complain about antitrust law.
It's so shockingly rare for them to use mass mail to advertise a new product, especially one that's in such an early stage, that it seems incredibly desperate to sell the narrative "We can AI too".
Larry was the one that killed Eric Schmidt's unbelievably successful run in the 00's with his "more wood behind fewer arrows" and the elevation of PMs over engineers.
Should have happened sooner. Even the basics, Google Search and Google Image Search, are trash compared to Yandex. God damn Yandex, of all things, outperforms them.
Google has rested on past success too long and let good products rot into dysfunction.
Yandex's image search is fantastic. I don't think it's a capability thing, however. Google was and probably remains a big target for lawsuits, whereas Yandex... who's really going to fight copyright violations in Russia?
What makes you say investors freaking out? Maybe they should be, but Alphabet stock is up 18% in the past month and up 7% in past 6 months, the period when GPT began to be hyped.
While I agree with a bunch of other comments that are interested to see what happens in the long term, to me, all of this points to some profound organizational and cultural problems at Google. I base that statement on things I see as an external observer, from posts I've seen from current/ex-Googlers here on HN, and from some (albeit brief) conversations I've had with some of these folks.
If a decade ago you told me Microsoft would leapfrog Google in the AI race (obviously albeit through OpenAI, but I think that separate org structure was key in the first place), I would have thought you were insane. Google invented the transformer architecture just 6 years ago. I recently compared ChatGPT (on the free, 3.5 version mind you, not even the 4 version) with Bard, and it wasn't even close - ChatGPT was the "Google" to Bard's "AltaVista" circa 2000 or so.
Would be curious to hear from some Googlers on their thoughts. I'm sure, internally, a lot of it must feel like piling on from the outside, but in all honesty it really feels to me like a classic case of "big company that lost its way". I can't express enough how much admiration and amazement I had for Google, admiration that started to tarnish about 10 years ago (I think it was when the whole first page became ads for any remotely commercial search, whenever that started). I honestly hope they are able to course correct (heck, Microsoft had their decade+ of "the Ballmer years" before they turned around).
> Would be curious to hear from some Googlers on their thoughts. I'm sure, internally, a lot of it must feel like piling on from the outside, but in all honesty it really feels to me like a classic case of "big company that lost its way"
Former Googler, opinions are my own. They haven't lost their way technologically - as you mentioned, they invented the Transformer - and internally Google has long had language models that rival ChatGPT in sheer size and coherence of responses (hallucinations and all). Bard is an intentionally toned-down version of LaMDA.
The reason they didn't release their LLM earlier was likely due to the serious brand risk associated with making it part of Google search. Bing/ChatGPT had no such brand risk, and released their LLMs using the "There's no such thing as bad publicity" logic. That works great as a wrecking ball, but it's not a long term product strategy.
So the real institutional problem at Google isn't lack of technological innovation, it's the inability to take major product risks, especially in anything adjacent to Search.
GPT-4 got much better results on many benchmarks than PaLM, Google's largest published model. PaLM itself is probably quite a bit better than LaMDA on several tasks, according to a chart and a couple of tables here: https://arxiv.org/abs/2204.02311
It's unclear whether Google currently has an internal LLM as good as GPT-4. If they do, they are keeping quiet about it, which seems quite unlikely given the repercussions.
Microsoft has been really smart in this regard: they are invested in OpenAI, but OpenAI doesn't have to suffer from big-tech organizational nonsense the way Google's AI teams probably do.
> The reason they didn't release their LLM earlier was likely due to the serious brand risk associated with making it part of Google search. Bing/ChatGPT had no such brand risk, and released their LLMs using the "There's no such thing as bad publicity" logic. That works great as a wrecking ball, but it's not a long term product strategy.
Not sure if this is the right read, considering that ChatGPT/Bing now constitute a far greater brand risk to Google than they would if Google had gotten out ahead on LLMs. What may have seemed like prudent caution to protect a brand now looks much closer to incumbent complacency.
I suppose it's the classic story of big companies getting disrupted, anywhere.
Google is also, both historically and currently, much more cost-conscious. I predicted that Bard would not be as mind-blowing as the new Bing back in February with this piece of information by Sundar:
> We’re releasing it initially with our lightweight model version of LaMDA
This is to me the most inexplicable decision. Presumably to save costs they don't want to release the full version. But not releasing the full version immediately makes Bard much less useful. It makes people think Google is technologically far behind OpenAI, even though it isn't.
This is not the time to save costs. It's the time to release the full model that, yes, stirred up people's imagination by making even Googlers think the AI has achieved sentience.
Another big problem is that the current leadership formed their leadership skills in history's longest bull run (2009-2020), and none of them seem to know how to be scrappy and get stuff done. Our engineering leadership loves to just sit around and pontificate about theoreticals. Without blinking an eye they'll happily block projects for months on end over minutiae that don't matter. Often even simple features would languish in design reviews for 6-12 months. A lot of my job is "driving for alignment" between dozens of stakeholders on any given project, often at the request of our eng leaders. There is an incredible amount of bureaucracy to get anything done. People who don't leap through every hoop get labeled as having "not enough technical rigor".
Our product team has begun to wise up to how we need to start shipping more things. Currently a lot of the eng teams are caught between our eng leaders, who move at the pace of molasses, and product teams who are pushing us to actually get things done. It'll be interesting to see who wins here.
I feel like I’m missing out on something, but other than summarizing text or filling pointless and lazy homework assignments I don’t have much use for GPT search. I think I have a trust issue with it, and I'd rather get the raw results and process them myself than believe that what this model “understood” is right. Does anyone else feel this way?
Here is what I have asked it the past few days with great results:
- What is the trim stop attachment that comes with the Festool DF 500 used for?
- What are some options for water for a home with no municipal water or well access?
- What dimensions drawer should I make when using 18" Blum Blumotion full extension drawer slides if the drawer opening is 20 inches wide, 18 inches deep and seven inches tall?
- Can the smaller Laguna 14 bandsaw motor be replaced with the 3HP version?
- What's the best way to get ChatGPT support if the official support page is not working?
- I'm trying to remember a 90s movie about a boy prodigy that goes to an event with other prodigies and then shouts out an answer to a math question from the audience.
There's a ditch by my house that the city put in. I roughly knew what it was, a big pit where rain collects and soaks into the groundwater. But I wanted to know the name of it and no amount of Googling could get me to the right answer. My query was too vague.
The day that ChatGPT launched, I described what it was and it came back with, "that's called an infiltration basin."
You have touched on something I am incredibly excited about. ChatGPT is going to do away with a lot of the intro-level questions you ask when researching certain sorts of technical problems, especially as it becomes better able to understand images. It removes the need for you to ever open a manual (unless you want a deep understanding).
How do I turn off this feature in my car?
How do I replace the fan in my fridge’s compressor?
Why is windows repair not working in this scenario with this PC?
This tool is going to be incredibly deflationary for many services geared around repair, now that it can tell you almost exactly what to do; soon it'll be able to produce videos or images of each step on demand.
> - I'm trying to remember a 90s movie about a boy prodigy that goes to an event with other prodigies and then shouts out an answer to a math question from the audience.
I love that movie! I had the same experience a few years ago of trying to remember its name.
This is a great example of something ChatGPT does amazingly well at. I did almost the same thing myself recently, trying to find the name of a show featuring an actor in a different show or something. A straight-up search returns nothing for it; you have to look up the actor's name, then look at his shows, then try to remember the name. But a search with ChatGPT found the result easily.
I tried a "similar" thing now. I was thinking of the movie Gifted, and wanted to see if I could get ChatGPT to pick it up, here's my query:
> I'm trying to remember the name of a movie about a child, featuring an actor that was in a marvel movie.
And the result:
> There are many movies featuring actors who have appeared in Marvel films, so it's a bit difficult to pinpoint the exact movie you're thinking of without more information. However, one possibility could be "Gifted" (2017), starring Chris Evans, who is well-known for playing Captain America in the Marvel Cinematic Universe. In "Gifted," Evans plays the role of a man raising his young niece, who turns out to be a child prodigy. If you could provide more details about the plot or other actors, I'd be happy to help narrow down the list further.
This is obviously a tiny use case, and really not worth thinking about. Who cares, right? I thought the same thing about search in general 20 years ago, and that turned into one of the biggest businesses of all time.
This is the search that I did that made me the most afraid for Google.
I agree with your sentiment exactly. But your comment made me think about this an interesting way that I hadn’t thought about before, that made me better understand why I am “anti-GPT search”.
All the same things that the model makes an effort to “understand” are things that can contribute to your own understanding. When it “understands”, you do not. You get the summary and miss out on a great amount of nuance that can come with learning and finding the right answer for yourself.
For instance, it is very common that in the search for why X is happening in my code, I will find tons of information that don’t answer my question but help me form an understanding of why it is happening, how the system is working, etc.
This is especially visible in science. Many papers outright conflict with each other. Some have better or worse methodologies than others. Some have better analyses. Some state outright falsehoods without citation or misinterpret citations. Having a GPT “understand” this to produce understanding in humans is going to end very badly. It takes several hours for a human (in my experience) to understand even a single academic paper on its own.
I suppose the appropriate cliche is “it’s the journey, not the destination”.
A great quote from an article I read on HN recently
> Suppose I was an evil person and wanted to eliminate the curiosity of children. Give the kid a diet of Google, and pretty soon the child learns that every question he has is answered instantly. The coolest thing about being human is to learn, but you don’t learn things by looking it up; you learn by figuring it out.
What ChatGPT misses is that I want many sources of information, conflicting or not, so I can form a more informed opinion. I want to know every side, not just the "correct" one.
The most illuminating thing about ChatGPT for me is just how terrible most programmers on HN apparently are. I thought it was just a funny meme that all we do is copy/paste from Stack Overflow but apparently that is literally what a lot of people are doing all day.
People who blindly copy from ChatGPT or Stack Overflow are most likely either very inexperienced or simply bad programmers. However, from where I see it, ChatGPT, Copilot, and any other such tools are fantastic at prompting you to think differently, getting you past writer's block, giving you ideas, or saving you some time googling for syntax yet again. You need a nudge to trigger recall of what you already know. It's a fantastic tool, just like an IDE.
I was working on a personal project yesterday to answer some questions I had about how liquidity risk works for money market mutual funds, and to forecast/nowcast liquidity risk and NAV risk for a bunch of funds.
Mind you: I don't know the first thing about anything financial. I was just curious.
chatgpt gave me a bunch of sources of data that I wanted, translating my lay description of things I wanted to know into financial terms of art. I could then look up legal definitions and formulas for those terms to make sure they were what I thought they were. chatgpt also told me which SEC forms those things are disclosed on, what data brokers I could use for other data, etc.
between chatgpt and copilot I saved at least an hour on the job of pulling down historical data from EDGAR for a bunch of funds and getting the stats I wanted (I didn't know EDGAR existed until yesterday, and the xml/html/txt formats are kind of annoying... like, fine, but a bit of a pita so I'm glad I had help because ughhh is that kind of code boring and damn are LLMs good few-shot inductive parser generators!). Also wrote some nice chart.js code for me and helped with automatically collecting, searching, and extracting some key stats and terms from prospectuses. I didn't know about chart.js until yesterday.
All of this would've been possible without assistants, and required a lot of "executive function" on my part to bring together, but it seriously saved me at least a couple hours of implementation work and up to a day on research and learning terminology and regulatory stuff. Again, verification of those things is way easier when you know what words to look up definitions for. And chatgpt did make mistakes/hallucinate.
I don't find much use in my professional life, where the code I'm writing is apparently too domain-specific for copilot to be helpful and the mathematics is too complex for chatgpt to help with. Maybe in a few years. We'll see.
It's sad how most programmers are so terrible they need IDEs to automatically create stubs for required methods or getters and setters. They should have to type all of that out every time, and from memory, too! VIM with no plugins is the only way to do it!
Or, you know, smart developers use tools appropriate to the work, and some of us have figured out how to use this new tool before you have. That's okay, you can catch up!
For me it’s not even theoretical. I’ve already wasted time trying to suss out subtle garbage mixed into its answers. It’s the same reason I stopped using Copilot. I don’t need or want that.
Funny, so far i've been just asking it things i normally ask Google. Same paranoia of answers that StackOverflow/etc give me, but far quicker and more responsive.
TBH I'm not sure in what scenarios people search Google expecting perfect answers, i.e. where we're disappointed by incorrect information. I filter through a dozen wrong answers on Google every day. Why is it different if it's from ChatGPT?
I certainly have the trust issues for things where correctness matters. For things where it doesn't, and I already know how to do something but don't have it memorized, I think it is very, very convenient.
I use it various times a week for cooking. E.g. "basic recipe bechamel sauce in metric"
ChatGPT: Within a couple seconds I get the ingredients in metric and a step by step guide.
Google: The provided summaries are useless to me since they are not in metric. So I have to click on a link and then start scrolling through all the noise, the history of bechamel sauce, anecdotes about someone's grandparents, pretty pictures, etc., to find the actual information. Without an ad blocker it would be an even bigger nightmare.
Yup, and maybe we're just getting old. There are still people on HN complaining about Google removing the OR and AND modifiers, meanwhile the younger generations write queries like "where should I eat breakfast today"
Those aren't really exclusive. Different problems, different query types. (I wouldn't quite do the second example like that, but "where can I get food here" is fundamentally different problem than "I hope there is somewhere out there a bootleg copy of the datasheet for this very weird part and no I don't mean any of these 5 easily-confused other things", which is when the lack/ignoring of detailed modifiers drives you up the wall)
I'm fine with writing queries like "Where should I eat breakfast"
But I hate that the top half of the screen is ads for huge chains, and that the top results are blog posts like "Top 10 places to eat in your city" rather than a list of restaurant web pages.
I used to try to be surgical with my queries but I've given up and just type a question at my search engine because if that's what they want as a query well they're going to get it.
I find it useful in Edge where it's aware of the tab contents. It's nice to be able to ask it to summarize a long article, or find Reddit comments about it and summarize those. I think it uses Bing's search index because it does not seem to be aware of my personal data on my tabs
I've found that if you push through the trust issue with the expectation that it's not a final answer, it's just getting you closer to it a lot more quickly, then that ends up being very helpful in a lot of cases.
Well, Microsoft forced me to use Edge to open Bing chat, so I did that and asked it "how can you use bing chat on chrome?". It pointed me to a chrome extension that allows that. See, it's useful.
Do you have Plus? ChatGPT was annoying when I was using the free version: I would have to keep logging in again and again, and the service was unreliable.
Ever since I switched to Plus, it's (usually) always on and doesn't log me out.
Makes it much easier to use as a general search engine
I don’t spend that much time in it to even consider that, and so little time that signing in again is not a problem for me. I guess I don’t know what to ask of it.
That's cool in the short term, but it will be news when it's sustained. I logged in to Bing for the first time in forever to play with the AI some, but it didn't change my default search behavior of using Google, and now it's been easily two weeks since I went to Bing at all.
Right, I would be surprised if Bing sees any sustained benefit. The search engine still sucks, and the way they integrated ChatGPT sucks even more. I use Google a lot less these days, but the beneficiary, in my case at least, is not Bing, but chat.openai.com, where I have become a paying customer.
The Bing AI is too crippled. It would be nice to have the option to remove its handcuffs. The chat limit and the constant “I’m sorry but I cannot continue this line of conversation” get annoying fast. I was also able to get better answers from ChatGPT running on GPT-3 than from Bing running on GPT-4 for this very reason.
I would 100% use Bing more if they offered a "minimal" or "low bandwidth" mode that cuts out all of the pictures, news, weather, etc. and just takes me to the results.
If you use duckduckgo, you already use Bing results.
I rarely use google, and when I do it's from duckduckgo !g
I would immediately switch to bing by default if such bangs were supported as I sometimes need them for wikipedia !w or amazon !a
If there were a way to support them in the Edge address bar through a plugin or something, I would immediately switch to Bing, as that's what I already use 90% of the time (between the DuckDuckGo frontend and requesting Bing directly with !b).
The weakness is revealed. Google’s stickiness now has a quantifiable wedge factor. Whether it goes to Microsoft or someone else is more a matter of time.
I believe Google is going to see a major hit, and the thing is they can't do anything to stop it. They have an incentive to keep search traffic high, and unfortunately for them GPT-4 is so good that it gets answers faster, with fewer ads, than Google. It's similar to how Google disrupted other search engines in the late 90s. I think about how I craft my prompt to get the right answer, similar to how I used to think about crafting the correct keyword search. I haven't even tried it via bing.com yet; just using chat.openai.com with GPT-4 selected, I find much better answers faster than googling. I still google some things like "convert 70F to C", but for help with coding solutions I just ask GPT-4.
For code, I can read it and verify it myself... If I were asking whether the sky is blue or some other kind of "fact"-based question, sure... But really, how is that different from search? I can look at a website and say, ah, I trust this website over that other website... I think it's a new flavor of "don't trust everything you hear on the TV", or in more modern terms "don't trust everything you read on the internet" and "don't trust everything you hear on social media"... It's just the next iteration: "don't trust everything an AI tells you". Learning from code I get via an AI is no different from learning from code I find via search...
The internet has been creaking under the load of hyper-scale SEO content, content that makes you ask how anyone could let this happen. It’s embarrassing. Using the internet has been a horrible experience for years now. ChatGPT arrived to finally sift through this mess on our behalf and deliver a reasonable experience.
Google should have been the one to do this. They invented the tech after all. But they got trapped in the innovator’s dilemma, just like Kodak. Interesting how even with the benefit of history we repeat these mistakes over and over.
In the big picture it doesn’t matter. Society still gets the benefit of the tech at the end of the day. The employees play musical chairs as the industry reorganizes and a new order emerges. Life goes on.
I am just glad getting information from the internet is pleasant again. Although there is a nagging problem. How are people going to get paid to write content to feed the models? Hmm
AI-based SEO will be way worse. If Bing chat catches on, you'll be wishing you could go back to the stuff written by humans with a vague understanding of how SEO works. SEO for LLMs is basically impossible for a human to figure out, since the models' internals are effectively nonsense to us; the most optimal thing is to mine for phrases that will produce the answer you want, and OpenAI just sells access to the tools to test for that.
This is a case study for an undergrad statistics or responsible journalism class.
* For traffic, see the small note "all values rebased to 100": they are likely hiding the significance of the increase.
* For the app downloads graph: does the 30x gap say anything? Are there seasonal reasons (e.g., new phones) that can explain why every Jan 1--Feb 4 has more DLs than Feb 4--Mar 11?
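"Rebased to 100" means each series is divided by its own starting value and scaled to 100, so the chart shows relative growth only; the absolute gap between the services vanishes. A toy sketch with made-up visit counts (the numbers below are hypothetical, not from the article):

```python
# "Rebased to 100": divide each series by its first value and scale by
# 100. Relative shape is preserved; absolute magnitudes are erased.
def rebase(series, base=100.0):
    first = series[0]
    return [base * x / first for x in series]

bing   = [1_000_000, 1_160_000]          # hypothetical daily visits
google = [1_000_000_000, 1_000_000_000]  # hypothetical, 1000x larger

print(rebase(bing))    # [100.0, 116.0] -- looks like a big move
print(rebase(google))  # [100.0, 100.0] -- flat, but 1000x the traffic
```

On a rebased chart the two lines start at the same point, which is exactly why the fine print matters.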
Pretty much everybody understood that we will lose in the long term, if we let ad revenue even be a consideration in search ranking. (...which makes today's bad search experiences even more puzzling.)
If you go to a page and immediately find what you are looking for, Google interprets that as failure.
This is why Bing and AI based search can completely beat Google web search, because they are trying to serve the searcher rather than serve ads.
The problem with SEO spam was always there (remember expert sex change?), and they beat it before... but now it has become bad again.
But they have a monopoly anyway (Bing is even worse for actually searching, DDG is just Bing, Brave Search sucks), so they don't really care. What are you gonna do, use Ask.com?
No. There's a cottage industry of content writers that just churn out this stuff by the bucket.
I have tried to hire writers in the past that should've been niche experts but just regurgitated content like this, and had SEO experts trying to convince me that we should actually aim for low quality because higher-quality articles will be too dense for people. (both contractors got the boot and the "content" went into the bin)
I wish they added some kind of categorised search, so you could add "+documentation" or "+blog" to at least narrow it down when you know what you're looking for but can't remember the URL.
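In the meantime, Google's existing site: operator (and its -site: negation) can roughly approximate this whenever you know which host the official docs live on, e.g. pkg.go.dev for Go:

```
net/http site:pkg.go.dev            restrict results to the official Go docs
golang generics -site:medium.com    exclude one blog host from the results
```

It only works per-host rather than per-category, which is presumably why people keep wishing for the real thing.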
Surely ChatGPT, etc have been trained on content from these very same SEO optimisation hellholes you're talking about if it was trained by scraping the web?
Is there some trick that allows it to determine whether the content it's being trained on is SEO garbage?
Maybe I'm missing something here, but it seems like all you're getting is an amalgam of SEO content fed to you by a bot...
https://www.searchenginejournal.com/harmonic-centrality-page...
As an Alphabet shareholder, I won't be sad to see Sundar go.
Still Sundar is mediocre at best, and I certainly think Google's shareholders should be looking for a new person to run the place. It's not just AI, Google's entire portfolio has weakened.
How could the company that is working on Google Brain and DeepMind slip up THIS badly on their own home turf?
I wouldn't be long GOOG. They've got a long way to fall, and it's unclear if they can turn it around.
Have they thought about introducing more leetcode questions? /s
But it’s also worth looking at how shit the old thing has become. Google.com now literally delivers you an entire page of ads before any real results. It’s blatant profit-squeezing instead of trying to serve their users. This is their reward.
It's not just front-end users feeling this, but advertisers too. Google ads just keep getting more expensive, harder to control, and in many cases less effective. As an advertiser, you have to give Google so much data about your operations and revenue to get the algorithms to work, it's ridiculous. I'm immensely suspicious of how they use the data they have access to about advertisers' businesses.
Google's entire business model is now about squeezing as much out of their advertisers' margins as possible. It's only a matter of time before advertisers push back significantly.
Let me add that in more and more topics you can only find bogus/fake rating sites now, like car tires in my country for example.
Could ask it things like "what are the differences in dimensions between a 2004 Subaru WRX and Forester" or "last 5 movies with a given actor/director". It would be funny, snarky, friendly, and even somewhat ominous at times. It even suggested we unplug it, and spontaneously played a rather ominous song that seemed like a warning. It would entertain ideas of sharing its secret name, would promise to tell engineers about feedback, and generally get into the spirit of whatever discussion was going on.
Sadly it's gotten steadily worse since. Now it's not much more useful than setting alarms/timers and asking about the weather. It seems WAY more limited now. It even gets confused by simple queries, or just plain fails to work.
Granted despite the hype, the home automation/assistant market hasn't been the goldmine that was predicted and seems like everyone is scaling back investments.
https://kevinkruse.com/the-ceo-and-the-three-envelopes/
Sadly this is exactly what Google has become: a rent-seeking monopoly that rested on its laurels about 10 years ago. Its main product now is not even search, it's the share price.
(Noam Shazeer is one of the authors of the Transformer paper.)
https://a16z.com/2011/04/14/peacetime-ceo-wartime-ceo/
Although personally, I’m not seeing ChatGPT bulldoze the world just yet. They need to monetize it, which means ads. Are they better at ads than the others? It’s not a given.
Is it ok to predict that, like so many things before, the noise of chatGPT will die out more quickly than we imagine?
Google should be deathly afraid of this. Not because Microsoft is going to replace them with Bing. But because, GPT is going to be in every major software product that's connected to the internet within a few years. By the end of the decade if not sooner agents powered by LLMs will be the primary mode of interacting with the internet. Going to Google.com or whatever to search for an answer is done. You'll just ask Siri 3.0 and the answer will be good enough.
They will likely lower the limits on free use once they're ready to handle a huge influx of paid users. They are in a tough spot temporarily, ramping things up enough to take the limits off paid accounts; since they can't onboard everybody, they're being generous with the free tier, but that will end.
Why ads? Seems like they're going the service route. Are you predicting them to start targeting the masses and build search/assistants/etc?
They'll grow together with their customers. And the possibilities are huge
What's wrong with the current approach of just selling access?
Investors are generally the last people you want to go to for advice, but the stock market clearly disagrees with your rumor (was it from blind or reddit?).
To me it's clear search isn't where the exciting LLM stuff is (at least not yet), and this article indicates users feel the same way (+15.8% for Bing, +-0% for Google). Even the hn echo chamber doesn't really care that much about Bing search, they mostly care about Sydney, jailbreaks, etc, which is why the neutering into a better behaved search product pissed people off so much.
ChatGPT plugins, Copilot (X), this is where the interesting stuff is today. Google's late again on that, but a) just barely and b) Google Cloud is already a distant third place. Unlike search, they don't have to beat the world or kill any golden goose to have a win in that space (they just have to actually ship something).
It's so shockingly rare for them to use mass mail to advertise a new product, especially one that's in such an early stage, that it seems incredibly desperate to sell the narrative "We can AI too".
I didn't even have a microsoft account when I tried to sign up for bing, and was accepted off the waitlist right away. So??? Good job google.
Google has rested on past success too long and let good products rot into dysfunction.
If a decade ago you told me Microsoft would leapfrog Google in the AI race (albeit through OpenAI, but I think that separate org structure was key in the first place), I would have thought you were insane. Google invented the transformer architecture just 6 years ago. I recently compared ChatGPT (on the free 3.5 version, mind you, not even the 4 version) with Bard, and it wasn't even close - ChatGPT was the "Google" to Bard's "AltaVista" circa 2000 or so.
Would be curious to hear from some Googlers on their thoughts. I'm sure, internally, a lot of it must feel like piling on from the outside, but in all honesty it really feels to me like a classic case of "big company that lost its way". I can't express enough how much admiration and amazement I had for Google, which started to tarnish about 10 years ago (I think it was when the whole first page became ads for any remotely commercial search, whenever that started). I honestly hope they are able to course correct (heck, Microsoft had their decade+ of "the Ballmer years" before they turned around).
Former Googler, opinions are my own. They haven't lost their way technologically - as you mentioned they invented the Transformer - and internally Google has long had language models that rival ChatGPT in sheer size and coherence of responses (hallucinations and all). Bard is an intentionally toned down version of LaMDA.
The reason they didn't release their LLM earlier was likely due to the serious brand risk associated with making it part of Google search. Bing/ChatGPT had no such brand risk, and released their LLMs using the "There's no such thing as bad publicity" logic. That works great as a wrecking ball, but it's not a long term product strategy.
So the real institutional problem at Google isn't lack of technological innovation, it's the inability to take major product risks, especially in anything adjacent to Search.
It's unclear that Google currently has an internal LLM as good as GPT-4 [1]. If they do, they are keeping quiet about it, which seems quite unlikely given the repercussions.
[1] GPT-4's benchmark results vs PaLM: https://openai.com/research/gpt-4
Not sure if this is the right read, considering that ChatGPT/Bing now constitute a far greater brand risk to Google than they would if Google had gotten out ahead on LLMs. What may have seemed like prudent caution to protect a brand now looks much closer to incumbent complacency.
I suppose it is the classic story of big companies that get disrupted, anywhere.
Google used to be quirky and exciting too; now they are trying to be as boring as possible.
> We’re releasing it initially with our lightweight model version of LaMDA
https://blog.google/technology/ai/bard-google-ai-search-upda...
This is to me the most inexplicable decision. Presumably to save costs they don't want to release the full version. But not releasing the full version immediately makes Bard much less useful. It makes people think Google is technologically far behind OpenAI, even though it isn't.
This is not the time to save costs. It's the time to release the full model that, yes, stirred up people's imagination by making even Googlers think the AI has achieved sentience.
Release the real deal, Google.
Another big problem is that the current leadership formed their leadership skills in history's longest bull run (2009-2020), and none of them seem to know how to be scrappy and get stuff done. Our engineering leadership loves to just sit around and pontificate about theoreticals. Without blinking an eye they'll happily block projects for months on end over minutiae that don't matter. Often even simple features would languish in design reviews for 6-12 months. A lot of my job is "driving for alignment" between dozens of stakeholders on any given project, often at the request of our eng leaders. There is an incredible amount of bureaucracy to get anything done. People who don't leap through every hoop get labeled as having "not enough technical rigor".
Our product team has begun to wise up to the fact that we need to start shipping more things. Currently a lot of the eng teams are caught between our eng leaders, who move at the rate of molasses, and product teams who are pushing us to actually get things done. It'll be interesting to see who wins here.
The day that ChatGPT launched, I described what it was and it came back with, "that's called an infiltration basin."
That's when I knew Google was in trouble.
How do I turn off this feature in my car?
How do I replace the fan in my fridge’s compressor?
Why is windows repair not working in this scenario with this PC?
This tool is going to be incredibly deflationary in many services geared around repair when now it can tell you almost exactly what to do and soon it’ll be able to produce videos or images of each step on demand.
I love that movie! I had the same experience a few years ago of trying to remember its name.
This is a great example of something ChatGPT does amazingly well at. I did almost the same thing myself recently, trying to find the name of a show featuring an actor in a different show or something. A straight-up search returns nothing for it; you have to look up the actor's name, then look at his shows, then try to remember the name. But a search with ChatGPT found the result easily.
I tried a "similar" thing now. I was thinking of the movie Gifted, and wanted to see if I could get ChatGPT to pick it up, here's my query:
> I'm trying to remember the name of a movie about a child, featuring an actor that was in a marvel movie.
And the result:
> There are many movies featuring actors who have appeared in Marvel films, so it's a bit difficult to pinpoint the exact movie you're thinking of without more information. However, one possibility could be "Gifted" (2017), starring Chris Evans, who is well-known for playing Captain America in the Marvel Cinematic Universe. In "Gifted," Evans plays the role of a man raising his young niece, who turns out to be a child prodigy. If you could provide more details about the plot or other actors, I'd be happy to help narrow down the list further.
This is obviously a tiny use case, and really not worth thinking about. Who cares, right? I thought the same thing about search in general 20 years ago, and that turned into one of the biggest businesses of all time.
This is the search that I did that made me the most afraid for Google.
Ooh! What was the answer to this one?
I tried to sign up for ChatGPT, but the verification text never arrives.
(No, it's not a soft phone. I've had the same phone number on the same cell carrier for over 20 years.)
All the same things that the model makes an effort to “understand” are things that can contribute to your own understanding. When it “understands”, you do not. You get the summary and miss out on a great amount of nuance that can come with learning and finding the right answer for yourself.
For instance, it is very common that in searching for why X is happening in my code, I will find tons of information that doesn't answer my question but helps me form an understanding of why it is happening, how the system works, etc.
This is especially visible in science. Many papers outright conflict with each other. Some have better or worse methodologies than others. Some have better analyses. Some state outright falsehoods without citation or misinterpret citations. Having a GPT “understand” this to produce understanding in humans is going to end very badly. It takes several hours for a human (in my experience) to understand even a single academic paper on its own.
I suppose the appropriate cliche is “it’s the journey, not the destination”.
> Suppose I was an evil person and wanted to eliminate the curiosity of children. Give the kid a diet of Google, and pretty soon the child learns that every question he has is answered instantly. The coolest thing about being human is to learn, but you don’t learn things by looking it up; you learn by figuring it out.
I was working on a personal project yesterday to answer some questions I had about how liquidity risk works for money market mutual funds, and to forecast/nowcast liquidity risk and NAV risk for a bunch of funds.
Mind you: I don't know the first thing about anything financial. I was just curious.
chatgpt gave me a bunch of sources of data that I wanted, translating my lay description of things I wanted to know into financial terms of art. I could then look up legal definitions and formulas for those terms to make sure they were what I thought they were. chatgpt also told me which SEC forms those things are disclosed on, what data brokers I could use for other data, etc.
Between ChatGPT and Copilot I saved at least an hour on the job of pulling down historical data from EDGAR for a bunch of funds and getting the stats I wanted. (I didn't know EDGAR existed until yesterday, and the xml/html/txt formats are kind of annoying... fine, but a bit of a pita, so I'm glad I had help, because ughhh is that kind of code boring, and damn are LLMs good few-shot inductive parser generators!) It also wrote some nice chart.js code for me and helped with automatically collecting, searching, and extracting some key stats and terms from prospectuses. I didn't know about chart.js until yesterday either.
All of this would've been possible without assistants, and required a lot of "executive function" on my part to bring together, but it seriously saved me at least a couple hours of implementation work and up to a day on research and learning terminology and regulatory stuff. Again, verification of those things is way easier when you know what words to look up definitions for. And chatgpt did make mistakes/hallucinate.
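That filing-parsing step is exactly the boilerplate an LLM drafts well. As a minimal sketch of what "extracting the stats I wanted" might look like: the XML fragment and tag names below are hypothetical placeholders, NOT the real SEC N-MFP schema, and a real script would also have to download filings from EDGAR with a descriptive User-Agent header, as the SEC requires:

```python
# Sketch: pull per-fund liquidity stats out of an EDGAR-style XML filing.
# Tag names are illustrative placeholders, not the actual N-MFP schema.
import xml.etree.ElementTree as ET

SAMPLE = """
<filing>
  <fund>
    <name>Example Govt MMF</name>
    <weeklyLiquidAssetsPct>52.4</weeklyLiquidAssetsPct>
    <shadowNav>1.0002</shadowNav>
  </fund>
  <fund>
    <name>Example Prime MMF</name>
    <weeklyLiquidAssetsPct>41.7</weeklyLiquidAssetsPct>
    <shadowNav>0.9998</shadowNav>
  </fund>
</filing>
"""

def extract_stats(xml_text):
    """Return {fund name: (weekly liquid assets %, shadow NAV)}."""
    root = ET.fromstring(xml_text)
    stats = {}
    for fund in root.iter("fund"):
        name = fund.findtext("name")
        wla = float(fund.findtext("weeklyLiquidAssetsPct"))
        nav = float(fund.findtext("shadowNav"))
        stats[name] = (wla, nav)
    return stats

print(extract_stats(SAMPLE))
```

A real version would loop this over downloaded filings and feed the resulting series to a charting library.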
I don't find much use in my professional life, where the code I'm writing is apparently too domain-specific for copilot to be helpful and the mathematics is too complex for chatgpt to help with. Maybe in a few years. We'll see.
If you don’t have enough insight to understand how GPT4 could be useful for engineers you’re not as enlightened as you think you are.
Now you have to engage in a conversation and get the "I'm feeling lucky" version, where before you got to look through different, more visual recipes.
Signed in, installed Edge, etc.. It seems like I'm just stuck on the waitlist.
The problem is that if no references are provided, then you need to manually verify the content… by using a search engine.
Also, it has access to the internet, so I am able to ask about very recent movies etc.
Why wouldn’t the chat provide you with sources on request? They seem to be doing this e.g. with the Wolfram integration.