I was writing some Python code (not my primary language) and wanted to know if there was something built into f-strings that would nicely word-wrap the output.
I did a Google search for "how can I create f-strings in python so the output is wrapped at some fixed length". Except that, for Google, I did not actually use the "how can I" phrasing; I just threw some keywords at it: "python f-string wrapping", "python f-string folding", etc.
All I got back was various results on how to wrap f-strings themselves.
Frustrated, I typed "how can I create f-strings in python so the output is wrapped at some fixed length" into ChatGPT, and back came the answer: ... However, f-strings themselves don't provide a built-in method for wrapping text at a specific width. ... For text wrapping, you can use the textwrap module ... Here's how you can combine f-strings with the textwrap module to achieve text wrapping: ... (followed by a full example).
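For reference, a minimal sketch of the textwrap approach ChatGPT described (the variable names and the 40-character width here are my own choices, not from the actual ChatGPT answer):

```python
import textwrap

name = "world"
# Build the long string with an f-string first...
message = f"Hello, {name}! " * 10

# ...then wrap the result at a fixed width. textwrap.fill() returns a
# single string with newlines inserted; textwrap.wrap() would instead
# return a list of lines.
wrapped = textwrap.fill(message, width=40)
print(wrapped)
```

The point is that wrapping isn't an f-string feature at all: you format first, then hand the finished string to `textwrap`.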
I think "Search" will be changing dramatically over the next year or two. And websites which depend on Search traffic to survive will be in deep trouble.
The double standard here is hilarious. The ChatGPT result you got is better largely because you typed in a longer search query. The illusion of superiority that surrounds ChatGPT is significantly due to a human-factors quirk: people are willing to type a long-ass search string in a conversational context. They will type "best pants" into Google and complain about the futility of it, then switch tabs, spend ten minutes writing a short essay on pants for ChatGPT, and rave about the results.
If you take the entire quoted search sentence from your third paragraph, then Google also presents a complete solution.
> I think "Search" will be changing dramatically over the next year or two. And websites which depend on Search traffic to survive will be in deep trouble.
Yes, and no. A lot of sites generate crap content as it is. Not all search queries are just questions that can be answered; some are looking for businesses, shopping items, legal help, and other things that ChatGPT can't emulate. AI will hurt sites like Reddit, Stack Overflow, and Quora, which answer -actual- questions. The issue is that once that happens, there will be less and less content that solves random issues, and ChatGPT will not have been trained on it, creating a bit of a gap. Couple that with all the junk AI articles people will be publishing, and it will become increasingly difficult to solve the minor issues you randomly come across.
ChatGPT, Midjourney, etc. right now are just feeding into the "Dead Internet." I'd say that currently, on net, they have been more of a bad thing than a good thing. That may change, or it could get worse.
> some are looking for businesses, shopping items, legal help, etc things that ChatGPT, etc can't emulate.
Except it can. It doesn’t have to try to “remember” the details in its weights; it can use RAG and all sorts of fun tricks. As Google’s Search Generative Experience shows, LLM interfaces don’t have to be strictly chat. These tools surface and aggregate existing content instead of hallucinating new content. They can’t stop people from pasting generated content into Reddit, but they can be used for actual search.
Imagine just asking ChatGPT (or more likely, Gemini) for “space heater recommendations for a 300sqft office in a shed” and having the LLM walk you through the options - you no longer need to use a UI to toggle filters, and you don’t need to explicitly throw keywords into a search bar and hope you picked the right ones.
Regarding the “dead internet”: you’ll always have humans creating new content. People wanna talk about their interests on Reddit. That won’t change. People will file bug reports and push new code on GitHub. Companies will post support articles. Journals will publish research papers. These “good” sources will still exist because there are external reasons for them to. The only thing that will die is the SEO crap, now that it’s really not special.
As a side note, "Search" outside of Google is changing too. I have been using Kagi for a few months now, and my search experience has been so much better since then.
An LLM is not an assistant. It's a tool that will fill in the gaps with plausible-sounding content. If it happens to have a lot of data in its training on what you're looking for, good. If not, you're SOL, risking being fooled by confidence.
I feel like context is far more important than the actual answer.
If you want to develop your skills, being aware of the things that aren't the answer is just as important as getting a single hyper-specific curated answer.
As others have mentioned in nearby threads, Kagi has some good examples of this in action. Appending a question mark to your query will run a GPT model to interpret and summarize the results, providing sources for each reference so that you can read through them in more detail. It's way less "trust me bro" than Google and feels more like a research assistant.
I would have been on board with it, if I didn’t get a completely wrong AI generated response from Google the other day.
I was, for some reason, looking into the history of Reddit and ended up searching “what happened to former AMA coordinator Victoria Taylor”. Google's AI summary got confused and told me that she got fired because hundreds of redditors voted for her ouster (clearly mixing up Victoria's story with Ellen Pao's).
I know n=1, but it feels like it's maybe a little too early to take this out of experimental mode.
I think this points to bigger issues with the AI field generally with respect to adoption and development. The output of these systems is fundamentally unverifiable. Building a system to verify these answers would be orders of magnitude more expensive, and maybe computationally impossible. It looks really impressive because it's 95% of the way there, but a 5% error rate is atrocious, it was expensive to get this far, and improving it much more will only get more expensive. What we've essentially built is a search engine that is five times more expensive to operate and that alienates the content from its context, which makes it less valuable.
Maybe it's good enough for programming, since you can immediately verify the output, but I suspect we're pretty far away from the breakthroughs AI boosters insist we are mere months away from. We've had some form of self-driving for a while now, and the other company that seems close is Waymo, which has taken over a decade of huge research and capital expenditures.
I wonder if Kagi's implementation of the same feature influenced anyone at google or if this was an inevitable development. Google has been providing the AI answer boxes for years but it's never been very user driven.
With Kagi you can use the !fast/!expert bangs to query one of several LLMs fed the top few search results, or, as of last month, just end the search query with a question mark (no affiliation, just a happy customer). It's almost completely changed how I use search.
I pay for Kagi, and I'm only vaguely aware that they have AI features; I have no interest in using them. What I like is that it doesn't nag me about that stuff, it just lets me use it (Kagi search) as I want. Completely different from aggressive big-tech "you're the product" companies that use modal "got it?" popups to push features you don't want. I hope that doesn't change for Kagi more than anything else.
Also, I rarely use search to answer questions, and anyway I would never just trust an answer. I even read the SO answer before pasting in the code, and decide whether it fits what I'm doing and what modifications it may need; I guess I'm old. But more importantly, I use "search" mostly to get to pages I know exist or expect to exist. Getting to the right page faster is what I care about, not any answer.
I pay for Kagi and was similarly only vaguely aware that they have AI features and I had no interest in using them.
I started adding a question mark to my query out of curiosity and their instant answers are really good. Also they have links to their references, which I often check to verify the answer.
It's quite interesting what a search engine can be in 2024 when the product isn't the person using it, but the search results themselves.
Kagi, with a way smaller budget than Google, has really managed to make something pretty cool. It still doesn't replace Google for 100% of my searches, but for the ones it does, it is remarkably good, and so much less mentally taxing when working on a coding problem. I don't have to manually filter search results on top of everything else; I can just click the top link.
I use the same Google account at work and home but only got AI-powered results at work, making me wonder what else Google does differently when you connect from a favored IP.
> If you take the entire quoted search sentence from your third paragraph, then Google also presents a complete solution.
Previously, writing human-like sentences would not yield good results.
Now it's more like <keyword> <keyword> reddit.
Which one is the complete solution you're seeing here? https://www.google.com/search?q=how+can+I+create+f-strings+i...
They said that they put exactly the same queries into Google as ChatGPT.
What I want is a librarian, who can sift through the impossibly vast oceans of information and point me at resources that are relevant to my query.
They started with the info boxes that extracted info from sites so you don’t have to click and read.
This is just a natural evolution of that.
Love this feature too, especially the references! I end almost all my queries with a question mark now.
I get that generative AI is the latest terrible thing, but this seems like a normal product rollout to me.