I think there is an incentive problem here. When we made our websites search-engine optimized, the incentive was for Google to understand our content and bring traffic our way. When you optimize your content for LLMs, it only improves their product, and you get nothing in return.
I do dev work for the marketing dept of a large company, and there is a lot of talk about optimizing for LLMs/AI. ChatGPT can drive sales in the same way a blog post indexed by Google can.
If a customer asks the AI what product can solve their problem and it replies with our product that is a huge win.
If your business is SEO spam with online ads, ChatGPT might eat it. But if your business is selling some product, ChatGPT might help you sell it.
But software documentation is a prime example of a case where the incentives are aligned. I want my docs to be more accessible to LLMs, so more people use my software, so my software gets more mindshare, so I get more paying customers on my enterprise support plan.
This isn't true. ChatGPT and Gemini link to sites in a similar way to how search engines have always done it. You can see the traffic show up in ahrefs or semrush.
Yes, they show a tiny link behind a collapsed menu that very few people bother clicking. For example, my blog used to prominently hold the first spot on Google for some queries. Now, with AI Overviews, there has been a sharp drop in traffic. However, it still showed higher impressions than ever. This means I'm still appearing in search, even in AI Overviews; it's just that very few people click.
As of last week, impressions have also dropped. Maybe that's a consequence of people not clicking on my links anymore?
I had a call with a new user for a SaaS product that I sell recently. During the call he mentioned that he found it by typing what he was looking for into Gemini, and it recommended my app. I don't do anything special for llms, and the public-facing part of the website has been neglected for longer than I like to admit, so I was delighted. I had never considered that AI could send new users to me rather than pull them away. It felt like I'd hacked the system somehow, skipped through all the SEO best practices of yesteryear and had this benevolent bullshit machine bestow a new user on me at the cost of nothing.
If you are selling advertising, then I agree. However, if you are selling a product to consumers, then no. Ask an LLM "What is the best refrigerator on the market?" and you will get various answers like:
> The best refrigerator on the market varies based on individual needs, but top brands like LG and Samsung are highly recommended for their innovative features, reliability, and energy efficiency. For specific models, consider LG's Smart Standard-Depth MAX™ French Door Refrigerator or Samsung's smart refrigerators with internal cameras.
Optimizing your site for LLMs means that you can direct their gestalt thinking towards your brand.
And neither of those two ultimately helps the humans who are actually looking for something. You have a finite amount of time to spend on optimising for humans or for search engines (and now LLMs), and unfortunately many chose the latter, which has just led to plenty of spam in the search results.
Yes, SEO can bring traffic to your site, but if your visitors see nothing of value, they'll quickly leave.
There was a lot of conversation about this on X over the last couple days and the `Accept` request header including "text/markdown, text/plain" has emerged as kind of a new standard for AI agents requesting content such that they don't burn unnecessary inference compute processing HTML attributes and CSS.
The concept is called content negotiation. We used to do this when we wanted to serve our content as XHTML to clients preferring that over HTML. It's nice to see it return as I always thought it was quite cool.
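As a rough sketch of what that negotiation might look like in a worker-style fetch handler (the routing and content loaders here are placeholders, not anything from the thread):

```ts
// Sketch of Accept-based content negotiation: markdown for agents that ask
// for it, HTML for everyone else. The content loaders below are just stubs.
export default {
  async fetch(request: Request): Promise<Response> {
    const path = new URL(request.url).pathname;
    const accept = request.headers.get("Accept") ?? "";
    const wantsMarkdown =
      accept.includes("text/markdown") || accept.includes("text/plain");

    const body = wantsMarkdown ? await loadMarkdown(path) : await loadHtml(path);
    return new Response(body, {
      headers: {
        "Content-Type": wantsMarkdown
          ? "text/markdown; charset=utf-8"
          : "text/html; charset=utf-8",
        // Tell caches that the representation depends on the Accept header.
        "Vary": "Accept",
      },
    });
  },
};

// Placeholder loaders; in practice these might read pre-rendered files from KV/R2.
async function loadMarkdown(path: string): Promise<string> {
  return `# ${path}\n\nMarkdown version of this page.`;
}
async function loadHtml(path: string): Promise<string> {
  return `<!doctype html><title>${path}</title><p>HTML version of this page.</p>`;
}
```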
I don’t understand why the agents requesting HTML can’t extract text from HTML themselves. You don’t have to feed the entire HTML document to your LLM. If that’s wasteful, why not have a little bit of glue that does some conversion?
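For what it's worth, here's a small sketch of the kind of glue I mean, in Node, assuming jsdom, @mozilla/readability, and turndown are installed (my choice of libraries, not something anyone in the thread prescribed):

```ts
// Minimal glue: fetch a page, pull out the main article content, convert it to Markdown.
import { JSDOM } from "jsdom";
import { Readability } from "@mozilla/readability";
import TurndownService from "turndown";

async function pageToMarkdown(url: string): Promise<string> {
  const res = await fetch(url);
  const html = await res.text();

  // Parse the HTML and let Readability strip nav, ads, and other chrome.
  const dom = new JSDOM(html, { url });
  const article = new Readability(dom.window.document).parse();
  if (!article) throw new Error(`Could not extract main content from ${url}`);

  // Convert the extracted HTML fragment to Markdown before handing it to the model.
  const turndown = new TurndownService({ headingStyle: "atx" });
  return turndown.turndown(article.content);
}

// Usage: pageToMarkdown("https://example.com/post").then(console.log);
```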
I learned through my work simplifying Firecrawl[2] that the golang CLI[1] is the best. However, in this case I used one available through npmjs so that it would work with `npx` for the CF worker builds.
It's always better for the agent to have fewer tools, and this approach means you get to avoid adding a "convert HTML to markdown" tool, which improves efficiency.
Also, I doubt most large-scale scrapers are running in agent loops with tool calls, so this is probably necessary for those at a minimum.
This does not make any sense to me. Can you elaborate on this?
It seems “obvious” to me that if you have a tool which can request a web page, you can make it so that this tool extracts the main content from the page’s HTML. Maybe there is something I’m missing here that makes this more difficult for LLMs, because before we had LLMs, this was considered an easy problem. It is surprising to me that the addition of LLMs has made this previously easy, efficient solution somehow unviable or inefficient.
I think we should also assume here that the web site is designed to be scraped this way—if you don’t, then “Accept: text/markdown” won’t work.
Or one can just use semantic HTML; it's easy enough to convert semantic HTML into markdown with a tool like pandoc. That would also help screen readers, browser "reader modes", text-based web browsers, etc.
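For instance, something along the lines of `pandoc -f html -t gfm page.html -o page.md` should handle the conversion, assuming the page keeps its content in sensible semantic elements.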
Humans get HTML, bots get markdown. Two tiny tweaks I’d make...
Send `Vary: Accept` so caches don’t mix Markdown and HTML.
Expose it with a `Link: …; rel="alternate"; type="text/markdown"` header so it’s easy to discover.
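Roughly, those two headers on the HTML response could look like this (the `/index.md` alternate path is just an example, adjust to however the markdown versions are exposed):

```ts
// Sketch of the extra headers on the HTML response.
const htmlHeaders = new Headers({
  "Content-Type": "text/html; charset=utf-8",
  // Keep shared caches from serving the markdown variant to browsers (and vice versa).
  "Vary": "Accept",
  // Advertise the markdown alternate so agents can discover it without guessing.
  "Link": '</index.md>; rel="alternate"; type="text/markdown"',
});
```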
- https://x.com/bunjavascript/status/1971934734940098971
- https://x.com/thdxr/status/1972421466953273392
- https://x.com/mintlify/status/1972315377599447390
1. The Jina reader API - https://jina.ai/reader/ - add r.jina.ai to any URL to run it through their hosted conversion proxy, eg https://r.jina.ai/www.skeptrune.com/posts/use-the-accept-hea...
2. Applying Readability.js and Turndown via Playwright. Here's a shell script that does that using my https://shot-scraper.datasette.io tool: https://gist.github.com/simonw/82e9c5da3f288a8cf83fb53b39bb4...
[1]: https://github.com/JohannesKaufmann/html-to-markdown
[2]: https://github.com/devflowinc/firecrawl-simple
This is much cheaper to run on a server. For example: https://github.com/ozanmakes/scrapedown