foxfired · 3 months ago
I think there is a problem of incentives here. When we made our websites search-engine optimized, the incentive was for Google to understand our content and bring traffic our way. When you optimize your content for LLMs, it only improves their product, and you get nothing in return.
naet · 3 months ago
I do dev work for the marketing dept of a large company, and there is a lot of talk about optimizing for LLMs/AI. ChatGPT can drive sales in the same way a blog post indexed by Google can.

If a customer asks the AI what product can solve their problem and it replies with our product that is a huge win.

If your business is SEO spam with online ads, ChatGPT might eat it. But if your business is selling some product, ChatGPT might help you sell it.

monkeyelite · 3 months ago
And what that means is that the usefulness of LLMs in recommending products is about to jump off a cliff.
krainboltgreene · 3 months ago
Neat, up until the "customer ask" is "What, in X space, is the worst product you can purchase?" That's something you have no ability to manipulate.
CGamesPlay · 3 months ago
But software documentation is a prime example of when the incentives don't have any problems. I want my docs to be more accessible to LLMs, so more people use my software, so my software gets more mindshare, so I get more paying customers on my enterprise support plan.
skeptrune · 3 months ago
Oh hey, I work at Mintlify! We shipped this as a default feature for all of our customers.
skeptrune · 3 months ago
This isn't true. ChatGPT and Gemini link to sites in a similar way to how search engines always have. You can see the traffic show up in Ahrefs or Semrush.
foxfired · 3 months ago
Yes, they show a tiny link behind a collapsed menu that very few people bother clicking. For example, my blog used to prominently take the first spot on Google for some queries. Now, with AI Overviews, there has been a sharp drop in traffic, yet impressions were higher than ever. That means I'm still appearing in search, even in AI Overviews; it's just that very few people click.

As of last week, impressions have also dropped. Maybe that's the result of people no longer clicking on my links?

nozzlegear · 3 months ago
I recently had a call with a new user of a SaaS product that I sell. During the call he mentioned that he found it by typing what he was looking for into Gemini, and it recommended my app. I don't do anything special for LLMs, and the public-facing part of the website has been neglected for longer than I'd like to admit, so I was delighted. I had never considered that AI could send new users to me rather than pull them away. It felt like I'd hacked the system somehow, skipped all the SEO best practices of yesteryear, and had this benevolent bullshit machine bestow a new user on me at no cost.
gl-prod · 3 months ago
How many users actually visit these links?
skeeter2020 · 3 months ago
and like Google - but much, much worse - they bring back enough content to keep users in the chat interface; they never visit your site.
foxyv · 3 months ago
If you are selling advertising, then I agree. However, if you are selling a product to consumers, then no. Ask an LLM, "What is the best refrigerator on the market?" You will get various answers like:

> The best refrigerator on the market varies based on individual needs, but top brands like LG and Samsung are highly recommended for their innovative features, reliability, and energy efficiency. For specific models, consider LG's Smart Standard-Depth MAX™ French Door Refrigerator or Samsung's smart refrigerators with internal cameras.

Optimizing your site for LLMs means that you can direct their gestalt thinking towards your brand.

userbinator · 3 months ago
And neither of those ultimately helps the humans who are actually looking for something. You have a finite amount of time to spend on optimising for humans or for search engines (and now LLMs), and unfortunately many chose the latter, which has led to plenty of spam in the search results.

Yes, SEO can bring traffic to your site, but if your visitors see nothing of value, they'll quickly leave.

shpx · 3 months ago
You get to live in a world where other people are slightly more productive.
burcs · 3 months ago
Really cool idea.

Humans get HTML, bots get Markdown. Two tiny tweaks I’d make:

- Send `Vary: Accept` so caches don’t mix Markdown and HTML.

- Expose it with a `Link: …; rel="alternate"; type="text/markdown"` header so it’s easy to discover.
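
A minimal sketch of both tweaks in a Fetch-style handler (e.g. a Cloudflare Worker); the bodies and the /page.md path are made up for illustration:

```ts
export default {
  async fetch(request: Request): Promise<Response> {
    // Naive negotiation for illustration; real parsing should honor q-values.
    const accept = request.headers.get("Accept") ?? "";
    const wantsMarkdown = accept.includes("text/markdown");

    const body = wantsMarkdown
      ? "# Hello\n\nMarkdown for bots."
      : "<h1>Hello</h1><p>HTML for humans.</p>";

    return new Response(body, {
      headers: {
        "Content-Type": wantsMarkdown
          ? "text/markdown; charset=utf-8"
          : "text/html; charset=utf-8",
        // Tweak 1: tell caches that the body varies by the Accept header.
        Vary: "Accept",
        // Tweak 2: advertise the Markdown rendition so it's discoverable.
        Link: '</page.md>; rel="alternate"; type="text/markdown"',
      },
    });
  },
};
```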

Rohansi · 3 months ago
Would be nice for humans to get the markdown version too. Once it's rendered you get a clean page.
captn3m0 · 3 months ago
I’ve been asking for browser-native Markdown support for years now. A clean web is not that far off if browsers support more than just HTML.
yawaramin · 3 months ago
This person hypermedias
skeptrune · 3 months ago
There was a lot of conversation about this on X over the last couple of days, and an `Accept` request header including "text/markdown, text/plain" has emerged as something of a new standard for AI agents requesting content, so they don't burn unnecessary inference compute processing HTML attributes and CSS.

- https://x.com/bunjavascript/status/1971934734940098971

- https://x.com/thdxr/status/1972421466953273392

- https://x.com/mintlify/status/1972315377599447390
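
A request in that style might look like this (the URL and q-values are illustrative):

```ts
// Prefer Markdown, then plain text, and only fall back to HTML.
const res = await fetch("https://example.com/docs/quickstart", {
  headers: { Accept: "text/markdown, text/plain;q=0.9, text/html;q=0.1" },
});
const markdown = await res.text();
```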

hahnbee · 3 months ago
Keep us posted on how this change impacts your GEO!
Kimitri · 3 months ago
The concept is called content negotiation. We used to do this when we wanted to serve our content as XHTML to clients that preferred it over HTML. It's nice to see it return, as I always thought it was quite cool.
skeptrune · 3 months ago
Agreed! I love that such a tried and true web standard is making a comeback because of AI.
pabs3 · 3 months ago
Content negotiation is also good for choosing human languages; unfortunately, the browser interfaces for it are terrible.
klodolph · 3 months ago
I don’t understand why the agents requesting HTML can’t extract text from HTML themselves. You don’t have to feed the entire HTML document to your LLM. If that’s wasteful, why not have a little bit of glue that does some conversion?
simonw · 3 months ago
Converting HTML into Markdown isn't particularly hard. Two methods I use:

1. The Jina reader API - https://jina.ai/reader/ - add r.jina.ai to any URL to run it through their hosted conversion proxy, eg https://r.jina.ai/www.skeptrune.com/posts/use-the-accept-hea...

2. Applying Readability.js and Turndown via Playwright. Here's a shell script that does that using my https://shot-scraper.datasette.io tool: https://gist.github.com/simonw/82e9c5da3f288a8cf83fb53b39bb4...
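
The first option is just URL concatenation; a minimal sketch with a made-up target URL:

```ts
// Prepend r.jina.ai to any URL to fetch Jina's Markdown rendition of the page.
const target = "https://www.example.com/some/post";
const res = await fetch(`https://r.jina.ai/${target}`);
const markdown = await res.text();
```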

skeptrune · 3 months ago
Through my work simplifying Firecrawl[2], I learned that the golang CLI[1] is the best. In this case, however, I used one available through npm so that it would work with `npx` for the CF Worker builds.

[1]: https://github.com/JohannesKaufmann/html-to-markdown

[2]: https://github.com/devflowinc/firecrawl-simple

osener · 3 months ago
A lightweight alternative to Playwright, which starts a browser instance, is using an HTML parser and DOM implementation like linkedom.

This is much cheaper to run on a server. For example: https://github.com/ozanmakes/scrapedown
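
A rough sketch of that approach, assuming the linkedom, @mozilla/readability, and turndown npm packages (what scrapedown does exactly may differ):

```ts
import { parseHTML } from "linkedom";
import { Readability } from "@mozilla/readability";
import TurndownService from "turndown";

export function htmlToMarkdown(html: string): string {
  // linkedom builds a DOM in-process, so there is no browser to manage.
  const { document } = parseHTML(html);
  // Readability pulls out the main article content, dropping nav, ads, etc.
  const article = new Readability(document as unknown as Document).parse();
  // Turndown converts the extracted HTML fragment to Markdown.
  return new TurndownService().turndown(article?.content ?? html);
}
```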

skeptrune · 3 months ago
It's always better for the agent to have fewer tools, and this approach means you avoid adding a "convert HTML to Markdown" tool, which improves efficiency.

Also, I doubt most large-scale scrapers are running in agent loops with tool calls, so this is probably necessary for those at a minimum.

klodolph · 3 months ago
This does not make any sense to me. Can you elaborate on this?

It seems “obvious” to me that if you have a tool which can request a web page, you can make it so that this tool extracts the main content from the page’s HTML. Maybe there is something I’m missing here that makes this more difficult for LLMs, because before we had LLMs, this was considered an easy problem. It is surprising to me that the addition of LLMs has made this previously easy, efficient solution somehow unviable or inefficient.

I think we should also assume here that the website is designed to be scraped this way; if it isn't, then "Accept: text/markdown" won't work either.

xg15 · 3 months ago
I don't think it's about including this as a tool, just as general preprocessing before the agent even gets the text.
stebalien · 3 months ago
Or one can just use semantic HTML; it's easy enough to convert semantic HTML into markdown with a tool like pandoc. That would also help screen readers, browser "reader modes", text-based web browsers, etc.
NathanFlurry · 3 months ago
We’re doing this on https://rivet.dev now. I hadn't realized how much context bloat we had because we were using Tailwind.
skeptrune · 3 months ago
It is crazy how badly Tailwind bloats HTML. Tradeoffs!
jauntywundrkind · 3 months ago
troyvit · 3 months ago
FYI, both the link to toffelblog and circumlunar.space are broken with SSL errors.