Especially discovering unknown unknowns that lead to changes in your original requirements. This often happens at each step of the process (e.g. when writing the PRD, when breaking down the tickets, when coding, when QAing, and when documenting for users).
That’s when the agent needs to stop and ask for feedback. I haven’t seen any agents do this well yet.
IMO this is why it won't work. If you're too small a publisher, you don't want to lose potential click-through traffic. If you're a big publisher, you negotiate directly with the main bots that crawl your site (Perplexity, ChatGPT, Anthropic, Google, Grok).
The only way I can see something like this working is if the large "bot" providers set the standard and say they'll pay if it's set up (unlikely), or if smaller apps that crawl decide it's cheaper than a proxy. But in the end, most of the traffic comes from a few large players.
[x] instantly usable
[x] no sales pitch
These are the posts that keep us on HN
What are the long-term plans given the likelihood of Apple having on-device LLMs and integrating them throughout their tools in 202X?
How are you thinking about staying differentiated if LinkedIn were to add this feature?
Also, given that most LLM chat tools are free now (e.g. http://chatgpt.com), how are you thinking about long-term differentiation and pricing?