More specifically, FetchFox is targeting a specific niche of scraping. It focuses on small-scale scraping, on the order of dozens to a few hundred pages. This is partly because, as a Chrome extension, it can only scrape what the user's internet connection can support. You can't scrape thousands or millions of pages on a residential connection.
But a separate reason is that I think LLMs open up a new market and use case for scraping. FetchFox lets anyone scrape without coding knowledge. Imagine you're doing a research project and want data from 100 websites. FetchFox makes that easy, whereas with traditional scraping you would need coding knowledge to scrape those sites.
As an example, I used FetchFox to research political bias in the media. I was able to get data from hundreds of articles without writing a line of code: https://ortutay.substack.com/p/analyzing-media-bias-with-ai . I think this tool could be used by many non-technical people in the same way.
Personally, I am looking into options in this area. Are you planning to offer a cloud-based version of this at some point? If not, could you tell me which existing ones are good?
I'm also wondering how the OP thinks about differentiating themselves and standing out in a marketplace with seemingly a bazillion options.
Using a highlighter or annotation-type tool, if you will.
So I decided to build an annotation tool for all public webpages! Playground demos of how it will work:
- https://www.contextdive.com/snapshot?snapshottedId=47692b19-...
- https://www.contextdive.com/snapshot?snapshottedId=3557f52f-...
^These are previously snapshotted pages. You can highlight anywhere and leave a comment by right-clicking to open the context menu.
PS: Persistence of comments isn't working yet since it's a playground, but I'd love to hear feedback if anyone would like to use it.