This is pulling the content of the RSS feeds of several news sites into the context window of an LLM and then asking it to summarize news items into articles and fill in the blanks?
I'm asking because that is what it looks like, but AI / LLMs are not specifically mentioned in this blog post; they just say the news is 'generated' under the 'News in your language' heading, which seems to imply that is what they are doing.
I'm a little skeptical of the approach: when you ask an LLM to point to 'sources' for the information it outputs, as far as I know there is no guarantee those are correct. And it does seem like sometimes they just use pure LLM output, since no sources are cited, or it's attributed to 'common knowledge'.
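For what it's worth, the pipeline I'm imagining is roughly this. A minimal sketch, where the entries, the prompt format, and the `[n]` citation scheme are entirely my own invention, not anything Kagi has documented:

```python
def build_summary_prompt(entries):
    """Combine RSS entries (title + teaser + link) into one summarization
    prompt. The [n] markers are what later become the 'sources': the model
    is merely asked to echo them; nothing verifies the claims against them."""
    items = "\n\n".join(
        f"[{i}] {e['title']} ({e['link']})\n{e['summary']}"
        for i, e in enumerate(entries, 1)
    )
    return (
        "Summarize the following news items into a single article.\n"
        "Cite items by their [number] after each claim.\n\n"
        + items
    )

# Hypothetical feed entries standing in for real RSS output:
entries = [
    {"title": "City opens new bridge", "link": "https://example.com/a",
     "summary": "Officials opened the bridge on Monday."},
    {"title": "Bridge cost questioned", "link": "https://example.org/b",
     "summary": "Auditors asked why the cost doubled."},
]
prompt = build_summary_prompt(entries)
# `prompt` would then go to an LLM; no model call is made here.
```

If it works anything like this, the citations are produced by the same generation step as the claims, which is exactly why I don't trust them to be correct.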
Just for concrete confirmation that LLMs are being used: there's an open issue on the GitHub repository about hallucinations with made-up information, where a Kagi employee specifically mentions "an LLM hallucination problem": https://github.com/kagisearch/kite-public/issues/97
There's also a line at the bottom of the about page at https://kite.kagi.com/about that says "Summaries may contain errors. Please verify important information."
To take a moment to be a hopeless Stan for one of my all-time favorite companies: I don't think the summary above yours is fair, and I see why they don't center the summary part of it.
Unlike the disastrous Apple feature from earlier this year (which is still available, somehow!), this isn't trying to transform individual articles. Rather, it's focused on capturing broader trends and giving just enough info to decide whether to click into any of the source articles. That seems like a much smaller, more achievable scope than Apple's feature, and as always, open-source helps work like this a ton.
I, for one, like it! I'll try it out. Seems better than my current sources for a quick list of daily links, that's for sure (namely Reddit News, Apple News, Bluesky in general, and a few industry newsletters).
Yeah. I really like Kagi. This is a terrible idea.
1. It seems to omit key facts from most stories.
2. No economic value is returned to the sources doing the original reporting. This is not okay.
3. If your summary device makes a mistake, and it will, you are absolutely on the hook for libel.
There seem to be some misunderstandings about what news is and what makes it well-executed. It's not the average; it's the deepest and most accurate reporting. If anyone from the Kagi team wants to discuss, I'm a paying member and I know this field really, really well.
Thank you. Also a paying Kagi user, because I like the idea that it's worth paying for a good service. Ripping off journalists' and newspapers' content goes against that.
> It’s not the average, it’s the deepest and most accurate reporting.
Yes! I'm also a paying member but I'm deeply suspicious of this feature.
The website claims "we expose readers to the full spectrum of global perspectives", but not all perspectives are equal. It smacks of "all sides" framing which is just not what news ought to be about.
Yes, that's what it is. Kagi as a brand is LLM-optimist, so you may be fundamentally at odds with them here. If it lessens the issue for you: the sources of each item were cited properly in every example I tried, so maybe you could treat it as a fancy link aggregator.
Kagi founder here. I am personally not an LLM-optimist. The thing is that I do not think LLMs will bring us to a "Star Trek" level of useful computers (which I see humans eventually getting to) due to LLMs' fundamentally broken auto-regressive nature. A different approach will be needed. A slight nuance, but an important one.
Kagi as a brand is building tools in service of its users, no particular affinity towards any technologies.
I'm about as AI-pessimist as it gets, but Kagi's use of LLMs is the most tasteful and practical I've seen. It's always completely opt-in (e.g. "append a ? to your search query if you want an AI summary", as opposed to Google's "append a swear word to your search query if you don't want one"), it's not pushy, and it's focused on summarizing and aggregating content rather than trying to make it up.
I consider myself a major LLM optimist in many ways, but if I'm receiving a once-a-day curated news aggregation feed, I feel I'd want a human eye on it. I guess an LLM might in theory have fewer of the biases found in humans, but you're trading one kind of bias for another.
Hard pass then. I’m a happy Kagi search subscriber, but I certainly don’t want more AI slop in my life.
I use RSS with newsboat and I get mainstream news by visiting individual sites (nytimes.com, etc.) and using the Newshound aggregator. Also, of course, HN with https://hn-ai.org/
> when you ask an LLM to point to 'sources' for the information it outputs, as far as I know there is no guarantee that those are correct
A lot of times when I ask for a source, I get broken links. I'm not sure if the links existed at one point, or if the LLM is just hallucinating where it thinks a link should exist. CDN libraries, for example. Or sources to specific laws.
I monitor 404 errors on my website. ChatGPT frequently sends traffic to pages that never existed. Sometimes the information they refer to has never existed on my website.
For example: "/glossary/love-parade" - There is no mention of this on my website. "/guides/blue-card-germany" has always been at "/guides/blue-card". I don't know what "/guides/cost-of-beer-distribution" even refers to.
They'll do pretty much anything you ask of them, so unless the text actually comes from some source (via tool calls, injecting content into the context, or some other way), they'll make up a source rather than do nothing, unless prompted otherwise.
If you need to ask for a source in the first place, chances are very high that the LLM's response is not based on summarizing existing sources but rather exclusively quoting from memory. That usually goes poorly, in my experience.
The loop "create a research plan, load a few promising search results into context, summarize them with the original question in mind" is vastly superior to "freely associate tokens based on the user's question, and only think about sources once they dig deeper".
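In code, that grounded loop is only a few lines. A sketch under stated assumptions: `search`, `fetch`, and `llm` are toy stand-ins I made up; nothing here reflects any particular product's implementation:

```python
def research_answer(question, search, fetch, llm):
    """The grounded loop: plan queries, pull real pages into context, then
    summarize with the original question in mind. The model can only cite
    URLs that were actually fetched and placed in front of it."""
    plan = llm(f"List 2 search queries that would help answer: {question}")
    context = []
    for query in plan.splitlines():
        for url in search(query)[:2]:           # a few promising results
            context.append((url, fetch(url)))   # load content into context
    sources = "\n\n".join(f"SOURCE {u}:\n{t}" for u, t in context)
    return llm(
        f"Using ONLY the sources below, answer: {question}\n"
        f"Cite the source URL after each claim.\n\n{sources}"
    )

# Toy stand-ins so the sketch runs without any real services:
def search(q): return [f"https://example.com/{abs(hash(q)) % 100}"]
def fetch(url): return f"(contents of {url})"
def llm(prompt): return prompt  # a real model call would go here

answer = research_answer("what is the foo of bar?", search, fetch, llm)
```

The "freely associate, cite later" mode has no such loop, which is why its citations are so unreliable.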
It actually seems more like an aggregator (like ground.news) to me. And pretty much every single sentence cites the original article(s).
There are nice summaries within an article. I think what they mean is that they generate a meta-article after combining the rest of them. There's nothing novel here.
But the presentation of the meta-article and publishing once a day feel like great features.
I have, yeah. To me it looks like what I described in my comment above: it's LLM-generated text, is it not?
> And pretty much every single sentence cites the original article(s).
Yeah, but again, correct me if I'm wrong: I don't think asking an LLM to provide a source / citation yields any guarantee that the text it generates alongside it is accurate.
I also see a lot of text without any citations at all, here are three sections (Historical background, Technical details and Scientific significance) that don't cite any sources: https://kite.kagi.com/s/5e6qq2
Publishing once a day to remove the "slot machine dopamine hit" is worth it for that alone. I have forever been looking for a peer/replacement to Google News, I was about to pony up for a Ground News subscription but I'll probably hold off for a couple more months. Alternatives to google news have been sorely lacking for over a decade, especially since google news got their mobile-first redesign which significantly and permanently weakened the product to meet some product manager's bonus-linked KPI. One more product to wean off the google mothership. Gmail is gonna be real hard though.
I am fine with it using AI, but it makes me feel pretty icky that they didn't mention that this is AI/LLM-generated at any point in the article. That's a no-no IMO, and has turned me off this pretty strongly.
I'm firmly on the side of "AI" skepticism, but even I have to admit that this is a very good use of the tech. LLMs generally do a great job at summarizing text, which is essentially what this is. The sources could be statically defined in advance, given that they know where they pull the information from, so I don't think the LLM generates that content.
So if this automates the process of fetching the top news from a static list of news sites and summarizing the content in a specific structure, there's not much that can go wrong there. There's a very small chance that the LLM would hallucinate when asked to summarize a relatively short amount of text.
It's useful for the users, but tragically bad for anyone involved with journalism. Not that they're not used to getting fucked by search engines at this point, be it via AMP, instant answers, or AI overviews.
Not that the userbase of 50k is big enough to matter right now, but still...
I see! One thing I'm wondering: they say they are fetching the content from the RSS feeds of news outlets rather than scraping them. I haven't used RSS in a while, but I recall most news outlets don't include the full article in their feed, just the headline or a short summary. I'd be worried that articles with misleading headlines (which are not uncommon) might cause this tool to generate incorrect news items. Is that not a concern?
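As a quick illustration of the teaser problem, here's a minimal check against a made-up feed (the feed contents and the 200-word threshold are arbitrary inventions of mine, just to show the failure mode):

```python
import xml.etree.ElementTree as ET

# A fabricated RSS feed whose <description> is only a teaser:
RSS = """<rss version="2.0"><channel>
  <item>
    <title>Misleading headline here</title>
    <description>Only a one-line teaser, not the full article.</description>
  </item>
</channel></rss>"""

def has_full_text(item, min_words=200):
    """Heuristic: treat short <description> bodies as teaser-only entries."""
    desc = item.findtext("description") or ""
    return len(desc.split()) >= min_words

flags = [has_full_text(i) for i in ET.fromstring(RSS).iter("item")]
# A summarizer fed this feed would be working from the headline plus one
# sentence, which is exactly the misleading-headline worry.
```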
If the parent commenter is correct, the concern I'd have would be about transparency. Even if it's good at what it does, I don't think we're anywhere close to a place as a society where it shouldn't be explicit when it's being used for something like this.
When you go to Google News, the way they group together stories is AI (pre-LLM technology). Kagi is merely taking it one step further.
I agree with your concern. I see this as a convenient grouping, and if any interests me I can skip reading the LLM summary and just click on the sources they provide (making it similar to Google News).
It cannot be "one step further", because there's a clear break between what Google News provides and what Kagi provides. Google News links to an article that exists in our world: 100%, no chance involved. Kagi uses an LLM to generate text, which is entirely up to chance.
> when you ask an LLM to point to 'sources' for the information it outputs,
Services listing sources, like Kagi news, perplexity and others don't do that. They start with known links and run LLMs on that content. They don't ask LLMs to come up with links based on the question.
That is what I mean yeah, I’m not saying it’s fabricating sources from training data, that would obviously be impossible for news articles, I’m saying if you give it a list of articles A, B and C including their content in the context and ask ‘what is the foo of bar?’ and it responds ‘the foo of bar is baz, source: article B paragraph 2’, that does not tell you whether the output is actually correct, or contained in the cited source at all, unless you manually verify it.
This seems like the opposite of "privacy by design"
> Privacy by design: Your reading habits belong to you. We don’t track, profile, or monetize your attention. You remain the customer and not the product.
How would the LLM provider get any information about your reading habits from the app? The LLM is used _before_ the news content is served to you, the reader.
It's also a workaround for copyright: news sites would be (rightfully) pissed if you publicly posted their articles in full, and would argue that you're stealing their viewership. But if you're essentially doing an automatic mash-up of five stories on the same topic from different sources, all of a sudden you're not doing anything wrong!
As an example from one of their sources: you can only re-publish a certain number of words from a Guardian article (100 commercially, 500 non-commercially) without paying them.
And yet, after trying it, I have to admit it's more informative and less provocative than any other news source I've seen since at least 2005.
I don't know how they do it, and I'm not sure I care; the result is that they've eliminated both clickbait and ragebait, and the news is indeed better off for it!
Yes, they are not the only player here. Quite a few companies are doing this, if you use Perplexity, they also have a news tab with the exact feature set.
> if you use Perplexity, they also have a news tab with the exact feature set
"Exact" is far from accurate. I just did a side-by-side comparison. To name only two obvious differences:
A. At the top level, Perplexity has a "Discover" tab [1], not titled "News". That leads to an AAF page with the endless-scroll anti-pattern (see [2] [3] for other examples). Kagi News [4] presents a short list of ~7 items without images.
B. At the detail-page level, Kagi organizes their content differently (with more detail, including "sources", "highlights", "perspectives", "historical background", and "quick questions"). Perplexity only has content with sources and "discover more". You can verify for yourself.
Having used Perplexity, I find its news tab US-centric, without many options to get regional content from what I can see.
Kagi seems to offer regional news, and the sources appear to be from the respective area as well. I do appreciate the public access (for now?) via RSS feeds (ironic, but handy).
Thanks for pointing out that this is yet more AI slop. Very disappointing for Kagi to do this. I get my money's worth from searches, but if I was looking for more features I would want them to be not AI-based.
I guess they embed the news of the day and let the model summarize it. You can add metadata to the index, which you should technically be able to query reliably. You don't have to let the model do the summarization of the source, which can be erroneous.
Far more interesting is how they aggregate the data. I thought many sources moved behind paywalls already.
> Kagi is probably the only pro-LLM company praised on HN.
Kagi made search useful again, and their genAI stuff can be easily ignored. Best of both worlds -- it remains useful for people like myself who don't want genAI involved, but there's genAI stuff for people who like that sort of thing.
That said, if their genAI stuff gets to be too hard to ignore, then I'd stop using or praising Kagi.
That this is about news also makes it less problematic for me. I just won't see it at all, since I don't go to Kagi for news in the first place.
Disappointing. Non-LLM NLP summarization is actually rather good these days. It works by finding the key sentences in the text and extracting the relevant sections, with no possibility of hallucination. No need to go full AI for this feature.
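For the curious, the classic frequency-based flavor of this fits in a few lines. A toy sketch of my own, not any particular library's algorithm; real extractive systems (TextRank and friends) score sentences more cleverly but share the key property that the summary is a verbatim subset of the source:

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=2):
    """Score sentences by average word frequency and return the top ones
    verbatim: every output sentence is copied from the input, so nothing
    can be hallucinated."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    def score(s):
        tokens = re.findall(r"[a-z']+", s.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)
    top = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
    return [s for s in sentences if s in top]  # keep original order

sample = "Cats sleep a lot. Cats eat fish daily. Dogs bark loudly sometimes."
summary = extractive_summary(sample, 2)
# Each returned sentence appears verbatim in `sample`; no generation involved.
```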
I believe an LLM's output is fine for giving an overview if it's provided the articles; if you want a detailed overview you should be reading the articles anyway.
> One daily update: We publish once per day around noon UTC, creating a natural endpoint to news consumption. This is a deliberate design choice that turns news from an endless habit into a contained ritual.
I might not agree with all decisions Kagi makes, but this is gold. Endless scrolling is a big indicator that you're a consumer not a customer.
> Endless scrolling is a big indicator that you're a consumer not a customer.
Someone recently highlighted the shift from social networks to social media in a way I'd never thought about:
>> The shift from social networks to social media was subtle, and insidious. Social networks, systems where you talk to your friends, are okay (probably). Social media, where you consume content selected by an algorithm, is not. (immibis https://news.ycombinator.com/item?id=45403867)
Specifically, in the same way that insufficient supply of mortgage securities (there's a finite number of mortgages) led to synthetic CDOs [0] in order to artificially boost supply of something there was a market for.
Social media and 24/7 news (read: shoving content from strangers into your eyeballs) are the synthetic CDOs of content, with about the same underlying utility.
There is in fact a finite amount of individually useful content per unit of time.
> Social media and 24/7 news (read: shoving content from strangers into your eyeballs) is the synthetic CDO of content, with about the same underlying utility.
This is a great way to put it. Much of the social media content is a derivative/synthetic representation of actual engagement. Content creators and influencers can make us "feel" like we have a connection to them (eg: "get ready with me!" type videos), but it's not the same as genuine connection or communication with people.
I agree, but I would also like to see yesterday's news. 12 articles is a little too few for me. I would like to come back every couple of days and review what happened.
This is one of the big reasons I've gravitated towards a reverse-chronological feed that takes you from the past to the present -- at some point you hit a natural end, which is a natural prompt to go do something else. I've picked up Reeder[0] as a feed reader, since it can aggregate a bunch of sources (chiefly RSS, but also Mastodon, BlueSky, reddit, etc) and presents it in such a timeline without pressure to read everything.
I am seeing this app mentioned after years. When did it move to a subscription model? It was a one-time paid app. Found it: it's also available as Reeder Classic on the Mac App Store.
About a year ago I switched my news reading habits.
Now I just read the news on a Sunday (unless I'm doing something much more exciting). For the remainder of the week I don't read the news at all. It's the way my grandad used to read the news when he was a farmer.
I've found it to be a convenient format. It lets you stay informed, while giving enough of a gap for news stories to develop and mature (unless they happen the day before). There's less speculation and rumour, and more established detail, and it has reduced my day-to-day stress.
Annoyingly I still hear news from people around me, but I try to tune it out in the moment. I can't believe I used to consume news differently and it baffles me why I hear of people reading/watching/listening to the news 10+ times per day, including first thing when they awaken and last thing before they sleep. Our brains were not designed for this sort of thing.
I am not so sure. It currently highlights a story from Munich, and in addition to a few factual errors, the information is simply outdated; there have been numerous new relevant developments. (I also don't understand the selection of sources. Aljazeera? rt.com? South China Morning Post? As if there weren't enough sources of original reporting right from Germany.)
I would agree that a single daily news update is useful (and healthy), but this must also be reflected in the choice of topics and the type of reporting.
I think this is the wrong direction. We need better journalism, not better summarizing aggregators.
Summaries are no substitute for real articles, even if they're generated by hand (and these apparently are not). Summaries are bound to strip the information of context, important details and analysis. There's also no accountability for the contents.
Sure, there are links to the actual articles, but let's not kid ourselves that most people are going to read them. Why would they need a summarizing service otherwise? Especially if there are 20 sources of varying quality.
There are no "lifehacks" to getting informed. I'll be harsh: this service strikes me as informationally illiterate person's idea of what getting informed is like.
Also, they talk about "echo chambers" and "full spectrum of global perspectives". Representing all perspectives sounds great in theory, but how far should it go?
Should all politicians' remarks be reproduced verbatim with absolutely no commentary, no fact-checking and no context? Should an article about an airplane crossing the Pacific include "some experts believe that this is impossible because Earth is flat?"
Excessive bias in media is definitely a problem, but I don't think completely unbiased media can exist while still being useful. In my experience, people looking for it either haven't thought about it deeply enough, or they just want information that doesn't make their side look bad.
> Representing all perspectives sounds great in theory
A bigger bias problem by far is bias by omission, so including all stories whether they meet the presenter's political agenda or not would be a great start.
> We need better journalism, not better summarizing aggregators.
I agree, but how do you envision that happening? Journalism died a long time ago, arguably around the birth of the 24-hour news cycle, and it was further buried by social media. A niche tech company can only provide a better way to consume what's out there, not solve such large societal problems.
> There are no "lifehacks" to getting informed.
I don't think their intent is to change how people are informed. What this aims to do is replace endless doomscrolling on sites that are incentivized to rob us of our attention and data, with spending a few minutes a day to get a sense of general events around the world. If something piques your interest, you can visit the linked sources, or research the event elsewhere. But as a way of getting a quick general overview of what's going on, I think it's great.
We're seeing success with giving journalists better tools to create engaging journalism (which HN hates :). Many outlets are now seeing that they have to once more prove their value, and there exists some really great subscription-only media here in the Nordics and France.
That's precisely what Axios does, and they make money from this (and they don't list their sources). So I can see Kagi pursuing this.
FWIW, I agree with you.
I used to be a news junkie. I've always thought of writing the lessons I learned, but one of them was "If you're a casual news reader, you are likely more misinformed than the one who doesn't read any news." One either should abstain or go all in.
I guess I'd amend it to put people who only glance at headlines to be even more misinformed. It was not at all unusual for me to read articles where the content just plain disagreed with the headline!
It feels much less slimy to pay a nominal fee for a service than it does to use a "free" service and wonder about how / to what extent your data is being exploited.
The Kagi implementation can use Kagi search and can use advanced features of search like lenses. This isn't a unique feature but if you believe Kagi search is better than whoever Anthropic/OpenAI are using it's a nice plus.
Kagi's contracts with LLM providers are the ones businesses get with actual privacy protections which is also nice.
Actually, I get news search with a quick answer and a link to the assistant, and not a single LLM but practically all LLMs in one interface, and I can link to and share the chats.
The interface is nice and simple, and Kagi is very up to date regarding new LLMs (it already includes Sonnet 4.5, for example).
It's just a nice interface for all LLMs, which I often use on mobile or laptop for various work and also private tasks.
The last few months have shown that there is no single LLM worth investing in (today's "top" LLM is tomorrow's second-in-class).
You get multiple LLMs in a single interface, with a single login and a single subscription to maintain, all your threads stored in the same place, the ability to switch between models in a thread, custom models...
The multi-step AI assistant. Being able to try out all the LLMs in one subscription. Search integration with Kagi, which means the AI can really search only the pages I want, with my search settings as well.
I used Kagi search for a while but eventually switched back to Google because Kagi's location-aware search sucks. It might be better nowadays. I've been living in their browser, Orion, for a few weeks now though, and it's great. It works about 90% of the time, which is impressive for a browser that isn't tested alongside the big 4.
What everyone gets wrong about news curation is thinking people want the same news as everyone else, or "both sides" of a situation, or whatever mechanism for exposing them to things that someone else thinks are true.
What I actually want is a curated set of things that are useful to me personally given my situation.
The most important things about my situation to give me useful news are things like: net worth, income, citizenship, family situation, where I live, what industries I work in, current investments, travel destinations, regulatory and political risks associated with any of those things, etc.
Because those are the things that dictate how the parts of the world I can't control are going to affect me (especially if I don't react).
I don't want to hear about random things that aren't going to affect me when I'm looking at the news.
Sometimes I want to learn new random/useless things for fun, but that's a leisure activity. It's totally separate from the "news", which is a thing that adults consume as a chore to better plan their lives.
The fundamental problem is that I and others are not going to willingly give out the personal information required to curate useful news feeds, so the news will always be filled with noise.
Maybe local AI can help with that.
ChatGPT Pulse has actually done a great job at this for me. It knows about an upcoming vacation I have planned and gave me some specific news about closures and events there, with recommendations on what activities to book in advance.
I like Kagi and want them to succeed. But currently (according to LinkedIn) there are 26 employees. They are building search, LLM assistant wrappers, a browser, and now news. Please don't overextend the same way Proton currently is.
I used to love Proton, but they focus too much on feature development instead of stability and fixing long-standing bugs. E.g. zooming has been broken for years in ProtonMail on iOS. Some emails won’t even render at all :(
Yup, I quit Proton (Mail) for the same reason. I had been using it for a long time…
There are so many little bugs and annoyances, it’s frustrating to see new features being released all the time while obvious bugs and shortcomings are not fixed.
It was a very big relief going back to a normal email client.
I still support Proton (i pay for Proton VPN) and hope they will succeed in their mission.
How is Proton over extending? All of their services are pretty great imo. I'm happy with them. Doesn't mean I am ever going to use their bitcoin wallet app thing, but if they want to build it, great, they know their customer base so it's probably not out of left field.
In the Drive mobile app you can't even download a folder. There have been issues open on this for almost a year now, and since then they've launched two entirely new services and added many extra features.
When you're paying for something you expect the basics to be there, and that's what annoys me about Proton.
I like this a lot, going to try it! One issue I have, though, is that in the current world of LLMs scraping content, I'd prefer there to be more discussion about compensating authors.
I know the announcement page talks about not scraping, but to me personally the value I see in this product is that I don't have to go to the authors' ad-ridden, poorly organized, and often terrible pages. Which then seems really unfair to the actual content providers.
I'd like to see this type of service cost $3-5/month on top of my normal Kagi sub to compensate the authors of the articles I read. A streaming-music model for news, ish.
That proposed amount is quite small, but my assumption is that only a very small amount of money would reach authors from my ad views anyway, so a $10/month addition feels extreme to me.
> One daily update: We publish once per day around noon UTC, creating a natural endpoint to news consumption. This is a deliberate design choice that turns news from an endless habit into a contained ritual.
Could you guys maybe print it on paper and send it to my physical mailbox, so I can do this ritual with breakfast? :-)
This is pulling the content of the RSS feeds of several news sites into the context window of an LLM and then asking it to summarize news items into articles and fill in the blanks?
I'm asking because that is what it looks like, but AI / LLMs are not specifically mentioned in this blog post, they just say news are 'generated' under the 'News in your language' heading, which seems to imply that is what they are doing.
I'm a little skeptical towards the approach, when you ask an LLM to point to 'sources' for the information it outputs, as far as I know there is no guarantee that those are correct – and it does seem like sometimes they just use pure LLM output, as no sources are cited, or it's quoted as 'common knowledge'.
https://github.com/kagisearch/kite-public/issues/97
There's also a line at the bottom of the about page at https://kite.kagi.com/about that says "Summaries may contain errors. Please verify important information."
Unlike the disastrous Apple feature from earlier this year (which is still available, somehow!), this isn't trying to transform individual articles. Rather, it's focused on capturing broader trends and giving just enough info to decide whether to click into any of the source articles. That seems like a much smaller, more achievable scope than Apple's feature, and as always, open-source helps work like this a ton.
I, for one, like it! I'll try it out. Seems better than my current sources for a quick list of daily links, that's for sure (namely Reddit News, Apple News, Bluesky in general, and a few industry newsletters).
1. It seems to omit key facts from most stories.
2. No economic value is returned to the sources doing the original reporting. This is not okay.
3. If your summary device makes a mistake, and it will, you are absolutely on the hook for libel.
There seem to be some misunderstandings about what news is and what’s makes it well-executed. It’s not the average, it’s the deepest and most accurate reporting. If anyone from the Kagi team wants to discuss, I’m a paying member and I know this field really, really well.
Yes! I'm also a paying member but I'm deeply suspicious of this feature.
The website claims "we expose readers to the full spectrum of global perspectives", but not all perspectives are equal. It smacks of "all sides" framing which is just not what news ought to be about.
Kagi founder here. I am personally not an LLM-optimist. The thing is that I do not think LLMs will bring us to "Star Trek" level of useful computers (which I see humans eventually getting to) due to LLM's fundamentally broken auto-regressive nature. A different approach will be needed. Slight nuance but an important one.
Kagi as a brand is building tools in service of its users, no particular affinity towards any technologies.
I use RSS with newsboat and I get mainstream news by visiting individual sites (nytimes.com, etc.) and using the Newshound aggregator. Also, of course, HN with https://hn-ai.org/
A lot of times when I ask for a source, I get broken links. I'm not sure if the links existed at one point, or if the LLM is just hallucinating where it thinks a link should exist. CDN libraries, for example. Or sources to specific laws.
For example: "/glossary/love-parade" - There is no mention of this on my website. "/guides/blue-card-germany" has always been at "/guides/blue-card". I don't know what "/guides/cost-of-beer-distribution" even refers to.
They'll do pretty much everything you ask of them, so unless the text actually come from some source (via tool calls, injecting content into the context or other way), they'll make up a source rather than doing nothing, unless prompted otherwise.
The loop "create a research plan, load a few promising search results into context, summarize them with the original question in mind" is vastly superior to "freely associate tokens based on the user's question, and only think about sources once they dig deeper".
It actually seems more like an aggregator (like ground.news) to me. And pretty much every single sentence cites the original article(s).
There are nice summaries within an article. I think what they mean is that they generate a meta-article by combining the rest of them. There's nothing novel here.
But the presentation of the meta-article and publishing once a day feel like great features.
> And pretty much every single sentence cites the original article(s).
Yeah, but correct me if I'm wrong: I don't think asking an LLM to provide a source / citation yields any guarantee that the text it generates alongside it is accurate.
I also see a lot of text without any citations at all, here are three sections (Historical background, Technical details and Scientific significance) that don't cite any sources: https://kite.kagi.com/s/5e6qq2
So if this automates the process of fetching the top news from a static list of news sites and summarizing the content in a specific structure, there's not much that can go wrong there. There's a very small chance that the LLM would hallucinate when asked to summarize a relatively short amount of text.
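That pipeline is simple enough to sketch. This is a guess at the general shape, not Kagi's actual implementation; the feed XML is inlined here for illustration, where a real pipeline would fetch each URL from its static list and hand the prompt to an LLM:

```python
# Sketch of "static feed list -> extract items -> summarization prompt".
# The sample feed is inlined; a real pipeline would fetch each feed URL
# (e.g. with urllib) and send the resulting prompt to an LLM.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Example News</title>
  <item><title>Storm hits coast</title><link>https://example.com/storm</link>
        <description>A major storm made landfall on Tuesday.</description></item>
  <item><title>Markets rally</title><link>https://example.com/markets</link>
        <description>Stocks rose after the announcement.</description></item>
</channel></rss>"""

def parse_items(feed_xml: str) -> list[dict]:
    # Pull title/link/description out of each RSS <item>.
    root = ET.fromstring(feed_xml)
    return [
        {
            "title": item.findtext("title"),
            "link": item.findtext("link"),
            "description": item.findtext("description"),
        }
        for item in root.iter("item")
    ]

def summarization_prompt(items: list[dict]) -> str:
    # The model only sees text that came from the feeds, which keeps the
    # hallucination surface small compared to open-ended generation.
    body = "\n".join(f"- {i['title']} ({i['link']}): {i['description']}" for i in items)
    return f"Summarize today's top stories, citing each link:\n{body}"

items = parse_items(SAMPLE_FEED)
prompt = summarization_prompt(items)
```

Summarizing short, already-fetched text like this is about the lowest-risk use of an LLM, though as the thread notes, "low" is not "zero".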
Not that the userbase of 50k is big enough to matter right now, but still...
When you go to Google News, the way they group together stories is AI (pre-LLM technology). Kagi is merely taking it one step further.
I agree with your concern. I see this as a convenient grouping, and if any interests me I can skip reading the LLM summary and just click on the sources they provide (making it similar to Google News).
I would argue creating your own summary is several steps beyond an ordering algorithm.
You don't and you should not use this one either.
A) wrote up the news in a reader-friendly format
B) set up a page with prioritized news
Because _that’s what a newspaper is_.
What extra value does an AI rewrite add? At best it's a borderline no-op, at worst a lossy transformation (?)
Services listing sources, like Kagi news, perplexity and others don't do that. They start with known links and run LLMs on that content. They don't ask LLMs to come up with links based on the question.
> Privacy by design: Your reading habits belong to you. We don’t track, profile, or monetize your attention. You remain the customer and not the product.
But the person running the LLM surely does.
That’s not news. That’s news-adjacent random slop.
As an example from one of their sources, you can only re-publish a certain number of words from an article in The Guardian (100 commercially, 500 non-commercially) without paying them.
I don't know how they do it, and I'm not sure I care; the result is they've eliminated both clickbait and ragebait, and the news is indeed better off for it!
"Exact" is far from accurate. I just did a side-by-side comparison. To name only two obvious differences:
A. At the top level, Perplexity has a "Discover" tab [1] -- not titled "News". That leads to an AAF page with the endless-scroll anti-pattern (see [2] [3] for other examples). Kagi News [4] presents a short list of ~7ish items without images.
B. At the detail-page level, Kagi organizes their content differently (with more detail, including "sources", "highlights", "perspectives", "historical background", and "quick questions"). Perplexity only has content with sources and "discover more". You can verify for yourself.
[1]: https://www.perplexity.ai/discover
[2]: https://www.reddit.com/r/rant/comments/e0a99k/cnn_app_is_ann...
[3]: https://www.tumblr.com/make-me-imagine/614701109842444288/an...
[4]: https://kite.kagi.com
Kagi seems to offer regional news, and the sources appear to be from the respective area as well. I do appreciate the public access (for now?) with RSS feeds (ironic but handy).
https://web.archive.org/web/20250930154005/https://blog.kagi...
Far more interesting is how they aggregate the data. I thought many sources moved behind paywalls already.
Deleted Comment
Imagine if Google News used an LLM to show summaries to users without explicitly saying it's AI in the UI.
Ironically, one of the first LLM-induced mistakes experienced by average people was a news summary: https://www.bbc.com/news/articles/cge93de21n0o.amp
Kagi made search useful again, and their genAI stuff can be easily ignored. Best of both worlds -- it remains useful for people like myself who don't want genAI involved, but there's genAI stuff for people who like that sort of thing.
That said, if their genAI stuff gets to be too hard to ignore, then I'd stop using or praising Kagi.
That this is about news also makes it less problematic for me. I just won't see it at all, since I don't go to Kagi for news in the first place.
I might not agree with all decisions Kagi makes, but this is gold. Endless scrolling is a big indicator that you're a consumer not a customer.
Someone recently highlighted the shift from social networks to social media in a way I'd never thought about:
>> The shift from social networks to social media was subtle, and insidious. Social networks, systems where you talk to your friends, are okay (probably). Social media, where you consume content selected by an algorithm, is not. (immibis https://news.ycombinator.com/item?id=45403867)
Specifically, in the same way that insufficient supply of mortgage securities (there's a finite number of mortgages) led to synthetic CDOs [0] in order to artificially boost supply of something there was a market for.
Social media and 24/7 news (read: shoving content from strangers into your eyeballs) are the synthetic CDOs of content, with about the same underlying utility.
There is in fact a finite amount of individually useful content per unit of time.
[0] If you want the Michael Lewis-esque primer on CDOs https://m.youtube.com/watch?v=A25EUhZGBws
This is a great way to put it. Much of the social media content is a derivative/synthetic representation of actual engagement. Content creators and influencers can make us "feel" like we have a connection to them (eg: "get ready with me!" type videos), but it's not the same as genuine connection or communication with people.
Please expand obscure acronyms, not everyone lives in your niche.
Deleted Comment
[0] https://reederapp.com
Anyway, there's this https://netnewswire.com - https://github.com/Ranchero-Software/NetNewsWire (mac native) if someone is looking for an open source alt.
Now I just read the news on a Sunday (unless I'm doing something much more exciting). For the remainder of the week I don't read the news at all. It's the way my grandad used to read the news when he was a farmer.
I've found it to be a convenient format. It lets you stay informed, while it gives enough of a gap for news stories to develop and mature (unless they happen the day before). There's less speculation and rumour, and more established detail, and it has reduced my day-to-day stress.
Annoyingly I still hear news from people around me, but I try to tune it out in the moment. I can't believe I used to consume news differently and it baffles me why I hear of people reading/watching/listening to the news 10+ times per day, including first thing when they awaken and last thing before they sleep. Our brains were not designed for this sort of thing.
I would agree that a single daily news update is useful (and healthy), but this must also be reflected in the choice of topics and the type of reporting.
Summaries are no substitute for real articles, even if they're generated by hand (and these apparently are not). Summaries are bound to strip the information of context, important details and analysis. There's also no accountability for the contents.
Sure, there are links to the actual articles, but let's not kid ourselves that most people are going to read them. Why would they need a summarizing service otherwise? Especially if there are 20 sources of varying quality.
There are no "lifehacks" to getting informed. I'll be harsh: this service strikes me as informationally illiterate person's idea of what getting informed is like.
Should all politicians' remarks be reproduced verbatim with absolutely no commentary, no fact-checking and no context? Should an article about an airplane crossing the Pacific include "some experts believe that this is impossible because Earth is flat?"
Excessive bias in media is definitely a problem, but I don't think that completely unbiased media can exist while still being useful. In my experience, people looking for it either haven't thought about it deeply enough, or they just want information that doesn't make their side look bad.
A bigger bias problem by far is bias by omission, so including all stories whether they meet the presenter's political agenda or not would be a great start.
Yes. That's an interview, and is much better than summarizations and short soundbites and one-sentence quotes.
I agree, but how do you envision that happening? Journalism died a long time ago, arguably around the birth of the 24-hour news cycle, and it was further buried by social media. A niche tech company can only provide a better way to consume what's out there, not solve such large societal problems.
> There are no "lifehacks" to getting informed.
I don't think their intent is to change how people are informed. What this aims to do is replace endless doomscrolling on sites that are incentivized to rob us of our attention and data, with spending a few minutes a day to get a sense of general events around the world. If something piques your interest, you can visit the linked sources, or research the event elsewhere. But as a way of getting a quick general overview of what's going on, I think it's great.
FWIW, I agree with you.
I used to be a news junkie. I've always thought of writing up the lessons I learned, and one of them was: "If you're a casual news reader, you are likely more misinformed than someone who doesn't read any news." One should either abstain or go all in.
I guess I'd amend it to put people who only glance at headlines to be even more misinformed. It was not at all unusual for me to read articles where the content just plain disagreed with the headline!
(I was very skeptical about Kagi Assistant, but now I am a happy Kagi Ultimate subscriber.)
I like that Kagi charges for their service, so their motive is to provide services for that cost, and not with ads on top of it.
Kagi's contracts with LLM providers are the ones businesses get with actual privacy protections which is also nice.
It's just a nice interface for all LLMs, which I often use on mobile or laptop for various work and also private tasks.
The last months have shown that there is no single LLM worth investing in (today's "top" LLM is tomorrow's second-in-class).
You get multiple LLMs in a single interface, with a single login and a single subscription to maintain, all your threads stored in the same place, the ability to switch between models in a thread, custom models...
What I actually want is a curated set of things that are useful to me personally given my situation. The most important things about my situation to give me useful news are things like: net worth, income, citizenship, family situation, where I live, what industries I work in, current investments, travel destinations, regulatory and political risks associated with any of those things, etc.
Because those are the things that dictate how the parts of the world I can't control are going to affect me (especially if I don't react). I don't want to hear about random things that aren't going to affect me when I'm looking at the news. Sometimes I want to learn new random/useless things for fun, but that's a leisure activity. It's totally separate from the "news", which is a thing that adults consume as a chore to better plan their lives.
The fundamental problem is that I and others are not going to willingly give out the personal information required to curate useful news feeds, so the news will always be filled with noise. Maybe local AI can help with that.
It was a very big relief going back to a normal email client.
I still support Proton (i pay for Proton VPN) and hope they will succeed in their mission.
When you're paying for something you expect the basics to be there, and that's what annoys me about Proton.
I mean, it keeps bothering me that their search engine logo is a "g". Anything to position themselves as close to google.
I know the announcement page talks about not scraping, but to me personally the value I see in this product is that I don't have to go to the authors' ad-ridden, poorly organized, and often terrible pages. Which then seems really unfair to the actual content providers.
I'd like to see this type of service cost $3-5/m on top of my normal Kagi sub to compensate the authors of the articles I read. A streaming-music model for news, ish.
This proposed value is quite small, but my assumption is only a very small amount of money would reach them from my ad views anyway so a $10/m addition feels extreme to me.
Could you guys maybe print it on paper and send it to my physical mailbox, so I can do this ritual with breakfast? :-)
Guten: A Tiny Newspaper Printer - https://news.ycombinator.com/item?id=42599599 - January 2025 (106 comments)
Getting my daily news from a dot matrix printer - https://news.ycombinator.com/item?id=41742210 - October 2024 (253 comments)