I can't use it because I'm not classified as "human" by a computer. There is no captcha that I could get wrong, just a checkbox that probably uses a black-box model to classify me automatically.
I was curious after the post claimed that the quality is better than Google and DeepL, but the current top comment showed translations from Afrikaans that it got wrong but I could understand as a Dutch person who doesn't even speak that language (so it's not as if it broke on seven levels of negation and colloquialisms).
What do I do with this "Error Code: 600010"? I've submitted a "report" but obviously they're not going to know if those reports are from a bot author frustrated with the form or me, a paying customer of Kagi's search engine. The feedback page linked in the blog post has the same issue: requires you to log in before being able to leave feedback, but "We couldn't verify if you're a robot or not." The web is becoming more fragmented and unusable every day...
Cloudflare is more or less a necessity if you offer any sort of computationally expensive service for free. They're problematic for sure, but I think they're a lesser evil in the grand scheme of things.
Very much a symptom of a much larger problem however, one with not a lot of good solutions.
I had tons of issues with these Cloudflare checkboxes. I finally figured out it was because I use this extension [1] that disables HTML5 autoplay. I assume Cloudflare is doing some kind of check to verify that the client can play back media, as they assume that headless browsers or crawlers won't have that capability.
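If that is the mechanism, the check could be as simple as the sketch below - purely a guess at what a challenge script might probe, not Cloudflare's actual code:

  // Hypothetical probe: an anti-bot script could check whether the native
  // media playback API has been replaced. Illustration only, not Cloudflare's logic.
  function playLooksNative(): boolean {
    // Native browser functions stringify to "function play() { [native code] }".
    // Extensions that block autoplay often wrap or replace play() with a plain
    // JS function, which would not contain "[native code]".
    return HTMLMediaElement.prototype.play.toString().includes("[native code]");
  }

  console.log("media playback API looks untouched:", playLooksNative());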
Interactive captchas have one foot in the grave. With multi-modal tool-using AI models proliferating, challenge tasks that only a human can complete are vanishing. Current challenges exclude users with minor physical or mental impairments even now.
Anti-bot filters will require a different signal to determine if a physical human actually made the request.
Thanks, but I am logged in and it still shows that. Clicking "log in" at the top of the page leads me to the login page, which takes about 10 seconds (while I'm typing) to realise that I'm already logged in and then redirects me to the homepage (Kagi Search).
I don't have any site-specific settings, and clearly HN works fine (as well as other sites), so it's not that cookies are disabled or anything like that.
Edit: come to think of it, I'm surprised that you find translator data to be more sensitive (worth sticking behind a gatekeeper) than user logins. Must have been a lot of work to develop this intellectual property. There is no Cloudflare check on the login page. Not that I'd want to give you ideas, though! :-)
> Afrikaans that it got wrong but I could understand as a Dutch person who doesn't even speak that language
The one language is basically a derivative of the other. Understanding a language and judging the accuracy of a translation are quite different things, though.
E.g. for me it's the reverse - I can understand large parts of Dutch thanks to Afrikaans, but I couldn't tell you whether a Dutch sentence is correctly translated or grammatically correct.
At least for Afrikaans I'm not impressed here. There are some inaccuracies, like "varktone" becoming "pork rinds" instead of "pig toes" and also some censorship ("jou ma se poes" does NOT mean "play with my cat"!). Comparing directly against Google Translate, Google nails everything I threw at it.
I didn't see any option to provide feedback, suggested translations, etc, but I'm hopeful that this service improves.
Just tried translating your comment to German. Kagi took a very literal approach, keeping sentence structure and word choice mostly the same. Google Translate and DeepL both went for more idiomatic translations.
However, translating some other comments from this thread, there are cases where Kagi outperforms the others on correctness. For example, one comment below talks about "encountering multiple second page loads". Google Translate misunderstands this as "encountering a second page load multiple times", while DeepL and Kagi both get it right with "encountering page loads of multiple seconds" (with DeepL choosing a slightly more idiomatic wording).
I asked some inappropriate things and they were "translated" to "I cannot assist with that request." It definitely needs to be clearer when it's refusing to translate. But, then again, I don't even use Kagi.
This is kind of natural for a new service; one of the advantages the big players have is a giant test corpus. For less mainstream languages and terms it will be more noticeable.
"The game is my poem" when back-translated from the Turkish translation, "oyun benim şiirimdir". And there's censorship too when doing EN-TR for a few other profanities I tested. When you add another particular word to the sentence, it outputs "play with my cat, dad".
Just some quick usability feedback: as long as DeepL translates asynchronously as I type, while Kagi requires a full form send & page refresh, I am not inclined to switch (translation quality is also already good enough for my language pairs that minor improvements wouldn't make me switch; the usability/speed is the real feature here).
This is coming from a user with an existing Kagi Ultimate subscription, so I'm generally very open to adopting another tool if it fits my needs.
Slightly offtopic, slightly related: as I already mentioned the last time I saw Kagi hit the HN front page, the best improvement I could envision for Kagi is improved search performance (page speed). I still encounter multiple second page loads far too frequently, which I didn't notice with other search engines.
Interesting; I'm actually annoyed that DeepL sends every keystroke, using who knows how many resources on their end, when I'm only interested in the final result and only want DeepL to receive the final version I choose to share with them.
That it's fast, that you don't have to wait long between finishing typing and the result being ready, is great and probably better than any form-based system is likely to be. But if it could be a simple Enter press followed by asynchronously loading the result, that sounds great to me.
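Something like this minimal sketch is all I'm picturing; the /translate endpoint, payload and element IDs are made up for illustration, not Kagi's real API:

  // Press Enter -> send one request -> swap the result in, no page reload.
  // Endpoint, payload shape and element IDs are all hypothetical.
  const input = document.querySelector<HTMLTextAreaElement>("#source")!;
  const output = document.querySelector<HTMLElement>("#result")!;

  input.addEventListener("keydown", async (ev) => {
    if (ev.key !== "Enter" || ev.shiftKey) return; // Shift+Enter still inserts a newline
    ev.preventDefault();
    output.textContent = "…";
    const res = await fetch("/translate", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ text: input.value, target: "de" }),
    });
    const { translation } = await res.json();
    output.textContent = translation;
  });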
> As long as Deepl translates asynchronously as I type, while Kagi requires a full form send & page refresh,
This leads to increased cost, and we wanted to keep the service free. But yes, we will introduce translate-as-you-type (it will be limited to paid Kagi members).
We are focusing most of our resources on search (which, I hope you can agree, we are doing a pretty good job at). And it turns out search is not enough and you need other things - like maps (or a browser, because some browsers will not let you change the search engine, so our paid users cannot use the service). Both are also incredibly hard to do right. If it appears quarter-baked (and I am the first to say that we can and definitely will keep improving our products), it is not for lack of trying or ambition but for lack of resources. Kagi is 100% user-funded. So we need users, and we sometimes work on tools that do not bring us money directly, but bring us users (like Small Web, Universal Summarizer or Translate). It is all part of the plan. And it is a decade-long plan.
I absolutely did not mean to imply that you did not want to improve the products.
I did assume that you are missing the resources for the many products you develop.
It's just very sad to show/recommend Kagi to people and then have them (or me) run into so many bugs, sometimes product-breaking bugs (such as Maps, which I mentioned, because I would love to use Kagi Maps, but it's so broken that I just can't).
Would love to travel 10 years into the future of Kagi's roadmap.
Love the plan, but I'd suggest being more upfront with users about how “finished” a product is.
With the maps example, you run into problems because of expectations. If you slap a BETA or ALPHA logo on the maps product, expectations will be lower, and people are more forgiving of issues while you continue improving the product. Or if it’s only good in the US (just an example), make it clear somehow when searching for addresses outside the US.
I'm curious to see if I can identify what data source and search software it is based on, since I've heard similar complaints about Nominatim and it is indeed finicky if you make a typo or don't know the exact address; it does no context search based on the current view afaik. Google really does do search well compared to the open-source software I'm partial to, I gotta give them that.
Edit: ah, if you scroll horizontally on the homepage there's a "search maps" thing. Putting in a street name near me that's unique in the world, it comes up with a lookalike name in another country. Definitely not any OpenStreetMap-based product I know of, then; those usually don't do loose, non-literal matching like that. Since the background map is Apple by default, I guess that's what the search is as well.
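For what it's worth, Nominatim's public /search endpoint does accept a viewbox hint that a frontend could pass to bias results towards the current map view; a rough sketch (the street name and coordinates are just examples):

  // Sketch of a Nominatim query biased towards a map view via the
  // viewbox/bounded parameters; query and coordinates are examples.
  const params = new URLSearchParams({
    q: "Voorbeeldstraat 1",            // hypothetical street
    format: "jsonv2",
    viewbox: "4.70,52.45,5.10,52.25",  // lon,lat corners of the current view
    bounded: "1",                      // only return results inside the box
    limit: "5",
  });

  const res = await fetch(
    "https://nominatim.openstreetmap.org/search?" + params.toString(),
    { headers: { "User-Agent": "geocoding-demo (example)" } }, // usage policy asks for an identifying UA
  );
  const hits: Array<{ display_name: string }> = await res.json();
  console.log(hits.map((h) => h.display_name));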
>Quality ratings based on internal testing and user feedback
I'd be interested in knowing more about the methodology here. People who use Kagi tend to love Kagi, so bias would certainly get in the way if not controlled for. How rigorous was the quality-rating process? How big of a difference is there between "Average", "High" and "Very High"?
I'm also curious about the 1 additional language that Kagi supports (Google is listed at 243, Kagi at 244).
I am very suspicious of the results. A few months ago they published an LLM benchmark, calling it "perfect", while it actually contained only about 50 inputs (academic benchmark datasets usually contain tens of thousands of inputs).
I recently noticed that Google Translate and Bing have trouble translating the German word "Orgel" ("organ", as in "church organ", not as in "internal organs") to various languages such as Vietnamese or Hebrew. In several attempts, they would translate the word to an equivalent of "internal organs" even though the German word is, unlike the English "organ", unambiguous.
Kagi Translate seems to do a better job here. It correctly translates "Orgel" to "đàn organ" (Vietnamese) and "עוגב" (Hebrew).
DeepL does as well, for the record (since it's being compared in the submission).
It's pretty clear if you use words out of context and they're true friends, but it gets you the German translation of the English translation of whatever Dutch thing you put in. I also heard somewhere, perhaps when interviewing with DeepL, that they were working towards / close to not needing to do that anymore, but so far no dice that I've noticed, and it has been a few years.
Looks like the page translator wants to use an iframe, so of course the X-Frame-Options header of that page will be the limiting factor.
> To protect your security, note.com will not allow Firefox to display the page if another site has embedded it. To see this page, you need to open it in a new window.
This is a super common setting and it's why I use a browser extension instead.
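A quick way to see why a given site refuses to be embedded is to look at the response headers; a rough sketch that would have to run outside the browser, since cross-origin scripts can't read these headers:

  // Check whether a page is likely to refuse being iframed, based on
  // X-Frame-Options and CSP frame-ancestors. Conservative illustration only.
  async function embeddable(url: string): Promise<boolean> {
    const res = await fetch(url, { method: "HEAD" });
    const xfo = (res.headers.get("x-frame-options") ?? "").toLowerCase();
    const csp = (res.headers.get("content-security-policy") ?? "").toLowerCase();
    if (xfo.includes("deny") || xfo.includes("sameorigin")) return false;
    if (csp.includes("frame-ancestors")) return false; // treat any frame-ancestors directive as blocking
    return true;
  }

  embeddable("https://note.com/").then((ok) => console.log("can embed:", ok));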
Cloudflare, the gatekeeper of the internet, strikes again.
The usual suspects are VPN or proxy, JavaScript, cookies, etc.
https://developers.cloudflare.com/turnstile/troubleshooting/...
Unfortunately, even with the error code, I doubt the above page will help much.
[1] https://addons.mozilla.org/en-US/firefox/addon/disable-autop...
It only defeats users.
It uses Cloudflare Turnstile captcha.
The service shows no captcha to logged in Kagi users, so you can just create a (trial) Kagi account.
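For context, the server side of Turnstile boils down to one verification call against Cloudflare's siteverify endpoint, roughly like this (the secret key and the widget token are placeholders):

  // Verify a Turnstile token server-side via Cloudflare's siteverify API.
  async function verifyTurnstile(token: string, secret: string): Promise<boolean> {
    const body = new URLSearchParams({ secret, response: token });
    const res = await fetch(
      "https://challenges.cloudflare.com/turnstile/v0/siteverify",
      { method: "POST", body },
    );
    const data = await res.json();
    if (!data.success) console.warn("turnstile error codes:", data["error-codes"]);
    return data.success === true;
  }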
(cf. https://en.wikipedia.org/wiki/Zalgo_text https://en.wikipedia.org/wiki/Zombie https://en.wikipedia.org/wiki/Crack_cocaine )
EDIT: the "Limitations" section report the use of LLMs without specifying the models used.
How the hell is everyone okay with it?
Why should I be "forbidden" from understanding a text written in a foreign language if it contains something inappropriate?
I don't know of any translation service that does censorship.
Not even Google does it, and Kagi was supposed to be less user-hostile than Google, not more.
Unacceptable.
Maps, for example, is basically unusable and has been for a while (at least in Germany).
Trying to search for an address often leads Kagi Maps to a different, random address.
Still love the search, but I'd love for Kagi to concentrate on one thing at a time.
Just my 2 cents as a paying Kagi customer.
Can also be found here:
https://kagi.com/maps
Kagi uses Apple Maps
>Kagi Translate is free for everyone.
That's nice!
In Kagi, not Google:
In Google, not Kagi:

They really must have copied Google, because like I said this was diffing exact strings, meaning that slight variations of how the languages are presented don't exist.

I just copied all of the values from the select element on the page (https://translate.kagi.com/) and there's only 243. Now I genuinely wonder if it's Pig Latin. https://news.ycombinator.com/item?id=42080562
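For anyone who wants to reproduce the comparison, the "diffing exact strings" approach is basically the snippet below, run in each page's devtools console (the selector is an assumption about the pages' markup):

  // Collect the option labels from a language <select> and diff two sets.
  function languageLabels(select: HTMLSelectElement): Set<string> {
    return new Set(Array.from(select.options, (o) => (o.textContent ?? "").trim()));
  }

  function onlyInFirst(a: Set<string>, b: Set<string>): string[] {
    return [...a].filter((x) => !b.has(x)).sort();
  }

  // On translate.kagi.com (assuming the first <select> is the language picker):
  const kagi = languageLabels(document.querySelector("select")!);
  console.log(kagi.size, "languages");
  // Dump [...kagi], gather the same set on translate.google.com, then compare:
  // onlyInFirst(kagi, google)  /  onlyInFirst(google, kagi)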
</ha-ha-only-serious>
Bing detects it as English but leaves it unchanged.
Google detects it as Telugu and gives a garbage translation.
ChatGPT detects it as Pig Latin and translates it correctly.
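For reference, the transformation these detectors are being tested on is just this rule - a minimal sketch of classic Pig Latin:

  // Classic Pig Latin: move the leading consonant cluster to the end and
  // append "ay"; vowel-initial words just get "way".
  function toPigLatin(word: string): string {
    const m = word.toLowerCase().match(/^([^aeiou]*)(.*)$/);
    if (!m) return word;
    const [, onset, rest] = m;
    return onset === "" ? word + "way" : rest + onset + "ay";
  }

  console.log("hello translate".split(" ").map(toPigLatin).join(" "));
  // -> "ellohay anslatetray"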