Readit News
netsroht commented on We're bringing Pebble back   repebble.com/... · Posted by u/erohead
hparadiz · 7 months ago
All this plus not being able to see any data while offline. Super useful when you're 13,000 feet up on a mountain somewhere.
netsroht · 7 months ago
Gadgetbridge recently added support for Garmin watches [1]. All data is stored on your Android phone with no internet connectivity required, and you can even export the SQLite DB, so you own your sensor data. The UI isn't as nice as Garmin's, but it does its job.
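Since the export is a plain SQLite file, a few lines of stdlib Python are enough to start exploring it. This is just a generic sketch, not a Gadgetbridge-specific API; the schema differs per device, so enumerating the tables is a sensible first step:

```python
# Minimal sketch for exploring an exported Gadgetbridge SQLite DB.
# No assumptions about the schema: it differs per device, so we just
# enumerate whatever tables the export actually contains.
import sqlite3

def list_tables(db_path):
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT name FROM sqlite_master "
            "WHERE type = 'table' ORDER BY name"
        ).fetchall()
    finally:
        conn.close()
    return [name for (name,) in rows]
```

From there, a regular `SELECT` on whichever sample table your device produces gets the raw sensor rows out.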

[1] https://gadgetbridge.org/basics/topics/garmin/

netsroht commented on Show HN: I built a LLM-powered Ask HN: like Perplexity, but for HN comments   hackersearch.net/ask... · Posted by u/jnnnthnn
coloneltcb · a year ago
This is very cool. I wouldn't make buying decisions off of this, but it is a good starting point to get a pulse on the developer zeitgeist on any given topic.
netsroht · a year ago
Always cool seeing stuff in this space.

Regarding "zeitgeist", about a year ago I built something similar called https://zeitgaist.ai which also incorporates other sources like Mastodon, Bluesky, some subreddits etc.

netsroht commented on Ask HN: How to do literal web searches after Google destroyed the “ ” feature?    · Posted by u/7moritz7
freediver · 2 years ago
Kagi already provides a way to search anonymously via a random email address (we do not really verify it or need it for anything) and Bitcoin/Lightning payment [1].

Since you are interested in cryptography, there is a discussion on Kagi feedback site along the same lines as your idea, about possible ways to achieve this without the need for cryptocurrency. [2]

[1] https://blog.kagi.com/accepting-paypal-bitcoin

[2] https://kagifeedback.org/d/653-completely-anonymous-searches...

netsroht · 2 years ago
Thanks for the links. Using a disposable email with crypto payments and occasionally generating a new account to unlink from previous searches could be a viable intermediate solution.

Also, I found this link [1] in the thread you mentioned. They seem to have implemented something like that.

[1] https://metager.de/keys/help/anonymous-token

netsroht commented on Ask HN: How to do literal web searches after Google destroyed the “ ” feature?    · Posted by u/7moritz7
EA-3167 · 2 years ago
I'm not searching for anything terrifyingly illegal, and for the rest Google and MS already scrape and compile every byte of data I've ever generated. Why would it suddenly be a problem when a more reliable and less vicious company is doing a fraction of that?

You have to understand that most of us aren't fighting some battle for "perfect privacy," I just want a search engine that works for me, rather than advertisers, at the level of the search results themselves.

netsroht · 2 years ago
I get your perspective. A lot of us just want a search engine that serves the user first, not advertisers, especially at the level of the results themselves. For many it's about function over strict privacy; everyone has their own privacy threshold.

But it's also about digital data autonomy. It's not just about avoiding surveillance over sensitive searches, but having control over our data's destiny. Even mundane data, in aggregate, can sometimes be used in ways we can't predict.

netsroht commented on Ask HN: How to do literal web searches after Google destroyed the “ ” feature?    · Posted by u/7moritz7
boredpudding · 2 years ago
Any system that can check balance, can link searches to a user. There's no way around it. In your case, Kagi would need to trust the client with the balance, which would be insecure.

There's only one solution, and that is that you need to put a bit of trust in Kagi. Compared to the major one, Google, you can choose between one that promises not to store data and one that promises it does (and does a lot).

It's always a bit sad that here on HN, when companies try to do better than bigger players, there's always people who think it isn't enough. It has to be absolutely impossibly perfect.

netsroht · 2 years ago
I'm not a cryptography expert, but from my research, shouldn't it be possible to verify quotas server-side using ZKPs? Essentially, the server doesn't need to know the specifics of the user's identity, just that they possess a valid token and haven't exceeded their quota.

You can use search engines like Google without being logged in. When combined with tools like uBlock Origin and Cookie AutoDelete, it becomes more challenging for them to build a singular profile about a user, especially one tied to payment methods such as credit cards.

I genuinely appreciate what Kagi is doing, and I'd absolutely be willing to pay for their service, because if you're not paying for a service, you're the product. I trust companies to uphold their privacy promises, but "Trust is good, but proof is better." ;)

netsroht commented on Ask HN: How to do literal web searches after Google destroyed the “ ” feature?    · Posted by u/7moritz7
TradingPlaces · 2 years ago
Kagi. Never have I been so happy to send someone $10 every month. When you become the customer, not the product, it’s amazing what can happen.
netsroht · 2 years ago
Being logged in while making search queries in search engines poses significant privacy risks. The searches can paint a comprehensive profile of the user, and these data often remain stored for extended periods. There's a chance this information might be shared with third parties. Coupled with other user data, these logged-in searches can pave the way for targeted advertising, sophisticated predictive analysis, and potential exploitation by governments or malicious entities. In the event of data breaches, the user's logged-in search histories can be exposed. Furthermore, users typically don't have clear insight into how their data is utilized when logged in.

I hope Kagi introduces an anonymous access feature. For instance, it could incorporate zero-knowledge proofs (ZKPs). These are cryptographic techniques where one party (the prover) can confirm to another (the verifier) that a claim is accurate without disclosing any additional information. This is especially beneficial for authentication scenarios where it's essential to avoid sharing extra details.

To implement zero-knowledge authentication for quota API access:

1. Token Creation:

- Each month, users receive a token tied to their identity and quota.

- The token can be split for use on multiple devices using cryptographic methods.

2. API Access:

- Clients present a zero-knowledge proof (ZKP) to confirm they have a valid token and haven't used up their quota. The server verifies this without seeing the exact details.

3. Client Synchronization:

- Each client tracks its quota usage.

- Synchronization can be peer-to-peer or through a centralized, encrypted server to prevent double spending of the quota.

4. Quota Renewal:

- Monthly, old tokens expire, and new tokens are issued.

Challenges:

- ZKPs can be resource-intensive.

- Token security is crucial; there should be a way to handle lost or compromised tokens.

- The system should prevent quota "double-spending" across devices.

- If a centralized server is used for synchronization, it should operate with encrypted data.

This way, Kagi would know who its customers are but not what searches they make.
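As an illustration of step 1 (and not anything Kagi has actually built), the token-creation step could use Chaum-style RSA blind signatures: the server signs a quota token without ever seeing which token the customer later presents. The function names are my own, and the key sizes are deliberately tiny and insecure; this is only a sketch of the protocol flow:

```python
# Toy sketch of token creation via Chaum-style RSA blind signatures.
# NOT a real implementation: key sizes are insecure, and a production
# scheme would use a vetted library and full-size keys.
import hashlib
import math
import secrets

# Server's RSA key pair (real systems would use 2048+ bit primes).
P, Q = 104729, 1299709
N = P * Q
E = 65537
D = pow(E, -1, (P - 1) * (Q - 1))  # private exponent

def blind(token: bytes):
    """Client: hash the token and blind it with a random factor r."""
    m = int.from_bytes(hashlib.sha256(token).digest(), "big") % N
    r = secrets.randbelow(N - 2) + 2
    while math.gcd(r, N) != 1:  # r must be invertible mod N
        r = secrets.randbelow(N - 2) + 2
    return (m * pow(r, E, N)) % N, r, m

def sign_blinded(blinded: int) -> int:
    """Server: sign the blinded value (debiting the paying account)
    without ever learning the underlying token hash m."""
    return pow(blinded, D, N)

def unblind(blind_sig: int, r: int) -> int:
    """Client: strip the blinding factor, yielding a plain signature
    on m that the server cannot link to the signing request."""
    return (blind_sig * pow(r, -1, N)) % N

def verify(m: int, sig: int) -> bool:
    """Server: later, accept any valid, not-yet-seen signature as one
    unit of quota."""
    return pow(sig, E, N) == m
```

A real deployment would still need token expiry for step 4 and a persistent set of spent tokens to handle the double-spending challenge.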

netsroht commented on AI is killing the old web   theverge.com/2023/6/26/23... · Posted by u/solalf
latexr · 2 years ago
> Is my new project also part of the problem?

Yes. In a few ways it’s considerably worse. Your website is referencing major world events, including war and freedom of press, by leveraging uninformed comments from the web (many of them themselves written by AI). That would be a problem even if your content weren’t auto-generated (random comments don’t make good or accurate journalism) but it’s worse when it’s churned at a high rate and introduces its own false interpretations.

netsroht · 2 years ago
Thanks for your input. This website is meant neither to compete with nor to replace regular journalism. What I'm trying to achieve is to break free from social media silos, where people are usually in a kind of bubble. No one can read this many comments, and people tend to read only the comments and conversations that align with their beliefs. Hence, I try to highlight different viewpoints along with contrasting opinions (across several social media platforms) to get an overview, not necessarily a fact-based one. These stories aren't supposed to push any agenda down anyone's throat.

Since this project just went live I'm still figuring out how to communicate that.

netsroht commented on AI is killing the old web   theverge.com/2023/6/26/23... · Posted by u/solalf
danielbln · 2 years ago
Why do all the images look deep fried?
netsroht · 2 years ago
Because I intentionally process them with a very rudimentary "cartoonizer" to distinguish them from regular news articles and to emphasize that these stories are not written by humans. I don't know yet whether this helps.
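The general idea behind such a filter fits in a few lines of Python. This is only a toy posterize-plus-edges pass on a grayscale pixel grid, not the site's actual pipeline, which isn't described here:

```python
# Toy "cartoonizer": posterize the tones and darken hard edges. The
# image is a plain list-of-lists of 0-255 grayscale values; the real
# processing used on the site is unknown, this only shows the idea.

def cartoonize(img, levels=4, edge_threshold=40):
    h, w = len(img), len(img[0])
    step = 256 // levels
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Posterize: snap each pixel to one of `levels` flat tones.
            tone = (img[y][x] // step) * step + step // 2
            # Crude edge detector: compare with right/bottom neighbors.
            gx = abs(img[y][x] - img[y][min(x + 1, w - 1)])
            gy = abs(img[y][x] - img[min(y + 1, h - 1)][x])
            # Draw black outlines where the gradient is steep.
            out[y][x] = 0 if max(gx, gy) > edge_threshold else tone
    return out
```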
netsroht commented on AI is killing the old web   theverge.com/2023/6/26/23... · Posted by u/solalf
netsroht · 2 years ago
Is my new project [0] also part of the problem? I'm still unsure myself because LLMs also allow us to process data in unprecedented ways. In my specific case, I auto generate stories to highlight different view points based on what people are saying about hot controversial topics on social media.

What's your opinion?

[0] https://zeitgaist.social

netsroht commented on Tell HN: Cloudflare verification is breaking the internet    · Posted by u/statquontrarian
statquontrarian · 2 years ago
No news yet; however, one of the other HN comments suggested creating a new Firefox profile using about:profiles and that seems to have worked (whereas clearing cache/cookies didn't work), although I'm still trying to find the root cause because it's going to be annoying migrating to the new profile. I think the deeper issue stands that the process to find the cause of why CloudFlare is blocking large parts of the internet for me is too opaque, so I hope CloudFlare has a broader solution such as a diagnostic code or detailed help page.

Right now I'm reviewing about:config for non-standard settings. I did find that I did set general.useragent.override at some point and I forgot about it; however, unsetting it didn't help. I went through all other non-default settings and haven't found anything yet.

netsroht · 2 years ago
I regularly get these infinite captchas on Firefox as well. A couple of days ago I noticed that switching to a different Firefox container lets me pass the captchas.

u/netsroht

Karma: 53 · Cake day: May 19, 2022
About
https://zeitgaist.ai

Tap into the collective knowledge of the online hive mind

Currently building ZEITGAIST:

> ZEITGAIST is a chatbot trained to answer questions about current global financial, cultural, and intellectual climate. It is unique, as its answers are informed not just by web sources but also by what people are saying on social media. It allows users to explore current global conversations in a truly unique way.

E-Mail: netsroht[at)zeitgaist.ai
