Readit News
_QrE commented on The Missing Protocol: Let Me Know   deanebarker.net/tech/blog... · Posted by u/deanebarker
_QrE · 12 days ago
Maybe I'm misunderstanding something, but you can add filters to RSS feeds. What's proposed is pretty much just RSS, except scoped to one specific item. Yes, it's more work on your side, but asking the creator to manage updates for whatever single thing any random person happens to be interested in is pretty unrealistic, especially since the people asking for this are explicitly not interested in everything else about the creator.

> There’s no AI to this. No magic. No problems to be solved.

Why would you not involve yourself in the new hotness? You _can_ put AI into this. Instead of using some expression to figure out whether a new article links back to the previous ones in the series or has a matching title, you can have a local agent check your RSS feed and tell you whether it's what you're looking for, or drop the item otherwise. For certain creators this might even be a sensible choice, depending on how purple their prose is and their preferred website setup.
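For what it's worth, the "expression" route is only a few lines. A rough Python sketch, assuming the feedparser package (the feed URL and title pattern are made up; swap the regex for a call to a local model and you have the AI version):

    # Watch a feed for new parts of a specific series, using a plain regex.
    # The URL and pattern below are placeholders for illustration.
    import re
    import feedparser

    FEED_URL = "https://example.com/feed.xml"            # hypothetical feed
    SERIES = re.compile(r"missing protocol.*part \d+", re.IGNORECASE)

    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries:
        if SERIES.search(entry.get("title", "")):
            print(entry.title, entry.link)   # or push a notification, email, etc.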

_QrE commented on Show HN: TheProtector – Linux Bash script for the paranoid admin on a budget   github.com/IHATEGIVINGAUS... · Posted by u/lotussmellsbad
_QrE · a month ago
Neat, but isn't packing all this stuff into a bash script overkill? You can pretty easily install and configure some good tools (e.g. crowdsec, rkhunter, an SSH tarpit, or whatever) to cover each of the categories rather than have a bunch of half-measures.

Also, you're calling this TheProtector, but internally it seems to be called ghost sentinel?

> local update_url="https://raw[dot]githubusercontent[dot]com/your-repo/ghost-se..."

_QrE commented on Go is a good fit for agents   docs.hatchet.run/blog/go-... · Posted by u/abelanger
_QrE · 3 months ago
I'm not sure how valid most of these points are. A lot of the latency in an agentic system is going to be the calls to the LLM(s).

From the article: """ Agents typically have a number of shared characteristics when they start to scale (read: have actual users):

    They are long-running — anywhere from seconds to minutes to hours.
    Each execution is expensive — not just the LLM calls, but the nature of the agent is to replace something that would typically require a human operator. Development environments, browser infrastructure, large document processing — these all cost $$$.
    They often involve input from a user (or another agent!) at some point in their execution cycle.
    They spend a lot of time awaiting i/o or a human.
"""

No. 1 doesn't really point to one language over another, and all the rest show that execution speed and server-side efficiency aren't very relevant. People ask agents a question and do something else while the agent works. If the agent takes a couple of seconds longer because you've written it in Python, I doubt that anyone would care (in the majority of cases at least).

I'd argue Python is a better fit for agents, mostly because of the mountain of AI-related libraries and support that it has.

> Contrast this with Python: library developers need to think about asyncio, multithreading, multiprocessing, eventlet, gevent, and some other patterns...

Agents aren't that hard to make work, and you can get most of the use (and paying users) without optimizing every last thing. And besides, the mountain of support you have for whatever workflow you're building means that someone has probably already tried building at least part of what you're working on, so you don't have to go in blind.
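To make the latency point concrete, here's a minimal asyncio sketch - call_llm is a hypothetical stand-in for an HTTP call to a model API. Essentially all of the wall-clock time is spent awaiting the response, so the host language's raw speed barely registers:

    # Several long-running agent steps overlap on one thread while awaiting I/O.
    # call_llm is a placeholder for a real model API call; the sleep stands in
    # for network + inference time, which dominates everything else.
    import asyncio
    import time

    async def call_llm(prompt: str) -> str:
        await asyncio.sleep(3)                 # pretend this is the model call
        return f"answer to: {prompt}"

    async def agent_step(task: str) -> str:
        start = time.perf_counter()
        answer = await call_llm(task)          # dominant cost
        print(f"{task!r} finished in {time.perf_counter() - start:.1f}s")
        return answer

    async def main() -> None:
        tasks = ["summarize report", "draft reply", "classify ticket"]
        await asyncio.gather(*(agent_step(t) for t in tasks))

    asyncio.run(main())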

_QrE commented on Ask HN: Startup getting spammed with PayPal disputes, what should we do?    · Posted by u/june3739
sky2224 · 3 months ago
What have you switched to that isn't PayPal and also doesn't have this issue?
_QrE · 3 months ago
An easy example is Stripe. You can enable 3DS, and you can listen for 'early_fraud_warning' events on a webhook to refund users and close accounts, avoiding chargebacks and all the associated fees and reputation penalties.
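Rough sketch of the webhook side, assuming Flask and the official stripe library (the route path, env var names, and the account-closing step are placeholders; double-check the event and field names against Stripe's current docs):

    # Listen for Radar early fraud warnings and refund proactively, so the
    # issuer sees a refund instead of a chargeback.
    import os
    import stripe
    from flask import Flask, request, abort

    stripe.api_key = os.environ["STRIPE_API_KEY"]
    webhook_secret = os.environ["STRIPE_WEBHOOK_SECRET"]

    app = Flask(__name__)

    @app.post("/stripe/webhook")
    def stripe_webhook():
        try:
            # Verifies the Stripe-Signature header so spoofed events are rejected.
            event = stripe.Webhook.construct_event(
                request.data, request.headers.get("Stripe-Signature", ""), webhook_secret
            )
        except Exception:
            abort(400)

        if event["type"] == "radar.early_fraud_warning.created":
            warning = event["data"]["object"]
            stripe.Refund.create(charge=warning["charge"])
            # ...then flag or close the offending account in your own system.

        return "", 200
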
_QrE commented on Launch HN: Nomi (YC X25) – Copilot for Sales    · Posted by u/ethansafar
swanYC · 3 months ago
It would be super insightful to get the "more" discussed, to avoid any strange feeling.

We currently have the following:

"Nomi.so is recording the call and assisting [Name] representative"

_QrE · 3 months ago
If I am in a call with someone using Nomi, can I send a message in the call or wherever to disable it, or will I have to ask the person using it to turn it off?
_QrE commented on You Don't Need Re-Ranking: Understanding the Superlinked Vector Layer   superlinked.com/vectorhub... · Posted by u/softwaredoug
nostrebored · 3 months ago
If this were true, and initial candidate retrieval were a solved problem, companies where search is revenue-aligned wouldn't have teams of very well-paid people looking for marginal improvements here.

Treating BM25 as a silver bullet is just as strange as treating vector search as the "true way" to solve retrieval.

_QrE · 3 months ago
I don't mean to imply that it's a solved problem; all I'm saying is that in a lot of cases, the "weak initial retrieval" assertion stated by the article is not true. And if you can get a long way using what has now become the industry standard, there's not really a case to be made that BM25 is bad/unsuited, unless the improvement you gain from something more complex is more than just marginal.
_QrE commented on You Don't Need Re-Ranking: Understanding the Superlinked Vector Layer   superlinked.com/vectorhub... · Posted by u/softwaredoug
janalsncm · 3 months ago
I don’t think the author understands the purpose of reranking.

During vector retrieval, we retrieve documents in sublinear time from a vector index. This allows us to reduce the number of documents from potentially billions to a much smaller number. The purpose of re-ranking is to allow high-powered models to evaluate docs much more closely.

It is true that we can attempt to distill that reranking signal into a vector index. Most search engines already do this. But there is no replacement for using the high-powered, behavior-based models in reranking.

_QrE · 3 months ago
I agree.

> "The real challenge in traditional vector search isn't just poor re-ranking; it's weak initial retrieval. If the first layer of results misses the right signals, no amount of re-sorting will fix it. That's where Superlinked changes the game."

Currently a lot of RAG pipelines use the BM25 algorithm for retrieval, which is very good. You then use an agent to rerank results only after you've got your top 5-25 candidates, which is not that slow or expensive if you've done a good job with your chunking. Using metadata is also not really a 'new' approach (well, in LLM time at least) - it's more about what metadata you use and how you use it.
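Roughly that shape of pipeline, sketched with the rank_bm25 package for the first stage (the rerank step is a placeholder for whatever model or agent you'd actually call):

    # Stage 1: cheap lexical retrieval over the whole corpus with BM25.
    # Stage 2: hand only the top few candidates to an expensive reranker.
    from rank_bm25 import BM25Okapi

    corpus = [
        "BM25 is a bag-of-words ranking function used by search engines.",
        "Vector search retrieves documents by embedding similarity.",
        "Reranking applies a more expensive model to a small candidate set.",
        "Chunking strategy strongly affects retrieval quality in RAG pipelines.",
    ]

    bm25 = BM25Okapi([doc.lower().split() for doc in corpus])

    query = "how does reranking work in retrieval"
    candidates = bm25.get_top_n(query.lower().split(), corpus, n=3)

    def rerank_with_llm(query: str, docs: list[str]) -> list[str]:
        # Placeholder: score each candidate with an LLM prompt and sort.
        # Only the handful of BM25 candidates ever reach this step.
        return docs

    print(rerank_with_llm(query, candidates))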

_QrE commented on Tower Defense: Cache Control   jasonthorsness.com/26... · Posted by u/jasonthorsness
victorbjorklund · 3 months ago
Bunny is never free (other than a 14-day free trial). But it is dirt cheap, with a minimum cost of 1 USD/month.
_QrE · 3 months ago
You're right, my bad. I was looking at the CDN tab and it rounds the traffic cost to 0.
_QrE commented on Tower Defense: Cache Control   jasonthorsness.com/26... · Posted by u/jasonthorsness
tikotus · 3 months ago
I've been running a handful of low traffic services on a $10 VPS from DigitalOcean for years. I'm currently in a situation where a thing I've made might blow up, and I had to look into CDNs just in case. It's just static content, but updates once per day (it's a daily logic puzzle).

I must admit I had no idea what goes into running a CDN. First I had a look at DO's Spaces object storage, which has CDN support. But apparently it isn't really the right tool for serving a static website; it's meant for serving large files. For example, I couldn't make it serve CSS files with the correct MIME type without some voodoo, so I had to conclude I wasn't doing the right thing.

Then I looked at DO's App Platform. But that seemed like overkill for just sharing some static content. It wants me to rely on an external service like GitHub to reliably serve the content. I already rely on DO; I don't want to additionally rely on something else too. It seems like I could also use DO's docker registry. What? To serve static content on a CDN I need to create a whole container running a whole server? And what do I need to take into consideration when I want to update the content once per day, simultaneously for all users? It's easy when it's all on my single VPS (with caching disabled for that URL), but I actually have no idea what happens with the docker image once it's live on the "app platform". This is getting way more complex than I was hoping for. Should I go back to the Spaces solution?

Right now I'm in limbo. On one hand I want to be prepared in case I get lucky and my thing goes "viral". On the other hand my tiny VPS is running at 2% CPU usage with already quite a few users. And if I do get lucky, I should be able to afford doubling my VPS capacity. But what about protection from DDoS? Anything else I should worry about? Why is everyone else using a CDN?

And I don't even have caching! An article like this puts my problem to shame. I just want to serve a couple of plain web files and I can't decide what I should do. This article really shows how quickly the problem starts ballooning.

_QrE · 3 months ago
It sounds like you might be looking in the wrong place. There are services like bunny.net and Cloudflare CDN (I'm not affiliated with either, but I use the former) that are really easy to set up and configure if you've built your site properly (edit for clarification: if you have clearly defined static content, and/or you're using some build system). You don't want to 'run' a CDN, you want to use one.

Configuration depends a lot on the specifics of your stack. For Svelte, you can use an adapter (https://svelte.dev/docs/kit/adapters) that will handle pointing to the CDN for you.

Cloudflare's offering is free, and bunny.net is also probably going to be free for you if you don't have much traffic. CDNs are great insurance for static sites, because they can handle all the traffic you could possibly get without breaking a sweat.

_QrE commented on Show HN: Undetectag, track stolen items with AirTag   undetectag.com/... · Posted by u/pompidoo
wanderer2323 · 3 months ago
You have also developed a device that allows people to use AirTags for stalking.
_QrE · 3 months ago
This. Normal AirTags are just fine for tracking your stuff.

> "(thiefs use apps to locate AirTags around, and AirTags will warn the thief if an unknown AirTag is travelling with them, for example if they steal your car)"

The reason this was introduced is exactly because people used AirTags to stalk others. Advertising that your product turns that off is basically targeting that specific demographic.
