Readit News
Szpadel commented on Claude for Chrome   anthropic.com/news/claude... · Posted by u/davidbarker
biggestfan · a day ago
According to their own blog post, even after mitigations, the model still has an 11% attack success rate. There's still no way I would feel comfortable giving this access to my main browser. I'm glad they're sticking to a very limited rollout for now. (Sidenote, why is this page so broken? Almost everything is hidden.)
Szpadel · a day ago
Well, at least they're honest about it and don't try to hide it in any way. They probably want to gather more real-world data for training and validation; that's why the limited release. OpenAI has had a browser agent for some time already, but I haven't heard about any security considerations. I bet they have the same issues.
Szpadel commented on Starlink announced a $5/month plan that gives unlimited usage at 500kbits/s   twitter.com/ID_AA_Carmack... · Posted by u/tosh
StrangeDoctor · 11 days ago
What exactly is the point of abusing the word “unlimited” here?

You’ll never be able to go over 168GB, let them call it the 169.69 plan

Szpadel · 11 days ago
So the only true unlimited internet is the one that has infinite bandwidth.
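For reference, the ~168 GB ceiling mentioned above follows directly from the rate cap:

500 kbit/s = 62.5 kB/s
62.5 kB/s × 86,400 s/day ≈ 5.4 GB/day
5.4 GB/day × 31 days ≈ 167 GB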
Szpadel commented on Show HN: Real-time privacy protection for smart glasses   github.com/PrivacyIsAllYo... · Posted by u/tash_2s
sillystuff · 14 days ago
I appreciate your intent, but...

This does nothing to alleviate my privacy concerns, as a bystander, about someone rudely pointing a recording camera at me. The only thing that alleviates these concerns about "smart" glasses wearers recording video, is not having "smart" glasses wearers. I.e., not having people rudely walking around with cameras strapped to their faces recording everyone and everything around them. I can't know/trust that there is some tech behind the camera that will protect my privacy.

A lot of privacy invasions have become normalized and accepted by the majority of the population. But, I think/hope a camera strapped to someone's face being shoved into other peoples' faces will be a tough sell. Google Glass wearers risked having the camera ripped off their faces / being punched in the face. I expect this will continue.

Perhaps your tech would have use in a more controlled business/military environment? Or, to post-process police body camera footage, to remove images of bystanders before public release?

Szpadel · 13 days ago
I agree with all you said, but I don't believe there is any way you could protect yourself from being recorded.

The only way for this to work is legal regulation, but that can easily be dismissed as impossible to implement. So this is a good PoC to show what is possible and a way to discover how it could function. Without such an implementation, I don't believe you could convince anybody to start working on such regulations.

Szpadel commented on Show HN: Real-time privacy protection for smart glasses   github.com/PrivacyIsAllYo... · Posted by u/tash_2s
Szpadel · 13 days ago
I'm curious how you handle the case when the camera sees multiple people and you detect consent.

If I understand correctly how this works, consent can come from the camera operator and be attributed to the recorded person.

Szpadel commented on Ask HN: How can ChatGPT serve 700M users when I can't run one GPT-4 locally?    · Posted by u/superasn
Szpadel · 19 days ago
AFAIK the main trick is batching: a GPU can do the same work on a whole batch of data, so you can serve many requests at the same time much more efficiently.

Batching requests increases latency to first token, so it's a tradeoff, and MoE makes it trickier because the experts are not equally used.

There was a great article somewhere explaining DeepSeek's efficiency in great detail (basically the latency/throughput tradeoff).
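A toy model of that tradeoff; the step time is an illustrative assumption, not a measurement. Decode steps are largely memory-bandwidth bound, so one step over a whole batch costs roughly the same as over a single request until compute saturates: throughput grows almost linearly with batch size, while the time spent assembling a larger batch is what pushes up time to first token.

# Toy throughput model for batched LLM decoding (illustrative numbers only).
STEP_MS = 40  # assumed wall time of one decode step, regardless of batch size

for batch_size in (1, 4, 16, 64):
    total = batch_size * 1000 / STEP_MS   # tokens/s across all requests
    per_request = 1000 / STEP_MS          # tokens/s seen by any one request
    print(f"batch={batch_size:3d}: {total:6.0f} tok/s total, {per_request:.0f} tok/s per request")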

Szpadel commented on Browser extension and local backend that automatically archives YouTube videos   github.com/andrewarrow/st... · Posted by u/fcpguru
Szpadel · 25 days ago
I created something similar in concept but with a different goal: I wanted to be able to watch videos with SponsorBlock on an iPad, ideally using Plex.

I found a self-hosted solution like this one, but I was very dissatisfied with how it worked.

On the other hand, I wanted to check out the loco.rs framework, so I decided to implement my own solution.

Basically, you can add channels/playlists from the many platforms yt-dlp supports, select what should be cut out using SponsorBlock, and choose how many days of videos to keep (videos older than that are automatically deleted).

if you are interested, you can check it out: https://github.com/Szpadel/LocalTube
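For a sense of what such a tool automates per channel, the one-shot yt-dlp equivalent might look like this (the flags are real yt-dlp options; the channel URL and paths are placeholders):

yt-dlp --sponsorblock-remove sponsor,selfpromo \
  --download-archive archive.txt \
  -o "%(channel)s/%(title)s.%(ext)s" \
  "https://www.youtube.com/@SomeChannel/videos"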

Szpadel commented on OpenAI's ChatGPT Agent casually clicks through "I am not a robot" verification   arstechnica.com/informati... · Posted by u/joak
theptip · a month ago
This will be one of the big fights of the next couple years. On what terms can an Agent morally and legally claim to be a user?

As a user I want the agent to be my full proxy. As a website operator I don’t want a mob of bots draining my resources.

Perhaps a good analogy is Mint and the bank account scraping they had to do in the 2010s, because no bank offered APIs with scoped permissions. Lots of customers complained, and after Plaid made it big business, eventually they relented and built the scalable solution.

The technical solution here is probably some combination of offering MCP endpoints for your actions, and some direct blob store access for static content. (Maybe even figuring out how to bill content loading to the consumer so agents foot the bill.)

Szpadel · 25 days ago
I believe this is a non-issue: you place a captcha to make bypassing it much more costly and abuse less profitable.

LLMs are much more expensive to run than any website is to serve, so you shouldn't expect a mob of bots.

Also keep in mind that these no-interaction captchas use behavioral data collected in the background, and you usually have sensitivity levels configured. Depending on your use case, you might require the user to prove they're not a bot, or it might be good enough that they just don't provide evidence of being one.

Bypassing these no-interaction captchas can also be purchased as a service; AFAIK they basically reuse someone else's session for the bypass.
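To make the sensitivity-levels point concrete, here is a minimal sketch of a server-side check against reCAPTCHA v3's documented siteverify endpoint; the 0.5 threshold and the surrounding policy are assumptions you'd tune per use case:

import requests

def human_enough(token: str, secret: str, min_score: float = 0.5) -> bool:
    # reCAPTCHA v3 scores a request from 0.0 (likely bot) to 1.0 (likely
    # human) using behavioral signals collected in the background.
    resp = requests.post(
        "https://www.google.com/recaptcha/api/siteverify",
        data={"secret": secret, "response": token},
        timeout=5,
    ).json()
    return resp.get("success", False) and resp.get("score", 0.0) >= min_score

A strict flow (say, payments) might demand a higher threshold plus a fallback challenge; a lenient one (say, search) might accept a much lower score.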

Szpadel commented on Lume: The Robotic Lamp   twitter.com/aaronistan/st... · Posted by u/pbardea
JCBird1012 · a month ago
I know the intro video is a render - but the folding of the purple sheet looks so... unnatural (to put it lightly) - like the cloth physics do exactly what is convenient for the video and don't reflect reality.
Szpadel · a month ago
IMO this looks like a reversed video.
Szpadel commented on Fast and cheap bulk storage: using LVM to cache HDDs on SSDs   quantum5.ca/2025/05/11/fa... · Posted by u/todsacerdoti
GauntletWizard · a month ago
The same goes for ZFS; there's provisioning to make a "zil" device (ZFS Intent Log, basically the journal). ZFS is a little nicer in that this journal is explicitly disposable: if you lose your ZIL device, you lose any writes since its horizon, but you don't lose the whole array.

The next step up is building a "metadata" device, which stores the filesystem metadata but not data. This is dangerous in the way the ext4 journal is; lose the metadata, and you lose everything.

Both are massive speedups. When doing big writes, a bunch of spinning rust can't achieve full throughput without an SSD ZIL. My 8+2 array can write nearly two gigabytes per second, but it's abysmal (roughly the speed of a single drive) without one.

Likewise, a metadata device can make the whole filesystem feel as snappy as an SSD, but it's unnecessary if you have enough cache space; ZFS prefers to keep metadata cached, so if your metadata fits on your cache SSD, most of it will stay loaded.
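For concreteness, the two vdev classes described above are attached like this (real zpool syntax; pool and device names are placeholders):

zpool add tank log /dev/nvme0n1                            # SLOG: holds the ZIL
zpool add tank special mirror /dev/nvme1n1 /dev/nvme2n1    # metadata vdev

Mirroring the special vdev is the important part: unlike the log, losing it loses the pool.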

Szpadel · a month ago
I just want to mention that the ZIL only speeds up sync writes: the syscall returns once the data is written to the ZIL, even though the write might still be in progress on the slower storage.

The ZIL is also basically write-only storage, so an SSD without very significant over-provisioning will die quickly (you only read from the ZIL after an unclean shutdown).

If you don't really care about the latest version of a file (i.e., the risk of losing recent changes is acceptable), you can set sync=disabled for that dataset and get great performance without a ZIL.
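That setting is applied per dataset (real zfs syntax; the dataset name is a placeholder):

zfs set sync=disabled tank/scratch   # sync writes return immediately; a crash may drop the last few seconds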

Szpadel commented on Fast and cheap bulk storage: using LVM to cache HDDs on SSDs   quantum5.ca/2025/05/11/fa... · Posted by u/todsacerdoti
Szpadel · a month ago
Something that people forget about RAID1 is that it only protects against catastrophic disk failure.

This means your drive needs to be actually dead for RAID's protection to kick in, and this is usually the case.

The problem is when a drive starts corrupting the data it reads or writes. In that case, RAID has no way to know and can even corrupt data on the healthy drive (data is read corrupted and then written back to both drives).

The issue is that there are 2 copies of the data and RAID has no way of telling which one is correct, so it basically flips a coin and picks one of them, even if the filesystem knows the content makes no sense.

That's basically the biggest advantage of filesystems like ZFS or btrfs that manage RAID themselves: they have checksums, so they know which copy is valid, can recover, and can tell you that one drive appears healthy but is corrupting data, so you probably want to replace it.
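On ZFS, for example, you can ask the pool to verify every block and report which device returned bad data (real commands; the pool name is a placeholder):

zpool scrub tank       # re-read every block and verify checksums
zpool status -v tank   # CKSUM column counts per-device checksum errors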

u/Szpadel

Karma: 801 · Cake day: April 26, 2016