Readit News
Implicated commented on Netflix to Acquire Warner Bros   about.netflix.com/en/news... · Posted by u/meetpateltech
chrz · 9 days ago
I have no idea how to even display a complete list of movies.
Implicated · 9 days ago
On the native interface(s), you surely can't.
Implicated commented on I made a quieter air purifier   chillphysicsenjoyer.subst... · Posted by u/crescit_eundo
tguvot · 15 days ago
The Coway filter posted above - you can't really hear it from 10 ft when it runs on low speed (it has a sensor and auto-adjusts).

I also have an IQAir which, according to reviews, is quiet at low speed. In my experience this "quiet" sounds like an airplane (I got it replaced once; apparently it's just the way it is).

Implicated · 14 days ago
I think you're missing his point entirely. The problem with these retail purifiers is that you either get quiet or effective. You don't get both.

Sure, you can't hear it from 10ft away - but how much air is it moving at that setting?

I have various configurations of these PC-fan setups, and the Arctic P14 Pro fans (you can get 5 for $32 on Amazon) are honestly wildly effective, and they're designed for applications with some static pressure (radiators and such).

So we're back to effective or quiet: for now, you're only going to get both with the PC fans.

Implicated commented on I made a quieter air purifier   chillphysicsenjoyer.subst... · Posted by u/crescit_eundo
ruralfam · 14 days ago
I agree completely re: MERV-13 == optimal solution. But the word "pragmatic" hits me hard. MERV-13 filters, when new/clean, start out with pretty restrictive flow. They catch a lot of particles, so restriction increases rapidly. At some point the CFM loss means the filter is much less optimal. All the studies I read used new filters, smoke-filled rooms, and a day's treatment. It is obviously impractical and very, very expensive to replace a MERV-13 filter every few days. There are no reusable MERV-13 filters that I could find. If there is a study about MERV-13 effectiveness over 30 days vs. MERV-8, I would love to see it. I would love to use MERV-13, but just cannot get my head around how it is a practical, affordable solution across years and years of use with, let's say, a month between filter renewals. Let me know if you have good insights, as I am pretty worn out researching this. Thx, RF
Implicated · 14 days ago
I've been down this rabbit hole for a while now and sadly can't seem to find the article about Filtrete vs. the others, but some people have tested the 'load' on these furnace filters, and the Filtrete filters far exceeded everyone else's in terms of airflow as the filters loaded up.

Re: filter costs - stock up when Costco has them on sale, which seems like every few months. They've got Filtrete 2500 (MERV 14) at 3 filters for ~$35, if I remember correctly. I use them in my CR boxes and the ones I built for family (which I give them with a 3-pack of new filters, along with instructions to refill during Costco sales).

I built one for a local stray cat rescue, where it literally sits in the middle of the living space for 10+ cats. It's been 4 months since they started using it, and the filters look quite dirty, but the airflow is surprisingly still very good (4x Noctua NF-F12 iPPC 3000 fans and 4x 16"x25"x1" Filtrete 2500 filters).

Here's some info re: which MERV levels work best with various fan combinations (looks like if you're going to go with a higher MERV, you'll need the static pressure to be able to keep pulling air through with any reasonable flow once the filters get loaded up with stuff) - https://www.cleanairkits.com/blogs/news/what-happens-to-cadr...
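
To put rough numbers on that static-pressure point, here's a back-of-envelope sketch in Python. Every figure in it is a made-up illustration of the trade-off, not a measurement of any particular fan or filter:

```
# Back-of-envelope: clean air delivered by a fan/filter box.
# All figures below are hypothetical illustrations, not measurements.

def effective_cadr(free_air_cfm, flow_retained, capture_efficiency):
    """Effective CADR ~= airflow actually pulled through the filter
    times the fraction of particles captured in a single pass."""
    return free_air_cfm * flow_retained * capture_efficiency

# A static-pressure-rated PC fan holds more of its rated airflow as the
# filter loads; a retail unit on its quiet setting starts with less flow
# and loses more of it once the filter is restrictive.
print(effective_cadr(70, 0.80, 0.85))  # PC fan, fresh filter
print(effective_cadr(70, 0.60, 0.90))  # PC fan, loaded filter
print(effective_cadr(40, 0.50, 0.95))  # retail purifier on quiet
```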

Implicated commented on Meow.camera   meow.camera/... · Posted by u/southwindcg
catlover13000 · 2 months ago
Don't quote me on this, but there are also 2 documented feeder types, and I believe the second one, the shelter feeder, is the one that came to the U.S. The Purrrr owners are very kind and will probably give you information, or at least a date for when they might become more widely available. Contact info here: https://www.hipurrrr.com/
Implicated · 2 months ago
Thank you!
Implicated commented on AWS multiple services outage in us-east-1   health.aws.amazon.com/hea... · Posted by u/kondro
jewba · 2 months ago
500 billion events. It always blows my mind how many people use AWS.
Implicated · 2 months ago
I know nothing. But I'd imagine the number of 'events' generated during this period of downtime will eclipse that number every minute.
Implicated commented on Meow.camera   meow.camera/... · Posted by u/southwindcg
thr0w__4w4y · 2 months ago
I noticed all the feeders seem to be similar or the same. I'm in California; I feed 3 strays in an area where the average outdoor cat's lifespan is about 4.5 years (fires, traffic, hawks, coyotes, evil people).

Right now my process is very manual, but it's a labor of love. All 3 cats only show up after dark. Ring Stick Up Cam, bowls out (I clean them every day), run out on a motion alert, etc. Problem is, I also have raccoons, opossums, and skunks. (I'm not in the L.A. high-rises; I'm close to the ocean.)

Where can such feeders be purchased now (US customer)? Thank you!

Implicated · 2 months ago
Similarly, I've got 7 regulars and 4 to 5 more occasional visitors that I've been feeding and fixing (11 of them fixed so far), but there are issues with raccoons and opossums - I'm pretty sure they stole a whole litter, and they taint the water and eat any food available.

My plan is something similar to this feed-the-cats thing - except with a Twitch live stream of an FPV water turret with which they can "deter" those unwanted visitors.

Implicated commented on Claude Sonnet 4.5   anthropic.com/news/claude... · Posted by u/adocomplete
kelvinjps · 3 months ago
How would you have written the prompt?
Implicated · 3 months ago
Tbh, I don't really understand it well enough to be able to give a response here. But here's a real prompt I just used on a project, copy/pasted:

```

Something that seems to have been a consistent gotcha when working with LLMs on this project is that there's no specific `placement` column on the table that holds the 'results' data. Our race_class_section_results table has its rows created in placement order - so placement is inferred via the order relative to other records in the same race_class_section. But this seems to complicate things quite a bit at times when we have a specific record/entry and want to know its placement - we have to query the rest of them and/or include joins and other complications if we want to filter results by placement, etc.

Can you take a look at how this is handled, both with the querying of existing data by views/Livewire components/etc. and how we're storing/creating the records via the import processes, and give me a determination on whether you think it should be refactored to include a `placement` column on the database? I think right now we've got 140,000 or so records on that table, and it's got nearly 20 years' worth of race records, so I don't think we need to be too concerned with the performance of the table or added storage or anything. Think very hard, understand that this would be a rather major refactor of the codebase (I assume, since it's likely used/referenced in _many_ places - thankfully, most of the complicated queries it would be found in would be easily identified by just doing a search of the codebase for the race_class_section_results table) and determine if that would be worth it for the ease of use/query simplification moving forward.
```

This comes with a rather developed CLAUDE.md that references other .md documents outlining various important aspects of the application, to be brought into context when working in those areas.

This prompt was made in planning mode - the LLM will then dig into the code/application to understand things and, if needed, ask questions and give options to weigh before returning with a 'plan' on how to approach it. I then iterate on that plan with it before eventually accepting a plan that it will then begin work on.
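
For a concrete picture of the trade-off that prompt asks Claude to weigh, here's a minimal sketch in Python/SQLite. The real project is Laravel/Livewire; everything here besides the race_class_section_results table name is a hypothetical stand-in:

```
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE race_class_section_results (
    id INTEGER PRIMARY KEY,           -- rows inserted in placement order
    race_class_section_id INTEGER,
    rider TEXT                        -- hypothetical column for illustration
);
INSERT INTO race_class_section_results (race_class_section_id, rider) VALUES
    (1, 'alice'), (1, 'bob'), (1, 'carol'),
    (2, 'dave'), (2, 'erin');
""")

# Today: placement is inferred, so any placement question drags in a
# window function (or joins) over every row in the same section.
print(conn.execute("""
    SELECT rider,
           ROW_NUMBER() OVER (
               PARTITION BY race_class_section_id ORDER BY id
           ) AS placement
    FROM race_class_section_results
""").fetchall())

# After the proposed refactor: backfill a denormalized column once...
conn.execute("ALTER TABLE race_class_section_results ADD COLUMN placement INTEGER")
conn.execute("""
    UPDATE race_class_section_results SET placement = (
        SELECT COUNT(*) FROM race_class_section_results AS r2
        WHERE r2.race_class_section_id = race_class_section_results.race_class_section_id
          AND r2.id <= race_class_section_results.id)
""")

# ...and "who won each section" becomes a plain indexable filter.
print(conn.execute(
    "SELECT rider FROM race_class_section_results WHERE placement = 1"
).fetchall())  # [('alice',), ('dave',)]
```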

Implicated commented on Claude Sonnet 4.5   anthropic.com/news/claude... · Posted by u/adocomplete
SirMaster · 3 months ago
But isn't the end goal to be able to get useful results without so much prompting?

I mean in the movies for example, advanced AI assistants do amazing things with very little prompting. Seems like that's what people want.

To me, the fact that so many people basically say "you are prompting it wrong" is a knock against the tech and the model. If people want to say that these systems are so smart at what they can do, then they should strive to get better at understanding the user without needing tons of prompts.

Do you think his short prompt would be sufficient for a senior developer? If it's good enough for a human, it should be good enough for an LLM, IMO.

I don't want to take away the ability to use tons of prompting to get the LLM to do exactly what you want, but I think that the ability for an LLM to do better with less prompting is actually a good thing and a useful metric.

Implicated · 3 months ago
> But isn't the end goal to be able to get useful results without so much prompting?

See below about context.

> I mean in the movies for example, advanced AI assistants do amazing things with very little prompting. Seems like that's what people want.

Movies != real life

> To me, the fact that so many people basically say "you are prompting it wrong" is a knock against the tech and the model. If people want to say that these systems are so smart at what they can do, then they should strive to get better at understanding the user without needing tons of prompts.

See below about context.

> Do you think his short prompt would be sufficient for a senior developer? If it's good enough for a human, it should be good enough for an LLM, IMO.

Context is king.

> I don't want to take away the ability to use tons of prompting to get the LLM to do exactly what you want, but I think that the ability for an LLM to do better with less prompting is actually a good thing and a useful metric.

What I'm understanding from your comments here is that you should just be able to give it broad statements and it should interpret them into functional results. Sure - that works incredibly well, if you provide the relevant context and the model is able to understand it and properly associate it where needed.

But you're comparing the LLMs to humans (this is a problem, but not likely to stop, so we might as well address it) - but _what_ humans? You ask if that prompt would be sufficient for a senior developer - absolutely, if that developer already has the _context_ of the project/task/features/etc. They can _infer_ what's not specified. But if you give that same prompt to a jr dev who maybe has access to the codebase and has poked around inside the working application once or twice, but has no real in-depth experience with it - they're going to _infer_ different things. They might do great, they might fail spectacularly. Flip a coin.

So - with that prompt in the top-level comment - if that LLM is provided excellent context (via AGENTS.md/attached files/etc.), then it'll most likely do great with that prompt. Especially if you aren't looking for specifics in the resulting feature outside of what you mentioned, since it _will_ have to infer some things. But if you're just opening Codex/CC without a good CLAUDE.md/AGENTS.md and feeding it a prompt like that, you have to expect quite a bit of variance in what you get - exactly the same way you would with a _human_ developer.

Your context and prompt are the project spec. You get out what you put in.

Implicated commented on Claude Sonnet 4.5   anthropic.com/news/claude... · Posted by u/adocomplete
iagooar · 3 months ago
I think that is an interesting observation and I generally agree.

Your point about prompting quality is very valid and for larger features I always use PRDs that are 5-20x the prompt.

The thing is my "experiment" is one that represents a fairly common use case: this feature is actually pretty small and embeds into a pre-existing UI structure - in a larger codebase.

GPT-5-Codex allows me to write a pretty quick & dirty prompt, yet still get VERY good results. Not only does it work on the first try; Codex is also reliably better at understanding the context and doing the things that are common and best practice in professional SWE projects.

If I want to get something comparable out of Claude, I would have to spend at least 20 minutes preparing the prompt, if not more.

Implicated · 3 months ago
> The thing is my "experiment" is one that represents a fairly common use case

Valid as well. I guess I'm just nitpicking - seeing how often people say these models aren't useful, combined with seeing this example, triggered my "you're doing it wrong" mode :D

> GPT-5-Codex allows me to write a pretty quick & dirty prompt, yet still get VERY good results.

I have a reputation with family and co-workers of being quite verbose - this might be why I prefer Claude (though I haven't tried Codex in the last month or so). I'm typically setting up context, spending a few minutes writing an initial prompt, and iterating/adjusting on the approach in planning mode so that I _can_ just walk away (or tab out) and let it do its thing, knowing that I've already reviewed its approach and have a reasonable amount of confidence that it's taking an approach that seems logical.

I should start playing with Codex again on some new projects I have in mind, where I have an initial planning document with my notes on what I want it to do but nothing super specific - just to see what it can "one-shot".

Implicated commented on Claude Sonnet 4.5   anthropic.com/news/claude... · Posted by u/adocomplete
pton_xd · 3 months ago
> I bet if I were in your shoes and looking to write a prompt to start a task of a similar type that my prompt would have been 5 to 20x the length of yours

Why would you need such extensive prompting just to get the model to not re-implement authentication logic, for example? It already has access to all of the existing code; shouldn't it just take advantage of what's already there? A 20x longer prompt doesn't sound like a satisfying solution to whatever issue is happening here.

Implicated · 3 months ago
Well, I don't have the context about what's happening in this example, though I don't see anything about auth myself.

And I left that window at 5-20x because, again, there's no real context. But unless I was already in the middle of a task and giving direction that there was already context for, my prompt is almost never _this_ short (referring to the prompt in the top-level comment).

> A 20x longer prompt doesn't sound like a satisfying solution to whatever issue is happening here.

It wouldn't be, given the additional context provided by the author in a sibling comment to yours. But if you had specific expectations for the resulting code/functionality, that 20x-longer prompt is likely to save you time and energy on the back-and-forth adjustments you might otherwise have to make.

u/Implicated

Karma: 982 · Cake day: May 5, 2013
About
https://lwhiker.com https://legitphp.com https://github.com/mferrara

mferrara -. at .- gmail
