rdedev commented on A statistical analysis of Rotten Tomatoes   statsignificant.com/p/is-... · Posted by u/m463
autoexec · 3 days ago
I feel pretty confident that Captain Marvel, Emilia Perez, Spy Kids, Sausage Party, The Last Jedi, and Ghostbusters 2016 aren't at much risk of going over the heads of audiences, but you're entitled to your opinion if you think they count as avant garde cinema
rdedev · 3 days ago
Hey it's a heuristic so ymmv
rdedev commented on A statistical analysis of Rotten Tomatoes   statsignificant.com/p/is-... · Posted by u/m463
liveoneggs · 5 days ago
this feels like an interview question
rdedev · 5 days ago
Here is a better heuristic:

High critic score / low audience score = Avant garde type films. Might go over your head

Low critic score / high audience score = Maybe fun but forgettable movie
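
The heuristic above can be sketched as a tiny decision rule. This is purely illustrative: the 20-point gap threshold and the labels are my own assumptions, not anything from the thread.

```python
# Hypothetical sketch of the critic-vs-audience heuristic.
# Scores on a 0-100 scale, as on Rotten Tomatoes; the 20-point
# gap threshold is an illustrative choice, not an empirical one.
def classify(critic_score: int, audience_score: int, gap: int = 20) -> str:
    if critic_score - audience_score >= gap:
        return "avant-garde type film: might go over your head"
    if audience_score - critic_score >= gap:
        return "fun but forgettable movie"
    return "critics and audiences roughly agree"

print(classify(90, 55))  # avant-garde type film: might go over your head
print(classify(40, 85))  # fun but forgettable movie
```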

rdedev commented on Australia Post halts transit shipping to US as 'chaotic' tariff deadline looms   abc.net.au/news/2025-08-2... · Posted by u/breve
JPKab · 5 days ago
None of you are responding to the fact that China and India are buying Russian oil and financing the thousands of dead every week.

Per capita means nothing to the planet. The temperature climbs regardless, and China has a choice and it's chosen power plants even if India doesn't. They are building coal plants every single day and you don't say anything because you are a product of a left-wing movement that views state capitalism as an offshoot of communism and therefore positive. That's why your movement never has anything critical to say about China. You only care about Muslims if they are being murdered by capitalists. But the uyghurs can just go pound sand. They are literally having forced birth control and have lower birth rates than any Muslim population on the planet and you sit there and ignore it.

rdedev · 5 days ago
https://www.carbonbrief.org/analysis-record-solar-growth-kee...

China is on track to reduce its use of fossil fuel for energy production

rdedev commented on GPT-5 leaked system prompt?   gist.github.com/maoxiaoke... · Posted by u/maoxiaoke
OsrsNeedsf2P · 18 days ago
I find it interesting how many times they have to repeat instructions, i.e:

> Address your message `to=bio` and write *just plain text*. Do *not* write JSON, under any circumstances [...] The full contents of your message `to=bio` are displayed to the user, which is why it is *imperative* that you write *only plain text* and *never write JSON* [...] Follow the style of these examples and, again, *never write JSON*

rdedev · 18 days ago
I built a plot-generation chatbot for a project at my company, and it used matplotlib as the plotting library. Basically the LLM would write a Python function to generate a plot, and it would be executed on an isolated server. I had to explicitly tell it, several times, not to save the plot. Probably because most matplotlib tutorials online end by saving the plot.
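
One cheap way to enforce that kind of contract, beyond repeating it in the prompt, is to screen the generated code before executing it. This is a hypothetical sketch, not the actual project code; the function names and forbidden-token list are illustrative assumptions.

```python
# Hypothetical guard for a plot-generation chatbot: reject LLM-written
# code that tries to save the figure to disk (or pop up a window)
# instead of returning it. Tutorial-style matplotlib code tends to end
# in plt.savefig(...), so a simple substring screen catches most cases.
FORBIDDEN = ("savefig", "plt.show")

def violates_contract(generated_code: str) -> bool:
    """Return True if the generated snippet calls a disallowed function."""
    return any(token in generated_code for token in FORBIDDEN)

good = (
    "def make_plot(df):\n"
    "    fig, ax = plt.subplots()\n"
    "    ax.plot(df.x, df.y)\n"
    "    return fig\n"
)
bad = "plt.plot(x, y)\nplt.savefig('out.png')"

print(violates_contract(good))  # False
print(violates_contract(bad))   # True
```

A substring check is obviously crude (it can't see through obfuscation), so it complements, rather than replaces, running the code in an isolated sandbox.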
rdedev commented on Building Effective AI Agents   anthropic.com/engineering... · Posted by u/Anon84
AvAn12 · 2 months ago
How do agents deal with task queueing, race conditions, and other issues arising from concurrency? I see lots of cool articles about building workflows of multiple agents - plus what feels like hand-waving around declaring an orchestrator agent to oversee the whole thing. And my mind goes to whether there needs to be some serious design considerations and clever glue code. Or does it all work automagically?
rdedev · 2 months ago
This is why I am leaning towards making the LLM generate code that operates on tool calls, instead of having everything in JSON.

Hugging Face's smolagents library makes the LLM generate Python code where tools are just normal Python functions. If you want parallel tool calls, just prompt the LLM to do so; it should take care of synchronizing everything. Of course there is the whole issue of executing LLM-generated code, but we have a few solutions for that.
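
A stripped-down sketch of that pattern (this is not smolagents' actual API, just the underlying idea): tools are ordinary Python functions, the LLM emits a code snippet that composes them, and the agent executes that snippet in a namespace exposing only the tools. The tool names and the "generated" snippet here are made up for illustration.

```python
# Sketch of the code-as-tool-calls pattern. Tools are plain functions;
# the LLM-generated code can use normal control flow and parallelise
# tool calls itself, instead of round-tripping one JSON call at a time.
import concurrent.futures

def get_weather(city: str) -> str:        # example tool
    return f"sunny in {city}"

def get_population(city: str) -> int:     # example tool
    return {"Paris": 2_100_000}.get(city, 0)

# Code the LLM might emit when prompted to run tools in parallel:
llm_generated = """
with concurrent.futures.ThreadPoolExecutor() as pool:
    weather = pool.submit(get_weather, "Paris")
    pop = pool.submit(get_population, "Paris")
result = f"{weather.result()}, population {pop.result()}"
"""

# Expose only the tools (plus the executor module) to the snippet.
namespace = {
    "get_weather": get_weather,
    "get_population": get_population,
    "concurrent": concurrent,
}
exec(llm_generated, namespace)  # sandbox/isolate this in production!
print(namespace["result"])      # sunny in Paris, population 2100000
```

The synchronization the parent comment asks about falls out of ordinary language semantics here: `.result()` blocks until each future completes, so no orchestrator-level glue is needed for this case.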

rdedev commented on When ChatGPT broke the field of NLP: An oral history   quantamagazine.org/when-c... · Posted by u/mathgenius
motorest · 4 months ago
> That the whole field seems to be moving in a direction where you need a lot of resources to do anything. You can have 10 different ideas on how to improve LLMs but unless you have the resources there is barely anything you can do.

I think you're confusing problems, or not realizing that improving the efficiency of a class of models is a research area on its own. Look at any field that involves expensive computational work: model-reduction strategies dominate research.

rdedev · 4 months ago
I felt that way maybe a year or two ago. It seemed like most research was concerned only with building bigger models to beat benchmarks. There was also this prevalent idea that models need to be big and have massive compute, especially from companies like OpenAI. I was glad that models like DeepSeek were made. Brought back some hope.
rdedev commented on When ChatGPT broke the field of NLP: An oral history   quantamagazine.org/when-c... · Posted by u/mathgenius
sp1nningaway · 4 months ago
For me as a lay-person, the article is disjointed and kinda hard to follow. It's fascinating that all the quotes are emotional responses or about academic politics. Even now, they are suspicious of transformers and are bitter that they were wrong. No one seems happy that their field of research has been on an astonishing rocketship of progress in the last decade.
rdedev · 4 months ago
It's a truly bitter pill to swallow when your whole area of research becomes redundant.

I have a bit of background in this field so it's nice to see even people who were at the top of the field raise concerns that I had. That comment about LHC was exactly what I told my professor. That the whole field seems to be moving in a direction where you need a lot of resources to do anything. You can have 10 different ideas on how to improve LLMs but unless you have the resources there is barely anything you can do.

NLP was the main reason I pursued an MS degree, but by the end of my course I was no longer interested in it, mostly because of this.

rdedev commented on When ChatGPT broke the field of NLP: An oral history   quantamagazine.org/when-c... · Posted by u/mathgenius
peterldowns · 4 months ago
All of this matches my understanding. It was interesting taking an NLP class in 2017, the professors said basically listen, this curriculum is all historical and now irrelevant given LLMs, we’ll tell you a little about them but basically it’s all cutting edge sorry.
rdedev · 4 months ago
Same for my NLP class of 2021. It went straight into transformers after a brief intro to the old stuff.
rdedev commented on How Kerala got rich   aeon.co/essays/how-did-ke... · Posted by u/lordleft
trompetenaccoun · 5 months ago
That may be but the topic of the thread is how rich Kerala supposedly is, not how super awesome their public train announcements are. The claim is not just false, the article is outright propaganda given how one of the co-authors works for the state government.
rdedev · 5 months ago
I guess my main point is that a communist-type government was not exclusively bad for Kerala, since it took a lot of effort to improve education and public health.

You can look at other sources to see how well Kerala is doing relative to other states, but I do agree the article overemphasised the good parts without any hint of its bad parts.

u/rdedev

Karma: 674 · Cake day: January 31, 2021