Readit News logoReadit News

Deleted Comment

joe_the_user commented on Reading for pleasure plummets by 40% in the US   medicalxpress.com/news/20... · Posted by u/geox
shusaku · 14 hours ago
Maybe I am misunderstanding the study but I don’t understand why reading a magazine or newspaper is counted while reading an article on one’s phone is not.
joe_the_user · 14 hours ago
You in fact are misunderstanding the article, reading on an electronic device is included reading for pleasure - it is one of the three categories listed parenthetically as that.

Quote: The study focused on two activities: reading for pleasure (reading a book, newspaper, magazine, reading on electronic devices and listening to audiobooks) and reading with children.

joe_the_user commented on AGI is an engineering problem, not a model training problem   vincirufus.com/posts/agi-... · Posted by u/vincirufus
hnuser123456 · 2 days ago
Even more fundamental than science, there is missing philosophy, both in us regarding these systems, and in the systems themselves. An AGI implemented by an LLM needs to, at the minimum, be able to self-learn by updating its weights, self-finetune, otherwise it quickly hits a wall between its baked-in weights and finite context window. What is the optimal "attention" mechanism for choosing what to self-finetune with, and with what strength, to improve general intelligence? Surely it should focus on reliable academics, but which academics are reliable? How can we reliably ensure it studies topics that are "pure knowledge", and who does it choose to be, if we assume there is some theoretical point where it can autonomously outpace all of the world's best human-based research teams?
joe_the_user · 2 days ago
Well,

Original 80s AI was based on mathematical logic. And while that might not encompass all philosophy, it certainly was a product of philosophy broadly speaking - some analytical philosophers could endorse. But it definitely failed and failed because it could process uncertainty (imo). I think also if you closely, classical philosophy wasn't particularly amenable to uncertainty either.

If anything, I would say that AI has inherited its failure from philosophy's failure and we should look to alternative approaches (from Cybernetics to Bergson to whatever) for a basis for it.

joe_the_user commented on AGI is an engineering problem, not a model training problem   vincirufus.com/posts/agi-... · Posted by u/vincirufus
justcallmejm · 2 days ago
The missing science to engineer intelligence is composable program synthesis. Aloe (https://aloe.inc) recently released a GAIA score demonstrating how CPS dramatically outperforms other generalist agents (OpenAI's deep research, Manus, and Genspark) on tasks similar to those a knowledge worker would perform.

I'd argue it's because intelligence has been treated as a ML/NN engineering problem that we've had the hyper focus on improving LLMs rather than the approach articulated in the essay.

Intelligence must be built from a first principles theory of what intelligence actually is.

joe_the_user · 2 days ago
CPS sounds interesting but your link goes to a teaser trailer and a waiting list. It's kind of hard to expect much from that.
joe_the_user commented on AGI is an engineering problem, not a model training problem   vincirufus.com/posts/agi-... · Posted by u/vincirufus
andy99 · 2 days ago
If you believe the bitter lesson, all the handwavy "engineering" is better done with more data. Someone likely would have written the same thing as this 8 years ago about what it would take to get current LLM performance.

So I don't buy the engineering angle, I also don't think LLMs will scale up to AGI as imagined by Asimov or any of the usual sci-fi tropes. There is something more fundamental missing, as in missing science, not missing engineering.

joe_the_user · 2 days ago
So the "Bitter Lesson" paper actually came up recently and I was surprised to discover that what it claimed was sensible and not at "all you need is data" or "data is inherently better"

The first line and the conclusion is: "The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin." [1]

I don't necessary agree with it's examples or the direction it vaguely points at. But it's basic statement seems sound. And I would say that there's lot of opportunity for engineer, broadly speaking, in the process of creating "general methods that leverage computation" (IE, that scale). What the bitter lesson page was roughly/really about was earlier "AI" methods based on logic-programming and which including information on the problem domain in the code itself.

And finally, the "engineering" the paper talks about actually is pro-Bitter lesson as far as I can tell. It's taking data routing and architectural as "engineering" and here I agree this won't work - but for the opposite reason - specifically 'cause I don't just data routing/process will be enough.

[1]https://www.cs.utexas.edu/~eunsol/courses/data/bitter_lesson...

joe_the_user commented on Weaponizing image scaling against production AI systems   blog.trailofbits.com/2025... · Posted by u/tatersolid
K0nserv · 5 days ago
The security endgame of LLMs terrifies me. We've designed a system that only supports in-band signalling, undoing hard learned lessons from prior system design. There are ampleattack vectors ranging from just inserting visible instructions to obfuscation techniques like this and ASCII smuggling[0]. In addition, our safeguards amount to nicely asking a non deterministic algorithm to not obey illicit instructions.

0: https://embracethered.com/blog/posts/2024/hiding-and-finding...

joe_the_user · 5 days ago
What lessons have organizations learned about security?

Hire a consultant who can say you're following "industry standards"?

Don't consider secure-by-design applications, keep your full-featured piece of jump but work really hard to plug holes, ideally by paying a third party or better getting your customers to pay ("anti-virus software").

Buy "security as product" software allow with system admin software and when you get a supply chain attack, complain?

joe_the_user commented on Bank forced to rehire workers after lying about chatbot productivity, union says   arstechnica.com/tech-poli... · Posted by u/ndsipa_pomu
taylodl · 5 days ago
How many times has a chatbot successfully taken care of a customer support problem you had? I have had success, but the success rate is less than 5%. Maybe even way less than 5%.

Companies need to stop looking at customer support as an expense, but rather as an opportunity to build trust and strengthen your business relationship. They warn against assessing someone when everything is going well for them - the true measure of the person is what they do when things are not going well. It's the same for companies. When your customers are experiencing problems, that's the time to shine! It's not a problem, it's an opportunity.

joe_the_user · 5 days ago
I remember the pre-AI Geico chat bot that I liked. I could call it once every six months and pay my entire balance with a few words. But then the company started leaning harder on monthly payments and the "pay entire balance" option was removed and I now must either laboriously speak-out the entire dollars and cents due or talk to a person.

What is to say that a lot of the functions that a customer service person does is getting people things they need and that the company resists giving to them. Which is to say that companies mostly need customer service agents because the company's raw impulses are so shitty they need someone with the slight independence of a customer service agent just to provide the services their customers need.

It's like why I never go to company websites despite being very web-savy. These websites only serve the company's idea of what I get and if I'm calling at all, it's because I need more than that.

Naturally, the point is an AI chat can't do customer service because it can't override policy, tell people tricks and similar things.

joe_the_user commented on Understanding Moravec's Paradox   hexhowells.com/posts/mora... · Posted by u/hexhowells
hexhowells · 5 days ago
The human ability to learn from few examples can be explained with evolution (and thus search). We evolved to be fast learners as it was key to our survival. If you touched fire and felt pain, you better learn quickly not to keep touching it. This learning from reward signals (neurotransmitters) in our brain generalises to pretty much all learning tasks
joe_the_user · 5 days ago
Everything can "be explained by evolution" but such an explanation doesn't tell you how a particular form serves a particular task.

Deleted Comment

joe_the_user commented on Understanding Moravec's Paradox   hexhowells.com/posts/mora... · Posted by u/hexhowells
joe_the_user · 5 days ago
Any attention on Moravec's paradox is good imo because it is important.

That said, the article starts with several problems.

1) Claims that it isn't a paradox, which is just silly. A paradox is a counter-intuitive result. The result is generally counter-intuitive whatever explanation you give. Zeno's paradox remains a paradox despite calculus essentially explaining it, etc.

2) Calls the article "Understanding Moravec's Paradox" when it should be called "My Explanation of Moravec's Paradox".

3) The author's final explanation seems kind of simplistic; "Human activities just have a large search space". IDK. Human activity sometimes does still in things that aren't walking also. I mean, "not enough data" is an explanation why neural networks can't do a bunch of things. But not all programs are neural networks. One of the things humans are really good at is learning things from a few examples. A serious explanation of Moravec's Paradox would have to explain this as well imo.

u/joe_the_user

KarmaCake day27409December 2, 2008
About
"Fully Automated Luxury Gay Space Communism"

Also, I know maths, semi-seriously

View Original