Readit News
edulix commented on A definition of AGI   arxiv.org/abs/2510.18212... · Posted by u/pegasus
edulix · 2 months ago
We have SAGI: Stupid Artificial General Intelligence. It's actually quite general, but works differently. In some areas it can be better or faster than a human, and in others it's more stupid.

Just like an airplane doesn't work exactly like a bird, but both can fly.

edulix commented on An embarrassingly simple approach to recover unlearned knowledge for LLMs   arxiv.org/abs/2410.16454... · Posted by u/PaulHoule
edulix · a year ago
The problem with current models is that they don't learn, they get indoctrinated.

They lack critical thinking during the learning phase.

edulix commented on ZombAIs: From Prompt Injection to C2 with Claude Computer Use   embracethered.com/blog/po... · Posted by u/macOSCryptoAI
simonw · a year ago
For all of the excitement about "autonomous AI agents" that go ahead and operate independently through multiple steps to perform tasks on behalf of users, I've seen very little convincing discussion about what to do about this problem.

Fundamentally, LLMs are gullible. They follow instructions that make it into their token context, with little regard for the source of those instructions.

This dramatically limits their utility for any form of "autonomous" action.

What use is an AI assistant if it falls for the first malicious email / web page / screen capture it comes across that tells it to forward your private emails or purchase things on your behalf?

(I've been writing about this problem for two years now, and the state of the art in terms of mitigations has not advanced very much at all in that time: https://simonwillison.net/tags/prompt-injection/)

edulix · a year ago
The core flaw of current AI is the lack of critical thinking during learning.

LLMs don’t actually learn: they get indoctrinated.

edulix commented on What Does It Mean to Learn?   newyorker.com/culture/ope... · Posted by u/wallflower
edulix · a year ago
shameless plug: AIs don't learn, they get indoctrinated

https://x.com/edulix/status/1827493741441249588

edulix commented on International Study Detects Consciousness in Unresponsive Patients   massgeneralbrigham.org/en... · Posted by u/geox
ASalazarMX · a year ago
That was either a medical miracle or the laziest young man in the world. Wonder what his first words were.
edulix · a year ago
It should be a new medical technique, which shall be named... the frame method.
edulix commented on Sutskever: OpenAI board doing its mission to build AGI that benefits all   twitter.com/GaryMarcus/st... · Posted by u/convexstrictly
lamp987 · 2 years ago
"one of the most important organizations in the world"

lol

edulix · 2 years ago
This is like saying that Jeff Bezos leapfrogged into being the CEO of one of the biggest and most successful startups in the world.

Maybe he had something to do with it? Maybe, just maybe, it didn't just randomly happen to him.

edulix commented on Working on Multiple Web Projects with Docker Compose and Traefik   georgek.github.io/blog/po... · Posted by u/globular-toast
ripperdoc · 2 years ago
I use Nginx as a reverse proxy, and each service runs on the same internal port. There is a way to configure Nginx natively to dynamically route to the container with the same name. If I need multiple services up locally for development, I bring up Nginx there too, and each service is mapped to a domain that ends with .test, which I have added to local DNS (in my case /etc/hosts). I find it's better anyway to run development behind a reverse proxy, to find errors that would otherwise only appear in prod.

The main thing I want to improve is to not use one big compose file for all services, as it would be cleaner to have one per service and just deploy them to the same network. But I haven't figured out the best way to auto-deploy each service's compose file to the server (as the current auto-deploy only updates container images).

edulix · 2 years ago
Can you please elaborate on how to "dynamically route to the container with the same name" with nginx?
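
One common pattern that matches ripperdoc's description, sketched below as an assumption rather than their confirmed setup: use a wildcard server_name that captures the service name from the *.test host, and Docker's embedded DNS resolver (127.0.0.11) so Nginx resolves the container name at request time. The .test suffix and port 8080 are placeholders for illustration.

    # Route <name>.test to the Docker container named <name>.
    # Assumes the Nginx container shares a Docker network with the services
    # and that every service listens on the same internal port (8080 here).
    server {
        listen 80;

        # Capture the first label of the host as $service,
        # e.g. a request for "api.test" targets the container "api".
        server_name ~^(?<service>[^.]+)\.test$;

        # Docker's embedded DNS; a short TTL so re-created containers
        # are picked up without reloading Nginx.
        resolver 127.0.0.11 valid=10s;

        location / {
            # Using a variable forces Nginx to resolve the name per request
            # instead of once at startup (which would fail if a container is down).
            set $upstream http://$service:8080;
            proxy_pass $upstream;
            proxy_set_header Host $host;
        }
    }

Since /etc/hosts has no wildcard support, each <name>.test entry still has to be added individually, which matches the local-DNS step described above.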
edulix commented on Ask HN: What boosted your confidence as a new programmer?    · Posted by u/optbuild
edulix · 2 years ago
I was 12 or 13 when I started programming, and it seemed really difficult. I didn't have access to the Internet back then.

But I saw it as me against the machine. Since I was young I had wanted to be an inventor, and this tool allowed anyone to "invent" any software coming out of the inventor's imagination. It just required a computer, and an inventor who didn't give up and used his brain. I could do that. I liked the challenge.

Be a tinkerer, have fun! Discover things on your own. Dare to be stupid and do whatever stupid thing feels right. You don't need to follow some pre-programmed plan.

Programming is all about problem solving. You solve one problem, good. Now you will have another problem. No one guarantees that you will solve it, or how much effort it will take you specifically. And maybe it's the wrong problem to solve. But you will end up figuring all that out, and then you will feel accomplished and willingly hunt the next problem.

edulix commented on Open source licenses need to leave the 1980s and evolve to deal with AI   theregister.com/2023/06/2... · Posted by u/gumby
JoshTriplett · 2 years ago
> 2. Why make a distinction between AI and HI (Human Intelligence)?

Because you can't copyright a human brain, and because humans (unlike machines) can themselves create works subject to copyright.

edulix · 2 years ago
At what point can't you copyright an "AI brain" either? Maybe AI will at some point create works subject to copyright?
edulix commented on Open source licenses need to leave the 1980s and evolve to deal with AI   theregister.com/2023/06/2... · Posted by u/gumby
edulix · 2 years ago
1. At what point is an intelligence trained on copyrighted work a derivative work of the training materials?

2. Why make a distinction between AI and HI (Human Intelligence)?

3. Given the fast pace of development in the field, when does the distinction made above (if any) become outdated and unrealistic, and how do we future-proof against this?

u/edulix

Karma: 29 · Cake day: November 4, 2020
About
CTO of Sequent Tech