They lack critical thinking during learning phase.
They lack critical thinking during learning phase.
Fundamentally, LLMs are gullible. They follow instructions that make it into their token context, with little regard for the source of those instructions.
This dramatically limits their utility for any form of "autonomous" action.
What use is an AI assistant if it falls for the first malicious email / web page / screen capture it comes across that tells it to forward your private emails or purchase things on your behalf?
(I've been writing about this problem for two years now, and the state of the art in terms of mitigations has not advanced very much at all in that time: https://simonwillison.net/tags/prompt-injection/)
LLMs don’t actually learn: they get indoctrinated.
lol
Maybe he had something to do with it? Maybe, just maybe, it didn't just randomly happened to him.
The main thing I want to improve is to not use one big compose file for all services, as it would be cleaner to have one per service and just deploy them to the same network. But I haven't figured the best way to auto-deploy each service's compose file to the server (as the current auto-deploy only updates container images).
But I saw it like me against the machine. Since I was youn I wanted to be an inventor. This tool allowed anyone to "invent" any software coming out of the inventor's imagination. It just required a computer, and the inventor not giving up and using his brain. I could do that. I liked the challenge.
Be a tinkerer, have fun! Discover things on your own. Dare to be stupid and do whatever stupid thing feels right. You don't need to follow some pre-programmed plan.
Programming is all about problem solving. You solve one problem, good. Now you will have another problem. No one guarantees you will solve it nor how much effort it will require specifically for you to solve it. And maybe it's the wrong problem to solve. But you will end up figuring all that out, and then you will feel accomplished and willingly hunt the next problem.
Because you can't copyright a human brain, and because humans (unlike machines) can themselves create works subject to copyright.
2. Why making a difference between AI and HI (Human Intelligence)?
3. Given the fast development in the field, when does the difference made above (if any) start being outdated and unrealistic and how do we future-proof against this?
Just like an airplane doesn't work exactly like a bird, but both can fly.