Readit News
astromaniak commented on The future of Deep Learning frameworks   neel04.github.io/my-websi... · Posted by u/lairv
astromaniak · a year ago
The article misses the multi-modal thing, which is the future. Sure, modalities can be considered separate things, like today, but that's probably not the best approach. Support from the framework may include partial training, easy component swapping, intermediate data caching, dynamic architectures, and automatic work balancing and scaling.
astromaniak commented on Does Reasoning Emerge? Probabilities of Causation in Large Language Models   arxiv.org/abs/2408.08210... · Posted by u/belter
refulgentis · a year ago
> To make model reason you have to put it in a loop with fallbacks

Source? TFA, i.e. the thing we're commenting on, tried to show the opposite, and seems to succeed.

astromaniak · a year ago
When the task, or part of it, is NP-complete, there is no way around it: the model has to try options until it finds one that works. In a loop. And this can be multi-step, with partial fallback. That's how humans think: they can only see to some depth, so they first determine promising directions, select one, go deeper, and fall back if it doesn't work. The pattern matching mentioned is the simplest, one-step case, and LLMs handle it with no problem.
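The try-options-with-fallback loop described above is classic backtracking. A minimal sketch, with subset-sum standing in for the NP-complete subtask (the function name and problem choice are illustrative, not from the thread):

```python
# Backtracking: try options in a loop, go deeper on promising branches,
# and fall back (undo the choice) when a branch fails.
def subset_sum(nums, target, chosen=None, start=0):
    """Find a subset of positive nums summing to target, or None."""
    chosen = [] if chosen is None else chosen
    if target == 0:
        return list(chosen)                # a working option was found
    for i in range(start, len(nums)):      # try each remaining option
        if nums[i] <= target:              # prune branches that overshoot
            chosen.append(nums[i])         # go deeper
            found = subset_sum(nums, target - nums[i], chosen, i + 1)
            if found is not None:
                return found
            chosen.pop()                   # fallback: undo and try the next
    return None                            # branch exhausted, fail upward

print(subset_sum([7, 3, 5, 12, 2], 10))  # → [7, 3]
```

The `chosen.pop()` line is the "partial fallback": only the failed choice is undone, while the rest of the path so far is kept.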
astromaniak commented on Does Reasoning Emerge? Probabilities of Causation in Large Language Models   arxiv.org/abs/2408.08210... · Posted by u/belter
layer8 · a year ago
My impression is that LLMs “pattern-match” on a less abstract level than general-purpose reasoning requires. They capture a large number of typical reasoning patterns through their training, but it is not sufficiently decoupled, or generalized, from what the reasoning is about in each of the concrete instances that occur in the training data. As a result, the apparent reasoning capability that LLMs exhibit significantly depends on what they are asked to reason about, and even depends on representational aspects like the sentence patterns used in the query. LLMs seem to be largely unable to symbolically abstract (as opposed to interpolate) from what is exemplified in the training data.
astromaniak · a year ago
For some reason LLMs get a lot of attention. But while simplicity is great, it has limits. To make a model reason you have to put it in a loop with fallbacks: it has to try possibilities and fall back from false branches. This can be done one level higher, by an algorithm, another model, or another thread of the same model. To some degree it can even be done by prompting within the same thread, like asking the LLM to first print a high-level algorithm and then execute it step by step.
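That outer "loop with fallbacks" could look something like this sketch, where the hypothetical `propose` stub stands in for a model call and `verify` is the external check that triggers a fallback (all names here are illustrative):

```python
# Outer control loop: ask for a candidate, verify it, and retry (fall back)
# until something passes or the attempt budget runs out.
def propose(task, attempt):
    # Toy stand-in for an LLM call: just enumerates small integers.
    return attempt

def verify(task, answer):
    # Toy external check: is `answer` an integer square root of `task`?
    return answer * answer == task

def reason_with_fallback(task, max_attempts=10):
    for attempt in range(max_attempts):
        answer = propose(task, attempt)    # try a possibility
        if verify(task, answer):
            return answer                  # keep the branch that works
    return None                            # every candidate failed

print(reason_with_fallback(9))  # → 3
```

The point of the sketch is that the loop and the verifier live outside the model, which matches the comment's claim that the fallback logic "can be done on a level higher".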
astromaniak commented on Grace Hopper, Nvidia's Halfway APU   chipsandcheese.com/2024/0... · Posted by u/PaulHoule
astromaniak · a year ago
This is good for datacenters, but Nvidia has stopped doing anything for the consumer market.
astromaniak commented on USB Sniffer Lite for RP2040   github.com/ataradov/usb-s... · Posted by u/mdp2021
stavros · a year ago
Why go through all this trouble if you already have physical access to the computer?
astromaniak · a year ago
-> PC: to have a wireless tablet, touchpad, joystick, etc.

PC ->: to have wireless control of robots, printers, and other devices that have drivers but aren't easily programmable.

astromaniak commented on Sam Altman is becoming one of the most powerful people on Earth. Be afraid   theguardian.com/technolog... · Posted by u/edward
matteoraso · a year ago
The only real technological edge that OpenAI has right now (at least that I know of) is Sora. There are already some very good open source LLMs and the SoTA for image generation has always been Stable Diffusion. I don't think that Sora's some miraculous piece of technology that nobody else can replicate, so I doubt that OpenAI will be the absolute best in AI for long.
astromaniak · a year ago
The FLUX image generator just came out of Black Forest Labs, and they are working on video. So you're right: this will become another battlefield soon.
astromaniak commented on Sam Altman is becoming one of the most powerful people on Earth. Be afraid   theguardian.com/technolog... · Posted by u/edward
bamboozled · a year ago
A lot of people seem to use Gemini where I work so I don’t think you’re right.
astromaniak · a year ago
It's Copilot here, which is MS+OAI. But it's good that we have healthy competition.
astromaniak commented on The medieval 'New England' on the north-eastern Black Sea coast (2015)   caitlingreen.org/2015/05/... · Posted by u/Thevet
ChipperShredder · a year ago
They had even more recent colonies there as well. In some real, legal senses, Britain and British nobility have better, and legally enforceable, claims on that region than the Ukrainians living there now, many of whom are merely transplants shuffled into the area by the USSR.
astromaniak · a year ago
It all depends on how far back we want to go to find the 'true' owners. Is a second generation living there enough? How about a third that no longer lives there? There are parallels here: Jews are willing to look as far as two millennia back, while most Americans don't go anywhere near as far as Columbus's day. Anyway, the 'facts on the ground' are the facts.
astromaniak commented on Complex life forms existed 1.5B years earlier than believed, study finds   abcnews.go.com/Internatio... · Posted by u/isaacfrond
warvariuc · a year ago
> Complex life forms existed 1.5B years earlier than believed

I think it would be more correct to say "It's now believed that complex life forms existed 1.5B years earlier than believed earlier"

astromaniak · a year ago
So far there is no reason to think so. The article lacks details, but for complex life to emerge in a small pocket, that oasis would have to exist for many millions of years, and there is no proof of that. Then, an organism is not just a bunch of single-celled creatures in one place; that is another open question here. Next, there are intermediate forms of life which bundle together only for some time: such a thing looks like a single organism while it isn't.
astromaniak commented on Flux: Open-source text-to-image model with 12B parameters   blog.fal.ai/flux-the-larg... · Posted by u/CuriouslyC
Hizonner · a year ago
As far as I know, none have been released. And it doesn't even really make sense, because, as I said, the models aren't copyrightable to begin with and therefore aren't licensable either.

However, plenty of open source software exists. The fact that open source models don't exist doesn't excuse attempts to falsely claim the prestige of the phrase "open source".

astromaniak · a year ago
> As far as I know, none have been released.

I can tell you a secret: what you call 'open source' models are impossible, because massive randomness is part of the training process. They are not reproducible. Even having everything, you cannot tell whether a given model was trained on a given dataset. Copyright is a different thing.
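The seed-sensitivity point can be illustrated with a toy training loop (everything here is illustrative, not a real training recipe): identical code and data produce different weights under a different seed, so a released checkpoint cannot be re-derived without the exact seed and data order.

```python
import random

def train(seed, epochs=3):
    """Toy SGD fit of y = 2x; the seed controls init and data order."""
    rng = random.Random(seed)
    w = rng.uniform(-1, 1)                  # random initialization
    data = [(x, 2 * x) for x in range(10)]
    for _ in range(epochs):
        rng.shuffle(data)                   # random data order
        for x, y in data:
            w -= 0.01 * (w * x - y) * x     # SGD step toward w = 2
    return w

print(train(0) == train(0))  # → True  (same seed: reproducible)
print(train(0) == train(1))  # → False (different seed: different weights)
```

Real training adds many more nondeterminism sources on top of this (GPU kernel scheduling, data-parallel reduction order), which only strengthens the irreproducibility argument.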

And the bad news is that what's coming is even worse. Those will be whole entities with self-awareness and personal experience. They can be copied, but not reproduced. Moreover, it is hard, or almost impossible, to detect whether something undeclared was planted in their 'minds'.

All together, this means an 'open source' model in the strict interpretation is a myth: a great idea which happens not to work out. Like the Turing test.

> However, plenty of open source software exists.

Attempt to switch topic detected.

PS: as for the massive downvote, I wasn't even rude, but I don't care. This account will be abandoned soon regardless, like all those before and after it.
