Readit News
Waterluvian · 3 years ago
Closer. But I still get lost when words like “tensor” are used. “structured lists of numbers” really doesn’t seem to explain it usefully.

This reminds me that explaining seemingly complex things in simple terms is one of the most valuable and rarest skills in engineering. Most people just can’t. And often because they no longer remember what’s not general knowledge. You end up with a recursive Feynmannian “now explain what that means” situation.

This is probably why I admire a whole bunch of engineering YouTubers and other engineering “PR” people for their brilliance at making complex stuff seem very very simple.

netruk44 · 3 years ago
If it helps you to understand at all, assuming you have a CS background, any time you see the word "tensor" you can replace it with "array" and you'll be 95% of the way to understanding it. Or "matrix" if you have a mathematical background.

Whereas CS arrays tend to be 1 dimensional, and sometimes 2 dimensional, tensors can be as many dimensions as you need. A 256x256 photo with RGB channels would be stored as a [256 x 256 x 3] tensor/array. If you want to store a bunch of them? Add a dimension to store each image. Want rows and columns of images? Make the dimensions [width x height x channels x rows x columns].
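In plain `numpy` terms (a sketch using arrays rather than any particular ML framework), those shapes look like:

```python
import numpy as np

# A 256x256 RGB photo as a 3-D array: height x width x channels.
image = np.zeros((256, 256, 3))

# A stack of 10 such photos: just add a leading dimension.
batch = np.zeros((10, 256, 256, 3))

print(image.ndim, batch.ndim)  # 3 4
```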

minimaxir · 3 years ago
A more practical example of the added dimensionality of tensors is the addition of a batch dimension, so an 8-image batch per training step would be an (8, 256, 256, 3) tensor.

Tools such as PyTorch's DataLoader can efficiently collate multiple inputs into a batch.
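A minimal sketch of that collation step in plain `numpy` (PyTorch's DataLoader does this plus shuffling and parallel loading):

```python
import numpy as np

# Eight individual (256, 256, 3) images...
images = [np.random.rand(256, 256, 3) for _ in range(8)]

# ...stacked along a new leading "batch" dimension.
batch = np.stack(images)
print(batch.shape)  # (8, 256, 256, 3)
```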

brundolf · 3 years ago
This is something that infuriates me about formal mathematics (and CS, for that matter): they don't just use the words people already know! They have to make up new words, sometimes even entirely new symbols, that often just mean the same thing as something else. And then the literature becomes entirely inaccessible unless you've learned this secret code. The bottleneck becomes translation, instead of understanding.
pmoriarty · 3 years ago
"Whereas CS arrays tend to be 1 dimensional, and sometimes 2 dimensional, tensors can be as many dimensions as you need."

You can have arrays of as many dimensions as you need in many (most?) programming languages.

Is there some other difference between tensors and arrays?

Or is it just the math term for multidimensional array?

Waterluvian · 3 years ago
This helps. Thank you. Any advice on where to look to understand why the word tensor was used?
TigeriusKirk · 3 years ago
One of my favorite moments in Geoffrey Hinton's otherwise pretty info-dense Coursera neural network class was when he said-

"To deal with a 14-dimensional space, visualize a 3-D space and say 'fourteen' to yourself very loudly. Everyone does it."

TuringTest · 3 years ago
Kudos to this. It also helps to think of a three-dimensional space bounded by the sides of a box, and to think of another 11 boxes stacked on top of each other.

Then you can visualize orthogonality in dimensions higher than 3 by throwing wires between equivalent points in two boxes.

beernet · 3 years ago
I think this post explains tensors fairly well: https://www.kdnuggets.com/2018/05/wtf-tensor.html

Quote: "A tensor is a container which can house data in N dimensions. Often and erroneously used interchangeably with the matrix (which is specifically a 2-dimensional tensor), tensors are generalizations of matrices to N-dimensional space."

pdntspa · 3 years ago
I got lost at the word 'tensor' too, but then I just googled it and skilled up...

But simply put, in ML a 'tensor' is an n-dimensional array of floats — a stack of matrices, for example.

Waterluvian · 3 years ago
I should have added that the images/figures really help. I think I’m about there.

6gvONxR4sf7o · 3 years ago
You’re talking like using jargon makes something a bad explanation, but maybe you just aren’t the audience? Why not use words like that if it’s a super basic concept to your intended audience?
Waterluvian · 3 years ago
I saw the scientific term, “Text Understander” and wrongly thought I was the audience.
ultrasounder · 3 years ago
Kudos to Author for writing this. The author's other guide, https://jalammar.github.io/gentle-visual-intro-to-data-analy... is THE most approachable introduction to Pandas for Excel users like me who are "comfortable" with Python(disclaimer: Self taught EE/Hardware guy). I will definitely spend some time this weekend to go over this tome. Thanks again.
minism · 3 years ago
Great overview, I think the part for me which is still very unintuitive is the denoising process.

If the diffusion process is removing noise by predicting a final image and comparing it to the current one, why can't we just jump to the final predicted image? Or is the point that because it's an iterative process, each noise step results in a different "final image" prediction?

psb217 · 3 years ago
In the reverse diffusion process, the reason we can't directly jump from a noisy image at step t to a clean image at step 0 is that each possible noisy image at step t may be visited by potentially many real images during the forward diffusion process. Thus, our model which inverts the diffusion process by minimizing least-squares prediction error of a clean image given a noisy image at step t will learn to predict the mean over potentially many real images, which is not itself a real image.

To generate an image we start with a noise sample and take a step towards the _mean_ of the distribution of real images which would produce that noise sample when running the forward diffusion process. This step moves us towards the _mean_ of some distribution of real images and not towards a particular real image. But, as we take a bunch of small steps and gradually move back through the diffusion process, the effective distribution of real images over which this inverse diffusion prediction averages has lower and lower entropy, until it's effectively a specific real image, at which point we're done.
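A toy numeric illustration of that point: the least-squares-optimal one-step prediction averages over every clean image that could have produced the noisy sample, and that average matches none of them.

```python
import numpy as np

# Two toy "clean images" that might both diffuse to the same noisy sample.
clean = np.array([[0.0, 1.0],
                  [1.0, 0.0]])

# The MSE-minimizing single-step prediction is their mean...
prediction = clean.mean(axis=0)

# ...which is a blur that matches neither real image.
print(prediction)  # [0.5 0.5]
```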

wokwokwok · 3 years ago
> But, as we take a bunch of small steps and gradually move back through the diffusion process...

...but, the question is, why can't we take a big step and be at the end in one step.

Obviously a series of small steps gets you there, but the question was why you need to take small steps.

I feel like this is just an 'intuitive explanation' that doesn't actually do anything other than rephrase the question: "You take a series of small steps to reduce the noise in each step and end up with a picture with no noise".

The real reason is that big steps give worse results (1); the model was specifically designed around a series of small steps because when you take big steps, you end up with overfitting, where the model just generates a few outputs from any input.

(1) - https://arxiv.org/pdf/1503.03585.pdf

ttul · 3 years ago
This is the best explanation I have ever read - on any topic.
minism · 3 years ago
Thank you! This is a great explanation.
mota7 · 3 years ago
The problem is that predicting a pixel requires knowing what the pixels around it look like. But if we start with lots of noise, then the neighboring pixels are all just noise and have no signal.

You could also think of this as: we start with a terrible signal-to-noise ratio. So we need to average over very large areas to get any reasonable signal. But as we increase the signal, we can average over a smaller area to get the same signal-to-noise ratio.

In the beginning, we're averaging over large areas, so all the fine detail is lost. We just get 'might be a dog? maybe??'. What the network is doing is saying "if this is a dog, there should be a head somewhere over here. So let me make it more like a head". Which improves the signal-to-noise ratio a bit.

After a few more steps, the signal is strong enough that we can get sufficient signal from smaller areas, so it starts saying 'head of a dog' in places. So the network will then start doing "Well, if this is a dog's head, there should be some eyes. Maybe two, but probably not three. And they'll be kinda somewhere around here".

Why do it this way?

Doing it this way means the network doesn't need to learn "here are all the ways dogs can look". Instead, it can learn a factored representation: a dog has a head and a body. The network only needs to learn a very fuzzy representation at this level. Then a head has some eyes and maybe a nose. Again, it only needs to learn a very fuzzy representation and (very) rough relative locations.

So it's only when it gets right down into fine detail that it actually needs to learn a pixel-perfect representation. But this is _way_ easier, because small areas of images have surprisingly low entropy.

The 'text-to-image' bit is just a twist on the basic idea. At the start when the network is going "dog? or it might be a horse?", we fiddle with the probabilities a bit so that the network starts out convinced there's a dog in there somewhere. At which point it starts making the most likely places look a little more like a dog.

sroussey · 3 years ago
I suppose that static plus a subliminal message would do the same thing to our own neural networks. Or clouds. I can be convinced I’m seeing almost anything in clouds…
astrange · 3 years ago
Research is still ongoing here, but it seems like diffusion models despite being named after the noise addition/removal process don't actually work because of it.

There's a paper (which I can't remember the name of) that shows the process still works with different information removal operators, including one with a circle wipe, and one where it blends the original picture with a cat photo.

Also, this article describes CLIP being trained on text-image pairs, but Google's Imagen uses an off the shelf text model so that part doesn't seem to be needed either.

krackers · 3 years ago
I think it might be this paper [1], succinctly described by the author in this twitter thread [2]

[1] https://arxiv.org/abs/2208.09392 [2] https://twitter.com/tomgoldsteincs/status/156250381442263040...

dougabug · 3 years ago
If you removed all of the noise in a corrupted image in one step, you would have a denoising autoencoder, which has been around since the mid-aughts or perhaps earlier. Denoising diffusion models remove noise a little bit at a time. Think about an image which only has a slight amount of noise added to it. It’s generally easier to train a model to remove a tiny amount of noise than a large amount of noise. At the same time, we likely introduced a small amount of change to the actual contents of the image.

Typically, in generating the training data for diffusion models, we add noise incrementally to an image until it’s essentially all noise. Going backwards from almost all noise to the original images directly in one step is a pretty dubious proposition.
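A toy sketch of that forward process in plain `numpy` (the noise scale and step count here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))           # stand-in for a real image

# Incrementally add small amounts of Gaussian noise, step by step,
# as in the forward diffusion process.
x = image.copy()
for t in range(1000):
    x = x + rng.normal(scale=0.05, size=x.shape)

# By the end, the original image is essentially drowned out:
# its correlation with the noisy result is close to zero.
corr = np.corrcoef(image.ravel(), x.ravel())[0, 1]
```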

hanrelan · 3 years ago
I was wondering the same and this video [1] helped me better understand how the prediction is used. The original paper isn't super clear about this either.

The diffusion process predicts the total noise that was added to the image. But that prediction isn't great and applying it immediately wouldn't result in a good output. So instead, the noise is multiplied by a small epsilon and then subtracted from the noisy image. That process is iterated to get to the final result.

[1]https://www.youtube.com/watch?v=J87hffSMB60
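A sketch of that sampling loop, where `predict_noise` is a hypothetical stand-in for the trained U-Net and the fixed step size of 0.1 is a simplification of the real noise schedule:

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_noise(x):
    # Hypothetical stand-in for the U-Net's noise prediction;
    # it points from the current sample toward a "cleaner" one.
    return x - np.tanh(x)

x0 = rng.normal(size=(32, 32))         # start from pure noise
x = x0.copy()
for step in range(100):
    eps_hat = predict_noise(x)         # predict the noise...
    x = x - 0.1 * eps_hat              # ...but remove only a small fraction
```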

nullc · 3 years ago
You can think of it like solving a differential equation numerically. The diffusion model encodes the relationships between values in sensible images (technically in the compressed representations of sensible images). You can try to jump directly to the solution but the result won't be very good compared to taking small steps.
cgearhart · 3 years ago
I’m pretty sure it’s a stability issue. With small steps the noise is correlated between steps; if you tried it in one big jump then you would essentially just memorize the input data. The maximum noise would act as a “key” and the model would memorize the corresponding image as the “value”. But if we do it as a bunch of little steps then the nearby steps are correlated and in the training set you’ll find lots of groups of noise that are similar which allows the model to generalize instead of memorizing.
jayalammar · 3 years ago
Two diffusion processes are involved:

1- Forward Diffusion (adding noise, and training the Unet to predict how much noise is added in each step)

2- Generating the image by denoising. This doesn't predict the final image, each step only predicts a small slice of noise (the removal of which leads to images similar to what the model encountered in step 1).

So it is indeed an iterative process in that way, each step moving a little closer to the final image.

minimaxir · 3 years ago
Hugging Face's diffusers library and explainer Colab notebook (https://colab.research.google.com/github/huggingface/noteboo...) are good resources on how diffusion works in practice codewise.
jayalammar · 3 years ago
Agreed. "Stable Diffusion with Diffusers" and "The Annotated Diffusion Model" were excellent and are linked in the article. The code in Diffusers was also a good reference.
BrainVirus · 3 years ago
https://jalammar.github.io/images/stable-diffusion/article-F...

Can you say with a straight face that this image (from the original paper) was intended to explain rather than obfuscate?

backpropaganda · 3 years ago
Yes, I can read it perfectly fine. It relies on a lot of notation that AI researchers are familiar with.

MacsHeadroom · 3 years ago
It's as clear as a wiring diagram to me. But I read a lot of ML literature.
herval · 3 years ago
If you're in AI research, that's a pretty standard diagram
torbTurret · 3 years ago
Love the visual explainers for machine learning nowadays.

The author has more here: https://jalammar.github.io/

Amazon has some highly interactive ones here: https://mlu-explain.github.io/

Google had: distill.pub

Hope to see education in this space grow more.

culi · 3 years ago
> Google had: distill.pub

Wait I thought they were just taking a break. Don't tell me it's being killed

swyx · 3 years ago
i've been collecting other explanations of how SD works here: https://github.com/sw-yx/prompt-eng#sd-model-values
wwarek · 3 years ago
Another very good (although less deep) explanation was published yesterday by Computerphile:

https://www.youtube.com/watch?v=1CIpzeNxIhU

cl3misch · 3 years ago
I also thought about this video while reading the HN post. The Computerphile video completely omits the latent space, right? But instead spends a lot of time on the iterative denoising. Even though I like Computerphile a lot, I don't think this was the best tradeoff.