Readit News
catgary commented on The Math Behind GANs (2020)   jaketae.github.io/study/g... · Posted by u/sebg
Garlef · a day ago
Maybe.

But my experience as a mathematician tells me another part of that story.

Certain fields are much more used to consuming (and producing) visual noise in their notation!

Some fields even have superfluous parts in their definitions and keep them around out of tradition.

It's just as with code: Not everyone values writing readable code highly. Some are fine with 200-line function bodies.

And refactoring mathematics is even harder: There's no single codebase and the old papers don't disappear.

catgary · a day ago
Maybe! I’ve found that people usually don’t do extra work if they don’t need to. The heavy notation in differential geometry, for example, can be awfully helpful when you’re actually trying to do Lagrangian mechanics on a Riemannian manifold. And superfluous bits of a definition might be kept around because going from the minimal definition to the one that is actually useful in practice can sometimes be non-trivial, so you’ll just keep the “superfluous” definition in your head.
catgary commented on The Math Behind GANs (2020)   jaketae.github.io/study/g... · Posted by u/sebg
oersted · a day ago
Paper authors (and this post's author, apparently) like to throw in lots of scary-looking maths to signal that they are smart and that what they are doing has merit. The Reinforcement Learning field is particularly notorious for doing this, but it's all over ML. Often it is not on purpose: everyone is taught that this is the proper "formal" way to express these things, and that any other representation is not precise or appropriate in a scientific context.

In practice, when it comes down to code, even without higher-level libraries, it is surprisingly simple, concise and intuitive.

Most of the math elements used have quite straightforward properties and utility, but of course if you combine them all together into big expressions with lots of single-character variables, it's really hard for anyone to understand. You kind of need to learn to squint your eyes and recognize the basic building blocks that the maths represent, but that wouldn't be necessary if it weren't obfuscated like this.
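To illustrate the point about the code being simpler than the notation, here is a minimal NumPy sketch of the standard GAN losses from the linked post. The helper names (`bce`, `discriminator_loss`, `generator_loss`) are mine, not from the post, and real training code would wrap actual networks in a framework:

```python
import numpy as np

def bce(pred, target):
    # binary cross-entropy: the core of both GAN losses
    eps = 1e-12  # avoid log(0)
    return -np.mean(target * np.log(pred + eps)
                    + (1 - target) * np.log(1 - pred + eps))

def discriminator_loss(d_real, d_fake):
    # maximize log D(x) + log(1 - D(G(z)))
    # == minimize BCE against labels 1 (real) and 0 (fake)
    return bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))

def generator_loss(d_fake):
    # non-saturating trick: minimize -log D(G(z))
    # == minimize BCE against label 1 on the fake batch
    return bce(d_fake, np.ones_like(d_fake))
```

The scary minimax expression in the paper collapses to two calls to the same cross-entropy with different labels.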

catgary · a day ago
I’m going to push back on this a bit. I think a simpler explanation (or at least one that doesn’t involve projecting one’s own insecurities onto the authors) is that the people who write these papers are generally comfortable enough with mathematics that they don’t believe anything has been obfuscated. ML is a mathematical science and many people in ML were trained as physicists or mathematicians (I’m one of them). People write things this way because it makes symbolic manipulations easier and you can keep the full expression in your head; what you’re proposing would actually make it significantly harder to verify results in papers.
catgary commented on Who Invented Backpropagation?   people.idsia.ch/~juergen/... · Posted by u/nothrowaways
mindcrime · 11 days ago
Who didn't? Depending on exactly how you interpret the notion of "inventing backpropagation," it's been invented, forgotten, re-invented, forgotten again, re-re-invented, etc., about 7 or 8 times. And no, I don't have specific citations in front of me, but I will say that a lot of interesting bits about the history of the development of neural networks (including backpropagation) can be found in the book Talking Nets: An Oral History of Neural Networks[1].

[1]: https://www.amazon.com/Talking-Nets-History-Neural-Networks/...

catgary · 11 days ago
I think the move towards GPU-based computing is probably more significant - the constraints imposed by GPU programming (no branching, try not to update tensors in place, etc.) sync up with the constraints imposed by differentiable programming.

Once people had a sufficiently compelling reason to write differentiable code, the frameworks around differentiable programming (theano, tensorflow, torch, JAX) picked up a lot of steam.
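Those shared constraints can be sketched concretely. The snippet below uses plain NumPy, whose `where` API `jax.numpy` mirrors; the helper names are mine, chosen for illustration, not from any of the frameworks mentioned:

```python
import numpy as np

def relu_branchy(x):
    # data-dependent branching: fine on a CPU scalar,
    # hostile to GPU vectorization and to autodiff tracing
    return x if x > 0 else 0.0

def relu_vectorized(x):
    # branch-free form: elementwise select, no control flow.
    # jax.numpy.where has exactly this shape, so the function
    # vectorizes on GPU and differentiates cleanly
    return np.where(x > 0, x, 0.0)

def set_at(x, i, v):
    # functional update instead of in-place mutation
    # (JAX arrays are immutable; its x.at[i].set(v) returns
    # a fresh array, sketched here with an explicit copy)
    y = x.copy()
    y[i] = v
    return y
```

The same style that keeps a kernel GPU-friendly (no branches, no mutation) is the style a differentiable-programming framework can trace and transform.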

catgary commented on ARM adds neural accelerators to GPUs   newsroom.arm.com/news/arm... · Posted by u/dagmx
cubefox · 14 days ago
There are now at least three ways to accelerate machine learning models on consumer hardware:

  - GPU compute units (used for LLMs)
  - GPU "neural accelerators"/"tensor cores" etc (used for video game anti-aliasing and increasing resolution or frame rate)
  - NPUs (not sure what they are actually used for)
And of course models can also be run, without acceleration, on the CPU.

catgary · 14 days ago
From what I can tell, NPUs are mostly being used by Microsoft to encourage vendor lock-in to the MicrosoftML/ONNX platform (similar to their DirectX playbook).
catgary commented on Why doctors hate their computers (2018)   newyorker.com/magazine/20... · Posted by u/mitchbob
ljchen · 25 days ago
My personal feeling is that medical practice has not evolved much through computing. Electrical engineering, mechanical engineering, biomedical engineering, etc. all contributed a lot to how doctors treat diseases. But whether medical records are digitized or not is not significant. It helps, but does not increase cure rates. Old-fashioned doctors have good reasons to reject it. But they do not say no to new medicines, new devices, new procedures.
catgary · 25 days ago
Ehh, I think there’s a pretty consistent pattern of doctors rejecting pretty basic technologies or procedures that lead to positive outcomes for patients when those technologies seek to address the fact that doctors are human beings who can make mistakes. Medicine is a field full of massive egos.

catgary commented on I wrote my PhD Thesis in Typst   fransskarman.com/phd_thes... · Posted by u/todsacerdoti
nextos · 2 months ago
> Integrating some actual programming features could be a game changer.

LuaTeX already lets you embed Lua code and it is really good.

However, I do agree some usability improvements are needed.

catgary · 2 months ago
Have you ever looked at a category theory or differential geometry paper? It just doesn’t feel like you appreciate what people are up against.

Look at this thing: https://images.app.goo.gl/4WHN9Pqupxkk8Z3j7

And that’s before you get into stuff like categories of spans, etc.

catgary commented on I wrote my PhD Thesis in Typst   fransskarman.com/phd_thes... · Posted by u/todsacerdoti
nextos · 2 months ago
LaTeX typesetting is a solved problem. Memoir or Classic Thesis, paired with microtype, provide outstanding results and you need to spend zero time on tweaking stuff.

Typst is interesting, but it doesn't yet support all microtypography features provided by microtype. IMHO, those make a big difference.

catgary · 2 months ago
I’m going to have to disagree with you there. The compile times are long, the error messages are worse than useless, and tikz diagrams are almost always unreadable messes.

Large swathes of mathematics, computer science, and physics involve notations and diagrams that are genuinely hard to typeset, and incredibly repetitive and hard to read if you don’t make heavy use of the macro system. Integrating some actual programming features could be a game changer.
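For instance (a hypothetical snippet, not from the thread): in a differential-geometry paper, the same inner-product and connection notation recurs on nearly every line, and macros are the only thing keeping the source readable:

```latex
% Typed by hand, every occurrence looks like this:
%   \left\langle \nabla_{X} Y, Z \right\rangle_{g}
% With macros, define each construct once:
\newcommand{\ip}[2]{\left\langle #1, #2 \right\rangle_{g}}
\newcommand{\conn}[2]{\nabla_{#1} #2}
% and each use stays short and consistent:
%   \ip{\conn{X}{Y}}{Z}
```

When the notation changes (say, dropping the metric subscript), you edit one definition instead of hundreds of occurrences - which is exactly the kind of "actual programming" the comment is asking for.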

catgary commented on Guess I'm a rationalist now   scottaaronson.blog/?p=890... · Posted by u/nsoonhui
tasty_freeze · 2 months ago
Is there any reason you are singling out the rationalist community? Is that not a common failure mode of all groups and all people?

BTW, this isn't a defensive posture on my part: I am not plugged in enough to even have an opinion on any rationalist community, much less identify as one.

catgary · 2 months ago
Oh. It’s literally the main stereotype about rationalists. It’s a very blog-heavy subculture.
catgary commented on Guess I'm a rationalist now   scottaaronson.blog/?p=890... · Posted by u/nsoonhui
noname120 · 2 months ago
No rationalist claims that it's “_definitely_ going to be the end of the world”. In fact they estimate the chance that AI becomes an existential risk by the end of the century at less than 30%.
catgary · 2 months ago
Adding numbers to your reasoning, when there is no obvious source for those probabilities (we aren’t calculating sports odds or doing climate science), is not really any different from writing a piece of fiction to make your point. It’s the same basic thing that objectivists did, and why I dismiss most “Bayesian reasoning” arguments out of hand.

u/catgary · Karma: 730 · Cake day: April 3, 2020