Readit News
voidhorse commented on The Math Behind GANs (2020)   jaketae.github.io/study/g... · Posted by u/sebg
catgary · 2 days ago
I’m going to push back on this a bit. I think a simpler explanation (or at least one that doesn’t involve projecting one’s own insecurities onto the authors) is that the people who write these papers are generally comfortable enough with mathematics that they don’t believe anything has been obfuscated. ML is a mathematical science and many people in ML were trained as physicists or mathematicians (I’m one of them). People write things this way because it makes symbolic manipulations easier and you can keep the full expression in your head; what you’re proposing would actually make it significantly harder to verify results in papers.
voidhorse · 2 days ago
Agreed. Also, fwiw, the mathematics involved in the paper is pretty simple as far as mathematical sophistication goes. Spend two to three months on one "higher level" maths course of your choosing and you'll be able to fully understand every equation in this paper relatively easily. Even a basic course in information theory coupled with some discrete maths should give you essentially all you need to comprehend the math in this post. The concepts being presented here are not mysterious, and much of this math is banal. Mathematical notation can seem forbidding, but once you grasp it, you'll see, as von Neumann said, that life is complicated but mathematics is simple.
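For example, the central equation of the original GAN paper (sketched here from memory, not quoted from the post) is just a minimax over expected log-losses:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z}\!\left[\log\bigl(1 - D(G(z))\bigr)\right]
```

For an optimal discriminator this reduces, up to constants, to the Jensen–Shannon divergence between $p_{\mathrm{data}}$ and the generator's distribution, which is exactly where a little information theory pays off.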
voidhorse commented on AI vs. Professional Authors Results   mark---lawrence.blogspot.... · Posted by u/biffles
thwarted · 12 days ago
> the discourse around LLM use in the creative space has just shown me that many people in technology simply have little to no comprehension of the arts and humanities and have no clue what it is that artists actually do, or even how to intelligently engage with artistic works

While shitting on "people in technology" is the pastime du jour, the technologists may be boosters, but non-technical, non-creative people also have "little to no comprehension of the arts and humanities and have no clue what it is that artists actually do, or even how to intelligently engage with artistic works". And that's because they are mainly consumers of the creative output.

voidhorse · 8 days ago
Your point being? The characteristics of that group have no bearing on my claims about the other group. Non-technical people are not the ones, by the way, actively pushing forward technologies that directly compete with small-time artists.

I'm a "technical person" myself. I'm not shitting on us; I'm just stating what I've perceived to be the case. Maybe, instead of getting upset and trying to absolve ourselves because some other group is "just as bad", the better response is to encourage us to learn more and understand people better before we actively try to disrupt their livelihoods.

voidhorse commented on Building AI products in the probabilistic era   giansegato.com/essays/pro... · Posted by u/sdan
AIorNot · 9 days ago
From the article:

“We have a class of products with deterministic cost and stochastic outputs: a built-in unresolved tension. Users insert the coin with certainty, but will be uncertain of whether they'll get back what they expect. This fundamental mismatch between deterministic mental models and probabilistic reality produces frustration — a gap the industry hasn't yet learned to bridge.”

And all the news today around AI being a bubble -

We’re still learning what we can do with these models and how to evaluate them but industry and capitalism forces our hand into building sellable products rapidly

voidhorse · 8 days ago
Precisely. There is nothing inherently wrong with LLMs and "agent" systems. There are certain classes of problems that they might be great for solving.

The problem is that the tech industry has devolved into a late-capitalist clown market supported on pure wild speculation and absurd "everything machine" marketing. This not only leads to active damage (see people falling into delusional spirals thanks to chat bots) but also inhibits us from figuring out what the actual good uses are and investing into the right applications.

Radical technology leads to behavior change. Smartphones led to behavior change, but you didn't have to beg people to buy them. LLMs are leading to behavior change only because they are being forcibly shoved into everyone's faces, and people are co-gaslighting each other into the collective hysteria that not participating means missing out on something big, though they can never articulate what that big thing actually is.

voidhorse commented on Building AI products in the probabilistic era   giansegato.com/essays/pro... · Posted by u/sdan
ACCount37 · 9 days ago
ML doesn't work like programming because it's not programming. It just happens to run on the same computational substrate.

Modern ML is at this hellish intersection of underexplored math, twisted neurobiology and applied demon summoning. An engineer works with known laws of nature - but the laws of machine learning are still being written. You have to be at least a little bit of a scientist to navigate this landscape.

Unfortunately, the nature of intelligence doesn't seem to yield itself to simple, straightforward, human-understandable systems. But machine intelligence is desirable. So we're building AIs anyway.

voidhorse · 8 days ago
You should read some of the papers written in the 1940s and learn about the history of cybernetics. Your glowing perception of the "demon summoning" nature of ML might change a bit.

People want others to think this tech is mysterious. It's not. We've known the theory of these systems since the mid-1900s; we just didn't fully work out the resource arrangements to make them tractable until recently. Yes, there are some unknowns, and the end product is a black box insofar as you cannot simply inspect source code, but this description of the situation is pure fantasy.

voidhorse commented on Building AI products in the probabilistic era   giansegato.com/essays/pro... · Posted by u/sdan
rexer · 9 days ago
I read the full article (really resonated with it, fwiw), and I'm struggling to understand the issues you're describing.

> Yes of course relationship questions don’t have a “correct” answer. But physics questions do. Code vulnerability questions do. Math questions do. I mean seriously?

Can you say more? It seems to me the article says the same thing you are.

> I don’t actually think the above paragraph makes any sense, does anyone disagree with me? “Instead of planning we observe and hypothesize”?

I think the author is drawing a connection to the world of science, specifically quantum mechanics, where the best way to make progress has been to describe and test theories (as opposed to math where we have proofs). Though it's not a great analog since LLMs are not probabilistic in the same way quantum mechanics is.

In any case, I appreciated the article because it talks through a shift from deterministic to probabilistic systems that I've been seeing in my work.

voidhorse · 8 days ago
Sure, but it's overblown. People have been reasoning about and building probabilistic systems formally since the birth of information theory back in the 1940s. Many systems we already rely on today are highly stochastic in their own ways.

Yes, LLMs are a bit of a new beast in terms of the use of stochastic processes as producers—but we do know how to deal with these systems. Half the "novelty" is just people either forgetting past work or being ignorant of it in the first place.
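To make that concrete with a textbook case (a hypothetical sketch, not from the thread): a Monte Carlo estimator is a stochastic producer whose error we can nonetheless bound with elementary statistics.

```python
import math
import random

def estimate_pi(n: int, seed: int = 42) -> tuple[float, float]:
    """Monte Carlo estimate of pi, plus a ~95% CLT confidence half-width."""
    rng = random.Random(seed)
    # Throw n darts at the unit square; count those inside the quarter circle.
    hits = sum(
        1 for _ in range(n)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    p = hits / n  # estimate of pi/4
    half_width = 1.96 * math.sqrt(p * (1 - p) / n)  # normal-approx CI on p
    return 4 * p, 4 * half_width

est, err = estimate_pi(100_000)
print(f"pi ~= {est:.3f} +/- {err:.3f}")
```

The output varies with the seed, but the error bar tells us how far it can plausibly stray, and that is exactly what "knowing how to deal with" a stochastic system looks like.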

voidhorse commented on Building AI products in the probabilistic era   giansegato.com/essays/pro... · Posted by u/sdan
therobots927 · 9 days ago
This is pure sophistry and the use of formal mathematical notation just adds insult to injury here:

“Think about it: we’ve built a special kind of function F' that for all we know can now accept anything — compose poetry, translate messages, even debug code! — and we expect it to always reply with something reasonable.”

This forms the axiom from which the rest of this article builds its case. At each step further fuzzy reasoning is used. Take this for example:

“Can we solve hallucination? Well, we could train perfect systems to always try to reply correctly, but some questions simply don't have "correct" answers. What even is the "correct" when the question is "should I leave him?".”

Yes of course relationship questions don’t have a “correct” answer. But physics questions do. Code vulnerability questions do. Math questions do. I mean seriously?

The most disturbing part of my tech career has been witnessing the ability of many highly intelligent and accomplished people to fool themselves with faulty yet complex reasoning. The fact that this article is written in defense of chatbots that ALSO have complex and flawed reasoning just drives home my point. We’re throwing away determinism just like that? I’m not saying future computing won’t be probabilistic, but the claim that LLMs are the future of computing because they are probabilistic can only be made by someone with an incredibly strong prior on LLMs.

I’d recommend Baudrillard’s work on hyperreality. This AI conversation could not be a better example of the loss of meaning. I hope this dark age doesn’t last as long as the last one. I mean, just read this conclusion:

“It's ontologically different. We're moving away from deterministic mechanicism, a world of perfect information and perfect knowledge, and walking into one made of emergent unknown behaviors, where instead of planning and engineering we observe and hypothesize.”

I don’t actually think the above paragraph makes any sense, does anyone disagree with me? “Instead of planning we observe and hypothesize”?

That’s called the scientific method. Which is a PRECURSOR to planning and engineering. That’s how we built the technology we have today. I’ll stop now because I need to keep my blood pressure low.

voidhorse · 8 days ago
It's already wrong at the first step. A probabilistic system is, by definition, not a function but a relation: the same input can map to many outputs. This is such a basic mistake I don't know how anyone can take it seriously. Many existing systems are also not strictly functions (internal state can make them return different outputs for a given input). People love to abuse mathematics and employ its concepts hastily and irresponsibly.
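A minimal sketch of the distinction (hypothetical toy code, not anything from the article): the same input yields different outputs, so neither object below is a function in the mathematical sense.

```python
import random

def sampler(prompt: str) -> str:
    """A stochastic 'function': one input, many possible outputs (a relation)."""
    return f"{prompt}-{random.randint(0, 9)}"

class StatefulMap:
    """Internal state also breaks function-ness: identical calls can differ."""
    def __init__(self) -> None:
        self.calls = 0

    def __call__(self, x: int) -> int:
        self.calls += 1
        return x + self.calls

random.seed(0)
outputs = {sampler("hi") for _ in range(50)}
print(len(outputs) > 1)  # True: one input maps to several outputs

f = StatefulMap()
print(f(5), f(5))  # 6 7: same argument, different results
```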
voidhorse commented on AI vs. Professional Authors Results   mark---lawrence.blogspot.... · Posted by u/biffles
hyperadvanced · 12 days ago
I also think that just having “an idea” isn’t exactly the same as increasing net creativity. Oftentimes with art, the impact of something truly creative results in changing the parameters of the medium or genre itself, not merely sticking to the script and producing a new work in the style of X. If you take, for example, J Dilla and his impact on hip-hop, the fact that there’s an entire subgenre or two focused on some of his hallmark innovations (micro-rhythm/wonky beats, neo-soul, lofi sampling/creative use of samples) speaks to that kind of “real” creativity. I frankly think that kind of genre-bending is possible with the use of LLMs, but if you just say, “here’s my story idea, make it so”, without any eye towards the actual technique or craft, you won’t be getting the next Blood Meridian out of it.
voidhorse · 12 days ago
I completely agree. If nothing else, the discourse around LLM use in the creative space has just shown me that many people in technology simply have little to no comprehension of the arts and humanities and have no clue what it is that artists actually do, or even how to intelligently engage with artistic works. The 20th-century transformation of art into time-filling idle entertainment by way of mass media has been a great success. The internet helped undo some of that, but not by much. I guess that shouldn't necessarily be surprising, but I am always somewhat astonished that our society has produced a large number of specialists who are profoundly good at what they do but who clearly lack exposure to other spheres of life.
voidhorse commented on AI vs. Professional Authors Results   mark---lawrence.blogspot.... · Posted by u/biffles
voidhorse · 12 days ago
Interesting study. I think the use of AI boils down to this: is the product independent of the process and the context? Or is it dependent on it in some way? I think, when it comes to art, the latter is truer than the former, and most use of AI in creative fields is predicated on trying to convince people to engage with art in an extremely shallow way (art strictly as soulless, time filling entertainment).

When an author writes a novel, the novel does not exist in a vacuum. The author's persona and the cultural exchange that emerges around the text also become an important part of the phenomenal status of the work and its cultural recognition. Even when an author remains pseudonymous and makes no appearances, this too is part of the work.

If an author uses AI as a tool and takes care to imbue the output with artistic and personal relevance, it probably will become an art object, though its status may or may not be modulated by the use of AI in the process, to the extent that that use does or doesn't affect people's interpretations of the work or the author's own engagement with it. Contrarily, AI-generated work that involves close to no actual processual involvement on the part of the author will almost always have slop status, simply because it's hard to imagine an interpretive culture around such work that doesn't at some point break down in the face of the inability to connect the work with other cultural touchstones or with the actual experience of a human being. Maybe it could happen, but if it did, the status of the work would still be something different, insofar as it would be a marker not of human experience, as literature traditionally has been, but of something quite new and different: a literature-cum-hypermarket product (we already had the mass-market one).

voidhorse commented on Rust in 2025: Targeting foundational software   smallcultfollowing.com/ba... · Posted by u/wseqyrku
asa400 · 14 days ago
People build stuff informally all the time, and that stuff very often works fine. Rich Hickey digs into this in some detail. It's not that types or formal methods or rigorous specs or automated tests (or whatever else - pick your formalism) have no value, but they always have a cost as well, and there are many, many existence proofs of systems being built without them. To me, that seems pretty compelling that formalisms in computing should be "a la carte".

I think my biggest objection to mandating formalism (in the abstract - I do find value in it in some situations) in computing is how little we know about computing compared to what we know about aviation or bridge building. Those are mature industries; computing is unsettled and changing rapidly. Even the formalisms we do have in computing are comparatively weak.

voidhorse · 13 days ago
> I think my biggest objection to mandating formalism (in the abstract - I do find value in it in some situations) in computing is how little we know about computing compared to what we know about aviation or bridge building.

I don't think that we should mandate formalism. I'm just trying to say that diminishing the value of formalism is bad for the industry as a whole.

And point taken about maturity, but in that sense, if we don't encourage people to actually engage in defining specifications and formalizing their software, we won't ever reach the same maturity as actual engineering in the first place. We need to encourage people to explore these aspects of the discipline so that we can actually set software on better foundations, not discourage them by going around questioning the inherent value of formalism.

Rust is a particularly good example because, as other commenters have pointed out, if we believe it's a waste of time to formalize the language we purportedly want everyone to use to build foundational software, what exactly would we formalize then? If you aren't going to formalize that because it "isn't worth it", well, arguably, nothing is worth formalizing then if the core dependency itself isn't even rigorously defined.

People also forget the other benefits of having a formal spec for a language. Yes it enables alternative implementations, but it also enables people to build tons of other things in a verifiably correct way, such as static analysis checks and code generators.

voidhorse commented on OpenAI Progress   progress.openai.com... · Posted by u/vinhnx
wewewedxfgdf · 14 days ago
I just don't care about AGI.

I care a lot about AI coding.

OpenAI in particular seems to really think AGI matters. I don't think AGI is even possible because we can't define intelligence in the first place, but what do I know?

voidhorse · 14 days ago
They care about AGI because unfounded speculation about some undefined future breakthrough, of unknown kind but presumably positive, is the only thing currently buoying their company, and their existence is more a function of the absurdities of modern capital than of any inherent usefulness of the costly technology they provide.

u/voidhorse

Karma: 3194 · Cake day: March 3, 2015