While shitting on "people in technology" is the pastime du jour, and technologists may well be boosters, non-technical, non-creative people also have "little to no comprehension of the arts and humanities and have no clue what it is that artists actually do, or even how to intelligently engage with artistic works". And that's because they are mainly consumers of creative output.
I'm a "technical person" myself. I'm not shitting on us, I'm just stating what I've perceived to be the case. Maybe instead of getting upset and trying to absolve ourselves because some other group is "just as bad", a better response would be to encourage us to learn more and understand people better before we actively try to disrupt their livelihoods.
“We have a class of products with deterministic cost and stochastic outputs: a built-in unresolved tension. Users insert the coin with certainty, but will be uncertain of whether they'll get back what they expect. This fundamental mismatch between deterministic mental models and probabilistic reality produces frustration — a gap the industry hasn't yet learned to bridge.”
And as for all the news today around AI being a bubble: we’re still learning what we can do with these models and how to evaluate them, but industry and capitalism force our hand into building sellable products rapidly.
The problem is that the tech industry has devolved into a late-capitalist clown market supported on pure wild speculation and absurd "everything machine" marketing. This not only leads to active damage (see people falling into delusional spirals thanks to chat bots) but also inhibits us from figuring out what the actual good uses are and investing into the right applications.
Radical technology leads to behavior change. Smartphones led to behavior change, but you didn't have to beg people to buy them. LLMs are leading to behavior change, but only because they are being forcibly shoved into everyone's faces and people are co-gaslighting each other into the collective hysteria that not participating means missing out on something big, though they can never articulate what that big thing actually is.
Modern ML is at this hellish intersection of underexplored math, twisted neurobiology and applied demon summoning. An engineer works with known laws of nature - but the laws of machine learning are still being written. You have to be at least a little bit of a scientist to navigate this landscape.
Unfortunately, the nature of intelligence doesn't seem to lend itself to simple, straightforward, human-understandable systems. But machine intelligence is desirable. So we're building AIs anyway.
People want others to think this tech is mysterious. It's not. We've known the theory of these systems since the mid 1900s, we just didn't fully work out the resource arrangements to make them tractable until recently. Yes, there are some unknowns and the end product is a black box insofar as you cannot simply inspect source code, but this description of the situation is pure fantasy.
> Yes of course relationship questions don’t have a “correct” answer. But physics questions do. Code vulnerability questions do. Math questions do. I mean seriously?
Can you say more? It seems to me the article says the same thing you are.
> I don’t actually think the above paragraph makes any sense, does anyone disagree with me? “Instead of planning we observe and hypothesize”?
I think the author is drawing a connection to the world of science, specifically quantum mechanics, where the best way to make progress has been to describe and test theories (as opposed to math where we have proofs). Though it's not a great analog since LLMs are not probabilistic in the same way quantum mechanics is.
In any case, I appreciated the article because it talks through a shift from deterministic to probabilistic systems that I've been seeing in my work.
Yes, LLMs are a bit of a new beast in terms of the use of stochastic processes as producers—but we do know how to deal with these systems. Half the "novelty" is just people either forgetting past work or being ignorant of it in the first place.
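For instance, one standard way of dealing with a stochastic producer is rejection sampling: draw an output, accept it if it passes a deterministic check, redraw otherwise. A minimal sketch, with all names hypothetical (`generate` stands in for whatever stochastic process you're wrapping, e.g. an LLM call, and `validate` for a domain-specific acceptance test such as a schema check or parser):

```python
def sample_until_valid(generate, validate, prompt, max_attempts=5):
    """Rejection sampling over a stochastic producer: draw an output,
    accept it if it passes a deterministic check, otherwise redraw.
    `generate` and `validate` are hypothetical stand-ins for an LLM
    call and an acceptance test (schema check, parser, unit test...)."""
    for _ in range(max_attempts):
        out = generate(prompt)
        if validate(out):
            return out
    raise RuntimeError("no valid output within the attempt budget")

# Usage with a deterministic stub standing in for the stochastic model:
outputs = iter(["not json", '{"answer": 42}'])
result = sample_until_valid(lambda p: next(outputs),
                            lambda s: s.startswith("{"),
                            "some prompt")
print(result)  # '{"answer": 42}'
```

The point isn't the five lines of code; it's that the deterministic shell (the validator) is where the engineering lives, and that pattern long predates LLMs.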
“Think about it: we’ve built a special kind of function F' that for all we know can now accept anything — compose poetry, translate messages, even debug code! — and we expect it to always reply with something reasonable.”
This forms the axiom from which the rest of the article builds its case. At each step, further fuzzy reasoning is layered on. Take this, for example:
“Can we solve hallucination? Well, we could train perfect systems to always try to reply correctly, but some questions simply don't have "correct" answers. What even is the "correct" when the question is "should I leave him?".”
Yes of course relationship questions don’t have a “correct” answer. But physics questions do. Code vulnerability questions do. Math questions do. I mean seriously?
The most disturbing part of my tech career has been witnessing how many highly intelligent and accomplished people manage to fool themselves with faulty yet complex reasoning. The fact that this article is written in defense of chatbots that ALSO have complex and flawed reasoning just drives home my point. We’re throwing away determinism just like that? I’m not saying future computing won’t be probabilistic, but “LLMs are probabilistic, so they are the future of computing” can only be said by someone with an incredibly strong prior on LLMs.
I’d recommend Baudrillard’s work on hyperreality. This AI conversation could not be a better example of the loss of meaning. I hope this dark age doesn’t last as long as the last one. I mean, just read this conclusion:
“It's ontologically different. We're moving away from deterministic mechanicism, a world of perfect information and perfect knowledge, and walking into one made of emergent unknown behaviors, where instead of planning and engineering we observe and hypothesize.”
I don’t actually think the above paragraph makes any sense, does anyone disagree with me? “Instead of planning we observe and hypothesize”?
That’s called the scientific method. Which is a PRECURSOR to planning and engineering. That’s how we built the technology we have today. I’ll stop now because I need to keep my blood pressure low.
When an author writes a novel, the novel does not exist in a vacuum. The author's persona and the cultural exchange that emerges around the text also become an important part of the phenomenal status of the work and its cultural recognition. Even when an author remains pseudonymous and makes no appearance, this too is part of the work.
If an author uses AI as a tool and takes care to imbue the output with artistic and personal relevance, it probably will become an art object, but its status may or may not be modulated by the use of AI in the process, to the extent that the use does or doesn't affect how people craft interpretations of the work, or the author's own engagement with it. Contrarily, AI-generated work that has close to no actual processual involvement on the part of the author will almost always have slop status, simply because it's hard to imagine an interpretive culture around such work that doesn't at some point break down in the face of the inability to connect the work with other cultural touchstones or the actual experience of a human being. Maybe it could happen, but if it did, the status of the work would still be something different, insofar as it would be a marker not of human experience, as literature traditionally has been, but of something quite new and different: a literature-cum-hypermarket product (we already had mass market).
I think my biggest objection to mandating formalism (in the abstract - I do find value in it in some situations) in computing is how little we know about computing compared to what we know about aviation or bridge building. Those are mature industries; computing is unsettled and changing rapidly. Even the formalisms we do have in computing are comparatively weak.
I don't think that we should mandate formalism. I'm just trying to say that diminishing the value of formalism is bad for the industry as a whole.
And point taken about maturity, but in that sense if we don't encourage people to actually engage in defining specification and formalizing their software we won't ever get to the same point of maturity as actual engineering in the first place. We need to encourage people to explore these aspects of the discipline so that we can actually set software on better foundations, not discourage them by going around questioning the inherent value of formalism.
Rust is a particularly good example because, as other commenters have pointed out, if we believe it's a waste of time to formalize the language we purportedly want everyone to use to build foundational software, what exactly would we formalize? If you aren't going to formalize that because it "isn't worth it", then arguably nothing is worth formalizing, since the core dependency itself isn't even rigorously defined.
People also forget the other benefits of having a formal spec for a language. Yes it enables alternative implementations, but it also enables people to build tons of other things in a verifiably correct way, such as static analysis checks and code generators.
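As a toy illustration of that last point (everything here is hypothetical, not any real language's spec): once a behavior is pinned down formally, a checker and a test generator can both be derived from the same definition, so they can never drift apart:

```python
# Toy "formal spec": wrapping arithmetic for a hypothetical language
# whose integers are 8-bit two's complement. Both the static-analysis
# check and the test-case generator consume the same spec function.

SPEC_INT_BITS = 8  # hypothetical spec: ints are 8-bit two's complement

def spec_wrap(n: int) -> int:
    """The spec's definition of wrapping addition results."""
    half = 2 ** (SPEC_INT_BITS - 1)
    return (n + half) % (2 * half) - half

def check_add(a: int, b: int, claimed: int) -> bool:
    """Static-analysis-style check: does an implementation's claimed
    result of a + b match what the spec requires?"""
    return claimed == spec_wrap(a + b)

def gen_add_test_cases():
    """Test generator driven by the same spec: edge-value pairs with
    their spec-mandated results."""
    edges = [-128, -1, 0, 1, 127]
    return [(a, b, spec_wrap(a + b)) for a in edges for b in edges]
```

Because both tools defer to `spec_wrap`, a bug can only live in the spec itself, not in a divergence between the tools, which is exactly the property a formal spec buys you at scale.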
I care a lot about AI coding.
OpenAI in particular seems to really think AGI matters. I don't think AGI is even possible because we can't define intelligence in the first place, but what do I know?