What the author presents as "sincerity" comes off as injecting his own biased views into reporting. The post devolves into a tedious series of anecdotes that ostensibly prove that "context" can reframe a story. He argues that sincere reporting should take that context into account, which is reasonable in principle, but he doesn't seem to realize that he's only presenting context that suits his worldview and tossing out the rest. For example, he faults journalists for failing to account for data that proved him right in retrospect, and in the same paragraph he smears reporters for both under-weighing and over-weighing soft data. That's easy to do in hindsight. My takeaway is that he undermines his own premise by demonstrating everything that can go wrong in opinionated reporting: cherry-picking, double standards, and confirmation bias.
P.S.: the most surprising thing to me about this blog post is that it went through an editor.
> comes off as injecting (his) biased views into reporting
The trouble is that not adding context is also a choice, one which also reveals an author's belief on the topic, except with sufficient plausible deniability. This is why the article describes it as cowardly. It isn't sufficient to defer to people in positions of power. You may appear neutral to those who don't bother to think about it, but in truth you're just adopting the position of the person whose anecdata you've unthinkingly regurgitated. The job of a journalist is to think rigorously, do research, and challenge the status quo.
There is no "unbiased" media, just sincere and insincere media; good-faith arguments and bad-faith arguments.
We all perceive the world some way, and it isn't always how other people perceive it. What one calls boss-coddling, another might call common sense business. As long as you do your homework, "stand on your shit", and don't just remasticate the pablum handed to you from on high, we'll be fine. Sadly, as pointed out in the article, we're sorta drowning in soggy pablum these days.
Dispensing with editorializing is a choice, yes, but it only translates to bias if done inconsistently. Meanwhile, while contextualizing (and, to a greater extent, reframing) can also be done in a fair and objective manner, doing it well and consistently is much more difficult.
I don't think that Zitron cares about objectivity nearly as much as he cares about his worldview being validated by reporters, thus the idea that failing to inject context [which promotes that worldview] is inherently insincere. Since journalism is a fairly ideologically homogeneous profession, I can understand how that might appeal to him, but I doubt he'd make that argument from the other side of the fence.
> My CoreWeave analysis may seem silly to some because its value has quadrupled — and that’s why I didn’t write that I believed the stock would crater, or really anything about the stock.
I think the underlying belief that causes people to call things like this "silly," or to claim AI criticism is overstated, is that the market does not really make mistakes, at least not in the aggregate. So, if XYZ company's CEO says "Our product is doing ABC 300000% better and will take over the world!" and its value/revenue is also going up at the same time, that is seen as a sign that the market has validated this view, and the market is infallible (to a point). Of course, this ignores that the market has historically and often been completely wrong, and that this type of reasoning is entirely circular - pay no attention to the man (marketing team) behind the curtain, and don't think about it too hard.
My tech friends and I cannot wait for this agentic bubble to pop. Much like the dotcom bubble, there's absolutely value in AI but the hype is absurd and is actively hurting investments into reasonable things (like just good UX).
The hype and zealotry remind me of a cult. The higher I go up the chain at my big tech company, the more culty people are in their beliefs, the less they believe AI can do their specific jobs, and the less they have actually tried to use AI beyond badly summarizing documents they barely read in the first place.
AI, as far as I can tell, has been a net negative for humans. It has made labor cheaper and answers less reliable, reduced the value we place on creativity and professionals in general, enabled mass disinformation, and mostly resulted in people being lazier and not learning the basics of anything. There are of course spots of brightness, but the hype bubble needs to burst so we can move on.
The belief that's kind of settling in for me after a few years of observation is that I absolutely buy the "hype" claim that AI is a force multiplier. However, lots of things out there are terrible and shouldn't be force-multiplied (spam, phishing, scams, etc.), and the same goes for, say, people who are very bad at their jobs. If those people's output is multiplied, it clearly can and will be very bad. I have seen this play out at a small scale already on some teams I've worked with.
For the maybe ~1-5% of people out there who have something valuable to contribute (that's my number, and I stand by it), I think it can be good, but those types also seem to be the most wary of it.
What depresses me is that all these people leading us with these stupid decisions re: AI will get bonuses and promotions after the bubble pops. All the useless effort getting AI into everything will be forgotten; no one will care about or remember the idiotic decisions, and we will all be chasing the new new thing.
Sincerity will not win in the end. VC money and the quest for insurmountable, tech-driven cash flows is what drives everything. The age of software being driven by sincere engineers trying to build something is dead outside niche projects.
You mean, besides how this one is targeted at journalists and that one is targeted at the tech industry?
The difference, besides everything else, is expectations: he expects the tech industry to overhype things because they're salespeople, and he expects journalists to call them out when they do.
Can you say more? I'm asking seriously; I'd like to have a better understanding of "how to read" Zitron, because these pieces are long and emotive. Are they basically just responses to whatever the latest news is, the way David Gerard writes about blockchain stuff? Because I did see the utility in that kind of writing.
I think there's some overlap in the thrust of both pieces but they're pretty distinct topic-wise?
The first piece strikes me as a polemic against "enshittification" - the idea that the industry has done good things, but now subsists on a combination of hot air and making existing good things worse. It further makes the specific point that LLMs belong in the "hot air" category and share little with other innovations of the past.
It does touch on what he perceives as overly-friendly press coverage of the above, but I didn't read the piece as focusing on that point.
FWIW, I find Zitron to be an... unreliable commentator on this subject, to put it mildly, but I am not entirely unsympathetic to the point.
The second piece is more specifically focused on the overly-friendly press coverage, and the idea that journalists are either overly credulous, especially to fantastical claims about the tech, or openly corrupted by the parties they are meant to cover.
I feel like I pick up all that stuff but I don't really understand who the audience is for it. It would make more sense to me if these companies were public and taking investment (I mean, Google and Meta are, but they're not "AI plays", and this most recent piece focuses on Anthropic and OpenAI). Then the point of the piece would be, like David Gerard's blockchain pieces, "don't invest in this".
We need more reality-grounded takes like this one. I do have a quibble:
> These LLMs also have “agents” - but for the sake of argument, I’d like to call them “bots.” Bots, because the term “agent” is bullshit and used to make things sound like they can do more than they can[…]
I'd argue "agents" is actually reasonable technical jargon for this purpose, with a history. Tog on Interface (circa 1990) uses the term for a smart software feature in an app from that time period.
Irrational Exuberance. Speculative bubbles are scarily common.
https://www.wheresyoured.at/never-forget-what-theyve-done/