> "You know, YouTube is constantly working on new tools and experimenting with stuff," Beato says. "They're a best-in-class company, I've got nothing but good things to say. YouTube changed my life."
My despondent brain auto-translated that to: "My livelihood depends on YouTube."
As a consumer, they are the most hostile platform for watching a video the way I want, not the way they want me to. I'm also required to use an ad blocker to disable all Shorts.
As a creator, they are also the most hostile platform, randomly removing videos with no point of contact for help, or fully removing channels (with livelihoods behind them) because of "a system glitch", but again, with no point of contact to get it fixed.
I don't mind the ads as much as all the mandatory meta-baiting. Not the meta-baiting itself, but the mechanisms behind it.
Even if you produce interesting videos, you still must meta-bait to get the likes, to stay relevant to the algorithm, and to capture a bigger share of the limited resource that is human attention.
The creators are fighting each other for land, our eyeballs are the crops, meanwhile the landlord takes most of the profits.
And the other day he posted about the abusive copyright claims he has to deal with, which cost him a lot of money and could even get his channels closed.
Although xe lays the blame for those at the feet of Universal Music Group, not YouTube. Apparently, UMG simply refuses to learn from the experience of having thousands of copyright claims rejected on fair use grounds.
It's almost as if there's a mindless robot submitting the claims to YouTube. Perish the thought! (-:
Wow, that xkcd really scares me. I Have No Mouth, and I Must Scream.
It's definitely something that could realistically happen in the near future, maybe even mandated by the EU.
Beato is a musician and a producer. He just finds making YouTube videos an easier way to earn a living. He's said many times how frustrating it is as a producer to work with musicians.
I push back on the idea there is anything despondent there. If YouTube was enabling my lifestyle I'd be pretty happy about the situation and certainly not about to start piling public pressure on them. These companies get enough hate from roving bands of angry internet denizens.
Touching up videos is bad but it is hardly material to break out the pitchforks compared to some of the political manoeuvres YouTube has been involved in.
A chill ran down my spine as I imagined this being applied to the written word online: my articles being automatically "corrected" or "improved" the moment I hit publish, and any book manuscripts sent to editors being similarly "polished", to the point that we humans start to lose our unique tone. Everything we read falls into that strange uncanny valley where everything reads OK, and you can't quite put your finger on it, but it feels like something is wearing the skin of what you wrote as a face.
The well is already poisoned. I'm refraining from hiring editors merely because I suspect there's a high chance they'll just use an LLM. I read all recent books with the suspicion that they've been written by AI.
However, "polished to a point that we humans start to lose our unique tone" is what style guides that go into the minutiae of comma placement try to do. And I'm currently reading a book I'm 100% sure has been edited by an expert human editor who did quite the job of taking away all the uniqueness of the work. So we can't just blame the LLMs for making things more gray when we have historically paid other people to do it.
"By AI" or "with AI"? If I write the book and have AI proofread things as I go, or critique my ideas, or point out which points need more support, is that written "by AI"?
When Big Corp says 30% of their code is now written "by AI", did the AI write the code by following thoughtful instruction from a human expert, who interpreted the work to be done, made decisions about the architectural impact, outlined those things, and gave detailed instructions that the LLM could execute in small chunks?
This distinction, I feel, is going to become more important. AI tools are useful, and most people are using them for writing code, literature, papers, etc. I feel like, in some cases, it is not fair to say the thing was written by AI, even when technically it was.
I was listening to an interview (I'm having a hard time remembering the name now). The guest was asked how he decides what to read, and he replied that one easy way for him to filter is that he only considers books published before the '70s. At the time, it sounded strange to me. It doesn't anymore; maybe he has a point.
There's a YouTuber named Fil Henley (https://www.youtube.com/@WingsOfPegasus) who has been covering this for some years now. Xe regularly comments on how the universal application of pitch correction in post as an "industry standard" has dragged the great singers of yore down to the same level of mediocrity as everyone else.
Xe also occasionally reminds people that, equal temperament being what it is, this pitch correction is in a few cases actually making singers less in tune than they originally were.
It certainly removes unique tone. Yesterday's video was a pitch-corrected version of a 1972 performance by John Lennon, and it definitely changed Lennon's sound.
> is what style guides that go into the minutiae of comma placement try to do
Eh. There might be a tacit presumption here that correctness isn't real, or that style cannot be better or worse. I would reject this notion. After all, what if something is uniquely crap?
The basic, most general purpose of writing is to communicate. Various kinds of writing have varying particular purposes. The style must be appropriate to the end in question so that it can serve the purpose of the text with respect to the particular audience.
Now, we may have disagreements about what constitutes good style for a particular purpose and for a particular audience. This will be a source of variation. And naturally, there can be stylistic differences between two pieces of writing that do not impact the clarity and success with which a piece of writing does its job.
People will have varying tastes when it comes to style, and part of that will be determined by what they're used to, what they expect, a desire for novelty, a desire for clarity and adequacy, affirmation of their own intuitions, and so on. We shouldn't sweep the causes of varying tastes under the rug of obfuscation, however.
In the case of AI-generated text, the uncanny, je ne sais quoi character that makes it irritating to read seems to be that it has the quality of something produced by a zombie. The grammatical structure is obviously there, but at a pragmatic level, it lacks a certain cohesion, progression, and relevance, and reads like something someone on amphetamines or The View might say. It's all surface.
This is why shadow banning rubbed people the wrong way. I can't prove it, but I gave up on online dating a long time ago because I found a couple of automated systems would just silently drop messages without telling you (in the middle of an already active conversation).
"A spark of excitement ran through me imagining this applied to writing online: my articles receiving instant, supportive refinements the moment I hit publish, and manuscripts arriving to editors already thoughtfully polished—elevating clarity while letting our distinctive voices shine even brighter. The result is a consistently smooth, natural reading experience that feels confidently authentic, faithfully reflecting what I wrote while enhancing it with care."
I get what you’re going for with this comment, but it seamlessly anthropomorphizes what’s happening in a way that has the opposite impact I think.
There is no thoughtfulness or care involved. Only algorithmic conformance to some non-human synthesis of the given style.
The issue is not just about the words that come out the other end. The issue is the loss of the transmission of human thoughts, emotions, preferences, style.
The end result is still just as suspect, and to whatever degree it appears “good”, even more soulless given the underlying reality.
> manuscripts arriving to editors already thoughtfully polished
Except those editors will still make changes. That's their job. If they start passing manuscripts through without changes, they'd be nullifying their jobs.
My guess is that guys being replaced by the steam shovel said the same thing about the quality of holes being dug into the ground. "No machine is ever going to be able to dig a hole as lovingly or as accurately as a man with a shovel". "The digging machines consume way too much energy" etc.
I'm pretty sure all the hand wringing about A.I. is going to fade into the past in the same way as every other strand of technophobia has before.
I'm sure you can find people making arguments about a lack of quality from machines in textiles, woodworking, cinematography, etc., but digging holes? If you have a source of someone complaining about hole quality, I'll be fascinated, but I'm thinking more about a disconnection here:
It looks like you see writing and editing as a menial task that we do just for its extrinsic value, whereas the people who complain about quality see it as art we make for its intrinsic value.
Where I think a lot of this "technophobia" actually comes from though are people who do/did this for a living and are not happy about their profession being obsolesced, and so try to justify their continued employment. And no, "there were new jobs after the cotton gin" will not comfort them, because that doesn't tell them what their next profession will be and presumes that the early industrial revolution was all peachy (it wasn't).
DDT has been banned, nuclear reactors have been banned in Germany, many people want to ban internal combustion engines, supersonic flight has been banned.
Moreover, most people have more attachment to their own thoughts or to reading the unaltered, genuine thoughts of other humans than to a hole in the ground. The comment you respond to literally talks about the Orwellian aspects of altering someone's works.
There is no way you aren't able to discern the obvious differences between physical labor such as digging a hole and something as innate to human nature as creativity. You realize just how hollow a set of matrix multiplications is when you try to "talk to it" for more than 3 minutes. The whole point of language is to talk to other people and to communicate ideas to them. That is something that requires a human factor; otherwise the ideas are simply regurgitations of whatever the training set happened to contain. There are no original ideas in there. A steam shovel, on the other hand, does not need to be creative or to have a human factor; it's simply digging a hole in the ground.
Excavation is an inherently dangerous and physically strenuous job. Additionally, when precision or delicateness is required human diggers are still used.
If AI was being used to automate dangerous and physically strenuous jobs, I wouldn't mind.
Instead it is being used to make everything it touches worse.
Imagine an AI-powered excavator that fucked up every trench that it dug and techbros insisted you were wrong for criticizing the fucked up trench.
When I see an argument like this I'm inclined to assume the author is motivated by jealousy or some strange kind of nihilism. Reminds me of the comment the other day expressing perplexity over why anyone would learn a new language instead of relying on machine translation.
You realize that making an analogy doesn't make your argument correct, right? And comparing digging through the ground to human thought and creativity is an odd mix of self debasement and arrogance. I'm guessing there is an unspoken financial incentive guiding your point of view.
Unfortunately the article doesn't have an example or a comparison image. Other reports are similarly useless. The most that seemed to happen is that the wrinkles in someone's ear changed. In case anyone else wants to see it in action:
I skimmed the videos as well, and there is much more talk about this thing, and barely any examples of it. As this is an experiment, I guess that all this noise serves as a feedback to YouTube.
If you click through to Rhett Shull's video you can see examples comparing the original video (from non-Shorts videos) with the sharpened video (from Shorts).
Basically YouTube is applying a sharpening filter to "Shorts" videos.
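For what it's worth, the "sharpening filter" people are describing is usually some variant of an unsharp mask: blur the image, then add back the difference between the original and the blur. A minimal sketch (the 3x3 box blur and the `amount` parameter are illustrative choices on my part, not anything YouTube has disclosed):

```python
import numpy as np

def unsharp_mask(img: np.ndarray, amount: float = 1.0) -> np.ndarray:
    """Sharpen by adding back the difference between the image and a blurred copy."""
    # 3x3 box blur via padded neighbour averaging (a crude stand-in for a Gaussian)
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    blurred = sum(
        padded[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)
    ) / 9.0
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)

# A soft step edge: contrast across the edge grows after sharpening
img = np.full((8, 8), 0.25)
img[:, 4:] = 0.75
out = unsharp_mask(img, amount=1.5)
```

Push `amount` high enough and you get exactly the haloed, "oversharpened" look people are complaining about.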
This makes sense. Saying YT is applying AI to every single video uploaded would be a huge WTF kind of situation. Saying that YT has created a workflow utilizing AI to create a new video from the creator's original video to fit a specific type of video format that they want to promote even when most creators are NOT creating that format makes much more sense. Pretty much every short I've seen was a portrait crop from something that was obviously originally landscape orientation.
Do these videos that YT creates to backfill their lack of Shorts get credited back to the original creator as far as monetization from ads?
This really has the feel of delivery apps making websites for restaurants that didn't previously have one, without the restaurant knowing anything about it, while setting higher prices on the menu items and keeping the extra money instead of paying it to the restaurants.
I saw the sharpening, and listened to the claims of shirt wrinkles being weird and so on, but I didn't deem these to be on the level of the original claim, which is that "AI enhancements" are made to the video, as in, new details and features are invented in the video. In the ear example, the shape of the ear changed, which is significant because I'd never want that in any of my photos or videos. The rest of the effects were more "overdone" than "inventive".
Although, I probably wouldn't want any automatic filtering applied to my video either, AI modifications or not.
Flickr used to apply an auto-enhancement (sharpening, saturation, etc.) effect to photos[0]. It would be really weird seeing a photo locally and then seeing the copy on Flickr that looked better somehow.
Aside:
The mention of Technorati tags (and even Flickr) in the linked blog post hit me right in the Web 2.0 nostalgia feels.
This is what I've been noticing this past week! There have been a handful of videos that looked quite uncanny but were from creators I knew, and a few from unknown sources I completely skipped over because they looked suspect.
Have to say, I am not a fan of the AI sharpening filter at all. Would much prefer the low res videos.
IME this is a long-standing thing - failing to include visuals for inherently visual news stories. They're geared towards text news stories for whatever reason.
> We hear you, and want to clear things up! This is from an experiment to improve video quality with traditional machine learning – not GenAI. More info from @YouTubeInsider here:
> No GenAI, no upscaling. We're running an experiment on select YouTube Shorts that uses traditional machine learning technology to unblur, denoise, and improve clarity in videos during processing (similar to what a modern smartphone does when you record a video)
> YouTube is always working on ways to provide the best video quality and experience possible, and will continue to take creator and viewer feedback into consideration as we iterate and improve on these features
Love the "[company] is always working on ways to provide the best..." that's always in these explanations, like "you actually just caught us doing something good! You're welcome!"
All of which is pretty reasonable, especially for shorts, which are meant to be thrown directly in the trash after being used to collect some ad revenue anyway, right?
This outrage feels odd. TVs have "improved" movies for ages; YouTube doing it with machine learning is the same idea. Are we really upset because an ear looks a bit clearer?
No, people are upset because YouTube is editing their content without telling them. If they really thought this was a high-value addition, they could have added an enhance button to let creators opt in, as has been done elsewhere. I wouldn't like it if HN started "optimizing" the wording of my comments without telling me, even if it made them better along some metric.
> How about you think stuff through before even starting to waste time on stuff like this?
What makes you think they don't think it through? This effect is an experiment that they are running. It seems to be useless, unwanted from our perspective, but what if they find that it increases engagement?
> What makes you think they don't think it through?
Basing it on a lot of stupid decisions YouTube has made over the years, the latest being the horrendous auto-translation of titles/descriptions/audio that can't be turned off. It can only be explained by having morons making decisions who can't imagine that anyone could speak more than one language.
YouTube says this was done for select YouTube Shorts as a denoising process. However, the most popular channels on YouTube, which seem to be the pool selected for this experiment, typically already have well-lit and well-graded videos that shouldn't benefit much from extra denoising from a visual point of view.
It's true though that aggressive denoising gives things an artificially generated look since both processes use denoising heavily.
Perhaps this was done to optimize video encoding, since the less noise/surface detail there is the easier it is to compress.
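That intuition is easy to sanity-check: the same signal with and without per-sample noise compresses very differently under any generic codec. A toy sketch with zlib standing in for a video encoder (purely illustrative, nothing YouTube-specific):

```python
import random
import zlib

random.seed(0)
n = 100_000
# A smooth ramp: long runs of identical byte values, like a denoised flat surface
smooth = bytes(min(i // 400, 255) for i in range(n))
# The same ramp with a little per-sample sensor-style noise added
noisy = bytes(min(255, max(0, i // 400 + random.randint(-4, 4))) for i in range(n))

smooth_size = len(zlib.compress(smooth, 9))
noisy_size = len(zlib.compress(noisy, 9))
# The noisy version costs many times more bytes for the same underlying signal
```

Real video codecs exploit spatial and temporal prediction rather than byte runs, but the principle is the same: noise is incompressible by definition, so scrubbing it out before encoding saves bandwidth.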
If we take them at their word then it's just an extension of technology to optimize video... and it's called AI because buzzwords and hence controversy.
> it's just an extension of technology to optimize video... and it's called AI because buzzwords and hence controversy.
The controversy is that YouTube is making strange changes to the videos of users, that make the videos look fake.
YouTube creators put hours upon hours on writing, shooting and editing their videos. And those that do it full time often depend on YouTube and their audience for income.
If YouTube messes up the videos of creators and makes the videos look like they are fake, of course the creators are gonna be upset!
Given the denoising is said to be aggressive enough to be noticeable on already-compressed video, I think criticism of it is fair. It should just be distinguished from something like TikTok's "beautifier" modifications, which headlines like the BBC's bring to mind.
Last week I went to buy a Philip K Dick eBook while on vacation. It was only $2 and my immediate thought was, “what are the odds this is some weird pirated version that’s full of errors? What if it’s some American version that’s been self-censored by Amazon to be approved by the government? What if it’s been AI enhanced in some way?”
Just the consideration of these possibilities was enough to shake the authenticity of my reality.
Even more unsettling is when I contemplate what could be done about data authenticity. There are some fairly useful practical answers such as an author sharing the official checksum for a book. But, ultimately, authenticity is a fleeting quality and I can’t stop time.
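The checksum idea is doable today with nothing but a standard-library hash; a sketch, assuming the author publishes the digest somewhere trustworthy (the sample strings below are stand-ins for a book file's bytes):

```python
import hashlib

# The text the author actually published
official_text = b"The original, unaltered manuscript."
# The author publishes this digest out-of-band (website, signed post, etc.)
official_digest = hashlib.sha256(official_text).hexdigest()

def matches_official(candidate: bytes, digest: str) -> bool:
    """True only if candidate is byte-for-byte what the author hashed."""
    return hashlib.sha256(candidate).hexdigest() == digest

# Any "enhancement", even one changed word, fails the check
tampered = official_text.replace(b"unaltered", b"enhanced")
```

Of course, this only proves the copy matches what was hashed; it can't prove the author's own copy was untouched to begin with, which is the fleeting part.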
Authenticity can be proven by saying things that upset censors. For example, if I mention Tiananmen Square, you can be sure my comment wasn't edited by the CCP's LLMs.
If AI is as wonderful and world-changing as people claim, it's odd that it's being inserted into products exactly like every other solution in search of a problem.
If it's being added to a toaster for no good reason, sure. But the internet as a whole, through a browser? That's not comparable, people explicitly seek it out when they want to.
Yeah I would, if "Internet" came with zero safeguards or regulations and corporations put the onus on the user to sift through mountains of spam or mitigate credit card leakage risks when buying something online.
It’s like saying you wouldn’t hire an engineer because you suspect they’d use computers rather than pencil and paper.
Now imagine the near future of the Internet, when all people have to adapt to that in order to not be dismissed as AI.
1: Chohei Kambayashi. (1994). Kototsubo. as yet unavailable in English
This is what tech bros in SV built and they all love it.
https://en.wikipedia.org/wiki/John_Henry_(folklore)
https://www.reddit.com/r/youtube/comments/1lllnse/youtube_sh...
[0] https://colorspretty.blogspot.com/2007/01/flickrs-dirty-litt...
https://x.com/TeamYouTube/status/1958286550229541158
Says everything. Hey PM at YouTube: How about you think stuff through before even starting to waste time on stuff like this?
As long as YouTube continues to be the Jupiter sized gorilla in the room, they're not going to care very much about what the plebes think.
- auto-dubbing
- auto-translation
- shorts (they're fine in a separate space, just not in the timeline)
- member only streams (if I'm not a member, which is 100% of them)
The only viable interface for that is the web and plenty of browser extensions.
If so, it's really just another kind of lossy compression, no different in principle from encoding a video to the AV1 format.
Those things are different.