I’m a paying YouTube Premium subscriber. Last weekend, I wanted to download something so I could watch it on the train. The app got stuck at “waiting for download...” on my iPad. Same on iPhone. Restarting did not work. I gave up after an hour (30 mins hands-on trying stuff, 30 mins waiting for it to fix itself). Downloaded the video using yt-dlp, transferred it to my USB-C flash drive, and watched it from that.
Awaiting their “premium cannot be shared with people outside household” policy so I can finally cancel. Family members make good use of ad-free.
I'm also a premium subscriber, and have struggled with the same issues on the iPad app. I try to keep some shows downloaded for my toddler, and the download feature never seems to work on the first try.
I finally got so fed up, I bought a Samsung Galaxy Tab A7 off eBay for $50 and flashed it with LineageOS. I can now load whatever media I want onto the 1 TB SD card I've installed in it. The five-year-old hardware plays videos just fine with the VLC app. And, as a bonus, I discovered that NewPipe, an alternative YouTube client I installed through the F-Droid store, is actually much more reliable at downloading videos than the official client. I was planning on using yt-dlp to load up the SD card, but now I don't even need to do that.
This is exactly why Google is clamping down on running your own choice of apps on Android, as well as pushing things like remote attestation on both phones and browsers.
It's time to milk the entire userbase for every cent they can get out of them by any means necessary. The future is bleak.
> I discovered that NewPipe, an alternative YouTube client I installed through the F-Droid store, is actually much more reliable at downloading videos than the official client.
NewPipe is so good and so useful. It can even play 4K and watch livestreams now.
The TIDAL app is absolute trash; it has this same issue all the time. Not just that: if a download fails, it just hangs there and does not download the rest of the album/playlist.
Also, why would you want to download things in the first place? To watch them offline, right? Well, guess what happens when you open the app w/o an internet connection ... it asks you to log in, so you cannot even access your music. 900k/year TOC genius work there.
The only reason why I haven't canceled is that I'm too lazy to reset my password in order to log in and cancel, lol. Might do it soon, though.
I also pay for YouTube Premium, but I still use ReVanced on my smartphone just to disable auto-translation. It’s absolute madness that users can’t configure this in the official app.
The auto-dub feature is madness. I first noticed it a couple of days ago. I'm crossing my fingers that few authors choose to enable it, and that YouTube makes it easy to disable as a default in settings (not currently possible; you have to do it as you watch, every time).
I'm in a Spanish speaking country, but I want to watch English videos in English.
Auto-generated subtitles for other languages are ok, but I want to listen to the original voices!
I wonder who at YouTube got the idea that forced auto-dub was a good idea. This shows how dysfunctional the management is. It's one thing to have assholes on your team; it's a different thing to not look at what they are doing.
I tried installing ReVanced recently. The configuration of the system (install a downloader/updater which then installs the app) was a huge turn-off. Why is it so complicated? Moreover, why not NewPipe or LibreTube?
Even more hilariously, if you upload to YouTube then try to download from your creator dashboard (e.g. because you were live-streaming and didn’t think to save a local copy, or it impacts your machine too much), you get some shitty 720p render, while yt-dlp will get you the best quality available to clients.
Oh, that reminds me of a similar experience with Facebook video. Did a live DJ stream a few years ago but only recorded the audio locally at max quality.
Back then, I think I already had to use the browser debugger to inspect the URL for the 720p version of the video.
When they recently insisted by email I download any videos before they sunset the feature, their option only gave me the SD version (and it took a while to perform the data export).
Canceled mine after ad-free stopped working on YouTube Kids, of all things (on Shield TV). Was probably a bug, but with practically no customer service options, there was no real solution besides canceling.
I was also a holdover from being a paying Play Music subscriber, and this was shortly after the PITA music switchover to YouTube, so it was a last straw.
Halfway ready to fist-fight whichever exec drove the death of Play Music. It was a very, very good application, which could have continued to function as such when the platform ended, but they wouldn't even let us have that. I still have the app and refuse to uninstall it.
I’m another Premium user in the same position. I use uBlock Origin and Sponsorblock on desktop and SmartTube on my TV. I pay for Premium to be able to share ad-free experience with my less technical family members, and to use their native iOS apps.
If they really tighten the rules on Premium family sharing, I’ll drop the subscription in an instant.
I’m a Premium user and primarily watch on AppleTV. A little while ago they added a feature where if I press the button to skip ahead on the remote when a sponsor section starts, it skips over the whole thing. It skips over “commonly skipped” sections.
While it doesn’t totally remove it, it lets me choose if I want to watch or not, and gets me past it in a single button press. All using the native app. I was surprised the first time this happened. I assume the creators hate it.
So long as they are broadcasting media to the public without an explicit login system, so as to take advantage of public access for exposure, it will remain perfectly legitimate and ethical to access the content through whatever browser or software you want.
After they blitzed me with ads and started arbitrarily changing features and degrading the experience, I stopped paying them and went for the free and adblocking clients and experience.
I may get rid of phones from my life entirely if they follow through with blocking third party apps and locking things down.
The problem is, you cannot be sure what Google does if they catch you violating their ToS. They have killed off entire Google accounts for YT copyright strikes with no recourse.
I'm constantly baffled by how bad the implementation of YouTube Premium downloads is. Videos will buffer to 100% in a matter of seconds but get endlessly stuck when I hit the download button. Why? All the bytes are literally on my device already.
The whole YouTube app is weird.

- Sometimes it lets you do 1.0x-2.0x playback. Sometimes it lets you range from 0.25x-4x. Sometimes it pops up a text selection box with every 0.05x option from 0.1 to 4.0. Sometimes it has a nicer UI with shortcut selections for common choices and a sliding bar for speed.
- It recently picked up a bug where, if you're listening to a downloaded video but turn the screen off and on again, video playback seems to crash.
- A few months ago it became very, very slow at casting: all manipulations (pause, changing videos, etc.) could take 30 seconds to propagate to the cast video... but they didn't usually get lost. (It would be less weird if they did just get lost sometimes.)
- You aggressively can't cast a short to a TV, in a way that clearly shows this is policy for some incomprehensible reason, but if you use the YouTube app directly on your set-top box it'll happily play a short on your TV.
- Despite its claim in small text that downloads are good for a month without being rechecked, periodically it just loses track of all the downloads and has to redownload them. It also sometimes clearly tries to reauthorize downloads I made just 30 minutes ago when I'm in a no-Internet zone, defeating the entire purpose.
- When downloads are about 1/4 done, the download screen displays "ready to watch", but if you try to watch, it fails with "not yet fully downloaded".
Feels like the app has passed the complexity threshold of what the team responsible for it can handle. Or possibly, too much AI code and not enough review and testing. And those don't have to be exclusive possibilities.
Also a paying YT Premium subscriber. I live in a rural part of CA where there isn't much 5G reception. For extremely long drives in my minivan, I allow my toddler to watch Ms. Rachel on the screen via an HDMI input from my iPhone. YouTube Premium downloads have DRM that disallows playing them over HDMI, so I had to do what you did: add them as local files to VLC and play them from there.
I also have YouTube premium and watch mostly on my iPad and TV. YouTube constantly logs me out at least once per day. I notice because I’ll randomly start seeing ads again (I open videos from my rss reader, never their site). This never happened when I wasn’t on premium. I don’t get what they’re doing, but my impression after almost a year is that it’s only slightly less annoying than getting ads. At this point, I might as well not renew and just use ad block.
I have 2 homes. Every time I "go up north" I have to switch my Netflix household and then back again when I return. This sounds like that won't even be possible.
I'll admit to using yt-dlp to get copies of videos I wish to have a local copy of, which can't be taken away from me by somebody else, but I pay for premium because that pays for content I watch. If you don't pay for content, where's it going to come from? Patreon only works for super dedicated stars with a huge following.
I can’t speak for everyone, but I don’t watch content that needs to be “paid for” in that way. e.g. The last several videos I downloaded for archiving were install instructions for car parts that were uploaded by the part manufacturer. (And that aren’t readily available through any other channel.)
There are no files anymore. I mean, there technically are, but copyright industry doesn't want you to look at them without authorization, security people don't want you to look at them at all, and UX experts think it's a bad idea for you to even know such thing as "files" exists.
Share and enjoy. Like and subscribe. The world is just apps all the way down.
I run into that download issue all the time. I need to pause downloading each video, force close the YouTube app, then unpause the downloads to get them downloading again. It has been happening for years and is still unfixed.
I have the opposite problem... frequently streaming a video gets stuck buffering even on my gigabit fiber connection, but I can download a full max quality version in a matter of seconds.
Why not use a non-chromium browser and help prevent Google from having larger control over the Internet?
We still need competition in the browser space or Google gets to have a disproportionate say in how the Internet is structured. I promise you, Firefox and Safari aren't that bad. Maybe Firefox is a little different but I doubt it's meaningfully different for most people [0]. So at least get your non techie family and friends onto them and install an ad blocker while you're at it.
[0] the fact that you're an individual may mean you're not like most people. You being different doesn't invalidate the claim.
This description reminds me of a programming-language-wars excerpt I saw somewhere years ago about how C was obviously superior to Pascal, because with enough preprocessor macros you can compile any Pascal program in C. Followed by some hideous and humorous examples.
This is excellent for some of my use cases. I want to have my AI agents "fork" their context in some ways, and this could be useful for that instead of juggling a tree of dictionaries.
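For what it's worth, Python's standard library already has a primitive for this kind of cheap fork: collections.ChainMap. A minimal sketch of the idea (names are illustrative, not from the linked code):

    from collections import ChainMap

    # Parent context: a single shared dict.
    base = ChainMap({"model": "m1", "temperature": 0.7})

    # "Fork": new_child() adds an empty overlay dict. Writes land in the
    # overlay; reads fall through to the parent, so forks are cheap and isolated.
    fork = base.new_child()
    fork["temperature"] = 0.2

    print(base["temperature"])  # 0.7 (parent untouched)
    print(fork["temperature"])  # 0.2 (fork sees its own overlay)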
Heh, now I wonder how much JavaScript it actually interprets and given that it’s < 1000 lines, whether it could be used towards an introductory course in compilers.
Over time they probably will require that. I believe YT still allows most of these things because of "legacy" apps, which they have been killing off bit by bit. I'm not sure if anyone is cataloging the oldest supported app, but most things like using YT from a slightly older game console don't work anymore.
Basically any publicly known method that can sip video content while doing the least work and authentication will be a common point of attack for this.
I wonder how long until it gets split off into its own project. For the time being, it could do with a lot more documentation. At least they've got some tests for it!
Aside from the fact that the point of the announcement is that they're dropping it entirely, this "interpreter" is a hack that definitely is nowhere near capable of interpreting arbitrary JS. For example, the only use of `new` it handles is for Date objects, which it does by balancing parens to deduce the arguments for the call, then treating the entire group of arguments as a string and applying regexes to that.
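A toy illustration of the trick described above (not yt-dlp's actual code): balance parentheses to find the argument list of `new Date(...)`, then attack the resulting string with a regex:

    import re

    def extract_date_args(expr: str) -> str:
        start = expr.index("new Date(") + len("new Date(")
        depth, i = 1, start
        while depth:                      # balance parens to find the matching ')'
            if expr[i] == "(":
                depth += 1
            elif expr[i] == ")":
                depth -= 1
            i += 1
        return expr[start:i - 1]          # the raw argument text, as one string

    args = extract_date_args('x = new Date("2025-01-01T00:00:00Z");')
    print(re.fullmatch(r'"([^"]+)"', args).group(1))  # 2025-01-01T00:00:00Z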
When I first got with my wife, I seemed a bit crazier than I am, because I have been a media hoarder for 30+ years. I don't have any VHS tapes, DVDs, etc. lying around because I only keep digital copies, but I have pretty decent archives. Nothing important really, just normal stuff and some rare or obscure stuff that disappears over time.
My wife was interested in the idea that I was running "Netflix from home" and enjoyed the lack of ads or BS when we watched any content. I never really thought I would be an "example" or anything like that; I fully expected everyone else to embrace streaming for the rest of time, because I didn't think those companies would make so many mistakes. I've been telling people for the last decade: "That's awesome. I watch using my own thing. What shows are your favorites? I want to make sure I have them."
In the last 2 years, more family members and friends have requested access to my Jellyfin and asked me to set up a similar setup with less storage underneath their TV in the living room or in a closet.
Recently-ish we have expanded our Jellyfin to have some YouTube content on it. Each channel just gets a directory and gets this command run:
It actually fails to do what I want here (download h264 content), so I have it re-encoded, since I keep my media library in h264 until the majority of my devices support h265, etc. None of that really matters, because these YouTube videos come in AV1 and none of my smart TVs support that yet, AFAIK.
I have set up a Plex server and started ripping shows from various streaming providers (using StreamFab mostly) specifically because my wife got frustrated with 1) ads starting to appear even on paid plans and 2) never-ending game of musical chairs where shows move from provider to provider, requiring you to maintain several subscriptions to continue watching. She's not a techie at all, she's just pissed off, and I know she's not the only one.
Let's make sure that when all those people come looking for solutions, they'll find ones that are easy to set up and mostly "just work", at least to the extent this can be done given that content providers are always going to be hostile.
First I ran a simple script; now I use ytdltt [1] to let my mother download YT videos via a Telegram bot (in her case it's more like audiobooks) and sort them into directories so she can access/download them via Jellyfin. She's at around 1.2 TB of audiobooks in about 3 years.
I recently discovered Pinchflat [1], which seems like an *arr-inspired web alternative, and works great for me - I just need to add the videos I want downloaded to a playlist and it picks them up. Also uses yt-dlp under the hood.
Tried this: "yt-dlp -f 'bestvideo*[ext=mp4]+bestaudio[ext=m4a]/best[ext=mp4]/best' -S vcodec:h264 -other_options …"? I'm still getting proper h264 with that (my Raspberry Pi 3 only wants a proper codec too… none of that mushy new-era codec stuff. ;) )
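For anyone scripting this instead, the same selection works through yt-dlp's Python API; the `[vcodec^=avc1]` filter (which another comment in the thread suggests) pins H.264 explicitly rather than just sorting for it. A hedged sketch with a placeholder URL:

    from yt_dlp import YoutubeDL

    ydl_opts = {
        # Prefer H.264 (avc1) video in mp4 with m4a audio; fall back to best mp4.
        "format": "bestvideo[vcodec^=avc1][ext=mp4]+bestaudio[ext=m4a]/best[ext=mp4]",
    }
    with YoutubeDL(ydl_opts) as ydl:
        ydl.download(["https://www.youtube.com/watch?v=XXXXXXXXXXX"])  # placeholder URL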
Days of just getting data off the web are coming to an end, as everything requires a full browser running thousands of lines of obfuscated JS code now. So instead of a website giving me that 1 KB of JSON that could be cached, I now start a full browser stack and transmit 10 megabytes through 100 requests, messing up your analytics and security profile, and everyone's a loser. Yay.
On the bright side, that opens an opportunity for 10,000 companies whose only activity is scraping 10MB worth of garbage and providing a sane API for it.
Luckily all that is becoming a non-issue, as most content on these websites isn't worth scraping anymore.
This 1 KB of JSON still sounds like a modern thing, where you need to download many MB of JavaScript code to execute and display the 1 KB of JSON data.
What you want is to just download the 10-20kb html file, maybe a corresponding css file, and any images referenced by the html. Then if you want the video you just get the video file direct.
Simple and effective, unless you have something to sell.
The main reason for doing video through JS in the first place, other than obfuscation, is adaptive bitrate support. Oddly enough, some TVs will support adaptive-bitrate HLS directly, as do Apple devices I believe, but not regular browsers. See https://github.com/video-dev/hls.js/
> unless you have something to sell
Video hosting and its moderation is not cheap, sadly. Which is why we don't see many competitors.
It's an arms race. Websites have become stupidly/unnecessarily/hostilely complicated, but AI/LLMs have made it possible (though more expensive) to get whatever useful information exists out of them.
Soon, LLMs will be able to complete any Captcha a human can within reasonable time. When that happens, the "analog hole" may be open permanently. If you can point a camera and a microphone at it, the AI will be able to make better sense of it than a person.
Please remember that an LLM accessing any website isn't the problem here. It's the scraping bots that saturate the server bandwidth (a DoS attack of sorts) to collect data to train the LLMs with. An LLM solving a captcha or an Anubis style proof of work problem isn't a big concern here, because the worst they're going to do with the collected data is to cache them for later analysis and reporting. Unlike the crawlers, LLMs don't have any incentives in sucking up huge amounts of data like a giant vacuum cleaner.
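For context on why proof-of-work helps: the cost is negligible for one legitimate visitor but adds up fast for a crawler hitting millions of pages. A toy version of the idea (the real Anubis scheme differs in its details):

    import hashlib
    import itertools

    def solve(challenge: str, difficulty: int = 4) -> int:
        """Find a nonce whose SHA-256 digest starts with `difficulty` zero hex digits."""
        target = "0" * difficulty
        for nonce in itertools.count():
            digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
            if digest.startswith(target):
                return nonce  # ~16**difficulty hashes expected

    print(solve("example-challenge"))  # trivial once, expensive a million times over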
Fortunately, it is now easier than ever to do small-scale scraping, the kind yt-dlp does.
I can literally just go write a script that uses headless firefox + mitmproxy in about an hour or two of fiddling, and as long as I then don't go try to run it from 100 VPS's and scrape their entire website in a huge blast, I can typically archive whatever content I actually care about. Basically no matter what protection mechanisms they have in place. Cloudflare won't detect a headless firefox at low (and by "low" I mean basically anything you could do off your laptop from your home IP) rates, modern browser scripting is extremely easy, so you can often scrape things with mild single-person effort even if the site is an SPA with tons of dynamic JS. And obviously at low scale you can just solve captchas yourself.
I recently wrote a scraper script that just sent me a Discord ping whenever it ran into a captcha, and I'd just go look at my laptop and fix it, and then let it keep scraping. I was archiving a comic I paid for, but it was in a walled-garden app that obviously didn't want you to even THINK of controlling the data you paid for.
> Fortunately, it is now easier than ever to do small-scale scraping, the kind yt-dlp does.
This is absolutely not the case. I've been web scraping since the '00s, when you could just curl any HTML or drive the browser with Selenium for simple automation, but now it's incredibly complex and expensive even with modern tools like Playwright and all of the monthly "undetectable" flavors of it. Headless browsers are laughably easy to detect because they leak the fact that they are being automated and that they are headless. Not to even mention all of the fingerprinting.
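One concrete example of such a leak: a WebDriver-controlled browser reports itself via `navigator.webdriver`, readable by any page script. A minimal Selenium demonstration, assuming Firefox and geckodriver are installed:

    from selenium import webdriver

    options = webdriver.FirefoxOptions()
    options.add_argument("--headless")
    driver = webdriver.Firefox(options=options)
    driver.get("https://example.com")
    # The WebDriver spec mandates this be true in automated sessions unless masked;
    # it's only the most basic of the signals fingerprinting scripts look at.
    print(driver.execute_script("return navigator.webdriver"))
    driver.quit()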
To how many content creators have you written to request that they share their content on PeerTube or BitTorrent? How did they respond? How will they monetize?
I think this is just another indication of how the web is a fragile equilibrium in a very adversarial ecosystem. And to some extent, things like yt-dlp and adblocking only work if they're "underground". Once they become popular - or there's a commercial incentive, like AI training - there ends up being a response.
> Days of just getting data off the web are coming to an end
All thanks to great ideas like downloading the whole internet and feeding it into slop-producing machines fueling global warming in an attempt to make said internet obsolete and prop up an industry bubble.
The future of the internet is, at best, bleak. Forget about openness. Paywalls, authwalls, captchas and verification scans are here to stay.
The Internet was turned into a slop warehouse well before LLMs became a thing - in fact, a big part of why ChatGPT et al. have seen such extreme adoption worldwide is that they let people accomplish many tasks without having to inflict on themselves the shitfest that's the modern web.
Personally, when it became available, o3 model in ChatGPT cut my use of web search by more than half, and it wasn't because Google became bad at search (I use Kagi anyway) - it's because even the best results are all shit, or embedded in shit websites, and the less I need to browse through that, the better for me.
Accelerationism is a dead-end theory with major holes in its core. Or I should say, "their" core, because there are a million distant and mutually incompatible varieties. Everyone likes to say "gosh, things are awful, it MUST end in collapse, and after the collapse everyone will see things MY way." They can't all be right. And yet, all of them with their varied ideas still think it'll be a good idea to actively push to make things worse in order to bring on the collapse more quickly.
It doesn't work. There aren't any collapses like that to be had. Big change happens incrementally, a bit of refactoring and a few band-aids at a time, and pushing to make things worse doesn't help.
If you showed me the current state of YouTube 8 years ago - multiple unskippable ads before each video, 5 midrolls for a 10 minute video, comments overran with bots, video dislikes hidden, the shorts hell, the dysfunctional algorithm, .... - I would've definitely told you "Yep, that will be enough to kill it!"
At this point I don't know - I still have the feeling that "they just need to make it 50% worse again and we'll get a competitor," but I've seen too many of these platforms get 50% worse too many times, and the network effect wins out every time.
It's almost funny, not to mention sad, how much their player/page has changed, filling up with tons of JS that makes less powerful machines lag.
For a while now, I've been forced to change "watch?v=" to "/embed/" to watch something in 480p on an i3 Gen 4, where the same video, when downloaded, uses ~3% of the CPU.
However, unfortunately, it doesn't always work anymore.
Many performance problems on YouTube are because they now force everyone to use the latest heavy codecs, even when your hardware does not have acceleration for it. I have a laptop that is plenty powerful for everything else and plays 4K h264 no problem. 720p on YouTube on the other hand turns it into a hot slate after a minute and grinds everything to a halt.
There are browser extensions like h264ify that block newer codecs, but WHY??? Does nobody at YouTube care about the user experience? It’s easier and more reliable to just download the videos.
You are not alone. In Q1 2025 I was forced to adopt the embed player. In Q3 2025, Google intentionally broke the embed player. Now the only YouTube access I have is via yt-dlp. Long live yt-dlp and its developers.
Nsig/sig - special tokens which must be passed to API calls, generated by code in base.js (the player code). This is what has broken for yt-dlp and other third-party clients. Instead of extracting the code that generates those tokens (e.g. using regular expressions) like we used to, we now need to run the whole base.js player code to get these tokens, because the code is spread out all over the player.
PoToken - proof-of-origin token which Google has lately been enforcing for all clients, or video requests will fail with a 403. On Android it uses DroidGuard; for iOS, it uses built-in app integrity APIs. For the web, it requires that you run a snippet of JavaScript code (the challenge) in the browser to prove that you are not a bot. Previously, you needed an external tool to generate these PoTokens, but with the Deno change yt-dlp should be capable of producing these tokens by itself in the near future.
SABR - server-side adaptive bitrate streaming, used alongside Google's UMP protocol to give the server more control over buffering, given data from the client about the current playback position, buffered ranges, and more. This technology is also used to do server-side ad injection. Work is still being done to make third-party clients work with it (sometimes works, sometimes doesn't).
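To make the nsig change concrete, here is a hedged sketch of the new approach: instead of regex-extracting the transform out of base.js, hand the whole player to a real JS runtime and call the transform directly. The function name "nsig_transform" is hypothetical, and yt-dlp's actual plumbing differs:

    import subprocess

    def run_nsig(player_js: str, challenge: str) -> str:
        # Append a call to the (hypothetical) transform and print the result;
        # the real player buries the function under layers of obfuscation.
        script = player_js + f"\nconsole.log(nsig_transform({challenge!r}));"
        out = subprocess.run(
            ["deno", "run", "-"],  # "-" reads the program from stdin; no permissions granted
            input=script, capture_output=True, text=True, check=True,
        )
        return out.stdout.strip()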
>If you ever wondered why the likes of Google and Cloudflare want to restrict the web
I disagree with the framing of "us vs them".
It's actually "us vs us". It's not just us plebeians vs FAANG giants. The small-time independent publishers and creators also want to restrict the web, because they don't want their content "stolen". They want to interact with real humans instead of bots. The following are manifestations of the same fear:
- small-time websites adding Anubis proof-of-work
- owners of popular Discord channels turning on the setting for phone # verification as a requirement for joining
- web blogs wanting to put a "toll gate" (maybe utilize Cloudflare or other service) to somehow make OpenAI and others pay for the content
We're long past the days of colleagues and peers on ARPANET and NSFNET sharing info for free on university computers. Now everybody on the globe wants to try to make a dollar, and likewise, they feel dollars are being stolen from them.
I don't know, it's really hard to blame them. In a way, the next couple of years are going to be a battle to balance easy access to info with compensation for content creators.
The web as we knew it before ChatGPT was built around the idea that humans have to scavenge for information, and while they're doing that, you can show them ads. In that world, content didn't need to be too protected because you were making up for it in eyeballs anyway.
With AI, that model is breaking down. We're seeing a shift towards bot traffic rather than human traffic, and information can be accessed far more effectively and, most importantly, without ad impressions. So, it makes total sense for them to be more protective about who has access to their content and to make sure people are actually paying for it, be it with ad views or some other form of agreement.
Weird. People talking about small-time creators wanting DRM? I've never seen that; usually they'd be hounding for any attention. I don't know why multiple accounts are seemingly independently bringing this concept up, but maybe it is trying to muddy the waters?
At least for YouTube, viewbotting is very much a thing, which undermines trust in the platform. Even if we were to remove Google ads from the equation, there’s nothing preventing someone from crafting a channel with millions of bot-generated views and comments, in order to get paid sponsor placements, etc.
The reasons are similar for Cloudflare, but their stances are a bit too DRMish for my tastes. I guess someone could draw the lines differently.
There could be valid reasons for fighting downloaders, for example:
- AI companies scraping YT without paying YT, let alone creators, for training data. Imagine how much data YT has.
- YT competitors in other countries scraping YT to copy videos, especially in countries where YT is blocked. Some such companies have a "move all my videos from YT" function to encourage creators to migrate.
Everything trends towards centralization over a long enough period.
I laugh at people who think ActivityPub or Mastodon or BlueSky will save us. We already had that, it was called e-mail, look what happened once everyone started using it.
If we couldn't stop the centralization effects that occurred on e-mail, any attempt to stop centralization in general is honestly a utopian fool's errand. Regulation is easier.
And barely a few days after Google did it, the fix is in.
Amazing how they simply couldn't win - you deliver content to the client, the content goes to the client. They could be the largest corporation in the world and we'd still have yt-dlp.
That's why all of them wanted proprietary walled gardens where they would be able to control the client too - so you get to watch the ads or pay up.
Good question! Indeed you can run the challenge code using headless Chromium and it will function [1]. They are constantly updating the challenge however, and may add additional checks in the future. I suppose Google wants to make it more expensive overall to scrape Youtube to deter the most egregious bots.
Once JavaScript is running, it can perform complex fingerprinting operations that are difficult to circumvent effectively.
I have a little experience with Selenium headless on Facebook. Facebook tests fonts, SVG rendering, CSS support, screen resolution, clock and geographical settings, and hundreds of other things that give it a very good idea of whether it's a normal client or Selenium headless. Since it picks a certain number of checks more or less at random and they can modify the JS each time it loads, it is very, very complicated to simulate.
Facebook and Instagram know this and allow it below a certain limit because it is more about bot protection than content protection.
This is the case when you have a real web browser running in the background. Here we are talking about standalone software written in Python.
More specifically, yt-dlp uses legacy API features supported for older smart TVs which don't receive software updates. Eventually once that traffic drops to near zero those features will go away.
That conspiracy theory never even made sense to me. Why would anyone think that a payment and ad-supported content platform secretly wants their content to be leaked through ad and payment free means?
Mainly the theory that, if you can’t use downloaders to download videos, then people will no longer see YT as the go-to platform for any video hosting and will consider alternatives.
And I call that a theory for a reason. Creators can still download their videos from YT Studio; I'm not sure how much importance there is on being able to download any video ever (and worst-case scenario, people could screen-record videos).
I agree. All I can think of is that surely a lot of commentary YouTubers rely on YouTube downloaders to use fair-use snippets of other people's videos in their commentary videos?
> How on earth can it be that terrible [>20 minutes] compared to Deno?
QuickJS uses a bytecode interpreter (like Python, famously slow) and is optimised for simplicity and correctness, whereas Deno uses a JIT compiler (like Java, .NET and WASM). Deno uses the same JIT compiler as Chrome (V8), one of the most heavily optimised in the world.
That doesn't normally lead to such a large factor in time difference, but it explains most of it, and depending on the type of code being run, it could explain all of it in this case.
QuickJIT (a fork of QuickJS that uses TCC for JIT) might yield better results, but still slower than Deno.
My concern is either that QuickJS is something like 100x slower, or that even when using Deno, the download experience will be insanely slow.
In my mind, an acceptable time for users might be 30 seconds (somewhat similar to watching an ad). If QuickJS is taking >20 minutes, then it is some 40x slower than even that acceptable ceiling? Seems very high?
> QuickJIT (a fork of QuickJS that uses TCC for JIT) might yield better results, but still slower than Deno.
Interesting, not come across it before. Running C code seems like an insane workaround from a security perspective.
The download feature on iOS always works flawlessly whenever I need to hop on a long-haul flight (several times a year).
I was using the browser feature that disables the mobile mode on smartphones.
The auto-dub feature should be disabled ASAP. Or at least there should be a way to disable it globally on all my devices.
Then I have good news for you! https://lifehacker.com/tech/youtube-family-premium-crackdown
In fact, I've got an email from them about this already. My YT is still ad-free though, so not sure when it's going to kick in for real.
For now. I suspect this is the real reason Google is going to require a developer cert even for sideloaded apps: https://www.techradar.com/phones/android/google-will-soon-st...
Until next year, when Google will require a real name and address for devs of sideloaded apps.
Giving you the bytes would be easy; the hard part is preventing the free flow of information. And those bugs are the side effects.
I recently got paused for "watching on another device" when I wasn't. I don't think that policy you mention is too far off.
That's been a policy for a while, the sign up page prominently says "Plan members must be in the same household".
No idea if it's enforced, though.
We are not the same.
Reddit has the answer for you: https://www.reddit.com/r/browsers/comments/1j1pq7b/list_of_b...
1. Unlimited YouTube Premium
2. Unlimited drink reimbursement (coffee, tea, smoothies, whatever)
The psychological sense of loss from those two things would be larger than any 5% raise.
https://github.com/yt-dlp/yt-dlp/blob/2025.09.23/yt_dlp/jsin...
Here are lines 431 through 433:
The submission is literally about them moving away from it in favor of Deno, so I think "never" probably gets pretty close.
1: https://github.com/entropie/ytdltt
I struggled with that myself (the yt-dlp documentation could use some work). What's currently working for me is:
You can also skip the match filters by running the /videos URL instead of the main channel URL.
If you want 720p, use -S res:720.
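Putting those two tips together through the Python API (the channel URL is a placeholder; `format_sort` is the API equivalent of `-S`):

    from yt_dlp import YoutubeDL

    opts = {
        "format_sort": ["res:720"],  # same effect as the CLI's -S res:720
        "outtmpl": "%(channel)s/%(title)s.%(ext)s",
    }
    with YoutubeDL(opts) as ydl:
        # The /videos tab lists only regular uploads, so no match filters needed.
        ydl.download(["https://www.youtube.com/@example/videos"])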
1. https://github.com/kieraneglin/pinchflat
https://pypi.org/project/ytcc/
You are missing [vcodec^=avc1]?
* PeerTube and similar platforms for video streaming of freely-distributable content;
* BitTorrent-based mechanisms for sharing large files (or similar protocols).
Will this be inconvenient? At first, somewhat. But I am led to believe that in the second category one can already achieve a decent experience.
I want them to go overboard. I want BigTech to go nuts on this stuff. I want broken systems and nonsense.
Because that’s the only way we’re going to get anything better.
https://www.youtube.com/watch?v=xvFZjo5PgG0
https://www.youtube.com/embed/xvFZjo5PgG0
While they worsen the user experience, other sites optimize their players and don't seem to care about downloaders (pr0n sites, for example).
Nsig/sig extraction example:
- https://github.com/yt-dlp/yt-dlp/blob/4429fd0450a3fbd5e89573...
- https://github.com/yt-dlp/yt-dlp/blob/4429fd0450a3fbd5e89573...
PoToken generation:
- https://github.com/yt-dlp/yt-dlp/wiki/PO-Token-Guide
- https://github.com/LuanRT/BgUtils
SABR:
- https://github.com/LuanRT/googlevideo
EDIT2: Added more links to specific code examples/guides.
Now you know.
How does this prove you are not a bot? How does this code not work in a headless Chromium if it's just client-side JS?
[1] https://github.com/LuanRT/BgUtils
It's always been very apparent that YouTube is doing _just enough_ to stop downloads while also supporting a global audience of 3 billion users.
If the world all had modern iPhones or Android devices, you can bet they'd straight up DRM all content.
[0] https://windowsread.me/p/best-youtube-downloaders
[1] https://news.ycombinator.com/item?id=45300810
e.g. censorship, metadata, real time society-wide trends, etc...
Google is way, way more than just a company.
> Why can't we embed a lightweight interpreter such as QuickJS?
> @Ronsor #14404 (comment)
The linked comment [2]:
> @dirkf This solution was tested with QuickJS which yielded execution times of >20 minutes per video
How on earth can it be that terrible compared to Deno?
[1] https://github.com/yt-dlp/yt-dlp/issues/14404#issuecomment-3...
[2] https://github.com/yt-dlp/yt-dlp/issues/14404#issuecomment-3...