OK so someone has noticed that you "only" need 15 Mb/s for a 4K stream. Jolly good.
Now let's address latency - that entire article only mentioned the word once and it looks like it was by accident.
Latency isn't cool but it is why your Teams/Zoom/whatevs call sounds a bit odd. You are probably used to the weird over-talking episodes you get when a sound stream goes somewhat async. You put up with it but you really should not, with modern gear and connectivity.
A decent quality sound stream consumes roughly 256 KB/s (yes: 1/4 megabyte per second - not much) but if latency strays away from around 30ms, you'll notice it, and when it becomes about a second it will really get on your nerves. To be honest, half a second is quite annoying.
I can easily measure path latency to a random external system with ping to get a base measurement for my internet connection and here it is about 10ms to Quad9. I am on wifi and my connection goes through two switches and a router and a DSL FTTC modem. That leaves at least 20ms (which is luxury) for processing.
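For anyone who wants to reproduce that kind of baseline without root (ICMP ping generally needs raw sockets), here is a rough, illustrative sketch that times TCP handshakes to a public resolver instead; the 9.9.9.9:53 target and the sample count are just assumptions, and handshake time only approximates the ping RTT:

```python
# Rough latency baseline: time TCP handshakes to a public resolver.
# This approximates one RTT per handshake; it is not real ICMP ping.
import socket
import statistics
import time

def tcp_rtt_ms(host: str, port: int = 53, samples: int = 10) -> list[float]:
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass  # connection established means the handshake completed
        rtts.append((time.perf_counter() - start) * 1000)
    return rtts

if __name__ == "__main__":
    rtts = tcp_rtt_ms("9.9.9.9")  # Quad9, as in the comment above
    print(f"min {min(rtts):.1f} ms, median {statistics.median(rtts):.1f} ms, max {max(rtts):.1f} ms")
```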
A decent quality stream is not 256 kilobytes per second (1/4 megabyte), as you write. You probably meant 256 kilobits, which is only 32 kilobytes per second. For speech that’s VERY high quality. Teams actually uses either the G.722 or SILK codec, which is just 40kbit/s. That’s 5 kilobytes per second.
256kbps (yes, I was pissed and mis-typed B for b, among a few other transgressions) or whatever is sod all these days for throughput, as you well know, so worrying your codecs down to 40kbps means nothing if your jitter buffer is going mad!
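For anyone skimming the b/B confusion in this exchange, a throwaway sketch of the arithmetic being argued over (the 256 kb/s and 40 kb/s figures are the ones quoted in the comments above):

```python
# Bits vs bytes for the audio figures discussed above (8 bits per byte).
def kilobits_to_kilobytes(kbps: float) -> float:
    return kbps / 8

print(kilobits_to_kilobytes(256))  # 32.0 -> a 256 kb/s stream is 32 KB/s, not 256 KB/s
print(kilobits_to_kilobytes(40))   # 5.0  -> a 40 kb/s speech codec is about 5 KB/s
```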
Modern home/office internet connections are mostly optimised for throughput but rarely for latency - that's the province of HFT.
You see tales from the kiddies who fixate over "ping" times when trying to optimize their gaming experience. Well that's nice but when on earth do you shoot someone with ICMP?
I can remember calling relos in Australia in the 1970s/80s over old school satellite links from the UK or Germany and it nearly needed radio style conventions.
I've been doing VoIP for quite a while and it is so crap watching people put up with shit sound quality and latency on a Teams/Zoom/etc call as the "new" normal. I wheel out Wireshark and pretend to watch it and then fix up the wifi link or the routing (Teams outside VPN - split tunnelling) or whatever.
Tangentially related to the topic of the bandwidth efficiency of Teams: screen sharing in Teams has a very low framerate of about 3 to 4 fps. It is driving me insane, especially when the presenter starts relentlessly scrolling up and down and circling things with the mouse cursor.
I think Microsoft took bandwidth efficiency a bit too far here.
That's 1/4 megabit per second. The codec should be around 32,000 bytes per second and thus 256,000 bits per second. The problem is the Internet is not multicast, so if I'm on with 8 participants that's 8 * 32,000 bytes per second, and our distributed clocks are nowhere near sample accurate.
If you really want to have some fun, come out to the countryside with me where 4G is the only option and 120ms is the best end-to-end latency you're going to get. Plus your geolocation puts you half the nation away from where you actually are, which only compounds the problem.
On the other hand I now have an acquired expertise in making applications that are extremely tolerant of high latency environments.
The first time I learned about UDP multicasting, I thought it was incredible, wondering how many streams were out there that I could tap into, if only I knew the multicast group, and all with such little overhead! Then I tried to send some multicast packets to my friend's house. I kept increasing the hop limit, thinking I was surely sending my little UDP packet around the world at that point. It never made it. :(
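If anyone wants to repeat the experiment, a minimal sketch of the "keep raising the hop limit" attempt is below; the group address and port are arbitrary picks from the administratively scoped range, and, as the anecdote shows, public internet routers will generally refuse to forward the packet no matter what TTL you set:

```python
# Minimal multicast sender. The TTL caps how many routers the datagram may
# cross, but routers on the public internet normally won't forward multicast
# at all, so this stays within your own network regardless of the value.
import socket
import struct

GROUP, PORT = "239.1.2.3", 5007  # arbitrary administratively-scoped group

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
for ttl in (1, 8, 32, 128, 255):
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, struct.pack("B", ttl))
    sock.sendto(b"hello, is anyone out there?", (GROUP, PORT))
    print(f"sent with hop limit {ttl}")
```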
> If you really want to have some fun, come out to the countryside with me where 4G is the only option and 120ms is the best end-to-end latency you're going to get.
That's basically me. 80ms on LTE or 25ms on Starlink.
Latency is an issue but it's not the biggest one. Most real-time protocols can adjust for latency (often using some sort of OOB performance measurement port/protocol.) The issue is jitter. When your latency varies a lot, it's hard for an algorithm to compensate.
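To make the jitter point concrete: RTP stacks usually track a smoothed interarrival jitter (roughly in the style of RFC 3550) rather than raw latency. A small sketch with made-up timestamps shows why a steady 80 ms path scores better than one that wobbles between 60 and 120 ms:

```python
# RFC 3550-style interarrival jitter: smooth the variation in packet transit
# times. Constant latency, however high, contributes nothing; variation does.
def interarrival_jitter(send_times, arrival_times):
    jitter = 0.0
    prev_transit = arrival_times[0] - send_times[0]
    for s, a in zip(send_times[1:], arrival_times[1:]):
        transit = a - s
        jitter += (abs(transit - prev_transit) - jitter) / 16  # exponential smoothing
        prev_transit = transit
    return jitter

send = [i * 0.020 for i in range(50)]                          # 20 ms packet spacing
steady = [t + 0.080 for t in send]                             # constant 80 ms path
wobbly = [t + (0.120 if i % 3 == 0 else 0.060) for i, t in enumerate(send)]
print(f"steady path jitter: {interarrival_jitter(send, steady) * 1000:.1f} ms")
print(f"wobbly path jitter: {interarrival_jitter(send, wobbly) * 1000:.1f} ms")
```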
And then comes all the latency _jitter_ inherent in a shared resource like 5G, and on top of that shenanigans like (transparently) MITM-ing TCP streams on a base-station-by-base-station basis.
It is not just on paper. A couple of ms (~4ms) is real, except it is not normal 5G but a specific superset/version of 5G.
Your network needs to be running end to end on 5G-A (3GPP Rel 18) with full SA (Standalone) using NR (New Radio) only. Right now, AFAIK, only mobile networks in China have managed to run that (along with turning on VoNR). Most other networks are still behind in deployment and switching.
It is until you congest it. Then, like all other network technologies, performance collapses. You can still force traffic through with EDC, buffering, and retries, but at massive cost to overall network performance.
Barring some revolutionary new material through which we can get EM waves to travel orders of magnitude faster than through current stuff, I don't think we're ever getting around the latency problem.
It's usually not a speed of light problem. It's a problem of everyone optimizing for bandwidth, not latency, because that is the number that products are advertised with. The speed of light is 200,000-300,000 km/s in the media used; that should not be very noticeable when talking to someone in the same country.
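Back-of-the-envelope for the speed-of-light point; the route lengths below are arbitrary examples, and real paths add queueing, serialization, and routing detours on top:

```python
# One-way propagation delay at roughly 200,000 km/s (light in fibre).
SPEED_IN_FIBRE_KM_S = 200_000

for route_km in (50, 500, 5_000, 15_000):
    one_way_ms = route_km / SPEED_IN_FIBRE_KM_S * 1000
    print(f"{route_km:>6} km -> {one_way_ms:5.2f} ms one way, {2 * one_way_ms:6.2f} ms round trip")
```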
> Could maximum data speeds—on mobile devices, at home, at work—be approaching “fast enough” for most people for most purposes?
That seems to be the theme across all consumer electronics as well. For an average person mid phones are good enough, bargain-bin laptops are good enough, almost any TV you can buy today is good enough. People may of course desire higher quality and specific segments will have higher needs, but things being good enough may be a problem for tech and infra companies in the next decade.
The answer is yes. But it's not just about speed. The higher speeds drain the battery faster.
I say this because we currently use an old 2014 phone as a house phone for the family. It's set to 2G to take calls, and switches to 4G for the actual voice call. We only have to charge it once every 2-3 weeks, if not longer. (Old Samsung phones also had Ultra Power Saving mode which helps with that)
2G is being shut down though. Once that happens and it's forced into 4G all the time, we'll have to charge it more often. And that sucks. There isn't a single new phone on the market that lasts as long as this old phone with an old battery.
The same principle is why I have my modern personal phone set to 4G instead of 5G. The energy savings are very noticeable.
I actually miss the concept of house phones. Instead of exclusively person-to-person communication, families would call other families to catch up and sometimes even pass the phone around to talk to the grandparents, aunts and uncles, etc.
There are some more traditional home phones which work on 4/5G networks with a DECT handset which talks to a cellular base station. You might look into switching to that model to replace your "cell phone as a home phone" concept. It makes it a bit easier to add another handset to the DECT network and often means convenient cradles to charge the handsets while the base station stays in a good signal spot with plenty of power.
Just a thought when it comes time to change out that device.
Given that it's a house phone, have you tried enabling Wi-Fi Calling (VoWiFi) in the carrier settings (if you have that option), and then putting the phone in Airplane Mode with wi-fi enabled? AFAIK that should be fairly less impactful on battery.
(Alternately, if you don't have the option to use VoWiFi, you could take literally any phone/tablet/etc; install a softphone app on it; get a cheap number from a VoIP provider and connect to it; and leave the app running, the wi-fi on, and the cellular radio off. At that point, the device doesn't even need a[n e]SIM card!)
I think it's more that mobile transfer speeds are no longer the bottleneck. It's the platforms and services people are using. If you record a minute of video on your phone it ends up as like 1GB. But when you send it on a messaging app it gets compressed down to 30MB. People would notice and appreciate higher quality video. But it's too expensive to host and transfer for the service.
It won't be a problem for them. They'll find a way to make it not enough - disable functionality that people want or need and then charge a subscription fee to enable it. And more ads. Easy peasy.
Back in the day, if you knew the secret menu, you could change the default vocoder to use on the network. The cheap cell company I used defaulted to half-rate. I would set my phone to the highest bitrate, with a huge improvement in quality, but at the expense of the towers rejecting my calls around 25% of the time. When I would call landline phones, people would mention how good it sounded.
I don't understand this "good enough" argument. We never really needed anything we use daily today. Life was "good enough" 100 years ago (if you could afford it), should we have stopped?
4K video reaches you only because it's compressed to crap. It's "good enough" until it's not. 360p TV was good enough at some point too.
> 4K video reaches you only because it's compressed to crap.
Yes, but I assume when they say the "consumer" they mean everyone, not us. Most people I've had in my home couldn't tell you the difference between a 4K Blu-ray and 1080p at 2-odd meters on a 65" panel.
I can be looking at a screen full of compression artifacts that seem like the most obvious thing I've ever seen, and I'll be surrounded by people going "what are you talking about?"
Even if I can get them to notice, the response is almost always the same.
"Oh...yeah ok I guess I can see. I just assumed it supposed to look like it shrug its just not noticable to me"
> 4K video reaches you only because it's compressed to crap.
Streaming video gets compressed to crap. People are being forced to believe that it is better to have 100% of crap provided in real time instead of waiting just a few extra moments to have the best possible experience.
Here is a trick for movie nights with the family: choose the movie you want to watch, start your torrent and tell everyone "let's go make popcorn!" The 5-10 minutes will get enough of the movie downloaded so you will be able to enjoy a high quality video.
Due to a bunch of reasons combined, I'm stuck on a plan where I have 5GB of mobile data per month, and honestly, I never really use it up. I stopped browsing so much social media because it's slop. I don't watch YouTube on mobile because I prefer to do it at home on my TV. I don't stream music because I keep my collection on the SD card. Sometimes I listen to the online radio but it doesn't use much data anyway. Otherwise I browse the news, use navigation, and that's pretty much it.
Once every few months I'm in a situation where I want to watch YouTube on mobile or connect my laptop to mobile hotspot, but then I think "I don't need to be terminally online", or in worst-case scenario, I just pay a little bit extra for the data I use, but again, it happens extremely rarely. BTW quite often my phone randomly loses connection, but then I think "eh, god is telling me to interact with the physical world for five minutes".
At home though, it's a different situation, I need to have good internet connection. Currently I have 4Gbps both ways, and I'm thinking of changing to a cheaper plan, because I can't see myself benefitting from anything more than 1Gbps.
In any case though, my internet connection situation is definitely "good enough", and I do not need any upgrades.
That is the thing with capitalist exponential growth: it doesn't last forever. Eventually it flattens out, to the point where new goods only get acquired because existing ones are being replaced, or because newer generations are acquiring their first products.
Businesses should learn to earn just enough to get by.
Business culture of the 50s and 60s was closer to this, largely due to how taxes on corporations worked. At least if I'm to believe some business historians I follow.
>Regulators may also have to consider whether fewer operators may be better for a country, with perhaps only a single underlying fixed and mobile network in many places—just as utilities for electricity, water, gas, and the like are often structured around single (or a limited set of) operators.
There are no words to describe how stupid this is.
It actually works well in most places. Look up the term “common carrier”.
The trick is that the entity that owns the wires has to provide/upgrade the network at cost, and anyone has the right to run a telco on top of the network.
This creates competition for things like pricing plans, and financial incentives for the companies operating in the space to compete on their ability to build out / upgrade the network (or to not do that, but provide cheaper service).
Your second and third paragraphs are contradictory.
Common carriers become the barrier to network upgrades. Always. Without fail. Monopolies are a bad idea, whether state or privately owned.
Let me give you 2 examples.
In Australia we had Telstra (formerly Telecom, formerly AusPost). Telstra would wholesale ADSL services to carriers to resell, and they stank. The carriers couldn't justify price increases to upgrade their networks and the whole thing stagnated.
We had a market review, and Telstra was legislatively forced to sell ULL instead. So the non-monopolists were now placing their own hardware in Telstra exchanges, which they could upgrade. Which they did. Once they could sell an upgrade (ADSL2+), they could also price in the cost of upgrading peering and transit. We had a huge increase in network speeds. We later forgot this lesson and created the NBN. NBNCo does not sell ULL, and the pennies that ISPs can charge on top of it are causing stagnation again.
ULL works way better than common carrier. In Singapore the government just runs glass. They have competition between carriers to provide faster GPON. 2gig, 10gig, 100gig, whatever. It's just a hardware upgrade away.
10 years from now Australia will realise it screwed up with NBNCo. Again. But they won't as easily be able to go to ULL as they did in the past. NBN's fibre isn't built for it. We will have to tear out splitters and install glass.
The actual result is worse than you suggest. A carrier had to take the government/NBNCo to court to get permission to build residential fibre in apartment buildings over the monopoly. We have NBNCo strategically overbuilding other fibre providers and shutting them down (it's an offence to compete with the NBN, on the order of importing a couple million bucks' worth of cocaine). It's an absolute handbrake on competition and network upgrades. Innovation is only happening in the gaps left behind by the common carrier.
Common carriers have some upsides, but one downside is that it sometimes removes the incentive for ISPs to deploy their own networks.
I was stuck with a common carrier for years. I could pick different ISPs, which offered different prices and types of support, but they all used the same connection... which was only stable at lower speeds.
It actually has nefarious benefits. Look up the term "HTLINGUAL" or "ECHELON." It's certainly nice for the government to have fewer places to shop when destroying our privacy.
The trick is that this is essentially wireless spectrum, which can be leased for limited periods of time and can easily allow for a more competitive environment than natural monopolies allow for.
It's also possible to separate the infrastructure operators from the backhaul operators and thus entirely avoid the issues of capital investment costs for upstarts. When done, there's even less reason to tolerate monopolistic practices on either side.
Feels a lot like whitelabeling. Where you have 200 companies selling exactly the same product at slightly different price points but where there isn't really any difference in the product.
"Common carrier" tends to raise prices for minimum service, though. And once the network is built the carrier is just going to keep their monopoly. You bet they're never upgrading to any new piece of technology until they're legally required to.
It also makes it more vulnerable to legal, bureaucratic and technical threats.
Doesn't make much sense to me to abstract away most of the parts where an entity could build up its competitive advantage and then to pretend like healthy competition could be built on top.
Imagine if one entity did all the t-shirt manufacturing globally but then you congratulated yourself for creating a market based on altered colors and what is printed on top of these t-shirts.
In New Zealand we have a single company that owns all the telecommunications wires. It was broken up in the 90's from a service provider because they were a monopoly and abusing their position in the market. Now we have a ton of options of ISPs, but only one company to deal with if there are line faults. BTW the line company is the best to deal with, the ISPs are shit.
Same for mobile infrastructure would be great as well.
In NZ we also have the Rural Connectivity Group (RCG) which operates over 400 cellular/mobile sites in rural areas for the three mobile carriers, capital funded jointly by the NZ Government and the three mobile carriers (with operational costs shared between the three carriers I believe). For context the individual carriers operate around 2,000 of their own sites in urban areas and most towns in direct competition with each other. It has worked really well for the more rural parts of the country, filling in gaps in state highway coverage as well as providing coverage to smaller towns that would be uneconomical for the individual carriers to cover otherwise. I'm talking towns of a handful of households getting high speed 4G coverage. Really proud of NZ as this sort of thing is unheard of in most other countries.
I dunno, it makes conceptual sense. Network infrastructure is largely a commodity utility where duplication is effectively a waste of resources. e.g. you wouldn't expect your home to have multiple natural gas connections from competing companies.
Regulators have other ways to incentivize quality/pricing and can mandate competition at levels of the stack other than the underlying infrastructure.
I wouldn't expect that "only a single network" is the right model for all locations, but it will be for some locations, so you need a regulatory framework that ensures quality/cost in the case of a single network anyway.
IMO this can be neatly solved with a peer-to-peer, market-based system similar to Helium:
https://www.helium.com/mobile
(I know that Helium's original IoT network mostly failed due to lack of product-market fit, but idk about their 5G stuff)
Network providers get paid for the bandwidth that flows over their nodes, but the protocol also allows for economically incentivizing network expansion and punishing congestion with subsidization / taxing.
You can unify everyone under the same "network", but the infrastructure providers running it are diverse and in competition.
I personally like the notion of a common public infrastructure that subleases access. We already sort of do that with mobile carriers where the big 3 provide all access and all the other "carriers" (like google fi) are simply leasing access.
Make it easy for a new wireless company to spawn while maintaining the infrastructure everyone needs.
Because competition drives innovation. 5G exists as widely as it does because carriers were driven to meet the standard and provide faster service to their customers.
This article is essentially arguing innovation is dead in this space and there is no need for bandwidth-related improvements. At the same time, there is no 5G provider without a high-speed cap or throttling for hot spots. What would happen if enough people switched to 5G boxes over cable? Maybe T-Mobile can compete with Comcast?
1. It must be well-run.
2. It must be guaranteed to continue to be well-run.
3. If someone can do it better, they must be allowed to do so - and then their improvements have to be folded into the network somehow if there is to be only one network.
Internet is treated this way in Germany, and it's slow and expensive. Eastern European countries that put their bets on competition instead of regulation have more bang for the buck in their network infrastructure.
Maybe it is. Building multiple networks for smaller populations comes at enormous cost, though. In my country there has been a tradition of this kind of network sharing, where operators are required to allow alternative operators on their physical network for a fee set by the government.
It's not that stupid IMO, they could handle it like some places handle electricity: there's a single distributor managing infra but you can select from a number of providers offering different generation rates.
Having 5 competing infrastructures trying to blanket the country means that you end up with a ton of waste, and the most populated places get priority as they constantly fight each other for the most valuable markets while neglecting the less profitable fringe.
You’re thinking legacy. In our new Italian Fascist/Peronist governance model, maximizing return on assets for our cronies is the priority. The regulatory infrastructure that fostered both good and bad aspects of the last 75 years is being destroyed and will not return.
Nationalizing telecom is a great way to reward the tech oligarchs by making the capital investments in giant data centers more valuable. If 10 gig can be delivered cheaply over the air, those hyperscale data centers will end up obsolete if technology continues to advance at the current pace. Why would the companies that represent 30% of the stock market's value want that?
Having a single provider of utilities is great when owned by the gov and run "at cost". Problem is, dickheads get voted in and they sell the utility to their mates who get an instant monopoly and start running the utility for profit.
> Of course, sophisticated representations of entire 3D scenes for large groups of users interacting with one another in-world could conceivably push bandwidth requirements up. But at this point, we’re getting into Matrix-like imagined technologies without any solid evidence to suggest a good 4G or 5G connection wouldn’t meet the tech’s bandwidth demands.
Open-world games such as Cyberpunk 2077 already have hours-long downloads for some users.
That's when you load the whole world as one download. Doing it incrementally is worse. Microsoft Flight Simulator 2024 can pull 100 to 200 Mb/sec from the asset servers.
They're just flying over the world, without much ground level detail.
Metaverse clients go further. My Second Life client, Sharpview, will download 400Mb/s of content, sustained, if you get on a motorcycle and go zooming around Second Life. The content is coming from AWS via Akamai caches, which can deliver content at such rates.
If less bandwidth is available, things are blurry, but it still works. The level of asset detail is such that you can stop driving, go into a convenience store, and read the labels on the items.
GTA 6 multiplayer is coming. That's going to need bandwidth.
The Unreal Engine 5 demo, "The Matrix Awakens", is a download of more than a terabyte. That's before decompression.
The CEO of Intel, during the metaverse boom, said that about 1000x more compute and bandwidth was needed to do a Ready Player One / Matrix quality metaverse. It's not quite that bad.
Here in rural Finland, 4G/5G is the only option available to me. I'm getting 50-150Mbps download speed, but often just a dozen Mbps upload. In the night hours it's better, and that's when I do my game downloads and backup uploads. I think there's going to be another municipal FTTH program; let's see if I get a fixed line at that point.
Right now - not many. But at some point in the future, if the metaverse is everywhere, you could pull out a phone and the combined data for the room you are in might be 100GB. Would we want to have 6G then?
For the better games, you'll need goggles or a phone that unfolds to tablet size for mobile use. Both are available, although the folding-screen products still have problems at the fold point.
> The Unreal Engine 5 demo, "The Matrix Awakens", is a download of more than a terabyte. That's before decompression.
The PS5 and Xbox Series S/X both had disks that were incapable of holding a terabyte at the launch of The Matrix Awakens. Not sure where you are getting that info from, but the demo was about 30GB on disk on both the Series S/X and PS5, and the later packaged PC release is less than 20GB.
The full PC development system might total a TB with all Unreal Engine, Metahumans, City Sample packs, Matrix Awakens code and assets (audio, mocap, etc) but even then the consumer download will be around the 20-30GB size as noted above.
On my current mobile plan (Google Fi[0]) the kind of streaming 3D world they think I would want to download on my phone would get me throttled in less than a minute. 200 MB is about a day's usage, if I'm out and about burning through my data plan.
The reason why there isn't as much demand for mobile data as they want is because the carriers have horrendously overpriced it, because they want a business model where they get paid more when you use your phone more. Most consumers work around this business model by just... not using mobile data. Either by downloading everything in advance or deliberately avoiding data-hungry things like video streaming. e.g. I have no interest in paying 10 cents to watch a YouTube video when I'm out of the house, so I'm not going to watch YouTube.
There's a very old article that I can't find anymore which predicted the death of satellite phones, airplane phones, and, weirdly enough, 3G, because they were built on the idea of taking places that traditionally don't have network connectivity and then selling connectivity at exorbitant prices, on the hope that people desperate for connectivity will pay those prices[1]. This doesn't scale. Obviously 3G did not fail, but it escaped that fate predominantly because networks got cheaper to access - not because there was a hidden, untapped market of people who were going to spend tens of dollars per megabyte just to not have to hunt for a phone jack to send an e-mail from their laptop[2].
I get the same vibes from 5G. Oh, yes, sure, we can treat 5G like a landline now and just stream massive amounts of data to it with low latency, but that's a scam. The kinds of scenarios they were pitching, like factories running a bunch of sensors off of 5G, were already possible with properly-spec'd Wi-Fi access points[3]. Everyone in 5G thought they could sell us the same network again but for more money.
[0] While I'm ranting about mobile data usage, I would like to point out that either Android's data usage accounting has gotten significantly worse, or Google Fi's carrier accounting is lying, because they're now consistently about 100-200MB out of sync by the end of the month. Didn't have this problem when I was using an LG G7 ThinQ, but my Pixel 8 Pro does this constantly.
[1] Which it called "permanet", in contrast to the "nearernet" strategy of just waiting until you have a cheap connection and sending everything then.
[2] I'm told similar economics are why you can't buy laptops with cellular modems in them. The licensing agreements that cover cellular SEP only require FRAND pricing on phones and tablets, so only phones and tablets can get affordable cell modems, and Qualcomm treats everything else as a permanet play.
[3] Hell, there's even a 5G spec for "license-assisted access", i.e. spilling 5G radio transmissions into the ISM bands that Wi-Fi normally occupies, so it's literally just weirdly shaped Wi-Fi at this point.
> I'm told similar economics are why you can't buy laptops with cellular modems in them
I don't know what you mean. My current laptop (Lenovo L13) has a cellular modem that I don't even need, and I am certainly a cost-conscious buyer. It's also not the first time this has happened.
So true. I remember the first time I got access to 5G was on a short visit to Dubai. I got a SIM card with ~20GB of traffic and was super excited to try speedtest. My brother told me not to do that, because 5G is so fast that if speedtest doesn't limit the traffic size for the test, it will consume all of it within the 30 seconds the test runs. Guess what? I didn't run the test, because I didn't want to pay another $50 for the data package.
If I have a 1Gbit connection but only 100GB of data, what's the point? I'm still effectively on 4G if I want that 100GB to last me a month...
We live in a rural location, so we have redundant 5G/Starlink.
It's getting pretty reasonable these days, with download speeds reaching 0.5 Gbit/sec per link, and latency is acceptable at ~20ms.
The main challenge is the upload speed; pretty much all the ISPs allocate much more spectrum for download rather than upload. If we could improve one thing with future wireless tech, I think upload would be a great candidate.
> The main challenge is the upload speed; pretty much all the ISPs allocate much more spectrum for download rather than upload.
For 5G, a lot of the spectrum is statically split into downstream and upstream in equal bandwidth. But equal radio bandwidth doesn't mean equal data rates. Downstream speeds are typically higher because multiplexing happens at one fixed point, instead of over multiple, potentially moving transmitters.
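As a toy illustration of that point (the spectral-efficiency numbers below are invented for the example, not measured from any real deployment):

```python
# Equal spectrum per direction does not imply equal data rates: the downlink is
# one fixed, well-sited transmitter, the uplink is many weaker, moving handsets,
# so achievable bits-per-Hz usually differ. All figures here are assumptions.
bandwidth_hz = 50e6         # assumed 50 MHz in each direction
downlink_bits_per_hz = 4.0  # assumed strong downlink
uplink_bits_per_hz = 1.5    # assumed weaker handset uplink

downlink_mbps = bandwidth_hz * downlink_bits_per_hz / 1e6
uplink_mbps = bandwidth_hz * uplink_bits_per_hz / 1e6
print(f"~{downlink_mbps:.0f} Mb/s down vs ~{uplink_mbps:.0f} Mb/s up from the same 50 MHz each way")
```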
You identified the problem in your statement: “the ISPs allocate…”. The provider gets to choose this, and if more bandwidth is available from a newer technology, their incentive is to allocate it to downloads so they can advertise faster speeds. It's not a technology issue.
With 5G, I have to downgrade to LTE constantly to avoid packet loss in urban canyons. Given the even higher frequencies proposed for 6G, I suspect it will be mostly useless.
Now, it's possible that raw Gb/s with unobstructed LoS is the underlying optimization metric driving these standards, but I would assume it's something different (e.g. tower capex per connected user).
There seem to be some integration issues between 5G Non-Standalone equipment and existing networks. Standalone or not, 5G outside of the millimeter wavelength bands ("mmWave") should behave like an all-around pure upgrade compared to 4G with no downsides, in theory.
I can grant that typical usage of wireless bandwidth doesn't require more than 10Mbps. So, what does "even faster" buy you?
The answer is actually pretty simple: in any given slice of spectrum there is a limited amount of data that can be transmitted. The more people you have chatting to a tower, the less available bandwidth there is. By having a transmission standard with theoretical capacities in the gigabits, tens of gigabits, or more, you make it so you can serve 10, 100, or 1,000 times more customers their 10Mbps content. It makes it cheaper for the carrier to roll out and gives a better experience for the end users.
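A crude way to see why headline multi-gigabit capacity still matters even if no single user needs more than about 10 Mb/s (all figures assumed for illustration):

```python
# Shared-sector arithmetic: peak sector capacity divided by per-user demand
# gives a rough ceiling on concurrently active users. Figures are assumptions.
per_user_mbps = 10
for sector_capacity_mbps in (150, 1_000, 10_000):
    users = sector_capacity_mbps // per_user_mbps
    print(f"{sector_capacity_mbps:>6} Mb/s sector -> ~{users} users at {per_user_mbps} Mb/s each")
```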
> Transmitting high-end 4K video today requires 15 Mb/s, according to Netflix. Home broadband upgrades from, say, hundreds of Mb/s to 1,000 Mb/s (or 1 Gb/s) typically make little to no noticeable difference for the average end user.
What I find fascinating is that in a lot of situations mobile phones are now way faster than wired internet for lots of people. My parents never upgraded their home internet despite fibre being available. They have 80MBit via DSL. Their phones, however, thanks to regular upgrades, now have unlimited 5G and are almost 10 times as fast as their home internet.
> Transmitting high-end 4K video today requires 15 Mb/s, according to Netflix.
It doesn't really change their argument, but to be fair, Netflix has some of the lowest picture quality of any major streaming service on the market; their version of "high-end 4K" is so heavily compressed that it routinely looks worse than a 15-year-old 1080p Blu-ray.
"High-end" 4K video (assuming HEVC) should really be targeting 30 Mb/s average, with peaks up to 50 Mb/s. Not "15 Mb/s".
On one hand it's nice that the option for that fast wireless connection is available. But on the other hand it sucks that having it means the motivation for ISPs to run fiber to homes in sparse towns goes from low down to none, since they can just point people to the wireless options. Wireless doesn't beat the reliability, latency, and consistent speeds of a fiber connection.
It's not that mobile is fast, it's that home internet is slow. It's the same reason home internet in places like Africa, South Korea and Eastern Europe is faster than in the USA and Western Europe: home internet was built out on old technology (cable/DSL) and never upgraded because (cynically) incumbent monopolies won't allow it or (less cynically) governments don't want to pay to rip up all the roads again.
Several Western European countries have deployed XGS-PON at scale, offering up to 10 Gbps, peaking at ~8 Gbps in practice. Hell I even have access to 25 Gbps P2P fiber here in Switzerland.
Also you can deliver well over 1 Gbps over coax or DSL with modern DOCSIS and G.fast respectively. But most countries have started dismantling copper wirelines.
5G can be extremely fast. I get 600 MBit over cellular at home.
Is your T-Mobile underprovisioned? Where I am, T-Mobile 5G is 400Mbps at 2am, but slows to 5-10Mbps on weekdays at lunchtime and during rush hours, and on weekends when the bars are full.
Not to mention that the T-Mobile Home Internet router either locks up, or reboots itself at least twice a day.
I put up with the inconvenience because it's either $55 to T-Mobile, $100 to Verizon for even less 5G bandwidth, or $140 to the local cable company.
They're constantly running promotions: "get free smart glasses/video game systems/etc. if you sign up for gigabit." Turns out that gigabit is still way more than most people need, even if it's 2025 and you spend hours per day online.
> That's basically me. 80ms on LTE or 25ms on Starlink.
Used to be 1200-1500ms on BGAN, 160ms on 3G.
It has all the latency associated with cell networks combined with all the latency of routing all traffic through AWS.
As an added bonus, about 10% of sites block you outright because they assume a request coming from an AWS origin IP is a bot.
Maybe you are thinking of working around bufferbloat?
> The trick is that the entity that owns the wires has to provide/upgrade the network at cost, and anyone has the right to run a telco on top of the network.
If the common carrier is doing all the work, what’s the point of the companies on top? What do they add to the system besides cost?
Might as well get rid of them and have a national carrier.
For my area all the mobile network home internet options offer plenty of speed, but the bandwidth limitations are a dealbreaker.
Everyone I know still uses their cable/FTTH as their main internet, and mobile network as a hotspot if their main ISP goes down.
We're getting 30-50 Mbit/sec per connection on a good day.
…and we only pay for 500 MBit for my home fiber. (Granted, also 500 Mbit upload.)
(T-Mobile, Southern California)