* not all cameras being available
* stabilisation not working
* 60 FPS unavailable
DataPacket has a very large network though and is kind of, sort of EU-based. AFAIK most operations are in Czechia, but the company is registered in the UK. And there's also the Luxembourg-based Gcore.
VNPT is a residential / mobile ISP, but they also run datacentres (e.g. [1]) and offer VPS, dedicated server rentals, etc. Most companies would use separate ASes for residential vs hosting use, but I guess they don't, which would make them very attractive to someone deploying crawlers.
And Bunny Communications (AS5065) is a pretty obvious 'residential' VPN / proxy provider trying to trick IP geolocation / reputation providers. Just look at the website [2], it's very low effort. They have a page literally called 'Sample page' up and the 'Blog' is all placeholder text, e.g. 'The Art of Drawing Readers In: Your attractive post title goes here'.
Another hint is that some of their upstreams are server-hosting companies rather than transit providers that a consumer ISP would use [3].
[1] https://vnpt.vn/doanh-nghiep/tu-van/vnpt-idc-data-center-gia... [2] https://bunnycommunications.com/ [3] https://bgp.tools/as/5065#upstreams
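The upstream check in [3] can be rough-scripted. Here's a hedged sketch of the heuristic: flag upstreams whose names sound like hosting companies rather than the transit providers a consumer ISP would buy from. The AS numbers and names below are made up for illustration, and the keyword list is just a guess at what "looks like hosting".

```python
# Heuristic sketch: flag upstream AS names that look like server-hosting
# companies. The upstream list here is illustrative, not real bgp.tools data.

HOSTING_KEYWORDS = ("hosting", "datacenter", "data center",
                    "server", "colo", "cloud", "vps")

def flag_hosting_upstreams(upstreams):
    """Return the (asn, name) pairs whose name suggests a hosting company."""
    return [
        (asn, name)
        for asn, name in upstreams
        if any(kw in name.lower() for kw in HOSTING_KEYWORDS)
    ]

# Hypothetical upstream list for a 'residential' AS under suspicion:
example = [
    (64500, "Example Transit Backbone"),
    (64501, "Acme Dedicated Server Hosting"),
    (64502, "Big Cloud Datacenter Ltd"),
]
print(flag_hosting_upstreams(example))
```

Obviously name-matching is a weak signal on its own; it's the combination with the placeholder website and the upstream mix that makes the case.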
During the 2024 Summer Olympics, I had just returned from summer holiday to my then employer, whose DNS and core network I was still managing. The helpdesk told me that our users at various campus locations were unable to open the national TV broadcaster's streaming service to watch the games.
By asking a few of these users, I found out they were being rejected with a message claiming they were in the UK and that the streaming service was not available abroad. Once I got hold of someone at the broadcaster who knew anything about the matter, they replied that they use MaxMind's GeoIP service. So I went to test a few addresses on MaxMind's debug page, and it clearly showed the same wrong location for many addresses across roughly 20 subnets of our /16 IPv4 block.
So I emailed MaxMind support asking why, and tried to find out what means they use to determine where each network is located and to populate their GeoIP DB, which clients then either mirror or query remotely as a service.
After a few emails with their support, it emerged that they did not use the RIPE (RIR) database at all, since RIPE's terms of use don't allow using RIR information for commercial purposes. Apparently MaxMind did not use WHOIS (RDAP) location information either, and the wrong entries were not corrected from the LOC records we had in DNS.
I never got any explanation of how they figure out where an IP or CIDR block is actually being used. Reading between the lines, I assumed it's some kind of trade secret they don't like to talk about. Maybe they use mobile devices' location services or the like, but with the amount of VPN use these days, that could lead them to push bogus information into the database service they sell and naive customers trust <eh>.
But what surprised me most was how easy it was to update the information: basically, just by communicating clearly and writing a polite, convincing message, they seemed to take it pretty much at face value, along with the fact that I was sending my messages from the DNS SOA RNAME address.
But if GeoIP data providers don't use those records, then who or what services do? That I still have no idea about.
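For what it's worth, the DNS LOC records mentioned above have a human-readable presentation format (RFC 1876) that's easy to decode, which makes it frustrating that geolocation providers apparently ignore them. A minimal decoder sketch; the coordinates are arbitrary examples, and real records may omit the minutes/seconds fields, which this sketch doesn't handle:

```python
# Sketch: decode the RFC 1876 presentation form of a DNS LOC record
# ("D M S {N|S} D M S {E|W} alt ...") into decimal latitude/longitude.
# Assumes the full deg/min/sec form; trailing altitude/size fields are ignored.

def loc_to_decimal(loc: str):
    """Parse a LOC presentation string into (lat, lon) decimal degrees."""
    parts = loc.split()

    def dms(d, m, s, hemi):
        val = float(d) + float(m) / 60 + float(s) / 3600
        return -val if hemi in ("S", "W") else val

    lat = dms(parts[0], parts[1], parts[2], parts[3])
    lon = dms(parts[4], parts[5], parts[6], parts[7])
    return lat, lon

lat, lon = loc_to_decimal("52 22 23.000 N 4 53 32.000 E -2.00m")
print(round(lat, 4), round(lon, 4))
```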
If you're just renting servers instead, you have a few options that are effectively closer to a 1% commit, but you'd better have a plan B for when your upstreams drop you because the incoming attack traffic starts disrupting other customers - see Neoprotect having to shut down their service last month.
How about we actually finally roll out IPv6 and bury CGNAT in the graveyard where it belongs?
Suddenly, everybody (ISPs, carriers, end users) can blackhole a compromised IP and/or IP range without affecting non-compromised endpoints.
And DDoS goes poof. And, as a bonus, we get the end to end nature of the internet back again.
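The granularity argument can be illustrated with Python's ipaddress module. The /32 ISP allocation and /56 per-subscriber delegation below are assumptions (delegation sizes vary by ISP), but the point is that every subscriber gets a disjoint prefix, so a blackhole on one prefix can't collaterally hit a neighbour the way nuking a shared CGNAT IPv4 address does:

```python
import ipaddress

# Behind CGNAT, blackholing one IPv4 address knocks out every subscriber
# sharing it. With IPv6, each subscriber typically gets a dedicated prefix
# (/56 assumed here), so a blackhole affects only the compromised endpoint.

isp_block = ipaddress.ip_network("2001:db8::/32")   # example ISP allocation
subs = isp_block.subnets(new_prefix=56)             # per-subscriber /56s (lazy)
compromised = next(subs)
neighbour = next(subs)

print(f"distinct /56 delegations in a /32: {2 ** (56 - isp_block.prefixlen)}")
# Blackholing the compromised /56 leaves the neighbour's prefix untouched:
print(compromised.overlaps(neighbour))
```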
Insofar as it makes a difference for DDoS mitigation, the scarcity of IPv4 is more of a feature than a bug.
Almost all of the DDoS mitigation providers have been struggling for a few weeks because they just don't have enough edge capacity.
And normal hosting companies that are not focused on DDoS mitigation also seem to have had issues, but with less impact to other customers as they'll just blackhole addresses under larger attacks. For example, I've seen all connections to / from some of my services at Hetzner time out way more frequently than usual, and some at OVH too. Then one of my smaller hosting providers got hit with an attack of at least 1 Tbps which saturated a bunch of their transit links.
Cloudflare and maybe a couple of the other enterprise providers (Gcore?) operate at a large enough scale to handle these attacks, but all the smaller ones (who tend to have more affordable rates and more application-specific filters for sensitive applications that can't deal with much leakage) seem to be in quite a bad spot right now. Cloudflare Magic Transit pricing supposedly starts at around $4k / month, and it would really suck if that became the floor for being able to run a non-HTTP service online.
Something like Team Cymru's UTRS service (with Flowspec support) could potentially help to mitigate attacks at the source, but residential ISPs and maybe the T1s would need to join it, and I don't see that happening anytime soon.
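For illustration, a Flowspec rule distributed through something like UTRS might look roughly like this in ExaBGP's configuration syntax. The peer addresses, AS numbers, matched prefix, and port are all illustrative, and the exact syntax should be checked against the ExaBGP documentation; this is a sketch of the idea, not a working config:

```
neighbor 192.0.2.1 {
    router-id 192.0.2.2;
    local-address 192.0.2.2;
    local-as 64500;
    peer-as 64501;
    family {
        ipv4 flow;
    }
    flow {
        route drop-udp-flood {
            match {
                destination 203.0.113.10/32;
                protocol udp;
                source-port =123;
            }
            then {
                discard;
            }
        }
    }
}
```

The attraction over plain RTBH is that the victim stays reachable: only the matched attack traffic (here, a hypothetical NTP reflection flood) is dropped at the sources, rather than all traffic to the destination.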
I was taught about this in engineering school, as part of a general engineering course also covering things like bathtub reliability curves and how to calculate the number of redundant cooling pumps a nuclear power plant needs. But it's a long time since I was in college.
Is this sort of thing still taught to engineers and developers in college these days?
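The redundant-pump calculation is the classic k-out-of-n availability problem: with independent failures, sum the binomial tail. A quick sketch; the per-pump failure probability and the system reliability target are made-up numbers for illustration:

```python
from math import comb

# k-out-of-n redundancy: the system works if at least k of n pumps survive,
# each pump failing independently with probability p over the mission time.

def system_failure_prob(n, k, p):
    """P(fewer than k of n pumps survive)."""
    return sum(comb(n, i) * (1 - p) ** i * p ** (n - i) for i in range(k))

def pumps_needed(k, p, max_failure=1e-6):
    """Smallest n such that the system failure probability meets the target."""
    n = k
    while system_failure_prob(n, k, p) > max_failure:
        n += 1
    return n

# E.g. needing 2 working pumps, each failing with p = 0.01:
print(pumps_needed(2, 0.01))
```

The independence assumption is the part real reliability engineering spends most of its time attacking, since common-cause failures (shared power, shared coolant supply) dominate in practice.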
My infrastructure is redundant and spread out among hosting providers and DCs, so there's no real impact, but I'm pretty sure this is the longest outage I've ever had with any provider. And the communication has been so disappointing. Four hours to say it's a power / HVAC issue? Since then, updates that basically just say they're still working on it.
Not associated with Meta, but this piqued my interest. That said, I found some parts confusing and hard to follow. For example, what does uRPF (Unicast Reverse Path Forwarding) in the title of this submission have to do with the contents?
And is the packet loss supposedly happening at specific times only? It's not mentioned anywhere, but one screenshot highlights the time. I couldn't reproduce the packet loss using any of the looking glasses and dest IP addresses in the screenshots. At this point, if this were a report I had received about one of my services, I would probably have bumped the priority down to low and asked for a reproducible test, because in my experience even issues that affect a single path in an ECMP group are not this hard to reproduce. I think it's far more important to give the engineer who will process the report an easy way to check that there is indeed a problem than to start teaching them how traceroute works.
TBF, there does seem to be an issue somewhere, because sticking 129.134.80.234, one of the Meta IP addresses from a screenshot, on ping.pe does definitely show significant packet loss from more locations than you'd expect to see for an address with no connectivity issues.
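On the ECMP point: routers typically pick a member link by hashing the flow 5-tuple, which is why a fault on one path is only seen by the subset of flows that hash onto it, and why probes should vary source ports to exercise all paths. A toy sketch of the idea; the hash function and path count are illustrative, not what any real router uses:

```python
import hashlib

# Toy ECMP path selection: hash the 5-tuple, take it modulo the number of
# equal-cost paths. Two probes differing only in source port can land on
# entirely different member links.

NUM_PATHS = 4  # assumed ECMP fan-out

def ecmp_path(src_ip, dst_ip, proto, src_port, dst_port):
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % NUM_PATHS

# Vary the source port across 16 probe flows and see which paths get hit:
paths = {ecmp_path("198.51.100.7", "129.134.80.234", "udp", sport, 33434)
         for sport in range(1024, 1040)}
print(f"paths exercised by 16 probe flows: {sorted(paths)}")
```

This is also why a tool like ping.pe probing from many vantage points (and thus many 5-tuples) can surface loss that a single-source traceroute misses.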