Readit News
trout commented on Google Cloud Global Loadbalancer Outage   status.cloud.google.com/i... · Posted by u/brian-armstrong
iowahansen · 7 years ago
https://aws.amazon.com/elasticloadbalancing/details/#details

IP addresses as Targets You can load balance any application hosted in AWS or on-premises using IP addresses of the application backends as targets. This allows load balancing to an application backend hosted on any IP address and any interface on an instance. You can also use IP addresses as targets to load balance applications hosted in on-premises locations (over a Direct Connect or VPN connection), peered VPCs and EC2-Classic (using ClassicLink). The ability to load balance across AWS and on-prem resources helps you migrate-to-cloud, burst-to-cloud or failover-to-cloud.

Looks like you need an active VPN connection to access external IPs.

trout · 7 years ago
That feature requires you to use a private IP address, so if you have a VPN or Direct Connect to another location you could load balance across locations. In the case of the global load balancers those will be public addresses though.

"The IP addresses that you register must be from the subnets of the VPC for the target group, the RFC 1918 range (10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16), and the RFC 6598 range (100.64.0.0/10). You cannot register publicly routable IP addresses."

[1] https://docs.aws.amazon.com/elasticloadbalancing/latest/netw...
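A quick way to sanity-check an address against the ranges quoted above, using Python's stdlib `ipaddress` (the helper name is my own for illustration, not part of any AWS SDK):

```python
import ipaddress

# The ranges the docs allow for IP targets: RFC 1918 private space
# plus RFC 6598 carrier-grade NAT space. Public addresses are rejected.
ALLOWED_RANGES = [
    ipaddress.ip_network("10.0.0.0/8"),      # RFC 1918
    ipaddress.ip_network("172.16.0.0/12"),   # RFC 1918
    ipaddress.ip_network("192.168.0.0/16"),  # RFC 1918
    ipaddress.ip_network("100.64.0.0/10"),   # RFC 6598
]

def is_valid_ip_target(addr: str) -> bool:
    """Return True if addr falls in a range registrable as an IP target."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in ALLOWED_RANGES)

print(is_valid_ip_target("10.1.2.3"))    # private: registrable
print(is_valid_ip_target("54.23.9.10"))  # publicly routable: rejected
```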

trout commented on Aws “Advanced Consulting Partner” not professional, what to do?    · Posted by u/oriettaxx
oriettaxx · 7 years ago
thanks for your link

yes, for partners at the higher (competency) level, I read that firms applying to be partners have to be checked by a 3rd party:

"Once your firm’s application has been submitted through the APN (AWS, ed.) Portal, the APN Team will review for compliance, then send to the third party audit firm to coordinate scheduling of the technical review."

so there is somehow a double check on partner competencies.

So, as I see it, we made the mistake of choosing a "normal" partner and not one with competencies. Do you think AWS cares somehow to know about our "bad" experience, so it can build a better network of partners? Or should we expect them to tell us: get a "competent" partner?

trout · 7 years ago
Mistake seems like a harsh word here. There are lots of partners that aren't in the competency tier that do perfectly fine work. But we try to highlight the ones we can somehow quantify as 'top tier', which is what the competency designation does. It's imperfect, just like any other subjective rating.

AWS definitely cares about any bad experiences. It's the way we improve things for customers, so let us (or me, or anyone at AWS) know the details.

trout commented on Aws “Advanced Consulting Partner” not professional, what to do?    · Posted by u/oriettaxx
trout · 7 years ago
AWS keeps a list of vetted partners (business requirements, public references, case studies, good AWS relationship, etc) on the competency page.

You can see the different sorts of competencies here, in case your solution has a specific vertical or technology focus: https://aws.amazon.com/partners/competencies/

You would want to focus on the consulting partners for this type of engagement.

If you're not sure, it sounds like it's more of a migration use case and you can get a short list of folks here: https://aws.amazon.com/migration/partner-solutions/

If you know your AWS account team, they'd like to get that feedback. Otherwise my contact information is in my profile and you can email me and I can try to connect you to some AWS folks responsible for the partner as well.

trout commented on Announcing Docker 1.9: Production-Ready Swarm and Multi-Host Networking   blog.docker.com/2015/11/d... · Posted by u/ah3rz
ninkendo · 10 years ago
How is the multi-host networking implemented? Is there a dependency on a service discovery system? What service discovery system? Or are they using portable IP addresses? How are those implemented? Overlay networks? BGP? Or is it doing some crazy IPTables rules with NAT?

Will it work with existing service discovery systems? What happens if a container I'm linked to goes down and comes up on another host? Do I get transparently reconnected?

There's so much involved with the abstraction they're making that I'm getting a suspicion that it's probably an unwieldy beast of an implementation that probably leaks abstractions everywhere. I'd love to be proven otherwise, but their lack of details should make any developer nervous to bet their product on docker.

trout · 10 years ago
This link is basically what Socketplane was working on when they got acquired: https://github.com/docker/docker/issues/8951

Basically integrating OVS APIs into Docker so it could use more mature networking code as well as VXLAN forwarding. VXLAN is basically IP encapsulation (with a 24-bit network ID) that the networking industry has standardized on. It more or less allows for L2 over L3 links. I like to think of it as the next Spanning Tree.
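For the curious, the VXLAN header itself is tiny; per RFC 7348 it's 8 bytes carrying a 24-bit VNI, wrapped in outer IP/UDP (dst port 4789) around the inner Ethernet frame. A minimal sketch (not Docker's actual code):

```python
import struct

VXLAN_FLAGS_VNI_VALID = 0x08  # "I" flag: the VNI field is valid (RFC 7348)

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header that precedes the inner L2 frame."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit identifier")
    # flags (1B) + reserved (3B), then VNI (3B) + reserved (1B)
    return struct.pack("!BBBB", VXLAN_FLAGS_VNI_VALID, 0, 0, 0) + \
           struct.pack("!I", vni << 8)

def vxlan_vni(header: bytes) -> int:
    """Recover the 24-bit VNI from a VXLAN header."""
    return struct.unpack("!I", header[4:8])[0] >> 8

hdr = vxlan_header(5000)
assert len(hdr) == 8 and vxlan_vni(hdr) == 5000
```

The 24-bit VNI is what makes it attractive over 802.1Q VLANs: ~16 million segment IDs instead of 4094.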

So the unwieldy part is the weight OVS brings as well as the VXLAN encapsulation in software - both of which have momentum towards being more lightweight.

trout commented on Enterprise Sales Guide: The Process of Selling Enterprise Software Demystified   enterprisesales.nyc/... · Posted by u/mickeygraham
johnward · 10 years ago
As someone on the implementation side I have a sour taste for Sales and Sales Engineers. Always selling features we do not have or things we cannot do. Then they mock it up and leave and I have to break the news to the customer.
trout · 10 years ago
I've been on the break/fix side and on the Sales Engineer side and know both - there's never just one side to these things.

Sales Engineers in particular don't want to sell something dishonestly - it opens the company up to risk and it hurts their credibility, particularly if you sell multiple products.

There are a few reasons I can think of:

1. The technical people that run the current implementation didn't put their requirements into the sales process.

2. Some of those features really don't matter to the business and were fodder.

3. The requirements were listed, but not accurately or with enough depth.

4. The Sales team didn't have enough knowledge (or training) to know the difference - or inaccurate documentation.

5. The feature was roadmapped close enough to the implementation date (even accounting for delays) that the sale went ahead anyway.

6. The competitor claims to have this feature but theirs is broken also, so it's a race to who can sell broken stuff faster - because nobody can truly do it.

The people and the companies that support this exist certainly - just not for very long.

trout commented on Skype group video calling becomes free   blogs.skype.com/2014/04/2... · Posted by u/Siyfion
izzydata · 11 years ago
I feel like this was the only thing on the premium feature list that even mattered. Does skype even generate revenue?
trout · 11 years ago
Skype pulls in a lot of revenue in the OCS/Lync product set. Companies still largely want to be able to IM MSN, Yahoo, Skype, and AOL users, and Skype is the only one with access to all those user groups. They also leverage B2C video/calls to Skype users.
trout commented on Whatever happened to the IPv4 address crisis?   networkworld.com/news/201... · Posted by u/kenrose
exabrial · 12 years ago
Truth is NAT works just fine for the vast majority of cases, and makes a layered (IE not-eggs-all-in-one-basket) approach to security much simpler.

The real problem is routing table size with BGP. As we continue to divide the internet into smaller routable blocks, this requires an exponential amount of memory in BGP routers. Currently, the global BGP table requires around 256 MB of RAM. IPv6 makes this problem 4 times worse.

IPv6 is a failure; we don't actually _need_ everything to have a publicly routable address. There were only two real problems with IPv4: wasted space on legacy headers nobody uses, and NAT traversal. The IETF thumbed their noses at NAT (not-invented-here syndrome) and, instead of solving real problems with a pave-the-cowpaths approach, they opted to design something that nobody has a real use for.

Anyway, I'm hoping a set of brilliant engineers comes forward to invent IPv5, where we still use 32-bit public addresses to be backward compatible with today's routing equipment, but with some brilliant hack re-using unused IPv4 headers to allow direct addressing through a NAT.

Flame away.

trout · 12 years ago
Not a flame - your perspective is very typical for people that don't have a lot of experience with networking past the host or server level (i.e., little experience with core or provider networking, or with putting together a network services architecture).

1. In theory the routing table with IPv6 can be smaller. The address design should be hierarchical, which means you should be able to have much fewer routes. It's too early to tell if this is actually true or not, but the addresses themselves are 4x larger - which isn't going to be the determining factor in routing table size.

2. Not everything needs to be publicly routable, true. IPv6 has the idea of link-local and unique local addressing, which IPv4 doesn't have; the RFC 1918 block was used instead. But think for a second: there are only 4 billion addresses (fewer when you count bogons and multicast ranges), and it's only a matter of time until those are taken up. So we can choose to do it now, 2 years from now, or 5 years from now, but devices are growing faster than ever and it's only a function of time.

3. NAT is not a security feature, is not good for the internet, and the sunk costs spent building an ALG for every protocol to work around it is a significant development sinkhole. It's a workaround often masqueraded as security, and does cause many application problems. It's just not normally the application developers that have to fix those problems - it's the network and security teams.

4. IPv6 was created in the late 1990s. People have been waiting for brilliance to supersede IPv6 for a while. I'll admit it's not the easiest, but there's a certain set of problems you run into when you expand the address space.

5. I'm familiar with all the IPv4 header fields, and nearly all of them are used. ID is used for packet identification, particularly through network services; DSCP is used heavily; DF and other flags are used - they're just obscure. If you look at IPv6, those same fields are basically recreated, though with slightly different names. The ones that aren't included are addressable through the extension headers.
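Point 1 (hierarchical allocation shrinking the routing table) is easy to see with a toy aggregation example using Python's stdlib `ipaddress` (the 2001:db8:: documentation prefix stands in for a provider block):

```python
import ipaddress

# Four contiguous customer blocks carved hierarchically out of one
# provider allocation. Individually they'd be four routes.
customer_blocks = [
    ipaddress.ip_network("2001:db8:0:0::/52"),
    ipaddress.ip_network("2001:db8:0:1000::/52"),
    ipaddress.ip_network("2001:db8:0:2000::/52"),
    ipaddress.ip_network("2001:db8:0:3000::/52"),
]

# Because they share a parent prefix, they collapse into a single
# /50 advertisement: one route in the global table instead of four.
aggregated = list(ipaddress.collapse_addresses(customer_blocks))
print(aggregated)
```

With provider-assigned hierarchical space, the core only ever sees the aggregate; the scattered, non-aggregatable allocations of legacy IPv4 space are exactly what bloats the table today.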

So, yeah. That's another perspective that may help you understand why IPv6 is a bit of a quagmire. The faster people understand this, the sooner we get to a place where the chicken-egg problem fades away.

trout commented on Whatever happened to the IPv4 address crisis?   networkworld.com/news/201... · Posted by u/kenrose
trout · 12 years ago
Here's a report where you can see the current projections, with a bit of history: http://www.potaroo.net/tools/ipv4/index.html

The potaroo site by Geoff Huston has been running for over a decade tracking address consumption.

Some history for ARIN consumption predictions: Feb 2014 predicts Mar 2015.

Oct 2013 predicts Jan 2015 [0].

Apr 2013 predicts Apr 2014 [1].

Nov 2012 predicts Sept 2013 [2].

Sep 2012 - RIPE out of addresses.

Dec 2011 predicts July 2013 [3].

July 2011 predicts Nov 2013 [4].

Apr 2011 - APNIC out of addresses.

Feb 2011 - IANA out of addresses.

Prior to this it's simply about IANA calculations, though with some algebra some dates could be extracted.

As well, here's a Cisco article from 2005 describing some of the painful parts of trying to predict the address consumption (where they guess 2016 in 2005): http://www.cisco.com/web/about/ac123/ac147/archived_issues/i...
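The prediction drift above comes from extrapolating a noisy, non-linear consumption curve. A toy version of the idea (made-up numbers, and a plain least-squares line rather than potaroo's far more elaborate model):

```python
# Toy depletion forecast: fit a line to the remaining address pool over
# time and find where it crosses zero. Every new data point shifts the
# fit, which is why the predicted date keeps moving.
def predict_depletion(samples):
    """samples: list of (year, remaining /8-equivalents). Returns the
    year at which the fitted line hits zero."""
    n = len(samples)
    sx = sum(t for t, _ in samples)
    sy = sum(r for _, r in samples)
    sxx = sum(t * t for t, _ in samples)
    sxy = sum(t * r for t, r in samples)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return -intercept / slope  # zero-crossing of the fitted line

# Made-up ARIN-style history: remaining /8-equivalents shrinking yearly.
history = [(2011.0, 6.0), (2012.0, 4.5), (2013.0, 3.2), (2014.0, 1.8)]
print(round(predict_depletion(history), 1))
```

Real consumption accelerates as exhaustion nears (last-minute requests, policy changes), so linear fits like this systematically predicted too late, which matches the slipping dates in the list above.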

[0] http://web.archive.org/web/20111227105916/http://www.potaroo... [1] http://web.archive.org/web/20111227105916/http://www.potaroo... [2] http://web.archive.org/web/20121122120407/http://www.potaroo... [3] http://web.archive.org/web/20111227105916/http://www.potaroo... [4] http://web.archive.org/web/20110709090704/http://www.potaroo...

trout commented on Tell HN: Server Status    · Posted by u/kogir
cincinnatus · 12 years ago
I'm sure it has been asked many times before, but I'd love to hear the latest thinking... Why in 2013 is HN still running on bespoke hardware and software? If a startup came to you with this sort of legacy thinking you'd laugh them out of the room.
trout · 12 years ago
If HN was on AWS, where would we go to discuss AWS outages?
trout commented on A crossword based on the Adobe password leak   zed0.co.uk/crossword/... · Posted by u/mdisraeli
aviraldg · 12 years ago
trout · 12 years ago
It looks like the explain xkcd community finally cracked the codes: http://www.explainxkcd.com/wiki/index.php?title=1286:_Encryp...

Check out the discussion - it took a while to find a solution to the last 3.

u/trout

Karma: 373 · Cake day: November 18, 2010
About
I love computer networking.

Email is matt.h nick @ [gmail.com]
