
> Some websites will go past that.

Do you literally mean a website? Using a browser? What’s an example website that would go past that?



You can overwhelm a NAT in several ways.

UDP is connectionless, but typically a UDP communication is bidirectional. This means a NAT needs to inspect UDP packets and retain a mapping to direct incoming UDP packets to the right place. With no connection information this can only be done as an LRU cache or similar.
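A minimal sketch of that idea in Python (the class and port numbers are hypothetical, not any real NAT implementation): the NAT's UDP mapping table behaves like a fixed-size LRU cache keyed on the flow, evicting the least-recently-used mapping when a new flow arrives and the table is full.

```python
from collections import OrderedDict

class UdpNatTable:
    """Toy model of a NAT's UDP mapping table as an LRU cache.

    Keys are (internal_ip, internal_port, dest_ip, dest_port) flows;
    values are the external port the NAT assigned to the flow.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.mappings = OrderedDict()  # flow -> external port, in LRU order
        self.next_port = 1024

    def lookup_or_create(self, flow):
        if flow in self.mappings:
            self.mappings.move_to_end(flow)  # refresh: recently used
            return self.mappings[flow]
        if len(self.mappings) >= self.capacity:
            # Table is full: drop the least-recently-used flow. Replies
            # for the evicted flow now have nowhere to go.
            self.mappings.popitem(last=False)
        port = self.next_port
        self.next_port += 1
        self.mappings[flow] = port
        return port

nat = UdpNatTable(capacity=2)
nat.lookup_or_create(("10.0.0.2", 5000, "1.1.1.1", 53))
nat.lookup_or_create(("10.0.0.2", 5001, "8.8.8.8", 53))
nat.lookup_or_create(("10.0.0.2", 5002, "9.9.9.9", 53))  # evicts the first flow
```

Open enough simultaneous UDP flows and older mappings get evicted while their replies are still in flight, which the application just sees as packet loss or a timeout.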

TCP is connection oriented, and a NAT might rapidly free up resources when a connection is closed (ie, when the final FIN has been ACKed). But if there's no FIN, the NAT is in the same case as it is with UDP. Making a lot of connections without closing them fills up NAT buffers.

When you have a home NAT and a carrier-grade NAT you may get an impedance mismatch of sorts. The CGNAT might have insufficient ports allocated to your service to keep up with your home NAT, resulting in timeouts or dropped mappings. Your home NAT will have one set of mappings and the CGNAT another, and the two sets probably won't be exactly the same. This means some portion of the mappings held in memory are useless.

As a specific example, many years ago Google Maps would routinely trigger failures. Using Maps would load many tile images, which could overwhelm a NAT or CGNAT. The result was a map with holes in it where some tiles failed to load.

Browsers have long had limits on concurrent connections per domain. Total concurrent connection limits are also old news, but are not quite as old as per-domain limits. You probably can't make a NAT choke with just simple web requests (even AJAX) any more. You might be able to do it using, eg, WebRTC APIs, though I would be surprised if those aren't also subject to limits.


I remember being able to overwhelm my first "home router" with the "Browse for servers" tab in Counter Strike 1.6! It would fetch a list of all servers from Steam, and then connect to them individually, eventually killing my router.


"Using Maps would load many tiles images, which could overwhelm NAT of CGNAT."

Just curious, were these image resources all hosted on the same domain?


No, and that's by design, as many browsers limit you to two http-connections per domain. When you're loading tens of images (like map tiles), you want to use as many different subdomains as possible to load them in parallel.
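The sharding trick can be sketched like this (the hostnames are made up for illustration): hash each tile's coordinates to pick one of a few subdomains, so the browser's per-host connection cap applies to each shard independently.

```python
def tile_url(x, y, zoom, shards=4):
    """Spread tile requests across several subdomains so the browser's
    per-host connection limit applies to each shard independently.

    A deterministic shard choice keeps every tile on the same hostname
    across page loads, which preserves browser caching.
    """
    shard = (x + y) % shards  # simple, deterministic shard choice
    return f"https://tiles{shard}.example.com/{zoom}/{x}/{y}.png"

print(tile_url(10, 20, 5))  # https://tiles2.example.com/5/10/20.png
```

The flip side, relevant to this thread: every shard multiplies the number of concurrent connections the page opens, which is exactly what pressures the NAT's mapping table.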


With HTTP/2 one is way better off using one connection to one domain instead.

How has the world changed.


For many years before HTTP/2, I had been using HTTP/1.1 pipelining, outside the browser, to download hundreds of files over a single TCP connection.


I'm afraid I don't recall. I suspect that they could not have been, based on best practices for performance at the time and the fact that the problem existed at all. I did, however, find a reference to the problem:

https://meetings.apnic.net/32/pdf/Miyakawa-APNIC-KEYNOTE-IPv...

Slides 12-15 show the degradation of Maps in action. 20 connections per user is a heavily over-committed CGNAT, but that level of port sharing does happen.


But the "connections per user" limit is per-webserver. You'd have to have thousands of users simultaneously loading maps off the same google server just to run out of ports on one IP.

I bet you could put 10k people behind each IP and never even get close to an issue of this type.


Carrier grade NAT puts thousands of users behind the same IP address - that's what it's for.

You can't put 10k people behind each IP and not have problems. That's about 6.5 ports per person, and you need one port for each connection. Pretty much any website will have issues with that little connectivity.


> Carrier grade NAT puts thousands of users behind the same IP address - that's what it's for.

It doesn't have to. 100:1 would work just fine. With IPs being about $25 each that's an acquisition cost of less than a dollar per user.

> That's 6.5 ports per person, you need one for each connection.

That's not how connections work. Each user could make a million connections as long as they're spread around different servers. The 65k limit applies to simultaneous connections to a single webserver. Only the most-connected server matters, so probably something at google/youtube/facebook, and even then most of those servers have multiple IPs.
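A rough way to see this, sketched in Python (the addresses are made up): a NAT mapping is keyed on the full connection tuple, so the same external port can be reused simultaneously for different destinations. Ports only run out per destination, not per user.

```python
# Toy illustration: NAT port exhaustion is per destination, not global.
# A TCP connection through a NAT is identified by the tuple
# (external_port, dest_ip, dest_port), so one external port can carry
# connections to many different servers at the same time.

active = set()

def add_connection(ext_port, dest):
    key = (ext_port, dest)
    if key in active:
        raise RuntimeError("port collision for this destination")
    active.add(key)

# The same external port serves three different destinations at once:
for dest in [("142.250.0.1", 443), ("31.13.0.1", 443), ("1.1.1.1", 443)]:
    add_connection(40000, dest)

print(len(active))  # 3 concurrent connections sharing one external port
```

So the ~65k ceiling only bites when many users behind one CGNAT IP all hit the same server IP and port, which is why heavily shared front ends are the pressure point.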


Wouldn't websockets be impacted by this limit?


Yes but I've yet to see a website use more than 10 simultaneous websocket connections, let alone 1000.


There's something like a 256 count limit on total websockets, and 30 per domain, in Chromium.

A malicious website could open up 256 websockets and as many HTTP connections as the browser allows, and that might be enough to swamp cheaper NATs.

See https://bugs.chromium.org/p/chromium/issues/detail?id=12066 for some 2009 discussion about people having trouble using the web when background tabs held connections open for polling. That wasn't a NAT issue, but it does highlight that a decade or two ago we all thought we only needed to manage tens of connections for a host to be online, and that rapidly spiralled into hundreds.



