AT&T's home gateways have a maximum NAT translation table of 1024 connections. Some websites will go past that. A torrent client almost certainly will. And now that people are working from home, there's a good chance that having multiple computers will only make that 1024-entry limit even more laughable.
EDIT: okay I'm wrong. It's 8192 connections, not 1024 connections. But still ridiculously low
Just as an FYI/aside: it is fairly trivial to root AT&T home gateways, pull the certs, and use your own hardware to authenticate to the network, removing their hardware from your stack entirely except for the ONT (goodbye, internet downtime due to random uncontrolled gateway "upgrades"). You just need a router capable of 802.1x client auth.
Throughput both ways actually gets really close to what I am paying for with this configuration, whereas before, with the default gateway (regardless of configuration), I was lucky to see half of the gigabit speeds I have been paying for.
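For anyone attempting the same bypass, the 802.1x client-auth piece typically ends up as a wpa_supplicant config on the WAN interface facing the ONT. This is a sketch only: the cert paths are placeholders, and the identity (commonly the gateway's MAC address) must match whatever was actually extracted from the AT&T box.

```
# /etc/wpa_supplicant/att-wan.conf -- illustrative only; paths and
# identity are placeholders for the values pulled from the gateway
eapol_version=1
ap_scan=0
fast_reauth=1
network={
    key_mgmt=IEEE8021X
    eap=TLS
    identity="AA:BB:CC:DD:EE:FF"     # often the gateway's MAC address
    ca_cert="/etc/att/ca.pem"
    client_cert="/etc/att/client.pem"
    private_key="/etc/att/client.key"
}
```

Run with something like `wpa_supplicant -i eth0 -D wired -c /etc/wpa_supplicant/att-wan.conf`, substituting your actual WAN interface name.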
If you are willing to move to Ubiquiti hardware (recommended, today's security breach notwithstanding), there's a relatively straightforward bypass method: the authentication packets are forwarded from the ONT to the AT&T box, but it's otherwise out of the loop, and you get fully native routing with the Ubiquiti USG (a really nice router and ecosystem).
It's definitely not plug-and-play, but I've been using this setup for a year and a half and I get my full gigabit bandwidth throughout my network with lots of hosts.
This is true for existing installs, but AT&T recently moved to XGPON gateways with an integrated ONT. You can no longer bypass these gateways. Also, to my knowledge, you can't extract the certs from Pace gateways.
If an ISP is NAT'ing everyone (which I've heard referred to as an "InterNAT Service Provider"), does "bridge mode" mean you get a real public IP? How does that work with everyone else still behind the NAT?
(I have an actual end-to-end-connectable public IP from my ISP, which from the general discussion seems like an increasingly rare thing --- they keep pestering me to "upgrade" to outrageously faster yet slightly cheaper plans with a "free router included", so I suspect they are trying to get me to give up that IP...)
There are two different topics here. One is carrier grade NAT (CGNAT), which is used by ISPs that have run out of IPv4 addresses, so you don't get a real public IPv4 address, although you should have a public IPv6 one. If you're unlucky enough to be on one of these ISPs, there's likely not much you can do.
The other issue is ISP-provided gateways that handle authentication onto the ISP network, like AT&T fiber. These devices contain the certificates/keys needed to gain access to the network. Unfortunately, these devices also try to be more than just an auth device/gateway. In AT&T's case the gateway also handles some U-verse/IPTV services, so they don't have a true bridge mode where they send all traffic to another device. This approach then causes issues like update downtime or NAT table exhaustion.
Neither of these issues is caused simply by having an ISP-provided router. If an ISP wants to implement either approach, they will do so without your approval.
> carrier grade NAT (CGNAT), which is used by ISPs that have run out of IPv4 addresses … If you're unlucky enough to be on one of these ISPs, there's likely not much you can do.
I had the same SSH dropout problem, asked my ISP[1] to switch me from CGNAT to dedicated IPv4; they did, and it's fixed.
[1] Aussie Broadband, a smaller ISP in Australia renowned for good customer service.
Consider sending Aussie Broadband a link to my blog post. It should be a simple fix for them to raise the timeout, which should fix the problem for all their customers.
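Until an ISP raises its CGNAT idle timeout, a common client-side workaround for SSH dropouts like these is to have the client send keepalives faster than the NAT's idle timer expires. A sketch (the 60-second interval is an assumption; it just needs to be shorter than the carrier's timeout):

```
# ~/.ssh/config -- keep idle sessions alive through a CGNAT
Host *
    ServerAliveInterval 60   # probe the server every 60 s inside the SSH channel
    ServerAliveCountMax 3    # give up after 3 unanswered probes
```

This only papers over the symptom for SSH; raising the timeout on the ISP's side fixes it for every protocol.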
You can still get around this with some effort [1] and a pfSense box: the pfSense box gets WAN from the ONT, and the original AT&T router hangs off a third NIC where it's allowed to do 802.1x and nothing else. The setup was a little challenging at first but has been maintenance-free since. Maybe there is a technical reason they have their network set up this way, but I was offended at the idea of being prevented from using my own router.
> One is carrier grade NAT (CGNAT), which is used by ISPs that have run out of IPv4 addresses so you don't get a real public IPv4 address, although you should have a public IPv6. If you're unlucky enough to be on one of these ISPs there's likely not much you can do.
This is true. Your options look like:
1. Get a new ISP
2. Get a VPN that supplies you with a public IP (these exist)
3. Hope you can do whatever you need on IPv6 instead
Some CGNAT ISPs will also sell service with a public IPv4 for a premium. That's probably the most "user-friendly" option but it's also probably something they don't advertise and you need to ask for explicitly, if offered.
1k certainly seems absurdly small considering how much RAM routers likely have, the fact that they can use most of it, and the amount of data needed for a single connection-table entry: 2 bytes external port, 2 bytes internal port, and 4 bytes internal IP adds up to 8 bytes per entry. Even being very generous at 16 bytes including overhead, that's still only 16K --- on a device that likely has several MB if not more, and whose primary function is NAT.
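The arithmetic is easy to check; a quick back-of-envelope using the same generous 16-bytes-per-entry assumption:

```python
# Back-of-envelope memory cost of a full NAT translation table,
# assuming a generous 16 bytes per entry (ports + internal IP + overhead).
ENTRY_BYTES = 16

for table_size in (1024, 8192):
    kib = table_size * ENTRY_BYTES / 1024
    print(f"{table_size:5d} entries -> {kib:.0f} KiB")
# Even the corrected 8192-entry table fits in 128 KiB,
# a rounding error next to a router's RAM.
```
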
Some providers do this to force you to upgrade to business plans. Comcast Business, though, at least a while back, still had a limit too low for the office I worked at. We switched to AT&T business fiber and used our own gateway.
UDP is connectionless, but typically a UDP communication is bidirectional. This means a NAT needs to inspect UDP packets and retain a mapping to direct incoming UDP packets to the right place. With no connection information this can only be done as an LRU cache or similar.
TCP is connection oriented, and a NAT might rapidly free up resources when a connection is closed (ie, when the final FIN has been ACKed). But if there's no FIN, the NAT is in the same case as it is with UDP. Making a lot of connections without closing them fills up NAT buffers.
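A toy version of the LRU-style UDP mapping table described above. Everything here is simplified: a real NAT keys on the full address tuple in both directions and expires entries on timers, not just a size cap.

```python
# Toy NAT mapping table with LRU eviction for UDP flows.
from collections import OrderedDict

class UdpNat:
    def __init__(self, max_entries):
        self.max_entries = max_entries
        self.table = OrderedDict()   # (int_ip, int_port, dst) -> ext_port
        self.next_port = 1024

    def outbound(self, int_ip, int_port, dst):
        key = (int_ip, int_port, dst)
        if key in self.table:
            self.table.move_to_end(key)      # refresh LRU position
            return self.table[key]
        if len(self.table) >= self.max_entries:
            self.table.popitem(last=False)   # evict least-recently-used
        port = self.next_port
        self.next_port += 1
        self.table[key] = port
        return port

nat = UdpNat(max_entries=2)
nat.outbound("10.0.0.2", 5000, "1.2.3.4")
nat.outbound("10.0.0.3", 5000, "1.2.3.4")
nat.outbound("10.0.0.4", 5000, "1.2.3.4")    # table full: evicts the first
print(("10.0.0.2", 5000, "1.2.3.4") in nat.table)  # False: mapping dropped
```

Once the first host's mapping is evicted, reply packets for that flow have nowhere to go, which is exactly the dropout behavior a too-small table produces.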
When you have a home NAT and a carrier-grade NAT you may get an impedance mismatch of sorts. The CGNAT might have insufficient ports allocated to your service to keep up with your home NAT, resulting in timeouts or dropped mappings. Your home NAT will have one set of mappings and the CGNAT another, and the two sets probably won't be exactly the same. This means some portion of the mappings held in memory are useless.
As a specific example, many years ago Google Maps would routinely trigger failures. Using Maps would load many tile images, which could overwhelm a NAT or CGNAT. The result was a map with holes in it where some tiles failed to load.
Browsers have long had limits on concurrent connections per domain. Total concurrent connection limits are also old news, but are not quite as old as per-domain limits. You probably can't make a NAT choke with just simple web requests (even AJAX) any more. You might be able to do it using, eg, WebRTC APIs, though I would be surprised if those aren't also subject to limits.
I remember being able to overwhelm my first "home router" with the "Browse for servers" tab in Counter Strike 1.6!
It would fetch a list of all servers from Steam, and then connect to them individually, eventually killing my router.
No, and that's by design, as many browsers limit you to two HTTP connections per domain. When you're loading tens of images (like map tiles), you want to use as many different subdomains as possible to load them in parallel.
For many years before "HTTP/2", I have been using HTTP/1.1 pipelining, outside the browser, to download hundreds of files over a single TCP connection.
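Pipelining is just writing the next request before reading the previous response on a single keep-alive connection. A minimal self-contained sketch, using Python's stdlib HTTP server as a stand-in for a real pipelining-tolerant server:

```python
# HTTP/1.1 pipelining demo: two GETs sent back-to-back on one TCP
# connection, then both responses read from the same socket.
import socket
import threading
from http.server import HTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"            # enables keep-alive
    def do_GET(self):
        body = self.path.encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):            # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
host, port = server.server_address

# Pipeline: send both requests without waiting for the first response.
sock = socket.create_connection((host, port))
req = lambda path: f"GET {path} HTTP/1.1\r\nHost: {host}\r\n\r\n".encode()
sock.sendall(req("/a") + req("/b"))

data = b""
while b"/b" not in data:                     # read until 2nd body arrives
    data += sock.recv(4096)
sock.close()
server.shutdown()
print(data.count(b"200 OK"))                 # prints 2: both responses
```

Scaled up, this is how hundreds of files can share one TCP connection (and one NAT table entry) instead of hundreds.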
I'm afraid I don't recall. I suspect that they could not have been, based on best practices for performance at the time and the fact that the problem existed at all. I did, however, find a reference to the problem:
Slides 12-15 show the degradation of Maps in action. 20 connections per user is a heavily over-committed CGNAT, but that level of port sharing does happen.
But the "connections per user" limit is per-webserver. You'd have to have thousands of users simultaneously loading maps off the same google server just to run out of ports on one IP.
I bet you could put 10k people behind each IP and never even get close to an issue of this type.
Carrier grade NAT puts thousands of users behind the same IP address - that's what it's for.
You can't put 10k people behind each IP and not have problems. That's 6.5 ports per person, and you need one for each connection. Pretty much any website will have issues with that little connectivity.
> Carrier grade NAT puts thousands of users behind the same IP address - that's what it's for.
It doesn't have to. 100:1 would work just fine. With IPs being about $25 each, that's an acquisition cost of less than a dollar per user.
> That's 6.5 ports per person, you need one for each connection.
That's not how connections work. Each user could make a million connections as long as they're spread across different servers. The 65k limit applies to simultaneous connections to a single webserver. Only the most-connected server matters, so probably something at Google/YouTube/Facebook, and even then most of those servers have multiple IPs.
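Concretely, with illustrative numbers (a rough usable ephemeral-port range and the hypothetical 10k users from upthread):

```python
# A TCP connection is identified by (src IP, src port, dst IP, dst port),
# so a shared public IP only exhausts ports *per destination ip:port*.
usable_ports = 65536 - 1024    # rough ephemeral range behind one public IP
users = 10_000                 # hypothetical users sharing that IP

print(usable_ports // users)   # prints 6: concurrent connections per user
# ...but only toward one single destination; connections to every other
# destination draw from an independent port space.
```
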
There's something like a 256 count limit on total websockets, and 30 per domain, in Chromium.
A malicious website could open up 256 websockets and as many HTTP connections as the browser allows, and that might be enough to swamp cheaper NATs.
See https://bugs.chromium.org/p/chromium/issues/detail?id=12066 for some 2009 discussion about people having troubles using the web when background tabs held connections open for polling. That wasn't a NAT issue, but it does highlight that a decade or two ago we all thought we only needed to manage tens of connections for a host to be online but that rapidly spiralled into hundreds.
I know it's not 2002 anymore, but I'm pretty sure no website on this planet would come close to 1000 open connections unless it actively tried to achieve just that. Even then, I think browsers still have a limit on the number of concurrent open connections, per tab and maybe in total.
I also was very surprised by that number, so I checked with tcpdump and Google Maps on a fresh browser instance: I count just 31 SYNs after zooming in, moving around, and clicking on a pub.