I have worked on a device with this exact same "send a tiny sensor reading every 30 minutes" use case, and this has not been my experience at all. We can run an STM32 and a few sensors at single-digit microamps; add an LCD display and a few other niceties and it's one or two dozen. Simply turning on a modem takes hundreds of microamps, if not milliamps. In my experience it has always been better for power consumption to completely shut down the modem and start from scratch each time [1] - which means you're paying to start a new session every time anyway. Now, I'll agree it's still inefficient to start up a full TLS session, and a protocol like the one in the post will have its uses, but I wouldn't blame it on NAT.
[1] Doing this of course kills any chance at server-to-device comms, you can only ever apply changes when the device next checks in. This does cause us complaints from time to time, especially for those with longer intervals.
Power Saving Mode (PSM), a power-saving mechanism in LTE, was specifically designed to address such issues. It allows the device to inform the eNB (base station) that it will be offline for a certain period while ensuring it periodically wakes up to perform a Tracking Area Update (TAU), preventing the loss of registration. This concept is similar to Session Tickets or Session IDs in (D)TLS—or at least, that’s how I like to think about it. However, there are no guarantees that the operator will support this feature or that they will support the report-in period that you want!
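Concretely, a device requests PSM by proposing two timers at attach/TAU time: T3412 extended (how often it must perform a TAU) and T3324 (how long it stays reachable after each connection). Over AT commands this is typically done with AT+CPSMS, which takes each timer as an 8-bit string (3 bits of unit, 5 bits of value, per the 3GPP TS 24.008 GPRS timer encoding). A minimal sketch of that encoding - the helper names are mine, and as noted above, the network is free to grant different values or reject PSM entirely:

```python
# Sketch: encode PSM timer requests for AT+CPSMS (3GPP TS 24.008 GPRS timers).
# The operator may grant different values, or none at all -- always check the
# timers echoed back in the attach/TAU accept.

# (unit bits, multiplier in seconds) -- GPRS Timer 3, used for T3412 extended
T3412_UNITS = [(0b011, 2), (0b100, 30), (0b101, 60), (0b000, 600),
               (0b001, 3600), (0b010, 36000), (0b110, 1152000)]
# GPRS Timer 2, used for T3324 (active time)
T3324_UNITS = [(0b000, 2), (0b001, 60), (0b010, 360)]

def encode_gprs_timer(seconds, units):
    """Pick the smallest unit that represents `seconds` exactly (value <= 31)."""
    for unit_bits, mult in units:
        value, rem = divmod(seconds, mult)
        if rem == 0 and value <= 31:
            return f"{unit_bits:03b}{value:05b}"
    raise ValueError(f"{seconds}s not representable as a GPRS timer")

def cpsms_command(tau_seconds, active_seconds):
    t3412 = encode_gprs_timer(tau_seconds, T3412_UNITS)
    t3324 = encode_gprs_timer(active_seconds, T3324_UNITS)
    return f'AT+CPSMS=1,,,"{t3412}","{t3324}"'

# e.g. request a TAU every 4 hours with 2 minutes of reachability after each
print(cpsms_command(4 * 3600, 120))
```

The same encoding applies to the eDRX timers mentioned elsewhere in this thread (via AT+CEDRXS), just with different unit tables.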
Maintaining an active session for communication between the endpoint and the edge device is highly power-intensive. Even with (e)DRX, the average power consumption remains significantly higher than in sleep mode. Moreover, the vast majority of devices do not need to frequently ping a management server, as configuration and firmware updates are typically rare in most IoT deployments.
Great pointer! My sibling post in this thread references a few other blog entries where we have detailed using eDRX and similar low power modes alongside Connection IDs. I agree that many devices don't need to be immediately responsive to cloud to device communication, and checking in for firmware updates on the order of days is acceptable in many cases.
One way to get around this in cases where devices need to be fairly responsive to cloud to device communication (on the order of minutes) but in practice infrequently receive updates is using something like eDRX with long sleep periods alongside SMS. The cloud service will not be able to talk to the device directly after the NAT entry is evicted (typically a few minutes), but it can use SMS to notify the device that the server has new information for it. On the next eDRX check-in, the SMS message will be waiting; the device can then ping the server and, if using Connection IDs, pull down the new data without having to establish a new session.
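The device side of that pattern can be sketched as a small wake-cycle routine. This is illustrative only: `Modem` is a hypothetical driver interface (the method names are mine, not a real API), and the daily fallback interval is an assumption:

```python
# Sketch of the device-side loop: sleep via eDRX, and on each wake only
# contact the server if an SMS nudge arrived or a scheduled check-in is due.
# The modem object here is a hypothetical interface, not a real driver API.

CHECKIN_INTERVAL = 24 * 3600  # assumed fallback: check in daily regardless of SMS

def wake_cycle(modem, last_checkin, now):
    """One eDRX wake. Returns the (possibly updated) last check-in time."""
    nudged = modem.has_pending_sms()  # cheap: delivered during normal paging
    overdue = now - last_checkin >= CHECKIN_INTERVAL
    if nudged or overdue:
        modem.clear_sms()
        # With DTLS Connection IDs the device can reuse its existing session
        # even if its IP address changed while asleep -- no new handshake.
        modem.dtls_send(b"checkin")
        return now
    return last_checkin  # nothing to do; go straight back to sleep
```

Most wakes fall through to the last line, which is where the power savings come from: the radio never leaves its paging cycle unless there is actually something to fetch.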
Is "Non-IP Data Delivery" (basically SMS but for raw data packets, bound to a pre-defined application server) already a thing in practice?
In theory, you get all the power saving that the cellular network stack has to offer without having to maintain a connection. At the protocol layer NIDD is handled almost like an SMS (paging, connectionless), but unlike SMS it is not routed through a telephony core (which is what makes SMS sloooow). The base station / core will directly forward it to your predefined application server.
It has been heavily advertised, but its support is inconsistent. If you are deploying devices across multiple regions, you likely want them to function the same way everywhere.
802.11 supports the same thing. A STA (client) can tell an AP that it'll be going away for some time, and the AP will queue all traffic for the STA until it actively reports back. Broadcast traffic can also be synchronized to particular intervals (but low power devices are usually not interested in that anyway for efficiency reasons).
I have very little experience with Wi-Fi, as the industry I worked in relied almost exclusively on cellular networks. However, I wonder how many Wi-Fi routers actually support this functionality in practice - queueing traffic means the AP needs to buffer it somewhere.
Author of this post here -- thanks for sharing your experience! One thing I'll agree with immediately is that if you can afford to power down hardware, that is almost always going to be your best option (see a previous post on this topic [0]). The NAT post also calls this out, though I could have gone further to disambiguate "sleeping" and "turning off":
> This doesn’t solve the issue of cloud to device traffic being dropped after NAT timeout (check back for another post on that topic), but for many low power use cases, being able to sleep for an extended period of time is more important than being able to immediately push data to devices.
(edit: there was originally an unfortunate typo here where the paragraph read "less important" rather than "more important")
Depending on the device and the server, powering down the modem does not necessarily mean that a session has to be started from scratch when it is powered on again. In fact, this is one of the benefits of the DTLS Connection ID strategy. A cellular device, for example, could wake up the next time in a completely different location, connect to a new base station, be assigned a fresh IP address, and continue communication with the server without having to perform a full handshake.
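The server-side mechanics of that continuity can be sketched as a lookup table keyed by Connection ID rather than by source address. This is a toy model, not a real DTLS implementation -- in practice the session state holds negotiated keys and record sequence numbers:

```python
# Toy model of why Connection IDs survive address changes: the server looks
# sessions up by the CID carried in each record, not by source IP:port.

sessions = {}  # cid -> session state (in real DTLS: keys, seq numbers, ...)

def handle_record(src_addr, cid, payload):
    session = sessions.get(cid)
    if session is None:
        return "no session: full handshake required"
    # The address may have changed while the device slept -- just update it
    # and keep using the established session keys.
    session["addr"] = src_addr
    return f"decrypt with session {cid!r}, peer now at {src_addr}"

# Device handshakes once; server assigns CID b"\x01" and stores the session.
sessions[b"\x01"] = {"addr": ("10.0.0.5", 40000)}
handle_record(("10.0.0.5", 40000), b"\x01", b"reading-1")
# Device wakes behind a new IP and port: same CID, so no new handshake.
handle_record(("172.16.9.8", 51234), b"\x01", b"reading-2")
```

A NAT-based server, by contrast, effectively keys on `src_addr` -- which is exactly what changes across a modem power cycle, forcing the handshake the parent comment describes.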
In reality, there is a spectrum of low power options with modems. We have written about many of them, including a post [1] that followed this one and describes using extended discontinuous reception (eDRX) [2] with DTLS Connection IDs and analyzing power consumption.