Not so sure it would even be "the lowest", rather than "low enough, stable and predictable", even if it's higher on average than optimal.
For example, comparing a hashtable lookup to a BST lookup (and ignoring the memory hierarchy for the sake of example), the former would be faster on average, but the latter would produce more predictable lookup times without hiccups.
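A minimal sketch of the trade-off, using Python's dict as the hashtable and a sorted list with binary search as a stand-in for a BST (the stdlib has no tree; `bst_style_lookup` is a hypothetical helper):

```python
import bisect

# A dict lookup is O(1) on average, but a pathological key set or an
# occasional resize can produce an outlier. Binary search over a sorted
# list is O(log n) every single time: slower on average, no hiccups.

def bst_style_lookup(sorted_keys, values, key):
    """O(log n) lookup with a predictable worst case."""
    i = bisect.bisect_left(sorted_keys, key)
    if i < len(sorted_keys) and sorted_keys[i] == key:
        return values[i]
    raise KeyError(key)

keys = list(range(0, 1000, 2))     # sorted even numbers
vals = [k * k for k in keys]
table = dict(zip(keys, vals))      # hashtable variant

assert table[42] == bst_style_lookup(keys, vals, 42) == 1764
```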
Another analogy that comes to mind is 'going to the gym'. Short-term, working out is pure heat loss (hard work with no useful results). But long-term, it makes you healthier and able to do more things.
Back to corporate IT: it's not that hard to stabilize a decent big ol' legacy revenue-generating system by avoiding changes. But if the company has to compete against smaller and leaner startups, avoiding changes may become a major risk, as the cost of every change tends to go up.
So it would be reasonable to keep the system in a constant state of flux, so that it has no choice but to become fitter and learn to change quickly without breaking: things like reproducible builds, reproducible deployments and CI/CD don't matter much for monthly stable releases, but one would have a hard time doing nightly stable releases without them.
> Therefore, under memory pressure (especially if you set swappiness to zero), you will be preferentially swapping out your program code (because it is always clean) in preference to your program data.
Regarding systems with near-100% utilization, wouldn't it be better advice to pin all executable code in memory, via tmpfs (at the system level) or mlockall() (at the application level)?
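For the application-level route, mlockall() can even be called from Python via ctypes; a sketch assuming Linux (the MCL_* constants are the Linux values from <sys/mman.h>):

```python
import ctypes
import ctypes.util
import os

# Lock all pages currently mapped, plus any mapped in the future.
# Constants from Linux's <sys/mman.h> (assumption: Linux target).
MCL_CURRENT = 1
MCL_FUTURE = 2

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
rc = libc.mlockall(MCL_CURRENT | MCL_FUTURE)
if rc != 0:
    # Typically EPERM or ENOMEM: needs CAP_IPC_LOCK or a raised
    # RLIMIT_MEMLOCK (e.g. LimitMEMLOCK=infinity in a systemd unit).
    print("mlockall failed:", os.strerror(ctypes.get_errno()))
```

Note that locking everything also pins the heap, so this only makes sense for processes whose total footprint is known to fit in RAM.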
Encountered this case on some heavily loaded batch data processing workers: at some point, client programs would slow down and generate random disk I/O (which in turn was detected by monitoring and throttled). It turned out the processes were taking major page faults at random moments, when their executable code pages were discarded from memory and subsequently re-read from disk.
I'd love to have a configuration option on Linux that elevates the penalty for evicting executable pages, or just makes them non-discardable. I think it would help a lot.
The problem with all the (widely known) non-standard JSON packages is that each has its own gotchas.
cjson's handling of Unicode is just plain wrong: it treats UTF-8 bytes as Unicode code points. ujson cannot handle large numbers (somewhat larger than 2^63; I've seen a service that encodes unsigned 64-bit hash values in JSON this way, and ujson fails to parse its payloads). With simplejson (when using the speedups module), a string's type depends on its value: it decodes strings as the 'str' type if their characters are ASCII-only, but as 'unicode' otherwise; strangely enough, it always decodes strings as unicode (like the standard json module) when speedups are disabled.
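For the large-number case, the standard json module has no such limit, since Python ints are arbitrary-precision; a quick check (the `"hash"` key is just an illustrative payload):

```python
import json

# An unsigned 64-bit hash value: larger than 2**63 - 1, the signed
# 64-bit maximum where (per the comment above) ujson gives up.
payload = '{"hash": 18446744073709551615}'  # 2**64 - 1

decoded = json.loads(payload)  # stdlib json parses arbitrary-precision ints
assert decoded["hash"] == 2**64 - 1

# And it round-trips:
assert json.loads(json.dumps(decoded)) == decoded
```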
Agreed, especially about simplejson. I work on a project that uses simplejson, and it leads to ugly type checking all over the place because you never know what your JSON string got turned into. For example:
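The checks end up looking something like this (a sketch with a hypothetical `as_text` helper, written with bytes/text, which on Python 2 are str/unicode):

```python
# Because simplejson-with-speedups returns 'str' for ASCII-only strings
# and 'unicode' otherwise, every consumer has to normalize defensively:
def as_text(value):
    if isinstance(value, bytes):       # Python 2 'str'
        return value.decode('utf-8')
    return value                       # already text ('unicode')

assert as_text(b"ascii-only") == as_text(u"ascii-only")
```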
There are so many poorly written JSON decoders out there. I've had the misfortune of fixing two of PHP's to follow JSON's case-sensitivity and whitespace rules properly.