I've never read the debunking, but the NSA's reliance on codenames (even before Microsoft) for everything leads me to believe they wouldn't ever be so obvious.
Actually, I looked into things as promised: the Wikipedia links and whatever popped up in my Google searches. Turns out the only citation you had, an HN commenter, was just repeating the claims of an MS spokesperson with some added opinions. I found a more interesting source that makes similar claims:
On the surface, it seems believable. However, like Duncan, I'm seeing glaring problems with these responses. It boils down to this: the Microsoft rep claimed they came up with all the details (including the key and its name) on their own without NSA intervention, generated and control both keys themselves, submitted the design for review, and got NSA approval. Yet, supposedly without NSA's involvement, a developer named their key NSAKEY, and when decrypted its email was "postmaster@nsa.gov." Also, as Duncan noted (and my HSM guy as well), it's standard for HSMs to let you export keys for backup or multi-site usage.
So, their claims are highly unusual, suspect, and weak. The evasive behavior that led up to this isn't typical of even Microsoft. They usually come up with some CYA BS rather quickly. Also, the odds of a Microsoft-controlled key they made entirely on their own being called NSAKEY with NSA's email on it without NSA participation are low albeit possible (dev joke).
Far from Alex Jones stuff, there's actual substance to worries over what NSAKEY did or does. I agree with critiques that it's a lower concern given all the attack angles on Windows from an outside or inside perspective. Yet, dozens of reports didn't show anything along the lines of "comprehensively debunked." Instead, we had a series of weak stories from Microsoft that raised more questions than they answered, with plenty of evasion on their part. And they're known to work with NSA. So, people should still not trust that key and should use the steps that one person published to replace it with their own.
This is a very long source from 2000 that says in 6500 words what 'geofft said in just ~300: that this is a code signing key for crypto libraries. If Microsoft wanted to backdoor your Windows machine, they already have complete control over Windows code signing. They do not need a special key literally labeled "NSA_KEY" to do that.
It's not "lower concern". It's not a concern at all. Microsoft's code is comprehensively reverse engineered. People have reversed the most boring, tedious libraries on the system looking for memory corruption bugs. You posit that maybe they just forgot to look into this "NSA_KEY" business.
For a software security professional, you really come loaded for bear with a lot of weird advice:
* Avoid elliptic curve and use conventional Diffie Hellman and RSA because the NSA controls the ECC patents (?!).
* Use Blowfish because it has a long security track record (?!).
* Watch out for Google because they don't understand endpoint security (?!).
* NSAKEY isn't a high-priority issue but it's something that people should be concerned about (?!).
* Here's an unencrypted HTTP website that you can cut-and-paste GPG commands from instead of reading the manual (?!).
* Firewalls are just some made-up crap that don't actually provide any security (?!).
* Use MatrixSSL and PolarSSL instead of OpenSSL because OpenSSL's code quality is crap (?!).
I get the feeling that we do very different kinds of security. You talk a bit about formal verification and EAL levels. I had the misfortune of losing a couple months of my life in the early 2000s to Common Criteria work. If you're coming from a CCTL background, some of the very weird perspectives you have on this stuff start to make sense to me.
They forgot to look into the subversion, BIOS, peripheral firmware, and covert channel risk despite me repeating those on forums all over going back years. Then the shit ended up in TAO's catalog (including my "amplifying cable" concept, aka RAGEMASTER) & most stuff was weak to it. The same argument could be applied to widespread open source software that people find "previously unknown" bugs in despite that code existing for years and being "widely reviewed." So, I don't have to posit anything given the horrid state of INFOSEC and app review going back years: they should instead prove they did better than usual in the security analysis and show what they found for independent review. This is, coincidentally, a part of the scientific method as well.
You could've settled this argument instead by linking to a group that reverse engineered all of that, found that it did exactly what they said, and had no conflict of interest with the U.S. government. You say you know these exist but still haven't produced them. You instead expected people to take your word for it or comb the Internet looking for the proof you allege. Both are tall orders.
Those of us in INFOSEC against highly subversive opponents don't deal in pure faith [esp in similarly evil organizations]. So, where is this evidence that it was totally reverse engineered and proven to be functionally equivalent to the claims? I'd like to read it, determine its credibility, pass it on for peer review, and share it widely if it passes enough of that. As promised, I'll even add it to the Wikipedia article that tops search to this day.
No part of this comment addresses anything that I said. This is a pattern with our interactions: I bring up something specific, and you change the subject, usually to a flurry of Snowden jargon (this time it's the spy-mall catalog).
We were talking about the transparency of the Windows kernel.
It's a little funny that you feel like you need proof that the code has been reverse engineered, as if there were like 4 people in the world who could do it, one of whom is in a mental institution, 2 of whom are in Russia, and the last is hiding in a monastery in Tibet. You know, as opposed to something you could literally learn from a book that was on the shelf at Borders, back when they still sold books retail.
To address your edit which added all the bullet points:
1. Avoid ECC for commercial activity because NSA & a private company were asserting 130+ patents existed on it. Be ready with lawyers otherwise. Anyone following all the patent battles would be concerned. If not concerned or doing FOSS, I always recommend using Bernstein's NaCl, or a double-signature scheme with a post-quantum algorithm as secondary for those worried about such things. Lots of good work on Merkle trees recently, for instance.
2. Use Blowfish in cascading ciphers with other strong ones such as AES candidates. Cryptographers make barely substantiated claims about things' risk all the time. Worse, their proofs of "secure" constructions sometimes apply not to the real protocol but to an abstraction of it lacking key details (eg padding). Always include things the NSA and other strong attackers have failed to beat for 10+ years. If Blowfish was so bad, they'd be dominating it across the board. They're not. So, it's either not so bad, or it's good obfuscation to add to stronger stuff.
3. Google's most clever work, which I praised, was NaCl (Native Client). It ended up being one of the weakest CFI schemes in practice because they sacrificed too much security for performance, and maybe for other reasons. They continue to build on it despite better stuff being available. Much of their other stuff was COTS implementation quality with no specialist security engineering that I could tell. They're weak on endpoints like the majority, relying on insecure tools and endpoints. My advice is to think of them like any other vendor rather than on another level. Depending on architecturally-weak TCBs such as Linux doesn't add confidence to any notion you have of strong endpoint security. There are no strong endpoints in mainstream lol.
4. NSAKEY is possible evidence of subversion from a company with a whole history of screwing customers for profit, both on their own and with government. By itself, not the biggest worry. It's just one more circumstance among many that should get people away from Microsoft tech. Seriously, how many times do BSD or Linux users get into huge debates about something like this, where the NSA's actual name is dropped in there along with secret functionality and evasiveness by developers? If it happens at all, you could count the occasions on one hand. Getting away from such companies is a positive.
5. A risk indeed, which I took on a PC that was, as far as I knew, already compromised by my main opponent, and in a sandbox. I cross-referenced the command against the documentation, as I indicated in the conversation. Yet, most people who use software do similar things without cross-referencing. People were likely to Google signify and OpenBSD issues as well, along with following steps they saw online. That risk is pervasive. Nonetheless, unlike you, I decided to change my position back to being more careful after a good critique by a commenter. You've only increased your number of attacks with opposition to... anything.
6. Firewalls, as industry uses them, are a weaker form of security than the high assurance guards that predated them, the strong endpoint security that predated them, or even the firewalls on dedicated PCI cards with more secure TCBs & I/O offloading that existed in the '90s. The firewalls were easy to port, cheap to make, and often pricey to sell, though! They're regularly bypassed on endpoints and in many organizations' networks. They include almost no assurance activities, which are critical for security. A number of attacks springboarded off of them for this reason and leveraged their privileged position in the network. My position is that firewalls are a filter for hackers without talent, along with being providers of other security or non-security functionality buyers find useful. Industry needs to switch back to guards with real assurance. And not that Linux- or Solaris-based crap vendors are pushing recently, either.
7. Avoid OpenSSL because every code review of it showed it to be utter crap with all kinds of issues, from about the worst coders one could find, and whose reviewers could barely follow it or justify a number of things in it. Consider alternatives like MatrixSSL, PolarSSL, Cryptlib, Botan, and whatever else hasn't been shown to be utter garbage yet. They might be better. Shockingly, there were still people recommending OpenSSL even as the LibreSSL team dug up one problem after another in the worst horror story of bad coding and security of that time. Malpractice, indeed.
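The double-signature idea in point 1 is simple to sketch: sign once with a classical algorithm, once with a post-quantum one, and accept only if both verify, so breaking either algorithm alone isn't enough to forge. A minimal illustration, using HMAC-SHA256 tags from Python's stdlib as stand-ins for the two signature algorithms (a real deployment would use, say, Ed25519 plus a hash-based scheme; the key names here are made up):

```python
import hashlib
import hmac

# Stand-ins: two independent keys, one per algorithm family.
# In a real scheme these would be a classical (ECC/RSA) key and a
# post-quantum key; HMAC is used here only to show the verify-both logic.
KEY_CLASSICAL = b"classical-signing-key"
KEY_POSTQUANTUM = b"post-quantum-signing-key"


def dual_sign(message: bytes) -> tuple:
    """Produce one tag per algorithm family."""
    sig1 = hmac.new(KEY_CLASSICAL, message, hashlib.sha256).digest()
    sig2 = hmac.new(KEY_POSTQUANTUM, message, hashlib.sha256).digest()
    return sig1, sig2


def dual_verify(message: bytes, sig1: bytes, sig2: bytes) -> bool:
    """Accept only if BOTH tags check out, so a break of one
    algorithm (e.g. ECC falling to a quantum computer) doesn't
    let an attacker forge on its own."""
    ok1 = hmac.compare_digest(
        sig1, hmac.new(KEY_CLASSICAL, message, hashlib.sha256).digest())
    ok2 = hmac.compare_digest(
        sig2, hmac.new(KEY_POSTQUANTUM, message, hashlib.sha256).digest())
    return ok1 and ok2


s1, s2 = dual_sign(b"update.bin")
assert dual_verify(b"update.bin", s1, s2)
assert not dual_verify(b"update.bin.tampered", s1, s2)
```

The design point is the `and` in `dual_verify`: the composite is at least as strong as the stronger of the two algorithms.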
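The cascade in point 2 works like this: encrypt under cipher A with key K1, then encrypt that ciphertext under cipher B with an independent key K2, so an attacker has to break both layers to reach plaintext. A toy sketch with stdlib-only stand-ins (SHA-256 in counter mode as a throwaway stream cipher standing in for Blowfish and AES; not a real cipher implementation):

```python
import hashlib


def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: SHA-256 in counter mode, XORed over data.
    A stand-in only; in the cascade described above each layer
    would be a real cipher such as Blowfish or AES."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))


def cascade_encrypt(key1: bytes, key2: bytes, plaintext: bytes) -> bytes:
    # Independent keys per layer: a weakness in one cipher alone
    # doesn't expose the plaintext.
    return keystream_xor(key2, keystream_xor(key1, plaintext))


def cascade_decrypt(key1: bytes, key2: bytes, ciphertext: bytes) -> bytes:
    # Peel the layers off in reverse order (XOR layers are
    # their own inverse).
    return keystream_xor(key1, keystream_xor(key2, ciphertext))
```

Usage: `cascade_decrypt(k1, k2, cascade_encrypt(k1, k2, pt))` round-trips to `pt`; the crucial property is that K1 and K2 are generated independently, never derived from one another.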
"I get the feeling that we do very different kinds of security."
We do. You trust Windows, think such software isn't a black box because people can pay millions to fully R.E. it, abandon stuff that works in a use case because someone said to without evidence, trust subversive companies because someone else said to, and so on. Lots of faith-based activities, plus misrepresentations of others' comments by stripping context in a totally-unrelated discussion.
My form of security says you (a) have clear requirements proven in practice to work; (b) a design that meets them with strong argument; (c) a security model that makes sense; (d) evidence you use it; (e) implementation designed for review with good layering, modularity, and interface protections; (f) strongest software and hardware protections you can use; (g) extensive testing of successful and failure states; (h) covert storage and timing channel analysis; (i) robust compilation to object code; (j) secure SCM setup; (k) independent evaluation and pentesting of these; (l) delivery of product with source & build tools with hash that matches independent evaluation; (m) preferably mutually distrusting evaluators. All this applied as much as possible from processor up or using diverse hardware with careful interfaces to counter hardware risks a bit.
Yes, a lot of this was borrowed from higher EALs and papers on high assurance processes/products. Many things like this survived NSA pentesting for years. The other approach, which you & the INFOSEC industry in general recommend, has produced things with endless vulnerabilities at high severities, plus subversions. China, Russia, and TAO are having a field day with it all. Among others. That you all continue to promote such methods despite them having no results for decades hasn't started to make sense to me outside studies on psychology and network effects. That industry doesn't adopt even a fraction of the stronger methods is amazing despite empirical evidence and strong anecdotes showing they produce more robust systems. The industry continues instead to focus on churning out low-quality offerings and deceiving buyers on their security for the high profits involved.
Keep it up, though, as black hats need the job security. Mainstream INFOSEC has always helped them with that. The oft-ignored high-assurance community will continue doing what we can and helping any willing to learn understand strong security. Although, in this thread, only the firewall point even applied to my expertise as it's a watered-down guard in terms of assurance. The others are a random assortment of comments with no supporting context given. Nonetheless, you seemed to be confused about what strong security engineering takes and I was happy to break it down for you. Have a good night.
"Keep it up, though, as black hats need the job security."
This is not the kind of comment that I come to HN for.
Cries for tptacek to produce evidence, for example, of the reversibility of Windows code, would be more pleasantly satisfied by some deeper investigation, such as discovering ReactOS. It would be surprising to me for anyone to conclude that Windows code has not been totally, and publicly, reverse engineered.
It is not a big leap from there for you to investigate from that code the true nature of NSAKEY. My guess is that you would find that there is no there, there.
Your statement "My form of security says you (a) have clear requirements proven in practice to work" reminds me of my youth, when I discovered the writings of Dijkstra and others, talking about proving programs correct. While this is a wonderful idea, and a small handful of folks have actually done this, modern business doesn't seem to want to stand still for Category 5 maturity.
While you denigrate OpenSSL and its code quality, keep in mind that several of the high-severity bugs, such as BEAST and CRIME, were not code quality bugs, but essentially protocol bugs. The perfectly-specified requirements (implement compression in HTTPS) could have been backed up by code proved correct, and still exhibited the flaw. Similarly for BEAST.
Also, one wonders if you have tried to break real-world crypto. If you haven't, I suggest that you give some of the online exercises a try. It is rather eye-opening.
Finally, many statements in your comments mischaracterize the points others make. This has become more than annoying. Please stop.
"It would be surprising to me for anyone to conclude that Windows code has not been totally, and publicly, reverse engineered."
The subject we were talking about was massively debated by IT people. Nobody showed up with the code to say, "Here's the reverse engineered code. We know exactly what it does." If he was right, that should've happened. Instead, all references I found to the subject by INFOSEC professionals are trying to guess what it did. So, I think it's fair to ask for a reference to slam-dunk evidence supported by reverse engineering if none is to be found with Google & nobody involved acted like it exists. Only source so far is tptacek's word.
"While this is a wonderful idea, and a small handful of folks have actually done this, modern business doesn't seem to want to stand still for Category 5 maturity."
I agree. It's why I don't work in high assurance security any more, outside consulting, R&D, and free advice on the Internet. I'll add to your comment that this is not just true for heavy-weight processes. Cleanroom & Fagan Inspections, which I used, drove the defect rate close to zero while often reducing the labor due to less time debugging & with little impact on time-to-market. The Cleanroom stuff, when apps were similar, even had statistically certifiable bug rates we used to issue warranties on code. Despite no extra time or cost, virtually no company we presented those processes to was interested in using them. Demand in the IT and INFOSEC space is so against quality that you can't even sell it to the majority even if it makes money rather than costs it. It's that bad.
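The "statistically certifiable bug rates" part maps onto a standard result from statistical usage testing: if N randomly sampled usage scenarios all pass, the per-use failure probability is bounded above by 1 - (1 - c)^(1/N) at confidence c, which is roughly 3/N at 95% (the "rule of three"). A small sketch of that bound; it's the textbook formula, not necessarily the exact certification model any particular Cleanroom team used:

```python
def failure_rate_upper_bound(n_passing_tests: int,
                             confidence: float = 0.95) -> float:
    """Upper bound on per-use failure probability after
    n_passing_tests randomly sampled usage tests, all passing.
    If the true failure probability were p, the chance of seeing
    n clean runs is (1 - p)**n; we report the largest p for which
    that chance is still at least (1 - confidence)."""
    return 1.0 - (1.0 - confidence) ** (1.0 / n_passing_tests)


# At 95% confidence the bound shrinks roughly like 3/n:
for n in (100, 1000, 10000):
    print(n, failure_rate_upper_bound(n))
```

The caveat that makes Cleanroom's version harder than the formula: the tests must be drawn from the real operational usage profile, or the certified rate says nothing about production behavior.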
"The perfectly-specified requirements (implement compression in HTTPS) could have been backed up by code proved correct, and still exhibited the flaw. Similarly for BEAST."
Totally true. It's why I mention getting requirements and specs right. Even CompCert, an amazing piece of engineering, eventually had flaws found during a test because the specs on the very front and back ends were wrong. Other side of the coin: wherever the specs were right, the implementation matched them flawlessly, with the middle end having zero defects. Every other compiler, from proprietary to FOSS, had bugs throughout. So, the existence of protocol errors doesn't argue against verified software processes in any way. It just means your protocol had better be as good.
And if it wasn't clear, my gripes about OpenSSL were based on the commit logs of the LibreSSL team commenting on each piece of code as they went through it. I'm not talking about the protocol at all: just terrible coders making garbage that got widespread adoption with hardly any review. Unsurprising given my above comments on how the IT market works.