If you trust an application to properly limit what it can do by requesting a sandboxed environment itself, just to spare yourself typing a few extra letters, you might as well run it without a sandbox.
Hey kind stranger who is supposed to tend the garden while I go shopping, I really don't trust you. So, to be sure you only do the garden and nothing else, here are the keys to my house; please ensure that every door and window is locked. Thanks.
The only other entity who could set it up for you, so that every application automatically launches in a sandboxed environment, is the distributor; but then again, it's your responsibility to choose a distribution that does that.
If you want security you have to do something about it at one point or another.
I think this is the wrong attitude. No one is better suited to implement a sandbox than the developer of the application. The fact that most developers are not trained to do so is just a reflection of our field's terrible progress in educating devs on secure application development.
Leaving this to the user leaves the vast majority of users unsafe. This is an unacceptable state.
Why should an application developer implement a sandbox? That's a huge waste of time, and it's much more efficient if the operating system or the user enforces it instead, using existing sandboxing technologies like firejail. It is also untrustworthy and insecure, since after all you don't trust the application: if an application is responsible for sandboxing itself, it can simply choose not to sandbox itself properly if it wants to do harm.
There is no way around you either taking care of that yourself or you choosing an operating system that enforces it for you, like Qubes OS.
> Why should an application developer implement a sandbox?
Because they are the ones who understand the necessary capabilities of their program and the ones who have access to the source code...
> That's a huge waste of time and it's much more efficient if the operating system or the user enforces it instead by using existing sandboxing technologies like firejail.
Actually it's a far better sandbox when built into the program. And it doesn't leave users relying on installing arcane operating systems or becoming technically savvy.
> It is also untrustworthy and insecure, since after all you don't trust the application.
No, trusting the application is implicit since it's installed by the user. The sandbox exists to protect against a compromised application.
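For what it's worth, this is exactly what mechanisms like seccomp (Linux) and pledge (OpenBSD) exist for: the developer declares what the program needs and the kernel enforces it even after a compromise. As a stdlib-only Python sketch of the idea -- a toy analogue on Unix, not a real sandbox -- a process can voluntarily drop its own ability to open new files before touching untrusted input:

```python
import os
import resource

# Save the current limits so the demo can restore them afterwards.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

# The process drops its own ability to open new file descriptors before
# handling untrusted input -- the spirit of seccomp/pledge: only the
# developer knows that no more files are needed past this point.
resource.setrlimit(resource.RLIMIT_NOFILE, (0, hard))

try:
    os.open(os.devnull, os.O_RDONLY)
    blocked = False
except OSError:
    blocked = True

# Here the drop is reversible (a compromised process could undo it);
# real mechanisms like seccomp make the restriction one-way.
resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))

print("open blocked:", blocked)
```

The reversibility is the weakness this toy shares with any opt-in scheme, and it's precisely what one-way kernel mechanisms fix.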
Since the other examples don't appear to have convinced you, how about this one: https://samy.pl/poisontap/
Visit a single HTTP page while that's plugged in and it'll trigger an exploit that siphons all non-secure-flagged cookies off of every popular site that doesn't use HSTS (including the config pages of insecure routers on your LAN), and installs a persistent backdoor in them so the attacker can continue accessing data on those sites even after you're no longer being MITM'd. And that's not even using any zero-days; it's just exploiting the inherent vulnerabilities in non-secure HTTP.
(Note that while the site I linked talks about a USB device the same attack can be carried out by any MITM, like a WiFi router or upstream ISP; it's not exclusive to local attackers.)
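To spell out why those cookies are exposed: a cookie set without the Secure flag is attached to plain-HTTP requests too, where any MITM can read it. A simplified model of the browser's decision (real behavior is specified in RFC 6265; the cookie names are made up):

```python
# Simplified model of when a browser attaches a cookie to a request.
# Real behavior is specified in RFC 6265; this models only the Secure flag.
cookies = [
    {"name": "session_id", "value": "s3cr3t", "secure": True},
    {"name": "prefs",      "value": "dark",   "secure": False},
]

def cookies_sent(scheme):
    """Names of the cookies a browser would attach over `scheme`."""
    return [c["name"] for c in cookies
            if scheme == "https" or not c["secure"]]

print(cookies_sent("https"))  # ['session_id', 'prefs']
print(cookies_sent("http"))   # ['prefs'] -- visible to any MITM
```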
Yeah, the DHCP trick is what allows this particular method of conducting MITM via USB.
All the stuff it does _after_ becoming a MITM though are things that any MITM could do, regardless of how they became a MITM in the first place. (ARP spoofing, operating or compromising a Wi-Fi access point, etc.)
An example from the real world -- Comcast, a large ISP in the USA, has been caught injecting JavaScript into websites: https://thenextweb.com/insights/2017/12/11/comcast-continues... It's not hard to imagine a more malicious use, like tracking or injecting adverts the ISP wants you to see on webpages.
This is only possible because the connection isn't encrypted.
Another example -- Verizon were injecting a header called X-UIDH which had a unique identifier, acting as a super-cookie that was present on all websites and couldn't be removed: https://www.eff.org/deeplinks/2014/11/verizon-x-uidh
This is only possible because the connection isn't encrypted.
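To make concrete what that injection looks like: a middlebox between you and the site rewrites cleartext requests in flight, and every site you visit can then read the identifier. A toy WSGI-style sketch -- the function names and subscriber ID are made up; Verizon's real system ran on carrier infrastructure:

```python
def origin_app(environ, start_response):
    # The website's own code: it echoes whatever tracking header arrived,
    # to show that any site the user visits can read the injected ID.
    start_response("200 OK", [("Content-Type", "text/plain")])
    uid = environ.get("HTTP_X_UIDH", "none")
    return [("seen uid: " + uid).encode()]

def isp_middlebox(app, subscriber_id):
    """Toy stand-in for a carrier proxy that tags plain-HTTP requests."""
    def wrapped(environ, start_response):
        # Only possible because the middlebox can parse the cleartext
        # request; over TLS it would see only ciphertext.
        environ["HTTP_X_UIDH"] = subscriber_id
        return app(environ, start_response)
    return wrapped

# Simulate one plain-HTTP request passing through the middlebox.
def start_response(status, headers):
    pass

body = isp_middlebox(origin_app, "subscriber-12345")({}, start_response)
print(b"".join(body).decode())  # seen uid: subscriber-12345
```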
All of that is bad, none of it is a security issue. Privacy, sure. But not security. And the article specifically shows that Google is planning to mark example.org as insecure. Which it's not.
insecure (adj.)
(of a thing) not firm or fixed; liable to give way or break.
not sufficiently protected; easily broken into.
A webpage loaded over HTTP is easy to tamper with. Let me give you an example of traffic over HTTP that is secure: apt repositories. You're only retrieving payloads protected by PGP signatures, so the actual payload is firm, fixed, and not easily broken into.
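The apt model -- authenticate the payload, not the pipe -- can be sketched as verifying a detached signature before trusting anything fetched over HTTP. Real apt verifies PGP signatures; the Python stdlib has no PGP, so an HMAC with a pre-shared key stands in for the distributor's signing key here, but the structure is the same:

```python
import hashlib
import hmac

# Toy model of apt's approach: trust the signature, not the transport.
# A pre-shared HMAC key stands in for the distributor's PGP signing key.
DISTRO_KEY = b"stand-in for the distributor's signing key"

def sign(payload: bytes) -> str:
    return hmac.new(DISTRO_KEY, payload, hashlib.sha256).hexdigest()

def verify_download(payload: bytes, signature: str) -> bytes:
    # Both payload and signature travelled over plain HTTP; a MITM can
    # tamper with either, but cannot forge a matching pair.
    if not hmac.compare_digest(sign(payload), signature):
        raise ValueError("signature mismatch: payload was tampered with")
    return payload

package = b"real package contents"
sig = sign(package)

print(verify_download(package, sig) == package)  # True: untampered
try:
    verify_download(b"malware injected by a MITM", sig)
except ValueError as e:
    print("rejected:", e)
```

A MITM can still corrupt or block the download, but can't substitute its own payload undetected -- which is why plain-HTTP apt mirrors are secure in the payload sense.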
How else do you define insecure? Have I misunderstood the definition?
Insecure can't be used as a drop-in replacement for compromised, though; being insecure is what gets you compromised. One distinct thing leads to the other.
Your argument seems to be that because there are multiple ways to exploit people, closing any one of those methods is not useful. I shouldn't have to explain why that's not a meaningful argument.
What I will say is that in many cases an attacker is far more capable of MITM than they are of posting forum comments, or otherwise convincing you to click a link. A phishing campaign is noisy - you are often alerting many parties that you're malicious. MITM within a network is much stealthier and you don't have to rely on users clicking on anything.
Really, they're just completely different attacks and the existence of one has no bearing on the other. TLS on every page would close off real attacks and, if it forced attackers to use noisy methods like phishing, that's a huge win.
It's impossible to get a valid SSL certificate for an appliance running within someone's LAN without opening ports. And opening ports would make the appliance even more vulnerable to attack.
Now that fully automated certificate issuance is becoming more mainstream (thanks to Let's Encrypt) I foresee this sort of thing becoming much more common in the future.
Unless I'm misunderstanding, they did that by partnering with a CA, becoming a semi-trusted CA themselves. This is not an option for most organizations.
That was only necessary because, at the time, there was no other way to get a large number of wildcard certs issued for their domain in an automated fashion.
With ACME that will no longer be the case. Let's Encrypt will allow you to do basically the same thing for free with ~20 devices a week[1] starting on February 27[2], for example. In the future, commercial CAs may choose to offer similar services with more relaxed rate limits.
It's possible, why not? Just use your own servers as a cert signing service for your IoT device as part of the bootstrap process if you are unwilling to have any services running on it. Or ship the device with the signed cert. You can have the host name in the DNS even though it's not accessible from everywhere.
> Can you not just create a certificate and push it to the system as a trusted cert?
If you were to control the user's machine, yes. But imagine you bought a shiny new internet connected coffee pot. Once you turn it on it does the following:
1. Coffeepot determines its LAN IP address (e.g. 192.168.1.100)
2. Coffeepot connects to the coffeepot cloud service to register a dynamic DNS entry (e.g. user1.coffeepot.com) to point to its LAN IP address.
3. User is told they can access their coffeepot WebUI by going to user1.coffeepot.com, which resolves to 192.168.1.100
This is secure, since the coffeepot can only be controlled from inside the same network. Yet, precisely because the coffeepot WebUI can only be reached from inside its network, it is nearly impossible to get a valid SSL certificate onto the coffeepot appliance.
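That bootstrap could be sketched roughly like so; the vendor hostname and registration API are hypothetical, and step 2 is stubbed out rather than making a real network call:

```python
import socket

def lan_ip() -> str:
    """Step 1: discover this device's LAN address. Connecting a UDP
    socket only selects a route; no packet is actually sent."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("192.0.2.1", 80))  # TEST-NET-1 address, never reached
        return s.getsockname()[0]
    except OSError:
        return "127.0.0.1"  # no route configured; fall back to loopback
    finally:
        s.close()

def register_dyndns(hostname: str, ip: str) -> None:
    """Step 2 (stubbed): ask the vendor's cloud service to point the
    per-user hostname at our private address. A real device would make
    an authenticated HTTPS call to the vendor's API here."""
    print(f"would register {hostname} -> {ip}")

ip = lan_ip()
register_dyndns("user1.coffeepot.com", ip)
# Step 3: the user browses to user1.coffeepot.com, which resolves to a
# LAN IP and is therefore reachable only from inside the same network.
```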
> Presumably there is already some sort of communication going on if they're receiving Chrome updates.
There is a difference between outgoing network traffic and incoming network traffic. Only the latter requires open ports.
3. Coffeepot fails to connect to the cloud service because it's in some remote place with no internet.
Why does the coffeepot / TV / thermostat need internet access? That's often undesirable for the user (because that means the whole things breaks if the originating company goes away). Not to mention, how would the user know which host to connect to? How would the device get on WiFi if there is no way to enter the password?
I know Chromecast does this by making you download a custom application (Google Home on a phone, or Chrome on a desktop); that's not always practical.
I do think SSL in as many places as possible is great; I just also think they're trying to push for too much before solving the problems it will cause first.
It doesn't need it. You can always just nmap your network, find its LAN IP and connect straight to that over HTTP. But that's not very user friendly, hence the dyndns.
2 (alternative): Coffeepot connects to the coffeepot cloud service to register a dynamic DNS entry (e.g. user1.coffeepot.com) pointing to its LAN IP address, and sends a Certificate Signing Request for user1.coffeepot.com?
If you are already registering a dynamic DNS, a CSR shouldn't be that much additional overhead?
Actually, now that I think about it, with the Let's Encrypt DNS challenge this might actually be viable... That's pretty recent, though. And they rate limit harshly. I was thinking about the HTTP validation, which would definitely fail, due to the DNS resolving to a LAN IP. Which a CA would obviously not be able to verify.
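For reference, the DNS-01 challenge only requires publishing a TXT record derived from the challenge token and the ACME account key thumbprint (RFC 8555, section 8.4), which the vendor's DNS server can do even though the device's IP is private. The token and thumbprint below are made-up values:

```python
import base64
import hashlib

def b64url(data: bytes) -> str:
    # ACME uses unpadded base64url encoding (RFC 8555)
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def dns01_txt_value(token: str, account_thumbprint: str) -> str:
    """TXT record value for a dns-01 challenge (RFC 8555, section 8.4):
    base64url(SHA-256(token "." account-key-thumbprint))."""
    key_authorization = f"{token}.{account_thumbprint}".encode()
    return b64url(hashlib.sha256(key_authorization).digest())

# Made-up token and thumbprint, for illustration only.
txt = dns01_txt_value("evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA",
                      "9jg46WB3rR_AHD-EBXdN7cBkH1WOu0tA3M9fm21mqTI")
print(f'_acme-challenge.user1.coffeepot.com. IN TXT "{txt}"')
```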
Right, that burden becomes coffeepot.com's. Presumably they would already be doing due diligence to make sure that the dynamic DNS requests were from legitimate coffeepots that they themselves manufactured (rather than, say, the fraudulent activities of a botnet using their open DNS for communications). At that point they should also have enough security information to decide whether to sign a certificate presented to them by their manufactured coffeepot under their certificate authority delegation for *.coffeepot.com.
To my knowledge you can even piggy back off of ACME's protocol work from Let's Encrypt, even if the auth/validation checks are different for the different security models.
It's certainly possible to pay for such a thing today; many of our friends in Fortune 50+ companies have access to such things. You are right that we mere mortals with dreams of a tiny coffeepot IoT empire over HTTPS must hope that, in the post-Let's Encrypt era, the cost of such delegated certificate authority certificates drops to be commensurate with other certificate types.
The "premature optimization is the root of all evil" thing is totally blown out of proportion. I think what they're saying is don't use that quote as a reason to write garbage slow code.
I'm just making more of a general comment re. the website/company, not the specific post this HN thread covers. Granted, it's a minor detail, and most people probably won't hit the www version of the site, but for whatever reason it's the version of their site that my search engine surfaced, so FWIW I just found it ironic.
Apologies for being unclear, to elaborate ... IMHO cyber-security analysts could do much better in defining how they do their job in order to give people that are interested a path forward in developing the needed skills. To be clear, I'm not talking about the larger cyber-security profession that is mostly focused on setting and enforcing rules.
An attacker needs the ability to compute on your local machine. Javascript is the way to do that in a browser.
With just CSS this should be impossible, or at least very unlikely. I guess it is probably technically possible, but I don't expect to see exploits using just CSS.
I work for a company with a massive rust codebase. Rust is very much about building production code.
What is 'good production code'?
* Few errors
* Readable, well documented
* Testable, has tests, has testing tools like quickcheck, fuzzing, etc
* Meets performance constraints
Rust hits those better than any language I've used. The downside is, oh gosh, you'll have to actually learn a programming language that isn't just another variation of the ones you learned in school.