This raises an interesting question: On clouds where you can provision baremetal instances, is it possible for the tenant to modify/corrupt firmware/hardware in such a way that the machine is subsequently "owned" and untrustworthy after it is returned to the pool of available hardware?
Yes. Some vendors have mitigations against that (Oracle have done significant work in this respect), but otherwise there's no especially good reason to believe that public bare metal providers are giving you trustworthy machines.
Definitely. If you can update the BIOS from your operating system, then you could load something malicious.
Also, I wonder if you could read the current IPMI password for a machine from the operating system, and if baremetal providers set those passwords to some default...
If you're able to get access to the management network then you've basically already won - IPMI implementations just aren't good enough in general to resist any kind of determined attack. But to answer your question, conforming implementations shouldn't let you read the password back over the local interface. Whether that's true in reality, well…
Hmm, so with ipmitool you can test passwords against every registered IPMI user - that's scriptable. (How fast can you test? Who knows.)
Then you can also get the IPMI network's VLAN ID from ipmitool, if your hosting provider was lazy and didn't use the dedicated management port. After that, configure your box to sit on that VLAN and you're in?
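A rough sketch of what that scan might look like (the `user test` and `lan print` subcommands are real ipmitool commands, but the user-slot count and candidate password here are purely illustrative, and it assumes an in-band interface to the BMC):

```shell
#!/bin/sh
# Hypothetical sketch: enumerate BMC user slots and test one candidate
# password against each via ipmitool's local interface. With DRY_RUN=1
# (the default here) it only prints the commands it would run.
DRY_RUN=${DRY_RUN:-1}
CANDIDATE="admin"   # illustrative password guess, not a known default

cmds=""
for id in $(seq 1 10); do   # BMCs typically expose on the order of 10 user slots
  # 'ipmitool user test <id> 16 <password>' checks a 16-byte password
  cmd="ipmitool user test $id 16 $CANDIDATE"
  cmds="$cmds$cmd
"
  if [ "$DRY_RUN" = "1" ]; then
    echo "$cmd"
  else
    $cmd && echo "user $id accepts '$CANDIDATE'"
  fi
done

# The BMC's LAN configuration (including the 802.1q VLAN ID, if one is
# set) is readable the same way:
echo "ipmitool lan print 1"
```

How fast the BMC lets you iterate candidate passwords is the open question; this only shows that the loop itself is trivial to script.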
Most approaches to GPU security seem to say "The architecture is complex and proprietary enough we won't bother securing it. We'll just make sure we have a robust reset procedure and make sure a compromised GPU can't compromise the rest of the system".
Normally I'd agree with you (despite my vested interest), but from a security standpoint these protections are what gate the launch. We simply don't let users run side by side, or one after the other, unless we believe in the setup. On a related point, being GA certainly shouldn't be the bar for security; at best it would be the bar for Beta.
Disclosure: I work on Google Cloud (and even pitched in on GPUs).
Overall I thought fuzzing PCIe was really interesting; I hope having the GPUs on a switch doesn't degrade performance too badly.
When they hit general availability I'll probably be running games on GCE and streaming the output to my laptop, as long as egress costs aren't too bad.
> The most interesting challenge here is protecting against PCIe's Address Translation Services (ATS). Using this feature, any device can claim it's using an address that's already been translated, and thus bypass IOMMU translation. For trusted devices, this is a useful performance improvement. For untrusted devices, this is a big security threat.
I wonder whether operating systems disable this by default. As far as I know, modern Linux kernels try to use the IOMMU to isolate devices by default, but ATS would bypass that isolation.
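For what it's worth, on Linux you can see whether a device advertises ATS in `lspci -vv` output, and booting with `pci=noats` tells the kernel not to enable ATS on any device. A sketch against a fabricated lspci excerpt (on a real machine you'd pipe in actual `lspci -vv` output):

```shell
# Check a device's ATS capability from 'lspci -vv'-style output. The
# excerpt below is fabricated for illustration; on a real machine you'd
# use something like: sample="$(lspci -vv -s <slot>)"
sample='Capabilities: [150 v1] Address Translation Service (ATS)
	ATSCap:	Invalidate Queue Depth: 00
	ATSCtl:	Enable+, Smallest Translation Unit: 00'

if echo "$sample" | grep -q "Address Translation Service"; then
  echo "device advertises ATS"
fi
# In lspci output, 'Enable+' in the ATSCtl line means ATS is turned on
if echo "$sample" | grep -q "Enable+"; then
  echo "ATS is currently enabled on this device"
fi
# Booting Linux with pci=noats prevents the kernel from enabling ATS at all.
```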
Probably a good reason why you shouldn't rely on hard drives' built-in encryption for anything but at-rest protection: the bus between the drive and the CPU is not protected.