Hacker News | Nextgrid's comments

Downvoted - this is false, sorry. The whole point of security keys (whether exposed via PKCS#11 or FIDO) is that the private key material never leaves the security key; instead, the cryptographic operations are delegated to the key, just like with a commercial HSM.

Technically, a private key imported into a PKCS#11 device (and marked as exportable) can subsequently be re-exported (though even then, the device itself handles the crypto during normal operation), but a key generated on-device and marked as non-exportable is guaranteed never to leave the physical device.
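The delegation model can be sketched in a toy form: the caller only ever gets signatures back, never the key. This is a stdlib-only analogy using HMAC as a stand-in (real tokens use asymmetric crypto and enforce this in hardware via attributes like PKCS#11's CKA_EXTRACTABLE/CKA_SENSITIVE; `ToyToken` is a hypothetical name):

```python
import hashlib
import hmac
import os

class ToyToken:
    """Toy stand-in for a PKCS#11/FIDO token: the secret is generated
    on-'device' and only the signing operation is exposed."""
    def __init__(self):
        self.__secret = os.urandom(32)  # generated here, never returned

    def sign(self, data: bytes) -> bytes:
        # The caller receives a signature; the key itself stays inside.
        return hmac.new(self.__secret, data, hashlib.sha256).digest()

token = ToyToken()
sig = token.sign(b"challenge")
```

The point being illustrated: there is simply no export operation in the interface, so compromise of the calling machine yields signatures, not the key.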


The idea with HSM-backed keys is that even in case of compromise, you can clean up without having to rotate the keys. It also makes auditing easier: if your machine was powered down or offline, you're guaranteed the keys weren't used during that timeframe.

Rotating keys is easy with the right software (I work @ Userify). Agreed on the auditing point.

The problem with token-based keys, to tptacek's point, is that they can be a giant pain once you start scripting across fleets.


> not pull a sidearm and shoot the corrupt commander

Wouldn't you just get "zeroed" by the upstream commander or court-martialed and sentenced to a gulag?


I don't mind the increase per se, but the "improvements" they advertise to justify it are laughable. Not to mention that 1Password 8 has been a major downgrade across the board.


Trial and error?

Just like it does when given existing GPL'd source and dealing with its hallucinations, couldn't the agent operate on a black box (or a binary Windows driver and its disassembly)?

The GPL code helped here, but as long as the agent can run in a loop and test its work against a piece of hardware, I don't see why it couldn't do the same without any code, given enough time.


Presumably one would like to use the laptop before the million years it would take the million monkeys typing on a million typewriters to produce the Shakespearean WiFi driver.

Consider that even with the Linux driver available to study, this project took two months to produce a viable BSD driver.


This process took two months, including re-appraisals of the process itself, and it isn't clear that the calendar on the wall was a motivator.

The next implementation doesn't have to happen in a vacuum. Now that it has been done once, a person can learn from it.

They can discard the parts that didn't work well straight away, and stick to only the parts of the process that have good merit.

We'll collectively improve our methods, as we tend to do, and the time required will get shorter with each iteration.


Seems very promising, but then you realize the LLM behind said agent was trained on public but otherwise copyright-encumbered proprietary code: improperly redistributed SDKs and DDKs, source code leaks, and friends.

In fact, most Windows binaries have public debug symbols available, which makes software reverse engineering (SRE) not exactly a hurdle, and an agent-driven SRE not exactly a tabula rasa reimplementation.
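For context on why those symbols are so easy to get: a PE binary embeds a CodeView record (PDB filename, GUID, age), and the public Microsoft symbol server serves the matching PDB at a predictable path built from those fields. A minimal sketch of that path convention (the GUID and age below are made up for illustration):

```python
def pdb_url(server: str, pdb_name: str, guid: str, age: int) -> str:
    """Build the conventional SymSrv lookup path:
    <server>/<name>/<GUID as 32 uppercase hex digits, no dashes><age in hex>/<name>
    """
    key = guid.replace("-", "").upper() + format(age, "X")
    return f"{server}/{pdb_name}/{key}/{pdb_name}"

# Hypothetical GUID/age, just to show the shape of the URL:
url = pdb_url("https://msdl.microsoft.com/download/symbols",
              "ntkrnlmp.pdb",
              "3844dbb9-2017-4967-a2f5-2759304efc1c", 2)
print(url)
```

With a URL like that, a plain HTTP GET returns function names, type info, and more for the binary in question.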


Your ISP will cut your account when you saturate the upstream pipe 24/7 for weeks on end... which will only happen if you host video.

And your home insurance will not know or care if you're operating a desktop-sized computer or even a single server (it's perfectly normal for a developer to bring an actual server home for troubleshooting). Home insurance only starts to care if you're running dozens of them.


It depends. Running a business from home is where my local insurer draws the line; beyond that you need additional professional insurance.

You should be able to do this hassle free, and you probably can get away with it, but you may find yourself in a grey area later.

It's just one of many types of red tape that stifles innovation.


That's a tricky one. Is working from home considered running a business? What if you have formed an LLC that you work for and your company is the one being hired? That's technically a business running from your address, yet it's no different from what would otherwise be considered WFH.


> Is working from home considered running a business?

It actually is in some jurisdictions, even if of course no one cares (except insurers looking for reasons to refuse a claim).


> Perhaps someone at their end screwed up a loop conditional, but you'd think some monitoring dashboard somewhere would have a warning pop up because of this.

If you've been in any big company, you'll know things perpetually run in a degraded, somewhat broken mode. They've even made up the term "error budget" because they can't be bothered to fix the broken shit, so now there's an acceptable level of brokenness.


>they can't be bothered to fix the broken shit

Surely it's more likely that it's just cheaper to pay for the errors than to pay to fix the errors.

Why fix 10k worth of errors if doing so will cost me 100k?


Orgs are not ruthless like that: anything below a certain % of revenue isn't worth bothering with, unless the problem creates _more_ work for the person responsible than fixing it would.

Add a few % if the person who gets more work from the problem isn't the same as the person who'd have to fix it. People will happily leave things in a broken state if no one calls them out on it.


In my opinion, if something isn’t actually an error, you modify your logging to not log it as an error. Your error logging/alerting pipeline should always stay clean.

If something shows up in there, there should only be two options: 1) it's an actual error, and you fix it and make sure it never happens again, or 2) it's not an error, and you fix it by adjusting the log level so it isn't logged as one.

If someone suggests an “error budget” on my watch they get the door. You can have a warning budget (and the resources to adjust the log levels or remediation protocols to fix said “errors”) but actual errors should remain errors - otherwise they’re delivering broken software and that’s not what I’m paying them for.

Of course, companies with the common sense to do this already do it, and nobody there in their right mind would suggest an "error budget"; but those that don't have a serious problem that needs to be rectified.

The danger otherwise is that you make your observability pipeline useless: if "errors" no longer actually mean errors, you open the door to actual errors being ignored until it's too late, when remediation is far more costly.
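The two options above boil down to routing by severity, not by convenience. A minimal stdlib sketch of the discipline being described (the function names and messages are hypothetical):

```python
import logging

log = logging.getLogger("app")

def record_upstream_retry(attempt: int) -> None:
    # Expected, self-healing behaviour: demoted to WARNING so the
    # error channel stays clean and alert-worthy.
    log.warning("upstream call retried (attempt %d)", attempt)

def record_corrupt_record(record_id: str) -> None:
    # A broken invariant is a real error and stays one: this should
    # page someone and then be fixed so it never recurs.
    log.error("corrupt record %s", record_id)
```

An alerting pipeline that pages only on ERROR and above then stays trustworthy: anything that fires is, by construction, actionable.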


At Facebook, a full outage is accompanied by "first time?" memes. Unless you are on the specific team responsible, you would indeed not really have any reason to care.


I'm in my third year in the enterprise world now, and I've learned that many engineers will purposely not fix or improve their problematic applications as a weird sort of job security. It kind of blew up in their faces last year when we moved most of the affected on-premise applications to the cloud. Seems like when you introduce tons of friction on-premise, it makes the cloud look even better to the suits.


It's not a matter of "can't be bothered." Engineers are constantly fixing things and rolling out new features. "Error budgets" are an acknowledgement of the tradeoff between these two things, and making a conscious choice about the balance between them, according to the business requirements of the application in question.

Keep in mind that "fixing things" is essentially a Sisyphean task - no matter how much you do there's always more you can do. Just like adding features. You have to have some kind of guideline on when enough is enough.


> the maximum fine for this is 4% of last year's total revenue or 20 mio €, whichever is the larger number.

The maximum fine hasn't even been levied against Facebook, after years and many blatant GDPR cases. Do you really think someone is getting a fine for not replying to a subject access request in due time? If so, I have a very good bridge to sell you, and that bridge is more likely to exist than Amazon getting any kind of GDPR fine for not acknowledging a SAR.
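For reference, the ceiling quoted above is Art. 83(5) GDPR: the greater of EUR 20M or 4% of total worldwide annual turnover of the preceding financial year. A quick illustrative calculation (the revenue figures are made up, and actual fines are discretionary and, as noted, far below the cap in practice):

```python
def gdpr_max_fine_eur(annual_turnover_eur: float) -> float:
    """Art. 83(5) GDPR ceiling: the higher of EUR 20M or 4% of
    worldwide annual turnover of the preceding financial year."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

print(gdpr_max_fine_eur(100_000_000))      # smaller company: flat 20M cap
print(gdpr_max_fine_eur(120_000_000_000))  # Meta-scale turnover: ~4.8bn cap
```

The 4% branch only kicks in above EUR 500M of turnover, which is why the percentage cap is really aimed at the large players.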


The badge could (I don't know, haven't done it yet) help you differentiate yourself in a sea of monkeys slinging ChatGPT'd profiles from a third-world boiler room.

(whether it actually does or the monkeys now got a steady source of fake/stolen IDs is another matter)


EU GDPR has very little enforcement. So while the regulation in theory prevents that, in practice you can just ignore it. At worst, a token fine comes up years down the line.

