Reported this exact bug to Zendesk, Apple, and Slack in June 2024, both through HackerOne and by escalating directly to engineers or PMs at each company.
I doubt we were the first. That is presumably the reason they failed to pay out.
The real issue is that non-directory SSO options like Sign in with Apple (SIWA) have been incorrectly implemented almost everywhere, including by Slack and other large companies we alerted in June.
Non-directory SSO should not have equal trust vs. directory SSO. If you have a Google account and use Google SSO, Google can attest that you control that account. Same with Okta and Okta SSO.
SIWA, GitHub Auth, etc. are not doing this. They rely on a weaker proof: usually just control of the email address at a single point in time.
SSO providers are not fungible, even if the email address is the same. You need to take this into account when designing your trust model. Most services do not.
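A minimal sketch of what that trust model could look like on the relying-party side. The provider names and the split into two sets are illustrative assumptions, not anyone's real configuration:

```python
# Directory SSO: the IdP manages the account lifecycle for the domain.
# Non-directory SSO: the IdP only checked email control once, at signup.
# Both sets below are illustrative, not an authoritative classification.

DIRECTORY_PROVIDERS = {"okta", "google-workspace", "azure-ad"}
NON_DIRECTORY_PROVIDERS = {"siwa", "github", "google-consumer"}

def login_requires_fresh_email_check(provider: str) -> bool:
    """Return True if the service should re-verify the email itself
    (e.g. send a confirmation link) instead of trusting the assertion."""
    if provider in DIRECTORY_PROVIDERS:
        return False  # IdP attests current control of the account
    if provider in NON_DIRECTORY_PROVIDERS:
        return True   # IdP only proved email control at some past moment
    return True       # unknown provider: fail closed
```

The point is simply that the decision has to be per-provider, not per-email-address.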
I do web app testing and quite often report a similar issue as a risk to my clients. You can replace Google below with many other identity providers.
Imagine Bob works at Example Inc. and has email address bob@example.com
Bob can get a Google account with primary email address bob@example.com. He can legitimately pass verification.
Bob then gets fired for fraud or sexual harassment or something else gross misconduct-y and leaves his employer on bad terms.
Bob still has access to the Google account bob@example.com. It didn't get revoked when they fired him and locked his accounts on company systems. He can use the account indefinitely to get Google to attest for his identity.
Example Inc. subscribes to several SaaS apps that offer Google as an identity provider for SSO. The SaaS app validates that he can get a trusted provider to authenticate that he has an @example.com email address and adds him to the list of permitted users. Bob can use these SaaS apps years later and pull data from them despite having left the company on bad terms. This is bad.
I think the only way for Example Inc. to stop this in the case of Google would be to create a workspace account and use the option to prove domain ownership and force accounts that are unmanaged to either become managed or change their address by a certain date. https://support.google.com/a/answer/6178640?hl=en
Other providers may not even offer something like this, and it relies on Example Inc. seeking out the identity providers, which seems unreasonable. How do you stop your corporate users signing up for the hot new InstaTwitch gaming app or Grinderble dating service that you have never heard of and using that to authenticate to your sales CRM full of customer data?
This is the right answer for this problem. If you're not interested in being a paying workspace customer, get cloud identity free and verify your domain. You can then take over and/or kick out any consumer users in the domain.
Every time I've left an organization, they have swiftly deleted the company email address/revoked my access to it. I assume every reasonable organization will have processes in place to do this.
I don't see this as a vulnerability: how is Google supposed to know that a person has left the company? You let them know by deleting the account.
In the above example, the normal flow to get a Google address user@company.com relies on setting DNS records for company.com, both to prove control of the domain and to route email to that domain. There may be an exploit/bypass I'm not seeing, but I legitimately don't see any way a user who has a legitimate user@company.com email address hosted somewhere besides Google Workspace could then set up a user@company.com email address with Google.
If there's a way to do this, I would greatly appreciate a link or brief explanation, as our process for employee termination/resignation does involve disabling in the Google admin portal and if we need to be more proactive I definitely want to know.
The issue here is that if company.com does not use Google Workspace and hasn't claimed company.com, then any employee can sign up for a "consumer" Google account using user@company.com.
There are legitimate reasons for this, e.g. imagine an employee at a company that uses Office365 needing to set up an account for Google Adwords.
You can sign up for Google with an existing email. So if example.com is all on MS365, that's where the admins control stuff. No Google Workspace at all, no DNS records or proof of domain to anyone but MS.
So anyone with an example.com email can make a Google account using that email as their login. Google verifies they have the email, and that becomes their login. A common setup for users who need to use Google Ads or Analytics.
But when the company disables 365 login, the Google account remains. And if you use something third party that offers "Sign in with Google" and assumes that because you have a Google account ending in "example.com" you are verified as "example.com", you've got access even if the 365 account is disabled.
If you have the Google admin portal this doesn't work, since you're controlling it there. But signing up for Microsoft or Apple accounts with that Google Workspace address might have the same loophole.
That removes you from their system. If I make a GitHub account using bob@example.com, GitHub doesn't get notified when I get fired from example.com, so I can keep using my GitHub bob@example.com account with services that ask GitHub whether I'm bob@example.com, even though I no longer have access to that email.
This no longer happens for services whose accounts follow a social media style. For such accounts, employees are expected to use their own accounts (presumably with followers, reputation, etc.) and keep them after leaving the company. For real social media, this is probably fine, but I don't understand why we accept this model for GitHub and GitLab (and Sourceware before that). Even from an employee perspective, it's not great because it makes it unclear who owns what, especially with services like GitHub which have rules about how many accounts one person can create, and under what circumstances.
I have no idea how this is supposed to work in practice for Github and Gitlab, where people gain access to non-public areas of those websites, but they are still expected to use their own accounts which they keep after leaving their employer.
(The enterprise-managed Github accounts do not address this because they prevent public, upstream collaboration.)
This is why you store and match on the sso provider’s uuid, not on the email address. Emails are not a unique identifier and never have been. Someone can delete their email account and later someone else can sign up and claim the same email. Anyone matching on email addresses is doing it wrong. I’ve tried to argue this to management at companies I’ve worked at in the past, but most see my concern as paranoid.
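A sketch of that matching rule, assuming OIDC, where the stable identifier is the `(iss, sub)` pair from the ID token (the in-memory dict stands in for a real user table):

```python
# Key user records on the stable (issuer, sub) pair from the OIDC ID
# token, never on the email claim. Emails can be recycled or reclaimed
# by a different person; `sub` is unique and stable per issuer.

users_by_identity: dict[tuple[str, str], dict] = {}

def find_or_link_user(id_token_claims: dict) -> dict:
    key = (id_token_claims["iss"], id_token_claims["sub"])
    user = users_by_identity.get(key)
    if user is None:
        # First login from this identity: create a record, keeping the
        # email only as display metadata, not as the lookup key.
        user = {"email_hint": id_token_claims.get("email"), "identity": key}
        users_by_identity[key] = user
    return user
```

With this scheme, a new account that happens to reuse an old email address gets a fresh user record instead of inheriting the previous owner's access.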
I wonder why Google would make an SSO assertion along the lines of "yes, this user Bob has email address bob@example.com" in the situation where example.com is not under a Workspace account. Such assertions ought to be made only for Workspace (and Google's own domains such as gmail.com, googlemail.com, etc.) since outside of that it's obsolete proof as you say, i.e. it's merely a username of a Google account which happens to look like an email address, and nothing more.
I read the GP's question as "why" would Google allow that in the first place?
The reason is obvious: because a Google account gets you access to many a Google service without requiring you to open a Gmail account.
However, the question still stands: why does Google allow authentication with a non-Gmail/Workspace account? Yes, it would be confusing since not all Google Accounts would be made the same, but this entire class of security issues would disappear.
So it's the usual UX convenience vs security.
Alternative "fix" that's both convenient and secure is to have every company use Google Apps on their domain ;-)
Perhaps the following could be a solution to this issue?
Any OAuth provider should send a flag called "attest_identity_ownership" (false/true) as part of the auth flow, set to true if the account is a Workspace account or Gmail (or the equivalent for other services), and false if the email is an outside email. The service handling the login could then decide whether to trust the login or proceed otherwise, e.g. by telling the user to use a different OAuth service or an internal mechanism where the identity is attested.
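The relying-party side of that proposal could be a few lines. Note the claim name is hypothetical (no real provider sends it today; Google's `hd` claim on Workspace accounts is the closest existing analogue):

```python
# Sketch of a relying party consuming the commenter's proposed
# (hypothetical) "attest_identity_ownership" claim.

def accept_sso_login(claims: dict) -> str:
    if claims.get("attest_identity_ownership") is True:
        return "login_ok"  # IdP manages the identity (directory SSO)
    # IdP only proved email control at some point in the past:
    # route the user to a different, attested mechanism.
    return "use_alternate_auth"
```

Failing closed when the claim is absent means older providers that never send it are treated as non-directory by default.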
If anyone needs an example of why such unmanaged users exist, I actively use this feature. I have my own Google Workspace on my own domain. Years ago when I bought a Nest product I found that I couldn't use a Google Workspace account to access Nest. No problem, I create a consumer Google account under my Google Workspace domain. The email looks just like a Workspace account. And it doesn't need any additional Workspace licenses. (I no longer plan to buy any more Nest devices so I'll delete the account once my last Nest product stops working.)
Presumably one of the PMs you’re referring to has posted this article for additional information. Feels like they’re doubling down on their initial position.
> Although the researcher did initially submit the vulnerability through our established process, they violated key ethical principles by directly contacting third parties about their report prior to remediation. This was in violation of bug bounty terms of service, which are industry standard and intended to protect the white hat community while also supporting responsible disclosure. This breach of trust resulted in the forfeiture of their reward, as we maintain strict standards for responsible disclosure.
Wow... there was no indication that they even intended on fixing the issue, what was Daniel hackermondev supposed to do? Disclosing this to the affected users probably was the most ethical thing to do. I don't think he posted the vulnerability publicly until after the fix. "Forfeiture of their award" -- they said multiple times that it didn't qualify, they had no intention of ever giving a reward.
As someone who manages a bug bounty program, this kind of pisses me off.
For some of our bugs reported on H1, we openly say, "Hey, we need to see a POC in order to get this triaged." We do not provide test accounts for H1 users, so if they exploit someone's instance, we'll not only take the amount that customer paid off their renewal price, we'll also pay the bounty hunter.
Fwiw, I wouldn't be surprised if the author of this article is a bit upset that Daniel (hackermondev) earned a significant percentage of the author's annual income. If this had been "fixed" by Zendesk, they would have paid him only a few percent of the $50k he actually made.
Edit: to those downvoting, the fact of the matter is that Zendesk's maximum bounty is far lower than 50k; yet OP made 50k; meaning by definition the value of the vulnerability was at least 50k.
If anything, they are probably upset that they apparently lost some customers over this. That must (rightfully) hurt. But it's their own mistake - leaving a security bug unaddressed is asking for trouble.
He didn't even "go public" as that term is normally used in bug disclosure. He didn't write it up and release an exploit when Zendesk told him it was out of scope and gave him no indication they considered it a problem or were planning a fix. Instead he reached out to affected companies in an at least semi-private way, and those companies considered the bug serious enough to pay him $50k collectively and, in at least some cases, drop Zendesk altogether.
I am 100% certain that every one of the companies that paid the researcher would consider the way that researcher handled this "the best alternative to HackerOne-rules 'ethical disclosure' in the face of a vendor trying to cover up serious flaws".
In an ideal world, in my opinion HackerOne should publicly revoke Zendesk's account for abusing the rules and rejecting obviously valid bug payouts.
Aren't such disputes about scope relatively common? Not sure what Hackerone can do about it.
For example, most HackerOne customers exclude denial-of-service issues because they don't want to encourage people to bring down their services with various kinds of flooding attacks. That doesn't mean those same HackerOne customers (or their customers) wouldn't care about a single HTTP request bringing down the service for everyone for a couple of minutes. Email authentication issues are similar, I think: obviously on-path attacks against unencrypted email have to be out of scope, but if things are implemented so badly that off-path attacks somehow work too, then that really has to be fixed.
Of course, what you really shouldn't do as a HackerOne customer is use it as a complete replacement for your incoming security contact point. There are always going to be scope issues like this, or people unable to use HackerOne at all.
Once they'd brushed him off and made it clear they were not interested in listening to him, resolving the bug, or living up to the usual expectations that researchers have in companies claiming to have bug bounties on HackerOne, I'd say they lost any reasonable expectation that he'd do that.
I'll note he did go to the effort of having the first stab at that sort of resolution, when he pushed back on HackerOne's inaccurate triage of the bug as an SPF/DKIM/DMARC email issue. He clearly understood the need for triage for programs like this, and that the HackerOne internal triage team didn't understand the actual problem, but again was rebuffed.
When in doubt, go with the side which has been forthcoming. Zendesk didn’t publish details and wrote their post misleadingly describing it as a supply chain problem sounding almost as if they were a victim rather than the supplier of the vulnerability. It’s always possible that there are additional details which haven’t come out yet but that impression of a weasel-like PM is probably accurate.
That article claims to have “0 comments”, but currently sits at a score of -7 (negative 7) votes of helpful/not helpful. I think they have turned off comments on that article, but aren’t willing to admit it.
EDIT: It’s -11 (negative 11) now. Still “0 comments”.
In damage control mode, Zendesk can't pay a bounty out here? Come on. This is amateur hour. The reputational damage that comes from "the company that goes on the offensive and doesn't pay out legitimate bounties" impacts the overall results you get from a bug bounty program. "Pissing off the hackers" is not a way to keep people reporting credible bugs to your service.
I don't understand what this tries to accomplish. The problem is bad, botching the triage is bad, and the bounty is relatively cheap. I understand that this feels bad from an egg-on-face perspective, but I would much rather be told by a penetration tester about a bug in a third-party service provider than not be told at all just to respect a program's bug bounty policy.
> "Pissing off the hackers" is not a way to keep people reporting credible bugs to your service.
That doesn’t matter if your goal with a bug bounty program is not to have people reporting bugs, but instead to have the company appear to care about security. If your only aim is to appear serious about security, it doesn’t matter what you actually do with any bug reports. Until the bugs are made public, of course, which is why companies so often try to stop this by any means.
Sounds like a great way to get a bunch of black hats to target you after pissing off the white hats. The point of playing nice with people this smart is precisely to prevent this kind of damage, the kind that results in losing clients.
But I guess corporations ignoring security for more immediately profitable ventures on the quarterly report is a tale as old as software.
"Hi, we are ZenDesk, a support ticket SaaS with a bug bounty program that we outsource to our affected customers, who pay out an order of magnitude more than our puny fake HackerOne program. Call now, to be ridiculously upsold on our Enterprise package!"
We, the company that doesn't understand security, can't tell whether this was exploited, therefore we confidently assert that everything is fine. It's self consistent I suppose but I wouldn't personally choose to scream "we are incompetent and do not care" into the internet.
As a former ZD engineer, shame on you Mr Cusick (yes, I know you personally) and shame on my fellow colleagues for not handling this in a more proactive and reasonable way.
Another example of impotent PMs, private equity firms meddling and modern software engineering taking a back seat to business interests. Truly pathetic. Truly truly pathetic.
This is very important to keep in mind when implementing OAuth authentication! Not every SSO provider is the same. Even if the SSO provider tells you that the user's email is X, they might not even have confirmed that email address! Don't trust it and confirm the email yourself!
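One concrete check, sketched under the assumption of a standard OIDC ID token: only skip your own confirmation step when the token explicitly says `email_verified` is true. The `send_verification_email` helper is hypothetical:

```python
import secrets

# Minimal sketch: even when the IdP returns an email claim, fall back to
# verifying the address ourselves unless email_verified is explicitly true.

def post_sso_login(claims: dict) -> str:
    if claims.get("email_verified") is True:
        return "trusted"
    # IdP never confirmed the address (or omitted the claim):
    # verify it ourselves with a one-time token.
    token = secrets.token_urlsafe(16)
    # send_verification_email(claims["email"], token)  # hypothetical helper
    return "pending_verification"
```

Note that a missing claim is treated the same as `false`; some providers simply omit `email_verified`, and absence is not proof.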
Use OIDC. It is built on top of OAuth. I would fiddle with implementing basic OAuth clients first, like a Spotify playlist fetcher or something, just to start getting a feel for the flows and the things you need to be concerned with.
[1] - They had to go commercial to stay afloat; there weren't enough contributions from the community/etc. That said, it's pretty cheap for what it does in the .NET space.
Can you explain a bit more what makes Sign in with Apple different from Google Sign-in? Apple certainly does maintain a list of users with accounts. So what does "non-directory" mean here exactly? Why can Apple not attest that you control that account at sign-in time?
Now? nothing.
I think this thinking is a relic of Google's status as seemingly the last remaining email provider to automatically create a Gmail account when you signed up for a Google account. So using Google SSO meant using your Gmail account, and so control of the email address was necessary for control of the Google account. If you lose the email account, you lose the Google account. This is not true anymore since you can sign up for a Google account with any email.
Whereas you can (and I believe always could*) create an apple ID with any old email address.
*Maybe this delinked situation only came about when they added the App Store to OS X, and figured they'd make less money if they require existing Mac users to get a new email account in order to buy programs in the manner which would grant them a cut.
Apple has a list of all the email addresses for its Apple IDs, but it doesn't control them, and having one deleted doesn't necessarily affect the other.
Google and custom domain email have always been delinked from this perspective. You could create a Google account with a custom domain and then point the domain elsewhere or lose control of it, and you'd still retain control of the account.
Basically, the directory example is essentially theoretical at this point - maybe it works for employees at companies that also happen to provide SSO services. So if you work at Facebook, Google, Apple, or GitHub and have a me@FGAG.etc.com email address, and you signed into Slack through the SSO affiliated with your company and your company email, but later you don't work there and your work account access has been revoked, you won't be able to use that SSO to sign into Slack. That's what they mean by directory control or whatever.
In contrast, if you sign up to GitHub with your work email account, unless it's a managed account controlled by your work, your work doesn't actually control it. They just vouched for your affiliation at sign-up when you verified your email. So if you use GitHub SSO to sign up for a service that 'verifies' your work email address via GitHub during the process, that won't change when you leave and the company revokes access to the email. GitHub SSO, in this case, isn't verifying that you have an email account @company.com. It's verifying that you once did, or at least once had access to it. This is what they mean by the non-directory whatever.
I think what he means is, if you have an @gmail.com account via Google, that is pretty good proof of control. But if you have any other e-mail (e.g. a custom domain) via Google, it's not.
Similar with Apple, if you were signing in with an @icloud.com, it's pretty good proof, but if you have an Apple ID with a third-party e-mail it's not proof of current control of that e-mail.
That helps, but I still don't have a full picture. What's the threat here? Is it that: if a hacker gains temporary access to Bob's email bob@example.com, they can create an Apple account attached to it, and use that account to sign in with a service ABC, then that hacker gains access to Bob's private info in service ABC? But if the hacker already has email access, can't he just log into service ABC directly anyway?
Also, is it impossible to have a Google account with a non-gmail address? The original poster seemed to be saying that Google _is_ a directory SSO and Apple _is not_ categorically. But if you can have a Google account without a Gmail-ran email account, wouldn't Google have the same vulnerability?
I think the most likely threat in this case is with ex-employees. If Bob has access to bob@example.com and creates an Apple account with it, then subsequently gets fired from example.com, they might delete his email address but his Apple account will still allow him to login to services using Sign in with Apple. (Because Apple only checks ownership of the email address when the Apple account is being created.)
Google accounts have the exact same issue so I don't understand the distinction made by the OP though.
You can also sign up for a Google account using a non-Gmail email address without creating a new Gmail address, provided the domain owner hasn't created a Workspace account with that domain in the past.
This can be done with an account that you once had control over but don't anymore, like if you leave an employer.
You can't send mail from it, but many apps will take having a Google account with a given email as proof of ownership, or an @example.com email address as proof that you are an employee of Example Inc. when they are a customer of the app and have a tenant set up.
Isn't the simplest solution here to not support SSO at all?
I get that there's a convenience factor, but even more convenient is the password manager built into every modern browser and smartphone. If the client decides to use bad passwords, that will hurt them whether or not they're using SSO.
Well, if the "rager hard" mode is the default, that doesn't help the general population, now does it? I still hate hCaptcha with a burning passion and hope no site ever adopts it
I dislike any kind of CAPTCHA. That said, reCAPTCHA is the worst. It may be my combination of anonymised Firefox, CG-NAT and other tracking and privacy protections, but reCAPTCHA is overly hostile to me.
This is entirely configurable by the site owner. hCaptcha has entirely passive score-based detection, 99.9% passive mode, and more aggressive options as needed.
Passive scores rarely work for me, because I block trackers and as much 3rd party things as I can get away with. But I guess I’ll believe you and be slightly less annoyed with hCaptcha.
This is not what empirical measurements show. It is referring to a minimum internal benchmark, rather than what actually happens. (source: work there)
If you're getting shadow banned somewhere, it is typically either the site itself or their CDN or bot management providers that is sending you into a challenge loop; hCaptcha doesn't control the behavior in those scenarios.
Like any attestation scheme, it doesn't prove anything about the humanity of the button-presser, only that software, hardware, or flesh triggered an action in iOS.
The vast majority of traffic from Tor IPs is abuse, which makes it challenging to deliver a good experience for the handful of normal users mixed in there.
That does not help me on mobile, or in disposable Qubes I browse in to keep totally anonymous. At best I could see using fido2 taps with random identifiers to prove I am physically touching a hardware authenticator even if the ID is a throw away.