Zero-day in Sign in with Apple

  • "Apple also did an investigation of their logs and determined there was no misuse or account compromise due to this vulnerability."

    Given the simplicity of the exploit, I really doubt that claim. Seems more likely they just don't have a way of detecting whether it happened.

  • > The Sign in with Apple works similarly to OAuth 2.0.

    > similarly

    I understand why they wanted to modify OAuth 2.0, but departing from a spec is a very risky move.

    > $100,000

    That was a good bounty. Appropriate given scope and impact. But it would have been a lot cheaper to offer a pre-release bounty program. We (Remind) occasionally add unreleased features to our bounty program with some extra incentive to explore (e.g. "Any submissions related to new feature X will automatically be considered High severity for the next two weeks"). Getting some eyeballs on it while we're wrapping up QA means we're better prepared for public launch.

    This particular bug is fairly run-of-the-mill for an experienced researcher to find. The vast majority of bug bounty submissions I see are simple "replay requests but change IDs/emails/etc". This absolutely would have been caught in a pre-release bounty program.

  • How is this something that can happen? I mean, the only responsibility of an "authentication" endpoint is to release a JWT authenticating the current user.

    At least from the writeup, the bug seems so simple that it is unbelievable that it could have passed code review and testing.

    I suspect things were maybe not as simple as explained here, otherwise this is at the same incompetence level as storing passwords in plaintext :O.

  • Wow. That's almost inexcusable, especially given that Apple forces iOS apps to implement this. If they hadn't extended the deadline (originally April 2020, pushed to July 2020), many more apps would have been totally exploitable through this.

    After this, they should remove the requirement of Apple Sign in. How do you require an app to implement this with such a ridiculous zero day?

  • This is an amazing bug, I am indeed surprised this happened in such a critical protocol. My guess is that nobody clearly specified the protocol; anyone would have been able to catch this in an abstract English spec.

    If this is not the issue, then the implementation might be too complex for people to compare it with the spec (gap between the theory and the practice). I would be extremely interested in a post mortem from Apple.

    I have a few follow-up questions.

    1. Seeing how simple the first JWT request is, how can Apple actually authenticate the user at this point?

    2. If Apple does not authenticate the user for the first request, how can they check that this bug wasn’t exploited?

    3. Can anybody explain what this payload is?

    {
      "iss": "https://appleid.apple.com",
      "aud": "com.XXXX.weblogin",
      "exp": 158XXXXXXX,
      "iat": 158XXXXXXX,
      "sub": "XXXX.XXXXX.XXXX",
      "c_hash": "FJXwx9EHQqXXXXXXXX",
      "email": "contact@bhavukjain.com", // or "XXXXX@privaterelay.appleid.com"
      "email_verified": "true",
      "auth_time": 158XXXXXXX,
      "nonce_supported": true
    }

    My guess is that c_hash is the hash of the whole payload and it is kept server side.
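(For context, per OpenID Connect Core, c_hash is actually a hash of the authorization code, not of the payload.) The larger point in this thread — that a valid signature only proves the issuer minted the token, not that the issuer validated the email claim before signing — can be shown with a minimal, self-contained JWT sketch. This is an illustration only: it uses HS256 with a local secret to stay runnable, whereas Apple signs id_tokens with RS256 and publishes the public key.

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def sign_jwt(payload: dict, key: bytes) -> str:
    # header.payload.signature, each base64url-encoded
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(payload).encode())
    sig = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

def verify_jwt(token: str, key: bytes) -> dict:
    signing_input, _, sig = token.rpartition(".")
    expected = b64url(hmac.new(key, signing_input.encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    return json.loads(b64url_decode(signing_input.split(".")[1]))

# The issuer can sign ANY payload it is handed. If it never checks that the
# session owns this email, the resulting token still verifies as perfectly valid.
key = b"issuer-signing-key"
token = sign_jwt({"email": "victim@example.com", "email_verified": "true"}, key)
claims = verify_jwt(token, key)
```

So "the signature checks out" and "the claims were validated before signing" are independent properties, which is exactly the gap this bug exploited.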

  • Is it just me, or is this writeup low on details? There are a couple of commenters saying that this is a great writeup, but all it amounts to is:

    1. what sign in with apple is

    2. sign in with apple is like oauth2

    3. there's some bug (not explained) that allows JWTs to be generated for arbitrary emails

    4. this bug is bad because you can impersonate anyone with it

    5. I got paid $100k for it

  • Wow, I'm so glad that apple forced me to implement this broken garbage into my apps!

    For those not aware, some time ago Apple decided it would be a good idea to develop their own sign-in system, and then force all apps on their store (that already support e.g. Google account login) to implement it.

    So they introduced a huge amount of additional complexity into a large number of apps, and then they fucked up security. Thank you Apple!

  • Just want to mention something about the id_token provided. I'm on my phone, so I don't have Apple's implementation handy, but in OIDC, the relying party (Spotify, for example) is supposed to use the id_token to verify the authenticated user, specifically the sub claim in the JWT id_token.

    https://openid.net/specs/openid-connect-core-1_0-final.html#...

    It's likely (although like others have noted, this is scant on details), that this value was correct and represented the authenticated user.

    A relying party should not use the email value to authenticate the user.

    Not contesting that this is a bug that should be fixed and a potential security issue, but perhaps not as bad.

    Anyone else? Am I reading this right?
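The commenter's point — key accounts on `sub`, never on `email` — can be sketched as a hypothetical relying-party lookup (the function and variable names here are illustrative, not from any real SDK):

```python
def find_account(claims: dict, accounts_by_sub: dict):
    # Per OIDC, `sub` is the stable subject identifier for this user at this
    # issuer. `email` can change, be recycled, or (as in this bug) be
    # attacker-chosen, so it must never be the lookup key for an account.
    if claims.get("iss") != "https://appleid.apple.com":
        raise ValueError("unexpected issuer")
    return accounts_by_sub.get(claims["sub"])
```

A relying party built this way would be unaffected by a forged email claim as long as the `sub` in the token was still the attacker's own.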

  • Wow, I'm in shock. How could Apple let this one slip in? When I was a junior fullstack dev I had to design a very similar system, and this was one of the very basic checks I had in mind back then. I don't know how anyone could excuse such a basic bug in such a critical service.

  • Excellent writeup! About 4 months ago, I wrote a comment[0] on HN telling folks how Apple simply omitted the server-side validations from their WWDC videos. And given the lack of good documentation at the time, WWDC videos were what most developers were following.

    Even then, the only "security" that developers had was that the attacker wouldn't know the victim's Apple userId easily. With this zero-day attack, it would have been trivial for many apps to get taken over.

    [0] https://news.ycombinator.com/item?id=22172952

  • After observing its endless flow of security and reliability bugs, I'm beginning to think that the decline of Apple's overall software quality over the past several years is more of a systemic problem.

    https://www.bloomberg.com/news/articles/2019-11-21/apple-ios...

    Looks like Federighi agrees with this diagnosis and is trying to improve the overall development process, but I'm not sure it can really be improved without changing the famously secretive corporate culture. At the level of Apple's software complexity, you cannot really design and write quality software without involving many experts' eyes. And my friends at Apple have complained to me about how hard it is to get high-level context on their own work and to collaborate across teams.

    And IMO, this systemic degradation of software quality coincides with Bertrand's departure; he had allowed a relatively open culture, at least within Apple's software division. I'm not an insider, so this is just a pure guess though.

  • Replace "zero-day" with "privately reported security bug for which I got $100k"

    That's not how zero-day works

  • my brain mis-parsed as:

    (sign in) with (apple zero day)

    which is kind of appealing

  • The write-up is not very clear in my opinion. The diagram seems to show that there are 3 API calls (maybe there are more in reality?).

    And if I understand this correctly, the issue is in the first API call, where the server does not validate whether the requester owns the Email address in the request.

    What confuses me is where the "decoded JWT's payload" comes from. Does it come from a different API call, or is it somewhere in the response?

  • "A lot of developers have integrated Sign in with Apple since it is mandatory for applications that support other social logins" -- How pathetic Apple is to force their own service on developers!!

  • Wow that’s a really simple bug. Kudos to the OP for even trying it. Most people would just look elsewhere, thinking Apple of all companies would get such a basic thing right.

  • Am I understanding the article right: the endpoint would accept any email address and generate a valid JWT without verifying the caller owned the email address?

    If so, what extra validation did Apple add to patch the bug?

  • $100,000 (!)

    Props to Apple for raising the bar on bounties!

  • With all those high-profile third parties using Apple ID, what would happen if somebody stole/deleted/damaged my data/assets on Dropbox/Spotify/Airbnb/...? Would I sue the provider who would sue Apple? But does Apple provide any guarantees to the relying parties? And if not and the only way is to depend on the reputation when choosing the ID providers you want to support, how would anyone want to support Apple ID after this? And could they not use it if Apple forces them to...?

  • Absolutely astonishing. The internal controls at Apple seem to be borderline non existent.

  • I always have a minute of nervousness while I read these security posts hoping that the bottom will say it's already been fixed with XYZ security team. Glad it's fixed w/ Apple already. The "they still haven't fixed it" or "still haven't responded" ones are scary.

  • I am hoping WWDC 2020 will have some great news and events that let us forget all the mistake they made in Catalina and incidents like this.

    I am not sure if I am understanding the blog post correctly, because its simplicity is beyond ridiculous.

  • What a click-bait title. 0-day implies it was found already being exploited in the wild.

    The author even says that Apple found no evidence of it being exploited.

    By definition when this blog post was published it was not the 0th day.

  • Some people are commenting that this is overpriced, but I don't think so, even considering the INR value. The bug is quite critical considering how large the Mac and iOS ecosystem is.

  • To me this seems like a poor protocol design that created an opportunity for an implementation error, and that opportunity was seized.

    In the initial authorization request, rather than passing a string with an email address, the caller could pass a boolean `usePrivateRelay`: if true, generate a custom relay address for the third party; if false, use the email address on file.

    With that one change the implementer no longer has the opportunity to forget to validate the provided email address, and the vuln is impossible.
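The proposed design could be sketched like this (a minimal illustration of the commenter's idea, with made-up names; the relay generator is a stand-in, not Apple's actual scheme):

```python
import hashlib

def relay_address(account_email: str) -> str:
    # Hypothetical stand-in for Apple's private-relay address generator.
    tag = hashlib.sha256(account_email.encode()).hexdigest()[:12]
    return f"{tag}@privaterelay.appleid.com"

def email_claim(use_private_relay: bool, account_email: str) -> str:
    # The client supplies only a boolean; the server derives the email from
    # its own records. There is no client-supplied string left to validate,
    # so the "forgot to check ownership" bug cannot occur by construction.
    return relay_address(account_email) if use_private_relay else account_email
```

The design choice here is the classic "make illegal states unrepresentable": shrinking the request surface from an arbitrary string to a boolean removes the validation step entirely rather than relying on someone remembering to implement it.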

  • There are a few other issues with how websites implemented it. For example, at work, appleid and a few other Apple domains are blocked (they wanted to ban iTunes streaming, etc.). When I tried to log in to Pocket (Read It Later) [web version], this blocking caused the whole login form to be hidden once the page finished loading, and I couldn't even log in with my username and password.

  • You didn’t make enough money.

  • It’s unclear to me exactly where the vulnerability is, given the author's description in “technical details”. Does this occur in the implicit flow as well as the code flow? Is the token request unauthenticated? This seems highly unlikely. Or does Sign In With Apple deviate from the OpenID specification in a way that I’m unfamiliar with?
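For reference, a standard OAuth 2.0 code-for-token exchange is client-authenticated. Sign in with Apple's token endpoint follows this shape, with client_secret being an ES256-signed JWT generated by the developer; whether the vulnerable endpoint was this one or an earlier step is not clear from the write-up. A sketch of the request body (builds the form body only, no network call):

```python
from urllib.parse import urlencode

def token_request_body(code: str, client_id: str, client_secret_jwt: str,
                       redirect_uri: str) -> str:
    # Standard authorization-code exchange parameters. In Sign in with
    # Apple, client_secret is itself a signed JWT, so this request
    # authenticates the relying party, not just the end user.
    return urlencode({
        "grant_type": "authorization_code",
        "code": code,
        "client_id": client_id,
        "client_secret": client_secret_jwt,
        "redirect_uri": redirect_uri,
    })
```

If the bug lived in an earlier, less-authenticated step (the initial authorization request that produced the signed id_token), the client authentication here would not have helped, which may explain the commenter's confusion.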

  • What level of incompetence will it take for the government to step in and create some laws around companies exposing users' private data because 'oops, we don't want to pay security experts what they're actually worth, even though we have billions sitting in bank accounts doing nothing'?

  • I'm still dreaming about a world where OpenID is the norm. Just think if Apple forced all apps to use that instead, that would be a great move for privacy and security.

    But no. Instead they make more proprietary shit without having the basic skills to do so. Then they force that shit on their users.

  • Does it rely on a service logging you in with the same email that you provide? Because normally services don’t do that. They suggest you attach the new Apple account to the old account with that email, but allowing outright login would be very bad practice.

  • I find it crazy that Apple can force devs to support Apple ID if they support a competing service. The US has gone soft on monopoly abuse. People have gotten so used to it they don't notice. Gaping holes in security are only one of the consequences.

  • (Unrelated to the Apple bug)

    Is there any bug bounty program for small businesses/apps? I only found hackerone but it seems to be only for enterprise. Is there any recommended platform for small businesses to create their own public bounty program?

  • I’m thankful for all the smart, diligent people working hard to keep us all safe.

  • Again something with sign in / log in in Apple products? Didn't we have the ridiculous empty password thingy a while back already?

    Who is implementing that stuff?

  • > For this vulnerability, I was paid $100,000 by Apple under their Apple Security Bounty program.

    Fucking hell. Even after tax, that's a substantial pay-out.

  • Glad we have people willing to disclose these vulnerabilities rather than just selling it on the black market.

  • Isn't this not a "zero-day"? Zero-day refers to when the company has no notice of an exploit.

  • “I found I could request JWTs for any Email ID from Apple and when the signature of these tokens was verified using Apple’s public key, they showed as valid.”

    What are they teaching them in computer school these days? How can you write a security function and not test it for these kinds of bugs? Unless all these accidental backdoors have a more nefarious purpose <shoosh>

  • Wow, this bug is incredibly simple but severe. I’m wondering how Bhavuk Jain found it.

  • Any word on what the fix was?

  • Since this was an extremely simple exploit, I can't help but wonder if it was a purposeful one on Apple's part.

    Apple has been spending a lot of money on a security-focused marketing campaign these past few years, and encouraging a high-price payout of $100k is sage marketing.

  • This is why it's good to run fuzzers against any public API (especially an auth API), to verify its behavior on novel inputs.

    https://en.m.wikipedia.org/wiki/Fuzzing
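A property-based fuzz check for this class of bug is tiny to write. This is a toy sketch with a made-up handler, not Apple's API: the property under test is "no request may ever mint a token for an email the session does not own."

```python
import random
import string

def mint_token(session_email: str, requested_email: str) -> dict:
    # Toy stand-in for the (patched) token endpoint: refuse foreign emails.
    if requested_email != session_email:
        return {"error": "email_mismatch"}
    return {"token_for": requested_email}

def random_email(rng: random.Random) -> str:
    local = "".join(rng.choices(string.ascii_lowercase, k=8))
    return f"{local}@example.com"

rng = random.Random(0)
# Fuzz loop: throw random and adversarial emails at the endpoint and assert
# the security property holds for every input.
for _ in range(1000):
    requested = rng.choice([random_email(rng), "attacker@evil.com", "", "a@b"])
    resp = mint_token("owner@example.com", requested)
    assert resp.get("token_for") in (None, "owner@example.com")
```

A loop like this against the original (unpatched) behavior would have failed on essentially the first iteration, which is the commenter's point about fuzzing auth endpoints before launch.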

  • where is the ptacek rant about JWT?!

  • If the bug is as simple as everyone is saying, why hasn’t it been discovered until now?

  • Easiest $100k ever made?

  • Is the dev team that wrote that line of code fired?

  • bwut my appwel dewvice is PEREFECT

  • What's amazing is that Apple gets away with claiming their computers are "secure by design." https://www.apple.com/business/docs/site/AAW_Platform_Securi...

    There's nothing inherent in their design that guarantees security.

  • WTF is a "zero-day?"

  • Why haven't they just implemented OAuth 2.0, like everyone else has done? They've tried to reinvent the wheel with their own implementation of a three-legged user authentication that doesn't add anything to what OAuth does and, surprise, they've exposed themselves to a critical vulnerability that could have been completely avoided.

  • > This bug could have resulted in a full account takeover of user accounts on that third party application irrespective of a victim having a valid Apple ID or not.

    The headline makes me think the entire problem lies with Apple, when that’s not the case.

  • > I found I could request JWTs for any Email ID from Apple and when the signature of these tokens was verified using Apple’s public key, they showed as valid. This means an attacker could forge a JWT by linking any Email ID to it and gaining access to the victim’s account.

    Great writeup there. Looks like an Apple JWT bug: the verification went through despite the token being 'signed' and 'tamperproof'. Clearly its footguns allowed this to happen; JWTs are the gift that keeps on giving to researchers.

    What did I just outline days before? [0]. Just don't use JWTs, there are already secure alternatives available.

    [0] https://news.ycombinator.com/item?id=23315026

  • Perhaps slightly related: finding Apple zero-days used to earn a smaller bounty than finding Android zero-days.

    I think we can wrap up the security and anonymity claims Apple has been making for their overpriced devices.