Non-OIDC email verification #371

Open
znewman01 opened this issue Feb 1, 2022 · 30 comments
Labels
enhancement New feature or request

Comments

@znewman01
Contributor

znewman01 commented Feb 1, 2022

Discussion in the RubyGems RFC indicated interest in a mechanism to verify emails without going through an OIDC provider.

This should be doable using a stateless email verification flow:

  1. User requests cert for non-OIDC email
  2. Fulcio sends email to that address containing token
  3. User clicks link/sends request with token
  4. Fulcio issues cert

However, there are a few details to address (a rough sketch of the token follows the list below):

  • "email forwarding attacks"
  • details of that flow
  • formal threat model/analysis
  • distinguishing between "[email protected]" verified by Google vs. verified by email
  • allow users to manage their own key pairs (maybe out-of-scope?)
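For concreteness, here is a minimal sketch of what the token emailed in step 2 might contain, assuming a stateless HMAC-based design; the type, field names, and helpers below are purely illustrative and not an existing Fulcio API:

```go
package emailchallenge

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"encoding/json"
	"time"
)

// challenge is the payload Fulcio could embed in the emailed token.
// All field names are hypothetical; nothing here is an existing Fulcio type.
type challenge struct {
	Email     string `json:"email"`      // address being verified
	PublicKey []byte `json:"public_key"` // requester's ephemeral public key (DER)
	Nonce     string `json:"nonce"`      // client-generated randomness, kept locally by the CLI
	Expiry    int64  `json:"expiry"`     // unix time after which the token is rejected
}

// issueToken builds the token that would be emailed to the user: the JSON
// payload plus an HMAC under a server-side secret, so the server can verify
// it later without storing any state.
func issueToken(secret []byte, c challenge) (string, error) {
	payload, err := json.Marshal(c)
	if err != nil {
		return "", err
	}
	mac := hmac.New(sha256.New, secret)
	mac.Write(payload)
	token := append(payload, mac.Sum(nil)...)
	return base64.RawURLEncoding.EncodeToString(token), nil
}

// example: a token valid for 5 minutes, bound to the key from the initial request.
func example(secret, pubKeyDER []byte) (string, error) {
	return issueToken(secret, challenge{
		Email:     "user@example.com",
		PublicKey: pubKeyDER,
		Nonce:     "randomness-held-by-the-cli",
		Expiry:    time.Now().Add(5 * time.Minute).Unix(),
	})
}
```

Binding the requester's public key and a client-held nonce into the token is what the discussion of replay and CSRF below relies on.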
@znewman01 znewman01 added the enhancement New feature or request label Feb 1, 2022
@znewman01
Contributor Author

With OIDC flows, the Fulcio cert attests (roughly) "I signed this cert for someone who successfully completed an OIDC login with XXX provider and YYY account."

With email, what are we attesting? There are a couple of options here:

  1. "I signed this cert for someone who saw an email I sent to XXX address"
  2. "I signed this cert for someone who initiated a request and saw an email I sent to XXX address"
  3. "I signed this cert for someone who can send emails from and receive emails to this address"

Email is not known for being super secure, so we have to be careful. (1) allows "drive-by" attacks -- if I'm monitoring a bunch of emails, I can use any tokens that I see. (2) requires the user to have some local state, but prevents "drive-by" attacks -- though if someone can read your email (e.g., if they set up an auto-forward rule), they can definitely still get a cert. (3) prevents auto-forward attacks but the workflow might be a little awkward.

@haydentherapper
Contributor

Also consider how OAuth/OIDC mitigates attacks around replay and CSRF. Can email mitigate these?

@znewman01
Contributor Author

znewman01 commented Feb 1, 2022

Also consider how OAuth/OIDC mitigates attacks around replay and CSRF. Can email mitigate these?

  • Replay: standard practice is to encode the current time in the token that gets emailed to the user, so a token could only be replayed within a short window (say, 5 minutes). This seems okay, since a replay would just get you a reissue of exactly the same cert.
  • CSRF: I think the "local state" bit helps here (i.e., the flow would be: the CLI generates randomness + an ephemeral key -> triggers the email -> the token from the email is fed back to the CLI, which needs the randomness to successfully complete the flow)

In both cases it's important to include the key in the initial request to Fulcio, so that it gets bound into the emailed token => the damage a replay-type attack could do is limited.
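To make that concrete, here is a rough server-side verification sketch under the same illustrative assumptions as the token sketch above (it reuses the challenge type from that sketch and additionally imports "bytes" and "errors"): the expiry check bounds replay, the nonce check ties the token to the request the CLI initiated, and the key check means a replayed token can only re-issue the same cert.

```go
// verifyToken is the server-side counterpart to issueToken above; purely
// illustrative, not an existing Fulcio function.
func verifyToken(secret []byte, encoded, cliNonce string, certReqKey []byte) (*challenge, error) {
	raw, err := base64.RawURLEncoding.DecodeString(encoded)
	if err != nil || len(raw) <= sha256.Size {
		return nil, errors.New("malformed token")
	}
	payload, tag := raw[:len(raw)-sha256.Size], raw[len(raw)-sha256.Size:]

	// Check the HMAC before trusting any field of the payload.
	mac := hmac.New(sha256.New, secret)
	mac.Write(payload)
	if !hmac.Equal(tag, mac.Sum(nil)) {
		return nil, errors.New("bad token signature")
	}

	var c challenge
	if err := json.Unmarshal(payload, &c); err != nil {
		return nil, err
	}
	switch {
	case time.Now().Unix() > c.Expiry:
		return nil, errors.New("token expired") // replay window has closed
	case c.Nonce != cliNonce:
		return nil, errors.New("nonce mismatch") // request was not initiated by this client
	case !bytes.Equal(c.PublicKey, certReqKey):
		return nil, errors.New("key mismatch") // cert would not match the original request
	}
	return &c, nil
}
```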

Are those arguments convincing?

@rochlefebvre

I don't quite get option 3. That said, option 2 sounds good and doable.

IMO, one of the killer features of the OIDC flow in the RubyGems RFC is keeping the private key in memory while we acquire a cert and then sign the gem. Earlier you mentioned having the CLI send a random token + an ephemeral key. I assume you meant the public key?

Here's some stuff I just made up:

$ gem signatures --sign --email-address the_email_address foo.gem
< generate private_key, public_key, and email_challenge >
< POST sigstore.dev/.../verifyEmail
  { public_key, email_challenge } >
A challenge response was sent to the_email_address. Please enter the response code below.
Challenge response code:  < user pastes challenge_response from the email >
< POST fulcio.sigstore.dev/api/v1/signingCert
  Authorization: "Bearer challenge_response"
  { public_key, sign(the_email_address) } >

The Authorization bit is pure conjecture. It would be nice if the signingCert request was similar to the typical OIDC one, but it's not too important.

We could split up the flow into two CLI commands: one to start the challenge, one to resume signing using the challenge response. That would entail persisting the key pair between the two commands.

@haydentherapper
Contributor

One thing that's worth bringing up is that this mode does not support automation. A benefit of OIDC that's been discussed in the RFC is automated signing without the need for a browser, by leveraging refresh tokens. I'm not convinced that email verification without OIDC adds value.

@haydentherapper
Contributor

I'm seeing that the primary concern around OIDC pointed out in the RFC discussion is that it "vendorizes" the feature. While I recognize this concern, I assume that almost all developers have an account for at least Github. Over time, we can also add additional identity providers as needed.

Thinking more about implementing email verification when it comes to securely designing the feature, we're going to end up effectively reimplementing OIDC/OAuth with respect to the necessary security considerations. I'd greatly prefer to use a vetted library for OIDC rather than implement our own. If we get something wrong, and someone can impersonate another user, it compromises the ecosystem.

It was also brought up that users could decide via a policy whether or not to trust OIDC vs email-based verification. This is asking a lot of the user to know the differences between these. I'd prefer to not have to present this decision to the user. If we're considering email-based verification to be less vetted/secure, we shouldn't be offering it. Optionality is valuable, but not at the risk of getting security wrong, especially for such a core part of the ecosystem.

In my opinion, the security benefits of OIDC far outweigh the overhead of needing an account on a major website.

@rochlefebvre

One thing that's worth bringing up is that this mode does not support automation. A benefit of OIDC that's been discussed in the RFC is automated signing without the need for a browser, by leveraging refresh tokens. I'm not convinced that email verification without OIDC adds value.

Yes, I'd like to keep the refresh token-like flow on the table as well. I'm still trying to understand the base flow in Option 2.

What if the email response contains half of a bearer token? The other half is the aforementioned CLI local state. An attacker would need to intercept both the email and the ephemeral secret in order to acquire a certificate. If the code signing key pair plays no part in negotiating a bearer token, the pair may then be generated later on as part of the certificate request + gem signing step.

$ gem signatures --email [email protected]
< generate random challenge_code >
< POST sigstore.dev/.../verifyEmail
  { "[email protected]", challenge_code } >
Your email challenge code is: #{challenge_code}
A challenge response was sent to [email protected]. Provide both the --email-challenge-code and the --email-challenge-response to sign your gem.

The user reads the one-time challenge response from their email. Some time within the next hour, they sign the gem by providing both codes (either as part of gem build or gem signatures --sign).

$ gem build --email-challenge-code challenge_code --email-challenge-response challenge_response foo.gem
< generate private/public key pair for gem signing >
< POST fulcio.sigstore.dev/api/v1/signingCert
  Authorization: "Bearer challenge_code:challenge_response"
  { public_key, sign(the_email_address) } >
< gem signing and signature publication proceeds as normal >

In that second command, the email address needs to come from somewhere. I'm thinking it's just an --email [email protected] parameter, whose value needs to match the challenge_response.
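A rough sketch of how the server side of this split could work, assuming the emailed challenge_response is derived from the locally held challenge_code (the derivation and names below are made up for illustration, not part of any existing Fulcio or RubyGems API); the point is that neither half alone yields a valid bearer token:

```go
package splittoken

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"strings"
	"time"
)

// deriveResponse is what the server would compute and email to the user: an
// HMAC over the address, the client's locally held challenge_code, and an
// expiry. Hypothetical derivation, for illustration only.
func deriveResponse(secret []byte, email, challengeCode string, expiry int64) string {
	mac := hmac.New(sha256.New, secret)
	fmt.Fprintf(mac, "%s|%s|%d", email, challengeCode, expiry)
	return base64.RawURLEncoding.EncodeToString(mac.Sum(nil))
}

// checkBearer validates "challenge_code:challenge_response" from the
// signingCert request. Someone who only read the email has the response but
// not the code; someone who only saw the terminal has the code but not the
// response. (In practice the expiry would also need to travel with the token.)
func checkBearer(secret []byte, email, bearer string, expiry int64) bool {
	code, response, ok := strings.Cut(bearer, ":")
	if !ok || time.Now().Unix() > expiry {
		return false
	}
	expected := deriveResponse(secret, email, code, expiry)
	return hmac.Equal([]byte(response), []byte(expected)) // constant-time compare
}
```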

@rochlefebvre

I'm seeing that the primary concern around OIDC pointed out in the RFC discussion is that it "vendorizes" the feature. While I recognize this concern, I assume that almost all developers have an account for at least Github. Over time, we can also add additional identity providers as needed.

In my opinion, the security benefits of OIDC far outweigh the overhead of needing an account on a major website.

I have the same opinion on both points. Package ecosystems need to stop prioritizing package maintainer comfort and ideology at the expense of application security. We need meaningful security guarantees as the result of erecting all these barriers.

We should be willing to lose a small fraction of the die-hard "muhfreedums" gem maintainers out there.

How confident are we that most gatekeepers from most ecosystems will feel the same way, and accept that OIDC is the only way to verify that "Person" has access to [email protected]? We're already seeing that very reasonable question pop up from a maintainer in the first ecosystem. Assuming that maintainers are genuinely concerned with offering alternate email verification flows (and not just raising objections as an excuse for maintaining status quo), then we should be exploring that.

Thinking more about implementing email verification when it comes to securely designing the feature, we're going to end up effectively reimplementing OIDC/OAuth with respect to the necessary security considerations. I'd greatly prefer to use a vetted library for OIDC rather than implement our own. If we get something wrong, and someone can impersonate another user, it compromises the ecosystem.

That's true. Case in point: the example I posted in #371 (comment) feels like a poor man's PKCE. We can't have gem signing devolve into security theatre.

It was also brought up that users could decide via a policy whether or not to trust OIDC vs email-based verification. This is asking a lot of the user to know the differences between these. I'd prefer to not have to present this decision to the user. If we're considering email-based verification to be less vetted/secure, we shouldn't be offering it. Optionality is valuable, but not at the risk of getting security wrong, especially for such a core part of the ecosystem.

I completely agree. I don't see that being a factor in any verification policy.

@di
Member

di commented Feb 2, 2022

I think providing non-OIDC email verification for arbitrary domains in general may be a bad idea, and providing it as an alternative to OIDC for package managers like RubyGems or PyPI is very likely a bad idea.

@varunsh-coder stated the issue fairly well at rubygems/rfcs#37 (comment):

Let us say there is a gem gemA which is popular. The owner of the gem signs it with their email address [email protected]. Consumers install it and the signature is validated.

Now, let us assume the repository rubygems.org is compromised. The attacker decides to release a new version. The attacker creates a gemspec file on disk, changes the email address to [email protected] (which the attacker owns). The attacker then signs this gem and makes the new version available on rubygems.org.

Consumers get the latest version, and everything looks fine. Sure, a new email address is used for signing, but as a consumer how am I to know if [email protected] is not the right email address for whoever is developing gemA. After all, their earlier email address was [email protected]...

Generally users of package indexes like RubyGems or PyPI don't have a way to determine if someone with control of a given email address actually has control of the project on the index and should be authorized to sign it. While an email address may be added to the metadata for the artifact, generally this does not equate to ownership. Additionally, most indices do not make user emails public by default.

Instead these artifacts should be signed via an OIDC identity token generated by some authorization service for the index, where the identity corresponds to an identity on the index (e.g. https://rubygems.org/profiles/username or [email protected]). And if this feature eventually exists, indexes should reject signatures not signed with these identities.

That said, I am generally interested in this feature. For example, it would be nice for a Python release manager to be able to sign a CPython release with their [email protected] email, without needing to stand up an OAuth server on python.org. This is different than the "package index" case, because consumers can assume that python.org controls both the artifacts hosted on it, and the emails on its domain, and thus a signature from [email protected] can be trusted.

@jchestershopify

jchestershopify commented Feb 2, 2022

Instead these artifacts should be signed via an OIDC identity token generated by some authorization service for the index, where the identity corresponds to an identity on the index (e.g. https://rubygems.org/profiles/username or [email protected]). And if this feature eventually exists, indexes should reject signatures not signed with these identities.

For the rubygems RFC we considered something like this at signing time but rejected it for two reasons. The first is that it would require rubygems.org to operate and secure an OAuth/OIDC infrastructure, which is a significant burden and risk in itself. The major IdPs are extremely well-resourced. Rubygems.org is not. The second is that account takeover would be a complete compromise: controlling the rubygems.org account would let you sign without proving an independent but cross-referenced identity.

I would add that the story can change at push time. I would expect that entries published by rubygems.org (or PyPI et al) would include information about the rubygems.org user.

@di
Member

di commented Feb 2, 2022

The first is that it would require rubygems.org to operate and secure an OAuth/OIDC infrastructure, which is a significant burden and risk in itself. The major IdPs are extremely well-resourced. Rubygems.org is not.

I can definitely empathize with being an under-resourced package index maintainer. 🙂 I think if this is the main blocker, we can probably find the necessary resources, either via the OpenSSF, Google, etc.

The issue I have with using a major IdP is that while they're good at providing their _own_ identities, there's (generally) no verifiable link here between their identity and an identity on rubygems/PyPI/etc. Additionally, this has the effect of requiring the signer to expose their IdP identity (e.g. their @gmail.com address) if this was not public before, which I think would be a dealbreaker for many maintainers.

The second disadvantage would mean that account takeover would be a complete compromise: controlling the rubygems.org account would let you sign without proving an independent but cross-referenced identity.

I'm assuming an account takeover would also give the attacker the ability to add/remove emails from the account or change authorized contributors to the project. How would such a takeover look different to the end user than the legitimate maintainer changing their signing email address? How is an end user supposed to determine what emails are trusted in the event of such a change?

@znewman01
Contributor Author

It sounds like there's agreement that email validation provides a lower level of assurance than OIDC, and as such we should strongly encourage downstream users like RubyGems and PyPI and their clients to require OIDC identities on signed packages. There's a long list of certified OpenID providers, including academic institutions, governments, and open-source consortia so hopefully at least one is both reputable enough for package managers to trust and sufficiently non-corporate to not be considered "vendoring" (and if not, I bet a privacy-forward NGO could make one).

Longer-term, it seems like it might be useful to provide this feature in Fulcio, but have clients ignore email-validated certs by default. Then, in certain circumstances, these clients could turn on support for email validation.

I'll defer discussion of technical details for now but yes, it seems like it would require a lot of thought and at the end of the day we'd only be "as secure as email itself." However, we should note that "log in with email" or password reset via email are quite popular and provide a limited but nonzero degree of assurance.

Addressing some miscellaneous points:

I'd prefer to not have to present this decision to the user.

Agreed 100% for end-users. I think it's reasonable to give direct users (e.g., package repos or CLI tools) a little bit of freedom here to opt-in to particular providers.

This is different than the "package index" case, because consumers can assume that python.org controls both the artifacts hosted on it, and the emails on its domain, and thus a signature from [email protected] can be trusted.

Stay tuned here. We're quite interested in allowing domain owners to specify a canonical way to authorize their users.

[paraphrasing: if we have the package index as the identity provider, it becomes a single point of failure]

This is also a real problem. I know longer-term RubyGems has tossed out the idea of implementing some of the principles of TUF+transparency logs, which would help with the map of "package name -> trusted owners".

@haydentherapper
Contributor

For a long-term solution, I'd love to explore privacy-conscious approaches that don't require users to expose their emails. I don't know what this might look like, but it's a very exciting area to explore.

I originally strongly believed that package managers should not stand up their own IDPs, due to the maintenance burden and security implications. I've walked that back a bit now. From community feedback, I see the benefit of package managers having their own IDP, since it provides that link. However, I also believe we shouldn't ask every package manager to maintain an IDP.

Hopefully we can convince communities to either:

  • Stand up their own IDP, if they understand the cost of doing so
  • Rely on public IDPs

@dlorenc
Member

dlorenc commented Feb 3, 2022

I think there's one more choice we could add to the mix if we want to support "run my own email server and domain" use cases: namespaced IdPs.

Right now we allow Google to mint tokens for any email address and trust that they do it properly. We could allow self-hosted OIDC discovery endpoints to mint tokens for that specific namespace. We support something like this for SPIFFE today.

A user could run their own email server for foo.com and an OIDC server at foo.com/.well-known...

Then the blast radius is fairly limited.
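For illustration, here is a hedged sketch of what accepting such a namespaced issuer could look like using the coreos/go-oidc library: discover the issuer hosted under the email's own domain, verify the ID token against it, and only accept identities scoped to that domain. The discovery-URL convention and claim handling here are assumptions, not Fulcio's actual configuration:

```go
package scopedidp

import (
	"context"
	"fmt"
	"strings"

	"github.com/coreos/go-oidc/v3/oidc"
)

// verifyScopedToken trusts the OIDC issuer hosted at https://<domain> only for
// identities under that same domain, so a misbehaving or compromised
// self-hosted IdP has a limited blast radius. Illustrative only.
func verifyScopedToken(ctx context.Context, domain, clientID, rawIDToken string) (string, error) {
	issuer := "https://" + domain // discovery doc expected at /.well-known/openid-configuration
	provider, err := oidc.NewProvider(ctx, issuer)
	if err != nil {
		return "", fmt.Errorf("discovery failed: %w", err)
	}
	verifier := provider.Verifier(&oidc.Config{ClientID: clientID})
	idToken, err := verifier.Verify(ctx, rawIDToken)
	if err != nil {
		return "", fmt.Errorf("token rejected: %w", err)
	}

	var claims struct {
		Email         string `json:"email"`
		EmailVerified bool   `json:"email_verified"`
	}
	if err := idToken.Claims(&claims); err != nil {
		return "", err
	}
	// Only accept identities in the issuer's own namespace.
	if !claims.EmailVerified || !strings.HasSuffix(claims.Email, "@"+domain) {
		return "", fmt.Errorf("identity %q is outside the %s namespace", claims.Email, domain)
	}
	return claims.Email, nil
}
```

With that constraint, a compromised or misconfigured foo.com issuer can only mint identities under @foo.com, which is the limited blast radius described above.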

@haydentherapper
Contributor

I like it! That would definitely minimize the risk of accepting a misconfigured IDP.

Also this comes with the benefit of being able to create automation around adding scoped IDPs to Fulcio through ACME.

@jchestershopify

jchestershopify commented Feb 3, 2022

This thread is giving me a lot of food for thought.

One thing I'm confident in, though -- I'm wary of having multiple flavours of IdPs: 1st party (run your own), 2nd party (Rubygems runs one) or 3rd party (Google et al). I think we should pick just one option, because the burden of confusion will be too high otherwise.

@varunsh-coder

This thread is giving me a lot of food for thought.

One thing I'm confident in, though -- I'm wary of having multiple flavours of IdPs: 1st party (run your own), 2nd party (Rubygems runs one) or 3rd party (Google et al). I think we should pick just one option, because the burden of confusion will be too high otherwise.

@jchestershopify how do these 1st party/ 2nd party/ 3rd party options equate to what exists for the Apple App Store or the Google Play Store? Would you classify those as 2nd party? I was told recently that the Maven repository requires signing (have not confirmed it myself). Would that be 1st party? From a burden of confusion perspective, one could learn from other ecosystems where signing is already being done...

@jchestershopify

jchestershopify commented Feb 3, 2022

To be clear, I'm using 1st/2nd/3rd party to refer to who provides an assertion of identity as an input to signing, not to who signs. I still see the signing operation as being performed by the 1st party.

@simi

simi commented Feb 23, 2022

I tried to follow the technical part of the discussion, and I don't understand a (probably simple) part of the security concerns regarding email verification.

When my email inbox gets compromised and I don't have 2FA enabled on GitHub, an attacker can use the forgotten-password flow to reach my GitHub account and complete the GitHub verification as well. What's the difference? Is the problem in the process of handing over the token via email? Is that part considered insecure? Wouldn't an attacker be able to set up a MITM for GitHub verification as well?

rubygems.org is currently based on email identifiers. Every user has one unique email linked to the account. Gem owners (= push access to a gem namespace, i.e. the ability to release new versions) are added to gems based on emails. Internally everything is based on user IDs, which means you can change the email on your account without losing ownership of your gems. It is super simple and it works well. But indeed it relies on email access. Recently MFA was added, but it is not widely adopted yet. If email verification is not the way to go here, we can try to think about an alternative solution.

Historically, rubygems.org was also an OAuth provider for internal needs. It was removed since the projects using that feature died (rubygems/rubygems.org#1435). I can take a look at what would be needed to make rubygems.org an OIDC provider. Feel free to ping me if that's worth spending time on. Maybe that can be the way for the future. It should be possible to provide this feature only for users with MFA enabled.

Instead these artifacts should be signed via an OIDC identity token generated by some authorization service for the index, where the identity corresponds to an identity on the index (e.g. https://rubygems.org/profiles/username or [email protected]). And if this feature eventually exists, indexes should reject signatures not signed with these identities.

This sounds like the best of both worlds, since it can be managed outside of rubygems.org and indeed could be shared across multiple package indexes within one service. Are there any initial ideas about who would be able to run this kind of service? To make it trustworthy, it would be great to keep it open and under the control of maintainers from the related package indexes (like rubygems.org, PyPI, ...).

Personally (speaking for myself) I'm OK with launching the first phase of gem signing (opt-in) with the given 3 vendors only. But for the final phase (opt-out on signing and verification) I think there should already be an alternative solution prepared.


I'm seeing that the primary concern around OIDC pointed out in the RFC discussion is that it "vendorizes" the feature. While I recognize this concern, I assume that almost all developers have an account for at least Github. Over time, we can also add additional identity providers as needed.
In my opinion, the security benefits of OIDC far outweigh the overhead of needing an account on a major website.

I have the same opinion on both points. Package ecosystems need to stop prioritizing package maintainer comfort and ideology at the expense of application security. We need meaningful security guarantees as the result of erecting all these barriers.

We should be willing to lose a small fraction of the die-hard "muhfreedums" gem maintainers out there.

How confident are we that most gatekeepers from most ecosystems will feel the same way, and accept that OIDC is the only way to verify that "Person" has access to [email protected]? We're already seeing that very reasonable question pop up from a maintainer in the first ecosystem. Assuming that maintainers are genuinely concerned with offering alternate email verification flows (and not just raising objections as an excuse for maintaining status quo), then we should be exploring that.

😞

I'm not sure this was intended to be written that way, but I feel really insulted by those paragraphs, since I was the one who raised those initial comments on the RubyGems.org side in the RFC. I just shared my experience. In particular, the part indirectly accusing me of raising objections as an excuse for maintaining the status quo is unfair in my eyes. The part about being willing to lose a small fraction of the die-hard... is not super friendly either.

I'm not sure what your experience with OSS is, @rochlefebvre, but there is much more out there than GitHub even today. Personally, I have experience with multiple projects that avoid any kind of vendor lock-in at any cost, like Linux kernel development or PostgreSQL development. There is nothing wrong with doing that, and an open source community should not be penalising people who decide to do so; otherwise it is not "open". 🙏

@rochlefebvre

Hi @simi. I apologize for my comment. I'm sorry that I hurt you and others.

I'm grateful for your comments so far.

@znewman01
Contributor Author

Thanks for the feedback, @simi! You make some really good points.

When my email inbox gets compromised and I don't have 2FA enabled on GitHub [emphasis mine], an attacker can use the forgotten-password flow to reach my GitHub account and complete the GitHub verification as well. What's the difference?

I think that's a pretty critical caveat. OIDC allows clients to request information about authorization methods, including 2FA. Further, password reset processes can be more involved than just "click this link in an email" (requiring human intervention, security questions, SMS recovery, etc.).

You're right that if we used RubyGems for identity we could require 2FA, which is great. However I do think there's a big advantage to diversity here, where RubyGems itself should not be a single point of failure, so we'd need a 3rd party for authentication. If that 3p is Fulcio, we don't have support for 2FA (nor is it in-scope).

I guess in general I'm not very confident in email security, partly because it's really easy to misconfigure email on a particular domain. I think real OIDC providers do offer a concrete security benefit over email. I'd really like to see a privacy-conscious, full-featured OIDC provider stood up by an organization that has a lot of credibility in the OSS community; IMO that's the best way out here.

Instead these artifacts should be signed via an OIDC identity token generated by some authorization service for the index, where the identity corresponds to an identity on the index (e.g. https://rubygems.org/profiles/username or [email protected]). And if this feature eventually exists, indexes should reject signatures not signed with these identities.

This sounds like the best of both worlds, since it can be managed outside of rubygems.org and indeed could be shared across multiple package indexes within one service. Are there any initial ideas about who would be able to run this kind of service? To make it trustworthy, it would be great to keep it open and under the control of maintainers from the related package indexes (like rubygems.org, PyPI, ...).

Stay tuned :) this is something we're very interested in enabling. I believe that TUF, along with transparency logs, gets you most of the way there, allowing the delegation of package ownership to a specific identity in a tamper-proof and publicly auditable way.

@rochlefebvre

Historically, rubygems.org was also an OAuth provider for internal needs. It was removed since the projects using that feature died (rubygems/rubygems.org#1435). I can take a look at what would be needed to make rubygems.org an OIDC provider. Feel free to ping me if that's worth spending time on. Maybe that can be the way for the future. It should be possible to provide this feature only for users with MFA enabled.

I have also looked at supporting OAuth & OpenID Connect in rubygems.org. Here is a PR that explores the topic: Shopify/rubygems.org#12. There's the question of what the subject in the code signing certificate should look like, if we're ditching emails. I shared some thoughts in the RFC here.

As you said, MFA on the rubygems.org account would be a critical prerequisite for this scheme. Even then, if rubygems is the IdP, in the event that someone's rubygems.org account is compromised, gem signing affords us no additional safeguards nor means of detection.

The idea of having an independent IdP for multiple package indexes is worth pursuing first. If that leads nowhere, then rubygems.org as an IdP is a workable solution.

Personally (speaking for myself) I'm OK with launching the first phase of gem signing (opt-in) with the given 3 vendors only. But for the final phase (opt-out on signing and verification) I think there should already be an alternative solution prepared.

I agree. Vendor lock-in is anathema to OSS. We might start the work using only established identity providers, but it's not sufficient for Phase 2 and beyond.

As time goes by, I'm less inclined to sign code using an email address, especially if it's posted into a public, immutable transparency log. We still need some kind of stable identifier, but it can be a profile URI or domain-scoped username. @haydentherapper and @di have started looking at this here.

@jchestershopify

This sounds like the best of both worlds, since it can be managed outside of rubygems.org and indeed could be shared across multiple package indexes within one service. Are there any initial ideas about who would be able to run this kind of service? To make it trustworthy, it would be great to keep it open and under the control of maintainers from the related package indexes (like rubygems.org, PyPI, ...).

Spookily I was thinking about exactly this yesterday and even talked to @rochlefebvre about it.

I would prefer such an independent / neutral IdP over having rubygems as an IdP. It would save a lot of administrative overhead and retain the extra security of an independent account (as we have with vendor accounts). I think I would still keep the vendor options for folks who are comfortable with them, but having a neutral fallback is definitely ideal.

In terms of who could operate such a service, I would suggest that the OpenSSF is a natural home (assuming they agreed, of course). It's up and running, with tons of members and solid funding. It's already home to the sigstore project and looks like it will be a home for cooperation across source package ecosystems as well. We could apply for the same kind of "special project" status that sigstore has and set up governance so that source package ecosystems hold leadership.

Personally (speaking for myself) I'm OK with launching the first phase of gem signing (opt-in) with the given 3 vendors only. But for the final phase (opt-out on signing and verification) I think there should already be an alternative solution prepared.

Agreed.

I think that in terms of the rubygems RFC, this would mean that phase 1 can launch with vendors as the available options, but phases 2 and 3 would be gated on having the neutral fallback provider. That pushes out the timeline a fair amount, but I think it would be a good tradeoff to ensure everyone's interests are best served.

@haydentherapper
Contributor

I would prefer such an independent / neutral IdP over having rubygems as an IdP. It would save a lot of administrative overhead and retain the extra security of an independent account (as we have with vendor accounts). I think I would still keep the vendor options for folks who are comfortable with them, but having a neutral fallback is definitely ideal. In terms of who could operate such a service, I would suggest that the OpenSSF is a natural home...

I'd prefer we don't create another universal identity store and provider. We discussed this a bit in yesterday's community meeting, but it's important that there's some link between an artifact and a signer that can be clearly defined via a policy. Currently, this is solved by email addresses, where it's expected that there's some public way to link the signer to the email. For ecosystems that want to run their own IdPs, the link is far stronger -- for example, the identity of the signer (such as a user ID) should be publicly associated with the artifact (such as part of the metadata of the artifact), and then it's trivial to create verification policies that enforce that an artifact's maintainer was the signer of the artifact.

Creating another third-party IdP is no better than email. You still need some way to publicly link a signer to their third-party identity. Users must sign up for another service and not lose their credentials. We need to consider 2FA - Email providers already support this, and there's active work in many of the ecosystems to add 2FA support. The maintenance of an identity store in addition to an IdP will be more difficult than integrating an IdP into an ecosystem with an existing identity store.

If the primary concern is around privacy and not disclosing email addresses, this is something we're actively discussing. We've already discussed the option of ecosystems running their own IdPs. Also note you can initiate signing from GitHub Actions, which means the signature is associated with a GitHub repository and not a specific user.

If the concern is around maintenance of an IdP, then let's chat more! We should evaluate what's available and see if there are improvements we can make, whether it be improving documentation or larger changes to simplify setup and management.

@jchestershopify

The original rubygems RFC proposal already required a second account, just with a vendor IdP. And I've seen the separation into two accounts as a feature, as it means that the account that signs is not the account that pushes.

If rubygems, PyPI et al become IdPs, that separation is lost and in the case of a single account takeover, the attacker can sign anything as that user. Put another way: it means signing by the user is basically just signing by the package repository, with extra steps.

The idea of an independent IdP splits the difference. It creates a neutral IdP for folks who won't deal with vendor IdPs, but retains the separation from the package manager account. Again, I see that as a feature.

If such a separate IdP exists, then in terms of the economics it would make sense to pool resources and to defend a single point of failure. Otherwise we're losing economies of scale.

@haydentherapper
Contributor

I think it's fair to call this a feature, but to me, it's not a critical feature. I agree that it makes the attack harder, but it's also dependent on the security of each of the identity providers. For example, having two accounts without 2FA or strong passwords would be easier to compromise than a single 2FA-protected account.

Another way to look at a single universal IdP is if it gets compromised, it undermines the security of every ecosystem and all of Sigstore. This is the same with the current vendor IdPs, but I trust that those are hardened and battle tested. Having per-ecosystem IdPs minimizes the blast radius. I'm very much a fan of domain-scoped certificates issued from domain-scoped identities.

@jchestershopify

Having per-ecosystem IdPs minimizes the blast radius.

Yes, but at the tradeoff of increasing the individual probability of compromise. And to be clear, this is not intended to be for everyone. It's an escape hatch for a minority. I assume events would be published to rekor by both the package repositories ("user foobar added external link") and the neutral IdP ("rubygems.org user foobar has been claimed") that could be relied on by paranoid clients and monitors to enforce policies against unexpected accounts.

Also, the blast radius would be limited to signatures only, not allowing push/publish operations. Whereas the case where Rubygems et al act as their own IdPs, an account takeover is a complete attack. There's no additional protection given by signing in that scenario. It's actually a regression over the current mechanism.

@znewman01
Contributor Author

For example, having two accounts without 2FA or strong passwords would be easier to compromise than a single 2FA-protected account.

[...] if it gets compromised, it undermines the security of every ecosystem and all of Sigstore. This is the same with the current vendor IdPs, but I trust that those are hardened and battle tested

Hmm, it seems we have different philosophies about this. I see the tradeoff more like "one 2FA-protected account" vs. "one account with 2FA and one without". I'm really interested in allowing Fulcio's clients to set their own policies (with conservative and obvious defaults, of course!) for which identities to trust: requiring multiple identities/providers and adding/removing trusted providers based on their threat model. This allows clients to opt in to saying "yes, for my application email verification is totally fine." There's room for various types of identities:

  • domain-scoped: a canonical provider per identity: "[email protected]". I think this is a great idea and something we should work toward.
  • explicit OIDC providers: "facebook:[email protected]"
  • other: non-OIDC email falls in this camp

Fulcio's job is to say "this cert is associated with an OIDC login/an email verification flow/some other event," and consumers should decide which ones they trust. On the one hand, "empowering" clients when it comes to security is dangerous -- we know how easy it is to make bad choices. But on the other hand, different users do have different threat models. If I want Fulcio-backed signatures for JPEGs I'm displaying on a screen in my office, that's different from "I'm running these software artifacts on my $10m supercomputer cluster." I might really trust IdPs operating in country X, while someone else views them as a risk to national security. I might require a signature from someone in my specific domain, from my chosen IdP.

In this model, there's less danger in allowing new identity providers into the ecosystem: a compromised IdP matters only to those who view it as legitimate, which might be nobody! Then, what we need to be really careful about is our defaults and recommendations, which can be very conservative, and how explicitly we force API consumers to express their policies (e.g., the library interface might require you to spell out a list of trusted identity providers).
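As a sketch of what "spelling out" such a policy might look like for a verifier, here is a hypothetical client-side check against a Fulcio-issued certificate; the Policy type and its fields are made up, and the issuer-extension OID (and treating its value as a raw string) is taken from Fulcio's documentation but should be treated as an assumption here rather than a definitive API:

```go
package certpolicy

import (
	"crypto/x509"
	"encoding/asn1"
	"fmt"
	"strings"
)

// Policy is a hypothetical client-side trust policy: which Fulcio-recorded
// issuers this verifier accepts, and which identity domain it requires.
type Policy struct {
	AllowedIssuers []string // e.g. only the IdPs this client has opted in to
	RequiredDomain string   // e.g. "python.org" for domain-scoped identities
}

// fulcioIssuerOID is the X.509 extension under which Fulcio records the OIDC
// issuer; the exact OID is taken from Fulcio's docs and is an assumption here.
var fulcioIssuerOID = asn1.ObjectIdentifier{1, 3, 6, 1, 4, 1, 57264, 1, 1}

// Check enforces the policy against a leaf certificate: the recorded issuer
// must be explicitly trusted, and some SAN email must be in the required domain.
func (p Policy) Check(cert *x509.Certificate) error {
	var issuer string
	for _, ext := range cert.Extensions {
		if ext.Id.Equal(fulcioIssuerOID) {
			issuer = string(ext.Value) // assumes the value is a raw string, as in early Fulcio certs
		}
	}
	trusted := false
	for _, allowed := range p.AllowedIssuers {
		if issuer == allowed {
			trusted = true
		}
	}
	if !trusted {
		return fmt.Errorf("issuer %q is not in this client's trusted set", issuer)
	}
	for _, email := range cert.EmailAddresses {
		if strings.HasSuffix(email, "@"+p.RequiredDomain) {
			return nil
		}
	}
	return fmt.Errorf("no SAN email in required domain %q", p.RequiredDomain)
}
```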

I don't think either of us is "right" here; it's just a matter of opinion. And I don't even necessarily think that this implies we should proceed with support for non-OIDC email verification. But perhaps it's worth clarifying our end goal, as "one canonical IdP per email" (possibly a great default!) is a slightly different world from "a Facebook login with my @gmail.com address."

This is a really long-winded way of saying:

  1. I don't think there should be only one "universal" IdP, but a shared, non-vendor IdP would be universally quite useful in terms of Sigstore adoption (standing up your own IdP for each language/OS ecosystem isn't going to scale).
  2. I think the work in OIDC enhancement: Support for additional OIDC subjects #398 is a great idea.

@lumjjb

lumjjb commented Mar 8, 2022

Coming in late to the party here. But the discussion has been enlightening!

My understanding is that there are two groups of concerns. One stems from not trusting particular vendors/IdPs, and another is about the mechanisms -- i.e. the Fulcio + OIDC mechanism is insufficient/doesn't cater to my use case/etc.

Disclaimer: fairly new to Fulcio details, so please correct me if I'm wrong

My understanding of the current model of Fulcio is that it is rather "flat", in the sense that I trust the Fulcio server (and its root keys), and trust that it performs the necessary checks. The discussion seems to revolve around the idea that this trust model is not the same across IdPs and across different mechanisms -- each (mechanism, parameters) tuple should be able to be discerned by a policy. I'm thinking that an aspect to explore could be to express this at the key level.

I've had a brief chat with @SantiagoTorres about this, and he suggested that TUF can be used to help govern such a mechanism through the use of trust delegation, where the Fulcio server would only act as the timestamping authority.

One thought is to create trust-delegatable Fulcio namespaces. The mechanisms of authentication could act as plugins to Fulcio (or a call to an external entity), since there will probably be scenarios where I don't necessarily trust the Fulcio server beyond acting as a timestamping authority.

@znewman01
Contributor Author

Another reference: RFC 8823 proposes "ACME email challenges" for end-user S/MIME certs. No support for 2FA and I don't see a plan for timestamping/expiry.
