Implement GPG signing of the binary build artifacts #320
Conversation
Thank you for creating a pull request! Please check out the information below if you have not made a pull request here before (or if you need a reminder how things work).

Code Quality and Contributing Guidelines: If you have not done so already, please familiarise yourself with our Contributing Guidelines and Code Of Conduct, even if you have contributed before.

Tests: GitHub Actions will run a set of jobs against your PR that will lint and unit test your changes. Keep an eye out for the results from these on the latest commit you submitted. For more information, please see our testing documentation. In order to run the advanced pipeline tests (executing a set of mock pipelines), I require an admin to post
LGTM - do we need to keep this in draft so we execute the right chain of events?
No, this can safely go in without the other bits being present. It just won't put the artifacts anywhere other than in jenkins at the moment.
Sign the hashcode files - that's unusual, isn't it?
Not really - here are a couple of examples where it's the hash that is verified by the signature rather than the main download.
The GPG verification process will be more CPU intensive than the SHA sum (and our files are quite big, but I appreciate that on modern hardware it likely isn't too much of an overhead), and this way you're getting some guarantee that both are intact without having to clog up the list of artifacts we have available to download by doubling the number of GPG signature files. Obviously either can technically be done (I initially prototyped doing this with the full archive file, but switched over after a recommendation from Eclipse).
Like you, I'm not too concerned about the CPU load for signing the full artefacts. I hadn't come across people generally signing the hash as a proxy for the full artefact, so I'm wondering why that would be preferred.

In my mind, the hash is a statement of fact - the bytes I (Adoptium) have can be described by this succinct and very-likely-to-be-unique value (sha256), and you (end user) can compute the same thing to check that the file you have matches mine. Whereas the signature is an authentication check that you (end user) can verify based upon a secret I (Adoptium) have, to ensure the bytes are the ones I declare to have released. Signing the hashcode is akin to "authenticating a summary" of the actual artefact, not the artefact itself. Given we can easily verify the binary itself, why not do that?

Again, in a practical sense I guess there is little difference with this extra layer being introduced, but I've not seen it done before. I'm reading a few of the best practices and guidelines for code signing, and they don't refer to signing the hash, only signing the code. Maybe we can seek input from folks more experienced in this area as I don't want to derail our SSDF initiative.
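For reference, the "signed checksum" flow being debated above looks roughly like this from the end user's side. This is only a sketch: the filenames are hypothetical, and the GPG step is shown commented out because it needs the publisher's public key imported locally.

```shell
# Step 1 (needs the publisher's public key, so shown but not run here):
#   gpg --verify artefact.tar.gz.sha256.txt.sig artefact.tar.gz.sha256.txt
#
# Step 2: with the small checksum file authenticated, the (much larger)
# archive is checked with an ordinary hashing pass. Demo stand-ins below:
printf 'demo payload' > artefact.tar.gz
sha256sum artefact.tar.gz > artefact.tar.gz.sha256.txt
sha256sum -c artefact.tar.gz.sha256.txt
```

The trade-off discussed above is visible here: only the small text file is signature-verified, while the big download is covered by the cheaper checksum pass.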
@tellison FYI The artifacts that get signed aren't affected by this PR. I've adjusted the jenkins job that does the work to pick up all the tar.gz/zip/msi/pkg files instead of the SHAsums and modified this PR to copy back all of the [EDIT: Adjusted again to do the signing of the binary artefacts in the platform build pipelines after they've been built]
The way I've always done it for IcedTea and OpenJDK releases is to sign the binary itself, and then compute the hashsums of both the binary and the signature. The signature is a check that the binary is one published by someone with access to the corresponding private key, and the hashsums are a check that both files have downloaded correctly. However, I can also see that, assuming no hash collisions, the binary and its hashsum are equivalents; only that binary can produce that hashsum. The advantage of signing the binary is there is no obligation to download the hashsum to run signature verification. The advantage of signing the hashsum is in computation time, as mentioned. I have no issue with just leaving it open to picking up signatures for whatever file or files they correspond to.
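The sign-the-binary flow described above can be sketched as follows. Again the filenames and key ID are hypothetical, and the signing step is commented out because it needs a private key:

```shell
# Signing step (needs the release private key, shown but not run here):
#   gpg --armor --detach-sign --local-user <RELEASE_KEY_ID> artefact.tar.gz
# which produces artefact.tar.gz.asc.
#
# The checksum file then covers both the binary and its signature, so one
# checksum pass confirms the pair downloaded intact. Demo stand-ins below:
printf 'demo payload'   > artefact.tar.gz
printf 'demo signature' > artefact.tar.gz.asc   # stand-in for the real .asc
sha256sum artefact.tar.gz artefact.tar.gz.asc > SHA256SUMS
sha256sum -c SHA256SUMS
```

Note the point made above: with this layout a user can run `gpg --verify artefact.tar.gz.asc artefact.tar.gz` without ever downloading the checksum file.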
@AdamBrousseau Does OpenJ9 use the signing options in these pipelines and would therefore be affected by this unless an additional guard was put in place around it? |
Signed-off-by: Stewart X Addison <[email protected]>
lgtm
@sxa will let you merge at a good time to monitor it.
Yeah, I'm holding off pending verification that it can be done without breaking OpenJ9.
Confirmed with OpenJ9 that it will impact them but we have a mitigation and I'll aim to come back to this alongside all the other items in the description. |
First stage in solving adoptium/temurin-build#1275

Will require: `master` nodes in jenkins (Hasn't been a problem ... so far!)

But this is the first step and is a prerequisite for the others. Note that this functionality will always be executed and is not currently gated by the `enableSigner` checkbox on the pipeline jobs. That can be added in a subsequent PR if desired.

This has been successfully trialled on the pipeline jobs.
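If gating on the checkbox is added in a follow-up PR, it might look something like the fragment below in the pipeline code. This is purely a hypothetical sketch: the `enableSigner` parameter name comes from the job checkbox mentioned above, but the stage name, paths, and signing command are assumptions, not the actual job DSL.

```groovy
// Hypothetical sketch only; stage name, glob, and sign command are assumed.
if (params.enableSigner) {
    stage('GPG sign artefacts') {
        // Sign each tar.gz/zip/msi/pkg produced by the platform build.
        sh 'for f in workspace/target/*.tar.gz; do gpg --armor --detach-sign "$f"; done'
    }
}
```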