[Discussion] How should we evaluate "completeness"? #19

Open

idunbarh opened this issue Aug 13, 2024 · 4 comments
Labels
question Further information is requested

Comments

@idunbarh
Contributor

We need an automated method for evaluating the "completeness" of SBOMs that can be incorporated into a pipeline.

The following tools have quality checks:

Ideally we would be able to determine whether we're meeting all of the minimum requirements of Framing Software Component Transparency, Third Edition (DRAFT), but tooling probably doesn't support these checks yet.

Another option for measuring completeness is the BOM Maturity Model, which is being incorporated into sbomqs.
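For a sense of what sbomqs reports today, a basic invocation looks like this (sbom.json is a placeholder path):

    # prints per-section scores and an overall score for one SBOM
    sbomqs score sbom.json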

Are there other ways we could measure completeness of SBOMs today in a pipeline?

@idunbarh added the documentation (Improvements or additions to documentation) and question (Further information is requested) labels, then removed the documentation label, on Aug 13, 2024
@surendrapathak

sbomqs is working on supporting draft 4 here: interlynk-io/sbomqs#313

@idunbarh
Contributor Author

Let's proceed with sbomqs, similar to the phase 1 Docker implementation created by @vpetersson.

  Validate:
    needs: Assemble
    runs-on: ubuntu-latest
    env:
      # sbomqs release to install (example value; pin to a published release tag)
      SBOMQS_VERSION: "1.0.0"
    steps:
      - uses: actions/checkout@v4

      - name: Download SBOMs
        uses: actions/download-artifact@v4

      - name: Install sbomqs
        run: |
          curl -L -o /tmp/sbomqs \
            "https://github.com/interlynk-io/sbomqs/releases/download/v${SBOMQS_VERSION}/sbomqs-linux-amd64"
          chmod +x /tmp/sbomqs

      - name: "Display SBOM quality score through sbomqs"
        run: |
          echo '```' >> "${GITHUB_STEP_SUMMARY}"
          for SBOM in $(find . -iname '*.json'); do
            /tmp/sbomqs score "$SBOM" >> "${GITHUB_STEP_SUMMARY}"
          done
          echo '```' >> "${GITHUB_STEP_SUMMARY}"
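A possible follow-up step could turn the score into a gate rather than just a summary. A sketch, assuming sbomqs's --json output exposes an average score at .files[0].avg_score (field path unverified) and that jq is available on the runner:

      - name: "Fail if any SBOM scores below threshold"
        run: |
          THRESHOLD="7.0"
          for SBOM in $(find . -iname '*.json'); do
            # .files[0].avg_score is an assumption about sbomqs's JSON layout
            SCORE=$(/tmp/sbomqs score --json "$SBOM" | jq -r '.files[0].avg_score')
            echo "$SBOM scored $SCORE"
            # awk performs the float comparison that plain [ ] cannot
            if awk -v s="$SCORE" -v t="$THRESHOLD" 'BEGIN { exit !(s < t) }'; then
              echo "::error::$SBOM scored $SCORE (threshold $THRESHOLD)"
              exit 1
            fi
          done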

@tiegz
Collaborator

tiegz commented Sep 15, 2024

Do we care about SBOM spec validation at all? I just noticed that sbomqs will only check that the version of the declared SBOM spec is supported, but it doesn't actually do validation against that spec.

There's an issue to add support here: interlynk-io/sbomqs#248

I noticed this when sbom-utility raised some spec errors on an SBOM that were not reported by sbomqs:

[INFO] Too many errors. Showing (10/2099) errors.
1. {
        "type": "unique",
        "field": "dependencies.1.dependsOn",
        "context": "(root).dependencies.1.dependsOn",
        "description": "array items[0,1] must be unique",
        "value": {
            "type": "array",
            "index": 0,
            "item": "pkg:npm/[email protected]"
        }
    }
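For reference, a check like this can run as its own pipeline step; sbom-utility's validate command should exit non-zero on schema violations, so it can fail a CI job directly (sbom.json is a placeholder path):

    # validate the SBOM against the schema matching its declared spec and version
    sbom-utility validate --input-file sbom.json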

@riteshnoronha

@tiegz yes, this is accurate: sbomqs does not validate against the spec. This was intentional on our part. Our reasoning was as follows:

  • Validating against the schema checks every field of the SBOM. While this is great, the tool was written to check for compliance standards like NTIA and BSI, and these standards currently don't go beyond metadata/components/dependencies.
  • Schemas are only available for CycloneDX; for SPDX nothing official exists. This will change with SPDX 3.0.
  • The issue above is interesting; we should add that as a quality metric.

We think of validating an SBOM against a schema as a pre-step to scoring. This is how we plan to implement it; it will, however, be optional.
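For illustration, that optional pre-step might look something like this in a pipeline, pairing sbom-utility for schema validation with sbomqs for scoring (the VALIDATE toggle and paths are hypothetical, not current sbomqs behavior):

    # optional schema validation before scoring, as described above
    if [ "${VALIDATE:-true}" = "true" ]; then
      sbom-utility validate --input-file "$SBOM" || exit 1
    fi
    sbomqs score "$SBOM"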

-Ritesh
(author of sbomqs)
