System decomposition to OSCAL controls #1349
Replies: 11 comments 32 replies
-
I will circle back around to this eventually but there is a lot of interesting stuff to unpack here. I hope to see some good conversations.
I think a lot of your perspective aligns with an interest in (my desired and perceived use case for) applying the rules work in #1058 to this philosophy of "systems are systems of systems, not controls": once that work is done, how would a tool facilitate the "connect the dots" approach, connecting one or more sub-systems of a composed system to control information, with logical descriptions of the system and mechanisms to test control relevance?
More particularly: are you suggesting there are challenges or deficiencies in OSCAL that prevent this? Or, from your perspective, do the models already facilitate it, and the issue is just a lack of sample data and examples to prove this out for the community?
-
I would like to be able to pull an SSH component out of a compliance library that can be composed with a number of other components (AWS E/W, Django, Jira, Splunk, etc.) to help form the basis of an SSP. After such composition, a UI could direct the user's attention to their remaining responsibility (addressing hybrid and missing controls), during which they could add additional components and/or generate a "This System" (we call it "system-specific") component for narratives/configurations unique to the application. While generating an SSP for every separate component and pulling them in via leveraged authorizations fills the bill, it is more complex for the user. Working with technology/process/policy-based components supports a natural relationship with the technology inventory.
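The compose-then-find-gaps workflow described above can be sketched in a few lines. This is a rough illustration only: the component names, control IDs, and baseline are made up for the example, not drawn from a real compliance library.

```python
# Sketch of the composition + gap-analysis workflow described above.
# Component names, control IDs, and the baseline are illustrative.

BASELINE = {"AC-2", "AC-17", "AU-2", "SC-8", "SC-13"}  # controls the SSP must address

# Each library component declares the controls it contributes to.
COMPONENTS = {
    "OpenSSH": {"AC-17", "SC-8", "SC-13"},
    "Splunk":  {"AU-2"},
}

def remaining_responsibility(baseline, components):
    """Controls with no contributing component yet: candidates for more
    components or a system-specific ("This System") entry."""
    covered = set().union(*components.values()) if components else set()
    return sorted(baseline - covered)

print(remaining_responsibility(BASELINE, COMPONENTS))  # ['AC-2']
```

A real tool would also need to flag hybrid controls, where a component contributes only part of an implementation and the remainder falls to "This System"; this sketch only finds controls with no contribution at all.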
-
With that said, issue #1024 requests similar support in OSCAL (system composition and decomposition into (sub)systems): the ability to decompose a system into (sub)systems and compose it back. Two challenging aspects to consider are:
-
Re: uneven control tailoring: when reviewing an SSP from a control-level perspective, one should be able to see all of the components that have contributed implementation statements to that control. If a component tailored out that control, then it won't appear. I'm thinking (hopefully correctly) that a "This System" profile can indicate system-specific control tailoring for an SSP.
-
Sorry that I haven't been keeping up with this as well as I would have liked. @aj-stein-nist Going back to "why does OpenSSH require an SSP": it's because we live in a world of composed systems. OpenSSH, by itself, in a container, is a 100% valid system and is running as such in many places.
One of the big issues that I've seen across implementations of the NIST controls is where the system starts and ends. When we're developing new software, the system starts and ends at everything that has an interface, and the SSP may be extremely simple, but why wouldn't it exist? However, there seems to be this magic land of "vendor software" where having excessively complex systems presented as a big blob is OK. From a very practical point of view, this may be valid. But software is software, and containers aren't special.
Part of starting this thread was to help figure out how far down the stack we're supposed to go and what should actually be composable. From a purely theoretical standpoint, everything is easily composable; it just takes rigor. The RMF appears to back this level of rigor, but budgets do not. I'm hoping that we can present a working model of this level of rigor and then leave it up to the authorizing officials as to how much they're willing to pay for and/or how much vendors are required to implement.
-
I'm game - I've got to generate some CDefs for a FaaS SSP, so I'm happy to donate IRL examples and have them ripped apart!
> On Mon, Nov 7, 2022 at 7:39 AM Trevor Vaughan wrote:
>
> YAML please :-D.
>
> I don't mind XML but it's a pain. JSON doesn't have comments, so it isn't great for this exercise.
>
> I'm hit and miss on time but definitely interested in trying to get this going!
-
Extra points for inclusion of attestation of NIST CMVP validation (as a
transitive component inclusion artifact).
> On Mon, Nov 7, 2022 at 10:18 AM Alexander Stein wrote:
>
> OK, so maybe it is example time. I will make a place for us to work through this, and we can all spitball (since I can tell you two are passionate about the subject)!
-- Gary Gapinski, Security/XML Engineer, Flexion
-
This is exactly how I do it (Neo4j is the open-source graph database I use).
One tweak I would make: the component definition should be *loosely* coupled to the thing (asset, service, interface) it describes so that it remains composable as things change.
Example:
- A Kubernetes cluster, generically, should have a CDef.
- The actual instances (inventory) of your system's clusters should be in the graph with all the relevant data, e.g. an app cluster, an ML cluster, and a management cluster.
To generate the SSP, "instantiate" (assign values to variables) a component from each specific category of cluster and map it to the required control implementations. They will be mostly the same, since a cluster is a cluster, but there are some differences: for example, the audit controls for a management cluster vs. the ML cluster with only de-identified data vs. the app cluster with PII, etc.
The magic is in 1) the classification of your inventory into accurate, but not too granular, categories that are functionally and structurally distinct, so that the SSP describes a system rather than just a copy of the inventory, and 2) the mapping of components to the control implementations required for each type of component for a particular function. I have done 1 and 2 many ways and there is no EZ button; there are manual and automated ways, and hybrid is the best currently (until GPT-10).
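A minimal sketch of steps 1 and 2 for the cluster example above. All labels, category names, and control IDs here are illustrative assumptions; in practice classification would come from real inventory data, not hand-set labels.

```python
# Sketch of step 1 (classify inventory into functional categories) and
# step 2 (map each category to required control implementations).
# Labels, categories, and control IDs are illustrative.

INVENTORY = [
    {"name": "cluster-app-01",  "labels": {"workload": "app",  "data": "pii"}},
    {"name": "cluster-ml-01",   "labels": {"workload": "ml",   "data": "deidentified"}},
    {"name": "cluster-mgmt-01", "labels": {"workload": "mgmt"}},
]

def classify(item):
    # Step 1: coarse, functionally distinct categories, not one per asset.
    return item["labels"]["workload"]

# Step 2: per-category control mapping; mostly shared, with deltas.
BASE_CONTROLS = {"AC-3", "SC-8"}          # a cluster is a cluster
CATEGORY_DELTAS = {
    "app":  {"AU-2", "SI-12"},            # PII: extra audit + retention
    "ml":   {"AU-2"},                     # de-identified data
    "mgmt": {"AU-2", "AU-6"},             # management plane: audit review
}

def controls_for(item):
    return sorted(BASE_CONTROLS | CATEGORY_DELTAS[classify(item)])

for item in INVENTORY:
    print(item["name"], controls_for(item))
```

The point of the category layer is exactly the "accurate but not too granular" balance: three cluster categories instead of N inventory rows, so the SSP describes the system rather than duplicating the inventory.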
As a benchmark, we (a volunteer CSA group) are doing step 2 manually and at a high (conceptual, not baseline) level for Function-as-a-Service control mappings. About 6 people have been working on it a few hours a month for about 6 months. I will be presenting a Kubernetes example system next week; we spent roughly 4 person-months engineering scripts with Python and ML to do both 1 and 2. YMMV.
NISTIR 8011 and the FedRAMP Risk-Based Assessment guidance are a good start.
I think it would be good to do a DEFINE or other project to run through several concrete examples together and work it out on a virtual whiteboard (de-identified of any sensitive real data). I have analyzed about 3 dozen systems now and I would guesstimate I have seen ~5% of the real-world corner cases. Corner cases are everywhere. This domain is nothing but corner cases, like some M.C. Escher cubic nightmare scene :)
> On Thu, May 18, 2023 at 5:31 AM Trevor Vaughan wrote:
>
> Well, after getting far too busy, I'm still very much interested in getting to the bottom of this concept.
>
> While working on some recent projects, I realized that what I *want* is a distributed Requirements Traceability Matrix that essentially acts as a composable graph database where the data travels with the artifact itself.
>
> Traditional methods have focused on centralizing the data, but that's not how any of our systems work. I propose that, in a similar manner to the SBOM, the Component Definition should be tightly coupled to the thing which it describes. This leads to a system based on large-scale discoverability of reality instead of trusting that an "oracle" will have the correct answer (because it won't).
>
> I believe that the OSCAL component definition has all of the correct parts but that it also needs to be more finely composable (I may have missed include statements along the way), since some of the information is either more fluid or more sensitive than should be in the "generic" portion of the data.
-
Yes, the granularity needs to go down to the config level, for sure. Just as an example: a relatively simple system has ~25K entities in the graph at all levels of granularity, and each of those has a dozen or more properties, give or take. Query performance starts to lag pretty quickly. Then there is our 2.5M-entity system….
> On Thu, May 18, 2023 at 6:23 AM Trevor Vaughan wrote:
>
> @rficcaglia <https://github.com/rficcaglia> What you've said makes sense, but I think that fine granularity is required to actually know whether you meet any given set of controls in reality. Given the scenarios further down, the actual way in which everything has been specifically configured determines whether or not you have met a control, regardless of the ability to meet the control.
>
> I agree on the Escheresque nature of the problem, but I think that it can be addressed in small bites by taking into account the reality of how systems are built. For instance, one of my projects <https://github.com/simp/pupmod-simp-compliance_markup/blob/master/SIMP/compliance_profiles/checks.yaml> uses the Puppet configuration management language and adds explicit detection of controls, allowing users to determine, at runtime, what controls have been violated by changing different settings. To me, it is a small leap from that to being able to actually provide OSCAL/XCCDF mappings that reflect both the desired and the actual state of the system based on mapping points...somewhere(?).
>
> In relation to your statement:
>
> > The magic is in the 1) classification of your inventory into accurate, but not too granular, categories that are functionally and structurally distinct so that the SSP describes a system, not just a copy of the inventory so you can 2) map the components to the control implementations required for each type of component for a particular function.
>
> To support the mapping of a system at scale, the system must be discoverable. Items that fall outside of discoverability have to be considered suspect, since they are effectively untraceable. Fundamentally, I think that, as an industry, we have forgotten that systems are just things that talk, and they are reality. Controls are requirements, and we're trying to answer the question "does the system meet the requirements, and how?".
>
> Scenarios
>
> Scenario - Developed Software
>
> - Developer must demonstrate adherence to security (or other) requirements
> - Developer must provide proof of validation/inspection along with the artifacts for bi-directional traceability
> - Component Definition 💡
> - This is the end of the Developed Software area of responsibility; gaps should be able to be detected
>
> Scenario - Container Image
>
> - Developed Software is inserted into a Container (or OS, or whatever)
> - Container Image provides whatever it provides in addition to Developed Software, along with the artifacts for bi-directional traceability
> - Component Definition with composition of Developed Software
> - This is the end of the Container Image area of responsibility; gaps should be able to be detected
>
> Rinse and Repeat
>
> - Operating System running the Container
> - Kubernetes cluster with sidecars
> - Container in container (because sure)
>
> Keep Going
>
> - Hosting environment
> - Physical measures
> - Etc...
>
> Where I'm Struggling
>
> I'm trying to start at the 'micro' level and not get caught up in 'the whole system'. Ideally, you can move the slider up or down to suit your needs, but the data will be present at all levels.
>
> Hopefully, this demonstrates what I mean by "moves with the component". Like, if I hand you signed Developed Software v1.2.3, then I also hand you signed Component Definition v1.2.3 that maps to the cryptographic hash of Developed Software v1.2.3.
>
> Then, of course, you can just query the system (or whatever holds the reference artifacts for the system) and figure out what requirements/controls are met and where you have gaps.
>
> This may seem too myopic, but I think that this is essentially a fractal problem and, if the micro-problem can't be addressed, then the larger system will not be fully applicable and we'll be waiting for the next standard to come along.
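The "signed Component Definition that maps to the cryptographic hash of the artifact" idea quoted above could be sketched like this. The field names are made up for illustration, and a real scheme would also sign both artifacts, which is omitted here; only the hash binding is shown.

```python
# Sketch: a component definition that "travels with" its artifact by
# binding to the artifact's cryptographic hash. Field names are
# illustrative, not an official OSCAL layout. Signing is omitted.
import hashlib

def bind_component_definition(artifact_bytes, version, satisfied_controls):
    digest = hashlib.sha256(artifact_bytes).hexdigest()
    return {
        "component": "developed-software",
        "version": version,
        "artifact-sha256": digest,          # ties the claims to one exact build
        "satisfied-controls": satisfied_controls,
    }

def verify_binding(cdef, artifact_bytes):
    # A consumer checks the pairing before trusting the claims.
    return cdef["artifact-sha256"] == hashlib.sha256(artifact_bytes).hexdigest()

artifact = b"pretend this is Developed Software v1.2.3"
cdef = bind_component_definition(artifact, "1.2.3", ["SC-8", "SC-13"])
print(verify_binding(cdef, artifact))           # True
print(verify_binding(cdef, b"tampered build"))  # False
```

This is what makes the compliance data "discoverable" rather than oracle-dependent: anyone holding the artifact can check that the claims actually belong to it.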
-
🤔 I was re-reading the Creating a Component Definition page and I think I've hit on one of the things that is causing me trouble. It's actually clearly expressed in The Final Component Definition.
I understand that the description text for implemented-requirements indicates that they're more suggestions than potential reality, but I think that this causes issues with system composition from component parts.
For instance, the "generic" MongoDB Component Definition could claim support for TLS. Likewise, for the specific implementation, validation could come in the form of a host of different methods, some of which may be more, or less, acceptable to a given approving authority. So, given the current model, how do I determine whether the implementation of TLS has actually been met in my implementation of the Component Definition as written?
Personally, I think that the current approach to validation is too decoupled from the description of the implementation itself; validation should be a first-class citizen in the OSCAL framework, considering that it's literally the thing that we are always asked to provide. There are countless hours of C&A activities that could be eliminated by simply having the ability to concretely map implementation to requirement in a scalable and portable manner. Taking a leaf out of the upcoming SysML 2.0 specification, I think that validation could be attached directly to the implementation claims themselves.
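For illustration, here is a minimal component-definition skeleton following the general OSCAL JSON shape, with a hypothetical rel="validation" link attached to an implemented requirement. The link relation and URN are assumptions sketching what "validation as a first-class citizen" might look like; they are not current OSCAL semantics.

```python
import json
import uuid

# Minimal component-definition skeleton in the general OSCAL JSON shape.
# The rel="validation" link is a HYPOTHETICAL extension point, not
# something the current OSCAL models define.
cdef = {
    "component-definition": {
        "uuid": str(uuid.uuid4()),
        "metadata": {
            "title": "MongoDB (generic)",
            "last-modified": "2023-05-26T00:00:00Z",
            "version": "0.1.0",
            "oscal-version": "1.0.4",
        },
        "components": [{
            "uuid": str(uuid.uuid4()),
            "type": "software",
            "title": "MongoDB",
            "description": "Generic MongoDB component.",
            "control-implementations": [{
                "uuid": str(uuid.uuid4()),
                "source": "https://example.org/profiles/moderate.json",
                "description": "Claims for a generic MongoDB deployment.",
                "implemented-requirements": [{
                    "uuid": str(uuid.uuid4()),
                    "control-id": "sc-8",
                    "description": "TLS can be enabled for all client traffic.",
                    # Hypothetical: bind the claim to a concrete check.
                    "links": [{"rel": "validation",
                               "href": "urn:example:check:mongodb-tls-enabled"}],
                }],
            }],
        }],
    }
}

doc = json.loads(json.dumps(cdef))  # round-trips as plain JSON
req = (doc["component-definition"]["components"][0]
          ["control-implementations"][0]["implemented-requirements"][0])
print(req["control-id"], req["links"][0]["rel"])  # sc-8 validation
```

The idea is that a tool could resolve the validation link to an executable check and answer "has SC-8 actually been met in this deployment?" rather than "could it be met?".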
-
> more formal agreement on what constitutes an "acceptable" system
Something akin to in-toto (https://github.com/in-toto/docs/blob/v0.9/in-toto-spec.md), but for deployment control implementation attestation rather than just coding and packaging, might be useful here.
E.g., controls can be encoded in the "metadata" and then verified at deployment time: running a CIS benchmark scan according to the metadata, running an OPA policy as specified in the metadata against the expected result, Ansible-based checks, etc.
That in a sense "solves" this decomposition problem, at least for cloud-native systems (not hardware, etc.): anything that can be described in a formal, attestation-based agreement and deployed with the necessary policy-as-code and other verification metadata *should* be a component. If it cannot be deployed in such a way, ipso facto it cannot be a component.
E.g., SSH as a binary can't be "deployed" in a cloud-native system; it has to be in an image and deployed as a container to do something. That "SSH host" container *is* a component. It needs "in-deployo" :) attestation metadata about its SBOM, image malware scans, open ports, allowed clients, FIPS 140 crypto support, audit logging, access control, etc., before it is allowed to be deployed.
Arguably, the control implementation links in OSCAL for the SSP could simply refer to UUIDs for the "in-deployo" steps. The SAP could link to the corresponding policy rules/checks. The SAR could be the resultant verification outputs.
Admittedly, this scheme works best (or at all) only for Infrastructure-as-Code and container-based systems.
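The deploy-time gating idea above can be sketched as a simple expected-vs-observed comparison. The check names and results are illustrative stand-ins for real scanner outputs (CIS benchmark runs, OPA policy decisions, and the like).

```python
# Sketch of the "in-deployo" idea: control checks encoded as deployment
# metadata, verified before a component is allowed to deploy.
# Check names and results are illustrative stand-ins for real tooling.

EXPECTED = {  # travels with the component as attestation metadata
    "cis-benchmark:5.2.5": "pass",
    "opa:no-privileged-containers": "allow",
    "ports:open": "22",
}

def verify_deployment(expected, observed):
    """Gate deployment on every declared check matching its expected result."""
    failures = {k: (v, observed.get(k)) for k, v in expected.items()
                if observed.get(k) != v}
    return (len(failures) == 0, failures)

observed = {"cis-benchmark:5.2.5": "pass",
            "opa:no-privileged-containers": "allow",
            "ports:open": "22"}
ok, failures = verify_deployment(EXPECTED, observed)
print(ok)  # True

observed["ports:open"] = "22,8080"        # drift: unexpected open port
ok, failures = verify_deployment(EXPECTED, observed)
print(ok, failures)  # False {'ports:open': ('22', '22,8080')}
```

In the scheme described above, each entry in the metadata would carry a UUID that an SSP's control implementation links could reference, with the SAP pointing at the checks and the SAR holding the verification outputs.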
> On Fri, May 26, 2023 at 9:50 AM Trevor Vaughan wrote:
>
> @jon-burns <https://github.com/jon-burns> We actually have precedent for this between services, using PKI CP/CPS mappings across organizations and SAML.
>
> It *is* horribly tedious, but having systems be able to validate the compliance state of other systems through some trusted mechanism is a huge step towards practical enforcement.
>
> Kubernetes also has policy adjudicators that could use a more formal agreement on what constitutes an "acceptable" system to which to connect. At the lowest level, using TPM attestation services could also help, for instance prior to connecting to the network at all.
>
> All of the parts are there and generally public; we just have to be careful to keep it a decoupled collaboration instead of a cash grab that ruins the party.
-
This is a follow-on from the OSCAL mini workshop on 2022-07-13.
I've been struggling with how to provide composable SSPs with OSCAL, and I wanted to open up a discussion to see if the community can come to some level of usability model for SSP composition.
In many cases, it seems that the question being answered is "what controls does system X meet?". However, systems are made of systems; systems are not made of controls.
In order to build completely portable SSP stacks, it seems like we need to work down from systems modelling (maybe something like the C4 Model) into individual SSP components. If the system is modeled in a composable manner, then the individual OSCAL SSPs for each GSS, major, and minor support system can be freely shared across environments.
In theory, this would go to the level of doing things like creating an SSP for openssh that is as narrowly focused as possible. This would then be inherited by whatever container/operating system/whatever is holding it.
Definitely looking forward to the discussion on figuring out how to scale OSCAL out to fully composable systems.
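The openssh-inherited-by-container idea above can be sketched as a simple control roll-up, where each layer's SSP aggregates the coverage of the layers it contains. System names and control IDs are illustrative.

```python
# Sketch of composable SSP "stacking": each layer's SSP inherits the
# control coverage of the subsystems it contains. Names and control
# IDs are illustrative.

def compose(name, own_controls, subsystems=()):
    layer = {"system": name,
             "controls": set(own_controls),
             "subsystems": list(subsystems)}
    for sub in subsystems:
        layer["controls"] |= sub["controls"]   # inherit from components
    return layer

openssh   = compose("openssh", {"AC-17", "SC-8"})
container = compose("container-image", {"CM-2"}, [openssh])
host_os   = compose("host-os", {"AU-2", "CM-6"}, [container])

print(sorted(host_os["controls"]))
# ['AC-17', 'AU-2', 'CM-2', 'CM-6', 'SC-8']
```

Because each layer is defined only in terms of its own controls plus its subsystems, the narrowly focused openssh definition can be reused unchanged under any container, OS, or hosting environment.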